Commit Graph

27402 Commits

Author SHA1 Message Date
Amara Emerson 453ef4e376 [AArch64][GlobalISel] Fix zext narrowScalar to use the right type when creating
the merges.

Fixes PR43171.

llvm-svn: 370627
2019-09-02 08:18:55 +00:00
Sanjay Patel c882208367 [DAGCombiner] improve throughput of shift+logic+shift
The motivating case for this is a long way from here:
https://bugs.llvm.org/show_bug.cgi?id=43146
...but I think this is where we have to start.

We need to canonicalize/optimize sequences of shift and logic to ease
pattern matching for things like bswap and improve perf in general.
But without the artificial limit of '!LegalTypes' (early combining),
there are a lot of test diffs, and not all are good.

In the minimal tests added for this proposal, x86 should have better
throughput in all cases. AArch64 is neutral for scalar tests because
it can fold shifts into bitwise logic ops.
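
As a concrete (illustrative) C++ example of the kind of sequence involved, the
second shift can be pushed through the logic op so the two shifts become
independent:

```
// Illustrative only: ((x << 8) | y) << 16 can be rewritten as
// (x << 24) | (y << 16), turning a serial shift->or->shift chain into
// two independent shifts feeding a single or.
unsigned shiftLogicShift(unsigned x, unsigned y) {
  return ((x << 8) | y) << 16;
}
```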

There are 3 shift opcodes and 3 logic opcodes for a total of 9 possible patterns:
https://rise4fun.com/Alive/VlI
https://rise4fun.com/Alive/n1m
https://rise4fun.com/Alive/1Vn

Differential Revision: https://reviews.llvm.org/D67021

llvm-svn: 370617
2019-09-01 18:38:15 +00:00
Shiva Chen adfdcb9c26 [TargetLowering] Fix Bugzilla ID 43183 to avoid softened comparisons being broken with constant inputs
Summary:
  This fixes Bugzilla ID 43183, which was triggered by the following commit:
  [RISCV] Avoid generating AssertZext for LP64 ABI when lowering floating LibCall

llvm-svn: 370604
2019-09-01 04:52:54 +00:00
Sanjay Patel 9e57b49392 [DAGCombiner] clean up code in visitShiftByConstant()
This is not quite NFC because the SDLoc propagation is changed,
but there are no regression test diffs from that.

llvm-svn: 370587
2019-08-31 15:08:58 +00:00
Amaury Sechet 82825ab882 [DAGCombiner] Match (add X, X) as (shl X, 1) when detecting rotate.
Summary: The combiner transforms (shl X, 1) into (add X, X).
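
An illustrative C++ shape this affects: once the combiner rewrites the
shift-by-one as an add, the rotate matcher has to recognize the add form too.

```
// Illustrative rotate-left-by-1 of a 32-bit value: (x << 1) | (x >> 31).
// After the combiner turns (x << 1) into (x + x), rotate detection must
// also accept the (add X, X) form to still emit a rotate.
unsigned rotl1(unsigned x) {
  return (x << 1) | (x >> 31);
}
```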

Reviewers: craig.topper, efriedma, RKSimon, lebedev.ri

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66882

llvm-svn: 370578
2019-08-31 11:40:02 +00:00
James Molloy e62c509cd4 [DAGCombiner] Don't create illegal narrow stores
Narrowing stores when the target doesn't support the narrow version
forces the target to expand into a load-modify-store sequence, which
is highly suboptimal. The information narrowing throws away (legality
of the inverse transform) is hard to re-analyze. If the target doesn't
support a store of the narrow type, don't narrow even in pre-legalize
mode.
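
A C++ sketch of the kind of store this affects (illustrative; the function name
is made up):

```
// Illustrative: a read-modify-write of only the low byte of *p. The combiner
// may narrow the 32-bit store to an 8-bit store; if the target has no i8
// store, that narrowing forces a load-modify-store expansion later.
void setLowByte(unsigned *p, unsigned v) {
  *p = (*p & ~0xFFu) | (v & 0xFFu);
}
```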

No test as this is DAGCombiner and depends on target bits.

llvm-svn: 370576
2019-08-31 10:46:16 +00:00
Bjorn Pettersson e27c74abb6 [CodeGen] Refactor DAGTypeLegalizer::ExpandIntRes_MULFIX. NFC
Restructured the code a little bit in preparation for adding
UMULFIXSAT. I think it will be easier to understand the code
if not interleaving the codegen for signed/unsigned/saturated
cases that much.

llvm-svn: 370569
2019-08-31 09:28:50 +00:00
James Molloy 790a779f06 [MachinePipeliner] Separate schedule emission, NFC
This is the first stage in refactoring the pipeliner and making it more
accessible for backends to override and control. This separates the logic and
state required to *emit* a schedule from the logic that *computes* and
validates a schedule.

This will enable (a) new schedule emitters and (b) new modulo scheduling
implementations to coexist.

NFC.

Differential Revision: https://reviews.llvm.org/D67006

llvm-svn: 370500
2019-08-30 18:49:50 +00:00
Simon Pilgrim 3be7081aa1 [DAGCombine] ReduceLoadWidth - remove duplicate SDLoc. NFCI.
SDLoc(N0) and SDLoc(cast<LoadSDNode>(N0)) should be equivalent.

llvm-svn: 370498
2019-08-30 18:19:02 +00:00
Simon Pilgrim 2d1e0899e9 [TargetLowering] SimplifyDemandedBits ADD/SUB/MUL - correctly inherit SDNodeFlags from the original node.
Just disable the NSW/NUW flags. This matches what we're already doing for the other situations for these nodes; it was just missed for the demanded constant case.

Noticed by inspection - confirmed in offline discussion with @spatel. I've checked we have test coverage in the x86 extract-bits.ll and extract-lowbits.ll tests.

llvm-svn: 370497
2019-08-30 17:58:55 +00:00
Matt Arsenault 466ec2d552 GlobalISel: Fix missing pass dependency
llvm-svn: 370496
2019-08-30 17:41:58 +00:00
Craig Topper 30ddd2ab6c [ValueTypes] Add v16f16 and v32f16 to EVT::getEVTString and Tablegen's getEnumName
Missed these when I added the enum entries.

llvm-svn: 370494
2019-08-30 17:34:29 +00:00
Simon Pilgrim ab8cb1a3c5 [DAGCombine] visitVSELECT - remove equivalent getValueType() call. NFCI.
llvm-svn: 370489
2019-08-30 17:21:20 +00:00
Simon Pilgrim c2fed1dc8a [DAGCombine] visitVSELECT - remove duplicate getOperand calls. NFCI.
llvm-svn: 370478
2019-08-30 15:17:37 +00:00
Simon Pilgrim 3367669668 [DAGCombine] visitVSELECT - use getShiftAmountTy for shift amounts.
llvm-svn: 370471
2019-08-30 13:30:37 +00:00
Simon Pilgrim 8e1989e79a [DAGCombine] visitMULHS - use getScalarValueSizeInBits() to make safe for vector types.
This is hidden behind a (scalar-only) isOneConstant(N1) check at the moment, but once we get around to adding vector support we need to ensure we're dealing with the scalar bitwidth, not the total.

llvm-svn: 370468
2019-08-30 12:22:06 +00:00
Bjorn Pettersson 227145924a [CodeGen] Introduce MachineBasicBlock::replacePhiUsesWith helper and use it. NFC
Summary:
Found a couple of places in the code where all the PHI nodes
of an MBB are updated, replacing references to one MBB with
references to another MBB.

This patch simply refactors the code to use a common helper
(MachineBasicBlock::replacePhiUsesWith) for such PHI node
updates.

Reviewers: t.p.northover, arsenm, uabelho

Subscribers: wdng, hiraditya, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66750

llvm-svn: 370463
2019-08-30 11:23:10 +00:00
Simon Pilgrim 7cbf823f93 [DAGCombine] visitMULHS/visitMULHU - isBuildVectorAllZeros doesn't mean node is all zeros
Return a proper zero vector, just in case some elements are undef.

Noticed by inspection after dealing with a similar issue in PR43159.

llvm-svn: 370460
2019-08-30 10:42:14 +00:00
David Stenberg b35d4699d0 [LiveDebugValues] Insert entry values after bundles
Summary:
Change LiveDebugValues so that it inserts entry values after the bundle
which contains the clobbering instruction. Previously it would insert
the debug value after the bundle head using insertAfter(), breaking the
bundle.

Reviewers: djtodoro, NikolaPrica, aprantl, vsk

Reviewed By: vsk

Subscribers: hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D66888

llvm-svn: 370448
2019-08-30 09:06:50 +00:00
Petar Avramovic 6412b56513 [MIPS GlobalISel] Lower fptoui
Add lower for G_FPTOUI. Algorithm is similar to the SDAG version
in TargetLowering::expandFP_TO_UINT.
Lower G_FPTOUI for MIPS32.
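
A rough C++ sketch of that expansion (the same select-and-adjust idea used by
TargetLowering::expandFP_TO_UINT, not the actual implementation):

```
// Convert float to unsigned using only float->signed conversions.
// Values below 2^31 convert directly; larger (in-range) values are biased
// down by 2^31 first and the high bit is folded back in afterwards.
unsigned fptoui32(float f) {
  const float Cut = 2147483648.0f; // 2^31
  if (f < Cut)
    return (unsigned)(int)f;
  return (unsigned)(int)(f - Cut) ^ 0x80000000u;
}
```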

Differential Revision: https://reviews.llvm.org/D66929

llvm-svn: 370431
2019-08-30 05:44:02 +00:00
Dan Gohman 8cfeeaf9de [CodeGen] Fix lowering for returning the result of an extractvalue
When the number of return values exceeds the number of registers available,
SelectionDAGBuilder::visitRet transforms a function's return to use a
pointer to a buffer to hold return values. When the returned value is an
operator such as extractvalue, the value may have a non-zero result number.
Add that number to the indexing when obtaining the values to store.

This fixes https://bugs.llvm.org/show_bug.cgi?id=43132.

Differential Revision: https://reviews.llvm.org/D66978

llvm-svn: 370430
2019-08-30 04:33:22 +00:00
Jordan Rupprecht f9f81289e6 Revert [MBP] Disable aggressive loop rotate in plain mode
This reverts r369664 (git commit 51f48295cb)

It causes many benchmark regressions, internally and in llvm's benchmark suite.

llvm-svn: 370398
2019-08-29 19:03:58 +00:00
Matt Arsenault 093ebf9275 GlobalISel: Don't compute known bits for non-integral GEP
llvm-svn: 370392
2019-08-29 17:55:05 +00:00
Matt Arsenault b2b9a23758 GlobalISel: Add maskedValueIsZero and signBitIsZero to known bits
I dropped the DemandedElts since it seems to be missing from some of
the new interfaces, but not others.

llvm-svn: 370389
2019-08-29 17:24:36 +00:00
Matt Arsenault caff0a88dd GlobalISel: Add known bits to InstructionSelector
AMDGPU uses this for some addressing mode selection patterns. The
analysis run itself doesn't do anything so it seems easier to just
always require this than adding a way to opt in.

llvm-svn: 370388
2019-08-29 17:24:32 +00:00
Simon Pilgrim ea67741899 [DAGCombine] Fix shadow variable warnings. NFCI.
llvm-svn: 370365
2019-08-29 14:34:07 +00:00
Jeremy Morse ca0e4b3689 [DebugInfo] LiveDebugValues: correctly discriminate kinds of variable locations
The missing line added by this patch ensures that only spilt variable
locations are candidates for being restored from the stack. Otherwise,
register or constant-value information can be interpreted as a spill
location, through a union.

The added regression test replicates a scenario where this occurs: the
stack load from [rsp] causes the register-location DBG_VALUE to be
"restored" to rsi, when it should be left alone. See PR43058 for details.

Un-XFAIL a test from a previous patch that was suffering from this issue.

Differential Revision: https://reviews.llvm.org/D66895

llvm-svn: 370334
2019-08-29 11:20:54 +00:00
Simon Pilgrim 6c2fc64edc Fix signed/unsigned comparison warning. NFCI.
llvm-svn: 370333
2019-08-29 11:18:53 +00:00
Simon Pilgrim 27f43e6b1a Fix shadow variable warning. NFCI.
llvm-svn: 370332
2019-08-29 11:16:32 +00:00
Jeremy Morse 313d2ce999 [DebugInfo] LiveDebugValues should always revisit backedges if it skips them
The "join" method in LiveDebugValues does not attempt to join unseen
predecessor blocks if their out-locations aren't yet initialized, instead
the block should be re-visited later to see if any locations have changed
validity. However, because the set of blocks were all being "process"'d
once before "join" saw them, that logic in "join" was actually ignoring
legitimate out-locations on the first pass through. This meant that some
invalidated locations were not removed from the head of loops, allowing
illegal locations to persist.

Fix this by removing the run of "process" before the main join/process loop
in ExtendRanges. Now the unseen predecessors that "join" skips truly are
uninitialized, and we come back to the block at a later time to re-run
"join", see the @baz function added.

This also fixes another fault where stack/register transfers in the entry
block (or any other before-any-loop-block) had their transfers initially
ignored, and were then never revisited. The MIR test added tests for this
behaviour.

XFail a test that exposes another bug; a fix for this is coming in D66895.

Differential Revision: https://reviews.llvm.org/D66663

llvm-svn: 370328
2019-08-29 10:53:29 +00:00
Amaury Sechet 8365e42010 [DAGCombiner] (insert_vector_elt (vector_shuffle X, Y), (extract_vector_elt X, N), IdxC) -> (vector_shuffle X, Y)
Summary: This is beneficial when the shuffle is only used once and ends up being generated in a few places when some node is combined into a shuffle.

Reviewers: craig.topper, efriedma, RKSimon, lebedev.ri

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66718

llvm-svn: 370326
2019-08-29 10:35:51 +00:00
Simon Pilgrim dfb2a19ac2 LegalizeSetCCCondCode - Reduce scope of NeedSwap to fix cppcheck warning. NFCI.
No need for this to be defined outside the only switch case it's used in.

llvm-svn: 370320
2019-08-29 10:11:34 +00:00
Craig Topper 1aadf6f39f [X86] Make inline assembly 'x' and 'v' constraints work for f128.
Including a type legalizer fix to make bitcast operand promotion
work correctly when getSoftenedFloat returns f128 instead of i128.

Fixes PR43157
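
An illustrative use that this enables (GCC/Clang extended asm with an f128
operand; the function is hypothetical):

```
// Illustrative: with this change the 'x' constraint accepts a __float128
// operand, keeping the value in an XMM register across the asm statement.
__float128 passthrough(__float128 v) {
  asm("" : "+x"(v));
  return v;
}
```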

llvm-svn: 370293
2019-08-29 05:13:56 +00:00
Shiva Chen b39876d8cd [RISCV] Avoid generating AssertZext for LP64 ABI when lowering floating LibCall
The patch fixes the issue that RV64 didn't clear the upper bits
when returning a complex floating-point value with the LP64 ABI.

float _Complex
complex_add(float _Complex a, float _Complex b)
{
   return a + b;
}

RealResult = zero_extend(RealA + RealB)
ImageResult = ImageA + ImageB
Return (RealResult | (ImageResult << 32))

The patch introduces a shouldExtendTypeInLibCall target hook to suppress
the AssertZext generation when lowering a floating-point LibCall.

Thanks to Eli's comments from the Bugzilla
https://bugs.llvm.org/show_bug.cgi?id=42820

Differential Revision: https://reviews.llvm.org/D65497

llvm-svn: 370275
2019-08-28 23:40:37 +00:00
Kevin P. Neal ddf13c00ed [FPEnv] Add fptosi and fptoui constrained intrinsics.
This implements constrained floating point intrinsics for FP to signed and
unsigned integers.

Quoting from D32319:
The purpose of the constrained intrinsics is to force the optimizer to
respect the restrictions that will be necessary to support things like the
STDC FENV_ACCESS ON pragma without interfering with optimizations when
these restrictions are not needed.

Reviewed by:	Andrew Kaylor, Craig Topper, Hal Finkel, Cameron McInally, Roman Lebedev, Kit Barton
Approved by:	Craig Topper
Differential Revision:	http://reviews.llvm.org/D63782

llvm-svn: 370228
2019-08-28 16:33:36 +00:00
Jessica Paquette af0bd41e06 [AArch64][GlobalISel] Fall back when translating musttail calls
These are currently translated as normal functions calls in AArch64.

Until we have proper tail call lowering, we shouldn't translate these.

Differential Revision: https://reviews.llvm.org/D66842

llvm-svn: 370225
2019-08-28 16:19:01 +00:00
Ryan Taylor 3b1459ed7c [AMDGPU] Adjust number of SGPRs available in Calling Convention
This reduces the number of SGPRs due to some concerns about running
out of SGPRs if you make all the SGPRs that aren't reserved available
for the calling convention.

Change-Id: Idb4ca4dc72f5b6808cb524ff7270915a8de5b4c1
llvm-svn: 370215
2019-08-28 15:00:45 +00:00
Simon Pilgrim 14e07d7f4b [DAGCombine] Fix cppcheck shadow variable warning. NFCI.
We already have an outer Ops variable.

llvm-svn: 370197
2019-08-28 12:48:41 +00:00
Amaury Sechet 4f4387dd12 [TargetLowering] Add buildLegalVectorShuffle facility to help build legal shuffles
Summary: There are at least 2 ways to express the same shuffle. Various pieces of code explicitly check for both options, but other places do not when they would benefit from doing so. This patch refactors the codebase to use buildLegalVectorShuffle in order to make that behavior more consistent.

Reviewers: craig.topper, efriedma, RKSimon, lebedev.ri

Subscribers: javed.absar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66804

llvm-svn: 370190
2019-08-28 12:00:06 +00:00
Simon Pilgrim c5b38e2869 [DAGCombine] Remove LoadedSlice::Cost default 'ForCodeSize' constructor arguments. NFCI.
These were always being passed in, which allowed me to add the explicit tag to stop a cppcheck warning about single-argument constructors.

llvm-svn: 370189
2019-08-28 11:50:36 +00:00
Amara Emerson e20b91c265 [GlobalISel] Replace hard coded dynamic alloca handling with G_DYN_STACKALLOC.
This change moves the actual stack pointer manipulation into the legalizer,
available to targets via lower(). The codegen is slightly different because
we're using explicit masks instead of G_PTRMASK, and using G_SUB rather than
adding a negative amount via G_GEP.

Differential Revision: https://reviews.llvm.org/D66678

llvm-svn: 370104
2019-08-27 19:54:27 +00:00
Matt Arsenault 2910184936 DAG: computeNumSignBits for MUL
Copied directly from the IR version.

Most of the testcases I've added for this are somewhat problematic
because they really end up testing the yet to be implemented version
for MUL_I24/MUL_U24.

llvm-svn: 370099
2019-08-27 19:05:33 +00:00
Sanjay Patel b516f1afdd [DAGCombiner] cancel fnegs from multiplied operands of FMA
(-X) * (-Y) + Z --> X * Y + Z

This is a missing optimization that shows up as a potential regression in D66050,
so we should solve it first. We appear to be partly missing this fold in IR as well.

We do handle the simpler case already:
(-X) * (-Y) --> X * Y

And it might be beneficial to make the constraint less conservative (eg, if both
operands are cheap, but not necessarily cheaper), but that causes infinite looping
for the existing fmul transform.
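
An illustrative C++ source shape that reaches this fold via the llvm.fma
intrinsic:

```
// Illustrative: fma(-x, -y, z) has both multiplied operands negated, so the
// negations cancel and this can be emitted as a plain fma(x, y, z).
double fmaCancelNeg(double x, double y, double z) {
  return __builtin_fma(-x, -y, z);
}
```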

Differential Revision: https://reviews.llvm.org/D66755

llvm-svn: 370071
2019-08-27 15:17:46 +00:00
Jinsong Ji 7f536bcf22 Revert "[CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks"
This reverts commit b3d258fc44.

@skatkov is reporting crash in D63972#1646303
Contacted @ZhangKang, and revert the commit on behalf of him.

llvm-svn: 370069
2019-08-27 14:59:08 +00:00
Petar Avramovic a393238422 [GlobalISel] Factor narrowScalar for G_ASHR and G_LSHR. NFC
The main difference is in the way Hi for the long shift (HiL) is made.
G_LSHR fills HiL with zeros, while G_ASHR fills HiL with sign bit value.

Differential Revision: https://reviews.llvm.org/D66589

llvm-svn: 370064
2019-08-27 14:33:05 +00:00
Petar Avramovic d568ed40e0 [GlobalISel] Fix narrowScalar for shifts to match algorithm from SDAG
Fix typos. Use Hi and Lo prefixes for Or instead of LHS and RHS
to match names of surrounding variables.

Differential Revision: https://reviews.llvm.org/D66587

llvm-svn: 370062
2019-08-27 14:22:32 +00:00
Amaury Sechet f28dee2cff [DAGCombiner] Add node to the worklist in topological order in parallelizeChainedStores
Summary: As per title.

Reviewers: craig.topper, efriedma, RKSimon, lebedev.ri

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66659

llvm-svn: 370056
2019-08-27 13:27:57 +00:00
Amaury Sechet a1e5ef3fd4 [DAGCombiner] Add node to the worklist in topological order after relegalization.
Summary: As per title.

Reviewers: craig.topper, efriedma, RKSimon, lebedev.ri

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66702

llvm-svn: 370040
2019-08-27 11:06:09 +00:00
Craig Topper 243ede9970 [SelectionDAGBuilder] Hide existence of ConstantDataVector vector from visitGetElementPtr.
ConstantDataVector is a specialized version of ConstantVector
that stores data in a packed array of bits instead of as
individual pointers to other Constants. But we really shouldn't
expose that if we can avoid it. And we should handle regular
ConstantVector equally well.

This removes a dyn_cast to ConstantDataVector and just calls
getSplatValue directly on a Constant* if the type is a vector.

llvm-svn: 370018
2019-08-27 06:39:50 +00:00
Craig Topper 4a3f62f9fd [SelectionDAGBuilder] Fix typo in comment. NFC
llvm-svn: 370017
2019-08-27 06:38:51 +00:00
Richard Trieu 58e67b8aa3 Revert r369927 - [DAGCombiner] Remove a bunch of redundant AddToWorklist calls.
This change causes instrumented builds of Clang to have a fatal error in the
backend.  https://reviews.llvm.org/D66537 has the details.

llvm-svn: 370006
2019-08-27 02:04:11 +00:00
Shafik Yaghmour 90e00bd8f3 Debug Info: Support for DW_AT_export_symbols for anonymous structs
This implements the DWARF 5 feature described in:

http://dwarfstd.org/ShowIssue.php?issue=141212.1

To support recognizing anonymous structs:

  struct A {
    struct { // Anonymous struct
        int y;
    };
  } a;

This patch adds support for the new flag in constructTypeDIE(...) and test to verify this change.

Differential Revision: https://reviews.llvm.org/D66605

llvm-svn: 369969
2019-08-26 20:59:44 +00:00
Vedant Kumar 58a0714885 [DWARF] Rename getDwarf5OrGNUCallSite{Attr,Tag}, NFC
llvm-svn: 369967
2019-08-26 20:53:34 +00:00
Vedant Kumar 533dd0214c [DWARF] Pick the DWARF5 OP_entry_value opcode on Darwin
Use the GNU extension for OP_entry_value consistently (i.e. whenever GNU
extensions are used for TAG_call_site).

llvm-svn: 369966
2019-08-26 20:53:12 +00:00
Craig Topper 846429de74 [DAGCombiner][X86] Teach SimplifyVBinOp to fold VBinOp (concat X, undef/constant), (concat Y, undef/constant) -> concat (VBinOp X, Y), VecC
This improves the combine I included in D66504 to handle constants in the upper operands of the concat. If we can constant fold them away we can pull the concat after the bin op. This helps with chains of madd reductions on X86 from loop unrolling. The loop madd reduction pattern creates pmaddwd with half the width of the add that follows it using zeroes to fill the upper bits. If we have two of these added together we can pull the zeroes through the accumulating add and then shrink it.

Differential Revision: https://reviews.llvm.org/D66680

llvm-svn: 369937
2019-08-26 17:59:11 +00:00
Amaury Sechet b7075e40f3 [DAGCombiner] Remove a bunch of redundant AddToWorklist calls.
Summary:
This comes as a first step toward processing the DAG nodes in topological order. Doing so ensures that the arguments of a node are combined before the node itself is combined, which exposes more opportunities for optimization and/or reduces the number of patterns a node has to match.

DAGCombiner adding nodes to the worklist in various places causes the nodes to be in a different order from what is expected. In addition, this is redundant because these nodes end up being added to the worklist anyway due to the machinery at line 1621.

Reviewers: craig.topper, efriedma, RKSimon, lebedev.ri

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66537

llvm-svn: 369927
2019-08-26 17:02:12 +00:00
Craig Topper b8b90ac1c5 [X86][DAGCombiner] Teach narrowShuffle to use concat_vectors instead of inserting into undef
Summary:
Concat_vectors is more canonical during early DAG combine. For example, it's what's used by SelectionDAGBuilder when converting IR shuffles into SelectionDAG shuffles when element counts between inputs and mask don't match. We also have combines in DAGCombiner that can pull concat_vectors through a shuffle. See partitionShuffleOfConcats. So it seems like concat_vectors is a better operation to use here. I had to teach DAGCombiner's SimplifyVBinOp to also handle concat_vectors with undef. I haven't checked yet if we can remove the INSERT_SUBVECTOR version in there or not.

I didn't want to mess with the other caller of getShuffleHalfVectors that's used during shuffle lowering where insert_subvector probably is what we want to produce so I've enabled this via a boolean passed to the function.

Reviewers: spatel, RKSimon

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66504

llvm-svn: 369872
2019-08-25 17:59:49 +00:00
Xing Xue ef039a3ccd [PowerPC][AIX] Adds support for writing the .data section in assembly files
Summary:
Adds support for generating the .data section in assembly files for global variables with a non-zero initialization. The support for writing the .data section in XCOFF object files will be added in a follow-on patch. Relocations are not included in this patch.

Reviewers: hubert.reinterpretcast, sfertile, jasonliu, daltenty, Xiangling_L

Reviewed by: hubert.reinterpretcast

Subscribers: nemanjai, hiraditya, kbarton, MaskRay, jsji, wuzish, shchenz, DiggerLin, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66154

llvm-svn: 369869
2019-08-25 15:17:25 +00:00
Nikita Popov aa71c977ba [SDAG] Fold umul_lohi with 0 or 1 multiplicand
These can turn up during multiplication legalization. In principle
these should also apply to smul_lohi, but I wasn't able to figure
out how to produce those with the necessary operands.
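
For reference, a C++ sketch of umul_lohi's semantics, which makes the 0/1 folds
obvious (illustrative; __int128 is a compiler extension on 64-bit targets):

```
// {lo, hi} of the full 128-bit product of two 64-bit operands.
// If b == 0 the result is {0, 0}; if b == 1 it is {a, 0} -- the folds added here.
void umulLoHi64(unsigned long long a, unsigned long long b,
                unsigned long long *lo, unsigned long long *hi) {
  unsigned __int128 p = (unsigned __int128)a * b;
  *lo = (unsigned long long)p;
  *hi = (unsigned long long)(p >> 64);
}
```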

Differential Revision: https://reviews.llvm.org/D66380

llvm-svn: 369864
2019-08-25 08:04:22 +00:00
Nilanjana Basu 7da6f432d8 Removing block comments from CodeView records in assembly files & related code cleanup
llvm-svn: 369860
2019-08-25 01:09:11 +00:00
Amara Emerson 3f6dd0c588 [GlobalISel] Introduce a G_DYN_STACKALLOC opcode to represent dynamic allocas.
This just adds the opcode and verifier, it will be used to replace existing
dynamic alloca handling in a subsequent patch.

Differential Revision: https://reviews.llvm.org/D66677

llvm-svn: 369833
2019-08-24 02:25:56 +00:00
Guillaume Chatelet b7be5b9095 [LLVM][NFC] remove unused fields
Summary:
Here is the commit introducing the fields
https://github.com/llvm/llvm-project/commit/cf6749e4c091

It dates back to 2006 and was used by the AArch64 backend.
There is no more reference to these fields in the whole codebase so I think it's fine.

Reviewers: courbet

Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66683

llvm-svn: 369810
2019-08-23 20:49:06 +00:00
Volkan Keles 277631e3b8 [GlobalISel] Legalizer: Retry combining illegal artifacts as long as there are new artifacts
Summary:
Currently, the Legalizer aborts if it’s unable to legalize artifacts. However, it’s
possible to combine them after processing the rest of the instructions, because
the legalization is likely to generate more artifacts that allow ArtifactCombiner
to combine them away.

Instead, move illegal artifacts to another list called RetryList and wait until all of the
instructions in InstList are legalized. After that, check if there are any new artifacts and
try to combine them again if that’s the case. If not, abort. The idea is similar to D59339,
but the approach is a bit different.

This patch fixes the issue described above, but the legalizer still may be unable to handle
some cases depending on when to legalize artifacts. So, in the long run, we probably need
a different legalization strategy that handles this dependency in a better way.

Reviewers: dsanders, aditya_nandakumar, qcolombet, arsenm, aemerson, paquette

Reviewed By: dsanders

Subscribers: jvesely, wdng, nhaehnle, rovka, javed.absar, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65894

llvm-svn: 369805
2019-08-23 20:30:35 +00:00
Benjamin Kramer dc5f805d31 Do a sweep of symbol internalization. NFC.
llvm-svn: 369803
2019-08-23 19:59:23 +00:00
Matt Arsenault 2fd1afe8ef RegScavenger: Use Register
llvm-svn: 369794
2019-08-23 18:25:34 +00:00
Craig Topper e7211bb567 [SelectionDAG][X86] Enable iX SimplifyDemandedBits to vXi1 SimplifyDemandedVectorElts simplification. Add a hack to X86 to avoid a regression
Patch showing the effect of enabling bool vector oversimplification.

Non-VLX builds can simplify a kshift shuffle, but VLX builds simplify:

insert_subvector v8i zeroinitializer, v2i --> insert_subvector v8i undef, v2i

This prevents the removal of the AND that clears the upper bits of the result.

Differential Revision: https://reviews.llvm.org/D53022

llvm-svn: 369780
2019-08-23 17:14:58 +00:00
Jeremy Morse 0ae5498146 [DebugInfo] Remove invalidated locations during LiveDebugValues
LiveDebugValues gives variable locations to blocks, but it should also take
them away. There are various circumstances where a variable location is known
until a loop backedge with a different location is detected. In those
circumstances, where there's no agreement on the variable location, it
should be undef / removed, otherwise we end up picking a location that's
valid on some loop iterations but not others.

However, LiveDebugValues doesn't currently do this, see the new testcase
attached. Without this patch, the location of !3 is assumed to be %bar
through the loop. Once it's added to the In-Locations list, it's never
removed, even though the later dbg.value(0... of !3 makes the location
unknowable.

This patch checks during block-location-joining to see whether any
previously-present locations have been removed in a predecessor. If they
have, the live-ins have changed, and the block needs reprocessing.
Similarly, in transferTerminator, assign rather than |= the Out-Locations
after processing a block, as we may have deleted some previously valid
locations. This will mean that LiveDebugValues performs more propagation
-- but that's necessary for it to be correct.

Differential Revision: https://reviews.llvm.org/D66599

llvm-svn: 369778
2019-08-23 16:33:42 +00:00
Simon Pilgrim 04906ef1f2 [DAGCombine] GetNegatedExpression - add FMA\FMAD support
If the accumulator and either of the multiply operands are negatable then we can negate the entire expression.
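
An illustrative example of why this is sound: negating an FMA only needs the
accumulator and one multiply operand to flip sign.

```
// -(a*b + c) == (-a)*b + (-c) exactly, since negation is exact in IEEE
// arithmetic. So if -a and -c are free to form, the outer fneg of the FMA
// can be folded away.
double negFma(double a, double b, double c) {
  return -__builtin_fma(a, b, c); // equivalent to __builtin_fma(-a, b, -c)
}
```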

Differential Revision: https://reviews.llvm.org/D63141

llvm-svn: 369746
2019-08-23 10:49:46 +00:00
Peter Collingbourne 2452d7030b IR. Change strip* family of functions to not look through aliases.
I noticed another instance of the issue where references to aliases were
being replaced with aliasees, this time in InstCombine. In the instance that
I saw it turned out to be only a QoI issue (a symbol ended up being missing
from the symbol table due to the last reference to the alias being removed,
preventing HWASAN from symbolizing a global reference), but it could easily
have manifested as incorrect behaviour.

Since this is the third such issue encountered (previously: D65118, D65314)
it seems to be time to address this common error/QoI issue once and for all
and make the strip* family of functions not look through aliases.

Includes a test for the specific issue that I saw, but no doubt there are
other similar bugs fixed here.

As with D65118 this has been tested to make sure that the optimization isn't
load bearing. I built Clang, Chromium for Linux, Android and Windows as well
as the test-suite and there were no size regressions.

Differential Revision: https://reviews.llvm.org/D66606

llvm-svn: 369697
2019-08-22 19:56:14 +00:00
Matt Arsenault fba82858f2 GlobalISel: Don't create G_UADDE with constant false carry in
The x86 tests are now broken (in particular add-scalar.ll now hits the
DAG fallback) due to not handling G_UADDO. The DAG x86 backend has a
custom lowering for this, so that will need to be implemented.

llvm-svn: 369673
2019-08-22 17:29:17 +00:00
Francis Visoiu Mistrih 5b5ee61b5f [MachO][TLOF] Use hasLocalLinkage to determine if indirect symbol is local
Local symbols in the indirect symbol table contain the value
`INDIRECT_SYMBOL_LOCAL` and the corresponding __pointers entry must
contain the address of the target.

In r349060, I added support for local symbols in the indirect symbol
table, which was checking if the symbol `isDefined` && `!isExternal` to
determine if the symbol is local or not.

It turns out that `isDefined` will return false if the user of the
symbol comes before its definition, and we'll again generate .long 0
which will be the symbol at the address 0x0.

Instead of doing that, use GlobalValue::hasLocalLinkage() to check if
the symbol is local.

Differential Revision: https://reviews.llvm.org/D66563

llvm-svn: 369671
2019-08-22 16:59:00 +00:00
Guozhi Wei 51f48295cb [MBP] Disable aggressive loop rotate in plain mode
Patch https://reviews.llvm.org/D43256 introduced a more aggressive loop layout optimization which depends on profile information. If profile information is not available, the statically estimated profile information (generated by BranchProbabilityInfo.cpp) is used. If the user program doesn't behave as BranchProbabilityInfo.cpp expects, the layout may be worse.

To be conservative, this patch restores the original layout algorithm in plain mode. But the user can still try the aggressive layout optimization with -force-precise-rotation-cost=true.

Differential Revision: https://reviews.llvm.org/D65673

llvm-svn: 369664
2019-08-22 16:21:32 +00:00
Amaury Sechet 95cf66de7c [DAGCombiner] Remove explicit call to AddToWorklist in sqrt and reciprocal computations
Summary: These nodes end up being processed regardless due to DAGCombiner ensuring arguments are processed. This changes the order in which nodes are processed, which fixes an issue on PowerPC.

Reviewers: craig.topper, efriedma, RKSimon, lebedev.ri, mcberg2017, stefanp, hfinkel

Subscribers: nemanjai, MaskRay, jsji, steven.zhang, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66548

llvm-svn: 369662
2019-08-22 15:35:45 +00:00
Jinsong Ji 545e993b8b [SlotIndexes] Add print-slotindexes to disable printing slotindexes
Summary:
When we print the IR with --print-after/before-*,
SlotIndexes will be printed whenever available (we haven't freed it).

This introduces some noise when we try to compare the IR
among different optimizations.

eg:
-print-before=machine-cp will print SlotIndexes for 1st machine-cp
pass, but NOT for 2nd machine-cp;
-print-after=machine-cp will NOT print SlotIndexes for both
machine-cp passes.
So the SlotIndexes in the 1st pass introduce noise when diffing these IRs.

This patch introduces an option to hide indexes.

Reviewers: stoklund, thegameg, qcolombet

Reviewed By: thegameg

Subscribers: hiraditya, arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66500

llvm-svn: 369650
2019-08-22 13:44:47 +00:00
Shiva Chen 72a41e7b0d [TargetLowering] Remove optional arguments passing to makeLibCall
The patch introduces a MakeLibCallOptions struct as suggested by @efriedma on D65497.
The struct contains argument flags which will be passed to the makeLibCall function.
The patch should not have any functional changes.

Differential Revision: https://reviews.llvm.org/D65795

llvm-svn: 369622
2019-08-22 04:59:43 +00:00
Fangrui Song 246750c2a9 [COFF] Fix section name for constants larger than 64 bits on Windows
APIntToHexString returns the wrong value ("0000000000000000ffffffffffffffff")
for integers larger than 64 bits, and thus
TargetLoweringObjectFileCOFF::getSectionForConstant returns the same section name
for all numbers larger than 64 bits. This patch fixes that.

Differential Revision: https://reviews.llvm.org/D66458
Patch by Senran Zhang

llvm-svn: 369610
2019-08-22 01:48:34 +00:00
Craig Topper 3f59bfd5be [MVT] Add v16f16 and v32f16 vectors.
I might look at improving PR43065, which will require being
able to mark 256- and 512-bit vectors of f16 as Legal.

Differential Revision: https://reviews.llvm.org/D66515

llvm-svn: 369565
2019-08-21 19:14:48 +00:00
Amaury Sechet c0f190a048 [DAGCombiner] Remove mostly redundant calls to AddToWorklist
Summary:
These calls change the order in which some nodes are processed and so have an effect on codegen.

The change in fixup-bw-copy.ll is due to (and (load anyext)) getting transformed into (load zext); previously the and was removed by SimplifyDemandedBits, so the (load anyext) remained.

Reviewers: craig.topper, efriedma, RKSimon, lebedev.ri

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66543

llvm-svn: 369561
2019-08-21 18:51:08 +00:00
Matt Arsenault 954a012b4c GlobalISel: Implement moreElementsVector for G_UNMERGE_VALUES sources
This is necessary for handling <3 x s16> on AMDGPU, assuming this
should be handled as 2 separate legalization actions. The alternative
would be for fewerElementsVector to handle 3->2.

llvm-svn: 369547
2019-08-21 16:59:10 +00:00
Nilanjana Basu ac3851c434 Improving CodeView debug info type record's inline comments
llvm-svn: 369533
2019-08-21 15:19:58 +00:00
Alexander Timofeev 78347c979e [AMDGPU] Prevent VGPR copies from moving across the EXEC mask definitions
Differential Revision: https://reviews.llvm.org/D63731
Reviewers: qcolombet, rampitec

llvm-svn: 369532
2019-08-21 15:15:04 +00:00
Guillaume Chatelet 1c18a9cb9e [LLVM][Alignment] Introduce Alignment In MachineFrameInfo
Summary:
This is patch is part of a serie to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: jfb

Subscribers: hiraditya, dexonsmith, llvm-commits, courbet

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65800

llvm-svn: 369531
2019-08-21 14:29:30 +00:00
Amaury Sechet 045f33aec9 [DAGCombiner] Various nits. NFC
llvm-svn: 369520
2019-08-21 12:01:37 +00:00
Petar Avramovic 5b4c5c2c54 [MIPS GlobalISel] NarrowScalar G_TRUNC
Add NarrowScalar for G_TRUNC when NarrowTy is half the size of source.
NarrowScalar G_TRUNC to s32 for MIPS32.

Differential Revision: https://reviews.llvm.org/D66202

llvm-svn: 369509
2019-08-21 09:26:39 +00:00
Jeremy Morse 67443c3c6e [DebugInfo] Avoid dropping location info across block boundaries
LiveDebugValues propagates variable locations between blocks by creating
new DBG_VALUE insts in the successors, then interpreting them when it
passes back through the block at a later time. However, this flushes out
any extra information about the location that LiveDebugValues holds: for
example, connections between variable locations such as discussed in
D65368. And as reported in PR42772 this causes us to lose track of the
fact that a spill-location is actually a spill, not a register location.

This patch fixes that by deferring the creation of propagated DBG_VALUEs
until after propagation has completed: instead location propagation occurs
only by sharing location ID numbers between blocks. 

Differential Revision: https://reviews.llvm.org/D66412

llvm-svn: 369508
2019-08-21 09:22:31 +00:00
Amara Emerson 56606a4db3 [AArch64][GlobalISel] Add support for narrowScalar of G_ZEXT
We do this by merging the source with the high bits set to 0.

Differential Revision: https://reviews.llvm.org/D66181

llvm-svn: 369480
2019-08-21 00:12:37 +00:00
Craig Topper ba375263e8 [DAGCombiner][X86] Teach visitCONCAT_VECTORS to combine (concat_vectors (concat_vectors X, Y), undef)) -> (concat_vectors X, Y, undef, undef)
I also had to add a new combine to X86's combineExtractSubvector to prevent a regression.

This helps our vXi1 code see the full concat operation and allows it to optimize undef to a zero if there is already a zero in the concat. This helped us use a movzx instead of an AND in some of the tests. In those tests, one concat comes from SelectionDAGBuilder and the second comes from type legalization of v4i1->i4 bitcasts which uses an additional concat. Though these changes weren't my original motivation.

I'm looking at making X86ISelLowering's narrowShuffle emit a concat_vectors instead of an insert_subvector since concat_vectors is more canonical during early DAG combine. This patch helps prevent a regression from my experiments with that.

Differential Revision: https://reviews.llvm.org/D66456

llvm-svn: 369459
2019-08-20 22:12:50 +00:00
Sean Fertile 1e46d4cec5 Adds support for writing the .bss section for XCOFF object files.
Adds Wrapper classes for MCSymbol and MCSection into the XCOFF target
object writer. Also adds a class to represent the top-level sections, which we
materialize in the ObjectWriter.

executePostLayoutBinding will map all csects into the appropriate
container depending on its storage mapping class, and map all symbols
into their containing csect. Once all symbols have been processed we
- Assign addresses and symbol table indices.
- Calculate section sizes.
- Build the section header table.
- Assign the sections' raw-pointer values for non-virtual sections.

Since the .bss section is virtual, writing the header table is enough to
add support. Writing of a section's raw data, or of any relocations, is
not included in this patch.

Testing is done by dumping the section header table, but it needs to be
extended to include dumping the symbol table once readobj support for
dumping auxiliary entries lands.

Differential Revision: https://reviews.llvm.org/D65159

llvm-svn: 369454
2019-08-20 22:03:18 +00:00
Aditya Nandakumar 08bd080872 [GlobalISel] Handle multiple registers in dbg.value intrinsic
https://reviews.llvm.org/D66077

The value passed into dbg.value may relate to multiple registers,
each of which needs a DBG_VALUE.

This fix calls MIRBuilder.buildDirectDbgValue for each register.

Without this, IR passed in from flang-compiler/flang may fail an
assertion in getOrCreateVReg.

Patch by : peterwaller-arm.

llvm-svn: 369403
2019-08-20 16:28:37 +00:00
Thomas Raoux be699bf389 [CodeGen] Add a pass to do block predication on SSA machine IR.
For targets requiring aggressive scheduling and/or software pipelining we need to
apply predication before preRA scheduling. This adds a pass re-using the early
if-cvt infrastructure but generating predicated instructions instead of
speculatively executing instructions. It allows doing if-conversion on blocks
containing instructions with side effects. The pass re-uses the target hook from
postRA if-conversion to let the target decide on the heuristic to apply.

Differential Revision: https://reviews.llvm.org/D66190

llvm-svn: 369395
2019-08-20 15:54:59 +00:00
Karl-Johan Karlsson 40da6be2bd [AsmPrinter] Remove const qualifier from EmitBasicBlockStart.
Overriders may want to modify state in it. AMDGPU wants
to, but has to make its members mutable in order to do so.

Besides, EmitBasicBlockEnd is not const, so why should
Start be?

Patch by Bevin Hansson.

Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D66341

llvm-svn: 369325
2019-08-20 05:13:57 +00:00
Vyacheslav Zakharin f7229ac7d8 Fixed placement of llvm.global_dtors on Windows.
Differential revision: https://reviews.llvm.org/D66373

llvm-svn: 369299
2019-08-19 21:07:03 +00:00
Craig Topper 93c2787193 [CGP] Remove ModifiedDT from the makeBitReverse loop
I don't think anything in this loop modifies the control flow and we don't restart any iteration after setting the flag.

This code was added in http://reviews.llvm.org/D16893, but looking at the test case added there, the code that caused the dominator tree to change was merging blocks with their predecessor, not the bitreverse optimization.

Differential Revision: https://reviews.llvm.org/D66366

llvm-svn: 369283
2019-08-19 18:02:24 +00:00
Roman Lebedev edfaee0811 [TargetLowering] x s% C == 0 fold: vector divisor with INT_MIN handling
Summary:
The general fold is only valid for positive divisors,
which effectively means it is invalid for `INT_MIN` divisors,
and we currently bail out if we see them.

But that is too strict; we can just fix up the results.
For that, let's do a second computation 'in parallel':
```
Name: srem -> and
Pre: isPowerOf2(C)
%o = srem i8 %X, C
%r = icmp eq %o, 0
  =>
%n = and i8 %X, C-1
%r = icmp eq %n, 0
```
https://rise4fun.com/Alive/Sup

And then just blend results: if the divisor was `INT_MIN`,
pick the value we got via bit-test,
else pick the value from general fold.

There's an interesting observation - `ISD::ROTR` is set to
`LegalizeAction::Expand` before AVX512, so we should not
treat `INT_MIN` divisor as even; and as it can be seen
while `@test_srem_odd_even_one` improves on all run-lines,
`@test_srem_odd_even_INT_MIN` only improves for AVX512.

Reviewers: RKSimon, craig.topper, spatel

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66300

llvm-svn: 369268
2019-08-19 15:01:42 +00:00
Jinsong Ji 0776da5236 [PeepholeOptimizer] Don't assume bitcast def always has input
Summary:
If we have an MI marked with bitcast bits but without input operands,
PeepholeOptimizer might crash with an assert.

eg:
If we apply the changes in PPCInstrVSX.td as in this patch:

[(set v4i32:$XT, (bitconvert (v16i8 immAllOnesV)))]>;

We will get assert in PeepholeOptimizer.

```
llvm-lit llvm-project/llvm/test/CodeGen/PowerPC/build-vector-tests.ll -v

llvm-project/llvm/include/llvm/CodeGen/MachineInstr.h:417: const
llvm::MachineOperand &llvm::MachineInstr::getOperand(unsigned int)
const: Assertion `i < getNumOperands() && "getOperand() out of range!"'
failed.
```

The fix is to abort if we find an out-of-bounds access.

Reviewers: qcolombet, MatzeB, hfinkel, arsenm

Reviewed By: qcolombet

Subscribers: wdng, arsenm, steven.zhang, wuzish, nemanjai, hiraditya, kbarton, MaskRay, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65542

llvm-svn: 369261
2019-08-19 14:19:04 +00:00
David Stenberg 88df53e6ea [DebugInfo] Allow bundled calls in the MIR's call site info
Summary:
Extend the MIR parser and writer so that the call site information can
refer to calls that are bundled.

Reviewers: aprantl, asowda, NikolaPrica, djtodoro, ivanbaev, vsk

Reviewed By: aprantl

Subscribers: arsenm, hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D66145

llvm-svn: 369256
2019-08-19 12:41:22 +00:00
Jeremy Morse 176bbd5cde [DebugInfo] Make postra sinking of DBG_VALUEs subregister-safe
Currently the machine instruction sinker identifies DBG_VALUE insts that
also need to sink by comparing register numbers. Unfortunately this isn't
safe, because (after register allocation) a DBG_VALUE may read a register
that aliases what's being sunk. To fix this, identify the DBG_VALUEs that
need to sink by recording & examining their register units. Register units
gives us the following guarantee:

  "Two registers overlap if and only if they have a common register unit"
  [MCRegisterInfo.h]

Thus we can always identify aliasing DBG_VALUEs if the set of register
units read by the DBG_VALUE, and the register units of the instruction
being sunk, intersect. (MachineSink already uses classes like
"LiveRegUnits" for determining sinking validity anyway).

The test added checks for super and subregister DBG_VALUE reads of a sunk
copy being sunk as well.

Differential Revision: https://reviews.llvm.org/D58191

llvm-svn: 369247
2019-08-19 09:53:07 +00:00
Craig Topper 74168ded03 [TargetLowering] Teach computeRegisterProperties to only widen v3i16/v3f16 vectors to the next power of 2 type if that's legal.
These were recently made simple types. This restores their
behavior back to something like their EVT legalization.

We might be able to fix the code in type legalization where the
assert was failing, but I didn't investigate too much as I had
already looked at the computeRegisterProperties code during the
review for v3i16/v3f16.

Most of the test changes restore the X86 codegen back to what
it looked like before the recent change. The test case in
vec_setcc.ll is a reduced version of the reproducer from
the fuzzer.

Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=16490

llvm-svn: 369205
2019-08-18 06:28:06 +00:00
Craig Topper f43106e341 [SelectionDAG] Add a node creation debug message to getMachineNode.
llvm-svn: 369204
2019-08-18 06:28:00 +00:00
Kang Zhang b3d258fc44 [CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:

Fix a bug with predecessors.

In the `block-placement` pass, some patterns with an unconditional branch are created where we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
This patch does the simple early return to optimize the blocks at the end of `block-placement`.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D63972

llvm-svn: 369191
2019-08-17 14:37:05 +00:00
Sanjay Patel acceedb15f [CodeGenPrepare] Fix use-after-free
If OptimizeExtractBits() encountered a shift instruction with no users at all,
it would erase the instruction, but still return false.

This previously didn’t matter because its caller would always return after
processing the instruction, but https://reviews.llvm.org/D63233 changed the
function’s caller to fall through if it returned false, which would then cause
a use-after-free detectable by ASAN.

This change makes OptimizeExtractBits return true if it removes a shift
instruction with no users, terminating processing of the instruction.

Patch by: @brentdax (Brent Royal-Gordon)

Differential Revision: https://reviews.llvm.org/D66330

llvm-svn: 369168
2019-08-16 23:10:34 +00:00
Evgeniy Stepanov 187c63f145 Escape % in printf format string.
Fixes branch-relax-block-size.mir on the ASan builder.

llvm-svn: 369138
2019-08-16 18:23:54 +00:00
Amara Emerson c809230a69 [AArch64][GlobalISel] Lower G_SHUFFLE_VECTOR with 1 elt src and 1 elt mask.
Again, it's weird that these are allowed. Since lowering support was added in
r368709 we started crashing on compiling the neon intrinsics test in the test
suite. This fixes the lowering to fold the 1 elt src/mask case into copies.

llvm-svn: 369135
2019-08-16 18:06:53 +00:00
Guozhi Wei e03f6a1631 [CodeGen/Analysis] Intrinsic llvm.assume should not block tail call optimization
In Analysis.cpp:isInTailCallPosition, the instructions between a call and the ret are checked to see if they block tail call optimization. If an instruction is an intrinsic call, only llvm.lifetime_end is allowed and other intrinsic functions block the tail call. When compiling tcmalloc, we found an llvm.assume between a hot function call and the ret; it blocks the optimization. But llvm.assume doesn't generate any instructions, so it should not block tail calls.
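
An illustrative C++ reproduction of the shape involved (the callee and names
are hypothetical):

```
int callee(int x); // hypothetical hot function

int caller(int x) {
  int r = callee(x);
  __builtin_assume(r != 0); // emits llvm.assume, which generates no code
  return r;                 // with this change, callee() can still be tail-called
}
```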

Differential Revision: https://reviews.llvm.org/D66096

llvm-svn: 369125
2019-08-16 16:26:12 +00:00
Florian Hahn 403e85cbc5 Revert [CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
This reverts r368997 (git commit 2a903c0b67)

It looks like this commit adds invalid predecessors to MBBs. The example
below fails the verifier after MachineBlockPlacement (run llc
-verify-machineinstrs):

@global.4 = external constant i8*

declare i32 @zot(...)

define i16* @snork.67() personality i8* bitcast (i32 (...)* @zot to i8*) {
bb:
  invoke void undef()
          to label %bb5 unwind label %bb4

bb4:                                              ; preds = %bb
  %tmp = landingpad { i8*, i32 }
          catch i8* null
  unreachable

bb5:                                              ; preds = %bb
  %tmp6 = load i32, i32* null, align 4
  %tmp7 = icmp eq i32 %tmp6, 0
  br i1 %tmp7, label %bb14, label %bb8

bb8:                                              ; preds = %bb11, %bb5
  invoke void undef()
          to label %bb9 unwind label %bb11

bb9:                                              ; preds = %bb8
  %tmp10 = invoke i16* undef()
          to label %bb14 unwind label %bb11

bb11:                                             ; preds = %bb9, %bb8
  %tmp12 = landingpad { i8*, i32 }
          cleanup
          catch i8* bitcast (i8** @global.4 to i8*)
  %tmp13 = icmp ult i64 undef, undef
  br i1 %tmp13, label %bb8, label %bb14

bb14:                                             ; preds = %bb11, %bb9, %bb5
  %tmp15 = phi i16* [ null, %bb5 ], [ null, %bb11 ], [ %tmp10, %bb9 ]
  ret i16* %tmp15
}

llvm-svn: 369104
2019-08-16 13:19:29 +00:00
Bjorn Pettersson 9dddd26e31 [DAGCombiner] Add simple folds for SMULFIX/UMULFIX/SMULFIXSAT
Summary:
Add the following DAGCombiner folds for mulfix being
one of SMULFIX/UMULFIX/SMULFIXSAT:
  (mulfix x, undef, scale) -> 0
  (mulfix x, 0, scale) -> 0

Also added canonicalization of constants to RHS.

Reviewers: RKSimon, craig.topper, spatel

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66052

llvm-svn: 369103
2019-08-16 13:16:48 +00:00
Jeremy Morse 8b593480d3 [DebugInfo] Handle complex expressions with spills in LiveDebugValues
In r369026 we disabled spill-recognition in LiveDebugValues for anything
that has a complex expression. This is because it's hard to recover the
complex expression once the spill location is baked into it.

This patch re-enables spill-recognition and slightly adjusts the DBG_VALUE
insts that LiveDebugValues tracks: instead of tracking the last DBG_VALUE
for a variable, it tracks the last _unspilt_ DBG_VALUE. The spill-restore
code is then able to access and copy the original complex expression; but
the rest of LiveDebugValues has to be aware of the slight semantic shift,
and produce a new spilt location if a spilt location is propagated between
blocks.

The test added produces an incorrect variable location (see FIXME), which
will be the subject of future work.

Differential Revision: https://reviews.llvm.org/D65368

llvm-svn: 369092
2019-08-16 10:04:17 +00:00
Volkan Keles 0ae6006bee [GlobalISel] CSEMIRBuilder: Add support for G_GEP
Summary:
This patch adds G_GEP to `shouldCSEOpc` so that it can be CSEd. It also refactors
`translateGetElementPtr` by replacing `createGenericVirtualRegister` calls with types.

Reviewers: aditya_nandakumar, arsenm, dsanders, paquette, aemerson

Reviewed By: aditya_nandakumar

Subscribers: wdng, rovka, javed.absar, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66316

llvm-svn: 369070
2019-08-15 23:45:45 +00:00
Daniel Sanders 0c47611131 Apply llvm-prefer-register-over-unsigned from clang-tidy to LLVM
Summary:
This clang-tidy check is looking for unsigned integer variables whose initializer
starts with an implicit cast from llvm::Register and changes the type of the
variable to llvm::Register (dropping the llvm:: where possible).

Partial reverts in:
X86FrameLowering.cpp - Some functions return unsigned and arguably should be MCRegister
X86FixupLEAs.cpp - Some functions return unsigned and arguably should be MCRegister
X86FrameLowering.cpp - Some functions return unsigned and arguably should be MCRegister
HexagonBitSimplify.cpp - Function takes BitTracker::RegisterRef which appears to be unsigned&
MachineVerifier.cpp - Ambiguous operator==() given MCRegister and const Register
PPCFastISel.cpp - No Register::operator-=()
PeepholeOptimizer.cpp - TargetInstrInfo::optimizeLoadInstr() takes an unsigned&
MachineTraceMetrics.cpp - MachineTraceMetrics lacks a suitable constructor

Manual fixups in:
ARMFastISel.cpp - ARMEmitLoad() now takes a Register& instead of unsigned&
HexagonSplitDouble.cpp - Ternary operator was ambiguous between unsigned/Register
HexagonConstExtenders.cpp - Has a local class named Register, used llvm::Register instead of Register.
PPCFastISel.cpp - PPCEmitLoad() now takes a Register& instead of unsigned&

Depends on D65919

Reviewers: arsenm, bogner, craig.topper, RKSimon

Reviewed By: arsenm

Subscribers: RKSimon, craig.topper, lenary, aemerson, wuzish, jholewinski, MatzeB, qcolombet, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, wdng, nhaehnle, sbc100, jgravelle-google, kristof.beyls, hiraditya, aheejin, kbarton, fedor.sergeev, javed.absar, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, tpr, PkmX, jocewei, jsji, Petar.Avramovic, asbirlea, Jim, s.egerton, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65962

llvm-svn: 369041
2019-08-15 19:22:08 +00:00
Matt Arsenault 1f2b727298 MVT: Add v3i16/v3f16 vectors
AMDGPU has some buffer intrinsics which theoretically could use
this. Some of the generated tables include the 3 and 4 element vector
versions of these rounded to 64-bits, which is ambiguous. Add these to
help the table disambiguate these.

Assertion change is for the path odd sized vectors now take for R600.
v3i16 is widened to v4i16, which then needs to be promoted to v4i32.

llvm-svn: 369038
2019-08-15 18:58:25 +00:00
Philip Reames d202899431 [NFC] Add a couple of dump routines for RegisterPressure helper classes
llvm-svn: 369037
2019-08-15 18:49:39 +00:00
Jeremy Morse c476124bc8 [DebugInfo] Avoid crash from dropped fragments in LiveDebugValues
This patch avoids a crash caused by DW_OP_LLVM_fragments being dropped
from DIExpressions by LiveDebugValues spill-restore code. The appearance
of a previously unseen fragment configuration confuses LDV, as documented
in PR42773, and reproduced by the test function this patch adds (crashes
on an x86_64 debug build).

To avoid this, on spill restore, we now use fragment information from the
spilt-location-expression.

In addition, when spilling, we now don't spill any DBG_VALUE with a complex
expression, as it can't be safely restored and will definitely lead to an
incorrect variable location. The discussion of this is in D65368.

Differential Revision: https://reviews.llvm.org/D66284

llvm-svn: 369026
2019-08-15 17:49:46 +00:00
Jonas Devlieghere 0eaee545ee [llvm] Migrate llvm::make_unique to std::make_unique
Now that we've moved to C++14, we no longer need the llvm::make_unique
implementation from STLExtras.h. This patch is a mechanical replacement
of (hopefully) all the llvm::make_unique instances across the monorepo.
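
The mechanical change looks like this (illustrative):

```
#include <memory>

struct Node { int Value = 0; };

int main() {
  // Previously: auto N = llvm::make_unique<Node>();
  auto N = std::make_unique<Node>(); // available since C++14
  return N->Value;
}
```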

llvm-svn: 369013
2019-08-15 15:54:37 +00:00
Simon Pilgrim d4df81f463 Remove SmallBitVector.h include. NFCI.
SmallBitVector/BitVector types aren't used at all in the cpp file.

llvm-svn: 369008
2019-08-15 14:40:37 +00:00
Simon Pilgrim 983e9118a2 Remove BitVector.h include. NFCI.
BitVector type isn't used at all in the cpp file.

llvm-svn: 369007
2019-08-15 14:39:28 +00:00
Simon Pilgrim ed804dad1e [DAGCombine] MergeConsecutiveStores - fix cppcheck/MSVC extension warning. NFCI.
Set the StartIdx type to size_t so that it matches the StoreNodes SmallVector size() and index types.

Silences the MSVC analyzer warning that unsigned increment might overflow before exceeding size_t on 64-bit targets - this isn't likely to happen but it means we use consistent types and reduces the warning "noise" a little.

llvm-svn: 368998
2019-08-15 13:07:14 +00:00
Kang Zhang 2a903c0b67 [CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:

This patch triggered a bug in r368339; r368339 has been reverted, so this patch is being upstreamed again.

In the `block-placement` pass, it will create some patterns of unconditional branches for which we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
This patch is to do the simple early return to optimize the blocks at the end of `block-placement`.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D63972

llvm-svn: 368997
2019-08-15 13:05:16 +00:00
Sanjay Patel 57d459309d [SDAG][x86] check for relaxed math when matching an FP reduction
If the last step in an FP add reduction allows reassociation and doesn't care
about -0.0, then we are free to recognize that computation as a reduction
that may reorder the intermediate steps.

This is requested directly by PR42705:
https://bugs.llvm.org/show_bug.cgi?id=42705
and solves PR42947 (if horizontal math instructions are actually faster than
the alternative):
https://bugs.llvm.org/show_bug.cgi?id=42947

Differential Revision: https://reviews.llvm.org/D66236

llvm-svn: 368995
2019-08-15 12:43:15 +00:00
Florian Hahn de1d6c8220 Add ptrmask intrinsic
This patch adds a ptrmask intrinsic which allows masking out bits of a
pointer that must be zero when accessing it, because of ABI alignment
requirements or a restriction of the meaningful bits of a pointer
through the data layout.

This avoids doing a ptrtoint/inttoptr round trip in some cases (e.g. tagged
pointers) and allows us to not lose information about the underlying
object.
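For context, this is the kind of source-level pattern the intrinsic is meant to express without an integer round trip; a minimal C++ sketch assuming a hypothetical 3-bit low-pointer tag:

```
#include <cstdint>

// Hypothetical low-bit tagging scheme: the low 3 bits carry a tag and must be
// cleared before the pointer is dereferenced.
constexpr std::uintptr_t TagMask = 0x7;

template <typename T>
T *untag(T *Tagged) {
  // Source-level equivalent of the ptrtoint/inttoptr round trip that the
  // new intrinsic lets the IR avoid.
  auto Bits = reinterpret_cast<std::uintptr_t>(Tagged);
  return reinterpret_cast<T *>(Bits & ~TagMask);
}
```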

Reviewers: nlopes, efriedma, hfinkel, sanjoy, jdoerfert, aqjune

Reviewed by: sanjoy, jdoerfert

Differential Revision: https://reviews.llvm.org/D59065

llvm-svn: 368986
2019-08-15 10:12:26 +00:00
Craig Topper e7ea06b7d2 [SelectionDAGBuilder] Teach gather/scatter getUniformBase to look through vector zeroinitializer indices in addition to scalar zeroes.
llvm-svn: 368926
2019-08-14 21:38:56 +00:00
Sanjay Patel ecccf29e6c [SDAG] move variable closer to use; NFC
llvm-svn: 368905
2019-08-14 19:46:15 +00:00
Taewook Oh df7022825c [DebugInfo] Consider debug label scope has an extra lexical block file
Summary: There are places where the case of a debug label scope having an extra lexical block file is not handled properly. The modified test won't pass without this patch.

Reviewers: aprantl, HsiangKai

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66187

llvm-svn: 368891
2019-08-14 17:58:45 +00:00
Jeremy Morse 90c2794bfc [DebugInfo] MCP: collect and update DBG_VALUEs encountered in local block
MCP currently uses changeDebugValuesDefReg / collectDebugValues to find
debug users of a register; however, those functions assume that all
DBG_VALUEs immediately follow the specified instruction, which isn't
necessarily true. This is going to become untrue very often once we turn
off CodeGenPrepare::placeDbgValues.

Instead of calling changeDebugValuesDefReg on an instruction to change its
debug users, in this patch we instead collect DBG_VALUEs of copies as we
iterate over insns, and update the debug users of copies that are made
dead. This isn't a non-functional change, because MCP will now update
DBG_VALUEs that aren't immediately after a copy, but refer to the same
register. I've hijacked the regression test for PR38773 to test for this
new behaviour; an entirely new test seemed overkill.

Differential Revision: https://reviews.llvm.org/D56265

llvm-svn: 368835
2019-08-14 12:20:02 +00:00
Fangrui Song 8caa0aaa4d [AsmPrinter] Delete redundant .type foo, @function when emitting an ifunc
In MCAsmStreamer:

.type foo,@function   # <--- this is redundant
.type foo,@gnu_indirect_function

In MCELFStreamer, the latter STT_GNU_IFUNC overrides STT_FUNC.

llvm-svn: 368823
2019-08-14 10:30:27 +00:00
Aditya Nandakumar c65ac865c3 [GlobalISel]: Fix lowering of G_Shuffle_vector where we pick up the wrong source index
https://reviews.llvm.org/D66182

llvm-svn: 368781
2019-08-14 01:23:33 +00:00
Aditya Nandakumar 615eee6402 [GlobalISel]: Fix lowering of G_SHUFFLE_VECTOR with scalar sources
https://reviews.llvm.org/D66171

llvm-svn: 368753
2019-08-13 21:49:11 +00:00
Matt Arsenault 28215caa60 GlobalISel: Partially implement fewerElementsVector G_UNMERGE_VALUES
Odd sized vectors aren't handled yet.

llvm-svn: 368713
2019-08-13 16:26:28 +00:00
Matt Arsenault 690645bda0 GlobalISel: Implement lower for G_SHUFFLE_VECTOR
llvm-svn: 368709
2019-08-13 16:09:07 +00:00
Matt Arsenault 0a04a06250 GlobalISel: Add more verifier checks for G_SHUFFLE_VECTOR
llvm-svn: 368705
2019-08-13 15:52:21 +00:00
Matt Arsenault 5af9cf042f GlobalISel: Change representation of shuffle masks
Currently shufflemasks get emitted as any other constant, and you end
up with a bunch of virtual registers of G_CONSTANT with a
G_BUILD_VECTOR. The AArch64 selector then asserts on anything that
doesn't fit this pattern. This isn't an ideal representation; a dedicated
one should avoid legalization and leave fewer opportunities for a
representational error.

Rather than invent a new shuffle mask operand type, similar to what
ShuffleVectorSDNode does, just track the original IR Constant mask
operand. I don't completely like the idea of adding another link to
the IR, but MIR is already quite dependent on IR constants,
and this will allow sharing the shuffle mask utility functions with
the IR.

llvm-svn: 368704
2019-08-13 15:34:38 +00:00
Roman Lebedev 676594305a [CodeGen][SelectionDAG] More efficient code for X % C == 0 (SREM case)
Summary:
This implements an optimization described in Hacker's Delight 10-17:
when `C` is constant, the result of `X % C == 0` can be computed
more cheaply without actually calculating the remainder.
The motivation is discussed here: https://bugs.llvm.org/show_bug.cgi?id=35479.

One huge caveat: this signed case is only valid for positive divisors.

While we can freely negate negative divisors, we can't negate `INT_MIN`,
so for now if `INT_MIN` is encountered, we bailout.
As a follow-up, it should be possible to handle that more gracefully
via extra `and`+`setcc`+`select`.

This passes llvm's test-suite, and from cursory(!) cross-examination
the folds (the assembly) match those of GCC, and manual checking via Alive
did not reveal any issues (other than the `INT_MIN` case).
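For reference, a standalone C++ sketch of the unsigned flavor of the Hacker's Delight identity this builds on (the signed case additionally has to handle negative dividends and the positive-divisor restriction described above; the divisor 7 and the loop bound are arbitrary):

```
#include <cassert>
#include <cstdint>

// Multiplicative inverse of an odd constant modulo 2^32, via Newton's
// iteration (each step doubles the number of correct low bits).
constexpr std::uint32_t inverseMod2_32(std::uint32_t C) {
  std::uint32_t X = C; // correct to 3 bits for any odd C
  for (int I = 0; I < 5; ++I)
    X *= 2 - C * X;
  return X;
}

int main() {
  const std::uint32_t C = 7;                  // any odd divisor
  const std::uint32_t Inv = inverseMod2_32(C);
  const std::uint32_t Limit = UINT32_MAX / C; // floor((2^32 - 1) / C)
  for (std::uint32_t X = 0; X < (1u << 20); ++X) {
    bool ViaRem = (X % C) == 0;
    bool ViaMul = X * Inv <= Limit; // no remainder computed
    assert(ViaRem == ViaMul);
  }
  return 0;
}
```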

Reviewers: RKSimon, spatel, hermord, craig.topper, xbolva00

Reviewed By: RKSimon, xbolva00

Subscribers: xbolva00, thakis, javed.absar, hiraditya, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65366

llvm-svn: 368702
2019-08-13 14:57:37 +00:00
Roman Lebedev f4de7eda4a [TargetLowering][NFC] prepareUREMEqFold(): fixup comment
The comment initially matched the code, but the code was incorrect
and was fixed after the initial revert, back when it was introduced,
but the comment was never updated.

llvm-svn: 368701
2019-08-13 14:57:08 +00:00
Hans Wennborg 5390d25f2b Revert r368276 "[TargetLowering] SimplifyDemandedBits - call SimplifyMultipleUseDemandedBits for ISD::EXTRACT_VECTOR_ELT"
This introduced a false positive MemorySanitizer warning about use of
uninitialized memory in a vectorized crc function in Chromium. That suggests
maybe something is not right with this transformation. See
https://crbug.com/992853#c7 for a reproducer.

This also reverts the follow-up commits r368307 and r368308 which
depended on this.

> This patch attempts to peek through vectors based on the demanded bits/elt of a particular ISD::EXTRACT_VECTOR_ELT node, allowing us to avoid dependencies on ops that have no impact on the extract.
>
> In particular this helps remove some unnecessary scalar->vector->scalar patterns.
>
> The wasm shift patterns are annoying - @tlively has indicated that the wasm vector shift codegen is to be refactored in the near term and this isn't considered a major issue.
>
> Differential Revision: https://reviews.llvm.org/D65887

llvm-svn: 368660
2019-08-13 09:33:25 +00:00
Amara Emerson e14c91b71a [GlobalISel] Make the InstructionSelector instance non-const, allowing state to be maintained.
Currently we can't keep any state in the selector object that we get from
the subtarget. As a result we have to plumb all our variables through
multiple functions. This change makes it non-const and adds a virtual init()
method to allow further state to be captured for each target.

AArch64 makes use of this in this patch to cache a call to hasFnAttribute()
which is expensive to call, and is used on each selection of G_BRCOND.

Differential Revision: https://reviews.llvm.org/D65984

llvm-svn: 368652
2019-08-13 06:26:59 +00:00
Aditya Nandakumar 70fdfed45f [GlobalISel]: Add KnownBits for G_XOR
https://reviews.llvm.org/D66119

llvm-svn: 368648
2019-08-13 04:32:33 +00:00
Daniel Sanders a58a27513b Eliminate implicit Register->unsigned conversions in VirtRegMap. NFC
Summary:
This was mostly an experiment to assess the feasibility of completely
eliminating a problematic implicit conversion case in D61321 in advance of
landing that* but it also happens to align with the goal of propagating the
use of Register/MCRegister instead of unsigned so I believe it makes sense
to commit it.

The overall process for eliminating the implicit conversions from
Register/MCRegister -> unsigned was to:
1. Add an explicit conversion to support genuinely required conversions to
   unsigned. For example, using them as an index for IndexedMap. Sadly it's
   not possible to have an explicit and implicit conversion to the same
   type and only deprecate the implicit one so I called the explicit
   conversion get().
2. Temporarily annotate the implicit conversion to unsigned with
   LLVM_ATTRIBUTE_DEPRECATED to make them visible
3. Eliminate implicit conversions by propagating Register/MCRegister/
   explicit-conversions appropriately
4. Remove the deprecation added in 2.

* My conclusion is that it isn't feasible as there's too much code to
  update in one go.
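A minimal sketch of the conversion policy described in steps 1-4 (this is not the real llvm::Register, just the shape of it):

```
#include <cassert>

class RegisterLike {
  unsigned Reg = 0;

public:
  constexpr RegisterLike(unsigned R) : Reg(R) {}
  // Step 1: explicit escape hatch for genuinely required unsigned uses,
  // e.g. indexing an IndexedMap-style container.
  constexpr unsigned get() const { return Reg; }
  // The implicit conversion being phased out; step 2 temporarily marks it
  // deprecated so remaining uses become visible.
  constexpr operator unsigned() const { return Reg; }
};

int main() {
  RegisterLike R(42);
  assert(R.get() == 42); // preferred over relying on the implicit conversion
  return 0;
}
```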

Depends on D65678

Reviewers: arsenm

Subscribers: MatzeB, wdng, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65685

llvm-svn: 368643
2019-08-13 00:55:24 +00:00
Aditya Nandakumar 55371e697c [GISel]: Fix a bug in KnownBits where we should have been using SizeInBits
https://reviews.llvm.org/D66039

We were using getIndexSize instead of getIndexSizeInBits().
Added test case for G_PTRTOINT and G_INTTOPTR.

llvm-svn: 368618
2019-08-12 21:28:12 +00:00
Hans Wennborg a45f301f7a Revert r368339 "[MBP] Disable aggressive loop rotate in plain mode"
It caused assertions to fire when building Chromium:

  lib/CodeGen/LiveDebugValues.cpp:331: bool
  {anonymous}::LiveDebugValues::OpenRangesSet::empty() const: Assertion
  `Vars.empty() == VarLocs.empty() && "open ranges are inconsistent"' failed.

See https://crbug.com/992871#c3 for how to reproduce.

> Patch https://reviews.llvm.org/D43256 introduced more aggressive loop layout optimization which depends on profile information. If profile information is not available, the statically estimated profile information (generated by BranchProbabilityInfo.cpp) is used. If the user program doesn't behave as BranchProbabilityInfo.cpp expects, the layout may be worse.
>
> To be conservative, this patch restores the original layout algorithm in plain mode. But the user can still try the aggressive layout optimization with -force-precise-rotation-cost=true.
>
> Differential Revision: https://reviews.llvm.org/D65673

llvm-svn: 368579
2019-08-12 14:23:13 +00:00
Kang Zhang 489efc68a5 Revert r368565: [CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
llvm-svn: 368574
2019-08-12 14:00:31 +00:00
David Stenberg 9b29ec58b7 [DebugInfo] Remove call sites when eliminating unreachable blocks
Summary:
When eliminating an unreachable block we must remove any call site
information for calls residing in the block.

This was originally found on a downstream target, and the attached x86
test case was produced by hand-modifying some MIR.

Reviewers: aprantl, asowda, NikolaPrica, djtodoro, ivanbaev, vsk

Reviewed By: NikolaPrica, vsk

Subscribers: vsk, hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D64500

llvm-svn: 368566
2019-08-12 13:22:29 +00:00
Kang Zhang 342fb0db6d [CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:

In the `block-placement` pass, it will create some patterns of unconditional branches for which we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
This patch is to do the simple early return to optimize the blocks at the end of `block-placement`.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D63972

llvm-svn: 368565
2019-08-12 13:15:31 +00:00
Hans Wennborg 5b96d4655c Revert r368509 "[CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks"
> In the `block-placement` pass, it will create some patterns of unconditional branches for which we can do a simple early return.
> But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
> This patch is to do the simple early return to optimize the blocks at the end of `block-placement`.
>
> Reviewed By: efriedma
>
> Differential Revision: https://reviews.llvm.org/D63972

This also revertes follow-ups r368514 and r368532.

llvm-svn: 368560
2019-08-12 12:43:51 +00:00
Simon Pilgrim 05e8209e33 [TargetLowering] SimplifyDemandedBits - call SimplifyMultipleUseDemandedBits for ISD::TRUNCATE
llvm-svn: 368553
2019-08-12 10:56:05 +00:00
Bjorn Pettersson 27038a3780 [SelectionDAG] Widen vector results of SMULFIX/UMULFIX/SMULFIXSAT
Summary:
After the commits that changed x86 backend to widen vectors
instead of using promotion some of our downstream tests
started to fail. It was noticed that WidenVectorResult has
been missing support for SMULFIX/UMULFIX/SMULFIXSAT. This
patch adds the missing functionality.

Reviewers: craig.topper, RKSimon

Reviewed By: craig.topper

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66051

llvm-svn: 368540
2019-08-11 19:27:06 +00:00
Kang Zhang b1a62d168f [NFC][CodeGen] Use while loop instead for loop in MachineBlockPlacement::optimizeBranches()
This will pass EXPENSIVE check.

llvm-svn: 368532
2019-08-11 12:58:50 +00:00
Kang Zhang 555f7495df [NFC][CodeGen] Modify the PI++ to ++PI in MachineBlockPlacement::optimizeBranches()
llvm-svn: 368514
2019-08-10 16:23:17 +00:00
Kang Zhang 36cd84bdd9 [CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:

In the `block-placement` pass, it will create some patterns of unconditional branches for which we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
This patch is to do the simple early return to optimize the blocks at the end of `block-placement`.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D63972

llvm-svn: 368509
2019-08-10 09:58:52 +00:00
Sanjay Patel 26b2c11451 [DAGCombiner] exclude x*2.0 from normal negation profitability rules
This is the codegen part of fixing:
https://bugs.llvm.org/show_bug.cgi?id=32939

Even with the optimal/canonical IR that is ideally created by D65954,
we would reverse that transform in DAGCombiner and end up with the same
asm on AArch64 or x86.

I see 2 options for trying to correct this:

  1. Limit isNegatibleForFree() by special-casing the fmul pattern (this patch).
  2. Avoid creating (fmul X, 2.0) in the 1st place by adding a special-case
     transform to SelectionDAG::getNode() and/or SelectionDAGBuilder::visitFMul()
     that matches the transform done by DAGCombiner.

This seems like the less intrusive patch, but if there's some other reason to
prefer 1 option over the other, we can change to the other option.

Differential Revision: https://reviews.llvm.org/D66016

llvm-svn: 368490
2019-08-09 21:37:32 +00:00
Daniel Sanders e9a57c2b23 [globalisel] Add G_SEXT_INREG
Summary:
Targets often have instructions that can sign-extend certain cases faster
than the equivalent shift-left/arithmetic-shift-right. Such cases can be
identified by matching a shift-left/shift-right pair but there are some
issues with this in the context of combines. For example, suppose you can
sign-extend 8-bit up to 32-bit with a target extend instruction.
  %1:_(s32) = G_SHL %0:_(s32), i32 24 # (I've inlined the G_CONSTANT for brevity)
  %2:_(s32) = G_ASHR %1:_(s32), i32 24
  %3:_(s32) = G_ASHR %2:_(s32), i32 1
would reasonably combine to:
  %1:_(s32) = G_SHL %0:_(s32), i32 24
  %2:_(s32) = G_ASHR %1:_(s32), i32 25
which no longer matches the special case. If your shifts and extend are
equal cost, this would break even as a pair of shifts but if your shift is
more expensive than the extend then it's cheaper as:
  %2:_(s32) = G_SEXT_INREG %0:_(s32), i32 8
  %3:_(s32) = G_ASHR %2:_(s32), i32 1
It's possible to match the shift-pair in ISel and emit an extend and ashr.
However, this is far from the only way to break this shift pair and make
it hard to match the extends. Another example is that with the right
known-zeros, this:
  %1:_(s32) = G_SHL %0:_(s32), i32 24
  %2:_(s32) = G_ASHR %1:_(s32), i32 24
  %3:_(s32) = G_MUL %2:_(s32), i32 2
can become:
  %1:_(s32) = G_SHL %0:_(s32), i32 24
  %2:_(s32) = G_ASHR %1:_(s32), i32 23

All upstream targets have been configured to lower it to the current
G_SHL,G_ASHR pair but will likely want to make it legal in some cases to
handle their faster cases.

To follow-up: Provide a way to legalize based on the constant. At the
moment, I'm thinking that the best way to achieve this is to provide the
MI in LegalityQuery but that opens the door to breaking core principles
of the legalizer (legality is not context sensitive). That said, it's
worth noting that looking at other instructions and acting on that
information doesn't violate this principle in itself. It's only a
violation if, at the end of legalization, a pass that checks legality
without being able to see the context would say an instruction might not be
legal. That's a fairly subtle distinction so to give a concrete example,
saying %2 in:
  %1 = G_CONSTANT 16
  %2 = G_SEXT_INREG %0, %1
is legal is in violation of that principle if the legality of %2 depends
on %1 being constant and/or being 16. However, legalizing to either:
  %2 = G_SEXT_INREG %0, 16
or:
  %1 = G_CONSTANT 16
  %2:_(s32) = G_SHL %0, %1
  %3:_(s32) = G_ASHR %2, %1
depending on whether %1 is constant and 16 does not violate that principle
since both outputs are genuinely legal.
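As a concrete reference for what G_SEXT_INREG computes, a small C++ sketch of the generic shift-pair lowering on a 32-bit value (assuming the usual two's-complement arithmetic right shift):

```
#include <cassert>
#include <cstdint>

// Sign-extend the low B bits of X to the full 32-bit width, i.e. what
// G_SEXT_INREG %x, B produces; the shl/ashr pair is the default lowering.
std::int32_t sextInReg(std::int32_t X, unsigned B) {
  const unsigned Shift = 32 - B;
  // Left shift in the unsigned domain, then arithmetic right shift.
  std::uint32_t Hi = static_cast<std::uint32_t>(X) << Shift;
  return static_cast<std::int32_t>(Hi) >> Shift;
}

int main() {
  assert(sextInReg(0x000000FF, 8) == -1);  // 8-bit field is all ones
  assert(sextInReg(0x0000007F, 8) == 127); // sign bit of the field is clear
  return 0;
}
```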

Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm

Subscribers: sdardis, jvesely, wdng, nhaehnle, rovka, kristof.beyls, javed.absar, hiraditya, jrtc27, atanasyan, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D61289

llvm-svn: 368487
2019-08-09 21:11:20 +00:00
Bill Wendling 79176a2542 [CodeGen] Require a name for a block addr target
Summary:
A block address may be used in inline assembly, in which case it
requires a name so that the asm parser has something to parse. Creating
a name for every block address is a large hammer, but is necessary
because at the point when a temp symbol is created we don't necessarily
know if it's used in inline asm. This ensures that it exists regardless.

Reviewers: nickdesaulniers, craig.topper

Subscribers: nathanchance, javed.absar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65352

llvm-svn: 368478
2019-08-09 20:18:30 +00:00
Bill Wendling 1b10438875 [MC] Don't recreate a label if it's already used
Summary:
This patch keeps track of MCSymbols created for blocks that were
referenced in inline asm. It prevents creating a new symbol which
doesn't refer to the block.

Inline asm may have a reference to a label. The asm parser however
doesn't recognize it as a label and tries to create a new symbol. The
result being that instead of the original symbol (e.g. ".Ltmp0") the
parser replaces it in the inline asm with the new one (e.g. ".Ltmp00")
without updating it in the symbol table. So the machine basic block
retains the "old" symbol (".Ltmp0"), but the inline asm uses the new one
(".Ltmp00").

Reviewers: nickdesaulniers, craig.topper

Subscribers: nathanchance, javed.absar, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65304

llvm-svn: 368477
2019-08-09 20:16:31 +00:00
Sanjay Patel 0b4ae34c2f [DAGCombiner] remove redundant fold for X*1.0; NFC
This is handled at node creation time (similar to X/1.0)
after:
rL357029
(no fast-math-flags needed)

llvm-svn: 368443
2019-08-09 14:30:59 +00:00
Jinsong Ji 6349ce5ca5 [MachinePipeliner] Avoid indeterminate order in FuncUnitSorter
Summary:
This is exposed by adding a new testcase in PowerPC in
https://reviews.llvm.org/rL367732

The testcase got different output on different platforms, hence breaking
buildbots.

The problem is that we get a different FuncUnitOrder when calling calculateResMII.

The root cause is:
1. Two MachineInstrs might get the SAME priority (MFUsx) from minFuncUnits.
2. The current comparison operator() will return `MFUs1 > MFUs2`.
3. We use iterators for MachineInstr, so the input to FuncUnitSorter
   might be different on different platforms due to the iterator nature.

So for two MIs with the same MFU, their order actually depends on the
iterator order, which is platform (implementation) dependent.

This is risky, and may cause cross-compiling problems.

The fix is to make sure we assign a deterministic order when they are
equal.
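The general shape of such a fix, as a standalone sketch (the struct and field names are made up, not the actual FuncUnitSorter):

```
#include <algorithm>
#include <tuple>
#include <vector>

struct Candidate {
  unsigned MFUs;     // primary key: minimum number of usable func units
  unsigned InstrSeq; // deterministic secondary key, e.g. instruction number
};

// Sort by descending MFUs; break ties with the sequence number so the order
// no longer depends on iterator/implementation details of the host.
static bool moreFuncUnits(const Candidate &A, const Candidate &B) {
  return std::tie(B.MFUs, A.InstrSeq) < std::tie(A.MFUs, B.InstrSeq);
}

int main() {
  std::vector<Candidate> Work = {{3, 7}, {5, 2}, {3, 1}};
  std::sort(Work.begin(), Work.end(), moreFuncUnits);
  // Always {5,2}, {3,1}, {3,7}, regardless of the initial iteration order.
  return 0;
}
```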

Reviewers: bcahoon, hfinkel, jmolloy

Subscribers: nemanjai, hiraditya, MaskRay, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65992

llvm-svn: 368441
2019-08-09 14:10:57 +00:00
Tim Northover e1a5f668b3 GlobalISel: pack various parameters for lowerCall into a struct.
I've now needed to add an extra parameter to this call twice recently. Not only
is the signature getting extremely unwieldy, but just updating all of the
callsites and implementations is a pain. Putting the parameters in a struct
sidesteps both issues.

llvm-svn: 368408
2019-08-09 08:26:38 +00:00
Craig Topper 9158e54270 [SelectionDAG][X86] Move setcc mask splitting for mload/mstore/mgather/mscatter from DAGCombiner to the type legalizer.
We may be able to look to how VSELECT is handled to further
improve this, but this appears to be neutral or an improvement
on the test cases we have.

llvm-svn: 368344
2019-08-08 21:14:08 +00:00
Craig Topper bce4d79f37 [LegalizeTypes] Remove SplitVSETCC helper and just call SplitVecRes_SETCC.
llvm-svn: 368343
2019-08-08 21:13:58 +00:00
Guozhi Wei 80347c3acc [MBP] Disable aggressive loop rotate in plain mode
Patch https://reviews.llvm.org/D43256 introduced more aggressive loop layout optimization which depends on profile information. If profile information is not available, the statically estimated profile information (generated by BranchProbabilityInfo.cpp) is used. If the user program doesn't behave as BranchProbabilityInfo.cpp expects, the layout may be worse.

To be conservative, this patch restores the original layout algorithm in plain mode. But the user can still try the aggressive layout optimization with -force-precise-rotation-cost=true.

Differential Revision: https://reviews.llvm.org/D65673

llvm-svn: 368339
2019-08-08 20:25:23 +00:00
Brian Cain 6dbbd0f343 [llvm-mc] Add reportWarning() to MCContext
Adding reportWarning() to MCContext, so that it can be used from
the Hexagon assembler backend.

llvm-svn: 368327
2019-08-08 19:13:23 +00:00
David Tenty 8558aac82c Enable assembly output of local commons for AIX
Summary:
This patch enables assembly output of local commons for AIX using .lcomm
directives. Adds an EmitXCOFFLocalCommonSymbol to MCStreamer so we can emit the
AIX version of .lcomm assembly directives, which include a csect name. Handles the
case of BSS locals in PPCAIXAsmPrinter by using EmitXCOFFLocalCommonSymbol. Adds
a test for generating .lcomm on AIX targets.

Reviewers: cebowleratibm, hubert.reinterpretcast, Xiangling_L, jasonliu, sfertile

Reviewed By: sfertile

Subscribers: wuzish, nemanjai, hiraditya, kbarton, MaskRay, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64825

llvm-svn: 368306
2019-08-08 15:40:35 +00:00
Simon Pilgrim e2e366797e [TargetLowering] SimplifyDemandedBits - call SimplifyMultipleUseDemandedBits for ISD::EXTRACT_VECTOR_ELT
This patch attempts to peek through vectors based on the demanded bits/elt of a particular ISD::EXTRACT_VECTOR_ELT node, allowing us to avoid dependencies on ops that have no impact on the extract.

In particular this helps remove some unnecessary scalar->vector->scalar patterns.

The wasm shift patterns are annoying - @tlively has indicated that the wasm vector shift codegen is to be refactored in the near term and this isn't considered a major issue.

Differential Revision: https://reviews.llvm.org/D65887

llvm-svn: 368276
2019-08-08 10:37:03 +00:00
Amy Huang 0b870b969f Recommit "[MS] Emit S_HEAPALLOCSITE debug info in Selection DAG"
with a fix to clear the SDNode map when SelectionDAG is cleared.

llvm-svn: 368230
2019-08-07 22:49:40 +00:00
Bob Haarman 885fa02da9 Revert r367501 "Create unique, but identically-named ELF sections..."
This reverts commit fbc563e2cb "Create
unique, but identically-named ELF sections for explicitly-sectioned
functions and globals when using -function-sections and
-data-sections."

Reason for revert: sections are created with potentially wrong
attributes.

llvm-svn: 368204
2019-08-07 20:45:23 +00:00
Tim Northover 3c10f346dc GlobalISel: factor common code from translateCall and translateInvoke. NFC.
llvm-svn: 368166
2019-08-07 12:43:53 +00:00
Simon Pilgrim 0eafe011ca [TargetLowering] SimplifyDemandedBits - call SimplifyMultipleUseDemandedBits for ISD::VECTOR_SHUFFLE
In particular this helps the SSE vector shift cvttps2dq+add+shl pattern by avoiding the need for zeros in shuffle style extensions to vXi32 types as we'll be shifting out those bits anyway

llvm-svn: 368155
2019-08-07 11:43:13 +00:00
Kai Luo 02b8056cc1 [MachineCSE][NFC] Use 'profitable' rather than 'beneficial' to name method.
llvm-svn: 368124
2019-08-07 05:40:21 +00:00
Aditya Nandakumar 6bbfde5c48 [GISel]: Fix trivial build breakage
llvm-svn: 368067
2019-08-06 17:53:04 +00:00
Aditya Nandakumar c8ac029d0a [GISel]: Add GISelKnownBits analysis
https://reviews.llvm.org/D65698

This adds a KnownBits analysis pass for GISel. This was done as a
pass (compared to static functions) so that we can add other features
such as caching queries(within a pass and across passes) in the future.
This patch only adds the basic pass boilerplate and implements a lazy,
non-caching known-bits implementation (ported from SelectionDAG). I've
also hooked up the AArch64PreLegalizerCombiner pass to use this - there
should be no compile time regression as the analysis is lazy.

llvm-svn: 368065
2019-08-06 17:18:29 +00:00
Simon Pilgrim dae5ddad9d [TargetLowering] SimplifyMultipleUseDemandedBits - return UNDEF for undemanded ops
If we demand no bits/elts from an Op, just return UNDEF

llvm-svn: 368043
2019-08-06 14:30:42 +00:00
Igor Kudrin f26a70a5e7 Switch LLVM to use 64-bit offsets (2/5)
This updates all libraries and tools in LLVM Core to use 64-bit offsets
which directly or indirectly come to DataExtractor.

Differential Revision: https://reviews.llvm.org/D65638

llvm-svn: 368014
2019-08-06 10:49:40 +00:00
Ulrich Weigand 7b24dd741c [Strict FP] Allow custom operation actions
This patch changes the DAG legalizer to respect the operation actions
set by the target for strict floating-point operations. (Currently, the
legalizer will usually fall back to mutate to the non-strict action
(which is assumed to be legal), and only skip mutation if the strict
operation is marked legal.)

With this patch, whenever a strict operation is marked as Legal or
Custom, it is passed to the target as usual. Only if it is marked as
Expand will the legalizer attempt to mutate to the non-strict operation.
Note that this will now fail if the non-strict operation is itself
marked as Custom -- the target will have to provide a Custom definition
for the strict operation then as well.

Reviewed By: hfinkel

Differential Revision: https://reviews.llvm.org/D65226

llvm-svn: 368012
2019-08-06 10:43:13 +00:00
Cullen Rhodes ced419f4d7 [SelectionDAG] Extend base addressing modes supported by MGATHER/MSCATTER
Summary:
Before this patch MGATHER/MSCATTER is capable of representing all
common addressing modes, but only when illegal types are used.
This patch adds an IndexType property so more representations
are available when using legal types only.

Original modes:
 vector of bases
 base + vector of signed scaled offsets

New modes:
 base + vector of signed unscaled offsets
 base + vector of unsigned scaled offsets
 base + vector of unsigned unscaled offsets

The current behaviour of addressing modes for gather/scatter remains
unchanged.
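Spelled out as plain address arithmetic, the per-lane effective address under the base-plus-offset modes listed above looks roughly like this (a schematic only, assuming a 32-bit index element and 64-bit pointers; Scale is the element size in bytes):

```
#include <cstdint>

// "Signed" modes sign-extend the index, "unsigned" modes zero-extend it,
// and "scaled" modes multiply the index by the element size in bytes.
std::uint64_t signedScaled(std::uint64_t Base, std::int32_t Idx,
                           std::uint64_t Scale) {
  return Base + static_cast<std::uint64_t>(static_cast<std::int64_t>(Idx)) * Scale;
}
std::uint64_t signedUnscaled(std::uint64_t Base, std::int32_t Idx) {
  return Base + static_cast<std::uint64_t>(static_cast<std::int64_t>(Idx));
}
std::uint64_t unsignedScaled(std::uint64_t Base, std::uint32_t Idx,
                             std::uint64_t Scale) {
  return Base + static_cast<std::uint64_t>(Idx) * Scale;
}
std::uint64_t unsignedUnscaled(std::uint64_t Base, std::uint32_t Idx) {
  return Base + Idx;
}
```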

Patch by Paul Walker.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D65636

llvm-svn: 368008
2019-08-06 09:46:13 +00:00
Matt Arsenault f4d3113a5f CodeGen: Migration to using Register
llvm-svn: 367974
2019-08-06 03:59:31 +00:00
Amara Emerson bc1172df14 [GlobalISel][CallLowering] Rename isArgumentHandler() -> isIncomingArgumentHandler()
Previous name and comment incorrectly implied it was just for formal arg handlers,
which is not true.

llvm-svn: 367945
2019-08-05 23:05:28 +00:00
Daniel Sanders eac86ec25f Revert Register/MCRegister: Add conversion operators to avoid use of implicit convert to unsigned. NFC
MSVC finds ambiguity where clang doesn't, and it looks like it's not going to be an easy fix.
Reverting while I figure out how to fix it.

This reverts r367916 (git commit aa15ec3c23)
This reverts r367920 (git commit 5d14efe279)

llvm-svn: 367932
2019-08-05 21:34:45 +00:00
Daniel Sanders 5d14efe279 Fix MSVC error after r367916
It seems that MSVC sees ambiguity between the operator==()'s where clang
doesn't

llvm-svn: 367920
2019-08-05 20:03:43 +00:00
Amara Emerson 85e5e28ab4 [AArch64][GlobalISel] Inline tiny memcpy et al at -O0.
FastISel has done this since the initial arm64 port was upstreamed, so
it seems there are no issues with doing this at -O0 for very small memcpys.

Gives a 0.2% geomean code size improvement on CTMark.

Differential Revision: https://reviews.llvm.org/D65758

llvm-svn: 367919
2019-08-05 20:02:52 +00:00
Matt Arsenault 3922392969 AMDGPU: Correct behavior of f16 buffer loads
Don't assume format loads for f16. Also fixes support for targets
without i16.

llvm-svn: 367879
2019-08-05 15:59:07 +00:00
Nilanjana Basu da60fc813c Changing representation of .cv_def_range directives in Codeview debug info assembly format for better readability
llvm-svn: 367867
2019-08-05 14:16:58 +00:00
Nilanjana Basu b5e4d7de17 Revert "Changing representation of .cv_def_range directives in Codeview debug info assembly format for better readability"
This reverts commit a885afa9fa.

llvm-svn: 367861
2019-08-05 13:55:21 +00:00
Nilanjana Basu a885afa9fa Changing representation of .cv_def_range directives in Codeview debug info assembly format for better readability
llvm-svn: 367850
2019-08-05 13:11:51 +00:00
Sanjay Patel eaf13044bd [DAGCombiner][x86] prevent infinite loop from truncate/extend transforms
The test case is based on the example from the post-commit thread for:
https://reviews.llvm.org/rGc9171bd0a955

This replaces the x86-specific simple-type check from:
rL367766
with a check in the DAGCombiner. Adding the check isn't
strictly necessary after the fix from:
rL367768
...but it seems likely that we're heading for trouble if
we are creating weird types in this transform.

I combined the earlier legality check into the initial
clause to simplify the code.

So we should only try the trunc/sext transform at the
earliest combine stage, but we limit the transform to
simple types anyway because the TLI hook is probably
too lax about what it considers a free truncate.

llvm-svn: 367834
2019-08-05 11:27:07 +00:00
Graham Hunter 208d63ea90 [MVT][SVE] Map between scalable vector IR Type and VTs
Adds a two way mapping between the scalable vector IR type and
corresponding SelectionDAG ValueTypes.

Reviewers: craig.topper, jeroen.dobbelaere, fhahn, rengolin, greened, rovka

Reviewed By: greened

Differential Revision: https://reviews.llvm.org/D47770

llvm-svn: 367832
2019-08-05 11:18:19 +00:00
Guillaume Chatelet c97a3d15d2 [LLVM][Alignment] Introduce Alignment Type
Summary:
This is patch is part of a serie to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet, jfb, jakehehrlich

Reviewed By: jfb

Subscribers: wuzish, jholewinski, arsenm, dschuff, nemanjai, jvesely, nhaehnle, javed.absar, sbc100, jgravelle-google, hiraditya, aheejin, kbarton, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, s.egerton, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65514

llvm-svn: 367828
2019-08-05 11:02:05 +00:00
Guillaume Chatelet 6c5fb61f8b [LLVM][Alignment] Introduce Alignment In CallingConv
Summary:
This is patch is part of a serie to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Subscribers: hiraditya, llvm-commits, courbet, jfb

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65659

llvm-svn: 367822
2019-08-05 09:49:09 +00:00
Oliver Stannard 8ed8353fc4 Reland: Fix and test inter-procedural register allocation for ARM
Add an explicit construction of the ArrayRef; gcc 5 and earlier don't
seem to select the ArrayRef constructor which takes a C array when the
construction is implicit.

Original commit message:

- Avoid a crash when IPRA calls ARMFrameLowering::determineCalleeSaves
  with a null RegScavenger. Simply not updating the register scavenger
  is fine because IPRA only cares about the SavedRegs vector, the actual
  code of the function has already been generated at this point.
- Add a new hook to TargetRegisterInfo to get the set of registers which
  can be clobbered inside a call, even if the compiler can see both
  sides, by linker-generated code.

Differential revision: https://reviews.llvm.org/D64908

llvm-svn: 367819
2019-08-05 09:04:10 +00:00
Guillaume Chatelet 65e4b47aad [LLVM][Alignment] Introduce Alignment Type in DataLayout
Summary:
This is patch is part of a serie to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet, jfb, jakehehrlich

Subscribers: hiraditya, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65521

Make getFunctionPtrAlign() return MaybeAlign

llvm-svn: 367817
2019-08-05 09:00:43 +00:00
Fangrui Song d9b948b6eb Rename F_{None,Text,Append} to OF_{None,Text,Append}. NFC
F_{None,Text,Append} are kept for compatibility since r334221.

llvm-svn: 367800
2019-08-05 05:43:48 +00:00
Craig Topper 5a4989e2ac [TargetLowering][X86] Teach SimplifyDemandedVectorElts to replace the base vector of INSERT_SUBVECTOR with undef if none of the elements are demanded even if the node has other users.
Summary:
The SimplifyDemandedVectorElts function can replace a node with undef
when no elements are demanded, but due to how it interacts with
TargetLoweringOpts, it can only do this when the node has
no other users.

Remove a now unneeded DAG combine from the X86 backend.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65713

llvm-svn: 367788
2019-08-04 17:30:41 +00:00
Craig Topper 76f0f2e0f0 [SelectionDAG] Add node creation debug message to getMemIntrinsicNode.
llvm-svn: 367771
2019-08-04 02:32:06 +00:00
Craig Topper 2edeb8a11a [DAGCombiner] Prevent the combine added in r367710 from creating illegal types after type legalization.
This is further fix for PR42880.

Sanjay already disabled the X86 TLI hook for non-simple types,
but we should really call isTypeLegal here if we're after type
legalization.

llvm-svn: 367768
2019-08-03 23:09:13 +00:00
Bill Wendling 41a2847a9a Emit diagnostic if an inline asm constraint requires an immediate
Summary:
An inline asm call can result in an immediate after inlining. Therefore emit a
diagnostic here if the constraint requires an immediate but one isn't supplied.

Reviewers: joerg, mgorny, efriedma, rsmith

Reviewed By: joerg

Subscribers: asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, s.egerton, MaskRay, jyknight, dylanmckay, javed.absar, fedor.sergeev, jrtc27, Jim, krytarowski, eraman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D60942

llvm-svn: 367750
2019-08-03 05:52:47 +00:00
Amara Emerson c835164a47 Re-commit "[GlobalISel] Add legalization support for non-power-2 loads and stores""
This is an old commit that exposed a bug in the GISel importer, which caused
non-truncating stores to be selected for truncating store patterns. Now that
that's been fixed in r367737, this can go back in.

llvm-svn: 367739
2019-08-02 23:44:24 +00:00
Craig Topper b1cfcd1a56 [ScalarizeMaskedMemIntrin] Bitcast the mask to the scalar domain and use scalar bit tests for the branches for expandload/compressstore.
Same as what was done for gather/scatter/load/store in r367489.
Expandload/compressstore were delayed due to lack of constant
masking handling that has since been fixed.

llvm-svn: 367738
2019-08-02 23:43:53 +00:00
Douglas Yung 42618b270d Revert Fix and test inter-procedural register allocation for ARM
This reverts r367669 (git commit f6b00c279a)

This was breaking a build bot http://lab.llvm.org:8011/builders/netbsd-amd64/builds/21233

llvm-svn: 367731
2019-08-02 22:11:49 +00:00
Simon Pilgrim 794f7591ec [TargetLowering] SimplifyMultipleUseDemandedBits - don't assume INSERT_VECTOR_ELT value type is simple.
Noticed by inspection - this was copied from the X86 target equivalent where we can assume its legal/simple.

llvm-svn: 367721
2019-08-02 21:07:07 +00:00
Daniel Sanders e7694f34ab Use MCRegister in MCRegisterInfo's interfaces
Summary:
As part of this, define DenseMapInfo for MCRegister (and Register while I'm at it)

Depends on D65599

Reviewers: arsenm

Subscribers: MatzeB, qcolombet, jvesely, wdng, nhaehnle, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65605

llvm-svn: 367719
2019-08-02 20:23:00 +00:00
Philip Reames 511be2a158 [Statepoints] Fix overalignment of loads in no-realign-stack functions
This really should have been part of 366765.  For some reason, I forgot to handle the corresponding load side, and the readable test cases (using deopt vs statepoints) turned out to be overly reduced.  Oops.

As seen in the test change, the problem was that we were using a load with alignment expectations rather than the unaligned variant when the stack alignment was less than the preferred type alignment.

llvm-svn: 367718
2019-08-02 20:17:37 +00:00
Craig Topper de9b1d7912 [ScalarizeMaskedMemIntrin] Add constant mask support to expandload and compressstore scalarization
This adds support for generating all the loads or stores for a constant mask into a single basic block with no conditionals.

Differential Revision: https://reviews.llvm.org/D65613

llvm-svn: 367715
2019-08-02 20:04:34 +00:00
Sanjay Patel 68264558f9 [DAGCombiner] try to convert opposing shifts to casts
This reverses a questionable IR canonicalization when a truncate
is free:

sra (add (shl X, N1C), AddC), N1C -->
sext (add (trunc X to (width - N1C)), AddC')

https://rise4fun.com/Alive/slRC

More details in PR42644:
https://bugs.llvm.org/show_bug.cgi?id=42644

I limited this to pre-legalization for code simplicity because that
should be enough to reverse the IR patterns. I don't have any
evidence (no regression test diffs) that we need to try this later.
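A brute-force check of the underlying identity on i8 with a shift amount of 3 (a standalone sanity check, not the DAGCombiner code; it assumes the usual two's-complement arithmetic right shift):

```
#include <cassert>
#include <cstdint>

// Checks: sra (add (shl X, 3), AddC), 3  ==
//         sext_i5 (add (trunc_i5 X), (trunc_i5 (AddC >> 3)))  for all i8 X, AddC.
static std::int8_t sextFrom5(std::uint8_t V) {
  // Sign-extend the low 5 bits of V to 8 bits.
  return static_cast<std::int8_t>(static_cast<std::uint8_t>(V << 3)) >> 3;
}

int main() {
  const unsigned C = 3;
  for (unsigned X = 0; X < 256; ++X)
    for (unsigned AddC = 0; AddC < 256; ++AddC) {
      std::int8_t Lhs = static_cast<std::int8_t>(
                            static_cast<std::uint8_t>((X << C) + AddC)) >> C;
      std::int8_t Rhs = sextFrom5(static_cast<std::uint8_t>(X + (AddC >> C)));
      assert(Lhs == Rhs);
    }
  return 0;
}
```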

Differential Revision: https://reviews.llvm.org/D65607

llvm-svn: 367710
2019-08-02 19:33:46 +00:00
Eric Christopher 5fb56b1966 Temporarily Revert "Changing representation of cv_def_range directives in Codeview debug info assembly format for better readability"
This is breaking bots and the author asked me to revert.

This reverts commit 367704.

llvm-svn: 367707
2019-08-02 19:10:37 +00:00
Nilanjana Basu 1c67521591 Changing representation of cv_def_range directives in Codeview debug info assembly format for better readability
llvm-svn: 367704
2019-08-02 18:44:39 +00:00
Peter Collingbourne 4dcf8800e2 CodeGen: Don't follow aliases when extracting type info.
This fixes a crash in the case where the type info object is an alias
pointing to a non-zero offset within a global or is otherwise unanalyzable
by the stripPointerCasts() function. Looking through the alias is not the
right thing to do anyway for similar reasons as D65118.

Differential Revision: https://reviews.llvm.org/D65314

llvm-svn: 367696
2019-08-02 17:43:45 +00:00
Tim Northover 522fb7eedc GlobalISel: support swiftself attribute
llvm-svn: 367683
2019-08-02 14:09:49 +00:00
Oliver Stannard 4b7239ebac [IPRA][ARM] Disable no-CSR optimisation for ARM
This optimisation isn't generally profitable for ARM, because we can
save/restore many registers in the prologue and epilogue using the PUSH
and POP instructions, but mostly use individual LDR/STR instructions for
other spills.

Differential revision: https://reviews.llvm.org/D64910

llvm-svn: 367670
2019-08-02 10:23:17 +00:00
Oliver Stannard f6b00c279a Fix and test inter-procedural register allocation for ARM
- Avoid a crash when IPRA calls ARMFrameLowering::determineCalleeSaves
  with a null RegScavenger. Simply not updating the register scavenger
  is fine because IPRA only cares about the SavedRegs vector, the actual
  code of the function has already been generated at this point.
- Add a new hook to TargetRegisterInfo to get the set of registers which
  can be clobbered inside a call, even if the compiler can see both
  sides, by linker-generated code.

Differential revision: https://reviews.llvm.org/D64908

llvm-svn: 367669
2019-08-02 10:23:05 +00:00
Kang Zhang 038dd43782 [NFC][CodeGen] Modify the type element of TailCalls to simplify the dupRetToEnableTailCallOpts()
Summary:
The old code can be simplified by defining the element type of TailCalls as `BasicBlock` rather than `CallInst`. Also, use a range-based for loop instead of the plain for loop.

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D64905

llvm-svn: 367644
2019-08-02 03:09:07 +00:00
Eric Christopher 5a00b0772a Temporarily revert "Changes to improve CodeView debug info type record inline comments"
due to a sanitizer failure.

This reverts commit 367623.

llvm-svn: 367640
2019-08-02 01:05:47 +00:00
Daniel Sanders 2bea69bf65 Finish moving TargetRegisterInfo::isVirtualRegister() and friends to llvm::Register as started by r367614. NFC
llvm-svn: 367633
2019-08-01 23:27:28 +00:00
Nilanjana Basu ac7e5788ca Changes to improve CodeView debug info type record inline comments
Signed-off-by: Nilanjana Basu <nilanjana.basu87@gmail.com>
llvm-svn: 367623
2019-08-01 22:05:14 +00:00
Matt Arsenault d9d30a408e GlobalISel: Lower scalarizing unmerge of a vector to shifts
AMDGPU sometimes has legal s16 and <2 x s16> operations, but all
registers are really 32-bit. An unmerge destination really should be
widened to a 32-bit register. If widening a scalarized piece of a vector to a
target size that matches the vector size, bitcast to an integer and
extract the relevant bits with shifts.
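In scalar terms the lowering amounts to the following (a sketch assuming a little-endian lane order):

```
#include <cassert>
#include <cstdint>

// Unmerge a <2 x s16> (viewed as one s32 after the bitcast) into two s16
// pieces using shifts, as the new lowering does.
void unmerge2x16(std::uint32_t Bits, std::uint16_t &Lo, std::uint16_t &Hi) {
  Lo = static_cast<std::uint16_t>(Bits);       // element 0: low 16 bits
  Hi = static_cast<std::uint16_t>(Bits >> 16); // element 1: shifted down
}

int main() {
  std::uint16_t Lo = 0, Hi = 0;
  unmerge2x16(0xDEADBEEF, Lo, Hi);
  assert(Lo == 0xBEEF && Hi == 0xDEAD);
  return 0;
}
```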

I'm not sure if this is the right place for this. This could arguably
be part of widenScalar for the result. I also have a growing feeling
that we're missing a bitcast legalize action.

llvm-svn: 367604
2019-08-01 19:10:05 +00:00
Craig Topper a9ed5436bd [X86] In decomposeMulByConstant, legalize the VT before querying whether the multiply is legal
If a type is larger than a legal type and needs to be split, we would previously allow the multiply to be decomposed even if the split multiply is legal. Since the shift + add/sub code would also need to be split, it's not any better to decompose it.

This patch figures out what type the mul will eventually be legalized to and then uses that type for the query. I tried just returning false for illegal types and letting them get handled after type legalization, but then we can't recognize an i64 constant splat on 32-bit targets since it will be destroyed by type legalization. We could special-case vectors of i64 to avoid that...

Differential Revision: https://reviews.llvm.org/D65533

llvm-svn: 367601
2019-08-01 18:49:07 +00:00
Matt Arsenault e56a2ad85e CodeGen: Allow virtual registers in bundles
The note in the documentation suggests this restriction is a compile
time optimization for architectures that make heavy use of
bundling. Allowing virtual registers in a bundle is useful for some
(non-R600) AMDGPU use cases and are infrequent enough to matter.

A more common AMDGPU use case has already been using virtual registers
in bundles since r333691, although never calling finalizeBundle on
them and manually creating the use/def list on the BUNDLE
instruction. This is also relatively infrequent, and only happens for
consecutive sequences of some load/store types.

llvm-svn: 367597
2019-08-01 18:41:28 +00:00
Matt Arsenault 5faa533e47 GlobalISel: Fix widenScalar for G_MERGE_VALUES to pointer
AMDGPU testcase isn't broken now, but will be in a future patch
without this.

llvm-svn: 367591
2019-08-01 18:13:16 +00:00
Simon Pilgrim 1d183b407a [TargetLowering] SimplifyMultipleUseDemandedBits - Add ISD::INSERT_VECTOR_ELT handling
Allow us to peek through vector insertions to avoid dependencies on entire insertion chains.

llvm-svn: 367588
2019-08-01 17:46:44 +00:00
Craig Topper 388df2ea19 [SelectionDAG] Use APInt::isSubsetOf/intersects to simplify some code.
Also use KnownBits::isNegative/isNonNegative to further simplify.

llvm-svn: 367518
2019-08-01 06:06:21 +00:00
Matt Arsenault 7bedceb5b2 GlobalISel: moreElementsVector for G_LOAD/G_STORE
AMDGPU change and test is a placeholder until a future patch with
complete handling.

llvm-svn: 367503
2019-08-01 01:44:22 +00:00
Peter Collingbourne fbc563e2cb Create unique, but identically-named ELF sections for explicitly-sectioned functions and globals when using -function-sections and -data-sections.
This allows functions and globals to be reordered later in the linking phase
(using the -symbol-ordering-file) even though reordering will be limited to
the scope of the explicit section.

Patch by Rahman Lavaee!

Differential Revision: https://reviews.llvm.org/D65478

llvm-svn: 367501
2019-08-01 01:38:53 +00:00
Amy Huang 153f20057c Revert "[MS] Emit S_HEAPALLOCSITE debug info in Selection DAG" and
partial fix.
Causes windows buildbot errors.

This reverts commit 6e65c34523963094acd0d6c94a5f5c64b32fe6aa and
53da7ca943.

llvm-svn: 367496
2019-07-31 23:59:31 +00:00
Craig Topper b70026c43c [ScalarizeMaskedMemIntrin] Bitcast the mask to the scalar domain and use scalar bit tests for the branches.
X86 at least is able to use movmsk or kmov to move the mask to the scalar
domain. Then we can just use test instructions to test individual bits.

This is more efficient than extracting each mask element
individually.

I special cased v1i1 to use the previous behavior. This avoids
poor type legalization of bitcast of v1i1 to i1.

I've skipped expandload/compressstore as I think we need to
handle constant masks for those better first.

Many tests end up with duplicate test instructions due to tail
duplication in the branch folding pass. But the same thing
happens when constructing similar code in C. So it's not unique
to the scalarization.

Not sure if this lowering code will also be good for other targets,
but we're only testing X86 today.
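In source-level terms, the scalarized form looks roughly like this once the mask lives in a scalar register (a plain C++ sketch of an 8-lane masked load, not the pass's actual IR output):

```
#include <array>
#include <cstdint>

// Each lane is guarded by a cheap scalar bit test on the i8 mask (obtained
// e.g. via movmsk/kmov) instead of an expensive per-element extract.
std::array<int, 8> maskedLoad8(const int *Ptr, std::uint8_t MaskBits,
                               std::array<int, 8> PassThru) {
  std::array<int, 8> Result = PassThru;
  for (unsigned Lane = 0; Lane < 8; ++Lane)
    if (MaskBits & (1u << Lane)) // scalar bit test per lane
      Result[Lane] = Ptr[Lane];
  return Result;
}
```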

Differential Revision: https://reviews.llvm.org/D65319

llvm-svn: 367489
2019-07-31 22:58:15 +00:00
Michael Berg 005d705d43 Migrate some more fadd and fsub cases away from UnsafeFPMath control to utilize NoSignedZerosFPMath options control
Summary: Honoring no signed zeros is also available as a user control through clang, separately and regardless of the fast-math or UnsafeFPMath context; DAG guards should reflect this context.

Reviewers: spatel, arsenm, hfinkel, wristow, craig.topper

Reviewed By: spatel

Subscribers: rampitec, foad, nhaehnle, wuzish, nemanjai, jvesely, wdng, javed.absar, MaskRay, jsji

Differential Revision: https://reviews.llvm.org/D65170

llvm-svn: 367486
2019-07-31 21:57:28 +00:00
Amy Huang 27a73dd02c Fix to r367374 "[MS] Emit S_HEAPALLOCSITE debug info in Selection DAG"
after windows buildbot failure.

Added a check that the MachineInstr exists and is a call before trying
to add symbols around it.

llvm-svn: 367483
2019-07-31 21:03:38 +00:00
Eric Christopher 36fb93982f Fix unused variable warning for non-assert builds.
llvm-svn: 367482
2019-07-31 21:02:03 +00:00
Mark Lacey 7b8d3eb9e2 [GISel] Pass MD_callees metadata down in call lowering.
Summary:
This will make it possible to improve IPRA by taking into account
register usage in indirect calls.

NFC yet; this is just laying the groundwork to start building
up patches to take advantage of the information for improved register
allocation.

Reviewers: aditya_nandakumar, volkan, qcolombet, arsenm, rovka, aemerson, paquette

Subscribers: sdardis, wdng, javed.absar, hiraditya, jrtc27, atanasyan, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65488

llvm-svn: 367476
2019-07-31 20:34:02 +00:00
Peter Collingbourne 33773d5cfc SelectionDAG, MI, AArch64: Widen target flags fields/arguments from unsigned char to unsigned.
This makes the field wider than MachineOperand::SubReg_TargetFlags so that
we don't end up silently truncating any higher bits. We should still catch
any bits truncated from the MachineOperand field as a consequence of the
assertion in MachineOperand::setTargetFlags().

Differential Revision: https://reviews.llvm.org/D65465

llvm-svn: 367474
2019-07-31 20:14:09 +00:00
Wei Mi f49c107f06 [DAGCombine] Limit the number of times for the same store and root nodes
to bail out in store merging dependence check.

We ran into a case where the dependence check in store merging bails out many times
for the same store and root nodes in a huge basic block. That increases compile
time by almost 100x. The patch adds a map to track how many times the bail-out
happens for the same store and root, and if it is over a limit, stops
considering the store with the same root as a merging candidate.
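A generic illustration of that throttling scheme (keys are opaque IDs here rather than SDNodes, and the limit is arbitrary):

```
#include <cstdint>
#include <map>
#include <utility>

class BailoutThrottle {
  std::map<std::pair<std::uintptr_t, std::uintptr_t>, unsigned> Count;
  unsigned Limit;

public:
  explicit BailoutThrottle(unsigned Limit) : Limit(Limit) {}

  // Record one dependence-check bail-out for (Store, Root); returns false
  // once this pair has bailed out too often and should stop being considered.
  bool recordAndCheck(std::uintptr_t Store, std::uintptr_t Root) {
    return ++Count[{Store, Root}] <= Limit;
  }
};
```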

Differential Revision: https://reviews.llvm.org/D65174

llvm-svn: 367472
2019-07-31 19:59:24 +00:00
Djordje Todorovic b9973f87c6 Reland "[DwarfDebug] Dump call site debug info"
The build failure found after the rL365467 has been
resolved.

Differential Revision: https://reviews.llvm.org/D60716

llvm-svn: 367446
2019-07-31 16:51:28 +00:00
Amy Huang 53da7ca943 [MS] Emit S_HEAPALLOCSITE debug info in SelectionDAG
Summary: This emits labels around heapallocsite calls in SelectionDAG.

Reviewers: rnk

Subscribers: MatzeB, aprantl, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D61105

llvm-svn: 367374
2019-07-31 00:16:13 +00:00
Matt Arsenault 9cf980d4a7 GlobalISel: Add G_ATOMICRMW_{FADD|FSUB}
llvm-svn: 367369
2019-07-30 23:56:30 +00:00
Wei Mi 888efda280 [DAGCombiner] Add an option to control whether or not to enable store merging.
Add an option to control whether or not to enable store merging in dag combiner
so we can work around some bugs more easily.

Differential Revision: https://reviews.llvm.org/D65482

llvm-svn: 367365
2019-07-30 23:14:56 +00:00
Austin Kerbow c99f62e313 [AMDGPU/GlobalISel] Add llvm.amdgcn.fdiv.fast legalization.
Reviewers: arsenm

Reviewed By: arsenm

Subscribers: volkan, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64966

llvm-svn: 367344
2019-07-30 18:49:16 +00:00
Sean Fertile 39f3503814 Address post commit review comments on revision 366727.
Addresses a number of comments made on D64652 after committing:

- Reorders function decls in the TargetLoweringObjectFileXCOFF class.
- Fix comment in MCSectionXCOFF to include description of external reference
  csects.
- Convert several llvm_unreachables to report_fatal_error
- Convert several dyn_casts to casts as they are expected not to fail.
- Avoid copying DataLayout object.

llvm-svn: 367324
2019-07-30 15:37:01 +00:00
Simon Pilgrim f8a7e9de06 [DAGCombine] narrowInsertExtractVectorBinOp - early out for binops that change value type. NFCI.
This is implicit in the value type checks in getSubVectorSrc - this just makes it upfront and obvious.

llvm-svn: 367220
2019-07-29 11:34:45 +00:00
Simon Pilgrim 76f2f04d9d [DAGCombine] narrowInsertExtractVectorBinOp - early out for illegal op. NFCI.
If the subvector binop is illegal then early-out and avoid the subvector searches.

llvm-svn: 367181
2019-07-27 19:42:58 +00:00
Simon Pilgrim 603f94aa2a [TargetLowering] SimplifyMultipleUseDemandedBits - add BITCAST pass through support (Reapplied)
This allows us to peek through BITCASTs, attempt to simplify the source operand, and then bitcast back.

This reapplies rL367091 which was reverted at rL367118 - we were inconsistently peeking through the bitcasts to the source value.

Fixes PR42777

llvm-svn: 367174
2019-07-27 14:11:59 +00:00
Simon Pilgrim 8a52671782 [SelectionDAG] Check for any recursion depth greater than or equal to limit instead of just equal the limit.
If anything called the recursive isKnownNeverNaN/computeKnownBits/ComputeNumSignBits/SimplifyDemandedBits/SimplifyMultipleUseDemandedBits with an incorrect depth then we could continue to recurse if we'd already exceeded the depth limit.

This replaces the limit check (Depth == 6) with a (Depth >= 6) to make sure that we don't circumvent it. 

This causes a couple of regressions as a mixture of calls (SimplifyMultipleUseDemandedBits + combineX86ShufflesRecursively) were calling with depths that were already over the limit. I've fixed SimplifyMultipleUseDemandedBits to not do this. combineX86ShufflesRecursively is trickier as we get a lot of regressions if we reduce its own limit from 8 to 6 (it also starts at Depth == 1 instead of Depth == 0 like the others....) - I'll see what I can do in future patches.

llvm-svn: 367171
2019-07-27 12:48:46 +00:00
Simon Pilgrim 3ff6126487 [TargetLowering] Add depth limit to SimplifyMultipleUseDemandedBits
We're getting reports of massive compile time increases because SimplifyMultipleUseDemandedBits was losing track of the depth and failing to early-out. No repro yet, but consider this a pre-emptive commit.

llvm-svn: 367169
2019-07-27 12:23:36 +00:00
Amara Emerson 7bc4fad0fb [AArch64][GlobalISel] Implement narrowing of G_SEXT.
We need this to narrow a sext to s128.

Differential Revision: https://reviews.llvm.org/D65357

llvm-svn: 367164
2019-07-26 23:46:38 +00:00
Sean Fertile 9df6177d38 [PowerPC][AIX]Add lowering of MCSymbol MachineOperand.
Adds machine operand lowering for MCSymbolSDNodes to the PowerPC
backend. This is needed to produce call instructions in assembly for AIX
because the callee operand is a MCSymbolSDNode. The test is XFAIL'ed for
asserts due to a (valid) assertion in PEI that the AIX ABI isn't supported yet.

Differential Revision: https://reviews.llvm.org/D63738

llvm-svn: 367133
2019-07-26 17:25:27 +00:00
Nico Weber 13f337c4cb Revert r367091, it caused PR42777.
llvm-svn: 367118
2019-07-26 14:58:42 +00:00
Simon Pilgrim a424a1f351 [SelectionDAG] GetDemandedBits - update SIGN_EXTEND_INREG op to just call SimplifyMultipleUseDemandedBits.
llvm-svn: 367098
2019-07-26 10:03:07 +00:00
Simon Pilgrim 9758407bf1 [TargetLowering] SimplifyMultipleUseDemandedBits - add SIGN_EXTEND_INREG support.
llvm-svn: 367096
2019-07-26 09:41:08 +00:00
Simon Pilgrim d0164fc525 [SelectionDAG] GetDemandedBits - update OR/XOR ops to just call SimplifyMultipleUseDemandedBits.
Eventually all of these will be moved over, but we create nodes in GetDemandedBits recursion at the moment which causes regressions when we try to remove them all.

llvm-svn: 367092
2019-07-26 09:13:29 +00:00
Simon Pilgrim b32ceb79b0 [TargetLowering] SimplifyMultipleUseDemandedBits - add BITCAST pass through support.
This allows us to peek through BITCASTs and attempt to simplify the source operand, and then bitcast back.

llvm-svn: 367091
2019-07-26 08:38:39 +00:00
Kang Zhang 4e794a8bae Fix some cases of error: detected memory leaks
llvm-svn: 367083
2019-07-26 03:25:58 +00:00
Kang Zhang 5c61015455 [PowerPC] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:
In the `block-placement` pass, some patterns of unconditional branches are created for which we can do the simple early return.
But the `early-ret` pass runs before `block-placement`, so we don't want to run it again.
This patch does the simple early return to optimize the blocks at the end of `block-placement`.

Below is an example
```
BB:                   | BB:
   XOR 3, 3, 4        |   XOR 3, 3, 4
   B TBB              |   B ChainBB
...                   | ...
ChainBB:              | ChainBB:
   B TBB              |   ADD 3, 3, 4
...                   |   BLR
TBB:                  |
   ADD 3, 3, 4        |
   BLR                |
```

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D63972

llvm-svn: 367080
2019-07-26 01:58:53 +00:00
Francis Visoiu Mistrih 2d8fdcae96 Reland: [Remarks] Add support for serializing metadata for every remark streamer
This allows every serializer format to implement metaSerializer() and
return the corresponding meta serializer.

Original llvm-svn: 366946
Reverted llvm-svn: 367004

This fixes the unit tests on Windows bots.

llvm-svn: 367078
2019-07-26 01:33:30 +00:00
Francis Visoiu Mistrih 0503add6da [CodeGen] Don't resolve the stack protector frame accesses until PEI
Currently, stack protector loads and stores are resolved during
LocalStackSlotAllocation (if the pass needs to run). When this is the
case, the base register assigned to the frame access is going to be one
of the vregs created during LocalStackSlotAllocation. This means that we
are keeping a pointer to the stack protector slot, and we're using this
pointer to load and store to it.

In case register pressure goes up, we may end up spilling this pointer
to the stack, which can be a security concern.

Instead, leave it to PEI to resolve the frame accesses. In order to do
that, we make all stack protector accesses go through frame index
operands, then PEI will resolve this using an offset from sp/fp/bp.

Differential Revision: https://reviews.llvm.org/D64759

llvm-svn: 367068
2019-07-25 22:23:48 +00:00
Simon Pilgrim 55fd57ba95 Revert rL366946 : [Remarks] Add support for serializing metadata for every remark streamer
This allows every serializer format to implement metaSerializer() and
return the corresponding meta serializer.
........
Fix windows build bots
http://lab.llvm.org:8011/builders/llvm-clang-x86_64-win-fast
http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-windows10pro-fast
http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win

llvm-svn: 367004
2019-07-25 10:20:39 +00:00
Roman Lebedev 017e272c3a [Codegen] (X & (C l>>/<< Y)) ==/!= 0 --> ((X <</l>> Y) & C) ==/!= 0 fold
Summary:
This was originally reported in D62818.
https://rise4fun.com/Alive/oPH

InstCombine does the opposite fold, in hope that `C l>>/<< Y` expression
will be hoisted out of a loop if `Y` is invariant and `X` is not.
But as it is seen from the diffs here, if it didn't get hoisted,
the produced assembly is almost universally worse.

Much like with my recent "hoist add/sub by/from const" patches,
we should get an almost universal win if we hoist the constant:
there is almost always an "and/test by imm" instruction,
but "shift of imm" not so much, so we may avoid having to
materialize the immediate, and thus need one less register.
And since we now shift not by a constant but by something else,
the live range of that something else may shrink.

Special care needs to be taken not to disturb the x86 `BT` / hexagon `tstbit`
instruction patterns, and to not get into an endless combine loop.
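
A minimal standalone C++ sketch (illustrative only, not the DAGCombiner change itself) of the scalar equivalence behind the fold, assuming an in-range shift amount Y; the `C << Y` / `X l>> Y` variant is analogous:

```
// Both sides test exactly the same bits of X against C when 0 <= Y < 32.
#include <cassert>
#include <cstdint>

static bool beforeFold(uint32_t X, uint32_t C, uint32_t Y) {
  return (X & (C >> Y)) == 0;          // X & (C l>> Y)  ==  0
}
static bool afterFold(uint32_t X, uint32_t C, uint32_t Y) {
  return ((X << Y) & C) == 0;          // (X << Y) & C  ==  0
}

int main() {
  const uint32_t Vals[] = {0u, 1u, 0x00F0u, 0x80000001u, 0xDEADBEEFu, 0xFFFFFFFFu};
  for (uint32_t X : Vals)
    for (uint32_t C : Vals)
      for (uint32_t Y = 0; Y < 32; ++Y)
        assert(beforeFold(X, C, Y) == afterFold(X, C, Y));
}
```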

Reviewers: RKSimon, efriedma, t.p.northover, craig.topper, spatel, arsenm

Reviewed By: spatel

Subscribers: hiraditya, MaskRay, wuzish, xbolva00, nikic, nemanjai, jvesely, wdng, nhaehnle, javed.absar, tpr, kristof.beyls, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D62871

llvm-svn: 366955
2019-07-24 22:57:22 +00:00
Amara Emerson 13af1ed8e3 [GlobalISel] Support for inlining memcpy, memset and memmove calls.
This introduces a new family of combiner helper routines that re-use the
target specific cost model from SelectionDAG, and generate inline implementations
of the memcpy family of intrinsics.

The combines are only enabled at optimization levels higher than -O0, and give
very substantial performance improvements.

Differential Revision: https://reviews.llvm.org/D65167

llvm-svn: 366951
2019-07-24 22:17:31 +00:00
Francis Visoiu Mistrih 62388e3846 [Remarks] Add support for serializing metadata for every remark streamer
This allows every serializer format to implement metaSerializer() and
return the corresponding meta serializer.

llvm-svn: 366946
2019-07-24 21:29:44 +00:00
Amara Emerson a1997ce2e5 [AArch64][GlobalISel] Fix a crash during s128 G_ICMP legalization due to r366317.
r366317 added a legalization for s128 G_ICMP narrow scalar which tried to hard
code the result type of the new legalized G_SELECT. Change this to instead use
the type of the original G_ICMP result and allow the target to legalize it if necessary
later.

llvm-svn: 366943
2019-07-24 20:46:42 +00:00
Francis Visoiu Mistrih ff4b515a77 [Remarks][NFC] Rename remarks::Serializer to remarks::RemarkSerializer
llvm-svn: 366939
2019-07-24 19:47:57 +00:00
Simon Pilgrim 2bf871be4c Fix signed/unsigned comparison warning. NFCI.
llvm-svn: 366935
2019-07-24 17:44:22 +00:00
Simon Pilgrim 7d318b2bb1 [DAGCombine] matchBinOpReduction - add partial reduction matching
This patch adds support for recognizing cases where a larger vector type is being used to reduce just the elements in the lower subvector:

e.g. <8 x i32> reduction pattern in a <16 x i32> vector:

<4,5,6,7,u,u,u,u,u,u,u,u,u,u,u,u>
<2,3,u,u,u,u,u,u,u,u,u,u,u,u,u,u>
<1,u,u,u,u,u,u,u,u,u,u,u,u,u,u,u>

matchBinOpReduction returns the lower extracted subvector in such cases, assuming isExtractSubvectorCheap accepts the extraction.
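
A plain C++ analogue (a sketch assuming an integer sum reduction, not the matcher code) of what the three masks above compute: a log2 reduction over only the low 8 lanes of a 16-lane vector:

```
#include <array>
#include <cassert>
#include <numeric>

// Each step models add(vec, shuffle(vec, mask)) with the masks
// <4,5,6,7,..>, <2,3,..>, <1,..>; the upper 8 lanes are never read.
static int reduceLow8(std::array<int, 16> V) {
  for (int Half = 4; Half >= 1; Half /= 2)
    for (int I = 0; I < Half; ++I)
      V[I] += V[I + Half];
  return V[0];                         // extract lane 0 as the reduced sum
}

int main() {
  std::array<int, 16> V;
  std::iota(V.begin(), V.end(), 1);    // 1, 2, ..., 16
  assert(reduceLow8(V) == 36);         // 1 + 2 + ... + 8
}
```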

I've only enabled it for X86 reduction sums so far. I intend to enable it for the bitop/minmax cases in future patches, and eventually I think it's worth turning it on all the time. This is mainly just a case of ensuring calls to matchBinOpReduction don't make assumptions about the vector width based on the original vector extraction.

Fixes the x86 partial reduction sum cases in PR33758 and PR42023.

Differential Revision: https://reviews.llvm.org/D65047

llvm-svn: 366933
2019-07-24 17:29:56 +00:00
Simon Pilgrim 3f01c7197f [SelectionDAG] makeEquivalentMemoryOrdering - early out for equal chains (PR42727)
If we are already using the same chain for the old/new memory ops then just return.

Fixes PR42727 which had getLoad() reusing an existing node.

llvm-svn: 366922
2019-07-24 16:53:14 +00:00
Sanjay Patel 10dad95a75 [SDAG] convert (sub x, 1) to (add x, -1) in ctpop expansion; NFC
We canonicalize to the add form, so create that directly for efficiency.

llvm-svn: 366914
2019-07-24 15:43:50 +00:00
Simon Pilgrim 0e8359aec1 [TargetLowering] SimplifyMultipleUseDemandedBits - add VECTOR_SHUFFLE support.
If all the demanded elts are from one operand and are inline, then we can use the operand directly.

The changes are mainly from SSE41 targets which has blendvpd but not cmpgtq, allowing the v2i64 comparison to be simplified as we only need the signbit from alternate v4i32 elements.

llvm-svn: 366817
2019-07-23 15:35:55 +00:00
Simon Pilgrim 743d45ee25 [TargetLowering] Add SimplifyMultipleUseDemandedBits
This patch introduces the DAG version of SimplifyMultipleUseDemandedBits, which attempts to peek through ops (mainly and/or/xor so far) that don't contribute to the demandedbits/elts of a node - which means we can do this even in cases where we have multiple uses of an op, which normally requires us to demand all bits/elts. The intention is to remove a similar function - SelectionDAG::GetDemandedBits - once SimplifyMultipleUseDemandedBits has matured.

The InstCombine version of SimplifyMultipleUseDemandedBits can constant fold which I haven't added here yet, and so far I've only wired this up to some basic binops (and/or/xor/add/sub/mul) to demonstrate its use.

We do see a couple of regressions that need to be addressed:

    AMDGPU unsigned dot product codegen retains an AND mask (for ZERO_EXTEND) that it previously removed (but otherwise the dotproduct codegen is a lot better).
	
    X86/AVX2 has poor handling of vector ANY_EXTEND/ANY_EXTEND_VECTOR_INREG - it prematurely gets converted to ZERO_EXTEND_VECTOR_INREG.

The code owners have confirmed it's ok for these cases to be fixed up in future patches.

Differential Revision: https://reviews.llvm.org/D63281

llvm-svn: 366799
2019-07-23 12:39:08 +00:00
Craig Topper a658cb0b12 [DAGCombiner] Make ShrinkLoadReplaceStoreWithStore return an SDValue instead of an SDNode*. NFCI
The function was calling getNode() on the SDValue it was about to return, and the
caller turned the result back into an SDValue. So just return the
original SDValue to avoid this.

llvm-svn: 366779
2019-07-23 05:13:39 +00:00
Craig Topper f5247244f2 [DAGCombiner] Use SDNode::isOperandOf to simplify some code. NFCI
llvm-svn: 366778
2019-07-23 05:13:35 +00:00
Richard Trieu 81a5045cd6 Move variable out from debug only section.
MFI is no longer just needed for an assert.  Move it out of the debug only
section to allow non-assert builds to be able to find it.

llvm-svn: 366773
2019-07-23 02:59:15 +00:00
Philip Reames 2f5543aa72 [Statepoints] Fix a bug in statepoint lowering for functions w/no-realign-stack
We were silently using the ABI alignment for all of the stores generated for deopt and gc values.  We'd gotten the alignment of the stack slot itself properly reduced (via MachineFrameInfo's clamping), but having the MMO on the store incorrect was enough for us to generate an aligned store to an unaligned location.

The simplest fix would have been to just pass the alignment to the helper function, but once we do that, the helper function doesn't really help.  So, inline it and directly call the MMO version of DAG.getStore with a properly constructed MMO.

Note that there's a separate performance possibility here.  Even if we *can* realign stacks, we probably don't *want to* if all of the stores are in slowpaths.  But that's a later patch, if at all.  :)

llvm-svn: 366765
2019-07-22 23:33:18 +00:00
Sean Fertile 942537d9fa Stubs out TLOF for AIX and add support for common vars in assembly output.
Stubs out a TargetLoweringObjectFileXCOFF class, implementing only
SelectSectionForGlobal for common symbols. Also adds an override of
EmitGlobalVariable in PPCAIXAsmPrinter which adds a number of defensive errors
and adds support for emitting common globals.

llvm-svn: 366727
2019-07-22 19:15:29 +00:00
Matt Arsenault 542720b2bc TableGen: Support physical register inputs > 255
This was truncating register values that didn't fit in an unsigned char.
Switch AMDGPU sendmsg intrinsics to using a tablegen pattern.

llvm-svn: 366695
2019-07-22 15:02:34 +00:00
Christudasan Devadasan 006cf8c03d Added address-space mangling for stack related intrinsics
Modified the following 3 intrinsics:
int_addressofreturnaddress,
int_frameaddress & int_sponentry.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D64561

llvm-svn: 366679
2019-07-22 12:42:48 +00:00
Oliver Stannard 6771a89fa0 [IPRA][ARM] Make use of the "returned" parameter attribute
ARM has code to recognise uses of the "returned" function parameter
attribute which guarantee that the value passed to the function in r0
will be returned in r0 unmodified. IPRA replaces the regmask on call
instructions, so needs to be told about this to avoid reverting the
optimisation.

Differential revision: https://reviews.llvm.org/D64986

llvm-svn: 366669
2019-07-22 08:44:36 +00:00
Aditya Nandakumar d7504a1569 [GISel]: Attach missing range metadata while translating G_LOADs
https://reviews.llvm.org/D65048

Attach range information to G_LOAD when only defining one register.

reviewed by: arsenm

llvm-svn: 366656
2019-07-21 14:07:54 +00:00
Roman Lebedev cd9b19484b [Codegen][SelectionDAG] X u% C == 0 fold: non-splat vector improvements
Summary:
Four things here:
1. Generalize the fold to handle non-splat divisors. Reasonably trivial.
2. Unban power-of-two divisors. I don't see any reason why they should
   be illegal.
   * There is no ban in Hacker's Delight
   * I think the ban came from the same bug that caused the miscompile
      in the base patch - in `floor((2^W - 1) / D)` we were dividing by
      `D0` instead of `D`, and we **were** ensuring that `D0` is not `1`,
      which made sense.
3. Unban `1` divisors. I no longer believe Hacker's Delight actually says
   that the fold is invalid for `D = 1`. Further considerations:
   * We know that
     * `(X u% 1) == 0`  can be constant-folded to `1`,
     * `(X u% 1) != 0`  can be constant-folded to `0`,
   *  Also, we know that
     * `X u<= -1` can be constant-folded to `1`,
     * `X u>  -1` can be constant-folded to `0`,
   * https://godbolt.org/z/7jnZJX https://rise4fun.com/Alive/oF6p
   * We know we will end up with the following (see the sketch after this list):
       `(setule/setugt (rotr (mul N, P), K), Q)`
   * Therefore, for given new DAG nodes and comparison predicates
     (`ule`/`ugt`), we will still produce the correct answer if:
     `Q` is a all-ones constant; and both `P` and `K` are *anything*
     other than `undef`.
   * The fold will indeed produce `Q = all-ones`.
4. Try to re-splat the `P` and `K` vectors - we don't care about
   their values for the lanes where divisor was `1`.
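
A hedged scalar sketch of that rotate-based check in standalone C++ (32-bit values, D != 0 assumed; P, K and Q are derived as described above, not taken from the actual TargetLowering code):

```
//  X u% D == 0  iff  rotr(X * P, K) u<= Q,  where D = D0 * 2^K (D0 odd),
//  P = D0^-1 (mod 2^32) and Q = floor((2^32 - 1) / D).
#include <cassert>
#include <cstdint>

static uint32_t modInverse(uint32_t D0) {        // D0 must be odd
  uint32_t P = D0;                               // correct to 3 bits (mod 8)
  for (int I = 0; I < 5; ++I)
    P *= 2 - D0 * P;                             // Newton step doubles correct bits
  return P;
}
static uint32_t rotr(uint32_t V, unsigned K) {
  return K ? (V >> K) | (V << (32 - K)) : V;
}
static bool isUremZero(uint32_t X, uint32_t D) { // assumes D != 0
  unsigned K = __builtin_ctz(D);
  uint32_t D0 = D >> K;
  uint32_t P = modInverse(D0);
  uint32_t Q = 0xFFFFFFFFu / D;
  return rotr(X * P, K) <= Q;
}

int main() {
  for (uint32_t D : {1u, 3u, 6u, 8u, 10u, 25u, 1000u})
    for (uint32_t X = 0; X < 5000; ++X)
      assert(isUremZero(X, D) == (X % D == 0));
}
```

With this form, `D = 1` always compares true (Q is all-ones) and power-of-two divisors are just the `D0 = 1, K > 0` case, matching items 2 and 3 above.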

Reviewers: RKSimon, hermord, craig.topper, spatel, xbolva00

Reviewed By: RKSimon

Subscribers: hiraditya, javed.absar, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63963

llvm-svn: 366637
2019-07-20 16:33:15 +00:00
Matt Arsenault c14334e959 LiveIntervals: Fix handleMove asserting on BUNDLE
The top-level BUNDLE instruction should behave as an ordinary
instruction. It is supposed to have all relevant registers as implicit
operands. Moving it should work as any other instruction. I believe
the assert intended to avoid moving instructions inside bundles.

llvm-svn: 366605
2019-07-19 19:32:00 +00:00
Nick Desaulniers 4e9196ebcb Revert "Use the MachineBasicBlock symbol for a callbr target"
This reverts commit r366523/ccbffefccaff42b0d094c9ef0f49fc3e8c8456ea.

Two regressions were immediately reported:
- https://github.com/ClangBuiltLinux/linux/issues/614
- https://github.com/ClangBuiltLinux/linux/issues/615

Reported-by: nathanchance
llvm-svn: 366600
2019-07-19 18:18:02 +00:00
Matt Arsenault 5905aae169 DAG: Handle dbg_value for arguments split into multiple subregs
This was handled previously for arguments split due to not fitting in
an MVT. This was dropping the register for argument registers split
due to TLI::getRegisterTypeForCallingConv.

llvm-svn: 366574
2019-07-19 13:36:46 +00:00
Kai Luo dec624682e [MachineCSE][MachinePRE] Avoid hoisting code from code regions into hot BBs.
Summary:
Current PRE hoists common computations into
CMBB = DT->findNearestCommonDominator(MBB, MBB1).
However, if CMBB is in a hot loop body, we might get performance
degradation.

Differential Revision: https://reviews.llvm.org/D64394

llvm-svn: 366570
2019-07-19 12:58:16 +00:00
Oliver Stannard 0ed7732671 [IPRA] Don't rely on non-exact function definitions
If a function definition is not exact, then the linker could select a
differently-compiled version of it, which could use different registers.

https://reviews.llvm.org/D64909

llvm-svn: 366557
2019-07-19 09:59:26 +00:00
Bill Wendling ccbffefcca Use the MachineBasicBlock symbol for a callbr target
Summary:
Inline asm doesn't use labels when compiled as an object file. Therefore, we
shouldn't create one for the (potential) callbr destination. Instead, use the
symbol for the MachineBasicBlock.

Reviewers: nickdesaulniers, craig.topper

Reviewed By: nickdesaulniers

Subscribers: xbolva00, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64888

llvm-svn: 366523
2019-07-19 01:10:28 +00:00
Amara Emerson cf12c7815f [GlobalISel] Translate calls to memcpy et al to G_INTRINSIC_W_SIDE_EFFECTs and legalize later.
I plan on adding memcpy optimizations in the GlobalISel pipeline, but we can't
do that unless we delay lowering to actual function calls. This patch changes
the translator to generate G_INTRINSIC_W_SIDE_EFFECTS for these functions, and
then have each target specify that using the new custom legalizer for intrinsics
hook that they want it expanded it a libcall.

Differential Revision: https://reviews.llvm.org/D64895

llvm-svn: 366516
2019-07-19 00:24:45 +00:00
Peter Collingbourne 50057f3288 CodeGen: Allow !associated metadata to point to aliases.
This is a small extension of !associated, mostly useful for the implementation
convenience of instrumentation passes that RAUW globals with aliases, such
as LowerTypeTests.

Differential Revision: https://reviews.llvm.org/D64951

llvm-svn: 366502
2019-07-18 21:37:16 +00:00
Amy Huang f332fe642c [COFF] Change a variable type to be const in the HeapAllocSite map.
llvm-svn: 366479
2019-07-18 18:22:52 +00:00
Simon Pilgrim 8b525e357f [DAGCombine] Pull getSubVectorSrc helper out of narrowInsertExtractVectorBinOp. NFCI.
NFC step towards reusing this in other EXTRACT_SUBVECTOR combines.

llvm-svn: 366435
2019-07-18 13:45:53 +00:00
Nilanjana Basu 4e22770219 Changes to display code view debug info type records in hex format
llvm-svn: 366390
2019-07-17 23:43:58 +00:00
Nilanjana Basu 6e4076699c Adding inline comments to code view type record directives for better readability
llvm-svn: 366372
2019-07-17 21:01:12 +00:00
Francis Visoiu Mistrih 9f2b290add [PEI] Don't re-allocate a pre-allocated stack protector slot
The LocalStackSlotPass pre-allocates a stack protector and makes sure
that it comes before the local variables on the stack.

We need to make sure that later during PEI we don't re-allocate a new
stack protector slot. If that happens, the new stack protector slot will
end up being **after** the local variables that it should be protecting.

Therefore, we would have two slots assigned for two different stack
protectors, one at the top of the stack, and one at the bottom. Since
PEI will overwrite the assigned slot for the stack protector, the load
that is used to compare the value of the stack protector will use the
slot assigned by PEI, which is wrong.

For this, we need to check if the object is pre-allocated, and re-use
that pre-allocated slot.

Differential Revision: https://reviews.llvm.org/D64757

llvm-svn: 366371
2019-07-17 20:46:19 +00:00
Francis Visoiu Mistrih 90ba54bf67 [CodeGen][NFC] Simplify checks for stack protector index checking
Use `hasStackProtectorIndex()` instead of `getStackProtectorIndex() >=
0`.

llvm-svn: 366369
2019-07-17 20:46:09 +00:00
Matt Arsenault 0966dd0d69 GlobalISel: Handle widenScalar of arbitrary G_MERGE_VALUES sources
Extract the sources to the GCD of the original size and target size,
padding with implicit_def as necessary.

Also fix the case where the requested source type is wider than the
original result type. This was ignoring the type, and just using the
destination. Do the operation in the requested type and truncate back.

llvm-svn: 366367
2019-07-17 20:22:44 +00:00
Matt Arsenault 914a59cad8 GlobalISel: Handle more cases for widenScalar of G_MERGE_VALUES
Use an anyext to the requested type for the leftover operand to
produce a slightly wider type, and then truncate the final merge.

I have another implementation almost ready which handles arbitrary
widens, but I think it produces worse code in this example (which I
think is 90% due to not folding redundant copies or folding out
implicit_def users), so I wanted to add this as a baseline first.

llvm-svn: 366366
2019-07-17 20:22:38 +00:00
Evgeniy Stepanov d752f5e953 Basic codegen for MTE stack tagging.
Implement IR intrinsics for stack tagging. Generated code is very
unoptimized for now.

Two special intrinsics, llvm.aarch64.irg.sp and llvm.aarch64.tagp are
used to implement a tagged stack frame pointer in a virtual register.

Differential Revision: https://reviews.llvm.org/D64172

llvm-svn: 366360
2019-07-17 19:24:02 +00:00
Alex Bradbury ab009a602e [AsmPrinter] Make the encoding of call sites in .gcc_except_table configurable and use for RISC-V
The original behavior was to always emit the offsets to each call site in the
call site table as uleb128 values, however on some architectures (eg RISCV)
these uleb128 offsets into the code cannot always be resolved until link time
(because relaxation will invalidate any calculated offsets), and there are no
appropriate relocations for uleb128 values. As a consequence it needs to be
possible to specify an alternative.

This also switches RISCV to use DW_EH_PE_udata4 for call site encodings in
.gcc_except_table

Differential Revision: https://reviews.llvm.org/D63415
Patch by Edward Jones.

llvm-svn: 366329
2019-07-17 14:00:35 +00:00
Alex Bradbury b94c233d06 [RISCV] Set correct encodings for DWARF exception handling
This patch sets correct encodings for DWARF exception handling for RISC-V
(other than call site encoding, which must be udata4 rather than uleb128 and
is handled by D63415).

This has the same intent as D63409, except this version matches GCC/binutils
behaviour which uses the same encodings regardless of PIC/non-PIC and
medlow/medany code model.

llvm-svn: 366327
2019-07-17 13:54:38 +00:00
Petar Avramovic 1e62635d05 [MIPS GlobalISel] ClampScalar and select pointer G_ICMP
Add narrowScalar to half of original size for G_ICMP.
ClampScalar G_ICMP's operands 2 and 3 to s32.
Select G_ICMP for pointers for MIPS32. Pointer compare is same
as for integers, it is enough to declare them as legal type.

Differential Revision: https://reviews.llvm.org/D64856

llvm-svn: 366317
2019-07-17 12:08:01 +00:00
Matt Arsenault 1c3f4ec7fc GlobalISel: Add overload of handleAssignments with CCState
AMDGPU needs to allocate special argument registers separately from
the user function argument list, so needs direct control over the
CCState.

The ArgLocs argument is only really necessary because CCState doesn't
allow access to it.

llvm-svn: 366279
2019-07-16 22:41:34 +00:00
David Blaikie 40580d36c4 DWARF: Skip zero column for inline call sites
D64033 <https://reviews.llvm.org/D64033> added DW_AT_call_column for
inline sites. However, that change wasn't aware of "-gno-column-info".
To avoid adding column info when "-gno-column-info" is used, now
DW_AT_call_column is only added when we have non-zero column (when
"-gno-column-info" is used, column will be zero).

Patch by Wenlei He!

Differential Revision: https://reviews.llvm.org/D64784

llvm-svn: 366264
2019-07-16 21:15:19 +00:00
Ulrich Weigand 450c62e33e [Strict FP] Allow more relaxed scheduling
Reimplement scheduling constraints for strict FP instructions in
ScheduleDAGInstrs::buildSchedGraph to allow for more relaxed
scheduling.  Specifially, allow one strict FP instruction to
be scheduled across another, as long as it is not moved across
any global barrier.

Differential Revision: https://reviews.llvm.org/D64412

Reviewed By: cameron.mcinally

llvm-svn: 366222
2019-07-16 15:55:45 +00:00
Francis Visoiu Mistrih cc909812a3 [Remarks][NFC] Combine ParserFormat and SerializerFormat
It's useless to have both.

llvm-svn: 366216
2019-07-16 15:24:59 +00:00
Amaury Sechet f34a69c2e2 [DAGCombiner] fold (addcarry (xor a, -1), b, c) -> (subcarry b, a, !c) and flip carry.
Summary:
As per title. DAGCombiner only matches the special case where b = 0; this patch extends the pattern to match any value of b.
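
A hedged standalone check of the identity the fold relies on, using 32-bit values and an explicit carry bit c in {0,1}; this is an illustration, not the DAGCombiner code:

```
// (~a) + b + c  ==  b - a - !c  (mod 2^32), and the carry out of the add
// is the inverse of the borrow out of the subtract ("flip carry").
#include <cassert>
#include <cstdint>

int main() {
  const uint32_t Vals[] = {0u, 1u, 0x7FFFFFFFu, 0x80000000u, 0xFFFFFFFFu, 12345u};
  for (uint32_t A : Vals)
    for (uint32_t B : Vals)
      for (uint32_t C : {0u, 1u}) {
        uint64_t Add = uint64_t(~A) + B + C;        // addcarry (xor a, -1), b, c
        uint64_t Sub = uint64_t(B) - A - (1 - C);   // subcarry b, a, !c
        assert(uint32_t(Add) == uint32_t(Sub));     // same low 32-bit result
        bool CarryOut = (Add >> 32) & 1;
        bool BorrowOut = B < uint64_t(A) + (1 - C); // did the subtract wrap?
        assert(CarryOut == !BorrowOut);             // flipped carry
      }
}
```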

Depends on D57302

Reviewers: hfinkel, RKSimon, craig.topper

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D59208

llvm-svn: 366214
2019-07-16 15:17:00 +00:00
Rui Ueyama 49a3ad21d6 Fix parameter name comments using clang-tidy. NFC.
This patch applies clang-tidy's bugprone-argument-comment tool
to LLVM, clang and lld source trees. Here is how I created this
patch:

$ git clone https://github.com/llvm/llvm-project.git
$ cd llvm-project
$ mkdir build
$ cd build
$ cmake -GNinja -DCMAKE_BUILD_TYPE=Debug \
    -DLLVM_ENABLE_PROJECTS='clang;lld;clang-tools-extra' \
    -DCMAKE_EXPORT_COMPILE_COMMANDS=On -DLLVM_ENABLE_LLD=On \
    -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ../llvm
$ ninja
$ parallel clang-tidy -checks='-*,bugprone-argument-comment' \
    -config='{CheckOptions: [{key: StrictMode, value: 1}]}' -fix \
    ::: ../llvm/lib/**/*.{cpp,h} ../clang/lib/**/*.{cpp,h} ../lld/**/*.{cpp,h}

llvm-svn: 366177
2019-07-16 04:46:31 +00:00
Heejin Ahn 9f96a58ccc [WebAssembly] Rename except_ref type to exnref
Summary:
We agreed to rename `except_ref` to `exnref` for consistency with other
reference types in
https://github.com/WebAssembly/exception-handling/issues/79. This also
renames WebAssemblyInstrExceptRef.td to WebAssemblyInstrRef.td in order
to use the file for other reference types in future.

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, jfb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64703

llvm-svn: 366145
2019-07-15 22:49:25 +00:00
Matt Arsenault 434d664095 GlobalISel: Implement narrowScalar for vector extract/insert indexes
llvm-svn: 366113
2019-07-15 19:37:34 +00:00
Fangrui Song 335f955dc4 [PowerPC] Support fp128 libcalls
On PowerPC, IEEE 754 quadruple-precision libcall names use "kf" instead of "tf".

In libgcc, libgcc/config/rs6000/float128-sed converts TF names to KF
names. This patch implements its 24 substitution rules.
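
Rough illustration only: the patch itself implements the 24 explicit rules from libgcc's float128-sed, but the effect on a libcall name is essentially a "tf" -> "kf" substitution:

```
// Sketch of the renaming, e.g. "__addtf3" -> "__addkf3"; the real change
// lists each libcall name explicitly rather than doing string surgery.
#include <cassert>
#include <string>

static std::string tfToKf(std::string Name) {
  std::string::size_type Pos = Name.rfind("tf");
  if (Pos != std::string::npos)
    Name.replace(Pos, 2, "kf");
  return Name;
}

int main() {
  assert(tfToKf("__addtf3") == "__addkf3");
  assert(tfToKf("__extenddftf2") == "__extenddfkf2");
  assert(tfToKf("__fixtfsi") == "__fixkfsi");
}
```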

Reviewed By: hfinkel

Differential Revision: https://reviews.llvm.org/D64282

llvm-svn: 366039
2019-07-15 05:02:32 +00:00
Jonas Devlieghere 83264b3580 [DebugInfo] Add column info for inline sites
The column field is missing for all inline sites, currently it's always
zero. This changes populates DW_AT_call_column field for inline sites.
Test case modified to cover this change.

Patch by: Wenlei He

Differential revision: https://reviews.llvm.org/D64033

llvm-svn: 365945
2019-07-12 19:25:45 +00:00
Fangrui Song b251cc0d91 Delete dead stores
llvm-svn: 365903
2019-07-12 14:58:15 +00:00
Simon Pilgrim 701e2c0d71 [DAGCombine] narrowExtractedVectorBinOp - wrap subvector extraction in helper. NFCI.
First step towards supporting 'free' subvector extractions other than concat_vectors.

llvm-svn: 365896
2019-07-12 13:00:35 +00:00
Djordje Todorovic 0739ccd3b5 Revert "[DwarfDebug] Dump call site debug info"
A build failure was found on the SystemZ platform.

This reverts commit 9e7e73578e54cd22b3c7af4b54274d743b6607cc.

llvm-svn: 365886
2019-07-12 09:45:12 +00:00
Jinsong Ji 9577086628 [MachinePipeliner] Fix order for nodes with Anti dependence in same cycle
Summary:
Problem exposed in PowerPC functional testing.

We did not consider Anti dependence for nodes in the same cycle,
so we may end up generating bad machine code.
eg: the reduced test won't verify.

*** Bad machine code: Using an undefined physical register ***
- function:    lame_encode_buffer_interleaved
- basic block: %bb.4  (0x4bde4e12928)
- instruction: %29:gprc = ADDZE %27:gprc, implicit-def dead $carry, implicit $carry
- operand 3:   implicit $carry

Reviewers: bcahoon, kparzysz, hfinkel

Subscribers: MaskRay, wuzish, nemanjai, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64192

llvm-svn: 365859
2019-07-12 01:59:42 +00:00
Simon Pilgrim d0307f93a7 [DAGCombine] narrowInsertExtractVectorBinOp - add CONCAT_VECTORS support
We already split extract_subvector(binop(insert_subvector(v,x),insert_subvector(w,y))) -> binop(x,y).

This patch adds support for extract_subvector(binop(concat_vectors(),concat_vectors())) cases as well.

In particular this means we don't have to wait for X86 lowering to convert concat_vectors to insert_subvector chains, which helps avoid some cases where demandedelts/combine calls occur too late to split large vector ops.

The fast-isel-store.ll load folding regression is annoying but I don't think is that critical.

Differential Revision: https://reviews.llvm.org/D63653

llvm-svn: 365785
2019-07-11 14:45:03 +00:00
Matt Arsenault 6eb8ae8f17 RegUsageInfoCollector: Skip calling conventions I missed before
llvm-svn: 365784
2019-07-11 14:41:40 +00:00
Matt Arsenault 7e71902b79 GlobalISel: Use Register
llvm-svn: 365780
2019-07-11 14:18:19 +00:00
Tim Northover 67828edbbd OpaquePtr: switch to GlobalValue::getValueType in a few places. NFC.
llvm-svn: 365770
2019-07-11 13:13:02 +00:00
Tim Northover f2d6597653 OpaquePtr: use byval accessor instead of inspecting pointer type. NFC.
The accessor can deal with both "byval(ty)" and "ty* byval" forms
seamlessly.

llvm-svn: 365769
2019-07-11 13:12:38 +00:00
Sanjay Patel 138328e45c [SDAG] commute setcc operands to match a subtract
If we have:

R = sub X, Y
P = cmp Y, X

...then flipping the operands in the compare instruction can allow using a subtract that sets compare flags.

Motivated by diffs in D58875 - not sure if this changes anything there,
but this seems like a good thing independent of that.

There's a more involved version of this transform already in IR (in instcombine
although that seems misplaced to me) - see "swapMayExposeCSEOpportunities()".

Differential Revision: https://reviews.llvm.org/D63958

llvm-svn: 365711
2019-07-10 23:23:54 +00:00
Amara Emerson 7a4d2df04a [AArch64][GlobalISel] Optimize compare and branch cases with G_INTTOPTR and unknown values.
Since we have distinct types for pointers and scalars, G_INTTOPTRs can sometimes
obstruct attempts to find constant source values. These usually come about when
we try to do some kind of null pointer check. Teaching getConstantVRegValWithLookThrough
about this operation allows the CBZ/CBNZ optimization to catch more cases.

This change also improves the case where we can't find a constant source at all.
Previously we would emit a cmp, cset and tbnz for that. Now we try to just emit
a cmp and conditional branch, saving an instruction.

The cumulative code size improvement of this change plus D64354 is 5.5% geomean
on arm64 CTMark -O0.

Differential Revision: https://reviews.llvm.org/D64377

llvm-svn: 365690
2019-07-10 19:21:43 +00:00
Michael Berg f4572249d7 Move three folds for FADD, FSUB and FMUL in the DAG combiner away from Unsafe to more aligned checks that reflect context
Summary: Unsafe alone does not map well to each of these three cases, as it is missing the NoNan context when accessed directly with clang.  I have migrated the fold guards to reflect the expectations of handling the nan and zero contexts directly (NoNan, NSZ) and updated some tests with it.  Unsafe does include NSZ; however, there is already precedent for using the target option directly to reflect that context.

Reviewers: spatel, wristow, hfinkel, craig.topper, arsenm

Reviewed By: arsenm

Subscribers: michele.scandale, wdng, javed.absar

Differential Revision: https://reviews.llvm.org/D64450

llvm-svn: 365679
2019-07-10 18:23:26 +00:00
Nick Desaulniers 8728e45706 [TargetLowering] support BlockAddress as "i" inline asm constraint
Summary:
This allows passing address of labels to inline assembly "i" input
constraints.

Fixes pr/42502.

Reviewers: ostannard

Reviewed By: ostannard

Subscribers: void, echristo, nathanchance, ostannard, javed.absar, hiraditya, llvm-commits, srhines

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64167

llvm-svn: 365664
2019-07-10 17:08:25 +00:00
Matt Arsenault 6ce1b4fec5 GlobalISel: Legalization for G_FMINNUM/G_FMAXNUM
llvm-svn: 365658
2019-07-10 16:31:19 +00:00
Matt Arsenault e595a2c964 GlobalISel: Define the full family of FP min/max instructions
llvm-svn: 365657
2019-07-10 16:31:15 +00:00
Simon Pilgrim 94c84aca5d [DAGCombine] visitINSERT_SUBVECTOR - use uint64_t subvector index. NFCI.
Keep the uint64_t type from getZExtValue() to stop truncation/extension overflow warnings in MSVC in subvector index math.

llvm-svn: 365621
2019-07-10 12:21:35 +00:00
Simon Pilgrim bb1167a3a1 Fix const/non-const lambda return type warning. NFCI.
llvm-svn: 365613
2019-07-10 10:45:09 +00:00
Matt Arsenault b1843e130a GlobalISel: Implement lower for G_FCOPYSIGN
In SelectionDAG AMDGPU treated these as legal, but this was mostly
because the bitcasts required for FP types were painful. Theoretically
the bitpattern should eventually match to bfi, so don't bother trying
to get the patterns to import.

llvm-svn: 365583
2019-07-09 23:34:29 +00:00
Matt Arsenault 14a4495155 GlobalISel: Combine unmerge of merge with intermediate cast
This eliminates some illegal intermediate vectors when operations are
scalarized.

llvm-svn: 365566
2019-07-09 22:19:13 +00:00
Craig Topper 84a1f07363 [X86][AMDGPU][DAGCombiner] Move call to allowsMemoryAccess into isLoadBitCastBeneficial/isStoreBitCastBeneficial to allow X86 to bypass it
Basically the problem is that X86 doesn't set the Fast flag from
allowsMemoryAccess on certain CPUs due to slow unaligned memory
subtarget features. This prevents bitcasts from being folded into
loads and stores. But all vector loads and stores of the same width
are the same cost on X86.

This patch merges the allowsMemoryAccess call into isLoadBitCastBeneficial to allow X86 to skip it.

Differential Revision: https://reviews.llvm.org/D64295

llvm-svn: 365549
2019-07-09 19:55:28 +00:00
Jinsong Ji 06fef0b359 Revert "[HardwareLoops] NFC - move hardware loop checking code to isHardwareLoopProfitable()"
This reverts commit d955573065.

llvm-svn: 365520
2019-07-09 17:53:09 +00:00
Amara Emerson 6616e269a6 [AArch64][GlobalISel] Optimize conditional branches followed by unconditional branches
If we have an icmp->brcond->br sequence where the brcond just branches to the
next block jumping over the br, while the br takes the false edge, then we can
modify the conditional branch to jump to the br's target while inverting the
condition of the incoming icmp. This means we can eliminate the br as an
unconditional branch to the fallthrough block.

Differential Revision: https://reviews.llvm.org/D64354

llvm-svn: 365510
2019-07-09 16:05:59 +00:00
Simon Pilgrim 57603cbde8 [DAGCombine] LoadedSlice - keep getOffsetFromBase() uint64_t offset. NFCI.
Keep the uint64_t type from getOffsetFromBase() to stop truncation/extension overflow warnings in MSVC in alignment math.

llvm-svn: 365504
2019-07-09 15:28:57 +00:00
Chen Zheng d955573065 [HardwareLoops] NFC - move hardware loop checking code to isHardwareLoopProfitable()
Differential Revision: https://reviews.llvm.org/D64197

llvm-svn: 365497
2019-07-09 14:56:17 +00:00
Petar Avramovic be20e36107 [MIPS GlobalISel] Register bank select for G_PHI. Select i64 phi
Select gprb or fprb when def/use register operand of G_PHI is
used/defined by either:
 copy to/from physical register or
 instruction with only one mapping available for that use/def operand.

Integer s64 phi is handled with narrowScalar when mapping is applied,
and produced artifacts are combined away. Manually set gprb to all register
operands of instructions created during narrowScalar.

Differential Revision: https://reviews.llvm.org/D64351

llvm-svn: 365494
2019-07-09 14:36:17 +00:00
Simon Pilgrim 480e8ad217 [CodeGen] AccelTable - remove non-constexpr (MSVC) Atom defs
Now that we've dropped VS2015 support (D64326) we can enable the constexpr variables on MSVC builds as VS2017+ correctly handles them

llvm-svn: 365477
2019-07-09 13:07:48 +00:00
Djordje Todorovic c1e0ea9765 [NFC][AsmPrinter] Fix the formatting for the rL365467
In addition, fix the build failure for the 'unused'
variable. The variable was used inside the 'LLVM_DEBUG()'.

llvm-svn: 365469
2019-07-09 12:06:21 +00:00
Tim Northover 60afa49abe OpaquePtr: add Type parameter to Loads analysis API.
This makes the functions in Loads.h require a type to be specified
independently of the pointer Value so that when pointers have no structure
other than address-space, it can still do its job.

Most callers had an obvious memory operation handy to provide this type, but
SROA and ArgumentPromotion were doing more complicated analysis. They get
updated to merge the properties of the various instructions they were
considering.

llvm-svn: 365468
2019-07-09 11:35:35 +00:00
Djordje Todorovic 01eaae6dd1 [DwarfDebug] Dump call site debug info
Dump the DWARF information about call sites and call site parameters into
debug info sections.

The patch also provides an interface for the interpretation of instructions
that could load values of a call site parameters in order to generate DWARF
about the call site parameters.

([13/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D60716

llvm-svn: 365467
2019-07-09 11:33:56 +00:00
Bjorn Pettersson 051a6a1c33 [SelectionDAG] Simplify some calls to getSetCCResultType. NFC
DAGTypeLegalizer and SelectionDAGLegalize has helper
functions wrapping the call to TLI.getSetCCResultType(...).
Use those helpers in more places.

llvm-svn: 365456
2019-07-09 10:27:51 +00:00
Bjorn Pettersson 59029017a6 [LegalizeTypes] Fix saturation bug for smul.fix.sat
Summary:
Make sure we use SETGE instead of SETGT when checking
if the sign bit is zero at SMULFIXSAT expansion.

The faulty expansion occurred when doing "expand" of
SMULFIXSAT and the scale was exactly matching the
size of the smaller type. For example doing
  i64 Z = SMULFIXSAT X, Y, 32
and expanding X/Y/Z into using two i32 values.

The problem was that we sometimes did not saturate
to min when overflowing.

Here is an example using Q3.4 numbers:

Consider that we are multiplying X and Y.
  X = 0x80 (-8.0 as Q3.4)
  Y = 0x20 (2.0 as Q3.4)
To avoid loss of precision we do a widening
multiplication, getting a 16 bit result
  Z = 0xF000 (-16.0 as Q7.8)

To detect negative overflow we should check if
the five most significant bits in Z are less than -1.
Assume that we name the 4 most significant bits
as HH and the next 4 bits as HL. Then we can do the
check by examining if
 (HH < -1) or (HH == -1 && "sign bit in HL is zero").

The fault was that we have been doing the check as
 (HH < -1) or (HH == -1 && HL > 0)
instead of
 (HH < -1) or (HH == -1 && HL >= 0).

In our example HH is -1 and HL is 0, so the old
code did not trigger saturation and simply truncated
the result to 0x00 (0.0). With the bugfix we instead
detect that we should saturate to min, and the result
will be set to 0x80 (-8.0).
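
A hedged C++ model of the corrected check for the Q3.4 example above (a sketch, not the LegalizeTypes expansion; HH/HL follow the naming in the summary):

```
// Negative overflow now requires "sign bit in HL is zero" (HL >= 0), not HL > 0.
#include <cassert>
#include <cstdint>

static int8_t smulFixSatQ3_4(int8_t X, int8_t Y) {
  int16_t Z = int16_t(int16_t(X) * int16_t(Y)); // Q7.8 widened product
  int8_t HH = int8_t(Z >> 12);                  // top 4 bits, sign-extended
  bool HLSignSet = (Z >> 11) & 1;               // sign bit of the next 4 bits (HL)
  if (HH < -1 || (HH == -1 && !HLSignSet))      // five MSBs of Z < -1
    return INT8_MIN;                            // saturate to 0x80 (-8.0)
  if (HH > 0 || (HH == 0 && HLSignSet))         // five MSBs of Z > 0
    return INT8_MAX;                            // saturate to 0x7F (7.9375)
  return int8_t(Z >> 4);                        // drop the extra fraction bits
}

int main() {
  assert(smulFixSatQ3_4(INT8_MIN, 0x20) == INT8_MIN); // -8.0 * 2.0 saturates to -8.0
  assert(smulFixSatQ3_4(0x20, 0x20) == 0x40);         //  2.0 * 2.0 == 4.0
  assert(smulFixSatQ3_4(0x40, 0x40) == INT8_MAX);     //  4.0 * 4.0 saturates to max
}
```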

Reviewers: leonardchan, bevinh

Reviewed By: leonardchan

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64331

llvm-svn: 365455
2019-07-09 10:24:50 +00:00
Guillaume Chatelet 336f3e1601 Fixing @llvm.memcpy not honoring volatile.
This is explicitly not addressing target-specific code, or calls to memcpy.

Summary: https://bugs.llvm.org/show_bug.cgi?id=42254

Reviewers: courbet

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63215

llvm-svn: 365449
2019-07-09 09:53:36 +00:00
Jeremy Morse 9bebc65d79 Revert r364515 and r364524
Jordan reports on llvm-commits a performance regression with r364515,
backing the patch out while it's investigated.

llvm-svn: 365448
2019-07-09 09:38:03 +00:00
Djordje Todorovic 12aca5de02 Reland "[LiveDebugValues] Emit the debug entry values"
Emit replacements for a clobbered parameter's location if the parameter
has an unmodified value throughout the function. This is the basic scenario
where we can use the debug entry values.

([12/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D58042

llvm-svn: 365444
2019-07-09 08:36:34 +00:00
Jinsong Ji cbd64f7648 [MachinePipeliner] Fix Phi refers to Phi in same stage in 1st epilogue
Summary:
This is exposed by functional testing on PowerPC.
In some pipelined loops, Phi refer to phi did not get value defined by
the Phi, hence getting wrong value later.

As the comment mentioned, we should "use the value defined by the Phi,
unless we're generating the firstepilog and the Phi refers to a Phi
 in a different stage.", so Phi refering to same stage Phi should use
the value defined by the Phi here.

Reviewers: bcahoon, hfinkel

Reviewed By: hfinkel

Subscribers: MaskRay, wuzish, nemanjai, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64035

llvm-svn: 365428
2019-07-09 02:27:35 +00:00
Nilanjana Basu faed8516e4 Changing CodeView debug info type record representation in assembly files to make it more human-readable & editable & fixing bug introduced in r364987
llvm-svn: 365417
2019-07-09 01:11:02 +00:00
Reid Kleckner 2f07c2e9d9 Standardize on MSVC behavior for triples with no environment
Summary:
This makes it so that IR files using triples without an environment work
out of the box, without normalizing them.

Typically, the MSVC behavior is more desirable. For example, it tends to
enable things like constant merging, use of associative comdats, etc.

Addresses PR42491

Reviewers: compnerd

Subscribers: hiraditya, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64109

llvm-svn: 365387
2019-07-08 21:05:20 +00:00
Matt Arsenault 5630e3a1c7 RegUsageInfoCollector: Don't iterate all regs for every reg class
This is extremely slow on AMDGPU, which has a lot of physical registers
and a lot of register classes.

determineCalleeSaves, via MachineRegisterInfo::isPhysRegUsed already
added all of the super registers to the saved set.

llvm-svn: 365370
2019-07-08 18:48:42 +00:00
Matt Arsenault 079f77b590 GlobalISel: Convert some build functions to using SrcOp/DstOp
llvm-svn: 365343
2019-07-08 16:27:47 +00:00
Matt Arsenault bd791b57f8 GlobalISel: widenScalar for G_BUILD_VECTOR
llvm-svn: 365320
2019-07-08 13:48:06 +00:00
Simon Pilgrim 9285bf0fb9 [TargetLowering] SimplifyDemandedBits - just call computeKnownBits for BUILD_VECTOR cases.
Don't do this locally, computeKnownBits does this better (and can handle non-constant cases as well).

A next step would be to actually simplify non-constant elements - building on what we already do in SimplifyDemandedVectorElts.

llvm-svn: 365309
2019-07-08 11:00:39 +00:00
David Majnemer 617df204b5 [CodeGen] Add larger vector types for i32 and f32
Some out-of-tree backends require larger vector types. Since maintaining the changes out of tree is difficult due to the many manual changes needed when adding a new type, we are adding these even if no in-tree backend currently uses them.

Differential Revision: https://reviews.llvm.org/D64141

Patch by Thomas Raoux!

llvm-svn: 365274
2019-07-07 04:47:37 +00:00
Simon Pilgrim 9c68aa33e3 [DAGCombine] convertBuildVecZextToZext - remove duplicate getOpcode() call. NFCI.
llvm-svn: 365269
2019-07-06 18:32:15 +00:00
Quentin Colombet 0ffe0db6fa [RegisterCoalescer] Fix an overzealous assert
Although removeCopyByCommutingDef deals with full copies, it is still
possible to copy undef lanes and thus we wouldn't have a value
number for these lanes.

This fixes PR40215.

llvm-svn: 365256
2019-07-06 00:34:54 +00:00
Matt Arsenault 705e46f449 RegUsageInfoCollector: Skip AMDGPU entry point functions
I'm not sure if it's worth it or not to add a hook to disable the pass
for an arbitrary function.

This pass is taking up to 5% of compile time in tiny programs by
iterating through all of the physical registers in every register
class. This pass should be rewritten in terms of regunits. For now,
skip doing anything for entry point functions. The vast majority of
functions in the real world aren't callable, so just not running this
will give the majority of the benefit.

llvm-svn: 365255
2019-07-05 23:33:43 +00:00
Michael Liao 8d6ea2d48c [CodeGen] Enhance `MachineInstrSpan` to allow the end of MBB to be used.
Summary:
- Explicitly specify the parent MBB to allow the end iterator to be
  used.

Reviewers: aprantl, MatzeB, craig.topper, qcolombet

Subscribers: arsenm, jvesely, nhaehnle, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64261

llvm-svn: 365240
2019-07-05 20:23:59 +00:00
Matt Arsenault 27a6985d90 ScheduleDAG: Fix incorrectly killing registers in bundles
When looking for uses/defs to add kill flags, the iterator was double
incremented, skipping the first instruction in the bundle. The use
register in the first bundle instruction was then incorrectly killed.
The "First" instruction should be the BUNDLE itself as the proper
reverse iterator endpoint.

llvm-svn: 365216
2019-07-05 15:32:28 +00:00
Robert Lougher 2478b62098 Revert r365198 as this accidentally commited something that
should not have been added.

llvm-svn: 365199
2019-07-05 12:30:45 +00:00
Robert Lougher 3bea2b15f5 This reverts r365061 and r365062 (test update)
Revision r365061 changed a skip of debug instructions for a skip
of meta instructions. This is not safe, as IMPLICIT_DEF is classed
as a meta instruction.

llvm-svn: 365198
2019-07-05 12:20:21 +00:00
Craig Topper e9aed963ce [DAGCombiner] Don't combine (addcarry (uaddo X, Y), 0, Carry) -> (addcarry X, Y, Carry) if the Carry comes from the uaddo.
Summary:
The uaddo won't be removed and the addcarry will still be
dependent on the uaddo. So we'll just increase the use count
of X and Y and potentially require a COPY.

Reviewers: spatel, RKSimon, deadalnix

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64190

llvm-svn: 365149
2019-07-04 18:18:46 +00:00
Matt Arsenault 43cbca50e4 GlobalISel: Fix widenScalar for pointer typed G_MERGE_VALUES
llvm-svn: 365093
2019-07-03 23:08:06 +00:00
Francis Visoiu Mistrih 83bbe2f418 [CodeGen] Make branch funnels pass the machine verifier
We previously marked all the tests with branch funnels as
`-verify-machineinstrs=0`.

This is an attempt to fix it.

1) `ICALL_BRANCH_FUNNEL` has no defs. Mark it as `let OutOperandList =
(outs)`

2) After that we hit an assert:
```
Assertion failed: (Op.getValueType() != MVT::Other && Op.getValueType() != MVT::Glue &&
"Chain and glue operands should occur at end of operand list!"), function AddOperand,
file /Users/francisvm/llvm/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp, line 461.
```

The chain operand was added at the beginning of the operand list. Move
that to the end.

3) After that we hit another verifier issue in the pseudo expansion
where the registers used in the cmps and jmps are not added to the
livein lists. Add the `EFLAGS` to all the new MBBs that we create.

PR39436

Differential Review: https://reviews.llvm.org/D54155

llvm-svn: 365058
2019-07-03 17:16:45 +00:00
Amaury Sechet 57dfacb32d Use getAllOnesConstants instead of -1 in DAGCombiner. NFC
llvm-svn: 365054
2019-07-03 16:34:36 +00:00
Amaury Sechet bddb8c3597 [DAGCombine] More diamong carry pattern optimization.
Summary:
This diff improves the capability of DAGCombine to generate linear carry propagation in the presence of a diamond pattern. It is now able to match a large variety of different patterns rather than some hardcoded ones.

Arguably, the codegen in test cases is not better, but this is to be expected. The goal of this transformation is more about canonicalisation than actual optimisation.

Reviewers: hfinkel, RKSimon, craig.topper

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D57302

llvm-svn: 365051
2019-07-03 16:15:59 +00:00
James Molloy fa4aac7335 [SelectionDAG] Propagate alias metadata to target intrinsic nodes
When a target intrinsic has been determined to touch memory, we construct a MachineMemOperand during SDAG construction. In this case, we should propagate AAMDNodes metadata to the MachineMemOperand where available.

Differential revision: https://reviews.llvm.org/D64131

llvm-svn: 365043
2019-07-03 14:33:29 +00:00
Oliver Stannard 830b20344b [ARM] Thumb2: favor R4-R7 over R12/LR in allocation order when opt for minsize
For Thumb2, we prefer low regs (costPerUse = 0) to allow narrow
encoding. However, current allocation order is like:
  R0-R3, R12, LR, R4-R11

As a result, a lot of instructions that use R12/LR will be wide instructions.

This patch changes the allocation order to:
  R0-R7, R12, LR, R8-R11
for thumb2 and -Osize.

In most cases, there are no extra push/pop instructions as they will be folded
into existing ones. There might be a slight performance impact due to more
stack usage, so we only enable it when optimizing for minimum size.

https://reviews.llvm.org/D30324

llvm-svn: 365014
2019-07-03 09:58:52 +00:00
Roman Lebedev c4b83a6054 [Codegen][X86][AArch64][ARM][PowerPC] Inc-of-add vs sub-of-not (PR42457)
Summary:
This is the backend part of [[ https://bugs.llvm.org/show_bug.cgi?id=42457 | PR42457 ]].
In middle-end, we'd want to prefer the form with two adds - D63992,
but as this diff shows, not every target will prefer that pattern.

Out of 4 targets for which i added tests all seem to be ok with inc-of-add for scalars,
but only X86 prefer that same pattern for vectors.

Here i'm adding a new TLI hook, always defaulting to the inc-of-add,
but adding AArch64,ARM,PowerPC overrides to prefer inc-of-add only for scalars.
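
A hedged sketch of the two equivalent forms from PR42457, on 32-bit scalars (illustration only):

```
// sub-of-not  x - ~y  ==  inc-of-add  (x + y) + 1  (mod 2^32).
#include <cassert>
#include <cstdint>

static uint32_t subOfNot(uint32_t X, uint32_t Y) { return X - ~Y; }
static uint32_t incOfAdd(uint32_t X, uint32_t Y) { return (X + Y) + 1; }

int main() {
  const uint32_t Vals[] = {0u, 1u, 0x7FFFFFFFu, 0x80000000u, 0xFFFFFFFFu, 42u};
  for (uint32_t X : Vals)
    for (uint32_t Y : Vals)
      assert(subOfNot(X, Y) == incOfAdd(X, Y));
}
```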

Reviewers: spatel, RKSimon, efriedma, t.p.northover, hfinkel

Reviewed By: efriedma

Subscribers: nemanjai, javed.absar, kristof.beyls, kbarton, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64090

llvm-svn: 365010
2019-07-03 09:41:35 +00:00
Nilanjana Basu c0b557744a Revert Changing CodeView debug info type record representation in assembly files to make it more human-readable & editable
This reverts r364982 (git commit 2082bf28eb)

llvm-svn: 364987
2019-07-03 00:51:49 +00:00
Nilanjana Basu 2082bf28eb Changing CodeView debug info type record representation in assembly files to make it more human-readable & editable
llvm-svn: 364982
2019-07-03 00:26:23 +00:00
Teresa Johnson e6768d613a [RA] Fix spelling of Greedy register allocator internal option
The internal option added with r323870 has a typo. It isn't being used
by any tests, but I decided to fix the spelling and leave it in for use
in debugging the changes added in that patch.

llvm-svn: 364958
2019-07-02 18:54:03 +00:00
Matt Arsenault ce690544a6 GlobalISel: Add G_FENCE
The pattern importer is for some reason emitting checks for G_CONSTANT
for the immediate operands.

llvm-svn: 364926
2019-07-02 14:16:39 +00:00
Roman Lebedev 7c8ee375d8 [NFC][TargetLowering] Some preparatory cleanups around 'prepareUREMEqFold()' from D63963
llvm-svn: 364921
2019-07-02 13:21:23 +00:00
Amara Emerson 000ef2c2ae [TailDuplicator] Fix copy instruction emitting into the wrong block.
The code for duplicating instructions could sometimes try to emit copies
intended to deal with unconstrainable register classes to the tail block of the
original instruction, rather than before the newly cloned instruction in the
predecessor block.

This was exposed by GlobalISel on arm64.

Differential Revision: https://reviews.llvm.org/D64049

llvm-svn: 364888
2019-07-02 06:04:46 +00:00
Zi Xuan Wu 7ae536a1ce [DAGCombiner] Exploiting more about the transformation of TransformFPLoadStorePair function
For a given floating point load / store pair, if the load value isn't used by any other operations, 
then consider transforming the pair to integer load / store operations if the target deems the transformation profitable.

And we can exploit much more when there are other operation nodes with a chain operand between the load/store pair,
so long as we keep the original chain ordering. We only replace the register used to load/store from float to integer.
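
A conceptual C++ analogue of the targeted pattern (illustration only, not the DAG transform): a float that is merely loaded and stored back can be copied through an integer register instead:

```
#include <cstdint>
#include <cstring>

void copyViaFloat(float *Dst, const float *Src) {
  *Dst = *Src;                             // float load + float store
}

void copyViaInt(float *Dst, const float *Src) {
  uint32_t Bits;
  std::memcpy(&Bits, Src, sizeof(Bits));   // integer load of the same bytes
  std::memcpy(Dst, &Bits, sizeof(Bits));   // integer store, value unchanged
}
```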

I only added a testcase for ARM because the TLI.isDesirableToTransformToIntegerOp hook is only enabled for the ARM target.

Differential Revision: https://reviews.llvm.org/D60601

llvm-svn: 364883
2019-07-02 02:54:52 +00:00
Nilanjana Basu 8b7a0baa20 Testing commit access through minor formatting change
llvm-svn: 364843
2019-07-01 20:27:37 +00:00
Matt Arsenault c9f14f29f5 GlobalISel: Try to widen merges with other merges
If the requested source type an be used as a merge source type, create
a merge of merges. This avoids creating large, illegal extensions and
bit-ops directly to the result type.

llvm-svn: 364841
2019-07-01 19:36:10 +00:00
Matt Arsenault 03ca176ab3 GlobalISel: Verify G_MERGE_VALUES operand sizes
llvm-svn: 364822
2019-07-01 18:01:35 +00:00
Aditya Nandakumar 1023a2eca3 [GlobalISel]: Allow backends to custom legalize Intrinsics
https://reviews.llvm.org/D31359

Add a hook "legalizeInstrinsic" to allow backends to override this
and custom lower/legalize intrinsics.

llvm-svn: 364821
2019-07-01 17:53:50 +00:00
Matt Arsenault 6f74f55750 GlobalISel: Implement lower for min/max
llvm-svn: 364816
2019-07-01 17:18:03 +00:00
Diana Picus 2ba16011c1 Fixup r364512
Fix stack-use-after-scope errors from r364512. One instance was already
fixed in r364611 - this patch simplifies that fix and addresses one more
instance of similar code.

Discussed in: https://reviews.llvm.org/D63905

llvm-svn: 364778
2019-07-01 15:07:38 +00:00
Benjamin Kramer ed13fef477 [SelectionDAG] Do minnum->minimum at legalization time instead of building time
The SDAGBuilder behavior stems from the days when we didn't have fast
math flags available in SDAG. We do now and doing the transformation in
the legalizer has the advantage that it also works for vector types.

llvm-svn: 364743
2019-07-01 11:00:23 +00:00
Jeremy Morse d2b6665e33 [DebugInfo] Avoid adding too much indirection to pointer-valued variables
This patch addresses PR41675, where a stack-pointer variable is dereferenced
too many times by its location expression, presenting a value on the stack as
the pointer to the stack.

The difference between a stack *pointer* DBG_VALUE and one that refers to a
value on the stack, is currently the indirect flag. However the DWARF backend
will also try to guess whether something is a memory location or not, based
on whether there is any computation in the location expression. By simply
prepending the stack offset to existing expressions, we can accidentally
convert a register location into a memory location, which introduces a
suprise (and unintended) dereference.

The solution is to add DW_OP_stack_value whenever we add a DIExpression
computation to a stack *pointer*. It's an implicit location computed on the
expression stack, thus needs to be flagged as a stack_value.

For the edge case where the offset is zero and the location could be a register
location, DIExpression::prepend will still generate opcodes, and thus
DW_OP_stack_value must still be added.

Differential Revision: https://reviews.llvm.org/D63429

llvm-svn: 364736
2019-07-01 09:38:23 +00:00
Sam Parker 98722691b0 [ARM] WLS/LE Code Generation
Backend changes to enable WLS/LE low-overhead loops for armv8.1-m:
1) Use TTI to communicate to the HardwareLoop pass that we should try
   to generate intrinsics that guard the loop entry, as well as setting
   the loop trip count.
2) Lower the BRCOND that uses said intrinsic to an Arm specific node:
   ARMWLS.
3) ISelDAGToDAG the node to a new pseudo instruction:
   t2WhileLoopStart.
4) Add support in ArmLowOverheadLoops to handle the new pseudo
   instruction.

Differential Revision: https://reviews.llvm.org/D63816

llvm-svn: 364733
2019-07-01 08:21:28 +00:00
Fangrui Song 78ee2fbf98 Cleanup: llvm::bsearch -> llvm::partition_point after r364719
llvm-svn: 364720
2019-06-30 11:19:56 +00:00
Craig Topper 4d0feb28ec [SelectionDAG] Use the memory VT instead of result VT for FoldingSet profiling in getMaskedLoad/getMaskedStore.
This matches what is done by the Profile function. Otherwise CSE
won't work properly.

llvm-svn: 364717
2019-06-30 06:46:33 +00:00
Sam Parker 9a92be1b35 [HardwareLoops] Loop counter guard intrinsic
Introduce llvm.test.set.loop.iterations which sets the loop counter
and also produces an i1 after testing that the count is not zero.

Differential Revision: https://reviews.llvm.org/D63809

llvm-svn: 364628
2019-06-28 07:38:16 +00:00
Matt Arsenault 3018d1845b GlobalISel: Use Register
llvm-svn: 364618
2019-06-28 01:47:44 +00:00
Matt Arsenault 5e66db6b8c GlobalISel: Convert rest of MachineIRBuilder to using Register
llvm-svn: 364615
2019-06-28 01:16:41 +00:00
Amara Emerson ecb7ac35f9 [GlobalISel][IRTranslator] Fix some PHI bugs related to jump tables when optimizations are used.
The new switch lowering code that tries to generate jump tables and range checks
was tested at -O0 on arm64, but at -O3 the generic switch lowering code goes to
town on trying to generate optimized lowerings, e.g. multiple jump tables, range
checks etc. This exposed bugs in the way PHI nodes are handled because the CFG
looks even stranger after all of this is done.

llvm-svn: 364613
2019-06-27 23:56:34 +00:00
Rumeet Dhindsa ddc2804e1a Fix ASAN error caused by commit r364512.
This patch intends to fix an ASAN stack-use-after-scope error.
This is at least a short-term fix to unbreak LLVM's mainline.

Differential Revision: https://reviews.llvm.org/D63905

llvm-svn: 364611
2019-06-27 23:37:04 +00:00
Roman Lebedev 29d05c005f [CodeGen] [SelectionDAG] More efficient code for X % C == 0 (UREM case) (try 3)
Summary:
I'm submitting a new revision since I don't understand how to reclaim/reopen/take over the existing one, D50222.
There is no such action in the "Add Action" menu...

This implements an optimization described in Hacker's Delight 10-17: when `C` is constant,
the result of `X % C == 0` can be computed more cheaply without actually calculating the remainder.
The motivation is discussed here: https://bugs.llvm.org/show_bug.cgi?id=35479.

This is a recommit, the original commit rL364563 was reverted in rL364568
because test-suite detected miscompile - the new comparison constant 'Q'
was being computed incorrectly (we divided by `D0` instead of `D`).

Original patch D50222 by @hermord (Dmytro Shynkevych)

Notes:
- In principle, it's possible to also handle the `X % C1 == C2` case, as discussed on bugzilla.
  This seems to require an extra branch on overflow, so I refrained from implementing this for now.
- An explicit check for when the `REM` can be reduced to just its LHS is included:
  the `X % C == 0` optimization breaks `test1` in `test/CodeGen/X86/jump_sign.ll` otherwise.
  I haven't managed to find a better way to avoid generating worse output in this case.
- The `test/CodeGen/X86/jump_sign.ll` test regresses and is being fixed by a followup patch, D63390.

Reviewers: RKSimon, craig.topper, spatel, hermord, xbolva00

Reviewed By: RKSimon, xbolva00

Subscribers: dexonsmith, kristina, xbolva00, javed.absar, llvm-commits, hermord

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63391

llvm-svn: 364600
2019-06-27 21:52:10 +00:00
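For readers unfamiliar with the Hacker's Delight 10-17 trick referenced in the entry above (llvm-svn 364600), here is a minimal standalone C++ sketch of the odd-divisor case; the divisor 37, the loop bound, and the helper name modInverse are illustrative assumptions, not part of the patch:

    #include <cassert>
    #include <cstdint>

    // Multiplicative inverse of an odd d modulo 2^32 via Newton's iteration;
    // each step doubles the number of correct low-order bits.
    static uint32_t modInverse(uint32_t d) {
      uint32_t x = d;                  // d*d == 1 (mod 8), so 3 bits correct
      for (int i = 0; i < 5; ++i)
        x *= 2 - d * x;
      return x;
    }

    int main() {
      const uint32_t C = 37;               // odd divisor (illustrative)
      const uint32_t M = modInverse(C);    // C * M == 1 (mod 2^32)
      const uint32_t Q = UINT32_MAX / C;   // the comparison constant 'Q'
      for (uint32_t x = 0; x < 1000000; ++x)
        assert(((x % C) == 0) == (x * M <= Q));   // no remainder computed
      return 0;
    }

For even divisors the published transform additionally rotates the product right by the number of trailing zero bits of the divisor, and Q must be derived from D rather than its odd part D0, which is exactly the miscompile that caused the earlier revert.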
Djordje Todorovic 774eabd097 Revert "[LiveDebugValues] Emit the debug entry values"
It appears that 'test/DebugInfo/MIR/X86/dbginfo-entryvals.mir'
does not pass on Windows.

This reverts commit rL364553.

llvm-svn: 364571
2019-06-27 18:12:04 +00:00
Roman Lebedev 0a2b7b79fa Revert "[CodeGen] [SelectionDAG] More efficient code for X % C == 0 (UREM case) (try 2)"
*Appears* to break test-suite on
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-sde-avx512-linux/builds/23790

FAIL: burg.execution_time
FAIL: spiff.execution_time
FAIL: employ.execution_time
FAIL: llu.execution_time
FAIL: gramschmidt.execution_time
FAIL: fdtd-apml.execution_time

This reverts commit r364563.

llvm-svn: 364568
2019-06-27 17:22:31 +00:00
Roman Lebedev 0627b09863 [CodeGen] [SelectionDAG] More efficient code for X % C == 0 (UREM case) (try 2)
Summary:
I'm submitting a new revision since I don't understand how to reclaim/reopen/take over the existing one, D50222.
There is no such action in the "Add Action" menu...
Original patch D50222 by @hermord (Dmytro Shynkevych)

This implements an optimization described in Hacker's Delight 10-17: when `C` is constant,
the result of `X % C == 0` can be computed more cheaply without actually calculating the remainder.
The motivation is discussed here: https://bugs.llvm.org/show_bug.cgi?id=35479.

Original patch author: @hermord (Dmytro Shynkevych)!

Notes:
- In principle, it's possible to also handle the `X % C1 == C2` case, as discussed on bugzilla.
  This seems to require an extra branch on overflow, so I refrained from implementing this for now.
- An explicit check for when the `REM` can be reduced to just its LHS is included:
  the `X % C == 0` optimization breaks `test1` in `test/CodeGen/X86/jump_sign.ll` otherwise.
  I haven't managed to find a better way to avoid generating worse output in this case.
- The `test/CodeGen/X86/jump_sign.ll` test regresses and is being fixed by a followup patch, D63390.

Reviewers: RKSimon, craig.topper, spatel, hermord, xbolva00

Reviewed By: RKSimon, xbolva00

Subscribers: xbolva00, javed.absar, llvm-commits, hermord

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63391

llvm-svn: 364563
2019-06-27 16:45:42 +00:00
Djordje Todorovic d6a46aff59 [LiveDebugValues] Emit the debug entry values
Emit replacements for a clobbered parameter's location if the parameter
has an unmodified value throughout the function. This is the basic scenario
where we can use the debug entry values.

([12/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D58042

llvm-svn: 364553
2019-06-27 15:35:48 +00:00
Djordje Todorovic 7a9ca67fd5 [LiveRangeEdit] Fix build failure caused by the rL364536
llvm-svn: 364549
2019-06-27 14:31:52 +00:00
Simon Pilgrim 83e1a1e79b [TargetLowering] SimplifyDemandedVectorElts - add shift/rotate support.
llvm-svn: 364548
2019-06-27 14:25:54 +00:00
Djordje Todorovic a0d45058eb [DWARF] Handle the DW_OP_entry_value operand
Add the IR and the AsmPrinter parts for handling the DW_OP_entry_value
DWARF operation.

([11/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D60866

llvm-svn: 364542
2019-06-27 13:52:34 +00:00
Simon Pilgrim c692a8dc51 [TargetLowering] SimplifyDemandedBits - use DemandedElts to better identify partial splat shift amounts
llvm-svn: 364541
2019-06-27 13:48:43 +00:00
Djordje Todorovic 71d3869f60 [Backend] Keep call site info valid through the backend
Handle call instruction replacements and deletions in order to preserve
valid state of the call site info of the MachineFunction.

NOTE: If the call site info is enabled for a new target, the assertion in
MachineFunction::DeleteMachineInstr() should help to locate places
where updateCallSiteInfo() should be called in order to preserve a valid
state of the call site info.

([10/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D61062

llvm-svn: 364536
2019-06-27 13:10:29 +00:00
Djordje Todorovic 7eeeb5947e [ISEL][X86] Tracking of registers that forward call arguments
While lowering calls, collect info about registers that forward arguments
into the following function frame. We store such info in the MachineFunction
of the call. This is used very late, when dumping DWARF info about
call site parameters.

([9/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D60715

llvm-svn: 364516
2019-06-27 10:51:15 +00:00
Jeremy Morse d528bcd965 [DebugInfo] Avoid register coalesing unsoundly changing DBG_VALUE locations
Once MIR code leaves SSA form and the liveness of a vreg is considered,
DBG_VALUE insts are able to refer to non-live vregs, because their
debug-uses do not contribute to liveness. This non-liveness becomes
problematic for optimizations like register coalescing, as they can't
``see'' the debug uses in the liveness analyses.

As a result registers get coalesced regardless of debug uses, and that can
lead to invalid variable locations containing unexpected values. In the
added test case, the first vreg operand of ADD32rr is merged with various
copies of the vreg (great for performance), but a DBG_VALUE of the
unmodified operand is blindly updated to the modified operand. This changes
what value the variable will appear to have in a debugger.

Fix this by changing any DBG_VALUE whose operand will be resurrected by
register coalescing to be a $noreg DBG_VALUE, i.e. give the variable no
location. This is an overapproximation as some coalesced locations are
safe (others are not) -- an extra domination analysis would be required to
work out which, and it would be better if we just don't generate non-live
DBG_VALUEs.

This fixes PR40010.

Differential Revision: https://reviews.llvm.org/D56151

llvm-svn: 364515
2019-06-27 10:20:27 +00:00
Diana Picus 74a50a723b [GlobalISel] Remove [un]packRegs from IRTranslator
Remove the last use of packRegs from IRTranslator and delete
pack/unpackRegs. This introduces a fallback to DAGISel for intrinsics
with aggregate arguments, since we don't have a testcase for them so
it's hard to tell how we'd want to handle them.

Discussed in https://reviews.llvm.org/D63551

llvm-svn: 364514
2019-06-27 09:49:07 +00:00
Diana Picus 43fb5ae50c [GlobalISel] Accept multiple vregs for lowerCall's args
Change the interface of CallLowering::lowerCall to accept several
virtual registers for each argument, instead of just one.  This is a
follow-up to D46018.

CallLowering::lowerReturn was similarly refactored in D49660 and
lowerFormalArguments in D63549.

With this change, we no longer pack the virtual registers generated for
aggregates into one big lump before delegating to the target. Therefore,
the target can decide itself whether it wants to handle them as separate
pieces or use one big register.

ARM and AArch64 have been updated to use the passed in virtual registers
directly, which means we no longer need to generate so many
merge/extract instructions.

NFCI for AMDGPU, Mips and X86.

Differential Revision: https://reviews.llvm.org/D63551

llvm-svn: 364512
2019-06-27 09:18:03 +00:00
Diana Picus 8138996128 [GlobalISel] Accept multiple vregs for lowerCall's result
Change the interface of CallLowering::lowerCall to accept several
virtual registers for the call result, instead of just one.  This is a
follow-up to D46018.

CallLowering::lowerReturn was similarly refactored in D49660 and
lowerFormalArguments in D63549.

With this change, we no longer pack the virtual registers generated for
aggregates into one big lump before delegating to the target. Therefore,
the target can decide itself whether it wants to handle them as separate
pieces or use one big register.

ARM and AArch64 have been updated to use the passed in virtual registers
directly, which means we no longer need to generate so many
merge/extract instructions.

NFCI for AMDGPU, Mips and X86.

Differential Revision: https://reviews.llvm.org/D63550

llvm-svn: 364511
2019-06-27 09:15:53 +00:00
Diana Picus c3dbe23977 [GlobalISel] Accept multiple vregs in lowerFormalArgs
Change the interface of CallLowering::lowerFormalArguments to accept
several virtual registers for each formal argument, instead of just one.
This is a follow-up to D46018.

CallLowering::lowerReturn was similarly refactored in D49660. lowerCall
will be refactored in the same way in follow-up patches.

With this change, we forward the virtual registers generated for
aggregates to CallLowering. Therefore, the target can decide itself
whether it wants to handle them as separate pieces or use one big
register. We also copy the pack/unpackRegs helpers to CallLowering to
facilitate this.

ARM and AArch64 have been updated to use the passed in virtual registers
directly, which means we no longer need to generate so many
merge/extract instructions.

AArch64 seems to have had a bug when lowering e.g. [1 x i8*], which was
put into a s64 instead of a p0. Added a test-case which illustrates the
problem more clearly (it crashes without this patch) and fixed the
existing test-case to expect p0.

AMDGPU has been updated to unpack into the virtual registers for
kernels. I think the other code paths fall back for aggregates, so this
should be NFC.

Mips doesn't support aggregates yet, so it's also NFC.

x86 seems to have code for dealing with aggregates, but I couldn't find
the tests for it, so I just added a fallback to DAGISel if we get more
than one virtual register for an argument.

Differential Revision: https://reviews.llvm.org/D63549

llvm-svn: 364510
2019-06-27 08:54:17 +00:00
Diana Picus 69ce1c1319 [GlobalISel] Allow multiple VRegs in ArgInfo. NFC
Allow CallLowering::ArgInfo to contain more than one virtual register.
This is useful when passes split aggregates into several virtual
registers, but need to also provide information about the original type
to the call lowering. Used in follow-up patches.

Differential Revision: https://reviews.llvm.org/D63548

llvm-svn: 364509
2019-06-27 08:50:53 +00:00
Djordje Todorovic a7cde103c1 [MachineFunction] Base support for call site info tracking
Add an attribute into the MachineFunction that tracks call site info.

([8/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D61061

llvm-svn: 364506
2019-06-27 07:48:06 +00:00
Djordje Todorovic 59b39faa18 [IR] Add DISubprogram and DIE for a func decl
A unique DISubprogram may be attached to a function declaration used for
call site debug info.

([6/13] Introduce the debug entry values.)

Co-authored-by: Ananth Sowda <asowda@cisco.com>
Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com>
Co-authored-by: Ivan Baev <ibaev@cisco.com>

Differential Revision: https://reviews.llvm.org/D60713

llvm-svn: 364500
2019-06-27 06:07:41 +00:00
Matt Arsenault 47345534aa PEI: Add default handling of spills to registers
llvm-svn: 364472
2019-06-26 20:56:15 +00:00
Evandro Menezes 42e13c8328 [CodeGen] Improve formatting of jump tables (NFC)
Split jump tables into individual lines and fix spacing.

llvm-svn: 364436
2019-06-26 15:11:31 +00:00
Roman Lebedev b0ecc1cc6b [X86] X86DAGToDAGISel::matchBitExtract(): pattern b: truncation awareness
Summary:
(Not so) boringly identical to pattern a (D62786)
Not yet sure how to deal with the last pattern c.

Reviewers: RKSimon, craig.topper, spatel

Reviewed By: RKSimon

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D62793

llvm-svn: 364418
2019-06-26 12:19:39 +00:00
Clement Courbet 2851248fa1 Revert "r364412 [ExpandMemCmp][MergeICmps] Move passes out of CodeGen into opt pipeline."
Breaks sanitizers:
    libFuzzer :: cxxstring.test
    libFuzzer :: memcmp.test
    libFuzzer :: recommended-dictionary.test
    libFuzzer :: strcmp.test
    libFuzzer :: value-profile-mem.test
    libFuzzer :: value-profile-strcmp.test

llvm-svn: 364416
2019-06-26 12:13:13 +00:00
Chen Zheng aa99952896 [HardwareLoops] NFC - move the checking logic for loops with irreducible control flow to HardwareLoopInfo.
llvm-svn: 364415
2019-06-26 12:02:43 +00:00
Clement Courbet 7b3a5f0e6d [ExpandMemCmp][MergeICmps] Move passes out of CodeGen into opt pipeline.
This allows later passes (in particular InstCombine) to optimize more
cases.

One that's important to us is `memcmp(p, q, constant) < 0` and memcmp(p, q, constant) > 0.

llvm-svn: 364412
2019-06-26 11:50:18 +00:00
Simon Pilgrim a6319e5f83 [DAGCombine] visitEXTRACT_SUBVECTOR - add TODO for extract_subvector(bitcast()) support
We support 'big to little' (e.g. extract_subvector(v16i8 bitcast(v2i64))) but not 'little to big' cases  (e.g. extract_subvector(v2i64 bitcast(v16i8)))

llvm-svn: 364405
2019-06-26 11:17:38 +00:00
Chen Zheng 46ce9e4fff [HardwareLoops] NFC - move the checking logic for loops with irreducible control flow to isHardwareLoopProfitable()
llvm-svn: 364397
2019-06-26 09:12:52 +00:00
QingShan Zhang e0e7d4c366 Teach the DAGCombiner to fold this pattern (c1 and c2 are constants).
// fold (sext (select cond, c1, c2)) -> (select cond, sext c1, sext c2)
// fold (zext (select cond, c1, c2)) -> (select cond, zext c1, zext c2)
// fold (aext (select cond, c1, c2)) -> (select cond, sext c1, sext c2)
Sign extend the operands if it is an any_extend, to keep the signedness of the operands so that the other combine rule still applies. The any_extend is handled as a zero extend for constants, i.e.

t1: i8 = select t0, Constant:i8<-1>, Constant:i8<0>
t2: i64 = any_extend t1
 -->
t3: i64 = select t0, Constant:i64<-1>, Constant:i64<0>
 -->
t4: i64 = sign_extend_inreg t3

Differential Revision: https://reviews.llvm.org/D63318

llvm-svn: 364382
2019-06-26 05:12:53 +00:00
Jinsong Ji fee855b5bc [MachinePipeliner] Fix risky iterator usage R++, --R
When we calculate the MII, we use two loops: one with iterator R++ to
check whether we can reserve the resource, then --R to move the iterator
back to do the reservation.

This is risky, as R++ followed by --R may not end up pointing to the same element at all.
That can cause a wrong MII.

Differential Revision: https://reviews.llvm.org/D63536

llvm-svn: 364353
2019-06-25 21:50:56 +00:00
Philip Reames be0dedb2e1 [Peephole] Allow folding loads into instructions w/multiple uses (such as test64rr)
Peephole opt has a one use limitation which appears to be accidental. The function being used was incorrectly documented as returning whether the def had one *user*, but instead returned true only when there was one *use*. Add a corresponding hasOneNonDbgUser helper, and adjust peephole-opt to use the appropriate one.

All of the actual folding code handles multiple uses within a single instruction. That codepath is well exercised through instruction selection.

Differential Revision: https://reviews.llvm.org/D63656

llvm-svn: 364336
2019-06-25 17:29:18 +00:00
Simon Pilgrim 9762b26032 [DAGCombine] combineRepeatedFPDivisors - recognize -1.0 / X as a reciprocal
Fixes an issue identified by @nemanjai (Nemanja Ivanovic) in D62963 / rL363040: an infinite loop due to GetNegatedExpression fighting combineRepeatedFPDivisors, resulting in fneg(fdiv(x,splat)) -> fneg(fmul(x,1.0/splat)) -> fmul(x,-1.0/splat) -> fmul(x,(-1.0 * 1.0)/splat) -> ...

llvm-svn: 364326
2019-06-25 16:00:16 +00:00
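For context on the entry above (llvm-svn 364326), the repeated-divisor combine involved rewrites several divisions by one value into a single reciprocal plus multiplications. A rough sketch of the shape of that rewrite follows; the variable names are invented, and the rewrite only preserves values under the fast-math style relaxations the combine requires:

    #include <cstdio>

    int main() {
      double a = 3.0, b = 5.0, d = 7.0;   // illustrative values
      // Original form: two expensive divisions by the same divisor.
      double x1 = a / d, x2 = b / d;
      // Combined form: one division, then cheap multiplications.
      double r = 1.0 / d;
      double y1 = a * r, y2 = b * r;
      // Results may differ in the last bit without fast-math assumptions.
      std::printf("%g %g vs %g %g\n", x1, x2, y1, y2);
      return 0;
    }

Recognizing -1.0 / X as just a negated reciprocal keeps GetNegatedExpression and this combine from repeatedly undoing each other's output, which is what produced the infinite loop.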
Sanjay Patel 685c5cbc65 [SDAG] expand ctpop != 1
Change the generic ctpop expansion to more efficiently handle a
check for not-a-power-of-two value:
(ctpop x) != 1 --> (x == 0) || ((x & x-1) != 0)

This is the inverted predicate sibling pattern that was added with:
D63004

This should have been done before I changed IR canonicalization to
favor this form with:
rL364246
...so if this requires revert/changing, the earlier commit may also
need to modified.

llvm-svn: 364319
2019-06-25 14:46:52 +00:00
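A small self-contained check of the expansion in the entry above (llvm-svn 364319), together with the sibling power-of-two form for the inverse predicate; std::popcount stands in for the generic ctpop node, C++20 is assumed, and the loop bound is arbitrary:

    #include <bit>
    #include <cassert>
    #include <cstdint>

    int main() {
      for (uint32_t x = 0; x < 1000000; ++x) {
        // (ctpop x) != 1 --> (x == 0) || ((x & x-1) != 0)
        bool notSingleBit = std::popcount(x) != 1;
        assert(notSingleBit == ((x == 0) || ((x & (x - 1)) != 0)));
        // Inverse predicate: (ctpop x) == 1 --> (x != 0) && ((x & x-1) == 0)
        bool singleBit = std::popcount(x) == 1;
        assert(singleBit == ((x != 0) && ((x & (x - 1)) == 0)));
      }
      return 0;
    }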
Simon Pilgrim 1a18bb6f25 [TargetLowering] SimplifyDemandedBits - add ANY_EXTEND_VECTOR_INREG support
Add 'lowest' demanded elt -> bitcast fold to all *_EXTEND_VECTOR_INREG cases.

Reapplies rL363856.

llvm-svn: 364311
2019-06-25 13:25:57 +00:00
Simon Pilgrim 36953ce769 [TargetLowering] SimplifyDemandedBits ZERO_EXTEND_VECTOR_INREG -> ANY_EXTEND_VECTOR_INREG
Simplify ZERO_EXTEND_VECTOR_INREG if the extended bits are not required.

Matches what we already do for ZERO_EXTEND.

Reapplies rL363850 but now with legality checks added at rL364290

llvm-svn: 364303
2019-06-25 12:57:43 +00:00
Sanjay Patel e4ef62291b [SDAG] improve expansion of ctpop+setcc
This should not cause any visible change in output, but it's
more efficient because we were producing non-canonical 'sub x, 1'
and 'setcc ugt x, 0'. As mentioned in the TODO, we should also
be handling the inverse predicate.

llvm-svn: 364302
2019-06-25 12:49:35 +00:00
Simon Pilgrim 69fc111184 [TargetLowering] SimplifyDemandedBits SIGN_EXTEND_VECTOR_INREG -> ANY/ZERO_EXTEND_VECTOR_INREG
Simplify SIGN_EXTEND_VECTOR_INREG if the extended bits are not required/known zero.

Matches what we already do for SIGN_EXTEND.

Reapplies rL363802 but now with legality checks added at rL364290

llvm-svn: 364299
2019-06-25 12:19:12 +00:00
Simon Pilgrim b23c942ce4 [VectorLegalizer] ExpandANY_EXTEND_VECTOR_INREG/ExpandZERO_EXTEND_VECTOR_INREG - widen source vector
The *_EXTEND_VECTOR_INREG opcodes were relaxed back around rL346784 to support source vector widths that are smaller than the output - it looks like the legalizers were never updated to account for this.

This patch inserts the smaller source vector into an undef vector of the same width of the result before performing the shuffle+bitcast to correctly handle this.

Part of the yak shaving to solve the crashes from rL364264 and rL364272

llvm-svn: 364295
2019-06-25 11:31:37 +00:00
Simon Pilgrim 49b3778e32 [TargetLowering] SimplifyDemandedBits - legal checks for SIGN/ZERO_EXTEND -> ZERO/ANY_EXTEND
As part of the fix for rL364264 + rL364272 - limit the *_EXTEND conversion to !TLO.LegalOperations || isOperationLegal cases.

We'll improve X86 legality in future commits.

llvm-svn: 364290
2019-06-25 10:51:15 +00:00
Roman Lebedev cdd43eac4f [Codegen] TargetLowering::SimplifySetCC(): omit urem when possible
Summary:
This addresses the regression that is being exposed by D50222 in `test/CodeGen/X86/jump_sign.ll`
The missing fold, at least partially, looks trivial:
https://rise4fun.com/Alive/Zsln
i.e. if we are comparing with zero, and comparing the `urem`-by-non-power-of-two,
and the `urem` is of something that may at most have a single bit set (or no bits set at all),
the `urem` is not needed.

Reviewers: RKSimon, craig.topper, xbolva00, spatel

Reviewed By: xbolva00, spatel

Subscribers: xbolva00, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63390

llvm-svn: 364286
2019-06-25 10:01:42 +00:00
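A standalone illustration of the fold described in the entry above (llvm-svn 364286): when the left-hand side can have at most one bit set and the divisor is not a power of two, the urem contributes nothing to the comparison with zero. The test ranges below are arbitrary:

    #include <cassert>
    #include <cstdint>

    static bool isPowerOfTwo(uint32_t v) { return v != 0 && (v & (v - 1)) == 0; }

    int main() {
      for (uint32_t C = 3; C < 4096; ++C) {
        if (isPowerOfTwo(C))
          continue;                  // the fold needs a non-power-of-two divisor
        for (int k = -1; k < 32; ++k) {
          // x has at most a single bit set: it is 0 or a power of two, and a
          // non-power-of-two C can never evenly divide a power of two.
          uint32_t x = (k < 0) ? 0u : (1u << k);
          assert(((x % C) == 0) == (x == 0));   // the urem can be dropped
        }
      }
      return 0;
    }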
Clement Courbet 3bc5ad551a [ExpandMemCmp] Move all options to TargetTransformInfo.
Split off from D60318.

llvm-svn: 364281
2019-06-25 08:04:13 +00:00
Craig Topper 079924b0b7 Revert r363802, r363850, and r363856 "[TargetLowering] SimplifyDemandedBits..."
This reverts the following patches.
"[TargetLowering] SimplifyDemandedBits SIGN_EXTEND_VECTOR_INREG -> ANY/ZERO_EXTEND_VECTOR_INREG"
"[TargetLowering] SimplifyDemandedBits ZERO_EXTEND_VECTOR_INREG -> ANY_EXTEND_VECTOR_INREG"
"[TargetLowering] SimplifyDemandedBits - add ANY_EXTEND_VECTOR_INREG support"

We can end up with an any_extend_vector_inreg with a 256 bit result type
and a 128 bit result type. This is allowed by the ISD opcode, but the
generic operation legalizer is only able to expand cases where the
total vector width is the same.

The X86 backend creates these mismatched cases for zext_vec_inreg/sext_vec_inreg.
The SimplifyDemandedBits changes are allowing those nodes to become
aext_vec_inreg. For the zext/sext cases, the X86 backend has Custom
handling and never lets them get to the generic legalizer. We need to do the same
for aext_vec_inreg.

llvm-svn: 364264
2019-06-25 01:32:42 +00:00
Roland Froese ea08248b2b [CodeGen] Add missing vector type legalization for ctlz_zero_undef
Widen vector result type for ctlz_zero_undef and cttz_zero_undef the same as
ctlz and cttz.

Differential Revision: https://reviews.llvm.org/D63463

llvm-svn: 364221
2019-06-24 19:27:07 +00:00
Matt Arsenault faeaedf8e9 GlobalISel: Remove unsigned variant of SrcOp
Force using Register.

One downside is that the generated register enums require explicit
conversion.

llvm-svn: 364194
2019-06-24 16:16:12 +00:00
Matt Arsenault e3a676e9ad CodeGen: Introduce a class for registers
Avoids using a plain unsigned for registers throughout codegen.
Doesn't attempt to change every register use, just something a little
more than the set needed to build after changing the return type of
MachineOperand::getReg().

llvm-svn: 364191
2019-06-24 15:50:29 +00:00
Simon Pilgrim 69144a925e [DAGCombine] visitMUL - allow shift by zero in MulByConstant.
This can occur under certain circumstances when undefs are created later on in the constant multipliers (e.g. in this case due to SimplifyDemandedVectorElts). It's better to let the shift by zero occur and perform any cleanup afterward.

Fixes OSS Fuzz #15429

llvm-svn: 364179
2019-06-24 12:47:17 +00:00
Fangrui Song f955d5f623 SlotIndexes: delete unused functions
llvm-svn: 364154
2019-06-23 16:05:29 +00:00
Fangrui Song 6620e3b2f6 SlotIndexes: simplify IdxMBBPair operators
llvm-svn: 364152
2019-06-23 13:16:03 +00:00
Craig Topper 6ddc7912b0 [SelectionDAG] Remove the code that attempts to calculate the alignment for the second half of a split masked load/store.
The code divides the alignment by 2 if the original alignment is
equal to the original VT size. But this wouldn't be correct
if the alignment was larger than the VT size.

The memory operand object already takes care of calling MinAlign
on the base alignment and the memory pointer offset. So we don't
need any special code at all.

llvm-svn: 364151
2019-06-23 07:00:46 +00:00
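A hedged illustration of why the special case removed in the entry above (llvm-svn 364151) was unnecessary: the alignment that can be guaranteed at base plus offset is the largest power of two dividing both the base alignment and the offset, which is what the memory operand machinery already computes. The helper below is a standalone restatement for the example, not the LLVM utility itself:

    #include <cassert>
    #include <cstdint>

    // Largest power of two dividing both A and B (B == 0 yields A itself).
    static uint64_t minAlign(uint64_t A, uint64_t B) {
      uint64_t Bits = A | B;
      return Bits & (~Bits + 1);   // isolate the lowest set bit
    }

    int main() {
      // A 16-byte access with 16-byte alignment split in half: the second
      // half starts at offset 8, so halving the alignment to 8 was correct.
      assert(minAlign(16, 8) == 8);
      // With a 64-byte base alignment, the second half at offset 8 is still
      // only 8-byte aligned, so a blanket divide-by-two rule does not
      // generalize, while the offset-based rule handles it naturally.
      assert(minAlign(64, 8) == 8);
      return 0;
    }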
Fangrui Song 43e14390b0 Make GlobalISel depend on SelectionDAG after D63169
GlobalISel/IRTranslator.cpp now references SelectionDAG/FunctionLoweringInfo.cpp.
This fixes a link error in -DBUILD_SHARED_LIBS=on builds:

    ld.lld: error: undefined symbol: llvm::FunctionLoweringInfo::clear()
    >>> referenced by IRTranslator.cpp:2198 (../lib/CodeGen/GlobalISel/IRTranslator.cpp:2198)
    >>>               lib/CodeGen/GlobalISel/CMakeFiles/LLVMGlobalISel.dir/IRTranslator.cpp.o:(llvm::IRTranslator::finalizeFunction())

llvm-svn: 364124
2019-06-22 01:30:17 +00:00
Amara Emerson fe4625fb24 [GlobalISel][IRTranslator] Change switch table translation to generate jump tables and range checks.
This change makes use of the newly refactored SwitchLoweringUtils code from
SelectionDAG in order to generate jump tables and range checks where appropriate.

Much of this code is ported from SDAG with some modifications. We generate
G_JUMP_TABLE and G_BRJT instructions when JT opportunities are found. This means
that targets which previously relied on the naive one MBB per case stmt
translation will now start falling back until they add support for the new opcodes.

For range checks, we don't generate any previously unused operations. This
just recognizes contiguous ranges of case values and generates a single block per
range. Single case value blocks are just a special case of ranges so we get that
support almost for free.

There are still some optimizations missing that I haven't ported over, and
bit-tests are also unimplemented. This patch series is already complex enough.

Actual arm64 support for selection of jump tables is coming in a later patch.

Differential Revision: https://reviews.llvm.org/D63169

llvm-svn: 364085
2019-06-21 18:10:38 +00:00
Simon Pilgrim 0da13ed1f6 [DAGCombine] narrowExtractedVectorBinOp - pull out repeated getOpcode(). NFCI.
llvm-svn: 364076
2019-06-21 16:44:51 +00:00
Simon Pilgrim ca9933c22d [DAGCombine] narrowInsertExtractVectorBinOp - reuse "extract from insert" detection code.
Move the "extract from insert detection code" into a lambda helper function.

llvm-svn: 364059
2019-06-21 14:46:21 +00:00
Fangrui Song dc8de6037c Simplify std::lower_bound with llvm::{bsearch,lower_bound}. NFC
llvm-svn: 364006
2019-06-21 05:40:31 +00:00
Amara Emerson bc0d08e0ee [GlobalISel][Localizer] Allow localization of G_INTTOPTR and chains of instructions.
G_INTTOPTR can prevent the localizer from moving G_CONSTANTs, but since it's
essentially a side effect free cast instruction we can remat both instructions.
This patch changes the localizer to enable localization of the chains by
iterating over the entry block instructions in reverse order. That way, uses will
be localized first, and then the defs are free to be localized as well.

This also changes the previous SmallPtrSet of localized instructions to use a
SetVector instead. We're dealing with pointers and need deterministic iteration
order.

Overall, this change improves ARM64 -O0 CTMark code size by around 0.7% geomean.

Differential Revision: https://reviews.llvm.org/D63630

llvm-svn: 364001
2019-06-21 00:36:19 +00:00
Simon Pilgrim 801c0f12b0 [DAGCombiner] Use getAPIntValue() instead of getZExtValue() where possible.
Better handling of out-of-i64-range values due to large integer types or from fuzz tests.

llvm-svn: 363955
2019-06-20 17:36:23 +00:00
Jordan Rupprecht 02508decf4 [DAGCombiner][NFC] Remove unused var
llvm-svn: 363954
2019-06-20 17:30:01 +00:00
Amy Huang 7fac5c8d94 Store a pointer to the return value in a static alloca and let the debugger use that
as the variable address for NRVO variables.

Subscribers: hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D63361

llvm-svn: 363952
2019-06-20 17:15:21 +00:00
Evandro Menezes aa10f05044 [CodeGen] Fix formatting and comments (NFC)
llvm-svn: 363947
2019-06-20 16:34:00 +00:00
Simon Pilgrim 1d8093249f [DAGCombiner] Support (shl (zext (srl x, C)), C) -> (zext (shl (srl x, C), C)) non-uniform folds.
Use matchBinaryPredicate instead of isConstOrConstSplat to let us handle non-uniform shift cases. 

llvm-svn: 363929
2019-06-20 14:42:27 +00:00
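The scalar identity behind the fold in the entry above (llvm-svn 363929), spot-checked for an i8 value zero-extended to i32. The concrete widths and the exhaustive loop are illustrative; the patch itself only extends the existing fold to vectors whose per-element shift amounts differ:

    #include <cassert>
    #include <cstdint>

    int main() {
      for (uint32_t v = 0; v < 256; ++v) {
        uint8_t x = uint8_t(v);
        for (uint32_t c = 0; c < 8; ++c) {
          // (shl (zext (srl x, C)), C), computed in the wide type.
          uint32_t wide = uint32_t(uint8_t(x >> c)) << c;
          // (zext (shl (srl x, C), C)): the narrow shl cannot lose bits,
          // because the srl already cleared the top C bits of x.
          uint32_t narrow = uint32_t(uint8_t(uint8_t(x >> c) << c));
          assert(wide == narrow);
        }
      }
      return 0;
    }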
Simon Pilgrim 98a0ac5c0f [DAGCombine] Add TODOs for some combines that should support non-uniform vectors
We tend to only test for scalar/scalar consts when really we could support non-uniform vectors using ISD::matchUnaryPredicate/matchBinaryPredicate etc.

llvm-svn: 363924
2019-06-20 12:48:49 +00:00
Simon Pilgrim a487628270 [DAGCombine] Reduce scope of ShAmtVal variable. NFCI.
Fixes cppcheck warning.

Use the more capable getAPIntValue() instead of getZExtValue() as well since I'm here.

llvm-svn: 363921
2019-06-20 10:56:37 +00:00
Petar Avramovic 153bd24eda [MIPS GlobalISel] Select integer to floating point conversions
Select G_SITOFP and G_UITOFP for MIPS32.

Differential Revision: https://reviews.llvm.org/D63542

llvm-svn: 363912
2019-06-20 09:05:02 +00:00
Petar Avramovic 4b4dae1c76 [MIPS GlobalISel] Select floating point to integer conversions
Select G_FPTOSI and G_FPTOUI for MIPS32.

Differential Revision: https://reviews.llvm.org/D63541

llvm-svn: 363911
2019-06-20 08:52:53 +00:00
Simon Pilgrim 046d49a8dc [DAGCombine] Use ConstantSDNode::getAPIntValue() instead of getZExtValue().
Use getAPIntValue() in a few more places. Most of the time getZExtValue() is fine, but occasionally there's fuzzed code or someone decides to create i65536 or something.....

llvm-svn: 363887
2019-06-19 22:14:24 +00:00
Simon Pilgrim f05369768c [TargetLowering] SimplifyDemandedBits - add ANY_EXTEND_VECTOR_INREG support
Move 'lowest' demanded elt -> bitcast fold out of ZERO_EXTEND_VECTOR_INREG into ANY_EXTEND_VECTOR_INREG case.

llvm-svn: 363856
2019-06-19 18:34:58 +00:00
Simon Pilgrim 6016fb726c [TargetLowering] SimplifyDemandedBits ZERO_EXTEND_VECTOR_INREG -> ANY_EXTEND_VECTOR_INREG
Simplify ZERO_EXTEND_VECTOR_INREG if the extended bits are not required.

Matches what we already do for ZERO_EXTEND.

llvm-svn: 363850
2019-06-19 18:00:24 +00:00
Simon Pilgrim c3994f77cb [TargetLowering] SimplifyDemandedBits SIGN_EXTEND_VECTOR_INREG -> ANY/ZERO_EXTEND_VECTOR_INREG
Simplify SIGN_EXTEND_VECTOR_INREG if the extended bits are not required/known zero.

Matches what we already do for SIGN_EXTEND.

llvm-svn: 363802
2019-06-19 13:58:02 +00:00
Simon Pilgrim 9eed5d2f78 [DAGCombiner] Support (shl (ext (shl x, c1)), c2) -> (shl (ext x), (add c1, c2)) non-uniform folds.
Use matchBinaryPredicate instead of isConstOrConstSplat to let us handle non-uniform shift cases. 

llvm-svn: 363793
2019-06-19 12:41:37 +00:00
Simon Pilgrim 8c49366c9b [DAGCombiner] Support (shl (ext (shl x, c1)), c2) -> 0 non-uniform folds.
Use matchBinaryPredicate instead of isConstOrConstSplat to let us handle non-uniform shift cases. 

This requires us to tweak matchBinaryPredicate to allow it to (optionally) handle constants with different type widths.

llvm-svn: 363792
2019-06-19 12:25:29 +00:00
Simon Pilgrim bb6b856183 [DAGCombiner] visitSHL - pull out repeated shift amount VT. NFCI.
llvm-svn: 363789
2019-06-19 11:31:26 +00:00
Simon Pilgrim d954a53633 [DAGCombine] Fix (shl (ext (shl x, c1)), c2) -> (shl (ext x), (add c1, c2)) comment. NFCI.
We pre-extend, not post.

llvm-svn: 363787
2019-06-19 11:17:48 +00:00
Chen Zheng c5b918de58 [NFC] move some hardware loop checking code to a common place for other uses.
Differential Revision: https://reviews.llvm.org/D63478

llvm-svn: 363758
2019-06-19 01:26:31 +00:00
Matt Arsenault 9cac4e6d14 Rename ExpandISelPseudo->FinalizeISel, delay register reservation
This allows targets to make more decisions about reserved registers
after isel. For example, now it should be certain there are calls or
stack objects in the frame or not, which could have been introduced by
legalization.

Patch by Matthias Braun

llvm-svn: 363757
2019-06-19 00:25:39 +00:00
Amara Emerson d11ea2c8c5 [GlobalISel][Localizer] Remove redundant set lookup.
After changing the algorithm to only process the entry block we never revisit
a processed instruction.

llvm-svn: 363745
2019-06-18 22:08:40 +00:00
Jinsong Ji ba43840bfe [MachinePipeliner][NFC] Do resource tracking log only when requested.
In most cases we don't need resource tracking debug output,
so leave it off by default.

llvm-svn: 363733
2019-06-18 20:24:49 +00:00
Simon Pilgrim 5bef886cd8 [TargetLowering] SimplifyDemandedBits - Cleanup ANY_EXTEND handling
Match SIGN_EXTEND + ZERO_EXTEND handling - will be adding ANY_EXTEND_VECTOR_INREG support in a future patch.

llvm-svn: 363716
2019-06-18 18:22:30 +00:00
Simon Pilgrim 032b54f8e8 [TargetLowering] SimplifyDemandedBits - Merge ZERO_EXTEND+ZERO_EXTEND_VECTOR_INREG handling
Other than adding consistent demanded elts handling which was a trivial addition, the other differences in functionality will be added in later patches.

llvm-svn: 363713
2019-06-18 18:08:30 +00:00
Simon Pilgrim b6e7108dcd [TargetLowering] SimplifyDemandedBits - Merge SIGN_EXTEND+SIGN_EXTEND_VECTOR_INREG handling
Other than adding consistent demanded elts handling which was a trivial addition, the other differences in functionality will be added in later patches.

llvm-svn: 363710
2019-06-18 17:57:53 +00:00
Simon Pilgrim 9aa25be149 [TargetLowering] SimplifyDemandedVectorElts - support MUL and ANY_EXTEND_VECTOR_INREG
Also fold ANY_EXTEND_VECTOR_INREG -> BITCAST if we only need the bottom element.

Fixes temporary regression introduced in rL363693.

llvm-svn: 363694
2019-06-18 15:49:35 +00:00
Simon Pilgrim 83bacd8d72 [SelectionDAG] Legalize vaargs that require vector splitting
This adds vector splitting for vaarg instructions during type legalization

Committed on behalf of @luke (Luke Lau)

Differential Revision: https://reviews.llvm.org/D60762

llvm-svn: 363671
2019-06-18 12:24:02 +00:00
Tom Stellard 1f7f64665c GlobalISel: Remove redundant pass initialization
Summary:
All the GlobalISel passes are initialized when the target calls
initializeGlobalISel(), so we don't need to call the initializers
from the pass constructors.

Reviewers: qcolombet, t.p.northover, paquette, dsanders, aemerson, aditya_nandakumar

Reviewed By: aemerson

Subscribers: rovka, kristof.beyls, hiraditya, volkan, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63235

llvm-svn: 363642
2019-06-18 02:05:06 +00:00
Matt Arsenault 5a321b899e GlobalISel: Use the original flags when lowering fneg to fsub
This was ignoring the flag on fneg, and using the source instruction's
flags. Also fixes tests missing from r358702.

Note the expansion itself isn't correct without nnan, but that should
be fixed separately.

llvm-svn: 363637
2019-06-17 23:48:43 +00:00
Peter Collingbourne fb9ce100d1 hwasan: Add a tag_offset DWARF attribute to instrumented stack variables.
The goal is to improve hwasan's error reporting for stack use-after-return by
recording enough information to allow the specific variable that was accessed
to be identified based on the pointer's tag. Currently we record the PC and
lower bits of SP for each stack frame we create (which will eventually be
enough to derive the base tag used by the stack frame) but that's not enough
to determine the specific tag for each variable, which is the stack frame's
base tag XOR a value (the "tag offset") that is unique for each variable in
a function.

In IR, the tag offset is most naturally represented as part of a location
expression on the llvm.dbg.declare instruction. However, the presence of the
tag offset in the variable's actual location expression is likely to confuse
debuggers which won't know about tag offsets, and moreover the tag offset
is not required for a debugger to determine the location of the variable on
the stack, so at the DWARF level it is represented as an attribute so that
it will be ignored by debuggers that don't know about it.

Differential Revision: https://reviews.llvm.org/D63119

llvm-svn: 363635
2019-06-17 23:39:41 +00:00
Amara Emerson 146882242f [GlobalISel][Localizer] Rewrite localizer to run in 2 phases, inter & intra block.
Inter-block localization is the same as what currently happens, except now it
only runs on the entry block because that's where the problematic constants with
long live ranges come from.

The second phase is a new intra-block localization phase which attempts to
re-sink the already localized instructions further right before one of the
multiple uses.

One additional change is to also localize G_GLOBAL_VALUE as they're constants
too. However, on some targets like arm64 it takes multiple instructions to
materialize the value, so some additional heuristics with a TTI hook have been
introduced to attempt to prevent code size regressions when localizing these.

Overall, these changes improve CTMark code size on arm64 by 1.2%.

Full code size results:

Program                                         baseline       new       diff
------------------------------------------------------------------------------
 test-suite...-typeset/consumer-typeset.test    1249984      1217216     -2.6%
 test-suite...:: CTMark/ClamAV/clamscan.test    1264928      1232152     -2.6%
 test-suite :: CTMark/SPASS/SPASS.test          1394092      1361316     -2.4%
 test-suite...Mark/mafft/pairlocalalign.test    731320       714928      -2.2%
 test-suite :: CTMark/lencod/lencod.test        1340592      1324200     -1.2%
 test-suite :: CTMark/kimwitu++/kc.test         3853512      3820420     -0.9%
 test-suite :: CTMark/Bullet/bullet.test        3406036      3389652     -0.5%
 test-suite...ark/tramp3d-v4/tramp3d-v4.test    8017000      8016992     -0.0%
 test-suite...TMark/7zip/7zip-benchmark.test    2856588      2856588      0.0%
 test-suite...:: CTMark/sqlite3/sqlite3.test    765704       765704       0.0%
 Geomean difference                                                      -1.2%

Differential Revision: https://reviews.llvm.org/D63303

llvm-svn: 363632
2019-06-17 23:20:29 +00:00
Michael Berg f9bff2a55e Propagate fmf in IRTranslate for fneg
Summary: This case is related to D63405 in that we need to be propagating FMF on negates.

Reviewers: volkan, spatel, arsenm

Reviewed By: arsenm

Subscribers: wdng, javed.absar

Differential Revision: https://reviews.llvm.org/D63458

llvm-svn: 363631
2019-06-17 23:19:40 +00:00
Daniel Sanders 184c8ee920 [globalisel] Fix iterator invalidation in the extload combines
Summary:
Change the way we deal with iterator invalidation in the extload combines as it
was still possible to neglect to visit a use. Even worse, it happened in the
in-tree test cases and the checks weren't good enough to detect it.

We now take a cheap copy of the use list before iterating over it. This
prevents iterator invalidation from occurring and has the nice side effect
of making the existing schedule-for-erase/schedule-for-insert mechanism
moot.

Reviewers: aditya_nandakumar

Reviewed By: aditya_nandakumar

Subscribers: rovka, kristof.beyls, javed.absar, volkan, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D61813

llvm-svn: 363616
2019-06-17 20:56:31 +00:00
Matt Arsenault 3e140066bc GlobalISel: Ignore callsite attributes when picking intrinsic type
A target intrinsic may be defined as possibly reading memory, but the
call site may have additional knowledge that it doesn't read
memory. The intrinsic lowering will expect the pessimistic assumption
of the intrinsic definition, so the chain should still be used.

I fixed the same bug in SelectionDAG in r287593.

llvm-svn: 363580
2019-06-17 17:01:35 +00:00
Matt Arsenault a7f09f3c9e GlobalISel: Verify intrinsics
I keep using the wrong instruction when manually writing tests. This
really needs to check the number of operands, but I don't see an easy
way to do that right now.

llvm-svn: 363579
2019-06-17 17:01:32 +00:00
Whitney Tsang 15b7f5b72d PHINode: introduce setIncomingValueForBlock() function, and use it.
Summary:
There is PHINode::getBasicBlockIndex() and PHINode::setIncomingValue()
but no function to replace incoming value for a specified BasicBlock*
predecessor.
Clearly, there are a lot of places that could use that functionality.

Reviewer: craig.topper, lebedev.ri, Meinersbur, kbarton, fhahn
Reviewed By: Meinersbur, fhahn
Subscribers: fhahn, hiraditya, zzheng, jsji, llvm-commits
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D63338

llvm-svn: 363566
2019-06-17 14:38:56 +00:00
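A brief usage sketch of the helper introduced in the entry above (llvm-svn 363566); the wrapper function and variable names below are invented for illustration:

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // Replace the value that Phi receives along the edge from Pred.
    void replaceIncoming(PHINode *Phi, BasicBlock *Pred, Value *NewV) {
      // Previously callers had to pair the two existing accessors by hand:
      //   Phi->setIncomingValue(Phi->getBasicBlockIndex(Pred), NewV);
      Phi->setIncomingValueForBlock(Pred, NewV);
    }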
Sam Parker 1bd3d00e7e [CodeGen] Check for HardwareLoop Latch ExitBlock
The HardwareLoops pass finds exit blocks with a scevable exit count.
If the target specifies to update the loop counter in a register,
through a phi, we need to ensure that the exit block is a latch so
that we can insert the phi with the correct value for the incoming
edge.

Differential Revision: https://reviews.llvm.org/D63336

llvm-svn: 363556
2019-06-17 13:39:28 +00:00
Luis Marques 2e46312ffd [DAGCombiner] [CodeGenPrepare] More comprehensive GEP splitting
Some GEPs were not being split, presumably because that split would just be 
undone by the DAGCombiner. Not performing those splits can prevent important 
optimizations, such as preventing the element indices / member offsets from 
being (partially) folded into load/store instruction immediates. This patch:

- Makes the splits also occur in the cases where the base address and the GEP 
  are in the same BB.
- Ensures that the DAGCombiner doesn't reassociate them back again.

Differential Revision: https://reviews.llvm.org/D60294

llvm-svn: 363544
2019-06-17 10:54:12 +00:00
Simon Pilgrim ef78e55205 [SelectionDAG] Fold insert_subvector(undef, extract_subvector(v, c), c) -> v in getNode
This is already done in DAGCombiner::visitINSERT_SUBVECTOR, but this helps a number of shuffles across different vector widths recognise when they come from the same source.

llvm-svn: 363542
2019-06-17 10:14:52 +00:00
Sander de Smalen 5d6ee76c16 Describe stack-id as an enum
This patch changes MIR stack-id from an integer to an enum,
and adds printing/parsing support for this in MIR files. The default
stack-id '0' is now renamed to 'default'.

This should make MIR tests that have stack objects with different stack-ids
more descriptive. It also clarifies code operating on StackID.

Reviewers: arsenm, thegameg, qcolombet

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D60137

llvm-svn: 363533
2019-06-17 09:13:29 +00:00
Sanjay Patel c8d88ad1a9 [CodeGenPrepare][x86] shift both sides of a vector select when profitable
This is based on the example/discussion in PR37428:
https://bugs.llvm.org/show_bug.cgi?id=37428

Proper vector shift instructions don't appear until AVX2, so we may generate several
extra instructions within a loop trying to compensate for that. It's difficult to
recover from that shift expansion later than this, so use the existing TLI hook and
splat analysis to enable better codegen.

This extends CGP functionality introduced with:
rL201655

Differential Revision: https://reviews.llvm.org/D63233

llvm-svn: 363511
2019-06-16 15:29:03 +00:00
Michael Berg ad6bb86b2d adding more fmf propagation for selects plus updated tests
llvm-svn: 363484
2019-06-15 04:53:51 +00:00
Fangrui Song 968b5f84af Revert "adding more fmf propagation for selects plus tests"
This reverts rL363474. -debug-only=isel was added to some tests that
don't specify `REQUIRES: asserts`. This causes failures on
-DLLVM_ENABLE_ASSERTIONS=off builds.

I chose to revert instead of fixing the tests because I'm not sure
whether we should add `REQUIRES: asserts` to more tests.

llvm-svn: 363482
2019-06-15 03:51:08 +00:00
Matt Arsenault 9487278010 Reapply "GlobalISel: Avoid producing Illegal copies in RegBankSelect"
This reapplies r363410, avoiding null dereference if there is no
AltRegBank.

llvm-svn: 363478
2019-06-15 00:33:26 +00:00
Mitch Phillips 0d44f129bb Revert "GlobalISel: Avoid producing Illegal copies in RegBankSelect"
This patch breaks UBSan build bots. See
https://github.com/google/sanitizers/wiki/SanitizerBotReproduceBuild for
a guide as to how to reproduce the error.

This reverts commit c2864c0de0.
This reverts rL363410.

llvm-svn: 363476
2019-06-14 23:45:34 +00:00
Michael Berg 69394bedc5 adding more fmf propagation for selects plus tests
llvm-svn: 363474
2019-06-14 23:30:52 +00:00
Guozhi Wei d2210af332 [MBP] Move a latch block with conditional exit and multi predecessors to top of loop
Currently findBestLoopTop can find and move one kind of block to the top: a latch block that has one successor. Another common case is:

    * a latch block
    * it has two successors, one is the loop header and the other is the exit
    * it has more than one predecessor

If it is below one of its predecessors P, only P can fall through to it; all other predecessors need a jump to it and another conditional jump to the loop header. If it is moved before the loop header, all its predecessors jump to it and then fall through to the loop header. So every predecessor except P saves one taken branch.

Differential Revision: https://reviews.llvm.org/D43256

llvm-svn: 363471
2019-06-14 23:08:59 +00:00
Amara Emerson f79d3bc724 [GlobalISel] Add a G_BRJT opcode.
This is a branch opcode that takes a jump table pointer, jump table index and an
index into the table to do an indirect branch.

We pass both the table pointer and JTI to allow targets like ARM64 to more
easily use the existing jump table compression optimization without having to
walk up the block to find a paired G_JUMP_TABLE.

Differential Revision: https://reviews.llvm.org/D63159

llvm-svn: 363434
2019-06-14 17:55:48 +00:00
Matt Arsenault c2864c0de0 GlobalISel: Avoid producing Illegal copies in RegBankSelect
Avoid producing illegal register bank copies for reg_sequence and
phi. The default implementation assumes it is possible to pick any
operand's bank and use that for the result, introducing a copy for
operands with a different bank. This does not check for illegal
copies. It is not legal to introduce a VGPR->SGPR copy, so any VGPR
operand requires the result to be a VGPR.

The changes in getInstrMappingImpl aren't strictly necessary, since
AMDGPU now just bypasses this for reg_sequence/phi. This could be
replaced with an assert in case other targets run into this. It is
currently responsible for producing the error for unsatisfiable
copies, but this will be better served with a verifier check.

For phis, for now assume any undetermined operands must be
VGPRs. Eventually, this needs to be able to defer mapping these
operations. This also does not yet have a way to check for whether the
block is in a divergent region.

llvm-svn: 363410
2019-06-14 15:22:25 +00:00
Sanjay Patel 7ea378b940 [CodeGenPrepare] propagate debuginfo when copying a shuffle
llvm-svn: 363409
2019-06-14 15:05:35 +00:00
Matt Arsenault 731a81598e RegBankSelect: Remove checks for invalid mappings
Avoid a check for valid and a set of redundant asserts. The place
InstructionMapping is constructed asserts all of the default fields
are passed anyway for an invalid mapping, so don't overcomplicate
this.

llvm-svn: 363391
2019-06-14 13:42:40 +00:00
Matt Arsenault 3062e87a1e Fix not calling TargetCustom PSVs printer
If the enum value was greater than the starting target custom value,
the custom printer wasn't called.

llvm-svn: 363386
2019-06-14 13:26:34 +00:00
David Blaikie 4129e3e0f8 DebugInfo: Include enumerators in pubnames
This is consistent with GCC's behavior (which is the de facto standard
for pubnames). Though I find the presence of enumerators from enum
classes to be a bit confusing, possibly a bug on GCC's end (since they
can't be named unqualified, unlike the other names - and names nested in
classes don't go in pubnames, for instance - presumably because one must
name the class first & that's enough to limit the scope of the search)

llvm-svn: 363349
2019-06-14 01:58:56 +00:00
Amy Huang 49275272e3 Use fully qualified name when printing S_CONSTANT records
Summary:
Before it was using the fully qualified name only for static data members.
Now it does for all variable names to match MSVC.

Reviewers: rnk

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63012

llvm-svn: 363335
2019-06-13 22:53:43 +00:00
Amara Emerson fb0a40f064 [GlobalISel][IRTranslator] Add debug loc with line 0 to constants emitted into the entry block.
Constants, including G_GLOBAL_VALUE, are all emitted into the entry block which
lets us use the vreg def assuming it dominates all other users. However, it can
cause jumpy debug behaviour since the DebugLoc attached to these MIs are from
a user instruction that could be in a different block.

Fixes PR40887.

Differential Revision: https://reviews.llvm.org/D63286

llvm-svn: 363331
2019-06-13 22:15:35 +00:00
Jinsong Ji 1c88445840 [MachinePipeliner] Don't check boundary node in checkValidNodeOrder
This was exposed by PowerPC target enablement.

In ScheduleDAG, if we haven't seen any uses in this scheduling region,
we will create a dependence edge to ExitSU to model the live-out latency.
This is required for vreg defs with no in-region use, and prefetches with
no vreg def.

When we build NodeOrder in Scheduler, we ignore these boundary nodes.
However, when we check Succs in checkValidNodeOrder, we did not skip
them, so we still assume all the nodes have been sorted and are in order in
the Indices array. So when we call lower_bound() for ExitSU, it will return
Indices.end(), causing memory issues in the following Node access.

Differential Revision: https://reviews.llvm.org/D63282

llvm-svn: 363329
2019-06-13 21:51:12 +00:00
David Bolvansky 896ece41e4 [Codegen] Merge tail blocks with no successors after block placement
Summary:
I found the following case in which tail blocks with no successors have merging opportunities after block placement.

Before block placement:

bb0:
    ...
    bne a0, 0, bb2:

bb1:
    mv a0, 1
    ret 

bb2:
    ...

bb3:
    mv a0, 1
    ret

bb4:
    mv a0, -1
    ret

The conditional branch bne in bb0 is the opposite of beq, so it can be inverted.

After block placement:

bb0:
    ...
    beq a0, 0, bb1

bb2:
    ...

bb4:
    mv a0, -1
    ret

bb1:
    mv a0, 1
    ret

bb3:
    mv a0, 1
    ret

After block placement, a new tail merging opportunity appears: bb1 and bb3 can be merged into one block. So the conditional constraint for merging tail blocks with no successors should be removed. In my experiments on RISC-V, this decreases code size.


Author of original patch: Jim Lin

Reviewers: haicheng, aheejin, craig.topper, rnk, RKSimon, Jim, dmgreen

Reviewed By: Jim, dmgreen

Subscribers: xbolva00, dschuff, javed.absar, sbc100, jgravelle-google, aheejin, kito-cheng, dmgreen, PkmX, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D54411

llvm-svn: 363284
2019-06-13 18:11:32 +00:00
David Stenberg 1278a19282 Remove ';' after namespace's closing bracket [NFC]
llvm-svn: 363267
2019-06-13 14:02:55 +00:00
Diogo N. Sampaio 0be2d25ecc [FIX] Forces shrink wrapping to consider any memory access as aliasing with the stack
Summary:
Related bug: https://bugs.llvm.org/show_bug.cgi?id=37472

The shrink wrapping pass prematurely restores the stack, at a point where the stack might still be accessed.
Taking an exception can cause the stack to be corrupted.

As a first approach, this patch is overly conservative, assuming that any instruction that may load or store could access
the stack.

Reviewers: dmgreen, qcolombet

Reviewed By: qcolombet

Subscribers: simpal01, efriedma, eli.friedman, javed.absar, llvm-commits, eugenis, chill, carwil, thegameg

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63152

llvm-svn: 363265
2019-06-13 13:56:19 +00:00
Jeremy Morse d2cd9c23b4 [NFC] Sink a function call into LiveDebugValues::process
This was requested in D62904, which I successfully missed. This is just
a refactor and shouldn't change any behaviour.

llvm-svn: 363259
2019-06-13 13:11:57 +00:00
Simon Pilgrim 6b56ad164c [CodeGen] Add getMachineMemOperand + MachineMemOperand::Flags allocator helper wrapper. NFCI.
Pre-commit for D62726 on behalf of @luke (Luke Lau)

llvm-svn: 363257
2019-06-13 12:58:55 +00:00
Jeremy Morse bf2b2f08b0 [DebugInfo] Honour variable fragments in LiveDebugValues
This patch makes the LiveDebugValues pass consider fragments when propagating
DBG_VALUE insts between blocks, fixing PR41979. Fragment info for a variable
location is added to the open-ranges key, which allows distinct fragments to be
tracked separately. To handle overlapping fragments things become slightly
funkier. To avoid excessive searching for overlaps in the data-flow part of
LiveDebugValues, this patch:
 * Pre-computes pairings of fragments that overlap, for each DILocalVariable
 * During data-flow, whenever something happens that causes an open range to
   be terminated (via erase), any fragments pre-determined to overlap are
   also terminated.

The effect of which is that when encountering a DBG_VALUE fragment that
overlaps others, the overlapped fragments do not get propagated to other
blocks. We still rely on later location-list building to correctly handle
overlapping fragments within blocks.

It's unclear whether a mixture of DBG_VALUEs with and without fragmented
expressions is legitimate. To avoid surprises, this patch interprets a
DBG_VALUE with no fragment as overlapping any DBG_VALUE _with_ a fragment.

Differential Revision: https://reviews.llvm.org/D62904

llvm-svn: 363256
2019-06-13 12:51:57 +00:00
Nikola Prica 076ae0d2e2 [DebugInfo] Move Value struct out of DebugLocEntry as DbgValueLoc (NFC)
Since DebugLocEntry::Value is used as part of both DwarfDebug and
DebugLocEntry, make it a separate class.

Reviewers: aprantl, dstenb

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D63213

llvm-svn: 363246
2019-06-13 10:23:26 +00:00
Jeremy Morse 181bf0cefb [DebugInfo] Use FrameDestroy to extend stack locations to end-of-function
We aim to ignore changes in variable locations during the prologue and
epilogue of functions, to avoid using space documenting location changes
that aren't visible. However in D61940 / r362951 this got ripped out as
the previous implementation was unsound.

Instead, use the FrameDestroy flag to identify when we're in the epilogue
of a function, and ignore variable location changes accordingly. This fits
in with existing code that examines the FrameSetup flag.

Some variable locations get shuffled in modified tests as they now cover
greater ranges, which is what would be expected. Some additional
single-location variables are generated too. Two tests are un-xfailed,
they were only xfailed due to r362951 deleting functionality they depended
on.

Apparently some out-of-tree backends don't accurately maintain FrameDestroy
flags -- if you're an out-of-tree maintainer and see changes in variable
locations disappear due to a faulty FrameDestroy flag, it's safe to back
this change out. The impact is just slightly more debug info than necessary.

Differential Revision: https://reviews.llvm.org/D62314

llvm-svn: 363245
2019-06-13 10:03:17 +00:00
Simon Pilgrim 4e0648a541 [TargetLowering] Add MachineMemOperand::Flags to allowsMemoryAccess tests (PR42123)
As discussed on D62910, we need to check whether particular types of memory access are allowed, not just their alignment/address-space.

This NFC patch adds a MachineMemOperand::Flags argument to allowsMemoryAccess and allowsMisalignedMemoryAccesses, and wires up calls to pass the relevant flags to them.

If people are happy with this approach I can then update X86TargetLowering::allowsMisalignedMemoryAccesses to handle misaligned NT load/stores.

Differential Revision: https://reviews.llvm.org/D63075

llvm-svn: 363179
2019-06-12 17:14:03 +00:00
Matt Arsenault f29366b1f5 StackProtector: Use PointerMayBeCaptured
This was using its own, outdated list of possible captures. This was
at minimum not catching cmpxchg and addrspacecast captures.

One change is now any volatile access is treated as capturing. The
test coverage for this pass is quite inadequate, but this required
removing volatile in the lifetime capture test.

Also fixes some infrastructure issues to allow running just the IR
pass.

Fixes bug 42238.

llvm-svn: 363169
2019-06-12 14:23:33 +00:00
Anton Afanasyev 339b39b773 [MIR] Skip hoisting to basic blocks which may throw an exception or return
Summary:
Fix hoisting into basic blocks that are not legal for hoisting because
they can be terminated by an exception or are return blocks.

Reviewers: john.brawn, RKSimon, MatzeB

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63148

llvm-svn: 363164
2019-06-12 13:51:44 +00:00
Hsiangkai Wang 93be25b580 [NFC] Correct comments in RegisterCoalescer.
Differential Revision: https://reviews.llvm.org/D63124

llvm-svn: 363119
2019-06-12 02:58:04 +00:00