Commit Graph

65071 Commits

Author SHA1 Message Date
Sam Elliott 6b877f6aac [RISCV] Add Option for Printing Architectural Register Names
Summary:
This option is primarily for use during testing. Instead of always
printing registers using their ABI names, it allows a user to request that
they be printed with their architectural names.

This is then used in the register constraint tests to ensure the mapping
between architectural and ABI names is correct.
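For context, a few of the well-known architectural/ABI name pairs this
mapping covers - an illustrative sketch only, not code from the patch:

```
#include <cstdio>

// A handful of standard RISC-V architectural/ABI register name pairs,
// shown for illustration; the full table lives in the RISCV backend.
struct RegName { const char *Arch; const char *ABI; };
static const RegName Names[] = {
    {"x0", "zero"}, {"x1", "ra"},  {"x2", "sp"},
    {"x8", "s0"},   {"x10", "a0"}, {"x11", "a1"},
};

int main() {
  for (const auto &N : Names)
    std::printf("%-4s -> %s\n", N.Arch, N.ABI);
}
```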

Reviewers: asb, luismarques

Reviewed By: asb

Subscribers: pzheng, hiraditya, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65950

llvm-svn: 371531
2019-09-10 15:55:55 +00:00
Sanjay Patel 8812157b11 [x86] add a test for BreakFalseDeps; NFC
As discussed in D67363

llvm-svn: 371528
2019-09-10 15:42:22 +00:00
Djordje Todorovic b21cc626c9 Revert "[utils] Implement the llvm-locstats tool"
This reverts commit rL371520.

llvm-svn: 371527
2019-09-10 14:48:52 +00:00
Sanjay Patel d2434e65fa [ARM] add test for BreakFalseDeps with minsize attribute; NFC
llvm-svn: 371526
2019-09-10 14:29:02 +00:00
Simon Pilgrim 937ca68157 [X86] Add AVX partial dependency tests as noted on D67363
llvm-svn: 371525
2019-09-10 14:28:29 +00:00
Sanjay Patel 3b0b3def86 [ARM] auto-generate complete test checks; NFC
llvm-svn: 371524
2019-09-10 14:26:25 +00:00
Djordje Todorovic 54008972d1 [utils] Implement the llvm-locstats tool
The tool reports verbose output for the DWARF debug location coverage.
For each variable or formal parameter DIE, llvm-locstats computes what
percentage of the code section bytes, where it is in scope, are covered
by a location description. The line 0 shows the number (and the
percentage) of DIEs with no location information, while the line 100
shows the number (and the percentage) of DIEs that have location
information across all code section bytes where the variable or
parameter is in scope. The line 50..59 shows the number (and the
percentage) of DIEs whose location information covers between 50 and 59
percent of their scope.
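As a rough sketch of the statistic being bucketed (a hypothetical
helper, assuming the byte counts are already gathered per DIE):

```
#include <cstdint>

// Hedged sketch: compute the coverage bucket for one DIE, i.e. the
// percentage of its in-scope code section bytes that carry a location
// description, binned as 0, 1..9, 10..19, ..., 90..99, 100.
int coverageBucket(uint64_t BytesInScope, uint64_t BytesCovered) {
  if (BytesInScope == 0)
    return 0;
  unsigned Pct = (unsigned)((BytesCovered * 100) / BytesInScope);
  if (Pct == 0)
    return 0;   // no location information at all
  if (Pct >= 100)
    return 100; // covered everywhere in scope
  return (Pct / 10) * 10; // e.g. 55% lands in the 50..59 bucket
}
```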

The tool will be very useful for tracking improvements to the
"debugging optimized code" support in the LLVM ecosystem.

Differential Revision: https://reviews.llvm.org/D66526

llvm-svn: 371520
2019-09-10 13:47:03 +00:00
Roman Lebedev 7dfd0fb7f1 [NFC][InstCombine] PR43251 - valid for other predicates too
llvm-svn: 371519
2019-09-10 13:29:40 +00:00
Florian Hahn 18a1f0818b [InstCombine] Use SimplifyFMulInst to simplify multiply in fma.
This allows us to fold fma's that multiply with 0.0. Also, the
multiply by 1.0 case is handled there as well. The fneg/fabs cases
are not handled by SimplifyFMulInst, so we need to keep them.

Reviewers: spatel, anemet, lebedev.ri

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D67351

llvm-svn: 371518
2019-09-10 13:10:28 +00:00
Florian Hahn 8886d0134e [InstCombine] Precommit tests for D67351.
llvm-svn: 371517
2019-09-10 13:05:34 +00:00
Martin Storsjo 5d26959039 [Object] Implement relocation resolver for COFF ARM/ARM64
Adding test cases for this via llvm-dwarfdump.

Also add testcases for the existing resolver support for X86.

Differential Revision: https://reviews.llvm.org/D67340

llvm-svn: 371515
2019-09-10 12:31:40 +00:00
Alexander Timofeev c2d292f839 [AMDGPU]: PHI Elimination hooks added for custom COPY insertion.
Reviewers: rampitec, vpykhtin

Differential Revision: https://reviews.llvm.org/D67101

llvm-svn: 371508
2019-09-10 10:58:57 +00:00
Dmitri Gribenko 2bf8d77453 Revert "Reland "r364412 [ExpandMemCmp][MergeICmps] Move passes out of CodeGen into opt pipeline.""
This reverts commit r371502, it broke tests
(clang/test/CodeGenCXX/auto-var-init.cpp).

llvm-svn: 371507
2019-09-10 10:39:09 +00:00
Djordje Todorovic c714a88a4d [llvm-dwarfdump] Add additional stats fields
The additional fields will be parsed by the llvm-locstats tool in order to
produce more human readable output of the DWARF debug location quality
generated.

Differential Revision: https://reviews.llvm.org/D66525

llvm-svn: 371506
2019-09-10 10:37:28 +00:00
Clement Courbet 664d9d2da2 [ExpandMemCmp] Add lit.local.cfg
To prevent AArch64 tests from running when the target is not compiled.

Fixes r371502:

/home/buildslave/ps4-buildslave4/llvm-clang-lld-x86_64-scei-ps4-ubuntu-fast/llvm.src/test/Transforms/ExpandMemCmp/AArch64/memcmp.ll:11:15: error: CHECK-NEXT: expected string not found in input
; CHECK-NEXT: [[TMP0:%.*]] = bitcast i8* [[S1:%.*]] to i64*

llvm-svn: 371503
2019-09-10 10:00:15 +00:00
Clement Courbet 612c260ec3 Reland "r364412 [ExpandMemCmp][MergeICmps] Move passes out of CodeGen into opt pipeline."
With a fix for sanitizer breakage (see explanation in D60318).

llvm-svn: 371502
2019-09-10 09:18:00 +00:00
Fangrui Song 1da4f47195 [yaml2obj] Set p_align to the maximum sh_addralign of contained sections
The address difference between two sections in a PT_LOAD is a constant.
Consider a hypothetical case (pagesize can be very small, say, 4).

```
.text     sh_addralign=4
.text.hot sh_addralign=16
```

If we set p_align to 4, the PT_LOAD will be loaded at an address which
is a multiple of 4. The address of .text.hot is guaranteed to be a
multiple of 4, but not necessarily a multiple of 16.
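In other words, the computed p_align needs to cover the strictest
section alignment in the segment. A minimal sketch of that rule (a
hypothetical helper, not the yaml2obj code):

```
#include <algorithm>
#include <cstdint>
#include <vector>

// Hedged sketch: a PT_LOAD's p_align must be at least the largest
// sh_addralign of the sections it contains, since intra-segment
// offsets are fixed and only alignments up to p_align are preserved
// at load time.
uint64_t computePAlign(const std::vector<uint64_t> &SectionAligns) {
  uint64_t PAlign = 1;
  for (uint64_t Align : SectionAligns)
    PAlign = std::max(PAlign, Align);
  return PAlign;
}
```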

This patch deletes the constraint

  if (SHeader->sh_offset == PHeader.p_offset)

Reviewed By: grimar, jhenderson

Differential Revision: https://reviews.llvm.org/D67260

llvm-svn: 371501
2019-09-10 09:16:34 +00:00
Craig Topper e8b432fa0e [LegalizeTypes] Teach SoftenFloatOp_SELECT_CC to handle operand 2 or 3 being softened.
This can only happen on X86 when fp128 is a legal type, but we
go through softening to generate libcalls. This causes fp128 to
be softened to fp128 instead of an integer type. This can be
removed if D67128 lands.

llvm-svn: 371493
2019-09-10 07:56:02 +00:00
Petr Hosek 7d1757aba8 Revert "clang-misexpect: Profile Guided Validation of Performance Annotations in LLVM"
This reverts commit r371484: this broke sanitizer-x86_64-linux-fast bot.

llvm-svn: 371488
2019-09-10 06:25:13 +00:00
Craig Topper 0e533ca4bb [X86] Add broadcast load unfolding support for VCMPPS/PD.
llvm-svn: 371487
2019-09-10 05:49:53 +00:00
Craig Topper 7c2fdf2779 [X86] Add broadcast load unfold tests for VCMPPS/PD.
llvm-svn: 371486
2019-09-10 05:49:48 +00:00
Petr Hosek a10802fd73 clang-misexpect: Profile Guided Validation of Performance Annotations in LLVM
This patch contains the basic functionality for reporting potentially
incorrect usage of __builtin_expect() by comparing the developer's
annotation against a collected PGO profile. A more detailed proposal and
discussion appears on the CFE-dev mailing list
(http://lists.llvm.org/pipermail/cfe-dev/2019-July/062971.html) and a
prototype of the initial frontend changes appear here in D65300

We revised the work in D65300 by moving the misexpect check into the
LLVM backend, and adding support for IR and sampling based profiles, in
addition to frontend instrumentation.

We add new misexpect metadata tags to those instructions directly
influenced by the llvm.expect intrinsic (branch, switch, and select)
when lowering the intrinsics. The misexpect metadata contains
information about the expected target of the intrinsic so that we can
check against the correct PGO counter when emitting diagnostics, and the
compiler's values for the LikelyBranchWeight and UnlikelyBranchWeight.
We use these branch weight values to determine when to emit the
diagnostic to the user.
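As a hedged illustration (not code from the patch), the kind of source
pattern the diagnostic targets:

```
// Hypothetical example of code clang-misexpect would flag: the
// annotation claims the branch is likely, so if the collected PGO
// profile shows it is almost never taken, a diagnostic is emitted.
extern long expensive(long);

long f(long x) {
  if (__builtin_expect(x > 0, 1)) // annotated as expected-true
    return expensive(x);
  return 0;
}
```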

A future patch should address the comment at the top of
LowerExpectIntrisic.cpp to hoist the LikelyBranchWeight and
UnlikelyBranchWeight values into a shared space that can be accessed
outside of the LowerExpectIntrinsic pass. Once that is done, the
misexpect metadata can be updated to be smaller.

In the long term, it is possible to reconstruct portions of the
misexpect metadata from the existing profile data. However, we have
avoided this to keep the code simple, and because some kind of metadata
tag will be required to identify which branch/switch/select instructions
are influenced by the use of llvm.expect

Patch By: paulkirth
Differential Revision: https://reviews.llvm.org/D66324

llvm-svn: 371484
2019-09-10 03:11:39 +00:00
Kai Luo 73da43aeb3 [PowerPC][NFC] Update test assertions using update_llc_test_checks.py
Summary:
This patch was prompted by https://reviews.llvm.org/rL371289, where typo
fixes failed.

Differential Revision: https://reviews.llvm.org/D67317

llvm-svn: 371483
2019-09-10 02:28:24 +00:00
Reid Kleckner 87d47cb7c4 Remove some unnecessary REQUIRES: shell lines
This means these tests will run on Windows. Replace one with
UNSUPPORTED: system-windows.

llvm-svn: 371473
2019-09-10 00:06:52 +00:00
Matt Arsenault a91f017ae3 AMDGPU/GlobalISel: Fix insert point when lowering fminnum/fmaxnum
llvm-svn: 371471
2019-09-09 23:30:11 +00:00
Reid Kleckner bf02399a85 [Windows] Replace TrapUnreachable with an int3 insertion pass
This is an alternative to D66980, which was reverted. Instead of
inserting a pseudo instruction that optionally expands to nothing, add a
pass that inserts int3 when appropriate after basic block layout.

Reviewers: hans

Differential Revision: https://reviews.llvm.org/D67201

llvm-svn: 371466
2019-09-09 23:04:25 +00:00
Philip Reames b8cddb7611 [Tests] Fix a typo in a test
llvm-svn: 371456
2019-09-09 21:33:59 +00:00
Philip Reames 847fbf7013 [Tests] Precommit test case for D67372
llvm-svn: 371455
2019-09-09 21:32:16 +00:00
Philip Reames 7403569be7 [LoopVectorize] Leverage speculation safety to avoid masked.loads
If we're vectorizing a load in a predicated block, check to see if the load can be speculated rather than predicated.  This allows us to generate a normal vector load instead of a masked.load.

To do so, we must prove that all bytes accessed on any iteration of the original loop are dereferenceable, and that all loads (across all iterations) are properly aligned.  This is equivalent to proving that hoisting the load into the loop header in the original scalar loop is safe.
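A hedged C++ sketch of the shape of loop this targets (the names and
the dereferenceability assumption are illustrative):

```
// 'a' is known dereferenceable for the whole trip count (e.g. via a
// dereferenceable attribute or known allocation size), so the
// conditional load below can be speculated: a plain vector load can
// replace the llvm.masked.load that predication would otherwise need.
int sum_if(const int *a, const int *c, int n) {
  int s = 0;
  for (int i = 0; i < n; ++i)
    if (c[i] > 0)
      s += a[i]; // safe to load on every iteration, taken or not
  return s;
}
```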

Note: There are a couple of code motion todos in the code.  My intention is to wait about a day - to be sure this sticks - and then perform the NFC motion without further review.

Differential Revision: https://reviews.llvm.org/D66688

llvm-svn: 371452
2019-09-09 20:54:13 +00:00
Philip Reames 48453bb8ed [Tests] Add anyextend tests for unordered atomics
Motivated by work on changing our representation of unordered atomics in SelectionDAG, but as an aside, all our lowerings for O3 are terrible.  Even the ones which ignore the atomicity.  

llvm-svn: 371449
2019-09-09 20:26:52 +00:00
Douglas Yung 4bd6eb8ff2 Relax opcode checks in test to check for only a number instead of a specific number.
llvm-svn: 371447
2019-09-09 20:12:29 +00:00
Philip Reames 20aafa3156 Introduce infrastructure for an incremental port of SelectionDAG atomic load/store handling
This is the first patch in a large sequence. The eventual goal is to have unordered atomic loads and stores - and possibly ordered atomics as well - handled through the normal ISEL codepaths for loads and stores. Today, they're handled with instances of AtomicSDNode, with the result that all transforms need to be duplicated to work for unordered atomics. The benefit of the current design is that it's harder to introduce a silent miscompile by adding a transform which forgets about atomicity.  See the thread on llvm-dev titled "FYI: proposed changes to atomic load/store in SelectionDAG" for further context.

Note that this patch is NFC unless the experimental flag is set.

The basic strategy I plan on taking is:

    1. Introduce infrastructure and a flag for testing (this patch).
    2. Audit uses of isVolatile, and apply isAtomic conservatively*.
    3. Piecemeal, conservatively* update generic code and x86 backend code in individual reviews w/tests for cases which didn't check volatile, but can be found with inspection.
    4. Flip the flag at the end (with minimal diffs).
    5. Work through the todo list identified in (2) and (3), exposing performance ops.

(*) The "conservative" bit here is aimed at minimizing the number of diffs involved in (4). Ideally, there'd be none. In practice, getting it down to something reviewable by a human is the actual goal. Note that there are (currently) no paths which produce LoadSDNode or StoreSDNode with atomic MMOs, so we don't need to worry about preserving any behaviour there.

We've taken a very similar strategy twice before with success - once at IR level, and once at the MI level (post ISEL). 

Differential Revision: https://reviews.llvm.org/D66309

llvm-svn: 371441
2019-09-09 19:23:22 +00:00
Matt Arsenault a0933e6df7 AMDGPU/GlobalISel: Legalize G_BUILD_VECTOR v2s16
Handle it the same way as G_BUILD_VECTOR_TRUNC. Arguably only
G_BUILD_VECTOR_TRUNC should be legal for this, but G_BUILD_VECTOR will
probably be more convenient in most cases.

llvm-svn: 371440
2019-09-09 18:57:51 +00:00
Matt Arsenault 8bc05d7d60 AMDGPU: Make VReg_1 size be 1
This was getting chosen as the preferred 32-bit register class based
on how TableGen selects subregister classes.

llvm-svn: 371438
2019-09-09 18:43:29 +00:00
Matt Arsenault 77e3e9cafd AMDGPU/GlobalISel: Select llvm.amdgcn.class
Also fixes missing SubtargetPredicate on f16 class instructions.

llvm-svn: 371436
2019-09-09 18:29:45 +00:00
Matt Arsenault d6c1f5bb15 AMDGPU/GlobalISel: Select fmed3
llvm-svn: 371435
2019-09-09 18:29:37 +00:00
Eli Friedman 79f0d3a6e5 [IfConversion] Correctly handle cases where analyzeBranch fails.
If analyzeBranch fails, on some targets, the out parameters point to
some blocks in the function. But we can't use that information, so make
sure to clear it out.  (In some places in IfConversion, we assume that
any block with a TrueBB is analyzable.)

The change to the testcase makes it trigger a bug on builds without this
fix: IfConvertDiamond tries to perform a followup "merge" operation,
which isn't legal, and we somehow end up with a branch to a deleted MBB.
I'm not sure how this doesn't crash the compiler.

Differential Revision: https://reviews.llvm.org/D67306

llvm-svn: 371434
2019-09-09 18:29:27 +00:00
Sanjay Patel c195bde3d4 [x86] add test for false dependency with minsize (PR43239); NFC
llvm-svn: 371433
2019-09-09 18:14:10 +00:00
Matt Arsenault 6ebf605851 AMDGPU: Use PatFrags to allow selecting custom nodes or intrinsics
This enables GlobalISel to handle various intrinsics. The custom node
pattern will be ignored, and the intrinsic will work. This will also
allow SelectionDAG to directly select the intrinsics, but as they are
all custom lowered to the nodes, this ends up leaving dead code in the
table.

Eventually either GlobalISel should add the equivalent of custom nodes,
or the intrinsics should be used directly. These each have different
tradeoffs.

There are a few more to handle, but these are the easy ones. Some
others fail for other reasons.

llvm-svn: 371432
2019-09-09 18:10:31 +00:00
Craig Topper ce2cb0f09e [X86] Allow _MM_FROUND_CUR_DIRECTION and _MM_FROUND_NO_EXC to be used together on instructions that only support SAE and not embedded rounding.
Currently, for SAE instructions we only allow _MM_FROUND_CUR_DIRECTION (bit 2) or _MM_FROUND_NO_EXC (bit 3) to be used as the immediate passed to the intrinsics. But these instructions don't perform rounding, so _MM_FROUND_CUR_DIRECTION is just sort of a default placeholder when you don't want to suppress exceptions. Using _MM_FROUND_NO_EXC by itself is really bit-equivalent to (_MM_FROUND_NO_EXC | _MM_FROUND_TO_NEAREST_INT) since _MM_FROUND_TO_NEAREST_INT is 0. Since we aren't rounding on these instructions, we should also accept (_MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC) as equivalent to (_MM_FROUND_NO_EXC). icc allows this, but gcc does not.
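A hedged sketch of the flag values involved (these are the standard
<immintrin.h> constants; the asserts just spell out the bit math
described above):

```
#include <immintrin.h>

// _MM_FROUND_TO_NEAREST_INT == 0x00, _MM_FROUND_CUR_DIRECTION == 0x04
// (bit 2), _MM_FROUND_NO_EXC == 0x08 (bit 3). Because TO_NEAREST_INT
// is zero, both of these immediates suppress exceptions without
// requesting any real rounding, so SAE-only instructions accept both.
static_assert((_MM_FROUND_NO_EXC | _MM_FROUND_TO_NEAREST_INT) == 0x08,
              "NO_EXC alone");
static_assert((_MM_FROUND_NO_EXC | _MM_FROUND_CUR_DIRECTION) == 0x0c,
              "NO_EXC with CUR_DIRECTION");
```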

Differential Revision: https://reviews.llvm.org/D67289

llvm-svn: 371430
2019-09-09 17:48:05 +00:00
Simon Atanasyan 56e4ea2bff [mips] Fix decoding of microMIPS JALX instruction
The microMIPS jump and link exchange instruction stores its target in a
26-bit field. Unlike other microMIPS JAL instructions, these bits are
the target address shifted right by 2 bits [1]. The patch fixes the
JALX instruction decoding to use a 2-bit shift.

[1] MIPS Architecture for Programmers Volume II-B: The microMIPS32 Instruction Set
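A minimal sketch of the corrected decode, with the field extraction
simplified for illustration:

```
#include <cstdint>

// Hedged sketch: recover the branch target from the 26-bit field of a
// microMIPS JALX. The stored bits are the target shifted right by 2,
// so decoding shifts left by 2 (other microMIPS JALs shift by 1).
uint32_t decodeJALXTarget(uint32_t Insn) {
  uint32_t Field = Insn & 0x03FFFFFF; // low 26 bits hold the target
  return Field << 2;
}
```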

Differential Revision: https://reviews.llvm.org/D67320

llvm-svn: 371428
2019-09-09 17:28:45 +00:00
Matt Arsenault d2a9516a6d AMDGPU: Move MnemonicAlias out of instruction def hierarchy
Unfortunately MnemonicAlias defines a "Predicates" field just like an
instruction or pattern, with a somewhat different interpretation.

This ends up overriding the intended Predicates set by
PredicateControl on the pseudoinstruction definitions with an empty
list. This allowed incorrectly selecting instructions that should have
been rejected due to the SubtargetPredicate from patterns on the
instruction definition.

This does remove the divergent predicate from the 64-bit shift
patterns, which were already not used for the 32-bit shift, so I'm not
sure what the point was. This also removes a second, redundant copy of
the 64-bit divergent patterns.

llvm-svn: 371427
2019-09-09 17:25:35 +00:00
Sanjay Patel c0728eac15 [SLP] add test for over-vectorization (PR33958); NFC
llvm-svn: 371426
2019-09-09 17:16:03 +00:00
Jessica Paquette bfb00e3d53 [GlobalISel][AArch64] Handle tail calls with non-void return types
Just return once you emit the call, which is exactly what SelectionDAG does in
this situation.

Update call-translator-tail-call.ll.

Also update dllimport.ll to show that we tail call here in GISel again. Add
-verify-machineinstrs to the GISel line too, to defend against verifier
failures.

Differential revision: https://reviews.llvm.org/D67282

llvm-svn: 371425
2019-09-09 17:15:56 +00:00
Matt Arsenault 64ecca90d4 AMDGPU/GlobalISel: Implement LDS G_GLOBAL_VALUE
Handle the simple case that lowers to a constant.

llvm-svn: 371424
2019-09-09 17:13:44 +00:00
Matt Arsenault 182f9248e8 AMDGPU/GlobalISel: Legalize G_BUILD_VECTOR_TRUNC
Treat this as legal on gfx9 since it can use S_PACK_* instructions for
this.

This isn't used by anything yet. The same will probably apply to
16-bit G_BUILD_VECTOR without the trunc.

llvm-svn: 371423
2019-09-09 17:04:18 +00:00
Dmitri Gribenko d9c4060bd5 Revert "[MachineCopyPropagation] Remove redundant copies after TailDup via machine-cp"
This reverts commit r371359. I suspect a miscompile; I posted a
reproducer to https://reviews.llvm.org/D65267.

llvm-svn: 371421
2019-09-09 16:46:45 +00:00
Fangrui Song c28f3e6e2c [yaml2obj] Simplify p_filesz/p_memsz computing
This fixes a bug as well. When "FileSize:" (p_filesz) is specified and
different from the actual value, the following code probably should not
use PHeader.p_filesz:

  if (SHeader->sh_offset == PHeader.p_offset + PHeader.p_filesz)
    PHeader.p_memsz += SHeader->sh_size;

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D67256

llvm-svn: 371420
2019-09-09 16:45:17 +00:00
David Green 2b7089949e [ARM] Fix loads and stores for predicate vectors
These predicate vectors can usually be loaded and stored with a single
instruction, a VSTR_P0. However this instruction will store the entire P0
predicate, 16 bits, zero-extended to 32 bits, with each lane of the
v4i1/v8i1/v16i1 representing 4/2/1 bits.

As far as I understand, when llvm says "store this v4i1", it really does need
to store 4 bits (or 8, that being the size of a byte, with the bottom 4 as the
interesting bits). For example a bitcast from a v8i1 to an i8 is defined as a
store followed by a load, which is how the code is expanded.

So this instead lowers the v4i1/v8i1 load/store through some shuffles to get
the bits into the correct positions. This, as you might imagine, is not as
efficient as a single instruction. But I believe it is needed for correctness.
v16i1 equally should not load/store 32 bits, only storing the 16 bits of data.
Stack loads/stores are still using the VSTR_P0 (as can be seen by the test not
changing). This is fine as they are self-consistent; it is only "externally
observable loads/stores" (from our point of view) that need to be corrected.

Differential revision: https://reviews.llvm.org/D67085

llvm-svn: 371419
2019-09-09 16:35:49 +00:00
Matt Arsenault 63e6d8db1c AMDGPU/GlobalISel: Select atomic loads
A new check for an explicitly atomic MMO is needed to avoid
incorrectly matching the pattern for non-atomic loads.

llvm-svn: 371418
2019-09-09 16:18:07 +00:00
Matt Arsenault d8409b178e AMDGPU/GlobalISel: Fix RegBankSelect for unaligned, uniform constant loads
llvm-svn: 371416
2019-09-09 16:06:37 +00:00
Matt Arsenault 02eb308387 AMDGPU/GlobalISel: Fix regbankselect for uniform extloads
There are no scalar extloads.

llvm-svn: 371414
2019-09-09 16:03:45 +00:00
Matt Arsenault ebbd6e4976 AMDGPU: Remove code address space predicates
Fixes 8-byte, 8-byte aligned LDS loads. The 16-byte case is still
broken due to not being reported as legal.

llvm-svn: 371413
2019-09-09 16:02:07 +00:00
Matt Arsenault c34b4036ff AMDGPU/GlobalISel: Select G_PTR_MASK
llvm-svn: 371412
2019-09-09 15:46:13 +00:00
Matt Arsenault fdb7030117 AMDGPU/GlobalISel: Fix reg bank for uniform LDS loads
The pointer is always a VGPR. Also fix hardcoding the pointer size to
64.

llvm-svn: 371411
2019-09-09 15:44:16 +00:00
Matt Arsenault 2dd088ec7d AMDGPU/GlobalISel: Use known bits for selection
llvm-svn: 371409
2019-09-09 15:39:32 +00:00
Matt Arsenault 8e3bc9b572 AMDGPU/GlobalISel: Legalize wavefrontsize intrinsic
llvm-svn: 371407
2019-09-09 15:20:49 +00:00
Simon Tatham 0e48bd24e2 [ARM] Remove some spurious MVE reduction instructions.
The family of 'dual-accumulating' vector multiply-add instructions
(VMLADAV, VMLALDAV and VRMLALDAVH) can all operate on both signed and
unsigned integer types, and they all have an 'exchange' variant (with
an X in the name) that modifies which pairs of vector lanes in the two
inputs are multiplied together. But there's a clause in the spec that
says that the X variants don't operate on unsigned integer types,
only signed. You can have X, or unsigned, or neither, but not both.

We didn't notice that clause when we implemented the MC support for
these instructions, so LLVM believes that things like VMLADAVX.U8 do
exist, contradicting the spec. Here I fix that by conditioning them
out in Tablegen.

In order to do that, I've reversed the nesting order of the Tablegen
multiclasses for those instructions. Previously, the innermost
multiclass generated the X and not-X variants, and the one outside
that generated the A and not-A variants. Now X is done by the outer
multiclass, which allows me to bypass that one when I only want the
two not-X variants.

Changing the multiclass nesting order also changes the names of the
instruction ids unless I make a special effort not to. I decided that
while I was changing them anyway I'd make them look nicer; so now the
instructions have names like MVE_VMLADAVs32 or MVE_VMLADAVaxs32,
instead of cumbersome _noacc_noexch suffixes.

The corresponding multiply-subtract instructions are unaffected. Those
don't accept unsigned types at all, either in the spec or in LLVM.

Reviewers: ostannard, dmgreen

Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67214

llvm-svn: 371405
2019-09-09 15:17:26 +00:00
Roman Lebedev 59608c0049 [NFC][InstCombine] Fixup test I added in rL371352.
llvm-svn: 371401
2019-09-09 14:27:39 +00:00
James Molloy b6c7fce67a [DFAPacketizer] Reapply: Track resources for packetized instructions
Reapply with fix to reduce resources required by the compiler - use
unsigned[2] instead of std::pair. This causes clang and gcc to compile
the generated file multiple times faster, and hopefully will reduce
the resource requirements on Visual Studio also. This fix is a little
ugly but it's clearly the same issue the previous author of
DFAPacketizer faced (the previous tables use unsigned[2] rather uglily
too).

This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.

This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.

This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.

The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).

Differential Revision: https://reviews.llvm.org/D66936

llvm-svn: 371399
2019-09-09 13:17:55 +00:00
Clement Courbet 388b9794b6 [Inliner][NFC] Make test less brittle.
Summary:
This tests inlining size thresholds, but relies on the output of running
the full O2 pipeline, making it brittle against changes in unrelated
passes.

Only run the inlining pass and set thresholds on the test RUN line
instead.

Found while investigating D60318.

Reviewers: RKSimon, qcolombet

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67349

llvm-svn: 371397
2019-09-09 13:08:16 +00:00
Sam Parker 1ad508e8e2 [ARM][MVE] VCTP instruction selection
Add codegen support for vctp{8,16,32}.

Differential Revision: https://reviews.llvm.org/D67344

llvm-svn: 371395
2019-09-09 12:54:47 +00:00
Simon Pilgrim 462e3d8050 Revert rL371198 from llvm/trunk: [DFAPacketizer] Track resources for packetized instructions
This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.

This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.

This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.

The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).

Differential Revision: https://reviews.llvm.org/D66936
........
Reverted as this is causing "compiler out of heap space" errors on MSVC 2017/19 NDEBUG builds

llvm-svn: 371393
2019-09-09 12:33:22 +00:00
Cullen Rhodes 55244beeee [AArch64][SVE] Implement abs and neg intrinsics
Summary:
This patch implements two arithmetic intrinsics:

      * int_aarch64_sve_abs
      * int_aarch64_sve_neg

testing the support for scalable vector types in intrinsics added in D65930.

Reviewed By: greened

Differential Revision: https://reviews.llvm.org/D65931

llvm-svn: 371388
2019-09-09 11:21:14 +00:00
Tim Northover 36147adc0b GlobalISel: add combiner to form indexed loads.
Loosely based on the DAGCombiner version, but this part is slightly simpler in
GlobalISel because all address calculation is performed by G_GEP. That makes
the inc/dec distinction moot so there's just pre/post to think about.

No targets can handle it yet so testing is via a special flag that overrides
target hooks.

llvm-svn: 371384
2019-09-09 10:04:23 +00:00
George Rimar c11af417e0 [yaml2obj] - Fix BB after r371380
Just a fix for an input file name.

llvm-svn: 371383
2019-09-09 09:55:56 +00:00
George Rimar 3212ecfea8 [lib/ObjectYAML] - Improve and cleanup error reporting in ELFState<ELFT> class.
The aim of this patch is to refactor how we handle and report errors.

I suggest using the same approach we use in LLD: delayed error reporting.
For that I introduced a 'HasError' flag which is set when we report an error.
Now we do not exit instantly on any error. The benefits are:

1) There are no more 'exit(1)' calls in the library code.
2) Code was simplified significantly in a few places.
3) It is now possible to print multiple errors instead of only one.

Also, I changed the messages to be lower case and removed a full stop.

Differential revision: https://reviews.llvm.org/D67182

llvm-svn: 371380
2019-09-09 09:43:03 +00:00
Oliver Stannard 6b9aedaec6 [ARM][MVE] Decoding of uqrshl and sqrshl accepts unpredictable encodings
Specify the Unpredictable bits, and return softfails when appropriate.

Patch by Mark Murray!

Differential revision: https://reviews.llvm.org/D66939

llvm-svn: 371374
2019-09-09 08:50:28 +00:00
Sam Parker c363deb575 [ARM][ParallelDSP] Fix for sext input
The incoming accumulator value can be discovered through a sext, in
which case there will be a mismatch between the input and the result.
So sign extend the accumulator input if we're performing a 64-bit mac.

Differential Revision: https://reviews.llvm.org/D67220

llvm-svn: 371370
2019-09-09 08:39:14 +00:00
Craig Topper a88f58ff0e [X86] Add broadcast load unfolding support for vpcmpeq/vpcmpgt/vpcmp/vpcmpu.
llvm-svn: 371368
2019-09-09 07:46:11 +00:00
Craig Topper 667f039c8c [X86] Add broadcast load unfolding tests for vpcmpeq/vpcmpgt/vpcmp/vpcmpu.
llvm-svn: 371367
2019-09-09 07:46:07 +00:00
Craig Topper 8c2ab1c4cb [X86] Add broadcast load unfold support for smin/umin/smax/umax.
llvm-svn: 371366
2019-09-09 06:32:24 +00:00
Craig Topper 68b2e1973f [X86] Add broadcast load unfolding tests for smin/umin/smax/umax.
llvm-svn: 371365
2019-09-09 06:32:20 +00:00
Craig Topper ad7822329f [X86] Add broadcast load unfolding support for VMAXPS/PD and VMINPS/PD.
llvm-svn: 371363
2019-09-09 04:25:01 +00:00
Craig Topper 8d42a796c2 [X86] Add broadcast load unfolding tests for vmaxps/pd and vminps/pd
llvm-svn: 371362
2019-09-09 04:24:57 +00:00
Craig Topper 197901081b [X86] Add fp128 test cases for ceil/floor/trunc/nearbyint/rint/round libcalls.
llvm-svn: 371360
2019-09-09 02:44:46 +00:00
Kai Luo 9115c477bb [MachineCopyPropagation] Remove redundant copies after TailDup via machine-cp
Summary:
After tailduplication, we have redundant copies. We can remove these
copies in machine-cp if it's safe to, i.e.
```
$reg0 = OP ...
... <<< No read or clobber of $reg0 and $reg1
$reg1 = COPY $reg0 <<< $reg0 is killed
...
<RET>
```
will be transformed to
```
$reg1 = OP ...
...
<RET>
```

Differential Revision: https://reviews.llvm.org/D65267

llvm-svn: 371359
2019-09-09 02:32:42 +00:00
Craig Topper fb1e77505a [X86] Add test cases for fptoui/fptosi/sitofp/uitofp between fp128 and i128.
llvm-svn: 371358
2019-09-09 01:35:04 +00:00
Craig Topper 72624b0e59 [X86] Use xorps to create fp128 +0.0 constants.
This matches what we do for f32/f64. gcc also does this for fp128.

llvm-svn: 371357
2019-09-09 01:35:00 +00:00
Craig Topper 861d343949 [X86] Add avx and avx512f RUN lines to fp128-cast.ll
llvm-svn: 371356
2019-09-09 01:34:55 +00:00
Douglas Yung debac75dea Relax opcode checks in test to check for only a number instead of a specific number.
llvm-svn: 371355
2019-09-09 01:21:33 +00:00
Simon Pilgrim e0ea746215 [X86][SSE] SimplifyDemandedVectorEltsForTargetNode - add faux shuffle support.
This patch decodes target and faux shuffles with getTargetShuffleInputs - a reduced version of resolveTargetShuffleInputs that doesn't resolve SM_SentinelZero cases, so we can correctly remove zero vectors if they aren't demanded.

llvm-svn: 371353
2019-09-08 21:38:33 +00:00
Roman Lebedev 139a9d6c0e [InstCombine][NFC] Some tests for usub overflow+nonzero check improvement (PR43251)
https://rise4fun.com/Alive/kHq

https://bugs.llvm.org/show_bug.cgi?id=43251

llvm-svn: 371352
2019-09-08 21:30:34 +00:00
Craig Topper 77dd86ee4a [X86] Add a hack to combineVSelectWithAllOnesOrZeros to turn selects with two zero/undef vector inputs into an all zeroes vector.
If the two zero vectors have undefs in different places they
won't get combined by simplifySelect.

This fixes a regression from an earlier commit.

llvm-svn: 371351
2019-09-08 20:56:09 +00:00
Craig Topper 9c11901256 [X86] Remove call to getZeroVector from materializeVectorConstant. Add isel patterns for zero vectors with all types.
The change to avx512-vec-cmp.ll is a regression, but should be
easy to fix. It occurs because the getZeroVector call was
canonicalizing both sides to the same node, then SimplifySelect
was able to simplify it. But since we only called getZeroVector
on some VTs, this isn't a robust way to combine this.

The change to vector-shuffle-combining-ssse3.ll adds more
instructions, but removes a constant pool load, so it's unclear
whether it's a regression or not.

llvm-svn: 371350
2019-09-08 20:56:05 +00:00
Roman Lebedev 6e2c5c8710 [InstSimplify] simplifyUnsignedRangeCheck(): if we know that X != 0, handle more cases (PR43246)
Summary:
This is motivated by D67122 sanitizer check enhancement.
That patch seemingly worsens `-fsanitize=pointer-overflow`
overhead from 25% to 50%, which strongly implies missing folds.

In this particular case, given
```
char* test(char& base, unsigned long offset) {
  return &base + offset;
}
```
it will end up producing something like
https://godbolt.org/z/LK5-iH
which after optimizations reduces down to roughly
```
define i1 @t0(i8* nonnull %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = add i64 %base_int, %offset
  %non_null_after_adjustment = icmp ne i64 %adjusted, 0
  %no_overflow_during_adjustment = icmp uge i64 %adjusted, %base_int
  %res = and i1 %non_null_after_adjustment, %no_overflow_during_adjustment
  ret i1 %res
}
```
Without D67122 there was no `%non_null_after_adjustment`,
and in this particular case we can get rid of the overhead:

Here we add some offset to a non-null pointer,
and check that the result does not overflow and is not a null pointer.
But since the base pointer is already non-null, and we check for overflow,
that overflow check will already catch the null pointer,
so the separate null check is redundant and can be dropped.

Alive proofs:
https://rise4fun.com/Alive/WRzq

There are more patterns of "unsigned-add-with-overflow", they are not handled here,
but this is the main pattern, that we currently consider canonical,
so it makes sense to handle it.

https://bugs.llvm.org/show_bug.cgi?id=43246

Reviewers: spatel, nikic, vsk

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits, reames

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67332

llvm-svn: 371349
2019-09-08 20:14:15 +00:00
Sanjay Patel 354a46444c [InstCombine] add tests for icmp with srem operand; NFC
llvm-svn: 371348
2019-09-08 19:48:47 +00:00
Roman Lebedev 94db67f0e1 [X86] X86DAGToDAGISel::combineIncDecVector(): call getSplatBuildVector() manually
As reported in post-commit review of r370327,
there is a case where the code crashes.

As discussed with Craig Topper, the problem is that getConstant()
internally calls getSplatBuildVector(), so we don't insert
the constant itself.

If we do that manually we're good.

llvm-svn: 371346
2019-09-08 19:36:13 +00:00
Craig Topper dac34f52d3 [DAGCombiner][X86][ARM] Teach visitMULO to fold multiplies with 0 to 0 and no carry.
I modified the ARM test to use two inputs instead of 0 so the
test hopefully still tests what was intended.

llvm-svn: 371344
2019-09-08 19:24:39 +00:00
Craig Topper 30837abd96 [X86] Teach materializeVectorConstant to not call getZeroVector/getOnesVector on the types we already have isel patterns for.
llvm-svn: 371343
2019-09-08 19:24:29 +00:00
Sanjay Patel aff5bee35f [InstCombine] fold extract+insert into identity shuffle
This is similar to the existing fold for splats added with:
rL365379

If we can adjust the shuffle mask to include another element
in an identity mask (if it changes vector length, that's an
extract/insert subvector operation in the backend), then that
can eliminate extractelement/insertelement pairs in IR.

All targets are expected to lower shuffles with identity masks
efficiently.

llvm-svn: 371340
2019-09-08 19:03:01 +00:00
Roman Lebedev 64965430db [NFC][InstSimplify] Some tests for dropping null check after uadd.with.overflow of non-null (PR43246)
https://rise4fun.com/Alive/WRzq

Name: C <= Y && Y != 0  -->  C <= Y  iff C != 0
Pre: C != 0
  %y_is_nonnull = icmp ne i64 %y, 0
  %no_overflow = icmp ule i64 C, %y
  %r = and i1 %y_is_nonnull, %no_overflow
=>
  %r = %no_overflow

Name: C <= Y || Y != 0  -->  Y != 0  iff C != 0
Pre: C != 0
  %y_is_nonnull = icmp ne i64 %y, 0
  %no_overflow = icmp ule i64 C, %y
  %r = or i1 %y_is_nonnull, %no_overflow
=>
  %r = %y_is_nonnull

Name: C > Y || Y == 0  -->  C > Y  iff C != 0
Pre: C != 0
  %y_is_null = icmp eq i64 %y, 0
  %overflow = icmp ugt i64 C, %y
  %r = or i1 %y_is_null, %overflow
=>
  %r = %overflow

Name: C > Y && Y == 0  -->  Y == 0  iff C != 0
Pre: C != 0
  %y_is_null = icmp eq i64 %y, 0
  %overflow = icmp ugt i64 C, %y
  %r = and i1 %y_is_null, %overflow
=>
  %r = %y_is_null

https://bugs.llvm.org/show_bug.cgi?id=43246

llvm-svn: 371339
2019-09-08 17:50:40 +00:00
David Stenberg 5a583665f4 [DebugInfo][X86] Describe call site values for zero-valued imms
Summary:
Add zero-materializing XORs to X86's describeLoadedValue() hook in order
to produce call site values.

I have had to change the defs logic in collectCallSiteParameters() a bit
to be able to describe the XORs. The XORs implicitly define $eflags,
which would cause them to never be considered, due to a guard condition
that I->getNumDefs() is one. I have changed that condition so that we
now only consider instructions where a forwarded register overlaps with
the instruction's single explicit define. We still need to collect the implicit
defines of other forwarded registers to remove them from the work list.
I'm not sure how to move towards supporting instructions with multiple
explicit defines, cases where forwarded register are implicitly defined,
and/or cases where an instruction produces values for multiple forwarded
registers. Perhaps the describeLoadedValue() hook should take a register
argument, and we then leave it up to the hook to describe the loaded
value in that register? I have not yet encountered a situation where
that would be necessary though.

Reviewers: aprantl, vsk, djtodoro, NikolaPrica

Reviewed By: vsk

Subscribers: ychen, hiraditya, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D67225

llvm-svn: 371333
2019-09-08 14:22:06 +00:00
Simon Pilgrim 178cd2cd3a [X86][SSE] Fix out of range shift introduced in D67070/rL371328
Use APInt to create the comparison mask instead.
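A hedged illustration of why APInt helps here (the helper name is made
up; only the getLowBitsSet call is real LLVM API):

```
#include "llvm/ADT/APInt.h"

// Shifting a plain integer by its full bit width (e.g. 1ULL << 64) is
// undefined behavior; an APInt mask is well-defined for any in-range
// bit count, which avoids the out-of-range shift.
llvm::APInt comparisonMask(unsigned NumBits) {
  // Requires NumBits <= 64; sets the low NumBits of a 64-bit value.
  return llvm::APInt::getLowBitsSet(64, NumBits);
}
```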

llvm-svn: 371330
2019-09-08 12:44:22 +00:00
Simon Pilgrim 9d57002070 [X86] Add test case for PR32546
llvm-svn: 371329
2019-09-08 11:56:07 +00:00
Simon Pilgrim 3262084384 [X86][SSE] Add support for <64 x i1> bool reduction
This generalizes the existing <32 x i1> pre-AVX2 split code to support reductions from <64 x i1> as well, we can probably generalize to any larger pow2 case in the future if the (unlikely) need ever arises.

We still need to tweak combineBitcastvxi1 to improve AVX512F codegen as its assumes vXi1 types should be handled on the mask registers even when they aren't legal.

Differential Revision: https://reviews.llvm.org/D67070

llvm-svn: 371328
2019-09-08 11:46:21 +00:00
Craig Topper 37dd59298f [X86] Make getZeroVector return floating point vectors in their native type on SSE2 and later.
isel used to require zero vectors to be canonicalized to a single
type to minimize the number of patterns needed to match. This is
no longer required.

I plan to do this to integers too, but floating point was simpler
to start with. Integer has a complication where v32i16/v64i8 aren't
legal when the other 512-bit integer types are.

llvm-svn: 371325
2019-09-08 00:43:52 +00:00
Craig Topper 1829a09bea [X86] Add support for unfold broadcast loads from FMA instructions.
llvm-svn: 371323
2019-09-07 21:54:40 +00:00
Craig Topper a461c26dd8 [X86] Add broadcast load unfolding tests for FMA instructions.
llvm-svn: 371322
2019-09-07 21:54:36 +00:00
Sebastian Pop eacb2c2c97 [aarch64] Add combine patterns for fp16 fmla
This patch enables generation of fused multiply add/sub for instructions operating on fp16.
Tested on aarch64-linux.

Differential Revision: https://reviews.llvm.org/D67297

llvm-svn: 371321
2019-09-07 20:24:51 +00:00
Craig Topper 8cfff1e1bc [X86] Add prefer-128-bit subtarget feature.
Summary:
Similar to the previous prefer-256-bit flag. We might want to
enable this by default for some CPUs. This just starts the initial
work to implement and prove that it affects TTI's vector width.

Reviewers: RKSimon, echristo, spatel, atdt

Reviewed By: RKSimon

Subscribers: lebedev.ri, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67311

llvm-svn: 371319
2019-09-07 19:54:22 +00:00
Simon Pilgrim 31c98abda3 [X86][AVX] Add 'f5' v4f64 shuffle test mentioned in D66004
llvm-svn: 371314
2019-09-07 16:13:48 +00:00
Fangrui Song 72e99e63a2 [ELF][MC] Set types of aliases of IFunc to STT_GNU_IFUNC
```
.type  foo,@gnu_indirect_function
.set   foo,foo_resolver

.set foo2,foo
.set foo3,foo2
```

The types of foo2 and foo3 should be STT_GNU_IFUNC, but we currently
resolve them to the type of foo_resolver. This patch fixes it.

Differential Revision: https://reviews.llvm.org/D67206
Patch by Senran Zhang

llvm-svn: 371312
2019-09-07 14:58:47 +00:00
Roman Lebedev 4e76f88072 [SimplifyCFG][NFC] Autogenerate PhiEliminate3.ll
llvm-svn: 371311
2019-09-07 13:53:14 +00:00
Roman Lebedev 88bab08a88 [SimplifyCFG][NFC] Autogenerate two tests
llvm-svn: 371310
2019-09-07 13:35:54 +00:00
Bjorn Pettersson d065c81164 [CodeGen] Handle SMULFIXSAT with scale zero in TargetLowering::expandFixedPointMul
Summary:
Normally TargetLowering::expandFixedPointMul would handle
SMULFIXSAT with scale zero by using an SMULO to compute the
product and determine if saturation is needed (if overflow
happened). But if SMULO isn't custom/legal it falls through
and uses the same technique, using MULHS/SMUL_LOHI, as used
for non-zero scales.

Problem was that when checking for overflow (handling saturation)
when not using MULO we did not expect to find a zero scale. So
we ended up in an assertion when doing
  APInt::getLowBitsSet(VTSize, Scale - 1)

This patch fixes the problem by adding a new special case for
how saturation is computed when scale is zero.
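A hedged sketch of the scale-zero special case in ordinary C++ (using
the GCC/Clang overflow builtin; the names are illustrative):

```
#include <cstdint>
#include <limits>

// With scale == 0 there are no fractional bits to shift out, so a
// saturating signed fixed-point multiply degenerates to a plain
// multiply plus an overflow check.
int32_t smul_fix_sat_scale0(int32_t A, int32_t B) {
  int32_t Res;
  if (!__builtin_mul_overflow(A, B, &Res))
    return Res;
  // On overflow, saturate toward the sign of the true product.
  bool Negative = (A < 0) != (B < 0);
  return Negative ? std::numeric_limits<int32_t>::min()
                  : std::numeric_limits<int32_t>::max();
}
```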

Reviewers: RKSimon, bevinh, leonardchan, spatel

Reviewed By: RKSimon

Subscribers: wuzish, nemanjai, hiraditya, MaskRay, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67071

llvm-svn: 371309
2019-09-07 12:16:23 +00:00
Bjorn Pettersson 5e331e4ce8 [Intrinsic] Add the llvm.umul.fix.sat intrinsic
Summary:
Add an intrinsic that takes 2 unsigned integers with
the scale of them provided as the third argument and
performs fixed point multiplication on them. The
result is saturated and clamped between the largest and
smallest representable values of the first 2 operands.
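A hedged reference model of the semantics described above, for a
32-bit case (an illustration, not the intrinsic's implementation):

```
#include <cstdint>

// llvm.umul.fix.sat.i32(a, b, scale) as described above: multiply two
// unsigned fixed-point values with 'scale' fractional bits, clamping
// to the largest representable value on overflow (the smallest is 0,
// so only the upper bound can be hit for unsigned operands).
uint32_t umul_fix_sat_i32(uint32_t A, uint32_t B, unsigned Scale) {
  uint64_t Shifted = ((uint64_t)A * B) >> Scale;
  return Shifted > UINT32_MAX ? UINT32_MAX : (uint32_t)Shifted;
}
```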

This is a part of implementing fixed point arithmetic
in clang where some of the more complex operations
will be implemented as intrinsics.

Patch by: leonardchan, bjope

Reviewers: RKSimon, craig.topper, bevinh, leonardchan, lebedev.ri, spatel

Reviewed By: leonardchan

Subscribers: ychen, wuzish, nemanjai, MaskRay, jsji, jdoerfert, Ka-Ka, hiraditya, rjmccall, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D57836

llvm-svn: 371308
2019-09-07 12:16:14 +00:00
Nikita Popov 314893cc4b [X86] Fix pshuflw formation from repeated shuffle mask (PR43230)
Fix for https://bugs.llvm.org/show_bug.cgi?id=43230.

When creating PSHUFLW from a repeated shuffle mask, we have to apply
the checks to the repeated mask, not the original one. For the test
case from PR43230 the inspected part of the original mask is all undef.

Differential Revision: https://reviews.llvm.org/D67314

llvm-svn: 371307
2019-09-07 12:13:44 +00:00
Nikita Popov fdc6977ff3 [LVI] Look through extractvalue of insertvalue
This addresses the issue mentioned on D19867. When we simplify
with.overflow instructions in CVP, we leave behind extractvalue
of insertvalue sequences that LVI no longer understands. This
means that we can not simplify any instructions based on the
with.overflow anymore (until some other pass like InstCombine
cleans them up).

This patch extends LVI extractvalue handling by calling
SimplifyExtractValueInst (which doesn't do anything more than
constant folding + looking through insertvalue) and using the block
value of the simplification.

A possible alternative would be to do something similar to
SimplifyIndVars, where we instead directly try to replace
extractvalue users of the with.overflow. This would need some
additional structural changes to CVP, as it's currently not legal
to remove anything but the current instruction -- we'd have to
introduce a worklist with instructions scheduled for deletion or similar.

Differential Revision: https://reviews.llvm.org/D67035

llvm-svn: 371306
2019-09-07 12:03:59 +00:00
Nikita Popov 5d02f259c0 [X86] Add test for PR43230; NFC
llvm-svn: 371305
2019-09-07 12:03:48 +00:00
Bjorn Pettersson 2b698a13a1 [DwarfExpression] Disallow some rewrites to avoid undefined behavior
Summary:
The value operand of DW_OP_plus_uconst/DW_OP_constu can be
large (it uses uint64_t as representation internally in LLVM).
This means that the uint64_t to int conversions, previously done
by DwarfExpression::addMachineRegExpression, could lose information.
Also, the negation done in "-Offset" was undefined behavior in case
Offset was exactly INT_MIN.

To avoid the above problems, we now avoid transformation like
 [Reg, DW_OP_plus_uconst, Offset] --> [DW_OP_breg, Offset]
and
 [Reg, DW_OP_constu, Offset, DW_OP_plus]  --> [DW_OP_breg, Offset]
when Offset > INT_MAX.

And we avoid to transform
 [Reg, DW_OP_constu, Offset, DW_OP_minus] --> [DW_OP_breg,-Offset]
when Offset > INT_MAX+1.
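A hedged sketch of the guard this implies (a hypothetical helper; the
commit's actual checks live in DwarfExpression):

```
#include <climits>
#include <cstdint>

// Offset arrives as a uint64_t. Folding it into DW_OP_breg requires a
// (possibly negated) int, so the rewrite is only safe when the value
// survives the conversion; note that negating INT_MIN itself is UB.
bool safeToFoldAsBreg(uint64_t Offset, bool IsMinus) {
  if (!IsMinus)
    return Offset <= (uint64_t)INT_MAX;   // [DW_OP_breg, Offset]
  return Offset <= (uint64_t)INT_MAX + 1; // [DW_OP_breg, -Offset]
}
```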

The patch also adjusts DwarfCompileUnit::constructVariableDIEImpl
to make sure that "DW_OP_constu, Offset, DW_OP_minus" is used
instead of "DW_OP_plus_uconst, Offset" when creating DIExpressions
with negative frame index offsets.

Notice that this might just be the tip of the iceberg. There
are lots of fishy handling related to these constants. I think both
DIExpression::appendOffset and DIExpression::extractIfOffset may
trigger undefined behavior for certain values.

Reviewers: sdesmalen, rnk, JDevlieghere

Reviewed By: JDevlieghere

Subscribers: jholewinski, aprantl, hiraditya, ychen, uabelho, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D67263

llvm-svn: 371304
2019-09-07 11:40:10 +00:00
Bjorn Pettersson e85acf946d [DebugInfo] Pre-commit of test case for DW_OP_breg/DW_OP_fbreg folds
This currently triggers undefined behavior if executed with an
ubsan build. It is just a precommit of the test case to show that
we have a problem.

Fix is proposed in https://reviews.llvm.org/D67263 and plan is to
commit the fix directly after this patch.

llvm-svn: 371303
2019-09-07 11:39:57 +00:00
Roman Lebedev 395f254bf0 [SimplifyCFG][NFC] Make merge-cond-stores-cost.ll X86-specific, and rewrite it
We clearly perform store-merging, even though div is really costly.

llvm-svn: 371300
2019-09-07 10:55:04 +00:00
Roman Lebedev 0ff6d7f305 [SimplifyCFG][NFC] Show that we don't consider the cost when merging cond stores
We count the instructions in each BB separately, not their cost.

llvm-svn: 371297
2019-09-07 09:25:26 +00:00
Roman Lebedev 8d3e4d3a4d [SimplifyCFG][NFC] Regenerate merge-cond-stores* tests
llvm-svn: 371296
2019-09-07 09:25:18 +00:00
Hideto Ueno f2b9dc4758 [Attributor] ValueSimplify Abstract Attribute
Summary:
This patch introduces initial `AAValueSimplify` which simplifies a value in a context.

example
- (for function return values) If all the return values are the same constant, then we can replace the call site's returned value with that constant.
- If an internal function takes the same value (a constant) as an argument in the callsite, then we can replace the argument with that constant.

Reviewers: jdoerfert, sstefan1

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66967

llvm-svn: 371291
2019-09-07 07:03:05 +00:00
Xing GUO ed20dcb88b Revert [CodeGen] Fix typos to run tests. NFC.
This reverts r371286 (git commit b38105bbd0)

r371286 caused buildbot failures. I'll check it.

llvm-svn: 371289
2019-09-07 05:14:47 +00:00
Xing GUO b38105bbd0 [CodeGen] Fix typos to run tests. NFC.
llvm-svn: 371286
2019-09-07 04:57:53 +00:00
Teresa Johnson 9c27b59cec Change TargetLibraryInfo analysis passes to always require Function
Summary:
This is the first change to enable the TLI to be built per-function so
that -fno-builtin* handling can be migrated to use function attributes.
See discussion on D61634 for background. This is an enabler for fixing
handling of these options for LTO, for example.

This change should not affect behavior, as the provided function is not
yet used to build a specifically per-function TLI, but rather enables
that migration.

Most of the changes were very mechanical, e.g. passing a Function to the
legacy analysis pass's getTLI interface, or in Module level cases,
adding a callback. This is similar to the way the per-function TTI
analysis works.

There was one place where we were looking for builtins but not in the
context of a specific function. See FindCXAAtExit in
lib/Transforms/IPO/GlobalOpt.cpp. I'm somewhat concerned my workaround
could provide the wrong behavior in some corner cases. Suggestions
welcome.

Reviewers: chandlerc, hfinkel

Subscribers: arsenm, dschuff, jvesely, nhaehnle, mehdi_amini, javed.absar, sbc100, jgravelle-google, eraman, aheejin, steven_wu, george.burgess.iv, dexonsmith, jfb, asbirlea, gchatelet, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66428

llvm-svn: 371284
2019-09-07 03:09:36 +00:00
Craig Topper dd507867ef [X86] Add tests for fp128 frem, sqrt, sin, and cos.
llvm-svn: 371283
2019-09-07 01:39:21 +00:00
Craig Topper 2dd5a205e6 [X86] Autogenerate fp128-libcalls.ll
llvm-svn: 371282
2019-09-07 01:39:12 +00:00
Amara Emerson a1cf4d9795 [AArch64][GlobalISel] Enable the localizer for optimized builds.
Despite the fact that the localizer's original motivation was to fix horrendous
constant spilling at -O0, shortening live ranges still has net benefits even
with optimizations enabled.

On an -Os build of CTMark, doing this improves code size by 0.5% geomean.

There are a few regressions, with bullet increasing in size by 0.5%. One example from
bullet where code size increased slightly was due to GlobalISel actually now
generating the same code as SelectionDAG. So we actually have an opportunity
in future to implement better heuristics for localization and therefore be
*better* than SDAG in some cases. In relation to other optimizations though that
one is relatively minor.

Differential Revision: https://reviews.llvm.org/D67303

llvm-svn: 371266
2019-09-06 22:27:09 +00:00
Craig Topper 03936cb0f9 [X86] Add a AVX512VBMI command line to min-legal-vector-width.ll. Always enable fast-variable-shuffle
Trying to minimize the features we need to manipulate when this
is updated for D67259.

The VBMI is interesting because it enables some improved combining
for truncates.

I enabled fast-variable-shuffle because all the CPUs we're going
to add implicitly enable it. So they can share check lines.

llvm-svn: 371261
2019-09-06 21:49:01 +00:00
Craig Topper a31112e357 [X86] Replace -mcpu with -mattr on some tests.
llvm-svn: 371260
2019-09-06 21:48:44 +00:00
Matt Arsenault cf10372119 GlobalISel: Add G_FMAD instruction
llvm-svn: 371254
2019-09-06 20:49:10 +00:00
Matt Arsenault 3e45c70288 GlobalISel: Support physical register inputs in patterns
llvm-svn: 371253
2019-09-06 20:32:37 +00:00
Puyan Lotfi 5476bd9432 [llvm-ifs] Improving detection of PlatformKind from triple for TBD generation.
It was pointed out that I had hard-coded PlatformKind. This is rectifying that.

Differential Revision: https://reviews.llvm.org/D67255

llvm-svn: 371248
2019-09-06 19:59:59 +00:00
Sean Fertile eaf34a983c [PowerPC][XCOFF] Remove basic test. [NFC]
Test verified that we could compile an empty module and produce an XCOFF
object file. Newer tests supersede this coverage, so it's safe to remove.

llvm-svn: 371247
2019-09-06 19:55:44 +00:00
Evandro Menezes 88a98ea3f7 [ConstantFolding] Add new test cases for transcendentals (NFC)
llvm-svn: 371246
2019-09-06 19:41:49 +00:00
Craig Topper 7bb433c87b [X86] Use MOVSX by default instead of CBW to extend i8 to AX for i8 sdivrem.
We can use a MOVSX16 here then rely on FixupBWInst to change to
MOVSX32 if the upper bits are dead. With a special case to
not promote if it could be turned into CBW.

Then we can rely on X86MCInstLower to turn the MOVSX into CBW
very late if register allocation worked out.

Using MOVSX gives an opportunity to use the MOVSX as both a
copy and a sign extend since the input and output register aren't
tied together.

Differential Revision: https://reviews.llvm.org/D67192

llvm-svn: 371243
2019-09-06 19:17:02 +00:00
Craig Topper 22b35c4291 [X86] Use MOVZX16rr8/MOVZXrm8 when extending input for i8 udivrem.
We can rely on X86FixupBWInsts to turn these into MOVZX32. This
simplifies a follow up commit to use MOVSX for i8 sdivrem with
a late optimization to use CBW when register allocation works out.

llvm-svn: 371242
2019-09-06 19:15:04 +00:00
Craig Topper 0364d89b6d [X86] Teach FixupBWInsts to turn MOVSX16rr8/MOVZX16rr8/MOVSX16rm8/MOVZX16rm8 into their 32-bit dest equivalents when the upper part of the register is dead.
llvm-svn: 371240
2019-09-06 19:14:49 +00:00
Sean Fertile 74966aca35 [PowerPC][XCOFF] Verify symbol table in xcoff object files. [NFC]
Extend the common/local-common testing for object files to also verify the
symbol table now that the needed functionality has landed in llvm-readobj.

Differential Revision: https://reviews.llvm.org/D66944

llvm-svn: 371237
2019-09-06 18:56:14 +00:00
Oliver Cruickshank a050307c05 [ARM] Add patterns for VSUB with q and r registers
Added patterns for VSUB to support q and r registers, which reduces
pressure on q registers.

llvm-svn: 371231
2019-09-06 17:02:42 +00:00
Oliver Cruickshank 3aed95af4e [ARM] Add patterns for VADD with q and r registers
Added support for VADD to use q and r registers, which reduces pressure
on q registers.

llvm-svn: 371230
2019-09-06 17:02:35 +00:00
Oliver Cruickshank 9bf27928e1 [ARM] Add patterns for VMUL with q and r registers
Added support for VMUL to use an r register, this reduces pressure on
the q registers.

llvm-svn: 371229
2019-09-06 17:02:21 +00:00
Jessica Paquette 121d9114f5 [AArch64][GlobalISel] Always fall back on tail calls with -tailcallopt
-tailcallopt requires that we perform different stack adjustments than with
sibling calls. For example, the `@caller_to0_from8` function in
test/CodeGen/AArch64/tail-call.ll requires that we adjust SP. Without
-tailcallopt, this adjustment does not happen. With it, however, it is expected.

So, to ensure that adding sibling call support doesn't break -tailcallopt,
make CallLowering always fall back on possible tail calls when -tailcallopt
is passed in.

Update test/CodeGen/AArch64/tail-call.ll with a GlobalISel line to make sure
that we don't differ from the SDAG implementation at any point.

Differential Revision: https://reviews.llvm.org/D67245

llvm-svn: 371227
2019-09-06 16:49:13 +00:00
JF Bastien 52614dfc7f [InstCombine] pow(x, +/- 0.0) -> 1.0
Summary:
This isn't an important optimization at all... We're already doing:
  pow(x, 0.0) -> 1.0
My patch merely teaches instcombine that -0.0 does the same.

However, doing this fixes an AMAZING bug! Compile this program:

  extern "C" double pow(double, double);
  double boom(double base) {
    return pow(base, -0.0);
  }

With:
  clang++ ~/Desktop/fast-math.cpp -ffast-math -O2 -S

And clang will crash with a signal. Wow, fast math is so fast it ICEs the
compiler! Arguably, the generated math is infinitely fast.

What's actually happening is that we recurse infinitely in getPow. In debug we
hit its assertion:
  assert(Exp != 0 && "Incorrect exponent 0 not handled");

We avoid this entire mess if we instead recognize that an exponent of positive
or negative zero yields 1.0.

A separate commit, r371221, fixed the same problem. This only contains the added
tests.

<rdar://problem/54598300>

Reviewers: scanon

Subscribers: hiraditya, jkorous, dexonsmith, ributzka, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67248

llvm-svn: 371224
2019-09-06 16:26:59 +00:00
Sanjay Patel 4f0e429acc [SimplifyLibCalls] handle pow(x,-0.0) before it can assert (PR43233)
https://bugs.llvm.org/show_bug.cgi?id=43233

llvm-svn: 371221
2019-09-06 16:10:18 +00:00
Sam Tebbs f1cdd95a2f [ARM] Sink add/mul(shufflevector(insertelement())) for MVE instruction selection
This patch sinks add/mul(shufflevector(insertelement())) into the basic block in which they are used so that they can then be selected together.

This is useful for various MVE instructions, such as vmla and others that take R registers.

Loop tests have been added to the vmla test file to make sure vmlas are generated in loops.

Differential revision: https://reviews.llvm.org/D66295

llvm-svn: 371218
2019-09-06 16:01:32 +00:00
Valery Pykhtin e8ade89bb3 [AMDGPU] Enable constant offset promotion to immediate operand for VMEM stores
Differential revision: https://reviews.llvm.org/D66958

llvm-svn: 371214
2019-09-06 15:33:53 +00:00
George Rimar edfd276cbc [llvm-readelf] - Print unknown st_other value if present in GNU output.
This is a fix for https://bugs.llvm.org/show_bug.cgi?id=40785.

llvm-readelf does not print the st_other field of a symbol when
it has any non-visibility bits set.

This patch:

* Aligns the "Ndx" column for the default and the new cases.
(it was 1 space character off for the case when "PROTECTED" visibility was printed)

* Prints "[<other>: 0x??]" for symbols which have additional st_other bits set.
Compared with GNU, this logic is a bit simpler and seems to be more consistent.

For MIPS, GNU can print named flags, though it can't print a mix of them:
0: 00000000 0 NOTYPE LOCAL DEFAULT UND 
1: 00000000 0 NOTYPE GLOBAL DEFAULT [OPTIONAL] UND a1
2: 00000000 0 NOTYPE GLOBAL DEFAULT [MIPS PLT] UND a2
3: 00000000 0 NOTYPE GLOBAL DEFAULT [MIPS PIC] UND a3
4: 00000000 0 NOTYPE GLOBAL DEFAULT [MICROMIPS] UND a4
5: 00000000 0 NOTYPE GLOBAL DEFAULT [MIPS16] UND a5
6: 00000000 0 NOTYPE GLOBAL DEFAULT [<other>: c] UND b1
7: 00000000 0 NOTYPE GLOBAL DEFAULT [<other>: 28] UND b2

On PPC64, it can print a localentry value that is encoded in the high bits of st_other:
63: 0000000000000850 208 FUNC GLOBAL DEFAULT [<localentry>: 8] 12

We chose to print the raw st_other field, prefixed with '0x'.

Differential revision: https://reviews.llvm.org/D67094

llvm-svn: 371201
2019-09-06 13:05:34 +00:00
Djordje Todorovic d409408e31 [test] Update the name of the debug entry values option. NFC
llvm-svn: 371199
2019-09-06 12:23:37 +00:00
James Molloy db2fa06722 [DFAPacketizer] Track resources for packetized instructions
This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.

This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.

This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.
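
A minimal sketch of the state-set tracking described above (the types and names here are illustrative, not the emitter's actual API): each packetized instruction maps every surviving resource assignment to its possible successors, and whatever states survive at the end are the feasible slot assignments.

```cpp
#include <set>
#include <utility>
#include <vector>

using ResourceState = unsigned; // bitmask of occupied functional-unit slots

// One entry per legal transition for the instruction being added:
// (state before the instruction, state after it claims some slot).
using Transition = std::pair<ResourceState, ResourceState>;

std::set<ResourceState> addToPacket(const std::set<ResourceState> &Live,
                                    const std::vector<Transition> &Edges) {
  std::set<ResourceState> Next;
  for (ResourceState S : Live)
    for (const Transition &E : Edges)
      if (E.first == S)
        Next.insert(E.second); // this slot assignment is still feasible
  return Next;
}
```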

The side-table is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).

Differential Revision: https://reviews.llvm.org/D66936

llvm-svn: 371198
2019-09-06 12:20:08 +00:00
Jeremy Morse 5d9cd3b4ca [DebugInfo] LiveDebugValues: explicitly terminate overwritten stack locations
If a stack spill location is overwritten by another spill instruction,
any variable locations pointing at that slot should be terminated. We
cannot rely on spills always being restored to registers or variable
locations being moved by a DBG_VALUE: the register allocator is entitled
to spill a value and then forget about it when it goes out of liveness.

To address this, scan for memory writes to spill locations, even those we
don't consider to be normal "spills". isSpillInstruction and
isLocationSpill distinguish the two now. After identifying spill
overwrites, terminate the open range, and insert a $noreg DBG_VALUE for
that variable.

Differential Revision: https://reviews.llvm.org/D66941

llvm-svn: 371193
2019-09-06 10:08:22 +00:00
Jay Foad 6c0204c794 [AMDGPU] Mark s_barrier as having side effects but not accessing memory.
Summary:
This fixes poor scheduling in a function containing a barrier and a few
load instructions.

Without this fix, ScheduleDAGInstrs::buildSchedGraph adds an artificial
edge in the dependency graph from the barrier instruction to the exit
node representing live-out latency, with a latency of about 500 cycles.
Because of this it thinks the critical path through the graph also has
a latency of about 500 cycles. And because of that it does not think
that any of the load instructions are on the critical path, so it
schedules them with no regard for their (80 cycle) latency, which gives
poor results.

Reviewers: arsenm, dstuttard, tpr, nhaehnle

Subscribers: kzhuravl, jvesely, wdng, yaxunl, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67218

llvm-svn: 371192
2019-09-06 10:07:28 +00:00
Sam Parker 29bf68fcfa [ARM] Fix for buildbot
llvm-svn: 371187
2019-09-06 09:36:23 +00:00
Fangrui Song d20c41dd31 [yaml2obj] Rename SHOffset (e_shoff) field to SHOff. NFC
`struct Elf*_Shdr` has a field `sh_offset`, named `ShOffset` in
llvm::ELFYAML::Section. Rename SHOffset (e_shoff) to SHOff to prevent confusion.

Reviewed By: grimar

Differential Revision: https://reviews.llvm.org/D67254

llvm-svn: 371185
2019-09-06 09:23:17 +00:00
Sam Parker 312409e464 [ARM] MVE Tail Predication
The MVE and LOB extensions of Armv8.1m can be combined to enable
'tail predication' which removes the need for a scalar remainder
loop after vectorization. Lane predication is performed implicitly
via a system register. The effects of predication are described in
Section B5.6.3 of the Armv8.1-M Architecture Reference Manual, the key points
being:
- For vector operations that perform reduction across the vector and
  produce a scalar result, whether the value is accumulated or not.
- For non-load instructions, the predicate flags determine if the
  destination register byte is updated with the new value or if the
  previous value is preserved.
- For vector store instructions, whether the store occurs or not.
- For vector load instructions, whether the loaded value or zeros are
  written to that element of the destination register.

This patch implements a pass that takes a hardware loop, containing
masked vector instructions, and converts it into something that resembles
an MVE tail predicated loop. Currently, if we had code generation,
we'd generate a loop in which the VCTP would generate the predicate
and VPST would then set up the value of VPR.P0. The loads and stores
would be placed in VPT blocks, so this is not tail predication, but
normal VPT predication with the predicate based upon an element
counting induction variable. Further work needs to be done to finally
produce a true tail predicated loop.
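
As a purely scalar model of the idea (assuming a 4-element vector; this is not MVE code), tail predication folds the remainder loop into the main loop by masking off lanes past the trip count:

```cpp
// The inner 'if' plays the role of the VCTP-generated lane predicate:
// lanes at or beyond n are masked off in the final, partial iteration.
void vadd(int *c, const int *a, const int *b, int n) {
  for (int i = 0; i < n; i += 4) {      // 4 = elements per vector
    for (int lane = 0; lane < 4; ++lane)
      if (i + lane < n)
        c[i + lane] = a[i + lane] + b[i + lane];
  }
}
```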

Because only the loads and stores are predicated, in both the LLVM IR
and MIR level, we will restrict support to only lane-wise operations
(no horizontal reductions). We will perform a final check on MIR
during loop finalisation too.

Another restriction, specific to MVE, is that all the vector
instructions need to operate on the same number of elements. This is
because predication is performed at the byte level and this is set
on entry to the loop, or by the VCTP instead.

Differential Revision: https://reviews.llvm.org/D65884

llvm-svn: 371179
2019-09-06 08:24:41 +00:00
Kang Zhang f879c68755 [CodeGen] Do the Simple Early Return in block-placement pass to optimize the blocks
Summary:

Fix a bug where the jump table was not updated, and recommit.

The `block-placement` pass creates some patterns of unconditional branches for which we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
This patch does the simple early return to optimize the blocks at the end of `block-placement`.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D63972

llvm-svn: 371177
2019-09-06 08:16:18 +00:00
Mikael Holmen dee0702b2a [MIR] Change test case to read from stdin instead of file
The

    ;CHECK: bb
    ;CHECK-NEXT: %namedVReg1353:_(p0) = COPY $d0

parts of the test case failed when the tests were placed in a directory
with "bb" in the path, since the full path of the file is then output in the
 ; ModuleID = '/repo/bb/
line, which the CHECK matched, and then the CHECK-NEXT failed.

llvm-svn: 371171
2019-09-06 06:55:54 +00:00
Craig Topper 463c8e5eeb [X86] Add tests for extending and truncating between v16i8 and v16i64 with min-legal-vector-width=256.
It looks like we might be able to do these in fewer steps, but
I'm not sure.

llvm-svn: 371170
2019-09-06 06:02:17 +00:00
Alex Brachet dfacf8851e Fix rL371162 again
llvm-svn: 371164
2019-09-06 03:31:42 +00:00
Alex Brachet 27d42af603 Fix failing test from rL371162
llvm-svn: 371163
2019-09-06 02:56:48 +00:00
Alex Brachet 0b69c59656 [yaml2obj] Make e_phoff and e_phentsize 0 if there are no program headers
Summary: It says [[ http://www.sco.com/developers/gabi/latest/ch4.eheader.html | here ]] that if there are no program headers then e_phoff should be 0, but currently it is always set after the header. GNU's `readelf` (but not `llvm-readelf`) complains about this: `readelf: Warning: possibly corrupt ELF header - it has a non-zero program header offset, but no program headers`.
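
A minimal sketch of the consistency rule (assuming a glibc-style <elf.h>; this is not the yaml2obj code itself):

```cpp
#include <elf.h>

// With no program headers, e_phoff should be 0; this change makes
// e_phentsize 0 as well.
bool phdrFieldsConsistent(const Elf64_Ehdr &H) {
  return H.e_phnum != 0 || (H.e_phoff == 0 && H.e_phentsize == 0);
}
```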

Reviewers: jhenderson, grimar, MaskRay, rupprecht

Reviewed By: jhenderson, grimar, MaskRay

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67054

llvm-svn: 371162
2019-09-06 02:27:55 +00:00
Alina Sbirlea 57fcb1d7fc Cleanup test.
llvm-svn: 371158
2019-09-06 00:58:03 +00:00
Fangrui Song 9d2504b6d8 [llvm-readobj][yaml2obj] Support SHT_LLVM_SYMPART, SHT_LLVM_PART_EHDR and SHT_LLVM_PART_PHDR
See http://lists.llvm.org/pipermail/llvm-dev/2019-February/130583.html
and D60242 for the lld partition feature.

This patch:

* Teaches yaml2obj to parse the 3 section types.
* Teaches llvm-readobj/llvm-readelf to dump the 3 section types.

There is no test for SHT_LLVM_DEPENDENT_LIBRARIES in llvm-readobj. Add
it as well.

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D67228

llvm-svn: 371157
2019-09-06 00:53:28 +00:00
Matt Arsenault 4d90625271 AMDGPU/GlobalISel: Fix load/store of types in other address spaces
There should probably be a size-only matcher.

llvm-svn: 371155
2019-09-06 00:36:06 +00:00
Matt Arsenault 9ceb6edf11 GlobalISel/TableGen: Fix handling of EXTRACT_SUBREG constraints
This was only using the correct register constraints if this was the
final result instruction. If the extract was a sub instruction of the
result, it would attempt to use GIR_ConstrainSelectedInstOperands on a
COPY, which won't work. Move the handling to
createAndImportSubInstructionRenderer so it works correctly.

I don't fully understand why runOnPattern and
createAndImportSubInstructionRenderer both need to handle these
special cases, and constrain them with slightly different methods. If
I remove the runOnPattern handling, it does break the constraint when
the final result instruction is EXTRACT_SUBREG.

llvm-svn: 371150
2019-09-06 00:05:58 +00:00
Matt Arsenault 60c8b8bcf2 AMDGPU: Allow getMemOperandWithOffset to analyze stack accesses
Report soffset as a base register if the scratch resource can be
ignored.

llvm-svn: 371149
2019-09-05 23:54:35 +00:00
Matt Arsenault 59ff77ee38 AMDGPU: Fix emitting multiple stack loads for stack passed workitems
The same stack is loaded for each workitem ID, and each use. Nothing
prevents you from creating multiple fixed stack objects with the same
offsets, so this was creating a load for each unique frame index,
despite them being the same offset. Re-use the same frame index so the
loads are CSEable.

llvm-svn: 371148
2019-09-05 23:40:14 +00:00
Eli Friedman 9dd453ce8d [AArch64] Add testcase for codegen for sdiv by 2.
llvm-svn: 371147
2019-09-05 23:40:03 +00:00
Matt Arsenault 524a9d5774 InstCombine: Fix crash on icmp of gep with addrspacecasted null
llvm-svn: 371146
2019-09-05 23:39:21 +00:00
David Blaikie 707be7ef9c llvm-reduce: Use %python from lit to get the correct/valid python binary for the reduction script
llvm-svn: 371143
2019-09-05 23:33:44 +00:00
Alina Sbirlea 35548e80d6 [AliasSetTracker] Correct AAInfo check.
Properly check if NewAAInfo conflicts with AAInfo.
Update the local variable and the alias set to record that a change occurred when a conflict is found.
Resolves PR42969.

llvm-svn: 371139
2019-09-05 23:00:36 +00:00
Vitaly Buka 9020f11377 [SimplifyCFG] Don't SimplifyBranchOnICmpChain with ExtraCase
Summary:
Here we try to avoid issues with the "explicit branch" created by SimplifyBranchOnICmpChain,
which can branch on undef. MSan by design reports branches on uninitialized
memory and undef, so we get a false report here.

In general msan does not like when we convert

```
// If at least one of them is true, MSan is OK even if the other is undef
if (a || b)
  return;
```
into
```
// If 'a' is undef MSAN will complain even if 'b' is true
if (a)
  return;
if (b)
  return;
```

Example

Before optimization we had something like this:
```
while (true) {
  bool maybe_undef = doStuff();

  while (true) {
    char c = getChar();
    if (c != 10 && c != 13)
     continue
    break;
  }

  // we know that c == 10 || c == 13 if we get here,
  // so msan knows that the branch is not affected by maybe_undef
  if (maybe_undef || c == 10 || c == 13)
    continue;
  return;
}
```

SimplifyBranchOnICmpChain will convert that into
```
while (true) {
  bool maybe_undef = doStuff();

  while (true) {
    char c = getChar();
    if (c != 10 && c != 13)
      continue;
    break;
  }

  // however msan will complain here:
  if (maybe_undef)
    continue;

  // we know that c == 10 || c == 13, so either way we will get continue
  switch(c) {
    case 10: continue;
    case 13: continue;
  }
  return;
}
```

Reviewers: eugenis, efriedma

Reviewed By: eugenis, efriedma

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67205

llvm-svn: 371138
2019-09-05 22:49:34 +00:00
Puyan Lotfi dc97ca9f25 [MIR] MIRNamer pass for improving MIR test authoring experience.
This patch reuses the MIR vreg renamer from the MIRCanonicalizerPass to clean up
names of vregs in a MIR file for MIR test authors. I found it useful when
writing a regression test for a globalisel failure I encountered recently and
thought it might be useful for other folks as well.

Differential Revision: https://reviews.llvm.org/D67209

llvm-svn: 371121
2019-09-05 20:44:33 +00:00
Jessica Paquette 20e8667098 Recommit "[AArch64][GlobalISel] Teach AArch64CallLowering to handle basic sibling calls"
Recommit basic sibling call lowering (https://reviews.llvm.org/D67189)

The issue was that if you have a return type other than void, call lowering
will emit COPYs to get the return value after the call.

Disallow sibling calls other than ones that return void for now. Also
proactively disable swifterror tail calls for now, since there's a similar issue
with COPYs there.

Update call-translator-tail-call.ll to include test cases for each of these
things.

llvm-svn: 371114
2019-09-05 20:18:34 +00:00
Eli Friedman cae1e47f6e [IfConversion] Fix diamond conversion with unanalyzable branches.
The code was incorrectly counting the number of identical instructions,
and therefore tried to predicate an instruction which should not have
been predicated.  This could have various effects: a compiler crash,
an assembler failure, a miscompile, or just generating an extra,
unnecessary instruction.

Instead of depending on TargetInstrInfo::removeBranch, which only
works on analyzable branches, just remove all branch instructions.

Fixes https://bugs.llvm.org/show_bug.cgi?id=43121 and
https://bugs.llvm.org/show_bug.cgi?id=41121 .

Differential Revision: https://reviews.llvm.org/D67203

llvm-svn: 371111
2019-09-05 20:02:38 +00:00
Roman Lebedev 071ce66729 [NFC][InstCombine] Overhaul 'unsigned add overflow' tests, ensure that all 3 patterns have full test coverage
llvm-svn: 371108
2019-09-05 19:13:15 +00:00
Craig Topper 0fde412140 [X86] Enable BuildSDIVPow2 for i16.
We're able to use a 32-bit ADD and CMOV here, and this should work
well with our other i16->i32 promotion optimizations.

llvm-svn: 371107
2019-09-05 18:49:52 +00:00
Craig Topper b8d6ba3ca2 [X86] Override BuildSDIVPow2 for X86.
As noted in PR43197, we can use test+add+cmov+sra to implement
signed division by a power of 2.

This is based off the similar version in AArch64, but I've
adjusted it to use target independent nodes where AArch64 uses
target specific CMP and CSEL nodes. I've also blocked INT_MIN
as the transform isn't valid for that.
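
The underlying identity, sketched in C++ (a compiler is expected to select the bias with test+cmov rather than a branch):

```cpp
// Signed division by 2^k truncates toward zero, so negative values need a
// bias of (2^k - 1) before the arithmetic shift. For k = 3 (divide by 8):
int sdiv_by_8(int x) {
  int bias = (x < 0) ? 7 : 0; // 7 == (1 << 3) - 1
  return (x + bias) >> 3;     // arithmetic shift right completes the divide
}
```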

I've limited this to i32 and i64 on 64-bit targets for now and only
when CMOV is supported. i8 and i16 need further investigation to be
sure they get promoted to i32 well.

I adjusted a few tests to enable cmov to demonstrate the new
codegen. I also changed twoaddr-coalesce-3.ll to 32-bit mode
without cmov to avoid perturbing the scenario that is being
set up there.

Differential Revision: https://reviews.llvm.org/D67087

llvm-svn: 371104
2019-09-05 18:15:07 +00:00
Roman Lebedev 8360c42e25 [InstCombine] foldICmpBinOp(): consider inverted check in 'unsigned sub overflow' check
A follow-up for r329011.
This may be changed to produce @llvm.sub.with.overflow in a later patch,
but for now just make things more consistent overall.

A few observations stem from this:
* There does not seem to be a similar one-instruction fold for uadd-overflow
* I'm not sure we'll want to canonicalize `B u> A` as `usub.with.overflow`,
  so since the `icmp` here no longer refers to `sub`,
  reconstructing `usub.with.overflow` will be problematic,
  and will likely require a standalone pass (similar to DivRemPairs).

https://rise4fun.com/Alive/Zqs

Name: (A - B) u> A --> B u> A
  %t0 = sub i8 %A, %B
  %r = icmp ugt i8 %t0, %A
=>
  %r = icmp ugt i8 %B, %A

Name: (A - B) u<= A --> B u<= A
  %t0 = sub i8 %A, %B
  %r = icmp ule i8 %t0, %A
=>
  %r = icmp ule i8 %B, %A

Name: C u< (C - D) --> C u< D
  %t0 = sub i8 %C, %D
  %r = icmp ult i8 %C, %t0
=>
  %r = icmp ult i8 %C, %D

Name: C u>= (C - D) --> C u>= D
  %t0 = sub i8 %C, %D
  %r = icmp uge i8 %C, %t0
=>
  %r = icmp uge i8 %C, %D

llvm-svn: 371101
2019-09-05 17:41:02 +00:00
Roman Lebedev ecb7ea1ae7 [InstCombine] foldICmpBinOp(): consider inverted check in 'unsigned add overflow' check
A follow-up for r342004.
This will be changed to produce @llvm.add.with.overflow in a later patch,
but for now just make things more consistent overall.

https://rise4fun.com/Alive/qxE

Name: (Op1 + X) u< Op1 --> ~Op1 u< X
  %t0 = add i8 %Op1, %X
  %r = icmp ult i8 %t0, %Op1
=>
  %n = xor i8 %Op1, -1
  %r = icmp ult i8 %n, %X

Name: (Op1 + X) u>= Op1 --> ~Op1 u>= X
  %t0 = add i8 %Op1, %X
  %r = icmp uge i8 %t0, %Op1
=>
  %n = xor i8 %Op1, -1
  %r = icmp uge i8 %n, %X

;-------------------------------------------------------------------------------

Name: Op0 u> (Op0 + X) --> X u> ~Op0
  %t0 = add i8 %Op0, %X
  %r = icmp ugt i8 %Op0, %t0
=>
  %n = xor i8 %Op0, -1
  %r = icmp ugt i8 %X, %n

Name: Op0 u<= (Op0 + X) --> X u<= ~Op0
  %t0 = add i8 %Op0, %X
  %r = icmp ule i8 %Op0, %t0
=>
  %n = xor i8 %Op0, -1
  %r = icmp ule i8 %X, %n

llvm-svn: 371100
2019-09-05 17:40:49 +00:00
Roman Lebedev 1d9e0dcc9d [InstCombine][NFC] Tests for 'unsigned sub overflow' check
----------------------------------------
Name: unsigned sub, overflow, v0
  %sub = sub i8 %x, %y
  %ov = icmp ugt i8 %sub, %x
=>
  %agg = usub_overflow i8 %x, %y
  %sub = extractvalue {i8, i1} %agg, 0
  %ov = extractvalue {i8, i1} %agg, 1

Done: 1
Optimization is correct!

----------------------------------------
Name: unsigned sub, no overflow, v0
  %sub = sub i8 %x, %y
  %ov = icmp ule i8 %sub, %x
=>
  %agg = usub_overflow i8 %x, %y
  %sub = extractvalue {i8, i1} %agg, 0
  %not.ov = extractvalue {i8, i1} %agg, 1
  %ov = xor %not.ov, -1

Done: 1
Optimization is correct!

llvm-svn: 371099
2019-09-05 17:40:37 +00:00
Roman Lebedev 745046c23f [InstCombine][NFC] Tests for 'unsigned add overflow' check
----------------------------------------
Name: unsigned add, overflow, v0
  %add = add i8 %x, %y
  %ov = icmp ult i8 %add, %x
=>
  %agg = uadd_overflow i8 %x, %y
  %add = extractvalue {i8, i1} %agg, 0
  %ov = extractvalue {i8, i1} %agg, 1

Done: 1
Optimization is correct!

----------------------------------------
Name: unsigned add, overflow, v1
  %add = add i8 %x, %y
  %ov = icmp ult i8 %add, %y
=>
  %agg = uadd_overflow i8 %x, %y
  %add = extractvalue {i8, i1} %agg, 0
  %ov = extractvalue {i8, i1} %agg, 1

Done: 1
Optimization is correct!

----------------------------------------
Name: unsigned add, no overflow, v0
  %add = add i8 %x, %y
  %ov = icmp uge i8 %add, %x
=>
  %agg = uadd_overflow i8 %x, %y
  %add = extractvalue {i8, i1} %agg, 0
  %not.ov = extractvalue {i8, i1} %agg, 1
  %ov = xor %not.ov, -1

Done: 1
Optimization is correct!

----------------------------------------
Name: unsigned add, no overflow, v1
  %add = add i8 %x, %y
  %ov = icmp uge i8 %add, %y
=>
  %agg = uadd_overflow i8 %x, %y
  %add = extractvalue {i8, i1} %agg, 0
  %not.ov = extractvalue {i8, i1} %agg, 1
  %ov = xor %not.ov, -1

Done: 1
Optimization is correct!

llvm-svn: 371098
2019-09-05 17:40:28 +00:00
Sanjay Patel 10412a69f9 [x86] fix horizontal math bug exposed by improved demanded elements analysis (PR43225)
https://bugs.llvm.org/show_bug.cgi?id=43225

llvm-svn: 371095
2019-09-05 17:28:17 +00:00
Craig Topper 673da001c5 [X86] Remove unneeded CHECK lines from a test. NFC
llvm-svn: 371093
2019-09-05 17:24:25 +00:00
Denis Bakhvalov 58f172f05a [MergedLoadStoreMotion] Sink stores to BB with more than 2 predecessors
If we have:

bb5:
  br i1 %arg3, label %bb6, label %bb7

bb6:
  %tmp = getelementptr inbounds i32, i32* %arg1, i64 2
  store i32 3, i32* %tmp, align 4
  br label %bb9

bb7:
  %tmp8 = getelementptr inbounds i32, i32* %arg1, i64 2
  store i32 3, i32* %tmp8, align 4
  br label %bb9

bb9:  ; preds = %bb4, %bb6, %bb7
  ...

We can't sink stores directly into bb9.
This patch creates a new BB that is a successor of %bb6 and %bb7
and sinks the stores into that block.

SplitFooterBB is the parameter to the pass that controls
that behavior.

Change-Id: I7fdf50a772b84633e4b1b860e905bf7e3e29940f
Differential: https://reviews.llvm.org/D66234
llvm-svn: 371089
2019-09-05 17:00:32 +00:00
Sanjay Patel 3856512334 [x86] add test for horizontal math bug (PR43225); NFC
llvm-svn: 371088
2019-09-05 16:58:18 +00:00
Hiroshi Yamauchi d842f2eec4 [PGO][CHR] Speed up following long, interlinked use-def chains.
Summary:
Avoid visiting an instruction more than once by using a map.

This is similar to https://reviews.llvm.org/rL361416.
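
A generic sketch of the memoization (the names and the combined property are illustrative):

```cpp
#include <unordered_map>
#include <vector>

struct Inst { std::vector<const Inst *> Operands; };

bool visit(const Inst *I, std::unordered_map<const Inst *, bool> &Memo) {
  auto It = Memo.find(I);
  if (It != Memo.end())
    return It->second;       // shared operand: reuse the cached answer
  bool R = false;
  for (const Inst *Op : I->Operands)
    R |= visit(Op, Memo);    // each instruction is now computed only once
  return Memo[I] = R;
}
```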

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67198

llvm-svn: 371086
2019-09-05 16:56:55 +00:00
Alina Sbirlea ae900d3882 [MemorySSA] Update MemorySSA when removing debug.value calls.
llvm-svn: 371084
2019-09-05 16:25:24 +00:00
Krzysztof Parzyszek 0ce93194fe [Hexagon] Fix type in HexagonTargetLowering::ReplaceNodeResults
llvm-svn: 371083
2019-09-05 16:19:47 +00:00
Simon Pilgrim 29361c704d [X86][SSE] EltsFromConsecutiveLoads - ignore non-zero offset base loads (PR43227)
As discussed on D64551 and PR43227, we don't correctly handle cases where the base load has a non-zero byte offset.

Until we can properly handle this, we must bail from EltsFromConsecutiveLoads.

llvm-svn: 371078
2019-09-05 15:07:07 +00:00
Fangrui Song c3bc697974 [yaml2obj] Write the section header table after section contents
Linkers (ld.bfd/gold/lld) place the section header table at the very
end. This allows tools to strip it, which is optional in executable/shared objects.
In addition, if we add or remove a section, the size of the section header
table will change. Placing the section header table at the end keeps section
offsets unchanged.

yaml2obj currently places the section header table immediately after the
program header. Follow what linkers do to make offset updating easier.

Reviewed By: grimar

Differential Revision: https://reviews.llvm.org/D67221

llvm-svn: 371074
2019-09-05 14:25:57 +00:00
George Rimar 4e14bf71b7 [llvm-readelf] - Allow dumping dynamic symbols when there is no program headers.
D62179 introduced a regression. llvm-readelf lost the ability to dump the dynamic symbols
when there is a .dynamic section with a DT_SYMTAB but no program headers:
https://reviews.llvm.org/D62179#1652778

Below is a program flow before the D62179 change:

1) Find SHT_DYNSYM.
2) Find there is no PT_DYNAMIC => don't try to parse it.
3) Print dynamic symbols using information about them found on step (1).

And after the change it became:

1) Find SHT_DYNSYM.
2) Find there is no PT_DYNAMIC => find SHT_DYNAMIC.
3) Parse dynamic table, but fail to handle the DT_SYMTAB because of the absence of the PT_LOAD. Report the "Virtual address is not in any segment" error.

This patch fixes the issue. To do this, it checks that the value of DT_SYMTAB was
mapped to a segment; if not, the value is ignored.
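
A sketch of the check (illustrative shape, not the actual llvm-readelf code): translate the DT_SYMTAB virtual address through the loaded segments, and give up if no segment contains it.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

struct Segment { uint64_t VAddr, MemSize, FileOffset; };

std::optional<uint64_t> toFileOffset(uint64_t VA,
                                     const std::vector<Segment> &Loads) {
  for (const Segment &S : Loads)
    if (VA >= S.VAddr && VA < S.VAddr + S.MemSize)
      return S.FileOffset + (VA - S.VAddr);
  return std::nullopt; // not mapped: ignore the DT_SYMTAB value
}
```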

Differential revision: https://reviews.llvm.org/D67078

llvm-svn: 371071
2019-09-05 14:02:58 +00:00
David Green 83a3341246 [ARM] Fixup the creation of VPT blocks
This attempts to just fix the creation of VPT blocks, fixing up the iteration,
which instructions are considered part of the bundle, and making sure that we do
not overrun the end of the block.

Differential Revision: https://reviews.llvm.org/D67219

llvm-svn: 371064
2019-09-05 13:37:04 +00:00
Simon Pilgrim 215910eeb2 [X86][SSE] Add (failing) test case for PR43227
llvm-svn: 371061
2019-09-05 12:36:11 +00:00
Petar Avramovic a4bfc8dfda [MIPS GlobalISel] Select G_FENCE
G_FENCE comes from the fence instruction. For MIPS, fences are generated
in AtomicExpandPass, which surrounds atomic instructions with fence
instructions when needed.
G_FENCE arguments don't have an LLT, so there is no work for the
legalizer or regbankselect. Select G_FENCE for MIPS32 during instruction selection.

Differential Revision: https://reviews.llvm.org/D67181

llvm-svn: 371056
2019-09-05 11:20:32 +00:00
Petar Avramovic f5c7fe0795 [MIPS GlobalISel] Select llvm.trap intrinsic
Select G_INTRINSIC_W_SIDE_EFFECTS for Intrinsic::trap for MIPS32
via legalizeIntrinsic.

Differential Revision: https://reviews.llvm.org/D67180

llvm-svn: 371055
2019-09-05 11:16:37 +00:00
Petar Avramovic d2574d79b6 [MIPS GlobalISel] Lower SRet pointer arguments
Instead of returning a structure by value, clang usually adds a pointer
to that structure as an argument. Pointers don't require special
handling regardless of the SRet flag. Remove the unsuccessful exit from
lowerCall for arguments with the SRet flag if they are pointers.

Differential Revision: https://reviews.llvm.org/D67179

llvm-svn: 371054
2019-09-05 11:12:01 +00:00
Simon Pilgrim 071287c5a9 Revert rL370996 from llvm/trunk: [AArch64][GlobalISel] Teach AArch64CallLowering to handle basic sibling calls
This adds support for basic sibling call lowering in AArch64. The intent here is
to only handle tail calls which do not change the ABI (hence, sibling calls.)

At this point, it is very restricted. It does not handle

- Vararg calls.
- Calls with outgoing arguments.
- Calls whose calling conventions differ from the caller's calling convention.
- Tail/sibling calls with BTI enabled.

This patch adds

- `AArch64CallLowering::isEligibleForTailCallOptimization`, which is equivalent
   to the same function in AArch64ISelLowering.cpp (albeit with the restrictions
   above.)
- `mayTailCallThisCC` and `canGuaranteeTCO`, which are identical to those in
   AArch64ISelLowering.cpp.
- `getCallOpcode`, which is exactly what it sounds like.

Tail/sibling calls are lowered by checking if they pass target-independent tail
call positioning checks, and checking if they satisfy
`isEligibleForTailCallOptimization`. If they do, then a tail call instruction is
emitted instead of a normal call. If we have a sibling call (which is always the
case in this patch), then we do not emit any stack adjustment operations. When
we go to lower a return, we check if we've already emitted a tail call. If so,
then we skip the return lowering.

For testing, this patch

- Adds call-translator-tail-call.ll to test which tail calls we currently lower,
  which ones we don't, and which ones we shouldn't.
- Updates branch-target-enforcement-indirect-calls.ll to show that we fall back
  as expected.

Differential Revision: https://reviews.llvm.org/D67189
........
This fails on EXPENSIVE_CHECKS builds due to a -verify-machineinstrs test failure in CodeGen/AArch64/dllimport.ll

llvm-svn: 371051
2019-09-05 10:38:39 +00:00
Jonas Paulsson 821858780e [SystemZ] Recognize INLINEASM_BR in backend
Handle the remaining cases also by handling asm goto in
SystemZInstrInfo::getBranchInfo().

Review: Ulrich Weigand
https://reviews.llvm.org/D67151

llvm-svn: 371048
2019-09-05 10:20:05 +00:00
Guillaume Chatelet aff45e4b23 [LLVM][Alignment] Make functions using log of alignment explicit
Summary:
This patch renames functions that take or return alignment as log2; this will help with the transition to llvm::Align.
The renaming makes it explicit that we deal with log(alignment) instead of a power of two alignment.
A few renames uncovered dubious assignments:

 - `MirParser`/`MirPrinter` were expecting powers of two, but `MachineFunction` and `MachineBasicBlock` were dealing with log2(align). This patch fixes it and updates the documentation.
 - `MachineBlockPlacement` exposes two flags (`align-all-blocks` and `align-all-nofallthru-blocks`) supposedly interpreted as power-of-two alignments; internally these values are interpreted as log2(align). This patch updates the documentation.
 - `MachineFunction` exposes `align-all-functions`, also supposedly a power-of-two alignment; internally this value is interpreted as log2(align). This patch updates the documentation.
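
For reference, the two conventions relate as follows (a minimal sketch):

```cpp
#include <cassert>
#include <cstdint>

uint64_t alignFromLog2(unsigned Log2Align) { return uint64_t(1) << Log2Align; }

unsigned log2FromAlign(uint64_t Align) {
  assert(Align != 0 && (Align & (Align - 1)) == 0 && "not a power of two");
  unsigned Log = 0;
  while ((uint64_t(1) << Log) < Align)
    ++Log;
  return Log;
}
```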

Reviewers: lattner, thegameg, courbet

Subscribers: dschuff, arsenm, jyknight, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, javed.absar, hiraditya, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, Jim, s.egerton, llvm-commits, courbet

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65945

llvm-svn: 371045
2019-09-05 10:00:22 +00:00
George Rimar e7b4d20998 Recommit r371023 "[lib/ObjectYAML] - Stop calling error(1) when mapping the st_other field of a symbol."
Fix: added the missing "return 0;"

Original commit message:
This eliminates one of the error(1) calls in this lib.
It is different from the others because it happens during the field-mapping stage
and can be easily fixed.

Differential revision: https://reviews.llvm.org/D67150

llvm-svn: 371030
2019-09-05 08:52:26 +00:00
George Rimar 7f1f50de41 Revert r371023 "[lib/ObjectYAML] - Stop calling error(1) when mapping the st_other field of a symbol."
It broke BBots:

http://lab.llvm.org:8011/builders/lld-x86_64-darwin13/builds/36387/steps/build_Lld/logs/stdio
http://lab.llvm.org:8011/builders/clang-x86_64-debian-fast/builds/17117/steps/test/logs/stdio

llvm-svn: 371024
2019-09-05 08:38:29 +00:00
George Rimar 2c9c432256 [lib/ObjectYAML] - Stop calling error(1) when mapping the st_other field of a symbol.
This eliminates one of the error(1) calls in this lib.
It is different from the others because it happens during the field-mapping stage
and can be easily fixed.

Differential revision: https://reviews.llvm.org/D67150

llvm-svn: 371023
2019-09-05 08:28:43 +00:00
Igor Kudrin e46639620d [DWARF] Fix referencing Range List Tables from CUs for DWARF64.
As DW_AT_rnglists_base points after the header and headers have
different sizes for DWARF32 and DWARF64, we have to use the format
of the CU to adjust the offset correctly in order to extract
the referenced range list table.

The patch also changes the type of RangeSectionBase because in DWARF64
it is 8 bytes long.
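
A sketch of the adjustment (header field sizes per DWARF v5; the function names are illustrative):

```cpp
#include <cstdint>

// unit_length (4 or 12) + version (2) + address_size (1)
// + segment_selector_size (1) + offset_entry_count (4).
uint64_t rnglistsHeaderSize(bool IsDWARF64) {
  return (IsDWARF64 ? 12u : 4u) + 2 + 1 + 1 + 4; // 20 or 12 bytes
}

// DW_AT_rnglists_base points just past the header, so the table itself
// starts that many bytes earlier.
uint64_t rangeListTableOffset(uint64_t RnglistsBase, bool IsDWARF64) {
  return RnglistsBase - rnglistsHeaderSize(IsDWARF64);
}
```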

Differential Revision: https://reviews.llvm.org/D67098

llvm-svn: 371016
2019-09-05 07:02:28 +00:00
Igor Kudrin 991f0fb149 [DWARF] Support DWARF64 in DWARFListTableHeader.
This enables 64-bit DWARF support for parsing range and location list tables.

Differential Revision: https://reviews.llvm.org/D66643

llvm-svn: 371014
2019-09-05 06:49:05 +00:00
Matt Arsenault f581d575ce AMDGPU: Add intrinsics for address space identification
The library currently uses ptrtoint and directly checks the queue ptr
for this, which counts as a pointer capture.

llvm-svn: 371009
2019-09-05 02:20:39 +00:00
Matt Arsenault 69b1a2ae65 AMDGPU/GlobalISel: Restore insert point when getting aperture
Avoids SSA violations in a future patch.

llvm-svn: 371008
2019-09-05 02:20:32 +00:00
Matt Arsenault 25156ae7ea AMDGPU/GlobalISel: Fix placeholder value used for addrspacecast
llvm-svn: 371007
2019-09-05 02:20:29 +00:00
Matt Arsenault d51a3746d0 AMDGPU/GlobalISel: Fix assert on load from constant address
llvm-svn: 371006
2019-09-05 02:20:25 +00:00
Puyan Lotfi 6d3ea2d9b6 [mir-canon][NFC] Adding -verify-machineinstrs to mir-canon tests.
In the review process for some of the refactoring of MIRCanonicalizationPass it
was noted that some of the tests didn't have the verifier enabled. Enabling it here.

llvm-svn: 371005
2019-09-05 02:10:41 +00:00
Reid Kleckner 29ccc8523a Use -mtriple to fix AMDGPU test sensitive to object file format
GOTPCREL32 doesn't exist on COFF, so it isn't used when this test runs
on Windows.

llvm-svn: 371000
2019-09-05 00:34:01 +00:00
Jessica Paquette b78324fc40 [AArch64][GlobalISel] Teach AArch64CallLowering to handle basic sibling calls
This adds support for basic sibling call lowering in AArch64. The intent here is
to only handle tail calls which do not change the ABI (hence, sibling calls.)

At this point, it is very restricted. It does not handle

- Vararg calls.
- Calls with outgoing arguments.
- Calls whose calling conventions differ from the caller's calling convention.
- Tail/sibling calls with BTI enabled.

This patch adds

- `AArch64CallLowering::isEligibleForTailCallOptimization`, which is equivalent
   to the same function in AArch64ISelLowering.cpp (albeit with the restrictions
   above.)
- `mayTailCallThisCC` and `canGuaranteeTCO`, which are identical to those in
   AArch64ISelLowering.cpp.
- `getCallOpcode`, which is exactly what it sounds like.

Tail/sibling calls are lowered by checking if they pass target-independent tail
call positioning checks, and checking if they satisfy
`isEligibleForTailCallOptimization`. If they do, then a tail call instruction is
emitted instead of a normal call. If we have a sibling call (which is always the
case in this patch), then we do not emit any stack adjustment operations. When
we go to lower a return, we check if we've already emitted a tail call. If so,
then we skip the return lowering.

For testing, this patch

- Adds call-translator-tail-call.ll to test which tail calls we currently lower,
  which ones we don't, and which ones we shouldn't.
- Updates branch-target-enforcement-indirect-calls.ll to show that we fall back
  as expected.

Differential Revision: https://reviews.llvm.org/D67189

llvm-svn: 370996
2019-09-04 22:54:52 +00:00
Matt Arsenault 2df41a8e38 AMDGPU/GlobalISel: Select G_BITREVERSE
llvm-svn: 370980
2019-09-04 20:46:31 +00:00
Matt Arsenault 5ff310e298 GlobalISel: Add basic legalization for G_BITREVERSE
llvm-svn: 370979
2019-09-04 20:46:15 +00:00
Johannes Doerfert 7ab5253704 [Attributor][Fix] Make sure we do not delete live code
Summary: Liveness needs to mark edges, not blocks, as dead.

Reviewers: sstefan1, uenoku

Subscribers: hiraditya, bollu, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67191

llvm-svn: 370975
2019-09-04 20:34:52 +00:00
Leonard Chan eca01b031d [NewPM][Sancov] Make Sancov a Module Pass instead of 2 Passes
This patch merges the sancov module and function passes into one module pass.

The reason for this is because we ran into an out of memory error when
attempting to run asan fuzzer on some protobufs (pc.cc files). I traced the OOM
error to the destructor of SanitizerCoverage where we only call
appendTo[Compiler]Used which calls appendToUsedList. I'm not sure where precisely
in appendToUsedList causes the OOM, but I am able to confirm that it's calling
this function *repeatedly* that causes the OOM. (I hacked sancov a bit such that
I can still create and destroy a new sancov on every function run, but only call
appendToUsedList after all functions in the module have finished. This passes, but
when I make it such that appendToUsedList is called on every sancov destruction,
we hit OOM.)

I don't think the OOM is from just adding to the SmallSet and SmallVector inside
appendToUsedList since in either case for a given module, they'll have the same
max size. I suspect that when the existing llvm.compiler.used global is erased,
the memory behind it isn't freed. I could be wrong on this though.

This patch works around the OOM issue by just calling appendToUsedList at the
end of every module run instead of function run. The same amount of constants
still get added to llvm.compiler.used, abd we make the pass usage and logic
simpler by not having any inter-pass dependencies.

Differential Revision: https://reviews.llvm.org/D66988

llvm-svn: 370971
2019-09-04 20:30:29 +00:00
Evandro Menezes bf78e39cbb [InstCombine] Add more test cases (NFC)
Add more test cases for simplifying `log()`.

llvm-svn: 370966
2019-09-04 20:01:09 +00:00
Alina Sbirlea 6da79ce1fe [MemorySSA] Re-enable MemorySSA use.
Differential Revision: https://reviews.llvm.org/D58311

llvm-svn: 370957
2019-09-04 19:16:04 +00:00
David Bolvansky 420cbb6190 [InstCombine] sub(xor(x, y), or(x, y)) -> neg(and(x, y))
Summary:
```
Name: sub(xor(x, y), or(x, y)) -> neg(and(x, y))
%or = or i32 %y, %x
%xor = xor i32 %x, %y
%sub = sub i32 %xor, %or
  =>
%sub1 = and i32 %x, %y
%sub = sub i32 0, %sub1

Optimization: sub(xor(x, y), or(x, y)) -> neg(and(x, y))
Done: 1
Optimization is correct!
```

https://rise4fun.com/Alive/8OI

Reviewers: lebedev.ri

Reviewed By: lebedev.ri

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67188

llvm-svn: 370945
2019-09-04 18:03:21 +00:00
David Bolvansky f6233d90f0 [NFC] Added tests for new fold
llvm-svn: 370941
2019-09-04 17:37:06 +00:00
David Bolvansky 2ceb00db76 [NFC] Adjust test filename
llvm-svn: 370939
2019-09-04 17:33:53 +00:00
Craig Topper f0081dac81 [X86] Pre-commit test cases and test run line changes for D67087
llvm-svn: 370937
2019-09-04 17:33:38 +00:00
David Bolvansky 0e07248704 [InstCombine] Fold sub (and A, B) (or A, B)) to neg (xor A, B)
Summary:
```
Name: sub(and(x, y), or(x, y)) -> neg(xor(x, y))
%or = or i32 %y, %x
%and = and i32 %x, %y
%sub = sub i32 %and, %or
  =>
%sub1 = xor i32 %x, %y
%sub = sub i32 0, %sub1

Optimization: sub(and(x, y), or(x, y)) -> neg(xor(x, y))
Done: 1
Optimization is correct!
```

https://rise4fun.com/Alive/VI6

Found by @lebedev.ri. Also author of the proof.

Reviewers: lebedev.ri, spatel

Reviewed By: lebedev.ri

Subscribers: llvm-commits, lebedev.ri

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67155

llvm-svn: 370934
2019-09-04 17:30:53 +00:00
Matt Arsenault 84489b34f6 AMDGPU: Handle frame index expansion with no free SGPRs pre gfx9
Since an add instruction must produce an unused carry out, this
requires additional SGPRs. This can be avoided by keeping the entire
offset computation in SGPRs. If one SGPR is still available, this only
costs one extra mov. If none are available, the entire computation can
be done in place and reversed.

This does assume the use is a VGPR operand. This was already assumed,
and we currently only select frame indexes to VALU instructions. This
should probably be fixed at some point to handle more possible MIR.

llvm-svn: 370929
2019-09-04 17:12:57 +00:00
Matt Arsenault 70becc20fa GlobalISel: Add G_BITREVERSE
This is the first failing pattern for AMDGPU and is trivial to handle.

llvm-svn: 370927
2019-09-04 17:06:53 +00:00
Johannes Doerfert 2f6220633c [Attributor] Look at internal functions only on-demand
Summary:
Instead of building attributes for internal functions which we do not
update as long as we assume they are dead, we now do not create
attributes until we assume the internal function to be live. This
reduces the number of required iterations, as well as the number of
required updates, in real code. On our tests, the results are mixed.

Reviewers: sstefan1, uenoku

Subscribers: hiraditya, bollu, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66914

llvm-svn: 370924
2019-09-04 16:35:20 +00:00
Johannes Doerfert 97fd582b91 [Attributor] Use the white list for attributes consistently
Summary:
We create attributes on-demand so we need to check the white list
on-demand. This also unifies the location at which we create,
initialize, and eventually invalidate new abstract attributes.

The tests show mixed results: a few more call site attributes are
determined, which can cause more iterations.

Reviewers: uenoku, sstefan1

Subscribers: hiraditya, bollu, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66913

llvm-svn: 370922
2019-09-04 16:26:20 +00:00
Matt Arsenault d9af712da4 AMDGPU/GlobalISel: Make 16-bit constants legal
This is mostly for the benefit of patterns which use 16-bit constants.

llvm-svn: 370921
2019-09-04 16:19:45 +00:00
Johannes Doerfert b0412e437c [Attributor] Deal more explicit with non-exact definitions
Summary:
Before, we tried to rule out non-exact definitions early, but that led to
on-demand attributes being created for them anyway. As a consequence, we
needed to look at the definition again in the initialize method of each
attribute. This patch centralizes this lookup and tightens the condition
under which we give up on non-exact definitions.

Reviewers: uenoku, sstefan1

Subscribers: hiraditya, bollu, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67115

llvm-svn: 370917
2019-09-04 16:16:13 +00:00
Krzysztof Parzyszek 08a09822a5 [Hexagon] Improve generated code for test-if-bit-clear, one more time
Adjust isel patterns after recent commit. Fixes https://llvm.org/PR43194.

llvm-svn: 370913
2019-09-04 15:22:36 +00:00
Sanjay Patel 4a2cd7be5a [InstSimplify] guard against unreachable code (PR43218)
This would crash:
https://bugs.llvm.org/show_bug.cgi?id=43218

llvm-svn: 370911
2019-09-04 15:12:55 +00:00
Alexey Lapshin cbf1f3b771 [Debuginfo][SROA] Need to handle dbg.value in SROA pass.
The SROA pass processes debug info incorrectly if applied twice.
Specifically, after SROA runs the first time, instcombine converts dbg.declare
intrinsics into dbg.value. Inlining creates new opportunities for SROA,
so it is called again. This time it does not correctly handle the previously
inserted dbg.value intrinsics.

Differential Revision: https://reviews.llvm.org/D64595

llvm-svn: 370906
2019-09-04 14:19:49 +00:00
Sanjay Patel 791949afe5 [InstCombine] add tests for insert/extract with identity shuffles; NFC
llvm-svn: 370901
2019-09-04 13:38:49 +00:00
David Bolvansky b9e9478244 [NFC] Added a negative test for new fold
llvm-svn: 370890
2019-09-04 12:46:25 +00:00
David Bolvansky 13dadedc29 [NFC] Fixed test
llvm-svn: 370888
2019-09-04 12:43:14 +00:00
David Bolvansky 3747c48d64 [NFC] Adjust tests for new fold
llvm-svn: 370886
2019-09-04 12:22:28 +00:00
David Bolvansky 163b05b45d [NFC] Added tests for new fold
llvm-svn: 370885
2019-09-04 12:18:53 +00:00
David Bolvansky 358b80b340 [InstCombine] Fold sub (or A, B) (and A, B) to (xor A, B)
Summary:
```
Name: sub or and to xor
%or = or i32 %y, %x
%and = and i32 %x, %y
%sub = sub i32 %or, %and
  =>
%sub = xor i32 %x, %y

Optimization: sub or and to xor
Done: 1
Optimization is correct!
```
https://rise4fun.com/Alive/eJu

Reviewers: spatel, lebedev.ri

Reviewed By: lebedev.ri

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67153

llvm-svn: 370883
2019-09-04 12:00:33 +00:00
Pavel Labath 98634c2e11 Fix address sizes in the dwarfdump-debug-loc-error-cases test
The test is building a 64-bit executable, so the addresses should be
64-bit too. The test was still passing even with smaller address size,
but it was hitting the "unexpected end of data" error sooner than it
should.

llvm-svn: 370882
2019-09-04 11:47:20 +00:00
David Bolvansky 54f3a651f3 [NFC] Added a new test for D67153
llvm-svn: 370881
2019-09-04 11:44:00 +00:00
David Bolvansky 75d734475a [NFC] Added tests for 'SUB of OR and AND to XOR' fold
llvm-svn: 370878
2019-09-04 11:17:08 +00:00
Jeremy Morse 337a7cb55e [DebugInfo] LiveDebugValues: locations with different exprs should not be merged
When comparing variable locations, LiveDebugValues currently considers only
the machine location, ignoring any DIExpression applied to it. This is a
problem because that DIExpression can do pretty much anything to the machine
location, for example dereferencing it.

This patch adds DIExpressions to that comparison; now variables based on the
same register/memory-location but with different expressions will compare
differently, and be dropped if we attempt to merge them between blocks. This
reduces variable coverage-range a little, but only because we were producing
broken locations.
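
A reduced sketch of the comparison change (the real VarLoc carries more state; since DIExpressions are uniqued, pointer equality is sound):

```cpp
struct DIExpression; // opaque; uniqued, so pointers can be compared

struct VarLoc {
  unsigned RegOrSlot;        // machine register or spill-slot identity
  const DIExpression *Expr;  // now part of a location's identity
};

bool sameLocation(const VarLoc &A, const VarLoc &B) {
  // Previously only RegOrSlot was compared; locations with different
  // expressions (e.g. one dereferenced) must no longer merge.
  return A.RegOrSlot == B.RegOrSlot && A.Expr == B.Expr;
}
```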

Differential Revision: https://reviews.llvm.org/D66942

llvm-svn: 370877
2019-09-04 11:09:05 +00:00
Pavel Labath 88b4e28a67 DWARF: Fix a regression in location list dumping
Summary:
While fixing the handling of some error cases, r370363 introduced new
problems -- assertion failures due to unchecked errors (my excuse is that a very
early version of that patch used Optional<T> instead of Expected).

This patch adds proper handling of parsing errors encountered when
dumping location lists from inside DWARF DIEs, and adds a bunch of
additional tests.

I reorder the arguments of the location list dumping functions to make
them consistent, and also be able to dump the two kinds of location
lists generically.

Reviewers: JDevlieghere, dblaikie, probinson

Subscribers: aprantl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67102

llvm-svn: 370868
2019-09-04 10:09:12 +00:00
Fangrui Song 441d450115 [yaml2obj] Support PT_GNU_STACK and PT_GNU_RELRO
PT_GNU_STACK is used in an llvm-objcopy test.

I plan to use PT_GNU_RELRO in a patch to improve nested segment
processing in llvm-objcopy (PR42963).

Reviewed By: grimar

Differential Revision: https://reviews.llvm.org/D67146

llvm-svn: 370857
2019-09-04 09:19:31 +00:00
Sam Parker fea532230b [ARM][ParallelDSP] SExt mul for accumulation
For any unpaired muls, we accumulate them as an input to the
reduction. Check the type of the mul and perform a sext if the
existing accumulator input type is not the same.

Differential Revision: https://reviews.llvm.org/D66993

llvm-svn: 370851
2019-09-04 08:41:34 +00:00
Taewook Oh 1975e635e6 [IRPrinting] Improve module pass printer to work better with -filter-print-funcs
Summary: Previously, the module pass printer printed the banner even when the module didn't include any function provided with the `-filter-print-funcs` option. This introduced a lot of noise, especially with ThinLTO. This diff addresses the issue and makes the banner print only when the module includes functions in the `-filter-print-funcs` list.

Reviewers: fedor.sergeev

Subscribers: mehdi_amini, hiraditya, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66560

llvm-svn: 370849
2019-09-04 08:08:58 +00:00
Jim Lin b77aa1d248 [RISCV] Enable tail call opt for variadic function
Summary: Tail call opt can treat a variadic function call the same as a normal function call

Reviewers: mgrang, asb, lenary, lewis-revill

Reviewed By: lenary

Subscribers: luismarques, pzheng, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, s.egerton, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66278

llvm-svn: 370835
2019-09-04 02:03:36 +00:00
Puyan Lotfi 954d6d661f [NFC][llvm-ifs] Adding .ifs files to the test list for llvm-ifs tool.
llvm-svn: 370830
2019-09-04 00:07:49 +00:00
Reid Kleckner 3fa07dee94 Revert [Windows] Disable TrapUnreachable for Win64, add SEH_NoReturn
This reverts r370525 (git commit 0bb1630685)
Also reverts r370543 (git commit 185ddc08ee)

The approach I took only works for functions marked `noreturn`. In
general, a call that is not known to be noreturn may be followed by
unreachable for other reasons. For example, there could be multiple call
sites to a function that throws sometimes, and at some call sites, it is
known to always throw, so it is followed by unreachable. We need to
insert an `int3` in these cases to pacify the Windows unwinder.

I think this probably deserves its own standalone, Win64-only fixup pass
that runs after block placement. Implementing that will take some time,
so let's revert to TrapUnreachable in the mean time.

llvm-svn: 370829
2019-09-03 22:27:27 +00:00
Heejin Ahn 49e7ee4dd5 [WebAssembly] Compare functions by names in Emscripten Sjlj
Summary:
This removes all string constants for function names and compares
functions by string directly when needed. Many of these constants are
used only once or twice so the benefit of defining them separately is
not very clear, and this actually fixes a bug.

When we already have a `malloc` declaration which is an alias to
something else within the module,
```
@malloc = weak hidden alias i8* (i32), i8* (i32)* @dlmalloc
```
(this happens compiling with emscripten with `-s WASM_OBJECT_FILES=0`
because all bc files are merged before being fed into `wasm-ld` which
runs the backend optimizations as LTO)

`Module::getFunction("malloc")` in `canLongjmp` returns `nullptr`
because `Module::getFunction` dyncasts the pointer to `Function`, but the
alias is a `GlobalValue`, not a `Function`. This makes `canLongjmp`
return false for `malloc` in this case, and we end up adding a lot of
longjmp handling code around malloc. This is not only a code size
increase but actually a bug because `malloc` is used in the entry block
when preparing for setjmp tables for emscripten sjlj handling, and this
makes initial setjmp preparation, which has to happen in the entry
block, move to another split block, and this interferes with SSA update
later.

This also adds two more functions, `getTempRet0` and `setTempRet0`, in
the list of not longjmp-able functions.

Fixes https://github.com/emscripten-core/emscripten/issues/8935.

Reviewers: sbc100

Subscribers: mehdi_amini, jgravelle-google, hiraditya, sunfish, dexonsmith, dschuff, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67129

llvm-svn: 370828
2019-09-03 22:26:49 +00:00
Vedant Kumar 0fcfe89717 [llvm-profdata] Add mode to recover from profile read failures
Add a mode in which profile read errors are not immediately treated as
fatal. In this mode, merging makes forward progress and reports failure
only if no inputs can be read.

Differential Revision: https://reviews.llvm.org/D66985

llvm-svn: 370827
2019-09-03 22:23:16 +00:00
Vedant Kumar 95fb23ab37 [InstrProf] Tighten a check for malformed data records in raw profiles
The check needs to validate a counter offset before performing pointer
arithmetic with the (potentially corrupt) offset.
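
The general shape of such a check (a sketch, not the actual reader code): validate the offset against the buffer bounds before forming a pointer from it.

```cpp
#include <cstddef>
#include <cstdint>

const uint64_t *counterPtr(const uint8_t *Buf, size_t BufSize,
                           uint64_t Offset, uint64_t NumCounters) {
  // Reject before computing Buf + Offset: pointer arithmetic past the
  // buffer is UB, which is what UBSan's pointer-overflow check flags.
  if (Offset > BufSize ||
      NumCounters > (BufSize - Offset) / sizeof(uint64_t))
    return nullptr; // malformed record
  return reinterpret_cast<const uint64_t *>(Buf + Offset);
}
```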

Found by UBSan's pointer overflow check.

rdar://54843625

Differential Revision: https://reviews.llvm.org/D66979

llvm-svn: 370826
2019-09-03 22:23:14 +00:00
Amara Emerson 2a2c25ba48 [AArch64][GlobalISel] Legalize 128 bit divisions to libcalls.
Now that we have the infrastructure to support s128 types as parameters
we can expand these to libcalls.
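
For illustration, a 128-bit division written in C++ (assuming standard compiler-rt libcall naming):

```cpp
// AArch64 has no native 128-bit divide, so this is expected to lower to a
// libcall such as __divti3, with each s128 value split across two x-registers.
__int128 quotient(__int128 A, __int128 B) { return A / B; }
```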

Differential Revision: https://reviews.llvm.org/D66185

llvm-svn: 370823
2019-09-03 21:42:32 +00:00
Amara Emerson fbaf425b79 [GlobalISel][CallLowering] Add support for splitting types according to calling conventions.
On AArch64, s128 types have to be split into s64 GPRs when passed as arguments.
This change adds the generic support in call lowering for dealing with multiple
registers, for incoming and outgoing args.

Support for splitting for return types not yet implemented.

Differential Revision: https://reviews.llvm.org/D66180

llvm-svn: 370822
2019-09-03 21:42:28 +00:00
Alina Sbirlea ccb1862bc9 [MemorySSA] Disable MemorySSA use.
Differential Revision: https://reviews.llvm.org/D58311

llvm-svn: 370821
2019-09-03 21:20:46 +00:00
Johannes Doerfert b19cd27b28 [Attributor] Use the delete API for liveness
Reviewers: uenoku, sstefan1

Subscribers: hiraditya, bollu, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66833

llvm-svn: 370818
2019-09-03 20:42:16 +00:00