Commit Graph

Simon Pilgrim 5e426363ba [X86][AVX] Add tests showing failure to use chained PACKSS/PACKUS for multi-stage compaction shuffles
The sign/zero extended top bits mean that we could use chained PACK*S ops here
2020-04-03 14:16:16 +01:00
Jay Foad c7e1fc8496 [AMDGPU] Fix CHECK lines 2020-04-03 10:07:21 +01:00
Guillaume Chatelet ca11c480e7 [Alignment][NFC] Convert MachineIRBuilder::buildDynStackAlloc to Align
Summary:
The change in IRTranslator is not trivial but is NFC as far as I can tell.

This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77292
2020-04-03 09:05:19 +00:00
OCHyams 9b56cc9361 [DebugInfo] Salvage debug info when sinking loop invariant instructions
Reviewed By: vsk, aprantl, djtodoro

Differential Revision: https://reviews.llvm.org/D77318
2020-04-03 09:19:26 +01:00
Scott Constable 5b519cf1fc [X86] Add Indirect Thunk Support to X86 to mitigate Load Value Injection (LVI)
This pass replaces each indirect call/jump with a direct call to a thunk that looks like:

lfence
jmpq *%r11

This ensures that if the value in register %r11 was loaded from memory, then
the value in %r11 is (architecturally) correct prior to the jump.
Also adds a new target feature to X86: +lvi-cfi
("cfi" meaning control-flow integrity).
The feature can be added via the clang CLI using -mlvi-cfi.

This is an alternate implementation to https://reviews.llvm.org/D75934 that merges the thunk insertion functionality with the existing X86 retpoline code.

Differential Revision: https://reviews.llvm.org/D76812
2020-04-03 00:34:39 -07:00
Sourabh Singh Tomar 69c8fb1c65 [DWARF5] Added support for debug_macro section parsing and dumping in llvm-dwarfdump.
Summary:
This patch adds parsing and dumping of the DWARFv5 .debug_macro section in llvm-dwarfdump;
it does not introduce any new switch. The existing switch "--debug-macro"
should be used to dump either the macinfo or macro section.

Reviewed By: dblaikie, ikudrin, jhenderson

Differential Revision: https://reviews.llvm.org/D73086
2020-04-03 12:23:51 +05:30
Xiang1 Zhang fef2dab100 Bugfix for buildbot failure at commit 43f031d312
Author: Xiang1 Zhang <xiang1.zhang@intel.com>
Date:   Fri Apr 3 11:25:38 2020 +0800

    Enable IBT(Indirect Branch Tracking) in JIT with CET(Control-flow Enforcement Technology)
2020-04-03 13:25:35 +08:00
Scott Constable 71e8021d82 [X86][NFC] Generalize the naming of "Retpoline Thunks" and related code to "Indirect Thunks"
There are applications for indirect call/branch thunks other than retpoline for Spectre v2, e.g.,

https://software.intel.com/security-software-guidance/software-guidance/load-value-injection

Therefore it makes sense to refactor X86RetpolineThunks as a more general capability.

Differential Revision: https://reviews.llvm.org/D76810
2020-04-02 21:55:13 -07:00
laith sakka a0983ed3d2 Handle exp2 with proper vectorization and lowering to SVML calls
Summary:
Add mapping from exp2 math functions
to corresponding SVML calls.

This is a follow up and extension for llvm diff
https://reviews.llvm.org/D19544
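
As a rough illustration, a loop like the following can now be vectorized into SVML calls (the `__svml_exp2f4` name mentioned below follows the usual SVML naming convention and is an assumption here, not taken from the patch):

```c
#include <math.h>

/* Built with vectorization enabled and -fveclib=SVML, the exp2f calls in
   this loop can be mapped to an SVML vector routine (e.g. __svml_exp2f4
   for a 4-lane vector; the exact entry-point name is an assumption). */
void vec_exp2(float *restrict out, const float *restrict in, int n) {
  for (int i = 0; i < n; ++i)
    out[i] = exp2f(in[i]);
}
```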

Test Plan:
- update test case and run ninja check.
- run tests locally

Reviewers: wenlei, hoyFB, mmasten, mzolotukhin, spatel

Reviewed By: spatel

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77114
2020-04-02 21:11:13 -07:00
Hongtao Yu 88da019977 Fix a bug in the inliner that causes subsequent double inlining
Summary:
A recent change in the instruction simplifier enables a call to a function that just returns one of its parameters to be simplified to simply loading the parameter. This exposes a bug in the inliner where double inlining may occur, which in turn may cause a compiler ICE when an already-inlined callsite is reused for further inlining.
To put it simply, in a C program like the following, when the function call second(t) is inlined, its code t = third(t) will be reduced to just loading the return value of the callsite first(). This causes the inliner's internal data structure to register the first() callsite for the call edge representing the third() call, and therefore incurs double inlining when both call edges are considered inline candidates. I'm making a fix to stop the inliner from reusing a callsite for new call edges.

```
void top()
{
    int t = first();
    second(t);
}

void second(int t)
{
   t = third(t);
   fourth(t);
}

int third(int t)
{
   return t;
}
```
The actual failing case is much trickier than the example here and is only reproducible with the legacy inliner. The way the legacy inliner works is to process each SCC in bottom-up order. That means in reality function first may already be inlined into top, or function third is either inlined into second or folded into nothing. To repro the failure seen when building a large application, we need to figure out a way to confuse the inliner so that the bottom-up inlining is not fulfilled. I'm doing this by making the second call indirect so that the alias analyzer fails to figure out the right call graph edge from top to second and top can be processed before second during the bottom-up traversal.  We also need to tweak the test code so that when the inlining of top happens, the function body of second is not that optimized, by delaying the function attribute deducer pass (i.e., the pass that tells that function third has no side effects and just returns its parameter). Since the CGSCC pass is iterative, additional calls are added to top to postpone the inlining of second to the second round, right after the first function attribute deducing pass is done. I haven't been able to repro the failure with the new pass manager since the processing order of inlined callsites is a bit different, but in theory the issue could happen there too.

Note that this fix could introduce a side effect that blocks the simplification of inlined code, specifically for a call site that can be folded to another call site. I hope this can be compensated for by subsequent inlining or folding, as shown in the attached unit test. The ideal fix would be to separate the use of the VMap. However, in reality this failing pattern shouldn't happen often, and even if it happens, there is a good chance that the non-folded call site will be refolded by iterative inlining or subsequent simplification.

Reviewers: wenlei, davidxl, tejohnson

Reviewed By: wenlei, davidxl

Subscribers: eraman, nikic, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76248
2020-04-02 21:08:05 -07:00
Xiang1 Zhang 43f031d312 Enable IBT(Indirect Branch Tracking) in JIT with CET(Control-flow Enforcement Technology)
Summary:
This patch comes from H.J.'s 2bd54ce7fa

**This patch fixes the LLVM unit tests that fail when running on a CET machine** (e.g. ExecutionEngine/MCJIT/MCJITTests).

The main reason we enable IBT when JIT-compiling with CET is that the JIT doesn't know whether its caller program is CET enabled or not.
If the JIT's caller program is non-CET, it doesn't matter whether the JIT generates CET code.
But if the JIT's caller program is CET enabled, the JIT must generate CET code or it will cause control-protection exceptions.

I have tested the patch with the llvm unit tests and llvm-test-suite on a CET machine; they passed.
H.J. also tested it by building and running VNCserver (Virtual Network Console), and it works too.
(Without this patch, VNCserver crashes on a CET machine.)

Reviewers: hjl.tools, craig.topper, LuoYuanke, annita.zhang, pengfei

Subscribers: tstellar, efriedma, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76900
2020-04-03 11:44:07 +08:00
Jessica Paquette 71947ed927 [AArch64][GlobalISel] Constrain reg operands in selectBrJT
This was causing a machine verifier failure on the test suite.

Make sure that we don't end up with a weird register class here.

Failure for reference:

*** Bad machine code: Illegal virtual register for instruction ***
- function:    check_constrain
- basic block: %bb.1  (0x7f8b70839f80)
- instruction: early-clobber %6:gpr64, early-clobber %7:gpr64sp =
  JumpTableDest32 %5:gpr64, %1:gpr64sp, %jump-table.0
- operand 3:   %1:gpr64sp
Expected a GPR64 register, but got a GPR64sp register

Differential Revision: https://reviews.llvm.org/D77349
2020-04-02 20:34:11 -07:00
Wenju He fe8ac0fe51 [x86] Fix Intel OpenCL builtin CalleeSavedRegs on skx
Summary: Align with the AVX512 builtin implementations, some of which don't preserve rdi.

Reviewers: yubing, tianqing, craig.topper

Reviewed By: craig.topper

Subscribers: yaxunl, Anastasia, hiraditya

Differential Revision: https://reviews.llvm.org/D77032
2020-04-03 11:27:40 +08:00
Qiu Chaofan 71f1ab5354 [PowerPC] Remove unnecessary XSRSP instruction
The MI peephole already removes unnecessary FRSP instructions. This patch
removes unnecessary XSRSP instructions in the same way.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D77208
2020-04-03 11:05:14 +08:00
Austin Kerbow 30f18ed387 [AMDGPU] Handle SMRD signed offset immediate
Summary:
This fixes a few issues related to SMRD offsets. On gfx9 and gfx10 we have a
signed byte offset immediate; however, we can overflow into a negative value since we
treat it as unsigned.

Also, the SMRD SOFFSET sgpr is an unsigned offset on all subtargets. We
sometimes tried to use negative values here.

Third, S_BUFFER instructions should never use a signed offset immediate.

Differential Revision: https://reviews.llvm.org/D77082
2020-04-02 17:41:52 -07:00
Adrian Prantl 93fe58c9cf Teach the stripNonLineTableDebugInfo pass about the llvm.dbg.label intrinsic.
Debug info for labels is not generated at -gline-tables-only, so this
pass should remove them.

Differential Revision: https://reviews.llvm.org/D77345
2020-04-02 17:39:33 -07:00
Adrian Prantl c024f3ebdc Teach the stripNonLineTableDebugInfo pass about the llvm.dbg.addr intrinsic.
This patch also strips llvm.dbg.addr intrinsics when downgrading debug
info to linetables-only.

Differential Revision: https://reviews.llvm.org/D77343
2020-04-02 17:39:33 -07:00
Nico Weber e875ba1509 Try again to get tests passing again on Windows.
Things pass locally, but some tests on some bots are still unhappy.
I'm not sure why. See if using forward slashes as before helps.
2020-04-02 20:00:38 -04:00
Lang Hames 05598441de Re-apply 0071eaaf08, "[ORC] Export __cxa_atexit ...", with fixes.
Forgot to include part of the testcase. Thanks to Nico for spotting that and
reverting!
2020-04-02 16:03:35 -07:00
Matt Arsenault 2680e88069 AMDGPU: Fix broken check lines 2020-04-02 18:52:49 -04:00
Matt Arsenault f68cc2a7ed AMDGPU: Use 128-bit DS operations by default 2020-04-02 17:17:47 -04:00
Matt Arsenault 192cccb152 AMDGPU: Add some tests for exotic denormal mode combinations 2020-04-02 17:17:12 -04:00
Matt Arsenault 5660bb6bc9 AMDGPU: Remove denormal subtarget features
Switch to using the denormal-fp-math/denormal-fp-math-f32 attributes.
2020-04-02 17:17:12 -04:00
Matt Arsenault 75cf30918f AMDGPU: Assume f32 denormals are enabled by default
This will likely introduce catastrophic performance regressions on
older subtargets, but should be correct. A follow up change will
remove the old fp32-denormals subtarget features, and switch to using
the new denormal-fp-math/denormal-fp-math-f32 attributes. Frontends
should be making sure to add the denormal-fp-math-f32 attribute when
appropriate to avoid performance regressions.
2020-04-02 17:17:12 -04:00
Nico Weber a16ba6fea2 Reland "Make it possible for lit.site.cfg to contain relative paths, and use it for llvm and clang"
The problem on Windows was that the \b in "..\bin" was interpreted
as an escape sequence. Use r"" strings to prevent that.

This reverts commit ab11b9eefa,
with raw strings in the lit.site.cfg.py.in files.

Differential Revision: https://reviews.llvm.org/D77184
2020-04-02 16:12:03 -04:00
Craig Topper 4fdb63bbf0 [X86] Enable combineExtSetcc for vectors larger than 256 bits when we've disabled 512 bit vectors.
The compares are going to be type legalized to 256 bits so we
might as well fold the extend.
2020-04-02 12:44:27 -07:00
Nico Weber ab11b9eefa Revert "Make it possible for lit.site.cfg to contain relative paths, and use it for llvm and clang"
This reverts commit fb80b6b2d5 and
follow-up 631ee8b24a.

Seems to not work on Windows:
http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-windows10pro-fast/builds/31684
http://lab.llvm.org:8011/builders/llvm-clang-win-x-aarch64/builds/6512

Let's revert while I investigate.
2020-04-02 15:00:09 -04:00
Anna Thomas bf7a16a768 [InlineFunction] Update valid return attributes at callsite within callee body
Consider a callee function that has a call (C) within it which feeds
into the return. When we inline that callee into a callsite that has
return attributes, we can backward propagate valid attributes to the
call (C) within that inlined callee body.

This is safe to do so only if we can guarantee transfer of execution to
successor in the window of instructions between return value (i.e. the
call C) and the return instruction.

Also, this is valid only for attributes which are a property of the
callsite and not those that are dependent on the ABI or a property
of the call itself.

Reviewed-By: reames, jdoerfert

Differential Revision: https://reviews.llvm.org/D76140
2020-04-02 14:13:12 -04:00
Matt Arsenault c3d3c22a58 AMDGPU: Hack out noinline on functions using LDS globals
This is a workaround for clang adding noinline to all functions at
-O0. Previously, we would just add alwaysinline, and the verifier
would complain about having both noinline and alwaysinline. We
currently can't truly codegen this case as a freestanding function, so
override the user forcing noinline.
2020-04-02 14:12:07 -04:00
Nico Weber fb80b6b2d5 Make it possible for lit.site.cfg to contain relative paths, and use it for llvm and clang
Currently, all generated lit.site.cfg files contain absolute paths.

This makes it impossible to build on one machine, and then transfer the
build output to another machine for test execution. Being able to do
this is useful for several use cases:

1. When running tests on an ARM machine, it would be possible to build
   on a fast x86 machine and then copy build artifacts over after building.

2. It allows running several test suites (clang, llvm, lld) on 3
   different machines, reducing test time from sum(each test suite time) to
   max(each test suite time).

This patch makes it possible to pass a list of variables that should be
relative in the generated lit.site.cfg.py file to
configure_lit_site_cfg(). The lit.site.cfg.py.in file needs to call
`path()` on these variables, so that the paths are converted to absolute
form at lit start time.

The testers would have to have an LLVM checkout at the same revision,
and the build dir would have to be at the same relative path as on the
builder.

This does not yet cover how to figure out which files to copy from the
builder machine to the tester machines. (One idea is to look at the
`--graphviz=test.dot` output and copy all inputs of the `check-llvm`
target.)

Differential Revision: https://reviews.llvm.org/D77184
2020-04-02 13:53:16 -04:00
Sanjay Patel f4448063cc [InstCombine] try to reduce shuffle with bitcasted operand
shuf (bitcast X), undef, Mask --> bitcast X'

The 'inverse shuffles' test (shuf_bitcast_operand) is a pattern
in the motivating examples from PR35454:
https://bugs.llvm.org/show_bug.cgi?id=35454
(see also D76727)

We can deal with this class of patterns in generic instcombine
because we are not creating any new shuffles, just a bitcast.

Alive2 proof:
http://volta.cs.utah.edu:8080/z/mwDUZf

Differential Revision: https://reviews.llvm.org/D76844
2020-04-02 13:44:50 -04:00
Sanjay Patel b6050ca181 [VectorCombine] transform bitcasted shuffle to narrower elements
bitcast (shuf V, MaskC) --> shuf (bitcast V), MaskC'

We do not attempt this in InstCombine because we do not want to change
types and create new shuffle ops that are potentially not lowered as
well as the original code. Here, we can check the cost model to see if
it is worthwhile.

I've aggressively enabled this transform even if the types are the same
size and/or equal cost because moving the bitcast allows InstCombine to
make further simplifications.

In the motivating cases from PR35454:
https://bugs.llvm.org/show_bug.cgi?id=35454
...this is enough to let instcombine and the backend eliminate the
redundant shuffles, but we probably want to extend VectorCombine to
handle the inverse pattern (shuffle-of-bitcast) to get that
simplification directly in IR.

Differential Revision: https://reviews.llvm.org/D76727
2020-04-02 13:30:22 -04:00
Stanislav Mekhanoshin f2334a7ef2 [AMDGPU] Fix crash in SILoadStoreOptimizer
SILoadStoreOptimizer::checkAndPrepareMerge() expects the base and
paired instructions to come in order and scans the MBB from the base
to the paired instruction. The original order can be changed if
there was a dependent instruction in between and the base instruction
was moved.

Fixed by bailing out of the optimization. In theory it might still be
possible to perform the merge by swapping instructions, but in practice
it bails anyway because it finds a dependency on the same instruction
that caused the base to move.

Differential Revision: https://reviews.llvm.org/D77245
2020-04-02 10:26:47 -07:00
Sanjay Patel 12fcbcecff [InstCombine] add tests for cmyk benchmark; NFC
These are versions of a function that regressed with:
rGf2fbdf76d8d0

That particular problem occurs with an instcombine-simplifycfg-instcombine
sequence, but we can show that it exists within instcombine only with
other variations of the pattern.
2020-04-02 13:00:46 -04:00
Sanjay Patel 1008435f3d Revert "[InstCombine] do not exclude min/max from icmp with casted operand fold"
This reverts commit f2fbdf76d8.
As noted in the post-commit thread:
https://reviews.llvm.org/rGf2fbdf76d8d0
...this can obscure a min/max pattern where the components
have extra uses. We can show that the problem is independent
of this change with a slightly modified source example, so
this revert just delays/reduces the need to fix the real
problem.

We need to improve our analysis of negation or -- more
generally -- subtraction using patches like D77230 or D68408.
2020-04-02 09:15:23 -04:00
Tyker c00cb76274 [NFC] Split Knowledge retention and place it more appropriately
Summary:
Splitting Knowledge retention into Queries in Analysis and Builder into Transform/Utils
allows Queries and Transform/Utils to use Analysis.

Reviewers: jdoerfert, sstefan1

Reviewed By: jdoerfert

Subscribers: mgorny, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77171
2020-04-02 15:01:41 +02:00
Jonas Paulsson 36d4421f50 [LoopDataPrefetch + SystemZ] Let target decide on prefetching for each loop.
This patch adds

- New arguments to getMinPrefetchStride() to let the target decide on a
  per-loop basis if software prefetching should be done even with a stride
  within the limit of the hw prefetcher.

- New TTI hook enableWritePrefetching() to let a target do write prefetching
  by default (defaults to false).

- In LoopDataPrefetch:

  - A search through the whole loop to gather information before emitting any
    prefetches. This way the target can get information via new arguments to
    getMinPrefetchStride() and emit prefetches more selectively. Collected
    information includes: Does the loop have a call, how many memory
    accesses, how many of them are strided, how many prefetches will cover
    them. This is NFC relative to the previous behavior as long as the target
    does not change its definition of getMinPrefetchStride().
    definition of getMinPrefetchStride().

  - If a previous access to the same exact address was 'read', and the
    current one is 'write', make it a 'write' prefetch.

  - If two accesses that are covered by the same prefetch do not dominate
    each other, put the prefetch in a block that dominates both of them.

  - If a ConstantMaxTripCount is less than ItersAhead, then skip the loop.

- A SystemZ implementation of getMinPrefetchStride().

Review: Ulrich Weigand, Michael Kruse

Differential Revision: https://reviews.llvm.org/D70228
2020-04-02 14:57:46 +02:00
Kang Zhang 9dcac87297 [NFC][PowerPC] Using update_llc_test_checks.py to update atomics-regression.ll 2020-04-02 12:47:35 +00:00
Sanjay Patel a19b27b90e [PhaseOrdering] add test for vector trunc; NFC
See discussion in D76983.
2020-04-02 08:13:19 -04:00
Sanjay Patel ecb048c7ac [InstCombine] add tests for disguised vector trunc; NFC 2020-04-02 08:13:19 -04:00
Djordje Todorovic 5e508b9bac [llvm-dwarfdump] Add the --show-sections-sizes option
Add an option to llvm-dwarfdump to calculate the bytes within
the debug sections. Dump this numbers when using --statistics
option as well.

This is an initial patch (e.g. we should support other units,
since we only support 'bytes' now).

Differential Revision: https://reviews.llvm.org/D74205
2020-04-02 13:14:30 +02:00
Kang Zhang ce8b85c0b8 [NFC][PowerPC] Add a new test case loop-comment.ll 2020-04-02 10:16:02 +00:00
David Green fbd53ffc3a [ARM] MVE VMULL patterns
This adds MVE vmull patterns, which are conceptually the same as
mul(vmovl, vmovl), and so the tablegen patterns follow the same
structure.

For i8 and i16 this is simple enough, but in the i32 version the
multiply (in 64 bits) is illegal, meaning we need to catch the pattern
earlier in a DAG fold, because bitcasts are involved in the zext
versions and the patterns are a little different in little and big
endian. I have only added little-endian support in this patch.
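
A minimal C sketch of code that exercises the pattern (conceptually mul(vmovl, vmovl); assumes MVE auto-vectorization is enabled):

```c
#include <stdint.h>

/* Widening multiply: each product needs the i16 inputs extended to i32
   first, which is exactly the mul(vmovl, vmovl) shape that can now
   select a single MVE VMULL. */
void widening_mul(int32_t *restrict out, const int16_t *restrict a,
                  const int16_t *restrict b, int n) {
  for (int i = 0; i < n; ++i)
    out[i] = (int32_t)a[i] * (int32_t)b[i];
}
```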

Differential Revision: https://reviews.llvm.org/D76740
2020-04-02 10:57:40 +01:00
David Green c697dd9ffd [ARM] Make remaining MVE instructions predictable
The unpredictable/hasSideEffects flag is usually inferred by tablegen
from whether the instruction has a tablegen pattern (and that pattern
only has a single output instruction). Now that the MVE intrinsics are
all committed and producing code, the remaining instructions still
marked as unpredictable need to be specially handled. This adds the flag
directly to instructions that need it, notably the V*MLAL instructions
and some of the MOV's.

Differential Revision: https://reviews.llvm.org/D76910
2020-04-02 10:57:40 +01:00
Clement Courbet fb4aa30f27 [ExpandMemCmp] Allow overlapping loads in the zero-relational case.
Summary:
This allows doing `memcmp(p, q, 7)` with 2 loads instead of a call to
memcmp.
This fixes part of PR45147.
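
A hand-written C sketch of the overlapping-load idea for the equality ("zero-relational") case; `memcmp7_eq` is a hypothetical helper, not the pass's actual output:

```c
#include <stdint.h>
#include <string.h>

/* memcmp(p, q, 7) == 0 using two overlapping 4-byte loads per side.
   Re-reading byte 3 is harmless when only equality matters. */
int memcmp7_eq(const void *p, const void *q) {
  uint32_t a0, b0, a1, b1;
  memcpy(&a0, p, 4);                   /* bytes 0..3 */
  memcpy(&b0, q, 4);
  memcpy(&a1, (const char *)p + 3, 4); /* bytes 3..6 */
  memcpy(&b1, (const char *)q + 3, 4);
  return ((a0 ^ b0) | (a1 ^ b1)) == 0;
}
```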

Reviewers: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76133
2020-04-02 11:20:47 +02:00
Florian Hahn a63b5c9e53 [CallSiteSplitting] Simplify isPredicateOnPHI & continue checking PHIs.
As pointed out by @thakis, currently CallSiteSplitting bails out after
checking the first PHI node. We should check all PHI nodes, until we
find one where call site splitting is beneficial.

This patch also slightly simplifies the code using BasicBlock::phis().

Reviewers: davidxl, junbuml, thakis

Reviewed By: davidxl

Differential Revision: https://reviews.llvm.org/D77089
2020-04-02 10:11:27 +01:00
Kristof Beyls deb902252a Fix RUN line in AArch64/speculation-hardening.ll 2020-04-02 09:42:15 +01:00
WangTianQing d08fadd662 [X86] Add SERIALIZE instruction.
Summary: For more details about this instruction, please refer to the latest ISE document: https://software.intel.com/en-us/download/intel-architecture-instruction-set-extensions-programming-reference

Reviewers: craig.topper, RKSimon, LuoYuanke

Reviewed By: craig.topper

Subscribers: mgorny, hiraditya, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D77193
2020-04-02 16:19:23 +08:00
Fangrui Song 85adce3d73 [PPCInstPrinter] Change B to print the target address in hexadecimal form
Follow-up of D76591 and D76907
2020-04-01 22:38:24 -07:00
Johannes Doerfert bcd8009369 [Attributor] Use the proper context instruction in genericValueTraversal
There was a TODO in genericValueTraversal to provide the context
instruction and due to the lack of it users that wanted one just used
something available. Unfortunately, using a fixed instruction is wrong
in the presence of PHIs so we need to update the context instruction
properly.

Reviewed By: uenoku

Differential Revision: https://reviews.llvm.org/D76870
2020-04-01 22:20:47 -05:00
Johannes Doerfert a8b2fed0ae [Utils][FIX] Properly deal with occasionally deleted functions
While D68850 allowed functions to be deleted, I accidentally saved some
version of the function to be used once a suitable prefix was found.
This turned out to be problematic when the occasionally deleted function
is also occasionally modified. The test case is adjusted to resemble the
case in which the problem was found.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D76586
2020-04-01 21:56:18 -05:00
Johannes Doerfert 9e19693994 [Attributor] Derive better alignment for accessed pointers
Use DL & ABI information for better alignment deduction, e.g., if a type
is accessed and the ABI specifies an alignment requirement for such an
access we can use it. This is based on a patch by @lebedev.ri and
inspired by getBaseAlign in Loads.cpp.

Depends on D76673.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D76674
2020-04-01 21:49:57 -05:00
Nico Weber 5bac8d427d Revert "[ORC] Export __cxa_atexit from the main JITDylib in LLJIT."
This reverts commit 0071eaaf08.
Inputs/noop-main.ll wasn't checked in, so this breaks check-llvm
everywhere.
2020-04-01 22:49:38 -04:00
Johannes Doerfert b1c788d051 [Attributor][FIX] Prevent alignment breakage wrt. must-tail calls
If we have a must-tail call the callee and caller need to have matching
ABIs. Part of that is alignment which we might modify when we deduce
alignment of arguments of either. Since we would need to keep them in
sync, which is not as simple, we simply avoid deducing alignment for
arguments of the must-tail caller or callee.

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D76673
2020-04-01 21:40:07 -05:00
Johannes Doerfert f7f9322843 [Attributor][NFC] Cleanup leftover check lines 2020-04-01 21:37:33 -05:00
Lang Hames 0071eaaf08 [ORC] Export __cxa_atexit from the main JITDylib in LLJIT.
Failure to export __cxa_atexit can lead to an attempt to import a definition
from the process itself (if __cxa_atexit is referenced from another JITDylib),
but the process definition will clash with the existing non-exported definition
to produce an unexpected DuplicateDefinitionError.

This patch fixes the immediate issue by exporting __cxa_atexit. It also fixes a
bug where atexit functions in other JITDylibs were not being run by adding a
copy of run_atexits_helper to every JITDylib.

A follow up patch will deal with the bug where definition generators are called
despite a non-exported definition being present.
2020-04-01 19:12:08 -07:00
Sam Clegg 296ccef703 [WebAssembly] EmscriptenEHSjLj: Mark __invoke_ functions as imported
This means the linker will expect them to be undefined at link time and
will generate imports from the `env` module rather than reporting
undefined externals.

Differential Revision: https://reviews.llvm.org/D77192
2020-04-01 16:33:33 -07:00
Lang Hames 8e5a8f620c [ORC] Don't require a null-terminator on MemoryBuffers for objects in archives.
The MemoryBuffer::getMemBuffer method's RequiresNullTerminator parameter
defaults to true, but object files are not null terminated so we need to
explicitly pass false here.
2020-04-01 12:16:38 -07:00
Sanjay Patel 3d90048791 [InstCombine] enhance freelyNegateValue() by handling xor
Negation is equivalent to bitwise-not + 1, so try to convert more
subtracts into adds using this relationship:
0 - (A ^ C) => ((A ^ C) ^ -1) + 1 => A ^ ~C + 1
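
The identity can be sanity-checked in plain C (sample values below are arbitrary):

```c
#include <assert.h>
#include <stdint.h>

int main(void) {
  /* 0 - X == ~X + 1 in two's complement, and ~(A ^ C) == A ^ ~C,
     so 0 - (A ^ C) == (A ^ ~C) + 1. */
  uint32_t A = 0xDEADBEEF, C = 0x12345678; /* arbitrary values */
  assert((uint32_t)(0u - (A ^ C)) == (A ^ ~C) + 1u);
  return 0;
}
```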

I doubt this will recover the regression noted in rGf2fbdf76d8d0,
but seems like we're going to need to improve here and/or revive D68408?

Alive2 proofs:
http://volta.cs.utah.edu:8080/z/Re5tMU
http://volta.cs.utah.edu:8080/z/An-uns

Differential Revision: https://reviews.llvm.org/D77230
2020-04-01 15:05:13 -04:00
Sanjay Patel 8431dbacd4 [InstCombine] add tests for negate with xor operand; NFC 2020-04-01 15:05:13 -04:00
Jonathan Roelofs 1148f004fa Fix PR45371: SeparateConstOffsetFromGEP clean up bookkeeping
find() was altering the UserChain, even in cases where it subsequently
discovered that the resulting constant was a 0. This confuses
rebuildWithoutConstOffset() when it attempts to walk the chain later, since it
is expected that the chain itself be a path down the use-def edges of an
expression.
2020-04-01 12:38:15 -06:00
Uday Bondhugula 6ee11c3b0f [NewGVN] Make NewGVN aware of aligned_alloc
Make the New GVN pass aware of aligned_alloc.

Depends on D76975.

Differential Revision: https://reviews.llvm.org/D76976
2020-04-01 23:26:51 +05:30
Uday Bondhugula 4cf70af94f [GVN] Make GVN aware of aligned_alloc
Make the GVN pass aware of aligned_alloc.

Depends on D76974.

Differential Revision: https://reviews.llvm.org/D76975
2020-04-01 23:26:50 +05:30
Uday Bondhugula c4499e3333 [Attributor] Make attributor aware of aligned_alloc for heap to stack conversion
Make the attributor pass aware of aligned_alloc for converting heap
allocations to stack ones.
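
A sketch of the kind of code this enables converting (illustrative only; assumes the allocation is non-escaping and of known size so heap-to-stack applies):

```c
#include <stdlib.h>
#include <string.h>

/* The aligned_alloc result does not escape and has a known small size,
   so it is now a candidate to become a suitably aligned alloca. */
int sum16(const int *src) {
  int *tmp = aligned_alloc(64, 16 * sizeof(int));
  memcpy(tmp, src, 16 * sizeof(int));
  int s = 0;
  for (int i = 0; i < 16; ++i)
    s += tmp[i];
  free(tmp);
  return s;
}
```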

Depends on D76971.

Differential Revision: https://reviews.llvm.org/D76974
2020-04-01 23:26:50 +05:30
Matt Arsenault 3f465d0d36 AMDGPU: Fix broken check lines 2020-04-01 10:52:22 -07:00
Matt Arsenault 68e283940a AMDGPU/GlobalISel: Switch test to checking final ISA
The naming convention is for unprefixed .ll tests to check the final
ISA instructions.
2020-04-01 13:03:02 -04:00
Matt Arsenault 5e4e8d0388 AMDGPU/GlobalISel: Change intrinsic ID for _L to _LZ opt
We should still handle the other case that changes the opcode this way.
2020-04-01 13:03:02 -04:00
Heejin Ahn c87b5e7e22 [WebAssembly] Fix subregion relationship in CFGSort
Summary:
The previous code for determining the innermost region in CFGSort was
not correct. We determine subregion relationship by domination of their
headers, i.e., if region A's header dominates region B's header, B is a
subregion of A. Previously we assumed that if a BB belongs to both a
loop and an exception, the region with fewer number of BBs is the
innermost one. This may not be true, because while WebAssemblyException
contains BBs in all its subregions (loops or exceptions), MachineLoop
may not, because MachineLoop does not contain BBs that don't have a path
to its header even if they are dominated by its header.

                Loop header  <---|
                    |            |
              Exception header   |
                    | \          |
                    A  B         |
                    |   \        |
                    |    C       |
                    |            |
                Loop latch       |
                    |            |
                    -------------|

For example, in this CFG, the loop does not contain B and C, because
they don't have a path back to the loop's header. But for CFGSort we
consider that the exception here belongs to the loop, so the exception
should be a subregion of the loop and scheduled together with it.

So here we should use `WE->contains(ML->getHeader())` (but not
`ML->contains(WE->getHeader())`, for the stated region above).

This also fixes some comments and deletes the `Regions` vector in
the `RegionInfo` class, which was not used anywhere.

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77181
2020-04-01 08:12:41 -07:00
Georgii Rymar f527e6f2e1 [llvm-readobj] - Do not crash when SHT_HASH table is broken.
We have scenarios when the logic of --elf-hash-histogram/--hash-symbols/--hash-table
options might crash when given a broken hash table.

This patch adds pre-checks for tables for these 3 options
and provides test cases.

Differential revision: https://reviews.llvm.org/D77147
2020-04-01 18:03:02 +03:00
Jessica Clarke 616289ed29 [LegalizeTypes][RISCV] Correctly sign-extend comparison for ATOMIC_CMP_XCHG
Summary:
Currently, the comparison argument used for ATOMIC_CMP_XCHG is legalised
with GetPromotedInteger, which leaves the upper bits of the value
undefined. Since this is used for comparing in an LR/SC loop with a
full-width comparison, we must sign extend it. We introduce a new
getExtendForAtomicCmpSwapArg to complement getExtendForAtomicOps, since
many targets have compare-and-swap instructions (or pseudos) that
correctly handle an any-extend input, and the existing function
determines the extension of the result, whereas we are concerned with
the input.
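
For illustration, a 32-bit compare-and-swap like the one below is affected on RV64 (a sketch of the shape of the problem, not a test from the patch):

```c
#include <stdatomic.h>

/* In the RV64 LR/SC expansion, lr.w sign-extends the loaded 32-bit value
   into a 64-bit register, so `expected` must be sign-extended too; with
   its upper bits left undefined, the full-width comparison can spuriously
   fail for values with bit 31 set (e.g. 0x80000000). */
_Bool cas32(_Atomic int *p, int expected, int desired) {
  return atomic_compare_exchange_strong(p, &expected, desired);
}
```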

This is related to https://reviews.llvm.org/D58829, which solved the
issue for ATOMIC_CMP_SWAP_WITH_SUCCESS, but not the simpler
ATOMIC_CMP_SWAP.

Reviewers: asb, lenary, efriedma

Reviewed By: asb

Subscribers: arichardson, hiraditya, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, jfb, PkmX, jocewei, psnobl, benna, Jim, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, evandro, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74453
2020-04-01 15:51:26 +01:00
Puyan Lotfi e3033c0ce5 [llvm][clang][IFS] Enhancing the llvm-ifs yaml format for symbol lists.
Prior to this change the clang interface stubs format resembled
something ending with a symbol list like this:

 Symbols:
   a: { Type: Func }

This was problematic because we didn't actually want a map format and
also because we didn't like that an empty symbol list required
"Symbols: {}". That is to say without the empty {} llvm-ifs would crash
on an empty list.

With this new format it is much clearer which field is the symbol
name, and the [] that is used to express an empty symbol vector
is optional, i.e.:

Symbols:
 - { Name: a, Type: Func }

or

Symbols: []

or

Symbols:

This further diverges the format from existing llvm-elftapi. This is a
good thing because although the format originally came from the same
place, they are not the same in any way.

Differential Revision: https://reviews.llvm.org/D76979
2020-04-01 10:49:06 -04:00
Simon Pilgrim eb8880562e [X86][SSE] combinePTESTCC - fold TESTZ(X,~Y) -> TESTC(Y,X) 2020-04-01 15:10:53 +01:00
Kai Wang 501522b5b2 [RISCV] Support RISC-V ELF attributes sections in llvm-readobj.
Enable llvm-readobj to handle RISC-V ELF attribute sections.

Differential Revision: https://reviews.llvm.org/D75833
2020-04-01 21:50:11 +08:00
David Green a0c537834a [ARM] Extra vmull loop tests. NFC 2020-04-01 14:07:45 +01:00
shchenz e344f8b9db Revert "[LSR] re-add testcase for wrongly phi node elimination - NFC"
This reverts commit f25a1b4f58.
ARM and Hexagon fail on the newly added case.
2020-04-01 12:58:06 +00:00
Pierre-vh 2effe8f5e7 [Target][ARM] Improvements to the VPT Block Insertion Pass
This allows the MVE VPT Block insertion pass to remove VPNOTs in
order to create more complex VPT blocks such as TE, TEET, TETE, etc.

Differential Revision: https://reviews.llvm.org/D75993
2020-04-01 12:34:20 +01:00
shchenz f25a1b4f58 [LSR] re-add testcase for wrongly phi node elimination - NFC
Retest the case on X86/SystemZ/AArch64/PowerPC
2020-04-01 11:11:17 +00:00
Cullen Rhodes 84aa6cf1a9 [Transforms][SROA] Promote allocas with mem2reg for scalable types
Summary:
Aggregate types containing scalable vectors aren't supported and as far
as I can tell this pass is mostly concerned with optimisations on
aggregate types, so the majority of this pass isn't very useful for
scalable vectors.

This patch modifies SROA such that mem2reg is run on allocas with
scalable types that are promotable, but nothing else such as slicing is
done.

The use of TypeSize in this pass has also been updated to be explicitly
fixed size. When invoking the following methods in DataLayout:

    * getTypeSizeInBits
    * getTypeStoreSize
    * getTypeStoreSizeInBits
    * getTypeAllocSize

we now call getFixedSize on the resultant TypeSize. This is quite an
extensive change with around 50 calls to these functions, and also the
first change of this kind (being explicit about fixed vs scalable
size) as far as I'm aware, so feedback welcome.

A test is included containing IR with scalable vectors that this pass is
able to optimise.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D76720
2020-04-01 10:34:11 +00:00
Simon Pilgrim 918ccb64b0 [X86][SSE] Handle basic inversion of PTEST/TESTP operands (PR38522)
PTEST/TESTP sets EFLAGS as:
TESTZ: ZF = (Op0 & Op1) == 0
TESTC: CF = (~Op0 & Op1) == 0
TESTNZC: ZF == 0 && CF == 0

If we are inverting the 0'th operand of a PTEST/TESTP instruction we can adjust the comparisons to correctly handle the inversion implicitly.

Additionally, for the "TESTZ" (ZF) cases, the all-ones case PTEST(X,-1) can be simplified to PTEST(X,X).

We can expand this for the TESTZ(X,~Y) pattern and also handle KTEST/KORTEST in the future.
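
The folds can be sanity-checked against a scalar model of the flag definitions above (sample values are arbitrary):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of the PTEST/TESTP flag semantics quoted above. */
static int testz(uint64_t op0, uint64_t op1) { return (op0 & op1) == 0; }
static int testc(uint64_t op0, uint64_t op1) { return (~op0 & op1) == 0; }

int main(void) {
  uint64_t x = 0x00F0, y = 0x0F0F; /* arbitrary values */
  assert(testz(x, ~y) == testc(y, x));    /* TESTZ(X,~Y) -> TESTC(Y,X) */
  assert(testz(x, ~0ull) == testz(x, x)); /* PTEST(X,-1) -> PTEST(X,X) */
  return 0;
}
```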

Differential Revision: https://reviews.llvm.org/D76984
2020-04-01 11:33:28 +01:00
shchenz 8b8cd150a4 Revert "[LSR] add testcase for wrongly phi node elimination - NFC"
This reverts commit dbf5e4f6c7.
The testcase has different behaviour on PowerPC and X86.
2020-04-01 10:28:43 +00:00
shchenz dbf5e4f6c7 [LSR] add testcase for wrongly phi node elimination - NFC 2020-04-01 09:58:58 +00:00
Qiu Chaofan d8b51789fd [NFC] [PowerPC] Add test for frsp elimination 2020-04-01 17:54:24 +08:00
Bjorn Pettersson ef49895da8 [X86] Do not assume types are legal in getFauxShuffleMask
Summary:
Make sure we do not assert on value types not being
simple in getFauxShuffleMask when analysing operations
such as "v8i16 = truncate v8i24".

Reviewers: RKSimon

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77136
2020-04-01 11:40:18 +02:00
Georgii Rymar 93fc0ba145 [yaml2obj] - Add NBucket and NChain fields for the SHT_HASH section.
These fields allow overriding the nchain and nbucket fields of a SHT_HASH section.

Differential revision: https://reviews.llvm.org/D76834
2020-04-01 12:28:16 +03:00
Florian Hahn d307174e1d [ConstantRange] Use APInt::or/APInt::and for single elements.
Currently ConstantRange::binaryAnd/binaryOr results are too pessimistic
for single element constant ranges.

If both operands are single element ranges, we can use APInt's AND and
OR implementations directly.

Note that some other binary operations on constant ranges can cover the
single element cases naturally, but for OR and AND this unfortunately is
not the case.

Reviewers: nikic, spatel, lebedev.ri

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D76446
2020-04-01 09:50:24 +01:00
Florian Hahn e20cac3650 [Matrix] Add new test case with getelementptr constant exprs.
The new test mostly ensures we keep doing the right thing for constant
expressions while lowering matrix instructions.
2020-04-01 09:32:13 +01:00
Qiu Chaofan 95bcab8272 [DAGCombiner] Require ninf for sqrt recip estimation
Currently, DAG combiner uses (fmul (rsqrt x) x) to estimate square
root of x. However, this method would return NaN if x is +Inf, which
is incorrect.
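
The +Inf case can be demonstrated in plain C (a small illustration, not the combiner's code):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
  float x = INFINITY;
  /* The estimate rsqrt(x) * x evaluates to (1/sqrt(inf)) * inf
     == 0 * inf == NaN, while the exact sqrtf(inf) is inf. */
  printf("estimate: %f  exact: %f\n", (1.0f / sqrtf(x)) * x, sqrtf(x));
  return 0;
}
```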

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D76853
2020-04-01 16:23:43 +08:00
Florian Hahn 862766e01e [Verifier] Verify matrix dimensions operands match vector size.
This patch adds checks to the verifier to ensure the dimension arguments
passed to the matrix intrinsics match the vector types for their
arguments/return values.

Reviewers: anemet, Gerolf, andrew.w.kaylor, LuoYuanke

Reviewed By: anemet

Differential Revision: https://reviews.llvm.org/D77129
2020-04-01 09:21:39 +01:00
Simon Pilgrim f9f401dba1 [X86][AVX] Add additional 256/512-bit test cases for PACKSS/PACKUS shuffle patterns
Also add lowerShuffleWithPACK call to lowerV32I16Shuffle - shuffle combining was catching it but we avoid a lot of temporary shuffle creations if we catch it at lowering first.
2020-04-01 08:19:03 +01:00
Simon Pilgrim 3c9064ed96 [X86] Run XOP vector rotation tests with/without AVX2
I noticed this while reviewing D77152 - by only testing bdver4 we weren't checking an XOP target that only had AVX1
2020-04-01 08:19:03 +01:00
Kai Luo 8eb40e41f6 [PowerPC] Don't generate ST_VSR_SCAL_INT if power8-vector is disabled
Summary:
In https://bugs.llvm.org/show_bug.cgi?id=45297, it fails selecting
instructions for `PPCISD::ST_VSR_SCAL_INT`. The reason it generates
`PPCISD::ST_VSR_SCAL_INT` with `-power8-vector` in the IR is that PPC's
combiner checks `hasP8Altivec` rather than `hasP8Vector`. This patch
should resolve PR45297.

Differential Revision: https://reviews.llvm.org/D76773
2020-04-01 02:15:25 +00:00
Shengchen Kan d0efd7bfcf [X86][MC] Disable Prefix padding after hardcode/prefix
Reviewers: reames, MaskRay, craig.topper, LuoYuanke, jyknight, eli.friedman

Reviewed By: craig.topper

Subscribers: hiraditya, llvm-commits, annita.zhang

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76475
2020-04-01 09:49:52 +08:00
Matt Arsenault 43e576593e AMDGPU/GlobalISel: Fix insert point when lowering G_FMAD 2020-03-31 19:57:06 -04:00
Evgenii Stepanov f9471b0010 Fix MSan false positive due to select folding.
Summary:
Select folding in JumpThreading can create a conditional branch on a
code path that did not have one in the original program. This is not a
valid transformation in sanitize_memory functions.
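
A hypothetical reduced example of why the fold is unsafe under MSan (assuming `c` may be uninitialized at the callsite):

```c
/* MSan propagates c's shadow through the select into v and only reports
   when a *branch* depends on uninitialized data. If JumpThreading folds
   the select into a conditional branch on c, a program that never
   branched on c now does, producing a false positive. */
int pick(int c, int a, int b) {
  int v = c ? a : b; /* a select, permitted even if c is uninitialized */
  return v;
}
```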

Note that JumpThreading does select folding in 3 different places. Two
of them seem safe - they apply to a select instruction in a BB that ends
with an unconditional branch to another BB, which (in turn) ends with a
conditional branch or a switch with the same condition.

Fixes PR45220.

Reviewers: glider, dvyukov, efriedma

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76332
2020-03-31 15:25:42 -07:00
Fangrui Song 4af7560b37 [PPCInstPrinter] Print conditional branches as `bt 2, $target` instead of `bt 2, .+$imm`
Follow-up of D76591.

Reviewed By: #powerpc, sfertile

Differential Revision: https://reviews.llvm.org/D76907
2020-03-31 15:05:38 -07:00
Joel E. Denny 8f8c4950fe [FileCheck] Add missing %ProtectFileCheckOutput to FileCheck tests
I'm committing this fixup without review because it's an obvious
continuation of D65121 (committed at f471eb8e99).
2020-03-31 17:29:11 -04:00
Hubert Tong 478af4479a [Object] Update ObjectFile::makeTriple for XCOFF
Summary:
When we encounter an XCOFF file, reflect that in the triple information.
In addition to knowing the object file format, we know that the
associated OS is AIX.

This means that we can expect that there is no output difference in the
processing of an XCOFF32 input file between cases where the triple is
left unspecified by the user and cases where the user specifies
`--triple powerpc-ibm-aix` explicitly.

Reviewers: jhenderson, sfertile, jasonliu, daltenty

Reviewed By: jasonliu

Subscribers: wuzish, nemanjai, hiraditya, MaskRay, rupprecht, steven.zhang, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77025
2020-03-31 17:26:30 -04:00
Daniel Frampton 494abe139a [AArch64] Change AArch64 Windows EH UnwindHelp object to be a fixed object
The UnwindHelp object is used during exception handling by runtime
code. It must be findable from a fixed offset from FP.

This change allocates the UnwindHelp object as a fixed object (as is
done for x86_64) to ensure that both the generated code and runtime
agree on the location of the object.

Fixes https://bugs.llvm.org/show_bug.cgi?id=45346

Differential Revision: https://reviews.llvm.org/D77016
2020-03-31 14:21:21 -07:00
Daniel Frampton 522b4c4b88 [AArch64] Fix mismatch in prologue and epilogue for funclets on Windows
The generated code for a funclet can have an add to sp in the epilogue
for which there is no corresponding sub in the prologue.

This patch removes the early return from emitPrologue that was
preventing the sub to sp, and instead conditionalizes the appropriate
parts of the rest of the function.

Fixes https://bugs.llvm.org/show_bug.cgi?id=45345

Differential Revision: https://reviews.llvm.org/D77015
2020-03-31 14:21:18 -07:00
Anna Thomas 58a05675da Revert "[InlineFunction] Handle return attributes on call within inlined body"
This reverts commit 28518d9ae3.
There is a failure in MsgPackReader.cpp when built with clang. It
complains that "signext and zeroext" are incompatible. Investigating
offline whether it is in fact UB in the MsgPackReader code.
2020-03-31 16:16:34 -04:00
Eli Friedman dacf8d3562 [AArch64][SVE] Add support for fcmp.
This also requires support for boolean "not", so I added boolean logic
while I was there.

Differential Revision: https://reviews.llvm.org/D76901
2020-03-31 12:04:39 -07:00
Guozhi Wei 6d20937c29 [CodeGenPrepare] Delete intrinsic call to llvm.assume to enable more tailcall
The attached test case is simplified from tcmalloc. Both function calls should be optimized as tail calls, but llvm can only optimize the first call. The second call can't be optimized because the function dupRetToEnableTailCallOpts failed to duplicate the ret into block case2.

Two problems blocked the duplication:

  1 Intrinsic call llvm.assume is not handled by dupRetToEnableTailCallOpts.
  2 The control flow is more complex than expected, dupRetToEnableTailCallOpts can only duplicate ret into its predecessor, but here we have an intermediate block between call and ret.

The solutions:

  1 Since CodeGenPrepare is already at the end of LLVM IR phase, we can simply delete the intrinsic call to llvm.assume.
  2 A general solution to the complex control flow is hard, but for this case, after exit2 is duplicated into case1, exit2 is the only successor of exit1 and exit1 is the only predecessor of exit2, so they can be combined through eliminateFallThrough. But this function is called too late, there is no more dupRetToEnableTailCallOpts after it. We can add an earlier call to eliminateFallThrough to solve it.
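
A hypothetical reduced shape of the problem (not the actual tcmalloc code):

```c
extern int callee1(int), callee2(int);

int dispatch(int c, int x) {
  if (c)
    return callee1(x);      /* already became a tail call */
  int r = callee2(x);
  __builtin_assume(r != 0); /* lowers to llvm.assume between the call and
                               the ret; deleting it in CodeGenPrepare lets
                               the ret be duplicated so callee2 can become
                               a tail call as well */
  return r;
}
```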

Differential Revision: https://reviews.llvm.org/D76539
2020-03-31 11:55:51 -07:00
Stanislav Mekhanoshin 08682dcc86 [AMDGPU] Define 16 bit VGPR subregs
We have loads preserving low and high 16 bits of their
destinations. However, we always use a whole 32 bit register
for these. The same happens with 16 bit stores: we have to
use a full 32 bit register, so if the high bits are clobbered the
register needs to be copied. One example of such code is
added to load-hi16.ll.

The proper solution to the problem is to define 16 bit subregs
and use them in the operations which do not read another half
of a VGPR or preserve it if the VGPR is written.

This patch simply defines subregisters and register classes.
At the moment there should be no difference in code generation.
A lot more work is needed to actually use these new register
classes. Therefore, there are no new tests at this time.

Register weight calculation has changed with new subregs so
appropriate changes were made to keep all calculations just
as they are now, especially calculations of register pressure.

Differential Revision: https://reviews.llvm.org/D74873
2020-03-31 11:49:06 -07:00
Anna Thomas 28518d9ae3 [InlineFunction] Handle return attributes on call within inlined body
Consider a callee function that has a call (C) within it which feeds
into the return.  When we inline that callee into a callsite that has
return attributes, we can backward propagate those attributes to the
call (C) within that inlined callee body.

This is safe to do so only if we can guarantee transfer of execution to
successor in the window of instructions between return value (i.e. the
call C) and the return instruction.

See added test cases.

Reviewed-By: reames, jdoerfert

Differential Revision: https://reviews.llvm.org/D76140
2020-03-31 14:35:40 -04:00
Ulrich Weigand c726c920e0 [SystemZ] Allow %r0 in address context for AsmParser
Registers used in any address (as well as in a few other contexts)
have special semantics when a "zero" register is used, which is
why the back-end defines extra register classes ADDR32, ADDR64 etc
to be used to prevent the register allocator from using %r0 there.

However, when writing assembler code "by hand", you sometimes need
to trigger that special semantics.  But currently the AsmParser
will reject %r0 in those places.  In some cases it may be possible
to write that instruction differently - but in others it is currently
not possible at all.

This check in AsmParser simply seems overly strict, so this patch
just removes the check completely.  This brings the behaviour of
AsmParser in line with the GNU assembler as well.

Bugzilla: https://bugs.llvm.org/show_bug.cgi?id=45092
2020-03-31 19:48:50 +02:00
Uday Bondhugula dc817b2dea [InstCombine] Deduce attributes for aligned_alloc in InstCombine
Make InstCombine aware of the aligned_alloc library function.

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Depends on D76970.

Differential Revision: https://reviews.llvm.org/D76971
2020-03-31 23:17:28 +05:30
zhizhouy 94d912296d [NFC] Do not run CGProfilePass when not using integrated assembler
Summary:
CGProfilePass is run by default in certain new pass manager optimization pipelines. Assemblers other than the LLVM integrated assembler (such as GNU as) cannot recognize the .cgprofile entries generated and emitted by this pass, causing a build-time error.

This patch adds new options in clang CodeGenOpts and PassBuilder options so that we can turn cgprofile off when not using integrated assembler.

Reviewers: Bigcheese, xur, george.burgess.iv, chandlerc, manojgupta

Reviewed By: manojgupta

Subscribers: manojgupta, void, hiraditya, dexonsmith, llvm-commits, tcwang, llozano

Tags: #llvm, #clang

Differential Revision: https://reviews.llvm.org/D62627
2020-03-31 10:31:31 -07:00
Simon Pilgrim 30436a1ce7 [X86][SSE] Add additional PTEST/TESTP inversion tests 2020-03-31 18:02:27 +01:00
Simon Pilgrim 8b925440d1 [X86][SSE] Simplify PTEST/TESTP tests for D76984
We don't need to use an all-ones value for the second operand - test the general case.
2020-03-31 18:02:27 +01:00
Sterling Augustine 21d9d0855b New symbolizer option to print files relative to the compilation directory.
Summary: New "--relative" option to allow printing files relative to the compilation directory.

Reviewers: jhenderson

Subscribers: MaskRay, rupprecht, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76733
2020-03-31 09:29:24 -07:00
Florian Hahn b0cd7b2799 [SCCP] Limit use of range info for binops to integers for now.
This fixes a crash when building the test suite.
2020-03-31 17:08:09 +01:00
Tyker 4aeb7e1ef4 [AssumeBundles] Preserve information in EarlyCSE
Summary: this patch preserves information from various places in EarlyCSE into assume bundles.

Reviewers: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76769
2020-03-31 17:47:04 +02:00
Tyker 7093b92a13 [AssumeBundles] Preserve Information from Load/Store
Summary: This patch preserves dereferenceable, nonnull and alignment information from loads and stores.

Reviewers: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76759
2020-03-31 17:47:04 +02:00
Jonas Paulsson f481d48893 [SystemZ] Improve foldMemoryOperandImpl().
Fold MS(G)RKC -> MS(G)C.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D76771
2020-03-31 17:17:51 +02:00
Georgii Rymar b3f13bc165 [obj2yaml] - Teach tool to dump program headers.
Currently obj2yaml does not dump program headers,
this patch teaches it to do that.

Differential revision: https://reviews.llvm.org/D75342
2020-03-31 18:10:19 +03:00
Simon Pilgrim 7e0e5fa499 Revert rGefe59d6717dcdf7777acb9b7a734e1a520bdf22a "[X86][SSE] lowerShuffleWithPACK - extend to use chained PACKs for larger truncations"
This might be causing an issue on the fuchsia-x86_64-linux buildbot - reverting to see what happens.
2020-03-31 15:47:30 +01:00
Simon Pilgrim efe59d6717 [X86][SSE] lowerShuffleWithPACK - extend to use chained PACKs for larger truncations
If canLowerByDroppingEvenElements indicates that the shuffle is an N:1 compaction pattern and the inputs are suitably sign/zero extended then we can use a chain of PACKSS/PACKUS to compact.

This helps avoid PSHUFB (and its mask load) for short shuffle chains, shuffle combining will still replace with a PSHUFB if we have enough shuffles as getFauxShuffleMask can recognise PACKSS/PACKUS chains.
2020-03-31 14:48:48 +01:00
Sanjay Patel fa61b5059a [InstCombine] remove stray auto-generated test comment; NFC
The script now includes extra info about command-line options used
when generating its advertisement heading, but we don't want that
here. This is a special-case because we have enhanced the check
lines (as noted in the 2nd comment line).
2020-03-31 09:19:12 -04:00
Florian Hahn b37543750c [ValueLattice] Distinguish between constant ranges with/without undef.
This patch updates ValueLattice to distinguish between ranges that are
guaranteed to not include undef and ranges that may include undef.

A constant range guaranteed to not contain undef can be used to simplify
instructions to arbitrary values. A constant range that may contain
undef can only be used to simplify to a constant. If the value can be
undef, it might take a value outside the range. For example, consider
the snippet below:

define i32 @f(i32 %a, i1 %c) {
  br i1 %c, label %true, label %false
true:
  %a.255 = and i32 %a, 255
  br label %exit
false:
  br label %exit
exit:
  %p = phi i32 [ %a.255, %true ], [ undef, %false ]
  %f.1 = icmp eq i32 %p, 300
  call void @use(i1 %f.1)
  %res = and i32 %p, 255
  ret i32 %res
}

In the exit block, %p would be a constant range [0, 256) including undef as
%p could be undef. We can use the range information to replace %f.1 with
false because we remove the compare, effectively forcing the use of the
constant to be != 300. We cannot replace %res with %p, however, because
if %a were undef, %c may be true but the second use might not be
< 256.

Currently LazyValueInfo uses the new behavior just when simplifying AND
instructions and does not distinguish between constant ranges with and
without undef otherwise. I think we should address the remaining issues
in LVI incrementally.

Reviewers: efriedma, reames, aqjune, jdoerfert, sstefan1

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D76931
2020-03-31 12:50:20 +01:00
Denis Antrushin 06c58f11a9 [SCEV] Use backedge SCEV of PHI only if its input is loop invariant
For the PHI node

      %1 = phi [%A, %entry], [%X, %latch]

it is incorrect to use the SCEV of the backedge value %X as the exit value
of the PHI unless %X is loop invariant.
This is because the exit value of %1 is the value of %X at the
one-before-last iteration of the loop.

Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D73181
2020-03-31 18:39:24 +07:00
Simon Pilgrim 98357dee1c [X86] Combine concat(palignr,palignr) -> palignr(concat,concat)
combineX86ShufflesRecursively should handle this someday
2020-03-31 11:06:35 +01:00
Daan Sprenkels 464b9aeafe [InstCombine] Transform extelt-trunc -> bitcast-extelt
Canonicalize the case when a scalar extracted from a vector is
truncated.  Transform such cases to bitcast-then-extractelement.
This will enable erasing the truncate operation.

This commit fixes PR45314.
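
In C terms, using GCC/Clang vector extensions (little-endian lane numbering assumed):

```c
#include <stdint.h>

typedef int32_t v4i32 __attribute__((vector_size(16)));
typedef int16_t v8i16 __attribute__((vector_size(16)));

/* Before: extract a lane, then truncate it. */
int16_t before(v4i32 v) { return (int16_t)v[1]; }

/* After: bitcast the vector, then extract the matching narrow lane;
   on little-endian, lane 2 of the v8i16 view is the low half of v[1]. */
int16_t after(v4i32 v) { return ((v8i16)v)[2]; }
```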

reviewers: spatel

Differential revision: https://reviews.llvm.org/D76983
2020-03-31 11:53:41 +02:00
David Green 2c5f43f9dd [ARM] Fix qdadd operand order
qdadd is defined as sat(Rm + sat(2*Rn)). We had the Rm and Rn switched
the wrong way around.
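
A scalar model of the intended semantics, to pin down the operand order (a sketch, not the backend code):

```c
#include <stdint.h>

/* Signed 32-bit saturation helper. */
static int32_t sat32(int64_t v) {
  if (v > INT32_MAX) return INT32_MAX;
  if (v < INT32_MIN) return INT32_MIN;
  return (int32_t)v;
}

/* QDADD Rd, Rm, Rn  =>  Rd = sat(Rm + sat(2*Rn)); the doubling applies
   to Rn, which the corrected operand order reflects. */
int32_t qdadd_model(int32_t rm, int32_t rn) {
  return sat32((int64_t)rm + sat32(2 * (int64_t)rn));
}
```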

Differential Revision: https://reviews.llvm.org/D77049
2020-03-31 10:11:36 +01:00
Guillaume Chatelet c9d5c19597 [Alignment][NFC] Transitioning more getMachineMemOperand call sites
Summary:
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet

Subscribers: arsenm, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, jrtc27, atanasyan, Jim, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77121
2020-03-31 08:36:18 +00:00
Sebastian Neubauer 5d3a69feca [AMDGPU] New llvm.amdgcn.ballot intrinsic
Add a new llvm.amdgcn.ballot intrinsic modeled on the ballot function
in GLSL and other shader languages. It returns a bitfield containing the
result of its boolean argument in all active lanes, and zero in all
inactive lanes.

This is intended to replace the existing llvm.amdgcn.icmp and
llvm.amdgcn.fcmp intrinsics after a suitable transition period.

Use the new intrinsic in the atomic optimizer pass.

Differential Revision: https://reviews.llvm.org/D65088
2020-03-31 10:35:39 +02:00
Florian Hahn 0c9c58ada0 [SCCP] Use constant ranges for casts.
For casts with constant range operands, we can use
ConstantRange::castOp.
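
A small illustration of the idea (the ranges in the comments are what SCCP can now derive; this is a sketch, not pass output):

```llvm
define i1 @cast_range(i8 %a) {
  ; As a zext operand, %a has the constant range [0, 256).
  %w = zext i8 %a to i32
  ; ConstantRange::castOp(Instruction::ZExt, 32) maps [0, 256) in i8 to
  ; [0, 256) in i32, so SCCP can fold the compare below to true.
  %c = icmp ult i32 %w, 256
  ret i1 %c
}
```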

Reviewers: davide, efriedma, mssimpso

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D71938
2020-03-31 09:22:04 +01:00
Kai Wang 581ba35291 [RISCV] ELF attribute section for RISC-V.
Leverage the ARM ELF build attribute support to create an ELF attribute
section for RISC-V. Extract the common part of the parsing logic for this
section into ELFAttributeParser.[cpp|h] and ELFAttributes.[cpp|h].

Differential Revision: https://reviews.llvm.org/D74023
2020-03-31 16:16:19 +08:00
Djordje Todorovic bcbd60aeb5 [Mips] Make MipsBranchExpansion aware of BBIT family of branch
Octeon branches (bbit0/bbit032/bbit1/bbit132) have an immediate operand,
so it is legal to perform such a replacement within
MipsBranchExpansion::replaceBranch().

According to the specification, a branch (e.g. bbit0) looks like:

bbit0  rs p offset  // p is an immediate operand
  if !rs<p> then branch

Without this patch, an assertion triggers in the method;
the problem was found in a real-world example.

Differential Revision: https://reviews.llvm.org/D76842
2020-03-31 09:20:51 +02:00
Dylan McKay 7b808b105f [AVR] Generalize the previous interrupt bugfix to signal handlers too 2020-03-31 19:33:34 +13:00
Dylan McKay 339b34266c [AVR] Respect the 'interrupt' function attribute
In the past, AVR functions were only lowered with interrupt-specific
machine code if the function was defined with the "avr-interrupt" or
"avr-signal" calling conventions.

This patch modifies the backend so that if the function does not have a
special calling convention, but does have an "interrupt" attribute,
that function is treated as an interrupt handler.

This also extracts the "is this function an interrupt" logic from
several disparate places in the backend into one AVRMachineFunctionInfo
attribute.

Bug found by Wilhelm Meier.
2020-03-31 19:00:18 +13:00
Wei Mi ebad678857 [SampleFDO] Port MD5 name table support to extbinary format.
Compbinary format uses MD5 to represent strings in the name table. That gives a smaller profile without the need for compression/decompression when writing/reading the profile. The patch adds the same support to the extbinary format. It is off by default, but users can choose to enable it.

Note that using MD5 in the name table brings a very small chance of name conflicts leading to profile mismatch. Besides, profiles using the feature won't have profile remapping support.

Differential Revision: https://reviews.llvm.org/D76255
2020-03-30 22:07:08 -07:00
QingShan Zhang 4eeb56d088 [PowerPC] Don't do the folding if the operand is R0/X0
We have this transformation in PowerPC peephole:

Replace instruction:
  renamable $x28 = ADDI8 renamable $x7, -8
  renamable $x28 = ADD8 killed renamable $x28, renamable $x0
  STFD killed renamable $f0, -8, killed renamable $x28 :: (store 8 into %ir._ind_cast99.epil)
with:
  renamable $x28 = ADDI8 renamable $x7, -16
  STFDX killed renamable $f0, $x0, killed $x28 :: (store 8 into %ir._ind_cast99.epil)

It is invalid as the '$x0' in STFDX is constant 0, not register r0.

Reviewed By: Nemanjai

Differential Revision: https://reviews.llvm.org/D77034
2020-03-31 02:50:19 +00:00
Jessica Paquette d5ee72065b [GlobalISel] Implement identity transforms for x op x -> x
When we have

```
a = G_OR x, x
```

or

```
b = G_AND y, y
```

We can drop the G_OR/G_AND and just use x/y respectively.

Also update arm64-fallback.ll because there was an or in there which hits this
transformation.

Differential Revision: https://reviews.llvm.org/D77105
2020-03-30 18:22:37 -07:00
Juneyoung Lee 519f5c3796 [LegalizeTypes] Add SoftenFloatRes_FREEZE
Summary: This adds SoftenFloatRes_FREEZE.

Reviewers: bkramer, JamesNagurne, craig.topper, efriedma

Reviewed By: craig.topper

Subscribers: AbigailLinden, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76980
2020-03-31 10:16:38 +09:00
Jessica Paquette 63d70ea6a0 [GlobalISel] Combine (x op 0) -> x for operations with a right identity of 0
Implement identity combines for operations like the following:

```
%a = G_SUB %b, 0
```

This can just be replaced with %b.

Over CTMark, this gives some minor size improvements at -O3.

Differential Revision: https://reviews.llvm.org/D76640
2020-03-30 16:49:52 -07:00
Matt Arsenault b8fc192d42 Revert "[GISel]: Fix incorrect IRTranslation while translating null pointer types"
This reverts commit b3297ef051.

This change is incorrect. The current semantics of null in the IR are a
pointer with the bit value 0. It is not a cast from an integer 0, so
this should preserve the pointer type.
2020-03-30 19:30:42 -04:00
Daan Sprenkels 5227fa0c72 Recommit "[InstCombine] Update assertions in InstCombine test; NFC" 2020-03-31 00:00:41 +02:00
Matt Arsenault db9f0d1ce5 AMDGPU: Form v_cvt_ubyte* with f16 results
We get 2 conversion instructions anyway. Previously we would get a
conversion with SDWA reading from a byte source, which has a larger
encoding.
2020-03-30 17:59:49 -04:00
Matt Arsenault b27d255e1e AMDGPU/GlobalISel: Form CVT_F32_UBYTE0 2020-03-30 17:45:55 -04:00
Matt Arsenault bcb643c8af AMDGPU/GlobalISel: Handle image atomics 2020-03-30 17:41:04 -04:00
Matt Arsenault 48eda37282 AMDGPU/GlobalISel: Start selecting image intrinsics
Does not handle atomics yet.
2020-03-30 17:33:04 -04:00
Matt Arsenault 570a578e46 AMDGPU: Account for dmask when computing image mem size
Only the number of elements in the dmask will really be accessed.
2020-03-30 17:30:58 -04:00
Jay Foad cee65d51fe AMDGPU: Implement getMemcpyLoopLoweringType
Summary: Based on a patch by Matt Arsenault.

Reviewers: rampitec, kerbowa, nhaehnle, arsenm

Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77057
2020-03-30 22:21:01 +01:00
Matt Arsenault 2641ba52a9 AMDGPU/GlobalISel: Round up image operations with 5, 6 or 7 addresses
The instruction definitions are missing for these register types, so
round up to 8 like the DAG.
2020-03-30 17:02:47 -04:00
Matt Arsenault 42d5609809 AMDGPU/GlobalISel: Start handling _L to _LZ optimization
We currently don't have a way to map to the equivalent intrinsic
opcode, so track immediate 0s in place of the address so that
selection knows to change the final opcode.
2020-03-30 17:02:30 -04:00
Daan Sprenkels 273b0d7766 Revert "[InstCombine] Update assertions in InstCombine test; NFC"
This reverts commit 4243bd494d.
2020-03-30 22:41:33 +02:00
Daan Sprenkels 4243bd494d [InstCombine] Update assertions in InstCombine test; NFC 2020-03-30 22:15:50 +02:00
Sanjay Patel f2fbdf76d8 [InstCombine] do not exclude min/max from icmp with casted operand fold
InstCombine has a mess of logic that tries to preserve min/max patterns,
but AFAICT, this one is not necessary because we can always narrow the
corresponding select in this sequence to match the narrow compare.

The biggest danger for this patch is inducing infinite looping or
asserts from exceeding max iterations. If any bots hit that in the
vicinity of this commit, this is the likely patch to blame.
2020-03-30 16:10:51 -04:00
Eli Friedman 9eb1b41811 [llvm-cov] Improve error message for missing profdata
I got a report recently that a user was having trouble interpreting the
meaning of the error message.  Hopefully this is more readable; produces
something like the following:

error: No such file or directory: Could not read profile data!

Differential Revision: https://reviews.llvm.org/D76796
2020-03-30 12:54:07 -07:00
Matt Arsenault 4919f2e1c5 AMDGPU/GlobalISel: Basic legalize rules for G_FSHR
Only handles easy 32-bit cases.
2020-03-30 11:53:01 -07:00
Bill Wendling fa496ce3c6 [Intrinsic] Give "is.constant" the "convergent" attribute
Summary:
Code frequently relies upon the results of "is.constant" intrinsics to
DCE invalid code paths. We don't want the intrinsic to be made control-
dependent on any additional values. For instance, we can't split a PHI
into a "constant" and "non-constant" part via jump threading in order
to "optimize" the constant part, because the "is.constant" intrinsic is
meant to return "false".
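
A hedged sketch of the guard pattern being protected (llvm.is.constant is the LangRef intrinsic; the callee name here is hypothetical):

```llvm
declare i1 @llvm.is.constant.i32(i32)
declare void @only_valid_for_constants(i32)

define void @f(i32 %x) {
  ; The intrinsic must fold to true or false as a whole; jump threading
  ; a PHI feeding %x into "constant" and "non-constant" parts would let
  ; the guarded path survive where it is meant to be dead.
  %k = call i1 @llvm.is.constant.i32(i32 %x)
  br i1 %k, label %guarded, label %exit
guarded:
  call void @only_valid_for_constants(i32 %x)
  br label %exit
exit:
  ret void
}
```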

Reviewers: wmi, kazu, MaskRay

Reviewed By: kazu

Subscribers: jdoerfert, efriedma, joerg, lebedev.ri, nikic, xbolva00, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75799
2020-03-30 11:47:12 -07:00
Matt Arsenault 23da702d69 GlobalISel: Translate llvm.fshl/llvm.fshr 2020-03-30 11:34:42 -07:00
Jakub Kuderski 77ce2e21a8 [AMDGPU] Add Relocation Constant Support
Summary:
This change adds the amdgcn.reloc.constant intrinsic to the amdgpu backend, which will compile into a relocation entry in the resulting elf.

The intrinsic takes a MetadataNode (String) as its only argument, which specifies the symbol name of the relocation entry.

`SelectionDAGBuilder::getValueImpl` is changed to allow metadata operands passed through to ISel.

Author: csyonghe <yonghe@google.com>

Reviewers: tpr, nhaehnle

Reviewed By: nhaehnle

Subscribers: arsenm, kzhuravl, jvesely, wdng, yaxunl, dstuttard, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76440
2020-03-30 13:49:20 -04:00
Sameer Sahasrabuddhe 3cbbded68c Introduce unify-loop-exits pass.
For each natural loop with multiple exit blocks, this pass creates a
new block N such that all exiting blocks now branch to N, and then
control flow is redistributed to all the original exit blocks.
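
A hypothetical example of a loop this pass rewrites (two exiting blocks, two exit blocks; the comment describes the rewrite rather than showing exact pass output):

```llvm
define i32 @two_exits(i32 %n) {
entry:
  br label %loop.a
loop.a:
  %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop.b ]
  %c1 = icmp sgt i32 %iv, %n
  ; After unify-loop-exits, both exiting edges lead to a single new
  ; block N, which then redistributes control to %exit.x / %exit.y.
  br i1 %c1, label %exit.x, label %loop.b
loop.b:
  %iv.next = add i32 %iv, 1
  %c2 = icmp eq i32 %iv.next, 100
  br i1 %c2, label %exit.y, label %loop.a
exit.x:
  ret i32 0
exit.y:
  ret i32 1
}
```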

The bulk of the transformation is a new function introduced in
BasicBlockUtils that can redirect control flow from a set of incoming
blocks to a set of outgoing blocks via a common "hub".

This is a useful workaround for a limitation in the structurizer which
incorrectly orders blocks when processing a nest of loops. This pass
bypasses that issue by ensuring that each natural loop is recognized
as a separate region. Since the structurizer is a region pass, it no
longer sees a nest of loops in a single region, and instead processes
each "level" in the nesting as a separate region.

The AMDGPU backend provides a new option to enable this pass before
the structurizer, which may eventually be enabled by default.

Reviewers: madhur13490, arsenm, nhaehnle

Reviewed By: nhaehnle

Differential Revision: https://reviews.llvm.org/D75865
2020-03-30 13:23:56 -04:00
Vedant Kumar dcc410b5cf [LoopVectorize] Fix crash on "getNoopOrZeroExtend cannot truncate!" (PR45259)
In InnerLoopVectorizer::getOrCreateTripCount, when the backedge taken
count is a SCEV add expression, its type is defined by the type of the
last operand of the add expression.

In the test case from PR45259, this last operand happens to be a
pointer, which (according to llvm::Type) does not have a primitive size
in bits. In this case, LoopVectorize fails to truncate the SCEV and
crashes as a result.

Using ScalarEvolution::getTypeSizeInBits makes the truncation work as expected.

https://bugs.llvm.org/show_bug.cgi?id=45259

Differential Revision: https://reviews.llvm.org/D76669
2020-03-30 10:14:14 -07:00
Yuanfang Chen ece79f4708 [X86] make sure POP has implicit def/use of stack pointer when materializing 8-bit immediates for minsize
Summary:
Otherwise the PostRA list scheduler may reorder instructions, such as

schedule this
'''
pushq  $0x8
pop    %rbx
lea    0x2a0(%rsp),%r15
'''
to
'''
pushq  $0x8
lea    0x2a0(%rsp),%r15
pop    %rbx
'''
by mistake. The patch prevents this from happening by making sure POP has
an implicit use of SP.

Reviewers: craig.topper

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77031
2020-03-30 09:25:31 -07:00
Matt Arsenault bb009498c2 AMDGPU/GlobalISel: Hack to fix i24 argument lowering
I still think the call lowering type legalization logic split between
the generic code and target is too confusing, but it is largely induced by
the reliance on the DAG infrastructure.
2020-03-30 11:00:45 -04:00
Matt Arsenault 90a36bbd7c AMDGPU/GlobalISel: Legalize 64-bit G_UDIV/G_UREM
Mostly ported from the DAG version. This results in much worse code
than the DAG version, largely due to a much worse expansion for
G_UMULH.
2020-03-30 10:57:37 -04:00
Chris Jackson f6b2c003f3 [DebugInfo] Ensure that a demanded bits optimisation in
InstCombine does not result in an incorrect debuginfo variable
value

- Add an additional salvage and a test.

Reviewers: aprantl, djtodoro

Differential Revision: https://reviews.llvm.org/D76854

Bugzilla:  https://bugs.llvm.org/show_bug.cgi?id=44371
2020-03-30 15:39:22 +01:00
Florian Hahn 7899a111ea Revert "[Darwin] Respect -fno-unroll-loops during LTO."
As per post-commit comment at https://reviews.llvm.org/D76916, this
should better be done at the TU level.

This reverts commit 9ce198d6ed.
2020-03-30 15:20:30 +01:00
Chris Jackson 135709aa90 [DebugInfo] Ensure dead store elimination can mark an operand
value as undefined

    - Correct a debug info salvage and add a test

    Reviewers: aprantl, vsk

    Differential Revision: https://reviews.llvm.org/D76930
    Bugzilla: https://bugs.llvm.org/show_bug.cgi?id=45080
2020-03-30 14:58:14 +01:00
Sanjay Patel bc60cdcc3f [InstCombine] add test for trunc-extelt; NFC
Goes with D76983
2020-03-30 09:43:03 -04:00
Georgii Rymar 4cbfb98eb3 [llvm-readobj] - Improve test of --elf-hash-histogram option.
This test missed the check of histograms printed for .hash sections.
It was removed by mistake in D71606, where I tried to get rid of precompiled objects
and did not realize at the time that both SHT_GNU_HASH and SHT_HASH sections
were tested, not just the GNU version.

Also it never tested aliases for the --elf-hash-histogram option.

Differential revision: https://reviews.llvm.org/D76920
2020-03-30 15:46:45 +03:00
Georgii Rymar 821439a45a [llvm-readobj][test] - Simplify hash-symbols test.
We are able to use `-DBITS=32/64` to reduce this test case.
I've rewritten the comments we had to generalize them and
fixed the wrong computations they contained.

Differential revision: https://reviews.llvm.org/D76924
2020-03-30 14:44:30 +03:00
Simon Pilgrim e95d04f4f1 [X86][AVX] lowerV4X128Shuffle - attempt to widen to 2x256 to simplify shuffles
If we are lowering to X86ISD::SHUF128 we are going to lose track of individual 128-bit lanes that are UNDEF, so if we can widen these to guarantee that they are sequential with their neighbour we should. This helps with later shuffle combines.
2020-03-30 12:22:26 +01:00
Florian Hahn 84c1fbab5d [CVP] Add additional icmp for ranges with undef to test. 2020-03-30 10:59:25 +01:00
Qiu Chaofan 9aa884ccc2 [NFC] [PowerPC] Update and add tests for ori
Use script to update test for ori with 32-bit imms, and add test for
ori with 64-bit imms.
2020-03-30 17:46:12 +08:00
Sam Parker 94b195ff12 [ARM][LowOverheadLoops] Add horizontal reduction support
Add a bit more logic into the 'FalseLaneZeros' tracking to enable
horizontal reductions and also make the VADDV variants
validForTailPredication.

Differential Revision: https://reviews.llvm.org/D76708
2020-03-30 09:55:41 +01:00
David Green c9eaed5149 [ARM] MVE VMOV.i64
In the original batch of MVE VMOVimm code generation VMOV.i64 was left
out due to the way it was done downstream. It turns out that it's fairly
simple though. This adds the codegen for it, similar to NEON.

Bigendian is technically incorrect in this version, which John is fixing
in a Neon patch.
2020-03-30 07:44:23 +01:00
Craig Topper b4695351cb [TTI][X86] Fix the value passed to IsUnsigned for cost modeling of experimental.vector.reduce.smin/smax/umin/umax.
We were passing true for smax/smin and false for umax/umin.
2020-03-29 23:34:22 -07:00
Jun Ma 31a1d85c53 [Coroutines 2/2] Improve symmetric control transfer feature
Differential Revision: https://reviews.llvm.org/D76913
2020-03-30 09:53:09 +08:00
Jun Ma a94fa2c049 [Coroutines 1/2] Improve symmetric control transfer feature
Differential Revision: https://reviews.llvm.org/D76911
2020-03-30 09:53:09 +08:00
Craig Topper d74533a18b [X86] Add sse4.1 RUNs lines to the min/max reduction cost model tests.
Mostly this matches the sse4.2 we already had command lines for.
Except in the i64 case since sse4.1 doesn't have pcmpgtq.
2020-03-29 16:05:35 -07:00
Daan Sprenkels 24562c6588 [InstCombine] Add tests for trunc (extelt x); (NFC)
Baseline tests for D76983 (PR45314)

Differential Revision: https://reviews.llvm.org/D77024
2020-03-29 17:30:54 -04:00
Craig Topper 2451e4c597 [X86] Add sse4.2 command lines to min/max reduction tests.
SSE4.2 has the pcmpgtq instruction which we will use in
vXi64 reductions when its available.
2020-03-29 13:51:03 -07:00
David Green 7c1a6873aa [ARM] VMOV.64 immediate tests. NFC 2020-03-29 21:08:43 +01:00
Simon Pilgrim 9c8ec99c80 [X86][AVX] Combine 128/256-bit lane shuffles with zeroable upper subvectors to EXTRACT_SUBVECTOR (PR40720)
As explained on PR40720, EXTRACTF128 is always as good/better than VPERM2F128/SHUF128, and we can use the implicit zeroing of the uppers.
2020-03-29 19:51:38 +01:00
Uday Bondhugula c0955edfd6 Introduce support for lib function aligned_alloc in TLI / memory builtins
Aligned_alloc is a standard lib function and has been in glibc since
2.16 and in the C11 standard. It has semantics similar to malloc/calloc
for several analyses/transforms. This patch introduces aligned_alloc
in target library info and memory builtins. Subsequent ones will
make other passes aware and fix https://bugs.llvm.org/show_bug.cgi?id=44062
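
A hedged sketch of the kind of call this makes visible to the analyses (C11 signature void *aligned_alloc(size_t alignment, size_t size); the symbol is the libc one, the rest is illustrative):

```llvm
declare i8* @aligned_alloc(i64, i64)

define i8* @make_buffer() {
  ; 256 bytes with 32-byte alignment; with TLI/MemoryBuiltins aware of
  ; aligned_alloc, passes can reason about this like malloc/calloc.
  %p = call i8* @aligned_alloc(i64 32, i64 256)
  ret i8* %p
}
```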

This change will also be useful to LLVM generators that need to allocate
buffers of vector elements larger than 16 bytes (e.g. 256-bit ones),
for which element-boundary alignment is not typically provided by glibc malloc.

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Differential Revision: https://reviews.llvm.org/D76970
2020-03-29 23:36:24 +05:30
Matt Arsenault 0b68ca5162 AMDGPU: Add some additional tests for v_cvt_ubyte* formation
Use functions now that we have them for less boilerplate in the
output.
2020-03-29 14:03:07 -04:00
Sanjay Patel febcb24f14 [InstCombine] make test independent of branch undef/UB; NFC 2020-03-29 13:32:47 -04:00
Simon Pilgrim 443dcc0e00 [X86][AVX] Add tests for 512-bit shuffle patterns that could reduce to subvector extractions 2020-03-29 18:27:18 +01:00
Simon Pilgrim b44f07045c Remove unnecessary empty comments from test check lines. NFC. 2020-03-29 18:27:18 +01:00
Simon Pilgrim 7734e4b3a3 [X86][AVX] Combine 128-bit lane shuffles with a zeroable upper half to EXTRACT_SUBVECTOR (PR40720)
As explained on PR40720, EXTRACTF128 is always as good/better than VPERM2F128, and we can use the implicit zeroing of the upper half.

I've added some extra tests to vector-shuffle-combining-avx2.ll to make sure we don't lose coverage.
2020-03-29 16:41:59 +01:00
Simon Pilgrim 10439f9e32 [X86][AVX] Add X86ISD::VALIGN target shuffle decode support
Allows us to combine VALIGN instructions with other shuffles - the combiner doesn't create VALIGN yet though.
2020-03-29 16:41:58 +01:00
Simon Pilgrim a7115d51be [X86] X86CallFrameOptimization - generalize slow push code path
Replace the explicit isAtom() || isSLM() test with the more general (and more specific) slowTwoMemOps() check to avoid the use of the PUSHrmm push from memory case.

This is actually very tricky to test in anything but quite complex code, but the atomic-idempotent.ll tests seem to be the most straightforward to use.

Differential Revision: https://reviews.llvm.org/D76239
2020-03-29 11:01:59 +01:00
Richard Diamond 4bf015c035 [AlignmentFromAssumptions] Fix a SCEV assertion resulting from address space differences.
Summary:
On targets with different pointer sizes, -alignment-from-assumptions could attempt to create SCEV expressions which use different effective SCEV types. The provided test illustrates the issue.

In `getNewAlignment`, AASCEV would be the (only) alloca, which would have an effective SCEV type of i32. But PtrSCEV, the GEP in this case, due to being in the flat/default address space, will have an effective SCEV of i64.

This patch resolves the issue by truncating PtrSCEV to AASCEV's effective type.

Reviewers: hfinkel, jdoerfert

Reviewed By: jdoerfert

Subscribers: jvesely, nhaehnle, hiraditya, javed.absar, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75471
2020-03-29 01:26:31 -05:00
Craig Topper c0aa97b632 [X86] Add cost model test cases for fmin/fmax reduction. 2020-03-28 17:12:49 -07:00
Fangrui Song fc93787d7e [MC][PowerPC] Make .reloc support arbitrary relocation types
Generalizes ad7199f3e6 (R_PPC_NONE/R_PPC64_NONE).
2020-03-28 17:04:31 -07:00
Yonghong Song ced0d1f42b [BPF] support 128bit int explicitly in layout spec
Currently, bpf does not specify 128-bit alignment in its
layout spec. So for a structure like
  struct ipv6_key_t {
    unsigned pid;
    unsigned __int128 saddr;
    unsigned short lport;
  };
clang will generate IR type
  %struct.ipv6_key_t = type { i32, [12 x i8], i128, i16, [14 x i8] }
The additional padding is to ensure that later IR->MIR can generate a
correct stack layout with the target layout spec.

But it is common practice for a tracing program to be
first compiled with a target flag (e.g., x86_64 or aarch64) through
clang to generate IR and then run through llc to generate bpf
byte code. Tracing programs often refer to kernel internal
data structures which need to be compiled with a non-bpf target.

But such a compilation model may cause a problem on aarch64.
The bcc issue https://github.com/iovisor/bcc/issues/2827
reported such a problem.

For the above structure, since aarch64 has "i128:128" in its
layout string, the generated IR will have
  %struct.ipv6_key_t = type { i32, i128, i16 }

Since bpf does not have "i128:128" in its spec string,
the SelectionDAG assumes alignment 8 for i128 and
computes the stack storage size for the above as 32 bytes,
which leads to incorrect code later.

x86_64 does not have this issue, as it does not have
"i128:128" in its layout spec either: it permits i128 to
be aligned at 8 bytes on the stack. Its IR type looks like
  %struct.ipv6_key_t = type { i32, [12 x i8], i128, i16, [14 x i8] }

The fix here is to add i128 support in the layout spec, the same as
aarch64. The only downside is that we may have less optimal stack
allocation in certain cases, since we require 16-byte alignment
for i128 instead of 8. But this is probably fine as i128 is
not used widely and in most cases users should already
have proper alignment.
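
A hedged sketch of the resulting layout string (check BPFTargetMachine for the exact datalayout; the added "i128:128" component is the point here):

```llvm
; BPF datalayout with the new "i128:128" component; previously the
; string lacked it and i128 fell back to 64-bit stack alignment.
target datalayout = "e-m:e-p:64:64-i64:64-i128:128-n32:64-S128"
```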

Differential Revision: https://reviews.llvm.org/D76587
2020-03-28 11:46:29 -07:00
Reid Kleckner e5bf5037d8 [CodeGen] Fix sinking local values in lpads with phis
There was already a test case for landingpads to handle this case, but I
had forgotten to consider PHI instructions preceding the EH_LABEL in the
landingpad.

PR45261
2020-03-28 11:10:33 -07:00
Nikita Popov 30d712103f [InstCombine] Use replaceOperand() API in GEP transforms
To make sure that replaced operands get DCEd. This drops one
iteration from gepphigep.ll, which is still not optimal.

This was the last test case performing more than 3 iterations.

NFC-ish, only worklist order should change.
2020-03-28 19:07:25 +01:00
Nikita Popov 672e8bfbfc [InstCombine] Fix worklist management in foldXorOfICmps()
Because this code does not use the IC-aware replaceInstUsesWith()
helper, we need to manually push users to the worklist.

This is NFC-ish, in that it may only change worklist order.
2020-03-28 18:25:21 +01:00
Nikita Popov 337b671b0d [InstCombine] Change limit-max-iterations test case; NFC
This particular case will stop needing multiple iterations in
a followup change.
2020-03-28 18:25:20 +01:00
Martin Storsjö e6112a56dd [AsmPrinter] Emit .weak directive for weak linkage on COFF for symbols without a comdat
MC already knows how to emulate the .weak directive (with its ELF
semantics; i.e., an undefined weak symbol resolves to 0, and a defined
weak symbol has lower link precedence than a strong symbol of the same
name) using COFF weak externals. Plumb this through the ASM printer too,
so that definitions marked with __attribute__((weak)) at the language
level (which gets translated to weak linkage at the IR level) have the
corresponding .weak directive emitted. Note that declarations marked
with __attribute__((weak)) at the language level (which translates to
extern_weak at the IR level) already have .weak directives emitted.

Weak*/linkonce* symbols without an associated comdat (in particular, ones
generated with __attribute__((weak)) in C/C++) were earlier emitted as
normal unique globals, as the comdat is required to provide the linkonce
semantics. This change makes sure they are emitted as .weak instead,
allowing other symbols to override them.

Rename the existing coff-weak.ll test to coff-linkonce.ll. I'm not
quite sure what that test covers, since the behavior being tested in it
(the emission of a one_only section) is just a result of passing
-function-sections to llc; the linkonce_odr makes no difference.

Add a new coff-weak.ll which tests the new directive emission.

Based on a previous patch by Shoaib Meenai.

Differential Revision: https://reviews.llvm.org/D44543
2020-03-28 18:48:58 +02:00
Martin Storsjö 8330dcadb8 [llvm-rc] Allow -1 for menu item IDs
This seems to be used in some resource files, e.g.
f3217573d7/include/wx/msw/wx.rc (L28).

MSVC rc.exe and GNU windres both allow any value here, and silently
just truncate to uint16_t range. This just explicitly allows the
-1 value and errors out on others - the same was done for control
IDs in dialogs in c1a67857ba.

Differential Revision: https://reviews.llvm.org/D76951
2020-03-28 14:32:08 +02:00
Simon Pilgrim 8c1dbd5c1e [X86][SSE] Add testnzc(~X,Y) -> testnzc(X,Y) test cases 2020-03-28 10:56:57 +00:00
Simon Pilgrim d34d2ec28b [X86][SSE] Add original PR38522 test case 2020-03-28 10:56:57 +00:00
Simon Pilgrim 8d85da5f5a [X86][SSE] Add combine tests for PTEST/TESTPS/TESTPD instructions
Including some test coverage for PR38522
2020-03-28 10:56:57 +00:00
Serge Pavlov f398739152 [FEnv] Constfold some unary constrained operations
This change implements constant folding for the constrained versions of
intrinsics that implement rounding: floor, ceil, trunc, round, rint and
nearbyint.
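
A hedged sketch of a fold this enables (constrained floor takes only an exception-behavior argument per the LangRef; the function is illustrative):

```llvm
declare double @llvm.experimental.constrained.floor.f64(double, metadata)

define double @fold_me() {
  ; floor(2.5) == 2.0 and raises no FP exceptions, so this call can now
  ; be constant folded to 2.0.
  %r = call double @llvm.experimental.constrained.floor.f64(
           double 2.5, metadata !"fpexcept.ignore")
  ret double %r
}
```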

Differential Revision: https://reviews.llvm.org/D72930
2020-03-28 12:28:33 +07:00
Jessica Paquette 98d05f88d5 [GlobalISel] Fix equality for copies from physregs in matchEqualDefs
When we see this:

```
%a = COPY $physreg
...
SOMETHING implicit-def $physreg
...
%b = COPY $physreg
```

The two copies are not equivalent, and so we shouldn't perform any folding
on them.

When we have two instructions which use a physical register, check that they
define the same virtual register(s) as well.

e.g., if we run into this case

```
%a = COPY $physreg
...
%b = COPY %a
```

we can say that the two copies are the same, and can be folded.

Differential Revision: https://reviews.llvm.org/D76890
2020-03-27 17:52:21 -07:00
Kamlesh Kumar aabc24acf0 [RISCV] Support llvm.thread.pointer
Fixes https://bugs.llvm.org/show_bug.cgi?id=45303 (clang crashed on __builtin_thread_pointer)

Reviewed By: lenary, MaskRay, luismarques

Differential Revision: https://reviews.llvm.org/D76828
2020-03-27 17:30:12 -07:00
Nemanja Ivanovic 4821411347 [DAGCombine] Fix splitting indexed loads in ForwardStoreValueToDirectLoad()
In DAGCombiner::visitLOAD() we perform some checks before breaking up an indexed
load. However, we don't do the same checking in ForwardStoreValueToDirectLoad()
which can lead to failures later during combining
(see: https://bugs.llvm.org/show_bug.cgi?id=45301).

This patch just adds the same checks to this function as well.

Fixes: https://bugs.llvm.org/show_bug.cgi?id=45301

Differential revision: https://reviews.llvm.org/D76778
2020-03-27 18:03:47 -05:00
Florian Hahn 9ce198d6ed [Darwin] Respect -fno-unroll-loops during LTO.
Currently -fno-unroll-loops is ignored when doing LTO on Darwin. This
patch adds a new -lto-no-unroll-loops option to the LTO code generator
and forwards it to the linker if -fno-unroll-loops is passed.

Reviewers: thegameg, steven_wu

Reviewed By: thegameg

Differential Revision: https://reviews.llvm.org/D76916
2020-03-27 22:19:03 +00:00
Sanjay Patel 0f56bbc1a5 [InstCombine] reduce FP-casted and bitcasted signbit check
PR45305:
https://bugs.llvm.org/show_bug.cgi?id=45305

Alive2 proofs:
http://volta.cs.utah.edu:8080/z/bVyrko
http://volta.cs.utah.edu:8080/z/Vxpz9q
2020-03-27 17:33:59 -04:00
Sanjay Patel e72730ee3a [InstCombine] add tests for FP cast+bitcast signbit checks; NFC
PR45305:
https://bugs.llvm.org/show_bug.cgi?id=45305
2020-03-27 17:25:25 -04:00
Matt Arsenault a8cc9047de CodeGen: Add -denormal-fp-math-f32 flag
Bring the set of FP-related attributes and command-line flags closer together.
2020-03-27 14:00:39 -07:00
Jay Foad a6dfd827e5 [AMDGPU] Fix getEUsPerCU for gfx10 in CU mode
Summary:
"Per CU" is a bit simplistic for gfx10, but I couldn't think of a better
name.

Reviewers: arsenm, rampitec, nhaehnle, dstuttard, tpr

Subscribers: kzhuravl, jvesely, wdng, yaxunl, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76861
2020-03-27 20:36:49 +00:00
Fangrui Song 152d14da64 [MC][X86] Make .reloc support arbitrary relocation types
Generalizes D62014 (R_386_NONE/R_X86_64_NONE).

Unlike ARM (D76746) and AArch64 (D76754), we cannot delete FK_NONE from
getFixupKindSize because FK_NONE is still used by R_386_TLS_DESC_CALL/R_X86_64_TLSDESC_CALL.
2020-03-27 13:33:15 -07:00
Matt Arsenault 0fd8030be3 Fix line endings in test 2020-03-27 16:26:06 -04:00
Matt Arsenault 348735b723 AMDGPU: Stop setting attributes based on TargetOptions
Having arbitrary passes looking at the TargetOptions is pretty
messy. This was also disregarding if a function already had an
explicit attribute setting on it. opt/llc now add the attributes to
functions that don't specify the attribute. clang and lld do not call
the function to do this, which they maybe should.

This was also treating unsafe-fp-math as implying the others, and
setting the other attributes based on it. This is not done anywhere
else, and I'm not sure is correct based on the current description of
the option bit.

Effectively reverts 1d8cf2be89
2020-03-27 13:13:43 -07:00
Fangrui Song 34d77516b8 [MC][AArch64] Make .reloc support arbitrary relocation types
Depends on D76746. Generalizes D61973.

Differential Revision: https://reviews.llvm.org/D76754
2020-03-27 12:30:52 -07:00
Fangrui Song c389526171 [MC][ARM] Make .reloc support arbitrary relocation types
Generalizes D61992. In GNU as, the .reloc directive supports arbitrary relocation types.

A MCFixupKind value `V` larger than or equal to FirstLiteralRelocationKind
is used to represent the relocation type whose number is V-FirstLiteralRelocationKind.

This is useful for linker tests. Without the feature the assembler
cannot produce certain relocation records (e.g. R_ARM_ALU_PC_G0/R_ARM_LDR_PC_G0).
This helps move forward D75349 and D76575.

Differential Revision: https://reviews.llvm.org/D76746
2020-03-27 12:29:49 -07:00
Craig Topper cdd1cd7120 [X86] Don't form masked instructions if the operation has an additional user.
This will cause the operation to be repeated in both a masked form and another
masked or unmasked form. This can be a waste of execution resources.
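
A conceptual IR sketch of the situation (the actual matching happens on X86 isel patterns; names here are illustrative):

```llvm
define <4 x float> @two_uses(<4 x float> %a, <4 x float> %b,
                             <4 x float> %passthru, <4 x i1> %m) {
  %op = fadd <4 x float> %a, %b
  ; %op has a masked user (the select) and a second user below, so
  ; folding it into a masked instruction would still require the
  ; unmasked result and repeat the add.
  %sel = select <4 x i1> %m, <4 x float> %op, <4 x float> %passthru
  %r = fadd <4 x float> %sel, %op
  ret <4 x float> %r
}
```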

Differential Revision: https://reviews.llvm.org/D60940
2020-03-27 10:44:22 -07:00
Simon Pilgrim 763c87309d [X86][SSE] Add some additional v8i16 'truncation' style shuffle tests 2020-03-27 17:29:29 +00:00
Dennis Felsing aa0be69e74 Export Segment.IsGapRegion to JSON
Summary:
So that external tools can make use of that information and not display such lines as uncovered.

Fixes https://bugs.llvm.org/show_bug.cgi?id=45300

Reviewers: vsk

Reviewed By: vsk

Differential Revision: https://reviews.llvm.org/D76763
2020-03-27 18:05:01 +01:00
jasonliu d60d7d69de [llvm-objdump][XCOFF][AIX] Implement -r option
Summary:
Implement several XCOFF hooks to get '-r' option working for llvm-objdump -r.

Reviewer: DiggerLin, hubert.reinterpretcast, jhenderson, MaskRay

Differential Revision: https://reviews.llvm.org/D75131
2020-03-27 16:05:42 +00:00
Sam Parker d7084fa34a [ARM][LowOverheadLoops] DoubleWidthResult instructions canGenerateZeros
Given that some instructions generate wider result elements than
their inputs, flag them as being able to generate non-zeros in the
false lanes.

Differential Revision: https://reviews.llvm.org/D76766
2020-03-27 15:26:13 +00:00
Simon Pilgrim f4f4a8bfef [InstCombine][X86] Add repeated ops demanded elts tests for SSE intrinsics (PR24523) 2020-03-27 14:51:09 +00:00
Simon Pilgrim ec3bb6c3e7 [InstCombine][X86] Regenerate SSE2 tests 2020-03-27 14:51:09 +00:00
Guillaume Chatelet e2ef6127d9 [Alignment] Fix overaligning bug
Summary:
This was discovered while converting to Align type.

See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76914
2020-03-27 12:57:50 +00:00
David Green 8689f98e9b [ARM] Fix MVE VCMPr f16 pattern
This pattern seemed to be using the f32 instruction, not the f16 one. Fix it
to use the correct one.

Differential Revision: https://reviews.llvm.org/D76841
2020-03-27 11:18:24 +00:00
Georgii Rymar 30c1f9a558 [llvm-readobj] - Fix a crash when DT_STRTAB is broken.
We might have a crash scenario when we have an invalid DT_STRTAB value
that is larger than the file size. I've added a test case to demonstrate.

Differential revision: https://reviews.llvm.org/D76706
2020-03-27 13:18:08 +03:00
Shengchen Kan 1fb4f99a21 [X86][MC] Fix the bug for prefix padding support
Summary:
There is a tiny logic error in D75300 that caused branches not to be
correctly aligned with the option -x86-pad-max-prefix-size.

Reviewers: reames, MaskRay, craig.topper, LuoYuanke, jyknight

Reviewed By: reames

Subscribers: hiraditya, llvm-commits, annita.zhang

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76285
2020-03-27 14:16:09 +08:00
Kai Luo 26b46b67d8 [PowerPC] Fix test for PR45297 to adapt build without asserts. NFC. 2020-03-27 05:28:34 +00:00
Kai Luo 351b192315 [PowerPC] Enhance test for PR45297. NFC. 2020-03-27 04:45:21 +00:00
Juneyoung Lee 1bcc500b48 [DAGCombine] Add basic optimizations for FREEZE in SelDag
Summary: This patch is the first effort toward adding basic optimizations for FREEZE in SelDag.

Reviewers: spatel, lebedev.ri

Reviewed By: spatel

Subscribers: xbolva00, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76707
2020-03-27 12:20:39 +09:00
Dan Gohman d865437d9c [WebAssembly] Fix the order of destructors in the LowerGlobalDtors pass.
Fix the LowerGlobalDtors pass to run destructors in the same order as the
regular LLVM destructor lowering -- in reverse order. Adjacent
destructors with the same associated object are grouped, but destructors
are not reordered based on associated objects.

Differential Revision: https://reviews.llvm.org/D70685
2020-03-26 16:19:02 -07:00
Stanislav Mekhanoshin 4c4b71843b [AMDGPU] Propagate amdgpu-waves-per-eu to callees
Differential Revision: https://reviews.llvm.org/D76868
2020-03-26 14:43:44 -07:00
Craig Topper 9f7d4150b9 [X86] Move combineLoopMAddPattern and combineLoopSADPattern to an IR pass before SelecitonDAG.
These transforms rely on a vector reduction flag on the SDNode
set by SelectionDAGBuilder. This flag exists because SelectionDAG
can't see across basic blocks so SelectionDAGBuilder is looking
across and saving the info. X86 is the only target that uses this
flag currently. By removing the X86 code we can remove the flag
and the SelectionDAGBuilder code.

This pass adds a dedicated IR pass for X86 that looks across the
blocks and transforms the IR into a form that the X86 SelectionDAG
can finish.

An advantage of this new approach is that we can enhance it to
shrink the phi nodes and final reduction tree based on the zeroes
that we need to concatenate to bring the partially reduced
reduction back up to the original width.

Differential Revision: https://reviews.llvm.org/D76649
2020-03-26 14:10:20 -07:00
Simon Pilgrim ad36491ebb [X86] Prefer PACKUS(AND(),AND()) to SHUFFLE(PSHUFB(),PSHUFB()) on all targets
Extends rG9d1721ce3926 to support AVX2+ targets.
2020-03-26 20:46:24 +00:00
Simon Pilgrim 39a52a19ed [X86] lowerV16I8Shuffle - create v8i16 mask for PACKUS(AND(),AND()) patterns.
We can improve computeKnownBits results by avoiding excess bitcasts.

For this pattern we were doing:

  (v16i8 PACKUS(v8i16 BITCAST(v16i8 AND(V1, MASK)), v8i16 BITCAST(v16i8 AND(V2, MASK))))

By performing the MASK/AND with a v8i16 type and bitcasting V1/V2 directly we can help computeKnownBits see that the mask is clearing the upper bits and allows shuffle combining to peek through later on.

This will be necessary to extend rG9d1721ce3926 to AVX2+ targets in a future patch.
2020-03-26 19:59:57 +00:00
diggerlin fdfe411e7c [AIX] discard the label in the csect of function description and use qualname for linkage
SUMMARY:

For a source file "test.c":

void foo() {};

llc will generate assembly code as follows (assembly path):
     .globl  foo
     .globl  .foo
     .csect foo[DS]
foo:

        .long   .foo
        .long   TOC[TC0]
        .long   0

   and symbol table as (xcoff object file)
   [4]     m   0x00000004     .data     1  unamex                    foo
   [5]     a4  0x0000000c       0    0     SD       DS    0    0
   [6]     m   0x00000004     .data     1  extern                    foo
   [7]     a4  0x00000004       0    0     LD       DS    0    0

   After the first patch, the assembly will be as follows:

        .globl  foo[DS]                 # -- Begin function foo
        .globl  .foo
        .align  2
        .csect foo[DS]
        .long   .foo
        .long   TOC[TC0]
        .long   0

    and the symbol table will be as follows:
   [6]     m   0x00000004     .data     1  extern                    foo
   [7]     a4  0x00000004       0    0     DS      DS    0    0
Change the code for the assembly path and the XCOFF object file path in llc.

Reviewers: Jason Liu
Subscribers: wuzish, nemanjai, hiraditya

Differential Revision: https://reviews.llvm.org/D76162
2020-03-26 15:46:52 -04:00
Sanjay Patel 5237262feb [InstCombine] add shuffle-with-bitcast-operand tests; NFC 2020-03-26 14:28:47 -04:00
Jonathan Roelofs 7a89a5d81b [InstCombine] Fix Incorrect fold of ashr+xor -> lshr w/ vectors
Fixes https://bugs.llvm.org/show_bug.cgi?id=43665
2020-03-26 12:09:36 -06:00
Justin Hibbits 459e8e9488 [PowerPC]: Don't allow r0 as a target for LD_GOT_TPREL_L/32
Summary:
The linker is free to relax this (relocation R_PPC_GOT_TPREL16) against
R_PPC_TLS, if it sees fit (initial exec to local exec).  If r0 is used,
this can generate execution-invalid code (converts to 'addi %rX, %r0,
FOO, which translates in PPC-lingo to li %rX, FOO).  Forbid this
instead.

This fixes static binaries using locales on FreeBSD/powerpc
(tested on FreeBSD/powerpcspe).

Reviewed By: nemanjai
Differential Revision: https://reviews.llvm.org/D76662
2020-03-26 10:59:28 -05:00
Simon Pilgrim 9d1721ce39 [X86][SSE] Prefer PACKUS(AND(),AND()) to SHUFFLE(PSHUFB(),PSHUFB()) on pre-AVX2 targets
As discussed on PR31443, we should be trying to use PACKUS for binary truncation patterns to reduce the number of shuffles.

The plan is to support AVX2+ targets once we've worked around PR45315 - we fail to peek through a VBROADCAST_LOAD mask to recognise zero upper bits in a PACKUS pattern.

We should also be able to add support for v8i16 and possibly 256/512-bit vectors as well.
2020-03-26 15:47:43 +00:00
Fangrui Song 3eef47407b [PPCInstPrinter] Change printBranchOperand(calltarget) to print the target address in hexadecimal form
```
// llvm-objdump -d output (before)
0: bl .-4
4: bl .+0
8: bl .+4

// llvm-objdump -d output (after) ; GNU objdump -d
0: bl 0xfffffffc / bl 0xfffffffffffffffc
4: bl 0x4
8: bl 0xc
```

Many operands are not annotated as OPERAND_PCREL.
They are not affected (e.g. `b .+67108860`). I plan to fix them in future patches.

Modified test/tools/llvm-objdump/ELF/PowerPC/branch-offset.s to test
address space wraparound for powerpc32 and powerpc64.

Reviewed By: sfertile, jhenderson

Differential Revision: https://reviews.llvm.org/D76591
2020-03-26 08:32:29 -07:00
Fangrui Song 87de9a0786 [X86InstPrinter] Change printPCRelImm to print the target address in hexadecimal form
```
// llvm-objdump -d output (before)
400000: e8 0b 00 00 00   callq 11
400005: e8 0b 00 00 00   callq 11

// llvm-objdump -d output (after)
400000: e8 0b 00 00 00  callq 0x400010
400005: e8 0b 00 00 00  callq 0x400015

// GNU objdump -d. The lack of 0x is not ideal because the result cannot be re-assembled
400000: e8 0b 00 00 00  callq 400010
400005: e8 0b 00 00 00  callq 400015
```

In llvm-objdump, we pass the address of the next MCInst. Ideally we
should just thread the address of the current MCInst; unfortunately we
cannot call X86MCCodeEmitter::encodeInstruction (X86MCCodeEmitter
requires MCInstrInfo and MCContext) to get the length of the MCInst.

MCInstPrinter::printInst has other callers (e.g llvm-mc -filetype=asm, llvm-mca) which set Address to 0.
They leave MCInstPrinter::PrintBranchImmAsAddress as false and this change is a no-op for them.

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D76580
2020-03-26 08:28:59 -07:00
Qiu Chaofan 172456c775 [Legalizer] Fix some flags miss in vector results
In some scalarize/split result methods (unary, binary, ...), flags in
SDNode were not passed down, which may lead to unexpected results in
unsafe floating-point optimization. This patch fixes them (maybe not
completely).

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D76832
2020-03-26 22:01:19 +08:00
Sam Parker db8a3c4206 [NFC] Create X86 subdirectory for indvar tests
Many IndVarSimplify tests target an x86 triple, so move them into
a target specific folder.
2020-03-26 12:24:45 +00:00
Simon Pilgrim e30d29ebc1 [X86][SSE] getFauxShuffleMask - peek through TRUNCATE/AEXT/ZEXT for INSERT_VECTOR_ELT(EXTRACT_VECTOR_ELT())
As long as we extract from a source vector with smaller elements and zero-extend the element in the final shuffle mask, we can safely peek through truncations and any/zero-extensions to find the source extraction.
2020-03-26 11:57:45 +00:00
Kang Zhang 4673699a47 [PowerPC] Remove the repeated definition for some InstAlias for mtspr/mfspr
Summary:
The InstAlias below have been redefined; this patch removes the repeated
definitions.
mtdec/mfdec mtsdr1/mfsdr1 mtsrr0/mfsrr0 mtsrr1/mfsrr1 mtasr

Reviewed By: nemanjai, steven.zhang

Differential Revision: https://reviews.llvm.org/D75821
2020-03-26 09:58:30 +00:00
Cullen Rhodes 9086db707d [AArch64][SVE] Implement structured store intrinsics
Summary:
This patch adds initial support for the following intrinsics:

    * llvm.aarch64.sve.st2
    * llvm.aarch64.sve.st3
    * llvm.aarch64.sve.st4

For storing two, three and four vectors worth of data. Basic codegen for
reg+immediate forms are implemented. Reg+reg addressing modes will be
addressed in a later patch.

These intrinsics are intended for use in the Arm C Language Extension
(ACLE).

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D75947
2020-03-26 09:34:51 +00:00
Ties Stuij 71ae267d1f [PATCH] [ARM] ARMv8.6-a command-line + BFloat16 Asm Support
Summary:
This patch introduces command-line support for the Armv8.6-a architecture and assembly support for BFloat16. Details can be found at
https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/arm-architecture-developments-armv8-6-a

in addition to the GCC patch for the 8.6-a CLI:
https://gcc.gnu.org/legacy-ml/gcc-patches/2019-11/msg02647.html

In detail, this patch adds:

- march options for armv8.6-a
- BFloat16 assembly

This is part of a patch series, starting with command-line and Bfloat16
assembly support. The subsequent patches will upstream intrinsics
support for BFloat16, followed by Matrix Multiplication and the
remaining Virtualization features of the armv8.6-a architecture.

Based on work by:
- labrinea
- MarkMurrayARM
- Luke Cheeseman
- Javed Asbar
- Mikhail Maltsev
- Luke Geeson

Reviewers: SjoerdMeijer, craig.topper, rjmccall, jfb, LukeGeeson

Reviewed By: SjoerdMeijer

Subscribers: stuij, kristof.beyls, hiraditya, dexonsmith, danielkiss, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D76062
2020-03-26 09:17:20 +00:00
David Green 37b9cc8f29 [ARM] Sink splats to vector float instructions
Some MVE floating point instructions have gpr register variants that take
the scalar gpr value and splat it to all lanes. In order to accept
them in loops, the shuffle_vector and insert need to be sunk down into
the loop, next to the instruction so that ISel can see the whole
pattern.

This does that sinking for FAdd, FSub, FMul and FCmp. The patterns for
mul are slightly more constrained as there are no fms variants taking
register arguments.

Differential Revision: https://reviews.llvm.org/D76023
2020-03-26 09:02:18 +00:00
Craig Topper 281015de5d [X86] Update more intrinsic tests to prepare to extend D60940 to scalar fp.
I want to extend D60940 to scalar FP which will prevent forming
masked instructions if the arithmetic op has another use. To
prepare for that, this patch updates tests to avoid repeating
the operation multiple times with different masking.
2020-03-25 23:03:20 -07:00
John McCall 9514c048d8 Use optimal layout and preserve alloca alignment in coroutine frames.
Previously, we would ignore alloca alignment when building the frame
and just use the natural alignment of the allocated type.  If an alloca
is over-aligned for its IR type, this could lead to a frame entry with
inadequate alignment for the downstream uses of the alloca.
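
A minimal hypothetical example of such an over-aligned alloca:

```llvm
define void @coro_body() {
  ; [32 x i8] has a natural alignment of 1, but the coroutine frame
  ; slot for %buf must preserve the requested 64-byte alignment.
  %buf = alloca [32 x i8], align 64
  ret void
}
```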

Since highly-aligned fields also tend to produce poor layouts under a
naive layout algorithm, I've also switched coroutine frames to use the
new optimal struct layout algorithm.

In order to communicate the frame size and alignment to later passes,
I needed to set align+dereferenceable attributes on the frame-pointer
parameter of the resume function.  This is clearly the right thing to
do, but the align attribute currently seems to result in assumptions
being added during inlining that the optimizer cannot easily remove.
2020-03-26 00:51:09 -04:00
QingShan Zhang 1ef7bf4121 [PowerPC] Improve the way legalize mul for v8i16 and add pattern to match mul + add
We can legalize the operation MUL for v8i16 with the instruction (vmladduhm A, B, 0)
if altivec is enabled. Currently it is set as custom and expanded later, which is
not the right way. We can then add the pattern to match mul + add with (vmladduhm A, B, C).

Reviewed By: Nemanjai

Differential Revision: https://reviews.llvm.org/D76751
2020-03-26 04:46:49 +00:00
Craig Topper 31c5afb3f2 [X86] Split more masked instruction tests to enable D60940.
More mechanical splitting of tests so we can add a one use
check to the isel patterns for forming masked instructions.

In a few cases I changed immediates of instructions in
order to avoid needing to split.
2020-03-25 21:18:27 -07:00
Douglas Yung d622612e61 Relax newly added opcode checks to check only for a number instead of a specific opcode. 2020-03-25 20:15:33 -07:00
Stanislav Mekhanoshin e06d707aa2 [AMDGPU] Fixed function traversal in attribute propagation
The AMDGPUPropagateAttributes pass was skipping some of the functions
when cloning. Functions were added to the root set and then skipped
on the next iteration because they were already in the root set,
while they were meant to be processed with different features.

Differential Revision: https://reviews.llvm.org/D76815
2020-03-25 18:47:09 -07:00
Stanislav Mekhanoshin 6e00e3fcb0 [AMDGPU] Preserve original symbol during attribute propagation
AMDGPUPropagateAttributes can swap names while cloning a function.
Only do it if the original symbol was not externally visible.

Differential Revision: https://reviews.llvm.org/D76789
2020-03-25 15:26:30 -07:00
Florian Hahn 081efa7dd0 [SCCP] Add a few constantexpr,undef tests for cond propagation 2020-03-25 21:28:35 +00:00
Tyker f1a9efabcb Ignore/Drop droppable uses for code-sinking in InstCombine
Summary:
This patch allows code-sinking in InstCombine to be performed when instructions have uses in llvm.assume.

Uses are considered droppable when it is preferable to modify the User such that the use disappears rather than to prevent a transformation because of the use.
For now, uses are considered droppable if they are in an llvm.assume.

Reviewers: jdoerfert, nikic, spatel, lebedev.ri, sstefan1

Reviewed By: jdoerfert

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73832
2020-03-25 20:42:52 +01:00
Alexandre Ganea 934d4feab1 [ThinLTO] Don't rely on debug output for thinlto_samplepgo_icp3 test
Because using -print-imports is not thread-safe, make the test rely on llvm-dis instead.
Also cover the ICALL-PROM part as intended originally.

Differential Revision: https://reviews.llvm.org/D76775
2020-03-25 14:38:20 -04:00
Simon Pilgrim c6e5531f9b [X86][AVX] Combine shuffles to TRUNCATE/VTRUNC patterns
Add support for combining shuffles to AVX512 truncate instructions - another step toward fixing D56387/D66004. It also fixes SKX code on PR31443.

We could probably extend this further to handle non-VLX truncation cases.
2020-03-25 17:41:51 +00:00
Mikhail Maltsev bb4da94e5b [ARM,CDE] Implement predicated Q-register CDE intrinsics
Summary:
This patch implements the following CDE intrinsics:

  T __arm_vcx1q_m(int coproc, T inactive, uint32_t imm, mve_pred_t p);
  T __arm_vcx2q_m(int coproc, T inactive, U n, uint32_t imm, mve_pred_t p);
  T __arm_vcx3q_m(int coproc, T inactive, U n, V m, uint32_t imm, mve_pred_t p);

  T __arm_vcx1qa_m(int coproc, T acc, uint32_t imm, mve_pred_t p);
  T __arm_vcx2qa_m(int coproc, T acc, U n, uint32_t imm, mve_pred_t p);
  T __arm_vcx3qa_m(int coproc, T acc, U n, V m, uint32_t imm, mve_pred_t p);

The intrinsics are not part of the released ACLE spec, but internally at
Arm we have reached consensus to add them to the next ACLE release.

Reviewers: simon_tatham, MarkMurrayARM, ostannard, dmgreen

Reviewed By: simon_tatham

Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76610
2020-03-25 17:08:19 +00:00
Yvan Roux bd069ad39c [ARM] Move ConstantIsland and LowOverheadLoops Passes.
Move the ARM ConstantIsland and LowOverheadLoops passes later in the pipeline
such that they will be run after the upcoming Machine Outlining pass.

Differential Revision: https://reviews.llvm.org/D76065
2020-03-25 16:49:21 +01:00
cdevadas ce984129ea [AMDGPU] Add SIPreEmitPeephole pass.
This pass can handle all the optimization
opportunities found just before code emission.
Presently it includes the vcc branch optimization that was
previously handled in SIInsertSkips.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D76712
2020-03-25 15:35:35 +00:00
Simon Pilgrim 146df5581d [X86][AVX] Add common prefix to merge 32/64-bit AVX1 checks 2020-03-25 15:33:58 +00:00
Jonas Paulsson f09b891d4a [SystemZ] Improve foldMemoryOperandImpl()
A spilled load of an immediate can use MVHI/MVGHI instead.
A compare of a spilled register against an immediate can use CHSI/CGHSI.
A logical compare can use CLFHSI/CLGHSI.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D76055
2020-03-25 16:21:08 +01:00
Sean Fertile 3282d875d6 [PowerPC][AIX] ByVal formal arguments in a single register.
Adds support for passing ByVal formal arguments as long as they fit in a
single register.

Differential Revision: https://reviews.llvm.org/D76401
2020-03-25 11:09:40 -04:00
Sanjay Patel f631b9dc36 [VectorCombine] add shuffle tests; NFC
Goes with D76727.
2020-03-25 10:35:03 -04:00
sstefan1 72b51d6f93 [OpenMP] Adding InaccessibleMemOnly and InaccessibleMemOrArgMemOnly for runtime calls.
Summary: Attempt to add more attributes for runtime calls.

Reviewers: jdoerfert

Subscribers: guansong, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75010
2020-03-25 14:08:50 +00:00
Kerry McLaughlin 05606329e2 [AArch64][SVE] Add SVE intrinsics for masked loads & stores
Summary:
Implements the following intrinsics for contiguous loads & stores:
  - @llvm.aarch64.sve.ld1
  - @llvm.aarch64.sve.st1
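
A hedged sketch of the nxv4i32 variants (signatures as I read them from the intrinsic definitions; check the .td files for the exact forms):

```llvm
declare <vscale x 4 x i32> @llvm.aarch64.sve.ld1.nxv4i32(<vscale x 4 x i1>, i32*)
declare void @llvm.aarch64.sve.st1.nxv4i32(<vscale x 4 x i32>, <vscale x 4 x i1>, i32*)

define void @copy(<vscale x 4 x i1> %pg, i32* %src, i32* %dst) {
  ; Predicated contiguous load and store; inactive lanes read as zero
  ; on the load and are left untouched by the store.
  %v = call <vscale x 4 x i32> @llvm.aarch64.sve.ld1.nxv4i32(<vscale x 4 x i1> %pg, i32* %src)
  call void @llvm.aarch64.sve.st1.nxv4i32(<vscale x 4 x i32> %v, <vscale x 4 x i1> %pg, i32* %dst)
  ret void
}
```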

Reviewers: sdesmalen, andwar, efriedma, cameron.mcinally, dancgr, rengolin

Reviewed By: cameron.mcinally

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, danielkiss, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76688
2020-03-25 11:48:40 +00:00
Juneyoung Lee d82c1e8c56 Rename test name, add more tests for codegenprepare 2020-03-25 20:31:12 +09:00
Simon Tatham 8f1651ccea [ARM,MVE] Add missing tests for vqdmlash intrinsics.
Summary:
These were accidentally left out of D76123. I added tests for the
other three instructions in this small cross-product family (vqdmlah,
vqrdmlah, vqrdmlash) but missed this one.

Reviewers: miyuki

Reviewed By: miyuki

Subscribers: kristof.beyls, dmgreen, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76714
2020-03-25 09:46:16 +00:00
Juneyoung Lee e951a48996 Add freeze(and x, const) case to codegenprepare's freeze-cmp.ll 2020-03-25 17:29:01 +09:00
Craig Topper 2093fdd429 [X86] Split masked instruction tests to enable D60940.
We need to split tests that rely on isel duplicating operations
for different masking conditions. Repeating the operation is
more costly than emitting the masking separately.

The change here is a mechanical splitting of tests that
call multiple intrinsics in one function into separate
functions that call one intrinsic. We could obviously avoid
the splitting by giving the intrinsics different operands, but
that would need closer scrutiny than just splitting.
2020-03-24 23:44:16 -07:00
Kai Luo 70f9f4dd9d [PowerPC] Pre-commit reduced test case for PR45297. NFC. 2020-03-25 06:19:59 +00:00
QingShan Zhang 2488ea428d [NFC][Test][PowerPC] Add one test to verify the behavior of vector
mul/add for v8i16
2020-03-25 02:37:26 +00:00
Matt Arsenault baa78179fe AMDGPU/GlobalISel: Add a testcase for G_UNMERGE_VALUES legalization
I had a note that this doesn't work, but it seems to now.
2020-03-24 21:54:43 -04:00
Matt Arsenault d16ee1174a AMDGPU/GlobalISel: Add some end to end tests for fma selection 2020-03-24 21:23:37 -04:00
Matt Arsenault bba8c92d54 AMDGPU/GlobalISel: Add select patterns for v_and_or_b32 2020-03-24 20:47:54 -04:00
Matt Arsenault c9e0b448b8 AMDGPU/GlobalISel: Add load legalization tests 2020-03-24 20:41:01 -04:00
Matt Arsenault 01a337cfc9 AMDGPU/GlobalISel: Add missing tests for G_FRINT selection 2020-03-24 20:41:01 -04:00
Adrian Prantl ed8ad6ec15 Add an -object-path-prefix option to dsymutil
to remap object file paths (but not source paths) before
processing. This is meant to be used for Clang objects where the
module cache location was remapped using ``-fdebug-prefix-map``, to
help dsymutil find the Clang module cache.

<rdar://problem/55685132>

Differential Revision: https://reviews.llvm.org/D76391
2020-03-24 17:13:42 -07:00
Amara Emerson 472d282046 [AArch64][GlobalISel] Don't localize TLS G_GLOBAL_VALUEs on Darwin.
On Darwin these need to be selected into a function call for the TLS
address lookup. As a result, they can't be moved below a physreg write,
which happens in call sequences. In the long term, we should have some
mechanism in the localizer to prevent localizing into target-specific
atomic instruction sequences.

rdar://60056248

Differential Revision: https://reviews.llvm.org/D76652
2020-03-24 13:35:50 -07:00
Johannes Doerfert 5699d08b79 [Attributor] Use knowledge retained in llvm.assume (operand bundles)
This patch integrates operand bundle llvm.assumes [0] with the
Attributor. Most IRAttributes will now look at uses of the associated
value and if there are llvm.assume operand bundle uses with the right
tag we will check if they are in the must-be-executed-context (around
the context instruction). Droppable users, which are currently only
llvm.assume, are handled specially in some places now as well.

[0] http://lists.llvm.org/pipermail/llvm-dev/2019-December/137632.html
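
A hedged sketch of an operand-bundle llvm.assume (bundle tags and operand shapes per the proposal linked above; this specific example is illustrative):

```llvm
declare void @llvm.assume(i1)

define i32 @g(i32* %p) {
  ; The bundle tag names the known fact and the bundle operands carry
  ; the values it applies to; the Attributor can now consume this.
  call void @llvm.assume(i1 true) [ "nonnull"(i32* %p) ]
  %v = load i32, i32* %p
  ret i32 %v
}
```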

Reviewed By: uenoku

Differential Revision: https://reviews.llvm.org/D74888
2020-03-24 15:33:40 -05:00
Craig Topper e8d67ada2d [X86] Disable autoupgrade support for avx512.mask.broadcasti32x2.* and avx512.mask.broadcastf32x2.*.
These intrinsics take a v4i32/v4f32 input and are supposed to
broadcast elements 0 and 1. Instead the autoupgrade code was
broadcasting elements 0, 1, 2, and 3.

I could fix the autoupgrade, but since it's been broken for years
it seemed better just to steer anyone still trying to use it away
completely.
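
For reference, a minimal sketch (hypothetical function name) of the intended
semantics as a generic shuffle, repeating only elements 0 and 1; the broken
autoupgrade repeated all four source elements instead:

```
define <8 x i32> @broadcast_i32x2(<4 x i32> %src) {
  ; intended: broadcast elements 0 and 1 across the whole result
  %r = shufflevector <4 x i32> %src, <4 x i32> undef,
       <8 x i32> <i32 0, i32 1, i32 0, i32 1, i32 0, i32 1, i32 0, i32 1>
  ret <8 x i32> %r
}
```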
2020-03-24 12:35:24 -07:00
Sanjay Patel c84446f4e9 [VectorCombine] add tests for bitcast (shuffle); NFC 2020-03-24 15:18:32 -04:00
Reid Kleckner 597718aae0 Re-land "Avoid emitting unreachable SP adjustments after `throw`"
This reverts commit 4e0fe038f4. Re-lands
65b21282c7.

After landing 5ff5ddd0ad to add int3 into
trailing unreachable blocks, we can now remove these extra stack
adjustments without confusing the Win64 unwinder. See
https://llvm.org/PR45064#c4 or X86AvoidTrailingCall.cpp for a full
explanation.

Fixes PR45064.
2020-03-24 12:04:43 -07:00
Vedant Kumar f7052da6db [DWARF] Emit DW_AT_call_pc for tail calls
Record the address of a tail-calling branch instruction within its call
site entry using DW_AT_call_pc. This allows a debugger to determine the
address to use when creating artificial frames.

This creates an extra attribute + relocation at tail call sites, which
constitute 3-5% of all call sites in xnu and clang, respectively.

rdar://60307600

Differential Revision: https://reviews.llvm.org/D76336
2020-03-24 12:01:55 -07:00
Juneyoung Lee 49f75132bc [DivRemPairs] Freeze operands if they can be undef values
Summary:
DivRemPairs is unsound with respect to undef values.

```
      // bb1:
      //   %rem = srem %x, %y
      // bb2:
      //   %div = sdiv %x, %y
      // -->
      // bb1:
      //   %div = sdiv %x, %y
      //   %mul = mul %div, %y
      //   %rem = sub %x, %mul
```

If X can be undef, X should be frozen first.
For example, let's assume that Y = 1 & X = undef:
```
   %div = sdiv undef, 1 // %div = undef
   %rem = srem undef, 1 // %rem = 0
 =>
   %div = sdiv undef, 1 // %div = undef
   %mul = mul %div, 1   // %mul = undef
   %rem = sub %x, %mul  // %rem = undef - undef = undef
```
http://volta.cs.utah.edu:8080/z/m7Xrx5

Same for Y. If X = 1 and Y = (undef | 1), %rem in src is either 1 or 0,
but %rem in tgt can be one of many integer values.

This resolves https://bugs.llvm.org/show_bug.cgi?id=42619 .

This miscompilation disappears if the undef value is removed from LLVM, but that may take a while.
DivRemPairs happens pretty late in the optimization pipeline, so this optimization seemed like a good candidate to fix with freeze, without major regression, compared to other broken optimizations.
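
A sketch of the fixed expansion, in the notation of the comment above:
freezing %x first pins it to a single concrete value, so the recomputed
remainder agrees with the division:

```
      // bb1:
      //   %frozen.x = freeze %x
      //   %div = sdiv %frozen.x, %y
      //   %mul = mul %div, %y
      //   %rem = sub %frozen.x, %mul
```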

Reviewers: spatel, lebedev.ri, george.burgess.iv

Reviewed By: spatel

Subscribers: wuzish, regehr, nlopes, nemanjai, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76483
2020-03-25 03:46:14 +09:00
Benjamin Kramer 0019c2f194 [SelectionDAG] Don't crash when freezing illegal float types 2020-03-24 19:45:19 +01:00
Simon Pilgrim 0c24adcc94 [X86][AVX] Add some v32i16 to v32i8 style truncation shuffle tests 2020-03-24 18:38:13 +00:00
Matt Arsenault bb3aa09b15 AMDGPU/GlobalISel: Add more tests for add3 folding
Forgot to squash into 2ea4605105
2020-03-24 14:30:24 -04:00
Matt Arsenault 2ea4605105 AMDGPU/GlobalISel: Add some more tests for add3 folding
These currently fail to form add3 due to the pointer type, but they
should be handled.
2020-03-24 14:26:23 -04:00
Sanjay Patel 88b493a838 [ValueTracking] improve undef/poison analysis for constant vectors
Differential Revision: https://reviews.llvm.org/D76702
2020-03-24 13:35:47 -04:00
Hiroshi Yamauchi c3417592c8 Revert "Include static prof data when collecting loop BBs"
This reverts commit 129c911efa.

Due to an internal benchmark regression.
2020-03-24 09:41:16 -07:00
David Green f8c79b94af [ARM] Fold VMOVrh VLDR to LDRH
This adds a simple fold to combine a VMOVrh of a load into an integer load.
Similar to what is already performed for BITCAST, but it needs to account
for the types being of different sizes, creating a zero-extending load.

Differential Revision: https://reviews.llvm.org/D76485
2020-03-24 15:51:03 +00:00
Sanjay Patel 6c3c7a0dd6 [InstSimplify] add tests for freeze(constexpr); NFC 2020-03-24 11:39:19 -04:00
Simon Pilgrim 714402147d [X86][SSE1] Add support for logic+movmsk patterns (PR42870)
rL368506 handled the basic case, but we need to account for boolean logic patterns as well.
2020-03-24 14:28:40 +00:00
Pavel Labath d381b6a8d3 [DWARF] Fix v5 debug_line parsing of prologues with many files
Summary:
The directory_count and file_name_count fields are (section 6.2.4 of
DWARF5 spec) supposed to be uleb128s, not bytes. This bug meant that it
was not possible to correctly parse headers with more than 128 files or
directories.

I've found this bug by code inspection, though the limit is so small
someone would have run into it for real sooner or later. I've verified
that the producer side handles many files correctly, and that we are
able to parse such files after this fix.

Reviewers: dblaikie, jhenderson

Subscribers: aprantl, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76498
2020-03-24 15:11:54 +01:00
Juneyoung Lee 7802be4a3d [SelDag] Add FREEZE
Summary:
- Add FREEZE node to SelDag
- Lower FreezeInst (in IR) to FREEZE node
- Add Legalization for FREEZE node
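
As a small illustration (hypothetical test function), IR like this is now
lowered to an ISD::FREEZE node in SelectionDAG:

```
define i32 @freeze_example(i32 %x) {
  %y = freeze i32 %x        ; becomes an ISD::FREEZE node
  %r = add i32 %y, 1
  ret i32 %r
}
```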

Reviewers: qcolombet, bogner, efriedma, lebedev.ri, nlopes, craig.topper, arsenm

Reviewed By: lebedev.ri

Subscribers: wdng, xbolva00, Petar.Avramovic, liuz, lkail, dylanmckay, hiraditya, Jim, arsenm, craig.topper, RKSimon, spatel, lebedev.ri, regehr, trentxintong, nlopes, mkuper, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D29014
2020-03-24 23:04:58 +09:00
Sanjay Patel 58ec867a3b [InstSimplify] add more tests for freeze(constant); NFC
These should really be moved over to a ConstantFolding test file,
but since this may overlap with the in-progress D76010 and similar
tests already exist here, we can do that as a later cleanup.
2020-03-24 09:53:49 -04:00
Simon Pilgrim 865638f5eb [X86][SSE1] Add additional logic+movmsk patterns that scalarize (PR42870)
rL368506 handled the basic case, but we need to account for boolean logic patterns as well.
2020-03-24 13:20:41 +00:00
Sam Parker ca21e60fdf [NFC][ARM] Add missing tests 2020-03-24 11:08:01 +00:00
David Green 1232cfa385 [ARM] Don't split trunc stores that can be better handled as VMOVN
We deliberately split stores of the form
store(truncate(larger-than-legal-type)) into two stores, allowing each
store to perform part of the truncate for free.

There are times however where it makes more sense to use VMOVN to
de-interlace the results back into a single vector, and store that in
one go. This adds a check for that situation, not splitting the store if
it looks like a VMOVN can be more useful.

Differential Revision: https://reviews.llvm.org/D76511
2020-03-24 08:48:52 +00:00
Douglas Yung 18e1a59eed Fix another instance where a variable was renamed in the generated LLVM IR. [NFC] 2020-03-23 22:53:29 -07:00
Jun Ma a44de12ab2 [Coroutines] Also check lifetime intrinsics for local variables when building
the coroutine frame

Currently we move all allocas into the frame when building the coroutine frame
in the CoroSplit pass. However, this can be relaxed.

Since the CoroSplit pass runs after the Inline pass, we can use lifetime
intrinsics for this analysis: if the scope of a lifetime intrinsic does not
cross any suspend point, then rather than moving the alloca into the frame, we
can just move it to the entry block of the corresponding function. This reduces
the frame size.

More importantly, this also avoids data races in multithreaded environments.
Consider a function inlined into a coroutine: it starts a thread which accesses
local variables, and after inlining, the allocas moved into the frame are
accessed there as well, causing a data race.
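
A hedged sketch of the shape this patch looks for (all names hypothetical):
the lifetime markers end before the suspend point, so the alloca can stay in
the entry block instead of the frame:

```
  %local = alloca i32
  %p = bitcast i32* %local to i8*
  call void @llvm.lifetime.start.p0i8(i64 4, i8* %p)
  ; ... uses of %local, with no suspend point in between ...
  call void @llvm.lifetime.end.p0i8(i64 4, i8* %p)
  %s = call i8 @llvm.coro.suspend(token none, i1 false)
```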

Differential Revision: https://reviews.llvm.org/D75664
2020-03-24 13:41:55 +08:00
Vedant Kumar b7cd291c15 [GlobalOpt] Treat null-check of loaded value as use of global (PR35760)
PR35760 shows an example program which, when compiled with `clang -O0`
or gcc at any optimization level, prints '0'. However, llvm transforms
the program in a way that causes it to print '1'.

Fix the issue by having `AllUsesOfValueWillTrapIfNull` return false when
analyzing a load from a global which is used by an `icmp`. This special
case was untested [0] so this is just deleting dead code.
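
A reduced sketch of the problematic shape (hypothetical names): the null
compare is a real use of the loaded pointer and must not be treated as a use
that would trap on null:

```
@g = internal global i32* null

define i1 @is_null() {
  %p = load i32*, i32** @g
  %c = icmp eq i32* %p, null
  ret i1 %c
}
```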

An alternative fix might be to change the GlobalStatus analysis for the
global to report "Stored" instead of "StoredOnce". However, "StoredOnce"
is appropriate when only one value other than the initializer is stored
to the global.

[0]
http://lab.llvm.org:8080/coverage/coverage-reports/coverage/Users/buildslave/jenkins/workspace/coverage/llvm-project/llvm/lib/Transforms/IPO/GlobalOpt.cpp.html#L662

Differential Revision: https://reviews.llvm.org/D76645
2020-03-23 22:36:09 -07:00
Douglas Yung e79b1ab65b Make test more flexible for when the variable is renamed in the generated LLVM IR. [NFC] 2020-03-23 22:03:21 -07:00
Jinsong Ji 816ad48c82 [NFC][RUIP] Small debug output refine
Add a new line, so that we always print the MI on a new line,
before and after UpdateRegMask, for easier checking.
2020-03-24 03:29:45 +00:00
Jessica Paquette 02187ed45a [GlobalISel] Combine G_SELECTs of the form (cond ? x : x) into x
When we find something like this:

```
%a:_(s32) = G_SOMETHING ...
...
%select:_(s32) = G_SELECT %cond(s1), %a, %a
```

We can remove the select and just replace it entirely with `%a` because it's
always going to result in `%a`.

Same if we have

```
%select:_(s32) = G_SELECT %cond(s1), %a, %b
```

where we can deduce that `%a == %b`.

This implements the following cases:

- `%select:_(s32) = G_SELECT %cond(s1), %a, %a` -> `%a`

- `%select:_(s32) = G_SELECT %cond(s1), %a, %some_copy_from_a` -> `%a`

- `%select:_(s32) = G_SELECT %cond(s1), %a, %b` -> `%a` when `%a` and `%b`
   are defined by identical instructions

This gives a few minor code size improvements on CTMark at -O3 for AArch64.

Differential Revision: https://reviews.llvm.org/D76523
2020-03-23 16:46:03 -07:00
Nemanja Ivanovic bfa9ce1cb2 [PowerPC] Improve handling of some BUILD_VECTOR nodes
An analysis of real world code turned up a number of patterns with BUILD_VECTOR
nodes resulting from operations on extracted vector elements for which we
produce poor code. This addresses those cases. No attempt is made at
completeness as that would entail a large amount of work for something that
there is no evidence of in real code.

Differential revision: https://reviews.llvm.org/D72660
2020-03-23 17:34:29 -05:00
Justin Hibbits f0990e104b [PowerPC]: e500 target can't use lwsync, use msync instead
The e500 core has a silicon bug that triggers an illegal instruction
program trap on any sync other than msync.  Other cores will typically
ignore illegal sync types, and the documentation even implies that the
'illegal' bits are ignored.

Address this hardware deficiency by only using msync, like the PPC440.

Differential Revision:  https://reviews.llvm.org/D76614
2020-03-23 17:15:27 -05:00
Matt Arsenault 66073953a5 AMDGPU: Allow vectorization of round intrinsic
There seems to be a small benefit to the legalized sequence for v2f16
round with packed instructions, so allow vectorizing it by reducing
the cost.

An unintended side effect is vectorization of f32 round also
happens. The current FMA logic seems off to me, and isn't checking for
packed instructions.
2020-03-23 17:00:41 -04:00
Matt Arsenault b20a1d840f GVNSink: Allow handling addrspacecast 2020-03-23 16:50:58 -04:00
Fangrui Song f2f96eb605 [llvm-objcopy] Improve tool selection logic to recognize llvm-strip-$major as strip
Debian and some other distributions install llvm-strip as llvm-strip-$major (e.g. `/usr/bin/llvm-strip-9`)

D54193 made it work with llvm-strip-$major but did not add a test.
The behavior was regressed by D69146.

Fixes https://github.com/ClangBuiltLinux/linux/issues/940

Reviewed By: alexshap

Differential Revision: https://reviews.llvm.org/D76562
2020-03-23 13:49:26 -07:00
Matt Arsenault 43d98a0ecf Allow replacing intrinsic operands with variables
Since intrinsics can now specify when an argument is required to be
constant, it is now OK to replace arguments with variables if they
aren't. This means intrinsics must now be accurately marked with
immarg.
2020-03-23 15:51:57 -04:00
Sanjay Patel a1fe6beb1e [InstCombine] remove one-use check for ctpop -> cttz
Two one-use checks were added with rGfdcb27105537,
but only the first one is necessary to limit an
increase in instruction count. The second transform
only creates one instruction, so it is always a
reasonable canonicalization/optimization.
2020-03-23 13:59:57 -04:00
Johannes Doerfert 9d38f98dc3 [OpenMPOpt] Validate declaration types against the expected types
Validate the found runtime library function declarations' types
(return and argument types) against the expected types.

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D76058
2020-03-23 11:43:36 -05:00
David Green e10af89d99 [ARM] Extra VMOVN and VMULL tests. NFC 2020-03-23 16:18:49 +00:00
Reid Kleckner 5ff5ddd0ad [Win64] Insert int3 into trailing empty BBs
Otherwise, the Win64 unwinder considers direct branches to such empty
trailing BBs to be a branch out of the function. It treats such a branch
as a tail call, which can only be part of an epilogue. If the unwinder
misclassifies such a branch as part of the epilogue, it will fail to
unwind the stack further. This can lead to bad stack traces, or failure
to handle exceptions properly. This is described in
https://llvm.org/PR45064#c4, and by the comment at the top of the
X86AvoidTrailingCallPass.cpp file.

It should be safe to insert int3 for such blocks. An empty trailing BB
that reaches this pass is pretty much guaranteed to be unreachable.  If
a program executed such a block, it would fall off the end of the
function.

Most of the complexity in this patch comes from threading through the
"EHFuncletEntry" boolean on the MIRParser and registering the pass so we
can stop and start codegen around it. I used an MIR test because we
should teach LLVM to optimize away these branches as a follow-up.

Reviewed By: hans

Differential Revision: https://reviews.llvm.org/D76531
2020-03-23 08:50:37 -07:00
Johannes Doerfert 68fed27067 [Attributor] Handle calls in AAValueConstantRange properly
We did handle calls that were operands of certain instructions, but not
standalone calls that we visit via indirection, e.g., through selects.
2020-03-23 10:45:24 -05:00
Johannes Doerfert 54ec9b54f6 [Attributor] Unify handling of must-tail calls
We special cased must-tail calls all over the place because they cannot
be modified as other calls can be. However, we already centralized the
modification API so we can centralize the handling as well. This
simplifies the code and allows to remove must-tail calls completely.
2020-03-23 10:45:24 -05:00
Jay Foad 0444d16a16 [GlobalISel] Add generic opcodes for saturating add/subtract
Summary:
Add new generic MIR opcodes G_SADDSAT etc. Add support in IRTranslator
for translating the saturating add/subtract intrinsics to the new
opcodes.
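
A minimal sketch (hypothetical function name) of the kind of IR that
IRTranslator now lowers to the new opcodes:

```
declare i32 @llvm.sadd.sat.i32(i32, i32)

define i32 @sadd_sat(i32 %a, i32 %b) {
  %r = call i32 @llvm.sadd.sat.i32(i32 %a, i32 %b)  ; -> G_SADDSAT
  ret i32 %r
}
```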

Reviewers: aemerson, dsanders, paquette, arsenm

Subscribers: jvesely, wdng, nhaehnle, rovka, hiraditya, volkan, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76600
2020-03-23 15:16:45 +00:00
Matt Arsenault db3f3f0240 AMDGPU/GlobalISel: Add some oversized G_IMPLICIT_DEF tests
Not all of these legalize correctly yet.
2020-03-23 11:16:10 -04:00
Simon Pilgrim fdcb271055 [InstCombine] Limit CTPOP -> CTTZ simplifications to one use
Tweak D76568 so we only combine if it will remove the bit-twiddling.

Suggested by @spatel
2020-03-23 14:33:41 +00:00
Florian Hahn 33942d18b1 [SCCP] Precommit additional range propagation test. 2020-03-23 14:15:19 +00:00
Sanjay Patel 5eeea337be [VectorCombine] add more tests for extract-extract patterns; NFC 2020-03-23 09:33:56 -04:00
Jonas Paulsson 9adc7fc3cd [SystemZ] Perform instruction shortening for fused fp ops.
Replace single-lane (W... form) vector "multiply and add" and "multiply and
subtract" instructions with equivalent floating point instructions whenever
possible in SystemZShortenInst.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D76370
2020-03-23 14:12:13 +01:00
Simon Pilgrim 16d2065cfc [InstCombine] Add ub-safe negation patterns (PR27817) 2020-03-23 12:47:32 +00:00
Florian Hahn b8a2cf6b5b [SCCP] Extend test coverage in conditions-ranges.ll to false branches. 2020-03-23 12:32:14 +00:00
James Henderson b259ce998f [llvm-readobj] Derive dynamic symtab size from DT_HASH
If the section headers have been removed by a tool such as llvm-objcopy
or llvm-strip, previously llvm-readobj/llvm-readelf would not dump the
dynamic symbols when --dyn-symbols was specified. However, the nchain
value of the DT_HASH data specifies the number of dynamic symbols, so if
it is present, we can use that. This patch implements this behaviour.

Fixes https://bugs.llvm.org/show_bug.cgi?id=45089.

Reviewed by: grimar, MaskRay

Differential Revision: https://reviews.llvm.org/D76352
2020-03-23 12:21:20 +00:00
Simon Pilgrim 72d1419bfb [InstCombine] Add CTPOP -> CTTZ simplifications (PR43513)
As detailed on PR43513, we can simplify:

ctpop(x | -x) -> bitwidth - cttz(x, false)
Alive2: http://volta.cs.utah.edu:8080/z/caw49X

ctpop(~x & (x - 1)) -> cttz(x, false)
Alive2: http://volta.cs.utah.edu:8080/z/5zfVrx
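
A minimal i32 instance of the first pattern (hypothetical function name):

```
declare i32 @llvm.ctpop.i32(i32)

define i32 @ctpop_or_neg(i32 %x) {
  %neg = sub i32 0, %x
  %or = or i32 %x, %neg                        ; x | -x
  %pop = call i32 @llvm.ctpop.i32(i32 %or)
  ret i32 %pop                                 ; -> 32 - cttz(x, false)
}
```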

I've tweaked the initial test cases I added at rG2d712fb75584 to increase commutativity testing.

Differential Revision: https://reviews.llvm.org/D76568
2020-03-23 11:04:33 +00:00
Sam Parker 62fdb1f534 [DAGCombine] Skip PostInc combine with later users
When deciding whether to generate a post-inc load/store, look at the
other memory nodes that use the same base address and, if any come after
the current node, then don't do the combine.
The change only seems to affect the Arm backend, which surprised me, but
it appears to fix a lot of our issues around MVE masked load/stores
having to store a temporary address after an early post-increment on a
shared base address.

Differential Revision: https://reviews.llvm.org/D75847
2020-03-23 08:39:53 +00:00
Dominik Montada ccf49b9ef0 [GlobalISel] support widen unmerge if WideTy > SrcTy
Summary:
Widening G_UNMERGE_VALUES to a type which is larger than the
original source type is the same as widening it to the same
type as the source type: in both cases, G_UNMERGE_VALUES has
to be replaced with bit arithmetic. Although the arithmetic
itself is independent of whether the source type is smaller
than or equal to the widened type, widening the source type to the
widened type should result in fewer artifacts being emitted,
since this is the type that the user explicitly requested.

Reviewers: arsenm, dsanders, aemerson, aditya_nandakumar

Reviewed By: arsenm, dsanders

Subscribers: jvesely, wdng, nhaehnle, rovka, hiraditya, volkan, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76494
2020-03-23 09:16:45 +01:00
Fangrui Song 0cc124c186 [llvm-objdump][test] Improve PowerPC branch offset tests 2020-03-23 00:07:21 -07:00
Juneyoung Lee 5792c2236d Add test cases that are addressed by D76010 2020-03-23 13:49:29 +09:00
Qiu Chaofan 196b48a224 [NFC] [PowerPC] Prepare test for FMA negate check
This patch adds a test file covering the outputs when some operands of an
FMA are negative.
2020-03-23 11:40:07 +08:00
Craig Topper e2cb121374 [X86] Remove maximum vector length limit from combineBasicSADPattern.
createPSADBW uses SplitsOpsAndApply so should be able to handle
any size.

Restrict the extract result type to i32 or i64 since that's what
we have coverage for today and probably matches what the
isSimple() check gave us before.

Differential Revision: https://reviews.llvm.org/D76560
2020-03-22 15:02:05 -07:00
Florian Hahn 006244152d [SCCP] Add a few more tests for conditional propagation,XOR. 2020-03-22 21:43:33 +00:00
Craig Topper f4c67dfa92 [X86] More accurately model the cost of horizontal reductions.
This patch attempts to more accurately model the reduction of
power of 2 vectors of types we natively support. This takes into
account the narrowing of vectors that occur as we go from 512
bits to 256 bits, to 128 bits. It also takes into account the use
of wider elements in the shuffles for the first 2 steps of a
reduction from 128 bits. And uses a v8i16 shift for the final step
of vXi8 reduction.

The default implementation uses the legalized type for the arithmetic
for all levels. And uses the single source permute cost of the
legalized type for all levels. This penalizes things like
lack of v16i8 pshufb on pre-ssse3 targets and the splitting and
joining that needs to be done for integer types on AVX1. We never
need a v16i8 shuffle for a reduction and we only need split AVX1 ops
when the type is wide and needs to be split. I think we're still
over costing splits and joins for AVX1, but we're closer now.

I've also removed all pairwise special casing because I don't
think we ever want to generate that on X86. I've also adjusted
the add handling to more accurately account for any type splitting
that occurs before we reach a legal type.

Differential Revision: https://reviews.llvm.org/D76478
2020-03-22 14:20:15 -07:00
Simon Atanasyan 2dc4eb08cd [mips] Implement .cpadd directive
This directive inserts code to add $gp to the argument's register when
support for position independent code is enabled.

For example, this code:
  .cpadd $4
expands to:
  addu $4, $4, $gp
2020-03-22 23:34:32 +03:00
Simon Atanasyan 9bbddfbeaa [mips] Implement sne pseudo instruction
The `sne Dst, Src1, Src2/Imm` pseudo instruction sets register `Dst` to
1 if register `Src1` is not equal to `Src2/Imm` and to 0 otherwise.
2020-03-22 23:34:31 +03:00
Simon Atanasyan dca9e40c0c [mips] Implement sle/sleu pseudo instructions
The `sle/sleu Dst, Src1, Src2/Imm` pseudo instructions set register
`Dst` to 1 if register `Src1` is less than or equal to `Src2/Imm` and
to 0 otherwise.
2020-03-22 23:34:31 +03:00
Craig Topper b89ae50795 [X86] Remove maximum vector width restriction from combineLoopSADPattern.
SplitsOpsAndApply will take care of any needed splitting correctly.
All that we need to check is that the vector element count is a
power of 2.

Differential Revision: https://reviews.llvm.org/D76558
2020-03-22 11:09:14 -07:00
Matt Arsenault b76bbcc60d Verifier: Check bswap is supported size
Make sure it is a multiple of 2 bytes as specified in the LangRef.
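
For illustration, the rule in terms of declarations (byte widths noted; this
is a sketch of the constraint, not the verifier code):

```
declare i16 @llvm.bswap.i16(i16)   ; OK: 2 bytes
declare i48 @llvm.bswap.i48(i48)   ; OK: 6 bytes
; i24 (3 bytes) is not a multiple of 2 bytes, so the verifier
; now rejects a declaration such as @llvm.bswap.i24.
```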
2020-03-22 12:15:25 -04:00
Nikita Popov dc81923659 [InstCombine] Remove ExpensiveCombines option
D75801 removed the last and only user of this option, so we can
drop it now. The original idea behind this was to only run expensive
transforms under -O3, but apart from the one known bits transform,
this has never really taken off. I believe nowadays the recommendation
is to put expensive transforms in AggressiveInstCombine instead,
though that isn't terribly popular either :)

Differential Revision: https://reviews.llvm.org/D76540
2020-03-22 16:56:28 +01:00
Qiu Chaofan 763871053c [DAGCombiner] Require nsz for aggressive fma fold
For the folding pattern `x-(fma y,z,u*v) -> (fma -y,z,(fma -u,v,x))`, if
`y*z` is 1, `u*v` is -1 and `x` is -0.0, the sign of the result would change:
the original computes `-0.0 - (1 + -1) = -0.0`, while the folded form
computes `-1 + (1 + -0.0) = +0.0`.

Differential Revision: https://reviews.llvm.org/D76419
2020-03-22 23:10:07 +08:00
Qiu Chaofan 996dc13dc4 [NFC] [PowerPC] Remove unsafe-fp-math in FMA test 2020-03-22 22:40:49 +08:00
Simon Pilgrim 0105e9cd92 [X86][SSE] Add some additional irregular AVG tests
Finally resurrecting D56506 and want to improve test coverage.
2020-03-22 14:28:31 +00:00
Bjorn Pettersson d077d678d3 [ValueTracking] Avoid blind cast from Operator to Instruction
Summary:
Avoid blind cast from Operator to ExtractElementInst in
computeKnownBitsFromOperator. This resulted in some crashes
in downstream fuzz testing. Instead we use getOperand directly
on the Operator when accessing the vector/index operands.

Haven't seen any problems with InsertElement and ShuffleVector,
but I believe those could be used in constant expressions as well.
So the same kind of fix as for ExtractElement was also applied for
InsertElement.

When it comes to ShuffleVector we now simply bail out if a dynamic
cast of the Operator to ShuffleVectorInst fails. I've got no
reproducer indicating problems for ShuffleVector, and a fix would be
slightly more complicated as getShuffleDemandedElts is involved.

Reviewers: RKSimon, nikic, spatel, efriedma

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76564
2020-03-22 14:45:31 +01:00
Qiu Chaofan c1bc56bf4f [NFC] [PowerPC] Update FMA association test 2020-03-22 20:55:32 +08:00
Fangrui Song 71f8b78d89 [AsmPrinter] Simplify AsmPrinter::emitXXStructorList after D61547 2020-03-21 23:18:23 -07:00
Craig Topper a1e02753c0 [X86] Add nonloop v64i8 test to sad.ll. 2020-03-21 17:49:20 -07:00
Craig Topper d1739f1e2f [X86] Add test for v4i8 loop sad pattern.
This cases produces a psadbw that doesn't need to be widened or
extracted so takes a slightly different code path in
combineLoopSADPattern.
2020-03-21 15:27:29 -07:00
Simon Pilgrim 2d712fb755 [InstCombine] Add ctpop -> cttz combine tests (PR43513) 2020-03-21 19:30:22 +00:00
Simon Pilgrim 25eb9056d7 [X86] getTargetShuffleAndZeroables - add insert_subvector(undef, sub, c) handling.
We often widen xmm/ymm vectors to ymm/zmm by insertion into an undef base vector. By letting getTargetShuffleAndZeroables track the undef elts we can help avoid a lot of unnecessary cross-lane shuffles.

Fixes PR44694
2020-03-21 19:11:42 +00:00
Simon Pilgrim 7a3d994880 [X86][AVX] Add HADDPD test case for PR44694 2020-03-21 19:11:41 +00:00
Simon Pilgrim 4ceade0428 [X86] Combine concat(shufps,shufps) -> shufps(concat,concat)
Now that rG18c19441d105 has improved VPERM2X128 handling, we can perform this to improve x64->x32 truncation without poor cross-lane issues.

Someday combineX86ShufflesRecursively will handle this, but we're still really bad at dealing with different vector widths.
2020-03-21 12:44:10 +00:00
Simon Pilgrim f424d51c3e Revert rGe6a7e3b5e3e7 "[X86][SSE] matchShuffleWithSHUFPD - add support for unary shuffles."
This reverts commit e6a7e3b5e3.

Avoids register pressure regression reported at PR45263
2020-03-21 12:14:19 +00:00
Simon Pilgrim c5fd9e3888 [DAG] Don't permit EXTLOAD when combining FSHL/FSHR consecutive loads (PR45265)
Technically we can permit EXTLOAD of the LHS operand, but only if all the extended bits are shifted out. Until we have test coverage for that case, I'm just disabling this to fix PR45265.
2020-03-21 10:52:41 +00:00
Fangrui Song 85c30f3374 [X86] Reland D71360 Clean up UseInitArray initialization for X86ELFTargetObjectFile
-fuse-init-array is now the CC1 default but TargetLoweringObjectFileELF::UseInitArray still defaults to false.
The following two unknown OS target triples continue using .ctors/.dtors because InitializeELF is not called.

clang -target i386 -c a.c
clang -target x86_64 -c a.c

This cleanup fixes this as a bonus.

X86SpeculativeLoadHardeningPass::tracePredStateThroughCall can call
MCContext::createTempSymbol before TargetLoweringObjectFileELF::Initialize().
We need to call TargetLoweringObjectFileELF::Initialize() earlier.

test/CodeGen/X86/speculative-load-hardening-indirect.ll

Differential Revision: https://reviews.llvm.org/D71360
2020-03-20 21:57:34 -07:00
Eric Christopher fc7233d774 Temporarily Revert "[X86] Reland D71360 Clean up UseInitArray initialization for X86ELFTargetObjectFile"
as it's causing msan failures.

This reverts commit 7899fe9da8.
2020-03-20 17:36:12 -07:00
Huihui Zhang 4f5af9d70d [ValueTracking] Fix usage of DataLayout::getTypeStoreSize()
Summary:
DataLayout::getTypeStoreSize() returns TypeSize.

For cases where it can not be scalable vector (e.g., GlobalVariable),
explicitly call TypeSize::getFixedSize().

For cases where scalable property doesn't matter, (e.g., check for
zero-sized type), use TypeSize::isNonZero().

Reviewers: sdesmalen, efriedma, apazos, reames

Reviewed By: efriedma

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76454
2020-03-20 16:52:15 -07:00
Huihui Zhang 1993f95f2b [ValueTracking][SVE] Fix getOffsetFromIndex for scalable vector.
Summary:
Return None if the GEP index type is a scalable vector. Sizes of scalable
vectors are multiplied by a runtime constant.

Avoid transforming:
  %a = bitcast i8* %p to <vscale x 16 x i8>*
  %tmp0 = getelementptr <vscale x 16 x i8>, <vscale x 16 x i8>* %a, i64 0
  store <vscale x 16 x i8> zeroinitializer, <vscale x 16 x i8>* %tmp0
  %tmp1 = getelementptr <vscale x 16 x i8>, <vscale x 16 x i8>* %a, i64 1
  store <vscale x 16 x i8> zeroinitializer, <vscale x 16 x i8>* %tmp1

into:
  %a = bitcast i8* %p to <vscale x 16 x i8>*
  %tmp0 = getelementptr <vscale x 16 x i8>, <vscale x 16 x i8>* %a, i64 0
  %1 = bitcast <vscale x 16 x i8>* %tmp0 to i8*
  call void @llvm.memset.p0i8.i64(i8* align 16 %1, i8 0, i64 32, i1 false)

Reviewers: sdesmalen, efriedma, apazos, reames

Reviewed By: sdesmalen

Subscribers: tschuett, hiraditya, rkruppe, arphaman, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76464
2020-03-20 14:48:29 -07:00
Pirama Arumuga Nainar fe5599eac6 [llvm-ar] Use target triple to deduce archive kind for bitcode inputs
Summary:
When using full LTO on cross-compile settings, instead of generating the
default archive kind of the host platform, we could deduce the archive
kind based on the target triple.

This specifically addresses https://github.com/android/ndk/issues/1209
by making it possible to drop llvm-ar in place of GNU ar without extra
flags.

Reviewers: compnerd, pcc, srhines, danalbert

Subscribers: hiraditya, MaskRay, steven_wu, dexonsmith, rupprecht, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76461
2020-03-20 13:19:44 -07:00
Nikita Popov 2b52e4e629 [InstCombine] Remove known bits constant folding
If ExpensiveCombines is enabled (which is the case with -O3 on the
legacy PM and always on the new PM), InstCombine tries to compute
the known bits of all instructions in the hope that all bits end up
being known, which is fairly expensive.

How effective is it? If we add some statistics on how often the
constant folding succeeds and how many KnownBits calculations are
performed and run test-suite we get:

    "instcombine.NumConstPropKnownBits": 642,
    "instcombine.NumConstPropKnownBitsComputed": 18744965,

In other words, we get one fold for every 30000 KnownBits calculations.
However, the truth is actually much worse: Currently, known bits are
computed before performing other folds, so there is a high chance
that cases that get folded by known bits would also have been
handled by other folds.

What happens if we compute known bits after all other folds
(hacky implementation: https://gist.github.com/nikic/751f25b3b9d9e0860db5dde934f70f46)?

    "instcombine.NumConstPropKnownBits": 0,
    "instcombine.NumConstPropKnownBitsComputed": 18105547,

So it turns out despite doing 18 million known bits calculations,
the known bits fold does not do anything useful on test-suite.
I was originally planning to move this into AggressiveInstCombine
so it only runs once in the pipeline, but seeing this, I think
we're better off removing it entirely.

As this is the only use of the "expensive combines" mechanism,
it may be removed afterwards, but I'll leave that to a separate patch.

Differential Revision: https://reviews.llvm.org/D75801
2020-03-20 20:54:06 +01:00
Fangrui Song 7899fe9da8 [X86] Reland D71360 Clean up UseInitArray initialization for X86ELFTargetObjectFile
UseInitArray is now the CC1 default but TargetLoweringObjectFileELF::UseInitArray still defaults to false.
The following two unknown OS target triples continue using .ctors/.dtors because InitializeELF is not called.

clang -target i386 -c a.c
clang -target x86_64 -c a.c

This cleanup fixes this as a bonus.

Differential Revision: https://reviews.llvm.org/D71360
2020-03-20 11:18:36 -07:00
Vedant Kumar 636665331b PR45181: Fix another invalid DIExpression combination
The original test case from PR45181 triggers a DIExpression combination
that wasn't fixed in D76164.
2020-03-20 11:18:05 -07:00
Nikita Popov 3205d1a860 [InstCombine] Handle known shl nsw sign bit in SimplifyDemanded
Ideally SimplifyDemanded should compute the same known bits as
computeKnownBits(). This patch addresses one discrepancy, where
ValueTracking is more powerful: If we have a shl nsw shift, we
know that the sign bit of the input and output must be the same.
If this results in a conflict, the result is poison.
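
A hedged sketch of the extra power this gives SimplifyDemanded (hypothetical
function):

```
define i32 @shl_nsw_sign(i32 %x) {
  %nn = and i32 %x, 2147483647   ; sign bit known to be 0
  %r = shl nsw i32 %nn, 3        ; nsw: sign bit of %r matches %nn, so 0
  %s = lshr i32 %r, 31           ; known bits can now fold this to 0
  ret i32 %s
}
```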

This is implemented in
2c4ca6832f/lib/Analysis/ValueTracking.cpp (L1175-L1179)
and
2c4ca6832f/lib/Analysis/ValueTracking.cpp (L904-L908).

This implements the same basic logic in SimplifyDemanded. It's
slightly stronger, because I return undef instead of zero for the
poison case (which is not an option inside ValueTracking).

As mentioned in https://reviews.llvm.org/D75801#inline-698484,
we could detect poison in more cases, this just establishes parity
with the existing logic.

Differential Revision: https://reviews.llvm.org/D76489
2020-03-20 18:16:05 +01:00
Pirama Arumuga Nainar edcfb47ff6 [DAGCombiner] Do not fold truncate(build_vector(..)) if it creates an illegal type
Summary:
It can be the case that a vector type is legal but the corresponding
scalar type is not legal for an architecture (i8 vs. v16i8 on AArch64).
Check if the scalar type created when folding
  truncate(build_vector(x,y)) -> build_vector(truncate(x),truncate(y))

is legal if we are running after the type legalizer.

This fixes https://github.com/android/ndk/issues/1207.

Reviewers: RKSimon, srhines

Subscribers: kristof.beyls, hiraditya, danielkiss, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76312
2020-03-20 09:20:16 -07:00
Sean Fertile 56122fcd64 [PowerPC][AIX][NFC] Extend the test coverage of ByVal args.
Adds/changes some types in the ByVal cc test so that they aren't all
structs of arrays of bytes, and adds testing for passing multiple
ByVal arguments.
2020-03-20 12:19:08 -04:00
Simon Pilgrim 34659de5fd [InstCombine][X86] simplifyX86immShift - convert variable in-range vector shift by scalar amounts to generic shifts (PR40391)
The sll/srl/sra scalar vector shifts can be replaced with generic shifts if the shift amount is known to be in range.

This also required public DemandedElts variants of llvm::computeKnownBits to be exposed (PR36319).
2020-03-20 15:48:06 +00:00
Simon Tatham 1adfa4c991 [ARM,MVE] Add ACLE intrinsics for the vaddv/vaddlv family.
Summary:
I've implemented them as target-specific IR intrinsics rather than
using `@llvm.experimental.vector.reduce.add`, on the grounds that the
'experimental' intrinsic doesn't currently have much code generation
benefit, and my replacements encapsulate the sign- or zero-extension
so that you don't expose the illegal MVE vector type (`<4 x i64>`) in
IR.

The machine instructions come in two versions: with and without an
input accumulator. My new IR intrinsics, like the 'experimental' one,
don't take an accumulator parameter: we represent that by just adding
on the input value using an ordinary i32 or i64 add. So if you write
the `vaddvaq` C-language intrinsic with an input accumulator of zero,
it can be optimised to VADDV, and conversely, if you write something
like `x += vaddvq(y)` then that can be combined into VADDVA.

Most of this is achieved in isel lowering, by converting these IR
intrinsics into the existing `ARMISD::VADDV` family of custom SDNode
types. For the difficult case (64-bit accumulators), isel lowering
already implements the optimization of folding an addition into a
VADDLV to make a VADDLVA; so once we've made a VADDLV, our job is
already done, except that I had to introduce a parallel set of ARMISD
nodes for the //predicated// forms of VADDLV.

For the simpler VADDV, we handle the predicated form by just leaving
the IR intrinsic alone and matching it in an ordinary dag pattern.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76491
2020-03-20 15:42:33 +00:00
Simon Tatham 45a9945b9e [ARM,MVE] Add ACLE intrinsics for the vminv/vmaxv family.
Summary:
I've implemented these as target-specific IR intrinsics, because
they're not //quite// enough like @llvm.experimental.vector.reduce.min
(which doesn't take the extra scalar parameter). Also this keeps the
predicated and unpredicated versions looking similar, and the
floating-point minnm/maxnm versions fold into the same schema.

We had a couple of min/max reductions already implemented, from the
initial pathfinding exercise in D67158. Those were done by having
separate IR intrinsic names for the signed and unsigned integer
versions; as part of this commit, I've changed them to use a flag
parameter indicating signedness, which is how we ended up deciding
that the rest of the MVE intrinsics family ought to work. So now
hopefully the whole lot is consistent.

In the new llc test, the output code from the `v8f16` test functions
looks quite unpleasant, but most of it is PCS lowering (you can't pass
a `half` directly in or out of a function). In other circumstances,
where you do something else with your `half` in the same function, it
doesn't look nearly as nasty.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76490
2020-03-20 15:42:33 +00:00
Sean Fertile fc902cb6e2 [PowerPC][AIX][NFC] Add zero-sized by val params to cc test.
The zero sized structs force creation of a stack object of size 1, align
8 in the locals area, but otherwise have no effect on the calling convention
code, i.e. they consume no registers or stack space in the parameter save area.

The 32-bit codegen has 8 bytes of padding to fit the new stack object, so the
stack size stays the same. The 64-bit codegen has no padding in the stack
frames allocated, so 8 bytes are added, and because of the 16-byte aligned
stack, the stack size increases from 112 bytes to 128.
2020-03-20 11:24:46 -04:00
Bjorn Pettersson d168b77780 [DAGCombiner] Fix non-determinism problem related to argument evaluation order in visitFDIV
Summary:
For some reason the order in which we call getNegatedExpression
for the involved operands, after a call to isCheaperToUseNegatedFPOps,
seem to matter. This patch includes a new test case in
test/CodeGen/X86/fdiv.ll that crashes if we reverse the order of
those calls. Before this patch that could happen depending on
which compiler was used when building llvm. With my GCC
version (7.4.0) I got the crash, because it seems like it is
using a different order for the argument evaluation compared
to clang.

All other users of isCheaperToUseNegatedFPOps already used this
pattern with unfolded/ordered calls to getNegatedExpression, so
this patch is aligning visitFDIV with the other use cases.

This patch simply deals with the non-determinism for FDIV. While
the underlying problem with getNegatedExpression is discussed
further in D76439.

Reviewers: spatel, RKSimon

Reviewed By: spatel

Subscribers: hiraditya, mgrang, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76319
2020-03-20 16:11:17 +01:00
Matt Arsenault 53d6b156bb AMDGPU: Add more tests for fshr 2020-03-20 11:01:51 -04:00
alex-t 6e34e71869 [AMDGPU] Enable divergence driven ISel for ADD/SUB i64
Summary:
Currently we custom select add/sub with carry-out to the scalar form, relying on later replacing it with the vector form if necessary.
This change enables the custom selection code to take the divergence of adde/addc SDNodes into account and select the appropriate form in one step.

Reviewers: arsenm, vpykhtin, rampitec

Reviewed By: arsenm, vpykhtin

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa

Differential Revision: https://reviews.llvm.org/D76371
2020-03-20 17:06:11 +03:00
Mikhail Maltsev 969034b860 [ARM,CDE] Implement CDE unpredicated Q-register intrinsics
Summary:
This patch implements the following intrinsics:

  uint8x16_t __arm_vcx1q_u8 (int coproc, uint32_t imm);
  T __arm_vcx1qa(int coproc, T acc, uint32_t imm);
  T __arm_vcx2q(int coproc, T n, uint32_t imm);
  uint8x16_t __arm_vcx2q_u8(int coproc, T n, uint32_t imm);
  T __arm_vcx2qa(int coproc, T acc, U n, uint32_t imm);
  T __arm_vcx3q(int coproc, T n, U m, uint32_t imm);
  uint8x16_t __arm_vcx3q_u8(int coproc, T n, U m, uint32_t imm);
  T __arm_vcx3qa(int coproc, T acc, U n, V m, uint32_t imm);

Most of them are polymorphic. Furthermore, some intrinsics are
polymorphic in 2 or 3 parameter types; such polymorphism is not
supported by the existing MVE/CDE tablegen backends, and we don't
really want a combinatorial explosion caused by 1000 different
combinations of 3 vector types. Because of this, some intrinsics are
implemented as macros involving a cast of the polymorphic arguments to
uint8x16_t.

The IR intrinsics are even more restricted in terms of types: all MVE
vectors are cast to v16i8.

Reviewers: simon_tatham, MarkMurrayARM, dmgreen, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76299
2020-03-20 14:01:56 +00:00
Mikhail Maltsev d22e661712 [ARM,CDE] Implement CDE S and D-register intrinsics
Summary:
This patch implements the following ACLE intrinsics:

  uint32_t __arm_vcx1_u32(int coproc, uint32_t imm);
  uint32_t __arm_vcx1a_u32(int coproc, uint32_t acc, uint32_t imm);
  uint32_t __arm_vcx2_u32(int coproc, uint32_t n, uint32_t imm);
  uint32_t __arm_vcx2a_u32(int coproc, uint32_t acc, uint32_t n, uint32_t imm);
  uint32_t __arm_vcx3_u32(int coproc, uint32_t n, uint32_t m, uint32_t imm);
  uint32_t __arm_vcx3a_u32(int coproc, uint32_t acc, uint32_t n, uint32_t m, uint32_t imm);

  uint64_t __arm_vcx1d_u64(int coproc, uint32_t imm);
  uint64_t __arm_vcx1da_u64(int coproc, uint64_t acc, uint32_t imm);
  uint64_t __arm_vcx2d_u64(int coproc, uint64_t m, uint32_t imm);
  uint64_t __arm_vcx2da_u64(int coproc, uint64_t acc, uint64_t m, uint32_t imm);
  uint64_t __arm_vcx3d_u64(int coproc, uint64_t n, uint64_t m, uint32_t imm);
  uint64_t __arm_vcx3da_u64(int coproc, uint64_t acc, uint64_t n, uint64_t m, uint32_t imm);

Since the semantics of CDE instructions is opaque to the compiler, the
ACLE intrinsics require dedicated LLVM IR intrinsics. The 64-bit and
32-bit variants share the same IR intrinsic.

Reviewers: simon_tatham, MarkMurrayARM, ostannard, dmgreen

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76298
2020-03-20 14:01:53 +00:00
Mikhail Maltsev 7a85e3585e [ARM,CDE] Implement GPR CDE intrinsics
Summary:
This change implements ACLE CDE intrinsics that translate to
instructions working with general-purpose registers.

The specification is available at
https://static.docs.arm.com/101028/0010/ACLE_2019Q4_release-0010.pdf

Each ACLE intrinsic gets a corresponding LLVM IR intrinsic (because
they have distinct function prototypes). Dual-register operands are
represented as pairs of i32 values. Because of this the instruction
selection for these intrinsics cannot be represented as TableGen
patterns and requires custom C++ code.

Reviewers: simon_tatham, MarkMurrayARM, dmgreen, ostannard

Reviewed By: MarkMurrayARM

Subscribers: kristof.beyls, hiraditya, danielkiss, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76296
2020-03-20 14:01:51 +00:00
Florian Hahn ece6cf0fa5 [DSE,MSSA] Precommit additional tests for D73763. 2020-03-20 13:39:46 +00:00
Simon Pilgrim 7f764fa18f [ValueTracking] Add some initial isKnownNonZero DemandedElts support (PR36319) 2020-03-20 13:29:00 +00:00
Nikita Popov ce6c95aaca [InstCombine] Move test to instcombine; NFC
This test uses -instcombine, so move it into the appropriate
directory. Also fork it for expensive checks enabled/disabled.
2020-03-20 12:41:19 +01:00
Simon Pilgrim c1efdbcbe0 [ValueTracking] Add computeKnownBits DemandedElts support to shift instructions (PR36319) 2020-03-20 11:08:08 +00:00
Nikita Popov a09ff56b5b [Tests] Regenerate some test checks; NFC 2020-03-20 12:06:53 +01:00
James Henderson 86b093d1a1 [llvm-readobj] Allow syms from all sections to match stack size entries
Prior to this change, for non-relocatable objects llvm-readobj would
assume that all symbols that corresponded to a stack size section's
entries were in the section specified by the section's sh_link field.
In the presence of an output section description combining
SHF_LINK_ORDER sections linking different output sections, this cannot
be respected, since linker script section patterns are "by name" by
nature. Consequently, the sh_link value would not be correct for all
section entries.

This patch changes llvm-readobj to ignore the section of symbols in a
non-relocatable object.

Fixes https://bugs.llvm.org/show_bug.cgi?id=45228.

Reviewed by: grimar, MaskRay

Differential Revision: https://reviews.llvm.org/D76425
2020-03-20 10:54:18 +00:00
Georgii Rymar 63778bc653 [llvm-readobj][llvm-readelf][test] - Add a test to check how we dump relocation addends.
It seems we do not test well how we print relocation addends.
And the behavior of dumpers does not seem to be ideal here
(and llvm-readelf does not match GNU as the test case shows).

This patch adds a test case to document the current behavior.

Differential revision: https://reviews.llvm.org/D75671
2020-03-20 13:41:32 +03:00
David Green b3499f572d [ARM] Change VDUP type to i32 for MVE
The MVE VDUP instruction take a GPR and splats into every lane of a
vector register. Unlike NEON we do not have a VDUPLANE equivalent
instruction, doing the same splat from a fp register. Previously a VDUP
to a v4f32/v8f16 would be represented as a (v4f32 VDUP f32), which
would mean the instruction pattern needs to add a COPY_TO_REGCLASS to
the GPR.

Instead this now converts that earlier during an ISel DAG combine,
converting (VDUP x) to (VDUP (bitcast x)). This can allow instruction
selection to tell that the input needs to be an i32, which in one of the
testcases allows it to use ldr (or specifically ldm) over (vldr;vmov).

Whilst this is simple enough for floats, as the type sizes are the same,
there is no BITCAST equivalent for getting a half into an i32. This uses
a VMOVrh ARMISD node, which doesn't know the same tricks yet.

Differential Revision: https://reviews.llvm.org/D76292
2020-03-20 09:48:45 +00:00
Roger Ferrer Ibanez 3c24aee7ee [RISCV] Select +0.0 immediate using fmv.{w,d}.x / fcvt.d.w
Floating point positive zero can be selected using fmv.w.x / fmv.d.x /
fcvt.d.w and the zero source register.

Differential Revision: https://reviews.llvm.org/D75729
2020-03-20 09:42:24 +00:00
Roger Ferrer Ibanez ebb04e9ca9 [NFC][RISCV] Test for 0.0 fp immediate
To show a later change that impacts 0.0 fp constant generation.

Differential Revision: https://reviews.llvm.org/D75728
2020-03-20 09:42:24 +00:00
Nikita Popov 0372768776 [InstCombine] Simplify calls with "returned" attribute
If a call argument has the "returned" attribute, we can simplify
the call to the value of that argument. This was already partially
handled by InstSimplify/InstCombine for the case where the argument
is an integer constant, and the result is thus known via known bits.
The non-constant (or non-int) argument cases weren't handled though.
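
A minimal sketch (hypothetical callee) of the non-constant case that is now
handled:

```
declare i8* @passthrough(i8* returned)

define i8* @caller(i8* %p) {
  %r = call i8* @passthrough(i8* %p)
  ret i8* %r           ; %r now simplifies to %p
}
```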

This previously landed as an InstSimplify transform, but was reverted
due to assertion failures when compiling the Linux kernel. The reason
is that simplifying a call to another call breaks assumptions in
call graph updating during inlining. As the code is not easy to fix,
and there is no particularly strong motivation for having this in
InstSimplify, the transform is only performed in InstCombine instead.

Differential Revision: https://reviews.llvm.org/D75815
2020-03-20 10:23:39 +01:00
David Green 9cf920e64d [ARM] Extra MVE float loop tests. NFC 2020-03-20 09:21:45 +00:00
Nikita Popov 5c10967157 [InstCombine] Don't replace musttail result based on known bits
This is the same change as D75824, but for two cases where
InstCombine performs the same optimization: Replacing an instruction
whose bits are fully known with a constant. This is not (generally)
legal for musttail calls.
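
A sketch of the constraint (hypothetical functions): even when known bits
fully determine the result, the musttail call's return value must be
forwarded as-is:

```
declare i32 @g(i32)

define i32 @f(i32 %x) {
  %r = musttail call i32 @g(i32 %x)
  ret i32 %r           ; must stay %r, even if known to be constant
}
```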

Differential Revision: https://reviews.llvm.org/D76457
2020-03-20 10:17:09 +01:00
Florian Hahn 3a8372ed02 [DSE] Support traversing MemoryPhis.
For MemoryPhis, we have to ensure that the MemoryPhi cannot be executed
before the access we are currently looking at.

To do this we do a post-order numbering of the basic blocks in the
function and bail out once we reach a MemoryPhi with a larger (or equal)
post-order block number than the current MemoryAccess.
This changes the order in which we visit stores for elimination.

This patch also adds support for exploring multiple paths. We keep a worklist (ToCheck) of memory accesses that might be eliminated by our starting MemoryDef or MemoryPhis for further exploration.  For MemoryPhis, we add the incoming values to the worklist, for MemoryDefs we add the defining access.

Reviewers: dmgreen, rnk, efriedma, bryant, asbirlea

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D72148
2020-03-20 07:51:42 +00:00
Austin Kerbow 2cbb8c946a [AMDGPU] Reuse register during frame index elimination
If there were no free VGPRs we would need two emergency spill slots for register
scavenging during PEI/frame index elimination. Reuse 'ResultReg' for scale
calculation so that only one spill is needed.

Differential Revision: https://reviews.llvm.org/D76387
2020-03-20 00:19:15 -07:00
cdevadas 728b878de6 [AMDGPU] Set the CostPerUse value for vgpr registers.
Apart from the argument registers, set the CostPerUse
value as per the ratio reg_index/allocation_granularity.
It is a pre-commit for introducing the scratch registers
in the ABI. This change should help achieve a more balanced
register allocation.

Differential Revision: https://reviews.llvm.org/D76417
2020-03-20 11:49:35 +05:30
Wei Mi a035726e5a Revert "Generate Callee Saved Register (CSR) related cfi directives like .cfi_restore."
This reverts commit 3c96d01d2e. Got report that it caused test failures in libc++.
2020-03-19 22:45:27 -07:00
Jun Ma 032251e34d [Coroutines] Fix PR45130
Currently, when the final suspend can be simplified by simplifySuspendPoint,
handleFinalSuspend is executed as well to remove the last case in the switch
instruction. This patch fixes it.

Differential Revision: https://reviews.llvm.org/D76345
2020-03-20 11:27:08 +08:00
Yuta Saito 08670d435b [WebAssembly] Support swiftself and swifterror for WebAssembly target
Summary:
The Swift ABI is based on the basic C ABI described here: https://github.com/WebAssembly/tool-conventions/blob/master/BasicCABI.md
The Swift calling convention on WebAssembly differs a little from swiftcc
on other architectures.

On non-WebAssembly architectures, swiftcc accepts extra parameters that are
attributed with swifterror or swiftself by the caller. Even if the callee
doesn't have these parameters, the invocation succeeds, ignoring the extra
parameters.

But WebAssembly strictly checks that callee and caller signatures are the
same. https://github.com/WebAssembly/design/blob/master/Semantics.md#calls
So at the WebAssembly level, all swiftcc functions end up with extra arguments,
and all function definitions and invocations explicitly have additional
parameters to fill swifterror and swiftself.

This patch supports this signature difference for swiftself and swifterror
when the cc is swiftcc.

e.g.
```
declare swiftcc void @foo(i32, i32)
@data = global i8* bitcast (void (i32, i32)* @foo to i8*)
define swiftcc void @bar() {
  %1 = load i8*, i8** @data
  %2 = bitcast i8* %1 to void (i32, i32, i32)*
  call swiftcc void %2(i32 1, i32 2, i32 swiftself 3)
  ret void
}
```

For swiftcc, emit additional swiftself and swifterror parameters
if they aren't present while lowering. These additional parameters are added
for both the callee and the caller.
They are necessary to match callee and caller signatures for direct and
indirect function calls.

Differential Revision: https://reviews.llvm.org/D76049
2020-03-19 17:39:52 -07:00
Thomas Lively 34db3c3a18 [WebAssembly] SIMD integer abs instructions
Summary:
These were merged to the SIMD proposal in
https://github.com/WebAssembly/simd/pull/128.

Depends on D76397 to avoid merge conflicts.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76399
2020-03-19 17:25:58 -07:00
Thomas Lively a3f974f3c3 [WebAssembly] SIMD bitmask intrinsics and builtin functions
Summary:
These experimental new instructions are proposed in
https://github.com/WebAssembly/simd/pull/201.

Reviewers: aheejin

Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76397
2020-03-19 17:15:37 -07:00
Jessica Paquette c999084619 [GlobalISel] Port some basic shufflevector undef combines from the DAGCombiner
Port over the following:

- shuffle undef, undef, any_mask -> undef
- shuffle anything, anything, undef_mask -> undef

This sort of thing shows up a lot when you try to bugpoint code containing
shufflevector.

Differential Revision: https://reviews.llvm.org/D76382
2020-03-19 16:46:06 -07:00
Simon Pilgrim 95b6f62efb [InstSimplify] Add some vector shift tests to show lack of DemandedElts support 2020-03-19 22:09:51 +00:00
Stefan Agner f87563661d [MC][ARM] add implicit immediate form for ldrsbt/ldrht/ldrsht
Add pseudo instructions for ldrsbt/ldrht/ldrsht with implicit immediate
and add fall back C++ code to transform the instruction to the
equivalent LDRSBTi/LDRHTi/LDRSHTi form.

This is similar to how it has been done in commit
fb3950ec63

This fixes:
https://bugs.llvm.org/show_bug.cgi?id=45070
2020-03-19 22:36:42 +01:00
Kazu Hirata e23d786526 [JumpThreading] Fix infinite loop (PR44611)
Summary:
This patch fixes https://bugs.llvm.org/show_bug.cgi?id=44611 by
preventing an infinite loop in the jump threading pass when
-jump-threading-across-loop-headers is on.  Specifically, without this
patch, jump threading through two basic blocks would trigger on the
same area of the CFG over and over, resulting in an infinite loop.

Consider testcase PR44611-across-header-hang.ll in this patch.  The
first opportunity to thread through two basic blocks is:

  from bb_body2 through bb_header and bb_body1 to bb_body2.

The pass duplicates bb_header and bb_body1 as, say, bb_header.thread1
and bb_body1.thread1.  Since bb_header contains a successor edge back
to itself, bb_header.thread1 also contains a successor edge to
bb_header, immediately giving rise to the next jump threading
opportunity:

  from bb_header.thread1 through bb_header and bb_body1 to bb_body2.

After that, we repeatedly thread an incoming edge into bb_header
through bb_header and bb_body1 to bb_body2.  In other words, we keep
peeling one iteration from bb_header's self loop.

The patch fixes the problem by preventing the pass from duplicating a
basic block containing a self loop.

Reviewers: wmi, junparser, efriedma

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76390
2020-03-19 12:49:36 -07:00
Scott Linder 0e9368cc8c [AMDGPU] Move frame pointer from s34 to s33
Remove the gap left between the stack pointer (s32) and frame pointer
(s34) now that the scratch wave offset is no longer a part of the
calling convention ABI.

Update llvm/docs/AMDGPUUsage.rst to reflect the change.

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75657
2020-03-19 15:35:16 -04:00
Scott Linder 60b1967c39 [AMDGPU] Add Scratch Wave Offset to Scratch Buffer Descriptor in entry functions
Add the scratch wave offset to the scratch buffer descriptor (SRSrc) in
the entry function prologue. This allows us to remove the scratch wave
offset register from the calling convention ABI.

As part of this change, allow the use of an inline constant zero for the
SOffset of MUBUF instructions accessing the stack in entry functions
when a frame pointer is not requested/required. Entry functions with
calls still need to set up the calling convention ABI stack pointer
register, and reference it in order to address arguments of called
functions. The ABI stack pointer register remains unswizzled, but is now
wave-relative instead of queue-relative.

Non-entry functions also use an inline constant zero SOffset for
wave-relative scratch access, but continue to use the stack and frame
pointers as before. When the stack or frame pointer is converted to a
swizzled offset it is now scaled directly, as the scratch wave offset no
longer needs to be subtracted first.

Update llvm/docs/AMDGPUUsage.rst to reflect these changes to the calling
convention.

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D75138
2020-03-19 15:35:16 -04:00
Simon Pilgrim c2586cab89 [InstCombine][X86] Tests for variable but in-range vector-by-scalar shift amounts (PR40391)
These shifts are masked to be in range, so we should be able to replace them with generic shifts.
2020-03-19 19:24:55 +00:00
Cameron McInally 018dde4ce5 [AArch64][SVE] Add support for DestructiveBinaryImm DestructiveInstType
Support prefixing destructive operations, with the MOVPRFX instruction, to build constructive operations.

Differential Revision: https://reviews.llvm.org/D75064
2020-03-19 13:11:46 -05:00
Craig Topper c13aa36bb7 [X86] Attempt to more accurately model the cost of a bool reduction of wide vector type.
Previously we multiplied the cost for the table entries by the number of
splits needed. But that implies that each split goes through a reduction to
scalar independently. I think what really happens is that we AND/OR the split
pieces until we're down to a single value with a legal type, and then do the
special reduction sequence on that.

So to model that, this patch takes the number of splits minus one, multiplies
it by the cost of an AND/OR at the legal element count, and adds that on top
of the table lookup; i.e., Cost = TableCost + (NumSplits - 1) * Cost(AND/OR
at the legal type).

Differential Revision: https://reviews.llvm.org/D76400
2020-03-19 09:31:05 -07:00
Vedant Kumar 5e6e545cba [test] Re-enable accidentally disabled X86 tests
A number of X86 tests were accidentally disabled in
https://reviews.llvm.org/D73568. This commit re-enables those tests.

```
$ # 'gg' and 'fst' here are presumably the author's local grep/field-selection helpers
$ for x86_test in $(gg 'REQUIRES: x86$' llvm/test | fst); do sed -i "" '/REQUIRES: x86/d' $x86_test; done
```

(Note that 'x86' is not an available feature; that's what caused the
tests to be disabled.)
2020-03-19 09:29:23 -07:00
Sam Parker 27ef7c6bf0 [NFC][ARM] Fix for buildbots
Update broken test.
2020-03-19 15:50:13 +00:00
Simon Pilgrim 433897da4a [InstCombine][X86] simplifyX86immShift - convert variable in-range vector shift by immediate amounts to generic shifts (PR40391)
The slli/srli/srai 'immediate' vector shifts (although they are no longer immediates, to match gcc) can be replaced with generic shifts if the shift amount is known to be in range.
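
A hypothetical shape of the fold (names invented; the real patterns live in
the PR40391 tests): because %masked is provably below the element width, the
target intrinsic can be rewritten as a generic shl by a splat of %masked.

```
declare <4 x i32> @llvm.x86.sse2.pslli.d(<4 x i32>, i32)

define <4 x i32> @slli_inrange(<4 x i32> %v, i32 %amt) {
  %masked = and i32 %amt, 31   ; shift amount known to be in [0, 32)
  %r = call <4 x i32> @llvm.x86.sse2.pslli.d(<4 x i32> %v, i32 %masked)
  ret <4 x i32> %r
}
```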
2020-03-19 15:44:24 +00:00
Sam Parker d0fb6879c3 [NFC][ARM] Add two tests
Add tests for v8m indvar simplify.
2020-03-19 15:18:33 +00:00
Sean Fertile 06c810b155 [PowerPC][AIX] Simplify the check prefixes in the ByVal lit tests. [NFC] 2020-03-19 10:59:48 -04:00
Georgii Rymar fecce903db [obj2yaml][test] - Update test after output change.
D76227 changed the output. This test was forgotten because it
belonged to a different patch.
2020-03-19 17:42:36 +03:00
Georgii Rymar a02b38698b [obj2yaml] - SHT_DYNAMIC and SHT_REL* sections: stop dumping sh_entsize field when it has the default value.
Currently obj2yaml always emits the `EntSize` property when `sh_entsize != 0`.
This is not correct. For example, for a `SHT_DYNAMIC` section, `EntSize == 0`
is abnormal, while `sizeof(ELFT::Dyn)` is the expected default.

To reduce the output produced, we should not dump default values.

The yaml2obj tests that show the `sh_entsize` values produced are:
1) For `SHT_REL*` sections: `yaml2obj\ELF\reloc-sec-entry-size.yaml`
2) For `SHT_DYNAMIC`: `yaml2obj\ELF\dynamic-section.yaml`

Differential revision: https://reviews.llvm.org/D76227
2020-03-19 17:25:53 +03:00
Georgii Rymar 9c69cc109b [obj2yaml] - SHT_REL*, SHT_DYNAMIC sections: add tests to document the behavior when sh_entsize is broken.
We do not have tests that show the current behavior.
They are needed for D76227, which changes the logic of dumping `EntSize` fields.

Differential revision: https://reviews.llvm.org/D76282
2020-03-19 16:43:40 +03:00
Kamau Bridgeman accf06feb1 Test commit. 2020-03-19 08:34:48 -05:00
Piotr Sobczak 4d8a720277 [NFC] Simplify test
Remove extra preheader block as there is no value in keeping it.
2020-03-19 14:29:57 +01:00
Simon Pilgrim fb11455038 [InstCombine][X86] Tests for variable but in-range vector-by-scalar shift amounts (PR40391)
These shifts are masked to be in range, so we should be able to replace them with generic shifts.
2020-03-19 13:11:06 +00:00
Djordje Todorovic d9b9621009 Reland D73534: [DebugInfo] Enable the debug entry values feature by default
The issue that was causing the build failures was fixed by D76164.
2020-03-19 13:57:30 +01:00
Andrzej Warzynski 0ea4fb5bb7 [AArch64][SVE] Rename intrinsics for gather prefetch [NFC]
Summary:
In order to keep the names consistent with other SVE gather loads, the
intrinsics for gather prefetch are renamed as follows:
  * @llvm.aarch64.sve.gather.prfb -> @llvm.aarch64.sve.prfb.gather

Reviewed by: fpetrogalli

Differential Revision: https://reviews.llvm.org/D76421
2020-03-19 12:53:36 +00:00
Florian Hahn 4a58996dd2 [SCCP] Use constant ranges for PHI nodes.
For PHIs with multiple incoming values, we can improve precision by
using constant ranges for integers. We can over-approximate phis
by merging the incoming values.
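
A minimal made-up example of the extra precision: previously two different
incoming constants made the PHI overdefined, whereas merging them into the
range [0, 11) lets the compare fold.

```
define i1 @phi_range(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  br label %merge
b:
  br label %merge
merge:
  %p = phi i32 [ 0, %a ], [ 10, %b ]   ; merged range: [0, 11)
  %cmp = icmp slt i32 %p, 12           ; always true given the range
  ret i1 %cmp
}
```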

Reviewers: davide, efriedma, mssimpso

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D71933
2020-03-19 12:45:33 +00:00
Igor Kudrin b1c8a378f7 [llvm-dwp] Start error messages with a lowercase letter.
We usually start error messages with lowercase letters and most of them
in llvm-dwp follow that rule. This patch fixes a few messages that
started with capital letters.

Differential revision: https://reviews.llvm.org/D76277
2020-03-19 19:43:08 +07:00
Simon Pilgrim 0b458d4dca [ValueTracking] Add computeKnownBits DemandedElts support to ADD/SUB/MUL instructions (PR36319) 2020-03-19 12:41:29 +00:00
Simon Pilgrim 7ce7f78963 [InstSimplify] Add missing vector ADD+SUB tests to show lack of DemandedElts support 2020-03-19 11:27:27 +00:00
Simon Pilgrim d259e31a17 [InstSimplify] Add missing vector MUL tests to show lack of DemandedElts support 2020-03-19 11:27:27 +00:00
Georgii Rymar e26e9ba288 [obj2yaml] - Stop dumping an empty sh_info field for SHT_RELA/SHT_REL sections.
`.rela.dyn` is a dynamic relocation section that normally has
no value in the `sh_info` field.

The existing `elf-reladyn-section-shinfo.yaml`, which tests this piece, has issues:

1) It does not check the case where we have more than one `SHT_REL[A]`
   section with `sh_info == 0` in the object. Because of this, it did not
   catch the issue: currently we print an excessive "Info" field:

```
  - Name:            .rela.dyn
    Type:            SHT_RELA
    EntSize:         0x0000000000000018
  - Name:            .rel.dyn
    Type:            SHT_REL
    EntSize:         0x0000000000000010
    Info:            ' [1]'
```

2) It could be made more generic. I've added a `rel-rela-section.yaml` instead.

Differential revision: https://reviews.llvm.org/D76281
2020-03-19 14:00:21 +03:00
Simon Moll 733b319948 [VP,Integer,#1] Vector-predicated integer intrinsics
Summary:
This patch adds IR intrinsics for vector-predicated integer arithmetic.

It is subpatch #1 of the [integer
slice](https://reviews.llvm.org/D57504#1732277) of
[LLVM-VP](https://reviews.llvm.org/D57504).  LLVM-VP is a larger effort to bring
native vector predication to LLVM.
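
A sketch of what these intrinsics look like (see the LangRef changes in the
patch for the authoritative signatures and semantics): each operation takes a
mask and an explicit vector length %evl, and lanes that are masked off or at
an index >= %evl are not computed.

```
declare <8 x i32> @llvm.vp.add.v8i32(<8 x i32>, <8 x i32>, <8 x i1>, i32)

define <8 x i32> @vp_add(<8 x i32> %x, <8 x i32> %y, <8 x i1> %m, i32 %evl) {
  %r = call <8 x i32> @llvm.vp.add.v8i32(<8 x i32> %x, <8 x i32> %y,
                                         <8 x i1> %m, i32 %evl)
  ret <8 x i32> %r
}
```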

Reviewed By: andrew.w.kaylor

Differential Revision: https://reviews.llvm.org/D69891
2020-03-19 10:51:47 +01:00
Florian Hahn 8a36594a7e [SCCP] Use constant ranges for binary operators.
If one of the operands of a binary operator is a constant range, we can
use ConstantRange::binaryOp to approximate the result.

We still handle single element constant ranges as we did previously,
with ConstantExpr::get(), because ConstantRange::binaryOp still gives
worse results in a few cases for single element ranges.

Also note that we bail out early if any of the operands is still unknown.
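
A hypothetical illustration of the range arithmetic (assuming the operand
ranges are already known): [0, 4) + [0, 8) approximates to [0, 11), which is
enough to fold the compare below.

```
define i1 @range_add(i32 %a, i32 %b) {
  %x = and i32 %a, 3           ; range [0, 4)
  %y = and i32 %b, 7           ; range [0, 8)
  %s = add i32 %x, %y          ; binaryOp result: range [0, 11)
  %cmp = icmp ult i32 %s, 11
  ret i1 %cmp
}
```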

Reviewers: davide, efriedma, mssimpso

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D71936
2020-03-19 09:35:48 +00:00
Chen Zheng d8fcdcdf68 [Reassociate] add testcases for more than 1 pair - NFC 2020-03-19 05:21:24 -04:00
Chen Zheng 3f85134d71 [PowerPC] implement target hook isProfitableToHoist
On PowerPC, fma is faster than fadd + fmul for some types
(PPCTargetLowering::isFMAFasterThanFMulAndFAdd). We should implement the
target hook isProfitableToHoist to prevent the SimplifyCFG pass from breaking
the fma pattern by hoisting the fmul to the predecessor block.
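
A hedged sketch of the pattern being protected (invented example): if
SimplifyCFG hoists the two identical fmuls into the entry block, the single
merged fmul has multiple uses and each fmul+fadd pair can no longer be
cheaply fused into an fma.

```
define double @keep_fma(i1 %c, double %a, double %b, double %x, double %y) {
entry:
  br i1 %c, label %if.then, label %if.else
if.then:
  %m0 = fmul contract double %a, %b    ; fuses with the fadd below into fma
  %r0 = fadd contract double %m0, %x
  br label %exit
if.else:
  %m1 = fmul contract double %a, %b    ; identical fmul: a hoisting candidate
  %r1 = fadd contract double %m1, %y
  br label %exit
exit:
  %r = phi double [ %r0, %if.then ], [ %r1, %if.else ]
  ret double %r
}
```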

Reviewed By: nemanjai

Differential Revision: https://reviews.llvm.org/D76207
2020-03-19 00:17:25 -04:00
Huihui Zhang 2ea5495759 [InstCombine][SVE] Fix InstCombiner::visitAllocaInst for scalable vector.
Summary:
DataLayout::getTypeAllocSize() returns TypeSize. For cases where the scalable
property doesn't matter (the check for a zero-sized alloca), we should
explicitly call getKnownMinSize() to avoid the implicit conversion to
uint64_t, which is invalid for scalable vector types.

Reviewers: sdesmalen, efriedma, spatel, apazos

Reviewed By: efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76386
2020-03-18 20:57:14 -07:00
Simon Pilgrim 99336bf95a [ValueTracking] Add computeKnownBits DemandedElts support to masked add instructions (PR36319) 2020-03-18 21:50:56 +00:00
Simon Pilgrim 49bdfd888d [InstSimplify] Add missing vector masked add tests to show lack of DemandedElts support 2020-03-18 21:04:54 +00:00
Sanjay Patel 22c66c1a28 [JumpThreading] add a miscompile test based on discussion in D76332; NFC 2020-03-18 16:46:18 -04:00
Craig Topper 498b53890d [SelectionDAGBuilder][FPEnv] Take into account SelectionDAG continuous CSE when setting the nofpexcept flag for constrained intrinsics
SelectionDAG CSEs nodes based on their result type and operands, but not
their flags. The flags are expected to be intersected when nodes are CSEd. In
SelectionDAGBuilder, for FP nodes we manage both the fast math flags and the
nofpexcept flag after the nodes have already been CSEd when they were created
with getNode. Managing the fast math flags before the constrained nodes are
handled prevents the nofpexcept management from working correctly.

This commit moves the FMF handling for constrained intrinsics into their visitor and disables the common FMF handling for these nodes.
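
For context, a minimal constrained intrinsic in IR (a generic example, not
taken from the patch): whether the resulting DAG node may carry the
nofpexcept flag depends on the fpexcept metadata, and that is the flag whose
handling this commit fixes.

```
declare double @llvm.experimental.constrained.fadd.f64(double, double,
                                                       metadata, metadata)

define double @strict_add(double %a, double %b) strictfp {
  %r = call double @llvm.experimental.constrained.fadd.f64(double %a, double %b,
           metadata !"round.dynamic", metadata !"fpexcept.strict") strictfp
  ret double %r
}
```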

Differential Revision: https://reviews.llvm.org/D75224
2020-03-18 13:37:17 -07:00
Simon Pilgrim 9d40292a64 [ValueTracking] Add computeKnownBits DemandedElts support to XOR instructions (PR36319) 2020-03-18 20:24:14 +00:00
Simon Pilgrim 47ce1406c8 [InstSimplify] Add missing vector OR test to show lack of DemandedElts support 2020-03-18 20:24:14 +00:00
Simon Pilgrim 6bdb0efa42 [InstSimplify] Regenerate OR tests 2020-03-18 20:24:13 +00:00
Eli Friedman ebec984e14 [AliasAnalysis] Misc fixes for checking aliasing with scalable types.
This is fixing up various places that use the implicit
TypeSize->uint64_t conversion.

The new overloads in MemoryLocation.h are already used in various places
that construct a MemoryLocation from a TypeSize, including MemorySSA.
(They were using the implicit conversion before.)

Differential Revision: https://reviews.llvm.org/D76249
2020-03-18 12:28:47 -07:00
Simon Pilgrim 1010c44b4c [ValueTracking] Add computeKnownBits DemandedElts support to EXTRACTELEMENT/OR/BSWAP/BITREVERSE instructions (PR36319)
These are all covered by the bswap/bitreverse vector tests.
2020-03-18 18:49:58 +00:00
Nemanja Ivanovic e009fad342 [PowerPC] Remove UB from PPCInstrInfo when handling rotates fed by constants
As pointed out in https://bugs.llvm.org/show_bug.cgi?id=45232, this code can
end up shifting a 64-bit unsigned value left by 64 bits. Although this works
as expected on some platforms, it is definitely UB. This patch removes the UB
and adds the associated test case.

Fixes: https://bugs.llvm.org/show_bug.cgi?id=45232
2020-03-18 13:40:39 -05:00
Simon Pilgrim 9c6458ecf8 [InstSimplify] Add bitreverse/bswap vector tests
Shows missing DemandedElts support (PR36319)
2020-03-18 18:17:10 +00:00
Jessica Paquette dc5f982639 [GlobalISel] Port some basic undef combines from DAGCombiner.cpp
This ports some combines from DAGCombiner.cpp which perform some trivial
transformations on instructions with undef operands.

Not having these can make it extremely annoying to find out where we differ
from SelectionDAG by looking at existing lit tests. Without them, we tend to
produce pretty bad code when we run into instructions which use undef
operands.

Also remove the nonpow2_store_narrowing testcase from arm64-fallback.ll, since
we no longer fall back on the add.

Differential Revision: https://reviews.llvm.org/D76339
2020-03-18 11:05:44 -07:00
Jin Lin 0d896278c8 Support repeated machine outlining
Summary: The following change allows machine outlining to be applied N times,
where N is specified by a compiler option; by default, N is 1. The motivation
is that repeated machine outlining can further reduce code size. Please refer
to the presentation "Improving Swift Binary Size via Link Time Optimization"
from the 2019 LLVM Developers' Meeting.

Reviewers: aschwaighofer, tellenbach, paquette

Reviewed By: paquette

Subscribers: tellenbach, hiraditya, llvm-commits, jinlin

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71027
2020-03-18 10:48:52 -07:00
Simon Tatham e13d153c1b [ARM,MVE] Add intrinsics for the VQDMLAD family.
Summary:
This is another set of instructions too complicated to be sensibly
expressed in IR by anything short of a target-specific intrinsic.
Given input vectors a,b, the instruction generates intermediate values
2*(a[0]*b[0]+a[1]*b[1]), 2*(a[2]*b[2]+a[3]*b[3]), etc; takes the high
half of each double-width value, and overwrites half the lanes in the
output vector c, whose input value you therefore have to provide.
Optionally you can swap the elements of b so that they are things
like a[0]*b[1]+a[1]*b[0]; optionally you can round to nearest when
taking the high half; and optionally you can take the difference
rather than the sum of the two products. Finally, saturation is applied
when converting back to a single-width vector lane.

Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard

Reviewed By: miyuki

Subscribers: kristof.beyls, hiraditya, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76359
2020-03-18 17:11:22 +00:00
Sam Parker fc2a5ef9c8 [NFC][PowerPC] Update test
Run the update script on one of the loop unroll tests.
2020-03-18 16:21:37 +00:00
Matt Arsenault 4ea1baf6a0 AMDGPU: Initial, crude support for indirect calls
This isn't really usable, and requires using the
-amdgpu-fixed-function-abi flag to work.

Assumes a uniform call target, and will hit a verifier error if the
call target ends up in a VGPR. Also doesn't attempt to do anything
sensible for the reported register/stack usage.
2020-03-18 12:03:48 -04:00
Matt Arsenault ea4597eef1 Reapply "AMDGPU/GlobalISel: Fully handle 0 dmask case during legalize"
This reverts commit 9bca8fc4cf.

Rearrange handling to avoid changing the instruction in the case where
it's going to be erased and replaced with undef.
2020-03-18 12:01:22 -04:00
Piotr Sobczak d1a7bfca74 [AMDGPU] Fix AMDGPUUnifyDivergentExitNodes
Summary:
For the case where "done" bits on existing exports are removed
by unifyReturnBlockSet(), unify all return blocks - even the
uniformly reached ones. We do not want to end up with a non-unified,
uniformly reached block containing a normal export with the "done"
bit cleared.

That case is believed to be rare - possible with infinite loops
in pixel shaders.

This is a fix for D71192.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76364
2020-03-18 16:49:30 +01:00
Simon Pilgrim 06150e8356 [ValueTracking] Add computeKnownBits DemandedElts support to AND instructions (PR36319) 2020-03-18 15:38:15 +00:00
Sander de Smalen ef64ba8311 [InstCombine] GEPOperator::accumulateConstantOffset does not support scalable vectors
Avoid transforming:

 %0 = bitcast i8* %base to <vscale x 16 x i8>*
 %1 = getelementptr <vscale x 16 x i8>, <vscale x 16 x i8>* %0, i64 1

into:

 %0 = getelementptr i8, i8* %base, i64 16
 %1 = bitcast i8* %0 to <vscale x 16 x i8>*

since the byte offset of one <vscale x 16 x i8> element is 16*vscale, which
is not the compile-time constant 16.

Reviewers: efriedma, ctetreau

Reviewed By: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76236
2020-03-18 14:58:46 +00:00