Summary: This exposes `CallInst`'s tail call kind via new `LLVMGetTailCallKind` and `LLVMSetTailCallKind` functions. The motivation for this is to be able to see `musttail` for languages that require mandatory tail calls for correctness. Today only the weaker `LLVMSetTail` is exposed and there is no way to set `GuaranteedTailCallOpt` via the C API.
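As a rough illustration, here is how a client might request a mandatory tail call through the new API. This is a sketch, not code from the patch; it assumes the enum spellings follow the usual LLVM-C pattern (check llvm-c/Core.h for the exact names):
```
// Hedged sketch: mark a call as 'musttail' via the C API.
// Assumes LLVMTailCallKind mirrors the IR kinds None/Tail/MustTail/NoTail.
#include <llvm-c/Core.h>

void makeMandatoryTailCall(LLVMValueRef Call) {
  if (LLVMGetTailCallKind(Call) != LLVMTailCallKindMustTail)
    LLVMSetTailCallKind(Call, LLVMTailCallKindMustTail);
}
```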
Reviewers: CodaFi, jyknight, deadalnix, rnk
Reviewed By: CodaFi
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66061
llvm-svn: 368945
Summary:
Instead of constantly keeping track of the nonnull status alongside the
dereferenceable information, we can simply query the nonnull attribute
whenever we need the information (debug + manifest).
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66113
llvm-svn: 368924
Implement this single atomic load instruction so that we can compile stack
protector code.
Differential Revision: https://reviews.llvm.org/D66245
llvm-svn: 368923
Summary:
As one of the first attributes, and one of the more complex ones,
AAReturnedValues was not using liveness; instead we filtered the result
after the fact. This change uses liveness during the creation of the
result. The algorithm is also improved and shorter.
The new algorithm will collect returned values over time using the
generic facilities that work with liveness already, e.g.,
genericValueTraversal which does not look at dead PHI node predecessors.
A test to show how this leads to better results is included.
Note: Unresolved calls and resolved calls are now tracked explicitly.
Reviewers: uenoku, sstefan1
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66120
llvm-svn: 368922
Summary:
If the associated context instruction is assumed dead we do not need to
update or manifest the state.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66116
llvm-svn: 368921
Summary:
The next attempt to clean up the Attributor interface before we grow it
further.
Before, we used a combination of two values (associated + anchor) and an
argument number (or -1) to determine a location. This was very fragile.
The new system uses exclusively IR positions and we restrict the
generation of IR positions to special constructor methods that verify
internal constraints we have. This will catch misuse early.
The auto-conversion, e.g., in getAAFor, is now performed through the
SubsumingPositionIterator. This iterator takes an IR position and allows
visiting all IR positions that "subsume" the given one, e.g., function
attributes "subsume" argument attributes of that function. For a
detailed breakdown see the class comment of SubsumingPositionIterator.
This patch also introduces IRPosition::getAttrs() to extract IR
attributes at a certain position. The method knows how to look up in
different positions that are equivalent, e.g., the argument position for
call site arguments. We also introduce three new position kinds such
that we have all IR positions where attributes can be placed and one for
"floating" values.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65977
llvm-svn: 368919
We already supported rewriting loop exit values for multiple exit loops, but if any of the loop exits were not computable, we gave up on all loop exit values. This patch generalizes the existing code to handle individual computable loop exits where possible.
As discussed in the review, this is a starting point for figuring out a better API. The code is a bit ugly, but getting it in lets us test as we go.
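To illustrate the idea with a hypothetical example (not from the patch): in the function below, the counted exit has the computable exit value `I == N`, while the data-dependent `break` exit does not; with this change the computable exit can still be rewritten.
```
// Hypothetical two-exit loop. The counted exit produces a computable
// exit value for I; the early break does not, but no longer blocks it.
int firstNegativeIndex(const int *A, int N) {
  int I = 0;
  for (; I != N; ++I)
    if (A[I] < 0)
      break; // non-computable exit
  return I;  // loop-exit value of I
}
```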
Differential Revision: https://reviews.llvm.org/D65544
llvm-svn: 368898
I'm planning on handling intrinsics that will benefit from checking
the address space enums. Don't bother moving the address collection
for now, since those won't need the enums.
llvm-svn: 368895
Summary: There are places where the case of a debug label scope having an extra lexical block file was not handled properly. The modified test won't pass without this patch.
Reviewers: aprantl, HsiangKai
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66187
llvm-svn: 368891
After switching over LLDB's line table parser to libDebugInfo, we
noticed two regressions on the Windows bot. The problem is that when
obtaining a file from the line table prologue, we append paths without
specifying a path style. This leads to incorrect results on Windows for
debug info containing Posix paths:
0x0000000000201000: /tmp\b.c, is_start_of_statement = TRUE
This patch is an attempt to fix that by guessing the path style whenever
possible.
Differential revision: https://reviews.llvm.org/D66227
llvm-svn: 368879
Summary:
We can't speculate around indirect branches: indirectbr and invoke. The
callbr instruction needs to be included here.
Reviewers: nickdesaulniers, manojgupta, chandlerc
Reviewed By: chandlerc
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66200
llvm-svn: 368873
Summary:
Fixes PR42973. Tests don't change because simd-arith.ll tests behavior
on unimplemented-simd128, which does not include any temporary
workarounds such as the one removed in this revision.
Reviewers: aheejin
Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, dmgreen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66166
llvm-svn: 368868
Now that we legalize by widening, the element types here won't change. Previously these were modeled as the elements being widened and then the instruction might become an AND or SHL/ASHR pair. But now they'll become something like a ZERO_EXTEND_VECTOR_INREG/SIGN_EXTEND_VECTOR_INREG.
For AVX2, when the destination type is legal it's clear the cost should be 1, since we have extend instructions that can produce 256-bit vectors from less-than-128-bit vectors. I'm less sure about the AVX1 costs, but I think the ones I changed were definitely too high; they might still be too high.
Differential Revision: https://reviews.llvm.org/D66169
llvm-svn: 368858
This reverts commit r368849, because it breaks some bots (e.g.
llvm-clang-x86_64-win-fast).
It turns out this is not as NFC as we had hoped, because operator== will
consider two std::error_codes to be distinct even though they both hold
"success" values if they have different categories.
llvm-svn: 368854
Summary:
The main motivation for this is unit tests, which contain a large macro
for pretty-printing std::error_code, and this macro is duplicated in
every file that needs to do this. However, the functionality may be
useful elsewhere too.
In this patch I have reimplemented the existing ASSERT_NO_ERROR macros
to reuse the new functionality, but I have kept the macro (as a
one-liner) as it is slightly more readable than ASSERT_EQ(...,
std::error_code()).
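For flavour, a minimal sketch of such a macro (the in-tree version differs in details):
```
// Hedged sketch: assert success and pretty-print the code on failure.
#include <gtest/gtest.h>
#include <system_error>

#define ASSERT_NO_ERROR(x)                                                   \
  do {                                                                       \
    std::error_code EC = (x);                                                \
    ASSERT_FALSE(EC) << EC.category().name() << ':' << EC.value() << " - "   \
                     << EC.message();                                        \
  } while (false)
```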
Reviewers: sammccall, ilya-biryukov
Subscribers: zturner, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65643
llvm-svn: 368849
MCP currently uses changeDebugValuesDefReg / collectDebugValues to find
debug users of a register; however, those functions assume that all
DBG_VALUEs immediately follow the specified instruction, which isn't
necessarily true. This will very often be untrue once we turn
off CodeGenPrepare::placeDbgValues.
Instead of calling changeDebugValuesDefReg on an instruction to change its
debug users, in this patch we collect DBG_VALUEs of copies as we
iterate over insns, and update the debug users of copies that are made
dead. This isn't a non-functional change, because MCP will now update
DBG_VALUEs that aren't immediately after a copy, but refer to the same
register. I've hijacked the regression test for PR38773 to test for this
new behaviour, as an entirely new test seemed overkill.
Differential Revision: https://reviews.llvm.org/D56265
llvm-svn: 368835
Changes: no changes. A fix for the clang code will be landed right on top.
Original commit message:
SectionRef::getName() returns std::error_code now.
Returning Expected<> instead has multiple benefits.
For example, it forces the user to check the error returned.
Also, Expected<> may carry a valuable string error message,
which is more useful than an error code.
(Object\invalid.test was updated to show the new messages printed.)
This patch makes a change for all users to switch to the Expected<> version.
Note: in a few places the error returned was ignored before my changes.
In such places I left them ignored. My intention was to convert the interface
used, and not to improve and/or fix the existing users in this patch.
(Though I think it is a good idea for follow-ups to revisit such places
and either remove consumeError calls or comment each of them to clarify why
it is OK to have them.)
Differential revision: https://reviews.llvm.org/D66089
llvm-svn: 368826
In MCAsmStreamer:
.type foo,@function # <--- this is redundant
.type foo,@gnu_indirect_function
In MCELFStreamer, the latter STT_GNU_IFUNC overrides STT_FUNC.
llvm-svn: 368823
SectionRef::getName() returns std::error_code now.
Returning Expected<> instead has multiple benefits.
For example, it forces the user to check the error returned.
Also, Expected<> may carry a valuable string error message,
which is more useful than an error code.
(Object\invalid.test was updated to show the new messages printed.)
This patch makes a change for all users to switch to the Expected<> version.
Note: in a few places the error returned was ignored before my changes.
In such places I left them ignored. My intention was to convert the interface
used, and not to improve and/or fix the existing users in this patch.
(Though I think it is a good idea for follow-ups to revisit such places
and either remove consumeError calls or comment each of them to clarify why
it is OK to have them.)
Differential revision: https://reviews.llvm.org/D66089
llvm-svn: 368812
This is the compiler-flag equivalent of the Predicate pragma
(https://reviews.llvm.org/D65197), to direct the vectorizer to fold the
remainder-loop into the main-loop using predication.
Differential Revision: https://reviews.llvm.org/D66108
Reviewers: Ayal, hsaito, fhahn, SjoerdMeije
llvm-svn: 368801
The support for swifterror allocas should work in all lowerings.
The support for swifterror arguments only really works in a lowering
with prototypes where you can ensure that the prototype also has a
swifterror argument; I'm not really sure how it could possibly be
made to work in the switch lowering.
llvm-svn: 368795
A quick contrast of this ABI with the currently-implemented ABI:
- Allocation is implicitly managed by the lowering passes, which is
acceptable for frontends that can assume allocation cannot fail.
This assumption is necessary to implement dynamic allocas anyway.
- The lowering attempts to fit the coroutine frame into an opaque,
statically-sized buffer before falling back on allocation; the same
buffer must be provided to every resume point. A buffer must be at
least pointer-sized.
- The resume and destroy functions have been combined; the continuation
function takes a parameter indicating whether it has succeeded.
- Conversely, every suspend point begins its own continuation function.
- The continuation function pointer is directly returned to the caller
instead of being stored in the frame. The continuation can therefore
directly destroy the frame when exiting the coroutine instead of having
to leave it in a defunct state.
- Other values can be returned directly to the caller instead of going
through a promise allocation. The frontend provides a "prototype"
function declaration from which the type, calling convention, and
attributes of the continuation functions are taken.
- On the caller side, the frontend can generate natural IR that directly
uses the continuation functions as long as it prevents IPO with the
coroutine until lowering has happened. In combination with the point
above, the frontend is almost totally in charge of the ABI of the
coroutine.
- Unique-yield coroutines are given some special treatment.
llvm-svn: 368788
This patch implements global address lowering for 32/64 bit with small/large code models.
1. For the 32-bit large code model on AIX, there are newly added pseudo opcodes LWZtocL & ADDIStocHA32; support for them on the MC layer will be provided by future patches.
2. The default code model on AIX should be the small code model.
3. Since AIX does not have a medium code model, "report_fatal_error" when users specify it.
Differential Revision: https://reviews.llvm.org/D63547
llvm-svn: 368744
That change (r363670) could leave a copy from vgpr to sgpr. Fixed.
Differential Revision: https://reviews.llvm.org/D66133
Change-Id: I00c3fe6fda2e8e1e36f53195b881b1449c777ea4
llvm-svn: 368736
The MVE architecture has the idea of "beats", where a vector instruction can be
executed over several ticks of the architecture. This adds a similar system
into the Arm backend cost model, multiplying the cost of all vector
instructions by a factor.
This factor essentially becomes the expected difference between scalar code
and vector code, on average. MVE vector instructions can also overlap, so
their true cost is often lower. But equally, scalar instructions can in some
situations be dual-issued, or benefit from other optimisations such as
unrolling or the use of DSP instructions. The default is chosen as 2. This
should not prevent vectorisation in most cases (as the vector instructions
will still be doing at least 4 times the work), but it will help prevent
over-vectorising in cases where the benefits are less likely.
This adds things so far to the obvious places in ARMTargetTransformInfo, and
updates a few related costs like not treating float instructions as cost 2 just
because they are floats.
Differential Revision: https://reviews.llvm.org/D66005
llvm-svn: 368733
Summary: Fix "llvm-profdata show" so it can work with compact binary format profile. The change is to mark all functions "used" so SampleProfileReaderCompactBinary::read will read in all profiles available for dumping. The function names will be MD5 hash for compact binary format.
Reviewers: wmi, davidxl, danielcdh
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65162
llvm-svn: 368731
Summary:
This is a tweak to r368311 and r368646 which auto upgrades the calls to
objc runtime functions to objc runtime intrinsics, in order to make sure
that the auto upgrader does not trigger with up-to-date bitcode.
It is possible for up-to-date bitcode to contain direct calls to
objc runtime functions that were not inserted by the compiler as part of
ARC; those should not be upgraded. Now the auto upgrader only triggers
when the old style of ARC marker is used, so it is guaranteed that it
won't trigger on up-to-date bitcode.
This also means it won't do this upgrade for bitcode from llvm-8 and
llvm-9, which preserves the behavior of those releases. Ideally they
should be upgraded as well, but it is more important to make sure the
AutoUpgrader will not trigger on up-to-date bitcode.
Reviewers: ahatanak, rjmccall, dexonsmith, pete
Reviewed By: dexonsmith
Subscribers: hiraditya, jkorous, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66153
llvm-svn: 368730
Summary:
While D65962 is pending for review, I landed D65475 that added one more
use of `unsigned`. Changed it to `Register`.
Reviewers: dsanders
Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66064
llvm-svn: 368727
Use isGuaranteedToTransferExecutionToSuccessor() instead of
isSafeToSpeculativelyExecute() when seeing whether we can propagate
the information in an assume backwards in isValidAssumeForContext().
The latter is more general - it also allows arbitrary loads/stores -
and is also the condition we want: if our assume is guaranteed to
execute, its condition not holding would be UB.
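A hypothetical C example of what this enables (using clang's __builtin_assume):
```
// The load transfers execution to its successor, so the assume is
// guaranteed to execute whenever f is entered; its condition may
// therefore be used before the assume as well.
int f(int X, const int *P) {
  int V = *P;              // load: guaranteed to transfer execution
  __builtin_assume(X > 0); // if f runs to here, X > 0 must hold
  return V / X;            // X != 0 is known here, and even at the load
}
```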
Original patch by arielb1.
Differential Revision: https://reviews.llvm.org/D37215
llvm-svn: 368723
Trying again with the code changes (and not just the new test).
Summary:
This patch fixes the offsets of fields in the stack frame linkage save
area for AIX.
Reviewers: sfertile, hubert.reinterpretcast, jasonliu, Xiangling_L, xingxue, ZarkoCA, daltenty
Reviewed By: hubert.reinterpretcast
Subscribers: wuzish, nemanjai, hiraditya, kbarton, MaskRay, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64424
Patch by Chris Bowler!
llvm-svn: 368721
Addresses post-commit comments on https://reviews.llvm.org/D64825. Use
assert instead of llvm_unreachable to check if invalid csect types are being
generated. Use report_fatal_error on unimplemented XCOFF features.
Differential Revision: https://reviews.llvm.org/D64825
llvm-svn: 368720
An incorrect verification error revealed that the list of type tags was
incomplete. This patch adds the missing types by adding a tag kind to
the Dwarf.def file, which is used by the `isType` function.
A test was added for the original verification error.
Differential revision: https://reviews.llvm.org/D65914
llvm-svn: 368718
This patch replaces the JITDylib::DefinitionGenerator typedef with a class of
the same name, and adds support for attaching a sequence of DefinitionGenerator
objects to a JITDylib.
This patch also adds a new definition generator,
StaticLibraryDefinitionGenerator, that can be used to add symbols from a static
library to a JITDylib. An object from the static library will be added (via
a supplied ObjectLayer reference) whenever a symbol from that object is
referenced.
To enable testing, lli is updated to add support for the --extra-archive option
when running in -jit-kind=orc-lazy mode.
llvm-svn: 368707
Currently shufflemasks get emitted like any other constant, and you end
up with a bunch of virtual registers of G_CONSTANT feeding a
G_BUILD_VECTOR. The AArch64 selector then asserts on anything that
doesn't fit this pattern. This isn't an ideal representation; the new
one should avoid legalization and have fewer opportunities for a
representational error.
Rather than invent a new shuffle mask operand type, similar to what
ShuffleVectorSDNode does, just track the original IR Constant mask
operand. I don't completely like the idea of adding another link to
the IR, but MIR is already quite dependent on IR constants, and this
will allow sharing the shuffle mask utility functions with the IR.
llvm-svn: 368704
Summary:
This implements an optimization described in Hacker's Delight 10-17:
when `C` is constant, the result of `X % C == 0` can be computed
more cheaply without actually calculating the remainder.
The motivation is discussed here: https://bugs.llvm.org/show_bug.cgi?id=35479.
One huge caveat: this signed case is only valid for positive divisors.
While we can freely negate negative divisors, we can't negate `INT_MIN`,
so for now if `INT_MIN` is encountered, we bailout.
As a follow-up, it should be possible to handle that more gracefully
via extra `and`+`setcc`+`select`.
This passes llvm's test-suite, and from cursory(!) cross-examination
the folds (the assembly) match those of GCC; manual checking via alive
did not reveal any issues (other than the `INT_MIN` case).
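For intuition, here is the unsigned odd-divisor variant of the trick in plain C++; this is a sketch of the underlying idea, not the code in this patch (the signed case needs extra adjustments):
```
// Hacker's Delight 10-17: X % C == 0 iff X * C^-1 (mod 2^32) is small.
#include <cassert>
#include <cstdint>

bool isMultipleOf(uint32_t X, uint32_t C) {
  assert(C % 2 == 1 && "sketch covers odd divisors only");
  uint32_t Inv = C; // Newton's method: each step doubles the correct bits.
  for (int I = 0; I < 4; ++I)
    Inv *= 2 - C * Inv;
  // Multiples of C map exactly onto [0, floor((2^32-1)/C)] under X * Inv.
  return X * Inv <= UINT32_MAX / C;
}
```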
Reviewers: RKSimon, spatel, hermord, craig.topper, xbolva00
Reviewed By: RKSimon, xbolva00
Subscribers: xbolva00, thakis, javed.absar, hiraditya, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65366
llvm-svn: 368702
The comment initially matched the code, but the code was incorrect
and was fixed after the initial revert, back when it was introduced;
the comment was never updated.
llvm-svn: 368701
Summary:
Given a pattern like:
```
%old_cmp1 = icmp slt i32 %x, C2
%old_replacement = select i1 %old_cmp1, i32 %target_low, i32 %target_high
%old_x_offseted = add i32 %x, C1
%old_cmp0 = icmp ult i32 %old_x_offseted, C0
%r = select i1 %old_cmp0, i32 %x, i32 %old_replacement
```
it can be rewritten as more canonical pattern:
```
%new_cmp1 = icmp slt i32 %x, -C1
%new_cmp2 = icmp sge i32 %x, C0-C1
%new_clamped_low = select i1 %new_cmp1, i32 %target_low, i32 %x
%r = select i1 %new_cmp2, i32 %target_high, i32 %new_clamped_low
```
Iff `-C1 s<= C2 s<= C0-C1`
Also, `ULT` predicate can also be `UGE`; or `UGT` iff `C0 != -1` (+invert result)
Also, `SLT` predicate can also be `SGE`; or `SGT` iff `C2 != INT_MAX` (+invert result)
If `C1 == 0`, then all 3 instructions must be one-use; else at most either `%old_cmp1` or `%old_x_offseted` can have extra uses.
NOTE: if we could reuse `%old_cmp1` as one of the comparisons we'll have to build, this could be less limiting.
So there are two icmp's, each one with 3 predicate variants, so there are 9 fold variants:
| | ULT | UGE | UGT |
| SLT | https://rise4fun.com/Alive/yIJ | https://rise4fun.com/Alive/5BfN | https://rise4fun.com/Alive/INH |
| SGE | https://rise4fun.com/Alive/hd8 | https://rise4fun.com/Alive/Abk | https://rise4fun.com/Alive/PlzS |
| SGT | https://rise4fun.com/Alive/VYG | https://rise4fun.com/Alive/oMY | https://rise4fun.com/Alive/KrzC |
{F9730206}
This fold was brought up in https://reviews.llvm.org/D65148#1603922 by @dmgreen, and is needed to unblock that patch.
This patch requires D65530.
Reviewers: spatel, nikic, xbolva00, dmgreen
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits, dmgreen
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65765
llvm-svn: 368687
Summary:
This is rather unconventional..
As the comment there says, we don't have much folds for xor-of-icmps,
we try to turn them into an and-of-icmps, for which we have plenty of folds.
But if the ICmp we need to invert is not single-use - we give up.
As discussed in https://reviews.llvm.org/D65148#1603922,
we may have a non-canonical CLAMP pattern, with bit math and
select-of-threshold that we'll potentially clamp.
As it can be seen in `canonicalize-clamp-with-select-of-constant-threshold-pattern.ll`,
out of all 8 variations of the pattern, only two are **not** canonicalized into
the variant with and+icmp instead of bit math.
The reason is because the ICmp we need to invert is not single-use - we give up.
We indeed can't perform this fold at will; the general rule is that
we should not increase the instruction count in InstCombine.
But we wouldn't end up increasing the instruction count if we can adapt every
other user to the inverted value. This way the `not` we create **will** get
folded, and in the end the instruction count does not increase.
For that, of course, we need to look at the users of a Value,
which is again rather unconventional for InstCombine :S
Thus i'm proposing to be a little bit more insistent in `foldXorOfICmps()`.
The alternatives would be to not create that `not`, but add duplicate code to
manually invert all users; or to add some even less general combine to handle
some more specific pattern[s].
Reviewers: spatel, nikic, RKSimon, craig.topper
Reviewed By: spatel
Subscribers: hiraditya, jdoerfert, dmgreen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65530
llvm-svn: 368685
If the target shuffle mask is from a wider type, attempt to scale the mask so that the extraction can attempt to peek through.
Fixes the regression mentioned in rL368662
Reapplying this as rL368308 had to be reverted as part of rL368660 to revert rL368276
llvm-svn: 368663
If we don't demand all elements, then attempt to combine to a simpler shuffle.
At the moment we can only do this if Depth == 0 as combineX86ShufflesRecursively uses Depth to track whether the shuffle has really changed or not - we'll need to change this before we can properly start merging combineX86ShufflesRecursively into SimplifyDemandedVectorElts.
The insertps-combine.ll regression is because XFormVExtractWithShuffleIntoLoad can't see through shuffles of different widths - this will be fixed in a follow-up commit.
Reapplying this as rL368307 had to be reverted as part of rL368660 to revert rL368276
llvm-svn: 368662
This introduced a false positive MemorySanitizer warning about use of
uninitialized memory in a vectorized crc function in Chromium. That suggests
maybe something is not right with this transformation. See
https://crbug.com/992853#c7 for a reproducer.
This also reverts the follow-up commits r368307 and r368308 which
depended on this.
> This patch attempts to peek through vectors based on the demanded bits/elt of a particular ISD::EXTRACT_VECTOR_ELT node, allowing us to avoid dependencies on ops that have no impact on the extract.
>
> In particular this helps remove some unnecessary scalar->vector->scalar patterns.
>
> The wasm shift patterns are annoying - @tlively has indicated that the wasm vector shift codegen are to be refactored in the near-term and isn't considered a major issue.
>
> Differential Revision: https://reviews.llvm.org/D65887
llvm-svn: 368660
The legalizer would hit an assertion on the PowerPC platform when truncating
a vector whose size is not a power of 2. This patch adds a check to
prevent vectors with such odd-sized elements from being custom lowered.
Reviewed By: Hal Finkel
Differential Revision: https://reviews.llvm.org/D65261
llvm-svn: 368654
Currently we can't keep any state in the selector object that we get from
the subtarget. As a result we have to plumb all our variables through
multiple functions. This change makes it non-const and adds a virtual init()
method to allow further state to be captured for each target.
AArch64 makes use of this in this patch to cache a call to hasFnAttribute()
which is expensive to call, and is used on each selection of G_BRCOND.
Differential Revision: https://reviews.llvm.org/D65984
llvm-svn: 368652
to intrinsic calls
This fixes a bug in r368311.
It turns out that the ARC runtime functions in the IR can have pointer
parameter types that are not i8* or i8**. Instead of RAUWing normal
functions with intrinsics, manually bitcast the arguments before passing
them to the intrinsic functions and bitcast the return value back to the
type of the original call instruction.
This recommits r368634, which was reverted in r368637. The loop in the
patch was iterating over uses of a function and deleting function calls
inside it, which caused bots to crash.
rdar://problem/54125406
Differential Revision: https://reviews.llvm.org/D66047
llvm-svn: 368646
Summary:
This was mostly an experiment to assess the feasibility of completely
eliminating a problematic implicit conversion case in D61321 in advance of
landing that* but it also happens to align with the goal of propagating the
use of Register/MCRegister instead of unsigned so I believe it makes sense
to commit it.
The overall process for eliminating the implicit conversions from
Register/MCRegister -> unsigned was to:
1. Add an explicit conversion to support genuinely required conversions to
unsigned. For example, using them as an index for IndexedMap. Sadly it's
not possible to have an explicit and implicit conversion to the same
type and only deprecate the implicit one so I called the explicit
conversion get().
2. Temporarily annotate the implicit conversion to unsigned with
LLVM_ATTRIBUTE_DEPRECATED to make them visible
3. Eliminate implicit conversions by propagating Register/MCRegister/
explicit-conversions appropriately
4. Remove the deprecation added in 2.
* My conclusion is that it isn't feasible as there's too much code to
update in one go.
Depends on D65678
Reviewers: arsenm
Subscribers: MatzeB, wdng, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65685
llvm-svn: 368643
to intrinsic calls
This fixes a bug in r368311.
It turns out that the ARC runtime functions in the IR can have pointer
parameter types that are not i8* or i8**. Instead of RAUWing normal
functions with intrinsics, manually bitcast the arguments before passing
them to the intrinsic functions and bitcast the return value back to the
type of the original call instruction.
rdar://problem/54125406
llvm-svn: 368634
r367088 made it so that funclets store XMM registers into their local
frame instead of storing them to the parent frame. However, that change
forgot to update the parent frame pointer offset for catch blocks. This
change does that.
Fixes crashes when an exception is rethrown in a catch block that saves
XMMs, as described in https://crbug.com/992860.
llvm-svn: 368631
- There was a simple typo in the TextStub code that prevented version 3 files from being read.
- Included a version 3 unit test to handle the differences in the format.
- Also fixed a typo in the comments in Error.h.
https://reviews.llvm.org/D66041
This patch is from Cyndy Ishida <cyndy_ishida@apple.com>.
llvm-svn: 368630
This is infrastructural and will be needed for future work.
For some reason it was only used in MIMG_NoSampler, while it is
needed everywhere we use MIMGBaseOpcode if we want to use
predicates.
Differential Revision: https://reviews.llvm.org/D66115
llvm-svn: 368626
We need AVX512BW to be able to truncate an i16 vector. If we don't
have that we have to extend i16->i32, then trunc, i32->i8. But we
won't be able to remove the min/max if we do that. At least not
without more special handling.
llvm-svn: 368623
https://reviews.llvm.org/D66039
We were using getIndexSize instead of getIndexSizeInBits().
Added test case for G_PTRTOINT and G_INTTOPTR.
llvm-svn: 368618
If we're after type legalize, we should make sure we won't create
a store with an illegal type when we separate the AVG pattern
from the truncating store.
I don't know of a way to fail for this today. Just noticed while
I was in the vicinity.
llvm-svn: 368608
Summary:
This commit fixes a race condition from multi-threaded ThinLTO backends that causes non-deterministic memory corruption for a data structure used only by AutoFDO with the compact binary profile format.
GUIDToFuncNameMap, a static data member of type DenseMap in FunctionSamples, is used as a per-module mapping from function name MD5 hash to name string when the input AutoFDO profile is in compact binary format. However, with ThinLTO we can have parallel backends modifying and accessing the class-static map concurrently. The fix is to make GUIDToFuncNameMap a member of SampleProfileLoader instead of file-static data.
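The shape of the problem and the fix, sketched with simplified types (illustrative, not the actual declarations):
```
// Hedged sketch: a class-static map is shared by all ThinLTO backend
// threads (racy); a per-loader member gives each backend its own copy.
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/StringRef.h"
#include <cstdint>

struct FunctionSamplesBefore {
  static llvm::DenseMap<uint64_t, llvm::StringRef> GUIDToFuncNameMap; // shared
};

struct SampleProfileLoaderAfter {
  llvm::DenseMap<uint64_t, llvm::StringRef> GUIDToFuncNameMap; // per-instance
};
```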
Reviewers: wmi, davidxl, danielcdh
Subscribers: mehdi_amini, inglorion, hiraditya, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65848
llvm-svn: 368596
This teaches the cost model that the sext or zext of a load is going to be
free.
Differential Revision: https://reviews.llvm.org/D66006
llvm-svn: 368593
This pass is a port of the corresponding pass from the HSAIL compiler.
It parses printf calls and sets up the runtime printf buffer.
After that it copies printf arguments to the buffer and fills in
module metadata for the runtime.
Differential Revision: https://reviews.llvm.org/D24035
llvm-svn: 368592
A VDUP will perform a vector broadcast in a single instruction. Update the cost
model for MVE accordingly.
Code originally by David Sherwood.
Differential Revision: https://reviews.llvm.org/D63448
llvm-svn: 368589
This puts some of the calls in ARMTargetTransformInfo.cpp behind hasNeon()
checks, now that we have MVE, and updates all the tests accordingly.
Differential Revision: https://reviews.llvm.org/D63447
llvm-svn: 368587
Convert SymbolNameSize and SectionNameSize into just `NameSize`. The length of
a name embedded in a symbol table entry or section header table entry is 8
for Sections, Symbols and Files. No need to have a distinct constant for each
one. Also removes the Size argument to 'generateStringRef' as the size is
always 'XCOFF::NameSize'.
llvm-svn: 368584
It caused assertions to fire when building Chromium:
lib/CodeGen/LiveDebugValues.cpp:331: bool
{anonymous}::LiveDebugValues::OpenRangesSet::empty() const: Assertion
`Vars.empty() == VarLocs.empty() && "open ranges are inconsistent"' failed.
See https://crbug.com/992871#c3 for how to reproduce.
> Patch https://reviews.llvm.org/D43256 introduced more aggressive loop layout optimization which depends on profile information. If profile information is not available, the statically estimated profile information(generated by BranchProbabilityInfo.cpp) is used. If user program doesn't behave as BranchProbabilityInfo.cpp expected, the layout may be worse.
>
> To be conservative this patch restores the original layout algorithm in plain mode. But user can still try the aggressive layout optimization with -force-precise-rotation-cost=true.
>
> Differential Revision: https://reviews.llvm.org/D65673
llvm-svn: 368579
Summary:
Ana Pazos reported a bug where we were not checking that an APInt would
fit into 64 bits before calling `getSExtValue()`. This caused asserts when
compiling large constants, such as i128s, as happens when compiling compiler-rt.
This patch adds a testcase and makes the callback less error-prone.
Reviewers: apazos, asb, luismarques
Reviewed By: luismarques
Subscribers: hiraditya, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66081
llvm-svn: 368572
Summary:
When eliminating an unreachable block we must remove any call site
information for calls residing in the block.
This was originally found on a downstream target, and the attached x86
test case was produced by hand-modifying some MIR.
Reviewers: aprantl, asowda, NikolaPrica, djtodoro, ivanbaev, vsk
Reviewed By: NikolaPrica, vsk
Subscribers: vsk, hiraditya, llvm-commits
Tags: #debug-info, #llvm
Differential Revision: https://reviews.llvm.org/D64500
llvm-svn: 368566
Summary:
In the `block-placement` pass, it will create some patterns of unconditional branches for which we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
This patch does the simple early return to optimize the blocks at the end of `block-placement`.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D63972
llvm-svn: 368565
> In the `block-placement` pass, it will create some patterns of unconditional branches for which we can do a simple early return.
> But the `early-ret` pass runs before `block-placement`, and we don't want to run it again.
> This patch does the simple early return to optimize the blocks at the end of `block-placement`.
>
> Reviewed By: efriedma
>
> Differential Revision: https://reviews.llvm.org/D63972
This also revertes follow-ups r368514 and r368532.
llvm-svn: 368560
Instead of matching value and then blindly casting to BinaryOperator
just to get the opcode, just match instruction and do no cast.
Fixes https://bugs.llvm.org/show_bug.cgi?id=42962
llvm-svn: 368554
Support -march=tigerlake for x86.
Compared with Icelake Client, it includes four more new features:
avx512vp2intersect, movdiri, movdir64b, and shstk.
Patch by Xiang Zhang (xiangzhangllvm)
Differential Revision: https://reviews.llvm.org/D65840
llvm-svn: 368543
Summary:
After the commits that changed the x86 backend to widen vectors
instead of using promotion, some of our downstream tests
started to fail. It was noticed that WidenVectorResult has
been missing support for SMULFIX/UMULFIX/SMULFIXSAT. This
patch adds the missing functionality.
Reviewers: craig.topper, RKSimon
Reviewed By: craig.topper
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66051
llvm-svn: 368540
Due to the nature of the beat system in the MVE architecture, along with tail
predication and low-overhead loops, unrolling has less benefit compared to
normal loops. You can not, for example, hide the latency of a load with other
instructions as you can for scalar code. Preventing unrolling also makes the
code easier to read and reason about.
So if a loop contains vector code, don't enable the runtime unrolling. At least
for the time being.
Differential Revision: https://reviews.llvm.org/D65803
llvm-svn: 368530
With enough codegen complete, we can now correctly report the number and size
of vector registers for MVE, allowing auto vectorisation. This also allows FP
auto-vectorization for MVE without -Ofast/-ffast-math, due to support for IEEE
FP arithmetic and parity between scalar and vector FP behaviour.
Patch by David Sherwood.
Differential Revision: https://reviews.llvm.org/D63728
llvm-svn: 368529
Summary:
When exceptions are repeatedly thrown in the middle of handling another
exception, we call `__clang_call_terminate` with the exception pointer
(i32) as an argument. But in case of foreign exceptions, we don't have
the pointer, so we call the function with 0. (This requires that
`__clang_call_terminate` can deal with a 0 argument, which will be done
later.)
But previously the 0 argument was not added as an `i32.const 0` but as an
immediate by mistake, causing the `call` instruction to take not an i32
but rather an exnref, because an `exnref` is left on top of the value
stack if `br_on_exn` is not taken.
```
block i32
br_on_exn 0, __cpp_exception
;; exnref is on top of stack now
i32.const 0 ;; This was missing!
call __clang_call_terminate
unreachable
end
call __clang_call_terminate ;; This takes i32 extracted by br_on_exn
```
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65475
llvm-svn: 368527
Summary:
Hoisting/sinking an instruction out of a loop isn't always beneficial. Hoisting an instruction from a cold block inside a loop body out of the loop could hurt performance. This change makes LICM profile aware - it now checks block frequency to make sure hoisting/sinking only moves instructions to colder blocks.
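A hypothetical example of the kind of hoist this avoids:
```
// F(X) is loop-invariant but sits in a cold block; hoisting it above the
// loop would run it on every entry even when the cold path never executes.
int sumRare(int N, int X, bool RareCond, int (*F)(int)) {
  int Acc = 0;
  for (int I = 0; I < N; ++I)
    if (RareCond)    // cold according to block frequency info
      Acc += F(X);   // profile-aware LICM keeps this in place
  return Acc;
}
```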
Test Plan:
ninja check
Reviewers: asbirlea, sanjoy, reames, nikic, hfinkel, vsk
Reviewed By: asbirlea
Subscribers: fhahn, vsk, davidxl, xbolva00, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65060
llvm-svn: 368526
The test case that changed is probably better served through
allowing combineTruncatedArithmetic to create narrow vectors. It
also appears InstCombine would have simplified this test case
to remove the zext and trunc anyway.
llvm-svn: 368522
If one of the values being shifted is a constant, since the new shift
amount is known-constant, the new shift will end up being constant-folded
so, we don't need that one-use restriction then.
llvm-svn: 368519
That one-use restriction is not needed for correctness - we have already
ensured that one of the shifts will go away, so we know we won't increase
the instruction count. So there is no need for that restriction.
llvm-svn: 368518
On SSE41+ targets we always lower vector shuffles to ZERO_EXTEND_VECTOR_INREG, even if we don't need the extended bits.
This patch relaxes this so that we lower to ANY_EXTEND_VECTOR_INREG if we can, meaning that shuffle combines have a better idea of what elements need to be kept zero. This helps the multiple reduction code as we can now combine away a lot more of the pack+extend codes.
Differential Revision: https://reviews.llvm.org/D65741
llvm-svn: 368515
This is an extension of a transform that tries to produce positive floating-point
constants to improve canonicalization (and hopefully lead to more reassociation
and CSE).
The original patches were:
D4904
D5363 (rL221721)
But as the test diffs show, these were limited to basic patterns by walking from
an instruction to its single user rather than recursively moving up the def-use
sequence. No fast-math is required here because we're only rearranging implicit
FP negations in intermediate ops.
A motivating bug is:
https://bugs.llvm.org/show_bug.cgi?id=32939
Differential Revision: https://reviews.llvm.org/D65954
llvm-svn: 368512
Summary:
In `block-placement` pass, it will create some patterns for unconditional we can do the simple early retrun.
But the `early-ret` pass is before `block-placement`, we don't want to run it again.
This patch is to do the simple early return to optimize the blocks at the last of `block-placement`.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D63972
llvm-svn: 368509
Summary:
This patch adds a special DAG combine for SSE1 to recognize the IR pattern InstCombine gives us for movmsk. This only does the recognition for a few cases where it's obvious the input won't be scalarized, which would result in building a vector just to do the movmsk. I've made it separate from our existing matching for movmsk since that's called in multiple places and I didn't spend time to see if the other callers would make sense here. Plus the restrictions and additional checks would complicate that.
This fixes the case from PR42870. But it's probably still broken in the presence of logic ops feeding the movmsk pattern, which would further hide the v4f32 type.
Reviewers: spatel, RKSimon, xbolva00
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65689
llvm-svn: 368506
Summary:
On Windows, if the frame size exceeds 4096 bytes, the compiler needs to
generate a call to _alloca_probe. The X86CallFrameOptimization pass
changes the reserved stack size, causing the stack probe function
not to be inserted. This patch fixes the issue by detecting the call
frame size: if the size exceeds 4096 bytes, drop X86CallFrameOptimization.
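For illustration, a hypothetical function whose frame is large enough to require probing:
```
// A frame over 4096 bytes on Windows needs a stack-probe call
// (_alloca_probe / __chkstk) so each new stack page is touched in order.
void bigFrame() {
  volatile char Buf[8192]; // two pages; the prologue must probe
  Buf[0] = 0;
  Buf[8191] = 0;
}
```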
Reviewers: craig.topper, wxiao3, annita.zhang, rnk, RKSimon
Reviewed By: rnk
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65923
llvm-svn: 368503
Introducing non-global control for default block-scan-limit in MemDep analysis.
Useful when there are many compilations per initialized LLVM instance (e.g. JIT).
Reviewed By: asbirlea
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65806
llvm-svn: 368502
The default behavior of Clang's indirect function call checker will replace
the address of each CFI-checked function in the output file's symbol table
with the address of a jump table entry which will pass CFI checks. We refer
to this as making the jump table `canonical`. This property allows code that
was not compiled with ``-fsanitize=cfi-icall`` to take a CFI-valid address
of a function, but it comes with a couple of caveats that are especially
relevant for users of cross-DSO CFI:
- There is a performance and code size overhead associated with each
exported function, because each such function must have an associated
jump table entry, which must be emitted even in the common case where the
function is never address-taken anywhere in the program, and must be used
even for direct calls between DSOs, in addition to the PLT overhead.
- There is no good way to take a CFI-valid address of a function written in
assembly or a language not supported by Clang. The reason is that the code
generator would need to insert a jump table in order to form a CFI-valid
address for assembly functions, but there is no way in general for the
code generator to determine the language of the function. This may be
possible with LTO in the intra-DSO case, but in the cross-DSO case the only
information available is the function declaration. One possible solution
is to add a C wrapper for each assembly function, but these wrappers can
present a significant maintenance burden for heavy users of assembly in
addition to adding runtime overhead.
For these reasons, we provide the option of making the jump table non-canonical
with the flag ``-fno-sanitize-cfi-canonical-jump-tables``. When the jump
table is made non-canonical, symbol table entries point directly to the
function body. Any instances of a function's address being taken in C will
be replaced with a jump table address.
This scheme does have its own caveats, however. It does end up breaking
function address equality more aggressively than the default behavior,
especially in cross-DSO mode which normally preserves function address
equality entirely.
Furthermore, it is occasionally necessary for code not compiled with
``-fsanitize=cfi-icall`` to take a function address that is valid
for CFI. For example, this is necessary when a function's address
is taken by assembly code and then called by CFI-checking C code. The
``__attribute__((cfi_canonical_jump_table))`` attribute may be used to make
the jump table entry of a specific function canonical so that the external
code will end up taking an address for the function that will pass CFI checks.
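For example (hypothetical function name):
```
// Hedged example: request a canonical jump table entry because this
// function's address is taken and compared by external assembly code.
extern "C" __attribute__((cfi_canonical_jump_table))
void called_from_asm(void) {
  // ...
}
```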
Fixes PR41972.
Differential Revision: https://reviews.llvm.org/D65629
llvm-svn: 368495
This is the codegen part of fixing:
https://bugs.llvm.org/show_bug.cgi?id=32939
Even with the optimal/canonical IR that is ideally created by D65954,
we would reverse that transform in DAGCombiner and end up with the same
asm on AArch64 or x86.
I see 2 options for trying to correct this:
1. Limit isNegatibleForFree() by special-casing the fmul pattern (this patch).
2. Avoid creating (fmul X, 2.0) in the 1st place by adding a special-case
transform to SelectionDAG::getNode() and/or SelectionDAGBuilder::visitFMul()
that matches the transform done by DAGCombiner.
This seems like the less intrusive patch, but if there's some other reason to
prefer one option over the other, we can change to the other option.
Differential Revision: https://reviews.llvm.org/D66016
llvm-svn: 368490
Summary:
Targets often have instructions that can sign-extend certain cases faster
than the equivalent shift-left/arithmetic-shift-right. Such cases can be
identified by matching a shift-left/shift-right pair but there are some
issues with this in the context of combines. For example, suppose you can
sign-extend 8-bit up to 32-bit with a target extend instruction.
```
%1:_(s32) = G_SHL %0:_(s32), i32 24 # (I've inlined the G_CONSTANT for brevity)
%2:_(s32) = G_ASHR %1:_(s32), i32 24
%3:_(s32) = G_ASHR %2:_(s32), i32 1
```
would reasonably combine to:
```
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 25
```
which no longer matches the special case. If your shifts and extend are
equal cost, this would break even as a pair of shifts but if your shift is
more expensive than the extend then it's cheaper as:
```
%2:_(s32) = G_SEXT_INREG %0:_(s32), i32 8
%3:_(s32) = G_ASHR %2:_(s32), i32 1
```
It's possible to match the shift-pair in ISel and emit an extend and ashr.
However, this is far from the only way to break this shift pair and make
it hard to match the extends. Another example is that with the right
known-zeros, this:
```
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 24
%3:_(s32) = G_MUL %2:_(s32), i32 2
```
can become:
```
%1:_(s32) = G_SHL %0:_(s32), i32 24
%2:_(s32) = G_ASHR %1:_(s32), i32 23
```
All upstream targets have been configured to lower it to the current
G_SHL,G_ASHR pair but will likely want to make it legal in some cases to
handle their faster cases.
To follow-up: Provide a way to legalize based on the constant. At the
moment, I'm thinking that the best way to achieve this is to provide the
MI in LegalityQuery but that opens the door to breaking core principles
of the legalizer (legality is not context sensitive). That said, it's
worth noting that looking at other instructions and acting on that
information doesn't violate this principle in itself. It's only a
violation if, at the end of legalization, a pass that checks legality
without being able to see the context would say an instruction might not be
legal. That's a fairly subtle distinction so to give a concrete example,
saying %2 in:
```
%1 = G_CONSTANT 16
%2 = G_SEXT_INREG %0, %1
```
is legal is in violation of that principle if the legality of %2 depends
on %1 being constant and/or being 16. However, legalizing to either:
```
%2 = G_SEXT_INREG %0, 16
```
or:
```
%1 = G_CONSTANT 16
%2:_(s32) = G_SHL %0, %1
%3:_(s32) = G_ASHR %2, %1
```
depending on whether %1 is constant and 16 does not violate that principle
since both outputs are genuinely legal.
Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm
Subscribers: sdardis, jvesely, wdng, nhaehnle, rovka, kristof.beyls, javed.absar, hiraditya, jrtc27, atanasyan, Petar.Avramovic, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61289
llvm-svn: 368487
Summary:
A block address may be used in inline assembly, in which case it
requires a name so that the asm parser has something to parse. Creating
a name for every block address is a large hammer, but is necessary
because at the point when a temp symbol is created we don't necessarily
know if it's used in inline asm. This ensures that it exists regardless.
Reviewers: nickdesaulniers, craig.topper
Subscribers: nathanchance, javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65352
llvm-svn: 368478
Summary:
This patch keeps track of MCSymbols created for blocks that were
referenced in inline asm. It prevents creating a new symbol which
doesn't refer to the block.
Inline asm may have a reference to a label. The asm parser however
doesn't recognize it as a label and tries to create a new symbol. The
result being that instead of the original symbol (e.g. ".Ltmp0") the
parser replaces it in the inline asm with the new one (e.g. ".Ltmp00")
without updating it in the symbol table. So the machine basic block
retains the "old" symbol (".Ltmp0"), but the inline asm uses the new one
(".Ltmp00").
Reviewers: nickdesaulniers, craig.topper
Subscribers: nathanchance, javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65304
llvm-svn: 368477
Refactor `LibCallSimplifier::optimizeExp2()` to use the new
`emitBinaryFloatFnCall()` version that fetches the function name from TLI.
llvm-svn: 368457
For type values that do not have proper names, print a reasonable representation
in llvm-nm, llvm-readobj and llvm-readelf, matching GNU tools.
Fixes PR41713.
Differential Revision: https://reviews.llvm.org/D65537
llvm-svn: 368451
Summary:
This is exposed by adding a new testcase in PowerPC in
https://reviews.llvm.org/rL367732
The testcase got different output on different platforms, hence breaking
buildbots.
The problem is that we get a different FuncUnitOrder in calculateResMII.
The root cause is:
1. Two MachineInstrs might get the SAME priority (MFUsx) from minFuncUnits.
2. The current comparison operator() will return `MFUs1 > MFUs2`.
3. We use iterators for MachineInstrs, so the input to FuncUnitSorter
might be different on different platforms due to the iterator nature.
So for two MIs with the same MFU, their order actually depends on the
iterator order, which is platform (implementation) dependent.
This is risky, and may cause cross-compilation problems.
The fix is to make sure we assign a deterministic order when they are
equal.
Reviewers: bcahoon, hfinkel, jmolloy
Subscribers: nemanjai, hiraditya, MaskRay, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65992
llvm-svn: 368441
Summary: Implement a new analysis to estimate the number of cache lines
required by a loop nest.
The analysis is largely based on the following paper:
Compiler Optimizations for Improving Data Locality
By: Steve Carr, Katherine S. McKinley, Chau-Wen Tseng
http://www.cs.utexas.edu/users/mckinley/papers/asplos-1994.pdf
The analysis considers temporal reuse (accesses to the same memory
location) and spatial reuse (accesses to memory locations within a cache
line). For simplicity the analysis considers memory accesses in the
innermost loop in a loop nest, and thus determines the number of cache
lines used when the loop L in loop nest LN is placed in the innermost
position.
The result of the analysis can be used to drive several transformations.
As an example, loop interchange could use it determine which loops in a
perfect loop nest should be interchanged to maximize cache reuse.
Similarly, loop distribution could be enhanced to take into
consideration cache reuse between arrays when distributing a loop to
eliminate vectorization inhibiting dependencies.
The general approach taken to estimate the number of cache lines used by
the memory references in the inner loop of a loop nest is:
1. Partition memory references that exhibit temporal or spatial reuse into
reference groups.
2. For each loop L in the loop nest LN:
   a. Compute the cost of each reference group.
   b. Compute the 'cache cost' of the loop nest by summing up the reference
      groups' costs.
For further details of the algorithm please refer to the paper.
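As a rough sketch (my paraphrase of the paper's cost model, not necessarily the exact in-tree formula), the cost of a reference group R with respect to the innermost loop L, with trip count TC and cache line size CLS, is roughly:
```
\mathrm{RefCost}(R, L) =
\begin{cases}
1 & \text{temporal reuse (loop-invariant address)} \\
TC \cdot \mathrm{stride} / \mathrm{CLS} & \text{spatial reuse (stride} < \mathrm{CLS}) \\
TC & \text{no reuse (a new cache line every iteration)}
\end{cases}
```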
Authored By: etiotto
Reviewers: hfinkel, Meinersbur, jdoerfert, kbarton, bmahjour, anemet,
fhahn
Reviewed By: Meinersbur
Subscribers: reames, nemanjai, MaskRay, wuzish, Hahnfeld, xusx595,
venkataramanan.kumar.llvm, greened, dmgreen, steleman, fhahn, xblvaOO,
Whitney, mgorny, hiraditya, mgrang, jsji, llvm-commits
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D63459
llvm-svn: 368439
As discussed on PR42825, if we are inverting the selection mask we can just swap the inputs and avoid the inversion.
Differential Revision: https://reviews.llvm.org/D65522
llvm-svn: 368438
Fast-isel was picking the AFGR64 register class for processing call
arguments when the +fp64 option was used. We simply check whether the
+fp64 option is used and pick the appropriate register class.
Patch by Mirko Brkusanin.
Differential Revision: https://reviews.llvm.org/D65886
llvm-svn: 368433
Flag -show-encoding enables the printing of instruction encodings as part of
the instruction info view.
Example (with flags -mtriple=x86_64-- -mcpu=btver2):
Instruction Info:
[1]: #uOps
[2]: Latency
[3]: RThroughput
[4]: MayLoad
[5]: MayStore
[6]: HasSideEffects (U)
[7]: Encoding Size
```
[1]    [2]    [3]    [4]    [5]    [6]    [7]    Encodings:      Instructions:
 1      2     1.00                        4      c5 f0 59 d0     vmulps %xmm0, %xmm1, %xmm2
 1      4     1.00                        4      c5 eb 7c da     vhaddps %xmm2, %xmm2, %xmm3
 1      4     1.00                        4      c5 e3 7c e3     vhaddps %xmm3, %xmm3, %xmm4
```
In this example, column Encoding Size is the size in bytes of the instruction
encoding. Column Encodings reports the actual instruction encodings as byte
sequences in hex (objdump style).
The computation of encodings is done by a utility class named mca::CodeEmitter.
In future, I plan to expose the CodeEmitter to the instruction builder, so that
information about instruction encoding sizes can be used by the simulator. That
would be a first step towards simulating the throughput from the decoders in the
hardware frontend.
Differential Revision: https://reviews.llvm.org/D65948
llvm-svn: 368432
All TLS access on Darwin is in the "general dynamic" form where we call
a function to resolve the address, so implementation is pretty simple.
llvm-svn: 368418
I've now needed to add an extra parameter to this call twice recently. Not only
is the signature getting extremely unwieldy, but just updating all of the
callsites and implementations is a pain. Putting the parameters in a struct
sidesteps both issues.
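The general shape of the refactor, with illustrative (made-up) names:
```
// Hedged sketch of the "parameter object" pattern described above.
struct CallLoweringArgsSketch { // hypothetical name
  bool IsTailCall = false;
  bool IsVarArg = false;
  unsigned NumFixedArgs = 0;
};

// Before: lowerCall(..., isTailCall, isVarArg, numFixedArgs) at every site.
// After: callers populate only the fields they care about.
void lowerCall(const CallLoweringArgsSketch &Info);
```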
llvm-svn: 368408
As loads are combined and widened, we replaced their sext users'
operands whereas we should have been replacing the uses of the sext.
I've added a load of tests, with only a few of them originally
causing assertion failures, the rest improve pattern coverage.
Differential Revision: https://reviews.llvm.org/D65740
llvm-svn: 368404
Summary:
Make sure that we report that changes have been made
by InstSimplify also in situations when only trivially
dead instructions have been removed. If for example a call
is removed, the call graph must be updated.
The bug seems to have been introduced by llvm-svn r367173
(commit 02b9e45a7e), since the code in question
was rewritten in that commit.
Reviewers: spatel, chandlerc, foad
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65973
llvm-svn: 368401
The only way to generate these was through promoting legalization
of narrow vectors, but we widen those types now. So we shouldn't
produce these nodes.
llvm-svn: 368396
This isn't the most robust error handling API, but does allow clients to
opt-in to getting Errors they can handle. I suspect the long-term
solution would be to move away from the lazy unit parsing and have an
explicit step that parses the unit and then allows access to the other
APIs that require a parsed unit.
llvm-dwarfdump could be expanded to use this (or newer/better API) to
demonstrate the benefit of it - but for now lld will use this in a
follow-up cl which ensures lld can exit non-zero on errors like this (&
provide more descriptive diagnostics including which object file the
error came from).
(error access to later errors when parsing nested DIEs would be good too
- but, again, exposing that without it being a hassle for every consumer
may be tricky)
llvm-svn: 368377
GlobalAlias and GlobalIFunc ought to be treated the same by the IR
linker, so we can generalize the code to be in terms of their common
base class GlobalIndirectSymbol.
Differential Revision: https://reviews.llvm.org/D55046
llvm-svn: 368357
Under this configuration we'll want to split the v8i64 or v16i32 into two vectors. The default legalization will try to truncate each of those 256-bit pieces one step to 128-bit, concatenate those, then truncate one more time from the new 256 to 128 bits.
With this patch we now truncate the two splits to 64 bits and then concatenate those. We have to do this in two different ways depending on whether we have widening legalization enabled. Without widening legalization we have to manually construct X86ISD::VTRUNC to prevent the ISD::TRUNCATE with a narrow result from being promoted to 128 bits with a larger element type than what we want, followed by something like a pshufb to grab the lower half of each element to finish the job. With widening legalization we just get the right thing. When we switch to widening by default we can just delete the other code path.
Differential Revision: https://reviews.llvm.org/D65626
llvm-svn: 368349
We may be able to look to how VSELECT is handled to further
improve this, but this appears to be neutral or an improvement
on the test cases we have.
llvm-svn: 368344
Patch https://reviews.llvm.org/D43256 introduced more aggressive loop layout optimization which depends on profile information. If profile information is not available, the statically estimated profile information (generated by BranchProbabilityInfo.cpp) is used. If the user program doesn't behave as BranchProbabilityInfo.cpp expects, the layout may be worse.
To be conservative, this patch restores the original layout algorithm in plain mode. But the user can still try the aggressive layout optimization with -force-precise-rotation-cost=true.
Differential Revision: https://reviews.llvm.org/D65673
llvm-svn: 368339
This fixes znver1 so that it properly enables CMPXCHG8B. We can
probably remove explicit CMPXCHG8B from CPUs that also have
CMPXCHG16B, but keeping this simple to allow cherry pick to 9.0.
Fixes PR42935.
llvm-svn: 368324
Summary:
The A64 assembly language does not require the '#' character to
introduce constant immediate operands. Avoid the '#' since the AArch64
asm parser does not accept '#' before the lane specifier and rejects the
following:
__asm__ ("fmla v2.4s, v0.4s, v1.s[%0]" :: "I"(0x1))
Fix a test to not expect the '#' and add a new test case with the above
asm.
Fixes: https://github.com/android-ndk/ndk/issues/1036
Reviewers: peter.smith, kristof.beyls
Subscribers: javed.absar, hiraditya, llvm-commits, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65550
llvm-svn: 368320
Convert ARC runtime function calls to intrinsic calls if the bitcode has
the arm64 retainAutoreleasedReturnValue marker.
The ARC middle-end passes stopped optimizing or transforming bitcode
that has been compiled with old compilers after we started emitting
calls to ARC runtime functions as intrinsic calls instead of normal
function calls in the front-end and made changes to teach the ARC
middle-end passes about those intrinsics (see r349534). This patch
converts calls to ARC runtime functions that are not intrinsic functions
to intrinsic function calls if the bitcode has the arm64
retainAutoreleasedReturnValue marker. Checking for the presence of the
marker is necessary to make sure we aren't changing ARC function calls
that were originally MRR message sends (see r349952).
rdar://problem/53280660
Differential Revision: https://reviews.llvm.org/D65902
llvm-svn: 368311
If the target shuffle mask is from a wider type, attempt to scale the mask so that the extraction can attempt to peek through.
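As a rough illustration of what scaling a shuffle mask means here (a generic sketch, not the actual helper used in the backend): each index into wide elements expands into Scale indices into the narrower elements it covers.
```
#include <vector>

// Sketch: scale a shuffle mask from wide elements to narrower ones.
// A wide-element index M becomes Scale indices M*Scale .. M*Scale+Scale-1;
// negative (undef) entries stay undef.
std::vector<int> scaleMask(const std::vector<int> &WideMask, int Scale) {
  std::vector<int> Narrow;
  for (int M : WideMask)
    for (int I = 0; I != Scale; ++I)
      Narrow.push_back(M < 0 ? M : M * Scale + I);
  return Narrow;
}
```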
Fixes the regression mentioned in rL368307
llvm-svn: 368308
If we don't demand all elements, then attempt to combine to a simpler shuffle.
At the moment we can only do this if Depth == 0 as combineX86ShufflesRecursively uses Depth to track whether the shuffle has really changed or not - we'll need to change this before we can properly start merging combineX86ShufflesRecursively into SimplifyDemandedVectorElts.
The insertps-combine.ll regression is because XFormVExtractWithShuffleIntoLoad can't see through shuffles of different widths - this will be fixed in a follow-up commit.
llvm-svn: 368307
Summary:
This patch enables assembly output of local commons for AIX using .lcomm
directives. Adds an EmitXCOFFLocalCommonSymbol to MCStreamer so we can emit the
AIX version of .lcomm assembly directives, which include a csect name. Handles the
case of BSS locals in PPCAIXAsmPrinter by using EmitXCOFFLocalCommonSymbol. Adds
a test for generating .lcomm on AIX Targets.
Reviewers: cebowleratibm, hubert.reinterpretcast, Xiangling_L, jasonliu, sfertile
Reviewed By: sfertile
Subscribers: wuzish, nemanjai, hiraditya, kbarton, MaskRay, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64825
llvm-svn: 368306
This adds pre- and post- increment and decrements for MVE loads and stores. It
uses the builtin pre and post load/store detection, unlike Neon. Loads are
selected with the code in tryT2IndexedLoad, stores are selected with tablegen
patterns. The immediates have a +/-7bit range, multiplied by the size of the
element.
Differential Revision: https://reviews.llvm.org/D63840
llvm-svn: 368305
This adds some missing patterns for big endian loads/stores, allowing unaligned
loads/stores to also be selected with an extra VREV, which produces better code
than aligning through a stack. Also moves VLDR_P0 to not be LE only, and
adjusts some of the tests to show all that working.
Differential Revision: https://reviews.llvm.org/D65583
llvm-svn: 368304
Summary:
Clang will replace references to registers using ABI names in inline
assembly constraints with references to architecture names, but other
frontends do not. LLVM uses the regular assembly parser to parse inline asm,
so inline assembly strings can contain references to registers using their
ABI names.
This patch adds support for parsing constraints using either the ABI name or
the architectural register name. This means we do not need to implement the
ABI name replacement code in every single frontend, especially those like
Rust which are a very thin shim on top of LLVM IR's inline asm, and that
constraints can more closely match the assembly strings they refer to.
Reviewers: asb, simoncook
Reviewed By: simoncook
Subscribers: hiraditya, rbar, johnrusso, JDevlieghere, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65947
llvm-svn: 368303
Summary:
Currently the RISC-V backend does not realign the stack. This can be an issue even for the RV32I/RV64I ABIs (where the stack is 16-byte aligned), though it is rare. It will be much more common with RV32E (though the alignment requirements for common data types remain under-documented...).
This patch adds minimal support for stack realignment. It should cope with large realignments. It will error out if the stack needs realignment and variable sized objects are present.
It feels like a lot of the code like getFrameIndexReference and determineFrameLayout could be refactored somehow, as right now it feels fiddly and brittle. We also seem to allocate a lot more memory than GCC does for equivalent C code.
Reviewers: asb
Reviewed By: asb
Subscribers: wwei, jrtc27, s.egerton, MaskRay, Jim, lenary, hiraditya, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62007
llvm-svn: 368300
For some targets the LICM pass can result in sub-optimal code in some
cases where it would be better not to run the pass, but it isn't
always possible to suppress the transformations heuristically.
Where the front-end has insight into such cases it is beneficial
to attach loop metadata to disable the pass - this change adds the
llvm.licm.disable metadata to enable that.
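For front-end authors, attaching the metadata might look roughly like this (a hedged sketch using the generic self-referential loop-metadata idiom; the helper name is made up):
```
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
using namespace llvm;

// Hypothetical helper: attach !{!"llvm.licm.disable"} loop metadata to the
// branch terminating a loop's latch block.
static void disableLICMOnLoop(BranchInst *LatchBr) {
  LLVMContext &Ctx = LatchBr->getContext();
  Metadata *Ops[] = {
      nullptr, // placeholder, replaced by the self-reference below
      MDNode::get(Ctx, MDString::get(Ctx, "llvm.licm.disable"))};
  MDNode *LoopID = MDNode::getDistinct(Ctx, Ops);
  LoopID->replaceOperandWith(0, LoopID); // loop IDs are self-referential
  LatchBr->setMetadata(LLVMContext::MD_loop, LoopID);
}
```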
Differential Revision: https://reviews.llvm.org/D64557
llvm-svn: 368296
This patch attempts to peek through vectors based on the demanded bits/elt of a particular ISD::EXTRACT_VECTOR_ELT node, allowing us to avoid dependencies on ops that have no impact on the extract.
In particular this helps remove some unnecessary scalar->vector->scalar patterns.
The wasm shift patterns are annoying - @tlively has indicated that the wasm vector shift codegen is to be refactored in the near term and this isn't considered a major issue.
Differential Revision: https://reviews.llvm.org/D65887
llvm-svn: 368276
G_JUMP_TABLE and G_BRJT appear from the translation of switch statements.
Select these two instructions for MIPS32, both PIC and non-PIC.
Differential Revision: https://reviews.llvm.org/D65861
llvm-svn: 368274
In some cases a symbol might have section index == SHN_XINDEX.
This is an escape value indicating that the actual section header index
is too large to fit in the containing field.
Then the SHT_SYMTAB_SHNDX section is used. It contains the 32-bit values
that store section indexes.
The ELF gABI says that there can be multiple SHT_SYMTAB_SHNDX sections,
e.g. one for .symtab and one for .dynsym:
(1) https://groups.google.com/forum/#!topic/generic-abi/-XJAV5d8PRg
(2) DT_SYMTAB_SHNDX: http://www.sco.com/developers/gabi/latest/ch5.dynamic.html
In this patch I am only supporting a single SHT_SYMTAB_SHNDX associated
with a .symtab. This is the more or less common case, used in a few tests I saw in LLVM.
I decided not to create the SHT_SYMTAB_SHNDX section as "implicit",
but to implement it as a kind of regular section for now,
i.e., tools do not recreate this section or its content the way they do for
symbol table sections. That should allow writing all kinds of
possibly broken test cases for our needs and keep the output closer to what was requested.
Differential revision: https://reviews.llvm.org/D65446
llvm-svn: 368272
VLDRH needs to have an alignment of at least 2, including the
widening/narrowing versions. This tightens up the ISel patterns for it and
alters allowsMisalignedMemoryAccesses so that unaligned accesses are expanded
through the stack. It also fixes some incorrect shift amounts, which seemed to
be passing a multiple rather than a shift.
Differential Revision: https://reviews.llvm.org/D65580
llvm-svn: 368256
Summary:
The ever-growing switch required Attribute::AttrKind values, but they
might not be available for all abstract attributes we deduce. With the
new method we track statistics at the abstract attribute level. The
provided macros simplify the usage and make the messages uniform.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65732
llvm-svn: 368227
Summary:
The wrapper reduces boilerplate code and also provides a nice way to
determine the state type used by an abstract attribute statically via
AAType::StateType.
This was already discussed as part of the review of D65711.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65786
llvm-svn: 368224
If we know everything is live there is no need to query for liveness.
Indicating a pessimistic fixpoint will cause the state to be "invalid"
which will cause the Attributor to not return the AAIsDead on request,
which will prevent us from querying isAssumedDead().
llvm-svn: 368223
Summary:
So far, whenever one wants to look at returned values, one had to deal
with the AAReturnedValues and potentially with the AAIsDead attribute.
In the same spirit as other checkForAllXXX methods, we add this
functionality now to the Attributor. By adopting the use sites we got
better results when return instructions were dead.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65733
llvm-svn: 368222
If we know the trip count, we should make sure the interleave factor won't cause the vectorized loop to exceed it.
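The intent can be sketched as a simple clamp (illustrative only; the real cost model takes more inputs into account):
```
#include <algorithm>

// Sketch: never pick an interleave count IC such that VF * IC overshoots
// a known trip count.
unsigned clampInterleaveCount(unsigned IC, unsigned VF, unsigned TripCount) {
  if (VF * IC > TripCount)
    IC = std::max(1u, TripCount / VF);
  return IC;
}
```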
Improves one of the cases from PR42674
Differential Revision: https://reviews.llvm.org/D65896
llvm-svn: 368215
If we're splitting the 512-bit vector anyway and we have zero/sign bits, then we might as well use pack instructions to concat and truncate at once.
Differential Revision: https://reviews.llvm.org/D65904
llvm-svn: 368210
This reverts commit fbc563e2cb "Create
unique, but identically-named ELF sections for explicitly-sectioned
functions and globals when using -function-sections and
-data-sections."
Reason for revert: sections are created with potentially wrong
attributes.
llvm-svn: 368204
The matchSelectPattern code can match patterns like (x >= 0) ? x : -x
for absolute value. But it can also match ((x-y) >= 0) ? (x-y) : (y-x).
If the latter form was matched we can only use the nsw flag if it's
set on both subtracts.
This match makes sure we're looking at the former case only.
Differential Revision: https://reviews.llvm.org/D65692
llvm-svn: 368195
Without this patch computeConstantDifference returns None for cases like
these:
computeConstantDifference(%x, %x)
computeConstantDifference({%x,+,16}, {%x,+,16})
Differential Revision: https://reviews.llvm.org/D65474
llvm-svn: 368193
Some of these names were abbreviated, some were not; some were pluralised,
some not. This made the API difficult to use. Since it's an exact 1:1
mapping to the DWARF sections, use those names (changing underscore
separation to camel casing).
llvm-svn: 368189
We built a StringRef from a string literal which we then converted to a
std::string to call c_str(). Just use a pointer to the string literal
instead of a StringRef.
No behavior change.
Differential Revision: https://reviews.llvm.org/D65890
llvm-svn: 368187
The assert that caused this to be reverted should be fixed now.
Original commit message:
This patch changes our default legalization behavior for 16, 32, and
64 bit vectors with i8/i16/i32/i64 scalar types from promotion to
widening. For example, v8i8 will now be widened to v16i8 instead of
promoted to v8i16. This keeps the element widths the same and pads
with undef elements. We believe this is a better legalization strategy.
But it carries some issues due to the fragmented vector ISA. For
example, i8 shifts and multiplies get widened and then later have
to be promoted/split into vXi16 vectors.
This has the potential to cause regressions so we wanted to get
it in early in the 10.0 cycle so we have plenty of time to
address them.
Next steps will be to merge tests that explicitly test the command
line option. And then we can remove the option and its associated
code.
llvm-svn: 368183
Summary:
In SimplifySelectsFeedingBinaryOp, propagate fast math flags from the
outer op into both arms of the new select, to take advantage of
simplifications that require fast math flags.
Reviewers: mcberg2017, majnemer, spatel, arsenm, xbolva00
Subscribers: wdng, javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65658
llvm-svn: 368175
Prevent the LoadStoreOptimizer from pairing any load/store instructions with
instructions from the prologue/epilogue if the CFI information has encoded the
operations as separate instructions. This would otherwise lead to a mismatch
of the actual prologue size from the size as recorded in the Windows CFI.
Reviewers: efriedma, mstorsjo, ssijaric
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D65817
llvm-svn: 368164
Function MipsAsmParser::expandMemInst() did not properly handle
instruction `sc` with a symbol as an argument because the first argument
would be counted twice. We add additional checks and handle this case
separately.
Patch by Mirko Brkusanin.
Differential Revision: https://reviews.llvm.org/D64252
llvm-svn: 368160
- Remove support for non-recursive mutexes. This was unused.
- The std::recursive_mutex is now created/destroyed unconditionally.
Locking is still only done if threading is enabled.
- Alias SmartScopedLock to std::lock_guard.
This should make no semantic difference on the existing APIs.
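Assuming the usual shape of these wrappers, the result presumably looks something like this simplified sketch (names abbreviated; the conditional locking is elided):
```
#include <mutex>

// Simplified sketch: the recursive_mutex member always exists; actual
// locking may still be skipped when threading is disabled (not shown).
template <bool mt_only> class SmartMutex {
  std::recursive_mutex M; // created/destroyed unconditionally
public:
  void lock() { M.lock(); }
  void unlock() { M.unlock(); }
};

// The scoped lock is now just an alias for std::lock_guard.
template <bool mt_only>
using SmartScopedLock = std::lock_guard<SmartMutex<mt_only>>;
```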
llvm-svn: 368158
In particular this helps the SSE vector shift cvttps2dq+add+shl pattern by avoiding the need for zeros in shuffle-style extensions to vXi32 types, as we'll be shifting out those bits anyway.
llvm-svn: 368155
This was initially committed in r368059 but got reverted in r368084
because the logic for handling a type mismatch between the shift amounts
was faulty (it simply wasn't handled).
I've added an explicit bailout before we SimplifyAddInst() - I don't think
it's designed in general to handle differently-typed values, even though
the actual problem only comes from ConstantExprs.
I have also changed the common type deduction to not just blindly
look past zext, but to try to do it so that in the end the types match.
Differential Revision: https://reviews.llvm.org/D65380
llvm-svn: 368141
Currently we check whether LR is stored/loaded to/from in between the
loop decrement and loop end pseudo instructions. There are two problems
here:
- It relies on all load/store instructions being labelled as such in
tablegen.
- Actually any use of loop decrement is troublesome because the value
doesn't exist!
So we need to check for any read/write of LR that occurs between the
two instructions and revert if we find anything.
Differential Revision: https://reviews.llvm.org/D65792
llvm-svn: 368130
This patch turns on the prof branch_weights metadata consistency
check in SwitchInstProfUpdateWrapper.
If this patch causes a failure, then before reverting please report
the IR that hits the assertion and try to identify the pass that
introduces the inconsistency. We have to fix all such passes.
See also the upcoming change https://reviews.llvm.org/D61179
in the Verifier.
Reviewers: davidx, nikic, eraman, reames, chandlerc
Reviewed By: davidx
Differential Revision: https://reviews.llvm.org/D64061
llvm-svn: 368129
We have aliases that disambiguate memory forms of bt/btc/btr/bts
without suffixes to the 32-bit form. These aliases should have
been updated when the instructions were updated in r356413.
llvm-svn: 368127
The upper 4 bits of the immediate byte are used to encode a
register. We need to limit the explicit immediate to fit in the
remaining 4 bits.
Fixes PR42899.
llvm-svn: 368123
Globals are instrumented by adding a pointer tag to their symbol values
and emitting metadata into a special section that allows the runtime to tag
their memory when the library is loaded.
Due to order of initialization issues explained in more detail in the comments,
shadow initialization cannot happen during regular global initialization.
Instead, the location of the global section is marked using an ELF note,
and we require libc support for calling a function provided by the HWASAN
runtime when libraries are loaded and unloaded.
Based on ideas discussed with @evgeny777 in D56672.
Differential Revision: https://reviews.llvm.org/D65770
llvm-svn: 368102
Summary:
This change gives Emscripten the ability to use more than one constructor
priority that runs before ASan. By convention, constructor priorities 0-100
are reserved for use by the system. ASan on Emscripten now uses priority 50,
leaving plenty of room for use by Emscripten before and after ASan.
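As an illustration of the convention (a hypothetical example; priorities 0-100 are reserved for the system, so only Emscripten's own runtime code would pick values in this range):
```
// Hypothetical system-level constructors ordered around ASan's priority 50.
__attribute__((constructor(40))) static void systemInitBeforeASan() {}
__attribute__((constructor(60))) static void systemInitAfterASan() {}
```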
This change is done in response to:
https://github.com/emscripten-core/emscripten/pull/9076#discussion_r310323723
Reviewers: kripken, tlively, aheejin
Reviewed By: tlively
Subscribers: cfe-commits, dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits
Tags: #llvm, #clang
Differential Revision: https://reviews.llvm.org/D65684
llvm-svn: 368101
This check is only meaningful for COFF and it is perfectly valid to create
such a GlobalValue in ELF.
Differential Revision: https://reviews.llvm.org/D65686
llvm-svn: 368094
If we're after type legalization we should only be trying to turn
v2i64 into v2i32. So bitcast to v4i32, shuffle the even elements
together. Then use X86ISD::CVTSI2P. The alternative is to leave
the v2i64 type alone and let it be scalarized. Hopefully keeping
it packed is better.
Fixes PR42905.
llvm-svn: 368091
This reverts r368059 (git commit 0f95710976)
This caused Clang to assert while self-hosting and compiling
SystemZInstrInfo.cpp. Reduction is running.
llvm-svn: 368084
Commit r368064 was necessary after r367953 (D65712) broke the module
build. That happened, apparently, because the template class IRAttribute
defined in the header had a virtual method defined in the corresponding
source file (IRAttribute::manifest). To unbreak the situation this patch
introduces a helper function IRAttributeManifest::manifestAttrs which
is used to implement IRAttribute::manifest in the header. The definition
of the helper function is still in the source file.
Patch by jdoerfert (Johannes Doerfert)
Differential Revision: https://reviews.llvm.org/D65821
llvm-svn: 368076
https://reviews.llvm.org/D65698
This adds a KnownBits analysis pass for GISel. This was done as a
pass (compared to static functions) so that we can add other features
such as caching queries (within a pass and across passes) in the future.
This patch only adds the basic pass boilerplate, and implements a lazy
non-caching known-bits implementation (ported from SelectionDAG). I've
also hooked up the AArch64PreLegalizerCombiner pass to use this - there
should be no compile time regression as the analysis is lazy.
llvm-svn: 368065
Summary:
This is tested by D61289 but has been pulled into a separate patch at
a reviewer's request.
Reviewers: bogner, aditya_nandakumar, volkan, aemerson, paquette, arsenm, rovka
Reviewed By: arsenm
Subscribers: javed.absar, hiraditya, wdng, kristof.beyls, Petar.Avramovic, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61321
llvm-svn: 368063
Summary:
Cleans X86.td's Barcelona entry to be more like the others,
by moving the features out of the `Proc<>`, thus potentially
making it possible to inherit from them.
Split off from D63628
Reviewers: craig.topper, RKSimon
Reviewed By: craig.topper
Subscribers: hiraditya, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65791
llvm-svn: 368061
Summary:
Currently `reassociateShiftAmtsOfTwoSameDirectionShifts()` only handles
two shifts one after another. If the shifts are `shl`, we still can
easily perform the fold, with no extra legality checks:
https://rise4fun.com/Alive/OQbM
If we have right-shift however, we won't be able to make it
any simpler than it already is.
After this, the only thing missing here is constant-folding (`NewShAmt >= bitwidth(X)`):
* If it's a logical shift, then constant-fold to `0` (not `undef`)
* If it's an `ashr`, then a splat of the original sign bit
https://rise4fun.com/Alive/E1K
https://rise4fun.com/Alive/i0V
Reviewers: spatel, nikic, xbolva00
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65380
llvm-svn: 368059
This fixes a bug in making paths with a //net-style root absolute. I
discovered the bug while writing a test case for the VFS, which uses
these paths because they're both legal absolute paths on Windows and
Unix.
Differential revision: https://reviews.llvm.org/D65675
llvm-svn: 368053
Refactor emitFrameOffset to accept a StackOffset struct as its offset argument.
This method currently only supports byte offsets (MVT::i8) but will be extended
in a later patch to support scalable offsets (MVT::nxv1i8) as well.
Reviewers: thegameg, rovka, t.p.northover, efriedma, greened
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D61436
llvm-svn: 368049
This patch replaces a TODO comment with a call to `report_fatal_error`.
The path that reaches the added call to `report_fatal_error` manifestly
dereferences a null pointer.
llvm-svn: 368048
D62198 introduced an option to relax the checks for
hasOnlyUniformBranches. This commit turns the option on by default, for
better code generation in some cases in AMDGPU.
Differential Revision: https://reviews.llvm.org/D63198
Change-Id: I9cbff002a1e74d3b7eb96b4192dc8129936d537d
llvm-svn: 368042
To support spilling/filling of scalable vectors we need a more generic
representation of a stack offset than simply 'int'.
For this we introduce the StackOffset struct, which comprises multiple
offsets sized by their respective MVTs. Byte-offsets will thus be a simple
tuple such as { offset, MVT::i8 }. Adding two byte-offsets will result in a
byte offset { offsetA + offsetB, MVT::i8 }. When two offsets have different
types, we can canonicalise them to use the same MVT, as long as their
runtime sizes are guaranteed to have the same size-ratio as they would have
at compile-time.
When we have both scalable- and fixed-size objects on the stack, we can
create an offset that is:
({ offset_fixed, MVT::i8 } + { offset_scalable, MVT::nxv1i8 })
The struct also contains a getForFrameOffset() method that is specific to
AArch64 and decomposes the frame-offset to be used directly in instructions
that operate on the stack or index into the stack.
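Conceptually, the struct behaves like the following simplified sketch (illustrative only, not the actual AArch64 implementation):
```
#include <cstdint>

// Simplified sketch: keep fixed-size and scalable byte offsets separate
// and only add like with like; the MVT tags from the description above
// are implicit in the two fields.
struct StackOffset {
  int64_t Bytes;         // { offset, MVT::i8 }
  int64_t ScalableBytes; // { offset, MVT::nxv1i8 }

  StackOffset(int64_t Bytes, int64_t ScalableBytes)
      : Bytes(Bytes), ScalableBytes(ScalableBytes) {}

  StackOffset operator+(const StackOffset &RHS) const {
    return StackOffset(Bytes + RHS.Bytes, ScalableBytes + RHS.ScalableBytes);
  }
};
```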
Note: This patch adds StackOffset as an AArch64-only concept, but we would
like to make this a generic concept/struct that is supported by all
interfaces that take or return stack offsets (currently as 'int'). Since
that would be a bigger change that is currently pending on D32530 landing,
we thought it makes sense to first show/prove the concept in the AArch64
target before proposing to roll this out further.
Reviewers: thegameg, rovka, t.p.northover, efriedma, greened
Reviewed By: rovka, greened
Differential Revision: https://reviews.llvm.org/D61435
llvm-svn: 368024
If we don't demand any non-undef shuffle elements then the assert will fail as all shuffle inputs would still be flagged as 'identity' safe.
Exposed by an incoming patch.
llvm-svn: 368022
This updates all libraries and tools in LLVM Core to use 64-bit offsets
which directly or indirectly come to DataExtractor.
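Client code now tracks its position with a 64-bit cursor, along these lines (a minimal sketch; the wrapper function is made up):
```
#include "llvm/Support/DataExtractor.h"
using namespace llvm;

// Minimal sketch: read a u32 from a buffer using a 64-bit offset cursor.
static uint32_t readFirstWord(StringRef Data) {
  DataExtractor DE(Data, /*IsLittleEndian=*/true, /*AddressSize=*/8);
  uint64_t Offset = 0;       // 64-bit, as required for 64-bit DWARF
  return DE.getU32(&Offset); // advances Offset past the field on success
}
```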
Differential Revision: https://reviews.llvm.org/D65638
llvm-svn: 368014
Using 64-bit offsets is required to fully implement 64-bit DWARF.
As these classes are used in many different libraries they should
temporarily support both 32- and 64-bit offsets.
Differential Revision: https://reviews.llvm.org/D64006
llvm-svn: 368013
This patch changes the DAG legalizer to respect the operation actions
set by the target for strict floating-point operations. (Currently, the
legalizer will usually fall back to mutate to the non-strict action
(which is assumed to be legal), and only skip mutation if the strict
operation is marked legal.)
With this patch, whenever a strict operation is marked as Legal or
Custom, it is passed to the target as usual. Only if it is marked as
Expand will the legalizer attempt to mutate to the non-strict operation.
Note that this will now fail if the non-strict operation is itself
marked as Custom -- the target will have to provide a Custom definition
for the strict operation then as well.
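For target code this means the usual operation-action mechanism now applies to the strict opcodes as well; in some target's TargetLowering constructor that might look like the following (illustrative opcodes and types):
```
// Sketch (inside a TargetLowering constructor): a Legal/Custom strict
// node now reaches the target unchanged; only Expand falls back to
// mutating into the non-strict equivalent.
setOperationAction(ISD::STRICT_FSQRT, MVT::f64, Custom);
setOperationAction(ISD::STRICT_FSIN, MVT::f64, Expand);
```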
Reviewed By: hfinkel
Differential Revision: https://reviews.llvm.org/D65226
llvm-svn: 368012
Summary:
Before this patch MGATHER/MSCATTER is capable of representing all
common addressing modes, but only when illegal types are used.
This patch adds an IndexType property so more representations
are available when using legal types only.
Original modes:
- vector of bases
- base + vector of signed scaled offsets
New modes:
- base + vector of signed unscaled offsets
- base + vector of unsigned scaled offsets
- base + vector of unsigned unscaled offsets
The current behaviour of addressing modes for gather/scatter remains
unchanged.
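Presumably the property enumerates the signedness/scaling combinations implied by the list above, along these lines (a sketch, not necessarily the exact names):
```
// Sketch: one enumerator per signedness/scaling combination of the
// offset vector (the plain vector-of-bases form needs no index type).
enum MemIndexType {
  SIGNED_SCALED,    // base + vector of signed scaled offsets
  SIGNED_UNSCALED,  // base + vector of signed unscaled offsets
  UNSIGNED_SCALED,  // base + vector of unsigned scaled offsets
  UNSIGNED_UNSCALED // base + vector of unsigned unscaled offsets
};
```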
Patch by Paul Walker.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D65636
llvm-svn: 368008
Summary:
Similar to `Attributor::checkForAllCallSites`, we now provide such
functionality for instructions of a certain opcode through
`Attributor::checkForAllInstructions` and the convenient wrapper
`Attributor::checkForAllCallLikeInstructions`. This cleans up code,
avoids duplication, and simplifies the usage of liveness information.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65731
llvm-svn: 367961
Fixes e.g.
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap-ubsan/builds/14273
We can't left shift here because left shifting of a negative number is UB.
The same doesn't apply to unsigned arithmetic, but switching to unsigned
doesn't appear to stop ubsan from complaining, so we need to mask out the
high bits.
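The shape of the fix is roughly the following (a generic sketch, not the exact code):
```
#include <cstdint>

// Generic sketch: left-shifting a negative signed value is UB, so do the
// shift on an unsigned copy and mask the result to the intended width.
static int64_t shiftLeftSigned(int64_t V, unsigned Amt, unsigned Width) {
  uint64_t U = static_cast<uint64_t>(V) << Amt; // well-defined on unsigned
  uint64_t Mask = Width >= 64 ? ~uint64_t(0) : (uint64_t(1) << Width) - 1;
  return static_cast<int64_t>(U & Mask); // mask out the high bits
}
```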
llvm-svn: 367959
Summary:
Certain properties, e.g., an AttrKind, are not shared among all abstract
attributes. This patch extracts the functionality into a helper struct.
Reviewers: uenoku, sstefan1
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65712
llvm-svn: 367953
To remove boilerplate, mostly passing through values to the
AbstractAttribute base class, we extract the state into an IRPosition
helper. There is no functional change intended, but the IRPosition struct
will provide more functionality down the line.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65711
llvm-svn: 367952
Summary:
The new scheme is similar to the pass manager and dyn_cast scheme where
we identify classes by the address of a static member. This is better
than the old scheme in which we had to "invent" new Attributor enums if
there was no corresponding one.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65710
llvm-svn: 367951
Summary:
Instead of storing the reference to the InformationCache we now pass it
whenever it might be needed.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65709
llvm-svn: 367950
A function is "no-return" if we never reach a return instruction, either
because there are none or the ones that exist are dead.
Tests have been adjusted:
- either noreturn was added, or
- noreturn was avoided by modifying the code.
The new noreturn_{sync,async} tests make sure we do handle invoke
instructions with a noreturn (and potentially nounwind) callee
correctly, even in the presence of potential asynchronous exceptions.
llvm-svn: 367948
This commit adds host CPU name and sub-target features to the
`JITTargetMachineBuilder` created by `JITTargetMachineBuilder::detectHost()`.
Differential Revision: https://reviews.llvm.org/D65760
llvm-svn: 367944
Summary:
When the WebAssembly backend encounters a return type that doesn't
fit within i32, SelectionDAG performs sret demotion, adding an
additional argument to the start of the function that contains
a pointer to an sret buffer to use instead. However, this conflicts
with the emscripten sjlj lowering pass. There we translate calls like:
```
call {i32, i32} @foo()
```
into (in pseudo-llvm)
```
%addr = @foo
call {i32, i32} @__invoke_{i32,i32}(%addr)
```
i.e. we perform an indirect call through an extra function.
However, the sret transform now transforms this into
the equivalent of
```
%addr = @foo
%sret = alloca {i32, i32}
call {i32, i32} @__invoke_{i32,i32}(%sret, %addr)
```
(while simultaneously translating the implementation of @foo as well).
Unfortunately, this doesn't work out. The __invoke_ ABI expects
the function address to be the first argument, causing crashes.
There are several possible ways to fix this:
1. Implementing the sret rewrite at the IR level as well and performing
it as part of lowering to __invoke
2. Fixing the wasm backend to recognize that __invoke has a special ABI
3. A change to the binaryen/emscripten ABI to recognize this situation
This revision implements the middle option, teaching the backend to
treat __invoke_ functions specially in sret lowering. This is achieved
by
1) Introducing a new CallingConv ID for invoke functions
2) When this CallingConv ID is seen in the backend and the first argument
is marked as sret (a function pointer would never be marked as sret),
swapping the first two arguments.
Reviewed By: tlively, aheejin
Differential Revision: https://reviews.llvm.org/D65463
llvm-svn: 367935
When we remove instructions, cached references could still be live. This
patch avoids removing invoke instructions that are replaced by calls and
instead keeps them around but in a dead block.
llvm-svn: 367933
MSVC finds ambiguity where clang doesn't, and it looks like it's not going to be an easy fix.
Reverting while I figure out how to fix it.
This reverts r367916 (git commit aa15ec3c23)
This reverts r367920 (git commit 5d14efe279)
llvm-svn: 367932
Similar to other places where we transform invokes to calls we need to
be careful if the handler (=personality) can catch asynchronous
exceptions as they are not modeled as part of nounwind.
This is tested with D59978.
llvm-svn: 367931
Any addresses that we pass to llvm-symbolizer are going to be untagged,
while any HWASAN instrumented globals are going to be tagged in the
symbol table. Therefore we need to untag the addresses before using them.
Differential Revision: https://reviews.llvm.org/D65769
llvm-svn: 367926
FastISel has done this since the initial arm64 port was upstreamed, so
it seems there are no issues with doing this at -O0 for very small memcpys.
Gives a 0.2% geomean code size improvement on CTMark.
Differential Revision: https://reviews.llvm.org/D65758
llvm-svn: 367919
This patch changes our defualt legalization behavior for 16, 32, and
64 bit vectors with i8/i16/i32/i64 scalar types from promotion to
widening. For example, v8i8 will now be widened to v16i8 instead of
promoted to v8i16. This keeps the elements widths the same and pads
with undef elements. We believe this is a better legalization strategy.
But it carries some issues due to the fragmented vector ISA. For
example, i8 shifts and multiplies get widened and then later have
to be promoted/split into vXi16 vectors.
This has the potential to cause regressions so we wanted to get
it in early in the 10.0 cycle so we have plenty of time to
address them.
Next steps will be to merge tests that explicitly test the command
line option. And then we can remove the option and its associated
code.
llvm-svn: 367901
Patch D56593 by @courbet results in calls to `bcmp()` in some cases, should
the target support it, unless `TTI::MemCmpExpansionOptions()`
is overridden by the target.
In a proprietary benchmark we see a performance drop of about 12% on PNG
compression before this patch, though it passes all tests.
This patch mirrors X86 for AArch64 and initializes
`TTI::MemCmpExpansionOptions()` to then expand calls to `bcmp()` when
appropriate. No tuning of the parameters was performed, but, at this point,
it's enough to recover the performance drop above.
This problem also exists on ARM. Once a consensus is reached for AArch64, we
can work to fix ARM as well.
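Concretely, such a target override has roughly this shape (an illustrative sketch; the values here are made up, not the tuned parameters from this patch):
```
// Sketch of a target override enabling memcmp/bcmp expansion.
TargetTransformInfo::MemCmpExpansionOptions
AArch64TTIImpl::enableMemCmpExpansion(bool OptSize, bool IsZeroCmp) const {
  TargetTransformInfo::MemCmpExpansionOptions Options;
  // IsZeroCmp could further relax the options (not shown here).
  Options.MaxNumLoads = OptSize ? 4 : 8; // hypothetical limits
  Options.LoadSizes = {8, 4, 2, 1};      // try the widest loads first
  return Options;
}
```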
Authors:
- Evandro Menezes (@evandro) <e.menezes@samsung.com>
- Brian Rzycki (@brzycki) <b.rzycki@samsung.com>
Differential revision: https://reviews.llvm.org/D64805
llvm-svn: 367898
Summary:
The Arm Neoverse N1 Software Optimization Guide [1], Section "4.8 Branch
instruction alignment" states:
"Consider aligning subroutine entry points and branch targets to 32B
boundaries, within the bounds of the code-density requirements of the
program."
This patch sets the preferred function alignment on Neoverse N1 to 2^4=16B.
This was already the case in some of the latest Cortex-A CPUs. Benchmarking
in previous Cortex-A CPUs suggested that 16B alignment is already better
than the default. See commit d04ee305.
The reason we don't set it to 32B right now (as the optimisation guide
suggests) is that this will impact code size and perhaps the instruction
cache performance. Therefore we need benchmark numbers first.
I have also added testing for A75 and A76 that we were missing.
[1] https://developer.arm.com/docs/swog309707/latest
Reviewers: fhahn, greened, samparker, dmgreen
Reviewed By: dmgreen
Subscribers: dmgreen, javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65654
llvm-svn: 367894
This appears to slightly help patterns similar to what's
shown in PR42874:
https://bugs.llvm.org/show_bug.cgi?id=42874
...but not in the way requested.
That fix will require some later IR and/or backend pass to
decompose multiply/shifts into something more optimal per
target. Those transforms already exist in some basic forms,
but probably need enhancing to catch more cases.
https://rise4fun.com/Alive/Qzv2
llvm-svn: 367891
Summary:
This is a follow up to r367237. MachineFunction::getMachineMemOperand()
adds the offset parameter to the existing offset instead of resetting it.
So we need to reset the offset to the correct value after calling this
function.
Reviewers: arsenm
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65557
llvm-svn: 367881
This was switching to use a format store for a non-format store for
f16 types. Also fixes i16/f16 stores on targets without legal f16.
The corresponding loads also need to be fixed.
llvm-svn: 367872
Without context we assume SGPR. Allowing VGPR constants theoretically
helps avoid a copy. This seems to not actually work now, and the
choice isn't based on the use bank.
llvm-svn: 367871
I'm not sure what complications these present, but the current
argument lowering is pretty much directly copied from the DAG
lowering, so I assume these work as they should.
No tests because I'm lazy and things are getting pretty close to the
point where the existing calling-conventions.ll can be shared with
SelectionDAG.
llvm-svn: 367870
Summary:
This patch adds initial support for the SVE calling convention such that
SVE types can be passed as arguments and return values to/from a
subroutine.
The SVE AAPCS states [1]:
z0-z7 are used to pass scalable vector arguments to a subroutine,
and to return scalable vector results from a function. If a
subroutine takes arguments in scalable vector or predicate
registers, or if it is a function that returns results in such
registers, it must ensure that the entire contents of z8-z23 are
preserved across the call. In other cases it need only preserve the
low 64 bits of z8-z15, as described in §5.1.2.
p0-p3 are used to pass scalable predicate arguments to a subroutine
and to return scalable predicate results from a function. If a
subroutine takes arguments in scalable vector or predicate
registers, or if it is a function that returns results in these
registers, it must ensure that p4-p15 are preserved across the call.
In other cases it need not preserve any scalable predicate register
contents.
SVE predicate and data registers are passed indirectly (i.e. spilled to the
stack and the address passed) if they exceed the registers used for argument
passing defined by the PCS referenced above. Until SVE stack support is merged
we can't spill SVE registers to the stack, so currently an llvm_unreachable is
used where we will eventually handle this.
[1] https://static.docs.arm.com/100986/0000/100986_0000.pdf
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D65448
llvm-svn: 367859
The test case is based on the example from the post-commit thread for:
https://reviews.llvm.org/rGc9171bd0a955
This replaces the x86-specific simple-type check from:
rL367766
with a check in the DAGCombiner. Adding the check isn't
strictly necessary after the fix from:
rL367768
...but it seems likely that we're heading for trouble if
we are creating weird types in this transform.
I combined the earlier legality check into the initial
clause to simplify the code.
So we should only try the trunc/sext transform at the
earliest combine stage, but we limit the transform to
simple types anyway because the TLI hook is probably
too lax about what it considers a free truncate.
llvm-svn: 367834
Adds a two way mapping between the scalable vector IR type and
corresponding SelectionDAG ValueTypes.
Reviewers: craig.topper, jeroen.dobbelaere, fhahn, rengolin, greened, rovka
Reviewed By: greened
Differential Revision: https://reviews.llvm.org/D47770
llvm-svn: 367832
We process 2 elements at a time and expect the number of elements to be
even. Similar to D60690.
Reviewers: dmgreen, samparker, t.p.northover
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D65400
llvm-svn: 367831
Summary:
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790
Reviewers: courbet, jfb, jakehehrlich
Reviewed By: jfb
Subscribers: wuzish, jholewinski, arsenm, dschuff, nemanjai, jvesely, nhaehnle, javed.absar, sbc100, jgravelle-google, hiraditya, aheejin, kbarton, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, dexonsmith, PkmX, jocewei, jsji, s.egerton, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65514
llvm-svn: 367828
Add an explicit construction of the ArrayRef; gcc 5 and earlier don't
seem to select the ArrayRef constructor which takes a C array when the
construction is implicit.
Original commit message:
- Avoid a crash when IPRA calls ARMFrameLowering::determineCalleeSaves
with a null RegScavenger. Simply not updating the register scavenger
is fine because IPRA only cares about the SavedRegs vector; the actual
code of the function has already been generated at this point.
- Add a new hook to TargetRegisterInfo to get the set of registers which
can be clobbered inside a call, even if the compiler can see both
sides, by linker-generated code.
Differential revision: https://reviews.llvm.org/D64908
llvm-svn: 367819
Summary:
This contains various fixes:
- Explicitly determine and return the next noreturn instruction.
- If an invoke calls a noreturn function which is not nounwind we
keep the unwind destination live. This also means we require an
invoke. Though we can still add the unreachable to the normal
destination block.
- Check if the return instructions are dead after we look for calls
to avoid triggering an optimistic fixpoint in the presence of
assumed liveness information.
- Make the interface work with "const" pointers.
- Some simplifications
While additional tests are included, full coverage is achieved only with
D59978.
Reviewers: sstefan1, uenoku
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65701
llvm-svn: 367791
When a fixpoint is indicated the change status is known due to the
fixpoint kind. This simplifies a common code pattern by making the
connection explicit.
llvm-svn: 367790
Summary:
If the DerefBytesState (and thereby the DerefState) is invalid, we
reached a fixpoint for the whole DerefState as we will not
manifest/provide information then.
Reviewers: uenoku, sstefan1
Subscribers: hiraditya, bollu, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65586
llvm-svn: 367789
Summary:
The SimplifyDemandedVectorElts function can replace with undef
when no elements are demanded, but due to how it interacts with
TargetLoweringOpts, it can only do this when the node has
no other users.
Remove a now unneeded DAG combine from the X86 backend.
Reviewers: RKSimon, spatel
Reviewed By: RKSimon
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65713
llvm-svn: 367788
This adds big endian MVE patterns for bitcasts. They are defined in llvm as
being the same as a store of the existing type and the load into the new. This
means that they have to become a VREV between the two types, working in the
same way that NEON works in big-endian. This also adds some example tests for
bigendian, showing where code is and isn't different.
The main difference, especially from a testing perspective, is that vectors are
passed as v2f64, and so are VREVed into and out of call arguments, and the
parameters are passed in a v2f64 format. The same happens for inline assembly,
where the register class is used, so it is VREVed to a v16i8.
So some of this is probably not correct yet, but it is (mostly) self-consistent
and seems to be consistent with how llvm treats vectors. The rest we can
hopefully fix later. More details about big endian neon can be found in
https://llvm.org/docs/BigEndianNEON.html.
Differential Revision: https://reviews.llvm.org/D65581
llvm-svn: 367780
Currently, when a GVN or CSE optimization happens,
the llvm.preserve.access.index metadata is dropped.
This caused a problem for the BPF AbstractMemberAccess phase
as it relies on the metadata (debuginfo types).
This patch added proper hooks in lib/Transforms to
preserve !preserve.access.index metadata. A test
case is added to ensure metadata is preserved under CSE.
Differential Revision: https://reviews.llvm.org/D65700
llvm-svn: 367769
This is a further fix for PR42880.
Sanjay already disabled the X86 TLI hook for non-simple types,
but we should really call isTypeLegal here if we're after type
legalization.
llvm-svn: 367768
This avoids the crash from:
https://bugs.llvm.org/show_bug.cgi?id=42880
...and I think it's a proper constraint for the TLI hook.
But that example raises questions about what happens to get us
into this situation (created i29 types) and what happens later
(why does legalization die on those types), so I'm not sure if
we will resolve the bug based on this change.
llvm-svn: 367766
Summary:
The allocsize attribute refers to call parameters by index.
Thus, when we add the extra parameter in sjlj lowering, we
need to increment the referenced parameter in the allocsize
attribute to avoid angering the Verifier.
Reviewed By: aheejin
Differential Revision: https://reviews.llvm.org/D65470
llvm-svn: 367765
MachO/x86-64 UNSIGNED relocs are almost always 64-bit (length=3), but UNSIGNED
relocs of length=2 are allowed if the target resides in the low 32-bits. This
patch adds support for such relocations in JITLink (previously they would have
triggered an unsupported relocation error).
llvm-svn: 367764
For consistency with normal instructions and clarity when reading IR,
it's best to print the %0, %1, ... names of function arguments in
definitions (e.g. `define i32 @f(i32 %0, i32 %1)` rather than
`define i32 @f(i32, i32)`).
Also modifies the parser to accept IR in that form for obvious reasons.
llvm-svn: 367755
Fix for https://bugs.llvm.org/show_bug.cgi?id=42760. A tBR_JTr
instruction is duplicated by tail duplication, which results in
the same jumptable with the same label being emitted twice.
Fix this by marking tBR_JTr as not duplicable. The corresponding
ARM/Thumb instructions are already marked as not duplicable.
Additionally also mark tTBB_JT and tTBH_JT to be consistent with
Thumb2, even though this shouldn't be strictly necessary.
Differential Revision: https://reviews.llvm.org/D65606
llvm-svn: 367753
This is an old commit that exposed a bug in the GISel importer, which caused
non-truncating stores to be selected for truncating store patterns. Now that
that's been fixed in r367737, this can go back in.
llvm-svn: 367739
Same as what was done for gather/scatter/load/store in r367489.
Expandload/compressstore were delayed due to lack of constant
masking handling that has since been fixed.
llvm-svn: 367738
This is consistent with the target independent intrinsic handling.
Not sure this really matters since we just pull the constant out
using getZExtValue later.
llvm-svn: 367736
With the newly added debuginfo type
metadata for the preserve_array_access_index() intrinsic,
this patch does the following two things:
(1). checking validity before adding a new access index
to the access chain.
(2). calculating access byte offset in IR phase
BPFAbstractMemberAccess instead of when BTF is emitted.
For (1), the metadata provided by all preserve_*_access_index()
intrinsics are used to check whether the to-be-added type
is a proper struct/union member or array element.
For (2), with all available metadata, calculating access byte
offset becomes easier in BPFAbstractMemberAccess IR phase.
This enables us to remove the unnecessary complexity in
BTFDebug.cpp.
New tests are added for
. user explicit casting to array/structure/union
. global variable (or its dereference) as the source of base
. multi-dimensional arrays
. array access given a base pointer
. cases where we won't generate relocation if we cannot find
type name.
Differential Revision: https://reviews.llvm.org/D65618
llvm-svn: 367735
Modifying other AbstractAttributes to use Liveness AA and skip dead instructions.
Reviewers: jdoerfert, uenoku
Subscribers: hiraditya, llvm-commits
Differential revision: https://reviews.llvm.org/D65243
llvm-svn: 367725
These cases can come up when the extending loads combiner doesn't combine a
zext(load) to a zextload op, due to some other operation being in between, which
then gets simplified at a later stage.
Differential Revision: https://reviews.llvm.org/D65360
llvm-svn: 367723
Summary:
As part of this, define DenseMapInfo for MCRegister (and Register while I'm at it)
Depends on D65599
Reviewers: arsenm
Subscribers: MatzeB, qcolombet, jvesely, wdng, nhaehnle, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65605
llvm-svn: 367719
This really should have been part of 366765. For some reason, I forgot to handle the corresponding load side, and the readable test cases (using deopt vs statepoints) turned out to be overly reduced. Oops.
As seen in the test change, the problem was that we were using a load with alignment expectations rather than the unaligned variant when the stack alignment was less than the preferred type alignment.
llvm-svn: 367718
This adds support for generating all the loads or stores for a constant mask into a single basic block with no conditionals.
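For intuition, a masked store with the constant mask <1,0,1,1> can now be emitted as straight-line code, conceptually like this (a C++-level sketch of the emitted behavior, not the IR itself):
```
// Conceptual result of expanding a masked store with constant mask
// <1,0,1,1>: only the enabled lanes get a scalar store, no branches.
static void expandedMaskedStore(int *Ptr, const int Val[4]) {
  Ptr[0] = Val[0]; // lane 0: mask bit set
  // lane 1 skipped: mask bit clear
  Ptr[2] = Val[2]; // lane 2: mask bit set
  Ptr[3] = Val[3]; // lane 3: mask bit set
}
```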
Differential Revision: https://reviews.llvm.org/D65613
llvm-svn: 367715
libObject does not apply the Exported flag to symbols in COFF object files,
which can lead to assertions when the symbol flags initially derived from
IR added to the JIT clash with the flags seen by the JIT linker. Both
RTDyldObjectLinkingLayer and ObjectLinkingLayer have a workaround for this:
they can be told to override the flags seen by the linker with the flags
attached to the materialization responsibility object that was passed down
to the linker. This patch modifies LLJIT's setup code to enable this override
by default on platforms where COFF is the default object format.
llvm-svn: 367712
This reverses a questionable IR canonicalization when a truncate
is free:
sra (add (shl X, N1C), AddC), N1C -->
sext (add (trunc X to (width - N1C)), AddC')
https://rise4fun.com/Alive/slRC
More details in PR42644:
https://bugs.llvm.org/show_bug.cgi?id=42644
I limited this to pre-legalization for code simplicity because that
should be enough to reverse the IR patterns. I don't have any
evidence (no regression test diffs) that we need to try this later.
Differential Revision: https://reviews.llvm.org/D65607
llvm-svn: 367710
Add an equivalent ComplexRendererFns function for SelectNegArithImmed. This
allows us to select immediate adds of -1 by turning them into subtracts.
Update select-binop.mir to show that the pattern works.
Differential Revision: https://reviews.llvm.org/D65460
llvm-svn: 367700
Summary:
Since the for loop iterates over BB's predecessors, the branch conditions found must have BB as one of the successors.
For an unconditional branch the successor must be BB; an `assert` was added.
For a conditional branch, one of the two successors must be BB; the `else if` was simplified to `else` plus an `assert`.
Common instructions were sunk outside the if/else block.
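The resulting predecessor walk presumably looks something like this simplified sketch (illustrative, not the actual code):
```
#include <cassert>
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Simplified sketch of the tightened predecessor walk described above.
static void visitPredecessors(BasicBlock *BB) {
  for (BasicBlock *Pred : predecessors(BB)) {
    auto *BI = dyn_cast<BranchInst>(Pred->getTerminator());
    if (!BI)
      continue;
    if (BI->isUnconditional()) {
      assert(BI->getSuccessor(0) == BB && "successor must be BB");
      // ... handle the unconditional case ...
    } else {
      assert((BI->getSuccessor(0) == BB || BI->getSuccessor(1) == BB) &&
             "one of the two successors must be BB");
      // ... handle the conditional case ...
    }
  }
}
```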
Reviewers: sanjoy.google
Subscribers: jlebar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65596
llvm-svn: 367699
This fixes a crash in the case where the type info object is an alias
pointing to a non-zero offset within a global or is otherwise unanalyzable
by the stripPointerCasts() function. Looking through the alias is not the
right thing to do anyway for similar reasons as D65118.
Differential Revision: https://reviews.llvm.org/D65314
llvm-svn: 367696
As discussed in PR42696:
https://bugs.llvm.org/show_bug.cgi?id=42696
...but won't help that case yet.
We have an odd situation where a select operand equivalence fold was
implemented in InstSimplify when it could have been done more generally
in InstCombine if we allow dropping of {nsw,nuw,exact} from a binop operand.
Here's an example:
https://rise4fun.com/Alive/Xplr
%cmp = icmp eq i32 %x, 2147483647
%add = add nsw i32 %x, 1
%sel = select i1 %cmp, i32 -2147483648, i32 %add
=>
%sel = add i32 %x, 1
I've left the InstSimplify code in place for now, but my guess is that we'd
prefer to remove that as a follow-up to save on code duplication and
compile-time.
Differential Revision: https://reviews.llvm.org/D65576
llvm-svn: 367695
ThreadSafeModule/ThreadSafeContext are used to manage lifetimes and locking
for LLVMContexts in ORCv2. Prior to this patch contexts were locked as soon
as an associated Module was emitted (to be compiled and linked), and were not
unlocked until the emit call returned. This could lead to deadlocks if
interdependent modules that shared contexts were compiled on different threads:
when, during emission of the first module, the dependence was discovered the
second module (which would provide the required symbol) could not be emitted as
the thread emitting the first module still held the lock.
This patch eliminates this possibility by moving to a finer-grained locking
scheme. Each client holds the module lock only while they are actively operating
on it. To make this finer grained locking simpler/safer to implement this patch
removes the explicit lock method, 'getContextLock', from ThreadSafeModule and
replaces it with a new method, 'withModuleDo', that implicitly locks the context,
calls a user-supplied function object to operate on the Module, then implicitly
unlocks the context before returning the result.
  ThreadSafeModule TSM = getModule(...);
  size_t NumFunctions = TSM.withModuleDo(
      [](Module &M) { // <- context locked before entry to lambda.
        return M.size();
      });
Existing ORCv2 layers that operate on ThreadSafeModules are updated to use the
new method.
This method is used to introduce Module locking into each of the existing
layers.
llvm-svn: 367686
This patch adds support to the WholeProgramDevirt pass to perform
index-based WPD, which is invoked from ThinLTO during the thin link.
The ThinLTO backend (WPD import phase) behaves the same regardless of
whether the WPD decisions were made with the index-based or (the
existing) IR-based analysis.
Depends on D54815.
Reviewers: pcc
Subscribers: mehdi_amini, inglorion, eraman, steven_wu, dexonsmith, arphaman, dang, llvm-commits
Differential Revision: https://reviews.llvm.org/D55153
llvm-svn: 367679
The parameter to the -D (--dllname) option is the name of the dll
that llvm-dlltool produces an import library for. Even though this
is named "OutputFile" in the COFFModuleDefinition class, it's not
an output file name in the context of llvm-dlltool, but the name
of the DLL to create an import library for.
llvm-svn: 367676
This optimisation isn't generally profitable for ARM, because we can
save/restore many registers in the prologue and epilogue using the PUSH
and POP instructions, but mostly use individual LDR/STR instructions for
other spills.
Differential revision: https://reviews.llvm.org/D64910
llvm-svn: 367670
- Avoid a crash when IPRA calls ARMFrameLowering::determineCalleeSaves
with a null RegScavenger. Simply not updating the register scavenger
is fine because IPRA only cares about the SavedRegs vector; the actual
code of the function has already been generated at this point.
- Add a new hook to TargetRegisterInfo to get the set of registers which
can be clobbered inside a call, even if the compiler can see both
sides, by linker-generated code.
Differential revision: https://reviews.llvm.org/D64908
llvm-svn: 367669
This patch adds the ability to disable profile-based peeling, which peels
all iterations and as a result prohibits
further unroll/peeling attempts on that loop.
The motivation is to be able to separate peeling usage in the
pipeline: in the first part we peel only individual iterations if needed,
and later in the pipeline we apply full peeling, which prohibits further peeling.
Reviewers: reames, fhahn
Reviewed By: reames
Subscribers: hiraditya, zzheng, dmgreen, llvm-commits
Differential Revision: https://reviews.llvm.org/D64983
llvm-svn: 367668
1. raw_ostream supports ANSI colors so that you can write messages to
the terminal with colors. Previously, in order to change and reset
color, you had to call the `changeColor` and `resetColor` functions,
respectively.
So, if you wanted to print out "error: " in red, for example, you had to do
something like this:
OS.changeColor(raw_ostream::RED);
OS << "error: ";
OS.resetColor();
With this patch, you can write the same code as follows:
OS << raw_ostream::RED << "error: " << raw_ostream::RESET;
2. Add a boolean flag to raw_ostream so that you can disable colored
output. If you disable colors, changeColor, operator<<(Color),
resetColor and other color-related functions have no effect.
Most LLVM tools automatically print out messages using colors, and
you can disable it by passing a flag such as `--disable-colors`.
This new flag makes it easy to write code that works that way.
Differential Revision: https://reviews.llvm.org/D65564
llvm-svn: 367649
The current peeling cost model can decide to peel off not all iterations
but only some of them, to eliminate conditions on a phi. At the same time,
if any peeling happens, the door for further unroll/peel optimizations on that
loop closes, because part of the code assumes that if peeling happened
it was profile-based peeling and all iterations were peeled off.
To resolve this inconsistency, the patch provides a flag which states whether
full profile-based peeling is enabled, and the peeling cost model
is able to modify this field just as it does PeelCount now.
In a separate patch I will introduce an option to allow/disallow peeling based
on profile.
To avoid infinite loop peeling, the patch tracks the total number of peeled iterations
through the llvm.loop.peeled.count loop metadata.
Reviewers: reames, fhahn
Reviewed By: reames
Subscribers: hiraditya, zzheng, dmgreen, llvm-commits
Differential Revision: https://reviews.llvm.org/D64972
llvm-svn: 367647
Added code to truncate or shrink offsets so that we can continue
the base pointer search if the size has changed along the way.
Differential Revision: https://reviews.llvm.org/D65612
llvm-svn: 367646
Summary:
When combining `extsw` and `sldi` in `PPCMIPeephole`, we have to check
if `extsw`'s second operand is a virtual register, otherwise we might
get a miscompile.
Differential Revision: https://reviews.llvm.org/D65315
llvm-svn: 367645
Summary:
The old code can be simplified by defining the element type of TailCalls as `BasicBlock`, not `CallInst`. Also, use a range-based for loop instead of a plain for loop.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D64905
llvm-svn: 367644
Add PGO support at -O0 in the experimental new pass manager to sync the
behavior of the legacy pass manager.
Also change the gcc-flag-compatibility.c test to be more complete:
(1) change the match string to "profc" and "profd" to ensure the
instrumentation is happening.
(2) add IR format proftext so that PGO use compilation is tested.
Differential Revision: https://reviews.llvm.org/D64029
llvm-svn: 367628
The previous change to fix a crash in the vectorizer introduced
performance regressions. The condition to preserve the pointer
address space during the search is too tight; we only need to
match the size.
Differential Revision: https://reviews.llvm.org/D65600
llvm-svn: 367624
Summary:
Fixes: https://bugs.llvm.org/show_bug.cgi?id=42441
Used to print:
<unknown>:0: error: Cannot represent a difference across sections
(the location was null).
Now prints:
err.s:20:3: error: Cannot represent a difference across sections
i32.const foo-bar
^
Note: I looked at adding a test for this, but I don't think it is
worth it. We're not testing error formatting in the Wasm backend :)
Reviewers: sbc100, jgravelle-google
Subscribers: dschuff, aheejin, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65602
llvm-svn: 367619
AMDGPU sometimes has legal s16 and <2 x s16> operations, but all
registers are really 32-bit. An unmerge destination really should be
widened to a 32-bit register. If widening a scalarizing vector with a
target size that matches the vector size, bitcast to integer and
extract the relevant bits with shifts.
I'm not sure if this is the right place for this. This could arguably
be part of widenScalar for the result. I also have a growing feeling
that we're missing a bitcast legalize action.
llvm-svn: 367604
If a type is larger than a legal type and needs to be split, we would previously allow the multiply to be decomposed even if the split multiply is legal. Since the shift + add/sub code would also need to be split, it's not any better to decompose it.
This patch figures out what type the mul will eventually be legalized to and then uses that type for the query. I tried just returning false for illegal types and letting them get handled after type legalization, but then we can't recognize an i64 constant splat on 32-bit targets since it will be destroyed by type legalization. We could special case vectors of i64 to avoid that...
Differential Revision: https://reviews.llvm.org/D65533
llvm-svn: 367601
The note in the documentation suggests this restriction is a compile
time optimization for architectures that make heavy use of
bundling. Allowing virtual registers in a bundle is useful for some
(non-R600) AMDGPU use cases, and these are infrequent enough not to matter.
A more common AMDGPU use case has already been using virtual registers
in bundles since r333691, although never calling finalizeBundle on
them and manually creating the use/def list on the BUNDLE
instruction. This is also relatively infrequent, and only happens for
consecutive sequences of some load/store types.
llvm-svn: 367597
Summary:
DominatorTree is invalid after SimplifyCFG because of a missed `Changed = true` when simplifying a branch condition and removing an edge.
Resolves PR42272.
Reviewers: zhizhouy, manojgupta
Subscribers: jlebar, sanjoy.google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65490
llvm-svn: 367596
Summary:
LoopSimplify is preserved in the legacy pass manager, but not in the new pass manager.
Update LoopSimplify to preserve MemorySSA conditionally when the analysis is available (same behavior as the legacy pass manager).
Reviewers: chandlerc
Subscribers: mehdi_amini, jlebar, Prazek, george.burgess.iv, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65418
llvm-svn: 367594
This allows folding of the scalar epilogue loop (the tail) into the main
vectorised loop body when the loop is annotated with a "vector predicate"
metadata hint. To fold the tail, instructions need to be predicated (masked),
enabling/disabling lanes for the remainder iterations.
Differential Revision: https://reviews.llvm.org/D65197
llvm-svn: 367592
A TYPE_INDEX operand (as used by call_indirect) used to be represented
by the InstPrinter as a symbol (e.g. .Ltype_index0@TYPE_INDEX) which
was a bit of a mismatch with the WasmObjectWriter, which expects an
unnamed symbol to receive the signature from and then turn into a
reloc.
There was really no good way to round-trip this information. An earlier
version of this patch tried to attach the signature information using
a .functype, but that ran into trouble when the symbol was re-emitted
without a name. Removing the name was a giant hack also.
The current version changes the assembly syntax to have an inline
signature spec for TYPEINDEX operands that is always unnamed, which
is much more elegant both in syntax and in implementation (as now the
assembler is able to follow the same path as the regular backend).
Reviewers: sbc100, dschuff, aheejin, jgravelle-google, sunfish, tlively
Subscribers: arphaman, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64758
llvm-svn: 367590
Users of AAReturnedValues need to know if HasOverdefinedReturnedCalls
changed from false to true as it will impact the result of the return
value traversal (calls are not ignored anymore).
This will be tested with the tests in D59978.
llvm-svn: 367581
If an operand of the `lw/sw` instructions is a symbol, these instructions are
incorrectly lowered using a non-position-independent chain of instructions.
For PIC code we should use `lw/addiu` instructions with the `R_MIPS_GOT16`
and `R_MIPS_LO16` relocations respectively. Instead, LLVM generates
position-dependent code with the `R_MIPS_HI16` and `R_MIPS_LO16`
relocations.
This patch provides a fix for the bug by handling PIC case separately in
the `MipsAsmParser::expandMemInst`. The main idea is to generate a chain
of PIC instructions to load a symbol address into a register and then
load the address content.
The fix is not optimal and does not fix all PIC-related problems. This
is a task for subsequent patches.
Differential Revision: https://reviews.llvm.org/D65524
llvm-svn: 367580
This adds SimplifyMultipleUseDemandedBitsForTargetNode X86 support and uses it to allow us to peek through vector insertions to avoid dependencies on entire insertion chains.
llvm-svn: 367570
Summary:
GCC accepts both (reg) and 0(reg) for atomic instruction memory
operands. These instructions do not allow for an offset in their
encoding, so in the latter case, the 0 is silently dropped.
Due to how we have structured the RISCVAsmParser, the easiest way to add
support for parsing this offset is to add a custom AsmOperand and
parser. This parser drops all the parens, and just keeps the register.
This commit also adds a custom printer for these operands, which matches
the GCC canonical printer, printing both `(a0)` and `0(a0)` as `(a0)`.
Reviewers: asb, lewis-revill
Reviewed By: asb
Subscribers: s.egerton, hiraditya, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, jfb, PkmX, jocewei, psnobl, benna, Jim, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65205
llvm-svn: 367553
Summary:
While there is always a `Value::replaceAllUsesWith()`,
sometimes the replacement needs to be conditional.
I have only cleaned up a few cases where `replaceUsesWithIf()`
could be used, to both add test coverage
and show that it is actually useful.
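A minimal usage sketch (the `BB` capture is illustrative, not part of the API):
```
// Replace uses of V with NewV, but only for users inside block BB.
V->replaceUsesWithIf(NewV, [BB](Use &U) {
  auto *I = dyn_cast<Instruction>(U.getUser());
  return I && I->getParent() == BB;
});
```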
Reviewers: jdoerfert, spatel, RKSimon, craig.topper
Reviewed By: jdoerfert
Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, aheejin, george.burgess.iv, asbirlea, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65528
llvm-svn: 367548
Summary:
Sometimes we need to swap true-val and false-val of a `SelectInst`.
Having a function for that is nicer than hand-writing it each time.
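A sketch of what such a helper boils down to (assuming it lives on SelectInst; the free-function form here is illustrative):
```
// Swap the true/false operands of a select.
void swapSelectValues(SelectInst &Sel) {
  Value *TrueV = Sel.getTrueValue();
  Sel.setTrueValue(Sel.getFalseValue());
  Sel.setFalseValue(TrueV);
  // The real utility would presumably also swap any profile metadata
  // attached to the select.
}
```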
Reviewers: spatel, RKSimon, craig.topper, jdoerfert
Reviewed By: jdoerfert
Subscribers: jdoerfert, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65520
llvm-svn: 367547
The VREV64 instruction is apparently unpredictable if Qd == Qm, due to the
cross-beat nature of the instruction. This adds an earlyclobber to Qd, which
seems to be the same way we deal with this on other instructions like the
write-back on loads and stores.
Differential Revision: https://reviews.llvm.org/D65502
llvm-svn: 367544
Fix an issue where the compiler still allocates an emergency spill slot even
though it already decided to spill an extra callee-save register to use
as a scratch register.
Reviewers: gberry, thegameg, mstorsjo, t.p.northover
Reviewed By: thegameg
Differential Revision: https://reviews.llvm.org/D65504
llvm-svn: 367540
Fold load/store + G_GEP + G_CONSTANT when the
immediate in the G_CONSTANT fits into a 16-bit signed integer.
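A sketch of the legality check this implies (using `isInt` from MathExtras; the surrounding names are illustrative):
```
#include "llvm/Support/MathExtras.h"

// Only fold the G_CONSTANT into the load/store addressing if the
// immediate fits in a signed 16-bit field.
bool canFoldOffset(int64_t Imm) { return llvm::isInt<16>(Imm); }
```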
Differential Revision: https://reviews.llvm.org/D65507
llvm-svn: 367535
PowerPC has an instruction to load a vector in big-endian element order on a little-endian target.
So we can combine a vector load + reverse into a big-endian load to eliminate the swap instruction.
Also combine vector reverse + store into a big-endian store.
Differential Revision: https://reviews.llvm.org/D65063
llvm-svn: 367516
Start migrating to a form that will be compatible with the global isel
emitter. Also should fix some overly lax checks on the memory type,
which allowed mis-selecting some illegal atomics.
llvm-svn: 367506
This allows functions and globals to be reordered later in the linking phase
(using the -symbol-ordering-file) even though reordering will be limited to
the scope of the explicit section.
Patch by Rahman Lavaee!
Differential Revision: https://reviews.llvm.org/D65478
llvm-svn: 367501
This is extremely specific, but saves three instructions when it's
legal. I don't think the code can be usefully generalized.
Differential Revision: https://reviews.llvm.org/D65351
llvm-svn: 367492
Thumb1 has very limited immediate modes, so turning an "and" into a
shift can save multiple instructions.
It's possible to simplify the generated code for test2 and test3 in
cmp-and-fold.ll a little more, but I'll implement that as a followup.
Differential Revision: https://reviews.llvm.org/D65175
llvm-svn: 367491
X86 at least is able to use movmsk or kmov to move the mask to the scalar
domain. Then we can just use test instructions to test individual bits.
This is more efficient than extracting each mask element
individually.
I special cased v1i1 to use the previous behavior. This avoids
poor type legalization of bitcast of v1i1 to i1.
I've skipped expandload/compressstore as I think we need to
handle constant masks for those better first.
Many tests end up with duplicate test instructions due to tail
duplication in the branch folding pass. But the same thing
happens when constructing similar code in C. So it's not unique
to the scalarization.
Not sure if this lowering code will also be good for other targets,
but we're only testing X86 today.
Differential Revision: https://reviews.llvm.org/D65319
llvm-svn: 367489
We have custom code that ignores the normal promoting type legalization on less than 128-bit vector types like v4i8 to emit pavgb, paddusb, psubusb since we don't have the equivalent instruction on a larger element type like v4i32. If this operation appears before a store, we can be left with an any_extend_vector_inreg followed by a truncstore after type legalization. When truncstore isn't legal, this will normally be decomposed into shuffles and a non-truncating store. This will then combine away the any_extend_vector_inreg and shuffle leaving just the store. On avx512, truncstore is legal so we don't decompose it and we had no combines to fix it.
This patch adds a new DAG combine to detect this case and emit either an extract_store for 64-bit stores or an extractelement+store for 32- and 16-bit stores. This makes the avx512 codegen match the avx2 codegen for these situations. I'm restricting to only when -x86-experimental-vector-widening-legalization is false. When we're widening we're not likely to create this any_extend_inreg+truncstore combination. This means we should be able to remove this code when we flip the default. I would like to flip the default soon, but I need to investigate some performance regressions it's causing in our branch that I wasn't seeing on trunk.
Differential Revision: https://reviews.llvm.org/D65538
llvm-svn: 367488
Summary: Honoring no-signed-zeroes is also available as a user control through clang, separately, regardless of the fastmath or UnsafeFPMath context, so DAG guards should reflect this context.
Reviewers: spatel, arsenm, hfinkel, wristow, craig.topper
Reviewed By: spatel
Subscribers: rampitec, foad, nhaehnle, wuzish, nemanjai, jvesely, wdng, javed.absar, MaskRay, jsji
Differential Revision: https://reviews.llvm.org/D65170
llvm-svn: 367486
This is a preparatory patch for future work on supporting exit value rewriting in loops with a mixture of computable and non-computable exit counts. The intention is to be "mostly NFC" - i.e. not enable any interesting new transforms - but in practice, there are some small output changes.
The test differences are caused by cases where getSCEVAtScope can simplify a single entry phi without needing any knowledge of the loop.
llvm-svn: 367485
Summary:
This will make it possible to improve IPRA by taking into account
register usage in indirect calls.
NFC yet; this is just laying the groundwork to start building
up patches to take advantage of the information for improved register
allocation.
Reviewers: aditya_nandakumar, volkan, qcolombet, arsenm, rovka, aemerson, paquette
Subscribers: sdardis, wdng, javed.absar, hiraditya, jrtc27, atanasyan, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65488
llvm-svn: 367476
This feature instructs the backend to allow locally defined global variable
addresses to contain a pointer tag in bits 56-63 that will be ignored by
the hardware (i.e. TBI), but may be used by an instrumentation pass such
as HWASAN. It works by adding a MOVK instruction to the regular ADRP/ADD
sequence that sets bits 48-63 to the corresponding bits of the global, with
the linker bounds check disabled on the ADRP instruction to prevent the tag
from causing a link failure.
This implementation of the feature omits the MOVK when loading from or storing
to a global, which is sufficient for TBI. If the same approach is extended
to MTE, assuming that 0 is not configured as a catch-all tag, we will most
likely also need the MOVK in this case in order to avoid a tag mismatch.
Differential Revision: https://reviews.llvm.org/D65364
llvm-svn: 367475
This makes the field wider than MachineOperand::SubReg_TargetFlags so that
we don't end up silently truncating any higher bits. We should still catch
any bits truncated from the MachineOperand field as a consequence of the
assertion in MachineOperand::setTargetFlags().
Differential Revision: https://reviews.llvm.org/D65465
llvm-svn: 367474
to bail out in store merging dependence check.
We ran into a case where the dependence check in store merging bails out many
times for the same store and root nodes in a huge basic block. That increases
compile time by almost 100x. The patch adds a map to track how many times the
bailout happens for the same store and root, and if it is over a limit, stops
considering stores with the same root as merging candidates.
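A rough sketch of the bookkeeping described above (types and names are illustrative, not the actual patch):
```
#include "llvm/ADT/DenseMap.h"

// Count bailouts per (store, root) pair; past a limit, stop considering
// stores with this root as merge candidates.
DenseMap<std::pair<SDNode *, SDNode *>, unsigned> BailoutCount;

bool shouldGiveUp(SDNode *Store, SDNode *Root, unsigned Limit) {
  return ++BailoutCount[{Store, Root}] > Limit;
}
```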
Differential Revision: https://reviews.llvm.org/D65174
llvm-svn: 367472
Summary:
Verify that the incoming defs into phis are the last defs from the
respective incoming blocks.
When moving blocks, insertDef must RenameUses. Adding this verification
makes GVNHoist tests fail that uncovered this issue.
Reviewers: george.burgess.iv
Subscribers: jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63147
llvm-svn: 367451
Reverse the canonicalization of fneg relative to fmul/fdiv. That makes it
easier to implement the transforms (and possibly other fneg transforms) in
1 place because we can always start the pattern match from fneg (either the
legacy binop or the new unop).
There's a secondary practical benefit seen in PR21914 and PR42681:
https://bugs.llvm.org/show_bug.cgi?id=21914
https://bugs.llvm.org/show_bug.cgi?id=42681
...hoisting fneg rather than sinking seems to play nicer with LICM in IR
(although this change may expose analysis holes in the other direction).
1. The instcombine test changes show the expected neutral IR diffs from
reversing the order.
2. The reassociation tests show that we were missing an optimization
opportunity to fold away fneg-of-fneg. My reading of IEEE-754 says
that all of these transforms are allowed (regardless of binop/unop
fneg version) because:
"For all other operations [besides copy/abs/negate/copysign], this
standard does not specify the sign bit of a NaN result."
In all of these transforms, we always have some other binop
(fadd/fsub/fmul/fdiv), so we are free to flip the sign bit of a
potential intermediate NaN operand.
(If that interpretation is wrong, then we must already have a bug in
the existing transforms?)
3. The clang tests shouldn't exist as-is, but that's effectively a
revert of rL367149 (the test broke with an extension of the
pre-existing fneg canonicalization in rL367146).
Differential Revision: https://reviews.llvm.org/D65399
llvm-svn: 367447
When the vectorizer strips pointers, it can eventually end up with
pointers of two different sizes, and then SCEV will crash.
Differential Revision: https://reviews.llvm.org/D65480
llvm-svn: 367443
Summary:
According to the Armv8.1-M manual CSEL, CSINC, CSINV and CSNEG are
"constrained unpredictable" when SP is used as the source register Rn.
The assembler should diagnose this case.
Reviewers: momchil.velikov, dmgreen, ostannard, simon_tatham, t.p.northover
Reviewed By: ostannard
Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65505
llvm-svn: 367433
We have some code that marks instructions with struct operands as overdefined,
but if the instruction is a call to a function with tracked arguments,
this breaks the assumption that the lattice values of all call sites
are not overdefined and will be replaced by a constant.
This also re-adds the assertion from D65222, with additionally skipping
non-callsite uses. This patch should address the cases reported in which
the assertion fired.
Fixes PR42738.
Reviewers: efriedma, davide
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D65439
llvm-svn: 367430
Before combining insert_subvector(insert_subvector(vec, sub0, c0), sub1, c1) patterns, ensure that the subvectors are all the same type. On AVX512 targets especially we might have a mixture of 128/256 subvector insertions.
llvm-svn: 367429
Summary:
While the `-div-rem-pairs` pass can decompose the rem in a div+rem pair when the div-rem pair
is unsupported by the target, nothing performs the opposite fold.
We can't do that in InstCombine or DAGCombine since neither of those has access to TTI.
So it makes most sense to teach `-div-rem-pairs` about it.
If we matched rem in expanded form, we know we will be able to place div-rem pair
next to each other so we won't regress the situation.
Also, we shouldn't decompose rem if we matched already-decomposed form.
This is surprisingly straight-forward otherwise.
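For reference, the "expanded form" of a remainder is the identity `X - (X / Y) * Y == X % Y`; an IRBuilder sketch (illustrative, not the pass's code):
```
// Build the expanded form of srem that the pass can now recognize.
Value *Div = Builder.CreateSDiv(X, Y);
Value *Mul = Builder.CreateMul(Div, Y);
Value *Rem = Builder.CreateSub(X, Mul); // equals X srem Y
```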
The original patch was committed in rL367288 but was reverted in rL367289
because it exposed pre-existing RAUW issues in internal data structures
of the pass; those now have been addressed in a previous patch.
https://bugs.llvm.org/show_bug.cgi?id=42673
Reviewers: spatel, RKSimon, efriedma, ZaMaZaN4iK, bogner
Reviewed By: bogner
Subscribers: bogner, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65298
llvm-svn: 367419
Summary:
`DivRemPairs` internally creates two maps:
* {sign, dividend, divisor} -> div instruction
* {sign, dividend, divisor} -> rem instruction
Then it iterates over rem map, and looks if there is an entry
in div map with the same key. Then depending on some internal logic
it may RAUW rem instruction with something else.
But if that rem instruction is an input to other div/rem,
then it was used as a key in these maps, so the old value (used in key)
is now dangling, because RAUW didn't update those maps.
And we can't even RAUW map keys in general, there's `ValueMap`,
but we don't have a single `Value` as key...
The bug was discovered via D65298, and the test there exists.
Now, i'm not sure how to expose this issue in trunk.
The bug is clearly there if i change the map keys to be `AssertingVH`/`PoisoningVH`,
but i guess this didn't miscompile anything thus far?
I really don't think this is benign without that patch.
The fix is actually rather straight-forward - instead of trying to somehow
shoe-horn `ValueMap` here (doesn't fit, key isn't just `Value`), or writing a new
`ValueMap` with key being a struct of `Value`s, we can just have an intermediate
data structure - a vector, each entry containing a matching `Div, Rem` pair,
and pre-filling it before doing any modifications.
This way we won't need to query map after doing RAUW, so no bug is possible.
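A sketch of that intermediate structure (names illustrative):
```
// Matched pairs are collected up front; after this, no map lookups are
// needed, so a RAUW cannot invalidate any key.
struct DivRemPair {
  Instruction *Div;
  Instruction *Rem;
};
SmallVector<DivRemPair, 8> Pairs; // pre-filled before any modification
```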
Reviewers: spatel, bogner, RKSimon, craig.topper
Reviewed By: spatel
Subscribers: hiraditya, hans, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65451
llvm-svn: 367417
Summary:
This adds the 'f' inline assembly constraint, as supported by GCC. An
'f'-constrained operand is passed in a floating point register. Exactly
which kind of floating-point register (32-bit or 64-bit) is decided
based on the operand type and the available standard extensions (-f and
-d, respectively).
This patch adds support in both the clang frontend, and LLVM itself.
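For example, an 'f'-constrained operand might be used from C/C++ like this (a sketch assuming the D extension, so doubles land in 64-bit FP registers):
```
double add_f64(double a, double b) {
  double r;
  // 'f' requests a floating-point register for each operand.
  asm("fadd.d %0, %1, %2" : "=f"(r) : "f"(a), "f"(b));
  return r;
}
```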
Reviewers: asb, lewis-revill
Reviewed By: asb
Subscribers: hiraditya, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, s.egerton, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D65500
llvm-svn: 367403
Use a switch instead of many isa<> while checking for supported
values. Also be explicit about which cast instructions are supported;
this allows the removal of SIToFP from GenerateSignBits.
llvm-svn: 367402
Summary:
* Loads and stores in SVE2 are gather/scatter not contiguous, fixed by
renaming multiclasses to reflect this and also updated comments.
* Remove aliases from load/store multiclasses that reflect the behaviour
of the original form.
* Fix bug in scatter store implementation, vector list should be used as
input, not output.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D65392
llvm-svn: 367398
This adds the required extension to RISC-V's getRegForInlineAsmConstraint
in order to be able to correctly distinguish between the 32 and 64-bit
floating point registers when the generic fX name appears in inlineasm
clobber constraints. It also adds a check to validate that callee saved
floating point registers are only saved in this case when a hard-float
ABI is selected.
Differential Revision: https://reviews.llvm.org/D64751
llvm-svn: 367397
Summary:
* Clarify comment with SVE2 for predicated shifts and move next to other
shift instructions.
* Clarify comments for various instructions.
* Move FCVTX instruction next to other fp conversions.
* Move FLOGB to next to other fp instructions and fix description.
* Remove "cons" from non-constructive multiclass for bitwise shift-right
and accumulate instructions.
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D65390
llvm-svn: 367396
Summary:
This patch fixes a bug in the following instructions that should have been
implemented as destructive. A destructive instruction is an instruction where
one of the source registers also acts as the destination register. Therefore,
the contents of the source register, when the instruction begins execution, are
replaced by the result of the instruction when the instruction completes
execution [1]:
* SRI/SLI
* EORBT/EORTB
* TBX
* Narrowing top instructions
* FP convert precision instructions
These changes are non-functional from the assembler/diassembler point-of-view
but are necessary for correct codegen.
[1] https://static.docs.arm.com/ddi0584/ae/DDI0584A_e_SVE_supp_armv8A.pdf
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D65389
llvm-svn: 367394
PowerPC has an instruction to load a vector in big-endian element order on a little-endian target.
So we can combine a vector load + reverse into a big-endian load to eliminate the swap instruction.
Also combine vector reverse + store into a big-endian store.
llvm-svn: 367382
We had a couple of places which still returned 10 as the maximum
occupancy. Fixed.
Also print a comment about occupancy as the compiler sees it.
Differential Revision: https://reviews.llvm.org/D65423
llvm-svn: 367381
AMDGPU uses some custom code predicates for testing alignments.
I'm still having trouble comprehending the behavior of predicate bits
in the PatFrag hierarchy. Any attempt to abstract these properties
unexpectedly fails to apply them.
llvm-svn: 367373
Add a new serializer, using a binary format based on the LLVM bitstream
format.
This format provides a way to serialize the remarks in two modes:
1) Separate mode: the metadata is separate from the remark entries.
2) Standalone mode: the metadata and the remark entries are in the same
file.
The format contains:
* a meta block: container version, container type, string table,
external file path, remark version
* a remark block: type, remark name, pass name, function name, debug
file, debug line, debug column, hotness, arguments (key, value, debug
file, debug line, debug column)
A string table is required for this format, which will be dumped in the
meta block to be consumed before parsing the remark blocks.
On clang itself, we noticed a size reduction of 13.4x compared to YAML,
and a compile-time reduction of between 1.7% and 3.5% on CTMark.
Differential Revision: https://reviews.llvm.org/D63466
Original llvm-svn: 367364
Revert llvm-svn: 367370
llvm-svn: 367372
Add an option to control whether or not to enable store merging in dag combiner
so we can workaround some bugs more easily.
Differential Revision: https://reviews.llvm.org/D65482
llvm-svn: 367365
Add a new serializer, using a binary format based on the LLVM bitstream
format.
This format provides a way to serialize the remarks in two modes:
1) Separate mode: the metadata is separate from the remark entries.
2) Standalone mode: the metadata and the remark entries are in the same
file.
The format contains:
* a meta block: container version, container type, string table,
external file path, remark version
* a remark block: type, remark name, pass name, function name, debug
file, debug line, debug column, hotness, arguments (key, value, debug
file, debug line, debug column)
A string table is required for this format, which will be dumped in the
meta block to be consumed before parsing the remark blocks.
On clang itself, we noticed a size reduction of 13.4x compared to YAML,
and a compile-time reduction of between 1.7% and 3.5% on CTMark.
Differential Revision: https://reviews.llvm.org/D63466
llvm-svn: 367364
Summary:
LoopRotate may simplify instructions, leading to the new instructions not having memory accesses created for them.
Allow this behavior, by allowing the new access to be null when the template is null, and looking upwards for the proper defined access when dealing with simplified instructions.
Reviewers: george.burgess.iv
Subscribers: jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65338
llvm-svn: 367352
Summary:
- Use the passed `DL` directly, as retrieving the data layout from the CS by
checking the called function is not reliable. With an indirect function
call, there is no called function.
Subscribers: jholewinski, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65468
llvm-svn: 367349
Summary:
In D37215, AssumeLikeInstructions are regarded as `willreturn`. In this patch, the annotation is added to those which don't have `willreturn` yet (`sideeffect`, `object_size`, `experimental_widenable_condition`).
Reviewers: jdoerfert, nikic, sstefan1
Reviewed By: nikic
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65455
llvm-svn: 367342
Summary:
return_call and return_call_indirect are only valid if the return
types of the callee and caller match. We were previously not enforcing
that, which was producing invalid modules.
Reviewers: aheejin
Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65246
llvm-svn: 367339
LoopInfo can be easily preserved by passing it to the functions that
modify the CFG (SplitCriticalEdge and MergeBlockIntoPredecessor).
SplitCriticalEdge also preserves LoopSimplify and LCSSA form when passing in
LoopInfo. The test case shows that we preserve LoopSimplify and
LoopInfo. Adding addPreservedID(LCSSAID) did not preserve LCSSA for some
reason.
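A sketch of what threading the analyses through looks like (using the existing CriticalEdgeSplittingOptions; the call site is illustrative):
```
CriticalEdgeSplittingOptions Options(&DT, &LI);
Options.setPreserveLCSSA();
// SplitCriticalEdge updates DT/LI (and keeps LCSSA form) itself when
// they are passed in, so the pass can declare them preserved.
BasicBlock *NewBB = SplitCriticalEdge(TI, SuccNum, Options);
```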
Also I am not sure if it is possible to preserve those in the new pass
manager, as they aren't analysis passes.
Reviewers: reames, hfinkel, davide, jdoerfert
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D65137
llvm-svn: 367332
The default mode is separate, where the metadata is serialized
separately from the remarks.
Another mode is the standalone mode, where the metadata is serialized
before the remarks, on the same stream.
llvm-svn: 367328
Summary:
This patch extends the use of the OptimizationRemarkEmitter to provide
information about loops that are not fused, and loops that are not eligible for
fusion. In particular, it uses the OptimizationRemarkAnalysis to identify loops
that are not eligible for fusion and the OptimizationRemarkMissed to identify
loops that cannot be fused.
It also reuses the statistics to provide the messages used in the
OptimizationRemarks. This provides common message strings between the
optimization remarks and the statistics.
I would like feedback on this approach, in general. If people are OK with this,
I will flesh out additional remarks in subsequent commits.
Subscribers: hiraditya, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63844
llvm-svn: 367327
Empty condition strings are considered always true. This removes a lot
of clutter from the generated matcher tables.
This shrinks the source size of AMDGPUGenDAGISel.inc from 7.3M to
6.1M.
llvm-svn: 367326
Addresses a number of comments made on D64652 after committing:
- Reorders function decls in the TargetLoweringObjectFileXCOFF class.
- Fix comment in MCSectionXCOFF to include description of external reference
csects.
- Convert several llvm_unreachables to report_fatal_error
- Convert several dyn_casts to casts as they are expected not to fail.
- Avoid copying DataLayout object.
llvm-svn: 367324
Summary:
I have stumbled into this by accident while preparing to extend backend `x s% C ==/!= 0` handling.
While we did happen to handle this fold in most of the cases,
the folding is indirect - we fold `x u% y` to `x & (y-1)` (iff `y` is power-of-two),
or first turn `x s% -y` to `x u% y`; that does handle most of the cases.
But we can't turn `x s% INT_MIN` to `x u% -INT_MIN`,
and thus we end up being stuck with `(x s% INT_MIN) == 0`.
There is no such restriction for the more general fold:
https://rise4fun.com/Alive/IIeS
To be noted, the fold does not enforce that `y` is a constant,
so it may indeed increase instruction count.
This is consistent with what `x u% y`->`x & (y-1)` already does.
I think it makes sense, it's at most one (simple) extra instruction,
while `rem`ainder is really much more un-simple (and likely **very** costly).
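A sketch of the shape of the fold for a constant power-of-two divisor, which covers INT_MIN as a bit pattern (illustrative IRBuilder code, not the actual patch):
```
// (X s% C) ==/!= 0  -->  (X & (C-1)) ==/!= 0  when C is a power of two,
// including C == INT_MIN, where "x u% -C" is not expressible.
Value *Masked = Builder.CreateAnd(X, ConstantInt::get(Ty, CVal - 1));
Value *Cmp = Builder.CreateICmpEQ(Masked, ConstantInt::getNullValue(Ty));
```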
Reviewers: spatel, RKSimon, nikic, xbolva00, craig.topper
Reviewed By: RKSimon
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65046
llvm-svn: 367322
PR42819 showed an issue that we couldn't handle the case where we demanded a 'sub-sub-vector' of the SUBV_BROADCAST 'sub-vector' source.
This patch recognizes these cases and extracts the sub-sub-vector instead of trying to broadcast to a type smaller than the 'sub-vector' source.
llvm-svn: 367306
The code is now in a good enough state to pass the bunch of tests that
I have run (after fixing the bugs), so let's enable it by default.
Differential Revision: https://reviews.llvm.org/D65277
llvm-svn: 367297
Revert the hardware loop upon finding a LoopEnd that doesn't target
the loop header, instead of asserting a failure.
Differential Revision: https://reviews.llvm.org/D65268
llvm-svn: 367296
test-suite/MultiSource/Benchmarks/DOE-ProxyApps-C/miniGMG broke:
Only PHI nodes may reference their own value!
%sub33 = srem i32 %sub33, %ranks_in_i
This reverts commit r367288.
llvm-svn: 367289
Summary:
While the `-div-rem-pairs` pass can decompose the rem in a div+rem pair when the div-rem pair
is unsupported by the target, nothing performs the opposite fold.
We can't do that in InstCombine or DAGCombine since neither of those has access to TTI.
So it makes most sense to teach `-div-rem-pairs` about it.
If we matched rem in expanded form, we know we will be able to place div-rem pair
next to each other so we won't regress the situation.
Also, we shouldn't decompose rem if we matched already-decomposed form.
This is surprisingly straight-forward otherwise.
https://bugs.llvm.org/show_bug.cgi?id=42673
Reviewers: spatel, RKSimon, efriedma, ZaMaZaN4iK, bogner
Reviewed By: bogner
Subscribers: bogner, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65298
llvm-svn: 367288
This patch adds a VFS that can be overlaid on top of another VFS
to record file system accesses using the FileCollector.
This can help to gather files that are needed for reproducers.
Differential Revision: https://reviews.llvm.org/D65411
llvm-svn: 367278
Put the list of fixed metadata kinds in one place.
Testing: check-llvm with+without LLVM_ENABLE_MODULES=On
Differential Revision: https://reviews.llvm.org/D64437
llvm-svn: 367257
Globals that are associated with globals with type metadata need to appear
in the merged module because they will reference the global's section directly.
Differential Revision: https://reviews.llvm.org/D65312
llvm-svn: 367242
Summary:
The LoadStoreOptimizer was creating instructions with 2
MachineMemOperands, which meant they were assumed to alias with all other instructions,
because MachineInstr::mayAlias() returns true when an instruction has multiple
MachineMemOperands.
This was preventing these instructions from being merged again, and was
giving the scheduler less freedom to reorder them.
Reviewers: arsenm, nhaehnle
Reviewed By: arsenm
Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65036
llvm-svn: 367237
As discussed on rL367171, we have a problem where the depth recursion used in combineX86ShufflesRecursively was subtly different to computeKnownBits etc. - it starts at Depth=1 instead of Depth=0 like the others and has a different maximum recursion depth.
This NFC patch fixes the recursion depth to start at 0, so we can more easily reuse depth values in calls from combineX86ShufflesRecursively and its helper functions in computeKnownBits etc.
llvm-svn: 367232
For llvm/test/MC/RISCV/rv64i-aliases-invalid.s, UBSan reports:
lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp:371:9: runtime error:
load of value 3879186881, which is not a valid value for type
'RISCVMCExpr::VariantKind'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior
lib/Target/RISCV/AsmParser/RISCVAsmParser.cpp:371:9 in
It turns out that evaluateConstantImm does not set `VK` and it remains
uninitialized when doing comparisons in `isImmXLenLI()`.
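The usual fix for this pattern is to give the kind a defined value before the call that may leave it untouched (a sketch; `VK_RISCV_None` is the existing "no modifier" enumerator):
```
// Initialize VK so later comparisons never read uninitialized storage.
RISCVMCExpr::VariantKind VK = RISCVMCExpr::VK_RISCV_None;
```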
Differential Revision: https://reviews.llvm.org/D65347
llvm-svn: 367230
The backend already does this via isNegatibleForFree(),
but we may want to alter the fneg IR canonicalizations
that currently exist, so we need to try harder to fold
fneg in IR to avoid regressions.
llvm-svn: 367227
Summary: As clarified in D53184, volatile load and store do not trap. Therefore, we should remove volatile checks for instructions in `isGuaranteedToTransferExecutionToSuccessor`.
Reviewers: jdoerfert, efriedma, nikic
Reviewed By: nikic
Subscribers: hiraditya, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65375
llvm-svn: 367226
Summary:
The existing isDivergent(Value) methods query whether a value is
divergent at its definition. However even if a value is uniform at its
definition, a use of it in another basic block can be divergent because
of divergent control flow between the def and the use.
This patch adds new isDivergent(Use) methods to DivergenceAnalysis,
LegacyDivergenceAnalysis and GPUDivergenceAnalysis.
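A sketch of how a client might use the new query (the callback is illustrative):
```
// A value can be uniform at its definition yet divergent at a given
// use, due to divergent control flow between the def and the use.
for (Use &U : V->uses())
  if (DA.isDivergent(U))
    handleDivergentUse(U); // illustrative
```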
This might allow D63953 or other similar workarounds to be removed.
Reviewers: alex-t, nhaehnle, arsenm, rtaylor, rampitec, simoll, jingyue
Reviewed By: nhaehnle
Subscribers: jfb, jvesely, wdng, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65141
llvm-svn: 367218
- Remove some unused typedefs.
- Rename BinOpChain struct to MulCandidate.
- Remove the size method of MulCandidate.
- Store only the first input of the ValueList provided to
MulCandidate, as it's the only value we care about. This means we
don't have to perform any ugly (and unnecessary) iterations of the
list later on.
llvm-svn: 367208
Summary:
If isel is presented with <2 x half> vectors then it will correctly select
v_pk_fma style instructions.
If isel is presented with e.g. <4 x half> vectors it will scalarize, unlike for
other instruction types (such as fadd, fmul etc.)
Added extra support to enable this. Updated one of the tests to include a test
for this (as well as extending the test to GFX9)
Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, tpr, t-tye, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65325
Change-Id: I50a4577a3f8223fb53992af3b7d26121f65b71ee
llvm-svn: 367206
The pmaddwd transform inserts a truncate; if that truncate would end up
creating additional instructions instead of making a zext
narrower, then we shouldn't do it.
I've restricted this to only sse4.1 targets since on prior
targets the zext will be done in stages. So the truncate will
probably not create additional instructions. Might need some
more investigation of mul shrinking and the other pmaddwd
transform to be sure this is the right decision.
There might be a slight regression on AVX1 targets due to add
splitting. Hard to say for sure. Maybe we need to look into
using the vector reduction flag to use 2 narrow loads and a
blend instead of extracting and inserting.
llvm-svn: 367198
This reverts r340478 and r340631 and replaces them with a simpler
method of just letting DAG combine revisit the nodes to handle
the other operand.
llvm-svn: 367195
The backend already does this via isNegatibleForFree(),
but we may want to alter the fneg IR canonicalizations
that currently exist, so we need to try harder to fold
fneg in IR to avoid regressions.
llvm-svn: 367194
This adds the patterns required to transform xor P0, -1 to a VPNOT. The
instruction operands have to change a little for this, adding an in and an out
VCCR reg and using a custom DecodeMVEVPNOT for the decode.
Differential Revision: https://reviews.llvm.org/D65133
llvm-svn: 367192
These are some better patterns for converting between predicates and floating
points. Much like the extends, we select "1"/"-1" or "0" depending on the
predicate value. Or we perform a compare against 0 to convert to a predicate.
Differential Revision: https://reviews.llvm.org/D65103
llvm-svn: 367191
This allows us to peek through BITCASTs, attempt to simplify the source operand, and then bitcast back.
This reapplies rL367091 which was reverted at rL367118 - we were inconsistently peeking through the bitcasts to the source value.
Fixes PR42777
llvm-svn: 367174
The test case from:
https://bugs.llvm.org/show_bug.cgi?id=42771
...shows a ~30x slowdown caused by the awkward loop iteration (rL207302) that is
seemingly done just to avoid invalidating the instruction iterator. We can instead
delay instruction deletion until we reach the end of the block (or we could delay
until we reach the end of all blocks).
There's a test diff here for a degenerate case with llvm.assume that is not
meaningful in itself, but serves to verify this change in logic.
This change probably doesn't result in much overall compile-time improvement
because we call '-instsimplify' as a standalone pass only once in the standard
-O2 opt pipeline currently.
Differential Revision: https://reviews.llvm.org/D65336
llvm-svn: 367173
Recommit rL367100 which was reverted at rL367141. Until PR42777 is fixed, we no longer get the benefits of peeking through bitcasts but it does still remove a GetDemandedBits user and gives us the equivalent combines.
llvm-svn: 367172
If anything called the recursive isKnownNeverNaN/computeKnownBits/ComputeNumSignBits/SimplifyDemandedBits/SimplifyMultipleUseDemandedBits with an incorrect depth then we could continue to recurse if we'd already exceeded the depth limit.
This replaces the limit check (Depth == 6) with a (Depth >= 6) to make sure that we don't circumvent it.
This causes a couple of regressions as a mixture of calls (SimplifyMultipleUseDemandedBits + combineX86ShufflesRecursively) were calling with depths that were already over the limit. I've fixed SimplifyMultipleUseDemandedBits to not do this. combineX86ShufflesRecursively is trickier as we get a lot of regressions if we reduce its own limit from 8 to 6 (it also starts at Depth == 1 instead of Depth == 0 like the others....) - I'll see what I can do in future patches.
llvm-svn: 367171
We're getting reports of massive compile time increases because SimplifyMultipleUseDemandedBits was losing track of the depth and not exiting early. No repro yet, but consider this a pre-emptive commit.
llvm-svn: 367169
Add partial instruction selection for intrinsics like this:
```
declare i32 @llvm.aarch64.stlxr(i64, i32*)
```
(This only handles the case where a G_ZEXT is feeding the intrinsic.)
Also make sure that the added store instruction actually has the memory op from
the original G_STORE.
Update select-stlxr-intrin.mir and arm64-ldxr-stxr.ll.
Differential Revision: https://reviews.llvm.org/D65355
llvm-svn: 367163
This adds support to the yaml remark parser to be able to parse remarks
directly from the metadata.
This supports parsing separate metadata and following the external file
with the associated metadata, and also a standalone file containing
metadata + remarks all together.
Original llvm-svn: 367148
Revert llvm-svn: 367151
This has a fix for gcc builds.
llvm-svn: 367155
unreachable loop.
updatePredecessorProfileMetadata in jumpthreading tries to find the
first dominating predecessor block for a PHI value by searching upwards
the predecessor block chain.
But jumpthreading may see some temporary IR state which contains
unreachable bbs that have not been cleaned up. If an unreachable loop happens to
be on the predecessor block chain, chasing the predecessor
block will run into an infinite loop.
The patch fixes it.
Differential Revision: https://reviews.llvm.org/D65310
llvm-svn: 367154
This adds support to the yaml remark parser to be able to parse remarks
directly from the metadata.
This supports parsing separate metadata and following the external file
with the associated metadata, and also a standalone file containing
metadata + remarks all together.
llvm-svn: 367148
Adds machine operand lowering for MCSymbolSDNodes to the PowerPC
backend. This is needed to produce call instructions in assembly for AIX
because the callee operand is a MCSymbolSDNode. The test is XFAIL'ed for
asserts due to a (valid) assertion in PEI that the AIX ABI isn't supported yet.
Differential Revision: https://reviews.llvm.org/D63738
llvm-svn: 367133
Summary:
The bitperm feature flag is now prefixed with SVE2, as it is for all other SVE2
extensions.
Patch by Maciej Gabka.
Reviewers: sdesmalen, rovka, chill, SjoerdMeijer, rengolin
Reviewed By: SjoerdMeijer, rengolin
Differential Revision: https://reviews.llvm.org/D65327
llvm-svn: 367124
In preparation for AIX support in FrameLowering: replace a number of literal
'8' that represent the stack offset of the condition register save area with
a member in PPCFrameLowering.
Patch by Chris Bowler.
llvm-svn: 367111
A void return used to have an unsigned with value 0 for the virtual register,
but with the addition of the Register class and changes to the arguments to
lowerCall this is no longer valid.
Check for void return by inspecting the Ty field in OrigRet.
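A sketch of the check (OrigRet being the return's ArgInfo; surrounding code illustrative):
```
// Detect a void return by its IR type rather than by a 0 virtual register.
if (!OrigRet.Ty->isVoidTy()) {
  // ... lower the return value as before ...
}
```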
Differential Revision: https://reviews.llvm.org/D65321
llvm-svn: 367107
(Y * (1.0 - Z)) + (X * Z) -->
Y - (Y * Z) + (X * Z) -->
Y + Z * (X - Y)
This is part of solving:
https://bugs.llvm.org/show_bug.cgi?id=42716
Factoring eliminates an instruction, so that should be a good canonicalization.
The potential conversion to FMA would be handled by the backend based on target
capabilities.
Differential Revision: https://reviews.llvm.org/D65305
llvm-svn: 367101
Add llvm.amdgcn.softwqm intrinsic which behaves like llvm.amdgcn.wqm
only if there is other WQM computation in the shader.
Reviewers: nhaehnle, tpr
Reviewed By: nhaehnle
Subscribers: arsenm, kzhuravl, jvesely, wdng, yaxunl, dstuttard, t-tye, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64935
llvm-svn: 367097
Embedded Trace Extension and Trace Buffer Extension are optional
future architecture extensions.
(cf. https://developer.arm.com/architectures/cpu-architecture/a-profile/exploration-tools)
Their system registers are documented here:
https://developer.arm.com/docs/ddi0601/a
ETE shares register names with ETM. One exception is the ETE
TRCEXTINSELR0 register, which has the same encoding as the ETM
TRCEXTINSELR register (but different semantics). This patch treats
them as aliases: the assembler will accept both names, emitting
identical encoding, and the disassembler will keep disassembling
to TRCEXTINSELR.
Differential Revision: https://reviews.llvm.org/D63707
llvm-svn: 367093
Eventually all of these will be moved over, but we create nodes in GetDemandedBits recursion at the moment which causes regressions when we try to remove them all.
llvm-svn: 367092
Both WhileLoopStart and LoopEnd may get turned into a cmp and br pair,
so add an implicit def to these pseudo instructions in case that WLS
and LE aren't generated.
Differential Revision: https://reviews.llvm.org/D65275
llvm-svn: 367089
Summary:
This is an alternate approach to D57970.
Currently funclets reuse the same stack slots that are used in the
parent function for saving callee-saved xmm registers. If the parent
function modifies a callee-saved xmm register before an exception is
thrown, the catch handler will overwrite the original saved value.
This patch allocates space in the funclets' stack for saving callee-saved xmm
registers and uses RSP instead of RBP to access memory.
Reviewers: andrew.w.kaylor, LuoYuanke, annita.zhang, craig.topper,
RKSimon
Subscribers: rnk, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63396
Signed-off-by: pengfei <pengfei.wang@intel.com>
llvm-svn: 367088
To avoid duplicates in loop metadata, if the string to add is
already there, just update the value.
Reviewers: reames, Ashutosh
Reviewed By: reames
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D65265
llvm-svn: 367087
Just move the utility function to LoopUtils.cpp to re-use it in loop peeling.
Reviewers: reames, Ashutosh
Reviewed By: reames
Subscribers: hiraditya, asbirlea, llvm-commits
Differential Revision: https://reviews.llvm.org/D65264
llvm-svn: 367085
COPYFILE_CLONE is only defined on newer macOS versions; using it without a
check breaks the build on systems running a legacy OS and toolchain.
Differential Revision: https://reviews.llvm.org/D65317
llvm-svn: 367084
handleAssignments gives up pretty easily on structs, and i8 values for
some reason. The other case that doesn't work is when an implicit sret
needs to be inserted if the return size exceeds the number of return
registers.
llvm-svn: 367082
Summary:
The `block-placement` pass can create some patterns where we can do a simple early return.
But the `early-ret` pass runs before `block-placement`, so we don't want to run it again.
This patch does the simple early return to optimize the blocks at the end of `block-placement`.
Below is an example
```
BB: | BB:
XOR 3, 3, 4 | XOR 3, 3, 4
B TBB | B ChainBB
... | ...
ChainBB: | ChainBB:
B TBB | ADD 3, 3, 4
... | BLR
TBB: |
ADD 3, 3, 4 |
BLR |
```
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D63972
llvm-svn: 367080
This allows every serializer format to implement metaSerializer() and
return the corresponding meta serializer.
Original llvm-svn: 366946
Reverted llvm-svn: 367004
This fixes the unit tests on Windows bots.
llvm-svn: 367078
Currently, stack protector loads and stores are resolved during
LocalStackSlotAllocation (if the pass needs to run). When this is the
case, the base register assigned to the frame access is going to be one
of the vregs created during LocalStackSlotAllocation. This means that we
are keeping a pointer to the stack protector slot, and we're using this
pointer to load and store to it.
In case register pressure goes up, we may end up spilling this pointer
to the stack, which can be a security concern.
Instead, leave it to PEI to resolve the frame accesses. In order to do
that, we make all stack protector accesses go through frame index
operands, then PEI will resolve this using an offset from sp/fp/bp.
Differential Revision: https://reviews.llvm.org/D64759
llvm-svn: 367068
Currently, the CO-RE offset relocation does not work
if any struct/union member or array element is a typedef.
For example,
typedef const int arr_t[7];
struct input {
arr_t a;
};
func(...) {
struct input *in = ...;
... __builtin_preserve_access_index(&in->a[1]) ...
}
The BPF backend's calculated default offset is 0, while
4 is the correct answer. Similar issues exist for struct/union
typedefs.
When getting a struct/union member or array element type,
we should trace down to the concrete type by skipping typedefs
and const/volatile qualifiers, as this is what clang does
to generate getelementptr instructions.
(const/volatile member type qualifiers are already
ignored by clang.)
This patch fixes this issue by, for each access index,
skipping typedef and const/volatile/restrict BTF types.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D65259
llvm-svn: 367062
The file collector class is useful for constructing reproducers by
creating a snapshot of the files that are accessed. Sometimes it might
also be important to construct directories that don't necessarily have files,
but are still accessed by some tool that we want to make a reproducer for.
This is useful for instance for modeling the behavior of Clang's header search,
which scans through a number of directories it doesn't actually access when
looking for framework headers. This commit extends the file collector to allow
it to work with paths that are just directories, by constructing them as the
files are copied over.
Differential Revision: https://reviews.llvm.org/D65297
llvm-svn: 367061
changes were made to the patch since then.
--------
[NewPM] Port Sancov
This patch contains a port of SanitizerCoverage to the new pass manager. This one's a bit hefty.
Changes:
- Split SanitizerCoverageModule into 2 classes: SanitizerCoverage for passing over
functions and ModuleSanitizerCoverage for passing over modules.
- ModuleSanitizerCoverage exists for adding 2 module level calls to initialization
functions but only if there's a function that was instrumented by sancov.
- Added legacy and new PM wrapper classes that own instances of the 2 new classes.
- Update llvm tests and add clang tests.
llvm-svn: 367053
Currently there are a few pointer comparisons in ValueDFS_Compare, which
can cause non-deterministic ordering when materializing values. There
are 2 cases this patch fixes:
1. Ordering defs before uses used to compare pointers, which guarantees
defs before uses but causes non-deterministic ordering between 2
uses or 2 defs, depending on the allocation order. By converting the
pointers to booleans, we can circumvent that problem (see the sketch
after this list).
2. comparePHIRelated was comparing the basic block pointers of edges,
which also results in a non-deterministic order and is also not
really meaningful for ordering. By ordering by their destination DFS
numbers we guarantee a deterministic order.
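A sketch of the first fix (field names illustrative): compare def-ness as booleans rather than comparing the pointers themselves:
```
// Defs sort before uses; between two defs (or two uses) this comparison
// is neutral instead of depending on allocation addresses.
bool LHSIsDef = (LHS.Def != nullptr);
bool RHSIsDef = (RHS.Def != nullptr);
if (LHSIsDef != RHSIsDef)
  return LHSIsDef; // true iff LHS is the def, which must come first
```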
For the example below, we can end up with 2 different uselist orderings,
when running `opt -mem2reg -ipsccp` hundreds of times. Because the
non-determinism is caused by allocation ordering, we cannot reproduce it
with ipsccp alone.
declare i32 @hoge() local_unnamed_addr #0
define dso_local i32 @ham(i8* %arg, i8* %arg1) #0 {
bb:
%tmp = alloca i32
%tmp2 = alloca i32, align 4
br label %bb19
bb4: ; preds = %bb20
br label %bb6
bb6: ; preds = %bb4
%tmp7 = call i32 @hoge()
store i32 %tmp7, i32* %tmp
%tmp8 = load i32, i32* %tmp
%tmp9 = icmp eq i32 %tmp8, 912730082
%tmp10 = load i32, i32* %tmp
br i1 %tmp9, label %bb11, label %bb16
bb11: ; preds = %bb6
unreachable
bb13: ; preds = %bb20
br label %bb14
bb14: ; preds = %bb13
%tmp15 = load i32, i32* %tmp
br label %bb16
bb16: ; preds = %bb14, %bb6
%tmp17 = phi i32 [ %tmp10, %bb6 ], [ 0, %bb14 ]
br label %bb19
bb18: ; preds = %bb20
unreachable
bb19: ; preds = %bb16, %bb
br label %bb20
bb20: ; preds = %bb19
indirectbr i8* null, [label %bb4, label %bb13, label %bb18]
}
Reviewers: davide, efriedma
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D64866
llvm-svn: 367049
We'd like to determine the idom of the exit block after peeling one iteration.
Let Exit be the exit block.
Let ExitingSet be the set of predecessors of the Exit block; they are the exiting blocks.
Let Latch' and ExitingSet' be the copies after peeling.
We'd like to find idom'(Exit), the idom of Exit after peeling.
It is evident that idom'(Exit) will be the nearest common dominator of ExitingSet and ExitingSet'.
idom(Exit) is the nearest common dominator of ExitingSet.
idom(Exit)' is the nearest common dominator of ExitingSet'.
Taking into account that we have a single Latch, Latch' will dominate Header and idom(Exit).
So idom'(Exit) is the nearest common dominator of idom(Exit)' and Latch'.
All these basic blocks are in the same loop, so what we find is
(nearest common dominator of idom(Exit) and Latch)'.
Reviewers: reames, fhahn
Reviewed By: reames
Subscribers: hiraditya, zzheng, llvm-commits
Differential Revision: https://reviews.llvm.org/D65292
llvm-svn: 367044
Later code in TryToSimplifyUncondBranchFromEmptyBlock() assumes that
we have cleaned up unreachable blocks, but that was not happening
with this switch transform.
llvm-svn: 367037
Summary:
This is the first patch for the loop guard. We introduced
getLoopGuardBranch() and isGuarded().
This currently only works on simplified loops, as it requires a preheader
and a latch to identify the guard.
It will work on loops of the form:
/// GuardBB:
/// br cond1, Preheader, ExitSucc <== GuardBranch
/// Preheader:
/// br Header
/// Header:
/// ...
/// br Latch
/// Latch:
/// br cond2, Header, ExitBlock
/// ExitBlock:
/// br ExitSucc
/// ExitSucc:
Prior discussions leading up to the decision to introduce the loop guard
API: http://lists.llvm.org/pipermail/llvm-dev/2019-May/132607.html
Reviewer: reames, kbarton, hfinkel, jdoerfert, Meinersbur, dmgreen
Reviewed By: reames
Subscribers: wuzish, hiraditya, jsji, llvm-commits, bmahjour, etiotto
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D63885
llvm-svn: 367033
Summary:
Looking at the current Apple-specific code for crash handling, it does a few
silly things that I think we should avoid. While handling crashes we should instead:
* Try real hard not to allocate.
* Set the global crash reporter string early so that any crash while
generating the stack trace will still report some info.
* Prevent reordering of operations in the current thread.
<rdar://problem/53503334>
Subscribers: hiraditya, jkorous, dexonsmith, llvm-commits, beanz, Bigcheese, thakis, lattner, jordan_rose
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65235
llvm-svn: 367031
Currently, we expect the CO-RE offset relocation to record
a string encoding the original getelementptr access index,
so the kernel bpf loader can decode it correctly.
For example,
struct s { int a; int b; };
struct t { int c; int d; };
#define _(x) (__builtin_preserve_access_index(x))
int get_value(const void *addr1, const void *addr2);
int test(struct s *arg1, struct t *arg2) {
return get_value(_(&arg1->b), _(&arg2->d));
}
We expect two offset relocations:
reloc 1: type s, access index 0, 1
reloc 2: type t, access index 0, 1
Two globals are created to retain access indexes for the
above two relocations with global variable names.
The first global has the name "0:1:". Unfortunately,
the second global has the name "0:1:.1", as the llvm
internals automatically add the suffix ".1" to a global
with the same name. Later on, the BPF backend peels off the last
character and records "0:1" and "0:1:." in the
relocation table.
This is not desirable. The BPF backend could use the global
variable suffix knowledge to generate the correct access string,
but this patch instead takes an approach that does not rely on
that knowledge: it generates "s:0:1:" and "t:0:1:" to
avoid global variable suffixes and later on generates the
correct index access string "0:1" for both records.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D65258
llvm-svn: 367030
This reverts commit bc4a63fd3c; this is a
speculative revert to fix a number of sanitizer bots (like
sanitizer-x86_64-linux-bootstrap-ubsan) that have started to see stage2
compiler crashes, presumably due to a miscompile.
llvm-svn: 367029
We do not need the SmallPtrSet to avoid adding duplicates to
OpsToRename, because we already keep a ValueInfo mapping. If we see an
op for the first time, Infos will be empty and we can also add it to
OpsToRename.
We process operands by visiting BBs depth-first and then iterate over
all instructions & users, so the order should be deterministic.
Therefore we can skip one round of sorting, which we purely needed for
guaranteeing a deterministic order when iterating over the SmallPtrSet.
Reviewers: efriedma, davide
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D64816
llvm-svn: 367028
Summary:
- As LCSSA is turned on just before isel, it may create PHIs of the flow,
which are consumed by pseudo structurized CFG instructions. When those
PHIs are eliminated at O0, COPYs may be placed incorrectly, as these
pseudo structurized CFG instructions are treated as the prologue of the MBB.
- Run an extra `unreachable-mbb-elimination` pass at the end of isel to clean up
PHIs.
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64353
llvm-svn: 367023
... for the vector forms of `{SQ,UQ,}{INC,DEC}P` instructions. Also continue
supporting the existing behaviour of not requiring an explicit size
specifier. The preferred disassembly is *with* the specifier.
This is implemented by redefining instruction forms to require vector predicates
with explicit size and adding aliases, which allow a predicate with no size.
Differential Revision: https://reviews.llvm.org/D65145
llvm-svn: 367019
trunc (load X) --> load (bitcast X to narrow type)
We have this transform in DAGCombiner::ReduceLoadWidth(), but the truncated
load pattern can interfere with other instcombine transforms, so I'd like to
allow the fold sooner.
Example:
https://bugs.llvm.org/show_bug.cgi?id=16739
...in that report, we have bitcasts bracketing these ops, so those could get
eliminated too.
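As a rough sketch (function and value names invented; the real fold must also
account for endianness and the datalayout), on a little-endian target this turns:
```
define i16 @narrow(i32* %p) {
  %wide = load i32, i32* %p
  %lo = trunc i32 %wide to i16
  ret i16 %lo
}
```
into a direct narrow load:
```
define i16 @narrow(i32* %p) {
  %q = bitcast i32* %p to i16*
  %lo = load i16, i16* %q
  ret i16 %lo
}
```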
We've generally ruled out widening of loads early in IR ( LoadCombine -
http://lists.llvm.org/pipermail/llvm-dev/2016-September/105291.html ), but
that reasoning may not apply to narrowing if we can preserve information
such as the dereferenceable range.
Differential Revision: https://reviews.llvm.org/D64432
llvm-svn: 367011
We should only zap returns in functions, where all live users have a
replace-able value (are not overdefined). Unused return values should be
undefined.
This should make it easier to detect bugs like in PR42738.
Alternatively we could bail out of zapping the function returns, but I
think it would be better to address those divergences between function
and call-site values where they are actually caused.
Reviewers: davide, efriedma
Reviewed By: davide, efriedma
Differential Revision: https://reviews.llvm.org/D65222
llvm-svn: 366998
This refactors the boolean 'OptForSize' that was passed around in a lot of places.
It controlled folding of the tail loop (the scalar epilogue) into the main loop,
but code-size reasons may not be the only reason to do this. Thus, this is a
first step to generalise the concept of tail-loop folding, and hence OptForSize
has been renamed and is now using an enum, ScalarEpilogueStatus, that holds the
status of how the epilogue should be lowered.
This will be followed up by D65197, that picks up the predicate loop hint and
performs the tail-loop folding.
Differential Revision: https://reviews.llvm.org/D64916
llvm-svn: 366993
Summary:
In the PostRA phase, we often have to find out the most recent definition
of a register. This patch adds getDefMIPostRA so that other methods
can use it rather than implementing it repeatedly.
Differential Revision: https://reviews.llvm.org/D65131
llvm-svn: 366990
Summary:
Add a new method which tries to compute the target address referenced by an operand.
This patch supports x86_64 RIP-relative addressing for now.
It is necessary to print referenced symbol names in llvm-objdump.
Reviewers: andreadb, MaskRay, grosbach, jgalenson, craig.topper
Reviewed By: MaskRay, craig.topper
Subscribers: bcain, rupprecht, jhenderson, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63847
llvm-svn: 366987
Change MAXSECTALIGN to a public MaxSectionAlignment in MachOUniversal.
Will be used in a follow-up.
Patch by Anusha Basana <anusha.basana@gmail.com>
Differential Revision: https://reviews.llvm.org/D65117
llvm-svn: 366969
This patch changes the coding style of the FileCollector from the LLDB
to the LLVM coding style. Alex recently lifted it into LLVM and I
volunteered to do the conversion.
llvm-svn: 366966
Before, we weren't able to select things like this for G_GEP:
add x0, x8, #8
And instead we'd materialize the 8.
This teaches GISel to do that. It gives some considerable code size savings
on 252.eon -- about 4%!
Differential Revision: https://reviews.llvm.org/D65248
llvm-svn: 366959
Throughout the legalizerinfo we currently make the assumption that the target
has neon and FP target features available. Fixing it will require a refactor of
the whole thing, so until then make sure we fall back.
Works around PR42734
Differential Revision: https://reviews.llvm.org/D65244
llvm-svn: 366957
The file collector class is useful for creating reproducers,
not just for LLDB, but for other tools as well in LLVM/Clang.
Differential Revision: https://reviews.llvm.org/D65237
llvm-svn: 366956
Summary:
This was originally reported in D62818.
https://rise4fun.com/Alive/oPH
InstCombine does the opposite fold, in hope that `C l>>/<< Y` expression
will be hoisted out of a loop if `Y` is invariant and `X` is not.
But as it is seen from the diffs here, if it didn't get hoisted,
the produced assembly is almost universally worse.
Much like with my recent "hoist add/sub by/from const" patches,
we should get an almost universal win if we hoist the constant:
there is almost always an "and/test by imm" instruction,
but "shift of imm" not so much, so we may avoid having to
materialize the immediate, and thus need one less register.
And since we now shift not by a constant, but by something else,
the live range of that something else may shrink.
Special care needs to be applied not to disturb x86 `BT` / hexagon `tstbit`
instruction pattern. And to not get into endless combine loop.
Reviewers: RKSimon, efriedma, t.p.northover, craig.topper, spatel, arsenm
Reviewed By: spatel
Subscribers: hiraditya, MaskRay, wuzish, xbolva00, nikic, nemanjai, jvesely, wdng, nhaehnle, javed.absar, tpr, kristof.beyls, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62871
llvm-svn: 366955
If we have a G_MUL, and either the LHS or the RHS of that mul is the legal
shift value for a load addressing mode, we can fold it into the load.
This gives some code size savings on some SPEC tests. The best are around 2%
on 300.twolf and 3% on 254.gap.
Differential Revision: https://reviews.llvm.org/D65173
llvm-svn: 366954
For aliases, any expression that lowers at the MC level to global_object or
global_object+constant is valid at the object file level. getBaseObject()
should return a result if the aliasee ends up being of that form even if
the IR used to produce it is somewhat unconventional.
Note that this is different from what stripInBoundsOffsets() and that family
of functions is doing. Those functions are concerned about semantic properties
of IR, whereas here we only care about the lowering result.
Therefore reimplement getBaseObject() in a way that matches the lowering
result. This fixes a crash when producing a summary for aliases such as
that in the included test case.
Differential Revision: https://reviews.llvm.org/D65115
llvm-svn: 366952
This introduces a new family of combiner helper routines that re-use the
target specific cost model from SelectionDAG, and generate inline implementations
of the memcpy family of intrinsics.
The combines are only enabled at optimization levels higher than -O0, and give
very substantial performance improvements.
Differential Revision: https://reviews.llvm.org/D65167
llvm-svn: 366951
We can treat icmp eq X, MIN_UINT as icmp ule X, MIN_UINT and allow
it to merge with icmp ugt X, C. Similar for the other constants.
We can do similar for icmp ne X, (U)INT_MIN/MAX in foldAndOfICmps. And we already handled UINT_MIN there.
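A minimal sketch of the merge this enables (values invented; since MIN_UINT is 0,
`X == 0` is the same as `X u<= 0`, and the two range checks combine):
```
define i1 @src(i32 %x) {
  %a = icmp eq i32 %x, 0
  %b = icmp ugt i32 %x, 10
  %r = or i1 %a, %b
  ret i1 %r
}
```
can fold to a single range check such as:
```
define i1 @tgt(i32 %x) {
  %x.off = add i32 %x, -1
  %r = icmp ugt i32 %x.off, 9
  ret i1 %r
}
```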
Fixes PR42691.
Differential Revision: https://reviews.llvm.org/D65017
llvm-svn: 366945
r366317 added a legalization for s128 G_ICMP narrow scalar which tried to hard
code the result type of the new legalized G_SELECT. Change this to instead use
the type of the original G_ICMP result and allow the target to legalize it if necessary
later.
llvm-svn: 366943
To support prefetch mode 3 we need to pad the current
cacheline and fill the 3 cachelines after it. The current padding
is only sufficient for mode 2.
Differential Revision: https://reviews.llvm.org/D65236
llvm-svn: 366938
This removes the VCEQ/VCNE/VCGE/VCEQZ/etc nodes, instead using just two, VCMP and
VCMPZ, with an extra operand for the condition code. I believe this will make
some combines simpler, allowing us to just look at these codes and not the
operands. It also helps fill in a missing VCGTUZ MVE selection without adding
extra nodes for it.
Differential Revision: https://reviews.llvm.org/D65072
llvm-svn: 366934
This patch adds support for recognizing cases where a larger vector type is being used to reduce just the elements in the lower subvector:
e.g. <8 x i32> reduction pattern in a <16 x i32> vector:
<4,5,6,7,u,u,u,u,u,u,u,u,u,u,u,u>
<2,3,u,u,u,u,u,u,u,u,u,u,u,u,u,u>
<1,u,u,u,u,u,u,u,u,u,u,u,u,u,u,u>
matchBinOpReduction returns the lower extracted subvector in such cases, assuming isExtractSubvectorCheap accepts the extraction.
I've only enabled it for X86 reduction sums so far. I intend to enable it for the bitop/minmax cases in future patches, and eventually I think it's worth turning it on all the time. This is mainly just a case of ensuring calls to matchBinOpReduction don't make assumptions on the vector width based on the original vector extraction.
Fixes the x86 partial reduction sum cases in PR33758 and PR42023.
Differential Revision: https://reviews.llvm.org/D65047
llvm-svn: 366933
This prevents us from trying to convert an i1 predicate vector to a float, or
vice-versa. Better patterns are possible, which will follow in a subsequent
commit. For now we just expand them.
Differential Revision: https://reviews.llvm.org/D65066
llvm-svn: 366931
Fix an off-by-one error which made us not look at the last element of the
zero vector. This caused a miscompile in 188.ammp.
Differential Revision: https://reviews.llvm.org/D65168
llvm-svn: 366930
MVE VCMP instructions can use a general purpose register as the second operand.
This adds the combines for it, selecting from a compare of a vdup.
Differential Revision: https://reviews.llvm.org/D65061
llvm-svn: 366924
If we are already using the same chain for the old/new memory ops then just return.
Fixes PR42727 which had getLoad() reusing an existing node.
llvm-svn: 366922
This adds a DeMorgan combine for OR's of compares to turn them into AND's,
helping prevent them from going into and out of gpr registers. It also fills in
the VCLE and VCLT nodes that MVE can select, allowing it to invert more
compares.
Differential Revision: https://reviews.llvm.org/D65059
llvm-svn: 366920
Summary:
Move the `-ftime-trace-granularity` option to frontend options. Without this patch,
the option shows up in the help for any tool that links libSupport.
Reviewers: sammccall
Subscribers: hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D65202
llvm-svn: 366911
Add a number of folds to convert and(vcmp, vcmp) into a single VPT block, where
the second vcmp becomes predicated on the first.
The VCMP; VPST; VCMP will eventually be converted to VPT; VCMP in the
VPTBlockPass.
Differential Revision: https://reviews.llvm.org/D65058
llvm-svn: 366910
Much like integers, this adds MVE floating point compares and select. It
requires a lot more buildvector/shuffle code because we may need to expand the
compares without mve.fp, and requires support for and/or because of the way we
lower llvm condition codes.
Some original code by David Sherwood
Differential Revision: https://reviews.llvm.org/D65054
llvm-svn: 366909
This adds some basic, "worst case" handling for MVE predicate Or/And/Xor. It
does this by going into and out of GPRs, doing the operation on scalars.
Code by David Sherwood.
Differential Revision: https://reviews.llvm.org/D65053
llvm-svn: 366907
This change makes sure that llvm does not emit an invalid IT block
by putting the constant pool in the middle of an IT block.
We have code to try to avoid putting a constant island in the middle of an
IT block, but it only works if we see an IT between the one currently
referencing the CPE and the possible insertion point. If the first instruction
we look at is the VLDRD after the IT, we never see the IT and do not
realize that the instruction doing the load could be in an IT block itself.
Differential Revision: https://reviews.llvm.org/D64621
Change-Id: I24cecb37cded75e8992870bd997f6226853bd920
llvm-svn: 366905
Summary:
SimplifyFPBinOp is a variant of SimplifyBinOp that lets you specify
fast math flags, but the name is misleading because both functions
can simplify both FP and non-FP ops. Instead, overload SimplifyBinOp
so that you can optionally specify fast math flags.
Likewise for SimplifyFPUnOp.
Reviewers: spatel
Reviewed By: spatel
Subscribers: xbolva00, cameron.mcinally, eraman, hiraditya, haicheng, zzheng, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64902
llvm-svn: 366902
Summary:
This patch is part of a patch series to add support for FileCheck
numeric expressions. This specific patch lift the restriction for a
numeric expression to either be a variable definition or a numeric
expression to try to match.
This commit allows a numeric variable to be set to the result of the
evaluation of a numeric expression after it has been matched
successfully. When it happens, the variable is allowed to be used on
the same line since its value is known at match time.
It also makes use of this possibility to reuse the parsing code to
parse a command-line definition by crafting a mirror string of the
-D option with the equal sign replaced by a colon sign, e.g. for option
'-D#NUMVAL=10' it creates the string
'-D#NUMVAL=10 (parsed as [[#NUMVAL:10]])' where the numeric expression
is parsed to define NUMVAL. This results in a few tests needing updates
for the location diagnostics on top of the tests for the new feature.
It also enables empty numeric expressions which match any number without
defining a variable. This is done here rather than in commit #5 of the
patch series because it requires dissociating automatic regex insertion
in RegExStr from variable definition which would make commit #5 even
bigger than it already is.
Copyright:
- Linaro (changes up to diff 183612 of revision D55940)
- GraphCore (changes in later versions of revision D55940 and
in new revision created off D55940)
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: hiraditya, llvm-commits, probinson, dblaikie, grimar, arichardson, tra, rnk, kristina, hfinkel, rogfer01, JonChesterfield
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60388
> llvm-svn: 366860
llvm-svn: 366897
This adds support code for building and shuffling i1 predicate registers. It
generally uses two basic principles: either converting the predicate into a
scalar (through a PREDICATE_CAST) and doing scalar operations on it there, or
converting the register to a full vector register and back.
Some of the code here is not super efficient, but it will hopefully cover most
cases of moving i1 vectors around and can be improved in subsequent patches.
Some code by David Sherwood.
Differential Revision: https://reviews.llvm.org/D65052
llvm-svn: 366890
This adds the very basics for MVE vector predication, adding integer VCMP and
VSEL instruction support. This is done through predicate registers (MVT::v16i1,
MVT::v8i1, MVT::v4i1), but otherwise using the same mechanics as NEON to custom
lower setcc's through ARMISD::VCXX nodes (VCEQ, VCGT, VCEQZ, etc).
An extra VCNE was added, as this can be handled sensibly by MVE's expanded
number of VCMP condition codes. (There are also VCLE and VCLT which are added
later).
VPSEL is also added here, simply selecting on the vselect.
Original code by David Sherwood.
Differential Revision: https://reviews.llvm.org/D65051
llvm-svn: 366885
While combining two loads into a single load, we often need to
reorder the pointer operands for the new load. This reordering was
broken in the cases where there was a chain of values that built up
the pointer.
Differential Revision: https://reviews.llvm.org/D65193
llvm-svn: 366881
This is a follow up to D64971. While we need to insert the deref after
the offset, it needs to come before the remaining elements in the
original expression since the deref needs to happen before the LLVM
fragment if present.
Differential Revision: https://reviews.llvm.org/D65172
llvm-svn: 366865
Copying them is expensive. This allows the tables to be moved around at
lower cost, and allows a remarks::StringTable to be constructed from
a remarks::ParsedStringTable.
llvm-svn: 366864
Summary:
This patch is part of a patch series to add support for FileCheck
numeric expressions. This specific patch lifts the restriction for a
numeric expression to either be a variable definition or a numeric
expression to try to match.
This commit allows a numeric variable to be set to the result of the
evaluation of a numeric expression after it has been matched
successfully. When it happens, the variable is allowed to be used on
the same line since its value is known at match time.
It also makes use of this possibility to reuse the parsing code to
parse a command-line definition by crafting a mirror string of the
-D option with the equal sign replaced by a colon sign, e.g. for option
'-D#NUMVAL=10' it creates the string
'-D#NUMVAL=10 (parsed as [[#NUMVAL:10]])' where the numeric expression
is parsed to define NUMVAL. This results in a few tests needing updates
for the location diagnostics on top of the tests for the new feature.
It also enables empty numeric expressions which match any number without
defining a variable. This is done here rather than in commit #5 of the
patch series because it requires dissociating automatic regex insertion
in RegExStr from variable definition which would make commit #5 even
bigger than it already is.
Copyright:
- Linaro (changes up to diff 183612 of revision D55940)
- GraphCore (changes in later versions of revision D55940 and
in new revision created off D55940)
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: hiraditya, llvm-commits, probinson, dblaikie, grimar, arichardson, tra, rnk, kristina, hfinkel, rogfer01, JonChesterfield
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60388
llvm-svn: 366860
We need to be able to load and store s128 for memcpy inlining, where we want to
generate Q register mem ops. Making these legal also requires that we add some
support in other instructions. Regbankselect should also know about these since
they have no GPR register class that can hold them, so need special handling to
live on the FPR bank.
Differential Revision: https://reviews.llvm.org/D65166
llvm-svn: 366857
This exposes better support to use a string table with a format through
an actual new remark::Format, called yaml-strtab.
This can now be used with -fsave-optimization-record=yaml-strtab.
llvm-svn: 366849
Currently the PowerPC backend emits code like this:
r3 = li 0
std r3, 264(r1)
r3 = li 0
std r3, 272(r1)
This patch fixes that and other cases where a register already contains the value that is being loaded, so we will get:
r3 = li 0
std r3, 264(r1)
std r3, 272(r1)
Differential Revision: https://reviews.llvm.org/D64220
llvm-svn: 366840
LegalizeDAG tries to legalize the DAG by legalizing nodes before
their operands.
If we create a new node, we end up legalizing it after its operands.
This prevents some of the optimizations that can be done when the
operand is a build_vector since the build_vector will have been
legalized to something else.
Differential Revision: https://reviews.llvm.org/D65132
llvm-svn: 366835
The original code failed to account for the fact that one exit can have a pointer exit count without all of them having pointer exit counts. This could cause two separate bugs:
1) We might exit the loop early, and leave optimizations undone. This is what triggered the assertion failure in the reported test case.
2) We might optimize one exit, then exit without indicating a change. This could result in an analysis invalidation bug if no other transform is done by the rest of indvars.
Note that the pointer exit counts are a really fragile concept. They show up only when we have a pointer IV w/o a datalayout to provide their size. It's really questionable to me whether the complexity implied is worth it.
llvm-svn: 366829
Summary:
Allow IntToPtrInst to carry !dereferenceable metadata tag.
This is valid since !dereferenceable can only be applied to
pointer type values.
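A minimal sketch of the now-permitted IR (names invented):
```
define i8* @conv(i64 %addr) {
  %p = inttoptr i64 %addr to i8*, !dereferenceable !0
  ret i8* %p
}
!0 = !{i64 8}
```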
Change-Id: If8a6e3c616f073d51eaff52ab74535c29ed497b4
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64954
llvm-svn: 366826
When we select the XRO variants of loads, we can pull in very specific shifts
(of the size of an element). E.g.
```
ldr x1, [x2, x3, lsl #3]
```
This teaches GISel to handle these when they're coming from shifts
specifically.
This adds a new addressing mode function, `selectAddrModeShiftedExtendXReg`
which recognizes this pattern.
This also packs this up with `selectAddrModeRegisterOffset` into
`selectAddrModeXRO`. This is intended to be equivalent to `selectAddrModeXRO`
in AArch64ISelDAGtoDAG.
Also update load-addressing-modes to show that all of the cases here work.
Differential Revision: https://reviews.llvm.org/D65119
llvm-svn: 366819
If all the demanded elts are from one operand and are inline, then we can use the operand directly.
The changes are mainly from SSE41 targets which has blendvpd but not cmpgtq, allowing the v2i64 comparison to be simplified as we only need the signbit from alternate v4i32 elements.
llvm-svn: 366817
llvm-ar outputs a strange error message when handling archives with
members larger than 4GB, due to not checking the file size when passing the
value as an unsigned 32-bit integer. This overflow caused
malformed archives to be created:
https://bugs.llvm.org/show_bug.cgi?id=38058
This change allows for members above 4GB and will error in a case that
is over the format's size limit, a 10-digit decimal integer.
Differential Revision: https://reviews.llvm.org/D65093
llvm-svn: 366813
While lowering test.set.loop.iterations, we didn't check how the
brcond was using the result, so the wls could branch to the loop
preheader instead of not entering it. The same was true for
loop.decrement.reg.
So brcond and br_cc are now lowered manually when using the hwloop
intrinsics. During this we now check whether the result has been
negated and whether we're using SETEQ or SETNE and 0 or 1. We can
then figure out which basic block the WLS and LE should be targeting.
Differential Revision: https://reviews.llvm.org/D64616
llvm-svn: 366809
This patch introduces the DAG version of SimplifyMultipleUseDemandedBits, which attempts to peek through ops (mainly and/or/xor so far) that don't contribute to the demandedbits/elts of a node - which means we can do this even in cases where we have multiple uses of an op, which normally requires us to demanded all bits/elts. The intention is to remove a similar instruction - SelectionDAG::GetDemandedBits - once SimplifyMultipleUseDemandedBits has matured.
The InstCombine version of SimplifyMultipleUseDemandedBits can constant fold which I haven't added here yet, and so far I've only wired this up to some basic binops (and/or/xor/add/sub/mul) to demonstrate its use.
We do see a couple of regressions that need to be addressed:
AMDGPU unsigned dot product codegen retains an AND mask (for ZERO_EXTEND) that it previously removed (but otherwise the dotproduct codegen is a lot better).
X86/AVX2 has poor handling of vector ANY_EXTEND/ANY_EXTEND_VECTOR_INREG - it prematurely gets converted to ZERO_EXTEND_VECTOR_INREG.
The code owners have confirmed it's ok for these cases to be fixed up in future patches.
Differential Revision: https://reviews.llvm.org/D63281
llvm-svn: 366799
cast<CallInst> shouldn't return null and we dereference the pointer in a lot of other places, causing both MSVC + cppcheck to warn about dereferenced null pointers
llvm-svn: 366793
Summary:
Deduce dereferenceable attribute in Attributor.
The following will be added in a later patch:
* dereferenceable(_or_null)_globally (D61652)
* Deduction based on load instruction (similar to D64258)
Reviewers: jdoerfert, sstefan1
Reviewed By: jdoerfert
Subscribers: hiraditya, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64876
llvm-svn: 366788
The function was calling getNode() on an SDValue to return, and the
caller turned the result back into an SDValue. So just return the
original SDValue to avoid this.
llvm-svn: 366779
Summary: Adds a binding to the internalize pass that allows the caller to pass a function pointer that acts as the visibility-preservation predicate. Previously, one could only pass an unsigned value (not LLVMBool?) that directed the pass to consider "main" or not.
Reviewers: whitequark, deadalnix, harlanhaskins
Reviewed By: whitequark, harlanhaskins
Subscribers: kren1, hiraditya, llvm-commits, harlanhaskins
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62456
llvm-svn: 366777
Replace a float load/store pair with an integer load/store pair when it's only used in load/store,
because float load/store instructions cost more cycles than integer load/store.
A typical scenario is a call passing more than 13 float arguments, which we need to pass on the stack.
In that case we need a load/store pair for the memory operation if the variable is a global variable.
Differential Revision: https://reviews.llvm.org/D64195
llvm-svn: 366775
[Attributor] Liveness analysis.
Liveness analysis abstract attribute used to indicate which BasicBlocks are dead and can therefore be ignored.
Right now we are only looking at noreturn calls.
Reviewers: jdoerfert, uenoku
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D64162
llvm-svn: 366769
We were silently using the ABI alignment for all of the stores generated for deopt and gc values. We'd gotten the alignment of the stack slot itself properly reduced (via MachineFrameInfo's clamping), but having the MMO on the store incorrect was enough for us to generate an aligned store to an unaligned location.
The simplest fix would have been to just pass the alignment to the helper function, but once we do that, the helper function doesn't really help. So, inline it and directly call the MMO version of DAG.getStore with a properly constructed MMO.
Note that there's a separate performance possibility here. Even if we *can* realign stacks, we probably don't *want to* if all of the stores are in slowpaths. But that's a later patch, if at all. :)
llvm-svn: 366765
This patch extends the error handling in the debug line parser to get
rid of the existing MD5 assertion. I want to reuse the debug line parser
from LLVM in LLDB, where we cannot crash on invalid input.
Differential revision: https://reviews.llvm.org/D64544
llvm-svn: 366762
It is not safe in general to replace an alias in a GEP with its aliasee
if the alias can be replaced with another definition (i.e. via strong/weak
resolution (linkonce_odr) or via symbol interposition (default visibility
in ELF)) while the aliasee cannot. An example of how this can go wrong is
in the included test case.
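A minimal sketch of the hazard (names invented): here @a may be replaced by
another definition at link time while @g may not, so rewriting the GEP to
operate on @g directly would be unsound:
```
@g = internal global [2 x i32] [i32 1, i32 2]
@a = linkonce_odr alias [2 x i32], [2 x i32]* @g
define i32* @second() {
  ret i32* getelementptr inbounds ([2 x i32], [2 x i32]* @a, i64 0, i64 1)
}
```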
I was concerned that this might be a load-bearing misoptimization (it's
possible for us to use aliases to share vtables between base and derived
classes, and on Windows, vtable symbols will always be aliases in RTTI
mode, so this change could theoretically inhibit trivial devirtualization
in some cases), so I built Chromium for Linux and Windows with and without
this change. The file sizes of the resulting binaries were identical, so it
doesn't look like this is going to be a problem.
Differential Revision: https://reviews.llvm.org/D65118
llvm-svn: 366754
[Attributor] Liveness analysis.
Liveness analysis abstract attribute used to indicate which BasicBlocks are dead and can therefore be ignored.
Right now we are only looking at noreturn calls.
Reviewers: jdoerfert, uenoku
Subscribers: hiraditya, llvm-commits
Differential revision: https://reviews.llvm.org/D64162
llvm-svn: 366753
The xform has no real value when it's used on a complex pattern
output. The complex pattern was already creating TargetConstants with
i16, so this was just unnecessary machinery.
This allows global isel to import the simple cases once the complex
pattern is implemented.
llvm-svn: 366743
Liveness analysis abstract attribute used to indicate which BasicBlocks are dead and can therefore be ignored.
Right now we are only looking at noreturn calls.
Reviewers: jdoerfert, uenoku
Subscribers: hiraditya, llvm-commits
Differential revision: https://reviews.llvm.org/D64162
llvm-svn: 366736
The build_vector will become a constant pool load. By using the
desired type initially, it ensures we don't generate a bitcast
of the constant pool load which will need to be folded with
the load.
While experimenting with another patch, I noticed that when the
load type and the constant pool type don't match, then
SimplifyDemandedBits can't handle it. While we should probably
fix that, this was a simple way to fix the issue I saw.
llvm-svn: 366732
Summary:
Since we are planning to add ADDIStocHA for 32-bit in a later patch, we decided
to change the 64-bit one first, to follow the naming convention of putting an 8 behind the opcode.
Patch by: Xiangling_L
Differential Revision: https://reviews.llvm.org/D64814
llvm-svn: 366731
Porting the function return value attribute noalias to the Attributor.
This will be followed by a patch for call site and function arguments.
Reviewers: jdoerfert
Subscribers: lebedev.ri, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D63067
llvm-svn: 366728
Stubs out a TargetLoweringObjectFileXCOFF class, implementing only
SelectSectionForGlobal for common symbols. Also adds an override of
EmitGlobalVariable in PPCAIXAsmPrinter which adds a number of defensive errors
and adds support for emitting common globals.
llvm-svn: 366727
While debugging code that uses SafeStack, we've noticed that LLVM
produces invalid DWARF. Concretely, in the following example:
int main(int argc, char* argv[]) {
std::string value = "";
printf("%s\n", value.c_str());
return 0;
}
DWARF would describe the value variable as being located at:
DW_OP_breg14 R14+0, DW_OP_deref, DW_OP_constu 0x20, DW_OP_minus
The assembly to get this variable is:
leaq -32(%r14), %rbx
The order of operations in the DWARF symbols is incorrect in this case.
Specifically, the deref is incorrect; this appears to be incorrectly
re-inserted in replaceOneDbgValueForAlloca.
With this change which inserts the deref after the offset instead of
before it, LLVM produces correct DWARF:
DW_OP_breg14 R14-32
Differential Revision: https://reviews.llvm.org/D64971
llvm-svn: 366726
The bytes inserted before an overaligned global need to be padded according
to the alignment set on the original global in order for the initializer
to meet the global's alignment requirements. The previous implementation
that padded to the pointer width happened to be correct for vtables on most
platforms but may do the wrong thing if the vtable has a larger alignment.
This issue is visible with a prototype implementation of HWASAN for globals,
which will overalign all globals including vtables to 16 bytes.
There is also no padding requirement for the bytes inserted after the global
because they are never read from nor are they significant for alignment
purposes, so stop inserting padding there.
Differential Revision: https://reviews.llvm.org/D65031
llvm-svn: 366725
We were previously ignoring alignment entirely when combining globals
together in this pass. There are two main things that we need to do here:
add additional padding before each global to meet the alignment requirements,
and set the combined global's alignment to the maximum of all of the original
globals' alignments.
Since we now need to calculate layout as we go anyway, use the calculated
layout to produce GlobalLayout instead of using StructLayout.
Differential Revision: https://reviews.llvm.org/D65033
llvm-svn: 366722
This reverts commit r366686 as it appears to be causing buildbot
failures on sanitizer-x86_64-linux-android and sanitizer-x86_64-linux.
llvm-svn: 366708
ARMLowOverheadLoops would assert a failure if it did not find all the
pseudo instructions that comprise the hardware loop. Instead of doing
this, iterate through all the instructions of the function and revert
any remaining pseudo instructions that haven't been converted.
Differential Revision: https://reviews.llvm.org/D65080
llvm-svn: 366691
This patch was not the reason for the buildbot failure.
The deleted code was introduced as a workaround for a bug in the gold linker
(http://sourceware.org/PR16794). The test case that was given as a reason for
this part of the code, the one at the link above, now works with gold.
This condition is too strict, and when code is compiled with debug info
it forces generation of numerous relocations with symbols for architectures
that do not have a relocation addend.
Reviewers: arsenm, espindola
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D64327
llvm-svn: 366686
We need to ensure that the number of T's is correct when adding multiple
instructions into the same VPT block.
Differential revision: https://reviews.llvm.org/D65049
llvm-svn: 366684
This patch enables us to find the source loads for each element, splitting them into a Load and ByteOffset, and attempts to recognise consecutive loads that are in fact from the same source load.
A helper function, findEltLoadSrc, recurses to find a LoadSDNode and determines the element's byte offset within it. When attempting to match consecutive loads, byte-offsetted loads are then matched against a previous load that has already been confirmed to be a consecutive match.
Next step towards PR16739 - after this we just need to account for shuffling/repeated elements to create a vector load + shuffle.
Fixed out of bounds load assert identified in rL366501
Differential Revision: https://reviews.llvm.org/D64551
llvm-svn: 366681
ARM has code to recognise uses of the "returned" function parameter
attribute which guarantee that the value passed to the function in r0
will be returned in r0 unmodified. IPRA replaces the regmask on call
instructions, so needs to be told about this to avoid reverting the
optimisation.
Differential revision: https://reviews.llvm.org/D64986
llvm-svn: 366669
Summary:
In the atomic optimizer, save doing a bunch of work and generating a
bunch of dead IR in the fairly common case where the result of an
atomic op (i.e. the value that was in memory before the atomic op was
performed) is not used. NFC.
Reviewers: arsenm, dstuttard, tpr
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, t-tye, hiraditya, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64981
llvm-svn: 366667
The current algorithm to update the branch weights of the latch block and its
copies is based on the assumption that the number of peeled iterations is
approximately equal to the trip count.
However, that is not correct. According to the profitability check, in one case
we can decide to peel because it helps to reduce the number of phi nodes. In
that case the number of peeled iterations can be less than the estimated trip count.
This patch introduces another way to set the branch weights of the peeled-off branches.
Let F be the weight of the edge from latch to header.
Let E be the weight of the edge from latch to exit.
F/(F+E) is the probability to go to the loop and E/(F+E) is the probability to go to the exit.
Then, Estimated TripCount = F / E.
For the I-th (counting from 0) peeled-off iteration we set the weights for
the peeled latch to (TC - I, 1). This gives a reasonable distribution:
the probability to go to the exit, 1/(TC-I), increases. At the same time
the estimated trip count of the remaining loop reduces by I.
As a result, after peeling off N iterations the weights will be
(F - N * E, E) and the trip count of the loop becomes
F / E - N, or TC - N.
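For illustration (the weights here are invented): with F = 100 and E = 10 the
estimated trip count is F / E = 10, so the first peeled latch gets weights
(10, 1) and the second (9, 1); after peeling N = 2 iterations the remaining
latch carries (100 - 2 * 10, 10) = (80, 10), i.e. an estimated trip count of 8.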
The idea is taken from the review of the patch D63918 proposed by Philip.
Reviewers: reames, mkuper, iajbar, fhahn
Reviewed By: reames
Subscribers: hiraditya, zzheng, llvm-commits
Differential Revision: https://reviews.llvm.org/D64235
llvm-svn: 366665
For equality, the function called getTrue/getFalse with the VT
of the comparison input. But getTrue/getFalse need the boolean VT.
So if this code ever executed, it would assert.
I believe these cases are removed by InstSimplify so we don't get here.
So this patch just fixes up an assert to exclude the equality
possibility and removes the broken code.
llvm-svn: 366649
AddOne/SubOne create new Constant objects. That seems heavy for
comparing ConstantInts which wrap APInts. Just do the math on
the APInts and compare them.
llvm-svn: 366648
Summary:
Four things here:
1. Generalize the fold to handle non-splat divisors. Reasonably trivial.
2. Unban power-of-two divisors. I don't see any reason why they should
be illegal.
* There is no ban in Hacker's Delight
* I think the ban came from the same bug that caused the miscompile
in the base patch - in `floor((2^W - 1) / D)` we were dividing by
`D0` instead of `D`, and we **were** ensuring that `D0` is not `1`,
which made sense.
3. Unban `1` divisors. I no longer believe Hacker's Delight actually says
that the fold is invalid for `D = 1`. Further considerations:
* We know that
* `(X u% 1) == 0` can be constant-folded to `1`,
* `(X u% 1) != 0` can be constant-folded to `0`,
* Also, we know that
* `X u<= -1` can be constant-folded to `1`,
* `X u> -1` can be constant-folded to `0`,
* https://godbolt.org/z/7jnZJX
* https://rise4fun.com/Alive/oF6p
* We know we will end up with the following:
`(setule/setugt (rotr (mul N, P), K), Q)`
* Therefore, for given new DAG nodes and comparison predicates
(`ule`/`ugt`), we will still produce the correct answer if:
`Q` is an all-ones constant; and both `P` and `K` are *anything*
other than `undef`.
* The fold will indeed produce `Q = all-ones`.
4. Try to re-splat the `P` and `K` vectors - we don't care about
their values for the lanes where divisor was `1`.
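For reference, the kind of input this DAG fold targets comes from a
urem-by-constant equality check such as (a minimal invented example):
```
define i1 @is_multiple_of_6(i32 %x) {
  %r = urem i32 %x, 6
  %c = icmp eq i32 %r, 0
  ret i1 %c
}
```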
Reviewers: RKSimon, hermord, craig.topper, spatel, xbolva00
Reviewed By: RKSimon
Subscribers: hiraditya, javed.absar, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63963
llvm-svn: 366637
As detailed on PR42674, we can reduce a vXi8 down until we have the final <8 x i8>, and then use PSADBW with zero, to sum those values. We then extract the bottom i8, discarding any overflow from the upper bits of the i16 result.
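As a sketch, the kind of horizontal i8 sum this now lowers via PSADBW
(using the reduction intrinsic spelling from this era of the tree):
```
declare i8 @llvm.experimental.vector.reduce.add.v16i8(<16 x i8>)
define i8 @sum16(<16 x i8> %v) {
  %r = call i8 @llvm.experimental.vector.reduce.add.v16i8(<16 x i8> %v)
  ret i8 %r
}
```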
llvm-svn: 366636
If the blockaddress is not destroyed, the destination block will still
be marked as having its address taken, limiting further transformations.
I think there are other places where the dead blockaddress constants are kept
around, I'll look into that as follow up.
Reviewers: craig.topper, brzycki, davide
Reviewed By: brzycki, davide
Differential Revision: https://reviews.llvm.org/D64936
llvm-svn: 366633
Sometimes, you can end up with cross-bank copies between same-sized GPRs and
FPRs, which feed into G_STOREs. When these copies feed only into stores, they
aren't necessary; we can just store using the original register bank.
This provides some minor code size savings for some floating point SPEC
benchmarks. (Around 0.2% for 453.povray and 450.soplex)
This issue doesn't seem to show up due to regbankselect or anything similar. So,
this patch introduces an early select function, `contractCrossBankCopyIntoStore`
which performs the contraction when possible. The selector then continues
normally and selects the correct store opcode, eliminating needless copies
along the way.
Differential Revision: https://reviews.llvm.org/D65024
llvm-svn: 366625
Summary:
Add immutable WASM global `__tls_align` which stores the alignment
requirements of the TLS segment.
Add `__builtin_wasm_tls_align()` intrinsic to get this alignment in Clang.
The expected usage has now changed to:
__wasm_init_tls(memalign(__builtin_wasm_tls_align(),
__builtin_wasm_tls_size()));
Reviewers: tlively, aheejin, sbc100, sunfish, alexcrichton
Reviewed By: tlively
Subscribers: dschuff, jgravelle-google, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D65028
llvm-svn: 366624
The top-level BUNDLE instruction should behave as an ordinary
instruction. It is supposed to have all relevant registers as implicit
operands. Moving it should work as any other instruction. I believe
the assert intended to avoid moving instructions inside bundles.
llvm-svn: 366605
This change reverts most of the previous register name generation.
The real problem is that RegisterTuple does not generate asm names.
Added optional operand to RegisterTuple. This way we can simplify
register name access and dramatically reduce the size of static
tables for the backend.
Differential Revision: https://reviews.llvm.org/D64967
llvm-svn: 366598
This should now handle everything except structs passed as multiple
registers.
I think most of the packing logic should be handled by
handleAssignments, but I'm unclear on what the contract is for
multiple registers. This is copying how x86 handles this.
This does change the behavior of the test_sgpr_alignment0 amdgpu_vs
test. I don't think shader arguments should try to follow the
alignment, and registers need to be repacked. I also don't think it
matters, since I think the pointers are packed to the beginning of the
argument list anyway.
llvm-svn: 366582
This is the more natural lowering, and presents more opportunities to
reduce 64-bit ops to 32-bit.
This should also help avoid issues graphics shaders have had with
64-bit values, and simplify argument lowering in globalisel.
llvm-svn: 366578
This was handled previously for arguments split due to not fitting in
an MVT. This was dropping the register for argument registers split
due to TLI::getRegisterTypeForCallingConv.
llvm-svn: 366574
Summary:
Current PRE hoists common computations into
CMBB = DT->findNearestCommonDominator(MBB, MBB1).
However, if CMBB is in a hot loop body, we might get performance
degradation.
Differential Revision: https://reviews.llvm.org/D64394
llvm-svn: 366570
Summary:
For split-stack, if the nested argument (i.e. R10) is not used, no need to save/restore it in the prologue.
Reviewers: thanm
Reviewed By: thanm
Subscribers: mstorsjo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64673
llvm-svn: 366569
We'd like to remove this whole function, because these are properties of
functions, not the target as a whole. These two are easy to remove
because they are only used for emitting ARM build attributes, which
expects them to represent the defaults for the whole module, not just
the last function generated.
This is needed to get correct build attributes when using IPRA on ARM,
because IPRA causes resetTargetOptions to get called before
ARMAsmPrinter::emitAttributes.
Differential revision: https://reviews.llvm.org/D64929
llvm-svn: 366562
If a function definition is not exact, then the linker could select a
differently-compiled version of it, which could use different registers.
https://reviews.llvm.org/D64909
llvm-svn: 366557
Summary:
According to the new Armv8-M specification
https://static.docs.arm.com/ddi0553/bh/DDI0553B_h_armv8m_arm.pdf the
instructions SQRSHRL and UQRSHLL now have an additional immediate
operand <saturate>. The new assembly syntax is:
SQRSHRL<c> RdaLo, RdaHi, #<saturate>, Rm
UQRSHLL<c> RdaLo, RdaHi, #<saturate>, Rm
where <saturate> can be either 64 (the existing behavior) or 48, in
that case the result is saturated to 48 bits.
The new operand is encoded as follows:
#64 Encoded as sat = 0
#48 Encoded as sat = 1
sat is bit 7 of the instruction bit pattern.
This patch adds a new assembler operand class MveSaturateOperand which
implements parsing and encoding. Decoding is implemented in
DecodeMVEOverlappingLongShift.
Reviewers: ostannard, simon_tatham, t.p.northover, samparker, dmgreen, SjoerdMeijer
Reviewed By: simon_tatham
Subscribers: javed.absar, kristof.beyls, hiraditya, pbarrio, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64810
llvm-svn: 366555
Summary:
This patch removes the `default` case from some switches on
`llvm::Triple::ObjectFormatType`, and cases for the missing enumerators
(`UnknownObjectFormat`, `Wasm`, and `XCOFF`) are then added.
For `UnknownObjectFormat`, the effect of the action for the `default`
case is maintained; otherwise, where `llvm_unreachable` is called,
`report_fatal_error` is used instead.
Where the `default` case returns a default value, `report_fatal_error`
is used for XCOFF as a placeholder. For `Wasm`, the effect of the action
for the `default` case is maintained.
The code is structured to avoid strongly implying that the `Wasm` case
is present for any reason other than to make the switch cover all
`ObjectFormatType` enumerator values.
Reviewers: sfertile, jasonliu, daltenty
Reviewed By: sfertile
Subscribers: hiraditya, aheejin, sunfish, llvm-commits, cfe-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D64222
llvm-svn: 366544
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.
There are many variants to this pattern:
f. `((x << MaskShAmt) a>> MaskShAmt) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
f. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)
Normally, the inner pattern is sign-extend,
but for our purposes it's no different to other patterns:
alive proofs:
f: https://rise4fun.com/Alive/7U3
For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Differential Revision: https://reviews.llvm.org/D64524
llvm-svn: 366540
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.
There are many variants to this pattern:
e. `((x << MaskShAmt) l>> MaskShAmt) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
e. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)
alive proofs:
e: https://rise4fun.com/Alive/0FT
For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Differential Revision: https://reviews.llvm.org/D64521
llvm-svn: 366539
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.
There are many variants to this pattern:
d. `(x & ((-1 << MaskShAmt) >> MaskShAmt)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
d. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)
alive proofs:
d: https://rise4fun.com/Alive/I5Y
For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Differential Revision: https://reviews.llvm.org/D64519
llvm-svn: 366538
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.
There are many variants to this pattern:
c. `(x & (-1 >> MaskShAmt)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
c. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)
alive proofs:
c: https://rise4fun.com/Alive/RgJh
For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Differential Revision: https://reviews.llvm.org/D64517
llvm-svn: 366537
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.
There are many variants to this pattern:
b. `(x & (~(-1 << maskNbits))) << shiftNbits`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
b. `(MaskShAmt+ShiftShAmt) u>= bitwidth(x)`
alive proof:
b: https://rise4fun.com/Alive/y8M
For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Differential Revision: https://reviews.llvm.org/D64514
llvm-svn: 366536
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.
There are many variants to this pattern:
a. `(x & ((1 << MaskShAmt) - 1)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
a. `(MaskShAmt+ShiftShAmt) u>= bitwidth(x)`
alive proof:
a: https://rise4fun.com/Alive/wi9
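A minimal sketch of variant a. in IR (value names invented; the fold applies
iff %nbits + %shamt u>= 32 here):
```
define i32 @src(i32 %x, i32 %nbits, i32 %shamt) {
  %setbit = shl i32 1, %nbits
  %mask = add i32 %setbit, -1         ; (1 << MaskShAmt) - 1
  %masked = and i32 %x, %mask
  %r = shl i32 %masked, %shamt
  ret i32 %r
}
```
simplifies to:
```
define i32 @tgt(i32 %x, i32 %nbits, i32 %shamt) {
  %r = shl i32 %x, %shamt
  ret i32 %r
}
```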
Indeed, not all of these patterns are canonical.
But since this fold will only produce a single instruction
I'm really interested in handling even non-canonical patterns,
since I have this general kind of pattern in hotpaths,
and it is not totally outlandish for bit-twiddling code.
For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Reviewers: spatel, nikic, huihuiz, xbolva00
Reviewed By: xbolva00
Subscribers: efriedma, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64512
llvm-svn: 366535
In debug frame information, some fields, e.g., Length in CIE/FDE and
Offset in FDE are attributes to describe the structure of CIE/FDE. They
are not related to the relaxed code. However, these attributes are
symbol differences. So, in the current design, these attributes will be
filled as zero and LLVM generates relocations for them.
We only need to generate relocations for symbols in executable sections.
So, if the symbols are not located in executable sections, we still
evaluate their values under relaxation.
Differential Revision: https://reviews.llvm.org/D61584
llvm-svn: 366531
It is necessary to generate fixups in .debug_frame or .eh_frame when
relaxation is enabled, because the address delta may change after
relaxation.
There is an opcode with 6-bits data in debug frame encoding. So, we
also need 6-bits fixup types.
Differential Revision: https://reviews.llvm.org/D58335
llvm-svn: 366524
Summary:
Inline asm doesn't use labels when compiled as an object file. Therefore, we
shouldn't create one for the (potential) callbr destination. Instead, use the
symbol for the MachineBasicBlock.
Reviewers: nickdesaulniers, craig.topper
Reviewed By: nickdesaulniers
Subscribers: xbolva00, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64888
llvm-svn: 366523
I plan on adding memcpy optimizations in the GlobalISel pipeline, but we can't
do that unless we delay lowering to actual function calls. This patch changes
the translator to generate G_INTRINSIC_W_SIDE_EFFECTS for these functions, and
then has each target specify, using the new custom legalizer hook for
intrinsics, that it wants them expanded into a libcall.
Differential Revision: https://reviews.llvm.org/D64895
llvm-svn: 366516
Add support for folding G_GEPs into loads of the form
```
ldr reg, [base, off]
```
when possible. This can save an add before the load. Currently, this is only
supported for loads of 64 bits into 64 bit registers.
Add a new addressing mode function, `selectAddrModeRegisterOffset` which
performs this folding when it is profitable.
Also add a test for addressing modes for G_LOAD.
Differential Revision: https://reviews.llvm.org/D64944
llvm-svn: 366503
This is a small extension of !associated, mostly useful for the implementation
convenience of instrumentation passes that RAUW globals with aliases, such
as LowerTypeTests.
Differential Revision: https://reviews.llvm.org/D64951
llvm-svn: 366502
Summary:
Some polish for r365099 which adds a static initializer to
MachOObjectFile. Remove it by moving it to file scope.
Reviewers: smeenai, alexshap, compnerd, mtrent, anushabasana
Reviewed By: smeenai
Subscribers: hiraditya, jkorous, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64873
llvm-svn: 366496
This causes sections with relative pointers to be marked as read only,
which means that they won't end up sharing pages with writable data.
Differential Revision: https://reviews.llvm.org/D64948
llvm-svn: 366494
While 'd_type' is a non-standard extension to `struct dirent`, only
glibc signals its presence with a macro '_DIRENT_HAVE_D_TYPE'.
However, any platform with 'd_type' also includes a way to convert to
mode_t values using the macro 'DTTOIF', so we can check for that alone
and still be confident that the 'd_type' member exists.
(If this turns out to be wrong, I'll go back and set up an actual
CMake check.)
I couldn't think of how to write a test for this, because I couldn't
think of how to test that a 'stat' call doesn't happen without
controlling the filesystem or intercepting 'stat', and there's no good
cross-platform way to do that that I know of.
Follow-up (almost a year later) to r342089.
rdar://problem/50592673
https://reviews.llvm.org/D64940
llvm-svn: 366486
Summary:
Add `__builtin_wasm_tls_base` so that LeakSanitizer can find the thread-local
block and scan through it for memory leaks.
Reviewers: tlively, aheejin, sbc100
Subscribers: dschuff, jgravelle-google, hiraditya, sunfish, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D64900
llvm-svn: 366475
Summary:
- As the pointer stripping now tracks through `addrspacecast`, prepare
to handle the bit-width difference from the result pointer.
Reviewers: jdoerfert
Subscribers: jvesely, nhaehnle, hiraditya, arphaman, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64928
llvm-svn: 366470
There doesn't seem to be a practical reason for these instructions to have
different restrictions on the types of relocations that they may be used
with, notwithstanding the language in the ELF AArch64 spec that implies that
specific relocations are meant to be used with specific instructions.
For example, we currently forbid the first instruction in the following
sequence, despite it currently being used by clang to generate a global
reference under -mcmodel=large:
movz x0, #:abs_g0_nc:foo
movk x0, #:abs_g1_nc:foo
movk x0, #:abs_g2_nc:foo
movk x0, #:abs_g3:foo
Therefore, allow MOVK/MOVN/MOVZ to accept the union of the set of relocations
that they currently accept individually.
Differential Revision: https://reviews.llvm.org/D64466
llvm-svn: 366461
It is necessary to generate fixups in .debug_frame or .eh_frame when
relaxation is enabled, because the address delta may change after
relaxation.
There is an opcode with 6-bit data in the debug frame encoding, so we
also need 6-bit fixup types.
Differential Revision: https://reviews.llvm.org/D58335
llvm-svn: 366442
This patch enables us to find the source loads for each element, splitting them into a Load and ByteOffset, and attempts to recognise consecutive loads that are in fact from the same source load.
A helper function, findEltLoadSrc, recurses to find a LoadSDNode and determines the element's byte offset within it. When attempting to match consecutive loads, byte-offsetted loads are then matched against a previous load that has already been confirmed to be a consecutive match.
Next step towards PR16739 - after this we just need to account for shuffling/repeated elements to create a vector load + shuffle.
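As a hedged illustration of the kind of pattern this targets (a hypothetical
example, not one of the committed tests):
```
define <4 x i32> @merge(i32* %p) {
  %p1 = getelementptr i32, i32* %p, i64 1
  %p2 = getelementptr i32, i32* %p, i64 2
  %p3 = getelementptr i32, i32* %p, i64 3
  %e0 = load i32, i32* %p
  %e1 = load i32, i32* %p1
  %e2 = load i32, i32* %p2
  %e3 = load i32, i32* %p3
  %v0 = insertelement <4 x i32> undef, i32 %e0, i32 0
  %v1 = insertelement <4 x i32> %v0, i32 %e1, i32 1
  %v2 = insertelement <4 x i32> %v1, i32 %e2, i32 2
  %v3 = insertelement <4 x i32> %v2, i32 %e3, i32 3
  ret <4 x i32> %v3
}
; findEltLoadSrc sees four element loads at byte offsets 0/4/8/12 from the
; same base, so the build vector can become a single 128-bit load of %p.
```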
Differential Revision: https://reviews.llvm.org/D64551
llvm-svn: 366441
Summary:
Commit r365249 changed usage of FileCheckNumericVariable to have one
instance of that class per variable as opposed to one instance per
definition of a given variable as was done before. However, it retained
the safety check in setValue that it should only be called with the
variable unset, even after r365625.
However, this causes an assert failure when a non-pseudo variable is being
redefined. And while redefinition of @LINE at each CHECK line works in
the general case, it caused problems when a substitution failed (fixed in
r365624) and still causes problems when a CHECK line does not match, since
@LINE's value is cleared after substitutions in match() happened but
printSubstitutions also attempts a substitution.
This commit solves the root of the problem by changing setValue to set a
new value regardless of whether a value was set or not, thus fixing all
the aforementioned issues.
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: hiraditya, llvm-commits, probinson, dblaikie, grimar, arichardson, tra, rnk, kristina, hfinkel, rogfer01, JonChesterfield
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64882
llvm-svn: 366434
LEA doesn't affect flags, so use it more liberally to replace an ADD when
we know that the ADD operands affect flags.
In the motivating example from PR40483:
https://bugs.llvm.org/show_bug.cgi?id=40483
...this lets us avoid duplicating a math op just to avoid flag conflict.
As mentioned in the TODO comments, this heuristic can be extended to
fire more often if that leads to more improvements.
Differential Revision: https://reviews.llvm.org/D64707
llvm-svn: 366431
Summary:
PerformVMOVRRDCombine omits adding an offset of 4 to the PointerInfo when
converting an
  f64 = load[M]
to
  {i32, i32} = {load[M], load[M + 4]}
which would allow the machine scheduler to break dependencies with the
second load (PR42638).
Reviewers: eli.friedman, dmgreen, ostannard
Reviewed By: ostannard
Subscribers: ostannard, javed.absar, kristof.beyls, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64870
llvm-svn: 366423
I'm not convinced the code this calls is properly vetted for
vXi1 vectors. Experimental vector widening legalization testing
for D55251 is now hitting an assertion failure inside
EltsFromConsecutiveLoads. This is occurring from a v2i1 load
having a store size different than its VT size. Hopefully
this commit will keep such issues from happening.
llvm-svn: 366405
When code relaxation is enabled many RISC-V fixups are not resolved but
instead relocations are emitted. This happens even for DWARF debug
sections. Therefore, to properly support the parsing of DWARF debug info
we need to be able to resolve RISC-V relocations. This patch adds:
* Support for RISC-V relocations in RelocationResolver
* DWARF support for two relocations per object file offset
* DWARF changes to support relocations in more DIE fields
The two relocations per offset change is needed because some RISC-V
relocations (used for label differences) come in pairs.
Relocations can also be emitted for DWARF fields where relocations were
not yet evaluated. Adding relocation support for some of these fields is
essential. On the other hand, LLVM currently emits RISC-V relocations
for fixups that could be safely evaluated, since they can never be
affected by code relaxations. This patch also adds relocation support
for the fields affected by those extraneous relocations (the DWARF unit
entry Length, and the DWARF debug line entry TotalLength and
PrologueLength), for testing purposes.
Differential Revision: https://reviews.llvm.org/D62062
Patch by Luís Marques.
llvm-svn: 366402
Summary:
LoopInfoWrapperPass::verify uses DT, which means DT must be alive
even if it has no direct users.
Fixes a crash in expensive checks mode.
Reviewers: pcc, leonardchan
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64896
llvm-svn: 366388
After rL365286 I had a failing test:
LLVM :: tools/gold/X86/v1.12/thinlto_emit_linked_objects.ll
It was failing with the output:
$ llvm-bcanalyzer --dump llvm/test/tools/gold/X86/v1.12/Output/thinlto_emit_linked_objects.ll.tmp3.o.thinlto.bc
Expected<T> must be checked before access or destruction.
Unchecked Expected<T> contained error:
Unexpected end of file reading 0 of 0 bytesStack dump:
llvm-svn: 366387
- getCompression() used to return a PDB_SourceCompression even though
the docs for IDiaInjectedSource are explicit about the return value
being compiler-dependent. Return a uint32_t instead, and make the
printing code handle unknown values better by printing "Unknown" and
the int value instead of not printing any compression.
- Print compressed contents as hex dump, not as string.
- Add compression type "DotNet", which is used (at least) by csc.exe,
the C# compiler. Also add a lengthy comment describing the stream
contents (derived from looking at the raw hex contents long enough
to see the GUIDs, which led me to the roslyn and mono implementations
for handling this).
- The native injected source dumper was dumping the contents of the
whole data stream -- but csc.exe writes a stream that's padded with
zero bytes to the next 512 boundary, and the dia api doesn't display
those padding bytes. So make NativeInjectedSource::getCode() do the
same thing.
Differential Revision: https://reviews.llvm.org/D64879
llvm-svn: 366386
This will let us instrument globals during initialization. This required
making the new PM pass a module pass, which should still provide access to
analyses via the ModuleAnalysisManager.
Differential Revision: https://reviews.llvm.org/D64843
llvm-svn: 366379
The LocalStackSlotPass pre-allocates a stack protector and makes sure
that it comes before the local variables on the stack.
We need to make sure that later during PEI we don't re-allocate a new
stack protector slot. If that happens, the new stack protector slot will
end up being **after** the local variables that it should be protecting.
Therefore, we would have two slots assigned for two different stack
protectors, one at the top of the stack, and one at the bottom. Since
PEI will overwrite the assigned slot for the stack protector, the load
that is used to compare the value of the stack protector will use the
slot assigned by PEI, which is wrong.
For this, we need to check if the object is pre-allocated, and re-use
that pre-allocated slot.
Differential Revision: https://reviews.llvm.org/D64757
llvm-svn: 366371
Extract the sources to the GCD of the original size and target size,
padding with implicit_def as necessary.
Also fix the case where the requested source type is wider than the
original result type. This was ignoring the type, and just using the
destination. Do the operation in the requested type and truncate back.
llvm-svn: 366367
Use an anyext to the requested type for the leftover operand to
produce a slightly wider type, and then truncate the final merge.
I have another implementation almost ready which handles arbitrary
widens, but I think it produces worse code in this example (which I
think is 90% due to not folding redundant copies or folding out
implicit_def users), so I wanted to add this as a baseline first.
llvm-svn: 366366
Implement IR intrinsics for stack tagging. Generated code is very
unoptimized for now.
Two special intrinsics, llvm.aarch64.irg.sp and llvm.aarch64.tagp are
used to implement a tagged stack frame pointer in a virtual register.
Differential Revision: https://reviews.llvm.org/D64172
llvm-svn: 366360
Summary:
Since the target has no significant advantage from vectorization, the
threshold bonus for vector instructions should be optional.
The amdgpu-inline-arg-alloca-cost parameter default value and the target
InliningThresholdMultiplier value were then tuned accordingly.
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, eraman, hiraditya, haicheng, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64642
llvm-svn: 366348
Summary:
ORCv1 is deprecated. The current aim is to remove it before the LLVM 10.0
release. This patch adds deprecation attributes to the ORCv1 layers and
utilities to warn clients of the change.
Reviewers: dblaikie, sgraenitz, AlexDenisov
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64609
llvm-svn: 366344
Summary:
Deduce the "willreturn" attribute for functions.
For now, intrinsics are not willreturn. More annotation will be done in another patch.
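A minimal sketch of the kind of function the deduction should cover (a
hypothetical example, not from the patch):
```
; no loops, no calls, no other sources of divergence: this function
; always returns, so it can be deduced willreturn
define i32 @id(i32 %x) {
  ret i32 %x
}
```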
Reviewers: jdoerfert
Subscribers: jvesely, nhaehnle, nicholas, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63046
llvm-svn: 366335
The original behavior was to always emit the offsets to each call site in the
call site table as uleb128 values; however, on some architectures (e.g. RISC-V)
these uleb128 offsets into the code cannot always be resolved until link time
(because relaxation will invalidate any calculated offsets), and there are no
appropriate relocations for uleb128 values. As a consequence it needs to be
possible to specify an alternative.
This also switches RISCV to use DW_EH_PE_udata4 for call site encodings in
.gcc_except_table
Differential Revision: https://reviews.llvm.org/D63415
Patch by Edward Jones.
llvm-svn: 366329
This patch sets correct encodings for DWARF exception handling for RISC-V
(other than call site encoding, which must be udata4 rather than uleb128 and
is handled by D63415).
This has the same intent as D63409, except this version matches GCC/binutils
behaviour which uses the same encodings regardless of PIC/non-PIC and
medlow/medany code model.
llvm-svn: 366327
Summary:
Missed in the original commit, use the correct callee-saved register
list for spilling, instead of the standard SVR432 list. This avoids
needlessly spilling the SPE non-volatile registers when they're not used.
As part of this, also add where missing, and sort, the spill opcode
checks for SPE and SPE4 register classes.
Reviewers: nemanjai, hfinkel, joerg
Subscribers: kbarton, jsji, llvm-commits
Differential Revision: https://reviews.llvm.org/D56703
llvm-svn: 366319
Summary:
As pointed out in a comment on D49754, register spilling will currently
spill SPE registers at almost any offset. However, the instructions
`evstdd` and `evldd` require a) 8-byte alignment, and b) a limit of 256
(unsigned) bytes from the base register, as the offset must fit into a
5-bit field, which ranges from 0-31 (indexed in double-words).
The update to the register spill test is taken partially from the test
case shown in D49754.
Additionally, as pointed out by Kei Thomsen, globals will currently use
evldd/evstdd, though the offset isn't known at compile time, so may
exceed the 8-bit (unsigned) offset permitted. This fixes that as well,
by forcing it to always use evlddx/evstddx when accessing globals.
Part of the patch contributed by Kei Thomsen.
Reviewers: nemanjai, hfinkel, joerg
Subscribers: kbarton, jsji, llvm-commits
Differential Revision: https://reviews.llvm.org/D54409
llvm-svn: 366318
Add narrowScalar to half of the original size for G_ICMP.
ClampScalar G_ICMP's operands 2 and 3 to s32.
Select G_ICMP for pointers for MIPS32. Pointer compare is the same
as for integers; it is enough to declare them as a legal type.
Differential Revision: https://reviews.llvm.org/D64856
llvm-svn: 366317
Migrate CallLowering::lowerReturnVal to use the same infrastructure as
lowerCall/FormalArguments and remove the now obsolete code path from
splitToValueTypes.
Forgot to push this earlier.
llvm-svn: 366308
This directive forces the use of the alternate register for the context pointer.
For example, this code:
.cplocal $4
jal foo
expands to:
ld $25, %call16(foo)($4)
jalr $25
Differential Revision: https://reviews.llvm.org/D64743
llvm-svn: 366300
As with other LLVM targets, we do not handle "offsettable"
memory addresses in any special way. In other words, the "o" constraint
is an exact equivalent of the "m" one. But some existing code requires
"o" constraint support.
This fixes PR42589.
Differential Revision: https://reviews.llvm.org/D64792
llvm-svn: 366299
AMDGPU needs to allocate special argument registers separately from
the user function argument list, so needs direct control over the
CCState.
The ArgLocs argument is only really necessary because CCState doesn't
allow access to it.
llvm-svn: 366279
Summary:
Currently, on Emscripten, dynamic linking is not supported with threads.
This means that if thread-local storage is used, it must be used in a
statically-linked executable. Hence, local-exec is the only possible model.
This diff compiles all TLS variables to use local-exec on Emscripten as a
temporary measure until dynamic linking is supported with threads.
The goal for this is to allow C++ types with constructors to be thread-local.
Currently, when `clang` compiles a `thread_local` variable with a constructor,
it generates a `__tls_guard` variable:
@__tls_guard = internal thread_local global i8 0, align 1
As no TLS model is specified, this is treated as general-dynamic, which we do
not support (and cannot support without implementing dynamic linking support
with threads in Emscripten). As a result, any C++ constructor in `thread_local`
variables would not compile.
By compiling all `thread_local` as local-exec, `__tls_guard` will compile and
we can support C++ constructors with TLS without implementing dynamic linking
with threads.
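As a rough sketch of the effect (the exact printed IR is an assumption), the
guard above is treated as if it had been written with an explicit local-exec
model:
```
@__tls_guard = internal thread_local(localexec) global i8 0, align 1
```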
Depends on D64537
Reviewers: tlively, aheejin, sbc100
Reviewed By: aheejin
Subscribers: dschuff, jgravelle-google, hiraditya, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64776
llvm-svn: 366275
Summary:
Thread local variables are placed inside a `.tdata` segment. Their symbols are
offsets from the start of the segment. The address of a thread local variable
is computed as `__tls_base` + the offset from the start of the segment.
`.tdata` segment is a passive segment and `memory.init` is used once per thread
to initialize the thread local storage.
`__tls_base` is a wasm global. Since each thread has its own wasm instance,
it is effectively thread local. Currently, `__tls_base` must be initialized
at thread startup, and so cannot be used with dynamic libraries.
`__tls_base` is to be initialized with a new linker-synthesized function,
`__wasm_init_tls`, which takes as an argument a block of memory to use as the
storage for thread locals. It then initializes the block of memory and sets
`__tls_base`. As `__wasm_init_tls` will handle the memory initialization,
the memory does not have to be zeroed.
To help allocating memory for thread-local storage, a new compiler intrinsic
is introduced: `__builtin_wasm_tls_size()`. This intrinsic function returns
the size of the thread-local storage for the current function.
The expected usage is to run something like the following upon thread startup:
__wasm_init_tls(malloc(__builtin_wasm_tls_size()));
Reviewers: tlively, aheejin, kripken, sbc100
Subscribers: dschuff, jgravelle-google, hiraditya, sunfish, jfb, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D64537
llvm-svn: 366272
This is part of what is requested by PR42023:
https://bugs.llvm.org/show_bug.cgi?id=42023
There's an extension needed for FP add, but exactly how we would specify
that using flags is not clear to me, so I left that as a TODO.
We're still missing patterns for partial reductions when the input vector
is 256-bit or 512-bit, but I think that's a failure of vector narrowing.
If we can reduce the widths, then this matching should work on those tests.
Differential Revision: https://reviews.llvm.org/D64760
llvm-svn: 366268
D64033 <https://reviews.llvm.org/D64033> added DW_AT_call_column for
inline sites. However, that change wasn't aware of "-gno-column-info".
To avoid adding column info when "-gno-column-info" is used,
DW_AT_call_column is now only added when we have a non-zero column (when
"-gno-column-info" is used, the column will be zero).
Patch by Wenlei He!
Differential Revision: https://reviews.llvm.org/D64784
llvm-svn: 366264
Summary:
This is exposed by our internal testing.
The reduced testcase will assert with "Impossible reg-to-reg copy"
We can't use COPY to do 32-bit to 64-bit conversion.
Reviewers: kbarton, hfinkel, nemanjai
Reviewed By: hfinkel
Subscribers: hiraditya, MaskRay, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64499
llvm-svn: 366255
I think this manages to not break the DAG handling with the divergent
predicates because the standalone divergent patterns end up with a
higher priority than the pattern on the instruction definition.
The 16-bit versions don't work yet.
llvm-svn: 366254
When it is AReg_1024 this results in unnecessary copying of 32-element
vectors into AGPRs even though they are not intended for an mfma
instruction.
Differential Revision: https://reviews.llvm.org/D64815
llvm-svn: 366252
I don't have an IR sample which is actually failing, but the issue described in the comment is theoretically possible, and should be guarded against even if there's a different root cause for the bot failures.
llvm-svn: 366241
Now that the patterns use the new PatFrag address space support, the
only blocker to importing most load patterns is the addressing mode
complex patterns.
llvm-svn: 366237
`pretty -native -injected-sources -injected-source-content` works with
this patch, and produces identical output to the dia version.
Differential Revision: https://reviews.llvm.org/D64428
llvm-svn: 366236
Summary:
Extend the atomic optimizer to handle signed and unsigned max and min
operations, as well as add and subtract.
Reviewers: arsenm, sheredom, critson, rampitec
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64328
llvm-svn: 366235
Reimplement scheduling constraints for strict FP instructions in
ScheduleDAGInstrs::buildSchedGraph to allow for more relaxed
scheduling. Specifically, allow one strict FP instruction to
be scheduled across another, as long as it is not moved across
any global barrier.
Differential Revision: https://reviews.llvm.org/D64412
Reviewed By: cameron.mcinally
llvm-svn: 366222
Before, everything was based on some kind of type-erased parser
implementation which contained a lot of boilerplate code when multiple
formats were to be supported.
This simplifies it by:
* the remark now owns its arguments
* *always* returning an error from the implementation side
* working around the way the YAML parser reports errors: catch them through
callbacks and re-insert them in a proper llvm::Error
* add a CParser wrapper that is used when implementing the C API to
avoid cluttering the C++ API with useless state
* LLVMRemarkParserGetNext now returns an object that needs to be
released to avoid leaking resources
* add a new API to dispose of a remark entry: LLVMRemarkEntryDispose
llvm-svn: 366217
Summary:
As per title. DAGCombiner only matches the special case where b = 0; this patch extends the pattern to match any value of b.
Depends on D57302
Reviewers: hfinkel, RKSimon, craig.topper
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59208
llvm-svn: 366214
Apparently the check for legal instructions during instruction
select does not happen without an asserts build, so these would
successfully select in release, and fail in debug.
Make s16 and/or/xor legal. These can just be selected directly
to the 32-bit operation, as is already done in SelectionDAG, so just
make them legal.
llvm-svn: 366210
The jcvt intrinsic defined in ACLE [1] is available when __ARM_FEATURE_JCVT is defined.
This change introduces the AArch64 intrinsic, wires it up to the instruction and a new clang builtin function.
The __ARM_FEATURE_JCVT macro is now defined when an Armv8.3-A or higher target is used.
I've implemented the target detection logic in Clang so that this feature is enabled for architectures from armv8.3-a onwards (so -march=armv8.4-a also enables this, for example).
make check-all didn't show any new failures.
[1] https://developer.arm.com/docs/101028/latest/data-processing-intrinsics
Differential Revision: https://reviews.llvm.org/D64495
llvm-svn: 366197
The DWARF3 documentation had an inconsistency concerning the reserved range
for unit length values. The issue was fixed in DWARF4.
Differential Revision: https://reviews.llvm.org/D64622
llvm-svn: 366190
The first argument in the constructor was ignored, and the remaining
arguments were always passed as their defaults.
Differential Revision: https://reviews.llvm.org/D64407
llvm-svn: 366188
The canonical GNU form of JALR resembles a load/store instruction rather
than placing the immediate offset as a separate argument, so match this
behaviour. Also add parser-only aliases for the three-operand form, and
add other shorter aliases also emitted by GNU tools.
Differential Revision: https://reviews.llvm.org/D55277
Patch by James Clarke.
llvm-svn: 366179
RISCVAsmBackend::shouldInsertExtraNopBytesForCodeAlign() assumed that the
align specified would be greater than or equal to the minimum nop length, but
that is not always the case - for example if a user specifies ".align 0" in
assembly.
Differential Revision: https://reviews.llvm.org/D63274
Patch by Edward Jones.
llvm-svn: 366176
The bool result of shouldInsertExtraNopBytesForCodeAlign() is not checked but
the returned nop count is unconditionally read even though it could be
uninitialized.
Differential Revision: https://reviews.llvm.org/D63285
Patch by Edward Jones.
llvm-svn: 366175
Since PseudoCALL defines AsmString, it can be generated from assembly,
and so code-gen patterns should be defined separately to be consistent
with the style of the RISCV backend. Other pseudo-instructions exist
that have code-gen patterns defined directly, but these instructions are
purely for code-gen and cannot be written in assembly.
Differential Revision: https://reviews.llvm.org/D64012
Patch by James Clarke.
llvm-svn: 366174
Previously, this function didn't check the IsPCRel argument. But doing so is a
useful check for errors, and also seemingly necessary for FK_Data_4 (which we
produce a R_RISCV_32_PCREL relocation for if IsPCRel).
Other than R_RISCV_32_PCREL, this should be NFC. Future exception handling
related patches will include tests that capture this behaviour.
llvm-svn: 366172
Use the MemoryVT field. This will be necessary for tablegen to
automatically handle patterns for GlobalISel.
Doesn't handle the d16 lo/hi patterns. Those are a special case since
they involve the custom node type.
llvm-svn: 366168
In LLDB, when parsing type units, we don't need to parse the whole line
table. Instead, we only need to parse the "support files" from the line
table prologue.
To make that possible, this patch moves the respective functions from
the LineTable into the Prologue. Because I don't think users of the
LineTable should have to know that these files come from the Prologue,
I've left the original methods in place, and made them redirect to the
LineTable.
Differential revision: https://reviews.llvm.org/D64774
llvm-svn: 366164
Summary:
- As the pointer stripping could trace through `addrspacecast` now, we need
to sext/trunc the offset to ensure it has the same width as the
pointer after stripping.
Reviewers: jdoerfert
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64768
llvm-svn: 366162
In LLDB, when parsing type units, we don't need to parse the whole line
table. Instead, we only need to parse the "support files" from the line
table prologue.
To make that possible, this patch moves the respective functions from
the LineTable into the Prologue. Because I don't think users of the
LineTable should have to know that these files come from the Prologue,
I've left the original methods in place, and made them redirect to the
LineTable.
Differential revision: https://reviews.llvm.org/D64774
llvm-svn: 366158
There are some reported miscompiles with AVX512 and performance regressions
in Eigen. Verified with the original committer; testcases will be forthcoming.
This reverts commit r364964.
llvm-svn: 366154
We mostly avoid sub with immediate but there are a couple cases that can create them. One is the add 128, %rax -> sub -128, %rax trick in isel. The other is when a SUB immediate gets created for a compare where both the flags and the subtract value is used. If we are unable to linearize the SelectionDAG to satisfy the flag user and the sub result user from the same instruction, we will clone the sub immediate for the two uses. The one that produces flags will eventually become a compare. The other will have its flag output dead, and could then be considered for LEA creation.
I added additional test cases to add.ll to show that the sub -128 trick gets converted to LEA and a case where we don't need to convert it.
This showed up in the current codegen for PR42571.
Differential Revision: https://reviews.llvm.org/D64574
llvm-svn: 366151
Summary:
This adds missing utility methods and copy instruction handling for
`exnref` type and also adds tests.
`tee` instruction tests are missing because `isTee` is currently only
used in ExplicitLocals pass and testing that pass in mir requires
serialization of stackified registers in mir files, which is a bit
nontrivial because `MachineFunctionInfo` only has info of vreg numbers
(which are large integers) but not the mir's register numbers. But this
change is quite trivial anyway.
Reviewers: tlively
Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64705
llvm-svn: 366149
Summary:
We agreed to rename `except_ref` to `exnref` for consistency with other
reference types in
https://github.com/WebAssembly/exception-handling/issues/79. This also
renames WebAssemblyInstrExceptRef.td to WebAssemblyInstrRef.td in order
to use the file for other reference types in future.
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64703
llvm-svn: 366145
Summary:
These are emitted as identifiers by the InstPrinter, so we should
parse them as such. These could potentially clash with symbols of
the same name, but that is out of our (the WebAssembly backend) control.
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, aheejin, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64770
llvm-svn: 366139
Summary:
Enable hoisting and merging m0 defs that are initialized with the same
immediate value. Fixes a bug where removed instructions are not considered
to interfere with other inits, and makes sure not to hoist inits before
block prologues.
Reviewers: rampitec, arsenm
Reviewed By: rampitec
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64766
llvm-svn: 366135
We already do this for the flat and DS instructions, although it is
certainly uglier and more verbose.
This will allow using separate pattern definitions for extload and
zextload. Currently we get away with using a single PatFrag with
custom predicate code to check if the extension type is a zextload or
anyextload. The generic mechanism the global isel emitter understands
treats these as mutually exclusive. I was considering making the
pattern emitter accept zextload or sextload extensions for anyextload
patterns, but in global isel, the different extending loads have
distinct opcodes, and there is currently no mechanism for an opcode
matcher to try multiple (and there probably is very little need for
one beyond this case).
llvm-svn: 366132
Summary:
There is currently a correctness issue when unrolling loops containing
callbr's where their indirect targets are being updated correctly to the
newly created labels, but their operands are not. This manifests in
unrolled loops where the second and subsequent copies of callbr
instructions have blockaddresses of the label from the first instance of
the unrolled loop, which would result in nonsensical runtime control
flow.
For now, conservatively do not unroll the loop. In the future, I think
we can pursue unrolling such loops provided we transform the cloned
callbr's operands correctly.
Such a transform and its legalities are being discussed in:
https://reviews.llvm.org/D64101
Link: https://bugs.llvm.org/show_bug.cgi?id=42489
Link: https://groups.google.com/forum/#!topic/clang-built-linux/z-hRWP9KqPI
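A hedged sketch of the problematic shape (a hypothetical function, not one of
the committed tests): when the loop body is cloned, the indirect successor
label is remapped, but the blockaddress operand used to keep pointing at the
original block:
```
define void @f(i32 %n) {
entry:
  br label %loop
loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %latch ]
  ; cloning this must also rewrite the blockaddress operand,
  ; not just the indirect successor list
  callbr void asm sideeffect "# jump to $0", "X"(i8* blockaddress(@f, %side))
      to label %latch [label %side]
latch:
  %i.next = add i32 %i, 1
  %c = icmp slt i32 %i.next, %n
  br i1 %c, label %loop, label %exit
side:
  ret void
exit:
  ret void
}
```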
Reviewers: fhahn, hfinkel, efriedma
Reviewed By: fhahn, hfinkel, efriedma
Subscribers: efriedma, hiraditya, zzheng, dmgreen, llvm-commits, pirama, kees, nathanchance, E5ten, craig.topper, chandlerc, glider, void, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64368
llvm-svn: 366130
If a 1-bit value is in a 32-bit VGPR, the scalar opcodes set SCC to
whether the result is 0. If the inputs are SCC, these can be copied to
a 32-bit SGPR to produce an SCC result.
llvm-svn: 366125
Add "memtag" sanitizer that detects and mitigates stack memory issues
using armv8.5 Memory Tagging Extension.
It is similar in principle to HWASan, which is a software implementation
of the same idea, but there are enough differences to warrant a new
sanitizer type IMHO. It is also expected to have very different
performance properties.
The new sanitizer does not have a runtime library (it may grow one
later, along with a "debugging" mode). Similar to SafeStack and
StackProtector, the instrumentation pass (in a follow up change) will be
inserted in all cases, but will only affect functions marked with the
new sanitize_memtag attribute.
Reviewers: pcc, hctim, vitalybuka, ostannard
Subscribers: srhines, mehdi_amini, javed.absar, kristof.beyls, hiraditya, cryptoad, steven_wu, dexonsmith, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D64169
llvm-svn: 366123
This is a hack until I come up with a better way of dealing with the
pseudo-register banks used for boolean values. If the use instruction
constrains the register, the selector for the def instruction won't
see that the bank was VCC. A 1-bit SReg_32 could ambiguously have
been SCCRegBank or VCCRegBank in wave32.
This is necessary to successfully select branches with an and/or/xor
condition.
llvm-svn: 366120
The extra test change is correct, although how it arrives there is a
bug that needs work. With wave32, the test for isVCC ambiguously
reports true for an SCC or VCC source. A new allocatable pseudo
register class for SCC may be necessary.
llvm-svn: 366119
Summary:
Processing of command-line definitions of variables and the logic around
implicit not directives both reuse parsing code that expects a line
number to be defined. So far, a special line number of 0 was used for
those users of the parsing code where a line number does not make sense.
This commit instead represents line numbers as Optional values so that
they can be None for those cases.
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: JonChesterfield, rogfer01, hfinkel, kristina, rnk, tra, arichardson, grimar, dblaikie, probinson, llvm-commits, hiraditya
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64639
llvm-svn: 366109
The construction was explained in
https://reviews.llvm.org/D44810?id=139526#inline-391999 but reading the code
shouldn't require hunting down old reviews to understand it.
The precomputed list was missing an entry for the empty list case, and
one entry at the very end. (The current last entry is the last one where
3 * BucketCount fits in a signed int, but the reference implementation
uses unsigneds as far as I can tell, so there's room for one more entry.)
No behavior change for inputs seen in practice.
Differential Revision: https://reviews.llvm.org/D64738
llvm-svn: 366107
We need to make sure that we are sensibly dealing with vectors of types v2i64
and v2f64, even if most of the time we cannot generate native operations for
them. This mostly adds a lot of testing, plus fixes up a couple of the issues
found. AND, OR and XOR can be legal for v2i64, and shift combining needs a
slight fixup.
Differential Revision: https://reviews.llvm.org/D64316
llvm-svn: 366106
inttofp (trunc (extelt X, 0)) --> inttofp (extelt (bitcast X), 0)
We have pseudo-vectorization of scalar int to FP casts, so this tries to
make that more likely by replacing a truncate with a bitcast. I didn't see
any test diffs starting from 'uitofp', so I left that as a TODO. We can't
only match the shorter trunc+extract pattern because there's an opposing
transform somewhere, so we infinite loop. Waiting to try this during
lowering is another possibility.
A motivating case is shown in PR39975 and included in the test diffs here:
https://bugs.llvm.org/show_bug.cgi?id=39975
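A hedged instance of the fold (assuming a little-endian x86 target; the
function names are illustrative):
```
define float @src(<2 x i64> %x) {
  %e = extractelement <2 x i64> %x, i32 0
  %t = trunc i64 %e to i32
  %r = sitofp i32 %t to float
  ret float %r
}
; becomes the equivalent of:
define float @tgt(<2 x i64> %x) {
  ; element 0 of the bitcast <4 x i32> is the low 32 bits of element 0
  %b = bitcast <2 x i64> %x to <4 x i32>
  %e = extractelement <4 x i32> %b, i32 0
  %r = sitofp i32 %e to float
  ret float %r
}
```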
Differential Revision: https://reviews.llvm.org/D64710
llvm-svn: 366098
Summary:
Occasionally the build of LLVMLibDriver will fail because Attributes.inc has not been generated yet. Add an explicit dependency, so that we can guarantee that the file has been generated before LLVMLibDriver is built.
##[error]llvm\include\llvm\IR\Attributes.h(73,0): Error C1083: Cannot open include file: 'llvm/IR/Attributes.inc': No such file or directory
llvm\include\llvm/IR/Attributes.h(73): fatal error C1083: Cannot open include file: 'llvm/IR/Attributes.inc': No such file or directory [LLVMLibDriver.vcxproj]
Reviewers: asmith
Subscribers: mgorny, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64357
llvm-svn: 366097
I think we only turn out-of-range shifts to undef when
all elements are out of range or the shift amount is a splat out
of range. I'm not sure which; I didn't check.
During lowering we can split a shift where some elements
are out of range into multiple shifts. This can create a
new shift with a splat shift amount that is out of range.
This patch returns undef for this case.
Fixes PR42615.
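For illustration, the shape involved looks like this (a hypothetical splat
written directly in IR; the problematic shifts are actually created
internally during lowering):
```
define <4 x i32> @all_oob(<4 x i32> %x) {
  ; every lane shifts by >= the 32-bit element width,
  ; so lowering may now return undef for the whole result
  %r = ashr <4 x i32> %x, <i32 32, i32 32, i32 32, i32 32>
  ret <4 x i32> %r
}
```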
Differential Revision: https://reviews.llvm.org/D64699
llvm-svn: 366096
Insert these during codegenprepare.
This works around a DAG issue where generic combines eliminate the `and`
asserting the high bits are zero, which then exposes an unknown read
source to the mul combine. It isn't worth the hassle of trying to
insert an AssertZext or something to try to deal with it.
llvm-svn: 366094
There are scenarios where mutually recursive functions may cause the SCC
to contain both read only and write only functions. This removes an
assertion when adding read attributes which caused a crash with the
provided test case, and instead just doesn't add the attributes.
Patch by Luke Lau <luke.lau@intel.com>
Differential Revision: https://reviews.llvm.org/D60761
llvm-svn: 366090
This adds basic lowering for MVE shifts. There are many shifts in MVE, but the
instructions handled here are:
VSHL (imm)
VSHRu (imm)
VSHRs (imm)
VSHL (vector)
VSHL (register)
MVE, like NEON before it, doesn't have shift right by a vector (or register).
We instead have to negate the amount and shift in the opposite direction. This
means we have to convert any SHR's into a form of SHL (that is still signed or
unsigned) with a negated condition and selecting from there. MVE still does
have shifting by an immediate for SHL, ASR and LSR.
This adds lowering for these and for register forms, which work well for shift
lefts but may require an extra fold of neg(vdup(x)) -> vdup(neg(x)) to potentially
work optimally for right shifts.
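Conceptually (this is a sketch of the strategy, not the exact lowering), a
right shift by a vector amount:
```
define <4 x i32> @shr(<4 x i32> %x, <4 x i32> %y) {
  %r = lshr <4 x i32> %x, %y
  ret <4 x i32> %r
}
; is handled roughly as:
;   %neg = sub <4 x i32> zeroinitializer, %y
;   %r   = unsigned VSHL of %x by %neg   ; negative amounts shift right
```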
Differential Revision: https://reviews.llvm.org/D64212
llvm-svn: 366056
This just moves the shift instruction definitions further down the
ARMInstrMVE.td file, to make positioning patterns slightly more natural.
llvm-svn: 366054
This adjusts the way that we lower NEON shifts to use a DAG target node, not
via a neon intrinsic. This is useful for handling MVE shift operations in the
same way. It also renames some of the immediate shift nodes for
consistency, and moves some of the processing of immediate shifts into
LowerShift allowing it to capture more cases.
Differential Revision: https://reviews.llvm.org/D64426
llvm-svn: 366051
It is possible that a loop exit has two predecessors in the loop body.
In this case, after peeling, the iDom of the exit should be a clone of
the iDom of the original exit, not a clone of a block coming to this exit.
Reviewers: reames, fhahn
Reviewed By: reames
Subscribers: hiraditya, zzheng, llvm-commits
Differential Revision: https://reviews.llvm.org/D64618
llvm-svn: 366050
We do not compute the scalarization overhead in getVectorIntrinsicCost,
and TTI::getIntrinsicInstrCost requires the full argument list.
llvm-svn: 366049
This CL enables peeling of loops with multiple exits, where
one exit is from the latch and the others are basic blocks with
a call to deopt.
The peeling is enabled under a flag which is false by default.
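A hedged sketch of the loop shape this allows (hypothetical IR): the latch is
the only "real" exit, and the side exit only reaches a deoptimizing call:
```
define i32 @f() {
entry:
  br label %loop
loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %latch ]
  %g = icmp eq i32 %i, 7
  br i1 %g, label %side, label %latch
side:
  ; a deopt exit: the only thing this block does is deoptimize
  %rv = call i32 (...) @llvm.experimental.deoptimize.i32(i32 %i) [ "deopt"() ]
  ret i32 %rv
latch:
  %i.next = add i32 %i, 1
  %c = icmp ult i32 %i.next, 100
  br i1 %c, label %loop, label %exit
exit:
  ret i32 0
}
declare i32 @llvm.experimental.deoptimize.i32(...)
```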
Reviewers: reames, mkuper, iajbar, fhahn
Reviewed By: reames
Subscribers: xbolva00, hiraditya, zzheng, llvm-commits
Differential Revision: https://reviews.llvm.org/D63923
llvm-svn: 366048
With this patch the getLoopEstimatedTripCount function will
also accept loops with more than one exit, provided all exits
except the latch block end up with a call to deopt.
These side exits should not impact the estimated trip count.
Reviewers: reames, mkuper, danielcdh
Reviewed By: reames
Subscribers: fhahn, lebedev.ri, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D64553
llvm-svn: 366042
Extract the code from LoopUnrollRuntime into a utility function to
re-use it in D63923.
Reviewers: reames, mkuper
Reviewed By: reames
Subscribers: fhahn, hiraditya, zzheng, dmgreen, llvm-commits
Differential Revision: https://reviews.llvm.org/D64548
llvm-svn: 366040
Summary:
SSE1 only supports v4f32. But does have instructions like movlps/movhps that load/store 64-bits of memory.
This patch breaks the connection between the node VT of the vzext_load/vextract_store patterns and the memory VT. Enabling a v4f32 node with a 64-bit memory VT. I've used i64 as the memory VT here. I've written the PatFrag predicate to just check the store size not the specific VT. I think the VT will only matter for CSE purposes. We could use v2f32, but if we want to start using these operations in more places a simple integer type might make the most sense.
I'd like to maybe use this same thing for SSE2 and later as well, but that will need more work to be supported by EltsFromConsecutiveLoads to avoid regressing lit tests. I'd maybe also like to combine bitcasts with these load/stores nodes now that the types are disconnected. And I'd also like to consider canonicalizing (scalar_to_vector + load) to vzext_load.
If you want I can split the mechanical tablegen stuff where I added the 32/64 off from the sse1 change.
Reviewers: spatel, RKSimon
Reviewed By: RKSimon
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64528
llvm-svn: 366034
Teaches ARM::appendArchExtFeatures to account for dependencies when processing
target features: i.e. when you say -march=armv8.1-m.main+mve.fp+nofp it
means mve.fp should get discarded too. (Split from D63936)
Differential Revision: https://reviews.llvm.org/D64048
llvm-svn: 366031
Loop invariant operands do not need to be scalarized, as we are using
the values outside the loop. We should ignore them when computing the
scalarization overhead.
Fixes PR41294
Reviewers: hsaito, rengolin, dcaballe, Ayal
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D59995
llvm-svn: 366030
When processing the command line options march, mcpu and mfpu, we store
the implied target features on a vector. The change D62998 introduced a
temporary vector, where the processed features get accumulated. When
calling DecodeARMFeaturesFromCPU, which sets the default features for
the specified CPU, we certainly don't want to override the features
that have been explicitly specified on the command line. Therefore, the
default features should appear first in the final vector. This problem
became evident once I added the missing (unhandled) target features in
ARM::getExtensionFeatures.
Differential Revision: https://reviews.llvm.org/D63936
llvm-svn: 366027
This recommits r365750 (git commit 8b222ecf27)
Original message:
Currently invalid bitcode files can cause a crash, when OpNum exceeds
the number of elements in Record, like in the attached bitcode file.
The test case was generated by clusterfuzz: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=15698
Reviewers: t.p.northover, thegameg, jfb
Reviewed By: jfb
Differential Revision: https://reviews.llvm.org/D64507
llvm-svn: 365750
llvm-svn: 366018
At the moment, bitcode files with invalid forward reference can easily
cause the bitcode reader to run out of memory, by creating a forward
reference with a very high index.
We can use the size of the bitcode file as an upper bound, because a
valid bitcode file can never contain more records. This should be
sufficient to fail early in most cases. The only exception is large
files with invalid forward references close to the file size.
There are a couple of clusterfuzz runs that fail with out-of-memory
because of very high forward references and they should be fixed by this
patch.
A concrete example for this is D64507, which causes out-of-memory on
systems with low memory, like the hexagon upstream bots.
Reviewers: t.p.northover, thegameg, jfb, efriedma, hfinkel
Reviewed By: jfb
Differential Revision: https://reviews.llvm.org/D64577
llvm-svn: 366017
The vmovlb instructions can be used to sign or zero extend vector registers
between types. This adds some patterns for them and relevant testing. The
VBICIMM generation is also put behind a hasNEON check (as is already done for
VORRIMM).
Code originally by David Sherwood.
Differential Revision: https://reviews.llvm.org/D64069
llvm-svn: 366008
This selects integer VNEG instructions, which can be especially useful with shifts.
Differential Revision: https://reviews.llvm.org/D64204
llvm-svn: 366006
This simply makes the MVE integer min and max instructions legal and adds the
relevant patterns for them.
Differential Revision: https://reviews.llvm.org/D64026
llvm-svn: 366004
This adds support for the floor/ceil/trunc/... series of instructions,
converting to various forms of VRINT. They use the same suffixes as their
floating point counterparts. There is no VRINTR, so nearbyint is expanded.
Also added a copysign test, to show it is expanded.
Differential Revision: https://reviews.llvm.org/D63985
llvm-svn: 366003
This adds the patterns for minnm and maxnm from the fminnum and fmaxnum nodes,
similar to scalar types.
Original patch by Simon Tatham
Differential Revision: https://reviews.llvm.org/D63870
llvm-svn: 366002
Summary:
This patch is part of a patch series to add support for FileCheck
numeric expressions. This specific patch extends numeric expressions to
support an arbitrary number of operands, either variables or literals.
Copyright:
- Linaro (changes up to diff 183612 of revision D55940)
- GraphCore (changes in later versions of revision D55940 and
in new revision created off D55940)
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: hiraditya, llvm-commits, probinson, dblaikie, grimar, arichardson, tra, rnk, kristina, hfinkel, rogfer01, JonChesterfield
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60387
llvm-svn: 366001
Attributor::getAAFor will now only return AbstractAttributes with a
valid AbstractState. This simplifies call sites as they only need to
check if the returned pointer is non-null. It also reduces the potential
for accidental misuse.
llvm-svn: 365983
Summary: We are going to add a function attribute number 64.
Reviewers: pcc, jdoerfert, lebedev.ri
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64663
llvm-svn: 365980
The traits object is only used by a few methods. Deserializing a hash
table and walking it is possible without the traits object, so it
shouldn't be required to build a dummy object for that use case.
The TraitsT object used to be a function template parameter before
r327647, this restores it to that state.
This makes it clear that the traits object isn't needed at all in 1 of
the current 3 uses of HashTable (and I am going to add another use that
doesn't need it), and that the default PdbHashTraits isn't used outside
of tests.
While here, also re-enable 3 checks in the test that were commented out
(which requires making HashTableInternals templated and giving FooBar
an operator==).
No intended behavior change.
Differential Revision: https://reviews.llvm.org/D64640
llvm-svn: 365974
These should really use v32f32, but were defined as v32i32
due to the lack of the v32f32 type.
Differential Revision: https://reviews.llvm.org/D64667
llvm-svn: 365972
Summary: A vector of the same value with a few undefs will still be considered "Bytewise"
Reviewers: eugenis, pcc, jfb
Reviewed By: jfb
Subscribers: dexonsmith, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64031
llvm-svn: 365971
Summary:
Most of these functions can work for MachineInstr and MCInst
equally now.
Reviewers: dschuff
Subscribers: MatzeB, sbc100, jgravelle-google, aheejin, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64643
llvm-svn: 365965
'macCatalyst' is more readable than 'maccatalyst' in the assembly command.
I renamed the objdump output, but the assembly should match it as well.
llvm-svn: 365964
Split AArch64FrameLowering::resolveFrameIndexReference into two parts:
* Finding frame offset for the index.
* Finding base register and offset to that register.
The second part will be used to implement a virtual frame pointer in
armv8.5 MTE stack instrumentation lowering.
Reviewers: pcc, vitalybuka, hctim, ostannard
Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64171
llvm-svn: 365958
Before 2018, mesa used to use byval interchangeably with inreg, which
didn't really make sense. Fix tests still using it to avoid breaking
in a future commit.
llvm-svn: 365953
This fixes https://bugs.llvm.org/show_bug.cgi?id=42606 by extending
D64213. Instead of only checking if the carry comes from a matching
operation, we now check the full chain of carries. Otherwise we might
custom lower the outermost addcarry, but then generically legalize
an inner addcarry.
Differential Revision: https://reviews.llvm.org/D64658
llvm-svn: 365949
The column field is missing for all inline sites, currently it's always
zero. This changes populates DW_AT_call_column field for inline sites.
Test case modified to cover this change.
Patch by: Wenlei He
Differential revision: https://reviews.llvm.org/D64033
llvm-svn: 365945
All callers had a PDBFile object at hand, so call
Pdb.createIndexedStream() instead, which pre-populates all the arguments
(and returns nullptr for kInvalidStreamIndex).
Also change safelyCreateIndexedStream() to only take the stream index,
and update callers. Make the method public and call it in two places
that manually did the bounds checking before.
No intended behavior change.
Differential Revision: https://reviews.llvm.org/D64633
llvm-svn: 365936
This patch series adds support for the next-generation arch13
CPU architecture to the SystemZ backend.
This includes:
- Basic support for the new processor and its features.
- Assembler/disassembler support for new instructions.
- CodeGen for new instructions, including new LLVM intrinsics.
- Scheduler description for the new processor.
- Detection of arch13 as host processor.
Note: No currently available Z system supports the arch13
architecture. Once new systems become available, the
official system name will be added as supported -march name.
llvm-svn: 365932
We can use the C flag from NEG to detect that the input was zero.
Really we could probably use the Z flag too. But C matches what
we'd do for usubo 0, X.
Haven't found a test case for this due to the usubo formation
in CGP. But I verified that if I comment out the CGP code this
transformation catches some of the same cases.
llvm-svn: 365929
Some functions in the Attributor framework are unnecessarily
marked virtual. This patch removes the virtual keyword.
Reviewers: jdoerfert
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D64637
llvm-svn: 365925
Continue in the spirit of D63618, and use exit count reasoning to prove away loop exits which cannot be taken, since the backedge-taken count of the loop as a whole is provably less than the minimal BE count required to take this particular loop exit.
As demonstrated in the newly added tests, this triggers in a number of cases where IndVars was previously unable to discharge obviously redundant exit tests. And some not so obvious ones.
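A hedged illustration of a now-discharged exit test (hypothetical IR): the
latch bounds the whole loop to fewer than 100 iterations, so the %early exit
can never be taken and its test folds away:
```
define void @f() {
entry:
  br label %loop
loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %latch ]
  ; requires %i == 1000, but the backedge-taken count is < 100
  %early = icmp eq i64 %i, 1000
  br i1 %early, label %exit, label %latch
latch:
  %i.next = add nuw i64 %i, 1
  %more = icmp ult i64 %i.next, 100
  br i1 %more, label %loop, label %exit
exit:
  ret void
}
```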
Differential Revision: https://reviews.llvm.org/D63733
llvm-svn: 365920
An application linking against LLVMSupport should not get the gratuitous
std::set_new_handler call.
Reviewed By: jfb
Differential Revision: https://reviews.llvm.org/D64505
llvm-svn: 365915
Support SIGINFO (and SIGUSR1 for POSIX purposes) to tell what
long-running jobs are doing, as inspired by BSD tools (including on
macOS), by dumping the current PrettyStackTrace.
This adds a new kind of signal handler for non-fatal "info" signals,
similar to the "interrupt" handler that already exists for SIGINT
(Ctrl-C). It then uses that handler to update a "generation count"
managed by the PrettyStackTrace infrastructure, which is then checked
whenever a PrettyStackTraceEntry is pushed or popped on each
thread. If the generation has changed---i.e. if the user has pressed
Ctrl-T---the stack trace is dumped, though unfortunately it can't
include the deepest entry because that one is currently being
constructed/destructed.
https://reviews.llvm.org/D63750
llvm-svn: 365911
Summary:
r363675 changed the exec modification helper function, now called
execMayBeModifiedBeforeUse, so that if no UseMI is specified it checks
all instructions in the basic block, even beyond the last use. That
meant that the DPP combiner no longer worked in any basic block that
ended with a control flow instruction, and in particular it didn't work
on code sequences generated by the atomic optimizer.
Fix it by reinstating the old behaviour but in a new helper function
execMayBeModifiedBeforeAnyUse, and limiting the number of instructions
scanned.
Reviewers: arsenm, vpykhtin
Subscribers: kzhuravl, nemanjai, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kbarton, MaskRay, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64393
llvm-svn: 365910
Summary:
D64497 allowed abs/neg source modifiers on v_cndmask_b32 but it doesn't
make any sense to apply them to f16 operands; they would interpret the
bits of the value as an f32, giving nonsensical results. This patch
restricts them to f32 operands.
Reviewers: arsenm, hakzsam
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64636
llvm-svn: 365904
Summary:
Useful for jumps, such as `j .`.
I am not sure who should review this. Do not hesitate to change the reviewers if needed.
Reviewers: asb, jrtc27, lenary
Reviewed By: lenary
Subscribers: MaskRay, lenary, hiraditya, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, kito-cheng, shiva0217, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, psnobl, benna, Jim, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63669
Patch by John LLVM (JohnLLVM)
llvm-svn: 365881
There is no match for the `MipsJmpLink texternalsym` and `MipsJmpLink
tglobaladdr` patterns for microMIPS R6. As a result LLVM incorrectly
selects the `JALRC16` compact 2-byte instruction, which takes a target
instruction address from a register only, and assigns an `R_MIPS_32`
relocation for this instruction. This relocation completely overwrites
`JALRC16` and nearby instructions.
This patch adds the missing matching patterns, selects the `BALC`
instruction, and assigns the correct `R_MICROMIPS_PC26_S1` relocation.
Differential Revision: https://reviews.llvm.org/D64552
llvm-svn: 365870
llvm::yaml::Output::paddedKey unconditionally outputs spaces, which
are superfluous if the value to be dumped is a sequence or map.
Change `bool NeedsNewLine` to `StringRef Padding` so that it can be
overridden to `\n` if the value is a sequence or map.
An empty map/sequence is special: it is printed as `{}` or `[]` without
a newline, while a non-empty map/sequence follows a newline. To handle
this distinction, add another variable `PaddingBeforeContainer` and do
the special handling in endMapping/endSequence.
Reviewed By: grimar, jhenderson
Differential Revision: https://reviews.llvm.org/D64566
llvm-svn: 365869
Summary:
Problem exposed in PowerPC functional testing.
We did not consider anti dependence for nodes in the same cycle,
so we may end up generating bad machine code;
e.g., the reduced test won't verify:
*** Bad machine code: Using an undefined physical register ***
- function: lame_encode_buffer_interleaved
- basic block: %bb.4 (0x4bde4e12928)
- instruction: %29:gprc = ADDZE %27:gprc, implicit-def dead $carry, implicit $carry
- operand 3: implicit $carry
Reviewers: bcahoon, kparzysz, hfinkel
Subscribers: MaskRay, wuzish, nemanjai, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64192
llvm-svn: 365859
Summary:
This helps with more efficient use of memset for pattern initialization;
a toy sketch of the memset idea follows the numbers below.
Based on @pcc's prototype for the -ftrivial-auto-var-init=pattern optimizations.
Binary size change on CTMark (with -fuse-ld=lld -Wl,--icf=all; similar results with default linker options):
```
           master        patch         diff
Os         8.238864e+05  8.238864e+05   0.0
O3         1.054797e+06  1.054797e+06   0.0
Os zero    8.292384e+05  8.292384e+05   0.0
O3 zero    1.062626e+06  1.062626e+06   0.0
Os pattern 8.579712e+05  8.338048e+05  -0.030299
O3 pattern 1.090502e+06  1.067574e+06  -0.020481
```
Zero vs Pattern on master
```
     zero          pattern       diff
Os   8.292384e+05  8.579712e+05  0.036578
O3   1.062626e+06  1.090502e+06  0.025124
```
Zero vs Pattern with the patch
```
     zero          pattern       diff
Os   8.292384e+05  8.338048e+05  0.003333
O3   1.062626e+06  1.067574e+06  0.003193
```
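As a toy illustration of why pattern initialization benefits (a from-scratch
sketch, not code from this patch): a pattern whose bytes are all identical can
be materialized with a single memset instead of copying from a stored blob.
```
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>

// True if every byte of the pattern equals the first one.
static bool isSplatByte(const uint8_t *Pat, size_t N) {
  return N > 0 && std::all_of(Pat + 1, Pat + N,
                              [&](uint8_t B) { return B == Pat[0]; });
}

static void initWithPattern(void *Dst, size_t Size, const uint8_t *Pat,
                            size_t PatSize) {
  if (isSplatByte(Pat, PatSize)) {
    std::memset(Dst, Pat[0], Size); // one libc call, no constant blob needed
    return;
  }
  // Otherwise tile the pattern across the destination.
  for (size_t O = 0; O < Size; O += PatSize)
    std::memcpy(static_cast<uint8_t *>(Dst) + O, Pat,
                std::min(PatSize, Size - O));
}
```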
Reviewers: pcc, eugenis
Subscribers: hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D63967
llvm-svn: 365858
This patch contains a port of SanitizerCoverage to the new pass manager. This one's a bit hefty.
Changes:
- Split SanitizerCoverageModule into two classes: SanitizerCoverage for running over
functions and ModuleSanitizerCoverage for running over modules.
- ModuleSanitizerCoverage exists to add two module-level calls to initialization
functions, but only if there's a function that was instrumented by sancov.
- Added legacy and new PM wrapper classes that own instances of the two new classes.
- Updated llvm tests and added clang tests. (A generic sketch of the new-PM shape follows the list.)
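A generic sketch of the module-pass shape described above (class and helper
names here are illustrative stand-ins, not the patch's actual API):
```
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"

namespace {
// Visits every function, and emits the module-level init calls only if at
// least one function was actually instrumented.
class SancovLikeModulePass
    : public llvm::PassInfoMixin<SancovLikeModulePass> {
public:
  llvm::PreservedAnalyses run(llvm::Module &M,
                              llvm::ModuleAnalysisManager &) {
    bool Changed = false;
    for (llvm::Function &F : M)
      Changed |= instrumentFunction(F); // per-function coverage counters
    if (Changed)
      emitModuleInit(M); // calls to the runtime's init functions
    return Changed ? llvm::PreservedAnalyses::none()
                   : llvm::PreservedAnalyses::all();
  }

private:
  bool instrumentFunction(llvm::Function &F); // hypothetical helper
  void emitModuleInit(llvm::Module &M);       // hypothetical helper
};
} // namespace
```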
Differential Revision: https://reviews.llvm.org/D62888
llvm-svn: 365838
Original patch from alanbaker@google.com
Fixes the error:
CMake Error in <...>/llvm/cmake/modules/CMakeLists.txt:
export called with target "LLVMTestingSupport" which requires target
"gtest" that is not in the export set.
This occurs when LLVM is embedded in a larger project but is configured not to
include tests. If testing is disabled, gtest isn't available and LLVM fails to
configure.
Differential revision: https://reviews.llvm.org/D63097
llvm-svn: 365836
Introduce and deduce the "nosync" function attribute, which indicates that a function
does not synchronize with another thread in any way through which the other thread might free memory.
Reviewers: jdoerfert, jfb, nhaehnle, arsenm
Subscribers: wdng, hfinkel, nhaenhle, mehdi_amini, steven_wu,
dexonsmith, arsenm, uenoku, hiraditya, jfb, llvm-commits
Differential Revision: https://reviews.llvm.org/D62766
llvm-svn: 365830
Follow-up to D58597, where it was noted that the commuted ISD::SUB variant
was having problems due to a lack of combines.
See also D63958 where we untangled setcc/sub pairs.
Differential Revision: https://reviews.llvm.org/D58875
llvm-svn: 365791
We already split extract_subvector(binop(insert_subvector(v,x),insert_subvector(w,y))) -> binop(x,y).
This patch adds support for extract_subvector(binop(concat_vectors(),concat_vectors())) cases as well.
In particular this means we don't have to wait for X86 lowering to convert concat_vectors to insert_subvector chains, which helps avoid some cases where demandedelts/combine calls occur too late to split large vector ops.
The fast-isel-store.ll load-folding regression is annoying, but I don't think it is that critical.
Differential Revision: https://reviews.llvm.org/D63653
llvm-svn: 365785
The interface predates CallBase, so both it and its implementation were
significantly more complicated than they needed to be. There was even
some redundancy that could be eliminated.
Should also help with OpaquePointers by not trying to derive a
function's type from its PointerType.
llvm-svn: 365767
There is currently no way to set a broken sh_size field
for sections. This can be useful for writing
test cases.
Differential revision: https://reviews.llvm.org/D64401
llvm-svn: 365766
There is currently an EPERM error when a regular user executes `llvm-objcopy a.o /dev/null`.
Worse, root can even change the mode bits of /dev/null.
Fix it by checking if the output file is special.
A new overload of llvm::sys::fs::setPermissions with an FD as the parameter
is added. Users should provide `perm & ~umask` as the parameter if they
intend to respect the umask.
The existing overload of llvm::sys::fs::setPermissions may be deleted if
we can find an implementation of fchmod() on Windows. fchmod() is
usually better than chmod() because it saves syscalls and can avoid race
conditions.
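A small usage sketch of the new overload, assuming the signature is
sys::fs::setPermissions(int FD, perms) as described; pre-applying the umask is
the caller's job:
```
#include "llvm/Support/FileSystem.h"
#include <system_error>

// Apply the desired permissions to an already-open output file, respecting
// the process umask as the new overload expects callers to do.
static std::error_code setOutputPermissions(int FD, llvm::sys::fs::perms Want,
                                            llvm::sys::fs::perms Umask) {
  return llvm::sys::fs::setPermissions(FD, Want & ~Umask);
}
```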
Reviewed By: jakehehrlich, jhenderson
Differential Revision: https://reviews.llvm.org/D64236
llvm-svn: 365753
Summary:
As of binutils 2.32, ld has a bogus TLS relaxation error when the GD/LD
code sequence using R_X86_64_GOTPCREL (instead of R_X86_64_GOTPCRELX) is
relaxed to IE/LE (binutils PR24784). gold and lld are fine.
In gcc/config/i386/i386.md, there is a configure-time check of as/ld
support and the GOT relaxation will not be used if as/ld doesn't support
it:
  if (flag_plt || !HAVE_AS_IX86_TLS_GET_ADDR_GOT)
    return "call\t%P2";
  return "call\t{*%p2@GOT(%1)|[DWORD PTR %p2@GOT[%1]]}";
In clang, -DENABLE_X86_RELAX_RELOCATIONS=OFF is the default. The ld.bfd
bogus error can be reproduced with:
  thread_local int a;
  int main() { return a; }
  clang -fno-plt -fpic a.cc -fuse-ld=bfd
GOTPCRELX only gained solid toolchain support in 2016, which is still
relatively new. It is even difficult to conditionally default to
-DENABLE_X86_RELAX_RELOCATIONS=ON due to cross-compilation scenarios. So
work around the ld.bfd bug by only using the GOT when GOTPCRELX is enabled.
Reviewers: dalias, hjl.tools, nikic, rnk
Reviewed By: nikic
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64304
llvm-svn: 365752
Invalid bitcode files can currently cause a crash when OpNum exceeds
the number of elements in Record, as in the attached bitcode file.
The test case was generated by clusterfuzz: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=15698
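The shape of the fix is a bounds check before indexing into the record; a
hedged sketch (names follow BitcodeReader's conventions, not the exact diff):
```
// Reject malformed records instead of reading past the end of Record.
if (OpNum >= Record.size())
  return error("Invalid record");
uint64_t RawValue = Record[OpNum++];
```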
Reviewers: t.p.northover, thegameg, jfb
Reviewed By: jfb
Differential Revision: https://reviews.llvm.org/D64507
llvm-svn: 365750
This patch addresses a couple of problems:
1) The maximum supported offset of LE is -4094.
2) The offset of WLS also needs to be checked; it uses a
maximum positive offset of 4094.
The use of BasicBlockUtils has been changed because the block offsets
weren't being initialised, but isBBInRange checks both positive
and negative offsets.
ARMISelLowering has been tweaked because the test case presented
another pattern that we weren't supporting.
llvm-svn: 365749
The VQDMLAH.U8, VQDMLAH.U16 and VQDMLAH.U32 instructions don't
actually exist: the Armv8.1-M architecture spec only lists signed
forms of that instruction. The unsigned ones were added in error: they
existed in an early draft of the spec, but they were removed before
the public version, and we missed that particular spec change.
This also affects the variant forms VQDMLASH, VQRDMLAH and VQRDMLASH.
Reviewers: miyuki
Subscribers: javed.absar, kristof.beyls, hiraditya, dmgreen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64502
llvm-svn: 365747
Skip copies between virtual registers during the search for UseDefs
and DefUses.
Since each operand has a single def, the search for UseDefs is straightforward.
But since an operand can have many uses, we have to check all uses of
each copy we traverse during the search for DefUses.
Differential Revision: https://reviews.llvm.org/D64486
llvm-svn: 365744
When one of the uses/defs of an ambiguous instruction is also ambiguous,
visit it recursively and search its uses/defs for an instruction with
only one mapping available.
When all instructions in a chain are ambiguous, an arbitrary mapping can
be selected. For s64 operands in an ambiguous chain, fprb is selected, since
this results in fewer instructions than having to narrow the scalar s64 to s32.
For s32, both gprb and fprb result in the same number of instructions, and
gprb is selected as the general-purpose option.
At the moment we always avoid cross-register-bank copies.
TODO: Implement a model for cost calculations of different mappings
of the same instruction and of cross-bank copies, and allow cross-bank
copies when appropriate according to that cost model.
Differential Revision: https://reviews.llvm.org/D64485
llvm-svn: 365743
Two functional changes have been made here:
- Now search up from any add instruction to find the chains of
operations that we may turn into a smlad. This allows the
generation of a smlad which doesn't accumulate into a phi.
- The search function has been corrected to stop it falsely searching
up through an invalid path.
The bulk of the changes have been making the Reduction struct a class
and making it more idiomatic C++ with getters and setters.
Differential Revision: https://reviews.llvm.org/D61780
llvm-svn: 365740
Summary:
Wasm does not currently support the `llvm.clear_cache` intrinsic, and this
prints a proper error message instead of segfaulting.
Reviewers: dschuff, sbc100, sunfish
Subscribers: jgravelle-google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64322
llvm-svn: 365731
This patch replaces the three almost identical "strip & accumulate"
implementations for constant pointer offsets with a single one,
combining the respective functionalities. The old interfaces are kept
for now.
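Usage of the combined interface looks roughly like this (a sketch assuming
the Value::stripAndAccumulateConstantOffsets signature this patch introduces;
M and Ptr stand in for a Module and a pointer Value):
```
#include "llvm/ADT/APInt.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Value.h"

// Walk through constant GEPs to the underlying object, accumulating the
// total constant byte offset along the way.
const llvm::DataLayout &DL = M.getDataLayout();
llvm::APInt Offset(DL.getIndexTypeSizeInBits(Ptr->getType()), 0);
const llvm::Value *Base = Ptr->stripAndAccumulateConstantOffsets(
    DL, Offset, /*AllowNonInbounds=*/false);
// Base is the stripped pointer; Offset now holds the accumulated offset.
```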
Differential Revision: https://reviews.llvm.org/D64468
llvm-svn: 365723
We use the functions that convert to three-address form to do the
conversion, but converting an 8-bit or 16-bit instruction will cause
it to create a virtual register. This can't be done after register
allocation, where this pass runs.
I've switched the pass completely to a whitelist of instructions
that can be converted to LEA, instead of a blacklist that was
incorrect. This will avoid surprises if we enhance the three-address
conversion function to include additional instructions
in the future.
Fixes PR42565.
llvm-svn: 365720
If we have:
R = sub X, Y
P = cmp Y, X
...then flipping the operands in the compare instruction can allow using a subtract that sets compare flags.
Motivated by diffs in D58875; not sure if this changes anything there,
but it seems like a good thing independent of that.
There's a more involved version of this transform already in IR (in instcombine,
although that seems misplaced to me); see swapMayExposeCSEOpportunities().
Differential Revision: https://reviews.llvm.org/D63958
llvm-svn: 365711
Summary:
We will need to handle IntToPtr, which I will submit in a separate patch, as it's
not going to be NFC.
Reviewers: eugenis, pcc
Reviewed By: eugenis
Subscribers: hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D63940
llvm-svn: 365709
Unfortunately subo formation in CGP prevents obvious ways of
testing this.
But we already have BLSI in here and the flag behavior is
well understood.
This might become more useful if we improve PR42571.
llvm-svn: 365702
Summary: llvm/IR/GlobalValue.h can't be included in MC; that would create a circular dependency between the MC and IR libraries. This circular dependency is causing an issue for build systems that enforce layering.
Author: Xiangling_L
Reviewers: sfertile, jasonliu, hubert.reinterpretcast, gribozavr
Reviewed By: gribozavr
Subscribers: wuzish, nemanjai, hiraditya, kbarton, MaskRay, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64445
llvm-svn: 365701
Since we have distinct types for pointers and scalars, G_INTTOPTRs can sometimes
obstruct attempts to find constant source values. These usually come about when we
try to do some kind of null-pointer check. Teaching getConstantVRegValWithLookThrough
about this operation allows the CBZ/CBNZ optimization to catch more cases.
This change also improves the case where we can't find a constant source at all.
Previously we would emit a cmp, cset and tbnz for that; now we try to emit just
a cmp and a conditional branch, saving an instruction.
The cumulative code size improvement of this change plus D64354 is 5.5% geomean
on arm64 CTMark -O0.
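A hedged sketch of the lookup inside a selector (the helper lives in
GlobalISel's Utils; the signature and field names are approximated):
```
// Look through copies and, after this change, G_INTTOPTR, to find a
// constant feeding the compare; a constant zero enables the CBZ/CBNZ fold.
if (auto ValAndVReg = getConstantVRegValWithLookThrough(CmpRHSReg, MRI)) {
  if (ValAndVReg->Value == 0) {
    // Emit CBZ/CBNZ on the other operand instead of cmp + cset + tbnz.
  }
}
```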
Differential Revision: https://reviews.llvm.org/D64377
llvm-svn: 365690
Some minor cleanup.
This function in Utils does the same thing as `findMIFromReg`, but it also
looks through copies, which `findMIFromReg` didn't.
Delete `findMIFromReg` and use `getOpcodeDef` instead. This only happens in
`tryOptVectorDup` right now.
Update opt-shuffle-splat to show that we can look through the copies now, too.
Differential Revision: https://reviews.llvm.org/D64520
llvm-svn: 365684
There are a few places where we walk over copies throughout
AArch64InstructionSelector.cpp. In Utils, there's a function that does exactly
this which we can use instead.
Note that the utility function also handles the case where we run into a COPY
from a physical register. We've run into bugs with this a couple of times, so
using it should defend us against similar future bugs.
Also update opt-fold-compare.mir to show that we still handle physical registers
properly.
Differential Revision: https://reviews.llvm.org/D64513
llvm-svn: 365683
Summary: Unsafe alone does not map well to each of these three cases, as it is missing the NoNan context when accessed directly from clang. I have migrated the fold guards to reflect the expectations of handling the nan and zero contexts directly (NoNan, NSZ), and adjusted some tests along with it. Unsafe does include NSZ; however, there is already precedent for using the target option directly to reflect that context.
Reviewers: spatel, wristow, hfinkel, craig.topper, arsenm
Reviewed By: arsenm
Subscribers: michele.scandale, wdng, javed.absar
Differential Revision: https://reviews.llvm.org/D64450
llvm-svn: 365679
Rework the TTI cache and software prefetching APIs to prepare for the
introduction of a general system model. Changes include:
- Marking existing interfaces const and/or override as appropriate
- Adding comments
- Adding BasicTTIImpl interfaces that delegate to a subtarget
implementation
- Adding a default "no information" subtarget implementation
Only a handful of targets use these interfaces currently: AArch64,
Hexagon, PPC and SystemZ. AArch64 already has a custom subtarget
implementation, so its custom TTI implementation is migrated to use
the new facilities in BasicTTIImpl to invoke its custom subtarget
implementation. The custom TTI implementations continue to exist for
the other targets with this change. They are not moved over to
subtarget-based implementations.
The end goal is to have the default subtarget implementation defer to
the system model defined by the target. With this change, the default
subtarget implementation essentially returns "no information" for
these interfaces. None of the existing users of TTI will hit that
implementation because they define their own custom TTI
implementations and won't use the BasicTTIImpl implementations.
Once system models are in place for the targets that use these
interfaces, their custom TTI implementations can be removed.
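The delegation pattern reads roughly like this self-contained toy (the names
are illustrative stand-ins, not LLVM's actual classes):
```
#include <iostream>

// The default subtarget answers are "no information" (zero).
struct SubtargetLike {
  virtual ~SubtargetLike() = default;
  virtual unsigned getCacheLineSize() const { return 0; }
  virtual unsigned getPrefetchDistance() const { return 0; }
};

// A target with a real system model overrides the hooks.
struct CustomSubtarget : SubtargetLike {
  unsigned getCacheLineSize() const override { return 64; }
  unsigned getPrefetchDistance() const override { return 8; }
};

// The BasicTTIImpl-like layer simply forwards to the subtarget.
struct BasicTTILike {
  const SubtargetLike &ST;
  explicit BasicTTILike(const SubtargetLike &S) : ST(S) {}
  unsigned getCacheLineSize() const { return ST.getCacheLineSize(); }
  unsigned getPrefetchDistance() const { return ST.getPrefetchDistance(); }
};

int main() {
  CustomSubtarget ST;
  BasicTTILike TTI(ST);
  std::cout << TTI.getCacheLineSize() << "\n"; // 64, from the subtarget
}
```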
Differential Revision: https://reviews.llvm.org/D63614
llvm-svn: 365676
Previously reverted in r364141 due to buildbot breakage, and fixed here
by making the GeneralCategory global a ManagedStatic.
Summary:
This change processes `OptionCategory`s and `SubCommand`s as they
are seen instead of caching them in the Option class and processing
them later. Doing so simplifies the work needed to be done by the Global
parser and significantly reduces the size of the Option class to a mere 64
bytes.
Removing the `OptionCategory` cache saved 24 bytes, and removing
the `SubCommand` cache saved an additional 48 bytes, for a total of a
72 byte reduction.
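The fix boils down to constructing the category lazily; a sketch under the
assumption that a custom creator is needed because OptionCategory has no
default constructor:
```
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/ManagedStatic.h"

// Construct the category on first use rather than at static-init time.
struct CreateGeneralCategory {
  static void *call() {
    return new llvm::cl::OptionCategory("General options");
  }
};
static llvm::ManagedStatic<llvm::cl::OptionCategory, CreateGeneralCategory>
    GeneralCategory;
```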
Reviewed By: serge-sans-paille
Tags: #llvm, #clang
Differential Revision: https://reviews.llvm.org/D62105
llvm-svn: 365675
Summary:
The map kept in loop rotate is used for instruction remapping in order
to simplify the clones of instructions. Thus, if an instruction can be
simplified, its simplified value is placed in the map, even when the
clone is added to the IR. MemorySSA, in contrast, needs to know about
that clone so it can add an access for it.
To resolve this, keep a separate map for MemorySSA.
Reviewers: george.burgess.iv
Subscribers: jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63680
llvm-svn: 365672
LLJITBuilder now has a setCompileFunctionCreator method which can be used to
construct a CompileFunction for the LLJIT instance being created. The motivating
use case for this is supporting ObjectCaches, which can now be set up at
compile-function construction time. To demonstrate this, an example project,
LLJITWithObjectCache, is included.
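A rough sketch of the motivating use case; the signatures here follow a later
revision of the in-tree LLJITWithObjectCache example and are approximate:
```
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/ExecutionEngine/Orc/CompileUtils.h"
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include <memory>

using namespace llvm;
using namespace llvm::orc;

Expected<std::unique_ptr<LLJIT>> createJITWithCache(ObjectCache &Cache) {
  return LLJITBuilder()
      .setCompileFunctionCreator(
          [&](JITTargetMachineBuilder JTMB)
              -> Expected<std::unique_ptr<IRCompileLayer::IRCompiler>> {
            auto TM = JTMB.createTargetMachine();
            if (!TM)
              return TM.takeError();
            // A simple compiler that consults and populates the cache.
            return std::make_unique<TMOwningSimpleCompiler>(std::move(*TM),
                                                            &Cache);
          })
      .create();
}
```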
llvm-svn: 365671
An alloca which can be sunk into the extraction region may have more
than one bitcast use. Move these uses along with the alloca to prevent
use-before-def.
Testing: check-llvm, stage2 build of clang
Fixes llvm.org/PR42451.
Differential Revision: https://reviews.llvm.org/D64463
llvm-svn: 365660
This renames the type so it doesn't sound like it's based off the load size, as we're moving towards supporting combining loads of different sizes.
llvm-svn: 365655