Summary: Skip over prefetches when assigning debug info to instructions with memory operands. This way, the debug info is stable after instrumenting a binary with prefetches, allowing for iterative profiling and instrumentation.
Reviewers: davidxl
Reviewed By: davidxl
Subscribers: aprantl, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61789
llvm-svn: 360471
Split out from D61692 per RKSimon's suggestion. Vector op
legalization will automatically recursively legalize the returned
SDValue, but we need to take care of the other results ourselves.
Otherwise it will end up getting legalized only during op
legalization, by which point it might be too late (though I'm not
aware of any specific cases right now).
There are codegen differences because expansion occurs earlier now
and we don't get a DAGCombiner run in between.
Differential Revision: https://reviews.llvm.org/D61744
llvm-svn: 360470
Summary:
We hit undefined references building with ThinLTO when one source file
contained explicit instantiations of a template method (weak_odr) but
there were also implicit instantiations in another file (linkonce_odr),
and the latter was the prevailing copy. In this case the symbol was
marked hidden when the prevailing linkonce_odr copy was promoted to
weak_odr. It led to unsats when the resulting shared library was linked
with other code that contained a reference (expecting to be resolved due
to the explicit instantiation).
Add a CanAutoHide flag to the GV summary to allow the thin link to
identify when all copies are eligible for auto-hiding (because they were
all originally linkonce_odr global unnamed addr), and only do the
auto-hide in that case.
Most of the changes here are due to plumbing the new flag through the
bitcode and llvm assembly, and resulting test changes. I augmented the
existing auto-hide test to check for this situation.
Reviewers: pcc
Subscribers: mehdi_amini, inglorion, eraman, dexonsmith, arphaman, dang, llvm-commits, steven_wu, wmi
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59709
llvm-svn: 360466
Follow up to r359122, after a bug was reported in it - the original
change too aggressively tried to move related types out of type units,
which included unnamed types (like array types) which can't reasonably
be declared-but-not-defined.
A step beyond that is that some types in type units can be anonymous, if
they are types with a name for linkage purposes (eg: "typedef struct { }
x;"). So ensure those don't get turned into plain declarations (without
signatures) because, lacking names, they can't be resolved to the
definition.
[Also include a fix for llvm-dwarfdump/libDebugInfoDWARF to pretty print
types in type units]
llvm-svn: 360458
This patch fixes the TreeEntry dangling pointer issue caused by reallocations of VectorizableTree.
Committed on behalf of @vporpo (Vasileios Porpodas)
Differential Revision: https://reviews.llvm.org/D61706
llvm-svn: 360456
The original change introduced a depth limit of 7 which caused a 22% regression
in the Swift MapReduceLazyCollection & Ackermann benchmarks. This new threshold
still ensures that the original test case doesn't hang.
rdar://50359639
llvm-svn: 360444
On PowerPC64 ELFv2 ABI, the top 3 bits of st_other encode the local
entry offset. A versioned symbol alias created by .symver should copy
the bits from the source symbol.
This partly fixes PR41048. A full fix needs tracking of .set assignments
and updating st_other fields when finish() is called, see D56586.
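For illustration, copying those bits might look like the following sketch (using the standard STO_PPC64_LOCAL_MASK constant; this is not the patch's exact code):

  #include <cstdint>

  // PPC64 ELFv2: bits 5-7 of st_other encode the local entry offset.
  constexpr uint8_t STO_PPC64_LOCAL_MASK = 0xe0;

  // Give a .symver alias the local entry bits of its source symbol.
  uint8_t copyLocalEntryBits(uint8_t AliasOther, uint8_t SourceOther) {
    return (AliasOther & ~STO_PPC64_LOCAL_MASK) |
           (SourceOther & STO_PPC64_LOCAL_MASK);
  }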
Patch by Alfredo Dal'Ava Júnior
Differential Revision: https://reviews.llvm.org/D59436
llvm-svn: 360442
This fix allows the scheduler to take into account the number of instances of
each ProcResource specified. Previously a declaration in a scheduler of
ProcResource<1> would be treated identically to a declaration of
ProcResource<2>. Now the hazard recognizer will report a hazard only after all
of the resource instances are busy.
Patch by Jackson Woodruff and Momchil Velikov.
Differential Revision: https://reviews.llvm.org/D51160
llvm-svn: 360441
Fixes https://bugs.llvm.org/show_bug.cgi?id=40969
The functions findPotentiallyBlockedCopies and buildCopy are currently not
accounting for the presence of debug instructions. In the former this results
in the optimization not being triggered, and in the latter it results in
inconsistent codegen.
This patch enables the optimization to be performed in a debug build and
ensures the codegen is consistent with non-debug builds.
Patch by Chris Dawson.
Differential Revision: https://reviews.llvm.org/D61680
llvm-svn: 360436
If we only use the lower xmm of a ymm hop, then extract the xmm's (for free), perform the xmm hop and then insert back into a ymm (for free).
Fixes some of the regressions noted in D61782
llvm-svn: 360435
Summary:
- Constant expressions may not be added in strict postorder as the
forward instruction scan order. Thus, for a constant expression (CE0), if
its operand (CE1) is used in a previous instruction, they are not in
postorder. However, different from
`cloneInstructionWithNewAddressSpace`,
`cloneConstantExprWithNewAddressSpace` doesn't bookkeep uninferred
instructions for later resolving. That results in a failure to infer the
constant address space.
- This patch adds support for inferring a constant expression operand
recursively; there cannot be a loop if that operand is another
constant expression.
Reviewers: arsenm
Subscribers: jholewinski, jvesely, wdng, nhaehnle, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61760
llvm-svn: 360431
In certain circumstances, optimizations pick line numbers from debug
intrinsic instructions as the new location for altered instructions. This
is problematic because the line number of a debugging intrinsic is
meaningless (it doesn't produce any machine instruction), only the scope
information is valid. The result can be the line number of a variable
declaration "leaking" into real code from debugging intrinsics, making the
line table un-necessarily jumpy, and potentially different with / without
variable locations.
Fix this by using zero line numbers when promoting dbg.declare intrinsics
into dbg.values: this is safe for debug intrinsics as their line numbers
are meaningless, and reduces the scope for damage / misleading stepping
when optimizations pick locations from the wrong place.
Differential Revision: https://reviews.llvm.org/D59272
llvm-svn: 360415
The current PIC model for WebAssembly is more like ELF in that it
allows symbol interposition.
This means that more functions end up being addressed via the GOT
and fewer directly added to the wasm table.
One effect is a reduction in the number of wasm table entries similar
to the previous attempt in https://reviews.llvm.org/D61539 which was
reverted.
Differential Revision: https://reviews.llvm.org/D61772
llvm-svn: 360402
This also allows three op patterns to use the increased constant bus
limit of GFX10.
Differential Revision: https://reviews.llvm.org/D61763
llvm-svn: 360395
The current lowering uses an mfence. mfences are substantially higher latency than the locked operations originally requested, but we do want to avoid contention on the original cache line. As such, use a locked instruction on a cache line assumed to be thread local.
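The classic form of that idiom is a locked read-modify-write of a stack slot; a sketch of the idea rather than the exact lowering:

  // Full StoreLoad barrier without mfence: a locked no-op RMW just below
  // the stack pointer (in the x86-64 SysV red zone). The cache line is
  // assumed to be thread local, avoiding cross-thread contention.
  static inline void lockedFence() {
    __asm__ __volatile__("lock; orl $0, -8(%%rsp)" ::: "memory", "cc");
  }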
Differential Revision: https://reviews.llvm.org/D58632
llvm-svn: 360393
Subtractor relocation addends are signed, so we need to read them via signed
int pointers. Accidentally treating 32-bit addends as unsigned leads to
out-of-range errors when we try to add very large (>INT32_MAX) bogus addends.
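For example, the difference comes down to reading through a signed type (a hypothetical helper for illustration, not the actual JITLink code):

  #include <cstdint>
  #include <cstring>

  // Read a 32-bit subtractor addend. Going through int32_t preserves the
  // sign; a uint32_t read would turn -1 into 4294967295, which then trips
  // the out-of-range check once widened to 64 bits.
  int64_t readAddend32(const char *FixupPtr) {
    int32_t Addend;
    std::memcpy(&Addend, FixupPtr, sizeof(Addend));
    return Addend; // sign-extends to int64_t
  }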
llvm-svn: 360392
If we have a large module which is mostly intrinsics, we hammer the lib call lookup path from CodeGenPrepare. Adding a fastpath reduces compile time by 15% for one such example.
The problem is really more general than intrinsics - a module with lots of non-intrinsics non-libcall calls has the same problem - but we might as well avoid an easy case quickly.
llvm-svn: 360391
Adds full edge details (rather than just edge targets) when out-of-range errors
are generated. Also fixes a bug where debugging output accessed an invalidated
DenseMap iterator by moving the debugging output above the invalidation point.
llvm-svn: 360383
Summary:
The ".dword" directive is a synonym for ".xword" and is used used
by klibc, a minimalistic libc subset for initramfs.
Reviewers: t.p.northover, nickdesaulniers
Reviewed By: nickdesaulniers
Subscribers: nickdesaulniers, javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61719
llvm-svn: 360381
As reported on PR39920, "slow horizontal ops" targets tend to internally expand to 2*shuffle+add/sub - so if we can reduce 2*shuffle+add/sub to a hadd/sub then we should do it - similar port usage but reduced instruction count.
This works out in most cases, although the "PR22377" regression in vector-shuffle-combining.ll is annoying - going from 2*shuffle+add+shuffle to hadd+2*shuffle - I've opened PR41813 to cover this.
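The equivalence being exploited, shown here for v4f32 (a sketch with SSE intrinsics, assuming SSE3 for the hadd itself):

  #include <immintrin.h>

  // hadd(a, b) = [a0+a1, a2+a3, b0+b1, b2+b3], i.e. 2*shuffle+add.
  __m128 hadd_via_shuffles(__m128 a, __m128 b) {
    __m128 Evens = _mm_shuffle_ps(a, b, _MM_SHUFFLE(2, 0, 2, 0)); // a0,a2,b0,b2
    __m128 Odds  = _mm_shuffle_ps(a, b, _MM_SHUFFLE(3, 1, 3, 1)); // a1,a3,b1,b3
    return _mm_add_ps(Evens, Odds); // same result as _mm_hadd_ps(a, b)
  }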
Differential Revision: https://reviews.llvm.org/D61308
llvm-svn: 360360
To find the candidates to merge stores we iterate over all nodes in a chain
for each store, which leads to quadratic compile times for large basic blocks
with a large number of stores.
Reviewers: niravd, spatel, craig.topper
Reviewed By: niravd
Differential Revision: https://reviews.llvm.org/D61511
llvm-svn: 360357
Summary:
The implementation is a pretty straightforward extension of the pattern
used for (de)serializing the ModuleList stream. Since there are other
streams which use the same format (MemoryList and MemoryList64, at
least), I tried to generalize the code a bit so that adding future
streams of this type can be done with less code.
Reviewers: amccarth, jhenderson, clayborg
Subscribers: markmentovai, lldb-commits, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61423
llvm-svn: 360350
Summary:
We are seeing some issues in pathological Windows debug cases with collectBitParts
recursion (1525 levels of recursion!).
Set the limit to 64, as this should be sufficient - it passes all lit cases.
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61728
Change-Id: I7f44cdc6c1badf1c2ccbf1b0c4b6afe27ecb39a1
llvm-svn: 360347
I've started this cleanup several times now, but got sidetracked
elsewhere, e.g. by llvm-exegesis problems. Not this time, finally!
This is mainly cleaning up the inverse throughput values,
and a few latencies/uops, based on the llvm-exegesis measured values.
Though this is not complete by any means,
there's certainly more cleanup to be done.
The performance numbers (I've only checked with the RawSpeed benchmark) aren't
really surprising - overall this *slightly* (< -1%) improves perf.
llvm-svn: 360341
Add an Argument that has the SExtAttr attached, as well as SIToFP
instructions, as values that generate sign bits. SIToFP doesn't
strictly do this and could be treated as a sink to be sign-extended.
Differential Revision: https://reviews.llvm.org/D61381
llvm-svn: 360331
Commit r360221 ("[Support] Add error handling to
sys::Process::getPageSize().", 2019-05-08) seems to have missed these
uses of getPageSize(). Update them to getPageSizeEstimate().
llvm-svn: 360322
This code was never covered by tests; in PR41786 it was pointed out that
the deletion part doesn't work, and in a full Chrome build I was never
able to hit the code path that looks through copies. It seems the
situation it's supposed to handle doesn't actually come up in practice.
Delete it to simplify the code.
Differential revision: https://reviews.llvm.org/D61671
llvm-svn: 360320
Prior to this change sub-register index names are assumed to be lower
case (but they are printed with original casing). This means that if a
target has some upper case characters in its sub-register names then
mir-export directly followed by mir-import is not possible. This also
means that sub-register indices currently are (and will continue to be)
slightly inconsistent with register names which are printed and assumed
to be lower case.
As the current textual representation of mir has a few inconsistencies
in this area it is a bit arbitrary how to address the matter. This
change is towards the direction that we feel is most correct (i.e. case
sensitivity).
Differential Revision: https://reviews.llvm.org/D61499
llvm-svn: 360318
Klocwork static check:
Pointer from call to function `DebugLoc::operator DILocation *() const`
may be NULL and will be dereferenced in function `printExtendedName`.
Patch by Shengchen Kan (skan)
Differential Revision: https://reviews.llvm.org/D61715
llvm-svn: 360317
While ASan and MSan passes were already ported to new PM, the kernel
variants weren't setup in the pipeline which makes the KASan and KMSan
tests in Clang fail.
Differential Revision: https://reviews.llvm.org/D61664
llvm-svn: 360313
This patch allows for expansion of ADDCARRY and SUBCARRY when the target does not support it.
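The semantics of the nodes being expanded, as a scalar sketch (not the DAG code itself):

  #include <cstdint>
  #include <utility>

  // ADDCARRY: {A + B + CarryIn, carry-out}. At most one of the two partial
  // carries can be set, so OR-ing them is exact.
  std::pair<uint64_t, bool> addcarry(uint64_t A, uint64_t B, bool CarryIn) {
    uint64_t Sum = A + B;
    bool Carry1 = Sum < A;        // carry out of A + B
    uint64_t Out = Sum + CarryIn;
    bool Carry2 = Out < Sum;      // carry out of adding CarryIn
    return {Out, Carry1 || Carry2};
  }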
Differential Revision: https://reviews.llvm.org/D61411
llvm-svn: 360303
This reverts commit r359426, as it was causing significant compile time
regressions, while we come up with testcases and additional ideas.
llvm-svn: 360301
This is extracted from the original draft of D61419 with some additional tests.
We don't currently get this in IR (it's conservatively turned into a NaN),
but presumably that'll get updated as we add real IR support for 'fneg'
rather than 'fsub -0.0, x'.
The x86-32 run shows the following, and I haven't looked further to see why,
but that seems to be independent:
Legalizing: t1: f32 = undef
Trying to expand node
Creating fp constant: t4: f32 = ConstantFP<0.000000e+00>
Differential Revision: https://reviews.llvm.org/D61516
llvm-svn: 360296
The VOP3 form should always be the preferred selection, to be shrunk
later. This should only be an optimization issue, but this partially
works around a problem from clobbering VCC when SIFixSGPRCopies
rewrites an SCC defining operation directly to VCC.
3 of the testcases are regressions from failing to fold the immediate
in cases where it should. These can be avoided by improving the VCC liveness
handling in SIFoldOperands. Simply increasing the threshold to
computeRegisterLiveness works, although this is common enough that VCC
liveness should probably be tracked throughout the pass. The hack of
leaving behind an implicit_def instruction to avoid breaking iterators
wastes instruction count, which inhibits finding the VCC def in long
chains of adds. Doing this however exposes different, worse looking
regressions from poor scheduling behavior. This could probably be
avoided by forcing the shrink of the addc here, but the
scheduler should probably be fixed.
The r600 add test needs to be split out because it asserts on the
arguments in the new test during the calling convention lowering.
llvm-svn: 360293
Summary:
Fix various issues in code style of method comments:
1) Move all heading comments to all non-static methods near their
declaration in the FileCheck.h header file.
2) Harmonize the action verb in doxygen comments for methods to always
be in third person
3) Use \returns instead of free text "return" and "returns".
4) Document a couple more parameters while at it.
Reviewers: jhenderson, probinson, arichardson
Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61445
llvm-svn: 360288
The worklist loop that we're returning back to should be able to do the replacement itself. This is how we normally do replacements.
My main motivation was that I observed that we weren't preserving the name of the result when we do this transform. The replacement code in the worklist loop will call takeName as part of the replacement.
Differential Revision: https://reviews.llvm.org/D61695
llvm-svn: 360284
InsertBinop tries to move insertion-points out of loops for expressions
that are loop-invariant. This patch adds a new parameter, IsSafeToHoist,
to guard that hoisting. This allows callers to suppress that hoisting
for unsafe situations, such as divisions that may have a zero
denominator.
This fixes PR38697.
Differential Revision: https://reviews.llvm.org/D55232
llvm-svn: 360280
When assigning the definitions of an instruction we were updating
the available registers while walking the definitions. Some of
those definitions may be from physical registers and thus, they are
not available for other definitions to take, but by the time we see
that we may have already assigned these registers to another
virtual register.
Fix that by walking through all the definitions and marking the physical
register definitions as unavailable first, then doing the virtual register assignments.
PR41790
llvm-svn: 360278
This patch adds support for calling selectFNeg for FNeg instructions in addition to the fsub idiom
Differential Revision: https://reviews.llvm.org/D61624
llvm-svn: 360273
Summary:
Preserve MemorySSA in LoopSimplify, in the old pass manager, if the analysis is available.
Do not preserve it in the new pass manager.
Update tests.
Subscribers: nemanjai, jlebar, javed.absar, Prazek, kbarton, zzheng, jsji, llvm-commits, george.burgess.iv, chandlerc
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60833
llvm-svn: 360270
This was committed in rL358887 but reverted in rL360066 due to an x86 regression; really it should have been pre-committed instead of being part of the SimplifyDemandedBits bitcast patch.
llvm-svn: 360263
Reassociation's NegateValue moved instructions to the beginning of
blocks (after PHIs) without checking for exception handling pads.
It's possible for reassociation to move something into an exception
handling block so we need to make sure we don't move things too early
in the block. This change advances the insertion point past any
exception handling pads.
If the block we want to move into contains a catchswitch, we cannot
move into it. In that case just create a new neg as if we had not
found an existing neg to move.
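A sketch of the adjusted insertion-point logic (LLVM's BasicBlock::getFirstInsertionPt encapsulates the same idea; shown for clarity, not as the patch's exact code):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"

  using namespace llvm;

  // Earliest legal insertion point: after all PHI nodes, and after an EH
  // pad (landingpad/catchpad/cleanuppad) if the block starts with one.
  Instruction *firstInsertionPoint(BasicBlock *BB) {
    Instruction *I = BB->getFirstNonPHI();
    while (I && I->isEHPad())
      I = I->getNextNode();
    return I;
  }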
Differential Revision: https://reviews.llvm.org/D61089
llvm-svn: 360262
Using SP in this position is unpredictable in ARMv7. CMP and CMN are not
affected, and of course v8 relaxes this requirement, but that's handled
elsewhere.
llvm-svn: 360242
If we fold a branch/switch to an unconditional branch to another dead block we
replace the branch with unreachable, to avoid attempting to fold the
unconditional branch.
Reviewers: davide, efriedma, mssimpso, jdoerfert
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D61300
llvm-svn: 360232
Add a new function to do the endian check, as I will commit another patch later, which will also need the endian check.
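A typical host endian check of this kind (a sketch; the helper actually added may differ):

  #include <cstdint>
  #include <cstring>

  // True if the host stores the least significant byte first.
  bool isHostLittleEndian() {
    uint16_t Value = 1;
    uint8_t FirstByte;
    std::memcpy(&FirstByte, &Value, 1);
    return FirstByte == 1;
  }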
Differential Revision: https://reviews.llvm.org/D61236
llvm-svn: 360226
This patch changes the return type of sys::Process::getPageSize to
Expected<unsigned> to account for the fact that the underlying syscalls used to
obtain the page size may fail (see below).
For clients who use the page size as an optimization only, this patch adds a new
method, getPageSizeEstimate, which calls through to getPageSize but discards
any error returned and substitutes a "reasonable" page size estimate
instead. All existing LLVM clients are updated to call getPageSizeEstimate
rather than getPageSize.
On Unix, sys::Process::getPageSize is implemented in terms of getpagesize or
sysconf, depending on which macros are set. The sysconf call is documented to
return -1 on failure. On Darwin getpagesize is implemented in terms of sysconf
and may also fail (though the manpage documentation does not mention this).
These failures have been observed in practice when highly restrictive sandbox
permissions have been applied. Without this patch, the result is that
getPageSize returns -1, which wreaks havoc on any subsequent code that was
assuming a sane page size value.
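A minimal sketch of the Unix sysconf side under the new signature (simplified; the real implementation also covers the getpagesize configurations):

  #include "llvm/Support/Error.h"
  #include <cerrno>
  #include <system_error>
  #include <unistd.h>

  llvm::Expected<unsigned> getPageSize() {
    long PageSize = ::sysconf(_SC_PAGESIZE);
    if (PageSize == -1) // sysconf signals failure with -1 and sets errno
      return llvm::errorCodeToError(
          std::error_code(errno, std::generic_category()));
    return static_cast<unsigned>(PageSize);
  }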
<rdar://problem/41654857>
Reviewers: dblaikie, echristo
Subscribers: kristina, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59107
llvm-svn: 360221
Summary:
A COFF stub indirects the reference to a symbol through memory. A
.refptr.$sym global variable pointer is created to refer to $sym.
Typically mingw uses these for external global variable declarations,
but we can use them for weak function declarations as well.
Updates the dso_local classification to add a special case for
extern_weak symbols on COFF in both clang and LLVM.
Fixes PR37598
Reviewers: smeenai, mstorsjo
Subscribers: hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D61615
llvm-svn: 360207
This patch modifies MachOAtomGraphBuilder to use setLayoutNext rather than
addEdge, and fixes a bug in the section layout algorithm that could result in
atoms appearing more than once in the section ordering (which resulted in those
atoms being assigned invalid addresses during layout).
llvm-svn: 360205
Summary: GCNHazardRecognizer fails to identify hazards that are in and around bundles. This patch allows the hazard recognizer to consider bundled instructions in both scheduler and hazard recognizer mode. We ignore “bundledness” for the purpose of detecting hazards and examine the instructions individually.
Reviewers: arsenm, msearles, rampitec
Reviewed By: rampitec
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61564
llvm-svn: 360199
Summary:
The DEBUG_TYPE of the default hazard recognizer should be updated to
match the DEBUG_TYPE of the machine-scheduler pass.
Reviewers: rampitec
Reviewed By: rampitec
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61359
llvm-svn: 360198
The MachO .alt_entry directive is applied to a symbol to indicate that it is
locked (in terms of address layout and liveness) to its predecessor atom. I.e.
it is an alternate entry point, at a fixed offset, for the previous atom.
This patch updates MachOAtomGraphBuilder to check for the .alt_entry flag on
symbols and add a corresponding LayoutNext edge to the atom-graph. It also
updates MachOAtomGraphBuilder_x86_64 to generalize handling of the
X86_64_RELOC_SUBTRACTOR relocation: previously either the minuend or
subtrahend of the subtraction had to be the same as the atom being fixed up,
now it is only necessary for the minuend or subtrahend to be locked (via any
chain of alt_entry directives) to the atom being fixed up.
llvm-svn: 360194
Compute results in more direct ways, avoid subset intersect
operations. Extract the core code for computing mul nowrap ranges
into separate static functions, so they can be reused.
llvm-svn: 360189
(X | C1) + C2 --> (X | C1) ^ C1 iff (C1 == -C2)
I verified the correctness using Alive:
https://rise4fun.com/Alive/YNV
This transform enables the following transform that already exists in
instcombine:
(X | Y) ^ Y --> X & ~Y
As a result, the full expected transform is:
(X | C1) + C2 --> X & ~C1 iff (C1 == -C2)
There already exists the transform in the sub case:
(X | Y) - Y --> X & ~Y
However this does not trigger in the case where Y is constant due to an earlier
transform:
X - (-C) --> X + C
With this new add fold, both the add and sub constant cases are handled.
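The composed fold is easy to check exhaustively at a small bit width (a standalone check in the spirit of the Alive proof above):

  #include <cassert>
  #include <cstdint>

  // Verify (X | C1) + C2 == X & ~C1 whenever C1 == -C2, for all 8-bit values.
  int main() {
    for (unsigned X = 0; X < 256; ++X)
      for (unsigned C1 = 0; C1 < 256; ++C1) {
        uint8_t C2 = static_cast<uint8_t>(-C1);
        uint8_t LHS = static_cast<uint8_t>((X | C1) + C2);
        uint8_t RHS = static_cast<uint8_t>(X & ~C1);
        assert(LHS == RHS);
      }
    return 0;
  }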
Patch by Chris Dawson.
Differential Revision: https://reviews.llvm.org/D61517
llvm-svn: 360185
Fundamentally/generally, we should not have to rely on bailouts/crippling of
folds. In this particular case, I think we always recognize the inverted
predicate min/max pattern, so there should not be any loss of optimization.
Codegen looks better because we are eliminating an fneg.
llvm-svn: 360180
Summary:
It's not uncommon for separate components to share common
Options, e.g., it's common for related Passes to share Options in
addition to the Pass specific ones.
With this change, components can use OptionCategory's to simplify help
output even if some of the options are shared.
Reviewed By: MaskRay
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61574
llvm-svn: 360179
DWARF5, 2.12 20ff says that
Any debugging information entry representing a pointer or reference
type [may have a DW_AT_address_class attribute].
The existing code (https://reviews.llvm.org/D29670) seems to take a
quite literal interpretation of that wording. I don't see a reason why
an rvalue reference isn't a reference type in the spirit of that
paragraph. This patch allows rvalue references to also have address
spaces.
rdar://problem/50511483
Differential Revision: https://reviews.llvm.org/D61625
llvm-svn: 360176
When simplifying TokenFactors, we potentially iterate over all
operands of a large number of TokenFactors. This causes quadratic
compile times in some cases and the large token factors cause additional
scalability problems elsewhere.
This patch adds some limits to the number of nodes explored for the
cases mentioned above.
Reviewers: niravd, spatel, craig.topper
Reviewed By: niravd
Differential Revision: https://reviews.llvm.org/D61397
llvm-svn: 360171
Summary:
Bug: https://bugs.llvm.org/show_bug.cgi?id=39024
The bug reports that a vectorized loop is stepped through 4 times and each step through the loop seemed to show a different path. I found two problems here:
A) An incorrect line number on a preheader block (for.body.preheader) instruction causes a step into the loop before it begins.
B) Instructions in the middle block have different line numbers which give the impression of another iteration.
In this patch I give all of the middle block instructions the line number of the scalar loop latch terminator branch. This seems to provide the smoothest debugging experience because the vectorized loops will always end on this line before dropping into the scalar loop. To solve problem A I have altered llvm::SplitBlockPredecessors to accommodate loop header blocks.
Reviewers: samsonov, vsk, aprantl, probinson, anemet, hfinkel
Reviewed By: hfinkel
Subscribers: bjope, jmellorcrummey, hfinkel, gbedwell, hiraditya, zzheng, llvm-commits
Tags: #llvm, #debug-info
Differential Revision: https://reviews.llvm.org/D60831
llvm-svn: 360162
Summary:
Currently we express umin as `~umax(~x, ~y)`. However, this becomes
a problem for operands in non-integral pointer spaces, because `~x`
is not something we can compute for `x` non-integral. However, since
comparisons are generally still allowed, we are actually able to
express `umin(x, y)` directly as long as we don't try to express it
as a umax. Support this by adding an explicit umin/smin representation
to SCEV. We do this by factoring the existing getUMax/getSMax functions
into a new function that does all four. The previous two functions were
largely identical.
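The identity that stops working when ~x is unavailable is umin(x, y) == ~umax(~x, ~y): complementing reverses the unsigned order. A standalone sanity check:

  #include <algorithm>
  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned X = 0; X < 256; ++X)
      for (unsigned Y = 0; Y < 256; ++Y) {
        uint8_t A = X, B = Y;
        uint8_t Direct = std::min(A, B);
        uint8_t ViaMax =
            static_cast<uint8_t>(~std::max<uint8_t>(~A, ~B));
        assert(Direct == ViaMax);
      }
    return 0;
  }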
Reviewed By: sanjoy
Differential Revision: https://reviews.llvm.org/D50167
llvm-svn: 360159
The single-constant algorithm produces infinities on a lot of denormal values.
The precision of the two-constant algorithm is actually sufficient across the
range of denormals. We will switch to that algorithm for now to avoid the
infinities on denormals. In the future, we will re-evaluate the algorithm to
find the optimal one for PowerPC.
Differential revision: https://reviews.llvm.org/D60037
llvm-svn: 360144
In some cases it is useful to explicitly set a symbol's st_name value.
For example, I am using it in a patch for LLD to remove the broken
binary from a test case and replace it with a YAML test.
Differential revision: https://reviews.llvm.org/D61180
llvm-svn: 360137
Basic "revectorization" combine, we can probably do more opcodes here but it can be a tricky cost-benefit depending on where the subvectors came from - but this case helps shuffle combining.
llvm-svn: 360134
Summary:
No test case because I don't know of a way to trigger this, but I
accidentally caused this to fail while working on a different change.
Change-Id: I8015aa447fe27163cc4e4902205a203bd44bf7e3
Reviewers: arsenm, rampitec
Subscribers: kzhuravl, jvesely, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61490
llvm-svn: 360123
Summary:
If fneg lowering for fsub -0.0, x fails we currently fall back to treating it as an fsub. This has different behavior for nans than the xor with sign bit trick we normally try to do. On X86, the xor trick for double fails fast-isel in 32-bit mode with sse2 due to 64 bit integer types not being available. With -O2 we would always use an xorpd for this case. If we use subsd, this creates an observable behavior difference between -O0 and -O2. So fall back to SelectionDAG if we can't fast-isel it, that way SelectionDAG will use the xorpd.
I believe this patch is restoring the behavior prior to r345295 from last October. This was missed then because our fast isel case in 32-bit mode aborted fast-isel earlier for another reason. But I've added new tests to cover that.
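The xor trick in question, as a scalar sketch (this flips the sign bit of a NaN bit-exactly, which fsub -0.0, x need not do):

  #include <cstdint>
  #include <cstring>

  // fneg by flipping the IEEE-754 sign bit, the scalar analogue of xorpd.
  double fnegByXor(double X) {
    uint64_t Bits;
    std::memcpy(&Bits, &X, sizeof(Bits));
    Bits ^= 0x8000000000000000ULL; // sign bit
    std::memcpy(&X, &Bits, sizeof(Bits));
    return X;
  }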
Reviewers: andrew.w.kaylor, cameron.mcinally, spatel, efriedma
Reviewed By: cameron.mcinally
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61622
llvm-svn: 360111
The only known users of this relocation type and symbol type are
the debug info sections, but we were not testing the `--relocatable`
output path.
This change adds a minimal test case to cover relocations against
section symbols, including `--relocatable` output.
Differential Revision: https://reviews.llvm.org/D61623
llvm-svn: 360110
TypedDINodeRef<T> is a redundant wrapper of Metadata * that is actually a T *.
Accordingly, change DI{Node,Scope,Type}Ref uses to DI{Node,Scope,Type} * or their const variants.
This allows us to delete many resolve() calls that clutter the code.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D61369
llvm-svn: 360108
Fixes the main issue in PR41693
When both modes are used, two functions are created:
`sancov.module_ctor`, `sancov.module_ctor.$LastUnique`, where
$LastUnique is the current LastUnique counter that may be different in
another module.
`sancov.module_ctor.$LastUnique` belongs to the comdat group of the same
name (due to the non-null third field of the ctor in llvm.global_ctors).
COMDAT group section [ 9] `.group' [sancov.module_ctor] contains 6 sections:
[Index] Name
[ 10] .text.sancov.module_ctor
[ 11] .rela.text.sancov.module_ctor
[ 12] .text.sancov.module_ctor.6
[ 13] .rela.text.sancov.module_ctor.6
[ 23] .init_array.2
[ 24] .rela.init_array.2
# 2 problems:
# 1) If sancov.module_ctor in this module is discarded, this group
# has a relocation to a discarded section. ld.bfd and gold will
# error. (Another issue: it is silently accepted by lld)
# 2) The comdat group has an unstable name that may be different in
# another translation unit. Even if the linker allows the dangling relocation
# (with --noinhibit-exec), there will be many undesired .init_array entries
COMDAT group section [ 25] `.group' [sancov.module_ctor.6] contains 2 sections:
[Index] Name
[ 26] .init_array.2
[ 27] .rela.init_array.2
By using different module ctor names, the associated comdat group names
will also be different and thus stable across modules.
Reviewed By: morehouse, phosek
Differential Revision: https://reviews.llvm.org/D61510
llvm-svn: 360107
The FR32/FR64/VR128/VR256 register classes don't contain the upper 16 registers. For most cases we use the default implementation which will find any register class that contains the register in question if the VT is legal for the register class. But if the VT is i32 or i64, we won't find a matching register class and will instead end up in the code modified in this patch.
If the requested register is x/y/zmm16-31 we weren't returning a register class that contains those registers and will hit an assertion in the caller.
To fix this, I've changed to use the extended register class instead. I don't believe we need a subtarget check to see if avx512 is enabled. The default implementation just picks whatever register class it finds first. I checked and we currently pick FR32X for XMM0 with an f32 type using the default implementation regardless of whether avx512 is enabled. So I assume it is ok to do the same for i32.
Differential Revision: https://reviews.llvm.org/D61457
llvm-svn: 360102
Summary:
When there are multiple instances of a forward decl record type, only the first one is emitted with a type index, because
the type is added to a map with a null type index. Avoid this by reordering so that forward decl types aren't added to the map.
Reviewers: rnk
Subscribers: aprantl, hiraditya, arphaman, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61460
llvm-svn: 360101
This generally follows what other targets do. I don't completely
understand why the special case for tail calls existed in the first
place; even when the code was committed in r105413, call lowering didn't
work in the way described in the comments.
Stack protector lowering breaks if the register copies are not glued to
a tail call: we have to insert the stack protector check before the tail
call, and we choose the location based on the assumption that all
physical register dependencies of a tail call are adjacent to the tail
call. (See FindSplitPointForStackProtector.) This is sort of fragile,
but I don't see any reason to break that assumption.
I'm guessing nobody has seen this before just because it's hard to
convince the scheduler to actually schedule the code in a way that
breaks; even without the glue, the only computation that could actually
be scheduled after the register copies is the computation of the call
address, and the scheduler usually prefers to schedule that before the
copies anyway.
Fixes https://bugs.llvm.org/show_bug.cgi?id=41417
Differential Revision: https://reviews.llvm.org/D60427
llvm-svn: 360099
The problem was that we were creating a CMOV64rr <TargetFrameIndex>, <TargetFrameIndex>. The entire point of a TFI is that address code is not generated, so there's no way to legalize/lower this. Instead, simply prevent its creation.
Arguably, we shouldn't be using *Target*FrameIndices in StatepointLowering at all, but that's a much deeper change.
llvm-svn: 360090
This reverts r357452 (git commit 21eb771dcb).
This was causing strange optimization-related test failures on an internal test. Will followup with more details offline.
llvm-svn: 360086
We require d/q suffixes on the memory form of these instructions to disambiguate the memory size.
We don't require it on the register forms, but need to support parsing both with and without it.
Previously we always printed the d/q suffix on the register forms, but it's redundant and
inconsistent with gcc and objdump.
After this patch we should support the d/q suffix for parsing, but not print it when it's unneeded.
llvm-svn: 360085
We don't always get this:
Cond ? -X : -Y --> -(Cond ? X : Y)
...even with the legacy IR form of fneg in the case with extra uses,
and we miss matching with the newer 'fneg' instruction because we
are expecting binops through the rest of the path.
Differential Revision: https://reviews.llvm.org/D61604
llvm-svn: 360075
It's possible to use the 'y' mmx constraint with a type narrower than 64-bits.
This patch supports this by bitcasting the mmx type to 64-bits and then
truncating to the desired type.
There are probably other missing type combinations we need to support, but this
is the case we have a bug report for.
Fixes PR41748.
Differential Revision: https://reviews.llvm.org/D61582
llvm-svn: 360069
After support for dealing with types that need to be extended in some way was
added in r358032 we didn't correctly handle <1 x T> return types. These types
don't have a GISel direct representation, instead we just see them as scalars.
When we need to pad them into <2 x T> types however we need to use a
G_BUILD_VECTOR instead of trying to do a G_CONCAT_VECTOR.
This fixes PR41738.
llvm-svn: 360068
Reverts "[X86] Remove (V)MOV64toSDrr/m and (V)MOVDI2SSrr/m. Use 128-bit result MOVD/MOVQ and COPY_TO_REGCLASS instead"
Reverts "[TargetLowering][AMDGPU][X86] Improve SimplifyDemandedBits bitcast handling"
Eric Christopher and Jorge Gorbe Moya reported some issues with these patches to me off list.
Removing the CodeGenOnly instructions has changed how fneg is handled during fast-isel with sse/sse2. We're now emitting fsub -0.0, x instead
of moving to the integer domain (in a GPR), xoring the sign bit, and then moving back to xmm. This is because the fast isel table no longer
contains an entry for (f32/f64 bitcast (i32/i64)) so the target independent fneg code fails. The use of fsub changes the behavior of nan with
respect to -O2 codegen which will always use a pxor. NOTE: We still have a difference with double with -m32 since the move to GPR doesn't work
there. I'll file a separate PR for that and add test cases.
Since removing the CodeGenOnly instructions was fixing PR41619, I'm reverting r358887 which exposed that PR. Though I wouldn't be surprised
if that bug can still be hit independent of that.
This should hopefully get Google back to green. I'll work with Simon and other X86 folks to figure out how to move forward again.
llvm-svn: 360066
Add support for srem() to ConstantRange so we can use it in LVI. For
srem the sign of the result matches the sign of the LHS. For the RHS
only the absolute value is important. Apart from that the logic is
like urem.
Just like for urem this is only an approximate implementation. The tests
check a few specific cases and run an exhaustive test for conservative
correctness (but not exactness).
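The sign behavior being modeled matches C++'s truncating srem and can itself be checked exhaustively (a standalone sketch):

  #include <cassert>

  // srem: the result is zero or takes the sign of the LHS, and only the
  // absolute value of the RHS matters.
  int main() {
    for (int L = -128; L < 128; ++L)
      for (int R = -128; R < 128; ++R) {
        if (R == 0)
          continue;
        int Rem = L % R;
        assert(Rem == 0 || (Rem > 0) == (L > 0));
        assert(Rem == L % -R); // |RHS| is all that matters
      }
    return 0;
  }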
Differential Revision: https://reviews.llvm.org/D61207
llvm-svn: 360055
This addresses one half of https://bugs.llvm.org/show_bug.cgi?id=41635
by combining a VECREDUCE_AND/OR into VECREDUCE_UMIN/UMAX (if latter is
legal but former is not) for zero-or-all-ones boolean reductions (which
are detected based on sign bits).
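Per lane, the equivalence only needs the lanes to be all-zeros or all-ones (a scalar sketch of why the combine is sound):

  #include <algorithm>
  #include <cassert>
  #include <cstdint>

  // For boolean lanes (0x00 or 0xff), AND agrees with unsigned min and OR
  // agrees with unsigned max, so the reductions agree as well.
  int main() {
    const uint8_t Lanes[2] = {0x00, 0xff};
    for (uint8_t A : Lanes)
      for (uint8_t B : Lanes) {
        assert((A & B) == std::min(A, B)); // VECREDUCE_AND vs UMIN
        assert((A | B) == std::max(A, B)); // VECREDUCE_OR  vs UMAX
      }
    return 0;
  }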
Differential Revision: https://reviews.llvm.org/D61398
llvm-svn: 360054
A condition for exiting the legalization of v4i32 conversion to v2f64 through
extract/convert/build erroneously checks for the extract having type i32.
This is not adequate as smaller extracts are actually legalized to i32 as well.
Furthermore, an early exit is missing which means that we only check that
both extracts are from the same vector if that check fails.
As a result, both cases in the included test case fail - the first gets a
select error and the second generates incorrect code.
The culprit commit is r274535.
llvm-svn: 360043
Properly initialize the store type to null, then ensure we find a real store type in the chain.
Fixes scan-build null dereference warning and makes the code clearer.
llvm-svn: 360031
scan-build was reporting that CommutableOpIdx1 never used its original initialized value - move it down to where it's first used to make the real initialization more obvious (and match the comment that's there).
llvm-svn: 360028
Summary:
1. Enable the infrastructure for AVX512_BF16, which is supported for BFLOAT16 in Cooper Lake;
2. Enable the VCVTNE2PS2BF16, VCVTNEPS2BF16 and DPBF16PS instructions, which are Vector Neural Network Instructions supporting BFLOAT16 inputs and conversion instructions from IEEE single precision.
VCVTNE2PS2BF16: Convert Two Packed Single Data to One Packed BF16 Data.
VCVTNEPS2BF16: Convert Packed Single Data to Packed BF16 Data.
VDPBF16PS: Dot Product of BF16 Pairs Accumulated into Packed Single Precision.
For more details about BF16 isa, please refer to the latest ISE document: https://software.intel.com/en-us/download/intel-architecture-instruction-set-extensions-programming-reference
Author: LiuTianle
Reviewers: craig.topper, smaslov, LuoYuanke, wxiao3, annita.zhang, RKSimon, spatel
Reviewed By: craig.topper
Subscribers: kristina, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60550
llvm-svn: 360017
Summary:
Prior to DWARF v5, a directory index of 0 represents DW_AT_comp_dir.
In DWARF v5, the index starts with 0 and Entry.DirIdx is the index into
Prologue.IncludeDirectories.
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D61253
llvm-svn: 360015
Optimization pass lib/Transforms/IPO/GlobalOpt.cpp needs to insert
DW_OP_deref_size instead of DW_OP_deref to be compatible with big-endian
targets for same reasons as in D59687.
Differential Revision: https://reviews.llvm.org/D60611
llvm-svn: 360013
Based on PR41748, not all cases are handled in this function.
llvm_unreachable is treated as an optimization hint that can prune code paths
in a release build. This causes weird behavior when PR41748 is encountered on a
release build. It appears to generate an fp_round instruction from the floating
point code.
Making this a report_fatal_error prevents incorrect optimization of the code
and will instead generate a message to file a bug report.
llvm-svn: 360008
Thus it does not assume that the old basic block is the basic block
for which we are looking at successors.
Not reviewed, but seems rather trivial, in line with the rest of
the previous few patches.
llvm-svn: 359997
Summary:
It is a common thing to loop over every `PHINode` in some `BasicBlock`
and change an old incoming `BasicBlock` to a new incoming `BasicBlock`.
`replaceSuccessorsPhiUsesWith()` already had code to do that,
it just wasn't a function.
So outline it into a new function, and use it.
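The outlined loop is essentially the following (a sketch built from the PHINode accessors; the committed helper may differ in details):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  // In every PHI of Succ, rewrite incoming edges from Old to point at New.
  void replacePhiIncomingBlocks(BasicBlock *Succ, BasicBlock *Old,
                                BasicBlock *New) {
    for (PHINode &Phi : Succ->phis()) {
      int Idx;
      while ((Idx = Phi.getBasicBlockIndex(Old)) >= 0)
        Phi.setIncomingBlock(Idx, New);
    }
  }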
Reviewers: chandlerc, craig.topper, spatel, danielcdh
Reviewed By: craig.topper
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61013
llvm-svn: 359996
Summary:
There is `PHINode::getBasicBlockIndex()`, `PHINode::setIncomingBlock()`
and `PHINode::getNumOperands()`, but no function to replace every
specified `BasicBlock*` predecessor with some other specified `BasicBlock*`.
Clearly, there are a lot of places that could use that functionality.
Reviewers: chandlerc, craig.topper, spatel, danielcdh
Reviewed By: craig.topper
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61011
llvm-svn: 359995
Summary:
There is `Instruction::getNumSuccessors()`, `Instruction::getSuccessor()`
and `Instruction::setSuccessor()`, but no function to replace every
specified `BasicBlock*` successor with some other specified `BasicBlock*`.
I've found one place where it should clearly be used.
Reviewers: chandlerc, craig.topper, spatel, danielcdh
Reviewed By: craig.topper
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61010
llvm-svn: 359994
Summary:
If `deleteDeadLoop()` is called on a loop that has a "bad" exit block,
one that e.g. has no terminator instruction, the `DIBuilder::insertDbgValueIntrinsic()`
will be told to insert the Dbg Value Intrinsic after `nullptr`
(since there is no first non-PHI instruction), which will cause it to not insert
those instructions into any basic block. The instructions will be parent-less,
and the IR verifier will complain. It is not obvious to track down the root cause
when that happens, so let's just assert it never happens.
Reviewers: sanjoy, davide, vsk
Reviewed By: vsk
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61008
llvm-svn: 359993
Summary: This check appears to be a leftover from when add/sub/mul could be either integer or fp. The NSW/NUW flags are only set for add/sub/mul/shl earlier. And we check that those operations only have integer types just below this. So it seems unnecessary to explicitly error for NUW/NSW being used on an add/sub/mul that has the wrong type, since we would later error for that anyway.
Reviewers: spatel, dblaikie, jyknight, arsenm
Reviewed By: spatel
Subscribers: wdng, llvm-commits, hiraditya
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61562
llvm-svn: 359987
Summary:
These methods previously took a 0, 1, or 2 to indicate what types were allowed, but the 0 encoding, which meant both fp and integer types, has been unused for years. It's a leftover from when add/sub/mul used to be shared between int and fp.
Simplify it by changing it to just a bool to distinguish int and fp.
Reviewers: spatel, dblaikie, jyknight, arsenm
Reviewed By: spatel
Subscribers: wdng, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61561
llvm-svn: 359986
Summary:
Remove duplicate checks that both operands have the same type. This is checked
before the switch.
Use 'integer' or 'floating-point' instead of 'arithmetic' type. I think this
might be a leftover from the days when floating point and integer operations
shared the same opcodes.
Reviewers: spatel, RKSimon, dblaikie
Reviewed By: RKSimon
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61558
llvm-svn: 359985
This is a subset of the original commit from rL359879
which was reverted because it could crash when using the 'RemovedInstructions'
structure that enables delayed deletion of dead instructions. The motivating
compile-time win does not require that change though. We should get most of
that win from this change alone.
Using/updating a dominator tree to match math overflow patterns may be very
expensive in compile-time (because of the way CGP uses a DT), so just handle
the single-block case.
See post-commit thread for rL354298 for more details:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20190422/646276.html
Differential Revision: https://reviews.llvm.org/D61075
llvm-svn: 359969
These operations were already used in eh-frame registration, and are likely to
be used in other runtime registrations, so this commit moves them into a header
where they can be re-used.
llvm-svn: 359950
This saves us some unnecessary copies.
If the inputs to a G_SELECT are floating point, we should use fcsel rather than
csel.
Changes here are...
- Teach selectCopy about s1-to-s1 copies across register banks.
- Teach AArch64RegisterBankInfo about G_SELECT in general.
- Teach the instruction selector about the FCSEL instructions.
Also add two tests:
- select-select.mir to show that we get the expected FCSEL
- regbank-select.mir (unfortunately named) to show the register banks on
G_SELECT are properly preserved
And update fast-isel-select.ll to show that we do the same thing as other
instruction selectors in these cases.
llvm-svn: 359940
Summary:
This change enables `cl::Grouping` for short options --
options with names of a single character. This is consistent with GNU
getopt behavior.
Reviewers: rnk, MaskRay
Reviewed By: MaskRay
Subscribers: thopre, cfe-commits, MaskRay, rupprecht, hiraditya, llvm-commits
Tags: #llvm, #clang
Differential Revision: https://reviews.llvm.org/D61270
llvm-svn: 359917
Summary:
By default, `parseCommandLineOptions()` will accept either a
`-` or `--` prefix for long options -- options with names longer than
a single character.
While this change does not affect behavior, it will be helpful with a
subsequent change that requires long options use the `--` prefix.
Reviewers: rnk, thopre
Reviewed By: thopre
Subscribers: thopre, cfe-commits, hiraditya, llvm-commits
Tags: #llvm, #clang
Differential Revision: https://reviews.llvm.org/D61269
llvm-svn: 359909
The x/y/z suffix is needed to disambiguate the memory form in at&t syntax since no xmm/ymm/zmm register is mentioned.
But we should also allow it for the register and broadcast forms, where it's not needed, for consistency. This matches gas.
The printing code will still only use the suffix for the memory form where it is needed.
llvm-svn: 359903
The VOP3 form should always be the preferred selection form to be
shrunk later.
The r600 sub test needs to be split out because it asserts on the
arguments in the new test during the calling convention lowering.
llvm-svn: 359899
This was broken if the original operand was killed. The kill flag
would appear on both instructions, and fail the verifier. Keep the
kill flag, but remove the operands from the old instruction. This has
an added benefit of really reducing the use count for future folds.
Ideally the pass would be structured more like what PeepholeOptimizer
does, avoiding the need for this hack to keep instruction iterators valid.
llvm-svn: 359891
When a fold of an immediate into a sub/subrev required shrinking the
instruction, the wrong VOP2 opcode was used. This was using the VOP2
equivalent of the original instruction, not the commuted instruction
with the inverted opcode.
llvm-svn: 359883
Using/updating a dominator tree to match math overflow patterns may be very
expensive in compile-time (because of the way CGP uses a DT), so just handle
the single-block case.
Also, we were restarting the iterator loops when doing the overflow intrinsic
transforms by marking the dominator tree for update. That was done to prevent
iterating over a removed instruction. But we can postpone the deletion using
the existing "RemovedInsts" structure, and that means we don't need to update
the DT.
See post-commit thread for rL354298 for more details:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20190422/646276.html
Differential Revision: https://reviews.llvm.org/D61075
llvm-svn: 359879
Patch adds support for dumping of file headers with llvm-readobj. XCOFF
object files are added to test dumping a well formed file, and dumping
both negative timestamps and negative symbol counts, both of which are
allowed in the XCOFF definition.
Differential Revision: https://reviews.llvm.org/D60878
llvm-svn: 359878
This is the second part of the commit fixing PR38917 (hoisting
partially redundant machine instructions). Most of PRE (partial
redundancy elimination) and CSE work is done on LLVM IR, but some
redundancy arises during DAG legalization. Machine CSE is not enough
to deal with it. This simple PRE implementation works a little bit
intricately: it runs before CSE, looking for partial redundancy
and transforming it to full redundancy, anticipating that the next
CSE step will eliminate this created redundancy. If CSE doesn't
eliminate this, then the created instruction will remain dead and be
eliminated later by the Remove Dead Machine Instructions pass.
The third part of the commit is supposed to refactor MachineCSE,
to make it clearer and to merge MachinePRE with MachineCSE,
so one need not rely on the later Remove Dead pass to clear instrs
not eliminated by CSE.
First step: https://reviews.llvm.org/D54839
Fixes llvm.org/PR38917
Reviewers: RKSimon
Subscribers: hfinkel, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D56772
llvm-svn: 359870
We used to incorrectly use the store size instead of the alloc size when
creating the stack slot for allocas.
On aarch64 this can be demonstrated by allocating weirdly sized types.
For instance, in the added test case, we use an alloca for i19. We used
to allocate a slot of size 24-bit (19 rounded up to the next byte),
whereas we really want to use a full 32-bit slot for this type.
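The two sizes differ exactly for such types (a standalone sketch of the computation DataLayout performs):

  #include <cstdint>

  // Store size: bit width rounded up to a whole byte (3 bytes for i19).
  uint64_t storeSizeBytes(uint64_t Bits) { return (Bits + 7) / 8; }

  // Alloc size: store size rounded up to the ABI alignment; with a 4-byte
  // alignment, allocSizeBytes(19, 4) == 4, the full 32-bit slot we want.
  uint64_t allocSizeBytes(uint64_t Bits, uint64_t AlignBytes) {
    uint64_t Store = storeSizeBytes(Bits);
    return (Store + AlignBytes - 1) / AlignBytes * AlignBytes;
  }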
llvm-svn: 359856
The primary fix here is to WinException.cpp: we need to exclude jump
tables when computing the length of a function, or else we fail to
correctly compute the length. (We can only compute the number of bytes
consumed by certain assembler directives after the entire file is
parsed. ".p2align" is one of those directives, and is used by jump table
generation.)
The secondary fix, to MCWin64EH, is to make sure we don't silently
miscompile if we hit a similar situation in the future.
It's possible we could extend ARM64EmitUnwindInfo so it allows function
bodies that contain assembler directives, but that's a lot more
complicated; see the FIXME in MCWin64EH.cpp.
Fixes https://bugs.llvm.org/show_bug.cgi?id=41581 .
Differential Revision: https://reviews.llvm.org/D61095
llvm-svn: 359849
Summary:
Originally the insertDef method was only used when building MemorySSA, and was limiting the number of Phi nodes that it created.
Now it's used for updates as well, and it can create additional Phis needed for correctness.
Make sure no Phis are created in unreachable blocks (condition met during MSSA build), otherwise the renamePass will find a null DTNode.
Resolves PR41640.
Reviewers: george.burgess.iv
Subscribers: jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61410
llvm-svn: 359845
Summary: Create a method to clean up multiple potentially trivial phis, since we will need this often.
Reviewers: george.burgess.iv
Subscribers: jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61471
llvm-svn: 359842
The default implementation in the base class for TargetLowering::getRegForInlineAsmConstraint doesn't work for mask registers when the VT is a scalar integer type, since the only legal mask types are vXi1. So we end up just getting whatever the first register class is that contains the register. Currently this appears to be VK1, but it's really dependent on the order tablegen outputs the register classes.
Some code in the caller ends up looking up the type for this register class, finds v1i1, and then generates a copyfromreg from the physical k-register with the v1i1 type. Then it generates an any_extend from v1i1 to the scalar VT, which isn't legal. This bad any_extend sticks around until isel, where it selects a MOVZX32rr8 with a v1i1 input or maybe an i8 input. Not sure, but eventually we pick up a copy from VK1 to GR8 in MachineIR, which isn't supported. This leads to a failure in physical register copying.
This patch uses the scalar type to find a VK class of the right size. In the attached test case this will be VK16. This causes a bitcast from vk16 to i16 to be generated instead of an any_extend. This will be properly iseled to a VK16 to GR32 copy and a GR32->GR16 extract_subreg.
Fixes PR41678
Differential Revision: https://reviews.llvm.org/D61453
llvm-svn: 359837
As a result of the underlying cause of PR41678, we created an ANY_EXTEND node with a scalar result type and v1i1 input type. Ideally we would have asserted for this instead of letting it go through to instruction selection and generating bad machine IR.
Differential Revision: https://reviews.llvm.org/D61463
llvm-svn: 359836
As a side benefit, lld-link now reports more than one duplicate resource
entry before exiting with an error even if the new flag is not passed.
llvm-svn: 359829
The original patch was committed at rL359398 and reverted at rL359695 because of
infinite looping.
This includes a fix to check for a vector splat of "1.0" to avoid the infinite loop.
Original commit message:
This was originally part of D61028, but it's an independent diff.
If we try the repeated divisor reciprocal transform before producing an estimate sequence,
then we have an opportunity to use scalar fdiv. On x86, the trade-off is 1 divss vs. 5
vector FP ops in the default estimate sequence. On recent chips (Skylake, Ryzen), the
full-precision division is only 3 cycle throughput, so that's probably the better perf
default option and avoids problems from x86's inaccurate estimates.
The last 2 tests show that users still have the option to override the defaults by using
the function attributes for reciprocal estimates, but those patterns are potentially made
faster by converting the vector ops (including ymm ops) to scalar math.
Differential Revision: https://reviews.llvm.org/D61149
llvm-svn: 359793
We don't have FP exception limits in the IR constant folder for the binops (apart from strict ops),
so it does not make sense to have them here in the DAG either. Nothing else in the backend tries
to preserve exceptions (again outside of strict ops), so I don't see how this could have ever
worked for real code that cares about FP exceptions.
There are still cases (examples: unary opcodes in SDAG, FMA in IR) where we are trying (at least
partially) to preserve exceptions without even asking if the target supports FP exceptions. Those
should be corrected in subsequent patches.
Real support for FP exceptions requires several changes to handle the constrained/strict FP ops.
Differential Revision: https://reviews.llvm.org/D61331
llvm-svn: 359791
Limiting scalar hadd/hsub generation to the lowest xmm looks to be unnecessary - we will be extracting one upper xmm regardless, and we can remove a shuffle by using the hop, which is in line with what shouldUseHorizontalOp expects to happen anyway.
Testing on btver2 (the main target for fast-hops) shows this is beneficial even for float ops where we have a 'shuffle' to extract the float result:
https://godbolt.org/z/0R-U-K
Differential Revision: https://reviews.llvm.org/D61426
llvm-svn: 359786
Summary:
It currently receives an output parameter and returns
std::error_code. Expected<StringRef> fits for this purpose perfectly.
Differential Revision: https://reviews.llvm.org/D61421
llvm-svn: 359774
Select G_SEXT and G_ZEXT with destination types smaller than 32 bits in
the exact same way as 32 bits. This overwrites the higher bits, but that
should be ok since all legal users of types smaller than 32 bits ignore
those bits anyway.
llvm-svn: 359768
Summary:
Based on the Eli Friedman's comments in https://reviews.llvm.org/D60811 , we'd better return early if the element type is not byte-sized in `combineBVOfConsecutiveLoads`.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D61076
llvm-svn: 359764
Summary:
The stream contains the list of threads belonging to the process
described by the minidump. Its structure is the same as the ModuleList
stream, and in fact, I have generalized the ModuleList reading code to
handle this stream too.
Reviewers: amccarth, jhenderson, clayborg
Subscribers: llvm-commits, lldb-commits, markmentovai, zturner
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61064
llvm-svn: 359762
Though being marked "deprecated" by the Linux man-pages project
(MAP_ANON is a synonym of MAP_ANONYMOUS), it is the mostly widely
available macro - many systems that don't provide MAP_ANONYMOUS have
MAP_ANON. MAP_ANON is also used here and there in compiler-rt.
llvm-svn: 359758
The broadcasting variants of the vfpclassp[d,s] instructions shouldn't use the q/l suffixes, so remove them from the template.
Patch by Pengfei Wang
Differential Revision: https://reviews.llvm.org/D61295
llvm-svn: 359753
Reduces the error message from:
lld-link: error: failed to parse .res file: duplicate resource: type STRINGTABLE (ID 6)/name ID 3/language 1033, in test1.res and in test2.res
To:
lld-link: error: duplicate resource: type STRINGTABLE (ID 6)/name ID 3/language 1033, in test1.res and in test2.res
Make sure every error message emitted by cvtres contains the name of at
least one ".res" file, so that removing the "failed to parse .res file"
string doesn't lose information.
Differential Revision: https://reviews.llvm.org/D61388
llvm-svn: 359749
Summary:
Inalloca parameters require special handling in some optimizations.
This change causes globalopt to strip the inalloca attribute from
function parameters when it is safe to do so, removes the special
handling for inallocas from argpromotion, and replaces it with a
simple check that causes argpromotion to skip functions that receive
inallocas (for when the pass is invoked on code that didn't run
through globalopt first). This also avoids a case where argpromotion
would incorrectly try to pass an inalloca in a register.
Fixes PR41658.
Reviewers: rnk, efriedma
Reviewed By: rnk
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61286
llvm-svn: 359743
Summary:
This patch is part of a patch series to add support for FileCheck
numeric expressions. This specific patch introduces the @LINE numeric
expressions.
This commit introduces a new syntax to express a relation a numeric
value in the input text must have with the line number of a given CHECK
pattern: [[#<@LINE numeric expression>]]. Further commits build on that
to express relations between several numeric values in the input text.
To help with naming, regular variables are renamed to pattern variables and the old @LINE expression syntax is referred to as legacy numeric expressions.
Compared to existing @LINE expressions, this new syntax allows arbitrary spacing between the components of the expression. It otherwise offers the same functionality, but the commit serves to introduce some of the data structures needed to support more general numeric expressions.
Copyright:
- Linaro (changes up to diff 183612 of revision D55940)
- GraphCore (changes in later versions of revision D55940 and
in new revision created off D55940)
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: hiraditya, llvm-commits, probinson, dblaikie, grimar, arichardson, tra, rnk, kristina, hfinkel, rogfer01, JonChesterfield
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60384
llvm-svn: 359741
Summary: Fix a transformation bug where two scopes share a common instruction to hoist.
Reviewers: davidxl
Reviewed By: davidxl
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61405
llvm-svn: 359736
Clients who want to regain ownership of object buffers after they have been
linked may now use the NotifyEmitted callback for this purpose.
Note: Currently NotifyEmitted is only called if linking succeeds. If linking
fails the buffer is always discarded.
llvm-svn: 359735
This adds support for using fmov rather than a standard mov to materialize
G_FCONSTANT when it's safe to do so.
Update arm64-fast-isel-materialize.ll and select-constant.mir to show that the
selection is correct.
llvm-svn: 359734
We already perform horizontal add/sub if we extract from elements 0 and 1, this patch extends it to non-0/1 element extraction indices (as long as they are from the lowest 128-bit vector).
Differential Revision: https://reviews.llvm.org/D61263
llvm-svn: 359707
If the user passes a flag like `-version` to a program, it's more likely
they mean `--version` than `-version:`, since there's no parameter
passed. Hence, give delimited arguments a penalty of 1 if the user input doesn't contain the delimiter, or contains no data after it.
The motivation is that with this, lld-link can suggest "--version"
instead of "-version:" for "-version" and "-nodefaultlib" instead of
"-nodefaultlib:" for "-nodefaultlibs".
Differential Revision: https://reviews.llvm.org/D61382
llvm-svn: 359701
Summary:
Early returns were causing some code to be skipped. This was missed
since the summary entries are typically at the end of the llvm assembly
file.
Fixes PR41663.
Reviewers: RKSimon, wristow
Subscribers: mehdi_amini, inglorion, eraman, steven_wu, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61355
llvm-svn: 359697
Summary:
Commit
rL331949: [SCEV] Do not use induction in isKnownPredicate for simplification umax
changed the codepath for umax from isKnownPredicate to
isKnownViaNonRecursiveReasoning to avoid compile time blow up (and as
I found out also stack overflows). However, there is an exact copy of
the code for umax that was lacking this change. In D50167 I want to unify
these codepaths, but to avoid that being a behavior change for the smax
case, pull this independent bit out of it.
Reviewed By: sanjoy
Differential Revision: https://reviews.llvm.org/D61166
llvm-svn: 359693
Prior to this, OptTable::findNearest() thought that the input `--foo`
had an editing distance of 0 from an existing flag `--foo=`, which made
it suggest flags with delimiters more often than flags without one.
After this, it correctly assigns this case an editing distance of 1.
Differential Revision: https://reviews.llvm.org/D61373
llvm-svn: 359685
Summary:
This change was part of D46460. However, in the meantime rL341926 fixed the
correctness issue here. What remained was the performance issue in setLoopID
where it would iterate through all blocks in the loop and their successors,
rather than just the predecessor of the header (the latter presumably being
much faster). We already have the `getLoopLatches` to compute precisely these
basic blocks in an efficient manner, so just use it (as the original commit
did for `getLoopID`).
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D61215
llvm-svn: 359684
In preparation for supporting ILP32 on AArch64, this modifies the SelectionDAG
builder code so that pointers are allowed to have a larger type when "live" in
the DAG compared to memory.
Pointers get zero-extended whenever they are loaded, and truncated prior to
stores. In addition, a few not quite so obvious locations need updating:
* A GEP that has not been marked inbounds needs to enforce the IR-documented
2s-complement wrapping at the memory pointer size. Inbounds GEPs are
undefined if they overflow the address space, so no additional operations
are needed.
* Signed comparisons would give incorrect results if performed on the
zero-extended values.
This shouldn't affect CodeGen for now, but will become active when the AArch64
ILP32 support is committed.
llvm-svn: 359676
This is an alternative to D59669 which more aggressively extracts i1 elements from vXi1 bool vectors using a MOVMSK.
Differential Revision: https://reviews.llvm.org/D61189
llvm-svn: 359666
JITLinkGeneric phases 2 and 3 (focused on applying fixups and finalizing memory,
respectively) may fail for various reasons. If this happens, we need to
explicitly de-allocate the memory allocated in phase 1 (explicitly, because
deallocation may also fail and so is implemented as a method returning error).
No testcase yet: I am still trying to decide on the right way to test totally
platform agnostic code like this.
llvm-svn: 359643
The demanded elts rules introduced for GEPs in https://reviews.llvm.org/rL356293 replaced vector constants with undefs (by design). It turns out that the LangRef disallows such cases when indexing structs. The right fix is probably to relax the LangRef requirement, and update other passes to expect the result, but for the moment, limit the transform to avoid compiler crashes.
This should fix https://bugs.llvm.org/show_bug.cgi?id=41624.
llvm-svn: 359633
Summary:
MemorySSA keeps internal pointers of AA and DT.
If these get invalidated, so should MemorySSA.
Reviewers: george.burgess.iv, chandlerc
Subscribers: jlebar, Prazek, llvm-commits
Tags: LLVM
Differential Revision: https://reviews.llvm.org/D61043
llvm-svn: 359627
Summary:
This is a redo of D60914.
The objective is to not invalidate AAManager, which is stateless, unless
there is an explicit invalidate in one of the AAResults.
To achieve this, this patch adds an API to PAC, to check precisely this:
is this analysis not invalidated explicitly == is this analysis not abandoned == is this analysis stateless, so preserved without explicitly being marked as preserved by everyone
Reviewers: chandlerc
Subscribers: mehdi_amini, jlebar, george.burgess.iv, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61284
llvm-svn: 359622
Summary:
Match NewPassManager behavior: add option for interleaved loops in the
old pass manager, and use that instead of the flag used to disable loop unroll.
No changes in the defaults.
Reviewers: chandlerc
Subscribers: mehdi_amini, jlebar, dmgreen, hsaito, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61030
llvm-svn: 359615
In-memory compiled object buffer identifiers will now be derived from the
identifiers of their source IR modules. This makes it easier to connect
in-memory objects with their source modules in debugging output.
llvm-svn: 359613
Add overlap functionality to llvm-profdata tool to compute the similarity
between two profile files.
Differential Revision: https://reviews.llvm.org/D60977
llvm-svn: 359612
`Candidate` was a StringRef referring to a temporary string. Instead, create a local variable for the string and use a StringRef referring to that.
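A hedged C++ sketch of the bug pattern (names are ours, not the actual code):
```
#include "llvm/ADT/StringRef.h"
#include <string>

// BUG: the temporary std::string dies at the end of the full expression,
// leaving Candidate pointing at freed memory.
llvm::StringRef buggy(const std::string &Base) {
  llvm::StringRef Candidate = Base + "_suggestion"; // dangling
  return Candidate;
}

// FIX: name the string in a variable that outlives the StringRef.
llvm::StringRef fixed(const std::string &Base, std::string &Storage) {
  Storage = Base + "_suggestion";
  return llvm::StringRef(Storage); // refers to caller-owned storage
}
```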
llvm-svn: 359604
Add support for f16 libcalls in WebAssembly. This entails adding signatures
for the remaining F16 libcalls, and renaming gnu_f2h_ieee/gnu_h2f_ieee to
truncsfhf2/extendhfsf2 for consistency between f32 and f64/f128 (compiler-rt
already supports this).
Differential Revision: https://reviews.llvm.org/D61287
Reviewer: dschuff
llvm-svn: 359600
The reordering can leave at least a dead TokenFactor in the graph. This causes the linearize scheduler to fail with something like the assert seen in PR22614. This is only one of many ways we can break the linearize scheduler today, so I can't say for sure that any of the other failures in that bug were caused by this issue.
This takes the heavy hammer approach of just running RemoveDeadNodes unconditionally at the end of the PreprocessISelDAG. If this turns out to be a compile time hit, we can try to refine it.
Differential Revision: https://reviews.llvm.org/D61164
llvm-svn: 359582
This removes some of the class variables. Merge basic block processing into
runOnMachineFunction to keep the flags local.
Pass MachineBasicBlock around instead of an iterator. We can get the iterator in
the few places that need it. Allows a range-based outer for loop.
Separate the Atom optimization from the rest of the optimizations. This allows
fixupIncDec to create INC/DEC and still allow Atom to turn it back into LEA
when profitable by its heuristics.
I'd like to improve fixupIncDec to turn LEAs into ADD any time the base or index
register is equal to the destination register. This is profitable regardless of
the various slow flags. But again we would want Atom to be able to undo that.
Differential Revision: https://reviews.llvm.org/D60993
llvm-svn: 359581
This was first reviewed in https://reviews.llvm.org/D46776 and
landed in r332299, but got reverted because it broke the PS4
bots.
https://reviews.llvm.org/D50410 fixed this, and then this
change was re-reviewed at https://reviews.llvm.org/D50515 and
relanded in r341329. It got reverted due to causing MSan issues.
However, nobody wrote down the error message and the bot link
is dead, so I'm relanding this to capture the MSan error.
I'll then either fix it, or copy it somewhere and revert if
fixing looks difficult.
llvm-svn: 359580
We don't have this restriction in IR, so it should not be here
either simply out of consistency. Code that wants to handle FP
exceptions is expected to use the 'strict' variants of these
nodes.
We don't get the frem case because frem by 0.0 produces NaN (invalid),
and that's the remaining check here (so the removed check for frem
was dead code AFAIK).
This is the only place in SDAG that uses "HasFPExceptions", so I
think we should remove that entirely as a follow-up patch.
llvm-svn: 359566
Summary:
MemorySSA keeps internal pointers of AA and DT.
If these get invalidated, so should MemorySSA.
Reviewers: george.burgess.iv, chandlerc
Subscribers: jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61043
........
This was causing windows build bot failures
llvm-svn: 359555
This implements the TargetTransformInfo method getMemcpyCost, which estimates the number of instructions to which a memcpy instruction expands.
Differential Revision: https://reviews.llvm.org/D59787
llvm-svn: 359547
Current LLVM uses pxor+pinsrb on SSE4+ for INSERT_VECTOR_ELT(ZeroVec, 0, Elt) instead of the much simpler movd.
INSERT_VECTOR_ELT(ZeroVec, 0, Elt) is an idiomatic construct which is used e.g. for _mm_cvtsi32_si128(Elt) and for lowest element initialization in _mm_set_epi32.
Such inefficient lowering leads to significant performance degradations in certain cases when switching from SSSE3 to SSE4.
https://bugs.llvm.org/show_bug.cgi?id=41512
Here INSERT_VECTOR_ELT(ZeroVec, 0, Elt) is simply converted to SCALAR_TO_VECTOR(Elt) when applicable, since the latter is a closer match to the desired behavior and is always efficiently lowered to movd and the like.
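For reference, the idiomatic construct in question as it appears at the source level (a minimal sketch, ours):
```
#include <emmintrin.h>

// Put a scalar into lane 0 of an otherwise-zero vector; after this change
// it lowers to a single movd on SSE4+ instead of pxor+pinsrb.
__m128i lowestLane(int Elt) {
  return _mm_cvtsi32_si128(Elt); // INSERT_VECTOR_ELT(ZeroVec, 0, Elt)
}
```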
Committed on behalf of @Serge_Preis (Serge Preis)
Differential Revision: https://reviews.llvm.org/D60852
llvm-svn: 359545
This was a local static function in SelectionDAG, which I've promoted to
TargetLowering so that I can reuse it to estimate the cost of a memory
operation in D59787.
Differential Revision: https://reviews.llvm.org/D59766
llvm-svn: 359543
Bail out on function arguments/returns with types aggregating an
unsupported type. This fixes cases where we would happily and
incorrectly lower functions taking e.g. [1 x i64] parameters, when we
don't even support plain i64 yet.
llvm-svn: 359540
The MachineFunction wasn't used in getOptimalMemOpType, but more importantly,
this allows reuse of findOptimalMemOpLowering, which calls getOptimalMemOpType.
This is the groundwork for the changes in D59766 and D59787, that allows
implementation of TTI::getMemcpyCost.
Differential Revision: https://reviews.llvm.org/D59785
llvm-svn: 359537
Summary:
When a variable goes into scope several times within a single function
or when two variables from different scopes share a stack slot it may
be incorrect to poison such scoped locals at the beginning of the
function.
In the former case it may lead to false negatives (see
https://github.com/google/sanitizers/issues/590), in the latter - to
incorrect reports (because only one origin remains on the stack).
If Clang emits lifetime intrinsics for such scoped variables we insert
code poisoning them after each call to llvm.lifetime.start().
If for a certain intrinsic we fail to find a corresponding alloca, we
fall back to poisoning allocas for the whole function, as it's now
impossible to tell which alloca was missed.
The new instrumentation may slow down hot loops containing local
variables with lifetime intrinsics, so we allow disabling it with
-mllvm -msan-handle-lifetime-intrinsics=false.
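A hedged C++ sketch of the first failure mode (the code is ours, illustrative only):
```
int consume(int);

int loopScoped() {
  int acc = 0;
  for (int i = 0; i < 2; ++i) {
    int x;             // llvm.lifetime.start fires here on each iteration
    if (i == 0)
      x = 42;          // initialized only on the first iteration
    acc += consume(x); // i == 1 reads uninitialized memory; without
                       // re-poisoning at lifetime.start, the stale shadow
                       // from iteration 0 hides the bug (false negative)
  }
  return acc;
}
```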
Reviewers: eugenis, pcc
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60617
llvm-svn: 359536
The PrologEpilogInserter needs to insert a DW_OP_deref_size before
prepending a memory location expression to an already implicit
expression to avoid having the existing expression act on the memory
address instead of the value behind it.
The reason for using DW_OP_deref_size and not plain DW_OP_deref is that
big-endian targets need to read the right size as simply truncating a
larger read would yield the wrong result (LSB bytes are not at the lower
address).
This re-commit fixes issues reported in the first one. Namely deref was
inserted under wrong conditions and additionally the deref_size argument
was incorrectly encoded.
Differential Revision: https://reviews.llvm.org/D59687
llvm-svn: 359535
Do not combine (trunc adde(X, Y, Carry)) into (adde trunc(X), trunc(Y), Carry) if adde is not legal for the target, even at the type-legalization phase, because adde is special and will not be legalized later at the operation-legalization phase.
This fixes: PR40922
https://bugs.llvm.org/show_bug.cgi?id=40922
Differential Revision: https://reviews.llvm.org/D60854
llvm-svn: 359532
Background: A definition generator can be attached to a JITDylib to generate
new definitions in response to queries. For example: a generator that forwards
calls to dlsym can map symbols from a dynamic library into the JIT process on
demand.
If definition generation fails then the generator should be able to return an
error. This allows the JIT API to distinguish between the case where a
generator does not provide a definition, and the case where it was not able to
determine whether it provided a definition due to an error.
The immediate motivation for this is cross-process symbol lookups: If the
remote-lookup generator is attached to a JITDylib early in the search list, and
if a generator failure is misinterpreted as "no definition in this JITDylib" then
lookup may continue and bind to a different definition in a later JITDylib, which
is a bug.
llvm-svn: 359521
Summary:
MemorySSA keeps internal pointers of AA and DT.
If these get invalidated, so should MemorySSA.
Reviewers: george.burgess.iv, chandlerc
Subscribers: jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61043
llvm-svn: 359519
lld-link used to write PDB files that DIA couldn't recover natvis
files from if:
- The global strings table was > 64kiB
- There were at least 3 natvis files
The cause was that the hash function for the /src/headerblock stream
was incorrect: It needs to be truncated to 16 bit.
If the global strings table was <= 64kiB, truncating to 16 bit is a
no-op, so this wasn't needed for small programs.
If there are only 1 or 2 natvis files, then the growth strategy in
HashTable::grow() would mean the hash table would have 2 buckets (for 1
natvis file) or 4 buckets (for 2 natvis files), and since the hash
function is used modulo number of buckets, and since 2 and 4 divide
0x10000, the missing `% 0x10000` is a no-op there too. For 3 natvis
files, the hash table grows to 6 buckets, which has a factor that's not
common with 0x10000 and the difference starts to matter.
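A small standalone check (values are ours) of the modular arithmetic that made the missing truncation invisible for small bucket counts:
```
#include <cassert>
#include <cstdint>
#include <initializer_list>

int main() {
  uint32_t h = 0x12345;                 // a hash wider than 16 bits
  for (uint32_t b : {2u, 4u})           // bucket counts dividing 0x10000
    assert((h % 0x10000) % b == h % b); // truncation is a no-op
  uint32_t b = 6;                       // 3 natvis files -> 6 buckets
  assert((h % 0x10000) % b != h % b);   // now the truncation matters
  return 0;
}
```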
Fixes PR41626.
Differential Revision: https://reviews.llvm.org/D61277
llvm-svn: 359515
LLJITBuilder and LLLazyJITBuilder construct LLJIT and LLLazyJIT instances
respectively. Over time these will allow more configurable options to be
added while remaining easy to use in the default case, which for default
in-process JITing is now:
auto J = ExitOnErr(LLJITBuilder().create());
llvm-svn: 359511
Summary:
For ThinLTOCodegenerator, it has an option to save the object file
outputs into a directory which is essential for debug info. Tools like lldb
and dsymutil will look for these object files for debug info.
On Darwin platform, you can link fat binaries with one single clang
driver invocation like:
$ clang -arch x86_64 -arch i386 -Wl,-object_path_lto,$TMPDIR ...
Unfortunately, the output object files for one architecture are going to overwrite the previous ones and one architecture slice will end up with
no debug info. One example for this is to turn on ThinLTO for sanitizer
dylibs in compiler-rt project.
To fix the issue, add the name for the architecture into the name of the
output object file.
rdar://problem/35482935
Reviewers: tejohnson, bd1976llvm, dexonsmith, JDevlieghere
Reviewed By: dexonsmith
Subscribers: mehdi_amini, aprantl, inglorion, eraman, hiraditya, jkorous, dang, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60924
llvm-svn: 359508
The WebAssembly backend needs to know the signatures of all runtime
libcall functions. This adds the signature for __stack_chk_fail which was
previously missing.
Also, make the error message for a missing libcall include the name of
the function.
Differential Revision: https://reviews.llvm.org/D59521
Reviewed By: sbc100
llvm-svn: 359505
Change the PPCISelLowering.cpp function that decides to avoid update form in
favor of partial vector loads to know about newer load types and to not be
confused by the chain operand.
Differential Revision: https://reviews.llvm.org/D60102
llvm-svn: 359504
This was falling back and gives us a reason to create a selectIntrinsic function
which we would need eventually anyway. Update arm64-crypto.ll to show that we
correctly select it.
Also factor out the code for finding an intrinsic ID.
llvm-svn: 359501
This is necessary since SVN r330706, as tail merging can include
CFI instructions since then.
This fixes PR40322 and PR40012.
Differential Revision: https://reviews.llvm.org/D61252
llvm-svn: 359496
Add target shuffle decoding to isHorizontalBinOp as well as ISD::VECTOR_SHUFFLE support.
This does mean we can go through bitcasts, so we need to bitcast the extracted args to ensure they are the correct type.
Fixes PR39936 and should help with PR39920/PR39921
Differential Revision: https://reviews.llvm.org/D61245
llvm-svn: 359491
Follow-up to:
rL359482
Avoid this potential problem throughout by giving the type a name
and verifying the assumption that both operands are the same type.
llvm-svn: 359485
PVS Studio's copy+paste recognizer was seeing this as a typo; technically Op0/Op1 in an fcmp should always be the same type, but we might as well avoid the issue.
Reported in https://www.viva64.com/en/b/0629/
llvm-svn: 359482
* LegalizeAction should be printed by name rather than number
* Newly created instructions are incomplete at the point the observer first sees
them. They are therefore recorded in a small vector and printed just before
the legalizer moves on to another instruction. By this point, the instruction
must be complete.
llvm-svn: 359481
Summary:
Prior to this patch, the CommandLine parser would strip an
unlimited number of dashes from options. This patch limits it to
two.
Reviewers: rnk
Reviewed By: rnk
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61229
llvm-svn: 359480
Use size_t assignment to prevent a bad explicit type conversion warning.
Given the typical size of shuffle masks this was never going to happen, but this at least stops the warning.
Reported in https://www.viva64.com/en/b/0629/
llvm-svn: 359479
Summary:
Extract the logic for doing reassociations
from DAGCombiner::reassociateOps into a helper
function DAGCombiner::reassociateOpsCommutative,
and use that helper to trigger reassociation
on the original operand order, or the commuted
operand order.
Codegen is not identical since the operand order will
be different when doing the reassociations for the
commuted case. That causes some unfortunate churn in
some test cases. Apart from that this should be NFC.
Reviewers: spatel, craig.topper, tstellar
Reviewed By: spatel
Subscribers: dmgreen, dschuff, jvesely, nhaehnle, javed.absar, sbc100, jgravelle-google, hiraditya, aheejin, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61199
llvm-svn: 359476
Summary:
This patch is part of a patch series to add support for FileCheck
numeric expressions. This specific patch gives earlier and better
diagnostics for the @LINE expressions.
Rather than detecting parsing errors at matching time, this commit enhances parsing to detect issues with @LINE expressions at parse time and diagnose them more accurately.
Copyright:
- Linaro (changes up to diff 183612 of revision D55940)
- GraphCore (changes in later versions of revision D55940 and
in new revision created off D55940)
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: hiraditya, llvm-commits, probinson, dblaikie, grimar, arichardson, tra, rnk, kristina, hfinkel, rogfer01, JonChesterfield
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60383
llvm-svn: 359475
This change aims at making the file format compatible with the
way LLVM handles command line options.
Differential Revision: https://reviews.llvm.org/D60970
llvm-svn: 359462
Fix typo introduced in rL332824 where we simplified the exact string matches for "avx512.mask.permvar.sf.256" and "avx512.mask.permvar.si.256" to a string startswith test for "avx512.mask.permvar."
llvm-svn: 359460
This patch adds aliases for element sizes .B/.H/.S to the
AND/ORR/EOR/BIC bitwise logical instructions. The assembler now accepts
these instructions with all element sizes up to 64-bit (.D). The
preferred disassembly is .D.
llvm-svn: 359457
Summary:
This patch is part of a patch series to add support for FileCheck
numeric expressions. This specific patch gives earlier and better
diagnostics for the -D option.
Prior to this change, parsing of the -D option was very loose: it assumed
that there is an equal sign (which to be fair is now checked by the
FileCheck executable) and that the part on the left of the equal sign
was a valid variable name. This commit adds logic to ensure that this
is the case and gives diagnostic when it is not, making it clear that
the issue came from a command-line option error. This is achieved by
sharing the variable parsing code into a new function ParseVariable.
Copyright:
- Linaro (changes up to diff 183612 of revision D55940)
- GraphCore (changes in later versions of revision D55940 and
in new revision created off D55940)
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: hiraditya, llvm-commits, probinson, dblaikie, grimar, arichardson, tra, rnk, kristina, hfinkel, rogfer01, JonChesterfield
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60382
llvm-svn: 359447
Summary:
This patch adds some basic operations for fp16
vectors, such as bitcast from fp16 to i16,
required to perform extract_subvector (also added
here) and extract_element.
Reviewers: SjoerdMeijer, DavidSpickett, t.p.northover, ostannard
Reviewed By: ostannard
Subscribers: javed.absar, kristof.beyls, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60618
llvm-svn: 359433
Summary:
The Procedure Call Standard for the Arm Architecture
states that float16x4_t and float16x8_t behave just
as uint16x4_t and uint16x8_t for argument passing.
This patch adds the fp16 vectors to the
ARMCallingConv.td file.
Reviewers: miyuki, ostannard
Reviewed By: ostannard
Subscribers: ostannard, javed.absar, kristof.beyls, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60720
llvm-svn: 359431
Currently, clang's libTooling passes this function a fake argv0, which
means that no libTooling tools can find the standard headers on FreeBSD.
With this change, these will now work on any FreeBSD systems that have
procfs mounted. This isn't the right fix for the libTooling issue, but
it does bring the FreeBSD implementation of getExecutablePath closer to
the Linux and macOS implementations.
llvm-svn: 359427
This patch fixes PR40795, where constant-valued variable locations can
"leak" into blocks placed at higher addresses. The root of this is that
DbgEntityHistoryCalculator terminates all register variable locations at
the end of each block, but not constant-value variable locations.
Fixing this requires constant-valued DBG_VALUE instructions to be
broadcast into all blocks where the variable location remains valid, as
documented in the LiveDebugValues section of SourceLevelDebugging.rst,
and correct termination in DbgEntityHistoryCalculator.
Differential Revision: https://reviews.llvm.org/D59431
llvm-svn: 359426
The 128/256-bit versions of these instructions require an 'x' or 'y' suffix to
disambiguate the memory form in att syntax.
We were allowing the same suffix in intel syntax, but it appears gas does not
do that.
gas does allow the 'x' and 'y' suffix on register and broadcast forms even
though it's not needed. We were allowing it on the unmasked register form, but not on the masked versions, nor on the masked or unmasked broadcast forms.
While there fix some test coverage holes so they can be extended with the 'x'
and 'y' suffix tests.
llvm-svn: 359418
I got confused on the terminology, and the change in D60598 was not
correct. I was thinking of "exact" in terms of the result being
non-approximate. However, the relevant distinction here is whether
the result is
* Largest range such that:
Forall Y in Other: Forall X in Result: X BinOp Y does not wrap.
(makeGuaranteedNoWrapRegion)
* Smallest range such that:
Forall Y in Other: Forall X not in Result: X BinOp Y wraps.
(A hypothetical makeAllowedNoWrapRegion)
* Both. (makeExactNoWrapRegion)
I'm adding a separate makeExactNoWrapRegion method accepting a
single APInt (same as makeExactICmpRegion) and using it in the
places where the guarantee is relevant.
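A hedged usage sketch of the new overload (signature as described above; the concrete numbers are ours): for `add nsw i8 X, 100`, every X in [-128, 28) avoids signed wrap and every X outside it wraps, so the guaranteed and exact regions coincide for a single concrete operand.
```
#include "llvm/IR/ConstantRange.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Operator.h"
using namespace llvm;

ConstantRange exactAddNSWRegion() {
  APInt Hundred(/*numBits=*/8, /*val=*/100);
  // Exact no-wrap region for `add nsw i8 X, 100`: [-128, 28).
  return ConstantRange::makeExactNoWrapRegion(
      Instruction::Add, Hundred, OverflowingBinaryOperator::NoSignedWrap);
}
```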
Differential Revision: https://reviews.llvm.org/D60960
llvm-svn: 359402
Some of the combines might be further improved if we lower more shuffles with X86ISD::VPERMV3 directly, instead of waiting to combine the results.
llvm-svn: 359400
This was originally part of D61028, but it's an independent diff.
If we try the repeated divisor reciprocal transform before producing an estimate sequence,
then we have an opportunity to use scalar fdiv. On x86, the trade-off is 1 divss vs. 5
vector FP ops in the default estimate sequence. On recent chips (Skylake, Ryzen), the
full-precision division is only 3 cycle throughput, so that's probably the better perf
default option and avoids problems from x86's inaccurate estimates.
The last 2 tests show that users still have the option to override the defaults by using
the function attributes for reciprocal estimates, but those patterns are potentially made
faster by converting the vector ops (including ymm ops) to scalar math.
Differential Revision: https://reviews.llvm.org/D61149
llvm-svn: 359398
An xor reduction of a bool vector can be optimized to a parity check of the MOVMSK/BITCAST'd integer - if the population count is odd return 1, else return 0.
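A scalar C++ reference for the lowering (ours, illustrative):
```
#include <emmintrin.h>

// Xor-reducing a v16i8 bool vector is the parity of its MOVMSK: true if
// an odd number of lanes are set, false otherwise.
bool xorReduce(__m128i BoolVec) {
  int Mask = _mm_movemask_epi8(BoolVec); // one bit per byte lane
  return __builtin_popcount(Mask) & 1;   // odd population -> true
}
```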
Differential Revision: https://reviews.llvm.org/D61230
llvm-svn: 359396
Summary:
The register form of these instructions are CodeGenOnly instructions that cover
GR32->FR32 and GR64->FR64 bitcasts. There is a similar set of instructions for
the opposite bitcast. Due to the patterns using bitcasts these instructions get
marked as "bitcast" machine instructions as well. The peephole pass is able to
look through these as well as other copies to try to avoid register bank copies.
Because FR32/FR64/VR128 are all coalescable to each other we can end up in a
situation where a GR32->FR32->VR128->FR64->GR64 sequence can be reduced to
GR32->GR64 which the copyPhysReg code can't handle.
To prevent this, this patch removes one set of the 'bitcast' instructions. So
now we can only go GR32->VR128->FR32 or GR64->VR128->FR64. The instruction that
converts from GR32/GR64->VR128 has no special significance to the peephole pass
and won't be looked through.
I guess the other option would be to add support to copyPhysReg to just promote
the GR32->GR64 to a GR64->GR64 copy. The upper bits were basically undefined
anyway. But removing the CodeGenOnly instruction in favor of one that won't be
optimized seemed safer.
I deleted the peephole test because it couldn't be made to work with the bitcast
instructions removed.
The load versions of the instructions were unnecessary, as the pattern that selects them contains a bitcasted load, which should never happen.
Fixes PR41619.
Reviewers: RKSimon, spatel
Reviewed By: RKSimon
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61223
llvm-svn: 359392
As predicate masks are legal on AVX512 targets, we avoid MOVMSK in these cases, but we can just bitcast the bool vector to the integer equivalent directly - avoiding expansion of the reduction to a shuffle pattern.
llvm-svn: 359386
Fixes PR40332 in the limited case where we're selecting between a target shuffle and a zero vector.
We can extend this in the future to handle more opcodes and non-zero selections.
llvm-svn: 359378
Summary: If we have SSE2 we can use a MOVQ to store 64 bits and avoid falling back to a cmpxchg8b loop. If it's a seq_cst store we need to insert an mfence after the store.
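The kind of source this improves (a minimal sketch, ours): on 32-bit x86 with SSE2 the store below now lowers to a movq plus an mfence rather than a cmpxchg8b loop.
```
#include <atomic>
#include <cstdint>

void store64(std::atomic<uint64_t> &A, uint64_t V) {
  A.store(V, std::memory_order_seq_cst); // movq + mfence with this change
}
```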
Reviewers: spatel, RKSimon, reames, jfb, efriedma
Reviewed By: RKSimon
Subscribers: hiraditya, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60546
llvm-svn: 359368
This reverts commit 7a6ef3004655dd86d722199c471ae78c28e31bb4.
We discovered some internal test failures, so reverting for now.
Differential Revision: https://reviews.llvm.org/D61213
llvm-svn: 359363
ObjectLinkingLayer::Plugin provides event notifications when objects are loaded,
emitted, and removed. It also provides a modifyPassConfig callback that allows
plugins to modify the JITLink pass configuration.
This patch moves eh-frame registration into its own plugin, and teaches
llvm-jitlink to only add that plugin when performing execution runs on
non-Windows platforms. This should allow us to re-enable the test case that was
removed in r359198.
llvm-svn: 359357
getConstantVRegValWithLookThrough does the same thing as the
getConstantValueForReg function, and has more visibility across GISel. Plus, it
supports looking through G_TRUNC, G_SEXT, and G_ZEXT. So, we get better code
reuse and more functionality for free by using it.
Add some test cases to select-extract-vector-elt.mir to show that we can now
look through those instructions.
llvm-svn: 359351
Summary:
Targets like ARM, MSP430, PPC, and SystemZ have complex behavior when
printing the address of a MachineOperand::MO_GlobalAddress. Move that
handling into a new overridden method in each base class. A virtual
method was added to the base class for handling the generic case.
Refactors a few subclasses to support the target independent %a, %c, and
%n.
The patch also contains small cleanups for AVRAsmPrinter and
SystemZAsmPrinter.
It seems that NVPTXTargetLowering is possibly missing some logic to
transform GlobalAddressSDNodes for
TargetLowering::LowerAsmOperandForConstraint to handle "i" extended inline assembly constraints.
Fixes:
- https://bugs.llvm.org/show_bug.cgi?id=41402
- https://github.com/ClangBuiltLinux/linux/issues/449
Reviewers: echristo, void
Reviewed By: void
Subscribers: void, craig.topper, jholewinski, dschuff, jyknight, dylanmckay, sdardis, nemanjai, javed.absar, sbc100, jgravelle-google, eraman, kristof.beyls, hiraditya, aheejin, kbarton, fedor.sergeev, jrtc27, atanasyan, jsji, llvm-commits, kees, tpimh, nathanchance, peter.smith, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60887
llvm-svn: 359337
There are instructions for these, so mark them as legal. Select the correct
instruction in AArch64InstructionSelector.cpp.
Update select-bswap.mir and arm64-rev.ll to reflect the changes.
llvm-svn: 359331
Add support for abs() to ConstantRange. This will allow to handle
SPF_ABS select flavor in LVI and will also come in handy as a
primitive for the srem implementation.
The implementation is slightly tricky, because a) abs of signed min
is signed min and b) sign-wrapped ranges may have an abs() that is
smaller than a full range, so we need to explicitly handle them.
Differential Revision: https://reviews.llvm.org/D61084
llvm-svn: 359321
The PPC vector cost model values for insert/extract element reflect older
processors that lacked vector insert/extract and move-to/move-from VSR
instructions. Update getVectorInstrCost to give appropriate values for when
the newer instructions are present.
Differential Revision: https://reviews.llvm.org/D60160
llvm-svn: 359313
Create a matchBitOpReduction helper that checks for the pattern with any opcode.
First step towards reusing this code to recognize other scalar reduction patterns.
llvm-svn: 359296
The slow path (with at least one non-US-ASCII character) will be slower, but that doesn't matter.
Differential Revision: https://reviews.llvm.org/D61178
llvm-svn: 359294
As detailed on PR40758, Bobcat/Jaguar can perform vector immediate shifts on the same pipes as vector ANDs with the same latency - so it doesn't make sense to replace a shl+lshr with a shift+and pair as it requires an additional mask (with the extra constant pool, loading and register pressure costs).
Differential Revision: https://reviews.llvm.org/D61068
llvm-svn: 359293
A small step towards combining shuffles across vector sizes - this recognizes when a shuffle's operands are all extracted from the same larger source and tries to combine to an unary shuffle of that source instead. Fixes one of the test cases from PR34380.
Differential Revision: https://reviews.llvm.org/D60512
llvm-svn: 359292
This enables the pass to be used in the absence of
TargetTransformInfo. When the argument isn't passed, the factory
defaults to UninitializedAddressSpace and the flat address space is
obtained from the TargetTransformInfo as before this change. Existing
users won't have to change.
Patch by Kevin Petit.
Differential Revision: https://reviews.llvm.org/D60602
llvm-svn: 359290
The code was using the alignment of a pointer to the value, not the
alignment of the constant itself.
Maybe we got away with it so far because the pointer alignment is
fairly high, but we did end up under-aligning <16 x i8> vectors,
which was caught in the Chromium build after lld stopped over-aligning
the .rodata.cst16 section in r356428. (See crbug.com/953815)
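A loose C++ analogy for the mistake (types are ours): the alignment of a pointer to a value is not the alignment of the value's type.
```
#include <cstdio>

struct alignas(16) Vec16 { char Bytes[16]; }; // stand-in for <16 x i8>

int main() {
  std::printf("%zu\n", alignof(Vec16 *)); // 8 on x86-64: the pointer
  std::printf("%zu\n", alignof(Vec16));   // 16: the value's own alignment
  return 0;
}
```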
Differential revision: https://reviews.llvm.org/D61124
llvm-svn: 359287
When constrainRegClass is called if the constraining happens on a use the COPY
needs to be inserted before the instruction that contains the MachineOperand,
but if we are constraining a definition it actually needs to be added
after the instruction. In addition, the COPY needs to have its operands
flipped (in the use case we are copying from the old unconstrained register
to the new constrained register, while in the definition case we are copying
from the new constrained register that the instruction defines to the old
unconstrained register).
llvm-svn: 359282
isValidCandidateForColdCC is much more expensive than
TTI.useColdCCForColdCall, which by default just returns false. Avoid
doing this work if we're not going to look at the answer anyway.
This change is NFC, but I see significant compile time improvements on
some code with pathologically many functions.
llvm-svn: 359253
When failing materialization of a symbol X, remove X from the dependants list
of any of X's dependencies. This ensures that when X's dependencies are
emitted (or fail themselves) they do not try to access the no-longer-existing
MaterializationInfo for X.
llvm-svn: 359252
These builtins provide access to the new integer and
sub-integer variants of MMA (matrix multiply-accumulate) instructions
provided by CUDA-10.x on sm_75 (AKA Turing) GPUs.
Also added a feature for PTX 6.4. While Clang/LLVM does not generate
any PTX instructions that need it, we still need to pass it through to
ptxas in order to be able to compile code that uses the new 'mma'
instruction as inline assembly (e.g used by NVIDIA's CUTLASS library
https://github.com/NVIDIA/cutlass/blob/master/cutlass/arch/mma.h#L101)
Differential Revision: https://reviews.llvm.org/D60279
llvm-svn: 359248
All of the new instructions are still handled mostly by tablegen. I've slightly
refactored the code to drive intrinsic/instruction generation from a master
list of supported variants, so all irregularities have to be implemented in one place only.
The test generation script wmma.py has been refactored in a similar way.
Differential Revision: https://reviews.llvm.org/D60015
llvm-svn: 359247
PTX 6.3 requires using ".aligned" in the MMA instruction names.
In order to generate the correct name, we now pass the current
PTX version to each instruction as an extra constant operand
and InstPrinter adjusts its output accordingly.
Differential Revision: https://reviews.llvm.org/D59393
llvm-svn: 359246
Generalized constructions of 'fragments' of MMA operations to provide
common primitives for construction of the ops. This will make it easier
to add new variants of the instructions that operate on integer types.
Use nested foreach loops which makes it possible to better control
naming of the intrinsics.
This patch does not affect LLVM's output, so there are no test changes.
Differential Revision: https://reviews.llvm.org/D59389
llvm-svn: 359245
Adds a representation of the section header table to XCOFFObjectFile,
and implements enough to dump the section headers with llvm-objdump.
Differential Revision: https://reviews.llvm.org/D60784
llvm-svn: 359244
I added a diagnostic along the lines of `-Wpessimizing-move` to detect `return x = y` suppressing copy elision, but I don't know if the diagnostic is really worth it. Anyway, here are the places where my diagnostic reported that copy elision would have been possible if not for the assignment.
P1155R1 in the post-San-Diego WG21 (C++ committee) mailing discusses whether WG21 should fix this pitfall by just changing the core language to permit copy elision in cases like these.
(Kona update: The bulk of P1155 is proceeding to CWG review, but specifically *not* the parts that explored the notion of permitting copy-elision in these specific cases.)
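A minimal C++ sketch of the pattern the diagnostic flags (ours):
```
#include <string>

std::string pessimizing(std::string x, std::string y) {
  return x = y; // assignment yields an lvalue, so x is copied
}

std::string better(std::string x, std::string y) {
  x = y;
  return x;     // eligible for implicit move
}
```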
Reviewed By: dblaikie
Author: Arthur O'Dwyer
Differential Revision: https://reviews.llvm.org/D54885
llvm-svn: 359236
This case was missing before, so we couldn't legalize it.
Add it to AArch64LegalizerInfo.cpp and update select-extract-vector-elt.mir.
llvm-svn: 359231
Let the ARC optimizer bail out if the number of pointers it keeps track of becomes too large.
ARC optimizer does a top-down and a bottom-up traversal of the whole
function to pair up retain and release instructions and remove them.
This can be expensive if the number of instructions in the function and
pointer states it tracks are large since it has to look at each pointer
state and determine whether the instruction being visited can
potentially use the pointer.
This patch adds a command line option that sets a limit to the number of
pointers it tracks.
rdar://problem/49477063
Differential Revision: https://reviews.llvm.org/D61100
llvm-svn: 359226
This adds a legalization rule for G_ZEXT, G_ANYEXT, and G_SEXT which allows
extends whenever the types will fit in registers (or the source is an s1).
Update tests. Add GISel checks throughout all of arm64-vabs.ll,
where we now select a good portion of the code. Add GISel checks to
arm64-subvector-extend.ll, which has a good number of vector extends in it.
Differential Revision: https://reviews.llvm.org/D60889
llvm-svn: 359222
We had special case handling here, but it uses a scalar any_extend for the
promotion then bitcasts to the final type. This won't split up the input data
into multiple promoted elements like we need.
This patch falls back to doing the conversion through memory.
Fixes PR41594 which I believe was reflected in the bitcast-vector-bool.ll
changes. The changes to vector-half-conversions.ll are fixing a previously
unknown miscompile from this issue.
Differential Revision: https://reviews.llvm.org/D61114
llvm-svn: 359219
When evaluating a store through a bitcast, the evaluator tries to move the
bitcast from the pointer onto the stored value. If the cast is invalid, it
tries to "introspect" the type to get a valid cast by obtaining a pointer to
the initial element (if the type is nested, this may require walking several
initial elements).
In some situations it is possible to get a bitcast on a load (e.g. with
unions, where the bitcast may not be the same type as the store). However,
equivalent logic to the store case for introspecting the type is missing. This patch adds this logic.
Note, when developing the patch I was unhappy with adding similar logic
directly to the load case as it could get out of step. Instead, I have
abstracted the "introspection" into a helper function, with the specifics
being handled by a passed-in lambda function.
Differential Revision: https://reviews.llvm.org/D60793
llvm-svn: 359205
Add legalizer support for G_FNEARBYINT. It's the same as G_FCEIL etc.
Since the importer allows us to automatically select this after legalization,
also add tests for selection etc. Also update arm64-vfloatintrinsics.ll.
llvm-svn: 359204
Translate llvm.nearbyint into G_FNEARBYINT as a simple intrinsic. Update
arm64-irtranslator.ll.
Differential Revision: https://reviews.llvm.org/D60922
llvm-svn: 359203
Summary:
There's still a little bit of constant factor that could be trimmed (e.g.
more overloads to avoid round-tripping primitives through json::Value).
But this solves the memory scaling problem, and greatly improves the performance
constant factor, and the API should leave room for optimization if needed.
Adapt TimeProfiler to use it, eliminating almost all the performance regression
from r358476.
Performance test on my machine:
perf stat -r 5 ~/llvmbuild-opt/bin/clang++ -w -S -ftime-trace -mllvm -time-trace-granularity=0 spirit.cpp
Handcrafted JSON (HEAD=r358532 with r358476 reverted): 2480ms
json::Value (HEAD): 2757ms (+11%)
After this patch: 2520 ms (+1.6%)
Reviewers: anton-afanasyev, lebedev.ri
Subscribers: kristina, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60804
llvm-svn: 359186
Summary:
Concurrent (e.g. nested) llvm::parallel::for_each() may lead to deadlocks. See PR35788 (fixed by rLLD322041) and PR41508 (fixed by D60757).
When parallel_for_each() is about to return, in ~Latch() called by
~TaskGroup(), a thread (in the default executor) may block in
Latch::sync() waiting for Count to become zero. If all threads in the
default executor are blocked, it is a deadlock.
To fix this, force serial execution if the current TaskGroup is not the
first one. For a nested llvm::parallel::for_each(), this parallelizes
the outermost loop and serializes inner loops.
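A hedged sketch of the shape this makes safe (data layout is ours; the API is llvm/Support/Parallel.h as of this commit):
```
#include "llvm/Support/Parallel.h"
#include <vector>

// The inner for_each now runs serially instead of deadlocking once every
// executor thread is blocked in the outer loop's Latch::sync().
void doubleAll(std::vector<std::vector<int>> &Rows) {
  llvm::parallel::for_each(
      llvm::parallel::par, Rows.begin(), Rows.end(),
      [](std::vector<int> &Row) {
        llvm::parallel::for_each(llvm::parallel::par, Row.begin(),
                                 Row.end(), [](int &V) { V *= 2; });
      });
}
```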
Differential Revision: https://reviews.llvm.org/D61115
llvm-svn: 359182
Summary:
Annotations allow writing nice-looking unit test code when one needs
access to locations from the source code, e.g. running code completion
at particular offsets in a file. See comments in Annotations.cpp for
more details on the API.
Also got rid of a duplicate annotations parsing code in clang's code
complete tests.
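A hedged usage sketch (based on the API described in this patch; details may differ):
```
#include "llvm/Testing/Support/Annotations.h"

void annotationsExample() {
  llvm::Annotations Test("int [[var]] = ^42;");
  llvm::StringRef Clean = Test.code();       // "int var = 42;"
  size_t Caret = Test.point();               // offset of '4' in Clean
  llvm::Annotations::Range R = Test.range(); // covers "var"
  (void)Clean; (void)Caret; (void)R;
}
```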
Reviewers: gribozavr, sammccall
Reviewed By: gribozavr
Subscribers: mgorny, hiraditya, ioeric, MaskRay, jkorous, arphaman, kadircet, jdoerfert, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D59814
llvm-svn: 359179
yaml2obj might crash on invalid input when unable to parse the YAML.
Recently a crash of a very similar nature was fixed for empty files. This patch revisits the fix and does it in yaml::Input instead, which seems to be a more correct way to handle such situations. With that, the crash for invalid inputs is also fixed now.
Differential revision: https://reviews.llvm.org/D61059
llvm-svn: 359178
Truncate the movmsk scalar integer result to the equivalent scalar integer width as before but then bitcast to the requested type.
We still have the issue identified in PR41594 but D61114 should handle this.
llvm-svn: 359176
On Mips32r2 bitcast can be expanded to two sw instructions and an ldc1
when using bitcast i64 to double or an sdc1 and two lw instructions when
using bitcast double to i64. By introducing custom lowering that uses
mtc1/mthc1 we can avoid excessive instructions.
Patch by Mirko Brkusanin.
Differential Revision: https://reviews.llvm.org/D61069
llvm-svn: 359171
The IndexReg will always be non-null at this point. Earlier in the function, if
IndexReg was null we set it to CurDAG->getRegister(0, VT) which made it
non-null.
llvm-svn: 359170
Summary:
When refactoring vectorization flags, vectorization was disabled by default in the new pass manager.
This patch re-enables it for both managers, and changes the assumptions opt makes, based on the new defaults.
Comments in opt.cpp should clarify the intended use of all flags to enable/disable vectorization.
Reviewers: chandlerc, jgorbe
Subscribers: jlebar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61091
llvm-svn: 359167
Do not wrap the contents of printFusionCandidates in the LLVM_DEBUG macro. This
fixes an unused variable warning generated when compiling without asserts but
with -DENABLE_LLVM_DUMP.
Differential Revision: https://reviews.llvm.org/D61035
llvm-svn: 359161
For well-known type IDs, include the name of the type.
To not duplicate the ID->name map, make llvm-readobj call this new
function as well. It has slightly different output, so this also
requires updating a few tests.
Differential Revision: https://reviews.llvm.org/D61086
llvm-svn: 359153
Summary:
We've seen cases of bots failing with:
clang: error: unable to execute command: posix_spawn failed: Interrupted system call
Add a small retry loop to posix_spawn in case this happens. Don't retry too much in case there's some systemic problem going on, but retry a few times.
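A hedged sketch of such a retry loop (the bound of 5 is ours, illustrative):
```
#include <errno.h>
#include <spawn.h>

int spawnWithRetry(pid_t *Pid, const char *Path, char *const Argv[],
                   char *const Envp[]) {
  int Err = 0;
  for (int Tries = 0; Tries != 5; ++Tries) {
    Err = posix_spawn(Pid, Path, /*file_actions=*/nullptr,
                      /*attrp=*/nullptr, Argv, Envp);
    if (Err != EINTR)
      break; // success, or an error that retrying will not fix
  }
  return Err;
}
```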
<rdar://problem/50181448>
Reviewers: Bigcheese, arphaman
Subscribers: jkorous, dexonsmith, kristina, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61096
llvm-svn: 359152
Summary:
This emits labels around heapallocsite calls and S_HEAPALLOCSITE debug
info in codeview. Currently only changes FastISel, so emitting labels still
needs to be implemented in SelectionDAG.
Reviewers: rnk
Subscribers: aprantl, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D61083
llvm-svn: 359149
If we have a vector FP division with a splatted divisor, use the existing transform
that converts 'x/y' into 'x * (1.0/y)' to allow more conversions. This can then
potentially be converted into a scalar FP division by existing combines (rL358984)
as seen in the tests here.
That can be a potentially big perf difference if scalar fdiv has better timing
(including avoiding possible frequency throttling for vector ops).
Differential Revision: https://reviews.llvm.org/D61028
llvm-svn: 359147
Using initial-exec TLS variables is a reasonable performance
optimisation for system libraries. Use the correct PIC mechanism to get
hold of the GOT to avoid text relocations.
Differential Revision: https://reviews.llvm.org/D61026
llvm-svn: 359146
Summary: The code did not check if the operand was undef before casting it to Instruction.
Reviewers: RKSimon, ABataev, dtemirbulatov
Reviewed By: ABataev
Subscribers: uabelho
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61024
llvm-svn: 359136
First, use the old style of linker initialization for MSVC 2019 in
addition to 2017. MSVC 2019 emits a dynamic initializer for
ManagedStatic when compiled in debug mode, and according to zturner,
also sometimes in release mode. I wasn't able to reproduce that, but it
seems best to stick with the old code that works.
When clang is using the MSVC STL, we have to give ManagedStatic a
constexpr constructor that fully zero initializes all fields, otherwise
it emits a dynamic initializer. The MSVC STL implementation of
std::atomic has a non-trivial (but constexpr) default constructor that
zero initializes the atomic value. Because one of the fields has a
non-trivial constructor, ManagedStatic ends up with a non-trivial ctor.
The ctor is not constexpr, so clang ends up emitting a dynamic
initializer, even though it simply does zero initialization. To make it
constexpr, we must initialize all fields of the ManagedStatic.
However, while the constructor that takes a pointer is marked constexpr,
clang says it does not evaluate to a constant because it contains a cast
from a pointer to an integer. I filed this as:
https://developercommunity.visualstudio.com/content/problem/545566/stdatomic-value-constructor-is-not-actually-conste.html
Once we do that, we can add back the
LLVM_REQUIRE_CONSTANT_INITIALIZATION marker, and so far as I'm aware it
compiles successfully on all supported targets.
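A minimal illustration of the constraint (types are ours, not the real ManagedStatic): with a constexpr constructor that initializes every member, the namespace-scope object is constant-initialized and no dynamic initializer is emitted.
```
#include <atomic>

struct ManagedStaticLike {
  std::atomic<void *> Ptr;
  void (*Deleter)(void *);
  constexpr ManagedStaticLike() : Ptr(nullptr), Deleter(nullptr) {}
};

ManagedStaticLike TheInstance; // zero-initialized at compile time
```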
llvm-svn: 359135
It turns out that I misread the man page and fcopyfile(3) does not
actually support COPYFILE_CLONE for files.
<rdar://problem/50148757>
llvm-svn: 359127
While this doesn't come up in reasonable cases currently (the only user
defined types not in type units are ones without linkage - which makes
for near-ODR violations, because it'd be a type with linkage referencing
a type without linkage - such a type can't be validly defined in more
than one TU, so arguably it shouldn't be in a type unit to begin with -
but it's a convenient way to demonstrate an issue that will become more
prevalent with homed modular debug info type definitions - which also
don't need to be in type units but more legitimately so).
Precursor to the Clang change to de-type-unit (by omitting the
'identifier') types homed due to strong linkage vtables. (making that
change without this one would lead to major type duplication in type
units)
llvm-svn: 359122
ReplaceAllUsesWith doesn't remove the node that was replaced, so it's left around in the graph messing up use counts on other nodes.
One thing to note is that this isn't valid if the node being deleted is the root node of an LEA match that gets rejected. In that case the node needs to stay alive because the isel table walking code would still have a reference to it that it's going to try to match next. I don't think that's the case here though, because the nodes being deleted here should be "and", "srl", and "zero_extend", none of which can be the root node of an LEA match.
Differential Revision: https://reviews.llvm.org/D61048
llvm-svn: 359121
Summary: There is still some value in using these functions while the remaining LLVMValueRef-based accessors are still around, but LLVMMDNodeInContext in particular has some wonky semantics that make it worth replacing outright.
Reviewers: whitequark, deadalnix
Reviewed By: whitequark
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60524
llvm-svn: 359114
This patch rewrites the existing PACKSS/PACKUS constant folding code to expand as a generic expansion.
This is a first NFCI step toward expanding PACKSS/PACKUS intrinsics which are acting as non-saturating truncations (although technically the expansion could be used in all cases - but we'll probably want to be conservative).
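For reference, a scalar C++ model of one PACKSS lane (ours); the generic expansion applies this per element:
```
#include <algorithm>
#include <cstdint>

int8_t packssLane(int16_t X) {
  // Signed-saturate an i16 element down to i8.
  return static_cast<int8_t>(std::clamp<int16_t>(X, INT8_MIN, INT8_MAX));
}
```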
llvm-svn: 359111
Frame Description Entries (FDEs) have a pointer back to a Common Information Entry (CIE) that describes how the rest of the FDE should be parsed. JITLink had been
assuming that FDEs always referred to the most recent CIE encountered, but the
spec allows them to point back to any previously encountered CIE. This patch
fixes JITLink to look up the correct CIE for the FDE.
The testcase is a MachO binary with an FDE that refers to a CIE that is not the one immediately preceding it (the layout can be viewed with 'dwarfdump --eh-frame <testcase>'). This test case had to be a binary, as llvm-mc
now sorts FDEs (as of r356216) to ensure FDEs *do* point to the most recent CIE.
llvm-svn: 359105
If the types don't match, we can't just remove the shuffle.
There may be some other opportunity for optimization here,
but this should prevent the crashing seen in:
https://bugs.llvm.org/show_bug.cgi?id=41414
llvm-svn: 359095
If two .res files contain the same resource, cvtres.exe (and hence
link.exe) reject the input with this message:
CVTRES : fatal error CVT1100: duplicate resource. type:STRING, name:101, language:0x0409
LINK : fatal error LNK1123: failure during conversion to COFF: file invalid or corrupt
llvm-cvtres (and lld-link) used to silently pick one of the duplicate
resources instead. This patch makes them report an error as well.
We slightly improve on cvtres by printing the name of two .res files
containing duplicate entries as well.
Differential Revision: https://reviews.llvm.org/D61049
llvm-svn: 359083
Summary:
Both the input Value pointer and the returned Value
pointers in GetUnderlyingObjects are now declared as
const.
It turned out that all current (in-tree) uses of
GetUnderlyingObjects were trivial to update, being
satisfied with having those Value pointers declared
as const. Actually, in the past several of the users
had to use const_cast, just because of ValueTracking
not providing a version of GetUnderlyingObjects with
"const" Value pointers. With this patch we get rid
of those const casts.
Reviewers: hfinkel, materi, jkorous
Reviewed By: jkorous
Subscribers: dexonsmith, jkorous, jholewinski, sdardis, eraman, hiraditya, jrtc27, atanasyan, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61038
llvm-svn: 359072
Summary:
The MachineFunction should have been created with the correct subtarget. As
long as there is no way to change it, MipsTargetMachine can just capture it
directly from the MachineFunction without calling getSubtargetImpl again.
While there, const correct the Subtarget pointer to avoid a const_cast.
I believe the Mips16Subtarget and NoMips16Subtarget members are never used, but
I'll leave their removal for a separate patch.
Reviewers: echristo, atanasyan
Reviewed By: atanasyan
Subscribers: sdardis, arichardson, hiraditya, jrtc27, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60936
llvm-svn: 359071
* Add support for uniquing strings in the remark streamer and emitting the string table in the remarks section.
* Add parsing support for the string table in the RemarkParser.
From this remark:
```
--- !Missed
Pass: inline
Name: NoDefinition
DebugLoc: { File: 'test-suite/SingleSource/UnitTests/2002-04-17-PrintfChar.c',
Line: 7, Column: 3 }
Function: printArgsNoRet
Args:
- Callee: printf
- String: ' will not be inlined into '
- Caller: printArgsNoRet
DebugLoc: { File: 'test-suite/SingleSource/UnitTests/2002-04-17-PrintfChar.c',
Line: 6, Column: 0 }
- String: ' because its definition is unavailable'
...
```
to:
```
--- !Missed
Pass: 0
Name: 1
DebugLoc: { File: 3, Line: 7, Column: 3 }
Function: 2
Args:
- Callee: 4
- String: 5
- Caller: 2
DebugLoc: { File: 3, Line: 6, Column: 0 }
- String: 6
...
```
And the string table in the .remarks/__remarks section containing:
```
inline\0NoDefinition\0printArgsNoRet\0
test-suite/SingleSource/UnitTests/2002-04-17-PrintfChar.c\0printf\0
will not be inlined into \0 because its definition is unavailable\0
```
This is mostly supposed to be used for testing purposes, but it gives us
a 2x reduction in the remark size, and is an incremental change for the
updates to the remarks file format.
Differential Revision: https://reviews.llvm.org/D60227
llvm-svn: 359050
Summary:
If two arguments are both readonly, then they have no memory dependency
that would violate noalias, even if they do actually overlap.
Reviewers: hfinkel, efriedma
Reviewed By: efriedma
Subscribers: efriedma, hiraditya, llvm-commits, tstellar
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60239
llvm-svn: 359047
Add selection support for G_INTRINSIC_ROUND, add a selection test, and add
check lines to arm64-vfloatintrinsics.ll and f16-instructions.ll.
llvm-svn: 359046
Add G_INTRINSIC_ROUND to isPreISelGenericFloatingPointOpcode to ensure that its
input and output are assigned the correct register bank.
Add a regbankselect test to verify that we get what we expect here.
llvm-svn: 359044
The simple case of:
```
int *callee();
void *caller(void *a) {
if (a == NULL)
return callee();
return a;
}
```
would generate a regular call instead of a tail call because we don't
look through the bitcast of the call to `callee` when duplicating the
return blocks.
Differential Revision: https://reviews.llvm.org/D60837
llvm-svn: 359041
Summary:
Always convert switches to br_tables unless there is only one case,
which is equivalent to a simple branch. This reduces code size for wasm,
and we defer possible jump table optimizations to the VM.
Addresses PR41502.
Reviewers: kripken, sunfish
Subscribers: dschuff, sbc100, jgravelle-google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60966
llvm-svn: 359038
Summary:
Enabling MemorySSA in the old pass manager leads to MemorySSA being run
twice due to the fact that LCSSA and LoopSimplify do not preserve
MemorySSA. This is the first step to address that: target LCSSA.
LCSSA does not make any changes that invalidate MemorySSA, so it
preserves it by design. It must preserve AA as well, for this to hold.
After this patch, MemorySSA is still run twice in the old pass manager.
Step two follows: target LoopSimplify.
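As an illustration, the legacy-PM preservation idiom looks roughly like the sketch below (a minimal sketch, not the exact in-tree diff):
```
// Illustrative: a legacy-PM pass declaring that it keeps AA and
// MemorySSA valid across its run.
void getAnalysisUsage(AnalysisUsage &AU) const override {
  AU.setPreservesCFG();
  AU.addPreserved<AAResultsWrapperPass>();  // AA must hold for MSSA to hold
  AU.addPreserved<MemorySSAWrapperPass>();  // LCSSA makes no MSSA-invalidating changes
}
```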
Subscribers: mehdi_amini, jlebar, Prazek, llvm-commits, george.burgess.iv, chandlerc
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60832
llvm-svn: 359032
Apparently FileCheck wasn't actually matching the fallback check lines in
arm64-vfloatintrinsics.ll properly. So, there were selection fallbacks for
G_INTRINSIC_TRUNC there.
Actually hook it up into AArch64InstructionSelector.cpp and write a proper
selection test.
I guess I'll figure out the FileCheck magic to make the fallback checks work
properly in arm64-vfloatintrinsics.ll.
llvm-svn: 359030
DominatorTree::dominate.
ARC contract pass has an optimization that replaces the uses of the
argument of an ObjC runtime function call with the call result.
For example:
; Before optimization
%1 = tail call i8* @foo1()
%2 = tail call i8* @llvm.objc.retainAutoreleasedReturnValue(i8* %1)
store i8* %1, i8** @g0, align 8
; After optimization
%1 = tail call i8* @foo1()
%2 = tail call i8* @llvm.objc.retainAutoreleasedReturnValue(i8* %1)
store i8* %2, i8** @g0, align 8 // %1 is replaced with %2
Before replacing the argument use, DominatorTree::dominate is called to
determine whether the user instruction is dominated by the ObjC runtime
function call instruction. The call to DominatorTree::dominate can be
expensive if the two instructions belong to the same basic block and the
size of the basic block is large. This patch checks the basic block size
and just bails out if the size exceeds the limit set by command line
option "arc-contract-max-bb-size".
rdar://problem/49477063
Differential Revision: https://reviews.llvm.org/D60900
llvm-svn: 359027
Originally committed in r358931
Reverted in r358997
It seems this change made Apple accelerator tables miss names (because
names started respecting the CU NameTableKind GNU & assuming that
shouldn't produce accelerated names too), which is never correct (apple
accelerator tables don't have separators or CU lists - if present, they
must describe all names in all CUs).
Original Description:
Currently to opt in to debug_names in DWARFv5, the IR must contain
'nameTableKind: Default' which also enables debug_pubnames.
Instead, only allow one of {debug_names, apple_names, debug_pubnames,
debug_gnu_pubnames}.
nameTableKind: Default gives debug_names in DWARFv5 and greater,
debug_pubnames in v4 and earlier - and apple_names when tuning for lldb
on MachO.
nameTableKind: GNU always gives gnu_pubnames
llvm-svn: 359026
Summary:
The opt level was not being passed down to the ThinLTO backend when
invoked via clang (for distributed ThinLTO).
This exposed an issue where the new PM was asserting if the Thin or
regular LTO backend pipelines were invoked with -O0 (not a new issue,
could be provoked by invoking in-process *LTO backends via linker using
new PM and -O0). Fix this similar to the old PM where -O0 only does the
necessary lowering of type metadata (WPD and LowerTypeTest passes) and
then quits, rather than asserting.
Reviewers: xur
Subscribers: mehdi_amini, inglorion, eraman, hiraditya, steven_wu, dexonsmith, cfe-commits, llvm-commits, pcc
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D61022
llvm-svn: 359025
Before, there was an IsData parameter. Now, there are two different
functions for data nodes and ID nodes. No behavior change, needed for a
follow-up change to make two data nodes (but not two ID nodes) with the
same ID an error.
For consistency, rename another addChild() overload to addNameChild().
llvm-svn: 359024
Add it to isPreISelGenericFloatingPointOpcode, and add a regbankselect test.
Update arm64-vfloatintrinsics.ll now that we can select it.
llvm-svn: 359022
Same patch as G_FCEIL etc.
Add the missing switch case in widenScalar, add G_INTRINSIC_TRUNC to the correct
rule in AArch64LegalizerInfo.cpp, and add a test.
llvm-svn: 359021
Add urem support to ConstantRange, so we can handle it in LVI. This
is an approximate implementation that tries to capture the most useful
conditions: If the LHS is always strictly smaller than the RHS, then
the urem is a no-op and the result is the same as the LHS range.
Otherwise the lower bound is zero and the upper bound is
min(LHSMax, RHSMax - 1).
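A minimal sketch of that approximation (illustrative only, not the in-tree code; it ignores the empty-set and division-by-zero corner cases):
```
// Approximate unsigned-remainder range, per the rules above.
ConstantRange uremApprox(const ConstantRange &LHS, const ConstantRange &RHS) {
  // If the LHS is always strictly smaller than the RHS, urem is a no-op.
  if (LHS.getUnsignedMax().ult(RHS.getUnsignedMin()))
    return LHS;
  // Otherwise the result lies in [0, min(LHSMax, RHSMax - 1)].
  APInt Upper = APIntOps::umin(LHS.getUnsignedMax(),
                               RHS.getUnsignedMax() - 1);
  return ConstantRange::getNonEmpty(APInt::getNullValue(LHS.getBitWidth()),
                                    Upper + 1);
}
```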
Differential Revision: https://reviews.llvm.org/D60952
llvm-svn: 359019
Same as G_FCEIL, G_FABS, etc. Just move it into that rule.
Add a legalizer test for G_FMA, which we didn't have before and update
arm64-vfloatintrinsics.ll.
llvm-svn: 359015
If we have a masked.load from a location we know to be dereferenceable, we can simply issue a speculative unconditional load against that address. The key advantage is that it produces IR which is well understood by the optimizer. The select (cnd, load, passthrough) form produced should be pattern matchable back to hardware predication if profitable.
Differential Revision: https://reviews.llvm.org/D59703
llvm-svn: 359000
Circling back to a leftover bit from PR39859:
https://bugs.llvm.org/show_bug.cgi?id=39859#c1
...we have this counter-intuitive (based on the test diffs) opportunity to use 'psubus'.
This appears to be the better perf option for both Haswell and Jaguar based on llvm-mca.
We already do this transform for the SETULT predicate, so this makes the code more
symmetrical too. If we have pminub/pminuw, we prefer those, so this should not affect
anything but pre-SSE4.1 subtargets.
$ cat before.s
movdqa -16(%rip), %xmm2 ## xmm2 = [32768,32768,32768,32768,32768,32768,32768,32768]
pxor %xmm0, %xmm2
pcmpgtw -32(%rip), %xmm2 ## xmm2 = [255,255,255,255,255,255,255,255]
pand %xmm2, %xmm0
pandn %xmm1, %xmm2
por %xmm2, %xmm0
$ cat after.s
movdqa -16(%rip), %xmm2 ## xmm2 = [256,256,256,256,256,256,256,256]
psubusw %xmm0, %xmm2
pxor %xmm3, %xmm3
pcmpeqw %xmm2, %xmm3
pand %xmm3, %xmm0
pandn %xmm1, %xmm3
por %xmm3, %xmm0
$ llvm-mca before.s -mcpu=haswell
Iterations: 100
Instructions: 600
Total Cycles: 909
Total uOps: 700
Dispatch Width: 4
uOps Per Cycle: 0.77
IPC: 0.66
Block RThroughput: 1.8
$ llvm-mca after.s -mcpu=haswell
Iterations: 100
Instructions: 700
Total Cycles: 409
Total uOps: 700
Dispatch Width: 4
uOps Per Cycle: 1.71
IPC: 1.71
Block RThroughput: 1.8
Differential Revision: https://reviews.llvm.org/D60838
llvm-svn: 358999
64-bit mode must use 64-bit registers; otherwise assumptions about the top
half of the registers are made. Problem found by Takeshi Nakayama in
NetBSD.
llvm-svn: 358998
This patch adds support for parsing and assembling the %tls_ie_pcrel_hi
and %tls_gd_pcrel_hi modifiers.
Differential Revision: https://reviews.llvm.org/D55342
llvm-svn: 358994
Essentially complete a proper rebase of the V3 metadata change over
https://reviews.llvm.org/D49096.
Minimize the diff between the V2 and V3 variants of the relevant lit
tests, and clean up some trailing whitespace.
llvm-svn: 358992
The manual says that Thumb2 add/sub instructions are only allowed to modify sp
if the first source is also sp. This is slightly different from the usual rGPR
restriction since it's context-sensitive, so implement it in C++.
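The rule itself is simple; a hypothetical sketch of the predicate (names are illustrative, not the in-tree code):
```
// A Thumb2 add/sub may only write sp when its first source is also sp.
bool isValidThumb2AddSubSP(unsigned DstReg, unsigned Src0Reg) {
  if (DstReg == ARM::SP)
    return Src0Reg == ARM::SP;
  return true;  // other destinations follow the usual rGPR restriction
}
```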
llvm-svn: 358987
If we only match build vectors, we can miss some patterns
that use shuffles as seen in the affected tests.
Note that the underlying calls within getSplatSourceVector()
have the potential for compile-time explosion because of
exponential recursion looking through binop opcodes, but
currently the list of supported opcodes is very limited.
Both of those problems should be addressed in follow-up
patches.
llvm-svn: 358984
Summary:
When an LCSSA phi survives through instruction selection, the pass
ends up removing that phi entirely because it is dominated by the
logic that does the lanemask merging.
This then used to trigger an assertion when processing a dependent
phi instruction.
Change-Id: Id4949719f8298062fe476a25718acccc109113b6
Reviewers: llvm-commits
Subscribers: kzhuravl, jvesely, wdng, yaxunl, t-tye, tpr, dstuttard, rtaylor, arsenm
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60999
llvm-svn: 358983
Converting InlineCost interface and its internals into CallBase usage.
Inliners themselves are still not converted.
Reviewed By: reames
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60636
llvm-svn: 358982
The check for creating CBZ in constant island pass recently obtained the
ability to search backwards to find a Cmp instruction. The code in IfCvt should
mirror this to allow more conversions to the smaller form. The common code has
been pulled out into a separate function to be shared between the two places.
Differential Revision: https://reviews.llvm.org/D60090
llvm-svn: 358977
Ifcvt can replicate instructions as it converts them to be predicated. This
stops that from happening on thumb2 targets at minsize where an extra IT
instruction is likely needed.
Differential Revision: https://reviews.llvm.org/D60089
llvm-svn: 358974
Summary:
The DAGCombiner is rewriting (canonicalizing) an ISD::ADD
with no common bits set in the operands as an ISD::OR node.
This could sometimes result in "missing out" on some
combines that normally are performed for ADD. To be more
specific this could happen if we already have rewritten an
ADD into OR, and later (after legalizations or combines)
we expose patterns that could have been optimized if we
had seen the OR as an ADD (e.g. reassociations based on ADD).
To make the DAG combiner less sensitive to if ADD or OR is
used for these "no common bits set" ADD/OR operations we
now apply most of the ADD combines also to an OR operation,
when value tracking indicates that the operands have no
common bits set.
Reviewers: spatel, RKSimon, craig.topper, kparzysz
Reviewed By: spatel
Subscribers: arsenm, rampitec, lebedev.ri, jvesely, nhaehnle, hiraditya, javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59758
llvm-svn: 358965
This patch provides intrinsics support for Memory Tagging Extension (MTE),
which was introduced with the Armv8.5-a architecture.
The intrinsics are described in detail in the latest
ACLE Q1 2019 documentation: https://developer.arm.com/docs/101028/latest
Reviewed by: David Spickett
Differential Revision: https://reviews.llvm.org/D60486
llvm-svn: 358963
About compressed sections, the spec says
(https://docs.oracle.com/cd/E37838_01/html/E36783/section_compression.html):
sh_addralign fields of the section header for a compressed section
reflect the requirements of the compressed section.
Currently, llvm-mc always puts the uncompressed section alignment in sh_addralign.
That is not correct. A zlib-style section contains an Elfxx_Chdr header,
so we should use either 4 or 8 depending on the target
(the uncompressed section alignment is stored in the ch_addralign field of the compression header).
GNU assembler 2.31.1 also has this issue, but it was already fixed in 2.32.51.
This was actually found while debugging https://bugs.llvm.org/show_bug.cgi?id=40482.
Differential revision: https://reviews.llvm.org/D60965
llvm-svn: 358960
In some circumstances we can end up with setup costs that are very complex to
compute, even though the SCEVs are not very complex to create. This can also
lead to setup costs that are calculated to be exactly -1, which LSR treats as an
invalid cost. This patch puts a limit on the recursion depth for setup cost to
prevent them taking too long.
Thanks to @reames for the report and test case.
Differential Revision: https://reviews.llvm.org/D60944
llvm-svn: 358958
This reverts r358910 (git commit 2b74466530)
While this patch *seems* trivial and safe and correct, it is not. The
copies are actually load-bearing. You can observe this with MSan
or other ways of checking for use-after-destroy, but otherwise this may
result in difficult-to-debug, inexplicable behavior.
I suspect the issue is that the debug location is used after the
original reference to it is removed. The metadata backing it gets
destroyed as its last references goes away, and then we reference it
later through these const references.
llvm-svn: 358940
Currently to opt in to debug_names in DWARFv5, the IR must contain
'nameTableKind: Default' which also enables debug_pubnames.
Instead, only allow one of {debug_names, apple_names, debug_pubnames,
debug_gnu_pubnames}.
nameTableKind: Default gives debug_names in DWARFv5 and greater,
debug_pubnames in v4 and earlier - and apple_names when tuning for lldb
on MachO.
nameTableKind: GNU always gives gnu_pubnames
llvm-svn: 358931
This was supposed to be NFC, but the change in SDLoc
definitions causes instruction scheduling changes.
There's nothing x86-specific in this code, and it can
likely be used from DAGCombiner's simplifyVBinOp().
llvm-svn: 358930
Summary:
- Only apply packed literal `op_sel_hi` skipping on operands requiring
packed literals. Even if an instruction is `packed`, it may have an operand
requiring a non-packed literal, such as `v_dot2_f32_f16`.
Reviewers: rampitec, arsenm, kzhuravl
Subscribers: jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60978
llvm-svn: 358922
If we have a store to a piece of memory which is known constant, then we know the store must be storing back the same value. As a result, the store (or memset, or memmove) must either be down a dead path, or a noop. In either case, it is valid to simply remove the store.
The motivating case for this involves a memmove to a buffer which is constant down a path which is dynamically dead.
Note that I'm choosing to implement the less aggressive of two possible semantics here. We could simply say that the store *is undefined*, and prune the path. Consensus in the review was that the more aggressive form might be a good follow on change at a later date.
Differential Revision: https://reviews.llvm.org/D60659
llvm-svn: 358919
In the process, use the existing masked.load combine which is slightly stronger, and handles a mix of zero and undef elements in the mask.
llvm-svn: 358913
These are inserted after branch relaxation, and for some reason it's
decided to put them in the long branch expansion block. It's probably
not great to rely on the source block address, so this should probably
be switched to being PC relative instead of relying on the block
address
llvm-svn: 358909
Back in August, r340525 introduced a dependency on the assumption
cache tracker in the ipsccp pass, but that commit missed a call to
INITIALIZE_PASS_DEPENDENCY, which leaves the assumption cache
improperly registered if SCCP is the only thing that pulls it in.
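The fix is the standard legacy-PM registration idiom; a hedged sketch (the exact description strings in tree may differ):
```
INITIALIZE_PASS_BEGIN(IPSCCPLegacyPass, "ipsccp",
                      "Interprocedural Sparse Conditional Constant Propagation",
                      false, false)
INITIALIZE_PASS_DEPENDENCY(AssumptionCacheTracker)  // the missing line
INITIALIZE_PASS_END(IPSCCPLegacyPass, "ipsccp",
                    "Interprocedural Sparse Conditional Constant Propagation",
                    false, false)
```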
llvm-svn: 358903
Currently, we do not expose BPI to loop passes at all. In the old pass manager, we appear to have been ignoring the fact that LCSSA and/or LoopSimplify didn't preserve BPI, and making it available to the following loop passes anyways. In the new one, it's invalidated before running any loop pass if either LCSSA or LoopSimplify actually make changes. If they don't make changes, then BPI is valid and available. So, we go ahead and teach LCSSA and LoopSimplify how to preserve BPI for consistency between old and new pass managers.
This patch avoids an invalidation between the two requires in the following trivial pass pipeline:
opt -passes="requires<branch-prob>,loop(no-op-loop),requires<branch-prob>"
(when the input file is one which requires either LCSSA or LoopSimplify to canonicalize the loops)
Differential Revision: https://reviews.llvm.org/D60790
llvm-svn: 358901
to CallInst.
The issue was raised here: https://reviews.llvm.org/D60903#1472783
The function Instruction::updateProfWeight is only used for CallInst in
profile update. From the current interface, it is very easy to think that
the function can also be used for branch instructions. However, branch
instructions don't need the scaling the function provides for
branch_weights and VP (value profile); in addition, scaling may introduce
inaccuracy for branch probability.
The patch moves the function updateProfWeight from Instruction class to
CallInst to remove the confusion. The patch also changes the scaling of
branch_weights from a loop to a block because we know that ProfileData
for branch_weights of CallInst will only have two operands at most.
Differential Revision: https://reviews.llvm.org/D60911
llvm-svn: 358900
This patch adds support for BigBitWidth -> SmallBitWidth bitcasts, splitting the DemandedBits/Elts accordingly.
The AMDGPU backend needed an extra (srl (and x, c1 << c2), c2) -> (and (srl(x, c2), c1) combine to encourage BFE creation, I investigated putting this in DAGCombine but it caused a lot of noise on other targets - some improvements, some regressions.
The X86 changes are all definite wins.
Differential Revision: https://reviews.llvm.org/D60462
llvm-svn: 358887
This reverts commit 7bf4d7c07f2fac862ef34c82ad0fef6513452445.
After thinking about this more, this isn't right, the range is not exact
in the same sense as makeExactICmpRegion(). This needs a separate
function.
llvm-svn: 358876
Following D60632 makeGuaranteedNoWrapRegion() always returns an
exact nowrap region. Rename the function accordingly. This is in
line with the naming of makeExactICmpRegion().
llvm-svn: 358875
Section atoms are not sorted, so we need to scan the whole section to find the
start address.
No test case: Found by inspection, and any reproduction would depend on pointer
ordering.
llvm-svn: 358865
llvm-undname used to put '\x' in front of every pair of nibbles, but
u"\xD7\xFF" produces a string with 6 bytes: \xD7 \0 \xFF \0 (and \0\0). Correct
for a single character (plus terminating \0) is u"\xD7FF" instead.
Now, wchar_t, char16_t, and char32_t strings roundtrip from source to
clang-cl (and cl.exe) and then llvm-undname.
(...at least as long as it's not a string like L"\xD7FF" L"foo" which
gets demangled as L"\xD7FFfoo", where the compiler then considers the
"f" as part of the hex escape. That seems ok.)
Also add a comment saying that the "almost-valid" char32_t string I
added in my last commit is actually produced by compilers.
llvm-svn: 358857
If an unsigned with all 4 bytes non-0 was passed to outputHex(), there
were two off-by-ones in it:
- Both MaxPos and Pos left space for the final \0, which left the buffer
one byte too small. Set MaxPos to 16 instead of 15 to fix.
- The `assert(Pos >= 0);` was after a `Pos--`, move it up one line.
Since valid Unicode codepoints are <= 0x10ffff, this could never really
happen in practice.
Found by oss-fuzz.
llvm-svn: 358856
Add support for uadd_sat and friends to ConstantRange, so we can
handle uadd.sat and friends in LVI. The implementation is forwarding
to the corresponding APInt methods with appropriate bounds.
One thing worth pointing out here is that the handling of wrapping
ranges is not maximally accurate. A simple example is that adding 0
to a wrapped range will return a full range, rather than the original
wrapped range. The tests also only check that the non-wrapping
envelope is correct and minimal.
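A minimal sketch of the forwarding (illustrative; the real implementation covers more intrinsics and is subject to the wrapping caveat above):
```
// Saturating-add range computed on the non-wrapping envelope: apply
// APInt::uadd_sat to both bounds, then re-pack the result.
ConstantRange uaddSatRange(const ConstantRange &L, const ConstantRange &R) {
  APInt Lo = L.getUnsignedMin().uadd_sat(R.getUnsignedMin());
  APInt Hi = L.getUnsignedMax().uadd_sat(R.getUnsignedMax());
  return ConstantRange::getNonEmpty(Lo, Hi + 1);
}
```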
Differential Revision: https://reviews.llvm.org/D60946
llvm-svn: 358855
ConstantRanges have an annoying special case: If upper and lower are
the same, it can be either an empty or a full set. When constructing
constant ranges nearly always a full set is intended, but this still
requires an explicit check in many places.
This revision adds a getNonEmpty() constructor that disambiguates this
case: If upper and lower are the same, a full set is created.
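For example (illustrative):
```
// With getNonEmpty(), Lower == Upper unambiguously means the full set:
ConstantRange Full = ConstantRange::getNonEmpty(APInt(8, 5), APInt(8, 5));
assert(Full.isFullSet());
// Distinct bounds behave like the regular constructor: the range [5, 10).
ConstantRange Part = ConstantRange::getNonEmpty(APInt(8, 5), APInt(8, 10));
```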
Differential Revision: https://reviews.llvm.org/D60947
llvm-svn: 358854
This does two main things, firstly adding some at least basic addressing modes
for i64 types, and secondly treats floats and doubles sensibly when there is no
fpu. The floating point change can help codesize in some cases, especially with
D60294.
Most backends seem not to consider the exact VT in isLegalAddressingMode,
instead switching on type size. That is now what this does when the target does
not have an fpu (as the float data will be loaded using LDR's). i64's currently
use the address range of an LDRD (even though they may be legalised and loaded
with an LDR). This is at least better than marking them all as illegal
addressing modes.
I have not attempted to do much with vectors yet. That will need changing once
MVE is added.
Differential Revision: https://reviews.llvm.org/D60677
llvm-svn: 358845
The error check required FDEs to refer to the most recent CIE, but the eh-frame
spec allows them to refer to any previously seen CIE. This patch removes the
offending check.
llvm-svn: 358840
- Don't assert when a string looks like a u32 string to the heuristic
but doesn't have a length that's 0 mod 4. Instead, classify those
as u16 with embedded \0 chars. Found by oss-fuzz.
- Print embedded nul bytes as \0 instead of \x00.
llvm-svn: 358835
Knowing the address/symbolnum field values makes it easier to identify the
unsupported relocation, and provides enough information for the full bit
pattern of the relocation to be reconstructed.
llvm-svn: 358833
Summary:
JITLink is a jit-linker that performs the same high-level task as RuntimeDyld:
it parses relocatable object files and makes their contents runnable in a target
process.
JITLink aims to improve on RuntimeDyld in several ways:
(1) A clear design intended to maximize code-sharing while minimizing coupling.
RuntimeDyld has been developed in an ad-hoc fashion for a number of years and
this has led to intermingling of code for multiple architectures (e.g. in
RuntimeDyldELF::processRelocationRef) in a way that makes the code more
difficult to read, reason about, and extend. JITLink is designed to isolate
format and architecture specific code, while still sharing generic code.
(2) Support for native code models.
RuntimeDyld required the use of large code models (where calls to external
functions are made indirectly via registers) for many platforms due to its
restrictive model for stub generation (one "stub" per symbol). JITLink allows
arbitrary mutation of the atom graph, allowing both GOT and PLT atoms to be
added naturally.
(3) Native support for asynchronous linking.
JITLink uses asynchronous calls for symbol resolution and finalization: these
callbacks are passed a continuation function that they must call to complete the
linker's work. This allows for cleaner interoperation with the new concurrent
ORC JIT APIs, while still being easily implementable in synchronous style if
asynchrony is not needed.
To maximise sharing, the design has a hierarchy of common code:
(1) Generic atom-graph data structure and algorithms (e.g. dead stripping and
| memory allocation) that are intended to be shared by all architectures.
|
+ -- (2) Shared per-format code that utilizes (1), e.g. Generic MachO to
| atom-graph parsing.
|
+ -- (3) Architecture specific code that uses (1) and (2). E.g.
JITLinkerMachO_x86_64, which adds x86-64 specific relocation
support to (2) to build and patch up the atom graph.
To support asynchronous symbol resolution and finalization, the callbacks for
these operations take continuations as arguments:
using JITLinkAsyncLookupContinuation =
std::function<void(Expected<AsyncLookupResult> LR)>;
using JITLinkAsyncLookupFunction =
std::function<void(const DenseSet<StringRef> &Symbols,
JITLinkAsyncLookupContinuation LookupContinuation)>;
using FinalizeContinuation = std::function<void(Error)>;
virtual void finalizeAsync(FinalizeContinuation OnFinalize);
In addition to its headline features, JITLink also makes other improvements:
- Dead stripping support: symbols that are not used (e.g. redundant ODR
definitions) are discarded, and take up no memory in the target process
(In contrast, RuntimeDyld supported pointer equality for weak definitions,
but the redundant definitions stayed resident in memory).
- Improved exception handling support. JITLink provides a much more extensive
eh-frame parser than RuntimeDyld, and is able to correctly fix up many
eh-frame sections that RuntimeDyld currently (silently) fails on.
- More extensive validation and error handling throughout.
This initial patch supports linking MachO/x86-64 only. Work on support for
other architectures and formats will happen in-tree.
Differential Revision: https://reviews.llvm.org/D58704
llvm-svn: 358818
Summary:
If you pass two 1024-bit vectors in IR with AVX2 on Windows 64, both vectors will be split into four 256-bit pieces. The four pieces of the first argument will be passed indirectly using 4 gprs. The second argument will get passed via pointers in memory.
The PartOffsets stored for the second argument are all in terms of its original 1024 bit size. So the PartOffsets for each piece are 32 bytes apart. So if we consider it for copy elision we'll only load an 8 byte pointer, but we'll move the address 32 bytes. The stack object size we create for the first part is probably wrong too.
This issue was encountered by ISPC. I'm working on getting a reduced test case, but wanted to go ahead and get feedback on the fix.
Reviewers: rnk
Reviewed By: rnk
Subscribers: dbabokin, llvm-commits, hiraditya
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60801
llvm-svn: 358817
Summary:
Teach CorrelatedValuePropagation to also handle sub instructions in addition to add. Relatively simple since makeGuaranteedNoWrapRegion already understood sub instructions. Only subtle change is which range is passed as "Other" to that function, since sub isn't commutative.
Note that CorrelatedValuePropagation::processAddSub is still hidden behind a default-off flag as IndVarSimplify hasn't yet been fixed to strip the added nsw/nuw flags and causes a miscompile. (PR31181)
Reviewers: sanjoy, apilipenko, nikic
Reviewed By: nikic
Subscribers: hiraditya, jfb, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60036
llvm-svn: 358816
This is a very minor issue. The returned section index is only used by
DWARFDebugLine as an llvm::upper_bound input and the use case shouldn't
cause any behavioral change.
llvm-svn: 358814
Fix for https://bugs.llvm.org/show_bug.cgi?id=41477. On the x32 ABI
with stack probing a dynamic alloca will result in a WIN_ALLOCA_32
with a 32-bit size. The current implementation tries to copy it into
RAX, resulting in a physreg copy error. Fix this by copying to EAX
instead. Also fix incorrect opcodes or registers used in subs.
llvm-svn: 358807
The MOVZX doesn't require an immediate to be encoded at all. Though it does use
a 2-byte opcode, so it's the same size as a 1-byte immediate. But it has a
separate source and dest register so can help avoid copies.
llvm-svn: 358805
There's one slight regression in here because we don't check that the immediate
already allowed movzx before the shift. I'll fix that next.
llvm-svn: 358804
Exactly the same as G_FCEIL, G_FABS, etc.
Add tests for the fp16/nofp16 behaviour, update arm64-vfloatintrinsics, etc.
Differential Revision: https://reviews.llvm.org/D60895
llvm-svn: 358799
My understanding is that once BuildMI has been called we can't fallback
to SelectionDAG.
This change moves the fallback for when getRegForValue() fails for
the target of an indirect call. This was failing in -fPIC mode when
the callee is a GlobalValue.
Add a test case that tickles this.
Differential Revision: https://reviews.llvm.org/D60908
llvm-svn: 358793
This is a follow-up to r291037+r291258, which used null debug locations
to prevent jumpy line tables.
Using line 0 locations achieves the same effect, but works better for
crash attribution because it preserves the right inline scope.
Differential Revision: https://reviews.llvm.org/D60913
llvm-svn: 358791
VK_SABS is part of the SymLoc bitfield in the variant kind which should
be compared for equality, not by checking the VK_SABS bit.
As far as I know, the existing code happened to produce the correct
results in all cases, so this is just a cleanup.
Patch by Stephen Crane.
Differential Revision: https://reviews.llvm.org/D60596
llvm-svn: 358788
Summary:
This emits labels around heapallocsite calls and S_HEAPALLOCSITE debug
info in codeview. Currently only changes FastISel, so emitting labels still
needs to be implemented in SelectionDAG.
Reviewers: hans, rnk
Subscribers: aprantl, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D60800
llvm-svn: 358783
Summary:
Turn the flags in LICM + MemorySSA into tuning options in the old and new
pass managers.
Subscribers: mehdi_amini, jlebar, Prazek, george.burgess.iv, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60490
llvm-svn: 358772
This instruction is legalized in the same way as G_FSIN, G_FCOS, G_FLOG10, etc.
Update legalize-pow.mir and arm64-vfloatintrinsics.ll to reflect the change.
Differential Revision: https://reviews.llvm.org/D60218
llvm-svn: 358764
Summary:
Trying to add the plumbing necessary to add tuning options to the new pass manager.
Testing with the flags for loop vectorize.
Reviewers: chandlerc
Subscribers: sanjoy, mehdi_amini, jlebar, steven_wu, dexonsmith, dang, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59723
llvm-svn: 358763
These are general queries, so they should not die when given
a degenerate input like an all undef mask. Callers should be
able to deal with an op that will eventually be simplified away.
llvm-svn: 358761
Gold and ld on Linux already support saving stats, but the
infrastructure is missing on Darwin. Unfortunately it seems like the
configuration from lib/LTO/LTO.cpp is not used.
This patch adds a new LTOStatsFile option and adds plumbing in Clang to
use it on Darwin, similar to the way remarks are handled.
Currently the handling of LTO flags seems quite spread out, with a bunch
of duplication. But I am not sure if there is an easy way to improve
that?
Reviewers: anemet, tejohnson, thegameg, steven_wu
Reviewed By: steven_wu
Differential Revision: https://reviews.llvm.org/D60516
llvm-svn: 358753
Summary:
The basic idea here is to make it possible to use
MachineInstr::mayAlias also when the MachineInstr
is const (or the "Other" MachineInstr is const).
The addition of const in MachineInstr::mayAlias
then rippled down to the need for adding const
in several other places, such as
TargetTransformInfo::getMemOperandWithOffset.
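A hedged usage sketch of what the const-correct signature enables (the exact parameter list is assumed from the description above):
```
// Querying aliasing from a context that only holds const MachineInstrs.
bool mayOverlap(const MachineInstr &A, const MachineInstr &B,
                AliasAnalysis *AA) {
  return A.mayAlias(AA, B, /*UseTBAA=*/false);
}
```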
Reviewers: hfinkel
Reviewed By: hfinkel
Subscribers: hfinkel, MatzeB, arsenm, jvesely, nhaehnle, hiraditya, javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60856
llvm-svn: 358744
Pending instructions that may have been blocked from being available by the HazardRecognizer may no longer be blocked when an instruction is scheduled; pending instructions should be re-checked in this case.
This is primarily aimed at VLIW targets with large parallelism and esoteric constraints.
No testcase as no in-tree targets have this behavior.
Differential revision: https://reviews.llvm.org/D60861
llvm-svn: 358743
removeUsers uses a work list to collect indirect users and call remove()
on those functions. However it has a bug (`if (!Visited.insert(UU).second)`).
Actually, we don't have to collect indirect users.
After the merge of F and G, G's callers will be considered (added to
Deferred). If G's callers can be merged, G's callers' callers will be
considered.
Update the test unnamed-addr-reprocessing.ll to make it clear we can
still merge indirect callers.
llvm-svn: 358741
Summary:
Ignore edges to non-SUnits (e.g. ExitSU) when checking
for low latency instructions.
When calling the function isLowLatencyInstruction(),
an ExitSU could be on the list of successors, not necessarily
a regular SU. In other places in the code there is a check
"Succ->NodeNum >= DAGSize" to prevent further processing of
ExitSU as "Succ->getInstr()" is NULL in such a case.
Also, 8 out of 9 cases of "SUnit *Succ = SuccDep.getSUnit()" have the
guard, so it is clearly an omission here.
Change-Id: Ica86f0327c7b2e6bcb56958e804ea6c71084663b
Reviewers: nhaehnle
Reviewed By: nhaehnle
Subscribers: MatzeB, arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, javed.absar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60864
llvm-svn: 358740
code to `CallBase`.
This patch focuses on the legacy PM, call graph, and some of inliner and legacy
passes interacting with those APIs from `CallSite` to the new `CallBase` class.
No interesting changes.
Differential Revision: https://reviews.llvm.org/D60412
llvm-svn: 358739
Summary:
There are two places where we create a HandleSDNode in address matching in order to handle the case where N is changed by CSE. But if we end up not matching, we fall back to code at the bottom of the switch that really would like N to point to something that wasn't CSEd away. So we should make sure we copy the handle back to N on any paths that can reach that code.
This appears to be the true reason we needed to check DELETED_NODE in the negation matching. In pr32329.ll we had two subtracts back to back. We recursed through the first subtract, and onto the second subtract. The second subtract called matchAddressRecursively on its LHS which caused that subtract to CSE. We ultimately failed the match and ended up in the default code. But N was pointing at the old node that had been deleted, but the default code didn't know that and took it as the base register. Then we unwound back to the first subtract and tried to access this bogus base reg requiring the check for deleted node. With this patch we now use the CSE result as the base reg instead.
matchAdd has been broken since sometime in 2015 when it was pulled out of the switch into a helper function. The assignment to N at the end was still there, but N was passed by value and not by reference so the update didn't go anywhere.
Reviewers: niravd, spatel, RKSimon, bkramer
Reviewed By: niravd
Subscribers: llvm-commits, hiraditya
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60843
llvm-svn: 358735
Another attempt to land the changes in debug line header to prevent duplicate
files in Dwarf 5. I rolled back my previous commit because of a mistake in
generating the object file in a test. Meanwhile, I addressed some offline
comments and changed the implementation; the largest difference is that
MCDwarfLineTableHeader does not keep DwarfVersion but gets it as a parameter. I
also merged the patch to fix two lld tests that will start to fail into this
patch.
Original Commit:
https://reviews.llvm.org/D59515
Original Message:
Motivation: In previous dwarf versions, file name indexes started from 1, and
the primary source file was not explicit. Dwarf 5 standard (6.2.4) prescribes
the primary source file to be explicitly given an entry with an index number 0.
The current implementation honors the specification by just duplicating the
main source file, once with index number 0, and later maybe with another
index number. While this is compliant with the letter of the standard, the
duplication causes problems for consumers of this information such as lldb.
(Some files are duplicated, where only some of them have a line table although
all refer to the same file)
With this change, dwarf 5 debug line section files always start from 0, and
the zeroth entry is not duplicated whenever possible. This requires different
handling of dwarf 4 and dwarf 5 during generation (e.g. when a function returns
an index zero for a file name, it signals an error in dwarf 4, but not in dwarf
5) However, I think the minor complication is worth it, because it enables all
consumers (lldb, gdb, dwarfdump, objdump, and so on) to treat all files in the
file name list homogenously.
llvm-svn: 358732
Change two costly udiv() calls to lshr(1)*RHS + left-shift + plus
On one 64-bit umul_ov benchmark, I measured an obvious improvement: 12.8129s -> 3.6257s
Note, there may be some value to special case 64-bit (the most common
case) with __builtin_umulll_overflow().
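For reference, that special case could look like this minimal sketch (hypothetical helper, assuming a GCC/Clang-style compiler that provides the builtin):
```
// Hypothetical 64-bit fast path: the builtin compiles to a single
// multiply plus an overflow-flag check on most targets.
bool umul64Overflows(unsigned long long A, unsigned long long B,
                     unsigned long long &Res) {
  return __builtin_umulll_overflow(A, B, &Res);
}
```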
Differential Revision: https://reviews.llvm.org/D60669
llvm-svn: 358730
We would previously drop the COMDAT on the thunk we generated when replacing a
function body with the forwarding thunk. This would result in a function that
may have been multiply emitted and multiply merged to be emitted with the same
name without the COMDAT. This is a hard error with PE/COFF where the COMDAT is
used for the deduplication of Value Witness functions for Swift.
llvm-svn: 358728
This adds legalization for G_SEXT, G_ZEXT, and G_ANYEXT for v8s8s.
We were falling back on G_ZEXT in arm64-vabs.ll before, preventing us from
selecting the @llvm.aarch64.neon.sabd.v8i8 intrinsic.
This adds legalizer support for those 3, which gives us selection via the
importer. Update the relevant tests (legalize-ext.mir, select-int-ext.mir) and
add a GISel line to arm64-vabs.ll.
Differential Revision: https://reviews.llvm.org/D60881
llvm-svn: 358715
Prior to this patch, each basic block listed in the extract-blocks-file
would be extracted to a different function.
This patch adds support for a comma-separated list of basic blocks
to form a group.
When the region formed by a group is not extractable, e.g., not single
entry, all the blocks of that group are left untouched.
Let us see this new format in action (comments are not part of the
file format):
;; funcName bbName[,bbName...]
foo bb1 ;; Extract bb1 in its own function
foo bb2,bb3 ;; Extract bb2,bb3 in their own function
bar bb1,bb4 ;; Extract bb1,bb4 in their own function
bar bb2 ;; Extract bb2 in its own function
Assuming all regions are extractable, this will create one function and
thus one call per region.
Differential Revision: https://reviews.llvm.org/D60746
llvm-svn: 358701
combineVectorTruncationWithPACKUS is currently splitting the upper bit masking into 128-bit subregs and then concatenating them back together.
This was originally done to avoid regressions that caused existing subregs to be concatenated to the larger type just for the AND masking before being extracted again. This was fixed by @spatel (notably rL303997 and rL347356).
This also lets SimplifyDemandedBits do some further improvements before it hits the recursive depth limit.
My only annoyance with this is that we were broadcasting some xmm masks but we seem to have lost them by moving to ymm - but that's a known issue as the logic in lowerBuildVectorAsBroadcast isn't great.
Differential Revision: https://reviews.llvm.org/D60375#inline-539623
llvm-svn: 358692
The bug is that I didn't check whether the operands of the invariant_loads were themselves invariant. I don't know how this got missed in the patch and review. I even had an unreduced test case locally, and I remember handling this case, but I must have lost it in one of the rebases. Oops.
llvm-svn: 358688
The purpose of this patch is to eliminate a pass ordering dependence between LoopPredication and LICM. To understand the purpose, consider the following snippet of code inside some loop 'L' with IV 'i'
A = _a.length;
guard (i < A)
a = _a[i]
B = _b.length;
guard (i < B);
b = _b[i];
...
Z = _z.length;
guard (i < Z)
z = _z[i]
accum += a + b + ... + z;
Today, we need LICM to hoist the length loads, LoopPredication to make the guards loop invariant, and TrivialUnswitch to eliminate the loop invariant guard to establish must execute for the next length load. Today, if we can't prove speculation safety, we'd have to iterate these three passes 26 times to reduce this example down to the minimal form.
Using the fact that the array lengths are known to be invariant, we can short circuit this iteration. By forming the loop invariant form of all the guards at once, we remove the need for LoopPredication from the iterative cycle. At the moment, we'd still have to iterate LICM and TrivialUnswitch; we'll leave that part for later.
As a secondary benefit, this allows LoopPred to expose peeling opportunities in a much more obvious manner. See the udiv test changes as an example. If the udiv was not hoistable (i.e. we couldn't prove speculation safety) this would be an example where peeling becomes obviously profitable whereas it wasn't before.
A couple of subtleties in the implementation:
- SCEV's isSafeToExpand guarantees speculation safety (i.e. let's us expand at a new point). It is not a precondition for expansion if we know the SCEV corresponds to a Value which dominates the requested expansion point.
- SCEV's isLoopInvariant returns true for expressions which compute the same value across all iterations executed, regardless of where the original Value is located. (i.e. it can be in the loop) This implies we have a speculation burden to prove before expanding them outside loops.
- invariant_loads and AA->pointsToConstantMemory are two cases that SCEV currently does not handle, but meets the SCEV definition of invariance. I plan to sink this part into SCEV once this has baked for a bit.
Differential Revision: https://reviews.llvm.org/D60093
llvm-svn: 358684
Summary:
The immediate post dominator of the loop header may be part of the divergent loop.
Since this /was/ the divergence propagation bound, the SDA would not detect joins of divergent paths outside the loop.
Reviewers: nhaehnle
Reviewed By: nhaehnle
Subscribers: mmasten, arsenm, jvesely, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59042
llvm-svn: 358681
isSafeToExpand was making a common, but dangerously wrong, mistake in assuming that if any instruction within a basic block executes, that all instructions within that block must execute. This can be trivially shown to be false by considering the following small example:
bb:
add x, y <-- InsertionPoint
call @throws()
udiv x, y <-- SCEV* S
br ...
It's clearly not legal to expand S above the throwing call, but the previous logic would do so since S dominates (but not properlyDominates) the block containing the InsertionPoint.
Since iterating instructions w/in a block is expensive, this change special cases two cases: 1) S is an operand of InsertionPoint, and 2) InsertionPoint is the terminator of its block. These two together are enough to keep all current optimizations triggering while fixing the latent correctness issue.
As best I can tell, this is a silent bug in current ToT. Given that, there's no tests with this change. It was noticed in an upcoming optimization change (D60093), and was reviewed as part of that. That change will include the test which caused me to notice the issue. I'm submitting this separately so that anyone bisecting a problem gets a clear explanation.
llvm-svn: 358680
Summary:
This patch adds support for yaml (de)serialization of the minidump
ModuleList stream. It's a fairly straight forward-application of the
existing patterns to the ModuleList structures defined in previous
patches.
One thing which may be interesting to call out explicitly is the
addition of "new" allocation functions to the helper BlobAllocator
class. The reason for this was that there was an emerging pattern of a
need to allocate space for entities which do not have a suitable
lifetime for use with the existing allocation functions. A typical
example of that was the "size" of various lists, which is only available
as a temporary returned by the .size() method of some container. For
these cases, one can use the new set of allocation functions, which
will take a temporary object, and store it in an allocator-managed
buffer until it is written to disk.
Reviewers: amccarth, jhenderson, clayborg, zturner
Subscribers: lldb-commits, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60405
llvm-svn: 358672
This replaces the MOVMSK combine introduced at D52121/rL342326
(movmsk (setne (and X, (1 << C)), 0)) -> (movmsk (X << C))
with the more general icmp lowering so it can pick up more cases through bitcasts - notably vXi8 cases which use vXi16 shifts+masks, this patch can remove the mask and use pcmpgtb(0,x) for the sra.
Differential Revision: https://reviews.llvm.org/D60625
llvm-svn: 358651
Summary:
This fixes the issue from the bugzilla: https://bugs.llvm.org/show_bug.cgi?id=41177
When the two operands of a BUILD_VECTOR are the same, we will get an assert error.
llvm::SDValue combineBVOfConsecutiveLoads(llvm::SDNode*, llvm::SelectionDAG&):
Assertion `!(InputsAreConsecutiveLoads && InputsAreReverseConsecutive) &&
"The loads cannot be both consecutive and reverse consecutive."' failed.
This error is caused by the wrong ElemSize when calling isConsecutiveLS(). We
should use `getScalarType().getStoreSize();` to get the ElemSize instead of
`getScalarSizeInBits() / 8`.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D60811
llvm-svn: 358644
fneg combining attempts to turn it into fadd(fneg(A), fneg(0)), but
creating the new fadd folds to just fneg(A). When A has multiple uses,
this confuses it and you get an assert. Fixed.
Differential Revision: https://reviews.llvm.org/D60633
Change-Id: I0ddc9b7286abe78edc0cd8d734fdeb05ff09821c
llvm-svn: 358640
Reverse the checking of the dominance order so that when a self compare happens,
it returns false. This makes the compare function a strict weak ordering.
llvm-svn: 358636
The test file has pairs of tests that are logically equivalent:
https://rise4fun.com/Alive/2zQ
%t4 = and i8 %t1, 8
%t5 = zext i8 %t4 to i16
%sh = shl i16 %t5, 2
%t6 = add i16 %sh, %t0
=>
%t4 = and i8 %t1, 8
%sh2 = shl i8 %t4, 2
%z5 = zext i8 %sh2 to i16
%t6 = add i16 %z5, %t0
...so if we can fold the shift op into LEA in the 1st pattern, then we
should be able to do the same in the 2nd pattern (unnecessary 'movzbl'
is a separate bug I think).
We don't want to do this any sooner though because that would conflict
with generic transforms that try to narrow the width of the shift.
Differential Revision: https://reviews.llvm.org/D60789
llvm-svn: 358622
Summary:
X86 is quite complicated, so I intend to leave it as is. ARM+Aarch64 do
basically the same thing (Aarch64 did not correctly handle immediates,
ARM has a test llvm/test/CodeGen/ARM/2009-04-06-AsmModifier.ll that uses
%a with an immediate) for a flag that should be target independent
anyways.
Reviewers: echristo, peter.smith
Reviewed By: echristo
Subscribers: javed.absar, eraman, kristof.beyls, hiraditya, llvm-commits, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60841
llvm-svn: 358618
Legalize things like i24 load/store by splitting them into smaller power of 2 operations.
This matches how SelectionDAG handles these operations.
Differential Revision: https://reviews.llvm.org/D59971
llvm-svn: 358613
This patch adds a basic loop fusion pass. It will fuse loops that conform to the
following 4 conditions:
1. Adjacent (no code between them)
2. Control flow equivalent (if one loop executes, the other loop executes)
3. Identical bounds (both loops iterate the same number of iterations)
4. No negative distance dependencies between the loop bodies.
The pass does not make any changes to the IR to create opportunities for fusion.
Instead, it checks if the necessary conditions are met and if so it fuses two
loops together.
The pass has not been added to the pass pipeline yet, and thus is not enabled by
default. It can be run stand alone using the -loop-fusion option.
Differential Revision: https://reviews.llvm.org/D55851
llvm-svn: 358607
Summary:
None of these derived classes do anything that the base class cannot.
If we remove these case statements, then the base class can handle them
just fine.
Reviewers: peter.smith, echristo
Reviewed By: echristo
Subscribers: nemanjai, javed.absar, eraman, kristof.beyls, hiraditya, kbarton, jsji, llvm-commits, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60803
llvm-svn: 358603
Summary:
Reapply r357931 with fixes to ThinLTO testcases and llvm-lto tool.
ThinLTOCodeGenerator currently does not preserve llvm.used symbols and
it can internalize them. In order to pass the necessary information to the
legacy ThinLTOCodeGenerator, the input to the code generator is
rewritten to be based on lto::InputFile.
Now ThinLTO using the legacy LTO API will require a data layout in
the Module.
"internalize" thinlto action in llvm-lto is updated to run both
"promote" and "internalize" with the same configuration as
ThinLTOCodeGenerator. The old "promote" + "internalize" option does not
produce the same output as ThinLTOCodeGenerator.
This fixes: PR41236
rdar://problem/49293439
Reviewers: tejohnson, pcc, kromanova, dexonsmith
Reviewed By: tejohnson
Subscribers: ormris, bd1976llvm, mehdi_amini, inglorion, eraman, hiraditya, jkorous, dexonsmith, arphaman, dang, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60421
llvm-svn: 358601
In InstCombine, we use an idiom of "store i1 true, i1* undef" to indicate we've found a path which we've proven unreachable. We can't actually insert the unreachable instruction since that would require changing the CFG. We leave that to simplifycfg later.
This just factors out that idiom creation so we don't duplicate the same mostly undocumented idiom creation in multiple places.
llvm-svn: 358600
If a branch is conditional on extractvalue(op.with.overflow(%x, C), 1)
then we can constrain the value of %x inside the branch based on
makeGuaranteedNoWrapRegion(). We do this by extending the edge-value
handling in LVI. This allows CVP to then fold comparisons against %x,
as illustrated in the tests.
Differential Revision: https://reviews.llvm.org/D60650
llvm-svn: 358597
Summary: This fixes a large Dawn of War 3 performance regression with RADV from Mesa 19.0 to master, which was caused by creating less code in some branches.
Reviewers: arsen, nhaehnle
Reviewed By: nhaehnle
Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60824
llvm-svn: 358592
Summary:
In the following cases, unrolling can be beneficial, even when
optimizing for code size:
1) very low trip counts
2) potential to constant fold most instructions after fully unrolling.
We can unroll in those cases, by setting the unrolling threshold to the
loop size. This might highlight some cost modeling issues and fixing
them will have a positive impact in general.
Reviewers: vsk, efriedma, dmgreen, paquette
Reviewed By: paquette
Differential Revision: https://reviews.llvm.org/D60265
llvm-svn: 358586
Summary:
This patch adds support for ULEB128 and SLEB128 encoding and decoding to
BinaryStreamWriter and BinaryStreamReader respectively.
Support for ULEB128/SLEB128 will be used for eh-frame parsing in the JITLink
library currently under development (see https://reviews.llvm.org/D58704).
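A hedged round-trip sketch (stream construction elided; method names per this patch, exact signatures assumed):
```
// Write a ULEB128 value, then read it back; decode must invert encode.
Error roundTripULEB128(BinaryStreamWriter &W, BinaryStreamReader &R) {
  if (Error E = W.writeULEB128(624485))  // classic 3-byte LEB128 example
    return E;
  uint64_t V = 0;
  if (Error E = R.readULEB128(V))
    return E;
  assert(V == 624485);
  return Error::success();
}
```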
Reviewers: zturner, dblaikie
Subscribers: kristina, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60810
llvm-svn: 358584
Currently there is a single point in ScheduleDAGRRList where we
actually query the topological order (besides init code). Currently we
are recomputing the order after adding a node (which does not have
predecessors) and then we add predecessors edge-by-edge.
We can avoid adding edges one-by-one after we added a new node. In that case, we can
just rebuild the order from scratch after adding the edges to the DAG
and avoid all the updates to the ordering.
Also, we can delay updating the DAG until we query the DAG, if we keep a
list of added edges. Depending on the number of updates, we can either
apply them when needed or recompute the order from scratch.
This brings the geomean compile time of CTMark with -O1 down 0.3% on X86,
with no regressions.
Reviewers: MatzeB, atrick, efriedma, niravd, paquette
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D60125
llvm-svn: 358583
Summary:
Add accessors for the file, directory, and source file name (curiously, an `Optional` value?) of a DIFile.
This is intended to replace the LLVMValueRef-based accessors used in D52239
Reviewers: whitequark, jberdine, deadalnix
Reviewed By: whitequark, jberdine
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60489
llvm-svn: 358577
On pre-AVX512 targets we can use MOVMSK to extract reduced boolean results. This is properly optimized; annoyingly, AVX512 isn't and produces code that is almost as bad as the (unchanged) costs suggest.
Differential Revision: https://reviews.llvm.org/D60403
llvm-svn: 358574
We have two versions of some instructions, VR128 versions and FR32 versions that
are marked as CodeGenOnly.
This change switches to using the VR128 versions for these copies. It's after
register allocation so the class size no longer matters. This matches how GR64
works.
llvm-svn: 358555
As it's causing some bot failures (and per request from kbarton).
This reverts commit r358543/ab70da07286e618016e78247e4a24fcb84077fda.
llvm-svn: 358546
This patch adds a basic loop fusion pass. It will fuse loops that conform to the
following 4 conditions:
1. Adjacent (no code between them)
2. Control flow equivalent (if one loop executes, the other loop executes)
3. Identical bounds (both loops iterate the same number of iterations)
4. No negative distance dependencies between the loop bodies.
The pass does not make any changes to the IR to create opportunities for fusion.
Instead, it checks if the necessary conditions are met and if so it fuses two
loops together.
The pass has not been added to the pass pipeline yet, and thus is not enabled by
default. It can be run standalone using the -loop-fusion option.
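As a hedged illustration, two loops of the following shape satisfy all four conditions:
```
// Adjacent, control-flow equivalent, identical bounds; the only
// dependence (a[i] written then read in the same iteration) has
// distance zero, so fusion is legal.
void f(int *a, int *b, int n) {
  for (int i = 0; i < n; ++i)
    a[i] = i;
  for (int i = 0; i < n; ++i)
    b[i] = a[i] * 2;
  // After fusion: one loop executing both statements per iteration.
}
```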
Phabricator: https://reviews.llvm.org/D55851
llvm-svn: 358543
Summary: Metadata for a global variable is really a (GlobalVariable, Expression) tuple. Allow access to these, then allow retrieving the file, scope, and line for a DIVariable, whether global or local. This should be the last of the accessors required for uniform access to location and file information metadata.
Reviewers: jberdine, whitequark, deadalnix
Reviewed By: jberdine, whitequark
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60725
llvm-svn: 358532
Summary:
The printOperand function takes a default parameter, for which there are
zero call sites that explicitly pass such a parameter. As such, there
is no case to support. This means that the method
printVecModifiedImmediate is purely dead code, and can be removed.
The eventual goal for some of these AsmPrinter refactoring is to have
printOperand be a virtual method; making it easier to print operands
from the base class for more generic Asm printing. It will help if all
printOperand methods have the same function signature (i.e. no Modifier
argument when not needed).
Reviewers: echristo, tra
Reviewed By: echristo
Subscribers: jholewinski, hiraditya, llvm-commits, craig.topper, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60727
llvm-svn: 358527
As discussed on PR41359, this patch renames the pair of shift-mask target feature functions to make their purposes more obvious.
shouldFoldShiftPairToMask -> shouldFoldConstantShiftPairToMask
preferShiftsToClearExtremeBits -> shouldFoldMaskToVariableShiftPair
llvm-svn: 358526
This is 1 of the problems discussed in the post-commit thread for:
rL355741 / http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20190311/635516.html
and filed as:
https://bugs.llvm.org/show_bug.cgi?id=41101
Instcombine tries to canonicalize some of these cases (and there's room for improvement
there independently of this patch), but it can't always do that because of extra uses.
So we need to recognize these commuted operand patterns here in EarlyCSE. This is similar
to how we detect commuted compares and commuted min/max/abs.
Differential Revision: https://reviews.llvm.org/D60723
llvm-svn: 358523
Summary:
Use llvm::json::Array.reserve() to optimize json output time. Here is the motivation:
https://reviews.llvm.org/D60609#1468941. In short: for the json array
with ~32K entries, pushing back each entry takes ~4% of the whole time compared
to the method of preliminary memory reservation: (3995-3845)/3995 = 3.75%.
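A minimal sketch of the pattern in question:
```
#include "llvm/Support/JSON.h"
#include <cstddef>
#include <cstdint>
// Reserving up front avoids repeated reallocation on push_back.
llvm::json::Array makeArray(std::size_t N) {
  llvm::json::Array A;
  A.reserve(N);
  for (std::size_t I = 0; I != N; ++I)
    A.push_back(static_cast<int64_t>(I));
  return A;
}
```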
Reviewers: lebedev.ri
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60792
llvm-svn: 358522
If a umul.with.overflow or smul.with.overflow operation cannot
overflow, simplify it to a simple mul nuw / mul nsw. After the
refactoring in D60668 this is just a matter of removing an
explicit check against multiplications.
Differential Revision: https://reviews.llvm.org/D60791
llvm-svn: 358521
This is a refactoring patch which should have all the functionality of the current code. Its goal is twofold:
i. Cleanup and simplify the reordering code, and
ii. Generalize reordering so that it will work for an arbitrary number of operands, not just 2.
This is the second patch in a series of patches that will enable operand reordering across chains of operations. An example of this was presented in EuroLLVM'18: https://www.youtube.com/watch?v=gIEn34LvyNo.
Committed on behalf of @vporpo (Vasileios Porpodas)
Differential Revision: https://reviews.llvm.org/D59973
llvm-svn: 358519
Improves codegen demonstrated by D60512 - instructions represented by X86ISD::PERMV/PERMV3 can never memory fold the operand used for their index register.
This patch updates the 'isUseOfShuffle' helper into the more capable 'isFoldableUseOfShuffle' that recognises that the op is used for a X86ISD::PERMV/PERMV3 index mask and can't be folded - allowing us to use broadcast/subvector-broadcast ops to reduce the size of the mask constant pool data.
Differential Revision: https://reviews.llvm.org/D60562
llvm-svn: 358516
If a constant shift amount is used, then only some of the LHS/RHS
operand bits are demanded and we may be able to simplify based on
that. InstCombineSimplifyDemanded already had the necessary support
for that, we just weren't calling it with fshl/fshr as root.
In particular, this allows us to relax some masked funnel shifts
into simple shifts, as shown in the tests.
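A C-level illustration of the masked funnel shift case (hedged; mirrors the IR pattern, not the tests themselves):
```
#include <cstdint>
// fshl(x, y, 8) == (x << 8) | (y >> 24). The mask below discards every
// bit sourced from y, so only x's bits are demanded and the whole
// expression relaxes to a plain 'x << 8'.
uint32_t maskedFunnel(uint32_t x, uint32_t y) {
  uint32_t r = (x << 8) | (y >> 24);
  return r & 0xFFFFFF00u;
}
```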
Patch by Shawn Landden.
Differential Revision: https://reviews.llvm.org/D60660
llvm-svn: 358515
This adds a WithOverflowInst class with a few helper methods to get
the underlying binop, signedness and nowrap type and makes use of it
where sensible. There will be two more uses in D60650/D60656.
The refactorings are all NFC, though I left some TODOs where things
could be improved. In particular we have two places where add/sub are
handled but mul isn't.
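A sketch of how the new class reads at a use site (method names as described above; exact spellings may differ):
```
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;
// Query the underlying binop, signedness and nowrap kind uniformly
// instead of switching over intrinsic IDs.
static void describe(const WithOverflowInst &WO) {
  Instruction::BinaryOps Op = WO.getBinaryOp(); // Add, Sub or Mul
  bool Signed = WO.isSigned();                  // s* vs. u* intrinsic
  unsigned NoWrap = WO.getNoWrapKind();         // NSW or NUW
  (void)Op; (void)Signed; (void)NoWrap;
}
```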
Differential Revision: https://reviews.llvm.org/D60668
llvm-svn: 358512
The checks in `canFoldInAddressingMode` tested for addressing modes that have a
base register but didn't set the `HasBaseReg` flag to true (it's false by
default). This patch fixes that. Although the omission of the flag was
technically incorrect it had no known observable impact, so no tests were
changed by this patch.
Differential Revision: https://reviews.llvm.org/D60314
llvm-svn: 358502
When not optimizing for minimum size (-Oz) we custom lower wide shifts
(SHL_PARTS, SRA_PARTS, SRL_PARTS) instead of expanding to a libcall.
Differential Revision: https://reviews.llvm.org/D59477
llvm-svn: 358498
Summary:
We have a multi-platform thread priority setting function (the last piece
landed with D58683), and I wanted to make this available to the whole LLVM
community; there seem to be other users of such functionality with
portability FIXMEs:
lib/Support/CrashRecoveryContext.cpp
tools/clang/tools/libclang/CIndex.cpp
Reviewers: gribozavr, ioeric
Subscribers: krytarowski, jfb, kristina, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59130
llvm-svn: 358494
Similar to r358421: A StructorIdentifierNode has a Class field which
is read when printing it, but if the StructorIdentifierNode appears in
a template argument then demangleFullyQualifiedSymbolName(), which sets
Class, isn't called. Since StructorIdentifierNodes are always leaf
names, we can just reject them as well.
Found by oss-fuzz.
llvm-svn: 358491
The original commit caused false positives from AddressSanitizer's
use-after-scope checks, which have now been fixed in r358478.
> The code was previously checking that candidates for sinking had exactly
> one use or were a store instruction (which can't have uses). This meant
> we could sink call instructions only if they had a use.
>
> That limitation seemed a bit arbitrary, so this patch changes it to
> "instruction has zero or one use" which seems more natural and removes
> the need to special-case stores.
>
> Differential revision: https://reviews.llvm.org/D59936
llvm-svn: 358483
If there are any intrinsics that cannot be traced back to an alloca, we
might have missed the start of a variable's scope, leading to false
error reports if the variable is poisoned at function entry. Instead, if
there are some intrinsics that can't be traced, fail safe and don't
poison the variables in that function.
Differential revision: https://reviews.llvm.org/D60686
llvm-svn: 358478
The CodeExtractor is not smart enough to compute which basic block is
the entry of a region. Instead it relies on the order of the list
of basic blocks that is handed to it and assumes that the entry
is the first block in the list.
Without the additional debug information, it is hard to understand
why a valid region does not get extracted, because we would miss
that the order of the blocks in the list just doesn't match what the CodeExtractor
wants.
NFC
llvm-svn: 358471
The test in the dependent revision has been fixed for Windows.
Original commit message:
Response file expansion limits the amount of expansion to prevent
potential infinite recursion. However, the current logic assumes that
any argument beginning with @ is a response file, which is not true for
e.g. `-Xlinker -rpath -Xlinker @executable_path/../lib` on Darwin.
Having too many of these non-response file arguments beginning with @
prevents actual response files from being expanded. Instead, limit based
on the number of successful response file expansions, which should still
prevent infinite recursion but also avoid false positives.
Differential Revision: https://reviews.llvm.org/D60631
> llvm-svn: 358452
llvm-svn: 358466
Since non-pow-2 types are going to get split up into multiple loads anyway,
don't do the [SZ]EXTLOAD combine for those, saving us trouble later in
legalization.
llvm-svn: 358458
If LSR splits a critical edge while rewriting phi operands and
the phi node has other pending fixup operands, we need to
update those pending fixups. Otherwise formulae will not be
implemented completely and some instructions will not be eliminated.
llvm.org/PR41445
Differential Revision: https://reviews.llvm.org/D60645
Patch by: Denis Bakhvalov <denis.bakhvalov@intel.com>
llvm-svn: 358457
Under some environments, argv[0] doesn't hold a valid file name, but
sys::fs::getMainExecutable will find the main executable properly.
This patch tweaks the logic to fall back to sys::fs::getMainExecutable
in more situations.
Differential Revision: https://reviews.llvm.org/D60730
llvm-svn: 358455
Response file expansion limits the amount of expansion to prevent
potential infinite recursion. However, the current logic assumes that
any argument beginning with @ is a response file, which is not true for
e.g. `-Xlinker -rpath -Xlinker @executable_path/../lib` on Darwin.
Having too many of these non-response file arguments beginning with @
prevents actual response files from being expanded. Instead, limit based
on the number of successful response file expansions, which should still
prevent infinite recursion but also avoid false positives.
Differential Revision: https://reviews.llvm.org/D60631
llvm-svn: 358452
This is a preparatory patch for D60093. This patch itself is NFC, but while preparing this I noticed and committed a small hoisting change in rL358419.
The basic structure of the new scheme is that we pass around the guard ("the using instruction"), and select an optimal insert point by examining operands at each construction point. This seems conceptually a bit cleaner to start with as it isolates the knowledge about insertion safety at the actual insertion point.
Note that the non-hoisting path is not actually used at the moment. That's not exercised until D60093 is rebased on this one.
Differential Revision: https://reviews.llvm.org/D60718
llvm-svn: 358434
Zexts can be treated like no-op casts when it comes to assessing whether their
removal affects debug info.
Reviewer: aprantl
Differential Revision: https://reviews.llvm.org/D60641
llvm-svn: 358431
Summary: Add DefaultOption flag to CommandLineParser which provides a
default option or alias, but allows users to override it for some
other purpose as needed.
Also, add `-h` as a default alias to `-help`, which can be seamlessly
overridden by applications like llvm-objdump and llvm-readobj which
use `-h` as an alias for other options.
(relanding after revert, r358414)
Added DefaultOptions.clear() to reset().
Reviewers: alexfh, klimek
Reviewed By: klimek
Subscribers: kristina, MaskRay, mehdi_amini, inglorion, dexonsmith, hiraditya, llvm-commits, jhenderson, arphaman, cfe-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D59746
llvm-svn: 358428
The pattern we replaced these with may be too hard to match as demonstrated by
PR41496 and PR41316.
This patch restores the intrinsics and then we can start focusing
on optimizing the intrinsics.
I've mostly reverted the original patch that removed them. Though I modified
the avx512 intrinsics to not have masking built in.
Differential Revision: https://reviews.llvm.org/D60674
llvm-svn: 358427
Summary:
Enable some of the existing size optimizations for cold code under PGO.
A ~5% code size saving in a big internal app under PGO.
The way it gets BFI/PSI is discussed in the RFC thread
http://lists.llvm.org/pipermail/llvm-dev/2019-March/130894.html
Note it doesn't currently touch loop passes.
Reviewers: davidxl, eraman
Reviewed By: eraman
Subscribers: mgorny, javed.absar, smeenai, mehdi_amini, eraman, zzheng, steven_wu, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59514
llvm-svn: 358422
A ConversionOperatorIdentifierNode has a TargetType which is read when
printing it, but if the ConversionOperatorIdentifierNode appears in a
template argument there's nothing that can provide the TargetType.
Normally the COIN is a symbol (leaf) name and takes its TargetType from the
symbol's type, but in a template argument context the COIN can only be
either a non-leaf name piece or a type, and must hence be invalid.
Similar to the COIN check in demangleDeclarator().
Found by oss-fuzz.
llvm-svn: 358421
If we have multiple range checks which can be predicated, hoist the and of the results outside the loop. This minorly cleans up the resulting IR, but the main motivation is as a building block for D60093.
llvm-svn: 358419
Arguments already have a flag to inform backends when they have been split up.
The AArch64 arm64_32 ABI makes use of these on return types too, so that code
emitted for armv7k can be ABI-compliant.
There should be no CodeGen changes yet, just making more information available.
llvm-svn: 358399
The arm64_32 ABI specifies that pointers (despite being 32-bits) should be
zero-extended to 64-bits when passed in registers for efficiency reasons. This
means that the SelectionDAG needs to be able to tell the backend that an
argument was originally a pointer, which is implemented here.
Additionally, some memory intrinsics need to be declared as taking an i8*
instead of an iPTR.
There should be no CodeGen change yet, but it will be triggered when AArch64
backend support for ILP32 is added.
llvm-svn: 358398
This fixes a test I introduced in change D59191 (that added src0 and
src1 modifiers to the v_cndmask instruction for disassembly purposes).
Spotted by David Binderman in bug 41488.
Differential Revision: https://reviews.llvm.org/D60652
Change-Id: I6ac95e66cd84e812ed3359ad57bcd0e13198ba0c
llvm-svn: 358392
Summary:
This patch is part of a patch series to add support for FileCheck
numeric expressions. This specific patch adds a new class to hold
pattern matching global state.
The table holding the values of FileCheck variables constitutes some sort
of global state for the matching phase, yet is passed as a parameter to
all functions using it. This commit creates a new FileCheckPatternContext
class pointed at from FileCheckPattern. While it increases the line
count, it separates local data from global state. Later commits build
on that to add numeric expression global state to that class.
Copyright:
- Linaro (changes up to diff 183612 of revision D55940)
- GraphCore (changes in later versions of revision D55940 and
in new revision created off D55940)
Reviewers: jhenderson, chandlerc, jdenny, probinson, grimar, arichardson, rnk
Subscribers: hiraditya, llvm-commits, probinson, dblaikie, grimar, arichardson, tra, rnk, kristina, hfinkel, rogfer01, JonChesterfield
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60381
llvm-svn: 358390
It didn't handle an empty LHS correctly. If two ranges of the LHS were
contiguous and jointly contained one range of RHS, it could also be incorrect.
DWARFAddressRange::contains can be removed and its tests can be merged into DWARFVerifier::DieRangeInfo::contains
llvm-svn: 358387
Summary:
Factor out findAllocaForValue() from ASan so that we can use it in
MSan to handle lifetime intrinsics.
Reviewers: eugenis, pcc
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60615
llvm-svn: 358380
Summary:
Use KnownBits::computeForAddSub/computeForAddCarry
in SelectionDAG::computeKnownBits when doing value
tracking for addition/subtraction.
This should improve the precision of the known bits,
as we only used to make a simple estimate of known
zeroes. The KnownBits support functions are also
able to deduce bits that are known to be one in the
result.
Reviewers: spatel, RKSimon, nikic, lebedev.ri
Reviewed By: nikic
Subscribers: nikic, javed.absar, lebedev.ri, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60460
llvm-svn: 358372
Other opcodes shouldn't be CSE'd until we can be sure debug info quality won't
be degraded.
This change also improves the IRTranslator so that in most places, but not all,
it creates constants using the MIRBuilder directly instead of first creating a
new destination vreg and then creating a constant. By doing this, the
buildConstant() method can just return the vreg of an existing G_CONSTANT
instead of having to create a COPY from it.
I measured a 0.2% improvement in compile time and a 0.9% improvement in code
size at -O0 on ARM64.
Compile time:
Program base cse diff
test-suite...ark/tramp3d-v4/tramp3d-v4.test 9.04 9.12 0.8%
test-suite...Mark/mafft/pairlocalalign.test 2.68 2.66 -0.7%
test-suite...-typeset/consumer-typeset.test 5.53 5.51 -0.4%
test-suite :: CTMark/lencod/lencod.test 5.30 5.28 -0.3%
test-suite :: CTMark/Bullet/bullet.test 25.82 25.76 -0.2%
test-suite...:: CTMark/ClamAV/clamscan.test 6.92 6.90 -0.2%
test-suite...TMark/7zip/7zip-benchmark.test 34.24 34.17 -0.2%
test-suite :: CTMark/SPASS/SPASS.test 6.25 6.24 -0.1%
test-suite...:: CTMark/sqlite3/sqlite3.test 1.66 1.66 -0.1%
test-suite :: CTMark/kimwitu++/kc.test 13.61 13.60 -0.0%
Geomean difference -0.2%
Code size:
Program base cse diff
test-suite...-typeset/consumer-typeset.test 1315632 1266480 -3.7%
test-suite...:: CTMark/ClamAV/clamscan.test 1313892 1297508 -1.2%
test-suite :: CTMark/lencod/lencod.test 1439504 1423112 -1.1%
test-suite...TMark/7zip/7zip-benchmark.test 2936980 2904172 -1.1%
test-suite :: CTMark/Bullet/bullet.test 3478276 3445460 -0.9%
test-suite...ark/tramp3d-v4/tramp3d-v4.test 8082868 8033492 -0.6%
test-suite :: CTMark/kimwitu++/kc.test 3870380 3853972 -0.4%
test-suite :: CTMark/SPASS/SPASS.test 1434904 1434896 -0.0%
test-suite...Mark/mafft/pairlocalalign.test 764528 764528 0.0%
test-suite...:: CTMark/sqlite3/sqlite3.test 782092 782092 0.0%
Geomean difference -0.9%
Differential Revision: https://reviews.llvm.org/D60580
llvm-svn: 358369
Because CodeGen can't depend on GlobalISel, we need a way to encapsulate the CSE
configs that can be passed between TargetPassConfig and the targets' custom
pass configs. This CSEConfigBase allows targets to create custom CSE configs
which is then used by the GISel passes for the CSEMIRBuilder.
This support will be used in a follow up commit to allow constant-only CSE for
-O0 compiles in D60580.
llvm-svn: 358368
This will ensure IMUL64ri8 is tried before IMUL64ri32. For IMUL32 and IMUL16 the
order doesn't really matter because only the ri8 versions use a predicate. That
automatically gives them priority.
llvm-svn: 358360
We had many tablegen patterns for these instructions. And due to the
commutability of the patterns, tablegen expands them to even more patterns. All
together, VPTESTMD patterns accounted for more than 50K of the 610K isel table.
This had gotten bad when we stopped canonicalizing AND to vXi64. This required
a pattern for every combination of bitcast input type.
This change moves the matching to custom code where it is easier to look through
the bitcasts without being concerned with the specific types.
The test changes are because we are now stricter with one-use checks, as it's
required to make load folding legal. We now require the AND and any BITCAST to
only have a single use. This prevents forming VPTESTM and a VPAND with the same
inputs.
We now support broadcast loads for 128/256 patterns without VLX. We'll widen to
512-bit as usual and still fold the broadcast since the amount of memory read
doesn't change.
There are a few tests that got slightly longer because we are now preferring
load + VPTESTM over XOR+VPCMPEQ for (seteq (load), allzeros). Previously we were
able to share the XOR with multiple VPTESTM instructions.
llvm-svn: 358359
We're better off emitting a single compare + kand rather than a compare for the
other use and a masked compare.
I'm looking into using custom instruction selection for VPTESTM to reduce the
ridiculous number of permutations of patterns in the isel table. Putting a one
use check on all masked compare folding makes load fold matching in the custom
code easier.
llvm-svn: 358358
* Rearrange continue/break
* BBNumbers.lookup(A) -> BBNumbers.find(A)->second
BBNumbers has been computed, thus we can assume the value exists in the predicate.
llvm-svn: 358351
getSetSize returns an APInt that is 1 bit wider. The APInt is typically 65-bit and requires memory allocation. isSizeStrictlySmallerThan and isSizeLargerThan are preferred. The last use of this helper method was removed by rL302385.
llvm-svn: 358347
Summary:
The Linux kernel uses PC-relative mode, so allow that when the code model is
"kernel".
Reviewers: craig.topper
Reviewed By: craig.topper
Subscribers: llvm-commits, kees, nickdesaulniers
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60643
llvm-svn: 358343
As motivated in D60598, this drops support for specifying both NUW and
NSW in makeGuaranteedNoWrapRegion(). None of the users of this function
currently make use of this.
When both NUW and NSW are specified, the exact nowrap region has two
disjoint parts and makeGNWR() returns one of them. This result doesn't
seem to be useful for anything, but makes the semantics of the function
fuzzier.
Differential Revision: https://reviews.llvm.org/D60632
llvm-svn: 358340
As pointed out in D60518 folding mulo(%x, undef) to {undef, undef}
isn't correct. As a correct version of this already exists in
InstructionSimplify (bd8056ef32/lib/Analysis/InstructionSimplify.cpp, L4750-L4757), this is just
dead code though. Drop it together with the mul(%x, 0) -> {0, false}
fold that is also already handled by InstSimplify.
Differential Revision: https://reviews.llvm.org/D60649
llvm-svn: 358339
We know all our values are limited to 64 bits here so we don't need an APInt.
This should save some generated code checking between large and small size.
llvm-svn: 358338
Summary: Add DefaultOption flag to CommandLineParser which provides a
default option or alias, but allows users to override it for some
other purpose as needed.
Also, add `-h` as a default alias to `-help`, which can be seamlessly
overridden by applications like llvm-objdump and llvm-readobj which
use `-h` as an alias for other options.
Reviewers: alexfh, klimek
Reviewed By: klimek
Subscribers: MaskRay, mehdi_amini, inglorion, dexonsmith, hiraditya, llvm-commits, jhenderson, arphaman, cfe-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D59746
llvm-svn: 358337
This enables the simple copy combine that already exists in the CombinerHelper.
However, it exposed a bug in the GISelChangeObserver where it wouldn't clear a
set of MIs to process, and so would end up causing a crash when deleted MIs were
being added to the combiner worklist again.
Differential Revision: https://reviews.llvm.org/D60579
llvm-svn: 358318
Summary:
This ensures that object files will continue to validate as
WebAssembly modules in the presence of bulk memory operations. Engines
that don't support bulk memory operations will not recognize the
DataCount section and will report validation errors, but that's ok
because object files aren't supposed to be run directly anyway.
Reviewers: aheejin, dschuff, sbc100
Subscribers: jgravelle-google, hiraditya, sunfish, rupprecht, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60623
llvm-svn: 358315
This crash was introduced in r358032 as we try to construct an EVT from an MVT
in order to find the register type for the calling conv. Fall back instead of
trying to do this with an invalid MVT coming from i256.
llvm-svn: 358314
Summary:
When inserting a new Def, MemorySSA may have a non-minimal number of Phis.
While inserting, the walk to find the previous definition may clean up minimal Phis.
When the last definition is trivial to obtain, we do not cache it.
It is possible while getting the previous definition for a Def to get two different answers:
- one that was straight-forward to find when walking the first path (a trivial phi in this case), and
- another that follows a cleanup of the trivial phi, it determines it may need additional Phi nodes, it inserts them and returns a new phi in the same position as the former trivial one.
While the Phis added for the second path are all redundant, they are not complete (the walk is only done upwards), and they are not properly cleaned up afterwards.
A way to fix this problem is to cache the straight-forward answer we got on the first walk.
The caching is only kept for the duration of a getPreviousDef call, and for Phis we use TrackingVH, so removing the trivial phi will lead to replacing it with the next dominating phi in the cache.
Resolves PR40749.
Reviewers: george.burgess.iv
Subscribers: jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60634
llvm-svn: 358313
If a shufflevector's mask vector has an element with "undef" then the generic
instruction defining that element register is a G_IMPLICIT_DEF instead of G_CONSTANT.
This fixes the selector to handle this case, and for now assumes that undef just means
zero. In future we'll optimize this case properly.
llvm-svn: 358312
Summary: This brings the backend in line with Clang.
Reviewers: aheejin, dschuff
Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60594
llvm-svn: 358310
makeGuaranteedNoWrapRegion() is actually makeExactNoWrapRegion() as
long as only one of NUW or NSW is specified. This is not obvious from
the current documentation, and some code seems to think that it is
only exact for single-element ranges. Clarify docs and add tests to
be more confident this really holds.
There are currently no users of makeGuaranteedNoWrapRegion() that
pass both NUW and NSW. I think it would be best to drop support for
this entirely and then rename the function to makeExactNoWrapRegion().
Knowing that the no-wrap region is exact is useful, because we can
backwards-constrain values. What I have in mind in particular is
that LVI should be able to constrain values on edges where the
with.overflow overflow flag is false.
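A hedged example against the ConstantRange API as described here:
```
#include "llvm/ADT/APInt.h"
#include "llvm/IR/ConstantRange.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Operator.h"
using namespace llvm;
// The set of LHS values for which 'add LHS, 100' cannot wrap in the
// signed sense. Exactness means the complement is exactly the wrapping
// set, which is what lets values be constrained backwards.
static ConstantRange noSignedWrapRegionForAdd100() {
  ConstantRange RHS(APInt(32, 100));
  return ConstantRange::makeGuaranteedNoWrapRegion(
      Instruction::Add, RHS, OverflowingBinaryOperator::NoSignedWrap);
}
```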
Differential Revision: https://reviews.llvm.org/D60598
llvm-svn: 358305
Summary:
Create a method to forget everything in SCEV.
Add a cl::opt and PassManagerBuilder option to use this in LoopUnroll.
Motivation: Certain Halide applications spend a very long time compiling in forgetLoop, and prefer to forget everything and rebuild SCEV from scratch.
Sample difference in compile time reduction: 21.04 to 14.78 using current ToT release build.
Testcase showcasing this cannot be opensourced and is fairly large.
The option is disabled by default, but it may be desirable to enable it by
default. Evidence in favor (two different runs on different days/ToT states):
File Before (s) After (s)
clang-9.bc 7267.91 6639.14
llvm-as.bc 194.12 194.12
llvm-dis.bc 62.50 62.50
opt.bc 1855.85 1857.53
File Before (s) After (s)
clang-9.bc 8588.70 7812.83
llvm-as.bc 196.20 194.78
llvm-dis.bc 61.55 61.97
opt.bc 1739.78 1886.26
Reviewers: sanjoy
Subscribers: mehdi_amini, jlebar, zzheng, javed.absar, dmgreen, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60144
llvm-svn: 358304
Summary:
After introducing the limit for clobber walking, `walkToPhiOrClobber` would assert that the limit is at least 1 on entry.
The test included triggered that assert.
The callsite in `tryOptimizePhi` making the calls to `walkToPhiOrClobber` is structured like this:
```
while (true) {
if (getBlockingAccess()) { // calls walkToPhiOrClobber
}
for (...) {
walkToPhiOrClobber();
}
}
```
The cleanest fix is to check if the limit was reached inside `walkToPhiOrClobber`, and give an allowance of 1.
This approach does not make any alias() calls (no calls to instructionClobbersQuery), so the performance condition is enforced.
The limit is set back to 0 if not used, as this provides info on the fact that we stopped before reaching a true clobber.
Reviewers: george.burgess.iv
Subscribers: jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60479
llvm-svn: 358303
This fixes a miscompile which was introduced in r356510 (https://reviews.llvm.org/D57372).
The problem is that the original patch removed pointer operands where the load results were demanded, but without considering the legality of the load itself. If the masked.gather had active, but undemanded, lanes, then we could end up creating a load which loaded from an undef address. The result could be a segfault, or, in theory, an arbitrary read from a random memory location into an unused register.
llvm-svn: 358299
When CVP determines that a with.overflow intrinsic cannot overflow,
it currently inserts a simple add/sub. As we already determined that
there can be no overflow, we should add the appropriate NUW/NSW flag.
Differential Revision: https://reviews.llvm.org/D60585
llvm-svn: 358298
This is for D60460. computeForAddSub() essentially already supports
carries because it has to deal with subtractions. This revision
extracts a lower-level computeForAddCarry() function, which allows
computing the known bits for add (carry known zero), sub (carry known
one) and addcarry (carry unknown).
As we don't seem to have any yet, I've added a unit test file for
KnownBits and exhaustive tests for the new computeForAddCarry()
functionality, as well as the existing computeForAddSub() function.
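A small sketch of the new entry point (signature as described above):
```
#include "llvm/Support/KnownBits.h"
using namespace llvm;
// Known bits of LHS + RHS + Carry: carry known zero models add, carry
// known one models sub (as LHS + ~RHS + 1), carry unknown models
// addcarry.
static KnownBits addWithCarry(const KnownBits &LHS, const KnownBits &RHS,
                              const KnownBits &Carry) {
  return KnownBits::computeForAddCarry(LHS, RHS, Carry);
}
```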
Differential Revision: https://reviews.llvm.org/D60522
llvm-svn: 358297
This patch reduces the number of functions in the interface between RuntimeDyld
and RuntimeDyldChecker by combining "GetXAddress" and "GetXContent" functions
into "GetXInfo" functions that return a struct describing both the address and
content. The GetStubOffset function is also replaced with a pair of utilities,
GetStubInfo and GetGOTInfo, that fit the new scheme. For RuntimeDyld both of
these functions will return the same result, but for the new JITLink linker
(https://reviews.llvm.org/D58704) these will provide the addresses of PLT stubs
and GOT entries respectively.
For JITLink's use, a 'got_addr' utility has been added to the rtdyld-check
language, and the syntax of 'got_addr' and 'stub_addr' has been changed: both
functions now take two arguments, a 'stub container name' and a target symbol
name. For llvm-rtdyld/RuntimeDyld the stub container name is the object file
name and section name, separated by a slash. E.g.:
rtdyld-check: *{8}(stub_addr(foo.o/__text, y)) = y
For the upcoming llvm-jitlink utility, which creates stubs on a per-file basis
rather than a per-section basis, the container name is just the file name. E.g.:
jitlink-check: *{8}(got_addr(foo.o, y)) = y
llvm-svn: 358295
The Hexagon Vector Loop Carried Reuse pass was allowing reuse between
two shufflevectors with different masks. The reason is that the masks
are not instruction objects, so the code that checks each operand
just skipped over the operands.
This patch fixes the bug by checking if the operands are the same
when they are not instruction objects. If the objects are not the
same, then the code assumes that reuse cannot occur.
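A sketch of the corrected comparison; the names here are illustrative stand-ins, not the pass's actual code:
```
#include "llvm/IR/Instruction.h"
using namespace llvm;
// Non-instruction operands (e.g. constant shuffle masks) must compare
// equal by identity; instructions keep going through the existing
// matching logic.
static bool operandsMayMatch(const Value *A, const Value *B) {
  if (isa<Instruction>(A) && isa<Instruction>(B))
    return true; // defer to the existing instruction-based checks
  return A == B; // otherwise require identity, else assume no reuse
}
```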
Differential Revision: https://reviews.llvm.org/D60019
llvm-svn: 358292
// shuffle (concat X, undef), (concat Y, undef), Mask -->
// concat (shuffle X, Y, Mask0), (shuffle X, Y, Mask1)
The ARM changes with 'vtrn' and narrowed 'vuzp' are improvements.
The x86 changes look neutral or better. There's one test with an
extra instruction, but that could be reversed for a subtarget with
the right attributes. But by default, we want to avoid the 256-bit
op when possible (in my motivating benchmark, a handful of ymm ops
sprinkled into a sequence of xmm ops are triggering frequency
throttling on Haswell resulting in significantly worse perf).
Differential Revision: https://reviews.llvm.org/D60545
llvm-svn: 358291
Currently combineHorizontalPredicateResult only handles anyof/allof reduction patterns of legal types, which can be tricky to match as type legalization of bools can introduce bitcasts/truncs/extensions.
This patch extends combineHorizontalPredicateResult to recognise vXi1 bool reductions as well and uses the existing combineBitcastvxi1 helper to create the MOVMSK necessary to then compare the signmask result.
This ensures the accuracy of the reduction costs added in D60403 which assume the MOVMSK generation.
Differential Revision: https://reviews.llvm.org/D60610
llvm-svn: 358286
It causes clang to crash while building Chromium. See https://crbug.com/952230
for reproducer.
> The PrologEpilogInserter need to insert a DW_OP_deref_size before
> prepending a memory location expression to an already implicit
> expression to avoid having the existing expression act on the memory
> address instead of the value behind it.
>
> The reason for using DW_OP_deref_size and not plain DW_OP_deref is that
> big-endian targets need to read the right size as simply truncating a
> larger read would yield the wrong result (LSB bytes are not at the lower
> address).
>
> Differential Revision: https://reviews.llvm.org/D59687
llvm-svn: 358281
Summary:
Some llc debug options need a pass name as a parameter.
But if we use the pass name ppc-early-ret, we get the error below:
llc test.ll -stop-after ppc-early-ret
LLVM ERROR: "ppc-early-ret" pass is not registered.
The following pass names hit this "pass is not registered" error:
ppc-ctr-loops
ppc-ctr-loops-verify
ppc-loop-preinc-prep
ppc-toc-reg-deps
ppc-vsx-copy
ppc-early-ret
ppc-vsx-fma-mutate
ppc-vsx-swaps
ppc-reduce-cr-ops
ppc-qpx-load-splat
ppc-branch-coalescing
ppc-branch-select
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D60248
llvm-svn: 358271
Bug: https://bugs.llvm.org/show_bug.cgi?id=41175
In the bug test case the DSE pass is shortening the range of memory that a
memset is working on. A getelementptr is generated so that the new
starting address can be passed to memset. This instruction was not given
a DebugLoc.
To fix the bug, copy the DebugLoc from the memset instruction.
Patch by Orlando Cazalet-Hyams!
Differential Revision: https://reviews.llvm.org/D60556
llvm-svn: 358270
The PrologEpilogInserter need to insert a DW_OP_deref_size before
prepending a memory location expression to an already implicit
expression to avoid having the existing expression act on the memory
address instead of the value behind it.
The reason for using DW_OP_deref_size and not plain DW_OP_deref is that
big-endian targets need to read the right size as simply truncating a
larger read would yield the wrong result (LSB bytes are not at the lower
address).
Differential Revision: https://reviews.llvm.org/D59687
llvm-svn: 358268
the MCDwarf.h include.
This removes 50 transitive dependencies for a modification of
MCDwarf.h in a build of llc for a pair of out of line functions
and reduces the build overhead of 'touch MCDwarf.h' by 15% without
impacting test time of check-llvm.
llvm-svn: 358264
This removes 50 transitive dependencies for a modification of
MCDwarf.h in a build of llc for a single out of line function
and reduces the build overhead by 20% without impacting test
time of check-llvm.
llvm-svn: 358258
If the upper bits of the SHL result aren't used, we might be able to use a narrower shift. For example, on X86 this can turn a 64-bit shift into a 32-bit one, enabling a smaller encoding.
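A C-level illustration:
```
#include <cstdint>
// Only the low 32 bits of the 64-bit shift are consumed, and those
// depend only on the low 32 bits of x, so a 32-bit shift suffices.
uint32_t lowBitsOfShift(uint64_t x) {
  return static_cast<uint32_t>(x << 5);
}
```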
Differential Revision: https://reviews.llvm.org/D60358
llvm-svn: 358257
Summary:
Some llc debug options need a pass name as a parameter.
But if we use the pass name ppc-early-ret, we get the error below:
llc test.ll -stop-after ppc-early-ret
LLVM ERROR: "ppc-early-ret" pass is not registered.
The following pass names hit this "pass is not registered" error:
ppc-ctr-loops
ppc-ctr-loops-verify
ppc-loop-preinc-prep
ppc-toc-reg-deps
ppc-vsx-copy
ppc-early-ret
ppc-vsx-fma-mutate
ppc-vsx-swaps
ppc-reduce-cr-ops
ppc-qpx-load-splat
ppc-branch-coalescing
ppc-branch-select
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D60248
llvm-svn: 358256
This removes 500 transitive dependencies for a modification of
MCDwarf.h in a build of llc for a single out of line function
and reduces the build overhead by more than half without impacting
test time of check-llvm.
llvm-svn: 358255
There are 3 operands of maddld, (add (mul %1, %2), %3) and sometimes
they are constant. If there is a constant operand, it takes an extra li to
materialize the operand, and one more register too. So it's not
profitable to use maddld to optimize the mul-add pattern.
Differential Revision: https://reviews.llvm.org/D60181
llvm-svn: 358253
This special section is named .symtab_shndx, according to gABI Chapter 4
Sections, and the name is used by some other tools. Though the section
type SHT_SYMTAB_SHNDX is what really matters, let's fix the typo
introduced in rL204769 :)
llvm-svn: 358247
Summary:
A lot of the code for printing special cases of operands in this
translation unit are static functions. While I too have suffered many
years of abuse at the hands of C, we should prefer private methods,
particularly when you start passing around *this as your first argument,
which is a code smell.
This will help make generic vs arch specific asm printing easier, as it
brings X86AsmPrinter more in line with other arch's derived AsmPrinters.
We will then be able to more easily move architecture generic code to
the base class, and architecture specific code to the derived classes.
Some other small refactorings while we're here:
- the parameter Op is now consistently OpNo
- add spaces around binary expressions. I know we're not millionaires
but c'mon.
Reviewers: echristo
Reviewed By: echristo
Subscribers: smeenai, hiraditya, llvm-commits, srhines, craig.topper
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60577
llvm-svn: 358236
The isLoopCarriedDep function does not correctly compute loop
carried dependences when the array index offset is negative
or the stride is smaller than the access size.
Patch by Denis Antrushin.
Differential Revision: https://reviews.llvm.org/D60135
llvm-svn: 358233
Same as the other ConstantRange overflow checking methods, but for
unsigned mul. In this case there is no cheap overflow criterion, so
using umul_ov for the implementation.
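A sketch of the underlying primitive:
```
#include "llvm/ADT/APInt.h"
using namespace llvm;
// APInt::umul_ov multiplies and reports unsigned overflow; the new
// ConstantRange query is built on this rather than a cheap criterion.
static bool unsignedMulOverflows(const APInt &A, const APInt &B) {
  bool Overflow = false;
  (void)A.umul_ov(B, Overflow);
  return Overflow;
}
```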
Differential Revision: https://reviews.llvm.org/D60574
llvm-svn: 358228
We currently assume profile hash conflicts will be caught by an upfront
check and we assert for the cases that escape the check. The assumption
is not always true as there are chances of conflict. This patch prints
a warning and skips annotating the function for the escaped cases.
Differential Revision: https://reviews.llvm.org/D60154
llvm-svn: 358225
Loads and store of values with type like <2 x p0> currently don't get imported
because SelectionDAG has no knowledge of pointer types. To leverage the existing
support for vector load/stores, we can bitcast the value to have s64 element
types instead. We do this as a custom legalization.
This patch also adds support for general loads of <2 x s64>, and relaxes some
type conditions on selecting G_BITCAST.
Differential Revision: https://reviews.llvm.org/D60534
llvm-svn: 358221
If the vector setcc has been legalized then we will need to convert a vector boolean of 0 or -1 to a scalar boolean of 0 or 1.
The added test case previously crashed in 32-bit mode by creating a setcc with an i64 condition that type legalization couldn't expand.
llvm-svn: 358218
This patch adds patterns for turning bitcasted atomic load/store into movss/sd.
It also removes the pseudo instructions for atomic RMW fadd, instead just adding isel patterns for folding an atomic load into addss/sd and relying on the new movss/sd store pattern to handle the write part.
This also makes the fadd patterns use VEX and EVEX instructions when AVX or AVX512F are enabled.
Differential Revision: https://reviews.llvm.org/D60394
llvm-svn: 358215
With correct test checks this time.
If we have X87, but not SSE2, we can atomically load an i64 value into the significand of an 80-bit extended precision x87 register using fild. We can then use a fist instruction to convert it back to an i64 integer and store it to a stack temporary. From there we can do two 32-bit loads to get the value into integer registers without worrying about atomicness.
This matches what gcc and icc do for this case and removes an existing FIXME.
llvm-svn: 358214
If we have X87, but not SSE2, we can atomically load an i64 value into the significand of an 80-bit extended precision x87 register using fild. We can then use a fist instruction to convert it back to an i64 integer and store it to a stack temporary. From there we can do two 32-bit loads to get the value into integer registers without worrying about atomicness.
This matches what gcc and icc do for this case and removes an existing FIXME.
Differential Revision: https://reviews.llvm.org/D60156
llvm-svn: 358211
RISCVMCCodeEmitter::expandAddTPRel asserts that the second operand must be
x4/tp. As we are not currently checking this in the RISCVAsmParser, the assert
is easy to trigger due to wrong assembly input.
This patch does a late check of this constraint.
An alternative could be using a singleton register class for x4/tp similar to
the current one for sp. Unfortunately it does not result in a good diagnostic.
Because add is an overloaded mnemonic, if no matching is possible, the
diagnostic of the first failing alternative seems to be used as the diagnostic
itself. This means that in this case %tprel_add is diagnosed as an invalid
operand (because the real add instruction only has 3 operands).
Differential Revision: https://reviews.llvm.org/D60528
llvm-svn: 358183
Summary:
A bug/typo in Output::scalarString caused us to round-trip a StringRef
through a const char *. This meant that any strings with embedded nuls
were unintentionally cut short at the first such character. (It also
could have caused accidental buffer overruns, but it seems that all
StringRefs coming into this functions were formed from null-terminated
strings.)
This patch fixes the bug and adds an appropriate test.
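The bug in miniature (an illustration, not the YAML I/O code itself):
```
#include <string>
// Round-tripping through a C string is strlen-based, so an embedded
// NUL silently truncates the value: "a\0b" comes back as "a".
std::string roundTrip(const std::string &S) {
  const char *C = S.c_str();
  return std::string(C);
}
```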
Reviewers: sammccall, jhenderson
Subscribers: kristina, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60505
llvm-svn: 358176
// bo (build_vec ...undef, x, undef...), (build_vec ...undef, y, undef...) -->
// build_vec ...undef, (bo x, y), undef...
The lifetime of the nodes in these examples is different for variables versus constants,
but they are all build vectors briefly, so I'm proposing to catch them in this form to
handle all of the leading examples in the motivating test file.
Before we have build vectors, we might have insert_vector_element. After that, we might
have scalar_to_vector and constant pool loads.
It's going to take more work to ensure that FP vector operands are getting simplified
with undef elements, so this transform can apply more widely. In a non-loose FP environment,
we are likely simplifying FP elements to NaN values rather than undefs.
We also need to allow more opcodes down this path. Eg, we don't handle FP min/max flavors
yet.
Differential Revision: https://reviews.llvm.org/D60514
llvm-svn: 358172
This is a follow-up patch to D60504 to further improve
performance issues in computeKnownBitsFromAssume.
The patch is NFC, but may improve compile-time performance
if the compiler isn't clever enough to do the optimization
itself.
llvm-svn: 358163
Because gp = sdata_start_address + 0x800, gp with a signed twelve-bit offset
can cover most of the small data section: the offset range [-2048, 2047]
spans the 4 KiB starting at sdata_start_address. Linker relaxation can then
rewrite multi-instruction data accesses into single instructions that use gp
with a signed twelve-bit offset.
Differential Revision: https://reviews.llvm.org/D57493
llvm-svn: 358150
Summary:
Make DW_LNS_copy set the discriminator register to 0, to conform to
DWARF 4 & 5: "Then it sets the discriminator register to 0, and sets the
basic_block, prologue_end and epilogue_begin registers to false."
Because all of DW_LNE_end_sequence, DW_LNS_copy, and special opcodes reset
discriminator to 0, we can move discriminator=0 to appendRowToMatrix.
Also, make DW_LNS_copy print before appending the row, as it is similar
to an address+=0,line+=0 special opcode, which prints before appending
the row.
Reviewers: dblaikie, probinson, aprantl
Reviewed By: dblaikie
Subscribers: danielcdh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60364
llvm-svn: 358148
If the ObjectSizeOffsetEvaluator fails to fold the object size call, then it may
litter some unused instructions in the function. When done repeatedly in
InstCombine, this results in an infinite loop. Fix this by tracking the set of
instructions that were inserted, then removing them on failure.
rdar://49172227
Differential revision: https://reviews.llvm.org/D60298
llvm-svn: 358146
foldMaskedShiftToScaledMask tries to reorder an 'and' and a 'shl' to enable the shl to fold into an LEA. But if there is an any_extend between them it doesn't work.
This patch modifies the code to look through any_extend from i32 to i64 when the and mask only uses bits that weren't from the extended part.
This will prevent a regression from D60358 caused by 64-bit SHL being narrowed to 32-bits when their upper bits aren't demanded.
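Roughly the shape being matched, as C (hedged illustration):
```
#include <cstdint>
// An i32 'and' whose mask only uses the original 32 bits, any-extended
// to i64 and then shifted; looking through the extend lets the shl
// still fold into the LEA's scaled-index form.
uint64_t scaledIndex(uint32_t x) {
  uint32_t masked = x & 0x7; // mask stays within the low bits
  uint64_t wide = masked;    // the extend between 'and' and 'shl'
  return wide << 3;          // foldable as an LEA scale
}
```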
Differential Revision: https://reviews.llvm.org/D60532
llvm-svn: 358139
Many of our instructions have both a _Int form used by intrinsics and a form
used by other IR constructs. In the EVEX space the _Int versions usually cover
all the capabilities include broadcasting and rounding. While the other version
only covers simple register/register or register/load forms. For this reason
in EVEX, the non intrinsic form is usually marked isCodeGenOnly=1.
In the VEX encoding space we were less consistent, but usually the _Int version
was the isCodeGenOnly version.
This commit makes the VEX instructions match the EVEX instructions. This was
done by manually studying the AsmMatcher table so its possible I missed some
cases, but we should be closer now.
I'm thinking about using the isCodeGenOnly bit to simplify the EVEX2VEX
tablegen code that disambiguates the _Int and non _Int versions. Currently it
checks register class sizes and Record the memory operands come from. I have
some other changes I was looking into for D59266 that may break the memory check.
I had to make a few scheduler hacks to keep the _Int versions from being treated
differently than the non _Int version.
Differential Revision: https://reviews.llvm.org/D60441
llvm-svn: 358138
These ifs were ensuring we don't have to handle types larger than 64 bits probably because we use getZExtValue in several places below them.
None of the callers of this code pass types larger than 64-bits so we can just assert instead of branching in release code.
I've also moved them earlier since we're just looking through operations that don't affect bit width.
This is prep work for some refactoring I plan to do to the (and (shl)) handling code.
llvm-svn: 358123
Summary:
The Modifier memory operands is used in 2 cases of memory references
(H & P ExtraCodes). Rather than pass around the likely nullptr Modifier,
refactor the handling of the Modifier out from printOperand().
The refactorings in this patch:
- Don't forward declare printOperand, move its definition up.
- The diff makes it look like there's a change to printPCRelImm
(narrator: there's not).
- Create printModifiedOperand()
- Move logic for Modifier to there from printOperand
- Use printModifiedOperand in 3 call sites that actually create
Modifiers.
- Remove now unused Modifier parameter from printOperand
- Remove default parameter from printLeaMemReference as it only has 1
call site that explicitly passes a parameter.
- Remove default parameter from printMemReference, make the lone call
site explicitly pass nullptr.
- Drop Modifier parameter from printIntelMemReference, as Intel style
memory references don't support the Modifiers in question.
This will allow future changes to printOperand() to make it a pure virtual
method on the base AsmPrinter class, allowing for more generic handling
of some architecture generic constraints. X86AsmPrinter was the only
derived class of AsmPrinter to have additional parameters on its
printOperand function.
Reviewers: craig.topper, echristo
Reviewed By: echristo
Subscribers: hiraditya, llvm-commits, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60526
llvm-svn: 358122
The problem is that one can't concatenate an empty list
(implied all-ones) with a non-empty list here. The result
will be the non-empty list, and it won't match the length
of the ExePorts list.
The problems begin when LoadRes != 1 here,
which is the case in PdWriteResYMMPair,
and more importantly, I think it will be the case for PdWriteResExPair.
llvm-svn: 358118
Summary:
```
``!listsplat(a, size)``
A list value that contains the value ``a`` ``size`` times.
Example: ``!listsplat(0, 2)`` results in ``[0, 0]``.
```
I plan to use this in X86ScheduleBdVer2.td for LoadRes handling.
This is a little bit controversial because unlike every other binary operator
the types aren't identical.
Reviewers: stoklund, javed.absar, nhaehnle, craig.topper
Reviewed By: javed.absar
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60367
llvm-svn: 358117
Certain optimisations from ConstantHoisting and CGP rely on Selection DAG not
seeing through to the constant in other blocks. Revert this patch while we come
up with a better way to handle that.
I will try to follow this up with some better tests.
llvm-svn: 358113
This fixes a regression from https://reviews.llvm.org/D60354. We used to
  SymbolNode *Symbol = demangleEncodedSymbol(MangledName, QN);
  if (Symbol) {
    Symbol->Name = QN;
  }
but changed that to
  SymbolNode *Symbol = demangleEncodedSymbol(MangledName, QN);
  if (Error)
    return nullptr;
  Symbol->Name = QN;
and one branch somewhere returned a nullptr without setting Error.
Looking at the code changed in r340083 and r340710 that branch looks
like a remnant from an earlier attempt to demangle RTTI descriptors
that has since been rewritten -- so just remove this branch. It
shouldn't change behavior for correctly mangled symbols.
llvm-svn: 358112
Call lowering should use this directly instead of going through the
EVT version, but more work is needed to deal with this (mostly the
passing of the IR type pointer instead of the relevant properties in
ArgInfo).
llvm-svn: 358111
This patch teaches getTestBitOperand to look through ANY_EXTENDs when the extended bits aren't used. The test case changed here is based on what D60358 did to test16 in tbz-tbnz.ll. So this patch will avoid that regression.
Differential Revision: https://reviews.llvm.org/D60482
llvm-svn: 358108
Summary:
The InlineAsm::AsmDialect is only required for X86; no other architecture
makes use of it, and as such it gets passed around between arch-specific
and general code while being unused for all architectures but X86.
Since the AsmDialect is queried from a MachineInstr, which we also pass
around, remove the additional AsmDialect parameter and query for it deep
in the X86AsmPrinter only when needed/as late as possible.
This refactor should help later planned refactors to AsmPrinter, as this
difference in the X86AsmPrinter makes it harder to make AsmPrinter more
generic.
Reviewers: craig.topper
Subscribers: jholewinski, arsenm, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, javed.absar, sbc100, jgravelle-google, eraman, hiraditya, aheejin, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, llvm-commits, peter.smith, srhines
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60488
llvm-svn: 358101
Following D60483 and D60497, this adds support for AlwaysOverflows
handling for ssubo. This is the last case we can handle right now.
Differential Revision: https://reviews.llvm.org/D60518
llvm-svn: 358100
ssubo X, C is equivalent to saddo X, -C. Make the transformation in
InstCombine and allow the logic implemented for saddo to fold prior
usages of add nsw or sub nsw with constants.
Patch by Dan Robertson.
Differential Revision: https://reviews.llvm.org/D60061
llvm-svn: 358099
This patch changes the order of pattern matching by first testing
a compare instruction's predicate, before doing the pattern
match for the whole expression tree.
Patch by Paul Walker.
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D60504
llvm-svn: 358097
Summary: The fp16 scalar version of facge and facgt requires custom pattern matching, as the result type is not the same width as the operands.
Reviewers: olista01, javed.absar, pbarrio
Reviewed By: javed.absar
Subscribers: kristof.beyls, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60212
llvm-svn: 358083
Summary:
When calculating the debug value history, DbgEntityHistoryCalculator
would only keep track of register clobbering for the latest debug value
per inlined entity. This meant that preceding register-described debug
value fragments would live on until the next overlapping debug value,
ignoring any potential clobbering. This patch amends
DbgEntityHistoryCalculator so that it keeps track of all registers that
an inlined entity's currently live debug values are described by.
The DebugInfo/COFF/pieces.ll test case has had to be changed since
previously a register-described fragment would incorrectly outlive its
basic block.
The parent patch D59941 is expected to increase the coverage slightly,
as it makes sure that location list entries are inserted after clobbered
fragments, and this patch is expected to decrease it, as it stops
preceding register-described from living longer than they should. All in
all, this patch and the preceding patch has a negligible effect on the
output from `llvm-dwarfdump -statistics' for a clang-3.4 binary built
using the RelWithDebInfo build profile. "Scope bytes covered" increases
by 0.5%, and "variables with location" increases from 2212083 to
2212088, but it should improve the accuracy quite a bit.
This fixes PR40283.
Reviewers: aprantl, probinson, dblaikie, rnk, bjope
Reviewed By: aprantl
Subscribers: llvm-commits
Tags: #debug-info, #llvm
Differential Revision: https://reviews.llvm.org/D59942
llvm-svn: 358073
Summary:
Currently the DbgValueHistorymap only keeps track of clobbered registers
for the last debug value that it has encountered. This could lead to
preceding register-described debug values living on longer in the
location lists than they should. See PR40283 for an example. This
patch does not introduce tracking of multiple registers, but changes
the DbgValueHistoryMap structure to allow for that in a follow-up
patch. This patch is not NFC, as it at least fixes two bugs in
DwarfDebug (both are covered in the new clobbered-fragments.mir test):
* If a debug value was clobbered (its End pointer set), the value would
still be added to OpenRanges, meaning that the succeeding location list
entries could potentially contain stale values.
* If a debug value was clobbered, and there were non-overlapping
fragments that were still live after the clobbering, DwarfDebug would
not create a location list entry starting directly after the
clobbering instruction. This meant that the location list could have
a gap until the next debug value for the variable was encountered.
Before this patch, the history map was represented by <Begin, End>
pairs, where a new pair was created for each new debug value. When
dealing with partially overlapping register-described debug values, such
as in the following example:
DBG_VALUE $reg2, $noreg, !1, !DIExpression(DW_OP_LLVM_fragment, 32, 32)
[...]
DBG_VALUE $reg3, $noreg, !1, !DIExpression(DW_OP_LLVM_fragment, 64, 32)
[...]
$reg2 = insn1
[...]
$reg3 = insn2
the history map would then contain the entries `[<DV1, insn1>, <DV2, insn2>]`.
This would leave it up to the users of the map to be aware of
the relative order of the instructions, which e.g. could make
DwarfDebug::buildLocationList() needlessly complex. Instead, this patch
makes the history map structure monotonically increasing by dropping the
End pointer, and replacing that with explicit clobbering entries in the
vector. Each debug value has an "end index", which if set, points to the
entry in the vector that ends the debug value. The ending entry can
either be an overlapping debug value, or an instruction which clobbers
the register that the debug value is described by. The ending entry's
instruction can thus either be excluded or included in the debug value's
range. If the end index is not set, the debug value that the entry
introduces is valid until the end of the function.
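A rough sketch of the new shape (hypothetical names and types, not the exact
LLVM code):

#include <cstddef>
#include <vector>

// An entry either introduces a debug value or clobbers a preceding one.
// A debug value entry optionally records the index of the entry that ends
// it; if no end index is set, the value lives to the end of the function.
struct Entry {
  enum Kind { DbgValue, Clobber } EntryKind;
  const void *Inst; // the defining or clobbering instruction
  static constexpr std::size_t NoEntry = static_cast<std::size_t>(-1);
  std::size_t EndIndex = NoEntry;
};

// Entries are only appended, so the vector order implies the relative
// instruction order and users no longer need to reconstruct it.
using InlinedEntityHistory = std::vector<Entry>;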
Changes to test cases:
* DebugInfo/X86/pieces-3.ll: The range of the first DBG_VALUE, which
describes that the fragment (0, 64) is located in RDI, was
incorrectly ended by the clobbering of RAX, which the second
(non-overlapping) DBG_VALUE was described by. With this patch we
get a second entry that only describes RDI after that clobbering.
* DebugInfo/ARM/partial-subreg.ll: This test seems to indicate a bug
in LiveDebugValues that is caused by it not being aware of fragments.
I have added some comments in the test case about that. Also, before
this patch DwarfDebug would incorrectly include a register-described
debug value from a preceding block in a location list entry.
Reviewers: aprantl, probinson, dblaikie, rnk, bjope
Reviewed By: aprantl
Subscribers: javed.absar, kristof.beyls, jdoerfert, llvm-commits
Tags: #debug-info, #llvm
Differential Revision: https://reviews.llvm.org/D59941
llvm-svn: 358072
Make it possible to TableGen code for FCONSTS and FCONSTD.
We need to make two changes to the TableGen descriptions of vfp_f32imm
and vfp_f64imm respectively:
* add GISelPredicateCode to check that the immediate fits in 8 bits;
* extract the SDNodeXForms into separate definitions and create a
GISDNodeXFormEquiv and a custom renderer function for each of them.
There's a lot of boilerplate to get the actual value of the immediate,
but it basically just boils down to calling ARM_AM::getFP32Imm or
ARM_AM::getFP64Imm.
llvm-svn: 358063
Summary:
In an upcoming commit the history map will be changed so that it
contains explicit entries for instructions that clobber preceding debug
values, rather than Begin-End range pairs, so generalize the name to
"Entry".
Also, prefix the iterator variable names in buildLocationList() with
"E". In an upcoming commit the entry will have query functions such as
"isD(e)b(u)gValue", which could at a glance make one confuse it for
iterations over MachineInstrs, so make the iterator names a bit more
distinct to avoid that.
Reviewers: aprantl
Reviewed By: aprantl
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59939
llvm-svn: 358060
Summary:
Replace use of std::pair by creating a class for the debug value
instruction ranges instead. This is a preparatory refactoring for
improving handling of clobbered fragments.
In an upcoming commit the Begin pointer will become a PointerIntPair, so
it will be cleaner to have a getter for that.
Reviewers: aprantl
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59938
llvm-svn: 358059
This is helpful to measure the impact of D60125 on maintaining
topological orders.
Reviewers: MatzeB, atrick, efriedma, niravd
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D60187
llvm-svn: 358058
1. Use computed VF for stress testing.
2. If the computed VF does not produce vector code (VF smaller than 2), force VF to be 4.
3. Test vectorization of i64 data on AArch64 to make sure we generate VF != 4 (on X86 that was already tested on AVX).
Patch by Francesco Petrogalli <francesco.petrogalli@arm.com>
Differential Revision: https://reviews.llvm.org/D59952
llvm-svn: 358056
We want the last row whose address is less than or equal to Address.
This can be computed as upper_bound - 1, which is simpler than
lower_bound followed by skipping equal rows in a loop.
Since FirstRow (LowPC) does not satisfy the predicate (OrderByAddress)
while LastRow-1 (HighPC) does, we can decrease the
search range by two, i.e.
upper_bound [FirstRow,LastRow) = upper_bound [FirstRow+1,LastRow-1)
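As a sketch in plain C++ (hypothetical Row type, not the DWARF code):

#include <algorithm>
#include <vector>

struct Row { unsigned long Address; };

// Find the last row whose Address is <= Addr in a sorted table.
// upper_bound returns the first row with Address > Addr, so the row
// just before it (if any) is the one we want.
const Row *findRow(const std::vector<Row> &Rows, unsigned long Addr) {
  auto It = std::upper_bound(
      Rows.begin(), Rows.end(), Addr,
      [](unsigned long A, const Row &R) { return A < R.Address; });
  return It == Rows.begin() ? nullptr : &*(It - 1);
}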
llvm-svn: 358053
Check AlwaysOverflow condition for usubo. The implementation is the
same as the existing handling for uaddo and umulo. Handling for saddo
and ssubo will follow (smulo doesn't have the necessary ValueTracking
support).
Differential Revision: https://reviews.llvm.org/D60483
llvm-svn: 358052
metadata into a module flag in the auto-upgrader and make the ARC
contract pass read the marker as a module flag.
This is needed to fix a bug where ARC contract wasn't inserting the
retainRV marker when LTO was enabled, which caused objects returned
from a function to be auto-released.
rdar://problem/49464214
Differential Revision: https://reviews.llvm.org/D60303
llvm-svn: 358047
Years ago I moved this to an InstAlias using VR128H/VR128L. But now that we support the {vex3} pseudo prefix, we need to block the optimization when it is set, to match gas behavior.
llvm-svn: 358046
Summary:
Obviously, the newly built MIs (sethi+add or sethi+xor+add) for constructing a large
offset should be inserted before the newly created MI that stores the even
register into memory. So the insertion position should be *StMI instead of II.
Before the fix:
std %f0, [%g1+80]
sethi 4, %g1 <<<
add %g1, %sp, %g1 <<< these two instructions should be put before "std %f0, [%g1+80]".
sethi 4, %g1
add %g1, %sp, %g1
std %f2, [%g1+88]
After the fix:
sethi 4, %g1
add %g1, %sp, %g1
std %f0, [%g1+80]
sethi 4, %g1
add %g1, %sp, %g1
std %f2, [%g1+88]
Reviewers: venkatra, jyknight
Reviewed By: jyknight
Subscribers: jyknight, fedor.sergeev, jrtc27, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60397
llvm-svn: 358042
The EVEX versions are ambiguous with the VEX versions based on operands alone so we had explicitly dropped
them from the AsmMatcher table. Unfortunately, when we add them they incorrectly show up in the table before
their VEX counterparts. This is different from how the prioritization normally works.
To fix this we have to explicitly reject the instructions unless the {evex} prefix has been seen.
llvm-svn: 358041
Scalar VEX/EVEX instructions don't use the L bit and don't look at it for decoding either.
So we should ignore it in our disassembler.
The missing instructions here were found by grepping the raw tablegen class definitions in
the tablegen debug output.
llvm-svn: 358040
Summary: Deprecate the existing accessors for the "current debug location" of an IRBuilder. The setter could not handle being reset to NULL, and the getter would create bogus metadata if the NULL location was returned. Provide direct metadata-based accessors instead.
Reviewers: whitequark, deadalnix
Reviewed By: whitequark
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60484
llvm-svn: 358039
Summary: Provide direct accessors to supplement LLVMSetInstDebugLocation. In addition, properly accept and return the NULL location. The old accessors provided no way to do this, so the current debug location cannot currently be cleared.
Reviewers: whitequark, deadalnix
Reviewed By: whitequark
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60481
llvm-svn: 358038
Summary: This brings us to full feature parity with the old API, so I've deprecated it and updated the tests. I'll do a follow-up patch to do some more cleanup and documentation work in this header.
Reviewers: whitequark, deadalnix
Reviewed By: whitequark
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60407
llvm-svn: 358037
I was attempting to convert mnemonics to lower case after processing a pseudo prefix. But the ParseOperands just hold a StringRef for tokens, so there is nowhere to allocate the memory.
Add FIXMEs for the lower case issue which also exists in the prefix parsing code.
llvm-svn: 358036
The selection for G_ICMP is unfortunately not currently importable from SDAG
due to the use of custom SDNodes. To support this, this selection method has an
opcode table which has been generated by a script, indexed by various
instruction properties. Ideally, in the future, we will have GISel-native selection
patterns that we can write in tablegen to improve on this.
For selection of some types we also need support for G_ASHR and G_SHL which are
generated as a result of legalization. This patch also adds support for them,
generating the same code as SelectionDAG currently does.
Differential Revision: https://reviews.llvm.org/D60436
llvm-svn: 358035
required to be passed as different register types. E.g. <2 x i16> may need to
be passed as a larger <2 x i32> type, so formal arg lowering needs to be able
to truncate it back. Likewise, when dealing with returns of these types, they
need to be widened back in the appropriate way.
Differential Revision: https://reviews.llvm.org/D60425
llvm-svn: 358032
These can be used to force the encoding used for instructions.
{vex2} will fail if the instruction is not VEX encoded, but otherwise won't do anything since we prefer vex2 when possible. Might need to skip use of the _REV MOV instructions for this too, but I haven't done that yet.
{vex3} will force the instruction to use the 3 byte VEX encoding or fail if there is no VEX form.
{evex} will force the instruction to use the EVEX version or fail if there is no EVEX version.
Differential Revision: https://reviews.llvm.org/D59266
llvm-svn: 358029
This lines up with what we do for regular subtract and it matches up better with X86 assumptions in isel patterns that add with immediate is more canonical than sub with immediate.
Differential Revision: https://reviews.llvm.org/D60020
llvm-svn: 358027
This reverts commit 1383a91689.
sdiv-canonicalize.ll fails after this revision. The fold needs to be
moved outside the branch handling constant operands. However when this
is done there are further test changes, so I'm reverting this in the
meantime.
llvm-svn: 358026
Change the code to always handle the unsigned+signed cases together
with the same basic structure for add/sub/mul. The simple folds are
always handled first and then the ValueTracking overflow checks are
used.
llvm-svn: 358025
This is the same change as D60420 but for signed sub rather than
signed add: Range information is intersected into the known bits
result, allows to detect more no/always overflow conditions.
Differential Revision: https://reviews.llvm.org/D60469
llvm-svn: 358020
One of the out-of-tree targets has regressed with this patch. Reverting
it for now and letting liveness be fully reconstructed, in case the pass
is used after the LIS is created, to resolve the regression.
Differential Revision: https://reviews.llvm.org/D60466
llvm-svn: 358015
This is D59386 for the signed add case. The computeConstantRange()
result is now intersected into the existing known bits information,
allowing to detect additional no-overflow/always-overflow conditions
(though the latter isn't used yet).
This (finally...) covers the motivating case from D59071.
Differential Revision: https://reviews.llvm.org/D60420
llvm-svn: 358014
Similar to:
rL358005
Forego folding arbitrary vector constants to fix a possible miscompile bug.
We can enhance the transform if we do want to handle the more complicated
vector case.
llvm-svn: 358013
In a sorted list of non-overlapping [LowPC,HighPC) ranges, locating an address with
upper_bound on HighPC is simpler than lower_bound on LowPC.
llvm-svn: 358012
// 0 - (X sdiv C) -> (X sdiv -C) provided the negation doesn't overflow.
This fold has been around for many years and nobody noticed the potential
vector miscompile from overflow until recently...
So it seems unlikely that there's much demand for a vector sdiv optimization
on arbitrary vector constants, so just limit the matching to splat constants
to avoid the possible bug.
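A hedged sketch of the splat-only restriction using PatternMatch (not the
committed code):

#include "llvm/IR/PatternMatch.h"
using namespace llvm;
using namespace llvm::PatternMatch;

// m_APInt only matches scalar constants and splat vector constants, so
// arbitrary (non-splat) vector constants no longer reach the fold. We
// also make sure negating C cannot overflow.
static bool canNegateSDivConstant(Value *Op) {
  Value *X;
  const APInt *C;
  return match(Op, m_SDiv(m_Value(X), m_APInt(C))) &&
         !C->isMinSignedValue();
}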
Differential Revision: https://reviews.llvm.org/D60426
llvm-svn: 358005
This patch factors out mappings of scalar maths functions to their vector
counterparts from TargetLibraryInfo.cpp to a separate VecFuncs.def file. Such
mappings are currently available for Accelerate framework, and SVML library.
This is in support of the follow-up: https://reviews.llvm.org/D59881
Patch by pjeeva01
Differential revision: https://reviews.llvm.org/D60211
llvm-svn: 358001
Summary:
Use optimized hashing while writing the time trace, joining two hashes into one.
This is used for the -ftime-trace option.
Reviewers: rnk, takuto.ikuta
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60404
llvm-svn: 357998
When bitcasting from a source op to a larger bitwidth op, split the demanded bits and OR them on top of one another and demand those merged bits in the SimplifyDemandedBits call on the source op.
llvm-svn: 357992
Summary:
With MergeValues() removed, amend DebugLocEntry's constructor so that it
takes multiple values rather than a single one, and keep non-fragment values
in OpenRanges, as this allows some cleanup of the code in
buildLocationList().
Reviewers: aprantl, dblaikie, loladiro
Reviewed By: aprantl
Subscribers: hiraditya, llvm-commits
Tags: #debug-info, #llvm
Differential Revision: https://reviews.llvm.org/D59303
llvm-svn: 357988
Summary:
The MergeValues() function would try to merge two entries if they shared
the same beginning label. Having the same beginning label means that the
former entry's range would be empty; however, after D55919 we no longer
create entries for empty ranges, so we can no longer land in a situation
where that check in MergeValues would succeed. Instead, the "merging" is
done by keeping the live values from the preceding empty ranges in
OpenRanges, and adding them to the first non-empty range.
Reviewers: aprantl, dblaikie, loladiro
Reviewed By: aprantl
Subscribers: llvm-commits
Tags: #debug-info, #llvm
Differential Revision: https://reviews.llvm.org/D59301
llvm-svn: 357974
The composite existed to simplify some other tablegen code and not really in an
important way. Remove the combined field and just calculate the vector size
using two ifs.
llvm-svn: 357972
The instruction's documentation lists this as W0 for the VEX encoding. But there's a
footnote mentioning that VEX.W is ignored in 64-bit mode. And the main VEX
encoding description says the VEX.W bit is ignored for instructions that are
equivalent to a legacy SSE instruction that uses REX.W to select a GPR which
would apply here.
By making this match EVEX we can remove a special case of allowing EVEX2VEX to
turn an EVEX.WIG instruction into VEX.W0.
llvm-svn: 357971
Switch part of the computeOverflowForSignedAdd() implementation to
use Range.isAllNegative() rather than KnownBits.isNegative() and
similar. They do the same thing, but using the ConstantRange methods
allows dropping the KnownBits variables more easily in D60420.
llvm-svn: 357969
It's been on in Android for a while without causing problems, so it's time
to make it the default and remove the flag.
Differential Revision: https://reviews.llvm.org/D60355
llvm-svn: 357960
This changes the operand type from v4f32/v2f64 to iPTR which seems more correct. But that doesn't seem to do anything other than change the comments in X86GenDAGISel.inc. Probably because we use a ComplexPattern to do the matching so there's no autogenerated code to change.
llvm-svn: 357959
A more general canonicalization between fdiv and fmul would not
handle this case because that would have to be limited by uses
to prevent 2 values from becoming 3 values:
(x/y) * (x/y) --> (x*x) / (y*y)
(But we probably should still have that limited -- but more general --
canonicalization independently of this change.)
llvm-svn: 357943
For functions whose callers don't check that enough input is present,
add checks at the start of the function that enough input is there and
set Error otherwise.
For functions that return AST objects, return nullptr instead of
incomplete AST objects with nullptr fields if an error occurred during
the function.
Introduce a new function demangleDeclarator() for the sequence
demangleFullyQualifiedSymbolName(); demangleEncodedSymbol() and
use it in the two places that had this sequence. Let this new function
check that ConversionOperatorIdentifiers have a valid TargetType.
Some of the bad inputs found by oss-fuzz, others by inspection.
Differential Revision: https://reviews.llvm.org/D60354
llvm-svn: 357936
Returning SDValue() makes the caller think custom lowering was unsuccessful and then it will fall back to trying to expand the original node. This expanded code will end up with no users and be pruned later, but creating it was unnecessary work.
Instead return a MERGE_VALUES with all the results so the caller knows something changed. The caller can handle the replacements.
For one of the cases I had to use UNDEF as a dummy value for a result we know is unused. This should get pruned later.
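As a sketch (hypothetical helper, not the committed diff), the lowering now
ends with something like:

#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// Package a known-unused mask result (as UNDEF) together with the chain
// into one node, so the generic legalizer sees that lowering succeeded
// and can wire up the replacements itself.
static SDValue lowerScatterResults(SelectionDAG &DAG, const SDLoc &DL,
                                   EVT MaskVT, SDValue Chain) {
  SDValue Ops[] = {DAG.getUNDEF(MaskVT), Chain};
  return DAG.getMergeValues(Ops, DL);
}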
llvm-svn: 357935
COMMON blocks are a feature of Fortran that has no direct analog in C languages, but they are similar to data sections in assembly language programming. A COMMON block is a named area of memory that holds a collection of variables. Fortran subprograms may map the COMMON block memory area to their own, possibly distinct, non-empty list of variables. A Fortran COMMON block might look like the following example.
COMMON /ALPHA/ I, J
For this construct, the compiler generates a new scope-like DI construct (!DICommonBlock) into which variables (see I, J above) can be placed. As the common block implies a range of storage with global lifetime, the !DICommonBlock refers to a !DIGlobalVariable. The Fortran variables that comprise the COMMON block are also linked via metadata to offsets within the global variable that stands for the entire common block.
@alpha_ = common global %alphabytes_ zeroinitializer, align 64, !dbg !27, !dbg !30, !dbg !33
!14 = distinct !DISubprogram(…)
!20 = distinct !DICommonBlock(scope: !14, declaration: !25, name: "alpha")
!25 = distinct !DIGlobalVariable(scope: !20, name: "common alpha", type: !24)
!27 = !DIGlobalVariableExpression(var: !25, expr: !DIExpression())
!29 = distinct !DIGlobalVariable(scope: !20, name: "i", file: !3, type: !28)
!30 = !DIGlobalVariableExpression(var: !29, expr: !DIExpression())
!31 = distinct !DIGlobalVariable(scope: !20, name: "j", file: !3, type: !28)
!32 = !DIExpression(DW_OP_plus_uconst, 4)
!33 = !DIGlobalVariableExpression(var: !31, expr: !32)
The DWARF generated for this is as follows.
DW_TAG_common_block:
DW_AT_name: alpha
DW_AT_location: @alpha_+0
DW_TAG_variable:
DW_AT_name: common alpha
DW_AT_type: array of 8 bytes
DW_AT_location: @alpha_+0
DW_TAG_variable:
DW_AT_name: i
DW_AT_type: integer*4
DW_AT_location: @Alpha+0
DW_TAG_variable:
DW_AT_name: j
DW_AT_type: integer*4
DW_AT_location: @Alpha+4
Patch by Eric Schweitz!
Differential Revision: https://reviews.llvm.org/D54327
llvm-svn: 357934
Summary:
ThinLTOCodeGenerator currently does not preserve llvm.used symbols and
it can internalize them. In order to pass the necessary information to the
legacy ThinLTOCodeGenerator, the input to the code generator is
rewritten to be based on lto::InputFile.
This fixes: PR41236
rdar://problem/49293439
Reviewers: tejohnson, pcc, dexonsmith
Reviewed By: tejohnson
Subscribers: mehdi_amini, inglorion, eraman, hiraditya, jkorous, dang, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60226
llvm-svn: 357931
Fixes bug 40992: https://bugs.llvm.org/show_bug.cgi?id=40992
There is potential for miscompiled code emitted from JumpThreading when
analyzing a block with one or more indirectbr or callbr predecessors. The
ProcessThreadableEdges() function incorrectly folds conditional branches
into an unconditional branch.
This patch prevents incorrect branch folding without fully pessimizing
other potential threading opportunities through the same basic block.
This IR shape was manually fed in via opt and is unclear if clang and the
full pass pipeline will ever emit similar code shapes.
Thanks to Matthias Liedtke for the bug report and simplified IR example.
Differential Revision: https://reviews.llvm.org/D60284
llvm-svn: 357930
I was looking at a potential DAGCombiner fix for one of the regressions in D60278, and it caused severe regression test pain because x86 TLI lies about the desirability of 8-bit shift ops.
We've hinted at making all 8-bit ops undesirable for the reason in the code comment:
// TODO: Almost no 8-bit ops are desirable because they have no actual
// size/speed advantages vs. 32-bit ops, but they do have a major
// potential disadvantage by causing partial register stalls.
...but that leads to massive diffs and exposes all kinds of optimization holes itself.
Differential Revision: https://reviews.llvm.org/D60286
llvm-svn: 357912
First step towards removing the MOVMSK intrinsics completely - this patch expands MOVMSK to the pattern:
e.g. PMOVMSKB(v16i8 x):
%cmp = icmp slt <16 x i8> %x, zeroinitializer
%int = bitcast <16 x i8> %cmp to i16
%res = zext i16 %int to i32
This is correctly handled by ISel and FastIsel (give or take an annoying movzx move...): https://godbolt.org/z/rkrSFW
Differential Revision: https://reviews.llvm.org/D60256
llvm-svn: 357909
Summary:
The ModuleList stream consists of an integer giving the number of
entries in the list, followed by the list itself. Each entry in the list
describes a module (a dynamically loaded object which was loaded in the
process when it crashed, or when the minidump was generated).
The code for reading the list is relatively straightforward, with a
single gotcha. Some minidump writers are emitting padding after the
"count" field in order to align the subsequent list on 8 byte boundary
(this depends on how their ModuleList type was defined and the native
alignment of various types on their platform). Fortunately, the minidump
format contains enough redundancy (in the form of the stream length
field in the stream directory), which allows us to detect this situation
and correct it.
This patch just adds the ability to parse the stream. Code for
conversion to/from yaml will come in a follow-up patch.
Reviewers: zturner, amccarth, jhenderson, clayborg
Subscribers: jdoerfert, markmentovai, lldb-commits, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60121
llvm-svn: 357897
Previously LowerOperationWrapper took the number of results from the original
node and counted that many results from the new node. This was intended to drop
chain operands from FP_TO_SINT lowering that uses X87 with memory operations to
stack temporaries. The final load had an extra chain output that needs to be
ignored.
Unfortunately, it didn't work with scatter, which has 2 results: the mask
output, which is discarded, and a chain output. The chain output is the one
that is needed but it comes second and it would be dropped by the previous
logic here. To workaround this we were doing a ReplaceAllUses in the lowering
code so that the generic legalization code wouldn't see any uses to replace
since it had been given the wrong result/type.
After this change we take the LowerOperation result directly if the original
node has one result. This allows us to directly return the chain from scatter
or the load data from the FP_TO_SINT case. When the original node has multiple
results we'll ensure the returned node has the same number and copy them over.
For cases where the original node has multiple results and the new code for some
reason has even more results, MERGE_VALUES can be used to pass only the needed
results.
llvm-svn: 357887
This extends D59959 to unionWith(), allowing to specify that a
non-wrapping unsigned/signed range is preferred. This is somewhat
less useful than the intersect case, because union operations are
rarer. An example use would be the phi union computed in SCEV.
The implementation is mostly a straightforward use of getPreferredRange(),
but I also had to adjust some <=/< checks to make sure that no ranges with
lower==upper get constructed before they're passed to getPreferredRange(),
as these have additional constraints.
Differential Revision: https://reviews.llvm.org/D60377
llvm-svn: 357876
Summary:
Previously we would use MOVZXrm8/MOVZXrm16, but those are longer encodings.
This is similar to what we do in the loadi32 predicate.
Reviewers: RKSimon, spatel
Reviewed By: RKSimon
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60341
llvm-svn: 357875
The intersection of two ConstantRanges may consist of two disjoint
ranges. As we can only return one range as the result, we need to
return one of the two possible ranges that cover both. Currently the
result is picked based on set size. However, this is not always
optimal: If we're in an unsigned context, we'd prefer to get a large
unsigned range over a small signed range -- the latter effectively
becomes a full set in the unsigned domain.
This revision adds a PreferredRangeType, which can be either Smallest,
Unsigned or Signed. Smallest is the current behavior and Unsigned and
Signed are new variants that prefer not to wrap the unsigned/signed
domain. The new type isn't used anywhere yet (but SCEV will be a good
first user, see D60035).
I've also added some comments to illustrate the various cases in
intersectWith(), which should hopefully make it more obvious what is
going on.
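A sketch of the resulting API shape (illustrative values only):

#include "llvm/IR/ConstantRange.h"
using namespace llvm;

// When the exact intersection is two disjoint pieces, the preference
// decides which single covering range comes back: Smallest keeps the old
// behavior, Unsigned/Signed prefer a result that does not wrap in the
// respective domain.
void intersectExample() {
  ConstantRange A(APInt(8, 10), APInt(8, 2)); // wrapped set {10..255, 0, 1}
  ConstantRange B(APInt(8, 0), APInt(8, 20)); // {0..19}
  ConstantRange Smallest = A.intersectWith(B, ConstantRange::Smallest);
  ConstantRange NoUnsignedWrap = A.intersectWith(B, ConstantRange::Unsigned);
  (void)Smallest;
  (void)NoUnsignedWrap;
}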
Differential Revision: https://reviews.llvm.org/D59959
llvm-svn: 357873
Summary: Add an accessor for the type of a binary file.
Reviewers: whitequark, deadalnix
Reviewed By: whitequark
Subscribers: hiraditya, aheejin, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60366
llvm-svn: 357872
Add isAllNegative() and isAllNonNegative() methods to ConstantRange,
which determine whether all values in the constant range are
negative/non-negative.
This is useful for replacing KnownBits isNegative() and isNonNegative()
calls when changing code to use constant ranges.
Differential Revision: https://reviews.llvm.org/D60264
llvm-svn: 357871
Add support for min/max flavor selects in computeConstantRange(),
which allows us to fold comparisons of a min/max against a constant
in InstSimplify. This fixes an infinite InstCombine loop, with the
test case taken from D59378.
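A sketch of the underlying range reasoning (not the committed code):

#include "llvm/IR/ConstantRange.h"
using namespace llvm;

// smax(X, 42) always lies in [42, SINT_MAX], so a comparison like
// smax(X, 42) < 10 intersects that range with the SLT-10 region and
// finds it empty, i.e. the compare folds to false.
bool smaxCompareFoldsToFalse() {
  ConstantRange MaxRange(APInt(32, 42),
                         APInt::getSignedMaxValue(32) + 1); // [42, SMAX]
  ConstantRange Lt10 =
      ConstantRange::makeExactICmpRegion(CmpInst::ICMP_SLT, APInt(32, 10));
  return MaxRange.intersectWith(Lt10).isEmptySet(); // true
}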
Relative to the previous iteration, this contains some adjustments for
AMDGPU med3 tests: The AMDGPU target runs InstSimplify prior to codegen,
which ends up constant folding some existing med3 tests after this
change. To preserve these tests a hidden -amdgpu-scalar-ir-passes option
is added, which allows disabling scalar IR passes (that use InstSimplify)
for testing purposes.
Differential Revision: https://reviews.llvm.org/D59506
llvm-svn: 357870
In the case where we only want the sign bit (e.g. when using PACKSS truncation of comparison results for MOVMSK) then we can just demand the sign bit of the source operands.
This makes use of the fact that PACKSS saturates out of range values to the min/max int values - so the sign bit is always preserved.
Differential Revision: https://reviews.llvm.org/D60333
llvm-svn: 357859
If we do SHL of two 16-bit ranges like [0, 30000) with [1, 2), we get
"full-set" instead of what I would have expected, [0, 60000), which is
still in the 16-bit unsigned range.
This patch changes the SHL algorithm to allow getting a usable range
even in this case.
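A sketch of the improved bound check (not the exact committed code):

#include "llvm/ADT/APInt.h"
using namespace llvm;

// For [0, 30000) << [1, 2) the largest possible result is 29999 << 1 ==
// 59998, which still fits in 16 bits, so the result range need not be
// the full set.
bool shlStaysIn16Bits() {
  APInt Max(16, 29999), MaxShift(16, 1);
  bool Overflow = false;
  (void)Max.ushl_ov(MaxShift, Overflow);
  return !Overflow; // true: a precise upper bound can be returned
}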
Differential Revision: https://reviews.llvm.org/D57983
llvm-svn: 357854
This function reorders AND and SHL to enable the SHL to fold into an LEA. The
upper bits of the AND will be shifted out by the SHL so it doesn't matter what
mask value we use for these bits. By using sign bits from the original mask in
these upper bits we might enable a shorter immediate encoding to be used.
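For example (a demonstration, not the committed code), with a shift of 8 the
top 8 mask bits are shifted out, so these two expressions agree for every X,
and the second mask is the sign-extended 8-bit immediate -16:

#include <cassert>
#include <cstdint>

void maskDemo(uint32_t X) {
  // 0x00FFFFF0 and 0xFFFFFFF0 differ only in bits that the << 8 discards.
  assert(((X & 0x00FFFFF0u) << 8) == ((X & 0xFFFFFFF0u) << 8));
}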
llvm-svn: 357846
The current lower_bound approach has to check two iterators pos and pos-1.
Changing it to upper_bound allows us to check one iterator (similar to
DWARFUnitVector::getUnitFor*).
llvm-svn: 357834
Summary:
Provides a new type, `LLVMBinaryRef`, and a binding to `llvm::object::createBinary` for more general interoperation with binary files than `LLVMObjectFileRef`. It also provides the proper non-consuming API for input buffers and populates an out parameter for error handling if necessary - two things the previous API did not do.
In a follow-up, I'll define section and symbol iterators and begin to build upon the existing test infrastructure.
This patch is a first step towards deprecating that API and replacing it with something more robust.
Reviewers: deadalnix, whitequark
Reviewed By: whitequark
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60322
llvm-svn: 357822
Summary:
Now that we can create standalone basic blocks, it's useful to be able to append them. Add bindings to
- Insert a basic block after the current insertion block
- Append a basic block to the end of a function's list of basic blocks
Reviewers: whitequark, deadalnix, harlanhaskins
Reviewed By: whitequark, harlanhaskins
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59658
llvm-svn: 357812
The expansion of TCRETURNri(64) would not keep operand flags like
undef/renamable/etc. which can result in machine verifier issues.
Also add plumbing to be able to use `-run-pass=x86-pseudo`.
llvm-svn: 357808
The detection of dead lanes can create some dead defs. Then RenameIndependentSubregs
will break a REG_SEQUENCE which may use these dead defs. At this point
a dead instruction can be removed but we do not run a DCE anymore.
MachineDCE was only run before live variable analysis. The patch
adds a means to preserve LiveIntervals and SlotIndexes in case it runs
past this point.
Differential Revision: https://reviews.llvm.org/D59626
llvm-svn: 357805
Summary:
This avoids needing an isel pattern for each condition code. And it removes translation switches for converting between Jcc instructions and condition codes.
Now the printer, encoder and disassembler take care of converting the immediate. We use InstAliases to handle the assembly matching. But we print using the asm string in the instruction definition. The instruction itself is marked IsCodeGenOnly=1 to hide it from the assembly parser.
Reviewers: spatel, lebedev.ri, courbet, gchatelet, RKSimon
Reviewed By: RKSimon
Subscribers: MatzeB, qcolombet, eraman, hiraditya, arphaman, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60228
llvm-svn: 357802
Summary:
This avoids needing an isel pattern for each condition code. And it removes translation switches for converting between SETcc instructions and condition codes.
Now the printer, encoder and disassembler take care of converting the immediate. We use InstAliases to handle the assembly matching. But we print using the asm string in the instruction definition. The instruction itself is marked IsCodeGenOnly=1 to hide it from the assembly parser.
Reviewers: andreadb, courbet, RKSimon, spatel, lebedev.ri
Reviewed By: andreadb
Subscribers: hiraditya, lebedev.ri, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60138
llvm-svn: 357801
Summary:
Reorder the condition code enum to match their encodings. Move it to MC layer so it can be used by the scheduler models.
This avoids needing an isel pattern for each condition code. And it removes
translation switches for converting between CMOV instructions and condition
codes.
Now the printer, encoder and disassembler take care of converting the immediate.
We use InstAliases to handle the assembly matching. But we print using the
asm string in the instruction definition. The instruction itself is marked
IsCodeGenOnly=1 to hide it from the assembly parser.
This does complicate the scheduler models a little since we can't assign the
A and BE instructions to a separate class now.
I plan to make similar changes for SETcc and Jcc.
Reviewers: RKSimon, spatel, lebedev.ri, andreadb, courbet
Reviewed By: RKSimon
Subscribers: gchatelet, hiraditya, kristina, lebedev.ri, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60041
llvm-svn: 357800
The current LCG doesn't consider aliased functions. So if an internal function has a public alias, it will not be added to the CG SCC, but it is still reachable from outside through the alias.
This patch therefore adds aliased functions to the SCC.
Differential Revision: https://reviews.llvm.org/D59898
llvm-svn: 357795
We have done some predicate and feature refactoring lately but
did not upstream it. This is to sync.
Differential revision: https://reviews.llvm.org/D60292
llvm-svn: 357791
Second half of PR40800. This patch adds DAG undef handling to fcmp instructions to match the behavior in llvm::ConstantFoldCompareInstruction; this permits constant folding of vector comparisons where some elements had been reduced to UNDEF (by SimplifyDemandedVectorElts etc.).
This involves a lot of tweaking to reduced tests, as bugpoint loves to reduce fcmp arguments to undef...
Differential Revision: https://reviews.llvm.org/D60006
llvm-svn: 357765
There are a variety of vector patterns that may be profitably reduced to a
scalar op when scalar ops are performed using a subset (typically, the
first lane) of the vector register file.
For x86, this is true for float/double ops and element 0 because
insert/extract is just a sub-register rename.
Other targets should likely enable the hook in a similar way.
Differential Revision: https://reviews.llvm.org/D60150
llvm-svn: 357760
We need to read the strings from the minidump files as little-endian,
regardless of the host byte order.
I definitely remember thinking about this case while writing the patch
(and in fact, I have implemented that for the "write" case), but somehow
I have ended up not implementing the byte swapping when reading the
data. This adds the necessary byte-swapping and should hopefully fix
test failures on big-endian bots.
llvm-svn: 357754
MSVC found the bare "make_unique" invocation ambiguous (between std::
and llvm:: versions). Explicitly qualifying the call with llvm:: should
hopefully fix it.
llvm-svn: 357750
Summary:
Strings in minidump files are stored as a 32-bit length field, giving
the length of the string in *bytes*, which is followed by the
appropriate number of UTF16 code units. The string is also supposed to
be null-terminated, and the null-terminator is not a part of the length
field. This patch:
- adds support for reading these strings out of the minidump file (this
implementation does not depend on proper null-termination)
- adds support for writing them to a minidump file
- using the previous two pieces implements proper (de)serialization of
the CSDVersion field of the SystemInfo stream. Previously, this was
only read/written as hex, and no attempt was made to access the
referenced string -- now this string is read and written correctly.
The changes are tested via yaml2obj|obj2yaml round-trip as well as a
unit test which checks the corner cases of the string deserialization
logic.
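A minimal sketch of the layout and a reader (assuming a little-endian host;
this is not the LLVM implementation):

#include <cstdint>
#include <cstring>
#include <string>

// A minidump string: a 32-bit byte length, then ByteLen/2 UTF-16 code
// units. The trailing null terminator is not counted in the length and
// is deliberately not relied upon here.
std::u16string readMinidumpString(const uint8_t *Data) {
  uint32_t ByteLen;
  std::memcpy(&ByteLen, Data, sizeof(ByteLen));
  std::u16string Str(ByteLen / 2, u'\0');
  std::memcpy(&Str[0], Data + sizeof(ByteLen), ByteLen);
  return Str;
}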
Reviewers: jhenderson, zturner, clayborg
Subscribers: llvm-commits, aprantl, markmentovai, amccarth, lldb-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59775
llvm-svn: 357749
Summary:
Teach SelectionDAG how to compute known bits of ISD::CopyFromReg if
the virtual reg used has one def only.
This can be particularly useful when calling isBaseWithConstantOffset()
with the ISD::CopyFromReg argument, as more optimizations may get enabled
in the result.
Also add a missing truncation on X86, found by testing of this patch.
Change-Id: Id1c9fceec862d118c54a5b53adf72ada5d6daefa
Reviewers: bogner, craig.topper, RKSimon
Reviewed By: RKSimon
Subscribers: lebedev.ri, nemanjai, jvesely, nhaehnle, javed.absar, jsji, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59535
llvm-svn: 357745
We already promote SRL and SHL to i32.
This will sometimes introduce sign extends, which might be harder to deal with than the zero extend we use for promoting SRL. I ran this through some of our internal benchmark lists and didn't see any major regressions.
I think there might be some DAG combine improvement opportunities in the test changes here.
Differential Revision: https://reviews.llvm.org/D60278
llvm-svn: 357743
Safepoint lowering checks that all gc.relocates observed in the safepoint
are lowered. However, Fast-Isel is able to skip dead gc.relocates.
To resolve this issue we just ignore dead gc.relocates in the check.
Reviewers: reames
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D60184
llvm-svn: 357742
A block reachable from the entry block can't have any route to a block that's not reachable from the entry block (if it did, that route would make it reachable from the entry block). That is the intended performance optimization for isPotentiallyReachable. For the case where we ask whether an unreachable from entry block has a route to a reachable from entry block, we can't conclude one way or the other. Fix a bug where we claimed there could be no such route.
The fix in rL357425 ironically reintroduced the very bug it was fixing but only when a DominatorTree is provided. This fixes the remaining bug.
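A sketch of the corrected reasoning (not the exact committed code):

#include "llvm/IR/Dominators.h"
using namespace llvm;

// A block reachable from entry can never reach a block that is
// unreachable from entry; in every other case involving unreachable
// blocks we must conservatively answer "maybe".
static bool mayReach(const DominatorTree &DT, const BasicBlock *A,
                     const BasicBlock *B) {
  if (DT.isReachableFromEntry(A) && !DT.isReachableFromEntry(B))
    return false; // any path A -> B would make B reachable from entry
  return true;    // nothing stronger can be concluded here
}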
llvm-svn: 357734
Summary: This changes the Architecture enum to use a prefix (AK_) to prevent the
preprocessor from replacing i386 with 1 when building llvm/clang for i386.
Reviewers: steven_wu, lhames, mstorsjo
Reviewed By: mstorsjo
Subscribers: hiraditya, jkorous, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60241
llvm-svn: 357733
It unnecessarily breaks previously-working code which used varargs,
but didn't pass any float/double arguments (such as EDK2).
Also revert the fixup on top of that:
Revert [X86] Fix a test from r357317
This reverts r357317 (git commit d413f41de6)
This reverts r357380 (git commit 7af32444b9)
llvm-svn: 357718
MSVC 2019 casts the pointer to a pointer-sized integer, which is a
reinterpret_cast, which is invalid in a constexpr context, so I have to
remove the LLVM_REQUIRES_CONSTANT_INITIALIZATION annotation for now.
llvm-svn: 357716
Fixes PR41367.
This effectively relands r357655 with a workaround for MSVC 2017.
I tried various approaches with unions, but I ended up going with this
ifdef approach because it lets us write the proper C++11 code that we
want to write, with a separate workaround that we can delete when we
drop MSVC 2017 support.
This also adds LLVM_REQUIRE_CONSTANT_INITIALIZATION, which wraps
[[clang::require_constant_initialization]]. This actually detected a
minor issue when using clang-cl where clang wasn't able to use the
constexpr constructor in MSVC's STL, so I switched back to using the
default ctor of std::atomic<void*>.
llvm-svn: 357714
This patch adds support in the MC layer for parsing and assembling the
4-operand add instruction needed for TLS addressing. This also involves
parsing the %tprel_hi, %tprel_lo and %tprel_add operand modifiers.
Differential Revision: https://reviews.llvm.org/D55341
llvm-svn: 357698
Summary:
Take the Index into account in `getDelayImportTable`, otherwise we
always return the entry for the first delay DLL reference.
Reviewers: ruiu
Reviewed By: ruiu
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60081
llvm-svn: 357697
This function is responsible for checking the legality of fusing an instance
of load -> op -> store into a single operation. In the SystemZ backend the
check was incomplete and a test case emerged with a cycle in the instruction
selection DAG as a result.
Instead of using the NodeIds to determine node relationships,
hasPredecessorHelper() now is used just like in the X86 backend. This handled
the failing tests and as well gave a few additional transformations on
benchmarks.
The SystemZ isFusableLoadOpStorePattern() is now a very near copy of the X86
function, and it seems this could be made a utility function in common code
instead.
Review: Ulrich Weigand
https://reviews.llvm.org/D60255
llvm-svn: 357688
This patch fixes .arch_extension directive parsing to handle a wider
range of architecture extension options. The existing parser was parsing
extensions as an identifier which breaks for extensions containing a
"-", such as the "tlb-rmi" extension.
The extension is now parsed as a string. This is consistent with the
extension parsing in the .arch and .cpu directive parsing.
Patch by Cullen Rhodes (c-rhodes)
Reviewed By: SjoerdMeijer
Differential Revision: https://reviews.llvm.org/D60118
llvm-svn: 357677
In general, llvm-symbolizer follows the output style of GNU's addr2line.
However, there are still some differences; in particular, for a requested
address, llvm-symbolizer prints line and column, while addr2line prints
only the line number.
This patch adds a new switch to select the preferred style.
Differential Revision: https://reviews.llvm.org/D60190
llvm-svn: 357675
SUBREG_TO_REG is supposed to be used to assert that we know the upper bits are
zero. But that isn't the case here. We've done no analysis of the inputs.
llvm-svn: 357673
Fast ISel has a fallback to SelectionDAGISel in case it cannot handle an instruction.
This works as follows:
Walking the basic block in reverse order, try to select each instruction using
Fast ISel. If Fast ISel cannot handle an instruction and that instruction is a
call, fall back to SelectionDAGISel for just that instruction and continue fast
instruction selection. However, if the unhandled instruction is neither a call
nor a statepoint-related instruction, fall back to SelectionDAGISel for all
remaining instructions in the basic block.
The gc.result instruction was missed by this logic, so it was possible for a
gc.result to be processed earlier than its statepoint, breaking the invariant
that gc.results should be handled after the statepoint.
The test is updated because in its current form fast-isel cannot handle the ret
instruction (due to an i1 return type without an explicit ext), so the test did
not exercise fast-isel at all.
Reviewers: reames
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D60182
llvm-svn: 357672
This revision causes tests to fail under ASAN. Since the cause of the failures
is not clear (could be ASAN, could be a Clang bug, could be a bug in this
revision), the safest course of action seems to be to revert while investigating.
llvm-svn: 357667
This allows __THREW__ to be defined in the current module, although
it is still required to be a GlobalVariable.
In emscripten we want to be able to compile the source code that
defines this symbols.
Previously we were avoid this by not running this pass when building
that compiler-rt library, but I have change out to build it using the
normal compiler path:
https://github.com/emscripten-core/emscripten/pull/8391
Differential Revision: https://reviews.llvm.org/D60232
llvm-svn: 357665
Summary:
`posix_fallocate` can fail if the underlying filesystem does not support
it; and, on AIX, such a failure is reported by a return value of
`ENOTSUP`. The existing code checks only for `EOPNOTSUPP`, which may
share the same value as `ENOTSUP`, but is not required to.
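A sketch of the portable check (not the committed code):

#include <cerrno>

// Treat both spellings as "not supported"; POSIX allows EOPNOTSUPP and
// ENOTSUP to be the same value, but does not require it.
static bool isNotSupportedError(int Err) {
#if defined(EOPNOTSUPP) && EOPNOTSUPP != ENOTSUP
  if (Err == EOPNOTSUPP)
    return true;
#endif
  return Err == ENOTSUP;
}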
Reviewers: xingxue, sfertile, jasonliu
Reviewed By: xingxue
Subscribers: kristina, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60175
llvm-svn: 357662
These inserters inserted some instructions to zero some registers and copied from virtual registers to physical registers.
This change instead inserts the zeros directly into the DAG at lowering time using new ISD opcodes
that take the extra zeroes as inputs. The zeros will then go through isel on their own to select
the MOV32r0 pseudo. Then we just need to mention the physical registers directly
in the isel patterns and the isel table and InstrEmitter will take care of inserting the necessary
copies to/from physical registers.
llvm-svn: 357659
Summary:
Now CVType and CVSymbol are effectively type-safe wrappers around
ArrayRef<uint8_t>. Make the kind() accessor load it from the
RecordPrefix, which is the same for types and symbols.
Reviewers: zturner, aganea
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60018
llvm-svn: 357658
This custom inserter existed so we could do a weird thing where we pretended that the instructions support
a full address mode instead of taking a pointer in EAX/RAX. I think this was largely so we could be pointer
size agnostic in the isel pattern.
To make this work we would then put the address into an LEA into EAX/RAX in front of the instruction after
isel. But the LEA is overkill when we just have a base pointer. So we end up using the LEA as a slower MOV
instruction.
With this change we now just do custom selection during isel instead and just assign the incoming address
of the intrinsic into EAX/RAX based on its size. After the intrinsic is selected, we can let isel take
care of selecting an LEA or other operation to do any address computation needed in this basic block.
I've also split the instruction into a 32-bit mode version and a 64-bit mode version so the implicit
use is properly sized based on the pointer. Without this we get comments in the assembly output about
killing eax and defing rax or vice versa depending on whether we define the instruction to use EAX/RAX.
llvm-svn: 357652
This pattern would show up as a regression if we more
aggressively convert vector FP ops to scalar ops.
There's still a missed optimization for the v4f64 legal
case (AVX) because we create that h-op with an undef operand.
We should probably just duplicate the operands for that
pattern to avoid trouble.
llvm-svn: 357642
Create method `optForNone()` testing for the function level equivalent of
`-O0` and refactor appropriately.
Differential revision: https://reviews.llvm.org/D59852
llvm-svn: 357638
Relying on no spill or other code being inserted before this was
precarious. It relied on code diligently checking isBasicBlockPrologue
which is likely to be forgotten.
Ideally this could be done earlier, but this doesn't work because of
phis. No other instruction can be placed before them, so we have to
accept the position being incorrect during SSA.
This avoids regressions in the fast register allocator rewrite from
inverting the direction.
llvm-svn: 357634
The standard doesn't require a DW_TAG_variable, DW_TAG_formal_parameter
or DW_TAG_constant to have a DW_AT_type attribute describing the type of the
variable. It only specifies that it *can* have one.
llvm-svn: 357628
Summary: Currently ProfileSummaryBuilder doesn't count callsite samples when computing total samples. Considering that ProfileSummaryInfo is used to check the hotness of not only body samples but also callsite samples (from SampleProfileLoader), I think the callsite sample counts should be considered when computing total samples.
Reviewers: eraman, danielcdh, wmi
Subscribers: hiraditya, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59835
llvm-svn: 357627
Same as G_EXP. Add a test, and update legalizer-info-validation.mir and
f16-instructions.ll.
Differential Revision: https://reviews.llvm.org/D60165
llvm-svn: 357605
When performing an add-with-overflow with an immediate in the
range -2G ... -4G, code currently loads the immediate into a
register, which generally takes two instructions.
In this particular case, it is preferable to load the negated
immediate into a register instead, which always only requires
one instruction, and then perform a subtract.
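As an illustration (using GCC/Clang builtins; not the committed code), the
two forms are equivalent, including their overflow flag:

#include <cassert>
#include <cstdint>

void addImmDemo(int64_t A) {
  const int64_t C = -3000000000; // immediate in the -2G ... -4G window
  int64_t R1, R2;
  bool Ov1 = __builtin_add_overflow(A, C, &R1);  // add-with-overflow
  bool Ov2 = __builtin_sub_overflow(A, -C, &R2); // subtract the negation
  assert(Ov1 == Ov2 && R1 == R2);
}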
llvm-svn: 357597
Currently, YAML has the following syntax for describing the symbols:
Symbols:
  Local:
    LocalSymbol1:
      ...
    LocalSymbol2:
      ...
    ...
  Global:
    GlobalSymbol1:
      ...
  Weak:
    ...
  GNUUnique:
I.e. symbols are grouped by their bindings. That is not very convenient,
because:
* It does not allow setting a custom binding, which can be useful for producing
broken/special outputs for test cases. Adding a new binding would require
changing the syntax (as we observed when GNUUnique was added recently).
* It does not allow changing the order of the symbols in .symtab/.dynsym:
currently all Local symbols are placed first, followed by Global, Weak and
GNUUnique, and we are not able to change that order.
* It is not consistent. Binding is just one of the properties of a symbol;
we do not group symbols by their other properties.
* It makes the code more complex than it needs to be. This patch shows it can
be simplified with the change performed.
The patch changes the syntax to just:
Symbols:
  Symbol1:
    ...
  Symbol2:
    ...
  ...
With that, we are able to work with the binding field just like with any other symbol property.
Differential revision: https://reviews.llvm.org/D60122
llvm-svn: 357595
The latest MTE specification adds register Xt to the STG instruction family:
STG [Xn, #offset] -> STG Xt, [Xn, #offset]
The tag written to memory is taken from Xt rather than Xn.
Also, the LDG instruction was changed to read the return address from Xt:
LDG Xt, [Xn, #offset].
This patch includes those changes and tests.
Specification is at: https://developer.arm.com/docs/ddi0596/c
Differential Revision: https://reviews.llvm.org/D60188
llvm-svn: 357583
There are 3 changes to make this correspond to the same transform in instcombine:
1. Remove the legality check - we can't create anything less legal than we started with.
2. Ease the use restriction, so we only bail out if both operands have >1 use.
3. Ease the use restriction for binops with a repeated operand (eg, mul x, x).
As discussed in D60150, there's a scalarization opportunity that will be made
easier by allowing this transform more generally.
llvm-svn: 357580
Summary:
Given that X86 does not use this currently, this is an NFC. I'll
experiment with enabling it and will report numbers.
Reviewers: andreadb, lebedev.ri
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60185
llvm-svn: 357568
The test should really be checking for the property directly in the
code object headers, but there are problems with this. I don't see
this directly represented in the text form, and for the binary
emission this is depending on a function level subtarget feature to
emit a global flag.
llvm-svn: 357558
The Emscripten OS provides a definition of __EMSCRIPTEN__, and also declares
that it supports iprintf optimizations.
Also define small_printf optimizations, which is a printf with float support
but not long double (which in wasm can be useful since long doubles are 128
bit and force linking of float128 emulation code). This part is based on
sunfish's https://reviews.llvm.org/D57620 (which can't land yet since
the WASI integration isn't ready yet).
Differential Revision: https://reviews.llvm.org/D60167
llvm-svn: 357552
This change is in preparation for the addition of new target
operand flags for new relocation types. Having a symbol type as part
of the flag set makes it harder to use, and AFAICT these are serving
no purpose.
Differential Revision: https://reviews.llvm.org/D60014
llvm-svn: 357548
We should overall stop using these, but the uppercase name didn't
work. Any feature string is converted to lowercase, so these
could never be found in the table.
llvm-svn: 357541
This function should only be called with instructions that are really convertible. And all
convertible instructions need to be handled by the switch. So nothing should use the default.
llvm-svn: 357529
X86FixupLEAs just assumes convertToThreeAddress will return nullptr for any instruction that isn't convertible.
But the code in convertToThreeAddress for X86 assumes that any instruction coming in has at least 2 operands and that the second one is a register. But those properties aren't guaranteed of all instructions. We should check the instruction property first.
llvm-svn: 357528
This adds partial instruction selection support for llvm.aarch64.stlxr. It also
factors out selection for G_INTRINSIC_W_SIDE_EFFECTS into its own function. The
new function removes the restriction that the intrinsic ID on the
G_INTRINSIC_W_SIDE_EFFECTS be on operand 0.
Also add a test, and add a GISel line to arm64-ldxr-stxr.ll.
Differential Revision: https://reviews.llvm.org/D60100
llvm-svn: 357518
Set the correct debug location on instructions which load arguments in
preparation for a call to an arg-promoted function.
This prevents location cascade from misattributing the line/scope of one
of these loads to the location of the instruction preceding the call.
Differential Revision: https://reviews.llvm.org/D60113
llvm-svn: 357500
Bug: https://bugs.llvm.org/show_bug.cgi?id=41180
In the bug test case the debug location was missing for the cmp instruction in
the "middle block" BB. This patch fixes the bug by copying the debug location
from the cmp of the scalar loop's terminator branch, if it exists.
The patch also fixes the debug location on the subsequent branch instruction.
It was previously using the location of the original loop's pre-header
block terminator. Both of these instructions will now map to the source line of
the conditional branch in the original loop.
A regression test has been added that covers these issues.
Patch by Orlando Cazalet-Hyams!
Differential Revision: https://reviews.llvm.org/D59944
llvm-svn: 357499
We did experiments on a Power 9 machine and checked the outputs for the NaN &
Infinity+ cases with the corresponding DCMX bit set. This confirmed the DCMX
mask bits for NaN and Infinity+ are reversed.
This patch fixes the issue.
Patch by Victor Huang.
Differential Revision: https://reviews.llvm.org/D59384
llvm-svn: 357494
The code was failing to actually check for the presence of the call to widenable_condition. The whole point of specifying the widenable_condition intrinsic was allowing widening transforms. A normal branch is not widenable. A normal branch leading to a deopt is not widenable (in general).
I added a test case via LoopPredication, but GuardWidening has an analogous bug. Those are the only two passes actually using this utility just yet. Noticed while working on LoopPredication for non-widenable branches; POC in D60111.
llvm-svn: 357493
Summary:
When inserting an `unreachable` after a noreturn call, we must ensure
that it's not a musttail call to avoid breaking the IR invariants for
musttail calls.
Reviewers: fedor.sergeev, majnemer
Reviewed By: majnemer
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60080
llvm-svn: 357485
Summary: It is possible that multiple indirect call targets have been promoted for a single callsite from the profiled binary. The current implementation repeats promotion for all these targets as long as the callsite itself is hot (the callsite is assumed to be hot if any one of these targets was "hot" during the profiling). However, even when one of the ICPed targets is hot, other targets may not be, and we should not repeat promotion for "cold" targets.
Reviewers: danielcdh, wmi
Subscribers: hiraditya, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59940
llvm-svn: 357484
Summary:
When inserting an `unreachable` after a noreturn call, we must ensure
that it's not a musttail call to avoid breaking the IR invariants for
musttail calls.
Reviewers: fedor.sergeev, majnemer
Reviewed By: majnemer
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60079
llvm-svn: 357483
For shift and rotate instructions that only use the last 6 bits of the shift
amount, a shift amount of (x*64-s) can be substituted with (-s). This saves
one instruction and a register:
lhi %r1, 64
sr %r1, %r3
sllg %r2, %r2, 0(%r1)
=>
lcr %r1, %r3
sllg %r2, %r2, 0(%r1)
Review: Ulrich Weigand
llvm-svn: 357481
`StoreInst::getValueOperand` is identical to `getOperand(0)`, so the call to
`getOperand(0)` can be replaced. Further, `SI->getValueOperand` is redundantly
called just a few lines down, despite its return value being stored in variable
`DV`. No functional change.
llvm-svn: 357479
To disable the use of odd floating-point registers (O32 ABI and the
-mno-odd-spreg command line option), such registers and their
super-registers are added to the set of reserved registers. In general, this
works, but there is at least one problem: with the machine
verifier pass enabled, some floating-point tests failed because the live
ranges of reserved register units are not empty, and the verification pass
failed with a "Live segment doesn't end at a valid instruction" error
message.
There is D35985 patch which tries to solve the problem by explicit
removing of register units. This solution did not get approval.
I would like to use another approach for prevent using odd floating
point registers - define `AltOrders` and `AltOrderSelect` for MIPS
floating point register classes. Such `AltOrders` contains reduced set
of registers. At first glance, such solution does not break any test
cases and allows enabling machine instruction verification for all MIPS
test cases.
Differential Revision: http://reviews.llvm.org/D59799
llvm-svn: 357472
This patch allows symbols appended with @plt to be parsed and assembled with
the R_RISCV_CALL_PLT relocation.
Differential Revision: https://reviews.llvm.org/D55335
Patch by Lewis Revill.
llvm-svn: 357470
Summary:
This patch adds the code needed to parse a minidump file into the
MinidumpYAML model, and the necessary glue code so that obj2yaml can
recognise the minidump files and process them.
Reviewers: jhenderson, zturner, clayborg
Subscribers: mgorny, lldb-commits, amccarth, markmentovai, aprantl, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59634
llvm-svn: 357469
There are various places in LLVM where the definition of StackID is not
properly honoured, for example in PEI where objects with a StackID > 0 are
allocated on the default stack (StackID0). This patch enforces that PEI
only considers allocating objects to StackID 0.
Reviewers: arsenm, thegameg, MatzeB
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D60062
llvm-svn: 357460
The code was previously checking that candidates for sinking had exactly
one use or were a store instruction (which can't have uses). This meant
we could sink call instructions only if they had a use.
That limitation seemed a bit arbitrary, so this patch changes it to
"instruction has zero or one use" which seems more natural and removes
the need to special-case stores.
Differential revision: https://reviews.llvm.org/D59936
llvm-svn: 357452
The code doesn't actually need any of the information about the widenable condition at this level. The only thing we need is to ensure the WC call is the last thing anded in, and even that is a quirk we should really look to remove.
llvm-svn: 357448
There's an existing optimization for x != C, but somehow it was missing
a special case for 0.
While I'm here, also cleaned up the code/comments a bit: the second
value produced by the MERGE_VALUES was actually dead, since a CMOV only
produces one result.
Differential Revision: https://reviews.llvm.org/D59616
llvm-svn: 357437
It's a little tricky to make this issue show up because
prologue/epilogue emission normally likes to push at least two
registers... but it doesn't when lr is force-spilled due to function
length. Not sure if that really makes sense, but I decided not to touch
it for now.
Differential Revision: https://reviews.llvm.org/D59385
llvm-svn: 357436
This improves selection for vector stores into v2s64s. Before, we just
scalarized them, but we can use a STRQui instead.
Differential Revision: https://reviews.llvm.org/D60083
llvm-svn: 357432
This should allow llvm-exegesis to intelligently constrain the rounding mode.
The mask in the encoder shouldn't be necessary any more. We used to allow codegen to use 8-11 for rounding mode and the assembler would use 0-3 to mean the same thing so we masked here and in the printer. Codegen now matches the assembler and the printer was updated, but I forgot to update the encoder.
llvm-svn: 357419
We'd been optimizing the case where the predicate was obviously true; now do the same for the false case. This is mostly for completeness' sake, but it may also improve compile time in loops which will exit through the guard. Such loops are presumed rare in fastpath code, but may be present down untaken paths, so optimizing for them is still useful.
llvm-svn: 357408
Summary:
Previously, we translated llvm.round to PTX cvt.rni, which rounds to the
even integer when the source is equidistant between two integers. This
is not correct, as llvm.round should round away from zero. This change
replaces llvm.round with a round away from zero implementation through
target specific custom lowering.
Modify a few affected tests to not check for cvt.rni. Instead, we check
for the use of a few constants used in implementing round. We are also
adding CUDA runnable tests to check for the values produced by
llvm.round to test-suites/External/CUDA.
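A standalone sketch of the semantic difference, using std::round (which matches llvm.round's round-half-away-from-zero behavior) and std::nearbyint under the default round-to-nearest-even mode (which matches PTX cvt.rni):
```cpp
#include <cmath>
#include <cstdio>
int main() {
  // llvm.round semantics: halfway cases round away from zero.
  std::printf("%g %g\n", std::round(0.5), std::round(-0.5));        // 1 -1
  // cvt.rni semantics: halfway cases round to the nearest even integer.
  std::printf("%g %g\n", std::nearbyint(0.5), std::nearbyint(1.5)); // 0 2
}
```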
Reviewers: tra
Subscribers: jholewinski, sanjoy, jlebar, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59947
llvm-svn: 357407
LoopPredication was replacing the original condition, but leaving the instructions to compute the old conditions around. This would get cleaned up by other passes of course, but we might as well do it eagerly. That also makes the test output less confusing.
llvm-svn: 357406
This change incorporates an effort by Connor Abbot to change how we deal
with WWM operations potentially trashing valid values in inactive lanes.
Previously, the SIFixWWMLiveness pass would work out which registers
were being trashed within WWM regions, and ensure that the register
allocator did not have any values it was depending on resident in those
registers if the WWM section would trash them. This worked perfectly
well, but would sometimes cause severe register pressure when the WWM
section resided before divergent control flow (or at least that is where
I mostly observed it).
This fix instead runs through the WWM sections and pre-allocates some
registers for WWM. It then reserves these registers so that the register
allocator cannot use them. This results in a significant register
saving on some WWM shaders I'm working with (130 -> 104 VGPRs, with just
this change!).
Differential Revision: https://reviews.llvm.org/D59295
llvm-svn: 357400
This instruction writes a block of allocation tags
and stores zero to the associated data locations.
It differs from STGM by 1 bit and has the same
arguments.
The specification can be found here:
https://developer.arm.com/docs/ddi0596/c
Differential Revision: https://reviews.llvm.org/D60065
llvm-svn: 357397
This patch replaces the addition of VK_RISCV_CALL in RISCVMCCodeEmitter by
creating the RISCVMCExpr when tail/call are parsed, or in the codegen case
when the callee symbols are created.
This required adding a new CallSymbol operand to allow only adding
VK_RISCV_CALL to tail/call instructions.
This patch will allow further expansion of parsing and codegen to easily
include PLT symbols which must generate the R_RISCV_CALL_PLT relocation.
Differential Revision: https://reviews.llvm.org/D55560
Patch by Lewis Revill.
llvm-svn: 357396
The STGV/LDGV instructions were replaced with
STGM/LDGM. The encodings remain the same but there
is no longer writeback so there are no unpredictable
encodings to check for.
The specification can be found here:
https://developer.arm.com/docs/ddi0596/c
Differential Revision: https://reviews.llvm.org/D60064
llvm-svn: 357395
This patch adds an implementation of a PC-relative addressing sequence to be
used when -mcmodel=medium is specified. With absolute addressing, a 'medium'
codemodel may cause addresses to be out of range. This is because while
'medium' implies a 2 GiB addressing range, this 2 GiB can be at any offset as
opposed to 'small', which implies the first 2 GiB only.
Note that LLVM/Clang currently specifies code models differently to GCC, where
small and medium imply the same functionality as GCC's medlow and medany
respectively.
Differential Revision: https://reviews.llvm.org/D54143
Patch by Lewis Revill.
llvm-svn: 357393
The latest version of the MTE spec added a system
register 'GMID_EL1'. It contains the block size used
by the LDGM and STGM instructions and is read only.
The specification can be found here:
https://developer.arm.com/docs/ddi0596/c
llvm-svn: 357392
Summary:
This fixes PR41270.
The recursive function evaluateInDifferentElementOrder expects to be called
on a vector Value, so when we call it on a vector GEP's arguments, we must
first check that the argument is indeed a vector.
Reviewers: reames, spatel
Reviewed By: spatel
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60058
llvm-svn: 357389
This reverts commit 75216a6dbcfe5fb55039ef06a07e419fa875f4a5.
I'll recommit with a better commit message with reference to the
phabricator review.
llvm-svn: 357387
This fixes PR41270.
The recursive function evaluateInDifferentElementOrder expects to be called
on a vector Value, so when we call it on a vector GEP's arguments, we must
first check that the argument is indeed a vector.
llvm-svn: 357385
If we have a commutable vector binop with inverted select-shuffles,
we don't care about the order of the operands in each vector lane:
LHS = shuffle V1, V2, <0, 5, 6, 3>
RHS = shuffle V2, V1, <0, 5, 6, 3>
LHS + RHS --> <V1[0]+V2[0], V2[1]+V1[1], V2[2]+V1[2], V1[3]+V2[3]> --> V1 + V2
PR41304:
https://bugs.llvm.org/show_bug.cgi?id=41304
...is currently titled as an SLP enhancement, but at least for the
given example, we can reduce that in instcombine because we are just
eliminating shuffles.
As noted in the TODO, this could be generalized, but I haven't thought
through those patterns completely, so this is limited to what appears
to be always safe.
Differential Revision: https://reviews.llvm.org/D60048
llvm-svn: 357382
Adds a `seto` pattern expansion. Without it the lowerings of `fcmp one` and
`fcmp ord` would be inefficient due to an unoptimized double negation.
Differential Revision: https://reviews.llvm.org/D59699
llvm-svn: 357378
A pcrel_lo will point to the associated pcrel_hi fixup which in turn points to
the real target. RISCVMCExpr::evaluatePCRelLo will work around this
indirection in order to allow the fixup to be evaluated properly. However, if
relocations are forced (e.g. because linker relaxation is enabled) then its
evaluation is undesired and will result in a relocation with the wrong target.
This patch modifies evaluatePCRelLo so it will not try to evaluate if the
fixup will be forced as a relocation. A new helper method is added to
RISCVAsmBackend to query this.
Differential Revision: https://reviews.llvm.org/D59686
llvm-svn: 357374
One motivation for making this change is that the lack of using movmsk is likely
a main source of perf difference between clang and gcc on the C-Ray benchmark as
shown here:
https://www.phoronix.com/scan.php?page=article&item=gcc-clang-2019&num=5
...but this change alone isn't enough to solve that problem.
The 'all-of' examples show what is likely the worst case trade-off: we end up with
an extra instruction (or 2 if we count the 'xor' register clearing). The 'any-of'
examples look clearly better using movmsk because we've traded 2 vector instructions
for 2 scalar instructions, and movmsk may have better timing than the generic 'movq'.
If we examine the llvm-mca output for these cases, it appears that even though the
'all-of' movmsk variant looks worse on paper, it would perform better on both
Haswell and Jaguar.
$ llvm-mca -mcpu=haswell no_movmsk.s -timeline
Iterations: 100
Instructions: 400
Total Cycles: 504
Total uOps: 400
Dispatch Width: 4
uOps Per Cycle: 0.79
IPC: 0.79
Block RThroughput: 1.0
$ llvm-mca -mcpu=haswell movmsk.s -timeline
Iterations: 100
Instructions: 600
Total Cycles: 358
Total uOps: 600
Dispatch Width: 4
uOps Per Cycle: 1.68
IPC: 1.68
Block RThroughput: 1.5
$ llvm-mca -mcpu=btver2 no_movmsk.s -timeline
Iterations: 100
Instructions: 400
Total Cycles: 407
Total uOps: 400
Dispatch Width: 2
uOps Per Cycle: 0.98
IPC: 0.98
Block RThroughput: 2.0
$ llvm-mca -mcpu=btver2 movmsk.s -timeline
Iterations: 100
Instructions: 600
Total Cycles: 311
Total uOps: 600
Dispatch Width: 2
uOps Per Cycle: 1.93
IPC: 1.93
Block RThroughput: 3.0
Finally, there may be CPUs where movmsk is horribly slow (old AMD small cores?), but if
that's true, then we're also almost certainly making the wrong transform already for
reductions with >2 elements, so that should be fixed independently.
Differential Revision: https://reviews.llvm.org/D59997
llvm-svn: 357367
In PR41304:
https://bugs.llvm.org/show_bug.cgi?id=41304
...we have a case where we want to fold a binop of select-shuffle (blended) values.
Rather than try to match commuted variants of the pattern, we can canonicalize the
shuffles and check for mask equality with commuted operands.
We don't produce arbitrary shuffle masks in instcombine, but select-shuffles are a
special case that the backend is required to handle because we already canonicalize
vector select to this shuffle form.
So there should be no codegen difference from this change. It's possible that this
improves CSE in IR though.
Differential Revision: https://reviews.llvm.org/D60016
llvm-svn: 357366
Straightforward port of StatepointIRVerifier pass to new Pass Manager framework.
Fix By: skatkov
Reviewed By: fedor.sergeev
Differential Revision: https://reviews.llvm.org/D59825
This is a re-land of r357147/r357148 with LLVM_ENABLE_MODULES build fixed.
Adding IR/SafepointIRVerifier.h into its own module.
llvm-svn: 357361
Negate updates flags like a subtract. We should be able to use the flags from the RMW form of negate when we have (store (X86ISD::SUB 0, load A), A)
Differential Revision: https://reviews.llvm.org/D60007
llvm-svn: 357353
This patch adds support for the RISC-V hard float ABIs, building on top of
rL355771, which added basic target-abi parsing and MC layer support. It also
builds on some re-organisations and expansion of the upstream ABI and calling
convention tests which were recently committed directly upstream.
A number of aspects of the RISC-V hard float ABIs require frontend
support (e.g. flattening of structs and passing int+fp for fp+fp structs in a
pair of registers), and will be addressed in a Clang patch.
As can be seen from the tests, it would be worthwhile extending
RISCVMergeBaseOffsets to handle constant pool as well as global accesses.
Differential Revision: https://reviews.llvm.org/D59357
llvm-svn: 357352
Fixes PR41316 where the expanded PAVG intrinsic had had one of its ADDs turned into an OR due to its operands having no conflicting bits.
llvm-svn: 357351
Summary:
Linearizing the control flow by placing `try`/`end_try` markers can create
mismatches in unwind destinations. This patch resolves these mismatches
by wrapping those instructions with an incorrect unwind destination with
a nested `try`/`catch`/`end_try` and branching to the right destination
within the new catch block.
Reviewers: dschuff
Subscribers: sunfish, sbc100, jgravelle-google, chrib, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D48345
llvm-svn: 357343
Summary:
While this does not change any final output, this will greatly simplify
fixing unwind destination mismatches in CFGStackify (D48345), because we
have to create some new registers there.
Reviewers: dschuff
Subscribers: sunfish, sbc100, jgravelle-google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59652
llvm-svn: 357342
The SplitF64 node is used on RV32D to convert an f64 directly to a pair of i32
(necessary as bitcasting to i64 isn't legal). When performed on a ConstantFP,
this will result in a FP load from the constant pool followed by a store to
the stack and two integer loads from the stack (necessary as there is no way
to directly move between f64 FPRs and i32 GPRs on RV32D). It's always cheaper
to just materialise integers for the lo and hi parts of the FP constant, so do
that instead.
llvm-svn: 357341
This change adds hierarchical "time trace" profiling blocks that can be visualized in Chrome, in a "flame chart" style. Each profiling block can have a "detail" string that for example indicates the file being processed, template name being instantiated, function being optimized etc.
This is taken from GitHub PR: https://github.com/aras-p/llvm-project-20170507/pull/2
Patch by Aras Pranckevičius.
Differential Revision: https://reviews.llvm.org/D58675
llvm-svn: 357340
Summary:
Currently we create a routing block to the dispatch block for every
predecessor of every entry. So the total number of routing blocks
created will be (# of preds) * (# of entries). But we don't need to do
this: we need at most 2 routing blocks per loop entry, one for when the
predecessor is inside the loop and one for it is outside the loop. (We
can't merge these into one because this will creates another loop cycle
between blocks inside and blocks outside) This patch fixes this and
creates at most 2 routing blocks per entry.
This also renames variable `Split` to `Routing`, which I think is a bit
clearer.
Reviewers: kripken
Subscribers: sunfish, dschuff, sbc100, jgravelle-google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59462
llvm-svn: 357337
Summary:
On AIX, we can determine whether a filesystem is remote using `mntctl`.
If the information is not found, then claim that the file is remote
(since that is the more restrictive case). Testing for the associated
interface is restored with a modified version of the unit test from
rL295768.
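A heavily simplified sketch of the query, assuming AIX's documented mntctl interface (error handling and the per-mount remote/local classification via the returned vmount records are elided):
```cpp
#include <sys/types.h>
#include <sys/mntctl.h>
#include <sys/vmount.h>
#include <vector>
// MCTL_QUERY fills Buf with one vmount record per mounted filesystem; the
// return value is the number of records (0 means the buffer is too small).
bool queryMounts(std::vector<char> &Buf) {
  int NumMounts = mntctl(MCTL_QUERY, static_cast<int>(Buf.size()), Buf.data());
  return NumMounts > 0;
}
```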
Reviewers: jasonliu, xingxue
Reviewed By: xingxue
Subscribers: jsji, apaprocki, Hahnfeld, zturner, krytarowski, kristina, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58801
llvm-svn: 357333
Summary:
This feature is not actually used for anything in the WebAssembly
backend, but adding it allows users to get it into the target features
sections of their objects, which makes these objects
future-compatible.
Reviewers: aheejin, dschuff
Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, jdoerfert, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D60013
llvm-svn: 357321
Summary:
This lets us avoid e.g. checking if A >=s B in getSMaxExpr(A, B) if we've
already established that (A smax B) is the best we can do.
Fixes PR41225.
Reviewers: asbirlea
Subscribers: mcrosier, jlebar, bixia, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60010
llvm-svn: 357320
This adds support for v2s32 vector inserts, and updates the selection +
regbankselect tests for G_INSERT_VECTOR_ELT.
Differential Revision: https://reviews.llvm.org/D59910
llvm-svn: 357318
We need XMM registers to handle varargs with the Win64 ABI. Before we would
silently generate bad code resulting in an assertion failure elsewhere in the
backend.
llvm-svn: 357317
Summary:
This fixes crashes when a BB in which an END_LOOP is to be placed is
unreachable and does not have any predecessors. Fixes PR41307.
Reviewers: dschuff
Subscribers: yurydelendik, sbc100, jgravelle-google, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D60004
llvm-svn: 357303
Since this can be set with s_setreg*, it should not be a subtarget
property. Set a default based on the calling convention, and introduce
a new amdgpu-dx10-clamp attribute to override this if desired.
Also introduce a new amdgpu-ieee attribute to match.
The values need to match to allow inlining. I think it is OK for the
caller's dx10-clamp attribute to override the callee, but there
doesn't appear to be the infrastructure to do this currently without
defining the attribute in the generic Attributes.td.
Eventually the calling convention lowering will need to insert a mode
switch somewhere for these.
llvm-svn: 357302
Summary:
Nodes that have no uses are eventually pruned when they are selected
from the worklist. Record nodes newly added to the worklist or DAG and
perform pruning after every combine attempt.
Reviewers: efriedma, RKSimon, craig.topper, spatel, jyknight
Reviewed By: jyknight
Subscribers: jdoerfert, jyknight, nemanjai, jvesely, nhaehnle, javed.absar, hiraditya, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58070
llvm-svn: 357283
Summary:
Various SelectionDAG non-combine operations (e.g. the getNode smart
constructor and legalization) may leave dangling nodes by applying
optimizations without fully pruning unused result values. This results
in nodes that are never added to the worklist and therefore can not be
pruned.
Add a node inserter for the combiner to make sure such nodes have the
chance of being pruned. This allows a number of additional peephole
optimizations.
Reviewers: efriedma, RKSimon, craig.topper, jyknight
Reviewed By: jyknight
Subscribers: msearles, jyknight, sdardis, nemanjai, javed.absar, hiraditya, jrtc27, atanasyan, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58068
llvm-svn: 357279
This may not be NFC, but I'm not sure how to expose any diffs in
tests. In theory, it should be slightly more efficient and possibly
more profitable to do the canonicalizations (which can increase the
undef elements in the mask) ahead of SimplifyDemandedVectorElts().
llvm-svn: 357272
For the cases where the icmp/fcmp predicate is commutative, use reorderInputsAccordingToOpcode to collect and commute the operands.
This requires a helper to recognise commutativity in both general Instruction and CmpInstr types - CmpInst::isCommutative doesn't overload the Instruction::isCommutative method for reasons I'm not clear on (maybe because it's based on predicate not opcode?!?).
Differential Revision: https://reviews.llvm.org/D59992
llvm-svn: 357266
The `lowerMSASplatImm` function zero-extends `i32` immediates while
building the constant. If the target type is `i64`, a negative immediate
loses its sign. As a result, for example, `__builtin_msa_ldi_d(-1)` is
lowered to a series of instructions that loads the incorrect value
0xffffffff into the `$w0` register instead of a single
`ldi.d $w0, -1` instruction.
The fix zero-extends unsigned immediates and sign-extends signed
immediates.
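A minimal sketch of the widening rule, with a hypothetical helper (not the in-tree code):
```cpp
#include <cstdint>
// Widening a 32-bit immediate to a 64-bit element must respect signedness,
// otherwise -1 becomes 0x00000000ffffffff instead of 0xffffffffffffffff.
int64_t widenSplatImm(int32_t Imm, bool IsSigned) {
  if (IsSigned)
    return static_cast<int64_t>(Imm);                      // sign-extend
  return static_cast<int64_t>(static_cast<uint32_t>(Imm)); // zero-extend
}
```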
Differential Revision: http://reviews.llvm.org/D59884
llvm-svn: 357264
After investigating the examples from D59777 targeting an SSE4.1 machine,
it looks like a very different problem due to how we map illegal types (256-bit in these cases).
We're missing a shuffle simplification that maps elements of a vector back to a shuffled operand.
We have a more general version of this transform in DAGCombiner::visitVECTOR_SHUFFLE(), but that
generality means it is limited to patterns with a one-use constraint, and the examples here have
2 uses. We don't need any uses or legality limitations for a simplification (no new value is
created).
It looks like we miss this pattern in IR too.
In one of the zext examples here, we have shuffle masks like this:
Shuf0 = vector_shuffle<0,u,3,7,0,u,3,7>
Shuf = vector_shuffle<4,u,6,7,u,u,u,u>
...so that's moving the high half of the 1st vector into the low half. But the high half of the
1st vector is already identical to the low half.
Differential Revision: https://reviews.llvm.org/D59961
llvm-svn: 357258
Updated to use DenseMap::insert instead of [] operator for insertion, to
avoid a crash caused by epoch checks.
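A minimal sketch of the changed call pattern, with hypothetical key/value types:
```cpp
#include "llvm/ADT/DenseMap.h"
using namespace llvm;
// insert() leaves an existing entry alone and reports whether insertion
// happened, while operator[] default-constructs an entry and interacted
// badly with DenseMap's debug-epoch iterator checks in asserts builds.
void example(DenseMap<unsigned, unsigned> &Map) {
  Map.insert({0u, 42u}); // the form used after this change
  // Map[0u] = 42u;      // the form that triggered the crash
}
```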
This reverts commit 2b85de4383.
llvm-svn: 357257
This is a sibling to rL357178 that I noticed we'd hit if we chose
an alternate transform in D59818.
%z = zext i8 %x to i32
%dec = add i32 %z, -1
%r = sext i32 %dec to i64
=>
%z2 = zext i8 %x to i64
%r = add i64 %z2, -1
https://rise4fun.com/Alive/kPP
The x86 vector diffs show a slight regression, so there's a chance
that we should limit this and the previous transform to scalars.
But given that we allowed vectors before, I'm matching that behavior
here. We should change both transforms together if that's the right
thing to do.
llvm-svn: 357254
In the example below, we would previously emit two range checks, one for cases
1--3 and one for 4--6. This patch makes us exploit the fact that the
fall-through is unreachable and only one range check is necessary.
switch i32 %i, label %default [
i32 1, label %bb1
i32 2, label %bb1
i32 3, label %bb1
i32 4, label %bb2
i32 5, label %bb2
i32 6, label %bb2
]
default: unreachable
llvm-svn: 357252
This patch adds an experimental stage named MicroOpQueueStage.
MicroOpQueueStage can be used to simulate a hardware micro-op queue (basically,
a decoupling queue between 'decode' and 'dispatch'). Users can specify a queue
size, as well as an optional MaxIPC (which - in the absence of a "Decoders" stage
- can be used to simulate a different throughput from the decoders).
This stage is added to the default pipeline between the EntryStage and the
DispatchStage only if PipelineOption::MicroOpQueue is different than zero. By
default, llvm-mca sets PipelineOption::MicroOpQueue to the value of hidden flag
-micro-op-queue-size.
Throughput from the decoder can be simulated via another hidden flag named
-decoder-throughput. That flag allows us to quickly experiment with different
frontend throughputs. For targets that declare a loop buffer, flag
-decoder-throughput allows users to do multiple runs, each time simulating a
different throughput from the decoders.
This stage can/will be extended in the future. For example, we could add a "buffer
full" event to notify about bottlenecks caused by backpressure. Flag
-decoder-throughput would probably go away if in the future we delegate to another
stage (DecoderStage?) the simulation of a (potentially variable) throughput from
the decoders. For now, flag -decoder-throughput is "good enough" to run some
simple experiments.
Differential Revision: https://reviews.llvm.org/D59928
llvm-svn: 357248
We should be able to match elements with the swapped predicate as well - as long as we commute the source operands.
Differential Revision: https://reviews.llvm.org/D59956
llvm-svn: 357243
Summary:
PowerPC64/PowerPC64le supports the builtin function __builtin_setrnd to set the floating point rounding mode. This function will use the least significant two bits of the integer argument to set the floating point rounding mode.
double __builtin_setrnd(int mode);
The effective values for mode are:
0 - round to nearest
1 - round to zero
2 - round to +infinity
3 - round to -infinity
Note that the mode argument is taken modulo 4, so if the int argument is greater than 3, only its least significant two bits are used. Namely, __builtin_setrnd(102) is equal to __builtin_setrnd(2).
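A hedged usage sketch (PowerPC64/PowerPC64le only; mode encodings from the table above):
```cpp
// Each call sets the FP rounding mode per the table; the double return
// value is ignored here.
void setModes() {
  __builtin_setrnd(0);   // round to nearest
  __builtin_setrnd(1);   // round to zero
  __builtin_setrnd(102); // only the low two bits are used: same as mode 2
}
```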
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D59405
llvm-svn: 357241
Some DAG mutations can only be applied to `ScheduleDAGMI`, and have to
internally cast a `ScheduleDAGInstrs` to `ScheduleDAGMI`.
There is nothing actually specific to `ScheduleDAGMI` in `Topo`.
llvm-svn: 357239
The register index can only really be an SGPR. Lie that a VGPR index
is legal, and then rewrite the instruction in a waterfall loop to
handle the index.
llvm-svn: 357235
A shift and add/sub sequence is faster than a multiply by a constant. Because
the latency of a multiply is not huge, we only consider the following
worthwhile patterns.
```
(mul x, 2^N + 1) => (add (shl x, N), x)
(mul x, -(2^N + 1)) => -(add (shl x, N), x)
(mul x, 2^N - 1) => (sub (shl x, N), x)
(mul x, -(2^N - 1)) => (sub x, (shl x, N))
```
The cycle count or latency is subtarget-dependent, so we need to consider the
subtarget to determine whether or not to do this transformation.
The data type is also considered, since the multiply latency differs for
different types.
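As a rough sketch, the profitable multiplier shapes listed above can be recognized like this (hypothetical helper, assuming LLVM's MathExtras utilities; negative multipliers would be handled via their absolute value):
```cpp
#include "llvm/Support/MathExtras.h"
#include <cstdint>
// A positive multiplier C matches 2^N + 1 or 2^N - 1 exactly when C-1 or
// C+1 is a power of two; tiny constants are excluded as uninteresting.
bool isShiftAddMulCandidate(uint64_t C) {
  return C > 2 && (llvm::isPowerOf2_64(C - 1) || llvm::isPowerOf2_64(C + 1));
}
```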
Differential Revision: https://reviews.llvm.org/D58950
llvm-svn: 357233
Summary:
It does not currently make sense to use WebAssembly features in some functions
but not others, so this CL adds an IR pass that takes the union of all used
feature sets and applies it to each function in the module. This allows us to
prevent atomics from being lowered away if some function has opted in to using
them. When atomics is not enabled anywhere, we detect whether there exist any
atomic operations or thread local storage that would be stripped, and disallow
linking with objects that contain atomics if and only if atomics or tls are
stripped. When atomics is enabled, mark it as used but do not require it of
other objects in the link. These changes allow libraries that do not use atomics
to be built once and linked into both single-threaded and multithreaded
binaries.
Reviewers: aheejin, sbc100, dschuff
Subscribers: jgravelle-google, hiraditya, sunfish, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59625
llvm-svn: 357226
For the attached test case, unchecked addition of immediate starts and
ends overflows, as they can be arbitrary i64 constants.
Proof: https://rise4fun.com/Alive/Plqc
Reviewers: qcolombet, gilr, efriedma
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D59218
llvm-svn: 357217
For multi-dimensional array like below
int a[2][3];
the previous implementation generates BTF_KIND_ARRAY type
like below:
. element_type: int
. index_type: unsigned int
. number of elements: 6
This is not the best way to represent arrays, especially
when converting BTF back to headers, where users will see
int a[6];
instead.
This patch generates proper support for multi-dimensional arrays.
For "int a[2][3]", the two BTF_KIND_ARRAY types will be
generated:
Type #n:
. element_type: int
. index_type: unsigned int
. number of elements: 3
Type #(n+1):
. element_type: #n
. index_type: unsigned int
. number of elements: 2
The linux kernel already supports such a multi-dimensional
array representation properly.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D59943
llvm-svn: 357215
This patch has three related fixes to improve float literal lexing:
1. Make AsmLexer::LexDigit handle floats without a decimal point more
consistently.
2. Make AsmLexer::LexFloatLiteral print an error for floats which are
apparently missing an "e".
3. Make APFloat::convertFromString use binutils-compatible exponent
parsing.
Together, this fixes some cases where a float would be incorrectly
rejected, fixes some cases where the compiler would crash, and improves
diagnostics in some cases.
Patch by Brandon Jones.
Differential Revision: https://reviews.llvm.org/D57321
llvm-svn: 357214
Even if the interleaving transform would otherwise be legal, we shouldn't
introduce an interleaved load that is wider than the original load: it might
have undefined behavior.
It might be possible to perform some sort of mask-narrowing transform in
some cases (using a narrower interleaved load, then extending the
results using shufflevectors). But I haven't tried to implement that,
at least for now.
Fixes https://bugs.llvm.org/show_bug.cgi?id=41245 .
Differential Revision: https://reviews.llvm.org/D59954
llvm-svn: 357212
By extending OrderedBB to allow removing and replacing cached
instructions, we can preserve OrderedBBs in DSE easily. This eliminates
one source of quadratic compile time in DSE.
Fixes PR38829.
Reviewers: rnk, efriedma, hfinkel
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D59789
llvm-svn: 357208
If the caller can preserve the OBB, we can avoid recomputing the order
for each getDependency call.
Reviewers: efriedma, rnk, hfinkel
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D59788
llvm-svn: 357206
to unbreak the modular bots and its follow-up commit.
This reverts commit https://reviews.llvm.org/D59825
because it introduced a
fatal error: cyclic dependency in module 'LLVM_intrinsic_gen': LLVM_intrinsic_gen -> LLVM_IR -> LLVM_intrinsic_gen
llvm-svn: 357201
For 64-bit operations we should consider if the immediate can be made to fit
in an unsigned 32-bit immediate. For OR/XOR this allows us to load the immediate
with MOV32ri instead of movabsq. For AND this allows us to fold the immediate.
Differential Revision: https://reviews.llvm.org/D59867
llvm-svn: 357196
This avoids allocating a few KB of heap memory on startup, and instead
allocates these maps lazily. I noticed this while profiling LLD.
llvm-svn: 357192
This is probably the least important of our movmsk problems, but I'm starting
at the bottom to reduce distractions.
We were creating a select_cc which bypasses the select and bitmask codegen
optimizations that we have now. If we produce a compare+negate instead, we
allow things like neg/sbb carry bit hacks, and in all cases we avoid a cmov.
There's no partial register update danger in these sequences because we always
produce the zero-register xor ahead of the 'set' if needed.
There seems to be a missing fold for sext of a bool bit here:
negl %ecx
movslq %ecx, %rax
...but that's an independent transform.
Differential Revision: https://reviews.llvm.org/D59818
llvm-svn: 357172
Summary:
This adds a BranchFusion feature to replace the usage of MacroFusion
for AMD CPUs.
See D59688 for context.
Reviewers: andreadb, lebedev.ri
Subscribers: hiraditya, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59872
llvm-svn: 357171
Based on llvm-exegesis measurements.
Now that llvm-exegesis is ~2 orders of magnitude faster and a bit smarter,
it is now possible to continue cleanup of the scheduler model.
With this, there are no more latency inconsistencies for the
opcodes that produce stable measurements, and only a few inconsistencies
for unstable measurements (MMX_* opcodes, opcodes that llvm-exegesis
measures by chaining - CMP, TEST, BT, SETcc, CVT, MOV, etc.)
llvm-svn: 357169
If scalar truncates are free, attempt to pre-truncate build_vectors source operands.
Only attempt to do this before legalization as we often end up with truncations/extensions during build_vector lowering.
Differential Revision: https://reviews.llvm.org/D59654
llvm-svn: 357161
This is in preparation to a driver patch to add gcc 8's -fsanitize=pointer-compare and -fsanitize=pointer-subtract.
Disabled by default as this is still an experimental feature.
Reviewed By: morehouse, vitalybuka
Differential Revision: https://reviews.llvm.org/D59220
llvm-svn: 357157
With this change, the VPlan native path is triggered with the directive:
#pragma clang loop vectorize(enable)
There is no need to specify the vectorize_width(N) clause.
Patch by Francesco Petrogalli <francesco.petrogalli@arm.com>
Differential Revision: https://reviews.llvm.org/D57598
llvm-svn: 357156
G_SELECT uses a 1-bit scalar for the condition, and is currently
implemented with a plain CMPri against 0. This means that values such as
0x1110 are interpreted as true, when instead the higher bits should be
treated as undefined and therefore ignored. Replace the CMPri with a
TSTri against 0x1, which performs an implicit AND, yielding the expected
result.
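A small standalone illustration of the difference, using the 0x1110 value from above:
```cpp
#include <cstdio>
// Only bit 0 of a 1-bit condition is defined; higher bits must be masked.
int main() {
  unsigned Cond = 0x1110;
  bool WithCmp = (Cond != 0);       // CMPri against 0: true (wrong)
  bool WithTst = (Cond & 0x1) != 0; // TSTri against 0x1: false (right)
  std::printf("%d %d\n", WithCmp, WithTst); // prints "1 0"
}
```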
llvm-svn: 357153
These fixup kinds are not explicitly related to the code section. They
are there to signal how to apply the fixup.
Also, a couple of other minor wasm cleanups.
Differential Revision: https://reviews.llvm.org/D59908
llvm-svn: 357145
The issue here is that we actually allow CGSCC passes to mutate IR (and
therefore invalidate analyses) outside of the current SCC. At a minimum,
we need to support mutating parent and ancestor SCCs to support the
ArgumentPromotion pass which rewrites all calls to a function.
However, the analysis invalidation infrastructure is heavily based
around not needing to invalidate the same IR-unit at multiple levels.
With Loop passes for example, they don't invalidate other Loops. So we
need to customize how we handle CGSCC invalidation. Doing this without
gratuitously re-running analyses is even harder. I've avoided most of
these by using an out-of-band preserved set to accumulate the cross-SCC
invalidation, but it still isn't perfect in the case where the same SCC is
re-visited repeatedly *but* keeps coming off the worklist. It is unclear how
important this use case really is, but I wanted to call it out.
Another wrinkle is that in order for this to successfully propagate to
function analyses, we have to make sure we have a proxy from the SCC to
the Function level. That requires pre-creating the necessary proxy.
The motivating test case now works cleanly and is added for
ArgumentPromotion.
Thanks for the review from Philip and Wei!
Differential Revision: https://reviews.llvm.org/D59869
llvm-svn: 357137
The last reference to this function was removed from the ARM
td files in 2015 in rL225266.
Differential Revision: https://reviews.llvm.org/D59868
llvm-svn: 357130
If we know the 2 halves of an oversized zext-in-reg are the same,
don't create those halves independently.
I tried several different approaches to fold this, but it's difficult
to get right during legalization. In the default path, we are creating
a generic shuffle that looks like an unpack high, but it can get
transformed into a different mask (a blend), so it's not
straightforward to match that. If we try to fold after it actually
becomes an X86ISD::UNPCKH node, we can't be sure what the operand node
is - it might be a generic shuffle, or it could be some x86-specific op.
From the test output, we should be doing something like this for SSE4.1
as well, but I'd rather leave that as a follow-up since it involves
changing lowering actions.
Differential Revision: https://reviews.llvm.org/D59777
llvm-svn: 357129
This is not exactly NFC because it should make further combines
of MOVMSK easier to match, but there should be no outward differences
because we have isel patterns in place specifically to allow this. See:
// Also support integer VTs to avoid a int->fp bitcast in the DAG.
llvm-svn: 357128
This makes more sense as a place to initialize these. I don't think runOnMachineFunction was overridden when these cached values were originally created.
llvm-svn: 357123
When lowering a load or store for TypeWidenVector, the type legalizer
would use a single load or store if the associated integer type was legal
or promoted. E.g. it loads a v4i8 as an i32 if i32 is legal/promotable.
(See https://reviews.llvm.org/rL236528 for reference.)
This applies that behaviour to vector types. If the vector type is
TypePromoteInteger, the element type is going to be TypePromoteInteger
as well, which will lead to a single promoting load rather than N
individual promoting loads. For instance, if we have a v3i1, we would
now have a load of v4i1 instead of 3 loads of i1.
Patch by Guillaume Marques. Thanks!
Differential Revision: https://reviews.llvm.org/D56201
llvm-svn: 357120
Split off from D59749. This adds isWrappedSet() and
isUpperSignWrapped() set with the same behavior as isSignWrappedSet()
and isUpperWrapped() for the respectively other domain.
The methods isWrappedSet() and isSignWrappedSet() will not consider
ranges of the form [X, Max] == [X, 0) and [X, SignedMax] == [X, SignedMin)
to be wrapping, while isUpperWrapped() and isUpperSignWrapped() will.
Also replace the checks in getUnsignedMin() and friends with method
calls that implement the same logic.
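A short sketch of the distinction, assuming the post-patch ConstantRange API:
```cpp
#include "llvm/IR/ConstantRange.h"
using namespace llvm;
// [200, 0) on 8 bits denotes the value set {200, ..., 255}: the upper bound
// wraps in the half-open representation, but the set of values does not.
void wrappedSetExample() {
  ConstantRange CR(APInt(8, 200), APInt(8, 0));
  bool SetWraps = CR.isWrappedSet();     // false under the new semantics
  bool UpperWraps = CR.isUpperWrapped(); // true: upper bound wrapped past Max
  (void)SetWraps;
  (void)UpperWraps;
}
```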
llvm-svn: 357112
Summary:
A recent fix (r355751) caused a compile time regression because setting
the ModifiedDT flag in optimizeSelectInst means that each time a select
instruction is optimized the function walk in runOnFunction stops and
restarts again (which was needed to build a new DT before we started
building it lazily in r356937). Now that the DT is built lazily, a
simple fix is to just reset the DT at this point, rather than restarting
the whole function walk.
In the future other places that set ModifiedDT may want to switch to
just resetting the DT directly. But that will require an evaluation to
ensure that they don't otherwise need to restart the function walk.
Reviewers: spatel
Subscribers: jdoerfert, llvm-commits, xur
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59889
llvm-svn: 357111
ARMBaseInstrInfo::getNumLDMAddresses is making bad assumptions about the
memory operands of load and store-multiple operations. This doesn't
really fix the problem properly, but it's enough to prevent crashing,
at least.
Fixes https://bugs.llvm.org/show_bug.cgi?id=41231 .
Differential Revision: https://reviews.llvm.org/D59834
llvm-svn: 357109
Split out from D59749. The current implementation of isWrappedSet()
doesn't do what it says on the tin, and treats ranges like
[X, Max] as wrapping, because they are represented as [X, 0) when
using half-inclusive ranges. This also makes it inconsistent with
the semantics of isSignWrappedSet().
This patch renames isWrappedSet() to isUpperWrapped(), in preparation
for the introduction of a new isWrappedSet() method with corrected
behavior.
llvm-svn: 357107
If there were only dbg_values in the block, recede would hit the
beginning of the block and try to use the dbg_value as a real
instruction.
llvm-svn: 357105
Start using the uadd.sat and usub.sat intrinsics for the existing
canonicalizations. These intrinsics should optimize better than
expanded IR, have better handling in the X86 backend and should
be no worse than expanded IR in other backends, as far as we know.
rL357012 already introduced use of uadd.sat for the add+umin pattern.
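A minimal sketch of the canonical form now produced, assuming an IRBuilder positioned at the insertion point (the helper name is illustrative):
```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"
using namespace llvm;
// Emit the saturating intrinsic directly instead of the expanded
// compare+select pattern.
Value *canonicalUAddSat(IRBuilder<> &B, Value *X, Value *Y) {
  return B.CreateBinaryIntrinsic(Intrinsic::uadd_sat, X, Y);
}
```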
Differential Revision: https://reviews.llvm.org/D58872
llvm-svn: 357103
The artifact combiners push instructions which have been marked for deletion
onto an list for the legalizer to deal with on return. However, for trunc(ext)
combines the combiner routine recursively calls itself. When it does this the
dead instructions list may not be empty, and the other combiners don't expect
to be dealing with essentially invalid MIR (multiple vreg defs etc).
This change fixes it by ensuring that the dead instructions are processed on
entry into tryCombineInstruction.
As a result, this fix exposed a few places in tests where G_TRUNC instructions
were not being deleted even though they were dead.
Differential Revision: https://reviews.llvm.org/D59892
llvm-svn: 357101
This reapplies r356149, using the correct overload of findUnusedReg
which passes the current iterator.
This worked most of the time, because the scavenger iterator was moved
at the end of the frame index loop in PEI. This would fail if the
spill was the first instruction. This was further hidden by the fact
that the scavenger wasn't passed in for normal frame index
elimination.
llvm-svn: 357098
Haswell CPUs have special support for SHLD/SHRD with the same register for both sources. Such an instruction will go to the rotate/shift unit on port 0 or 6. This gives it 1 cycle latency and 0.5 cycle reciprocal throughput. When the register is not the same, it becomes a 3 cycle operation on port 1. Sandybridge and Ivybridge always have 1 cyc latency and 0.5 cycle reciprocal throughput for any SHLD.
When FastSHLDRotate feature flag is set, we try to use SHLD for rotate by immediate unless BMI2 is enabled. But MachineCopyPropagation can look through a copy and change one of the sources to be different. This will break the hardware optimization.
This patch adds a pseudo instruction to hide the second source input until after register allocation and MachineCopyPropagation. I'm not sure if this is the best way to do this or if there's some other way we can make this work.
Fixes PR41055
Differential Revision: https://reviews.llvm.org/D59391
llvm-svn: 357096
This patch removes an overly conservative check that would prevent
simplifying copies when the value we were tracking would go through
several subregister indices.
Indeed, the intent of this check was to not track values whenever
we have to compose subregisters, but what the check was actually
doing was bailing any time we saw a second subreg, even if that
second subreg would actually be the new source of truth (as opposed
to a part of that subreg).
Differential Revision: https://reviews.llvm.org/D59891
llvm-svn: 357095
This patch fixes an assembler bug that allowed SVE vector registers to contain a
type suffix when not expected. The SVE unpredicated movprfx instruction is the
only instruction affected.
The following are examples of what was previously valid:
movprfx z0.b, z0.b
movprfx z0.b, z0.s
movprfx z0, z0.s
These instructions are now erroneous.
Patch by Cullen Rhodes (c-rhodes)
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D59636
llvm-svn: 357094
Also includes one example of how this transform is unsound. This isn't
verifying that the copies are used in the control flow intrinsic patterns.
Also add option to disable exec mask opt pass. Since this pass is
unsound, it may be useful to turn it off until it is fixed.
llvm-svn: 357091
Currently this is called before the frame size is set on the
function. For AMDGPU, the scavenger is used for large frames where
part of the offset needs to be materialized in a register, so
estimating the frame size is useful for knowing whether the scavenger
is useful.
llvm-svn: 357087
The AMDGPU implementation of getReservedRegs depends on
MachineFunctionInfo fields that are parsed from the YAML section. This
was reserving the wrong register since it was setting the reserved
regs before parsing the correct one.
Some tests were relying on the default reserved set for the assumed
default calling convention.
llvm-svn: 357083
The .BTF.ext FuncInfoTable and LineInfoTable contain
information organized per ELF section. Current definition
of FuncInfoTable/LineInfoTable is:
std::unordered_map<uint32_t, std::vector<BTFFuncInfo>> FuncInfoTable
std::unordered_map<uint32_t, std::vector<BTFLineInfo>> LineInfoTable
where the key is the section name offset in the string table.
The unordered_map may cause the section output order
to differ between platforms.
The same applies to the unordered map definition of
std::unordered_map<std::string, std::unique_ptr<BTFKindDataSec>>
DataSecEntries
where BTF_KIND_DATASEC entries may be ordered differently
on different platforms.
This patch fixed the issue by using std::map.
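A sketch of the container change (BTFFuncInfo stands in for the in-tree struct of the same name; its fields are elided here):
```cpp
#include <cstdint>
#include <map>
#include <vector>
struct BTFFuncInfo;
// std::map iterates keys in ascending order, so the emitted section order
// no longer depends on the platform's hash implementation.
using FuncInfoTableTy =
    std::map<uint32_t, std::vector<BTFFuncInfo>>; // was std::unordered_map
```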
Test static-var-derived-type.ll is modified to generate two
DataSec's which will ensure the ordering is the same for all
supported platforms.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 357077
There is no reason why stages should be visited in reverse order.
This patch allows the definition of stages that push instructions forward from
their cycleEnd() routine.
llvm-svn: 357074
Rework BaseIndexOffset and isAlias to fully work with lifetime nodes
and fold in lifetime alias analysis.
This is mostly NFC.
Reviewers: courbet
Reviewed By: courbet
Subscribers: hiraditya, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59794
llvm-svn: 357070
Original commit by Ayonam Ray.
This commit adds a regression test for the issue discovered in the
previous commit: that the range check for the jump table can only be
omitted if the fall-through destination of the jump table is
unreachable, which isn't necessarily true just because the default of
the switch is unreachable.
This addresses the missing optimization in PR41242.
> During the lowering of a switch that would result in the generation of a
> jump table, a range check is performed before indexing into the jump
> table, for the switch value being outside the jump table range and a
> conditional branch is inserted to jump to the default block. In case the
> default block is unreachable, this conditional jump can be omitted. This
> patch implements omitting this conditional branch for unreachable
> defaults.
>
> Differential Revision: https://reviews.llvm.org/D52002
> Reviewers: Hans Wennborg, Eli Freidman, Roman Lebedev
llvm-svn: 357067
but the implementation is hard to extend. It doesn't currently have an
easy way to support intrinsics that, for example, lack a rounding mode.
This will be needed for impending new constrained intrinsics.
This code is split out of D55897 <https://reviews.llvm.org/D55897>, which
itself was split out of D43515 <https://reviews.llvm.org/D43515>.
Reviewed by: arsenm
Differential Revision: http://reviews.llvm.org/D59830
llvm-svn: 357065
Cleanup isAArch64FrameOffsetLegal by:
- Merging the large switch statement to reuse AArch64InstrInfo::getMemOpInfo().
- Using AArch64InstrInfo::getUnscaledLdSt() to determine whether an instruction
has an unscaled variant.
- Simplifying the logic that calculates the offset to fit the immediate.
Reviewers: paquette, evandro, eli.friedman, efriedma
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D59636
llvm-svn: 357064
This patch mirrors the change made to the Unix equivalent in
r351916. This in turn fixes bugs related to the use of FileOutputBuffer
to output to "-", i.e. stdout, on Windows.
Differential Revision: https://reviews.llvm.org/D59663
llvm-svn: 357058
Enable SSE41 ZERO_EXTEND_VECTOR_INREG shuffle combines - for the PMOVZX(PSHUFD(V)) -> UNPCKH(V,0) pattern we reduce the shuffles (port5-bottleneck on Intel) at the expense of creating a zero (pxor v,v) and an extra register move - which is a good trade off as these are pretty cheap and in most cases it doesn't increase register pressure.
This also exposed a missed opportunity to combine to ZERO_EXTEND_VECTOR_INREG with folded loads - even if we're in the float domain.
........
Causes PR41249
llvm-svn: 357057
getAsCarry() checks that the input argument is a carry-producing node before
allowing a transformation to addcarry. This patch adds a check to make sure
that the carry-producing node is legal. If it is not, it may not remain in a
form that is manageable by the target backend. The test case caused a
compilation failure during instruction selection for this reason on SystemZ.
Patch by Ulrich Weigand.
Review: Sanjay Patel
https://reviews.llvm.org/D59822
llvm-svn: 357052
Previously we manually selected the AND/OR/XOR with immediate and the SHL (or ADD if the shift is 1). But this was missing out on the opportunity to use a 64-bit AND with a 32-bit immediate and possibly other isel tricks we have built into the tables.
Instead, insert the new nodes into the DAG using insertDAGNode and allow them each to be selected through the normal table.
llvm-svn: 357049
This patch lays the groundwork for extending the generic machine scheduler by providing a PPC-specific implementation.
There are no functional changes as this is an incremental patch that simply provides the necessary overrides which just
encapsulate the behavior of the generic scheduler. Subsequent patches will add specific behavior.
Differential Revision: https://reviews.llvm.org/D59284
llvm-svn: 357047
We were manually outputting the code we would get from selecting ANY_EXTEND. We
can save some code by just letting an ANY_EXTEND go through isel on its own.
llvm-svn: 357045
A section containing metadata on remark diagnostics will be emitted if
the flag (-mllvm) -remarks-section is present.
For now, the metadata is:
* a magic number for remarks: "REMARKS\0"
* the version number: a little-endian uint64_t
* the absolute file path to the serialized remark diagnostics: a
null-terminated string.
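A hypothetical struct, for illustration only, showing the layout listed above (the real emission writes the fields back to back, and the path is a variable-length trailer):
```cpp
#include <cstdint>
struct RemarksSectionHeader {
  char Magic[8];    // "REMARKS\0"
  uint64_t Version; // little-endian
  // Followed by: the null-terminated absolute path to the serialized
  // remark diagnostics file.
};
```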
Differential Revision: https://reviews.llvm.org/D59571
llvm-svn: 357043
Split off from D59749. This uses a simpler and more efficient
implementation of isSignWrappedSet(), and considers full sets
as non-wrapped, to be consistent with isWrappedSet(). Otherwise
the behavior is unchanged.
There are currently only two users of this function and both already
check for isFullSet() || isSignWrappedSet(), so this is not going to
cause a change in overall behavior.
Differential Revision: https://reviews.llvm.org/D59848
llvm-svn: 357039
This patch splits the huge function PPCBranchSelector.cpp:runOnMachineFunction into several smaller functions.
No functional change.
Differential Revision: https://reviews.llvm.org/D59623
llvm-svn: 357033
When splitting a subrange we end up with two different subranges covering
two different, non overlapping, lanes.
As part of this splitting the VNIs of the original live-range need
to be dispatched to the subranges according to which lanes they are
actually defining.
Prior to this patch we were assuming that all values were defining
all lanes. This was wrong as demonstrated by llvm.org/PR40835.
Differential Revision: https://reviews.llvm.org/D59731
llvm-svn: 357032
We have the folds for fadd/fsub/fmul already in DAGCombiner,
so it may be possible to remove that code if we can guarantee that
these ops are zapped before they can exist.
llvm-svn: 357029
The UseVSXReg flag can be safely removed and the code cleaned up.
Patch By: Yi-Hong Liu
Differential Revision: https://reviews.llvm.org/D58685
llvm-svn: 357028
This reverts commit rL357020.
The commit broke the test llvm/test/tools/llvm-objdump/embedded-source.test
on some builds including clang-ppc64be-linux-multistage,
clang-s390x-linux, clang-with-lto-ubuntu, clang-x64-windows-msvc,
llvm-clang-lld-x86_64-scei-ps4-windows10pro-fast (and others).
llvm-svn: 357026
This change implements lowering of references to global symbols in PIC mode
using a new @GOT reference type. @GOT references can be used with function or
data symbol names combined with the get_global instruction. In this case
the linker will insert the wasm global that stores the address of the
symbol (either in memory for data symbols or in the wasm table for
function symbols).
For now I'm continuing to use the R_WASM_GLOBAL_INDEX_LEB relocation
type for this type of reference which means that this relocation type
can refer to either a global or a function or data symbol. We could
choose to introduce specific relocation types for GOT entries in the
future. See the current dynamic linking proposal:
https://github.com/WebAssembly/tool-conventions/blob/master/DynamicLinking.md
Differential Revision: https://reviews.llvm.org/D54647
llvm-svn: 357022
Reapply rL356941 after regenerating the object file in the failing test
llvm/test/tools/llvm-objdump/embedded-source.test from source.
Original commit message:
[llvm] Prevent duplicate files in debug line header in dwarf 5.
Motivation: In previous dwarf versions, file name indexes started from 1, and
the primary source file was not explicit. Dwarf 5 standard (6.2.4) prescribes
the primary source file to be explicitly given an entry with an index number 0.
The current implementation honors the specification by just duplicating the
main source file, once with index number 0, and later maybe with another
index number. While this is compliant with the letter of the standard, the
duplication causes problems for consumers of this information such as lldb.
(Some files are duplicated, where only some of them have a line table although
all refer to the same file)
With this change, dwarf 5 debug line section files always start from 0, and
the zeroth entry is not duplicated whenever possible. This requires different
handling of dwarf 4 and dwarf 5 during generation (e.g. when a function returns
an index zero for a file name, it signals an error in dwarf 4, but not in dwarf 5)
However, I think the minor complication is worth it, because it enables all
consumers (lldb, gdb, dwarfdump, objdump, and so on) to treat all files in the
file name list homogeneously.
Tags: #llvm, #debug-info
Differential Revision: https://reviews.llvm.org/D59515
llvm-svn: 357018
Summary:
`WebAssembly::analyzeBranch` now does not analyze anything if the
function is CFG stackified. We were previously doing similar things by
checking if a branch's operand is whether an integer or an MBB, but this
failed to bail out when a BB did not have any terminators.
Consider this case:
```
bb0:
try $label0
call @foo // unwinds to %ehpad
bb1:
...
br $label0 // jumps to %cont. can be deleted
ehpad:
catch
...
cont:
end_try
```
Here `br $label0` will be deleted in CFGStackify's
`removeUnnecessaryInstrs` function, because we jump to the %cont block
even without the branch. But in this case, MachineVerifier fails to
verify this, because `ehpad` is not a successor of `bb1` even if `bb1`
does not have any terminators. MachineVerifier incorrectly thinks `bb1`
falls through to the next block.
This pass now consistently rejects all analysis after CFGStackify
whether a BB has terminators or not, also making the MachineVerifier
work. (MachineVerifier does not try to verify relationships between BBs
if `analyzeBranch` fails, the behavior we want after CFGStackify.)
This also adds a new option `-wasm-disable-ehpad-sort` for testing. This
option helps create the sorted order we want to test, and without the
fix in this patch, the tests in cfg-stackify-eh.ll fail at
MachineVerifier with `-wasm-disable-ehpad-sort`.
Reviewers: dschuff
Subscribers: sunfish, sbc100, jgravelle-google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59740
llvm-svn: 357015
This is the last step towards solving the examples shown in:
https://bugs.llvm.org/show_bug.cgi?id=14613
With this change, x86 should end up with psubus instructions
when those are available.
All known codegen issues with expanding the saturating intrinsics
were resolved with:
D59006 / rL356855
We also have some early evidence in D58872 that using the intrinsics
will lead to better perf. If some target regresses from this, custom
lowering of the intrinsics (as in the above for x86) may be needed.
llvm-svn: 357012
Summary:
This adds the `CFGStackified` field and its serialization to
WebAssemblyFunctionInfo.
Reviewers: dschuff
Subscribers: sunfish, sbc100, jgravelle-google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59747
llvm-svn: 357011
Summary:
The framework for supporting target-specific MachineFunctionInfo was
added in r356215. This adds serialization support for
WebAssemblyFunctionInfo on top of that. This patch only adds the
framework and does not actually serialize anything at this point; we
have to add YAML mapping later for the fields in WebAssemblyFunctionInfo
we want to serialize if necessary.
Reviewers: dschuff, arsenm
Subscribers: sunfish, wdng, sbc100, jgravelle-google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59737
llvm-svn: 357009
Summary:
When TRY and LOOP markers are in the same BB and END_TRY and END_LOOP
markers are in the same BB, END_TRY should be _before_ END_LOOP, because
LOOP is always before TRY if they are in the same BB. (TRY is placed in
the latest possible position, whereas LOOP is in the earliest possible
position.)
Reviewers: dschuff
Subscribers: sunfish, sbc100, jgravelle-google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59751
llvm-svn: 357008
Summary:
Previously we placed all TRY/END_TRY markers before placing
BLOCK/END_BLOCK markers. This couldn't handle this case:
```
bb0:
br bb2
bb1: // nearest common dominator of bb3 and bb4
br_if ... bb3
br bb4
bb2:
...
bb3:
call @foo // unwinds to ehpad
bb4:
call @bar // unwinds to ehpad
ehpad:
catch
...
```
When we placed TRY markers, we placed it in bb1 because it is the
nearest common dominator of bb3 and bb4. But because bb0 jumps to bb2,
when we placed block markers, we ended up with interleaved scopes like
```
block
try
end_block
catch
end_try
```
which was not correct.
This patch fixes the bug by placing BLOCK and TRY markers in one pass
while iterating over the BBs in a function. This also adds some more
routines to `placeTryMarkers`, because we now have to assume that there
can be previously placed BLOCK and END_BLOCK markers.
Reviewers: dschuff
Subscribers: sunfish, sbc100, jgravelle-google, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59739
llvm-svn: 357007
Found by inspection when looking at the debug output of MCA.
This problem was latent, and none of the upstream models were affected by it.
No functional change intended.
llvm-svn: 357000
Various SelectionDAG non-combine operations (e.g. the getNode smart
constructor and legalization) may leave dangling nodes by applying
optimizations or not fully pruning unused result values. This can
result in nodes that are never added to the worklist and therefore can
not be pruned.
Add a node inserter (complementing the current node deleter) to make
sure such nodes have a chance of being pruned.
Many minor changes, mostly positive.
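A sketch of the mechanism; the real listener lives inside DAGCombiner.cpp,
and the names here follow the description above rather than quoting the
patch:
```
// Listen for node creation and put every newly created node on the combiner
// worklist, mirroring how the existing deleter listener removes nodes, so
// dangling nodes still get a chance to be pruned.
class WorklistInserter : public SelectionDAG::DAGUpdateListener {
  DAGCombiner &DC;

public:
  WorklistInserter(DAGCombiner &dc, SelectionDAG &DAG)
      : SelectionDAG::DAGUpdateListener(DAG), DC(dc) {}

  void NodeInserted(SDNode *N) override { DC.AddToWorklist(N); }
};
```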
llvm-svn: 356996
Adds two patterns to improve the codegen of GPR value comparisons with small
constants. Instead of first loading the constant into another register and then
doing an XOR of those registers, these patterns directly use the constant as an
XORI immediate.
llvm-svn: 356990
This helps us relax the extension of a lot of scalar elements before they are inserted into a vector.
It exposes an issue in DAGCombiner::convertBuildVecZextToZext, as some or all of the zero-extensions may be relaxed to ANY_EXTEND, so we need to handle that case to avoid a couple of AVX2 VPMOVZX test regressions.
Once this is in it should be easier to fix a number of remaining failures to fold loads into VBROADCAST nodes.
Differential Revision: https://reviews.llvm.org/D59484
llvm-svn: 356989
DenseMap iteration order is not guaranteed, use MapVector instead.
Fix provided by srhines.
Differential Revision: https://reviews.llvm.org/D59807
llvm-svn: 356988
When one mistakenly specifies 'def' instead of using 'defm',
the error message is quite misleading: 'Couldn't find class...'
Instead, it should recommend using defm if a multiclass of the
same name exists.
Reviewed By: hfinkel
Differential Revision: https://reviews.llvm.org/D59294
llvm-svn: 356985
This makes GNU binutils not reject the libraries outright.
GNU ld handles weak externals slightly differently though, so it
can't use them for aliases in import libraries, but this makes GNU
ld able to use the rest of the import libraries.
LLD already accepted object files with machine type 0, aka
IMAGE_FILE_MACHINE_UNKNOWN.
Differential Revision: https://reviews.llvm.org/D59742
llvm-svn: 356982
We were using OrigNBits, but that placed all the new nodes before the node we used to start the control computation. This caused some node earlier than the sequence we inserted to be selected before the sequence we created. We want our new sequence to be selected first, since it depends on OrigNBits.
I don't have a test case. Found by reviewing the code.
llvm-svn: 356979
Within the MatchFPUWaitAlias function, Operands[0] is potentially overwritten, leading to &Op referencing a deleted object. To fix this, assign the reference after the call to the function.
Differential Revision: https://reviews.llvm.org/D57376
llvm-svn: 356973
This error can only happen if an unfinished operation is at EOF.
Patch by Brandon Jones
Differential Revision: https://reviews.llvm.org/D57379
llvm-svn: 356972
This should hopefully lead to minor improvements in code generation, and
more accurate spill/reload comments in assembly.
Also fix isLoadFromStackSlotPostFE/isStoreToStackSlotPostFE so they
don't lead to misleading assembly comments for merged memory operands;
this is technically orthogonal, but in practice the relevant memory
operand lists don't show up without this change.
Differential Revision: https://reviews.llvm.org/D59713
llvm-svn: 356963
This is generally more readable due to the way the assembler aliases
work.
(This causes a lot of test changes, but it's not really as scary as it
looks at first glance; it's just mechanically changing a bunch of checks
for orr to check for mov instead.)
Differential Revision: https://reviews.llvm.org/D59720
llvm-svn: 356954
These were defaulting to true, but they are just wrappers around bit
operations. This avoids regressions in the exec mask optimization
passes in a future commit.
llvm-svn: 356952
Summary: Add a binding to Function::lookupIntrinsicID so clients don't have to go searching the ID table themselves.
Reviewers: whitequark, deadalnix
Reviewed By: whitequark
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59697
llvm-svn: 356948
Summary:
Motivation: In previous dwarf versions, file name indexes started from 1, and
the primary source file was not explicit. Dwarf 5 standard (6.2.4) prescribes
the primary source file to be explicitly given an entry with an index number 0.
The current implementation honors the specification by just duplicating the
main source file, once with index number 0, and later maybe with another
index number. While this is compliant with the letter of the standard, the
duplication causes problems for consumers of this information such as lldb.
(Some files are duplicated, where only some of them have a line table although
all refer to the same file)
With this change, dwarf 5 debug line section files always start from 0, and
the zeroth entry is not duplicated whenever possible. This requires different
handling of dwarf 4 and dwarf 5 during generation (e.g. when a function returns
an index zero for a file name, it signals an error in dwarf 4, but not in dwarf 5)
However, I think the minor complication is worth it, because it enables all
consumers (lldb, gdb, dwarfdump, objdump, and so on) to treat all files in the
file name list homogeneously.
Reviewers: dblaikie, probinson, aprantl, espindola
Reviewed By: probinson
Subscribers: emaste, jvesely, nhaehnle, aprantl, javed.absar, arichardson, hiraditya, MaskRay, rupprecht, jdoerfert, llvm-commits
Tags: #llvm, #debug-info
Differential Revision: https://reviews.llvm.org/D59515
llvm-svn: 356941
As discussed on D59738, this generalizes reorderInputsAccordingToOpcode to handle multiple + non-commutative instructions so we can get rid of reorderAltShuffleOperands and make use of the extra canonicalizations that reorderInputsAccordingToOpcode brings.
Differential Revision: https://reviews.llvm.org/D59784
llvm-svn: 356939
First half of PR40800, this patch adds DAG undef handling to icmp instructions to match the behaviour in llvm::ConstantFoldCompareInstruction and SimplifyICmpInst, this permits constant folding of vector comparisons where some elements had been reduced to UNDEF (by SimplifyDemandedVectorElts etc.).
This involved a lot of tweaking to reduced tests, as bugpoint loves to reduce icmp arguments to undef.
Differential Revision: https://reviews.llvm.org/D59363
llvm-svn: 356938
Summary:
In r355512 CGP was changed to build the DominatorTree only once per
function traversal, to avoid repeatedly building it each time it was
accessed. This solved one compile time issue but introduced another. In
the second case, we now were building the DT unnecessarily many times
when we performed many function traversals (i.e. more than once per
function when running CGP because of changes made each time).
Change to saving the DT in the CodeGenPrepare object, and building it
lazily when needed. It is reset whenever we need to rebuild it.
In the case that exposed the issue, there are 617 functions, and we walk
them (i.e. execute the "while (MadeChange)" loop in runOnFunction) a
total of 12083 times (so previously we were building the DT 12083
times). With this patch we only build the DT 844 times (average of 1.37
times per function). We dropped the total time to compile this file from
538.11s without this patch to 339.63s with it.
There is still an issue as CGP is taking much longer than all other
passes even with this patch, and before a recent compiler release cut at
r355392 the total time to this compile was only 97 sec with a huge
reduction in CGP time. I suspect that one of the other recent changes to
CGP led to iterating each function many more times on average, but I
need to do some more investigation.
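A minimal sketch of the lazy-build approach, assuming member names like
these (the patch keeps the DominatorTree in the CodeGenPrepare object;
invalidateDT() is a hypothetical helper for illustration):
```
// Build the DominatorTree lazily on first use instead of once per traversal.
DominatorTree &CodeGenPrepare::getDT(Function &F) {
  if (!DT)
    DT = llvm::make_unique<DominatorTree>(F);
  return *DT;
}

// Reset whenever a transformation invalidates the CFG; the next getDT()
// call rebuilds it on demand.
void CodeGenPrepare::invalidateDT() { DT.reset(); }
```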
Reviewers: spatel
Subscribers: jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59696
llvm-svn: 356937
I think this is correct, but may not necessarily be the correct fix
for the assertion I'm really trying to solve. If a scheduling region
was found that only has dbg_value instructions, the RegPressure
tracker would end up in an inconsistent state because it would skip
over any debug instructions and point to an instruction outside of the
scheduling region. It may still be possible for this to happen if
there are some real schedulable instructions between dbg_values, but I
haven't managed to break this.
The testcase is extremely sensitive and I'm not sure how to make it
more resistant to future scheduler changes that would avoid stressing
this situation.
llvm-svn: 356926
Remove attempts to commute non-Instructions to the LHS - the codegen changes appear to rely on chance more than anything else and also have a tendency to fight existing instcombine canonicalization which moves constants to the RHS of commutable binary ops.
This is prep work towards:
(a) reusing reorderInputsAccordingToOpcode for alt-shuffles and removing the similar reorderAltShuffleOperands
(b) improving reordering to optimized cases with commutable and non-commutable instructions to still find splat/consecutive ops.
Differential Revision: https://reviews.llvm.org/D59738
llvm-svn: 356913
Following r354972 the Intel JIT Listener would not report line table
information because the section indices did not match. There was
a similar issue with the PerfJitEventListener. This change performs
the section index lookup when building the object address used to
query the line table information.
Differential Revision: https://reviews.llvm.org/D59490
llvm-svn: 356895
Move selectCopy into MipsInstructionSelector class.
Select copy for arguments from FPRBRegBank for MIPS32.
Differential Revision: https://reviews.llvm.org/D59644
llvm-svn: 356886
Add floating point register bank for MIPS32.
Implement getRegBankFromRegClass for float register classes.
Differential Revision: https://reviews.llvm.org/D59643
llvm-svn: 356883
Lower float and double arguments in registers for MIPS32.
When a float/double argument is passed through GPR registers,
select the appropriate move instruction.
Differential Revision: https://reviews.llvm.org/D59642
llvm-svn: 356882
We currently use only VLDR/VSTR for all 64-bit loads/stores, so the
memory operands must be word-aligned. Mark aligned operations as legal
and narrow non-aligned ones to 32 bits.
While we're here, also mark non-power-of-2 loads/stores as unsupported.
llvm-svn: 356872
Normally when the nodes we use here (AND32ri8, for example) are selected, their
immediates are just converted from ConstantSDNode to TargetConstantSDNode
without changing the VT from the original operation VT. So we should still be
emitting them with the operation VT.
Theoretically this could expose more accurate opportunities for CSE.
llvm-svn: 356869
We were using this to create an AND32ri8 node from a 64-bit and, but that node
normally still uses a 32-bit immediate. So we should just truncate the existing
immediate to i32. We already verified it has the same value in bits 31:7.
llvm-svn: 356868
Enable SSE41 ZERO_EXTEND_VECTOR_INREG shuffle combines - for the PMOVZX(PSHUFD(V)) -> UNPCKH(V,0) pattern we reduce the shuffles (port5-bottleneck on Intel) at the expense of creating a zero (pxor v,v) and an extra register move - which is a good trade off as these are pretty cheap and in most cases it doesn't increase register pressure.
This also exposed a missed opportunity to use combine to ZERO_EXTEND_VECTOR_INREG with folded loads - even if we're in the float domain.
llvm-svn: 356864
Class `RegionInfo` was `SortUnitInfo` before, so the variables were
named `SUI`. Now the class name is `RegionInfo`, so this renames `SUI`
to `RI`, matching the class name.
llvm-svn: 356861
An i16 bswap can be implemented with an i16 rotate by 8. We previously emitted
a shift and OR sequence that DAG combine should be able to turn back into
rotate. But we might as well go there directly. If rotate isn't legal,
LegalizeDAG should further legalize it to either the opposite rotate, or the
shift and OR pattern.
I don't know of any way to get the existing DAG combine reliance to fail. So
I don't know any way to add new tests for this that wouldn't have worked
previously.
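A sketch of the direct expansion in SelectionDAG terms; the helper name and
shift type parameter are illustrative:
```
// (bswap i16 X) --> (rotl i16 X, 8): swap the two bytes with one rotate.
SDValue expandBSWAP16(SelectionDAG &DAG, SDValue Op, const SDLoc &dl,
                      EVT ShiftVT) {
  return DAG.getNode(ISD::ROTL, dl, MVT::i16, Op,
                     DAG.getConstant(8, dl, ShiftVT));
}
```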
llvm-svn: 356860
Just enable this for AVX for now as SSE41 introduces extra register moves for the PMOVZX(PSHUFD(V)) -> UNPCKH(V,0) pattern (but otherwise helps reduce port5 usage on Intel targets).
Only AVX support is required for PR40685 as the issue is due to 8i8->8i32 zext shuffle leftovers.
llvm-svn: 356858
This is yet another step towards solving PR14613:
https://bugs.llvm.org/show_bug.cgi?id=14613
uaddsat X, Y --> (X >u (X + Y)) ? -1 : X + Y
usubsat X, Y --> (X >u Y) ? X - Y : 0
We can't count on a sane vector ISA, so override the default (umin/umax)
expansion of unsigned add/sub saturate in cases where we do not have umin/umax.
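A sketch of the override for the uaddsat case, written as a free-standing
SelectionDAG helper (name and parameters are illustrative; usubsat is
analogous):
```
// uaddsat X, Y --> (X >u (X + Y)) ? -1 : X + Y
SDValue expandUADDSAT(SelectionDAG &DAG, SDValue X, SDValue Y, EVT VT,
                      EVT CCVT, const SDLoc &dl) {
  SDValue Sum = DAG.getNode(ISD::ADD, dl, VT, X, Y);
  // Unsigned wrap happened iff X is greater than the wrapped sum.
  SDValue Overflow = DAG.getSetCC(dl, CCVT, X, Sum, ISD::SETUGT);
  SDValue AllOnes = DAG.getAllOnesConstant(dl, VT);
  return DAG.getSelect(dl, VT, Overflow, AllOnes, Sum);
}
```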
Differential Revision: https://reviews.llvm.org/D59006
llvm-svn: 356855
Remove the I.getOperand() calls from inside shouldReorderOperands - reorderInputsAccordingToOpcode should handle the creation of the operand lists and shouldReorderOperands should just check to see whether the i'th element should be commuted.
llvm-svn: 356854
This adds ConstantRange::getFull(BitWidth) and
ConstantRange::getEmpty(BitWidth) named constructors as more readable
alternatives to the current ConstantRange(BitWidth, /* full */ false)
and similar. Additionally private getFull() and getEmpty() member
functions are added which return a full/empty range with the same bit
width -- these are commonly needed inside ConstantRange.cpp.
The IsFullSet argument in the ConstantRange(BitWidth, IsFullSet)
constructor is now mandatory for the few usages that still make use of it.
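A usage sketch of the new named constructors:
```
// Readable replacements for ConstantRange(BitWidth, /*isFullSet=*/...).
ConstantRange Full = ConstantRange::getFull(32);   // every 32-bit value
ConstantRange Empty = ConstantRange::getEmpty(32); // no 32-bit values
assert(Full.isFullSet() && Empty.isEmptySet());
```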
Differential Revision: https://reviews.llvm.org/D59716
llvm-svn: 356852
[Symbolizer] Add getModuleSectionIndexForAddress() helper routine
The https://reviews.llvm.org/D58194 patch changed the symbolizer interface.
In particular, it now requires not only an Address but also a SectionIndex.
Note object::SectionedAddress parameter:
Expected<DILineInfo> symbolizeCode(const std::string &ModuleName,
object::SectionedAddress ModuleOffset,
StringRef DWPName = "");
There are callers of the symbolizer which do not know the particular
section index. This patch adds a getModuleSectionIndexForAddress()
routine which detects the section index for the specified address.
Thus, if a caller sets ModuleOffset.SectionIndex to
object::SectionedAddress::UndefSection, the symbolizer detects the
section index using the getModuleSectionIndexForAddress() routine.
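A usage sketch; `Symbolizer` is an assumed LLVMSymbolizer instance and
`SomeAddress` a placeholder variable:
```
// A caller that does not know the section index passes UndefSection and
// lets the symbolizer detect it via getModuleSectionIndexForAddress().
object::SectionedAddress Addr;
Addr.Address = SomeAddress;
Addr.SectionIndex = object::SectionedAddress::UndefSection;
Expected<DILineInfo> LineInfo = Symbolizer.symbolizeCode("a.out", Addr);
```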
Differential Revision: https://reviews.llvm.org/D58848
llvm-svn: 356829
As a followup to the new PM -time-passes fix (D59366), this adds similar
functionality to the legacy time-passes.
Enhance llvm::reportAndResetTimings to accept an optional stream
for reporting output. By default it still reports into the stream created
by CreateInfoOutputFile (-info-output-file).
Also fix it to actually reset after printing, as the name declares.
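A usage sketch of the extended entry point:
```
// Direct the legacy -time-passes report into an explicit stream; passing
// nullptr (the default) keeps the old -info-output-file behavior.
std::string Buf;
llvm::raw_string_ostream OS(Buf);
llvm::reportAndResetTimings(&OS);
```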
Reviewed By: philip.pfaffe
Differential Revision: https://reviews.llvm.org/D59416
llvm-svn: 356824
Add basic infrastructure for reading and writing TBD files (versions 1 - 3).
The TextAPI library is not used by anything yet (besides the unit tests). Tool
support will be added in a separate commit.
The TBD format is currently documented in the implementation file (TextStub.cpp).
https://reviews.llvm.org/D53945
Update: This contains changes to fix issues discovered by the bots:
- add parentheses to silence warnings
- rename variables
- use PlatformType from BinaryFormat
- try switching from a vector to an array to appease the bots
- replace the tuple with a struct to work around an explicit constructor bug
- fix an issue where we were leaking the YAML document if there was a
parsing error
Updated the license information in all files.
llvm-svn: 356820
This is a refactoring patch that removes the redundancy of performing operand reordering twice, once in buildTree() and later in vectorizeTree().
To achieve this we need to keep track of the operands within the TreeEntry struct while building the tree, and later in vectorizeTree() we are just accessing them from the TreeEntry in the right order.
This patch is the first in a series of patches that will allow for better operand reordering across chains of instructions (e.g., a chain of ADDs), as presented here: https://www.youtube.com/watch?v=gIEn34LvyNo
Patch by: @vporpo (Vasileios Porpodas)
Differential Revision: https://reviews.llvm.org/D59059
llvm-svn: 356814
In r322972/r323136, the iteration here was changed to catch cases at the
beginning of a basic block... but we accidentally deleted an important
safety check. Restore that check to the way it was.
Fixes https://bugs.llvm.org/show_bug.cgi?id=41116
Differential Revision: https://reviews.llvm.org/D59680
llvm-svn: 356809
On 32-bit targets without popcnt, we currently expand 64-bit popcnt to sequences of arithmetic and logic ops for each 32-bit half and then add the 32-bit halves together. If we have xmm registers we can use those to implement the operation instead. This results in fewer instructions than doing two separate 32-bit popcnt sequences.
This mitigates some of PR41151 for the i64 on i686 case when we have SSE2.
Differential Revision: https://reviews.llvm.org/D59662
llvm-svn: 356808
We used a lock cmpxchg8b to do i64 atomic loads. But if we have SSE2 we can do better and use a plain movq to do the load instead.
I tried to just use an f64 atomic load and add isel patterns to MOVSD (which the domain-fixing pass can turn into MOVQ), but the atomic_load SDNode in TargetSelectionDAG.td requires the type to be integer.
So I've emitted VZEXT_LOAD instead which should be selected by isel to a MOVQ. Hopefully we don't need a specific atomic flavor of this. I kept the memory operand from the original AtomicSDNode. I wasn't sure if I might need to set the MOVolatile flag?
I've left some FIXMEs for improvements we can do without SSE2.
Differential Revision: https://reviews.llvm.org/D59679
llvm-svn: 356807
Summary:
Between building the pair map and querying it there are a few places that
erase and create Values. It's rare but the address of these newly created
Values is occasionally the same as a just-erased Value that we already
have in the pair map. These coincidences should be accounted for to avoid
non-determinism.
Thanks to Roman Tereshin for the test case.
Reviewers: rtereshin, bogner
Reviewed By: rtereshin
Subscribers: mgrang, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59401
llvm-svn: 356803
This doesn't have any practical effect at the moment, as far as I know,
because high registers aren't allocatable in Thumb1 mode. But it might
matter in the future.
Differential Revision: https://reviews.llvm.org/D59675
llvm-svn: 356791
Just as LLVM IR supports explicitly specifying numeric value ids
for instructions, and emits them by default in textual output, now do
the same for blocks.
This is a slightly incompatible change in the textual IR format.
Previously, llvm would parse numeric labels as string names. E.g.
define void @f() {
br label %"55"
55:
ret void
}
defined a label *named* "55", even without needing to be quoted, while
the reference required quoting. Now, if you intend a block label which
looks like a value number to be a name, you must quote it in the
definition too (e.g. `"55":`).
Previously, llvm would print nameless blocks only as a comment, and
would omit it if there was no predecessor. This could cause confusion
for readers of the IR, just as unnamed instructions did prior to the
addition of "%5 = " syntax, back in 2008 (PR2480).
Now, it will always print a label for an unnamed block, with the
exception of the entry block. (IMO it may be better to print it for
the entry-block as well. However, that requires updating many more
tests.)
Thus, the following is supported, and is the canonical printing:
define i32 @f(i32, i32) {
%3 = add i32 %0, %1
br label %4
4:
ret i32 %3
}
New test cases covering this behavior are added, and other tests
updated as required.
Differential Revision: https://reviews.llvm.org/D58548
llvm-svn: 356789
We're already computing the known bits of the operands here. If the
known bits of the operands can determine the sign bit of the result,
we'll already catch this in signedAddMayOverflow(). The only other
way (and as the comment already indicates) we'll get new information
from computing known bits on the whole add, is if there's an assumption
on it.
As such, we change the code to only compute known bits from assumptions,
instead of computing full known bits on the add (which would unnecessarily
recompute the known bits of the operands as well).
Differential Revision: https://reviews.llvm.org/D59473
llvm-svn: 356785
Summary:
Adding contained caching to AliasAnalysis. BasicAA is currently the only one using it.
AA changes:
- This patch is pulling the caches from BasicAAResults to AAResults, meaning the getModRefInfo call benefits from the IsCapturedCache as well when in "batch mode".
- All AAResultBase implementations add the QueryInfo member to all APIs. AAResults APIs maintain wrapper APIs such that all alias()/getModRefInfo call sites are unchanged.
- AA now provides a BatchAAResults type as a wrapper to AAResults. It keeps the AAResults instance and a QueryInfo instantiated to batch mode. It delegates all work to the AAResults instance with the batched QueryInfo. More API wrappers may be needed in BatchAAResults; only the minimum needed is currently added.
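A minimal usage sketch of the batch mode just described; `AA` is an existing
AAResults reference, and the pointers and sizes are placeholders:
```
// Queries made through the wrapper share the batched QueryInfo caches.
BatchAAResults BatchAA(AA);
AliasResult R = BatchAA.alias(MemoryLocation(PtrA, SizeA),
                              MemoryLocation(PtrB, SizeB));
```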
MemorySSA changes:
- All walkers are now templated on the AA used (AliasAnalysis=AAResults or BatchAAResults).
- At build time, we optimize uses; now we create a local walker (lives only as long as OptimizeUses does) using BatchAAResults.
- All Walkers have an internal AA and only use that now, never the AA in MemorySSA. The Walkers receive the AA they will use when built.
- The walker we use for queries after the build is instantiated on AliasAnalysis and is built after building MemorySSA and setting AA.
- All static methods doing walking are now templated on AliasAnalysisType if they are used both during build and after. If used only during build, the method now only takes a BatchAAResults. If used only after build, the method now takes an AliasAnalysis.
Subscribers: sanjoy, arsenm, jvesely, nhaehnle, jlebar, george.burgess.iv, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59315
llvm-svn: 356783
Summary:
In C++, the behavior of casting a double value that is beyond the range
of a single precision floating-point to a float value is undefined. This
change replaces such a cast with APFloat::convert to convert the value,
which is consistent with how we convert a double value to a half value.
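A sketch of the well-defined conversion; `DoubleVal` is a placeholder double:
```
// Instead of a plain (float)DoubleVal cast, which is UB when the value is
// out of single-precision range, round via APFloat::convert.
bool LosesInfo = false;
APFloat F(DoubleVal);
F.convert(APFloat::IEEEsingle(), APFloat::rmNearestTiesToEven, &LosesInfo);
float Result = F.convertToFloat();
```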
Reviewers: sanjoy
Subscribers: lebedev.ri, sanjoy, jlebar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59500
llvm-svn: 356781
This helps to avoid the situation where RA spots that only 3 of the 4
elements of a v4f32 load result are used, and immediately reallocates the
4th register for something else, requiring a stall waiting for the load.
Differential Revision: https://reviews.llvm.org/D58906
Change-Id: I947661edfd5715f62361a02b100f14aeeada29aa
llvm-svn: 356768
Some image ops return three or five dwords. Previously, we modeled that
with a 4 or 8 dword register class. The register allocator could
cleverly spot that some subregs were dead and allocate something else
there, but that caused the de-optimization that waitcnt insertion would
think that the result was used immediately.
This commit allows such an image op to have a result with a three or
five dword result, avoiding the above de-optimization.
Differential Revision: https://reviews.llvm.org/D58905
Change-Id: I3651211bbd7ed22721ee7b9fefd7bcc60a809d8b
llvm-svn: 356757
Now we have vec3 MVTs, this commit implements dwordx3 variants of the
buffer intrinsics.
On gfx6, a dwordx3 buffer load intrinsic is implemented as a dwordx4
instruction, and a dwordx3 buffer store intrinsic is not supported.
We need to support the dwordx3 load intrinsic because it is generated by
subtarget-unaware code in InstCombine.
Differential Revision: https://reviews.llvm.org/D58904
Change-Id: I016729d8557b98a52f529638ae97c340a5922a4e
llvm-svn: 356755
Summary:
This patch adds the ability to read a yaml form of a minidump file and
write it out as binary. Apart from the minidump header and the stream
directory, only three basic stream kinds are supported:
- Text: This kind is used for streams which contain textual data. This
is typically the contents of a /proc file on linux (e.g.
/proc/PID/maps). In this case, we just put the raw stream contents
into the yaml.
- SystemInfo: This stream contains various bits of information about the
host system in binary form. We expose the data in a structured form.
- Raw: This kind is used as a fallback when we don't have any special
knowledge about the stream. In this case, we just print the stream
contents in hex.
For this code to be really useful, more stream kinds will need to be
added (particularly for things like lists of memory regions and loaded
modules). However, these can be added incrementally.
Reviewers: jhenderson, zturner, clayborg, aprantl
Subscribers: mgorny, lemo, llvm-commits, lldb-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59482
llvm-svn: 356753
The RISC-V ISA defines RV32E as an alternative "base" instruction set
encoding that differs from RV32I by having only 16 rather than 32 registers.
This patch adds basic definitions for RV32E as well as MC layer support
(assembling, disassembling) and tests. The only supported ABI on RV32E is
ILP32E.
Add a new RISCVFeatures::validate() helper to RISCVUtils which can be called
from codegen or MC layer libraries to validate the combination of TargetTriple
and FeatureBitSet. Other targets have similar checks (e.g. erroring if SPE is
enabled on PPC64 or oddspreg + o32 ABI on Mips), but they either duplicate the
checks (Mips), or fail to check for both codegen and MC codepaths (PPC).
Codegen for the ILP32E ABI support and RV32E codegen are left for a future
patch/patches.
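A sketch of the shared helper's shape; the exact condition and message are
assumptions based on the description above:
```
// Called from both codegen and MC codepaths so the check is not duplicated.
void RISCVFeatures::validate(const Triple &TT,
                             const FeatureBitset &FeatureBits) {
  if (TT.isArch64Bit() && FeatureBits[RISCV::FeatureRV32E])
    report_fatal_error("RV32E can't be enabled for an RV64 target");
}
```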
Differential Revision: https://reviews.llvm.org/D59470
llvm-svn: 356744
This patch optimizes the emission of a sequence of SELECTs with the same
condition, avoiding the insertion of unnecessary control flow. Such a sequence
often occurs when a SELECT of values wider than XLEN is legalized into two
SELECTs with legal types. We have identified several use cases where the
SELECTs could be interleaved with other instructions. Therefore, we extend the
sequence to include non-SELECT instructions if we are able to detect that the
non-SELECT instructions do not impact the optimization.
This patch supersedes https://reviews.llvm.org/D59096, which attempted to
address this issue by introducing a new SelectionDAG node. Hat tip to Eli
Friedman for his feedback on how to best handle this issue.
Differential Revision: https://reviews.llvm.org/D59355
Patch by Luís Marques.
llvm-svn: 356741
Indicates in the TargetLowering interface that conversions from CC logic to
bitwise logic are allowed. Adds tests that show the benefit when optimization
opportunities are detected. Also adds tests that show that when the optimization
is not applied correct code is generated (but opportunities for other
optimizations remain).
Differential Revision: https://reviews.llvm.org/D59596
Patch by Luís Marques.
llvm-svn: 356740
The spec says that symbol table index 0 both designates the first entry in
the table and serves as the undefined symbol index, and that it should have
a zero value. Hence the first symbol table entry has no name, and so has to
have st_name == 0.
(http://refspecs.linuxbase.org/elf/gabi4+/ch4.symtab.html)
Currently, we do not emit a zero value for the first symbol table entry.
That happens because we add empty strings to the string builder, which
adds a zero byte for each such case:
(https://github.com/llvm-mirror/llvm/blob/master/lib/MC/StringTableBuilder.cpp#L185)
After the string table optimization is performed, it might return non-zero
indexes for requested empty strings.
The patch fixes this issue for the case above and other sections with no names.
Differential revision: https://reviews.llvm.org/D59496
llvm-svn: 356739
They are not used by anything yet, but a subsequent commit will start
using them for image ops that return 5 dwords.
Differential Revision: https://reviews.llvm.org/D58903
Change-Id: I63e1904081e39a6d66e4eb96d51df25ad399d271
llvm-svn: 356735
Summary:
getRelocatedValue may compute an incorrect value for SHT_RELA-typed relocation entries.
// DWARFDataExtractor.cpp
uint64_t DWARFDataExtractor::getRelocatedValue(uint32_t Size, uint32_t *Off,
...
// This formula is correct for REL, but may be incorrect for RELA if the value
// stored in the location (getUnsigned(Off, Size)) is not zero.
return getUnsigned(Off, Size) + Rel->Value;
In this patch, we
* refactor these visit* functions to include a new parameter `uint64_t A`.
Since these visit* functions are no longer used as visitors, rename them to resolve*.
+ REL: A is used as the addend. A is the value stored in the location where the
relocation applies: getUnsigned(Off, Size)
+ RELA: The addend encoded in RelocationRef is used, e.g. getELFAddend(R)
* and add another set of supports* functions to check if a given relocation type is handled.
DWARFObjInMemory uses them to fail early.
Reviewers: echristo, dblaikie
Reviewed By: echristo
Subscribers: mgorny, aprantl, aheejin, fedor.sergeev, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57939
llvm-svn: 356729
Currently, the type id for a derived type is computed incorrectly.
For example,
type #1: int
type #2: ptr to #1
For a global variable "int *a", type #1 will be attributed to variable "a".
This is due to a bug which assigns the type id of the base type of
a derived type as the derived type's own type id. This happens
for "const", "volatile", "restrict", "typedef" and "pointer" types.
This patch fixed this bug, fixed existing test cases and added
a new one focusing on pointers plus other derived types.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356727
This is the result of discussions on the list about how to deal with intrinsics
which require codegen to disambiguate them via only the integer/fp overloads.
It causes problems for GlobalISel as some of that information is lost during
translation, while with other operations like IR instructions the information is
encoded into the instruction opcode.
This patch changes clang to emit the new faddp intrinsic if the vector operands
to the builtin have FP element types. LLVM IR AutoUpgrade has been taught to
upgrade existing calls to aarch64.neon.addp with fp vector arguments, and
we remove the workarounds introduced for GlobalISel in r355865.
This is a more permanent solution to PR40968.
Differential Revision: https://reviews.llvm.org/D59655
llvm-svn: 356722
Currently, this fails with many tools, e.g.
$ clang -fembed-bitcode-marker -c -o test.o test.c
$ nm test.o
nm: test.o The file was not recognized as a valid object file
-fembed-bitcode-marker creates a LLVM,bitcode section consisting of a single
byte. When reading the object file, IRObjectFile::findBitcodeInObject succeeds,
causing SymbolicFile::createSymbolicFile to try to read the "bitcode" rather
than using the outer Mach-O data - which then fails.
Fix this by making findBitcodeInObject return an error if the section size <= 1.
Patched by: Nicholas Allegra
Differential Revision: https://reviews.llvm.org/D44373
llvm-svn: 356718
This was creating a copy of the register the pseudo itself was
def'ing, leaving a copy of an undefined register. I'm not sure how
the verifier is not catching this, but this avoids asserting in a
future change to RegAllocFast
llvm-svn: 356716
The AArch64 test was broken since the result register already had a
set register class, so this test was a no-op. The mapping verify call
would fail because the result size is not the same as the inputs like
in a copy or phi.
The AMDGPU testcases are half broken and introduce illegal VGPR->SGPR
copies which need much more work to handle correctly (same for phis),
but add them as a baseline.
llvm-svn: 356713
r356705 annotated calls to objc_retainAutoreleasedReturnValue with
notail on x86-64. This commit teaches the ARC optimizer to check the
notail marker on the call before turning it into a tail call.
rdar://problem/38675807
llvm-svn: 356707
Summary:
This considers module symbol streams and the global symbol stream to be
roots. Most types that this considers "unreferenced" are referenced by
LF_UDT_MOD_SRC_LINE id records, which VC seems to always include.
Essentially, they are types that the user can only find in the debugger
if they call them by name; they cannot be found by traversing a symbol.
In practice, around 80% of type information in a PDB is referenced by a
symbol. That seems like a reasonable number.
I don't really plan to do anything with this tool. It mostly just exists
for informational purposes, and to confirm that we probably don't need
to implement type reference tracking in LLD. We can continue to merge
all types as we do today without wasting space.
Reviewers: zturner, aganea
Subscribers: mgorny, hiraditya, arphaman, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59620
llvm-svn: 356692
If they have other users we'll just end up increasing the instruction count.
We might be able to weaken this to only one of them having a single use if we can prove that the and will be removed.
Fixes PR41164.
Differential Revision: https://reviews.llvm.org/D59630
llvm-svn: 356690
Under optsize we try to avoid folding immediates into instructions. But if the immediate is 16 bits or 32 bits, yet can be encoded as an 8-bit immediate, we don't save enough from disabling the folding unless the immediate has enough uses to make up for the size of the move, which is either 3 bytes or 5 bytes since there are no sign-extended 8-bit moves. We would also save something if the immediate was a live out of the basic block and thus a move was unavoidable, but that would require a more advanced heuristic than just counting uses.
Note we only avoid folding multiple-use immediates into the patterns that use X86ISD::ADD/SUB/XOR/OR/AND/CMP/ADC/SBB nodes and not the more common ISD::ADD/SUB/XOR/OR/AND nodes.
Differential Revision: https://reviews.llvm.org/D59522
llvm-svn: 356688
This adds support for scalarizing these intrinsics as well the X86TargetTransformInfo support to avoid scalarizing them in the cases X86 can handle.
I've omitted handling special cases for constant masks for this first pass. Though CodeGenPrepare can constant fold the branch conditions and remove some of the control flow anyway.
Fixes PR40994 and covers most of PR3666. We might want to implement constant masks to close that out.
Differential Revision: https://reviews.llvm.org/D59180
llvm-svn: 356687
This is D59450, but for signed sub. This case is not NFC, because
the overflow logic in ConstantRange is more powerful than the existing
check. This resolves the TODO in the function.
I've added two tests to show that this indeed catches more cases than
the previous logic, but the main correctness test coverage here is in
the existing ConstantRange unit tests.
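A sketch of the ConstantRange-based check; the ranges here are made up for
illustration, whereas in the real code they come from known bits:
```
// i8 ranges: LHS in [100, 120], RHS in [-60, -30].
ConstantRange LHS(APInt(8, 100), APInt(8, 121));
ConstantRange RHS(APInt(8, -60, /*isSigned=*/true),
                  APInt(8, -29, /*isSigned=*/true));
// Every possible difference is at least 130 > INT8_MAX, so the result
// indicates that the subtraction always overflows.
ConstantRange::OverflowResult OR = LHS.signedSubMayOverflow(RHS);
```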
Differential Revision: https://reviews.llvm.org/D59617
llvm-svn: 356685
SDNodes can only have 64k operands, and for some inputs (e.g. a large
number of stores) we can reach this limit when creating TokenFactor
nodes. This patch is a follow up to D56740 and updates a few more places
that potentially can create TokenFactors with too many operands.
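A usage sketch of the splitting helper referenced above (from D56740);
`ChainOps` is an assumed SmallVector<SDValue, 8> of chain operands:
```
// Unlike getNode(ISD::TokenFactor, ...), this helper splits huge operand
// lists into a tree of TokenFactors, staying under the 64k operand limit.
SDValue NewChain = DAG.getTokenFactor(SDLoc(N), ChainOps);
```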
Reviewers: efriedma, craig.topper, aemerson, RKSimon
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D59156
llvm-svn: 356668
This is probably a bigger limitation than necessary, but since we don't have any evidence yet
that this transform led to real-world perf improvements rather than regressions, I'm making a
quick, blunt fix.
In the motivating x86 example from:
https://bugs.llvm.org/show_bug.cgi?id=41129
...and shown in the regression test, we want to avoid an extra instruction in the dominating
block because that could be costly.
The x86 LSR test diff is reversing the changes from D57789. There's no evidence that 1 version
is any better than the other yet.
Differential Revision: https://reviews.llvm.org/D59602
llvm-svn: 356665
Added support for dwordx3 for most load/store types, but not DS, and not
intrinsics yet.
SI (gfx6) does not have dwordx3 instructions, so they are not enabled
there.
Some of this patch is from Matt Arsenault, also of AMD.
Differential Revision: https://reviews.llvm.org/D58902
Change-Id: I913ef54f1433a7149da8d72f4af54dbb13436bd9
llvm-svn: 356659
Added subtarget features for AArch64 to use TPIDR_EL[1|2|3] as the TLS base
register, rather than the default TPIDR_EL0.
Patch by Philip Derrin!
Differential revision: https://reviews.llvm.org/D54685
llvm-svn: 356657
Summary:
This patch adds basic support for reading minidump files. It contains
the definitions of various important minidump data structures (header,
stream directory), and of one minidump stream (SystemInfo). The ability
to read other streams will be added in follow-up patches. However, all
streams can be read even now as raw data, which means lldb's minidump
support (where this code is taken from) can be immediately rebased on
top of this patch as soon as it lands.
As we don't have any support for generating minidump files (yet), this
tests the code via unit tests with some small handcrafted binaries in
the form of C char arrays.
Reviewers: Bigcheese, jhenderson, zturner
Subscribers: srhines, dschuff, mgorny, fedor.sergeev, lemo, clayborg, JDevlieghere, aprantl, lldb-commits, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59291
llvm-svn: 356652
Summary:
This is a refactoring patch.
- Reduce the number of map searches by reusing the iterator.
- Add asserts to check that the entry is in the cache, as this is something BasicAA relies on to avoid infinite recursion.
Reviewers: chandlerc, aschwaighofer
Subscribers: sanjoy, jlebar, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59151
llvm-svn: 356644
Code archaeology in D59315 revealed that MSSA should never be moved.
Rather than trying to check dynamically that this hasn't happened in the
verify() functions of Walkers, it's likely best to just delete its move
constructor.
Since all these verify() functions did is check that MSSA hasn't moved,
this allows us to remove these verify functions.
I can readd the verification checks if someone's super concerned about
us trying to `memcpy` MemorySSA or something somewhere, but I imagine we
have other problems if we're trying anything like that...
llvm-svn: 356641
CMPXCHG8B was introduced on the i586/Pentium generation.
If it's not enabled, limit the atomic width to 32 bits so AtomicExpandPass will expand to lib calls. It is unclear if we should be using a different limit for other configs. The default is 1024, and experimentation shows that using an i256 atomic will cause a crash in SelectionDAG.
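A sketch of the guard in X86 target-lowering terms; the subtarget feature
accessor name is an assumption:
```
// Without cmpxchg8b there is no 64-bit cmpxchg; cap the supported atomic
// width so AtomicExpandPass lowers wider atomics to library calls.
if (!Subtarget.hasCmpxchg8b())
  setMaxAtomicSizeInBitsSupported(32);
```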
Differential Revision: https://reviews.llvm.org/D59576
llvm-svn: 356631
Summary:
llvm-objdump (via libObject) validates DYLD_INFO rebase and bind
entries against the basic structure found in the Mach-O file before
evaluating the contents of those entries. Certain malformed Mach-Os can
defeat the validation check and force llvm-objdump (libObject) to crash.
The previous logic verified a rebase or bind started in a valid Mach-O
section, but did not verify that the section wholly contained the
fixup. It also generally allows rebases or binds to start immediately
after a valid section even if that range is not itself part of a valid
section. Finally, bind and rebase opcodes that indicate more than one
fixup (apply N times...) are not completely validated: only the first
and final fixups are checked.
The previous logic also rejected certain binaries as false positives.
Some bind and rebase opcodes can modify the state machine such that the
next bind or rebase will fail. libObject will reject these opcodes as
invalid in order to be helpful and print an error message associated
with the instruction that caused the problem, even though the binary is
not actually illegal until it consumes the invalid state in the state
machine. In other words, libObject may reject a Mach-O binary that
Apple's dynamic linker may consider legal. The original version of
macho-rebase-add-addr-uleb-too-big is an example of such a binary.
I have replaced the existing checkSegAndOffset and checkCountAndSkip
functions with a single function, checkSegAndOffsets, which validates
all of the fixups realized by a DYLD_INFO opcode. checkSegAndOffsets
verifies that a Mach-O section fully contains each fixup. Every fixup
realized by an opcode is validated, and some (but not all!)
inconsistencies in the state machine are allowed until a fixup is
realized. This means that libObject may fail on an opcode that realizes
a fixup, not on the opcode that introduced the arithmetic error.
Existing test cases have been modified to reflect the changes in error
messages returned by libObject. What's more, the test case for
macho-rebase-add-addr-uleb-too-big has been modified so that it actually
triggers the error condition; the new code in libObject considers the
original test binary "legal".
rdar://47797757
Reviewers: lhames, pete, ab
Reviewed By: pete
Subscribers: rupprecht, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59574
llvm-svn: 356629
My previous fix rL356591 "[AMDGPU] Added MsgPack format PAL metadata"
accidentally caused a spurious PAL metadata .note record to be emitted
for any AMDGPU output. That caused failures in the lld test
amdgpu-relocs.s. Fixed.
Differential Revision: https://reviews.llvm.org/D59613
Change-Id: Ie04a2aaae890dcd490f22c89edf9913a77ce070e
llvm-svn: 356621
Machine DCE cannot remove a dead definition if there are non-dbg uses.
A use however can be in the same instruction:
dead %0 = INST %0
Such instructions are sometimes created by the DetectDeadLanes pass.
Allow such an instruction to be deleted despite the use if the only use
belongs to the same instruction.
Differential Revision: https://reviews.llvm.org/D59565
llvm-svn: 356619
This patch enables the use of lowerShuffleAsBitMask for 512-bit blends before
falling back to move immedate, GPR to k-register, and masked op.
I had to make some changes to support v8i64 when i64 is not a legal type. And to
support floating point types.
This trades a load for the move immediate and GPR move which is higher latency.
But its probably better for register pressure not having to hop through other
register classes. The load+and should play better with LICM and
rematerialization I think.
Differential Revision: https://reviews.llvm.org/D59479
llvm-svn: 356618
Summary: - The linking is broken when this library is built as a shared one.
Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59610
llvm-svn: 356617
The constantness shouldn't change the register bank choice. We also
don't need to restrict this to only indexing VGPRs, since it's
possible to index SGPRs (but SelectionDAG made using this
difficult). Allow directly indexing SGPRs when appropriate.
llvm-svn: 356611
Summary:
Implements a new target features section in assembly and object files
that records what features are used, required, and disallowed in
WebAssembly objects. The linker uses this information to ensure that
all objects participating in a link are feature-compatible and records
the set of used features in the output binary for use by optimizers
and other tools later in the toolchain.
The "atomics" feature is always required or disallowed to prevent
linking code with stripped atomics into multithreaded binaries. Other
features are marked used if they are enabled globally or on any
function in a module.
Future CLs will add linker flags for ignoring feature compatibility
checks and for specifying the set of allowed features, implement using
the presence of the "atomics" feature to control the type of memory
and segments in the linked binary, and add front-end flags for
relaxing the linkage policy for atomics.
Reviewers: aheejin, sbc100, dschuff
Subscribers: jgravelle-google, hiraditya, sunfish, mgrang, jfb, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59173
llvm-svn: 356610
Summary:
- Should use `targetconstant` instead of `constant` operand for the clamp
bit, which is expected to be an immediate operand. Under certain
conditions, such as when a common `i1 false` constant is used elsewhere
and selected before the instruction with the clamp bit, a register
operand may be added instead of an immediate one. Use `targetconstant` to
enforce that.
Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59608
llvm-svn: 356608
This takes sequences like "mov r4, sp; str r0, [r4]", and optimizes them
to something like "str r0, [sp]".
For regular stack variables, this optimization was already implemented:
we lower loads and stores using frame indexes, which are expanded later.
However, when constructing a call frame for a call with more than four
arguments, the existing optimization doesn't apply. We need to use
stores which are actually relative to the current value of sp, and don't
have an associated frame index.
This patch adds a special case to handle that construct. At the DAG
level, this is an ISD::STORE where the address is a CopyFromReg from SP
(plus a small constant offset).
This applies only to Thumb1: in Thumb2 or ARM mode, a regular store
instruction can access SP directly, so the COPY gets eliminated by
existing code.
The change to ARMDAGToDAGISel::SelectThumbAddrModeSP is a related
cleanup: we shouldn't pretend that it can select anything other than
frame indexes.
Differential Revision: https://reviews.llvm.org/D59568
llvm-svn: 356601
Summary:
When linking two llvm.used arrays, if the resulting merged
array ends up with duplicated elements (with the same name) but with
different types, the IRLinker was crashing. This was supposed to be
legal, as the IRLinker bitcasts elements to match types in these
situations.
This bug was exposed by D56928 in clang to support attribute used
in member functions of class templates. Crash happened when self-hosting
with LTO. Since LLVM depends on attribute used to generate code
for the dump() method, ubiquitous in the code base, many input bc
had a definition of this method referenced in their llvm.used array.
Some of these classes got optimized, changing the type of the first
parameter (this) in the dump method, leading to a scenario with a
pool of valid definitions but some with a different type, triggering
this bug.
This is a memory bug: ValueMapper depends on (calls) the materializer
provided by IRLinker, and this materializer was freely calling RAUW
methods whenever a global definition was updated in the temporary merged
output file. However, replaceAllUsesWith may or may not destroy
constants that use this global. If the linked definition has a type
mismatch regarding the new def and the old def, the materializer would
bitcast the old type to the new type and the elements of the llvm.used
array, which already uses bitcast to i8*, would end up with elements
cascading two bitcasts. RAUW would then indirectly call the
constantfolder to update the constant to the new ref, which would,
instead of updating the constant, destroy it to be able to create
a new constant that folds the two bitcasts into one. The problem is that
ValueMapper works with pointers to the same constants that may be
getting destroyed by RAUW. Obviously, RAUW can update references in the
Module so that they do not use the old destroyed constant, but it can't update
ValueMapper's internal pointers to these constants, which are now
invalid.
The approach here is to move the task of RAUWing old definitions
outside of the materializer.
Test Plan:
Added LIT test case, tested clang self-hosting with D56928 and
verified it works
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D59552
llvm-svn: 356597
Summary:
PAL metadata now supports both the old linear reg=val pairs format and
the new MsgPack format.
The MsgPack format uses YAML as its textual representation. On output to
YAML, a mnemonic name is provided for some hardware registers.
Differential Revision: https://reviews.llvm.org/D57028
Change-Id: I2bbaabaaca4b3574f7e03b80fbef7c7a69d06a94
llvm-svn: 356591
If we know we're not storing a lane, we don't need to compute the lane. This could be improved by using the undef element result to further prune the mask, but I want to separate that into its own change since it's relatively likely to expose other problems.
Differential Revision: https://reviews.llvm.org/D57247
llvm-svn: 356590
Summary:
Before this patch, if any Use existed in the loop, with a defining
access in the loop, we conservatively decide to not move the store.
What this approach was missing is that ordered loads are not Uses; they're Defs
in MemorySSA. So, even when the clobbering walker does not find that
volatile load to interfere, we still cannot hoist a store past a
volatile load.
Resolves PR41140.
Reviewers: george.burgess.iv
Subscribers: sanjoy, jlebar, Prazek, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59564
llvm-svn: 356588
This is a small followup to D59511. The code that was moved into
computeConstantRange() there is a bit overly conservative: if the
abs is not nsw, it does not compute any range. However, abs without
nsw still has a well-defined contiguous unsigned range from 0 to
SIGNED_MIN. This is a lot less useful than the usual 0 to SIGNED_MAX
range, but if we're already here we might as well specify it...
Differential Revision: https://reviews.llvm.org/D59563
llvm-svn: 356586
Summary:
This commit introduces a new AMDGPUPALMetadata class that:
* is inside the AMDGPU target;
* keeps an in-memory representation of PAL metadata;
* provides a method to read the frontend-supplied metadata from LLVM IR;
* provides methods for the asm printer to set metadata items;
* provides methods to write the metadata as a binary blob to put in a
.note record or as an asm directive;
* provides a method to read the metadata as a binary blob from a .note
record.
Because llvm-readobj cannot call directly into a target, I had to remove
llvm-readobj's ability to dump PAL metadata, pending a resolution to
https://reviews.llvm.org/D52821
Differential Revision: https://reviews.llvm.org/D57027
Change-Id: I756dc830894fcb6850324cdcfa87c0120eb2cf64
llvm-svn: 356582
This should be extended, but CGP does some strange things,
so I'm intentionally not changing the potential order of
any transforms yet.
llvm-svn: 356566
This patch removes the following dag node opcodes from namespace X86ISD:
RDTSC_DAG,
RDTSCP_DAG,
RDPMC_DAG
The logic that expands RDTSC/RDPMC/XGETBV intrinsics is basically the same. The
only differences are:
RDTSC/RDTSCP don't implicitly read ECX.
RDTSCP also implicitly writes ECX.
I moved the common expansion logic into a helper function with the goal to get
rid of code repetition. That helper is now used for the expansion of
RDTSC/RDTSCP/RDPMC/XGETBV intrinsics.
No functional change intended.
Differential Revision: https://reviews.llvm.org/D59547
llvm-svn: 356546
Summary:
If an MIMG instruction has managed to get through to adjustWritemask in isel but
has no uses (and doesn't enable TFC) then prevent an assertion by not attempting
to adjust the writemask.
The instruction will be removed anyway.
Change-Id: I9a5dba6bafe1f35ac99c1b73df390936e2ac27a7
Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, tpr, t-tye, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58964
llvm-svn: 356540
This reverts commit 6080a6fb19 (r356532).
Clang does not accept this syntax, so reverting this until I can find something that works across all compilers.
llvm-svn: 356533
This change does two things. One, it ensures compilation will abort
instead of miscompiling if ARMFrameLowering::determineCalleeSaves
chooses not to save LR in a case where it's necessary. Two, it changes
the way we estimate the size of a function to be more conservative in
the presence of constant pool entries and jump tables.
EstimateFunctionSizeInBytes probably still isn't really conservative
enough, but I'm not sure how we can come up with a reliable estimate
before constant islands runs.
Differential Revision: https://reviews.llvm.org/D59439
llvm-svn: 356527
This adds pattern matching for the insert+shufflevector sequence so we can
generate dup instructions instead of the current TBL sequence.
Differential Revision: https://reviews.llvm.org/D59558
llvm-svn: 356526
This adds a Remark class that allows us to share code when working with
remarks.
The C API has been updated to reflect this. Instead of the parser
generating C structs, it's now using a C++ object that is used through
opaque pointers in C. This gives us much more flexibility on what
changes we can make to the internal state of the object and interacts
much better with scenarios where the library is used through dlopen.
* C API updates:
* move from C structs to opaque pointers and functions
* the remark type is now an enum instead of a string
* unit tests updates:
* use mostly the C++ API
* keep one test for the C API
* rename to YAMLRemarksParsingTest
* a typo was fixed: AnalysisFPCompute -> AnalysisFPCommute.
* a new error message was added: "expected a remark tag."
* llvm-opt-report has been updated to use the C++ parser instead of the
C API
Differential Revision: https://reviews.llvm.org/D59049
Original llvm-svn: 356491
llvm-svn: 356519
Nothing prevents entries from being bigger than the 16-bit size field in
DWARF < 5. For entries that are too big, just emit an empty entry
instead of crashing.
This fixes PR41038.
Reviewers: probinson, aprantl, davide
Reviewed By: probinson
Differential Revision: https://reviews.llvm.org/D59518
llvm-svn: 356514
LTO provides additional opportunities for tailcall elimination due to
link-time inlining and visibility of nocapture attribute. Testing showed
negligible impact on compilation times.
Differential Revision: https://reviews.llvm.org/D58391
llvm-svn: 356511
Teach instcombine to propagate demanded elements through a masked load or masked gather instruction. This is in the broader context of improving vector pointer instcombine under https://reviews.llvm.org/D57140.
Differential Revision: https://reviews.llvm.org/D57372
llvm-svn: 356510
This will allow targets more flexibility to replace the
register allocator core passes. In a future commit,
AMDGPU will run the core register assignment passes
twice, and will also want to disallow using the
standard -regalloc option.
llvm-svn: 356506
Do not actually allocate a register for an undef use. Previously we
would create unnecessary reload instructions for undef uses where the
register wasn't live.
Patch by Matthias Braun
llvm-svn: 356501
The 2nd loop calculates spill costs but reports free registers as cost
0 anyway, so there is little benefit from having a separate early
loop.
Surprisingly this is not NFC, as many registers are marked regDisabled,
so the first loop often picks up later registers unnecessarily instead
of the first one available in the allocation order...
Patch by Matthias Braun
llvm-svn: 356499
The actual code change is fairly straightforward, but exercising it isn't. First, it turned out we weren't adding the appropriate flags in SelectionDAG. Second, it turned out that we've got some optimization gaps, so obvious test cases don't work.
My first attempt (in atomic-unordered.ll) points out a deficiency in our peephole-opt folding logic which I plan to fix separately. Instead, I'm exercising this through MachineLICM.
Differential Revision: https://reviews.llvm.org/D59375
llvm-svn: 356494
This adds a Remark class that allows us to share code when working with
remarks.
The C API has been updated to reflect this. Instead of the parser
generating C structs, it's now using a C++ object that is used through
opaque pointers in C. This gives us much more flexibility on what
changes we can make to the internal state of the object and interacts
much better with scenarios where the library is used through dlopen.
* C API updates:
* move from C structs to opaque pointers and functions
* the remark type is now an enum instead of a string
* unit tests updates:
* use mostly the C++ API
* keep one test for the C API
* rename to YAMLRemarksParsingTest
* a typo was fixed: AnalysisFPCompute -> AnalysisFPCommute.
* a new error message was added: "expected a remark tag."
* llvm-opt-report has been updated to use the C++ parser instead of the
C API
Differential Revision: https://reviews.llvm.org/D59049
llvm-svn: 356491
Improve computeOverflowForUnsignedAdd/Sub in ValueTracking by
intersecting the computeConstantRange() result into the ConstantRange
created from computeKnownBits(). This allows us to detect some
additional never/always overflows conditions that can't be determined
from known bits.
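To illustrate the idea, a minimal C++ sketch (the helper and its parameters are
illustrative assumptions, not the patch itself; the ranges would come from
computeKnownBits and computeConstantRange respectively):

#include "llvm/IR/ConstantRange.h"
#include "llvm/Support/KnownBits.h"
using namespace llvm;

// Intersect the range implied by known bits with an independently
// computed constant range, then ask ConstantRange about overflow.
static ConstantRange::OverflowResult
unsignedAddOverflow(const KnownBits &LHSKnown, const ConstantRange &LHSCR,
                    const KnownBits &RHSKnown, const ConstantRange &RHSCR) {
  ConstantRange L = ConstantRange::fromKnownBits(LHSKnown, /*IsSigned=*/false)
                        .intersectWith(LHSCR);
  ConstantRange R = ConstantRange::fromKnownBits(RHSKnown, /*IsSigned=*/false)
                        .intersectWith(RHSCR);
  return L.unsignedAddMayOverflow(R);
}

The intersection can only shrink the ranges, so the combined result is never
less precise than either source alone.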
This revision also adds basic handling for constants to
computeConstantRange(). Non-splat vectors will be handled in a followup.
The signed case will also be handled in a followup, as it needs some
more groundwork.
Differential Revision: https://reviews.llvm.org/D59386
llvm-svn: 356489
Add tests for wider atomic loads and stores. In the process, fix a crasher where we apparently handled unordered stores, but not loads, when lowering to cmpxchg idioms.
llvm-svn: 356482
Dynamic stack realignment was disabled on micromips by checking if
target has standard encoding. We simply change the condition to skip
Mips16 only.
Patch by Mirko Brkusanin.
Differential Revision: http://reviews.llvm.org/D59499
llvm-svn: 356478
In r311255 we added a case where we split vectors whose elements are
all derived from the same input vector so that we could shuffle it
more efficiently. In doing so, createBuildVecShuffle was taught to
adjust for the fact that all indices would be based off of the first
vector when this happens, but it's possible for the code that checked
that to fire incorrectly if we happen to have a BUILD_VECTOR of
extracts from subvectors and don't hit this new optimization.
Instead of trying to detect if we've split the vector by checking if
we have extracts from the same base vector, we can just pass that
information into createBuildVecShuffle, avoiding the miscompile.
Differential Revision: https://reviews.llvm.org/D59507
llvm-svn: 356476
Combine 2 fcmps that are checking for nan-ness:
and (fcmp ord X, 0), (and (fcmp ord Y, 0), Z) --> and (fcmp ord X, Y), Z
or (fcmp uno X, 0), (or (fcmp uno Y, 0), Z) --> or (fcmp uno X, Y), Z
This is an exact match for a minimal reassociation pattern.
If we want to handle this more generally that should go in
the reassociate pass and allow removing this code.
This should fix:
https://bugs.llvm.org/show_bug.cgi?id=41069
llvm-svn: 356471
These changes are related to PR37743 and include:
SelectionDAGBuilder::visitSelect handles the unary SelectPatternFlavor::SPF_ABS case to build ABS node.
Delete the redundant recognizer of the integer ABS pattern from the DAGCombiner.
Add promoting the integer ABS node in the LegalizeIntegerType.
Expand-based legalization of integer result for the ABS nodes.
Expand-based legalization of ABS vector operations.
Add some integer abs testcases for different type sizes for the Thumb arch.
Add the custom ABS expanding and change the SAD pattern recognizer for X86 arch: The i64 result of the ABS is expanded to:
tmp = (SRA, Hi, 31)
Lo = (UADDO tmp, Lo)
Hi = (XOR tmp, (ADDCARRY tmp, hi, Lo:1))
Lo = (XOR tmp, Lo)
The "detectZextAbsDiff" function is changed for the recognition of pattern with the ABS node. Given a ABS node, detect the following pattern:
(ABS (SUB (ZERO_EXTEND a), (ZERO_EXTEND b))).
Change integer abs testcases for codegen with the ABS node support for AArch64.
Indicate that the ABS is legal for the i64 type when the NEON is supported.
Change the integer abs testcases to show changing of codegen.
Add combine and legalization of ABS nodes for Thumb arch.
Extend 'matchSelectPattern' to recognize the ABS patterns with ICMP_SGE condition.
For discussion, see https://bugs.llvm.org/show_bug.cgi?id=37743
Patch by: @ikulagin (Ivan Kulagin)
Differential Revision: https://reviews.llvm.org/D49837
llvm-svn: 356468
I found this really weird WWM-related case whereby through the WWM
transformations our isel lowering was trying to promote 2 min's into a
min3 for the i8 type, which our hardware doesn't support.
The new min3_i8.ll test case would previously spew the error:
PromoteIntegerResult #0: t69: i8 = SMIN3 t70, Constant:i8<0>, t68
before the simple fix to our isel lowering to avoid doing this for i8 MVTs.
Differential Revision: https://reviews.llvm.org/D59543
llvm-svn: 356464
Switching to `MCParserUtils::parseAssignmentExpression` for parsing
assignment expressions in the `.set` directive reduces code and allows
printing an error message instead of crashing in case of incorrect
recursive use of `.set`.
Fix for the bug https://bugs.llvm.org/show_bug.cgi?id=41053.
Differential Revision: http://reviews.llvm.org/D59452
llvm-svn: 356461
As discussed on PR41125 and D59363, we have a mismatch between icmp eq/ne cases with an undef operand:
When the other operand is constant we fold to undef (handled in ConstantFoldCompareInstruction)
When the other operand is non-constant we fold to a bool constant based on isTrueWhenEqual (handled in SimplifyICmpInst).
Neither is really wrong, but this patch changes the logic in SimplifyICmpInst to consistently fold to undef.
The NewGVN test change is annoying (as with most heavily reduced tests) but AFAICT I have kept the purpose of the test based on rL291968.
Differential Revision: https://reviews.llvm.org/D59541
llvm-svn: 356456
Moving subprogram specific flags into DISPFlags makes IR code more readable.
In addition, we provide free space in DIFlags for other
'non-subprogram-specific' debug info flags.
Patch by Djordje Todorovic.
Differential Revision: https://reviews.llvm.org/D59288
llvm-svn: 356454
Introduce a DW_OP_LLVM_convert Dwarf expression pseudo op that allows
for a convenient way to perform type conversions on the Dwarf expression
stack. As an additional bonus it paves the way for using other Dwarf
v5 ops that need to reference a base_type.
The new DW_OP_LLVM_convert is used from lib/Transforms/Utils/Local.cpp
to perform sext/zext on debug values but mainly the patch is about
preparing terrain for adding other Dwarf v5 ops that need to reference a
base_type.
For Dwarf v5 the op maps to DW_OP_convert and for earlier versions a
complex shift & mask pattern is generated to emulate sext/zext.
This is a recommit of r356442 with trivial fixes for the failing tests.
Differential Revision: https://reviews.llvm.org/D56587
llvm-svn: 356451
Introduce a DW_OP_LLVM_convert Dwarf expression pseudo op that allows
for a convenient way to perform type conversions on the Dwarf expression
stack. As an additional bonus it paves the way for using other Dwarf
v5 ops that need to reference a base_type.
The new DW_OP_LLVM_convert is used from lib/Transforms/Utils/Local.cpp
to perform sext/zext on debug values but mainly the patch is about
preparing terrain for adding other Dwarf v5 ops that need to reference a
base_type.
For Dwarf v5 the op maps to DW_OP_convert and for earlier versions a
complex shift & mask pattern is generated to emulate sext/zext.
Differential Revision: https://reviews.llvm.org/D56587
llvm-svn: 356442
Summary:
- Make some class member methods const
- Delete unnecessary includes
- Use a simpler form of `BuildMI`
Reviewers: kripken
Subscribers: dschuff, sbc100, jgravelle-google, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59454
llvm-svn: 356440
Add support for min/max flavor selects in computeConstantRange(),
which allows us to fold comparisons of a min/max against a constant
in InstSimplify. This was suggested by spatel as an alternative
approach to D59378. I've also added the infinite looping test from
that revision here.
Differential Revision: https://reviews.llvm.org/D59506
llvm-svn: 356415
The default implementation does what we want and is going to be more
compatible with the dynamic linking (-fPIC) support that is planned.
This is NFC because currently we only build wasm with
`-relocation-model=static` which in turn means that the default
`isOffsetFoldingLegal` always returns true today.
Differential Revision: https://reviews.llvm.org/D54661
llvm-svn: 356410
This is preparation for D59506. The InstructionSimplify abs handling
is moved into computeConstantRange(), which is the general place for
such calculations. This is NFC and doesn't affect the existing tests
in test/Transforms/InstSimplify/icmp-abs-nabs.ll.
Differential Revision: https://reviews.llvm.org/D59511
llvm-svn: 356409
This change makes linking into .build-id atomic and safe to use.
Some users with particular workflows are reporting that this races
more than half the time under certain conditions.
llvm-svn: 356404
Allow the clamp modifier on vop3 int arithmetic instructions in assembly
and disassembly.
This involved adding a clamp operand to the affected instructions in MIR
and MC, and thus having to fix up several places in codegen and MIR
tests.
Differential Revision: https://reviews.llvm.org/D59267
Change-Id: Ic7775105f02a985b668fa658a0cd7837846a534e
llvm-svn: 356399
This commit allows v_cndmask_b32_e64 with abs, neg source
modifiers on src0, src1 to be assembled and disassembled.
This does appear to be allowed, even though they are floating point
modifiers and the operand type is b32.
To do this, I added src0_modifiers and src1_modifiers to the
MachineInstr, which involved fixing up several places in codegen and mir
tests.
Differential Revision: https://reviews.llvm.org/D59191
Change-Id: I69bf4a8c73ebc65744f6110bb8fc4e937d79fbea
llvm-svn: 356398
After review comments, it was preferred to not teach MachineIRBuilder about
non-generic instructions beyond using buildInstr().
For AArch64 I've changed the buildCopy() calls to buildInstr() + a
separate addReg() call.
This also relaxes the MachineIRBuilder's COPY checking more because it may
not always have a SrcOp given to it.
llvm-svn: 356396
Before, empty debug streams were written as 8 bytes (4 bytes signature + 4 bytes for the GlobalRefs count).
With this patch, unused empty streams aren't emitted anymore. Modules now encode 65535 as an 'unused stream' value, by convention.
Also fix the * Linker * contrib section which wasn't correctly emitted previously.
Differential Revision: https://reviews.llvm.org/D59502
llvm-svn: 356395
This fixes a couple of unflushed raw_string_ostream bugs in recent
commits that only show up on a bot building on windows with expensive
checks.
Differential Revision: https://reviews.llvm.org/D59396
Change-Id: I9c6208325503b3ee0786b4b688e13fc24a15babf
llvm-svn: 356394
This reinstates r347934, along with a tweak to address a problem with
PHI node ordering that that commit created (or exposed). (That commit
was reverted at r348426, due to the PHI node issue.)
Original commit message:
r320789 suppressed moving the insertion point of SCEV expressions with
div/rem operations to the loop header in non-loop-invariant situations.
This, and similar, hoisting is also unsafe in the loop-invariant case,
since there may be a guard against a zero denominator. This is an
adjustment to the fix of r320789 to suppress the movement even in the
loop-invariant case.
This fixes PR30806.
Differential Revision: https://reviews.llvm.org/D57428
llvm-svn: 356392
It uses the generic AArch64_IMM::expandMOVImm to get the correct
number of instructions used in immediate materialization.
Reviewers: efriedma
Differential Revision: https://reviews.llvm.org/D58461
llvm-svn: 356391
This patch follows some ideas from r352866 to optimize the floating
point materialization even further. It changes isFPImmLegal to
consider up to 2 mov instructions or up to 5 in case the subtarget has
fused literals.
The rationale is the cost is the same for mov+fmov vs. adrp+ldr; but
the mov+fmov sequence is always better because of the reduced d-cache
pressure. The timings are still the same if you consider movw+movk+fmov
vs. adrp+ldr will be fused (although one instruction longer).
Reviewers: efriedma
Differential Revision: https://reviews.llvm.org/D58460
llvm-svn: 356390
This allows better code size for aarch64 floating point materialization
in a future patch.
Reviewers: evandro
Differential Revision: https://reviews.llvm.org/D58690
llvm-svn: 356389
It splits the logic of actual instruction emission away from the logic
that figures out the appropriate sequence in AArch64ExpandPseudo::expandMOVImm.
The new function AArch64_IMM::expandMOVImm, which returns the list of
instructions to materialize the immediate constant, is implemented in a
separate unit because it will be used in a subsequent patch to optimize
floating point materialization.
Reviewers: efriedma
Differential Revision: https://reviews.llvm.org/D58915
llvm-svn: 356387
Delete temporarily constructed node uses for analysis after their use,
holding onto the original input nodes. Ideally this would be rewritten
without making nodes, but this appears relatively complex.
Reviewers: spatel, RKSimon, craig.topper
Subscribers: jdoerfert, hiraditya, deadalnix, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57921
llvm-svn: 356382
Add an experimental buffer fat pointer address space that is currently
unhandled in the backend. This commit reserves address space 7 as a
non-integral pointer representing the 160-bit fat pointer (128-bit buffer
descriptor + 32-bit offset) that is heavily used in graphics workloads
using the AMDGPU backend.
Differential Revision: https://reviews.llvm.org/D58957
llvm-svn: 356373
Follow-up to:
rL356338
rL356369
We can calculate an arbitrary vector constant minus the bitwidth, so there's
no need to limit this transform to scalars and splats.
llvm-svn: 356372
Follow-up to:
rL356338
Rotates are a special case of funnel shift where the 2 input operands
are the same value, but that does not need to be a restriction for the
canonicalization when the shift amount is a constant.
llvm-svn: 356369
Summary:
Look past bitcasts when looking for parameter debug values that are
described by frame-index loads in `EmitFuncArgumentDbgValue()`.
In the attached test case we would be left with an undef `DBG_VALUE`
for the parameter without this patch.
A similar fix was done for parameters passed in registers in D13005.
This fixes PR40777.
Reviewers: aprantl, vsk, jmorse
Reviewed By: aprantl
Subscribers: bjope, javed.absar, jdoerfert, llvm-commits
Tags: #debug-info, #llvm
Differential Revision: https://reviews.llvm.org/D58831
llvm-svn: 356363
Fixes https://bugs.llvm.org/show_bug.cgi?id=35094
The Dead register definition pass should leave alone the atomicrmw
instructions on AArch64 (LSE extension). The reason is the following
statement in the Arm ARM:
"The ST<OP> instructions, and LD<OP> instructions where the destination
register is WZR or XZR, are not regarded as doing a read for the purpose
of a DMB LD barrier."
A good example was given in the gcc thread by Will Deacon (linked in the
bugzilla ticket 35094):
P0 (atomic_int* y,atomic_int* x) {
atomic_store_explicit(x,1,memory_order_relaxed);
atomic_thread_fence(memory_order_release);
atomic_store_explicit(y,1,memory_order_relaxed);
}
P1 (atomic_int* y,atomic_int* x) {
atomic_fetch_add_explicit(y,1,memory_order_relaxed); // STADD
atomic_thread_fence(memory_order_acquire);
int r0 = atomic_load_explicit(x,memory_order_relaxed);
}
P2 (atomic_int* y) {
int r1 = atomic_load_explicit(y,memory_order_relaxed);
}
My understanding is that it is forbidden for r0 == 0 and r1 == 2 after
this test has executed. However, if the relaxed add in P1 compiles to
STADD and the subsequent acquire fence is compiled as DMB LD, then we
don't have any ordering guarantees in P1 and the forbidden result could
be observed.
Change-Id: I419f9f9df947716932038e1100c18d10a96408d0
llvm-svn: 356360
These are used to help convert OR->LEA when needed to avoid a copy. They
aren't needed after register allocation.
Happens to remove an ugly goto from X86MCCodeEmitter.cpp
llvm-svn: 356356
The only thing the print methods currently need to know is the string to print for the memory size in intel syntax.
This patch merges the functions based on this string. If we ever need something else in the future, it's easy to split them back out.
This reduces the number of cases in the assembly printers. It shrinks the intel printer to only use 7 bytes per instruction instead of 8.
llvm-svn: 356352
AMDGPU would like to use these MVTs.
Differential Revision: https://reviews.llvm.org/D58901
Change-Id: I6125fea810d7cc62a4b4de3d9904255a1233ae4e
llvm-svn: 356351
AMDGPU would like to have MVTs for v3i32, v3f32, v5i32, v5f32. This
commit does not add them, but makes preparatory changes:
* Exclude non-legal non-power-of-2 vector types from ComputeRegisterProp
mechanism in TargetLoweringBase::getTypeConversion.
* Cope with SETCC and VSELECT for odd-width i1 vector when the other
vectors are legal type.
Some of this patch is from Matt Arsenault, also of AMD.
Differential Revision: https://reviews.llvm.org/D58899
Change-Id: Ib5f23377dbef511be3a936211a0b9f94e46331f8
llvm-svn: 356350
There are a few different issues, mostly stemming from using
generation based checks for anything instead of subtarget
features. Stop adding flat-address-space as a feature for HSA, as it
should only be a device property. This was incorrectly allowing flat
instructions to select for SI.
Increase the default generation for HSA to avoid the encoding error
when emitting objects. This has some other side effects from various
checks which probably should be separate subtarget features (in the
cost model and for dealing with the DS offset folding issue).
Partial fix for bug 41070. It should probably be an error to try using
amdhsa without flat support.
llvm-svn: 356347
This is the same change as rL356290, but for signed add. It replaces
the existing ripple logic with the overflow logic in ConstantRange.
This is NFC in that it should return NeverOverflow in exactly the
same cases as the previous implementation. However, it does make
computeOverflowForSignedAdd() more powerful by now also determining
AlwaysOverflows conditions. As none of its consumers handle this yet,
this has no impact on optimization. Making use of AlwaysOverflows
in with.overflow folding will be handled as a followup.
Differential Revision: https://reviews.llvm.org/D59450
llvm-svn: 356345
Previously we had a regular form of the instruction used when the immediate was 0-7. And an _alt form that allowed the full 8 bit immediate. Codegen would always use the 0-7 form since the immediate was always checked to be in range. Assembly parsing would use the 0-7 form when a mnemonic like vpcomtrueb was used. If the immediate was specified directly the _alt form was used. The disassembler would prefer to use the 0-7 form instruction when the immediate was in range and the _alt form otherwise. This way disassembly would print the most readable form when possible.
The assembly parsing for things like vpcomtrueb relied on splitting the mnemonic into 3 pieces. A "vpcom" prefix, an immediate representing the "true", and a suffix of "b". The tablegenerated printing code would similarly print a "vpcom" prefix, decode the immediate into a string, and then print "b".
The _alt form on the other hand parsed and printed like any other instruction with no specialness.
With this patch we drop to one form and solve the disassembly printing issue by doing custom printing when the immediate is 0-7. The parsing code has been tweaked to turn "vpcomtrueb" into "vpcomb" and then the immediate for the "true" is inserted either before or after the other operands depending on at&t or intel syntax.
I'd rather not do the custom printing, but I tried using an InstAlias for each possible mnemonic for all 8 immediates for all 16 combinations of element size, signedness, and memory/register. The code emitted into printAliasInstr ended up checking the number of operands, the register class of each operand, and the immediate for all 256 aliases. This was repeated for both the at&t and intel printer. Despite a lot of common checks between all of the aliases, when compiled with clang at least this commonality was not well optimized. Nor do all the checks seem necessary. Since I want to do a similar thing for vcmpps/pd/ss/sd, which have 32 immediate values, 3 encoding flavors, 3 register sizes, etc., this didn't seem to scale well for clang binary size. So custom printing seemed a better trade off.
I also considered just using the InstAlias for the matching and not the printing. But that seemed like it would add a lot of extra rows to the matcher table. Especially given that the 32 immediates for vpcmpps have 46 strings associated with them.
Differential Revision: https://reviews.llvm.org/D59398
llvm-svn: 356343
AMDGPU would like to have MVTs for v3i32, v3f32, v5i32, v5f32. This
commit does not add them, but makes preparatory changes:
* Fixed assumptions of power-of-2 vector type in kernel arg handling,
and added v5 kernel arg tests and v3/v5 shader arg tests.
* Added v5 tests for cost analysis.
* Added vec3/vec5 arg test cases.
Some of this patch is from Matt Arsenault, also of AMD.
Differential Revision: https://reviews.llvm.org/D58928
Change-Id: I7279d6b4841464d2080eb255ef3c589e268eabcd
llvm-svn: 356342
I am about to introduce some non-power-of-2 width vector MVTs. This
commit fixes a power-of-2 assumption that my forthcoming change would
otherwise break, as shown by test/CodeGen/ARM/vcvt_combine.ll and
vdiv_combine.ll.
Differential Revision: https://reviews.llvm.org/D58927
Change-Id: I56a282e365d3874ab0621e5bdef98a612f702317
llvm-svn: 356341
Following the suggestion in D59450, I'm moving the code for constructing
a ConstantRange from KnownBits out of ValueTracking, which also allows us
to test this code independently.
I'm adding this method to ConstantRange rather than KnownBits (which
would have been a bit nicer API wise) to avoid creating a dependency
from Support to IR, where ConstantRange lives.
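A small usage sketch (the example values are assumed for illustration):

#include "llvm/ADT/APInt.h"
#include "llvm/IR/ConstantRange.h"
#include "llvm/Support/KnownBits.h"
using namespace llvm;

void example() {
  KnownBits Known(4);
  Known.Zero = APInt(4, 0b0001); // bit 0 is known to be 0
  Known.One  = APInt(4, 0b0010); // bit 1 is known to be 1
  // Unsigned range of all values consistent with the known bits:
  // [Known.One, ~Known.Zero + 1) == [2, 15).
  ConstantRange CR = ConstantRange::fromKnownBits(Known, /*IsSigned=*/false);
  (void)CR;
}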
Differential Revision: https://reviews.llvm.org/D59475
llvm-svn: 356339
This was noted as a backend problem:
https://bugs.llvm.org/show_bug.cgi?id=41057
...and subsequently fixed for x86:
rL356121
But we should canonicalize these in IR for the benefit of all targets
and improve IR analysis such as CSE.
llvm-svn: 356338
The constant island pass currently only looks at the instruction immediately
before a branch for a CMP to fold into a CBZ/CBNZ. This extends it to search
backwards for the instruction that defines CPSR. We need to ensure that the
register is not overridden between the CMP and the branch.
Differential Revision: https://reviews.llvm.org/D59317
llvm-svn: 356336
Fold (x & ~y) | y and it's four commuted variants to x | y. This pattern
can in particular appear when a vselect c, x, -1 is expanded to
(x & ~c) | (-1 & c) and combined to (x & ~c) | c.
This change has some overlap with D59066, which avoids creating a
vselect of this form in the first place during uaddsat expansion.
Differential Revision: https://reviews.llvm.org/D59174
llvm-svn: 356333
This is a subset of what was proposed in:
D59006
...and may overlap with test changes from:
D59174
...but it seems like a good general optimization to turn selects
into bitwise-logic when possible because we never know exactly
what can happen at this stage of DAG combining depending on how
the target has defined things.
Differential Revision: https://reviews.llvm.org/D59066
llvm-svn: 356332
RISCVAsmParser::ParseRegister is called from AsmParser::parseRegisterOrNumber,
which in turn is called when processing CFI directives. The RISC-V
implementation wasn't setting RegNo, and so was incorrect. This patch addresses
that and adds CFI directive tests that demonstrate the fix. A follow-up patch
will factor out the register parsing logic shared between ParseRegister and
parseRegister.
llvm-svn: 356329
rL356292 reduces the size of scalar_to_vector if we know the upper bits are undef - which means that shuffles may find they are suddenly referencing scalar_to_vector elements other than zero - so make sure we handle this as undef.
llvm-svn: 356327
Two new kinds, BTF_KIND_VAR and BTF_KIND_DATASEC, are added.
BTF_KIND_VAR has the following specification:
btf_type.name: var name
btf_type.info: type kind
btf_type.type: var type
// btf_type is followed by one u32
u32: varinfo (currently, only 0 - static, 1 - global allocated in elf sections)
Not all globals are supported in this patch. The following globals are supported:
. static variables with or without section attributes
. global variables with section attributes
The inclusion of globals with section attributes
is for future potential extraction of key/value
type id's from map definition.
BTF_KIND_DATASEC has the following specification:
btf_type.name: section name associated with variable or
one of .data/.bss/.readonly
btf_type.info: type kind and vlen for # of variables
btf_type.size: 0
#vlen number of the following:
u32: id of corresponding BTF_KIND_VAR
u32: in-section offset of the var
u32: the size of memory var occupied
At the time of debug info emission, the data section
size is unknown, so the btf_type.size = 0 for
BTF_KIND_DATASEC. The loader can patch it during
loading time.
The in-section offset of the var is only available
for static variables. For global variables, the
loader needs to assign the global variable symbol value in the
symbol table to the in-section offset.
The size of memory is used to specify the amount of the
memory a variable occupies. Typically, it equals
the type size, but for certain structures, e.g.,
struct tt {
int a;
int b;
char c[];
};
static volatile struct tt s2 = {3, 4, "abcdefghi"};
The static variable s2 has size of 20.
Note that for BTF_KIND_DATASEC name, the section name
does not contain object name. The compiler does have
input module name. For example, two cases below:
. clang -target bpf -O2 -g -c test.c
The compiler knows the input file (module) is test.c
and can generate sec name like test.data/test.bss etc.
. clang -target bpf -O2 -g -emit-llvm -c test.c -o - |
llc -march=bpf -filetype=obj -o test.o
The llc compiler has the input file as stdin, and
would generate something like stdin.data/stdin.bss etc.
which does not really make sense.
For any user specified section name, e.g.,
static volatile int a __attribute__((section("id1")));
static volatile const int b __attribute__((section("id2")));
The DataSec with name "id1" and "id2" does not contain
information whether the section is readonly or not.
The loader needs to check the corresponding elf section
flags for such information.
A simple example:
-bash-4.4$ cat t.c
int g1;
int g2 = 3;
const int g3 = 4;
static volatile int s1;
struct tt {
int a;
int b;
char c[];
};
static volatile struct tt s2 = {3, 4, "abcdefghi"};
static volatile const int s3 = 4;
int m __attribute__((section("maps"), used)) = 4;
int test() { return g1 + g2 + g3 + s1 + s2.a + s3 + m; }
-bash-4.4$ clang -target bpf -O2 -g -S t.c
Checking t.s, 4 BTF_KIND_VAR's are generated (s1, s2, s3 and m).
4 BTF_KIND_DATASEC's are generated with names
".data", ".bss", ".rodata" and "maps".
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D59441
llvm-svn: 356326
Summary:
In the new wasm EH proposal, `rethrow` takes an `except_ref` argument.
This change was missing in r352598.
This patch adds the `llvm.wasm.rethrow.in.catch` intrinsic. This is an
intrinsic that will eventually be lowered to the wasm `rethrow`
instruction, but this intrinsic can appear only within a catchpad or a
cleanuppad scope. Also this intrinsic needs to be invokable - otherwise
EH pad successor for it will not be correctly generated in clang.
This also adds lowering logic for this intrinsic in
`SelectionDAGBuilder::visitInvoke`. This routine is basically a
specialized and simplified version of
`SelectionDAGBuilder::visitTargetIntrinsic`, but we can't use it
because it is only for `CallInst`s.
This deletes the previous `llvm.wasm.rethrow` intrinsic and related
tests, which was meant to be used within a `__cxa_rethrow` library
function. Turned out this needs some more logic, so the intrinsic for
this purpose will be added later.
LateEHPrepare takes a result value of `catch` and inserts it into
matching `rethrow` as an argument.
`RETHROW_IN_CATCH` is a pseudo instruction that serves as a link between
`llvm.wasm.rethrow.in.catch` and the real wasm `rethrow` instruction. To
generate a `rethrow` instruction, we need an `except_ref` argument,
which is generated from the `catch` instruction. But `catch` instructions are
added in the LateEHPrepare pass, so we use `RETHROW_IN_CATCH`, which takes
no argument, until we are able to correctly lower it to `rethrow` in
LateEHPrepare.
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59352
llvm-svn: 356316
Summary:
Currently the order of these methods does not matter, but the following
CL needs to have this order changed. Merging the order change and the
semantics change within a CL complicates the diff, so submitting the
order change first.
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, sunfish, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59342
llvm-svn: 356315
Summary:
Rewrite WebAssemblyFixIrreducibleControlFlow to a simpler and cleaner
design, which directly computes reachability and other properties
itself. This avoids previous complexity and bugs. (The new graph
analyses are very similar to how the Relooper algorithm would find loop
entries and so forth.)
This fixes a few bugs, including where we had a false positive and
thought fannkuch was irreducible when it was not, which made us much
larger and slower there, and a reverse bug where we missed
irreducibility. On fannkuch, we used to be 44% slower than asm2wasm and
are now 4% faster.
Reviewers: aheejin
Subscribers: jdoerfert, mgrang, dschuff, sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D58919
Patch by Alon Zakai (kripken)
llvm-svn: 356313
The TimePassesHandler object (the implementation of time-passes for the new
pass manager) gains the ability to report into a stream customizable
per-instance (per pipeline). The intended use is to specify a separate
time-passes output stream for each compilation, setting up the TimePasses
member of StandardInstrumentation during PassBuilder setup. That allows
getting independent, non-overlapping pass-timing reports for parallel
independent compilations (in JIT-like setups).
By default it still puts timing reports into the info-output-file stream
(created by CreateInfoOutputFile every time a report is requested).
A unit test was added for the non-default case, which also uncovered that
print() did not work as declared - it did not reset the timers, leading to
yet another report being printed into the default stream. Fixed print() to
actually reset timers according to what was declared in print's comments
before.
Reviewed By: philip.pfaffe
Differential Revision: https://reviews.llvm.org/D59366
llvm-svn: 356305
This relaxes some asserts about sizes, and adds an optional subreg parameter
to buildCopy().
Also update AArch64 instruction selector to use this in places where we
previously used MachineInstrBuilder manually.
Differential Revision: https://reviews.llvm.org/D59434
llvm-svn: 356304
tMOVr and tPUSH/tPOP/tPOP_RET have register constraints which can't be
expressed in TableGen, so check them explicitly. I've unfortunately run
into issues with both of these recently; hopefully this saves some time
for someone else in the future.
Differential Revision: https://reviews.llvm.org/D59383
llvm-svn: 356303
Summary:
As noted by @andreadb in https://reviews.llvm.org/D59035#inline-525780
If we have `sext (trunc (cmov C0, C1) to i8)`,
we can instead do `cmov (sext (trunc C0 to i8)), (sext (trunc C1 to i8))`
Reviewers: craig.topper, andreadb, RKSimon
Reviewed By: craig.topper
Subscribers: llvm-commits, andreadb
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59412
llvm-svn: 356301
Switch BIC immediate creation for vector ANDs from custom lowering
to a DAG combine, which gives generic DAG combines a change to
apply first. In particular this avoids (and x, -1) being turned into
a (bic x, 0) instead of being eliminated entirely.
Differential Revision: https://reviews.llvm.org/D59187
llvm-svn: 356299
Summary:
At the exit of the loop, the compiler uses a register to remember and accumulate
the number of threads that have already exited. When all active threads exit the
loop, this register is used to restore the exec mask, and the execution continues
for the post loop code.
When there is a "continue" in the loop, the compiler made a mistake to reset the
register to 0 when the "continue" backedge is taken. This will result in some
threads not executing the post loop code as they are supposed to.
This patch fixed the issue.
Reviewers:
nhaehnle, arsenm
Differential Revision:
https://reviews.llvm.org/D59312
llvm-svn: 356298
The asm parser generates the immediate without the SAE bit. So for consistency we should generate the MCInst the same way from CodeGen.
Since they are now both the same, remove the masking from the printer and replace with an llvm_unreachable.
Use a target constant since we're rebuilding the node anyway. Then we don't have to have isel convert it. Saves about 500 bytes from the isel table.
llvm-svn: 356294
A change of two parts:
1) A generic enhancement for all callers of SimplifyDemandedVectorElts (SDVE) to exploit the fact that if all lanes are undef, the result is undef.
2) A GEP specific piece to strengthen/fix the vector index undef element handling, and call into the generic infrastructure when visiting the GEP.
The result is that we replace a vector gep with at least one undef in each lane with an undef. We can also do the same for vector intrinsics. Once the masked.load patch (D57372) has landed, I'll update to include call tests as well.
Differential Revision: https://reviews.llvm.org/D57468
llvm-svn: 356293
Reduce the size of an any-extended i64 scalar_to_vector source to i32 - the any_extend nodes are often introduced by SimplifyDemandedBits.
llvm-svn: 356292
Use the methods introduced in rL356276 to implement the
computeOverflowForUnsigned(Add|Sub) functions in ValueTracking, by
converting the KnownBits into a ConstantRange.
This is NFC: The existing KnownBits based implementation uses the same
logic as the ConstantRange based one. This is not the case for the
signed equivalents, so I'm only changing unsigned here.
This is in preparation for D59386, which will also intersect the
computeConstantRange() result into the range determined from KnownBits.
llvm-svn: 356290
Since we can't insert s16 gprs as we don't have 16 bit GPR registers, we need to
teach RBS to assign them to the FPR bank so our selector works.
llvm-svn: 356282
The existing lowering code is accidentally correct for unordered atomics as far as I can tell. An unordered atomic has no memory ordering, and simply requires the actual load or store to be done as a single well aligned instruction. As such, relax the restriction while adding tests to ensure the lowering remains correct in the future.
Differential Revision: https://reviews.llvm.org/D57803
llvm-svn: 356280
Previous commit 6bc58e6d3dbd ("[BPF] do not generate unused local/global types")
tried to exclude global variable from type generation. The condition is:
if (Global.hasExternalLinkage())
continue;
This is not right. It also excluded initialized globals.
The correct condition (from AssemblyWriter::printGlobal()) is:
if (!GV->hasInitializer() && GV->hasExternalLinkage())
Out << "external ";
Let us do the same in BTF type generation. Also added a test for it.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356279
Add functions to ConstantRange that determine whether the
unsigned/signed addition/subtraction of two ConstantRanges
may/always/never overflows. This will allow checking overflow
conditions based on known constant ranges in addition to known bits.
I'm implementing these methods on ConstantRange to allow them to be
unit tested independently of any ValueTracking machinery. The tests
include exhaustive testing on 4-bit ranges, to make sure the result
is both conservatively correct and maximally precise.
The OverflowResult enum is redeclared on ConstantRange, because
I wanted to avoid a dependency in either direction between
ValueTracking.h and ConstantRange.h.
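For example, a hedged sketch using 4-bit ranges (the enum value name is as
introduced by this patch):

#include "llvm/ADT/APInt.h"
#include "llvm/IR/ConstantRange.h"
using namespace llvm;

void example() {
  ConstantRange A(APInt(4, 8), APInt(4, 12)); // unsigned values [8, 12)
  ConstantRange B(APInt(4, 8), APInt(4, 10)); // unsigned values [8, 10)
  // The smallest possible sum, 8 + 8 = 16, already wraps in 4 bits,
  // so the addition always overflows:
  auto OR = A.unsignedAddMayOverflow(B);
  (void)OR; // == ConstantRange::OverflowResult::AlwaysOverflows
}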
Differential Revision: https://reviews.llvm.org/D59193
llvm-svn: 356276
Summary: Add bindings to create a wrapped "Add Discriminators" pass. Now that we have debug info support, this is a handy transform to have.
Reviewers: whitequark, deadalnix
Reviewed By: whitequark
Subscribers: dblaikie, aprantl, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58624
llvm-svn: 356272
Summary:
The AliasSummary previously contained the AliaseeGUID, which was only
populated when reading the summary from bitcode. This patch changes it
to instead hold the ValueInfo of the aliasee, and always populates it.
This enables more efficient access to the ValueInfo (specifically in the
recent patch r352438 which needed to perform an index hash table lookup
using the aliasee GUID).
As noted in the comments in AliasSummary, we no longer technically need
to keep a pointer to the corresponding aliasee summary, since it could
be obtained by walking the list of summaries on the ValueInfo looking
for the summary in the same module. However, I am concerned that this
would be inefficient when walking through the index during the thin
link for various analyses. That can be reevaluated in the future.
By always populating this new field, we can remove the guard and special
handling for a 0 aliasee GUID when dumping the dot graph of the summary.
An additional improvement in this patch is when reading the summaries
from LLVM assembly we now set the AliaseeSummary field to the aliasee
summary in that same module, which makes it consistent with the behavior
when reading the summary from bitcode.
Reviewers: pcc, mehdi_amini
Subscribers: inglorion, eraman, steven_wu, dexonsmith, arphaman, llvm-commits
Differential Revision: https://reviews.llvm.org/D57470
llvm-svn: 356268
Summary:
This is similar to how addr2line handles consecutive entries with the
same address - pick the last one.
Reviewers: dblaikie, friss, JDevlieghere
Reviewed By: dblaikie
Subscribers: eugenis, vitalybuka, echristo, JDevlieghere, probinson, aprantl, hiraditya, rupprecht, jdoerfert, llvm-commits
Tags: #llvm, #debug-info
Differential Revision: https://reviews.llvm.org/D58952
llvm-svn: 356265
Summary:
This is a fix to bug 41052:
https://bugs.llvm.org/show_bug.cgi?id=41052
While trying to optimize a memory instruction in a dead basic block, we end up registering the same phi for replacement twice. This patch avoids registering more than the first replacement candidate for a phi.
Patch by: JesperAntonsson
Reviewers: skatkov, aprantl
Reviewed By: aprantl
Subscribers: jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59358
llvm-svn: 356260
Summary:
- During the fixing of SGPR copying from VGPR, ensure users of SCC is
properly propagated, i.e.
* only propagate through live def of SCC,
* skip the SCC-def inst itself, and
* stop the propagation on the other SCC-def inst after checking its
SCC-use first.
Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59362
llvm-svn: 356258
We are adding a sign extended IR value to an int64_t, which can cause
signed overflows, as in the attached test case, where we have a formula
with BaseOffset = -1 and a constant with numeric_limits<int64_t>::min().
If the addition would overflow, skip the simplification for this
formula. Note that the target triple is required to trigger the failure.
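A minimal sketch of the overflow guard (the helper name is made up for
illustration):

#include <cstdint>

// True if A + B would wrap around int64_t; such formulas are skipped.
static bool addWouldOverflow(int64_t A, int64_t B) {
  int64_t Result;
  return __builtin_add_overflow(A, B, &Result); // GCC/Clang builtin
}
// e.g. addWouldOverflow(-1, INT64_MIN) == true, matching the test case.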
Reviewers: qcolombet, gilr, kparzysz, efriedma
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D59211
llvm-svn: 356256
yaml2obj currently derives the p_filesz, p_memsz, and p_offset values of
program headers from their sections. This makes writing tests for
certain formats more complex, and sometimes impossible. This patch
allows setting these fields explicitly, overriding the default value,
when relevant.
Reviewed by: jakehehrlich, Higuoxing
Differential Revision: https://reviews.llvm.org/D59372
llvm-svn: 356247
Bail early when we don't have a preheader and also if the target is
big endian because it's written with only little endian in mind!
Differential Revision: https://reviews.llvm.org/D59368
llvm-svn: 356243
Certain 32 bit constants can be generated with a single instruction
instead of two. Implement materialize32BitImm function for MIPS32.
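A hedged sketch of the case split (the function name and exact instruction
choice are illustrative, not the GlobalISel code itself):

#include <cstdint>

// Constants whose high or low half is zero fit in one instruction
// (ORi with $zero, or LUi); everything else needs the LUi + ORi pair.
unsigned numInsnsForImm32(uint32_t Imm) {
  uint32_t Hi = Imm >> 16;
  uint32_t Lo = Imm & 0xffff;
  if (Hi == 0 || Lo == 0)
    return 1;
  return 2;
}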
Differential Revision: https://reviews.llvm.org/D59369
llvm-svn: 356238
The kernel currently has a limit for # of types to be 64KB and
the size of string subsection to be 64KB. A simple bcc tool
runqlat.py generates:
. the size of ~33KB type section, roughly ~10K types
. the size of ~17KB string section
The majority type is from the types referenced by local
variables in the bpf program. For example, the kernel "task_struct"
itself recursively brings in ~900 other types.
This patch did the following optimization to avoid generating
unused types:
. do not generate types for local variables unless they are
function arguments.
. do not generate types for external globals.
If an external global is not used in the program, llvm
already removes it from IR, so the global variable savings are
typically small. For runqlat.py, only one variable "llvm.used"
is the external global.
The types for locals and external globals can be added back
once there is a usage for them.
After the above optimization, the runqlat.py generates:
. the size of ~1.5KB type section, roughly 500 types
. the size of ~0.7KB string section
UPDATE:
resubmitted the patch after previous revert with
the following fix:
use Global.hasExternalLinkage() to test "external"
linkage instead of using Global.getInitializer(),
which will assert on external variables.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356234
The kernel currently has a limit for # of types to be 64KB and
the size of string subsection to be 64KB. A simple bcc tool
runqlat.py generates:
. the size of ~33KB type section, roughly ~10K types
. the size of ~17KB string section
The majority type is from the types referenced by local
variables in the bpf program. For example, the kernel "task_struct"
itself recursively brings in ~900 other types.
This patch did the following optimization to avoid generating
unused types:
. do not generate types for local variables unless they are
function arguments.
. do not generate types for external globals.
If an external global is not used in the program, llvm
already removes it from IR, so the global variable savings are
typically small. For runqlat.py, only one variable "llvm.used"
is the external global.
The types for locals and external globals can be added back
once there is a usage for them.
After the above optimization, the runqlat.py generates:
. the size of ~1.5KB type section, roughly 500 types
. the size of ~0.7KB string section
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 356232
Before r355981, this was under LLVM_DEBUG. I don't think the assert is
quite right, but this really should be a verifier check. Instcombine
should not be asserting on this sort of thing.
llvm-svn: 356219
This is almost the same as:
rL355345
...and should prevent any potential crashing from examples like:
https://bugs.llvm.org/show_bug.cgi?id=41064
...although the bug was masked by:
rL355823
...and I'm not sure how to repro the problem after that change.
llvm-svn: 356218
This isn't necessary according to the DWARF standard, but it matches the
.eh_frame sections emitted by other tools in practice, and the Android
libunwindstack rejects .eh_frame sections where an FDE refers to a CIE
other than the closest previous CIE. So match the other tools and also
sort accordingly.
I consider this a bug in libunwindstack, but it's easy enough to emit
a compatible .eh_frame section for compatibility with installed
operating systems.
Differential Revision: https://reviews.llvm.org/D58266
llvm-svn: 356216
This has been a very painful missing feature that has made producing
reduced testcases difficult. In particular the various registers
determined for stack access during function lowering were necessary to
avoid undefined register errors in a large percentage of
cases. Implement a subset of the important fields that need to be
preserved for AMDGPU.
Most of the changes are to support targets parsing register fields and
properly reporting errors. The biggest sort-of bug remaining is that
fields that can be initialized from the IR section will be overwritten
by a default-initialized machineFunctionInfo section. Another
remaining bug is that the machineFunctionInfo section is still printed
even if empty.
llvm-svn: 356215
This adds instruction selection support for G_UADDO on s32s and s64s.
Also
- Add an instruction selection test
- Update the arm64-xaluo.ll test to show that we generate the correct assembly
Differential Revision: https://reviews.llvm.org/D58734
llvm-svn: 356214
This re-uses the previous support for extract vector elt to extract the
subvectors.
Differential Revision: https://reviews.llvm.org/D59390
llvm-svn: 356213
On ARC ISA, general format of load instruction is this:
LD<zz><.x><.aa><.di> a, [b,c]
And general format of store is this:
ST<zz><.aa><.di> c, [b,s9]
Where:
<zz> is data size field and can be one of
<empty> (bits 00) - Word (32-bit), default behavior
B (bits 01) - Byte
H (bits 10) - Half-word (16-bit)
<.x> is data extend mode:
<empty> (bit 0) - If size is not Word(32-bit), then data is zero extended
X (bit 1) - If size is not Word(32-bit), then data is sign extended
<.aa> is address write-back mode:
<empty> (bits 00) - no write-back
.AW (bits 01) - Preincrement, base register updated pre memory transaction
.AB (bits 10) - Postincrement, base register updated post memory transaction
<.di> is cache bypass mode:
<empty> (bit 0) - Cached memory access, default mode
.DI (bit 1) - Non-cached data memory access
This patch adds these load/store instruction variants to the ARC backend.
Patch By Denis Antrushin! <denis@synopsys.com>
Differential Revision: https://reviews.llvm.org/D58980
llvm-svn: 356200
Windows command line argument processing treats consecutive double quotes
as a single double-quote. This patch implements this functionality.
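A hedged sketch of the rule (not the actual command-line tokenizer code):
inside a quoted region, a pair of double quotes emits one literal quote.

#include <string>

std::string collapseQuotes(const std::string &Arg) {
  std::string Out;
  bool InQuote = false;
  for (size_t I = 0; I < Arg.size(); ++I) {
    if (Arg[I] != '"') {
      Out += Arg[I];
      continue;
    }
    if (InQuote && I + 1 < Arg.size() && Arg[I + 1] == '"') {
      Out += '"'; // "" inside quotes -> one literal "
      ++I;
    } else {
      InQuote = !InQuote; // otherwise the quote toggles quoting mode
    }
  }
  return Out;
}
// collapseQuotes("\"a\"\"b\"") == "a\"b"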
Differential Revision: https://reviews.llvm.org/D58662
llvm-svn: 356193
The shift argument is defined to be modulo the bitwidth, so if that argument
is a constant, we can always reduce the constant to its minimal form to allow
better CSE and other follow-on transforms.
We need to be careful to ignore constant expressions here, or we will likely
infinite loop. I'm adding a general vector constant query for that case.
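For reference, the modulo semantics in plain C++ (illustrative; fshl shown
for i32):

#include <cstdint>

// Funnel-shift-left of the concatenation x:y by sh, with sh taken mod 32.
uint32_t fshl32(uint32_t X, uint32_t Y, uint32_t Sh) {
  Sh %= 32;
  return Sh ? (X << Sh) | (Y >> (32 - Sh)) : X;
}
// fshl32(X, Y, 39) == fshl32(X, Y, 7) for all X, Y, so a constant
// shift amount of 39 can always be reduced to 7.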
Differential Revision: https://reviews.llvm.org/D59374
llvm-svn: 356192
This adds support for inserting elements into packed vectors. It also adds
two tests: one for selection, and one for regbank select.
Unpacked vectors will come in a follow-up.
Differential Revision: https://reviews.llvm.org/D59325
llvm-svn: 356182
Summary:
Some operations have multiple ARC instructions that are applicable.
For instance, "add r0, r0, 123" can be encoded as a "LImm" instruction
with a 32-bit immediate (8-bytes), or as a signed 12-bit immediate instruction
for the case where the source and destination register are the same (4-bytes).
The ARC assembler will choose the shortest encoding, but we should track
the correct instruction in the compiler.
This patch fixes the instruction used in some cases from ARCFrameLowering.
Subscribers: hiraditya, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59326
llvm-svn: 356179
Building on the work done in D57601, now that we can distinguish between atomic and volatile memory accesses, go ahead and allow code motion of unordered atomics. As seen in the diffs, this allows much better folding of memory operations into using instructions. (Mostly done by the PeepholeOpt pass.)
Note: I have not reviewed all callers of hasOrderedMemoryRef since one of them - isSafeToMove - is very widely used. I'm relying on the documented semantics of each method to judge correctness.
Differential Revision: https://reviews.llvm.org/D59345
llvm-svn: 356170
These instructions used to use rotl with a bitwidth-1 immediate. I changed the immediate to 1,
but failed to change the opcode.
Thankfully this seems to have not caused a functional issue because we now had two rotl by 1 patterns,
but the correct ones were earlier and took priority. So we just missed some optimization.
llvm-svn: 356164
This is an immediate fix for:
https://bugs.llvm.org/show_bug.cgi?id=41066
...but as noted there and the code comments, we should do better
by stubbing this out sooner.
llvm-svn: 356158
This is consistent with what SelectionDAG does and is much easier to
work with than the extract sequence with an artificial wide register.
For the AMDGPU control flow intrinsics, this was producing an s128 for
the i64, i1 tuple return. Any legalization that should apply to a real
s128 value would badly obscure the direct values that need to be seen.
llvm-svn: 356147
Summary:
Add hooks for determining the policy used to decide whether/how
to chop off symbol 'suffixes' when locating a given function
in a sample profile.
Prior to this change, any function symbols of the form "X.Y" were
elided/truncated into just "X" when looking up things in a sample
profile data file.
With this change, the policy on suffixes can be changed by adding a
new attribute "sample-profile-suffix-elision-policy" to the function:
this attribute can have the value "all" (the default), "selected", or
"none". A value of "all" preserves the previous behavior (chop off
everything after the first "." character, then treat that as the
symbol name). A value of "selected" chops off only the rightmost
".llvm.XXXX" suffix (where "XXX" is any string not containing a "."
char). A value of "none" indicates that names should be left as is.
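A sketch of the "selected" policy (the helper is illustrative, not the
actual sample-profile code):

#include <string>

std::string elideSelectedSuffix(const std::string &Name) {
  size_t Pos = Name.rfind(".llvm.");
  if (Pos == std::string::npos)
    return Name; // no ".llvm." suffix at all
  if (Name.find('.', Pos + 6) != std::string::npos)
    return Name; // the trailing part contains another '.', keep as is
  return Name.substr(0, Pos);
}
// elideSelectedSuffix("foo.llvm.1234") == "foo"
// elideSelectedSuffix("foo.cold") is unchanged under "selected"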
Subscribers: jdoerfert, wmi, mtrofin, danielcdh, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58832
llvm-svn: 356146
When choosing whether a pair of loads can be combined into a single
wide load, we check that the load only has a sext user and that sext
also only has one user. But this can prevent the transformation in
the cases when parallel macs use the same loaded data multiple times.
To enable this, we need to fix up any other uses after creating the
wide load: generating a trunc and a shift + trunc pair to recreate
the narrow values. We also need to keep a record of which loads have
already been widened.
Differential Revision: https://reviews.llvm.org/D59215
llvm-svn: 356132
Create members for Loop, ScalarEvolution, DominatorTree,
TargetTransformInfo and Formula.
Differential Revision: https://reviews.llvm.org/D58389
llvm-svn: 356131
The CSR renaming further prepares the way for an upcoming patch adding support for more
RISC-V ABIs.
Modify RISCVRegisterInfo::getCalleeSavedRegs and
RISCVRegisterInfo::getReservedRegs to do MF->getSubtarget<RISCVSubtarget>()
once rather than multiple times.
llvm-svn: 356123
Prior to the introduction of funnel shift intrinsics we could count on rotate
by immediates preferring to use rotl since that's what MatchRotate would check
first. The or+shift pattern doesn't have a direction so one must be chosen
arbitrarily.
With funnel shift, there is a direction and fshr will try to use rotr first.
While fshl will try to use rotl first.
This patch adds the isel patterns for rotr to complement the rotl patterns. I've
put the rotr by 1 patterns in the instruction patterns. And moved the rotl by
bitwidth-1 patterns to separate Pat patterns.
Fixes PR41057.
llvm-svn: 356121
getConstantVRegVal used to only look for G_CONSTANT when looking at
unboxing the value of a vreg. However, constants are sometimes not
directly used and are hidden behind a chain of trunc, s|zext or copy
computations.
In particular this may be introduced by the legalization process, which
doesn't want to simplify these patterns because that can lead to infinite
loops when legalizing a constant.
To circumvent that problem, add a new variant of getConstantVRegVal,
named getConstantVRegValWithLookThrough, that allows looking through
extensions.
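Possible usage, per the GlobalISel Utils interface added here (treat the
exact signature as an assumption):

if (auto ValAndVReg = getConstantVRegValWithLookThrough(Reg, MRI)) {
  // Constant reached through copies/truncs/extensions.
  int64_t CstVal = ValAndVReg->Value;
  (void)CstVal;
}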
Differential Revision: https://reviews.llvm.org/D59227
llvm-svn: 356116
Adding a "NumFunctionsVisited" for collecting the visited function number.
It can be used to collect function pass rate in some tests,
the pass rate = (NumberVisited - NumberReset)/NumberVisited.
e.g. it can be used for caculating GlobalISel pass rate in Test-Suite.
Patch by Tianyang Zhu (zhutianyang)
Differential Revision: https://reviews.llvm.org/D59285
llvm-svn: 356114
NFC. Some more preliminary factoring for G_INSERT_VECTOR_ELT.
Also better code-reuse, etc., etc.
Differential Revision: https://reviews.llvm.org/D59323
llvm-svn: 356107
Factor out the vector insert code in `selectBuildVector`. Replace part of it
with `emitScalarToVector`, since it was pretty much equivalent.
This will make implementing G_INSERT_VECTOR_ELT easier.
Differential Revision: https://reviews.llvm.org/D59322
llvm-svn: 356106
Some more refactoring for G_INSERT_VECTOR_ELT.
Factor out the code used to find a lane index from `selectExtractElt`. Put it
into a more general-purpose `getConstantValueForReg` function.
This will be shared with the code for G_INSERT_VECTOR_ELT.
Differential Revision: https://reviews.llvm.org/D59324
llvm-svn: 356101
Summary:
MsgPackTypes has been replaced by the lighter-weight MsgPackDocument.
Differential Revision: https://reviews.llvm.org/D57025
Change-Id: Ia7069880ef29f55490abbe5d8ae15f25cc1490a4
llvm-svn: 356082
Summary:
MsgPackDocument is the lighter-weight replacement for MsgPackTypes. This
commit switches AMDGPU HSA metadata processing to use MsgPackDocument
instead of MsgPackTypes.
Differential Revision: https://reviews.llvm.org/D57024
Change-Id: I0751668013abe8c87db01db1170831a76079b3a6
llvm-svn: 356081
Summary:
A class that exposes a simple in-memory representation of a document of
MsgPack objects, that can be read from and written to MsgPack, read from
and written to YAML, and inspected and modified in memory. This is
intended to be a lighter-weight (in terms of memory allocations)
replacement for MsgPackTypes.
Two subsequent changes will:
1. switch AMDGPU HSA metadata to using MsgPackDocument instead of
MsgPackTypes;
2. add MsgPack AMDGPU PAL metadata via MsgPackDocument.
Differential Revision: https://reviews.llvm.org/D57023
Change-Id: Ie15a054831d5a6467c5867c064c8f8f6b80270e1
llvm-svn: 356080
The feature flag alone can't be trusted since it can be passed via -mattr. Need to ensure 64-bit mode as well.
We had a 64 bit mode check on the instruction to make the assembler work correctly. But we weren't guarding any of our lowering code or the hooks for the AtomicExpandPass.
I've added 32-bit command lines to atomic128.ll with and without cx16. The tests there would all previously fail if -mattr=cx16 was passed to them. I had to move one test case for f128 to a new file, as it seems to have a different 32-bit-mode or possibly SSE issue.
Differential Revision: https://reviews.llvm.org/D59308
llvm-svn: 356078
Summary:
A number of optimizations are inhibited by single-use TokenFactors not
being merged into the TokenFactor using them. This change makes us consider
whether we can do the merge immediately.
Most tests changes here are due to the change in visitation causing
minor reorderings and associated reassociation of paired memory
operations.
CodeGen tests with non-reordering changes:
X86/aligned-variadic.ll -- memory-based add folded into stored leaq
value.
X86/constant-combiners.ll -- Optimizes out overlap between stores.
X86/pr40631_deadstore_elision -- folds constant byte store into
preceding quad word constant store.
Reviewers: RKSimon, craig.topper, spatel, efriedma, courbet
Reviewed By: courbet
Subscribers: dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, javed.absar, eraman, hiraditya, kbarton, jrtc27, atanasyan, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59260
llvm-svn: 356068
Attempt to combine CONCAT_VECTORS nodes, which we only really have pre-legalization.
This encourages a lot of X86ISD::SUBV_BROADCAST generation, so I've added SimplifyDemandedVectorEltsForTargetNode handling for this at the same time.
The X86ISD::VTRUNC regression in shuffle-vs-trunc-256-widen.ll will be handled in a future commit.
llvm-svn: 356064
This follows similar logic in the ARM and Mips backends, and allows the free
use of s0 in functions without a dedicated frame pointer. The changes in
callee-saved-gprs.ll most clearly show the effect of this patch.
llvm-svn: 356063
A fuzzer found the crasher:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=13700
The bug was introduced recently here:
rL355741
This is the quick fix. If we need to do this transform
later, then we'd have to extend/truncate the vector setcc
element type to the scalar setcc type (i8).
llvm-svn: 356053
Before this change LLVM emits non-microMIPS variant of the `mov.d`
command for microMIPS code.
Differential Revision: http://reviews.llvm.org/D59045
llvm-svn: 356052
To provide a mapping between the standard and microMIPS R6 variants of the
`sw` command, we have to rename the SWSP_xxx commands from "sw" to "swsp";
otherwise `tablegen` shows the error `Multiple matches found
for `SW'`. To restore printing the SWSP command as `sw`, I then add
an appropriate `MipsInstAlias` instance.
We also need to implement "size reduction" for microMIPS R6, but that
task is for a separate patch, after which the `micromips-lwsp-swsp.ll` test
case will be extended.
Differential Revision: http://reviews.llvm.org/D59046
llvm-svn: 356045
AVX1 broadcasts were failing as we were adding bitcasts that caused MayFoldLoad's hasOneUse to return false.
This patch stops introducing bitcasts so early, and replaces the broadcast index scaling through bitcasts (which can't succeed in some cases) with just keeping track of the bit offset, which can be converted back to a broadcast index later on.
Differential Revision: https://reviews.llvm.org/D58888
llvm-svn: 356043
First step towards PR40800 - I intend to move the float case in a separate future patch.
I had to tweak the (overly reduced) thumb2 test and the x86 widening test change is annoying (no longer rematerializable) but we should address this separately.
Differential Revision: https://reviews.llvm.org/D59244
llvm-svn: 356040
On microMIPS, MipsMTLOHI is always matched to PseudoMTLOHI_DSP regardless
of the +dsp argument. This patch checks whether the HasDSP predicate is
present for PseudoMTLOHI_DSP, so that PseudoMTLOHI_MM can be matched when
appropriate.
Add expansion of PseudoMTLOHI_MM instruction into a mtlo/mthi pair.
Patch by Mirko Brkusanin.
Differential Revision: http://reviews.llvm.org/D59203
llvm-svn: 356039
Add break statements in Object/ELF.cpp since the code should consider the
generic tags for Hexagon, MIPS, and PPC. Add a test (copied from llvm-readobj)
to show that this works correctly (earlier versions of this patch would have
asserted).
The warnings in X86ELFObjectWriter.cpp are actually false-positives since
the nested switch() handles all possible values and returns in all cases.
Make this explicit by adding llvm_unreachable's.
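For illustration, a reduced single-level sketch of the pattern (the enum and function names are hypothetical):
```
#include "llvm/Support/ErrorHandling.h"

enum class Reloc { Abs, PCRel };

// The switch returns for every enumerator, but the compiler cannot prove
// exhaustiveness, so it warns about control reaching the end of the
// function. llvm_unreachable documents the invariant and silences the
// false positive.
static unsigned encode(Reloc R) {
  switch (R) {
  case Reloc::Abs:   return 0;
  case Reloc::PCRel: return 1;
  }
  llvm_unreachable("covered every Reloc value above");
}
```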
Differential Revision: https://reviews.llvm.org/D58837
llvm-svn: 356037
If the concatenation of the arguments dir and bin has at least PATH_MAX
characters, the call to snprintf will truncate. The result will usually
not exist, but if it does, it's actually incorrect to return that the
path exists.
(Motivated by GCC compiler warning about format truncation.)
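A minimal sketch of the corrected check (the helper name is hypothetical; PATH_MAX is the POSIX constant): snprintf returns the length it would have written, so any result at or above the buffer size means truncation:
```
#include <cstddef>
#include <cstdio>
#include <limits.h> // PATH_MAX (POSIX; not guaranteed by ISO C++)

// Returns true only if the joined path fit without truncation; a truncated
// path must not be probed for existence.
static bool joinPath(const char *Dir, const char *Bin, char (&Buf)[PATH_MAX]) {
  int N = std::snprintf(Buf, sizeof(Buf), "%s/%s", Dir, Bin);
  return N >= 0 && static_cast<size_t>(N) < sizeof(Buf);
}
```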
Differential Revision: https://reviews.llvm.org/D58835
llvm-svn: 356036
RISCVDisassembler was incorrectly using sizeof(Arr) when it should have used
sizeof(Arr)/sizeof(Arr[0]). Update to use array_lengthof instead.
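A standalone illustration of the bug class (llvm::array_lengthof behaves like C++17's std::size):
```
#include <cstdint>
#include <iterator> // std::size

const uint16_t Table[] = {1, 2, 3, 4};

// sizeof(Table) is the size in bytes (8), not the element count (4); using
// it directly as a count walks past the end of the array.
static_assert(sizeof(Table) == 8, "size in bytes");
static_assert(std::size(Table) == 4, "element count");

int main() { return 0; }
```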
llvm-svn: 356035
Summary:
After the instruction selection phase, possibly-throwing calls, which were
previously invokes, are wrapped in `EH_LABEL` instructions. For example:
```
EH_LABEL <mcsymbol .Ltmp0>
CALL_VOID @foo ...
EH_LABEL <mcsymbol .Ltmp1>
```
`EH_LABEL` is also placed at the beginning of EH pads:
```
bb.1 (landing-pad):
EH_LABEL <mcsymbol .Ltmp2>
...
```
And we'd like to maintain this relationship, so when we place a `try`, it goes before the call's starting `EH_LABEL`:
```
TRY ...
EH_LABEL <mcsymbol .Ltmp0>
CALL_VOID @foo ...
EH_LABEL <mcsymbol .Ltmp1>
```
And when we place a `catch`, it goes after the `EH_LABEL` at the start of the EH pad:
```
bb.1 (landing-pad):
EH_LABEL <mcsymbol .Ltmp2>
%0:except_ref = CATCH ...
...
```
Previously we didn't treat EH_LABELs specially, so `try` was placed
right before a call, and `catch` was placed in the beginning of an EH
pad.
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58914
llvm-svn: 355996
Summary:
AIX compilers define macros based on the version of the operating
system.
This patch implements updating of versionless AIX triples to include the
host AIX version. Also, the host triple detection in the build system is
adjusted to strip the AIX version information so that the run-time
detection is preferred.
Reviewers: xingxue, stefanp, nemanjai, jasonliu
Reviewed By: xingxue
Subscribers: mgorny, kristina, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58798
llvm-svn: 355995
This patch adds an XCOFF triple object format type into LLVM.
This XCOFF triple object file type will be used later by object file and assembly generation for the AIX platform.
Differential Revision: https://reviews.llvm.org/D58930
llvm-svn: 355989
Currently we have -Rpass for filtering the remarks that are displayed as
diagnostics, but when using -fsave-optimization-record, there is no way
to filter the remarks while generating them.
This adds support for filtering remarks by passes using a regex.
Ex: `clang -fsave-optimization-record -foptimization-record-passes=inline`
will only emit the remarks coming from the pass `inline`.
This adds:
* `-fsave-optimization-record` to the driver
* `-opt-record-passes` to cc1
* `-lto-pass-remarks-filter` to the LTOCodeGenerator
* `--opt-remarks-passes` to lld
* `-pass-remarks-filter` to llc, opt, llvm-lto, llvm-lto2
* `-opt-remarks-passes` to gold-plugin
Differential Revision: https://reviews.llvm.org/D59268
Original llvm-svn: 355964
llvm-svn: 355984
A faulting_op is one that has specified behavior when a fault occurs, generally redirecting control flow to another location. This change just adds a comment to the assembly output which makes it both human-readable and machine-checkable without having to parse the FaultMap section. This is used to split a test file into two parts, so that I can (in a near-future commit) easily extend the test file to demonstrate another case.
llvm-svn: 355982
This indicates an intrinsic parameter is required to be a constant,
and should not be replaced with a non-constant value.
Add the attribute to all AMDGPU and generic intrinsics that comments
indicate it should apply to. I scanned other target intrinsics, but I
don't see any obvious comments indicating which arguments are intended
to be only immediates.
This breaks one questionable testcase for the autoupgrade. I'm unclear
on whether the autoupgrade is supposed to really handle declarations
which were never valid. The verifier fails because the attributes now
refer to a parameter past the end of the argument list.
llvm-svn: 355981
Summary:
This is similar to how addr2line handles consecutive entries with the
same address - pick the last one.
Reviewers: dblaikie, friss, JDevlieghere
Reviewed By: dblaikie
Subscribers: ormris, echristo, JDevlieghere, probinson, aprantl, hiraditya, rupprecht, jdoerfert, llvm-commits
Tags: #llvm, #debug-info
Differential Revision: https://reviews.llvm.org/D58952
llvm-svn: 355972
Every time a physical register reference was parsed, this would
initialize a string map for every register in the target, and discard
it for the next. The same applies to the other fields initialized
from target information.
Follow along with how the function state is tracked, and add a new
tracking class for target information.
The string->register class/register bank for some reason were kept
separately, so track them in the same place.
llvm-svn: 355970
Currently we have -Rpass for filtering the remarks that are displayed as
diagnostics, but when using -fsave-optimization-record, there is no way
to filter the remarks while generating them.
This adds support for filtering remarks by passes using a regex.
Ex: `clang -fsave-optimization-record -foptimization-record-passes=inline`
will only emit the remarks coming from the pass `inline`.
This adds:
* `-fsave-optimization-record` to the driver
* `-opt-record-passes` to cc1
* `-lto-pass-remarks-filter` to the LTOCodeGenerator
* `--opt-remarks-passes` to lld
* `-pass-remarks-filter` to llc, opt, llvm-lto, llvm-lto2
* `-opt-remarks-passes` to gold-plugin
Differential Revision: https://reviews.llvm.org/D59268
llvm-svn: 355964
The included test case currently crashes on tip of tree. Rather than adding a bailout, I chose to restructure the code so that the existing helper function could be used. Given that, the majority of the diff is NFC-ish, but the key difference is that canConvertValue returns false when only one side is a non-integral pointer.
Thanks to Cherry Zhang for the test case.
Differential Revision: https://reviews.llvm.org/D59000
llvm-svn: 355962
The existing statepoint lowering code does something odd; it adds machine memory operands post instruction selection. This was copied from the stackmap/patchpoint implementation, but appears to be non-idiomatic.
This change is largely NFC. It moves the MMO creation logic into SelectionDAG building. It ends up not quite being NFC because the size of the stack slot is reflected in the MMO. The old code blindly used pointer size for the MMO size, which appears to have always been incorrect for larger values. It just happened nothing actually relied on the MMOs, so it worked out okay.
For context, I'm planning on removing the MOVolatile flag from these in a future commit, and then removing the MOStore flag from deopt spill slots in a separate one. Doing so is motivated by a small test case where we should be able to better schedule spill slots, but don't do so due to a memory use/def implied by the statepoint.
Differential Revision: https://reviews.llvm.org/D59106
llvm-svn: 355953
Summary:
This fixes an extremely long compile time caused by recursive analysis
of truncs, which were not previously subject to any depth limits unlike
some of the other ops. I decided to use the same control used for
sext/zext, since the routines analyzing these are sometimes mutually
recursive with the trunc analysis.
Reviewers: mkazantsev, sanjoy
Subscribers: sanjoy, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58994
llvm-svn: 355949
Vector immediate-setting instructions like XXLXORz/XXLXORspz/XXLXORdpz
should behave like LI8.
We should set the corresponding flags to allow rematerialization and other
optimizations in LICM, RA, scheduling, etc.
Differential Revision: https://reviews.llvm.org/D58645
llvm-svn: 355948
This patch adds a new option to SplitAllCriticalEdges and uses it to avoid splitting critical edges when the destination basic block ends with unreachable. Otherwise if we split the critical edge, sanitizer coverage will instrument the new block that gets inserted for the split. But since this block itself shouldn't be reachable this is pointless. These basic blocks will stick around and generate assembly, but they don't end in sane control flow and might get placed at the end of the function. This makes it look like one function has code that flows into the next function.
This showed up while compiling the linux kernel with clang. The kernel has a tool called objtool that detected the code that appeared to flow from one function to the next. https://github.com/ClangBuiltLinux/linux/issues/351#issuecomment-461698884
Differential Revision: https://reviews.llvm.org/D57982
llvm-svn: 355947
If a symbol points to the end of a fragment, instead of searching for
fixups in that fragment, search in the next fragment.
Fixes spurious assembler error with subtarget change next to "la"
pseudo-instruction, or expanded equivalent.
Alternate proposal to fix the problem discussed in
https://reviews.llvm.org/D58759.
Testcase by Ana Pazos.
Differential Revision: https://reviews.llvm.org/D58943
llvm-svn: 355946
Expand MULO with constant power of two operand into a shift. The
overflow is checked with (x << shift) >> shift == x, where the right
shift will be logical for umulo and arithmetic for smulo (with
exception for multiplications by signed_min).
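A standalone model of the unsigned case (the helper is hypothetical; smulo uses an arithmetic shift instead, with the noted special case for signed_min):
```
#include <cassert>
#include <cstdint>

// umulo(x, 1 << Shift): the product is x << Shift, and overflow occurred
// iff shifting back (logically, for the unsigned case) does not recover x.
bool umulo_pow2(uint32_t X, unsigned Shift, uint32_t &Product) {
  Product = X << Shift;
  return (Product >> Shift) != X;
}

int main() {
  uint32_t P;
  assert(!umulo_pow2(3, 4, P) && P == 48); // 3 * 16, no overflow
  assert(umulo_pow2(0xFFFFFFFFu, 1, P));   // overflow detected
}
```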
Differential Revision: https://reviews.llvm.org/D59041
llvm-svn: 355937
This patch removes two assertions that were preventing writing of a test
that checked an empty line followed by some text. For example:
CHECK: {{^$}}
CHECK-NEXT: foo()
The assertion was because the current location the CHECK-NEXT was
scanning from was the start of the buffer. A similar issue occurred with
CHECK-SAME. These assertions don't protect against anything, as there is
already an error check that checks that CHECK-NEXT/EMPTY/SAME don't
appear first in the checks, and the following code works fine if the
pointer is at the start of the input.
Reviewed by: probinson, thopre, jdenny
Differential Revision: https://reviews.llvm.org/D58784
llvm-svn: 355928
Targets can potentially emit more efficient code if they know address
computations never overflow. For example ILP32 code on AArch64 (which only has
64-bit address computation) can ignore the possibility of overflow with this
extra information.
llvm-svn: 355926
The code appeared to intend to replace puts("") with putchar('\n') even when the
return value is used, but it never did, because use_empty() was used to guard
the whole block. While returning '\n' (what putchar('\n') returns) would be
technically correct (puts is only required to return a nonnegative number on
success), doing this looks weird, and there is really little benefit in
optimizing a puts whose return value is used. So don't do that.
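In concrete terms (plain C++ sketch of the two situations):
```
#include <cstdio>

// The transform rewrites puts("") to putchar('\n') when the result is
// unused. When the result is used, puts only guarantees a nonnegative
// value on success, so pinning it to what putchar('\n') returns would be
// technically valid but was judged not worth the weirdness.
void unusedResult() { std::puts(""); }        // candidate for putchar('\n')
int  usedResult()   { return std::puts(""); } // left alone
```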
llvm-svn: 355921
This was found when we generated a COPY from G8RC to F8RC in
EmitInstrWithCustomInserter without checking the proper architecture:
we silently generated mtvsrd, which requires P8 and up.
This is an NFC patch that adds an assert when we call copyPhysReg, in case
someone accidentally generates a COPY between G8RC and F8RC for P7 and
below.
llvm-svn: 355920
This is a refactoring patch that removes the redundancy of performing operand reordering twice, once in buildTree() and later in vectorizeTree().
To achieve this we need to keep track of the operands within the TreeEntry struct while building the tree, and later in vectorizeTree() we are just accessing them from the TreeEntry in the right order.
This patch is the first in a series of patches that will allow for better operand reordering across chains of instructions (e.g., a chain of ADDs), as presented here: https://www.youtube.com/watch?v=gIEn34LvyNo
Patch by: @vporpo (Vasileios Porpodas)
Differential Revision: https://reviews.llvm.org/D59059
........
Reverted due to buildbot failures that I don't have time to track down.
llvm-svn: 355913
This is a refactoring patch that removes the redundancy of performing operand reordering twice, once in buildTree() and later in vectorizeTree().
To achieve this we need to keep track of the operands within the TreeEntry struct while building the tree, and later in vectorizeTree() we are just accessing them from the TreeEntry in the right order.
This patch is the first in a series of patches that will allow for better operand reordering across chains of instructions (e.g., a chain of ADDs), as presented here: https://www.youtube.com/watch?v=gIEn34LvyNo
Patch by: @vporpo (Vasileios Porpodas)
Differential Revision: https://reviews.llvm.org/D59059
llvm-svn: 355906
This is addressing the issue that we're not modeling the cost of clib functions
in TTI::getIntrinsicCosts and thus we're basically addressing this fixme:
// FIXME: This is wrong for libc intrinsics.
To enable analysis of clib functions, we need not only an intrinsic ID and
formal arguments, but also the actual user of that function, so that we can e.g.
look at the alignment and values of arguments. So, this is the initial plumbing to
pass the user of an intrinsic on to getCallCosts, which queries
getIntrinsicCosts.
Differential Revision: https://reviews.llvm.org/D59014
llvm-svn: 355901
These two values correspond to the 'Empty' and 'Tombstone' special
keys defined by DenseMapInfo<int64_t>, which means that neither one
can be used as a key in DenseMap<int64_t, anything>. Hence, if you try
to use either of those values as an int literal, IntInit::get() fails
an assertion when it tries to insert them into its static cache of
int-literal objects.
Fixed by replacing the DenseMap with a std::map, which doesn't intrude
on the space of legal values of the key type.
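The shape of the fix, sketched with hypothetical names (the real cache maps the literal value to its IntInit object):
```
#include <cstdint>
#include <map>

struct IntInit; // opaque here

// DenseMap<int64_t, IntInit *> reserves two sentinel key values ("empty"
// and "tombstone") that may never be inserted. std::map has no reserved
// keys, so every int64_t literal becomes a legal cache key.
static std::map<int64_t, IntInit *> ThePool;
```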
Reviewers: nhaehnle, hfinkel, javedabsar, efriedma
Reviewed By: efriedma
Subscribers: fhahn, efriedma, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59016
llvm-svn: 355900
Change from original commit: move test (that uses an X86 triple) into the X86
subdirectory.
Original description:
Gating vectorizing reductions on *all* fastmath flags seems unnecessary;
`reassoc` should be sufficient.
Reviewers: tvvikram, mkuper, kristof.beyls, sdesmalen, Ayal
Reviewed By: sdesmalen
Subscribers: dcaballe, huntergr, jmolloy, mcrosier, jlebar, bixia, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57728
llvm-svn: 355889
Summary:
Swift now generates PDBs for debugging on Windows. llvm and lldb
need a language enumerator value to properly handle the output
emitted by swiftc.
Subscribers: jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59231
llvm-svn: 355882
For the design in question, overloads seem to be a much simpler and less subtle solution.
This removes ODR issues, and errors of the kind where code that uses the
specialization in question will accidentally and erroneously specialize
the primary template. This only "works" by accident; the program is
ill-formed NDR.
(Found with -Wundefined-func-template.)
Patch by Thomas Köppe!
Differential Revision: https://reviews.llvm.org/D58998
llvm-svn: 355880
ProcFeatures was a class that just concatenated two feature lists together and gave it a name. We used it to inherit features between CPUs.
ProcModel took two CPU feature lists and concatenated them before deferring to ProcessorModel. This was to allow inherited features and specific features to be passed to each CPU.
Both of these allowed for only very rigid CPU inheritance rules.
With this patch we now store all of the lists we were using for inheritance in one object and do any list concatenation we want there. Then we just pass whatever list we want from this class into the ProcessorModel class for each CPU.
Hopefully this gives us more flexibility to build up feature lists in whatever ways we think make sense. Perhaps untangling ISA flags and tuning flags.
I've only touched the CPUs that were directly affected by the removal of the ProcModel and ProcFeatures classes. We should move more of the feature lists into ProcessorFeatures.
llvm-svn: 355872
After r355865, we should be able to safely select G_EXTRACT_VECTOR_ELT without
running into any problematic intrinsics.
Also add a fix for lane copies, which don't support index 0.
llvm-svn: 355871
AtomicCmpSwapWithSuccess is legalised into an AtomicCmpSwap plus a comparison.
This requires an extension of the value which, by default, is a
zero-extension. When we later lower AtomicCmpSwap into a PseudoCmpXchg32, which is then expanded in
RISCVExpandPseudoInsts.cpp, the lr.w instruction does a sign-extension.
This mismatch of extensions causes the comparison to fail when the compared
value is negative. This change overrides TargetLowering::getExtendForAtomicOps
for RISC-V so it does a sign-extension instead.
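The override itself is essentially a one-liner; a sketch of its shape inside RISCVTargetLowering (the numeric comment illustrates why zero-extension fails):
```
// Make the legalizer sign-extend the values feeding the AtomicCmpSwap
// comparison, matching what lr.w produces. E.g. for the i32 value -1,
// lr.w yields 0xFFFFFFFFFFFFFFFF, while a zero-extended expected value
// would be 0x00000000FFFFFFFF, so the comparison could never succeed.
ISD::NodeType getExtendForAtomicOps() const override {
  return ISD::SIGN_EXTEND;
}
```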
Differential Revision: https://reviews.llvm.org/D58829
Patch by Ferran Pallarès Roca.
llvm-svn: 355869
The RISC-V Assembly Programmer's Manual defines fp as another alias of x8.
However, our tablegen rules only recognise s0. This patch adds fp as another
alias of x8. GCC also accepts fp.
Differential Revision: https://reviews.llvm.org/D59209
Patch by Ferran Pallarès Roca.
llvm-svn: 355867
Overloaded intrinsics aren't necessarily safe for instruction selection. One
such intrinsic is aarch64.neon.addp.*.
This is a temporary workaround to ensure that we always fall back on that
intrinsic. Eventually this will be replaced with a proper solution.
https://bugs.llvm.org/show_bug.cgi?id=40968
Differential Revision: https://reviews.llvm.org/D59062
llvm-svn: 355865
It hasn't seen active development in years, and it hasn't reached a
state where it was useful.
Remove the code until someone is interested in working on it again.
Differential Revision: https://reviews.llvm.org/D59133
llvm-svn: 355862
Fixes https://bugs.llvm.org/show_bug.cgi?id=36796.
Implement basic legalizations (PromoteIntRes, PromoteIntOp,
ExpandIntRes, ScalarizeVecOp, WidenVecOp) for VECREDUCE opcodes.
There are more legalizations missing (esp float legalizations),
but there's no way to test them right now, so I'm not adding them.
This also includes a few more changes to make this work somewhat
reasonably:
* Add support for expanding VECREDUCE in SDAG. Usually
experimental.vector.reduce is expanded prior to codegen, but if the
target does have native vector reduce, it may of course still be
necessary to expand due to legalization issues. This uses a shuffle
reduction if possible, followed by a naive scalar reduction (see the sketch below).
* Allow the result type of integer VECREDUCE to be larger than the
vector element type. For example we need to be able to reduce a v8i8
into an (nominally) i32 result type on AArch64.
* Use the vector operand type rather than the scalar result type to
determine the action, so we can control exactly which vector types are
supported. Also change the legalize vector op code to handle
operations that only have vector operands, but no vector results, as
is the case for VECREDUCE.
* Default VECREDUCE to Expand. On AArch64 (only target using VECREDUCE),
explicitly specify for which vector types the reductions are supported.
This does not handle anything related to VECREDUCE_STRICT_*.
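A standalone model of the expansion order, using plain C++ in place of SDAG nodes: halve the vector with "shuffle" steps while possible, then finish with a naive scalar reduction:
```
#include <cassert>
#include <cstdint>
#include <vector>

uint32_t vecreduce_add(std::vector<uint32_t> V) {
  // Shuffle reduction: fold the high half into the low half.
  while (V.size() > 1 && V.size() % 2 == 0) {
    size_t Half = V.size() / 2;
    for (size_t I = 0; I < Half; ++I)
      V[I] += V[I + Half];
    V.resize(Half);
  }
  // Naive scalar reduction of whatever is left.
  uint32_t Acc = 0;
  for (uint32_t X : V)
    Acc += X;
  return Acc;
}

int main() { assert(vecreduce_add({1, 2, 3, 4, 5, 6, 7, 8}) == 36); }
```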
Differential Revision: https://reviews.llvm.org/D58015
llvm-svn: 355860
As a fix for https://bugs.llvm.org/show_bug.cgi?id=40986 ("excessive compile
time building opencollada"), this patch makes sure that no phys reg is hinted
more than once from getRegAllocationHints().
This handles the case where many virtual registers are assigned to the same
physreg. The previous compile time fix (r343686) in weightCalcHelper() only
made sure that physical/virtual registers are passed no more than once to
addRegAllocationHint().
Review: Dimitry Andric, Quentin Colombet
https://reviews.llvm.org/D59201
llvm-svn: 355854
Summary:
Depends on https://reviews.llvm.org/D59069.
https://bugs.llvm.org/show_bug.cgi?id=40979 describes a bug in which the
-coro-split pass would assert that a use was across a suspend point from
a definition. Normally this would mean that a value would "spill" across
a suspend point and thus need to be stored in the coroutine frame. However,
in this case the use was unreachable, and so it would not be necessary
to store the definition on the frame.
To prevent the assert, simply remove unreachable basic blocks from a
coroutine function before computing spills. This avoids the assert
reported in PR40979.
Reviewers: GorNishanov, tks2103
Reviewed By: GorNishanov
Subscribers: EricWF, jdoerfert, llvm-commits, lewissbaker
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59068
llvm-svn: 355852
Summary:
llvm-objdump can be tricked into reading beyond valid memory and
segfaulting if LC_LINKER_COMMAND strings are not null terminated. libObject
does have code to validate the integrity of the LC_LINKER_COMMAND struct,
but this validator improperly assumes linker command strings are null
terminated.
The solution is to report an error if a string extends beyond the end of
the LC_LINKER_COMMAND struct.
Reviewers: lhames, pete
Reviewed By: pete
Subscribers: rupprecht, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59179
llvm-svn: 355851
Summary:
Extract the functionality of eliminating unreachable basic blocks
within a function, previously encapsulated within the
-unreachableblockelim pass, and make it available as a function within
BlockUtils.h. No functional change intended other than making the logic
reusable.
Exposing this logic makes it easier to implement
https://reviews.llvm.org/D59068, which fixes coroutines bug
https://bugs.llvm.org/show_bug.cgi?id=40979.
Reviewers: mkazantsev, wmi, davidxl, silvas, davide
Reviewed By: davide
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59069
llvm-svn: 355846
The AMDGPU target ran out of Subtarget feature flags, hitting the limit of 64:
AssemblerPredicates use at most a uint64_t for their representation.
CodeGen exhausted this limit a long time ago and switched
to a FeatureBitset with a current limit of 192 bits.
This patch completes the transition to the bitset for feature bits, extending
it to the asm matcher and the MC code emitter.
Differential Revision: https://reviews.llvm.org/D59002
llvm-svn: 355839
Fixes bug 38023: https://bugs.llvm.org/show_bug.cgi?id=38023
The SimplifyCFG pass will perform jump threading in some cases where
doing so is trivial and would simplify the CFG. When folding a series
of blocks with redundant conditional branches into an unconditional "critical
edge" block, it does not keep the debug location associated with the previous
conditional branch.
This patch fixes the bug described by copying the debug info from the
old conditional branch to the new unconditional branch instruction, and
adds a regression test for the SimplifyCFG pass that covers this case.
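The shape of the fix as a hedged sketch (the helper name is hypothetical; BranchInst::Create, getDebugLoc, setDebugLoc, and eraseFromParent are the standard LLVM IR APIs):
```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/DebugLoc.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

static BranchInst *replaceWithUncondBranch(BranchInst *OldBr,
                                           BasicBlock *Dest) {
  BasicBlock *BB = OldBr->getParent();
  DebugLoc DL = OldBr->getDebugLoc(); // previously dropped on the floor
  OldBr->eraseFromParent();
  BranchInst *NewBr = BranchInst::Create(Dest, BB);
  NewBr->setDebugLoc(DL); // keep the branch attributable to a source line
  return NewBr;
}
```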
Patch by Stephen Tozer!
Differential Revision: https://reviews.llvm.org/D59206
llvm-svn: 355833
A pattern needed to match TruncIntFP was missing. This was causing multiple
tests from llvm test suite to fail during compilation for micromips.
Patch by Mirko Brkusanin.
Differential Revision: https://reviews.llvm.org/D58722
llvm-svn: 355825
Inserting an overflowing arithmetic intrinsic can increase register
pressure by producing two values at a point where only one is needed,
while the second use may be several blocks away. This increase in
pressure is likely to be more detrimental on performance than
rematerialising one of the original instructions.
So, check that the arithmetic and compare instructions are no further
apart than their immediate successor/predecessor.
Differential Revision: https://reviews.llvm.org/D59024
llvm-svn: 355823
Fixes bug 37966: https://bugs.llvm.org/show_bug.cgi?id=37966
The Jump Threading pass will replace certain conditional branch
instructions with unconditional branches when it can prove that only one
branch can occur. Prior to this patch, it would not carry the debug
info from the old instruction to the new one.
This patch fixes the bug described by copying the debug info from the
conditional branch instruction to the new unconditional branch
instruction, and adds a regression test for the Jump Threading pass that
covers this case.
Patch by Stephen Tozer!
Differential Revision: https://reviews.llvm.org/D58963
llvm-svn: 355822
The control flow here cannot ever use the uninitialized value, but it's
too hard for the compiler to figure that out. Clang warns:
llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp:2600:28: error: variable 'CarrySum' is used uninitialized whenever 'for' loop exits because its condition is false [-Werror,-Wsometimes-uninitialized]
for (unsigned i = 2; i < Factors.size(); ++i)
^~~~~~~~~~~~~~~~~~
llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp:2604:26: note: uninitialized use occurs here
CarrySumPrevDstIdx = CarrySum;
^~~~~~~~
llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp:2600:28: note: remove the condition if it is always true
for (unsigned i = 2; i < Factors.size(); ++i)
^~~~~~~~~~~~~~~~~~
llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp:2583:22: note: initialize the variable 'CarrySum' to silence this warning
unsigned CarrySum;
^
= 0
llvm-svn: 355818
Narrow Scalar G_MUL for MIPS32.
Revisit NarrowScalar implementation in LegalizerHelper.
Introduce new helper function multiplyRegisters.
It performs generic multiplication of values held in multiple registers.
Generated instructions use only types NarrowTy and i1.
The destination can be the same size as, or twice the size of, the source.
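A standalone model of the scheme: multiply two 64-bit values held as 32-bit "registers" using only narrow multiplies, a simplified schoolbook shape of the kind multiplyRegisters generates from NarrowTy operations:
```
#include <cassert>
#include <cstdint>

void mul64_from32(uint32_t ALo, uint32_t AHi, uint32_t BLo, uint32_t BHi,
                  uint32_t &RLo, uint32_t &RHi) {
  uint64_t LoLo = (uint64_t)ALo * BLo; // full low partial product
  RLo = (uint32_t)LoLo;
  // Only the partial products reaching bits 32..63 matter for the high half.
  RHi = (uint32_t)(LoLo >> 32) + ALo * BHi + AHi * BLo;
}

int main() {
  uint32_t Lo, Hi;
  mul64_from32(0xFFFFFFFFu, 0, 2, 0, Lo, Hi);
  assert(Lo == 0xFFFFFFFEu && Hi == 1); // 0xFFFFFFFF * 2 = 0x1FFFFFFFE
}
```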
Differential Revision: https://reviews.llvm.org/D58824
llvm-svn: 355814
Instead I plan to have dedicated nodes for FROUND_CURRENT and FROUND_NO_EXC.
This patch starts with FADDS/FSUBS/FMULS/FDIVS/FMAXS/FMINS/FSQRTS.
llvm-svn: 355799
After fixing all asserts found by the machine verifier in the PowerPC target
with the following patches, we can activate the machine verifier by default.
rL293769, rL348566, rL349030, rL349029, rL350113, rL350111,
rL350799, rL350165, rL355378, rL352174, rL354762, rL350115
This is also tracked in PR27456: https://bugs.llvm.org/show_bug.cgi?id=27456
Differential Revision: https://reviews.llvm.org/D59011
llvm-svn: 355798
We had patterns using X86ISD::SCALAR_SINT_TO_FP_RND/SCALAR_UINT_TO_FP_RND for
these instructions. There's nothing to round. Instead, we use a regular
sint_to_fp/uint_to_fp and a movsd as the pattern for these.
llvm-svn: 355796
Many of our tests were not using valid rounding mode immediates. Clang verifies this in the frontend when it creates the intrinsics from builtins, but the backend would still lower invalid immediates.
With this change we will now leave them as intrinsics if the immediate is invalid. This will cause an isel selection failure.
llvm-svn: 355789
Includes a fix to emit a CheckOpcode for build_vector when immAllZerosV/immAllOnesV is used as a pattern root. This means it can't be used to look through bitcasts when used as a root, but that's probably ok. This extra CheckOpcode will ensure that the first match in the isel table will be a SwitchOpcode which is needed by the caching optimization in the ISel Matcher.
Original commit message:
Previously we had build_vector PatFrags that called ISD::isBuildVectorAllZeros/Ones. Internally the ISD::isBuildVectorAllZeros/Ones look through bitcasts, but we aren't able to take advantage of that in isel. Instead we have to canonicalize the types of the all-zeros/ones build_vectors and insert bitcasts. Then we have to pattern match those exact bitcasts.
By emitting specific matchers for these 2 nodes, we can make isel look through any bitcasts without needing to explicitly match them. We should also be able to remove the canonicalization to vXi32 from lowering, but I've left that for a follow up.
This removes something like 40,000 bytes from the X86 isel table.
Differential Revision: https://reviews.llvm.org/D58595
llvm-svn: 355784
InstructionSimplify currently has some code to determine the constant
range of integer instructions for some simple cases. It is used to
simplify icmps.
This change moves the relevant code into ValueTracking as
llvm::computeConstantRange(), so it can also be reused for other
purposes.
In particular this is with the optimization of overflow checks in
mind (ref D59071), where constant ranges cover some cases that
known bits don't.
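A standalone model of how a computed range can fold a comparison (the real llvm::computeConstantRange returns a ConstantRange and handles all predicates and wrap-around ranges):
```
#include <cstdint>
#include <optional>

// Given that x is known to lie in the half-open range [Lo, Hi), decide
// "x <u C" when the range alone settles it.
std::optional<bool> foldULT(uint64_t Lo, uint64_t Hi, uint64_t C) {
  if (Hi <= C) return true;   // even the largest possible x is < C
  if (Lo >= C) return false;  // even the smallest possible x is >= C
  return std::nullopt;        // the range straddles C; cannot fold
}
```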
llvm-svn: 355781
This patch adds proper handling of -target-abi, as accepted by llvm-mc and
llc. Lowering (codegen) for the hard-float ABIs will follow in a subsequent
patch. However, this patch does add MC layer support for the hard float and
RVE ABIs (emission of the appropriate ELF flags
https://github.com/riscv/riscv-elf-psabi-doc/blob/master/riscv-elf.md#-file-header).
ABI parsing must be shared between codegen and the MC layer, so we add
computeTargetABI to RISCVUtils. A warning will be printed if an invalid or
unrecognized ABI is given.
Differential Revision: https://reviews.llvm.org/D59023
llvm-svn: 355771
Summary:
Uses the named operands tablegen feature to look up the indices of
offset, address, and p2align operands for all load and store
instructions. This replaces brittle, incorrect logic for identifying
loads and stores when eliminating frame indices, which previously
crashed on bulk-memory ops. It also cleans up the SetP2Alignment pass.
Reviewers: aheejin, dschuff
Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, jfb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59007
llvm-svn: 355770
This saves needing to call getInt32 ourselves, making the code a little shorter.
The test changes are because insert/extract use getInt64 internally; this shouldn't be a functional issue.
This is cleanup because I plan to write similar code for expandload/compressstore.
llvm-svn: 355767
There are special cases in the scalarization for constant masks. If we hit one of the special cases we don't need to reset the iteration.
Noticed while starting work on adding expandload/compressstore to this pass.
llvm-svn: 355754
Summary:
Floating-point CSRs should be accessible even when F extension is not enabled.
But pseudo instructions that access floating point CSRs still require the F extension.
GNU tools already implement this behavior. RISC-V spec is pending update to reflect
this behavior and to extend it to pseudo instructions that access floating point CSRs.
Reviewers: asb
Reviewed By: asb
Subscribers: asb, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, rkruppe, PkmX, jocewei, llvm-commits
Differential Revision: https://reviews.llvm.org/D58932
llvm-svn: 355753
r44412 fixed a huge compile-time regression, but it needed the ModifiedDT flag to be
maintained correctly in the optimizations in optimizeBlock() and optimizeInst().
Function optimizeSelectInst(), however, did not update the flag.
This patch propagates the flag in optimizeSelectInst() back to
optimizeBlock().
It also removes ModifiedDT from the CodeGenPrepare class (it was not used);
the property of ModifiedDT is now recorded in a ref parameter.
Differential Revision: https://reviews.llvm.org/D59139
llvm-svn: 355751
Specifically, compute and print the Type and Section columns.
This is a re-commit of rL354833, after fixing the ASan problem found on a buildbot.
Differential Revision: https://reviews.llvm.org/D59060
llvm-svn: 355742
An extension of D58282 noted in PR39665:
https://bugs.llvm.org/show_bug.cgi?id=39665
This doesn't answer the request to use movmsk, but that's an
independent problem. We need this and probably still need
scalarization of FP selects because we can't do that as a
target-independent transform (although it seems likely that
targets besides x86 should have this transform).
llvm-svn: 355741
Summary:
This patch works around the bug in the ptxas tool with the processing of bytes
separated by the comma symbol. The emission of the packed string is
temporarily disabled.
Reviewers: tra
Subscribers: jholewinski, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D59148
llvm-svn: 355740
Summary:
This change changes the instrumentation to allow users to view the registers at the point at which the tag mismatch occurred. Most of the heavy lifting is done in the runtime library, where we save the registers to the stack and emit unwind information. This allows us to reduce the overhead, as very little additional work needs to be done in each __hwasan_check instance.
In this implementation, the fast path of __hwasan_check is unmodified. There are an additional 4 instructions (16B) emitted in the slow path in every __hwasan_check instance. This may increase binary size somewhat, but as most of the work is done in the runtime library, it's manageable.
The failure trace now contains a list of registers at the point at which the failure occurred, in a format similar to that of Android's tombstones. It currently has the following format:
Registers where the failure occurred (pc 0x0055555561b4):
x0 0000000000000014 x1 0000007ffffff6c0 x2 1100007ffffff6d0 x3 12000056ffffe025
x4 0000007fff800000 x5 0000000000000014 x6 0000007fff800000 x7 0000000000000001
x8 12000056ffffe020 x9 0200007700000000 x10 0200007700000000 x11 0000000000000000
x12 0000007fffffdde0 x13 0000000000000000 x14 02b65b01f7a97490 x15 0000000000000000
x16 0000007fb77376b8 x17 0000000000000012 x18 0000007fb7ed6000 x19 0000005555556078
x20 0000007ffffff768 x21 0000007ffffff778 x22 0000000000000001 x23 0000000000000000
x24 0000000000000000 x25 0000000000000000 x26 0000000000000000 x27 0000000000000000
x28 0000000000000000 x29 0000007ffffff6f0 x30 00000055555561b4
... and prints after the dump of memory tags around the buggy address.
Every register is saved exactly as it was at the point where the tag mismatch occurs, with the exception of x16/x17. These registers are used in the tag mismatch calculation as scratch registers during __hwasan_check, and cannot be saved without affecting the fast path. As these registers are designated as scratch registers for linking, there should be no important information in them that could aid in debugging.
Reviewers: pcc, eugenis
Reviewed By: pcc, eugenis
Subscribers: srhines, kubamracek, mgorny, javed.absar, krytarowski, kristof.beyls, hiraditya, jdoerfert, llvm-commits, #sanitizers
Tags: #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D58857
llvm-svn: 355738
Commit r355068 "Fix IR/Analysis layering issue with OptBisect" uses the
template
return Gate.isEnabled() && !Gate.shouldRunPass(this, getDescription(...));
for all pass kinds. For the RegionPass, it left out the not operator,
causing region passes to be skipped as soon as a pass gate is used.
llvm-svn: 355733
When matching half of the build_vector to a load, there could still be
a hidden dependency on the other half of the build_vector the pattern
wouldn't detect. If there was an additional chain dependency on the
other value, a cycle could be introduced.
I don't think a tablegen pattern is capable of matching the necessary
conditions, so move this into PreprocessISelDAG. Check isPredecessorOf
for the other value to avoid a cycle. This has a warning that it's
expensive, so this should probably be moved into an MI pass eventually
that will have more freedom to reorder instructions to help match
this. That is currently complicated by the lack of a computeKnownBits
type mechanism for the selected function.
llvm-svn: 355731
This avoids breaking possible value dependencies when sorting loads by
offset.
AMDGPU has some load instructions that write into the high or low bits
of the destination register, and have a tied input for the other input
bits. These can easily have the same base pointer, but be a swizzle so
the high address load needs to come first. This was inserting glue
forcing the opposite ordering, producing a cycle the InstrEmitter
would assert on. It may be potentially expensive to look for the
dependency between the other loads, so just skip any where this could
happen.
Fixes bug 40936 by reverting r351379, which added a hacky attempt to
fix this by adding chains in this case, which I think was just working
around broken glue before the InstrEmitter. The core of the patch is
re-implementing the fix for that problem.
llvm-svn: 355728
This was checking the wrong operands for the base register and the
offsets. The indexes are shifted by the number of output registers
from the machine instruction definition, and the chain is moved to the
end.
llvm-svn: 355722
Summary:
If the LLVM module shows that it has debug info, but the file is
actually empty and the real debug info is not emitted, the ptxas tool
emits the error 'Debug information not found in presence of .target debug'.
We need at least one empty debug section to silence this message. The
`.debug_loc` section is not emitted for PTX, so we can emit an empty
`.debug_loc` section if the `debug` option was specified.
Reviewers: tra
Subscribers: jholewinski, aprantl, llvm-commits
Differential Revision: https://reviews.llvm.org/D57250
llvm-svn: 355719