Commit Graph

146169 Commits

Nikita Popov d440f9a326 [LICM] Make capture check more precise
During store promotion, we check whether the pointer was captured
to exclude potential reads from other threads. However, we're only
interested in captures before or inside the loop. Check this using
PointerMayBeCapturedBefore against the loop header.
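
A minimal sketch of such a check, assuming a pointer Ptr, a Loop *L, and a
DominatorTree *DT are in scope (the exact anchor instruction and call site
used by LICM are assumptions, not taken from this patch):

    #include "llvm/Analysis/CaptureTracking.h"
    // Only captures that happen before or inside the loop matter for the
    // other-threads argument; anchoring the query at the loop header is one
    // way to express that.
    bool Captured = PointerMayBeCapturedBefore(
        Ptr, /*ReturnCaptures=*/false, /*StoreCaptures=*/true,
        L->getHeader()->getTerminator(), DT);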

Differential Revision: https://reviews.llvm.org/D100706
2021-04-19 20:34:23 +02:00
Craig Topper 87afefcd22 [RISCV] Fix mistake in comment. NFC 2021-04-19 11:15:32 -07:00
Craig Topper 7ed01a420a [RISCV] Pad v4i1/v2i1/v1i1 stores with 0s to make a full byte.
As noted in the FIXME, there's a sort of agreement that any
extra bits stored will be 0.

The generated code is pretty terrible. I was really hoping we
could use a tail undisturbed trick, but tail undisturbed no
longer applies to masked destinations in the current draft
spec.

Fingers crossed that it isn't common to do this. I doubt IR
from clang or the vectorizer would ever create this kind of store.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D100618
2021-04-19 11:05:18 -07:00
Jessica Paquette 65f257a215 [AArch64][GlobalISel] Implement custom legalization for s32 and s64 G_CTPOP
This is a partial port of AArch64TargetLowering::LowerCTPOP.

This custom lowering tries to use NEON instructions to give a more efficient
CTPOP lowering when possible.

In the non-NEON/noimplicitfloat case, this should use the generic lowering
(see: https://godbolt.org/z/GcaPvWe4x). I think that's worth doing after
implementing the widening code for s16/s8, though.

Differential Revision: https://reviews.llvm.org/D100399
2021-04-19 10:56:02 -07:00
Nick Desaulniers c440b97d89 [TargetLowering] move "o" and "X" constraint handling to base class
These constraints are machine agnostic; there's no reason to handle
these per-arch. If arches don't support these constraints, then they
will fail elsewhere during instruction selection. We don't need virtual
calls to look these up; TargetLowering::getInlineAsmMemConstraint should
only be overridden by architectures with additional unique memory
constraints.

Reviewed By: echristo, MaskRay

Differential Revision: https://reviews.llvm.org/D100416
2021-04-19 10:53:31 -07:00
Jessica Paquette 91bbb914e0 [AArch64][GlobalISel] Regbankselect + select @llvm.aarch64.neon.uaddlv
It turns out we actually import a bunch of selection code for intrinsics. The
imported code checks that the register banks on the G_INTRINSIC instruction
are correct. If so, it goes ahead and selects it.

This adds code to AArch64RegisterBankInfo to allow us to correctly determine
register banks on intrinsics which have known register bank constraints.

For now, this only handles @llvm.aarch64.neon.uaddlv. This is necessary for
porting AArch64TargetLowering::LowerCTPOP.

Also add a utility for getting the intrinsic ID from a G_INTRINSIC instruction.
This seems a little nicer than having to know about how intrinsic instructions
are structured.

Differential Revision: https://reviews.llvm.org/D100398
2021-04-19 10:47:49 -07:00
Sanjay Patel 9d43f6d7ce [LowerConstantIntrinsics] avoid crashing on alloca with unexpected operand type
The test here is reduced from the fuzzer-generated crasher in:
https://llvm.org/PR50023
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=33395

I don't know if this is the best or complete solution, but the
zext of the i42 type appears to match the behavior if we run a
weird type example like this through the IR optimizer with -O1.

Differential Revision: https://reviews.llvm.org/D100766
2021-04-19 13:06:29 -04:00
Roman Lebedev d746fefb6f
[SCEVExpander] ReuseOrCreateCast(): use IRBuilder to actually create the cast
In particular, this allows creating constant expressions
instead of IR Instructions if the argument is a constant.
2021-04-19 18:38:39 +03:00
Roman Lebedev ecc9d7e913
[SCEVExpander] Expand explicit PtrToInt casts just like we would implicit ones
I.e., use GetOptimalInsertionPointForCastOf() helper to get the insertion
point, and try to reuse casts first.
2021-04-19 18:38:39 +03:00
Roman Lebedev 442c408e0e
[SCEVExpander] GetOptimalInsertionPointForCastOf(): gracefully handle Constant's
I guess this case hasn't come up thus far, and I'm not sure if it can
really happen for the existing usages, thus no test in *this* commit.

But the following commit adds test coverage;
there we'd experience a crash without this fix.
2021-04-19 18:38:39 +03:00
Roman Lebedev b8a3705896
[NFCI][SCEVExpander] Extract GetOptimalInsertionPointForCastOf() helper 2021-04-19 18:38:38 +03:00
Roman Lebedev 73f60e3988
[SCEVExpander] generateOverflowCheck(): explicitly PtrToInt the Start
Currently, InsertNoopCastOfTo() would implicitly insert that cast,
but now that we have SCEVPtrToIntExpr, I'm hoping we could stop
InsertNoopCastOfTo() from doing that. But first all users must be fixed.
2021-04-19 18:38:38 +03:00
Roman Lebedev 41c22acc22
[NFC][SCEV] Assert that we don't try to create SCEVPtrToIntExpr of a non-integral pointer
ptr<->int casts are only valid for integral pointers;
defensively assert that we don't try to break that here.
2021-04-19 18:38:38 +03:00
Simon Pilgrim ddcdeae358 [Analysis] ImportedFunctionsInliningStatistics.h - add <memory> and remove unused <string> include. NFCI.
Move <string> include to ImportedFunctionsInliningStatistics.cpp and add missing <memory> include as we have explicit uses of std::unique_ptr in the header.
2021-04-19 16:20:56 +01:00
Jay Foad a02aa91313 [AMDGPU] GCNDPPCombine: simplify API of isShrinkable. NFC. 2021-04-19 14:20:46 +01:00
Paul C. Anagnostopoulos a5aaec8f4e [TableGen] Add support for the 'assert' statement in multiclasses
This is step 3 of adding the 'assert' statement.

Differential Revision: https://reviews.llvm.org/D99751
2021-04-19 09:01:42 -04:00
Jay Foad ef443390a9 [AMDGPU] Remove MachineDCE after SIFoldOperands
Remove the MachineDCE pass after the first SIFoldOperands pass now
that SIFoldOperands deletes its own dead instructions.

Reapply after fixing dependent change D100188.

Differential Revision: https://reviews.llvm.org/D100189
2021-04-19 12:08:02 +01:00
Jay Foad 323ef0eb45 [AMDGPU] SIFoldOperands: eagerly erase dead REG_SEQUENCEs
This is fairly cheap to implement and means less work for future
passes like MachineDCE.

Reapply with a fix for using InstToErase after it had been erased.

Differential Revision: https://reviews.llvm.org/D100188
2021-04-19 12:05:41 +01:00
Cullen Rhodes f0bc2782f2 [TTI] NFC: Remove unused 'OptSize' parameter from shouldMaximizeVectorBandwidth
Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D100377
2021-04-19 11:01:34 +00:00
Dmitry Preobrazhensky bcc29e0fcf [AMDGPU][MC] Corrected parsing of carry in/out operands in VOP3
Disabled constants as carry in/out operands. See bug 48711.

Differential Revision: https://reviews.llvm.org/D100642
2021-04-19 13:42:31 +03:00
Roman Lebedev df9597cf5a
[X86][CostModel] X86TTIImpl::getShuffleCost(): subvector insertions are cheap
This is similar to the subvector extractions,
except that the 0'th subvector isn't free to insert,
because we generally don't know whether or not
the upper elements need to be preserved:
https://godbolt.org/z/rsxP5W4sW

This is needed to avoid regressions in D100684

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D100698
2021-04-19 13:24:58 +03:00
Fraser Cormack c9a93c3e01 [RISCV] Lower vector shuffles to vrgather operations
This patch extends the lowering of RVV fixed-length vector shuffles to
avoid the default stack expansion and instead lower to vrgather
instructions.

For "permute"-style shuffles where one vector is swizzled, we can lower
to one vrgather. For shuffles involving two vector operands, we lower to
one unmasked vrgather (or splat, where appropriate) followed by a masked
vrgather which blends in the second half.

On occasion, when it's not possible to create a legal BUILD_VECTOR for
the indices, we use vrgatherei16 instructions with 16-bit index types.

For 8-bit element vectors where we may have indices over 255, we have a
fairly blunt fallback to the stack expansion to avoid custom-splitting
of the vector types.

To enable the selection of masked vrgather instructions, this patch
extends the various RISCVISD::VRGATHER nodes to take a passthru operand.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D100549
2021-04-19 11:13:13 +01:00
OCHyams bbccdf6f81 [DebugInfo] Replace debug uses in replaceUsesOutsideBlock
Value::replaceUsesOutsideBlock doesn't replace debug uses which leads to an
unnecessary reduction in variable location coverage. Fix this, add a unittest for
it, and add a regression test demonstrating the change through instcombine's
replacedSelectWithOperand.
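
A minimal usage sketch (OldV, NewV, and BB are assumed Value*/BasicBlock*
variables, not names taken from this patch):

    // Replace all uses of OldV outside of BB with NewV; after this change,
    // debug intrinsic uses of OldV outside BB are updated as well.
    OldV->replaceUsesOutsideBlock(NewV, BB);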

Reviewed By: djtodoro

Differential Revision: https://reviews.llvm.org/D99169
2021-04-19 11:06:53 +01:00
OCHyams 0ebf9a8e34 [DebugInfo] Move the findDbg* functions into DebugInfo.cpp
Move the findDbg* functions into lib/IR/DebugInfo.cpp from
lib/Transforms/Utils/Local.cpp.

D99169 adds a call to a function (findDbgUsers) that lives in
lib/Transforms/Utils/Local.cpp (LLVMTransformUtils) from lib/IR/Value.cpp
(LLVMCore). The Core lib doesn't include TransformUtils. The builtbots caught
this here: https://lab.llvm.org/buildbot/#/builders/109/builds/12664. This patch
moves the function, and the 3 similar ones for consistency, into DebugInfo.cpp
which is part of LLVMCore.
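
A sketch of calling one of the moved helpers from its new home (V is an
assumed Value* of interest):

    #include "llvm/IR/DebugInfo.h"
    SmallVector<DbgVariableIntrinsic *, 4> DbgUsers;
    findDbgUsers(DbgUsers, V); // collects the debug intrinsics that use V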

Reviewed By: dblaikie, rnk

Differential Revision: https://reviews.llvm.org/D100632
2021-04-19 10:30:25 +01:00
David Sherwood 83f5fa519e [CodeGen] Improve code generation for clamping of constant indices with scalable vectors
When trying to clamp a constant index into a scalable vector we can
test if the index is less than the minimum number of elements in the
vector. If so, we can simply return the index because we know it is
guaranteed to fit inside the vector.

Differential Revision: https://reviews.llvm.org/D100639
2021-04-19 08:34:17 +01:00
Serguei Katkov 9f33943ee0 [GreedyRA ORE] Add stats for copy of virtual registers.
Greedy RA adds copies of virtual registers when splitting a live interval.
This stat might be useful.

Reviewers: reames, MatzeB, anemet, thegameg
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D100017
2021-04-19 12:43:44 +07:00
Serguei Katkov 61d22f2e4e [Greedy RA] Add a check to MachineVerifier
If a virtual register is live in a landing pad, its def must be
before the call causing the exception, or it must be the statepoint
instruction itself; in that case the def actually means the relocation of
a GC pointer and it is live in the landing pad.

The test shows this check triggering for an option under development,
use-registers-for-gc-values-in-landing-pad, which is off by default until
it is functionally correct.

Reviewers: reames, void, jyknight, nickdesaulniers, efriedma, arsenm, rnk
Reviewed By: rnk
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D100525
2021-04-19 12:31:18 +07:00
Evgeniy Brevnov 35e95c6817 [CVP] processCallSite returns wrong status
Recently processMinMaxIntrinsic was added and we started to observe a number of analyses getting invalidated after CVP. The problem is that CVP conservatively returns 'true' even if there were no modifications to the IR. I found one more place besides processMinMaxIntrinsic which has the same problem. I think processMinMaxIntrinsic and similar functions should have a boolean return status to prevent similar issues from reappearing in the future.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D100538
2021-04-19 12:13:22 +07:00
Xun Li 5faba87938 Revert "[Coroutines] Set presplit attribute in Clang instead of CoroEarly pass"
This reverts commit fa6b54c44a.
The committed patch broke MLIR tests. It seems that the MLIR tests depend on coroutine function properties set in the CoroEarly pass.
2021-04-18 17:22:28 -07:00
Xun Li fa6b54c44a [Coroutines] Set presplit attribute in Clang instead of CoroEarly pass
Presplit coroutines cannot be inlined. In AlwaysInliner we check whether a function is a presplit coroutine, and if so we skip inlining it.
The presplit coroutine attributes are set in the CoroEarly pass.
However, in the O0 pipeline AlwaysInliner runs before CoroEarly, so the attribute isn't set yet and AlwaysInliner will still inline the coroutine.
This causes Clang to crash: https://bugs.llvm.org/show_bug.cgi?id=49920

To fix this, we set the attributes in the Clang front-end instead of in CoroEarly pass.

Reviewed By: rjmccall, ChuanqiXu

Differential Revision: https://reviews.llvm.org/D100282
2021-04-18 15:41:09 -07:00
Xun Li c0211e8d7d Revert "[Coroutines] Move CoroEarly pass to before AlwaysInliner"
This reverts commit 2b50f5a434.
Forgot to update the description of the commit to sync with phabricator. Going to redo the commit.
2021-04-18 15:38:19 -07:00
Xun Li 2b50f5a434 [Coroutines] Move CoroEarly pass to before AlwaysInliner
Presplit coroutines cannot be inlined. In AlwaysInliner we check whether a function is a presplit coroutine, and if so we skip inlining it.
The presplit coroutine attributes are set in the CoroEarly pass.
However, in the O0 pipeline AlwaysInliner runs before CoroEarly, so the attribute isn't set yet and AlwaysInliner will still inline the coroutine.
This causes Clang to crash: https://bugs.llvm.org/show_bug.cgi?id=49920

Differential Revision: https://reviews.llvm.org/D100282
2021-04-18 14:54:04 -07:00
Roman Lebedev d480f968ad
Revert "[SCEV] Model `ashr exact x, C` as `(abs(x) EXACT/u (1<<C)) * signum(x)`"
As is being discussed in https://reviews.llvm.org/D100721,
this modelling is lossy: we can't reconstruct `ashr`/`ashr exact`
from it, which means that whenever we actually expand the IR,
we've just pessimized the code.

It would be good to model this pattern, after all it comes up every time
you want to compute a distance between two pointers, but not at this cost.

This reverts commit ec54867df5.
2021-04-18 16:26:45 +03:00
Juneyoung Lee 1c10201d96 Update InstCombine to use undef matcher instead
This is a patch to use the m_Undef() matcher instead of isa<UndefValue>().

As suggested in D100122, this update is separately committed.
2021-04-18 11:05:36 +09:00
Yaxun (Sam) Liu 3597f02fd5 [AMDGPU] Add GlobalDCE before internalization pass
The internalization pass only internalizes global variables
with no users. If the global variable has some dead user,
the internalization pass will not internalize it.

To be able to internalize global variables with dead
users, a global dce pass is needed before the
internalization pass.

This patch adds that.

Reviewed by: Artem Belevich, Matt Arsenault

Differential Revision: https://reviews.llvm.org/D98783
2021-04-17 11:25:25 -04:00
Florian Hahn af523514c4
[SimplifyCFG] Skip dbg intrinsics when checking for branch-only BBs.
Debug intrinsics are free to hoist and should be skipped when looking
for terminator-only blocks. As a consequence, we have to delegate to the
main hoisting loop to hoist any dbg intrinsics instead of jumping to the
terminator case directly.

This fixes PR49982.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D100640
2021-04-17 15:17:50 +01:00
Nikita Popov e68b12c99e [Inline] Don't add noalias metadata to inaccessiblememonly calls
It will not do anything useful for them, as we already know that
they don't modref with any accessible memory.

In particular, this prevents noalias metadata from being placed
on noalias.scope.decl intrinsics. This reduces the amount of
metadata needed, and makes it more likely that unnecessary decls
can be eliminated.
2021-04-17 14:56:13 +02:00
Serge Guelton d6de1e1a71 Normalize interaction with boolean attributes
Such attributes can either be unset, or set to "true" or "false" (as a string).
Throughout the codebase, this led to inelegant checks ranging from

        if (Fn->getFnAttribute("no-jump-tables").getValueAsString() == "true")

to

        if (Fn->hasAttribute("no-jump-tables") && Fn->getFnAttribute("no-jump-tables").getValueAsString() == "true")

Introduce a getValueAsBool that normalizes the check, with the following
behavior:

no attributes or attribute set to "false" => return false
attribute set to "true" => return true

Differential Revision: https://reviews.llvm.org/D99299
2021-04-17 08:17:33 +02:00
Nemanja Ivanovic ff769dd111 [PowerPC] Minor improvement for insert_vector_elt codegen
For v2f64, all VSX subtargets can insert an element with a single
XXPERMDI.
2021-04-16 18:52:37 -05:00
Philip Reames 11707435cc [inferattrs] Don't infer lib func attributes for nobuiltin functions
If we have a nobuiltin function, we can't assume we know anything about the implementation.

I noticed this when tracing through a log from an in-the-wild miscompile (https://github.com/emscripten-core/emscripten/issues/9443) triggered after 8666463.  We were incorrectly assuming that a custom allocator could not free.  (It's not clear yet that this is the only problem in said issue.)

I also noticed something similar mentioned in the commit message of ab243e when scrolling back through history.  Though, from what I can tell, that commit fixed the symptom, not the root cause.

The interface we have for library function detection is extremely error prone, but given the interaction between ``nobuiltin`` decls and ``builtin`` callsites, it's really hard to imagine something much cleaner.  I may iterate on that, but it'll be invasive enough I didn't want to hold an obvious functional fix on it.
2021-04-16 15:36:15 -07:00
Philip Reames f549176ad9 [funcattrs] Add the maximal set of implied attributes to definitions
Have funcattrs expand all implied attributes into the IR. This expands the infrastructure from D100400, but for definitions not declarations this time.

Somewhat subtly, this mostly isn't semantic. Because the accessors did the inference, any client which used the accessor was already getting the stronger result. Clients that directly checked presence of attributes (there are some), will see a stronger result now.

The old behavior can end up quite confusing for two reasons:
* Without this change, we have situations where function-attrs appears to fail when inferring an attribute (as seen by a human reading IR), but consuming code will see that it should have been implied. As a human trying to sanity check test results and study IR for optimization possibilities, this is exceedingly error prone and confusing. (I'll note that I wasted several hours recently because of this.)
* We can have transforms which trigger without the IR appearing (on inspection) to meet the preconditions. This change doesn't prevent this from happening (as the accessors still involve multiple checks), but it should make it less frequent.

I'd argue in favor of deleting the extra checks out of the accessors after this lands, but I want that in its own review as a) it's purely stylistic, and b) I already know there's some disagreement.

Once this lands, I'm also going to do a cleanup change which will delete some now redundant duplicate predicates in the inference code, but again, that deserves to be a change of its own.

Differential Revision: https://reviews.llvm.org/D100226
2021-04-16 14:22:19 -07:00
serge-sans-paille 550ed575cb Simplify BitVector code
Instead of managing memory by hand, delegate it to std::vector. This makes the
code much simpler, and also avoids repeatedly computing the storage size.

According to valgrind --tool=callgrind, this also decreases the
instruction count, though only by a small margin.

This is a recommit of 82f0e3d3ea with one usage
fixed in llvm/lib/CodeGen/RegisterScavenging.cpp.

Note the slight API change: BitVector::clear() now has the same behavior as any
other container: it does not free memory, but simply sets the size of the
BitVector to 0. It is thus incorrect to access its content right afterwards, a
scenario which wasn't enforced by the previous implementation.
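
A short sketch of the new semantics, analogous to std::vector (illustrative
only, not code from this patch):

    #include <cassert>
    #include "llvm/ADT/BitVector.h"

    llvm::BitVector BV(64, true); // 64 bits, all set
    BV.clear();                   // size() becomes 0; the memory is not freed
    assert(BV.empty());           // querying the size is fine...
    // ...but reading BV[0] here would be incorrect, just like with std::vector.
    BV.resize(64);                // resize before touching the bits again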

Differential Revision: https://reviews.llvm.org/D100387
2021-04-16 22:48:33 +02:00
Joe Nash a0ed70abde [AMDGPU] Remove redundant field from DPP8 def
These lines set the value to what it already was,
so they are redundant. NFC

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D100664

Change-Id: Ibf6f27d50a7fa1f76c127f01b799821378bfd3b3
2021-04-16 16:23:52 -04:00
Joe Nash 919236e608 [AMDGPU] NFC, Comment in disassembler for dpp8
Gives reasoning for convertDPP8.
Also corrects typo in Operand type comment.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D100665

Change-Id: I33ff269db8072d83e5e0ecdbfb731d6000fc26c4
2021-04-16 16:21:47 -04:00
Tony 13875aab4e [AMDGPU] Enforce that gfx802/803/805 do not support XNACK
Reviewed By: kzhuravl

Differential Revision: https://reviews.llvm.org/D100679
2021-04-16 19:34:30 +00:00
Thomas Lively 5c729750a6 [WebAssembly] Remove saturating fp-to-int target intrinsics
Use the target-independent @llvm.fptosi and @llvm.fptoui intrinsics instead.
This includes removing the intrinsics for i32x4.trunc_sat_zero_f64x2_{s,u},
which are now represented in IR as a saturating truncation to a v2i32 followed by
a concatenation with a zero vector.

Differential Revision: https://reviews.llvm.org/D100596
2021-04-16 12:11:20 -07:00
Philip Reames ff55d01a8e [nofree] Restrict semantics to memory visible to caller
This patch clarifies the semantics of the nofree function attribute to make clear that it provides an "as if" semantic. That is, a nofree function is guaranteed not to free memory which existed before the call, but might allocate and then deallocate that same memory within the lifetime of the callee.

This is the result of the discussion on llvm-dev under the thread "Ambiguity in the nofree function attribute".

The most important part of this change is the LangRef wording. The rest is minor comment changes to emphasize the new semantics where code was accidentally consistent, and fix one place which wasn't consistent. That one place is currently narrowly used as it is primarily part of the ongoing (and not yet enabled) deref-at-point semantics work.

Differential Revision: https://reviews.llvm.org/D100141
2021-04-16 11:38:55 -07:00
Christudasan Devadasan 97618522dc [AMDGPU] Remove dead code (NFC). 2021-04-16 23:03:31 +05:30
Simon Pilgrim 37a4621fb6 [DAG] SelectionDAG::isSplatValue - early out if binop is not splat. NFCI.
Just return false if we fail to match splats - the remainder of the code is for (fixed)vector operations - shuffles/insertions etc.
2021-04-16 18:26:33 +01:00
Joe Nash 7cc4a02fa2 [AMDGPU] Refactor VOP3P Profile and AsmParser, NFC
Refactors VOP3P tablegen and the AsmParser for VOP3P
for better extensibility. NFC intended

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D100602

Change-Id: I038e3a772ac348bb18979cdf3e3ae2e9476dd411
2021-04-16 13:06:50 -04:00
Nick Lewycky 244d9d6e41 Verify the LLVMContext that an Attribute belongs to.
Attributes don't know their parent Context; adding this would make Attribute larger. Instead, we add hasParentContext, which answers whether this Attribute belongs to a particular LLVMContext by checking for itself inside the context's FoldingSet. Same with AttributeSet and AttributeList. The Verifier checks them against the Module's context.
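
A small usage sketch (A and Ctx are an assumed Attribute and the module's
LLVMContext; the Verifier's actual diagnostics are not reproduced here):

    assert(A.hasParentContext(Ctx) &&
           "attribute was created in a different LLVMContext");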

Differential Revision: https://reviews.llvm.org/D99362
2021-04-16 09:44:38 -07:00
Malhar Jajoo 093f1828e5 [ARM] Prevent phi-node-elimination from generating copy above t2WhileLoopStartLR
This patch prevents phi-node-elimination from generating a COPY
operation for the register defined by t2WhileLoopStartLR, as it is a
terminator that defines a value.

This happens because of the presence of phi-nodes in the loop body (the
Preheader of which is the block containing the t2WhileLoopStartLR). If
this is not done, the COPY is generated above/before the terminator
(t2WhileLoopStartLR here), and since it uses the value defined by
t2WhileLoopStartLR, MachineVerifier throws a 'use before define' error.

This essentially adds on to the change in differential D91887/D97729.

Differential Revision: https://reviews.llvm.org/D100376
2021-04-16 16:45:07 +01:00
Mats Petersson 517c3aee4d [OpenMP IRBuilder, MLIR] Add support for OpenMP do schedule dynamic
The existing implementation supports the static schedule for Fortran do loops.
This patch implements the dynamic variant of the same concept.

Reviewed By: Meinersbur

Differential Revision: https://reviews.llvm.org/D97393
2021-04-16 16:09:49 +01:00
Jonathan Crowther e71994a239 [SystemZ][z/OS] Add IsText Argument to GetFile and GetFileOrSTDIN
Add the `IsText` argument to `GetFile` and `GetFileOrSTDIN`, which will help z/OS distinguish between text and binary correctly. This is an extension to [this patch](https://reviews.llvm.org/D97785).

Reviewed By: abhina.sreeskantharajan, amccarth

Differential Revision: https://reviews.llvm.org/D100488
2021-04-16 10:08:36 -04:00
Sanjay Patel bb907b26e2 [ValueTracking] don't recursively compute known bits using multiple llvm.assumes
This is an alternative to D99759 to avoid the compile-time explosion seen in:
https://llvm.org/PR49785

Another potential solution would make the exclusion logic stronger to avoid
blowing up, but note that we reduced the complexity of the exclusion mechanism
in D16204 because it was too costly.

So I'm questioning the need for recursion/exclusion entirely - what is the
optimization value vs. cost of recursively computing known bits based on
assumptions?
This was built into the implementation from the start with 60db058,
and we have kept adding code/cost to deal with that capability.

By clearing the query's AssumptionCache inside computeKnownBitsFromAssume(),
this patch retains all existing assume functionality except refining known
bits based on even more assumptions.

We have 1 regression test that shows a difference in optimization power.

Differential Revision: https://reviews.llvm.org/D100573
2021-04-16 08:43:35 -04:00
Roman Lebedev b06c55a698
[X86][CostModel] Fix cost model for non-power-of-two vector load/stores
Sometimes LV has to produce really wide vectors,
and sometimes they end up being not powers of two.
As can be seen from the diff, the cost computation
is currently completely nonsensical in those cases.

Instead of just scalarizing everything, split/factorize the wide vector
into a number of subvectors, each one having a power-of-two number of elements,
and recurse to get the cost of the op on this subvector. Also, check how we'd
legalize this subvector, and if the legalized type is scalar,
also account for the scalarization cost.

Note that for sub-vector loads, we might be able to do better,
when the vectors are properly aligned.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D100099
2021-04-16 15:30:57 +03:00
Abhina Sreeskantharajan 3be2ba0ba3 [SystemZ][z/OS][Windows] Add new functions that set Text/Binary mode for Stdin and Stdout based on OpenFlags
On Windows, we want to open a file in Binary mode if OF_CRLF bit is not set. On z/OS, we want to open a file in Binary mode if the OF_Text bit is not set.

This patch creates two new functions called ChangeStdinMode and ChangeStdoutMode which will take OpenFlags as an arg to determine which mode to set stdin and stdout to. This will enable patches like https://reviews.llvm.org/D100056 to not affect Windows when setting the OF_Text flag for raw_fd_streams.

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D100130
2021-04-16 08:09:19 -04:00
David Green 00a6045473 [ARM] Combine sub 0, csinc X, Y, CC -> csinv -X, Y, CC
Combine sub 0, csinc X, Y, CC to csinv -X, Y, CC providing that the
negation of X is cheap, currently just handling constants. This comes up
during the splat of an i1 to a predicate, where we now generate csetm,
as opposed to cset; rsb.

Differential Revision: https://reviews.llvm.org/D99940
2021-04-16 11:52:31 +01:00
Nick Desaulniers bb7016f8f5 [Aarch64] handle "o" inline asm memory constraints
The Linux kernel is making use of this inline asm constraint, which is
causing an ICE.

PR49956

Link: https://github.com/ClangBuiltLinux/linux/issues/1348

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D100412
2021-04-15 23:36:21 -07:00
Jim Lin 2893570e86 [RISCV] Don't emit save-restore call if function is an interrupt handler
An interrupt handler has to save all caller-saved registers before a call in
the handler, so don't emit a call that saves/restores registers.

Reviewed By: simoncook, luismarques, asb

Differential Revision: https://reviews.llvm.org/D100532
2021-04-16 12:54:47 +08:00
hsmahesha 099dcb68a6 [AMDGPU] Refactor ds_read/ds_write related select code for better readability.
Part of the code related to ds_read/ds_write ISel is refactored, and the
corresponding comment is re-written for better readability, which would help
while implementing any future ds_read/ds_write ISel related modifications.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D100300
2021-04-16 08:24:00 +05:30
Mircea Trofin 0d06b14f59 [MLGO] Fix use of AM.invalidate post D100519
The ML inline advisors more aggressively invalidate certain analyses
after each call site inlining, to more accurately capture the problem
state.
2021-04-15 18:45:39 -07:00
Marcythm f8cf3b9931
[LICM][NFC] Fix typo
Fixed some typos in LICM.cpp which may lead to misunderstandings.

Reviewed By: nikic, asbirlea
Differential Revision: https://reviews.llvm.org/D100470
2021-04-16 09:42:00 +08:00
Arthur Eubanks 9c776c2fa2 [NFC][NewPM] Remove some AnalysisManager invalidate methods
These were misleading, they're more of a "clear" than an "invalidate".

We shouldn't be individually clearing analysis results. Either we clear
all analyses when some IR becomes invalid, or we properly go through
invalidation.

There was only one use of this, which can be simulated with
AM.invalidate(F, PA).
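
A hedged sketch of the replacement pattern (AM and F are an assumed
FunctionAnalysisManager and Function already in scope):

    // Rather than clearing one analysis result directly, describe what is
    // preserved and let the manager invalidate everything else.
    PreservedAnalyses PA = PreservedAnalyses::none();
    AM.invalidate(F, PA);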

Reviewed By: mtrofin

Differential Revision: https://reviews.llvm.org/D100519
2021-04-15 16:51:26 -07:00
Florian Hahn 3e7ee5428d
[InferAttrs] Do not mark first argument of str(n)cat as writeonly.
str(n)cat appends a copy of the second argument to the end of the first
argument. To find the end of the first argument, str(n)cat has to read
from it until it finds the terminating 0, so the first argument should not be
marked as writeonly.

(This was causing a miscompile with legacy DSE before it got removed.)

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D100601
2021-04-15 23:00:21 +01:00
Momchil Velikov f9d932e673 [clang][AArch64] Correctly align HFA arguments when passed on the stack
When we pass a AArch64 Homogeneous Floating-Point
Aggregate (HFA) argument with increased alignment
requirements, for example

    struct S {
      __attribute__ ((__aligned__(16))) double v[4];
    };

Clang uses `[4 x double]` for the parameter, which is passed
on the stack at alignment 8, whereas it should be at
alignment 16, following Rule C.4 in
AAPCS (https://github.com/ARM-software/abi-aa/blob/master/aapcs64/aapcs64.rst#642parameter-passing-rules)

Currently we don't have a way to express in LLVM IR the
alignment requirements of the function arguments. The align
attribute is applicable to pointers only, and only for some
special ways of passing arguments (e.g. byval). When
implementing AAPCS32/AAPCS64, clang resorts to dubious hacks
of coercing to types, which naturally have the needed
alignment. We don't have enough types to cover all the
cases, though.

This patch introduces a new use of the stackalign attribute
to control stack slot alignment, when and if an argument is
passed in memory.

The attribute align is left as an optimizer hint - it still
applies to pointer types only and pertains to the content of
the pointer, whereas the alignment of the pointer itself is
determined by the stackalign attribute.

For byval arguments, the stackalign attribute assumes the
role previously performed by align, falling back to align if
stackalign is absent.

On the clang side, when passing arguments using the "direct"
style (cf. `ABIArgInfo::Kind`), now we can optionally
specify an alignment, which is emitted as the new
`stackalign` attribute.

Patch by Momchil Velikov and Lucas Prates.

Differential Revision: https://reviews.llvm.org/D98794
2021-04-15 22:58:14 +01:00
Stanislav Mekhanoshin 13015ebd6f [AMDGPU] Factor out predicate FmaakFmamkF32Insts
Differential Revision: https://reviews.llvm.org/D100409
2021-04-15 12:29:16 -07:00
Florian Hahn 49999d4364 [VPlan] Replace a few unnecessary includes with forward decls. 2021-04-15 20:08:31 +01:00
Stanislav Mekhanoshin d4385e483d [AMDGPU] Add new EmitDstSel field to VOPProfile. NFC.
Differential Revision: https://reviews.llvm.org/D100589
2021-04-15 12:07:08 -07:00
hsmahesha 82787eb228 [AMDGPU] Move LDS lowering related utility functions to a separate utils file.
Move some utility functions which are used within the LDS lowering pass to a separate utils
file so that other LDS related passes can make use of them when required.

Reviewed By: JonChesterfield

Differential Revision: https://reviews.llvm.org/D100526
2021-04-16 00:15:48 +05:30
Krzysztof Parzyszek 280678122d [Hexagon] Avoid infinite loops in type legalization when lowering SETCC
Only widen SETCC if the operands can be widened. Not checking that caused
infinite widen-split loops in legalization.
2021-04-15 13:34:37 -05:00
Craig Topper 1656df13da [RISCV] Share RVInstIShift and RVInstIShiftW instruction format classes with the B extension.
This generalizes RVInstIShift/RVInstIShiftW to take the upper
5 or 7 bits of the immediate as an input instead of only bit 30. Then
we can share them.

For RVInstIShift I left a hardcoded 0 at bit 26 where RV128 gets
a 7th bit for the shift amount.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D100424
2021-04-15 11:08:28 -07:00
cchen e0c2125d1d [OpenMP] Added codegen for masked directive
Reviewed By: ABataev

Differential Revision: https://reviews.llvm.org/D100514
2021-04-15 12:55:07 -05:00
Danilo C. Grael 55487079a9 [LoopUnrollAndJam] Avoid repeated instructions for UAJ analysis
Avoid visiting repeated instructions in processHeaderPhiOperands as it can cause an endless loop. A test case is attached and can be run with `opt -basic-aa -tbaa -loop-unroll-and-jam  -allow-unroll-and-jam -unroll-and-jam-count=4`.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D97407
2021-04-15 12:59:42 -04:00
Arthur Eubanks c8f0a7c215 [NewPM] Cleanup IR printing instrumentation
Being lazy with printing the banner seems hard to reason about; we should print it
unconditionally first (laziness could also lead to duplicate banners if we
have multiple functions in -filter-print-funcs).

The printIR() functions were doing too many things. I separated out the
call from PrintPassInstrumentation since we were essentially doing two
completely separate things in printIR() from different callers.

There were multiple ways to generate the name of some IR. That's all
been moved to getIRName(). The printing of the IR name was also
inconsistent, now it's always "IR Dump on $foo" where "$foo" is the
name. For a function, it's the function name. For a loop, it's what's
printed by Loop::print(), which is more detailed. For an SCC, it's the
list of functions in parentheses. For a module it's "[module]", to
differentiate between a possible SCC with a function called "module".

To preserve D74814, we have to check if we're going to print anything at
all first. This is unfortunate, but I would consider this a special
case that shouldn't be handled in the core logic.

Reviewed By: jamieschmeiser

Differential Revision: https://reviews.llvm.org/D100231
2021-04-15 09:50:55 -07:00
Mark Johnston f511dc75e4 [asan] Add an offset for the kernel address sanitizer on FreeBSD
This is based on a port of the sanitizer runtime to the FreeBSD kernel
that has been committed as https://cgit.freebsd.org/src/commit/?id=38da497a4dfcf1979c8c2b0e9f3fa0564035c147
and the following commits.

Reviewed By: emaste, dim
Differential Revision: https://reviews.llvm.org/D98285
2021-04-15 17:49:00 +01:00
Stefan Pintilie f28cb01be0 [PowerPC] Add ROP Protection Instructions for PowerPC
There are four new PowerPC instructions that are introduced in
Power 10. They are hashst, hashchk, hashstp, hashchkp.

These instructions will be used for ROP Protection.
This patch adds the four instructions.

Reviewed By: nemanjai, amyk, #powerpc

Differential Revision: https://reviews.llvm.org/D99375
2021-04-15 11:38:38 -05:00
Stelios Ioannou bf147c4653 [LSR] Fix for pre-indexed generated constant offset
This patch changed the isLegalUse check to ensure that
LSRInstance::GenerateConstantOffsetsImpl generates an
offset that results in a legal addressing mode and
formula. The check is changed to look similar to the
assert check used for illegal formulas.

Differential Revision: https://reviews.llvm.org/D100383

Change-Id: Iffb9e32d59df96b8f072c00f6c339108159a009a
2021-04-15 16:44:42 +01:00
OCHyams 17cec07184 Revert "[DebugInfo] Replace debug uses in replaceUsesOutsideBlock"
This reverts commit 96a1e6b7cf.

Failing build bots e.g. https://lab.llvm.org/buildbot/#/builders/161/builds/163
2021-04-15 16:35:45 +01:00
OCHyams 96a1e6b7cf [DebugInfo] Replace debug uses in replaceUsesOutsideBlock
Value::replaceUsesOutsideBlock doesn't replace debug uses which leads to an
unnecessary reduction in variable location coverage. Fix this, add a unittest for
it, and add a regression test demonstrating the change through instcombine's
replacedSelectWithOperand.

Reviewed By: djtodoro

Differential Revision: https://reviews.llvm.org/D99169
2021-04-15 16:19:36 +01:00
LemonBoy 24185541ca [yaml2obj/obj2yaml/llvm-readobj] Support printing and parsing AVR-specific e_flags
The `e_flags` contains a mixture of bitfields and regular ones, ensure all of them can be serialized and deserialized.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D100250
2021-04-15 15:54:28 +02:00
Sebastian Neubauer 7842e1725e [AMDGPU] Fix large return values with amdgpu_gfx
Returning in memory is not supported, so fall back to sret.
Also, extend i1 and i16 to i32. Otherwise, they would be passed through
memory.

Differential Revision: https://reviews.llvm.org/D100543
2021-04-15 14:57:56 +02:00
Simon Pilgrim 9d57a77b81 [X86] combineCMP - fold cmpEQ/NE(TRUNC(X),0) -> cmpEQ/NE(X,0)
If we are truncating from a i32 source before comparing the result against zero, then see if we can directly compare the source value against zero.

If the upper (truncated) bits are known to be zero then we can compare against that, hopefully increasing the chances of us folding the compare into a EFLAG result of the source's operation.

Fixes PR49028.

Differential Revision: https://reviews.llvm.org/D100491
2021-04-15 13:55:51 +01:00
Bradley Smith 22c017f0f9 [AArch64][NEON] Match (or (and -a b) (and (a+1) b)) => bit select
With this patch vbslq_f32(vnegq_s32(a), b, c) lowers to a BIT instruction.

Co-authored-by: Paul Walker <paul.walker@arm.com>

Differential Revision: https://reviews.llvm.org/D100304
2021-04-15 13:52:47 +01:00
Alex Orlov 49cbf4cd85 Fix bug in .eh_frame/.debug_frame PC offset calculation for DW_EH_PE_pcrel
This fixes the following bugs:
https://bugs.llvm.org/show_bug.cgi?id=27249
https://bugs.llvm.org/show_bug.cgi?id=46414

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D100328
2021-04-15 15:06:20 +04:00
Florian Hahn 6adebe3fd2 [VPlan] Add VPRecipeBase::mayHaveSideEffects.
Add an initial version of a helper to determine whether a recipe may
have side-effects.

Reviewed By: a.elovikov

Differential Revision: https://reviews.llvm.org/D100259
2021-04-15 11:49:40 +01:00
Jun Ma 7e1422c1e4 [DAGCombiner] Fold step_vector with add/mul/shl
This patch implements some DAG combines for STEP_VECTOR:
add step_vector(C1), step_vector(C2) -> step_vector(C1+C2)
add (add X step_vector(C1)), step_vector(C2) -> add X step_vector(C1+C2)
mul step_vector(C1), C2 -> step_vector(C1*C2)
shl step_vector(C1), C2 -> step_vector(C1<<C2)

TestPlan: check-llvm

Differential Revision: https://reviews.llvm.org/D100088
2021-04-15 18:06:35 +08:00
David Sherwood ea14df695e [SVE][LoopVectorize] Fix crash in InnerLoopVectorizer::widenPHIInstruction
There were a few places in widenPHIInstruction where calculations of
offsets were failing to take the runtime calculation of VF into
account for scalable vectors. I've fixed those cases in this patch
as well as adding an assert that we should not be scalarising for
scalable vectors.

Tests are added here:

  Transforms/LoopVectorize/AArch64/sve-widen-phi.ll

Differential Revision: https://reviews.llvm.org/D99254
2021-04-15 10:51:49 +01:00
dfukalov ce1626f34a [AA] Updates for D95543.
Addressing latter comments in D95543:
- `AliasResult::Result` renamed to `AliasResult::Kind`
- Offset printing added for `PartialAlias` case in `-aa-eval`
- Removed VisitedPhiBBs check from BasicAA

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D100454
2021-04-15 12:22:03 +03:00
Florian Hahn acd9cc7495
[AArch64] Use type-legalization cost for code size memop cost.
At the moment, getMemoryOpCost returns 1 for all inputs if CostKind is
CodeSize or SizeAndLatency. This fools LoopUnroll into thinking memory
operations on large vectors have a cost of one, even if they will get
expanded to a large number of memory operations in the backend.

This patch updates getMemoryOpCost to return the cost for the type
legalization for both CodeSize and SizeAndLatency. This should more
accurately reflect the number of memory operations required.

From the description, I am not sure how latency should properly be included
in SizeAndLatency, but returning the size cost should clearly be more
accurate.

This does not cause any binary changes when building
MultiSource/SPEC2000/SPEC2006 with -O3 -flto for AArch64, likely because
large vector memops are not really formed by code emitted from Clang.
But using the C/C++ matrix extension can easily result in code with very
large vector operations directly from Clang, e.g.
https://clang.godbolt.org/z/6xzxcTGvb

Reviewed By: samparker

Differential Revision: https://reviews.llvm.org/D100291
2021-04-15 10:11:05 +01:00
David Sherwood 7120f89f7d [NFC][LoopVectorize] Remove unnecessary VF.isScalable asserts
There are a few places in LoopVectorize.cpp where we have been too
cautious in adding VF.isScalable() asserts and it can be confusing.
It also makes it more difficult to see the genuine places where
work needs doing to improve scalable vectorization support.

This patch changes getMemInstScalarizationCost to return an
invalid cost instead of firing an assert for scalable vectors. Also,
vectorizeInterleaveGroup had multiple asserts all for the same
thing. I have removed all but one assert near the start of the
function, and added a new assert that we aren't dealing with masks
for scalable vectors.

Differential Revision: https://reviews.llvm.org/D99727
2021-04-15 09:41:03 +01:00
Martin Storsjö 5144f730a8 [AArch64] Fix windows vararg functions with floats in the fixed args
On Windows, float arguments are normally passed in float registers
in the calling convention for regular functions. For variable
argument functions, floats are passed in integer registers. This
has already been done correctly for many years.

However, the surprising bit was that floats among the fixed arguments
also are supposed to be passed in integer registers, contrary to regular
functions. (This also seems to be the behaviour on ARM though, both
on Windows, but also on e.g. hardfloat linux.)

In the calling convention, don't promote shorter floats to f64, but
convert them to integers of the same length. (Floats passed as part of
the actual variable arguments are promoted to double already on the
C/Clang level; the LLVM vararg calling convention doesn't do any
extra promotion of f32 to f64 - this matches how it works on X86 too.)

Technically, this is an ABI break compared to older LLVM versions,
but it fixes compatibility with the official platform ABI. (In practice,
floats among the fixed arguments in variable argument functions is
a pretty rare construct.)

Differential Revision: https://reviews.llvm.org/D100365
2021-04-15 11:02:14 +03:00
Nikita Popov a1ed025d0e Revert "[SCEV] Don't walk uses of phis without SCEV expression when forgetting"
This reverts commit faf9f11589.

Issues with this patch have been reported in
https://reviews.llvm.org/D100264#2689917 and
https://bugs.llvm.org/show_bug.cgi?id=49967.
2021-04-15 09:43:52 +02:00
Florian Hahn 5a3ff24b12
[NewGVN] Add phi-of-ops operands if no real PHI is created.
If the PHI-of-ops simplifies to an existing value, no real PHI is
created, which means the dependencies between the
PHI-of-ops and its operands are not materialized in IR. At the
moment, we fail to create a real PHI node for the PHI-of-ops,
because the PHI-of-ops root instruction is not re-visited if
one of the PHI-of-ops operands changes. We need to add the
operands as additional users in this case.

Even with this patch, there are still some dependencies
missing. I will continue tackling the outstanding
reported crashes in this area.

Fixes PR36501, PR42422, PR42557.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D66924
2021-04-15 08:25:10 +01:00
Craig Topper c3f1271464 [RISCV] Add a PatFrag to shorten repeated (XLenVT (VLOp GPR:$vl)) in V extension patterns.
Reduces the amount of changes needed in D100288.
2021-04-14 22:36:35 -07:00
hsmahesha 4973b0c4e7 [AMDGPU] Disable forceful inline of non-kernel functions which use LDS.
Now that LDS uses within non-kernel functions are being handled in the
LowerModuleLDS pass, we no longer need to forcefully inline non-kernel
functions just because they use LDS. Do forceful inlining only when the
LowerModuleLDS pass is not enabled. It is enabled by default.

Reviewed By: JonChesterfield

Differential Revision: https://reviews.llvm.org/D100481
2021-04-15 09:12:56 +05:30
Alexander Yermolovich b7459a10da [DWARF] Fix crash for DWARFDie::dump.
When a DIE is extracted manually, the DieArray is empty. When dump is invoked on such a DIE, it tries to extract a child even if the dump options say otherwise, resulting in a crash.

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D99698
2021-04-14 18:46:34 -07:00
Philip Reames dd985551c2 Reapply "[InferAttributes] Materialize all infered attributes for declaration"" and follow on patches.
This reverts commit ab98f2c712 and 98eea392cd.

It includes a fix for the clang test which triggered the revert.  I failed to notice that test because there was another AMDGPU llvm test with a similar name and the exact same text in the error message.  Odd.  Since only one build bot reported the clang test, I didn't notice it.
2021-04-14 16:38:07 -07:00
Nico Weber ab98f2c712 Revert "[InferAttributes] Materialize all infered attributes for declaration"
Breaks check-clang, see comments on D100400

Also revert follow-up "[NFC] Move a recently added utility into a location to enable reuse"

This reverts commit 3ce61fb6d6.
This reverts commit 61a85da882.
2021-04-14 18:41:20 -04:00
Philip Reames 3ce61fb6d6 [NFC] Move a recently added utility into a location to enable reuse
About to refresh a patch that uses this in FunctionAttrs; doing the move separately to control build times.
2021-04-14 15:05:16 -07:00
Philip Reames 61a85da882 [InferAttributes] Materialize all infered attributes for declaration
We have some cases today where attributes can be inferred from another on access, but the result is not explicitly materialized in IR. This change is a step towards changing that.

Why? Two main reasons:

* Human clarity. It's really confusing trying to figure out why a transform is triggering when the IR doesn't appear to have the required attributes.
* This avoids the need to special case declarations in e.g. functionattrs. Since we can assume the attribute is present, we can work directly from attributes (and only attributes) without also needing to query accessors on Function to avoid missing cases due to unannotated (but inferred on use) declarations. (This piece will appear much easier to follow once D100226 also lands.)

Differential Revision: https://reviews.llvm.org/D100400
2021-04-14 14:45:24 -07:00
Thomas Lively 6a18cc23ef [WebAssembly] Codegen for i64x2.extend_{low,high}_i32x4_{s,u}
Removes the builtins and intrinsics used to opt in to using these instructions
and replaces them with normal ISel patterns now that they are no longer
prototypes.

Differential Revision: https://reviews.llvm.org/D100402
2021-04-14 13:43:09 -07:00
Mehrnoosh Heidarpour 29f189f90d [InstCombine] Conditionally emit nowrap flags when combining two adds
Currently, InstCombineCompare combines two add operations
into a single add operation which always has an nsw flag, without
checking whether this flag should be present
according to the original two add operations or not.

This patch changes InstCombineCompare to emit the nsw or
nuw flag only when these flags are allowed to be generated according to
the original add operations, removing the possibility of a wrong
optimization being applied by passes that run on the IR later
in the pipeline.

To confirm that the current results are buggy and the results after the
proposed patch are the correct IR, the following examples from Alive2
are attached; the same results can be seen in the case of the nuw flag,
and nsw is just used as an example. The following link shows that
the IR generated by current LLVM is buggy when none of the
original add operations have the nsw flag:
https://alive2.llvm.org/ce/z/WGaDrm
The following link proves that the IR generated after the patch in
the former case is the correct IR:
https://alive2.llvm.org/ce/z/wQ7G_e

Differential Revision: https://reviews.llvm.org/D100095
2021-04-14 20:53:06 +02:00
William S. Moses d3e2b4c0a2 [SROA][TBAA] Handle shift of regular TBAA nodes
SROA shifts TBAA nodes in a way that may present a problem for !tbaa but not !tbaa.struct nodes.

Differential Revision: https://reviews.llvm.org/D99851
2021-04-14 14:35:20 -04:00
Thomas Lively af7925b4dd [WebAssembly] Codegen for f64x2.convert_low_i32x4_{s,u}
Add a custom DAG combine and ISD opcode for detecting patterns like

  (uint_to_fp (extract_subvector ...))

before the extract_subvector is expanded to ensure that they will ultimately
lower to f64x2.convert_low_i32x4_{s,u} instructions. Since these instructions
are no longer prototypes and can now be produced via standard IR, this commit
also removes the target intrinsics and builtins that had been used to prototype
the instructions.

Differential Revision: https://reviews.llvm.org/D100425
2021-04-14 10:42:45 -07:00
Nikita Popov 0d91075f77 [ValueTracking] Don't require strictly positive for mul nsw recurrence
Just like in the mul nuw case, it's sufficient that the step is
non-zero. If the step is negative, then the values will jump
between positive and negative, "crossing" zero, but the value of
the recurrence is never actually zero.
2021-04-14 19:39:59 +02:00
Nikita Popov 5c0fb026c9 [ValueTracking] Don't require non-zero step for add nuw
It's okay if the step is zero, we'll just stay at the same non-zero
value in that case. The valuable part of this is that the step
doesn't even need to be a constant anymore.
2021-04-14 19:06:18 +02:00
Stanislav Mekhanoshin b7ebb25e53 [AMDGPU] Factor out SelectSAddrFI()
This is a service function generally useful for selection
of an FI in an SADDR. NFC for now; needed for a future patch.

Differential Revision: https://reviews.llvm.org/D100406
2021-04-14 09:40:02 -07:00
Sander de Smalen 4f42d873c2 [TTI] NFC: Change getArithmeticInstrCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html
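
A hedged sketch of consuming the new return type (TTI and Ty are an assumed
TargetTransformInfo and Type*, and doSomethingWith is a hypothetical consumer;
the call sites actually updated by this patch are not reproduced):

    InstructionCost Cost = TTI.getArithmeticInstrCost(Instruction::Add, Ty);
    // An InstructionCost can be invalid; unwrap the raw number only when valid.
    if (Cost.isValid())
      doSomethingWith(*Cost.getValue());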

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100317
2021-04-14 17:20:36 +01:00
Sander de Smalen d84bd951a8 [TTI] NFC: Change getFPOpCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: c-rhodes

Differential Revision: https://reviews.llvm.org/D100316
2021-04-14 17:20:36 +01:00
Sander de Smalen 1af35e77f4 [TTI] NFC: Change getVectorInstrCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100315
2021-04-14 17:20:35 +01:00
Sander de Smalen 174e8f6c5e [TTI] NFC: Change getShuffleCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100314
2021-04-14 17:20:35 +01:00
Sander de Smalen 14b934f8a6 [TTI] NFC: Change getCFInstrCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: samparker

Differential Revision: https://reviews.llvm.org/D100313
2021-04-14 17:20:34 +01:00
Sander de Smalen 596f669cfb [TTI] NFC: Change getCallInstrCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: c-rhodes

Differential Revision: https://reviews.llvm.org/D100312
2021-04-14 17:20:34 +01:00
Thomas Lively af7ab81ce3 [WebAssembly] Use standard intrinsics for f32x4 and f64x2 ops
Now that these instructions are no longer prototypes, we do not need to be
careful about keeping them opt-in and can use the standard LLVM infrastructure
for them. This commit removes the bespoke intrinsics we were using to represent
these operations in favor of the corresponding target-independent intrinsics.
The clang builtins are preserved because there is no standard way to easily
represent these operations in C/C++.

For consistency with the scalar codegen in the Wasm backend, the intrinsic used
to represent {f32x4,f64x2}.nearest is @llvm.nearbyint even though
@llvm.roundeven better captures the semantics of the underlying Wasm
instruction. Replacing our use of @llvm.nearbyint with use of @llvm.roundeven is
left to a potential future patch.

Differential Revision: https://reviews.llvm.org/D100411
2021-04-14 09:19:27 -07:00
Sjoerd Meijer 39d29817f3 [SCCP] Follow up of rGbbab9f986c6d. NFC.
This addresses the linter messages, mainly the inconsistent capitalisation of
member functions.
2021-04-14 17:14:46 +01:00
Benjamin Kramer cf4161673c [Instcombine] Disable memcpy of alloca bypass for instruction sources
This transformation is fundamentally broken when it comes to dominance;
it just happened to work when the source of the memcpy could be moved into
the place of the alloca. The bug shows up a lot more often since
077bff39d4 allows the source to be a
switch.

It would be possible to check dominance of the source and all its
operands, but that seems very heavy for instcombine.
2021-04-14 16:52:09 +02:00
hsmahesha e3070db0f7 [AMDGPU] Rename "LDS lowering" pass name.
Rename the "LDS lowering" pass option from `amdgpu-disable-lower-module-lds` to
`amdgpu-enable-lower-module-lds`, as the latter is consistent and reads better.

Reviewed By: JonChesterfield

Differential Revision: https://reviews.llvm.org/D100441
2021-04-14 20:19:53 +05:30
Simon Pilgrim 4fbe761572 [X86][SSE] canonicalizeShuffleWithBinOps - check for more combos of merge-able binary shuffles.
In the fold SHUFFLE(BINOP(X,Y),BINOP(Z,W)) -> BINOP(SHUFFLE(X,Z),SHUFFLE(Y,W)), check if both X/Z AND Y/W have at least one merge-able shuffle in which case the total number of shuffle should still fall.

Helps with instruction count regressions we saw while fixing PR48823
2021-04-14 15:24:41 +01:00
Simon Pilgrim b49c41afba [SLP] createOp - fix null dereference warning. NFCI.
Only attempt to propagateIRFlags if we have both SelectInsts - afaict we shouldn't have matched a min/max reduction without both SelectInsts, but the static analyzer doesn't know that.
2021-04-14 15:24:41 +01:00
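The guard described above amounts to something like the following; a minimal sketch with assumed variable and function names, not the actual SLPVectorizer code:

  #include "llvm/IR/Instructions.h"
  #include "llvm/Transforms/Utils/LoopUtils.h"
  using namespace llvm;

  // Sketch: only propagate IR flags when both reduction operands really are
  // selects, so there is no path on which a null pointer is dereferenced.
  static void propagateMinMaxFlags(Value *ReducedOp, Value *LHS, Value *RHS) {
    auto *Sel1 = dyn_cast_or_null<SelectInst>(LHS);
    auto *Sel2 = dyn_cast_or_null<SelectInst>(RHS);
    if (Sel1 && Sel2)
      propagateIRFlags(ReducedOp, {Sel1, Sel2});
  }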
Pablo Barrio cca40aa8d8 [AArch64][v8.5A] Add BTI to all function starts
The existing BTI placement pass avoids inserting "BTI c" when the
function has local linkage and is only directly called. However,
even in this case, there is a (small) chance that the linker later
adds a thunk with an indirect call to the function, e.g. if the
function is placed in a separate section and moved far away from
its callers. Make sure to add BTI for these functions too.

Differential Revision: https://reviews.llvm.org/D99417
2021-04-14 15:24:01 +01:00
Sjoerd Meijer bbab9f986c [SCCP] Create SCCP Solver
This refactors SCCP and creates an SCCPSolver interface and class so that it can
be used by other passes and transformations. We will use this in D93838, which
adds a function specialisation pass.

This is based on an early version by Vinay Madhusudan.

Differential Revision: https://reviews.llvm.org/D93762
2021-04-14 14:58:03 +01:00
Sanjay Patel 7ef2c68a3d [InstSimplify] improve efficiency for detecting non-zero value
Stepping through callstacks in the example from D99759 reveals
this potential compile-time improvement.

The savings come from avoiding ValueTracking's known-bits computation
when we have already handled the special-case patterns.

Further improvements in this direction seem possible.

This makes a degenerate test based on PR49785 about 40x faster
(25 sec -> 0.6 sec), but it does not address the larger question
of how to limit computeKnownBitsFromAssume(). I.e., the original
test there is still infinite-time for all practical purposes.

Differential Revision: https://reviews.llvm.org/D100408
2021-04-14 09:04:15 -04:00
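The ordering idea behind the savings can be sketched as follows; matchesCheapNonZeroPattern is a made-up placeholder for the special-case pattern checks, and this is not the actual InstSimplify/ValueTracking code:

  #include "llvm/Analysis/ValueTracking.h"
  #include "llvm/Support/KnownBits.h"
  using namespace llvm;

  // Hypothetical placeholder for the cheap structural checks (recurrences,
  // known non-zero operands, etc.).
  static bool matchesCheapNonZeroPattern(const Value *V) { return false; }

  // Sketch: try the cheap pattern checks first and only fall back to the
  // expensive known-bits computation when they don't decide the question.
  static bool isNonZeroSketch(const Value *V, const DataLayout &DL) {
    if (matchesCheapNonZeroPattern(V))
      return true;
    KnownBits Known = computeKnownBits(V, DL);
    return Known.isNonZero();
  }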
Sanjay Patel 5ae5d25e38 [ValueTracking] match negative-stepping non-zero recurrence
This is pulled out of D100408.

This avoids a regression that would be exposed by making the
calling code from InstSimplify more efficient.
2021-04-14 08:57:53 -04:00
Sebastian Neubauer 929edd4375 [AMDGPU] Mark scavenged SGPR as used
Otherwise the scavenger reuses the same register for storing the stack slot
offset if the stack slot offset is big.

Differential Revision: https://reviews.llvm.org/D100461
2021-04-14 14:55:01 +02:00
Sanjay Patel 4919365397 [ValueTracking] reduce code duplication; NFC
The start value can't be null for something to be a non-zero
recurrence, so hoist that common check out of the switch.

Subsequent checks may be incomplete or over-specified as noted in:
D100408
2021-04-14 08:32:42 -04:00
Zarko Todorovski 6b7838b68c [AIX] Allow safe for 32bit P8 VSX pattern matching
Pull in some of the Pwr8-and-above VSX pattern matching that is safe for 32-bit.

Reviewed By: nemanjai

Differential Revision: https://reviews.llvm.org/D97909
2021-04-14 08:12:48 -04:00
Tim Northover 6401b78ab3 SDAG: constant fold bf16 -> i16 casts
This direction is particularly useful because i16 constants are much more
likely to be legal than bf16.
2021-04-14 11:27:46 +01:00
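One concrete instance of this kind of fold could look like the sketch below (shown here for a plain bitcast; the commit may cover other cast kinds as well, and this is not the actual SelectionDAG code):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Sketch: fold (bitcast bf16 C) to the i16 constant with the same bits.
  static SDValue foldBF16BitcastToI16(SDNode *N, SelectionDAG &DAG) {
    SDValue Src = N->getOperand(0);
    if (auto *CF = dyn_cast<ConstantFPSDNode>(Src)) {
      // bitcastToAPInt() yields the raw 16-bit encoding of the bf16 value.
      APInt Bits = CF->getValueAPF().bitcastToAPInt();
      return DAG.getConstant(Bits, SDLoc(N), MVT::i16);
    }
    return SDValue(); // no fold
  }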
Roman Lebedev 2fea5d5d4a [InstCombine] tmp alloca bypass: ensure that the replacement dominates all alloca uses
After 077bff39d4,
isDereferenceableForAllocaSize() can recurse into selects,
which is causing a problem for the new test case,
reduced from https://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20210412/904154.html
because the replacement (the select) is defined after the first use
of an alloca, so we'd end up with a verifier error.

Now, this new check is too restrictive.
We can likely handle *some* cases, by trying to sink all uses of the alloca
to after the def.
2021-04-14 13:04:12 +03:00
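The condition being enforced ("the replacement dominates all alloca uses") boils down to a check of roughly this shape; a sketch with assumed names, not the actual InstCombine code:

  #include "llvm/IR/Dominators.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Sketch: only allow the bypass if the proposed replacement (e.g. the
  // select feeding the memcpy) dominates every existing use of the alloca;
  // otherwise the rewrite would produce uses before the def.
  static bool replacementDominatesAllUses(Instruction *Replacement,
                                          AllocaInst *AI,
                                          const DominatorTree &DT) {
    for (const Use &U : AI->uses())
      if (!DT.dominates(Replacement, U))
        return false;
    return true;
  }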
Simon Pilgrim 73737fe990 [X86] Fold cmpeq/ne(trunc(x),0) --> cmpeq/ne(x,0)
Relax the fold from rGbaadbe04bf75 to compare any op, not just logic ops, now that the movmsk regressions have been handled.
2021-04-14 11:02:02 +01:00
Simon Pilgrim 016ceb8382 [X86][SSE] combineSetCCMOVMSK - allow comparison with upper (known zero) bits in MOVMSK(SHUFFLE(X,u)) -> MOVMSK(X) fold
Extension to rG74f98391a7a4, we can also include any of the upper (known zero) bits in the comparison in the shuffle removal fold, just as long as we demand all the elements of the movmsk source vector.
2021-04-14 11:02:01 +01:00
Nemanja Ivanovic 8be3181df6 [PowerPC] Fix incorrect subreg typo from 0148bf53f0 2021-04-14 05:01:12 -05:00
Martin Storsjö 3b32dc4b84 [ARM] [COFF] Properly produce cross-section relative relocations
Differential Revision: https://reviews.llvm.org/D99574
2021-04-14 12:31:28 +03:00
Martin Storsjö d5c5cf5ce8 [AArch64] [COFF] Properly produce cross-section relative relocations
This fixes breakage on Windows/ARM64 after D94355.

Modelled after the corresponding code for X86; not entirely familiar
with those aspects of that layer otherwise.

Differential Revision: https://reviews.llvm.org/D99572
2021-04-14 12:31:26 +03:00
Serguei Katkov cf0d3477aa [GreedyRA ORE] Separate Folder Reloads and Zero Cost Folder Reloads
Patchpoint instructions have operands for which using the value from the stack
is actually zero cost (or the same cost as using a register).
In terms of statistics, it makes sense to report them separately.

Move from counting instructions related to stack spill/reload to
counting the number of stack slots referenced.

Reviewers: reames, MatzeB, anemet, thegameg
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D100016
2021-04-14 14:25:28 +07:00
Bogdan Graur 0acf4e5005 [NFC] Fix unused warning.
Differential Revision: https://reviews.llvm.org/D100449
2021-04-14 09:09:20 +02:00
Serguei Katkov 02265ed7ad [Live Intervals] Teach Greedy RA to recognize special case live-through
The statepoint instruction has a deopt section which is actually live through the call.
Currently this is handled by a special post-RA pass - fixup-statepoint-caller-saved.

This change teaches the Greedy RA that if a live-interval segment ends at a statepoint
instruction and its register is used in the deopt bundle, then that live interval interferes with the regmask of the statepoint,
and as a result a caller-saved register cannot be assigned to it.

Reviewers: reames, dantrushin
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D100296
2021-04-14 13:26:49 +07:00
Min-Yih Hsu 91b6ef64db [M68k] Put M68kInfo as the direct library dependency for AsmParser
M68kAsmParser uses `llvm::getTheM68kTarget` from M68kInfo, so we should
list M68kInfo as a direct dependency. Otherwise the build fails when building
LLVM libraries as shared objects (building LLVM libraries statically does not
have this problem).
2021-04-13 21:21:02 -07:00
Serguei Katkov d5ed0d4816 [Live Intervals] Factor-out unionBitMask. NFC.
For reuse in other places.
2021-04-14 10:39:01 +07:00
Wang, Pengfei a3b52a9d13 [X86][AMX] Refactor for PostRA ldtilecfg pass.
This is a follow-up to D99010. We didn't consider the live range of shape registers when hoisting ldtilecfg. There may be risks, e.g. we may happen to insert it into an invalid range for some registers and get an unexpected error.

This patch fixes the problem by storing the value to the corresponding stack slot of ldtilecfg immediately after each of its definitions.

This patch also fixes a problem in the previous code: if we don't have a ldtilecfg which dominates all AMX instructions, we cannot initialize shapes for the other ldtilecfgs.

There are still some optimization opportunities left, e.g. eliminating unused mov instructions, breaking the def-use dependency before RA, etc.

Reviewed By: LuoYuanke, xiangzhangllvm

Differential Revision: https://reviews.llvm.org/D99966
2021-04-14 10:08:23 +08:00
Philip Reames 00c8be3f93 fix whitespace typo 2021-04-13 19:02:41 -07:00
ShihPo Hung d5e962f1f2 [RISCV] Implement COPY for Zvlsseg registers
When copying Zvlsseg register tuples, we split the COPY into NF whole-register
moves, as below:

  $v10m2_v12m2 = COPY $v4m2_v6m2 # NF = 2
=>
  $v10m2 = PseudoVMV2R_V $v4m2
  $v12m2 = PseudoVMV2R_V $v6m2

This patch copies forwardCopyWillClobberTuple from AArch64 to check for
register overlap.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D100280
2021-04-13 18:55:51 -07:00
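For context, the helper being copied is a small overlap check of roughly this shape (a sketch modelled on the AArch64 version; the mask assumes a register file of 32 registers, which holds for the RISC-V vector registers):

  // Sketch: when a tuple copy is split into NumRegs forward whole-register
  // moves, the copy is clobbered if the destination range starts inside the
  // source range, i.e. a source register is overwritten before it is read.
  static bool forwardCopyWillClobberTuple(unsigned DestReg, unsigned SrcReg,
                                          unsigned NumRegs) {
    // Positive remainder modulo the number of registers in the class.
    return ((DestReg - SrcReg) & 0x1f) < NumRegs;
  }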
Nemanja Ivanovic 0148bf53f0 [PowerPC] Use correct node to get a super register from a subreg
The VSX tablegen file has some rather egregious uses of
COPY_TO_REGCLASS even in situations where it needs to use
SUBREG_TO_REG. While this produces correct code, it often doesn't
allow the register coalescer to coalesce copies and the resulting
code ends up being suboptimal. This patch just changes over
patterns that should use SUBREG_TO_REG.
2021-04-13 19:52:21 -05:00
Sterling Augustine 32e264921b Revert "[GlobalOpt] Revert valgrind hacks"
This reverts commit dbc16ed199.
2021-04-13 17:47:07 -07:00
root 645ce31c20 [RISCV] Add missing part of instruction vmsge{u}.vx

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D100115
2021-04-14 06:41:59 +08:00
Daniel Sanders be50657c6a [TableGen] Resolve concrete but not complete field access initializers
This fixes the resolution of Rec10.Zero in ListSlices.td.

As part of this, correct the definition of complete for ListInit such that
it's complete iff all the elements in the list are complete rather than
always being complete regardless of the elements. This is the reason
Rec10.TwoFive from ListSlices.td previously resolved despite being
incomplete, just as Rec10.Zero was.

Depends on D100247

Reviewed By: Paul-C-Anagnostopoulos

Differential Revision: https://reviews.llvm.org/D100253
2021-04-13 15:14:56 -07:00
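The corrected completeness rule for ListInit amounts to roughly the following; a sketch of the idea (as a free function with an assumed name), not necessarily the exact code in TableGen's Record.h/Record.cpp:

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/TableGen/Record.h"
  using namespace llvm;

  // Sketch of the corrected rule: a ListInit is complete iff every element
  // it contains is complete (previously it was unconditionally complete).
  static bool listIsComplete(const ListInit *LI) {
    return all_of(LI->getValues(),
                  [](const Init *Element) { return Element->isComplete(); });
  }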
Anirudh Prasad 6ddd8c28b7 [AsmParser][SystemZ][z/OS] Add support to AsmLexer to accept HLASM style integers
- Add support for HLASM style integers. These are the decimal integers [0-9].
- HLASM does not support the additional prefixed integers like `0b`, `0x`, octal integers, and Masm style integers.
- To achieve this, a field `LexHLASMStyleIntegers` (similar to the `LexMasmStyleIntegers` field) is introduced in `MCAsmLexer.h` as well as a corresponding setter.

Note: This field could also go into MCAsmInfo.h. I followed the precedent set by the `LexMasmIntegers` field.

Depends on https://reviews.llvm.org/D99286

Reviewed By: epastor

Differential Revision: https://reviews.llvm.org/D99374
2021-04-13 15:29:37 -04:00
Craig Topper 6aa6f748ae [RISCV] Add a generic PatGprImm class and use it to simplify patterns in RISCVInstrInfoB.td. NFC 2021-04-13 12:07:24 -07:00
Craig Topper cb073f1bc0 [RISCV] Make use of PatGprGpr and PatGpr in RISCVInstrInfoB.td. NFC 2021-04-13 12:06:58 -07:00
Yonghong Song a285bdb56f BPF: remove default .extern data section
Currently, any extern variable without a section attribute is put into a
default ".extern" BTF DataSec. The initial design was to put every extern
variable in a DataSec so libbpf can use it.

But later on, libbpf actually came to require extern variables to be put into
special sections, e.g., ".kconfig", ".ksyms", etc., so they can be used
properly based on the section name.

Andrii mentioned that since ".extern" variables are not actually used, it
makes sense to remove the section from the compiler so libbpf does not need to
deal with it, especially for static linking. The BTF for these extern variables
is still generated.

With this patch, I tested kernel selftests/bpf and all tests
passed. Indeed, removing the ".extern" DataSec seems to have no impact.

Differential Revision: https://reviews.llvm.org/D100392
2021-04-13 11:35:52 -07:00