Texture/sampler/surface operands can be either a register or an
immediate (an index of .texref, .samplerref or .surfref).
The TableGen declarations for these instructions used to have only
Int64Regs operands, which caused issues when the machine verifier
is turned on:
*** Bad machine code: Expected a register operand. ***
- function: bar
- basic block: %bb.0 (0x55b144d99ab8)
- instruction: %4:int32regs = SULD_1D_I32_TRAP 0, killed %2:int32regs
- operand 1: 0
The solution is to duplicate these instructions for all possible
operand types (i16imm and Int64Regs). Since this would
essentially double the amount of code in TableGen, the patch also
does some refactoring for the original instructions to keep
things manageable.
Differential Revision: https://reviews.llvm.org/D112232
`llvm-otool -tV foo.o` and `llvm-objdump --macho -d foo.o` would
previously fail on object files containing @TLVPPAGE or @TLVPPAGEOFF relocs.
Move the llvm-objdump-specific test from
llvm/test/MC/AArch64/arm64-tls-modifiers-darwin.s to new
llvm/test/tools/llvm-objdump/MachO/disassemble-arm64-tlv-modifers.test
and put the test for this fix in that new file.
Fixes PR52356.
Differential Revision: https://reviews.llvm.org/D112843
A problem was noted in the post-commit review for
c36b7e21bd / D113035:
If the source type is not an integer or integer vector,
then we could crash when calling ComputeNumSignBits().
Currently, the floating point instructions that depend on
rounding mode are correctly marked in the PPC back end with
an implicit use of the RM register. Similarly, instructions
that explicitly define the register are marked with an
implicit def of the same register. So for the most part,
RM-using code won't be moved across RM-setting instructions.
However, calls are not marked as RM-setting instructions so
code can be moved across calls. This is generally desired,
but so is the ability to turn off this behaviour with an
appropriate option - and -frounding-math really should be
that option.
This patch provides a set of call instructions (for direct
and indirect calls) that are marked with an implicit def of
the RM register. These will be used for calls that are marked
with the strictfp attribute.
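For illustration, a minimal IR sketch (my own example, not from the patch) of
the kind of call that would now be selected to one of the new RM-clobbering
call instructions:
```
define double @caller(double %a) strictfp {
entry:
  ; The call is treated as defining RM, so rounding-mode-dependent FP code
  ; cannot be moved across it.
  %r = call double @callee(double %a) strictfp
  ret double %r
}

declare double @callee(double)
```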
Differential revision: https://reviews.llvm.org/D111433
If processLogicalImmediate fails, we should return from the function
without changing InsInstrs or DelInstrs. This happens for
the CodeGen/AArch64/urem-seteq-nonzero.ll LIT test, as described in
https://reviews.llvm.org/D99662#2662296.
Callers of genAlternativeCodeSequence skip patterns where InsInstrs
stays empty, so this does not cause any issues now.
Differential Revision: https://reviews.llvm.org/D100047
NFC refactor of D113351, pulling out the APInt split/merge code from the BuildVectorSDNode bits extraction into a BuildVectorSDNode::recastRawBits helper. This is to allow us to reuse the code when we're packing constant folded APInt data back together.
Unfortunately sinking recipes for first-order recurrences relies on
the original position of recipes. So if a recipe needs to be sunk after
an optimized induction, it needs to stay in the original position, until
sinking is done. This is causing PR52460.
To fix the crash, keep the recipes in the original position until
sink-after is done.
Post-commit follow-up to c45045bfd0 to address PR52460.
This reverts commit 7cd273c339.
Several patches with test fixes have been applied:
0cada82f0a "[Test] Remove incorrect test in GVN"
97cb13615d "[Test] Separate IndVars test into AArch64 and X86 parts"
985cc490f1 "[Test] Remove separated test in IndVars",
and test failures caused by 5ec2386 should be resolved now.
This patch fixes a compiler crash when widening scalable-vector loads
and stores which end up breaking down to element-wise store operations.
It does so by providing a way for targets with support for
vector-predicated loads and stores to use those instead. By widening the
operation but maintaining the original effective operation length via
the EVL, only the intended vector elements are loaded or stored.
This method should in theory be possible and even preferred for
fixed-length vector types, but all fixed-length types can be broken down
into their elements, and regardless I have observed regressions in the
generated code when doing so. I believe this is simply due to
VP_LOAD/VP_STORE not being up to par with LOAD/STORE in terms of
optimization. It does improve performance on smaller self-contained
examples, however, so the potential is there.
While the only target that benefits from this is RISCV, the legalization
is generic and so was placed centrally.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D111248
When creating a splat of 0 for scalable vectors we tend to create them
using a combination of shufflevector and insertelement, i.e.
shufflevector (<vscale x 4 x i32> insertelement (<vscale x 4 x i32> poison, i32 0, i32 0),
<vscale x 4 x i32> poison, <vscale x 4 x i32> zeroinitializer)
However, for the case of a zero splat we can actually just replace the
above with zeroinitializer instead. This makes the IR a lot simpler and
easier to read. I have changed ConstantFoldShuffleVectorInstruction to
use zeroinitializer when creating a splat of integer 0 or FP +0.0 values.
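As a small sketch of the resulting IR (my own example, not from the patch's
tests), the splat now constant-folds to a plain zeroinitializer:
```
define void @store_zero_splat(<vscale x 4 x i32>* %p) {
  ; Previously this operand would have been the insertelement/shufflevector
  ; expression shown above; now it is simply zeroinitializer.
  store <vscale x 4 x i32> zeroinitializer, <vscale x 4 x i32>* %p
  ret void
}
```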
Differential Revision: https://reviews.llvm.org/D113394
Previously, InstCombine detected a pair of llvm.stacksave/stackrestore
instructions that are adjacent modulo debug instructions in order to
eliminate the llvm.stackrestore. This precludes situations where
intervening instructions (e.g. loads) preclude the llvm.stacksave and
llvm.stackrestore from becoming adjacent. This commit extends the logic
and allows for eliminating the llvm.stackrestore when the range of
instructions between them does not include any alloca or side-effect
causing instructions.
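As a rough sketch (a hypothetical example, not taken from the patch's tests),
a pair like the following can now be optimized even though a load sits between
the two intrinsic calls:
```
declare i8* @llvm.stacksave()
declare void @llvm.stackrestore(i8*)

define i32 @no_alloca_between(i32* %p) {
entry:
  %sp = call i8* @llvm.stacksave()
  ; Intervening, non-side-effecting instruction: previously this prevented
  ; the save/restore pair from being treated as adjacent.
  %v = load i32, i32* %p
  ; No alloca or side-effect-causing instruction in the range, so the
  ; llvm.stackrestore can be eliminated.
  call void @llvm.stackrestore(i8* %sp)
  ret i32 %v
}
```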
Signed-off-by: Itay Bookstein <itay.bookstein@nextsilicon.com>
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D113105
Hoist the instruction classification logic outside the loop
in preparation for reuse in a future commit.
Signed-off-by: Itay Bookstein <itay.bookstein@nextsilicon.com>
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D113464
Summary: Fix the build failure on MSVC by making the `T` and `U` of the function
'T llvm::Optional<T>::getValueOr<llvm::yaml::Hex32>(U &&) const &' the same.
Differential Revision: https://reviews.llvm.org/D111487
The existing code didn't add all necessary successors, which resulted in
disjoint basic blocks. These would end up not being legalized which, in the
best case, caused a fallback only in assert builds.
Here's an example:
https://godbolt.org/z/ndx15Enfj
We also end up getting weird codegen here.
Refactoring the code here allows us to correctly attach all successors. With
this patch, the above example gives correct codegen at -O0 with and without
asserts.
Also autogen the testcase to show that we add all the successors now.
Differential Revision: https://reviews.llvm.org/D113437
If a kernel had no formal arguments but did have the implicit
arguments, we were reporting a required kernarg alignment of 4. For
some reason we require an 8-byte alignment for this, even though
there's no real advantage and I don't see where this is documented in
the ABI.
The code object header code also claims the minimum alignment is 16,
which is what I thought you always got at runtime anyway so I don't
know why this matters.
This patch uses the abstract attributor introduced in D111054 to get the
assumption values instead of the `hasAssumption` function. This also
calls it so assumption information should propagate through the device
where applicable.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D111445
This patch introduces a new abstract attributor instance that propagates
assumption information from functions. Conceptually, if a function is
only called by functions that have certain assumptions, then we can
apply the same assumptions to that function. This problem is similar to
calculating the dominator set, but the assumptions are merged instead of
nodes.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D111054
Add tracing for loads and stores.
The primary goal is to have more options for data-flow-guided fuzzing,
i.e. use data flow insights to perform better mutations or more aggressive corpus expansion.
But the feature is general purpose and could be used for other things too.
Pipe the flag through clang and the clang driver, same as for the other SanitizerCoverage flags.
While at it, change some plain arrays into std::array.
Tests: clang flags test, LLVM IR test, compiler-rt executable test.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D113447
Changes from commit 1db137b185
added iteration over a hash map that can result in non-deterministic
order. Fix that by using a SmallMapVector to preserve the order.
Differential Revision: https://reviews.llvm.org/D113468
This patch fixes some issues introduced in
https://reviews.llvm.org/D108942:
1) Remove the default label to fix the bots that use
-Werror,-Wcovered-switch-default
2) Modify the malformed test to fix the bots that are
built without zlib support
3) Modify some error messages in malformed profiles
The implementation for and/or is the same, apart from the choice
of exactIntersectWith() vs exactUnionWith(). Extract a common
function to make future extension easier.
Rather than testing for many specific combinations of predicates
and values, compute the exact icmp regions for both comparisons
and check whether they union/intersect exactly. If they do,
construct the equivalent icmp for the new range. Assuming that the
existing code handled all possible cases, this should be NFC.
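As a hedged illustration (my own example, not from the patch), two compares
whose exact regions intersect to a range expressible as a single icmp fold
like this:
```
; For i8, the region of "sgt -1" is [0, 128) and the region of "ult 100"
; is [0, 100); their exact intersection [0, 100) is a single icmp again.
define i1 @src(i8 %x) {
  %a = icmp sgt i8 %x, -1
  %b = icmp ult i8 %x, 100
  %r = and i1 %a, %b
  ret i1 %r
}

define i1 @tgt(i8 %x) {
  %r = icmp ult i8 %x, 100
  ret i1 %r
}
```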
Differential Revision: https://reviews.llvm.org/D113367
For a declaration like the one below:
int __tag1 * __tag1 __tag2 *g
Commit 41860e602a ("BPF: Support btf_type_tag attribute")
implemented the following encoding:
VAR(g) -> __tag1 --> __tag2 -> pointer -> __tag1 -> pointer -> int
Some further experiments with linux btf_type_tag support, esp.
with generating attributes in vmlinux.h, and also some internal
discussion showed the following format is more desirable:
VAR(g) -> pointer -> __tag2 -> __tag1 -> pointer -> __tag1 -> int
The format makes it similar to other modifiers like 'const', e.g.,
const int *g
which has encoding VAR(g) -> PTR -> CONST -> int
Differential Revision: https://reviews.llvm.org/D113496
To be more consistent with other pass struct names.
There are still more passes that don't end with "Pass", but these are the important ones.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D112935
This patch fixes:
llvm/lib/ProfileData/InstrProf.cpp:146:3: error: default label in
switch which covers all enumeration values
[-Werror,-Wcovered-switch-default]
Add UNIQUED and DISTINCT properties in Metadata.def and use them to
implement restrictions on the `distinct` property of MDNodes:
* DIExpression can currently be parsed from IR or read from bitcode
as `distinct`, but this property is silently dropped when printing
to IR. This causes accepted IR to fail to round-trip. As DIExpression
appears inline at each use in the canonical form of IR, it cannot
actually be `distinct` anyway, as there is no syntax to describe it.
* Similarly, DIArgList is conceptually always uniqued. It is currently
restricted to only appearing in contexts where there is no syntax for
`distinct`, but for consistency it is treated equivalently to
DIExpression in this patch.
* DICompileUnit is already restricted to always being `distinct`, but
along with adding general support for the inverse restriction I went
ahead and described this in Metadata.def and updated the parser to be
general. Future nodes which have this restriction can share this
support.
The new UNIQUED property applies to DIExpression and DIArgList, and
forbids them to be `distinct`. It also implies they are canonically
printed inline at each use, rather than via MDNode ID.
The new DISTINCT property applies to DICompileUnit, and requires it to
be `distinct`.
A potential alternative change is to forbid the non-inline syntax for
DIExpression entirely, as is done with DIArgList implicitly by requiring
it appear in the context of a function. For example, we would forbid:
!named = !{!0}
!0 = !DIExpression()
Instead we would only accept the equivalent inlined version:
!named = !{!DIExpression()}
This essentially removes the ability to create a `distinct` DIExpression
by construction, as there is no syntax for `distinct` inline. If this
patch is accepted as-is, the result would be that the non-canonical
version is accepted, but the following would be an error and produce a diagnostic:
!named = !{!0}
; error: 'distinct' not allowed for !DIExpression()
!0 = distinct !DIExpression()
Also update some documentation to consistently use the inline syntax for
DIExpression, and to describe the restrictions on `distinct` for nodes
where applicable.
Reviewed By: StephenTozer, t-tye
Differential Revision: https://reviews.llvm.org/D104827
If profile data is malformed for any kind of reason, we generate
an error that only reports "malformed instrumentation profile data"
without any further information. This patch extends InstrProfError
class to receive an optional error message argument, so that we can
do better error reporting.
Differential Revision: https://reviews.llvm.org/D108942
Op0 - umax(X, Op0) --> 0 - usub.sat(X, Op0)
I'm not sure if this is really an improvement in IR because
we probably have better recognition/analysis for min/max,
but this lines up with the fold we do for the icmp+select
idiom and removes another diff from D98152.
This is similar to the previous fold in the code that was
added with:
83c2fb9f66baa6a85130
https://alive2.llvm.org/ce/z/5MrVB9
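A minimal before/after IR sketch of the new fold (my own example, assuming i8
operands):
```
declare i8 @llvm.umax.i8(i8, i8)
declare i8 @llvm.usub.sat.i8(i8, i8)

; before: Op0 - umax(X, Op0)
define i8 @src(i8 %x, i8 %op0) {
  %m = call i8 @llvm.umax.i8(i8 %x, i8 %op0)
  %r = sub i8 %op0, %m
  ret i8 %r
}

; after: 0 - usub.sat(X, Op0)
define i8 @tgt(i8 %x, i8 %op0) {
  %s = call i8 @llvm.usub.sat.i8(i8 %x, i8 %op0)
  %r = sub i8 0, %s
  ret i8 %r
}
```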
This patch adds minimal support for D programming language demangling in LLVM
core, based on the D name mangling spec. This will allow easier integration with
a future LLDB plugin for D, either in the upstream tree or outside of it.
Minimal support includes recognizing D demangling encoding and at least one
mangling name, which in this case is `_Dmain` mangle.
Reviewed By: jhenderson, lattner
Differential Revision: https://reviews.llvm.org/D111414
Implement support for loading the stack canary from a memory location held in
the TLS register, with an optional offset applied. This is used by the Linux
kernel to implement per-task stack canaries, which is impossible on SMP systems
when using a global variable for the stack canary.
Reviewed By: nickdesaulniers
Differential Revision: https://reviews.llvm.org/D112768
combineMulToPMADDWD is currently limited to legal types, but there's no reason why we can't handle any larger type that the existing SplitOpsAndApply code can use to split to legal X86ISD::VPMADDWD ops.
This also exposed a missed opportunity for pre-SSE41 targets to handle SEXT ops from types smaller than vXi16 - without PMOVSX instructions these will always be expanded to unpack+shifts, so we can cheat and convert this into a ZEXT(SEXT()) sequence to make it a valid PMADDWD op.
Differential Revision: https://reviews.llvm.org/D110995
Changes VPReplicateRecipe to extract the last lane from an unconditional,
uniform store instruction. collectLoopUniforms will also add stores to
the list of uniform instructions where Legal->isUniformMemOp is true.
setCostBasedWideningDecision now sets the widening decision for
all uniform memory ops to Scalarize, where previously GatherScatter
may have been chosen for scalable stores.
This fixes an assert ("Cannot yet scalarize uniform stores") in
setCostBasedWideningDecision when we have a loop containing a
uniform i1 store and a scalable VF, which we cannot create a scatter for.
Reviewed By: sdesmalen, david-arm, fhahn
Differential Revision: https://reviews.llvm.org/D112725
(Cond & C) | (~bitcast(Cond) & D) --> bitcast (select Cond, (bc C), (bc D))
This is part of fixing:
https://llvm.org/PR34047
That report shows a case where a bitcast is sitting between the select condition
candidate and its 'not' value due to current cast canonicalization rules.
There's a bitcast type restriction that might be violated in existing matching,
but I still need to investigate if that is possible -
Alive2 shows we can only do this transform safely when the bitcast is from
narrow to wide vector elements (otherwise poison could leak into elements
that were safe in the original code):
https://alive2.llvm.org/ce/z/Hf66qh
Differential Revision: https://reviews.llvm.org/D113035
Prevent the selection of IVs that have a SCEV containing an undef. Also
prevent salvaging attempts for values for which a SCEV could not be
created by ScalarEvolution and which only have a SCEVUnknown.
Reviewed by: Orlando
Differential Revision: https://reviews.llvm.org/D111810
Without this patch, passingValueIsAlwaysUndefined will iterate over all
instructions from I to the end of the basic block, even if the use is
outside the block.
This patch adds an early bail out, if the use instruction is outside I's
BB. This can greatly reduce compile-time in cases where very large basic
blocks are involved, with a large number of PHI nodes and incoming
values.
Note that the refactoring also makes the handling of the case where I is a
phi and the use is in a PHI node more explicit: for phi nodes, we can also
directly bail out. In the existing code, we would iterate until we reach
the end and return false.
Based on an earlier patch by Matt Wala.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D113293
This patch adds a DUP+FMUL => FMUL_indexed pattern to InstCombiner.
FMUL_indexed is normally selected during instruction selection, but it
does not work in cases when VDUP and VMUL are in different basic
blocks.
Differential Revision: https://reviews.llvm.org/D99662
This patch merges FoldConstantVectorArithmetic back into FoldConstantArithmetic.
Like FoldConstantVectorArithmetic we now handle vector ops with any operand count, but we currently still only handle binops for scalar types - this can be improved in future patches - in particular some common unary/trinary ops still have poor constant folding.
There's one change in functionality causing test changes - FoldConstantVectorArithmetic bails early if the build/splat vector isn't all constant (with some undefs) elements, but FoldConstantArithmetic doesn't - it instead attempts to fold the scalar nodes and bails if they fail to regenerate a constant/undef result, allowing some additional identity/undef patterns to be handled.
Differential Revision: https://reviews.llvm.org/D113300
It is often useful to know which die is the parent of the current die.
This patch adds information about parent offset into the dump:
0x0000000b: DW_TAG_compile_unit
DW_AT_producer ("by_hand")
0x00000014: DW_TAG_base_type (0x0000000b) <<<<<<<<<<<<<<
DW_AT_name ("int")
Now it is easy to see which die is the parent of the current die.
This patch makes that behaviour the default.
We can make it opt-in if necessary.
This functionality differs from the already existing "--show-parents"
option in the sense that parent information is shown for all dies and
only a link to the immediate parent is shown.
Differential Revision: https://reviews.llvm.org/D113406
It is trivial to produce DemandedSrcElts given DemandedReplicatedElts,
so don't pass the former. Also, it isn't really useful so far
to have the overload taking the Mask, so just inline it.
This reapplies patch db289340c8.
The test failures on build with expensive checks caused by the patch happened due
to the fact that we sorted loop Phis in replaceCongruentIVs using llvm::sort,
which shuffles the given container if the expensive checks are enabled,
so equivalent Phis in the sorted vector had different mutual order from run
to run. replaceCongruentIVs tries to replace narrow Phis with truncations
of wide ones. In some test cases there were several Phis with the same
width, so if their order differs from run to run, the narrow Phis would
be replaced with a different Phi, depending on the shuffling result.
The patch ae14fae0ff fixed this issue by
replacing llvm::sort with llvm::stable_sort.
This patch adds a function to verify general properties of VPlans. The
first check makes sure that all phi-like recipes are at the beginning of
a block, with no other recipes in between.
Note that this may not hold for VPBlendRecipes at the moment,
as other recipes may be inserted before the VPBlendRecipe during mask
creation.
Note that this patch depends on D111300 and D111301, which fix code that
breaks the checked invariant.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D111302
Summary:
This patch adds yaml2obj support for the auxiliary
file header of XCOFF.
Reviewed By: DiggerLin, jhenderson
Differential Revision: https://reviews.llvm.org/D111487
This is a fix for test failures on expensive checks build caused by db289340c8.
With LLVM_ENABLE_EXPENSIVE_CHECKS enabled the llvm::sort shuffles the given container.
However, the sort is only called when the TTI is passed to replaceCongruentIVs.
In the mentioned patch we pass it TTI, so the sort happens. But due to shuffling
equivalent Phis may appear in different order from run to run.
With the stable_sort instead of sort this is impossible - the order of sorted Phis
is preserved.
This fixes an assertion failure with -early-live-intervals when trying
to update the live intervals for a debug instruction, which doesn't even
have a slot index.
Differential Revision: https://reviews.llvm.org/D113116
operand bundle "clang.arc.attachedcall" with ObjC runtime functions
The existing code only handles the case where the intrinsic being
rewritten is used as the called function pointer of a call/invoke.
This patch makes the changes to the ARC middle-end passes that are
needed to handle operand bundle "clang.arc.attachedcall" on targets that
don't use the inline asm marker for the retainRV/autoreleaseRV
handshake (e.g., x86-64).
Note that anyone who wants to use the operand bundle on their target has
to teach their backend to handle the operand bundle. The x86-64 backend
already knows about the operand bundle (see
https://reviews.llvm.org/D94597).
Differential Revision: https://reviews.llvm.org/D111334
Tablegen uses copious amounts of global state for uniquing various records.
This was fine under the original vision where tablegen was a tool, and not a
library, but there are various usages of tablegen that want to use it as a library.
One concrete example is that downstream we have a kythe indexer for tablegen
constructs that allows for IDEs to serve go-to-definition/references/and more.
We currently (kind of hackily) keep the tablegen parts in a shared library that
gets loaded/unloaded.
This revision starts to remedy this by globbing all of the static state into a
managed static so that it can at least be unloaded with llvm_shutdown.
A better solution would be to feed in a context variable (much like how
the IR in LLVM/MLIR do), but that is a more invasive change that can come later.
Differential Revision: https://reviews.llvm.org/D108934
When emitting a reloc for the Wasm global __stack_pointer, it was inadvertently added to the symbols used for generating aranges, which caused some aranges to use it as the end symbol in a symbol diff, which caused a reloc for it to be emitted, which then caused an assert in `wasm64` since we have no 64-bit relocs for Wasm globals.
Fixes: https://bugs.llvm.org/show_bug.cgi?id=52376
Differential Revision: https://reviews.llvm.org/D113438
This is consistent with what we do for other operands that are required
to be constants.
I don't think this results in any real changes. The pattern match
code for isel treats ConstantSDNode and TargetConstantSDNode the same.
This is in preparation for only invalidating analyses on changed
functions.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D113303
Use weak value handles for both the key and the value, so that the cache
entry is invalidated when the instruction the key points to is deleted.
The entry is invalid if either value handle is null.
This fixes an assertion failure in BasicAAResult::alias that is caused
by UnderlyingObjCPtrCache returning a wrong value.
I don't have a test case for this patch that fails reliably.
rdar://83984790
Currently, LOAD_STACK_GUARD on ARM is only implemented for Mach-O targets, and
other targets rely on the generic support which may result in spilling of the
stack canary value or address, or may cause it to be kept in a callee save
register across function calls, which means they essentially get spilled as
well, only by the callee when it wants to free up this register.
So let's implement LOAD_STACK_GUARD for other targets as well. This ensures
that the load of the stack canary is rematerialized fully in the epilogue.
This code was split off from
D112768: [ARM] implement support for TLS register based stack protector
for which it is a prerequisite.
Reviewed By: nickdesaulniers
Differential Revision: https://reviews.llvm.org/D112811
- CUDA cannot associate memory space with pointer types. Even though Clang could add extra attributes to specify the address space explicitly on a pointer type, it breaks the portability between Clang and NVCC.
- This change proposes to infer the address space of a pointer from assumptions built upon target-specific address space predicates, such as `__isGlobal` from CUDA. E.g.,
```
void foo(float *p) {
__builtin_assume(__isGlobal(p));
// From there, we could assume p is a global pointer instead of a
// generic one.
}
```
This makes the code portable without introducing implementation-specific features.
Note that NVCC supports __builtin_assume starting from version 11.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D112041
This fixes the following clang VFS tests, if `windows_slash` is the
default style:
Clang :: VFS/implicit-include.c
Clang :: VFS/relative-path.c
Clang-Unit :: Frontend/./FrontendTests.exe/CompilerInstance.DefaultVFSOverlayFromInvocation
Also change a couple of references to `Style::windows` into
`Style::windows_backslash`, to make it clearer that each of them is
opinionated in different directions (even if it doesn't matter for
calls to e.g. `is_absolute`).
Differential Revision: https://reviews.llvm.org/D113272
InstCombine converts range tests of the form (X > C1 && X < C2) or
(X < C1 || X > C2) into checks of the form (X + C3 < C4) or
(X + C3 > C4). It is possible to express all range tests in either
of these forms (with different choices of constants), but currently
neither of them is considered canonical. We may have equivalent
range tests using either ult or ugt.
This proposes to canonicalize all range tests to use ult. An
alternative would be to canonicalize to either ult or ugt depending
on the specific constants involved -- e.g. in practice we currently
generate ult for && style ranges and ugt for || style ranges when
going through the insertRangeTest() helper. In fact, the "clamp like"
fold was relying on this, which is why I had to tweak it to not
assume whether inversion is needed based on just the predicate.
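For example (a hypothetical i8 sketch, not from the patch's tests), both range
tests below are equivalent and the second is the canonical ult form:
```
; range test: X u> 5 && X u< 10, i.e. X in [6, 10)
define i1 @src(i8 %x) {
  %a = icmp ugt i8 %x, 5
  %b = icmp ult i8 %x, 10
  %r = and i1 %a, %b
  ret i1 %r
}

; canonical form: (X + C3) u< C4
define i1 @tgt(i8 %x) {
  %o = add i8 %x, -6
  %r = icmp ult i8 %o, 4
  ret i1 %r
}
```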
Proof: https://alive2.llvm.org/ce/z/_SP_rQ
Differential Revision: https://reviews.llvm.org/D113366
I am planning to upstream MachOObjectFile code to support Darwin
chained fixups. In order to test the new parser features we need a way
to produce correct (and incorrect) chained fixups. Right now the only
tool that can produce them is the Darwin linker. To avoid having to
check in binary files, this patch allows obj2yaml to print a hexdump
of the raw LINKEDIT and DATA segments, which both allows us to
bootstrap the parser and enables us to easily create malformed inputs
to test error handling in the parser.
This patch adds two new options to obj2yaml:
-raw-data-segment
-raw-linkedit-segment
Differential Revision: https://reviews.llvm.org/D113234
All phi-like recipes should be at the beginning of a VPBasicBlock with
no other recipes in between. Ensure that the recurrence-splicing recipe
is not added between phi-like recipes, but after them.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D111301
These and MULHS/MULHU both default to Legal. Targets need to set
the ones they don't support to Expand.
I think MULHS/MULHU likely has priority in most places so this
change probably isn't directly testable. I found it while looking
at disabling MULHS/MULHU for nxvXi64 as required for Zve64x.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D113325
When targeting a specific CPU with scalable vectorization, the knowledge
of that particular CPU's vscale value can be used to tune the cost-model
and make the cost per lane less pessimistic.
If the target implements 'TTI.getVScaleForTuning()', the cost-per-lane
is calculated as:
Cost / (VScaleForTuning * VF.KnownMinLanes)
Otherwise, it assumes a value of 1 meaning that the behavior
is unchanged and calculated as:
Cost / VF.KnownMinLanes
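For example (hypothetical numbers), with a Cost of 16 and VF = vscale x 4, a
target reporting getVScaleForTuning() == 2 would yield a cost per lane of
16 / (2 * 4) = 2 rather than 16 / 4 = 4.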
Reviewed By: kmclaughlin, david-arm
Differential Revision: https://reviews.llvm.org/D113209
NFC. This check mainly handles size affecting literals. Make it check all
explicit operands instead of a few by name. Also make the isLiteral
check handle the KIMM operands, see https://reviews.llvm.org/D111067
Change-Id: I1a362d55b2a10f5c74d445272e8b7829a8b77597
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D113318
Change-Id: Ie6c688f30a71e0335d1c6dd1ff65019bd7ce684e
An extended value is known to be inside a range smaller than the full one.
Prevent SCCP from marking such a value as overdefined.
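A small hypothetical IR example of the kind of fact this enables (my sketch,
not from the patch's tests):
```
define i1 @cmp_of_zext(i8 %x) {
  ; %e is known to lie in [0, 256), not the full i32 range.
  %e = zext i8 %x to i32
  ; With a range tracked instead of "overdefined", SCCP can fold this
  ; comparison to true.
  %c = icmp ult i32 %e, 256
  ret i1 %c
}
```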
Fixes PR52253
Differential Revision: https://reviews.llvm.org/D112721
Use ComputeMinSignedBits() to ensure the mul source operands at least sign-extend down from the bottom 16 bits.
This will make it easier if/when we try to support handling of source types larger than 32-bits.
The common use case for calling createStepForVF is currently something
like:
Value *Step = createStepForVF(Builder, ConstantInt::get(Ty, UF), VF);
and it makes more sense to reduce overall lines of code and change the
function to let it create the constant instead. With my patch this
becomes:
Value *Step = createStepForVF(Builder, Ty, VF, UF);
and the ConstantInt is created inside createStepForVF instead. A side-effect of
this is that the code in createStepForVF also becomes simpler.
As part of this patch I've also replaced some calls to getRuntimeVF
with calls to createStepForVF, i.e.
getRuntimeVF(Builder, Count->getType(), VFactor * UFactor) ->
createStepForVF(Builder, Count->getType(), VFactor, UFactor)
because this feels semantically better.
Differential Revision: https://reviews.llvm.org/D113122
This adds FP type support to the SVE Container type list as a supplement to D112303.
Reviewed By: peterwaller-arm, paulwalker-arm
Differential Revision: https://reviews.llvm.org/D113333
As suggested on D113371, this adds a wrapper to SelectionDAG::ComputeNumSignBits, similar to the llvm::ComputeMinSignedBits wrapper.
I've included some usage; it's not exhaustive, just the more obvious cases where the intention is clear.
Differential Revision: https://reviews.llvm.org/D113396
When inserting an unpacked FP subvector into a packed vector we
can simply cast the unpacked value into a packed value, since
both types are legal for SVE. We can then use this as the input
for the UZP instruction. This avoids us expanding the operation
by going through the stack.
Differential Revision: https://reviews.llvm.org/D113270
Add a new triple and target info for 'spirv32' and 'spirv64', thus
enabling clang (LLVM IR) code emission to the SPIR-V target.
The target for SPIR-V is mostly reused from SPIR by derivation
from a common base class since IR output for SPIR-V is mostly
the same as for SPIR. Some refactoring is made accordingly.
Added and updated tests for parts that are different between
SPIR and SPIR-V.
Patch by linjamaki (Henry Linjamäki)!
Differential Revision: https://reviews.llvm.org/D109144
In IndVarSimplify after simplifying and extending loop IVs we call 'replaceCongruentIVs'.
This function optionally takes a TTI argument to be able to replace narrow IVs uses
with truncates of the widest one.
For some reason the TTI wasn't passed to the function, so it couldn't perform such a
transform.
This patch fixes it.
Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D113024