This is a follow-up to D105872. Now we are able to prepare for the update
form with a non-constant increment.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D106032
Add a new LLVM switch `-profile-sample-block-accurate` to trust zero block counts for branches. Currently we leave out such zero counts when annotating branch weight metadata, which would lead to weights being considered as unknown.
Differential Revision: https://reviews.llvm.org/D110117
And always print it.
This makes some LLVM diagnostics match up better with Clang's diagnostics.
Updated some AMDGPU uses of DiagnosticInfoResourceLimit and now we print
better diagnostics for those.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D110204
This change adds the ASan intrinsic to the list of those which set hasCopyImplyingStackAdjustment.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D110012
This time with the right bug number.
When we rewrite the setcc we replace the old setcc output register
with the new CondReg. But since CondReg can be shared by other
replacements, we don't know if the kill flags for the old register
are valid for CondReg. So be conservative and remove them.
The test case has a SETCCr and a SETCCm on the same condition so
they end up sharing the same CondReg. The SETCCr had one use with
a kill flag. This kill flag isn't valid after the replacement because
CondReg needs a live range extending to the later SETCCm replacement.
Fixes PR51903.
Currently, the dead function information obtained from optimization remarks does not contain debug locations, but knowing where these dead functions are located could be useful for debugging or for detecting dead code.
Because in `LTO::addRegularLTO()` we use `BitcodeModule::getLazyModule()` to read the bitcode module, when we pass Function F to `ore::NV()`, F is not materialized, so `F->getSubprogram()` returns nullptr, and there is no debug location information for dead functions in optimization remarks.
This patch calls `F->materialize()` before we pass Function F to `ore::NV()`, so that debug location information is emitted for dead functions in optimization remarks.
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D109737
When we rewrite the setcc we replace the old setcc output register
with the new CondReg. But since CondReg can be shared by other
replacements, we don't know if the kill flags for the old register
are valid for CondReg. So be conservative and remove them.
The test case has a SETCCr and a SETCCm on the same condition so
they end up sharing the same CondReg. The SETCCr had one use with
a kill flag. This kill flag isn't valid after the replacement because
CondReg needs a live range extending to the later SETCCm replacement.
Fixes PR51908.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D110046
This comment references behavior that was removed in
ccae43a247, which is a commit from 5 years
ago. It seems safe to assume that that behavior won't be coming back
soon. If it does, we can re-add this part of the comment :)
An incompleteness in the logic may lead MemorySSA to be too conservative
in its results. Specifically, when dealing with a call of the form
`call i32 bitcast (i1 (i1)* @test to i32 (i32)*)(i32 %1)`, where
the function `test` is declared with readonly attribute, the
bitcast is not looked through, obscuring function attributes. Hence,
some methods of CallBase (e.g., doesNotReadMemory) could provide
suboptimal results.
Differential Revision: https://reviews.llvm.org/D109888
MergeICmps will currently sort (by offset) all comparisons in a chain,
including those that do not get merged. This is problematic in two ways:
* We may end up moving the original first block into the middle of
the chain, in which case the "extra work" instructions will also
be in the middle of the chain, resulting in invalid IR
(reported in https://reviews.llvm.org/D108782#3005583).
* Reordering branches is generally not legal, because it may
introduce branch on poison, which is UB (PR51845). The merging
done by MergeICmps is legal as long as we assume that memcmp()
works on frozen memory, but the reordering of unmerged comparisons
is definitely incorrect (without inserting freeze instructions),
so we should avoid it.
There are easier ways to fix the first issue, but I figured it was
worthwhile to do this properly to also fix the second one. What we
now do is to restore the original relative order of (potentially
merged) comparisons.
I took the liberty of dropping the MERGEICMPS_DOT_ON functionality,
because it would be more awkward to implement now (as the before and
after representation is different) and it doesn't seem terribly
useful nowadays.
Differential Revision: https://reviews.llvm.org/D110024
Normally, given that the DA results are kept consistent over the selection DAG, uniform comparisons get selected to S_CMP_* but divergent ones to V_CMP_*. Sometimes, for the sake of efficiency, SSA subgraphs may be converted to VALU to avoid repeatedly copying data back and forth. Hence we have to be able to correctly pass an i1 value from a VALU context to a SALU context and vice versa.
VALU operations only process the active lanes of a VGPR and ignore inactive ones.
Each active lane corresponds to a set bit in the EXEC mask register.
SALU represents an i1 as just one bit, but VALU uses 64 bits: 0/1 and 0/(0xffffffffffffffff & EXEC) respectively.
SALU uses the one-bit condition flag SCC, while VALU uses VCC, which is a pair of 32-bit SGPRs.
To expose SCC to a VALU context we need to convert the one-bit boolean value to the appropriate 64-bit form.
To return to a SALU context we need to do the opposite.
To correctly convert a 64-bit VALU boolean to either 0 or 1 we need to filter out the bits corresponding to inactive lanes.
Reviewed By: piotr
Differential Revision: https://reviews.llvm.org/D109900
When adding alias.scope and noalias metadata to a memcpy function,
the alias.scope and noalias metadata from the operands are merged.
The rule for merging alias.scope is to take the intersection of
the domains and the union of the scopes within those domains.
The rule for merging noalias is to take the intersection.
The bug is that AMDGPULowerModuleLDS was using concatenation for
both alias.scope and noalias. For example, suppose f1 and f2 are added
to the LDS structure and there is a memcpy(f2, f1, sizeof(f1)).
Concatenation then creates noalias metadata for the memcpy that
includes both {f1, f2}. That means that the memcpy is assumed
not to alias a prior load of f2, which enables the optimizer to
remove a load of f2 that occurs after the memcpy.
The function MDNode::getMostGenericAliasScope defines the semantics
for alias.scope. There is a function, combineMetadata in Local.cpp,
that uses intersect for noalias.
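To illustrate (a hand-written sketch, not taken from the patch; the metadata names are invented):
!0 = distinct !{!0}        ; domain
!1 = distinct !{!1, !0}    ; scope for accesses to f1
!2 = distinct !{!2, !0}    ; scope for accesses to f2
; store to f1:  !alias.scope !{!1}, !noalias !{!2}
; store to f2:  !alias.scope !{!2}, !noalias !{!1}
; A memcpy(f2, f1) touches both objects, so its noalias set must be the
; intersection of the operands' noalias sets (empty here). Concatenating
; to !{!1, !2} instead wrongly claims the memcpy cannot alias a load of f2.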
Differential Revision: https://reviews.llvm.org/D110049
This patch adds a prefixed load pattern for an fpext of v2f32 to v2f64, where we
are dealing with a value at an offset that fits into a 34-bit signed immediate.
A reduced test case is also added that exercises the pattern; it is checked
in the big endian CHECKs of the newly added test.
Differential Revision: https://reviews.llvm.org/D109887
This patch fixes the crash found by PR51614:
whenever doing tail folding, interleave groups must be considered under mask.
Another fix D108900 follows for targets that support masked loads and stores:
when *deciding* to vectorize with masked interleave groups, check if the access
is reverse - which is currently not supported; rather than (only) asserting when
computing cost and generating code.
Differential Revision: https://reviews.llvm.org/D108891
This requires a minor change to CodeGenPrepare to ensure that
shouldSinkOperands will be called for And.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D110106
We already have pow(x, y) * pow(x, z) -> pow(x, y + z) transformation, but we are missing same transformation for powi (power is integer).
Requires reassoc.
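A sketch of the intended fold on IR (hand-written; assumes the type-suffixed powi mangling and reassoc on all three instructions):
%p1 = call reassoc double @llvm.powi.f64.i32(double %x, i32 %y)
%p2 = call reassoc double @llvm.powi.f64.i32(double %x, i32 %z)
%r = fmul reassoc double %p1, %p2
; =>
%s = add i32 %y, %z
%r = call reassoc double @llvm.powi.f64.i32(double %x, i32 %s)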
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D109954
isValidAssumeForContext can provide better results with access to the
dominator tree in some cases. This patch adjusts computeConstantRange to
allow passing through a dominator tree.
The use in VectorCombine is updated to pass through the DT to enable
additional scalarization.
Note that similar APIs like computeKnownBits already accept optional dominator
tree arguments.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D110175
When using instructions which have a MetadataAsValue argument
(e.g. some target-specific intrinsics), MD canonicalization strips
internal MDNodes with a single ConstantAsMetadata child. That
prevented IRTranslator from properly translating such calls.
Optimize (add (mul x, c0), c1) -> (ADDI (MUL (ADDI, c1/c0), c0), c1%c0),
if c1/c0 and c1%c0 are simm12, while c1 is not.
Optimize (add (mul x, c0), c1) -> (MUL (ADDI, c1/c0), c0),
if c1%c0 is zero, and c1/c0 is simm12 while c1 is not.
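For example (illustrative numbers): with c0 = 100 and c1 = 4101, c1 is not a simm12, but c1/c0 = 41 and c1%c0 = 1 both are, so 100*x + 4101 can be selected as ((x + 41) * 100) + 1.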
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D108607
This patch allows sinking an instruction which can have multiple uses in a
single user. We were previously over-restrictive by looking for exactly one use,
rather than one user.
Also added an API for retrieving a unique undroppable user.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D109700
One of the two inputs of the Shufflevector is often a placeholder.
Previously, there were cases where the placeholder was undef, and there were cases where it was poison.
I added these constructors to create a placeholder consistently.
Switching call sites to the newly added constructors will be done in a separate patch.
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D110146
SystemZ adds the EXRL target instructions at the end of each file. This must
be done before debug info emission since that may end the text section, and
therefore this is now done in emitConstantPools() (instead of in
emitEndOfAsmFile).
Review: Ulrich Weigand
Differential Revision: https://reviews.llvm.org/D109513
FMA_W_CHAIN is used when lowering fdiv f32. Prefer to select it to fmac
if there are no source modifiers, just like we do for other mad/mac and
fma/fmac cases.
Differential Revision: https://reviews.llvm.org/D110074
v_fmac with source modifiers forces VOP3 encoding, but it is strictly
better to use the VOP3-only v_fma instead, because $dst and $src2 are
not tied so it gives the register allocator more freedom and avoids a
copy in some cases.
This is the same strategy we already use for v_mad vs v_mac and
v_fma_legacy vs v_fmac_legacy.
Differential Revision: https://reviews.llvm.org/D110070
Add a generic helper function that matches a constant splat. It has an option to
match a constant splat with undef (some elements can be undef, but not all).
Also add a utility function and matcher for G_FCONSTANT splats.
Differential Revision: https://reviews.llvm.org/D104410
The logic in howManyLessThans is fishy. It first checks the invariance of
RHS, and then uses OrigRHS as an argument for isLoopEntryGuardedByCond, which
is, strictly speaking, a different thing. We are seeing a very rare intermittent
failure of availability checks, and it looks like this precondition is
sometimes broken. Before we can figure out what's going on, add asserts
that all involved values that may possibly go to isLoopEntryGuardedByCond
are available at loop entry.
If either of these asserts fails (OrigRHS is the most likely suspect), it
means that the logic here is flawed.
This fixes PR51730, a heap-use-after-free bug in
replaceConditionalBranchesOnConstant().
With the attached reproducer we were left with a function looking
something like this after replaceAndRecursivelySimplify():
[...]
cont2.i:
br i1 %.not1.i, label %handler.type_mismatch3.i, label %cont4.i
handler.type_mismatch3.i:
%3 = phi i1 [ %2, %cont2.thread.i ], [ false, %cont2.i ]
unreachable
cont4.i:
unreachable
[...]
with both the branch instruction and PHI node being in the worklist. As
a result of replacing the branch instruction with an unconditional
branch, the PHI node in %handler.type_mismatch3.i would be removed. This
then resulted in a heap-use-after-free bug due to accessing that removed
PHI node in the next worklist iteration.
This is solved by using a value handle worklist. I am unsure if this
is the most idiomatic solution. Another solution could have been to
produce a worklist just containing the interesting branch instructions,
but I thought that it perhaps was a bit cleaner to keep all worklist
filtering in the loop that does the rewrites.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D109221
First (and biggest) change is to use "Killing/Dead" in place of "Later/Earlier" as the base for names in DSE. For example, [Maybe]DeadLoc is a location killed by the KillingI instruction. I believe such names are more descriptive and easier to understand than the current ones.
Second, there are inconsistencies in naming where different names are used for the same thing. Fixed that too.
Third, reordered the parameters of isPartialOverwrite, tryToMergePartialOverlappingStores and isOverwrite to make them consistent with each other. This greatly reduces potential mistakes.
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D106947
For artifacts excluding G_TRUNC/G_SEXT, which have IR counterparts, we don't
seem to have debug users of defs. However, in the legalizer we're always calling
MachineInstr::eraseFromParentAndMarkDBGValuesForRemoval() which is expensive.
In some rare cases, this contributes significantly to unreasonably long compile
times when we have lots of artifact combiner activity.
To verify this, I added asserts to that function when it actually replaced a debug
use operand with undef for these artifacts. On CTMark with both -O0 and -Os and
debug info enabled, I didn't see a single case where it triggered.
In my measurements I saw around a 0.5% geomean compile-time improvement on -g -O0
for AArch64 with this change.
Differential Revision: https://reviews.llvm.org/D109750
The implication logic for two values that are both negative or non-negative
says that it doesn't matter whether their predicate is signed or unsigned,
but it only flips unsigned into signed for further inference. This patch adds
support for flipping a signed predicate into unsigned as well.
Differential Revision: https://reviews.llvm.org/D109959
Reviewed By: nikic
In llvm, for non-alu32 mode, the stack alignment is 64bit so only one
64bit spill per 64bit slot. For alu32 mode, the stack alignment
is 32bit, so it is possible to have two 32bit spills per
64bit slot.
Currently, bpf kernel verifier does not preserve register states
for 32bit spills. That is, one 32bit register may hold a constant
value or a bounded range before spill. After reload from the
stack, the information is lost and sometimes this may cause
verifier failure. For 64bit register spill, the verifier
indeed tries to preserve the register state for reloading.
The current verifier can be modestly changed to handle one
32bit spill per 64bit stack slot with state-preserving reload.
Handling two 32bit spills per 64bit stack slot will require
substantial changes.
This patch changes stack alignment for alu32 to be 64bit.
This way, for any 64bit slot in alu32 mode, only one
32bit or 64bit register value can be saved. Together
with the previously-mentioned verifier enhancement, 32bit
spills can be handled with state preserving.
Note that llvm stack slot coalescing
seems to do only adjacent packing, which may leave some holes
in the stack. For example,
stack slot 8 <== 8 bytes
stack slot 4 <== 8 bytes with 4 byte hole
stack slot 8 <== 8 bytes
stack slot 4 <== 4 bytes
Differential Revision: https://reviews.llvm.org/D109073
When a case of a switch instruction is guaranteed to lead to
UB, we can safely break these edges and redirect those cases into a newly
created unreachable block. As a result, the CFG will become simpler and we can
remove some of the Phi inputs to make further analyses easier.
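A hand-written illustration (not from the patch's tests):
switch i32 %x, label %default [
i32 0, label %bb0
i32 1, label %bb1
]
bb1:                           ; guaranteed UB when reached
store i32 1, i32* null
br label %exit
; After the transform, case 1 branches to a fresh block instead:
;   i32 1, label %unreachable.bb
; unreachable.bb:
;   unreachable
; and phi nodes in %bb1 drop the edge from the switch.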
Patch by Dmitry Bakunevich!
Differential Revision: https://reviews.llvm.org/D109428
Reviewed By: lebedev.ri
For x86 Darwin, we have a stack checking feature which re-uses some of this
machinery around stack probing on Windows. Renaming this to be more appropriate
for a generic feature.
Differential Revision: https://reviews.llvm.org/D109993
We implement logic to convert a byte offset into a sequence of GEP
indices for that offset in a number of places. This patch adds a
DataLayout::getGEPIndicesForOffset() method, which implements the
core logic. I've updated SROA, ConstantFolding and InstCombine to
use it, and there are a few more places where it looks relevant.
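For example (an illustrative case, not from the patch): for a pointee type of [10 x { i32, i8 }] (8 bytes per element), a byte offset of 20 maps to the GEP indices 2, 1 (the i8 field of the third array element), with no remaining offset.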
Differential Revision: https://reviews.llvm.org/D110043
Some buildbots fail with:
> C:\a\llvm-clang-x86_64-expensive-checks-win\llvm-project\llvm\lib\IR\Verifier.cpp(4352): error C2678: binary '==': no operator found which takes a left-hand operand of type 'const llvm::MDOperand' (or there is no acceptable conversion)
Possibly the explicit MDOperand to Metadata* conversion will help?
This patch fixes the warning
InstructionTables.cpp:27:56: error: loop variable 'Resource' of type
'const std::pair<const uint64_t, ResourceUsage> &' (aka 'const
pair<const unsigned long, llvm::mca::ResourceUsage> &') binds to a
temporary constructed from type 'const std::pair<unsigned long,
llvm::mca::ResourceUsage> &' [-Werror,-Wrange-loop-construct]
Note that Resource is declared as:
SmallVector<std::pair<uint64_t, ResourceUsage>, 4> Resources;
without "const" for uint64_t.
For strided accesses the loop vectorizer seems to prefer creating a
vector induction variable with a start value of the form
<i32 0, i32 1, i32 2, ...>. This value will be incremented each
loop iteration by a splat constant equal to the length of the vector.
Within the loop, arithmetic using splat values will be done on this
vector induction variable to produce indices for a vector GEP.
This pass attempts to dig through the arithmetic back to the phi
to create a new scalar induction variable and a stride. We push
all of the arithmetic out of the loop by folding it into the start,
step, and stride values. Then we create a scalar GEP to use as the
base pointer for a strided load or store using the computed stride.
Loop strength reduce will run after this pass and can do some
cleanups to the scalar GEP and induction variable.
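A sketch of the kind of input pattern (hand-written, typed-pointer IR; names invented):
%vec.ind = phi <4 x i32> [ <i32 0, i32 1, i32 2, i32 3>, %entry ], [ %vec.ind.next, %loop ]
%offsets = mul <4 x i32> %vec.ind, <i32 5, i32 5, i32 5, i32 5>
%ptrs = getelementptr i8, i8* %base, <4 x i32> %offsets
%v = call <4 x i8> @llvm.masked.gather.v4i8.v4p0i8(<4 x i8*> %ptrs, i32 1, <4 x i1> %mask, <4 x i8> undef)
; The pass digs back through the arithmetic to %vec.ind, producing a
; scalar induction variable, a scalar GEP as the base pointer, and a
; strided load with stride 5.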
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D107790
Reworked the reordering algorithm. Originally, the compiler just tried to
detect the most common order in the reorderable nodes (loads, stores,
extractelements, extractvalues) and then fully rebuilt the graph in
the best order. This was not efficient, since it required extra
memory and time for building/rebuilding the tree and doubled the use of the
scheduling budget, which could lead to missed vectorization due to
exhausted scheduling resources.
The patch provides a two-step approach to the graph reordering problem. First, all
reordering is done in-place; it does not require tree
deleting/rebuilding, it just rotates the scalars/orders/reuses masks in
the graph nodes.
The first step (top-to-bottom) rotates the whole graph, similarly to the previous
implementation. The compiler counts the most used orders of
the graph nodes with the same vectorization factor and then rotates the
subgraph with the given vectorization factor to the most used order, if
it is not empty. It then repeats the same procedure for the subgraphs with
smaller vectorization factors. We can do this because we still need
to reshuffle a smaller subgraph when building operands for the graph
nodes with a larger vectorization factor; we can rotate just the subgraph,
not the whole graph.
The second step (bottom-to-top) scans through the leaves and tries to
detect the users of the leaves which can be reordered. If the leaves can
be reordered in the best fashion, they are reordered and their users are too.
In many cases this allows removing double shuffles to the same ordering of the
operands and just reordering the user operations instead. Plus, it moves
the final shuffles closer to the top of the graph and in many cases
allows removing extra shuffles because the same procedure is repeated
again and we can again merge some reordering masks and reorder user nodes
instead of the operands.
Also, the patch improves the cost model for gathering of loads, which improves
the x264 benchmark in some cases.
Gives about +2% on AVX512 + LTO (more expected for AVX/AVX2) for {625,525}x264,
+3% for 508.namd, and improves most of the other benchmarks.
Compile and link times are almost the same, though in some cases they
should be better (we're not doing an extra round of instruction scheduling
anymore), and we may vectorize more code in large basic blocks again
because of the saved scheduling budget.
Differential Revision: https://reviews.llvm.org/D105020
In ValueTracking.cpp we use a function called
computeKnownBitsFromOperator to determine the known bits of a value.
For the vscale intrinsic if the function contains the vscale_range
attribute we can use the maximum and minimum values of vscale to
determine some known zero and one bits. This should help to improve
code quality by allowing certain optimisations to take place.
Tests added here:
Transforms/InstCombine/icmp-vscale.ll
Differential Revision: https://reviews.llvm.org/D109883
Following D109516, this patch re-uses the new helper function for ELF relocation traversal in the RISCV backend.
Reviewed By: StephenFan
Differential Revision: https://reviews.llvm.org/D109522
Following D109516, this patch re-uses the new helper function for ELF relocation traversal in the x86-64 backend.
Reviewed By: StephenFan
Differential Revision: https://reviews.llvm.org/D109520
The vectorizer can sometimes make reverse shuffles from indices that
count down. In MVE, we don't have a 128bit rev instruction, but we can
select this to a VREV64 with some lane movs to swap the two halves.
Ideally this would use VMOVD's, but only gets as far as VMOVS's at the
moment.
Differential Revision: https://reviews.llvm.org/D69510
Add eraseInstr(s) utility functions. Before deleting an instruction,
they collect its user instructions; after deletion, they delete those
user instructions that became trivially dead.
This patch clears all dead instructions in existing legalizer MIR tests.
Differential Revision: https://reviews.llvm.org/D109154
In default pipelines the ModuleInlinerWrapperPass is adding the
InlinerPass to the pipeline twice, once due to MandatoryFirst (passing
true in the ctor) and then a second time with false as argument.
To make it possible to bisect and reduce opt test cases for this
part of the pipeline we need to be able to choose between the two
different variants of the InlinerPass when running opt. This patch is
changing 'inline' to a CGSCC_PASS_WITH_PARAMS in the PassRegistry,
making it possible to run opt with both -passes=cgscc(inline) and
-passes=cgscc(inline<only-mandatory>).
Reviewed By: aeubanks, mtrofin
Differential Revision: https://reviews.llvm.org/D109877
v8.4 says that normal loads/stores of 128 bits are single-copy atomic if
they're properly aligned (which all LLVM atomics are) so we no longer need to
do a full RMW operation to guarantee we got a clean read.
isPotentiallyReachable can use LoopInfo to return earlier. This patch
allows passing an optional LI to PointerMayBeCapturedBefore. Used in
D109844.
Reviewed By: nikic, asbirlea
Differential Revision: https://reviews.llvm.org/D109978
All IndVars transforms have a prerequisite requirement of LCSSA and LoopSimplify
form and rely on it. Added a test that shows that this actually holds.
This reverts commit 6fec6552f5.
The patch was reverted on the incorrect claim that this patch may break LCSSA form
when the loop is not in simplified form. All IndVars transforms ensure that
the loop is in simplified and LCSSA form, so if it wasn't broken before this
transform, it will also not be broken after it.
There is a piece of logic that uses the fact that signed and unsigned
versions of the same predicate are equivalent when both values are
non-negative. It's also true when both of them are negative.
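For example (hand-written): when %x and %y are both known negative, the sign bit is set in both, so
%c1 = icmp ult i32 %x, %y
%c2 = icmp slt i32 %x, %y
compute the same result.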
Differential Revision: https://reviews.llvm.org/D109957
Reviewed By: nikic
This should probably be rendered as "std::nullptr_t", but for now clang
uses the unqualified name (which is ambiguous with a possible user-defined
name in the global namespace), so match that here.
The SCEV-based salvaging for LSR can sometimes produce unnecessarily
verbose expressions. This patch adds logic to detect when the value to
be recovered and the induction variable differ by only a constant
offset. Then, the expression to derive the current iteration count can
be omitted from the dbg.value in favour of the offset.
Reviewed by: aprantl
Differential Revision: https://reviews.llvm.org/D109044
Both ports are required in most cases. Update the uops counts + port usage based off the most recent llvm-exegesis captures (PR36895) and what Intel AoM / Agner / InstLatX64 report as well.
Noticed while trying to improve fp costs for vectorization via the D103695 helper script.
We can combine unary shuffles into either of SHUFPS's inputs and adjust the shuffle mask accordingly.
Unlike general shuffle combining, we can be more aggressive and handle multiuse cases as we're not going to accidentally create additional shuffles.
Apparently this has no test coverage before D108382,
but D108382 itself shows a few regressions that this fixes.
It doesn't seem worthwhile breaking apart broadcasts,
assuming we want the broadcasted value to be present in several elements,
not just the 0'th one.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D108411
Split off from D108253.
Broadcast is simpler than any other shuffle we might produce
to do what we want to do here, so prefer it.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D108382
Both ports are required, for reg and mem variants - we can also use the WriteFComX class directly and remove the unnecessary InstRW overrides. Matches what Intel AoM / Agner / InstLatX64 report as well.
The new device runtime uses an internal variable to set debugging. This
variable was originally privately linked because every module will have
a copy of it. This caused problems with merging the device bitcode
library because it would get renamed and there was not a way to refer to
an external, private symbol. This changes the symbol to weak_odr so it
can be defined multiple times, but will not be renamed.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D109997
The AAExecutionDomain instance checks if a BB is executed by the main
thread only. Currently, this only checks the `__kmpc_kernel_init` call
for generic regions to indicate the path taken by the main thread. In
the new runtime, we want to be able to detect basic blocks even in SPMD
mode. For this we enable it to check thread-ID intrinsics being compared
to zero as well.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D109849
Nobody has complained about this, and the documentation for
LLVMContext::yield() states that LLVM is allowed to never call it.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D110008
A couple of tweaks to
1. allow more thinlto importing by excluding probe intrinsics from IR size in module summary
2. allow general default attributes (nofree nosync nounwind) for the pseudo probe intrinsic. Without those attributes, pseudo probes will be basically treated as unknown calls, which will in turn block their containing functions from being annotated with those attributes.
Reviewed By: wenlei
Differential Revision: https://reviews.llvm.org/D109976
We can use `OR` instead of `BLEND` if either the element we are not picking is zero (or masked away);
or the element we are picking overwhelms (e.g. it's all-ones) whatever the element we are not picking:
https://alive2.llvm.org/ce/z/RKejao
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D109726
The MMX pack/unpck shuffles don't need an override - they have the same behaviour as other shuffles (Port0 only).
The SSE pslldq/psrldq shuffles don't need an override - they have the same behaviour as other shuffles (Port0 only).
The SSE pshufb shuffles use 4 uops (+1 load).
Noticed the pslldq/psrldq issue while trying to improve reduction costs via the D103695 helper script, and fixed the others while reviewing. Confirmed with Intel AoM / Agner / InstLatX64.
The .machine directive can be used in assembly files to specify the ISA for
the instructions following it.
Review: Ulrich Weigand
Differential Revision: https://reviews.llvm.org/D109660
Rework getConstantVRegValWithLookThrough in order to make it clear whether we
are matching an integer/float constant only or any constant (default).
Add helper functions that get the DefVReg and APInt/APFloat from a constant instruction:
getIConstantVRegValWithLookThrough: integer constant, only G_CONSTANT
getFConstantVRegValWithLookThrough: float constant, only G_FCONSTANT
getAnyConstantVRegValWithLookThrough: either G_CONSTANT or G_FCONSTANT
Rename getConstantVRegVal and getConstantVRegSExtVal to getIConstantVRegVal
and getIConstantVRegSExtVal. These now only match G_CONSTANT as described
in comment.
Relevant matchers now return both DefVReg and APInt/APFloat.
Replace existing uses of getConstantVRegValWithLookThrough and
getConstantVRegVal with the new helper functions. An any-constant match is
only required in:
ConstantFoldBinOp: for a constant argument that was a bitcast of float to int
getAArch64VectorSplat: AArch64::G_DUP operands can be any constant
amdgpu select for G_BUILD_VECTOR_TRUNC: operands can be any constant
In other places, use the integer-only constant match.
Differential Revision: https://reviews.llvm.org/D104409
This introduces an option to allow specialising on the address of global
values. This option is off by default because it is likely not that profitable
to do so and needs more investigation. Before, we were specialising on addresses
and thus this changes the default behaviour.
Differential Revision: https://reviews.llvm.org/D109775
This commit fixes an order-of-initialization issue: If the default mmapper
object is destroyed while some global SectionMemoryManager is still using it
then calls to the mapper from ~SectionMemoryManager will fail. This issue was
causing failures when running the LLVM Kaleidoscope examples on windows.
Switching to a ManagedStatic solves the initialization order issue.
Patch by Justice Adams. Thanks Justice!
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D107087
Do not call `TryToShrinkGlobalToBoolean` for address spaces
that don't allow initializers. It inserts an initializer value
while shrinking to bool. Used the target hook introduced with
D109337 to skip this call for the restricted address spaces.
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D109823
Move the functionality in lld that handles writing of the LC_CODE_SIGNATURE load command and associated data section to a central reusable location.
This change is in preparation for another change that modifies llvm-objcopy to reproduce the LC_CODE_SIGNATURE load command and corresponding
data section to maintain the validity of signed macho object files passed through llvm-objcopy.
Reviewed By: #lld-macho, int3, oontvoo
Differential Revision: https://reviews.llvm.org/D109803
Finalization and deallocation actions are a key part of the upcoming
JITLinkMemoryManager redesign: They generalize the existing finalization and
deallocate concepts (basically "copy-and-mprotect", and "munmap") to include
support for arbitrary registration and deregistration of parts of JIT linked
code. This allows us to register and deregister eh-frames, TLV sections,
language metadata, etc. using regular memory management calls with no additional
IPC/RPC overhead, which should both improve JIT performance and simplify
interactions between ORC and the ORC runtime.
The SimpleExecutorMemoryManager class provides executor-side support for memory
management operations, including finalization and deallocation actions.
This support is being added in advance of the rest of the memory manager
redesign as it will simplify the introduction of an EPC based
RuntimeDyld::MemoryManager (since eh-frame registration/deregistration will be
expressible as actions). The new RuntimeDyld::MemoryManager will in turn allow
us to remove older remote allocators that are blocking the rest of the memory
manager changes.
Most PDB fields on disk are 32-bit but describe the file in terms of MSF
blocks, which are 4 kiB by default.
So PDB files can be a bit larger than 4 GiB, and much larger if you create them
with a block size > 4 kiB.
This is a first (necessary, but by far not sufficient) step towards
supporting such PDB files. Now we don't truncate in-memory file offsets (which
are in terms of bytes, not in terms of blocks).
No effective behavior change. lld-link will still error out if it were to
produce PDBs > 4 GiB.
Differential Revision: https://reviews.llvm.org/D109923
To make the IR easier to analyze, this pass makes some minor transformations.
After that, even if it doesn't decide to optimize anything, it can't report that
it changed nothing and preserved all the analyses.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D109855
Nonfunctional commit fixing several minor spelling errors in llvm/lib/Target/AMDGPU header files.
Testing workflow as a new contributor.
Differential Revision: https://reviews.llvm.org/D109733
Skip stack accesses unless requested, as the memory profiler runtime
does not currently look at or report accesses for these addresses.
Differential Revision: https://reviews.llvm.org/D109868
getMetadata() currently uses a weird API where it populates a
structure passed to it, and optionally merges into it. Instead,
we can return the AAMDNodes and provide a separate merge() API.
This makes usages more compact.
Differential Revision: https://reviews.llvm.org/D109852
SimplifyDemandedBits can turn srl into sra if the bits being shifted
in aren't demanded. This patch can recover the original sra in some cases.
I've renamed the tablegen class for detecting W users since the "overflowing operator"
term I originally borrowed from Operator.h does not include srl.
Reviewed By: luismarques
Differential Revision: https://reviews.llvm.org/D109162
In https://reviews.llvm.org/D100481, forceful inline of all non-kernel
functions using lds was disabled since the AMDGPULowerModuleLDS pass now handles
static lds. However, that pass does not handle extern lds, so non-kernel
functions using extern lds must still be inlined.
Reviewed By: hsmhsm, arsenm
Differential Revision: https://reviews.llvm.org/D109773
This makes some tests in vector-reductions-logical.ll more stable when
applying D108837.
The cost of branching is higher when vector ops are involved due to
potential SLP transformations.
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D108935
Introduce a new command-line flag `-swift-async-fp={auto|always|never}`
that controls how code generation sets the Swift extended async frame
info bit. There are three possibilities:
* `auto`: which determines how to set the bit based on deployment target, either
statically or dynamically via `swift_async_extendedFramePointerFlags`.
* `always`: the default, always set the bit statically, regardless of deployment
target.
* `never`: never set the bit, regardless of deployment target.
Patch by Doug Gregor <dgregor@apple.com>
Reviewed By: doug.gregor
Differential Revision: https://reviews.llvm.org/D109392
Change the asan-module pass into a MODULE_PASS_WITH_PARAMS in the
pass registry, and add a single parameter called 'kernel' that
can be set instead of having a special pass name 'kasan-module'
to trigger that special pass config.
Main reason is to make sure that we have a unique mapping from
ClassName to PassName in the new passmanager framework, making it
possible to correctly identify the passes when dealing with options
such as -print-after and -print-pipeline-passes.
This is a follow-up to D105006 and D105007.
Split ThreadSanitizerPass into ThreadSanitizerPass (as a function
pass) and ModuleThreadSanitizerPass (as a module pass).
Main reason is to make sure that we have a unique mapping from
ClassName to PassName in the new passmanager framework, making it
possible to correctly identify the passes when dealing with options
such as -print-after and -print-pipeline-passes.
This is a follow-up to D105006 and D105007.
Split MemorySanitizerPass into MemorySanitizerPass (as a function
pass) and ModuleMemorySanitizerPass (as a module pass).
Main reason is to make sure that we have a unique mapping from
ClassName to PassName in the new passmanager framework, making it
possible to correctly identify the passes when dealing with options
such as -print-after and -print-pipeline-passes.
This is a follow-up to D105006 and D105007.
Recently a vulnerability issue was found in the implementation of the VLLDM
instruction in the Arm Cortex-M33, Cortex-M35P and Cortex-M55. If the
VLLDM instruction is abandoned due to an exception when it is partially
completed, it is possible for a subsequent non-secure handler to access
and modify the partially restored register values. This vulnerability is
identified as CVE-2021-35465.
The mitigation sequence varies between v8-m and v8.1-m as follows:
v8-m.main
---------
mrs r5, control
tst r5, #8 /* CONTROL_S.SFPA */
it ne
.inst.w 0xeeb00a40 /* vmovne s0, s0 */
1:
vlldm sp /* Lazy restore of d0-d16 and FPSCR. */
v8.1-m.main
-----------
vscclrm {vpr} /* Clear VPR. */
vlldm sp /* Lazy restore of d0-d16 and FPSCR. */
More details on
developer.arm.com/support/arm-security-updates/vlldm-instruction-security-vulnerability
Differential Revision: https://reviews.llvm.org/D109157
When expanding the non-secure call instruction we emit code
to clear the secure floating-point registers only if the targeted
architecture has floating-point support. The potential problem is
when the source code containing non-secure calls is built with
-mfloat-abi=soft but some other part of the system has been built
with -mfloat-abi=softfp (soft and softfp are compatible as they use
the same procedure calling standard). In this case floating-point
registers could leak to non-secure state, as the non-secure code won't
have cleared them on the assumption that no floating point was used.
Differential Revision: https://reviews.llvm.org/D109153
Alive2 for `{insert/extract}element`: https://alive2.llvm.org/ce/z/hwy_E-
Actually, not a single file of the test suite is touched by this change,
which means this is a rare pattern not generated by the frontend. But
it's worth having in place.
Differential Revision: https://reviews.llvm.org/D109236
If a loop count was initially represented by a 32b unsigned int in C
then the hardware-loop pass can recognise the loop guard and insert
the llvm.test.set.loop.iterations intrinsic. If this was instead an
unsigned short/char then clang inserts a zext instruction to expand
the loop count to an i32. This patch adds the necessary pattern
matching to enable the use of llvm.test.set.loop.iterations in those
cases.
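A sketch of the guard pattern now matched (hand-written; names invented):
entry:
%guard = icmp ne i16 %n, 0      ; guard written against the narrow type
%count = zext i16 %n to i32     ; zext inserted by clang
br i1 %guard, label %preheader, label %exit
; the pass can now look through the zext and emit
;   %ok = call i1 @llvm.test.set.loop.iterations.i32(i32 %count)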
Patch by: sherwin-dc
Differential Revision: https://reviews.llvm.org/D109631
A new field `elements` is added to '!DIImportedEntity', representing
a list of aliased entities.
This is needed to dump optimized debugging information where all names
in a module are imported, but a few names are imported with overriding
aliases.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D109343
The default register bank selection code for G_LOAD assumes that we ought to
use an FPR when the load is cast to a float/double.
For atomics, this isn't true; we should always use GPRs.
Without this patch, we crash in the following example:
https://godbolt.org/z/MThjas441
Also make the code a little more stylistically consistent while we're here.
Also test some other weird cast combinations as well.
Differential Revision: https://reviews.llvm.org/D109771
There's technically a difference between the logic used by
findIntrinsicID and MachineInstr::getIntrinsicID, but it shouldn't
be a meaningful difference here, with G_INTRINSIC instructions.
getIntrinsicID's "first non-def" logic should be correct for those.
The doc comment for isPredecessor says:
Returns true if \p DefMI precedes \p UseMI or they are the same
instruction.
And dominates relies on that behavior for its own:
Returns true if \p DefMI dominates \p UseMI. By definition an
instruction dominates itself.
Make both statements correct by fixing isPredecessor.
Found by inspection.
PassBuilder.cpp is the slowest file to compile in LLVM.
When trying to test changes to pipelines, it takes a long time to recompile.
This doesn't actually speed up building PassBuilder.cpp itself since most
of the time is spent in other large/duplicated functions caused by
PassRegistry.def.
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D109798
In particular, it couldn't handle cases where lookup table constant
expressions involved bitcasts. This does not seem to come up
frequently in C++, but comes up reasonably often in Rust via
`#[derive(Debug)]`.
Originally reported by pcwalton.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D109565
This reverts commit 4ac4e52189.
There are a couple of test failures, which need updates to the test cases.
Doing a clean revert and will recommit the change along with fixed
testcases.
Fix build bot failure in rG4ac4e521 caused by AssumeBundleBuilder
using the new API (getUniqueUndroppableUser).
We now continue using the existing API for AssumeBundleBuilder
(getSingleUndroppableUser).
Sorry for the noise here.
Tests-Run: failing testcase passes.
The API was removed in 4ac4e52189 in favor of
getUniqueUndroppableUser.
However, this caused a buildbot failure in AbstractCallSiteTest.cpp,
which uses the API and the AbstractCallSite class requires a "use"
rather than a user.
Retain the API so that the unittest compiles and passes.
This patch allows sinking an instruction which can have multiple uses in a
single user. We were previously over-restrictive by looking for exactly one use,
rather than one user.
Also, the API for retrieving undroppable user has been updated accordingly since
in both use cases (Attributor and InstCombine), we seem to care about the user,
rather than the use.
Reviewed-By: nikic
Differential Revision: https://reviews.llvm.org/D109700
I was wondering how instcombine does on the examples in D109236,
and we're missing a basic transform:
inselt (ext X), (ext Y), Index --> ext (inselt X, Y, Index)
https://alive2.llvm.org/ce/z/z2aBu9
Note that there are several possible extensions of this fold
(see TODO comments).
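Concretely, on IR (a hand-written instance of the fold above):
%xw = zext <2 x i8> %x to <2 x i32>
%yw = zext i8 %y to i32
%r = insertelement <2 x i32> %xw, i32 %yw, i64 0
; =>
%i = insertelement <2 x i8> %x, i8 %y, i64 0
%r = zext <2 x i8> %i to <2 x i32>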
Differential Revision: https://reviews.llvm.org/D109537
Add two levels of verification for MemorySSA: Fast and Full.
The defaults are kept the same. Full verification always occurs under
EXPENSIVE_CHECKS, but now it can also be requested in a specific pass for
debugging purposes.
Based off the worst case numbers generated by D103695, the AVX2/512 bit reversing/counting costs were higher than necessary (based off instruction counts instead of actual throughput).
Under some situations under Thumb1, we could be stuck in an infinite
loop recombining the same instruction. This puts a limit on that, not
combining SUBC with SUBE repeatedly.
This extends the reduction logic in the vectorizer to handle intrinsic
versions of min and max, both the floating point variants already
created by instcombine under fastmath and the integer variants from
D98152.
As a bonus this allows us to match a chain of min or max operations into
a single reduction, similar to how add/mul/etc work.
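For instance (an illustrative sketch, not taken from the patch's tests), a loop carrying
%red = phi i32 [ %start, %entry ], [ %red.next, %loop ]
%red.next = call i32 @llvm.smin.i32(i32 %red, i32 %val)
can now be recognized as a reduction and vectorized, with the final value computed by an @llvm.vector.reduce.smin call on the vector accumulator.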
Differential Revision: https://reviews.llvm.org/D109645
When searching for hidden identity shuffles (added at rG41146bfe82aecc79961c3de898cda02998172e4b), only peek through bitcasts to the source operand if it is a vector type as well.
This is a first step towards addressing the last remaining limitation of
the VPlan version of sinkScalarOperands: the legacy version can
partially sink operands. For example, if a GEP has uniform users outside
the sink target block, then the legacy version will sink all scalar
GEPs, other than the one for lane 0.
This patch works towards addressing this case in the VPlan version by
detecting such cases and duplicating the sink candidate. All users
outside of the sink target will be updated to use the uniform clone.
Note that this highlights an issue with VPValue naming. If we duplicate
a replicate recipe, they will share the same underlying IR value and
both VPValues will have the same name ir<%gep>.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D104254
Added '-print-pipeline-passes' printing of parameters for those passes
declared with *_WITH_PARAMS macro in PassRegistry.def.
Note that it only prints the parameters declared inside *_WITH_PARAMS, as
in a few cases there appear to be additional parameters that are not parsable.
The following passes are now covered (i.e. all of those with *_WITH_PARAMS in
PassRegistry.def).
LoopExtractorPass - loop-extract
HWAddressSanitizerPass - hwsan
EarlyCSEPass - early-cse
EntryExitInstrumenterPass - ee-instrument
LowerMatrixIntrinsicsPass - lower-matrix-intrinsics
LoopUnrollPass - loop-unroll
AddressSanitizerPass - asan
MemorySanitizerPass - msan
SimplifyCFGPass - simplifycfg
LoopVectorizePass - loop-vectorize
MergedLoadStoreMotionPass - mldst-motion
GVN - gvn
StackLifetimePrinterPass - print<stack-lifetime>
SimpleLoopUnswitchPass - simple-loop-unswitch
Differential Revision: https://reviews.llvm.org/D109310
The fmul is a canonicalizing operation, and fneg is not, so this would
break denormals that need flushing and also would not quiet signaling
NaNs. Fold to fsub instead, which is also canonicalizing.
Pseudo probe instrumentation was missing from the O0 build. It is needed in cases where some source files are built at O0 while the others are built in optimize mode.
Reviewed By: wenlei, wlei, wmi
Differential Revision: https://reviews.llvm.org/D109531
This simple heuristic uses the estimated live range length combined
with the number of registers in the class to switch which heuristic to
use. This was taking the raw number of registers in the class, even
though not all of them may be available. AMDGPU heavily relies on
dynamically reserved numbers of registers based on user attributes to
satisfy occupancy constraints, so the raw number is highly misleading.
There are still a few problems here. In the original testcase that
made me notice this, the live range size is incorrect after the
scheduler rearranges instructions, since the instructions don't have
the original InstrDist offsets. Additionally, I think it would be more
appropriate to use the number of disjointly allocatable registers in
the class. For the AMDGPU register tuples, there are a large number of
registers in each tuple class, but only a small fraction can actually
be allocated at the same time since they all overlap with each
other. It seems we do not have a query that corresponds to the number
of independently allocatable registers. Relatedly, I'm still debugging
some allocation failures where overlapping tuples seem to not be
handled correctly.
The test changes are mostly noise. There are a handful of x86 tests
that look like regressions with an additional spill, and a handful
that now avoid a spill. The worst looking regression is likely
test/Thumb2/mve-vld4.ll which introduces a few additional
spills. test/CodeGen/AMDGPU/soft-clause-exceeds-register-budget.ll
shows a massive improvement by completely eliminating a large number
of spills inside a loop.
Laying more foundation for full template name rebuilding - more complex
type printing benefits from an object to carry some state rather than
passing it around as parameters to every function.
This fixes a violation of the wrap flag rules introduced in c4048d8f. As noted in the original review, the NUW is legal to infer from the structure of the replacee, but a) there's no test coverage, and b) this should be done generically for all multiplies.
Differential Revision: https://reviews.llvm.org/D109782
Visibility options currently have limited support on AIX and may cause warnings or errors
depending on the build compiler used.
Reviewed By: ZarkoCA
Differential Revision: https://reviews.llvm.org/D108467
This reverts commit b7b4ebbcfa.
Reason: This breaks several code-size tests in Emscripten test suite
because this exports `emscripten_longjmp` for programs that didn't do it
before.
Use GCNHazardRecognizer in postra sched.
Updated tests for the new schedules.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D109536
Instead of discovering the sink-to block for each operand in the main
loop, the sink-to block can instead be directly queued with the
operands.
This simplifies processing in the main loop and is a NFC change split
off from D104254 as suggested there.
Ignore dbg instructions when collecting stack slot markers. This is
to make sure the coloring is invariant regarding the presence of dbg
instructions (even in cases when the dbg instructions might be
badly placed in the input).
Differential Revision: https://reviews.llvm.org/D109758
We previously had a limitation that TLS variables could not
be exported (and therefore could also not be imported). This
change removes that limitation.
Differential Revision: https://reviews.llvm.org/D108877
This patch exploits the prefixed load and store instructions utilizing the
refactored load/store implementation introduced in D93370.
Prefixed load and store instructions are emitted whenever we are loading or
storing a value with an offset that fits into a 34-bit signed immediate.
Patterns for the prefixed load and stores are added in this patch, as well as
the implementation that detects when we are loading and storing a value with an
offset that fits in 34-bits.
Differential Revision: https://reviews.llvm.org/D96075
SCEV does not look through non-header PHIs inside the loop. Such phis
can be analyzed by adding separate accesses for each incoming pointer
value.
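A sketch of the shape now handled (hand-written, typed-pointer IR):
merge:                        ; inside the loop, but not the header
%p = phi i32* [ %a, %then ], [ %b, %else ]
%v = load i32, i32* %p
; instead of bailing out on %p, separate accesses are added for the
; incoming pointers %a and %b when building the runtime checks.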
This results in 2 more loops vectorized in SPEC2000/186.crafty and
avoids regressions when sinking instructions before vectorizing.
Fixes PR50296, PR50288.
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D102266
The class of instructions that write to narrow top/bottom lanes only
demand the even or odd elements of the input lanes. Which means that a
pair of VMOVNT; VMOVNB demands no lanes from the original input. This
teaches instcombine about that via the target hooks available through
ARMTTIImpl.
Differential Revision: https://reviews.llvm.org/D109325
Previously the relocations pointed at the public user facing,
possibly external symbol.
When the function itself is weak, that symbol may be overridden at
link time, pointing at another strong implementation of the same
function instead. In that case, there's two conflicting pdata entries
pointing at the same address, and the wrong unwind info might end up
used.
Both GCC/binutils and MSVC produce pdata pointing at internal static
symbols. (GCC/binutils point at the .text section just as LLVM does
after this change, MSVC points at special label type symbols with the
type IMAGE_SYM_CLASS_LABEL and names like '$LN4'.)
This fixes unwinding through an overridden "operator new" with a
statically linked C++ library in MinGW mode. (Building libc++ with
-ffunction-sections and linking with --gc-sections might avoid the
issue too.)
This makes the produced object files a little less user friendly
to debug, but with other recent improvements for llvm-readobj, the
unwind info debugging experience should be pretty much the same.
Differential Revision: https://reviews.llvm.org/D109651
Summary: Add the SectionIndex field for symbols.
1: a symbol can reference a section by SectionName or SectionIndex.
2: a symbol can reference a section by both SectionName and SectionIndex.
3: if both Section and SectionIndex are specified, but the two values refer
to different sections, an error will be reported.
4: an invalid SectionIndex is allowed.
5: if a symbol references a non-existent section by SectionName, an error will be reported.
Reviewed By: jhenderson, Higuoxing
Differential Revision: https://reviews.llvm.org/D109566
Three unrelated changes:
1) Add a concat method as a convenience to help write bitvector
use cases in a nicer way.
2) Use LLVM_UNLIKELY as suggested by @xbolva00 in a previous patch.
3) Fix casing of some "slow" methods to follow naming standards.
Differential Revision: https://reviews.llvm.org/D109620
The PPCLoopInstrFormPrep pass can now prepare for load/store instructions
in a loop whose increment is not a constant integer.
Reviewed By: jsji
Differential Revision: https://reviews.llvm.org/D105872
The attributor can determine that some indirect calls do not require
special inputs. The special inputs will still be present in the ABI,
so we need to allocate the registers and pass undefs.
This is a small first step towards reorganization of the ORC libraries:
Declarations for types and function names (as strings) to be found in the
"ORC runtime bootstrap" set are moved into OrcRTBridge.h / OrcRTBridge.cpp.
The current implementation of the "ORC runtime bootstrap" functions is moved
into OrcRTBootstrap.h and OrcRTBootstrap.cpp. It is likely that this code will
eventually be moved into ORT-RT proper (in compiler RT).
The immediate goal of this change is to make these bootstrap functions usable
for clients other than SimpleRemoteEPC/SimpleRemoteEPCServer. The first planned
client is a new RuntimeDyld::MemoryManager that will run over EPC, which will
allow us to remove the old OrcRemoteTarget code.
The code was using getTypeStoreSize to calculate the difference
between consecutive objects. The calculation was incorrect due
to padding that is added between consecutive objects. The
getTypeAllocSize includes the padding amount. For example,
if the type is [19 x i8], the difference between consecutive
objects is 32 bytes, not 19 bytes.
A second case for getTypeAllocSize is needed when computing
the pointer values for the vector accesses. The calculation needs
to account for the padding as well.
Differential Revision: https://reviews.llvm.org/D109403
This is for Swift VFE support. In some vtable forms that Swift emits, the "base" of a relative pointer is not the global symbol itself directly, but a GEP into it -- so the pointer is relative to a particular field in the global. So getPointerAtOffset() needs to be able to see through the GEP and allow it in a SUB expression, to correctly recognize the offset as a vtable slot.
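An illustrative shape of such a vtable (hand-written, not Swift's actual layout):
@vt = constant { i32, i32 } { i32 0, i32 trunc (i64 sub (
i64 ptrtoint (void ()* @f to i64),
i64 ptrtoint (i32* getelementptr inbounds ({ i32, i32 }, { i32, i32 }* @vt, i32 0, i32 1) to i64)) to i32) }
; the SUB's base is a GEP into @vt (slot 1), not @vt itself, so
; getPointerAtOffset() must look through the GEP to find the slot.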
Differential Revision: https://reviews.llvm.org/D109169
This is a fix on top of D106525's Case 2. In D106525, in
`runEHOnFunction`, which handles Emscripten EH, we rethrow `longjmp` only
when the module has any usage of `setjmp` or `longjmp`. But now that Wasm
object files are linked using wasm-ld, the module this pass sees is not
the whole program, and even if this module does not contain any
`longjmp`, another file can contain it and can be linked with the
current module. This enables the rethrowing of longjmp whenever
Emscripten SjLj is enabled, regardless of whether it is used in this
module or not.
Reviewed By: dschuff
Differential Revision: https://reviews.llvm.org/D109670
APInt is used to describe a bit mask in a variety of value tracking and demanded bits/elts functions.
When traversing through dst/src operands, we have a number of places where these masks need to be widened/narrowed to translate through bitcasts, reductions etc. to a different type.
This patch adds an APIntOps::ScaleBitMask common helper (for example, scaling the 4-bit mask 0b0101 up to 8 bits yields 0b00110011), adds unit test coverage, and updates a number of cases to use the helper instead of their own implementation.
This came up on D109065 where we currently have to add yet another implementation of the same code.
Differential Revision: https://reviews.llvm.org/D109683
Patch by @dpalermo
The corrupt bitcode reported in https://bugs.llvm.org/show_bug.cgi?id=51647 seems to be a result of a later pass changing the workfn variable to addrspace(5) (thread private, on the stack). That seems reasonable for an alloca without an address space so it's an open question why that can crash the bitcode reader.
This change puts it in the thread private address space to begin with which means whatever misfired further down the pipeline does not break it. That matches the codegen from clang where stack variables are always annotated (5) and then addrspace cast prior to following use.
This therefore patches around whatever unsuccessfully moved the alloca variable to addrspace(5). That solves the problem of openmp opt producing code that crashes the bitcode reader. It should be possible to create a minimal repro for the underlying bug based on some handwritten IR that uses an alloca in a generic address space.
Reviewed By: ronlieb, jdoerfert, dpalermo-phab
Differential Revision: https://reviews.llvm.org/D109500
Moved the profitability checks for TryToSinkInstructions out
into a lambda function.
This will also allow us to easily add checks for bailing out if the
transform is not profitable.
Tests-Run: instCombine tests.
First step in reducing redundancy in `addRelocations()` implementations across ELF JITLink backends. The patch factors out common logic for ELF relocation traversal into the new helper function `forEachRelocation()` in the `ELFLinkGraphBuilder` base class. For now, this is applied to the AArch64 implementation. Others may follow soon.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D109516
When back-deploying Swift async code we can't always toggle the flag showing an
extended frame is present because it will confuse unwinders on systems released
before this feature. So in cases where the code might run there, we `or` in a
mask provided by the runtime (as an absolute symbol) telling us whether the
unwinders can cope.
When deploying only for newer OSs, we can still hard-code the bit-set for
greater efficiency.
38b098be66 limited scalarization to indices that are known non-poison.
For certain patterns that restrict the range of an index, we can insert
a freeze of the original value, to prevent propagation of poison.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D107580
This extends the custom lowering for extending loads on
fixed length vectors in SVE to support masked extending loads.
The existing tests for correct behaviour of masked extending loads
exhibit bad code generation due to the legalisation of i1 vectors.
They have been left as-is and new tests have been added that do not
exhibit this behaviour.
Differential Revision: https://reviews.llvm.org/D108200
After transformation, we assume the split condition of the pre-loop is always
true. In order to guarantee this, we need to check that the start value of the
split condition's AddRec satisfies the split condition.
Differential Revision: https://reviews.llvm.org/D109354
Summary: The patch adds support for yaml2obj customizing the string table.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D107421
This patch simply replaces any unsigned VFs with ElementCounts. It's
still NFC because at the moment epilogue vectorisation is disabled
when the main vector loop uses scalable vectors.
Differential Revision: https://reviews.llvm.org/D109364
Rename the prefix `FeatureExt*` to `FeatureStdExt*` for all sub-extensions, for consistency
Reviewed By: HsiangKai, asb
Differential Revision: https://reviews.llvm.org/D108187
Summary: Use std::move(E) to avoid `Program aborted due to an unhandled Error`
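A minimal sketch of the pattern (llvm::Error is move-only, and destroying an
Error that was never checked triggers exactly that abort):

  #include "llvm/Support/Error.h"
  #include <utility>
  using namespace llvm;

  static Error mayFail(bool Fail) {
    if (Fail)
      return createStringError(inconvertibleErrorCode(), "failed");
    return Error::success();
  }

  Error caller() {
    if (Error E = mayFail(true))
      return std::move(E); // propagate; letting E be destroyed unchecked
                           // aborts with an unhandled-Error message
    return Error::success();
  }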
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D109567
The packed variants of the instructions had been modelled the same as the scalar variants.
Reported during a run of llvm-exegesis on a cheap SLM box and matches what Agner / InstLatX64 report as well.
This patch uses the same approach as https://reviews.llvm.org/rGfe1fa43f16beac1506a2e73a9f7b3c81179744eb to handle thread-local variables.
It allocates 2 * pointerSize of space in the GOT to represent the thread key and data address. Instead of using the _tls_get_addr function, I wrote a custom function, __orc_rt_elfnix_tls_get_addr, to get the address of a thread-local variable. Currently this is a work-in-progress patch: only one TLS relocation, R_X86_64_TLSGD, is supported, and I need to add the corresponding test cases.
To allocate the TLS descriptor in the GOT, I need the edge kind information in PerGraphGOTAndPLTStubBuilder, so I added an `Edge::Kind K` argument to some functions in PerGraphGOTAndPLTStubBuilder.h. If this is not suitable, I can look for another way to solve the problem.
Differential Revision: https://reviews.llvm.org/D109293
Implement TODO in optimizeLoopExits. Now if we have proved that some loop exit
is taken on the 1st iteration, we make all branches in the following exiting
blocks always branch out of the loop, and their conditions are simplified away.
Patch by Dmitry Makogon!
Differential Revision: https://reviews.llvm.org/D108910
Reviewed By: lebedev.ri
This is a part of D108910.
We replace all loop PHIs with values coming from the loop preheader if
we have proved that the backedge is never taken.
Patch by Dmitry Makogon!
Differential Revision: https://reviews.llvm.org/D109596
Reviewed By: lebedev.ri
This patch fixes an error made in 2cc6f7c8e1. That patch
added a call site position, but there was a small error in the way
the presence of an unknown call edge was being propagated from call site
to function. This patch fixes that error, which was affecting some
AMDGPU tests.
This allows for a custom encoding to be emitted. It can also be
used with inline assembly to allow the custom instruction to be
register allocated like other instructions.
I initially started from SystemZ's implementation, but some of
the formats allow operands to be specified in multiple ways so I
had to add support for matching different operand class lists for
the same format. That implementation is a simplified version of
what is emitted by tablegen for regular instructions.
I've left out the compressed formats. And I haven't supported the
named opcodes like LUI or OP_IMM_32. Those can be added in future
patches.
Documentation can be found here https://sourceware.org/binutils/docs-2.37/as/RISC_002dV_002dFormats.html
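For illustration, a hedged inline-assembly sketch in the spirit of the change;
the opcode/funct values here are hypothetical custom-opcode encodings, not
something this patch defines:

  // Emit a custom R-type instruction via .insn. Because the operands use
  // register constraints, the compiler register-allocates them like any
  // other instruction.
  int customOp(int a, int b) {
    int r;
    asm(".insn r 0x0b, 0x0, 0x0, %0, %1, %2" // custom-0 opcode, funct3/7 = 0
        : "=r"(r)
        : "r"(a), "r"(b));
    return r;
  }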
Reviewed By: jrtc27, MaskRay
Differential Revision: https://reviews.llvm.org/D108602
This patch makes it possible to query callbase reachability
(whether a callbase can transitively reach a function Fn).
The patch moves the reachability query handling logic to a member class,
this class will have more users within the AA once we add other function
reachability queries.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D106402
This patch adds a call site position for AACallEdges, this
allows us to ask questions about which functions a specific
`CallBase` might call.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D106208
Users of VPValues are managed in a vector, so we need to be more
careful when iterating over users while updating them. For now, just
copy them.
Fixes 51798.
Rather than inspecting the pointer element type, use the access
type of the load/store/atomicrmw/cmpxchg.
In the process of doing this, simplify the logic by storing the
address + type in MemoryUses, rather than an Instruction + Operand
pair (which was then used to fetch the address).
https://alive2.llvm.org/ce/z/_AivbM
This case seems clear since we can reduce instruction count
and avoid an intermediate type change, but we might want to
use mask-and-compare for other sequences.
Currently, we can generate more instructions on some related
patterns by trying to use bit-hacks instead of mask+cmp, so
something is not behaving as expected.
Bootstrap symbols are symbols whose addresses may be required to bootstrap
the rest of the JIT. The bootstrap symbols map generalizes the existing
JITDispatchInfo class to provide an arbitrary map of symbol names to addresses.
The JITDispatchInfo class will be replaced by bootstrap symbols with reserved
names in upcoming commits.
This reapplies bb27e45643 (SimpleRemoteEPC
support) and 2269a941a4 (#include <mutex>
fix) with further fixes to support building with LLVM_ENABLE_THREADS=Off.
Pass the access type to getPtrStride(), so it is not determined
from the pointer element type. Many cases still fetch the element
type at a higher level though, so this only partially addresses
the issue.
This is a translation of the existing code to handle the intrinsics
and another step towards D98152.
https://alive2.llvm.org/ce/z/jA7eBC
This pattern is already handled by underlying folds if there are
less uses, so the minimal tests in this case have extra uses.
The larger cmyk tests show the motivation - when combined with
other folds, we invert a larger sequence and eliminate 'not' ops.
Like the shuffle, we should treat the select as delayed so that
all constants can be resolved.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D109053
This reverts commit 5629afea91 ("[ORC] Add missing
include."), and bb27e45643 ("[ORC] Add
SimpleRemoteEPC: ExecutorProcessControl over SPS + abstract transport.").
The SimpleRemoteEPC patch currently assumes availability of threads, and needs
to be rewritten with LLVM_ENABLE_THREADS guards.
SimpleRemoteEPC is an ExecutorProcessControl implementation (with corresponding
new server class) that uses ORC SimplePackedSerialization (SPS) to serialize and
deserialize EPC-messages to/from byte-buffers. The byte-buffers are sent and
received via a new SimpleRemoteEPCTransport interface that can be implemented to
run SimpleRemoteEPC over whatever underlying transport system (IPC, RPC, network
sockets, etc.) best suits your use case.
The SimpleRemoteEPCServer class provides executor-side support. It uses a
customizable SimpleRemoteEPCServer::Dispatcher object to dispatch wrapper
function calls to prevent the RPC thread from being blocked (a problem in some
earlier remote-JIT server implementations). Almost all functionality (beyond the
bare basics needed to bootstrap) is implemented as wrapper functions to keep the
implementation simple and uniform.
Compared to previous remote JIT utilities (OrcRemoteTarget*,
OrcRPCExecutorProcessControl), more consideration has been given to
disconnection and error handling behavior: Graceful disconnection is now always
initiated by the ORC side of the connection, and failure at either end (or in
the transport) will result in Errors being delivered to both ends to enable
controlled tear-down of the JIT and Executor (in the Executor's case this means
"as controlled as the JIT'd code allows").
The introduction of SimpleRemoteEPC will allow us to remove other remote-JIT
support from ORC (including the legacy OrcRemoteTarget* code used by lli, and
the OrcRPCExecutorProcessControl and OrcRPCEPCServer classes), and then remove
ORC RPC itself.
The llvm-jitlink and llvm-jitlink-executor tools have been updated to use
SimpleRemoteEPC over file descriptors. Future commits will move lli and other
tools and example code to this system, and remove ORC RPC.
When we have full-fp16 support, we should (manually select) s16 G_FCONSTANT to
a constant pool load.
Add support for that to `emitLoadFromConstantPool` + the existing constant
selection code.
Also tidy up the constant selection code a little. There were some out-of-date
comments + some dead code.
Differential Revision: https://reviews.llvm.org/D108957
Refactors copyBlockContentToWorkingMemory to use offsets rather than direct
pointers to working memory. This simplifies the problem of maintaining
alignments between blocks in working memory, without requiring the working
memory itself to be aligned.
This patch introduces the flags `-fopenmp-target-debug` and
`-fopenmp-target-debug=` to set the value of a global in the device.
This will be used to enable or disable debugging features statically in
the device runtime library.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D109544
We perform runtime folding, but do not currently emit remarks when it is
performed. This is because it comes from the runtime library and is
beyond the user's control. However, people may still wish to view this
and similar information easily, so we make this behaviour available
behind a special flag that enables verbose remarks.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D109627
This reapplies commit 7dbba3376f, or, put
differently, this reverts commit d9a8d20827.
The test now requires the amdgpu and nvptx backends explicitly, as it
won't work properly without them.
This patch adds functionality to check assumption attributes on call
sites as well.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D109376
This leads to a statistically significant improvement when using -hwasan-instrument-stack=0: https://bit.ly/3AZUIKI.
When enabling stack instrumentation, the data appears to get better, but not statistically significantly so. This is consistent
with the very moderate improvements I have seen for stack safety otherwise, so I expect it to improve once the underlying
issue there is resolved.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D108457
When combining 'and' of an unsigned unpack and shuffle instruction,
bail early if shuffle is not constructed from a constant integer.
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D109556
Add `udiv` and `urem` instructions to the DAG post-dominated by `trunc`,
allowing TruncInstCombine to reduce bitwidth of expressions containing these
instructions. It is sufficient to require that all truncated bits of both
operands are zeros: https://alive2.llvm.org/ce/z/yiithn
(`urem` case is identical).
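A hand-written C++ illustration of the precondition (not the alive2 proof
itself): when every bit the trunc discards is already zero in both operands,
the division can be performed in the narrow type.

  #include <cstdint>

  // Both forms compute the same value whenever a and b fit in 8 bits, which
  // is exactly the "all truncated bits are zero" precondition.
  uint8_t wideThenTrunc(uint32_t a, uint32_t b) {
    return (uint8_t)((a & 0xFF) / (b & 0xFF));
  }
  uint8_t narrowDiv(uint32_t a, uint32_t b) {
    return (uint8_t)(a & 0xFF) / (uint8_t)(b & 0xFF);
  }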
Differential Revision: https://reviews.llvm.org/D109515
Not all address spaces support initializers for globals and we can
therefore not set them without checking if they are allowed. This
patch adds a hook into TTI to check if an AS allows non-undef
initializers. By default we disable it for all but address space 0; the
NVPTX and AMDGPU targets allow all but address space 3.
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D109337
When we guard side-effects as part of SPMDzation we do it for
consecutive instructions that need guarding. This patch will try to
reorder guarded side-effects in a block to decrease the number of
guarded regions we need. It does not use any smarts, e.g., alias
analysis, to move side-effects over non-interfering reads. Instead,
it only moves side-effects downwards to the next guarded side-effect
if there was nothing in between that could possibly have been affected.
Reviewed By: ggeorgakoudis
Differential Revision: https://reviews.llvm.org/D109070
Always use the byval/inalloca/preallocated type (which is required
nowadays), don't fall back on the pointer element type.
This requires adding Function::getParamPreallocatedType() to
mirror the CallBase API, so that the templated code can work with
both.
LICM may have pulled out a splat, but with .vx instructions we
can fold it into an operation.
This patch enables CGP to reverse the LICM transform and move the
splat back into the loop.
I've started with the commutable integer operations and shifts, but we can
extend this with more operations in future patches.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D109394
The re-use of this struct across iterations of the loop was causing
fields (specifically Name) to be incorrectly shared between multiple
sections.
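A hedged illustration of the bug pattern (the struct and field names here are
hypothetical, not the actual code):

  #include <string>
  #include <vector>

  struct Section { std::string Name; };

  std::vector<Section> collect(const std::vector<std::string> &Names) {
    std::vector<Section> Out;
    Section S; // BUG: reused across iterations, so a field an iteration does
               // not overwrite silently keeps the previous iteration's value.
    for (const std::string &N : Names) {
      if (!N.empty())
        S.Name = N; // records with empty names inherit the prior Name
      Out.push_back(S);
    }
    return Out;
  }

The fix is to declare the struct inside the loop body so every iteration
starts from a freshly constructed value.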
Differential Revision: https://reviews.llvm.org/D108984
Currently, opaque pointers are supported in two forms: The
-force-opaque-pointers mode, where all pointers are opaque and
typed pointers do not exist. And as a simple ptr type that can
coexist with typed pointers.
This patch removes support for the mixed mode. You either get
typed pointers, or you get opaque pointers, but not both. In the
(current) default mode, using ptr is forbidden. In -opaque-pointers
mode, all pointers are opaque.
The motivation here is that the mixed mode introduces additional
issues that don't exist in fully opaque mode. D105155 is an example
of a design problem. Looking at D109259, it would probably need
additional work to support mixed mode (e.g. to generate GEPs for
typed base but opaque result). Mixed mode will also end up
inserting many casts between i8* and ptr, which would require
significant additional work to consistently avoid.
I don't think the mixed mode is particularly valuable, as it
doesn't align with our end goal. The only thing I've found it to
be moderately useful for is adding some opaque pointer tests in
between typed pointer tests, but I think we can live without that.
Differential Revision: https://reviews.llvm.org/D109290
This patch implements legalization of EXTRACT_SUBVECTOR for the case
where the result needs promoting, and the input type requires widening.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D109509
This patch implements legalization of EXTRACT_SUBVECTOR for the case
where the result needs promoting, and the input type is either legal
or requires splitting.
The idea is that the operation is broken down into simpler steps,
by repeatedly extracting a smaller subvector until the input vector
becomes legal or requires promotion.
Reviewed By: CarolineConcatto
Differential Revision: https://reviews.llvm.org/D109313
LoopFlatten wasn't triggering on this motivating case after IV widening:
  void foo(int *A, int N, int M) {
    for (int i = 0; i < N; ++i)
      for (int j = 0; j < M; ++j)
        f(A[i*M+j]);
  }
The reason was that the old induction phi nodes were getting in the way. These
narrow and dead induction phis are not always trivially dead, and having both
the narrow and wide IVs confused the analysis and caused it to bail. This adds
some extra bookkeeping for these old phis, so we can filter them out when
checks on phi nodes are performed. Other clean up passes will get rid of these
old phis and increment instructions.
As this was one of the motivating examples from the beginning, it was
surprising this wasn't triggering from C/C++ code. It looks like the IR and CFG
are just slightly different.
Differential Revision: https://reviews.llvm.org/D109309
For SVE, when scalarising the PHI instruction the whole vector part is
generated, as opposed to fixed-width vectors, where instructions are created
for each lane. However, in some cases the lane values may be needed
later (e.g for a load instruction) so we still need to calculate
these values to avoid extractelement being called on the vector part.
Differential Revision: https://reviews.llvm.org/D109445
This fixes LanaiTTIImpl::getIntImmCost to return valid costs for i128
(and wider) values. Previously any immediate wider than
64 bits would cause Lanai llc to crash.
A regression test is also added that exercises this functionality.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D107091
D78776 removed is{Call,Branch,UnconditionalBranch} guards in objdump
before calling MCInstrAnalysis::evaluateBranch. This is fine for other
architectures as they gracefully handle evaluateBranch being called on
non-branches. However, the Lanai MCInstrAnalysis implementation didn't,
and that change caused it to crash.
This inserts the same guards back into Lanai's evaluateBranch
implementation and adds a smoke test that exercises `llc | objdump` so
this kind of regression is hopefully caught next time.
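A hedged sketch of the reinstated guards (the surrounding class boilerplate
and the real Lanai target-address computation are elided):

  #include "llvm/MC/MCInstrAnalysis.h"
  using namespace llvm;

  class LanaiInstrAnalysisSketch : public MCInstrAnalysis {
  public:
    using MCInstrAnalysis::MCInstrAnalysis;
    bool evaluateBranch(const MCInst &Inst, uint64_t Addr, uint64_t Size,
                        uint64_t &Target) const override {
      // Gracefully reject non-branches instead of assuming branch-shaped
      // operands and crashing.
      if (!isCall(Inst) && !isBranch(Inst) && !isUnconditionalBranch(Inst))
        return false;
      return MCInstrAnalysis::evaluateBranch(Inst, Addr, Size, Target);
    }
  };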
Reviewed By: jpienaar, MaskRay
Differential Revision: https://reviews.llvm.org/D107593
When passing an empty strides map, there's nothing to replace for
replaceSymbolicStrideSCEV and it just returns the SCEV for Ptr. There
should be no need to call the function.
Reviewed By: SjoerdMeijer
Differential Revision: https://reviews.llvm.org/D109462
The MinSize attribute can be attached to both the callee and the caller
in the callsite. Function specialisation was already skipped for function
declarations (callees) with MinSize. This also skips specialisations for
the callsite when it has MinSize set.
Differential Revision: https://reviews.llvm.org/D109441
DWARFUnit::clearDIEs() uses std::vector::shrink_to_fit() to make the
capacity of DieArray match its size(). However, shrink_to_fit() is a
non-binding request, so the memory could still be reserved after
DWARFUnit::clearDIEs() is called. This patch releases the capacity when
DWARFUnit::clearDIEs() is requested, so the memory occupied by the DIEs
is actually freed.
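A minimal sketch of the standard idiom for actually releasing the storage
(whether the patch uses exactly this form is an assumption):

  #include <vector>

  // shrink_to_fit() is only a non-binding request; swapping with an empty
  // temporary guarantees the old buffer is deallocated when the temporary
  // is destroyed.
  template <typename T> void clearAndFree(std::vector<T> &V) {
    std::vector<T>().swap(V);
  }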
Differential Revision: https://reviews.llvm.org/D109499
Motivation: APInt not supporting zero-bit values leads to
a lot of special cases in various bits of code, particularly
when using APInt as a bit vector (where you want to start with
zero bits and then concat on more). This is particularly
challenging in the CIRCT project, where the absence of zero-bit
ConstantOp forces duplication of ops and makes instcombine-like
logic far more complicated.
Approach: zero-bit integers are weird. There are two reasonable
approaches: either make it illegal to do general arithmetic on
them (e.g. sign extends), or treat them as implicitly having
a zero value. This patch takes the conservative approach, which
enables their use in bitvector applications.
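A short sketch of the bit-vector use case this unlocks (APInt::concat is the
existing concatenation helper; starting from a zero-width value was
previously illegal):

  #include "llvm/ADT/APInt.h"
  #include "llvm/ADT/ArrayRef.h"
  using namespace llvm;

  // Build a wide value by concatenating chunks onto an initially 0-bit
  // APInt, with no special case needed for the empty starting point.
  APInt concatChunks(ArrayRef<APInt> Chunks) {
    APInt Bits(0, 0); // zero bits, implicitly zero-valued
    for (const APInt &C : Chunks)
      Bits = C.concat(Bits); // width grows by C.getBitWidth()
    return Bits;
  }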
Differential Revision: https://reviews.llvm.org/D109555
This reverts commit 98f4713122.
Without any (theoretical/practical) guarantee that all the allocas within
the *entry* basic block are clustered together at the beginning of the block,
this patch is doomed to fail. Hence reverting it.
Previously we had the following binary representation:
  struct btf_type { name, info, type }
  struct btf_tag { __u32 component_idx; }
If the tag points to a struct/union/var/func type, we will have
  kflag = 1, component_idx = 0
If the tag points to a struct/union member or func argument, we will have
  kflag = 0, component_idx = 0, ..., vlen - 1
The above makes the interface rather complex: both kflag and
component_idx are needed to determine a tag's legality and index.
This patch simplifies the interface by removing the kflag involvement:
  component_idx = (u32)-1          : tag pointing to a type
  component_idx = 0 ... vlen - 1   : tag pointing to a member or argument
kflag is always 0 and there is no need to check it.
Differential Revision: https://reviews.llvm.org/D109560
Drop the legacy version in AMDGPUAnnotateKernelFeatures. This has the
side effect of now respecting the linkage, and not changing externally
visible functions.
Previously we assumed all callable functions did not need any
implicitly passed inputs, and added attributes to functions to
indicate when they were necessary. Requiring attributes for
correctness is pretty ugly, and it makes supporting indirect and
external calls more complicated.
This inverts the direction of the attributes, so an undecorated
function is assumed to need all implicit inputs. This enables
AMDGPUAttributor by default to mark when functions are proven to not
need a given input. This strips the equivalent functionality from the
legacy AMDGPUAnnotateKernelFeatures pass.
However, AMDGPUAnnotateKernelFeatures is not fully removed at this
point although it should be in the future. It is still necessary for
the two hacky amdgpu-calls and amdgpu-stack-objects attributes, which
would be better served by a trivial analysis on the IR during
selection. Additionally, AMDGPUAnnotateKernelFeatures still
redundantly handles the uniform-work-group-size attribute to be
removed in a future commit.
At this point when not using -amdgpu-fixed-function-abi, we are still
modifying the ABI based on these newly negated attributes. In the
future, this option will be removed and the locations for implicit
inputs will always be fixed. We will then use the new attributes to
avoid passing the values when unnecessary.
It's possible in some cases for the LHS to be a pointer where the RHS is not. This isn't directly possible for an icmp, but the analysis mixes up operands of different icmp expressions in some cases.
This does not include a test case as the smallest reduced case we've managed is extremely fragile and unlikely to test anything meaningful in the long term.
Also add an assertion to getNotSCEV() to make tracking down this sort of issue a bit easier in the future.
Fixes https://bugs.llvm.org/show_bug.cgi?id=51787.
Differential Revision: https://reviews.llvm.org/D109546
This bit of code is incredibly suspicious. It allows fully unknown (but potentially negative) steps, but not steps known to be negative. The comment about scev flag inference is worrying, but also not correct to my knowledge.
At best, this might be covering up some related miscompile. However, there's no test in tree for it, the review history doesn't include obvious motivation, and the C++ example doesn't appear to give wrong results when hand translated to IR. I think it's time to remove this and see what falls out.
During review, there were concerns raised about the correctness of the corresponding signed case. This change was deliberately narrowed to the unsigned case, which has been audited and appears correct for negative values. We need to get back to the known-negative signed case, but that'll be a future patch if nothing falls out from this one.
Differential Revision: https://reviews.llvm.org/D104140
This patch updates the PC-Relative load and store patterns to utilize the
refactored load/store implementation introduced in D93370.
PC-Relative implementation has been added to PPCISelLowering.cpp, and also the
patterns in PPCInstrPrefix.td have been updated and no longer require AddedComplexity.
All existing test cases pass with this update.
Differential Revision: https://reviews.llvm.org/D95116
Soft-deprecate isNullValue/isAllOnesValue and update in-tree
callers. This matches the changes to the APInt interface from
D109483.
Reviewed By: lattner
Differential Revision: https://reviews.llvm.org/D109535
In general, howManyLessThans doesn't really want to work with pointers
at all; the result is an integer, and the operands of the icmp are
effectively integers. However, isLoopEntryGuardedByCond doesn't like
extra ptrtoint casts, so the arguments to isLoopEntryGuardedByCond need
to be computed without those casts.
Somehow, the values got mixed up with the recent howManyLessThans
improvements; fix the confused values, and add a better comment to
explain what's happening.
Differential Revision: https://reviews.llvm.org/D109465
This constrains the Mov* and similar pseudo instructions to take
GPR64common register classes rather than GPR64. GPR64 includes XZR,
which is invalid here, because this pseudo instruction expands
into an adrp/add pair sharing a destination register. XZR is invalid
on add, and attempting to encode it will instead increment the stack
pointer, causing crashes (downstream report at [1]). The test case
there reproduces on LLVM11, but I do not have a test case that
reaches this code path on main, since it is being masked by
improved dead code elimination introduced in D91513. Nevertheless,
this seems like a good thing to fix in case there are other cases
that dead code elimination doesn't clean up (e.g. if `optnone` is
used and the optimization is skipped).
I think it would be worth auditing uses of GPR64 in pseudo
instructions to see if there are any similar issues, but I do not
have a high enough view of the backend or knowledge of the
Aarch64 architecture to do this quickly.
[1] https://github.com/JuliaLang/julia/issues/39818
Reviewed By: t.p.northover
Differential Revision: https://reviews.llvm.org/D97435
Follow up to suggestions in D109103 via hans:
I think UnreachableDefault (or UnreachableFallthrough) would be a
better name now, since it doesn't just omit the range check, it also
omits the last bit test.
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D109455
This library function only exists in compiler-rt, not libgcc. So
this would fail to link unless we were linking with compiler-rt.
This is consistent with the recent removal of calls to mulodi4 on
32-bit targets like D108928.
I suppose maybe we could keep the libcalls for platforms like
Darwin that use compiler-rt exclusively?
Reviewed By: nickdesaulniers, MaskRay
Differential Revision: https://reviews.llvm.org/D109385
This renames the primary methods for creating a zero value to `getZero`
instead of `getNullValue` and renames predicates like `isAllOnesValue`
to simply `isAllOnes`. This achieves two things:
1) This starts standardizing predicates across the LLVM codebase,
following (in this case) ConstantInt. The word "Value" doesn't
convey anything of merit, and is missing in some of the other things.
2) Calling an integer "null" doesn't make any sense. The original sin
here is mine and I've regretted it for years. This moves us to calling
it "zero" instead, which is correct!
APInt is widely used and I don't think anyone is keen to take massive source
breakage on anything so core, at least not all in one go. As such, this
doesn't actually delete any entrypoints, it "soft deprecates" them with a
comment.
Included in this patch are changes to a bunch of the codebase, but there are
more. We should normalize SelectionDAG and other APIs as well, which would
make the API change more mechanical.
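A short sketch of the renames in use (both the old and new entry points
compile during the soft-deprecation period):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  void renamedAPIs() {
    APInt Z = APInt::getZero(32);         // preferred new name
    APInt ZOld = APInt::getNullValue(32); // soft-deprecated synonym
    bool A = APInt::getAllOnes(32).isAllOnes();              // new names
    bool AOld = APInt::getAllOnesValue(32).isAllOnesValue(); // deprecated
    (void)Z; (void)ZOld; (void)A; (void)AOld;
  }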
Differential Revision: https://reviews.llvm.org/D109483
This patch adds the class SystemZFrameLowering, a SystemZ-specific class
detailing the special registers used by calling conventions on the target.
SystemZELFFrameLowering and SystemZXPLINKFrameLowering implement this class
for ELF and XPLINK64 respectively. Previous functionality in SystemZFrameLowering
is moved to SystemZELFFrameLowering. SystemZXPLINKFrameLowering can then be
implemented in future patches.
Reviewed By: uweigand, Kai
Differential Revision: https://reviews.llvm.org/D108777
The motivating case is an infinite loop shown with a reduced test from:
https://llvm.org/PR51762
To solve this, I'm proposing we delete the most obviously broken part of this code.
The bug example shows a fundamental problem: we ask computeKnownBits if a transform
will be profitable, alter the code by creating new instructions, then rely on
computeKnownBits to return the same answer to actually eliminate instructions.
But there's no guarantee that the results will be the same between the 1st and 2nd
calls. In the infinite loop example, we get different answers, so we add
instructions that conflict with some other transform, and we're stuck.
There's at least one other problem visible in the test diff for
`@zext_or_masked_bit_test_uses`: the code doesn't check uses properly, so we can
end up with extra instructions created.
Last, it's not clear if this set of transforms actually improves analysis or
codegen. I spot-checked a few targets and don't see a clear win:
https://godbolt.org/z/x87EWovso
If we do see a regression from this change, codegen seems like the right place to
add a cmp -> bit-hack fold.
If this is too big of a step, we could limit the computeKnownBits calls by not
passing a context instruction and/or limiting the recursion. I checked that those
would stop the infinite loop for PR51762, but that won't guarantee that some other
example does not fall into the same loop.
Differential Revision: https://reviews.llvm.org/D109440
As discussed on the ticket, I'm intending to add additional 128->256 patterns when we have test coverage, but this addresses a known crash.
Differential Revision: https://reviews.llvm.org/D109434
Allow a variable number of directories, as permitted by the
specification. NumberOfRvaAndSize will default to 16 if not specified,
as in the past.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D108825