This patch replaces NoneType() and NoneType::None with None in
preparation for migration from llvm::Optional to std::optional.
In the std::optional world, we are not guaranteed to be able to
default-construct std::nullopt_t or peek what's inside it, so neither
NoneType() nor NoneType::None has a corresponding expression in the
std::optional world.
Once we consistently use None, we should even be able to replace the
contents of llvm/include/llvm/ADT/None.h with something like:
using NoneType = std::nullopt_t;
inline constexpr std::nullopt_t None = std::nullopt;
to ease the migration from llvm::Optional to std::optional.
Differential Revision: https://reviews.llvm.org/D138376
Previously, we used SHA-1 for hashing the CodeView type records.
SHA-1 in `GloballyHashedType::hashType()` shows up at the top of the profiles. Simply replacing it with BLAKE3 reduces the link time in our case from 15 sec to 13 sec. Only MSVC .OBJs are used in this case. As a reference, the resulting .PDB is approx. 2.1 GiB and the .EXE is approx. 250 MiB.
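For reference, a minimal sketch of what a BLAKE3-based record hash can look like using LLVM's `llvm/Support/BLAKE3.h` wrapper; the 8-byte truncation and the helper name are illustrative assumptions, not the exact code in this patch:
```
// Illustrative only: hash the serialized bytes of a type record with a
// truncated BLAKE3 digest (assumed 8-byte width here) instead of SHA-1.
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/BLAKE3.h"
#include <cstdint>
#include <cstring>

static uint64_t hashTypeRecordBlake3(llvm::ArrayRef<uint8_t> RecordData) {
  llvm::TruncatedBLAKE3<8> Hasher;
  Hasher.update(RecordData);
  auto Digest = Hasher.final(); // std::array<uint8_t, 8>
  uint64_t Hash = 0;
  std::memcpy(&Hash, Digest.data(), sizeof(Hash));
  return Hash;
}
```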
Differential Revision: https://reviews.llvm.org/D137101
If LogicNonShiftReg is the same as Shift1Base, and the shift1 constant is the same as the MatchInfo.Shift2 constant, CSEMIRBuilder will reuse the old shift1 when building shift2.
So, if we erase MatchInfo.Shift2 at the end, we actually remove the old shift1, which causes a crash later.
The solution is simply to erase it earlier to avoid the crash.
Fixes #58423
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D138187
The major change is falling through to ComputeKnownBits when we don't have an implementation of ComputeNumSignBits due to conservatism over scalable vectors. Right now, we're mostly conservative in the same cases, but this allows our results to improve when we change ComputeKnownBits without also needing to improve ComputeNumSignBits at the same time.
This is a continuation of the series of patches adding lane wise support for scalable vectors in various knownbit-esq routines.
The basic idea here is that we track a single lane for scalable vectors which corresponds to an unknown number of lanes at runtime. This is enough for us to perform lane wise reasoning on many arithmetic operations.
Differential Revision: https://reviews.llvm.org/D137141
This is the SelectionDAG equivalent of D136470, and is thus an alternate patch to D128159.
The basic idea here is that we track a single lane for scalable vectors which corresponds to an unknown number of lanes at runtime. This is enough for us to perform lane wise reasoning on many arithmetic operations.
This patch also includes an implementation for SPLAT_VECTOR as without it, the lane wise reasoning has no base case. The original patch which inspired this (D128159), also included STEP_VECTOR. I plan to do that as a separate patch.
Differential Revision: https://reviews.llvm.org/D137140
This now allows folding an AND of an anyext masked_load to a
zext_masked_load even if the masked load has multiple users. Doing so
eliminates some redundant ANDs/MOVs for certain AArch64 SVE code.
I'm not sure if there are any cases where doing this could negatively
affect the other users of the masked_load. Looking at other optimizations
of masked loads, most don't apply if the load is used more than once, so
it doesn't look like this would interfere.
Reviewed By: c-rhodes
Differential Revision: https://reviews.llvm.org/D137844
Otherwise, the spill position may point to a position before the
FrameSetup instructions. In that case, the spill instruction may store
to the caller's frame since the stack pointer has not yet been adjusted.
Fixes https://github.com/llvm/llvm-project/issues/58286
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D135693
Add vp.inttoptr & vp.ptrtoint support by lowering them into
vp.zext / vp.truncate in SelectionDAGBuilder.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D137169
A target can return if a misaligned access is 'fast' as defined
by the target or not. In reality there can be different levels
of 'fast' and 'slow'. This patch changes the boolean 'Fast'
argument of the allowsMisalignedMemoryAccesses family of functions
to an unsigned representing its speed.
A target can still define it as it wants, and the direct translation
of the current code uses 0 and 1 for the current false and true. This
makes the change an NFC.
A subsequent patch will start using an actual speed value in the
load/store vectorizer to check whether a vectorized access is going
to be not just fast, but no slower than before.
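Roughly, the hook's shape changes as sketched below (a declaration-level sketch of the TargetLowering hook; default values are approximate, not the exact diff):
```
// Excerpt-style sketch of the virtual hook in TargetLoweringBase.
// Before: the optional out-parameter can only say "fast" or "not fast".
virtual bool allowsMisalignedMemoryAccesses(
    EVT VT, unsigned AddrSpace = 0, Align Alignment = Align(1),
    MachineMemOperand::Flags Flags = MachineMemOperand::MONone,
    bool *Fast = nullptr) const;

// After: the out-parameter carries a target-defined speed; the direct
// translation of the old behavior uses 0 for false and 1 for true.
virtual bool allowsMisalignedMemoryAccesses(
    EVT VT, unsigned AddrSpace = 0, Align Alignment = Align(1),
    MachineMemOperand::Flags Flags = MachineMemOperand::MONone,
    unsigned *Fast = nullptr) const;
```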
Differential Revision: https://reviews.llvm.org/D124217
Previously it was only being done if shouldAlignPointerArgs() returned
true, which right now is only true for ARM targets.
Updating the argument alignment attributes of memcpy/memset intrinsics
if the underlying object has larger alignment can be beneficial even
when CGP didn't increase alignment (as can be seen from the test changes),
so invert the loop and if condition.
Differential Revision: https://reviews.llvm.org/D134281
This patch adds transformation of fmul+fadd/fsub chains to fused multiply
instructions:
* fmul+fadd->fmadd
* fmul+fsub->fmsub/fnmsub
We also will try to combine these instructions if the fmul has more than one use
and cannot be deleted. However, removing the dependence between fmul and fadd can
still be profitable, and we rely on machine combiner approximations of scheduling.
Differential Revision: https://reviews.llvm.org/D136764
Verify three cases of G_UNMERGE_VALUES separately:
1. Splitting a vector into subvectors (the converse of
G_CONCAT_VECTORS).
2. Splitting a vector into its elements (the converse of
G_BUILD_VECTOR).
3. Splitting a scalar into smaller scalars (the converse of
G_MERGE_VALUES).
Previously #1 allowed strange combinations like this:
%1:_(<2 x s16>),%2:_(<2 x s16>) = G_UNMERGE_VALUES %0(<2 x s32>)
This has been tightened up to check that the source and destination
element types match, and some MIR test cases updated accordingly.
Differential Revision: https://reviews.llvm.org/D111132
basic block section cases
The MachineBlockPlacement pass sets an alignment attribute on the loop
header MBB, and this attribute leads to an alignment directive when
emitting asm. In the basic block section case, the alignment directive
is placed before the section label, so the alignment is applied to the
predecessor of the loop header, which is not what we expect and
increases the code size (both inserting nops and setting section alignment).
Reviewed By: rahmanl
Differential Revision: https://reviews.llvm.org/D137535
D134260/D138107 exposed that the MachineCombiner was not copying
pcsections metadata where it should. This patch switches the MIBuild
methods to use MIMetadata that can copy the debug loc and pcsections at
the same time.
Differential Revision: https://reviews.llvm.org/D138112
Hide the underlying DbgValueInst by adding methods to extract the necessary
information and by adding a raw_ostream &operator<< overload to print it.
Remove the DebugLoc field as this is always the same as the DbgValueInst's
DebugLoc (see D136247).
Reviewed By: StephenTozer
Differential Revision: https://reviews.llvm.org/D136249
handleDebugValue has two DebugLoc parameters that appear to always take the
same value. Remove one of the duplicate parameters. See phabricator review for
more detail.
Reviewed By: StephenTozer
Differential Revision: https://reviews.llvm.org/D136247
AMDGPU legalizes i64 loads to loads of <2 x i32>, leaving the
i64 MMO with attached range metadata alone. The known bit width
was using the scalar element type, and asserting on a mismatch.
nearbyint has the property of executing without raising floating-point exceptions.
To avoid modifying fflags, the patch adds a new machine opcode,
PseudoVFROUND_NOEXCEPT_V, that expands to vfcvt.x.f.v and vfcvt.f.x.v between a pair
of frflags and fsflags instructions.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D137685
The patch also adds the function expandVPBSWAP to expand ISD::VP_BSWAP nodes.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D137928
We can reuse constants if we use SRL followed by AND and AND followed by SHL.
The same was done for bitreverse previously.
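As an illustration of the constant reuse (not the actual DAG expansion code), a 32-bit byte swap can be written so that the SRL/AND step and the AND/SHL step share the same mask:
```
// Illustrative C++: both ((X >> 8) & M) and ((X & M) << 8) use the same
// constant M = 0x00FF00FF, so only one mask has to be materialized.
#include <cassert>
#include <cstdint>

static uint32_t bswap32(uint32_t X) {
  const uint32_t M = 0x00FF00FFu;
  X = ((X >> 8) & M) | ((X & M) << 8); // swap bytes within each 16-bit half
  return (X >> 16) | (X << 16);        // swap the two halves
}

int main() {
  assert(bswap32(0x11223344u) == 0x44332211u);
  return 0;
}
```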
Differential Revision: https://reviews.llvm.org/D138045
This reverts commit e05ce03cfa.
Caused an ASan use-after-poison in 4 DebugInfo/AMDGPU/ tests.
Triggered when PEI::replaceFrameIndicesBackward called llvm::MachineInstr::getNumOperands.
The instruction G_IS_FPCLASS had an operand that represented the floating-point
semantics of its first operand. It allowed types that have the same length,
like `bfloat16` and `half`, to be distinguished. Unfortunately, this is
not sufficient, as other operations still cannot distinguish such types.
The solution to this problem must be more general, so this operand is now removed.
Differential Revision: https://reviews.llvm.org/D138004
Ignorable operands don't impact an instruction's behavior, so we can safely do CSE on
the instruction.
This is split from D130919. It has a big impact on some AMDGPU test cases.
For example in atomic_optimizations_raw_buffer.ll, when trying to check if the
following instruction can be CSEed
%37:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
The function isCallerPreservedOrConstPhysReg is called on the operand "implicit $exec".
This function is implemented as:
- return TRI.isCallerPreservedPhysReg(Reg, MF) ||
+ return TRI.isCallerPreservedPhysReg(Reg, MF) || TII.isIgnorableUse(MO) ||
(MRI.reservedRegsFrozen() && MRI.isConstantPhysReg(Reg));
Both TRI.isCallerPreservedPhysReg and MRI.isConstantPhysReg return false on this
operand, so isCallerPreservedOrConstPhysReg is also false, which causes LLVM to
fail to CSE this instruction.
With this patch, TII.isIgnorableUse returns true for the operand $exec, so
isCallerPreservedOrConstPhysReg also returns true, which allows this instruction to
be CSEed with the previous instruction
%14:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
This is where the results start to differ. AMDGPU's implementation of isIgnorableUse
is:
bool SIInstrInfo::isIgnorableUse(const MachineOperand &MO) const {
// Any implicit use of exec by VALU is not a real register read.
return MO.getReg() == AMDGPU::EXEC && MO.isImplicit() &&
isVALU(*MO.getParent()) && !resultDependsOnExec(*MO.getParent());
}
Since the operand $exec is not a real register read, my understanding is it's
reasonable to do CSE on such instructions.
Because more instructions are CSEed, fewer instructions are generated for
these tests.
Differential Revision: https://reviews.llvm.org/D137222
Adds the Complex Deinterleaving Pass implementing support for complex numbers in a target-independent manner, deferring to the TargetLowering for the given target to create a target-specific intrinsic.
Differential Revision: https://reviews.llvm.org/D114174
The TableGen implementation was using a homegrown implementation of
FunctionModRefInfo. This switches it to use MemoryEffects instead.
This makes the code simpler, and will allow exposing the full
representational power of MemoryEffects in the future. Among other
things, this will allow us to map IntrHasSideEffects to an
inaccessiblemem readwrite, rather than just ignoring it entirely
in most cases.
To avoid layering issues, this moves the ModRef.h header from IR
to Support, so that it can be included in the TableGen layer.
Differential Revision: https://reviews.llvm.org/D137641
When we match a pattern from m_GCst, the register type could be different from the original op's, so we can't replace the original op with the vreg directly.
This code creates a new constant with the original op's type and then replaces the original op.
Fixes #58906
Reviewed By: arsenm, aemerson
Differential Revision: https://reviews.llvm.org/D137778
While working on this code to support outputs from callbr along indirect
branches, I kept making these changes again and again. Precommit these.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D137445
This is required for the upcoming backward PEI::replaceFrameIndices version.
Both forward and backward versions will use the same code for debug instruction processing.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D137741
With https://reviews.llvm.org/D136627, we now have metrics for profile staleness based on profile statistics, and monitoring profile staleness in real time can help users quickly identify performance issues. In a production scenario, the build is usually incremental, and if we want real-time metrics, we would have to store/cache all the old objects' metrics somewhere and pull them at post-build time. To make this more convenient, this patch adds an option to persist them in the object binary, so the metrics can be reported right away by decoding the binary rather than polling previous stdout/stderr output from a cache system.
For the implementation, the statistics are first written into a new metadata section (llvm.stats) and then encoded into a special ELF `.llvm_stats` section. The section data is formatted as a list of key/value pairs so that future statistics can be added easily. This is guarded by a new switch (`-persist-profile-staleness`).
In terms of size overhead, the metrics are computed at the module level, so the overhead should be small; measured on one of our internal services, it costs less than 1MB for a 10GB+ binary.
Reviewed By: wenlei
Differential Revision: https://reviews.llvm.org/D136698
This was done as a test for D137302, and it makes sense to push these changes.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D137493
Caused by the legacy min/max combines (select + cmp) asking for legalizer info
in the prelegalizer (D135047 added the combine to all_combines).
The combine still does not work for AMDGPU since the destination opcode is custom,
not legal. The similar combine works on the DAG since it asks for legal or custom.
Differential Revision: https://reviews.llvm.org/D137274
As discussed in https://reviews.llvm.org/D136474, -fmessage-length
creates problems with reproducibility in the PDB files.
This patch just drops that argument when writing the PDB file.
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D137322
Use case:
- When block layout is visualized after MBP pass, the basic blocks are labeled in layout order; meanwhile blocks could be numbered in a different order.
- As a result, it's hard to map between the graph and pass output. With this option on, the basic blocks are renumbered in function layout order.
This option is only useful when a function is to be visualized (i.e., when view options are on), which makes it debugging-only.
Use https://godbolt.org/z/5WTW36bMr as an example:
- As MBP pass output (shown in godbolt output window), `func2` is in a basic block numbered `2` (`bb.2`), and `func1` is in a basic block numbered `3` (`bb.3`);
`bb.3` is a block with higher block frequency than `bb.2`, and `bb.3` is placed before `bb.2` in the function layout.
- Using [1] to get the dot graph (uploaded in [2]), the blocks are re-numbered.
- `func1` is in the 'if.end' block and labeled `1` in the visualized dot; `func2` is in the 'if.then' block and labeled `3` --> the labeled number and the bb number don't map to each other.
- [[ b5626ae975/llvm/lib/CodeGen/MachineBlockFrequencyInfo.cpp (L127) | DOTGraphTraits<MachineBlockFrequencyInfo *>::getNodeLabel ]] is where labeled numbers are based on the function layout number, and it is [[ a8d93783f3/llvm/include/llvm/Support/GraphWriter.h (L209) | called by graph writer ]].
So calling 'MachineFunction::RenumberBlocks' makes the labeled number (in the dot graph) and the block number (in the pass output) consistent with each other.
[1] `./bin/clang++ -O3 -S -mllvm -view-block-layout-with-bfi=count -mllvm -view-bfi-func-name=_Z9func_loopv -mllvm -print-after=block-placement -mllvm -filter-print-funcs=_Z9func_loopv test.c`
[2] {F25201785}
Reviewed By: davidxl
Differential Revision: https://reviews.llvm.org/D137467
The loop-carried dependency detection logic in isLoopCarriedDep relies
on the load and store using the same definition for the base register.
This misses the case of post-increment loads and stores whose base
registers are different PHIs initialized from the same initial value.
This commit extends the logic to accept the load and store having
different PHI base addresses, provided that they have the same initial value
when entering the loop and are incremented by the same amount in each
loop iteration.
Reviewed By: bcahoon
Differential Revision: https://reviews.llvm.org/D136463
The 32-bit floating-point atomic add instructions on AMDGPUs do not support a
"flat" or "generic" address space. So, if the address space cannot be determined
statically, the AMDGPU backend will fall back to a CAS loop (which does support
"flat" addressing). Instead, this patch emits runtime address-space checks to
allow native FP atomic add instructions for global and LDS memory (and non-atomic
FP add instructions for private/scratch memory).
In order to do that, this patch introduces a new interface function
`emitExpandAtomicRMW`. It is expected to be called when a common atomic expand
doesn't work for a specific target, such as the case we discussed here.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D129690
This switches everything to use the memory attribute proposed in
https://discourse.llvm.org/t/rfc-unify-memory-effect-attributes/65579.
The old argmemonly, inaccessiblememonly and inaccessiblemem_or_argmemonly
attributes are dropped. The readnone, readonly and writeonly attributes
are restricted to parameters only.
The old attributes are auto-upgraded both in bitcode and IR.
The bitcode upgrade is a policy requirement that has to be retained
indefinitely. The IR upgrade is mainly there so it's not necessary
to update all tests using memory attributes in this patch, which
is already large enough. We could drop that part after migrating
tests, or retain it longer term, to make it easier to import IR
from older LLVM versions.
High-level Function/CallBase APIs like doesNotAccessMemory() or
setDoesNotAccessMemory() are mapped transparently to the memory
attribute. Code that directly manipulates attributes (e.g. via
AttributeList) on the other hand needs to switch to working with
the memory attribute instead.
Differential Revision: https://reviews.llvm.org/D135780
mergeSExts iterates through ValueToSExts. Using a DenseMap results in an
unstable optimization path, so the output IR may vary even if the input
IR is the same.
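A common way to get a deterministic iteration order in LLVM, and an assumption about the general shape of the fix rather than its exact contents, is to keep the map in an insertion-ordered container such as llvm::MapVector:
```
// Sketch: MapVector iterates in insertion order, so walking ValueToSExts
// yields the same sequence on every run, whereas DenseMap's iteration order
// depends on hashing and can differ between runs or hosts.
#include "llvm/ADT/MapVector.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Value.h"

using ValueToSExtsTy =
    llvm::MapVector<llvm::Value *, llvm::SmallVector<llvm::Instruction *, 16>>;

static void visitInInsertionOrder(ValueToSExtsTy &ValueToSExts) {
  for (auto &Entry : ValueToSExts) {
    llvm::Value *Def = Entry.first; // visited in a stable, insertion order
    for (llvm::Instruction *SExt : Entry.second)
      (void)Def, (void)SExt;        // ... merge logic would go here
  }
}
```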
Reviewed By: wxiao3
Differential Revision: https://reviews.llvm.org/D137234
getCopyFromPartsVector doesn't work correctly when PartEVT and ValueVT have both a different element type and a different size.
This patch
1) removes the part of a comment that contains the incorrect assumption that the element types are the same
2) uses a bitcast when copying parts of an int vector to a float vector after the subvector extraction
Reviewed By: Peter, efriedma
Differential Revision: https://reviews.llvm.org/D136726
A new register class as well as a number of related subregisters are being added
to Future CPU. These registers are Dense Math Registers (DMR) and are 1024 bits
long. These registers can also be used in consecutive pairs, which results in a
register that is 2048 bits.
This patch also adds 7 new instructions that use these registers. More
instructions will be added in future patches.
Reviewed By: amyk, saghir
Differential Revision: https://reviews.llvm.org/D136366
D121929 happens to cause CodeGen/MIR/AArch64/mirnamer.mir to fail due
to a hash collision caused by adding two extra opcodes. The collision
is only in the top 19 bits of the hashed opcode so fix this by just
using the whole hash (in fixed width hex for consistency) instead of
the top 5 decimal digits.
Differential Revision: https://reviews.llvm.org/D137155
Currently MachineCSE forbids PRE when the instruction reads a physical
register. Relax this so that it's allowed when the value being read is
the same as what would be read in the place the instruction would be
hoisted to.
This is being done in preparation for adding FPCR handling to the
AArch64 backend, in order to prevent it from worsening the
generated code, but for targets that already have a similar register
it should improve things.
This patch affects code generation in several tests. The new code
looks better except for in Thumb2/LowOverheadLoops/memcall.ll where
we perform PRE but the LowOverheadLoops transformation then undoes
it. Also in AMDGPU/selectcc-opt.ll the CHECK makes things look worse,
but actually the function as a whole is better (as a MOV is PRE'd).
Differential Revision: https://reviews.llvm.org/D136675
Originally the loop did almost nothing, as the calculated location was
overwritten on the next iteration.
Differential Revision: https://reviews.llvm.org/D136937
We have similar code to translate a demanded elements mask for a shuffle's operands in multiple places - this patch adds a helper function to VectorUtils and updates a number of locations to use it directly.
Differential Revision: https://reviews.llvm.org/D136832
This was disabled to prevent regressions, which appear to be just occurring on AMDGPU (at least in our current lit tests), which I've addressed by adding AMDGPUTargetLowering::isDesirableToCommuteWithShift overrides.
Fixes #57872
Differential Revision: https://reviews.llvm.org/D136042
When legalizing a smax/smin/umax/umin op, if we know that the upper half is all sign bits, then we can perform the op on the lower half and then sign extend the result to the upper half.
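The equivalence is easy to sanity-check directly; the snippet below is an illustrative verification for smax (the unsigned variants work out the same way, since sign extension is also monotonic under unsigned comparison), not the legalizer code itself:
```
// When both i32 operands are sign-extensions of their low 16 bits, smax on
// the wide type equals smax on the low halves followed by a sign extend.
#include <cassert>
#include <cstdint>

static int32_t smax32(int32_t A, int32_t B) { return A > B ? A : B; }
static int16_t smax16(int16_t A, int16_t B) { return A > B ? A : B; }

int main() {
  for (int A = INT16_MIN; A <= INT16_MAX; A += 113)
    for (int B = INT16_MIN; B <= INT16_MAX; B += 127) {
      int32_t Wide = smax32(int32_t(int16_t(A)), int32_t(int16_t(B)));
      int32_t Narrow = int32_t(smax16(int16_t(A), int16_t(B))); // op on low half, then sext
      assert(Wide == Narrow);
    }
  return 0;
}
```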
Alive2: https://alive2.llvm.org/ce/z/rk8Rfd
Fixes #58630
This bug was introduced with D136713 / 54eeadcf44.
As an enhancement, we could cast operands to the expected type,
but we need to make sure that is done correctly (zext vs. sext).
It's also possible (but seems unlikely) that an operand can have
a type larger than the result type.
Fixes #58661
Currently pseudo probe encoding for a function is like:
- For the first probe, a relocation from it to its physical position in the code body
- For subsequent probes, an incremental offset from the current probe to the previous probe
The relocation could potentially cause relocation overflow during link time. I'm now replacing it with an offset from the first probe to the function start address.
A source function could be lowered into multiple binary functions due to outlining (e.g., coro-split). Since those binary functions have independent link-time layouts, to really avoid relocations from .pseudo_probe sections to .text sections, the replacement offset should really be the offset from the probe's enclosing binary function, rather than from the entry of the source function. This requires some changes to the previous section-based emission scheme, which now switches to being function-based. The assembly form of the pseudo probe directive is also changed correspondingly, i.e., to reflect the binary function name.
Most source functions end up with only one binary function. For those that don't, a sentinel probe is emitted for each of the binary functions with a different name from the source. The sentinel probe indicates the binary function name, to differentiate subsequent probes from those of a different binary function. For example, given the source function
```
Foo() {
…
Probe 1
…
Probe 2
}
```
If it is transformed into two binary functions:
```
Foo:
…
Foo.outlined:
…
```
The encoding for the two binary functions will be separate:
```
GUID of Foo
Probe 1
GUID of Foo
Sentinel probe of Foo.outlined
Probe 2
```
Then Probe 1 will be decoded against binary `Foo`'s address, and Probe 2 will be decoded against `Foo.outlined`'s. The sentinel probe of `Foo.outlined` makes sure there's no accidental relocation from `Foo.outlined`'s probes to `Foo`'s entry address.
On the BOLT side, to be minimally intrusive, the pseudo probe re-encoding sticks with the old encoding format. This is fine since, unlike the linker, BOLT processes the pseudo probe section as a whole and is free from relocation overflow issues.
The change is downwards compatible as long as there's no mixed use of the old encoding and the new encoding.
Reviewed By: wenlei, maksfb
Differential Revision: https://reviews.llvm.org/D135912
Differential Revision: https://reviews.llvm.org/D135914
Differential Revision: https://reviews.llvm.org/D136394
Currently MachineCSE forbids PRE when the instruction reads a physical
register. Relax this so that it's allowed when the value being read is
the same as what would be read in the place the instruction would be
hoisted to.
This is being done in preparation for adding FPCR handling to the
AArch64 backend, in order to prevent it from worsening the
generated code, but for targets that already have a similar register
it should improve things.
This patch affects code generation in several tests. The new code
looks better except for in Thumb2/LowOverheadLoops/memcall.ll where
we perform PRE but the LowOverheadLoops transformation then undoes
it. Also in AMDGPU/selectcc-opt.ll the CHECK makes things look worse,
but actually the function as a whole is better (as a MOV is PRE'd).
Differential Revision: https://reviews.llvm.org/D136675
Add statistics about how much memory is used, in variables, spills, and
unsafestack.
Issue #58168 describes some of the difficulty diagnosing stack size issues
identified by -Wframe-larger-than. D135488 addresses some of those issues by
giving developers a method to view the stack layout and thereby understand
where and how stack memory is used.
However, that solution requires an additional pass, whereas a short summary of
how the compiler has allocated stack memory can inform developers about where
they should investigate. When they need the complete context, D135488 can
provide them with a more comprehensive set of diagnostics.
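As a sketch of how such a summary is typically surfaced in LLVM (the counter names below are illustrative assumptions, not the ones added by this patch), the Statistic facility reports totals under -stats:
```
// Illustrative only: declaring and bumping per-category byte counters with
// llvm/ADT/Statistic.h; printed with -stats at the end of compilation.
#include "llvm/ADT/Statistic.h"

#define DEBUG_TYPE "stack-usage"

STATISTIC(NumVariableBytes, "Bytes allocated for local variables");
STATISTIC(NumSpillBytes, "Bytes allocated for spill slots");
STATISTIC(NumUnsafeStackBytes, "Bytes allocated on the unsafe stack");

static void recordStackObject(unsigned Size, bool IsSpill, bool IsUnsafeStack) {
  if (IsUnsafeStack)
    NumUnsafeStackBytes += Size;
  else if (IsSpill)
    NumSpillBytes += Size;
  else
    NumVariableBytes += Size;
}
```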
Reviewed By: nickdesaulniers
Differential Revision: https://reviews.llvm.org/D136484
scalar-to-vector (scalar binop (extractelt V, Idx), C) --> shuffle (vector binop V, C'), {Idx, -1, -1...}
We generally try to avoid ad-hoc vectorization in SDAG,
but the motivating case from issue #39482 escapes our
normal vectorization folds in IR. It seems like it should
always be a win to transform this pattern in cases where
we have the same vector type for input and output and the
target supports the vector operation. That avoids
transfers from vector to scalar and back.
In the x86 shift examples, we create the scalar-to-vector
node during legalization. I'm not sure if there's a more
general way to create the pattern for testing. (If so, I
could add tests for other targets.)
Differential Revision: https://reviews.llvm.org/D136713
This is not quite NFC because one of the users should
now avoid the DIVREM opcodes too, but I'm not sure
how to test that.
I used the same name as an analysis function in IR
in case we want to expand this to include other
operations.
Another potential use is proposed in D136713.
memcpy already clamps the dst stack alignment to NaturalStackAlignment if
hasStackRealignment is false. We should also clamp the stack alignment
for memset and memmove. If we don't clamp, SelectionDAG may first
do tail call optimization, which requires no stack realignment. Then
memmove/memset in the same function may be lowered to load/store with
larger alignment, leading PEI to emit stack realignment code, which
is simply not correct.
Reviewed By: LuoYuanke
Differential Revision: https://reviews.llvm.org/D136456
I'm unsure what the code does without the semicolon. On the surface it
seems like the assert below it would be considered part of the if
and thus the assert would only execute if DestReg is 0. But 0 isn't
considered a virtual register so the assert should fail.
Found by PVS Studio.
Reported https://pvs-studio.com/en/blog/posts/cpp/1003/ (N7)
Verify the LiveVariables analysis after a pass that claims to preserve
it, even if there are no further passes (apart from the verifier itself)
that would use the analysis.
Differential Revision: https://reviews.llvm.org/D129213
We can expect that the sequence of inserting-of-extracts-into-undef
will be successfully lowered back into widening of the source vector,
but it seems that at least for X86 mask vectors, we have a really hard time
recovering from inserting-into-zero.
I've looked into alternative fix injection points, and they are much more
involved, by the time of `LowerBUILD_VECTORvXi1()`/`LowerINSERT_VECTOR_ELT()`
the constants might be obscured, so it does not seem like we can easily
deal with this by lowering into bit math later on,
some other pieces are missing.
Instead, it seems like just clearing the padding away via an `AND`-mask
is at least not a worse choice. Why create a problem where there wasn't one?
Though yes, it is possible that there are cases where constants originate
from the source IR, so some other fix may still be needed.
Reviewed By: pengfei
Differential Revision: https://reviews.llvm.org/D136046
This reverts commit 65aaecca88.
There was an ordering problem in the calculation of the partial
remainder.
Original commit message:
If the divisor is even, we can first shift the dividend and divisor
right by the number of trailing zeros. Now the divisor is odd and we
can do the original algorithm to calculate a remainder. Then we shift
that remainder left by the number of trailing zeros and add the bits
that were shifted out of the dividend.
Differential Revision: https://reviews.llvm.org/D135541
If the upper half of an abs() is all sign bits, then we can perform the abs() using just the lower half and then zero extend.
I've limited the DAG combine to only sign_extend_inreg (and free truncate/zero_extend) to minimise any later promotion issues, but for legalization a similar fold can use ComputeNumSignBits to be more aggressive.
Alive2: https://alive2.llvm.org/ce/z/y32fS4
Fixes #43370
Differential Revision: https://reviews.llvm.org/D136559
If the divisor is even, we can first shift the dividend and divisor
right by the number of trailing zeros. Now the divisor is odd and we
can do the original algorithm to calculate a remainder. Then we shift
that remainder left by the number of trailing zeros and add the bits
that were shifted out of the dividend.
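The algorithm is easy to sanity-check in plain C++; the snippet below is an illustrative runtime version, not the constant-folded DAG expansion:
```
// For an even divisor D = DOdd << K: take the remainder of the shifted
// dividend by the odd divisor, shift it back, and re-attach the K low bits
// that were shifted out of the dividend.
#include <cassert>
#include <cstdint>

static uint32_t uremByEven(uint32_t X, uint32_t D) {
  unsigned K = __builtin_ctz(D);        // number of trailing zeros in D
  uint32_t Low = X & ((1u << K) - 1u);  // dividend bits shifted out
  uint32_t Rem = (X >> K) % (D >> K);   // remainder by the now-odd divisor
  return (Rem << K) | Low;
}

int main() {
  const uint32_t Divisors[] = {6u, 12u, 40u, 1000u};
  for (uint32_t D : Divisors)
    for (uint32_t X = 0; X < 2000000u; X += 7919u)
      assert(uremByEven(X, D) == X % D);
  return 0;
}
```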
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D135541
(sra X, BW-1) is either 0 or -1. So the multiply is a conditional
negate of Y.
This pattern shows up when type legalizing wide multiplies involving
a sign extended value.
Fixes PR57549.
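The identity behind the combine can be checked directly (an illustrative verification, not the combine itself):
```
// (X >> 31) is all-zeros or all-ones for a 32-bit X (arithmetic shift), so
// multiplying it by Y yields 0 or -Y, i.e. a conditional negate of Y.
#include <cassert>
#include <cstdint>

int main() {
  const int32_t Xs[] = {INT32_MIN, -100, -1, 0, 1, 7, INT32_MAX};
  const int32_t Ys[] = {-9, 0, 5, 1234};
  for (int32_t X : Xs)
    for (int32_t Y : Ys) {
      int32_t Mask = X >> 31;               // 0 or -1
      assert(Mask * Y == (X < 0 ? -Y : 0)); // conditional negate of Y
    }
  return 0;
}
```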
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D133399
The CanFoldNonConst doesn't work correctly with opaque constants
because getNode won't constant fold constants if one is opaque. Even
if the operation is AND/OR. This can lead to infinite loops.
This patch does the folding manually in the DAGCombine. Alternatively,
we could improve getNode but that seemed likely to have bigger impact
and possibly increase compile time for the additional checks. We wouldn't
want to directly constant fold because we need to preserve the opaque flag.
Fixes PR58511.
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D136472
When a CU attaches some ranges to a subprogram or inlined code, the CU should be that of the subprogram/inlined code that was emitted.
If not, those emitted ranges will use the incorrect CU base in `emitRangeList`.
A reproducible example:
When linking these two LLVM IRs, dsymutil will report "no mapping for range" or "inconsistent range data" warnings.
`foo.swift`
```swift
import AppKit.NSLayoutConstraint
public class Foo {
public var c: Int {
get {
Int(NSLayoutConstraint().constant)
}
set {
}
}
}
```
`main.swift`
```swift
// no mapping for range
let f: Foo! = nil
// inconsistent range data
//let l: Foo = Foo()
```
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D136039
This doesn't touch objc-arc-contract because that's in the codegen pipeline.
However, this does move its corresponding initialize function into initializeCodegen().
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D135041
Verify the LiveVariables analysis after a pass that claims to preserve
it, even if there are no further passes (apart from the verifier itself)
that would use the analysis.
Differential Revision: https://reviews.llvm.org/D129213
D129213 improves verification of LiveVariables, and caused
CodeGen/X86/statepoint-cmp-sunk-past-statepoint.ll to fail with:
*** Bad machine code: LiveVariables: Block should not be in AliveBlocks ***
after Two-Address instruction pass.
Fix it by clearing AliveBlocks for a register which is no longer used.
Differential Revision: https://reviews.llvm.org/D136445
This is an alternative to fix PR57939 for RISC-V. It definitely
can be argued that the stack temporaries for RISC-V are being created
with an unnecessarily large alignment. But ignoring the alignment
in MachineFrameInfo also seems bad.
Looking at the test update that goes with the current ID==0 check,
it was intended to exclude things like the NoAlloc stack ID. So I'm
not sure if scalable vectors are intentionally being excluded.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D135913
@foad was right - this isn't actually going to help with D136042 as much as hoped; we need a better AMDGPU-specific solution, as other targets are likely to make use of it.
Helps with some of the AMDGPU regressions identified in D136042 where we were losing signed BFE patterns after sinking shifts behind logic ops.
Differential Revision: https://reviews.llvm.org/D136081
Implement CanLowerReturn and associated CallingConv changes for SPARC/SPARC64.
In particular, for SPARC64 there are new `RetCC_Sparc64_*` functions that handle the return side of the calling convention.
They use the same analysis as the `CC_Sparc64_*` family of functions, but fail if the return value doesn't fit into the return registers.
This makes calls to functions with big return values get converted to sret functions as expected, instead of crashing LLVM.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D132465
The code incorrectly checked for CTLZ_ZERO_UNDEF instead of
CTTZ_ZERO_UNDEF.
While I was there I flipped the condition into an early out.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D136010
This patch renames FuncletPadInst::getNumArgOperands to arg_size for
consistency with CallBase, where getNumArgOperands was removed in
favor of arg_size in commit 3e1c787b31
Differential Revision: https://reviews.llvm.org/D136048
Instead of checking that an operand is constant/opaque before calling getNode() and then checking that the result is a constant, just use FoldConstantArithmetic which will just early-out if the operands are not constant foldable.
In https://github.com/llvm/llvm-project/issues/57452, we found that IRTranslator is translating `i1 true` into `i32 -1`.
This is because IRTranslator uses SExt for indices.
In this fix, we change the expected behavior of extractelement's index, moving from SExt to ZExt.
This change includes documentation, SelectionDAG, and IRTranslator changes.
We also included a test for AMDGPU and updated tests for AArch64, Mips, PowerPC, RISCV, VE, WebAssembly, and X86.
This patch fixes issue #57452.
Differential Revision: https://reviews.llvm.org/D132978
The crash case comes from #58350. It has two stores: one store is of type f32 and the other of v1f32.
When we try to merge these two stores on v1f32, the memVT is a vector type, so the old code will use ISD::EXTRACT_SUBVECTOR for the f32 type as well, and then the compiler crashes.
So this patch inserts a build_vector for the f32 store to generate v1f32 as well when memVT is v1f32.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D135954
Functions with `aarch64_sme_pstatesm_body` will emit a SMSTART at the start
of the function, and a SMSTOP at the end of the function, such that all
operations use the right value for vscale.
Because the placement of these nodes is critically important (i.e. no
vscale-dependent operations should be done before SMSTART has been issued),
we require glueing the CopyFromReg to the Entry node such that we can
insert the SMSTART as part of that glued chain.
More details about the SME attributes and design can be found
in D131562.
Reviewed By: aemerson
Differential Revision: https://reviews.llvm.org/D131582
Prior to inserting an unconditional branch from X to its
fall through basic block, check if X has any terminators to
avoid inserting additional branches.
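A minimal sketch of the guard, assuming the usual TargetInstrInfo helpers (names and surrounding context are illustrative, not the patch itself):
```
// Only fall back to inserting an unconditional branch when X doesn't already
// end in a terminator; otherwise we'd stack extra branches onto the block.
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/TargetInstrInfo.h"
#include "llvm/IR/DebugLoc.h"

static void ensureFallThroughBranch(llvm::MachineBasicBlock &X,
                                    llvm::MachineBasicBlock &FallThrough,
                                    const llvm::TargetInstrInfo &TII) {
  if (!X.empty() && X.back().isTerminator())
    return; // X already has terminators
  TII.insertUnconditionalBranch(X, &FallThrough, llvm::DebugLoc());
}
```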
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D134557
This was scanning through def operands looking for the
symbol operand. This is pointless because the symbol is always
the first operand as enforced by the verifier, and all operands
are implicit.
In Linux PIC model, there are 4 cases about value/label addressing:
Case 1: Function call or Label jmp inside the module.
Case 2: Data access (such as global variable, static variable) inside the module.
Case 3: Function call or Label jmp outside the module.
Case 4: Data access (such as global variable) outside the module.
Because the current LLVM inline asm architecture is designed not to "recognize" the
asm code, it is quite troublesome for us to treat memory addressing differently for the
same value/address used in different instructions.
For example, in the PIC model, a function call may go through the PLT or be directly
PC-relative, but lea/mov of a function address may use the GOT.
This patch fixes/refines case 1 and case 2 in inline asm.
Because inline asm currently doesn't support jumping to an outside label, this patch
mainly focuses on fixing the function call addressing bugs in inline asm.
Reviewed By: Pengfei, RKSimon
Differential Revision: https://reviews.llvm.org/D133914
This is a simple addition to the convertPhiTypes in CodeGenPrepare to
consider and convert constants as it converts the phi type. Someone
fixed the bug in the motivating example, so the undef is now a constant
0. This does mean converting between integer and floating point
constants, which may have different materialization.
Differential Revision: https://reviews.llvm.org/D135561
MachineInstr's copy constructor works by calling the addOperand method
to add each operand of the old MachineInstr to the new one, one by
one. But addOperand deliberately avoids trying to replicate ties
between operands, on the grounds that the tie refers to operands by
index, and the indices aren't necessarily finalized yet.
This led to a code generation fault when the machine pipeliner cloned
an Arm conditional instruction, and lost the tie between the output
register and the input value to be used when the condition failed to
execute.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D135434
(sra X, BW-1) is either 0 or -1. So the multiply is a conditional
negate of Y.
This pattern shows up when type legalizing wide multiplies involving
a sign extended value.
Fixes PR57549.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D133399
The previous code used an APInt(1, 0) to represent the demanded elts of a scalable vector, and then ignored that argument if the type was scalable. This was inconsistent with the UndefElts parameter, which is set to either APInt(1, 0) or APInt(1, 1) - that is, implicitly broadcast across all lanes. Particularly since the undef code relied on the DemandedElts parameter having bitwidth 1 to achieve that result!
This change switches the demanded parameter to APInt(1,1), documents the broadcast semantics, and takes advantage of it to remove one special case for scalable vectors which is no longer required.
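A sketch of the convention (an assumed helper for illustration, not code from the patch): callers build a one-bit DemandedElts for scalable vectors, implicitly broadcast to every lane, and a full per-lane mask for fixed-length vectors.
```
// For scalable vectors the lane count is unknown at compile time, so a single
// set bit stands for "all lanes demanded"; fixed vectors get one bit per lane.
#include "llvm/ADT/APInt.h"
#include "llvm/CodeGen/ValueTypes.h"

static llvm::APInt getDemandedEltsForVT(llvm::EVT VT) {
  if (VT.isScalableVector())
    return llvm::APInt(1, 1);
  return llvm::APInt::getAllOnes(VT.getVectorNumElements());
}
```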
Update the comment, and add an assertion to check the property expected by the sole (non-test) caller. Remove tests which appear to have been copied from fixed-vector tests and whose demanded bits don't correspond to the way this interface is otherwise used.
If the divisor is even, we can first shift the dividend and divisor
right by the number of trailing zeros. Now the divisor is odd and we
can do the original algorithm to calculate a remainder. Then we shift
that remainder left by the number of trailing zeros and add the bits
that were shifted out of the dividend.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D135541
The first two parameters of addcarry are commutative. We may face a situation where both variants are present in the DAG, in which case we benefit from using just one.
Depends on D57302 and D33587
Reviewed By: RKSimon, chfast
Differential Revision: https://reviews.llvm.org/D57317
The scalar counterpart of this is `llvm.trunc`. However, the name
ISD::VP_TRUNC is already taken by `trunc` of the LLVM IR. Naming this
`vp.ftrunc` would likely cause confusion with `vp.fptrunc`. So we add
`vp.roundtozero` instead, which will look similar to `vp.roundeven`.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D135233
Similar to the current "Trunc/BuildVector" folding, which folds low-element extracts of BuildVectors, this folds high-element extracts done using bit shifts.
For D134354.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D135148
Regression from D131400: cross-language LTO causes a crash in the
compiler due to a NULL deref of Scope in an `isa` call when Rust IR is
involved. Presumably, this might affect other languages too, and
even Rust itself without cross-language LTO once the Rust compiler
switches to LLVM 16.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D134616
We can still get a NaN even if none of the operands are NaN,
e.g. from +inf/-inf. D50804 didn't catch that.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D134854
Sometimes when a function is inlined into a different CU, `llvm-dwarfdump --verify` would find an inlined subroutine with an invalid abstract origin. This is because `DwarfUnit::addDIEEntry()` will incorrectly assume the inlined subroutine and the abstract origin are from the same CU if it can't find the CU for the inlined subroutine.
In the added test, the inlined subroutine for `bar()` is created before the CU for `B.swift` is created, so it tries to point to `goo()` in the wrong CU. Interestingly, if we swap the order of the two functions then we don't see a crash since the module for `goo()` is created first.
The fix is to give a parent DIE to `ScopeDIE` before calling `addDIEEntry()` so that its CU can be found. Luckily, `constructInlinedScopeDIE()` is only called once so we can pass it the DIE of the scope's parent and give it a child just after it's created.
`constructInlinedScopeDIE()` should always return a DIE, so assert that it is not null.
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D135114