Commit Graph

33283 Commits

Kazu Hirata 1fa870b1bd Use None consistently (NFC)
This patch replaces NoneType() and NoneType::None with None in
preparation for migration from llvm::Optional to std::optional.

In the std::optional world, we are not guaranteed to be able to
default-construct std::nullopt_t or peek what's inside it, so neither
NoneType() nor NoneType::None has a corresponding expression in the
std::optional world.

Once we consistently use None, we should even be able to replace the
contents of llvm/include/llvm/ADT/None.h with something like:

  using NoneType = std::nullopt_t;
  inline constexpr std::nullopt_t None = std::nullopt;

to ease the migration from llvm::Optional to std::optional.
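
A minimal sketch of what that replacement would look like in practice, assuming the aliases from the snippet above; the `findEven` helper is hypothetical and only illustrates that call sites spelling `None` keep working:

```cpp
#include <optional>

// With the aliases sketched above in place, `None` is just std::nullopt, so
// code that returns or compares against None keeps compiling after the
// migration to std::optional.
using NoneType = std::nullopt_t;
inline constexpr std::nullopt_t None = std::nullopt;

// Hypothetical helper, for illustration only.
std::optional<int> findEven(int X) {
  if (X % 2 == 0)
    return X;
  return None; // same spelling that llvm::Optional-based code already uses
}

int main() { return findEven(3).has_value() ? 1 : 0; }
```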

Differential Revision: https://reviews.llvm.org/D138376
2022-11-20 00:24:40 -08:00
Kazu Hirata 6ccf1d23d7 [SelectionDAG] Teach getRegistersForValue to return std::optional (NFC)
This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716/11
2022-11-19 15:00:19 -08:00
Alexandre Ganea 49e483d3d6 [CodeView] Replace GHASH hasher by BLAKE3
Previously, we used SHA-1 for hashing the CodeView type records.
SHA-1 in `GloballyHashedType::hashType()` shows up at the top of the profiles. By simply replacing it with BLAKE3, the link time in our case is reduced from 15 sec to 13 sec. I am only using MSVC .OBJs in this case. As a reference, the resulting .PDB is approx 2.1GiB and the .EXE is approx 250MiB.

Differential Revision: https://reviews.llvm.org/D137101
2022-11-19 15:17:42 -05:00
chenglin.bi fe07eeb825 [GlobalISel] Fix crash in applyShiftOfShiftedLogic caused by CSEMIRBuilder reuse instruction
If LogicNonShiftReg is the same as Shift1Base, and the shift1 constant is the same as the MatchInfo.Shift2 constant, CSEMIRBuilder will reuse the old shift1 when building shift2.
So if we erase MatchInfo.Shift2 at the end, we actually remove the old shift1, and that causes a crash later.

The fix for this issue is simply to erase it earlier to avoid the crash.

Fix #58423

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D138187
2022-11-19 09:13:44 +08:00
Philip Reames 6705d94005 Revert "[SDAG] Allow scalable vectors in ComputeKnownBits"
This reverts commit bc0fea0d55.

There was a "timeout for a Halide Hexagon test" reported.  Revert until investigation complete.
2022-11-18 15:29:14 -08:00
Philip Reames 102f05bd34 Revert "[SDAG] Allow scalable vectors in ComputeNumSignBits" and follow up
This reverts commits 3fb08d14a6 and f8c63a7fbf.

There was a "timeout for a Halide Hexagon test" reported.  Revert until investigation complete.
2022-11-18 15:25:59 -08:00
Matt Arsenault 1fe1299a93 GlobalISel: Legalize strict_fsub
In the future should probably have a more convenient
way to switch between building strict and non-strict ops.
2022-11-18 15:21:41 -08:00
Philip Reames 3fb08d14a6 [SDAG] Address post commit review feedback from f8c63a7f
The major change is falling through to ComputeKnownBits when we don't have an implementation of ComputeNumSignBits due to conservatism over scalable vectors.  Right now, we're mostly conservative in the same cases, but this allows our results to improve when we change ComputeKnownBits without also needing to improve ComputeNumSignBits at the same time.
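
A standalone sketch of how a known-bits answer translates into a conservative sign-bit count (plain masks instead of the KnownBits class; assumes C++20 for `std::countl_one`):

```cpp
#include <algorithm>
#include <bit>       // std::countl_one (C++20)
#include <cassert>
#include <cstdint>

// Lower bound on the number of sign bits of a 32-bit value, derived only from
// its known bits: every leading bit known to be 0 (or known to be 1) is a
// copy of the sign bit.
unsigned minSignBitsFromKnownBits(uint32_t KnownZero, uint32_t KnownOne) {
  unsigned FromZeros = std::countl_one(KnownZero); // top bits known to be 0
  unsigned FromOnes = std::countl_one(KnownOne);   // top bits known to be 1
  unsigned Result = std::max(FromZeros, FromOnes);
  return Result ? Result : 1; // the sign bit itself always counts
}

int main() {
  // Value known to look like 0x000000?? -> at least 24 sign bits.
  assert(minSignBitsFromKnownBits(0xFFFFFF00u, 0x00000000u) == 24);
  // Nothing known -> only the trivial single sign bit.
  assert(minSignBitsFromKnownBits(0x0u, 0x0u) == 1);
  return 0;
}
```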
2022-11-18 12:30:10 -08:00
Philip Reames f8c63a7fbf [SDAG] Allow scalable vectors in ComputeNumSignBits
This is a continuation of the series of patches adding lane wise support for scalable vectors in various knownbit-esq routines.

The basic idea here is that we track a single lane for scalable vectors which corresponds to an unknown number of lanes at runtime. This is enough for us to perform lane wise reasoning on many arithmetic operations.

Differential Revision: https://reviews.llvm.org/D137141
2022-11-18 10:50:06 -08:00
Matt Arsenault 08ec15e44b AMDGPU/GlobalISel: Fix strictfp fmul 2022-11-18 08:53:49 -08:00
Philip Reames bc0fea0d55 [SDAG] Allow scalable vectors in ComputeKnownBits
This is the SelectionDAG equivalent of D136470, and is thus an alternate patch to D128159.

The basic idea here is that we track a single lane for scalable vectors which corresponds to an unknown number of lanes at runtime. This is enough for us to perform lane wise reasoning on many arithmetic operations.

This patch also includes an implementation for SPLAT_VECTOR as without it, the lane wise reasoning has no base case. The original patch which inspired this (D128159), also included STEP_VECTOR. I plan to do that as a separate patch.

Differential Revision: https://reviews.llvm.org/D137140
2022-11-18 07:40:32 -08:00
Alexander Timofeev 32bd75716c PEI should be able to use backward walk in replaceFrameIndicesBackward.
The backward register scavenger has correct register
liveness information. PEI should leverage the backward register scavenger.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D137574
2022-11-18 15:57:34 +01:00
Phoebe Wang d558255650 [X86] Use lock add/sub for cases that we only care about the EFLAGS
This fixes #36373, #36905 and partial of #58685.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D137711
2022-11-18 21:43:47 +08:00
Benjamin Maxwell 34d88cf6cf [DAG] Allow folding AND of anyext masked_load with >1 user to zext version
This now allows folding an AND of an anyext masked_load to a
zext_masked_load even if the masked load has multiple users. Doing so
eliminates some redundant ANDs/MOVs for certain AArch64 SVE code.

I'm not sure if there are any cases where doing this could negatively affect
the other users of the masked_load. Looking at other optimizations of
masked loads, most don't apply if the load is used more than once, so it
doesn't look like this would interfere.

Reviewed By: c-rhodes

Differential Revision: https://reviews.llvm.org/D137844
2022-11-18 10:38:09 +00:00
luxufan 18c5f3c35d [RegisterScavenger][RISCV] Don't search for FrameSetup instrs if we were searching from Non-FrameSetup instrs
Otherwise, the spill position may point to a position before the FrameSetup
instructions. In that case, the spill instruction may store into the caller's
frame, since the stack pointer has not yet been adjusted.

Fixes https://github.com/llvm/llvm-project/issues/58286

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D135693
2022-11-18 15:13:52 +08:00
Matt Arsenault fe5b9a6a11 AMDGPU/GlobalISel: Make strict fadd, fmul and fma legal 2022-11-17 20:50:04 -08:00
YingChi Long 7a715bf317
[VP] Add support for vp.inttoptr & vp.ptrtoint
Add vp.inttoptr & vp.ptrtoint support by lowering them into
vp.zext / vp.truncate in SelectionDAGBuilder.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D137169
2022-11-18 10:42:24 +08:00
Stanislav Mekhanoshin bcaf31ec3f [AMDGPU] Allow finer grain control of an unaligned access speed
A target can report whether a misaligned access is 'fast', as defined
by the target, or not. In reality there can be different levels
of 'fast' and 'slow'. This patch changes the boolean 'Fast'
argument of the allowsMisalignedMemoryAccesses family of functions
to an unsigned representing its speed.

A target can still define it as it wants, and the direct translation
of the current code uses 0 and 1 for the current false and true. This
makes the change an NFC.

A subsequent patch will start using an actual speed value in
the load/store vectorizer to check whether a vectorized access is going
to be not just fast, but no slower than before.
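
A simplified standalone model of that plan, with a hypothetical `Target` type and made-up speed values; this is not the actual allowsMisalignedMemoryAccesses signature, only an illustration of comparing speeds instead of a yes/no answer:

```cpp
#include <cassert>

// Hypothetical target hook: instead of a boolean "fast", report a relative
// speed (0 = slowest), so that clients can compare two accesses.
struct Target {
  unsigned misalignedAccessSpeed(unsigned AlignInBytes) const {
    if (AlignInBytes >= 16) return 3;
    if (AlignInBytes >= 4)  return 2;
    if (AlignInBytes >= 2)  return 1;
    return 0;
  }
};

// Sketch of the planned load/store vectorizer check: only vectorize when the
// wide access is at least as fast as the original narrow accesses.
bool worthVectorizing(const Target &T, unsigned OrigAlign, unsigned VecAlign) {
  return T.misalignedAccessSpeed(VecAlign) >= T.misalignedAccessSpeed(OrigAlign);
}

int main() {
  Target T;
  assert(worthVectorizing(T, 4, 16)); // the wide access is no slower
  assert(!worthVectorizing(T, 4, 1)); // would regress to a slow access
  return 0;
}
```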

Differential Revision: https://reviews.llvm.org/D124217
2022-11-17 09:23:53 -08:00
Philip Reames 4105794e66 [SDAG] Assert we don't see scalable VECTOR_SHUFFLES
It was pointed out in review of D137140 that this case should be impossible.  This patch converts an existing bailout into an assert instead.
2022-11-17 08:18:51 -08:00
Alex Richardson 754d25844a [CGP] Update MemIntrinsic alignment if possible
Previously it was only being done if shouldAlignPointerArgs() returned
true, which right now is only true for ARM targets.
Updating the argument alignment attributes of memcpy/memset intrinsics
if the underlying object has larger alignment can be beneficial even
when CGP didn't increase alignment (as can be seen from the test changes),
so invert the loop and if condition.

Differential Revision: https://reviews.llvm.org/D134281
2022-11-17 11:59:35 +00:00
Anton Sidorenko b6c790736e [MachineCombiner][RISCV] Add fmadd/fmsub/fnmsub instructions patterns
This patch adds transformation of fmul+fadd/fsub chains into fused multiply
instructions:
  * fmul+fadd->fmadd
  * fmul+fsub->fmsub/fnmsub

We will also try to combine these instructions if the fmul has more than one use
and cannot be deleted. However, removing the dependence between fmul and fadd can
still be profitable, and we rely on the machine combiner's approximations of scheduling.
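
For reference, a sketch of the three fused forms written with `std::fma`; the sign conventions below follow the usual fmadd/fmsub/fnmsub naming and are only meant to show which fmul+fadd/fsub chains get contracted, not to restate the exact RISC-V instruction semantics:

```cpp
#include <cmath>
#include <cstdio>

// The chains being contracted, written as single fused operations.
double fmadd(double A, double B, double C)  { return std::fma(A, B, C); }   // A*B + C
double fmsub(double A, double B, double C)  { return std::fma(A, B, -C); }  // A*B - C
double fnmsub(double A, double B, double C) { return std::fma(-A, B, C); }  // -(A*B) + C

int main() {
  std::printf("%g %g %g\n", fmadd(2, 3, 1), fmsub(2, 3, 1), fnmsub(2, 3, 1));
  // Expected output: 7 5 -5
  return 0;
}
```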

Differential Revision: https://reviews.llvm.org/D136764
2022-11-17 13:24:04 +03:00
Jay Foad 96a661de4b [GlobalISel] Better verification of G_UNMERGE_VALUES
Verify three cases of G_UNMERGE_VALUES separately:

1. Splitting a vector into subvectors (the converse of
   G_CONCAT_VECTORS).
2. Splitting a vector into its elements (the converse of
   G_BUILD_VECTOR).
3. Splitting a scalar into smaller scalars (the converse of
   G_MERGE_VALUES).

Previously #1 allowed strange combinations like this:
  %1:_(<2 x s16>),%2:_(<2 x s16>) = G_UNMERGE_VALUES %0(<2 x s32>)
This has been tightened up to check that the source and destination
element types match, and some MIR test cases updated accordingly.

Differential Revision: https://reviews.llvm.org/D111132
2022-11-17 08:19:57 +00:00
Sinan Lin 4ad8952d2d [CodeGen][BasicBlockSections] Fix wrong alignment directive placement in basic block section cases

The MachineBlockPlacement pass sets an alignment attribute on the loop
header MBB, and this attribute leads to an alignment directive when
emitting asm. In the basic block sections case, the alignment directive
is put before the section label, so the alignment is applied to the
predecessor of the loop header, which is not what we expect and
increases the code size (both by inserting nops and by setting the section alignment).

Reviewed By: rahmanl

Differential Revision: https://reviews.llvm.org/D137535
2022-11-17 15:01:57 +08:00
zhongyunde 8fbb6f8678 [NFC] Fix typo in comment
Address comment in https://reviews.llvm.org/D137936

Differential Revision: https://reviews.llvm.org/D138124
2022-11-16 23:35:53 +08:00
David Green 71609871dd [AArch64][MachineCombiner] Use MIMetadata to copy pcsections metadata to reassociated instructions.
D134260/D138107 exposed that the MachineCombiner was not copying
pcsections metadata where it should. This patch switches the MIBuild
methods to use MIMetadata that can copy the debug loc and pcsections at
the same time.

Differential Revision: https://reviews.llvm.org/D138112
2022-11-16 13:22:48 +00:00
Simon Pilgrim a92f5a08a1 [DAG] simplifySelect - add support for vselect(0, T, F) -> F fold
We still need to add handling for the non-zero T fold (which requires getBooleanContents handling)
2022-11-16 13:11:14 +00:00
OCHyams a1ac6efcb0 [NFC][SelectionDAG][DebugInfo] Refactor DanglingDebugInfo class
Hide the underlying DbgValueInst by adding methods to extract the necessary
information and by adding a raw_ostream &operator<< overload to print it.

Remove the DebugLoc field as this is always the same as the DbgValueInst's
DebugLoc (see D136247).

Reviewed By: StephenTozer

Differential Revision: https://reviews.llvm.org/D136249
2022-11-16 10:10:24 +00:00
OCHyams 9792744650 [NFC][SelectionDAG][DebugInfo] Remove duplicate parameter from handleDebugValue
handleDebugValue has two DebugLoc parameters that appear to always take the
same value. Remove one of the duplicate parameters. See phabricator review for
more detail.

Reviewed By: StephenTozer

Differential Revision: https://reviews.llvm.org/D136247
2022-11-16 09:59:35 +00:00
Matt Arsenault 116c894d72 DAG: Fix assert on load casted to vector with attached range metadata
AMDGPU legalizes i64 loads to loads of <2 x i32>, leaving the
i64 MMO with attached range metadata alone. The known bit width
was using the scalar element type, and asserting on a mismatch.
2022-11-15 23:28:55 -08:00
Yeting Kuo ed9638c44b [VP][RISCV] Add vp.nearbyint and RISC-V support.
nearbyint has the property of executing without raising an exception.
To avoid modifying fflags, the patch adds a new machine opcode,
PseudoVFROUND_NOEXCEPT_V, that expands to vfcvt.x.f.v and vfcvt.f.x.v between a pair
of frflags and fsflags.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D137685
2022-11-16 14:05:35 +08:00
Yeting Kuo 5c3ca10b09 [VP][RISCV] Add vp.bswap and RISC-V support.
The patch also adds the function expandVPBSWAP to expand ISD::VP_BSWAP nodes.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D137928
2022-11-16 11:36:38 +08:00
Craig Topper f387918dd8 [TargetLowering][RISCV][ARM][AArch64][Mips] Reduce the number of AND mask constants used by BSWAP expansion.
We can reuse constants if we use SRL followed by AND and AND followed by SHL.
Something similar was done for bitreverse previously.
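
A scalar sketch of the constant reuse for a 32-bit bswap: the single mask 0x00FF00FF serves both the SRL-then-AND form and the AND-then-SHL form, so the expansion needs one constant where the naive version needs two.

```cpp
#include <cassert>
#include <cstdint>

uint32_t bswap32(uint32_t X) {
  const uint32_t Mask = 0x00FF00FFu;        // single shared constant
  uint32_t Lo = (X >> 8) & Mask;            // SRL followed by AND
  uint32_t Hi = (X & Mask) << 8;            // AND followed by SHL
  uint32_t Swapped = Hi | Lo;               // bytes swapped within each half
  return (Swapped << 16) | (Swapped >> 16); // finally swap the two halves
}

int main() {
  assert(bswap32(0x11223344u) == 0x44332211u);
  return 0;
}
```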

Differential Revision: https://reviews.llvm.org/D138045
2022-11-15 14:36:01 -08:00
Fangrui Song 6c7666a408 Revert D137574 "PEI should be able to use backward walk in replaceFrameIndicesBackward."
This reverts commit e05ce03cfa.

Caused an asan use-after-poison in 4 DebugInfo/AMDGPU/ tests.
Triggered when PEI::replaceFrameIndicesBackward called llvm::MachineInstr::getNumOperands.
2022-11-15 19:19:46 +00:00
Sanjay Patel fe05a0a3dd [SDAG] avoid udiv/urem transform for vector/scalar type mismatches
This solves the crashing from issue #58994.
I don't know anything about VE, so I don't know if the output
is as expected or even correct.
2022-11-15 11:01:18 -05:00
Alexander Timofeev e05ce03cfa PEI should be able to use backward walk in replaceFrameIndicesBackward.
The backward register scavenger has correct register
liveness information. PEI should leverage the backward register scavenger.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D137574
2022-11-15 15:20:25 +01:00
Serge Pavlov ec893da990 [GlobalISel] Remove semantic operand of G_IS_FPCLASS
Instruction G_IS_FPCLASS had an operand that represented floating-point
semantics of its first operand. It allowed types that have the same length,
like `bfloat16` and `half`, to be distinguished. Unfortunately, it is
not sufficient, as other operations still cannot distinguish such types.
The solution to this problem must be more general, so this operand is now removed.

Differential Revision: https://reviews.llvm.org/D138004
2022-11-15 15:48:05 +07:00
Guozhi Wei 11e86868c1 [MachineCSE] Allow CSE for instructions with ignorable operands
Ignorable operands don't impact an instruction's behavior, so we can safely do CSE on
the instruction.

It is split from D130919. It has a big impact on some AMDGPU test cases.
For example, in atomic_optimizations_raw_buffer.ll, when trying to check whether the
following instruction can be CSEed

  %37:vgpr_32 = V_MOV_B32_e32 0, implicit $exec

The function isCallerPreservedOrConstPhysReg is called on the operand "implicit $exec";
this function is implemented as

  -  return TRI.isCallerPreservedPhysReg(Reg, MF) ||
  +  return TRI.isCallerPreservedPhysReg(Reg, MF) || TII.isIgnorableUse(MO) ||
            (MRI.reservedRegsFrozen() && MRI.isConstantPhysReg(Reg));

Both TRI.isCallerPreservedPhysReg and MRI.isConstantPhysReg return false on this
operand, so isCallerPreservedOrConstPhysReg is also false, which causes LLVM to fail
to CSE this instruction.

With this patch, TII.isIgnorableUse returns true for the operand $exec, so
isCallerPreservedOrConstPhysReg also returns true, which causes this instruction to
be CSEed with the previous instruction

  %14:vgpr_32 = V_MOV_B32_e32 0, implicit $exec

So I get a different result from here. AMDGPU's implementation of isIgnorableUse
is

  bool SIInstrInfo::isIgnorableUse(const MachineOperand &MO) const {
    // Any implicit use of exec by VALU is not a real register read.
    return MO.getReg() == AMDGPU::EXEC && MO.isImplicit() &&
           isVALU(*MO.getParent()) && !resultDependsOnExec(*MO.getParent());
  }

Since the operand $exec is not a real register read, my understanding is that it's
reasonable to do CSE on such instructions.

Because more instructions are CSEed, fewer instructions are generated for
these tests.

Differential Revision: https://reviews.llvm.org/D137222
2022-11-14 19:34:59 +00:00
Nicholas Guy d52e2839f3 [ARM][CodeGen] Add support for complex deinterleaving
Adds the Complex Deinterleaving Pass implementing support for complex numbers in a target-independent manner, deferring to the TargetLowering for the given target to create a target-specific intrinsic.

Differential Revision: https://reviews.llvm.org/D114174
2022-11-14 14:02:27 +00:00
Nikita Popov feda983ff8 [TableGen] Use MemoryEffects to represent intrinsic memory effects (NFCI)
The TableGen implementation was using a homegrown implementation of
FunctionModRefInfo. This switches it to use MemoryEffects instead.
This makes the code simpler, and will allow exposing the full
representational power of MemoryEffects in the future. Among other
things, this will allow us to map IntrHasSideEffects to an
inaccessiblemem readwrite, rather than just ignoring it entirely
in most cases.

To avoid layering issues, this moves the ModRef.h header from IR
to Support, so that it can be included in the TableGen layer.

Differential Revision: https://reviews.llvm.org/D137641
2022-11-14 10:52:04 +01:00
chenglin.bi 8482247900 [GlobalISel] Correct constant type in matchReassocConstantInnerLHS
When we match a pattern with m_GCst, the register type could be different from that of the original op, so we can't replace the original op with the vreg directly.
This code creates a new constant with the original op's type and then replaces the original op.

Fix #58906

Reviewed By: arsenm, aemerson

Differential Revision: https://reviews.llvm.org/D137778
2022-11-13 19:20:07 +08:00
Matt Arsenault 3cfa03856f AtomicExpand: Support cmpxchg expansion for small FP types
Handles f16 atomics for AMDGPU.
2022-11-10 22:16:11 -08:00
Nick Desaulniers f2981a3bc9 [SelectDagISEL] refactor HandlePHINodesInSuccessorBlocks NFC.
While working on this code to support outputs from callbr along indirect
branches, I kept making these changes again and again. Precommit these.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D137445
2022-11-10 14:34:23 -08:00
Alexander Timofeev 27091e6227 [PEI][NFC] Refactoring of the debug instructions frame index replacement
This is required for the upcoming backward PEI::replaceFrameIndices version.
Both forward and backward versions will use same code for debug instruction processing.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D137741
2022-11-10 14:02:03 +01:00
wlei 47b0758049 [SampleFDO] Persist profile staleness metrics into binary
With https://reviews.llvm.org/D136627, we now have metrics for profile staleness based on profile statistics; monitoring profile staleness in real time can help users quickly identify performance issues. In a production scenario, the build is usually incremental, and if we want real-time metrics, we would have to store/cache all the old objects' metrics somewhere and pull them at post-build time. To make this more convenient, this patch adds an option to persist the metrics into the object binary, so they can be reported right away by decoding the binary rather than polling previous stdout/stderr output from a cache system.

For the implementation, it first writes the statistics into a new metadata section (llvm.stats) and then encodes them into a special ELF `.llvm_stats` section. The section data is formatted as a list of key/value pairs so that future statistics can be easily added. This is all guarded by a new switch (`-persist-profile-staleness`).

In terms of size overhead, the metrics are computed at module level, so the overhead should be small; measured on one of our internal services, it costs less than 1MB for a 10GB+ binary.

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D136698
2022-11-09 22:34:33 -08:00
chenglin.bi 597f444092 [TypePromotion] Replace Zext to Truncate for the case src bitwidth is larger
Fix: https://github.com/llvm/llvm-project/issues/58843

Reviewed By: samtebbs

Differential Revision: https://reviews.llvm.org/D137613
2022-11-09 05:08:01 +08:00
Nathan James 6aa050a690 Reland "[llvm][NFC] Use c++17 style variable type traits"
This reverts commit 632a389f96.

This relands commit
1834a310d0.

Differential Revision: https://reviews.llvm.org/D137493
2022-11-08 14:15:15 +00:00
Nathan James 632a389f96 Revert "[llvm][NFC] Use c++17 style variable type traits"
This reverts commit 1834a310d0.
2022-11-08 13:11:41 +00:00
Nathan James 1834a310d0
[llvm][NFC] Use c++17 style variable type traits
This was done as a test for D137302 and it makes sense to push these changes

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D137493
2022-11-08 12:22:52 +00:00
Petar Avramovic 838d5d371a AMDGPU/GlobalISel: Fix combine crash because LI is not set in prelegalizer
Caused by legacy min/max combines (select + cmp) asking for legalizer info
in prelegalizer (D135047 added combine to all_combines).
Combine still does not work for AMDGPU since destination opcode is custom,
not legal. Similar combine works on DAG since it asks for legal or custom.

Differential Revision: https://reviews.llvm.org/D137274
2022-11-08 12:46:16 +01:00
Tobias Hieta aa99b607b5
[clang][pdb] Don't include -fmessage-length in PDB buildinfo
As discussed in https://reviews.llvm.org/D136474 -fmessage-length
creates problems with reproduciability in the PDB files.

This patch just drops that argument when writing the PDB file.

Reviewed By: hans

Differential Revision: https://reviews.llvm.org/D137322
2022-11-08 10:05:59 +01:00
Mingming Liu 36e8e19337 [NFC][BlockPlacement]Add an option to renumber blocks based on function layout order.
Use case:
- When block layout is visualized after MBP pass, the basic blocks are labeled in layout order; meanwhile blocks could be numbered in a different order.
- As a result, it's hard to map between the graph and pass output. With this option on, the basic blocks are renumbered in function layout order.

This option is only useful when a function is to be visualized (i.e., when view options are on), which makes it debugging-only.

Use https://godbolt.org/z/5WTW36bMr as an example:
- As MBP pass output (shown in godbolt output window), `func2` is in a basic block numbered `2` (`bb.2`), and `func1` is in a basic block numbered `3` (`bb.3`);
    `bb.3` is a block with higher block frequency than `bb.2`, and `bb.3` is placed before `bb.2` in the function layout.
- Use [1] to get the dot graph (graph uploaded in [2]); the blocks are re-numbered.
   - `func1` is in the 'if.end' block and labeled `1` in the visualized dot; `func2` is in the 'if.then' block and labeled `3` --> the labeled number and the bb number won't map.
   - DOTGraphTraits<MachineBlockFrequencyInfo *>::getNodeLabel (b5626ae975, llvm/lib/CodeGen/MachineBlockFrequencyInfo.cpp, line 127) is where labeled numbers are derived from the function layout number, and it is called by the graph writer (a8d93783f3, llvm/include/llvm/Support/GraphWriter.h, line 209).
        So calling 'MachineFunction::RenumberBlocks' makes the labeled number (in the dot graph) and the block number (in the pass output) consistent with each other.

[1] `./bin/clang++ -O3 -S -mllvm -view-block-layout-with-bfi=count -mllvm -view-bfi-func-name=_Z9func_loopv -mllvm -print-after=block-placement -mllvm  -filter-print-funcs=_Z9func_loopv test.c`

[2] {F25201785}

Reviewed By: davidxl

Differential Revision: https://reviews.llvm.org/D137467
2022-11-07 07:52:45 -08:00
Thomas Preud'homme c8be35293c [SWP] Recognize mem carried dep with different base
The loop-carried dependency detection logic in isLoopCarriedDep relies
on the load and store using the same definition for the base register.
This misses the case of post-increment loads and stores whose base
registers are different PHIs initialized from the same initial value.

This commit extends the logic to accept a load and store having
different PHI base addresses, provided that they have the same initial value
when entering the loop and are incremented by the same amount in each
loop iteration.

Reviewed By: bcahoon

Differential Revision: https://reviews.llvm.org/D136463
2022-11-07 09:53:41 +00:00
Matt Arsenault 162d9030ab GlobalISel: Pass through AA metadata for target memory intrinsics
The corresponding change for the DAG was done in fa4aac7335
2022-11-06 22:14:12 -08:00
Amaury Séchet 82209fd96e [NFC] Refactor DAGCombiner::foldSelectOfConstants to reduce nesting 2.0 2022-11-05 17:10:06 +00:00
Amaury Séchet 7c05f092c9 [NFC] Refactor DAGCombiner::foldSelectOfConstants to reduce nesting 2022-11-05 16:17:58 +00:00
Shilei Tian 1186e9d59f [LLVM][AMDGPU] Specialize 32-bit atomic fadd instruction for generic address space
The 32-bit floating-point atomic add instructions on AMDGPUs do not support a
"flat" or "generic" address space. So, if the address space cannot be determined
statically, the AMDGPU backend will fall back to a CAS loop (which does support
"flat" addressing). Instead, this patch emits runtime address-space checks to
allow native FP atomic add instructions for global and LDS memory (and non-atomic
FP add instructions for private/scratch memory).

In order to do that, this patch introduces a new interface function
`emitExpandAtomicRMW`. It is expected to be called when a common atomic expand
doesn't work for a specific target, such as the case we discussed here.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D129690
2022-11-04 14:11:05 -04:00
Nikita Popov 304f1d59ca [IR] Switch everything to use memory attribute
This switches everything to use the memory attribute proposed in
https://discourse.llvm.org/t/rfc-unify-memory-effect-attributes/65579.
The old argmemonly, inaccessiblememonly and inaccessiblemem_or_argmemonly
attributes are dropped. The readnone, readonly and writeonly attributes
are restricted to parameters only.

The old attributes are auto-upgraded both in bitcode and IR.
The bitcode upgrade is a policy requirement that has to be retained
indefinitely. The IR upgrade is mainly there so it's not necessary
to update all tests using memory attributes in this patch, which
is already large enough. We could drop that part after migrating
tests, or retain it longer term, to make it easier to import IR
from older LLVM versions.

High-level Function/CallBase APIs like doesNotAccessMemory() or
setDoesNotAccessMemory() are mapped transparently to the memory
attribute. Code that directly manipulates attributes (e.g. via
AttributeList) on the other hand needs to switch to working with
the memory attribute instead.

Differential Revision: https://reviews.llvm.org/D135780
2022-11-04 10:21:38 +01:00
Haohai Wen e419620fc2 [CodeGenPrep] Change ValueToSExts from DenseMap to MapVector
mergeSExts iterates through ValueToSExts. Using a DenseMap results in
an unstable optimization path, so the output IR may vary even if the input
IR is the same.
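
A standalone illustration of the determinism issue, using `std::unordered_map` as a stand-in for DenseMap and a vector of pairs as a stand-in for MapVector; only the second container guarantees iteration in insertion order, which is what keeps the emitted IR stable:

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

int main() {
  // Hash-based map: iteration order is unspecified and may change with the
  // hash function, the insertion history, or the library implementation.
  std::unordered_map<std::string, int> Unstable = {{"a", 1}, {"b", 2}, {"c", 3}};
  for (const auto &KV : Unstable)
    std::printf("%s ", KV.first.c_str()); // order not guaranteed
  std::printf("\n");

  // Insertion-ordered container (the property MapVector adds on top of a map):
  // iteration follows the order in which the keys were added.
  std::vector<std::pair<std::string, int>> Stable = {{"a", 1}, {"b", 2}, {"c", 3}};
  for (const auto &KV : Stable)
    std::printf("%s ", KV.first.c_str()); // always: a b c
  std::printf("\n");
  return 0;
}
```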

Reviewed By: wxiao3

Differential Revision: https://reviews.llvm.org/D137234
2022-11-04 11:15:18 +08:00
Nick Desaulniers 0589038a6f [StatepointLowering] remove unused parameter. NFC
Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D136885
2022-11-03 15:43:20 -07:00
Henry Yu ef0d689e8b [SelectionDAGBuilder] use bitcast instead of AnyExtOrTrunc if copy parts from an int vector to a float vector to fix issue #58615
getCopyFromPartsVector doesn't work correctly when PartEVT and ValueVT have both a different element type and a different size.

This patch
1) removes the part of a comment that contains the incorrect assumption that the element types are the same
2) uses a bitcast when copying parts of an int vector to a float vector after the subvector extraction

Reviewed By: Peter, efriedma

Differential Revision: https://reviews.llvm.org/D136726
2022-11-03 15:35:13 -07:00
Stefan Pintilie 9df924a634 [PowerPC] Add new DMR register classes to Future CPU.
A new register class as well as a number of related subregisters are being added
to Future CPU. These registers are Dense Math Registers (DMR) and are 1024 bits
long. These registers can also be used in consecutive pairs, which yields a
register that is 2048 bits wide.

This patch also adds 7 new instructions that use these registers. More
instructions will be added in future patches.

Reviewed By: amyk, saghir

Differential Revision: https://reviews.llvm.org/D136366
2022-11-03 08:29:55 -05:00
Pierre van Houtryve 020a9d7b20 [GISel] Add (fsub +-0.0, X) -> fneg combine
Allows for better matching of VOP3 mods.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D136442
2022-11-03 08:21:50 +00:00
Matt Arsenault cbce11c422 WebAssembly: Move exception handling code together 2022-11-02 16:05:34 -07:00
Matt Arsenault 4fed59ed41 FunctionLoweringInfo: Use TLI member instead of finding it 2022-11-02 16:05:34 -07:00
John Brawn 2d8c1597e5 [MIRVRegNamer] Avoid opcode hash collision
D121929 happens to cause CodeGen/MIR/AArch64/mirnamer.mir to fail due
to a hash collision caused by adding two extra opcodes. The collision
is only in the top 19 bits of the hashed opcode so fix this by just
using the whole hash (in fixed width hex for consistency) instead of
the top 5 decimal digits.

Differential Revision: https://reviews.llvm.org/D137155
2022-11-02 13:53:12 +00:00
John Brawn 88ac25b357 [MachineCSE] Allow PRE of instructions that read physical registers
Currently MachineCSE forbids PRE when the instruction reads a physical
register. Relax this so that it's allowed when the value being read is
the same as what would be read in the place the instruction would be
hoisted to.

This is being done in preparation for adding FPCR handling to the
AArch64 backend, in order to prevent it to from worsening the
generated code, but for targets that already have a similar register
it should improve things.

This patch affects code generation in several tests. The new code
looks better except for in Thumb2/LowOverheadLoops/memcall.ll where
we perform PRE but the LowOverheadLoops transformation then undoes
it. Also in AMDGPU/selectcc-opt.ll the CHECK makes things look worse,
but actually the function as a whole is better (as a MOV is PRE'd).

Differential Revision: https://reviews.llvm.org/D136675
2022-11-02 13:53:12 +00:00
Anton Sidorenko 92ec614988 [GlobalISel] Compute debug location when merging stores more accurately
Originally, the loop did almost nothing, as the calculated location was
overwritten on the next iteration.

Differential Revision: https://reviews.llvm.org/D136937
2022-11-01 14:32:42 +03:00
Yeting Kuo 71e4e35581 [VP][RISCV] Add vp.rint and RISC-V support.
FRINT uses the dynamic rounding mode instead of a static rounding mode. The patch
renames VFCVT_X_F_VL to VFCVT_RM_X_F_VL for static rounding mode uses and adds a
new ISD node VFCVT_X_F_VL that is directly selected to PseudoVFCVT_X_F_V.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D136662
2022-11-01 14:52:47 +08:00
Matt Arsenault b60a9ccd02 AtomicExpand: Use InstSimplifyFolder
Automatically clean up operations if we know the atomic has higher
alignment.
2022-10-31 23:31:42 -07:00
Matt Arsenault 07f12170a2 AtomicExpand: Don't create unused instructions for some atomicrmw
This wasn't used by every atomicrmw expansion.
2022-10-31 18:34:36 -07:00
Simon Pilgrim 55a11b542e [VectorUtils] Add getShuffleDemandedElts helper
We have similar code to translate a demanded elements mask for a shuffle's operands in multiple places - this patch adds a helper function to VectorUtils and updates a number of locations to use it directly.
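
Roughly, the helper walks the shuffle mask and maps each demanded result lane back onto the operand lane it reads from. A simplified standalone version (std::vector<bool> instead of APInt, -1 as the undef sentinel, both operands assumed to have NumElts elements):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// For each demanded result lane, mark the source lane it reads from.
// Mask entries of -1 are undef lanes and demand nothing.
void getShuffleDemandedElts(const std::vector<int> &Mask, int NumElts,
                            const std::vector<bool> &DemandedElts,
                            std::vector<bool> &DemandedLHS,
                            std::vector<bool> &DemandedRHS) {
  DemandedLHS.assign(NumElts, false);
  DemandedRHS.assign(NumElts, false);
  for (std::size_t I = 0; I != Mask.size(); ++I) {
    if (!DemandedElts[I] || Mask[I] < 0)
      continue;
    if (Mask[I] < NumElts)
      DemandedLHS[Mask[I]] = true;           // lane comes from the first operand
    else
      DemandedRHS[Mask[I] - NumElts] = true; // lane comes from the second operand
  }
}

int main() {
  std::vector<bool> LHS, RHS;
  // Result lane 0 reads LHS[2], lane 1 reads RHS[1]; only lane 1 is demanded.
  getShuffleDemandedElts({2, 5}, 4, {false, true}, LHS, RHS);
  assert(!LHS[2] && RHS[1]);
  return 0;
}
```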

Differential Revision: https://reviews.llvm.org/D136832
2022-10-30 17:03:55 +00:00
Simon Pilgrim 78739fdb4d [DAG] Enable combineShiftOfShiftedLogic folds after type legalization
This was disabled to prevent regressions, which appear to be just occurring on AMDGPU (at least in our current lit tests), which I've addressed by adding AMDGPUTargetLowering::isDesirableToCommuteWithShift overrides.

Fixes #57872

Differential Revision: https://reviews.llvm.org/D136042
2022-10-29 12:30:04 +01:00
Simon Pilgrim 69d117edc2 [DAG] ExpandIntRes_MINMAX - simplify cases with sufficient number of sign bits
When legalizing a smax/smin/umax/umin op, if we know that the upper half is all sign bits, then we can perform the op on the lower half and then sign extend the result to the upper half.
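
A scalar model of the fold, assuming the upper half of both operands is all sign bits (i.e. the i64 values really carry i32 payloads): the min/max can be done on the low halves and the result sign-extended back.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Precondition: X and Y each have at least 33 sign bits, i.e. they are i32
// values carried in an i64. Then smax can be computed on the low 32 bits and
// sign-extended, instead of as a full 64-bit operation.
int64_t smaxViaLowHalf(int64_t X, int64_t Y) {
  int32_t Lo = std::max(static_cast<int32_t>(X), static_cast<int32_t>(Y));
  return static_cast<int64_t>(Lo); // sign extend the half-width result
}

int main() {
  assert(smaxViaLowHalf(-5, 7) == 7);
  assert(smaxViaLowHalf(-100, -2) == -2);
  return 0;
}
```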

Alive2: https://alive2.llvm.org/ce/z/rk8Rfd

Fixes #58630
2022-10-28 17:10:45 +01:00
John Brawn 7a7b36e96b Revert "[MachineCSE] Allow PRE of instructions that read physical registers"
This reverts commit 628467e53f.

This is causing a miscompile in ffmpeg when compiled for armv7.
2022-10-28 14:39:56 +01:00
Sanjay Patel 1e7c1dd67c [SDAG] avoid crash from mismatched types in scalar-to-vector fold
This bug was introduced with D136713 / 54eeadcf44 .

As an enhancement, we could cast operands to the expected type,
but we need to make sure that is done correctly (zext vs. sext).
It's also possible (but seems unlikely) that an operand can have
a type larger than the result type.

Fixes #58661
2022-10-28 09:14:08 -04:00
Simon Pilgrim d47f056cd2 [DAG] visitXOR - fold XOR(A,B) -> OR(A,B) iff A and B have no common bits
Alive2: https://alive2.llvm.org/ce/z/7wvfns

Part of Issue #58624
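
The identity behind the fold, checked on a concrete pair of values: when A and B share no set bits, XOR and OR (and addition) all produce the same result.

```cpp
#include <cassert>
#include <cstdint>

int main() {
  uint32_t A = 0xF0F0A000u;
  uint32_t B = 0x0F0F0005u;
  assert((A & B) == 0);        // no common set bits
  assert((A ^ B) == (A | B));  // so the XOR can be rewritten as an OR
  assert((A ^ B) == A + B);    // they also equal the sum, for the same reason
  return 0;
}
```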
2022-10-28 12:11:12 +01:00
Simon Pilgrim 28bfd853ab [DAG] visitFSUBForFMACombine - pass callbacks by reference in isContractableAndReassociableFMUL lambda capture. NFC.
Fixes a coverity remark about large copies by value
2022-10-28 11:48:45 +01:00
Simon Pilgrim 32d51581da IndirectBrExpandPass - remove unused DL from lambda capture. NFC.
Oddly I wasn't seeing an unused variable warning - this came up in a coverity remark about large copies by value!
2022-10-28 11:05:42 +01:00
Pierre van Houtryve 088a816824 [DAGCombiner] Use `getAnyExtOrTrunc` instead of TRUNCATE in ExtractVectorElt combine
ScalarVT isn't guaranteed to be smaller than the BCSrc.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D136849
2022-10-28 06:33:29 +00:00
Hongtao Yu d5a963ab8b [PseudoProbe] Replace relocation with offset for entry probe.
Currently pseudo probe encoding for a function is like:
	- For the first probe, a relocation from it to its physical position in the code body
	- For subsequent probes, an incremental offset from the current probe to the previous probe

The relocation could potentially cause relocation overflow during link time. I'm now replacing it with an offset from the first probe to the function start address.

A source function could be lowered into multiple binary functions due to outlining (e.g., coro-split). Since those binary functions have independent link-time layout, to really avoid relocations from .pseudo_probe sections to .text sections, the offset to replace with should really be the offset from the probe's enclosing binary function, rather than from the entry of the source function. This requires some changes to the previous section-based emission scheme, which now becomes function-based. The assembly form of the pseudo probe directive is changed correspondingly, i.e., it reflects the binary function name.

Most of the source functions end up with only one binary function. For those that don't, a sentinel probe is emitted for each of the binary functions with a different name from the source. The sentinel probe indicates the binary function name, to differentiate subsequent probes from the ones belonging to a different binary function. For example, given the source function

```
Foo() {
  …
  Probe 1
  …
  Probe 2
}
```

If it is transformed into two binary functions:

```
Foo:
   …

Foo.outlined:
   …
```

The encoding for the two binary functions will be separate:

```

GUID of Foo
  Probe 1

GUID of Foo
  Sentinel probe of Foo.outlined
  Probe 2
```

Then Probe 1 will be decoded against binary `Foo`'s address, and Probe 2 will be decoded against `Foo.outlined`. The sentinel probe of `Foo.outlined` makes sure there's no accidental relocation from `Foo.outlined`'s probes to `Foo`'s entry address.

On the BOLT side, to be minimally intrusive, the pseudo probe re-encoding sticks with the old encoding format. This is fine since, unlike the linker, BOLT processes the pseudo probe section as a whole and is free from relocation overflow issues.

The change is downwards compatible as long as there's no mixed use of the old encoding and the new encoding.

Reviewed By: wenlei, maksfb

Differential Revision: https://reviews.llvm.org/D135912
Differential Revision: https://reviews.llvm.org/D135914
Differential Revision: https://reviews.llvm.org/D136394
2022-10-27 13:28:22 -07:00
Craig Topper 00d93def77 [LegalizeVectorOps][X86][RISCV] Expand vector S/USHLSAT instead of unrolling.
Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D136478
2022-10-27 09:09:36 -07:00
John Brawn 628467e53f [MachineCSE] Allow PRE of instructions that read physical registers
Currently MachineCSE forbids PRE when the instruction reads a physical
register. Relax this so that it's allowed when the value being read is
the same as what would be read in the place the instruction would be
hoisted to.

This is being done in preparation for adding FPCR handling to the
AArch64 backend, in order to prevent it from worsening the
generated code, but for targets that already have a similar register
it should improve things.

This patch affects code generation in several tests. The new code
looks better except for in Thumb2/LowOverheadLoops/memcall.ll where
we perform PRE but the LowOverheadLoops transformation then undoes
it. Also in AMDGPU/selectcc-opt.ll the CHECK makes things look worse,
but actually the function as a whole is better (as a MOV is PRE'd).

Differential Revision: https://reviews.llvm.org/D136675
2022-10-27 14:14:57 +01:00
David Spickett c6e2de6042 [LLVM] Use DWARFv4 bitfields when tuning for GDB
GDB implemented data_bit_offset in https://sourceware.org/bugzilla/show_bug.cgi?id=12616
which has been present since GDB 8.0.

GCC started using it at GCC 11.

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D135583
2022-10-27 08:51:07 +00:00
Paul Kirth 2e1e2f52f3 [CodeGen] Improve large stack frame diagnostic
Add statistics about how much memory is used, in variables, spills, and
unsafestack.

Issue #58168 describes some of the difficulty diagnosing stack size issues
identified by -Wframe-larger-than. D135488 addresses some of those issues by
giving developers a method to view the stack layout and thereby understand
where and how stack memory is used.

However, that solution requires an additional pass, when a short summary about
how the compiler has allocated stack memory can inform developers about where
they should investigate. When they need the complete context, D135488 can
provide them with a more comprehensive set of diagnostics.

Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D136484
2022-10-27 00:51:45 +00:00
Sanjay Patel 54eeadcf44 [SDAG] avoid vector extract/insert around binop
scalar-to-vector (scalar binop (extractelt V, Idx), C) --> shuffle (vector binop V, C'), {Idx, -1, -1...}

We generally try to avoid ad-hoc vectorization in SDAG,
but the motivating case from issue #39482 escapes our
normal vectorization folds in IR. It seems like it should
always be a win to transform this pattern in cases where
we have the same vector type for input and output and the
target supports the vector operation. That avoids
transfers from vector to scalar and back.

In the x86 shift examples, we create the scalar-to-vector
node during legalization. I'm not sure if there's a more
general way to create the pattern for testing. (If so, I
could add tests for other targets.)

Differential Revision: https://reviews.llvm.org/D136713
2022-10-26 14:04:46 -04:00
Sanjay Patel 3aec021118 [SDAG] add helper for opcodes that are not speculatable
This is not quite NFC because one of the users should
now avoid the DIVREM opcodes too, but I'm not sure
how to test that.

I used the same name as an analysis function in IR
in case we want to expand this to include other
operations.

Another potential use is proposed in D136713.
2022-10-26 11:20:14 -04:00
Haohai Wen 21f23a37c6 [SelectionDAG] Clamp stack alignment for memset, memmove
memcpy already clamps the dst stack alignment to NaturalStackAlignment if
hasStackRealignment is false. We should also clamp the stack alignment
for memset and memmove. If we don't clamp, SelectionDAG may first
do tail call optimization, which requires no stack realignment. Then
memmove and memset in the same function may be lowered to loads/stores with
larger alignment, leading PEI to emit stack realignment code, which
is absolutely not correct.

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D136456
2022-10-26 16:45:31 +08:00
Matt Arsenault a91c17498a GlobalISel: Fix copy paste error
Pretty sure this was harmless since the tablegen
calling convention definitions do not use pointers.

Part of issue 58604
2022-10-25 17:06:00 -07:00
Craig Topper 8c42b5e89e [SelectionDAG] Add missing semicolon after return.
I'm unsure what the code does without the semicolon. On the surface it
seems like the assert below it would be considered part of the if
and thus the assert would only execute if DestReg is 0. But 0 isn't
considered a virtual register so the assert should fail.

Found by PVS Studio.
Reported https://pvs-studio.com/en/blog/posts/cpp/1003/ (N7)
2022-10-25 10:24:01 -07:00
Sanjay Patel b179351ad4 [SDAG] refactor folds for scalar-to-vector; NFCI
Fix typos, add comments, improve variable names,
rearrange code, add early exits.
2022-10-25 12:53:46 -04:00
Jay Foad 0243057304 [MachineVerifier] Try harder to verify LiveVariables
Verify the LiveVariables analysis after a pass that claims to preserve
it, even if there are no further passes (apart from the verifier itself)
that would use the analysis.

Differential Revision: https://reviews.llvm.org/D129213
2022-10-25 09:21:51 +01:00
Roman Lebedev 377f27be87
[X86] `DAGTypeLegalizer::ModifyToType()`: when widening w/ zeros, insert into undef and `and`-mask the padding away
We can expect that the sequence of inserting-of-extracts-into-undef
will be successfully lowered back into widening of the source vector,
but it seems that at least for X86 mask vectors, we have a really hard time
recovering from inserting-into-zero.

I've looked into alternative fix injection points, and they are much more
involved, by the time of `LowerBUILD_VECTORvXi1()`/`LowerINSERT_VECTOR_ELT()`
the constants might be obscured, so it does not seem like we can easily
deal with this by lowering into bit math later on,
some other pieces are missing.

Instead, it seems like just clearing the padding away via an `AND`-mask
is at least not a worse choice. Why create a problem where there wasn't one.
Though yes, it is possible that there are cases where constants originate
from the source IR, so some other fix may still be needed.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D136046
2022-10-24 20:27:02 +03:00
Craig Topper 1fa8fd4c33 Recommit "[TargetLowering][RISCV][X86] Support even divisors in expandDIVREMByConstant."
This reverts commit 65aaecca88.

There was an ordering problem in the calculation of the partial
remainder.

Original commit message:

If the divisor is even, we can first shift the dividend and divisor
right by the number of trailing zeros. Now the divisor is odd and we
can do the original algorithm to calculate a remainder. Then we shift
that remainder left by the number of trailing zeros and add the bits
that were shifted out of the dividend.
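
A worked scalar example of the algorithm for an unsigned remainder by the even constant 12 = 4 * 3 (two trailing zero bits); the in-tree expansion replaces the `% 3` below with the existing odd-divisor remainder sequence, and plain `%` is used here only to keep the sketch short:

```cpp
#include <cassert>
#include <cstdint>

// x % 12 via the odd divisor 3:
//   x = (x >> 2) * 4 + (x & 3)  and  (q * 4) % 12 == (q % 3) * 4,
// so x % 12 == (((x >> 2) % 3) << 2) | (x & 3).
uint32_t uremBy12(uint32_t X) {
  const unsigned TZ = 2;                    // trailing zeros of the divisor 12
  uint32_t Shifted = X >> TZ;               // shift out the power-of-two part
  uint32_t RemOdd = Shifted % 3u;           // remainder by the odd part
  return (RemOdd << TZ) | (X & ((1u << TZ) - 1)); // re-attach shifted-out bits
}

int main() {
  for (uint32_t X = 0; X < 1000; ++X)
    assert(uremBy12(X) == X % 12u);
  return 0;
}
```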

Differential Revision: https://reviews.llvm.org/D135541
2022-10-24 10:08:50 -07:00
Craig Topper 65aaecca88 Revert "[TargetLowering][RISCV][X86] Support even divisors in expandDIVREMByConstant."
This reverts commit f6a7b47820.

I received a report that this fails on 32-bit X86.
2022-10-24 07:12:54 -07:00
Simon Pilgrim fd5f3abb07 [DAG] Fold (abs (sign_extend_inreg x)) -> (zero_extend (abs (truncate x))) (PR43370)
If the upper half of an abs() is all sign bits, then we can perform the abs() using just the lower half and then zero extend.

I've limited the DAG combine to only sign_extend_inreg (and free truncate/zero_extend) to minimise any later promotion issues, but for legalization a similar fold can use ComputeNumSignBits to be more aggressive.
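
A scalar model of the fold, assuming X is an i64 whose upper half is all sign bits (an i32 value sign-extended in place): abs is computed on the truncated low half with wrapping arithmetic, matching ISD::ABS, and the result is zero-extended.

```cpp
#include <cassert>
#include <cstdint>

// Precondition: the top 33 bits of X are all copies of bit 31, i.e.
// X == sign_extend_inreg(X, i32). Then abs(X) fits in 32 bits and can be
// computed on the low half and zero-extended.
uint64_t absViaLowHalf(int64_t X) {
  uint32_t Lo = static_cast<uint32_t>(X);   // truncate
  uint32_t AbsLo = (X < 0) ? 0u - Lo : Lo;  // wrapping 32-bit negate
  return static_cast<uint64_t>(AbsLo);      // zero extend
}

int main() {
  assert(absViaLowHalf(-7) == 7);
  assert(absViaLowHalf(12345) == 12345);
  assert(absViaLowHalf(INT32_MIN) == 0x80000000u); // abs wraps, zext keeps it
  return 0;
}
```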

Alive2: https://alive2.llvm.org/ce/z/y32fS4

Fixes #43370

Differential Revision: https://reviews.llvm.org/D136559
2022-10-24 10:27:08 +01:00
Kazu Hirata a1317be28d [SelectionDAG] Use std::clamp (NFC) 2022-10-24 00:23:51 -07:00
Simon Pilgrim 913f08b74c [DAG] Add freeze(sign/zero_extend_vector_inreg(x)) -> sign/zero_extend_vector_inreg(freeze(x)) folding 2022-10-23 12:19:42 +01:00
Craig Topper f6a7b47820 [TargetLowering][RISCV][X86] Support even divisors in expandDIVREMByConstant.
If the divisor is even, we can first shift the dividend and divisor
right by the number of trailing zeros. Now the divisor is odd and we
can do the original algorithm to calculate a remainder. Then we shift
that remainder left by the number of trailing zeros and add the bits
that were shifted out of the dividend.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D135541
2022-10-22 23:35:33 -07:00
Craig Topper db25f51e37 Revert "[DAGCombiner] Fold (mul (sra X, BW-1), Y) -> (neg (and (sra X, BW-1), Y))"
This reverts commit e8b3ffa532.

The AMDGPU/mad_64_32.ll seems to fail on some of the build bots but
passes locally. I'm really confused.
2022-10-22 22:50:43 -07:00
Craig Topper e8b3ffa532 [DAGCombiner] Fold (mul (sra X, BW-1), Y) -> (neg (and (sra X, BW-1), Y))
(sra X, BW-1) is either 0 or -1. So the multiply is a conditional
negate of Y.

This pattern shows up when type legalizing wide multiplies involving
a sign extended value.
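
The identity being exploited, checked on a few values (the sketch assumes an arithmetic right shift for negative values, which C++20 guarantees): S = X >> 31 is either 0 or -1, so S * Y equals -(S & Y) and the multiply can be dropped.

```cpp
#include <cassert>
#include <cstdint>

int32_t mulBySignMask(int32_t X, int32_t Y) {
  int32_t S = X >> 31; // arithmetic shift: 0 if X >= 0, -1 if X < 0
  return -(S & Y);     // equal to S * Y, with no multiply
}

int main() {
  for (int32_t X : {-5, 0, 7})
    for (int32_t Y : {-3, 0, 11})
      assert(mulBySignMask(X, Y) == (X >> 31) * Y);
  return 0;
}
```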

Fixes PR57549.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D133399
2022-10-22 21:51:45 -07:00
Craig Topper 00816714f9 [DAGCombiner][RISCV] Make foldBinOpIntoSelect work correctly with opaque constants.
CanFoldNonConst doesn't work correctly with opaque constants
because getNode won't constant fold constants if one is opaque, even
if the operation is AND/OR. This can lead to infinite loops.

This patch does the folding manually in the DAGCombine. Alternatively,
we could improve getNode but that seemed likely to have bigger impact
and possibly increase compile time for the additional checks. We wouldn't
want to directly constant fold because we need to preserve the opaque flag.

Fixes PR58511.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D136472
2022-10-22 19:10:33 -07:00
Simon Pilgrim b24a9f0cef [DAG] visitFREEZE - pull out Operands array. NFCI.
Initial tidyup and it will make it easier to adjust additional Operands in a future patch.
2022-10-22 20:14:56 +01:00
Simon Pilgrim 7511303c4f [DAG] canCreateUndefOrPoison - add freeze(fsh(x,y,z)) -> fsh(freeze(x),freeze(y),freeze(z)) support
The funnel-shift amount is always modulo, so won't introduce poison/undef
2022-10-22 18:39:52 +01:00
Simon Pilgrim 89111707ec [DAG] canCreateUndefOrPoison - add freeze(rot(x,y)) -> rot(freeze(x),freeze(y)) support
The rotation amount is always modulo, so won't introduce poison/undef
2022-10-22 17:24:53 +01:00
DianQK d20e4a1d68
[DebugInfo] Fix potential CU mismatch for attachRangesOrLowHighPC
When a CU attaches some ranges for a subprogram or inlined code, the CU should be that of the subprogram/inlined code that was emitted.
If not, then these emitted ranges will use the incorrect base of the CU in `emitRangeList`.

A reproducible example is:
When linking these two LLVM IRs, dsymutil will report "no mapping for range" or "inconsistent range data" warnings.

`foo.swift`
```swift
import AppKit.NSLayoutConstraint

public class Foo {

    public var c: Int {
        get {
            Int(NSLayoutConstraint().constant)
        }
        set {
        }
    }

}
```

`main.swift`
```swift
// no mapping for range
let f: Foo! = nil
// inconsistent range data
//let l: Foo = Foo()
```

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D136039
2022-10-22 21:22:01 +08:00
Jay Foad f010b1eae5 Revert "[MachineVerifier] Try harder to verify LiveVariables"
This reverts commit d4650d0938.

Reverted because it causes several X86 CodeGen test failures in a build
with LLVM_ENABLE_EXPENSIVE_CHECKS=ON.
2022-10-22 12:21:50 +01:00
Arthur Eubanks 4153f989ba [ObjCARC] Remove legacy PM versions of optimization passes
This doesn't touch objc-arc-contract because that's in the codegen pipeline.
However, this does move its corresponding initialize function into initializeCodegen().

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D135041
2022-10-21 13:40:54 -07:00
Jay Foad d4650d0938 [MachineVerifier] Try harder to verify LiveVariables
Verify the LiveVariables analysis after a pass that claims to preserve
it, even if there are no further passes (apart from the verifier itself)
that would use the analysis.

Differential Revision: https://reviews.llvm.org/D129213
2022-10-21 16:31:09 +01:00
Jay Foad 33f78d0903 [TwoAddressInstruction] Fix stale LiveVariables info in processStatepoint
D129213 improves verification of LiveVariables, and caused
CodeGen/X86/statepoint-cmp-sunk-past-statepoint.ll to fail with:
*** Bad machine code: LiveVariables: Block should not be in AliveBlocks ***
after Two-Address instruction pass.

Fix it by clearing AliveBlocks for a register which is no longer used.

Differential Revision: https://reviews.llvm.org/D136445
2022-10-21 14:57:03 +01:00
Craig Topper 2c82080f09 [MachineFrameInfo][RISCV] Call ensureStackAlignment for objects created with scalable vector stack id.
This is an alternative fix for PR57939 for RISC-V. It definitely
can be argued that the stack temporaries for RISC-V are being created
with an unnecessarily large alignment. But ignoring the alignment
in MachineFrameInfo also seems bad.

Looking at the test update that goes with the current ID==0 check,
it was intended to exclude things like the NoAlloc stack id. So I'm
not sure if scalable vectors are intentionally being excluded.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D135913
2022-10-20 14:05:46 -07:00
Paul Walker ab8257ca0e [NFC] Fix a few whitespace inconsistencies. 2022-10-20 14:52:25 +00:00
Yuanfang Chen 24c6ea917c [JMCInstrument] rename ELF section name from ".just.my.code" to ".data.just.my.code"
This gives linker scripts a hint about where to place the section.
2022-10-19 10:49:54 -07:00
Simon Pilgrim 9708d88017 Revert rG42230efccf8fe1185be5fa6c23dce0a8183d6ec9 "[DAG] Fold (sra (or (shl x, c1), (shl y, c2)), c1) -> (sext_inreg (or x, (shl y,c2-c1)) iff c2 >= c1"
@foad was right - this isn't actually going to help with D136042 as much as hoped; we need a better AMDGPU-specific solution, as other targets are likely to make use of it
2022-10-19 12:07:41 +01:00
Simon Pilgrim 42230efccf [DAG] Fold (sra (or (shl x, c1), (shl y, c2)), c1) -> (sext_inreg (or x, (shl y,c2-c1)) iff c2 >= c1
Helps with some of the AMDGPU regressions identified in D136042 where we were losing signed BFE patterns after sinking shifts behind logic ops.

Differential Revision: https://reviews.llvm.org/D136081
2022-10-19 11:18:49 +01:00
Koakuma d3fcbee10d [SPARC] Make calls to function with big return values work
Implement CanLowerReturn and associated CallingConv changes for SPARC/SPARC64.

In particular, for SPARC64 there are new `RetCC_Sparc64_*` functions that handle the return case of the calling convention.
They use the same analysis as the `CC_Sparc64_*` family of functions, but fail if the return value doesn't fit into the return registers.

This makes calls to functions with big return values get converted to an sret function as expected, instead of crashing LLVM.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D132465
2022-10-18 00:01:55 +00:00
Craig Topper 30305d7948 [TargetLowering][RISCV][Sparc] Don't emit zero check in CTTZTableLookup for CTTZ_ZERO_UNDEF.
The code incorrectly checked for CTLZ_ZERO_UNDEF instead of
CTTZ_ZERO_UNDEF.

While I was there I flipped the condition into an early out.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D136010
2022-10-17 10:15:39 -07:00
Kazu Hirata ef9956f434 [IR] Rename FuncletPadInst::getNumArgOperands to arg_size (NFC)
This patch renames FuncletPadInst::getNumArgOperands to arg_size for
consistency with CallBase, where getNumArgOperands was removed in
favor of arg_size in commit 3e1c787b31

Differential Revision: https://reviews.llvm.org/D136048
2022-10-17 10:15:10 -07:00
Simon Pilgrim 8e77458578 [DAG] visitShiftByConstant - replace constant detection with FoldConstantArithmetic
Instead of checking that an operand is constant/opaque before calling getNode() and then checking that the result is a constant, just use FoldConstantArithmetic which will just early-out if the operands are not constant foldable.
2022-10-17 16:19:10 +01:00
Simon Pilgrim af5942cc09 Remove trailing whitespace. NFC. 2022-10-17 15:20:26 +01:00
Peter Rong c2e7c9cb33 [CodeGen] Using ZExt for extractelement indices.
In https://github.com/llvm/llvm-project/issues/57452, we found that IRTranslator is translating `i1 true` into `i32 -1`.
This is because IRTranslator uses SExt for indices.

In this fix, we change the expected behavior of extractelement's index, moving from SExt to ZExt.
This change includes documentation, SelectionDAG, and IRTranslator. We also included a test for AMDGPU and updated tests for AArch64, Mips, PowerPC, RISCV, VE, WebAssembly and X86.

This patch fixes issue #57452.

Differential Revision: https://reviews.llvm.org/D132978
2022-10-15 15:45:35 -07:00
Filipp Zhinkin ef774bec63 [AArch64] Support SETCCCARRY lowering
Support SETCCCARRY lowering to SBCS instruction.

Related issue: https://github.com/llvm/llvm-project/issues/44629

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D135302
2022-10-14 22:29:31 +03:00
chenglin.bi c1909d7337 [DAGCombiner] Fix crash for the merge stores with different value type
The crash case comes from #58350. It has two stores: one store is of type f32 and the other is v1f32.
When we try to merge these two stores as v1f32, the memVT is a vector type, so the old code would use ISD::EXTRACT_SUBVECTOR for the f32 store as well, and then the compiler crashed.
So this patch inserts a build_vector for the f32 store to also generate v1f32 when memVT is v1f32.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D135954
2022-10-15 01:16:35 +08:00
Nicola Lancellotti ce1a2ccf94 [NFC] Fix typo in DAGCombiner 2022-10-14 17:47:25 +01:00
Sander de Smalen 02df03c5b7 [AArch64][SME] Add support for arm_locally_streaming functions.
Functions with `aarch64_sme_pstatesm_body` will emit a SMSTART at the start
of the function, and a SMSTOP at the end of the function, such that all
operations use the right value for vscale.

Because the placement of these nodes is critically important (i.e. no
vscale-dependent operations should be done before SMSTART has been issued),
we require glueing the CopyFromReg to the Entry node such that we can
insert the SMSTART as part of that glued chain.

More details about the SME attributes and design can be found
in D131562.

Reviewed By: aemerson

Differential Revision: https://reviews.llvm.org/D131582
2022-10-14 13:47:53 +00:00
Matt Arsenault d0750ec475 AtomicExpand: Avoid some operations if the atomic is overaligned
Let some of the pointer bithacking fold away if we know the LSB are 0.
2022-10-13 23:31:00 -07:00
Anshil Gandhi d383adec4d [BranchRelaxation] Fall through only if block has no unconditional branches
Prior to inserting an unconditional branch from X to its
fall through basic block, check if X has any terminators to
avoid inserting additional branches.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D134557
2022-10-13 22:48:41 -06:00
Matt Arsenault c427ee9798 AsmPrinter: Remove pointless code in inline asm emission
This was scanning through def operands looking for the
symbol operand. This is pointless because the symbol is always
the first operand as enforced by the verifier, and all operands
are implicit.
2022-10-13 21:12:11 -07:00
Xiang1 Zhang aad013de41 [InlineAsm][bugfix] Correct function addressing in inline asm
In Linux PIC model, there are 4 cases about value/label addressing:
Case 1: Function call or Label jmp inside the module.
Case 2: Data access (such as global variable, static variable) inside the module.
Case 3: Function call or Label jmp outside the module.
Case 4: Data access (such as global variable) outside the module.

Because the current LLVM inline asm architecture is designed not to "recognize" the asm
code, it is quite troublesome for us to treat memory addressing differently for the
same value/address used in different instructions.
For example, in the PIC model, calling a function may go through the PLT or be directly
PC-relative, while lea/mov of a function address may go through the GOT.

This patch fixes/refines case 1 and case 2 in inline asm.
Because inline asm currently does not support jumping to an outside label, this patch
mainly focuses on fixing the function call addressing bugs in inline asm.

Reviewed By: Pengfei, RKSimon

Differential Revision: https://reviews.llvm.org/D133914
2022-10-14 09:47:26 +08:00
David Green 16e4e4ab87 [CodeGenPrep] Handle constants in ConvertPhiType
This is a simple addition to convertPhiTypes in CodeGenPrepare to
consider and convert constants as it converts the phi type. Someone
fixed the bug in the motivating example, so the undef is now a constant
0. This does mean converting between integer and floating point
constants, which may have different materialization.
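
For illustration (an assumption about what the constant conversion amounts to, not code from the patch), an integer incoming constant for a phi that becomes a float phi is reinterpreted as the float with the same bit pattern:

```
#include <bit>
#include <cstdint>
#include <iostream>

int main() {
  uint32_t intConst = 0;                          // incoming i32 constant 0
  float fpConst = std::bit_cast<float>(intConst); // same bits viewed as a float: 0.0f (C++20)
  std::cout << fpConst << "\n";                   // the float constant may materialize differently
}
```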

Differential Revision: https://reviews.llvm.org/D135561
2022-10-13 16:41:44 +01:00
Anton Sidorenko 4431e705cc [NFC] Use forward decl of MachineCombinerPattern enum to reduce dependencies
Differential Revision: https://reviews.llvm.org/D135776
2022-10-13 14:56:14 +01:00
Simon Tatham 526ce9c929 Propagate tied operands when copying a MachineInstr.
MachineInstr's copy constructor works by calling the addOperand method
to add each operand of the old MachineInstr to the new one, one by
one. But addOperand deliberately avoids trying to replicate ties
between operands, on the grounds that the tie refers to operands by
index, and the indices aren't necessarily finalized yet.

This led to a code generation fault when the machine pipeliner cloned
an Arm conditional instruction, and lost the tie between the output
register and the input value to be used when the condition failed to
execute.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D135434
2022-10-13 09:40:35 +01:00
Mirko Brkusanin 8b8463ef6c [SelectionDAG] Use consistent type sizes for opcode 2022-10-12 17:33:04 +02:00
Craig Topper ac9209751a Revert "[DAGCombiner] Fold (mul (sra X, BW-1), Y) -> (neg (and (sra X, BW-1), Y))"
This reverts commit 0148df8157.

Getting lit test failures on AMDGPU but I can't reproduce them so far.
Reverting to investigate.
2022-10-11 16:30:40 -07:00
Craig Topper 0148df8157 [DAGCombiner] Fold (mul (sra X, BW-1), Y) -> (neg (and (sra X, BW-1), Y))
(sra X, BW-1) is either 0 or -1. So the multiply is a conditional
negate of Y.

This pattern shows up when type legalizing wide multiplies involving
a sign extended value.
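
A quick sanity check of the identity behind the fold (a standalone sketch assuming arithmetic right shift of signed values, not the DAGCombiner code): for 32-bit values, `(x >> 31)` is 0 or -1, so multiplying by it is the same as negating a masked value.

```
#include <cassert>
#include <cstdint>

int32_t mulForm(int32_t x, int32_t y)    { return (x >> 31) * y; }    // (mul (sra X, 31), Y)
int32_t negAndForm(int32_t x, int32_t y) { return -((x >> 31) & y); } // (neg (and (sra X, 31), Y))

int main() {
  for (int32_t x : {-7, -1, 0, 1, 42})
    for (int32_t y : {-3, 0, 5, 123456})
      assert(mulForm(x, y) == negAndForm(x, y));
}
```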

Fixes PR57549.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D133399
2022-10-11 16:20:55 -07:00
Jessica Paquette 0f1a51e173 [GlobalISel] Allow vectors in redundant or + add combines
We support KnownBits for vectors, so we can enable these.

https://godbolt.org/z/r9a9W4Gj1

Differential Revision: https://reviews.llvm.org/D135719
2022-10-11 15:31:09 -07:00
Jessica Paquette 036a13065b [GlobalISel] Combine (X op Y) == X --> Y == 0
This matches patterns of the form

```
(X op Y) == X
```

And transforms them to

```
Y == 0
```

where appropriate.
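
A quick check of the identity for two representative ops where it clearly holds (my assumption about which ops qualify; the combine itself decides "where appropriate"):

```
#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t x : {0u, 1u, 0xdeadbeefu})
    for (uint32_t y : {0u, 1u, 42u, 0xffffffffu}) {
      assert(((x + y) == x) == (y == 0)); // add: X + Y == X iff Y == 0
      assert(((x ^ y) == x) == (y == 0)); // xor: X ^ Y == X iff Y == 0
    }
}
```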

Example: https://godbolt.org/z/hfW811c7W

Differential Revision: https://reviews.llvm.org/D135380
2022-10-11 09:52:48 -07:00
Philip Reames 487695e7c9 [SDAG] Treat DemandedElts argument to isSplatVector as splat for scalable vectors [nfc]
The previous code used an APInt(1, 0) to represent the demanded elts of a scalable vector, and then ignored that argument if the type was scalable.  This was inconsistent with the UndefElts parameter, which is set to either APInt(1, 0) or APInt(1, 1) - that is, implicitly broadcast across all lanes.  Particularly since the undef code relied on the DemandedElts parameter having bitwidth 1 to achieve that result!

This change switches the demanded parameter to APInt(1,1), documents the broadcast semantics, and takes advantage of it to remove one special case for scalable vectors which is no longer required.
2022-10-11 09:49:28 -07:00
Philip Reames ac4f3fff8c [SDAG] Clarify behavior of scalable demanded/undef elts in isSplatValue [nfc]
Update comment, and add an assertion to check property expected by sole (non-test) caller.  Remove tests which appear to have been copied from fixed vector tests, and whose demanded bits don't correspond to the way this interface is otherwise used.
2022-10-11 07:28:34 -07:00
Craig Topper 0121b1a4ac Revert "[TargetLowering][RISCV][X86] Support even divisors in expandDIVREMByConstant."
This reverts commit d4facda414.

This has been reported to cause failures. Reverting while I investigate.
2022-10-10 14:53:29 -07:00
Craig Topper d4facda414 [TargetLowering][RISCV][X86] Support even divisors in expandDIVREMByConstant.
If the divisor is even, we can first shift the dividend and divisor
right by the number of trailing zeros. Now the divisor is odd and we
can do the original algorithm to calculate a remainder. Then we shift
that remainder left by the number of trailing zeros and add the bits
that were shifted out of the dividend.
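
A standalone sketch of the arithmetic being described (an assumption about the intended semantics, not the TargetLowering implementation), using divisor 12 = 3 << 2:

```
#include <cassert>
#include <cstdint>

// Remainder by an even constant d = dOdd << tz: shift dividend and divisor,
// take the remainder by the odd part, then re-attach the shifted-out low bits.
uint32_t uremByEven(uint32_t x, uint32_t d, unsigned tz) {
  uint32_t dOdd = d >> tz;                       // odd part of the divisor
  uint32_t hiRem = (x >> tz) % dOdd;             // remainder of the shifted dividend
  return (hiRem << tz) | (x & ((1u << tz) - 1)); // add back the low tz bits of x
}

int main() {
  for (uint32_t x : {0u, 1u, 11u, 12u, 13u, 1000u, 4294967295u})
    assert(uremByEven(x, 12, 2) == x % 12);
}
```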

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D135541
2022-10-10 11:02:22 -07:00
wanglei 730ee6568c [LoongArch] Set correct encodings for DWARF exception handling
This patch sets correct encodings for DWARF exception handling for
LoongArch.

Differential Revision: https://reviews.llvm.org/D134710
2022-10-08 11:53:48 +08:00
Craig Topper 9f67047cf0 [VP][RISCV] Add vp.smax/smin/umax/umin intrinsics
Differential Revision: https://reviews.llvm.org/D135418
2022-10-07 17:14:31 -07:00
Amaury Séchet 62ea6c5be7 [DAGCombine] Deduplicate addcarry node using commutativity.
The first two parameters of addcarry are commutative. We may face a situation where both variants are present in the DAG, in which case we benefit from using just one.
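
For reference, a small model of why the two orderings are interchangeable (a sketch of addcarry's semantics as commonly described, not the ISD node itself):

```
#include <cassert>
#include <cstdint>

struct AddCarry { uint32_t sum; uint32_t carryOut; };

// addcarry: a + b + carryIn, returning the 32-bit sum and the carry out.
AddCarry addcarry(uint32_t a, uint32_t b, uint32_t carryIn) {
  uint64_t wide = uint64_t(a) + b + carryIn;
  return {uint32_t(wide), uint32_t(wide >> 32)};
}

int main() {
  for (uint32_t c : {0u, 1u}) {
    AddCarry x = addcarry(0xffffffffu, 7u, c);
    AddCarry y = addcarry(7u, 0xffffffffu, c); // first two operands swapped
    assert(x.sum == y.sum && x.carryOut == y.carryOut);
  }
}
```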

Depends on D57302 and D33587

Reviewed By: RKSimon, chfast

Differential Revision: https://reviews.llvm.org/D57317
2022-10-08 00:55:14 +02:00
eopXD dbc681c98e [VP][RISCV] Add vp.roundtozero and its RISC-V support
The scalar counterpart of this intrinsic is `llvm.trunc`. However, the name
ISD::VP_TRUNC is already taken by the LLVM IR `trunc` instruction. Naming this
`vp.ftrunc` would likely cause confusion with `vp.fptrunc`, so we add
`vp.roundtozero`, which looks similar to `vp.roundeven`.

Reviewed By: frasercrmck

Differential Revision: https://reviews.llvm.org/D135233
2022-10-07 02:15:23 -07:00
Pierre van Houtryve 36c3833783 [GISel] Add Trunc/Lshr/BuildVector Folding
Similar to the current "Trunc/BuildVector" folding, which folds low-element extracts of BuildVectors, this folds high-element extracts done using bit shifts.
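
Viewed on scalars, the pattern being folded looks like this (a hedged illustration of the bit-level equivalence, not the GISel matcher itself):

```
#include <cassert>
#include <cstdint>

int main() {
  uint32_t lo = 0x11111111, hi = 0x22222222;
  uint64_t packed = (uint64_t(hi) << 32) | lo;   // bits of build_vector(lo, hi)
  uint32_t extractedHi = uint32_t(packed >> 32); // trunc(lshr(packed, 32))
  assert(extractedHi == hi);                     // folds directly to the hi element
}
```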

For D134354

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D135148
2022-10-07 08:44:03 +00:00
Pierre van Houtryve a34977c4d0 [GISel] Handle G_TRUNC in `matchExtractVecEltBuildVec`
Spotted some cases in D134354 where this was an issue.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D135147
2022-10-07 08:37:18 +00:00
Mike Hommey d3b0e745e8 [CodeView] Avoid NULL deref of Scope
Regression from D131400: cross-language LTO causes a crash in the
compiler on a NULL deref of Scope in an `isa` call when Rust IR is
involved. Presumably, this might affect other languages too, and
even Rust itself without cross-language LTO once the Rust compiler
switches to LLVM 16.

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D134616
2022-10-07 08:34:57 +02:00
Philip Reames 04bb32e58a [DAG] Extract helper for (neg x) [nfc]
This is a frequently recurring pattern; let's factor it out.

Differential Revision: https://reviews.llvm.org/D135301
2022-10-06 13:23:52 -07:00
Pierre van Houtryve 3ec0085c3f [DAG] Update `isKnownNeverNaN` for `FMA/FMAD`
We can still get a NaN even if none of the operands are NaN,
e.g. from +inf/-inf. D50804 didn't catch that.
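
For example (a standalone check, not from the patch), fma can produce a NaN from infinite but non-NaN operands:

```
#include <cmath>
#include <cstdio>

int main() {
  // inf * 1.0 + (-inf) is NaN even though no input is NaN.
  double r = std::fma(INFINITY, 1.0, -INFINITY);
  std::printf("isnan = %d\n", std::isnan(r)); // prints 1
}
```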

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D134854
2022-10-06 06:52:36 +00:00
Ellis Hoag 549773f9e9 [Dwarf] Reference the correct CU when inlining
Sometimes when a function is inlined into a different CU, `llvm-dwarfdump --verify` would find an inlined subroutine with an invalid abstract origin. This is because `DwarfUnit::addDIEEntry()` will incorrectly assume the inlined subroutine and the abstract origin are from the same CU if it can't find the CU for the inlined subroutine.

In the added test, the inlined subroutine for `bar()` is created before the CU for `B.swift` is created, so it tries to point to `goo()` in the wrong CU. Interestingly, if we swap the order of the two functions then we don't see a crash since the module for `goo()` is created first.

The fix is to give a parent DIE to `ScopeDIE` before calling `addDIEEntry()` so that its CU can be found. Luckily, `constructInlinedScopeDIE()` is only called once so we can pass it the DIE of the scope's parent and give it a child just after it's created.

`constructInlinedScopeDIE()` should always return a DIE, so assert that it is not null.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D135114
2022-10-05 09:19:12 -07:00