Commit Graph

Serge Pavlov ec893da990 [GlobalISel] Remove semantic operand of G_IS_FPCLASS
Instruction G_IS_FPCLASS had an operand that represented the floating-point
semantics of its first operand. It allowed types that have the same length,
like `bfloat16` and `half`, to be distinguished. Unfortunately, that is
not sufficient, as other operations still cannot distinguish such types.
The solution to this problem must be more general, so this operand is now removed.
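
A hypothetical before/after sketch (operand encodings illustrative, not taken from the actual tests):

```
%res:_(s1) = G_IS_FPCLASS %val:_(s16), 3, 1   ; before: trailing FP-semantics operand
%res:_(s1) = G_IS_FPCLASS %val:_(s16), 3      ; after: only the test mask remains
```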

Differential Revision: https://reviews.llvm.org/D138004
2022-11-15 15:48:05 +07:00
Guozhi Wei 11e86868c1 [MachineCSE] Allow CSE for instructions with ignorable operands
Ignorable operands don't impact an instruction's behavior, so we can safely do
CSE on the instruction.

This is split from D130919. It has a big impact on some AMDGPU test cases.
For example, in atomic_optimizations_raw_buffer.ll, when trying to check whether
the following instruction can be CSEed:

  %37:vgpr_32 = V_MOV_B32_e32 0, implicit $exec

The function isCallerPreservedOrConstPhysReg is called on the operand
"implicit $exec". This function is implemented as:

  -  return TRI.isCallerPreservedPhysReg(Reg, MF) ||
  +  return TRI.isCallerPreservedPhysReg(Reg, MF) || TII.isIgnorableUse(MO) ||
            (MRI.reservedRegsFrozen() && MRI.isConstantPhysReg(Reg));

Both TRI.isCallerPreservedPhysReg and MRI.isConstantPhysReg return false on this
operand, so isCallerPreservedOrConstPhysReg is also false, which causes LLVM to
fail to CSE this instruction.

With this patch, TII.isIgnorableUse returns true for the operand $exec, so
isCallerPreservedOrConstPhysReg also returns true, which allows this instruction
to be CSEed with the previous instruction:

  %14:vgpr_32 = V_MOV_B32_e32 0, implicit $exec

So I get a different result from here on. AMDGPU's implementation of
isIgnorableUse is:

  bool SIInstrInfo::isIgnorableUse(const MachineOperand &MO) const {
    // Any implicit use of exec by VALU is not a real register read.
    return MO.getReg() == AMDGPU::EXEC && MO.isImplicit() &&
           isVALU(*MO.getParent()) && !resultDependsOnExec(*MO.getParent());
  }

Since the operand $exec is not a real register read, my understanding is that
it's reasonable to do CSE on such instructions.

Because more instructions are CSEed, fewer instructions are generated for
these tests.

Differential Revision: https://reviews.llvm.org/D137222
2022-11-14 19:34:59 +00:00
Nicholas Guy d52e2839f3 [ARM][CodeGen] Add support for complex deinterleaving
Adds the Complex Deinterleaving Pass, implementing support for complex numbers in a target-independent manner and deferring to the TargetLowering of the given target to create a target-specific intrinsic.
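
As a rough illustration (assuming the shufflevector-based shape the pass looks for), deinterleaving splits interleaved complex data into real and imaginary lanes:

```
; <4 x float> %vec holds two complex numbers as {re0, im0, re1, im1}
%real = shufflevector <4 x float> %vec, <4 x float> poison, <2 x i32> <i32 0, i32 2>
%imag = shufflevector <4 x float> %vec, <4 x float> poison, <2 x i32> <i32 1, i32 3>
```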

Differential Revision: https://reviews.llvm.org/D114174
2022-11-14 14:02:27 +00:00
Nikita Popov feda983ff8 [TableGen] Use MemoryEffects to represent intrinsic memory effects (NFCI)
The TableGen implementation was using a homegrown implementation of
FunctionModRefInfo. This switches it to use MemoryEffects instead.
This makes the code simpler, and will allow exposing the full
representational power of MemoryEffects in the future. Among other
things, this will allow us to map IntrHasSideEffects to an
inaccessiblemem readwrite, rather than just ignoring it entirely
in most cases.

To avoid layering issues, this moves the ModRef.h header from IR
to Support, so that it can be included in the TableGen layer.
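
A minimal sketch of the kind of composition this enables, assuming the MemoryEffects API from llvm/Support/ModRef.h (the helper below is hypothetical, not the TableGen code itself):

```
#include "llvm/Support/ModRef.h"
using namespace llvm;

// Hypothetical helper: argument-memory reads, plus inaccessible-memory
// readwrite when the intrinsic has side effects.
MemoryEffects describeIntrinsic(bool HasSideEffects) {
  MemoryEffects ME = MemoryEffects::argMemOnly(ModRefInfo::Ref);
  if (HasSideEffects)
    ME = ME | MemoryEffects::inaccessibleMemOnly(ModRefInfo::ModRef);
  return ME;
}
```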

Differential Revision: https://reviews.llvm.org/D137641
2022-11-14 10:52:04 +01:00
chenglin.bi 8482247900 [GlobalISel] Correct constant type in matchReassocConstantInnerLHS
When we match a pattern with m_GCst, the register type could differ from that of the original op, so we can't directly replace the original op with the vreg.
This change creates a new constant with the original op's type, then replaces the original op.
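
A hedged sketch of the fix (names illustrative, not the exact patch):

```
// Materialize the constant with the original operand's type instead of
// reusing a vreg whose type may differ.
LLT OpTy = MRI.getType(OrigOp.getReg());
auto NewCst = Builder.buildConstant(OpTy, CstVal);
OrigOp.setReg(NewCst.getReg(0));
```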

Fix #58906

Reviewed By: arsenm, aemerson

Differential Revision: https://reviews.llvm.org/D137778
2022-11-13 19:20:07 +08:00
Matt Arsenault 3cfa03856f AtomicExpand: Support cmpxchg expansion for small FP types
Handles f16 atomics for AMDGPU.
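An illustrative IR shape that this enables (not a test from the patch); such an atomicrmw can now be expanded via an integer cmpxchg loop:

```
define half @fadd_f16(ptr %p, half %v) {
  %r = atomicrmw fadd ptr %p, half %v seq_cst
  ret half %r
}
```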
2022-11-10 22:16:11 -08:00
Nick Desaulniers f2981a3bc9 [SelectDagISEL] refactor HandlePHINodesInSuccessorBlocks NFC.
While working on this code to support outputs from callbr along indirect
branches, I kept making these changes again and again. Precommit these.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D137445
2022-11-10 14:34:23 -08:00
Alexander Timofeev 27091e6227 [PEI][NFC] Refactoring of the debug instructions frame index replacement
This is required for the upcoming backward PEI::replaceFrameIndices version.
Both forward and backward versions will use the same code for debug instruction processing.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D137741
2022-11-10 14:02:03 +01:00
wlei 47b0758049 [SampleFDO] Persist profile staleness metrics into binary
With https://reviews.llvm.org/D136627, we now have metrics for profile staleness based on profile statistics. Monitoring profile staleness in real time can help users quickly identify performance issues. In a production scenario, the build is usually incremental, and if we want real-time metrics, we would have to store/cache all the old objects' metrics somewhere and pull them in at post-build time. To make this more convenient, this patch adds an option to persist the metrics into the object binary, so they can be reported right away by decoding the binary rather than by polling previous stdout/stderr output from a cache system.

For the implementation, it first writes the statistics into a new metadata section (llvm.stats) and then encodes them into a special ELF `.llvm_stats` section. The section data is formatted as a list of key/value pairs so that future statistics can be easily added. This is guarded by a new switch (`-persist-profile-staleness`).

In terms of size overhead, the metrics are computed at the module level, so the overhead should be small: measured on one of our internal services, it costs less than 1MB for a 10GB+ binary.
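
A hypothetical usage sketch (the exact plumbing may differ): enable the switch during a sample-FDO build, then look for the new section in the binary.

```
clang -O2 -fprofile-sample-use=foo.prof -mllvm -persist-profile-staleness foo.c -o foo
llvm-readelf -S foo | grep llvm_stats
```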

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D136698
2022-11-09 22:34:33 -08:00
chenglin.bi 597f444092 [TypePromotion] Replace Zext with Truncate when the src bitwidth is larger
Fix: https://github.com/llvm/llvm-project/issues/58843
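
An illustrative example of the issue (assuming promotion to i32): when the source is wider than the promoted type, a zext would be malformed IR, so a trunc must be emitted instead.

```
%narrow = trunc i64 %x to i32    ; a "zext i64 %x to i32" here would be invalid
```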

Reviewed By: samtebbs

Differential Revision: https://reviews.llvm.org/D137613
2022-11-09 05:08:01 +08:00
Nathan James 6aa050a690 Reland "[llvm][NFC] Use c++17 style variable type traits"
This reverts commit 632a389f96.

This relands commit
1834a310d0.

Differential Revision: https://reviews.llvm.org/D137493
2022-11-08 14:15:15 +00:00
Nathan James 632a389f96 Revert "[llvm][NFC] Use c++17 style variable type traits"
This reverts commit 1834a310d0.
2022-11-08 13:11:41 +00:00
Nathan James 1834a310d0
[llvm][NFC] Use c++17 style variable type traits
This was done as a test for D137302, and it makes sense to push these changes.
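
The mechanical change, for illustration:

```
#include <type_traits>

static_assert(std::is_integral<int>::value); // before
static_assert(std::is_integral_v<int>);      // after (C++17 variable template)
```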

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D137493
2022-11-08 12:22:52 +00:00
Petar Avramovic 838d5d371a AMDGPU/GlobalISel: Fix combine crash because LI is not set in prelegalizer
Caused by the legacy min/max combines (select + cmp) asking for legalizer info
in the prelegalizer (D135047 added the combine to all_combines).
The combine still does not work for AMDGPU since the destination opcode is
custom, not legal. A similar combine works on the DAG since it asks for legal or custom.

Differential Revision: https://reviews.llvm.org/D137274
2022-11-08 12:46:16 +01:00
Tobias Hieta aa99b607b5
[clang][pdb] Don't include -fmessage-length in PDB buildinfo
As discussed in https://reviews.llvm.org/D136474, -fmessage-length
creates problems with reproducibility in the PDB files.

This patch just drops that argument when writing the PDB file.

Reviewed By: hans

Differential Revision: https://reviews.llvm.org/D137322
2022-11-08 10:05:59 +01:00
Mingming Liu 36e8e19337 [NFC][BlockPlacement] Add an option to renumber blocks based on function layout order.
Use case:
- When block layout is visualized after MBP pass, the basic blocks are labeled in layout order; meanwhile blocks could be numbered in a different order.
- As a result, it's hard to map between the graph and pass output. With this option on, the basic blocks are renumbered in function layout order.

This option is only useful when a function is to be visualized (i.e., when view options are on), which makes it debugging-only.

Use https://godbolt.org/z/5WTW36bMr as an example:
- As MBP pass output (shown in godbolt output window), `func2` is in a basic block numbered `2` (`bb.2`), and `func1` is in a basic block numbered `3` (`bb.3`);
    `bb.3` is a block with a higher block frequency than `bb.2`, and `bb.3` is placed before `bb.2` in the function layout.
- Use [1] to get the dot graph (uploaded in [2]); the blocks are re-numbered.
   - `func1` is in the 'if.end' block and labeled `1` in the visualized dot; `func2` is in the 'if.then' block and labeled `3` --> the labeled number and the bb number won't map.
   - DOTGraphTraits<MachineBlockFrequencyInfo *>::getNodeLabel (b5626ae975, llvm/lib/CodeGen/MachineBlockFrequencyInfo.cpp, L127) is where labeled numbers are derived from the function layout order; it is called by the graph writer (a8d93783f3, llvm/include/llvm/Support/GraphWriter.h, L209).
        So calling 'MachineFunction::RenumberBlocks' makes the labeled number (in the dot graph) and the block number (in the pass output) consistent with each other.

[1] `./bin/clang++ -O3 -S -mllvm -view-block-layout-with-bfi=count -mllvm -view-bfi-func-name=_Z9func_loopv -mllvm -print-after=block-placement -mllvm  -filter-print-funcs=_Z9func_loopv test.c`

[2] {F25201785}

Reviewed By: davidxl

Differential Revision: https://reviews.llvm.org/D137467
2022-11-07 07:52:45 -08:00
Thomas Preud'homme c8be35293c [SWP] Recognize mem carried dep with different base
The loop-carried dependency detection logic in isLoopCarriedDep relies
on the load and store using the same definition for the base register.
This misses the case of post-increment loads and stores whose base
registers are different PHIs initialized from the same initial value.

This commit extends the logic to accept the load and store having
different PHI base addresses, provided that they have the same initial value
when entering the loop and are incremented by the same amount in each
loop iteration.
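
An IR-level illustration of the newly recognized shape (the pass itself works on MIR; names illustrative): the load and store use distinct address PHIs that share the initial value and the step.

```
loop:
  %lp = phi ptr [ %base, %entry ], [ %lp.next, %loop ]
  %sp = phi ptr [ %base, %entry ], [ %sp.next, %loop ]
  %v = load i32, ptr %lp
  store i32 %v, ptr %sp
  %lp.next = getelementptr inbounds i32, ptr %lp, i64 1
  %sp.next = getelementptr inbounds i32, ptr %sp, i64 1
  br label %loop
```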

Reviewed By: bcahoon

Differential Revision: https://reviews.llvm.org/D136463
2022-11-07 09:53:41 +00:00
Matt Arsenault 162d9030ab GlobalISel: Pass through AA metadata for target memory intrinsics
The corresponding change for the DAG was done in fa4aac7335
2022-11-06 22:14:12 -08:00
Amaury Séchet 82209fd96e [NFC] Refactor DAGCombiner::foldSelectOfConstants to reduce nesting 2.0 2022-11-05 17:10:06 +00:00
Amaury Séchet 7c05f092c9 [NFC] Refactor DAGCombiner::foldSelectOfConstants to reduce nesting 2022-11-05 16:17:58 +00:00
Shilei Tian 1186e9d59f [LLVM][AMDGPU] Specialize 32-bit atomic fadd instruction for generic address space
The 32-bit floating-point atomic add instructions on AMDGPUs do not support the
"flat" or "generic" address space. So, if the address space cannot be determined
statically, the AMDGPU backend will fall back to a CAS loop (which does support
"flat" addressing). Instead, this patch emits runtime address-space checks to
allow native FP atomic add instructions for global and LDS memory (and non-atomic
FP add instructions for private/scratch memory).
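
A control-flow-only sketch of the dispatch (heavily abbreviated pseudo-IR; the real expansion is produced by the backend):

```
  %is.shared = call i1 @llvm.amdgcn.is.shared(ptr %p)
  br i1 %is.shared, label %lds, label %check.private
check.private:
  %is.private = call i1 @llvm.amdgcn.is.private(ptr %p)
  br i1 %is.private, label %scratch, label %global
lds:     ; native LDS atomic fadd
global:  ; native global atomic fadd
scratch: ; non-atomic load + fadd + store
```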

In order to do that, this patch introduces a new interface function
`emitExpandAtomicRMW`. It is expected to be called when the common atomic
expansion doesn't work for a specific target, as in the case discussed here.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D129690
2022-11-04 14:11:05 -04:00
Nikita Popov 304f1d59ca [IR] Switch everything to use memory attribute
This switches everything to use the memory attribute proposed in
https://discourse.llvm.org/t/rfc-unify-memory-effect-attributes/65579.
The old argmemonly, inaccessiblememonly and inaccessiblemem_or_argmemonly
attributes are dropped. The readnone, readonly and writeonly attributes
are restricted to parameters only.
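
For example (illustrative declarations showing the auto-upgrade):

```
declare void @f(ptr) argmemonly readonly    ; old attributes
declare void @g(ptr) memory(argmem: read)   ; unified memory attribute
```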

The old attributes are auto-upgraded both in bitcode and IR.
The bitcode upgrade is a policy requirement that has to be retained
indefinitely. The IR upgrade is mainly there so it's not necessary
to update all tests using memory attributes in this patch, which
is already large enough. We could drop that part after migrating
tests, or retain it longer term, to make it easier to import IR
from older LLVM versions.

High-level Function/CallBase APIs like doesNotAccessMemory() or
setDoesNotAccessMemory() are mapped transparently to the memory
attribute. Code that directly manipulates attributes (e.g. via
AttributeList) on the other hand needs to switch to working with
the memory attribute instead.

Differential Revision: https://reviews.llvm.org/D135780
2022-11-04 10:21:38 +01:00
Haohai Wen e419620fc2 [CodeGenPrep] Change ValueToSExts from DenseMap to MapVector
mergeSExts iterates through ValueToSExts. Using a DenseMap results in an
unstable optimization path, so the output IR may vary even if the input
IR is the same.
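
A sketch of the replacement (element types illustrative): MapVector preserves insertion order during iteration, so the pass visits entries deterministically even with pointer keys.

```
#include "llvm/ADT/MapVector.h"
#include "llvm/ADT/SmallVector.h"

// Iterates in insertion order, unlike DenseMap.
llvm::MapVector<llvm::Value *, llvm::SmallVector<llvm::Instruction *, 8>> ValueToSExts;
```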

Reviewed By: wxiao3

Differential Revision: https://reviews.llvm.org/D137234
2022-11-04 11:15:18 +08:00
Nick Desaulniers 0589038a6f [StatepointLowering] remove unused parameter. NFC
Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D136885
2022-11-03 15:43:20 -07:00
Henry Yu ef0d689e8b [SelectionDAGBuilder] use bitcast instead of AnyExtOrTrunc when copying parts from an int vector to a float vector to fix issue #58615
getCopyFromPartsVector doesn't work correctly when PartEVT and ValueVT have both different element types and different sizes.

This patch
1) removes the part of a comment that contains the incorrect assumption that the element types are the same, and
2) uses a bitcast when copying parts of an int vector to a float vector after the subvector extraction

Reviewed By: Peter, efriedma

Differential Revision: https://reviews.llvm.org/D136726
2022-11-03 15:35:13 -07:00
Stefan Pintilie 9df924a634 [PowerPC] Add new DMR register classes to Future CPU.
A new register class as well as a number of related subregisters are being added
to Future CPU. These registers are Dense Math Registers (DMRs) and are 1024 bits
long. These registers can also be used in consecutive pairs, which leads to a
register that is 2048 bits long.

This patch also adds 7 new instructions that use these registers. More
instructions will be added in future patches.

Reviewed By: amyk, saghir

Differential Revision: https://reviews.llvm.org/D136366
2022-11-03 08:29:55 -05:00
Pierre van Houtryve 020a9d7b20 [GISel] Add (fsub +-0.0, X) -> fneg combine
Allows for better matching of VOP3 mods.
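
An illustrative generic-MIR shape of the fold (register types assumed):

```
%zero:_(s32) = G_FCONSTANT float -0.0
%sub:_(s32) = G_FSUB %zero, %x    ; folds to: %sub:_(s32) = G_FNEG %x
```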

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D136442
2022-11-03 08:21:50 +00:00
Matt Arsenault cbce11c422 WebAssembly: Move exception handling code together 2022-11-02 16:05:34 -07:00
Matt Arsenault 4fed59ed41 FunctionLoweringInfo: Use TLI member instead of finding it 2022-11-02 16:05:34 -07:00
John Brawn 2d8c1597e5 [MIRVRegNamer] Avoid opcode hash collision
D121929 happens to cause CodeGen/MIR/AArch64/mirnamer.mir to fail due
to a hash collision caused by adding two extra opcodes. The collision
is only in the top 19 bits of the hashed opcode, so fix this by
using the whole hash (in fixed-width hex for consistency) instead of
the top 5 decimal digits.

Differential Revision: https://reviews.llvm.org/D137155
2022-11-02 13:53:12 +00:00
John Brawn 88ac25b357 [MachineCSE] Allow PRE of instructions that read physical registers
Currently MachineCSE forbids PRE when the instruction reads a physical
register. Relax this so that it's allowed when the value being read is
the same as what would be read in the place the instruction would be
hoisted to.

This is being done in preparation for adding FPCR handling to the
AArch64 backend, in order to prevent it from worsening the
generated code, but for targets that already have a similar register
it should improve things.

This patch affects code generation in several tests. The new code
looks better except in Thumb2/LowOverheadLoops/memcall.ll, where
we perform PRE but the LowOverheadLoops transformation then undoes
it. Also in AMDGPU/selectcc-opt.ll the CHECK makes things look worse,
but actually the function as a whole is better (as a MOV is PRE'd).

Differential Revision: https://reviews.llvm.org/D136675
2022-11-02 13:53:12 +00:00
Anton Sidorenko 92ec614988 [GlobalISel] Compute debug location when merging stores more accurately
Originally, the loop did almost nothing, as the calculated location was
overwritten on the next iteration.

Differential Revision: https://reviews.llvm.org/D136937
2022-11-01 14:32:42 +03:00
Yeting Kuo 71e4e35581 [VP][RISCV] Add vp.rint and RISC-V support.
FRINT uses the dynamic rounding mode instead of a static rounding mode. The patch
renames VFCVT_X_F_VL to VFCVT_RM_X_F_VL for static rounding mode uses and adds a
new ISD node VFCVT_X_F_VL that is directly selected to PseudoVFCVT_X_F_V.
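
The new intrinsic follows the usual VP shape of (value, mask, explicit vector length); an illustrative fixed-vector instance:

```
declare <4 x float> @llvm.vp.rint.v4f32(<4 x float>, <4 x i1>, i32)
  %r = call <4 x float> @llvm.vp.rint.v4f32(<4 x float> %x, <4 x i1> %m, i32 %evl)
```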

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D136662
2022-11-01 14:52:47 +08:00
Matt Arsenault b60a9ccd02 AtomicExpand: Use InstSimplifyFolder
Automatically clean up operations if we know the atomic has higher
alignment.
2022-10-31 23:31:42 -07:00
Matt Arsenault 07f12170a2 AtomicExpand: Don't create unused instructions for some atomicrmw
This wasn't used by every atomicrmw expansion.
2022-10-31 18:34:36 -07:00
Simon Pilgrim 55a11b542e [VectorUtils] Add getShuffleDemandedElts helper
We have similar code to translate a demanded elements mask for a shuffle's operands in multiple places - this patch adds a helper function to VectorUtils and updates a number of locations to use it directly.
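
A hedged usage sketch (assuming the helper's rough shape; the exact signature may differ):

```
APInt DemandedLHS, DemandedRHS;
if (getShuffleDemandedElts(NumElts, ShuffleMask, DemandedElts,
                           DemandedLHS, DemandedRHS)) {
  // DemandedLHS/DemandedRHS now describe the elements needed from
  // shuffle operands 0 and 1 respectively.
}
```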

Differential Revision: https://reviews.llvm.org/D136832
2022-10-30 17:03:55 +00:00
Simon Pilgrim 78739fdb4d [DAG] Enable combineShiftOfShiftedLogic folds after type legalization
This was disabled to prevent regressions, which appear to occur only on AMDGPU (at least in our current lit tests); I've addressed this by adding AMDGPUTargetLowering::isDesirableToCommuteWithShift overrides.
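
For reference, the fold in question reassociates a shift through a bitwise-logic op (a DAG-level fold, sketched here in IR terms):

```
; shl (and (shl %x, 2), %y), 3  -->  and (shl %x, 5), (shl %y, 3)
```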

Fixes #57872

Differential Revision: https://reviews.llvm.org/D136042
2022-10-29 12:30:04 +01:00
Simon Pilgrim 69d117edc2 [DAG] ExpandIntRes_MINMAX - simplify cases with sufficient number of sign bits
When legalizing a smax/smin/umax/umin op, if we know that the upper half is all sign bits, then we can perform the op on the lower half and then sign extend the result to the upper half.
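
A C-level illustration of the idea, assuming an i64 smax whose operands' upper halves are all sign bits (i.e., values sign-extended from i32):

```
#include <stdint.h>

int64_t smax64_via_lo(int32_t a_lo, int32_t b_lo) {
  int32_t lo = a_lo > b_lo ? a_lo : b_lo; // perform the op on the low half only
  return (int64_t)lo;                     // sign extension recreates the high half
}
```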

Alive2: https://alive2.llvm.org/ce/z/rk8Rfd

Fixes #58630
2022-10-28 17:10:45 +01:00
John Brawn 7a7b36e96b Revert "[MachineCSE] Allow PRE of instructions that read physical registers"
This reverts commit 628467e53f.

This is causing a miscompile in ffmpeg when compiled for armv7.
2022-10-28 14:39:56 +01:00
Sanjay Patel 1e7c1dd67c [SDAG] avoid crash from mismatched types in scalar-to-vector fold
This bug was introduced with D136713 / 54eeadcf44.

As an enhancement, we could cast operands to the expected type,
but we need to make sure that is done correctly (zext vs. sext).
It's also possible (but seems unlikely) that an operand can have
a type larger than the result type.

Fixes #58661
2022-10-28 09:14:08 -04:00
Simon Pilgrim d47f056cd2 [DAG] visitXOR - fold XOR(A,B) -> OR(A,B) iff A and B have no common bits
Alive2: https://alive2.llvm.org/ce/z/7wvfns

Part of Issue #58624
2022-10-28 12:11:12 +01:00
Simon Pilgrim 28bfd853ab [DAG] visitFSUBForFMACombine - pass callbacks by reference in isContractableAndReassociableFMUL lambda capture. NFC.
Fixes a Coverity remark about large copies by value.
2022-10-28 11:48:45 +01:00
Simon Pilgrim 32d51581da IndirectBrExpandPass - remove unused DL from lambda capture. NFC.
Oddly, I wasn't seeing an unused variable warning; this came up in a Coverity remark about large copies by value!
2022-10-28 11:05:42 +01:00
Pierre van Houtryve 088a816824 [DAGCombiner] Use `getAnyExtOrTrunc` instead of TRUNCATE in ExtractVectorElt combine
ScalarVT isn't guaranteed to be smaller than the BCSrc.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D136849
2022-10-28 06:33:29 +00:00
Hongtao Yu d5a963ab8b [PseudoProbe] Replace relocation with offset for entry probe.
Currently pseudo probe encoding for a function is like:
	- For the first probe, a relocation from it to its physical position in the code body
	- For subsequent probes, an incremental offset from the current probe to the previous probe

The relocation could potentially cause relocation overflow during link time. I'm now replacing it with an offset from the first probe to the function start address.

A source function could be lowered into multiple binary functions due to outlining (e.g., coro-split). Since those binary functions have independent link-time layouts, to really avoid relocations from .pseudo_probe sections to .text sections, the offset to replace with should really be the offset from the probe's enclosing binary function, rather than from the entry of the source function. This requires some changes to the previous section-based emission scheme, which now becomes function-based. The assembly form of the pseudo probe directive is changed correspondingly, i.e., to reflect the binary function name.

Most of the source functions end up with only one binary function. For those that don't, a sentinel probe is emitted for each of the binary functions with a name different from the source. The sentinel probe indicates the binary function name to differentiate subsequent probes from the ones of a different binary function. For example, given the source function

```
Foo() {
  …
  Probe 1
  …
  Probe 2
}
```

If it is transformed into two binary functions:

```
Foo:
   …

Foo.outlined:
   …
```

The encoding for the two binary functions will be separate:

```

GUID of Foo
  Probe 1

GUID of Foo
  Sentinel probe of Foo.outlined
  Probe 2
```

Then Probe 1 will be decoded against binary `Foo`'s address, and Probe 2 will be decoded against `Foo.outlined`'s. The sentinel probe of `Foo.outlined` makes sure there's no accidental relocation from `Foo.outlined`'s probes to `Foo`'s entry address.

On the BOLT side, to be minimally intrusive, the pseudo probe re-encoding sticks with the old encoding format. This is fine since, unlike the linker, BOLT processes the pseudo probe section as a whole and is free from relocation overflow issues.

The change is backward compatible as long as there's no mixed use of the old encoding and the new encoding.

Reviewed By: wenlei, maksfb

Differential Revision: https://reviews.llvm.org/D135912
Differential Revision: https://reviews.llvm.org/D135914
Differential Revision: https://reviews.llvm.org/D136394
2022-10-27 13:28:22 -07:00
Craig Topper 00d93def77 [LegalizeVectorOps][X86][RISCV] Expand vector S/USHLSAT instead of unrolling.
Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D136478
2022-10-27 09:09:36 -07:00
John Brawn 628467e53f [MachineCSE] Allow PRE of instructions that read physical registers
Currently MachineCSE forbids PRE when the instruction reads a physical
register. Relax this so that it's allowed when the value being read is
the same as what would be read in the place the instruction would be
hoisted to.

This is being done in preparation for adding FPCR handling to the
AArch64 backend, in order to prevent it from worsening the
generated code, but for targets that already have a similar register
it should improve things.

This patch affects code generation in several tests. The new code
looks better except in Thumb2/LowOverheadLoops/memcall.ll, where
we perform PRE but the LowOverheadLoops transformation then undoes
it. Also in AMDGPU/selectcc-opt.ll the CHECK makes things look worse,
but actually the function as a whole is better (as a MOV is PRE'd).

Differential Revision: https://reviews.llvm.org/D136675
2022-10-27 14:14:57 +01:00
David Spickett c6e2de6042 [LLVM] Use DWARFv4 bitfields when tuning for GDB
GDB implemented data_bit_offset in https://sourceware.org/bugzilla/show_bug.cgi?id=12616,
and the support has been present since GDB 8.0.

GCC started using it at GCC 11.

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D135583
2022-10-27 08:51:07 +00:00
Paul Kirth 2e1e2f52f3 [CodeGen] Improve large stack frame diagnostic
Add statistics about how much memory is used by variables, spills, and
unsafestack.

Issue #58168 describes some of the difficulty diagnosing stack size issues
identified by -Wframe-larger-than. D135488 addresses some of those issues by
giving developers a method to view the stack layout and thereby understand
where and how stack memory is used.

However, that solution requires an additional pass, whereas a short summary of
how the compiler has allocated stack memory can inform developers about where
they should investigate. When they need the complete context, D135488 can
provide them with a more comprehensive set of diagnostics.

Reviewed By: nickdesaulniers

Differential Revision: https://reviews.llvm.org/D136484
2022-10-27 00:51:45 +00:00
Sanjay Patel 54eeadcf44 [SDAG] avoid vector extract/insert around binop
scalar-to-vector (scalar binop (extractelt V, Idx), C) --> shuffle (vector binop V, C'), {Idx, -1, -1...}

We generally try to avoid ad-hoc vectorization in SDAG,
but the motivating case from issue #39482 escapes our
normal vectorization folds in IR. It seems like it should
always be a win to transform this pattern in cases where
we have the same vector type for input and output and the
target supports the vector operation. That avoids
transfers from vector to scalar and back.

In the x86 shift examples, we create the scalar-to-vector
node during legalization. I'm not sure if there's a more
general way to create the pattern for testing. (If so, I
could add tests for other targets.)

Differential Revision: https://reviews.llvm.org/D136713
2022-10-26 14:04:46 -04:00