Commit Graph

59187 Commits

Krzysztof Parzyszek c2b7b9b642 [Hexagon] Fix order of operands in V6_vdealb4w 2020-09-08 22:09:28 -05:00
Brad Smith 88b368a1c4 [PowerPC] Set setMaxAtomicSizeInBitsSupported appropriately for 32-bit PowerPC in PPCTargetLowering
Reviewed By: nemanjai

Differential Revision: https://reviews.llvm.org/D86165
2020-09-08 21:21:14 -04:00
Craig Topper b1e68f885b [SelectionDAGBuilder] Pass fast math flags to getNode calls rather than trying to set them after the fact.
This removes the after-the-fact FMF handling from D46854 in favor of passing fast math flags to getNode. This should be a superset of D87130.

This required adding a SDNodeFlags to SelectionDAG::getSetCC.

Now we manage to constant fold some undef operands during the
initial getNode call that we don't fold in later DAG combines.
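
A rough sketch of the shape of this change (signatures approximate, not the exact diff):
```
// Sketch: hand the flags to getNode() up front so folds performed inside
// getNode() can use them, instead of mutating the node after creation.
SDNodeFlags Flags;
Flags.copyFMF(*cast<FPMathOperator>(&I)); // FMF taken from the IR instruction
SDValue Res = DAG.getNode(Opcode, DL, VT, Op1, Op2, Flags);
// previously (after the fact):
//   SDValue Res = DAG.getNode(Opcode, DL, VT, Op1, Op2);
//   Res->setFlags(Flags);
```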

Differential Revision: https://reviews.llvm.org/D87200
2020-09-08 15:27:21 -07:00
Krzysztof Parzyszek d183f47261 [Hexagon] Handle widening of truncation's operand with legal result
Failing example: v8i8 = truncate v8i32. v8i8 is legal, but v8i32 was
widened to HVX. Make sure that v8i8 does not get altered (even if it's
changed to another legal type).
2020-09-08 16:07:39 -05:00
Simon Pilgrim 0dacf3b5ac RISCVMatInt.h - remove unnecessary includes. NFCI.
Add APInt forward declaration and move include to RISCVMatInt.cpp
2020-09-08 18:25:24 +01:00
Heejin Ahn d25c17f317 [WebAssembly] Fix fixEndsAtEndOfFunction for try-catch
When the function return type is non-void and `end` instructions are at
the very end of a function, CFGStackify's `fixEndsAtEndOfFunction`
function fixes the corresponding block/loop/try's type to match the
function's return type. This is applied to consecutive `end` markers at
the end of a function. For example, when the function return type is
`i32`,
```
block i32    ;; return type is fixed to i32
  ...
  loop i32   ;; return type is fixed to i32
    ...
  end_loop
end_block
end_function
```

But try-catch is a little different, because it consists of two parts:
a try part and a catch part, and both parts' return types should satisfy
the function's return type, which means:
```
try i32      ;; return type is fixed to i32
  ...
  block i32  ;; this should be changed to i32 too!
    ...
  end_block
catch
  ...
end_try
end_function
```
As you can see in this example, it is not sufficient to fix only the `end`
instructions at the end of a function; in the case of `try`, we should also
check the instructions before each `catch`, in case their corresponding
`try`'s type has been fixed.

This changes `fixEndsAtEndOfFunction`'s algorithm to use a worklist of
reverse iterators, each of which is a starting point for a new backward
`end` instruction search, as sketched below.
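
A standalone sketch of the worklist idea over a simplified string-based instruction stream (names illustrative, not the actual CFGStackify code):
```
#include <string>
#include <vector>

using Instrs = std::vector<std::string>;

// Each worklist entry is a position from which a new backward search over
// consecutive `end` markers begins.
void fixEndsAtEndOfFunction(Instrs &Body) {
  std::vector<size_t> Worklist{Body.size()}; // start at the function's end
  while (!Worklist.empty()) {
    size_t I = Worklist.back();
    Worklist.pop_back();
    while (I-- > 0) {
      const std::string &Inst = Body[I];
      if (Inst != "end_block" && Inst != "end_loop" && Inst != "end_try" &&
          Inst != "end_function")
        break; // the first non-`end` instruction terminates this search
      // ...fix the matching marker's result type to the fn return type...
      if (Inst == "end_try") {
        // The try part's type must match too: queue a second backward
        // search starting just before the matching `catch`.
        size_t Depth = 0;
        for (size_t J = I; J-- > 0;) {
          if (Body[J] == "end_try")
            ++Depth;
          else if (Body[J] == "catch") {
            if (Depth == 0) { Worklist.push_back(J); break; }
            --Depth;
          }
        }
      }
    }
  }
}
```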

Fixes https://bugs.llvm.org/show_bug.cgi?id=47413.

Reviewed By: dschuff, tlively

Differential Revision: https://reviews.llvm.org/D87207
2020-09-08 09:27:40 -07:00
Ronak Chauhan 487a805310 [AMDGPU] Support disassembly for AMDGPU kernel descriptors
Decode AMDGPU Kernel descriptors as assembler directives.

Reviewed By: scott.linder, jhenderson, kzhuravl

Differential Revision: https://reviews.llvm.org/D80713
2020-09-08 21:26:11 +05:30
Simon Pilgrim fcff2c32c0 X86CallLowering.cpp - improve auto const/pointer/reference qualifiers. NFCI.
Fix clang-tidy warnings by ensuring auto variables are more cleanly qualified, or just avoid auto entirely.
2020-09-08 13:01:23 +01:00
Simon Pilgrim 0729ae367a X86DomainReassignment.cpp - improve auto const/pointer/reference qualifiers. NFCI.
Fix clang-tidy warnings by ensuring auto variables are more cleanly qualified, or just avoid auto entirely.
2020-09-08 13:01:23 +01:00
Sam Tebbs 7aabb6ad77 [ARM][LowOverheadLoops] Remove modifications to the correct element
count register

After my patch at D86087, the code now uses the mov operand rather than
the vctp operand, so it no longer removes modifications to the vctp
operand as it should. This patch fixes that by explicitly removing
modifications to the vctp operand rather than to the register used as the
element count.
2020-09-08 10:30:05 +01:00
Qiu Chaofan 8d9c13f37d Revert "[PowerPC] Implement instruction clustering for stores"
This reverts commit 3c0b325023 (along
with ea795304 and bb39eb9e), since it breaks a test with the UB sanitizer.
2020-09-08 17:24:08 +08:00
Qiu Chaofan bb39eb9e7f [PowerPC] Fix getMemOperandWithOffsetWidth
Commit 3c0b3250 introduced memory clustering for the pwr10 target, but a
check for the operands was unexpectedly removed. This adds it back to
avoid a regression.
2020-09-08 15:35:25 +08:00
Simon Wallis 8ee1419ab6 [AARCH64][RegisterCoalescer] clang miscompiles zero-extension to long long
Implement AArch64 variant of shouldCoalesce() to detect a known failing case
and prevent the coalescing of a 32-bit copy into a 64-bit sign-extending load.

Do not coalesce in the following case:
a COPY where the source is the bottom 32 bits of a 64-bit register,
and the destination is a 32-bit subregister of a 64-bit register,
i.e. the copy causes the rest of the destination register to be implicitly set to zero.

A mir test has been added.

In the test case, the 32-bit copy implements a 32 to 64 bit zero extension
and relies on the upper 32 bits being zeroed.

Coalescing to the result of the 64-bit load meant overwriting
the upper 32 bits incorrectly when the loaded byte was negative.
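
A hedged source-level illustration of the pattern (the commit's actual test is MIR):
```
// The 32-bit copy implements the 32->64-bit zero extension; coalescing it
// into the 64-bit sign-extending load would leave sign bits in the upper
// half of the result when *P is negative.
long long f(signed char *P) {
  int V = *P;             // sign-extending byte load
  return (unsigned int)V; // 32->64 zero extension via a 32-bit copy
}
```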

Reviewed By: john.brawn

Differential Revision: https://reviews.llvm.org/D85956
2020-09-08 08:04:52 +01:00
Mikael Holmen ea795304ec [PowerPC] Add parentheses to silence gcc warning
Without this change, gcc 7.4 warns with:

../lib/Target/PowerPC/PPCInstrInfo.cpp:2284:25: warning: suggest parentheses around '&&' within '||' [-Wparentheses]
          BaseOp1.isFI() &&
          ~~~~~~~~~~~~~~~^~
              "Only base registers and frame indices are supported.");
              ~
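
The likely shape of the fix, sketched (condition approximated from the warning text):
```
// Parenthesize the '||' so the trailing '&&' with the message string
// binds the way the author intended.
assert((BaseOp1.isReg() || BaseOp1.isFI()) &&
       "Only base registers and frame indices are supported.");
```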
2020-09-08 08:39:57 +02:00
Qiu Chaofan 3c0b325023 [PowerPC] Implement instruction clustering for stores
On Power10, it's profitable to schedule some stores with adjacent target
addresses together. This patch implements this feature.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D86754
2020-09-08 11:03:09 +08:00
Roman Lebedev bb7d3af113
Reland [SimplifyCFG][LoopRotate] SimplifyCFG: disable common instruction hoisting by default, enable late in pipeline
This was reverted in 503deec218
because it caused a gigantic increase (3x) in branch mispredictions
in certain benchmarks on certain CPUs,
see https://reviews.llvm.org/D84108#2227365.

It has since been investigated and here are the results:
https://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20200907/827578.html
> It's an amazingly severe regression, but it's also all due to branch
> mispredicts (about 3x without this). The code layout looks ok so there's
> probably something else to deal with. I'm not sure there's anything we can
> reasonably do so we'll just have to take the hit for now and wait for
> another code reorganization to make the branch predictor a bit more happy :)
>
> Thanks for giving us some time to investigate and feel free to recommit
> whenever you'd like.
>
> -eric

So let's just reland this.
Original commit message:


I've been looking at missed vectorizations in one codebase.
One particular thing that stands out is that some of the loops
reach the vectorizer in a rather mangled form, with weird PHIs,
and some of the loops aren't even in a rotated form.

After taking a more detailed look, that happened because
the loops' headers were too big by then. It is evident that
SimplifyCFG's common code hoisting transform is at fault there,
because the pattern it handles is precisely the unrotated
loop basic block structure.

Surprisingly, `SimplifyCFGOpt::HoistThenElseCodeToIf()` is enabled
by default, and is always run, unlike its friend, the common code sinking
transform, `SinkCommonCodeFromPredecessors()`, which is not enabled
by default and is only run once very late in the pipeline.

I'm proposing to harmonize this, and disable common code hoisting
until //late// in the pipeline. The definition of //late// may vary;
here I've currently picked the same one as for code sinking,
but I suppose we could enable it right after loop rotation happens.

Experimentation shows that this does indeed, unsurprisingly, help:
more loops get rotated, although other issues remain elsewhere.

Now, this undoubtedly seriously shakes up phase ordering.
This will undoubtedly be a mixed bag in terms of compile-time
performance, run-time performance, and code size. Since we no longer
aggressively hoist+deduplicate common code, we don't pay the price of said
hoisting (which wasn't big). That may allow more loops to be rotated,
so we pay that price. That, in turn, may enable all the transforms
that require canonical (rotated) loop form, including but not limited to
vectorization, so we pay that too. And in general, no deduplication means
more [duplicate] instructions going through the optimizations. But there's
still late hoisting, so some of them will be caught then.

As per the benchmarks I've run {F12360204}, this is mostly within the noise;
there are some small improvements, some small regressions.
One big regression I saw I fixed in rG8d487668d09fb0e4e54f36207f07c1480ffabbfd, but I'm sure
this will expose many more pre-existing missed optimizations, as usual :S

llvm-compile-time-tracker.com thoughts on this:
http://llvm-compile-time-tracker.com/compare.php?from=e40315d2b4ed1e38962a8f33ff151693ed4ada63&to=c8289c0ecbf235da9fb0e3bc052e3c0d6bff5cf9&stat=instructions
* this does regress compile-time by +0.5% geomean (unsurprisingly)
* size impact varies; for ThinLTO it's actually an improvement

The largest fallout appears to be in GVN's load partial redundancy
elimination, it spends *much* more time in
`MemoryDependenceResults::getNonLocalPointerDependency()`.
Non-local `MemoryDependenceResults` is widely-known to be, uh, costly.
There does not appear to be a proper solution to this issue,
other than silencing the compile-time performance regression
by tuning cut-off thresholds in `MemoryDependenceResults`,
at the cost of potentially regressing run-time performance.
D84609 attempts to move in that direction, but the path is unclear
and is going to take some time.

If we look at stats before/after diffs, some excerpts:
* RawSpeed (the target) {F12360200}
  * -14 (-73.68%) loops not rotated due to the header size (yay)
  * -272 (-0.67%) `"Number of live out of a loop variables"` - good for vectorizer
  * -3937 (-64.19%) common instructions hoisted
  * +561 (+0.06%) x86 asm instructions
  * -2 basic blocks
  * +2418 (+0.11%) IR instructions
* vanilla test-suite + RawSpeed + darktable  {F12360201}
  * -36396 (-65.29%) common instructions hoisted
  * +1676 (+0.02%) x86 asm instructions
  * +662 (+0.06%) basic blocks
  * +4395 (+0.04%) IR instructions

It is likely to be sub-optimal when optimizing for code size,
so one might want to tune the pipeline by enabling sinking/hoisting
when optimizing for size.

Reviewed By: mkazantsev

Differential Revision: https://reviews.llvm.org/D84108

This reverts commit 503deec218.
2020-09-08 00:24:03 +03:00
Craig Topper da79b1eecc [SelectionDAG][X86][ARM] Teach ExpandIntRes_ABS to use sra+add+xor expansion when ADDCARRY is supported.
Rather than using SELECT instructions, use SRA, UADDO/ADDCARRY and
XORs to expand ABS. This is the multi-part version of the sequence
we use in LegalizeDAG.

It's also the same sequence that the Custom expansion uses for i64 on
32-bit targets and i128 on 64-bit targets, so we can remove the X86 customization.
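
The underlying branchless identity, sketched for a single 32-bit part; in the multi-part expansion the add is split across parts with UADDO/ADDCARRY:
```
#include <cstdint>

// abs(x) = (x + (x >> 31)) ^ (x >> 31), using an arithmetic shift.
int32_t absViaSraAddXor(int32_t X) {
  int32_t Sign = X >> 31;   // all ones if X is negative, else all zeros
  return (X + Sign) ^ Sign; // branchless: no SELECT needed
}
```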

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D87215
2020-09-07 13:15:26 -07:00
Craig Topper 01b3e16757 [X86] Use the same sequence for i128 ISD::ABS on 64-bit targets as we use for i64 on 32-bit targets.
Differential Revision: https://reviews.llvm.org/D87214
2020-09-07 11:14:05 -07:00
Simon Pilgrim 4e89a0ab02 MipsISelLowering.h - remove CCState/CCValAssign forward declarations. NFCI.
These are already defined in the CallingConvLower.h include.
2020-09-07 18:15:26 +01:00
Simon Pilgrim 95ca3aacf0 BTFDebug.h - reduce MachineInstr.h include to forward declaration. NFCI. 2020-09-07 17:51:13 +01:00
Simon Pilgrim dfc333050b LeonPasses.h - remove unnecessary includes. NFCI.
Reduce to forward declarations and move includes to LeonPasses.cpp where necessary.
2020-09-07 17:51:12 +01:00
Simon Pilgrim 1c34ac03a2 LeonPasses.h - remove orphan function declarations. NFCI.
The implementations no longer exist.
2020-09-07 17:51:12 +01:00
alex-t 2480a31e5d [AMDGPU] SILowerControlFlow::optimizeEndCF should remove empty basic block
optimizeEndCF removes the EXEC-restoring instruction when it is the only instruction in the block apart from the branch to the single successor, and that successor contains an EXEC-mask-restoring instruction that was lowered from an END_CF belonging to an IF_ELSE.
As a result of this optimization we get a basic block whose only instruction is a branch to the single successor.
If control flow can reach such an empty block from S_CBRANCH_EXECZ/EXECNZ, spill/reload instructions inserted later by the register allocator may be placed under an exec == 0 condition and never execute.
Removing the empty block solves the problem.

This change requires further work to re-implement LIS updates. Currently, LIS is always nullptr in this pass; to enable it we need another patch to fix many places across the codegen.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D86634
2020-09-07 19:37:27 +03:00
Simon Pilgrim 9de0a3da6a [X86][SSE] Don't use LowerVSETCCWithSUBUS for unsigned compare with +ve operands (PR47448)
We already simplify the unsigned comparisons if we've found the operands to be non-negative, but we were still calling LowerVSETCCWithSUBUS, which resulted in the PR47448 regressions.
2020-09-07 16:11:40 +01:00
Simon Pilgrim 5bb27e735d X86AvoidStoreForwardingBlocks.cpp - use unsigned for Opcode values. NFCI.
Fixes clang-tidy cppcoreguidelines-narrowing-conversions warnings.
2020-09-07 12:56:27 +01:00
Simon Pilgrim 9b645ebfff [X86][AVX] Use lowerShuffleWithPERMV in shuffle combining to support non-VLX targets
lowerShuffleWithPERMV allows us to use the ZMM variants for 128/256-bit variable shuffles on non-VLX AVX512 targets.

This is another step towards shuffle combining across vector widths - we still end up with an annoying regression (combine_vpermilvar_vperm2f128_zero_8f32) but we're going in the right direction.
2020-09-07 12:50:50 +01:00
Benjamin Kramer 7ba0f81934 [X86] Unbreak the build after 22fa6b20d9 2020-09-07 12:24:30 +02:00
Simon Pilgrim 71dfdbe2c7 [X86] getFauxShuffleMask - handle insert_subvector(zero, sub, C)
Directly use SM_SentinelZero elements if we're (widening) inserting into a zero vector.
2020-09-07 11:10:40 +01:00
Simon Pilgrim 9ad261540d [X86] Use Register instead of unsigned. NFCI.
Fixes llvm-prefer-register-over-unsigned clang-tidy warnings.
2020-09-07 10:49:29 +01:00
Simon Pilgrim 22fa6b20d9 [X86] Use Register instead of unsigned. NFCI.
Fixes llvm-prefer-register-over-unsigned clang-tidy warnings.
2020-09-07 10:38:09 +01:00
Simon Pilgrim 0dbe2504af [X86] Use Register instead of unsigned. NFCI.
Fixes llvm-prefer-register-over-unsigned clang-tidy warning.
2020-09-07 10:38:08 +01:00
Sam Parker 0af4147804 [ARM][CostModel] CodeSize costs for i1 arith ops
When optimising for size, make the cost of i1 logical operations
relatively expensive so that optimisations don't try to combine
predicates.

Differential Revision: https://reviews.llvm.org/D86525
2020-09-07 09:27:18 +01:00
Thomas Lively caee15a0ed [WebAssembly] Fix incorrect assumption of simple value types
Fixes PR47375, in which an assertion was triggering because
WebAssemblyTargetLowering::isVectorLoadExtDesirable was improperly
assuming the use of simple value types.
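
An approximate sketch of the class of fix (not the exact patch): compare EVTs directly instead of calling getSimpleValueType(), which asserts on extended types.
```
bool WebAssemblyTargetLowering::isVectorLoadExtDesirable(SDValue ExtVal) const {
  EVT ExtT = ExtVal.getValueType();
  EVT MemT = cast<LoadSDNode>(ExtVal->getOperand(0))->getMemoryVT();
  // EVT-to-MVT comparisons are safe even when ExtT/MemT are not simple.
  return (ExtT == MVT::v8i16 && MemT == MVT::v8i8) ||
         (ExtT == MVT::v4i32 && MemT == MVT::v4i16) ||
         (ExtT == MVT::v2i64 && MemT == MVT::v2i32);
}
```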

Differential Revision: https://reviews.llvm.org/D87110
2020-09-06 15:42:21 -07:00
Amy Kwan efa57f9a7a [PowerPC] Implement Vector Expand Mask builtins in LLVM/Clang
This patch implements the vec_expandm function prototypes in altivec.h in order
to utilize the vector expand with mask instructions introduced in Power10.

Differential Revision: https://reviews.llvm.org/D82727
2020-09-06 17:13:21 -05:00
Simon Pilgrim ecac5c2808 [X86][AVX] lowerShuffleWithPERMV - adjust binary shuffle masks to account for widening on non-VLX targets
rGabd33bf5eff2 enabled us to pad 128/256-bit shuffles to 512-bit on non-VLX targets, but wasn't updating binary shuffles to account for the new vector width.
2020-09-06 14:52:25 +01:00
vnalamot aff94ec0f4 [AMDGPU] Remove the dead spill slots while spilling FP/BP to memory
During the PEI pass, the dead TargetStackID::SGPRSpill spill slots
are not being removed while spilling the FP/BP to memory.

Fixes: SWDEV-250393

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D87032
2020-09-06 07:04:25 +05:30
Krzysztof Parzyszek 62f89a89f3 [Hexagon] Add assertions about V6_pred_scalar2 2020-09-05 18:20:23 -05:00
Krzysztof Parzyszek 9518f032e4 [Hexagon] When widening truncate result, also widen operand if necessary 2020-09-05 18:19:32 -05:00
Krzysztof Parzyszek 8789f2bbde [Hexagon] Resize the mem operand when widening loads and stores 2020-09-05 18:17:48 -05:00
Krzysztof Parzyszek 1387f96ab3 [Hexagon] Handle widening of vector truncate 2020-09-05 15:07:38 -05:00
Jonas Paulsson 714ceefad9 [SelectionDAG] Always intersect SDNode flags during getNode() node memoization.
Previously SDNodeFlags::intersectWith(Flags) would do nothing if Flags was
in an undefined state, which is very bad given that this is the default when
getNode() is called without passing an explicit SDNodeFlags argument.

This meant that if an already existing and reused node had a flag which the
second caller to getNode() did not set, that flag would remain uncleared.

This was exposed by https://bugs.llvm.org/show_bug.cgi?id=47092, where an NSW
flag was incorrectly set on an add instruction (which did in fact overflow in
one of the two original contexts), so when SystemZElimCompare removed the
compare with 0 trusting that flag, wrong-code resulted.
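
Sketched, the relevant CSE path inside getNode() looks roughly like this (approximate LLVM internals):
```
FoldingSetNodeID ID;
AddNodeIDNode(ID, Opcode, VTs, Ops);
void *IP = nullptr;
if (SDNode *E = FindNodeOrInsertPos(ID, DL, IP)) {
  // Always intersect, so a flag set by an earlier caller is cleared when
  // the current caller does not guarantee it.
  E->intersectFlagsWith(Flags);
  return SDValue(E, 0);
}
```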

There is more that needs to be done in this area as discussed here:

Differential Revision: https://reviews.llvm.org/D86871

Review: Ulrich Weigand, Sanjay Patel
2020-09-05 10:30:38 +02:00
Qiu Chaofan 705271d9cd [PowerPC] Expand constrained ppc_fp128 to i32 conversion
Libcall __gcc_qtou is not available, which breaks some tests needing
it. On PowerPC, we have code to manually expand the operation; this
patch applies it to the constrained conversion. To keep it strict-safe,
it uses an algorithm similar to expandFP_TO_UINT.
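
A standalone sketch of the expandFP_TO_UINT-style idea, shown for a plain double -> u32 conversion (the patch itself handles ppc_fp128 with strict-FP semantics):
```
#include <cstdint>

uint32_t fpToUint32(double X) {
  const double Bias = 2147483648.0;  // 2^31
  if (X < Bias)                      // value fits in the signed range
    return (uint32_t)(int32_t)X;     // plain fptosi
  // Otherwise subtract the bias, convert, and add the bias back in.
  return (uint32_t)(int32_t)(X - Bias) + 0x80000000u;
}
```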

For constrained operations marking FP exception behavior as 'ignore',
we should set the NoFPExcept flag. However, in some custom lowerings
the flag is missed. This should be fixed by future patches.

Reviewed By: uweigand

Differential Revision: https://reviews.llvm.org/D86605
2020-09-05 13:16:20 +08:00
Krzysztof Parzyszek 89a4fe79d4 [Hexagon] Unindent everything in HexagonISelLowering.h, NFC
Just a shift, no other formatting changes.
2020-09-04 17:25:29 -05:00
Craig Topper 35b35a373d [X86] Prevent shuffle combining from creating an identical X86ISD::SHUF128.
This can cause an infinite loop if SimplifyDemandedElts asks
for the node to replace itself.

A similar protection exists in other places in shuffle combining.

Fixes ISPC https://github.com/ispc/ispc/issues/1864
2020-09-04 14:12:49 -07:00
Muhammad Asif Manzoor 1ffcbe35ae [AArch64][SVE] Add lowering for rounding operations
Add the functionality to lower SVE rounding operations for the passthru variant.
Created a new test case file for all rounding operations.

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D86793
2020-09-04 11:16:57 -04:00
Simon Pilgrim 7582c5c023 CallingConvLower.h - remove unnecessary MachineFunction.h include. NFC.
Reduce to forward declaration, add the Register.h include that we still needed, move CCState::ensureMaxAlignment into CallingConvLower.cpp as it was the only function that needed the full definition of MachineFunction.

Fix a few implicit dependencies further down.
2020-09-04 12:16:48 +01:00
Simon Pilgrim 740625fecd [X86] Make lowerShuffleAsLanePermuteAndPermute use sublanes on AVX2
Extends lowerShuffleAsLanePermuteAndPermute to search for opportunities to use vpermq (64-bit cross-lane shuffle) and vpermd (32-bit cross-lane shuffle) to get elements into the correct lane, in addition to the 128-bit full-lane permutes it previously searched for.

This is especially helpful in cross-lane byte shuffles, where the alternative tends to be "vpshufb both lanes separately and blend them with a vpblendvb", which is very expensive, especially on Haswell where vpblendvb uses the same execution port as all the shuffles.

Addresses PR47262

Patch By: @TellowKrinkle (TellowKrinkle)

Differential Revision: https://reviews.llvm.org/D86429
2020-09-04 11:41:26 +01:00
David Green 294c0cc3eb [ARM] Fold predicate_cast(load) into vldr p0
This adds a simple tablegen pattern for folding predicate_cast(load)
into vldr p0, provided the alignment and offset are correct.

Differential Revision: https://reviews.llvm.org/D86702
2020-09-04 11:29:59 +01:00
Matt Arsenault 3c2a7bd286 AMDGPU: Remove code to handle tied si_else operands
This has not used tied operands for a long time.
2020-09-03 19:46:05 -04:00
Craig Topper 0851350557 [X86] Update stale comment. NFC
The optimization in ExpandIntOp_UINT_TO_FP was removed in D72728
in January 2020.
2020-09-03 16:19:10 -07:00