Commit Graph

15464 Commits

Author SHA1 Message Date
Sidharth Baveja 38a8217931 [Loop Fusion] Integrate Loop Peeling into Loop Fusion (re-land after fixing ASAN build failures)
This patch adds the ability to peel off iterations of the first loop in loop
fusion. This can allow for both loops to have the same trip count, making it
legal for them to be fused together.

Here is a simple scenario peeling can be used in loop fusion:

for (i = 0; i < 10; ++i)
  a[i] = a[i] + 3;
for (j = 1; j < 10; ++j)
  b[j] = b[j] + 5;

Here we can make use of peeling, and then fuse the two loops together. We
can peel off the 0th iteration of loop i, and then combine loops i and j for
i = 1 to 10.

a[0] = a[0] + 3;
for (i = 1; i < 10; ++i) {
  a[i] = a[i] + 3;
  b[i] = b[i] + 5;
}

Currently peeling with loop fusion is only supported for loops with constant
trip counts and a single exit point. Both unguarded and guarded loops are
supported.

Reviewed By: bmahjour (Bardia Mahjour), MaskRay (Fangrui Song)

Differential Revision: https://reviews.llvm.org/D82927
2020-07-23 21:02:04 +00:00
Nikita Popov 183342c0a9 [SCCP] Add another switch+phi test (NFC) 2020-07-23 21:51:09 +02:00
Nikita Popov 9394c3ec88 [SCCP] Directly remove non-feasible edges
Non-feasible control-flow edges are currently removed by replacing
the branch condition with a constant and then calling
ConstantFoldTerminator. This happens in a rather roundabout manner,
by inspecting the users (effectively: predecessors) of unreachable
blocks, and is further complicated by the need to explicitly materialize
the condition for "forced" edges. I would like to extend SCCP to
discard switch conditions that are non-feasible based on range
information, but this is incompatible with the current approach
(as there is no single constant we could use).

Instead, this patch explicitly removes non-feasible edges. It
currently only needs to handle the case where there is a single
feasible edge. The llvm_unreachable() branch will need to be
implemented for the aforementioned switch improvement.
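
As a rough illustration (a hand-written sketch, not taken from the patch's
tests), consider a switch whose condition SCCP proves to be the constant 1:

  define i32 @example() {
  entry:
    %x = add i32 0, 1
    switch i32 %x, label %default [
      i32 0, label %bb0
      i32 1, label %bb1
    ]

  bb0:
    ret i32 10

  bb1:
    ret i32 20

  default:
    ret i32 30
  }

Only the edge from %entry to %bb1 is feasible, so the terminator can be
replaced directly with "br label %bb1" and the edges to %bb0 and %default
removed, without first materializing a constant condition for
ConstantFoldTerminator.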

Differential Revision: https://reviews.llvm.org/D84264
2020-07-23 20:32:57 +02:00
Nikita Popov def48b0e88 [PredicateInfo][SCCP] Remove assertion (PR46814)
As long as RenamedOp is not guaranteed to be accurate, we cannot
assert here and should just return false. This was already done
for the other conditions in this function.

Fixes https://bugs.llvm.org/show_bug.cgi?id=46814.
2020-07-23 19:36:51 +02:00
Florian Hahn 0f80d598b0 [IPSCCP] Add test case for PR46717 for argmemonly handling. 2020-07-23 17:25:26 +01:00
Sanjay Patel cfe40acd16 [VectorCombine] add tests for load vectorization; NFC 2020-07-23 11:24:04 -04:00
Florian Hahn 82e35197e6 [LSR] Re-generate check lines for test.
The test is quite fragile, as the check lines match IR numbers and it is
not obvious why only a very small subset is checked.

Re-generate check lines, so further changes are more obvious.
2020-07-23 13:53:53 +01:00
Florian Hahn 09c96a31ef [LoopIdiom] Add additional test cases. 2020-07-23 13:53:26 +01:00
Hamilton Tobon Mosquera 6f0d99d2b9 [OpenMPOpt] Regression test for hiding latency of H2D mem transfers 2020-07-22 20:02:54 -05:00
Fangrui Song 27650ec554 Revert D81682 "[PGO] Extend the value profile buckets for mem op sizes."
This reverts commit 4a539faf74.

There is a __llvm_profile_instrument_range related crash in PGO-instrumented clang:

```
(gdb) bt
llvm::ConstantRange const&, llvm::APInt const&, unsigned int, bool) ()
llvm::ScalarEvolution::getRangeForAffineAR(llvm::SCEV const*, llvm::SCEV
const*, llvm::SCEV const*, unsigned int) ()
```

(The body of __llvm_profile_instrument_range is inlined, so we can only find __llvm_profile_instrument_target in the trace)

```
 23│    0x000055555dba0961 <+65>:    nopw   %cs:0x0(%rax,%rax,1)
 24│    0x000055555dba096b <+75>:    nopl   0x0(%rax,%rax,1)
 25│    0x000055555dba0970 <+80>:    mov    %rsi,%rbx
 26│    0x000055555dba0973 <+83>:    mov    0x8(%rsi),%rsi  # %rsi=-1 -> SIGSEGV
 27│    0x000055555dba0977 <+87>:    cmp    %r15,(%rbx)
 28│    0x000055555dba097a <+90>:    je     0x55555dba0a76 <__llvm_profile_instrument_target+342>
```
2020-07-22 16:08:25 -07:00
Rong Xu 50da55a585 [PGO] Supporting code for always instrumenting entry block
This patch includes the supporting code that enables always
instrumenting the function entry block by default.

This patch will NOT change the default behavior.

It adds a variant bit in the profile version, adds new directives in
text profile format, and changes llvm-profdata tool accordingly.

This patch is a split of D83024 (https://reviews.llvm.org/D83024)
Many test changes from D83024 are also included.

Differential Revision: https://reviews.llvm.org/D84261
2020-07-22 15:01:53 -07:00
Nikita Popov e20b3079c1 [SCCP] Add additional multi-edge + phi tests (NFC) 2020-07-22 22:10:23 +02:00
Nikita Popov 33f6542014 [SCCP] Regenerate test checks (NFC)
And adjust the indbrtest4 test to actually test what it's supposed
to. BB1 is supposed to be eliminated here, but isn't, because
BB0 still branches to it. This was lost due to the incomplete CHECK
lines.
2020-07-22 22:10:23 +02:00
Nikita Popov eae6bb3807 [SCCP] Add multi-edge switch + phi test case (NFC) 2020-07-22 20:28:22 +02:00
Anton Afanasyev 56c92bf4b7 [SLP][Test] Precommit tests for D83779. NFC. 2020-07-22 18:25:45 +03:00
Sebastian Neubauer 2a6c871596 [InstCombine] Move target-specific inst combining
For a long time, the InstCombine pass handled target specific
intrinsics. Having target specific code in general passes was noted as
an area for improvement for a long time.

D81728 moves most target specific code out of the InstCombine pass.
Applying the target specific combinations in an extra pass would
probably result in inferior optimizations compared to the current
fixed-point iteration, therefore the InstCombine pass resorts to newly
introduced functions in the TargetTransformInfo when it encounters
unknown intrinsics.
The patch should not have any effect on generated code (under the
assumption that code never uses intrinsics from a foreign target).

This introduces three new functions:
TargetTransformInfo::instCombineIntrinsic
TargetTransformInfo::simplifyDemandedUseBitsIntrinsic
TargetTransformInfo::simplifyDemandedVectorEltsIntrinsic

A few target specific parts are left in the InstCombine folder, where
it makes sense to share code. The largest left-over part in
InstCombineCalls.cpp is the code shared between arm and aarch64.

This allows moving about 3000 lines out of InstCombine into the targets.

Differential Revision: https://reviews.llvm.org/D81728
2020-07-22 15:59:49 +02:00
Alexey Bataev be37f13e2d [SLP]Add an extra test for vectorization of non-pow-2 trees, NFC. 2020-07-22 09:13:30 -04:00
Sebastian Neubauer 2c659082bd [AMDGPU] Don't combine memory intrs to v3i16
v3i16 and v3f16 currently cannot be legalized and lowered, so they should
not be emitted by inst combining.

Moved the check down to still allow extracting 1 or 2 elements via the dmask.

Fixes image intrinsics being combined to return v3x16.

Differential Revision: https://reviews.llvm.org/D84223
2020-07-22 12:44:01 +02:00
Max Kazantsev 360ab70712 [SimplifyCFG] Do not create unneeded PR Phi in block with convergent calls
We do not thread blocks with convergent calls, but this check was missing
when we decide to insert PR Phis into them (which we only do for threading).

Differential Revision: https://reviews.llvm.org/D83936
Reviewed By: nikic
2020-07-22 13:53:50 +07:00
Chen Zheng e8425b27fe [PowerPC] add store (load float*) pattern to isProfitableToHoist
store (load float*) can be optimized to store (load i32*) in the InstCombine pass.

Add store (load float*) to isProfitableToHoist to make sure we don't break
the optimization in the InstCombine pass.
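
Roughly, the InstCombine transform being protected looks like this
(illustrative IR sketch, not taken from the patch's tests):

  ; before: a float load whose only use is the store
  define void @copy_before(float* %src, float* %dst) {
    %v = load float, float* %src, align 4
    store float %v, float* %dst, align 4
    ret void
  }

  ; after InstCombine: the copy is rewritten to go through i32
  define void @copy_after(float* %src, float* %dst) {
    %src.i = bitcast float* %src to i32*
    %dst.i = bitcast float* %dst to i32*
    %v.i = load i32, i32* %src.i, align 4
    store i32 %v.i, i32* %dst.i, align 4
    ret void
  }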

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D82341
2020-07-21 20:55:13 -04:00
Juneyoung Lee ace0bf7490 [ValueTracking] Fix incorrect handling of canCreateUndefOrPoison
.. in isGuaranteedNotToBeUndefOrPoison.

This caused an early exit from isGuaranteedNotToBeUndefOrPoison, making it
return an imprecise result.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D84251
2020-07-22 09:31:16 +09:00
Nikita Popov ef868a848e [SCCP] Add switch+range tests (NFC) 2020-07-21 23:07:50 +02:00
Fangrui Song 8a268bec1b Revert D82927 "[Loop Fusion] Integrate Loop Peeling into Loop Fusion"
This reverts commit bb8850d34d.

It broke 3 check-llvm-transforms-loopfusion tests in an ASAN build.

LoopFuse.cpp `for (BasicBlock *Pred : predecessors(BB)) {` may operate on a deleted BB.
2020-07-21 12:24:50 -07:00
Hiroshi Yamauchi 7bedae7dee [PGO][PGSO] Add profile guided size optimization to loop vectorization legality. 2020-07-21 11:16:36 -07:00
Sidharth Baveja bb8850d34d [Loop Fusion] Integrate Loop Peeling into Loop Fusion
Summary:
This patch adds the ability to peel off iterations of the first loop in loop
fusion. This can allow for both loops to have the same trip count, making it
legal for them to be fused together.

Here is a simple scenario peeling can be used in loop fusion:

for (i = 0; i < 10; ++i)
  a[i] = a[i] + 3;
for (j = 1; j < 10; ++j)
  b[j] = b[j] + 5;

Here we can make use of peeling, and then fuse the two loops together. We can
peel off the 0th iteration of loop i, and then combine loops i and j for
i = 1 to 10.

a[0] = a[0] + 3;
for (i = 1; i < 10; ++i) {
  a[i] = a[i] + 3;
  b[i] = b[i] + 5;
}

Currently peeling with loop fusion is only supported for loops with constant
trip counts and a single exit point. Both unguarded and guarded loops are
supported.

Author: sidbav (Sidharth Baveja)

Reviewers: kbarton, Meinersbur, bkramer, Whitney, skatkov, ashlykov, fhahn, bmahjour

Reviewed By: bmahjour

Subscribers: bmahjour, mgorny, hiraditya, zzheng

Tags: LLVM

Differential Revision: https://reviews.llvm.org/D82927
2020-07-21 15:59:14 +00:00
Jay Foad 5e5bda74b6 [IR] Simplify Use::swap. NFCI.
The new implementation makes it clear that there are exactly two
conditional stores (after the initial no-op optimization). By contrast
the old implementation had seven conditionals, some hidden inside other
functions.

This commit can change the order of operands in operand lists, hence the
tweak to one test case.

Differential Revision: https://reviews.llvm.org/D80116
2020-07-21 12:15:12 +01:00
Florian Hahn 752fea7c27 [SCCP] Add range metadata to call sites with known return ranges.
If we inferred a range for the function return value, we can add !range
at all call-sites of the function, if the range does not include undef.
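
For illustration (a hand-written sketch, not one of the patch's tests), an
annotated call site could look like:

  declare i32 @callee()

  define i32 @caller() {
    ; @callee's return value was inferred to be in the range [0, 2), so the
    ; call site carries !range metadata that later passes can use
    %r = call i32 @callee(), !range !0
    ret i32 %r
  }

  !0 = !{i32 0, i32 2}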

Reviewers: efriedma, davide, nikic

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D83952
2020-07-21 10:06:54 +01:00
Sanjay Patel 750f4c591d [InstCombine] allow peeking through zext of shift amount to match rotate idioms (PR45701)
We might want to also allow trunc of the shift amount, but that seems less likely?

  define i32 @src(i32 %x, i1 %y) {
  %0:
    %rem = and i1 %y, 1
    %cmp = icmp eq i1 %rem, 0
    %sh_prom = zext i1 %rem to i32
    %sub = sub nsw nuw i1 0, %rem
    %sh_prom1 = zext i1 %sub to i32
    %shr = lshr i32 %x, %sh_prom1
    %shl = shl i32 %x, %sh_prom
    %or = or i32 %shl, %shr
    %r = select i1 %cmp, i32 %x, i32 %or
    ret i32 %r
  }
  =>
  define i32 @tgt(i32 %x, i1 %y) {
  %0:
    %t = zext i1 %y to i32
    %r = fshl i32 %x, i32 %x, i32 %t
    ret i32 %r
  }

  Transformation seems to be correct!

https://alive2.llvm.org/ce/z/xgMvE3

http://bugs.llvm.org/PR45701
2020-07-20 16:18:11 -04:00
Sanjay Patel 92ec0c5da6 [InstCombine] add tests for funnel shift/rotate with narrow shift amount; NFC 2020-07-20 16:18:11 -04:00
Florian Hahn f13a59bcff [Matrix] Use TileInfo to create tiled loop nest for matrix multiply.
This patch uses the TileInfo introduced in D77550 to generate a loop
nest for tiled matrix multiplication, instead of generating the
unrolled code for the whole multiplication. This makes code-generation
more scalable for larger matrices.

Initially loops are only used if both the number of rows and columns are
divisible by the tile size. Other cases will be added as follow-up.

Reviewers: anemet, Gerolf, hfinkel, andrew.w.kaylor, LuoYuanke, nicolasvasilache

Reviewed By: anemet

Differential Revision: https://reviews.llvm.org/D81308
2020-07-20 21:11:53 +01:00
Mircea Trofin 70f8d0ac8a [llvm] Development-mode InlineAdvisor
Summary:
This is the InlineAdvisor used in 'development' mode. It enables two
scenarios:

 - loading models via a command-line parameter, thus allowing for rapid
 training iteration, where models can be used for the next exploration
 phase without requiring recompiling the compiler. This trades off some
 compilation speed for the added flexibility.

 - collecting training logs, in the form of tensorflow.SequenceExample
 protobufs. We generate these as textual protobufs, which simplifies
 generation and testing. The protobufs may then be readily consumed by a
 tensorflow-based training algorithm.

To speed up training, training logs may also be collected from the
'default' training policy. In that case, this InlineAdvisor does not
use a model.

RFC: http://lists.llvm.org/pipermail/llvm-dev/2020-April/140763.html

Reviewers: jdoerfert, davidxl

Subscribers: mgorny, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83733
2020-07-20 11:01:56 -07:00
Matt Arsenault 5e999cbe8d IR: Define byref parameter attribute
This allows tracking the in-memory type of a pointer argument to a
function for ABI purposes. This is essentially a stripped down version
of byval to remove some of the stack-copy implications in its
definition.

This includes the base IR changes, and some tests for places where it
should be treated similarly to byval. Codegen support will be in a
future patch.
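
For a rough idea of the syntax (an illustrative sketch; the struct type,
address space and alignment below are made up for the example):

  %struct.Args = type { i32, float }

  define amdgpu_kernel void @kern(%struct.Args addrspace(4)* byref(%struct.Args) align 4 %args) {
  entry:
    ; the argument is read directly from its in-memory location; no stack
    ; copy is implied, unlike byval
    %p = getelementptr inbounds %struct.Args, %struct.Args addrspace(4)* %args, i32 0, i32 0
    %v = load i32, i32 addrspace(4)* %p, align 4
    ret void
  }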

My original attempt at solving some of these problems was to repurpose
byval with a different address space from the stack. However, it is
technically permitted for the callee to introduce a write to the
argument, although nothing does this in reality. There is also talk of
removing and replacing the byval attribute, so a new attribute would
need to take its place anyway.

This is intended to avoid some optimization issues with the current
handling of aggregate arguments, as well as to fix inflexibility in how
frontends can specify the kernel ABI. The most honest representation
of the amdgpu_kernel convention is to expose all kernel arguments as
loads from constant memory. Today, these are raw, SSA Argument values
and codegen is responsible for turning these into loads.

Background:

There currently isn't a satisfactory way to represent how arguments
for the amdgpu_kernel calling convention are passed. In reality,
arguments are passed in a single, flat, constant memory buffer
implicitly passed to the function. It is also illegal to call this
function in the IR, and this is only ever invoked by a driver of some
kind.

It does not make sense to have a stack passed parameter in this
context as is implied by byval. It is never valid to write to the
kernel arguments, as this would corrupt the inputs seen by other
dispatches of the kernel. These arguments are also not in the same
address space as the stack, so a copy is needed to an alloca. From a
C-like source language, the kernel parameters are invisible.
Semantically, a copy is always required from the constant argument
memory to a mutable variable.

The current clang calling convention lowering emits raw values,
including aggregates into the function argument list, since using
byval would not make sense. This has some unfortunate consequences for
the optimizer. In the aggregate case, we end up with an aggregate
store to alloca, which both SROA and instcombine turn into a store of
each aggregate field. The optimizer never pieces this back together to
see that this is really just a copy from constant memory, so we end up
stuck with expensive stack usage.

This also means the backend dictates the alignment of arguments, and
arbitrarily picks the LLVM IR ABI type alignment. By allowing an
explicit alignment, frontends can make better decisions. For example,
there's really no advantage to an alignment higher than 4, so a frontend
could choose to compact the argument layout. Similarly, there is a
high penalty to using an alignment lower than 4, so a frontend could
opt into more padding for small arguments.

Another design consideration is when it is appropriate to expose the
fact that these arguments are all really passed in adjacent
memory. Currently we have a late IR optimization pass in codegen to
rewrite the kernel argument values into explicit loads to enable
vectorization. In most programs, unrelated argument loads can be
merged together. However, exposing this property directly from the
frontend has some disadvantages. We still need a way to track the
original argument sizes and alignments to report to the driver. I find
using some side-channel, metadata mechanism to track this
unappealing. If the kernel arguments were exposed as a single buffer
to begin with, alias analysis would be unaware that the padding bits
between arguments are meaningless. Another family of problems is that there
are still some gaps in replacing all of the available parameter
attributes with metadata equivalents once lowered to loads.

The immediate plan is to start using this new attribute to handle all
aggregate arguments for kernels. Long term, it makes sense to migrate
all kernel arguments, including scalars, to be passed indirectly in
the same manner.

Additional context is in D79744.
2020-07-20 10:23:09 -04:00
Florian Hahn dc1087d408 [Matrix] Add minimal lowering pass that only requires TTI.
This patch adds a new variant of the matrix lowering pass that only does
a minimal lowering and only depends on TTI. The main purpose of this pass
is to have a pass with minimal dependencies to run as part of the backend
pipeline.

At the moment, the only difference from the regular lowering pass is that it
does not support remarks. Subsequent patches will add support for tiling to
the lowering pass, which will require more analysis that we do not want to
run in the backend; in practice the lowering should happen in the middle-end,
and running it in the backend is mostly for convenience when running llc.

Reviewers: anemet, Gerolf, efriedma, hfinkel

Reviewed By: anemet

Differential Revision: https://reviews.llvm.org/D76867
2020-07-20 11:16:11 +01:00
Roman Lebedev 04b729d076
[NFCI][SimplifyCFG] Guard common code hoisting with a (default-on) flag
Common code sinking is already guarded with a (default-off!) flag,
so add a flag for hoisting, too.

D84108 will hopefully make hoisting off-by-default too.
2020-07-20 10:29:57 +03:00
Roman Lebedev 3de4166325
[NFC][SimplifyCFG] Add standalone test for common code hoisting xform option
Also, move one test into its correct place
2020-07-20 10:29:29 +03:00
sstefan1 e3d646c699 [Attributor][NFC] applying update_test_checks with --check-attributes
Summary:
All tests are updated, except wrapper.ll, since it does not work nicely
with newly created functions.

Reviewers: jdoerfert, uenoku, baziotis, homerdin

Subscribers: arphaman, jfb, kuter, bbn, okura, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D84130
2020-07-20 08:17:34 +02:00
Juneyoung Lee 30201d3b61 [ValueTracking] Let isGuaranteedNotToBeUndefOrPoison use canCreateUndefOrPoison
This patch adds support for more operations.
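
A small hand-written example of the kind of reasoning involved (not from the
patch's tests):

  define i1 @known_not_poison(i32 %x, i32 %y) {
    %fx = freeze i32 %x
    %fy = freeze i32 %y
    ; an 'add' without nsw/nuw flags cannot introduce undef or poison, so
    ; with frozen operands the result is guaranteed not to be undef/poison
    %s = add i32 %fx, %fy
    %c = icmp eq i32 %s, 0
    ret i1 %c
  }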

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D83926
2020-07-20 09:21:39 +09:00
Wenlei He d41d952be9 Revert "[InlineAdvisor] New inliner advisor to replay inlining from optimization remarks"
This reverts commit 2d6ecfa168.
2020-07-19 08:49:04 -07:00
Wenlei He 2d6ecfa168 [InlineAdvisor] New inliner advisor to replay inlining from optimization remarks
Summary:
This change adds a new inline advisor that takes optimization remarks from previous inlining as input and provides the decisions as advice, so the current inlining can replay the inline decisions of a different compilation. The DWARF inline stack with line and discriminator is used as the anchor for call sites, including call context. The change can be useful for inliner tuning as it provides a channel to allow external input for tweaking inline decisions. Existing alternatives like the alwaysinline attribute are per-function, not per-callsite. A per-callsite inline intrinsic could be another solution (it does not yet exist), but it is intrusive to implement and also does not differentiate call context.

A switch -sample-profile-inline-replay=<inline_remarks_file> is added to hook up the new inline advisor with SampleProfileLoader's inline decisions for replay. Since SampleProfileLoader does top-down inlining, inline decisions can be specialized for each call context, hence we should be able to replay inlining accurately. However, with a bottom-up inliner like the CGSCC inliner, the replay can be limited due to the lack of specialization for different call contexts. Apart from that limitation, the new inline advisor can still be used by the regular CGSCC inliner later if needed for tuning purposes.

Subscribers: mgorny, aprantl, hiraditya, llvm-commits

Tags: #llvm

Resubmit for https://reviews.llvm.org/D84086
2020-07-19 08:21:05 -07:00
Nikita Popov c6e13667e7 [PredicateInfo] Add a method to interpret predicate as cmp constraint
Both users of PredicateInfo (NewGVN and SCCP) are interested in
getting a cmp constraint on the predicated value. They currently
implement separate logic for this. This patch adds a common method
for this in PredicateBase.

This enables a missing bit of PredicateInfo handling in SCCP: Now
the predicate on the condition itself is also used. For switches
it means we know that the switched-on value is the same as the case
value. For assumes/branches we know that the condition is true or
false.
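
A minimal hand-written example of the branch case (not taken from the patch):

  define i32 @f(i32 %x) {
  entry:
    %c = icmp eq i32 %x, 7
    br i1 %c, label %then, label %else

  then:
    ; on this edge the cmp constraint is "%x == 7" (and the condition %c
    ; itself is known to be true), so SCCP can fold this use of %x to 7
    ret i32 %x

  else:
    ret i32 0
  }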

Differential Revision: https://reviews.llvm.org/D83640
2020-07-19 15:34:32 +02:00
Sanjay Patel 7393d7574c [InstSimplify] fold fcmp with infinity constant using isKnownNeverInfinity
This is a step towards trying to remove unnecessary FP compares
with infinity when compiling with -ffinite-math-only or similar.
I'm intentionally not checking FMF on the fcmp itself because
I'm assuming that will go away eventually.
The analysis part of this was added with rGcd481136 for use with
isKnownNeverNaN. Similarly, that could be an enhancement here to
get predicates like 'one' and 'ueq'.
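
A hand-written sketch of the kind of fold this enables (not one of the
patch's tests):

  define i1 @cmp_never_inf(float %x) {
    ; the 'ninf' flag guarantees %y is not +/-infinity, so an ordered
    ; equality compare against +infinity can be folded to false
    %y = fadd ninf float %x, 1.0
    %c = fcmp oeq float %y, 0x7FF0000000000000
    ret i1 %c
  }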

Differential Revision: https://reviews.llvm.org/D84035
2020-07-19 09:24:52 -04:00
Nikita Popov d12ec0f752 [InstCombine] Fix store merge worklist management (PR46680)
Fixes https://bugs.llvm.org/show_bug.cgi?id=46680.

Just like insertions through IRBuilder, InsertNewInstBefore()
should be using the deferred worklist mechanism, so that processing
of newly added instructions is prioritized.

There's one side-effect of the worklist order change which could be
classified as a regression. An add op gets pushed through a select
that at the time is not a umax. We could add a reverse transform
that tries to push adds in the reverse direction to restore a min/max,
but that seems like a sure way of getting infinite loops... Seems
like something that should best wait on min/max intrinsics.

Differential Revision: https://reviews.llvm.org/D84109
2020-07-19 15:05:45 +02:00
Nikita Popov 13ae440de4 [InstCombine] Add test for PR46680 (NFC) 2020-07-18 23:37:16 +02:00
Joseph Huber 3bbbe4c4b6 [OpenMP] Add Additional Function Attribute Information to OMPKinds.def
Summary:
This patch adds more function attribute information to the runtime function definitions in OMPKinds.def. The goal is to provide sufficient information about OpenMP runtime functions to perform more optimizations on OpenMP code.

Reviewers: jdoerfert

Subscribers: aaron.ballman cfe-commits yaxunl guansong sstefan1 llvm-commits

Tags: #OpenMP #clang #LLVM

Differential Revision: https://reviews.llvm.org/D81031
2020-07-18 12:55:50 -04:00
Roman Lebedev 8d487668d0
[CVP] Soften SDiv into a UDiv as long as we know domains of both of the operands.
Yes, if operands are non-positive this comes at the extra cost
of two extra negations. But  a. division is already just
ridiculously costly, two more subtractions can't hurt much :)
and  b. we have better/more analyses/folds for an unsigned division,
we could end up narrowing its bitwidth, converting it to lshr, etc.

This is essentially a take two on 0fdcca07ad,
which didn't fix the potential regression i was seeing,
because ValueTracking's computeKnownBits() doesn't make use
of dominating conditions in its analysis.
While i could teach it that, this seems like the more general fix.

This big hammer actually does catch said potential regression.

Over vanilla test-suite + RawSpeed + darktable
(10M IR instrs, 1M IR BB, 1M X86 ASM instrs), this fires/converts 5 more
(+2%) SDiv's, the total instruction count at the end of middle-end pipeline
is only +6, so out of +10 extra negations, ~half are folded away,
and asm instr count is only +1, so practically speaking all extra
negations are folded away and are therefore free.
Sadly, all these new UDiv's remained, none folded away.
But there are two less basic blocks.

https://rise4fun.com/Alive/VS6

Name: v0
Pre: C0 >= 0 && C1 >= 0
%r = sdiv i8 C0, C1
  =>
%r = udiv i8 C0, C1

Name: v1
Pre: C0 <= 0 && C1 >= 0
%r = sdiv i8 C0, C1
  =>
%t0 = udiv i8 -C0, C1
%r = sub i8 0, %t0

Name: v2
Pre: C0 >= 0 && C1 <= 0
%r = sdiv i8 C0, C1
  =>
%t0 = udiv i8 C0, -C1
%r = sub i8 0, %t0

Name: v3
Pre: C0 <= 0 && C1 <= 0
%r = sdiv i8 C0, C1
  =>
%r = udiv i8 -C0, -C1
2020-07-18 17:59:56 +03:00
Roman Lebedev 7b16fd8a25
[NFC][CVP] Add tests for possible sdiv->udiv where operands are not non-negative
Currently that fold requires both operands to be non-negative,
but the only real requirement for the fold is that we must know
the domains of the operands.
2020-07-18 17:59:31 +03:00
David Green 2f4c3e8097 [LV] Add additional InLoop redution tests. NFC 2020-07-18 12:14:23 +01:00
Chen Zheng bb07eb944f [PowerPC]add testcase for adding store (load float*) pattern, nfc 2020-07-17 22:57:08 -04:00
Chen Zheng 6d247f980d [SCEV][IndVarSimplify] insert point should not be block front.
Recommit after removing the unused cast instructions.

Differential Revision:  https://reviews.llvm.org/D80975
2020-07-17 22:25:10 -04:00
Arthur Eubanks 0dfa4a83fa Revert "[PGO][PGSO] Add profile guided size optimization to loop vectorization legality."
This reverts commit 30c382a7c6.

See https://crbug.com/1106813.
2020-07-17 16:47:41 -07:00