Commit Graph

28315 Commits

Nikita Popov 570c9beb8e [MemorySSA] Remove unnecessary MSSA dependencies
LoopLoadElimination, LoopVersioning and LoopVectorize currently
fetch MemorySSA when constructing LoopAccessAnalysis. However,
LoopAccessAnalysis does not actually use MemorySSA and we can pass
nullptr instead.

This saves one MemorySSA calculation in the default pipeline, and
thus improves compile-time.

Differential Revision: https://reviews.llvm.org/D108074
2021-08-16 20:40:55 +02:00
Sanjay Patel de285eacb0 [InstCombine] allow for constant-folding in GEP transform
This would crash on the reduced test, or on the case described in
https://llvm.org/PR51485,
because we can't mark a constant (-expression) with 'inbounds'.
2021-08-16 10:36:56 -04:00
Roman Lebedev febcedf18c
Revert "[NFCI][IndVars] rewriteLoopExitValues(): nowadays SCEV should not change `GEP` base pointer"
https://bugs.llvm.org/show_bug.cgi?id=51490 was filed.

This reverts commit 35a8bdc775.
2021-08-16 14:30:29 +03:00
David Sherwood 9b19b77883 [NFC] Remove unused code in llvm::createSimpleTargetReduction 2021-08-16 09:50:45 +01:00
Roman Lebedev 2eb554a9fe
Revert "Reland [SimplifyCFG] performBranchToCommonDestFolding(): form block-closed SSA form before cloning instructions (PR51125)"
This is still wrong, as failing bots suggest.

This reverts commit 3d9beefc7d.
2021-08-16 11:07:42 +03:00
David Green c6b7db015f [InstCombine] Add call to matchSAddSubSat from min/max
This adds a call to matchSAddSubSat from the smin/smax intrinsics, allowing
the same patterns to match if the canonical form of a min/max is an
intrinsic, not an icmp/select.
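
For illustration, a minimal sketch (function name and bit widths are made up) of a
clamp-of-widened-add written with the min/max intrinsics, the kind of pattern that
can now fold to `llvm.sadd.sat`:

```
define i16 @sadd_sat_via_minmax(i16 %a, i16 %b) {
  %sa = sext i16 %a to i32
  %sb = sext i16 %b to i32
  %add = add i32 %sa, %sb
  ; clamp to [INT16_MIN, INT16_MAX] using the min/max intrinsics
  %lo = call i32 @llvm.smax.i32(i32 %add, i32 -32768)
  %clamp = call i32 @llvm.smin.i32(i32 %lo, i32 32767)
  %r = trunc i32 %clamp to i16
  ret i16 %r
}
declare i32 @llvm.smax.i32(i32, i32)
declare i32 @llvm.smin.i32(i32, i32)
```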

Differential Revision: https://reviews.llvm.org/D108077
2021-08-15 17:25:16 +01:00
Roman Lebedev 3d9beefc7d
Reland [SimplifyCFG] performBranchToCommonDestFolding(): form block-closed SSA form before cloning instructions (PR51125)
... with test change this time.

LLVM IR SSA form is "implicit" in `@pr51125`. While it is valid LLVM IR
and does not require any PHI nodes, it completely breaks the further logic
in `CloneInstructionsIntoPredecessorBlockAndUpdateSSAUses()`
that updates the live-out uses of the bonus instructions.

What I believe we need to do is to first make the SSA form explicit
by inserting tautological PHI nodes and rewriting the offending uses.

```
$ /builddirs/llvm-project/build-Clang12/bin/opt -load /repositories/alive2/build-Clang-release/tv/tv.so -load-pass-plugin /repositories/alive2/build-Clang-release/tv/tv.so -tv -simplifycfg -simplifycfg-require-and-preserve-domtree=1 -bonus-inst-threshold=10 -tv -o /dev/null /tmp/test.ll

----------------------------------------
@global_pr51125 = global 4 bytes, align 4

define i32 @pr51125() {
%entry:
  br label %L

%L:
  %ld = load i32, * @global_pr51125, align 4
  %iszero = icmp eq i32 %ld, 0
  br i1 %iszero, label %exit, label %L2

%L2:
  store i32 4294967295, * @global_pr51125, align 4
  %cmp = icmp eq i32 %ld, 4294967295
  br i1 %cmp, label %L, label %exit

%exit:
  %r = phi i32 [ %ld, %L2 ], [ %ld, %L ]
  ret i32 %r
}
=>
@global_pr51125 = global 4 bytes, align 4

define i32 @pr51125() {
%entry:
  %ld.old = load i32, * @global_pr51125, align 4
  %iszero.old = icmp eq i32 %ld.old, 0
  br i1 %iszero.old, label %exit, label %L2

%L2:
  %ld2 = phi i32 [ %ld.old, %entry ], [ %ld, %L2 ]
  store i32 4294967295, * @global_pr51125, align 4
  %cmp = icmp ne i32 %ld2, 4294967295
  %ld = load i32, * @global_pr51125, align 4
  %iszero = icmp eq i32 %ld, 0
  %or.cond = select i1 %cmp, i1 1, i1 %iszero
  br i1 %or.cond, label %exit, label %L2

%exit:
  %ld1 = phi i32 [ poison, %L2 ], [ %ld.old, %entry ]
  %r = phi i32 [ %ld2, %L2 ], [ %ld.old, %entry ]
  ret i32 %r
}
Transformation seems to be correct!

```

Fixes https://bugs.llvm.org/show_bug.cgi?id=51125

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D106317
2021-08-15 19:16:04 +03:00
Roman Lebedev 60dd0121c9
Revert "[SimplifyCFG] performBranchToCommonDestFolding(): form block-closed SSA form before cloning instructions (PR51125)"
Forgot to stage the test change.

This reverts commit 78af5cb213.
2021-08-15 19:15:09 +03:00
Roman Lebedev 78af5cb213
[SimplifyCFG] performBranchToCommonDestFolding(): form block-closed SSA form before cloning instructions (PR51125)
LLVM IR SSA form is "implicit" in `@pr51125`. While it is valid LLVM IR
and does not require any PHI nodes, it completely breaks the further logic
in `CloneInstructionsIntoPredecessorBlockAndUpdateSSAUses()`
that updates the live-out uses of the bonus instructions.

What I believe we need to do is to first make the SSA form explicit
by inserting tautological PHI nodes and rewriting the offending uses.

```
$ /builddirs/llvm-project/build-Clang12/bin/opt -load /repositories/alive2/build-Clang-release/tv/tv.so -load-pass-plugin /repositories/alive2/build-Clang-release/tv/tv.so -tv -simplifycfg -simplifycfg-require-and-preserve-domtree=1 -bonus-inst-threshold=10 -tv -o /dev/null /tmp/test.ll

----------------------------------------
@global_pr51125 = global 4 bytes, align 4

define i32 @pr51125() {
%entry:
  br label %L

%L:
  %ld = load i32, * @global_pr51125, align 4
  %iszero = icmp eq i32 %ld, 0
  br i1 %iszero, label %exit, label %L2

%L2:
  store i32 4294967295, * @global_pr51125, align 4
  %cmp = icmp eq i32 %ld, 4294967295
  br i1 %cmp, label %L, label %exit

%exit:
  %r = phi i32 [ %ld, %L2 ], [ %ld, %L ]
  ret i32 %r
}
=>
@global_pr51125 = global 4 bytes, align 4

define i32 @pr51125() {
%entry:
  %ld.old = load i32, * @global_pr51125, align 4
  %iszero.old = icmp eq i32 %ld.old, 0
  br i1 %iszero.old, label %exit, label %L2

%L2:
  %ld2 = phi i32 [ %ld.old, %entry ], [ %ld, %L2 ]
  store i32 4294967295, * @global_pr51125, align 4
  %cmp = icmp ne i32 %ld2, 4294967295
  %ld = load i32, * @global_pr51125, align 4
  %iszero = icmp eq i32 %ld, 0
  %or.cond = select i1 %cmp, i1 1, i1 %iszero
  br i1 %or.cond, label %exit, label %L2

%exit:
  %ld1 = phi i32 [ poison, %L2 ], [ %ld.old, %entry ]
  %r = phi i32 [ %ld2, %L2 ], [ %ld.old, %entry ]
  ret i32 %r
}
Transformation seems to be correct!

```

Fixes https://bugs.llvm.org/show_bug.cgi?id=51125

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D106317
2021-08-15 19:02:34 +03:00
Roman Lebedev 35a8bdc775
[NFCI][IndVars] rewriteLoopExitValues(): nowadays SCEV should not change `GEP` base pointer
Currently/previously, while SCEV guaranteed that it produces the same value,
the way it was produced may be illegal IR, so we have an ugly check that
the replacement is valid.

But now that the SCEV strictness wrt the pointer/integer types has been improved,
I believe this invariant is already upheld by the SCEV itself, natively.

I think we should add an assertion, wait for a week, and then, if all is good,
rip out all this checking.
Or we could just do the latter directly, I guess.

This reverts commit rL127839.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D108043
2021-08-15 18:59:32 +03:00
Nikita Popov 944dfa4975 [IndVars] Don't check for pointer exit count (NFC)
After recent changes, exit counts and BE taken counts are always
integers, so convert these to assertions.

While here, also convert the loop invariance checks to asserts.
Exit counts are always loop invariant.
2021-08-15 16:49:30 +02:00
Nikita Popov 3c503ba06a [FunctionImport] Fix build with old mingw (NFC)
std::errc::operation_not_supported is not universally supported.
Make use of LLVM's errc interoperability header, which lists
known-good errc values.
2021-08-15 15:47:59 +02:00
Paul Walker f7a831daa6 [LoopVectorize] Don't emit remarks about lack of scalable vectors unless they're specifically requested.
Previously we emitted a "does not support scalable vectors"
remark for all targets whenever vectorisation is attempted. This
pollutes the output for architectures that don't support scalable
vectors and is likely confusing to the user.

Instead this patch introduces a debug message that reports when
scalable vectorisation is allowed by the target and only issues
the previous remark when scalable vectorisation is specifically
requested, for example:

  #pragma clang loop vectorize_width(2, scalable)

Differential Revision: https://reviews.llvm.org/D108028
2021-08-15 12:15:52 +01:00
eopXD 012173680f [LoopIdiom] let the pass deal with runtime memset size
The current LIR does not deal with a runtime-determined memset size. This patch
utilizes SCEV and checks whether the PointerStrideSCEV and the MemsetSizeSCEV are equal.
Before the comparison the pass tries to fold the expression that is already
protected by the loop guard.

Testcase files `memset-runtime.ll` and `memset-runtime-debug.ll` are added.

This patch deals with the proper loop idiom. A succeeding patch will deal with SCEVs
that are unequal after folding with the loop guards.

Reviewed By: lebedev.ri, Whitney

Differential Revision: https://reviews.llvm.org/D107353
2021-08-14 19:22:06 +08:00
Dawid Jurczak 107401002e [NFC][DSE] Clean up KnownNoReads and MemorySSAScanLimit in DSE
Another set of simple cleanups in DSE. CheckCache was removed in 1f1145006b, and as a consequence KnownNoReads is useless.
Also update the description of MemorySSAScanLimit, whose default value is 150 instead of 100.

Differential Revision: https://reviews.llvm.org/D107812
2021-08-14 11:26:57 +02:00
Arthur Eubanks c19d7f8af0 [CallPromotion] Check for inalloca/byval mismatch
Previously we would allow promotion even if the byval/inalloca
attributes on the call and the callee didn't match.

It's ok if the byval/inalloca types aren't the same. For example, LTO
importing may rename types.

Fixes PR51397.

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D107998
2021-08-13 16:52:04 -07:00
Arthur Eubanks dc41c558dd [NFC] Make AttributeList::hasAttribute(AttributeList::ReturnIndex) its own method
AttributeList::hasAttribute() is confusing. In an attempt to change the
name to something that suggests using other methods, fix up some
existing uses.
2021-08-13 16:27:11 -07:00
Arthur Eubanks f80ae58068 [NFC] Cleanup calls to AttributeList::getAttribute(FunctionIndex)
getAttribute() is confusing, use a clearer method.
2021-08-13 16:27:11 -07:00
Arthur Eubanks d7593ebaee [NFC] Clean up users of AttributeList::hasAttribute()
AttributeList::hasAttribute() is confusing, use clearer methods like
hasParamAttr()/hasRetAttr().

Add hasRetAttr() since it was missing from AttributeList.
2021-08-13 11:59:18 -07:00
Arthur Eubanks a9831cce1e [NFC] Remove public uses of AttributeList::getAttributes()
Use methods that better convey the intent.
2021-08-13 11:38:12 -07:00
Arthur Eubanks 80ea2bb574 [NFC] Rename AttributeList::getParam/Ret/FnAttributes() -> get*Attributes()
This is more consistent with similar methods.
2021-08-13 11:16:52 -07:00
Arthur Eubanks a0c42ca56c [NFC] Remove AttributeList::hasParamAttribute()
It's the same as AttributeList::hasParamAttr().
2021-08-13 10:58:21 -07:00
Roman Lebedev c46546bd52
Reland "[NFCI][SimplifyCFG] simplifyCondBranch(): assert that branch is non-tautological""
The commit originally unearthed a problem, reported as
https://reviews.llvm.org/rGf30a7dff8a5b32919951dcbf92e4a9d56c4679ff#1019890
Now that the problem has been fixed, and the assertion no longer fires,
let's see if there are other cases it fires on.

This reverts commit 5c8c24d2de,
relanding commit f30a7dff8a.
2021-08-13 15:45:03 +03:00
Roman Lebedev 2702fb1148
[SimplifyCFG] Restart if `removeUndefIntroducingPredecessor()` made changes
It might have changed the condition of a branch into a constant,
so we should restart and constant-fold the terminator,
instead of continuing with the tautological "conditional" branch.
This fixes the issue reported at https://reviews.llvm.org/rGf30a7dff8a5b32919951dcbf92e4a9d56c4679ff
2021-08-13 15:45:03 +03:00
Roman Lebedev 5c8c24d2de
Revert "[NFCI][SimplifyCFG] simplifyCondBranch(): assert that branch is non-tautological"
The assertion does not hold on a provided reproducer.
Reverting until after fixing the problem.

This reverts commit f30a7dff8a.
2021-08-13 13:16:22 +03:00
Dylan Fleming 4be7fb9762 [SVE] Add folds for truncation of vscale
Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D107453
2021-08-13 10:18:00 +01:00
Rosie Sumpter 46abd1fbe8 [LoopFlatten] Fix assertion failure in checkOverflow
There is an assertion failure in computeOverflowForUnsignedMul
(used in checkOverflow) due to the inner and outer trip counts
having different types. This occurs when the IV has been widened,
but the loop components are not successfully rediscovered.
This is fixed by some refactoring of the code in findLoopComponents
which identifies the trip count of the loop.
2021-08-13 10:07:49 +01:00
Giorgis Georgakoudis 60e643fe05 [OpenMP][Fix] Fix disable spmdization option
Besides SPMDization, other analyses and optimizations for original, frontend-generated SPMD regions use information from the AAKernelInfoFunction attribute. This fix makes sure that disabling SPMDization through the corresponding option applies only to generic-mode regions, which should not be SPMDized, while leaving the attribute state of original SPMD regions unaffected.

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D108001
2021-08-12 17:59:14 -07:00
Sanjay Patel 14eefa57f2 [InstCombine] factorize min/max intrinsic ops with common operand (2nd try)
This is a re-try of 6de1dbbd09 which was reverted because
it missed a null check. Extra test for that failure added.

Original commit message:
This is an adaptation of D41603 and another step on the way
to canonicalizing to the intrinsic forms of min/max.

See D98152 for status.
2021-08-12 16:32:07 -04:00
Amy Huang 427520a8fa Revert "[InstCombine] factorize min/max intrinsic ops with common operand"
This reverts commit 6de1dbbd09 because it causes a
compiler crash.
2021-08-12 12:36:25 -07:00
Florian Hahn f999312872
Recommit "[Matrix] Overload stride arg in matrix.columnwise.load/store."
This reverts the revert 28c04794df.

The failing MLIR test that caused the revert should be fixed in this
version.

Also includes a PPC test fix previously in 1f87c7c478.
2021-08-12 18:31:57 +01:00
Roman Lebedev f30a7dff8a
[NFCI][SimplifyCFG] simplifyCondBranch(): assert that branch is non-tautological
We really shouldn't deal with a conditional branch that can be trivially
constant-folded into an unconditional branch.

Indeed, barring failure to trigger BB reprocessing, that should be true,
so let's assert as much, and hope the assertion never fires.
If it does, we have a bug to fix.
2021-08-12 20:03:09 +03:00
Roman Lebedev 628f63d3d5
[SimplifyCFG] If FoldTwoEntryPHINode() changed things, restart
Mainly, i want to add an assertion that `SimplifyCFGOpt::simplifyCondBranch()`
doesn't get asked to deal with non-unconditional branches,
and if i do that, then said assertion fires on existing tests,
and this is what prevents it from firing.
2021-08-12 20:03:09 +03:00
Sanjay Patel 790c29ab86 [InstCombine] fold umax/umin intrinsics based on demanded bits
This is a direct translation of the select folds added with
D53033 / D53036 and another step towards canonicalization
using the intrinsics (see D98152).
2021-08-12 12:37:45 -04:00
maekawatoshiki dd3eea6566 [LICM] Support sinking in LNICM
Currently, the LNICM pass does not support sinking instructions out of a loop nest.
This patch enables LNICM to sink as many instructions as possible down to the exit block of the outermost loop.

Reviewed By: Whitney

Differential Revision: https://reviews.llvm.org/D107219
2021-08-13 00:56:26 +09:00
Sanjay Patel cd44cc86e3 [InstCombine] remove unused function argument; NFC
This was just added with 6de1dbbd09 , and I missed
pulling the extra arg from the final revision.
2021-08-12 11:47:25 -04:00
Johannes Doerfert 4e7d7cae67 [Attributor][FIX] Do not try to rewrite functions with casted call sites
If we cast a function at the call site it is hard(er) to get the rewrite
correct, so let's not attempt it for now.

Fixes PR51448.
2021-08-12 10:39:53 -05:00
Johannes Doerfert 5f543919b2 [Attributor][FIX] Guard constant casts with type size checks 2021-08-12 10:39:53 -05:00
Johannes Doerfert a420f80bf1 [Attributor] Do not delete volatile stores to null/undef
See D106309.
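
A minimal sketch of the kind of store that must be preserved (function name is illustrative):

```
define void @keep_volatile_store() {
  ; volatile store to a null address: must not be deleted
  store volatile i32 0, i32* null, align 4
  ret void
}
```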

Differential Revision: https://reviews.llvm.org/D107906
2021-08-12 10:39:52 -05:00
Sanjay Patel be0698559b [InstCombine] remove shl(neg x), y transform
This diff was accidentally committed with:
1b5a195845
2021-08-12 11:27:22 -04:00
Sanjay Patel 6de1dbbd09 [InstCombine] factorize min/max intrinsic ops with common operand
This is an adaptation of D41603 and another step on the way
to canonicalizing to the intrinsic forms of min/max.

See D98152 for status.
2021-08-12 11:19:09 -04:00
Sanjay Patel 1b5a195845 [InstCombine] add tests for factorization of min/max intrinsics; NFC 2021-08-12 11:19:09 -04:00
Liqiang Tao 422fc5603a [llvm][Inline] Refactor out InlineOrder
Move InlineOrder to a separate file.

Reviewed By: kazu

Differential Revision: https://reviews.llvm.org/D107831
2021-08-12 22:19:53 +08:00
Mehdi Amini 28c04794df Revert "[Matrix] Overload stride arg in matrix.columnwise.load/store."
This reverts commit a1ef81de35.

Broke the MLIR buildbot.
2021-08-12 11:57:19 +00:00
Florian Hahn a1ef81de35
[Matrix] Overload stride arg in matrix.columnwise.load/store.
This patch adjusts the intrinsics definition of
llvm.matrix.column.major.load and llvm.matrix.column.major.store to
allow overloading the type of the stride. The bitwidth of the stride is
used to perform the offset computation.

This fixes a crash when using __builtin_matrix_column_major_load or
__builtin_matrix_column_major_store on 32-bit platforms. The stride argument
of the builtins is defined as `size_t`, which is 32 bits wide on 32-bit
platforms.

Note that we still perform offset computations with 64 bit width on 32
bit platforms for accesses that do not take a user-specified stride.
This can be fixed separately.

Fixes PR51304.

Reviewed By: erichkeane

Differential Revision: https://reviews.llvm.org/D107349
2021-08-12 10:45:25 +01:00
Christudasan Devadasan 5d940b71ae Reapply "SROA: Enhance speculateSelectInstLoads"
Originally committed as ffc3fb665d
Reverted in fcf2d5f402 due to an
assertion failure.

Original commit message:

Allow the folding even if there is an
intervening bitcast.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D106667
2021-08-11 22:58:54 -04:00
Akira Hatanaka 643ce61fb3 [ObjC][ARC] Don't form a StoreStrong call if it is unsafe to move the
release call

findSafeStoreForStoreStrongContraction checks whether it's safe to move
the release call to the store by inspecting all instructions between the
two, but was ignoring retain instructions. This was causing objects to
be released and deallocated before they were retained.

rdar://81668577
2021-08-11 13:50:19 -07:00
Sanjay Patel a0a9c9e188 [InstCombine] avoid breaking up min/max (cmp+sel) idioms
This is a quick fix for a motivating case that looks like this:
https://godbolt.org/z/GeMqzMc38

As noted, we might be able to restore the min/max patterns
with select folds, or we just wait for this to become easier
with canonicalization to min/max intrinsics.
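
For reference, a minimal sketch of the cmp+select max idiom being preserved (not
the exact motivating case from the godbolt link):

```
define i32 @smax_idiom(i32 %a, i32 %b) {
  ; canonical signed-max idiom: kept as cmp+select rather than broken up
  %cmp = icmp sgt i32 %a, %b
  %r = select i1 %cmp, i32 %a, i32 %b
  ret i32 %r
}
```
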
2021-08-11 12:48:11 -04:00
Yolanda Chen 8fa16cc628 [LTO][lld] Add lto-pgo-warn-mismatch option
When enabling CSPGO for ThinLTO, there are profile CFG mismatch warnings that will cause lld-link errors (with /WX)
due to source changes (e.g. `#if` code runs for profile generation but not for profile use).
To disable them we have to use an internal "/mllvm:-no-pgo-warn-mismatch" option.
In contrast, clang uses the option "-Wno-backend-plugin" to avoid such warnings and gcc has an explicit "-Wno-coverage-mismatch" option.

Add an "lto-pgo-warn-mismatch" option to lld COFF/ELF to help turn the profile mismatch warnings on/off explicitly when building with ThinLTO and CSPGO.

Differential Revision: https://reviews.llvm.org/D104431
2021-08-11 09:45:55 -07:00
Wang, Pengfei 6c4809825d Revert "[lld] Add lto-pgo-warn-mismatch option"
This reverts commit 0cfb00a1c9.
2021-08-11 16:25:42 +08:00
Yolanda Chen 0cfb00a1c9 [lld] Add lto-pgo-warn-mismatch option
When enabling CSPGO for ThinLTO, there are profile CFG mismatch warnings that will cause lld-link errors (with /WX).
To disable them we have to use an internal "/mllvm:-no-pgo-warn-mismatch" option.
In contrast, clang uses the option "-Wno-backend-plugin" to avoid such warnings and gcc has an explicit "-Wno-coverage-mismatch" option.

Add this "lto-pgo-warn-mismatch" option to lld to help turn the profile mismatch warnings on/off explicitly when building with ThinLTO and CSPGO.

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D104431
2021-08-11 14:43:26 +08:00
Petr Hosek 389dc94d4b [InstrProfiling] Generate runtime hook for Fuchsia
When none of the translation units in the binary have been instrumented
we shouldn't need to link the profile runtime. However, because we pass
-u__llvm_profile_runtime on Linux and Fuchsia, the runtime would still
be pulled in and incur some overhead. On Fuchsia which uses runtime
counter relocation, it also means that we cannot reference the bias
variable unconditionally.

This change modifies the InstrProfiling pass to pull in the profile
runtime only when needed, by declaring the __llvm_profile_runtime symbol
only in translation units that actually need it. For now we restrict this only
for Fuchsia, but this can be later expanded to other platforms. This
approach was already used prior to 9a041a7522, but we changed it
to always generate the __llvm_profile_runtime due to a TAPI limitation,
but that limitation may no longer apply, and it certainly doesn't apply
on platforms like Fuchsia.

Differential Revision: https://reviews.llvm.org/D98061
2021-08-10 23:21:15 -07:00
Petr Hosek c0c1c3cf93 Revert "[InstrProfiling] Emit bias variable eagerly"
This reverts commit 6660cec568 since
it was superseded by https://reviews.llvm.org/D98061.
2021-08-10 23:21:15 -07:00
Johannes Doerfert fc32a5c87d [Attributor][NFC] Try to make the windows build bots happy
Failed for some reason, potentially because of the inner type
declaration in combination with the `using`. This might help.

Failure:
https://lab.llvm.org/buildbot/#/builders/127/builds/15432
2021-08-11 01:11:37 -05:00
Johannes Doerfert e7e3585cde [Attributor][FIX] Handle recurrences (PHIs) in AAPointerInfo explicitly
PHI nodes are not pass-through but change their value; we have to
account for that to avoid missing stores.

Follow up for D107798 to fix PR51249 for good.

Differential Revision: https://reviews.llvm.org/D107808
2021-08-11 00:49:54 -05:00
Johannes Doerfert 96da6dd6ba [Attributor][FIX] Only avoid visiting PHI uses multiple times (PR51249)
AAPointerInfoFloating needs to visit all uses, and some of them multiple times if
we go through PHI nodes. Attributor::checkForAllUses keeps a visited set
so we don't recurse endlessly. We now allow recursion for non-PHI uses so
we track all pointer offsets via PHI nodes properly without endless
recursion.

This replaces the first attempt D107579.

Differential Revision: https://reviews.llvm.org/D107798
2021-08-11 00:49:54 -05:00
Johannes Doerfert e0c5d83a92 [OpenMP][FIX] Disabled optimizations have to be made known
To avoid simplification with wrong constants we need to make sure we
know that we won't perform specific optimizations based on the user's
request. The non-SPMDization and non-CustomStateMachine flags only
prevented the final transformation but allowed value simplification
to go ahead.

Differential Revision: https://reviews.llvm.org/D107862
2021-08-11 00:49:53 -05:00
Christopher Di Bella c874dd5362 [llvm][clang][NFC] updates inline licence info
Some files still contained the old University of Illinois Open Source
Licence header. This patch replaces that with the Apache 2 with LLVM
Exception licence.

Differential Revision: https://reviews.llvm.org/D107528
2021-08-11 02:48:53 +00:00
Adrian Prantl a353edb8d6 Simplify coro::salvageDebugInfo() (NFC-ish)
This patch removes the hand-rolled implementation of salvageDebugInfo
for cast and GEPs and replaces it with a call into
llvm::salvageDebugInfoImpl().

A side-effect of this is that additional redundant convert operations
are introduced, but those don't have any negative effect on the
resulting DWARF expression.

rdar://80227769

Differential Revision: https://reviews.llvm.org/D107384
2021-08-10 15:21:18 -07:00
Adrian Prantl d6b6880172 Streamline the API of salvageDebugInfoImpl (NFC)
This patch refactors / simplifies salvageDebugInfoImpl(). The goal
here is to simplify the implementation of coro::salvageDebugInfo() in
a followup patch.

  1. Change the return value to I.getOperand(0). Currently users of
     salvageDebugInfoImpl() assume that the first operand is
     I.getOperand(0). This patch makes this information explicit. A
     nice side-effect of this change is that it allows us to salvage
     expressions such as add i8 1, %a in the future.

  2. Factor out the creation of a DIExpression and return an array of
     DIExpression operations instead. This change allows users that
     call salvageDebugInfoImpl() in a loop to avoid the costly
     creation of temporary DIExpressions and to defer the creation of
     a DIExpression until the end.

This patch does not change any functionality.

rdar://80227769

Differential Revision: https://reviews.llvm.org/D107383
2021-08-10 15:21:18 -07:00
Nikita Popov 17db125b48 [MemCpyOpt] Optimize MemoryDef insertion
When converting a store into a memset, we currently insert the new
MemoryDef after the store MemoryDef, which requires all uses to be
renamed to the new def using a whole block scan. Instead, we can
insert the new MemoryDef before the store and not rename uses,
because we know that the location is immediately overwritten, so
all uses should still refer to the old MemoryDef. Those uses will
get renamed when the old MemoryDef is actually dropped, which is
efficient.

I expect something similar can be done for some of the other MSSA
updates in MemCpyOpt. This is an alternative to D107513, at least
for this particular case.

Differential Revision: https://reviews.llvm.org/D107702
2021-08-10 21:28:29 +02:00
Sanjay Patel b267d3ce8d [InstCombine] avoid infinite loops from min/max canonicalization
The intrinsics have an extra chunk of known bits logic
compared to the normal cmp+select idiom. That allows
folding the icmp in each case to something better, but
that then opposes the canonical form of min/max that
we try to form for a select.

I'm carving out a narrow exception to preserve all
existing regression tests while avoiding the inf-loop.
It seems unlikely that this is the only bug like this
left, but this should fix:
https://llvm.org/PR51419
2021-08-10 14:42:37 -04:00
Carl Ritson a1783b54e8 [SimplifyCFG] Remove recursion from FoldCondBranchOnPHI. NFCI.
Avoid stack overflow errors on systems with small stack sizes
by removing recursion in FoldCondBranchOnPHI.

This is a simple change as the recursion was only iteratively
calling the function again on the same arguments.
Ideally this would be compiled to a tail call, but there is
no guarantee.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D107803
2021-08-10 19:14:31 +09:00
David Sherwood ce394161cb [InstCombine] Add more complex folds for extractelement + stepvector
I have updated cheapToScalarize to also consider the case when
extracting lanes from a stepvector intrinsic. This required removing
the existing 'bool IsConstantExtractIndex' and passing in the actual
index as a Value instead. We do this because we need to know if the
index is <= the known minimum number of elements returned by the stepvector
intrinsic. Effectively, when extracting lane X from a stepvector we
know the value returned is also X.
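
A minimal sketch (lane and element type are illustrative): extracting lane 2 is below
the known minimum of 4 elements, so the extract is known to be the constant 2:

```
define i64 @extract_lane_2() {
  %step = call <vscale x 4 x i64> @llvm.experimental.stepvector.nxv4i64()
  ; lane 2 < known minimum element count (4), so the extracted value is 2
  %lane = extractelement <vscale x 4 x i64> %step, i64 2
  ret i64 %lane
}
declare <vscale x 4 x i64> @llvm.experimental.stepvector.nxv4i64()
```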

New tests added here:

  Transforms/InstCombine/vscale_extractelement.ll

Differential Revision: https://reviews.llvm.org/D106358
2021-08-10 09:17:21 +01:00
Arnold Schwaighofer b987c283ae [coro] Correct CurrentBlock tracking bug recently introduced
We use the CurrentBlock to determine whether we have already processed a
block. Don't reuse this variable for setting where we should insert the
rematerialization. The rematerialization block is different to the
current block when we rematerialize for coro suspend block users.

Differential Revision: https://reviews.llvm.org/D107573
2021-08-09 10:41:41 -07:00
Christudasan Devadasan fcf2d5f402 Revert "SROA: Enhance speculateSelectInstLoads"
This reverts commit ffc3fb665d.
2021-08-09 01:13:39 -04:00
Michael Liao b5e470aa2e [LowerMemIntrinsics] Typo fix. 2021-08-08 22:38:58 -04:00
Dorit Nuzman 67278b8a90 [LV] Support Interleaved Store Group With Gaps
Teach LV to use masked-store to support interleave-store-group with
gaps (instead of scatters/scalarization).

The symmetric case of using masked-load to support
interleaved-load-group with gaps was introduced a while ago, by
https://reviews.llvm.org/D53668; This patch completes the store-scenario
leftover from D53668, and solves PR50566.
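
A hedged sketch of the idea: an interleave group of factor 2 that only stores one
member per pair can be emitted as a single wide masked store with the gap lanes
masked off (names, values and types are illustrative):

```
declare void @llvm.masked.store.v8i32.p0v8i32(<8 x i32>, <8 x i32>*, i32 immarg, <8 x i1>)

define void @store_group_with_gap(<8 x i32> %interleaved, <8 x i32>* %p) {
  ; every second lane is a gap, so it is masked off instead of scalarized
  call void @llvm.masked.store.v8i32.p0v8i32(<8 x i32> %interleaved, <8 x i32>* %p, i32 4,
      <8 x i1> <i1 true, i1 false, i1 true, i1 false, i1 true, i1 false, i1 true, i1 false>)
  ret void
}
```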

Reviewed by: Ayal Zaks

Differential Revision: https://reviews.llvm.org/D104750
2021-08-08 10:32:02 +03:00
Nikita Popov 88003cea1c [MemCpyOpt] Remove MemDepAnalysis-based implementation
The MemorySSA-based implementation has been enabled for a few months
(since D94376). This patch drops the old MDA-based implementation
entirely.

I've kept this to only the basic cleanup of dropping various
conditions -- the code could be further cleaned up now that there
is only one implementation.

Differential Revision: https://reviews.llvm.org/D102113
2021-08-07 22:35:44 +02:00
Krishna a9a176ca3b [InstCombine] Remove nnan requirement for transformation to fabs from select
In this patch, the "nnan" requirement is removed for the canonicalization of select with fcmp to fabs.
(i) FSub logic: Remove check for nnan flag presence in fsub. Example: https://alive2.llvm.org/ce/z/751svg (fsub).
(ii) FNeg logic: Remove check for the presence of nnan and nsz flag in fneg. Example: https://alive2.llvm.org/ce/z/a_fsdp (fneg).

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D106872
2021-08-07 22:38:45 +05:30
Roman Lebedev 0a241e90d4
[NFC][InstCombine] `vector_reduce_xor(?ext(<n x i1>))` --> `?ext(vector_reduce_add(<n x i1>))`
Instead of expanding it ourselves,
we can just forward to `?ext(vector_reduce_add(<n x i1>))`, as per alive2:
https://alive2.llvm.org/ce/z/ymz7zE (self)
https://alive2.llvm.org/ce/z/eKu2v2 (skipped zext)
https://alive2.llvm.org/ce/z/c3BXgc (skipped sext)
2021-08-07 17:31:33 +03:00
Roman Lebedev c6ff867f92
[NFC][InstCombine] Simplify emitted IR for `vector_reduce_xor(?ext(<n x i1>))`
Now that we canonicalize low bit splatting to the form we were emitting
here ourselves, emit simpler IR that will be canonicalized later.

See 1e801439be for proofs:
https://alive2.llvm.org/ce/z/MjCm5W (self)
https://alive2.llvm.org/ce/z/kgqF4M (skipped zext)
https://alive2.llvm.org/ce/z/pgy3HP (skipped sext)
2021-08-07 17:31:24 +03:00
Roman Lebedev e71870512f
[InstCombine] Prefer `-(x & 1)` as the low bit splatting pattern (PR51305)
Both patterns are equivalent (https://alive2.llvm.org/ce/z/jfCViF),
so we should have a preference. It seems like mask+negation is better
than two shifts.
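
Both forms of the low-bit splat, shown for i32 (the width is illustrative):

```
define i32 @splat_low_bit_shifts(i32 %x) {
  ; two shifts: move bit 0 to the sign bit, then arithmetic-shift it back
  %hi = shl i32 %x, 31
  %r = ashr i32 %hi, 31
  ret i32 %r
}

define i32 @splat_low_bit_mask_neg(i32 %x) {
  ; preferred form: mask the low bit and negate it
  %bit = and i32 %x, 1
  %r = sub i32 0, %bit
  ret i32 %r
}
```
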
2021-08-07 17:25:28 +03:00
Christudasan Devadasan ffc3fb665d SROA: Enhance speculateSelectInstLoads
Allow the folding even if there is an
intervening bitcast.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D106667
2021-08-07 09:09:14 -04:00
Florian Hahn a00aafc30d
[VPlan] Iterate over phi recipes to detect reductions to fix.
After refactoring the phi recipes, we can now iterate over all header
phis in a VPlan to detect reductions when it comes to fixing them up
when tail folding.

This reduces the coupling with the cost model & legal by using the
information directly available in VPlan. It also removes a call to
getOrAddVPValue, which references the original IR value which may
become outdated after VPlan transformations.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D100102
2021-08-07 14:06:50 +01:00
Sanjay Patel 0369714b31 [InstCombine] reduce vector casting before icmp
There may be some generalizations (see test comments) of these patterns,
but this should handle the cases motivated by:
https://llvm.org/PR51315
https://llvm.org/PR51259

The backend may want to transform differently, but at least for
the x86 examples that I looked at, there does not appear to be
any significant perf diff either way.
2021-08-06 17:09:38 -04:00
Artem Belevich 6a9cf21f5a [CUDA, MemCpyOpt] Add a flag to force-enable memcpyopt and use it for CUDA.
An attempt to enable MemCpyOpt unconditionally in D104801 uncovered the fact that
there are users that do not expect LLVM to materialize the `memset` intrinsic.

While other passes can do that, too, MemCpyOpt triggers it more frequently and
breaks sanitizers and some downstream users.

For now, introduce a flag to force-enable the optimization and opt in only CUDA
compilation with the NVPTX back-end.

Differential Revision: https://reviews.llvm.org/D106401
2021-08-06 11:13:52 -07:00
Michael Liao d1cacd5928 [MemCpyOpt] Teach memcpyopt to handle loads from the constant memory.
- Loads from constant memory (either explicit loads or loads that are the source
  of memory transfer intrinsics) won't alias any stores.
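
A minimal sketch (global and function names are made up): the intervening store
cannot clobber the memcpy source because the source is constant memory, so
MemCpyOpt can still optimize across it:

```
@src = constant [16 x i8] c"0123456789abcdef"

declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i1)

define void @copy_from_constant(i8* %dst, i8* %other) {
  store i8 1, i8* %other                      ; cannot alias the constant @src
  %s = getelementptr inbounds [16 x i8], [16 x i8]* @src, i64 0, i64 0
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %s, i64 16, i1 false)
  ret void
}
```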

Reviewed By: asbirlea, efriedma

Differential Revision: https://reviews.llvm.org/D107605
2021-08-06 12:43:52 -04:00
David Sherwood 3fd96e1b2e [LoopVectorize] Improve vectorisation of some intrinsics by treating them as uniform
This patch adds more instructions to the Uniforms list, for example certain
intrinsics that are uniform by definition or whose operands are loop invariant.
This list includes:

  1. The intrinsics 'experimental.noalias.scope.decl' and 'sideeffect', which
  are always uniform by definition.
  2. If intrinsics 'lifetime.start', 'lifetime.end' and 'assume' have
  loop invariant input operands then these are also uniform too.

Also, in VPRecipeBuilder::handleReplication we check if an instruction is
uniform based purely on whether or not the instruction lives in the Uniforms
list. However, there are certain cases where calls to some intrinsics can
be effectively treated as uniform too. Therefore, we now also treat the
following cases as uniform for scalable vectors:

  1. If the 'assume' intrinsic's operand is not loop invariant, then we
  are free to treat this as uniform anyway since it's only a performance
  hint. We will get the benefit for the first lane.
  2. When the input pointers for 'lifetime.start' and 'lifetime.end' are loop
  variant then for scalable vectors we assume these still ultimately come
  from the broadcast of an alloca. We do not support scalable vectorisation
  of loops containing alloca instructions, hence the alloca itself would
  be invariant. If the pointer does not come from an alloca then the
  intrinsic itself has no effect.

I have updated the assume test for fixed width, since we now treat it
as uniform:

  Transforms/LoopVectorize/assume.ll

I've also added new scalable vectorisation tests for other intrinsics:

  Transforms/LoopVectorize/scalable-assume.ll
  Transforms/LoopVectorize/scalable-lifetime.ll
  Transforms/LoopVectorize/scalable-noalias-scope-decl.ll

Differential Revision: https://reviews.llvm.org/D107284
2021-08-06 10:13:15 +01:00
Chuanqi Xu 0fd03feb4b [FuncSpec] Return changed if function is changed by tryToReplaceWithConstant
The function may get changed by RunSCCPSolver before specialization. In other
words, the pass may change the function without specialization happening.
Add a test and comment to reveal this.
It may also report that nothing changed even though the function got changed by
RunSCCPSolver before the specialization. That looks like a potential bug.

Test Plan: check-all

Reviewed By: https://reviews.llvm.org/D107622

Differential Revision: https://reviews.llvm.org/D107622
2021-08-06 17:00:17 +08:00
David Sherwood 43a5c750d1 Revert "[LoopVectorize] Add support for replication of more intrinsics with scalable vectors"
This reverts commit 95800da914.
2021-08-06 09:48:16 +01:00
Chuanqi Xu 62fc3e0ad6 [NFC] [FuncSpec] Remove unused variables in isArgumentInteresting 2021-08-06 16:38:20 +08:00
Chuanqi Xu cc3f40bb41 [FuncSpec] Move invariant computation for spec cost out of loop (NFC-ish)
Noticed that the computation of the function specialization cost for a
function doesn't change during the traversal of the arguments of that
function, so we can hoist the computation out of the traversal. I
observed about a 1% improvement in compile time for spec2017, but I guess
the measurement may not be precise. This should be NFC and fine.

Reviewed By: Sjoerd Meijer

Differential Revision: https://reviews.llvm.org/D107621
2021-08-06 15:43:05 +08:00
Serge Pavlov 4c4093e6e3 Introduce intrinsic llvm.isnan
This is recommit of the patch 16ff91ebcc,
reverted in 0c28a7c990 because it had
an error in call of getFastMathFlags (base type should be FPMathOperator
but not Instruction). The original commit message is duplicated below:

    Clang has builtin function '__builtin_isnan', which implements C
    library function 'isnan'. This function now is implemented entirely in
    clang codegen, which expands the function into set of IR operations.
    There are three mechanisms by which the expansion can be made.

    * The most common mechanism is using an unordered comparison made by
      instruction 'fcmp uno'. This simple solution is target-independent
      and works well in most cases. It however is not suitable if floating
      point exceptions are tracked. Corresponding IEEE 754 operation and C
      function must never raise FP exception, even if the argument is a
      signaling NaN. Compare instructions usually do not have such a
      property; they raise an 'invalid' exception in such a case. So this
      mechanism is unsuitable when exception behavior is strict. In
      particular it could result in unexpected trapping if argument is SNaN.

    * Another solution was implemented in https://reviews.llvm.org/D95948.
      It is used in the cases when raising FP exceptions by 'isnan' is not
      allowed. This solution implements 'isnan' using integer operations.
      It solves the problem of exceptions, but offers one solution for all
      targets, however some can do the check in more efficient way.

    * Solution implemented by https://reviews.llvm.org/D96568 introduced a
      hook 'clang::TargetCodeGenInfo::testFPKind', which injects target
      specific code into IR. Now only SystemZ implements this hook and it
      generates a call to target specific intrinsic function.

    Although these mechanisms allow implementing 'isnan' efficiently
    enough, expanding 'isnan' in clang has drawbacks:

    * The operation 'isnan' is hidden behind generic integer operations or
      target-specific intrinsics. It complicates analysis and can prevent
      some optimizations.

    * IR can be created by tools other than clang, in this case treatment
      of 'isnan' has to be duplicated in that tool.

    Another issue with the current implementation of 'isnan' comes from the
    use of options '-ffast-math' or '-fno-honor-nans'. If such option is
    specified, 'fcmp uno' may be optimized to 'false'. It is valid
    optimization in general, but it results in 'isnan' always returning
    'false'. For example, in some libc++ implementations the following code
    returns 'false':

        std::isnan(std::numeric_limits<float>::quiet_NaN())

    The options '-ffast-math' and '-fno-honor-nans' imply that FP operation
    operands are never NaNs. This assumption however should not be applied
    to the functions that check FP number properties, including 'isnan'. If
    such function returns expected result instead of actually making
    checks, it becomes useless in many cases. The option '-ffast-math' is
    often used for performance critical code, as it can speed up execution
    by the expense of manual treatment of corner cases. If 'isnan' returns
    assumed result, a user cannot use it in the manual treatment of NaNs
    and has to invent replacements, like making the check using integer
    operations. There is a discussion in https://reviews.llvm.org/D18513#387418,
    which also expresses the opinion, that limitations imposed by
    '-ffast-math' should be applied only to 'math' functions but not to
    'tests'.

    To overcome these drawbacks, this change introduces a new IR intrinsic
    function 'llvm.isnan', which realizes the check as specified by IEEE-754
    and C standards in target-agnostic way. During IR transformations it
    does not undergo undesirable optimizations. It reaches instruction
    selection, where it is lowered in a target-dependent way. The lowering can
    vary depending on options like '-ffast-math' or '-ffp-model' so the
    resulting code satisfies requested semantics.

    Differential Revision: https://reviews.llvm.org/D104854
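
A minimal sketch of IR using the new intrinsic (assuming the f64 overload is
mangled as `llvm.isnan.f64`):

```
define i1 @check_isnan(double %x) {
  ; lowered in a target- and fp-model-dependent way during instruction selection
  %r = call i1 @llvm.isnan.f64(double %x)
  ret i1 %r
}
declare i1 @llvm.isnan.f64(double)
```
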
2021-08-06 14:32:27 +07:00
Florian Hahn 3e58dd19df
[LV] Move reduction PHI node fixup to VPlan::execute (NFC).
All information to fix-up the reduction phi nodes in the vectorized loop
is available in VPlan now. This patch moves the code to do so, to make
this clearer. Fixing up the loop exit value still relies on other
information and remains outside of VPlan for now.

Reviewed By: Ayal

Differential Revision: https://reviews.llvm.org/D100113
2021-08-06 08:29:20 +01:00
Chuanqi Xu 82ca845b47 [NFC] [FuncSpec] Update the Todo list for recursive functions
Now the recursive functions may get specialized many times when
`func-specialization-max-iters` increases. See discussion in
https://reviews.llvm.org/D106426 for details.
2021-08-06 14:43:17 +08:00
Arthur Eubanks a1b21ed3fb [GCov] Emit memset instead of stores in __llvm_gcov_reset
For a very large module, __llvm_gcov_reset can become very large.
__llvm_gcov_reset previously emitted stores to a bunch of globals in one
huge basic block. MemCpyOpt would turn many of these stores into
memsets, and updating MemorySSA would be extremely slow.

Verified that this makes the compile time of certain files go down
drastically (20min -> 5min).
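
A rough sketch of the shape of the generated function after this change (the
counter global and sizes are made up):

```
@ctrs = internal global [4 x i64] zeroinitializer

declare void @llvm.memset.p0i8.i64(i8*, i8, i64, i1)

define internal void @__llvm_gcov_reset() {
  ; previously: one zero store per counter word; now a single memset
  %p = bitcast [4 x i64]* @ctrs to i8*
  call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 32, i1 false)
  ret void
}
```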

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D107538
2021-08-05 22:40:15 -07:00
Chris Jackson 113a06f7a5 [DebugInfo][LSR] Don't cache dbg.value that are already undef
The SCEV-based salvaging method caches dbg.value information pre-LSR so
that salvaging may be attempted post-LSR. If the dbg.values are already
undef pre-LSR then a salvage attempt would be fruitless, so avoid
caching them.

Reviewed By: StephenTozer

Differential Revision: https://reviews.llvm.org/D107448
2021-08-05 19:16:43 +01:00
Kazu Hirata 72661f337a [Transforms] Drop unnecessary const from return types (NFC)
Identified with readability-const-return-type.
2021-08-05 08:53:17 -07:00
Alexey Bataev e7c3eaa8ae [SLP]Do not emit extra shuffle for insertelements vectorization.
If the vectorized insertelement instructions form an identity subvector
(the subvector at the beginning of the long vector), it is enough
to just extend the vector itself; there is no need to generate an
insert-subvector shuffle.

Differential Revision: https://reviews.llvm.org/D107494
2021-08-05 08:41:24 -07:00
David Sherwood e9177b0958 Fix build issues caused by 95800da914 2021-08-05 16:26:34 +01:00
Sander de Smalen 3e47f009ff [LV] Consider ExtractValue as uniform.
Since all operands to ExtractValue must be loop-invariant when we deem
the loop vectorizable, we can consider ExtractValue to be uniform.

Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D107286
2021-08-05 16:20:50 +01:00
eopXD fd7f6a3c81 [NFC][LoopIdiom] rename boolean variable NegStride to IsNegStride
Rename variable for better code readability.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D107570
2021-08-05 23:11:42 +08:00
Momchil Velikov f171149e0d [SimplifyCFG] Speculate a store preceded by a local non-escaping load
In SimplifyCFG we may simplify the CFG by speculatively executing
certain stores, when they are preceded by a store to the same
location.  This patch allows such speculation also when the stores are
similarly preceded by a load.

In order for this transformation to be correct we need to ensure that
the memory location is writable and the store in the new location does
not introduce a data race.

Local objects (created by an `alloca` instruction) are always
writable, so once we are past a read from a location it is valid to
also write to that same location.

Seeing just a load does not guarantee absence of a data race (unlike
if we see a store) - the load may still be part of a race, just not
causing undefined behaviour
(cf. https://llvm.org/docs/Atomics.html#optimization-outside-atomic).

In the original program, a data race might have been prevented by the
condition, but once we move the store outside the condition, we must
be sure a data race wasn't possible anyway, no matter what the
condition evaluates to.

One way to be sure that a local object is never concurrently
read/written is check that its address never escapes the function.

Hence this transformation is restricted to local, non-escaping
objects.
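
A minimal sketch of the pattern (names are illustrative): the store in %then is
preceded only by a load of the same local, non-escaping alloca, and may now be
speculated:

```
define i32 @speculate_store(i1 %c, i32 %v) {
entry:
  %a = alloca i32, align 4            ; local object whose address never escapes
  %old = load i32, i32* %a, align 4   ; a read of the location precedes the store
  br i1 %c, label %then, label %join

then:
  store i32 %v, i32* %a, align 4      ; candidate for speculation into %entry
  br label %join

join:
  %r = load i32, i32* %a, align 4
  ret i32 %r
}
```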

Reviewed By: nikic, lebedev.ri

Differential Revision: https://reviews.llvm.org/D107281
2021-08-05 15:54:42 +01:00
Florian Hahn 38b098be66
[VectorCombine] Limit scalarization known non-poison indices.
We can only trust the range of the index if it is guaranteed
non-poison.

Fixes PR50949.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D107364
2021-08-05 15:36:31 +01:00
Dawid Jurczak 06206a8cd1 [BuildLibCalls][NFC] Remove redundant attribute list from emitCalloc
Additionally, this patch aligns DSE, which is the only user of emitCalloc, with that change.

Differential Revision: https://reviews.llvm.org/D103523
2021-08-05 16:18:38 +02:00
David Sherwood 95800da914 [LoopVectorize] Add support for replication of more intrinsics with scalable vectors
This patch adds more instructions to the Uniforms list, for example certain
intrinsics that are uniform by definition or whose operands are loop invariant.
This list includes:

  1. The intrinsics 'experimental.noalias.scope.decl' and 'sideeffect', which
  are always uniform by definition.
  2. If intrinsics 'lifetime.start', 'lifetime.end' and 'assume' have
  loop invariant input operands then these are also uniform too.

Also, in VPRecipeBuilder::handleReplication we check if an instruction is
uniform based purely on whether or not the instruction lives in the Uniforms
list. However, there are certain cases where calls to some intrinsics can
be effectively treated as uniform too. Therefore, we now also treat the
following cases as uniform for scalable vectors:

  1. If the 'assume' intrinsic's operand is not loop invariant, then we
  are free to treat this as uniform anyway since it's only a performance
  hint. We will get the benefit for the first lane.
  2. When the input pointers for 'lifetime.start' and 'lifetime.end' are loop
  variant then for scalable vectors we assume these still ultimately come
  from the broadcast of an alloca. We do not support scalable vectorisation
  of loops containing alloca instructions, hence the alloca itself would
  be invariant. If the pointer does not come from an alloca then the
  intrinsic itself has no effect.

I have updated the assume test for fixed width, since we now treat it
as uniform:

  Transforms/LoopVectorize/assume.ll

I've also added new scalable vectorisation tests for other intrinsics:

  Transforms/LoopVectorize/scalable-assume.ll
  Transforms/LoopVectorize/scalable-lifetime.ll
  Transforms/LoopVectorize/scalable-noalias-scope-decl.ll

Differential Revision: https://reviews.llvm.org/D107284
2021-08-05 15:17:27 +01:00
Dawid Jurczak f8cdde7195 [SimplifyLibCalls][NFC] Clean up LibCallSimplifier from 'memset + malloc into calloc' transformation
FoldMallocMemset can be safely removed because, since https://reviews.llvm.org/D103009,
such a transformation is already performed in DSE.

Differential Revision: https://reviews.llvm.org/D103451
2021-08-05 16:08:32 +02:00
Sander de Smalen 8d08a84745 [LV] Remove a change that was added in D106164.
This change wasn't strictly necessary for D106164 and could be removed.
This patch addresses the post-commit comments from @fhahn on D106164, and
also changes sve-widen-gep.ll to use the same IR test as shown in
pointer-induction.ll.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D106878
2021-08-05 14:44:53 +01:00
eopXD 26aa1bbe97 [NFCI] [LoopIdiom] Let processLoopStridedStore take StoreSize as SCEV instead of unsigned
Letting it take a SCEV allows further modification of the function to optimize
the case where the StoreSize / Stride is runtime-determined.

This is a precursor to D107353.
The big picture is to let LoopIdiom deal with runtime-determined sizes.

Reviewed By: Whitney, lebedev.ri

Differential Revision: https://reviews.llvm.org/D104595
2021-08-05 13:21:48 +08:00