Change the DSE calloc handling to assume that calloc is
inaccessiblememonly, i.e. that its defining access is liveOnEntry.
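A minimal IR sketch (names invented) of the kind of store this lets DSE treat as a no-op:

```llvm
declare ptr @calloc(i64, i64)

define ptr @example() {
  %p = call ptr @calloc(i64 1, i64 8)
  ; calloc-ed memory is known to be zero, and with calloc treated as
  ; inaccessiblememonly its defining access is liveOnEntry, so this
  ; zero store is a no-op that DSE can delete.
  store i64 0, ptr %p
  ret ptr %p
}
```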
Differential Revision: https://reviews.llvm.org/D117543
I would like to move LoopFlatten from loop pass manager LPM2 to LPM1 (D116612),
but LPM1 uses MemorySSA, so LoopFlatten needs to preserve MemorySSA; this patch
adds that. More specifically, LoopFlatten restructures the CFG, and with this
change the MSSA state is updated accordingly, where we also update the DomTree.
LoopFlatten doesn't rewrite/optimise/delete load or store instructions, so I
have not added any MSSA updates for that.
Differential Revision: https://reviews.llvm.org/D116660
ConstantHoist currently only hoists GEPs if there is no notional
overindexing. As this transform only hoists address arithmetic,
it shouldn't care about whether any overindexing occurs or not.
There is one caveat: If the hoisted base GEP is inbounds, and a
later non-inbounds GEP is rewritten in terms of it, the value
may be incorrectly poisoned. To avoid this, restrict the transform
to inbounds GEPs for now, as the notional overindexing check
effectively did that as well. The inbounds restriction could be
dropped by dropping inbounds from the base GEP expression.
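For reference, "notional overindexing" means a constant GEP index steps past the declared bound of the type it indexes, even if the overall offset stays in range; a hypothetical example:

```llvm
@g = global [2 x [8 x i32]] zeroinitializer

; The inner index 11 exceeds the array bound of 8, so this constant GEP is
; notionally overindexed even though the byte offset is still inside @g.
@p = global ptr getelementptr inbounds ([2 x [8 x i32]], ptr @g, i64 0, i64 0, i64 11)
```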
Differential Revision: https://reviews.llvm.org/D117201
AAPointerInfo currently bails on constant expression GEPs with
notional overindexing. I don't think this is necessary, as the
following code handling GEPOperator will deal with arbitrary
indices appropriately.
Differential Revision: https://reviews.llvm.org/D117203
After resetting the start value of the canonical IV, it might not be
canonical any more. Add an assertion to make sure it is only used by its
increment, to avoid potential mis-use. Suggested in D117140.
It is a known problem that we can't align the switch-based coroutine
frame if the alignment exceeds std::max_align_t (which is usually 16).
We could solve the problem in the middle end with a dynamic
transformation, or in the frontend by emitting an aligned allocation
function. If we solve it in the frontend, the middle end at least needs
to offer an intrinsic that reports the alignment. This patch adds such
an intrinsic, called llvm.coro.align.
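A rough sketch of the intrinsic's shape (the surrounding coroutine boilerplate is omitted; in real IR the call sits inside a coroutine body):

```llvm
; Overloaded on the result width, like llvm.coro.size.
declare i64 @llvm.coro.align.i64()

define void @frame_align_query() {
  ; Reports the required alignment of the coroutine frame so the
  ; allocation can be over-aligned accordingly.
  %align = call i64 @llvm.coro.align.i64()
  call void @use(i64 %align)
  ret void
}

declare void @use(i64)
```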
Differential Revision: https://reviews.llvm.org/D117542
This is an extension of **profi** post-processing step that rebalances counts
in CFGs that have basic blocks w/o probes (aka "unknown" blocks). Specifically,
the new version finds many more "unknown" subgraphs and marks more "unknown"
basic blocks as hot (which prevents undesired optimizations for those blocks).
I see up to 0.5% performance improvement on some (large) binaries, e.g.,
clang-10 and gcc-8.
The algorithm is still linear and yields no build time overhead.
The `InstrProfInstBase` class is for all `llvm.instrprof.*` intrinsics. In a
later diff we will add a new intrinsic of this type. Also refactor some
logic in `InstrProfiling.cpp`.
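For context, a representative intrinsic of that family as it appears in IR (hash and counter numbers are made up):

```llvm
@__profn_foo = private constant [3 x i8] c"foo"

define void @foo() {
  ; Bumps counter 0 of 2 associated with function "foo".
  call void @llvm.instrprof.increment(ptr @__profn_foo, i64 1234, i32 2, i32 0)
  ret void
}

declare void @llvm.instrprof.increment(ptr, i64, i32, i32)
```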
Reviewed By: davidxl
Differential Revision: https://reviews.llvm.org/D117261
The global state refers to the number of nodes currently in the
module, and the number of direct calls between nodes, across the
module.
Node counts are not a problem; edge counts are, because we want strictly
the kind of edges that affect inlining (direct calls), and that is not
easily obtainable without iterating over the whole module.
This patch avoids relying on analysis invalidation because it turned out
to be too aggressive in some cases. It leverages the fact that Node
objects are stable - they do not get deleted while cgscc passes are
run over the module - as well as cgscc pass manager invariants.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D115847
The LLVM Programmer’s Manual strongly discourages the use of `std::vector<bool>` and suggests `llvm::BitVector` as a possible replacement.
This patch does just that for llvm.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D117121
This caused a miscompile due to call slot optimization replacing a call
argument without considering the call's !noalias metadata; see the discussion
on the code review.
> Call slot optimization is currently supposed to be prevented if
> the call can capture the source pointer. Due to an implementation
> bug, this check currently doesn't trigger if a bitcast of the source
> pointer is passed instead. I'm somewhat afraid of the fallout of
> fixing this bug (due to heavy reliance on call slot optimization
> in rust), so I'd like to strengthen the capture reasoning a bit first.
>
> In particular, I believe that the capture is fine as long as a)
> the call itself cannot depend on the pointer identity, because
> neither dest has been captured before/at nor src before the
> call and b) there is no potential use of the captured pointer
> before the lifetime of the source alloca ends, either due to
> lifetime.end or a return from a function. At that point the
> potentially captured pointer becomes dangling.
>
> Differential Revision: https://reviews.llvm.org/D115615
Also reverting the dependent commit:
> [MemCpyOpt] Look through pointer casts when checking capture
>
> The user scanning loop above looks through pointer casts, so we
> also need to strip pointer casts in the capture check. Previously
> the source was incorrectly considered not captured if a bitcast
> was passed to the call.
This reverts commits 487a34ed9d and 00e6869463.
Previously some constants were not pushed to the top of the resulting
expression tree as intended by the algorithm. We can remove the logic
from simplifyFAdd and rely on SimplifyAssociativeOrCommutative to do
that.
Differential Revision: https://reviews.llvm.org/D117302
When SVE is enabled for AArch64 targets it makes more sense to use the
get.active.lane.mask intrinsic, because SVE has an exact 1-1 mapping
from the intrinsic to the 'whilelo' instruction for legal vector types.
This instruction neatly takes overflow into account as well. This patch
fixes an issue in VPInstruction::generateInstruction that assumed we are
only dealing with fixed-width vectors.
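For illustration, the scalable-vector form that maps onto 'whilelo' looks roughly like this (element count chosen arbitrarily):

```llvm
define <vscale x 4 x i1> @lane_mask(i64 %index, i64 %tripcount) {
  ; Lane i of the result is true iff %index + i < %tripcount,
  ; with the addition treated as if it did not overflow.
  %mask = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 %index, i64 %tripcount)
  ret <vscale x 4 x i1> %mask
}

declare <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64, i64)
```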
Differential Revision: https://reviews.llvm.org/D117109
After D116332, some icmps no longer fold with the target-independent
constant folder. The SimplifyCFG code assumed that the comparison
would always fold, which is not guaranteed. Explicitly check that the
result is either true or false.
Differential Revision: https://reviews.llvm.org/D117184
We can generalize the malloc-to-global transform for other allocation functions
which are both a) removable, and b) have a known initialization value.
One subtlety that I want to point out - mostly because I hadn't realized it was
true until I took a closer look - is that the existing code doesn't prove that
initialization/malloc happens only once. The initialization function can be
called multiple times. This is correct without special handling for malloc, as
undef can map to any value previously written, but for a non-undef initializing
allocation it means we may end up memsetting the new global repeatedly. In
particular, this means it's not legal to fold the memset into the initializer
of the global.
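A hedged sketch of that subtlety (all names invented): if the allocating function can run more than once, the transformed code must re-zero the global on every call, so the zeroing stays a memset instead of becoming the initializer:

```llvm
@p = internal global ptr null

declare ptr @calloc(i64, i64)

define void @init() {
  ; May be called repeatedly. After the transform the allocation is backed
  ; by a global, but each call still needs its own memset-to-zero, because
  ; stores made since the previous call may have changed the contents.
  %mem = call ptr @calloc(i64 4, i64 4)
  store ptr %mem, ptr @p
  ret void
}
```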
Differential Revision: https://reviews.llvm.org/D117503
This patch fixes an issue in which SSA value references within a
DIArgList would be unnecessarily dropped by llvm-link, even when
invoking it on a single file (which should be a no-op). The reason for the
difference is that the ValueMapper does not refer to the
RF_IgnoreMissingLocals flag for LocalAsMetadata contained within a
DIArgList; this flag is used for direct LocalAsMetadata uses to preserve
SSA references even when the ValueMapper does not have an explicit
mapping for the referenced SSA value, which appears to always be the
case when using llvm-link in this manner.
Differential Revision: https://reviews.llvm.org/D114355
This fold already exists for scalars via FAddCombine (and that's
why 2 of the tests are only changed cosmetically), but that code
misses vectors and has largely been replaced by simpler folds
over time, so this is another step towards removing it.
The tests with constant folding that produces poison
could potentially remove the select entirely:
https://alive2.llvm.org/ce/z/e-WUqF
...but this patch just removes the FMF-only limitation on
propagation.
This was a last-minute addition to D117249, and of course I ended
up inverting the condition in a way that caused an uninitialized
memory read.
I've dropped it entirely, as I don't think we actually care whether
the size is zero or not here. The previous code wasn't checking
this either.
The malloc to global transform currently determines the type of the
global by looking at bitcasts of the malloc. This is limited (the
transform fails if there are multiple different types) and
incompatible with opaque pointers.
My initial approach was to construct an appropriate struct type
based on usage in loads/stores. What this patch does instead is
to always create an [AllocSize x i8] global, without trying to
guess types at all.
This does mean that other transforms that require a certain global
type may break. I fixed two of these in D117034 and D117223, which
I believe should be sufficient to avoid regressions. In particular,
the global SRA change should end up splitting the global into
naturally-typed sub-globals, at which point all other optimizations
should work.
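An illustrative before/after (size and names invented):

```llvm
; Before the transform: a heap allocation whose only use is being stored
; into an internal global.
@g = internal global ptr null

declare ptr @malloc(i64)

define void @create() {
  %mem = call ptr @malloc(i64 16)
  store ptr %mem, ptr @g
  ret void
}

; After the transform the storage is always a plain byte array of the
; allocation size, regardless of how it is later loaded/stored, e.g.:
;   @g.body = internal global [16 x i8] undef
```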
Differential Revision: https://reviews.llvm.org/D117092
Currently global SRA uses the GEP structure to determine how to
split the global. This patch instead analyses the loads and stores
that are performed on the global, and collects which types are used
at which offset, and then splits the global according to those.
This is both more general, and works fine with opaque pointers.
This is also closer to how ordinary SROA is performed.
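A rough sketch of the idea (offsets and names hypothetical): a byte-array global that only ever sees an i32 access at offset 0 and an i64 access at offset 8 can be split along those offsets:

```llvm
@g = internal global [16 x i8] zeroinitializer

define i64 @read() {
  %a = load i32, ptr @g                 ; i32 used at offset 0
  %q = getelementptr i8, ptr @g, i64 8
  %b = load i64, ptr %q                 ; i64 used at offset 8
  %a.ext = zext i32 %a to i64
  %sum = add i64 %a.ext, %b
  ret i64 %sum
}

; Splitting by access types can yield two naturally-typed globals, e.g.:
;   @g.0 = internal global i32 0
;   @g.1 = internal global i64 0
```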
Differential Revision: https://reviews.llvm.org/D117223
canSkipDef() currently skips inaccessiblememonly calls, but not
if they are allocation functions. This check was added in D103009,
but actually seems to be a leftover from a previous implementation
in D101440. canSkipDef() is not used on the storeIsNoop() path,
where the relevant transform ended up being implemented.
Differential Revision: https://reviews.llvm.org/D117005
After d4a8fc3a87 LV stopped adding metadata to disable runtime
unrolling to the vectorized epilogue loop. This was missed because
278aa65cc4 removed the relevant test coverage.
This patch fixes that by adding the relevant metadata after
vector loop generation.
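The metadata being re-added is the loop attribute that disables runtime unrolling; a hand-written sketch of what it looks like on the epilogue loop's backedge:

```llvm
define void @epilogue_skeleton(i64 %n) {
entry:
  br label %vec.epilog.body

vec.epilog.body:
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %vec.epilog.body ]
  %iv.next = add i64 %iv, 4
  %done = icmp uge i64 %iv.next, %n
  br i1 %done, label %exit, label %vec.epilog.body, !llvm.loop !0

exit:
  ret void
}

!0 = distinct !{!0, !1}
!1 = !{!"llvm.loop.unroll.runtime.disable"}
```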
Use the AttributeSet constructor instead. There's no good reason
why AttrBuilder itself should extract the AttributeSet from the
AttributeList. Moving this out of the AttrBuilder generally results
in cleaner code.
The empty() method is a footgun: It only checks whether there are
non-string attributes, which is not at all obvious from its name, and is
of dubious usefulness. td_empty() is entirely unused.
Drop these methods in favor of hasAttributes(), which checks
whether there are any attributes, regardless of whether these are
string or enum attributes.
If the function is not sanitized we must reset the shadow, not copy it.
Depends on D117285
Reviewed By: kda, eugenis
Differential Revision: https://reviews.llvm.org/D117286
This fixes a crash I observed in issue #48708 where the LSR
pass tries to insert an instruction in a basic block that contains only a
catchswitch statement. This happens because the Phi node
being evaluated has the same incoming value for several different basic
blocks. If the basic block associated with the incoming value of the operand
being evaluated has an EHPad terminator, LSR skips optimizing it.
But since that incoming value can come from multiple different blocks,
there can be other incoming basic blocks which are terminated in
an EHPad. If these are then rewritten in RewriteForPhi, the ones
containing an EHPad terminator will hit the "Insertion point must
be a normal instruction" assert in AdjustInsertPositionForExpand.
This fix makes CollectLoopInvariantFixupsAndFormulae also ignore
cases where the same value has another incoming basic block with an
EHPad, just as it already does when the primary value has one.
Patch by Lorenz Brun <lorenz@brun.one>
Differential Revision: https://reviews.llvm.org/D98378
If the function has no sanitize_memory attribute, we still reset the shadow
for nested calls.
The first return from getShadow() correctly returned the shadow for the
argument, but it didn't reset the shadow of the byval pointee.
Depends on D117277
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D117278
It's NFC because the shadow of the pointer is clean, so origins will not be
propagated anyway.
Depends on D117275
Reviewed By: kda, eugenis
Differential Revision: https://reviews.llvm.org/D117276
In the process of rewriting `alloca`s and `phi`s that use them, the SROA
pass can try to insert a non-PHI instruction by calling
`getFirstInsertionPt()`, which is not possible in a catchswitch BB. This
CL makes us bail out in these cases.
Reviewed By: dschuff
Differential Revision: https://reviews.llvm.org/D117168
Currently loop interchange only supports loops with one inner loop
induction variable. This patch adds support for transformation with
more than one inner loop induction variable. The induction PHIs and
induction increment instructions are moved/duplicated properly to the
new outer header and the new outer latch, respectively.
Reviewed By: bmahjour
Differential Revision: https://reviews.llvm.org/D114917
This commit optimizes the code sequence:
icmp-XXX (ashr-exact (X, C_1), C_2).
Instcombine already implements this optimization for sgt, and this
patch adds support for additional predicates. The transformation is legal
for all predicates if the 'exact' flag is set, and for SGE, UGE, SLT, ULT
when the exact flag is not present.
This pattern is found in the std::vector bounds checks code of the at()
method.
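A concrete instance with arbitrary constants (ult shown; the Alive2 link below covers the general case):

```llvm
define i1 @src(i32 %x) {
  %s = ashr exact i32 %x, 4
  %c = icmp ult i32 %s, 10
  ret i1 %c
}

; With 'exact', the shift is a lossless division by 16, so the compare can
; be rewritten directly on %x with the constant scaled back up:
define i1 @tgt(i32 %x) {
  %c = icmp ult i32 %x, 160
  ret i1 %c
}
```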
Alive2 proof:
https://alive2.llvm.org/ce/z/JT_WL8
Differential Revision: https://reviews.llvm.org/D117252
After e734e8286b, it is possible to end up in
a situation where an `indirectbr` is fed by a cast, which is in turn fed by
an operation which only produces integers.
`indirectbr` expects a block address, however these operations can't produce
that.
There were several asserts in `computeValueKnownInPredecessorsImpl` which check
that we're not looking for a block address if we're walking through something
which can never produce one.
Since it's now possible to hit these asserts, this changes them into actual
checks which return false if `Preference` is not `WantInteger`.
This adds a testcase which verifies that we don't crash anymore in these
situations.
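A hand-reduced sketch (not the original reproducer) of the shape that used to trip the asserts:

```llvm
define void @f(i32 %a, i32 %b) {
entry:
  ; An integer-only operation feeding a cast feeding an indirectbr: jump
  ; threading must not ask for a block address through this chain.
  %x = xor i32 %a, %b
  %p = inttoptr i32 %x to ptr
  indirectbr ptr %p, [label %bb1, label %bb2]

bb1:
  ret void

bb2:
  ret void
}
```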
Differential Revision: https://reviews.llvm.org/D99814
This reverts the revert commit 073c27b5e5.
A reduced test case has been added in 5e4966cbae and the code has
been updated to handle the case where getInductionOpcode returns
BinaryOpsEnd. In this case, the original code was always using
Instruction::Add. Do the same in the patch.
Note this commit may slightly change the value naming, because it now
also assigns the 'induction' name in the floating point case.
This doesn't require callers to put the pointer operand and the indices
in a container like a vector when calling the function. This is not
really an issue with the existing callers. But when using it from
IRBuilder the inputs are available as separate pointer value and indices
ArrayRef.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D117038
When the masked scatter intrinsic does a uniform store to a destination
address from a source vector and the mask is all ones, this patch replaces
the masked scatter with an extract of the last lane of the source vector
and a scalar store of that element to the destination address.
This patch also folds the case where the value in the masked scatter is a
splat. In that case, provided the mask is known not to be all zero, it folds
to a scalar store of the splatted value to the destination pointer.
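A fixed-width sketch of the uniform-destination case (the splat-value case is analogous); names and types are illustrative:

```llvm
declare void @llvm.masked.scatter.v4i32.v4p0(<4 x i32>, <4 x ptr>, i32, <4 x i1>)

define void @uniform_store(<4 x i32> %val, ptr %dst) {
  %ins = insertelement <4 x ptr> poison, ptr %dst, i64 0
  %ptrs = shufflevector <4 x ptr> %ins, <4 x ptr> poison, <4 x i32> zeroinitializer
  ; Every lane stores to %dst and the mask is all ones, so only the last
  ; lane survives; this folds to: extract lane 3 of %val, store it to %dst.
  call void @llvm.masked.scatter.v4i32.v4p0(<4 x i32> %val, <4 x ptr> %ptrs, i32 4, <4 x i1> <i1 true, i1 true, i1 true, i1 true>)
  ret void
}
```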
Differential Revision: https://reviews.llvm.org/D115724