Summary:
Adds ClInstrumentFastpath option to control fastpath instrumentation.
Avoids the load/store instrumentation for the cache fragmentation tool.
Renames cache_frag_basic.ll to working_set_slow.ll for slowpath
instrumentation test.
Adds the __esan_init check in struct_field_count_basic.ll.
Reviewers: aizatsky
Subscribers: llvm-commits, bruening, eugenis, kcc, zhaoqin, vitalybuka
Differential Revision: http://reviews.llvm.org/D21079
llvm-svn: 272355
Summary:
We failed to unpoison uninteresting allocas on return, as unpoisoning is part of
the main instrumentation, which skips such allocas.
Added a check of -asan-instrument-allocas for dynamic allocas. If instrumentation of
dynamic allocas is disabled, they will not be unpoisoned.
PR27453
Reviewers: kcc, eugenis
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D21207
llvm-svn: 272341
Summary:
We still want to unpoison the full stack even in use-after-return mode, as it can be disabled at runtime.
PR27453
Reviewers: eugenis, kcc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D21202
llvm-svn: 272334
Instead of directly using MaxFunctionCount and function entry count to determine callee hotness, use the isHotFunction/isColdFunction methods provided by ProfileSummaryInfo.
Differential revision: http://reviews.llvm.org/D21045
llvm-svn: 272321
Previously, we materialized secondary vector IVs from the primary scalar IV,
by offsetting the primary to match the correct start value, and then broadcasting
it - inside the loop body. Instead, we can use a real vector IV, like we do for
the primary.
This enables using vector IVs for secondary integer IVs whose type matches the
type of the primary.
Differential Revision: http://reviews.llvm.org/D20932
llvm-svn: 272283
Summary:
This fixes PR26682. Also add LCSSA as a preserved pass to LoopSimplify;
that looks correct to me and allows writing a test for the issue.
Reviewers: chandlerc, bogner, sanjoy
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D21112
llvm-svn: 272224
Summary:
This fixes PR27617.
Bug description: The SLPVectorizer asserts on encountering GEPs with different index types, such as i8 and i64.
The patch includes a simple relaxation of the assert to allow constants being of different types, along with a regression test that will provoke the unrelaxed assert.
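For illustration only, a minimal hand-written sketch (not the actual regression test) of the kind of GEP pair involved, with constant indices of different integer types:
```
define void @gep_index_types(i32* %p, i32** %out) {
  ; Constant GEP indices of different integer types (i8 vs. i64) are the
  ; shape of input that tripped the unrelaxed assert.
  %g1 = getelementptr inbounds i32, i32* %p, i8 1
  %g2 = getelementptr inbounds i32, i32* %p, i64 2
  %o1 = getelementptr inbounds i32*, i32** %out, i64 0
  %o2 = getelementptr inbounds i32*, i32** %out, i64 1
  store i32* %g1, i32** %o1
  store i32* %g2, i32** %o2
  ret void
}
```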
Reviewers: nadav, mzolotukhin
Subscribers: JesperAntonsson, llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D20685
Patch by Jesper Antonsson!
llvm-svn: 272206
Set the unroll disable pragma after an unroll with a user-specified count has been applied.
Summary:
Previously SetLoopAlreadyUnrolled() set the disable pragma only if
there was some loop metadata.
Now it sets the pragma in all cases. This helps to prevent multiple
unrolling when -unroll-count=N is given.
Reviewers: mzolotukhin
Differential Revision: http://reviews.llvm.org/D20765
From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 272195
As suggested by clang-tidy's performance-unnecessary-copy-initialization.
This can easily hit lifetime issues, so I audited every change and ran the
tests under asan, which came back clean.
llvm-svn: 272126
Unlike native shifts, the AVX2 per-element shift instructions VPSRAV/VPSRLV/VPSLLV handle out of range shift values (logical shifts set the result to zero, arithmetic shifts splat the sign bit).
If the shift amount is constant we can sometimes convert these instructions to native shifts:
1 - if all shift amounts are in range then the conversion is trivial.
2 - out of range arithmetic shifts can be clamped to (bitwidth - 1) (a legal shift amount) before conversion.
3 - logical shifts just return zero if all elements have out of range shift amounts.
In addition, UNDEF shift amounts are handled - either as an UNDEF shift amount in a native shift or as an UNDEF in the logical 'all out of range' zero constant special case for logical shifts.
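As a rough, hypothetical sketch (not taken from the patch), case 1 covers shifts like the one below, where every amount is a constant in range, so the per-element shift can become a regular vector shift by the same constant amounts:
```
declare <4 x i32> @llvm.x86.avx2.psllv.d(<4 x i32>, <4 x i32>)

define <4 x i32> @in_range_shift(<4 x i32> %x) {
  ; All shift amounts are < 32, so this per-element shift can be replaced
  ; by a plain vector shl on the same constant amounts.
  %r = call <4 x i32> @llvm.x86.avx2.psllv.d(<4 x i32> %x, <4 x i32> <i32 0, i32 1, i32 2, i32 3>)
  ret <4 x i32> %r
}
```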
Differential Revision: http://reviews.llvm.org/D19675
llvm-svn: 271996
This patch adds support for folding undef/zero/constant inputs to MOVMSK instructions.
The SSE/AVX versions can be fully folded, but the MMX version can only handle undef inputs.
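A minimal hypothetical sketch (not from the patch's tests) of the simplest such fold, a MOVMSK of an all-zero vector:
```
declare i32 @llvm.x86.sse2.pmovmskb.128(<16 x i8>)

define i32 @movmsk_zero() {
  ; No sign bits are set in a zero vector, so the whole call folds to 0.
  %r = call i32 @llvm.x86.sse2.pmovmskb.128(<16 x i8> zeroinitializer)
  ret i32 %r
}
```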
Differential Revision: http://reviews.llvm.org/D20998
llvm-svn: 271990
scalarizePHI only looked for phis that have exactly two uses - the "latch"
use, and an extract. Unfortunately, we can not assume all equivalent extracts
are CSE'd, since InstCombine itself may create an extract which is a duplicate
of an existing one. This extends it to handle several distinct extracts from
the same index.
This should fix at least some of the performance regressions from PR27988.
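A rough sketch of the extended shape (hypothetical IR, assuming a simple constant-operand latch), with two extracts of the same index that have not been CSE'd:
```
define float @phi_extracts(<4 x float> %init, i32 %n) {
entry:
  br label %loop
loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %vec = phi <4 x float> [ %init, %entry ], [ %vec.next, %loop ]
  %e1 = extractelement <4 x float> %vec, i32 0
  %e2 = extractelement <4 x float> %vec, i32 0   ; duplicate extract, same index
  %vec.next = fadd <4 x float> %vec, <float 1.0, float 1.0, float 1.0, float 1.0>
  %i.next = add nuw nsw i32 %i, 1
  %cond = icmp slt i32 %i.next, %n
  br i1 %cond, label %loop, label %exit
exit:
  %sum = fadd float %e1, %e2
  ret float %sum
}
```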
Differential Revision: http://reviews.llvm.org/D20983
llvm-svn: 271961
Summary:
Fix LSRInstance::HoistInsertPosition() to check the original insert
position block first for a canonical insertion point that is dominated
by all inputs. This leads to SCEV being able to reuse more instructions
since it currently tracks the instructions it creates for reuse by
keeping a table of <Value, insert point> pairs.
Originally reviewed in http://reviews.llvm.org/D18001
Reviewers: atrick
Subscribers: llvm-commits, mzolotukhin, mcrosier
Differential Revision: http://reviews.llvm.org/D18480
llvm-svn: 271929
In r271810 ( http://reviews.llvm.org/rL271810 ), I loosened the check
above this to work for any Constant rather than ConstantInt. AFAICT,
that part makes sense if we can determine that the shrunken/extended
constant remained equal. But it doesn't make sense for this later
transform where we assume that the constant DID change.
This could assert for a ConstantExpr:
https://llvm.org/bugs/show_bug.cgi?id=28011
And it could be wrong for a vector as shown in the added regression test.
llvm-svn: 271908
Summary:
This hasn't been caught before because it requires noalias or similarly
strong alias analysis to actually reproduce.
Fixes http://llvm.org/PR27952 .
Reviewers: hfinkel, sanjoy
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20944
llvm-svn: 271858
Since FoldOpIntoPhi speculates the binary operation to potentially each
of the predecessors of the PHI node (pulling it out of arbitrary control
dependence in the process), we can FoldOpIntoPhi only if we know the
operation doesn't have UB.
This also brings up an interesting profitability question -- the way it
is written today, commonIRemTransforms will hoist out work from
dynamically dead code into code that will execute at runtime. Perhaps
that isn't the best canonicalization?
Fixes PR27968.
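A hypothetical sketch of the hazard (not the PR's exact test case): the srem below is guarded and may be dynamically dead, but folding it into the phi speculates it into the predecessors:
```
define i32 @rem_into_phi(i1 %c, i1 %guard, i32 %a) {
entry:
  br i1 %c, label %other, label %join
other:
  br label %join
join:
  %x = phi i32 [ 1, %entry ], [ %a, %other ]
  br i1 %guard, label %dorem, label %exit
dorem:
  ; Suppose %guard is always false at run time, so this srem (UB if it ever
  ; executed) is dynamically dead.  Folding it into the phi would speculate
  ; "srem i32 %a, 0" into %other and create the constant expression
  ; "srem (i32 1, i32 0)" for the %entry edge, introducing UB on paths the
  ; original program actually takes.
  %r = srem i32 %x, 0
  br label %exit
exit:
  %res = phi i32 [ 0, %join ], [ %r, %dorem ]
  ret i32 %res
}
```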
llvm-svn: 271857
Summary:
There are some rough corners, since the new pass manager doesn't have
(as far as I can tell) LoopSimplify and LCSSA, so I've updated the
tests to run them separately in the old pass manager in the lit tests.
We also don't have an equivalent for AU.setPreservesCFG() in the new
pass manager, so I've left a FIXME.
Reviewers: bogner, chandlerc, davide
Subscribers: sanjoy, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D20783
llvm-svn: 271846
Change the name of the ICmpInst to 'ICmp' and the Constant (was a ConstantInt) to 'C',
so that it's hopefully clearer that 'CI' refers to CastInst in this context.
While we're scrubbing, fix the documentation comment and use 'auto' with 'dyn_cast'.
llvm-svn: 271817
A basic block could contain:
%cp = cleanuppad []
cleanupret from %cp unwind to caller
This basic block is empty and is thus a candidate for removal. However,
there can be other uses of %cp outside of this basic block. This is
only possible in unreachable blocks.
Make our transform more correct by checking that the pad has a single
user before removing the BB.
This fixes PR28005.
llvm-svn: 271816
Add the MMX implementation to the SimplifyDemandedUseBits SSE/AVX MOVMSK support added in D19614
Requires a minor tweak as llvm.x86.mmx.pmovmskb takes an x86_mmx argument - so we have to be explicit about the implied v8i8 vector type.
llvm-svn: 271789
Summary:
Adds an option -esan-assume-intra-cache-line which causes esan to assume
that a single memory access touches just one cache line, even if it is not
aligned, for better performance at a potential accuracy cost. Experiments
show that the performance difference can be 2x or more, and accuracy loss
is typically negligible, so we turn this on by default. This currently
applies just to the working set tool.
Reviewers: aizatsky
Subscribers: vitalybuka, zhaoqin, kcc, eugenis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20978
llvm-svn: 271743
Summary:
Adds a global variable to specify the tool, to support handling early
interceptors that invoke instrumented code and require shadow memory to be
initialized prior to __esan_init() being invoked.
Reviewers: aizatsky
Subscribers: vitalybuka, zhaoqin, kcc, eugenis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20973
llvm-svn: 271715
There was concern that creating bitcasts for the simpler potential select pattern:
define <2 x i64> @vecBitcastOp1(<4 x i1> %cmp, <2 x i64> %a) {
%a2 = add <2 x i64> %a, %a
%sext = sext <4 x i1> %cmp to <4 x i32>
%bc = bitcast <4 x i32> %sext to <2 x i64>
%and = and <2 x i64> %a2, %bc
ret <2 x i64> %and
}
might lead to worse code for some targets, so this patch is matching the larger
patterns seen in the test cases.
The motivating example for this patch is this IR produced via SSE intrinsics in C:
define <2 x i64> @gibson(<2 x i64> %a, <2 x i64> %b) {
%t0 = bitcast <2 x i64> %a to <4 x i32>
%t1 = bitcast <2 x i64> %b to <4 x i32>
%cmp = icmp sgt <4 x i32> %t0, %t1
%sext = sext <4 x i1> %cmp to <4 x i32>
%t2 = bitcast <4 x i32> %sext to <2 x i64>
%and = and <2 x i64> %t2, %a
%neg = xor <4 x i32> %sext, <i32 -1, i32 -1, i32 -1, i32 -1>
%neg2 = bitcast <4 x i32> %neg to <2 x i64>
%and2 = and <2 x i64> %neg2, %b
%or = or <2 x i64> %and, %and2
ret <2 x i64> %or
}
For an AVX target, this is currently:
vpcmpgtd %xmm1, %xmm0, %xmm2
vpand %xmm0, %xmm2, %xmm0
vpandn %xmm1, %xmm2, %xmm1
vpor %xmm1, %xmm0, %xmm0
retq
With this patch, it becomes:
vpmaxsd %xmm1, %xmm0, %xmm0
Differential Revision: http://reviews.llvm.org/D20774
llvm-svn: 271676
Summary:
Instrument GEP instruction for counting the number of struct field
address calculation to approximate the number of struct field accesses.
Adds test struct_field_count_basic.ll to test the struct field
instrumentation.
Reviewers: bruening, aizatsky
Subscribers: junbuml, zhaoqin, llvm-commits, eugenis, vitalybuka, kcc, bruening
Differential Revision: http://reviews.llvm.org/D20892
llvm-svn: 271619
In r270478, where I enabled the new heuristic, I posted testing results
that I got when explicitly passing the threshold values via CL options.
However, setting the CL options' init values is not enough to change the
default threshold values, so I'm now changing them in another place.
llvm-svn: 271615
In preparation for porting to the new PM.
Patch by Jake VanAdrighem! (review mainly by me/Justin)
Differential Revision: http://reviews.llvm.org/D20610
llvm-svn: 271607
Summary:
The module pass pipeline includes a late LICM run after loop
unrolling. LCSSA is implicitly run as a pass dependency of LICM. However no
cleanup pass was run after this, so the LCSSA nodes ended up in the optimized output.
Reviewers: hfinkel, mehdi_amini
Subscribers: majnemer, bruno, mzolotukhin, mehdi_amini, llvm-commits
Differential Revision: http://reviews.llvm.org/D20606
llvm-svn: 271602
This is effectively NFC because we already do this transform after r175380:
http://reviews.llvm.org/rL175380
and also via foldBoolSextMaskToSelect().
This change should just make it a bit more efficient to match the pattern.
The original guard was added in r95058:
http://reviews.llvm.org/rL95058
A sampling of codegen for current in-tree targets shows no problems. This
makes sense given that we're already producing the vector selects via the
other transforms.
llvm-svn: 271554
Inline virtual functions have linkonce_odr linkage (emitted in a comdat on
supporting targets). If the vtable for the class is not emitted in the
defining module, the function won't be address-taken, thus its address is not
recorded. At the mercy of the linker, if the per-function prf_data from this
module (in a comdat) is picked at link time, we will lose the mapping from the
function address to its hash value. This leads to missing icall promotion.
The second test case (currently disabled) in compiler-rt (r271528),
instrprof-icall-prom.test, demonstrates the bug. The first profile-use
subtest is fine due to a linker order difference.
With this change, no missing icall targets are found in instrumented clang's
raw profile.
llvm-svn: 271532
Add support for the new pass manager to MemorySSA pass.
Change MemorySSA to be computed eagerly upon construction.
Change MemorySSAWalker to be owned by the MemorySSA object that creates
it.
Reviewers: dberlin, george.burgess.iv
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19664
llvm-svn: 271432
Previously, whenever we needed a vector IV, we would create it on the fly,
by splatting the scalar IV and adding a step vector. Instead, we can create a
real vector IV. This tends to save a couple of instructions per iteration.
This only changes the behavior for the most basic case - integer primary
IVs with a constant step.
Differential Revision: http://reviews.llvm.org/D20315
llvm-svn: 271410
This will be necessary to allow the global merge pass to attach
multiple debug info metadata nodes to global variables once we reverse
the edge from DIGlobalVariable to GlobalVariable.
Differential Revision: http://reviews.llvm.org/D20414
llvm-svn: 271358
This patch fixes bug https://llvm.org/bugs/show_bug.cgi?id=27897.
When querying memory access cost, SLP currently always passes in an alignment value of 1 (unaligned), so it gets a very high cost for scalar memory accesses and wrongly vectorizes memory loads in the test case.
It can be fixed by simply giving the correct alignment.
llvm-svn: 271333
The assumption, made in insert() that weak functions are always inserted after strong functions,
is only true in the first round of adding functions.
In subsequent rounds this is no longer guaranteed, because we might remove a strong function from the tree (because it's modified) and add it later,
where an equivalent weak function already exists in the tree.
This change removes the assert in insert() and explicitly enforces a weak->strong order.
This also removes the need for two separate loops in runOnModule().
llvm-svn: 271299
Summary:
Creates a global variable containing preliminary information
for the cache-fragmentation tool runtime.
Passes a pointer to the variable (null if no variable is created) to the
compilation unit init and exit routines in the runtime.
Reviewers: aizatsky, bruening
Subscribers: filcab, kubabrecka, bruening, kcc, vitalybuka, eugenis, llvm-commits, zhaoqin
Differential Revision: http://reviews.llvm.org/D20541
llvm-svn: 271298
This adds support to the backend to actually support SjLj EH as an exception
model. This is *NOT* the default model, and requires explicitly opting into it
from the frontend. GCC supports this model and for MinGW can still be enabled
via the `--using-sjlj-exceptions` option.
Addresses PR27749!
llvm-svn: 271244
Since we already assert that the outgoing IR is in LCSSA, it is easy to
get misled into thinking that -indvars broke LCSSA if the incoming IR is
non-LCSSA. Checking this pre-condition will make such cases break in
more obvious ways.
Inspired by (but does _not_ fix) PR26682.
llvm-svn: 271196
Summary:
If we can prove that an op.with.overflow intrinsic does not overflow, we
can get rid of the intrinsic, and replace it with non-wrapping
arithmetic.
This was first checked in at r265913 but reverted in r265950 because it
exposed some issues around how SCEV handled post-inc add recurrences.
Those issues have now been fixed.
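For illustration, a hand-written sketch (a simple counted loop, not a test from the patch) where SCEV can prove the increment never overflows, so the intrinsic can be rewritten as plain wrapping-free arithmetic:
```
declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

define i32 @no_overflow(i32 %n) {
entry:
  br label %loop
loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  ; SCEV can prove this IV increment never signed-overflows, so the call can
  ; be rewritten as "%i.next = add nsw i32 %i, 1" (and any use of the
  ; overflow bit replaced with false).
  %res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %i, i32 1)
  %i.next = extractvalue { i32, i1 } %res, 0
  %cont = icmp slt i32 %i.next, %n
  br i1 %cont, label %loop, label %exit
exit:
  ret i32 %i.next
}
```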
Reviewers: atrick, regehr
Subscribers: sanjoy, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18685
llvm-svn: 271153
This patch removes the VPMOVSX and (V)PMOVZX sign/zero extension llvm intrinsics and auto-upgrades them to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX sometime ago, so much of that implementation can be reused.
Reapplied now that the companion patch (D20684), which removes/auto-upgrades the clang intrinsics, has been committed.
Differential Revision: http://reviews.llvm.org/D20686
llvm-svn: 271131
Summary:
When RF_NullMapMissingGlobalValues is set, mapValue can return null
for GlobalValue. When mapping the operands of a constant that is
referenced from metadata, we need to handle this case and actually
return null instead of mapping this constant.
Reviewers: dexonsmith, rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20713
llvm-svn: 271129
It was triggering an msan bot.
Revert "[IRPGO] Set the function entry count metadata."
This reverts commit r271090.
Revert "[IRPGO] Centralize the function attribute inliner hint logic. NFC."
This reverts commit r271089.
llvm-svn: 271091
Summary:
Unroll factor (Count) calculations moved to a new function.
Early exits added for a pragma-defined or "-unroll-count"-defined factor.
New type of unrolling "Force" introduced (previously used implicitly).
New unroll preference "AllowRemainder" introduced and set to "true" by default
(should be set to false for architectures that suffer from it).
Reviewers: hfinkel, mzolotukhin, zzheng
Differential Revision: http://reviews.llvm.org/D19553
From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 271071
When we traced through a phi node looking for floating-point reductions, we
forgot whether we'd ever seen an instruction without fast-math flags (that
would block vectorization). This propagates it through to the end.
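For illustration, a rough hypothetical reduction (not from the patch's tests) in which one operation in the chain lacks fast-math flags, so vectorization of the reduction must stay blocked:
```
define float @red(float %x, i32 %n) {
entry:
  br label %loop
loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %sum = phi float [ 0.0, %entry ], [ %sum.next, %loop ]
  %t = fadd fast float %sum, %x
  %sum.next = fadd float %t, 1.0   ; no fast-math flags: blocks the FP reduction
  %i.next = add nuw nsw i32 %i, 1
  %cond = icmp slt i32 %i.next, %n
  br i1 %cond, label %loop, label %exit
exit:
  ret float %sum.next
}
```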
llvm-svn: 271015
Currently we consider that each constant has itself as a base value, i.e. "base(const) = const".
This introduces a couple of problems when we are trying to avoid reporting constants in statepoint live sets:
1. When querying "base( phi(const1, const2) )" we will get "phi(const1, const2)" as a base pointer. Since
it's not a constant we will record it in a stack map. However, in practice we don't want this to happen
(constants are never relocated).
2. base( phi(const, gc ptr) ) = phi( const, base(gc ptr) ). This particular case imposes a challenge on our
runtime - we don't expect to see constant base pointers other than null. These problems can be avoided
by treating all constants as if they were derived from the null pointer base. I.e. in the first case we will
not include the constant pointer in the stack map at all, and in the second case we will get "phi(null, base(gc ptr))"
as a base pointer, which is a lot more convenient.
Differential Revision: http://reviews.llvm.org/D20584
llvm-svn: 270993
This patch removes the VPMOVSX and (V)PMOVZX sign/zero extension llvm intrinsics and auto-upgrades them to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX sometime ago, so much of that implementation can be reused.
A companion patch (D20684) removes/auto-upgrade the clang intrinsics.
Differential Revision: http://reviews.llvm.org/D20686
llvm-svn: 270973
objc_storeStrong can be formed from a sequence such as
%0 = tail call i8* @objc_retain(i8* %p) nounwind
%tmp = load i8*, i8** @x, align 8
store i8* %0, i8** @x, align 8
tail call void @objc_release(i8* %tmp) nounwind
The code was already looking through bitcasts for most of the values
involved, but had missed one case where the pointer operand for the
store was a bitcast. Ultimately the pointer for the load and store
have to be the same value, after stripping casts.
llvm-svn: 270955
If every operands of a constant are mapping to themselves, and the
type does not change, we have an early exit as acknowledged in the
comment:
// Otherwise, we have some other constant to remap. Start by checking to see
// if all operands have an identity remapping.
However instead of checking for identity the code was checking if the
operands were mapped to the constant itself, which is rarely true.
As a consequence, the coverage report showed that the early exit was
never taken.
llvm-svn: 270944
The condition might be simplified to a Constant, but it doesn't have to be a
ConstantInt, so we should dyn_cast instead of cast.
This fixes PR27886.
llvm-svn: 270924
An exception could prevent a store from occurring but MemCpyOpt's
callslot optimization would fire anyway, causing the store to occur.
This fixes PR27849.
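A hypothetical sketch of the hazard (the helper functions are made up, not from the PR's test): redirecting @init to write %dst directly would let the store happen even when @may_throw unwinds:
```
declare void @init(i32* nocapture)      ; hypothetical: writes to its argument
declare void @may_throw()

define void @sketch(i32* noalias dereferenceable(4) %dst) {
entry:
  %tmp = alloca i32
  call void @init(i32* nonnull %tmp)
  call void @may_throw()                ; if this throws, %dst must not be written
  %v = load i32, i32* %tmp
  store i32 %v, i32* %dst               ; call-slot opt would redirect @init to %dst
  ret void
}
```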
llvm-svn: 270892
The problem with plugins on Windows is that when building a plugin DLL it needs
to explicitly link against something (an exe or DLL) if it uses symbols from
that thing, and that thing must explicitly export those symbols. Also there's a
limit of 65535 symbols that can be exported. This means that currently plugins
only work on Windows when using BUILD_SHARED_LIBS, and that doesn't work with
MSVC.
This patch adds an LLVM_EXPORT_SYMBOLS_FOR_PLUGINS option, which when enabled
automatically exports from all LLVM tools the symbols that a plugin could want
to use so that a plugin can link against a tool directly. Plugins can specify
what tool they link against by using PLUGIN_TOOL argument to llvm_add_library.
The option can also be enabled on Linux, though there all it should do is
restrict the set of symbols that are exported as by default all symbols are
exported.
This option is currently OFF by default, as while I've verified that it works
with MSVC, linux gcc, and cygwin gcc, I haven't tried mingw gcc and I have no
idea what will happen on OSX. Also unfortunately we can't turn on
LLVM_ENABLE_PLUGINS when the option is ON as bugpoint-passes needs to be
loaded by both bugpoint.exe and opt.exe which is incompatible with this
approach. Also currently clang plugins don't work with this approach, which
will be fixed in future patches.
Differential Revision: http://reviews.llvm.org/D18826
llvm-svn: 270839
It is unsafe to hoist a load before a function call which may throw, the
throw might prevent a pointer dereference.
Likewise, it is unsafe to sink a store after a call which may throw.
The caller might be able to observe the difference.
This fixes PR27858.
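A minimal hypothetical example of the first case (the names are illustrative, not from the patch):
```
declare void @may_throw()

define i32 @no_hoist(i32* %p) {
entry:
  call void @may_throw()       ; if this unwinds, %p is never dereferenced
  %v = load i32, i32* %p       ; hoisting this load above the call is unsafe
  ret i32 %v
}
```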
llvm-svn: 270828
It turns out that too many passes are relying on alias analysis results
for control dependencies. Until we fix that by introducing a more accurate
modelling of control dependencies, special case assume in MemorySSA instead.
Also introduce tests to ensure we don't regress the FunctionAttrs or LICM
passes.
Differential Revision: http://reviews.llvm.org/D20658
llvm-svn: 270823
There is only one caller of MemorySSA::createNewAccess, and it passes true
as the IgnoreNonMemory argument. Remove that argument and fold its behavior
into createNewAccess.
llvm-svn: 270812
After this change, we do the expected thing for cases like
```
Check0Passed = /* range check IRCE can optimize */
Check1Passed = /* range check IRCE can optimize */
if (!(Check0Passed && Check1Passed))
throw_Exception();
```
llvm-svn: 270804
As a result of D18634 we no longer infer certain attributes on linkonce_odr
functions at compile time, and may only infer them at LTO time. The readnone
attribute in particular is required for virtual constant propagation (part
of whole-program virtual call optimization) to work correctly.
This change moves the whole-program virtual call optimization pass after
the function attribute inference passes, and enables the attribute inference
passes at opt level 1, so that virtual constant propagation has a chance to
work correctly for linkonce_odr functions.
Differential Revision: http://reviews.llvm.org/D20643
llvm-svn: 270765
It may materialize a declaration, or a definition. The name could
be misleading. This is following a merge of materializeInitFor()
into materializeDeclFor().
Differential Revision: http://reviews.llvm.org/D20593
llvm-svn: 270759
They were originally separated to handle the co-recursion between
the ValueMapper and the ValueMaterializer. This recursion does not
exist anymore: the ValueMapper now uses a Worklist and the
ValueMaterializer is scheduling jobs on the Worklist.
Differential Revision: http://reviews.llvm.org/D20593
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 270758
Also, rename recognizeBitReverseOrBSwapIdiom to recognizeBSwapOrBitReverseIdiom,
so the ordering of the MatchBSwaps and MatchBitReversals arguments is
consistent with the function name.
llvm-svn: 270715
Move the now index-based ODR resolution and internalization routines out
of ThinLTOCodeGenerator.cpp and into either LTO.cpp (index-based
analysis) or FunctionImport.cpp (index-driven optimizations).
This is to enable usage by other linkers.
llvm-svn: 270698
Followup to D20528 clang patch, this removes the (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) llvm intrinsics and auto-upgrades to sitofp/fpext instead.
Differential Revision: http://reviews.llvm.org/D20568
llvm-svn: 270678
A volatile load has side effects beyond what callers expect readonly to
signify. For example, it is not safe to reorder two function calls
which each perform a volatile load to the same memory location.
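A hypothetical sketch: if @observe were marked readonly, the two calls in @caller could be reordered or merged, changing the observable sequence of volatile loads:
```
@g = global i32 0

define i32 @observe() {
  %v = load volatile i32, i32* @g
  ret i32 %v
}

define void @caller() {
  %a = call i32 @observe()
  %b = call i32 @observe()
  ret void
}
```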
llvm-svn: 270671
Summary:
Adds fastpath instrumentation for esan's working set tool. The
instrumentation for an intra-cache-line load or store consists of an
inlined write to shadow memory bits for the corresponding cache line.
Adds a basic test for this instrumentation.
Reviewers: aizatsky
Subscribers: vitalybuka, zhaoqin, kcc, eugenis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20483
llvm-svn: 270640
Summary:
Adds createEsanInitToolGV for creating a tool-specific variable passed
to the runtime library.
Adds dtor "esan.module_dtor" and inserts calls from the dtor to
"__esan_exit" in the runtime library.
Updates the EfficiencySanitizer test.
Patch by Qin Zhao.
Reviewers: aizatsky
Subscribers: bruening, kcc, vitalybuka, eugenis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20488
llvm-svn: 270627
This changes IRCE to optimize uses, and not branches. This change is
NFCI since the uses we do inspect are in practice only ever going to be
the condition use in conditional branches; but this flexibility will
later allow us to analyze more complex expressions than just a direct
branch on a range check.
llvm-svn: 270500
When an aggregate contains an opaque type its size cannot be
determined. This triggers an "Invalid GetElementPtrInst indices for type" assert
in function checkGEPType. The fix suppresses the conversion in this case.
http://reviews.llvm.org/D20319
llvm-svn: 270479
Summary:
This patch turns on LoopUnrollAnalyzer by default. To mitigate compile
time regressions, I chose very conservative thresholds for now. Later we
can make them more aggressive, but it might require being smarter in
which loops we're optimizing. E.g. currently the biggest issue is that
with more aggressive thresholds we unroll many cold loops, which
increases compile time for no performance benefit (performance of those
loops is improved, but it doesn't matter since they are cold).
Test results for compile time (using 4 samples to reduce noise):
```
MultiSource/Benchmarks/VersaBench/ecbdes/ecbdes 5.19%
SingleSource/Benchmarks/Polybench/medley/reg_detect/reg_detect 4.19%
MultiSource/Benchmarks/FreeBench/fourinarow/fourinarow 3.39%
MultiSource/Applications/JM/lencod/lencod 1.47%
MultiSource/Benchmarks/Fhourstones-3_1/fhourstones3_1 -6.06%
```
I didn't see any performance changes in the testsuite, but it improves
some internal tests.
Reviewers: hfinkel, chandlerc
Subscribers: llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D20482
llvm-svn: 270478
A cleanuppad is not cheap, they turn into many instructions and result
in additional spills and fills. It is not worth keeping a cleanuppad
around if all it does is hold a lifetime.end instruction.
N.B. We first try to merge the cleanuppad with another cleanuppad to
avoid dropping the lifetime and debug info markers.
llvm-svn: 270314
The InductiveRangeCheck struct is only five words long; so passing these
around by value is fine. The allocator makes the code look more complex
than it is.
llvm-svn: 270309
I had used `std::remove_if` under the assumption that it moves the
predicate matching elements to the end, but actually the elements
remaining towards the end (after the iterator returned by
`std::remove_if`) are indeterminate. Fix the bug (and make the code
more straightforward) by using a temporary SmallVector, and add a test
case demonstrating the issue.
llvm-svn: 270306
Summary:
Uses ModulePass instead of FunctionPass for EfficiencySanitizerPass to
better support global variable creation for a forthcoming struct field
counter tool.
Patch by Qin Zhao.
Reviewers: aizatsky
Subscribers: llvm-commits, eugenis, vitalybuka, bruening, kcc
Differential Revision: http://reviews.llvm.org/D20458
llvm-svn: 270263
Check that the incoming blocks of phi nodes are identical, and block
function merging if they are not.
rdar://problem/26255167
Differential Revision: http://reviews.llvm.org/D20462
llvm-svn: 270250
Sequences of range checks expressed using guards, like
guard((I - 2) u< L)
guard((I - 1) u< L)
guard((I + 0) u< L)
guard((I + 1) u< L)
guard((I + 2) u< L)
can sometimes be combined into a smaller sequence:
guard((I - 2) u< L AND (I + 2) u< L)
if we can prove that (I - 2) u< L AND (I + 2) u< L implies all of checks
expressed in the previous sequence.
This change teaches GuardWidening to do this kind of merging when
feasible.
llvm-svn: 270151
This patch fixes https://llvm.org/bugs/show_bug.cgi?id=27703.
If there is a sequence of one or more load instructions where each loaded value is used as the address of a later load, and a bitcast is necessary to change the value type, don't optimize it.
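A rough sketch of the load chain in question (hypothetical, not the PR's exact test):
```
define i64 @load_chain(i8** %pp) {
  ; The first loaded value is used (through a bitcast that changes the value
  ; type) as the address of the second load; this chain should be left alone.
  %p1 = load i8*, i8** %pp
  %p2 = bitcast i8* %p1 to i64*
  %v = load i64, i64* %p2
  ret i64 %v
}
```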
llvm-svn: 270135
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
- Remove the base ProfError class to work around an MSVC ICE.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 270020
Summary:
Implement guard widening in LLVM. Description from GuardWidening.cpp:
The semantics of the `@llvm.experimental.guard` intrinsic lets LLVM
transform it so that it fails more often than it did before the
transform. This optimization is called "widening" and can be used to hoist
and common runtime checks in situations like these:
and common runtime checks in situations like these:
```
%cmp0 = 7 u< Length
call @llvm.experimental.guard(i1 %cmp0) [ "deopt"(...) ]
call @unknown_side_effects()
%cmp1 = 9 u< Length
call @llvm.experimental.guard(i1 %cmp1) [ "deopt"(...) ]
...
```
to
```
%cmp0 = 9 u< Length
call @llvm.experimental.guard(i1 %cmp0) [ "deopt"(...) ]
call @unknown_side_effects()
...
```
If `%cmp0` is false, `@llvm.experimental.guard` will "deoptimize" back
to a generic implementation of the same function, which will have the
correct semantics from that point onward. It is always _legal_ to
deoptimize (so replacing `%cmp0` with false is "correct"), though it may
not always be profitable to do so.
NB! This pass is a work in progress. It hasn't been tuned to be
"production ready" yet. It is known to have quadriatic running time and
will not scale to large numbers of guards
Reviewers: reames, atrick, bogner, apilipenko, nlewycky
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D20143
llvm-svn: 269997
Summary: Set the default branch weight to 1:1 if one of the branches has missing profile data when simplifying CFG.
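As a hypothetical sketch, the resulting metadata looks like this when one side has no profile data:
```
define void @prof_default(i1 %cmp) {
entry:
  br i1 %cmp, label %if.then, label %if.else, !prof !0
if.then:
  ret void
if.else:
  ret void
}

!0 = !{!"branch_weights", i32 1, i32 1}
```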
Reviewers: spatel, davidxl
Subscribers: danielcdh, llvm-commits
Differential Revision: http://reviews.llvm.org/D20307
llvm-svn: 269995
In truncateToMinimalBitwidths() we were RAUW'ing an instruction and then erasing it. However, that instruction could be cached in the map we're iterating over. The first check is "I->use_empty()", which in most cases would return true, as the (deleted) object was RAUW'd first and so would have a zero use count. However, in some cases the object could have been polluted or written over and this wouldn't be the case. Also, it makes valgrind, ASan, and traditionalists who don't like their compiler to crash sad.
No testcase as there are no externally visible symptoms apart from a crash if the stars align.
Fixes PR26509.
llvm-svn: 269908
This bug was introduced in r269728 and is the likely cause of many stage 2 ubsan bot failures.
I'll add a test in a follow-up commit assuming this fixes things properly.
llvm-svn: 269797
This assertion is no longer necessary since we never record
constants in the live set anyway. (They are never recorded in
the initial live set, and constant bases are removed near line 2119)
Differential Revision: http://reviews.llvm.org/D20293
llvm-svn: 269764
Fix a bug introduced with rL269426 :
[InstCombine] canonicalize* LE/GE vector integer comparisons to LT/GT (PR26701, PR26819)
We were assuming that a ConstantDataVector / ConstantVector / ConstantAggregateZero operand of
an ICMP was composed of ConstantInt elements, but it might have ConstantExpr or UndefValue
elements. Handle those appropriately.
Also, refactor this function to join the scalar and vector paths and eliminate the switches.
Differential Revision: http://reviews.llvm.org/D20289
llvm-svn: 269728
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Address undefined-var-template warning.
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269694
The selection of the vectorization factor currently doesn't consider
interleaved accesses. The vectorization factor is based on the maximum safe
dependence distance computed by LAA. However, for loops with interleaved
groups, we should instead base the vectorization factor on the maximum safe
dependence distance divided by the maximum interleave factor of all the
interleaved groups. Interleaved accesses not in a group will be scalarized.
Differential Revision: http://reviews.llvm.org/D20241
llvm-svn: 269659
TargetLibraryInfoWrapperPass is a dependency of
SCCP but it's not listed as such. Chandler pointed
out this is an easy mistake to make which only
surfaces in weird crashes with some flag combinations.
This code will go away anyway at some point in the
future, but as long as it's (still) exercised, try
to make it correct.
llvm-svn: 269589
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269491
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269462
Currently there is no reasonable way to control the warnings in the 'use' phase
of the IRPGO pass. This is problematic because the output can be somewhat
spammy. This patch adds some flags which allow us to optionally disable these
warnings. The current upstream behavior will remain the default.
Patch by Jake VanAdrighem (jvanadrighem@gmail.com)
Differential Revision: http://reviews.llvm.org/D20195
llvm-svn: 269437
Summary: This change fixes the bug in isProfitableToUseMemset() where MaxIntSize should be in bytes, not bits.
Reviewers: arsenm, joker.eph, mcrosier
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D20176
llvm-svn: 269433
*We don't currently handle the edge case constants (min/max values), so it's not a complete
canonicalization.
To fully solve the motivating bugs, we need to enhance this to recognize a zero vector
too because that's a ConstantAggregateZero which is a ConstantData, not a ConstantVector
or a ConstantDataVector.
Differential Revision: http://reviews.llvm.org/D17859
llvm-svn: 269426
Summary:
...loop after the last iteration.
This is really hard to do correctly. The core problem is that we need to
model liveness through the induction PHIs from iteration to iteration in
order to get the correct results, and we need to correctly de-duplicate
the common subgraphs of instructions feeding some subset of the
induction PHIs. All of this can be driven either from a side effect at
some iteration or from the loop values used after the loop finishes.
This patch implements this by storing the forward-propagating analysis
of each instruction in a cache to recall whether it was free and whether
it has become live and thus counted toward the total unroll cost. Then,
at each sink for a value in the loop, we recursively walk back through
every value that feeds the sink, including looping back through the
iterations as needed, until we have marked the entire input graph as
live. Because we cache this, we never visit instructions more than twice
-- once when we analyze them and put them into the cache, and once when
we count their cost towards the unrolled loop. Also, because the cache
is only two bits and because we are dealing with relatively small
iteration counts, we can store all of this very densely in memory to
avoid this from becoming an excessively slow analysis.
The code here is still pretty gross. I would appreciate suggestions
about better ways to factor or split this up, I've stared too long at
the algorithmic side to really have a good sense of what the design
should probably look like.
Also, it might seem like we should do all of this bottom-up, but I think
that is a red herring. Specifically, the simplification power is *much*
greater working top-down. We can forward propagate very effectively,
even across strange and interesting recurrences around the backedge.
Because we use data to propagate, this doesn't cause a state space
explosion. Doing this level of constant folding, etc, would be very
expensive to do bottom-up because it wouldn't be until the last moment
that you could collapse everything. The current solution is essentially
a top-down simplification with a bottom-up cost accounting which seems
to get the best of both worlds. It makes the simplification incremental
and powerful while leaving everything dead until we *know* it is needed.
Finally, a core property of this approach is its *monotonicity*. At all
times, the current UnrolledCost is a conservatively low estimate. This
ensures that we will never early-exit from the analysis due to exceeding
a threshold when if we had continued, the cost would have gone back
below the threshold. These kinds of bugs can cause incredibly hard to
track down random changes to behavior.
We could use a technique similar (but much simpler) within the inliner
as well to avoid considering speculated code in the inline cost.
Reviewers: chandlerc
Subscribers: sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D11758
llvm-svn: 269388
Ported DA to the new PM by splitting the former DependenceAnalysis Pass
into a DependenceInfo result type and DependenceAnalysisWrapperPass type
and adding a new PM-style DependenceAnalysis analysis pass returning the
DependenceInfo.
Patch by Philip Pfaffe, most of the review by Justin.
Differential Revision: http://reviews.llvm.org/D18834
llvm-svn: 269370
Split FCMP/ICMP/SEL from the basic arithmetic cost functions. They were not sharing any notable code path (just the return) and were repeatedly testing the opcode.
llvm-svn: 269348
LoopVectorBody was changed from a single pointer to a SmallVector when
store predication was introduced in r200270. Since r247139, store predication
no longer splits the vector loop body in-place, so we can go back to having
a single LoopVectorBody block.
This reverts the no-longer-needed changes from r200270.
llvm-svn: 269321
Shifts beyond the bitwidth are undef but SCCP resolved them to zero.
Instead, DTRT and resolve them to undef.
This reimplements the transform which caused PR27712.
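A minimal hypothetical example of the folding in question:
```
define i32 @overshift() {
  ; SCCP resolves %amt to the constant 35; a 35-bit shift of an i32 is
  ; undefined, so the result is now folded to undef rather than 0.
  %amt = add i32 30, 5
  %r = shl i32 1, %amt
  ret i32 %r
}
```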
llvm-svn: 269269
This new verifier rule lets us unambiguously pick a calling convention
when creating a new declaration for
`@llvm.experimental.deoptimize.<ty>`. It is also congruent with our
lowering strategy -- since all calls to `@llvm.experimental.deoptimize`
are lowered to calls to `__llvm_deoptimize`, it is reasonable to enforce
a unique calling convention.
Some of the tests that were breaking this verifier rule have had to be
split up into different .ll files.
The inliner was violating this rule as well, and has been fixed to avoid
producing invalid IR.
llvm-svn: 269261
This should just be a compile-time change. Correct the check for whether
we have already analyzed the callee when making summary based decisions.
There is no need to reprocess one at the same threshold as when it was
last processed.
llvm-svn: 269251
Summary: In sample profiles, some branches may have missing profile data due to profile inaccuracy. We want the existing branch probabilities to remain valid after propagation.
Reviewers: hfinkel, davidxl, spatel
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19948
llvm-svn: 269137
Sort of the BB-local equivalent to idiom-recognizer: if we have a basic-block
that really implements a memcpy operation, backends can benefit from seeing
this.
llvm-svn: 269125
Before r268509, Clang would disable the loop unroll pass when optimizing
for size. That commit enabled it to be able to support unroll pragmas
in -Os builds. However, this regressed binary size in one of Chromium's
DLLs by ~100 KB.
This restores the original behaviour of no unrolling at -Os, but doing it
in LLVM instead of Clang makes more sense, and also allows the pragmas to
keep working.
Differential revision: http://reviews.llvm.org/D20115
llvm-svn: 269124
This patch extends loop rerolling to allow the instruction chain
of the loop-control-only IV to contain sext.
Differential Revision: http://reviews.llvm.org/D19820
llvm-svn: 269121
Remove the ModuleLevelChanges argument, and the ability to create new
subprograms for cloned functions. The latter was added without review in
r203662, but it has no in-tree clients (all non-test callers pass false
for ModuleLevelChanges [1], so it isn't reachable outside of tests). It
also isn't clear that adding a duplicate subprogram to the compile unit is
always the right thing to do when cloning a function within a module. If
this functionality comes back it should be accompanied with a more concrete
use case.
Furthermore, all in-tree clients add the returned function to the module.
Since that's pretty much the only sensible thing you can do with the function,
just do that in CloneFunction.
[1] http://llvm-cs.pcc.me.uk/lib/Transforms/Utils/CloneFunction.cpp/rCloneFunction
Differential Revision: http://reviews.llvm.org/D18628
llvm-svn: 269110
The plan is to eventually make this logic simpler, however I expect it to
be a little tricky for the foreseeable future (at least until we're rid of
pointee types), so move it here so that it can be reused to build a summary
index for devirtualization.
Differential Revision: http://reviews.llvm.org/D20005
llvm-svn: 269081
Summary:
Add support for emission of plaintext lists of the imported files for
each distributed backend compilation. Used for distributed build file
staging.
Invoked with new gold-plugin thinlto-emit-imports-files option, which is
only valid with thinlto-index-only (i.e. for distributed builds), or
from llvm-lto with new -thinlto-action=emitimports value.
Depends on D19556.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19636
llvm-svn: 269067
This restores commit r268627:
Summary:
When launching ThinLTO backends in a distributed build (currently
supported in gold via the thinlto-index-only plugin option), emit
an individual index file for each backend process as described here:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098272.html
...
Differential Revision: http://reviews.llvm.org/D19556
Address msan failures by avoiding std::prev on map.end(); the
theory is that this is causing issues due to some known UB problems
in __tree.
llvm-svn: 269059
Loop rotation clones instruction from the old header into the preheader. If
there were uses of values produced by these instructions that were outside
the loop, we have to insert PHI nodes to merge the two values. If the values
are used by DbgIntrinsics they will be used as a MetadataAsValue of a
ValueAsMetadata of the original values, and iterating all of the uses of the
original value will not update the DbgIntrinsics. The new code checks if the
values are used by DbgIntrinsics and if so, updates them using essentially
the same logic as the original code.
The attached testcase demonstrates the issue. Without the fix, the
DbgIntrinsic outside the loop uses values computed inside the loop, even
though these values do not dominate the DbgIntrinsic.
Author: Thomas Jablin (tjablin)
Reviewers: dblaikie aprantl kbarton hfinkel cycheng
http://reviews.llvm.org/D19564
llvm-svn: 269034
When a va_start or va_copy is immediately followed by a va_end (ignoring
debug information or other start/end in between), then it is safe to
remove the pair. As this code shares some commonalities with the lifetime
markers, this has been factored to helper functions.
This InstCombine pattern kicks in 3 times when running the LLVM test
suite.
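A hypothetical sketch of the pattern (the va_list is modeled here as a plain pointer slot for brevity; real targets use a target-specific va_list type):
```
declare void @llvm.va_start(i8*)
declare void @llvm.va_end(i8*)

define void @unused_valist(i32 %x, ...) {
  %ap = alloca i8*
  %ap.cast = bitcast i8** %ap to i8*
  ; va_start immediately followed by va_end, with nothing reading the list
  ; in between: the pair can simply be removed.
  call void @llvm.va_start(i8* %ap.cast)
  call void @llvm.va_end(i8* %ap.cast)
  ret void
}
```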
llvm-svn: 269033
This reverts commits r268969, r268979 and r268984. They had target specific tests
in generic directories without the correct specifiers and made it hard for us to
come up with a good solution by rapidly committing untested changes.
This test needs to be in a target specific directory or have the correct REQUIRED
identifier.
llvm-svn: 269027
Allow vectorization when the step is a loop-invariant variable.
This is the loop example that is getting vectorized after the patch:
int int_inc;
int bar(int init, int *restrict A, int N) {
int x = init;
for (int i=0;i<N;i++){
A[i] = x;
x += int_inc;
}
return x;
}
"x" is an induction variable with *loop-invariant* step.
But it is not a primary induction variable. A primary induction variable with a non-constant step is not handled yet.
Differential Revision: http://reviews.llvm.org/D19258
llvm-svn: 269023
IR instrumentation generates a COMDAT symbol __llvm_profile_raw_version to
overwrite the same symbol in profile run-time to distinguish IR profiles from
Clang generated profiles. In MACHO, LinkOnceODR linkage is used due to the
lack of COMDAT support.
But LinkOnceODR linkage might have a .weak_def_can_be_hidden assembly directive,
while the weak variable in the run-time has a .weak_definition directive. The linker
will not merge these two symbols even if they have the same name. The end result
is IR profiles are not properly flagged in MACHO.
This patch changes the linkage for __llvm_profile_raw_version in each module to
LinkOnceAny so that it has same .weak_definition directive as in the run-time.
Differential Revision: http://reviews.llvm.org/D20078
llvm-svn: 268969
This fixes http://llvm.org/PR27646 on AArch64.
There are three issues here:
- The GR save area is 7 words in size, instead of 8. This is not enough
if none of the fixed arguments is passed in GRs (they're all floats or
aggregates).
- The first argument is ignored (which counteracts the above if it's passed
in GR).
- Like x86_64, fixed arguments landing in the overflow area are wrongly
counted towards the overflow offset.
Differential Revision: http://reviews.llvm.org/D20023
llvm-svn: 268967
Original Commit Message
Extend load/store type canonicalization to handle unordered operations
Extend the type canonicalization logic to work for unordered atomic loads and stores. Note that while this change itself is fairly simple and low risk, there's a reasonable chance this will expose problems in the backends by suddenly generating IR they wouldn't have seen before. Anything of this nature will be an existing bug in the backend (you could write an atomic float load), but this will definitely change the frequency with which such cases are encountered. If you see problems, feel free to revert this change, but please make sure you collect a test case.
Note that the concern about lowering is now much less likely. PR27490 proved that we already *were* mucking with the types of ordered atomics and volatiles. As a result, this change doesn't introduce as much new behavior as originally thought.
llvm-svn: 268809
Again, fairly simple. The only change is ensuring that we actually copy the property of the load correctly. The aliasing legality constraints were already handled by the FRE patches. There's nothing special about unordered atomics from the perspective of the PRE algorithm itself.
llvm-svn: 268804
You'll note there are essentially no code changes here. Cross block FRE heavily reuses code from the block local FRE. All of the tricky parts were done as part of the previous patch and the refactoring that removed the original code duplication.
llvm-svn: 268775
This patch is the first in a small series teaching GVN to optimize unordered loads aggressively. This change just handles block-local FRE because that's the simplest thing which lets me test MDA and the AvailableValue pieces. Somewhat surprisingly, MDA appears fine and only a couple of small changes are needed in GVN.
Once this is in, I'll tackle non-local FRE and PRE. The former looks like a natural extension of this; the latter will require a couple of minor changes.
Differential Revision: http://reviews.llvm.org/D19440
llvm-svn: 268770
Summary:
The original ThinLTO pipeline was derived from some
work I did tuning FullLTO on the test suite and SPEC. This
patch reduces the amount of work done in the "linker phase" of
the build, and extend the function simplifications passes
performed during the "compile phase". This helps the build time
by reducing the IR as much as possible during the compile phase
and limiting the work to be performed during the "link phase",
while keeping the performance "on par" with the existing pipeline.
Reviewers: tejohnson
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19773
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268769
Retrying r268550/r268751 which were reverted at r268577/r268765 due to a memory sanitizer failure.
I have not been able to reproduce that failure, but I've taken another guess at fixing
the problem in this version of the patch and will watch for another failure.
Original commit message:
Unlike earlier similar fixes, we need to recalculate the branch weights
in this case.
Differential Revision: http://reviews.llvm.org/D19674
llvm-svn: 268767
Retrying r268550 which was reverted at r268577 due to a memory sanitizer failure.
I have not been able to reproduce that failure, but I've taken a guess at fixing
the problem in this version of the patch and will watch for another failure.
Original commit message:
Unlike earlier similar fixes, we need to recalculate the branch weights
in this case.
Differential Revision: http://reviews.llvm.org/D19674
llvm-svn: 268751
Allow overriding the default ASAN shadow mapping offset with the
-asan-shadow-offset option, and allow zero to be specified for both offset and
scale.
Patch by Aaron Carroll <aaronc@apple.com>.
llvm-svn: 268724
This test was crashing, and currently it breaks bootstrapping clang with debuginfo
Differential Revision: http://reviews.llvm.org/D20008
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268715
Summary: We need to clean up the CFG before assigning discriminators to minimize the impact of optimization on debug info.
Reviewers: davidxl, dblaikie, dnovillo
Subscribers: dnovillo, danielcdh, llvm-commits
Differential Revision: http://reviews.llvm.org/D19926
llvm-svn: 268675
Summary:
Some PHIs can have expressions that are not AddRecExprs due to the presence
of sext/zext instructions. In order to prevent the Loop Vectorizer from
bailing out when encountering these PHIs, we now coerce the SCEV
expressions to AddRecExprs using SCEV predicates (when possible).
We only do this when the alternative would be to not vectorize.
Reviewers: mzolotukhin, anemet
Subscribers: mssimpso, sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17153
llvm-svn: 268633
Summary:
When launching ThinLTO backends in a distributed build (currently
supported in gold via the thinlto-index-only plugin option), emit
an individual index file for each backend process as described here:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098272.html
The individual index file encodes the summary and module information
required for implementing the importing/exporting decisions made
for a given module in the thin link step.
This is in place of the current mechanism that uses the combined index
to make importing decisions in each back end independently. It is an
enabler for doing global summary based optimizations in the thin link
step (which will be recorded in the individual index files), and reduces
the size of the index that must be sent to each backend process, and
the amount of work to scan it in the backends.
Rather than create entirely new ModuleSummaryIndex structures (and all
the included unique_ptrs) for each backend index file, a map is created
to record all of the GUID and summary pointers needed for a particular
index file. The IndexBitcodeWriter walks this map instead of the full
index (hiding the details of managing the appropriate summary iteration
in a new iterator subclass). This is more efficient than walking the
entire combined index and filtering out just the needed summaries during
each backend bitcode index write.
Depends on D19481.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19556
llvm-svn: 268627
Allow overriding the default ASAN shadow mapping offset with the
-asan-shadow-offset option, and allow zero to be specified for both offset and
scale.
llvm-svn: 268586
MemorySanitizer: use-of-uninitialized-value
0x4910e47 in count /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/include/llvm/Support/MathExtras.h:159:12
0x4910e47 in countLeadingZeros<unsigned long> /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/include/llvm/Support/MathExtras.h:183
0x4910e47 in FitWeights /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/Transforms/Utils/SimplifyCFG.cpp:855
0x4910e47 in SimplifyCondBranchToCondBranch /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/Transforms/Utils/SimplifyCFG.cpp:2895
This reverts commit 609f4dd4bf3bc735c8c047a4d4b0a8e9e4d202e2.
llvm-svn: 268577
This reapplies commit r268521, that was reverted in r268530 due to a test failure in select-implied.ll
Modified the test case to reflect the new change.
llvm-svn: 268557
Unlike earlier similar fixes, we need to recalculate the branch weights
in this case.
Differential Revision: http://reviews.llvm.org/D19674
llvm-svn: 268550
This patch fixes PR27615.
@llvm.dbg.value instructions no longer count towards the maximum number of
instructions to look back at in the instruction list when searching for a
store instruction. This should make the output consistent between debug and
non-debug builds.
Patch by Henric Karlsson <henric.karlsson@ericsson.com>!
Differential Revision: http://reviews.llvm.org/D19912
llvm-svn: 268512
Goal of this change is to guarantee stable ordering of the statepoint arguments and other
newly inserted values such as gc.relocates. Previously we had explicit sorting in a couple
of places. However for unnamed values ordering was partial and overall we didn't have any
strong invariant regarding it. This change switches all data structures to use SetVector's
and MapVector's, which provide the possibility of deterministic iteration over them.
Explicit sorting is now redundant and was removed.
Differential Revision: http://reviews.llvm.org/D19669
llvm-svn: 268502
We forgot to consider the target of ifuncs when considering if a
function was alive or dead.
N.B. Also update a few auxiliary tools like bugpoint and
verify-uselistorder.
This fixes PR27593.
llvm-svn: 268468
pointing to the same addr space. This can prevent SROA from creating a bitcast
between pointers with different addr spaces.
Differential Revision: http://reviews.llvm.org/D19697
llvm-svn: 268424
SCEV caches whether SCEV expressions are loop invariant, variant or
computable. LICM breaks this cache, almost by definition; so clear the
SCEV disposition cache if LICM changed anything.
llvm-svn: 268408
`Loop::makeLoopInvariant` can hoist instructions out of loops, so loop
dispositions for the loop it operated on may need to be cleared. We can
be smarter here (especially around how `forgetLoopDispositions` is
implemented), but let's be correct first.
Fixes PR27570.
llvm-svn: 268406
Be more specific in describing compression failures. Also, check for
this kind of error in emitNameData().
This is part of a series of patches to transition ProfileData over to
the stricter Error/Expected interface.
llvm-svn: 268400
This pass is supposed to reduce the size of the IR for compile-time
purposes. We should run it as early as possible, except when we prepare for LTO or
ThinLTO, where we want to keep functions available for link-time inlining.
Differential Revision: http://reviews.llvm.org/D19813
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268394
A few benchmarks with lots of accesses to global variables in the hot
loops regressed a lot since r266399, which added the
SpeculativeExecution pass to the default pipeline. The problem is that
this pass doesn't mark Globals Alias Analysis as preserved. Globals
Alias Analysis is computed in a module pass, whereas
SpeculativeExecution is a function pass, and a lot of passes dependent
on the Globals Alias Analysis to optimize these benchmarks are also
function passes. As such, the Globals Alias Analysis information cannot
be recomputed between SpeculativeExecution and the following function
passes needing that information.
SpeculativeExecution doesn't invalidate Globals Alias Analysis, so mark
it as such to fix those performance regressions.
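For context, a minimal legacy-pass-manager sketch of what "mark it as preserved" looks like (illustrative only, not the exact SpeculativeExecution code):
  #include "llvm/Analysis/GlobalsModRef.h"
  #include "llvm/IR/Function.h"
  #include "llvm/Pass.h"

  // A function pass that declares it preserves Globals Alias Analysis, so the
  // module-level GlobalsAA result survives for the function passes that follow.
  struct SketchPass : llvm::FunctionPass {
    static char ID;
    SketchPass() : llvm::FunctionPass(ID) {}
    bool runOnFunction(llvm::Function &F) override { return false; }
    void getAnalysisUsage(llvm::AnalysisUsage &AU) const override {
      AU.addPreserved<llvm::GlobalsAAWrapperPass>();
    }
  };
  char SketchPass::ID = 0;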
Differential Revision: http://reviews.llvm.org/D19806
llvm-svn: 268370
We were overly cautious in our analysis of loops which have invokes
which unwind to EH pads. The loop unroll transform is safe because it
only clones blocks in the loop body, it does not try to split critical
edges involving EH pads. Instead, move the necessary safety check to
LoopUnswitch.
N.B. The safety check for loop unswitch is covered by an existing test
which fails without it.
llvm-svn: 268357
There is no point in importing a "weak" or a "linkonce" function
since we won't be able to inline it anyway.
We already had a targeted check for WeakAny; this uses the
same check on the GlobalValue as the inliner, i.e.
isMayBeOverriddenLinkage()
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268341
There is no point in importing a "weak" or a "linkonce" function
since we won't be able to inline it anyway.
We already had a targeted check for WeakAny; this uses the
same check on the GlobalValue as the inliner, i.e.
isMayBeOverriddenLinkage()
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268315
When running cc1 with -flto=thin, it is followed by GlobalOpt, which
requires the callgraph. This saves rebuilding one.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268266
Make it possible for TryToSimplifyUncondBranchFromEmptyBlock to merge an empty
basic block containing lifetime intrinsics, as well as phi nodes and an
unconditional branch, into its successor or predecessor(s).
If the successor of the empty block has a single predecessor, all contents including
lifetime intrinsics are sunk into the successor. Otherwise, they are
hoisted into its predecessor(s) and then merged into the predecessor(s).
Patch by Josh Yoon <josh.yoon@samsung.com>!
Differential Revision: http://reviews.llvm.org/D19257
llvm-svn: 268254
This is where it was originally, until LoopVersioningLICM was
inserted before it in r259986; I don't believe that was on purpose.
Differential Revision: http://reviews.llvm.org/D19809
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268252
SystemZ on Linux currently has a 53-bit address space. In theory, the hardware
could support a full 64-bit address space, but that's not supported due to
kernel limitations (it'd require 5-level page tables), and there are no plans
for that. The default process layout stays within the first 4TB of address space
(to avoid creating 4-level page tables), so any offset >= (1 << 42) is fine.
Let's use 1 << 52 here, i.e. exactly half the address space.
I've originally used 7 << 50 (uses top 1/8th of the address space), but ASan
runtime assumes there's some space after the shadow area. While this is
fixable, it's simpler to avoid the issue entirely.
Also, I've originally wanted to have the shadow aligned to 1/8th the address
space, so that we can use OR like X86 to assemble the offset. I no longer
think it's a good idea, since using ADD enables us to load the constant just
once and use it with register + register indexed addressing.
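For illustration only (the usual ASan mapping is Shadow = (Addr >> 3) + Offset; the offset is the one discussed above):
  #include <cstdint>

  // Sketch of the shadow address computation with the chosen SystemZ offset.
  // Scale 3 means one shadow byte covers 8 application bytes.
  constexpr uint64_t kSystemZShadowOffset = 1ULL << 52;

  inline uint64_t memToShadow(uint64_t Addr) {
    return (Addr >> 3) + kSystemZShadowOffset;
  }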
Differential Revision: http://reviews.llvm.org/D19650
llvm-svn: 268161
Make use of Constant::getAggregateElement instead of checking constant types - first step towards adding support for UNDEF mask elements.
llvm-svn: 268158
If a guard call being lowered by LowerGuardIntrinsics has the
`!make.implicit` metadata attached, then reattach the metadata to the
branch in the resulting expanded form of the intrinsic. This allows us
to implement null checks as guards and still get the benefit of implicit
null checks.
llvm-svn: 268148
support multiple induction variables
This patch enables loop reroll for the following case:
  for (int i = 0; i < N; i += 2) {
    S += *a++;
    S += *a++;
  }
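Roughly speaking (my paraphrase of the transformation, not text from the patch), the loop above gets rerolled into a single-statement loop that performs the same sequence of additions:
  for (int i = 0; i < N; i++)
    S += *a++;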
Differential Revision: http://reviews.llvm.org/D16550
llvm-svn: 268147
This moves some logic added to EarlyCSE in rL268120 into
`llvm::isInstructionTriviallyDead`. Adds a test case for DCE to
demonstrate that passes other than EarlyCSE can now pick up on the new
information.
llvm-svn: 268126
Summary:
This change teaches EarlyCSE some basic properties of guard intrinsics:
- Guard intrinsics read all memory, but don't write to any memory
- After a guard has executed, the condition it was guarding on can be
assumed to be true
- Guard intrinsics on a constant `true` are no-ops
Reviewers: reames, hfinkel
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19578
llvm-svn: 268120
Make use of Constant::getAggregateElement instead of checking constant types - first step towards adding support for UNDEF mask elements.
llvm-svn: 268115
The implemented heuristic has a large body of code which is better placed
in its own function for readability. It also makes it easier to add more
heuristics in the future.
llvm-svn: 268107
This patch fixes two somewhat related bugs in MemorySSA's caching
walker. These bugs were found because D19695 brought up the problem
that we'd have defs cached to themselves, which is incorrect.
The bugs this fixes are:
- We would sometimes skip the nearest clobber of a MemoryAccess, because
we would query our cache for a given potential clobber before
checking if the potential clobber is the clobber we're looking for.
The cache entry for the potential clobber would point to the nearest
clobber *of the potential clobber*, so if that was a cache hit, we'd
ignore the potential clobber entirely.
- There are times (sometimes in DFS, sometimes in the getClobbering...
functions) where we would insert cache entries that say a def
clobbers itself.
There's a bit of common code between the fixes for the bugs, so they
aren't split out into multiple commits.
This patch also adds a few unit tests, and refactors existing tests a
bit to reduce the duplication of setup code.
llvm-svn: 268087
Summary: The callee name is no longer used to identify a callsite, so do not read it during annotation.
Reviewers: davidxl, dnovillo
Subscribers: dnovillo, danielcdh, llvm-commits
Differential Revision: http://reviews.llvm.org/D19704
llvm-svn: 268069
Summary:
Historically, we had a switch in the Makefiles for turning on "expensive
checks". This has never been ported to the cmake build, but the
(dead-ish) code is still around.
This will also make it easier to turn it on in buildbots.
Reviewers: chandlerc
Subscribers: jyknight, mzolotukhin, RKSimon, gberry, llvm-commits
Differential Revision: http://reviews.llvm.org/D19723
llvm-svn: 268050
We neglected to transfer operand bundles for some transforms. These
were found via inspection, I'll try to come up with some test cases.
llvm-svn: 268011
We neglected to transfer operand bundles for some transforms. These
were found via inspection, I'll try to come up with some test cases.
llvm-svn: 268010
We need to keep loop hints from the original loop on the new vector loop.
Failure to do this meant that, for example:
void foo(int *b) {
  #pragma clang loop unroll(disable)
  for (int i = 0; i < 16; ++i)
    b[i] = 1;
}
this loop would be unrolled. Why? Because we'd vectorize it, thus dropping the
hints that unrolling should be disabled, and then we'd unroll it.
llvm-svn: 267970
I closely followed the precedents set by the vectorizer:
* With -Rpass-missed, the loop is reported with further details pointing
to -Rpass-analysis.
* -Rpass-analysis reports the details why distribution has failed.
* Regardless of -Rpass*, when distribution fails for a loop where
distribution was forced with the pragma, a warning is produced according
to -Wpass-failed. In this case the analysis info is also printed even
without -Rpass-analysis.
llvm-svn: 267952
The next patch will start using these for -Rpass-analysis so they won't
be internal-only anymore.
Move the 'Skipping; ' prefix that some of the messages are using into the
'fail' function. We don't want to include this prefix in
the -Rpass-analysis report.
llvm-svn: 267951
When inlining a call site with llvm.mem.parallel_loop_access metadata, this
metadata needs to be propagated to all cloned memory-accessing instructions.
Otherwise, inlining parts of the loop body will invalidate the annotation.
With this functionality, we now vectorize the following as expected:
void Body(int *res, int *c, int *d, int *p, int i) {
  res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
}
void Test(int *res, int *c, int *d, int *p, int n) {
  int i;
  #pragma clang loop vectorize(assume_safety)
  for (i = 0; i < 1600; i++) {
    Body(res, c, d, p, i);
  }
}
llvm-svn: 267949
The MOVMSK instructions copy a vector's elements' sign bits to the low bits of a scalar register and zero the high bits.
This patch adds MOVMSK support to SimplifyDemandedUseBits so that it is aware that the upper bits are known to be zero. It also removes the call to MOVMSK if none of the lower bits are actually required and just returns zero.
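As a small stand-alone illustration of the known-bits fact (using the SSE intrinsic directly; this is my example, not code from the patch): a 4-element movmskps can only set the low 4 bits, so uses of the upper bits are known to be zero.
  #include <xmmintrin.h>
  #include <cassert>

  void movmskUpperBitsAreZero(__m128 V) {
    int M = _mm_movemask_ps(V); // one sign bit per element -> bits 0..3 only
    assert((M & ~0xF) == 0);    // the upper bits are always zero
  }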
Differential Revision: http://reviews.llvm.org/D19614
llvm-svn: 267873
This patch implements the transformation that promotes indirect calls to
conditional direct calls when the indirect-call value profile meta-data is
available.
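Roughly, at the source level the promotion looks like the following sketch (hypothetical names; the real transform works on IR and consults the value-profile metadata):
  extern void hot_target(int); // hypothetical hot callee named by the profile

  void callSite(void (*fp)(int), int x) {
    // Before: fp(x);
    if (fp == &hot_target)
      hot_target(x);           // promoted direct call, now inlinable
    else
      fp(x);                   // fallback preserves the original behavior
  }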
Differential Revision: http://reviews.llvm.org/D17864
llvm-svn: 267815
There's no existing test for this path, and I don't know how to expose
it in a regression test, but I'm assuming there's some reason this
path exists.
llvm-svn: 267813
"inferattrs" will deduce the attribute, but it will be too late for
many optimizations. Set it ourselves when creating the call.
Differential Revision: http://reviews.llvm.org/D17598
llvm-svn: 267762
Now the pass is just a tiny wrapper around the util. This lets us reuse
the logic elsewhere (done here for BuildLibCalls) instead of duplicating
it.
The next step is to have something like getOrInsertLibFunc that also
sets the attributes.
Differential Revision: http://reviews.llvm.org/D19470
llvm-svn: 267759
I tried to be as close as possible to the strongest check that
existed before; cleaning these up properly is left for future work.
Differential Revision: http://reviews.llvm.org/D19469
llvm-svn: 267758
We previously disallowed interleaved load groups that may cause us to
speculatively access memory out-of-bounds (r261331). We did this by ensuring
each load group had an access corresponding to the first and last member.
Instead of bailing out for these interleaved groups, this patch enables us to
peel off the last vector iteration, ensuring that we execute at least one
iteration of the scalar remainder loop. This solution was proposed in the
review of the previous patch.
Differential Revision: http://reviews.llvm.org/D19487
llvm-svn: 267751
This is the first of two commits for extending SLP Vectorizer to deal with aggregates.
This commit merely refactors existing logic.
http://reviews.llvm.org/D14185
llvm-svn: 267748
This change adds a new hook for estimating the cost of vector extracts followed
by zero- and sign-extensions. The motivating example for this change is the
SMOV and UMOV instructions on AArch64. These instructions move data from vector
to general purpose registers while performing the corresponding extension
(sign-extend for SMOV and zero-extend for UMOV) at the same time. For these
operations, TargetTransformInfo can assume the extensions are free and only
report the cost of the vector extract. The SLP vectorizer has been updated to
make use of the new hook.
Differential Revision: http://reviews.llvm.org/D18523
llvm-svn: 267725
Summary:
Refine the workaround from r266877 that attempts to prevent
renaming of locals in inline assembly, so that in addition to looking
for an llvm.used local value, we also require at least one inline assembly
call in the module. Otherwise, debug functions added to llvm.used
can block importing/exporting unnecessarily.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19573
llvm-svn: 267717
This is required to use this function from isSafeToSpeculativelyExecute
Reviewed By: hfinkel
Differential Revision: http://reviews.llvm.org/D16231
llvm-svn: 267692
Summary:
D19403 adds a new pragma for loop distribution. This change adds
support for the corresponding metadata that the pragma is translated to
by the FE.
As part of this I had to rethink the flag -enable-loop-distribute. My
goal was to be backward compatible with the existing behavior:
A1. pass is off by default from the optimization pipeline
unless -enable-loop-distribute is specified
A2. pass is on when invoked directly from opt (e.g. for unit-testing)
The new pragma/metadata overrides these defaults so the new behavior is:
B1. A1 + enable distribution for individual loop with the pragma/metadata
B2. A2 + disable distribution for individual loop with the pragma/metadata
The default value whether the pass is on or off comes from the initiator
of the pass. From the PassManagerBuilder the default is off, from opt
it's on.
I moved -enable-loop-distribute under the pass. If the flag is
specified it overrides the default from above.
Then the pragma/metadata can further modify this per loop.
As a side-effect, we can now also use -enable-loop-distribute=0 from opt
to emulate the default from the optimization pipeline. So to be precise
this is the new behavior:
C1. pass is off by default from the optimization pipeline
unless -enable-loop-distribute or the pragma/metadata enables it
C2. pass is on when invoked directly from opt
unless -enable-loop-distribute=0 or the pragma/metadata disables it
Reviewers: hfinkel
Subscribers: joker.eph, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D19431
llvm-svn: 267672
Summary:
cloneLoopWithPreheader() does not update LoopInfo for sub-loops of
the original loop being cloned. Add an assert to ensure the loop being cloned has no sub-loops.
Reviewers: anemet, ashutosh.nema, hfinkel
Subscribers: mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D15922
llvm-svn: 267671
Summary:
It is incorrect to compare TripCount (which is BECount + 1)
with extraiters (or Count) to check whether we should enter the unrolled
loop or not, because TripCount can potentially overflow
(when BECount is the maximum unsigned integer).
Comparing BECount with (Count - 1), on the other hand, is overflow safe and
therefore correct.
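A small stand-alone sketch of the overflow hazard (not the actual LoopUnrollRuntime code; assumes Count >= 1): when BECount is the maximum value, TripCount wraps to 0 and the TripCount-based comparison gives the wrong answer, while the BECount-based form stays correct.
  #include <cstdint>

  bool enterUnrolledLoopWrong(uint64_t BECount, uint64_t Count) {
    uint64_t TripCount = BECount + 1; // wraps to 0 when BECount is UINT64_MAX
    return TripCount >= Count;        // then wrongly reports "do not enter"
  }

  bool enterUnrolledLoopSafe(uint64_t BECount, uint64_t Count) {
    return BECount >= Count - 1;      // no overflow; assumes Count >= 1
  }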
Reviewer: hfinkel
Differential Revision: http://reviews.llvm.org/D19256
From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 267662
When isLegalAddressingMode is used for cases
not related to addressing modes, such as pure adds and muls,
it should not be using address space 0. LSR already passes -1
as the address space in these cases.
llvm-svn: 267645
This splits out the per-loop functionality from the Pass class.
With this the fact whether the loop is forced-distribute with the new
metadata/pragma can be cached in the per-loop class rather than passed
around.
llvm-svn: 267643
We need the default ratio to be sufficiently large that it triggers transforms
based on block frequency info (BFI) and plays well with the recently introduced
BranchProbability used by CGP.
Differential Revision: http://reviews.llvm.org/D19435
llvm-svn: 267615
The destination buffer that sprintf uses is restrict-qualified; we do
not need to worry about derived pointers referenced via format
specifiers.
This reverts commit r267580.
llvm-svn: 267605
sprintf doesn't read or copy the terminating null byte from its string
operands. sprintf will append its own after processing all of the
format specifiers.
This fixes PR27526.
llvm-svn: 267580
Summary:
Instead of using the maximum IR weight as the basic block weight, this patch uses a voting algorithm to find the most likely weight for the basic block. This can effectively avoid cases where some IR instructions are annotated incorrectly due to code motion in the profiled binary.
This patch also updates propagate.ll unittest to include discriminator in the input file so that it is testing something meaningful.
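A minimal sketch of the voting idea (made-up helper, not the SampleProfile implementation): rather than taking the maximum sample count of the instructions mapped to a block, pick the count that most instructions agree on.
  #include <cstdint>
  #include <map>
  #include <vector>

  // Majority vote over per-instruction weights; ties go to the larger weight.
  uint64_t voteBlockWeight(const std::vector<uint64_t> &InstWeights) {
    std::map<uint64_t, unsigned> Votes;
    for (uint64_t W : InstWeights)
      ++Votes[W];
    uint64_t Best = 0;
    unsigned BestVotes = 0;
    for (const auto &KV : Votes)
      if (KV.second > BestVotes ||
          (KV.second == BestVotes && KV.first > Best)) {
        Best = KV.first;
        BestVotes = KV.second;
      }
    return Best;
  }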
Reviewers: davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19301
llvm-svn: 267519
When SimplifyCFG merges identical instructions from both sides of a diamond, it
can preserve !llvm.mem.parallel_loop_access (as it does with most of the other
metadata). There's no real data or control dependency change in this case.
llvm-svn: 267515
I really thought we were doing this already, but we were not. Given this input:
void Test(int *res, int *c, int *d, int *p) {
  for (int i = 0; i < 16; i++)
    res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
}
we did not vectorize the loop. Even with "assume_safety", the check that we
don't if-convert conditionally-executed loads (to protect against
data-dependent dereferenceability) was not elided.
One subtlety: As implemented, it will still prefer to use a masked-load
intrinsic (given target support) over the speculated load. The choice here
seems architecture specific; the best option depends on how expensive the
masked load is compared to a regular load. Ideally, using the masked load still
reduces unnecessary memory traffic, and so should be preferred. If we'd rather
do it the other way, flipping the order of the checks is easy.
The LangRef is updated to make explicit that llvm.mem.parallel_loop_access also
implies that if conversion is okay.
Differential Revision: http://reviews.llvm.org/D19512
llvm-svn: 267514
Pass all of the state we need around as arguments, so that these
functions are easier to reuse. There is one part of this that is
unusual: we pass around a functor to look up a DomTree for a function.
This will be a necessary abstraction when we try to use this code in
both the legacy and the new pass manager.
llvm-svn: 267498
This patch is what was the "instcombine" portion of D14185, with an additional
test added (see julia_pseudovec in test/Transforms/InstCombine/insert-val-extract-elem.ll).
The patch causes instcombine to replace sequences of extractelement-insertvalue-store
that act essentially like a bitcast followed by a store.
Differential Revision: http://reviews.llvm.org/D14260
llvm-svn: 267482
Add a typedef for the std::map<GlobalValue::GUID, GlobalValueSummary *>
map that is passed around to identify summaries for values defined in a
particular module. This shortens up declarations in a variety of places.
llvm-svn: 267471
The current logic assumes that any constant global will never be SRA'd. I presume this is because normally constant globals can be pushed into their uses and deleted. However, that sometimes can't happen (which is exactly where you really want SRA, so that the elements which can be eliminated actually are).
There seems to be no reason why we can't SRA constants too, so let's do it.
llvm-svn: 267393
As discussed on D19318, if we only demand the first element of a DIVSS/DIVSD intrinsic, then reduce to a FDIV call. This matches the existing FADD/FSUB/FMUL patterns.
llvm-svn: 267359
Split from D17490. This patch improves support for determining the demanded vector elements through SSE scalar intrinsics:
1 - demanded vector element support for unary and some extra binary scalar intrinsics (RCP/RSQRT/SQRT/FRCZ and ADD/CMP/DIV/ROUND).
2 - addss/addsd get simplified to a fadd call if we aren't interested in the pass through elements
3 - if we don't need the lowest element of a scalar operation then just use the first argument (the pass through elements) directly
We can add support for propagating demanded elements through any equivalent packed SSE intrinsics in a future patch (these wouldn't use the pass through patterns).
Differential Revision: http://reviews.llvm.org/D19318
llvm-svn: 267357
This patch improves support for determining the demanded vector elements through SSE scalar intrinsics:
1 - recognise that we only need the lowest element of the second input for binary scalar operations (and all the elements of the first input)
2 - recognise that the roundss/roundsd intrinsics use the lowest element of the second input and the remaining elements from the first input
Differential Revision: http://reviews.llvm.org/D17490
llvm-svn: 267356
Summary:
Remove the GlobalValueInfo and change the ModuleSummaryIndex to directly
reference summary objects. The info structure was there to support lazy
parsing of the combined index summary objects, which is no longer
needed and not supported.
Reviewers: joker.eph
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19462
llvm-svn: 267344
Summary:
We are always importing the initializer for a GlobalVariable.
So if a GlobalVariable is in the export-list, we pull in any
refs as well.
Reviewers: tejohnson
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19102
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 267303
The original commit was reverted because of a buildbot problem with LazyCallGraph::SCC handling (not related to the OptBisect handling).
Differential Revision: http://reviews.llvm.org/D19172
llvm-svn: 267231
This patch changes the interface for createPGOFuncNameMetadata() where we add
another PGOFuncName argument.
Differential Revision: http://reviews.llvm.org/D19433
llvm-svn: 267216
Summary:
We can fold compares to false when two distinct allocations within a
function are compared for equality.
Patch by Anna Thomas!
Reviewers: majnemer, reames, sanjoy
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19390
llvm-svn: 267214
Extend the type canonicalization logic to work for unordered atomic loads and stores. Note that while this change itself is fairly simple and low risk, there's a reasonable chance this will expose problems in the backends by suddenly generating IR they wouldn't have seen before. Anything of this nature will be an existing bug in the backend (you could write an atomic float load), but this will definitely change the frequency with which such cases are encountered. If you see problems, feel free to revert this change, but please make sure you collect a test case.
llvm-svn: 267210
Summary: This change will shorten a memset if the beginning of the memset is overwritten by later stores.
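A source-level sketch of the effect (hypothetical example, not taken from the patch): when later stores completely cover the start of the memset, the memset can be shortened to begin after the covered prefix.
  #include <cstdint>
  #include <cstring>

  struct Packet { uint64_t Header; char Payload[56]; };

  void initBefore(Packet *P) {
    std::memset(P, 0, sizeof(Packet));  // zeroes all 64 bytes
    P->Header = 0x1234;                 // immediately overwrites the first 8
  }

  void initAfter(Packet *P) {           // what the optimization effectively forms
    std::memset(P->Payload, 0, sizeof(P->Payload)); // shortened by 8 bytes
    P->Header = 0x1234;
  }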
Reviewers: hfinkel, eeckstein, dberlin, mcrosier
Subscribers: mgrang, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18906
llvm-svn: 267197
E.g. for:
!1 = {"llvm.distribute", i32 1}
it now returns the MDOperand for 1.
I will use this in LoopDistribution to check the value of the metadata.
Note that the change is backward-compatible with its current use in
LoopVersioningLICM. An Optional implicitly converts to a bool depending on
whether it contains a value or not.
llvm-svn: 267190
Summary:
CachingMemorySSAWalker::invalidateInfo was using IsCall to determine
which cache map needed to be cleared of entries referring to the invalidated
MemoryAccess, but there could also be entries referring to it in the
other cache map (value entries, not key entries). This change just
clears both tables to be conservatively correct.
Also add a verifyRemoved() function, called when expensive
checks (i.e. XDEBUG) are enabled to verify that the invalidated
MemoryAccess object is not referenced in any of the caches.
Reviewers: dberlin, george.burgess.iv
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19388
llvm-svn: 267157
We take the intersection of overflow flags while CSE'ing.
This permits us to consider two instructions with different overflow
behavior to be replaceable.
llvm-svn: 267153
Summary:
When optimizing PHIs which have inputs floating point binary
operators, we preserve all IR flags except the fast math
flags.
This change removes the logic which tracked some of the IR flags
(no wrap, exact) and replaces it by doing an and on the IR flags of
all inputs to the PHI - which will also handle the fast math
flags.
Reviewers: majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19370
llvm-svn: 267139
EarlyCSE had inconsistent behavior with regards to flag'd instructions:
- In some cases, it would pessimize if the available instruction had
different flags by not performing CSE.
- In other cases, it would miscompile if it replaced an instruction
which had no flags with an instruction which has flags.
Fix this by being more consistent with our flag handling by utilizing
andIRFlags.
llvm-svn: 267111
Re-layer the functions in the new (i.e., newly correct) post-order
traversals in ValueEnumerator (r266947) and ValueMapper (r266949).
Instead of adding a node to the worklist in a helper function and
returning a flag to say what happened, return the node itself. This
makes the code way cleaner: the worklist is local to the main function,
there is no flag for an early loop exit (since we can cleanly bury the
loop), and it's perfectly clear when pointers into the worklist might be
invalidated.
I'm fixing both algorithms in the same commit to avoid repeating the
commit message; if you take the time to understand one the other should
be easy. The diff itself isn't entirely obvious since the traversals
have some noise (i.e., things to do), but here's the high-level change:
Before:
  auto helper = [&WL](T *Op) {
    if (shouldVisit(Op)) {
      WL.push(Op, Op->begin());
      return true;
    }
    return false;
  };

  WL.push(S, S->begin());
  while (!empty()) {
    auto *N = WL.top().N;
    auto *&I = WL.top().I;
    bool DidChange = false;
    while (I != N->end())
      if (helper(*I++)) {
        DidChange = true;
        break;
      }
    if (DidChange)
      continue;
    POT.push(WL.pop());
  }
After:
  auto helper = [](T **&I, T **E) {
    while (I != E) {
      T *Op = *I++;
      if (shouldVisit(Op)) {
        return Op;
      }
    }
    return nullptr;
  };

  WL.push(S, S->begin());
  while (!empty()) {
    auto *N = WL.top().N;
    auto *&I = WL.top().I;
    if (T *Op = helper(I, N->end())) {
      WL.push(Op, Op->begin());
      continue;
    }
    POT.push(WL.pop());
  }
Thanks to Mehdi for helping me find a better way to layer this.
llvm-svn: 267099
Summary:
Adds an instrumentation pass for the new EfficiencySanitizer ("esan")
performance tuning family of tools. Multiple tools will be supported
within the same framework. Preliminary support for a cache fragmentation
tool is included here.
The shared instrumentation includes:
+ Turn mem{set,cpy,move} intrinsics into library calls.
+ Slowpath instrumentation of loads and stores via callouts to
the runtime library.
+ Fastpath instrumentation will be per-tool.
+ Which memory accesses to ignore will be per-tool.
Reviewers: eugenis, vitalybuka, aizatsky, filcab
Subscribers: filcab, vkalintiris, pcc, silvas, llvm-commits, zhaoqin, kcc
Differential Revision: http://reviews.llvm.org/D19167
llvm-svn: 267058
Summary:
If we know that the pointer allocated within a function does not escape,
we can fold away comparisons that are done with global pointers
Patch by Anna Thomas!
Reviewers: reames, majnemer, sanjoy
Subscribers: mgrang, mcrosier, majnemer, llvm-commits
Differential Revision: http://reviews.llvm.org/D19276
llvm-svn: 267035
This patch implements an optimization bisect feature, which will allow optimizations to be selectively disabled at compile time in order to track down test failures that are caused by incorrect optimizations.
The bisection is enabled using a new command line option (-opt-bisect-limit). Individual passes that may be skipped call the OptBisect object (via an LLVMContext) to see if they should be skipped based on the bisect limit. A finer level of control (disabling individual transformations) can be managed through an additional OptBisect method, but this is not yet used.
The skip checking in this implementation is based on (and replaces) the skipOptnoneFunction check. Where that check was being called, a new call has been inserted in its place which checks the bisect limit and the optnone attribute. A new function call has been added for module and SCC passes that behaves in a similar way.
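Conceptually, the gating works like the following generic sketch (a simplified stand-in, not the OptBisect class itself): every skippable pass bumps a counter and runs only while the counter is within the limit, and the decision is printed so the failing invocation can be binary-searched.
  #include <cstdio>
  #include <string>

  static int BisectLimit = -1; // would be populated from -opt-bisect-limit
  static int BisectCount = 0;

  bool shouldRunPass(const std::string &PassName) {
    ++BisectCount;
    bool Run = BisectLimit < 0 || BisectCount <= BisectLimit;
    std::printf("BISECT: %s pass (%d) %s\n",
                Run ? "running" : "NOT running", BisectCount, PassName.c_str());
    return Run;
  }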
Differential Revision: http://reviews.llvm.org/D19172
llvm-svn: 267022
"Into" was misleading. I am also planning to use this helper to look
for loop metadata and return the argument, so find seems like a better
name.
llvm-svn: 267013
This builds on r266999, which made FindAvailableValue do the right thing. The included tests show the newly enabled transforms and those which remain disabled due either to conservatism or to correctness requirements.
llvm-svn: 267006
Summary:
`llvm.guard(false)` always bails out of the current compilation unit, so
we can prune any control flow following it.
Reviewers: hfinkel, pcc, reames
Subscribers: majnemer, reames, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19245
llvm-svn: 266955
The iterative algorithm from r265456 claimed but failed to create a
post-order traversal. It had the same error that was fixed in the
ValueEnumerator in r266947: now, instead of pushing all operands on the
worklist at once, we pause whenever an operand gets pushed in order to
go depth-first (I know, it sounds obvious).
Sadly, I have no idea how to observe this from outside the algorithm and
so I haven't written a test. The output should be the same; it should
just use fewer temporary nodes now. I've added some comments that I
hope make the current logic clear enough it's unlikely to regress.
llvm-svn: 266949
Summary:
The function importer already decided what symbols need to be pulled
in. Also these magically added ones will not be in the export list
for the source module, which can confuse the internalizer for
instance.
Reviewers: tejohnson, rafael
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19096
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266948
Summary:
This patch refines the instruction weight annotation algorithm:
1. Do not use dbg_value intrinsics for annotation.
2. Annotate cold calls if the call is inlined in profile, but not inlined before preparation. This indicates that the annotation preparation step found no sample for the inlined callsite, thus the call should be very cold.
Reviewers: dnovillo, davidxl
Subscribers: mgrang, llvm-commits
Differential Revision: http://reviews.llvm.org/D19286
llvm-svn: 266936
Summary:
This is done for consistency with asan-use-after-return.
I see no other users than tests.
Reviewers: aizatsky, kcc
Differential Revision: http://reviews.llvm.org/D19306
llvm-svn: 266906
Summary:
This patch prevents importing from (and therefore exporting from) any
module with a "llvm.used" local value. Local values need to be promoted
and renamed when importing, and their presense on the llvm.used variable
indicates that there are opaque uses that won't see the rename. One such
example is a use in inline assembly.
See also the discussion at:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098047.html
As part of this, move collectUsedGlobalVariables out of Transforms/Utils
and into IR/Module so that it can be used more widely. There are several
other places in LLVM that used copies of this code that can be cleaned
up as a follow on NFC patch.
Reviewers: joker.eph
Subscribers: pcc, llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D18986
llvm-svn: 266877
This linkage is *not* intended to express that a declaration refers
to a weak symbol, but that the symbol might not be present at link
time. I don't believe it was the intent.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266856
The functionality contained within getIntrinsicIDForCall is two-fold: it
checks if a CallInst's callee is a vectorizable intrinsic. If it isn't
an intrinsic, it attempts to map the call's target to a suitable
intrinsic.
Move the mapping functionality into getIntrinsicForCallSite and rename
getIntrinsicIDForCall to getVectorIntrinsicIDForCall while
reimplementing it in terms of getIntrinsicForCallSite.
llvm-svn: 266801
This patch improves SimplifyCFG to catch cases like:
if (a < b) {
  if (a > b)    <- known to be false
    unreachable;
}
Phabricator Revision: http://reviews.llvm.org/D18905
llvm-svn: 266767
Summary:
We need to use predecessors for the reverse graph and successors for the forward graph.
succ_iterator/pred_iterator are not compatible, so this patch does all the work necessary to work around that (which is what everywhere else does). Not sure if there is a better way, so cc'ing some random folks to take a gander :)
Reviewers: dblaikie, qcolombet, echristo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18796
llvm-svn: 266718
Removed some unused headers, replaced some headers with forward class declarations.
Found using simple scripts like this one:
clear && ack --cpp -l '#include "llvm/ADT/IndexedMap.h"' | xargs grep -L 'IndexedMap[<]' | xargs grep -n --color=auto 'IndexedMap'
Patch by Eugene Kosov <claprix@yandex.ru>
Differential Revision: http://reviews.llvm.org/D19219
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266595
This catches two nullptr insertions into the ValueMap I missed in
r266567. I missed CloneFunction because it never calls RemapInstruction
directly. Here's one of the still-failing bots:
http://lab.llvm.org:8011/builders/clang-x64-ninja-win7/builds/11496
llvm-svn: 266570
Add an assertion to ValueMapper that prevents double-scheduling of
GlobalValues to remap, and fix the one place it happened. There are
tons of tests that fail with this assertion in place and without the
code change, so I'm not adding another.
Although it looks related, r266563 was, indeed, removing dead code.
AFAICT, this cross-file double-scheduling started in r266510 when the
cross-file recursion was removed.
llvm-svn: 266569
Apparently there isn't test coverage for all of these. I'd appreciate it
if someone who can reproduce this could send me something to reduce, but for
now I've just looked for users of RemapInstruction and MapValue and
ensured they don't accidentally insert nullptr. Here is one of the
bootstraps that caught:
http://lab.llvm.org:8011/builders/clang-x64-ninja-win7/builds/11494
llvm-svn: 266567
As a follow-up to r123058, assert that there are no null mappings in the
ValueMap instead of just ignoring them when they are there. There were
a couple of accidental insertions in CloneFunction so I cleaned those up
(caught by testcases).
llvm-svn: 266565
I accidentally replaced `mayBeOverridden` with `!isInterposable`.
Remove the negation and add a test case that would've caught this.
Many thanks to Håkan Hjort for spotting this!
llvm-svn: 266551
Rather than relying on the structural equivalence of DICompositeType to
merge type definitions, use an explicit map on the LLVMContext that
LLParser and BitcodeReader consult when constructing new nodes.
Each non-forward-declaration DICompositeType with a non-empty
'identifier:' field is stored/loaded from the type map, and the first
definition will "win".
This map is opt-in: clients that expect ODR types from different modules
to be merged must call LLVMContext::ensureDITypeMap.
- Clients that just happen to load more than one Module in the same
LLVMContext won't magically merge types.
- Clients (like LTO) that want to continue to merge types based on ODR
identifiers should opt-in immediately.
I have updated LTOCodeGenerator.cpp, the two "linking" spots in
gold-plugin.cpp, and llvm-link (unless -disable-debug-info-type-map) to
set this.
With this in place, it will be straightforward to remove the DITypeRef
concept (i.e., referencing types by their 'identifier:' string rather
than pointing at them directly).
llvm-svn: 266549
Since the result of a mapped distinct node is known up front, it's more
efficient to map them separately from uniqued nodes. This commit pulls
them out of the post-order traversal and stores them in a worklist to be
remapped at the top-level.
This is essentially reapplying r244181 ("ValueMapper: Rotate distinct
node remapping algorithm") to the new iterative algorithm from r265456
("ValueMapper: Rewrite Mapper::mapMetadata without recursion").
Now that the traversal logic only handles uniqued MDNodes, it's much
simpler to inline it all into MDNodeMapper::createPOT (I've killed the
MDNodeMapper::push and MDNodeMapper::tryToPop helpers and localized the
traversal worklist).
The resulting high-level algorithm for MDNodeMapper::map now looks like
this:
- Distinct nodes are immediately mapped and added to
MDNodeMapper::DistinctWorklist using MDNodeMapper::mapDistinctNode.
- Uniqued nodes are mapped via MDNodeMapper::mapTopLevelUniquedNode,
which traverses the transitive uniqued subgraph of a node to
calculate uniqued node mappings in bulk.
- This is a simplified version of MDNodeMapper::map from before
this commit (originally r265456) that doesn't traverse through
any distinct nodes.
- Distinct nodes are added to MDNodeMapper::DistinctWorklist via
MDNodeMapper::mapDistinctNode.
- This uses MDNodeMapper::createPOT to fill a
MDNodeMapper::UniquedGraph (a post-order traversal and side
table), UniquedGraph::propagateChanges to track which uniqued
nodes need to change, and MDNodeMapper::mapNodesInPOT to create
the uniqued nodes.
- Placeholders for forward references are now only needed when
there's a uniquing cycle (a cycle of uniqued nodes unbroken by
distinct nodes). This is the key functionality change that
we're reintroducing (from r244181). As of r265456, a temporary
forward reference might be needed for any cycle that involved
uniqued nodes.
- After mapping the first node appropriately, MDNodeMapper::map works
through MDNodeMapper::DistinctWorklist. For each distinct node, its
operands are remapped with MDNodeMapper::mapDistinctNode and
MDNodeMapper::mapTopLevelUniquedNode until all nodes have been
mapped.
Sadly there's nothing observable I can test here; no real functionality
change, just a compile-time speedup from reduced malloc traffic.
llvm-svn: 266537
As a minor fixup to r266258, only track nodes that needed a placeholder
in CyclicNodes in MDNodeMapper::mapUniquedNodes. There should be no
observable functionality change, just some local memory savings because
CyclicNodes only needs to grow to accommodate nodes that are actually
involved in cycles. (This was the original intent of r266258, or else
the vector would have been called "ChangedNodes".)
llvm-svn: 266536
This is a requirement for the cache handling in D18494
Differential Revision: http://reviews.llvm.org/D18908
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266519
To be able to work accurately on the reference graph when taking
decision about internalizing, promoting, renaming, etc. We need
to have the alias information explicit.
Differential Revision: http://reviews.llvm.org/D18836
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266517
Stop memoizing ConstantAsMetadata in ValueMapper::mapMetadata. Now we
have to recompute it, but these metadata aren't particularly common, and
it restricts the lifetime of the Metadata map unnecessarily.
(The motivation is that I have a patch which uses a single Metadata map
for the lifetime of IRMover. Mehdi profiled r266446 with the patch
applied and we saw a pretty big speedup in lib/Linker.)
llvm-svn: 266513
This reverts commit r266507, reapplying r266503 (and r266505
"ValueMapper: Use API from r266503 in unit tests, NFC") completely
unchanged.
I reverted because of a bot failure here:
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/16810/
However, looking more closely, the failure was from a host-compiler
crash (clang 3.7.1) when building:
lib/CodeGen/AsmPrinter/CMakeFiles/LLVMAsmPrinter.dir/DwarfAccelTable.cpp.o
I didn't modify that file, or anything it includes, with that commit.
The next build (which hadn't picked up my revert) got past it:
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/16811/
I think this was just unfortunate timing. I suppose the bot must be
flakey.
llvm-svn: 266510
This reverts commit r266503, in case it's the root cause of this bot
failure:
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/16810
I'm also reverting r266505 -- "ValueMapper: Use API from r266503 in unit
tests, NFC" -- since it's in the way.
llvm-svn: 266507
Eliminate co-recursion of Mapper::mapValue through
ValueMaterializer::materializeInitFor, through a major redesign of the
ValueMapper.cpp interface.
- Expose a ValueMapper class that controls the entry points to the
mapping algorithms.
- Change IRLinker to use ValueMapper directly, rather than
llvm::RemapInstruction, llvm::MapValue, etc.
- Use (e.g.) ValueMapper::scheduleMapGlobalInit to add mapping work to
a worklist in ValueMapper instead of recursing.
There were two fairly major complications.
Firstly, IRLinker::linkAppendingVarProto incorporates an on-the-fly IR
upgrade that I had to split apart. Long-term, this upgrade should be
done in the bitcode reader (and we should only accept the "new" form),
but for now I've just made it work and added a FIXME. The hold-up is
that we need to deprecate the C API that relies on this.
Secondly, IRLinker has special logic to correctly implement aliases with
comdats, and uses two ValueToValueMapTy instances and two
ValueMaterializers. I supported this by allowing clients to register an
alternate mapping context, whose MCID can be passed in when scheduling
new work.
While out of scope for this commit, it should now be straightforward to
remove recursion from Mapper::mapValue.
llvm-svn: 266503
Change Mapper::VM to a pointer and add a `getVM()` accessor for it.
While this has no functionality change, it minimizes the diff on an
upcoming patch that allows switching between instances of
ValueToValueMapTy on a single Mapper instance.
llvm-svn: 266490
Allow explicit section for indirectly called functions in cfi-icall.
Jumptables for functions in the same type class must be contiguous, so they
always go to the default text section.
Fixes PR25079.
llvm-svn: 266486
InstCombine wants to optimize compares of calls to fabs with zero.
However, we didn't have the necessary legality checking to verify that
the function call had the same behavior as fabs.
llvm-svn: 266452
Currently each Function points to a DISubprogram and DISubprogram has a
scope field. For member functions the scope is a DICompositeType. DIScopes
point to the DICompileUnit to facilitate type uniquing.
Distinct DISubprograms (with isDefinition: true) are not part of the type
hierarchy and cannot be uniqued. This change removes the subprograms
list from DICompileUnit and instead adds a pointer to the owning compile
unit to distinct DISubprograms. This would make it easy for ThinLTO to
strip unneeded DISubprograms and their transitively referenced debug info.
Motivation
----------
Materializing DISubprograms is currently the most expensive operation when
doing a ThinLTO build of clang.
We want the DISubprogram to be stored in a separate Bitcode block (or the
same block as the function body) so we can avoid having to expensively
deserialize all DISubprograms together with the global metadata. If a
function has been inlined into another subprogram we need to store a
reference the block containing the inlined subprogram.
Attached to https://llvm.org/bugs/show_bug.cgi?id=27284 is a python script
that updates LLVM IR testcases to the new format.
http://reviews.llvm.org/D19034
<rdar://problem/25256815>
llvm-svn: 266446
This is almost identical to:
http://reviews.llvm.org/rL264527
This doesn't solve PR27344; it just allows the profile weights to survive.
To solve the bug, we need to use the profile weights in the backend.
llvm-svn: 266442
Summary:
This IR pass is helpful for GPUs, and other targets with divergent
branches. It's a nop on targets without divergent branches.
Reviewers: chandlerc
Subscribers: llvm-commits, jingyue, rnk, joker.eph, tra
Differential Revision: http://reviews.llvm.org/D18626
llvm-svn: 266399
Summary:
This lets us add this pass to the IR pass manager unconditionally; it
will simply not do anything on targets without branch divergence.
Reviewers: tra
Subscribers: llvm-commits, jingyue, rnk, chandlerc
Differential Revision: http://reviews.llvm.org/D18625
llvm-svn: 266398
Some SIMD implementations are not IEEE-754 compliant, for example ARM's NEON.
This patch teaches the loop vectorizer to only allow transformations of loops
that either contain no floating-point operations or have enough allowance
flags supporting lack of precision (ex. -ffast-math, Darwin).
For that, the target description now has a method which tells us if the
vectorizer is allowed to handle FP math without falling into unsafe
representations, plus a check on every FP instruction in the candidate loop
to check for the safety flags.
This commit makes LLVM behave like GCC with respect to ARM NEON support, but
it stops short of fixing the underlying problem: sub-normals. Neither GCC
nor LLVM have a flag for allowing sub-normal operations. Before this patch,
GCC only allows it using unsafe-math flags and LLVM allows it by default with
no way to turn it off (short of not using NEON at all).
As a first step, we push this change to make it safe and in sync with GCC.
The second step is to discuss a new sub-normals flag with both communities
and come up with a common solution. The third step is to improve the FastMath
flags in LLVM to encode sub-normals and use those flags to restrict NEON FP.
Fixes PR16275.
llvm-svn: 266363
https://llvm.org/bugs/show_bug.cgi?id=27105
We can check if all bits outside of a constant mask are set with a
single constant.
As noted in the bug report, although this form should be considered the
canonical IR, backends may want to transform this into an 'andn' / 'andc'
comparison against zero because that could be a single machine instruction.
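A self-contained check of the equivalence involved (my own example values; the commit operates on the IR pattern): "all bits outside Mask are set in X" can be written either with an or against the mask compared to all-ones, or as an andn-style comparison against zero.
  #include <cassert>
  #include <cstdint>

  bool outsideMaskSetOr(uint32_t X, uint32_t Mask) {
    return (X | Mask) == ~0u;          // canonical-style form
  }
  bool outsideMaskSetAndn(uint32_t X, uint32_t Mask) {
    return (~X & ~Mask) == 0;          // andn/andc-style form
  }

  int main() {
    const uint32_t Mask = 0x00FF00FFu;
    for (uint32_t X : {0u, ~0u, ~Mask, 0x12345678u, 0xFF00FF00u})
      assert(outsideMaskSetOr(X, Mask) == outsideMaskSetAndn(X, Mask));
    return 0;
  }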
Differential Revision: http://reviews.llvm.org/D18842
llvm-svn: 266362
Summary:
This fully solves the problem where the StructurizeCFG pass does not
consider the same branches as uniform as the SIAnnotateControlFlow pass.
The patch in D19013 helps with this problem, but is not sufficient
(and, interestingly, causes a "regression" with one of the existing
test cases).
No tests included here, because tests in D19013 already cover this.
Reviewers: arsenm, tstellarAMD
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19018
llvm-svn: 266346
The behavior of {MIN,MAX}NAN differs from that of {MIN,MAX}NUM when only
one of the inputs is NaN: -NUM will return the non-NaN argument while
-NAN would return NaN.
It is desirable to lower @llvm.{min,max}num to -NAN if the target doesn't
have a native instruction for -NUM. Notably, ARMv7 NEON's vmin has the
-NAN semantics.
N.B. Of course, it is only safe to do this if the intrinsic call is
marked nnan.
llvm-svn: 266279
At some point, ARM stopped getting any benefit from ConstantHoisting because
the pass called a different variant of getIntImmCost. Reimplementing the
correct variant revealed some problems, however:
+ ConstantHoisting was modifying switch statements. This is simply invalid;
the cases must remain integer constants no matter the notional cost.
+ ConstantHoisting was mangling alloca instructions in the entry block. These
should be handled by FrameLowering, so constants actually have a cost of 0.
Worse, the resulting bitcasts meant they became dynamic allocas.
rdar://25707382
llvm-svn: 266260
Fix a major bug from r265456. Although it's now much rarer, ValueMapper
sometimes has to duplicate cycles. The
might-transitively-reference-a-temporary counts don't decrement on their
own when there are cycles, and you need to call MDNode::resolveCycles to
fix it.
r265456 was checking the input nodes to see if they were unresolved.
This is useless; they should never be unresolved. Instead we should
check the output nodes and resolve cycles on them.
llvm-svn: 266258
LLVM optimization passes may reduce a profiled target expression
to a constant. Removing runtime calls at such instrumentation points
would help speed up the runtime of the instrumented program.
llvm-svn: 266229
Summary:
To be able to work accurately on the reference graph when taking decision
about internalizing, promoting, renaming, etc. We need to have the alias
information explicit.
Reviewers: tejohnson
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18836
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266214
Remove an ad-hoc transform in InstCombine and replace it with more
general machinery (ValueTracking, InstructionSimplify and VectorUtils).
This fixes PR27332.
llvm-svn: 266175
This will save a bunch of copies / initializations of intermediate
data structures, and (hopefully) simplify the code.
This also abstracts the symbol preservation mechanism out of the
Internalization pass into the client code, which is not forced
to keep a map of strings, for instance (ThinLTO will prefer hashes).
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266163
This bug was introduced with:
http://reviews.llvm.org/rL262269
AVX masked loads are specified to set vector lanes to zero when the high bit of the mask
element for that lane is zero:
"If the mask is 0, the corresponding data element is set to zero in the load form of these
instructions, and unmodified in the store form." --Intel manual
Differential Revision: http://reviews.llvm.org/D19017
llvm-svn: 266148
Summary:
For correct handling of aliases to nameless
functions, we need to be able to refer to them through a GUID in the summary.
Here we name them using a hash of the non-private global names in the module.
Reviewers: tejohnson
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D18883
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266132
Summary:
The function import pass was computing all the imports for all the
modules in the index, and only using the imports for the current module.
Change this to instead compute only for the given module. This means
that the exports list can't be populated, but they weren't being used
anyway.
Longer term, the linker can collect all the imports and export lists
and serialize them out for consumption by the distributed backend
processes which use this pass.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D18945
llvm-svn: 266125
`allocsize` is a function attribute that allows users to request that
LLVM treat arbitrary functions as allocation functions.
This patch makes LLVM accept the `allocsize` attribute, and makes
`@llvm.objectsize` recognize said attribute.
The review for this was split into two patches for ease of reviewing:
D18974 and D14933. As promised on the revisions, I'm landing both
patches as a single commit.
Differential Revision: http://reviews.llvm.org/D14933
llvm-svn: 266032
r237193 fixed handling of alloca size / align in MergeFunctions, but only tested one of them and didn't follow FunctionComparator::cmpOperations's usual comparison pattern. It also didn't update Instruction.cpp:haveSameSpecialState, which I'll do separately.
llvm-svn: 266022
This is more robust to changes in the link ordering.
Differential Revision: http://reviews.llvm.org/D18946
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266018
This patch ensures that when we detect first-order recurrences, we reject a phi
node if its previous value is also a phi node. During vectorization the initial
and previous values of the recurrence are shuffled together to create the value
for the current iteration. However, phi nodes are not widened like other
instructions. This fixes PR27246.
Differential Revision: http://reviews.llvm.org/D18971
llvm-svn: 265983
We need just a couple of logic tweaks to consolidate the shl and lshr cases.
This is step 5 of refactoring to solve PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760
llvm-svn: 265965
This is the straightforward fix for PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760
But we still need to make some changes to generalize this helper function
and then send the lshr case into here.
llvm-svn: 265960
Summary:
This is the first step in also serializing the index out to LLVM
assembly.
The per-module summary written to bitcode is moved out of the bitcode
writer and to a new analysis pass (ModuleSummaryIndexWrapperPass).
The pass itself uses a new builder class to compute index, and the
builder class is used directly in places where we don't have a pass
manager (e.g. llvm-as).
Because we are computing summaries outside of the bitcode writer, we no
longer can use value ids created by the bitcode writer's
ValueEnumerator. This required changing the reference graph edge type
to use a new ValueInfo class holding a union between a GUID (combined
index) and Value* (per-module index). The Value* are converted to the
appropriate value ID during bitcode writing.
Also, this enables removal of the BitWriter library's dependence on the
Analysis library that was previously required for the summary computation.
Reviewers: joker.eph
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D18763
llvm-svn: 265941
Summary:
If we can prove that an op.with.overflow intrinsic does not overflow, we
can get rid of the intrinsic, and replace it with non-wrapping
arithmetic.
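A source-level analogy (hypothetical example; the patch itself rewrites the intrinsics in IR): once the ranges prove the operation can't overflow, the checked form degenerates to plain arithmetic with a constant-false overflow bit.
  #include <cstdint>

  // Before: a checked add via the builtin (corresponds to sadd.with.overflow).
  bool checkedIncrement(int32_t X, int32_t &Out) {
    return __builtin_add_overflow(X, 1, &Out); // true if the add overflowed
  }

  // After (sketch): if analysis proves X is in [0, 100), X + 1 cannot overflow,
  // so the result is the plain sum and the overflow bit folds to false.
  bool checkedIncrementKnownSmall(int32_t X, int32_t &Out) {
    Out = X + 1;
    return false;
  }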
Reviewers: atrick, regehr
Subscribers: sanjoy, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18685
llvm-svn: 265913
The vectorization cost of a uniform load wasn't correctly calculated.
As a result, a simple loop that loads a uniform value wasn't vectorized.
Differential Revision: http://reviews.llvm.org/D18940
llvm-svn: 265901
Strip out the remapping parts of IRLinker::linkFunctionBody and put them
in ValueMapper.cpp under the name Mapper::remapFunction (with a
top-level entry-point llvm::RemapFunction).
This is a nice cleanup on its own since it puts the remapping code
together and shares a single Mapper context for the entire
IRLinker::linkFunctionBody Call. Besides that, this will make it easier
to break the co-recursion between IRMover.cpp and ValueMapper.cpp in
follow ups.
llvm-svn: 265835
Use Mapper::mapValue instead of llvm::MapValue from
Mapper::remapInstruction when mapping an incoming block for a PHINode
(follow-up to r265832). This will implicitly pass along the
Materializer argument, but when this code was added in r133513 there was
no Materializer argument. I suspect this call to MapValue was just
missed in r182776 since it's not observable (basic blocks can't be
materialized, and they don't reference other values).
llvm-svn: 265833
Add Mapper::remapInstruction, move the guts of llvm::RemapInstruction
into it, and use the same Mapper for most of the calls to MapValue and
MapMetadata. There should be no functionality change here.
I left off the call to MapValue that wasn't passing in a Materializer
argument (for basic blocks of PHINodes). It shouldn't change
functionality either, but I'm suspicious enough to commit separately.
llvm-svn: 265832
Prevent the Metadata side-table in ValueMap from growing unnecessarily
when RF_NoModuleLevelChanges. As a drive-by, make ValueMap::hasMD,
which apparently had no users until I used it here for testing, actually
compile.
llvm-svn: 265828
Stop adding MDString to the Metadata section of the ValueMap in
MapMetadata. It blows up the size of the map for no benefit, since we
can always return quickly anyway.
There is a potential follow-up that I don't think I'll push on right
away, but maybe someone else is interested: stop checking for a
pre-mapped MDString, and move the `isa<MDString>()` checks in
Mapper::mapSimpleMetadata and MDNodeMapper::getMappedOp in front of the
`VM.getMappedMD()` calls. While this would preclude explicitly
remapping MDStrings it would probably be a little faster.
llvm-svn: 265827
We had a select of a cast of a select but attempted to replace the outer
select with the inner select despite their incompatible types.
Patch by Anton Korobeynikov!
This fixes PR27236.
llvm-svn: 265805
Two or more identical assumes are occasionally next to each other in a
basic block.
While our generic machinery will turn a redundant assume into a no-op,
it is not super cheap.
We can perform a simpler check to achieve the same result for this case.
llvm-svn: 265801
InstCombine cannot effectively remove redundant assumptions without them
registered in the assumption cache. The vectorizer can create identical
assumptions but doesn't register them with the cache, resulting in
slower compile times because InstCombine tries to reason about a lot
more assumptions.
Fix this by registering the cloned assumptions.
llvm-svn: 265800
This re-commits r265535 which was reverted in r265541 because it
broke the Windows bots. The problem was that we had a PointerIntPair
which took a pointer to a struct allocated with new. The problem
was that new doesn't provide sufficient alignment guarantees.
This pattern was already present before r265535 and it just happened
to work. To fix this, we now separate the PointerIntPair from the
ExitNotTakenInfo struct into a pointer and a bool.
Original commit message:
Summary:
When the backedge taken condition is computed from an icmp, SCEV can
deduce the backedge taken count only if one of the sides of the icmp
is an AddRecExpr. However, due to sign/zero extensions, we sometimes
end up with something that is not an AddRecExpr.
However, we can use SCEV predicates to produce a 'guarded' expression.
This change adds a method to SCEV to get this expression, and the
SCEV predicate associated with it.
In HowManyGreaterThans and HowManyLessThans we will now add a SCEV
predicate associated with the guarded backedge taken count when the
analyzed SCEV expression is not an AddRecExpr. Note that we only do
this as an alternative to returning a 'CouldNotCompute'.
We use this new feature in Loop Access Analysis and LoopVectorize to analyze
and transform more loops.
Reviewers: anemet, mzolotukhin, hfinkel, sanjoy
Subscribers: flyingforyou, mcrosier, atrick, mssimpso, sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17201
llvm-svn: 265786
This reverts commit r265765, reapplying r265759 after changing a call from
LocalAsMetadata::get to ValueAsMetadata::get (and adding a unit test). When a
local value is mapped to a constant (like "i32 %a" => "i32 7"), the new debug
intrinsic operand may no longer be pointing at a local.
http://lab.llvm.org:8080/green/job/clang-stage1-configure-RA_build/19020/
The previous commit message follows:
--
This is a partial re-commit -- maybe more of a re-implementation -- of
r265631 (reverted in r265637).
This makes RF_IgnoreMissingLocals behave (almost) consistently between
the Value and the Metadata hierarchy. In particular:
- MapValue returns nullptr or "metadata !{}" for missing locals in
MetadataAsValue/LocalAsMetadata bridging pairs, depending on
the RF_IgnoreMissingLocals flag.
- MapValue doesn't memoize LocalAsMetadata-related results.
- MapMetadata no longer deals with LocalAsMetadata or
RF_IgnoreMissingLocals at all. (This wasn't in r265631 at all, but
I realized during testing it would make the patch simpler with no
loss of generality.)
r265631 went too far, making both functions universally ignore
RF_IgnoreMissingLocals. This broke building (e.g.) compiler-rt.
Reassociate (and possibly other passes) don't currently maintain
dominates-use invariants for metadata operands, resulting in IR like
this:
define void @foo(i32 %arg) {
  call void @llvm.some.intrinsic(metadata i32 %x)
  %x = add i32 1, i32 %arg
}
If the inliner chooses to inline @foo into another function, then
RemapInstruction will call `MapValue(metadata i32 %x)` and assert that
the return is not nullptr.
I've filed PR27273 to add a Verifier check and fix the underlying
problem in the optimization passes.
As a workaround, return `!{}` instead of nullptr for unmapped
LocalAsMetadata when RF_IgnoreMissingLocals is unset. Otherwise, match
the behaviour of r265631.
Original commit message:
ValueMapper: Make LocalAsMetadata match function-local Values
Start treating LocalAsMetadata similarly to function-local members of
the Value hierarchy in MapValue and MapMetadata.
- Don't memoize them.
- Return nullptr if they are missing.
This also cleans up ConstantAsMetadata to stop listening to the
RF_IgnoreMissingLocals flag.
llvm-svn: 265768
Summary:
Fixes PR26774.
If you're aware of the issue, feel free to skip the "Motivation"
section and jump directly to "This patch".
Motivation:
I define "refinement" as discarding behaviors from a program that the
optimizer has license to discard. So transforming:
```
void f(unsigned x) {
unsigned t = 5 / x;
(void)t;
}
```
to
```
void f(unsigned x) { }
```
is refinement, since the behavior went from "if x == 0 then undefined
else nothing" to "nothing" (the optimizer has license to discard
undefined behavior).
Refinement is a fundamental aspect of many mid-level optimizations done
by LLVM. For instance, transforming `x == (x + 1)` to `false` also
involves refinement since the expression's value went from "if x is
`undef` then { `true` or `false` } else { `false` }" to "`false`" (by
definition, the optimizer has license to fold `undef` to any non-`undef`
value).
Unfortunately, refinement implies that the optimizer cannot assume
that the implementation of a function it can see has all of the
behavior an unoptimized or a differently optimized version of the same
function can have. This is a problem for functions with comdat
linkage, where a function can be replaced by an unoptimized or a
differently optimized version of the same source level function.
For instance, FunctionAttrs cannot assume a comdat function is
actually `readnone` even if it does not have any loads or stores in
it; since there may have been loads and stores in the "original
function" that were refined out in the currently visible variant, and
at the link step the linker may in fact choose an implementation with
a load or a store. As an example, consider a function that does two
atomic loads from the same memory location, and writes to memory only
if the two values are not equal. The optimizer is allowed to refine
this function by first CSE'ing the two loads, and then folding the
comparison to always report that the two values are equal. Such a
refined variant will look like it is `readonly`. However, the
unoptimized version of the function can still write to memory (since
the two loads //can// result in different values), and selecting the
unoptimized version at link time will retroactively invalidate
transforms we may have done under the assumption that the function
does not write to memory.
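The example above, written out as a hedged source-level sketch (the names and the use of 'inline' to get comdat-like linkage are illustrative):
```
#include <atomic>

// An optimizer may CSE the two atomic loads and fold the comparison, making
// this copy look readonly, while an unoptimized copy chosen at link time can
// still reach the store.
inline void maybeStore(std::atomic<int> &Flag, int *Out) {
  int A = Flag.load();
  int B = Flag.load();
  if (A != B)
    *Out = 1; // reachable in the unrefined variant
}
```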
Note: this is not just a problem with atomics or with linking
differently optimized object files. See PR26774 for more realistic
examples that involved neither.
This patch:
This change introduces a new predicate over linkage types,
`GlobalValue::mayBeDerefined`, that returns true if the linkage type
allows a function to be replaced by a differently optimized variant at
link time. It then changes a set of IPO passes to bail out if they see
such a function.
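A hedged sketch of the bail-out pattern (not the code of any particular pass; only `mayBeDerefined` is named by this change):
```
#include "llvm/IR/Function.h"

// IPO passes give up on functions whose visible body may be replaced by a
// differently optimized variant at link time.
static bool safeToRefineBasedOnBody(const llvm::Function &F) {
  return !F.isDeclaration() && !F.mayBeDerefined();
}
```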
Reviewers: chandlerc, hfinkel, dexonsmith, joker.eph, rnk
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18634
llvm-svn: 265762
This is a partial re-commit -- maybe more of a re-implementation -- of
r265631 (reverted in r265637).
This makes RF_IgnoreMissingLocals behave (almost) consistently between
the Value and the Metadata hierarchy. In particular:
- MapValue returns nullptr or "metadata !{}" for missing locals in
MetadataAsValue/LocalAsMetadata bridging pairs, depending on
the RF_IgnoreMissingLocals flag.
- MapValue doesn't memoize LocalAsMetadata-related results.
- MapMetadata no longer deals with LocalAsMetadata or
RF_IgnoreMissingLocals at all. (This wasn't in r265631 at all, but
I realized during testing it would make the patch simpler with no
loss of generality.)
r265631 went too far, making both functions universally ignore
RF_IgnoreMissingLocals. This broke building (e.g.) compiler-rt.
Reassociate (and possibly other passes) don't currently maintain
dominates-use invariants for metadata operands, resulting in IR like
this:
define void @foo(i32 %arg) {
  call void @llvm.some.intrinsic(metadata i32 %x)
  %x = add i32 1, i32 %arg
}
If the inliner chooses to inline @foo into another function, then
RemapInstruction will call `MapValue(metadata i32 %x)` and assert that
the return is not nullptr.
I've filed PR27273 to add a Verifier check and fix the underlying
problem in the optimization passes.
As a workaround, return `!{}` instead of nullptr for unmapped
LocalAsMetadata when RF_IgnoreMissingLocals is unset. Otherwise, match
the behaviour of r265631.
Original commit message:
ValueMapper: Make LocalAsMetadata match function-local Values
Start treating LocalAsMetadata similarly to function-local members of
the Value hierarchy in MapValue and MapMetadata.
- Don't memoize them.
- Return nullptr if they are missing.
This also cleans up ConstantAsMetadata to stop listening to the
RF_IgnoreMissingLocals flag.
llvm-svn: 265759
As suggested by Chandler in his review comments for D18662, this
follow-on patch renames some variables in GetLoadValueForLoad and
CoerceAvailableValueToLoadType to hopefully make it more obvious
which variables hold value sizes and which hold load/store sizes.
No functional change intended.
llvm-svn: 265687
When GVN wants to re-interpret an already available value in a smaller
type, it needs to right-shift the value on big-endian systems to ensure
the correct bytes are accessed. The shift value is the difference of
the sizes of the two types.
This is correct as long as both types occupy multiples of full bytes.
However, when one of them is a sub-byte type like i1, this no longer
holds true: we still need to shift, but only to access the correct
*byte*. Accessing bits within the byte requires no shift in either
endianness; e.g. an i1 resides in the least-significant bit of its
containing byte on both big- and little-endian systems.
Therefore, the appropriate shift value to be used is the difference of
the *storage* sizes of the two types. This is already handled correctly
in one place where such a shift takes place (GetStoreValueForLoad), but
is incorrect in two other places: GetLoadValueForLoad and
CoerceAvailableValueToLoadType.
This patch changes both places to use the storage size as well.
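A worked sketch of the corrected shift computation (the helper name and byte-based parameters are illustrative, not the actual GVN code):
```
#include <cstdint>

// Big-endian coercion shift, computed from *storage* sizes in bytes.
// Re-reading an i1 (1 storage byte) out of a stored i32 (4 storage bytes)
// needs a (4 - 1) * 8 = 24-bit shift, not the 31 bits a value-size
// difference (32 - 1) would suggest.
static uint64_t bigEndianCoercionShiftBits(uint64_t StoredStorageBytes,
                                           uint64_t LoadedStorageBytes) {
  return (StoredStorageBytes - LoadedStorageBytes) * 8;
}
```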
Differential Revision: http://reviews.llvm.org/D18662
llvm-svn: 265684
Remove the assertion that disallowed the combination, since
RF_IgnoreMissingLocals should have no effect on globals. As it happens,
RF_NullMapMissingGlobalValues asserted in MapValue(Constant*,...), so I
also changed a cast to a cast_or_null to get my test passing.
llvm-svn: 265633
Start treating LocalAsMetadata similarly to function-local members of
the Value hierarchy in MapValue and MapMetadata.
- Don't memoize them.
- Return nullptr if they are missing.
This also cleans up ConstantAsMetadata to stop listening to the
RF_IgnoreMissingLocals flag.
llvm-svn: 265631
Clarify what this RemapFlag actually means.
- Change the flag name to match its intended behaviour.
- Clearly document that it's not supposed to affect globals.
- Add a host of FIXMEs to indicate how to fix the behaviour to match
the intent of the flag.
RF_IgnoreMissingLocals should only affect the behaviour of
RemapInstruction for function-local operands; namely, for operands of
type Argument, Instruction, and BasicBlock. Currently, it is *only*
passed into RemapInstruction calls (and the transitive MapValue calls
that it makes).
When I split Metadata from Value I didn't understand the flag, and I
used it in a bunch of places for "global" metadata.
This commit doesn't have any functionality change, but prepares to
cleanup MapMetadata and MapValue.
llvm-svn: 265628
Updating dominators for exit-blocks of the unrolled loops is not enough,
as shown in PR27157. The proper way is to update dominators for all
dominance-children of original loop blocks.
llvm-svn: 265605
Summary:
In the context of http://wg21.link/lwg2445 C++ uses the concept of
'stronger' ordering but doesn't define it properly. This should be fixed
in C++17 barring a small question that's still open.
The code currently plays fast and loose with the AtomicOrdering
enum. Using an enum class is one step towards tightening things. I later
also want to tighten related enums, such as clang's
AtomicOrderingKind (which should be shared with LLVM as a 'C++ ABI'
enum).
This change touches a few lines of code which can be improved later, I'd
like to keep it as NFC for now as it's already quite complex. I have
related changes for clang.
As a follow-up I'll add:
bool operator<(AtomicOrdering, AtomicOrdering) = delete;
bool operator>(AtomicOrdering, AtomicOrdering) = delete;
bool operator<=(AtomicOrdering, AtomicOrdering) = delete;
bool operator>=(AtomicOrdering, AtomicOrdering) = delete;
This is separate so that clang and LLVM changes don't need to be in sync.
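A minimal sketch of what the enum class plus the deleted relational operators buy (enumerator names are assumed here for illustration):
```
enum class AtomicOrdering {
  NotAtomic, Unordered, Monotonic, Acquire, Release,
  AcquireRelease, SequentiallyConsistent
};

bool operator<(AtomicOrdering, AtomicOrdering) = delete;

// With a plain enum this compiled and silently compared underlying integers,
// which is not a meaningful "stronger than" relation:
//   if (Ordering < AtomicOrdering::Release) ...   // now a compile error
```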
Reviewers: jyknight, reames
Subscribers: jyknight, llvm-commits
Differential Revision: http://reviews.llvm.org/D18775
llvm-svn: 265602
1. Add FullUnrollMaxCount option that works like MaxCount, but also limits
the unroll count for fully unrolled loops. So if a loop has an iteration
count over this, it won't fully unroll.
2. Add CLI options for MaxCount and the new option, so they can be tested
(plus a test).
3. Make partial unrolling obey MaxCount.
An example use-case (the out-of-tree one this is originally designed for) is
that a target’s TTI can analyze a loop and decide on a max unroll count separate
from the size threshold, e.g. based on register pressure, then constrain
LoopUnroll to not exceed that, regardless of the size of the unrolled loop.
llvm-svn: 265562
Summary:
When the backedge taken condition is computed from an icmp, SCEV can
deduce the backedge taken count only if one of the sides of the icmp
is an AddRecExpr. However, due to sign/zero extensions, we sometimes
end up with something that is not an AddRecExpr.
However, we can use SCEV predicates to produce a 'guarded' expression.
This change adds a method to SCEV to get this expression, and the
SCEV predicate associated with it.
In HowManyGreaterThans and HowManyLessThans we will now add a SCEV
predicate associated with the guarded backedge taken count when the
analyzed SCEV expression is not an AddRecExpr. Note that we only do
this as an alternative to returning a 'CouldNotCompute'.
We use this new feature in Loop Access Analysis and LoopVectorize to analyze
and transform more loops.
Reviewers: anemet, mzolotukhin, hfinkel, sanjoy
Subscribers: flyingforyou, mcrosier, atrick, mssimpso, sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17201
llvm-svn: 265535
r265273 added Mapper::mapBlockAddress, which delays mapping a
blockaddress value until the function has a body. The condition was
backwards, and should be checking Function::empty instead of
GlobalValue::isDeclaration.
llvm-svn: 265508
Don't emit a gc.result for a statepoint lowered from
@llvm.experimental.deoptimize since the call into __llvm_deoptimize is
effectively noreturn. Instead follow the corresponding gc.statepoint
with an "unreachable".
llvm-svn: 265485
Add a common parent class for ConstantArray, ConstantVector, and
ConstantStruct called ConstantAggregate. These are the aggregate
subclasses of Constant that take operands.
This is mainly a cleanup, adding common `isa` target and removing
duplicated code. However, it also simplifies caching which constants
point transitively at `GlobalValue` (a possible future direction).
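A hedged sketch of the cleanup this enables (the helper is illustrative; `ConstantAggregate` is the class added here):
```
#include "llvm/IR/Constants.h"

// One isa<> against the common parent instead of three separate checks for
// the operand-bearing aggregate constants.
static bool isAggregateWithOperands(const llvm::Constant *C) {
  // Previously: isa<ConstantArray>(C) || isa<ConstantStruct>(C) ||
  //             isa<ConstantVector>(C)
  return llvm::isa<llvm::ConstantAggregate>(C);
}
```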
llvm-svn: 265466
This commit completely rewrites Mapper::mapMetadata (the implementation
of llvm::MapMetadata) using an iterative algorithm. The guts of the new
algorithm are in MDNodeMapper::map, the entry function in a new class.
Previously, Mapper::mapMetadata performed a recursive exploration of the
graph with eager "just in case there's a reason" malloc traffic.
The new algorithm has these benefits:
- New nodes and temporaries are not created eagerly.
- Uniquing cycles are not duplicated (see new unit test).
- No recursion.
Given a node to map, it does this:
1. Use a worklist to perform a post-order traversal of the transitively
referenced unmapped nodes.
2. Track which nodes will change operands, and which will have new
addresses in the mapped scheme. Propagate the changes through the
POT until fixed point, to pick up uniquing cycles that need to
change.
3. Map all the distinct nodes without touching their operands. If
RF_MoveDistinctMetadata, they get mapped to themselves; otherwise,
they get mapped to clones.
4. Map the uniqued nodes (bottom-up), lazily creating temporaries for
forward references as needed.
5. Remap the operands of the distinct nodes.
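Below is a minimal, self-contained sketch of the worklist-based post-order traversal from step 1 (illustrative only; the node type and names are invented, this is not the MDNodeMapper code):
```
#include <cstddef>
#include <utility>
#include <vector>

struct Node {
  std::vector<Node *> Ops;
  bool Seen = false;
};

// Each worklist entry remembers how many operands it has already expanded,
// so a node is emitted only after all of its operands.
std::vector<Node *> postOrder(Node *Root) {
  std::vector<Node *> POT;
  std::vector<std::pair<Node *, std::size_t>> Worklist;
  Root->Seen = true;
  Worklist.push_back({Root, 0});
  while (!Worklist.empty()) {
    Node *N = Worklist.back().first;
    std::size_t &NextOp = Worklist.back().second;
    if (NextOp < N->Ops.size()) {
      Node *Op = N->Ops[NextOp++];
      if (!Op->Seen) {
        Op->Seen = true;
        Worklist.push_back({Op, 0}); // descend before emitting N
      }
    } else {
      POT.push_back(N); // all operands done; emit N
      Worklist.pop_back();
    }
  }
  return POT;
}
```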
Mehdi helped me out by profiling this with -flto=thin. On his workload
(importing/etc. for opt.cpp), MapMetadata sped up by 15%, contributed
about 50% less to persistent memory, and made about 100x fewer calls to
malloc. The speedup is less than I'd hoped. The profile mainly blames
DenseMap lookups; perhaps there's a way to reduce them (e.g., by
disallowing remapping of MDString).
It would be nice to break the strange remaining recursion on the Value
side: MapValue => materializeInitFor => RemapInstruction => MapValue. I
think we could do this by having materializeInitFor return a worklist of
things to be remapped.
llvm-svn: 265456
Direct callees that are cast to other function prototypes
show up in the Call/Invoke instructions as ConstantExprs.
Currently llvm::CallSite's getCalledFunction() fails
to return the callees in such expressions as direct calls.
Value profiling should avoid instrumenting such cases. Mostly NFC.
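A hedged source-level sketch of how such a call arises (undefined behavior if actually executed; shown only for the shape of the IR it produces):
```
void callee(int);
typedef void (*TakesLong)(long);

void caller() {
  // The callee is statically known, but in IR it appears behind a
  // ConstantExpr bitcast, so getCalledFunction() returns null.
  ((TakesLong)&callee)(42L);
}
```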
llvm-svn: 265330
Summary:
To aid in debugging, dump out the correlation between value names and
GUID for each source module when it is materialized. This will make it
easier to comprehend the earlier summary-based function importing debug
trace which only has access to and prints the GUIDs.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D18556
llvm-svn: 265326
Remove a few old FIXMEs from the original commit of the Metadata/Value
split in r223802. These are commented out assertions to the effect that
calls between mapValue and mapMetadata never return nullptr.
(The only behaviour change is that Mapper::mapSimpleMetadata memoizes
the nullptr return.)
When I originally rewrote the mapping code, I thought we could be
stricter in the new metadata hierarchy and never return nullptr when
RF_NullMapMissingGlobalValues was off. It's still not entirely clear to
me why these assertions failed (a few months ago, I had a theory that I
forgot to write down, but that's helping no one).
Understood or not, I no longer see how these commented-out assertions
would be useful. I'm relegating them to the annals of source control
before making significant changes to ValueMapper.cpp.
llvm-svn: 265282
This adds an assertion to maintain the property from r265273. When
Mapper::mapSimpleMetadata calls Mapper::mapValue, it should not find its
way back to mapMetadataImpl. This guarantees that mapSimpleMetadata is
not involved in any recursion.
Since Mapper::mapValue calls out to arbitrary materializers, we need to
save a bit on the ValueMap to make this assertion effective.
There should be no functionality change here. This co-recursion should
already have been impossible.
llvm-svn: 265276
The main change is to delay materializing GlobalValue initializers from
Mapper::mapValue until Mapper::~Mapper. This effectively removes all
recursion from mapSimpleMetadata, as promised in r265270.
mapSimpleMetadata calls mapValue for ConstantAsMetadata nodes to
find the mapped constant, and now it shouldn't be possible for mapValue
to indirectly re-invoke mapMetadata. I'll add an assertion to that
effect in a follow-up (separated so that the assertion can easily be
reverted independently, if it comes to that).
This a step toward a broader goal: converting Mapper::mapMetadataImpl
from a recursive to an iterative algorithm.
When a BlockAddress points at a BasicBlock inside an unmaterialized
function body, we need to delay it until the function body is
materialized in Mapper::~Mapper. This commit creates a temporary
BasicBlock and returns a new BlockAddress, then RAUWs the BasicBlock
once it is known. This situation should be extremely rare since a
BlockAddress is usually used from within the function it's referencing
(and BlockAddress itself is rare).
There should be no observable functionality change.
llvm-svn: 265273
Split out a helper for mapping metadata without operands. This is any
metadata that is not an MDNode, and any MDNode where the answer is known
without looking at operands.
Through some weird twists, this function is co-recursive:
mapSimpleMetadata
=> MapValue
=> materializeInitFor
=> linkFunctionBody
=> RemapInstructions
=> MapMetadata
=> mapSimpleMetadata
I plan to break the recursion in a follow-up.
llvm-svn: 265270
We already skip optimizations if the return value
of printf() is used, so CI->use_empty() is always
true.
Differential Revision: http://reviews.llvm.org/D18656
llvm-svn: 265253
Instead of checking on the fly during MapMetadata whether a subprogram is
needed, seed the ValueMap with `nullptr` up-front.
There is a small hypothetical functionality change. Previously, calling
MapMetadataOp on a node whose "scope:" chain led to an unneeded
subprogram would return nullptr. However, if that were ever called,
then the subprogram would be needed; a situation that the IRMover is
supposed to avoid a priori!
Besides cleaning up the code a little, this restores a nice property:
MapMetadataOp returns the same as MapMetadata.
llvm-svn: 265229
Support seeding a ValueMap with nullptr for Metadata entries, a
situation I didn't consider in the Metadata/Value split.
I added a ValueMapper::getMappedMD accessor that returns an
Optional<Metadata*> with the mapped (possibly null) metadata. IRMover
needs to use this to avoid modifying the map when it's checking for
unneeded subprograms. I updated a call from bugpoint since I find the
new code clearer.
llvm-svn: 265228
Summary: This should make the code more readable, especially all the map declarations.
Reviewers: tejohnson
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18721
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 265215
Use a helper function to find all the direct call sites in a function.
Also split the code into a separate file, as this will be used by the
indirect-call-promotion transformation.
Differential Revision: http://reviews.llvm.org/D18704
llvm-svn: 265199
A catchswitch cannot be preceded by another instruction in the same
basic block (other than a PHI node).
Instead, insert the extract element right after the materialization of
the vectorized value. This isn't optimal but is a reasonable compromise
given the constraints of WinEH.
This fixes PR27163.
llvm-svn: 265157
Refactor the code that gets and creates PGOFuncName metadata so that it can be
used in clang's value profile annotation.
Differential Revision: http://reviews.llvm.org/D18623
llvm-svn: 265149
This is intended to be used for ThinLTO incremental build.
Differential Revision: http://reviews.llvm.org/D18213
This is a recommit of r265095 after fixing the Windows issues.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 265111
This reverts commit r265096, r265095, and r265094.
Windows build is broken, and the validation does not pass.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 265102
They're not necessary (since the stack pointer is trivially restored on
return), and the way LLVM inserts the stackrestore calls breaks the
IR (we get a stackrestore between the deoptimize call and the return).
llvm-svn: 265101
They're not necessary (since the lifetime of the alloca is trivially
over due to the return), and the way LLVM inserts the lifetime.end
markers breaks the IR (we get a lifetime end marker between the
deoptimize call and the return).
llvm-svn: 265100
This is intended to be used for ThinLTO incremental build.
Differential Revision: http://reviews.llvm.org/D18213
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 265095
"blockaddress" can not apply to an external function. All
blockaddress constant uses must belong to the same module as the
definition of the target function.
llvm-svn: 265061
This patch simply mirrors the attributes we give to @llvm.nvvm.reflect
to the __nvvm_reflect libdevice call. This shaves about 30% of the code
in libdevice away because of CSE opportunities. It also helps us
figure out that libdevice implementations of transcendental functions
don't have side-effects.
llvm-svn: 265060
Summary:
As discussed on llvm-dev[1].
This change adds the basic boilerplate code around having this intrinsic
in LLVM:
- Changes in Intrinsics.td, and the IR Verifier
- A lowering pass to lower @llvm.experimental.guard to normal
control flow
- Inliner support
[1]: http://lists.llvm.org/pipermail/llvm-dev/2016-February/095523.html
Reviewers: reames, atrick, chandlerc, rnk, JosephTremoulet, echristo
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18527
llvm-svn: 264976
Commit r260791 contained an error in that it would introduce a cross-module
reference in the old module. It also introduced O(N^2) complexity in the
module cloner by requiring the entire module to be visited for each function.
Fix both of these problems by avoiding use of the CloneDebugInfoMetadata
function (which is only designed to do intra-module cloning) and cloning
function-attached metadata in the same way that we clone all other metadata.
Differential Revision: http://reviews.llvm.org/D18583
llvm-svn: 264935
Widening a PHI requires us to insert a trunc.
The logical place for this trunc is in the same BB as the PHI.
This is not possible if the BB is terminated by a catchswitch.
This fixes PR27133.
llvm-svn: 264926
Summary:
This gives callers flexibility to pass lambdas with captures, which lets
callers avoid the C-style void*-ptr closure style. (Currently, callers
in clang store state in the PassManagerBuilderBase arg.)
No functional change, and the new API is backwards-compatible.
Reviewers: chandlerc
Subscribers: joker.eph, cfe-commits
Differential Revision: http://reviews.llvm.org/D18613
llvm-svn: 264918
This change prevents the loop vectorizer from vectorizing when all of the vector
types it generates will be scalarized. I've run into this problem on the PPC's QPX
vector ISA, which only holds floating-point vector types. The loop vectorizer
will, however, happily vectorize loops with purely integer computation. Here's
an example:
LV: The Smallest and Widest types: 32 / 32 bits.
LV: The Widest register is: 256 bits.
LV: Found an estimated cost of 0 for VF 1 For instruction: %indvars.iv25 = phi i64 [ 0, %entry ], [ %indvars.iv.next26, %for.body ]
LV: Found an estimated cost of 0 for VF 1 For instruction: %arrayidx = getelementptr inbounds [1600 x i32], [1600 x i32]* %a, i64 0, i64 %indvars.iv25
LV: Found an estimated cost of 0 for VF 1 For instruction: %2 = trunc i64 %indvars.iv25 to i32
LV: Found an estimated cost of 1 for VF 1 For instruction: store i32 %2, i32* %arrayidx, align 4
LV: Found an estimated cost of 1 for VF 1 For instruction: %indvars.iv.next26 = add nuw nsw i64 %indvars.iv25, 1
LV: Found an estimated cost of 1 for VF 1 For instruction: %exitcond27 = icmp eq i64 %indvars.iv.next26, 1600
LV: Found an estimated cost of 0 for VF 1 For instruction: br i1 %exitcond27, label %for.cond.cleanup, label %for.body
LV: Scalar loop costs: 3.
LV: Found an estimated cost of 0 for VF 2 For instruction: %indvars.iv25 = phi i64 [ 0, %entry ], [ %indvars.iv.next26, %for.body ]
LV: Found an estimated cost of 0 for VF 2 For instruction: %arrayidx = getelementptr inbounds [1600 x i32], [1600 x i32]* %a, i64 0, i64 %indvars.iv25
LV: Found an estimated cost of 0 for VF 2 For instruction: %2 = trunc i64 %indvars.iv25 to i32
LV: Found an estimated cost of 2 for VF 2 For instruction: store i32 %2, i32* %arrayidx, align 4
LV: Found an estimated cost of 1 for VF 2 For instruction: %indvars.iv.next26 = add nuw nsw i64 %indvars.iv25, 1
LV: Found an estimated cost of 1 for VF 2 For instruction: %exitcond27 = icmp eq i64 %indvars.iv.next26, 1600
LV: Found an estimated cost of 0 for VF 2 For instruction: br i1 %exitcond27, label %for.cond.cleanup, label %for.body
LV: Vector loop of width 2 costs: 2.
LV: Found an estimated cost of 0 for VF 4 For instruction: %indvars.iv25 = phi i64 [ 0, %entry ], [ %indvars.iv.next26, %for.body ]
LV: Found an estimated cost of 0 for VF 4 For instruction: %arrayidx = getelementptr inbounds [1600 x i32], [1600 x i32]* %a, i64 0, i64 %indvars.iv25
LV: Found an estimated cost of 0 for VF 4 For instruction: %2 = trunc i64 %indvars.iv25 to i32
LV: Found an estimated cost of 4 for VF 4 For instruction: store i32 %2, i32* %arrayidx, align 4
LV: Found an estimated cost of 1 for VF 4 For instruction: %indvars.iv.next26 = add nuw nsw i64 %indvars.iv25, 1
LV: Found an estimated cost of 1 for VF 4 For instruction: %exitcond27 = icmp eq i64 %indvars.iv.next26, 1600
LV: Found an estimated cost of 0 for VF 4 For instruction: br i1 %exitcond27, label %for.cond.cleanup, label %for.body
LV: Vector loop of width 4 costs: 1.
...
LV: Selecting VF: 8.
LV: The target has 32 registers
LV(REG): Calculating max register usage:
LV(REG): At #0 Interval # 0
LV(REG): At #1 Interval # 1
LV(REG): At #2 Interval # 2
LV(REG): At #4 Interval # 1
LV(REG): At #5 Interval # 1
LV(REG): VF = 8
The problem is that the cost model here is not wrong, exactly. Since all of
these operations are scalarized, their cost (aside from the uniform ones) is
indeed VF*(scalar cost), just as the model suggests. In fact, the larger the VF
picked, the lower the relative overhead from the loop itself (and the
induction-variable update and check), and so in a sense, picking the largest VF
here is the right thing to do.
The problem is that vectorizing like this, where all of the vectors will be
scalarized in the backend, isn't really vectorizing, but rather interleaving.
By itself, this would be okay, but then the vectorizer itself also interleaves,
and that's where the problem manifests itself. There aren't actually enough
scalar registers to support the normal interleave factor multiplied by a factor
of VF (8 in this example). In other words, the problem with this is that our
register-pressure heuristic does not account for scalarization.
While we might want to improve our register-pressure heuristic, I don't think
this is the right motivating case for that work. Here we have a more-basic
problem: The job of the vectorizer is to vectorize things (interleaving aside),
and if the IR it generates won't generate any actual vector code, then
something is wrong. Thus, if every type looks like it will be scalarized (i.e.
will be split into VF or more parts), then don't consider that VF.
This is not a problem specific to PPC/QPX, however. The problem comes up under
SSE on x86 too, and as such, this change fixes PR26837 too. I've added Sanjay's
reduced test case from PR26837 to this commit.
Differential Revision: http://reviews.llvm.org/D18537
llvm-svn: 264904
PGOFuncNames are used as the key to retrieve the Function definition from the
MD5 stored in the profile. For internal linkage function, we prefix the source
file name to the PGOFuncNames. LTO's internalization privatizes many global linkage
symbols. This happens after value profile annotation, but those internal
linkage functions should not have a source prefix. To differentiate
compiler-generated internal symbols from original ones, PGOFuncName metadata is
created and attached to the original internal symbols in the value profile
annotation step. If a symbol does not have the metadata, its original linkage
must be non-internal.
Also add a new map that maps PGOFuncName's MD5 value to the function definition.
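A hedged sketch of the naming scheme (the exact separator/format is an assumption for illustration, not quoted from the patch):
```
#include <string>

// Internal-linkage functions are keyed by a source-file-prefixed name so
// identically named statics in different TUs don't collide; external symbols
// keep their plain name.
std::string pgoNameFor(const std::string &SourceFile, const std::string &Func,
                       bool HasInternalLinkage) {
  return HasInternalLinkage ? SourceFile + ":" + Func : Func;
}
```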
Differential Revision: http://reviews.llvm.org/D17895
llvm-svn: 264902
These checks are redundant and can be removed
Reviewers: hans
Subscribers: llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D18564
llvm-svn: 264872
Prior to this patch, the MemorySSA caching visitor would cache all
calls that it visited. When paired with phi optimization, this can be
problematic. Consider:
define void @foo() {
  ; 1 = MemoryDef(liveOnEntry)
  call void @clobberFunction()
  br i1 undef, label %if.end, label %if.then

if.then:
  ; MemoryUse(??)
  call void @readOnlyFunction()
  ; 2 = MemoryDef(1)
  call void @clobberFunction()
  br label %if.end

if.end:
  ; 3 = MemoryPhi(...)
  ; MemoryUse(?)
  call void @readOnlyFunction()
  ret void
}
When optimizing MemoryUse(?), we visit defs 1 and 2, so we note to
cache them later. We ultimately end up not being able to optimize
past the Phi, so we set MemoryUse(?) to point to the Phi. We then
cache the clobbering call for def 1 to be the Phi.
This commit changes this behavior so that we wipe out any calls
added to VisitedCalls while visiting the defs of a phi we couldn't
optimize.
Aside: With this patch, we now can bootstrap clang/LLVM without a
single MemorySSA verifier failure. Woohoo. :)
llvm-svn: 264820
This patch teaches the caching MemorySSA walker a few things:
1. Not to walk Phis we've walked before. It seems that we tried to do
this before, but it didn't work so well in cases like:
define void @foo() {
  %1 = alloca i8
  %2 = alloca i8
  br label %begin

begin:
  ; 3 = MemoryPhi({%0,liveOnEntry},{%end,2})
  ; 1 = MemoryDef(3)
  store i8 0, i8* %2
  br label %end

end:
  ; MemoryUse(?)
  load i8, i8* %1
  ; 2 = MemoryDef(1)
  store i8 0, i8* %2
  br label %begin
}
Because we wouldn't put Phis in Q.Visited until we tried to visit them.
So, when trying to optimize MemoryUse(?):
- We would visit 3 above
- ...Which would make us put {%0,liveOnEntry} in Q.Visited
- ...Which would make us visit {%0,liveOnEntry}
- ...Which would make us put {%end,2} in Q.Visited
- ...Which would make us visit {%end,2}
- ...Which would make us visit 3
- ...Which would realize we've already visited everything in 3
- ...Which would make us conservatively return 3.
In the added test-case, (@looped_visitedonlyonce) this behavior would
cause us to give incorrect results. Specifically, we'd visit 4 twice
in the same query, but on the second visit, we'd skip while.cond because
it had been visited, visit if.then/if.then2, and cache "1" as the
clobbering def on the way back.
2. If we try to walk the defs of a {Phi,MemLoc} and see it has been
visited before, just hand back the Phi we're trying to optimize.
I promise this isn't as terrible as it seems. :)
We now insert {Phi,MemLoc} pairs just before walking the Phi's upward
defs. So, we check the cache for the {Phi,MemLoc} pair before checking
if we've already walked the Phi.
The {Phi,MemLoc} pair is (almost?) always guaranteed to have a cache
entry if we've already fully walked it, because we cache as we go.
So, if the {Phi,MemLoc} pair isn't in cache, either:
(a) we must be in the process of visiting it (in which case, we can't
give a better answer in a cache-as-we-go DFS walker)
(b) we visited it, but didn't cache it on the way back (...which seems
to require `ModifyingAccess` to not dominate `StartingAccess`,
so I'm 99% sure that would be an error. If it's not an error, I
haven't been able to get it to happen locally, so I suspect it's
rare.)
- - - - -
As a consequence of this change, we no longer skip upward defs of phis,
so we can kill the `VisitedOnlyOne` check. This gives us better accuracy
than we had before, at the cost of potentially doing a bit more work
when we have a loop.
llvm-svn: 264814
This is effectively NFC, minus the renaming of the options
(-cyclone-prefetch-distance -> -prefetch-distance).
The change was requested by Tim in D17943.
llvm-svn: 264806
We have known races on profile counters, which can be reproduced by enabling
-fsanitize=thread and -fprofile-instr-generate simultaneously on a
multi-threaded program. This patch avoids reporting those races by not
instrumenting the reads and writes coming from the instruction profiler.
llvm-svn: 264805
During ADCE, track which debug info scopes still have live references
from the code, and delete debug info intrinsics for the dead ones.
These intrinsics describe the locations of variables (in registers or
stack slots). If there's no code left corresponding to a variable's
scope, then there's no way to reference the variable in the debugger and
it doesn't matter what its value is.
I add a DEBUG printout when the described location is in an SSA register,
in case it helps someone trying to track down why locations get lost.
However, we still delete these; the scope itself isn't attached to any
real code, so the ship has already sailed.
llvm-svn: 264800
Since we have moved to a model where functions are imported in bulk from
each source module after making summary-based importing decisions, there
is no longer a need to link metadata as a postpass, and all users have
been removed.
This essentially reverts r255909 and follow-on fixes.
llvm-svn: 264763
When eliminating or merging almost empty basic blocks, the existence of non-trivial PHI nodes
is currently used to recognize potential loops of which the block is the header and keep the block.
However, the current algorithm fails if the loop's exit condition is evaluated only with volatile
values, and hence there are no PHI nodes in the header. Especially when such a loop is an outer loop
of a nested loop, the loop nest is collapsed into a single loop, which prevents later optimizations
from being applied (e.g., transforming nested loops into simplified forms and loop vectorization).
The patch augments the existing PHI-node-based check with a pre-test of whether the BB actually
belongs to the set of loop headers, and does not eliminate it if so.
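A source-level sketch of the problematic shape (illustrative only): the outer header's exit test reads only a volatile flag, so it carries no PHI nodes and, before this change, could be merged away, collapsing the nest.
```
void nest(volatile int *Flag, int *A, int N) {
  while (*Flag) {            // outer header: no PHIs, volatile-only exit test
    for (int I = 0; I < N; ++I)
      A[I] = 0;              // inner loop that later passes want to optimize
  }
}
```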
llvm-svn: 264697
Personality is copied as part of copyFunctionAttributes, but it is
invalid on a declaration. Remove the personality attribute it the
function body is not cloned.
Also add a verifier run over output modules in the llvm-split tool.
llvm-svn: 264667
On OS X El Capitan and iOS 9, the linker supports a new section
attribute, live_support, which allows dead stripping to remove dead
globals along with the ASAN metadata about them.
With this change __asan_global structures are emitted in a new
__DATA,__asan_globals section on Darwin.
Additionally, there is a __DATA,__asan_liveness section with the
live_support attribute. Each entry in this section is simply a tuple
that binds together the liveness of a global variable and its ASAN
metadata structure. Thus the metadata structure will be alive if and
only if the global it references is also alive.
Review: http://reviews.llvm.org/D16737
llvm-svn: 264645
When eliminating or merging almost empty basic blocks, the existence of non-trivial PHI nodes
is currently used to recognize potential loops of which the block is the header and keep the block.
However, the current algorithm fails if the loop's exit condition is evaluated only with volatile
values, and hence there are no PHI nodes in the header. Especially when such a loop is an outer loop
of a nested loop, the loop nest is collapsed into a single loop, which prevents later optimizations
from being applied (e.g., transforming nested loops into simplified forms and loop vectorization).
The patch augments the existing PHI-node-based check with a pre-test of whether the BB actually
belongs to the set of loop headers, and does not eliminate it if so.
llvm-svn: 264596
Don't set the function hotness attribute on the fly. This changes the CFG
branch probability of the caller function, which leads to inconsistent BB
ordering. This patch moves the attribute setting to a separated loop after
the counts in all functions are populated.
Fixes PR27024 - PGO instrumentation profile data is not reflected in correct
basic blocks.
Differential Revision: http://reviews.llvm.org/D18491
llvm-svn: 264594
Summary:
Add a statistic to count the number of imported functions. Also, add a
new -print-imports option to emit a trace of imported functions, that
works even for an NDEBUG build.
Note that emitOptimizationRemark does not work for the above printing as
it expects a Function object and DebugLoc, neither of which we have
with summary-based importing.
This is part 2 of D18487, the first part was committed separately as
r264536.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D18487
llvm-svn: 264537
With r264503, aliases are now being added to the GlobalsToImport set
even when their aliasees can't be imported due to their linkage type.
While the importing worked correctly (the aliases imported as
declarations) due to the logic in doImportAsDefinition, there is no
point in adding them to the GlobalsToImport set.
Additionally, with D18487 it was resulting in incorrectly printing a
message indicating that the alias was imported.
To avoid this, delay adding aliases to the GlobalsToImport set until
after the linkage type of the aliasee is checked.
This patch is part of D18487.
llvm-svn: 264536
Summary:
Now that the summary contains the full reference/call graph, we can
replace the existing function importer that loads and inspects the IR
to iteratively walk the call graph by a traversal based purely on the
summary information. Decouple the actual importing decision from any
IR manipulation.
Reviewers: tejohnson
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D18343
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 264503
This changes RS4GC to lower calls to ``@llvm.experimental.deoptimize``
to gc.statepoints wrapping ``__llvm_deoptimize``, and changes
``callsGCLeafFunction`` to recognize ``@llvm.experimental.deoptimize``
as a non GC leaf function.
I've had to hard code the ``"__llvm_deoptimize"`` name in
RewriteStatepointsForGC, since ``TargetLibraryInfo`` is available only
during codegen. This isn't without precedent in the codebase, so I'm
not overtly concerned.
llvm-svn: 264456
We try to hoist the insertion point as high as possible to encourage
sharing. However, we must be careful not to hoist into a catchswitch as
it is both an EHPad and a terminator.
llvm-svn: 264344
isDependenceDistanceOfOne asserts that the store and the load access
through the same type. This function is also used by
removeDependencesFromMultipleStores so we need to make sure we filter
out mismatching types before reaching this point.
Now we do this when the initial candidates are gathered.
This is a refinement of the fix made in r262267.
Fixes PR27048.
llvm-svn: 264313
There are a few bugs in the walker that this patch addresses.
Primarily:
- Caching can break when we have multiple BBs without phis
- We weren't optimizing some phis properly
- Because of how the DFS iterator works, there were times where we
wouldn't cache any results of our DFS
I left the test cases with FIXMEs in, because I'm not sure how much
effort it will take to get those to work (read: We'll probably
ultimately have to end up redoing the walker, or we'll have to come up
with some creative caching tricks), and more test coverage = better.
Differential Revision: http://reviews.llvm.org/D18065
llvm-svn: 264180
When you have multiple LCSSA (single-operand) PHIs that are converted
into two-operand PHIs due to versioning, only assert that the PHI
currently being converted has a single operand. I.e. we don't want to
check PHIs that were converted earlier in the loop.
Fixes PR27023.
Thanks to Karl-Johan Karlsson for the minimized testcase!
llvm-svn: 264081
This is a bug fix.
For rerolled loops the SE trip count remains unchanged, which leads subsequent passes to do incorrect work.
My patch just resets the SE info for the rerolled loop, forcing SE to re-evaluate it the next time it is requested.
I also added a verifier call in the existing test to be sure no invalid SE data remains. Without my fix this test would fail with -verify-scev.
Differential Revision: http://reviews.llvm.org/D18316
llvm-svn: 264051
Summary:
Without tree pruning clang has 2,667,552 points.
With only dominators pruning: 1,515,586.
With both dominators & predominators pruning: 1,340,534.
Resubmit of r262103.
Differential Revision: http://reviews.llvm.org/D18341
llvm-svn: 264003
If we have a BB with only MemoryDefs, live-in calculations will ignore
it. This means we get results like this:
define void @foo(i8* %p) {
  ; 1 = MemoryDef(liveOnEntry)
  store i8 0, i8* %p
  br i1 undef, label %if.then, label %if.end

if.then:
  ; 2 = MemoryDef(1)
  store i8 1, i8* %p
  br label %if.end

if.end:
  ; 3 = MemoryDef(1)
  store i8 2, i8* %p
  ret void
}
...When there should be a MemoryPhi in the `if.end` BB.
This patch fixes that behavior.
llvm-svn: 263991
Calls to sinpi/cospi can be replaced with sincospi to remove unnecessary
computations. However, we need to make sure that the calls are within
the same function!
This fixes PR26993.
llvm-svn: 263875
Summary:
ThinLTO is relying on linkInModule to import selected functions.
However, a lot of "magic" was hidden in linkInModule and the IRMover,
which would rename and promote global variables on the fly.
This is moving to an approach where the steps are decoupled and the
client is responsible for specifying the list of globals to import.
As a consequence some tests are changed because they were relying on
the previous behavior which was importing the definition of *every*
single global without control on the client side.
Now the burden is on the client to decide if a global has to be imported
or not.
Reviewers: tejohnson
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D18122
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 263863
The loop on IVOperand's incoming values assumes IVOperand to be an
induction variable on the loop over which `S Pred X` is invariant;
otherwise loop invariant incoming values to IVOperand are not guaranteed
to dominate the comparison.
This fixes PR26973.
llvm-svn: 263827
Summary:
These dependencies would be used in the future to reduce the number
of instrumented blocks (http://reviews.llvm.org/rL262103).
This is submitted as a separate CL because of previous problems with
ARM.
Subscribers: aemerson
Differential Revision: http://reviews.llvm.org/D18227
llvm-svn: 263797
Summary:
It can hurt performance to prefetch ahead too much. Be conservative for
now and don't prefetch ahead more than 3 iterations on Cyclone.
Reviewers: hfinkel
Subscribers: llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D17949
llvm-svn: 263772
Summary:
And use this TTI for Cyclone. As it was explained in the original RFC
(http://thread.gmane.org/gmane.comp.compilers.llvm.devel/92758), the HW
prefetcher works on strides of up to 2KB.
I am also adding tests for this and the previous change (D17943):
* Cyclone prefetching accesses with a large stride
* Cyclone not prefetching accesses with a small stride
* Generic Aarch64 subtarget not prefetching either
Reviewers: hfinkel
Subscribers: aemerson, rengolin, llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D17945
llvm-svn: 263771
Summary:
Use the new LoopVersioning facility (D16712) to add noalias metadata in
the vector loop if we versioned with memchecks. This can enable some
optimization opportunities further down the pipeline (see the included
test or the benchmark improvement quoted in D16712).
The test also covers the bug I had in the initial version in D16712.
The vectorizer did not previously use LoopVersioning. The reason is
that the vectorizer performs its transformations in a single shot. It
creates an empty single-block vector loop that it then populates with
the widened, if-converted instructions. Thus creating an intermediate
versioned scalar loop seems wasteful.
So this patch (rather than bringing in LoopVersioning fully) adds a
special interface to LoopVersioning to allow the vectorizer to add
no-alias annotation while still performing its own versioning.
As the vectorizer propagates metadata from the instructions in the
original loop to the vector instructions we also check the pointer in
the original instruction and see if LoopVersioning can add no-alias
metadata based on the issued memchecks.
Reviewers: hfinkel, nadav, mzolotukhin
Subscribers: mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17191
llvm-svn: 263744
Summary:
If we decide to version a loop to benefit a transformation, it makes
sense to record the now non-aliasing accesses in the newly versioned
loop. This allows non-aliasing information to be used by subsequent
passes.
One example is 456.hmmer in SPECint2006 where after loop distribution,
we vectorize one of the newly distributed loops. To vectorize we
version this loop to fully disambiguate may-aliasing accesses. If we
add the noalias markers, we can use the same information in a later DSE
pass to eliminate some dead stores which amounts to ~25% of the
instructions of this hot memory-pipeline-bound loop. The overall
performance improves by 18% on our ARM64.
The scoped noalias annotation is added in LoopVersioning. The patch
then enables this for loop distribution. A follow-on patch will enable
it for the vectorizer. Eventually this should be run by default when
versioning the loop but first I'd like to get some feedback whether my
understanding and application of scoped noalias metadata is correct.
Essentially my approach was to have a separate alias domain for each
versioning of the loop. For example, if we first version in loop
distribution and then in vectorization of the distributed loops, we have
a different set of memchecks for each versioning. By keeping the scopes
in different domains they can conveniently be defined independently
since different alias domains don't affect each other.
As written, I also have a separate domain for each loop. This is not
necessary and we could save some metadata here by using the same domain
across the different loops. I don't think it's a big deal either way.
Probably the best is to review the tests first to see if I mapped this
problem correctly to scoped noalias markers. I have plenty of comments
in the tests.
Note that the interface is prepared for the vectorizer which needs the
annotateInstWithNoAlias API. The vectorizer does not use LoopVersioning
so we need a way to pass in the versioned instructions. This is also
why the maps have to become part of the object state.
Also currently, we only have an AA-aware DSE after the vectorizer if we
also run the LTO pipeline. Depending how widely this triggers we may
want to schedule a DSE toward the end of the regular pass pipeline.
Reviewers: hfinkel, nadav, ashutosh.nema
Subscribers: mssimpso, aemerson, llvm-commits, mcrosier
Differential Revision: http://reviews.llvm.org/D16712
llvm-svn: 263743
This is similar to D18133 where we allowed profile weights on select instructions.
This extends that change to also allow the 'unpredictable' attribute of branches to apply to selects.
A test to check that 'unpredictable' metadata is preserved when cloning instructions was checked in at:
http://reviews.llvm.org/rL263648
Differential Revision: http://reviews.llvm.org/D18220
llvm-svn: 263716
This splits out the logic that maps the `"statepoint-id"` attribute into
the actual statepoint ID, and the `"statepoint-num-patch-bytes"`
attribute into the number of patchable bytes the statepoint is lowered
into. The new home of this logic is in IR/Statepoint.cpp, and this
refactoring will support similar functionality when lowering calls with
deopt operand bundles in the future.
llvm-svn: 263685
Summary:
Fix LSRInstance::HoistInsertPosition() to check the original insert
position block first for a canonical insertion point that is dominated
by all inputs. This leads to SCEV being able to reuse more instructions
since it currently tracks the instructions it creates for reuse by
keeping a table of <Value, insert point> pairs.
Reviewers: atrick
Subscribers: mcrosier, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D18001
llvm-svn: 263644
The latent bug that LLE exposed in the LoopVectorizer was resolved
(PR26952).
The pass can be disabled with -mllvm -enable-loop-load-elim=0
llvm-svn: 263595
There is something strange going on with debug info (.eh_frame_hdr)
disappearing when msan.module_ctor is placed in comdat sections.
Moving this functionality under flag, disabled by default.
llvm-svn: 263579
This was a latent bug that got exposed by the change to add LoopSimplify
as a dependence to LoopLoadElimination. Since LoopInfo was corrupted
after LV, LoopSimplify mis-compiled nbench in the test-suite (more
details in the PR).
The problem was that when we create the blocks for predicated stores we
didn't add those to any loops.
The original testcase for store predication provides coverage for this
assuming we verify LI on the way out of LV.
Fixes PR26952.
llvm-svn: 263565
Since the static getGlobalIdentifier and getGUID methods are now called
for global values other than functions, reflect that by moving these
methods to the GlobalValue class.
llvm-svn: 263524
(Resubmitting after fixing missing file issue)
With the changes in r263275, there are now more than just functions in
the summary. Completed the renaming of data structures (started in
r263275) to reflect the wider scope. In particular, changed the
FunctionIndex* data structures to ModuleIndex*, and renamed related
variables and comments. Also renamed the files to reflect the changes.
A companion clang patch will immediately succeed this patch to reflect
this renaming.
llvm-svn: 263513
Summary:
Specifically, when we perform runtime loop unrolling of a loop that
contains a convergent op, we can only unroll k times, where k divides
the loop trip multiple.
Without this change, we'll happily unroll e.g. the following loop
  for (int i = 0; i < N; ++i) {
    if (i == 0) convergent_op();
    foo();
  }
into
  int i = 0;
  if (N % 2 == 1) {
    convergent_op();
    foo();
    ++i;
  }
  for (; i < N - 1; i += 2) {
    if (i == 0) convergent_op();
    foo();
    foo();
  }.
This is unsafe, because we've just added a control-flow dependency to
the convergent op in the prelude.
In general, runtime unrolling loops that contain convergent ops is safe
only if we don't have to emit a prelude, which occurs when the unroll count
divides the trip multiple.
Reviewers: resistor
Subscribers: llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D17526
llvm-svn: 263509
Summary: This now tries to reorder instructions in order to help create the optimizable pattern.
Reviewers: craig.topper, spatel, dexonsmith, Prazek, chandlerc, joker.eph, majnemer
Differential Revision: http://reviews.llvm.org/D16523
llvm-svn: 263503
With the changes in r263275, there are now more than just functions in
the summary. Completed the renaming of data structures (started in
r263275) to reflect the wider scope. In particular, changed the
FunctionIndex* data structures to ModuleIndex*, and renamed related
variables and comments. Also renamed the files to reflect the changes.
A companion clang patch will immediately succeed this patch to reflect
this renaming.
llvm-svn: 263490
As noted in:
https://llvm.org/bugs/show_bug.cgi?id=26636
This doesn't accomplish anything on its own. It's the first step towards preserving
and using branch weights with selects.
The next step would be to make sure we're propagating the info in all of the other
places where we create selects (SimplifyCFG, InstCombine, etc). I don't think there's
an easy fix to make this happen; we have to look at each transform individually to
determine how to correctly propagate the weights.
Along with that step, we need to then use the weights when making subsequent transform
decisions such as discussed in http://reviews.llvm.org/D16836.
The inliner test is independent but closely related. It verifies that metadata is
preserved when both branches and selects are cloned.
Differential Revision: http://reviews.llvm.org/D18133
llvm-svn: 263482
Summary:
Previously we had a notion of convergent functions but not of convergent
calls. This is insufficient to correctly analyze calls where the target
is unknown, e.g. indirect calls.
Now a call is convergent if it targets a known-convergent function, or
if it's explicitly marked as convergent. As usual, we can remove
convergent where we can prove that no convergent operations are
performed in the call.
Originally landed as r261544, then reverted in r261544 for (incidental)
build breakage. Re-landed here with no changes.
Reviewers: chandlerc, jingyue
Subscribers: llvm-commits, tra, jhen, hfinkel
Differential Revision: http://reviews.llvm.org/D17739
llvm-svn: 263481
The motivating example is this
for (j = n; j > 1; j = i) {
  i = j / 2;
}
The signed division can safely be changed to an unsigned division (j is known
to be larger than 1 from the loop guard) and later turned into a single shift
without considering the sign bit.
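A hedged source-level sketch of the resulting strength reduction (the function is illustrative, not from the patch):
```
int halveWhileAboveOne(int N) {
  int Steps = 0;
  for (int J = N; J > 1; ++Steps) {
    // before: J = J / 2 (sdiv; sign-aware lowering)
    // after:  the guard J > 1 lets it behave as udiv, i.e. a logical shift
    J = (int)((unsigned)J >> 1);
  }
  return Steps;
}
```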
llvm-svn: 263406
This follows up on the related AVX instruction transforms, but this
one is too strange to do anything more with. Intel's behavioral
description of this instruction in its Software Developer's Manual
is tragi-comic.
llvm-svn: 263340
commit ae14bf6488e8441f0f6d74f00455555f6f3943ac
Author: Mehdi Amini <mehdi.amini@apple.com>
Date: Fri Mar 11 17:15:50 2016 +0000
Remove PreserveNames template parameter from IRBuilder
Summary:
Following r263086, we are now relying on a flag on the Context to
discard Value names in release builds.
Reviewers: chandlerc
Subscribers: mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D18023
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263258 91177308-0d34-0410-b5e6-96231b3b80d8
until we can figure out what to do about clang and Release build testing.
This reverts commit 263258.
llvm-svn: 263321
Summary:
This intrinsic, together with deoptimization operand bundles, allow
frontends to express transfer of control and frame-local state from
one (typically more specialized, hence faster) version of a function
into another (typically more generic, hence slower) version.
In languages with a fully integrated managed runtime this intrinsic can
be used to implement "uncommon trap" like functionality. In unmanaged
languages like C and C++, this intrinsic can be used to represent the
slow paths of specialized functions.
Note: this change does not address how `@llvm.experimental.deoptimize`
is lowered. That will be done in a later change.
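A rough sketch of how a frontend might emit the intrinsic on a slow path; the intrinsic name comes from this change, while the guard condition, the operands, and the contents of the "deopt" bundle are invented here:
declare i32 @llvm.experimental.deoptimize.i32(...)

define i32 @specialized(i32 %x) {
entry:
  %fast = icmp slt i32 %x, 100
  br i1 %fast, label %ok, label %slow

ok:
  %r = add i32 %x, 1
  ret i32 %r

slow:
  ; Hand control and frame-local state to the generic version; the result
  ; is returned directly.
  %d = call i32 (...) @llvm.experimental.deoptimize.i32(i32 %x) [ "deopt"(i32 %x) ]
  ret i32 %d
}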
Reviewers: chandlerc, rnk, atrick, reames
Subscribers: llvm-commits, kmod, mjacob, maksfb, mcrosier, JosephTremoulet
Differential Revision: http://reviews.llvm.org/D17732
llvm-svn: 263281
Value profile instrumentation treats inline asm calls like they are
indirect calls. This causes problems when the 'Callee' is passed to a
ptrtoint cast -- the verifier rightly claims that this is bogus and
crashes opt.
llvm-svn: 263278
Summary:
This patch adds support for including a full reference graph including
call graph edges and other GV references in the summary.
The reference graph edges can be used to make importing decisions
without materializing any source modules, can be used in the plugin
to make file staging decisions for distributed build systems, and is
expected to have other uses.
The call graph edges are recorded in each function summary in the
bitcode via a list of <CalleeValueId, StaticCount> pairs when no PGO
data exists, or <CalleeValueId, StaticCount, ProfileCount> triples when
there is PGO, where the ValueId can be mapped to the function GUID via
the ValueSymbolTable. In the function index in memory, the call graph
edges reference the target via the CalleeGUID instead of the
CalleeValueId.
The reference graph edges are recorded in each summary record with a
list of referenced value IDs, which can be mapped to value GUID via the
ValueSymbolTable.
Additionally, a new summary record type is added to record references
from global variable initializers. A number of bitcode records and data
structures have been renamed to reflect the newly expanded scope of the
summary beyond functions. More cleanup will follow.
Reviewers: joker.eph, davidxl
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D17212
llvm-svn: 263275
Summary:
Following r263086, we are now relying on a flag on the Context to
discard Value names in release builds.
Reviewers: chandlerc
Subscribers: mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D18023
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 263258
Summary:
Following r263086, we are replacing this by a runtime check.
More cleanup will follow on the IRBuilder itself, but I submitted
this patch separately as SROA has a fancy "prefixInserter" class
that needs extra-love.
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18022
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 263256
member type.
Because of how this type is used by the ValueTable, it cannot actually
have hidden visibility. GCC actually nicely warns about this but Clang
just silently ... I don't even know. =/ We should do a better job either
way though.
This should resolve a bunch of the GCC warnings about visibility that
the port of GVN triggered and make the visibility story a bit more
correct.
llvm-svn: 263250
much to my horror, so use variables to fix it in place.
This terrifies me. Both basic-aa and memdep will provide more precise
information when the domtree and/or the loop info is available. Because
of this, if your pass (like GVN) requires domtree, and then queries
memdep or basic-aa, it will get more precise results. If it does this in
the other order, it gets less precise results.
All of the ideas I have for fixing this are, essentially, terrible. Here
I've just caused us to stop having unspecified behavior as different
implementations evaluate the order of these arguments differently. I'm
actually rather glad that they do, or the fragility of memdep and
basic-aa would have gone on unnoticed. I've left comments so we don't
immediately break this again. This should fix bots whose host compilers
evaluate the order of arguments differently from Clang.
llvm-svn: 263231
This was originally a pointer to support pass managers which didn't use
AnalysisManagers. However, that doesn't realistically come up much and
the complexity of supporting it doesn't really make sense.
In fact, *many* parts of the pass manager were just assuming the pointer
was never null already. This at least makes it much more explicit and
clear.
llvm-svn: 263219
Since the names are used in a loop this does more work in debug builds. In
release builds value names are generally discarded so we don't have to do
the concatenation at all. It's also simpler code, no functional change
intended.
llvm-svn: 263215
tests to run GVN in both modes.
This is mostly the boring refactoring just like SROA and other complex
transformation passes. There is some trickiness in that GVN's
ValueNumber class requires hand holding to get to compile cleanly. I'm
open to suggestions about a better pattern there, but I tried several
before settling on this. I was trying to balance my desire to sink as
much implementation detail into the source file as possible without
introducing overly many layers of abstraction.
Much like with SROA, the design of this system is made somewhat more
cumbersome by the need to support both pass managers without duplicating
the significant state and logic of the pass. The same compromise is
struck here.
I've also left a FIXME in a doxygen comment as the GVN pass seems to
have pretty woeful documentation within it. I'd like to submit this with
the FIXME and let those more deeply familiar backfill the information
here now that we have a nice place in an interface to put that kind of
documentation.
Differential Revision: http://reviews.llvm.org/D18019
llvm-svn: 263208
llvm::getDISubprogram walks the instructions in a function, looking for one in the scope of the current function, so that it can find the !dbg entry for the subprogram itself.
Now that !dbg is attached to functions, this should not be necessary. This patch changes all uses to just query the subprogram directly on the function.
Ideally this should be NFC, but in reality it's possible that a function:
has no !dbg (in which case there's likely a bug somewhere in an opt pass), or
that none of the instructions had a scope referencing the function, so we used to not find the !dbg on the function but now we will
Reviewed by Duncan Exon Smith.
Differential Revision: http://reviews.llvm.org/D18074
llvm-svn: 263184
The code assumed that we always had a preheader without making the pass
dependent on LoopSimplify.
Thanks to Mattias Eriksson V for reporting this.
llvm-svn: 263173
of, and I misdiagnosed for months and months.
Andrea has had a patch for this forever, but I just couldn't see how
it was fixing the root cause of the problem. It didn't make sense to me,
even though the patch was perfectly good and the analysis of the actual
failure event was *fantastic*.
Well, I came back to it today because the patch has sat for *far* too
long and needs attention and decided I wouldn't let it go until I really
understood what was going on. After quite some time in the debugger,
I finally realized that in fact I had just missed an important case with
my previous attempt to fix PR22093 in r225149. Not only do we need to
handle loads that won't be split, but stores-of-loads that we won't
split. We *do* actually have enough logic in the presplitting to form
new slices for split stores.... *unless* we decided not to split them!
I'm so sorry that it took me this long to come to the realization that
this is the issue. It seems so obvious in hindsight (of course).
Anyways, the fix becomes *much* smaller and more focused. The fact that
we're left doing integer smashing is related to the FIXME in my original
commit: fundamentally, we're not aggressive about pre-splitting for
loads and stores to the same alloca. If we want to get aggressive about
this, it'll need both what Andrea had put into the proposed fix, but
also a *lot* more logic to essentially iteratively pre-split the alloca
until we can't do any more. As I said in that commit log, it's really
unclear that this is the right call. Instead, the integer blending and
letting targets lower this to narrower stores seems slightly better. But
we definitely shouldn't really go down that path just to fix this bug.
Again, tons of thanks are owed to Andrea and others at Sony for working
on this bug. I really should have seen what was going on here and
re-directed them sooner. =////
llvm-svn: 263121
We already have the instruction extracted into 'I', just cast that to
a store the way we do for loads. Also, we don't enter the if unless SI
is non-null, so don't test it again for null.
I'm pretty sure the entire test there can be nuked, but this is just the
trivial cleanup.
llvm-svn: 263112
MinVecRegSize is currently hardcoded to 128; this patch adds a cl::opt
to allow changing it. I tried not to change any existing behavior for the default
case.
Differential revision: http://reviews.llvm.org/D13278
llvm-svn: 263089
need to be changed for porting to the new pass manager.
Also sink the comment on the ValueTable class back to that class instead
of it dangling on an anonymous namespace.
No functionality changed.
llvm-svn: 263084
This is a fairly straightforward port to the new pass manager with one
exception. It removes a very questionable use of releaseMemory() in
the old pass to invalidate its caches between runs on a function.
I don't think this is really guaranteed to be safe. I've just used the
more direct port to the new PM to address this by nuking the results
object each time the pass runs. While this could cause some minor malloc
traffic increase, I don't expect the compile time performance hit to be
noticeable, and it makes the correctness and other aspects of the pass
much easier to reason about. In some cases, it may make things faster by
making the sets and maps smaller with better locality. Indeed, the
measurements collected by Bruno (thanks!!!) show mostly compile time
improvements.
There is sadly very limited testing at this point as there are only two
tests of memdep, and both rely on GVN. I'll be porting GVN next and that
will exercise this heavily though.
Differential Revision: http://reviews.llvm.org/D17962
llvm-svn: 263082
This patch teaches LICM's implementation of store promotion to exploit the fact that the memory location being accessed might be provably thread local. The fact that it's thread local weakens the requirements for where we can insert stores since no other thread can observe the write. This allows us to perform store promotion even in cases where the store is not guaranteed to execute in the loop.
Two key assumptions worth drawing out are a) that no-capture is strong enough to imply no-escape, and b) that standard allocation functions like malloc, calloc, and operator new return values which can be assumed not to have previously escaped.
In future work, it would be nice to generalize this so that it works without directly seeing the allocation site. I believe that the nocapture return attribute should be suitable for this purpose, but haven't investigated carefully. It's also likely that we could support unescaped allocas with similar reasoning, but since SROA and Mem2Reg should destroy those, they're less interesting than they first might seem.
Differential Revision: http://reviews.llvm.org/D16783
llvm-svn: 263072
When checking whether an smin is positive, we can move the comparison to one of the inputs if the other is known positive. If the known-positive one is the min, then the other input can't be negative; if the other input is the min, then comparing the min is the same as comparing that input.
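An illustrative IR sketch (invented values): %b is forced into [1, 255] so it is known positive, and asking whether smin(%a, %b) is positive then reduces to asking whether %a is positive.
define i1 @smin_positive(i32 %a, i32 %b.in) {
  ; Make %b a known-positive value in [1, 255].
  %b.low = and i32 %b.in, 255
  %b = or i32 %b.low, 1
  ; smin(%a, %b) in its usual select form.
  %cmp = icmp slt i32 %a, %b
  %min = select i1 %cmp, i32 %a, i32 %b
  ; Equivalent, under the assumption above, to: icmp sgt i32 %a, 0
  %pos = icmp sgt i32 %min, 0
  ret i1 %pos
}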
Differential Revision: http://reviews.llvm.org/D17873
llvm-svn: 263059
I somehow missed this. The case in GCC (global_alloc) was similar to
the new testcase except it had an array of structs rather than a two
dimensional array.
Fixes PR26885.
llvm-svn: 263058
As part of r251146 InstCombine was extended to call computeKnownBits on
every value in the function to determine whether it happens to be
constant. This increases typical compile time by 1-3% (5% in irgen+opt
time) in my measurements. On the other hand this case did not trigger
once in the whole llvm-testsuite.
This patch introduces the notion of ExpensiveCombines which are only
enabled for OptLevel > 2. I removed the check in InstructionSimplify as
that is called from various places where the OptLevel is not known but
given the rarity of the situation I think a check in InstCombine is
enough.
Differential Revision: http://reviews.llvm.org/D16835
llvm-svn: 263047
Original commit message:
calculate builtin_object_size if argument is a removable pointer
This patch fixes calculating correct value for builtin_object_size function
when pointer is used only in builtin_object_size function call and never
after that.
Patch by Strahinja Petrovic.
Differential Revision: http://reviews.llvm.org/D17337
Reland the original change with a small modification (first do a null check
and then do the cast) to satisfy ubsan.
llvm-svn: 263011
TSan instrumentation functions for atomic stores, loads, and cmpxchg work on
integer value types. This patch adds casts before calling TSan instrumentation
functions in cases where the value is a pointer.
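A hand-written sketch of the instrumented form for a pointer-valued atomic store; the exact runtime signature and the memory-order constant are assumptions here, not taken from this patch:
declare void @__tsan_atomic64_store(i64*, i64, i32)

define void @store_ptr(i8** %slot, i8* %val) {
  ; The value and the address are cast to integer form before the call,
  ; since the runtime entry point only traffics in i64.
  %v = ptrtoint i8* %val to i64
  %p = bitcast i8** %slot to i64*
  call void @__tsan_atomic64_store(i64* %p, i64 %v, i32 5) ; 5 assumed to encode seq_cst
  ret void
}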
Differential Revision: http://reviews.llvm.org/D17833
llvm-svn: 262876
This lets select sub-targets enable this pass. The patch implements the
idea from the recent llvm-dev thread:
http://thread.gmane.org/gmane.comp.compilers.llvm.devel/94925
The goal is to enable the LoopDataPrefetch pass for the Cyclone
sub-target only within Aarch64.
Positive and negative tests will be included in an upcoming patch that
enables selective prefetching of large-strided accesses on Cyclone.
llvm-svn: 262844
This reverts commit r262250.
It causes SPEC2006/gcc to generate wrong result (166.s) in AArch64 when
running with *ref* data set. The error happens with
"-Ofast -flto -fuse-ld=gold" or "-O3 -fno-strict-aliasing".
llvm-svn: 262839
This code has been successfully used to bootstrap libc++ in a no-asserts
mode for a very long time, so the code that follows cannot be completely
incorrect. I've added a test that shows the current behavior for this
kind of code with DFSan. If it is desirable for DFSan to do something
special when processing an invoke of a variadic function, it can be
added, but we shouldn't keep an assert that we've been ignoring due to
release builds anyways.
llvm-svn: 262829
Given that we're not actually reducing the instruction count in the included
regression tests, I think we would call this a canonicalization step.
The motivation comes from the example in PR26702:
https://llvm.org/bugs/show_bug.cgi?id=26702
If we hoist the bitwise logic ahead of the bitcast, the previously unoptimizable
example of:
define <4 x i32> @is_negative(<4 x i32> %x) {
%lobit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
%not = xor <4 x i32> %lobit, <i32 -1, i32 -1, i32 -1, i32 -1>
%bc = bitcast <4 x i32> %not to <2 x i64>
%notnot = xor <2 x i64> %bc, <i64 -1, i64 -1>
%bc2 = bitcast <2 x i64> %notnot to <4 x i32>
ret <4 x i32> %bc2
}
Simplifies to the expected:
define <4 x i32> @is_negative(<4 x i32> %x) {
%lobit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
ret <4 x i32> %lobit
}
Differential Revision: http://reviews.llvm.org/D17583
llvm-svn: 262645
This patch provides the following infrastructure for PGO enhancements in inliner:
- Enable the use of block level profile information in inliner
- Incremental update of block frequency information during inlining
- Update the function entry counts of callees when they get inlined into callers.
Differential Revision: http://reviews.llvm.org/D16381
llvm-svn: 262636
Summary: With a discriminator, LineLocation can uniquely identify a callsite without the need to specify the callee name. Remove the callee function name from the key, and put it in the value (FunctionSamples).
Reviewers: davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17827
llvm-svn: 262634
The vectorization of first-order recurrences (r261346) caused PR26734. When
detecting these recurrences, we need to ensure that the previous value is
actually defined inside the loop. This patch includes the fix and test case.
llvm-svn: 262624
Summary: This is the last step toward supporting aggregate memory access in instcombine. This explodes stores of arrays into a series of stores for each element, allowing them to be optimized.
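Conceptually (a hand-written sketch, not the pass's actual output), a store of a small array such as
define void @store_agg([2 x i32] %v, [2 x i32]* %p) {
  store [2 x i32] %v, [2 x i32]* %p
  ret void
}
becomes one store per element:
define void @store_agg.split([2 x i32] %v, [2 x i32]* %p) {
  %v0 = extractvalue [2 x i32] %v, 0
  %v1 = extractvalue [2 x i32] %v, 1
  %p0 = getelementptr inbounds [2 x i32], [2 x i32]* %p, i64 0, i64 0
  %p1 = getelementptr inbounds [2 x i32], [2 x i32]* %p, i64 0, i64 1
  store i32 %v0, i32* %p0
  store i32 %v1, i32* %p1
  ret void
}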
Reviewers: joker.eph, reames, hfinkel, majnemer, mgrang
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17828
llvm-svn: 262530
Summary: This is another step toward improving fca support. This unpacks loads of arrays into a series of loads of the array's elements.
Reviewers: chandlerc, joker.eph, majnemer, reames, hfinkel
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D15890
llvm-svn: 262521
parts of the AA interface out of the base class of every single AA
result object.
Because this logic reformulates the query in terms of some other aspect
of the API, it would easily cause O(n^2) query patterns in alias
analysis. These could in turn be magnified further based on the number
of call arguments, and then further based on the number of AA queries
made for a particular call. This ended up causing problems for Rust that
were actually noticeable enough to get a bug (PR26564) and probably other
places as well.
When originally re-working the AA infrastructure, the desire was to
regularize the pattern of refinement without losing any generality.
While I think it was successful, that is clearly proving to be too
costly. And the cost is needless: we gain no actual improvement for this
generality of making a direct query to tbaa actually be able to
re-use some other alias analysis's refinement logic for one of the other
APIs, or some such. In short, this is entirely wasted work.
To the extent possible, delegation to other API surfaces should be done
at the aggregation layer so that we can avoid re-walking the
aggregation. In fact, this significantly simplifies the logic as we no
longer need to smuggle the aggregation layer into each alias analysis
(or the TargetLibraryInfo into each alias analysis just so we can form
argument memory locations!).
However, we also have some delegation logic inside of BasicAA and some
of it even makes sense. When the delegation logic is baking in specific
knowledge of aliasing properties of the LLVM IR, as opposed to simply
reformulating the query to utilize a different alias analysis interface
entry point, it makes a lot of sense to restrict that logic to
a different layer such as BasicAA. So one aspect of the delegation that
was in every AA base class is that when we don't have operand bundles,
we re-use function AA results as a fallback for callsite alias results.
This relies on the IR properties of calls and functions w.r.t. aliasing,
and so seems a better fit to BasicAA. I've lifted the logic up to that
point where it seems to be a natural fit. This still does a bit of
redundant work (we query function attributes twice, once via the
callsite and once via the function AA query) but it is *exactly* twice
here, no more.
The end result is that all of the delegation logic is hoisted out of the
base class and into either the aggregation layer when it is a pure
retargeting to a different API surface, or into BasicAA when it relies
on the IR's aliasing properties. This should fix the quadratic query
pattern reported in PR26564, although I don't have a stand-alone test
case to reproduce it.
It also seems general goodness. Now the numerous AAs that don't need
target library info don't carry it around and depend on it. I think
I can even rip out the general access to the aggregation layer and only
expose that in BasicAA as it is the only place where we re-query in that
manner.
However, this is a non-trivial change to the AA infrastructure so I want
to get some additional eyes on this before it lands. Sadly, it can't
wait long because we should really cherry pick this into 3.8 if we're
going to go this route.
Differential Revision: http://reviews.llvm.org/D17329
llvm-svn: 262490
As noted in the code comment, I don't think we can do the same transform that we do for
*scalar* integers comparisons to *vector* integers comparisons because it might pessimize
the general case.
Exhibit A for an incomplete integer comparison ISA remains x86 SSE/AVX: it only has EQ and GT
for integer vectors.
But we should now recognize all the variants of this construct and produce the optimal code
for the cases shown in:
https://llvm.org/bugs/show_bug.cgi?id=26701
llvm-svn: 262424
Summary: SampleProfile pass needs to be performed after InstructionCombiningPass, which helps eliminate un-inlinable function calls.
Reviewers: davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17742
llvm-svn: 262419
Summary:
This adds the beginning of an update API to preserve MemorySSA. In particular,
this patch adds a way to remove memory SSA accesses when instructions are
deleted.
It also adds relevant unit testing infrastructure for MemorySSA's API.
(There is an actual user of this API, i will make that diff dependent on this one. In practice, a ton of opt passes remove memory instructions, so it's hopefully an obviously useful API :P)
Reviewers: hfinkel, reames, george.burgess.iv
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17157
llvm-svn: 262362
This patch fixes calculating correct value for builtin_object_size function
when pointer is used only in builtin_object_size function call and never
after that.
Patch by Strahinja Petrovic.
Differential Revision: http://reviews.llvm.org/D17337
llvm-svn: 262337
The intended effect of this patch in conjunction with:
http://reviews.llvm.org/rL259392 and http://reviews.llvm.org/rL260145
is that customers using the AVX intrinsics in C will benefit from combines when
the load mask is constant:
__m128 mload_zeros(float *f) {
return _mm_maskload_ps(f, _mm_set1_epi32(0));
}
__m128 mload_fakeones(float *f) {
return _mm_maskload_ps(f, _mm_set1_epi32(1));
}
__m128 mload_ones(float *f) {
return _mm_maskload_ps(f, _mm_set1_epi32(0x80000000));
}
__m128 mload_oneset(float *f) {
return _mm_maskload_ps(f, _mm_set_epi32(0x80000000, 0, 0, 0));
}
...so none of the above will actually generate a masked load for optimized code.
This is the masked load counterpart to:
http://reviews.llvm.org/rL262064
llvm-svn: 262269
Summary:
I re-benchmarked this and results are similar to original results in
D13259:
On ARM64:
SingleSource/Benchmarks/Polybench/linear-algebra/solvers/dynprog -59.27%
SingleSource/Benchmarks/Polybench/stencils/adi -19.78%
On x86:
SingleSource/Benchmarks/Polybench/linear-algebra/solvers/dynprog -27.14%
And of course the original ~20% gain on SPECint_2006/456.hmmer with Loop
Distribution.
In terms of compile time, there is ~5% increase on both
SingleSource/Benchmarks/Misc/oourafft and
SingleSource/Benchmarks/Linkpack/linkpack-pc. These are both very tiny
loop-intensive programs where SCEV computations dominates compile time.
The reason that time spent in SCEV increases has to do with the design
of the old pass manager. If a transform pass does not preserve an
analysis we *invalidate* the analysis even if there was *no*
modification made by the transform pass.
This means that currently we don't take advantage of LLE and LV sharing
the same analysis (LAA) and unfortunately we recompute LAA *and* SCEV
for LLE.
(There should be a way to work around this limitation in the case of
SCEV and LAA since both compute things on demand and internally cache
their result. Thus we could pretend that transform passes preserve
these analyses and manually invalidate them upon actual modification.
On the other hand the new pass manager is supposed to solve this, so I am not
sure if this is worthwhile.)
Reviewers: hfinkel, dberlin
Subscribers: dberlin, reames, mssimpso, aemerson, joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D16300
llvm-svn: 262250
Summary:
The PS4 linker seems to handle this fine.
Hi David, it seems that indeed most ELF linkers support
__{start,stop}_SECNAME, as our proprietary linker does as well.
This follows the pattern of r250679 w.r.t. the testing.
Maggie, Phillip, Paul: I've tested this with the PS4 SDK 3.5 toolchain
prerelease and it seems to work fine.
Reviewers: davidxl
Subscribers: probinson, phillip.power, MaggieYi
Differential Revision: http://reviews.llvm.org/D17672
llvm-svn: 262112
merged into a loop that was subsequently unrolled (or otherwise nuked).
In this case it can't merge in the ASTs for any remaining nested loops;
it needs to re-add their instructions directly.
The fix is very isolated, but I've pulled the code for merging blocks
into the AST into a single place in the process. The only behavior
change is in the case which would have crashed before.
This fixes a crash reported by Mikael Holmen on the list after r261316
restored much of the loop pass pipelining and allowed us to actually do
this kind of nested transformation sequence. I've taken that test case
and further reduced it into the somewhat twisty maze of loops in the
included test case. This does in fact trigger the bug even in this
reduced form.
llvm-svn: 262108
Summary:
Without tree pruning clang has 2,667,552 points.
With only dominators pruning: 1,515,586.
With both dominators & predominators pruning: 1,340,534.
Differential Revision: http://reviews.llvm.org/D17671
llvm-svn: 262103
The intended effect of this patch in conjunction with:
http://reviews.llvm.org/rL259392 and http://reviews.llvm.org/rL260145
is that customers using the AVX intrinsics in C will benefit from combines when
the store mask is constant:
void mstore_zero_mask(float *f, __m128 v) {
_mm_maskstore_ps(f, _mm_set1_epi32(0), v);
}
void mstore_fake_ones_mask(float *f, __m128 v) {
_mm_maskstore_ps(f, _mm_set1_epi32(1), v);
}
void mstore_ones_mask(float *f, __m128 v) {
_mm_maskstore_ps(f, _mm_set1_epi32(0x80000000), v);
}
void mstore_one_set_elt_mask(float *f, __m128 v) {
_mm_maskstore_ps(f, _mm_set_epi32(0x80000000, 0, 0, 0), v);
}
...so none of the above will actually generate a masked store for optimized code.
Differential Revision: http://reviews.llvm.org/D17485
llvm-svn: 262064
Summary: Check that we're using SCEV for the same loop we're simulating. Otherwise, we might try to use the iteration number of the current loop in SCEV expressions for inner/outer loops IVs, which is clearly incorrect.
Reviewers: chandlerc, hfinkel
Subscribers: sanjoy, llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D17632
llvm-svn: 261958
Summary:
This is the first simple attempt to reduce the number of coverage-
instrumented blocks.
If a basic block dominates all its successors, then its coverage
information is useless to us. Ignore such blocks if the
sanitizer-coverage-prune-tree option is set.
Differential Revision: http://reviews.llvm.org/D17626
llvm-svn: 261949
The cleanupret instruction has an invariant that its 'from' operand be
a cleanuppad. This invariant was violated when we removed a dead block
which removed a cleanuppad leaving behind a cleanupret with an undef
'from' operand.
This was solved in r261731 by staving off the removal of the dead block
to a later pass.
However, it occurred to me that we do not need to do this.
Instead, we can simply avoid processing the cleanupret if it has an
undef 'from' operand because we know that it will be removed soon.
llvm-svn: 261754
This is part of the payoff for the refactoring in:
http://reviews.llvm.org/rL261649 and http://reviews.llvm.org/rL261707
In addition to removing a pile of duplicated code, the xor case was
missing the optimization for vector types because it checked
"SrcTy->isIntegerTy()" rather than "SrcTy->isIntOrIntVectorTy()"
like 'and' and 'or' were already doing.
This solves part of:
https://llvm.org/bugs/show_bug.cgi?id=26702
llvm-svn: 261750
This is a part of the refactoring to unify the isSafeToLoadUnconditionally and isDereferenceablePointer functions. In a subsequent change I'm going to eliminate isDereferenceableAndAlignedPointer from the Loads API, leaving isSafeToLoadSpeculatively as the only function to check whether a load instruction can be speculated.
Reviewed By: hfinkel
Differential Revision: http://reviews.llvm.org/D16180
llvm-svn: 261736
DeleteDeadBlock was called indiscriminately, leading to cleanuprets with
undef cleanuppad references.
Instead, try to drain the BB of most of its instructions if it is
unreachable. We can then remove the BB if it solely consists of a
terminator (and maybe some phis).
llvm-svn: 261731
Note: The 'and' case in foldCastedBitwiseLogic() is inheriting one extra
check from the nearly identical 'or' case:
if (!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src))
But I'm not sure how to expose that difference in a regression test.
Without that check, the 'or' path will infinite loop on:
test/Transforms/InstCombine/zext-or-icmp.ll
because the zext-or-icmp fold is attempting a reverse transform.
The refactoring should extend to the 'xor' case next to solve part of
PR26702.
llvm-svn: 261707
It is problematic if the inlinee has a cleanupret which unwinds to
caller and we inline it into a call site which doesn't unwind.
If the funclet unwinds anywhere other than to the caller,
then we will give the funclet two unwind destinations.
This will result in a verifier failure.
Seeing as how the caller wasn't an invoke (which would locally unwind)
and that the funclet cannot unwind to caller, we must conclude that an
'unwind to caller' cleanupret is dynamically unreachable.
This fixes PR26698.
Differential Revision: http://reviews.llvm.org/D17536
llvm-svn: 261656
This is a straight cut and paste of the existing code and is intended to
be the first step in solving part of PR26702:
https://llvm.org/bugs/show_bug.cgi?id=26702
We should be able to reuse most of this and delete the nearly identical
existing code in visitOr(). Then, we can enhance visitXor() to use the
same code too.
llvm-svn: 261649
Summary:
When we completely unroll a loop, it's pretty easy to update DT in-place and
thus avoid rebuilding it. DT recalculation is one of the most time-consuming
tasks in loop-unroll, so avoiding it at least in case of full unroll should be
beneficial.
On some extreme (but still real-world) tests this patch improves compile time by
~2x.
Reviewers: escha, jmolloy, hfinkel, sanjoy, chandlerc
Subscribers: joker.eph, sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D17473
llvm-svn: 261595
Summary:
Since this is an IR pass it's nice to be able to write tests without
llc. This is the counterpart of the llc test under
CodeGen/PowerPC/loop-data-prefetch.ll.
Reviewers: hfinkel
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17464
llvm-svn: 261578
The issue was that we only required LCSSA rebuilding if the immediate
parent-loop had values used outside of it. The fix is to enable the
same logic for all outer loops, not only the immediate parent.
llvm-svn: 261575
This flag was part of a migration to a new means of handling vectors-of-pointers which was described in the llvm-dev thread "FYI: Relocating vector of pointers". The old code path has been off by default for a while without complaints, so it's time to clean up.
llvm-svn: 261569
This change reverts "246133 [RewriteStatepointsForGC] Reduce the number of new instructions for base pointers" and a follow on bugfix 12575.
As pointed out in pr25846, this code suffers from a memory corruption bug. Since I'm (empirically) not going to get back to this any time soon, simply reverting the problematic change is the right answer.
llvm-svn: 261565
Summary:
Previously we had a notion of convergent functions but not of convergent
calls. This is insufficient to correctly analyze calls where the target
is unknown, e.g. indirect calls.
Now a call is convergent if it targets a known-convergent function, or
if it's explicitly marked as convergent. As usual, we can remove
convergent where we can prove that no convergent operations are
performed in the call.
Reviewers: chandlerc, jingyue
Subscribers: hfinkel, jhen, tra, llvm-commits
Differential Revision: http://reviews.llvm.org/D17317
llvm-svn: 261544
I missed == and != when I removed implicit conversions between iterators
and pointers in r252380 since they were defined outside ilist_iterator.
Since they depend on getNodePtrUnchecked(), they indirectly rely on UB.
This commit removes all uses of these operators. (I'll delete the
operators themselves in a separate commit so that it can be easily
reverted if necessary.)
There should be NFC here.
llvm-svn: 261498
Stop relying on `getNodePtrUnchecked()` being useful on invalid
iterators. This function is documented to be for internal use only, and
the pointer type will eventually have to change to remove UB from
ilist_iterator. Instead, check the iterator before it has been
invalidated.
llvm-svn: 261497
Cleanuppads may be merged together if one is the only predecessor of the
other, in which case a simple transform can be performed: replace the
cleanupret with a branch and remove the unnecessary cleanuppad.
Differential Revision: http://reviews.llvm.org/D17459
llvm-svn: 261390
This patch enables the vectorization of first-order recurrences. A first-order
recurrence is a non-reduction recurrence relation in which the value of the
recurrence in the current loop iteration equals a value defined in the previous
iteration. The load PRE of the GVN pass often creates these recurrences by
hoisting loads from within loops.
In this patch, we add a new recurrence kind for first-order phi nodes and
attempt to vectorize them if possible. Vectorization is performed by shuffling
the values for the current and previous iterations. The vectorization cost
estimate is updated to account for the added shuffle instruction.
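A hand-written IR sketch of such a recurrence (names invented; assumes %n > 1): %prev always carries the value that %cur had on the previous iteration.
define void @recur(i32* %a, i32* %b, i64 %n) {
entry:
  %first = load i32, i32* %a
  br label %loop

loop:
  %i    = phi i64 [ 1, %entry ], [ %i.next, %loop ]
  ; First-order recurrence: the value defined in the previous iteration.
  %prev = phi i32 [ %first, %entry ], [ %cur, %loop ]
  %a.i  = getelementptr inbounds i32, i32* %a, i64 %i
  %cur  = load i32, i32* %a.i
  %sum  = add i32 %cur, %prev
  %b.i  = getelementptr inbounds i32, i32* %b, i64 %i
  store i32 %sum, i32* %b.i
  %i.next = add nuw i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret void
}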
Contributed-by: Matthew Simpson and Chad Rosier <mcrosier@codeaurora.org>
Differential Revision: http://reviews.llvm.org/D16197
llvm-svn: 261346
Summary:
If we don't have the first and last access of an interleaved load group,
the first and last wide load in the loop can do an out of bounds
access. Even though we discard results from speculative loads,
this can cause problems, since it can technically generate page faults
(or worse).
We now discard interleaved load groups that don't have the first and
last load in the group.
Reviewers: hfinkel, rengolin
Subscribers: rengolin, llvm-commits, mzolotukhin, anemet
Differential Revision: http://reviews.llvm.org/D17332
llvm-svn: 261331
routine.
We were getting this wrong in small ways and generally being very
inconsistent about it across loop passes. Instead, let's have a common
place where we do this. One minor downside is that this will require
some analyses like SCEV in more places than they are strictly needed.
However, this seems benign as these analyses are complete no-ops, and
without this consistency we can in many cases end up with the legacy
pass manager scheduling deciding to split up a loop pass pipeline in
order to run the function analysis half-way through. It is very, very
annoying to fix these without just being very pedantic across the board.
The only loop passes I've not updated here are ones that use
AU.setPreservesAll() such as IVUsers (an analysis) and the pass printer.
They seemed less relevant.
With this patch, almost all of the problems in PR24804 around loop pass
pipelines are fixed. The one remaining issue is that we run simplify-cfg
and instcombine in the middle of the loop pass pipeline. We've recently
added some loop variants of these passes that would seem substantially
cleaner to use, but this at least gets us much closer to the previous
state. Notably, the seven loop pass managers are down to three.
I've not updated the loop passes using LoopAccessAnalysis because that
analysis hasn't been fully wired into LoopSimplify/LCSSA, and it isn't
clear that those transforms want to support those forms anyways. They
all run late anyways, so this is harmless. Similarly, LSR is left alone
because it already carefully manages its forms and doesn't need to get
fused into a single loop pass manager with a bunch of other loop passes.
LoopReroll didn't use loop simplified form previously, and I've updated
the test case to match the trivially different output.
Finally, I've also factored all the pass initialization for the passes
that use this technique as well, so that should be done regularly and
reliably.
Thanks to James for the help reviewing and thinking about this stuff,
and Ben for help thinking about it as well!
Differential Revision: http://reviews.llvm.org/D17435
llvm-svn: 261316
more places to prevent gratuitous re-"runs" of these passes.
The passes themselves don't do any work when run, but we keep spending
time scheduling and running these needlessly when we really don't need
to do so.
This is the first patch towards fixing the really horrible loop pass
pipeline fragmentation pointed out by Sanjoy in PR24804.
llvm-svn: 261302
This patch is part of the work to make PPCLoopDataPrefetch
target-independent
(http://thread.gmane.org/gmane.comp.compilers.llvm.devel/92758).
Obviously the pass is still only used from PPC at this point. Subsequent
patches will start driving this from ARM64 as well.
Due to the previous patch most lines should show up as moved lines.
llvm-svn: 261265
Commit r259357 was reverted because it caused PR26629. We were assuming all
roots of a vectorizable tree could be truncated to the same width, which is not
the case in general. This commit reapplies the patch along with a fix and a new
test case to ensure we don't regress because of this issue again. This should
fix PR26629.
llvm-svn: 261212
convert one test to use this.
This is a particularly significant milestone because it required
a working per-function AA framework which can be queried over each
function from within a CGSCC transform pass (and additionally a module
analysis to be accessible). This is essentially *the* point of the
entire pass manager rewrite. A CGSCC transform is able to query for
multiple different function's analysis results. It works. The whole
thing appears to actually work and accomplish the original goal. While
we were able to hack function attrs and basic-aa to "work" in the old
pass manager, this port doesn't use any of that, it directly leverages
the new fundamental functionality.
For this to work, the CGSCC framework also has to support SCC-based
behavior analysis, etc. The only part of the CGSCC pass infrastructure
not sorted out at this point are the updates in the face of inlining and
running function passes that mutate the call graph.
The changes are pretty boring and boiler-plate. Most of the work was
factored into more focused preparatory patches. But this is what wires
it all together.
llvm-svn: 261203
This function is used to check whether a dbg.value intrinsic has already
been inserted, but without comparing the DIExpression, it would erroneously
fire on split aggregates and only the first scalar would survive.
Found via http://reviews.llvm.org/D16867.
<rdar://problem/24456528>
llvm-svn: 261145
Loop vectorizer now knows to vectorize GEP and create masked gather and scatter intrinsics for random memory access.
The feature is enabled on AVX-512 target.
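For reference, a hedged sketch of the kind of loop this enables (hand-written, not from the patch; assumes %n > 0): the load address depends on another loaded value, so vectorizing it needs a masked gather rather than a wide consecutive load.
define void @gather_like(i32* %a, i32* %idx, i32* %out, i64 %n) {
entry:
  br label %loop

loop:
  %i     = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %idx.p = getelementptr inbounds i32, i32* %idx, i64 %i
  %j     = load i32, i32* %idx.p
  %j.ext = sext i32 %j to i64
  ; Non-consecutive, data-dependent address: out[i] = a[idx[i]].
  %a.p   = getelementptr inbounds i32, i32* %a, i64 %j.ext
  %v     = load i32, i32* %a.p
  %o.p   = getelementptr inbounds i32, i32* %out, i64 %i
  store i32 %v, i32* %o.p
  %i.next = add nuw i64 %i, 1
  %done  = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret void
}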
Differential Revision: http://reviews.llvm.org/D15690
llvm-svn: 261140
Summary: Stores and loads unpacked by instcombine do not always have the right alignment. This explicitly computes the alignment and sets it.
Reviewers: dblaikie, majnemer, reames, hfinkel, joker.eph
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17326
llvm-svn: 261139
When support for objc_unsafeClaimAutoreleasedReturnValue has been added to the
ARC optimizer in r258970, one case was missed which would lead the optimizer
to execute an llvm_unreachable. In this case, just handle ClaimRV in the same
way we handle RetainRV.
llvm-svn: 261134
Summary:
On the contrary to Full LTO, ThinLTO can afford to shift compile time
from the frontend to the linker: both phases are parallel (even if
it is not totally "free": projects like clang are reusing product
from the "compile phase" for multiple link, think about
libLLVMSupport reused for opt, llc, etc.).
This pipeline is based on the proposal in D13443 for full LTO. We
didn't move forward on this proposal because the LTO link was far too
long after that. We believe that we can afford it with ThinLTO.
The ThinLTO pipeline integrates in the regular O2/O3 flow:
- The compile phase performs the inliner with a somewhat lighter
function simplification. (TODO: tune the inliner thresholds here)
This is intended to simplify the IR and get rid of obvious things
like linkonce_odr that will be inlined.
- The link phase will run the pipeline from the start, extended with
some specific passes that leverage the augmented knowledge we have
during LTO. Especially after the inliner is done, a sequence of
globalDCE/globalOpt is performed, followed by another run of the
"function simplification" passes. It is not clear if this part
of the pipeline will stay as is, as the split model of ThinLTO
does not allow the same benefit as FullLTO without added tricks.
The measurements on the public test suite as well as on our internal
suite show an overall net improvement. The binary size for the clang
executable is reduced by 5%. We're still tuning it with the bringup
of ThinLTO and it will evolve, but this should provide a good starting
point.
Reviewers: tejohnson
Differential Revision: http://reviews.llvm.org/D17115
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261029
It is intended to contains the passes run over a function after the
inliner is done with a function and before it moves to its callers.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 261028
Summary:
Extend findExistingExpansion to use existing values in ExprValueMap.
This patch gives 0.3~0.5% performance improvements on
benchmarks(test-suite, spec2000, spec2006, commercial benchmark)
Reviewers: mzolotukhin, sanjoy, zzheng
Differential Revision: http://reviews.llvm.org/D15559
llvm-svn: 260938
Summary:
While shrinking types according to the required bits, we can
encounter insert/extract element instructions. This will cause us to
reach an llvm_unreachable statement.
This change adds support for truncating insert/extract element
operations, and adds a regression test.
Reviewers: jmolloy
Subscribers: mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17078
llvm-svn: 260893
LICM starts with an *empty* AST, and then merges in each sub-loop. While the
add code is appropriate for sub-loop 2 and up, it's utterly unnecessary for
sub-loop 1. If the AST starts off empty, we can just clone/move the contents
of the subloop into the containing AST.
Reviewed-by: Philip Reames <listmail@philipreames.com>
Differential Revision: http://reviews.llvm.org/D16753
llvm-svn: 260892
than the SCC object, and have it scan the instruction stream directly
rather than relying on call records.
This makes the behavior of this routine consistent between libc routines
and LLVM intrinsics for libc routines. We can go and start teaching it
about those being norecurse, but we should behave the same for the
intrinsic and the libc routine rather than differently. I chatted with
James Molloy and the inconsistency doesn't seem intentional and likely
is due to intrinsic calls not being modelled in the call graph analyses.
This also fixes a bug where we would deduce norecurse on optnone
functions, when generally we try to handle optnone functions as-if they
were replaceable and thus unanalyzable.
llvm-svn: 260813
Summary:
Export the CloneDebugInfoMetadata utility, which clones all debug info
associated with a function into the first module. Also use this function
in CloneModule on each function we clone (the CloneFunction entrypoint
already does this).
Without this, cloning a module will lead to DI quality regressions,
especially since r252219 reversed the Function <-> DISubprogram edge
(before we could get lucky and have this edge preserved if the
DISubprogram itself was, e.g. due to location metadata).
This was verified to fix missing debug information in julia and
a unittest to verify the new behavior is included.
Patch by Yichao Yu! Thanks!
Reviewers: loladiro, pcc
Differential Revision: http://reviews.llvm.org/D17165
llvm-svn: 260791
Summary:
Performing this optimization duplicates the call to the convergent
function and adds new control-flow dependencies, which is a no-no.
Reviewers: jingyue
Subscribers: broune, hfinkel, tra, resistor, joker.eph, arsenm, llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D17128
llvm-svn: 260730
Summary:
Calls to convergent functions can be duplicated, but only if the
duplicates are not control-flow dependent on any additional values.
Loop rotation doesn't meet the bar.
Reviewers: jingyue
Subscribers: mzolotukhin, llvm-commits, arsenm, joker.eph, resistor, tra, hfinkel, broune
Differential Revision: http://reviews.llvm.org/D17127
llvm-svn: 260729
The attached patch removes all of the block local code for performing X-load forwarding by reusing the code used in the non-local case.
The motivation here is to remove duplication and in the process increase our test coverage of some fairly tricky code. I have some upcoming changes I'll be proposing in this area and wanted to have the code cleaned up a bit first.
Note: The review for this mostly happened in email which didn't make it to phabricator on the 258882 commit thread.
Differential Revision: http://reviews.llvm.org/D16608
llvm-svn: 260711
In short, before r252926 we were comparing an unsigned (StoreSize) against an
APInt (Stride), which is fine and well. Afterwards we were zero extending the Stride
and then converting to an unsigned, which is not the same thing. Obviously,
Strides can also be negative. This commit just restores the original behavior.
AFAICT, it's not possible to write a test case to expose the issue because
the code already has checks to make sure the StoreSize can't overflow an
unsigned (which prevents the Stride from overflowing an unsigned as well).
llvm-svn: 260706
For some cases, InstCombine replaces a sequence of xor/sub instructions
followed by a cmp instruction with a single cmp instruction.
However, this replacement may result in a suboptimal result, especially when
the xor/sub has more than one use, as discussed in
bug 26465 (https://llvm.org/bugs/show_bug.cgi?id=26465).
This patch makes the replacement happen only when the xor/sub has only one
use.
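A small IR sketch of the multi-use case the patch now leaves alone (invented code): folding the compare into icmp eq %a, %b would not let the sub go away, because %d is still needed by the select.
define i32 @sub_two_uses(i32 %a, i32 %b) {
  %d  = sub i32 %a, %b
  %eq = icmp eq i32 %d, 0
  %r  = select i1 %eq, i32 %b, i32 %d
  ret i32 %r
}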
Differential Revision: http://reviews.llvm.org/D16915
Patch by Taewook Oh!
llvm-svn: 260695
node set rather than walking the SCC directly.
This directly exposes the functions and has already had null entries
filtered out. We also don't need to handle optnone as it has
already been handled in the caller -- we never try to remove convergent
when there are optnone functions in the SCC.
With this change, the code for removing convergent should work with the
new pass manager and a different SCC analysis.
llvm-svn: 260668
with the test for a non-convergent intrinsic call.
While it is possible to use the call records to search for function
calls, we're going to do an instruction scan anyways to find the
intrinsics, we can handle both cases while scanning instructions. This
will also make the logic more amenable to the new pass manager which
doesn't use the same call graph structure.
My next patch will remove use of CallGraphNode entirely and allow this
code to work with both the old and new pass manager. Fortunately, it
should also get strictly simpler without changing functionality.
llvm-svn: 260666
MSan adds a constructor to each translation unit that calls
__msan_init, and does nothing else. The idea is to run __msan_init
before any instrumented code. This results in multiple constructors
and multiple .init_array entries in the final binary, one per
translation unit. This is absolutely unnecessary; one would be
enough.
This change moves the constructors to a comdat group in order to drop
the extra ones.
llvm-svn: 260632
Original commit message:
[InstCombine] Fold IntToPtr and PtrToInt into preceding loads.
Currently we only fold a BitCast into a Load when the BitCast is its
only user.
Do the same for any no-op cast.
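A hand-written before/after sketch (assuming 64-bit pointers; not the patch's own test):
; Before: the loaded pointer's only user is a no-op cast.
define i64 @load_as_int(i8** %pp) {
  %p = load i8*, i8** %pp
  %i = ptrtoint i8* %p to i64
  ret i64 %i
}

; After: the cast is folded away by loading the integer directly.
define i64 @load_as_int.folded(i8** %pp) {
  %pp.cast = bitcast i8** %pp to i64*
  %i = load i64, i64* %pp.cast
  ret i64 %i
}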
Patch by Philip Pfaffe!
Differential Revision: http://reviews.llvm.org/D9152
llvm-svn: 260612
Summary:
On the contrary to Full LTO, ThinLTO can afford to shift compile time
from the frontend to the linker: both phases are parallel.
This pipeline is based on the proposal in D13443 for full LTO. We
didn't move forward on this proposal because the link was far too long
after that.
This patch refactor the "function simplification" passes that are part
of the inliner loop in a helper function (this part is NFC and can be
commited separately to simplify the diff). The ThinLTO pipeline
integrates in the regular O2/O3 flow:
- The compile phase performs the inliner with a somewhat lighter
function simplification. (TODO: tune the inliner thresholds here)
This is intended to simplify the IR and get rid of obvious things
like linkonce_odr that will be inlined.
- The link phase will run the pipeline from the start, extended with
some specific passes that leverage the augmented knowledge we have
during LTO. Especially after the inliner is done, a sequence of
globalDCE/globalOpt is performed, followed by another run of the
"function simplification" passes.
The measurements on the public test suite as well as on our internal
suite show an overall net improvement. The binary size for the clang
executable is reduced by 5%. We're still tuning it with the bringup
of ThinLTO but this should provide a good starting point.
Reviewers: tejohnson
Subscribers: joker.eph, llvm-commits, dexonsmith
Differential Revision: http://reviews.llvm.org/D17115
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260604
It is intended to contains the passes run over a function after the
inliner is done with a function and before it moves to its callers.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260603
When optimizing a extractvalue(load), we generate a load from the
aggregate type. This load didn't have alignment set and so would
get the alignment of the type. This breaks when the type is packed
and so the alignment should be lower.
For example, loading { int, int } would give us alignment of 4, but
the original load from this type may have an alignment of 1 if packed.
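For illustration (hand-written, using LLVM's packed-struct syntax rather than the C type from the description): the rebuilt aggregate load has to keep the original alignment of 1.
define i32 @first_field(<{ i32, i32 }>* %p) {
  ; A packed struct has no padding, so align 1 is all we can assume here.
  %agg = load <{ i32, i32 }>, <{ i32, i32 }>* %p, align 1
  %f0  = extractvalue <{ i32, i32 }> %agg, 0
  ret i32 %f0
}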
Reviewed by David Majnemer
Differential revision: http://reviews.llvm.org/D17158
llvm-svn: 260587
Summary:
When a PHI is used only to be compared with zero, it is possible to replace an
incoming value with any non-zero constant if the incoming value can be proved to
be a known nonzero value. For example, in the code below, we can replace the
incoming value %v with any non-zero constant, based on the fact that the PHI is
only used to be compared with zero and %v is a known non-zero value:
%v = select %cond, 1, 2
%p = phi [%v, BB] ...
%c = icmp eq %p, 0
Reviewers: mcrosier, jmolloy, sanjoy
Subscribers: hfinkel, mcrosier, majnemer, llvm-commits, haicheng, bmakam, mssimpso, gberry
Differential Revision: http://reviews.llvm.org/D16240
llvm-svn: 260530
Make sure we split ":" from the end of the global function id (which
is <path>:<function> for local functions) instead of the beginning to
avoid splitting at the wrong place for Windows file paths that contain
a ":".
llvm-svn: 260469
The current function importer will walk the callgraph, importing
transitively any callee that is below the threshold. This can
lead to very deep imports, which is costly in compile time and not
necessarily beneficial as most of the inlining would happen in
imported functions and not necessarily in user code.
The actual factor has been carefully chosen by flipping a coin ;)
Some tuning needs to be done (just at the existing limiting threshold).
Reviewers: tejohnson
Differential Revision: http://reviews.llvm.org/D17082
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260466
There is no reason to pass an array of "char *" to rebuild a set if
the client already has one.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 260462
It looks like clang has a couple of test cases which caught the fact LVI was not slightly more precise after 260439. When looking at the failures, it struck me as wasteful to be querying nullness of a constant via LVI, so instead of tweaking the clang tests, let's just stop querying constants from this source.
llvm-svn: 260451
This restores commit r260408, along with a fix for a bot failure.
The bot failure was caused by dereferencing a unique_ptr in the same
call instruction parameter list where it was passed via std::move.
Apparently due to luck this was not exposed when I built the compiler
with clang, only with gcc.
llvm-svn: 260442
Summary:
This patch uses the lower 64-bits of the MD5 hash of a function name as
a GUID in the function index, instead of storing function names. Any
local functions are first given a global name by prepending the original
source file name. This is the same naming scheme and GUID used by PGO in
the indexed profile format.
This change has a couple of benefits. The primary benefit is size
reduction in the combined index file, for example 483.xalancbmk's
combined index file was reduced by around 70%. It should also result in
memory savings for the index file in memory, as the in-memory map is
also indexed by the hash instead of the string.
Second, this enables integration with indirect call promotion, since the
indirect call profile targets are recorded using the same global naming
convention and hash. This will enable the function importer to easily
locate function summaries for indirect call profile targets to enable
their import and subsequent promotion.
The original source file name is recorded in the bitcode in a new
module-level record for use in the ThinLTO backend pipeline.
Reviewers: davidxl, joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D17028
llvm-svn: 260408
This patch reads the indirect-call value records in the profile and makes the
annotation in the indirect-call instruction. This is for IR level profile
instrumentation.
Differential Revision: http://reviews.llvm.org/D16935
llvm-svn: 260400
Summary:
As discussed on IRC, move the ThinLTOGlobalProcessing code out of
the linker, and into TransformUtils. The name of the class is changed
to FunctionImportGlobalProcessing.
Reviewers: joker.eph, rafael
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D17081
llvm-svn: 260395
This patch uses one bit in profile version to differentiate Clang
instrumentation and IR level instrumentation profiles.
PGOInstrumentation generates a COMDAT variable __llvm_profile_raw_version so
that the compiler runtime can set the right profile kind.
For the Mach-O platform, we generate the variable with linkonce_odr linkage as
COMDAT is not supported.
PGOInstrumentation now checks this bit to make sure it's an IR level
instrumentation profile.
The patch was submitted as r260164 but reverted due to a Darwin test breakage.
Original Differential Revision: http://reviews.llvm.org/D15540
Differential Revision: http://reviews.llvm.org/D17020
llvm-svn: 260385
Summary:
Tests for this will be added once the AMDGPU backend enables this
option.
Reviewers: arsenm
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16602
llvm-svn: 260336
Summary:
Remove the convergent attribute on any functions which provably do not
contain or invoke any convergent functions.
After this change, we'll be able to modify clang to conservatively add
'convergent' to all functions when compiling CUDA.
Reviewers: jingyue, joker.eph
Subscribers: llvm-commits, tra, jhen, hfinkel, resistor, chandlerc, arsenm
Differential Revision: http://reviews.llvm.org/D17013
llvm-svn: 260319
This pass implements whole program optimization of virtual calls in cases
where we know (via bitset information) that the list of callees is fixed. This
includes the following:
- Single implementation devirtualization: if a virtual call has a single
possible callee, replace all calls with a direct call to that callee (see
the IR sketch after this list).
- Virtual constant propagation: if the virtual function's return type is an
integer <=64 bits and all possible callees are readnone, for each class and
each list of constant arguments: evaluate the function, store the return
value alongside the virtual table, and rewrite each virtual call as a load
from the virtual table.
- Uniform return value optimization: if the conditions for virtual constant
propagation hold and each function returns the same constant value, replace
each virtual call with that constant.
- Unique return value optimization for i1 return values: if the conditions
for virtual constant propagation hold and a single vtable's function
returns 0, or a single vtable's function returns 1, replace each virtual
call with a comparison of the vptr against that vtable's address.
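As a stripped-down illustration of the single-implementation case (the real
pass consumes bitset metadata emitted by the frontend, omitted here, and the
callee name @_ZN1A3fooEv is made up):
define i32 @call_virtual(i8** %obj) {
entry:
  %vtable = load i8*, i8** %obj
  %fptr.addr = bitcast i8* %vtable to i32 (i8**)**
  %fptr = load i32 (i8**)*, i32 (i8**)** %fptr.addr
  ; before: indirect call through the vtable slot
  %result = call i32 %fptr(i8** %obj)
  ; after the pass proves @_ZN1A3fooEv is the only possible callee, this becomes:
  ;   %result = call i32 @_ZN1A3fooEv(i8** %obj)
  ret i32 %result
}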
Differential Revision: http://reviews.llvm.org/D16795
llvm-svn: 260312
We introduced gc.relocates of vector-of-pointer types a couple of weeks back. Somehow, I missed updating the InstCombine rule to account for this. If we hit this code path with a vector-of-pointers gc.relocate, we'd crash on a cast<PointerType>.
I also took the chance to do a bit of code style cleanup.
llvm-svn: 260279
FunctionAttrs does an "optimistic" analysis of SCCs as a unit, which
means normally it is able to disregard calls from an SCC into itself.
However, calls and invokes with operand bundles are allowed to have
memory effects not fully described by the memory effects on the call
target, so we can't be optimistic around operand-bundled calls from an
SCC into itself.
llvm-svn: 260244
Summary:
Passes that call `getAnalysisIfAvailable<T>` also need to call
`addUsedIfAvailable<T>` in `getAnalysisUsage` to indicate to the
legacy pass manager that it uses `T`. This contract was being
violated by passes that used `createLegacyPMAAResults`. This change
fixes this by exposing a helper in AliasAnalysis.h,
`addUsedAAAnalyses`, that is complementary to createLegacyPMAAResults
and does the right thing when called from `getAnalysisUsage`.
Reviewers: chandlerc
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D17010
llvm-svn: 260183
Summary:
The Unrolling Analyzer is already pretty complicated, and it becomes harder and harder to exercise it with the usual IR tests, since with them we can only check the final decision: whether the loop is unrolled or not. This change factors this framework out of LoopUnrollPass into the analyses, which allows it to be exercised with unit tests.
The change itself is supposed to be NFC, except adding a couple of tests.
I plan to add more tests as I add new functionality and find/fix bugs.
Reviewers: chandlerc, hfinkel, sanjoy
Subscribers: zzheng, sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D16623
llvm-svn: 260169
This patch uses one bit in profile version to differentiate Clang
instrumentation and IR level instrumentation profiles.
PGOInstrumentation generates a COMDAT variable __llvm_profile_raw_version so
that the compiler runtime can set the right profile kind.
PGOInstrumentation now checks this bit to make sure it's an IR level
instrumentation profile.
Differential Revision: http://reviews.llvm.org/D15540
llvm-svn: 260146
This reduces sizes of instrumented object files, final binaries,
process images, and raw profile data.
The format of the indexed profile data remains the same.
Differential Revision: http://reviews.llvm.org/D16388
llvm-svn: 260117
sanitizer issue. The PredicatedScalarEvolution's copy constructor
wasn't copying the Generation value and was leaving it uninitialized.
Original commit message:
[SCEV][LAA] Add no wrap SCEV predicates and use use them to improve strided pointer detection
Summary:
This change adds no wrap SCEV predicates with:
- support for runtime checking
- support for expression rewriting:
sext({x,+,y}) -> {sext(x),+,sext(y)}
zext({x,+,y}) -> {zext(x),+,sext(y)}
Note that we are sign extending the increment of the SCEV, even for
the zext case. This is needed to cover the fairly common case where y would
be a (small) negative integer. In order to do this, this change adds two new
flags: nusw and nssw that are applicable to AddRecExprs and permit the
transformations above.
We also change isStridedPtr in LAA to be able to make use of
these predicates. With this feature we should now always be able to
work around overflow issues in the dependence analysis.
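As an illustrative example (not taken from the patch's tests), the index below
only folds to an affine AddRec in i64 if the narrow i32 recurrence does not
wrap; the nusw/nssw predicates let us assume that and, when needed, verify it
with a runtime check:
define void @strided(float* %p, i32 %start, i32 %stride, i32 %n) {
entry:
  br label %loop
loop:
  %iv = phi i32 [ %start, %entry ], [ %iv.next, %loop ]
  %cnt = phi i32 [ 0, %entry ], [ %cnt.next, %loop ]
  %idx = zext i32 %iv to i64
  %addr = getelementptr float, float* %p, i64 %idx
  store float 0.0, float* %addr
  %iv.next = add i32 %iv, %stride
  %cnt.next = add i32 %cnt, 1
  %done = icmp eq i32 %cnt.next, %n
  br i1 %done, label %exit, label %loop
exit:
  ret void
}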
Reviewers: mzolotukhin, sanjoy, anemet
Subscribers: mzolotukhin, sanjoy, llvm-commits, rengolin, jmolloy, hfinkel
Differential Revision: http://reviews.llvm.org/D15412
llvm-svn: 260112
Change a return statement of ComputeValueKnownInPredecessors() to match the
other return statements of the function. Otherwise, it might return true with
an empty Result when the current basic block has no predecessors and trigger the
first assert of JumpThreading::ProcessThreadableEdges().
llvm-svn: 260110
We shouldn't assert when there are no memchecks, since we
can have SCEV checks. There is already an assert covering
the case where there are no SCEV checks or memchecks.
This also changes the LAA pointer wrapping versioning test
to use the loop versioning pass (this was how I managed to
trigger the assert in the loop versioning pass).
llvm-svn: 260086
Summary:
This change adds no wrap SCEV predicates with:
- support for runtime checking
- support for expression rewriting:
sext({x,+,y}) -> {sext(x),+,sext(y)}
zext({x,+,y}) -> {zext(x),+,sext(y)}
Note that we are sign extending the increment of the SCEV, even for
the zext case. This is needed to cover the fairly common case where y would
be a (small) negative integer. In order to do this, this change adds two new
flags: nusw and nssw that are applicable to AddRecExprs and permit the
transformations above.
We also change isStridedPtr in LAA to be able to make use of
these predicates. With this feature we should now always be able to
work around overflow issues in the dependence analysis.
Reviewers: mzolotukhin, sanjoy, anemet
Subscribers: mzolotukhin, sanjoy, llvm-commits, rengolin, jmolloy, hfinkel
Differential Revision: http://reviews.llvm.org/D15412
llvm-svn: 260085
As discussed in https://github.com/google/sanitizers/issues/398, with the current
implementation of poisoning globals we can get CHECK failures or false
positives when instrumented and non-instrumented code are mixed, because ASan
poisons innocent globals from the non-sanitized binary/library. We can use private
aliases to avoid such errors. In addition, to preserve ODR violation detection,
we introduce a new __odr_asan_gen_XXX symbol for each instrumented global that
indicates whether this global was already registered. To detect an ODR violation
at runtime, we only need to check the value of the indicator and report an error
if it isn't equal to zero.
Differential Revision: http://reviews.llvm.org/D15642
llvm-svn: 260075
Summary:
When alias analysis is uncertain about the aliasing between any two accesses,
it will return MayAlias. This uncertainty from alias analysis restricts LICM
from proceeding further. In cases where alias analysis is uncertain we might
use loop versioning as an alternative.
Loop Versioning will create a version of the loop with aggressive aliasing
assumptions in addition to the original with conservative (default) aliasing
assumptions. The version of the loop making aggressive aliasing assumptions
will have all the memory accesses marked as no-alias. These two versions of
loop will be preceded by a memory runtime check. This runtime check consists
of bound checks for all unique memory accessed in loop, and it ensures the
lack of memory aliasing. The result of the runtime check determines which of
the loop versions is executed: If the runtime check detects any memory
aliasing, then the original loop is executed. Otherwise, the version with
aggressive aliasing assumptions is used.
The pass is off by default and can be enabled with command line option
-enable-loop-versioning-licm.
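Schematically, for a loop that reads from %b and writes to %a, the pass sets up
a CFG of the following shape (bound check simplified and scoped no-alias
metadata omitted; names are illustrative):
define void @copy(i32* %a, i32* %b, i64 %n) {
entry:
  %a.end = getelementptr i32, i32* %a, i64 %n
  %b.end = getelementptr i32, i32* %b, i64 %n
  %no.overlap.1 = icmp ule i32* %a.end, %b
  %no.overlap.2 = icmp ule i32* %b.end, %a
  %no.alias = or i1 %no.overlap.1, %no.overlap.2
  ; the runtime memory check picks the loop version (assumes %n > 0)
  br i1 %no.alias, label %loop.aggressive, label %loop.orig
loop.aggressive:                  ; all accesses treated as no-alias
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop.aggressive ]
  %src = getelementptr i32, i32* %b, i64 %i
  %v = load i32, i32* %src
  %dst = getelementptr i32, i32* %a, i64 %i
  store i32 %v, i32* %dst
  %i.next = add nuw i64 %i, 1
  %i.done = icmp eq i64 %i.next, %n
  br i1 %i.done, label %exit, label %loop.aggressive
loop.orig:                        ; original loop, conservative aliasing
  %j = phi i64 [ 0, %entry ], [ %j.next, %loop.orig ]
  %src2 = getelementptr i32, i32* %b, i64 %j
  %w = load i32, i32* %src2
  %dst2 = getelementptr i32, i32* %a, i64 %j
  store i32 %w, i32* %dst2
  %j.next = add nuw i64 %j, 1
  %j.done = icmp eq i64 %j.next, %n
  br i1 %j.done, label %exit, label %loop.orig
exit:
  ret void
}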
Reviewers: hfinkel, anemet, chatur01, reames
Subscribers: MatzeB, grosser, joker.eph, sanjoy, javed.absar, sbaranga,
llvm-commits
Differential Revision: http://reviews.llvm.org/D9151
llvm-svn: 259986
In r255133 (reapplied r253126) we started to avoid redundant
recomputation of LCSSA after loop-unrolling. This patch moves one step
further in this direction - now we can avoid it for a much wider range of
loops, as we start to look at the IR and try to figure out whether the
transformation actually breaks LCSSA phis or makes it necessary to
insert new ones.
Differential Revision: http://reviews.llvm.org/D16838
llvm-svn: 259869
Summary:
Passing the rematerialized values map to insertRematerializationStores by
value looks to be a simple oversight; update it to pass by reference.
Reviewers: reames, sanjoy
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16911
llvm-svn: 259867
Current SCEV expansion will expand a SCEV as a sequence of operations
and doesn't reuse values that already exist. This introduces
redundant computation which may not be cleaned up thoroughly by
subsequent optimizations.
This patch introduces an ExprValueMap which is a map from SCEV to the
set of equal values with the same SCEV. When a SCEV is expanded, the
set of values is checked and reused whenever possible before generating
a sequence of operations.
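For a minimal illustration (names made up): if %sum below already computes the
SCEV (%x + %y), a later expansion of that SCEV reuses %sum instead of
materializing a second add:
define i64 @reuse(i64 %x, i64 %y) {
entry:
  %sum = add i64 %x, %y
  ; without ExprValueMap the expander could emit another "add i64 %x, %y" here;
  ; with the map it simply returns the cached %sum.
  ret i64 %sum
}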
The original commit triggered regressions in Polly tests. The regressions
exposed two problems which have been fixed in current version.
1. Polly will generate a new function based on the old one. To generate an
instruction for the new function, it builds a SCEV for the old instruction,
applies some transformation to the generated SCEV, then expands the transformed
SCEV and inserts the expanded value into the new function. Because SCEV expansion
may reuse a value cached in ExprValueMap, a value from the old function may be
inserted into the new function, which is wrong.
In SCEVExpander::expand, there is logic to check that the cached value to
be reused dominates the insertion point. However, for the above
case, the check always passes. That is because the insertion point is
in a new function, which is unreachable from the old function, and
for an unreachable node DominatorTreeBase::dominates considers it to be
dominated by any other node.
The fix is to simply add a check that the cached value to be used in
expansion is in the same function as the insertion point instruction.
2. When the SCEV is of scConstant type, expanding it directly is cheaper than
reusing a cached value. Although the cached value set in ExprValueMap may
contain a Constant, it is not easy to find it -- the cached
Value set is not sorted according to the potential cost, and the existing reuse
logic in SCEVExpander::expand simply chooses the first legal element from the
cached value set.
The fix is that when the SCEV is of scConstant type, we don't try the reuse
logic and simply expand it directly.
Differential Revision: http://reviews.llvm.org/D12090
llvm-svn: 259736
D16251)
Summary:
This is a simpler fix to the problem than the dominator approach in
http://reviews.llvm.org/D16251. It only adds values to the gather() while-loop
worklist that have not been seen before.
The actual endless loop is in the constant compare gather() routine in
Utils/SimplifyCFG.cpp. The same value ret.0.off0.i is pushed back into the
queue:
%.ret.0.off0.i = or i1 %.ret.0.off0.i, %cmp10.i
Here is what happens at the IR level:
for.cond.i: ; preds = %if.end6.i, %if.end.i54
%ix.0.i = phi i32 [ 0, %if.end.i54 ], [ %inc.i55, %if.end6.i ]
%ret.0.off0.i = phi i1 [false, %if.end.i54], [%.ret.0.off0.i, %if.end6.i] <<<
%cmp2.i = icmp ult i32 %ix.0.i, %11
br i1 %cmp2.i, label %for.body.i, label %LBJ_TmpSimpleNeedExt.exit
if.end6.i: ; preds = %for.body.i
%cmp10.i = icmp ugt i32 %conv.i, %add9.i
%.ret.0.off0.i = or i1 %ret.0.off0.i, %cmp10.i <<<
When if.end.i54 gets eliminated, the definition of %ret.0.off0.i is removed and
the phi collapses to its remaining incoming value. The result is the expression
%.ret.0.off0.i = or i1 %.ret.0.off0.i, %cmp10.i
(note the first ‘or’ operand is now %.ret.0.off0.i, and *NOT* %ret.0.off0.i).
Now there is a use of %.ret.0.off0.i before a definition, which triggers the
“endless” loop in gather():
while (!DFT.empty()) {
  V = DFT.pop_back_val(); // V is .ret.0.off0.i
  if (Instruction *I = dyn_cast<Instruction>(V)) {
    // If it is a || (or && depending on isEQ), process the operands.
    if (I->getOpcode() == (isEQ ? Instruction::Or : Instruction::And)) {
      DFT.push_back(I->getOperand(1)); // This is now .ret.0.off0.i also
      DFT.push_back(I->getOperand(0));
      continue; // “endless loop” for .ret.0.off0.i
    }
Reviewers: reames, ahatanak
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D16839
llvm-svn: 259730
Bail out if we have a PHI on an EHPad that gets a value from a
CatchSwitchInst. Because the CatchSwitchInst cannot be split, there is
no good place to stick any instructions.
This fixes PR26373.
llvm-svn: 259702
According to git bisect, this is the root cause of a miscompile for Regex in
libLLVMSupport. I am still working on reducing a test case.
The actual bug may be elsewhere and this commit just exposed it.
Anyway, at the moment, to reproduce, follow these steps:
1. Build clang and libLTO in release mode.
2. Create a new build directory <stage2> and cd into it.
3. Use clang and libLTO from #1 to build llvm-extract in Release mode + asserts
using -O2 -flto
4. Run llvm-extract -ralias '.*bar' -S test/Other/extract-alias.ll
Result:
program doesn't contain global named '.*bar'!
Expected result:
@a0a0bar = alias void ()* @bar
@a0bar = alias void ()* @bar
declare void @bar()
Note: In step #3, if you don't use lto or asserts, the miscompile disappears.
llvm-svn: 259674
Current SCEV expansion will expand a SCEV as a sequence of operations
and doesn't reuse values that already exist. This introduces
redundant computation which may not be cleaned up thoroughly by
subsequent optimizations.
This patch introduces an ExprValueMap which is a map from SCEV to the
set of equal values with the same SCEV. When a SCEV is expanded, the
set of values is checked and reused whenever possible before generating
a sequence of operations.
Differential Revision: http://reviews.llvm.org/D12090
llvm-svn: 259662
Summary:
LoopVersioning is a transform utility that transform passes can use to
disambiguate may-aliasing accesses at run time. I'd like to also expose it as
a pass to allow it to be unit-tested.
I am planning to add support for non-aliasing annotation in
LoopVersioning and I'd like to be able to write tests directly using
this pass.
(After that feature is done, the pass could also be used to look for
optimization opportunities that are hidden behind incomplete alias
information at compile time.)
The pass drives LoopVersioning in its default way which is to fully
disambiguate may-aliasing accesses no matter how many checks are
required.
Reviewers: hfinkel, ashutosh.nema, sbaranga
Subscribers: zzheng, mssimpso, llvm-commits, sanjoy
Differential Revision: http://reviews.llvm.org/D16612
llvm-svn: 259610
Please see include/llvm/Transforms/Utils/MemorySSA.h for a description
of MemorySSA, and what it does.
Differential Revision: http://reviews.llvm.org/D7864
llvm-svn: 259595
A masked store with a zero mask means there's no store.
A masked store with an allOnes mask means it's a normal vector store.
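For illustration (intrinsic signatures written as in the LLVM of this era;
exact name mangling may differ in later versions), the two folds look like this:
declare void @llvm.masked.store.v4i32(<4 x i32>, <4 x i32>*, i32, <4 x i1>)
define void @folds(<4 x i32> %v, <4 x i32>* %p) {
  ; all-zero mask: nothing is stored, the call is simply deleted
  call void @llvm.masked.store.v4i32(<4 x i32> %v, <4 x i32>* %p, i32 4, <4 x i1> zeroinitializer)
  ; all-ones mask: becomes "store <4 x i32> %v, <4 x i32>* %p, align 4"
  call void @llvm.masked.store.v4i32(<4 x i32> %v, <4 x i32>* %p, i32 4, <4 x i1> <i1 true, i1 true, i1 true, i1 true>)
  ret void
}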
This is a continuation of:
http://reviews.llvm.org/rL259369
llvm-svn: 259392
This miscompile came about because we tried to use a transform which was
only appropriate for xor operators when addition was present.
This fixes PR26407.
llvm-svn: 259375
A masked load with a zero mask means there's no load.
A masked load with an allOnes mask means it's a normal vector load.
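For illustration (intrinsic signatures written as in the LLVM of this era;
exact name mangling may differ in later versions), the two folds look like this:
declare <4 x i32> @llvm.masked.load.v4i32(<4 x i32>*, i32, <4 x i1>, <4 x i32>)
define <4 x i32> @folds(<4 x i32>* %p, <4 x i32> %passthru) {
  ; all-zero mask: no load happens, the result is just %passthru
  %r0 = call <4 x i32> @llvm.masked.load.v4i32(<4 x i32>* %p, i32 4, <4 x i1> zeroinitializer, <4 x i32> %passthru)
  ; all-ones mask: becomes "load <4 x i32>, <4 x i32>* %p, align 4"
  %r1 = call <4 x i32> @llvm.masked.load.v4i32(<4 x i32>* %p, i32 4, <4 x i1> <i1 true, i1 true, i1 true, i1 true>, <4 x i32> %passthru)
  %r = add <4 x i32> %r0, %r1
  ret <4 x i32> %r
}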
Differential Revision: http://reviews.llvm.org/D16691
llvm-svn: 259369
In the future, we will vectorize recurrences other than reductions. This patch
renames a few variables and updates their associated comments to enable them to
be reused for non-reduction PHI nodes.
This change was requested in the review for D16197.
llvm-svn: 259364