Commit Graph

6511 Commits

Author SHA1 Message Date
Reid Kleckner a4de84604a Fix the clang-cl self-host with VS 2013 headers
std::numeric_limits<int64_t>::max() is not constexpr in VC 2013 headers,
and Clang complains that it isn't. MSVC 2013 itself is emitting a
dynamic initializer for this thing. Instead, use an enum.
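
A minimal sketch of the enum workaround (the constant name below is
hypothetical, not the identifier used in the tree):

  // Hypothetical illustration only.
  #include <cstdint>

  // Before: not a constant expression with VS 2013 headers, so MSVC emits a
  // dynamic initializer and clang-cl rejects constexpr uses of it.
  // static const int64_t kSentinel = std::numeric_limits<int64_t>::max();

  // After: an enumerator is always an integral constant expression.
  enum : int64_t { kSentinel = INT64_MAX };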

llvm-svn: 276334
2016-07-21 21:06:04 +00:00
George Burgess IV 825a868796 Normalize file docs. NFC.
Having the added `\brief` made doxygen interpret it as the summary for
the `llvm` namespace (visible at:
http://llvm.org/doxygen/namespaces.html).

llvm-svn: 276331
2016-07-21 20:52:35 +00:00
Benjamin Kramer a9e477b286 [DemandedBits] Reduce number of duplicated DenseMap lookups.
No functionality change intended.

llvm-svn: 276278
2016-07-21 13:37:55 +00:00
Adam Nemet cbe2a9b213 [OptDiag] Missed these when making the IR Value a const pointer
llvm-svn: 276224
2016-07-21 01:11:12 +00:00
Adam Nemet 7cfd5971ab [OptDiag,LV] Add hotness attribute to applied-optimization remarks
Test coverage is provided by modifying the function in the FP-math
testcase that we are allowed to vectorize.

llvm-svn: 276223
2016-07-21 01:07:13 +00:00
Adam Nemet 0e0e2d5d26 [OptDiag,LV] Add hotness attribute to the derived analysis remarks
This includes FPCompute and Aliasing.

Testcase is based on no_fpmath.ll.

llvm-svn: 276211
2016-07-20 23:50:32 +00:00
Sanjay Patel 5f3c70307d [InstSimplify][InstCombine] don't crash when folding vector selects of icmp
Differential Revision: https://reviews.llvm.org/D22602

llvm-svn: 276209
2016-07-20 23:40:01 +00:00
George Burgess IV 6febd834b6 [CFLAA] Add offset tracking in CFLGraph.
(Also, refactor our constexpr handling to be less insane).

This patch lets us track field offsets in the CFL Graph, which is the
first step to making CFLAA field/offset sensitive. Woohoo! Note that
this patch shouldn't visibly change our behavior (since we make no use
of the offsets we're now tracking), so we can't quite add tests for this
yet.

Patch by Jia Chen.

Differential Revision: https://reviews.llvm.org/D22598

llvm-svn: 276201
2016-07-20 22:53:30 +00:00
Adam Nemet 5b3a5cf6b0 [OptDiag,LV] Add hotness attribute to analysis remarks
The earlier change added hotness attribute to missed-optimization
remarks.  This follows up with the analysis remarks (the ones explaining
the reason for the missed optimization).

llvm-svn: 276192
2016-07-20 21:44:26 +00:00
Adam Nemet 6100d16e7d [OptDiag] Take the IR Value as a const pointer
This helps because LoopAccessReport is passed around as a const
reference and we derive the basic block passed as the Value parameter
from the instruction in LoopAccessReport.

llvm-svn: 276191
2016-07-20 21:44:22 +00:00
Adam Nemet 78d1091c91 [OptDiag] Wrap a long line
llvm-svn: 276190
2016-07-20 21:44:18 +00:00
Wei Mi db80c0c77f Use ValueOffsetPair to enhance value reuse during SCEV expansion.
In D12090, the ExprValueMap was added to reuse existing values during SCEV expansion.
However, const folding and sext/zext distribution can still make the reuse difficult.

A simplified case is: suppose we know S1 expands to V1 in ExprValueMap, and
  S1 = S2 + C_a
  S3 = S2 + C_b
where C_a and C_b are different SCEVConstants. Then we'd like to expand S3 as
V1 - C_a + C_b instead of expanding S2 literally. It is helpful when S2 is a
complex SCEV expr and S2 has no entry in ExprValueMap, which is usually caused
by the fact that S3 is generated from S1 after const folding.

In order to do that, we represent ExprValueMap as a mapping from SCEV to
ValueOffsetPair. We will save both S1->{V1, 0} and S2->{V1, C_a} into the
ExprValueMap when we create SCEV for V1. When S3 is expanded, it will first
expand S2 to V1 - C_a because of S2->{V1, C_a} in the map, then expand S3 to
V1 - C_a + C_b.
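
A rough sketch of this bookkeeping, using stand-in types rather than the real
SCEV classes:

  // Stand-in types only; the real code uses llvm::SCEV and llvm::Value.
  #include <cstdint>
  #include <map>
  #include <utility>

  struct SCEVExpr {};   // placeholder for a SCEV expression
  struct Value {};      // placeholder for an IR value
  using ValueOffsetPair = std::pair<Value *, int64_t>;

  std::map<const SCEVExpr *, ValueOffsetPair> ExprValueMap;

  void recordExpansion(const SCEVExpr *S1, const SCEVExpr *S2, Value *V1,
                       int64_t C_a) {
    // When V1 is created for S1 = S2 + C_a, remember both facts:
    ExprValueMap[S1] = {V1, 0};    // S1 expands directly to V1
    ExprValueMap[S2] = {V1, C_a};  // S2 expands to V1 - C_a
    // Expanding S3 = S2 + C_b can then reuse V1 as (V1 - C_a) + C_b instead
    // of re-materializing the possibly complex expression S2.
  }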

Differential Revision: https://reviews.llvm.org/D21313

llvm-svn: 276136
2016-07-20 16:40:33 +00:00
Sean Silva e3c18a5ae8 [PM] Port LoopUnroll.
We just always set PreserveLCSSA to true since we don't have an
analogous method `mustPreserveAnalysisID(LCSSA)`.

Also port LoopInfo verifier pass to test LoopUnrollPass.

llvm-svn: 276063
2016-07-19 23:54:23 +00:00
George Burgess IV 22a0f1a0b9 Attempt to appease MSVC buildbots.
Broken by r276026.

llvm-svn: 276032
2016-07-19 21:35:47 +00:00
George Burgess IV 3b059841ff [CFLAA] Add some interproc. analysis to CFLAnders.
This patch adds function summary support to CFLAnders. It also comes
with a lot of tests! Woohoo!

Patch by Jia Chen.

Differential Revision: https://reviews.llvm.org/D22450

llvm-svn: 276026
2016-07-19 20:47:15 +00:00
George Burgess IV c01b42faa5 [CFLAA] Teach CFLAnders to distinguish reads from writes.
This patch adds more specific edges to CFLAndersAliasAnalysis. The goal
of these edges is to give us more information about *how* two values
that MayAlias alias. With this, we can now tell cases like

a = b; // ergo, a may alias b

apart from

a = c;
b = c;

// so, a may alias b, but only because they were both assigned to c.

...And others.

Patch by Jia Chen.

Differential Revision: https://reviews.llvm.org/D22429

llvm-svn: 276023
2016-07-19 20:38:21 +00:00
David Majnemer f29b7bafb6 [RegionPass] Some minor cleanups
No functional change is intended.

llvm-svn: 276000
2016-07-19 17:50:27 +00:00
David Majnemer 1a4576e79d [LoopPass] Some minor cleanups
No functional change is intended.

llvm-svn: 275999
2016-07-19 17:50:24 +00:00
Simon Pilgrim 0ea8d275cc [X86][SSE] Reimplement SSE fp2si conversion intrinsics instead of using generic IR
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR instead.

It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems: INF/NAN/out-of-range values are guaranteed to result in a 0x80000000 value, which plays havoc with constant folding (which converts them to either zero or UNDEF). This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).

This patch changes both scalar and packed versions back to using x86-specific builtins.

It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.
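
As a rough model of that x86 behaviour (the helper and the bounds below are
approximate and for illustration only):

  #include <cmath>
  #include <cstdint>
  #include <iostream>
  #include <limits>

  // Rough model of CVTTSS2SI-style truncation: NaN, infinities, and
  // out-of-range inputs all yield the "integer indefinite" value 0x80000000,
  // whereas a plain cast would be undefined behaviour for those inputs.
  int32_t cvttLikeX86(float F) {
    if (std::isnan(F) || F >= 2147483648.0f || F < -2147483648.0f)
      return std::numeric_limits<int32_t>::min();  // 0x80000000
    return static_cast<int32_t>(F);                // truncate toward zero
  }

  int main() {
    std::cout << std::hex
              << static_cast<uint32_t>(cvttLikeX86(NAN)) << "\n"     // 80000000
              << static_cast<uint32_t>(cvttLikeX86(1e20f)) << "\n"   // 80000000
              << static_cast<uint32_t>(cvttLikeX86(42.9f)) << "\n";  // 2a
  }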

A companion clang patch is at D22105

Differential Revision: https://reviews.llvm.org/D22106

llvm-svn: 275981
2016-07-19 15:07:43 +00:00
Sanjay Patel 5f5eb58eb5 refactor SimplifySelectInst; NFCI
llvm-svn: 275911
2016-07-18 20:56:53 +00:00
Adam Nemet 79ac42a5c9 [OptRemarkEmitter] Port to new PM
Summary:
The main goal is to able to start using the new OptRemarkEmitter
analysis from the LoopVectorizer.  Since the vectorizer was recently
converted to the new PM, it makes sense to convert this analysis as
well.

This pass is currently tested through the LoopDistribution pass, so I am
also porting LoopDistribution to get coverage for this analysis with the
new PM.

Reviewers: davidxl, silvas

Subscribers: llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22436

llvm-svn: 275810
2016-07-18 16:29:21 +00:00
Teresa Johnson cd21a646f6 [ThinLTO] Perform profile-guided indirect call promotion
Summary:
To enable profile-guided indirect call promotion in ThinLTO mode, we
simply add call graph edges for each profitable target from the profile
to the summaries, then the summary-guided importing will consider the
callee for importing as usual.

Also we need to enable the indirect call promotion pass creation in the
PassManagerBuilder when PerformThinLTO=true (we are in the ThinLTO
backend), so that the newly imported functions are considered for
promotion in the backends.

The IC promotion profiles refer to callees by GUID, which required
adding GUIDs to the per-module VST in bitcode (and assigning them
valueIds similar to how they are assigned valueIds in the combined
index).

Reviewers: mehdi_amini, xur

Subscribers: mehdi_amini, davidxl, llvm-commits

Differential Revision: http://reviews.llvm.org/D21932

llvm-svn: 275707
2016-07-17 14:47:01 +00:00
Dehao Chen 1a44452b11 [PM] Convert IVUsers analysis to new pass manager.
Summary: Convert IVUsers analysis to new pass manager.

Reviewers: davidxl, silvas

Subscribers: junbuml, sanjoy, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22434

llvm-svn: 275698
2016-07-16 22:51:33 +00:00
George Burgess IV 22682e293b [CFLAA] Add attributes handling for CFLAnders.
This patch adds proper handling of stratified attributes into our
anders-style CFLAA implementation. It also comes bundled with more
CFLAnders tests. :)

Patch by Jia Chen.

Differential Revision: https://reviews.llvm.org/D22325

llvm-svn: 275604
2016-07-15 20:02:49 +00:00
George Burgess IV 6d30aa03a0 [CFLAA] Add an initial CFLAnders implementation.
This adds an incomplete anders-style implementation for CFLAA. It's
incomplete in that it's missing interprocedural analysis, attrs
handling, etc. and that it needs more tests. More tests and features
will be added in future commits.

Patch by Jia Chen.

Differential Revision: https://reviews.llvm.org/D22291

llvm-svn: 275602
2016-07-15 19:53:25 +00:00
Adam Nemet aad816083e [OptRemark,LDist] RFC: Add hotness attribute
Summary:
This is the first set of changes implementing the RFC from
http://thread.gmane.org/gmane.comp.compilers.llvm.devel/98334

This is a cross-sectional patch; rather than implementing the hotness
attribute for all optimization remarks and all passes in a patch set, it
implements it for the 'missed-optimization' remark for Loop
Distribution.  My goal is to shake out the design issues before scaling
it up to other types and passes.

Hotness is computed as an integer: the product of the block frequency
and the function entry count.  It's only printed in opt
currently since clang prints the diagnostic fields directly.  E.g.:

  remark: /tmp/t.c:3:3: loop not distributed: use -Rpass-analysis=loop-distribute for more info (hotness: 300)
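
With made-up profile numbers (illustrative only, chosen to reproduce the 300
above), the computation is simply:

  #include <cstdint>
  #include <iostream>

  int main() {
    uint64_t EntryCount = 100;   // function entry count from the profile
    uint64_t RelBlockFreq = 3;   // block frequency relative to the entry block
    std::cout << "hotness: " << EntryCount * RelBlockFreq << "\n";  // hotness: 300
  }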

A new API added is similar to emitOptimizationRemarkMissed.  The
difference is that it additionally takes a code region that the
diagnostic corresponds to.  From this, hotness is computed using BFI.
The new API is exposed via an analysis pass so that it can be made
dependent on LazyBFI.  (Thanks to Hal for the analysis pass idea.)

This feature can all be enabled by setDiagnosticHotnessRequested in the
LLVM context.  If this is off, LazyBFI is not calculated (D22141) so
there should be no overhead.

A new command-line option is added to turn this on in opt.

My plan is to switch all users of emitOptimizationRemark* to use this
module instead.

Reviewers: hfinkel

Subscribers: rcox2, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D21771

llvm-svn: 275583
2016-07-15 17:23:20 +00:00
David Majnemer a940f360cb [AliasAnalysis] Give back AA results for fence instructions
Calling getModRefInfo with a fence resulted in crashes because fences
don't have a memory location.  Add a new predicate to Instruction
called isFenceLike which indicates that the instruction mutates memory
but not any single memory location in particular. In practice, it is a
proxy for the set of instructions which "mayWriteToMemory" but cannot be
used with MemoryLocation::get.

This fixes PR28570.

llvm-svn: 275581
2016-07-15 17:19:24 +00:00
Igor Laevsky ee40d1e8da Re-submit r272891 "Prevent dangling pointer problems in BranchProbabilityInfo"
The problem was most probably caused by the same issue as PR28400. This change
bypasses it by using CallbackVH instead of AssertingVH.

Differential Revision: https://reviews.llvm.org/D20957

llvm-svn: 275563
2016-07-15 14:31:16 +00:00
Sanjoy Das b66374c634 [ValueTracking] Use Instruction::getFunction; NFC
llvm-svn: 275465
2016-07-14 20:19:01 +00:00
Tom Stellard 1b5cf6217e GlobalsAA: Functions with the argmemonly attribute won't read arbitrary globals
Summary:
In preparation for changing GlobalsAA to stop assuming that intrinsics
can't read arbitrary globals, we need to make sure GlobalsAA is querying
function attributes rather than relying on this assumption.

This patch was inspired by: http://reviews.llvm.org/D20206

Reviewers: jmolloy, hfinkel

Subscribers: eli.friedman, llvm-commits

Differential Revision: https://reviews.llvm.org/D21318

llvm-svn: 275433
2016-07-14 15:50:27 +00:00
Sjoerd Meijer 38c2cd0c14 This implements a more optimal algorithm for selecting a base constant in
constant hoisting. It not only takes into account the number of uses and the
cost of expressions in which constants appear, but now also the resulting
integer range of the offsets. Thus, the algorithm maximizes the number of uses
within an integer range that will enable more efficient code generation. On
ARM, for example, this will enable code size optimisations because fewer
negative offsets will be created. Negative offsets/immediates are not supported
by Thumb1, which prevents more compact instruction encoding.
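
One aspect of that heuristic, sketched with made-up constants (the real
algorithm also weighs use counts and expression costs):

  #include <algorithm>
  #include <cstdint>
  #include <iostream>
  #include <vector>

  int main() {
    // Constants that several uses need to materialize.
    std::vector<int64_t> Uses = {1000, 1004, 1012, 1020};
    // Rebasing on the minimum gives offsets {0, 4, 12, 20}; rebasing on 1012
    // would give {-12, -8, 0, 8}, and Thumb1 cannot encode the negative ones.
    int64_t Base = *std::min_element(Uses.begin(), Uses.end());
    for (int64_t C : Uses)
      std::cout << (C - Base) << " ";  // 0 4 12 20
    std::cout << "\n";
  }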

Differential Revision: http://reviews.llvm.org/D21183

llvm-svn: 275382
2016-07-14 07:44:20 +00:00
David Majnemer 17a95aaa7b Simplify llvm.masked.load w/ undef masks
We can always pick the passthru value if the mask is undef: we are
permitted to treat the mask as if it were filled with zeros.
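
A plain C++ stand-in for the per-lane semantics the fold relies on (not the
LLVM intrinsic itself):

  #include <array>
  #include <cstddef>
  #include <cstdint>

  // Lanes whose mask bit is clear never touch memory and take the passthru
  // element, so an all-false mask (undef treated as all-false) returns the
  // passthru vector unchanged.
  template <std::size_t N>
  std::array<int32_t, N> maskedLoadModel(const int32_t *Ptr,
                                         const std::array<bool, N> &Mask,
                                         const std::array<int32_t, N> &Passthru) {
    std::array<int32_t, N> Result{};
    for (std::size_t I = 0; I < N; ++I)
      Result[I] = Mask[I] ? Ptr[I] : Passthru[I];
    return Result;
  }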

llvm-svn: 275379
2016-07-14 06:58:37 +00:00
David Majnemer 7f781aba97 [ConstantFolding] Fold masked loads
We can constant fold a masked load if the operands are appropriately
constant.

Differential Revision: http://reviews.llvm.org/D22324

llvm-svn: 275352
2016-07-14 00:29:50 +00:00
David Majnemer f89660aba7 [ConstantFolding] Extend FoldReinterpretLoadFromConstPtr to handle negative offsets
Treat loads which clip before the start of a global initializer the same
way we treat clipping beyond the end of the initializer: use zeros.

llvm-svn: 275345
2016-07-13 23:33:07 +00:00
David Majnemer d77a3b61eb Move a transform from InstCombine to InstSimplify.
This transform doesn't require any new instructions, so it can safely live
in InstSimplify.

llvm-svn: 275344
2016-07-13 23:32:53 +00:00
Adam Nemet 7da74abf3d [LAA] Don't hold on to DominatorTree in the analysis result
llvm-svn: 275335
2016-07-13 22:36:35 +00:00
Adam Nemet b49d9a56eb [LAA] Don't hold on to TargetLibraryInfo in the analysis result
llvm-svn: 275334
2016-07-13 22:36:27 +00:00
Adam Nemet 1824e411c6 [LAA] Don't hold on to DataLayout in the analysis result
In fact, don't even pass this to the ctor since we can get it from the
module.

llvm-svn: 275326
2016-07-13 22:18:51 +00:00
Adam Nemet 6616ad08f6 [LAA] Don't hold on to LoopInfo in the analysis result
llvm-svn: 275325
2016-07-13 22:18:48 +00:00
Adam Nemet 1556357677 [LAA] Don't hold on to AliasAnalysis in the analysis result
llvm-svn: 275322
2016-07-13 21:39:09 +00:00
Andrew Kaylor 346dd7f1bd Reverting r275284 due to platform-specific test failures
llvm-svn: 275304
2016-07-13 19:09:16 +00:00
Andrew Kaylor 12cccdd731 Fix for Bug 26903, adds support to inline __builtin_mempcpy
Patch by Sunita Marathe

Differential Revision: http://reviews.llvm.org/D21920

llvm-svn: 275284
2016-07-13 17:25:11 +00:00
David Majnemer 4cff2f8d49 [ConstantFolding] Use sdiv_ov
This is a simplification, there should be no functional change.

llvm-svn: 275273
2016-07-13 15:53:46 +00:00
David Majnemer 1b3db33e3d [ConstantFolding] Don't treat negative GEP offsets as positive
GEP offsets are signed; don't treat them as huge positive numbers.
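
A tiny illustration of the misread being fixed here: reinterpreting a small
negative offset as unsigned yields a huge positive number.

  #include <cstdint>
  #include <iostream>

  int main() {
    int64_t Offset = -8;                               // a signed GEP offset
    uint64_t Misread = static_cast<uint64_t>(Offset);  // treated as unsigned
    std::cout << Misread << "\n";  // 18446744073709551608, far past any object
  }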

llvm-svn: 275251
2016-07-13 05:16:16 +00:00
Adam Nemet c2f791d8a7 [BFI] Add new LazyBFI analysis pass
Summary:
This is necessary for D21771.  In order to add the hotness attribute to
optimization remarks we need BFI to be available in all passes that emit
optimization remarks.

However we don't want to pay for computing BFI unless the hotness
attribute is requested.

This is achieved by making BFI lazy at the very high-level through a new
analysis pass -- BFI is not calculated unless requested.

I am adding a test to check the laziness under D21771 where the first
user of the analysis is added.

Reviewers: hfinkel, dexonsmith, davidxl

Subscribers: davidxl, dexonsmith, llvm-commits

Differential Revision: http://reviews.llvm.org/D22141

llvm-svn: 275250
2016-07-13 05:01:48 +00:00
David Majnemer 90a9704a41 [ConstantFolding] Cleanups
No functional change is intended, just a minor cleanup.

llvm-svn: 275249
2016-07-13 04:22:12 +00:00
David Majnemer 17bdf445e4 [IR] Make getIndexedOffsetInType return a signed result
A GEPed offset can go negative, so the result of getIndexedOffsetInType
should accordingly be a signed type.

llvm-svn: 275246
2016-07-13 03:42:38 +00:00
Keno Fischer 1efc3b70c5 Fix ScalarEvolutionExpander step scaling bug
The expandAddRecExprLiterally function incorrectly transforms
`[Start + Step * X]` into `Step * [Start + X]` instead of the correct
transform of `[Step * X] + Start`.
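
A quick numeric check, with arbitrary values, that the two rewrites are not
equivalent:

  #include <iostream>

  int main() {
    int Start = 5, Step = 3, X = 2;
    std::cout << Start + Step * X << "\n";    // 11: the correct expansion
    std::cout << Step * (Start + X) << "\n";  // 21: the buggy expansion
  }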

This caused https://github.com/JuliaLang/julia/issues/14704#issuecomment-174126219
due to what appeared to be sufficiently complicated loop interactions.

Patch by Jameson Nash (jameson@juliacomputing.com).

Reviewers: sanjoy
Differential Revision: http://reviews.llvm.org/D16505

llvm-svn: 275239
2016-07-13 01:28:12 +00:00
Teresa Johnson 835df56cb3 Remove another unused variable from r275216
Remove another variable added in r275216 that was only used in debug
mode.

llvm-svn: 275238
2016-07-12 23:49:17 +00:00
Teresa Johnson 1e44b5d3ab Refactor indirect call promotion profitability analysis (NFC)
Summary:
Refactored the profitability analysis out of the IC promotion pass and
into lib/Analysis so that it can be accessed by the summary index
builder in a follow-on patch to enable IC promotion in ThinLTO (D21932).

Reviewers: davidxl, xur

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22182

llvm-svn: 275216
2016-07-12 21:13:44 +00:00