Summary:
This seems like the least intrusive way to pass this information
through.
Fixes PR28151
Reviewers: majnemer, aprantl, dblaikie
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D21444
llvm-svn: 273053
This is indeed a much cleaner approach (thanks to Daniel Berlin
for pointing it out), and thanks also to David/Sean for the review.
Differential Revision: http://reviews.llvm.org/D21454
llvm-svn: 273032
Many CPUs only have the ability to do a 4-byte cmpxchg (or ll/sc), not 1
or 2-byte. For those, you need to mask and shift the 1 or 2 byte values
appropriately to use the 4-byte instruction.
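For illustration, a minimal C++ sketch of the masking/shifting idea, written with GCC/Clang atomic builtins and assuming a little-endian byte layout; this is not the code added by the patch, which performs the equivalent rewrite on LLVM IR:
```
#include <cstdint>

// Emulate a strong 1-byte compare-and-swap using only a 4-byte CAS on the
// containing aligned 32-bit word.
bool cmpxchg_i8(uint8_t *Addr, uint8_t Expected, uint8_t Desired) {
  uintptr_t A = reinterpret_cast<uintptr_t>(Addr);
  uint32_t *Word = reinterpret_cast<uint32_t *>(A & ~uintptr_t(3));
  unsigned Shift = (A & 3) * 8;           // byte position, little-endian assumed
  uint32_t Mask = uint32_t(0xFF) << Shift;
  uint32_t Loaded = __atomic_load_n(Word, __ATOMIC_ACQUIRE);
  for (;;) {
    if (uint8_t((Loaded & Mask) >> Shift) != Expected)
      return false;                       // byte no longer matches; CAS fails
    uint32_t NewWord = (Loaded & ~Mask) | (uint32_t(Desired) << Shift);
    if (__atomic_compare_exchange_n(Word, &Loaded, NewWord, /*weak=*/true,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
      return true;                        // the other bytes were left untouched
    // Loaded now holds the current word value; re-check and retry.
  }
}
```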
This change adds support for cmpxchg-based instruction sets (only SPARC,
in LLVM). The support can be extended for LL/SC-based PPC and MIPS in
the future, supplanting the ISel expansions those architectures
currently use.
Tests added for the IR transform and SPARCv9.
Differential Revision: http://reviews.llvm.org/D21029
llvm-svn: 273025
We convert `Default` to `NotPIC` so that target independent code
can reason about this correctly.
Differential Revision: http://reviews.llvm.org/D21394
llvm-svn: 273024
Recommiting after fixing non-atomic insert to front of SmallVector in
MCAsmLexer.h
Add explicit Comment Token in Assembly Lexing for future support for
outputting explicit comments from inline assembly. As part of this,
CPPHash Directives are now explicitly distinguished from Hash line
comments in Lexer.
Line comments are recorded as EndOfStatement tokens, not Comment tokens
to simplify compatibility with current TargetParsers. This slightly
complicates comment output.
This moves all lexing tasks out of the parser, does minor cleanup
to remove extraneous newlines in Asm Output, and improves whitespace
handling.
Reviewers: rtrieu, dwmw2, rnk
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20009
llvm-svn: 273007
Reapplying patch as it was reverted when it was first
committed because of an assertion failure when the
mrrc2 intrinsic was called in ARM mode. The failure
was happening because the instruction was being built
in ARMISelDAGToDAG.cpp and the tablegen description for
the mrrc2 instruction doesn't allow you to use a predicate.
The ARM architecture manuals do say that mrrc2 in ARM
mode can be predicated with AL in assembly but this has
no effect on the encoding of the instruction as the top
4 bits will always be 1111 not 1110 which is the encoding
for the condition AL.
Differential Revision: http://reviews.llvm.org/D21408
llvm-svn: 272982
pass manager passes' `run` methods.
This removes a bunch of SFINAE goop from the pass manager and just
requires pass authors to accept `AnalysisManager<IRUnitT> &` as a dead
argument. This is a small price to pay for the simplicity of the system
as a whole, despite the noise that changing it causes at this stage.
This will also helpfully allow us to make the signature of the run
methods much more flexible for different kinds of passes to support
things like intelligently updating the pass's progression over IR units.
While this touches many, many, files, the changes are really boring.
Mostly made with the help of my trusty perl one liners.
Thanks to Sean and Hal for bouncing ideas for this with me in IRC.
llvm-svn: 272978
This is still NFCI, so the list of clients that allow symbolic stride
speculation does not change (yes: LV and LoopVersioningLICM, no: LLE,
LDist). However, since the symbolic strides are now managed by LAA
rather than passed by the client, a new bool parameter is used to enable
symbolic stride speculation.
The existing test Transforms/LoopVectorize/version-mem-access.ll checks
that stride speculation is performed for LV.
The previously added test Transforms/LoopLoadElim/symbolic-stride.ll
ensures that no speculation is performed for LLE.
The next patch will change the functionality and turn on symbolic stride
speculation in all of LAA's clients and remove the bool parameter.
llvm-svn: 272970
When moving unsafe allocas to the unsafe stack, dbg.declare intrinsics are
updated to refer to the new location.
This change does the same to dbg.value intrinsics.
llvm-svn: 272968
Add explicit Comment Token in Assembly Lexing for future support for
outputting explicit comments from inline assembly. As part of this,
CPPHash Directives are now explicitly distinguished from Hash line
comments in Lexer.
Line comments are recorded as EndOfStatement tokens, not Comment tokens
to simplify compatibility with current TargetParsers. This slightly
complicates comment output.
This moves all lexing tasks out of the parser, does minor cleanup
to remove extraneous newlines in Asm Output, and improves whitespace
handling.
Reviewers: rtrieu, dwmw2, rnk
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20009
llvm-svn: 272953
Summary:
... into getFrameIndexReferencePreferSP. This change folds the
fail-then-retry logic into getFrameIndexReferencePreferSP.
There is a non-functional but behavioral change in WinException --
earlier if `getFrameIndexReferenceFromSP` failed we'd trip an assert,
but now we'll silently use the (wrong) offset from the base pointer. I
could not write the assert I'd like to write ("FrameReg ==
StackRegister", like I've done in X86FrameLowering) since there is no
easy way to get to the stack register from WinException (happy to be
proven wrong here). One solution to this is to add a `bool
OnlyStackPointer` parameter to `getFrameIndexReferenceFromSP` that
asserts if it could not satisfy its promise of returning an offset from
a stack pointer, but that seems overkill.
Reviewers: rnk
Subscribers: sanjoy, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D21427
llvm-svn: 272938
This allows better catching of compiler errors since we can use
the override keyword to verify that methods are actually
overridden.
Also in this patch I've changed from storing a boolean Error
code everywhere to returning an llvm::Error, to propagate richer
error information up the call stack.
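As a rough illustration of the second change (the record-visiting names here are hypothetical, not the actual code), the pattern is:
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/Error.h"
using namespace llvm;

// Before: bool visitRecord(...) silently lost the reason for a failure.
// After: the callee describes the failure and every caller must handle or
// propagate it.
Error visitRecord(unsigned Kind) {
  if (Kind == 0)
    return make_error<StringError>("unknown record kind",
                                   inconvertibleErrorCode());
  return Error::success();
}

Error visitAll(ArrayRef<unsigned> Kinds) {
  for (unsigned K : Kinds)
    if (Error E = visitRecord(K))
      return E; // richer error information travels up the call stack
  return Error::success();
}
```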
Reviewed By: ruiu, rnk
Differential Revision: http://reviews.llvm.org/D21410
llvm-svn: 272926
Daniel Berlin expressed some real concerns about the port and proposed
an alternative approach. I'll revert this for now while working on a
new patch, which I hope to put up for review shortly. Sorry for the churn.
llvm-svn: 272925
Both parameters to visitTypeBegin are actually members of CVRecord,
so we can just pass CVRecord instead of destructuring it.
Differential Revision: http://reviews.llvm.org/D21435
llvm-svn: 272899
We should update the results of BranchProbabilityInfo after removing a block in JumpThreading. Otherwise
we will get a dangling pointer inside the BranchProbabilityInfo cache.
Differential Revision: http://reviews.llvm.org/D20957
llvm-svn: 272891
Summary:
The Mips implementation only covers the feature bits described by the ELF
e_flags so far. Mips stores additional feature bits such as MSA in the
.MIPS.abiflags section.
Also fixed a small bug this revealed where microMIPS wouldn't add the
EF_MIPS_MICROMIPS flag when using -filetype=obj.
Reviewers: echristo, rafael
Subscribers: rafael, mehdi_amini, dsanders, sdardis, llvm-commits
Differential Revision: http://reviews.llvm.org/D21125
llvm-svn: 272880
The header files are designed to be used always together (through Pass.h).
Addresses the first part of https://llvm.org/bugs/show_bug.cgi?id=27991
Patch by Cristina Cristescu and me.
Reviewed by Richard Smith.
llvm-svn: 272877
- We lacked a short unique identifier for a statistic, so I renamed the
current "Name" field that just contained the DEBUG_TYPE name of the
current file to DebugType and added a new "Name" field that contains
the C++ identifier of the statistic variable.
- Add the -stats-json option which outputs statistics in json format.
Differential Revision: http://reviews.llvm.org/D20995
llvm-svn: 272826
Summary: As per title. This completes the C API Attribute support.
Reviewers: Wallbraker, whitequark, echristo, rafael, jyknight
Subscribers: mehdi_amini
Differential Revision: http://reviews.llvm.org/D21365
llvm-svn: 272811
This allows us to emit native IR in Clang (next commit).
Also, update the intrinsic tests to show that codegen already knows how to handle
the IR that Clang will soon produce.
llvm-svn: 272806
[DAG] Previously debug values would transfer debuginfo for the selected
start node for a replacement which allows for debug to be dropped.
Push debug value transfer to occur with node/value replacement in
SelectionDAG, remove now extraneous transfers of debug values.
This refixes PR9817 which was being incompletely checked in the
testsuite.
Reviewers: jyknight
Subscribers: dblaikie, llvm-commits
Differential Revision: http://reviews.llvm.org/D21037
llvm-svn: 272792
This uses the "runImpl" approach to share code with the old PM.
Porting to the new PM meant abandoning the anonymous namespace enclosing
most of SLPVectorizer.cpp which is a bit of a bummer (but not a big deal
compared to having to pull the pass class into a header which the new PM
requires since it calls the constructor directly).
llvm-svn: 272766
Summary:
... when the offset is not statically known.
Prioritize addresses relative to the stack pointer in the stackmap, but
fallback gracefully to other modes of addressing if the offset to the
stack pointer is not a known constant.
Patch by Oscar Blumberg!
Reviewers: sanjoy
Subscribers: llvm-commits, majnemer, rnk, sanjoy, thanm
Differential Revision: http://reviews.llvm.org/D21259
llvm-svn: 272756
Nearly all the changes to this pass have been done while maintaining and
updating other parts of LLVM. LLVM has had another pass, SROA, which
has superseded ScalarReplAggregates for quite some time.
Differential Revision: http://reviews.llvm.org/D21316
llvm-svn: 272737
If a local_unnamed_addr attribute is attached to a global, the address
is known to be insignificant within the module. It is distinct from the
existing unnamed_addr attribute in that it only describes a local property
of the module rather than a global property of the symbol.
This attribute is intended to be used by the code generator and LTO to allow
the linker to decide whether the global needs to be in the symbol table. It is
possible to exclude a global from the symbol table if three things are true:
- This attribute is present on every instance of the global (which means that
the normal rule that the global must have a unique address can be broken without
being observable by the program by performing comparisons against the global's
address)
- The global has linkonce_odr linkage (which means that each linkage unit must have
its own copy of the global if it requires one, and the copy in each linkage unit
must be the same)
- It is a constant or a function (which means that the program cannot observe that
the unique-address rule has been broken by writing to the global)
Although this attribute could in principle be computed from the module
contents, LTO clients (i.e. linkers) will normally need to be able to compute
this property as part of symbol resolution, and it would be inefficient to
materialize every module just to compute it.
See:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160509/356401.html
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160516/356738.html
for earlier discussion.
Part of the fix for PR27553.
Differential Revision: http://reviews.llvm.org/D20348
llvm-svn: 272709
This reverts commit 879139e1c6577b09df52de56a6bab856a19ed185.
This was committed accidentally when I blindly typed git svn
dcommit instead of the command to generate a patch.
llvm-svn: 272693
We move the loop rotate functions in a separate class to avoid passing multiple
parameters to each function. This cleanup will help with further development of
loop rotation. NFC.
Patch written by Aditya Kumar and Sebastian Pop.
Differential Revision: http://reviews.llvm.org/D21311
llvm-svn: 272672
What happened here is that in the new PM there is a bunch of new copying
(actually, moving) and so this reads the HasProfileData member in
situations where it used to not be read.
It used to only be read strictly in the "runOnFunction" method and its
callees, where it *is* initialized (even after my patch).
So this ends up being benign as far as functional behavior of the
compiler (since we set HasProfileData in the "runImpl" method before we
ever make decisions based on it).
It's awesome that UBSan caught this. It highlights one more thing to
watch out for when porting passes.
Sanitizer bot log was:
-- Testing: 17049 tests, 32 threads --
Testing: 0 .. 10.. 20.. 30.. 40.. 50.. 60.. 70.. 80..
FAIL: LLVM :: Transforms/JumpThreading/thread-loads.ll (15184 of 17049)
******************** TEST 'LLVM :: Transforms/JumpThreading/thread-loads.ll' FAILED ********************
Script:
--
/mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/opt < /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll -jump-threading -S | /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/FileCheck /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll
/mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/opt < /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll -passes=jump-threading -S | /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/FileCheck /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll
--
Exit Code: 2
Command Output (stderr):
--
/mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/Transforms/Scalar/JumpThreading.h:90:57: runtime error: load of value 136, which is not a valid value for type 'bool'
#0 0x2c33ba1 in llvm::JumpThreadingPass::JumpThreadingPass(llvm::JumpThreadingPass&&) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/Transforms/Scalar/JumpThreading.h:90:57
#1 0x2bc88e4 in void llvm::PassManager<llvm::Function>::addPass<llvm::JumpThreadingPass>(llvm::JumpThreadingPass) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/IR/PassManager.h:282:40
#2 0x2bb2682 in llvm::PassBuilder::parseFunctionPassName(llvm::PassManager<llvm::Function>&, llvm::StringRef) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/lib/Passes/PassRegistry.def:133:1
#3 0x2bb4914 in llvm::PassBuilder::parseFunctionPassPipeline(llvm::PassManager<llvm::Function>&, llvm::StringRef&, bool, bool) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/lib/Passes/PassBuilder.cpp:489:12
#4 0x2bb6f81 in llvm::PassBuilder::parsePassPipeline(llvm::PassManager<llvm::Module>&, llvm::StringRef, bool, bool) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/lib/Passes/PassBuilder.cpp:674:10
#5 0x986690 in llvm::runPassPipeline(llvm::StringRef, llvm::LLVMContext&, llvm::Module&, llvm::TargetMachine*, llvm::tool_output_file*, llvm::StringRef, llvm::opt_tool::OutputKind, llvm::opt_tool::VerifierKind, bool, bool) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/tools/opt/NewPMDriver.cpp:85:8
#6 0x9af25e in main /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/tools/opt/opt.cpp:468:12
#7 0x7fd7e27dbf44 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21f44)
#8 0x960157 in _start (/mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/bin/opt+0x960157)
FileCheck error: '-' is empty.
FileCheck command line: /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/FileCheck /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll
--
********************
Testing: 0 .. 10.. 20.. 30.. 40.. 50.. 60.. 70.. 80.. 90..
Testing Time: 128.90s
********************
Failing Tests (1):
LLVM :: Transforms/JumpThreading/thread-loads.ll
Expected Passes : 16725
Expected Failures : 129
Unsupported Tests : 194
Unexpected Failures: 1
llvm-svn: 272616
The need for all these Lookup* functions is just because of calls to
getAnalysis inside methods (i.e. not at the top level) of the
runOnFunction method. They should be straightforward to clean up when
the old PM is gone.
llvm-svn: 272615
This reverts commit r272603 and adds a fix.
Big thanks to Davide for pointing me at r216244 which gives some insight
into how to fix this VS2013 issue. VS2013 can't synthesize a move
constructor. So the fix here is to add one explicitly to the
JumpThreadingPass class.
llvm-svn: 272607
I've tested this locally with VS2015 and there are no issues there,
so this might be a VS2013 specific issue.
Thanks to Davide for the suggested fix.
llvm-svn: 272601
This follows the approach in r263208 (for GVN) pretty closely:
- move the bulk of the body of the function to the new PM class.
- expose a runImpl method on the new-PM class that takes the IRUnitT and
pointers/references to any analyses and use that to implement the
old-PM class.
- use a private namespace in the header for stuff that used to be file
scope
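A hedged sketch of this runImpl pattern follows; the class and analysis names are illustrative, not the exact ones from this commit:
```
#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Pass.h"
using namespace llvm;

// New-PM pass: holds the shared implementation in runImpl().
class SomePass : public PassInfoMixin<SomePass> {
public:
  bool runImpl(Function &F, AssumptionCache &AC) {
    // ... the bulk of the old runOnFunction body moves here ...
    return false;
  }
  PreservedAnalyses run(Function &F, AnalysisManager<Function> &AM) {
    auto &AC = AM.getResult<AssumptionAnalysis>(F);
    if (!runImpl(F, AC))
      return PreservedAnalyses::all();
    return PreservedAnalyses::none();
  }
};

// Legacy pass: gathers its analyses the old way and forwards to runImpl().
class SomePassLegacy : public FunctionPass {
  SomePass Impl;
public:
  static char ID;
  SomePassLegacy() : FunctionPass(ID) {}
  bool runOnFunction(Function &F) override {
    auto &AC = getAnalysis<AssumptionCacheTracker>().getAssumptionCache(F);
    return Impl.runImpl(F, AC);
  }
  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<AssumptionCacheTracker>();
  }
};
char SomePassLegacy::ID = 0;
```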
llvm-svn: 272597
This is a bit gnarly since LVI is maintaining its own cache.
I think this port could be somewhat cleaner, but I'd rather not spend
too much time on it while we still have the old pass hanging around and
limiting how much we can clean things up.
Once the old pass is gone it will be easier (less time spent) to clean
it up anyway.
This is the last dependency needed for porting JumpThreading which I'll
do in a follow-up commit (there's no printer pass for LVI or anything to
test it, so porting a pass that depends on it seems best).
I've been mostly following:
r269370 / D18834 which ported Dependence Analysis
r268601 / D19839 which ported BPI
llvm-svn: 272593
Differential Revision: http://reviews.llvm.org/D19842
Corresponding clang patch: http://reviews.llvm.org/D19843
Re-commit after addressing issues with generating too many warnings on Windows and asan test failures
Patch by Eric Niebler
llvm-svn: 272555
The MRRC/MRRC2 instruction writes to two registers. The
intrinsic definition returns a single uint64_t to
represent the write; this is a compact way of
representing a write to two 32-bit registers.
The alternative might have been to return a
struct of 2 uint32_t's, but this isn't as nice.
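Purely for illustration (which half maps to which register is an assumption here, not taken from the patch), splitting the packed result back out is trivial:
```
#include <cstdint>

void splitMRRCResult(uint64_t Packed, uint32_t &RegLo, uint32_t &RegHi) {
  RegLo = static_cast<uint32_t>(Packed & 0xFFFFFFFFu); // assumed low register
  RegHi = static_cast<uint32_t>(Packed >> 32);          // assumed high register
}
```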
Differential Revision:
llvm-svn: 272544
This used to be free; copying and moving DebugLocs became expensive
after the metadata rewrite. Passing by reference eliminates a ton of
track/untrack operations. No functionality change intended.
llvm-svn: 272512
Below are my super rough notes when porting. They can probably serve as
a basic guide for porting other passes to the new PM. As I port more
passes I'll expand and generalize this and make a proper
docs/HowToPortToNewPassManager.rst document. There is also missing
documentation for general concepts and API's in the new PM which will
require some documentation.
Once there is proper documentation in place we can put up a list of
passes that have to be ported and game-ify/crowdsource the rest of the
porting (at least of the middle end; the backend is still unclear).
I will however be taking personal responsibility for ensuring that the
LLD/ELF LTO pipeline is ported in a timely fashion. The remaining passes
to be ported are (do something like
`git grep "<the string in the bullet point below>"` to find the pass):
General Scalar:
[ ] Simplify the CFG
[ ] Jump Threading
[ ] MemCpy Optimization
[ ] Promote Memory to Register
[ ] MergedLoadStoreMotion
[ ] Lazy Value Information Analysis
General IPO:
[ ] Dead Argument Elimination
[ ] Deduce function attributes in RPO
Loop stuff / vectorization stuff:
[ ] Alignment from assumptions
[ ] Canonicalize natural loops
[ ] Delete dead loops
[ ] Loop Access Analysis
[ ] Loop Invariant Code Motion
[ ] Loop Vectorization
[ ] SLP Vectorizer
[ ] Unroll loops
Devirtualization / CFI:
[ ] Cross-DSO CFI
[ ] Whole program devirtualization
[ ] Lower bitset metadata
CGSCC passes:
[ ] Function Integration/Inlining
[ ] Remove unused exception handling info
[ ] Promote 'by reference' arguments to scalars
Please let me know if you are interested in working on any of the passes
in the above list (e.g. reply to the post-commit thread for this patch).
I'll probably be tackling "General Scalar" and "General IPO" first FWIW.
Steps as I port "Deduce function attributes in RPO"
---------------------------------------------------
(note: if you are doing any work based on these notes, please leave a
note in the post-commit review thread for this commit with any
improvements / suggestions / incompleteness you ran into!)
Note: "Deduce function attributes in RPO" is a module pass.
1. Do preparatory refactoring.
In this case all I had to do was pull out a static helper (r272503).
(TODO: give more advice here e.g. if pass holds state or something)
2. Rename the old pass class.
llvm/lib/Transforms/IPO/FunctionAttrs.cpp
Rename class ReversePostOrderFunctionAttrs -> ReversePostOrderFunctionAttrsLegacyPass
in preparation for adding a class ReversePostOrderFunctionAttrs as the pass in the new PM.
(edit: actually wait what? The new class name will be
ReversePostOrderFunctionAttrsPass, so it doesn't conflict. So this step is
sort of useless churn).
llvm/include/llvm/InitializePasses.h
llvm/lib/LTO/LTOCodeGenerator.cpp
llvm/lib/Transforms/IPO/IPO.cpp
llvm/lib/Transforms/IPO/FunctionAttrs.cpp
Rename initializeReversePostOrderFunctionAttrsPass -> initializeReversePostOrderFunctionAttrsLegacyPassPass
(note that the "PassPass" thing falls out of `s/ReversePostOrderFunctionAttrs/ReversePostOrderFunctionAttrsLegacyPass/`)
Note that the INITIALIZE_PASS macro is what creates this identifier name, so renaming the class requires this renaming too.
Note that createReversePostOrderFunctionAttrsPass does not need to be
renamed since its name is not generated from the class name.
3. Add the new PM pass class.
In the new PM all passes need to have their
declaration in a header somewhere, so you will often need to add a header.
In this case
llvm/include/llvm/Transforms/IPO/FunctionAttrs.h is already there because
PostOrderFunctionAttrsPass was already ported.
The file-level comment from the .cpp file can be used as the file-level
comment for the new header. You may want to tweak the wording slightly
from "this file implements" to "this file provides" or similar.
Add declaration for the new PM pass in this header:
class ReversePostOrderFunctionAttrsPass
: public PassInfoMixin<ReversePostOrderFunctionAttrsPass> {
public:
PreservedAnalyses run(Module &M, AnalysisManager<Module> &AM);
};
Its name should end with `Pass` for consistency (note that this doesn't
collide with the names of most old PM passes). E.g. call it
`<name of the old PM pass>Pass`.
Also, move the doxygen comment from the old PM pass to the declaration of
this class in the header.
Also, include the declaration for the new PM class
`llvm/Transforms/IPO/FunctionAttrs.h` at the top of the file (in this case,
it was already done when the other pass in this file was ported).
Now define the `run` method for the new class.
The main things here are:
a) Use AM.getResult<...>(M) to get results instead of `getAnalysis<...>()`
b) If the old PM pass would have returned "false" (i.e. `Changed ==
false`), then you should return PreservedAnalyses::all();
c) In the old PM getAnalysisUsage method, observe the calls
`AU.addPreserved<...>();`.
In the case `Changed == true`, for each preserved analysis you should do
call `PA.preserve<...>()` on a PreservedAnalyses object and return it.
E.g.:
PreservedAnalyses PA;
PA.preserve<CallGraphAnalysis>();
return PA;
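Putting a), b) and c) together, a hedged sketch of what the new run method ends up looking like (the helper name deduceFunctionAttrsInRPO is illustrative; use whatever static helper was factored out in step 1):
```
PreservedAnalyses
ReversePostOrderFunctionAttrsPass::run(Module &M, AnalysisManager<Module> &AM) {
  auto &CG = AM.getResult<CallGraphAnalysis>(M); // a) results via the manager
  if (!deduceFunctionAttrsInRPO(M, CG))          // shared static helper
    return PreservedAnalyses::all();             // b) nothing changed
  PreservedAnalyses PA;                          // c) report preserved analyses
  PA.preserve<CallGraphAnalysis>();
  return PA;
}
```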
Note that calls to skipModule/skipFunction are not supported in the new PM
currently, so optnone and optimization bisect support do not work. You can
just drop those calls for now.
4. Add the pass to the new PM pass registry to make it available in opt.
In llvm/lib/Passes/PassBuilder.cpp add a #include for your header.
`#include "llvm/Transforms/IPO/FunctionAttrs.h"`
In this case there is already an include (from when
PostOrderFunctionAttrsPass was ported).
Add your pass to llvm/lib/Passes/PassRegistry.def
In this case, I added
`MODULE_PASS("rpo-functionattrs", ReversePostOrderFunctionAttrsPass())`
The string is from the `INITIALIZE_PASS*` macros used in the old pass
manager.
Then choose a test that uses the pass and use the new PM `-passes=...` to
run it.
E.g. in this case there is a test that does:
; RUN: opt < %s -basicaa -functionattrs -rpo-functionattrs -S | FileCheck %s
I have added the line:
; RUN: opt < %s -aa-pipeline=basic-aa -passes='require<targetlibinfo>,cgscc(function-attrs),rpo-functionattrs' -S | FileCheck %s
The `-aa-pipeline=basic-aa` and
`require<targetlibinfo>,cgscc(function-attrs)` are what is needed to run
functionattrs in the new PM (note that in the new PM "functionattrs"
becomes "function-attrs" for some reason). This is just pulled from
`readattrs.ll` which contains the change from when functionattrs was ported
to the new PM.
Adding rpo-functionattrs causes the pass that was just ported to run.
llvm-svn: 272505
Summary: This also deprecates the get-attribute function family.
Reviewers: Wallbraker, whitequark, joker.eph, echristo, rafael, jyknight
Subscribers: axw, joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19181
llvm-svn: 272504
This shouldn't have any functional difference, but it appears to be the
pattern used for other methods on DynamicLibrary, and it should avoid
the -Wpedantic warning on one of the build bots about the direct
reinterpret_cast.
llvm-svn: 272461
Adds a MachineFunctionPass that scans the body to find calls, and
update the register mask with the one saved by the
RegUsageInfoCollector analysis in PhysicalRegisterUsageInfo.
Patch by Vivek Pandya <vivekvpandya@gmail.com>
Differential Revision: http://reviews.llvm.org/D21180
llvm-svn: 272414
The costs are somewhat hand-wavy, but should be much closer to the truth
than what we get from BasicTTI.
Differential Revision: http://reviews.llvm.org/D21156
llvm-svn: 272406
Add an option to enable the analysis of MachineFunction register
usage to extract the list of clobbered registers.
When enabled, the CodeGen order is changed to be bottom up on the Call
Graph.
The analysis is split in two parts, RegUsageInfoCollector is the
MachineFunction Pass that runs post-RA and collect the list of
clobbered registers to produce a register mask.
An immutable pass, RegisterUsageInfo, stores the RegMask produced by
RegUsageInfoCollector, and keeps them available. A future transformation
pass will use this information to update every call-sites after
instruction selection.
Patch by Vivek Pandya <vivekvpandya@gmail.com>
Differential Revision: http://reviews.llvm.org/D20769
llvm-svn: 272403
This reapplies commit r272385 with a fix. The build was failing when compiled
with gcc, but not with clang. With the fix, we now get the data layout from the
current TTI implementation, which will hopefully solve the issue.
llvm-svn: 272395
This patch refines the default cost for interleaved load groups having gaps. If
a load group has gaps, the legalized instructions corresponding to the unused
elements will be dead. Thus, we don't need to account for them in the cost
model. Instead, we only need to account for the fraction of legalized loads
that will actually be used.
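A rough sketch of the accounting (a simplification, not the cost-model code itself; the rounding and exact scaling are assumptions):
```
// Charge only the fraction of the legalized loads that are actually used,
// e.g. a stride-4 group with 2 used members pays roughly half the full cost.
unsigned scaledInterleaveCost(unsigned FullCost, unsigned NumMembers,
                              unsigned UsedMembers) {
  return (FullCost * UsedMembers + NumMembers - 1) / NumMembers; // round up
}
```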
Differential Revision: http://reviews.llvm.org/D20873
llvm-svn: 272385
This is the next step towards being able to write PDBs.
MemoryBuffer is immutable, and StreamInterface is our replacement
which can be any combination of read-only, read-write, or write-only
depending on the particular implementation.
The one place where we were creating a PDBFile (in RawSession) is
updated to subclass ByteStream with a simple adapter that holds
a MemoryBuffer, and initializes the superclass with the buffer's
array, so that all the functionality of ByteStream works
transparently.
llvm-svn: 272370
This adds method and tests for writing to a PDB stream. With
this, even a PDB stream which is discontiguous can be treated
as a sequential stream of bytes for the purposes of writing.
Reviewed By: ruiu
Differential Revision: http://reviews.llvm.org/D21157
llvm-svn: 272369
Previously we could run only one machine pass with the run-pass option.
With that patch, we can now specify several passes with several run-pass
options (or just one option with a list of comma separated passes) and
llc will build the related pipeline.
This is great to test the interaction of two passes that are not
necessarily next to each other in the pipeline, or play with pass
ordering.
Now, we should be at parity with opt for the flexibility of running
passes.
Note: I also moved the run pass option from CommandFlags.h to llc.cpp
because, really, this is needed only there!
llvm-svn: 272356
Instead of directly using MaxFunctionCount and function entry count to determine callee hotness, use the isHotFunction/isColdFunction methods provided by ProfileSummaryInfo.
Differential revision: http://reviews.llvm.org/D21045
llvm-svn: 272321
Summary:
Currently clang emits these instructions via inline (volatile) asm in
the CUDA headers. Switching to intrinsics will let the optimizer reason
across calls to these intrinsics.
Reviewers: tra
Subscribers: llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D21160
llvm-svn: 272298
Summary:
__syncthreads, which corresponds to bar.sync 0, is already convergent.
This makes the more general bar.sync n likewise convergent.
Reviewers: tra
Subscribers: llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D21161
llvm-svn: 272295
looking for it along $PATH. This allows installs of LLVM tools outside of
$PATH to find the symbolizer and produce pretty backtraces if they crash.
llvm-svn: 272232
Refactor the code so that we do not compute in two different places the
end iterator for the range of new virtual registers for a given operand.
Although this refactoring was intended as NFC, this is not the case
because it actually fixes a bug where we were returning a range off by 1
(too long). Right now, this could not result in an actual bug because we
were accessing this range via the BreakDown size of the related operand.
llvm-svn: 272208
Summary:
Now DISubroutineType has a 'cc' field which should be a DW_CC_ enum. If
it is present and non-zero, the backend will emit it as a
DW_AT_calling_convention attribute. On the CodeView side, we translate
it to the appropriate enum for the LF_PROCEDURE record.
I added a new LLVM vendor specific enum to the list of DWARF calling
conventions. DWARF does not appear to attempt to standardize these, so I
assume it's OK to do this until we coordinate with GCC on how to emit
vectorcall convention functions.
Reviewers: dexonsmith, majnemer, aaboud, amccarth
Subscribers: mehdi_amini, llvm-commits
Differential Revision: http://reviews.llvm.org/D21114
llvm-svn: 272197
Absence of may-unwind calls is not enough to guarantee that a
UB-generating use of an add-rec poison in the loop latch will actually
cause UB. We also need to guard against calls that terminate the thread
or infinite loop themselves.
This partially addresses PR28012.
llvm-svn: 272181
Now, the target will be able to provide its own implementation to remap
an instruction. This opens the way to crazier optimizations, but to
begin with, we will be able to handle something other than the
default mapping.
llvm-svn: 272165
The generic implementation stated that all copies were free, which is
unlikely. Now, only the copies within the same register bank are free.
We assume they will get coalesced.
llvm-svn: 272085
The cost of a copy may be different based on how many bits we have to
copy around. E.g., a 8-bit copy may be different than a 32-bit copy.
llvm-svn: 272084
We were previously failing to do this and as a result failing to drop
attached metadata.
Not sure if there's a good way to test this. An in-progress patch exposed this
problem by allocating a GlobalVariable at the same address as a previously
allocated GlobalVariable.
Differential Revision: http://reviews.llvm.org/D21109
llvm-svn: 272077
Summary:
This patch is adding support for the MSVC buffer security check implementation
The buffer security check is turned on with the '/GS' compiler switch.
* https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
* To be added to clang here: http://reviews.llvm.org/D20347
Some overview of buffer security check feature and implementation:
* https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
* http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
* http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html
For the following example:
```
int example(int offset, int index) {
char buffer[10];
memset(buffer, 0xCC, index);
return buffer[index];
}
```
The MSVC compiler adds these instructions to perform a stack integrity check:
```
push ebp
mov ebp,esp
sub esp,50h
[1] mov eax,dword ptr [__security_cookie (01068024h)]
[2] xor eax,ebp
[3] mov dword ptr [ebp-4],eax
push ebx
push esi
push edi
mov eax,dword ptr [index]
push eax
push 0CCh
lea ecx,[buffer]
push ecx
call _memset (010610B9h)
add esp,0Ch
mov eax,dword ptr [index]
movsx eax,byte ptr buffer[eax]
pop edi
pop esi
pop ebx
[4] mov ecx,dword ptr [ebp-4]
[5] xor ecx,ebp
[6] call @__security_check_cookie@4 (01061276h)
mov esp,ebp
pop ebp
ret
```
The instrumentation above is:
* [1] is loading the global security canary,
* [3] is storing the local computed ([2]) canary to the guard slot,
* [4] is loading the guard slot and ([5]) re-compute the global canary,
* [6] is validating the resulting canary with the '__security_check_cookie' and performs error handling.
Overview of the current stack-protection implementation:
* lib/CodeGen/StackProtector.cpp
* There is a default stack-protection implementation applied on intermediate representation.
* The target can overload 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
* An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
* Basic Blocks are added to every instrumented function to receive the code for handling stack guard validation and errors handling.
* Guard manipulation and comparison are added directly to the intermediate representation.
* lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
* lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
* There is an implementation that adds instrumentation during instruction selection (for better handling of sibling calls).
* see long comment above 'class StackProtectorDescriptor' declaration.
* The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation. (note: getIRStackGuard MUST be nullptr).
* 'getSDagStackGuard' returns the appropriate stack guard (security cookie)
* The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.
* include/llvm/Target/TargetLowering.h
* Contains a function to retrieve the default Guard 'Value'; should be overridden by each target to select which implementation is used and provide the Guard 'Value'.
* lib/Target/X86/X86ISelLowering.cpp
* Contains the x86 specialisation; Guard 'Value' used by the SelectionDAG algorithm.
Function-based Instrumentation:
* MSVC doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
* To support function-based instrumentation, this patch is
* adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h),
* If provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the prologue.
* modifying (SelectionDAGISel.cpp) to avoid producing basic blocks used for inline instrumentation,
* generating the function-based instrumentation during the ISEL pass (SelectionDAGBuilder.cpp),
* if FastISEL (not SelectionDAG), using the fallback which relies on the same function-based approach implemented over the intermediate representation (StackProtector.cpp).
Modifications
* adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
* adding support function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)
Results
* IR generated instrumentation:
```
clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
```
```
*** Final LLVM Code input to ISel ***
; Function Attrs: nounwind sspstrong
define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
entry:
%StackGuardSlot = alloca i8* <<<-- Allocated guard slot
%0 = call i8* @llvm.stackguard() <<<-- Loading Stack Guard value
call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot) <<<-- Prologue intrinsic call (store to Guard slot)
%index.addr = alloca i32, align 4
%offset.addr = alloca i32, align 4
%buffer = alloca [10 x i8], align 1
store i32 %index, i32* %index.addr, align 4
store i32 %offset, i32* %offset.addr, align 4
%arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
%1 = load i32, i32* %index.addr, align 4
call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
%2 = load i32, i32* %index.addr, align 4
%arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
%3 = load i8, i8* %arrayidx, align 1
%conv = sext i8 %3 to i32
%4 = load volatile i8*, i8** %StackGuardSlot <<<-- Loading Guard slot
call void @__security_check_cookie(i8* %4) <<<-- Epilogue function-based check
ret i32 %conv
}
```
* SelectionDAG generated instrumentation:
```
clang-cl /GS test.cc /O1 /c /FA
```
```
"?example@@YAHHH@Z": # @"\01?example@@YAHHH@Z"
# BB#0: # %entry
pushl %esi
subl $16, %esp
movl ___security_cookie, %eax <<<-- Loading Stack Guard value
movl 28(%esp), %esi
movl %eax, 12(%esp) <<<-- Store to Guard slot
leal 2(%esp), %eax
pushl %esi
pushl $204
pushl %eax
calll _memset
addl $12, %esp
movsbl 2(%esp,%esi), %esi
movl 12(%esp), %ecx <<<-- Loading Guard slot
calll @__security_check_cookie@4 <<<-- Epilogue function-based check
movl %esi, %eax
addl $16, %esp
popl %esi
retl
```
Reviewers: kcc, pcc, eugenis, rnk
Subscribers: majnemer, llvm-commits, hans, thakis, rnk
Differential Revision: http://reviews.llvm.org/D20346
llvm-svn: 272053
This allows mapping of any endian-aware type whose underlying
type (e.g. uint32_t) provides a ScalarTraits specialization.
Reviewed by: majnemer
Differential Revision: http://reviews.llvm.org/D21057
llvm-svn: 272049
Currently the only way to use the (V)MOVNTDQA nontemporal vector loads instructions is through the int_x86_sse41_movntdqa style builtins.
This patch adds support for lowering nontemporal loads from general IR, allowing us to remove the movntdqa builtins in a future patch.
We currently still fold nontemporal loads into suitable instructions; we should probably look at removing this (and nontemporal stores as well), or at least make the target's folding implementation aware that it's dealing with a nontemporal memory transaction.
There is also an issue that VMOVNTDQA only acts on 128-bit vectors on pre-AVX2 hardware - so currently a normal ymm load is still used on AVX1 targets.
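For illustration only (not part of this patch): once nontemporal loads can be lowered from general IR, a front end can express them without the x86 builtin, e.g. via clang's __builtin_nontemporal_load:
```
// A 128-bit vector load marked !nontemporal in the emitted IR, which the
// backend can now lower to (V)MOVNTDQA on suitable targets.
typedef int v4si __attribute__((vector_size(16)));

v4si load_stream(const v4si *P) {
  return __builtin_nontemporal_load(P);
}
```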
Differential Revision: http://reviews.llvm.org/D20965
llvm-svn: 272010
In order to efficiently write PDBs, we need to be able to make a
StreamWriter class similar to a StreamReader, which can transparently deal
with writing to discontiguous streams, and we need to use this for all
writing, similar to how we use StreamReader for all reading.
Most discontiguous streams are the typical numbered streams that appear in
a PDB file and are described by the directory, but the exception to this,
that until now has been parsed by hand, is the directory itself.
MappedBlockStream works by querying the directory to find out which blocks
a stream occupies and various other things, so naturally the same logic
could not possibly work to describe the blocks that the directory itself
resided on.
To solve this, I've introduced an abstraction IPDBStreamData, which allows
the client to query for the list of blocks occupied by the stream, as well
as the stream length. I provide two implementations of this: one which
queries the directory (for indexed streams), and one which queries the
super block (for the directory stream).
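A hedged sketch of what such an abstraction looks like; the method names and signatures here are assumptions rather than the patch's exact interface:
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/Endian.h"

// Something a MappedBlockStream can query instead of going straight to the
// directory, so the directory stream itself can be described the same way.
class IPDBStreamData {
public:
  virtual ~IPDBStreamData() = default;
  virtual uint32_t getLength() = 0;                  // stream length in bytes
  virtual llvm::ArrayRef<llvm::support::ulittle32_t>
  getStreamBlocks() = 0;                             // occupied block indices
};
```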
This has the side benefit of vastly simplifying the code to parse the
directory. Whereas before a mini state machine was rolled by hand, now we
simply use FixedStreamArray to read out the stream sizes, then build a
vector of FixedStreamArrays for the stream map, all in just a few lines of
code.
Reviewed By: ruiu
Differential Revision: http://reviews.llvm.org/D21046
llvm-svn: 271982
The data structure in the new FPO stream is described in the
PE/COFF spec. There is one record per function if the frame pointer
is omitted.
Differential Revision: http://reviews.llvm.org/D20999
llvm-svn: 271926
Summary:
There are some rough corners, since the new pass manager doesn't have
(as far as I can tell) LoopSimplify and LCSSA, so I've updated the
tests to run them separately in the old pass manager in the lit tests.
We also don't have an equivalent for AU.setPreservesCFG() in the new
pass manager, so I've left a FIXME.
Reviewers: bogner, chandlerc, davide
Subscribers: sanjoy, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D20783
llvm-svn: 271846
libstdc++ (or in compilers, or somewhere, I can't track it down) that
causes unittests that use INITIALIZE_PASS to crash.
The analysis I've been able to do is that inside libstdc++'s
implementation of std::call_once, it uses pthread_once, and when that
returns an error code it throws std::system_error which then eventually
calls std::terminate.
Hopefully some of the folks who work on PPC can try to sort out what's
going on here. Until then, they'll have to use the fallback
implementation.
llvm-svn: 271821
CALL_ONCE_... macro in the legacy pass manager with the new
llvm::call_once facility.
Nothing changed since the last attempt in r271781, which I reverted in
r271788. At least one of the failures I saw was spurious, and I want to
make sure the other failures are real before I work around them -- they
appeared to only affect ppc64le and ppc64be.
Original commit message of r271781:
----
[LPM] Reinstate r271652 to replace the CALL_ONCE_... macro in the legacy
pass manager with the new llvm::call_once facility.
This reverts commit r271657 and re-applies r271652 with a fix to
actually work with arguments. In the original version, we just ended up
directly calling std::call_once via ADL because of the std::once_flag
argument. The llvm::call_once never worked with arguments. Now,
llvm::call_once is a variadic template that perfectly forwards
everything. As a part of this it had to move to the header and we use
a generic functor rather than an explicit function pointer. It would be
nice to use std::invoke here but we don't have it yet. That means
pointer to members won't work here, but that seems a tolerable
compromise.
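For reference, a minimal usage sketch of the facility (assuming the llvm::once_flag / llvm::call_once spellings from llvm/Support/Threading.h):
```
#include "llvm/Support/Threading.h"

static llvm::once_flag InitFlag;
static int TheValue;

static void initValue(int Seed) { TheValue = Seed * 2; }

int getValue() {
  // Arguments are now perfectly forwarded to the callable.
  llvm::call_once(InitFlag, initValue, 21);
  return TheValue;
}
```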
I've also tested this by forcing the fallback path, so hopefully it
sticks this time.
----
Original commit message of r271652:
----
[LPM] Replace the CALL_ONCE_... macro in the legacy pass manager with
the new llvm::call_once facility.
This facility matches the standard APIs and when the platform supports
it actually directly uses the standard provided functionality. This is
both more efficient on some platforms and much more TSan friendly.
The only remaining user of the cas_flag and home-rolled atomics is the
fallback implementation of call_once. I have a patch that removes them
entirely, but it needs a Windows patch to land first.
This alone substantially cleans up the macros for the legacy pass
manager, and should subsume some of the work Mehdi was doing to clear
the path for TSan testing of ThinLTO, a really important step to have
reliable upstream testing of ThinLTO in all forms.
----
llvm-svn: 271800
There appears to be a strange exception thrown and crash using call_once
on a PPC build bot, and a *really* weird windows link error for
GCMetadata.obj. Still need to investigate the cause of both problems.
Original change summary:
[LPM] Reinstate r271652 to replace the CALL_ONCE_... macro in the legacy
pass manager with the new llvm::call_once facility.
llvm-svn: 271788
pass manager with the new llvm::call_once facility.
This reverts commit r271657 and re-applies r271652 with a fix to
actually work with arguments. In the original version, we just ended up
directly calling std::call_once via ADL because of the std::once_flag
argument. The llvm::call_once never worked with arguments. Now,
llvm::call_once is a variadic template that perfectly forwards
everything. As a part of this it had to move to the header and we use
a generic functor rather than an explicit function pointer. It would be
nice to use std::invoke here but we don't have it yet. That means
pointer to members won't work here, but that seems a tolerable
compromise.
I've also tested this by forcing the fallback path, so hopefully it
sticks this time.
Original commit message:
----
[LPM] Replace the CALL_ONCE_... macro in the legacy pass manager with
the new llvm::call_once facility.
This facility matches the standard APIs and when the platform supports
it actually directly uses the standard provided functionality. This is
both more efficient on some platforms and much more TSan friendly.
The only remaining user of the cas_flag and home-rolled atomics is the
fallback implementation of call_once. I have a patch that removes them
entirely, but it needs a Windows patch to land first.
This alone substantially cleans up the macros for the legacy pass
manager, and should subsume some of the work Mehdi was doing to clear
the path for TSan testing of ThinLTO, a really important step to have
reliable upstream testing of ThinLTO in all forms.
llvm-svn: 271781
This commit adds a convenience is_contained() function
which checks if an element exists in a container. It is part of a larger
series of patches adding an MPI checker to the clang static analyzer.
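A minimal usage sketch (assuming the helper lives in llvm/ADT/STLExtras.h alongside the other range helpers):
```
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"

bool hasZero(const llvm::SmallVectorImpl<int> &V) {
  // Equivalent to: std::find(V.begin(), V.end(), 0) != V.end()
  return llvm::is_contained(V, 0);
}
```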
Reviewers: dblaikie, bkramer
A patch by Alexander Droste!
Differential Revision: http://reviews.llvm.org/D16053
llvm-svn: 271757
This is currently used by clang to lock access to modules; improve the
error message so that clang can use better output messages from locking
error issues.
rdar://problem/26529101
Differential Revision: http://reviews.llvm.org/D20942
llvm-svn: 271755
My first attempt at this had an overly aggressive assert - chain nodes
will only be removed, but we could hit the assert if a non-chain node
was CSE'd (NodeToMatch, for instance).
This reapplies r271706 by reverting r271713 and fixing an assert.
Original message:
Avoid relying on UB by looking into deleted nodes for a marker value.
Instead, update the list of chain nodes as we go.
llvm-svn: 271733
Summary:
Previously we would try to load PDBs for every PE executable we tried to
symbolize. If that failed, we would fall back to DWARF. If there wasn't
any DWARF, we'd print mostly useless symbol information using the export
table.
With this change, we only try to load PDBs for executables that claim to
have them. If that fails, we can now print an error rather than falling
back silently. This should make it a lot easier to diagnose and fix
common symbolization issues, such as not having DIA or not having a PDB.
Reviewers: zturner, eugenis
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20982
llvm-svn: 271725
This only translates data members for now. Translating overloaded
methods is complicated, so I stopped short of doing that.
Reviewers: aaboud
Differential Revision: http://reviews.llvm.org/D20924
llvm-svn: 271680
new instruction to ARM and AArch64 targets and several system registers.
Patch by: Roger Ferrer Ibanez and Oliver Stannard
Differential Revision: http://reviews.llvm.org/D20282
llvm-svn: 271670
the new llvm::call_once facility.
This facility matches the standard APIs and when the platform supports
it actually directly uses the standard provided functionality. This is
both more efficient on some platforms and much more TSan friendly.
The only remaining user of the cas_flag and home-rolled atomics is the
fallback implementation of call_once. I have a patch that removes them
entirely, but it needs a Windows patch to land first.
This alone substantially cleans up the macros for the legacy pass
manager, and should subsume some of the work Mehdi was doing to clear
the path for TSan testing of ThinLTO, a really important step to have
reliable upstream testing of ThinLTO in all forms.
llvm-svn: 271652
formatted fancily.
I'm working on rewriting these macros to use the new call_once stuff,
but really want to have clang-format work on the edits, so just
re-baselining the entire file here. No changes other than clang-format.
llvm-svn: 271635
This patch begins adding support for lowering to the XOP VPERMIL2PD/VPERMIL2PS shuffle instructions - adding the X86ISD::VPERMIL2 opcode and cleaning up the usage.
The internal llvm intrinsics were assuming the shuffle mask operand was the same type as the float/double input operands (I guess to simplify the intrinsic definitions in X86InstrXOP.td to a single value type). These needed changing to integer types (matching the clang builtin and the AMD intrinsics definitions), an auto upgrade path is added to convert old calls.
Mask decoding/target shuffle support will be added in future patches.
Differential Revision: http://reviews.llvm.org/D20049
llvm-svn: 271633
When printing line information and file checksums, we were printing
the file offset field from the struct header. This teaches
llvm-pdbdump how to turn those numbers into the filename. In the
case of file checksums, this is done by looking in the global
string table. In the case of line contributions, this is done
by indexing into the file names buffer of the DBI stream. Why
they use a different technique I don't know.
llvm-svn: 271630
To facilitate this, a couple of changes had to be made:
1. `ModuleSubstream` got moved from `DebugInfo/PDB` to
`DebugInfo/CodeView`, and various codeview related types are defined
there. It turns out `DebugInfo/CodeView/Line.h` already defines many of
these structures, but this is really old code that is not endian aware,
doesn't interact well with `StreamInterface` and not very helpful for
getting stuff out of a PDB. Eventually we should migrate the old readobj
`COFFDumper` code to these new structures, or at least merge their
functionality somehow.
2. A `ModuleSubstream` visitor is introduced. Depending on where your
module substream array comes from, different subsets of record types can
be expected. We are already hand parsing these substream arrays in many
places especially in `COFFDumper.cpp`. In the future we can migrate these
paths to the visitor as well, which should reduce a lot of code in
`COFFDumper.cpp`.
Differential Revision: http://reviews.llvm.org/D20936
Reviewed By: ruiu, majnemer
llvm-svn: 271621
This commit adds round tripping for MachO symbol data. Symbols are entries in the name list that contain offsets into the string table, which is at the end of the __LINKEDIT segment.
llvm-svn: 271604
This first pass only splits apart the records and dumps the line
info kinds and binary data. Subsequent patches will parse out
the binary data into more useful information and dump it in
detail.
llvm-svn: 271576
StreamRef was designed to be a thin wrapper over an abstract
stream interface that could itself be treated the same as any
other stream interface. For this reason, it inherited publicly
from StreamInterface, and stored a StreamInterface* internally.
But StreamRef was also designed to be lightweight and easily
copyable, similar to ArrayRef. This led to two misuses of
the classes.
1) When creating a StreamRef A from another StreamRef B, it was
possible to end up with A storing a pointer to B, even when
B was a temporary object, leading to use after free.
2) The above situation could be repeated ad nauseum, so that
A stores a pointer to B, which itself stores a pointer to
another StreamRef C, and so on and so on, creating an
unnecessary level of nesting depth.
This patch removes the public inheritance relationship between
StreamRef and StreamInterface, making it so that we can never
accidentally convert a StreamRef to a StreamInterface.
llvm-svn: 271570
D19271.
Previous attempt was broken by NetBSD, so in this version I've made the
fallback path generic rather than Windows specific and sent both Windows
and NetBSD to it.
I've also re-formatted the code some, and used an exact clone of the
code in PassSupport.h for doing manual call-once using our atomics
rather than rolling a new one.
If this sticks, we can replace the fallback path for Windows with
a Windows-specific implementation that is more reliable.
Original commit message:
This patch adds an llvm_call_once which is a wrapper around
std::call_once on platforms where it is available and devoid
of bugs. The patch also migrates the ManagedStatic mutex to
be allocated using llvm_call_once.
These changes are philosophically equivalent to the changes
added in r219638, which were reverted due to a hang on Win32
which was the result of a bug in the Windows implementation
of std::call_once.
Differential Revision: http://reviews.llvm.org/D5922
llvm-svn: 271558
Unlike other sections that can grow to any size, the COFF section header
stream has maximum length because each record is fixed size and the COFF
file format limits the maximum number of sections. So I decided to not
create a specific stream class for it. Instead, I added a member function
to DbiStream class which returns a vector of COFF headers.
Differential Revision: http://reviews.llvm.org/D20717
llvm-svn: 271557
This is part of an effort to shave allocations from APInt heavy paths. I'll
be moving many of the other operators to r-value references soon and this is
a step towards doing that without too much duplication.
Saves 15k allocations when doing 'opt -O2 verify-uselistorder.bc'.
llvm-svn: 271556
Also fix slice wrappers drop_front and drop_back.
The unittests are pretty awkward, but do the job; alternatives
welcome!
...and yes, I do have ArrayRefs with more than 4 billion elements.
llvm-svn: 271546
except for CompareAndSwap. That is the only one still being used
anywhere now that statistics have been moved onto std::atomic.
Also, add a warning to the header that we shouldn't introduce more uses
of these old style atomics and instead should be using C++11's
std::atomic facilities.
Really hoping that we can hammer out the last couple of users here and
replace them with something more localized and/or principled, but
figured this was a pretty good start. =]
Note that this patch will need to be reverted if r271504 needs to be
reverted as that removes the last user of these. However, the biggest
risk for that patch was MSVC 2013 and at least one bot has already
passed where it would have failed there. I've tested MSVC 2015 using
their web interfaces and other platforms seem fine, so I'm optimistic.
Differential Revision: http://reviews.llvm.org/D20901
llvm-svn: 271540
This directory is used to find if there is a PDB associated with an
executable. I plan to use this functionality to teach llvm-symbolizer
whether it should use DIA or DWARF to symbolize a given DLL.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D20885
llvm-svn: 271539
Summary:
If the target requests it, use empty spaces in the fixed and
callee-save stack area to allocate local stack objects.
AArch64: Change last callee-save reg stack object alignment instead of
size to leave a gap to take advantage of above change.
Reviewers: t.p.northover, qcolombet, MatzeB
Subscribers: rengolin, mcrosier, llvm-commits, aemerson
Differential Revision: http://reviews.llvm.org/D20220
llvm-svn: 271527
This patch removes the llvm intrinsics (V)CVTTPS2DQ and VCVTTPD2DQ truncation (round to zero) conversions and auto-upgrades to FP_TO_SINT calls instead.
Note: I looked at updating CVTTPD2DQ as well but this still requires a lot more work to correctly lower.
Differential Revision: http://reviews.llvm.org/D20860
llvm-svn: 271510
This removes usage of the hacky, incorrect, and TSan-unfriendly
home-grown atomics. It should actually be more efficient in some cases.
Based on our existing usage of <atomic>, all of this is portably
available AFAICT. One small challenge is initializing the statistic, but
I've tried a comparable sample out on MSVC (the most likely to complain
here) and it seems to work. Will have to watch the build bots of course.
llvm-svn: 271504
statistics.
Scaling statistics atomically doesn't make any sense anyways, and none
were using these. If you find yourself wanting to do this, you should
probably keep a local count that you scale and then apply that after
scaling to the shared statistic object.
llvm-svn: 271503
We only considered the length of the operation and the length of the
StreamRef without considering what it meant for the offset to be at a
non-zero position.
llvm-svn: 271496
... and merge into `Value::getPointerDereferenceableBytes`. This was
suggested by Artur Pilipenko in D20764 -- since we no longer allow loads
of unsized types, there is no need anymore to have this special logic.
llvm-svn: 271455
Add support for the new pass manager to MemorySSA pass.
Change MemorySSA to be computed eagerly upon construction.
Change MemorySSAWalker to be owned by the MemorySSA object that creates
it.
Reviewers: dberlin, george.burgess.iv
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19664
llvm-svn: 271432
Summary:
Make sure that the SCEVExpander Builder insert point and any
saved/restored insert points are kept consistent (i.e. their Instruction
and BasicBlock match) when moving instructions in SCEVExpander.
This fixes an issue triggered by
http://reviews.llvm.org/D18001 [LSR] Create fewer redundant instructions.
Test case will be added in reapply commit of above change:
http://reviews.llvm.org/D18480 Reapply [LSR] Create fewer redundant instructions.
Reviewers: sanjoy
Subscribers: mzolotukhin, sanjoy, qcolombet, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D20703
llvm-svn: 271424
This patch extends CFLAA to recognize allocation functions such as
malloc, free, etc, so we can treat them more aggressively.
Patch by Jia Chen.
Differential Revision: http://reviews.llvm.org/D20776
llvm-svn: 271421
This will be necessary to allow the global merge pass to attach
multiple debug info metadata nodes to global variables once we reverse
the edge from DIGlobalVariable to GlobalVariable.
Differential Revision: http://reviews.llvm.org/D20414
llvm-svn: 271358
This tidies up some code that was manually constructing RuntimeDyld::SymbolInfo
instances from JITSymbols. It will save more mess in the future when
JITSymbol::getAddress is extended to return an Expected<TargetAddress> rather
than just a TargetAddress, since we'll be able to embed the error checking in
the conversion.
llvm-svn: 271350
This patch adds an IR, assembly and bitcode representation for metadata
attachments for globals. Future patches will port existing features to use
these new attachments.
Differential Revision: http://reviews.llvm.org/D20074
llvm-svn: 271348
Refactor LiveIntervals::renameDisconnectedComponents() to be a pass.
Also change the name to "RenameIndependentSubregs":
- renameDisconnectedComponents() worked on a MachineFunction at a time
so it is a natural candidate for a machine function pass.
- The algorithm is testable with a .mir test now.
- This also fixes a problem where the lazy renaming as part of the
MachineScheduler introduced IMPLICIT_DEF instructions after the number
of nodes in a region was counted, leading to a mismatch.
Differential Revision: http://reviews.llvm.org/D20507
llvm-svn: 271345
when the object is from a slice of a Mach-O Universal Binary use something like
"foo.o (for architecture i386)" as part of the error message when expected.
Also fixed places in these tools that were ignoring object file errors from
MachOUniversalBinary::getAsObjectFile() when the code moved on to see if
the slice was an archive.
To do this MachOUniversalBinary::getAsObjectFile() and
MachOUniversalBinary::getObjectForArch() were changed from returning
ErrorOr<...> to Expected<...> then that was threaded up to its users.
Converting these interfaces to Expected<> from ErrorOr<> does involve
touching a number of places. To contain the changes for now,
errorToErrorCode() is still used in two places yet to be fully converted.
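As a rough illustration of the resulting calling convention (getThing()
below is a hypothetical stand-in for interfaces like getAsObjectFile()):
```
#include "llvm/Support/Error.h"
#include <system_error>

using namespace llvm;

// Hypothetical Expected-returning API standing in for the real interfaces.
static Expected<int> getThing(bool Ok) {
  if (!Ok)
    return make_error<StringError>("bad slice", inconvertibleErrorCode());
  return 42;
}

int main() {
  if (Expected<int> V = getThing(true)) {
    // Success: use *V.
    (void)*V;
  } else {
    // Failure: the Error must be consumed. During the transition it can be
    // squashed back into a std::error_code via errorToErrorCode().
    std::error_code EC = errorToErrorCode(V.takeError());
    (void)EC;
  }
  return 0;
}
```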
llvm-svn: 271332
Adds the method MCStreamer::EmitBinaryData, which is usually an alias
for EmitBytes. In the MCAsmStreamer case, it is overridden to emit hex
dump output like this:
.byte 0x0e, 0x00, 0x08, 0x10
.byte 0x03, 0x00, 0x00, 0x00
.byte 0x00, 0x00, 0x00, 0x00
.byte 0x00, 0x10, 0x00, 0x00
Also, when verbose asm comments are enabled, this patch prints the dump
output for each comment before its record, like this:
# ArgList (0x1000) {
# TypeLeafKind: LF_ARGLIST (0x1201)
# NumArgs: 0
# Arguments [
# ]
# }
.byte 0x06, 0x00, 0x01, 0x12
.byte 0x00, 0x00, 0x00, 0x00
This should make debugging easier and testing more convenient.
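Roughly, a hex dump like the one above can be produced by chunking the data
and printing one .byte directive per chunk; here is a standalone sketch of
that formatting, not the actual MCAsmStreamer code:
```
#include <cstdio>
#include <string>
#include <vector>

// Illustrative only: group the bytes four at a time and print one ".byte"
// directive per group, in the style shown in the commit message.
void emitBinaryDataAsText(const std::vector<unsigned char> &Data) {
  const size_t BytesPerLine = 4;
  for (size_t I = 0; I < Data.size(); I += BytesPerLine) {
    std::string Line = "\t.byte\t";
    for (size_t J = I; J < Data.size() && J < I + BytesPerLine; ++J) {
      char Buf[8];
      std::snprintf(Buf, sizeof(Buf), "0x%02x", (unsigned)Data[J]);
      if (J != I)
        Line += ", ";
      Line += Buf;
    }
    std::puts(Line.c_str());
  }
}

int main() {
  emitBinaryDataAsText({0x0e, 0x00, 0x08, 0x10, 0x03, 0x00, 0x00, 0x00});
}
```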
Reviewers: aaboud
Subscribers: majnemer, zturner, amccarth, aaboud, llvm-commits
Differential Revision: http://reviews.llvm.org/D20711
llvm-svn: 271313
The MachO export trie is a serially encoded trie keyed by symbol name. This code parses the trie and preserves the structure so that it can be dumped again.
llvm-svn: 271300
Added support to map intrinsics
__builtin_arm_{ldc,ldcl,ldc2,ldc2l,stc,stcl,stc2,stc2l}
to their ARM instructions.
Differential Revision: http://reviews.llvm.org/D20564
llvm-svn: 271271
This adds support to the backend for SjLj EH as an exception
model. This is *NOT* the default model, and requires explicitly opting into it
from the frontend. GCC supports this model, and for MinGW it can still be
enabled via the `--using-sjlj-exceptions` option.
Addresses PR27749!
llvm-svn: 271244
This function failed to type-check as it was. No test case yet (we only have
regression tests for the remote-JIT code, and LLI doesn't use this function), but
an upcoming chapter of the Kaleidoscope Building A JIT tutorials will use
this.
llvm-svn: 271189
Consolidate documentation by removing comments from the .cpp file where
the comments in the .cpp file were copy-pasted from the header.
llvm-svn: 271157
Summary:
This change teaches SCEV to reduce `(extractvalue
0 (op.with.overflow X Y))` into `op X Y` (with a no-wrap tag if
possible).
This was first checked in at r265912 but reverted in r265950 because it
exposed some issues around how SCEV handled post-inc add recurrences.
Those issues have now been fixed.
Reviewers: atrick, regehr
Subscribers: mcrosier, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D18684
llvm-svn: 271152
Fixes PR27315.
The post-inc version of an add recurrence needs to "follow the same
rules" as a normal add or subtract expression. Otherwise we miscompile
programs like
```
int main() {
int a = 0;
unsigned a_u = 0;
volatile long last_value;
do {
a_u += 3;
last_value = (long) ((int) a_u);
if (will_add_overflow(a, 3)) {
// Leave, and don't actually do the increment, so no UB.
printf("last_value = %ld\n", last_value);
exit(0);
}
a += 3;
} while (a != 46);
return 0;
}
```
This patch changes SCEV to put no-wrap flags on post-inc add recurrences
only when the poison from a potential overflow would go on to cause
undefined behavior.
To avoid regressing performance too much, I've assumed infinite loops
without side effects are undefined behavior to prove poison<->UB
equivalence in more cases. This isn't ideal, but is not new to LLVM as
a whole, and far better than the situation I'm trying to fix.
llvm-svn: 271151
This patch removes the VPMOVSX and (V)PMOVZX sign/zero extension llvm intrinsics and auto-upgrades to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.
Reapplied now that the companion patch (D20684), which removes/auto-upgrades the clang intrinsics, has been committed.
Differential Revision: http://reviews.llvm.org/D20686
llvm-svn: 271131
This matches the behavior of GNU assembler which supports symbolic
expressions in absolute expressions used in assembly directives.
Differential Revision: http://reviews.llvm.org/D20752
llvm-svn: 271102
This converts remaining uses of ByteStream, which was still
left in the symbol stream and type stream, to using the new
StreamInterface zero-copy classes.
RecordIterator is finally deleted, so this is the only way left
now. Additionally, more error checking is added when iterating
the various streams.
With this, the transition to zero copy pdb access is complete.
llvm-svn: 271101
Remove broken patterns matching it. This was matching the
unsafe math pattern and expanding the fix for the buggy instruction
from the pattern. The problems are also on CI. Remove the workarounds
and only use fract with unsafe math or from the intrinsic.
llvm-svn: 271078
Summary:
Unroll factor (Count) calculations moved to a new function.
Early exits added when the unroll factor is defined by a pragma or by "-unroll-count".
New type of unrolling "Force" introduced (previously used implicitly).
New unroll preference "AllowRemainder" introduced and set "true" by default.
(It should be set to false for architectures that suffer from it.)
Reviewers: hfinkel, mzolotukhin, zzheng
Differential Revision: http://reviews.llvm.org/D19553
From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 271071
This matches the behavior of GNU assembler which supports symbolic
expressions in absolute expressions used in assembly directives.
Differential Revision: http://reviews.llvm.org/D20656
llvm-svn: 271028
Due to differences in template instantiation rules, it is not
portable to static_assert(false) inside of an invalid specialization
of a template. Instead I just =delete the method so that it can't
be used, and leave a comment that it must be explicitly specialized.
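A standalone sketch of that pattern, with a hypothetical Extractor in place
of VarStreamArrayExtractor:
```
#include <cstdio>

// The primary template deletes the method instead of static_assert(false),
// which some compilers reject even if the specialization is never used.
// (Extractor is a made-up name for illustration.)
template <typename T> struct Extractor {
  // Must be explicitly specialized for each record type.
  static T extract(const char *Data) = delete;
};

// An explicit specialization that clients are expected to provide.
template <> struct Extractor<int> {
  static int extract(const char *Data) { return Data[0]; }
};

int main() {
  char Buf[] = {7};
  std::printf("%d\n", Extractor<int>::extract(Buf));
  // Extractor<float>::extract(Buf);  // would fail to compile: deleted method
}
```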
llvm-svn: 271027
This reverts commit r271024 due to error: static_assert failed
"You must either provide a specialization of VarStreamArrayExtractor
or a custom extractor"
llvm-svn: 271026
This reduces the amount of memory used by llvm-pdbdump by roughly
1/3 of the size of the PDB file.
Differential Revision: http://reviews.llvm.org/D20724
Reviewed By: ruiu
llvm-svn: 271025
Fix: updated clang code that had mistakenly not been updated.
Original commit message:
[llvm-mc] - Teach llvm-mc to generate zlib styled compression sections.
This patch is strongly based on previously reverted D20331.
(Because gnuutils < 2.26 does not support compressed debug sections in the non zlib-gnu style.)
The difference is that this patch supports both zlib and zlib-gnu styles.
The -compress-debug-sections option now supports the following values:
-compress-debug-sections=zlib-gnu
-compress-debug-sections=zlib
-compress-debug-sections=none
Previously, specifying -compress-debug-sections enabled zlib-gnu compression,
so passing "-compress-debug-sections=zlib-gnu" restores the behavior
from before this patch for the case when compression was enabled.
Differential revision: http://reviews.llvm.org/D20676
llvm-svn: 270987
It broke buildbot:
http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-ubuntu-fast/builds/13585/steps/build/logs/stdio
Initial commit message:
[llvm-mc] - Teach llvm-mc to generate zlib styled compression sections.
This patch is strongly based on previously reverted D20331.
(Because gnuutils < 2.26 does not support compressed debug sections in the non zlib-gnu style.)
The difference is that this patch supports both zlib and zlib-gnu styles.
The -compress-debug-sections option now supports the following values:
-compress-debug-sections=zlib-gnu
-compress-debug-sections=zlib
-compress-debug-sections=none
Previously, specifying -compress-debug-sections enabled zlib-gnu compression,
so passing "-compress-debug-sections=zlib-gnu" restores the behavior
from before this patch for the case when compression was enabled.
Differential revision: http://reviews.llvm.org/D20676
llvm-svn: 270978
This patch is strongly based on previously reverted D20331.
(Because gnuutils < 2.26 does not support compressed debug sections in the non zlib-gnu style.)
The difference is that this patch supports both zlib and zlib-gnu styles.
The -compress-debug-sections option now supports the following values:
-compress-debug-sections=zlib-gnu
-compress-debug-sections=zlib
-compress-debug-sections=none
Previously, specifying -compress-debug-sections enabled zlib-gnu compression,
so passing "-compress-debug-sections=zlib-gnu" restores the behavior
from before this patch for the case when compression was enabled.
Differential revision: http://reviews.llvm.org/D20676
llvm-svn: 270977
This patch removes the VPMOVSX and (V)PMOVZX sign/zero extension llvm intrinsics and auto-upgrades to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.
A companion patch (D20684) removes/auto-upgrades the clang intrinsics.
Differential Revision: http://reviews.llvm.org/D20686
llvm-svn: 270973
This will be needed in order to consistently return an Error
to clients of the API being developed in D20268.
Differential Revision: http://reviews.llvm.org/D20550
llvm-svn: 270967
Since we want to move toward zero-copy access to stream data, we
want to remove all instances of copying operations. So get rid
of some of those here.
Differential Revision: http://reviews.llvm.org/D20720
Reviewed By: ruiu
llvm-svn: 270960
APInt::operator+(uint64_t) just forwarded to operator+(const APInt&).
Constructing the APInt for the RHS takes an allocation which isn't
required. Also, for APInts in the slow path, operator+ would
call add() internally which iterates over both arrays of values. Instead
we can use add_1 and sub_1 which only iterate while there is something to do.
Using the memory for 'opt -O2 verify-uselistorder.lto.opt.bc -o opt.bc'
(see r236629 for details), this reduces the number of allocations from
23.9M to 22.7M.
llvm-svn: 270959
The type_traits header in libstdc++ 4.8 does not define
is_trivially_constructible, so the code doesn't compile with it.
In this file we are using the trait for assertion to provide a better
error message. Removing it doesn't change the meaning of the code.
Differential Revision: http://reviews.llvm.org/D20719
llvm-svn: 270957
This comment was included in Peter Collingbourne's original version of
StringError (see http://reviews.llvm.org/D20550), where it made sense. It was
accidentally copied over with the rest of the class, but no longer applies.
llvm-svn: 270956
PDBs can be extremely large. We're already mapping the entire
PDB into the process's address space, but to make matters worse
the blocks of the PDB are not arranged contiguously. So, when
we have something like an array or a string embedded into the
stream, we have to make a copy. Since it's convenient to use
traditional data structures to iterate and manipulate these
records, we need the memory to be contiguous.
As a result of this, we were using roughly twice as much memory
as the file size of the PDB, because every stream was copied
out and re-stitched together contiguously.
This patch addresses this by improving the MappedBlockStream
to allocate from a BumpPtrAllocator only when a read requires
a discontiguous read. Furthermore, it introduces some data
structures backed by a stream which can iterate over both
fixed and variable length records of a PDB. Since everything
is backed by a stream and not a buffer, we can read almost
everything from the PDB with zero copies.
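A very rough sketch of the idea, with hypothetical names (this is not the
real MappedBlockStream; it only shows the fast zero-copy path versus the
allocator-backed stitching path, and assumes the request stays within the
stream):
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/Allocator.h"
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

using namespace llvm;

struct ToyBlockStream {
  std::vector<ArrayRef<uint8_t>> Blocks; // discontiguous blocks of the file
  BumpPtrAllocator Pool;                 // backs stitched-together reads

  ArrayRef<uint8_t> read(size_t Block, size_t Offset, size_t Size) {
    if (Offset + Size <= Blocks[Block].size())
      return Blocks[Block].slice(Offset, Size); // zero-copy fast path
    // Slow path: copy the pieces from consecutive blocks into pool memory.
    uint8_t *Buf = Pool.Allocate<uint8_t>(Size);
    size_t Copied = 0;
    while (Copied < Size) {
      ArrayRef<uint8_t> Src = Blocks[Block].drop_front(Offset);
      size_t N = std::min(Size - Copied, Src.size());
      std::memcpy(Buf + Copied, Src.data(), N);
      Copied += N;
      ++Block;
      Offset = 0;
    }
    return ArrayRef<uint8_t>(Buf, Size);
  }
};
```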
Differential Revision: http://reviews.llvm.org/D20654
Reviewed By: ruiu
llvm-svn: 270951
StringError can be used to represent Errors that aren't recoverable based on
the error type, but that have a useful error message that can be reported to
the user or logged.
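A minimal usage sketch; the file name and openConfig() are made up for
illustration:
```
#include "llvm/Support/Error.h"
#include "llvm/Support/raw_ostream.h"
#include <string>

using namespace llvm;

// Hypothetical operation that fails with a human-readable message.
static Error openConfig(const char *Path) {
  bool Missing = true; // pretend the file was not found
  if (Missing)
    return make_error<StringError>("cannot open " + std::string(Path),
                                   inconvertibleErrorCode());
  return Error::success();
}

int main() {
  if (Error E = openConfig("settings.cfg"))
    logAllUnhandledErrors(std::move(E), errs(), "example: ");
}
```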
llvm-svn: 270948
This adds support for YAML round tripping dyld info lazy bindings. The storage and format of these is the same as regular bind opcodes, they are just interpreted differently by dyld, and can have DONE opcodes in the middle of the opcode lists.
llvm-svn: 270920
This adds support for YAML round tripping dyld info weak bindings. The storage and format of these is the same as regular bind opcodes, they are just interpreted differently by dyld.
llvm-svn: 270911
Global variables and aliases are emitted eagerly, but there may not be any in
the incoming module. In that case, we can save some memory and compile time by
not building, emitting and tracking an empty globals module.
llvm-svn: 270908
This adds support for YAML round tripping dyld info bind opcodes. Bind opcodes can have signed or unsigned LEB128 data, and they can have symbols associated with them.
llvm-svn: 270901
r270777 improved the precision of alloca vs. inbounds GEP alias queries: if
we have (a) an inbounds GEP and (b) a pointer based on an alloca, and the
beginning of the object the GEP points to would have a negative offset with
respect to the alloca, then the GEP can not alias pointer (b).
This makes the same logic fire when (b) is based on a GlobalVariable instead
of an alloca.
Differential Revision: http://reviews.llvm.org/D20652
llvm-svn: 270893
There is only one caller of MemorySSA::createNewAccess, and it passes true
as the IgnoreNonMemory argument. Remove that argument and fold its behavior
into createNewAccess.
llvm-svn: 270812
This fixes the example so that it matches the pass's behavior. I was a
little confused by the example until I tried running it and realized that
there was a mistake.
Differential Revision: http://reviews.llvm.org/D20657
llvm-svn: 270811
When we have "Image Info Version" module flag but don't have "Class Properties"
module flag, set "Class Properties" module flag to 0, so we can correctly emit
errors when one module has the flag set and another module does not.
rdar://26469641
llvm-svn: 270791
This matches the behavior of GNU assembler which supports symbolic
expressions in absolute expressions used in assembly directives.
Differential Revision: http://reviews.llvm.org/D20337
llvm-svn: 270786
If we have (a) a GEP and (b) a pointer based on an alloca, and the
beginning of the object the GEP points to would have a negative offset with
respect to the alloca, then the GEP can not alias pointer (b).
For example, consider code like:
struct foo { int f0, f1, /* ... */ };
...
struct foo alloca;
struct foo *random = bar(&alloca);
int *f0 = &alloca.f0;
int *f1 = &random->f1;
Which is lowered, approximately, to:
%alloca = alloca %struct.foo
%random = call %struct.foo* @random(%struct.foo* %alloca)
%f0 = getelementptr inbounds %struct, %struct.foo* %alloca, i32 0, i32 0
%f1 = getelementptr inbounds %struct, %struct.foo* %random, i32 0, i32 1
Assume %f1 and %f0 alias. Then %f1 would point into the object allocated
by %alloca. Since the %f1 GEP is inbounds, that means %random must also
point into the same object. But since %f0 points to the beginning of %alloca,
the highest %f1 can be is (%alloca + 3). This means %random can not be higher
than (%alloca - 1), and so is not inbounds, a contradiction.
Differential Revision: http://reviews.llvm.org/D20495
llvm-svn: 270777
Getting accurate locations for loops is important, because those locations are
used by the frontend to generate optimization remarks. Currently, optimization
remarks for loops often appear on the wrong line, often the first line of the
loop body instead of the loop itself. This is confusing because that line might
itself be another loop, or might be somewhere else completely if the body was
an inlined function call. This happens because of the way we find the loop's
starting location. First, we look for a preheader, and if we find one, and its
terminator has a debug location, then we use that. Otherwise, we look for a
location on an instruction in the loop header.
The fallback heuristic is not bad, but will almost always find the beginning of
the body, and not the loop statement itself. The preheader location search
often fails because there's often not a preheader, and even when there is a
preheader, depending on how it was formed, it sometimes carries the location of
some preceding code.
I don't see any good theoretical way to fix this problem. On the other hand,
this seems like a straightforward solution: Put the debug location in the
loop's llvm.loop metadata. A companion Clang patch will cause Clang to insert
llvm.loop metadata with appropriate locations when generating debugging
information. With these changes, our loop remarks have much more accurate
locations.
Differential Revision: http://reviews.llvm.org/D19738
llvm-svn: 270771
As a result of D18634 we no longer infer certain attributes on linkonce_odr
functions at compile time, and may only infer them at LTO time. The readnone
attribute in particular is required for virtual constant propagation (part
of whole-program virtual call optimization) to work correctly.
This change moves the whole-program virtual call optimization pass after
the function attribute inference passes, and enables the attribute inference
passes at opt level 1, so that virtual constant propagation has a chance to
work correctly for linkonce_odr functions.
Differential Revision: http://reviews.llvm.org/D20643
llvm-svn: 270765
It may materialize a declaration, or a definition. The name could
be misleading. This is following a merge of materializeInitFor()
into materializeDeclFor().
Differential Revision: http://reviews.llvm.org/D20593
llvm-svn: 270759
They were originally separated to handle the co-recursion between
the ValueMapper and the ValueMaterializer. This recursion does not
exist anymore: the ValueMapper now uses a Worklist and the
ValueMaterializer schedules jobs on the Worklist.
Differential Revision: http://reviews.llvm.org/D20593
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 270758
We need to reuse this functionality, including making
additional generic stream types that are smarter about how and
when they copy memory versus referencing the original memory.
So all of these structures belong in the common library
rather than being pdb specific.
llvm-svn: 270751
searching for external symbols, and fall back to the SymbolResolver::findSymbol
method if the former returns null.
This makes RuntimeDyld behave more like a static linker: Symbol definitions
from within the current module's "logical dylib" will be preferred to
external definitions. We can build on this behavior in the future to properly
support weak symbol handling.
Custom symbol resolvers that override the findSymbolInLogicalDylib method may
notice changes due to this patch. Clients who have not overridden this method
should generally be unaffected, however users of the OrcMCJITReplacement class
may notice changes.
llvm-svn: 270716
Also, rename recognizeBitReverseOrBSwapIdiom to recognizeBSwapOrBitReverseIdiom,
so the ordering of the MatchBSwaps and MatchBitReversals arguments is
consistent with the function name.
llvm-svn: 270715
Move the now index-based ODR resolution and internalization routines out
of ThinLTOCodeGenerator.cpp and into either LTO.cpp (index-based
analysis) or FunctionImport.cpp (index-driven optimizations).
This is to enable usage by other linkers.
llvm-svn: 270698
There's already an ARMTargetParser; now adding a similar one for AArch64,
so we can use it to do ARCH/CPU/FPU parsing in clang and llvm instead of
string comparison.
Patch by Jojo Ma.
llvm-svn: 270687
Followup to D20528 clang patch, this removes the (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) llvm intrinsics and auto-upgrades to sitofp/fpext instead.
Differential Revision: http://reviews.llvm.org/D20568
llvm-svn: 270678
Ensure that the unused fields are explicitly stated when defining the types.
Add some compile time assertions about the size requirements for the structure
types.
llvm-svn: 270663
Summary:
Adds fastpath instrumentation for esan's working set tool. The
instrumentation for an intra-cache-line load or store consists of an
inlined write to shadow memory bits for the corresponding cache line.
Adds a basic test for this instrumentation.
Reviewers: aizatsky
Subscribers: vitalybuka, zhaoqin, kcc, eugenis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20483
llvm-svn: 270640
This patch adds support for:
S_EXPORT
LF_BITFIELD
With this patch, I have run through a couple of gigabytes of PDB
files and cannot find a type or symbol that we do not understand.
llvm-svn: 270637
This adds support for parsing and dumping the following
symbol types:
S_LPROCREF
S_ENVBLOCK
S_COMPILE2
S_REGISTER
S_COFFGROUP
S_SECTION
S_THUNK32
S_TRAMPOLINE
As of this patch, the test PDB files no longer have any unknown
symbol types.
llvm-svn: 270628
When dumping huge PDB files, too many of the options were grouped
together, so you would get a never-ending spew of output. This patch
introduces more granular display options so you can only dump the
fields you actually care about.
llvm-svn: 270607
This makes use of the newly introduced `CVSymbolVisitor` to dump details
of each type of symbol record in the symbol streams. Future patches will
bring this visitor based dumping to the publics stream, as well as
creating a `SymbolDumpDelegate` to print more information about
relocations etc.
Differential Revision: http://reviews.llvm.org/D20545
Reviewed By: ruiu
llvm-svn: 270585
Summary:
This patch changes the ODR resolution and internalization to be based on
updates to the Index, which are consumed by the backend portion of the
transformations.
It will be followed by an NFC change to move these out of libLTO's
ThinLTOCodeGenerator so that it can be used by other linkers
(gold and lld) and by ThinLTO distributed backends.
The global summary-based portions use callbacks so that the client can
determine the prevailing copy and other information in a client-specific
way. Eventually, with the API being developed in D20268, these may be
modified to use information such as symbol resolutions, supplied by the
clients to the API.
Reviewers: joker-eph
Subscribers: joker.eph, pcc, llvm-commits
Differential Revision: http://reviews.llvm.org/D20290
llvm-svn: 270584
Now, after landing r270560, r270557 and r270320, it is the proper time.
Original commit message:
[llvm-mc] - Teach llvm-mc to generate compressed debug sections in zlib style.
Before this patch llvm-mc generated zlib-gnu styled sections.
That means no SHF_COMPRESSED flag was set, magic 'zlib' signature
was used in combination with full size field. Sections were renamed to "*.z*".
This patch changes the compression style to zlib, as zlib-gnu appears
to be deprecated everywhere.
Differential revision: http://reviews.llvm.org/D20331
llvm-svn: 270569
Fix was:
1) Had to regenerate dwarfdump-test-zlib.elf-x86-64, dwarfdump-test-zlib-gnu.elf-x86-64
(because llvm-symbolizer-zlib.test uses those inputs for its purposes and failed).
2) Updated llvm-symbolizer-zlib.test (updated used call function address to match new files +
added one more check for newly created dwarfdump-test-zlib-gnu.elf-x86-64 binary input).
3) Updated comment in dwarfdump-test-zlib.cc.
Original commit message:
[llvm-dwarfdump] - Teach dwarfdump to decompress debug sections in zlib style.
Before this, llvm-dwarfdump only recognized the zlib-gnu compression style of headers;
this patch adds support for the zlib style.
It looks reasonable to support both styles for dumping,
even if we are not going to support generating the deprecated gnu one.
Differential revision: http://reviews.llvm.org/D20470
llvm-svn: 270557
fix: forgot to commit the updated dwarfdump-test-zlib.elf-x86-64
Original commit message:
[llvm-dwarfdump] - Teach dwarfdump to decompress debug sections in zlib style.
Before this, llvm-dwarfdump only recognized the zlib-gnu compression style of headers;
this patch adds support for the zlib style.
It looks reasonable to support both styles for dumping,
even if we are not going to support generating the deprecated gnu one.
Differential revision: http://reviews.llvm.org/D20470
llvm-svn: 270543
Before this, llvm-dwarfdump only recognized the zlib-gnu compression style of headers;
this patch adds support for the zlib style.
It looks reasonable to support both styles for dumping,
even if we are not going to support generating the deprecated gnu one.
Differential revision: http://reviews.llvm.org/D20470
llvm-svn: 270540
Moved the ModuleLoader and supporting helper loadModuleFromBuffer out of
ThinLTOCodeGenerator and into new LTO.h/LTO.cpp files. This is in
preparation for a patch that will utilize these in the gold-plugin.
Note that there are some other pending patches (D20268 and D20290) that
also plan to refactor common interfaces and functionality into this same
pair of new files.
llvm-svn: 270509
to llvm-objdump. This section is created with the -fembed-bitcode option.
This requires the use of libxar, and the CMake and lit support were crafted by
Chris Bieneman!
rdar://26202242
llvm-svn: 270491
This effectively reverts commit r270389 and re-lands r270106, but it's
almost a rewrite.
The behavior change in r270106 was that we could no longer assume that
each LF_FUNC_ID record got its own type index. This patch adds a map
from DINode* to TypeIndex, so we can stop making that assumption.
This change also emits padding bytes between type records similar to the
way MSVC does. The size of the type record includes the padding bytes.
llvm-svn: 270485
This will pave the way to introduce a full fledged symbol visitor
similar to how we have a type visitor, thus allowing the same
dumping code to be used in llvm-readobj and llvm-pdbdump.
Differential Revision: http://reviews.llvm.org/D20384
Reviewed By: rnk
llvm-svn: 270475
In r268693, we started requiring that SelectionDAGISel::Select return
void, but provided a default implementation that did just that by
calling into the old interface. Now that all targets have been
updated, we'll just remove the default implementation.
llvm-svn: 270454
Summary: This needs to get in before anything is released concerning the attribute. If the old name gets out into the wild, then we are stuck with it forever. Putting it in its own diff should at least get that part in fast.
Reviewers: Wallbraker, whitequark, joker.eph, echristo, rafael, jyknight
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D20417
llvm-svn: 270452
Prior to this patch, we were using 1 for all the repairing costs.
Now, we use the information from the target to get this information.
llvm-svn: 270304
We now use LiveRangeCalc::extendToUses() instead of a specially designed
algorithm in constructMainRangeFromSubranges():
- The original motivation for constructMainRangeFromSubranges() were
differences between the main liverange and subranges because of hidden
dead definitions. This case however cannot happen anymore with the
DetectDeadLaneMasks pass in place.
- It simplifies the code.
- This fixes a longstanding bug where we did not properly create new SSA
values on merging control flow (the MachineVerifier missed most of
these cases).
- Move constructMainRangeFromSubranges() to LiveIntervalAnalysis and
LiveRangeCalc to better match the implementation/available helper
functions.
This re-applies r269016. The fixes from r270290 and r270259 should avoid
the machine verifier problems this time.
llvm-svn: 270291
Summary:
Uses ModulePass instead of FunctionPass for EfficiencySanitizerPass to
better support global variable creation for a forthcoming struct field
counter tool.
Patch by Qin Zhao.
Reviewers: aizatsky
Subscribers: llvm-commits, eugenis, vitalybuka, bruening, kcc
Differential Revision: http://reviews.llvm.org/D20458
llvm-svn: 270263
The DBI stream contains the stream number of the symbol record stream.
The symbol record stream is an array of length-type-value members.
Each member represents one symbol.
The publics stream contains offsets into the symbol record stream.
This patch is to print out all symbols that are referenced by
the publics stream.
Note that even with this patch, llvm-pdbdump cannot dump all the
information in a publics stream since it contains more information
than symbol names. I'll improve it in followup patches.
Differential Revision: http://reviews.llvm.org/D20480
llvm-svn: 270262
Fix renameDisconnectedComponents() creating vreg uses that can be
reached from function begin without having a definition (or explicit
live-in). Fix this by inserting IMPLICIT_DEF instructions before
control-flow joins as necessary.
Removes an assert from MachineScheduler because we may now get
additional IMPLICIT_DEF when preparing the scheduling policy.
This fixes the underlying problem of http://llvm.org/PR27705
llvm-svn: 270259
The Fast mode takes the first mapping; the greedy mode loops over all
the possible mappings for an instruction and chooses the cheapest one.
Test case will come with target specific code, since we currently do not
have instructions that have several mappings.
llvm-svn: 270249
computeMapping.
Computing the cost of a mapping takes some time.
Since in Fast mode, the cost is irrelevant, just spare some cycles by not
computing it.
In Greedy mode, we need to choose the best cost, that means that when
the local cost gets more expensive than the best cost, we can stop
computing the repairing and cost for the current mapping.
llvm-svn: 270245
This adds support for handling unknown load commands, and a bogus_load_command test.
Unknown or unsupported load commands can be specified in YAML by their hex value.
llvm-svn: 270239
The previous choice of the insertion points for repairing was
straightforward but may introduce some basic block or edge splitting. In
some situations this is something we can avoid.
For instance, when repairing a phi argument, instead of placing the
repairing on the related incoming edge, we may move it to the previous
block, before the terminators. This is only possible when the argument
is not defined by one of the terminators.
llvm-svn: 270232
If an inline function is observed but unused in a translation unit, dummy
coverage mapping data with zero hash is stored for this function.
If such a coverage mapping section came earlier than the real one, the latter
was ignored. As a result, llvm-cov was unable to show coverage information
for those functions.
Differential Revision: http://reviews.llvm.org/D20286
llvm-svn: 270194
Move the ExceptionHandling enumeration into TargetOptions and introduce a field
to track the desired exception model. This will allow us to set the exception
model from the frontend (needed to optionally use SjLj EH on other targets where
zero-cost is available and preferred).
llvm-svn: 270178
an instruction.
Use the previously introduced RepairingPlacement class to split the code
computing the repairing placement from the code doing the actual
placement. That way, we will be able to consider different placement and
then, only apply the best one.
llvm-svn: 270168
When assigning the register banks we may have to insert repairing code
to move already assigned values across register banks.
Introduce a few helper classes to keep track of what is involved in the
repairing of an operand:
- InsertPoint and its derived classes record the positions, in the CFG,
where repairing has to be inserted.
- RepairingPlacement holds all the insert points for the repairing of an
operand plus the kind of action that is required to do the repairing.
This is going to be used to keep track of how the repairing should be
done, while comparing different solutions for an instruction. Indeed, we
will need the repairing placement to capture the cost of a solution and
we do not want to compute it a second time when we do the actual
repairing.
llvm-svn: 270167
register bank twice.
Prior to this change, we were checking if the assignment for the current
machine operand was matching; then we would check whether the mismatch
requires inserting repair code.
We actually already have this information from the first check, so just
pass it along.
NFCI.
llvm-svn: 270166
This helper class will be used to represent the cost of mapping an
instruction to a specific register bank.
The particularity of these costs is that they are mostly local, thus the
frequency of the basic block is irrelevant. However, for few
instructions (e.g., phis and terminators), the cost may be non-local and
then, we need to account for the frequency of the involved basic blocks.
This will be used by the greedy mode I am working on.
llvm-svn: 270163
This removes the subclasses of ProfileSummary and moves the members of the derived classes to the base class.
Differential Revision: http://reviews.llvm.org/D20390
llvm-svn: 270143
This splits ProfileSummary into two classes: a ProfileSummary class that has methods to convert from/to metadata and a ProfileSummaryBuilder class that computes the profile summary, which is in ProfileData.
Differential Revision: http://reviews.llvm.org/D20314
llvm-svn: 270136
This re-applies r270115.
Many of the MachO load commands can have data appended after the command structure. This data is frequently strings, but can actually be anything. This patch adds support for three optional fields on load command yaml descriptions.
The new PayloadString YAML field is populated with the data after load commands known to have strings as extra data.
The new ZeroPadBytes YAML field is a count of zero'd bytes after the end of the load command structure before the next command. This can apply anywhere in the file. MachO2YAML verifies that bytes are zero before populating this field, and YAML2MachO will add zero'd bytes.
The new PayloadBytes YAML field stores all bytes after the end of the load command structure before the next command if they are non-zero. This is a catch-all for all unhandled bytes. If MachO2Yaml populates PayloadBytes it will not populate ZeroPadBytes; instead, zero'd bytes will be in the PayloadBytes structure.
llvm-svn: 270124
Many of the MachO load commands can have data appended after the command structure. This data is frequently strings, but can actually be anything. This patch adds support for three optional fields on load command yaml descriptions.
The new PayloadString YAML field is populated with the data after load commands known to have strings as extra data.
The new ZeroPadBytes YAML field is a count of zero'd bytes after the end of the load command structure before the next command. This can apply anywhere in the file. MachO2YAML verifies that bytes are zero before populating this field, and YAML2MachO will add zero'd bytes.
The new PayloadBytes YAML field stores all bytes after the end of the load command structure before the next command if they are non-zero. This is a catch-all for all unhandled bytes. If MachO2Yaml populates PayloadBytes it will not populate ZeroPadBytes; instead, zero'd bytes will be in the PayloadBytes structure.
llvm-svn: 270115
Since the calls don't return, the instruction afterwards will never run,
and is just taking up unnecessary space in the binary.
Differential Revision: http://reviews.llvm.org/D20406
llvm-svn: 270109
getRegAsmName ends up making a copy of the register's name in order to
make a lower-case version of it. This is bad because
getRegForInlineAsmConstraint, its sole caller, does a lowercase
comparison anyway.
This resulted in a significant regression in compile time for the Linux
kernel because getRegAsmName is called in a loop by
getRegForInlineAsmConstraint.
Instead, forgo the call to lower in getRegAsmName and have it return a
StringRef.
No functionality change is intended.
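A small sketch of the shape of the change; getRegName() and its name table
are hypothetical stand-ins, not the real target code:
```
#include "llvm/ADT/StringRef.h"
#include <cstdio>

using namespace llvm;

// Hypothetical stand-in for getRegAsmName(): return a StringRef into a
// static name table instead of building a lowercased std::string per query.
static StringRef getRegName(unsigned Reg) {
  static const char *const Names[] = {"R0", "R1", "SP"};
  return Names[Reg];
}

int main() {
  // The caller does the case-insensitive comparison itself, so no temporary
  // string is allocated for each register that is checked.
  bool Match = StringRef("sp").equals_lower(getRegName(2));
  std::printf("match: %d\n", Match);
}
```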
llvm-svn: 270099
It broke buildbot:
http://lab.llvm.org:8011/builders/clang-s390x-linux/builds/4817/steps/ninja%20check%201/logs/stdio
Actually it is just because D20273 is not yet committed; these two were crossing each other,
and I'll find a way to land them separately soon.
Initial commit message:
[llvm-mc] - Teach llvm-mc to generate compressed debug sections in zlib style.
Before this patch llvm-mc generated zlib-gnu styled sections.
That means no SHF_COMPRESSED flag was set, magic 'zlib' signature
was used in combination with full size field. Sections were renamed to "*.z*".
This patch changes the compression style to zlib, as zlib-gnu appears
to be deprecated everywhere.
Differential revision: http://reviews.llvm.org/D20331
llvm-svn: 270075
There are at least 2 places (DAGCombiner, X86ISelLowering) where this could be used instead
of ad-hoc and watered down code that is trying to match a power-of-2 pattern.
Differential Revision: http://reviews.llvm.org/D20439
llvm-svn: 270073
Before this patch llvm-mc generated zlib-gnu styled sections.
That means no SHF_COMPRESSED flag was set, magic 'zlib' signature
was used in combination with full size field. Sections were renamed to "*.z*".
This patch changes the compression style to zlib, as zlib-gnu appears
to be deprecated everywhere.
Differential revision: http://reviews.llvm.org/D20331
llvm-svn: 270070
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
- Remove the base ProfError class to work around an MSVC ICE.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 270020
Summary:
Implement guard widening in LLVM. Description from GuardWidening.cpp:
The semantics of the `@llvm.experimental.guard` intrinsic let LLVM
transform it so that it fails more often than it did before the
transform. This optimization is called "widening" and can be used to hoist
and common runtime checks in situations like these:
```
%cmp0 = 7 u< Length
call @llvm.experimental.guard(i1 %cmp0) [ "deopt"(...) ]
call @unknown_side_effects()
%cmp1 = 9 u< Length
call @llvm.experimental.guard(i1 %cmp1) [ "deopt"(...) ]
...
```
to
```
%cmp0 = 9 u< Length
call @llvm.experimental.guard(i1 %cmp0) [ "deopt"(...) ]
call @unknown_side_effects()
...
```
If `%cmp0` is false, `@llvm.experimental.guard` will "deoptimize" back
to a generic implementation of the same function, which will have the
correct semantics from that point onward. It is always _legal_ to
deoptimize (so replacing `%cmp0` with false is "correct"), though it may
not always be profitable to do so.
NB! This pass is a work in progress. It hasn't been tuned to be
"production ready" yet. It is known to have quadriatic running time and
will not scale to large numbers of guards
Reviewers: reames, atrick, bogner, apilipenko, nlewycky
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D20143
llvm-svn: 269997
Having an enum member named Default is quite confusing: Is it distinct
from the others?
This patch removes that member and instead uses Optional<Reloc> in
places where we have user input that still hasn't been mapped to the
default value, which it is now clear must be one of the remaining 3
options.
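A sketch of the shape of the change; pickDefault() and resolve() are
hypothetical placeholders for the real target-specific logic:
```
#include "llvm/ADT/Optional.h"
#include "llvm/Support/CodeGen.h"
#include <cstdio>

using namespace llvm;

// User input that has not been mapped yet is an empty Optional rather than
// a fake Reloc::Default enumerator. pickDefault() is a made-up placeholder.
static Reloc::Model pickDefault() { return Reloc::Static; }

static Reloc::Model resolve(Optional<Reloc::Model> FromUser) {
  return FromUser ? *FromUser : pickDefault();
}

int main() {
  Optional<Reloc::Model> NotGiven;             // user passed nothing
  Optional<Reloc::Model> Given = Reloc::PIC_;  // user asked for PIC
  std::printf("%d %d\n", (int)resolve(NotGiven), (int)resolve(Given));
}
```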
llvm-svn: 269988
Summary:
MONITORX/MWAITX instructions provide similar capability to the MONITOR/MWAIT
pair while adding a timer function, such that another termination of the MWAITX
instruction occurs when the timer expires. The presence of the MONITORX and
MWAITX instructions is indicated by CPUID 8000_0001, ECX, bit 29.
The MONITORX and MWAITX instructions are intercepted by the same bits that
intercept MONITOR and MWAIT. MONITORX instruction establishes a range to be
monitored. MWAITX instruction causes the processor to stop instruction execution
and enter an implementation-dependent optimized state until occurrence of a
class of events.
Opcode of MONITORX instruction is "0F 01 FA". Opcode of MWAITX instruction is
"0F 01 FB". These opcode information is used in adding tests for the
disassembler.
These instructions are enabled for AMD's bdver4 architecture.
Patch by Ganesh Gopalasubramanian!
Reviewers: echristo, craig.topper, RKSimon
Subscribers: RKSimon, joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19795
llvm-svn: 269911
MC only needs to know if the output is PIC or not. It never has to
decide about creating GOTs and PLTs for example. The only thing that
MC itself uses this information for is expanding "macros" in sparc and
mips. The rest I am pretty sure could be moved to CodeGen.
This is a cleanup and isolates the code from future changes to
Reloc::Model.
llvm-svn: 269909
* Reworks the CVSymbolTypes.def to work similarly to TypeRecords.def.
* Moves some enums from SymbolRecords.h to CodeView.h to maintain
consistency with how we do type records.
* Generalizes a few simple things like the record prefix.
* Defines the leaf enum and the kind enum similar to how we do with type
records.
Differential Revision: http://reviews.llvm.org/D20342
Reviewed By: amccarth, rnk
llvm-svn: 269867
I don't yet fully understand the meaning of these data structures,
but at least it seems that their sizes and types are correct.
With this change, we can read publics streams to the end.
Differential Revision: http://reviews.llvm.org/D20343
llvm-svn: 269861
This adds support for all the MachO *_command structures. The load_command payloads still are not represented, but that will come next.
llvm-svn: 269808
when the object is in an archive to use something like libx.a(foo.o) as part of
the error message.
Also changed llvm-objdump and llvm-size to be like llvm-nm and ignore non-object
files in archives and not produce any error message.
To do this Archive::Child::getAsBinary() was changed from ErrorOr<...> to
Expected<...> then that was threaded up to its users.
Converting this interface to Expected<> from ErrorOr<> does involve
touching a number of places. To contain the changes for now the use of
errorToErrorCode() is still used in one place yet to be fully converted.
Again, there were some bugs in the existing code that did not deal with the
old ErrorOr<> return values. Now with Expected<>, since they must be
checked and the error handled, I added a TODO and comments for those.
llvm-svn: 269784
This adds support for all the MachO *_command structures. The load_command payloads still are not represented, but that will come next.
llvm-svn: 269782
This just checks that we emit all type records once, and then after
merging the type stream with no other type streams, we still emit every
kind of type record.
We could test the dumper output more closely, but that would make the
test very brittle. Currently we're just getting coverage.
llvm-svn: 269778
Summary:
Add support to control where files for a distributed backend (the
individual index files and optional imports files) are created.
This is invoked with a new thinlto-prefix-replace option in the gold
plugin and llvm-lto. If specified, expects a string of the form
"oldprefix:newprefix", and instead of generating these files in the
same directory path as the corresponding bitcode file, will use a path
formed by replacing the bitcode file's path prefix matching oldprefix
with newprefix.
Also add a new replace_path_prefix helper to Path.h in libSupport.
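A plain-C++ sketch of the "oldprefix:newprefix" replacement semantics; this
is not the actual replace_path_prefix implementation, and the paths are made
up for illustration:
```
#include <cstdio>
#include <string>

// If the path starts with the old prefix, splice in the new one; otherwise
// leave it unchanged.
static std::string replacePrefix(const std::string &Path,
                                 const std::string &OldPrefix,
                                 const std::string &NewPrefix) {
  if (Path.compare(0, OldPrefix.size(), OldPrefix) == 0)
    return NewPrefix + Path.substr(OldPrefix.size());
  return Path;
}

int main() {
  // e.g. a prefix-replace string of "/tmp/build:/dist/out"
  std::puts(replacePrefix("/tmp/build/foo.o.thinlto.bc",
                          "/tmp/build", "/dist/out").c_str());
}
```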
Depends on D19636.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19644
llvm-svn: 269771
PrologEpilogInserter has these 3 phases, which are related, but not
all of them are needed by all targets. This patch reorganizes PEI's
various functions around those phases for a clearer separation. It also
introduces a new TargetMachine hook, usesPhysRegsForPEI, which is true
for non-virtual targets. When it is true, all the phases operate as
before, and PEI requires the AllVRegsAllocated property on
MachineFunctions. Otherwise, CSR spilling and scavenging are skipped and
only prolog/epilog insertion/frame finalization is done.
Differential Revision: http://reviews.llvm.org/D18366
llvm-svn: 269750
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Address undefined-var-template warning.
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269694
Also s/Cycles/Iters/ in NumCyclesForStoreLoadThroughMemory to make it
clear that this is not about clock cycles but loop cycles/iterations.
llvm-svn: 269667
Without a diagnostic handler installed, llc's behaviour is to exit on the first
error that it encounters. This is very different from the behaviour of clang
and other front ends, which try to gather as many errors as possible before
exiting.
This commit adds a diagnostic handler to llc, allowing it to find and report
more than one error. The old behaviour is preserved under a flag (-exit-on-error).
Some of the tests fail with the new diagnostic handler, so they have to use the
new flag in order to run under the previous behaviour. Some of these are known
bugs, others need further investigation. Ideally, we should fix the tests and
remove the flag at some point in the future.
Reapplied after fixing the LLDB build that was broken due to the new
DiagnosticSeverity in LLVMContext.h, and fixing an instance of UB in the new change.
Patch by Diana Picus.
llvm-svn: 269655
This reverts commit r221331 and reinstate r220932 as discussed in D19271.
Original commit message was:
This patch adds an llvm_call_once which is a wrapper around
std::call_once on platforms where it is available and devoid
of bugs. The patch also migrates the ManagedStatic mutex to
be allocated using llvm_call_once.
These changes are philosophically equivalent to the changes
added in r219638, which were reverted due to a hang on Win32
which was the result of a bug in the Windows implementation
of std::call_once.
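For reference, the underlying standard pattern looks like this; the names
below are made up for the example:
```
#include <cstdio>
#include <mutex>

// Minimal illustration of the std::call_once pattern that the wrapper uses
// on platforms where <mutex> is usable.
static std::once_flag InitFlag;
static int *TheSingleton = nullptr;

static void initSingleton() { TheSingleton = new int(42); }

int *getSingleton() {
  // Safe to call concurrently; initSingleton runs exactly once.
  std::call_once(InitFlag, initSingleton);
  return TheSingleton;
}

int main() { std::printf("%d\n", *getSingleton()); }
```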
Differential Revision: http://reviews.llvm.org/D5922
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 269577
Without a diagnostic handler installed, llc's behaviour is to exit on the first
error that it encounters. This is very different from the behaviour of clang
and other front ends, which try to gather as many errors as possible before
exiting.
This commit adds a diagnostic handler to llc, allowing it to find and report
more than one error. The old behaviour is preserved under a flag (-exit-on-error).
Some of the tests fail with the new diagnostic handler, so they have to use the
new flag in order to run under the previous behaviour. Some of these are known
bugs, others need further investigation. Ideally, we should fix the tests and
remove the flag at some point in the future.
Reapplied after fixing the LLDB build that was broken due to the new
DiagnosticSeverity in LLVMContext.h.
Patch by Diana Picus.
llvm-svn: 269563
Summary:
This code is intended to be used as part of LLD's PDB writing. Until
that exists, this is exposed via llvm-readobj for testing purposes.
Type stream merging uses the following algorithm:
- Begin with a new empty stream, and a new empty hash table that maps
from type record contents to new type index.
- For each new type stream, maintain a map from source type index to
destination type index.
- For each record, copy it and rewrite its type indices to be valid in
the destination type stream.
- If the new type record is not already present in the destination
stream hash table, append it to the destination type stream, assign it
the next type index, and update the two hash tables.
- If the type record already exists in the destination stream, discard
it and update the type index map to forward the source type index to
the existing destination type index.
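A toy model of that merging loop, using strings for record contents and
vector positions for type indices; the real code also rewrites the type
indices inside each record before hashing it:
```
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
  std::vector<std::string> Source = {"int*", "struct A", "int*"};
  std::vector<std::string> Dest;
  std::unordered_map<std::string, unsigned> ContentToDestIndex;
  std::vector<unsigned> SourceToDest; // per-source-stream index map

  for (const std::string &Rec : Source) {
    auto It = ContentToDestIndex.find(Rec);
    if (It == ContentToDestIndex.end()) {
      // New record: append it and assign the next destination index.
      unsigned NewIndex = Dest.size();
      Dest.push_back(Rec);
      ContentToDestIndex[Rec] = NewIndex;
      SourceToDest.push_back(NewIndex);
    } else {
      // Duplicate: forward the source index to the existing destination.
      SourceToDest.push_back(It->second);
    }
  }

  for (unsigned I = 0; I < SourceToDest.size(); ++I)
    std::printf("source %u -> dest %u\n", I, SourceToDest[I]);
}
```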
Reviewers: zturner, ruiu
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20122
llvm-svn: 269521
I missed the fvmlib_command and the sub_framework_command, as well as a few uses of the dylib_command, dylinker_command, and linkedit_data_command.
This should now be a pretty complete listing. The only case I'm not sure about is LC_PREPAGE which doesn't seem to be referenced directly anywhere in LLVM.
llvm-svn: 269513
operator when the value type can't be initialized from the argument
type. Testing with the online MSVC compiler shows it is finally happy with this;
let's see if the build bot will tolerate it.
llvm-svn: 269501
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Changes since the initial commit:
- Fix error message printing in llvm-profdata.
- Check errors in loadTestingFormat() + annotateAllFunctions().
- Defer error handling in InstrProfIterator to InstrProfReader.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269491
The publics stream seems to contain information about public symbols.
It actually contains a serialized hash table along with fixed-size
headers. This patch is not complete: it only scans to the end of
the stream and dumps the header information. I'll write code to
de-serialize the hash table later.
Reviewers: zturner
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20256
llvm-svn: 269484
Transition InstrProf and Coverage over to the stricter Error/Expected
interface.
Differential Revision: http://reviews.llvm.org/D19901
llvm-svn: 269462
Summary: This way we can get rid of one of the fields in the .def file.
Reviewers: llvm-commits
Subscribers: zturner
Differential Revision: http://reviews.llvm.org/D20251
llvm-svn: 269461
This patch adds basic support for MachO::load_command. Load command types and sizes are encoded in the YAML and expanded back into MachO.
The YAML doesn't yet support load command structs; that is coming next. In the meantime, as a temporary measure, when writing MachO files the load commands are padded with zeros so that the generated binary is valid.
llvm-svn: 269442
This reverts commit r269428, as it breaks the LLDB build. We need to
understand how to change LLDB in the same way as LLC before landing this
again.
llvm-svn: 269432
Without a diagnostic handler installed, llc's behaviour is to exit on the first
error that it encounters. This is very different from the behaviour of clang
and other front ends, which try to gather as many errors as possible before
exiting.
This commit adds a diagnostic handler to llc, allowing it to find and report
more than one error. The old behaviour is preserved under a flag (-exit-on-error).
Some of the tests fail with the new diagnostic handler, so they have to use the
new flag in order to run under the previous behaviour. Some of these are known
bugs, others need further investigation. Ideally, we should fix the tests and
remove the flag at some point in the future.
Patch by Diana Picus.
llvm-svn: 269428
a sequence of values.
It increments through the values in the half-open range: [Begin, End),
producing those values when indirecting the iterator. It should support
integers, iterators, and any other type providing these basic arithmetic
operations.
This came up in the C++ standards committee meeting, and it seemed like
a useful construct that LLVM might want as well, and I wanted to
understand how easily we could solve it. I suspect this can be used to
write simpler counting loops even in LLVM along the lines of:
for (int i : seq(0, v.size())) {
...
};
As part of this, I had to fix the lack of a proxy object returned from
the operator[] in our iterator facade.
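A minimal use of the new helper, assuming it is exposed as llvm::seq in
llvm/ADT/Sequence.h:
```
#include "llvm/ADT/Sequence.h"
#include <cstdio>

int main() {
  // The range is half-open, so this prints 0 1 2 3 4.
  for (int i : llvm::seq(0, 5))
    std::printf("%d ", i);
  std::printf("\n");
}
```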
Differential Revision: http://reviews.llvm.org/D17870
llvm-svn: 269390
Summary:
...loop after the last iteration.
This is really hard to do correctly. The core problem is that we need to
model liveness through the induction PHIs from iteration to iteration in
order to get the correct results, and we need to correctly de-duplicate
the common subgraphs of instructions feeding some subset of the
induction PHIs. All of this can be driven either from a side effect at
some iteration or from the loop values used after the loop finishes.
This patch implements this by storing the forward-propagating analysis
of each instruction in a cache to recall whether it was free and whether
it has become live and thus counted toward the total unroll cost. Then,
at each sink for a value in the loop, we recursively walk back through
every value that feeds the sink, including looping back through the
iterations as needed, until we have marked the entire input graph as
live. Because we cache this, we never visit instructions more than twice
-- once when we analyze them and put them into the cache, and once when
we count their cost towards the unrolled loop. Also, because the cache
is only two bits and because we are dealing with relatively small
iteration counts, we can store all of this very densely in memory to
avoid this from becoming an excessively slow analysis.
The code here is still pretty gross. I would appreciate suggestions
about better ways to factor or split this up; I've stared too long at
the algorithmic side to really have a good sense of what the design
should look like.
Also, it might seem like we should do all of this bottom-up, but I think
that is a red herring. Specifically, the simplification power is *much*
greater working top-down. We can forward propagate very effectively,
even across strange and interesting recurrences around the backedge.
Because we use data to propagate, this doesn't cause a state space
explosion. Doing this level of constant folding, etc, would be very
expensive to do bottom-up because it wouldn't be until the last moment
that you could collapse everything. The current solution is essentially
a top-down simplification with a bottom-up cost accounting which seems
to get the best of both worlds. It makes the simplification incremental
and powerful while leaving everything dead until we *know* it is needed.
Finally, a core property of this approach is its *monotonicity*. At all
times, the current UnrolledCost is a conservatively low estimate. This
ensures that we will never early-exit from the analysis due to exceeding
a threshold when, if we had continued, the cost would have gone back
below the threshold. These kinds of bugs can cause incredibly hard to
track down random changes to behavior.
We could use a similar (but much simpler) technique within the inliner
as well to avoid considering speculated code in the inline cost.
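A loose, illustrative sketch of how a dense two-bit-per-instruction state
cache of the kind described could be laid out (the names and encoding here
are made up; this is not the actual analysis code):

#include <cstdint>
#include <vector>

enum class InstState : uint8_t { Unknown = 0, Free = 1, CountedLive = 2 };

class UnrollCostCache {
  std::vector<uint8_t> Bits;   // 2 bits per (iteration, instruction) pair
  unsigned InstsPerIteration;

public:
  UnrollCostCache(unsigned Iterations, unsigned Insts)
      : Bits((Iterations * Insts * 2 + 7) / 8), InstsPerIteration(Insts) {}

  InstState get(unsigned Iter, unsigned Idx) const {
    unsigned Bit = (Iter * InstsPerIteration + Idx) * 2;
    return InstState((Bits[Bit / 8] >> (Bit % 8)) & 0x3);
  }
  void set(unsigned Iter, unsigned Idx, InstState S) {
    unsigned Bit = (Iter * InstsPerIteration + Idx) * 2;
    Bits[Bit / 8] &= ~(0x3 << (Bit % 8));       // clear the old state
    Bits[Bit / 8] |= uint8_t(S) << (Bit % 8);   // record the new one
  }
};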
Reviewers: chandlerc
Subscribers: sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D11758
llvm-svn: 269388
Having the MachO enums in a def file instead of inline will allow us to write utilities and encoding/decoding methods for load commands without having to write a lot of mechanically repeated code.
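As background, the .def approach is the usual X-macro pattern; a
self-contained sketch of the idea follows (the macro name, entries, and
values below are illustrative, not necessarily what MachO.def uses):

// Hypothetical list of load commands; the real one lives in a .def file that
// consumers #include after defining the handler macro.
#define LOAD_COMMAND_LIST(X)                                                   \
  X(SEGMENT, 0x1)                                                              \
  X(SYMTAB, 0x2)                                                               \
  X(DYSYMTAB, 0xb)

// One generated consumer: map command values to names without hand-writing
// the table a second time.
#define NAME_CASE(Name, Value)                                                 \
  case Value:                                                                  \
    return #Name;
inline const char *getLoadCommandName(unsigned Cmd) {
  switch (Cmd) {
    LOAD_COMMAND_LIST(NAME_CASE)
  default:
    return "unknown";
  }
}
#undef NAME_CASE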
llvm-svn: 269380
Ported DA to the new PM by splitting the former DependenceAnalysis Pass
into a DependenceInfo result type and DependenceAnalysisWrapperPass type
and adding a new PM-style DependenceAnalysis analysis pass returning the
DependenceInfo.
Patch by Philip Pfaffe, most of the review by Justin.
Differential Revision: http://reviews.llvm.org/D18834
llvm-svn: 269370
I've added the reserved field as an "optional" in YAML, but I've added asserts in the yaml2macho code to enforce that the field is present in mach_header_64, but not in mach_header.
llvm-svn: 269320
This merges the functionality of the macros in `CVLeafTypes.def` and the
macros in `TypeRecords.def` into a single set of macros.
Differential Revision: http://reviews.llvm.org/D20190
Reviewed By: rnk, amccarth
llvm-svn: 269316
This introduces a variadic template and some helper macros to
safely and correctly deserialize many types of common record
fields while maintaining error checking.
Differential Revision: http://reviews.llvm.org/D20183
Reviewed By: rnk, amccarth
llvm-svn: 269315
Since we want to be able to use yaml to describe degenerate object files as well as valid ones, we need to be explicit about some fields in the yaml definitions.
llvm-svn: 269313
It's very common to want to replace a node and then remove it since
it's dead, especially as we port backends from the SDNode *Select API
to the void Select one. This helper makes this sequence a bit less
verbose.
llvm-svn: 269236
DbgInfoIntrinsic::StripCast() is dead since r79977
The only function that creates Comdat objects seems to be in Module, and always creates them using the default constructor.
llvm-svn: 269204
Extract a part of isDereferenceableAndAlignedPointer functionality to Value:
Reviewed By: hfinkel, sanjoy
Differential Revision: http://reviews.llvm.org/D17611
llvm-svn: 269190
This means SelectCode unconditionally returns nullptr now. I'll follow
up with a change to make that return void as well, but it seems best
to keep that one very mechanical.
This is part of the work to have Select return void instead of an
SDNode *, which is in turn part of llvm.org/pr26808.
llvm-svn: 269136
Remove the ModuleLevelChanges argument, and the ability to create new
subprograms for cloned functions. The latter was added without review in
r203662, but it has no in-tree clients (all non-test callers pass false
for ModuleLevelChanges [1], so it isn't reachable outside of tests). It
also isn't clear that adding a duplicate subprogram to the compile unit is
always the right thing to do when cloning a function within a module. If
this functionality comes back it should be accompanied with a more concrete
use case.
Furthermore, all in-tree clients add the returned function to the module.
Since that's pretty much the only sensible thing you can do with the function,
just do that in CloneFunction.
[1] http://llvm-cs.pcc.me.uk/lib/Transforms/Utils/CloneFunction.cpp/rCloneFunction
Differential Revision: http://reviews.llvm.org/D18628
llvm-svn: 269110
The plan is to eventually make this logic simpler, however I expect it to
be a little tricky for the foreseeable future (at least until we're rid of
pointee types), so move it here so that it can be reused to build a summary
index for devirtualization.
Differential Revision: http://reviews.llvm.org/D20005
llvm-svn: 269081
Currently, SelectionDAG assumes 8/16-bit cmpxchg returns either a sign
extended result, or a zero extended result. SystemZ takes a third
option by returning junk in the high bits (rotated contents of the other
bytes in the memory word). In that case, don't use Assert*ext, and
zero-extend the result ourselves if a comparison is needed.
Differential Revision: http://reviews.llvm.org/D19800
llvm-svn: 269075
Summary:
Add support for emission of plaintext lists of the imported files for
each distributed backend compilation. Used for distributed build file
staging.
Invoked with new gold-plugin thinlto-emit-imports-files option, which is
only valid with thinlto-index-only (i.e. for distributed builds), or
from llvm-lto with new -thinlto-action=emitimports value.
Depends on D19556.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19636
llvm-svn: 269067
This restores commit r268627:
Summary:
When launching ThinLTO backends in a distributed build (currently
supported in gold via the thinlto-index-only plugin option), emit
an individual index file for each backend process as described here:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098272.html
...
Differential Revision: http://reviews.llvm.org/D19556
Address msan failures by avoiding std::prev on map.end(); the
theory is that this is causing issues due to some known UB problems
in __tree.
llvm-svn: 269059
SystemZ (and probably other targets as well) can fold a memory operand
by changing the opcode into a new instruction that as a side-effect
also clobbers the CC-reg.
In order to do this, liveness of that reg must first be checked. When
LIS is passed, getRegUnit() can be called on it and the right
LiveRange is computed on demand.
Reviewed by Matthias Braun.
http://reviews.llvm.org/D19861
llvm-svn: 269026
Allow vectorization when the step is a loop-invariant variable.
This is the loop example that is getting vectorized after the patch:
int int_inc;
int bar(int init, int *restrict A, int N) {
int x = init;
for (int i=0;i<N;i++){
A[i] = x;
x += int_inc;
}
return x;
}
"x" is an induction variable with *loop-invariant* step.
But it is not a primary induction. Primary induction variable with non-constant step is not handled yet.
Differential Revision: http://reviews.llvm.org/D19258
llvm-svn: 269023
We now use LiveRangeCalc::extendToUses() instead of a specially designed
algorithm in constructMainRangeFromSubranges():
- The original motivation for constructMainRangeFromSubranges() was
differences between the main liverange and subranges because of hidden
dead definitions. This case however cannot happen anymore with the
DetectDeadLaneMasks pass in place.
- It simplifies the code.
- This fixes a longstanding bug where we did not properly create new SSA
values on merging control flow (the MachineVerifier missed most of
these cases).
- Move constructMainRangeFromSubranges() to LiveIntervalAnalysis and
LiveRangeCalc to better match the implementation/available helper
functions.
llvm-svn: 269016
Many files include Passes.h but only a fraction needs to know about the
TargetPassConfig class. Move it into an own header. Also rename
Passes.cpp to TargetPassConfig.cpp while we are at it.
llvm-svn: 269011
We now construct a custom pass pipeline instead of injecting
start-before/stop-after into the default pipeline construction. This
allows specifying any pass known to the pass registry. Previously,
specifying indirectly added analysis passes or passes not added to the
pipeline at all would not add them, and we would silently do nothing.
This also restricts the -run-pass option to cases with .mir input.
llvm-svn: 269003
Add convenience function to create MachineModuleInfo and
MachineFunctionAnalysis passes and add them to a pass manager.
Besides factoring out some shared code in
LiveIntervalTest/LLVMTargetMachine this will be used by my upcoming llc
change.
llvm-svn: 269002
We can use calls to @llvm.experimental.guard to prove predicates,
relying on the fact that in all locations dominated by a call to
@llvm.experimental.guard the predicate it is guarding is known to be
true.
llvm-svn: 268997
Summary:
Previously these intrinsics were marked as can-read any memory address.
Now they're marked as reading only the pointer they're passed.
Reviewers: rnk
Subscribers: jholewinski, llvm-commits, tra
Differential Revision: http://reviews.llvm.org/D20080
llvm-svn: 268996
allow the transformation to strip invalid debug info.
This patch separates the Verifier into an analysis and a transformation
pass, with the transformation pass optionally stripping malformed
debug info.
The problem I'm trying to solve with this sequence of patches is that
historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about
breaking bitcode compatibility with existing producers. For example, we
don't necessarily want IR produced by an older version of clang to be
rejected by an LTO link just because of malformed debug info, but rather
provide an option to strip it. Note that merely outdated (but well-formed)
debug info would continue to be auto-upgraded in this scenario.
http://reviews.llvm.org/D19988
rdar://problem/25818489
This reapplies r268937 without modifications.
llvm-svn: 268966
This patch introduces a new option -lto-strip-invalid-debug-info, which
drops malformed debug info from the input.
The problem I'm trying to solve with this sequence of patches is that
historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about
breaking bitcode compatibility with existing producers. For example, we
don't necessarily want IR produced by an older version of clang to be
rejected by an LTO link just because of malformed debug info, but rather
provide an option to strip it. Note that merely outdated (but well-formed)
debug info would continue to be auto-upgraded in this scenario.
rdar://problem/25818489
http://reviews.llvm.org/D19987
This reapplies 268936 with a test case fix for Linux (-exported-symbol foo)
llvm-svn: 268965
allow the transformation to strip invalid debug info.
This patch separates the Verifier into an analysis and a transformation
pass, with the transformation pass optionally stripping malformed
debug info.
The problem I'm trying to solve with this sequence of patches is that
historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about
breaking bitcode compatibility with existing producers. For example, we
don't necessarily want IR produced by an older version of clang to be
rejected by an LTO link just because of malformed debug info, but rather
provide an option to strip it. Note that merely outdated (but well-formed)
debug info would continue to be auto-upgraded in this scenario.
http://reviews.llvm.org/D19988
rdar://problem/25818489
llvm-svn: 268937
This patch introduces a new option -lto-strip-invalid-debug-info, which
drops malformed debug info from the input.
The problem I'm trying to solve with this sequence of patches is that
historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about
breaking bitcode compatibility with existing producers. For example, we
don't necessarily want IR produced by an older version of clang to be
rejected by an LTO link just because of malformed debug info, but rather
provide an option to strip it. Note that merely outdated (but well-formed)
debug info would continue to be auto-upgraded in this scenario.
rdar://problem/25818489
http://reviews.llvm.org/D19987
llvm-svn: 268936
After looking at D19087 again, it occurred to me that we can do better. If we consolidate
the valueHasExactlyOneBitSet() transforms, we won't incur extra overhead from calling it a
2nd time, and we can shrink SimplifySetCC() a bit. No functional change intended.
Differential Revision: http://reviews.llvm.org/D20050
llvm-svn: 268932
When updating an existing archive, llvm-ar opens the old archive into a
`MemoryBuffer`, does its thing, and writes the results to a temporary
file. That file is then renamed to the original archive filename, thus
replacing it with the updated contents. However, on Windows at least,
what would happen is that the `MemoryBuffer` for the old archive would
actually be an mmap'ed view of the file, so when it came time to do the
rename via Win32's `ReplaceFile`, it would succeed but would be unable
to fully replace the file since there would still be a handle open on
it; instead, the old version got renamed to a random temporary name and
left behind.
Patch by Cameron!
llvm-svn: 268916
Specially crafted bitcode wrapper headers can cause unsigned integer
overflow and lead to crashes when wrapping around. Fix the offset check
and avoid such scenarios.
Writing a testcase for this would involve editing the binary to generate
values that trigger the overflow, since this would never happen while
generating the bitcode in regular compilation flows, so there's
currently no feasible way to add one.
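The general shape of an overflow-safe check like the one being fixed is
sketched below (not the actual BitcodeReader code): compare against the
remaining space instead of computing Offset + Size, which can wrap in
unsigned arithmetic.

#include <cstdint>

// Returns true iff [Offset, Offset + Size) lies inside a buffer of
// BufferSize bytes, without ever forming Offset + Size.
inline bool rangeFits(uint64_t Offset, uint64_t Size, uint64_t BufferSize) {
  return Offset <= BufferSize && Size <= BufferSize - Offset;
}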
llvm-svn: 268881
This reuses the CVTypeDumper from libcodeview to dump full
information about type records within a PDB file.
Differential Revision: http://reviews.llvm.org/D20022
Reviewed By: rnk
llvm-svn: 268808
part of the error message.
The caller is the one that needs to add the name of where the "object file"
comes from to the error message, since the object file could be in an archive,
coming from a slice of a Mach-O universal file, or a buffer created by a JIT.
In the cases of a Mach-O universal file the architecture name may or may not
also need to be printed which is up to the tool code. For example if the tool
code is only selecting the host architecture slice then that architecture name
is never printed.
This patch is the change to the libObject code and there will be follow on
commits for changes to the code for each tool.
llvm-svn: 268789
Summary:
There seems to have been a misunderstanding as to the meaning of 'offset' in
the rules laid down by our ABI. The previous code believed that 'offset' meant
the offset within the section that the relocation is applied to. However, it
should have meant the offset from the symbol used in the relocation expression.
This patch adds two fields to ELFRelocationEntry and uses them to correct the
order of relocations for MIPS. These fields contain:
* The original symbol before shouldRelocateWithSymbol() is considered. This
ensures that R_MIPS_GOT16 is able to correctly distinguish between local and
external symbols, allowing us to tell whether %got() requires a matching
%lo() or not (local symbols require one, external symbols don't). It also
prevents confusing cases where the fuzzy matching rules cause things like
%hi(foo)/%lo(foo+3) and %hi(bar)/%lo(bar+1) to swap their %lo()'s.
* The original offset before shouldRelocateWithSymbol() is considered. The
existing Addend field is always zero when the object uses in-place addends
(because the addend has already been moved into the encoding), but MIPS needs to use the
original offset to ensure that the linker correctly calculates the carry-in
bit for %hi() and %got().
IAS ensures that unmatchable %hi()/%got() relocations are placed at the end of
the table to ensure that the linker rejects the table (we're unable to report
such errors directly). The alternatives to this risk accidental matching
against inappropriate relocations which may silently compute incorrect values
due to an incorrect carry bit between the %lo() and %hi()/%got().
Reviewers: sdardis
Subscribers: dsanders, sdardis, rafael, llvm-commits
Differential Revision: http://reviews.llvm.org/D19718
llvm-svn: 268733
Summary:
This change allows specifying a "DefaultMethod" for an optional operand (IsOptional = 1) in AsmOperandClass that returns the default value for the operand. This is used in convertToMCInst to set default values in MCInst.
Previously, if you wanted to set a default value for an operand you had to create a custom converter method. With this change it is possible to use the standard converters even when optional operands are present.
Reviewers: tstellarAMD, ab, craig.topper
Subscribers: jyknight, dsanders, arsenm, nhaustov, llvm-commits
Differential Revision: http://reviews.llvm.org/D18242
llvm-svn: 268726
Summary:
This will be used for AMDGPU_HSA_KERNEL symbol type in output ELF.
Also, in the future unused non-kernels may be optimized.
For now, also accept SPIR_KERNEL for HCC frontend.
Also, add bitcode compatibility tests for missing calling conventions
except AVR_BUILTIN which doesn't have parse code.
Reviewers: tstellarAMD, arsenm
Subscribers: arsenm, joker.eph, llvm-commits
llvm-svn: 268717
This test was crashing, and currently it breaks bootstrapping clang with debuginfo
Differential Revision: http://reviews.llvm.org/D20008
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268715
This change seems to speed up LLD a bit if it has a lot of mergeable
sections. The number is below. It's not too bad for a small patch.
Time to link Clang (debug build):
w/o patch 6.3696 seconds
w/patch 6.2746 seconds (-1.5%)
Differential Revision: http://reviews.llvm.org/D19933
llvm-svn: 268698
This is a step towards removing the rampant undefined behaviour in
SelectionDAG, which is a part of llvm.org/PR26808.
We rename SelectionDAGISel::Select to SelectImpl and update targets to
match, and then change Select to return void and consolidate the
sketchy behaviour we're trying to get away from there.
Next, we'll update backends to implement `void Select(...)` instead of
SelectImpl and eventually drop the base Select implementation.
llvm-svn: 268693
This opcode never happens in practice, and yet the logic we have in
place to handle it would be undefined behaviour if we ever executed
it. Remove it rather than trying to refactor code that's never
reached.
llvm-svn: 268692
Summary: We need to clean up CFG before assigning discriminator to minimize the impact of optimization on debug info.
Reviewers: davidxl, dblaikie, dnovillo
Subscribers: dnovillo, danielcdh, llvm-commits
Differential Revision: http://reviews.llvm.org/D19926
llvm-svn: 268675
The assertions were assuming that the linker will not ask to preserve
a global that is internal or available_externally, as it does not
really make sense. In practice this breaks the bootstrap of clang,
so I degraded it to a warning for now.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268671
Summary:
As per the discussion on LLVM-dev this patch proposes removing LLVM_ENABLE_TIMESTAMPS.
The only complicated bit of this patch is the Windows support. On Windows we used to log an error if /INCREMENTAL was passed to the linker when timestamps were disabled.
With this change, since timestamps in code are always disabled, we will always compile on Windows with /Brepro unless /INCREMENTAL is specified, and we will log a warning when /INCREMENTAL is specified to notify the user that the build will be non-deterministic.
See: http://lists.llvm.org/pipermail/llvm-dev/2016-May/098990.html
Reviewers: bogner, silvas, rnk
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19892
llvm-svn: 268670
load commands.
The existing test case in test/Object/macho-invalid.test for
macho-invalid-too-small-segment-load-command has a cmdsize of 55, which,
in addition to being too small, is not a multiple of 4. So when that check is added
this test case will produce a different error. So I constructed a new test case
that will trigger the intended error.
I also changed the error message to be consistent with the other malformed Mach-O
file error messages which prints the load command index. I also removed both
object_error::macho_load_segment_too_small and
object_error::macho_load_segment_too_many_sections from Object/Error.h
as they are not needed and can just use object_error::parse_failed and let the
error message string distinguish the specific error.
llvm-svn: 268652
The instruction A2_tfrpi has a 64-bit operand, while the corresponding
intrinsic takes a 32-bit value. The actual value has only 8 significant
bits, so the difference is only in the type used to represent it.
In order to map the intrinsic to the instruction, the operand needs to
be extended to the correct type.
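For illustration, widening a small immediate to the instruction's 64-bit
operand type is a sign extension; a self-contained sketch follows (LLVM has
SignExtend64 in MathExtras.h for this, the helper below is only illustrative):

#include <cstdint>

// Sign-extend a value with Bits significant bits to 64 bits.
inline int64_t signExtend(uint64_t Value, unsigned Bits) {
  uint64_t SignBit = 1ULL << (Bits - 1);
  return int64_t((Value ^ SignBit) - SignBit);
}
// e.g. signExtend(0xFF, 8) == -1, matching an 8-bit immediate of -1.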
llvm-svn: 268635
Summary:
Some PHIs can have expressions that are not AddRecExprs due to the presence
of sext/zext instructions. In order to prevent the Loop Vectorizer from
bailing out when encountering these PHIs, we now coerce the SCEV
expressions to AddRecExprs using SCEV predicates (when possible).
We only do this when the alternative would be to not vectorize.
Reviewers: mzolotukhin, anemet
Subscribers: mssimpso, sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17153
llvm-svn: 268633
Summary:
When launching ThinLTO backends in a distributed build (currently
supported in gold via the thinlto-index-only plugin option), emit
an individual index file for each backend process as described here:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098272.html
The individual index file encodes the summary and module information
required for implementing the importing/exporting decisions made
for a given module in the thin link step.
This is in place of the current mechanism that uses the combined index
to make importing decisions in each back end independently. It is an
enabler for doing global summary based optimizations in the thin link
step (which will be recorded in the individual index files), and reduces
the size of the index that must be sent to each backend process, and
the amount of work to scan it in the backends.
Rather than create entirely new ModuleSummaryIndex structures (and all
the included unique_ptrs) for each backend index file, a map is created
to record all of the GUID and summary pointers needed for a particular
index file. The IndexBitcodeWriter walks this map instead of the full
index (hiding the details of managing the appropriate summary iteration
in a new iterator subclass). This is more efficient than walking the
entire combined index and filtering out just the needed summaries during
each backend bitcode index write.
Depends on D19481.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D19556
llvm-svn: 268627
Both Linux and kFreeBSD use glibc, so they follow similar code paths.
Add isTargetGlibc to check for this, and use it instead of isTargetLinux
in a few places.
Fixes PR22248 for kFreeBSD.
Differential Revision: http://reviews.llvm.org/D19104
llvm-svn: 268624
Some vector bit operations are promoted instead of having custom lowering. This patch changes the isOperationLegalOrCustom tests for vector AND/OR operations to use a new TLI helper isOperationLegalOrCustomOrPromote instead, allowing the SSE implementations to stay on the simd unit.
Differential Revision: http://reviews.llvm.org/D19805
llvm-svn: 268561
This reapplies commit r268521, that was reverted in r268530 due to a test failure in select-implied.ll
Modified the test case to reflect the new change.
llvm-svn: 268557
Summary:
Port the dumper in llvm-readobj over to it.
I'm planning to use this visitor to power type stream merging.
While we're at it, try to switch from StringRef to ArrayRef<uint8_t> in some
places.
Reviewers: zturner, amccarth
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19899
llvm-svn: 268535
In the current implementation the compiler only prints a stack trace
to the console after a crash. This patch adds saving of minidump
files which contain a useful subset of the information for
further debugging.
Differential Revision: http://reviews.llvm.org/D18216
llvm-svn: 268519
When printing raw PDB file fields, streams, and records, use the
ScopedPrinter class so we have consistency with llvm-readobj's output
format.
For the most part this is pretty mechanical, but I had to fix up the test
file to conform to the new YAMLesque output format. I added a few
additional helper functions to the ScopedPrinter such as one to print a
dotted version, etc.
Differential Revision: http://reviews.llvm.org/D19897
Reviewed By: rnk
llvm-svn: 268506
The goal of this change is to guarantee stable ordering of the statepoint arguments and other
newly inserted values such as gc.relocates. Previously we had explicit sorting in a couple
of places. However for unnamed values ordering was partial and overall we didn't have any
strong invariant regarding it. This change switches all data structures to use SetVectors
and MapVectors, which make deterministic iteration over them possible.
Explicit sorting is now redundant and was removed.
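A tiny self-contained illustration of the insertion-ordered-set idea that
SetVector/MapVector provide, and why it yields deterministic iteration (this
is just a sketch, not LLVM's implementation):

#include <unordered_set>
#include <vector>

template <typename T> class OrderedSet {
  std::vector<T> Order;          // iteration order == insertion order
  std::unordered_set<T> Seen;    // fast membership test

public:
  bool insert(const T &V) {
    if (!Seen.insert(V).second)
      return false;              // already present; order unchanged
    Order.push_back(V);
    return true;
  }
  typename std::vector<T>::const_iterator begin() const { return Order.begin(); }
  typename std::vector<T>::const_iterator end() const { return Order.end(); }
};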
Differential Revision: http://reviews.llvm.org/D19669
llvm-svn: 268502
We forgot to consider the target of ifuncs when considering if a
function was alive or dead.
N.B. Also update a few auxiliary tools like bugpoint and
verify-uselistorder.
This fixes PR27593.
llvm-svn: 268468
toString() consumes an Error and returns a string representation of its
contents. This commit also adds a message() method to ErrorInfoBase for
convenience.
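A small usage sketch, assuming the Error API behaves as described above
(toString() consumes the Error and yields its message text):

#include "llvm/Support/Error.h"
#include <string>

std::string describeFailure() {
  llvm::Error E = llvm::make_error<llvm::StringError>(
      "file not found", llvm::inconvertibleErrorCode());
  return llvm::toString(std::move(E)); // consumes E, returns "file not found"
}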
Differential Revision: http://reviews.llvm.org/D19883
llvm-svn: 268465
command has a size less than 8 bytes.
I think the existing test case in test/Object/macho-invalid.test for
macho64-invalid-too-small-load-command was trying to test for this but that
test case triggered a different error given how it was constructed. So I
constructed a new test case that would trigger this specific error.
I also changed the error message to be consistent with the other malformed Mach-O
file error messages. I also removed object_error::macho_small_load_command from
Object/Error.h as it is not needed and can just use object_error::parse_failed
and let the error message string distinguish the error.
llvm-svn: 268463
Ability to parse codeview type streams is also needed by
DebugInfoPDB for parsing PDBs, so moving this into a library
gives us this option. Since DebugInfoPDB had already hand
rolled some code to do this, that code is now converted over
to using this common abstraction.
Differential Revision: http://reviews.llvm.org/D19887
Reviewed By: dblaikie, amccarth
llvm-svn: 268454
A loop pass that didn't preserve this entire set of passes wouldn't
play well with other loop passes, since these are generally a basic
requirement to do any interesting transformations to a loop.
Adds a helper to get the set of analyses a loop pass should preserve,
and checks that any loop pass we run satisfies the requirement.
llvm-svn: 268444
We have it for StringRef but not ArrayRef, and ArrayRef has drop_back,
so I see no reason it shouldn't have drop_front. Splitting this out of a
change that I have that will use this functionality.
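Usage sketch of the new member alongside the existing drop_back, assuming the
usual ArrayRef semantics of returning a smaller view of the same data:

#include "llvm/ADT/ArrayRef.h"

void demo() {
  int Vals[] = {1, 2, 3, 4};
  llvm::ArrayRef<int> A(Vals);
  llvm::ArrayRef<int> Tail = A.drop_front();  // {2, 3, 4}
  llvm::ArrayRef<int> Init = A.drop_back();   // {1, 2, 3}
  (void)Tail;
  (void)Init;
}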
llvm-svn: 268434
Be more specific in describing compression failures. Also, check for
this kind of error in emitNameData().
This is part of a series of patches to transition ProfileData over to
the stricter Error/Expected interface.
llvm-svn: 268400
This controls how the cache is pruned. The cache still has to
be explicitly enabled/disabled by providing a path.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268393
Summary:
This is much closer to the way MIPS relocation expressions work
(%hi(foo + 2) rather than %hi(foo) + 2) and removes the need for the
various bodges in MipsAsmParser::evaluateRelocExpr().
Removing those bodges ensures that the constant stored in MCValue is the
full 32 or 64-bit (depending on ABI) offset from the symbol. This will be used
to correct the %hi/%lo matching needed to sort the relocation table correctly.
As part of this:
* Gave MCExpr::print() the ability to omit parentheses when emitting a
symbol reference inside a MipsMCExpr operator like %hi(X). Without this
we print things like %lo(($L1)).
* %hi(%neg(%gprel(X))) is now three MipsMCExpr's instead of one. Most of
the related special cases have been removed or moved to MipsMCExpr. We
can remove the rest as we gain support for the less common relocations
when they are not part of this specific combination.
* Renamed MipsMCExpr::VariantKind and the enum prefix ('VK_') to avoid confusion
with MCSymbolRefExpr::VariantKind and its prefix (also 'VK_').
* fixup_Mips_GOT_Local and fixup_Mips_GOT_Global were found to be identical
and merged into fixup_Mips_GOT.
* MO_GOT16 and MO_GOT turned out to be identical and have been merged into
MO_GOT.
* VK_Mips_GOT and VK_Mips_GOT16 turned out to be the same thing so they
have been merged into MEK_GOT
Reviewers: sdardis
Subscribers: dsanders, sdardis, llvm-commits
Differential Revision: http://reviews.llvm.org/D19716
llvm-svn: 268379
We were overly cautious in our analysis of loops which have invokes
which unwind to EH pads. The loop unroll transform is safe because it
only clones blocks in the loop body, it does not try to split critical
edges involving EH pads. Instead, move the necessary safety check to
LoopUnswitch.
N.B. The safety check for loop unswitch is covered by an existing test
which fails without it.
llvm-svn: 268357
This parses the TPI stream (stream 2) from the PDB file. This stream
contains some header information followed by a series of codeview records.
There is some additional complexity here in that alongside this stream of
codeview records is a serialized hash table in order to efficiently query
the types. We parse the necessary bookkeeping information to allow us to
reconstruct the hash table, but we do not actually construct it yet as
there are still a few things that need to be understood first.
Differential Revision: http://reviews.llvm.org/D19840
Reviewed By: ruiu, rnk
llvm-svn: 268343
We wish to re-use this from llvm-pdbdump, and it provides a nice
way to print structured data in scoped format that could prove
useful for many other dumping tools as well. Moving to support
and changing name to ScopedPrinter to better reflect its purpose.
llvm-svn: 268342
There is no point in importing a "weak" or a "linkonce" function
since we won't be able to inline it anyway.
We already had a targeted check for WeakAny, this is using the
same check on GlobalValue as the inline, i.e.
isMayBeOverriddenLinkage()
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268341
Remove the AddPristinesAndCSRs parameters from
addLiveIns()/addLiveOuts().
We need to respect pristine registers after prologue/epilogue insertion.
Seeing that we got this wrong in at least two commits already, we should
rather pay the small price to query MachineFrameInfo for it.
There are three cases that did not set AddPristineAndCSRs to true even
after register allocation:
- ExecutionDepsFix: live-out registers are used as a hint that the
register is used soon. This is not true for pristine registers so
use the new addLiveOutsNoPristines() to maintain this behaviour.
- SystemZShortenInst: Not setting AddPristineAndCSRs to true looks like
a bug; it should do the right thing automatically now.
- StackMapLivenessAnalysis: Not adding pristine registers looks like a
bug to me. Added a FIXME comment but maintained the current behaviour
as a change may need to get coordinated with GC runtimes.
llvm-svn: 268336
Summary:
This adds a unique ID to the COFF section uniquing map, similar to the
one we have for ELF. The unique id is not currently exposed via the
assembler because we don't have a use case for it yet. Users generally
create .pdata with the .seh_* family of directives, and the assembler
internally needs to produce .pdata and .xdata sections corresponding to
the code section.
The association between .text sections and the assembler-created .xdata
and .pdata sections is maintained as an ID field of MCSectionCOFF. The
CFI-related sections are created with the given unique ID, so if more
code is added to the same text section, we can find and reuse the CFI
sections that were already created.
Reviewers: majnemer, rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19376
llvm-svn: 268331
This operation may branch to the handler block and we do not want it
to happen anywhere within the basic block.
Moreover, by marking it "terminator and branch" the machine verifier
does not wrongly assume (because of AnalyzeBranch not knowing better)
the branch is analyzable. Indeed, the target was seeing only the
unconditional branch and not the faulting load op and thought it was
a simple unconditional block.
The machine verifier was complaining because of that and moreover,
other optimizations could have performed wrong transformations!
In the process, simplify the representation of the handler block in
the faulting load op. Now, we directly reference the handler block
instead of using a label. This has the benefits of:
1. MC knows how to issue a label for a BB, so leave that to it.
2. Accessing the target BB from its label is painful, whereas it is
direct from a MBB operand.
Note: The 2-byte offset in implicit-null-check.ll comes from the
fact that the unconditional jumps are not removed anymore, as the whole
terminator sequence is not analyzable anymore.
Will fix it in a subsequent commit.
llvm-svn: 268327
Summary:
When SelectionDAG performs CSE it is possible that the context's source
location is different from that of the selected node. This can lead to
incorrect line number records. We update the debug location to the
one that occurs earlier in the instruction sequence.
This fixes PR21006.
Reviewers: echristo, sdmitrouk
Subscribers: jevinskie, asl, llvm-commits
Differential Revision: http://reviews.llvm.org/D12094
llvm-svn: 268323
There is no point in importing a "weak" or a "linkonce" function
since we won't be able to inline it anyway.
We already had a targeted check for WeakAny, this is using the
same check on GlobalValue as the inline, i.e.
isMayBeOverriddenLinkage()
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 268315
Produce another specific error message for a malformed Mach-O file when a symbol’s
section index is more than the number of sections. The existing test case in test/Object/macho-invalid.test
for macho-invalid-section-index-getSectionRawName now reports the error with the message indicating
that a symbol at a specific index has a bad section index and that bad section index value.
Again converting interfaces to Expected<> from ErrorOr<> does involve
touching a number of places. Where the existing code reported the error with a
string message or an error code it was converted to do the same.
Also there were some bugs in the existing code that did not deal with the
old ErrorOr<> return values. So now with Expected<>, since they must be
checked and the error handled, I added a TODO and a comment:
"// TODO: Actually report errors helpfully" and a call like
consumeError(NameOrErr.takeError()) so the buggy code will not crash,
since it now needs to deal with the Error.
llvm-svn: 268298
that it computes. Currently this is used for testing and precision
tuning, but it might be used by optimizations later.
Differential Revision: http://reviews.llvm.org/D19179
llvm-svn: 268291
PDB has a lot of similar data structures. We already have code
for parsing a Name Map, but PDB seems to have a different but
very similar structure that is a hash table. This is the
beginning of code needed in order to parse the name hash table,
but it is not yet complete. It parses the basic metadata of
the hash table, the bucket array, and the names buffer, but
doesn't use any of these fields yet as the data structure
requires a non-trivial amount of work to understand.
llvm-svn: 268268
Make it possible that TryToSimplifyUncondBranchFromEmptyBlock merges empty
basic block including lifetime intrinsics as well as phi nodes and
unconditional branch into its successor or predecessor(s).
If successor of empty block has single predecessor, all contents including
lifetime intrinsics are sinked into the successor. Otherwise, they are
hoisted into its predecessor(s) and then merged into the predecessor(s).
Patch by Josh Yoon <josh.yoon@samsung.com>!
Differential Revision: http://reviews.llvm.org/D19257
llvm-svn: 268254
The implemented heuristic has a large body of code which is better placed
in its own function for readability. It also makes it easier to add more
heuristics in the future.
llvm-svn: 268107
The motivation for this change is that PDB has the notion of
streams and substreams. Substreams often consist of variable
length structures that are convenient to be able to treat as
guaranteed, contiguous byte arrays, whereas the streams they
are contained in are not necessarily so, as a single stream
could be spread across many discontiguous blocks.
So, when processing data from a substream, we want to be able
to assume that we have a contiguous byte array so that we can
cast pointers to variable length arrays and such.
This leads to the question of how to be able to read the same
data structure from either a stream or a substream using the
same interface, which is where this patch comes in.
We separate out the stream's read state from the underlying
representation, and introduce a `StreamReader` class. Then
we change the name of `PDBStream` to `MappedBlockStream`, and
introduce a second kind of stream called a `ByteStream` which is
simply a sequence of contiguous bytes. Finally, we update all
of the std::vectors in `PDBDbiStream` to use `ByteStream` instead
as a proof of concept.
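A rough, self-contained sketch of the shape of this split: a stream
interface, a contiguous ByteStream, and a reader that owns the read offset.
The names mirror the description above, but the actual classes live in the
PDB library and differ in detail.

#include <cstdint>
#include <cstring>
#include <vector>

struct StreamInterface {
  virtual ~StreamInterface() = default;
  virtual bool readBytes(uint32_t Offset, uint32_t Size, void *Dst) const = 0;
  virtual uint32_t getLength() const = 0;
};

struct ByteStream : StreamInterface {
  std::vector<uint8_t> Data; // one contiguous buffer
  bool readBytes(uint32_t Offset, uint32_t Size, void *Dst) const override {
    if (Offset > Data.size() || Size > Data.size() - Offset)
      return false;
    std::memcpy(Dst, Data.data() + Offset, Size);
    return true;
  }
  uint32_t getLength() const override { return uint32_t(Data.size()); }
};

// The reader carries the position, so the same parsing code can consume
// either a discontiguous stream or a contiguous ByteStream.
class StreamReader {
  const StreamInterface &S;
  uint32_t Offset = 0;
public:
  explicit StreamReader(const StreamInterface &Stream) : S(Stream) {}
  template <typename T> bool read(T &Out) {
    if (!S.readBytes(Offset, sizeof(T), &Out))
      return false;
    Offset += sizeof(T);
    return true;
  }
};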
llvm-svn: 268071
Summary:
Historically, we had a switch in the Makefiles for turning on "expensive
checks". This has never been ported to the cmake build, but the
(dead-ish) code is still around.
This will also make it easier to turn it on in buildbots.
Reviewers: chandlerc
Subscribers: jyknight, mzolotukhin, RKSimon, gberry, llvm-commits
Differential Revision: http://reviews.llvm.org/D19723
llvm-svn: 268050
We neglected to transfer operand bundles for some transforms. These
were found via inspection; I'll try to come up with some test cases.
llvm-svn: 268011
We now read out the rest of the substreams from the DBI streams. One of
these substreams, the FileInfo substream, contains information about which
source files contribute to each module (aka compiland). This patch
additionally parses out the file information from that substream, and
dumps it in llvm-pdbdump.
Differential Revision: http://reviews.llvm.org/D19634
Reviewed by: ruiu
llvm-svn: 267928
ScheduleDAGMI::initQueues changes the RegionBegin to the first non-debug
instruction. Since it does not track register pressure, it does not affect
any RP trackers. ScheduleDAGMILive inherits initQueues from ScheduleDAGMI,
and it does reset the TopRPTracker in its schedule method. Any derived,
target-specific scheduler will need to do it as well, but the TopRPTracker
is only exposed as a "const" object to derived classes. Without the ability
to modify the tracker directly, this leaves a derived scheduler with a
potential of having the TopRPTracker out-of-sync with the CurrentTop.
The symptom of the problem:
void llvm::ScheduleDAGMILive::scheduleMI(llvm::SUnit *, bool):
Assertion `TopRPTracker.getPos() == CurrentTop && "out of sync"' failed.
Differential Revision: http://reviews.llvm.org/D19438
llvm-svn: 267918
The DetectDeadLanes pass performs a dataflow analysis of used/defined
subregister lanes across COPY instructions and instructions that will
get lowered to copies. It detects dead definitions and uses reading
undefined values which are obscured by COPY and subregister usage.
These dead definitions cause trouble in the register coalescer which
cannot deal with definitions suddenly becoming dead after coalescing
COPY instructions.
For now the pass only adds dead and undef flags to machine operands. It
should be possible to extend it in the future to remove the dead
instructions and redo the analysis for the affected virtual
registers.
Differential Revision: http://reviews.llvm.org/D18427
llvm-svn: 267851
This function performs the reverse computation of
composeSubRegIndexLaneMask().
It will be used in the upcoming "DetectDeadLanes" pass.
llvm-svn: 267849
Previously using lanemasks on registers without any subregisters was not
well defined. This commit extends TargetRegisterInfo/tablegen to:
- Report a lanemask of 1 for regclasses without subregisters
- Do the right thing when mapping a 0/1 lanemask from a class without
subregisters into a class with subregisters in
TargetRegisterInfo::composeSubRegIndexLaneMasks().
This will be used in the upcoming "DetectDeadLanes" patch.
llvm-svn: 267848
This gets more data out of the DBI stream of the PDB. In
particular it extracts the metadata for the list of modules
(compilands) that this PDB contains info about, and adds support
for dumping these fields to llvm-pdbdump.
Differential Revision: http://reviews.llvm.org/D19570
Reviewed By: ruiu
llvm-svn: 267818
This patch implements the transformation that promotes indirect calls to
conditional direct calls when the indirect-call value profile meta-data is
available.
Differential Revision: http://reviews.llvm.org/D17864
llvm-svn: 267815
Also replaces a number of calls to report_fatal_error with Error returns.
The plumbing will make it easier to return errors originating in libObject.
Replacing report_fatal_errors with Error returns will give JIT clients the
opportunity to recover gracefully when the JIT is unable to produce/relocate
code, as well as providing meaningful error messages that can be used to file
bug reports.
llvm-svn: 267776
Summary:
This is a hook to allow TargetMachine to install passes at the
EP_EarlyAsPossible PassManagerBuilder extension point.
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18614
llvm-svn: 267763
Now the pass is just a tiny wrapper around the util. This lets us reuse
the logic elsewhere (done here for BuildLibCalls) instead of duplicating
it.
The next step is to have something like getOrInsertLibFunc that also
sets the attributes.
Differential Revision: http://reviews.llvm.org/D19470
llvm-svn: 267759
I tried to be as close as possible to the strongest check that
existed before; cleaning these up properly is left for future work.
Differential Revision: http://reviews.llvm.org/D19469
llvm-svn: 267758
Summary:
So it appears that to guarantee some of the ordering requirements of a GLSL
memoryBarrier() executed in the shader, we need to emit an s_waitcnt.
(We can't use an s_barrier, because memoryBarrier() may appear anywhere in
the shader, in particular it may appear in non-uniform control flow.)
Reviewers: arsenm, mareko, tstellarAMD
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19203
llvm-svn: 267729
This change adds a new hook for estimating the cost of vector extracts followed
by zero- and sign-extensions. The motivating example for this change is the
SMOV and UMOV instructions on AArch64. These instructions move data from vector
to general purpose registers while performing the corresponding extension
(sign-extend for SMOV and zero-extend for UMOV) at the same time. For these
operations, TargetTransformInfo can assume the extensions are free and only
report the cost of the vector extract. The SLP vectorizer has been updated to
make use of the new hook.
Differential Revision: http://reviews.llvm.org/D18523
llvm-svn: 267725
Summary:
With the removal of support for lazy parsing of combined index summary
records (e.g. r267344), we no longer need to include the summary record
bitcode offset in the VST entries for definitions. Change the combined
index format to be similar to the per-module index format in using value
ids to cross-reference from the summary record to the VST entry (rather
than the summary record bitcode offset to cross-reference in the other
direction).
The visible changes are:
1) Add the value id to the combined summary records
2) Remove the summary offset from the combined VST records, which has
the following effects:
- No longer need the VST_CODE_COMBINED_GVDEFENTRY record, as all
combined index VST entries now only contain the value id and
corresponding GUID.
- No longer have duplicate VST entries in the case where there are
multiple definitions of a symbol (e.g. weak/linkonce), as they all
have the same value id and GUID.
An implication of #2 above is that in order to hook up an alias to the
correct aliasee based on the value id of the aliasee recorded in the
combined index alias record, we need to scan the entries in the index
for that GUID to find the one from the same module (i.e. the case where
there are multiple entries for the aliasee). But the reader no longer
has to maintain a special map to hook up the alias/aliasee.
Reviewers: joker.eph
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19481
llvm-svn: 267712
Extract a part of isDereferenceableAndAlignedPointer functionality to Value::getPointerDereferenceableBytes. Currently this is an NFC, but in the future I'm going to accumulate all the logic about value dereferenceability in this function, similarly to the Value::getPointerAlignment function (D16144).
Reviewed By: reames
Differential Revision: http://reviews.llvm.org/D17572
llvm-svn: 267708
This is required to use this function from isSafeToSpeculativelyExecute
Reviewed By: hfinkel
Differential Revision: http://reviews.llvm.org/D16231
llvm-svn: 267692
Summary:
D19403 adds a new pragma for loop distribution. This change adds
support for the corresponding metadata that the pragma is translated to
by the FE.
As part of this I had to rethink the flag -enable-loop-distribute. My
goal was to be backward compatible with the existing behavior:
A1. pass is off by default from the optimization pipeline
unless -enable-loop-distribute is specified
A2. pass is on when invoked directly from opt (e.g. for unit-testing)
The new pragma/metadata overrides these defaults so the new behavior is:
B1. A1 + enable distribution for individual loop with the pragma/metadata
B2. A2 + disable distribution for individual loop with the pragma/metadata
The default value whether the pass is on or off comes from the initiator
of the pass. From the PassManagerBuilder the default is off, from opt
it's on.
I moved -enable-loop-distribute under the pass. If the flag is
specified it overrides the default from above.
Then the pragma/metadata can further modify this per loop.
As a side-effect, we can now also use -enable-loop-distribute=0 from opt
to emulate the default from the optimization pipeline. So to be precise
this is the new behavior:
C1. pass is off by default from the optimization pipeline
unless -enable-loop-distribute or the pragma/metadata enables it
C2. pass is on when invoked directly from opt
unless -enable-loop-distribute=0 or the pragma/metadata disables it
Reviewers: hfinkel
Subscribers: joker.eph, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D19431
llvm-svn: 267672
Summary:
cloneLoopWithPreheader() does not update LoopInfo for sub-loops of
the original loop being cloned. Add an assert to ensure there are no sub-loops for the loop being cloned.
Reviewers: anemet, ashutosh.nema, hfinkel
Subscribers: mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D15922
llvm-svn: 267671
This reverts commit r267657, r267656, and r267655.
The test does not pass on multiple bots, I'm unsure why yet but let's unbreak them.
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 267664
I misread the comment when I committed r267621 and thought the
comment did not need updating. Matthias kindly proved me wrong.
Fixing that.
llvm-svn: 267638
The DBI stream contains a lot of bookkeeping information for other
streams. In particular it contains information about section contributions
and linked modules. This patch is a first attempt at parsing some of the
information out of the DBI stream. It currently only parses and dumps the
headers of the DBI stream, so none of the module data or section
contribution data is pulled out.
This is just a proof of concept that we understand the basic properties of
the DBI stream's metadata, and followup patches will try to extract more
detailed information out.
Differential Revision: http://reviews.llvm.org/D19500
Reviewed By: majnemer, ruiu
llvm-svn: 267585
When a block is tail-duplicated, the PHI nodes from that block are
replaced with appropriate COPY instructions. When those PHI nodes
contained use operands with subregisters, the subregisters were
dropped from the COPY instructions, resulting in incorrect code.
Keep track of the subregister information and use this information
when remapping instructions from the duplicated block.
Differential Revision: http://reviews.llvm.org/D19337
llvm-svn: 267583
This is part of solving PR27344:
https://llvm.org/bugs/show_bug.cgi?id=27344
CGP should undo the SimplifyCFG transform for the same reason that earlier patches have used this
same mechanism: it's possible that passes between SimplifyCFG and CGP may be able to optimize the
IR further with a select in place.
For the TLI hook default, >99% taken or not taken is chosen as the default threshold for a highly
predictable branch. Even the most limited HW branch predictors will be correct on this branch almost
all the time, so even a massive mispredict penalty perf loss would be overcome by the win from all
the times the branch was predicted correctly.
As a follow-up, we could make the default target hook less conservative by using the SchedMachineModel's
MispredictPenalty. Or we could just let targets override the default by implementing the hook with that
and other target-specific options. Note that trying to statically determine mispredict rates for
close-to-balanced profile weight data is generally impossible if the HW is sufficiently advanced. Ie,
50/50 taken/not-taken might still be 100% predictable.
Finally, note that this patch as-is will not solve PR27344 because the current __builtin_unpredictable()
branch weight default values are 4 and 64. A proposal to change that is in D19435.
Differential Revision: http://reviews.llvm.org/D19488
llvm-svn: 267572
Summary:
We don't use MinLatency any more since r184032.
Reviewers: atrick, hfinkel, mcrosier
Differential Revision: http://reviews.llvm.org/D19474
llvm-svn: 267502
Autoconf used to support setting LLVM_VERSION_INFO, and there is some code scattered around llvm in Support/CommandLine.cpp and LTO/LTOCodeGenerator.cpp that uses it if it is set.
We also shouldn't be explicitly setting it as a define on llvm-shlib. It is pointless there because there is no code using it in llvm-shlib, and it is better to have it as part of the generated config.h so that it is available everywhere.
llvm-svn: 267490
Add a typedef for the std::map<GlobalValue::GUID, GlobalValueSummary *>
map that is passed around to identify summaries for values defined in a
particular module. This shortens up declarations in a variety of places.
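The shape of the typedef in question, sketched with placeholder types (the
actual name and definitions live in ModuleSummaryIndex.h and may differ):

#include <cstdint>
#include <map>

namespace sketch {
using GUID = uint64_t;          // 64-bit hash of the global value name
struct GlobalValueSummary;      // summary of one definition (forward decl)
using GVSummaryMapTy = std::map<GUID, GlobalValueSummary *>;
} // namespace sketch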
llvm-svn: 267471
This replaces use of std::error_code and ErrorOr in the ORC RPC support library
with Error and Expected. This required updating the OrcRemoteTarget API, Client,
and server code, as well as updating the Orc C API.
This patch also fixes several instances where Errors were dropped.
llvm-svn: 267457
We didn't have logic to correctly handle CFGs where there was more than
one EH-pad successor (these are novel with WinEH).
There were situations where a register was live in one exceptional
successor but not another but the code as written would only consider
the first exceptional successor it found.
This resulted in split points which were insufficiently early if an
invoke was present.
This fixes PR27501.
N.B. This removes getLandingPadSuccessor.
llvm-svn: 267412
If several regions cover the same area of code, we have to restore
the combined value for that area when returning from a nested region.
This patch achieves that by combining regions before calling buildSegments.
Differential Revision: http://reviews.llvm.org/D18610
llvm-svn: 267390
The new relocation recently defined in the Intel386 psABI
was still missing from this file. A subsequent commit will
add support for GOT32X in MC, together with a test.
llvm-svn: 267378
Summary:
Remove the GlobalValueInfo and change the ModuleSummaryIndex to directly
reference summary objects. The info structure was there to support lazy
parsing of the combined index summary objects, which is no longer
needed and not supported.
Reviewers: joker.eph
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19462
llvm-svn: 267344
Enum bitfields have crazy portability issues with MSVC. Use unsigned
instead of LinkageTypes here in the ModuleSummaryIndex to address
Takumi's concerns from r267335.
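An illustration of the portability pitfall: MSVC keeps the enum's signed
underlying type for an enum bitfield, so values with the top bit of the field
set can read back sign-extended. Storing plain unsigned bits sidesteps it
(the enum below is made up for the example):

enum Linkage { External = 0, Weak = 7, Common = 15 };

struct Risky {
  Linkage L : 4;   // MSVC may read Common (15) back as a negative value
};

struct Portable {
  unsigned L : 4;  // reads back as 0..15 on every compiler
  Linkage getLinkage() const { return static_cast<Linkage>(L); }
};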
llvm-svn: 267342
The original patch caused crashes because it could dereference a null pointer
for SelectionDAGTargetInfo for targets that do not define it.
Evaluates fmul+fadd -> fmadd combines and similar code sequences in the
machine combiner. It adds support for float and double similar to the existing
integer implementation. The key features are:
- DAGCombiner checks whether it should combine greedily or let the machine
combiner do the evaluation. This is only supported on ARM64.
- It gives preference to throughput over latency: the heuristic used is
to always combine in loops. The target decides whether the machine
combiner should optimize for throughput or latency.
- Support for fmadd, f(n)msub, fmla, fmls patterns
- On by default at O3 ffast-math
llvm-svn: 267328
Right now it only contains the LinkageType, but will be extended
with "hasSection", "isOptSize", "hasInlineAssembly", etc.
Differential Revision: http://reviews.llvm.org/D19404
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 267319
Keeping as much as possible internal/private is
known to help the optimizer. Let's try to benefit from
this in ThinLTO.
Note: this is early work, but is enough to build clang (and
all the LLVM tools). I still need to write some lit-tests...
Differential Revision: http://reviews.llvm.org/D19103
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 267317
The option to control the emission of the new relocations
is -relax-relocations (blatantly copied from GNU as).
It can't be enabled by default because it breaks relatively
recent versions of ld.bfd/ld.gold (late 2015).
llvm-svn: 267307
Summary:
As discussed in D18298, some local globals can't
be renamed/promoted (because they have a section, or because
they are referenced from inline assembly).
To be able to detect naming collision, we need to keep around
the "GUID" using their original name without taking the linkage
into account.
Reviewers: tejohnson
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19454
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 267304
Eliminate DITypeIdentifierMap and make DITypeRef a thin wrapper around
DIType*. It is no longer legal to refer to a DICompositeType by its
'identifier:', and DIBuilder no longer retains all types with an
'identifier:' automatically.
Aside from the bitcode upgrade, this is mainly removing logic to resolve
an MDString-based reference to an actual DIType. The commits leading
up to this have made the implicit type map in DICompileUnit's
'retainedTypes:' field superfluous.
This does not remove DITypeRef, DIScopeRef, DINodeRef, and
DITypeRefArray, or stop using them in DI-related metadata. Although as
of this commit they aren't serving a useful purpose, there are patches
under review to reuse them for CodeView support.
The tests in LLVM were updated with deref-typerefs.sh, which is attached
to the thread "[RFC] Lazy-loading of debug info metadata":
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098318.html
llvm-svn: 267296
Each reference to an unresolved MDNode is expensive, since the RAUW
support in MDNode uses a separate allocation and side map. Since
a distinct MDNode doesn't require its operands on creation (unlike
uniuqed nodes, there's no need to check for structural equivalence),
use nullptr for any of its unresolved operands. Besides reducing the
burden on MDNode maps, this can avoid allocating temporary MDNodes in
the first place.
We need some way to track operands. Invent DistinctMDOperandPlaceholder
for this purpose, which is a Metadata subclass that holds an ID and
points at its single user. DistinctMDOperandPlaceholder::replaceUseWith
is just like RAUW, but its name highlights that there is only ever
exactly one use.
There is no support for moving (or, obviously, copying) these. Move
support would be possible but expensive; leaving it unimplemented
prevents user error. In the BitcodeReader I originally considered
allocating on a BumpPtrAllocator and keeping a vector of pointers to
them, and then I realized that std::deque implements exactly this.
A couple of obvious follow-ups:
- Change ValueEnumerator to emit distinct nodes first to take more
advantage of this optimization. (How convenient... I think I might
have a couple of patches for this.)
- Change DIBuilder and its consumers (like CGDebugInfo in clang) to
use something like this when constructing debug info in the first
place.
llvm-svn: 267270
Only one consumer (llvm-objdump) actually cared about the fact that there were
two triples. Others were actively working around the fact that the Triple
returned by getArch might have been invalid. As for llvm-objdump, it needs to
be acutely aware of both Triples anyway, so being generic in the exposed API is
no benefit.
Also rename the version of getArch returning a Triple. Users were having to
pass an unwanted nullptr to disambiguate the two, which was nasty.
The only functional change here is that armv7m and armv7em object files no
longer crash llvm-objdump.
llvm-svn: 267249
The dwo_name was added to dwo files to improve diagnostics in dwp, but
it confuses tools that attempt to load any dwo named by a dwo_name, even
ones inside dwos. Avoid this by keeping track of whether a unit is
already a dwo unit, and if so, not loading further dwos.
llvm-svn: 267241
Summary:
Follow up to D19291: it now makes sense to use two Intr*Mem properties,
in particular IntrReadMem + IntrArgMemOnly is common.
Pointed out by Mikael Holmén.
Reviewers: uabelho, joker.eph, reames
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19418
llvm-svn: 267238
The original commit was reverted because of a buildbot problem with LazyCallGraph::SCC handling (not related to the OptBisect handling).
Differential Revision: http://reviews.llvm.org/D19172
llvm-svn: 267231
This intrinsic takes two arguments, ``%ptr`` and ``%offset``. It loads
a 32-bit value from the address ``%ptr + %offset``, adds ``%ptr`` to that
value and returns it. The constant folder specifically recognizes the form of
this intrinsic and the constant initializers it may load from; if a loaded
constant initializer is known to have the form ``i32 trunc(x - %ptr)``,
the intrinsic call is folded to ``x``.
LLVM provides that the calculation of such a constant initializer will
not overflow at link time under the medium code model if ``x`` is an
``unnamed_addr`` function. However, it does not provide this guarantee for
a constant initializer folded into a function body. This intrinsic can be
used to avoid the possibility of overflows when loading from such a constant.
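A minimal C++ sketch of the computation described above (illustrative only, not the in-tree definition): load a 32-bit relative value at ``%ptr + %offset`` and add ``%ptr`` back to form the result.
```
#include <cstddef>
#include <cstdint>
#include <cstring>

// Mirrors the arithmetic of the intrinsic; the real folding happens in the
// constant folder, not in helper code like this.
const void *loadRelative(const char *Ptr, std::ptrdiff_t Offset) {
  std::int32_t Rel;
  std::memcpy(&Rel, Ptr + Offset, sizeof(Rel)); // 32-bit load at %ptr + %offset
  return Ptr + Rel;                             // add %ptr to the loaded value
}
```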
Differential Revision: http://reviews.llvm.org/D18367
llvm-svn: 267223
This patch changes the interface for createPGOFuncNameMetadata() where we add
another PGOFuncName argument.
Differential Revision: http://reviews.llvm.org/D19433
llvm-svn: 267216
The relative vtable ABI (PR26723) needs PLT relocations to refer to virtual
functions defined in other DSOs. The unnamed_addr attribute means that the
function's address is not significant, so we're allowed to substitute it
with the address of a PLT entry.
Also includes a bonus feature: addends for COFF image-relative references.
Differential Revision: http://reviews.llvm.org/D17938
llvm-svn: 267211
E.g. for:
!1 = {"llvm.distribute", i32 1}
it now returns the MDOperand for 1.
I will use this in LoopDistribution to check the value of the metadata.
Note that the change is backward-compatible with its current use in
LoopVersioningLICM. An Optional implicitly converts to a bool depending
whether it contains a value or not.
llvm-svn: 267190
Summary:
CachingMemorySSAWalker::invalidateInfo was using IsCall to determine
which cache map needed to be cleared of entries referring to the invalidated
MemoryAccess, but there could also be entries referring to it in the
other cache map (value entries, not key entries). This change just
clears both tables to be conservatively correct.
Also add a verifyRemoved() function, called when expensive
checks (i.e. XDEBUG) are enabled to verify that the invalidated
MemoryAccess object is not referenced in any of the caches.
Reviewers: dberlin, george.burgess.iv
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19388
llvm-svn: 267157
Summary:
This new pass allows targets to use the hazard recognizer without having
to also run one of the schedulers. This is useful when compiling with
optimizations disabled for targets that still need noop hazards
to be handled correctly.
Reviewers: hfinkel, atrick
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D18594
llvm-svn: 267156
r267049 broke multiple buildbots (e.g. clang-cmake-mips, and clang-x86_64-linux-selfhost-modules) which the follow-ups have not yet resolved and this is preventing subsequent committers from being notified about additional failures on the affected buildbots.
llvm-svn: 267148
EarlyCSE had inconsistent behavior with regards to flag'd instructions:
- In some cases, it would pessimize if the available instruction had
different flags by not performing CSE.
- In other cases, it would miscompile if it replaced an instruction
which had no flags with an instruction which has flags.
Fix this by being more consistent with our flag handling by utilizing
andIRFlags.
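A minimal sketch of the flag-handling rule, using a hypothetical flag bundle (this is not the EarlyCSE code itself): when reusing an earlier instruction, keep a poison-generating flag only if both occurrences carry it, analogous to what andIRFlags does.
```
// Hypothetical flag bundle used only for illustration.
struct IRFlags {
  bool NSW = false;   // no signed wrap
  bool NUW = false;   // no unsigned wrap
};

// Keep a flag only if it holds for both the available instruction and the
// one being CSE'd away; anything else risks a miscompile.
IRFlags intersectFlags(const IRFlags &A, const IRFlags &B) {
  IRFlags R;
  R.NSW = A.NSW && B.NSW;
  R.NUW = A.NUW && B.NUW;
  return R;
}
```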
llvm-svn: 267111
Summary:
Also adds a small comment blurb on control flow + no-wrap flags, since
that question came up a few days back on llvm-dev.
Reviewers: bjarke.roune, broune
Subscribers: sanjoy, mcrosier, llvm-commits, mzolotukhin
Differential Revision: http://reviews.llvm.org/D19209
llvm-svn: 267110
Summary:
This intrinsic returns true if the current thread belongs to a live pixel
and false if it belongs to a pixel that we are executing only for derivative
computation. It will be used by Mesa to implement gl_HelperInvocation.
Note that for pixels that are killed during the shader, this implementation
also returns true, but it doesn't matter because those pixels are always
disabled in the EXEC mask.
This unearthed a corner case in the instruction verifier, which complained
about a v_cndmask 0, 1, exec, exec<imp-use> instruction. That's stupid but
correct code, so make the verifier accept it as such.
Reviewers: arsenm, tstellarAMD
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19191
llvm-svn: 267102
Evaluates fmul+fadd -> fmadd combines and similar code sequences in the
machine combiner. It adds support for float and double similar to the existing
integer implementation. The key features are:
- DAGCombiner checks whether it should combine greedily or let the machine
combiner do the evaluation. This is only supported on ARM64.
- It gives preference to throughput over latency: the heuristic used is
to always combine in loops. The target decides whether the machine
combiner should optimize for throughput or latency.
- Support for fmadd, f(n)msub, fmla, fmls patterns
- On by default at -O3 with -ffast-math
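A small illustration of the kind of source-level sequence this targets (assuming ARM64 and -ffast-math; the fusion itself happens in the machine combiner, not in this C++):
```
// A separate multiply and add; under -ffast-math on ARM64 this is a candidate
// for a single fused multiply-add (fmadd) when the combiner deems it profitable.
double mulAdd(double A, double B, double C) {
  return A * B + C;
}
```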
llvm-svn: 267098
This removes the interfaces added (and not yet complete) to support
lazy reading of summaries. This support is not expected to be needed
since we are moving to a model where the full index is only being
traversed in the thin link step, instead of the back ends.
(The second part of this that I plan to do next is remove the
GlobalValueInfo from the ModuleSummaryIndex - it was mostly needed to
support lazy parsing of summaries. The index can instead reference the
summary structures directly.)
llvm-svn: 267097
We'd disabled them on x86 because back in the early days some host tools
couldn't handle the new load commands. This no longer holds: anyone capable of
deploying Clang should be able to deploy its copies of ar/ranlib/etc.
rdar://25254790
llvm-svn: 267075
When printing the properties required by a pass, only print the
properties that are set, and not those that are clear (only properties
that are set are verified, clear properties are "don't-care").
llvm-svn: 267070
Summary:
Adds an instrumentation pass for the new EfficiencySanitizer ("esan")
performance tuning family of tools. Multiple tools will be supported
within the same framework. Preliminary support for a cache fragmentation
tool is included here.
The shared instrumentation includes:
+ Turn mem{set,cpy,move} intrinsics into library calls.
+ Slowpath instrumentation of loads and stores via callouts to
the runtime library.
+ Fastpath instrumentation will be per-tool.
+ Which memory accesses to ignore will be per-tool.
Reviewers: eugenis, vitalybuka, aizatsky, filcab
Subscribers: filcab, vkalintiris, pcc, silvas, llvm-commits, zhaoqin, kcc
Differential Revision: http://reviews.llvm.org/D19167
llvm-svn: 267058
Summary: As per title. This will help work on the C API.
Reviewers: Wallbraker, whitequark, joker.eph, echristo, rafael
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19173
llvm-svn: 267057
InstrProfSymtab::create can fail with instrprof_error::malformed, but
this error is silently dropped. Propagate the error up to the caller so
we fail early.
Eventually, I'd like to transition ProfileData over to the new Error
class so we can't ignore hard failures like this.
llvm-svn: 267055
splitting edges.
MachineBasicBlock::SplitCriticalEdges will crash if a nullptr would have
been passed for the Pass argument. Do not allow that by turning this
argument into a reference.
The alternative would have been to make the Pass a truly optional
argument, but although this is easy to do, I was afraid users using it
like this would not be aware that the liveness information, dominator tree,
and such would silently be broken.
llvm-svn: 267052
PDB parsing code was hand-rolled into llvm-pdbdump. This patch moves the
parsing of this code into DebugInfoPDB and makes the dumper use this.
This is achieved by implementing the skeleton of RawPdbSession, the
non-DIA counterpart to the existing PDB read interface. None of the type /
source file / etc information is accessible yet, so this implementation is
not yet close to achieving parity with the DIA counterpart, but the
RawSession class simply holds a reference to a PDBFile class which handles
parsing the file format. Additionally a PDBStream class is introduced
which allows accessing the bytes of a particular stream in a PDB file.
Differential Revision: http://reviews.llvm.org/D19343
Reviewed By: majnemer
llvm-svn: 267049
Introduce canSplitCriticalEdge, so that clients can now query whether or
not a critical edge can be split without actually needing to split it.
This may be useful when gathering information for cost models for
instance.
llvm-svn: 267046
This expansion is not called if the store is custom lowered. Move it
into a utility function so targets can easily expand unaligned accesses
when custom lowering.
llvm-svn: 267029
Instead of holding a mask, hold two value: the start index and the
length of the mapping. This is a more compact representation, although
less powerful. That being said, arbitrary masks would not have worked
for the generic case, so do not allow them in the first place.
llvm-svn: 267025
This patch implements an optimization bisect feature, which will allow optimizations to be selectively disabled at compile time in order to track down test failures that are caused by incorrect optimizations.
The bisection is enabled using a new command line option (-opt-bisect-limit). Individual passes that may be skipped call the OptBisect object (via an LLVMContext) to see if they should be skipped based on the bisect limit. A finer level of control (disabling individual transformations) can be managed through an additional OptBisect method, but this is not yet used.
The skip checking in this implementation is based on (and replaces) the skipOptnoneFunction check. Where that check was being called, a new call has been inserted in its place which checks the bisect limit and the optnone attribute. A new function call has been added for module and SCC passes that behaves in a similar way.
Differential Revision: http://reviews.llvm.org/D19172
llvm-svn: 267022
Summary:
IntrReadWriteArgMem simply becomes IntrArgMemOnly.
This leaves fewer intrinsic properties, which express their orthogonality
better and correspond more closely to the matching IR attributes.
Suggested by: Philip Reames
Reviewers: joker.eph, reames, tstellarAMD
Subscribers: jholewinski, arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D19291
llvm-svn: 267021
"Into" was misleading. I am also planning to use this helper to look
for loop metadata and return the argument, so find seems like a better
name.
llvm-svn: 267013
Before this fix, DILexicalBlockFile entries were skipped in some cases but not in others.
Differential Revision: http://reviews.llvm.org/D18724
llvm-svn: 267004
A DenseMap doesn't store the hashes, so it needs to recompute them when
the table is resized.
In some applications the hashing cost is noticeable. That is the case
for example in lld for symbol names (StringRef).
This patch adds a templated structure that wraps any value that can
go in a DenseMap and caches its hash.
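A minimal sketch of the idea using the standard library (this is not the LLVM structure being added; the names and the std::unordered_map host are illustrative): compute the hash once, store it next to the value, and have the map's hasher return the cached value so rehashing never touches the underlying string again.
```
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

// Wraps a value together with its precomputed hash.
template <typename T> struct CachedHash {
  T Val;
  std::size_t Hash;
  explicit CachedHash(T V) : Val(std::move(V)), Hash(std::hash<T>{}(Val)) {}
  bool operator==(const CachedHash &RHS) const { return Val == RHS.Val; }
};

// Hasher that just returns the cached hash, so growing the table is cheap.
template <typename T> struct CachedHasher {
  std::size_t operator()(const CachedHash<T> &V) const { return V.Hash; }
};

int main() {
  std::unordered_map<CachedHash<std::string>, int, CachedHasher<std::string>> Symbols;
  Symbols.emplace(CachedHash<std::string>("_main"), 1);
  return Symbols.count(CachedHash<std::string>("_main")) ? 0 : 1;
}
```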
llvm-svn: 266981
Summary:
The function importer already decided what symbols need to be pulled
in. Also these magically added ones will not be in the export list
for the source module, which can confuse the internalizer for
instance.
Reviewers: tejohnson, rafael
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19096
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266948
No matter what value you OR in to A, the result of (or A, B) is going to be UGE A. When A and B are positive, it's SGE too. If A is negative, OR'ing a value into it can't make it positive, but can increase its value closer to -1, therefore (or A, B) is SGE A. Working through all possible combinations produces this truth table:
```
        A is
    +,  -,  +/-
    F   F   F    +    B is
    T   F   ?    -
    ?   F   ?    +/-
```
The related optimizations are flipping the 'slt' for 'sge', which always NOTs the result (if the result is known), and swapping the LHS and RHS while swapping the comparison predicate.
There are more idioms left to implement (aren't there always!) but I've stopped here because any more would risk becoming unreasonable for reviewers.
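A quick exhaustive check of the core fact for the unsigned case (a standalone illustration, not part of the patch): ORing bits into a value can never decrease it when viewed as unsigned.
```
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned A = 0; A < 256; ++A)
    for (unsigned B = 0; B < 256; ++B)
      assert(static_cast<std::uint8_t>(A | B) >= static_cast<std::uint8_t>(A));
  return 0;
}
```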
llvm-svn: 266939
Produce another specific error message for a malformed Mach-O file when a symbol’s
string index is past the end of the string table. The existing test case in test/Object/macho-invalid.test
for macho-invalid-symbol-name-past-eof now reports the error with the message indicating
that a symbol at a specific index has a bad string index and that bad string index value.
Again converting interfaces to Expected<> from ErrorOr<> does involve
touching a number of places. Where the existing code reported the error with a
string message or an error code it was converted to do the same. There is some
code for this that could be factored into a routine but I would like to leave that for
the code owners post-commit to do as they want for handling an llvm::Error. An
example of how this could be done is shown in the diff in
lib/ExecutionEngine/RuntimeDyld/RuntimeDyldImpl.h which had a Check() routine
already for std::error_code, so I added one like it for llvm::Error.
Also there were some bugs in the existing code that did not deal with the
old ErrorOr<> return values. Now that Expected<> values must be
checked and the error handled, I added a TODO comment
("// TODO: Actually report errors helpfully") and a call like
consumeError(NameOrErr.takeError()) so the buggy code will not crash,
since the Error has to be dealt with.
Note there are fixes needed in lld that go along with this, which I will commit right after this one.
So expect lld not to build between this commit and the next one.
llvm-svn: 266919
Don't use std::vector<TrackingMDRef>, since (at least in some versions
of libc++) std::vector apparently copies values on grow operations
instead of moving them. Found this when I was temporarily deleting the
copy constructor for TrackingMDRef to investigate a performance
bottleneck.
llvm-svn: 266909
A ModuleSlotTracker can be created without actually being used (e.g.,
r266889 added one to the Verifier). Create the SlotTracker within it
lazily on the first call to ModuleSlotTracker::getMachine.
llvm-svn: 266902
Clients may call writeMergedModules before calling optimize, or call
compileOptimized without calling optimize. Make sure they don't sneak
past the verifier. This adds LTOCodeGenerator::verifyMergedModuleOnce,
and calls it from writeMergedModule, optimize, and codegenOptimized.
I couldn't find a good way to test this. I tried writing broken IR to
send into llvm-lto, but LTOCodeGenerator doesn't understand textual IR,
and assembler runs the verifier itself anyway. Checking in
valid-but-doesn't-verify bitcode here doesn't seem valuable.
llvm-svn: 266894
Speed up Verifier output by sharing a single ModuleSlotTracker for the
duration. There should be no functionality change here except for much
faster output when there's more than one statement.
Now the Verifier won't be traversing the full Metadata graph every time
it prints an error. The TypePrinter is still not shared, but that would
take some extra plumbing.
llvm-svn: 266889
Summary:
This patch prevents importing from (and therefore exporting from) any
module with a "llvm.used" local value. Local values need to be promoted
and renamed when importing, and their presence on the llvm.used variable
indicates that there are opaque uses that won't see the rename. One such
example is a use in inline assembly.
See also the discussion at:
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098047.html
As part of this, move collectUsedGlobalVariables out of Transforms/Utils
and into IR/Module so that it can be used more widely. There are several
other places in LLVM that used copies of this code that can be cleaned
up as a follow on NFC patch.
Reviewers: joker.eph
Subscribers: pcc, llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D18986
llvm-svn: 266877
Summary:
LLVMAttribute has outlived its utility and is becoming a problem for C API users that want to use all the LLVM attributes. In order to help moving away from LLVMAttribute in a smooth manner, this diff introduces LLVMGetAttrKindIDInContext, which can be used instead of the enum values.
See D18749 for reference.
Reviewers: Wallbraker, whitequark, joker.eph, echristo, rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19081
llvm-svn: 266842
Summary:
This property is used to mark an intrinsic that only writes to memory, but
neither reads from memory nor has other side effects.
An example where this is useful is the llvm.amdgcn.buffer.store.format.*
intrinsic, which corresponds to a store instruction that goes through a special
buffer descriptor rather than through a plain pointer.
With this property, the intrinsic should still be handled as having side
effects at the LLVM IR level, but machine scheduling can make smarter
decisions.
Reviewers: tstellarAMD, arsenm, joker.eph, reames
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D18291
llvm-svn: 266826
Both AArch64 and ARM support llvm.<arch>.thread.pointer intrinsics that
just return the thread pointer. I have a pending patch that does the same
for SystemZ (D19054), and there are many more targets that could benefit
from one.
This patch merges the ARM and AArch64 intrinsics into a single target
independent one that will also be used by subsequent targets.
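At the source level, Clang's `__builtin_thread_pointer()` is one way to reach such an intrinsic (availability is target-dependent; this is an illustration, not part of the patch):
```
// Returns the raw thread pointer on targets where the builtin is supported
// (e.g. AArch64/ARM); elsewhere this will fail to compile.
void *currentThreadPointer() {
  return __builtin_thread_pointer();
}
```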
Differential Revision: http://reviews.llvm.org/D19098
llvm-svn: 266818
With this change, ideally an IR pass can always generate an llvm.stackguard
call to get the stack guard; but for now there are still IR-form stack
guard customizations around (see getIRStackGuard()). Future SSP
customization should go through LOAD_STACK_GUARD.
There is a behavior change: stack guard values are not CSEed anymore,
since we should never reuse the value in case it has been spilled (and
corrupted). See ssp-guard-spill.ll. This also causes changes in stack
size and codegen in the X86 and AArch64 test cases.
Ideally we'd like to know if the guard created in llvm.stackprotector() gets
spilled or not. If the value is spilled, discard the value and reload
stack guard; otherwise reuse the value. This can be done by teaching
register allocator to know how to rematerialize LOAD_STACK_GUARD and
force a rematerialization (which seems hard), or check for spilling in
expandPostRAPseudo. It only makes sense when the stack guard is a global
variable, which requires more instructions to load. Anyway, this seems to go out
of the scope of the current patch.
llvm-svn: 266806
The functionality contained within getIntrinsicIDForCall is two-fold: it
checks if a CallInst's callee is a vectorizable intrinsic. If it isn't
an intrinsic, it attempts to map the call's target to a suitable
intrinsic.
Move the mapping functionality into getIntrinsicForCallSite and rename
getIntrinsicIDForCall to getVectorIntrinsicIDForCall while
reimplementing it in terms of getIntrinsicForCallSite.
llvm-svn: 266801
Add a new method, DICompositeType::buildODRType, that will create or
mutate the DICompositeType for a given ODR identifier, and use it in
LLParser and BitcodeReader instead of DICompositeType::getODRType.
The logic is as follows:
- If there's no node, create one with the given arguments.
- Else, if the current node is a forward declaration and the new
arguments would create a definition, mutate the node to match the
new arguments.
- Else, return the old node.
This adds a missing feature supported by the current DITypeIdentifierMap
(which I'm slowly making redundant). The only remaining difference is
that the DITypeIdentifierMap has a "the-last-one-wins" rule, whereas
DICompositeType::buildODRType has a "the-first-one-wins" rule.
For now I'm leaving behind DICompositeType::getODRType since it has
obvious, low-level semantics that are convenient for unit testing.
llvm-svn: 266786
This patch improves SimplifyCFG to catch cases like:
if (a < b) {
  if (a > b)      <- known to be false
    unreachable;
}
Phabricator Revision: http://reviews.llvm.org/D18905
llvm-svn: 266767
Calling ValueMap::MD lazily constructs a ValueMap, which mallocs the
buckets. Instead of swapping constructed maps, move around the
underlying Optional<MDMapT>. This gets rid of some unnecessary malloc
traffic from r266579 (not that it showed up on a profile).
llvm-svn: 266761
Lift the API for debug info ODR type uniquing up a layer. Instead of
clients managing the map directly on the LLVMContext, add a static
method to DICompositeType called getODRType and handle the map in the
background. Also adds DICompositeType::getODRTypeIfExists, so far just
for convenience in the unit tests.
This simplifies the logic in LLParser and BitcodeReader. Because of
argument spam there are actually a few more lines of code now; I'll see
if I come up with a reasonable way to clean that up.
llvm-svn: 266742
Tighten up the API for debug info ODR type uniquing in LLVMContext. The
only reason to allow other DIType subclasses is to make the unit tests
prettier :/.
llvm-svn: 266737
Summary:
Need to use predecessors for reverse graph, successors for forward graph.
succ_iterator/pred_iterator are not compatible, so this patch is all the work necessary to work around that (which is what everywhere else does). Not sure if there is a better way, so cc'ing some random folks to take a gander :)
Reviewers: dblaikie, qcolombet, echristo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18796
llvm-svn: 266718
Summary:
The `"patchable-function"` attribute can be used by an LLVM client to
influence LLVM's code generation in ways that makes the generated code
easily patchable at runtime (for instance, to redirect control).
Right now only one patchability scheme is supported,
`"prologue-short-redirect"`, but this can be expanded in the future.
Reviewers: joker.eph, rnk, echristo, dberris
Subscribers: joker.eph, echristo, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19046
llvm-svn: 266715
As per David's review, rename everything in the new API for ODR type
uniquing of debug info.
ensureDITypeMap => enableDebugTypeODRUniquing
destroyDITypeMap => disableDebugTypeODRUniquing
hasDITypeMap => isODRUniquingDebugTypes
llvm-svn: 266713
This reverts commit r266477.
The reverted commit introduces a cyclic dependency: it has "Analysis" depend on "ProfileData",
while "ProfileData" depends on "Object", which depends on "BitCode", which
depends on "Analysis".
llvm-svn: 266619
Three problems:
1. <future> can't be easily used. If you must use it, see
include/Support/ThreadPool.h for how.
2. constexpr problems, even after 266588.
3. Move assignment operators can't be defaulted in MSVC2013.
llvm-svn: 266615
Removed some unused headers, replaced some headers with forward class declarations.
Found using simple scripts like this one:
clear && ack --cpp -l '#include "llvm/ADT/IndexedMap.h"' | xargs grep -L 'IndexedMap[<]' | xargs grep -n --color=auto 'IndexedMap'
Patch by Eugene Kosov <claprix@yandex.ru>
Differential Revision: http://reviews.llvm.org/D19219
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266595
asynchronous call/handle. Also updates the ORC remote JIT API to use the new
scheme.
The previous version of the RPC tools only supported void functions, and
required the user to manually call a paired function to return results. This
patch replaces the Procedure typedef (which only supported void functions) with
the Function typedef which supports return values, e.g.:
Function<FooId, int32_t(std::string)> Foo;
The RPC primitives and channel operations are also expanded. RPC channels must
support four new operations: startSendMessage, endSendMessage,
startRecieveMessage and endRecieveMessage, to handle channel locking. In
addition, serialization support for tuples to RPCChannels is added to enable
multiple return values.
The RPC primitives are expanded from callAppend, call, expect and handle, to:
appendCallAsync - Make an asynchronous call to the given function.
callAsync - The same as appendCallAsync, but calls send on the channel when
done.
callSTHandling - Blocking call for single-threaded code. Wraps a call to
callAsync then waits on the result, using a user-supplied
handler to handle any callbacks from the remote.
callST - The same as callSTHandling, except that it doesn't handle
callbacks - it expects the result to be the first return.
expect and handle - as before.
handleResponse - Handle a response from the remote.
waitForResult - Wait for the response with the given sequence number to arrive.
llvm-svn: 266581
Cache the result of mapping metadata nodes between instances of IRLinker
(i.e., for the lifetime of IRMover). There shouldn't be any real
functional change here, but this should give a major speedup. I had
loaned this to Mehdi when he tested performance of r266446, and the two
patches together gave a 10x speedup in metadata mapping.
llvm-svn: 266579
This required changing several places to print VT enums as strings instead of raw ints since the proper method to use to print became ambiguous. This is probably an improvement anyway.
This also appears to save ~8K from an x86 self host build of llc.
llvm-svn: 266562
Rather than relying on the structural equivalence of DICompositeType to
merge type definitions, use an explicit map on the LLVMContext that
LLParser and BitcodeReader consult when constructing new nodes.
Each non-forward-declaration DICompositeType with a non-empty
'identifier:' field is stored/loaded from the type map, and the first
definition will "win".
This map is opt-in: clients that expect ODR types from different modules
to be merged must call LLVMContext::ensureDITypeMap.
- Clients that just happen to load more than one Module in the same
LLVMContext won't magically merge types.
- Clients (like LTO) that want to continue to merge types based on ODR
identifiers should opt-in immediately.
I have updated LTOCodeGenerator.cpp, the two "linking" spots in
gold-plugin.cpp, and llvm-link (unless -disable-debug-info-type-map) to
set this.
With this in place, it will be straightforward to remove the DITypeRef
concept (i.e., referencing types by their 'identifier:' string rather
than pointing at them directly).
llvm-svn: 266549
This is a requirement for the cache handling in D18494
Differential Revision: http://reviews.llvm.org/D18908
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266519
To be able to work accurately on the reference graph when taking
decisions about internalizing, promoting, renaming, etc., we need
to have the alias information explicit.
Differential Revision: http://reviews.llvm.org/D18836
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266517
This reverts commit r266507, reapplying r266503 (and r266505
"ValueMapper: Use API from r266503 in unit tests, NFC") completely
unchanged.
I reverted because of a bot failure here:
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/16810/
However, looking more closely, the failure was from a host-compiler
crash (clang 3.7.1) when building:
lib/CodeGen/AsmPrinter/CMakeFiles/LLVMAsmPrinter.dir/DwarfAccelTable.cpp.o
I didn't modify that file, or anything it includes, with that commit.
The next build (which hadn't picked up my revert) got past it:
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/16811/
I think this was just unfortunate timing. I suppose the bot must be
flakey.
llvm-svn: 266510
This significantly contributes to peak memory usage during a
LTO Release+DebugInfo build of clang. In my profile the peak usage
is around 164MB before this change and ~130MB after.
llvm-svn: 266509
This reverts commit r266503, in case it's the root cause of this bot
failure:
http://lab.llvm.org:8011/builders/lld-x86_64-freebsd/builds/16810
I'm also reverting r266505 -- "ValueMapper: Use API from r266503 in unit
tests, NFC" -- since it's in the way.
llvm-svn: 266507
Eliminate co-recursion of Mapper::mapValue through
ValueMaterializer::materializeInitFor, through a major redesign of the
ValueMapper.cpp interface.
- Expose a ValueMapper class that controls the entry points to the
mapping algorithms.
- Change IRLinker to use ValueMapper directly, rather than
llvm::RemapInstruction, llvm::MapValue, etc.
- Use (e.g.) ValueMapper::scheduleMapGlobalInit to add mapping work to
a worklist in ValueMapper instead of recursing.
There were two fairly major complications.
Firstly, IRLinker::linkAppendingVarProto incorporates an on-the-fly IR
upgrade that I had to split apart. Long-term, this upgrade should be
done in the bitcode reader (and we should only accept the "new" form),
but for now I've just made it work and added a FIXME. The hold-up is
that we need to deprecate the C API that relies on this.
Secondly, IRLinker has special logic to correctly implement aliases with
comdats, and uses two ValueToValueMapTy instances and two
ValueMaterializers. I supported this by allowing clients to register an
alternate mapping context, whose MCID can be passed in when scheduling
new work.
While out of scope for this commit, it should now be straightforward to
remove recursion from Mapper::mapValue.
llvm-svn: 266503
Because HoistSpillHelper::hoistAllSpills is called in postOptimization, before the
patch we didn't want LiveRangeEdit::eliminateDeadDefs to call splitSeparateComponents
and generate unassigned new vregs. However, skipping splitSeparateComponents will make
verify-machineinstrs unhappy, so I remove the early return, and use
HoistSpillHelper::LRE_DidCloneVirtReg to assign physreg/stackslot for those new vregs.
In addition, some code reorganization makes it possible for class HoistSpillHelper
to privately inherit from LiveRangeEdit::Delegate. This is to be consistent with
class RAGreedy and class RegisterCoalescer.
Differential Revision: http://reviews.llvm.org/D19142
llvm-svn: 266489
Adds an interface to get ProfileSummary for a module and makes InlineCost use ProfileSummary to get max function count.
Differential Revision: http://reviews.llvm.org/D18622
llvm-svn: 266477
This is a recommit of r266390 with a fix that will allow tests to pass
(hopefully). Before, we got a StringRef to M->getTargetTriple() and right
after that we moved the Module, so we were referencing a dangling object.
llvm-svn: 266456
Currently each Function points to a DISubprogram and DISubprogram has a
scope field. For member functions the scope is a DICompositeType. DIScopes
point to the DICompileUnit to facilitate type uniquing.
Distinct DISubprograms (with isDefinition: true) are not part of the type
hierarchy and cannot be uniqued. This change removes the subprograms
list from DICompileUnit and instead adds a pointer to the owning compile
unit to distinct DISubprograms. This would make it easy for ThinLTO to
strip unneeded DISubprograms and their transitively referenced debug info.
Motivation
----------
Materializing DISubprograms is currently the most expensive operation when
doing a ThinLTO build of clang.
We want the DISubprogram to be stored in a separate Bitcode block (or the
same block as the function body) so we can avoid having to expensively
deserialize all DISubprograms together with the global metadata. If a
function has been inlined into another subprogram we need to store a
reference to the block containing the inlined subprogram.
Attached to https://llvm.org/bugs/show_bug.cgi?id=27284 is a python script
that updates LLVM IR testcases to the new format.
http://reviews.llvm.org/D19034
<rdar://problem/25256815>
llvm-svn: 266446
Perform store clustering just like load clustering. This change adds
StoreClusterMutation in machine-scheduler. To control StoreClusterMutation,
added enableClusterStores() in TargetInstrInfo.h. This is enabled only on
AArch64 for now.
This change also adds support for unscaled stores, which were not handled in
getMemOpBaseRegImmOfs().
llvm-svn: 266437
Summary:
InlineCost's threshold is multiplied by this value. This lets us adjust
the inlining threshold up or down on a per-target basis. For example,
we might want to increase the threshold on targets where calls are
unusually expensive.
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18560
llvm-svn: 266405
Summary:
This lets us add this pass to the IR pass manager unconditionally; it
will simply not do anything on targets without branch divergence.
Reviewers: tra
Subscribers: llvm-commits, jingyue, rnk, chandlerc
Differential Revision: http://reviews.llvm.org/D18625
llvm-svn: 266398
This will be used in lld to avoid creating TargetMachine in two
different places. See D18999 for a more detailed discussion.
Differential Revision: http://reviews.llvm.org/D19139
llvm-svn: 266390
If the size of an AST entry changes, we also need to make sure we perform
necessary alias set merges, as the new size may overlap pointers in other sets.
We happen to run into this with memset, because memset allows an entry for a
i8* pointer to have a decidedly non-i8 size.
This fixes PR27262.
Differential Revision: http://reviews.llvm.org/D18939
llvm-svn: 266381
The only use for getGlobalContext() is in the C API.
Let's just move the static global here and nuke the C++ API.
Differential Revision: http://reviews.llvm.org/D19094
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266380
At the same time, this fixes the InstructionsTest::CastInst unittest:
previously you could leave the IR in an invalid state and exit when you don't
destroy the context (like the global one); that is no longer possible now.
This is the first part of http://reviews.llvm.org/D19094
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266379
Summary:
Re-factor some code to improve clarity and style based on review
comments from http://reviews.llvm.org/D18093.
Reviewers: MatzeB, mcrosier
Subscribers: MatzeB, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19128
llvm-svn: 266372
Some SIMD implementations are not IEEE-754 compliant, for example ARM's NEON.
This patch teaches the loop vectorizer to only allow transformations of loops
that either contain no floating-point operations or have enough allowance
flags supporting lack of precision (ex. -ffast-math, Darwin).
For that, the target description now has a method which tells us if the
vectorizer is allowed to handle FP math without falling into unsafe
representations, plus a check on every FP instruction in the candidate loop
to check for the safety flags.
This commit makes LLVM behave like GCC with respect to ARM NEON support, but
it stops short of fixing the underlying problem: sub-normals. Neither GCC
nor LLVM have a flag for allowing sub-normal operations. Before this patch,
GCC only allows it using unsafe-math flags and LLVM allows it by default with
no way to turn it off (short of not using NEON at all).
As a first step, we push this change to make it safe and in sync with GCC.
The second step is to discuss a new sub-normals flag in both communities
and come up with a common solution. The third step is to improve the FastMath
flags in LLVM to encode sub-normals and use those flags to restrict NEON FP.
Fixes PR16275.
llvm-svn: 266363
MachineInstr.h and MachineInstrBuilder.h are very popular headers,
widely included across all LLVM backends. It turns out that there are only a
handful of TUs that actually care about DI operands on MachineInstrs.
After this change, touching DebugInfoMetadata.h and rebuilding llc only
needs 112 actions instead of 542.
llvm-svn: 266351
Summary:
If a PHI has an incoming undef, we can pretend that it is equal to one
non-undef, non-self incoming value.
This is particularly relevant in combination with the StructurizeCFG
pass, which introduces PHI nodes with undefs. Previously, this led to
branch conditions that were uniform before StructurizeCFG to become
non-uniform afterwards, which confused the SIAnnotateControlFlow
pass.
This fixes a crash when Mesa radeonsi compiles a shader from
dEQP-GLES3.functional.shaders.switch.switch_in_for_loop_dynamic_vertex
Reviewers: arsenm, tstellarAMD, jingyue
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19013
llvm-svn: 266347
Summary:
For GL_ARB_compute_shader we need to support workgroup sizes of at least 1024. However, if we want to allow large workgroup sizes, we may need to use fewer registers, as we have to run more waves per SIMD.
This patch adds an attribute to specify the maximum work group size the compiled program needs to support. It defaults to 256, as that has no wave restrictions.
Reducing the number of registers available is done similarly to how the registers were reserved for chips with the sgpr init bug.
Reviewers: mareko, arsenm, tstellarAMD, nhaehnle
Subscribers: FireBurn, kerberizer, llvm-commits, arsenm
Differential Revision: http://reviews.llvm.org/D18340
Patch By: Bas Nieuwenhuizen
llvm-svn: 266337
Summary:
Add a print method to Predicated Scalar Evolution which prints all interesting
transformations done by PSE.
Loop Access Analysis will now print this as part of the analysis output.
We now use this to check the exact expression transformations that were done
by PSE in LAA.
The additional checking also acts as white-box testing for the getAsAddRec method.
Reviewers: anemet, sanjoy
Subscribers: sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D18792
llvm-svn: 266334
The behavior of {MIN,MAX}NAN differs from that of {MIN,MAX}NUM when only
one of the inputs is NaN: -NUM will return the non-NaN argument while
-NAN would return NaN.
It is desirable to lower @llvm.{min,max}num to -NAN if the target doesn't
have a native instruction for -NUM. Notably, ARMv7 NEON's vmin has the
-NAN semantics.
N.B. Of course, it is only safe to do this if the intrinsic call is
marked nnan.
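A scalar C++ sketch of the difference for min (illustrative only; the function names mirror the node semantics, they are not library functions):
```
#include <cmath>
#include <limits>

// IEEE-style minNum: a single NaN input is ignored.
double minNum(double A, double B) {
  if (std::isnan(A)) return B;
  if (std::isnan(B)) return A;
  return A < B ? A : B;
}

// NEON vmin-style (MINNAN): any NaN input produces NaN.
double minNan(double A, double B) {
  if (std::isnan(A) || std::isnan(B))
    return std::numeric_limits<double>::quiet_NaN();
  return A < B ? A : B;
}
```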
llvm-svn: 266279
Summary: LLVMAttribute has outlived its utility and is becoming a problem for C API users that want to use all the LLVM attributes. In order to help moving away from LLVMAttribute in a smooth manner, this diff introduces LLVMGetAttrKindIDInContext, which can be used instead of the enum values.
Reviewers: Wallbraker, whitequark, joker.eph, echristo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18749
llvm-svn: 266257
An unsigned 2 bit bitfield takes 4 bytes in MSVC. Instead of a bitfield,
just use an unsigned char. We can go back to a bitfield when someone
implements the TODO of exposing and reusing the remaining 6 bits.
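A small standalone illustration of the layout issue (assuming MSVC's rule that a bitfield is padded to the size of its declared type):
```
#include <cstdio>

struct WithBitfield { unsigned Kind : 2; };   // 4 bytes with MSVC: padded to sizeof(unsigned)
struct WithChar     { unsigned char Kind; };  // 1 byte

int main() {
  std::printf("bitfield: %zu bytes, unsigned char: %zu bytes\n",
              sizeof(WithBitfield), sizeof(WithChar));
  return 0;
}
```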
llvm-svn: 266256
A DISubprogram on x86_64 was 48 bytes. During an LTO build we
end up allocating *a lot* of these (see Duncan's numbers on
llvm-dev and/or my numbers in the review link).
This change reduces the size to 40 bytes, with a nice effect
on peak memory usage when LTO'ing clang.
There are more classes in the hierarchy which can be compacted
so more patches will come. DISubprogram was the biggest offender
in my profiling, anyway.
Differential Revision: http://reviews.llvm.org/D18918
llvm-svn: 266241
Summary:
To be able to work accurately on the reference graph when taking decisions
about internalizing, promoting, renaming, etc., we need to have the alias
information explicit.
Reviewers: tejohnson
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18836
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266214
This patch fixes the calculation of builtin_object_size when it depends on a
condition. Before this patch the compiler did not know how to calculate the
object size when it found a condition that cannot be eliminated.
This patch enables calculating builtin_object_size even when the condition
cannot be eliminated, by choosing the minimum or maximum value as the
result. Whether the minimum or maximum value is chosen is based on the
second argument of the __builtin_object_size function.
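A small illustration of the behavior described above (the buffer names are made up; exact results depend on the optimization level):
```
#include <cstddef>
#include <cstdio>

char SmallBuf[16];
char BigBuf[64];

std::size_t lowerBound(bool Cond) {
  char *P = Cond ? SmallBuf : BigBuf;
  // Type 2 asks for a minimum: with this change the answer can be 16
  // instead of "unknown"; type 0/1 (maximum) can report 64.
  return __builtin_object_size(P, 2);
}

int main() {
  std::printf("%zu\n", lowerBound(true));
  return 0;
}
```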
Patch by Strahinja Petrovic.
Differential Revision: http://reviews.llvm.org/D18438
llvm-svn: 266193
Remove an ad-hoc transform in InstCombine and replace it with more
general machinery (ValueTracking, InstructionSimplify and VectorUtils).
This fixes PR27332.
llvm-svn: 266175
This will save a bunch of copies / initialization of intermediate
data structures, and (hopefully) simplify the code.
This also abstracts the symbol preservation mechanism outside of the
Internalization pass into the client code, which is not forced
to keep a map of strings for instance (ThinLTO will prefer hashes).
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266163
Two fixes: one for an error reported by verify-regalloc, and
another for the live range update of a PHI after rematerialization.
r265547:
Replace analyzeSiblingValues with new algorithm to fix its compile
time issue. The patch is to solve PR17409 and its duplicates.
analyzeSiblingValues is an N x N complexity algorithm where N is
the number of siblings generated by reg splitting. Although it
causes significant compile time issues when N is large, it is also
important for performance since it removes redundant spills and
enables rematerialization.
To solve the compile time issue, the patch removes analyzeSiblingValues
and replaces it with lower cost alternatives containing two parts. The
first part creates a new spill hoisting method in postOptimization of
register allocation. It does spill hoisting at once after all the spills
are generated instead of inside every instance of selectOrSplit. The
second part queries the defining expr of the original register for
rematerialization and keeps it always available during register allocation
even if it is already dead. It deletes those dead instructions only in
postOptimization. With the two parts in the patch, it can remove
analyzeSiblingValues without sacrificing performance.
Patches on top of r265547:
r265610 "Fix the compare-clang diff error introduced by r265547."
r265639 "Fix the sanitizer bootstrap error in r265547."
r265657 "InlineSpiller.cpp: Escap \@ in r265547. [-Wdocumentation]"
Differential Revision: http://reviews.llvm.org/D15302
Differential Revision: http://reviews.llvm.org/D18934
Differential Revision: http://reviews.llvm.org/D18935
Differential Revision: http://reviews.llvm.org/D18936
llvm-svn: 266162
Summary:
For correct handling of aliases to nameless
functions, we need to be able to refer to them through a GUID in the summary.
Here we name them using a hash of the non-private global names in the module.
Reviewers: tejohnson
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D18883
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266132
Summary:
Let's keep llvm-as "dumb": it converts textual IR to bitcode. This
commit removes the dependency from llvm-as to libLLVMAnalysis.
We'll add back summary in llvm-as if we get to a textual
representation for it at some point. In the meantime, opt seems
like a better place for that.
Reviewers: tejohnson
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19032
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266131
Summary:
They correspond to BUFFER_LOAD/STORE_DWORD[_X2,X3,X4] and mostly behave like
llvm.amdgcn.buffer.load/store.format. They will be used by Mesa for SSBO and
atomic counters at least when robust buffer access behavior is desired.
(These instructions perform no format conversion and do buffer range checking
per component.)
As a side effect of sharing patterns with llvm.amdgcn.buffer.store.format,
it has become trivial to add support for the f32 and v2f32 variants of that
intrinsic, so the patch does so.
Also DAG-ify (and fix) some tests that I noticed intermittent failures in
while developing this patch.
Some tests were (temporarily) adjusted for the required mayLoad/hasSideEffects
changes to the BUFFER_STORE_DWORD* instructions. See also
http://reviews.llvm.org/D18291.
Reviewers: arsenm, tstellarAMD, mareko
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D18292
llvm-svn: 266126
Summary:
The function import pass was computing all the imports for all the
modules in the index, and only using the imports for the current module.
Change this to instead compute only for the given module. This means
that the exports list can't be populated, but they weren't being used
anyway.
Longer term, the linker can collect all the imports and export lists
and serialize them out for consumption by the distributed backend
processes which use this pass.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D18945
llvm-svn: 266125
(Recommit of r266002, with r266011, r266016, and not accidentally
including an extra unused/uninitialized element in LibcallRoutineNames)
AtomicExpandPass can now lower atomic load, atomic store, atomicrmw, and
cmpxchg instructions to __atomic_* library calls, when the target
doesn't support atomics of a given size.
This is the first step towards moving all atomic lowering from clang
into llvm. When all is done, the behavior of __sync_* builtins,
__atomic_* builtins, and C11 atomics will be unified.
Previously LLVM would pass everything through to the ISelLowering
code. There, unsupported atomic instructions would turn into __sync_*
library calls. Because of that behavior, Clang currently avoids emitting
llvm IR atomic instructions when this would happen, and emits __atomic_*
library functions itself, in the frontend.
This change makes LLVM able to emit __atomic_* libcalls, and thus will
eventually allow clang to depend on LLVM to do the right thing.
It is advantageous to do the new lowering to atomic libcalls in
AtomicExpandPass, before ISel time, because it's important that all
atomic operations for a given size either lower to __atomic_*
libcalls (which may use locks), or native instructions which won't. No
mixing and matching.
At the moment, this code is enabled only for SPARC, as a
demonstration. The next commit will expand support to all of the other
targets.
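A small illustration of the situation from the C++ side (assuming a target without native 16-byte atomics; link against the atomic support library where required): the load below is expected to become an __atomic_* library call rather than inline instructions.
```
#include <atomic>
#include <cstdint>

struct Wide { std::uint64_t Lo, Hi; };   // 16 bytes: frequently too wide for native atomics

std::atomic<Wide> Global{{0, 0}};

int main() {
  // On such targets this lowers to a call into the atomic support library
  // (which may take a lock) instead of a native instruction sequence.
  Wide V = Global.load(std::memory_order_seq_cst);
  return static_cast<int>(V.Lo);
}
```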
Differential Revision: http://reviews.llvm.org/D18200
llvm-svn: 266115
Previously, we were using isGCRelocate predicates. Using a subclass of IntrinsicInst is far more idiomatic. The refactoring also enables a couple of minor simplifications and code sharing.
llvm-svn: 266098
This is a resubmittion of 263158 change.
This patch fixes the problem which occurs when loop-vectorize tries to use @llvm.masked.load/store intrinsic for a non-default addrspace pointer. It fails with "Calling a function with a bad signature!" assertion in CallInst constructor because it tries to pass a non-default addrspace pointer to the pointer argument which has default addrspace.
The fix is to add pointer type as another overloaded type to @llvm.masked.load/store intrinsics.
Reviewed By: reames
Differential Revision: http://reviews.llvm.org/D17270
llvm-svn: 266086
They broke the msan bot.
Original message:
Add __atomic_* lowering to AtomicExpandPass.
AtomicExpandPass can now lower atomic load, atomic store, atomicrmw,and
cmpxchg instructions to __atomic_* library calls, when the target
doesn't support atomics of a given size.
This is the first step towards moving all atomic lowering from clang
into llvm. When all is done, the behavior of __sync_* builtins,
__atomic_* builtins, and C11 atomics will be unified.
Previously LLVM would pass everything through to the ISelLowering
code. There, unsupported atomic instructions would turn into __sync_*
library calls. Because of that behavior, Clang currently avoids emitting
llvm IR atomic instructions when this would happen, and emits __atomic_*
library functions itself, in the frontend.
This change makes LLVM able to emit __atomic_* libcalls, and thus will
eventually allow clang to depend on LLVM to do the right thing.
It is advantageous to do the new lowering to atomic libcalls in
AtomicExpandPass, before ISel time, because it's important that all
atomic operations for a given size either lower to __atomic_*
libcalls (which may use locks), or native instructions which won't. No
mixing and matching.
At the moment, this code is enabled only for SPARC, as a
demonstration. The next commit will expand support to all of the other
targets.
Differential Revision: http://reviews.llvm.org/D18200
llvm-svn: 266062
This is intended to be shared by the ThinLTOCodeGenerator.
Note that there is a change in the way the verifier is run: previously
it was run as a Pass on the merged module during internalization,
while now the verifier is called explicitly on the merged module
outside of the internalize "pass pipeline".
What remains strange is that `DisableVerify` in the API does not
disable this initial verifier.
Differential Revision: http://reviews.llvm.org/D19000
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266047
`allocsize` is a function attribute that allows users to request that
LLVM treat arbitrary functions as allocation functions.
This patch makes LLVM accept the `allocsize` attribute, and makes
`@llvm.objectsize` recognize said attribute.
The review for this was split into two patches for ease of reviewing:
D18974 and D14933. As promised on the revisions, I'm landing both
patches as a single commit.
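One way the IR-level `allocsize` attribute commonly arrives is via the source-level `alloc_size` attribute; a hedged illustration (the function name is made up, and whether the size actually folds depends on optimization):
```
#include <cstddef>
#include <cstdio>
#include <cstdlib>

// alloc_size(1) tells the compiler the first argument is the allocation size,
// which the front end can translate into the allocsize IR attribute.
__attribute__((alloc_size(1))) void *myAlloc(std::size_t N) { return std::malloc(N); }

int main() {
  void *P = myAlloc(32);
  // With optimizations enabled, @llvm.objectsize can see through myAlloc
  // and this may fold to 32.
  std::printf("%zu\n", __builtin_object_size(P, 0));
  std::free(P);
  return 0;
}
```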
Differential Revision: http://reviews.llvm.org/D14933
llvm-svn: 266032
Although repairing definitions is not mandatory for correctness (only
phis would be impacted because of the RPO traversal), not repairing
might go against the cost model. Therefore, just repair when it is
possible.
llvm-svn: 266025
Use the MachineFunctionProperty mechanism to indicate whether the
liveness info is accurate instead of a bool flag on MRI.
Keeps the MRI accessor function for convenience. NFC
Differential Revision: http://reviews.llvm.org/D18767
llvm-svn: 266020
This is more robust to changes in the link ordering.
Differential Revision: http://reviews.llvm.org/D18946
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 266018
AtomicExpandPass can now lower atomic load, atomic store, atomicrmw, and
cmpxchg instructions to __atomic_* library calls, when the target
doesn't support atomics of a given size.
This is the first step towards moving all atomic lowering from clang
into llvm. When all is done, the behavior of __sync_* builtins,
__atomic_* builtins, and C11 atomics will be unified.
Previously LLVM would pass everything through to the ISelLowering
code. There, unsupported atomic instructions would turn into __sync_*
library calls. Because of that behavior, Clang currently avoids emitting
llvm IR atomic instructions when this would happen, and emits __atomic_*
library functions itself, in the frontend.
This change makes LLVM able to emit __atomic_* libcalls, and thus will
eventually allow clang to depend on LLVM to do the right thing.
It is advantageous to do the new lowering to atomic libcalls in
AtomicExpandPass, before ISel time, because it's important that all
atomic operations for a given size either lower to __atomic_*
libcalls (which may use locks), or native instructions which won't. No
mixing and matching.
At the moment, this code is enabled only for SPARC, as a
demonstration. The next commit will expand support to all of the other
targets.
Differential Revision: http://reviews.llvm.org/D18200
llvm-svn: 266002
MachineFrameInfo does not need to be able to distinguish between the
user asking us not to realign the stack and the target telling us it
doesn't support stack realignment. Either way, fixed stack objects have
their alignment clamped.
llvm-svn: 265971
Summary:
The motivation for this new function is to move an invalid assumption
about the relationship between the names of register definitions in
tablegen files and their assembly names into TargetRegisterInfo, so that
we can begin working on fixing this assumption.
The current problem is that if you have a register definition in
TableGen like:
def MYReg0 : Register<"r0", 0>;
The function TargetLowering::getRegForInlineAsmConstraint() derives the
assembly name from the tablegen name: "MYReg0" rather than the given
assembly name "r0". This works because, on most targets, the
tablegen name and the assembly name are case-insensitive matches for
each other (e.g. def EAX : X86Reg<"eax", ...>).
getRegAsmName() will allow targets to override this default assumption and
return the correct assembly name.
Reviewers: echristo, hfinkel
Subscribers: SamWot, echristo, hfinkel, llvm-commits
Differential Revision: http://reviews.llvm.org/D15614
llvm-svn: 265955
Summary:
This is the first step in also serializing the index out to LLVM
assembly.
The per-module summary written to bitcode is moved out of the bitcode
writer and to a new analysis pass (ModuleSummaryIndexWrapperPass).
The pass itself uses a new builder class to compute index, and the
builder class is used directly in places where we don't have a pass
manager (e.g. llvm-as).
Because we are computing summaries outside of the bitcode writer, we no
longer can use value ids created by the bitcode writer's
ValueEnumerator. This required changing the reference graph edge type
to use a new ValueInfo class holding a union between a GUID (combined
index) and Value* (per-module index). The Value* are converted to the
appropriate value ID during bitcode writing.
Also, this enables removal of the BitWriter library's dependence on the
Analysis library that was previously required for the summary computation.
Reviewers: joker.eph
Subscribers: joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D18763
llvm-svn: 265941
Summary:
This change teaches SCEV to reduce `(extractvalue
0 (op.with.overflow X Y))` into `op X Y` (with a no-wrap tag if
possible).
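A source-level illustration of where such IR shows up (hedged; the builtin below is the usual way Clang emits op.with.overflow):
```
#include <cstdint>

// Clang lowers this to llvm.sadd.with.overflow; extractvalue 0 of the result
// is the sum, which SCEV can now treat as a plain "A + B" (nsw when provable).
bool addAndCheck(std::int32_t A, std::int32_t B, std::int32_t &Sum) {
  return __builtin_add_overflow(A, B, &Sum);
}
```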
Reviewers: atrick, regehr
Subscribers: mcrosier, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D18684
llvm-svn: 265912
Summary:
After we make the adjustment, we can assume that for local allocas, but
not for stack parameters, the return address, or any other fixed stack
object (which has a negative offset and therefore lies prior to the
adjusted SP).
Fixes PR26662.
Reviewers: hfinkel, qcolombet, rnk
Subscribers: rnk, llvm-commits
Differential Revision: http://reviews.llvm.org/D18471
llvm-svn: 265886
Broken in D18938 because underlying_type only works for enums and not all stdlibs are happy when given a non-enum. Bots error out with 'only enumeration types have underlying types'.
There's probably a clever enable_if-ism that I can do with underlying_type and the actual integer value, but is_integral_or_enum also accepts implicit conversion so I need to ponder my life choices a bit before committing to template magic. A quick fix for now.
llvm-svn: 265880
Summary:
As discussed in D18775, making AtomicOrdering an enum class makes it non-hashable, which shouldn't be the case. Hashing.h defines hash_value for all is_integral_or_enum, but type_traits.h's definition of is_integral_or_enum only checks for *implicit* conversion to integral types, which leaves enum classes out and is very confusing because is_enum is true for enum classes.
This patch:
- Adds a check for is_enum when determining is_integral_or_enum.
- Explicitly converts the value parameter in hash_value to handle enum class hashing.
Note that the warning at the top of Hashing.h still applies: each execution of the program has a high probability of producing a different hash_code for a given input. Thus their values are not stable to save or persist, and should only be used during the execution for the construction of hashing datastructures.
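A standalone illustration of the gap (the enum below is just an example): an enum class is an enum but does not implicitly convert to an integer, so a trait keyed only on implicit conversion misses it.
```
#include <type_traits>

enum class Ordering { NotAtomic, Monotonic, SequentiallyConsistent };

static_assert(!std::is_convertible<Ordering, int>::value,
              "enum classes do not implicitly convert to integral types");
static_assert(std::is_enum<Ordering>::value,
              "yet they are enums, so is_integral_or_enum should accept them");

int main() { return 0; }
```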
Reviewers: dberlin, chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18938
llvm-svn: 265879
Sample-based profiling and optimization remarks currently remove
DICompileUnits from llvm.dbg.cu to suppress the emission of debug info
from them. This is somewhat of a hack and only borderline legal IR.
This patch uses the recently introduced NoDebug emission kind in
DICompileUnit to achieve the same result without breaking the Verifier.
A nice side-effect of this change is that it is now possible to combine
NoDebug and regular compile units under LTO.
http://reviews.llvm.org/D18808
<rdar://problem/25427165>
llvm-svn: 265861
This is a cleanup patch for SSP support in LLVM. There is no functional change.
llvm.stackprotectorcheck is not needed, because SelectionDAG isn't
actually lowering it in SelectBasicBlock; rather, it adds check code in
FinishBasicBlock, ignoring the position where the intrinsic is inserted
(See FindSplitPointForStackProtector()).
llvm-svn: 265851
This is in preparation for tail duplication during block placement. See D18226.
This needs to be a utility class for two reasons: no passes may run after block
placement, and tail duplication affects subsequent layout decisions, so
it must be interleaved with placement and can't be separated out into its own
pass. The original pass is still useful, and now runs by delegating to the
utility class.
llvm-svn: 265842
Strip out the remapping parts of IRLinker::linkFunctionBody and put them
in ValueMapper.cpp under the name Mapper::remapFunction (with a
top-level entry-point llvm::RemapFunction).
This is a nice cleanup on its own since it puts the remapping code
together and shares a single Mapper context for the entire
IRLinker::linkFunctionBody call. Besides that, this will make it easier
to break the co-recursion between IRMover.cpp and ValueMapper.cpp in
follow ups.
llvm-svn: 265835
Add Mapper::remapInstruction, move the guts of llvm::RemapInstruction
into it, and use the same Mapper for most of the calls to MapValue and
MapMetadata. There should be no functionality change here.
I left off the call to MapValue that wasn't passing in a Materializer
argument (for basic blocks of PHINodes). It shouldn't change
functionality either, but I'm suspicious enough to commit separately.
llvm-svn: 265832
Prevent the Metadata side-table in ValueMap from growing unnecessarily
when RF_NoModuleLevelChanges is set. As a drive-by, make ValueMap::hasMD,
which apparently had no users until I used it here for testing, actually
compile.
llvm-svn: 265828
It caused PR27275: "ARM: Bad machine code: Using an undefined physical register"
Also reverting the following commits that were landed on top:
r265610 "Fix the compare-clang diff error introduced by r265547."
r265639 "Fix the sanitizer bootstrap error in r265547."
r265657 "InlineSpiller.cpp: Escap \@ in r265547. [-Wdocumentation]"
llvm-svn: 265790
This re-commits r265535 which was reverted in r265541 because it
broke the Windows bots. The problem was that we had a PointerIntPair
which took a pointer to a struct allocated with new, and new doesn't
provide sufficient alignment guarantees.
This pattern was already present before r265535 and it just happened
to work. To fix this, we now split the PointerIntPair in the
ExitNotTakenInfo struct into a separate pointer and bool.
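A generic sketch of why the split avoids the problem (types here are illustrative, not the SCEV ones): packing a flag into a pointer's low bit assumes the pointee is sufficiently aligned, while a plain pointer-plus-bool pair assumes nothing.
```
// Illustrative only -- not the SCEV data structures.
#include <cstdint>

struct Payload { char Data[3]; };  // alignment 1: no spare low bits

// Packing a bool into the low bit assumes alignof(Payload) >= 2, which a
// plain 'new Payload' does not guarantee for a type like this:
//   uintptr_t Packed = reinterpret_cast<uintptr_t>(new Payload) | 1;  // fragile

// Keeping the bool separate needs no alignment assumption at all.
struct ExitInfo {
  Payload *Ptr = nullptr;
  bool Flag = false;
};
```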
Original commit message:
Summary:
When the backedge taken condition is computed from an icmp, SCEV can
deduce the backedge taken count only if one of the sides of the icmp
is an AddRecExpr. However, due to sign/zero extensions, we sometimes
end up with something that is not an AddRecExpr.
However, we can use SCEV predicates to produce a 'guarded' expression.
This change adds a method to SCEV to get this expression, and the
SCEV predicate associated with it.
In HowManyGreaterThans and HowManyLessThans we will now add a SCEV
predicate associated with the guarded backedge taken count when the
analyzed SCEV expression is not an AddRecExpr. Note that we only do
this as an alternative to returning a 'CouldNotCompute'.
We use this new feature in Loop Access Analysis and LoopVectorize to analyze
and transform more loops.
Reviewers: anemet, mzolotukhin, hfinkel, sanjoy
Subscribers: flyingforyou, mcrosier, atrick, mssimpso, sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17201
llvm-svn: 265786
This reverts commit r265765, reapplying r265759 after changing a call from
LocalAsMetadata::get to ValueAsMetadata::get (and adding a unit test). When a
local value is mapped to a constant (like "i32 %a" => "i32 7"), the new debug
intrinsic operand may no longer be pointing at a local.
http://lab.llvm.org:8080/green/job/clang-stage1-configure-RA_build/19020/
The previous commit message follows:
--
This is a partial re-commit -- maybe more of a re-implementation -- of
r265631 (reverted in r265637).
This makes RF_IgnoreMissingLocals behave (almost) consistently between
the Value and the Metadata hierarchy. In particular:
- MapValue returns nullptr or "metadata !{}" for missing locals in
MetadataAsValue/LocalAsMetadata bridging pairs, depending on
the RF_IgnoreMissingLocals flag.
- MapValue doesn't memoize LocalAsMetadata-related results.
- MapMetadata no longer deals with LocalAsMetadata or
RF_IgnoreMissingLocals at all. (This wasn't in r265631 at all, but
I realized during testing it would make the patch simpler with no
loss of generality.)
r265631 went too far, making both functions universally ignore
RF_IgnoreMissingLocals. This broke building (e.g.) compiler-rt.
Reassociate (and possibly other passes) don't currently maintain
dominates-use invariants for metadata operands, resulting in IR like
this:
define void @foo(i32 %arg) {
  call void @llvm.some.intrinsic(metadata i32 %x)
  %x = add i32 1, %arg
}
If the inliner chooses to inline @foo into another function, then
RemapInstruction will call `MapValue(metadata i32 %x)` and assert that
the return is not nullptr.
I've filed PR27273 to add a Verifier check and fix the underlying
problem in the optimization passes.
As a workaround, return `!{}` instead of nullptr for unmapped
LocalAsMetadata when RF_IgnoreMissingLocals is unset. Otherwise, match
the behaviour of r265631.
Original commit message:
ValueMapper: Make LocalAsMetadata match function-local Values
Start treating LocalAsMetadata similarly to function-local members of
the Value hierarchy in MapValue and MapMetadata.
- Don't memoize them.
- Return nullptr if they are missing.
This also cleans up ConstantAsMetadata to stop listening to the
RF_IgnoreMissingLocals flag.
llvm-svn: 265768
Summary:
Fixes PR26774.
If you're aware of the issue, feel free to skip the "Motivation"
section and jump directly to "This patch".
Motivation:
I define "refinement" as discarding behaviors from a program that the
optimizer has license to discard. So transforming:
```
void f(unsigned x) {
unsigned t = 5 / x;
(void)t;
}
```
to
```
void f(unsigned x) { }
```
is refinement, since the behavior went from "if x == 0 then undefined
else nothing" to "nothing" (the optimizer has license to discard
undefined behavior).
Refinement is a fundamental aspect of many mid-level optimizations done
by LLVM. For instance, transforming `x == (x + 1)` to `false` also
involves refinement since the expression's value went from "if x is
`undef` then { `true` or `false` } else { `false` }" to "`false`" (by
definition, the optimizer has license to fold `undef` to any non-`undef`
value).
Unfortunately, refinement implies that the optimizer cannot assume
that the implementation of a function it can see has all of the
behavior an unoptimized or a differently optimized version of the same
function can have. This is a problem for functions with comdat
linkage, where a function can be replaced by an unoptimized or a
differently optimized version of the same source level function.
For instance, FunctionAttrs cannot assume a comdat function is
actually `readnone` even if it does not have any loads or stores in
it; since there may have been loads and stores in the "original
function" that were refined out in the currently visible variant, and
at the link step the linker may in fact choose an implementation with
a load or a store. As an example, consider a function that does two
atomic loads from the same memory location, and writes to memory only
if the two values are not equal. The optimizer is allowed to refine
this function by first CSE'ing the two loads, and then folding the
comparison to always report that the two values are equal. Such a
refined variant will look like it is `readonly`. However, the
unoptimized version of the function can still write to memory (since
the two loads //can// result in different values), and selecting the
unoptimized version at link time will retroactively invalidate
transforms we may have done under the assumption that the function
does not write to memory.
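A hedged C++ sketch of that example (not code from the patch): once the optimizer CSE's the two loads, the comparison folds to false, the store dies, and the function looks readonly, even though the unoptimized comdat copy can still write.
```
#include <atomic>

extern std::atomic<int> Shared;
extern int Sink;

void maybeWrite() {
  int A = Shared.load(std::memory_order_relaxed);
  int B = Shared.load(std::memory_order_relaxed);
  // After CSE'ing the two loads, A != B is always false, so the refined
  // variant never stores -- but the linker may still pick an unoptimized
  // copy of this function that does.
  if (A != B)
    Sink = 1;
}
```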
Note: this is not just a problem with atomics or with linking
differently optimized object files. See PR26774 for more realistic
examples that involved neither.
This patch:
This change introduces a new predicate, `GlobalValue::mayBeDerefined`,
that returns true if the linkage type
allows a function to be replaced by a differently optimized variant at
link time. It then changes a set of IPO passes to bail out if they see
such a function.
Reviewers: chandlerc, hfinkel, dexonsmith, joker.eph, rnk
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D18634
llvm-svn: 265762
This is a partial re-commit -- maybe more of a re-implementation -- of
r265631 (reverted in r265637).
This makes RF_IgnoreMissingLocals behave (almost) consistently between
the Value and the Metadata hierarchy. In particular:
- MapValue returns nullptr or "metadata !{}" for missing locals in
MetadataAsValue/LocalAsMetadata bridging pairs, depending on
the RF_IgnoreMissingLocals flag.
- MapValue doesn't memoize LocalAsMetadata-related results.
- MapMetadata no longer deals with LocalAsMetadata or
RF_IgnoreMissingLocals at all. (This wasn't in r265631 at all, but
I realized during testing it would make the patch simpler with no
loss of generality.)
r265631 went too far, making both functions universally ignore
RF_IgnoreMissingLocals. This broke building (e.g.) compiler-rt.
Reassociate (and possibly other passes) don't currently maintain
dominates-use invariants for metadata operands, resulting in IR like
this:
define void @foo(i32 %arg) {
  call void @llvm.some.intrinsic(metadata i32 %x)
  %x = add i32 1, %arg
}
If the inliner chooses to inline @foo into another function, then
RemapInstruction will call `MapValue(metadata i32 %x)` and assert that
the return is not nullptr.
I've filed PR27273 to add a Verifier check and fix the underlying
problem in the optimization passes.
As a workaround, return `!{}` instead of nullptr for unmapped
LocalAsMetadata when RF_IgnoreMissingLocals is unset. Otherwise, match
the behaviour of r265631.
Original commit message:
ValueMapper: Make LocalAsMetadata match function-local Values
Start treating LocalAsMetadata similarly to function-local members of
the Value hierarchy in MapValue and MapMetadata.
- Don't memoize them.
- Return nullptr if they are missing.
This also cleans up ConstantAsMetadata to stop listening to the
RF_IgnoreMissingLocals flag.
llvm-svn: 265759
Now, recordRegBankForType records only the first register bank that
covers a type instead of the last. This behavior can, nevertheless, be
overridden with the additional Force parameter.
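A minimal sketch of that behaviour with hypothetical stand-in types (the real code works on register banks and low-level types):
```
// Hypothetical stand-ins; the first recorded bank wins unless Force is set.
#include <map>

using TypeId = unsigned;
using BankId = unsigned;

void recordRegBankForType(std::map<TypeId, BankId> &TypeToBank, TypeId Ty,
                          BankId Bank, bool Force = false) {
  auto It = TypeToBank.find(Ty);
  if (It == TypeToBank.end())
    TypeToBank.emplace(Ty, Bank);  // first register bank covering the type
  else if (Force)
    It->second = Bank;             // later banks only override when forced
}
```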
llvm-svn: 265741
TUs in each unit refer to the unit they are in; if the unit is moved,
this reference is invalidated and things break.
No test case because UB isn't testable - ASan would likely catch this on
a large enough test case (just needs to have enough TUs that a
reallocation of the vector would occur) but didn't seem worthwhile. Up
for debate/revisiting if anyone feels strongly.
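A generic C++ illustration of the hazard (not the DWARF code itself): elements keep a back-pointer to the unit that owns them, so a reallocation that moves the units leaves those pointers dangling.
```
#include <vector>

struct Unit;
struct TU { Unit *Owner = nullptr; };   // back-reference into the owning unit
struct Unit { std::vector<TU> TUs; };

void addUnit(std::vector<Unit> &Units) {
  Units.emplace_back();  // may reallocate: every existing Unit is moved and
                         // any TU::Owner captured earlier now dangles
}
```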
llvm-svn: 265740
specific type.
This will be used to find the default mapping of the instruction.
Also, this information is recorded, instead of computed, because it is
expensive to determine from a type which register bank maps it.
Indeed, we need to iterate through all the register classes of all the
register banks to find the one that maps the given type.
llvm-svn: 265736
iterate over register class bitmask.
Thanks to this helper class, users of the register class bitmask no
longer need to know how it is represented.
Moreover, it will make the code much easier to read.
llvm-svn: 265730
from a register.
On top of duplicating the logic, it was buggy! It would assert on
physical registers, since MachineRegisterInfo does not have any
information regarding register classes/banks for them.
llvm-svn: 265727
The pass walks through the machine function and assigns the register banks
using the default mapping. In other words, there is no attempt to reduce
cross register copies.
llvm-svn: 265707
the mapping of an instruction on register bank.
For most instructions, it is possible to guess the mapping of the
instruction by using the encoding constraints.
That leaves instructions without encoding constraints.
For copy-like instructions, we try to propagate the information we get
from the other operands. Otherwise, the target has to give this
information.
llvm-svn: 265703
helper class.
The default constructor creates invalid (isValid() == false) instances
and may be used to communicate that a mapping was not found.
llvm-svn: 265699
A virtual register may have either a register bank or a register class.
This is represented by a PointerUnion between the related classes.
Typically, a virtual register went through the following states
regarding register class and register bank:
1. Creation: None is set. Virtual registers are fully generic.
2. Register bank assignment: Register bank is set. Virtual registers
live in a register bank, but we do not know the constraints they need
to fulfil.
3. Instruction selection: Register class is set. Virtual registers are
bound by encoding constraints.
To map these states to GlobalISel, the IRTranslator implements #1,
RegBankSelect #2, and Select #3.
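A minimal sketch of that representation using stand-in types (the real code uses the TargetRegisterClass and RegisterBank classes): null corresponds to state 1, a bank to state 2, and a class to state 3.
```
// Stand-in types; alignas keeps low pointer bits free for the union's tag.
#include "llvm/ADT/PointerUnion.h"

struct alignas(8) RegisterBank {};
struct alignas(8) RegClass {};

using RegClassOrBank = llvm::PointerUnion<RegClass *, RegisterBank *>;

inline bool isFullyGeneric(RegClassOrBank V) { return V.isNull(); }           // 1.
inline bool hasBankOnly(RegClassOrBank V) { return V.is<RegisterBank *>(); }  // 2.
inline bool isConstrained(RegClassOrBank V) { return V.is<RegClass *>(); }    // 3.
```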
llvm-svn: 265696
Follow-up to D18775 and the related clang change. AtomicOrdering is a lattice; 'stronger' is the right thing to ask for, and direct comparison is fraught with peril.
llvm-svn: 265685
This patch adds support for the GCC attribute((ifunc("resolver"))) for
targets that use ELF as the object file format. In general, an ifunc is a
special kind of function alias with type @gnu_indirect_function. Patch
for Clang: http://reviews.llvm.org/D15524
Differential Revision: http://reviews.llvm.org/D15525
llvm-svn: 265667
Remove the assertion that disallowed the combination, since
RF_IgnoreMissingLocals should have no effect on globals. As it happens,
RF_NullMapMissingGlobalValues asserted in MapValue(Constant*,...), so I
also changed a cast to a cast_or_null to get my test passing.
llvm-svn: 265633
Clarify what this RemapFlag actually means.
- Change the flag name to match its intended behaviour.
- Clearly document that it's not supposed to affect globals.
- Add a host of FIXMEs to indicate how to fix the behaviour to match
the intent of the flag.
RF_IgnoreMissingLocals should only affect the behaviour of
RemapInstruction for function-local operands; namely, for operands of
type Argument, Instruction, and BasicBlock. Currently, it is *only*
passed into RemapInstruction calls (and the transitive MapValue calls
that it makes).
When I split Metadata from Value I didn't understand the flag, and I
used it in a bunch of places for "global" metadata.
This commit doesn't have any functionality change, but prepares to
cleanup MapMetadata and MapValue.
llvm-svn: 265628
Produce the first specific error message for a malformed Mach-O file describing
the problem instead of the generic message for object_error::parse_failed of
"Invalid data was encountered while parsing the file". Many more good error
messages will follow after this first one.
This is built on Lang Hames' great work of adding the 'Error' class for
structured error handling and threading Error through MachOObjectFile
construction, and making createMachOObjectFile return Expected<...>.
So to get the error to the llvm-objdump tool, I changed the stack of
these methods to also return Expected<...>:
object::ObjectFile::createObjectFile()
object::SymbolicFile::createSymbolicFile()
object::createBinary()
Then finally in ParseInputMachO() in MachODump.cpp the error can
be reported and the specific error message can be printed in llvm-objdump
and can be seen in the existing test case for the existing malformed binary
but with the updated error message.
Converting these interfaces to Expected<> from ErrorOr<> does involve
touching a number of places. To contain the changes for now,
errorToErrorCode() and errorOrToExpected() are used where the callers
have yet to be converted.
Also, there were some bugs in the existing code that did not deal with the
old ErrorOr<> return values. Now that Expected<> values must be
checked and the errors handled, I added a TODO and a comment:
"// TODO: Actually report errors helpfully" and a call like
consumeError(ObjOrErr.takeError()) so the buggy code will not crash
now that it needs to deal with the Error.
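For reference, a small sketch of that pattern (parseThing() is a hypothetical stand-in for createObjectFile() and friends; Expected<>, takeError() and consumeError() are the Error-handling APIs mentioned above):
```
#include "llvm/Support/Error.h"
using namespace llvm;

Expected<int> parseThing();  // stand-in for createObjectFile() etc.

int useThing() {
  Expected<int> ThingOrErr = parseThing();
  if (!ThingOrErr) {
    // TODO: Actually report errors helpfully.
    consumeError(ThingOrErr.takeError());
    return 0;
  }
  return *ThingOrErr;
}
```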
Note there is also one fix needed to lld/COFF/InputFiles.cpp that goes along
with this, which I will commit right after this one. So expect lld not to build
after this commit and before the next one.
llvm-svn: 265606
This will be used by the register bank select pass to assign register banks
for generic virtual registers.
This was originally committed as r265573 but broke at least one windows bot.
The problem with the windows bot was that it was using a copy constructor for
the InstructionMappings class and could not synthesize it. Actually, the fact
that this class is not copy constructible is expected, and the compiler should
use move assignment instead. Marking the problematic assignment
explicitly as a move has its own problems.
Indeed, with recent clang we get a warning that we may prevent the elision of
the copy by the compiler. A proper fix for both compilers would be to change the
API of getPossibleInstrMapping to take a InstructionMappings as input/output
parameter. This does not feel natural and since GISel is not used on windows
yet, I chose to work around the problem by not compiling the problematic code on
windows.
llvm-svn: 265604
Summary:
In the context of http://wg21.link/lwg2445 C++ uses the concept of
'stronger' ordering but doesn't define it properly. This should be fixed
in C++17 barring a small question that's still open.
The code currently plays fast and loose with the AtomicOrdering
enum. Using an enum class is one step towards tightening things. I later
also want to tighten related enums, such as clang's
AtomicOrderingKind (which should be shared with LLVM as a 'C++ ABI'
enum).
This change touches a few lines of code which can be improved later, I'd
like to keep it as NFC for now as it's already quite complex. I have
related changes for clang.
As a follow-up I'll add:
bool operator<(AtomicOrdering, AtomicOrdering) = delete;
bool operator>(AtomicOrdering, AtomicOrdering) = delete;
bool operator<=(AtomicOrdering, AtomicOrdering) = delete;
bool operator>=(AtomicOrdering, AtomicOrdering) = delete;
This is separate so that clang and LLVM changes don't need to be in sync.
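A toy, self-contained illustration of what those deletions buy (the enum here is illustrative, not LLVM's definition): relational comparisons on the enum class become compile errors, so callers must ask an explicit lattice question instead.
```
enum class Ordering { Monotonic, Acquire, Release, AcquireRelease, SeqCst };

// The deleted overloads are selected over the built-in comparison, so
// 'Ordering::Acquire < Ordering::Release' no longer compiles.
bool operator<(Ordering, Ordering) = delete;
bool operator>(Ordering, Ordering) = delete;
bool operator<=(Ordering, Ordering) = delete;
bool operator>=(Ordering, Ordering) = delete;

// An explicit query replaces the accidental '<'.
inline bool isAtLeastAcquire(Ordering O) {
  return O == Ordering::Acquire || O == Ordering::AcquireRelease ||
         O == Ordering::SeqCst;
}
```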
Reviewers: jyknight, reames
Subscribers: jyknight, llvm-commits
Differential Revision: http://reviews.llvm.org/D18775
llvm-svn: 265602
This makes it possible to distinguish between mesa shaders
and other kernels even in the presence of compute shaders.
Patch By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Differential Revision: http://reviews.llvm.org/D18559
llvm-svn: 265589
instruction on a register bank. This will be used by the register bank select
pass to assign register banks for generic virtual registers." and the follow-on
commits while I find out a way to fix the win7 bot:
http://lab.llvm.org:8011/builders/sanitizer-windows/builds/19882
This reverts commit r265578, r265581, r265584, and r265585.
llvm-svn: 265587
helper class.
The default constructor creates invalid (isValid() == false) instances
and may be used to communicate that a mapping was not found.
llvm-svn: 265581
Use a DenseSet instead of a DenseMap for constants in LLVMContextImpl.
Last time I looked at this was some time before r223588, when
DenseSet<V> had no advantage over DenseMap<V,char>. After r223588,
there's a 50% memory savings.
This is all mechanical. There were little bits of missing API from
DenseSet so I added the trivial implementations:
- iterator::operator++(int)
- template <class LookupKeyT> insert_as(ValueTy, LookupKeyT)
There should be no functionality change, just reduced memory consumption
(this wasn't on a profile or anything; just a cleanup I stumbled on).
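A before/after sketch of the mechanical shape of the change (the key type is simplified here; the real code stores constant-uniquing keys):
```
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/DenseSet.h"

struct ConstantKey { int Id; };  // stand-in for the real uniquing key

// Before: the char payload was dead weight on every entry.
llvm::DenseMap<ConstantKey *, char> OldConstants;
// After: the same membership queries at roughly half the memory.
llvm::DenseSet<ConstantKey *> NewConstants;

void remember(ConstantKey *K) {
  NewConstants.insert(K);        // was: OldConstants[K] = 0;
}
bool isKnown(ConstantKey *K) {
  return NewConstants.count(K);  // lookup shape is unchanged
}
```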
llvm-svn: 265577
1. Add FullUnrollMaxCount option that works like MaxCount, but also limits
the unroll count for fully unrolled loops. So if a loop has an iteration
count over this, it won't fully unroll.
2. Add CLI options for MaxCount and the new option, so they can be tested
(plus a test).
3. Make partial unrolling obey MaxCount.
An example use-case (the out-of-tree one this is originally designed for) is
that a target's TTI can analyze a loop and decide on a max unroll count separate
from the size threshold, e.g. based on register pressure, then constrain
LoopUnroll to not exceed that, regardless of the size of the unrolled loop.
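A minimal sketch of how the two caps interact (the helper itself is hypothetical; the option names mirror the ones above):
```
// Hypothetical helper, not the LoopUnroll code itself.
unsigned clampUnrollCount(unsigned Requested, unsigned TripCount,
                          unsigned MaxCount, unsigned FullUnrollMaxCount) {
  bool IsFullUnroll = (Requested == TripCount);
  if (IsFullUnroll && FullUnrollMaxCount && TripCount > FullUnrollMaxCount)
    return 1;  // iteration count too high: don't fully unroll
  if (MaxCount && Requested > MaxCount)
    return MaxCount;  // partial unrolling obeys MaxCount as well
  return Requested;
}
```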
llvm-svn: 265562
The method checks that the value is fully defined across the different partial
mappings and that the partial mappings are compatible with each other.
llvm-svn: 265556
when DenseMap grew and moved memory. I verified it fixed the bootstrap
problem on x86_64-linux-gnu but I cannot verify whether it fixes
the bootstrap error on clang-ppc64be-linux. I will watch the build-bot
result closely.
Replace analyzeSiblingValues with new algorithm to fix its compile
time issue. The patch is to solve PR17409 and its duplicates.
analyzeSiblingValues is an N x N complexity algorithm where N is
the number of siblings generated by reg splitting. Although it
causes significant compile time issues when N is large, it is also
important for performance since it removes redundant spills and
enables rematerialization.
To solve the compile time issue, the patch removes analyzeSiblingValues
and replaces it with lower cost alternatives containing two parts. The
first part creates a new spill hoisting method in postOptimization of
register allocation. It does spill hoisting at once after all the spills
are generated instead of inside every instance of selectOrSplit. The
second part queries the defining expression of the original register for
rematerialization and keeps it always available during register allocation
even if it is already dead. It deletes those dead instructions only in
postOptimization. With the two parts in the patch, it can remove
analyzeSiblingValues without sacrificing performance.
Differential Revision: http://reviews.llvm.org/D15302
llvm-svn: 265547
Summary:
When the backedge taken condition is computed from an icmp, SCEV can
deduce the backedge taken count only if one of the sides of the icmp
is an AddRecExpr. However, due to sign/zero extensions, we sometimes
end up with something that is not an AddRecExpr.
However, we can use SCEV predicates to produce a 'guarded' expression.
This change adds a method to SCEV to get this expression, and the
SCEV predicate associated with it.
In HowManyGreaterThans and HowManyLessThans we will now add a SCEV
predicate associated with the guarded backedge taken count when the
analyzed SCEV expression is not an AddRecExpr. Note that we only do
this as an alternative to returning a 'CouldNotCompute'.
We use this new feature in Loop Access Analysis and LoopVectorize to analyze
and transform more loops.
Reviewers: anemet, mzolotukhin, hfinkel, sanjoy
Subscribers: flyingforyou, mcrosier, atrick, mssimpso, sanjoy, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D17201
llvm-svn: 265535
Instead of copying arguments from the source function to the
destination, steal them. This has a few advantages.
- The ValueMap doesn't need to be seeded with (or cleared of)
Arguments.
- Often the destination function won't have created any arguments yet,
so this avoids malloc traffic.
- Argument names don't need to be copied.
Because argument lists are lazy, this required a new
Function::stealArgumentListFrom helper.
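A generic illustration of the "steal, don't copy" idea (std::list::splice plays the role that Function::stealArgumentListFrom plays for the lazily created argument list):
```
#include <list>
#include <string>

// Transfer the source's elements wholesale: no per-element allocation,
// no copying of names, and the source is left empty.
void stealArguments(std::list<std::string> &Dst, std::list<std::string> &Src) {
  Dst.splice(Dst.end(), Src);
}
```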
llvm-svn: 265519
We must remove all aliased registers, which may be more than all the sub-
and super-registers combined.
Bug found while reading the code. The bug does not affect any existing
target as the only uses of register aliases I could find were control
registers on ARM and Hexagon, which are all reserved.
llvm-svn: 265510
As part of the TRI argument of addRegBankCoverage we already have access to
the TargetRegisterClass through the ID of that register class.
Therefore, there is no point in requiring a TargetRegisterClass instance;
the ID is enough to get to it.
llvm-svn: 265487
Bionic has a defined thread-local location for the stack protector
cookie. Emit a direct load instead of going through __stack_chk_guard.
llvm-svn: 265481
Add a common parent class for ConstantArray, ConstantVector, and
ConstantStruct called ConstantAggregate. These are the aggregate
subclasses of Constant that take operands.
This is mainly a cleanup, adding a common `isa` target and removing
duplicated code. However, it also simplifies caching which constants
point transitively at `GlobalValue` (a possible future direction).
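A short sketch of the common `isa` target this adds (assuming the usual LLVM headers; ConstantAggregate is the parent class introduced here):
```
#include "llvm/IR/Constants.h"
using namespace llvm;

// One check now covers ConstantArray, ConstantStruct and ConstantVector.
static bool isAggregateWithOperands(const Constant *C) {
  return isa<ConstantAggregate>(C);
}
```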
llvm-svn: 265466
Change the default constructor to create an invalid object.
The target will have to properly initialize the register banks before
using them.
llvm-svn: 265460
destruction.
This makes the Expected<T> class behave like Error, even when in success mode.
Expected<T> values must be checked to see whether they contain an error prior
to being dereferenced, assigned to, or destructed.
llvm-svn: 265446
At IR level, the swifterror argument is an input argument with type
ErrorObject**. For targets that support swifterror, we want to optimize it
to behave as an inout value with type ErrorObject*; it will be passed in a
fixed physical register.
The main idea is to track the virtual registers for each swifterror value. We
define swifterror values as AllocaInsts with swifterror attribute or a function
argument with swifterror attribute.
In SelectionDAGISel.cpp, we set up swifterror values (SwiftErrorVals) before
handling the basic blocks.
When iterating over all basic blocks in RPO, before actually visiting the basic
block, we call mergeIncomingSwiftErrors to merge incoming swifterror values when
there are multiple predecessors or to simply propagate them. There, we create a
virtual register for each swifterror value in the entry block. For predecessors
that are not yet visited, we create virtual registers to hold the swifterror
values at the end of the predecessor. The assignments are saved in
SwiftErrorWorklist and will be materialized at the end of visiting the basic
block.
When visiting a load from a swifterror value, we copy from the current virtual
register assignment. When visiting a store to a swifterror value, we create a
virtual register to hold the swifterror value and update SwiftErrorMap to
track the current virtual register assignment.
Differential Revision: http://reviews.llvm.org/D18108
llvm-svn: 265433
Without setting the flag there is no way to determine if a symbol
points to an ARM or to a Thumb function, as the LSB of the address is
masked out in all getter functions.
Note: Currently the Thumb flag is only used for MachO files, so
adding a test to this change is not possible. It will be used
by the upcoming fix for llvm-objdump for disassembling Thumb
functions, which is easily testable.
Differential revision: http://reviews.llvm.org/D17956
llvm-svn: 265387
Refactor common code that queries the ModuleSummaryIndex for a value's
GlobalValueInfo struct into getGlobalValueInfo helper methods, which
will also be used by D18763.
llvm-svn: 265370
We can only perform a tail call to a callee that preserves all the
registers that the caller needs to preserve.
This situation happens with calling conventions like preserve_mostcc or
cxx_fast_tls. It was explicitly handled for fast_tls but failing for
preserve_most. This patch generalizes the check to any calling
convention.
Related to rdar://24207743
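A generic sketch of the generalized check (a bitset stands in for the target's callee-saved register mask): the tail call is only safe when every register the caller must preserve is also preserved by the callee.
```
#include <bitset>

using PreservedMask = std::bitset<256>;  // hypothetical preserved-register mask

bool calleePreservesEnough(const PreservedMask &CallerPreserved,
                           const PreservedMask &CalleePreserved) {
  // CallerPreserved must be a subset of CalleePreserved.
  return (CallerPreserved & ~CalleePreserved).none();
}
```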
Differential Revision: http://reviews.llvm.org/D18680
llvm-svn: 265329
Use the MachineFunctionProperty mechanism to indicate whether a MachineFunction
is in SSA form instead of a custom method on MachineRegisterInfo. NFC
Differential Revision: http://reviews.llvm.org/D18574
llvm-svn: 265318
time issue. The patch is to solve PR17409 and its duplicates.
analyzeSiblingValues is an N x N complexity algorithm where N is
the number of siblings generated by reg splitting. Although it
causes significant compile time issues when N is large, it is also
important for performance since it removes redundant spills and
enables rematerialization.
To solve the compile time issue, the patch removes analyzeSiblingValues
and replaces it with lower cost alternatives containing two parts. The
first part creates a new spill hoisting method in postOptimization of
register allocation. It does spill hoisting at once after all the spills
are generated instead of inside every instance of selectOrSplit. The
second part queries the defining expression of the original register for
rematerialization and keeps it always available during register allocation
even if it is already dead. It deletes those dead instructions only in
postOptimization. With the two parts in the patch, it can remove
analyzeSiblingValues without sacrificing performance.
Differential Revision: http://reviews.llvm.org/D15302
llvm-svn: 265309
Implemented truncstore for KNL and skylake-avx512.
Covered vectors from v2i1 to v64i1. We save the value in bits (not in bytes) - v32i1 is saved in 4 bytes.
Differential Revision: http://reviews.llvm.org/D18740
llvm-svn: 265283
RAUW support on MDNode usually requires an extra allocation for
ReplaceableMetadataImpl. This is only strictly necessary if there are
tracking references to the MDNode. Make the construction of
ReplaceableMetadataImpl lazy, so that we don't get allocations if we
don't need them.
Since MDNode::isResolved now checks MDNode::isTemporary and
MDNode::NumUnresolved instead of whether a ReplaceableMetadataImpl is
allocated, the internal changes are intrusive (at various internal
checkpoints, isResolved now has a different answer).
However, there should be no real functionality change here; just
slightly lazier allocation behaviour. The external semantics should be
identical.
llvm-svn: 265279
This adds an assertion to maintain the property from r265273. When
Mapper::mapSimpleMetadata calls Mapper::mapValue, it should not find its
way back to mapMetadataImpl. This guarantees that mapSimpleMetadata is
not involved in any recursion.
Since Mapper::mapValue calls out to arbitrary materializers, we need to
save a bit on the ValueMap to make this assertion effective.
There should be no functionality change here. This co-recursion should
already have been impossible.
llvm-svn: 265276
Instead of checking on the fly during MapMetadata whether a subprogram is
needed, seed the ValueMap with `nullptr` up-front.
There is a small hypothetical functionality change. Previously, calling
MapMetadataOp on a node whose "scope:" chain led to an unneeded
subprogram would return nullptr. However, if that were ever called,
then the subprogram would be needed; a situation that the IRMover is
supposed to avoid a priori!
Besides cleaning up the code a little, this restores a nice property:
MapMetadataOp returns the same as MapMetadata.
llvm-svn: 265229
Support seeding a ValueMap with nullptr for Metadata entries, a
situation I didn't consider in the Metadata/Value split.
I added a ValueMapper::getMappedMD accessor that returns an
Optional<Metadata*> with the mapped (possibly null) metadata. IRMover
needs to use this to avoid modifying the map when it's checking for
unneeded subprograms. I updated a call from bugpoint since I find the
new code clearer.
llvm-svn: 265228
Summary: This should make the code more readable, especially all the map declarations.
Reviewers: tejohnson
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D18721
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 265215