The basic structure is that once a list record goes over 64K, the last
subrecord of the list is an LF_INDEX record that refers to the next
record. Because the type record graph must be topologically sorted, this
means we have to emit them in reverse order. We build the type records in
order of declaration, so if we don't want extra copies, we need to detect
when we are about to split a record and leave space for a continuation
subrecord that will point to the eventual split top-level record.
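To make the space-reservation idea concrete, here is a standalone sketch (the names and the 8-byte continuation size are hypothetical, it deliberately ignores the reverse-order emission described above, and it is not the LLVM type-record builder itself):
```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical constants: CodeView type records are capped at 64K, and we
// reserve room up front for the continuation (LF_INDEX) subrecord.
constexpr size_t MaxRecordSize = 0xFFFF;
constexpr size_t ContinuationSize = 8;

struct ListRecordBuilder {
  std::vector<std::vector<uint8_t>> Completed; // split-off top-level records
  std::vector<uint8_t> Current;                // record being built

  void addSubrecord(const std::vector<uint8_t> &Sub) {
    // Split *before* overflowing so the continuation subrecord still fits;
    // its target index gets patched once the next top-level record exists.
    if (Current.size() + Sub.size() + ContinuationSize > MaxRecordSize) {
      Current.resize(Current.size() + ContinuationSize); // placeholder LF_INDEX
      Completed.push_back(std::move(Current));
      Current.clear();
    }
    Current.insert(Current.end(), Sub.begin(), Sub.end());
  }
};
```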
Also adds dumping support for these records.
Next we should make sure that large method overload lists work properly.
llvm-svn: 273294
Summary:
Fix the computation of the offsets present in the scope table when using
SEH (__except_handler4).
This patch adds an intrinsic to track the position of the EHGuard allocation
on the stack. This position is needed when producing the ScopeTable.
```
struct _EH4_SCOPETABLE {
    DWORD GSCookieOffset;
    DWORD GSCookieXOROffset;
    DWORD EHCookieOffset;
    DWORD EHCookieXOROffset;
    _EH4_SCOPETABLE_RECORD ScopeRecord[1];
};

struct _EH4_SCOPETABLE_RECORD {
    DWORD EnclosingLevel;
    long (*FilterFunc)();
    union {
        void (*HandlerAddress)();
        void (*FinallyFunc)();
    };
};
```
The code to generate the EHCookie is added in `X86WinEHState.cpp`, which
emits the following instructions when SEH4 is used.
```
Lfunc_begin0:
# BB#0: # %entry
pushl %ebp
movl %esp, %ebp
pushl %ebx
pushl %edi
pushl %esi
subl $28, %esp
movl %ebp, %eax <<-- Loading FramePtr
movl %esp, -36(%ebp)
movl $-2, -16(%ebp)
movl $L__ehtable$use_except_handler4_ssp, %ecx
xorl ___security_cookie, %ecx
movl %ecx, -20(%ebp)
xorl ___security_cookie, %eax <<-- XOR FramePtr and Cookie
movl %eax, -40(%ebp) <<-- Storing EHGuard
leal -28(%ebp), %eax
movl $__except_handler4, -24(%ebp)
movl %fs:0, %ecx
movl %ecx, -28(%ebp)
movl %eax, %fs:0
movl $0, -16(%ebp)
calll _may_throw_or_crash
LBB1_1: # %cont
movl -28(%ebp), %eax
movl %eax, %fs:0
addl $28, %esp
popl %esi
popl %edi
popl %ebx
popl %ebp
retl
```
And the corresponding offset is computed:
```
Luse_except_handler4_ssp$parent_frame_offset = -36
.p2align 2
L__ehtable$use_except_handler4_ssp:
.long -2 # GSCookieOffset
.long 0 # GSCookieXOROffset
.long -40 # EHCookieOffset <<----
.long 0 # EHCookieXOROffset
.long -2 # ToState
.long _catchall_filt # FilterFunction
.long LBB1_2 # ExceptionHandler
```
Clang does not yet produce functions using SEH4, but that is a work in progress.
This patch is a step toward having a valid implementation of SEH4.
Unfortunately, it is not yet fully working. The EH registration block is not
allocated at the right offset on the stack.
Reviewers: rnk, majnemer
Subscribers: llvm-commits, chrisha
Differential Revision: http://reviews.llvm.org/D21231
llvm-svn: 273281
Summary:
canCombineSinCosLibcall() would previously combine sin+cos into sincos for
GNUX32/GNUEABI/GNUEABIHF regardless of whether UnsafeFPMath was set.
However, GNU only combines them under UnsafeFPMath, because sincos does not
set errno the way sin and cos do. It seems likely that this was an oversight.
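For illustration, the source-level pattern the combine targets, assuming a GNU environment where the sincos extension exists (the declaration below is written out by hand and is an assumption about the C library):
```cpp
#include <cmath>

// GNU extension provided by glibc's libm; declared manually here since it is
// not part of standard <cmath>.
extern "C" void sincos(double x, double *sin_out, double *cos_out);

void before(double x, double &s, double &c) {
  s = std::sin(x); // two separate libcalls; these may set errno
  c = std::cos(x);
}

void after(double x, double &s, double &c) {
  sincos(x, &s, &c); // one libcall, but it does not set errno like sin/cos do
}
```
The combine is only legal when the errno side effects can be ignored, which is what gating it on UnsafeFPMath enforces.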
Reviewers: t.p.northover
Subscribers: t.p.northover, aemerson, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D21431
llvm-svn: 273259
It did not correctly handle cases without a GEP.
The following loop wasn't vectorized:
for (int i = 0; i < len; i++)
  *to++ = *from++;
I use getPtrStride() to find the stride of the memory access and return 0 if the stride is not 1 or -1.
Differential revision: http://reviews.llvm.org/D20789
llvm-svn: 273257
This reverts commit r273019.
From email I sent to list:
> I don't think this makes sense. Either the linker you're using supports
> this feature, or it doesn't. Having it enabled for llc if your linker
> doesn't support it is not fun.
>
> Further note that this also affects basically all other code using llvm
> libraries -- other than Clang, which explicitly sets it back to false by
> default, unless you set the ENABLE_X86_RELAX_RELOCATIONS cmake flag to
> true.
>
> If you want to enable the relax mode across all llvm tools in some
> circumstances, I think it should be via moving the cmake flag from clang
> down into llvm.
>
> I'm going to revert this commit, since I both think it intrinsically
> doesn't make sense to do this, and because it's breaking some of our
> tools.
llvm-svn: 273245
Works around a bug (PR28216) in Clang's MS mangling of templates with
partial specializations.
This mismatch was introduced about six months ago in r256656.
llvm-svn: 273223
This patch adds function summaries, so that we don't need to recompute
various properties about function parameters/return values at each
callsite of a function. It also adds many interprocedural tests for
CFLAA.
Patch by Jia Chen.
Differential Revision: http://reviews.llvm.org/D21475#inline-182390
llvm-svn: 273219
Darwin added support in its Xcode 8.0 tools (released in the beta) for universal
files where the offsets and sizes of the objects are 64 bits, allowing
objects contained in universal files to be larger than 4 GB. The change is
very straightforward. There is a new magic number that differs by one bit,
much like the 64-bit Mach-O files. Then there is a new structure following
the fat_header that has the same layout but with the offset and size fields
using 64-bit values instead of 32-bit values.
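As a rough sketch of the layout described above (modeled on fat_arch from <mach-o/fat.h> with the two fields widened; the real header is authoritative and may include an extra reserved field):
```cpp
#include <cstdint>

// Illustrative only: mirrors struct fat_arch but with 64-bit offset/size,
// per the description above. Multi-byte fields in fat headers are big-endian.
struct fat_arch_64_sketch {
  int32_t  cputype;    // CPU specifier
  int32_t  cpusubtype; // machine specifier
  uint64_t offset;     // file offset to this object file (now 64-bit)
  uint64_t size;       // size of this object file (now 64-bit)
  uint32_t align;      // alignment as a power of 2
};
```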
rdar://26899493
llvm-svn: 273207
This will help sneak undefs past GVN into the DAG for
some tests.
Also add missing intrinsic for rsq_legacy, even though the node
was already selected to the instruction. Also start passing
the debug location to intrinsic errors.
llvm-svn: 273181
CodeGen has hooks that allow targets to emit specialized code instead
of calls to memcmp, memchr, strcpy, stpcpy, strcmp, strlen, strnlen.
When ASan/MSan/TSan/ESan is in use, this sidesteps its interceptors, resulting
in uninstrumented memory accesses. To avoid that, make these sanitizers
mark the calls as nobuiltin.
Differential Revision: http://reviews.llvm.org/D19781
llvm-svn: 273083
Access it through -passes=print-lcg-dot
Let me know any suggestions for changing the rendering; I'm not
particularly attached to what is implemented here.
llvm-svn: 273082
This is a functional change for LLE and LDist. The other clients (LV,
LVerLICM) already had this explicitly enabled.
The temporary boolean parameter to LAA that allowed turning off speculation
of symbolic strides is removed. This makes LAA's caching interface
LAA::getInfo take only the loop as a parameter, which makes the interface
more friendly to the new Pass Manager.
The flag -enable-mem-access-versioning is moved from LV to LAA, which now
allows turning off speculation globally.
llvm-svn: 273064
Darwin added support in its Xcode 8.0 tools (released in the beta) for static
library table of contents with 64-bit offsets to the archive members. The
change is very straightforward. The table of contents member is named
___.SYMDEF_64 or "___.SYMDEF_64 SORTED" and the same layout is used but with
fields using 64-bit values instead of 32-bit values.
rdar://26869808
llvm-svn: 273058
Currently, frontends which emit source-based code coverage have to
duplicate logic to encode filenames and raw coverage mappings properly.
This violates an abstraction layer and forces frontends to copy tricky
code.
Introduce llvm::coverage::encodeFilenamesAndRawMappings() to take care
of this.
This will help us experiment with zlib-compressing coverage mapping
data.
llvm-svn: 273055
Summary:
This seems like the least intrusive way to pass this information
through.
Fixes PR28151
Reviewers: majnemer, aprantl, dblaikie
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D21444
llvm-svn: 273053
This is indeed a much cleaner approach (thanks to Daniel Berlin for pointing
it out), and thanks also to David/Sean for the review.
Differential Revision: http://reviews.llvm.org/D21454
llvm-svn: 273032
Many CPUs only have the ability to do a 4-byte cmpxchg (or ll/sc), not a 1-
or 2-byte one. For those, you need to mask and shift the 1- or 2-byte values
appropriately to use the 4-byte instruction.
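A standalone sketch of that masking-and-shifting idea in plain C++ with std::atomic; it models the expansion rather than the IR the pass actually emits, and it assumes the byte lives in a naturally aligned little-endian 32-bit word:
```cpp
#include <atomic>
#include <cstdint>

// Emulate a 1-byte compare-and-swap with only a 4-byte CAS by masking and
// shifting the byte within its containing 32-bit word.
bool cmpxchg_i8(std::atomic<uint32_t> &Word, unsigned ByteIndex,
                uint8_t Expected, uint8_t Desired) {
  const unsigned Shift = ByteIndex * 8;
  const uint32_t Mask = 0xFFu << Shift;
  uint32_t Old = Word.load(std::memory_order_relaxed);
  for (;;) {
    if (uint8_t((Old & Mask) >> Shift) != Expected)
      return false;                                // our byte already differs
    uint32_t New = (Old & ~Mask) | (uint32_t(Desired) << Shift);
    if (Word.compare_exchange_weak(Old, New))      // the 4-byte cmpxchg
      return true;
    // Failure reloads Old; loop so changes to *other* bytes don't fail us.
  }
}
```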
This change adds support for cmpxchg-based instruction sets (only SPARC,
in LLVM). The support can be extended for LL/SC-based PPC and MIPS in
the future, supplanting the ISel expansions those architectures
currently use.
Tests added for the IR transform and SPARCv9.
Differential Revision: http://reviews.llvm.org/D21029
llvm-svn: 273025
We convert `Default` to `NotPIC` so that target independent code
can reason about this correctly.
Differential Revision: http://reviews.llvm.org/D21394
llvm-svn: 273024
Recommitting after fixing a non-atomic insert to the front of a SmallVector
in MCAsmLexer.h.
Add explicit Comment Token in Assembly Lexing for future support for
outputting explicit comments from inline assembly. As part of this,
CPPHash Directives are now explicitly distinguished from Hash line
comments in Lexer.
Line comments are recorded as EndOfStatement tokens, not Comment tokens
to simplify compatibility with current TargetParsers. This slightly
complicates comment output.
This moves all lexing tasks out of the parser, does minor cleanup to remove
extraneous newlines in Asm Output, and makes some improvements to whitespace
handling.
Reviewers: rtrieu, dwmw2, rnk
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20009
llvm-svn: 273007
Reapplying patch as it was reverted when it was first
committed because of an assertion failure when the
mrrc2 intrinsic was called in ARM mode. The failure
was happening because the instruction was being built
in ARMISelDAGToDAG.cpp and the tablegen description for
mrrc2 instruction doesn't allow you to use a predicate.
The ARM architecture manuals do say that mrrc2 in ARM
mode can be predicated with AL in assembly, but this has
no effect on the encoding of the instruction, as the top
4 bits will always be 1111, not 1110, which is the encoding
for the condition AL.
Differential Revision: http://reviews.llvm.org/D21408
llvm-svn: 272982
pass manager passes' `run` methods.
This removes a bunch of SFINAE goop from the pass manager and just
requires pass authors to accept `AnalysisManager<IRUnitT> &` as a dead
argument. This is a small price to pay for the simplicity of the system
as a whole, despite the noise that changing it causes at this stage.
This will also helpfully allow us to make the signature of the run
methods much more flexible for different kinds of passes to support
things like intelligently updating the pass's progression over IR units.
While this touches many, many, files, the changes are really boring.
Mostly made with the help of my trusty perl one liners.
Thanks to Sean and Hal for bouncing ideas for this with me in IRC.
llvm-svn: 272978
This is still NFCI, so the list of clients that allow symbolic stride
speculation does not change (yes: LV and LoopVersioningLICM, no: LLE,
LDist). However, since the symbolic strides are now managed by LAA
rather than passed in by the client, a new bool parameter is used to enable
symbolic stride speculation.
The existing test Transforms/LoopVectorize/version-mem-access.ll checks
that stride speculation is performed for LV.
The previously added test Transforms/LoopLoadElim/symbolic-stride.ll
ensures that no speculation is performed for LLE.
The next patch will change the functionality and turn on symbolic stride
speculation in all of LAA's clients and remove the bool parameter.
llvm-svn: 272970
When moving unsafe allocas to the unsafe stack, dbg.declare intrinsics are
updated to refer to the new location.
This change does the same to dbg.value intrinsics.
llvm-svn: 272968
Add explicit Comment Token in Assembly Lexing for future support for
outputting explicit comments from inline assembly. As part of this,
CPPHash Directives are now explicitly distinguished from Hash line
comments in Lexer.
Line comments are recorded as EndOfStatement tokens, not Comment tokens
to simplify compatibility with current TargetParsers. This slightly
complicates comment output.
This moves all lexing tasks out of the parser, does minor cleanup to remove
extraneous newlines in Asm Output, and makes some improvements to whitespace
handling.
Reviewers: rtrieu, dwmw2, rnk
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20009
llvm-svn: 272953
Summary:
... into getFrameIndexReferencePreferSP. This change folds the
fail-then-retry logic into getFrameIndexReferencePreferSP.
There is a non-functional but behavioral change in WinException --
earlier if `getFrameIndexReferenceFromSP` failed we'd trip an assert,
but now we'll silently use the (wrong) offset from the base pointer. I
could not write the assert I'd like to write ("FrameReg ==
StackRegister", like I've done in X86FrameLowering) since there is no
easy way to get to the stack register from WinException (happy to be
proven wrong here). One solution to this is to add a `bool
OnlyStackPointer` parameter to `getFrameIndexReferenceFromSP` that
asserts if it could not satisfy its promise of returning an offset from
a stack pointer, but that seems overkill.
Reviewers: rnk
Subscribers: sanjoy, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D21427
llvm-svn: 272938
This allows better catching of compiler errors since we can use
the override keyword to verify that methods are actually
overridden.
Also in this patch I've changed from storing a boolean Error
code everywhere to returning an llvm::Error, to propagate richer
error information up the call stack.
Reviewed By: ruiu, rnk
Differential Revision: http://reviews.llvm.org/D21410
llvm-svn: 272926
Daniel Berlin expressed some real concerns about the port and proposed
an alternative approach. I'll revert this for now while working on a
new patch, which I hope to put up for review shortly. Sorry for the churn.
llvm-svn: 272925
Both parameters to visitTypeBegin are actually members of CVRecord,
so we can just pass CVRecord instead of destructuring it.
Differential Revision: http://reviews.llvm.org/D21435
llvm-svn: 272899
We should update the results of BranchProbabilityInfo after removing a block in JumpThreading. Otherwise
we will get a dangling pointer inside the BranchProbabilityInfo cache.
Differential Revision: http://reviews.llvm.org/D20957
llvm-svn: 272891
Summary:
The Mips implementation only covers the feature bits described by the ELF
e_flags so far. Mips stores additional feature bits such as MSA in the
.MIPS.abiflags section.
Also fixed a small bug this revealed where microMIPS wouldn't add the
EF_MIPS_MICROMIPS flag when using -filetype=obj.
Reviewers: echristo, rafael
Subscribers: rafael, mehdi_amini, dsanders, sdardis, llvm-commits
Differential Revision: http://reviews.llvm.org/D21125
llvm-svn: 272880
The header files are designed to always be used together (through Pass.h).
Addresses the first part of https://llvm.org/bugs/show_bug.cgi?id=27991
Patch by Cristina Cristescu and me.
Reviewed by Richard Smith.
llvm-svn: 272877
- We lacked a short unique identifier for a statistic, so I renamed the
current "Name" field that just contained the DEBUG_TYPE name of the
current file to DebugType and added a new "Name" field that contains
the C++ identifier of the statistic variable.
- Add the -stats-json option which outputs statistics in json format.
Differential Revision: http://reviews.llvm.org/D20995
llvm-svn: 272826
Summary: As per title. This completes the C API Attribute support.
Reviewers: Wallbraker, whitequark, echristo, rafael, jyknight
Subscribers: mehdi_amini
Differential Revision: http://reviews.llvm.org/D21365
llvm-svn: 272811
This allows us to emit native IR in Clang (next commit).
Also, update the intrinsic tests to show that codegen already knows how to handle
the IR that Clang will soon produce.
llvm-svn: 272806
[DAG] Previously, debug values would transfer debug info only for the selected
start node of a replacement, which allowed debug info to be dropped.
Push debug value transfer to occur with node/value replacement in
SelectionDAG, remove now extraneous transfers of debug values.
This refixes PR9817 which was being incompletely checked in the
testsuite.
Reviewers: jyknight
Subscribers: dblaikie, llvm-commits
Differential Revision: http://reviews.llvm.org/D21037
llvm-svn: 272792
This uses the "runImpl" approach to share code with the old PM.
Porting to the new PM meant abandoning the anonymous namespace enclosing
most of SLPVectorizer.cpp which is a bit of a bummer (but not a big deal
compared to having to pull the pass class into a header which the new PM
requires since it calls the constructor directly).
llvm-svn: 272766
Summary:
... when the offset is not statically known.
Prioritize addresses relative to the stack pointer in the stackmap, but
fallback gracefully to other modes of addressing if the offset to the
stack pointer is not a known constant.
Patch by Oscar Blumberg!
Reviewers: sanjoy
Subscribers: llvm-commits, majnemer, rnk, sanjoy, thanm
Differential Revision: http://reviews.llvm.org/D21259
llvm-svn: 272756
Nearly all the changes to this pass have been done while maintaining and
updating other parts of LLVM. LLVM has had another pass, SROA, which
has superseded ScalarReplAggregates for quite some time.
Differential Revision: http://reviews.llvm.org/D21316
llvm-svn: 272737
If a local_unnamed_addr attribute is attached to a global, the address
is known to be insignificant within the module. It is distinct from the
existing unnamed_addr attribute in that it only describes a local property
of the module rather than a global property of the symbol.
This attribute is intended to be used by the code generator and LTO to allow
the linker to decide whether the global needs to be in the symbol table. It is
possible to exclude a global from the symbol table if three things are true:
- This attribute is present on every instance of the global (which means that
the normal rule that the global must have a unique address can be broken without
being observable by the program by performing comparisons against the global's
address)
- The global has linkonce_odr linkage (which means that each linkage unit must have
its own copy of the global if it requires one, and the copy in each linkage unit
must be the same)
- It is a constant or a function (which means that the program cannot observe that
the unique-address rule has been broken by writing to the global)
Although this attribute could in principle be computed from the module
contents, LTO clients (i.e. linkers) will normally need to be able to compute
this property as part of symbol resolution, and it would be inefficient to
materialize every module just to compute it.
See:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160509/356401.html
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160516/356738.html
for earlier discussion.
Part of the fix for PR27553.
Differential Revision: http://reviews.llvm.org/D20348
llvm-svn: 272709
This reverts commit 879139e1c6577b09df52de56a6bab856a19ed185.
This was committed accidentally when I blindly typed git svn
dcommit instead of the command to generate a patch.
llvm-svn: 272693
We move the loop rotate functions in a separate class to avoid passing multiple
parameters to each function. This cleanup will help with further development of
loop rotation. NFC.
Patch written by Aditya Kumar and Sebastian Pop.
Differential Revision: http://reviews.llvm.org/D21311
llvm-svn: 272672
What happened here is that in the new PM there is a bunch of new copying
(actually, moving) and so this reads the HasProfileData member in
situations where it used to not be read.
It used to only be read strictly in the "runOnFunction" method and its
callees, where it *is* initialized (even after my patch).
So this ends up being benign as far as functional behavior of the
compiler (since we set HasProfileData in the "runImpl" method before we
ever make decisions based on it).
It's awesome that UBSan caught this. It highlights one more thing to
watch out for when porting passes.
Sanitizer bot log was:
-- Testing: 17049 tests, 32 threads --
Testing: 0 .. 10.. 20.. 30.. 40.. 50.. 60.. 70.. 80..
FAIL: LLVM :: Transforms/JumpThreading/thread-loads.ll (15184 of 17049)
******************** TEST 'LLVM :: Transforms/JumpThreading/thread-loads.ll' FAILED ********************
Script:
--
/mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/opt < /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll -jump-threading -S | /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/FileCheck /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll
/mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/opt < /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll -passes=jump-threading -S | /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/FileCheck /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll
--
Exit Code: 2
Command Output (stderr):
--
/mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/Transforms/Scalar/JumpThreading.h:90:57: runtime error: load of value 136, which is not a valid value for type 'bool'
#0 0x2c33ba1 in llvm::JumpThreadingPass::JumpThreadingPass(llvm::JumpThreadingPass&&) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/Transforms/Scalar/JumpThreading.h:90:57
#1 0x2bc88e4 in void llvm::PassManager<llvm::Function>::addPass<llvm::JumpThreadingPass>(llvm::JumpThreadingPass) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/IR/PassManager.h:282:40
#2 0x2bb2682 in llvm::PassBuilder::parseFunctionPassName(llvm::PassManager<llvm::Function>&, llvm::StringRef) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/lib/Passes/PassRegistry.def:133:1
#3 0x2bb4914 in llvm::PassBuilder::parseFunctionPassPipeline(llvm::PassManager<llvm::Function>&, llvm::StringRef&, bool, bool) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/lib/Passes/PassBuilder.cpp:489:12
#4 0x2bb6f81 in llvm::PassBuilder::parsePassPipeline(llvm::PassManager<llvm::Module>&, llvm::StringRef, bool, bool) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/lib/Passes/PassBuilder.cpp:674:10
#5 0x986690 in llvm::runPassPipeline(llvm::StringRef, llvm::LLVMContext&, llvm::Module&, llvm::TargetMachine*, llvm::tool_output_file*, llvm::StringRef, llvm::opt_tool::OutputKind, llvm::opt_tool::VerifierKind, bool, bool) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/tools/opt/NewPMDriver.cpp:85:8
#6 0x9af25e in main /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/tools/opt/opt.cpp:468:12
#7 0x7fd7e27dbf44 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21f44)
#8 0x960157 in _start (/mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/bin/opt+0x960157)
FileCheck error: '-' is empty.
FileCheck command line: /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/FileCheck /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll
--
********************
Testing: 0 .. 10.. 20.. 30.. 40.. 50.. 60.. 70.. 80.. 90..
Testing Time: 128.90s
********************
Failing Tests (1):
LLVM :: Transforms/JumpThreading/thread-loads.ll
Expected Passes : 16725
Expected Failures : 129
Unsupported Tests : 194
Unexpected Failures: 1
llvm-svn: 272616
The need for all these Lookup* functions is just because of calls to
getAnalysis inside helper methods (i.e. not at the top level of the
runOnFunction method). They should be straightforward to clean up when
the old PM is gone.
llvm-svn: 272615
This reverts commit r272603 and adds a fix.
Big thanks to Davide for pointing me at r216244 which gives some insight
into how to fix this VS2013 issue. VS2013 can't synthesize a move
constructor. So the fix here is to add one explicitly to the
JumpThreadingPass class.
llvm-svn: 272607
I've tested this locally with VS2015 and there are no issues there,
so this might be a VS2013 specific issue.
Thanks to Davide for the suggested fix.
llvm-svn: 272601
This follows the approach in r263208 (for GVN) pretty closely (see the sketch after this list):
- move the bulk of the body of the function to the new PM class.
- expose a runImpl method on the new-PM class that takes the IRUnitT and
pointers/references to any analyses and use that to implement the
old-PM class.
- use a private namespace in the header for stuff that used to be file
scope
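A standalone miniature of the pattern (the stand-in types and names below are hypothetical, not the real LLVM classes):
```cpp
#include <iostream>

struct Function { const char *Name; };
struct DominatorTree {}; // stand-in for an analysis both pass managers provide

// The shared body: both pass-manager front ends funnel into this.
static bool runImpl(Function &F, DominatorTree &DT) {
  std::cout << "transforming " << F.Name << "\n";
  return true; // "Changed"
}

struct NewPMPass {      // new PM: constructed directly, run() is the entry point
  bool run(Function &F, DominatorTree &DT) { return runImpl(F, DT); }
};

struct LegacyPass {     // old PM: thin wrapper around the same runImpl
  bool runOnFunction(Function &F) {
    DominatorTree DT;   // would come from getAnalysis<...>() in reality
    return runImpl(F, DT);
  }
};

int main() {
  Function F{"foo"};
  DominatorTree DT;
  NewPMPass().run(F, DT);
  LegacyPass().runOnFunction(F);
}
```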
llvm-svn: 272597
This is a bit gnarly since LVI is maintaining its own cache.
I think this port could be somewhat cleaner, but I'd rather not spend
too much time on it while we still have the old pass hanging around and
limiting how much we can clean things up.
Once the old pass is gone it will be easier (less time spent) to clean
it up anyway.
This is the last dependency needed for porting JumpThreading which I'll
do in a follow-up commit (there's no printer pass for LVI or anything to
test it, so porting a pass that depends on it seems best).
I've been mostly following:
r269370 / D18834 which ported Dependence Analysis
r268601 / D19839 which ported BPI
llvm-svn: 272593
Differential Revision: http://reviews.llvm.org/D19842
Corresponding clang patch: http://reviews.llvm.org/D19843
Re-commit after addressing issues with generating too many warnings on Windows and ASan test failures
Patch by Eric Niebler
llvm-svn: 272555
The MRRC/MRRC2 instructions write to two registers. The
intrinsic definition returns a single uint64_t to
represent the write; this is a compact way of
representing a write to two 32-bit registers.
The alternative might have been to return a
struct of two uint32_t's, but that isn't as nice.
Differential Revision:
llvm-svn: 272544
This used to be free, but copying and moving DebugLocs became expensive
after the metadata rewrite. Passing by reference eliminates a ton of
track/untrack operations. No functionality change intended.
llvm-svn: 272512
Below are my super rough notes when porting. They can probably serve as
a basic guide for porting other passes to the new PM. As I port more
passes I'll expand and generalize this and make a proper
docs/HowToPortToNewPassManager.rst document. There is also missing
documentation for general concepts and API's in the new PM which will
require some documentation.
Once there is proper documentation in place we can put up a list of
passes that have to be ported and game-ify/crowdsource the rest of the
porting (at least of the middle end; the backend is still unclear).
I will however be taking personal responsibility for ensuring that the
LLD/ELF LTO pipeline is ported in a timely fashion. The remaining passes
to be ported are (do something like
`git grep "<the string in the bullet point below>"` to find the pass):
General Scalar:
[ ] Simplify the CFG
[ ] Jump Threading
[ ] MemCpy Optimization
[ ] Promote Memory to Register
[ ] MergedLoadStoreMotion
[ ] Lazy Value Information Analysis
General IPO:
[ ] Dead Argument Elimination
[ ] Deduce function attributes in RPO
Loop stuff / vectorization stuff:
[ ] Alignment from assumptions
[ ] Canonicalize natural loops
[ ] Delete dead loops
[ ] Loop Access Analysis
[ ] Loop Invariant Code Motion
[ ] Loop Vectorization
[ ] SLP Vectorizer
[ ] Unroll loops
Devirtualization / CFI:
[ ] Cross-DSO CFI
[ ] Whole program devirtualization
[ ] Lower bitset metadata
CGSCC passes:
[ ] Function Integration/Inlining
[ ] Remove unused exception handling info
[ ] Promote 'by reference' arguments to scalars
Please let me know if you are interested in working on any of the passes
in the above list (e.g. reply to the post-commit thread for this patch).
I'll probably be tackling "General Scalar" and "General IPO" first FWIW.
Steps as I port "Deduce function attributes in RPO"
---------------------------------------------------
(note: if you are doing any work based on these notes, please leave a
note in the post-commit review thread for this commit with any
improvements / suggestions / incompleteness you ran into!)
Note: "Deduce function attributes in RPO" is a module pass.
1. Do preparatory refactoring.
In this case all I had to do was to pull out a static helper (r272503).
(TODO: give more advice here e.g. if pass holds state or something)
2. Rename the old pass class.
llvm/lib/Transforms/IPO/FunctionAttrs.cpp
Rename class ReversePostOrderFunctionAttrs -> ReversePostOrderFunctionAttrsLegacyPass
in preparation for adding a class ReversePostOrderFunctionAttrs as the pass in the new PM.
(edit: actually wait what? The new class name will be
ReversePostOrderFunctionAttrsPass, so it doesn't conflict. So this step is
sort of useless churn).
llvm/include/llvm/InitializePasses.h
llvm/lib/LTO/LTOCodeGenerator.cpp
llvm/lib/Transforms/IPO/IPO.cpp
llvm/lib/Transforms/IPO/FunctionAttrs.cpp
Rename initializeReversePostOrderFunctionAttrsPass -> initializeReversePostOrderFunctionAttrsLegacyPassPass
(note that the "PassPass" thing falls out of `s/ReversePostOrderFunctionAttrs/ReversePostOrderFunctionAttrsLegacyPass/`)
Note that the INITIALIZE_PASS macro is what creates this identifier name, so renaming the class requires this renaming too.
Note that createReversePostOrderFunctionAttrsPass does not need to be
renamed since its name is not generated from the class name.
3. Add the new PM pass class.
In the new PM all passes need to have their
declaration in a header somewhere, so you will often need to add a header.
In this case
llvm/include/llvm/Transforms/IPO/FunctionAttrs.h is already there because
PostOrderFunctionAttrsPass was already ported.
The file-level comment from the .cpp file can be used as the file-level
comment for the new header. You may want to tweak the wording slightly
from "this file implements" to "this file provides" or similar.
Add declaration for the new PM pass in this header:
class ReversePostOrderFunctionAttrsPass
    : public PassInfoMixin<ReversePostOrderFunctionAttrsPass> {
public:
  PreservedAnalyses run(Module &M, AnalysisManager<Module> &AM);
};
Its name should end with `Pass` for consistency (note that this doesn't
collide with the names of most old PM passes). E.g. call it
`<name of the old PM pass>Pass`.
Also, move the doxygen comment from the old PM pass to the declaration of
this class in the header.
Also, include the declaration for the new PM class
`llvm/Transforms/IPO/FunctionAttrs.h` at the top of the file (in this case,
it was already done when the other pass in this file was ported).
Now define the `run` method for the new class.
The main things here are:
a) Use AM.getResult<...>(M) to get results instead of `getAnalysis<...>()`
b) If the old PM pass would have returned "false" (i.e. `Changed ==
false`), then you should return PreservedAnalyses::all();
c) In the old PM getAnalysisUsage method, observe the calls
`AU.addPreserved<...>();`.
In the case `Changed == true`, for each preserved analysis you should
call `PA.preserve<...>()` on a PreservedAnalyses object and return it.
E.g.:
  PreservedAnalyses PA;
  PA.preserve<CallGraphAnalysis>();
  return PA;
Note that calls to skipModule/skipFunction are not supported in the new PM
currently, so optnone and optimization bisect support do not work. You can
just drop those calls for now.
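Putting a), b) and c) together, the run method for this example looks roughly like the sketch below (schematic rather than buildable in isolation; the helper name is hypothetical):
```cpp
PreservedAnalyses
ReversePostOrderFunctionAttrsPass::run(Module &M, AnalysisManager<Module> &AM) {
  auto &CG = AM.getResult<CallGraphAnalysis>(M); // a) results via the manager
  if (!deduceFunctionAttrsInRPO(M, CG))          // hypothetical shared helper
    return PreservedAnalyses::all();             // b) nothing changed
  PreservedAnalyses PA;                          // c) declare what survives
  PA.preserve<CallGraphAnalysis>();
  return PA;
}
```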
4. Add the pass to the new PM pass registry to make it available in opt.
In llvm/lib/Passes/PassBuilder.cpp add a #include for your header.
`#include "llvm/Transforms/IPO/FunctionAttrs.h"`
In this case there is already an include (from when
PostOrderFunctionAttrsPass was ported).
Add your pass to llvm/lib/Passes/PassRegistry.def
In this case, I added
`MODULE_PASS("rpo-functionattrs", ReversePostOrderFunctionAttrsPass())`
The string is from the `INITIALIZE_PASS*` macros used in the old pass
manager.
Then choose a test that uses the pass and use the new PM `-passes=...` to
run it.
E.g. in this case there is a test that does:
; RUN: opt < %s -basicaa -functionattrs -rpo-functionattrs -S | FileCheck %s
I have added the line:
; RUN: opt < %s -aa-pipeline=basic-aa -passes='require<targetlibinfo>,cgscc(function-attrs),rpo-functionattrs' -S | FileCheck %s
The `-aa-pipeline=basic-aa` and
`require<targetlibinfo>,cgscc(function-attrs)` are what is needed to run
functionattrs in the new PM (note that in the new PM "functionattrs"
becomes "function-attrs" for some reason). This is just pulled from
`readattrs.ll` which contains the change from when functionattrs was ported
to the new PM.
Adding rpo-functionattrs causes the pass that was just ported to run.
llvm-svn: 272505
Summary: This also deprecates the get-attribute function family.
Reviewers: Wallbraker, whitequark, joker.eph, echristo, rafael, jyknight
Subscribers: axw, joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19181
llvm-svn: 272504
This shouldn't have any functional difference, but it appears to be the
pattern used for other methods on DynamicLibrary, and it should avoid
the -Wpedantic warning on one of the build bots about the direct
reinterpret_cast.
llvm-svn: 272461
Adds a MachineFunctionPass that scans the body to find calls, and
updates the register mask with the one saved by the
RegUsageInfoCollector analysis in PhysicalRegisterUsageInfo.
Patch by Vivek Pandya <vivekvpandya@gmail.com>
Differential Revision: http://reviews.llvm.org/D21180
llvm-svn: 272414
The costs are somewhat hand-wavy, but should be much closer to the truth
than what we get from BasicTTI.
Differential Revision: http://reviews.llvm.org/D21156
llvm-svn: 272406
Add an option to enable the analysis of MachineFunction register
usage to extract the list of clobbered registers.
When enabled, the CodeGen order is changed to be bottom up on the Call
Graph.
The analysis is split in two parts, RegUsageInfoCollector is the
MachineFunction Pass that runs post-RA and collect the list of
clobbered registers to produce a register mask.
An immutable pass, RegisterUsageInfo, stores the RegMask produced by
RegUsageInfoCollector and keeps it available. A future transformation
pass will use this information to update every call site after
instruction selection.
Patch by Vivek Pandya <vivekvpandya@gmail.com>
Differential Revision: http://reviews.llvm.org/D20769
llvm-svn: 272403
This reapplies commit r272385 with a fix. The build was failing when compiled
with gcc, but not with clang. With the fix, we now get the data layout from the
current TTI implementation, which will hopefully solve the issue.
llvm-svn: 272395
This patch refines the default cost for interleaved load groups having gaps. If
a load group has gaps, the legalized instructions corresponding to the unused
elements will be dead. Thus, we don't need to account for them in the cost
model. Instead, we only need to account for the fraction of legalized loads
that will actually be used.
Differential Revision: http://reviews.llvm.org/D20873
llvm-svn: 272385
This is the next step towards being able to write PDBs.
MemoryBuffer is immutable, and StreamInterface is our replacement
which can be any combination of read-only, read-write, or write-only
depending on the particular implementation.
The one place where we were creating a PDBFile (in RawSession) is
updated to subclass ByteStream with a simple adapter that holds
a MemoryBuffer, and initializes the superclass with the buffer's
array, so that all the functionality of ByteStream works
transparently.
llvm-svn: 272370
This adds method and tests for writing to a PDB stream. With
this, even a PDB stream which is discontiguous can be treated
as a sequential stream of bytes for the purposes of writing.
Reviewed By: ruiu
Differential Revision: http://reviews.llvm.org/D21157
llvm-svn: 272369
Previously we could run only one machine pass with the run-pass option.
With this patch, we can now specify several passes with several run-pass
options (or just one option with a list of comma separated passes) and
llc will build the related pipeline.
This is great to test the interaction of two passes that are not
necessarily next to each other in the pipeline, or play with pass
ordering.
Now, we should be at parity with opt for the flexibility of running
passes.
Note: I also moved the run pass option from CommandFlags.h to llc.cpp
because, really, this is needed only there!
llvm-svn: 272356
Instead of directly using MaxFunctionCount and function entry count to determine callee hotness, use the isHotFunction/isColdFunction methods provided by ProfileSummaryInfo.
Differential revision: http://reviews.llvm.org/D21045
llvm-svn: 272321
Summary:
Currently clang emits these instructions via inline (volatile) asm in
the CUDA headers. Switching to intrinsics will let the optimizer reason
across calls to these intrinsics.
Reviewers: tra
Subscribers: llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D21160
llvm-svn: 272298
Summary:
__syncthreads, which corresponds to bar.sync 0, is already convergent.
This makes the more general bar.sync n likewise convergent.
Reviewers: tra
Subscribers: llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D21161
llvm-svn: 272295
looking for it along $PATH. This allows installs of LLVM tools outside of
$PATH to find the symbolizer and produce pretty backtraces if they crash.
llvm-svn: 272232
Refactor the code so that we do not compute the end iterator for the range
of new virtual registers for a given operand in two different places.
Although this refactoring was intended as NFC, this is not the case
because it actually fixes a bug where we were returning a range off by 1
(too long). Right now, this could not result in an actual bug because we
were accessing this range via the BreakDown size of the related operand.
llvm-svn: 272208
Summary:
Now DISubroutineType has a 'cc' field which should be a DW_CC_ enum. If
it is present and non-zero, the backend will emit it as a
DW_AT_calling_convention attribute. On the CodeView side, we translate
it to the appropriate enum for the LF_PROCEDURE record.
I added a new LLVM vendor specific enum to the list of DWARF calling
conventions. DWARF does not appear to attempt to standardize these, so I
assume it's OK to do this until we coordinate with GCC on how to emit
vectorcall convention functions.
Reviewers: dexonsmith, majnemer, aaboud, amccarth
Subscribers: mehdi_amini, llvm-commits
Differential Revision: http://reviews.llvm.org/D21114
llvm-svn: 272197
Absence of may-unwind calls is not enough to guarantee that a
UB-generating use of an add-rec poison in the loop latch will actually
cause UB. We also need to guard against calls that terminate the thread
or infinite loop themselves.
This partially addresses PR28012.
llvm-svn: 272181
Now, the target will be able to provide its own implementation to remap
an instruction. This opens the way to crazier optimizations, but to
begin with, we will be able to handle something other than the
default mapping.
llvm-svn: 272165
The generic implementation stated that all copies were free, which is
unlikely. Now, only the copies within the same register bank are free.
We assume they will get coalesced.
llvm-svn: 272085
The cost of a copy may be different based on how many bits we have to
copy around. E.g., a 8-bit copy may be different than a 32-bit copy.
llvm-svn: 272084
We were previously failing to do this and as a result failing to drop
attached metadata.
Not sure if there's a good way to test this. An in-progress patch exposed this
problem by allocating a GlobalVariable at the same address as a previously
allocated GlobalVariable.
Differential Revision: http://reviews.llvm.org/D21109
llvm-svn: 272077
Summary:
This patch is adding support for the MSVC buffer security check implementation
The buffer security check is turned on with the '/GS' compiler switch.
* https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
* To be added to clang here: http://reviews.llvm.org/D20347
Some overview of buffer security check feature and implementation:
* https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
* http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
* http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html
For the following example:
```
int example(int offset, int index) {
  char buffer[10];
  memset(buffer, 0xCC, index);
  return buffer[index];
}
```
The MSVC compiler is adding these instructions to perform stack integrity check:
```
push ebp
mov ebp,esp
sub esp,50h
[1] mov eax,dword ptr [__security_cookie (01068024h)]
[2] xor eax,ebp
[3] mov dword ptr [ebp-4],eax
push ebx
push esi
push edi
mov eax,dword ptr [index]
push eax
push 0CCh
lea ecx,[buffer]
push ecx
call _memset (010610B9h)
add esp,0Ch
mov eax,dword ptr [index]
movsx eax,byte ptr buffer[eax]
pop edi
pop esi
pop ebx
[4] mov ecx,dword ptr [ebp-4]
[5] xor ecx,ebp
[6] call @__security_check_cookie@4 (01061276h)
mov esp,ebp
pop ebp
ret
```
The instrumentation above is:
* [1] is loading the global security canary,
* [3] is storing the locally computed ([2]) canary to the guard slot,
* [4] is loading the guard slot and ([5]) re-computing the global canary,
* [6] is validating the resulting canary with '__security_check_cookie' and performs error handling.
Overview of the current stack-protection implementation:
* lib/CodeGen/StackProtector.cpp
* There is a default stack-protection implementation applied on intermediate representation.
* The target can overload 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
* An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
* Basic Blocks are added to every instrumented function to receive the code for handling stack guard validation and errors handling.
* Guard manipulation and comparison are added directly to the intermediate representation.
* lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
* lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
* There is an implementation that adds instrumentation during instruction selection (for better handling of sibling calls).
* see long comment above 'class StackProtectorDescriptor' declaration.
* The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation. (note: getIRStackGuard MUST be nullptr).
* 'getSDagStackGuard' returns the appropriate stack guard (security cookie)
* The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.
* include/llvm/Target/TargetLowering.h
* Contains a function to retrieve the default Guard 'Value'; should be overridden by each target to select which implementation is used and provide the Guard 'Value'.
* lib/Target/X86/X86ISelLowering.cpp
* Contains the x86 specialisation; Guard 'Value' used by the SelectionDAG algorithm.
Function-based Instrumentation:
* The MSVC compiler doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
* To support function-based instrumentation, this patch is
* adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h),
* If provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the prologue.
* modifying (SelectionDAGISel.cpp) to avoid producing basic blocks used for inline instrumentation,
* generating the function-based instrumentation during the ISEL pass (SelectionDAGBuilder.cpp),
* if FastISEL (not SelectionDAG), using the fallback which relies on the same function-based implementation over the intermediate representation (StackProtector.cpp).
Modifications
* adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
* adding support for function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)
Results
* IR generated instrumentation:
```
clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
```
```
*** Final LLVM Code input to ISel ***
; Function Attrs: nounwind sspstrong
define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
entry:
%StackGuardSlot = alloca i8* <<<-- Allocated guard slot
%0 = call i8* @llvm.stackguard() <<<-- Loading Stack Guard value
call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot) <<<-- Prologue intrinsic call (store to Guard slot)
%index.addr = alloca i32, align 4
%offset.addr = alloca i32, align 4
%buffer = alloca [10 x i8], align 1
store i32 %index, i32* %index.addr, align 4
store i32 %offset, i32* %offset.addr, align 4
%arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
%1 = load i32, i32* %index.addr, align 4
call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
%2 = load i32, i32* %index.addr, align 4
%arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
%3 = load i8, i8* %arrayidx, align 1
%conv = sext i8 %3 to i32
%4 = load volatile i8*, i8** %StackGuardSlot <<<-- Loading Guard slot
call void @__security_check_cookie(i8* %4) <<<-- Epilogue function-based check
ret i32 %conv
}
```
* SelectionDAG generated instrumentation:
```
clang-cl /GS test.cc /O1 /c /FA
```
```
"?example@@YAHHH@Z": # @"\01?example@@YAHHH@Z"
# BB#0: # %entry
pushl %esi
subl $16, %esp
movl ___security_cookie, %eax <<<-- Loading Stack Guard value
movl 28(%esp), %esi
movl %eax, 12(%esp) <<<-- Store to Guard slot
leal 2(%esp), %eax
pushl %esi
pushl $204
pushl %eax
calll _memset
addl $12, %esp
movsbl 2(%esp,%esi), %esi
movl 12(%esp), %ecx <<<-- Loading Guard slot
calll @__security_check_cookie@4 <<<-- Epilogue function-based check
movl %esi, %eax
addl $16, %esp
popl %esi
retl
```
Reviewers: kcc, pcc, eugenis, rnk
Subscribers: majnemer, llvm-commits, hans, thakis, rnk
Differential Revision: http://reviews.llvm.org/D20346
llvm-svn: 272053
This allows mapping of any endian-aware type whose underlying
type (e.g. uint32_t) provides a ScalarTraits specialization.
Reviewed by: majnemer
Differential Revision: http://reviews.llvm.org/D21057
llvm-svn: 272049
Currently the only way to use the (V)MOVNTDQA nontemporal vector load instructions is through the int_x86_sse41_movntdqa style builtins.
This patch adds support for lowering nontemporal loads from general IR, allowing us to remove the movntdqa builtins in a future patch.
We currently still fold nontemporal loads into suitable instructions; we should probably look at removing this (and nontemporal stores as well), or at least make the target's folding implementation aware that it's dealing with a nontemporal memory transaction.
There is also an issue that VMOVNTDQA only acts on 128-bit vectors on pre-AVX2 hardware - so currently a normal ymm load is still used on AVX1 targets.
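For reference, the builtin route that exists today, as a minimal sketch (assumes SSE4.1 and the usual x86 intrinsic headers):
```cpp
#include <smmintrin.h> // SSE4.1: _mm_stream_load_si128 lowers to (V)MOVNTDQA

// Nontemporal 16-byte load through the intrinsic. With this patch the same
// instruction can also be selected for ordinary IR loads carrying
// !nontemporal metadata, instead of only through the builtin.
__m128i load_nt16(void *P) {
  // The classic intrinsic signature takes a non-const __m128i pointer.
  return _mm_stream_load_si128(reinterpret_cast<__m128i *>(P));
}
```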
Differential Revision: http://reviews.llvm.org/D20965
llvm-svn: 272010
In order to efficiently write PDBs, we need to be able to make a
StreamWriter class similar to a StreamReader, which can transparently deal
with writing to discontiguous streams, and we need to use this for all
writing, similar to how we use StreamReader for all reading.
Most discontiguous streams are the typical numbered streams that appear in
a PDB file and are described by the directory, but the exception to this,
that until now has been parsed by hand, is the directory itself.
MappedBlockStream works by querying the directory to find out which blocks
a stream occupies and various other things, so naturally the same logic
could not possibly work to describe the blocks that the directory itself
resided on.
To solve this, I've introduced an abstraction IPDBStreamData, which allows
the client to query for the list of blocks occupied by the stream, as well
as the stream length. I provide two implementations of this: one which
queries the directory (for indexed streams), and one which queries the
super block (for the directory stream).
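The rough shape of that abstraction, as an illustrative standalone sketch (the method names are made up, not the actual interface):
```cpp
#include <cstdint>
#include <vector>

// Both kinds of stream answer the same two questions, so the block-mapping
// logic in MappedBlockStream can be written once against this interface.
class StreamDataSource {
public:
  virtual ~StreamDataSource() = default;
  virtual uint32_t getLength() const = 0;                 // bytes in the stream
  virtual std::vector<uint32_t> getBlockList() const = 0; // blocks it occupies
};

// One implementation reads the directory (numbered/indexed streams); another
// reads the super block (the directory stream itself).
```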
This has the side benefit of vastly simplifying the code to parse the
directory. Whereas before a mini state machine was rolled by hand, now we
simply use FixedStreamArray to read out the stream sizes, then build a
vector of FixedStreamArrays for the stream map, all in just a few lines of
code.
Reviewed By: ruiu
Differential Revision: http://reviews.llvm.org/D21046
llvm-svn: 271982
The data structure in the new FPO stream is described in the
PE/COFF spec. There is one record per function if the frame pointer
is omitted.
Differential Revision: http://reviews.llvm.org/D20999
llvm-svn: 271926
Summary:
There are some rough corners, since the new pass manager doesn't have
(as far as I can tell) LoopSimplify and LCSSA, so I've updated the
tests to run them separately in the old pass manager in the lit tests.
We also don't have an equivalent for AU.setPreservesCFG() in the new
pass manager, so I've left a FIXME.
Reviewers: bogner, chandlerc, davide
Subscribers: sanjoy, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D20783
llvm-svn: 271846
libstdc++ (or in compilers, or somewhere, I can't track it down) that
causes unittests that use INITIALIZE_PASS to crash.
The analysis I've been able to do is that inside libstdc++'s
implementation of std::call_once, it uses pthread_once, and when that
returns an error code it throws std::system_error which then eventually
calls std::terminate.
Hopefully some of the folks who work on PPC can try to sort out what's
going on here. Until then, they'll have to use the fallback
implementation.
llvm-svn: 271821
CALL_ONCE_... macro in the legacy pass manager with the new
llvm::call_once facility.
Nothing changed since the last attempt in r271781 which I reverted in
r271788. At least one of the failures I saw was spurious, and I want to
make sure the other failures are real before I work around them -- they
appeared to only affect ppc64le and ppc64be.
Original commit message of r271781:
----
[LPM] Reinstate r271652 to replace the CALL_ONCE_... macro in the legacy
pass manager with the new llvm::call_once facility.
This reverts commit r271657 and re-applies r271652 with a fix to
actually work with arguments. In the original version, we just ended up
directly calling std::call_once via ADL because of the std::once_flag
argument. The llvm::call_once never worked with arguments. Now,
llvm::call_once is a variadic template that perfectly forwards
everything. As a part of this it had to move to the header and we use
a generic functor rather than an explicit function pointer. It would be
nice to use std::invoke here but we don't have it yet. That means
pointer to members won't work here, but that seems a tolerable
compromise.
I've also tested this by forcing the fallback path, so hopefully it
sticks this time.
----
Original commit message of r271652:
----
[LPM] Replace the CALL_ONCE_... macro in the legacy pass manager with
the new llvm::call_once facility.
This facility matches the standard APIs and when the platform supports
it actually directly uses the standard provided functionality. This is
both more efficient on some platforms and much more TSan friendly.
The only remaining user of the cas_flag and home-rolled atomics is the
fallback implementation of call_once. I have a patch that removes them
entirely, but it needs a Windows patch to land first.
This alone substantially cleans up the macros for the legacy pass
manager, and should subsume some of the work Mehdi was doing to clear
the path for TSan testing of ThinLTO, a really important step to have
reliable upstream testing of ThinLTO in all forms.
----
llvm-svn: 271800
There appears to be a strange exception thrown and crash using call_once
on a PPC build bot, and a *really* weird windows link error for
GCMetadata.obj. Still need to investigate the cause of both problems.
Original change summary:
[LPM] Reinstate r271652 to replace the CALL_ONCE_... macro in the legacy
pass manager with the new llvm::call_once facility.
llvm-svn: 271788
pass manager with the new llvm::call_once facility.
This reverts commit r271657 and re-applies r271652 with a fix to
actually work with arguments. In the original version, we just ended up
directly calling std::call_once via ADL because of the std::once_flag
argument. The llvm::call_once never worked with arguments. Now,
llvm::call_once is a variadic template that perfectly forwards
everything. As a part of this it had to move to the header and we use
a generic functor rather than an explicit function pointer. It would be
nice to use std::invoke here but we don't have it yet. That means
pointer to members won't work here, but that seems a tolerable
compromise.
I've also tested this by forcing the fallback path, so hopefully it
sticks this time.
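A minimal sketch of such a perfectly forwarding wrapper (illustrative only; the real llvm::call_once additionally carries a fallback path for platforms without a usable std::call_once):
```cpp
#include <mutex>
#include <utility>

namespace sketch {
using once_flag = std::once_flag;

// Variadic template that forwards the callable and its arguments on to
// std::call_once, so callers never accidentally pick up the std version
// through ADL with a different feature set.
template <typename Function, typename... Args>
void call_once(once_flag &Flag, Function &&F, Args &&... ArgList) {
  std::call_once(Flag, std::forward<Function>(F),
                 std::forward<Args>(ArgList)...);
}
} // namespace sketch
```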
Original commit message:
----
[LPM] Replace the CALL_ONCE_... macro in the legacy pass manager with
the new llvm::call_once facility.
This facility matches the standard APIs and when the platform supports
it actually directly uses the standard provided functionality. This is
both more efficient on some platforms and much more TSan friendly.
The only remaining user of the cas_flag and home-rolled atomics is the
fallback implementation of call_once. I have a patch that removes them
entirely, but it needs a Windows patch to land first.
This alone substantially cleans up the macros for the legacy pass
manager, and should subsume some of the work Mehdi was doing to clear
the path for TSan testing of ThinLTO, a really important step to have
reliable upstream testing of ThinLTO in all forms.
llvm-svn: 271781
This commit adds a convenience is_contained() function
which checks if an element exists in a container. It is part of a larger
series of patches adding an MPI checker to the clang static analyzer.
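A minimal standalone sketch of the helper (the real one lives in llvm/ADT/STLExtras.h; this version just shows the idea):
```cpp
#include <algorithm>
#include <iterator>
#include <vector>

template <typename R, typename E>
bool is_contained(R &&Range, const E &Element) {
  return std::find(std::begin(Range), std::end(Range), Element) !=
         std::end(Range);
}

// Usage: replaces the usual std::find(...) != end(...) boilerplate.
bool hasAnswer(const std::vector<int> &V) { return is_contained(V, 42); }
```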
Reviewers: dblaikie, bkramer
A patch by Alexander Droste!
Differential Revision: http://reviews.llvm.org/D16053
llvm-svn: 271757
This is currently used by clang to lock access to modules; improve the
error message so that clang can produce better output for locking
errors.
rdar://problem/26529101
Differential Revision: http://reviews.llvm.org/D20942
llvm-svn: 271755
My first attempt at this had an overly aggressive assert - chain nodes
will only be removed, but we could hit the assert if a non-chain node
was CSE'd (NodeToMatch, for instance).
This reapplies r271706 by reverting r271713 and fixing an assert.
Original message:
Avoid relying on UB by looking into deleted nodes for a marker value.
Instead, update the list of chain nodes as we go.
llvm-svn: 271733
Summary:
Previously we would try to load PDBs for every PE executable we tried to
symbolize. If that failed, we would fall back to DWARF. If there wasn't
any DWARF, we'd print mostly useless symbol information using the export
table.
With this change, we only try to load PDBs for executables that claim to
have them. If that fails, we can now print an error rather than falling
back silently. This should make it a lot easier to diagnose and fix
common symbolization issues, such as not having DIA or not having a PDB.
Reviewers: zturner, eugenis
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20982
llvm-svn: 271725
This only translates data members for now. Translating overloaded
methods is complicated, so I stopped short of doing that.
Reviewers: aaboud
Differential Revision: http://reviews.llvm.org/D20924
llvm-svn: 271680
new instruction to ARM and AArch64 targets and several system registers.
Patch by: Roger Ferrer Ibanez and Oliver Stannard
Differential Revision: http://reviews.llvm.org/D20282
llvm-svn: 271670
the new llvm::call_once facility.
This facility matches the standard APIs and when the platform supports
it actually directly uses the standard provided functionality. This is
both more efficient on some platforms and much more TSan friendly.
The only remaining user of the cas_flag and home-rolled atomics is the
fallback implementation of call_once. I have a patch that removes them
entirely, but it needs a Windows patch to land first.
This alone substantially cleans up the macros for the legacy pass
manager, and should subsume some of the work Mehdi was doing to clear
the path for TSan testing of ThinLTO, a really important step to have
reliable upstream testing of ThinLTO in all forms.
llvm-svn: 271652
formatted fancily.
I'm working on rewriting these macros to use the new call_once stuff,
but really want to have clang-format work on the edits, so just
re-baselining the entire file here. No changes other than clang-format.
llvm-svn: 271635
This patch begins adding support for lowering to the XOP VPERMIL2PD/VPERMIL2PS shuffle instructions - adding the X86ISD::VPERMIL2 opcode and cleaning up the usage.
The internal llvm intrinsics were assuming the shuffle mask operand was the same type as the float/double input operands (I guess to simplify the intrinsic definitions in X86InstrXOP.td to a single value type). These needed changing to integer types (matching the clang builtin and the AMD intrinsics definitions); an auto-upgrade path is added to convert old calls.
Mask decoding/target shuffle support will be added in future patches.
Differential Revision: http://reviews.llvm.org/D20049
llvm-svn: 271633
When printing line information and file checksums, we were printing
the file offset field from the struct header. This teaches
llvm-pdbdump how to turn those numbers into the filename. In the
case of file checksums, this is done by looking in the global
string table. In the case of line contributions, this is done
by indexing into the file names buffer of the DBI stream. Why
they use a different technique I don't know.
llvm-svn: 271630
To facilitate this, a couple of changes had to be made:
1. `ModuleSubstream` got moved from `DebugInfo/PDB` to
`DebugInfo/CodeView`, and various codeview related types are defined
there. It turns out `DebugInfo/CodeView/Line.h` already defines many of
these structures, but this is really old code that is not endian aware,
doesn't interact well with `StreamInterface`, and is not very helpful for
getting stuff out of a PDB. Eventually we should migrate the old readobj
`COFFDumper` code to these new structures, or at least merge their
functionality somehow.
2. A `ModuleSubstream` visitor is introduced. Depending on where your
module substream array comes from, different subsets of record types can
be expected. We are already hand parsing these substream arrays in many
places especially in `COFFDumper.cpp`. In the future we can migrate these
paths to the visitor as well, which should reduce a lot of code in
`COFFDumper.cpp`.
Differential Revision: http://reviews.llvm.org/D20936
Reviewed By: ruiu, majnemer
llvm-svn: 271621
This commit adds round tripping for MachO symbol data. Symbols are entries in the name list, that contain offsets into the string table which is at the end of the __LINKEDIT segment.
llvm-svn: 271604
This first pass only splits apart the records and dumps the line
info kinds and binary data. Subsequent patches will parse out
the binary data into more useful information and dump it in
detail.
llvm-svn: 271576
StreamRef was designed to be a thin wrapper over an abstract
stream interface that could itself be treated the same as any
other stream interface. For this reason, it inherited publicly
from StreamInterface, and stored a StreamInterface* internally.
But StreamRef was also designed to be lightweight and easily
copyable, similar to ArrayRef. This led to two misuses of
the classes.
1) When creating a StreamRef A from another StreamRef B, it was
possible to end up with A storing a pointer to B, even when
B was a temporary object, leading to use after free.
2) The above situation could be repeated ad nauseum, so that
A stores a pointer to B, which itself stores a pointer to
another StreamRef C, and so on, creating an
unnecessary level of nesting.
This patch removes the public inheritance relationship between
StreamRef and StreamInterface, making it so that we can never
accidentally convert a StreamRef to a StreamInterface.
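As a rough illustration of misuse (1), using hypothetical stand-in classes rather than the real StreamRef/StreamInterface:
```
#include <cstdio>

// Hypothetical stand-ins for the pre-patch StreamInterface/StreamRef pair.
struct StreamInterfaceLike {
  virtual ~StreamInterfaceLike() = default;
  virtual unsigned getLength() const = 0;
};

struct BufferStream : StreamInterfaceLike {
  explicit BufferStream(unsigned Len) : Len(Len) {}
  unsigned getLength() const override { return Len; }
  unsigned Len;
};

// Like the old StreamRef: it *is* a StreamInterface and it *holds* one.
struct RefLike : StreamInterfaceLike {
  explicit RefLike(const StreamInterfaceLike &S) : Stream(&S) {}
  unsigned getLength() const override { return Stream->getLength(); }
  const StreamInterfaceLike *Stream;  // non-owning pointer
};

// A consumer that keeps a ref around for later use.
struct Consumer {
  explicit Consumer(const StreamInterfaceLike &S) : Data(S) {}
  RefLike Data;
};

int main() {
  BufferStream Buf(64);
  // Misuse (1): the RefLike temporary binds to the StreamInterface&
  // parameter, Consumer stores a pointer to the temporary, and the
  // temporary dies at the end of this full expression.
  Consumer C{RefLike(Buf)};
  std::printf("%u\n", C.Data.getLength());  // use-after-free
  return 0;
}
```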
llvm-svn: 271570
D19271.
Previous attempt was broken by NetBSD, so in this version I've made the
fallback path generic rather than Windows specific and sent both Windows
and NetBSD to it.
I've also re-formatted the code some, and used an exact clone of the
code in PassSupport.h for doing manual call-once using our atomics
rather than rolling a new one.
If this sticks, we can replace the fallback path for Windows with
a Windows-specific implementation that is more reliable.
Original commit message:
This patch adds an llvm_call_once which is a wrapper around
std::call_once on platforms where it is available and devoid
of bugs. The patch also migrates the ManagedStatic mutex to
be allocated using llvm_call_once.
These changes are philosophically equivalent to the changes
added in r219638, which were reverted due to a hang on Win32
which was the result of a bug in the Windows implementation
of std::call_once.
Differential Revision: http://reviews.llvm.org/D5922
llvm-svn: 271558
Unlike other sections that can grow to any size, the COFF section header
stream has a maximum length, because each record is fixed size and the COFF
file format limits the maximum number of sections. So I decided to not
create a specific stream class for it. Instead, I added a member function
to DbiStream class which returns a vector of COFF headers.
Differential Revision: http://reviews.llvm.org/D20717
llvm-svn: 271557
This is part of an effort to shave allocations from APInt heavy paths. I'll
be moving many of the other operators to r-value references soon and this is
a step towards doing that without too much duplication.
Saves 15k allocations when doing 'opt -O2 verify-uselistorder.bc'.
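Roughly the r-value-reference shape being moved toward, sketched with an illustrative helper rather than the committed overloads:
```
#include "llvm/ADT/APInt.h"
#include <utility>

// Reuse the temporary's heap storage instead of allocating a fresh APInt
// for the result of the addition.
static llvm::APInt addToTemporary(llvm::APInt &&LHS, const llvm::APInt &RHS) {
  LHS += RHS;             // in-place add, no extra allocation
  return std::move(LHS);  // hand the existing storage back to the caller
}
```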
llvm-svn: 271556
Also fix slice wrappers drop_front and drop_back.
The unittests are pretty awkward, but do the job; alternatives
welcome!
..and yes, I do have ArrayRefs with more than 4 billion elements.
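The gist of the fix, sketched with a hypothetical view type rather than the actual ArrayRef diff: the slice helpers have to do their index arithmetic in size_t so that sizes above 2^32 do not truncate.
```
#include <cassert>
#include <cstddef>

// Minimal ArrayRef-like view, just to show the size_t arithmetic; the real
// ArrayRef lives in llvm/ADT/ArrayRef.h.
template <typename T> struct ViewLike {
  const T *Data;
  size_t Length;

  // slice(N, M) is the M elements starting at index N.
  ViewLike<T> slice(size_t N, size_t M) const {
    assert(N + M <= Length && "slice out of range");
    return {Data + N, M};
  }
  // Doing this arithmetic in size_t (not unsigned) keeps views with more
  // than 4 billion elements correct.
  ViewLike<T> drop_front(size_t N = 1) const { return slice(N, Length - N); }
  ViewLike<T> drop_back(size_t N = 1) const { return slice(0, Length - N); }
};
```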
llvm-svn: 271546
except for CompareAndSwap. That is the only one still being used
anywhere now that statistics have been moved onto std::atomic.
Also, add a warning to the header that we shouldn't introduce more uses
of these old style atomics and instead should be using C++11's
std::atomic facilities.
Really hoping that we can hammer out the last couple of users here and
replace them with something more localized and/or principled, but
figured this was a pretty good start. =]
Note that this patch will need to be reverted if r271504 needs to be
reverted as that removes the last user of these. However, the biggest
risk for that patch was MSVC 2013 and at least one bot has already
passed where it would have failed there. I've tested MSVC 2015 using
their web interfaces and other platforms seem fine, so I'm optimistic.
Differential Revision: http://reviews.llvm.org/D20901
llvm-svn: 271540
This directory is used to find if there is a PDB associated with an
executable. I plan to use this functionality to teach llvm-symbolizer
whether it should use DIA or DWARF to symbolize a given DLL.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D20885
llvm-svn: 271539
Summary:
If the target requests it, use empty spaces in the fixed and
callee-save stack area to allocate local stack objects.
AArch64: Change last callee-save reg stack object alignment instead of
size to leave a gap to take advantage of above change.
Reviewers: t.p.northover, qcolombet, MatzeB
Subscribers: rengolin, mcrosier, llvm-commits, aemerson
Differential Revision: http://reviews.llvm.org/D20220
llvm-svn: 271527
This patch removes the llvm intrinsics (V)CVTTPS2DQ and VCVTTPD2DQ truncation (round to zero) conversions and auto-upgrades to FP_TO_SINT calls instead.
Note: I looked at updating CVTTPD2DQ as well but this still requires a lot more work to correctly lower.
Differential Revision: http://reviews.llvm.org/D20860
llvm-svn: 271510
This removes usage of the hacky, incorrect, and TSan-unfriendly
home-grown atomics. It should actually be more efficient in some cases.
Based on our existing usage of <atomic>, all of this is portably
available AFAICT. One small challenge is initializing the statistic, but
I've tried a comparable sample out on MSVC (the most likely to complain
here) and it seems to work. Will have to watch the build bots of course.
llvm-svn: 271504
statistics.
Scaling statistics atomically doesn't make any sense anyways, and none
were using these. If you find yourself wanting to do this, you should
probably keep a local count that you scale and then apply that after
scaling to the shared statistic object.
llvm-svn: 271503
We only considered the length of the operation and the length of the
StreamRef without considering what it meant for the offset to be at a
non-zero position.
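The shape of the corrected check, sketched with assumed names rather than the actual StreamRef code:
```
#include <cstdint>

// Illustrative only: a view spans ViewLength bytes of an underlying stream.
// A read of Size bytes at Offset (relative to the view) is valid only if it
// fits inside the view itself; checking Size against the view length alone
// misses reads that start at a non-zero offset.
static bool readInBounds(uint32_t Offset, uint32_t Size, uint32_t ViewLength) {
  return Offset <= ViewLength && Size <= ViewLength - Offset;
}
```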
llvm-svn: 271496
... and merge into `Value::getPointerDereferenceableBytes`. This was
suggested by Artur Pilipenko in D20764 -- since we no longer allow loads
of unsized types, there is no need anymore to have this special logic.
llvm-svn: 271455
Add support for the new pass manager to MemorySSA pass.
Change MemorySSA to be computed eagerly upon construction.
Change MemorySSAWalker to be owned by the MemorySSA object that creates
it.
Reviewers: dberlin, george.burgess.iv
Subscribers: mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19664
llvm-svn: 271432
Summary:
Make sure that the SCEVExpander Builder insert point and any
saved/restored insert points are kept consistent (i.e. their Instruction
and BasicBlock match) when moving instructions in SCEVExpander.
This fixes an issue triggered by
http://reviews.llvm.org/D18001 [LSR] Create fewer redundant instructions.
Test case will be added in reapply commit of above change:
http://reviews.llvm.org/D18480 Reapply [LSR] Create fewer redundant instructions.
Reviewers: sanjoy
Subscribers: mzolotukhin, sanjoy, qcolombet, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D20703
llvm-svn: 271424
This patch extends CFLAA to recognize allocation functions such as
malloc, free, etc, so we can treat them more aggressively.
Patch by Jia Chen.
Differential Revision: http://reviews.llvm.org/D20776
llvm-svn: 271421
This will be necessary to allow the global merge pass to attach
multiple debug info metadata nodes to global variables once we reverse
the edge from DIGlobalVariable to GlobalVariable.
Differential Revision: http://reviews.llvm.org/D20414
llvm-svn: 271358
This tidies up some code that was manually constructing RuntimeDyld::SymbolInfo
instances from JITSymbols. It will save more mess in the future when
JITSymbol::getAddress is extended to return an Expected<TargetAddress> rather
than just a TargetAddress, since we'll be able to embed the error checking in
the conversion.
llvm-svn: 271350
This patch adds an IR, assembly and bitcode representation for metadata
attachments for globals. Future patches will port existing features to use
these new attachments.
Differential Revision: http://reviews.llvm.org/D20074
llvm-svn: 271348
Refactor LiveIntervals::renameDisconnectedComponents() to be a pass.
Also change the name to "RenameIndependentSubregs":
- renameDisconnectedComponents() worked on a MachineFunction at a time
so it is a natural candidate for a machine function pass.
- The algorithm is testable with a .mir test now.
- This also fixes a problem where the lazy renaming as part of the
MachineScheduler introduced IMPLICIT_DEF instructions after the number
of nodes in a region was counted, leading to a mismatch.
Differential Revision: http://reviews.llvm.org/D20507
llvm-svn: 271345
when the object is from a slice of a Mach-O Universal Binary, use something like
"foo.o (for architecture i386)" as part of the error message when expected.
Also fixed places in these tools that were ignoring object file errors from
MachOUniversalBinary::getAsObjectFile() when the code moved on to see if
the slice was an archive.
To do this MachOUniversalBinary::getAsObjectFile() and
MachOUniversalBinary::getObjectForArch() were changed from returning
ErrorOr<...> to Expected<...> then that was threaded up to its users.
Converting these interfaces to Expected<> from ErrorOr<> does involve
touching a number of places. To contain the changes for now the use of
errorToErrorCode() is still used in two places yet to be fully converted.
llvm-svn: 271332
Adds the method MCStreamer::EmitBinaryData, which is usually an alias
for EmitBytes. In the MCAsmStreamer case, it is overridden to emit hex
dump output like this:
.byte 0x0e, 0x00, 0x08, 0x10
.byte 0x03, 0x00, 0x00, 0x00
.byte 0x00, 0x00, 0x00, 0x00
.byte 0x00, 0x10, 0x00, 0x00
Also, when verbose asm comments are enabled, this patch prints the dump
output for each comment before its record, like this:
# ArgList (0x1000) {
# TypeLeafKind: LF_ARGLIST (0x1201)
# NumArgs: 0
# Arguments [
# ]
# }
.byte 0x06, 0x00, 0x01, 0x12
.byte 0x00, 0x00, 0x00, 0x00
This should make debugging easier and testing more convenient.
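A minimal sketch of how a producer might hand a serialized record to this hook (the wrapper and record buffer are assumptions, not part of the patch):
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/MC/MCStreamer.h"
#include <cstdint>

// Emit a serialized record. MCAsmStreamer renders this as the .byte hex
// dump shown above; object streamers treat it like EmitBytes.
static void emitRecord(llvm::MCStreamer &OS, llvm::ArrayRef<uint8_t> Record) {
  OS.EmitBinaryData(llvm::StringRef(
      reinterpret_cast<const char *>(Record.data()), Record.size()));
}
```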
Reviewers: aaboud
Subscribers: majnemer, zturner, amccarth, aaboud, llvm-commits
Differential Revision: http://reviews.llvm.org/D20711
llvm-svn: 271313
The MachO export trie is a serially encoded trie keyed by symbol name. This code parses the trie and preserves the structure so that it can be dumped again.
llvm-svn: 271300
Added support to map intrinsics
__builtin_arm_{ldc,ldcl,ldc2,ldc2l,stc,stcl,stc2,stc2l}
to their ARM instructions.
Differential Revision: http://reviews.llvm.org/D20564
llvm-svn: 271271
This adds support to the backed to actually support SjLj EH as an exception
model. This is *NOT* the default model, and requires explicitly opting into it
from the frontend. GCC supports this model and for MinGW can still be enabled
via the `--using-sjlj-exceptions` options.
Addresses PR27749!
llvm-svn: 271244
This function failed to type-check as it was. No test case yet (we only have
regression tests for the remote-JIT code, and LLI doesn't use this function), but
an upcoming chapter of the Kaleidoscope Building A JIT tutorials will use
this.
llvm-svn: 271189
Consolidate documentation by removing comments from the .cpp file that
were copy-pasted from the header.
llvm-svn: 271157
Summary:
This change teaches SCEV to reduce `(extractvalue
0 (op.with.overflow X Y))` into `op X Y` (with a no-wrap tag if
possible).
This was first checked in at r265912 but reverted in r265950 because it
exposed some issues around how SCEV handled post-inc add recurrences.
Those issues have now been fixed.
Reviewers: atrick, regehr
Subscribers: mcrosier, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D18684
llvm-svn: 271152
Fixes PR27315.
The post-inc version of an add recurrence needs to "follow the same
rules" as a normal add or subtract expression. Otherwise we miscompile
programs like
```
#include <stdio.h>
#include <stdlib.h>

/* will_add_overflow is pseudocode for "would a + 3 overflow?" */
int main() {
  int a = 0;
  unsigned a_u = 0;
  volatile long last_value;
  do {
    a_u += 3;
    last_value = (long) ((int) a_u);
    if (will_add_overflow(a, 3)) {
      // Leave, and don't actually do the increment, so no UB.
      printf("last_value = %ld\n", last_value);
      exit(0);
    }
    a += 3;
  } while (a != 46);
  return 0;
}
```
This patch changes SCEV to put no-wrap flags on post-inc add recurrences
only when the poison from a potential overflow would actually go on to cause
undefined behavior.
To avoid regressing performance too much, I've assumed that infinite loops
without side effects are undefined behavior, to prove poison<->UB
equivalence in more cases. This isn't ideal, but it is not new to LLVM as
a whole, and far better than the situation I'm trying to fix.
llvm-svn: 271151
This patch removes the llvm VPMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.
Reapplied now that the companion patch (D20684), which removes/auto-upgrades the clang intrinsics, has been committed.
Differential Revision: http://reviews.llvm.org/D20686
llvm-svn: 271131
This matches the behavior of GNU assembler which supports symbolic
expressions in absolute expressions used in assembly directives.
Differential Revision: http://reviews.llvm.org/D20752
llvm-svn: 271102
This converts remaining uses of ByteStream, which was still
left in the symbol stream and type stream, to using the new
StreamInterface zero-copy classes.
RecordIterator is finally deleted, so this is the only way left
now. Additionally, more error checking is added when iterating
the various streams.
With this, the transition to zero copy pdb access is complete.
llvm-svn: 271101
Remove broken patterns matching it. This was matching the
unsafe math pattern and expanding the fix for the buggy instruction
from the pattern. The problems are also on CI. Remove the workarounds
and only use fract with unsafe math or from the intrinsic.
llvm-svn: 271078
Summary:
Unroll factor (Count) calculations moved to a new function.
Early exits on pragma and "-unroll-count" defined factor added.
New type of unrolling "Force" introduced (previously used implicitly).
New unroll preference "AllowRemainder" introduced and set "true" by default.
(it should be set to false for architectures that suffer from it).
Reviewers: hfinkel, mzolotukhin, zzheng
Differential Revision: http://reviews.llvm.org/D19553
From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 271071
This matches the behavior of GNU assembler which supports symbolic
expressions in absolute expressions used in assembly directives.
Differential Revision: http://reviews.llvm.org/D20656
llvm-svn: 271028
Due to differences in template instantiation rules, it is not
portable to static_assert(false) inside of an invalid specialization
of a template. Instead I just =delete the method so that it can't
be used, and leave a comment that it must be explicitly specialized.
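A generic sketch of the pattern, with illustrative names rather than the VarStreamArrayExtractor code itself:
```
// The primary template deletes the method, so any use without an explicit
// specialization fails at the call site -- without relying on
// static_assert(false), which some compilers reject even when the primary
// template is never instantiated.
template <typename T> struct ExtractorLike {
  // Must be explicitly specialized for each record type T.
  int extract(const T &Record) = delete;
};

// An explicit specialization supplies the real behavior.
template <> struct ExtractorLike<int> {
  int extract(const int &Record) { return Record; }
};
```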
llvm-svn: 271027
This reverts commit r271024 due to error: static_assert failed
"You must either provide a specialization of VarStreamArrayExtractor
or a custom extractor"
llvm-svn: 271026
This reduces the amount of memory used by llvm-pdbdump by roughly
1/3 of the size of the PDB file.
Differential Revision: http://reviews.llvm.org/D20724
Reviewed By: ruiu
llvm-svn: 271025
Fix: updated clang code which was not updated by mistake.
Original commit message:
[llvm-mc] - Teach llvm-mc to generate zlib styled compression sections.
This patch is strongly based on previously reverted D20331.
(because GNU binutils < 2.26 does not support compressed debug sections in non-zlib-gnu style).
The difference is that this patch supports both zlib and zlib-gnu styles.
The -compress-debug-sections option now supports the following values:
-compress-debug-sections=zlib-gnu
-compress-debug-sections=zlib
-compress-debug-sections=none
Previously, specifying -compress-debug-sections enabled zlib-gnu compression,
so "-compress-debug-sections=zlib-gnu" restores the behavior
from before this patch for the case when compression was enabled.
Differential revision: http://reviews.llvm.org/D20676
llvm-svn: 270987
It broke buildbot:
http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-ubuntu-fast/builds/13585/steps/build/logs/stdio
Initial commit message:
[llvm-mc] - Teach llvm-mc to generate zlib styled compression sections.
This patch is strongly based on previously reverted D20331.
(because GNU binutils < 2.26 does not support compressed debug sections in non-zlib-gnu style).
The difference is that this patch supports both zlib and zlib-gnu styles.
The -compress-debug-sections option now supports the following values:
-compress-debug-sections=zlib-gnu
-compress-debug-sections=zlib
-compress-debug-sections=none
Previously, specifying -compress-debug-sections enabled zlib-gnu compression,
so "-compress-debug-sections=zlib-gnu" restores the behavior
from before this patch for the case when compression was enabled.
Differential revision: http://reviews.llvm.org/D20676
llvm-svn: 270978
This patch is strongly based on previously reverted D20331.
(because GNU binutils < 2.26 does not support compressed debug sections in non-zlib-gnu style).
The difference is that this patch supports both zlib and zlib-gnu styles.
The -compress-debug-sections option now supports the following values:
-compress-debug-sections=zlib-gnu
-compress-debug-sections=zlib
-compress-debug-sections=none
Previously, specifying -compress-debug-sections enabled zlib-gnu compression,
so "-compress-debug-sections=zlib-gnu" restores the behavior
from before this patch for the case when compression was enabled.
Differential revision: http://reviews.llvm.org/D20676
llvm-svn: 270977
This patch removes the llvm VPMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused.
A companion patch (D20684) removes/auto-upgrade the clang intrinsics.
Differential Revision: http://reviews.llvm.org/D20686
llvm-svn: 270973
This will be needed in order to consistently return an Error
to clients of the API being developed in D20268.
Differential Revision: http://reviews.llvm.org/D20550
llvm-svn: 270967
Since we want to move toward zero-copy access to stream data, we
want to remove all instances of copying operations. So get rid
of some of those here.
Differential Revision: http://reviews.llvm.org/D20720
Reviewed By: ruiu
llvm-svn: 270960
APInt::operator+(uint64_t) just forwarded to operator+(const APInt&).
Constructing the APInt for the RHS takes an allocation which isn't
required. Also, for APInts in the slow path, operator+ would
call add() internally, which iterates over both arrays of values. Instead
we can use add_1 and sub_1, which only iterate while there is something to do.
Using the memory for 'opt -O2 verify-uselistorder.lto.opt.bc -o opt.bc'
(see r236629 for details), this reduces the number of allocations from
23.9M to 22.7M.
llvm-svn: 270959
The type_traits header in libstdc++ 4.8 does not define is_trivially_constructible,
so the code doesn't compile with it.
In this file we are using the trait in an assertion to provide a better
error message. Removing it doesn't change the meaning of the code.
Differential Revision: http://reviews.llvm.org/D20719
llvm-svn: 270957
This comment was included in Peter Collingbourne's original version of
StringError (see http://reviews.llvm.org/D20550), where it made sense. It was
accidentally copied over with the rest of the class, but no longer applies.
llvm-svn: 270956
PDBs can be extremely large. We're already mapping the entire
PDB into the process's address space, but to make matters worse
the blocks of the PDB are not arranged contiguously. So, when
we have something like an array or a string embedded into the
stream, we have to make a copy. Since it's convenient to use
traditional data structures to iterate and manipulate these
records, we need the memory to be contiguous.
As a result of this, we were using roughly twice as much memory
as the file size of the PDB, because every stream was copied
out and re-stitched together contiguously.
This patch addresses this by improving the MappedBlockStream
to allocate from a BumpPtrAllocator only when a read spans
discontiguous blocks. Furthermore, it introduces some data
structures backed by a stream which can iterate over both
fixed and variable length records of a PDB. Since everything
is backed by a stream and not a buffer, we can read almost
everything from the PDB with zero copies.
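Roughly the read strategy described above, sketched with assumed names and a two-block simplification rather than the actual MappedBlockStream implementation:
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/Allocator.h"
#include <cassert>
#include <cstdint>
#include <cstring>

// If the requested range lies inside one mapped block, return a view into
// the mapped file (zero copy). Only a range spanning discontiguous blocks
// is stitched together into memory owned by the allocator.
static llvm::ArrayRef<uint8_t>
readBytes(llvm::ArrayRef<uint8_t> BlockA, llvm::ArrayRef<uint8_t> BlockB,
          size_t Offset, size_t Size, llvm::BumpPtrAllocator &Alloc) {
  if (Offset + Size <= BlockA.size())
    return BlockA.slice(Offset, Size);           // contiguous: no copy

  assert(Offset < BlockA.size() && "sketch assumes the read starts in BlockA");
  size_t FromA = BlockA.size() - Offset;
  uint8_t *Buf = Alloc.Allocate<uint8_t>(Size);  // discontiguous: one copy
  std::memcpy(Buf, BlockA.data() + Offset, FromA);
  std::memcpy(Buf + FromA, BlockB.data(), Size - FromA);
  return llvm::makeArrayRef(Buf, Size);
}
```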
Differential Revision: http://reviews.llvm.org/D20654
Reviewed By: ruiu
llvm-svn: 270951
StringError can be used to represent Errors that aren't recoverable based on
the error type, but that have a useful error message that can be reported to
the user or logged.
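A minimal usage sketch; the failing operation and message are illustrative:
```
#include "llvm/Support/Error.h"

// Report a failure with a human-readable message. inconvertibleErrorCode()
// is the usual choice when no meaningful std::error_code exists.
static llvm::Error openThing(bool Succeeded) {
  if (!Succeeded)
    return llvm::make_error<llvm::StringError>("could not open thing",
                                               llvm::inconvertibleErrorCode());
  return llvm::Error::success();
}
```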
llvm-svn: 270948
This adds support for YAML round tripping dyld info lazy bindings. The storage and format of these is the same as regular bind opcodes, they are just interpreted differently by dyld, and can have DONE opcodes in the middle of the opcode lists.
llvm-svn: 270920
This adds support for YAML round tripping dyld info weak bindings. The storage and format of these is the same as regular bind opcodes, they are just interpreted differently by dyld.
llvm-svn: 270911
Global variables and aliases are emitted eagerly, but there may not be any in
the incoming module. In that case, we can save some memory and compile time by
not building, emitting and tracking an empty globals module.
llvm-svn: 270908
This adds support for YAML round tripping dyld info bind opcodes. Bind opcodes can have signed or unsigned LEB128 data, and they can have symbols associated with them.
llvm-svn: 270901
r270777 improved the precision of alloca vs. inbounbds GEP alias queries: if
we have (a) an inbounds GEP and (b) a pointer based on an alloca, and the
beginning of the object the GEP points to would have a negative offset with
respect to the alloca, then the GEP can not alias pointer (b).
This makes the same logic fire when (b) is based on a GlobalVariable instead
of an alloca.
Differential Revision: http://reviews.llvm.org/D20652
llvm-svn: 270893
There is only one caller of MemorySSA::createNewAccess, and it passes true
as the IgnoreNonMemory argument. Remove that argument and fold its behavior
into createNewAccess.
llvm-svn: 270812
This fixes the example so that it matches the pass's behavior. I was a
little confused by the example until I tried running it and realized that
there was a mistake.
Differential Revision: http://reviews.llvm.org/D20657
llvm-svn: 270811
When we have "Image Info Version" module flag but don't have "Class Properties"
module flag, set "Class Properties" module flag to 0, so we can correctly emit
errors when one module has the flag set and another module does not.
rdar://26469641
llvm-svn: 270791
This matches the behavior of GNU assembler which supports symbolic
expressions in absolute expressions used in assembly directives.
Differential Revision: http://reviews.llvm.org/D20337
llvm-svn: 270786
If we have (a) a GEP and (b) a pointer based on an alloca, and the
beginning of the object the GEP points to would have a negative offset with
respect to the alloca, then the GEP can not alias pointer (b).
For example, consider code like:
struct foo { int f0; int f1; /* ... */ };
...
struct foo alloca;
struct foo *random = bar(&alloca);
int *f0 = &alloca.f0;
int *f1 = &random->f1;
Which is lowered, approximately, to:
%alloca = alloca %struct.foo
%random = call %struct.foo* @random(%struct.foo* %alloca)
%f0 = getelementptr inbounds %struct.foo, %struct.foo* %alloca, i32 0, i32 0
%f1 = getelementptr inbounds %struct.foo, %struct.foo* %random, i32 0, i32 1
Assume %f1 and %f0 alias. Then %f1 would point into the object allocated
by %alloca. Since the %f1 GEP is inbounds, that means %random must also
point into the same object. But since %f0 points to the beginning of %alloca,
the highest %f1 can be is (%alloca + 3). This means %random can not be higher
than (%alloca - 1), and so is not inbounds, a contradiction.
Differential Revision: http://reviews.llvm.org/D20495
llvm-svn: 270777
Getting accurate locations for loops is important, because those locations are
used by the frontend to generate optimization remarks. Currently, optimization
remarks for loops often appear on the wrong line, often the first line of the
loop body instead of the loop itself. This is confusing because that line might
itself be another loop, or might be somewhere else completely if the body was
inlined function call. This happens because of the way we find the loop's
starting location. First, we look for a preheader, and if we find one, and its
terminator has a debug location, then we use that. Otherwise, we look for a
location on an instruction in the loop header.
The fallback heuristic is not bad, but will almost always find the beginning of
the body, and not the loop statement itself. The preheader location search
often fails because there's often not a preheader, and even when there is a
preheader, depending on how it was formed, it sometimes carries the location of
some preceding code.
I don't see any good theoretical way to fix this problem. On the other hand,
this seems like a straightforward solution: Put the debug location in the
loop's llvm.loop metadata. A companion Clang patch will cause Clang to insert
llvm.loop metadata with appropriate locations when generating debugging
information. With these changes, our loop remarks have much more accurate
locations.
Differential Revision: http://reviews.llvm.org/D19738
llvm-svn: 270771
As a result of D18634 we no longer infer certain attributes on linkonce_odr
functions at compile time, and may only infer them at LTO time. The readnone
attribute in particular is required for virtual constant propagation (part
of whole-program virtual call optimization) to work correctly.
This change moves the whole-program virtual call optimization pass after
the function attribute inference passes, and enables the attribute inference
passes at opt level 1, so that virtual constant propagation has a chance to
work correctly for linkonce_odr functions.
Differential Revision: http://reviews.llvm.org/D20643
llvm-svn: 270765
It may materialize a declaration, or a definition. The name could
be misleading. This is following a merge of materializeInitFor()
into materializeDeclFor().
Differential Revision: http://reviews.llvm.org/D20593
llvm-svn: 270759
They were originally separated to handle the co-recursion between
the ValueMapper and the ValueMaterializer. This recursion does not
exist anymore: the ValueMapper now uses a Worklist and the
ValueMaterializer schedules jobs on the Worklist.
Differential Revision: http://reviews.llvm.org/D20593
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 270758
We need to reuse this functionality, including making
additional generic stream types that are smarter about how and
when they copy memory versus referencing the original memory.
So all of these structures belong in the common library
rather than being pdb specific.
llvm-svn: 270751
searching for external symbols, and fall back to the SymbolResolver::findSymbol
method if the former returns null.
This makes RuntimeDyld behave more like a static linker: Symbol definitions
from within the current module's "logical dylib" will be preferred to
external definitions. We can build on this behavior in the future to properly
support weak symbol handling.
Custom symbol resolvers that override the findSymbolInLogicalDylib method may
notice changes due to this patch. Clients who have not overridden this method
should generally be unaffected, however users of the OrcMCJITReplacement class
may notice changes.
llvm-svn: 270716
Also, rename recognizeBitReverseOrBSwapIdiom to recognizeBSwapOrBitReverseIdiom,
so the ordering of the MatchBSwaps and MatchBitReversals arguments are
consistent with the function name.
llvm-svn: 270715
Move the now index-based ODR resolution and internalization routines out
of ThinLTOCodeGenerator.cpp and into either LTO.cpp (index-based
analysis) or FunctionImport.cpp (index-driven optimizations).
This is to enable usage by other linkers.
llvm-svn: 270698
There's already an ARMTargetParser; now add a similar one for AArch64,
so we can use it to do ARCH/CPU/FPU parsing in clang and llvm instead of
string comparison.
Patch by Jojo Ma.
llvm-svn: 270687
Followup to D20528 clang patch, this removes the (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) llvm intrinsics and auto-upgrades to sitofp/fpext instead.
Differential Revision: http://reviews.llvm.org/D20568
llvm-svn: 270678
Ensure that the unused fields are explicitly stated when defining the types.
Add some compile time assertions about the size requirements for the structure
types.
llvm-svn: 270663
Summary:
Adds fastpath instrumentation for esan's working set tool. The
instrumentation for an intra-cache-line load or store consists of an
inlined write to shadow memory bits for the corresponding cache line.
Adds a basic test for this instrumentation.
Reviewers: aizatsky
Subscribers: vitalybuka, zhaoqin, kcc, eugenis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20483
llvm-svn: 270640
This patch adds support for:
S_EXPORT
LF_BITFIELD
With this patch, I have run through a couple of gigabytes of PDB
files and cannot find a type or symbol that we do not understand.
llvm-svn: 270637
This adds support for parsing and dumping the following
symbol types:
S_LPROCREF
S_ENVBLOCK
S_COMPILE2
S_REGISTER
S_COFFGROUP
S_SECTION
S_THUNK32
S_TRAMPOLINE
As of this patch, the test PDB files no longer have any unknown
symbol types.
llvm-svn: 270628
When dumping huge PDB files, too many of the options were grouped
together, so you would get a never-ending spew of output. This patch
introduces more granular display options so you can only dump the
fields you actually care about.
llvm-svn: 270607
This makes use of the newly introduced `CVSymbolVisitor` to dump details
of each type of symbol record in the symbol streams. Future patches will
bring this visitor based dumping to the publics stream, as well as
creating a `SymbolDumpDelegate` to print more information about
relocations etc.
Differential Revision: http://reviews.llvm.org/D20545
Reviewed By: ruiu
llvm-svn: 270585
Summary:
This patch changes the ODR resolution and internalization to be based on
updates to the Index, which are consumed by the backend portion of the
transformations.
It will be followed by an NFC change to move these out of libLTO's
ThinLTOCodeGenerator so that it can be used by other linkers
(gold and lld) and by ThinLTO distributed backends.
The global summary-based portions use callbacks so that the client can
determine the prevailing copy and other information in a client-specific
way. Eventually, with the API being developed in D20268, these may be
modified to use information such as symbol resolutions, supplied by the
clients to the API.
Reviewers: joker-eph
Subscribers: joker.eph, pcc, llvm-commits
Differential Revision: http://reviews.llvm.org/D20290
llvm-svn: 270584
Now, after landing r270560, r270557, and r270320, it is a proper time.
Original commit message:
[llvm-mc] - Teach llvm-mc to generate compressed debug sections in zlib style.
Before this patch llvm-mc generated zlib-gnu styled sections.
That means no SHF_COMPRESSED flag was set, a magic 'zlib' signature
was used in combination with a full size field, and sections were renamed to "*.z*".
This patch reimplements the compression in the zlib style, as zlib-gnu looks
to be deprecated everywhere.
Differential revision: http://reviews.llvm.org/D20331
llvm-svn: 270569
Fix was:
1) Had to regenerate dwarfdump-test-zlib.elf-x86-64 and dwarfdump-test-zlib-gnu.elf-x86-64
(because llvm-symbolizer-zlib.test uses those inputs for its purposes and failed).
2) Updated llvm-symbolizer-zlib.test (updated the call function address used to match the new files, and
added one more check for the newly created dwarfdump-test-zlib-gnu.elf-x86-64 binary input).
3) Updated comment in dwarfdump-test-zlib.cc.
Original commit message:
[llvm-dwarfdump] - Teach dwarfdump to decompress debug sections in zlib style.
Before this llvm-dwarfdump only recognized zlib-gnu compression style of headers,
this patch adds support for zlib style.
It looks reasonable to support both styles for dumping,
even if we are not going to suport generating of deprecated gnu one.
Differential revision: http://reviews.llvm.org/D20470
llvm-svn: 270557
fix: forgot to commit the updated dwarfdump-test-zlib.elf-x86-64
Original commit message:
[llvm-dwarfdump] - Teach dwarfdump to decompress debug sections in zlib style.
Before this, llvm-dwarfdump only recognized the zlib-gnu style of compressed-section headers;
this patch adds support for the zlib style.
It looks reasonable to support both styles for dumping,
even if we are not going to support generating the deprecated gnu one.
Differential revision: http://reviews.llvm.org/D20470
llvm-svn: 270543
Before this, llvm-dwarfdump only recognized the zlib-gnu style of compressed-section headers;
this patch adds support for the zlib style.
It looks reasonable to support both styles for dumping,
even if we are not going to support generating the deprecated gnu one.
Differential revision: http://reviews.llvm.org/D20470
llvm-svn: 270540
Moved the ModuleLoader and supporting helper loadModuleFromBuffer out of
ThinLTOCodeGenerator and into new LTO.h/LTO.cpp files. This is in
preparation for a patch that will utilize these in the gold-plugin.
Note that there are some other pending patches (D20268 and D20290) that
also plan to refactor common interfaces and functionality into this same
pair of new files.
llvm-svn: 270509
to llvm-objdump. This section is created with the -fembed-bitcode option.
This requires the use of libxar, and the CMake and lit support were crafted by
Chris Bieneman!
rdar://26202242
llvm-svn: 270491
This effectively reverts commit r270389 and re-lands r270106, but it's
almost a rewrite.
The behavior change in r270106 was that we could no longer assume that
each LF_FUNC_ID record got its own type index. This patch adds a map
from DINode* to TypeIndex, so we can stop making that assumption.
This change also emits padding bytes between type records similar to the
way MSVC does. The size of the type record includes the padding bytes.
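The caching shape described above, sketched with assumed names rather than the actual CodeViewDebug code:
```
#include "llvm/ADT/DenseMap.h"
#include "llvm/DebugInfo/CodeView/TypeIndex.h"
#include "llvm/IR/DebugInfoMetadata.h"

// Cache the TypeIndex assigned to each DINode the first time its record is
// emitted, so later references reuse it instead of assuming one new record
// (and one new index) per LF_FUNC_ID.
class TypeIndexCacheSketch {
  llvm::DenseMap<const llvm::DINode *, llvm::codeview::TypeIndex> TypeIndices;

public:
  llvm::codeview::TypeIndex getOrAssign(const llvm::DINode *Node,
                                        llvm::codeview::TypeIndex Fresh) {
    // insert() is a no-op if Node is already present; either way we return
    // the index stored in the map.
    auto Result = TypeIndices.insert({Node, Fresh});
    return Result.first->second;
  }
};
```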
llvm-svn: 270485
This will pave the way to introduce a full-fledged symbol visitor
similar to how we have a type visitor, thus allowing the same
dumping code to be used in llvm-readobj and llvm-pdbdump.
Differential Revision: http://reviews.llvm.org/D20384
Reviewed By: rnk
llvm-svn: 270475
In r268693, we started requiring that SelectionDAGISel::Select return
void, but provided a default implementation that did just that by
calling into the old interface. Now that all targets have been
updated, we'll just remove the default implementation.
llvm-svn: 270454
Summary: This needs to get in before anything is released concerning attributes. If the old name gets into the wild, then we are stuck with it forever. Putting it in its own diff should at least get that part in fast.
Reviewers: Wallbraker, whitequark, joker.eph, echristo, rafael, jyknight
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D20417
llvm-svn: 270452
Prior to this patch, we were using 1 for all the repairing costs.
Now we query the target for this information.
llvm-svn: 270304