This patch adds BPF Debug Format (BTF) as a standalone
LLVM debuginfo. The BTF related sections are directly
generated from IR. The BTF debuginfo is generated
only when the compilation target is BPF.
What is BTF?
============
First, BPF is a Linux kernel virtual machine
widely used for tracing, networking and security.
https://www.kernel.org/doc/Documentation/networking/filter.txt
https://cilium.readthedocs.io/en/v1.2/bpf/
BTF is the debug info format for BPF, introduced in the linux kernel patch
69b693f0ae (diff-06fb1c8825f653d7e539058b72c83332)
as part of the patch set covered in the lwn article below:
https://lwn.net/Articles/752047/
The BTF format is specified in the kernel commit above.
In summary, its layout looks like
struct btf_header
type subsection (a list of types)
string subsection (a list of strings)
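For reference (not part of this patch), the header as it appears in recent
kernel uapi headers (include/uapi/linux/btf.h) looks roughly like the
following; the exact fields have changed across BTF revisions, so treat
this as an illustrative sketch only:

#include <linux/types.h>

struct btf_header {
    __u16 magic;    /* 0xeB9F; also reveals endianness */
    __u8  version;
    __u8  flags;
    __u32 hdr_len;
    /* offsets below are relative to the end of this header */
    __u32 type_off; /* offset of the type subsection   */
    __u32 type_len; /* length of the type subsection   */
    __u32 str_off;  /* offset of the string subsection */
    __u32 str_len;  /* length of the string subsection */
};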
With such information, the kernel and user space are able to
pretty-print a particular bpf map key/value. One possible example below:
Without BTF:
key: [ 0x01, 0x01, 0x00, 0x00 ]
With BTF:
key: struct t { a : 1; b : 1; c : 0}
where struct is defined as
struct t { char a; char b; short c; };
How is BTF generated?
=====================
Currently, BTF is generated through pahole.
https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?id=68645f7facc2eb69d0aeb2dd7d2f0cac0feb4d69
and available in pahole v1.12
https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?id=4a21c5c8db0fcd2a279d067ecfb731596de822d4
Basically, the bpf program needs to be compiled with -g so that
dwarf sections are generated. pahole is enhanced such that
a .BTF section can be generated based on dwarf. The format
of the .BTF section matches the format expected by
the kernel, so a bpf loader can just take the .BTF section
and load it into the kernel.
8a138aed4a
The .BTF section layout is also specified in this patch,
in the file include/llvm/BinaryFormat/BTF.h.
What use cases does this patch try to address?
==============================================
Currently, only the bpf instruction stream needs to be
passed to the kernel. The kernel verifies it, jits it if configured
to do so, attaches it to a particular kernel attachment point,
and later executes it when a particular event happens.
This patch tries to expand BTF to support two more use cases below:
(1). BPF supports subroutine calls.
During performance analysis, it would be good to
differentiate which call is hot instead of just
providing a virtual address. This requires passing
a unique identifier for each subroutine to
the kernel; the subroutine name is a natural choice.
(2). If a particular jitted instruction is hot, we want
the user to know which source line this jitted instruction
belongs to. This requires that the source information
be available to various profiling tools.
Note that in a single ELF file,
. there may be multiple loadable bpf programs,
. for a particular to-be-loaded bpf instruction stream,
its instructions may come from multiple PROGBITS sections,
the bpf loader needs to merge them together into a single
consecutive insn stream before loading it into the kernel.
For example:
section .text: subroutines funcFoo
section _progA: calling funcFoo
section _progB: calling funcFoo
The bpf loader could construct two loadable bpf instruction
streams and load them into the kernel:
. _progA funcFoo
. _progB funcFoo
So the per-ELF-section function offsets and instruction offsets
will need to be adjusted before being passed to the kernel, and
the kernel essentially expects only one code section regardless
of how many are in the ELF file.
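As a purely illustrative sketch (the types and names below are
hypothetical, not part of this patch or of any particular loader), the
adjustment a loader has to apply when concatenating sections is just a
shift of the recorded instruction offsets:

/* Hypothetical sketch: when code from .text is appended after _progA,
 * every func_info/line_info record taken from the .text piece must have
 * its instruction offset shifted by the amount of code already placed
 * before it in the merged stream. */
struct ext_record {
    unsigned int insn_off;  /* offset within the original ELF section */
    /* ... other func_info/line_info fields ... */
};

static void shift_insn_offsets(struct ext_record *recs, unsigned int n,
                               unsigned int base_insn_off)
{
    for (unsigned int i = 0; i < n; i++)
        recs[i].insn_off += base_insn_off;
}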
What do we propose and why?
===========================
To support the above two use cases, we propose to
add an additional section, .BTF.ext, to the ELF file
which is the input of the bpf loader. A separate section
is preferred since the loader may need to manipulate it before
loading part of its data into the kernel.
The .BTF.ext section has a similar header to the .BTF section
and it contains two subsections for func_info and line_info.
. the func_info maps the func insn byte offset to a func
type in the .BTF type subsection.
. the line_info maps the insn byte offset to a line info.
. both func_info and line_info subsections are organized
by ELF PROGBITS AX sections.
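For reference (not part of this patch), the per-record layouts the kernel
ended up exposing for these subsections (include/uapi/linux/bpf.h) look
roughly like the following; field names and the exact packing may differ
by kernel version, so this is a sketch for illustration only:

#include <linux/types.h>

struct bpf_func_info {
    __u32 insn_off;       /* offset of the function's first insn       */
    __u32 type_id;        /* BTF type id of the function in .BTF       */
};

struct bpf_line_info {
    __u32 insn_off;       /* offset of the instruction                 */
    __u32 file_name_off;  /* file name offset in the .BTF string table */
    __u32 line_off;       /* source line offset in the .BTF string table */
    __u32 line_col;       /* packed line number and column             */
};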
pahole is not a good place to implement .BTF.ext, as
pahole is mostly for structure hole information. More
importantly, we want to pass the actual code to the kernel:
. bpf program typically is small so storage overhead
should be small.
. in bpf land, it is totally possible that
an application loads the bpf program into the
kernel and then that application quits, so
holding debug info by the user space application
is not practical as you may not even know who
loaded this bpf program.
. having the source code kept directly by the kernel
would ease deployment since the original source
code does not need to ship on every host and
the kernel-devel package does not need to be
deployed even if kernel headers are used.
LLVM is a good place to implement this:
. The only reliable time to get the source code is
at compilation time. This results in both more
accurate information and easier deployment, as
stated above.
. Another consideration is for JIT. Projects like bcc
(https://github.com/iovisor/bcc)
use MCJIT to compile a C program into bpf insns and
load them into the kernel. The llvm-generated BTF sections
will be readily available for such cases as well.
Design and implementation of emitting .BTF/.BTF.ext sections
============================================================
The BTF debuginfo format is defined. Both .BTF and .BTF.ext
sections are generated directly from IR when both
"-target bpf" and "-g" are specified. Note that
dwarf sections are still generated, as dwarf is used
by user space tools like llvm-objdump etc. for the BPF target.
This patch also contains tests to verify generated
.BTF and .BTF.ext sections for all supported types, func_info
and line_info subsections. The patch is also tested
against linux kernel bpf sample tests and selftests.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D53736
llvm-svn: 347999
This reverts r294500. DwarfCompileUnit::addAddressExpr uses DIEExpr
for PCOffset. In that case the expression is unrelated to thread locals
and so emitting a value of the DIEExpr does not have to always mean
emit-debug-thread-local.
llvm-svn: 347744
Summary:
Add a hook to the GCMetadataPrinter for emitting stack maps in
custom format. The hook will be called at stack map generation
time. The default stack map format is used if there is no hook.
For this to be useful a few data structures and accessors are
exposed from the StackMaps class, so the custom printer can
access the stack map data.
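A minimal sketch of how a plugin could use the new hook follows; the
section choice, placeholder emission, and registration name are
illustrative assumptions, not code from this patch:

#include "llvm/CodeGen/AsmPrinter.h"
#include "llvm/CodeGen/GCMetadataPrinter.h"
#include "llvm/CodeGen/StackMaps.h"
#include "llvm/MC/MCStreamer.h"
#include "llvm/Target/TargetLoweringObjectFile.h"
using namespace llvm;

namespace {
class CustomStackMapPrinter : public GCMetadataPrinter {
public:
  // Called at stack map generation time; returning true tells the
  // AsmPrinter that stack maps were emitted here, so the default
  // format is skipped.
  bool emitStackMaps(StackMaps &SM, AsmPrinter &AP) override {
    AP.OutStreamer->SwitchSection(
        AP.getObjFileLowering().getDataSection());
    // A real printer would walk the records exposed by StackMaps
    // here; this sketch only emits a placeholder word.
    AP.OutStreamer->EmitIntValue(0, 8);
    return true;
  }
};

// Register the printer under a GC strategy name (name is made up).
static GCMetadataPrinterRegistry::Add<CustomStackMapPrinter>
    X("custom-gc", "custom stack map format printer");
} // end anonymous namespace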
This patch authored by Cherry Zhang <cherryyz@google.com>.
Reviewers: thanm, apilipenko, reames
Reviewed By: reames
Subscribers: reames, apilipenko, nemanjai, javed.absar, kbarton, jsji, llvm-commits
Differential Revision: https://reviews.llvm.org/D53892
llvm-svn: 347584
ParentTy is never used other than in an assignment, and since it is a
pointer, there is no side effect. Some versions of GCC notice this and
warn about it.
Change-Id: I37dc1a18c7b58040419afb803621de13d8904a8f
llvm-svn: 347581
When you have a member function with a ref-qualifier, for example:
struct Foo {
  void Func() &;
  void Func2() &&;
};
clang-cl was not emitting this information. Doing so is a bit
awkward, because it's not a property of the LF_MFUNCTION type, which
is what you'd expect. Instead, it's a property of the this pointer
which is actually an LF_POINTER. This record has an attributes
bitmask on it, and our handling of this bitmask was all wrong. We
had some parts of the bitmask defined incorrectly, but importantly
for this bug, we didn't know about these extra 2 bits that represent
the ref qualifier at all.
Differential Revision: https://reviews.llvm.org/D54667
llvm-svn: 347354
This is for compatibility with MSVC, which also marks this pointers
as being const-qualified.
Fixes llvm.org/pr36526
Differential Revision: https://reviews.llvm.org/D54736
llvm-svn: 347353
Summary:
Experience has shown that the functionality is useful. It makes linking
optimized clang with debug info for me a lot faster, 20s to 13s. The
type merging phase of PDB writing goes from 10s to 3s.
This removes the LLVM cl::opt and replaces it with a metadata flag.
After this change, users can do the following to use ghash:
- add -gcodeview-ghash to compiler flags
- replace /DEBUG with /DEBUG:GHASH in linker flags
Reviewers: zturner, hans, thakis, takuto.ikuta
Subscribers: aprantl, hiraditya, JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D54370
llvm-svn: 347072
Summary:
This adds support for the 'event section' specified in the exception
handling proposal. (This was named 'exception section' first, but later
renamed to 'event section' to take the possibility of other kinds of
events into consideration. Currently we only store exception info in
this section.)
The event section is added between the global section and the export
section. This is for ease of validation per request of the V8 team.
This patch:
- Creates the event symbol type, which is a weak symbol
- Makes 'throw' instruction take the event symbol '__cpp_exception'
- Adds relocation support for events
- Adds WasmObjectWriter / WasmObjectFile (Reader) support
- Adds obj2yaml / yaml2obj support
- Adds '.eventtype' printing support
Reviewers: dschuff, sbc100, aardappel
Subscribers: jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D54096
llvm-svn: 346825
Summary:
The comment refers to the field as "Kind:". However, in gdb,
https://sourceware.org/gdb//onlinedocs/gdb/Index-Section-Format.html names it "attributes",
and gdb/dwarf2read.c:dw2_symtab_iter_next refers to the whole value as "cu_index_and_attrs".
Change it to `Attributes:` for consistency.
Reviewers: dblaikie
Reviewed By: dblaikie
Subscribers: aprantl, JDevlieghere, arphaman, llvm-commits
Differential Revision: https://reviews.llvm.org/D54480
llvm-svn: 346790
Summary:
Ranges base address specifiers can save a lot of object size in
relocation records especially in optimized builds.
For an optimized self-host build of Clang with split DWARF and debug
info compression in object files, but uncompressed debug info in the
executable, this change produces about 18% smaller object files and 6%
larger executable.
While it would've been nice to turn this on by default, gold's 32 bit
gdb-index support crashes on this input & I don't think there's any
perfect heuristic to implement solely in LLVM that would suffice - so
we'll need a flag one way or another (it's also possible that people
might want to aggressively optimize for the size of an executable that
contains debug info (even with compression this would still come at
some cost to executable size)) - so let's plumb it through.
Differential Revision: https://reviews.llvm.org/D54242
llvm-svn: 346788
Turns out knowing more than just the base address might be useful -
specifically a future change to respect a DICompileUnit flag for the use
of base address specifiers in DWARF < 5.
llvm-svn: 346380
Use MachineFrameInfo's OffsetAdjustment field to pass this information
from the target to CodeViewDebug.cpp. The X86 backend doesn't use it for
any other purpose.
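A minimal illustration (not the patch's actual code; the helper names
are made up) of the handoff through MachineFrameInfo:

#include "llvm/CodeGen/MachineFrameInfo.h"
#include "llvm/CodeGen/MachineFunction.h"
using namespace llvm;

// Frame lowering side: remember how far the aligned local area is
// offset relative to where the CSRs were pushed.
static void recordCSRAlignmentGap(MachineFunction &MF, int GapBytes) {
  MF.getFrameInfo().setOffsetAdjustment(GapBytes);
}

// CodeViewDebug side: fold the adjustment back into a frame-relative
// offset (the sign convention here is illustrative only).
static int localOffsetWithAdjustment(const MachineFunction &MF,
                                     int RawOffset) {
  return RawOffset + MF.getFrameInfo().getOffsetAdjustment();
}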
This fixes PR38857 in the case where there is a non-aligned quantity of
CSRs and a non-aligned quantity of locals.
llvm-svn: 346062
The TypeIndex used by cl.exe is 0x103, which indicates a SimpleTypeMode
of NearPointer (note the absence of the bitness, normally pointers use a
mode of NearPointer32 or NearPointer64) and a SimpleTypeKind of void.
So this is basically a void*, but without a specified size, which makes
sense given how std::nullptr_t is defined.
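For reference, with LLVM's CodeView definitions the 0x103 index
decomposes as follows (a sketch for illustration, not code from the
patch itself):

#include "llvm/DebugInfo/CodeView/TypeIndex.h"
using namespace llvm::codeview;

// SimpleTypeKind::Void is 0x0003 and SimpleTypeMode::NearPointer is
// 0x0100 (a near pointer with no bitness), so the combined simple
// type index for std::nullptr_t is 0x0103.
static TypeIndex getNullptrTypeIndex() {
  return TypeIndex(SimpleTypeKind::Void, SimpleTypeMode::NearPointer);
}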
clang-cl was actually not emitting *anything* for this. Instead, when we
encountered std::nullptr_t in a DIType, we would just emit a
TypeIndex of 0, which is obviously wrong.
std::nullptr_t in DWARF is represented as a DW_TAG_unspecified_type with
a name of "decltype(nullptr)", so we add that logic along with a test,
as well as an update to the dumping code so that we no longer print
void* when dumping 0x103 (which would previously treat Void/NearPointer
no differently than Void/NearPointer64).
Differential Revision: https://reviews.llvm.org/D53957
llvm-svn: 345811
Before this patch DbgInfoAvailable was set to true in
DwarfDebug::beginModule() or CodeViewDebug::CodeViewDebug(). This made
MIR testing weird since passes would suddenly stop dealing with debug
info just because we stopped the pipeline before the debug printers.
This patch changes the logic to initialize DbgInfoAvailable based on
whether debug_compile_units exist in the llvm Module. The debug
printers may then override it with false if debug printing is
disabled.
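The check amounts to something like the following (a sketch; the exact
location and helper used in the patch may differ):

#include "llvm/IR/Module.h"
using namespace llvm;

// DbgInfoAvailable now starts out true iff the module carries any
// DICompileUnit, instead of being flipped on by the debug printers.
static bool moduleHasDebugInfo(Module &M) {
  return M.debug_compile_units_begin() != M.debug_compile_units_end();
}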
Differential Revision: https://reviews.llvm.org/D53885
llvm-svn: 345740
Add ARM64 unwind codes to MCLayer, as well as SEH directives that will be
emitted by the frame lowering patch to follow. We only emit unwind codes
into object files for now.
Differential Revision: https://reviews.llvm.org/D50166
llvm-svn: 345450
.debug_loclists is the DWARF 5 version of .debug_loc.
With this patch, it will be emitted when DWARF 5 is used.
Differential revision: https://reviews.llvm.org/D53365
llvm-svn: 345377
Summary:
This adds support for LSDA (exception table) generation for wasm EH.
Wasm EH mostly follows the structure of Itanium-style exception tables,
with one exception: a call site table entry in wasm EH corresponds
not to a call site but to a landing pad.
exception occurs and the stack is unwound, the control flow is
transferred to wasm 'catch' instruction by the VM, after which the
personality function is called from the compiler-generated code. (Refer
to WasmEHPrepare pass for more information on this part.)
This patch:
- Changes wasm.landingpad.index intrinsic to take a token argument, to
make this 1:1 match with a catchpad instruction
- Stores landingpad index info and catch type info in MachineFunction
before instruction selection
- Lowers wasm.lsda intrinsic to an MCSymbol pointing to the start of an
exception table
- Adds WasmException class with overridden methods for table generation
- Adds support for LSDA section in Wasm object writer
Reviewers: dschuff, sbc100, rnk
Subscribers: mgorny, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D52748
llvm-svn: 345345
This isn't the most object-size efficient encoding, but it's the only
one GDB supports for the pre-standard fission format. I've written fixes
for this twice now... - so perhaps this comment will help me remember
why neither of these have been committed and why I shouldn't try to
write a third fix another year from now...
llvm-svn: 345326
This makes the offsets larger (since they are further from the base
address) but those are in the .dwo - and allows removing addresses and
relocations from the .o file.
This could be built into the AddressPool more fundamentally, perhaps -
when you ask for an AddressPool entry you could say "or give me some
other entry and an offset I need to use" - though what to do about
situations where the first use of an address in a section is not the
earliest address in that section... is tricky.
At least with range addresses we can be fairly sure we've seen the
earliest address first because we see the start address for the
function.
llvm-svn: 345224
Summary:
This renames the IsParsingMSInlineAsm member variable of AsmLexer to
LexMasmIntegers and moves it up to MCAsmLexer. This is the only behavior
controlled by that variable. I added a public setter, so that it can be
set from outside or from the llvm-mc command line. We may need to
arrange things so that users can get this behavior from clang, but
that's future work.
I also put additional hex literal lexing functionality under this flag
to fix PR32973. It appears that this hex literal parsing wasn't intended
to be enabled in non-masm-style blocks.
Now, masm integers (0b1101 and 0ABCh) work in __asm blocks from clang,
but 0b label references work when using .intel_syntax in standalone .s
files.
However, 0b label references will *not* work from __asm blocks in clang.
They will work from GCC inline asm blocks, which sounds like it is
important for Crypto++ as mentioned in PR36144.
Essentially, we only lex masm literals for inline asm blobs that use
intel syntax. If the .intel_syntax directive is used inside a gnu-style
inline asm statement, masm literals will not be lexed, which is
compatible with gas and llvm-mc standalone .s assembly.
This fixes PR36144 and PR32973.
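For instance, after this change an MS-style blob like the following
lexes the masm literals (a small illustration assuming an x86 target and
clang-cl or -fms-extensions; it is not a test from the patch):

// The literals below are masm-style and are now lexed inside the
// __asm blob; gnu-style inline asm and standalone .s files keep
// their previous behavior.
int masm_literals_demo() {
  int result;
  __asm {
    mov eax, 0ABCh   // masm hex literal (= 0xABC)
    add eax, 0b1101  // masm binary literal (= 13)
    mov result, eax
  }
  return result;
}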
Reviewers: Gerolf, avt77
Subscribers: eraman, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D53535
llvm-svn: 345189
Summary:
If the target does not support `.asciz` and `.ascii` directives, the
strings are represented as bytes and each byte is placed on a new line
as a separate byte directive `.b8 <data>`. The NVPTX target allows
representing data of the same type as a vector, where the values are
separated by the `,` symbol: `.b8 <data1>,<data2>,...`. This allows
reducing the size of the final PTX file. The ptxas tool includes ptx
files in the resulting binary object, so reducing the size of the PTX
file is important.
Reviewers: tra, jlebar, echristo
Subscribers: jholewinski, llvm-commits
Differential Revision: https://reviews.llvm.org/D45822
llvm-svn: 345142
Logs provided by @stella.stamenova indicate that on Linux, lldb adds a
spurious slide offset to the return PC it loads from AT_call_return_pc
attributes (see the list thread: "[PATCH] D50478: Add support for
artificial tail call frames").
This patch side-steps the issue by getting rid of the load address
calculation in lldb's CallEdge::GetReturnPCAddress.
The idea is to have the DWARF writer emit function-local offsets to the
instruction after a call. I.e. return-pc = label-after-call-insn -
function-entry. LLDB can simply add this offset to the base address of a
function to get the return PC.
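In other words (purely illustrative arithmetic, names made up):

#include <cstdint>

// The DWARF writer now stores (label_after_call - function_entry), so
// the debugger recovers the return PC without any load-address slide.
static uint64_t computeReturnPC(uint64_t function_base_load_addr,
                                uint64_t call_return_pc_offset) {
  return function_base_load_addr + call_return_pc_offset;
}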
Differential Revision: https://reviews.llvm.org/D53469
llvm-svn: 344960
Using a base address specifier even for a single-element range is a size
win for object files (7 words versus 8 words - more significant savings
if the debug info is compressed, since it's 3 words of uncompressible
reloc + 4 compressible words compared to 6 uncompressible reloc words +
2 compressible words) - it does trade off an executable size increase though.
llvm-svn: 344841
Putting addresses in the address pool, even with non-fission, can reduce
relocations - reusing the addresses from debug_info and debug_rnglists
(the latter coming soon)
llvm-svn: 344834
Summary:
This adds support for LSDA (exception table) generation for wasm EH.
Wasm EH mostly follows the structure of Itanium-style exception tables,
with one exception: a call site table entry in wasm EH corresponds
not to a call site but to a landing pad.
In wasm EH, the VM is responsible for stack unwinding. After an
exception occurs and the stack is unwound, the control flow is
transferred to wasm 'catch' instruction by the VM, after which the
personality function is called from the compiler-generated code. (Refer
to WasmEHPrepare pass for more information on this part.)
This patch:
- Changes wasm.landingpad.index intrinsic to take a token argument, to
make this 1:1 match with a catchpad instruction
- Stores landingpad index info and catch type info MachineFunction in
before instruction selection
- Lowers wasm.lsda intrinsic to an MCSymbol pointing to the start of an
exception table
- Adds WasmException class with overridden methods for table generation
- Adds support for LSDA section in Wasm object writer
Reviewers: dschuff, sbc100, rnk
Subscribers: mgorny, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D52748
llvm-svn: 344575
The initial patch was not reviewed, and does not have any tests;
it should not have been merged.
This reverts 344395, 344390, 344387, 344385, 344381, 344376,
and 344366.
llvm-svn: 344405
Summary: We can fill in the command line and compiler path later if we want.
Reviewers: zturner
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D53179
llvm-svn: 344393
This a resubmission of a patch which was previously reverted
due to breaking several lld tests. The issues causing those
failures have been fixed, so the patch is now resubmitted.
---Original Commit Message---
While it doesn't make a *ton* of sense for POSIX paths to be
in PDBs, it's possible for them to occur in real scenarios involving
cross compilation.
The tools need to be able to handle this, because certain types
of debugging scenarios are possible without a running process
and so don't necessarily require you to be on a Windows system.
These include post-mortem debugging and binary forensics (e.g.
using a debugger to disassemble functions and examine symbols
without running the process).
There are changes in clang, LLD, and lldb in this patch. After
this the cross-platform disassembly and source-list tests pass
on Linux.
Furthermore, the behavior of LLD can now be summarized by a much
simpler rule than before: Unless you specify /pdbsourcepath and
/pdbaltpath, the PDB ends up with paths that are valid within
the context of the machine that the link is performed on.
Differential Revision: https://reviews.llvm.org/D53149
llvm-svn: 344377
* Move #include outside of namespaces
* Add missing #include
* Add out-of-line virtual destructor to BTFTypeEntry
designated initializers should also be fixed
llvm-svn: 344376
BTF is the debug format for BPF, a kernel virtual machine
widely used for tracing, networking and security, etc ([1]).
Currently only instruction streams are passed to the kernel,
and the kernel verifier verifies them before execution. In order to
provide better visibility of bpf programs to user space
tools, some debug information, e.g., function names and
debug line information, is desirable for the kernel so tools
can annotate jited instructions with such information
for performance analysis or other reasons.
Dwarf is too complicated for the kernel and for BPF.
Hence, BTF was designed to be the debug format for BPF ([2]).
Right now, pahole supports BTF for types, which
are generated based on dwarf sections in the ELF file.
In order to annotate performance metrics for jited bpf insns,
it is necessary to pass debug line info to the kernel.
Furthermore, we want to pass the actual code to the
kernel for the following reasons:
. bpf program typically is small so storage overhead
should be small.
. in bpf land, it is totally possible that
an application loads the bpf program into the
kernel and then that application quits, so
holding debug info by the user space application
is not practical.
. having the source code kept directly by the kernel
would ease deployment since the original source
code does not need to ship on every host and
the kernel-devel package does not need to be
deployed even if kernel headers are used.
The only reliable time to get the source code is
at compilation time. This results in both more
accurate information and easier deployment, as
stated above.
Another consideration is for JIT. Projects like bcc
use MCJIT to compile a C program into bpf insns and
load them into the kernel ([3]). The generated BTF sections
will be readily available for such cases as well.
This patch implements generation of BTF info in the llvm
compiler. The BTF related sections will be generated
when both -target bpf and -g are specified. Two sections
are generated:
.BTF contains all the type and string information, and
.BTF.ext contains the func_info and line_info.
The separation is related to how the two sections are used
differently by the bpf loader, e.g., linux libbpf ([4]).
The .BTF section can be loaded into the kernel directly
while .BTF.ext needs loader manipulation before loading
to the kernel. The format of each section is roughly
defined in llvm:include/llvm/MC/MCBTFContext.h and
can be seen from the implementation in llvm:lib/MC/MCBTFContext.cpp.
A later example also shows the contents in each section.
The type and func_info are gathered during CodeGen/AsmPrinter
by traversing dwarf debug_info. The line_info is
gathered in MCObjectStreamer before writing to
the object file. After all the information is gathered,
the two sections are emitted in MCObjectStreamer::finishImpl.
With cmake CMAKE_BUILD_TYPE=Debug, the compiler can
dump out all the tables except insn offset, which
will be resolved later as relocation records.
The debug type "btf" is used for BTFContext dump.
Dwarf debug info generation is tested with
llvm-dwarfdump, which decodes the binary sections and
checks whether the result is as expected. Currently
we do not have such a tool for BTF yet. We will implement
btf dump functionality in bpftool ([5]), as bpftool is
considered the recommended tool for bpf introspection.
The implementation for type and func_info is tested
with linux kernel test cases. The line_info is visually
checked against the dump from linux kernel libbpf ([4]) and
checked with readelf dumping the raw section data.
Note that the .BTF and .BTF.ext information will not
be emitted to assembly code and there is no assembler
support for BTF either.
Below, with a clang/llvm built with CMAKE_BUILD_TYPE=Debug,
the contents of each table are shown for a simple C program.
-bash-4.2$ cat -n test.c
1 struct A {
2   int a;
3   char b;
4 };
5
6 int test(struct A *t) {
7   return t->a;
8 }
-bash-4.2$ clang -O2 -target bpf -g -mllvm -debug-only=btf -c test.c
Type Table:
[1] FUNC name_off=1 info=0x0c000001 size/type=2
    param_type=3
[2] INT name_off=12 info=0x01000000 size/type=4
    desc=0x01000020
[3] PTR name_off=0 info=0x02000000 size/type=4
[4] STRUCT name_off=16 info=0x04000002 size/type=8
    name_off=18 type=2 bit_offset=0
    name_off=20 type=5 bit_offset=32
[5] INT name_off=22 info=0x01000000 size/type=1
    desc=0x02000008
String Table:
0 :
1 : test
6 : .text
12 : int
16 : A
18 : a
20 : b
22 : char
27 : test.c
34 : int test(struct A *t) {
58 : return t->a;
FuncInfo Table:
sec_name_off=6
    insn_offset=<Omitted> type_id=1
LineInfo Table:
sec_name_off=6
    insn_offset=<Omitted> file_name_off=27 line_off=34 line_num=6 column_num=0
    insn_offset=<Omitted> file_name_off=27 line_off=58 line_num=7 column_num=3
-bash-4.2$ readelf -S test.o
......
[12] .BTF PROGBITS 0000000000000000 0000028d
00000000000000c1 0000000000000000 0 0 1
[13] .BTF.ext PROGBITS 0000000000000000 0000034e
0000000000000050 0000000000000000 0 0 1
[14] .rel.BTF.ext REL 0000000000000000 00000648
0000000000000030 0000000000000010 16 13 8
......
-bash-4.2$
The latest linux kernel ([6]) already supports .BTF with type information.
[7] has the reference implementation on the linux kernel side
to support .BTF.ext func_info. The .BTF.ext line_info support is not
implemented yet. If you have difficulty accessing [6], you can
manually do the following to access the code:
git clone https://github.com/yonghong-song/bpf-next-linux.git
cd bpf-next-linux
git checkout btf
The change will be pushed to the linux kernel soon once this patch lands.
References:
[1]. https://www.kernel.org/doc/Documentation/networking/filter.txt
[2]. https://lwn.net/Articles/750695/
[3]. https://github.com/iovisor/bcc
[4]. https://github.com/torvalds/linux/tree/master/tools/lib/bpf
[5]. https://github.com/torvalds/linux/tree/master/tools/bpf/bpftool
[6]. https://github.com/torvalds/linux
[7]. https://github.com/yonghong-song/bpf-next-linux/tree/btf
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Differential Revision: https://reviews.llvm.org/D52950
llvm-svn: 344366
It originally triggered a stepping problem in the debugger, which could
be fixed by adjusting CodeGen/LexicalScopes.cpp; however, it seems we
prefer the previous behavior anyway.
See the discussion for details: http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20181008/593833.html
This reverts commit r343880.
This reverts commit r343874.
llvm-svn: 344318
This was originally causing some test failures on non-Windows
platforms, which required fixes in the compiler and linker. After
those fixes, however, other tests started failing. Reverting
temporarily until I can address everything.
llvm-svn: 344279
While it doesn't make a *ton* of sense for POSIX paths to be
in PDBs, it's possible for them to occur in real scenarios involving
cross compilation.
The tools need to be able to handle this, because certain types
of debugging scenarios are possible without a running process
and so don't necessarily require you to be on a Windows system.
These include post-mortem debugging and binary forensics (e.g.
using a debugger to disassemble functions and examine symbols
without running the process).
There are changes in clang, LLD, and lldb in this patch. After
this the cross-platform disassembly and source-list tests pass
on Linux.
Furthermore, the behavior of LLD can now be summarized by a much
simpler rule than before: Unless you specify /pdbsourcepath and
/pdbaltpath, the PDB ends up with paths that are valid within
the context of the machine that the link is performed on.
Differential Revision: https://reviews.llvm.org/D53149
llvm-svn: 344269
DWARF v5 introduces DW_AT_call_all_calls, a subprogram attribute which
indicates that all calls (both regular and tail) within the subprogram
have call site entries. The information within these call site entries
can be used by a debugger to populate backtraces with synthetic tail
call frames.
Tail calling frames go missing in backtraces because the frame of the
caller is reused by the callee. Call site entries allow a debugger to
reconstruct a sequence of (tail) calls which led from one function to
another. This improves backtrace quality. There are limitations: tail
recursion isn't handled, variables within synthetic frames may not
survive to be inspected, etc. This approach is not novel, see:
https://gcc.gnu.org/wiki/summit2010?action=AttachFile&do=get&target=jelinek.pdf
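As a tiny illustration (not from the patch):

// bar tail-calls baz, so at run time baz reuses bar's frame and "bar"
// disappears from a conventional backtrace taken inside baz; a call
// site entry for the tail call lets the debugger re-insert bar as a
// synthetic frame.
__attribute__((noinline)) int baz(int x) { return x + 1; }
__attribute__((noinline)) int bar(int x) { return baz(x); } // tail call
int foo(int x) { return bar(x); }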
This patch adds an IR-level flag (DIFlagAllCallsDescribed) which lowers
to DW_AT_call_all_calls. It adds the minimal amount of DWARF generation
support needed to emit standards-compliant call site entries. For easier
deployment, when the debugger tuning is LLDB, the DWARF requirement is
adjusted to v4.
Testing: Apart from check-{llvm, clang}, I built a stage2 RelWithDebInfo
clang binary. Its dSYM passed verification and grew by 1.4% compared to
the baseline. 151,879 call site entries were added.
rdar://42001377
Differential Revision: https://reviews.llvm.org/D49887
llvm-svn: 343883
Context: Compiler-generated instructions do not have a debug location
assigned to them. However, emitting line-0 records for all of them bloats
the line tables for very little benefit, so we usually avoid doing that.
Not emitting anything will lead to the previous debug location getting
applied to the locationless instructions. This is not desirable at
block begins and after labels. Previously we would simply emit
line-0 records in this case; this patch changes the behavior to do a
forward search for a debug location in these cases before emitting a
line-0 record, to further reduce line table bloat.
Inspired by the discussion in https://reviews.llvm.org/D52862
llvm-svn: 343874