Some files still contained the old University of Illinois Open Source
Licence header. This patch replaces that with the Apache 2 with LLVM
Exception licence.
Differential Revision: https://reviews.llvm.org/D107528
When we build with split DWARF in single-file mode, the .o files contain both "normal" debug sections and .dwo sections, along with relocation sections for the "normal" debug sections.
When we create a DWARF context in DWARFObjInMemory, we process relocations and store them in a map for .debug_info and the other sections.
For the DWO context we also do this for the non-dwo DWARF sections, which I believe is not necessary. This wastes a lot of memory; we observed 70GB of extra memory being used.
I went with a context-sensitive approach where a flag is passed in. I am not sure it is always safe to skip processing relocations for regular debug sections if the object contains .dwo sections.
If it is, an alternative might be to scan the sections in the constructor and, if there are .dwo sections, not process the regular debug ones.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D106624
list for attributes that don't have the loclist class.
Summary: The overflow error occurs when we try to dump the
location list for attributes that do not have the
loclist class, like DW_AT_count and DW_AT_byte_size.
After re-reviewing the entire list, I sorted those
attributes into two groups: one for dumping the location
list and one for dumping the location expression.
Reviewed By: probinson
Differential Revision: https://reviews.llvm.org/D105613
This fixes an assert firing when compiling code that involves 128-bit
integral types.
This would trigger runtime checks similar to this:
```
Assertion failed: getMinSignedBits() <= 64 && "Too many bits for int64_t", file llvm/include/llvm/ADT/APInt.h, line 1646
```
To get around this, we just saturate those big values.
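For illustration, a minimal standalone sketch of the saturation idea (hypothetical helper name, not the exact code from the patch):
```
#include "llvm/ADT/APInt.h"
#include <cstdint>
#include <limits>

// Clamp an arbitrarily wide signed value into the int64_t range instead of
// asserting inside getSExtValue().
static int64_t getSaturatedSignedValue(const llvm::APInt &Value) {
  if (Value.getMinSignedBits() <= 64)
    return Value.getSExtValue(); // fits, no saturation needed
  return Value.isNegative() ? std::numeric_limits<int64_t>::min()
                            : std::numeric_limits<int64_t>::max();
}
```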
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D105320
At most these use the StringRef/Twine wrappers and don't have any implicit uses of std::string.
Move the include down to any cpp implementation where std::string is actually used.
This call would incorrectly overwrite the rnglists section (with the
.debug_rnglists.dwo from the executable, if there was one) instead of
using the correct value (from the .debug_rnglists.dwo in the .dwo file)
that is applied in DWARFUnit::tryExtractDIEsIfNeeded.
Originally committed as 04c203e310
Reverted in 768510632c due to the test
failing when encountering windows directory separators.
Fix the path separator platform issue with a FileCheck pattern {{[/\\]}}
Original commit message:
A followup to the feature added in 69da27c749
that added the optional "start file name" to match "start line" - but this
didn't work with Split DWARF because of the need for the decl file number
resolution code to refer back to the skeleton unit to find its .debug_line
contribution. So this patch adds the necessary infrastructure to track the
skeleton unit corresponding to a split full unit for the purpose of this
lookup.
Mach-O symbol tables can contain symbols with no size, and such symbols were failing to get combined into a single entry. This resulted in many duplicate entries for the same address and made GSYM files larger.
Differential Revision: https://reviews.llvm.org/D105068
llvm-dwarfdump was silent even when the format of DWARF was invalid
and/or llvm-dwarfdump did not understand/support some of the constructs.
This can be pretty confusing, as llvm-dwarfdump is a tool used in the
development of DWARF producers and consumers.
Review comments also by @dblaikie.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D104271
This is a mechanical change. This actually also renames the
similarly named methods in the SmallString class; however, these
methods don't seem to be used outside of the llvm subproject, so
this doesn't break building of the rest of the monorepo.
This patch is to address https://bugs.llvm.org/show_bug.cgi?id=50459.
YAML:455:28: error: GUID strings are 38 characters long
The valid format for a GUID is {XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}
where X is a hex digit (0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F).
The length of the individual components must be: 8, 4, 4, 4, 12.
In some cases, the converted string generated by obj2yaml does not
comply with those lengths. yaml2obj checks that the GUID string is
38 characters long, including the dashes and braces.
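For illustration, a standalone sketch of a check for that layout (not the actual yaml2obj code):
```
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"

// Validate the {XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX} layout described above:
// 38 characters total, braces at the ends, dashes at fixed positions, and hex
// digits everywhere else.
static bool isValidGUIDString(llvm::StringRef S) {
  if (S.size() != 38 || S.front() != '{' || S.back() != '}')
    return false;
  static const size_t DashPositions[] = {9, 14, 19, 24};
  for (size_t I = 1; I + 1 < S.size(); ++I) {
    bool ExpectDash = llvm::is_contained(DashPositions, I);
    if (ExpectDash ? S[I] != '-' : !llvm::isHexDigit(S[I]))
      return false;
  }
  return true;
}
```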
Reviewed By: amccarth
Differential Revision: https://reviews.llvm.org/D103089
The getRelocatedSection interface should not check that the object file is
relocatable, as executable files may have relocations preserved with
`--emit-relocs` linker flag. The relocations are useful in context of post-link
binary analysis for function reference identification. For example, BOLT relies
on relocations to perform function reordering.
Reviewed By: MaskRay, jhenderson
Differential Revision: https://reviews.llvm.org/D102296
There doesn't seem to be a need to support recursive locking,
and a recursive mutex is unnecessarily inefficient.
Differential Revision: https://reviews.llvm.org/D102486
Do the single hash calculation before acquiring the lock, to reduce
lock contention. If Copy is true, and the string was not yet contained
in the StringStorage, use the new address from StringStorage, but
reuse the hash we already calculated.
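A simplified, standalone sketch of the pattern (the real class, members, and hash function differ): compute the hash before taking the lock so the hashing work is not serialized, then reuse it for the insertion.
```
#include <cstdint>
#include <functional>
#include <mutex>
#include <string>
#include <unordered_map>

class ConcurrentStringTable {
  std::mutex Mutex;
  std::unordered_map<uint64_t, std::string> Table; // hash -> stored copy

  static uint64_t hashString(const std::string &S) {
    return std::hash<std::string>()(S); // stand-in for the real hash function
  }

public:
  uint64_t insert(const std::string &S) {
    uint64_t Hash = hashString(S); // hash outside the critical section
    std::lock_guard<std::mutex> Lock(Mutex);
    Table.emplace(Hash, S); // reuse the already-computed hash as the key
    return Hash;            // (hash collisions ignored for brevity)
  }
};
```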
Differential Revision: https://reviews.llvm.org/D102484
In many cases it is helpful to know at what address the resolved function starts.
This patch adds a new StartAddress member to the DILineInfo structure.
Reviewed By: jhenderson, dblaikie
Differential Revision: https://reviews.llvm.org/D102316
Handle PDB writing errors like any other error in LLD: emit an error and
continue. This allows the linker to print timing data and summary data
after linking, which can be helpful for finding PDB size problems. Also
report how large the file would have been.
Example output:
lld-link: error: Output data is larger than 4 GiB. File size would have been 6,937,108,480
lld-link: error: failed to write PDB file ./chrome.dll.pdb
Summary
--------------------------------------------------------------------------------
33282 Input OBJ files (expanded from all cmd-line inputs)
4 PDB type server dependencies
0 Precomp OBJ dependencies
33396931 Input type records
... snip ...
Input File Reading: 59756 ms ( 45.5%)
GC: 7500 ms ( 5.7%)
ICF: 3336 ms ( 2.5%)
Code Layout: 6329 ms ( 4.8%)
PDB Emission (Cumulative): 46192 ms ( 35.2%)
Add Objects: 27609 ms ( 21.0%)
Type Merging: 16740 ms ( 12.8%)
Symbol Merging: 10761 ms ( 8.2%)
Publics Stream Layout: 9383 ms ( 7.1%)
TPI Stream Layout: 1678 ms ( 1.3%)
Commit to Disk: 3461 ms ( 2.6%)
--------------------------------------------------
Total Link Time: 131244 ms (100.0%)
Differential Revision: https://reviews.llvm.org/D102713
This patch introduces source loading and pruning functions.
It will allow using the DWARF embedded source and sharing the same code for JSON printout.
No functional changes.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D102539
The algorithm removing duplicates from the Funcs list used to have
amortized quadratic time complexity because it was potentially
removing each entry using std::vector::erase individually. This
patch now uses an erase-remove idiom with an adapted
removeIfBinary algorithm.
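A generic sketch of the erase-remove approach over a sorted list (the actual patch uses an adapted removeIfBinary helper over GSYM FunctionInfo entries, so the types and predicate here are illustrative):
```
#include <algorithm>
#include <cstdint>
#include <vector>

struct FuncEntry {
  uint64_t Addr;
  uint64_t Size;
};

// Assumes Funcs is already sorted by address. One linear pass plus a single
// erase replaces the per-duplicate vector::erase calls.
void removeDuplicates(std::vector<FuncEntry> &Funcs) {
  auto NewEnd = std::unique(Funcs.begin(), Funcs.end(),
                            [](const FuncEntry &A, const FuncEntry &B) {
                              return A.Addr == B.Addr; // keep the first entry
                            });
  Funcs.erase(NewEnd, Funcs.end());
}
```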
This was probably written under the assumption that such removals are
rare, but there are cases where duplicate entries occur frequently.
In those cases, the actual runtime was very poor, taking hours to
process a single binary of around 1 GiB including debug info. Another
contributing factor was the frequent output of the warning, which is
now removed.
It seems this is particularly an issue with GCC-compiled binaries,
rather than clang-built binaries.
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D102219
This patch adds a JSON output style to llvm-symbolizer to better support CLI automation by providing machine-readable output.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D96883
UnwindTable::parseRows() may return successfully when the CFIProgram has either
no CFI instructions or only DW_CFA_nop instructions, in which case the UnwindRow
return argument is empty. Currently the callers do not check for this case,
which leads to incorrect dumps in the unwind tables, e.g.:
CFA=unspecified
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D101892
When a DIE is extracted manually, the DieArray is empty. When dump is invoked on such a DIE, it tries to extract the children even if the dump options say otherwise, resulting in a crash.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D99698
This patch introduces a DIPrinter interface to be implemented by different output style printers. DIPrinterGNU and DIPrinterLLVM implement the GNU and LLVM output styles, respectively. No functional changes.
This refactoring clarifies and simplifies the code, and makes it easier to add a new output style.
Reviewed By: jhenderson, dblaikie
Differential Revision: https://reviews.llvm.org/D98994
In future patches I will be setting the IsText parameter frequently, so I am refactoring the arguments into the following order. I have also removed the FileSize parameter because it is never used.
```
static ErrorOr<std::unique_ptr<MemoryBuffer>>
getFile(const Twine &Filename, bool IsText = false,
bool RequiresNullTerminator = true, bool IsVolatile = false);
static ErrorOr<std::unique_ptr<MemoryBuffer>>
getFileOrSTDIN(const Twine &Filename, bool IsText = false,
bool RequiresNullTerminator = true);
static ErrorOr<std::unique_ptr<MB>>
getFileAux(const Twine &Filename, uint64_t MapSize, uint64_t Offset,
bool IsText, bool RequiresNullTerminator, bool IsVolatile);
static ErrorOr<std::unique_ptr<WritableMemoryBuffer>>
getFile(const Twine &Filename, bool IsVolatile = false);
```
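For reference, a call site using the new argument order might look like this (the file name is just a placeholder):
```
#include "llvm/Support/MemoryBuffer.h"
#include <memory>
#include <system_error>

std::unique_ptr<llvm::MemoryBuffer> openAsText() {
  // Only IsText is overridden; the remaining parameters keep their defaults.
  llvm::ErrorOr<std::unique_ptr<llvm::MemoryBuffer>> BufOrErr =
      llvm::MemoryBuffer::getFile("input.txt", /*IsText=*/true);
  if (std::error_code EC = BufOrErr.getError()) {
    (void)EC; // handle the error as appropriate
    return nullptr;
  }
  return std::move(*BufOrErr);
}
```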
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D99182
Maybe there's a way to make them work, but until I've investigated
whether tools can consume large PDBs, erroring out is better than slowly
and silently consuming all available RAM due to internal invariants
being violated.
(Patch to make writing larger files work at
https://bugs.chromium.org/p/chromium/issues/detail?id=1179085#c25
but I haven't had time to check if windbg & co can consume these
large PDBs. llvm-pdbutil can't, but we can fix that one at least :) )
Differential Revision: https://reviews.llvm.org/D98788
The LINK_COMPONENTS dependency between DebugInfoCodeView and
DebugInfoMSF is unnecessary. Breaking them would allow a more
fine-controlled distribution.
Patch By: dangyi
Differential Revision: https://reviews.llvm.org/D98465
This reverts commit bacf9cf2c5 and
reinstates commit 1a9bd5b813.
Reverting this commit did not appear to make the problem go away, so we
can go ahead and reland it.
In some cases, broken or invalid debug info could cause a crash in DWARFUnit::getInlinedChainForAddress while parsing a chain of inlined functions. This patch fixes that issue.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D98119
`AttributeSpec` does not contain values while `DWARFAttribute` already
does. Therefore one no longer needs to pass `uint64_t *OffsetPtr`.
Differential Revision: https://reviews.llvm.org/D98194
D81469 introduced a check to error on a CIE version other than 1
for eh_frame, but older compilers mistakenly create binaries
with this version set to 3 for DWARF4 or 4 for DWARF5. Move the check
to dump time instead of eh_frame parse time, so we can be tolerant
of older binaries.
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D97830
Their names don't convey much information, so they should be excluded.
The behavior matches addr2line.
Differential Revision: https://reviews.llvm.org/D96617
This patch provides additional interfaces to LLVMSymbolizer
that work directly on object files. There is an existing
method, symbolizeCode, which takes an object file; this patch provides
similar overloads for symbolizeInlinedCode, symbolizeData, and
symbolizeFrame. This can be useful for clients that already have
in-memory object files to symbolize.
Patch By: pvellien (praveen velliengiri)
Reviewed By: scott.linder
Differential Revision: https://reviews.llvm.org/D95232
This fixes coff-dwarf.test on some build bots.
The test relies on the sort order and prefers main (StorageClass: External) to .text (StorageClass: Static).
Before d08bd13ac8, only `SymbolRef::ST_Function`
symbols were used for .symtab symbolization. That commit added a `"DATA"` mode
to llvm-symbolizer which used `SymbolRef::ST_Data` symbols for symbolization.
Since function and data symbols have different addresses, we don't need to
differentiate the two modes. This patch unifies the two modes to simplify the
code.
`"DATA"` is used by `compiler-rt/lib/sanitizer_common/sanitizer_symbolizer_libcdep.cpp`.
`check-hwasan` and `check-tsan` have runtime tests.
Differential Revision: https://reviews.llvm.org/D96322
The ELF spec says:
> STT_FILE: Conventionally, the symbol's name gives the name of the source file associated with the object file. A file symbol has STB_LOCAL binding, its section index is SHN_ABS, and it precedes the other STB_LOCAL symbols for the file, if it is present.
For a local symbol, the preceding STT_FILE symbol is almost always in the same
file[1]. GNU addr2line uses this heuristic to retrieve the filename associated
with a local symbol (e.g. internal linkage functions in C/C++).
GNU addr2line can assign an STT_FILE filename to a non-local symbol, too, but the trick
only works if no regular symbol precedes the STT_FILE. This patch does not implement that corner case
(it is not useful for most executables, which are built from more than one file).
In case of filename mismatch between .debug_line & .symtab, arbitrarily make .debug_line win.
[1]: LLD does not synthesize STT_FILE symbols
(https://bugs.llvm.org/show_bug.cgi?id=48023 see also
https://sourceware.org/bugzilla/show_bug.cgi?id=26822). An assembly file
without `.file` directives can cause mis-attribution. This is an edge case.
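A simplified sketch of this heuristic (standalone code, not the llvm-symbolizer implementation): walk the symbol table in order, remember the most recent STT_FILE name, and attach it to subsequent local symbols.
```
#include <string>
#include <vector>

struct Sym {
  std::string Name;
  bool IsFile;          // STT_FILE
  bool IsLocal;         // STB_LOCAL
  std::string FileName; // filled in by the pass below
};

void attachFileNames(std::vector<Sym> &Symtab) {
  std::string CurrentFile;
  for (Sym &S : Symtab) {
    if (S.IsFile) {
      CurrentFile = S.Name; // an STT_FILE symbol names the source file
      continue;
    }
    if (S.IsLocal)
      S.FileName = CurrentFile; // locals belong to the preceding STT_FILE
  }
}
```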
Differential Revision: https://reviews.llvm.org/D95927
In assembly files, omitting `.type foo,@function` is common. Such functions have
type `STT_NOTYPE` and llvm-symbolizer reports `??` for them.
An ifunc symbol usually has an associated resolver symbol which is defined at
the same address. Returning either one is fine for symbolization. The resolver
symbol may not end up in the symbol table if (in the object file) a `.L` local
symbol is used, or (in the linked image) .symtab is stripped while .dynsym is retained.
This patch allows ELF STT_NOTYPE/STT_GNU_IFUNC symbols for .symtab symbolization.
I have left TODO in the test files for an unimplemented STT_FILE heuristic.
Differential Revision: https://reviews.llvm.org/D95916
GCC warning:
```
/llvm-project/llvm/lib/DebugInfo/DWARF/DWARFDebugFrame.cpp: In member function ‘llvm::Expected<long unsigned int> llvm::dwarf::CFIProgram::Instruction::getOperandAsUnsigned(const llvm::dwarf::CFIProgram&, uint32_t) const’:
/llvm-project/llvm/lib/DebugInfo/DWARF/DWARFDebugFrame.cpp:425:1: warning: control reaches end of non-void function [-Wreturn-type]
425 | }
| ^
/llvm-project/llvm/lib/DebugInfo/DWARF/DWARFDebugFrame.cpp: In member function ‘llvm::Expected<long int> llvm::dwarf::CFIProgram::Instruction::getOperandAsSigned(const llvm::dwarf::CFIProgram&, uint32_t) const’:
/llvm-project/llvm/lib/DebugInfo/DWARF/DWARFDebugFrame.cpp:477:1: warning: control reaches end of non-void function [-Wreturn-type]
477 | }
| ^
```
This patch adds the ability to evaluate the state machine for CIE and FDE unwind objects and produce a UnwindTable with all UnwindRow objects needed to unwind registers. It will also dump the UnwindTable for each CIE and FDE when dumping DWARF .debug_frame or .eh_frame sections in llvm-dwarfdump or llvm-objdump. This allows users to see what the unwind rows actually look like for a given CIE or FDE instead of just seeing a list of opcodes.
This patch adds new classes: UnwindLocation, RegisterLocations, UnwindRow, and UnwindTable.
UnwindLocation is a class that describes how to unwind a register or Call Frame Address (CFA).
RegisterLocations is a class that tracks registers and their UnwindLocations. It gets populated when parsing the DWARF call frame instruction opcodes for an unwind row. The registers are mapped from their register numbers to the UnwindLocation in a map.
UnwindRow contains the result of evaluating a row of DWARF call frame instructions for the CIE, or a row from an FDE. The CIE can produce a set of initial instructions that each FDE that points to that CIE will use as the seed for the state machine when parsing FDE opcodes. An UnwindRow for a CIE will not have a valid address, while an UnwindRow for an FDE will have a valid address.
The UnwindTable is a class that contains a sorted (by address) vector of UnwindRow objects and is the result of parsing all opcodes in a CIE or FDE. Parsing a CIE should produce a UnwindTable with a single row. Parsing an FDE will produce a UnwindTable with one or more UnwindRow objects, all of which have valid addresses. The rows in the UnwindTable will be sorted from lowest address to highest after parsing the state machine, or an error will be returned if the table isn't sorted. To parse a UnwindTable, clients can use the following methods:
```
static Expected<UnwindTable> UnwindTable::create(const CIE *Cie);
static Expected<UnwindTable> UnwindTable::create(const FDE *Fde);
```
A valid table will be returned if the DWARF call frame instruction opcodes have no encoding errors. There are a few things that can go wrong during the evaluation of the state machine and these create functions will catch and return them.
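A usage sketch based on the interface above (error handling follows the usual Expected<T> pattern; the surrounding function and output stream are assumptions for illustration):
```
#include "llvm/DebugInfo/DWARF/DWARFDebugFrame.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/raw_ostream.h"

void parseFDEUnwindRows(const llvm::dwarf::FDE *Fde, llvm::raw_ostream &OS) {
  llvm::Expected<llvm::dwarf::UnwindTable> TableOrErr =
      llvm::dwarf::UnwindTable::create(Fde);
  if (!TableOrErr) {
    // An encoding error in the CFI opcodes is reported through the Error.
    OS << "error: " << llvm::toString(TableOrErr.takeError()) << "\n";
    return;
  }
  const llvm::dwarf::UnwindTable &Table = *TableOrErr;
  (void)Table; // rows are sorted by address and ready to be dumped or queried
}
```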
Differential Revision: https://reviews.llvm.org/D89845
This reverts commit 5b7aef6eb4 and relands
6529d7c5a4.
The ASan error was debugged and determined to be the fault of an invalid
object file input in our test suite, which was fixed by my last change.
LLD's project policy is that it assumes input objects are valid, so I
have added a comment about this assumption to the relocation bounds
check.
This is a pretty classic optimization. Instead of processing symbol
records and copying them to temporary storage, do a first pass to
measure how large the module symbol stream will be, and then copy the
data into place in the PDB file. This requires deferring relocation until
much later, which accounts for most of the complexity in this patch.
This patch avoids copying the contents of all live .debug$S sections
into heap memory, which is worth about 20% of private memory usage when
making PDBs. However, this is not an unmitigated performance win,
because it can be faster to read dense, temporary, heap data than it is
to iterate symbol records in object file backed memory a second time.
Results on release chrome.dll:
peak mem: 5164.89MB -> 4072.19MB (-1,092.7MB, -21.2%)
wall-j1: 0m30.844s -> 0m32.094s (slightly slower)
wall-j3: 0m20.968s -> 0m20.312s (slightly faster)
wall-j8: 0m19.062s -> 0m17.672s (meaningfully faster)
I gathered similar numbers for a debug, component build of content.dll
in Chrome, and the performance impact of this change was in the noise.
The memory usage reduction was visible and similar.
Because of the new parallelism in the PDB commit phase, more cores makes
the new approach faster. I'm assuming that most C++ developer machines
these days are at least quad core, so I think this is a win.
Differential Revision: https://reviews.llvm.org/D94267
Fixes an issue where, if a line section doesn't start with a line number,
the addresses at the beginning of the section don't have line numbers.
For example, for a line section like this
```
0001:00000010-00000014, line/column/addr entries = 1
7 00000013 !
```
a line number wouldn't be found for addresses from 10 to 12.
This matches behavior when using the DIA SDK.
Differential Revision: https://reviews.llvm.org/D93306
The existing code handles this correctly and I checked that the code
in NativeInlineSiteSymbol also handles this correctly, but it was
wrong in the NativeFunctionSymbol code.
Differential Revision: https://reviews.llvm.org/D92134
llvm-symbolizer used to use the DIA SDK for symbolization on
Windows; this patch switches to using native symbolization, which was
implemented recently.
Users can still make the symbolizer use DIA by adding the `-dia` flag
in the LLVM_SYMBOLIZER_OPTS environment variable.
Differential Revision: https://reviews.llvm.org/D91814
This allows reusing the RelocationResolver from code
that doesn't want to deal with the `RelocationRef` class.
I am going to use it in llvm-readobj. See the description
of D91530 for more details.
Differential revision: https://reviews.llvm.org/D91533
In the current state, if getFromHash(0) is called and there's no CU with
dwo_id=0, the lookup will stop at an empty slot, then the check
`Rows[H].getSignature() != S` won't cause the lookup to fail and return
a nullptr (as it should), because the empty slot has a 0 in the
signature field, and a pointer to the empty slot will be incorrectly
returned.
This patch fixes this by using the index field in the hash entry to
check for empty slots: signature = 0 can match a valid hash but
according to the spec the index for an occupied slot will always be
non-zero.
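A simplified sketch of the corrected probing loop (field and helper names are illustrative; the real code lives in DWARFUnitIndex::getFromHash). The hashing scheme follows the DWARF5 unit index lookup: H = S & Mask and a secondary hash H' = ((S >> 32) & Mask) | 1, where Mask is the bucket count minus one.
```
#include <cstdint>

struct Entry {
  uint64_t Signature; // dwo_id
  uint32_t Index;     // 1-based row index; 0 means the slot is empty
};

const Entry *getFromHash(const Entry *Rows, uint64_t Mask, uint64_t S) {
  uint64_t H = S & Mask;
  uint64_t HP = ((S >> 32) & Mask) | 1; // secondary hash, always odd
  // Stop probing at an *empty* slot (Index == 0), not at Signature == 0,
  // since 0 is a valid signature value.
  while (Rows[H].Index != 0 && Rows[H].Signature != S)
    H = (H + HP) & Mask;
  if (Rows[H].Index == 0)
    return nullptr; // no unit with this signature
  return &Rows[H];
}
```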
Differential Revision: https://reviews.llvm.org/D91670
No longer rely on an external tool to build the llvm component layout.
Instead, leverage the existing `add_llvm_componentlibrary` cmake function and
introduce `add_llvm_component_group` to accurately describe component behavior.
These functions store extra properties in the created targets. These properties
are processed once all components are defined to resolve library dependencies
and produce the header expected by llvm-config.
Differential Revision: https://reviews.llvm.org/D90848
When compiling for Windows on Arm, the amd64 debug interface from the Visual
Studio SDK is used, as the CMake check currently only distinguishes between x86 and
amd64 by checking the pointer size. Instead, we can get the target
architecture of the compiler and check that to distinguish between
architectures.
We used to only emit static const data members in CodeView as
S_CONSTANTS when they were used; this patch makes it so they are always emitted.
This changes CodeViewDebug.cpp to find the static const members from the
class debug info instead of creating DIGlobalVariables in the IR
whenever a static const data member is used.
Bug: https://bugs.llvm.org/show_bug.cgi?id=47580
Differential Revision: https://reviews.llvm.org/D89072
This reverts commit 504615353f.
We used to only emit static const data members in CodeView as
S_CONSTANTS when they were used; this patch makes it so they are always emitted.
I changed CodeViewDebug.cpp to find the static const members from the
class debug info instead of creating DIGlobalVariables in the IR
whenever a static const data member is used.
Bug: https://bugs.llvm.org/show_bug.cgi?id=47580
Differential Revision: https://reviews.llvm.org/D89072
It seems users have enough different uses of the symbolizer, possibly with
unknown binaries and offsets, that "best effort" behavior
is all that's expected of llvm-symbolizer - so even erroring on unknown
executables and out-of-bounds offsets might not be suitable.
This reverts commit 1de0199748.
This reverts commit a7b209a6d4.
This reverts commit 338dd138ea.
Create the LLVM / CodeView register mappings for the 32-bit ARM Windows targets.
Reviewed By: compnerd
Differential Revision: https://reviews.llvm.org/D89622
There's no way to know whether there's a loclist contribution to parse
if there's no loclistx encoding - and if there is one, there's no need
to walk back from the loclist_base (or, in the case of
info.dwo/loclists.dwo, starting at 0 in the contribution) to parse the
header, instead rely on the DWARF32/64 and address size in the CU
that's already available.
This would come up in split DWARF (non-split wouldn't try to read a
loclist header in the absence of a loclist_base) when one unit had
location lists and another did not (because the loclists.dwo section
would be non-empty in that case - in the case where it's empty the
parsing would silently skip).
Simplify the testing a bit, rather than needing a whole dwp, etc - by
creating a malformed loclists.dwo section (and use single file Split
DWARF) that would trip up any attempt to parse it - but no attempt
should be made.
Register context information was already being passed into the DWARFDebugFrame code that dumps unwind information, but it wasn't being used. This change adds the ability to dump register names if a valid MC register context was passed in and it knows about the register. Updated the tests to use the newly returned register names.
Differential Revision: https://reviews.llvm.org/D88767
It's not possible to do this in complete generality - a CU using a
sec_offset DW_AT_ranges has no way of knowing where its rnglists
contribution starts, so should not attempt to parse any full rnglist
table/header to do so. And even using FORM_rnglistx there's no need to
parse the header - the offset can be computed using the CU's DWARF
format (32 or 64) to compute offset entry sizes, and then the list
parsed at that offset without ever trying to find a rnglist contribution
header immediately prior to the rnglists_base.
Stored Error objects have to be checked, even if they are success
values.
This reverts commit 8d250ac3cd.
Relands commit 49b3459930655d879b2dc190ff8fe11c38a8be5f.
Original commit message:
-----------------------------------------
This makes type merging much faster (-24% on chrome.dll) when multiple
threads are available, but it slightly increases the time to link (+10%)
when /threads:1 is passed. With only one more thread, the new type
merging is faster (-11%). The output PDB should be identical to what it
was before this change.
To give an idea, here is the /time output placed side by side:
BEFORE | AFTER
Input File Reading: 956 ms | 968 ms
Code Layout: 258 ms | 190 ms
Commit Output File: 6 ms | 7 ms
PDB Emission (Cumulative): 6691 ms | 4253 ms
Add Objects: 4341 ms | 2927 ms
Type Merging: 2814 ms | 1269 ms -55%!
Symbol Merging: 1509 ms | 1645 ms
Publics Stream Layout: 111 ms | 112 ms
TPI Stream Layout: 764 ms | 26 ms trivial
Commit to Disk: 1322 ms | 1036 ms -300ms
----------------------------------------- --------
Total Link Time: 8416 ms 5882 ms -30% overall
The main source of the additional overhead in the single-threaded case
is the need to iterate all .debug$T sections up front to check which
type records should go in the IPI stream. See fillIsItemIndexFromDebugT.
With changes to the .debug$H section, we could pre-calculate this info
and eliminate the need to do this walk up front. That should restore
single-threaded performance back to what it was before this change.
This change will cause LLD to be much more parallel than it used to, and
for users who do multiple links in parallel, it could regress
performance. However, when the user is only doing one link, it's a huge
improvement. In the future, we can use NT worker threads to avoid
oversaturating the machine with work, but for now, this is such an
improvement for the single-link use case that I think we should land
this as is.
Algorithm
----------
Before this change, we essentially used a
DenseMap<GloballyHashedType, TypeIndex> to check if a type has already
been seen, and if it hasn't been seen, insert it now and use the next
available type index for it in the destination type stream. DenseMap
does not support concurrent insertion, and even if it did, the linker
must be deterministic: it cannot produce different PDBs by using
different numbers of threads. The output type stream must be in the same
order regardless of the order of hash table insertions.
In order to create a hash table that supports concurrent insertion, the
table cells must be small enough that they can be updated atomically.
The algorithm I used for updating the table using linear probing is
described in this paper, "Concurrent Hash Tables: Fast and General(?)!":
https://dl.acm.org/doi/10.1145/3309206
The GHashCell in this change is essentially a pair of 32-bit integer
indices: <sourceIndex, typeIndex>. The sourceIndex is the index of the
TpiSource object, and it represents an input type stream. The typeIndex
is the index of the type in the stream. Together, we have something like
a ragged 2D array of ghashes, which can be looked up as:
tpiSources[tpiSrcIndex]->ghashes[typeIndex]
By using these side tables, we can omit the key data from the hash
table, and keep the table cell small. There is a cost to this: resolving
hash table collisions requires many more loads than simply looking at
the key in the same cache line as the insertion position. However, most
supported platforms should have a 64-bit CAS operation to update the
cell atomically.
To make the result of concurrent insertion deterministic, the cell
payloads must have a priority function. Defining one is pretty
straightforward: compare the two 32-bit numbers as a combined 64-bit
number. This means that types coming from inputs earlier on the command
line have a higher priority and are more likely to appear earlier in the
final PDB type stream than types from an input appearing later on the
link line.
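To make this concrete, a heavily condensed sketch of the cell layout and the priority-based atomic update described above (illustrative only; the real LLD GHashCell also carries the isItem bit and sits inside a probing loop that first matches the ghash):
```
#include <atomic>
#include <cstdint>

// Packed <sourceIndex, typeIndex>. A smaller packed value means the type came
// from an earlier input, so it has higher priority.
struct GHashCell {
  uint64_t Data = 0; // 0 means the slot is empty (simplification)
  GHashCell() = default;
  GHashCell(uint32_t SrcIdx, uint32_t TypeIdx)
      : Data((uint64_t(SrcIdx) << 32) | TypeIdx) {}
};

// Claim a slot with a 64-bit CAS: keep trying while the slot is empty or holds
// a lower-priority (larger) cell; stop as soon as a higher-priority cell wins.
void insertCell(std::atomic<uint64_t> &Slot, GHashCell NewCell) {
  uint64_t Cur = Slot.load(std::memory_order_relaxed);
  while (Cur == 0 || NewCell.Data < Cur) {
    if (Slot.compare_exchange_weak(Cur, NewCell.Data))
      return; // inserted, or replaced a lower-priority cell
    // CAS failure reloads Cur with the value another thread wrote; re-check.
  }
}
```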
After table insertion, the non-empty cells in the table can be copied
out of the main table and sorted by priority to determine the ordering
of the final type index stream. At this point, item and type records
must be separated, either by sorting or by splitting into two arrays,
and I chose sorting. This is why the GHashCell must contain the isItem
bit.
Once the final PDB TPI stream ordering is known, we need to compute a
mapping from source type index to PDB type index. To avoid starting over
from scratch and looking up every type again by its ghash, we save the
insertion position of every hash table insertion during the first
insertion phase. Because the table does not support rehashing, the
insertion position is stable. Using the array of insertion positions
indexed by source type index, we can replace the source type indices in
the ghash table cells with the PDB type indices.
Once the table cells have been updated to contain PDB type indices, the
mapping for each type source can be computed in parallel. Simply iterate
the list of cell positions and replace them with the PDB type index,
since the insertion positions are no longer needed.
Once we have a source to destination type index mapping for every type
source, there are no more data dependencies. We know which type records
are "unique" (not duplicates), and what their final type indices will
be. We can do the remapping in parallel, and accumulate type sizes and
type hashes in parallel by type source.
Lastly, TPI stream layout must be done serially. Accumulate all the type
records, sizes, and hashes, and add them to the PDB.
Differential Revision: https://reviews.llvm.org/D87805
Since DWARFv5 places TUs in debug_info, some of DWARFContext's APIs have
become a bit erroneous, including TUs in the CU list by accident.
Correct that by providing compile_units (& dwo_compile_units) that
filter out the type units from the debug_info units.
Differential Revision: https://reviews.llvm.org/D87935
Flag DIEs that have DW_CHILDREN_yes set in their abbreviation but don't
actually have any children.
rdar://59809554
Differential revision: https://reviews.llvm.org/D88048
Most clients only need CVType and CVSymbol, not structs for every type
and symbol. Move CVSymbol and CVType to CVRecord.h to accomplish this.
Update some of the common headers that need CVSymbol and CVType to use
the new location.
When concatenating directory with filename in getFilenameByIndex, we
might end up with a path that contains extra dots. For example, if the
input is /path and ./example, we would return /path/./example. Run
sys::path::remove_dots on the output to eliminate unnecessary dots.
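The gist of the fix as a standalone sketch (not the exact getFilenameByIndex code):
```
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Path.h"
#include <string>

std::string joinAndClean(llvm::StringRef Dir, llvm::StringRef File) {
  llvm::SmallString<128> Path = Dir;   // e.g. "/path"
  llvm::sys::path::append(Path, File); // "./example" -> "/path/./example"
  llvm::sys::path::remove_dots(Path);  // -> "/path/example"
  return Path.str().str();
}
```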
Differential Revision: https://reviews.llvm.org/D87657
Since a function might have portions of its code coming from multiple
different files, "start line" is ambiguous (it can't just be resolved
relative to the file/line specified). Add start file to disambiguate it.
When llvm-dwarfdump encounters a string with no null terminator, we should
warn the user about it rather than ignore it and print nothing.
Before this patch, when llvm-dwarfdump dumps a .debug_str section whose
content is "abc", it prints:
```
.debug_str contents:
```
After this patch:
```
.debug_str contents:
warning: no null terminated string at offset 0x0
```
Reviewed By: jhenderson, MaskRay
Differential Revision: https://reviews.llvm.org/D86998
This patch adds a helper function, DumpStrSection, to simplify the code.
In addition, non-printable characters in the debug_str and debug_str.dwo
sections are now printed as escaped characters.
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D86918
Parsing DWARFv5 debug_loclist offsets when a CU is parsed is weighing
down memory usage of symbolizers that don't need to parse this data at
all. There's not much benefit to caching these anyway - since they are
O(1) lookup and reading once you know where the offset list starts (and
can do bounds checking with the offset list size too).
In general, I think it might be time to start paying down some of the
technical debt of loc/loclist/range/rnglist parsing to try to unify it a
bit more.
eg:
* Currently DWARFUnit has: RangeSection, RangeSectionBase, LocSection,
LocSectionBase, LocTable, RngListTable, LoclistTableHeader (be nice if
these were all wrapped up in two variables - one for loclists, one for
rnglists)
* rnglists and loclists are handled differently (see:
LoclistTableHeader, but no RnglistTableHeader)
* maybe all these types could be less stateful - lazily parse what they
need to, even reparsing rather than caching because it doesn't seem
too expensive, for instance. (though admittedly so long as it's
constant cost/overhead per compilation that's probably adequate)
* Maybe implementing and using a DWARFDataExtractor that can be
sub-ranged (so we could slice it up to just the single contribution) -
though maybe that's not so useful because loc/ranges need to refer to
it by absolute, not contribution-relative mechanisms
Differential Revision: https://reviews.llvm.org/D86110
dumpStringOffsetsSection() expects the size of a contribution to be
correctly aligned. The patch adds the corresponding verifications for
pre-v5 cases.
Differential Revision: https://reviews.llvm.org/D85739
Note that DWARFUnit::getAbbreviations() returns nullptr if the
abbreviations could not be read, but callers used the returned
pointer without checking.
Differential Revision: https://reviews.llvm.org/D85738
Allow the GNU .debug_macro extension to be parsed and printed by
llvm-dwarfdump. In an upcoming patch support will be added for emitting
that format also.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D82974
Although the DWARF specification states that .debug_aranges entries
can't have length zero, these can occur in the wild. There's no
particular reason to enforce this part of the spec, since functionally
they have no impact. The patch removes the error and introduces a new
warning for premature terminator entries which does not stop parsing.
This is a relanding of cb3a598c87, adding the missing obj2yaml part
that was needed.
Fixes https://bugs.llvm.org/show_bug.cgi?id=46805. See also
https://reviews.llvm.org/D71932 which originally introduced the error.
Reviewed by: ikudrin, dblaikie, Higuoxing
Differential Revision: https://reviews.llvm.org/D85313
Although the DWARF specification states that .debug_aranges entries
can't have length zero, these can occur in the wild. There's no
particular reason to enforce this part of the spec, since functionally
they have no impact. The patch removes the error and introduces a new
warning for premature terminator entries which does not stop parsing.
Fixes https://bugs.llvm.org/show_bug.cgi?id=46805. See also
https://reviews.llvm.org/D71932 which originally introduced the error.
Reviewed by: ikudrin, dblaikie
Differential Revision: https://reviews.llvm.org/D85313
We already need to include raw_ostream.h, also add missing StringRef.h and cstdint implicit dependencies.
Remove unnecessary includes from PDBExtras.cpp
LTO builds have been creating invalid DWARF and one of the errors was a file index that was out of bounds. "llvm-dwarfdump --verify" will check all file indexes for line tables already, but there are no checks for the validity of file indexes in attributes.
The verification will check, when there is a DW_AT_decl_file/DW_AT_call_file, that:
- there is a line table for the compile unit
- the file index is valid
- the encoding is appropriate
Tests are added that test all of the above conditions.
Differential Revision: https://reviews.llvm.org/D84817
-Use the actual sect/offset to keep track of symbols in the cache so they don't get created multiple times with different addresses.
-Remove getSymTag from PDBFunctionSymbol/PDBPublicSymbol because it's already implemented in the base class
-Merge the symbolizer test files for DIA and native, since the tests are the same.
-Implement getCompilandId for NativeLineNumber
Reviewed By: amccarth
Differential Revision: https://reviews.llvm.org/D84208
DWARFListTableHeader::length() handles the zero value of HeaderData.Length
in a special way, which makes the result different from the calculated
value of FullLength, which leads to triggering an assertion. The patch
moves the assertion a bit later, to a point where `FullLength` has
already been checked against the minimum allowed value.
Differential Revision: https://reviews.llvm.org/D82886
When building in Debug on Windows-MSVC after b7402edce3, a lot of tests were failing because we were dereferencing an element past the end of HashRecords. This happened towards the end of the table, in unused slots.
The patch adds checking for various potential issues in parsing name
lookup tables and reporting them as recoverable errors, similarly as we
do for other tables.
Differential Revision: https://reviews.llvm.org/D83050
The parsing method did not check reading errors and might easily fall
into an infinite loop on an invalid input because of that.
Differential Revision: https://reviews.llvm.org/D83049
This adds the --debug-vars option to llvm-objdump, which prints
locations (registers/memory) of source-level variables alongside the
disassembly based on DWARF info. A vertical line is printed for each
live-range, with a label at the top giving the variable name and
location, and the position and length of the line indicating the program
counter range in which it is valid.
Differential revision: https://reviews.llvm.org/D70720
There are following issues with `CFIProgram::parse` code:
1) Invalid CFI opcodes were never tested, and currently a test would fail
when `LLVM_ENABLE_ABI_BREAKING_CHECKS` is enabled. It happens because
the `DataExtractor::Cursor C` remains unchecked when the
"Invalid extended CFI opcode" error is reported:
```
.eh_frame section at offset 0x1128 address 0x0:
Program aborted due to an unhandled Error:
Error value was Success. (Note: Success values must still be checked prior to being destroyed).
```
2) It is impossible to reach the "Invalid primary CFI opcode" error with the current code.
There are 3 possible primary opcode values and all of them are handled. Hence this error
should be replaced with llvm_unreachable.
3) Errors currently reported are upper-case.
This patch refines the code in the `CFIProgram::parse` method to fix all issues mentioned
and adds unit tests for all possible invalid extended CFI opcodes.
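For reference, the general shape of the Cursor error-handling pattern that the fix relies on (a generic illustration, not the CFIProgram code itself): every read goes through the cursor, and the accumulated Error is returned, which also marks the cursor as checked.
```
#include "llvm/Support/DataExtractor.h"
#include "llvm/Support/Error.h"
#include <cstdint>

llvm::Error readTwoBytes(llvm::DataExtractor Data, uint8_t &A, uint8_t &B) {
  llvm::DataExtractor::Cursor C(0);
  A = Data.getU8(C);
  B = Data.getU8(C); // reads after a failure are no-ops and return 0
  // Returning takeError() both reports the failure and "checks" the cursor, so
  // no unchecked-error assertion fires under LLVM_ENABLE_ABI_BREAKING_CHECKS.
  return C.takeError();
}
```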
Differential revision: https://reviews.llvm.org/D82868
Previously, the debug line parser would keep attempting to read data
even if it had run out of data to read. This meant errors in parsing
would often end up being reported as something else, such as an unknown
version or malformed directory/filename table. This patch fixes the
issues by using the Cursor API to capture errors.
Reviewed by: labath
Differential Revision: https://reviews.llvm.org/D83043
This reduces peak memory on my test case from 1960.14MB to 1700.63MB
(-260MB, -13.2%) with no measurable impact on CPU time. I'm currently
working with a publics stream that is about 277MB. Before this change,
we would allocate 277MB of heap memory, serialize publics into them,
hold onto that heap memory, open the PDB, and commit into it. After
this change, we defer the serialization until commit time.
In the last change I made to public writing, I re-sorted the list of
publics multiple times in place to avoid allocating new temporary data
structures. Deferring serialization until later requires that we don't
reorder the publics. Instead of sorting the publics, I partially
construct the hash table data structures, store a publics index in them,
and then sort the hash table data structures. Later, I replace the index
with the symbol record offset.
This change also addresses a FIXME and moves the list of global and
public records from GSIHashStreamBuilder to GSIStreamBuilder. Now that
publics aren't being serialized, it makes even less sense to store them
as a list of CVSymbol records. The hash table used to deduplicate
globals is moved as well, since that is specific to globals, and not
publics.
Reviewed By: aganea, hans
Differential Revision: https://reviews.llvm.org/D81296
Currently when the .eh_frame section is truncated so that
CFI instructions can't be read, it is possible to enter
an infinite loop.
It happens because `CFIProgram::parse` does not handle errors properly.
This patch fixes the issue.
Differential revision: https://reviews.llvm.org/D82017
Previously, if there was an error whilst parsing the operands of an
extended opcode, the operands would be treated as zero and printed. This
could potentially be slightly confusing. This patch changes the
behaviour to print the raw bytes instead.
Reviewed by: ikudrin
Differential Revision: https://reviews.llvm.org/D81570
Summary: Previous code would try to verify DW_AT_ranges, and if any ranges overlapped, it would stop attributing any ranges after that point to the DIE, which caused incorrect errors reporting that a DIE's address ranges were not contained in the parent DIE's ranges. Added a fix and a test.
Reviewers: aprantl, labath, probinson, JDevlieghere, jhenderson
Subscribers: hiraditya, MaskRay, cmtice, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79962
The current LLVM implementation uses `MCAsmInfo::CodePointerSize` as addr_size when emitting DWARF data. llvm-dwarfdump, on the other hand, handles `addr_size`s of 4 and 8 properly and considers all other sizes an error. This works for most mainline targets, except for MSP430 and AVR.
msp430-gcc v8.3.1 emits DWARF32 with addr_size = 4 (DWARF32 does not imply addr_size = 4, 32 refers to internal offset width of 4 bytes) that is handled by llvm-dwarfdump already. Still, emitting 2-byte target pointers on MSP430 seems correct as well (but not for MSP430X that is supported by msp430-gcc but not by LLVM and has 20-bit address space).
This patch makes it possible for MSP430 debug info support to be tested with llvm-dwarfdump.
Differential Revision: https://reviews.llvm.org/D82055
This is a natural extension of the previous changes to use the Cursor
class independently in the standard and extended opcode paths, and in
turn allows delaying error handling until the entire line has been
printed in verbose mode, removing interleaved output in some cases.
Reviewed by: MaskRay, JDevlieghere
Differential Revision: https://reviews.llvm.org/D81562
Standard opcodes usually have ULEB128 arguments, so it is generally not
possible to recover from such errors. This patch causes the parser to
stop parsing the table in such situations.
Also don't emit the operands or add data to the table if there is an
error reading these opcodes.
Reviewed by: JDevlieghere
Differential Revision: https://reviews.llvm.org/D81470
Summary:
This makes the code easier to reason about, as it will behave the same
way regardless of whether there is any more data coming after the
presumed end of the prologue.
Reviewers: jhenderson, dblaikie, probinson, ikudrin
Subscribers: hiraditya, MaskRay, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D77557
The verbose printing of unrecognised standard opcodes was broken in
multiple ways (additional blank lines, a closing parenthesis without
opening parenthesis and so on). This patch fixes it, and makes the
output more consistent with other opcodes.
The new line printing for debug line verbose output was inconsistent.
For new rows in the matrix, a blank line followed, whilst the
DW_LNS_copy opcode actually resulted in two blank lines. There was also
potential inconsistency in the blank lines at the end of the table. This
patch mostly resolves these issues - no blank lines appear in the output
except for a single line after the prologue and at table end to separate
it from any subsequent table, plus some instances after error messages.
Also add a unit test for verbose output to test the fine details of new
line placement and other aspects of verbose output.
Reviewed by: dblaikie
Differential Revision: https://reviews.llvm.org/D81102
Verbose and non-verbose parsing of .debug_line produced their output at
different points in the program. The most obvious impact of this was
that error messages were produced at different times, but it also
potentially reduced what clients could do by customising the stream or
warning/error handlers.
This change makes the two variants consistent by printing non-verbose
output inline, the same as verbose output.
Testing of the error messages has been modified to check the messages
always appear in the same location to illustrate the behaviour.
Reviewed by: JDevlieghere, dblaikie, MaskRay, labath
Differential Revision: https://reviews.llvm.org/D80989
The flushes previously existed to help ensure consistent error message
output when stdout and stderr were passed to the same location. This is
no longer necessary as errs() is now tied to outs().
Reviewed by: dblaikie, MaskRay, JDevlieghere, labath
Differential Revision: https://reviews.llvm.org/D80803
Previously, if an extended opcode was truncated, it would manifest as an
"unexpected line op length error" which wasn't quite accurate. This
change checks for errors any time data is read whilst parsing an
extended opcode, and reports any errors detected.
Reviewed by: MaskRay, labath, aprantl
Differential Revision: https://reviews.llvm.org/D80797
Like non-verbose output, so that it is easy to recognize the `Line,Column,File,ISA,Discriminator` column values.
Reviewed By: JDevlieghere, jhenderson
Differential Revision: https://reviews.llvm.org/D80874
Update for upstream comments. Improve test by writing all the debug
info by hand.
Reviewers: dblaikie, jhenderson
Subscribers: hiraditya, MaskRay, rupprecht, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80168
This will ensure that nothing can ever start parsing data from a future
sequence and part-read data will be returned as 0 instead.
Reviewed by: aprantl, labath
Differential Revision: https://reviews.llvm.org/D80796
For most tables, we already use commas in headers. This set of patches
unifies dumping the remaining ones.
Differential Revision: https://reviews.llvm.org/D80806
For most tables, we already use commas in headers. This set of patches
unifies dumping the remaining ones.
Differential Revision: https://reviews.llvm.org/D80806
For most tables, we already use commas in headers. This set of patches
unifies dumping the remaining ones.
Differential Revision: https://reviews.llvm.org/D80806
This patch extends the parsing and dumping support of llvm-dwarfdump
for debug_macro.dwo section.
Following forms are supported:
- DW_MACRO_define
- DW_MACRO_undef
- DW_MACRO_start_file
- DW_MACRO_end_file
- DW_MACRO_define_strx
- DW_MACRO_undef_strx
- DW_MACRO_define_strp
- DW_MACRO_undef_strp
Reviewed by: ikudrin, dblaikie
Differential Revision: https://reviews.llvm.org/D78500
A CIE with the Length == 0 is a terminator:
https://refspecs.linuxfoundation.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/ehframechpt.html
And GNU objdump recognizes them and prints the following for such entries:
"00000000 ZERO terminator"
This patch teaches llvm-objdump to do the same. I had to update tests to use
"CHECK-NEXT" too.
(Note: it looks perhaps not right that printing is done inside the DebugInfo library;
I'd expect to see the change somewhere in llvm-objdump's code instead,
but that is how it is done atm).
Differential revision: https://reviews.llvm.org/D80476
I've noticed an issue with the "Data.getRelocatedValue(...)" call:
it might silently ignore an error when the content is truncated.
That leads to an infinite loop in the code (e.g. llvm-readobj hangs).
After fixing the issue I found that we actually always tried
to read past the end of a section, even when the content was valid.
It happened because the terminator CIE (a CIE with length == 0)
was never handled. At first I tried just not adding the terminator
entry (and returning), but that does not seem to be correct, because tools like
llvm-objdump might want to print something for such entries
(see comments in the code and test cases).
This patch fixes issues mentioned, provides new test cases for
both llvm-readobj and lib/DebugInfo and adds FIXMEs to existent
test cases related.
Differential revision: https://reviews.llvm.org/D80299
Demangling Itanium symbols either consumes the whole input or fails,
but Microsoft symbols can be successfully demangled with just some
of the input.
Add an outparam that enables clients to know how much of the input was
consumed, and use this flag to give llvm-undname an opt-in warning
on partially consumed symbols.
Differential Revision: https://reviews.llvm.org/D80173
The patch changes dumping of offsets in .debug_str_offsets sections so
that they are printed as 16-digit hex values if the contribution is in
the DWARF64 format.
Differential Revision: https://reviews.llvm.org/D79997
The patch changes dumping of unit_length, debug_info_offset, and
debug_info_length fields in headers in .debug_pubname and
.debug_pubtypes sections so that they are printed as 16-digit hex values
if the contribution is in the DWARF64 format. Dumping of offsets in the
tables is changed in the same way.
Differential Revision: https://reviews.llvm.org/D79997
The patch changes dumping of a unit_length field and offsets in headers
in .debug_loclists and .debug_rnglists sections so that they are printed
as 16-digit hex values if the contribution is in the DWARF64 format.
Differential Revision: https://reviews.llvm.org/D79997
The patch changes dumping of unit_length and header_length fields in
headers in .debug_line sections so that they are printed as 16-digit hex
values if the contribution is in the DWARF64 format.
Differential Revision: https://reviews.llvm.org/D79997
The patch changes dumping of the unit_length field in a unit header so
that it is printed as a 16-digit hex value if the unit is in the DWARF64
format.
Differential Revision: https://reviews.llvm.org/D79997
The patch changes dumping of DWARF form values whose sizes depend on
the DWARF format so that they are printed as 16-digit hex values for
DWARF64.
Differential Revision: https://reviews.llvm.org/D79997
The patch changes dumping of unit_length and debug_info_offset fields in
an address range header so that they are printed as 16-digit hex values
if the contribution is in the DWARF64 format.
Differential Revision: https://reviews.llvm.org/D79997
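The D79997 series above boils down to picking the printed field width from the unit's format. A standalone sketch with a hypothetical printDwarfOffset helper (not the real dumper code):
```
#include <cstdint>
#include <cstdio>

enum class DwarfFormat { DWARF32, DWARF64 };

// Offsets and lengths occupy 4 bytes in DWARF32 and 8 bytes in DWARF64, so
// dump them as 8- or 16-digit hex values accordingly.
void printDwarfOffset(uint64_t Value, DwarfFormat Format) {
  int Digits = Format == DwarfFormat::DWARF64 ? 16 : 8;
  std::printf("0x%0*llx", Digits, (unsigned long long)Value);
}

int main() {
  printDwarfOffset(0x40, DwarfFormat::DWARF32); // prints 0x00000040
  std::printf("\n");
  printDwarfOffset(0x40, DwarfFormat::DWARF64); // prints 0x0000000000000040
  std::printf("\n");
}
```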
Imagine we have a broken .eh_frame.
Below is a possible sample output of llvm-readelf:
```
...
entry 2 {
initial_location: 0x10f5
address: 0x2080
}
}
}
.eh_frame section at offset 0x2028 address 0x2028:
LLVM ERROR: Parsing entry instructions at 0 failed
PLEASE submit a bug report to https://bugs.llvm.org/ and include the crash backtrace.
Stack dump:
0. Program arguments: /home/umb/LLVM/LLVM/llvm-project/build/bin/llvm-readelf -a 1
#0 0x000055f4a2ff5a1a llvm::sys::PrintStackTrace(llvm::raw_ostream&) (/home/umb/LLVM/LLVM/llvm-project/build/bin/llvm-readelf+0x2b9a1a)
...
#15 0x00007fdae5dc209b __libc_start_main /build/glibc-B9XfQf/glibc-2.28/csu/../csu/libc-start.c:342:3
#16 0x000055f4a2db746a _start (/home/umb/LLVM/LLVM/llvm-project/build/bin/llvm-readelf+0x7b46a)
Aborted
```
I.e., it calls abort(), suggests submitting a bug report, and exits with code 134.
This patch changes the logic to propagate errors to callers.
This fixes the behavior for llvm-dwarfdump, llvm-readobj and other possible tools.
Differential revision: https://reviews.llvm.org/D79165
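A simplified sketch of the new error flow, using made-up ParseError/parseEntryInstructions names rather than the real llvm::Error plumbing:
```
#include <cstdint>
#include <string>
#include <variant>
#include <vector>

// Simplified shape of the fix: instead of aborting when entry instructions
// cannot be parsed, hand the error back to the caller.
struct ParseError { std::string Message; };
using InstructionsOrError = std::variant<std::vector<uint8_t>, ParseError>;

InstructionsOrError parseEntryInstructions(const std::vector<uint8_t> &Data,
                                           uint64_t Offset) {
  if (Offset >= Data.size())
    return ParseError{"parsing entry instructions at " +
                      std::to_string(Offset) + " failed"};
  return std::vector<uint8_t>(Data.begin() + static_cast<std::ptrdiff_t>(Offset),
                              Data.end());
}
```
A caller such as llvm-readelf can then print the message and keep dumping the remaining sections instead of dying with a stack trace.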
Summary: This implements searching for function symbols and public symbols by address.
More specifically,
-Implements NativeSession::findSymbolByAddress for function symbols and
public symbols. I think data symbols are also searched for, but that isn't
implemented in this patch.
-Adds classes for NativeFunctionSymbol and NativePublicSymbol
-Adds a '-use-native-pdb-reader' option to llvm-symbolizer, for testing
purposes.
Reviewers: rnk, amccarth, labath
Subscribers: mgorny, hiraditya, MaskRay, rupprecht, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79269
This patch adds support for dumping DW_MACRO_define_strx,
DW_MACRO_undef_strx in llvm-dwarfdump. These forms are currently
supported only in debug_macro section.
Reviewed By: ikudrin, dblaikie
Differential Revision: https://reviews.llvm.org/D78736
It looks like that was the initial intention, but some code paths in
`DWARFExpression::Operation::extract()` did not initialize `EndOffset`
properly.
Differential Revision: https://reviews.llvm.org/D79622
Reduces time to link PGO-instrumented net_unittests.exe by 11% (9.766s ->
8.672s, best of three). Reduces peak memory by 65.7MB (2142.71MB ->
2076.95MB).
Use a more compact struct, BulkPublic, for faster sorting. Sort in
parallel. Construct the hash buckets in parallel. Try to use one vector
to hold all the publics instead of copying them from one to another.
Allocate all the memory needed to serialize publics up front, and then
serialize them in place in parallel.
Reviewed By: aganea, hans
Differential Revision: https://reviews.llvm.org/D79467
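A rough sketch of the approach, with a hypothetical CompactPublic record standing in for BulkPublic (field names and sort key are invented for illustration):
```
#include <algorithm>
#include <cstdint>
#include <string_view>
#include <tuple>
#include <vector>

// Hypothetical compact record in the spirit of BulkPublic: keep only what the
// sort and serialization need, so one flat vector stays small and cache-friendly.
struct CompactPublic {
  const char *Name; // Points into existing storage, no string copy.
  uint32_t NameLen;
  uint32_t Segment;
  uint32_t Offset;
};

void layoutPublics(std::vector<CompactPublic> &Publics) {
  // Sort in place in the single vector; the real change additionally runs the
  // sort and the hash-bucket construction in parallel and preallocates the
  // serialized output buffer up front.
  std::sort(Publics.begin(), Publics.end(),
            [](const CompactPublic &L, const CompactPublic &R) {
              int Cmp = std::string_view(L.Name, L.NameLen)
                            .compare(std::string_view(R.Name, R.NameLen));
              if (Cmp != 0)
                return Cmp < 0;
              return std::tie(L.Segment, L.Offset) <
                     std::tie(R.Segment, R.Offset);
            });
}
```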
With a fix to uninitialized EndOffset.
DW_OP_call_ref is the only operation that has an operand which depends
on the DWARF format. The patch fixes handling that operation in DWARF64
units.
Differential Revision: https://reviews.llvm.org/D79501
DW_OP_call_ref is the only operation that has an operand which depends
on the DWARF format. The patch fixes handling that operation in DWARF64
units.
Differential Revision: https://reviews.llvm.org/D79501
Summary:
The current implementation of DWARFDie::getName(DINameKind Kind) could
lead to a double call to DWARFDie::find(DW_AT_name) in the following
scenario:
getName(LinkageName);
getName(ShortName);
getName(LinkageName) calls find(DW_AT_name) if the linkage name is not
found. Then, find(DW_AT_name) is called again in getName(ShortName). This
patch allows requesting LinkageName and ShortName separately,
avoiding the extra call to find(DW_AT_name).
It helps D74169 parse clang debug info faster (~1%).
Reviewers: clayborg, dblaikie
Differential Revision: https://reviews.llvm.org/D79173
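A hedged sketch of the resulting call pattern, using a made-up Die wrapper rather than the real DWARFDie API:
```
#include <optional>
#include <string>

// Hypothetical DIE wrapper illustrating the idea: a combined getName(Kind)
// that falls back from the linkage name to DW_AT_name ends up looking
// DW_AT_name up twice when both kinds are requested, so the two queries are
// exposed separately instead.
struct Die {
  // Stub attribute lookup; the real code walks the DIE's attributes, which is
  // exactly the work we want to do at most once per attribute.
  std::optional<std::string> findAttr(unsigned Attr) const {
    (void)Attr;
    return std::nullopt;
  }

  std::optional<std::string> getLinkageName() const {
    return findAttr(/*DW_AT_linkage_name*/ 0x6e); // No DW_AT_name fallback here.
  }
  std::optional<std::string> getShortName() const {
    return findAttr(/*DW_AT_name*/ 0x03); // Looked up exactly once by the caller.
  }
};
```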
The number of public symbols is very large, and each deserialization
does a few heap allocations. The public symbols are serialized by the
linker, so we can assume they have the expected layout and use it
directly.
Saves O(#publics) temporary heap allocations and shrinks some data
structures.
This accounts for a large portion of the memory allocations in LLD.
This DebugSubsectionRecordBuilder object can be stored directly in
C13Builders; it mostly wraps other subsections.
Remove the container kind field from the object. It is always the same
for all elements in the vector, and we can pass it in during writing.
Summary:
In D77860, we changed the `getSymbolFlags()` return type to `Expected<uint32_t>`.
This change helps bubble the error further up the stack.
Reviewers: jhenderson, grimar, JDevlieghere, MaskRay
Reviewed By: jhenderson
Subscribers: hiraditya, MaskRay, rupprecht, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D79075
Summary:
Change std::vector to SmallVector to prevent re-allocations and to
have small pre-allocated storage.
Reviewers: clayborg, dblaikie
Differential Revision: https://reviews.llvm.org/D79123
We unconditionally compared the DW_AT_ranges offset to the length of the
.debug_ranges section. For DWARF5 we should look at the debug_rnglists
section instead.
Differential revision: https://reviews.llvm.org/D78971
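A minimal sketch of the corrected check, with hypothetical parameter names (the real code works on the unit and its section objects):
```
#include <cstddef>
#include <cstdint>

// Hypothetical version of the check: the DW_AT_ranges offset must be validated
// against the section the unit actually uses, which depends on the DWARF
// version, rather than unconditionally against .debug_ranges.
bool isRangesOffsetInBounds(uint16_t DwarfVersion, uint64_t Offset,
                            size_t DebugRangesSize, size_t DebugRnglistsSize) {
  size_t SectionSize =
      DwarfVersion >= 5 ? DebugRnglistsSize : DebugRangesSize;
  return Offset < SectionSize;
}
```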
We were passing the AppleObjCSection instead of the AddrSection. Maybe
the API changed and this remained unnoticed because the types are the
same, or maybe it's just a typo.
The sizes of offsets in the `.debug_str_offsets.dwo` section depend on
the format of compilation or type units referencing them: 4 bytes for
DWARF32 units and 8 bytes for DWARF64 ones. The fix uses parsed units
to determine the actual size of offsets in the corresponding part of
the `.debug_str_offsets.dwo` section.
Differential Revision: https://reviews.llvm.org/D78555
Summary: AttrIndex could be removed from DWARFAbbreviationDeclaration::getAttributeValue.
Reviewers: clayborg, dblaikie
Differential Revision: https://reviews.llvm.org/D78672
The method is called from only one place and the call is already guarded
by a condition which checks that IsDWO is false.
Differential Revision: https://reviews.llvm.org/D78482
the tests pass on Linux.
Summary:
This change implements readFromExe and the calculation of VA and RVA, which
are some of the functionalities that will be used for native PDB reading
in llvm-symbolizer.
bug: https://bugs.llvm.org/show_bug.cgi?id=41795
Summary:
This change implements readFromExe and the calculation of VA and RVA, which
are some of the functionalities that will be used for native PDB reading
in llvm-symbolizer.
bug: https://bugs.llvm.org/show_bug.cgi?id=41795
Reviewers: hans, amccarth, rnk
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D78128
Summary:
Without this we could silently accept an invalid prologue, because the
default DataExtractor behavior is to return an empty string when
reaching the end of file, and an empty string is also used to terminate
these lists.
This makes the parsing code slightly more complicated, but this
complexity will go away once the parser starts working with truncating
data extractors. The reason I am doing it this way is because without
this, the truncation would regress the quality of error messages (right
now, we produce bad error messages only near EOF, but truncation would
make everything behave as if it was near EOF).
Reviewers: dblaikie, probinson, jhenderson
Subscribers: hiraditya, MaskRay, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D77555
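A standalone sketch of the distinction, assuming a hypothetical readCString helper rather than the real DataExtractor API:
```
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Hypothetical error-aware C-string read: running off the end of the data is
// reported as a failure instead of being folded into the "" that legitimately
// terminates the include-directory and file-name lists of a v2-v4 prologue.
std::optional<std::string> readCString(const std::vector<uint8_t> &Data,
                                       uint64_t &Offset) {
  std::string S;
  while (true) {
    if (Offset >= Data.size())
      return std::nullopt;    // Truncated prologue: report the error.
    char C = static_cast<char>(Data[Offset++]);
    if (C == '\0')
      return S;               // May legitimately be the empty list terminator.
    S.push_back(C);
  }
}
```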
Summary:
If we have an (invalid) relocation which relocates bytes that partially
lie outside the range of the relocated section, getRelocatedValue
would return confusing results. It would first read zero (because that's
what the underlying DataExtractor API does for out-of-bounds reads) and
then relocate that zero anyway.
A more appropriate behavior is to return zero straight away. This is
what this patch does.
Reviewers: dblaikie, jhenderson
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D78113
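A simplified sketch of the early-return behavior, with a hypothetical getRelocatedValueSketch that reduces relocation application to adding an addend:
```
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified shape of the fix: if the relocated bytes are not fully inside the
// section, return 0 up front instead of reading a zero-padded value and then
// applying the relocation to it anyway.
uint64_t getRelocatedValueSketch(const std::vector<uint8_t> &Section,
                                 uint64_t Offset, unsigned Size,
                                 uint64_t Addend) {
  if (Size > 8 || Offset > Section.size() || Size > Section.size() - Offset)
    return 0;                                       // Out of bounds: bail out early.
  uint64_t Raw = 0;
  std::memcpy(&Raw, Section.data() + Offset, Size); // Little-endian host assumed.
  return Raw + Addend;
}
```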
Originally committed as 416fa7720e
Reverted (due to buildbot failure - breaking lldb) in 7a45aeacf3.
I still can't seem to build lldb locally, but Pavel Labath has kindly
provided a potential fix to preserve the old behavior in lldb by
registering a simple recoverable error handler there that prints to the
desired stream in lldb, rather than stderr.
GCC emits this new form along with other forms (already supported in
llvm-dwarfdump), and since support for it was missing in llvm-dwarfdump,
the tool was not able to correctly dump the contents of a debug_macro
section for GCC-generated binaries.
This patch extends llvm-dwarfdump to support this form, so the debug_macro
section of GCC-generated binaries can now be dumped correctly.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D78006
Summary:
Without that we could be silently reading zeroes, as that's the default
DataExtractor behavior. The entire parse would still most likely fail,
but it would do that with a seemingly unrelated/nonsensical error
message.
Reviewers: dblaikie, probinson, jhenderson
Subscribers: hiraditya, MaskRay, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D77554
This probably isn't ideal - printing the error inline with the dumping was
more legible - but that way the error wasn't reported to stderr and didn't
produce a non-zero exit code. The error message could probably be improved
by adding more context now that it isn't printed in situ with the DIE
dumping as much.
Makes it easier to test "this doesn't produce an error" (& indeed makes
that the implied default so we don't accidentally write tests that have
silent/sneaky errors as well as the positive behavior we're testing for)
Support for applying relocations is patchy enough that a bunch of tests
treat a missing relocation application as more of a warning than an error.
So rather than trying to add support for a bunch of relocation types, let's
degrade that case to a warning to match the existing usage. It is really
more of a tool warning anyway: it's not that the DWARF is wrong, just that
the tool can't fully cope with it. The tool will still dump the DWARF; it
just won't follow/render certain relocations. In the most general case it
might render an unrelocated, bogus value, but mostly this seems to be about
unusual relocations used in eh_frame. (Honestly, it might be nice if we
were lazier about doing this relocation resolution anyway - if you're not
dumping eh_frame, should we really be erroring about the relocations in it?)
Summary:
Although the function had a bool return value, it was always returning
true. Presumably this is because the main type of errors one can
encounter here is running off the end of the stream, and until very
recently, the DataExtractor class made it very difficult to detect that.
The situation has changed now, and we can easily detect errors here,
which this patch does.
Reviewers: dblaikie, aprantl
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D77308
In DWARFv5, type units are stored in .debug_info sections, along with
compilation units, and they are distinguished by the unit_type field
in the header, not by the name of the section. It is impossible to
associate the correct index section of a DWP file with the unit before
the unit's header is read. This patch fixes reading DWARFv5 type units
by parsing the header first and then applying the index entry according
to the actual unit type.
Differential Revision: https://reviews.llvm.org/D77552
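A hedged sketch of the decision, using the standard DW_UT_* codes but an invented useTypeUnitIndex helper (not the real DWARFUnit logic):
```
#include <cstdint>

// DWARFv5 unit-type codes from the unit header (DWARF v5, Table 7.2).
enum : uint8_t { DW_UT_type = 0x02, DW_UT_split_type = 0x06 };

// Hypothetical selection logic: in a v5 DWP file both compile and type units
// live in .debug_info.dwo, so the unit_type byte from the already-parsed
// header, not the section name, decides whether the TU index or the CU index
// supplies the contribution for this unit.
bool useTypeUnitIndex(uint16_t Version, uint8_t UnitType) {
  if (Version < 5)
    return false; // Pre-v5 type units live in .debug_types.dwo instead.
  return UnitType == DW_UT_type || UnitType == DW_UT_split_type;
}
```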
In package files, the base offset provided by index sections should be
used to find the contribution of a unit. The patch adds that base
offset when reading range list tables.
Differential revision: https://reviews.llvm.org/D77401
This fixes the reading of location lists headers for compilation units
in package files by adjusting the reading offset according to the
corresponding record in the unit index. This is required for
DW_FORM_loclistx to work.
Differential revision: https://reviews.llvm.org/D77146
Without the patch, all version 5 compile units in a DWP file read
location tables from the beginning of a .debug_loclists.dwo section.
The patch fixes that by adjusting the reading offset the same way as
for pre-v5 units. The section identifier to find the contribution
entry corresponds to the version of the unit.
Differential revision: https://reviews.llvm.org/D77145
DWARFv5 defines index sections in package files in a slightly different
way than the pre-standard GNU proposal, see Section 7.3.5 in the DWARF
standard and https://gcc.gnu.org/wiki/DebugFissionDWP for GNU proposal.
The main concern here is the values of the section identifiers, which
partially overlap between the two layouts but with changed meanings. The
patch adds support for v5 index sections and resolves that difficulty by
defining a set of identifiers for internal use that can represent the
distinct values of both standards.
Differential Revision: https://reviews.llvm.org/D75929
This is a preparation for an upcoming patch which adds support for
DWARFv5 unit index sections. The patch adds tag "_EXT_" to identifiers
which reference sections that are deprecated in the DWARFv5 standard.
See D75929 for the discussion.
Differential Revision: https://reviews.llvm.org/D77141
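A rough sketch of such an internal identifier set, with invented names in the spirit of the _EXT_ tag and illustrative (not authoritative) raw values:
```
#include <cstdint>
#include <optional>

// Hypothetical internal identifiers: a single enum that can represent both
// the DWARFv5 codes and the pre-standard GNU codes whose on-disk values
// overlap with different meanings.
enum class DWSectKind {
  Info, Abbrev, Line, StrOffsets, // Same meaning in both layouts.
  Loclists, Rnglists, Macro,      // DWARFv5 columns.
  ExtTypes, ExtLoc, ExtMacinfo    // Deprecated pre-standard columns.
};

// Map an on-disk column identifier to the internal kind. The same raw value
// can denote different sections depending on the index version; the values
// below are illustrative, not a transcription of the real tables.
std::optional<DWSectKind> mapSectionId(uint32_t Raw, bool IsDWARFv5Index) {
  switch (Raw) {
  case 1:
    return DWSectKind::Info;
  case 7:
    return IsDWARFv5Index ? DWSectKind::Macro : DWSectKind::ExtMacinfo;
  default:
    return std::nullopt; // Remaining columns omitted in this sketch.
  }
}
```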
The old name was a bit misleading because the functions actually return
contributions to the corresponding sections.
Differential revision: https://reviews.llvm.org/D77302
Summary:
This patch adds parsing and dumping of the DWARFv5 .debug_macro section in
llvm-dwarfdump; it does not introduce any new switch. The existing
"--debug-macro" switch should be used to dump either the macinfo or the
macro section.
Reviewed By: dblaikie, ikudrin, jhenderson
Differential Revision: https://reviews.llvm.org/D73086