This reverts commit 879139e1c6577b09df52de56a6bab856a19ed185.
This was committed accidentally when I blindly typed git svn
dcommit instead of the command to generate a patch.
llvm-svn: 272693
This fixes an alignment issue by forcing all cached allocations
to be 8 byte aligned, and also fixes an issue arising on big
endian systems by writing ulittle32_t's instead of uint32_t's
in the test.
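Illustratively (this helper is hypothetical, not the actual test code),
writing through llvm::support::ulittle32_t keeps the bytes little endian
regardless of the host:

  #include "llvm/Support/Endian.h"
  #include <cstdint>
  #include <vector>

  // Append a 32-bit value to a test buffer in little-endian byte order,
  // independent of the host's endianness.
  static void appendLE32(std::vector<uint8_t> &Buffer, uint32_t Value) {
    llvm::support::ulittle32_t LE;
    LE = Value; // stored little endian even on big endian hosts
    const uint8_t *Bytes = reinterpret_cast<const uint8_t *>(&LE);
    Buffer.insert(Buffer.end(), Bytes, Bytes + sizeof(LE));
  }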
llvm-svn: 272437
This is the next step towards being able to write PDBs.
MemoryBuffer is immutable, and StreamInterface is our replacement
which can be any combination of read-only, read-write, or write-only
depending on the particular implementation.
The one place where we were creating a PDBFile (in RawSession) is
updated to subclass ByteStream with a simple adapter that holds
a MemoryBuffer, and initializes the superclass with the buffer's
array, so that all the functionality of ByteStream works
transparently.
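A minimal sketch of that adapter (the class name and the exact ByteStream
constructor are assumptions for illustration):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/Support/MemoryBuffer.h"
  #include <memory>

  // Owns the MemoryBuffer and exposes its contents through the ByteStream
  // interface, so all existing stream-based readers work unchanged.
  class MemoryBufferByteStream : public ByteStream {
  public:
    explicit MemoryBufferByteStream(std::unique_ptr<llvm::MemoryBuffer> Buffer)
        : ByteStream(llvm::ArrayRef<uint8_t>(
              reinterpret_cast<const uint8_t *>(Buffer->getBufferStart()),
              Buffer->getBufferSize())),
          Buffer(std::move(Buffer)) {}

  private:
    std::unique_ptr<llvm::MemoryBuffer> Buffer; // keeps the mapped file alive
  };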
llvm-svn: 272370
This adds a method and tests for writing to a PDB stream. With
this, even a PDB stream which is discontiguous can be treated
as a sequential stream of bytes for the purposes of writing.
Reviewed By: ruiu
Differential Revision: http://reviews.llvm.org/D21157
llvm-svn: 272369
The TPI hash table contains a parallel array for the type records.
For each type record R, a hash value is calculated by `H(R) % NumBuckets`
where H is a hash function, and the result is stored to a bucket element.
H is the TPI1::hashPrec function in the microsoft-pdb repository.
Our hash function does not support all type record types yet.
Currently it supports only records for line numbers.
I'll extend it in a follow-up patch.
The aim of verifying the hash table is not only to detect corrupted
files; it also ensures that our understanding of how the hash values
are calculated is correct.
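In effect the check amounts to the following (hashTypeRecord stands in for
our port of TPI1::hashPrec; names and types are illustrative, assuming the
usual llvm Error/ArrayRef machinery is in scope):

  // For each type record R, the value stored in the parallel hash array
  // must equal H(R) % NumBuckets.
  Error verifyTpiHashValues(ArrayRef<CVType> Records,
                            ArrayRef<uint32_t> HashValues,
                            uint32_t NumBuckets) {
    for (size_t I = 0, E = Records.size(); I != E; ++I) {
      uint32_t Expected = hashTypeRecord(Records[I]) % NumBuckets;
      if (HashValues[I] != Expected)
        return make_error<StringError>("TPI hash value mismatch",
                                       inconvertibleErrorCode());
    }
    return Error::success();
  }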
llvm-svn: 272229
In order to efficiently write PDBs, we need to be able to make a
StreamWriter class similar to a StreamReader, which can transparently deal
with writing to discontiguous streams, and we need to use this for all
writing, similar to how we use StreamReader for all reading.
Most discontiguous streams are the typical numbered streams that appear in
a PDB file and are described by the directory, but the exception to this,
that until now has been parsed by hand, is the directory itself.
MappedBlockStream works by querying the directory to find out which blocks
a stream occupies and various other things, so naturally the same logic
could not possibly work to describe the blocks that the directory itself
resided on.
To solve this, I've introduced an abstraction IPDBStreamData, which allows
the client to query for the list of blocks occupied by the stream, as well
as the stream length. I provide two implementations of this: one which
queries the directory (for indexed streams), and one which queries the
super block (for the directory stream).
This has the side benefit of vastly simplifying the code to parse the
directory. Whereas before a mini state machine was rolled by hand, now we
simply use FixedStreamArray to read out the stream sizes, then build a
vector of FixedStreamArrays for the stream map, all in just a few lines of
code.
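The interface itself is tiny; roughly (illustrative, modeled on the
description above rather than copied from the patch):

  // Answers "how long is this stream, and which blocks does it live on?",
  // whether the answer comes from the directory or from the super block.
  class IPDBStreamData {
  public:
    virtual ~IPDBStreamData() = default;
    virtual uint32_t getLength() const = 0;
    virtual llvm::ArrayRef<llvm::support::ulittle32_t>
    getStreamBlocks() const = 0;
  };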
Reviewed By: ruiu
Differential Revision: http://reviews.llvm.org/D21046
llvm-svn: 271982
The data structure in the new FPO stream is described in the
PE/COFF spec. There is one record per function if the frame pointer
is omitted.
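For reference, the per-function record is FPO_DATA from the PE/COFF spec;
a rough on-disk view of it (field names abbreviated, and the flag bits
folded into one field) is:

  struct FpoData {
    llvm::support::ulittle32_t Offset;     // offset of the first byte of code
    llvm::support::ulittle32_t Size;       // number of bytes in the function
    llvm::support::ulittle32_t NumLocals;  // size of locals, in dwords
    llvm::support::ulittle16_t ParamsSize; // size of parameters, in dwords
    llvm::support::ulittle16_t Attributes; // prolog size, saved regs, SEH,
                                           // EBP use, frame type
  };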
Differential Revision: http://reviews.llvm.org/D20999
llvm-svn: 271926
When printing line information and file checksums, we were printing
the file offset field from the struct header. This teaches
llvm-pdbdump how to turn those numbers into the filename. In the
case of file checksums, this is done by looking in the global
string table. In the case of line contributions, this is done
by indexing into the file names buffer of the DBI stream. Why
they use a different technique I don't know.
llvm-svn: 271630
To facilitate this, a couple of changes had to be made:
1. `ModuleSubstream` got moved from `DebugInfo/PDB` to
`DebugInfo/CodeView`, and various codeview related types are defined
there. It turns out `DebugInfo/CodeView/Line.h` already defines many of
these structures, but this is really old code that is not endian aware,
doesn't interact well with `StreamInterface`, and is not very helpful for
getting stuff out of a PDB. Eventually we should migrate the old readobj
`COFFDumper` code to these new structures, or at least merge their
functionality somehow.
2. A `ModuleSubstream` visitor is introduced (a rough sketch follows
this list). Depending on where your
module substream array comes from, different subsets of record types can
be expected. We are already hand parsing these substream arrays in many
places especially in `COFFDumper.cpp`. In the future we can migrate these
paths to the visitor as well, which should reduce a lot of code in
`COFFDumper.cpp`.
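A rough sketch of the visitor mentioned in item 2 (the kind names,
StreamRef, and the accessors are illustrative; the real interface may
differ):

  class ModuleSubstreamVisitor {
  public:
    virtual ~ModuleSubstreamVisitor() = default;
    // Fallback for substream kinds the caller does not handle specially.
    virtual Error visitUnknown(ModuleSubstreamKind Kind, StreamRef Data) {
      return Error::success();
    }
    virtual Error visitLines(StreamRef Data) {
      return visitUnknown(ModuleSubstreamKind::Lines, Data);
    }
    virtual Error visitFileChecksums(StreamRef Data) {
      return visitUnknown(ModuleSubstreamKind::FileChecksums, Data);
    }
  };

  // Dispatch loop: walk the substream array once and route each record to
  // the visitor based on its kind.
  Error visitModuleSubstreams(ArrayRef<ModuleSubstream> Substreams,
                              ModuleSubstreamVisitor &V) {
    for (const ModuleSubstream &SS : Substreams) {
      switch (SS.getSubstreamKind()) {
      case ModuleSubstreamKind::Lines:
        if (auto EC = V.visitLines(SS.getRecordData()))
          return EC;
        break;
      case ModuleSubstreamKind::FileChecksums:
        if (auto EC = V.visitFileChecksums(SS.getRecordData()))
          return EC;
        break;
      default:
        if (auto EC = V.visitUnknown(SS.getSubstreamKind(),
                                     SS.getRecordData()))
          return EC;
        break;
      }
    }
    return Error::success();
  }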
Differential Revision: http://reviews.llvm.org/D20936
Reviewed By: ruiu, majnemer
llvm-svn: 271621
This first pass only splits apart the records and dumps the line
info kinds and binary data. Subsequent patches will parse out
the binary data into more useful information and dump it in
detail.
llvm-svn: 271576
Unlike other sections that can grow to any size, the COFF section header
stream has a maximum length because each record is fixed-size and the COFF
file format limits the maximum number of sections. So I decided to not
create a specific stream class for it. Instead, I added a member function
to DbiStream class which returns a vector of COFF headers.
Differential Revision: http://reviews.llvm.org/D20717
llvm-svn: 271557
This converts remaining uses of ByteStream, which was still
left in the symbol stream and type stream, to using the new
StreamInterface zero-copy classes.
RecordIterator is finally deleted, so this is the only way left
now. Additionally, more error checking is added when iterating
the various streams.
With this, the transition to zero copy pdb access is complete.
llvm-svn: 271101
Due to differences in template instantiation rules, it is not
portable to static_assert(false) inside of an invalid specialization
of a template. Instead I just =delete the method so that it can't
be used, and leave a comment that it must be explicitly specialized.
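The pattern looks roughly like this (the extractor signature is
illustrative, and MyRecord is a made-up client type):

  // Primary template: the method is deleted, so using it without an
  // explicit specialization is a portable compile-time error.
  template <typename T> struct VarStreamArrayExtractor {
    // You must provide an explicit specialization for your record type.
    Error operator()(StreamRef Stream, uint32_t &Len, T &Item) const = delete;
  };

  // Example client specialization (hypothetical record type).
  template <> struct VarStreamArrayExtractor<MyRecord> {
    Error operator()(StreamRef Stream, uint32_t &Len, MyRecord &Item) const {
      // ... decode Item from Stream and set Len to the record's byte size ...
      return Error::success();
    }
  };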
llvm-svn: 271027
This reverts commit r271024 due to error: static_assert failed
"You must either provide a specialization of VarStreamArrayExtractor
or a custom extractor"
llvm-svn: 271026
This reduces the amount of memory used by llvm-pdbdump by roughly
1/3 of the size of the PDB file.
Differential Revision: http://reviews.llvm.org/D20724
Reviewed By: ruiu
llvm-svn: 271025
Since we want to move toward zero-copy access to stream data, we
want to remove all instances of copying operations. So get rid
of some of those here.
Differential Revision: http://reviews.llvm.org/D20720
Reviewed By: ruiu
llvm-svn: 270960
PDBs can be extremely large. We're already mapping the entire
PDB into the process's address space, but to make matters worse
the blocks of the PDB are not arranged contiguously. So, when
we have something like an array or a string embedded into the
stream, we have to make a copy. Since it's convenient to use
traditional data structures to iterate and manipulate these
records, we need the memory to be contiguous.
As a result of this, we were using roughly twice as much memory
as the file size of the PDB, because every stream was copied
out and re-stitched together contiguously.
This patch addresses this by improving the MappedBlockStream
to allocate from a BumpPtrAllocator only when a read requires
a discontiguous read. Furthermore, it introduces some data
structures backed by a stream which can iterate over both
fixed and variable length records of a PDB. Since everything
is backed by a stream and not a buffer, we can read almost
everything from the PDB with zero copies.
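A minimal sketch of that read path (BlockSize, Allocator, and getBlockData
are assumptions standing in for MappedBlockStream's internals):

  Error readBytes(uint32_t Offset, uint32_t Size, ArrayRef<uint8_t> &Result) {
    uint32_t Block = Offset / BlockSize;
    uint32_t OffsetInBlock = Offset % BlockSize;
    if (OffsetInBlock + Size <= BlockSize) {
      // Contiguous: return a zero-copy view into the memory-mapped PDB.
      Result = getBlockData(Block).slice(OffsetInBlock, Size);
      return Error::success();
    }
    // Discontiguous: stitch the pieces together once, in pool-allocated
    // memory owned by a BumpPtrAllocator.
    uint8_t *Buf = Allocator.Allocate<uint8_t>(Size);
    uint32_t Copied = 0;
    while (Copied < Size) {
      ArrayRef<uint8_t> Chunk = getBlockData(Block).drop_front(OffsetInBlock);
      uint32_t N = std::min<uint32_t>(Chunk.size(), Size - Copied);
      std::memcpy(Buf + Copied, Chunk.data(), N);
      Copied += N;
      ++Block;
      OffsetInBlock = 0;
    }
    Result = ArrayRef<uint8_t>(Buf, Size);
    return Error::success();
  }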
Differential Revision: http://reviews.llvm.org/D20654
Reviewed By: ruiu
llvm-svn: 270951
We have need to reuse this functionality, including making
additional generic stream types that are smarter about how and
when they copy memory versus referencing the original memory.
So all of these structures belong in the common library
rather than being pdb specific.
llvm-svn: 270751
name_ids() did not return all IDs but only the first NameCount items.
The number of non-zero entries in the IDs vector is NameCount, but that
does not mean that all non-zero entries are at the beginning of the
vector.
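Illustratively, the fix is to scan the whole IDs vector instead of taking
a prefix (member names simplified):

  // Collect every non-zero entry; non-zero IDs are not guaranteed to be
  // packed at the front of the vector.
  std::vector<uint32_t> name_ids() const {
    std::vector<uint32_t> Result;
    for (uint32_t ID : IDs)
      if (ID != 0)
        Result.push_back(ID);
    return Result;
  }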
Differential Revision: http://reviews.llvm.org/D20611
llvm-svn: 270656
This adds support for parsing and dumping the following
symbol types:
S_LPROCREF
S_ENVBLOCK
S_COMPILE2
S_REGISTER
S_COFFGROUP
S_SECTION
S_THUNK32
S_TRAMPOLINE
As of this patch, the test PDB files no longer have any unknown
symbol types.
llvm-svn: 270628
When dumping huge PDB files, too many of the options were grouped
together so you would get neverending spew of output. This patch
introduces more granular display options so you can only dump the
fields you actually care about.
llvm-svn: 270607
The DBI stream contains the stream number of the symbol record stream.
The symbol record stream is an array of length-type-value members.
Each member represents one symbol.
The publics stream contains offsets into the symbol record stream.
This patch is to print out all symbols that are referenced by
the publics stream.
Note that even with this patch, llvm-pdbdump cannot dump all the
information in a publics stream since it contains more information
than symbol names. I'll improve it in followup patches.
Differential Revision: http://reviews.llvm.org/D20480
llvm-svn: 270262
I don't yet fully understand the meaning of these data structures,
but at least it seems that their sizes and types are correct.
With this change, we can read publics streams to the end.
Differential Revision: http://reviews.llvm.org/D20343
llvm-svn: 269861
The publics stream seems to contain information about public symbols.
It actually contains a serialized hash table along with fixed-size
headers. This patch is not complete; it only scans to the end of
the stream and dumps the header information. I'll write code to
deserialize the hash table later.
Reviewers: zturner
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20256
llvm-svn: 269484
This reuses the CVTypeDumper from libcodeview to dump full
information about type records within a PDB file.
Differential Revision: http://reviews.llvm.org/D20022
Reviewed By: rnk
llvm-svn: 268808
Summary:
Port the dumper in llvm-readobj over to it.
I'm planning to use this visitor to power type stream merging.
While we're at it, try to switch from StringRef to ArrayRef<uint8_t> in some
places.
Reviewers: zturner, amccarth
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19899
llvm-svn: 268535
The ability to parse codeview type streams is also needed by
DebugInfoPDB for parsing PDBs, so moving this into a library
gives us this option. Since DebugInfoPDB had already hand-rolled
some code to do this, that code is now converted over
to using this common abstraction.
Differential Revision: http://reviews.llvm.org/D19887
Reviewed By: dblaikie, amccarth
llvm-svn: 268454
This parses the TPI stream (stream 2) from the PDB file. This stream
contains some header information followed by a series of codeview records.
There is some additional complexity here in that alongside this stream of
codeview records is a serialized hash table in order to efficiently query
the types. We parse the necessary bookkeeping information to allow us to
reconstruct the hash table, but we do not actually construct it yet as
there are still a few things that need to be understood first.
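For orientation, the header at the front of the TPI stream looks roughly
like this (field names and layout are approximate, reproduced from general
knowledge of the format rather than from this patch):

  struct TpiStreamHeader {
    llvm::support::ulittle32_t Version;
    llvm::support::ulittle32_t HeaderSize;      // size of this header
    llvm::support::ulittle32_t TypeIndexBegin;  // first type index (~0x1000)
    llvm::support::ulittle32_t TypeIndexEnd;    // one past the last index
    llvm::support::ulittle32_t TypeRecordBytes; // bytes of records following
    // The rest describes the hash data: the hash stream index, hash key
    // size, bucket count, and offsets of the hash value / index offset /
    // hash adjustment buffers within that stream.
  };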
Differential Revision: http://reviews.llvm.org/D19840
Reviewed By: ruiu, rnk
llvm-svn: 268343
PDB has a lot of similar data structures. We already have code
for parsing a Name Map, but PDB seems to have a different but
very similar structure that is a hash table. This is the
beginning of code needed in order to parse the name hash table,
but it is not yet complete. It parses the basic metadata of
the hash table, the bucket array, and the names buffer, but
doesn't use any of these fields yet as the data structure
requires a non-trivial amount of work to understand.
llvm-svn: 268268
There are probably hundreds of crashers we can find by fuzzing
more. For now we do the simplest possible validation of the
block size. Later, more complicated validations can verify that
other fields of the super block, such as the directory size and number
of blocks, agree with the size of the file.
llvm-svn: 268084
The motivation for this change is that PDB has the notion of
streams and substreams. Substreams often consist of variable
length structures that are convenient to be able to treat as
guaranteed, contiguous byte arrays, whereas the streams they
are contained in are not necessarily so, as a single stream
could be spread across many discontiguous blocks.
So, when processing data from a substream, we want to be able
to assume that we have a contiguous byte array so that we can
cast pointers to variable length arrays and such.
This leads to the question of how to be able to read the same
data structure from either a stream or a substream using the
same interface, which is where this patch comes in.
We separate out the stream's read state from the underlying
representation, and introduce a `StreamReader` class. Then
we change the name of `PDBStream` to `MappedBlockStream`, and
introduce a second kind of stream called a `ByteStream` which is
simply a sequence of contiguous bytes. Finally, we update all
of the std::vectors in `PDBDbiStream` to use `ByteStream` instead
as a proof of concept.
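As a usage sketch (method names approximate), the same reading code can
now be pointed at either kind of stream:

  // Works the same whether S is a MappedBlockStream (discontiguous blocks)
  // or a ByteStream (contiguous bytes); the read position lives in the
  // reader, not in the stream.
  Error readVersion(const StreamInterface &S, uint32_t &Version) {
    StreamReader Reader(S);
    return Reader.readInteger(Version);
  }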
llvm-svn: 268071
We now read out the rest of the substreams from the DBI stream. One of
these substreams, the FileInfo substream, contains information about which
source files contribute to each module (aka compiland). This patch
additionally parses out the file information from that substream, and
dumps it in llvm-pdbdump.
Differential Revision: http://reviews.llvm.org/D19634
Reviewed by: ruiu
llvm-svn: 267928
This gets more data out of the DBI stream of the PDB. In
particular it extracts the metadata for the list of modules
(compilands) that this PDB contains info about, and adds support
for dumping these fields to llvm-pdbdump.
Differential Revision: http://reviews.llvm.org/D19570
Reviewed By: ruiu
llvm-svn: 267818
Summary:
llvm-symbolizer wants to get linkage names of functions for historical
reasons. Linkage names are only recorded in the PDB for public symbols,
and the linkage name is apparently stored separately in some "public
symbol" record. We had a workaround in PDBContext which would look for
such symbols when the user requested linkage names.
However, when given an address that was truly in a private function and
public funciton, we would accidentally find nearby public symbols and
return those function names. The fix is to look for both function
symbols and public symbols and only prefer the public symbol name if the
addresses of the symbols agree.
Fixes PR27492
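A sketch of the selection logic (the helpers and accessors here are
hypothetical):

  // Prefer the public symbol's linkage name only when it refers to the
  // same address as the function symbol; otherwise use the function name.
  std::string getFunctionName(uint64_t Address, bool WantLinkageName) {
    auto Func = findFunctionSymbol(Address);   // private or global function
    auto Public = findPublicSymbol(Address);   // public (linkage-name) symbol
    if (WantLinkageName && Func && Public &&
        Func->getVirtualAddress() == Public->getVirtualAddress())
      return Public->getName();
    return Func ? Func->getName() : std::string();
  }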
Reviewers: zturner
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19571
llvm-svn: 267732
The DBI stream contains a lot of bookkeeping information for other
streams. In particular it contains information about section contributions
and linked modules. This patch is a first attempt at parsing some of the
information out of the DBI stream. It currently only parses and dumps the
headers of the DBI stream, so none of the module data or section
contribution data is pulled out.
This is just a proof of concept that we understand the basic properties of
the DBI stream's metadata, and followup patches will try to extract more
detailed information out.
Differential Revision: http://reviews.llvm.org/D19500
Reviewed By: majnemer, ruiu
llvm-svn: 267585