Reported at https://reviews.llvm.org/D64930#1642223
If the only section of a PT_LOAD is a SHT_NOBITS section (e.g. .bss), we
may not align its sh_offset. p_offset of the PT_LOAD will be set to
sh_offset, and we will get p_offset != p_vaddr (mod p_align). If such
an executable is mapped by the Linux kernel, it will segfault.
After D64906, this may happen in the non-linker-script case.
The linker script case has had this issue for a long time.
It was fixed by rL321657 (though the test linkerscript/nobits-offset.s
failed to test a SHT_NOBITS section), but broken again by rL345154.
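The invariant, as a hedged sketch (the helper name is illustrative, not
lld's actual code): even when a PT_LOAD contains only SHT_NOBITS
sections, its p_offset must stay congruent to p_vaddr modulo p_align.

  // Advance off to the smallest value >= off such that
  // off % align == vaddr % align (align is a power of two).
  uint64_t alignSectionOffset(uint64_t off, uint64_t vaddr, uint64_t align) {
    return off + ((vaddr - off) & (align - 1));
  }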
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D66658
llvm-svn: 369828
Ported the D64906 technique to EM_386.
If `sh_addralign(.tdata) < sh_addralign(.tbss)`,
we can potentially make `p_vaddr(PT_TLS) % p_align(PT_TLS) != 0`.
For example, with sh_addralign(.tdata)=8 and sh_addralign(.tbss)=16,
p_align(PT_TLS) is 16 while p_vaddr(PT_TLS) (the start of .tdata) may
be only 8-aligned, so the remainder may be 8.
ld.so implementations known to have problems if p_vaddr%p_align != 0:
* FreeBSD 13.0-CURRENT rtld-elf
* glibc https://sourceware.org/bugzilla/show_bug.cgi?id=24606
The new test i386-tls-vaddr-align.s checks that our workaround makes
p_vaddr%p_align = 0.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D65865
llvm-svn: 369347
Ported the D64906 technique to AArch64. It deletes 3 alignments at
PT_LOAD boundaries for the default case: the size of an aarch64 binary
decreases by at most 192kb.
If `sh_addralign(.tdata) < sh_addralign(.tbss)`,
we can potentially make `p_vaddr(PT_TLS)%p_align(PT_TLS) != 0`.
ld.so implementations known to have problems if p_vaddr%p_align != 0:
* musl<=1.1.22
* FreeBSD 13.0-CURRENT (and before) rtld-elf arm64
New test aarch64-tls-vaddr-align.s checks that our workaround makes p_vaddr%p_align = 0.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D64930
llvm-svn: 369344
This change affects the non-linker-script case (precisely, when the
`SECTIONS` command is not used). It deletes 3 alignments at PT_LOAD
boundaries for the default case: the size of a powerpc64 binary can be
decreased by at most 192kb. The technique can be ported to other
targets.
Let me demonstrate the idea with a maxPageSize=65536 example:
When assigning the address to the first output section of a new PT_LOAD,
if the end p_vaddr of the previous PT_LOAD is 0x10020, we advance to
the next multiple of maxPageSize: 0x20000. The new PT_LOAD will thus
have p_vaddr=0x20000. Because p_offset and p_vaddr are congruent modulo
maxPageSize, p_offset will be 0x20000, leaving a p_offset gap [0x10020,
0x20000) in the output.
Alternatively, if we advance to 0x20020, the new PT_LOAD will have
p_vaddr=0x20020. We can pick either 0x10020 or 0x20020 for p_offset!
Obviously 0x10020 is the choice because it leaves no gap. At runtime,
p_vaddr will be rounded down by pagesize (65536 if
pagesize=maxPageSize). This PT_LOAD will load additional initial
contents from the p_offset range [0x10000,0x10020), which will also be
loaded by the previous PT_LOAD. This is fine if -z noseparate-code is in
effect or if we are not transitioning between executable and
non-executable segments.
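The two ingredients can be sketched with llvm::alignTo (the
three-argument skew overload from llvm/Support/MathExtras.h; the
surrounding function names are illustrative):

  // Address of a new PT_LOAD: keep the low bits of dot instead of
  // clearing them. dot=0x10020 -> 0x20020 rather than 0x20000.
  uint64_t advanceToNewPtLoad(uint64_t dot, uint64_t maxPageSize) {
    return alignTo(dot, maxPageSize) + dot % maxPageSize;
  }
  // File offset: only congruence to p_vaddr modulo maxPageSize is
  // required, so pick the smallest admissible offset.
  // off=0x10020, addr=0x20020 -> 0x10020, leaving no gap in the file.
  uint64_t firstSectionOffset(uint64_t off, uint64_t addr, uint64_t maxPageSize) {
    return alignTo(off, maxPageSize, addr % maxPageSize);
  }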
ld.bfd -z noseparate-code leverages this technique to keep output small.
This patch implements the technique in lld, which is mostly effective on
targets with large defaultMaxPageSize (AArch64/MIPS/PPC: 65536). The 3
removed alignments can save almost 3*65536 bytes.
Two places that rely on p_vaddr%pagesize = 0 have to be updated.
1) We used to round p_memsz(PT_GNU_RELRO) up to commonPageSize (which
defaults to 4096 on all targets). Now p_vaddr%commonPageSize may be
non-zero, and the updated formula takes that into account (see the
sketch after this list).
2) Our TP offsets formulae are only correct if p_vaddr%p_align = 0.
Fix them. See the updated comments in InputSection.cpp for details.
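For 1), a hedged sketch of the updated computation (p_offset is
congruent to p_vaddr modulo commonPageSize, so it can stand in for
p_vaddr; details may differ from the actual Writer code):

  // Round up the end of the segment rather than its size, so that a
  // non-zero p_vaddr%commonPageSize still leaves the last page covered.
  p->p_memsz = alignTo(p->p_offset + p->p_memsz, config->commonPageSize) -
               p->p_offset;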
On targets where we enable the technique (only PPC64 for now), we can
potentially make `p_vaddr(PT_TLS) % p_align(PT_TLS) != 0` if
`sh_addralign(.tdata) < sh_addralign(.tbss)`.
This exposes many problems in ld.so implementations, especially in the
offsets of dynamic TLS blocks. Known issues:
* FreeBSD 13.0-CURRENT rtld-elf (i386/amd64/powerpc/arm64)
* glibc (HEAD) i386 and x86_64 https://sourceware.org/bugzilla/show_bug.cgi?id=24606
* musl<=1.1.22 on TLS Variant I architectures (aarch64/powerpc64/...)
So, force p_vaddr%p_align = 0 by rounding dot up to p_align(PT_TLS).
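Roughly how the rounding looks (a sketch modeled on Writer.cpp's
fixSectionAlignments; details hedged):

  // When the first section of this PT_LOAD also starts PT_TLS, round
  // the maxPageSize skew up to p_align(PT_TLS) so that
  // p_vaddr(PT_TLS) % p_align(PT_TLS) == 0.
  cmd->addrExpr = [] {
    return alignTo(script->getDot(), config->maxPageSize) +
           alignTo(script->getDot() % config->maxPageSize,
                   Out::tlsPhdr->p_align);
  };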
The technique will be enabled (with updated tests) for other targets in
subsequent patches.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D64906
llvm-svn: 369343
Currently the following 3 relocation types do not trigger the creation
of a canonical PLT (which changes STT_GNU_IFUNC to STT_FUNC and
redirects all references):
1) GOT-generating (`needsGot`)
2) PLT-generating (`needsPlt`)
3) R_ABS with a 0 addend in a writable location. This is used for ifunc
function pointers in writable sections such as .data and .toc.
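For illustration, a hedged example of case 3) (GCC/Clang ifunc syntax;
the names are made up):

  static int impl(void) { return 0; }
  static void *resolve(void) { return (void *)&impl; }
  int func(void) __attribute__((ifunc("resolve")));
  // The pointer slot in .data used to fall under case 3); for a
  // non-preemptable ifunc it is now handled by the R_*_IRELATIVE path.
  int (*fptr)(void) = func;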
This patch deletes case 3) to simplify the R_*_IRELATIVE generating
logic added in D57371. Other advantages:
* It is guaranteed that no more than one R_*_IRELATIVE is created for an ifunc.
* PPC64: no need to special case ifunc in toc-indirect to toc-relative relaxation. See D65755
The deleted elf::addIRelativeRelocs demonstrates that a one-pass scan
through relocations makes several optimizations difficult. This is
something we can think about in the future.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D65995
llvm-svn: 368661
In Writer::includeInDynSym(), exportDynamic is used for a Defined
symbol with protected or default visibility to record whether it is
required to be exported into .dynsym. It is set when any of the
following conditions hold:
1) There is an interposable symbol from a DSO (Undefined or SharedSymbol with default visibility)
2) If -shared or --export-dynamic is specified, any symbol in an object file/bitcode sets this property, unless suppressed by canBeOmittedFromSymbolTable().
3) --dynamic-list when producing an executable
4) protected symbol from a DSO preempted by copy relocation/canonical PLT when
--ignore-{data,function}-address-equality is specified
5) ifunc is exported when -z ifunc-noplt is specified
Bullet points 4) and 5) are irrelevant in this patch.
Bullet 3) does not play well with 1) and 2). When -shared is specified,
exportDynamic is true for most symbols. This makes it incapable of
recording the symbols marked by --dynamic-list. We thus have the
obscure:
  if (!config->shared)
    b->exportDynamic = true;
  else if (b->includeInDynsym())
    b->isPreemptible = true;
This patch adds another bit `Symbol::inDynamicList` to record
3). We can thus simplify handleDynamicList() by unifying the DSO and
executable cases. It also allows us to simplify isPreemptible - now
the field is only used in finalizeSections() and later stages.
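A hedged sketch of the unified shape (findAllMatches is an illustrative
helper, not the actual API):

  // Mark --dynamic-list matches identically for -shared and executables;
  // preemptibility is derived later from includeInDynsym().
  for (SymbolVersion &ver : config->dynamicList)
    for (Symbol *sym : findAllMatches(ver))
      sym->inDynamicList = true;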
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D66091
llvm-svn: 368659
We prioritize non-* wildcards over VER_NDX_LOCAL/VER_NDX_GLOBAL "*".
This patch generalizes the rule to "*" of other versions and thus fixes PR40176.
I don't feel strongly about this GNU linker behavior, but the
generalization simplifies the code.
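For example, with this version script (a hedged illustration):

  V1 { global: *; };    /* catch-all: lowest priority, whatever its version */
  V2 { global: foo*; }; /* non-* wildcard: takes precedence */

foo is now assigned V2, just as a non-* wildcard already took precedence
over a VER_NDX_LOCAL/VER_NDX_GLOBAL "*".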
Delete `config->defaultSymbolVersion` which was used to special case
VER_NDX_LOCAL/VER_NDX_GLOBAL "*".
In `SymbolTable::scanVersionScript`, custom versions are handled the same
way as VER_NDX_LOCAL/VER_NDX_GLOBAL. So merge
`config->versionScript{Locals,Globals}` into `config->versionDefinitions`.
Overall this seems to simplify the code.
In `SymbolTable::assign{Exact,Wildcard}Versions`,
`sym->verdefIndex == config->defaultSymbolVersion` is changed to
`verdefIndex == UINT32_C(-1)`.
This allows us to give duplicate assignment diagnostics for
`{ global: foo; };` `V1 { global: foo; };`
In test/linkerscript/version-script.s:
the vs_index of an undefined symbol changes from 0 to 1. This doesn't
matter, because the vs_index of an undefined symbol is ignored (arguably
1 is even better because the binding is STB_GLOBAL).
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D65716
llvm-svn: 367869
An R_*_IRELATIVE represents the address of a STT_GNU_IFUNC symbol
(redirected at runtime) that is non-preemptable and not associated with
a canonical PLT (i.e. with a symbol whose section index is SHN_UNDEF
but whose st_value is non-zero).
.rel[a].plt [DT_JMPREL, DT_JMPREL+DT_JMPRELSZ) contains relocations that
can be lazily resolved. R_*_IRELATIVE are always eagerly resolved, so
conceptually they do not belong to .rela.plt. "iplt" is mostly a misnomer.
glibc powerpc and powerpc64 do not resolve R_*_IRELATIVE if they are in .rela.plt.
// a.o - synthesized PLT call stub has an R_*_IRELATIVE
void ifunc(); int main() { ifunc(); }
// b.o
static void real() {}
asm (".type ifunc, %gnu_indirect_function");
void *ifunc() { return &real; }
The lld-linked executable crashes. ld.bfd places R_*_IRELATIVE in
.rela.dyn and the executable works.
glibc i386, x86_64, and aarch64 have logic
(glibc/sysdeps/*/dl-machine.h:elf_machine_lazy_rel) to eagerly resolve
R_*_IRELATIVE in .rel[a].plt so the lld-linked executable works.
Move R_*_IRELATIVE from .rel[a].plt to .rel[a].dyn to fix the crashes on
glibc powerpc/powerpc64. This also helps simplify the ifunc
implementation in FreeBSD rtld-elf powerpc64.
If --pack-dyn-relocs=android[+relr] is specified, the Android packed
dynamic relocation format is used for .rela.dyn. We cannot name
in.relaIplt ".rela.dyn" because the output section will have mixed
formats. This can be improved in the future.
Reviewed By: pcc
Differential Revision: https://reviews.llvm.org/D65651
llvm-svn: 367745
This patch
1) adds -z separate-code and -z noseparate-code (default).
2) changes the condition under which the last page of the last PF_X
PT_LOAD is padded with trap instructions.
Current condition (after D33630): if there is no `SECTIONS` command.
After this change: if -z separate-code is specified.
-z separate-code was introduced to ld.bfd in 2018, to place the text
segment in its own pages. There is no overlap in pages between an
executable segment and a non-executable segment:
1) RX cannot load initial contents from R or RW (or non-SHF_ALLOC).
2) R and RW (or non-SHF_ALLOC) cannot load initial contents from RX.
lld's current status:
- Between R and RX: in `Writer<ELFT>::fixSectionAlignments()`, the start of a
segment is always aligned to maxPageSize, so the initial contents loaded by R
and RX do not overlap. I plan to allow overlaps in D64906 if -z noseparate-code
is in effect.
- Between RX and RW(or non-SHF_ALLOC if RW doesn't exist):
we currently unconditionally pad the last page to commonPageSize
(defaults to 4096 on all targets we support).
This patch will make it effective only if -z separate-code is specified.
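A hedged sketch of the new gate (helper names are illustrative):

  // Pad the last page of each PF_X PT_LOAD with traps only under
  // -z separate-code; with -z noseparate-code the bytes can be reused.
  if (config->zSeparateCode)
    for (PhdrEntry *p : phdrs)
      if (p->p_type == PT_LOAD && (p->p_flags & PF_X))
        fillLastPageWithTraps(p);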
-z separate-code is a dubious feature intended to reduce the number of
ROP gadgets (which is actually ineffective, because attackers can find
plenty of gadgets in the text segment and have no need to find gadgets
in non-code regions).
With the overlapping PT_LOAD technique D64906, -z noseparate-code
removes two more alignments at segment boundaries than -z separate-code.
This saves at most defaultMaxPageSize*2 bytes, which is significant on
targets with a large defaultMaxPageSize (AArch64/MIPS/PPC: 65536).
Issues and feedback on alignment at segment boundaries, to help
understand the implications:
* binutils PR24490 (the situation in ld.bfd is worse because it has two
R-- segments on both sides of R-E, hence more alignments.)
* In binutils, the 2018-02-27 commit "ld: Add --enable-separate-code"
(d969dea983) made -z separate-code the default on Linux.
* In musl-cross-make, binutils is configured with
--disable-separate-code to address the size regressions caused by
-z separate-code. (lld actually has the same issue, which I plan to fix
in a future patch. The ld.bfd x86 status is worse because it defaults
to max-page-size=0x200000.)
* https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237676: people want
smaller code size. This patch will remove one alignment boundary.
* Stef O'Rear: I'm opposed to any kind of page alignment at the
text/rodata line (having a partial page of text aliased as rodata and
vice versa has no demonstrable harm, and I actually care about small
systems).
So, make -z noseparate-code the default.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D64903
llvm-svn: 367537
Summary:
After D58892 split the RW PT_LOAD on the PT_GNU_RELRO boundary, the new
layout is:
PT_LOAD(PT_GNU_RELRO(.data.rel.ro .bss.rel.ro)) PT_LOAD(.data .bss)
The two pageAlign() calls at PT_GNU_RELRO boundaries are redundant due
to the existence of PT_LOAD.
Reviewers: grimar, peter.smith, ruiu, espindola
Reviewed By: ruiu
Subscribers: sfertile, atanasyan, emaste, arichardson, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64854
llvm-svn: 366307
This patch does the same thing as r365595 to the other subdirectories,
completing the naming style change for the entire lld directory.
Differential Revision: https://reviews.llvm.org/D64473
llvm-svn: 365730
This patch was mechanically generated by the clang-llvm-rename tool
that I wrote using the Clang Refactoring Engine just for creating this
patch. You can see the
source code of the tool at https://reviews.llvm.org/D64123. There's no manual
post-processing; you can generate the same patch by re-running the tool against
lld's code base.
Here is the main discussion thread to change the LLVM coding style:
https://lists.llvm.org/pipermail/llvm-dev/2019-February/130083.html
In the discussion thread, I proposed we use lld as a testbed for variable
naming scheme change, and this patch does that.
I chose to rename variables so that they are in camelCase, just because
that is a minimal change that makes variables start with a lowercase
letter.
Note to downstream patch maintainers: if you are maintaining a downstream lld
repo, just rebasing ahead of this commit would cause massive merge conflicts
because this patch essentially changes every line in the lld subdirectory. But
there's a remedy.
The clang-llvm-rename tool is a batch tool, so you can rename variables
in your downstream repo with it. Given that, here is how to rebase your
repo to a commit after the mass renaming:
1. rebase to the commit just before the mass variable renaming,
2. apply the tool to your downstream repo to mass-rename variables locally, and
3. rebase again to the head.
Most changes made by the tool should be identical for a downstream repo
and for the head, so at step 3, almost all changes should be merged and
disappear. I'd expect that there would be some lines that you need to
merge by hand, but that shouldn't be too many.
Differential Revision: https://reviews.llvm.org/D64121
llvm-svn: 365595
If .sdata is absent, the linker-synthesized __global_pointer$ gets a section index of SHN_ABS.
(ld.bfd has a similar issue: binutils PR24678)
Scrt1.o may use `lla gp, __global_pointer$` to reference the symbol PC
relatively. In -pie/-shared mode, lld complains if a PC relative
relocation references an absolute symbol (SHN_ABS) but ld.bfd doesn't:
ld.lld: error: relocation R_RISCV_PCREL_HI20 cannot refer to absolute symbol: __global_pointer$
Let a reference to __global_pointer$ force the creation of .sdata, to
fix the problem. This is similar to _GLOBAL_OFFSET_TABLE_, which forces
the creation of .got or .got.plt.
Also, change the visibility from STV_HIDDEN to STV_DEFAULT and don't
define the symbol for -shared. This matches ld.bfd, though I don't
understand why it uses STV_DEFAULT.
Reviewed By: ruiu, jrtc27
Differential Revision: https://reviews.llvm.org/D63132
llvm-svn: 363351
Otherwise the getPartition() accessor may return an OOB pointer. Found
using _GLIBCXX_DEBUG.
The error is benign (we never dereference the pointer for the end marker)
so this wasn't caught by e.g. the sanitizer bots.
llvm-svn: 363026
We create several types of synthetic sections for loadable partitions, including:
- The dynamic symbol table. This allows code outside of the loadable partitions
to find entry points with dlsym.
- Creating a dynamic symbol table also requires the creation of several other
synthetic sections for the partition, such as the dynamic table and hash table
sections.
- The partition's ELF header is represented as a synthetic section in the
combined output file, and will be used by llvm-objcopy to extract partitions.
Differential Revision: https://reviews.llvm.org/D62350
llvm-svn: 362819
Many -static/-no-pie/-shared/-pie applications linked against glibc or musl
should work with this patch. This also helps FreeBSD PowerPC64 to migrate
their lib32 (PR40888).
* Fix default image base and max page size.
* Support the new-style Secure PLT (see below). The old-style BSS PLT
is not implemented, so lld is not yet suitable for FreeBSD rtld, which
doesn't support Secure PLT.
* Support more initial relocation types:
R_PPC_ADDR32, R_PPC_REL16*, R_PPC_LOCAL24PC, R_PPC_PLTREL24, and R_PPC_GOT16.
The addend of R_PPC_PLTREL24 is special: it decides the call stub's PLT
type, but it should be ignored when computing the target symbol's VA.
* Support GNU ifunc
* Support .glink used for lazy PLT resolution in glibc
* Add a new thunk type: PPC32PltCallStub that is similar to PPC64PltCallStub.
It is used by R_PPC_REL24 and R_PPC_PLTREL24.
A PLT stub used in -fPIE/-fPIC usually loads an address relative to
.got2+0x8000 (-fpie/-fpic code uses _GLOBAL_OFFSET_TABLE_ relative
addresses).
Two .got2 sections in two object files have different addresses; thus, a PLT stub
can't be shared by two object files. To handle this incompatibility,
change the parameters of Thunk::isCompatibleWith to
`const InputSection &, const Relocation &`.
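A hedged sketch of the new interface (the body is illustrative, not the
actual logic):

  bool PPC32PltCallStub::isCompatibleWith(const InputSection &IS,
                                          const Relocation &Rel) const {
    // A stub loads from one file's .got2, so it can only serve call
    // sites whose .got2 is the same one.
    return File == IS.File;
  }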
The PowerPC psABI specified an old-style .plt (BSS PLT) that is both
writable and executable. Linkers don't make separate RW and RWE
segments, which makes all initially writable memory (think .data)
executable. This is a big security concern, so a new PLT scheme (Secure
PLT) was developed to address the issue.
TLS will be implemented in D62940.
glibc older than ~2012 requires .rela.dyn to include .rela.plt; it
cannot handle the DT_RELA+DT_RELASZ == DT_JMPREL case correctly. A hack
(not included in this patch) in LinkerScript.cpp addOrphanSections() can
work around the issue:
  if (Config->EMachine == EM_PPC) {
    // Older glibc assumes .rela.dyn includes .rela.plt
    Add(In.RelaDyn);
    if (In.RelaPlt->isLive() && !In.RelaPlt->Parent)
      In.RelaDyn->getParent()->addSection(In.RelaPlt);
  }
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D62464
llvm-svn: 362721
We currently (ab)use the Live bit on output sections to track whether
the section has ever had an input section added to it, and then later
use it during orphan placement. This will conflict with one of my upcoming
partition-related changes that will assign all output sections to a partition
(thus marking them as live) so that they can be added to the correct segment
by the code that creates program headers.
Instead of using the Live bit for this purpose, create a new flag and
start using it to track the property explicitly.
Differential Revision: https://reviews.llvm.org/D62348
llvm-svn: 362444
(1) {gcc,clang} -fuse-ld=bfd -pie -fPIE -nostdlib a.c => .interp created
(2) {gcc,clang} -fuse-ld=lld -pie -fPIE -nostdlib a.c => .interp not created
(3) {gcc,clang} -fuse-ld=lld -pie -fPIE -nostdlib a.c a.so => .interp created
The inconsistency of (2) is due to the condition `!Config->SharedFiles.empty()`.
To make lld behave more like ld.bfd, we could change the condition to:
Config->HasDynSymTab && !Config->DynamicLinker.empty() && Script->needsInterpSection();
However, that would bring another inconsistency as can be observed with:
(4) {gcc,clang} -fuse-ld=bfd -no-pie -nostdlib a.c => .interp not created
So instead, use `!Config->DynamicLinker.empty() && Script->needsInterpSection()`,
which is both simple and consistent in these cases.
The inconsistency of (4) likely originated from ld.bfd and gold's choice to have a default --dynamic-linker.
Their condition to create .interp is ANDed with (not -shared).
Since lld doesn't have a default --dynamic-linker,
compiler drivers (gcc/clang) don't pass --dynamic-linker for -shared,
and direct ld users are not supposed to specify --dynamic-linker for -shared,
we do not need the condition !Config->Shared.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D62765
llvm-svn: 362355
For the Local Dynamic case of TLSDESC, _TLS_MODULE_BASE_ is defined as
a special TLS symbol such that:
1) Without relaxation: it produces a dynamic TLSDESC relocation that
computes 0; @dtpoff is added to access a TLS symbol.
2) With LD->LE relaxation: _TLS_MODULE_BASE_@tpoff = 0 (the lowest
address in the TLS block); @tpoff is added to access a TLS symbol.
For 1), this saves dynamic relocations and GOT slots as otherwise
(General Dynamic) we would create an R_X86_64_TLSDESC and reserve two
GOT slots for each symbol.
Add ElfSym::TlsModuleBase and change the signature of getTlsTpOffset()
to special case _TLS_MODULE_BASE_.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D62577
llvm-svn: 362078
This change causes us to read partition specifications from partition
specification sections and split output sections into partitions according
to their reachability from partition entry points.
This is only the first step towards a full implementation of partitions. Later
changes will add additional synthetic sections to each partition so that
they can be loaded independently.
Differential Revision: https://reviews.llvm.org/D60353
llvm-svn: 361925
This reverts D53906.
D53906 increased p_align of PT_TLS on ARM/AArch64 to 32/64 to make the
static TLS layout compatible with Android Bionic's ELF TLS. However,
this may cause glibc ARM/AArch64 programs to crash (see PR41527).
The faulty PT_TLS in the executable satisfies p_vaddr%p_align != 0. The
remainder is normally 0 but may be non-zero with the hack in place. The
problem is that we increase PT_TLS's p_align after OutputSections'
addresses are fixed (assignAddress()). It is possible that
p_vaddr%old_p_align = 0 while p_vaddr%new_p_align != 0.
For a thread-local variable defined in the executable, the lld-computed
TLS offset (local exec) is different from the glibc-computed TLS offset
from another module (initial exec/general dynamic). Note: PR41527 said
the bug affects initial exec, but actually general dynamic is affected
as well.
(glibc is correct in that it computes offsets that satisfy
`offset%p_align == p_vaddr%p_align`, which is a basic ELF requirement.
This hack appears to work on FreeBSD rtld, musl<=1.1.22, and Bionic, but
only because they (and lld) incorrectly compute offsets that satisfy
`offset%p_align = 0` instead.)
Android developers are fine with reverting this patch and carrying it
in their tree until they figure out a long-term solution (e.g. a dummy
.tdata with sh_addralign=64 sh_size={0,1} in crtbegin*.o files; the
overhead is now insignificant after D62059).
Reviewed By: rprichard, srhines
Differential Revision: https://reviews.llvm.org/D62055
llvm-svn: 361090
The code was added in r252352, probably to address some layout issues.
Actually, PT_TLS's p_memsz doesn't need to be aligned in either TLS
variant; ld.bfd doesn't do that.
In case of larger alignment (e.g. 64 for Android Bionic on AArch64, see
D62055), this may make the overhead smaller.
Reviewed By: ruiu
Differential Revision: https://reviews.llvm.org/D62059
llvm-svn: 361029
This is the last patch of the series of patches to make it possible to
resolve symbols without asking SymbolTable to do so.
The main point of this patch is the introduction of
`elf::resolveSymbol(Symbol *Old, Symbol *New)`. That function resolves
or merges the given symbols by examining their types, and calls
replaceSymbol (which memcpy's New to Old) if necessary.
With the new function, we have now separated symbol resolution from
symbol lookup. If you already have a Symbol pointer, you can directly
resolve the symbol without asking SymbolTable to do that.
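A minimal sketch of the shape (shouldReplace is illustrative; the real
comparison logic examines the symbols' kinds and bindings):

  void elf::resolveSymbol(Symbol *Old, Symbol *New) {
    // replaceSymbol memcpy's *New over *Old, so every existing pointer
    // to Old observes the resolution.
    if (shouldReplace(Old, New))
      replaceSymbol(Old, New);
  }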
Now that this nice abstraction has become available, I can start
working on performance improvements to the linker. As a starter, I'm
thinking of making --{start,end}-lib faster.
--{start,end}-lib is currently unnecessarily slow because it looks up
the symbol table twice for each symbol.
- The first hash table lookup/insertion occurs when we instantiate a
LazyObject file to insert LazyObject symbols.
- The second hash table lookup/insertion occurs when we create an
ObjFile from LazyObject file. That overwrites LazyObject symbols
with Defined symbols.
I think it is not too hard to see how we can now eliminate the second
hash table lookup. We can keep LazyObject symbols in Step 1, and then
call elf::resolveSymbol() to do Step 2.
Differential Revision: https://reviews.llvm.org/D61898
llvm-svn: 360975
Previously, we handled common symbols as a kind of Defined symbol, but
what we were doing for common symbols was pretty different from what we
do for regular defined symbols.
Common symbols and defined symbols are probably as different from each
other as shared symbols and defined symbols are.
This patch introduces CommonSymbol to represent common symbols.
After symbols are resolved, they are converted to Defined symbols
residing in a .bss section.
Differential Revision: https://reviews.llvm.org/D61895
llvm-svn: 360841
SymbolTable's add-family functions have lots of parameters because
when they have to create a new symbol, they forward given arguments
to Symbol's constructors. Therefore, the functions take at least as
many arguments as their corresponding constructors.
This patch simplifies the add-family functions. Now, the functions take
a symbol instead of the arguments used to construct one. If there's no
existing symbol, the given symbol is memcpy'ed to the symbol table.
Otherwise, the functions attempt to merge the existing symbol and the
given new one.
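A hedged sketch of the new shape (names and details are illustrative):

  Symbol *SymbolTable::addDefined(const Defined &New) {
    Symbol *Old = insert(New.getName());   // look up or reserve a slot
    if (Old->isPlaceholder())              // no existing symbol:
      memcpy(Old, &New, sizeof(Defined));  //   copy into the table
    else
      merge(Old, New);                     // otherwise attempt a merge
    return Old;
  }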
I also eliminated the `CanOmitFromDynSym` parameter, so that the
functions really take just one argument.
Symbol classes are trivially constructible, so it looks like
constructing them to pass to the add-family functions is as cheap as
passing a lot of arguments. A quick benchmark showed that this patch
seems performance-neutral.
This is a preparation for
http://lists.llvm.org/pipermail/llvm-dev/2019-April/131902.html
Differential Revision: https://reviews.llvm.org/D61855
llvm-svn: 360838
The -n (--nmagic) option disables page alignment and acts as -Bstatic.
The -N (--omagic) option does what -n does but also marks the executable
segment as writable. As page alignment is disabled, headers are not
allocated unless explicitly placed in the linker script.
To disable page alignment in LLD, we choose to set the page sizes to 1
so that any alignment based on the page size does nothing. To set
Target->PageSize to 1, we implement -z common-page-size, which has the
side effect of allowing the user to set the value as well.
Setting the page alignments to 1 does mean that any use of
CONSTANT(MAXPAGESIZE) or CONSTANT(COMMONPAGESIZE) in a linker script
will return 1, unlike in ld.bfd. However, given that -n and -N disable
paging, these probably shouldn't be used in a linker script where -n or
-N is in use.
Differential Revision: https://reviews.llvm.org/D61688
llvm-svn: 360593
Summary:
While the generic ABI requires notes to be 8-byte aligned in ELF64, many
vendor-specific notes (from Linux, NetBSD, Solaris, etc) use 4-byte
alignment.
In a PT_NOTE segment, if 4-byte aligned notes are followed by an 8-byte
aligned note, the possible 4-byte padding may make consumers fail to
parse the 8-byte aligned note. See PR41000 for a recent report about
.note.gnu.property (NT_GNU_PROPERTY_TYPE_0).
(Note, for NT_GNU_PROPERTY_TYPE_0, the consumers should probably migrate
to PT_GNU_PROPERTY, but the alignment issue affects other notes as well.)
To fix the issue, don't mix notes with different alignments in one
PT_NOTE. If compilers emit 4-byte aligned notes before 8-byte aligned
notes, we'll create at most 2 segments.
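A sketch of the grouping rule (modeled on the program header creation in
Writer; the helper is illustrative):

  PhdrEntry *Note = nullptr;
  for (OutputSection *Sec : OutputSections) {
    if (Sec->Type != SHT_NOTE || !(Sec->Flags & SHF_ALLOC)) {
      Note = nullptr; // a non-note section ends the current run
      continue;
    }
    // Start a new PT_NOTE whenever the alignment changes.
    if (!Note || Note->p_align != Sec->Alignment)
      Note = AddHdr(PT_NOTE, PF_R);
    Note->add(Sec);
  }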
sh_size%sh_addralign=0 is actually implied by the rule for linking
unrecognized sections (in generic ABI), so we don't have to check that.
Notes that match in name, type and attribute flags are concatenated into
a single output section. The compilers have to ensure
sh_size%sh_addralign=0 to make concatenated notes parsable.
An alternative approach is to create a PT_NOTE for each SHT_NOTE, but
we'll have to incur the sizeof(Elf64_Phdr)=56 overhead every time a new
note section is introduced.
Reviewers: ruiu, jakehehrlich, phosek, jhenderson, pcc, espindola
Subscribers: emaste, arichardson, krytarowski, fedor.sergeev, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D61296
llvm-svn: 359853
This is a follow-up to r358979, which made findOrphanPos only consider
live sections. Unfortunately, the corresponding change to
getRankProximity, which findOrphanPos uses, was missed.
llvm-svn: 359554
This patch changes the behaviour of findOrphanPos to only consider live
sections when placing orphan sections, which is how it behaved in the
past.
Differential Revision: https://reviews.llvm.org/D60273
llvm-svn: 358979
Make some small adjustments while touching the code: make parameters
const, use less_first(), etc.
Differential Revision: https://reviews.llvm.org/D60989
llvm-svn: 358943
This generalizes code and also fixes the broken behavior shown in one
of our test cases on some targets, like x86-64.
The issue occurs when forward declarations are used in the script. One
of the samples is:
SECTIONS {
foo = ADDR(.text) - ABSOLUTE(ADDR(.text));
};
In that case, we get broken output when the output target does not use
thunks. That happens because the thunk-creating code (called from
maybeAddThunks) calls Script->assignAddresses() at least one more time,
which fixes up the values. As a result, final symbol values can differ
between AArch64 and x86, for example.
In this patch, I generalize and rename maybeAddThunks to
finalizeAddressDependentContent, which is now used and called by all
targets.
Differential revision: https://reviews.llvm.org/D55550
llvm-svn: 358646
With partitions, each partition should have the same build id. This means
that the build id needs to be only computed once, otherwise we will end up
with different build ids in each partition as a result of the file contents
changing. This change moves the computation of the build id into Writer so
that it only happens once.
Differential Revision: https://reviews.llvm.org/D60342
llvm-svn: 358536
For partitions I intend to use the same set of version indexes in
each partition for simplicity. Since each partition will need its own
VersionNeedSection this will require moving the verneed tracking out of
VersionNeedSection. The way I've done this is to move most of the tracking
into SharedFile. What will eventually become the per-partition tracking
still lives in VersionNeedSection.
As a bonus the code gets a little simpler and more consistent with how we
handle verdef.
Differential Revision: https://reviews.llvm.org/D60307
llvm-svn: 357926
Previously, we dropped symbols starting with .L from the symbol table,
so if a relocation referred to a .L symbol, it ended up referencing a
null symbol, which happened to be interpreted as an absolute symbol.
This patch copies all symbols, including local ones, if --emit-relocs
is given.
Fixes https://bugs.llvm.org/show_bug.cgi?id=41385
Differential Revision: https://reviews.llvm.org/D60306
llvm-svn: 357885
And rename the function to combineEhSections(). This makes the processing
of .ARM.exidx even more similar to .eh_frame and means that we can avoid an
additional loop over InputSections.
Differential Revision: https://reviews.llvm.org/D60026
llvm-svn: 357417