Currently we support only the %z and %ll width modifiers,
but surprisingly not %l. This makes it impossible to print longs
(sizeof(long) is not necessarily equal to sizeof(size_t)).
We had some printfs that printed longs with %zu,
but that's wrong, and now that __attribute__((format)) is in place
they are flagged by the compiler. So we have a choice of either
doing static_cast<uptr>(long) everywhere or adding %l.
Adding %l looks better: it's a standard modifier.
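A minimal standalone illustration of the mismatch (not code from the patch):
```cpp
#include <cstddef>
#include <cstdio>

int main() {
  long n = 42;
  // Wrong where sizeof(long) != sizeof(size_t) (e.g. LLP64 Windows):
  // %zu consumes a size_t-sized argument, and -Wformat flags the call.
  // printf("%zu\n", n);

  // Either cast at every call site...
  printf("%zu\n", static_cast<size_t>(n));
  // ...or use the standard %l width modifier, which this patch adds:
  printf("%ld\n", n);
  return 0;
}
```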
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D108066
This reverts commits
"sanitizer_common: support printing __m128i type"
and "[sanitizer] Fix VSNPrintf %V on Windows".
Unfortunately, the custom "%V" specifier is inherently incompatible with -Wformat;
it produces both:
warning: invalid conversion specifier 'V' [-Wformat-invalid-specifier]
warning: data argument not used by format string [-Wformat-extra-args]
If we disable both of these warnings we lose lots of useful warnings as well.
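For illustration, a declaration along these lines (hypothetical, mirroring the sanitizer-internal Printf) reproduces both warnings:
```cpp
// The attribute makes the compiler check arguments against the format string
// with the standard printf rules, which know nothing about "%V".
extern "C" void Printf(const char *format, ...)
    __attribute__((format(printf, 1, 2)));

void Demo(const void *p) {
  Printf("value: %V\n", p);
  // warning: invalid conversion specifier 'V' [-Wformat-invalid-specifier]
  // warning: data argument not used by format string [-Wformat-extra-args]
}
```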
Depends on D107978.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D107979
Mutex supports reader access, OS blocking, and spinning,
and is portable and smaller than BlockingMutex.
Overall it's supposed to be better than RWMutex/BlockingMutex.
Replace RWMutex/BlockingMutex with Mutex.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D106936
Mutex does not support LINKER_INITIALIZED initialization.
As preparation for switching BlockingMutex to Mutex,
proactively replace all BlockingMutex(LINKER_INITIALIZED) instances with Mutex.
All of these are objects with static storage duration, and the Mutex ctor
is constexpr, so the replacement should be equivalent.
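For illustration, a minimal sketch of why that holds (simplified types, not the actual Mutex definition): a constexpr constructor lets an object with static storage duration be constant-initialized at compile time, which is exactly what LINKER_INITIALIZED used to guarantee.
```cpp
typedef unsigned long long u64;  // stand-in for __sanitizer::u64

class Mutex {
 public:
  constexpr Mutex() : state_(0) {}  // no code runs at startup
  // Lock()/Unlock() elided.

 private:
  u64 state_;
};

// Before: static BlockingMutex mu(LINKER_INITIALIZED);
// After: constant-initialized, so there is no static-initialization-order
// problem and no runtime constructor call.
static Mutex mu;
```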
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D106944
Now that sanitizer_common mutex has feature-parity with tsan mutex,
switch tsan to the sanitizer_common mutex and remove tsan's custom mutex.
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D106379
We currently have 3 different mutexes:
- RWMutex
- BlockingMutex
- __tsan::Mutex
RWMutex and __tsan::Mutex are roughly the same,
except that the tsan version supports deadlock detection.
BlockingMutex degrades better under heavy contention
from lots of threads (it blocks in the OS), but it is much slower
under light contention, has non-portable performance,
has a larger static size, and is not reader-writer.
Add a new mutex that combines the advantages of all of these
mutexes: it's reader-writer, has a fast non-contended path,
supports blocking to degrade gracefully under higher contention,
and has portable size/performance.
For now it's named Mutex2 for incremental submission. The plan is to:
- land this change
- then move deadlock detection logic from tsan
- then rename it to Mutex and remove tsan Mutex
- then typedef RWMutex/BlockingMutex to this mutex
SpinMutex stays a separate type because it has a faster fast path:
1 atomic RMW per lock/unlock pair, compared to 2 for this mutex.
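A simplified sketch of the design (the field layout and blocking strategy are illustrative, not the actual sanitizer_common encoding): one atomic word holds a writer bit plus a reader count, so the uncontended lock and unlock are one RMW each.
```cpp
#include <atomic>
#include <cstdint>

class MutexSketch {
  static constexpr uint64_t kWriterLock = 1ull << 0;
  static constexpr uint64_t kReaderInc = 1ull << 1;  // readers counted above bit 0
  std::atomic<uint64_t> state_{0};

 public:
  void Lock() {
    uint64_t expected = 0;  // writers need the whole word to be free
    // Fast path: one RMW. The real implementation spins briefly and then
    // parks on a semaphore instead of the busy-wait shown here.
    while (!state_.compare_exchange_weak(expected, kWriterLock,
                                         std::memory_order_acquire))
      expected = 0;
  }
  void Unlock() { state_.fetch_sub(kWriterLock, std::memory_order_release); }

  void ReadLock() {
    // Announce the reader, then wait out any active writer.
    uint64_t s = state_.fetch_add(kReaderInc, std::memory_order_acquire);
    while (s & kWriterLock) s = state_.load(std::memory_order_acquire);
  }
  void ReadUnlock() { state_.fetch_sub(kReaderInc, std::memory_order_release); }
};
```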
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D106231
We obtain the current PC in all interceptors, and collectively
the common interceptor code contributes to the overall slowdown
(in particular for the cheaper str/mem* functions).
The current way to obtain the current PC involves:
4493e1: e8 3a f3 fe ff callq 438720 <_ZN11__sanitizer10StackTrace12GetCurrentPcEv>
4493e9: 48 89 c6 mov %rax,%rsi
and the called function is:
uptr StackTrace::GetCurrentPc() {
438720: 48 8b 04 24 mov (%rsp),%rax
438724: c3 retq
The new way uses the address of a local label and involves just:
44a888: 48 8d 35 fa ff ff ff lea -0x6(%rip),%rsi
I am not switching all uses of StackTrace::GetCurrentPc to GET_CURRENT_PC
because it may lead to differences in the produced reports and break tests.
The difference comes from the fact that currently the PC points
to the CALL instruction, but the new way does not emit any code of its own,
so the PC points to a random instruction in the function, and symbolizing
that instruction can produce additional inlined frames (if that
instruction happens to relate to some inlined function).
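The new way can be sketched as follows (a GNU extension supported by both GCC and Clang; a fallback to StackTrace::GetCurrentPc() would be needed for compilers without it):
```cpp
typedef unsigned long uptr;

// Taking the address of a local label inside a statement expression
// compiles to a single RIP-relative lea and emits no code of its own.
#define GET_CURRENT_PC()  \
  ({                      \
    __label__ pc_label;   \
  pc_label:               \
    (uptr)&&pc_label;     \
  })

uptr Demo() { return GET_CURRENT_PC(); }
```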
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D106046
Semaphore is a portable way to park/unpark threads.
The plan is to use it to implement a portable blocking
mutex in subsequent changes. Semaphore can also be used
to efficiently wait for other things (e.g. we currently
spin to synchronize thread creation and start).
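A sketch of the interface, using C++20 atomic wait/notify as a portable stand-in for the per-OS futex code a real implementation would use:
```cpp
#include <atomic>
#include <cstdint>

class SemaphoreSketch {
  std::atomic<uint32_t> state_{0};  // number of available permits

 public:
  void Wait() {
    uint32_t count = state_.load(std::memory_order_relaxed);
    for (;;) {
      if (count == 0) {
        state_.wait(0);  // park until Post() makes the count non-zero
        count = state_.load(std::memory_order_relaxed);
        continue;
      }
      if (state_.compare_exchange_weak(count, count - 1,
                                       std::memory_order_acquire))
        return;  // consumed one permit
    }
  }

  void Post(uint32_t count = 1) {
    state_.fetch_add(count, std::memory_order_release);
    state_.notify_all();  // unpark waiters; they race for the permits
  }
};
```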
Reviewed By: vitalybuka, melver
Differential Revision: https://reviews.llvm.org/D106071
Currently ThreadRegistry is overcomplicated because of tsan:
it needs a tid quarantine and reuse counters. Other sanitizers
don't need that. It also seems that no other sanitizer now
needs a maximum number of threads. Asan used to need a 2^24 limit,
but it does not seem to be needed anymore; other sanitizers blindly
copy-pasted that without reason. Lsan also uses the quarantine,
but I don't see why it would be needed.
Add a ThreadRegistry ctor that does not require any sizes
and use it in all sanitizers except for tsan.
This is in preparation for the new tsan runtime, which won't need
any of these parameters either.
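Sketched as declarations, with signatures simplified from the description:
```cpp
typedef unsigned u32;

class ThreadContextBase;
typedef ThreadContextBase *(*ThreadContextFactory)(u32 tid);

class ThreadRegistry {
 public:
  // Full-featured ctor kept for tsan: thread limit, tid quarantine, reuse.
  ThreadRegistry(ThreadContextFactory factory, u32 max_threads,
                 u32 thread_quarantine_size, u32 max_reuse_count);
  // New ctor for all other sanitizers: no sizes, no quarantine.
  explicit ThreadRegistry(ThreadContextFactory factory);
};
```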
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D105713
Currently we allocate MemoryMapper per size class.
MemoryMapper mmap's and munmap's internal buffer.
This results in 50 mmap/munmap calls under the global
allocator mutex. Reuse MemoryMapper and the buffer
for all size classes. This radically reduces the number of
mmap/munmap calls. Smaller size classes tend to have
more objects allocated, so it's highly likely that
the buffer allocated for the first size class will
be enough for all subsequent size classes.
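A simplified before/after sketch (names are illustrative, not the actual allocator code):
```cpp
#include <cstddef>

struct MemoryMapperSketch {
  // Internal buffer, mmap'ed lazily and grown on demand; smaller classes
  // hold more objects, so the first class usually sizes it for all the rest.
  void *buf = nullptr;
  size_t buf_size = 0;
};

void ReleaseFreeMemorySketch(size_t num_classes) {
  // Before: a fresh mapper per class, i.e. one mmap/munmap pair per class,
  // all under the global allocator mutex.
  // After: one mapper (and one buffer) shared by every size class.
  MemoryMapperSketch mapper;
  for (size_t class_id = 1; class_id < num_classes; class_id++) {
    // ... release the free memory of class_id through `mapper` ...
  }
}
```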
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D105778
This reverts commit 0726695214.
This causes the following build failure with gcc 10.3.0:
```
/home/nikic/llvm-project/compiler-rt/lib/sanitizer_common/sanitizer_allocator_primary64.h:114:31: error: declaration of ‘typedef class __sanitizer::MemoryMapper<__sanitizer::SizeClassAllocator64<Params> > __sanitizer::SizeClassAllocator64<Params>::MemoryMapper’ changes meaning of ‘MemoryMapper’ [-fpermissive]
  114 | typedef MemoryMapper<ThisT> MemoryMapper;
```
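A minimal reproduction of the error (illustrative): inside the class body the name first refers to the template, then the typedef redeclares it to mean the instantiation, which GCC rejects without -fpermissive.
```cpp
template <class Allocator> class MemoryMapper {};

template <class Params>
class SizeClassAllocator64 {
  typedef SizeClassAllocator64<Params> ThisT;
  // gcc: error: declaration of 'typedef class MemoryMapper<...>' changes
  // meaning of 'MemoryMapper' [-fpermissive]
  typedef MemoryMapper<ThisT> MemoryMapper;
};
```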
Currently we allocate MemoryMapper per size class.
MemoryMapper mmap's and munmap's internal buffer.
This results in 50 mmap/munmap calls under the global
allocator mutex. Reuse MemoryMapper and the buffer
for all size classes. This radically reduces the number of
mmap/munmap calls. Smaller size classes tend to have
more objects allocated, so it's highly likely that
the buffer allocated for the first size class will
be enough for all subsequent size classes.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D105778
__m128i is a vector SSE type used in tsan.
It's handy to be able to print it for debugging.
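One plausible way to format such a value (a sketch, not the actual VSNPrintf code): store the vector to memory and print its 16-bit lanes.
```cpp
#include <emmintrin.h>
#include <cstdio>

void PrintM128(__m128i v) {
  unsigned short lanes[8];
  _mm_storeu_si128(reinterpret_cast<__m128i *>(lanes), v);
  for (int i = 0; i < 8; i++)
    std::printf("%04x%s", lanes[i], i == 7 ? "\n" : ":");
}
```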
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D102167
This method is like StackTrace::Print, but instead of printing to stderr
it copies its output to a user-provided buffer.
Part of https://reviews.llvm.org/D102451.
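A usage sketch, assuming a signature along the lines of `uptr PrintTo(char *output, uptr out_size)` with snprintf-style truncation semantics:
```cpp
typedef unsigned long uptr;

struct StackTrace {
  uptr PrintTo(char *output, uptr out_size) const;  // assumed shape
};

void Demo(const StackTrace &stack) {
  char buf[2048];
  uptr needed = stack.PrintTo(buf, sizeof(buf));
  if (needed >= sizeof(buf)) {
    // Output was truncated; the return value is the size required.
  }
}
```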
Reviewed By: vitalybuka, stephan.yichao.zhao
Differential Revision: https://reviews.llvm.org/D102815
This relates to https://reviews.llvm.org/D95835.
In DFSan origin tracking we use StackDepot to record
stack traces and origin traces (like MSan origin tracking).
For at least two reasons, we wanted to control StackDepot's memory cost:
1) We may use DFSan origin tracking to monitor programs that run for
many days. This may eventually use too much memory for StackDepot.
2) DFSan supports flushing shadow memory to reduce overhead. After a flush,
   all existing IDs in StackDepot become invalid because nothing will
   refer to them.
Currently we have a bit of a mess related to tids:
- sanitizers re-declare kInvalidTid multiple times
- some call it kUnknownTid
- implicit assumptions that main tid is 0
- asan/memprof claim their tids need to fit into 24 bits,
but this does not seem to be true anymore
- inconsistent use of u32/int to store tids
Introduce kInvalidTid/kMainTid in sanitizer_common
and use them consistently.
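Sketched (exact spellings assumed; the description suggests settling on u32 for tids):
```cpp
typedef unsigned u32;

// One shared definition in sanitizer_common instead of per-sanitizer copies
// (some of which were named kUnknownTid):
constexpr u32 kInvalidTid = (u32)-1;
// Makes the previously implicit "main tid is 0" assumption explicit:
constexpr u32 kMainTid = 0;
```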
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D101428
... so that the FreeBSD-specific GetTls / glibc-specific pthread_self code can be
removed. This also helps FreeBSD arm64/powerpc64, which don't have a GetTls
implementation yet.
GetTls is the range of
* thread control block and optional TLS_PRE_TCB_SIZE
* static TLS blocks plus static TLS surplus
On glibc, lsan requires the range to include
`pthread::{specific_1stblock,specific}` so that allocations only referenced by
`pthread_setspecific` can be scanned.
This patch uses `dl_iterate_phdr` to collect TLS blocks. Find the one
with `dlpi_tls_modid==1` among the initially loaded modules, then find
consecutive ranges. The boundaries give us the address and size.
This allows us to drop the glibc-internal `_dl_get_tls_static_info` and
`InitTlsSize`. However, huge glibc x86-64 binaries with numerous shared objects
may observe a time complexity penalty, so exclude them for now. Use the simplified
method on non-Android Linux for now, but in theory this can be used on *BSD
and potentially other ELF OSes.
The removal of the RISC-V `__builtin_thread_pointer` call makes the code compilable
with more compiler versions (the builtin was added to Clang in 2020-03 and to GCC in 2020-07).
This simplification enables D99566 for TLS Variant I architectures.
Note: as of musl 1.2.2 and FreeBSD 12.2, dlpi_tls_data returned by
dl_iterate_phdr is not desired: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=254774
This can be worked around by using `__tls_get_addr({modid,0})` instead
of `dlpi_tls_data`. The workaround can be shared with the workaround for glibc<2.25.
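The collection step can be sketched like this (simplified: range merging and error handling are elided, and only the callback is shown):
```cpp
#include <link.h>
#include <stddef.h>

// Per the ELF TLS ABI, __tls_get_addr takes a {module id, offset} pair.
extern "C" void *__tls_get_addr(size_t *);

static int CollectStaticTlsBlocks(struct dl_phdr_info *info, size_t size,
                                  void *data) {
  if (!info->dlpi_tls_modid)
    return 0;  // this module has no TLS block
  void *begin = info->dlpi_tls_data;
  if (!begin) {
    // Workaround for glibc<2.25, musl 1.2.2 and FreeBSD 12.2, where
    // dlpi_tls_data is null or unusable: ask the dynamic loader directly.
    size_t mod_and_off[2] = {info->dlpi_tls_modid, 0};
    begin = __tls_get_addr(mod_and_off);
  }
  // Record {begin, dlpi_tls_modid} into the vector behind `data`; the entry
  // with dlpi_tls_modid == 1 anchors the initially loaded modules' range.
  (void)data;
  return 0;
}

// Usage: dl_iterate_phdr(CollectStaticTlsBlocks, &blocks);
```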
This fixes some tests on Alpine Linux x86-64 (musl)
```
test/lsan/Linux/cleanup_in_tsd_destructor.c
test/lsan/Linux/fork.cpp
test/lsan/Linux/fork_threaded.cpp
test/lsan/Linux/use_tls_static.cpp
test/lsan/many_tls_keys_thread.cpp
test/msan/tls_reuse.cpp
```
and `test/lsan/TestCases/many_tls_keys_pthread.cpp` on glibc aarch64.
The number of sanitizer test failures does not change on FreeBSD/amd64 12.2.
Differential Revision: https://reviews.llvm.org/D98926
This was reverted by f176803ef1 due to
Ubuntu 16.04 x86-64 glibc 2.23 problems.
This commit additionally calls `__tls_get_addr({modid,0})` to work around the
dlpi_tls_data==NULL issue for glibc<2.25
(https://sourceware.org/bugzilla/show_bug.cgi?id=19826).
GetTls is the range of
* thread control block and optional TLS_PRE_TCB_SIZE
* static TLS blocks plus static TLS surplus
On glibc, lsan requires the range to include
`pthread::{specific_1stblock,specific}` so that allocations only referenced by
`pthread_setspecific` can be scanned.
This patch uses `dl_iterate_phdr` to collect TLS blocks. Find the one
with `dlpi_tls_modid==1` among the initially loaded modules, then find
consecutive ranges. The boundaries give us the address and size.
This allows us to drop the glibc-internal `_dl_get_tls_static_info` and
`InitTlsSize` entirely. Use the simplified method on non-Android Linux for
now, but in theory this can be used on *BSD and potentially other ELF OSes.
This simplification enables D99566 for TLS Variant I architectures.
See https://reviews.llvm.org/D93972#2480556 for analysis on GetTls usage
across various sanitizers.
Differential Revision: https://reviews.llvm.org/D98926
On 64-bit systems with small VMAs (e.g. 39-bit) we can't use
SizeClassAllocator64 parameterized with size class maps containing a large
number of classes, as that will make the allocator region size too small
(< 2^32). Several tests were already disabled for Android because of this.
This patch provides the correct allocator configuration for RISC-V
(riscv64), generalizes the gating condition for tests that can't be enabled
for small VMA systems, and tweaks the tests that can be made compatible with
those systems to enable them.
I think the previous gating on Android should instead have been on
AArch64+Android, so the patch reflects that.
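For intuition, the arithmetic with illustrative constants (not the actual RISC-V configuration):
```cpp
#include <cstdint>

// SizeClassAllocator64 splits one contiguous space evenly across size
// classes, and each per-class region must be at least 2^32 bytes.
constexpr uint64_t kSpaceSize = 1ull << 37;  // fits in a 39-bit VMA
constexpr uint64_t kManyClasses = 64;        // a large size-class map
constexpr uint64_t kFewClasses = 32;         // a compact size-class map

static_assert(kSpaceSize / kManyClasses < (1ull << 32),
              "64 classes leave only 2^31 per region: too small");
static_assert(kSpaceSize / kFewClasses >= (1ull << 32),
              "32 classes give exactly 2^32 per region: just fits");
```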
Differential Revision: https://reviews.llvm.org/D97234
GetTls is the range of
* thread control block and optional TLS_PRE_TCB_SIZE
* static TLS blocks plus static TLS surplus
On glibc, lsan requires the range to include
`pthread::{specific_1stblock,specific}` so that allocations only referenced by
`pthread_setspecific` can be scanned.
This patch uses `dl_iterate_phdr` to collect TLS ranges. Find the one
with `dlpi_tls_modid==1` among the initially loaded modules, then find
consecutive ranges. The boundaries give us the address and size.
This allows us to drop the glibc-internal `_dl_get_tls_static_info` and
`InitTlsSize` entirely. Use the simplified method on non-Android Linux for
now, but in theory this can be used on *BSD and potentially other ELF OSes.
In the future, we can move `ThreadDescriptorSize` code to lsan (and consider
intercepting `pthread_setspecific`) to avoid hacks in generic code.
See https://reviews.llvm.org/D93972#2480556 for analysis on GetTls usage
across various sanitizers.
Differential Revision: https://reviews.llvm.org/D98926
The main use case for this change is HWASan aliasing mode, which premaps
the alias space adjacent to the dynamic shadow. With this change, the
primary allocator can allocate from the alias space instead of a
separate region.
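The shape of the change, sketched (class and parameter names assumed):
```cpp
typedef unsigned long uptr;
typedef int s32;

class PrimaryAllocatorSketch {
 public:
  // heap_start == 0: the allocator maps its own region (previous behavior).
  // heap_start != 0: place the heap in a region the caller has already
  // premapped, e.g. the alias space HWASan reserves next to the shadow.
  void Init(s32 release_to_os_interval_ms, uptr heap_start = 0);
};
```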
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D98293