The existing implementation ran CHECKs to assert that the thread state
was stored inside the TLS. However, the Mac implementation of TSan doesn't
store the thread state in TLS, so these checks fail once Darwin TLS support
is added to the sanitizers. Only run these checks on platforms where
the thread state is expected to be contained in the TLS.
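A minimal sketch of the guard, assuming the SANITIZER_MAC macro and the
usual CHECK_* helpers from sanitizer_common (the exact bounds checked here
are illustrative, not the commit's literal code):

  // Only run the TLS containment CHECKs where ThreadState actually
  // lives inside the thread's TLS region.
  #if !SANITIZER_MAC
    CHECK_GE((uptr)thr, tls_addr);
    CHECK_LE((uptr)thr + sizeof(ThreadState), tls_addr + tls_size);
  #endif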
llvm-svn: 303886
Summary:
TSan's Android `__get_tls()` and `TLS_SLOT_TSAN` can be used by other sanitizers as well (see D32649), so this change moves them to sanitizer_common.
I picked sanitizer_linux.h as their new home.
In the process, add the 32-bit versions for ARM, i386 & MIPS.
Can the address of `__get_tls()[TLS_SLOT_TSAN]` change between the calls?
I am not sure whether the construct needs to be repeated as opposed to being cached in a variable, so I left things as they were.
Testing on my side was restricted to a successful cross-compilation.
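For context, a sketch of what the per-architecture `__get_tls()`
definitions look like (these mirror Bionic's TLS register reads; treat the
exact sequences as assumptions, not the committed code):

  #if defined(__aarch64__)
  # define __get_tls() \
      ({ void** __v; __asm__("mrs %0, tpidr_el0" : "=r"(__v)); __v; })
  #elif defined(__arm__)
  # define __get_tls() \
      ({ void** __v; __asm__("mrc p15, 0, %0, c13, c0, 3" : "=r"(__v)); __v; })
  #endif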
Reviewers: dvyukov, kubamracek
Reviewed By: dvyukov
Subscribers: aemerson, rengolin, srhines, dberris, arichardson, llvm-commits
Differential Revision: https://reviews.llvm.org/D32705
llvm-svn: 301926
Summary:
The current code was sometimes attempting to release huge chunks of
memory due to an undesired RoundUp/RoundDown interaction when the requested
range was fully contained within one memory page.
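A sketch of the failure mode, using RoundUpTo/RoundDownTo from
sanitizer_common (the guard shown is the gist of the fix, not its literal
form, and the release helper is an assumption):

  // If [beg, end) lies within a single page, rounding beg up and end
  // down makes page_beg exceed page_end, and the unsigned difference
  // wraps around to a huge size.
  uptr page_beg = RoundUpTo(beg, page_size);
  uptr page_end = RoundDownTo(end, page_size);
  if (page_beg >= page_end)
    return;  // nothing releasable inside this range
  ReleaseToOS(page_beg, page_end - page_beg);  // assumed release helper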
Reviewers: eugenis
Subscribers: kubabrecka, llvm-commits
Patch by Aleksey Shlyapnikov.
Differential Revision: https://reviews.llvm.org/D27228
llvm-svn: 288271
Currently we either define SANITIZER_GO for Go or don't define it at all for C++.
This works fine with the preprocessor (ifdef/ifndef/defined), but does not
work in C++ if statements (e.g. if (SANITIZER_GO) {...}). It also differs
from the majority of SANITIZER_FOO macros, which are always defined to
either 0 or 1.
Always define SANITIZER_GO to either 0 or 1.
This allows using SANITIZER_GO in expressions and in flag default values.
Also remove kGoMode and kCppMode, which were meant to be used in expressions,
but they are not defined in sanitizer_common code, so SANITIZER_GO became the
prevalent form.
Also convert some preprocessor checks to C++ if's or ternary expressions.
The majority of this change was done mechanically with:
sed "s#ifdef SANITIZER_GO#if SANITIZER_GO#g"
sed "s#ifndef SANITIZER_GO#if \!SANITIZER_GO#g"
sed "s#defined(SANITIZER_GO)#SANITIZER_GO#g"
llvm-svn: 285443
Current AArch64 {sig}{set,long}jmp interposing requires accessing the glibc
private __pointer_chk_guard to get the process XOR mask used to demangle the
internal {sig}jmp_buf function pointers.
This causes some packaging issues, as described in GCC PR#71042 [1], and it
is not good practice to rely on a private glibc namespace (since its ABI is
not meant to be stable).
This patch fixes it by changing how libtsan obtains the guard value: at
initialization a specific routine issues a setjmp call and, using the
mangled function pointer and the original value, derives the random
guard pointer.
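A sketch of the derivation (the jmp_buf slot index kSpSlot is an
assumption; glibc on AArch64 mangles the saved SP/PC by XOR-ing it with
the guard, so one setjmp whose inputs we already know recovers the key):

  #include <setjmp.h>

  static uptr longjmp_xor_key;

  static void InitializeLongjmpXorKey() {
    jmp_buf env;
    _setjmp(env);                    // glibc stores the mangled SP in env
    uptr sp;
    __asm__("mov %0, sp" : "=r"(sp));
    // kSpSlot: assumed jmp_buf index of the mangled stack pointer.
    uptr mangled_sp = ((uptr *)env)[kSpSlot];
    longjmp_xor_key = mangled_sp ^ sp;  // mangled = real ^ key
  }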
Checked on aarch64 39-bit VMA.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71042
llvm-svn: 278292
This patch adds 48-bit VMA support for TSan on aarch64. As with the
current aarch64 mappings, the 48-bit VMA also supports PIE executables.
This limits the mapping mechanism because the PIE address bits
(usually 0aaaaXXXXXXXX) make it harder to create a mask/xor value
that covers all memory regions. I think it is possible to create a
larger application VMA range by either dropping PIE support or tuning
the current range.
It also slightly changes the way addresses are packed in the SyncVar
structure: previously it assumed x86_64 as the maximum VMA range. Since
the ID is 14 bits wide, shifting by 48 bits should be OK.
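A sketch of the packing with the widths stated above (function and
constant names are illustrative, not SyncVar's actual members):

  // 14-bit unique ID in the top bits, 48-bit address below.
  static const int kAddrBits = 48;
  u64 PackIdAndAddr(u64 id, uptr addr) {
    return (id << kAddrBits) | (addr & ((1ULL << kAddrBits) - 1));
  }
  u64 UnpackId(u64 v) { return v >> kAddrBits; }
  uptr UnpackAddr(u64 v) { return (uptr)(v & ((1ULL << kAddrBits) - 1)); }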
Tested on x86_64, ppc64le and aarch64 (39 and 48 bits VMA).
llvm-svn: 277137
The new annotation was added a while ago but was not actually used.
Use the annotation to detect linker-initialized mutexes instead of the
broken IsGlobalVar, which has both false positives and false negatives.
Remove the IsGlobalVar mess.
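Presumably the annotation is applied like this (a hedged example; the
macro comes from the dynamic annotations interface):

  // Mark a global mutex as linker-initialized so TSan tracks it
  // without requiring an explicit creation event.
  pthread_mutex_t g_mu = PTHREAD_MUTEX_INITIALIZER;
  ANNOTATE_RWLOCK_CREATE_STATIC(&g_mu);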
llvm-svn: 271663
In short, CVE-2016-2143 will crash the machine if a process uses both >4TB
virtual addresses and fork(). ASan, TSan, and MSan will, by necessity, map
a sizable chunk of virtual address space, which is much larger than 4TB.
Even worse, sanitizers will always use fork() for llvm-symbolizer when a bug
is detected. Disable all three by aborting on process initialization if
the running kernel version is not known to contain a fix.
Unfortunately, there's no reliable way to detect the fix without crashing
the kernel. So, we rely on whitelisting - I've included a list of upstream
kernel versions that will work. In case someone uses a distribution kernel
or applied the fix themselves, an override switch is also included.
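The check presumably boils down to parsing uname(2) output against
known-good versions (the threshold below is a placeholder, not the
commit's actual whitelist):

  #include <stdio.h>
  #include <sys/utsname.h>

  static bool KernelHasFix() {
    struct utsname u;
    if (uname(&u) != 0) return false;
    int major = 0, minor = 0;
    sscanf(u.release, "%d.%d", &major, &minor);
    // Placeholder threshold; the real list names specific fixed kernels.
    return major > 4 || (major == 4 && minor >= 6);
  }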
Differential Revision: http://reviews.llvm.org/D18915
llvm-svn: 266297
Summary:
After patch https://lkml.org/lkml/2015/12/21/340 was introduced into the
Linux kernel, the random gap between stack and heap increased
from 128M to 36G on 39-bit aarch64, and it is almost impossible
to cover this big range. So we need to disable randomized virtual
space on aarch64 Linux.
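Disabling it presumably follows the usual sanitizer pattern of dropping
ASLR via personality(2) and re-executing (ReExec below stands in for the
runtime's re-exec helper):

  #include <sys/personality.h>

  int old = personality(0xffffffff);        // query current personality
  if (old != -1 && !(old & ADDR_NO_RANDOMIZE)) {
    personality(old | ADDR_NO_RANDOMIZE);   // turn off randomization
    ReExec();                               // restart with the new setting
  }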
Reviewers: llvm-commits, zatrazz, dvyukov, rengolin
Subscribers: aemerson, rengolin, tberghammer, danalbert, srhines
Differential Revision: http://reviews.llvm.org/D18526
llvm-svn: 265366
This reverts commits r264068 and r264079; they were breaking the build,
weren't reverted in time, and didn't exhibit the behaviour the reviewers
expected. There is more to discuss than just a test fix.
llvm-svn: 264150
Summary:
After patch https://lkml.org/lkml/2015/12/21/340 was introduced into the
Linux kernel, the random gap between stack and heap increased
from 128M to 36G on 39-bit aarch64, and it is almost impossible
to cover this big range. So I think we need to disable randomized
virtual space on aarch64 Linux.
Reviewers: kcc, llvm-commits, eugenis, zatrazz, dvyukov, rengolin
Subscribers: rengolin, aemerson, tberghammer, danalbert, srhines, enh
Differential Revision: http://reviews.llvm.org/D18003
llvm-svn: 264068
Summary:
1. Android doesn't support the __thread keyword, so allocate ThreadState
dynamically and store its pointer in a TLS slot provided by Android
(see the sketch after this list).
2. On Android, intercepted functions can be called before ThreadState
is initialized, so test thr_->is_inited in some places.
3. On Android, intercepted functions can also be called after ThreadState
is destroyed, so add a fake dead_thread_state to represent all
destroyed ThreadStates. That is also why we don't store the pointer
to ThreadState in the shadow memory of pthread_self().
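A hedged sketch of the slot-based cur_thread() (allocation details are
assumptions; MmapOrDie stands in for whatever allocator the runtime uses):

  ThreadState *cur_thread() {
    ThreadState *thr = (ThreadState *)__get_tls()[TLS_SLOT_TSAN];
    if (thr == nullptr) {
      // First touch on this thread: allocate dynamically, since
      // __thread is unavailable, and cache the pointer in the slot.
      thr = (ThreadState *)MmapOrDie(sizeof(ThreadState), "ThreadState");
      __get_tls()[TLS_SLOT_TSAN] = thr;
    }
    return thr;
  }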
Reviewers: kcc, eugenis, dvyukov
Subscribers: kubabrecka, llvm-commits, tberghammer, danalbert, srhines
Differential Revision: http://reviews.llvm.org/D15301
llvm-svn: 257866
The check_memcpy test added in r254959 fails on some configurations due to
memset() calls inserted by Clang. Try harder to avoid them:
* Explicitly use internal_memset() instead of an empty braced-initializer.
* Replace "new T()" with "new T", as the former generates zero-initialization
for structs in C++11 (see the example below).
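The distinction, as a small illustration:

  struct T { int a, b; };
  T *p1 = new T();  // value-initialized: the compiler may emit a memset
  T *p2 = new T;    // default-initialized: members stay uninitialized,
                    // no memset is generated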
llvm-svn: 255136
This patch is by Simone Atzeni with portions by Adhemerval Zanella.
This contains the LLVM patches to enable the thread sanitizer for
PPC64, both big- and little-endian. Two different virtual memory
sizes are supported: Old kernels use a 44-bit address space, while
newer kernels require a 46-bit address space.
There are two companion patches that will be added shortly. There is
a Clang patch to actually turn on the use of the thread sanitizer for
PPC64. There is also a patch that I wrote to provide interceptor
support for setjmp/longjmp on PPC64.
Patch discussion at reviews.llvm.org/D12841.
llvm-svn: 255057
This patch unifies the 39- and 42-bit support for AArch64 by using an
external memory read to check the runtime-detected VMA and select the
better mapping and transformation. Although slower, this allows the same
instrumented binary to be independent of the kernel.
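Runtime VMA detection can be sketched as reading a high address (the
stack) and taking the position of its highest set bit (a simplification
of what the runtime actually probes):

  static int DetectVmaWidth() {
    int dummy;
    uptr sp = (uptr)&dummy;
    // The stack lives near the top of the address space, so the index
    // of its highest set bit, plus one, approximates the VMA width.
    return 64 - __builtin_clzll(sp);
  }
  // ...then select the 39- or 42-bit shadow constants accordingly.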
Along with this change, this patch also fixes some 42-bit failures with
ASLR disabled by increasing the upper high app memory threshold and also
the 42-bit madvise value for the non-large-page case.
llvm-svn: 254151
TSan needs to use a custom malloc zone on OS X, which is already implemented in ASan. This patch uses the sanitizer_common implementation in `sanitizer_malloc_mac.inc` for TSan as well.
Reviewed at http://reviews.llvm.org/D14330
llvm-svn: 252155
Updating the shadow memory initialization in `tsan_platform_mac.cc` to also initialize the meta shadow and to mprotect the memory ranges that need to be avoided.
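A sketch of the pattern (the range accessors and helpers are assumptions
here, mirroring names used elsewhere in the runtime):

  // Reserve the meta shadow, then protect the gap between regions so
  // nothing else can ever be mapped there.
  MmapFixedNoReserve(MetaShadowBeg(), MetaShadowEnd() - MetaShadowBeg());
  ProtectRange(ShadowEnd(), MetaShadowBeg());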
Differential Revision: http://reviews.llvm.org/D14324
llvm-svn: 252044
This patch enables TSan for aarch64 with the 39-bit VMA layout. As defined
by tsan_platform.h, the layout used is:
0000 4000 00 - 0200 0000 00: main binary
2000 0000 00 - 4000 0000 00: shadow memory
4000 0000 00 - 5000 0000 00: metainfo
5000 0000 00 - 6000 0000 00: -
6000 0000 00 - 6200 0000 00: traces
6200 0000 00 - 7d00 0000 00: -
7d00 0000 00 - 7e00 0000 00: heap
7e00 0000 00 - 7fff ffff ff: modules and main thread stack
This gives about 8GB for the main binary, 4GB for the heap, and 8GB for
modules and the main thread stack.
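Bounds checks against this layout are straightforward; for example
(names assumed, constants copied from the table above):

  static const uptr kHeapMemBeg = 0x7d00000000ULL;
  static const uptr kHeapMemEnd = 0x7e00000000ULL;
  static bool IsHeapMem(uptr x) {
    return x >= kHeapMemBeg && x < kHeapMemEnd;
  }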
Most tests pass, with the exception of:
* ignore_lib0, ignore_lib1 and ignore_lib3, due to a kernel limitation
(no support for making an mmap'ed page non-executable).
* longjmp tests, due to missing specialized assembly routines.
These tests are xfail for now.
The only TSan issue still showing is:
rtl/TsanRtlTest/Posix.ThreadLocalAccesses
which still requires further investigation. The test is disabled for
aarch64 for now.
llvm-svn: 244055
The meta shadow is compressing and we don't flush it, so it makes sense
to mark it NOHUGEPAGE to avoid over-allocating memory.
On one program it reduces memory consumption from 5GB to 2.5GB.
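The marking itself is presumably a single madvise (the range accessors
are assumptions):

  #include <sys/mman.h>

  // Ask the kernel not to back the meta shadow with transparent huge
  // pages; sparse use would otherwise inflate memory consumption.
  madvise((void *)MetaShadowBeg(), MetaShadowEnd() - MetaShadowBeg(),
          MADV_NOHUGEPAGE);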
llvm-svn: 240028
This is done by creating a named shared memory region, unlinking it
and setting up a private (i.e. copy-on-write) mapping of that instead
of a regular anonymous mapping. I've experimented with regular
(sparse) files, but they cannot be scaled to the size of the MSan shadow
mapping, at least on Linux/x86_64 with an ext3 fs.
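A sketch of the named copy-on-write mapping (the function name and error
handling are illustrative):

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  void *MapNamedCow(const char *name, size_t size) {
    int fd = shm_open(name, O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd < 0) return nullptr;
    shm_unlink(name);  // region lives on; the name decorates /proc/self/maps
    if (ftruncate(fd, size) != 0) { close(fd); return nullptr; }
    // MAP_PRIVATE makes writes copy-on-write, like an anonymous mapping.
    void *p = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    close(fd);
    return p == MAP_FAILED ? nullptr : p;
  }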
Controlled by a common flag, decorate_proc_maps, disabled by default.
This patch has a few shortcomings:
* not all mappings are annotated, especially in TSan.
* our handling of memset() of shadow via mmap() puts small anonymous
mappings inside larger named mappings, which looks ugly and can, in
theory, hit the mapping number limit.
llvm-svn: 238621
On Windows, we have to know whether the memory to be protected is mapped
or not. On POSIX, Mprotect was semantically different from the mprotect()
most people know.
llvm-svn: 234602
It is currently broken because it reads the wrong value from the profile
(heap instead of total). Also make it faster by reading /proc/self/statm;
reading /proc/self/smaps can consume more than 50% of the time on beefy
apps if done every 100ms.
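Reading statm is presumably as simple as this (the first field is the
total program size, in pages):

  #include <stdio.h>

  static size_t TotalProgramPages() {
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f) return 0;
    size_t pages = 0;
    if (fscanf(f, "%zu", &pages) != 1) pages = 0;
    fclose(f);
    return pages;
  }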
llvm-svn: 213942