In short, CVE-2016-2143 will crash the machine if a process uses both >4TB
virtual addresses and fork(). ASan, TSan, and MSan will, by necessity, map
a sizable chunk of virtual address space, much larger than 4TB. Even worse,
the sanitizers will always use fork() for llvm-symbolizer when a bug is
detected. Disable all three by aborting at process initialization if the
running kernel version is not known to contain a fix.
Unfortunately, there's no reliable way to detect the fix without crashing
the kernel, so we rely on whitelisting: I've included a list of upstream
kernel versions that will work. In case someone uses a distribution kernel
or has applied the fix themselves, an override switch is also included.
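For illustration, a minimal sketch of such a gate (the cutoff versions and
helper names are placeholders, not the actual runtime code; the real
whitelist is longer, and the override switch shown is illustrative):

```cpp
#include <stdio.h>
#include <stdlib.h>
#include <sys/utsname.h>

// Placeholder whitelist check: the real list of fixed upstream releases
// is longer and more precise.
static bool KernelHasFix(int major, int minor, int patch) {
  (void)patch;
  if (major > 4) return true;       // assumed: later majors carry the fix
  return major == 4 && minor >= 5;  // illustrative cutoff only
}

static void AbortIfVulnerable() {
  // Illustrative override switch for distribution/patched kernels.
  if (getenv("SANITIZER_IGNORE_CVE_2016_2143"))
    return;
  struct utsname buf;
  int major = 0, minor = 0, patch = 0;
  if (uname(&buf) == 0)
    sscanf(buf.release, "%d.%d.%d", &major, &minor, &patch);
  if (!KernelHasFix(major, minor, patch)) {
    fprintf(stderr, "Kernel may be vulnerable to CVE-2016-2143; aborting.\n");
    abort();
  }
}
```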
Differential Revision: http://reviews.llvm.org/D19576
llvm-svn: 267747
This is a reincarnation of http://reviews.llvm.org/D17648 with the bug fix pointed out by Adhemerval (zatrazz).
Currently ThreadState holds both logical state (required for the race-detection algorithm, user-visible)
and physical state (various caches, most notably the malloc cache). Move the physical state into a new
Processor entity. Besides just being the right thing from an abstraction point of view, this solves several
problems:
1. Cache everything at the P level in Go. Currently we cache on a mix of goroutine and OS-thread levels,
which unnecessarily increases memory consumption.
2. Properly handle free operations in Go. Frees are issued by the GC, which has no goroutine context.
As a result, we could not do anything more than just clear the shadow; for example, we leaked
sync objects and heap block descriptors.
3. This will allow us to get rid of libc malloc in Go (now we have a Processor context for the internal
allocator cache), which in turn will allow us to get rid of the dependency on libc entirely.
4. Potentially we can make Processor per-CPU in C++ mode instead of per-thread, which will
reduce resource consumption.
The distinction between Thread and Processor is currently used only by Go; C++ creates a Processor per OS thread,
which is equivalent to the current scheme.
llvm-svn: 267678
Currently ThreadState holds both logical state (required for the race-detection algorithm, user-visible)
and physical state (various caches, most notably the malloc cache). Move the physical state into a new
Processor entity. Besides just being the right thing from an abstraction point of view, this solves several
problems:
1. Cache everything at the P level in Go. Currently we cache on a mix of goroutine and OS-thread levels,
which unnecessarily increases memory consumption.
2. Properly handle free operations in Go. Frees are issued by the GC, which has no goroutine context.
As a result, we could not do anything more than just clear the shadow; for example, we leaked
sync objects and heap block descriptors.
3. This will allow us to get rid of libc malloc in Go (now we have a Processor context for the internal
allocator cache), which in turn will allow us to get rid of the dependency on libc entirely.
4. Potentially we can make Processor per-CPU in C++ mode instead of per-thread, which will
reduce resource consumption.
The distinction between Thread and Processor is currently used only by Go; C++ creates a Processor per OS thread,
which is equivalent to the current scheme.
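A minimal sketch of the split described above (field names are illustrative;
the real tsan structures carry much more state):

```cpp
// Physical state: caches that are expensive to create and destroy.
// Not user-visible and not needed by the race-detection algorithm.
struct Processor {
  void *alloc_cache;     // stand-in for the malloc cache
  void *internal_cache;  // stand-in for the internal allocator cache
};

// Logical state: everything the race-detection algorithm needs,
// and everything that is user-visible.
struct ThreadState {
  unsigned long epoch;   // stand-in for the vector-clock state
  Processor *proc;       // currently attached Processor
};

// In C++ mode a Processor is created per OS thread, equivalent to the
// old scheme; in Go mode a Processor maps to a P.
```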
llvm-svn: 262037
In AddressSanitizer, we have the MaybeReexec method to detect when we're running without DYLD_INSERT_LIBRARIES (in which case interceptors don't work) and re-execute with the environment variable set. On OS X 10.11+, this is no longer necessary, but to have ThreadSanitizer supported on older versions of OS X, let's use the same method as well. This patch moves the implementation from `asan/` into `sanitizer_common/`.
Differential Revision: http://reviews.llvm.org/D15123
llvm-svn: 254600
On OS X, for weak functions (which the user can override by providing their own implementation in the main binary), we need `extern "C" SANITIZER_INTERFACE_ATTRIBUTE SANITIZER_WEAK_ATTRIBUTE NOINLINE`.
Fixes a broken test case on OS X, java_symbolization.cc, which uses the weak function `__tsan_symbolize_external`.
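For illustration, roughly what this looks like, with the macro expansions
assumed and the function's real parameter list simplified:

```cpp
// Approximate expansions of the attribute macros (assumed, for illustration):
#define SANITIZER_INTERFACE_ATTRIBUTE __attribute__((visibility("default")))
#define SANITIZER_WEAK_ATTRIBUTE __attribute__((weak))
#define NOINLINE __attribute__((noinline))

// Default (weak) definition in the runtime; a user's strong definition in
// the main binary overrides it. The parameter list is simplified here.
extern "C" SANITIZER_INTERFACE_ATTRIBUTE SANITIZER_WEAK_ATTRIBUTE NOINLINE
int __tsan_symbolize_external(unsigned long pc) {
  (void)pc;
  return 0;  // 0 = "not symbolized"; fall back to the default symbolizer
}
```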
Differential Revision: http://reviews.llvm.org/D14907
llvm-svn: 254298
This patch unifies the 39- and 42-bit support for AArch64 by using an external
memory read to check the runtime-detected VMA and select the better mapping
and transformation. Although slower, this allows the same instrumented binary
to be independent of the kernel.
Along with this change, the patch also fixes some 42-bit failures with
ASLR disabled by increasing the upper high app memory threshold and also
the 42-bit madvise value for the non-large-page set.
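A sketch of the runtime VMA detection idea (simplified and illustrative;
the actual runtime performs the external memory read described above):

```cpp
#include <stdint.h>

// Infer the VMA size from the position of the most significant set bit
// of a known-valid userspace address, e.g. the current stack frame,
// which sits near the top of the userspace range.
static int DetectVmaSize() {
  volatile uintptr_t local = 0;
  uintptr_t addr = (uintptr_t)&local;
  int bits = 0;
  while (addr) { addr >>= 1; bits++; }
  return bits;  // expected 39 or 42 on the AArch64 kernels in question
}
```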
llvm-svn: 254151
This implements a "poor man's TLV" to be used for TSan's ThreadState on OS X. Based on the fact that `pthread_self()` is always available and reliable and returns a valid pointer to memory, we'll use the shadow memory of this pointer as thread-local storage. No user code should ever read/write this internal libpthread structure, so it's safe to use it for this purpose. We lazily allocate the ThreadState object and store the pointer there.
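A sketch of the scheme under these assumptions (`MemToShadow` and
`AllocThreadState` are simplified stand-ins for the real runtime pieces):

```cpp
#include <pthread.h>
#include <stdint.h>

struct ThreadState { int dummy; };

// Stand-ins for the real runtime pieces, simplified to a single cell:
static ThreadState *g_shadow_cell;
static ThreadState **MemToShadow(uintptr_t app_addr) {
  (void)app_addr;
  return &g_shadow_cell;  // real code: app address -> its shadow memory
}
static ThreadState *AllocThreadState() { return new ThreadState(); }

ThreadState *cur_thread() {
  // pthread_self() always returns a valid, stable pointer; no user code
  // touches the shadow of that pointer, so it can act as a per-thread slot.
  ThreadState **slot = MemToShadow((uintptr_t)pthread_self());
  if (*slot == nullptr)
    *slot = AllocThreadState();  // lazy allocation
  return *slot;
}
```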
Differential Revision: http://reviews.llvm.org/D14288
llvm-svn: 252159
TSan needs to use a custom malloc zone on OS X, which is already implemented in ASan. This patch uses the sanitizer_common implementation in `sanitizer_malloc_mac.inc` for TSan as well.
Reviewed at http://reviews.llvm.org/D14330
llvm-svn: 252155
This patch adds a runtime check to asan, dfsan, msan, and tsan for
architectures that support multiple VMA sizes (like aarch64). Currently
the check only prints a warning indicating the VMA size the tool was
built for versus the one detected at runtime.
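In spirit, the check amounts to something like this (illustrative; not the
runtime's exact message, names, or constants):

```cpp
#include <stdio.h>

#ifndef SANITIZER_VMA_BUILT
#define SANITIZER_VMA_BUILT 39  // placeholder: VMA size the tool was built for
#endif

void CheckVmaSize(int vma_runtime) {
  if (vma_runtime != SANITIZER_VMA_BUILT)
    fprintf(stderr,
            "WARNING: runtime VMA size (%d) differs from the one the tool "
            "was built for (%d)\n",
            vma_runtime, SANITIZER_VMA_BUILT);
}
```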
llvm-svn: 247413
Race deduplication code proved to be a performance bottleneck in the past when suppressions/annotations are used, or when some races are simply left unaddressed. And we still get user complaints about this:
https://groups.google.com/forum/#!topic/thread-sanitizer/hB0WyiTI4e4
ReportRace already has several layers of caching for racy PCs/addresses to make deduplication faster. However, ReportRace still takes global mutexes (ThreadRegistry and ReportMutex) during deduplication and also calls mmap/munmap (which take a process-wide semaphore in the kernel); this makes deduplication non-scalable.
This patch moves race deduplication outside of global mutexes and also removes all mmap/munmap calls.
As a result, race_stress.cc with 100 threads and 10000 iterations becomes 30x faster:
before:
real 0m21.673s
user 0m5.932s
sys 0m34.885s
after:
real 0m0.720s
user 0m23.646s
sys 0m1.254s
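One way to keep the dedup fast path outside any mutex is a fixed-size,
lossy cache of racy PC pairs; a sketch of the idea (not the runtime's
actual data structure):

```cpp
#include <atomic>
#include <cstdint>

// Fixed-size, lossy cache: no mutexes, no mmap. A stale or evicted entry
// only means a race may go through the full reporting path again.
static std::atomic<uint64_t> g_dedup_cache[4096];

bool SeenRaceBefore(uint64_t pc1, uint64_t pc2) {
  uint64_t key = pc1 * 1000003u ^ pc2;  // cheap mixing, illustrative
  size_t idx = key % (sizeof(g_dedup_cache) / sizeof(g_dedup_cache[0]));
  uint64_t old = g_dedup_cache[idx].load(std::memory_order_relaxed);
  if (old == key) return true;          // duplicate: skip the full report
  g_dedup_cache[idx].store(key, std::memory_order_relaxed);
  return false;
}
```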
http://reviews.llvm.org/D12554
llvm-svn: 246758
Summary:
Merge "exitcode" flag from ASan, LSan, TSan and "exit_code" from MSan
into one entity. Additionally, make sure sanitizer_common now uses the
value of common_flags()->exitcode when dying on error, so that this
flag will automatically work for other sanitizers (UBSan and DFSan) as
well.
User-visible changes:
* The "exit_code" MSan runtime flag is now deprecated. If explicitly
specified, it takes precedence over "exitcode".
Users are encouraged to migrate to the new version.
* The __asan_set_error_exit_code() and __msan_set_exit_code() functions
are removed. With a few exceptions, we don't support changing runtime
flags during program execution - we can't make them thread-safe.
Users should instead use __sanitizer_set_death_callback() and call
_exit() with the proper exit code from the callback (see the sketch
after this list).
* Plugin tools (LSan and UBSan) now inherit the exit code of the parent
tool. In particular, this means that ASan would now crash the program
with exit code "1" instead of "23" if it detects leaks.
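For example, a custom exit code can now be arranged through the death
callback (a sketch using the public interface header; the code 42 is
arbitrary):

```cpp
#include <sanitizer/common_interface_defs.h>
#include <unistd.h>

// Called by any sanitizer right before it dies on an error.
static void DieWithCustomCode() {
  _exit(42);  // replaces the removed __asan_set_error_exit_code(42)
}

int main() {
  __sanitizer_set_death_callback(DieWithCustomCode);
  // ... rest of the program ...
}
```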
Reviewers: kcc, eugenis
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D12120
llvm-svn: 245734
include_if_exists=/path/to/sanitizer/options reads flags from the
file if it is present. "%b" in the include file path (for both
variants of the flag) is replaced with the basename of the main
executable.
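A hypothetical usage example:

```
ASAN_OPTIONS=include_if_exists=/etc/sanitizer/%b.options ./mybinary
# reads /etc/sanitizer/mybinary.options if it exists; unlike plain
# "include", a missing file is not an error.
```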
llvm-svn: 242853
This is done by creating a named shared memory region, unlinking it,
and setting up a private (i.e. copy-on-write) mapping of it instead
of a regular anonymous mapping. I've experimented with regular
(sparse) files, but they cannot be scaled to the size of the MSan
shadow mapping, at least on Linux/x86_64 and the ext3 fs.
Controlled by a common flag, decorate_proc_maps, disabled by default.
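A sketch of the mapping trick (error handling trimmed; the naming scheme is
illustrative, and the real runtime chooses its own):

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

// Map `size` bytes so /proc/self/maps shows `name` for the region, while
// all writes stay private (copy-on-write), like an anonymous mapping.
void *MapNamedCow(const char *name, size_t size) {
  int fd = shm_open(name, O_RDWR | O_CREAT | O_EXCL, 0600);
  if (fd < 0) return nullptr;
  shm_unlink(name);  // the name still shows up in maps, as "... (deleted)"
  if (ftruncate(fd, size) != 0) { close(fd); return nullptr; }
  void *p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_NORESERVE, fd, 0);
  close(fd);
  return p == MAP_FAILED ? nullptr : p;
}
```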
This patch has a few shortcomings:
* not all mappings are annotated, especially in TSan.
* our handling of memset() of shadow via mmap() puts small anonymous
mappings inside larger named mappings, which looks ugly and can, in
theory, hit the mapping number limit.
llvm-svn: 238621
Embed the UBSan runtime into the TSan and MSan runtimes in the same way
as we do in ASan. Extend the UBSan test suite to also run tests for
these combinations.
llvm-svn: 235954
Provide defaults for TSAN_COLLECT_STATS and TSAN_NO_HISTORY.
Replace #ifdef directives with #if. This fixes a bug introduced
in r229112, where building the TSan runtime with -DTSAN_COLLECT_STATS=0
would still enable stats collection and reporting.
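The bug in miniature, with -DTSAN_COLLECT_STATS=0 on the command line:

```cpp
// Default, as the commit now provides:
#ifndef TSAN_COLLECT_STATS
# define TSAN_COLLECT_STATS 0
#endif

#ifdef TSAN_COLLECT_STATS  // true whenever the macro is defined; value ignored
// the r229112 bug: stats code was still compiled in here
#endif

#if TSAN_COLLECT_STATS     // tests the value: correctly false for 0
// stats code is compiled in only when the value is non-zero
#endif
```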
llvm-svn: 229581
In Go mode the background thread is not started (internal_thread_start
is empty), so there is no sense in having this code compiled in.
This also removes the dependency on sanitizer_linux_libcdep.cc, which
is good: ideally the Go runtime does not depend on libc at all.
llvm-svn: 229396
Revision 229127 introduced a bug: a zero value is not OK for trace
headers, because stack0 needs a constructor call.
Instead, unmap the unused part of the trace after
all ctors have been executed.
llvm-svn: 229263
We are going to use only a small part of the trace with the default
value of history_size. However, the constructor writes to the whole
trace. It writes mostly zeros, so freshly mmapped memory will do.
The only non-zero field is the mutex type, used for debugging.
Reduces per-goroutine overhead by 8K.
https://code.google.com/p/thread-sanitizer/issues/detail?id=89
llvm-svn: 229127
The MemoryAccess function consumes ~4K of stack in debug mode,
in significant part due to the unrolled loop. And gtest gives
only 4K of stack to death-test threads, which causes stack
overflows in debug mode.
llvm-svn: 226644
TSAN_SHADOW_COUNT is defined to 4 in all environments.
Other values of TSAN_SHADOW_COUNT were never tested and
were broken by recent changes to shadow mapping.
Remove it as there is no reason to fix nor maintain it.
llvm-svn: 226466
Summary:
This change removes the `__tsan::StackTrace` class. There are
now three alternatives (sketched below):
1. Lightweight `__sanitizer::StackTrace`, which doesn't own a buffer
of PCs. It is used in functions that need stack traces in read-only
mode, and helps to prevent unnecessary allocations/copies (e.g.
for StackTraces fetched from StackDepot).
2. `__sanitizer::BufferedStackTrace`, which stores a buffer of PCs in
a constant-size array. It is used in TraceHeader (non-Go version).
3. `__tsan::VarSizeStackTrace`, which owns a buffer of PCs, dynamically
allocated via TSan's internal allocator.
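Greatly simplified shapes of the three alternatives (field and constant
names are approximate):

```cpp
#include <cstdint>

typedef uintptr_t uptr;
const uptr kStackTraceMax = 256;  // assumed bound, for illustration

struct StackTrace {               // non-owning view: cheap to pass around
  const uptr *trace;
  uptr size;
};

struct BufferedStackTrace : StackTrace {
  uptr trace_buffer[kStackTraceMax];  // fixed-size storage inside the object
};

struct VarSizeStackTrace {
  uptr *trace;                    // heap buffer from tsan's internal allocator
  uptr size;
};
```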
Test Plan: compiler-rt test suite
Reviewers: dvyukov, kcc
Reviewed By: kcc
Subscribers: llvm-commits, kcc
Differential Revision: http://reviews.llvm.org/D6004
llvm-svn: 221194
Get rid of Symbolizer::Init(path_to_external) in favor of
thread-safe Symbolizer::GetOrInit(), and use the latter version
everywhere. Implicitly depend on the value of external_symbolizer_path
runtime flag instead of passing it around manually.
No functionality change.
llvm-svn: 214005
The optimization is two-fold:
First, the algorithm now uses SSE instructions to
handle all 4 shadow slots at once. This makes processing
faster.
Second, if the shadow already contains the same access, we do not
store the event into the trace. This increases the effective
trace size, that is, tsan can remember up to 10x more
previous memory accesses.
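The second optimization, sketched in scalar form (the real code compares
all 4 slots at once with SSE):

```cpp
#include <cstdint>

// 4 shadow slots describe the previous accesses to an 8-byte app region.
// If one of them already records exactly this access, skip the trace event.
bool ContainsSameAccess(const uint64_t shadow[4], uint64_t cur) {
  for (int i = 0; i < 4; i++)
    if (shadow[i] == cur)
      return true;
  return false;
}

// In the memory-access hot path:
//   if (ContainsSameAccess(shadow, cur)) return;  // don't grow the trace
```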
Performance impact:
Before:
[ OK ] DISABLED_BENCH.Mop8Read (2461 ms)
[ OK ] DISABLED_BENCH.Mop8Write (1836 ms)
After:
[ OK ] DISABLED_BENCH.Mop8Read (1204 ms)
[ OK ] DISABLED_BENCH.Mop8Write (976 ms)
But this measures only the fast path.
On large real applications the speedup is ~20%.
Trace size impact:
On app1:
Memory accesses : 1163265870
Including same : 791312905 (68%)
On app2:
Memory accesses : 166875345
Including same : 150449689 (90%)
Filtering 90% of events means that the trace size is effectively 10x larger.
llvm-svn: 209897