This patch adds a wrapper for call_once, which calls into an already-compiled helper, __call_once, whose atomic release is invisible to TSan. To avoid false positives, the interceptor performs an explicit atomic release in the callback wrapper.
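A minimal sketch of the wrapper's shape (the __tsan_release annotation is TSan's public interface; the context struct and wrapper names are stand-ins):

  // Wrap the user callback so TSan observes the release on the once-flag
  // that the pre-compiled __call_once performs invisibly.
  extern "C" void __tsan_release(void *addr);

  struct OnceContext {
    void *flag;             // address of the once-flag used by __call_once
    void (*user_fn)(void);  // the original user callback
  };

  static void callback_wrapper(void *arg) {
    OnceContext *ctx = static_cast<OnceContext *>(arg);
    ctx->user_fn();
    // Pairs with the acquire performed by threads that skip initialization.
    __tsan_release(ctx->flag);
  }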
Differential Revision: https://reviews.llvm.org/D24188
llvm-svn: 280920
Current AArch64 {sig}{set,long}jmp interposing requires accessing the glibc
private __pointer_chk_guard to get the process xor mask used to demangle the
internal {sig}jmp_buf function pointers.
This causes some packing issues, as described in gcc PR#71042 [1], and it
is not good practice to rely on a private glibc namespace (since the ABI is
not meant to be stable).
This patch fixes it by changing how libtsan obtains the guard pointer
value: at initialization, a specific routine issues a setjmp call and
derives the random guard value from the mangled function pointer and the
original one.
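A sketch of the derivation; the setjmp-issuing routine is shown here as a hypothetical assembly stub, since recovering the exact original PC requires assembly:

  #include <stdint.h>

  // Hypothetical stub: calls setjmp and reports both the return address it
  // used and the mangled value that setjmp stored into the jmp_buf.
  extern "C" void CallSetjmp(uintptr_t *original_pc, uintptr_t *mangled_pc);

  static uintptr_t guard_xor_key;

  static void InitializeGuardPtr() {
    uintptr_t original, mangled;
    CallSetjmp(&original, &mangled);
    // glibc mangles pointers as mangled = original ^ __pointer_chk_guard,
    // so xoring the pair recovers the guard without reading the private
    // symbol itself.
    guard_xor_key = original ^ mangled;
  }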
Checked on aarch64 39-bit VMA.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71042
llvm-svn: 278292
The system implementation of OSAtomicTestAndClear returns the original bit, but the TSan interceptor has a bug that makes it always return zero. This patch fixes the bug and adds a test.
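A sketch of the corrected shape (stand-in names; the real interceptor also performs TSan's acquire/release bookkeeping on the address):

  #include <cstdint>

  extern "C" bool REAL_OSAtomicTestAndClear(uint32_t n, volatile void *addr);

  extern "C" bool my_OSAtomicTestAndClear(uint32_t n, volatile void *addr) {
    bool original_bit = REAL_OSAtomicTestAndClear(n, addr);
    return original_bit;  // the buggy version unconditionally returned 0
  }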
Differential Revision: https://reviews.llvm.org/D23061
llvm-svn: 277461
On Darwin, there are some apps that rely on realloc(nullptr, 0) returning a valid pointer. TSan currently returns nullptr in this case; let's fix it to avoid breaking binary compatibility.
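A minimal sketch of the fix, with user_malloc and user_realloc standing in for TSan's allocator entry points:

  #include <cstdlib>

  static void *user_malloc(size_t size) {
    return malloc(size ? size : 1);  // even 0-byte requests get a real block
  }

  static void *user_realloc(void *p, size_t size) {
    if (p == nullptr) {
      // Darwin binary compatibility: realloc(nullptr, 0) must return a
      // valid pointer, not nullptr.
      return user_malloc(size);
    }
    return realloc(p, size);  // stand-in for the real resize path
  }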
Differential Revision: https://reviews.llvm.org/D22800
llvm-svn: 277458
This patch adds 48-bit VMA support for tsan on aarch64. As with the
current aarch64 mappings, the 48-bit VMA also supports PIE executables.
This limits the mapping mechanism because the PIE address bits
(usually 0aaaaXXXXXXXX) make it harder to create a mask/xor value
that covers all memory regions. I think it is possible to create a
larger application VMA range by either dropping PIE support or tuning
the current range.
It also slightly changes the way addresses are packed in the SyncVar
structure: previously it assumed x86_64 as the maximum VMA range. Since
the ID is 14 bits wide, shifting by 48 bits should be ok.
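A sketch of the packing (field layout illustrative):

  #include <cstdint>

  const uint64_t kAddrBits = 48;  // covers every supported VMA
  const uint64_t kAddrMask = (1ULL << kAddrBits) - 1;

  uint64_t PackIdAndAddr(uint64_t id, uint64_t addr) {
    return (id << kAddrBits) | (addr & kAddrMask);  // 14-bit id fits above
  }

  uint64_t UnpackAddr(uint64_t packed) { return packed & kAddrMask; }
  uint64_t UnpackId(uint64_t packed) { return packed >> kAddrBits; }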
Tested on x86_64, ppc64le and aarch64 (39 and 48 bits VMA).
llvm-svn: 277137
When we delay signals, we can end up delivering them while the signal
is blocked, which can be surprising to the program.
Intercept signal blocking functions merely to process
pending signals. As a result, at worst we will delay
a signal till return from the signal blocking function.
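A sketch with stand-in names (in the runtime, the real function comes from the interceptor machinery and ProcessPendingSignals is internal):

  #include <signal.h>

  static void ProcessPendingSignals() { /* deliver delayed signals here */ }

  extern "C" int my_sigprocmask(int how, const sigset_t *set, sigset_t *old) {
    int res = sigprocmask(how, set, old);  // stand-in for REAL(sigprocmask)
    // Flush delayed signals before returning, so at worst a delayed signal
    // arrives on return from the blocking call, never while blocked.
    ProcessPendingSignals();
    return res;
  }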
llvm-svn: 276876
This patch adds interceptors for dispatch_io_*, dispatch_read and dispatch_write functions. This avoids false positives when using GCD IO. Adding several test cases.
Differential Revision: http://reviews.llvm.org/D21889
llvm-svn: 275071
This patch adds synchronization between the creation of the GCD data object and destructor’s execution. It’s far from perfect, because ideally we’d want to synchronize the destruction of the last reference (via dispatch_release) and the destructor’s execution, but intercepting objc_release is problematic.
Differential Revision: http://reviews.llvm.org/D21990
llvm-svn: 274749
We already have interceptors for dispatch_source API (e.g. dispatch_source_set_event_handler), but they currently only handle submission synchronization. We also need to synchronize based on the target queue (serial, concurrent), in other words, we need to use dispatch_callback_wrap. This patch implements that.
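A sketch of the wrapper every intercepted handler goes through (helper names hypothetical):

  static void on_queue_callback_begin(void *queue) { (void)queue; /* acquire queue syncs */ }
  static void on_queue_callback_end(void *queue) { (void)queue; /* release queue syncs */ }

  struct BlockContext {
    void *queue;
    void (*fn)(void *);
    void *arg;
  };

  static void callback_wrap_sketch(void *p) {
    BlockContext *ctx = static_cast<BlockContext *>(p);
    on_queue_callback_begin(ctx->queue);
    ctx->fn(ctx->arg);  // the user's event handler
    on_queue_callback_end(ctx->queue);
  }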
Differential Revision: http://reviews.llvm.org/D21999
llvm-svn: 274619
In the patch that introduced support for GCD barrier blocks, I removed releasing a group when leaving it (in dispatch_group_leave). However, this is necessary to synchronize leaving a group and a notification callback (dispatch_group_notify). Adding this back, simplifying dispatch_group_notify_f and adding a test case.
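The pairing, sketched with TSan's public annotations (the group object stands in for the real dispatch_group_t):

  extern "C" void __tsan_acquire(void *addr);
  extern "C" void __tsan_release(void *addr);

  void on_group_leave(void *group) {
    __tsan_release(group);  // every leave happens-before the notification
    // ... the real dispatch_group_leave(group) runs here ...
  }

  void on_group_notify_callback(void *group) {
    __tsan_acquire(group);  // the notification sees all leavers' writes
    // ... the user's notification block runs here ...
  }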
Differential Revision: http://reviews.llvm.org/D21927
llvm-svn: 274549
Because we use SCOPED_TSAN_INTERCEPTOR in the dispatch_once interceptor, the original dispatch_once can also be sometimes called (when ignores are enabled or when thr->is_inited is false). However the original dispatch_once function doesn’t expect to find “2” in the storage and it will spin forever (but we use “2” to indicate that the initialization is already done, so no waiting is necessary). This patch makes sure we never call the original dispatch_once.
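A sketch of the interceptor's own once logic (0 = not started, 1 = in progress, 2 = done); since the real dispatch_once must never observe 2, the interceptor cannot fall through to it:

  #include <atomic>

  static void my_dispatch_once(std::atomic<long> *predicate, void *ctx,
                               void (*fn)(void *)) {
    long expected = 0;
    if (predicate->compare_exchange_strong(expected, 1,
                                           std::memory_order_relaxed)) {
      fn(ctx);
      predicate->store(2, std::memory_order_release);  // publish "done"
    } else {
      // Another thread is initializing; wait for it, then acquire.
      while (predicate->load(std::memory_order_acquire) != 2) {
      }
    }
  }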
Differential Revision: http://reviews.llvm.org/D21976
llvm-svn: 274548
The dispatch_group_async interceptor actually extends the lifetime of the executed block. This means the destructor of the block (and captured variables) is called *after* dispatch_group_leave, which changes the semantics of dispatch_group_async. This patch fixes that.
Differential Revision: http://reviews.llvm.org/D21816
llvm-svn: 274117
Adding support for GCD barrier blocks in concurrent queues. This uses two sync objects in the same way as read-write locks do. This also simplifies the use of dispatch groups (the notifications act as barrier blocks).
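A sketch of the two-object scheme (names illustrative):

  extern "C" void __tsan_acquire(void *addr);
  extern "C" void __tsan_release(void *addr);

  struct QueueSync {
    char block_sync;    // every block releases/acquires this one
    char barrier_sync;  // barrier blocks additionally use this one
  };

  void before_block(QueueSync *q, bool is_barrier) {
    __tsan_acquire(&q->block_sync);
    if (is_barrier) __tsan_acquire(&q->barrier_sync);
  }

  void after_block(QueueSync *q, bool is_barrier) {
    __tsan_release(&q->block_sync);
    if (is_barrier) __tsan_release(&q->barrier_sync);
  }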
Differential Revision: http://reviews.llvm.org/D21604
llvm-svn: 273893
The non-barrier versions of OSAtomic* functions are semantically mo_relaxed, but the two variants (e.g. OSAtomicAdd32 and OSAtomicAdd32Barrier) are aliases of each other, and we cannot have different interceptors for them because they are the same function. Thus, we have to stay conservative and treat the non-barrier versions as mo_acq_rel.
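Sketch (stand-in interceptor name; the hook is TSan's public atomic interface from tsan_interface_atomic.h, declared here with int for the memory-order enum, where 4 is __tsan_memory_order_acq_rel):

  #include <cstdint>

  extern "C" int32_t __tsan_atomic32_fetch_add(volatile int32_t *a,
                                               int32_t v, int mo);

  // One body serves both OSAtomicAdd32 and OSAtomicAdd32Barrier, so it
  // must model the stronger ordering.
  extern "C" int32_t my_OSAtomicAdd32(int32_t amount,
                                      volatile int32_t *address) {
    return __tsan_atomic32_fetch_add(address, amount, /*mo=*/4) + amount;
  }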
Differential Revision: http://reviews.llvm.org/D21733
llvm-svn: 273890
There is a "well-known" TSan false positive when using C++ weak_ptr/shared_ptr and code in destructors, e.g. described at <https://llvm.org/bugs/show_bug.cgi?id=22324>. The "standard" solution is to build and use a TSan-instrumented version of libcxx, which is not trivial for end-users. This patch tries a different approach (on OS X): It adds an interceptor for the specific function in libc++.dylib, which implements the atomic operation that needs to be visible to TSan.
Differential Revision: http://reviews.llvm.org/D21609
llvm-svn: 273806
This patch replaces all uses of __libc_malloc and friends with the internal allocator.
It seems that the only reason why we have calls to __libc_malloc in the first place was the lack of the internal allocator at the time. Using the internal allocator will also make sure that the system allocator is never used (this is the same behavior as ASan), and we don’t have to worry about working with unknown pointers coming from the system allocator.
Differential Revision: http://reviews.llvm.org/D21025
llvm-svn: 271916
This is a very simple optimization that gets about 10% speedup for certain programs. We're currently storing a pointer to the main thread's ThreadState, but we can store the state directly in a static variable, which avoids the load-acquire.
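A sketch of the change (sizes and names illustrative):

  struct ThreadState;  // opaque here

  // Before: the state is reached through a pointer that needs a
  // load-acquire. After: the storage itself is static, so its address is
  // a link-time constant.
  alignas(64) static char main_thread_state[1024];

  inline ThreadState *cur_thread_main() {
    return reinterpret_cast<ThreadState *>(&main_thread_state);
  }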
Differential Revision: http://reviews.llvm.org/D20910
llvm-svn: 271906
The new annotation was added a while ago, but was not actually used.
Use the annotation to detect linker-initialized mutexes instead
of the broken IsGlobalVar, which has both false positives and false
negatives. Remove the IsGlobalVar mess.
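Usage sketch (parameter types shown loosely; the annotation itself is part of TSan's annotation interface, and the mutex object here is a stand-in):

  extern "C" void AnnotateRWLockCreateStatic(const char *file, int line,
                                             void *mutex);

  static int linker_initialized_mu;  // stand-in for a real mutex object

  void annotate_example() {
    AnnotateRWLockCreateStatic(__FILE__, __LINE__, &linker_initialized_mu);
  }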
llvm-svn: 271663
Currently the added test produces false race reports with glibc 2.19,
because DTLS memory is reused by pthread under the hood.
Use the DTLS machinery to intercept new DTLS ranges.
__tls_get_addr is known to have caused issues for tsan in the past,
so the interceptor is written more carefully.
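A sketch with stand-in types and helpers:

  extern "C" void *__real_tls_get_addr(void *arg);       // stand-in for REAL()
  struct NewDTLSBlock { char *beg, *end; };              // beg == nullptr: none
  NewDTLSBlock DTLSCheckNewBlock(void *arg, void *res);  // hypothetical
  void ResetRangeShadow(char *beg, char *end);           // hypothetical

  extern "C" void *my_tls_get_addr(void *arg) {
    void *res = __real_tls_get_addr(arg);
    NewDTLSBlock b = DTLSCheckNewBlock(arg, res);
    if (b.beg)
      ResetRangeShadow(b.beg, b.end);  // forget stale state for reused memory
    return res;
  }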
Reviewed in http://reviews.llvm.org/D20927
llvm-svn: 271568
We're missing interceptors for dispatch_after and dispatch_after_f. Let's add them to avoid false positives. Added a test case.
Differential Revision: http://reviews.llvm.org/D20426
llvm-svn: 270071
Summary:
Adds *fstat to the common interceptors.
Removes the now-duplicate fstat interceptor from msan/tsan
This adds fstat to asan/esan, which previously did not intercept it.
Resubmit of http://reviews.llvm.org/D20318 with ios build fixes.
Reviewers: eugenis, vitalybuka, aizatsky
Subscribers: zaks.anna, kcc, bruening, kubabrecka, srhines, danalbert, tberghammer
Differential Revision: http://reviews.llvm.org/D20350
llvm-svn: 269981
The previous patch (r269291) was reverted (commented out) because it caused leaks that
were detected by LSan and broke some lit tests. The actual reason was that dlsym allocates
an error string buffer in TLS, and some LSan lit tests intentionally do not scan TLS for
root pointers. This patch simply makes LSan ignore the allocation from dlsym, because it's
not interesting anyway.
llvm-svn: 269917
Summary:
Adds *fstat to the common interceptors.
Removes the now-duplicate fstat interceptor from msan/tsan
This adds fstat to asan/esan, which previously did not intercept it.
Reviewers: eugenis, vitalybuka, aizatsky
Subscribers: tberghammer, danalbert, srhines, kubabrecka, bruening, kcc
Differential Revision: http://reviews.llvm.org/D20318
llvm-svn: 269856
The ignore_interceptors_accesses setting did not have an effect on mmap, so
let's change that. It helps in cases where user code accesses memory
written to by mmap and the synchronization is ensured by code that
does not get rebuilt.
(This affects Swift interoperability, since its runtime maps memory
which gets accessed by the code the compiler emits into the Swift
application.)
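One plausible shape of the change (helper names hypothetical):

  #include <sys/mman.h>

  extern bool ignore_interceptors_accesses_flag;  // runtime-flag stand-in
  void ImitateWriteRange(void *p, size_t sz);     // hypothetical

  void *my_mmap(void *addr, size_t sz, int prot, int flags, int fd,
                off_t off) {
    void *res = mmap(addr, sz, prot, flags, fd, off);
    // When the flag is set, skip modeling the mapped range so accesses to
    // it are not reported.
    if (res != MAP_FAILED && !ignore_interceptors_accesses_flag)
      ImitateWriteRange(res, sz);
    return res;
  }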
Differential Revision: http://reviews.llvm.org/D20294
llvm-svn: 269855
http://reviews.llvm.org/rL269291 introduced a memory leak.
Disabling the offending call temporarily rather than rolling back the chain
of CLs.
llvm-svn: 269799
To invoke the Swift demangler, we use dlsym to locate swift_demangle. However, dlsym allocates storage with malloc and stores it in thread-local storage. Since allocations from the symbolizer are done with the system allocator (at least in TSan, interceptors are skipped when inside the symbolizer), we will crash when we try to deallocate later using the sanitizer allocator again.
To fix this, let's just not call dlsym from the demangler, and call it during initialization. The dlsym function calls malloc, so it needs to be only used after our allocator is initialized. Adding a Symbolizer::LateInitialize call that is only invoked after all other initializations.
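A sketch of the late initialization (the swift_demangle signature follows its documented shape; the exact flag type may differ):

  #include <dlfcn.h>
  #include <stddef.h>

  typedef char *(*SwiftDemangleFn)(const char *mangled, size_t mangled_len,
                                   char *out_buf, size_t *out_buf_size,
                                   unsigned flags);
  static SwiftDemangleFn swift_demangle_fn;

  void SymbolizerLateInitializeSketch() {
    // dlsym may malloc (its error buffer lives in TLS), so this must not
    // run before our allocator is initialized.
    swift_demangle_fn = reinterpret_cast<SwiftDemangleFn>(
        dlsym(RTLD_DEFAULT, "swift_demangle"));
  }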
Differential Revision: http://reviews.llvm.org/D20015
llvm-svn: 269291
Adds *stat to the common interceptors.
Removes the now-duplicate *stat interceptor from msan/tsan/esan.
This adds *stat to asan, which previously did not intercept it.
Patch by Qin Zhao.
llvm-svn: 269223
Another stack where we try to free sync objects,
but don't have a processor is:
// ResetRange
// __interceptor_munmap
// __deallocate_stack
// start_thread
// clone
Again, it is a latent bug that led to memory leaks.
Also, increase the amount of memory we scan in MetaMap::ResetRange.
Without that the test does not fail, as we fail to free
the sync objects on the stack.
llvm-svn: 269041
Fixes crash reported in:
https://bugs.chromium.org/p/v8/issues/detail?id=4995
The problem is that we don't have a processor in a free interceptor
during thread exit.
The crash was introduced by the introduction of Processors.
However, previously we silently leaked memory which wasn't any better.
llvm-svn: 268782
Summary:
Adds stat/__xstat to the common interceptors.
Removes the now-duplicate stat/__xstat interceptor from msan/tsan/esan.
This adds stat/__xstat to asan, which previously did not intercept it.
Resubmit of http://reviews.llvm.org/D19875 with win build fixes.
Reviewers: aizatsky, eugenis
Subscribers: tberghammer, llvm-commits, danalbert, vitalybuka, bruening, srhines, kubabrecka, kcc
Differential Revision: http://reviews.llvm.org/D19890
llvm-svn: 268466
Summary:
Adds stat/__xstat to the common interceptors.
Removes the now-duplicate stat/__xstat interceptor from msan/tsan/esan.
This adds stat/__xstat to asan, which previously did not intercept it.
Reviewers: aizatsky, eugenis
Subscribers: tberghammer, danalbert, srhines, kubabrecka, llvm-commits, vitalybuka, eugenis, kcc, bruening
Differential Revision: http://reviews.llvm.org/D19875
llvm-svn: 268440
In http://reviews.llvm.org/D19100, I introduced a bug: On OS X, existing programs rely on malloc_size() to detect whether a pointer comes from heap memory (malloc_size returns non-zero) or not. We have to distinguish between a zero-sized allocation (where we need to return 1 from malloc_size, due to other binary compatibility reasons, see http://reviews.llvm.org/D19100), and pointers that are not returned from malloc at all.
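The required behavior, sketched with hypothetical helpers:

  #include <cstddef>

  bool PointerIsMine(const void *p);        // does p come from our allocator?
  size_t UserRequestedSize(const void *p);  // may legitimately be 0

  size_t my_malloc_size(const void *p) {
    if (!PointerIsMine(p))
      return 0;  // existing programs probe "is this heap memory?" this way
    size_t size = UserRequestedSize(p);
    return size ? size : 1;  // zero-sized allocations must report 1 (D19100)
  }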
Differential Revision: http://reviews.llvm.org/D19653
llvm-svn: 268157
Recent TSan changes (r267678) which factor out parts of ThreadState into a Processor structure broke worker threads on OS X. This fixes it by properly calling ProcCreate for GCD worker threads and by replacing some CHECKs with RAW_CHECK in early process initialization. CHECK() in TSan calls the allocator, which requires a valid Processor.
llvm-svn: 267864
On Linux, some architectures had an ABI transition from 64-bit long double
(i.e. the same as double) to 128-bit long double. On those, glibc symbols
involving long doubles come in two versions, and we need to pass the
correct one to dlvsym when intercepting them.
A few more functions we intercept are also versioned (all printf, scanf,
strtold variants), but there's no need to fix these, as the REAL() versions
are never called.
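For the symbols that do need it, the lookup looks like this sketch (the version string is illustrative, not the actual table):

  #define _GNU_SOURCE 1  // for dlvsym
  #include <dlfcn.h>

  static void *resolve_versioned(const char *name, const char *version) {
    void *fn = dlvsym(RTLD_NEXT, name, version);  // e.g. "GLIBC_2.4"
    if (!fn)
      fn = dlsym(RTLD_NEXT, name);  // fall back to the default version
    return fn;
  }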
Differential Revision: http://reviews.llvm.org/D19555
llvm-svn: 267794
In short, CVE-2016-2143 will crash the machine if a process uses both >4TB
virtual addresses and fork(). ASan, TSan, and MSan will, by necessity, map
a sizable chunk of virtual address space, which is much larger than 4TB.
Even worse, sanitizers will always use fork() for llvm-symbolizer when a bug
is detected. Disable all three by aborting on process initialization if
the running kernel version is not known to contain a fix.
Unfortunately, there's no reliable way to detect the fix without crashing
the kernel. So, we rely on whitelisting - I've included a list of upstream
kernel versions that will work. In case someone uses a distribution kernel
or applied the fix themselves, an override switch is also included.
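A sketch of the gate (the version cutoff is illustrative, not the actual whitelist from this patch):

  #include <cstdio>
  #include <cstdlib>
  #include <sys/utsname.h>

  static bool KernelLooksFixed() {
    struct utsname buf;
    if (uname(&buf) != 0)
      return false;
    int major = 0, minor = 0;
    if (sscanf(buf.release, "%d.%d", &major, &minor) != 2)
      return false;
    return major > 4 || (major == 4 && minor >= 6);  // illustrative cutoff
  }

  static void AbortIfVulnerable() {
    if (!KernelLooksFixed()) {
      fprintf(stderr, "kernel may be vulnerable to CVE-2016-2143; "
                      "use the override flag to run anyway\n");
      abort();
    }
  }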
Differential Revision: http://reviews.llvm.org/D19576
llvm-svn: 267747