to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
This is a follow-up patch to r342541. After further investigation, only a
48-bit VMA size can be supported. As this is enforced in the function
InitializePlatformEarly from lib/tsan/rtl/tsan_platform_linux.cc, the access
to the global vmaSize variable and the accompanying switch can be removed.
This also addresses a comment from https://reviews.llvm.org/D52167.
A vmaSize of 39 or 42 bits is not compatible with the Go program memory
layout, as the Go heap will not fit in the shadow memory area.
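A minimal sketch of the enforcement described above (the helper and the
failure handling are assumptions, not the actual runtime code):
  typedef unsigned long uptr;
  static uptr vmaSize;              // number of significant VA bits
  extern uptr DetectVmaSizeBits();  // hypothetical early-init probe
  void InitializePlatformEarly() {
    vmaSize = DetectVmaSizeBits();
    // Only the 48-bit layout leaves room for the Go heap in the shadow
    // region, so anything else is rejected up front and the per-vmaSize
    // switch elsewhere becomes unnecessary.
    if (vmaSize != 48)
      __builtin_trap();
  }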
Patch by: Fangming Fang <Fangming.Fang@arm.com>
llvm-svn: 344329
Summary:
This patch adds TSan runtime support for Go on linux-aarch64
platforms. This enables people working on golang to implement their
platform/language part of the TSan support.
Basic testing is done with lib/tsan/go/buildgo.sh. Additional testing will be
done as part of the work in the Go project.
It is intended to support other VMA sizes, except 39, which does not
have enough bits to support the Go heap requirements.
Patch by Fangming Fang <Fangming.Fang@arm.com>.
Reviewers: kubamracek, dvyukov, javed.absar
Subscribers: mcrosier, dberris, mgorny, kristof.beyls, delcypher, #sanitizers, llvm-commits
Differential Revision: https://reviews.llvm.org/D52167
llvm-svn: 342541
The current implementation of the Go sanitizer only works on x86_64.
Added some modifications to the buildgo.sh script and the TSan code
to make it work on powerpc64/linux.
Author: cseo (Carlos Eduardo Seo)
Reviewed in: https://reviews.llvm.org/D43025
llvm-svn: 330122
MemToShadowImpl() maps lower addresses to a memory space outside the
sanitizer's range. The simplest example is address 0, which is mapped to
0x2000000000, below the start of the shadow region:
  static const uptr kShadowBeg = 0x2400000000ull;
Accessing that shadow address during tsan execution will therefore lead to a
segmentation fault.
This patch expands the range used by the sanitizer and ensures that 1/8 of
the maximum valid address in the virtual address space is used for shadow
memory.
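A small sketch of the failure mode, using the constant quoted above but an
otherwise illustrative mapping (not the real formula):
  typedef unsigned long long uptr;
  static const uptr kShadowBeg = 0x2400000000ull;  // start of mapped shadow
  static uptr MemToShadowImpl(uptr x) {
    return x | 0x2000000000ull;  // address 0 -> 0x2000000000, below kShadowBeg
  }
  // Dereferencing the shadow of address 0 therefore touches unmapped memory
  // and faults; widening the shadow range so that 1/8 of the address space
  // is covered removes the gap.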
Patch by Milos Stojanovic.
Differential Revision: https://reviews.llvm.org/D41777
llvm-svn: 323013
In more recent Linux kernels with 47-bit VMAs, the layout of virtual memory
for powerpc64 changed, causing the thread sanitizer to stop working properly.
This patch adds support for 47-bit VMA kernels for powerpc64.
Tested on several 4.x and 3.x kernel releases.
llvm-svn: 318044
Summary:
Changes:
* Add initial msan stub support.
* Handle NetBSD specific pthread_setname_np(3).
* NetBSD supports __attribute__((tls_model("initial-exec"))),
define it in SANITIZER_TLS_INITIAL_EXEC_ATTRIBUTE.
* Add ReExec() specific bits for NetBSD.
* Simplify code and add syscall64 and syscall_ptr for !NetBSD.
* Correct a bunch of syscall wrappers for NetBSD.
* Disable test/tsan/map32bit on NetBSD as not applicable.
* Port test/tsan/strerror_r to POSIX-compliant OSes.
* Disable __libc_stack_end on NetBSD.
* Disable ReadNullSepFileToArray() on NetBSD.
* Define struct_ElfW_Phdr_sz, a missing symbol detected by msan.
* Change the type of __sanitizer_FILE from void to char. This helps
to reuse this type as an array. Long term it will be properly
implemented along with setting SANITIZER_HAS_STRUCT_FILE to 1.
* Add initial NetBSD support in lib/tsan/go/buildgo.sh.
* Correct references to stdout and stderr in tsan_interceptors.cc
on NetBSD.
* Document NetBSD x86_64 specific virtual memory layout in
tsan_platform.h.
* Port tests/rtl/tsan_test_util_posix.cc to NetBSD.
* Enable NetBSD tests in test/msan/lit.cfg.
* Enable NetBSD tests in test/tsan/lit.cfg.
Sponsored by <The NetBSD Foundation>
Reviewers: joerg, vitalybuka, eugenis, kcc, dvyukov
Reviewed By: dvyukov
Subscribers: #sanitizers, llvm-commits, kubamracek
Tags: #sanitizers
Differential Revision: https://reviews.llvm.org/D39124
llvm-svn: 316591
The existing implementation ran CHECKs to assert that the thread state
was stored inside the TLS. However, the Mac implementation of TSan doesn't
store the thread state in TLS, so these checks fail once Darwin TLS support
is added to the sanitizers. Only run these checks on platforms where
the thread state is expected to be contained in the TLS.
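A rough sketch of the guard, with illustrative parameter names and using the
SANITIZER_MAC platform macro for the example (the actual change may key off a
different condition):
  #include <assert.h>
  typedef unsigned long uptr;
  void CheckThreadStateInTls(uptr thr_beg, uptr thr_end,
                             uptr tls_addr, uptr tls_size) {
  #if !SANITIZER_MAC
    // Only platforms that keep the ThreadState inside the TLS block run the
    // assertion; on Darwin the state lives elsewhere, so the check is skipped.
    assert(thr_beg >= tls_addr && thr_end <= tls_addr + tls_size);
  #endif
  }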
llvm-svn: 303886
Currently we either define SANITIZER_GO for Go or don't define it at all for C++.
This works fine with the preprocessor (ifdef/ifndef/defined), but does not work
for C++ if statements (e.g. if (SANITIZER_GO) {...}). Also, this is different
from the majority of SANITIZER_FOO macros, which are always defined to either 0 or 1.
Always define SANITIZER_GO to either 0 or 1.
This allows SANITIZER_GO to be used in expressions and in flag default values.
Also remove kGoMode and kCppMode, which were meant to be used in expressions,
but they are not defined in sanitizer_common code, so SANITIZER_GO became prevalent.
Also convert some preprocessor checks to C++ if's or ternary expressions.
The majority of this change was done mechanically with:
sed "s#ifdef SANITIZER_GO#if SANITIZER_GO#g"
sed "s#ifndef SANITIZER_GO#if \!SANITIZER_GO#g"
sed "s#defined(SANITIZER_GO)#SANITIZER_GO#g"
llvm-svn: 285443
Currently Windows fails on startup with:
CHECK failed: gotsan.cc:3077 "(((m - prev_m) / kMetaShadowSize)) == (((p - prev) / kMetaShadowCell))" (0x3ffffffeffffff7e, 0x6ffffff7e)
Make MemToMeta do the same thing MemToShadow does on Windows: add the offset instead of or'ing it.
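A minimal sketch of the difference (the cell constants are assumptions used
only for illustration):
  typedef unsigned long long uptr;
  static const uptr kMetaShadowCell = 8;  // app bytes covered per meta cell
  static const uptr kMetaShadowSize = 4;  // meta bytes per cell
  // Adding the base keeps the mapping linear: (m - prev_m) always equals the
  // scaled (p - prev), which is the invariant the failing CHECK compares.
  uptr MemToMetaAdd(uptr p, uptr meta_base) {
    return (p / kMetaShadowCell) * kMetaShadowSize + meta_base;
  }
  // Or'ing the base only behaves the same when its bits never collide with
  // the computed offset bits, which does not hold for the Windows layout.
  uptr MemToMetaOr(uptr p, uptr meta_base) {
    return ((p / kMetaShadowCell) * kMetaShadowSize) | meta_base;
  }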
llvm-svn: 285420
This is a follow up to r282152.
More extensive testing on real apps revealed a subtle bug in r282152.
The revision made the shadow mapping non-linear even within a single
user region. But there is lots of code in the runtime that processes
memory ranges and assumes that the mapping is linear. For example,
region memory access handling simply increments the shadow address
to advance to the next shadow cell group. Similarly, DontNeedShadowFor,
the Java memory mover, the search for heap memory block headers, etc.
make the same assumption.
To trigger the bug, a user range would need to cross the 0x008000000000 boundary.
This was observed for a module data section.
Make shadow mapping linear within a single user range again.
Add a startup CHECK for linearity.
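A simplified sketch of the kind of range-processing code that relies on the
linearity (names and cell sizes here are assumptions, not the actual runtime):
  typedef unsigned long long uptr;
  typedef unsigned long long u64;
  static const uptr kShadowCell = 8;  // app bytes per shadow cell (assumed)
  static const uptr kShadowCnt = 4;   // shadow words per cell (assumed)
  extern u64 *MemToShadow(uptr addr);
  void ProcessRange(uptr addr, uptr size) {
    u64 *shadow = MemToShadow(addr);
    for (uptr off = 0; off < size; off += kShadowCell) {
      // ... inspect *shadow ...
      // Advancing the pointer is only valid if the mapping is linear within
      // the user range being processed.
      shadow += kShadowCnt;
    }
  }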
llvm-svn: 282405
Don't xor user address with kAppMemXor in meta mapping.
The only purpose of kAppMemXor is to raise shadow for ~0 user addresses,
so that they don't map to ~0 (which would cause overlap between
user memory and shadow).
For meta mapping we explicitly add kMetaShadowBeg offset,
so we don't need to additionally raise meta shadow.
llvm-svn: 282403
In ShadowToMem we call MemToShadow potentially for incorrect addresses.
So DCHECK(IsAppMem(p)) can fire in debug mode.
Fix this by swapping range and MemToShadow checks.
llvm-svn: 282157
4.1+ Linux kernels map PIE binaries at 0x55:
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d1fd836dcf00d2028c700c7e44d2c23404062c90
Currently tsan does not support app memory at 0x55 (https://github.com/google/sanitizers/issues/503).
Older kernels also map PIE binaries at 0x55 when ASLR is disabled (most notably under gdb).
This change extends the tsan mapping for linux/x86_64 to cover the 0x554-0x568
app range and fixes both 4.1+ kernels and gdb.
This required slightly shrinking the low and high app ranges and moving the heap.
The mapping becomes even more non-linear, since now we xor the lower bits. Now
even a continuous app range maps to split, intermixed shadow ranges. This
breaks ShadowToMemImpl, as it assumed a linear mapping at least within a
continuous app range (however, it turned out to be already broken at least on
arm64/42-bit VMA, as uncovered by r281970). So also change ShadowToMemImpl to
a hopefully more robust implementation that does not assume a linear mapping.
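A rough sketch of a reverse lookup that does not assume a linear mapping (the
range table and helper functions are assumptions; the real implementation
differs):
  typedef unsigned long long uptr;
  struct AppRange { uptr beg, end; };
  extern const AppRange *AppRanges(int *count);             // hypothetical table
  extern uptr MemToShadow(uptr addr);                       // forward mapping
  extern uptr ReverseMapRange(uptr sh, const AppRange &r);  // per-range guess
  uptr ShadowToMemImpl(uptr sh) {
    int n;
    const AppRange *ranges = AppRanges(&n);
    for (int i = 0; i < n; i++) {
      // Undo the xor/offset for this range, then verify the candidate by
      // mapping it forward again instead of assuming linearity.
      uptr candidate = ReverseMapRange(sh, ranges[i]);
      if (candidate >= ranges[i].beg && candidate < ranges[i].end &&
          MemToShadow(candidate) == sh)
        return candidate;
    }
    return 0;
  }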
llvm-svn: 282152
This patch adds 48-bit VMA support for tsan on aarch64. As with the current
aarch64 mappings, the 48-bit VMA layout also supports PIE executables. This
limits the mapping mechanism because the PIE address bits
(usually 0aaaaXXXXXXXX) make it harder to create a mask/xor value
that covers all memory regions. I think it is possible to create a
larger application VMA range by either dropping PIE support or tuning the
current range.
It also slightly changes the way addresses are packed in the SyncVar structure:
previously it assumed x86_64 as the maximum VMA range. Since the ID is 14 bits
wide, shifting by 48 bits should be ok.
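A minimal sketch of the packing described above, assuming a 48-bit address
and a 14-bit id sharing one 64-bit word (the real SyncVar code differs in
detail):
  typedef unsigned long long u64;
  static const int kAddrBits = 48;
  static const u64 kAddrMask = (1ull << kAddrBits) - 1;
  u64 PackId(u64 addr, u64 uid) {
    return (uid << kAddrBits) | (addr & kAddrMask);
  }
  u64 IdAddr(u64 id) { return id & kAddrMask; }
  u64 IdUid(u64 id) { return id >> kAddrBits; }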
Tested on x86_64, ppc64le and aarch64 (39- and 48-bit VMA).
llvm-svn: 277137
The new annotation was added a while ago, but was not actually used.
Use the annotation to detect linker-initialized mutexes instead
of the broken IsGlobalVar which has both false positives and false
negatives. Remove IsGlobalVar mess.
llvm-svn: 271663
This patch adds PIE executable support for aarch64-linux. It adds
two more segments:
- 0x05500000000-0x05600000000: 39-bit PIE program segment
- 0x2aa00000000-0x2ab00000000: 42-bit PIE program segment
Fortunately it is possible to use the same transformation formula for
the new segment ranges, with some adjustments in the shadow-to-memory
formula (it adds a constant offset based on the VMA size).
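A hedged sketch of the adjustment (the helper names and the added offsets are
assumptions for illustration; the real constants live in tsan_platform.h):
  typedef unsigned long long uptr;
  extern uptr DoShadowToMem(uptr sh);          // existing transformation
  extern bool ShadowIsForPieSegment(uptr sh);  // hypothetical range check
  extern int vmaSize;                          // 39 or 42
  uptr ShadowToMem(uptr sh) {
    uptr mem = DoShadowToMem(sh);
    if (ShadowIsForPieSegment(sh))
      mem += (vmaSize == 39) ? 0x05500000000ull : 0x2aa00000000ull;
    return mem;
  }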
A simple testcase is also added; however, it is disabled on x86 due to the
fact that it might fail on newer kernels [1].
[1] https://git.kernel.org/linus/d1fd836dcf00d2028c700c7e44d2c23404062c90
llvm-svn: 256184
This patch is by Simone Atzeni with portions by Adhemerval Zanella.
This contains the LLVM patches to enable the thread sanitizer for
PPC64, both big- and little-endian. Two different virtual memory
sizes are supported: Old kernels use a 44-bit address space, while
newer kernels require a 46-bit address space.
There are two companion patches that will be added shortly. There is
a Clang patch to actually turn on the use of the thread sanitizer for
PPC64. There is also a patch that I wrote to provide interceptor
support for setjmp/longjmp on PPC64.
Patch discussion at reviews.llvm.org/D12841.
llvm-svn: 255057
This patch unifies the 39-bit and 42-bit support for AArch64 by using an
external memory read to check the VMA detected at runtime and select the
proper mapping and transformation. Although slower, this allows the same
instrumented binary to be independent of the kernel.
Along with this change, this patch also fixes some 42-bit failures with
ASLR disabled by increasing the upper high app memory threshold and also
the 42-bit madvise value for the non-large-page case.
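A simplified sketch of the runtime selection described above (helper and
mapping names are assumptions):
  typedef unsigned long long uptr;
  extern uptr GetMaxUserAddress();    // hypothetical probe of the usable VMA
  extern uptr MemToShadow39(uptr x);  // 39-bit mapping (assumed name)
  extern uptr MemToShadow42(uptr x);  // 42-bit mapping (assumed name)
  static int vmaSize;                 // detected once at startup
  void InitializePlatform() {
    vmaSize = (GetMaxUserAddress() >> 39) ? 42 : 39;  // crude illustration
  }
  uptr MemToShadow(uptr x) {
    // The same instrumented binary picks the mapping at runtime, trading a
    // small slowdown for independence from the kernel's VMA configuration.
    return vmaSize == 39 ? MemToShadow39(x) : MemToShadow42(x);
  }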
llvm-svn: 254151
On OS X, GCD worker threads are created without a call to pthread_create. We
need to properly register these threads with ThreadCreate and ThreadStart.
This patch uses a libpthread API (`pthread_introspection_hook_install`) to get
notifications about new threads and about threads that are about to be
destroyed.
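A hedged sketch of installing such a hook (simplified; the prototype below is
declared locally because the libpthread introspection header is private, and
the registration logic is illustrative rather than the actual runtime code):
  #include <pthread.h>
  #include <stddef.h>
  typedef void (*pthread_introspection_hook_t)(unsigned int event,
                                               pthread_t thread, void *addr,
                                               size_t size);
  extern "C" pthread_introspection_hook_t
  pthread_introspection_hook_install(pthread_introspection_hook_t hook);
  static pthread_introspection_hook_t prev_hook;
  static void MyIntrospectionHook(unsigned int event, pthread_t thread,
                                  void *addr, size_t size) {
    // Events arrive for thread create/start/terminate/destroy; a race
    // detector would call its ThreadCreate/ThreadStart analogues here for
    // threads (such as GCD workers) that never pass through a
    // pthread_create interceptor.
    if (prev_hook) prev_hook(event, thread, addr, size);
  }
  void InstallHook() {
    prev_hook = pthread_introspection_hook_install(MyIntrospectionHook);
  }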
Differential Revision: http://reviews.llvm.org/D14328
llvm-svn: 252049
Updating the shadow memory initialization in `tsan_platform_mac.cc` to also
initialize the meta shadow and to mprotect the memory ranges that need to be
avoided.
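A minimal sketch of reserving such a range so nothing else can map into it
(the mechanism and error handling are assumptions; the runtime uses its own
mmap wrappers):
  #include <stdlib.h>
  #include <sys/mman.h>
  // Map the range PROT_NONE at a fixed address so later allocations cannot
  // land inside regions reserved for shadow or metadata memory.
  static void ProtectGap(unsigned long beg, unsigned long end) {
    void *res = mmap((void *)beg, end - beg, PROT_NONE,
                     MAP_FIXED | MAP_PRIVATE | MAP_ANON | MAP_NORESERVE,
                     -1, 0);
    if (res != (void *)beg) abort();
  }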
Differential Revision: http://reviews.llvm.org/D14324
llvm-svn: 252044
This patch adds support for tsan on aarch64-linux with 42-bit VMA
(current default config for 64K pagesize kernels). The support is
enabled by defining SANITIZER_AARCH64_VMA to 42 at build time
for both clang/llvm and compiler-rt. The default VMA is 39 bits.
It also enables tsan for the previously supported VMA (39).
llvm-svn: 246330
This patch enables TSan for aarch64 with the 39-bit VMA layout. As defined by
tsan_platform.h, the layout used is:
0000 4000 00 - 0200 0000 00: main binary
2000 0000 00 - 4000 0000 00: shadow memory
4000 0000 00 - 5000 0000 00: metainfo
5000 0000 00 - 6000 0000 00: -
6000 0000 00 - 6200 0000 00: traces
6200 0000 00 - 7d00 0000 00: -
7d00 0000 00 - 7e00 0000 00: heap
7e00 0000 00 - 7fff ffff ff: modules and main thread stack
This gives about 8GB for the main binary, 4GB for the heap and 8GB for
modules and the main thread stack.
Most of the tests are passing, with the exception of:
* ignore_lib0, ignore_lib1, ignore_lib3, due to a kernel limitation (no
  support for making mmap'ed pages non-executable).
* longjmp tests, due to missing specialized assembly routines.
These tests are xfail for now.
The only tsan issue still showing is:
rtl/TsanRtlTest/Posix.ThreadLocalAccesses
which still requires further investigation. The test is disabled for
aarch64 for now.
llvm-svn: 244055
The new storage (MetaMap) is based on direct shadow (instead of a hashmap + per-block lists).
This solves a number of problems:
- eliminates quadratic behaviour in SyncTab::GetAndLock (https://code.google.com/p/thread-sanitizer/issues/detail?id=26)
- eliminates contention in SyncTab
- eliminates contention in internal allocator during allocation of sync objects
- removes a bunch of ad-hoc code in the Java interface
- reduces the Java shadow from 2x to 1/2x
- allows memorizing heap block meta info for Java and Go
- allows cleaning up sync object meta info for Go
- which in turn enables the deadlock detector for Go
llvm-svn: 209810
This allows increasing the max shadow stack size to 64K,
and reliably catching shadow stack overflows instead of silently
corrupting memory.
llvm-svn: 192797
The flag allows bounding the maximum process memory consumption (best effort).
If RSS reaches memory_limit_mb, tsan flushes all shadow memory.
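For example, the limit can be set via TSAN_OPTIONS (the value here is purely
illustrative):
  TSAN_OPTIONS=memory_limit_mb=2048 ./app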
llvm-svn: 191913
Move this function to sanitizer_common because LSan uses it too. Also, fix a bug
where the TLS range reported for the main thread was off by the size of the thread
descriptor from libc (TSan doesn't care much, but for LSan it's critical).
llvm-svn: 181322