to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
Add a new flag, __tsan_mutex_not_static, which has the opposite sense
of __tsan_mutex_linker_init. When the new __tsan_mutex_not_static flag
is passed to __tsan_mutex_destroy, tsan ignores the destruction unless
the mutex was also created with the __tsan_mutex_not_static flag.
This is useful for constructors that would otherwise set
__tsan_mutex_linker_init but cannot, because they are declared constexpr.
Google has a custom mutex with two constructors, a "linker initialized"
constructor that relies on zero-initialization and sets
__tsan_mutex_linker_init, and a normal one which sets no tsan flags.
The "linker initialized" constructor is morally constexpr, but we can't
declare it constexpr because of the need to call into tsan as a side effect.
With this new flag, the normal constructor can set __tsan_mutex_not_static,
the "linker initialized" constructor can rely on tsan's lazy initialization,
and __tsan_mutex_destroy can still handle both cases correctly.
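For illustration, a minimal sketch of the two-constructor pattern described
above, using the annotation entry points and flags from
<sanitizer/tsan_interface.h>; the Mutex class itself and its lock state are
hypothetical:

  #include <sanitizer/tsan_interface.h>

  class Mutex {
   public:
    enum LinkerInitialized { LINKER_INITIALIZED };

    // Normal constructor: announce the mutex eagerly and mark it as not
    // static, so tsan will honor its destruction.
    Mutex() { __tsan_mutex_create(this, __tsan_mutex_not_static); }

    // "Linker initialized" constructor: relies on zero-initialization and
    // makes no tsan calls, which is what lets it be constexpr; tsan learns
    // about the mutex lazily, on first use.
    constexpr Mutex(LinkerInitialized) {}

    ~Mutex() {
      // tsan ignores this destruction unless the mutex was created with
      // __tsan_mutex_not_static (i.e. by the normal constructor above).
      __tsan_mutex_destroy(this, __tsan_mutex_not_static);
    }

   private:
    unsigned char state_ = 0;  // hypothetical lock state, zero-initialized
  };

  // A global can use the constexpr constructor and needs no tsan call:
  Mutex g_mu(Mutex::LINKER_INITIALIZED);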
Author: Greg Falcon (gfalcon)
Reviewed in: https://reviews.llvm.org/D39095
llvm-svn: 316209
There are several problems with the current annotations (AnnotateRWLockCreate and friends):
- they don't fully support deadlock detection (we need a hook _before_ mutex lock)
- they don't support insertion of random artificial delays to perturb execution (again we need a hook _before_ mutex lock)
- they don't support setting extended mutex attributes like read/write reentrancy (only "linker init" was bolted on)
- they don't support setting mutex attributes if a mutex doesn't have a "constructor" (e.g. static, Java, Go mutexes)
- they don't ignore synchronization inside of lock/unlock operations which leads to slowdown and false negatives
The new annotations solve all of the above problems. See tsan_interface.h for the interface specification and comments.
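For illustration, a sketch of how a custom mutex might use the new
annotations (the lock implementation is hypothetical; the entry points are
those declared in tsan_interface.h):

  #include <sanitizer/tsan_interface.h>
  #include <atomic>

  class MyLock {
   public:
    MyLock() { __tsan_mutex_create(this, 0); }
    ~MyLock() { __tsan_mutex_destroy(this, 0); }

    void Lock() {
      // Hook *before* acquisition: lets tsan run deadlock detection and
      // inject an artificial delay to perturb execution.
      __tsan_mutex_pre_lock(this, 0);
      while (locked_.exchange(true, std::memory_order_acquire)) {
      }
      // Between the pre/post hooks tsan ignores the synchronization
      // performed by the lock implementation itself.
      __tsan_mutex_post_lock(this, 0, /*recursion=*/0);
    }

    void Unlock() {
      __tsan_mutex_pre_unlock(this, 0);
      locked_.store(false, std::memory_order_release);
      __tsan_mutex_post_unlock(this, 0);
    }

   private:
    std::atomic<bool> locked_{false};
  };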
Reviewed in https://reviews.llvm.org/D31093
llvm-svn: 298809
This patch adds 48-bit VMA support for tsan on aarch64. As with the
current aarch64 mappings, the 48-bit VMA also supports PIE executables.
This constrains the mapping mechanism, because the PIE address bits
(usually 0aaaaXXXXXXXX) make it harder to create a mask/xor value that
covers all memory regions. I think it is possible to create a larger
application VMA range by either dropping PIE support or tuning the
current range.
It also slightly changes the way addresses are packed in the SyncVar
structure: previously it assumed x86_64 as the maximum VMA range. Since
the ID is 14 bits wide, shifting by 48 bits should be ok.
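For illustration, a sketch of the packing arithmetic (the field layout here
is illustrative, not the exact SyncVar encoding): 48 address bits plus a
14-bit ID is 62 bits, which still fits in a u64.

  #include <cstdint>

  constexpr int kVmaBits = 48;  // maximum supported application VMA
  constexpr uint64_t kAddrMask = (uint64_t(1) << kVmaBits) - 1;

  // ID occupies bits [48, 62), address occupies bits [0, 48).
  constexpr uint64_t PackId(uint64_t addr, uint64_t id) {
    return (id << kVmaBits) | (addr & kAddrMask);
  }
  constexpr uint64_t UnpackAddr(uint64_t v) { return v & kAddrMask; }
  constexpr uint64_t UnpackId(uint64_t v) { return v >> kVmaBits; }

  static_assert(UnpackAddr(PackId(0x7ffffffff000, 0x3abc)) == 0x7ffffffff000,
                "address survives the round trip");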
Tested on x86_64, ppc64le and aarch64 (39 and 48 bits VMA).
llvm-svn: 277137
This is a reincarnation of http://reviews.llvm.org/D17648 with the bug fix pointed out by Adhemerval (zatrazz).
Currently ThreadState holds both logical state (required for the
race-detection algorithm, user-visible) and physical state (various
caches, most notably the malloc cache). Move the physical state into a
new Processor entity. Besides just being the right thing from an
abstraction point of view, this solves several problems:
1. Cache everything on the P level in Go. Currently we cache on a mix of
goroutine and OS thread levels, which unnecessarily increases memory
consumption.
2. Properly handle free operations in Go. Frees are issued by the GC,
which has no goroutine context, so we could not do anything more than
just clear the shadow. For example, we leaked sync objects and heap
block descriptors.
3. This will allow us to get rid of libc malloc in Go (we now have a
Processor context for the internal allocator cache), which in turn will
allow us to get rid of the dependency on libc entirely.
4. Potentially we can make the Processor per-CPU in C++ mode instead of
per-thread, which will reduce resource consumption.
The distinction between Thread and Processor is currently used only by
Go; C++ creates a Processor per OS thread, which is equivalent to the
current scheme.
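For orientation, a schematic sketch of the split; names, members and the
wiring helpers below are simplified stand-ins for the actual tsan runtime
structures:

  struct ThreadState {  // logical, user-visible state of a thread/goroutine
    // vector clock, trace, ignore counters, ...
  };

  struct Processor {    // physical state, bindable to whatever is running
    // malloc cache, clock cache, sync/meta caches, ...
  };

  // In C++ mode a Processor is created per OS thread and stays wired to it,
  // which is equivalent to the old scheme. In Go mode the scheduler can wire
  // a Processor to whichever goroutine runs, and GC frees (which have no
  // goroutine context) only need a Processor, not a ThreadState.
  void ProcWire(Processor *proc, ThreadState *thr);
  void ProcUnwire(Processor *proc, ThreadState *thr);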
llvm-svn: 267678
Currently ThreadState holds both logical state (required for the
race-detection algorithm, user-visible) and physical state (various
caches, most notably the malloc cache). Move the physical state into a
new Processor entity. Besides just being the right thing from an
abstraction point of view, this solves several problems:
1. Cache everything on the P level in Go. Currently we cache on a mix of
goroutine and OS thread levels, which unnecessarily increases memory
consumption.
2. Properly handle free operations in Go. Frees are issued by the GC,
which has no goroutine context, so we could not do anything more than
just clear the shadow. For example, we leaked sync objects and heap
block descriptors.
3. This will allow us to get rid of libc malloc in Go (we now have a
Processor context for the internal allocator cache), which in turn will
allow us to get rid of the dependency on libc entirely.
4. Potentially we can make the Processor per-CPU in C++ mode instead of
per-thread, which will reduce resource consumption.
The distinction between Thread and Processor is currently used only by
Go; C++ creates a Processor per OS thread, which is equivalent to the
current scheme.
llvm-svn: 262037
The munmap interceptor did not reset the meta shadow for the unmapped
range, so __tsan_java_move crashed when it encountered non-zero meta
shadow for the destination.
llvm-svn: 232029
Vector clocks are the most actively allocated objects in the tsan
runtime. The current internal allocator is not scalable enough to handle
clock allocation (its caches are too small). This change transforms
clocks into 2-level arrays of 512-byte blocks. Since all blocks are the
same size, it is possible to cache them more efficiently in per-thread
caches.
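A sketch of the idea (simplified, with a made-up fixed first-level size;
not the exact tsan structures): every allocation the internal allocator
sees has the same size class, so per-thread caches of free blocks stay
effective.

  #include <cstdint>

  constexpr int kBlockBytes = 512;  // all blocks share one size class
  constexpr int kClocksPerBlock = kBlockBytes / sizeof(uint64_t);  // 64

  struct ClockBlock {
    uint64_t clk[kClocksPerBlock];
  };

  struct SyncClock {
    // First level: a table of block pointers; real code grows on demand.
    ClockBlock *blocks[128];

    uint64_t Get(unsigned tid) const {
      return blocks[tid / kClocksPerBlock]->clk[tid % kClocksPerBlock];
    }
  };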
llvm-svn: 214912
The new storage (MetaMap) is based on direct shadow (instead of a
hashmap + per-block lists); a sketch of the mapping follows the list
below. This solves a number of problems:
- eliminates quadratic behaviour in SyncTab::GetAndLock (https://code.google.com/p/thread-sanitizer/issues/detail?id=26)
- eliminates contention in SyncTab
- eliminates contention in the internal allocator during allocation of sync objects
- removes a bunch of ad-hoc code in the java interface
- reduces java shadow from 2x to 1/2x
- allows memorizing heap block meta info for Java and Go
- allows cleaning up sync object meta info for Go
- which in turn enabled the deadlock detector for Go
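Here is a sketch of the direct mapping (the constants are placeholders,
not tsan's real layout values): every 8 bytes of application memory map to
a fixed 4-byte meta cell holding an index of the sync object or heap block
descriptor, which is where the 1/2x shadow cost comes from.

  #include <cstdint>

  constexpr uintptr_t kMetaShadowBeg  = 0x300000000000ull;  // placeholder base
  constexpr uintptr_t kMetaShadowCell = 8;  // app bytes covered per meta cell
  constexpr uintptr_t kMetaShadowSize = 4;  // bytes per meta cell => 1/2x cost

  inline uint32_t *MemToMeta(uintptr_t addr) {
    // Pure arithmetic: no hashmap lookup, no per-block list walk, hence no
    // contention and no quadratic scans.
    return reinterpret_cast<uint32_t *>(
        kMetaShadowBeg + (addr / kMetaShadowCell) * kMetaShadowSize);
  }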
llvm-svn: 209810
Introduce a DDetector interface between the tool and the deadlock
detector (DD) itself. It will make it easier to experiment with other DD
implementations, as well as to reuse the DD in other tools.
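A simplified sketch of what such an interface boundary looks like (the
real one lives in sanitizer_common and differs in details): the tool
reports mutex events through virtual hooks, and any detector
implementation can be plugged in behind them.

  struct DDMutex;     // opaque per-mutex state owned by the detector
  struct DDCallback;  // per-event context passed back from the tool

  struct DDetector {
    virtual ~DDetector() {}
    virtual void MutexInit(DDCallback *cb, DDMutex *m) = 0;
    virtual void MutexBeforeLock(DDCallback *cb, DDMutex *m, bool wlock) = 0;
    virtual void MutexAfterLock(DDCallback *cb, DDMutex *m, bool wlock) = 0;
    virtual void MutexBeforeUnlock(DDCallback *cb, DDMutex *m, bool wlock) = 0;
    virtual void MutexDestroy(DDCallback *cb, DDMutex *m) = 0;
  };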
llvm-svn: 202485
Algorithm description: http://code.google.com/p/thread-sanitizer/wiki/ThreadSanitizerAlgorithm
Status:
The tool is known to work on large real-life applications, but still has quite a few rough edges.
Nothing is guaranteed yet.
The tool works on x86_64 Linux.
Support for 64-bit MacOS 10.7+ is planned for late 2012.
Support for 32-bit OSes is doable, but problematic and not yet planned.
Further commits coming:
- tests
- makefiles
- documentation
- clang driver patch
The code was previously developed at http://code.google.com/p/data-race-test/source/browse/trunk/v2/
by Dmitry Vyukov and Kostya Serebryany with contributions from
Timur Iskhodzhanov, Alexander Potapenko, Alexey Samsonov and Evgeniy Stepanov.
llvm-svn: 156542