Commit Graph

28 Commits

Author SHA1 Message Date
Chandler Carruth 2946cd7010 Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.

We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.

Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.

llvm-svn: 351636
2019-01-19 08:50:56 +00:00
Dmitry Vyukov 9e2cd1c125 [tsan] Add Mutex annotation flag for constant-initialized __tsan_mutex_linker_init behavior
Add a new flag, __tsan_mutex_not_static, which has the opposite sense
of __tsan_mutex_linker_init. When the new __tsan_mutex_not_static flag
is passed to __tsan_mutex_destroy, tsan ignores the destruction unless
the mutex was also created with the __tsan_mutex_not_static flag.

This is useful for constructors that otherwise would set
__tsan_mutex_linker_init but cannot, because they are declared constexpr.

Google has a custom mutex with two constructors, a "linker initialized"
constructor that relies on zero-initialization and sets
__tsan_mutex_linker_init, and a normal one which sets no tsan flags.
The "linker initialized" constructor is morally constexpr, but we can't
declare it constexpr because of the need to call into tsan as a side effect.

With this new flag, the normal c'tor can set __tsan_mutex_not_static,
the "linker initialized" constructor can rely on tsan's lazy initialization,
and __tsan_mutex_destroy can still handle both cases correctly.

Author: Greg Falcon (gfalcon)
Reviewed in: https://reviews.llvm.org/D39095

llvm-svn: 316209
2017-10-20 12:08:53 +00:00
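A minimal sketch of the usage pattern described in the commit above, assuming compiler-rt's <sanitizer/tsan_interface.h> and a build with -fsanitize=thread; the Mutex class, its DynamicTag constructor parameter and the state_ member are invented for illustration:

    #include <sanitizer/tsan_interface.h>  // __tsan_mutex_* annotations

    class Mutex {
     public:
      // "Linker initialized" constructor: relies on zero-initialization and on
      // tsan's lazy creation of the sync object, so it can be constexpr and
      // calls no annotation hooks.
      constexpr Mutex() = default;

      struct DynamicTag {};
      // Normal constructor: announces the mutex to tsan and marks it as not
      // statically initialized, e.g. Mutex m(Mutex::DynamicTag{});
      explicit Mutex(DynamicTag) {
        __tsan_mutex_create(this, __tsan_mutex_not_static);
      }

      ~Mutex() {
        // Per the commit above: tsan ignores this destruction unless the mutex
        // was created with __tsan_mutex_not_static, so one destructor handles
        // both kinds of construction.
        __tsan_mutex_destroy(this, __tsan_mutex_not_static);
      }

     private:
      unsigned long state_ = 0;  // real locking machinery omitted
    };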
Dmitry Vyukov dc2a38cdf2 tsan: fix reading of mutex flags
SyncVar::IsFlagSet returns true if any flag is set.
This is wrong. Check the actual requested flag.

llvm-svn: 305281
2017-06-13 09:37:51 +00:00
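For readers not chasing the diff, the fix above is essentially a one-liner; a simplified sketch (not the actual SyncVar code):

    struct SyncVarSketch {
      unsigned flags = 0;
      // Before (buggy): returned true whenever *any* flag bit was set,
      // e.g. `return flags != 0;`.
      // After: test only the flag that was actually requested.
      bool IsFlagSet(unsigned f) const { return (flags & f) != 0; }
    };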
Dmitry Vyukov 8096a8c86f tsan: add new mutex annotations
There are several problems with the current annotations (AnnotateRWLockCreate and friends):
- they don't fully support deadlock detection (we need a hook _before_ mutex lock)
- they don't support insertion of random artificial delays to perturb execution (again we need a hook _before_ mutex lock)
- they don't support setting extended mutex attributes like read/write reentrancy (only "linker init" was bolted on)
- they don't support setting mutex attributes if a mutex doesn't have a "constructor" (e.g. static, Java, Go mutexes)
- they don't ignore synchronization inside of lock/unlock operations, which leads to slowdowns and false negatives
The new annotations solve all of the above problems. See tsan_interface.h for the interface specification and comments.

Reviewed in https://reviews.llvm.org/D31093

llvm-svn: 298809
2017-03-26 15:27:04 +00:00
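A hedged sketch of how a custom lock might drive the annotations introduced above; SpinLock and its spin loop are invented for illustration, while the hook names and signatures are those declared in compiler-rt's <sanitizer/tsan_interface.h> (assumes a -fsanitize=thread build):

    #include <atomic>
    #include <sanitizer/tsan_interface.h>

    class SpinLock {
     public:
      SpinLock() { __tsan_mutex_create(this, 0); }
      ~SpinLock() { __tsan_mutex_destroy(this, 0); }

      void Lock() {
        // The *pre* hook is what enables deadlock detection and artificial
        // delays: tsan learns about the intent to lock before acquisition.
        __tsan_mutex_pre_lock(this, 0);
        while (locked_.exchange(true, std::memory_order_acquire)) {
          // spin; between the pre/post hooks tsan ignores the lock's own
          // synchronization, avoiding slowdown and false negatives
        }
        __tsan_mutex_post_lock(this, 0, /*recursion=*/0);
      }

      void Unlock() {
        __tsan_mutex_pre_unlock(this, 0);
        locked_.store(false, std::memory_order_release);
        __tsan_mutex_post_unlock(this, 0);
      }

     private:
      std::atomic<bool> locked_{false};
    };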
Adhemerval Zanella 4f9de1e7bf tsan: Enable 48-bit VMA support on aarch64
This patch adds 48-bit VMA support for tsan on aarch64.  As with the current
aarch64 mappings, 48-bit VMA also supports PIE executables.  This
limits the mapping mechanism because the PIE address bits
(usually 0aaaaXXXXXXXX) make it harder to create a mask/xor value
that includes all memory regions.  I think it is possible to create a
larger application VMA range by either dropping PIE support or tuning the
current range.

It also slightly changes the way addresses are packed in the SyncVar structure:
previously it assumed x86_64 as the maximum VMA range.  Since the ID is 14 bits
wide, shifting by 48 bits should be ok.

Tested on x86_64, ppc64le and aarch64 (39 and 48 bits VMA).

llvm-svn: 277137
2016-07-29 12:45:35 +00:00
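A hypothetical sketch of the kind of packing the commit above describes; the constants and helper names are invented and do not reflect the actual SyncVar layout:

    #include <cassert>
    #include <cstdint>

    constexpr int kVmaBits = 48;                            // aarch64 48-bit VMA
    constexpr uint64_t kAddrMask = (1ULL << kVmaBits) - 1;

    inline uint64_t PackAddrId(uint64_t addr, uint64_t id) {
      assert((addr & ~kAddrMask) == 0);  // address must fit in 48 bits
      assert(id < (1ULL << 14));         // ID is 14 bits wide
      return addr | (id << kVmaBits);    // 48 + 14 = 62 bits <= 64
    }

    inline uint64_t UnpackAddr(uint64_t packed) { return packed & kAddrMask; }
    inline uint64_t UnpackId(uint64_t packed) { return packed >> kVmaBits; }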
Dmitry Vyukov b3a51bdcd7 tsan: don't create sync objects on acquire
Creating sync objects on acquire is pointless:
acquire of a just-created sync object is a no-op.

llvm-svn: 273862
2016-06-27 11:14:59 +00:00
Dmitry Vyukov d87c7b321a tsan: split thread into logical and physical state
This is reincarnation of http://reviews.llvm.org/D17648 with the bug fix pointed out by Adhemerval (zatrazz).

Currently ThreadState holds both logical state (required for the race-detection algorithm, user-visible)
and physical state (various caches, most notably the malloc cache). Move physical state into a new
Processor entity. Besides just being the right thing from an abstraction point of view, this solves several
problems:

Cache everything on P level in Go. Currently we cache on a mix of goroutine and OS thread levels.
This unnecessarily increases memory consumption.

Properly handle free operations in Go. Frees are issued by the GC, which does not have goroutine context.
As a result we could not do anything more than just clear the shadow. For example, we leaked
sync objects and heap block descriptors.

This will allow us to get rid of libc malloc in Go (now we have a Processor context for the internal allocator cache).
This in turn will allow us to get rid of the dependency on libc entirely.

Potentially we can make Processor per-CPU in C++ mode instead of per-thread, which will
reduce resource consumption.
The distinction between Thread and Processor is currently used only by Go; C++ creates a Processor per OS thread,
which is equivalent to the current scheme.

llvm-svn: 267678
2016-04-27 08:23:02 +00:00
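A rough sketch of the split described above; the field names are invented for illustration (the real definitions are ThreadState and Processor in tsan_rtl.h):

    struct Processor {      // physical state: caches that can be reused
      void *alloc_cache;    // malloc cache lives here, not in the thread
      void *clock_cache;
      void *sync_cache;
      // ...
    };

    struct ThreadState {    // logical state: what the race detector needs per thread
      int tid;              // user-visible thread identity
      // vector clock, trace, ignore counters, ...
      Processor *proc;      // attached Processor; in Go one Processor can serve
                            // many goroutines, in C++ it is per OS thread
    };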
Dmitry Vyukov 7f022ae4c2 tsan: revert r262037
Broke aarch64 and darwin bots.

llvm-svn: 262046
2016-02-26 18:26:48 +00:00
Dmitry Vyukov b8868b9bea tsan: split thread into logical and physical state
Currently ThreadState holds both logical state (required for the race-detection algorithm, user-visible)
and physical state (various caches, most notably the malloc cache). Move physical state into a new
Processor entity. Besides just being the right thing from an abstraction point of view, this solves several
problems:
1. Cache everything on P level in Go. Currently we cache on a mix of goroutine and OS thread levels.
This unnecessarily increases memory consumption.
2. Properly handle free operations in Go. Frees are issued by the GC, which does not have goroutine context.
As a result we could not do anything more than just clear the shadow. For example, we leaked
sync objects and heap block descriptors.
3. This will allow us to get rid of libc malloc in Go (now we have a Processor context for the internal allocator cache).
This in turn will allow us to get rid of the dependency on libc entirely.
4. Potentially we can make Processor per-CPU in C++ mode instead of per-thread, which will
reduce resource consumption.
The distinction between Thread and Processor is currently used only by Go; C++ creates a Processor per OS thread,
which is equivalent to the current scheme.

llvm-svn: 262037
2016-02-26 16:57:14 +00:00
Dmitry Vyukov d161fcba17 tsan: fix shift overflow
3<<30 fits into 32-bit unsigned, but does not fit into int.
Found by ubsan.

llvm-svn: 243241
2015-07-26 07:45:26 +00:00
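For context on the fix above: the literal 3 has type int, so 3 << 30 is evaluated in signed 32-bit arithmetic where the result does not fit, which ubsan's shift check flags; a hedged sketch of the usual remedy:

    #include <cstdint>

    // constexpr uint32_t kMask = 3 << 30;            // flagged by ubsan: overflows int
    constexpr uint32_t kMask = 3u << 30;               // ok: shift performed on unsigned int
    constexpr uint64_t kMask64 = uint64_t{3} << 30;    // also ok: 64-bit arithmetic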
Dmitry Vyukov a60829a1b6 tsan: fix crash during __tsan_java_move
Munmap interceptor did not reset meta shadow for the range,
and __tsan_java_move crashed because it encountered
non-zero meta shadow for the destination.

llvm-svn: 232029
2015-03-12 11:24:16 +00:00
Dmitry Vyukov 70db9d4d72 tsan: allocate vector clocks using slab allocator
Vector clocks are the most actively allocated objects in the tsan runtime.
The current internal allocator is not scalable enough to handle allocation
of clocks in a scalable way (its caches are too small). This change transforms
clocks into a 2-level array with 512-byte blocks. Since all blocks are of
the same size, it is possible to cache them more efficiently in per-thread caches.

llvm-svn: 214912
2014-08-05 18:45:02 +00:00
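An illustrative sketch of the layout described above (not the real tsan clock code): the clock becomes a 2-level array whose leaves are fixed 512-byte blocks, so every allocation has the same size and can be cached cheaply in per-thread caches.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    constexpr size_t kBlockSize = 512;                                 // bytes per leaf block
    constexpr size_t kEpochsPerBlock = kBlockSize / sizeof(uint64_t);  // 64 epochs per block

    struct ClockBlock { uint64_t epoch[kEpochsPerBlock]; };

    class VectorClockSketch {
     public:
      uint64_t Get(size_t tid) const {
        size_t blk = tid / kEpochsPerBlock, idx = tid % kEpochsPerBlock;
        return blk < blocks_.size() && blocks_[blk] ? blocks_[blk]->epoch[idx] : 0;
      }
      void Set(size_t tid, uint64_t epoch) {
        size_t blk = tid / kEpochsPerBlock, idx = tid % kEpochsPerBlock;
        if (blk >= blocks_.size()) blocks_.resize(blk + 1, nullptr);
        if (!blocks_[blk])
          blocks_[blk] = new ClockBlock();  // the real runtime takes this from a
                                            // per-thread cache of same-sized blocks
        blocks_[blk]->epoch[idx] = epoch;
      }
     private:
      std::vector<ClockBlock*> blocks_;     // first level; leaves are uniform 512-byte blocks
    };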
Dmitry Vyukov bde4c9c773 tsan: refactor storage of meta information for heap blocks and sync objects
The new storage (MetaMap) is based on direct shadow (instead of a hashmap + per-block lists).
This solves a number of problems:
 - eliminates quadratic behaviour in SyncTab::GetAndLock (https://code.google.com/p/thread-sanitizer/issues/detail?id=26)
 - eliminates contention in SyncTab
 - eliminates contention in internal allocator during allocation of sync objects
 - removes a bunch of ad-hoc code in java interface
 - reduces java shadow from 2x to 1/2x
 - makes it possible to remember heap block meta info for Java and Go
 - makes it possible to clean up sync object meta info for Go
 - which in turn enabled the deadlock detector for Go

llvm-svn: 209810
2014-05-29 13:50:54 +00:00
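A hedged illustration of the "direct shadow" idea behind MetaMap (the base address and cell sizes below are invented; the real code lives in tsan_sync.h/cc): each fixed-size granule of application memory maps by pure address arithmetic to a meta cell holding an index of its heap-block or sync-object descriptor, so lookup needs no hashmap and no per-block lists.

    #include <cstdint>

    constexpr uint64_t kMetaShadowBase = 0x300000000000ull;  // made-up shadow base
    constexpr uint64_t kMetaGranule = 8;                     // app bytes per meta cell
    constexpr uint64_t kMetaCellSize = 4;                    // bytes per meta cell (1/2x of app)

    inline uint32_t *MetaCellFor(uint64_t app_addr) {
      uint64_t off = app_addr / kMetaGranule * kMetaCellSize;
      return reinterpret_cast<uint32_t *>(kMetaShadowBase + off);
    }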
Dmitry Vyukov f49921ba53 tsan: reorder SyncVar members to reduce contention
llvm-svn: 204655
2014-03-24 18:51:37 +00:00
Dmitry Vyukov d3466b9e5e tsan: remove unused function declarations
llvm-svn: 204328
2014-03-20 10:39:46 +00:00
Dmitry Vyukov 9cf08c46a6 tsan: remove unused declaration
llvm-svn: 204324
2014-03-20 10:13:30 +00:00
Dmitry Vyukov 6cfab724ec tsan: refactor deadlock detector
Introduce a DDetector interface between the tool and the DD itself.
It will help to experiment with other DD implementations,
as well as to reuse the DD in other tools.

llvm-svn: 202485
2014-02-28 10:48:13 +00:00
Kostya Serebryany a63632a5c6 [tsan] rudimentary support for deadlock detector in tsan (nothing really works yet except for a single tiny test). Also rename tsan's DeadlockDetector to InternalDeadlockDetector
llvm-svn: 201407
2014-02-14 12:20:42 +00:00
Dmitry Vyukov a221620b2e tsan: use StackDepot in sync object to store creation stacks
llvm-svn: 177258
2013-03-18 08:27:47 +00:00
Dmitry Vyukov a33bf2701e tsan: fix Java memory move operations and add the test
llvm-svn: 170891
2012-12-21 13:23:48 +00:00
Dmitry Vyukov 2547ac65eb tsan: java interface implementation skeleton
llvm-svn: 170707
2012-12-20 17:29:34 +00:00
Dmitry Vyukov fd5ebcd1b0 tsan: add mutexsets to reports
With this change, reports say which mutexes the threads hold around the racy memory accesses.

llvm-svn: 169493
2012-12-06 12:16:15 +00:00
Dmitry Vyukov 3482ec3bc8 tsan: better diagnostics for destroy of a locked mutex + a test
llvm-svn: 162022
2012-08-16 15:08:49 +00:00
Dmitry Vyukov 19ae9f3b2e tsan: support for linker-initialized mutexes with static storage duration
llvm-svn: 162021
2012-08-16 14:21:09 +00:00
Dmitry Vyukov 6fa46f7003 tsan/asan: unify atomics (move atomics from tsan to sanitizer_common)
llvm-svn: 159437
2012-06-29 16:58:33 +00:00
Dmitry Vyukov de1fd1c83b tsan: do not call malloc/free in memory access handling routine.
This improves signal-/fork-safety of instrumented programs.

llvm-svn: 158988
2012-06-22 11:08:55 +00:00
Dmitry Vyukov 15710c9220 tsan: simple memory profiler
llvm-svn: 157248
2012-05-22 11:33:03 +00:00
Kostya Serebryany 4ad375f0a9 [tsan] First commit of ThreadSanitizer (TSan) run-time library.
Algorithm description: http://code.google.com/p/thread-sanitizer/wiki/ThreadSanitizerAlgorithm

Status:
The tool is known to work on large real-life applications, but still has quite a few rough edges.
Nothing is guaranteed yet.

The tool works on x86_64 Linux.
Support for 64-bit MacOS 10.7+ is planned for late 2012.
Support for 32-bit OSes is doable, but problematic and not yet planned.

Further commits coming:
  - tests
  - makefiles
  - documentation
  - clang driver patch

The code was previously developed at http://code.google.com/p/data-race-test/source/browse/trunk/v2/
by Dmitry Vyukov and Kostya Serebryany with contributions from
Timur Iskhodzhanov, Alexander Potapenko, Alexey Samsonov and Evgeniy Stepanov.

llvm-svn: 156542
2012-05-10 13:48:04 +00:00