Commit Graph

11 Commits

Author SHA1 Message Date
Dmitry Vyukov 14e306fa4b tsan: use DCHECK instead of CHECK in atomic functions
Atomic functions are semi-hot in profiles.
The CHECKs verify values passed by the compiler,
and they have never fired, so replace them with DCHECKs.
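
A minimal sketch of the distinction (hypothetical macro definitions, not the
actual sanitizer ones): CHECK is always compiled in, while DCHECK disappears
in release builds, so the hot atomic paths pay nothing for it.

  // Hypothetical CHECK/DCHECK, for illustration only.
  #include <cstdio>
  #include <cstdlib>

  #define CHECK(cond)                                      \
    do {                                                   \
      if (!(cond)) {                                       \
        std::fprintf(stderr, "Check failed: %s\n", #cond); \
        std::abort();                                      \
      }                                                    \
    } while (0)

  #if defined(NDEBUG)
  #define DCHECK(cond) ((void)0)  // compiles to nothing in release builds
  #else
  #define DCHECK(cond) CHECK(cond)  // full check in debug builds
  #endif

  void AccessOfSize(unsigned size) {
    // Validates a value produced by the compiler; debug builds only.
    DCHECK(size == 1 || size == 2 || size == 4 || size == 8);
  }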

Reviewed By: vitalybuka, melver

Differential Revision: https://reviews.llvm.org/D107373
2021-08-04 13:23:57 +02:00
Dmitry Vyukov 831910c5c4 tsan: new MemoryAccess interface
Currently we have a MemoryAccess function that accepts
"bool kAccessIsWrite, bool kIsAtomic" and 4 wrappers:
MemoryRead/MemoryWrite/MemoryReadAtomic/MemoryWriteAtomic.

Such a scheme with bool flags is not particularly scalable/extensible.
Because of that we don't have Read/Write wrappers for UnalignedMemoryAccess,
and "true, false" or "false, true" at call sites is not very readable.

Moreover, the new tsan runtime will introduce more flags
(e.g. move "freed" and "vptr access" to memory access flags).
We can't have 16 wrappers, and each flag also takes a whole
64-bit register for non-inlined calls.

Introduce an AccessType enum that contains a bit mask of
read/write, atomic/non-atomic, and later free/non-free and
vptr/non-vptr.
Such a scheme is more scalable, more readable, and more efficient
(it doesn't consume multiple registers for these flags during calls),
and it allows covering the unaligned and range variations of the
memory access functions as well.

Also switch from size log to just size.
The new tsan runtime won't have the limitation of supporting
only 1/2/4/8 access sizes, so we don't need the logarithms.

Also add an inline thunk that converts the new interface to the old one.
For inlined calls it should not add any overhead because
all flags/sizes can be computed at compile time.
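
A sketch of the bit-mask style (illustrative names and stub bodies, not the
exact tsan declarations):

  #include <cstdint>

  using uptr = uintptr_t;

  enum AccessType : uptr {
    kAccessRead   = 0,
    kAccessWrite  = 1 << 0,
    kAccessAtomic = 1 << 1,
    // later: kAccessFree, kAccessVptr, ...
  };

  // One entry point takes the plain size in bytes plus a flags mask.
  void MemoryAccess(uptr pc, uptr addr, uptr size, AccessType typ) {
    // ... shadow memory update would go here ...
  }

  // A thin inline thunk keeps old-style call sites working; when the call is
  // inlined, the constant flags and size fold away at compile time.
  inline void MemoryReadAtomic(uptr pc, uptr addr, uptr size) {
    MemoryAccess(pc, addr, size,
                 static_cast<AccessType>(kAccessRead | kAccessAtomic));
  }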

Reviewed By: vitalybuka, melver

Differential Revision: https://reviews.llvm.org/D107276
2021-08-03 11:03:23 +02:00
Dmitry Vyukov 7bd81fe183 tsan: don't save creation stack for some sync objects
Currently we always save the creation stack for sync objects.
But it's not needed for some sync objects, most notably atomics.
We simply don't use the atomic creation stack anywhere.
Allow callers to control saving of the creation stack,
and don't save it for atomics.
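
A rough sketch of the idea (hypothetical types and names, not the real tsan
code): callers creating sync objects for atomics pass save_stack = false, so
no creation stack is captured or stored for them.

  #include <cstdint>

  uint32_t CurrentStackId() { return 1; }  // stand-in for real stack capture

  struct SyncVar {
    uint32_t creation_stack_id = 0;  // 0 means "no stack saved"
  };

  void InitSyncVar(SyncVar *s, bool save_stack) {
    if (save_stack)
      s->creation_stack_id = CurrentStackId();
  }

  void Example(SyncVar *mutex_sync, SyncVar *atomic_sync) {
    InitSyncVar(mutex_sync, /*save_stack=*/true);    // mutexes keep the stack
    InitSyncVar(atomic_sync, /*save_stack=*/false);  // atomics skip it
  }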

Depends on D107257.

Reviewed By: melver

Differential Revision: https://reviews.llvm.org/D107258
2021-08-02 13:30:24 +02:00
Dmitry Vyukov 9e3e97aa81 tsan: refactor MetaMap::GetAndLock interface
Don't lock the sync object inside MetaMap methods.
This has several advantages:
 - the new interface does not confuse thread-safety analysis,
   so we can remove a bunch of NO_THREAD_SAFETY_ANALYSIS attributes
 - this allows the use of scoped lock objects
 - this allows more flexibility, e.g. locking some other mutex
   between searching for and locking the sync object
Also prefix the methods with GetSync to be consistent with the GetBlock method.
Also make the interface wrappers inlinable; otherwise we either end up with
2 copies of the method or with an additional call.
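
A sketch of the new shape (hypothetical types, not the real MetaMap API): the
lookup returns the sync object unlocked, and the caller takes the lock itself
with a scoped lock object that thread-safety analysis can follow.

  struct Mutex {
    void Lock() {}
    void Unlock() {}
  };

  struct SyncVar {
    Mutex mtx;
  };

  struct Lock {  // scoped lock helper
    explicit Lock(Mutex *m) : m_(m) { m_->Lock(); }
    ~Lock() { m_->Unlock(); }
    Mutex *m_;
  };

  SyncVar *GetSyncIfExists(unsigned long addr) {
    static SyncVar s;
    return addr ? &s : nullptr;  // stub lookup
  }

  void Example(unsigned long addr) {
    SyncVar *s = GetSyncIfExists(addr);
    if (!s)
      return;
    // The caller decides when to lock; another mutex could be taken here,
    // between the lookup and the lock.
    Lock l(&s->mtx);
    // ... use *s under the lock ...
  }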

Reviewed By: melver

Differential Revision: https://reviews.llvm.org/D107256
2021-08-02 13:29:46 +02:00
Dmitry Vyukov a1a37ddc3f tsan: remove /**/ at the end of multi-line macros
Prefer code readability over writability.
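
A sketch of the pattern in question (illustrative macro, not the actual tsan
one): the trailing /**/ line lets every real line of the macro end with a
backslash, which is easier to extend but noisier to read.

  // Before: extending the macro never requires touching the last backslash.
  #define CAPTURE_STACK_V1(trace)  \
    StackTrace trace;              \
    ObtainStack(&trace);           \
  /**/

  // After: the last real line simply ends the macro.
  #define CAPTURE_STACK_V2(trace)  \
    StackTrace trace;              \
    ObtainStack(&trace);

  struct StackTrace {};
  void ObtainStack(StackTrace *) {}
  void Use() { CAPTURE_STACK_V2(t) }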

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D106982
2021-07-29 07:50:09 +02:00
Dmitry Vyukov da7a5c09c8 tsan: don't print __tsan_atomic* functions in report stacks
Currently __tsan_atomic* functions do FuncEntry/Exit using the caller PC
and then use the current PC (pointing to __tsan_atomic* itself) during
memory access handling. As a result, the top function in reports
involving atomics is __tsan_atomic*, and the next frame points to user code.

Remove FuncEntry/Exit in atomic functions and use the caller PC
during memory access handling. This removes __tsan_atomic*
from the top of report stacks, so that they point directly to user code.

The motivation for this is performance.
Some atomic operations are very hot (mostly loads),
so removing FuncEntry/Exit is beneficial.
This also reduces thread trace consumption (1 event instead of 3).

__tsan_atomic* at the top of the stack is not necessary
and does not add any new information. We already say
"atomic write of size 4"; "__tsan_atomic32_store" does not add
anything new.

It also makes reports consistent between atomic and non-atomic
accesses. For normal accesses we say "previous write" and point
to user code; for atomics we say "previous atomic write" and now
also point to user code.
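
A rough sketch of the change (hypothetical stubs, not the real tsan runtime):
the FuncEntry/FuncExit pair around the wrapper goes away, and the access is
attributed to the caller's PC, so only one trace event is produced and report
stacks start in user code.

  #include <cstdint>

  using uptr = uintptr_t;
  #define CALLERPC reinterpret_cast<uptr>(__builtin_return_address(0))

  // Stubs standing in for the runtime.
  void FuncEntry(uptr pc) {}
  void FuncExit() {}
  void AtomicAccess(uptr pc, volatile void *addr, uptr size) {}

  // Before: 3 trace events; __tsan_atomic* sits on top of report stacks.
  int AtomicLoadBefore(volatile int *a) {
    FuncEntry(CALLERPC);                          // event 1: enter the wrapper
    AtomicAccess(/*pc inside wrapper*/ 0, a, 4);  // event 2: the access
    int v = __atomic_load_n(a, __ATOMIC_ACQUIRE);
    FuncExit();                                   // event 3: leave the wrapper
    return v;
  }

  // After: 1 trace event; the access points at the caller's PC directly.
  int AtomicLoadAfter(volatile int *a) {
    AtomicAccess(CALLERPC, a, 4);
    return __atomic_load_n(a, __ATOMIC_ACQUIRE);
  }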

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D106966
2021-07-28 20:34:46 +02:00
Dmitry Vyukov 8924d8e37e tsan: disable thread safety analysis in more functions
In preparation for replacing the tsan Mutex with the sanitizer_common Mutex,
which has thread-safety annotations. Thread-safety analysis does not
understand MetaMap::GetAndLock, which returns a locked sync object.
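
A minimal illustration (hypothetical code using Clang's thread-safety
attributes) of the problem: the analysis cannot see that GetAndLock() returns
with the object's mutex held, so both it and its callers get the attribute.

  #define NO_THREAD_SAFETY_ANALYSIS __attribute__((no_thread_safety_analysis))

  struct __attribute__((capability("mutex"))) Mutex {
    void Lock() __attribute__((acquire_capability())) {}
    void Unlock() __attribute__((release_capability())) {}
  };

  struct SyncVar {
    Mutex mtx;
  };

  // Returns with s->mtx held, which the analysis does not follow.
  SyncVar *GetAndLock(SyncVar *s) NO_THREAD_SAFETY_ANALYSIS {
    s->mtx.Lock();
    return s;
  }

  void UseSync(SyncVar *s) NO_THREAD_SAFETY_ANALYSIS {
    SyncVar *locked = GetAndLock(s);
    // ... touch state protected by locked->mtx ...
    locked->mtx.Unlock();  // would otherwise be flagged as a spurious unlock
  }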

Reviewed By: vitalybuka, melver

Differential Revision: https://reviews.llvm.org/D106548
2021-07-23 09:12:59 +02:00
Dmitry Vyukov adb55d7c32 tsan: remove the stats subsystem
I don't think the stats subsystem has ever been used since tsan
development started in 2012. But it adds lots of code, and this
effectively dead code needs to be updated whenever the runtime
code changes, which adds maintenance cost for no benefit.
A normal profiler usually gives enough info, and that info
is more trustworthy.
Remove the stats subsystem.

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D106276
2021-07-20 07:47:38 +02:00
Bruno Cardoso Lopes fd184c062c [TSAN] Honor failure memory orders in AtomicCAS
LLVM has lifted the strong requirements for CAS failure memory orders in 431e3138a and 819e0d105e.

Add support for honoring them in `AtomicCAS`.

https://github.com/google/sanitizers/issues/970
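
An illustration of the semantics being honored (standard C++ atomics, not the
tsan internals): compare-and-swap takes separate memory orders for the success
and failure paths, and the failure order is what applies when the CAS does not
take effect.

  #include <atomic>

  bool TryBump(std::atomic<int> &x, int expected_value) {
    int expected = expected_value;
    // Success path: acq_rel. Failure path: only an acquire load happens, so a
    // race detector must model the weaker failure order instead of assuming
    // the success order on both paths.
    return x.compare_exchange_strong(expected, expected_value + 1,
                                     std::memory_order_acq_rel,
                                     std::memory_order_acquire);
  }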

Differential Revision: https://reviews.llvm.org/D99434
2021-05-13 01:07:22 -07:00
Vitaly Buka c0fa632236 Remove NOLINTs from compiler-rt
llvm-svn: 371687
2019-09-11 23:19:48 +00:00
Nico Weber 5a3bb1a4d6 compiler-rt: Rename .cc file in lib/tsan/rtl to .cpp
Like r367463, but for tsan/rtl.

llvm-svn: 367564
2019-08-01 14:22:42 +00:00