Commit Graph

9 Commits

Dmitry Vyukov d77b476c19 tsan: avoid extra call indirection in unaligned access functions
Currently the unaligned access functions are defined in tsan_interface.cpp
and make a real call to MemoryAccess. This means we pay for a real call
and get no read/write constant propagation.

Unaligned memory accesses can be quite hot for some programs
(observed on some compression algorithms, where ~90% of accesses are unaligned).

Move them to tsan_interface_inl.h to avoid the additional call
and enable constant propagation.
Also reorder the actual store and the memory access handling in the
__sanitizer_unaligned_store callbacks so that the call to MemoryAccess
can become a tail call (see the sketch after this entry).

Depends on D107282.

Reviewed By: vitalybuka, melver

Differential Revision: https://reviews.llvm.org/D107283
2021-08-03 11:12:49 +02:00
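
A minimal sketch of the idea, assuming illustrative stand-ins (uptr, AccessType, and the MemoryAccess stub are not the exact runtime internals): the wrapper lives in a header so it inlines into callers, the store happens first, and the access-handling call ends up in tail position.

    #include <cstdint>
    #include <cstring>

    using uptr = uintptr_t;
    enum AccessType : uptr { kAccessWrite = 0, kAccessRead = 1 << 0 };

    // Stub for the runtime's real access-handling routine.
    void MemoryAccess(uptr pc, uptr addr, uptr size, AccessType typ);

    // Defined in a header so it inlines into callers: no extra call, and
    // sizeof(v)/kAccessWrite reach MemoryAccess as compile-time constants.
    inline void __sanitizer_unaligned_store32(void *addr, uint32_t v) {
      // Perform the actual store first ...
      std::memcpy(addr, &v, sizeof(v));
      // ... so the access-handling call is last and can become a tail call.
      MemoryAccess(/*pc=*/0, (uptr)addr, sizeof(v), kAccessWrite);
    }
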
Dmitry Vyukov 831910c5c4 tsan: new MemoryAccess interface
Currently we have a MemoryAccess function that accepts
"bool kAccessIsWrite, bool kIsAtomic" and 4 wrappers:
MemoryRead/MemoryWrite/MemoryReadAtomic/MemoryWriteAtomic.

Such a scheme with bool flags is not particularly scalable/extendable.
Because of that we did not have Read/Write wrappers for UnalignedMemoryAccess,
and "true, false" or "false, true" at call sites is not very readable.

Moreover, the new tsan runtime will introduce more flags
(e.g. move "freed" and "vptr access" to memory access flags).
We can't have 16 wrappers, and each flag also takes a whole
64-bit register for non-inlined calls.

Introduce an AccessType enum that contains a bit mask of
read/write, atomic/non-atomic, and later free/non-free and
vptr/non-vptr.
Such a scheme is more scalable, more readable, and more efficient
(the flags no longer consume multiple registers during calls),
and it covers the unaligned and range variations of the memory
access functions as well (see the sketch after this entry).

Also switch from size log to just size.
The new tsan runtime won't have the limitation of supporting
only 1/2/4/8 access sizes, so we don't need the logarithms.

Also add an inline thunk that converts the new interface to the old one.
For inlined calls it should not add any overhead, because
all flags and the size can be computed at compile time.

Reviewed By: vitalybuka, melver

Differential Revision: https://reviews.llvm.org/D107276
2021-08-03 11:03:23 +02:00
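
A minimal sketch of the shape of this interface; the bit values and the old-interface signature are illustrative assumptions, not the code from the revision.

    #include <cstdint>

    using uptr = uintptr_t;

    // One bit-mask argument replaces a pair of bools and leaves room for
    // future flags (free, vptr) without multiplying the wrappers.
    enum AccessType : uptr {
      kAccessWrite  = 0,       // writes are the common case: zero bits
      kAccessRead   = 1 << 0,
      kAccessAtomic = 1 << 1,
      // later: kAccessFree = 1 << 2, kAccessVptr = 1 << 3, ...
    };

    // Old-style entry point: size log (0..3 for 1/2/4/8 bytes) plus bools.
    void MemoryAccessOld(uptr pc, uptr addr, int kAccessSizeLog,
                         bool kAccessIsWrite, bool kIsAtomic);

    // Inline thunk converting the new interface to the old one. When a call
    // is inlined, `size` and `typ` are compile-time constants, so the whole
    // conversion folds away.
    inline void MemoryAccess(uptr pc, uptr addr, uptr size, AccessType typ) {
      int size_log = size == 1 ? 0 : size == 2 ? 1 : size == 4 ? 2 : 3;
      MemoryAccessOld(pc, addr, size_log, (typ & kAccessRead) == 0,
                      (typ & kAccessAtomic) != 0);
    }
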
Daniel Kiss 4220531cef [AArch64][compiler-rt] Strip PAC from the link register.
-mbranch-protection protects the LR on the stack with PAC.
When the frames are walked, the PAC bits need to be stripped from the LR.
This inline assembly will later be replaced with a new builtin
(see the sketch after this entry).

Test: build with -DCMAKE_C_FLAGS="-mbranch-protection=standard".

Reviewed By: kubamracek

Differential Revision: https://reviews.llvm.org/D98008
2021-03-18 22:01:50 +01:00
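
A minimal sketch of what the stripping can look like; StripPAC is a hypothetical helper, not the function added by the revision. xpaclri (encoded as "hint #7") removes the PAC from x30 and lives in the hint/NOP space, so it is harmless on cores without pointer authentication.

    #include <cstdint>

    // Hypothetical helper: remove the pointer-authentication code from a
    // return address read from the stack.
    inline uint64_t StripPAC(uint64_t lr) {
    #if defined(__aarch64__)
      uint64_t stripped;
      // xpaclri operates specifically on x30, so move the value through it.
      __asm__(
          "mov x30, %1\n\t"
          "hint #7\n\t"  // xpaclri: strip PAC from x30
          "mov %0, x30\n\t"
          : "=r"(stripped)
          : "r"(lr)
          : "x30");
      return stripped;
    #else
      return lr;
    #endif
    }
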
Daniel Kiss c1940aac99 Revert "[AArch64][compiler-rt] Strip PAC from the link register."
This reverts commit ad40453fc4.
2021-03-18 22:01:50 +01:00
Daniel Kiss ad40453fc4 [AArch64][compiler-rt] Strip PAC from the link register.
-mbranch-protection protects the LR on the stack with PAC.
When the frames are walked, the PAC bits need to be stripped from the LR.
This inline assembly will later be replaced with a new builtin.

Test: build with -DCMAKE_C_FLAGS="-mbranch-protection=standard".

Reviewed By: kubamracek

Differential Revision: https://reviews.llvm.org/D98008
2021-03-15 10:25:59 +01:00
Marco Vanotti 6760f7ee6f [compiler-rt][tsan] Remove unnecessary typedefs
These typedefs are not used anywhere else in this compilation unit.

Differential Revision: https://reviews.llvm.org/D86826
2020-08-28 18:43:54 -07:00
Kuba Mracek e713b0ecbc [tsan] On arm64e, strip out ptrauth bits from incoming PCs
Differential Revision: https://reviews.llvm.org/D86378
2020-08-25 11:59:36 -07:00
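
A minimal sketch of the idea using Clang's <ptrauth.h> API; the helper name and the choice of key are illustrative assumptions, not the code from the revision.

    #ifndef __has_feature
    #define __has_feature(x) 0  // for compilers without __has_feature
    #endif
    #if __has_feature(ptrauth_calls)
    #include <ptrauth.h>
    #endif

    // Hypothetical helper: on arm64e, strip the ptrauth signature bits from
    // an incoming PC so it can be treated as a plain address; elsewhere it
    // is a no-op.
    inline void *StripPtrauthBits(void *pc) {
    #if __has_feature(ptrauth_calls)
      return ptrauth_strip(pc, ptrauth_key_return_address);
    #else
      return pc;
    #endif
    }
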
Vitaly Buka c0fa632236 Remove NOLINTs from compiler-rt
llvm-svn: 371687
2019-09-11 23:19:48 +00:00
Nico Weber 5a3bb1a4d6 compiler-rt: Rename .cc files in lib/tsan/rtl to .cpp
Like r367463, but for tsan/rtl.

llvm-svn: 367564
2019-08-01 14:22:42 +00:00