We currently hardcode the maximum VM address on iOS/AArch64, which is not really correct, as the value changes between device configurations. Let's use TASK_VM_INFO to retrieve the maximum VM address dynamically.
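For reference, a minimal sketch of the dynamic query (field names follow the Mach `task_vm_info` structure on recent SDKs; the fallback behavior here is illustrative):

```cpp
#include <mach/mach.h>
#include <stdint.h>

// Sketch: ask the kernel for this task's maximum VM address. If the
// TASK_VM_INFO query fails, return 0 so the caller can fall back to a
// conservative hardcoded default.
static uint64_t GetTaskVmInfoMaxAddress() {
  task_vm_info_data_t vm_info = {};
  mach_msg_type_number_t count = TASK_VM_INFO_COUNT;
  kern_return_t err = task_info(mach_task_self(), TASK_VM_INFO,
                                reinterpret_cast<task_info_t>(&vm_info),
                                &count);
  if (err != KERN_SUCCESS) return 0;
  return vm_info.max_address - 1;  // highest usable address, inclusive
}
```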
Differential Revision: https://reviews.llvm.org/D35032
llvm-svn: 307307
The logic in GetMaxVirtualAddress is already pretty complex, and I want to get rid of the hardcoded value for iOS/AArch64, which would require adding more Darwin-specific code, so let's split the implementation into sanitizer_linux.cc and sanitizer_mac.cc. NFC.
Differential Revision: https://reviews.llvm.org/D35031
llvm-svn: 307281
On Darwin, sigprocmask changes the signal mask for the entire process. This has some unwanted consequences, because e.g. internal_start_thread wants to disable signals only in the current thread (to make the new thread inherit the signal mask), which is currently broken on Darwin. This patch switches to pthread_sigmask.
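A sketch of the pattern this enables (the helper names are illustrative, not the actual internal API):

```cpp
#include <pthread.h>
#include <signal.h>

// Block all signals in the calling thread only; a thread created while this
// mask is in effect inherits it. sigprocmask on Darwin would change the mask
// process-wide instead.
static void BlockSignalsInThisThread(sigset_t *saved_mask) {
  sigset_t all_signals;
  sigfillset(&all_signals);
  pthread_sigmask(SIG_SETMASK, &all_signals, saved_mask);
}

static void RestoreSignalMask(const sigset_t *saved_mask) {
  pthread_sigmask(SIG_SETMASK, saved_mask, nullptr);
}
```

internal_start_thread can then block everything, call pthread_create (the new thread inherits the temporary mask), and restore the saved mask, without perturbing other threads.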
Differential Revision: https://reviews.llvm.org/D35016
llvm-svn: 307212
Summary:
In `sanitizer_allocator_primary32.h`:
- rounding up in `MapWithCallback` is not needed, as `MmapOrDie` already does
it. Note that the 64-bit counterpart doesn't round up, so removing it here
keeps the behavior consistent;
- since `IsAligned` exists, use it in `AllocateRegion`;
- in `PopulateFreeList`:
- checking that `b->Count()` is greater than 0 when `b->Count() == max_count`
is redundant when done on every iteration. Just check once, outside the loop,
that `max_count` is greater than 0; the compiler (at least on ARM) didn't
optimize this on its own;
- mark the batch creation failure as `UNLIKELY`;
In `sanitizer_allocator_primary64.h`:
- in `MapWithCallback`, mark the failure condition as `UNLIKELY`;
In `sanitizer_posix.h`:
- mark a bunch of Mmap related failure conditions as `UNLIKELY`;
- in `MmapAlignedOrDieOnFatalError`, we have `IsAligned`, so use it; rearrange
the conditions, as one test was redundant;
- in `MmapFixedImpl`, 30 chars was not large enough to hold the message and a
full 64-bit address (or at least a 48-bit usermode address); increase it to 40.
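For illustration, the rough shape of the `UNLIKELY`/`IsAligned` changes (a sketch, not the exact diff; all names are existing sanitizer_common primitives):

```cpp
// Inside a mapping helper: annotate the cold failure path, and use IsAligned
// instead of a hand-written mask check.
uptr map_res = reinterpret_cast<uptr>(
    MmapOrDieOnFatalError(map_size, "SizeClassAllocator"));
if (UNLIKELY(!map_res))
  return 0;  // graceful failure instead of dying
CHECK(IsAligned(map_res, alignment));
```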
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: aemerson, kubamracek, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D34840
llvm-svn: 306834
Do this by removing SANITIZER_INTERCEPT_WCSLEN and intercepting wcslen
everywhere. Before this change, we were already intercepting wcslen on
Windows, but the interceptor was in asan, not sanitizer_common. When the
interceptor moved to sanitizer_common, we stopped intercepting wcslen on
Windows, which broke asan_dll_thunk.c, which attempts to thunk to
__asan_wcslen in the ASan runtime.
llvm-svn: 306706
Summary:
The behavior of the operator new interceptors is now controlled by their
nothrow property as well as by the allocator_may_return_null flag value:
- allocator_may_return_null=* + new() - die on allocation error
- allocator_may_return_null=0 + new(nothrow) - die on allocation error
- allocator_may_return_null=1 + new(nothrow) - return null
Ideally, new() should throw a std::bad_alloc exception, but that is not
trivial to achieve, hence the TODO.
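A sketch of the resulting interceptor body (modeled on asan's macros; `DieOnFailure::OnOOM()` is the common failure hook, details elided):

```cpp
// nothrow variants may return null (subject to allocator_may_return_null);
// throwing variants die on failure, since we cannot raise std::bad_alloc
// from here yet.
#define OPERATOR_NEW_BODY(type, nothrow)                 \
  GET_STACK_TRACE_MALLOC;                                \
  void *res = asan_memalign(0, size, &stack, type);      \
  if (!nothrow && UNLIKELY(!res)) DieOnFailure::OnOOM(); \
  return res;
```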
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D34731
llvm-svn: 306604
Summary:
Make SizeClassAllocator32 return nullptr when it encounters OOM, which
allows the sanitizer's entire allocator to follow the
allocator_may_return_null=1 policy, even for small allocations
(LargeMmapAllocator is already fixed
by D34243).
Will add a test for OOM in primary allocator later, when
SizeClassAllocator64 can gracefully handle OOM too.
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D34433
llvm-svn: 305972
Summary:
AFAICT compiler-rt doesn't have a function that would return 'good' random
bytes to seed a PRNG. Currently, the `SizeClassAllocator64` uses addresses
returned by `mmap` to seed its PRNG, which is not ideal, and
`SizeClassAllocator32` doesn't benefit from the entropy offered by its 64-bit
counterpart's address space, so right now it has nothing. This function aims to
solve that, allowing us to implement good 32-bit chunk randomization. Scudo also
has a function that does this for Cookie purposes, which would go away in a
later CL once this lands.
This function will try the `getrandom` syscall if available, and fall back to
`/dev/urandom` if not.
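A sketch of the Linux side (standalone; SYS_getrandom is only defined by sufficiently recent kernel headers):

```cpp
#include <fcntl.h>
#include <sys/syscall.h>
#include <unistd.h>

// Try getrandom(2) first, then fall back to reading /dev/urandom.
static bool GetRandom(void *buffer, size_t length) {
#if defined(__linux__) && defined(SYS_getrandom)
  long res = syscall(SYS_getrandom, buffer, length, 0);
  if (res >= 0 && static_cast<size_t>(res) == length) return true;
#endif
  int fd = open("/dev/urandom", O_RDONLY);
  if (fd < 0) return false;
  ssize_t read_bytes = read(fd, buffer, length);
  close(fd);
  return read_bytes >= 0 && static_cast<size_t>(read_bytes) == length;
}
```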
Unfortunately, I do not have a way to implement and test a Mac and Windows
version, so those are unimplemented as of now. Note that `kRandomShuffleChunks`
is only used on Linux for now.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: zturner, rnk, llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D34412
llvm-svn: 305922
Change some reinterpret_casts to C-style casts due to template instantiation
restrictions and build breakage due to missing parentheses.
llvm-svn: 305899
Summary:
On Android we still need to reset preinstalled handlers and allow user handlers later.
This reverts commit r304039.
Reviewers: eugenis
Subscribers: kubamracek, dberris, llvm-commits
Differential Revision: https://reviews.llvm.org/D34434
llvm-svn: 305871
Summary:
Move the cached allocator_may_return_null flag to sanitizer_allocator.cc and
provide an API to consolidate and unify the behavior of all specific
allocators. Make all sanitizers that use CombinedAllocator follow the
AllocatorReturnNullOrDieOnOOM() rules, so they behave the same way when OOM
happens.
When OOM happens, turn the allocator_out_of_memory flag on regardless of
the allocator_may_return_null flag value (previously it was not set when
allocator_may_return_null == true).
release_to_os_interval_ms and rss_limit_exceeded will likely be moved to
sanitizer_allocator.cc too (later).
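Roughly, the consolidated logic looks like this (a sketch of the intent, using sanitizer_common's atomics):

```cpp
// sanitizer_allocator.cc (sketch): one cached flag, one OOM entry point.
static atomic_uint8_t allocator_may_return_null = {0};
static atomic_uint8_t allocator_out_of_memory = {0};

void *AllocatorReturnNullOrDieOnOOM() {
  // The OOM flag is turned on unconditionally, even when null is returned.
  atomic_store(&allocator_out_of_memory, 1, memory_order_relaxed);
  if (atomic_load(&allocator_may_return_null, memory_order_relaxed))
    return nullptr;
  ReportAllocatorCannotReturnNull();  // does not return
}
```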
Reviewers: eugenis
Subscribers: srhines, kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D34310
llvm-svn: 305858
Summary:
The `cleared` parameter of CombinedAllocator::Allocate is not used anywhere
and seems to be obsolete.
Reviewers: eugenis
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D34289
llvm-svn: 305590
Summary:
Context: https://github.com/google/sanitizers/issues/740.
Make the secondary allocator respect the allocator_may_return_null=1 flag
and return nullptr when "out of memory" happens.
More changes in primary allocator and operator new will follow.
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D34243
llvm-svn: 305569
Summary:
After r303941 it was not possible to set up ASAN_OPTIONS to have the same
behavior for pre r303941 and post r303941 builds.
Pre r303941 Asan does not accept handle_sigbus=2.
Post r303941 Asan does not accept allow_user_segv_handler.
This fix ignores allow_user_segv_handler=1, but for allow_user_segv_handler=0
it will upgrade flags like handle_sigbus=1 to handle_sigbus=2, so users can set
ASAN_OPTIONS=allow_user_segv_handler=0 and get the same behavior on old and new
clang builds (except in the range from r303941 to this revision).
In the future, users who need to prevent third-party handlers should switch to
handle_sigbus=2 and drop allow_user_segv_handler as soon as support for older
builds is no longer needed.
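In effect, the shim does something like this (a hypothetical helper; the actual plumbing lives in the flag parser, and the enum values come from r303941):

```cpp
// If the user asked for allow_user_segv_handler=0, upgrade "handle" to
// "handle exclusively" so third-party handlers stay blocked.
static void UpgradeSignalFlag(HandleSignalMode *mode,
                              bool allow_user_handler) {
  if (!allow_user_handler && *mode == kHandleSignalYes)
    *mode = kHandleSignalExclusive;
}
```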
Related bugs:
https://github.com/google/oss-fuzz/issues/675
https://bugs.chromium.org/p/chromium/issues/detail?id=731130
Reviewers: eugenis
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D34227
llvm-svn: 305433
Summary:
This broke thread_local_quarantine_pthread_join.cc on some architectures, due
to the overhead of the stashed regions. Reverting while figuring out the best
way to deal with it.
Reviewers: alekseyshl
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D34213
llvm-svn: 305404
Summary:
The reasoning behind this change is explained in D33454, which unfortunately
broke the Windows version (due to the platform not supporting partial unmapping
of a memory region).
This new approach changes `MmapAlignedOrDie` to allow for the specification of
a `padding_chunk`. If non-null, and the initial allocation is aligned, this
padding chunk will hold the address of the extra memory (of `alignment` bytes).
This allows `AllocateRegion` to get 2 regions if the memory is aligned
properly, which helps reduce fragmentation (and saves on unmapping
operations). As with the initial D33454, we use a stash in the 32-bit Primary
to hold those extra regions and return them on the fast-path.
The Windows version of `MmapAlignedOrDie` will always return a 0
`padding_chunk` if one was requested.
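Sketch of the extended interface and its use in the 32-bit primary (`PushToRegionStash` is a hypothetical stand-in for the stash bookkeeping):

```cpp
// New out-parameter: if non-null and the 2*size mapping happens to be
// aligned, receives the address of the extra `alignment` bytes instead of
// having them unmapped.
uptr MmapAlignedOrDie(uptr size, uptr alignment, const char *mem_type,
                      uptr *padding_chunk);

// Caller side in AllocateRegion (sketch):
uptr padding_chunk = 0;
uptr region = MmapAlignedOrDie(kRegionSize, kRegionSize,
                               "SizeClassAllocator", &padding_chunk);
if (padding_chunk)                   // two regions for the price of one mmap
  PushToRegionStash(padding_chunk);  // hypothetical stash helper
```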
Reviewers: alekseyshl, dvyukov, kcc
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D34152
llvm-svn: 305391
Summary:
Move the OOM decision based on RSS limits out of the generic allocator into
the ASan allocator, where it makes more sense at the moment.
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D34180
llvm-svn: 305342
The GNU version of strerror_r returns a result pointer that doesn't
necessarily match the input buffer; the result can instead point to some
internal storage. TSan was recording a write to this location, which was
incorrect.
Fixes https://github.com/google/sanitizers/issues/696.
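The fix amounts to something like this (a sketch using the common interceptor macros):

```cpp
INTERCEPTOR(char *, strerror_r, int errnum, char *buf, SIZE_T buflen) {
  void *ctx;
  COMMON_INTERCEPTOR_ENTER(ctx, strerror_r, errnum, buf, buflen);
  char *res = REAL(strerror_r)(errnum, buf, buflen);
  // The GNU variant may return a pointer to internal storage; only record a
  // write when the result actually lives inside the caller's buffer.
  if (res >= buf && res < buf + buflen)
    COMMON_INTERCEPTOR_WRITE_RANGE(ctx, res, REAL(strlen)(res) + 1);
  return res;
}
```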
llvm-svn: 304858
Revert "Mark sancov test as unsupported on Darwin"
Revert "[LSan] Detect dynamic loader by its base address."
This reverts commit r304633.
This reverts commit r304673.
This reverts commit r304632.
Those commits have broken LOTS of ARM/AArch64 bots for two days.
llvm-svn: 304699
Summary:
Very recently, FreeBSD 12 has been updated to use 64-bit inode numbers:
<https://svnweb.freebsd.org/changeset/base/318737>. This entails many
user-visible changes, but for the sanitizers the modifications are
limited in scope:
* The `stat` and `lstat` syscalls were removed, and should be replaced
with calls to `fstatat`.
* The `getdents` syscall was removed, and should be replaced with calls
to `getdirentries`.
* The layout of `struct dirent` was changed to accommodate 64-bit inode
numbers, and a new `d_off` field was added.
* The system header <sys/_types.h> now contains a macro `__INO64` to
determine whether the system uses 64-bit inode numbers.
I tested these changes on both FreeBSD 12.0-CURRENT (after r318959,
which adds the `__INO64` macro), and FreeBSD 11.0-STABLE (which still
uses 32-bit inode numbers).
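A sketch of how the detection can be keyed off the new macro (`SANITIZER_FREEBSD_INO64` is an illustrative name, not a real define):

```cpp
#include <sys/_types.h>  // FreeBSD: defines __INO64 on ino64 systems

#if defined(__INO64)
// 64-bit inodes: the stat/getdents syscalls are gone, so use fstatat and
// getdirentries, and expect the new struct dirent layout (with d_off).
# define SANITIZER_FREEBSD_INO64 1
#else
# define SANITIZER_FREEBSD_INO64 0
#endif
```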
Reviewers: emaste, kcc, vitalybuka, kubamracek
Reviewed By: vitalybuka
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33600
llvm-svn: 304658
Summary:
Whenever possible (Linux + glibc 2.16+), detect the dynamic loader module by
its base address, not by module name matching. The current name-matching
approach fails on some configurations.
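A sketch of the base-address route, assuming the glibc 2.16+ `getauxval` interface is what gates the feature:

```cpp
#include <stdint.h>
#include <sys/auxv.h>  // getauxval, available since glibc 2.16

// The auxiliary vector's AT_BASE entry is the load address of the program
// interpreter (the dynamic loader), so the module can be recognized by
// address instead of by name.
static bool IsDynamicLoader(uintptr_t module_base_address) {
  uintptr_t ldso_base = getauxval(AT_BASE);
  return ldso_base != 0 && ldso_base == module_base_address;
}
```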
Reviewers: eugenis
Subscribers: kubamracek, llvm-commits
Differential Revision: https://reviews.llvm.org/D33859
llvm-svn: 304633
Summary:
allow_user_segv_handler had a confusing name and did not allow controlling
the behavior for each signal separately.
Reviewers: eugenis, alekseyshl, kcc
Subscribers: llvm-commits, dberris, kubamracek
Differential Revision: https://reviews.llvm.org/D33371
llvm-svn: 303941
Summary:
This is required for any users who call exit() after creating
thread-specific data, as tls destructors are only called when
pthread_exit() or pthread_cancel() is used. This should also
match tls behavior on Linux.
Getting the base address of the tls section is straightforward,
as it's stored as a section offset in %gs. The size is a bit trickier
to work out, as there doesn't appear to be any official documentation
or source code referring to it. The size used in this patch was determined
by taking the difference between the base address and the address of the
subsequent memory region returned by vm_region_recurse_64, which was
1024 * sizeof(uptr) on all threads except the main thread, where it was
larger. Since the section must be the same size on all of the threads,
1024 * sizeof(uptr) seemed to be a reasonable size to use, barring
a more programmatic way to get the size.
1024 seems like a reasonable number, given that PTHREAD_KEYS_MAX
is 512 on Darwin, so pthread keys will fit inside the region while
leaving space for other tls data. A larger size would overflow the
memory region returned by vm_region_recurse_64, and a smaller size
wouldn't leave room for all the pthread keys. In addition, the
stress test added here passes, which means that we are scanning at
least the full set of possible pthread keys, and probably
the full tls section.
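A sketch of the resulting range computation (the size constant encodes the empirical assumption discussed above):

```cpp
#include <stddef.h>
#include <stdint.h>

// On Darwin the TLS base is stored at %gs:0. The section size is not
// documented anywhere, so 1024 * sizeof(uintptr_t) is the empirical guess
// derived from vm_region_recurse_64 probing described above.
static uintptr_t TlsBaseAddr() {
  uintptr_t segbase = 0;
#if defined(__x86_64__)
  asm("movq %%gs:0,%0" : "=r"(segbase));
#elif defined(__i386__)
  asm("movl %%gs:0,%0" : "=r"(segbase));
#endif
  return segbase;
}

static const size_t kTlsSize = 1024 * sizeof(uintptr_t);
// The scanner then treats [TlsBaseAddr(), TlsBaseAddr() + kTlsSize) as the
// tls section.
```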
Reviewers: alekseyshl, kubamracek
Subscribers: krytarowski, llvm-commits
Differential Revision: https://reviews.llvm.org/D33215
llvm-svn: 303887
Summary:
Apparently, Windows's `UnmapOrDie` doesn't support partial unmapping, which
makes the new region allocation technique not work on Windows.
Reviewers: alekseyshl, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33554
llvm-svn: 303883
Summary:
Currently, AllocateRegion has a tendency to fragment memory: it allocates
`2*kRegionSize`, and if the memory is aligned, will unmap `kRegionSize` bytes,
thus creating a hole, which can't itself be reused for another region. This
is exacerbated by the fact that if 2 regions get allocated one after another
without any `mmap` in between, the second will be aligned due to mappings
generally being contiguous.
An idea, suggested by @alekseyshl, to prevent such behavior is to have a
stash of regions: if the `2*kRegionSize` allocation is properly aligned, split
it in two, and stash the second part to be returned next time a region is
requested.
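Concretely, the idea is the following (a sketch; `StashRegion` is a hypothetical helper, and tail trimming of the misaligned case is elided):

```cpp
// Map twice the region size; if the result happens to be aligned, keep both
// halves: return the first and stash the second for the next request.
uptr map = reinterpret_cast<uptr>(
    MmapOrDie(2 * kRegionSize, "SizeClassAllocator"));
uptr region = RoundUpTo(map, kRegionSize);
if (region == map) {
  StashRegion(map + kRegionSize);  // hypothetical stash helper
} else {
  UnmapOrDie(reinterpret_cast<void *>(map), region - map);  // trim the head
  // ... also trim the tail beyond region + kRegionSize.
}
return region;
```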
At this point, I thought about a couple of ways to implement this:
- either an `IntrusiveList` of region candidates, storing `next` at the
beginning of the region;
- a small array of region candidates existing in the Primary.
While the second option is more constrained in terms of size, it offers several
advantages:
- security-wise, with the intrusive list, a pointer in a region candidate
could be overflowed into and abused when popping an element;
- we do not dirty the first page of the region by storing something in it;
- unless several threads request regions simultaneously from different size
classes, the stash rarely goes above 1 entry.
I am not certain about the Windows impact of this change, as `sanitizer_win.cc`
has its own version of MmapAlignedOrDie; maybe someone could chime in on this.
MmapAlignedOrDie is effectively unused after this change and could be removed
at a later point. I didn't notice any sizeable performance gain, even though we
are saving a few `mmap`/`munmap` syscalls.
Reviewers: alekseyshl, kcc, dvyukov
Reviewed By: alekseyshl
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33454
llvm-svn: 303879
Summary:
Dmitry, seeking your expertise. I believe the proper way to implement
Lock/Unlock here would be to use acquire/release semantics. Am I missing
something?
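For concreteness, the suggested semantics in a minimal spin lock (a standalone sketch, not the actual sanitizer mutex):

```cpp
#include <atomic>

class SpinMutex {
 public:
  void Lock() {
    // Acquire: reads/writes of the protected data cannot hoist above this.
    while (state_.exchange(1, std::memory_order_acquire) != 0) {
      // spin (a real implementation would back off / yield here)
    }
  }
  void Unlock() {
    // Release: protected writes become visible before the lock reads as free.
    state_.store(0, std::memory_order_release);
  }

 private:
  std::atomic<int> state_{0};
};
```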
Reviewers: dvyukov
Subscribers: llvm-commits, kubamracek
Differential Revision: https://reviews.llvm.org/D33521
llvm-svn: 303869