This switches everything to use the memory attribute proposed in
https://discourse.llvm.org/t/rfc-unify-memory-effect-attributes/65579.
The old argmemonly, inaccessiblememonly and inaccessiblemem_or_argmemonly
attributes are dropped. The readnone, readonly and writeonly attributes
are restricted to parameters only.
The old attributes are auto-upgraded both in bitcode and IR.
The bitcode upgrade is a policy requirement that has to be retained
indefinitely. The IR upgrade is mainly there so it's not necessary
to update all tests using memory attributes in this patch, which
is already large enough. We could drop that part after migrating
tests, or retain it longer term, to make it easier to import IR
from older LLVM versions.
High-level Function/CallBase APIs like doesNotAccessMemory() or
setDoesNotAccessMemory() are mapped transparently to the memory
attribute. Code that directly manipulates attributes (e.g. via
AttributeList) on the other hand needs to switch to working with
the memory attribute instead.
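As an illustration of that mapping, a minimal C++ sketch assuming the post-change MemoryEffects/Function helpers (header locations and helper names are assumptions, not quoted from the patch):
```
#include "llvm/IR/Function.h"
#include "llvm/Support/ModRef.h"
using namespace llvm;

void example(Function &F, Function &G) {
  // The old `readnone` function attribute becomes memory(none); the
  // high-level helper still works and is mapped to the memory attribute.
  F.setDoesNotAccessMemory();
  // The old `argmemonly readonly` combination becomes memory(argmem: read),
  // expressed through a single MemoryEffects value.
  G.setMemoryEffects(MemoryEffects::argMemOnly(ModRefInfo::Ref));
}
```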
Differential Revision: https://reviews.llvm.org/D135780
This addresses a bug where vector versions of ctlz create false positive reports.
Depends on D136369
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D136523
The instrumentation just ORs the shadows of the inputs.
I assume some result shadow bits could be cleared if we went into the
specifics of particular checks, but as-is it is still an improvement over
the existing default strict instruction handler, where every set bit of the
input shadow is reported as an error.
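For context, a minimal sketch of the ORing approach (generic IRBuilder code, not the actual MSan handler):
```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

Value *orShadows(IRBuilder<> &IRB, ArrayRef<Value *> InputShadows,
                 Type *ShadowTy) {
  Value *Acc = Constant::getNullValue(ShadowTy); // fully initialized so far
  for (Value *S : InputShadows)
    Acc = IRB.CreateOr(Acc, IRB.CreateBitCast(S, ShadowTy));
  return Acc; // shadow to assign to the instruction's result
}
```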
Reviewed By: kda
Differential Revision: https://reviews.llvm.org/D134123
MSan has a default handler for unknown instructions, which previously
applied to these intrinsics as well. However, depending on the mask, not
all pointers or parts of the passthru will actually be used, which allows
other passes to insert undef into some of the arguments.
As a result, the default strict instruction handler can produce false reports.
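As a rough, hypothetical illustration of the idea using llvm.masked.load (the same reasoning applies to the other masked intrinsics; the shadow pointer and passthru-shadow helpers are assumptions, not MSan's real API):
```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/Support/Alignment.h"
using namespace llvm;

Value *maskedLoadShadow(IRBuilder<> &IRB, IntrinsicInst &I, Value *ShadowPtr,
                        Type *ShadowTy, Value *PassThruShadow) {
  // llvm.masked.load(ptr, align, mask, passthru): lanes disabled by the mask
  // take their shadow from the passthru operand, not from memory, so load
  // the shadow with the same mask instead of treating the operands strictly.
  Value *Mask = I.getArgOperand(2);
  return IRB.CreateMaskedLoad(ShadowTy, ShadowPtr, Align(1), Mask,
                              PassThruShadow);
}
```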
Reviewed By: kda, kstoimenov
Differential Revision: https://reviews.llvm.org/D133678
According to the logs, ClInstrumentationWithCallThreshold is a workaround
for a slow backend when there is a large number of basic blocks.
I can't reproduce that slowdown, but I do see a significant slowdown
after ClCheckConstantShadow. Without ClInstrumentationWithCallThreshold
the compiler is able to eliminate many of the branches.
So maybe we should drop ClInstrumentationWithCallThreshold completely.
For now I just change the logic to ignore constant shadow, so it does
not trigger the callback fallback too early.
Reviewed By: kstoimenov
Differential Revision: https://reviews.llvm.org/D133880
If multiple warnings are created on the same instruction (debug location),
it can be difficult to figure out which input value is the cause.
This patch chains origins just before the warning, using the debug
information of the last origin update.
To avoid inflating the binary unnecessarily, this is done only when the
uncertainty is high enough: 3 warnings by default. On average it adds 0.4%
to the .text size.
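A minimal sketch of what chaining at the warning site could look like, using MSan's existing __msan_chain_origin runtime function (the surrounding code and names are assumptions, not taken from the patch):
```
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

Value *chainOriginHere(IRBuilder<> &IRB, Value *Origin,
                       FunctionCallee ChainOriginFn) {
  // __msan_chain_origin returns a new origin id whose history links the
  // previous origin with the current stack, so the report can show where
  // the value was last updated before the warning.
  return IRB.CreateCall(ChainOriginFn, {Origin});
}
```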
Reviewed By: kda, fmayer
Differential Revision: https://reviews.llvm.org/D133232
GlobalsAA is considered stateless, as transformations usually do not introduce
new global accesses, and a removed global access is not a problem for GlobalsAA
users.
Sanitizers, however, do introduce new global accesses:
- MSan and DFSan track origins and parameters with TLS, and use it to store stack origins.
- SanCov uses global counters. HWASan stores tag state in TLS.
- ASan modifies globals, but I am not sure if invalidation is required.
I see no evidence that TSan needs invalidation.
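A minimal sketch of the invalidation idea under the new pass manager (illustrative only, not necessarily the exact change in this patch):
```
#include "llvm/Analysis/GlobalsModRef.h"
#include "llvm/IR/PassManager.h"
using namespace llvm;

static PreservedAnalyses sanitizerPreservedAnalyses() {
  PreservedAnalyses PA;
  PA.preserveSet<CFGAnalyses>();
  // The instrumentation introduced accesses to new globals / TLS variables
  // that GlobalsAA has not seen, so explicitly drop its results.
  PA.abandon<GlobalsAA>();
  return PA;
}
```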
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D133394
When we want to add instrumentation after an instruction, the
instrumentation should still keep the debug info of that instruction.
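A minimal sketch of the pattern, assuming an IRBuilder-based helper rather than the exact patch:
```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

void instrumentAfter(Instruction &I) {
  // Insert right after I (assumes I is not a terminator).
  IRBuilder<> IRB(I.getNextNode());
  // Keep I's debug location on the instrumentation we are about to emit.
  IRB.SetCurrentDebugLocation(I.getDebugLoc());
  // ... emit instrumentation with IRB ...
}
```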
Reviewed By: kda, kstoimenov
Differential Revision: https://reviews.llvm.org/D133091
Reduces .text size by 1% on our large binary.
On CTMark (-O2 -fsanitize=memory -fsanitize-memory-use-after-dtor -fsanitize-memory-param-retval)
Size -0.4%
Time -0.8%
Reviewed By: kda
Differential Revision: https://reviews.llvm.org/D133071
This is preparation for combining shadow checks of the same instruction.
Reviewed By: kda, kstoimenov
Differential Revision: https://reviews.llvm.org/D133065
When instrumenting `alloca`s, we use a `SmallSet` (i.e. `SmallPtrSet`). While it holds fewer elements than its inline capacity, it behaves like a vector and offers a stable iteration order. Once we have too many `alloca`s to instrument, the iteration order becomes unstable. This manifests as non-deterministic builds because of the global constant we create while instrumenting the alloca.
The test added is a simple IR file, but was discovered while building `libcxx/src/filesystem/operations.cpp` from libc++. A reduced C++ example from that:
```
// clang++ -fsanitize=memory -fsanitize-memory-track-origins \
// -fno-discard-value-names -S -emit-llvm \
// -c op.cpp -o op.ll
struct Foo {
  ~Foo();
};
bool func1(Foo);
void func2(Foo);
void func3(int) {
  int f_st, t_st;
  Foo f, t;
  func1(f) || func1(f) || func1(t) || func1(f) && func1(t);
  func2(f);
}
```
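A minimal sketch of the underlying issue and one deterministic alternative (illustrative only, not necessarily how the patch fixes it):
```
#include "llvm/ADT/SetVector.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// SmallPtrSet only iterates in insertion order while it stays within its
// inline capacity; beyond that the order depends on pointer values.
// SmallSetVector keeps set semantics but always iterates in insertion
// order, so anything derived from the iteration (such as the emitted
// global constants) stays deterministic.
SmallSetVector<AllocaInst *, 16> InterestingAllocas;

void rememberAlloca(AllocaInst *AI) {
  InterestingAllocas.insert(AI); // deduplicates, stable iteration order
}
```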
Reviewed By: kda
Differential Revision: https://reviews.llvm.org/D133034
If constant shadow is enabled, we had false reports because
!isZeroValue() does not guarantee that the value is actually non-zero.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D132761
The array size specification of an alloca can be any integer type,
so zext or trunc it to intptr before attempting to multiply it
by an intptr constant.
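A minimal sketch of the computation (IntptrTy and the builder are assumed to be the instrumentation's intptr type and insertion-point builder):
```
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

Value *allocaSizeInBytes(IRBuilder<> &IRB, AllocaInst &AI, Type *IntptrTy,
                         const DataLayout &DL) {
  uint64_t ElemSize = DL.getTypeAllocSize(AI.getAllocatedType());
  // The array size operand can have any integer type (i32, i64, ...), so
  // normalize it to intptr width before the multiply.
  Value *Len = IRB.CreateZExtOrTrunc(AI.getArraySize(), IntptrTy);
  return IRB.CreateMul(Len, ConstantInt::get(IntptrTy, ElemSize));
}
```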
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D131846
We were passing the type of `Val` to `getShadowOriginPtr` rather
than the type of `Val`'s shadow, resulting in broken IR. The fix
is simple.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D131845
Allows for even more savings in the binary image while simultaneously removing the name of the offending stack variable.
Depends on D131631
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D131728
The goal is to reduce the size of the MSan track-origins binary by making
the variable name locations constant, which will allow the linker to
compress them.
Follows: https://reviews.llvm.org/D131415
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D131631
Other sanitizers (ASan, TSan, see added tests) already handle
memcpy.inline and memset.inline by not relying on InstVisitor to turn
the intrinsics into calls. Only MSan instrumentation currently does not
support them due to missing InstVisitor callbacks.
Fix it by actually making InstVisitor handle Mem*InlineInst.
While the mem*.inline intrinsics promise no calls to external functions
as an optimization, for the sanitizers we need to break this guarantee
since access into the runtime is required either way, and performance
can no longer be guaranteed. All other cases, where generating a call is
incorrect, should instead use no_sanitize.
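A rough sketch of the shape of the fix (an illustrative visitor, not the actual MSan code):
```
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;

struct MemIntrinsicLowering : InstVisitor<MemIntrinsicLowering> {
  void visitMemCpyInst(MemCpyInst &I) { /* lower to a __msan_memcpy call */ }
  void visitMemSetInst(MemSetInst &I) { /* lower to a __msan_memset call */ }
  // Without these callbacks, mem*.inline would fall through to the generic
  // unknown-instruction handling; route them to the ordinary handlers.
  void visitMemCpyInlineInst(MemCpyInlineInst &I) { visitMemCpyInst(I); }
  void visitMemSetInlineInst(MemSetInlineInst &I) { visitMemSetInst(I); }
};
```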
Fixes: https://github.com/llvm/llvm-project/issues/57048
Reviewed By: vitalybuka, dvyukov
Differential Revision: https://reviews.llvm.org/D131577
This is done by calling __msan_set_alloca_origin and providing the location of the variable by using the call stack.
This is preparatory work for dropping variable names when track-origins is enabled.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D131205
Use the FreeBSD AArch64 memory layout values when building for it.
These are based on the x86_64 values, scaled to take into account the
larger address space on AArch64.
Reviewed by: vitalybuka
Differential Revision: https://reviews.llvm.org/D125883
This patch adds !nosanitize metadata to FixedMetadataKinds.def. !nosanitize indicates that LLVM should not insert any sanitizer instrumentation.
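A minimal sketch of how instrumentation passes can consult the new fixed metadata kind (illustrative, not part of the patch):
```
#include "llvm/IR/Instruction.h"
#include "llvm/IR/LLVMContext.h"
using namespace llvm;

bool shouldInstrument(const Instruction &I) {
  // Skip instructions explicitly marked with !nosanitize.
  return !I.hasMetadata(LLVMContext::MD_nosanitize);
}
```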
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D126294
Using the legacy PM for the optimization pipeline was deprecated in 13.0.0.
Following recent changes to remove non-core features of the legacy
PM/optimization pipeline, remove MemorySanitizerLegacyPass.
Differential Revision: https://reviews.llvm.org/D123894
We need to explicitly query the shadow here, because it is lazily
initialized for byval arguments. Without opaque pointers this used to
mostly work out, because there would be a bitcast to `i8*` present, and
visiting that bitcast would query (and, in the byval case, copy) the
argument shadow.
Reviewed By: vitalybuka, eugenis
Differential Revision: https://reviews.llvm.org/D123602