Includes verifier changes checking the elementtype, clang codegen
changes to emit the elementtype, and ISel changes using the elementtype.
Reviewed By: #opaque-pointers, nikic
Differential Revision: https://reviews.llvm.org/D120527
We should not be using APIs here that try to fetch the attribute
from both the call attributes and the function attributes. Otherwise
we'll try to upgrade a non-existent sret attribute on the call using
the attribute on the function.
Since D101045, allocas are no longer required to be part of the
default alloca address space. There may be allocas in multiple
different address spaces. However, the bitcode reader would
simply assume the default alloca address space, resulting in
either an error or incorrect IR.
Add an optional record for allocas which encodes the address
space.
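For illustration, a minimal IR sketch (the function name and the address
space number are arbitrary) of what the new record has to round-trip:

  define void @f() {
    ; An alloca outside the default alloca address space; the bitcode
    ; record now encodes addrspace(5) instead of assuming the default.
    %p = alloca i32, align 4, addrspace(5)
    ret void
  }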
As these errors are detected after the instruction has already been
created (but before it has been inserted into the function), we
also need to delete it.
This will let us start moving away from hard-coded attributes in
MemoryBuiltins.cpp and put the knowledge about various attribute
functions in the compilers that emit those calls, where it probably
belongs.
Differential Revision: https://reviews.llvm.org/D117921
This completes the propagation of type IDs through bitcode reading,
and switches remaining uses of getPointerElementType() to use
contained type IDs.
The main new thing here is that sometimes we need to create a type
ID for a type that was not explicitly encoded in bitcode (or we
don't know its ID at the current point). For such types we create a
"virtual" type ID, which is cached based on the type and the
contained type IDs. Luckily, we generally only need zero or one
contained type IDs, and in the one case where we need two, we can
get away with not including it in the cache key.
With this change, we pass the entirety of llvm-test-suite at O3
with opaque pointers.
Differential Revision: https://reviews.llvm.org/D120471
Currently, adding the attribute no_sanitize("bounds") does not disable
-fsanitize=local-bounds (which is also enabled by -fsanitize=bounds). The
Clang frontend handles -fsanitize=array-bounds, which can already be
disabled by no_sanitize("bounds"). However, the instrumentation added by
the BoundsChecking pass in the middle-end cannot be disabled by the
attribute.
The fix is very similar to D102772 that added the ability to selectively
disable sanitizer pass on certain functions.
In this patch, if no_sanitize("bounds") is provided, an additional
function attribute (NoSanitizeBounds) is attached to the IR to let the
BoundsChecking pass know we want to disable local-bounds checking. To
support this, the IR is extended (similar to D102772) so that Clang can
preserve the information and the BoundsChecking pass can tell that
bounds checking is disabled for a given function.
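As a rough sketch, a function opting out would carry the new attribute;
the lowercase IR spelling below is an assumption based on the
NoSanitizeBounds name above, and the function name is arbitrary:

  define void @skip_local_bounds(ptr %p) nosanitize_bounds {
    %v = load i32, ptr %p
    ret void
  }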
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D119816
This is the next step towards supporting bitcode auto upgrade with
opaque pointers. The ValueList now stores the Value* together with
its associated type ID, which allows inspecting the original pointer
element type of arbitrary values.
This is a largely mechanical change threading the type ID through
various places. I've left TODOTypeID placeholders in a number of
places where determining the type ID is either non-trivial or
requires allocating a new type ID not present in the original
bitcode. For this reason, the new type IDs are also not used for
anything yet (apart from propagation). They will get used once the
TODOs are resolved.
Differential Revision: https://reviews.llvm.org/D119821
This is step two of supporting autoupgrade of old bitcode to opaque
pointers. Rather than tracking the element type ID of pointers in
particular, track all type IDs that a type contains. This allows us
to recover the element type in more complex situations, e.g. when
we need to determine the pointer element type of a vector element
or function type parameter.
Differential Revision: https://reviews.llvm.org/D119339
We have the `clang -cc1` command-line option `-funwind-tables=1|2` and
the codegen option `VALUE_CODEGENOPT(UnwindTables, 2, 0) ///< Unwind
tables (1) or asynchronous unwind tables (2)`. However, this is
encoded in LLVM IR by the presence or absence of the `uwtable`
attribute, i.e. we lose the information about whether we want just
synchronous unwind tables or asynchronous unwind tables.
Asynchronous unwind tables take more space in the runtime image, I'd
estimate something like 80-90% more, as the difference is adding
roughly the same number of CFI directives as for prologues, only a bit
simpler (e.g. `.cfi_offset reg, off` vs. `.cfi_restore reg`). Or even
more, if you consider tail duplication of epilogue blocks.
Asynchronous unwind tables can also restrict code generation to having
only a finite number of frame pointer adjustments (an example of *not*
having a finite number of `SP` adjustments is on AArch64: when untagging
the stack (MTE), in some cases the compiler can modify `SP` in a loop).
Having the CFI precise up to an instruction generally also means one
cannot bundle together CFI instructions once the prologue is done; they
need to be interspersed with ordinary instructions, which means extra
`DW_CFA_advance_loc` commands, further increasing the unwind table
size.
That is to say, async unwind tables impose a non-negligible overhead,
yet for the most common use cases (like C++ exceptions), they are not
even needed.
This patch extends the `uwtable` attribute with an optional
value:
- `uwtable` (defaults to `async`)
- `uwtable(sync)`, synchronous unwind tables
- `uwtable(async)`, asynchronous (instruction precise) unwind tables
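For illustration, a minimal IR sketch of the new spellings (function
names are arbitrary):

  define void @needs_sync_tables() uwtable(sync) {
    ret void
  }

  ; Plain uwtable keeps its old meaning and is equivalent to uwtable(async).
  define void @needs_async_tables() uwtable {
    ret void
  }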
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D114543
Make it clearer that this method is specifically for pointer
element types, and not other element types. This distinction will
be relevant in the future.
The somewhat unusual spelling is to make sure this does not show
up when grepping for getPointerElementType.
Auto-upgrades that rely on the pointer element type do not work in
opaque pointer mode. The idea behind this patch is that we can
instead work with type IDs, for which we can retain the pointer
element type. For typed pointer bitcode, we will have a distinct
type ID for pointers with distinct element type, even if there will
only be a single corresponding opaque pointer type.
The disclaimer here is that this is only the first step of the change,
and there are still more getPointerElementType() calls to remove.
I expect that two more patches will be needed:
1. Track all "contained" type IDs, which will allow us to handle
function params (which are contained in the function type) and GEPs
(which may use vectors of pointers)
2. Track type IDs for values, which is e.g. necessary to handle loads.
Differential Revision: https://reviews.llvm.org/D118694
Instead use either Type::getPointerElementType() or
Type::getNonOpaquePointerElementType().
This is part of D117885, in preparation for deprecating the API.
This is the autoupgrade part of D116531. If old bitcode is missing
the elementtype attribute for indirect inline asm constraints,
automatically add it. As usual, this only works when upgrading
in typed mode; we haven't figured out the upgrade in opaque mode yet.
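Roughly, the upgraded IR looks like the following sketch (shown in
opaque-pointer syntax; the asm string and names are arbitrary):

  define void @store_via_asm(ptr %p) {
    ; The indirect "=*m" operand now carries elementtype, preserving the
    ; pointee type that used to come from the typed pointer.
    call void asm sideeffect "movl $$1, $0", "=*m"(ptr elementtype(i32) %p)
    ret void
  }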
The bitcode reader expected the pointers to be typed so that it could
extract the function type for the inline assembly; as a result,
`bitc::CST_CODE_INLINEASM` did not explicitly store said function type.
I'm not really sure what the upgrade path will look like for existing
bitcode, but I think we can easily support opaque pointers going
forward by simply storing the function type.
Reviewed By: #opaque-pointers, nikic
Differential Revision: https://reviews.llvm.org/D116341
We can't get the pointee type of an opaque pointer, but in that case
said attributes must already be typed, so just don't try to rewrite
them if they already are.
With Control-Flow Integrity (CFI), the LowerTypeTests pass replaces
function references with CFI jump table references, which is a problem
for low-level code that needs the address of the actual function body.
For example, in the Linux kernel, the code that sets up interrupt
handlers needs to take the address of the interrupt handler function
instead of the CFI jump table, as the jump table may not even be mapped
into memory when an interrupt is triggered.
This change adds the no_cfi constant type, which wraps function
references in a value that LowerTypeTestsModule::replaceCfiUses does not
replace.
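A minimal sketch of the intended use (opaque-pointer syntax; names are
illustrative):

  declare void @irq_handler()

  ; no_cfi yields the address of the real function body rather than its
  ; CFI jump table entry, so this pointer stays usable for low-level code.
  @irq_vector_entry = global ptr no_cfi @irq_handler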
Link: https://github.com/ClangBuiltLinux/linux/issues/1353
Reviewed By: nickdesaulniers, pcc
Differential Revision: https://reviews.llvm.org/D108478
Instead track global objects with implicit comdat in a separate
set. The current approach of temporarily assigning an invalid
comdat pointer is incompatible with D115864.
Verify that the resolver exists, that it is a defined
Function, and that its return type matches the ifunc's
type. Add a corresponding check to the BitcodeReader, change Clang
to emit the correct type, and fix tests to comply.
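A minimal sketch of an ifunc that satisfies the new checks
(opaque-pointer syntax; names are illustrative):

  define i32 @impl(i32 %x) {
    ret i32 %x
  }

  ; The resolver must be a defined function returning a pointer
  ; (to the ifunc's function type, under typed pointers).
  define ptr @foo_resolver() {
    ret ptr @impl
  }

  @foo = ifunc i32 (i32), ptr @foo_resolver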
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D112349
Avoid naming some Expected<T> values in the Bitcode reader by using
takeError() and moveInto() more often. This follows the smaller set of
changes included in 2410fb4616.
As discussed in:
* https://reviews.llvm.org/D94166
* https://lists.llvm.org/pipermail/llvm-dev/2020-September/145031.html
The GlobalIndirectSymbol class lost most of its meaning in
https://reviews.llvm.org/D109792, which disambiguated getBaseObject
(now getAliaseeObject) between GlobalIFunc and everything else.
In addition, as long as GlobalIFunc is not a GlobalObject and
getAliaseeObject returns GlobalObjects, a GlobalAlias whose aliasee
is a GlobalIFunc cannot currently be modeled properly. Creating
aliases for GlobalIFuncs does happen in the wild (e.g. in glibc).
Moreover, calling getAliaseeObject on a GlobalIFunc currently returns
nullptr, which is undesirable because it should return the object
itself for non-aliases.
This patch refactors the GlobalIFunc class to inherit directly from
GlobalObject, and removes GlobalIndirectSymbol (while inlining the
relevant parts into GlobalAlias and GlobalIFunc). This allows for
calling getAliaseeObject() on a GlobalIFunc to return the GlobalIFunc
itself, making getAliaseeObject() more consistent and enabling
alias-to-ifunc to be properly modeled in the IR.
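As a sketch of the kind of IR this lets us model cleanly
(opaque-pointer syntax; names are illustrative):

  define void @foo_impl() {
    ret void
  }

  define ptr @foo_resolver() {
    ret ptr @foo_impl
  }

  @foo = ifunc void (), ptr @foo_resolver

  ; An alias whose aliasee is an ifunc, as happens in glibc; with this
  ; change, getAliaseeObject() on @foo returns @foo itself.
  @foo_alias = alias void (), ptr @foo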
I exercised some judgement in the API clients of GlobalIndirectSymbol:
some were 'monomorphized' for GlobalAlias and GlobalIFunc, and
some remained shared (with the type adapted to become GlobalValue).
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D108872
The current code checks whether the vector's element type is a valid structure element type, rather than a valid vector element type. The two have separate implementations but accept only very slightly different sets of types, which is probably why this wasn't caught before.
Differential Revision: https://reviews.llvm.org/D109655
Currently the max alignment representable is 1GB, see D108661.
Setting the align of an object to 4GB is desirable in some cases to make sure the lower 32 bits are clear which can be used for some optimizations, e.g. https://crbug.com/1016945.
This uses an extra bit in instructions that carry an alignment. We can store 15 bits of "free" information, and with this change some instructions (e.g. AtomicCmpXchgInst) use 14 bits.
We can increase the max alignment representable above 4GB (up to 2^62) since we're only using 33 of the 64 values, but I've just limited it to 4GB for now.
The one place we have to update the bitcode format is for the alloca instruction. It stores its alignment into 5 bits of a 32 bit bitfield. I've added another field which is 8 bits and should be future proof for a while. For backward compatibility, we check if the old field has a value and use that, otherwise use the new field.
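For illustration, a small IR sketch using the new maximum (names are
arbitrary):

  ; 2^32-byte alignment, now representable on globals and instructions.
  @buffer = global [64 x i8] zeroinitializer, align 4294967296

  define void @f() {
    %p = alloca i64, align 4294967296
    ret void
  }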
Updating clang's max allowed alignment will come in a future patch.
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D110451