Avoid a crash in __builtin_assume_aligned when the first argument has array
type (or is a string literal).
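A minimal sketch of the previously crashing pattern (illustrative; the exact reproducer is in the linked issue):
```
// Both forms previously crashed: the array/string literal decays to a
// pointer and should simply be passed through with the alignment assumption.
char buf[16];
void *p = __builtin_assume_aligned(buf, 16);   // array argument
void *q = __builtin_assume_aligned("text", 4); // string literal argument
```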
Fixes Issue #57169
Differential Revision: https://reviews.llvm.org/D133202
All the coroutine builtins were emitted in EmitCoroutineIntrinsic except
__builtin_coro_size. This patch emits all the coroutine builtins
uniformly.
This patch replaces clamp idioms with std::clamp where the range is
obviously valid from the source code (that is, low <= high) to avoid
introducing undefined behavior.
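As an illustration of the rewrite (function names here are made up):
```
#include <algorithm>

// Before: a hand-rolled clamp idiom. After: std::clamp, which has undefined
// behavior when Low > High -- hence the "obviously valid range" restriction.
int clampOld(int V, int Low, int High) { return std::max(Low, std::min(V, High)); }
int clampNew(int V, int Low, int High) { return std::clamp(V, Low, High); }
```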
This patch replaces the svget, svset and svcreate AArch64 intrinsics for
tuple types with the generic LLVM IR vector extract/insert intrinsics.
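At the C level the intrinsics are unchanged; only the lowering differs. A hedged sketch (compiled for an SVE target):
```
#include <arm_sve.h>

// svget2_s32 extracts one vector from a two-vector tuple; with this patch it
// lowers to the generic vector-extract intrinsic rather than an
// AArch64-specific tuple intrinsic.
svint32_t second_of(svint32x2_t tup) { return svget2_s32(tup, 1); }
```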
Differential Revision: https://reviews.llvm.org/D131547
At the moment, Clang only considers errno when deciding if a builtin
is const. This ignores the fact that some library functions may raise
floating point exceptions, which modifies global state, e.g. by
updating the FP status registers.
To model the fact that some library functions/builtins may raise
floating point exceptions, this patch adds a new 'g' modifier for
builtins. If a builtin is marked with 'g', it cannot be considered
const, unless FP exceptions are ignored.
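For intuition, a small example of the observable state involved (standard C/C++, not part of the patch):
```
#include <fenv.h>
#include <math.h>

// sqrt may raise FE_INVALID for a negative argument, updating the FP status
// flags; treating the call as 'const' would allow it to be reordered or
// removed, losing that side effect.
#pragma STDC FENV_ACCESS ON
int raisesInvalid(double x) {
  feclearexcept(FE_ALL_EXCEPT);
  (void)sqrt(x);                        // may set FE_INVALID when x < 0
  return fetestexcept(FE_INVALID) != 0; // observes the updated flags
}
```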
So far I've not added CHECK lines for all calls in math-libcalls.c. I'll
do that once we agree on the overall direction.
A consequence seems to be that we fail to select some of the constrained
math builtins now, but I am not entirely sure what's going on there.
Reviewed By: john.brawn
Differential Revision: https://reviews.llvm.org/D129231
The order has to be a constant, and that should be enforced by the builtin
definition. The fallthrough behavior would have been broken anyway.
There's still an existing issue/assert if you try to use garbage for the
ordering: the generated IR would be broken, but we also hit another assert
before reaching that point.
Fixes issue #56832
1. Add policy-function support and tests for vadd, vmv, vfmv and all load
instructions except segment loads (see the sketch after the RFC link below).
I did not add every combination of policy functions in the tests because
doing so did not seem meaningful.
2. Rename HasUnMaskedOverloaded to SupportOverloading.
3. vmv.s.x with the ta policy cannot have an overloaded API.
4. This patch does not cover all operations; follow-up patches will add the
rest.
[RFC] https://github.com/riscv-non-isa/rvv-intrinsic-doc/pull/137
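A hedged sketch of one policy variant, following the naming in the RFC (requires the RVV intrinsics; tu = tail undisturbed):
```
#include <riscv_vector.h>

// The _tu variant takes an extra first operand; the result's tail elements
// are taken from it rather than being left agnostic.
vint32m1_t add_tu(vint32m1_t tail, vint32m1_t a, vint32m1_t b, size_t vl) {
  return vadd_vv_i32m1_tu(tail, a, b, vl);
}
```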
Reviewed By: kito-cheng, fakepaper56
Differential Revision: https://reviews.llvm.org/D126742
I went over the output of the following mess of a command:
```
(ulimit -m 2000000; ulimit -v 2000000; git ls-files -z |
 parallel --xargs -0 cat | aspell list --mode=none --ignore-case |
 grep -E '^[A-Za-z][a-z]*$' | sort | uniq -c | sort -n |
 grep -vE '.{25}' | aspell pipe -W3 | grep : | cut -d' ' -f2 | less)
```
and proceeded to spend a few days looking through it for probable typos,
fixing a few hundred of them across the whole LLVM project (note: the
ones I found are nowhere near all of them, but it seems like a good
start).
Differential Revision: https://reviews.llvm.org/D130827
Some RISC-V builtins require ICE operands. We should call
getIntegerConstantExpr instead of EmitScalarExpr to match other
targets.
This was made a little trickier by the vector intrinsics not having
a valid type string, but there are two that have ICE operands, so
I specified them manually.
This patch is needed because developers expect "GCCBuiltin" items to be equivalents of the GCC intrinsics, not Clang internals.
Reviewed By: #libc_abi, RKSimon, xbolva00
Differential Revision: https://reviews.llvm.org/D127460
XL considers different vector types to be incompatible with each other.
For example, assignment between variables of types vector float and vector
long long, or even vector signed int and vector unsigned int, is diagnosed.
Clang, however, does not diagnose such cases and performs a simple bitcast
between the two types. This could easily result in program errors. This
patch fixes the implicit casts in altivec.h so that there are no
incompatible-vector-type errors with -fno-lax-vector-conversions; it is the
prerequisite patch for switching the default to -fno-lax-vector-conversions
later.
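For illustration, the kind of code affected (compile with -maltivec -fno-lax-vector-conversions):
```
#include <altivec.h>

// Assignment between distinct vector types is diagnosed under
// -fno-lax-vector-conversions, so altivec.h itself must use explicit
// casts internally.
vector signed int si;
vector unsigned int ui;
void assign(void) { si = ui; } // error: incompatible vector types
```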
Reviewed By: nemanjai, amyk
Differential Revision: https://reviews.llvm.org/D124093
In the same spirit as D73543, and in reply to https://reviews.llvm.org/D126768#3549920, this patch adds support for `__builtin_memset_inline`.
The idea is to get support from the compiler to easily write efficient memory function implementations.
This patch could be split in two:
- one for the LLVM part adding the `llvm.memset.inline.*` intrinsics.
- and another one for the Clang part providing the intrinsic as a builtin.
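A usage sketch:
```
// Like memset, but guaranteed to be expanded inline (no libc call is
// emitted); the size must be a compile-time constant.
void zero16(void *p) { __builtin_memset_inline(p, 0, 16); }
```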
Differential Revision: https://reviews.llvm.org/D126903
https://docs.microsoft.com/en-us/cpp/intrinsics/arm64-intrinsics?view=msvc-170
unsigned char __readx18byte(unsigned long)
unsigned short __readx18word(unsigned long)
unsigned long __readx18dword(unsigned long)
unsigned __int64 __readx18qword(unsigned long)
Given the lack of documentation for these intrinsics, we chose to use an alignment
of just `CharUnits::One()` when calling `IRBuilderBase::CreateAlignedLoad()`.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D126024
https://docs.microsoft.com/en-us/cpp/intrinsics/arm64-intrinsics?view=msvc-170
void __writex18byte(unsigned long, unsigned char)
void __writex18word(unsigned long, unsigned short)
void __writex18dword(unsigned long, unsigned long)
void __writex18qword(unsigned long, unsigned __int64)
Given the lack of documentation for these intrinsics, we chose to use an alignment
of just `CharUnits::One()` when calling `IRBuilderBase::CreateAlignedStore()`.
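A combined usage sketch for the read/write pairs (offsets are illustrative):
```
// Read one byte at offset 0x10 from the block addressed by the x18 platform
// register, then store it back one byte further along.
void copy_byte(void) {
  unsigned char b = __readx18byte(0x10);
  __writex18byte(0x11, b);
}
```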
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D126023
Most clients only used these methods because they wanted to be able to
extend or truncate to the same bit width (which is a no-op). Now that
the standard zext, sext and trunc allow this, there is no reason to use
the OrSelf versions.
The OrSelf versions additionally have the strange behaviour of allowing
extending to a *smaller* width, or truncating to a *larger* width, which
are also treated as no-ops. A small amount of client code relied on this
(ConstantRange::castOp and MicrosoftCXXNameMangler::mangleNumber) and
needed rewriting.
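An illustrative before/after with `llvm::APInt`:
```
#include "llvm/ADT/APInt.h"

void demo() {
  llvm::APInt x(16, 42);
  llvm::APInt same = x.zext(16); // no-op; previously needed zextOrSelf(16)
  llvm::APInt wide = x.zext(32); // genuine extension, unchanged
}
```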
Differential Revision: https://reviews.llvm.org/D125557
D117829 added the generic "__builtin_reduce_mul", which we can use to replace the x86-specific integer mul reduction builtins; internally these were already mapping to the same intrinsic, so no test changes are required.
Differential Revision: https://reviews.llvm.org/D125222
Similar to the existing bitwise reduction builtins, this lowers to a llvm.vector.reduce.mul intrinsic call.
For other reductions, we've tried to share builtins for float/integer vectors, but the fmul reduction intrinsic also takes a starting-value argument and can be either unordered or serialized, not the reduction trees specified for the builtins. However we end up addressing fmul support, this shouldn't affect the integer case.
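A usage sketch for the integer case:
```
typedef int v4si __attribute__((ext_vector_type(4)));

// Multiplies all lanes into a scalar; lowers to llvm.vector.reduce.mul.
int product(v4si v) { return __builtin_reduce_mul(v); }
```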
Differential Revision: https://reviews.llvm.org/D117829
Compared to the old implementation:
* In C++, we only recurse into aggregate classes.
* Unnamed bit-fields are not printed.
* Constant evaluation is supported.
* Proper conversion is done when passing arguments through `...`.
* Additional arguments are supported and are injected prior to the
format string; this directly supports use with `fprintf`, for example.
* An arbitrary callable can be passed rather than only a function
pointer. In particular, in C++, a function template or overload set is
acceptable.
* All text generated by Clang is printed via `%s` rather than directly;
this avoids issues where Clang's pretty-printing output might itself
contain a `%` character.
* Fields of types that we don't know how to print are printed with a
`"*%p"` format and passed by address to the print function.
* No return value is produced.
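A usage sketch combining several of the points above (arbitrary callable, extra leading argument):
```
#include <stdio.h>

struct Pair { int x, y; };

// stderr is injected before the format string, so this dumps via fprintf.
void dump(struct Pair *p) { __builtin_dump_struct(p, fprintf, stderr); }
```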
Reviewed By: aaron.ballman, erichkeane, yihanaa
Differential Revision: https://reviews.llvm.org/D124221
D124741 added the generic "__builtin_reduce_add", which we can use to replace the x86-specific integer add reduction builtins; internally these were already mapping to the same intrinsic, so no test changes are required.
Differential Revision: https://reviews.llvm.org/D124757
Similar to the existing bitwise reduction builtins, this lowers to a llvm.vector.reduce.add intrinsic call.
For other reductions, we've tried to share builtins for float/integer vectors, but the fadd reduction intrinsics also take a starting-value argument and can be either unordered or serialized, not the reduction trees specified for the builtins. However we end up addressing fadd support, this shouldn't affect the integer case.
(Split off from D117829)
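A usage sketch for the integer case:
```
typedef int v4si __attribute__((ext_vector_type(4)));

// Sums all lanes into a scalar; lowers to llvm.vector.reduce.add.
int sum(v4si v) { return __builtin_reduce_add(v); }
```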
Differential Revision: https://reviews.llvm.org/D124741
Thanks to @rsmith for pointing this out; I'm sorry for introducing this bug.
See @rsmith's comment in https://reviews.llvm.org/D122248
Example (by @rsmith): https://godbolt.org/z/o7vcbWaEf
I have added a test case
struct:
```
struct U19A {
int a;
};
struct U19B {
struct U19A a;
};
struct U19B a = {
.a.a = 2022
};
```
Dump result:
```
struct U19B {
struct U19A a = {
int a = 2022
}
}
```
Reviewed By: erichkeane
Differential Revision: https://reviews.llvm.org/D122920
This is extended to all `std::` functions that take a reference to a
value and return a reference (or pointer) to that same value: `move`,
`forward`, `move_if_noexcept`, `as_const`, `addressof`, and the
libstdc++-specific function `__addressof`.
We still require these functions to be declared before they can be used,
but don't instantiate their definitions unless their addresses are
taken. Instead, code generation, constant evaluation, and static
analysis are given direct knowledge of their effect.
This change aims to reduce various costs associated with these functions
-- per-instantiation memory costs, compile time and memory costs due to
creating out-of-line copies and inlining them, code size at -O0, and so
on -- so that they are not substantially more expensive than a cast.
Most of these improvements are very small, but I measured a 3% decrease
in -O0 object file size for a simple C++ source file using the standard
library after this change.
We now automatically infer the `const` and `nothrow` attributes on these
now-builtin functions, in particular meaning that we get a warning for
an unused call to one of these functions.
In C++20 onwards, we disallow taking the addresses of these functions,
per the C++20 "addressable function" rule. In earlier language modes, a
compatibility warning is produced but the address can still be taken.
The same infrastructure is extended to the existing MSVC builtin
`__GetExceptionInfo`, which is now only recognized in namespace `std`
like it always should have been.
This is a re-commit of
fc30901096,
a571f82a50,
64c045e25b, and
de6ddaeef3,
and reverts aa643f455a.
This change also includes a workaround for users using libc++ 3.1 and
earlier (!!), as apparently happens on AIX, where std::move sometimes
returns by value.
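An illustrative consequence of the builtin treatment (a sketch, not from the patch itself):
```
#include <utility>

void f(int &x) {
  std::move(x);                    // unused call: now warns, since the
                                   // builtin is inferred 'const'/'nothrow'
  // auto *fp = &std::move<int &>; // C++20: ill-formed under the
                                   // "addressable function" rule
}
```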
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D123345
Revert "Fixup D123950 to address revert of D123345"
This reverts commit aa643f455a.
This is extended to all `std::` functions that take a reference to a
value and return a reference (or pointer) to that same value: `move`,
`forward`, `move_if_noexcept`, `as_const`, `addressof`, and the
libstdc++-specific function `__addressof`.
We still require these functions to be declared before they can be used,
but don't instantiate their definitions unless their addresses are
taken. Instead, code generation, constant evaluation, and static
analysis are given direct knowledge of their effect.
This change aims to reduce various costs associated with these functions
-- per-instantiation memory costs, compile time and memory costs due to
creating out-of-line copies and inlining them, code size at -O0, and so
on -- so that they are not substantially more expensive than a cast.
Most of these improvements are very small, but I measured a 3% decrease
in -O0 object file size for a simple C++ source file using the standard
library after this change.
We now automatically infer the `const` and `nothrow` attributes on these
now-builtin functions, in particular meaning that we get a warning for
an unused call to one of these functions.
In C++20 onwards, we disallow taking the addresses of these functions,
per the C++20 "addressable function" rule. In earlier language modes, a
compatibility warning is produced but the address can still be taken.
The same infrastructure is extended to the existing MSVC builtin
`__GetExceptionInfo`, which is now only recognized in namespace `std`
like it always should have been.
This is a re-commit of
fc30901096,
a571f82a50, and
64c045e25b
which were reverted in
e75d8b7037
due to a crasher bug where CodeGen would emit a builtin glvalue as an
rvalue if it constant-folds.
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D123345
std::addressof, plus the libstdc++-specific std::__addressof.
This brings us to parity with the corresponding GCC behavior.
Remove STDBUILTIN macro that ended up not being used.
We still require these functions to be declared before they can be used,
but don't instantiate their definitions unless their addresses are
taken. Instead, code generation, constant evaluation, and static
analysis are given direct knowledge of their effect.
This change aims to reduce various costs associated with these functions
-- per-instantiation memory costs, compile time and memory costs due to
creating out-of-line copies and inlining them, code size at -O0, and so
on -- so that they are not substantially more expensive than a cast.
Most of these improvements are very small, but I measured a 3% decrease
in -O0 object file size for a simple C++ source file using the standard
library after this change.
We now automatically infer the `const` and `nothrow` attributes on these
now-builtin functions, in particular meaning that we get a warning for
an unused call to one of these functions.
In C++20 onwards, we disallow taking the addresses of these functions,
per the C++20 "addressable function" rule. In earlier language modes, a
compatibility warning is produced but the address can still be taken.
The same infrastructure is extended to the existing MSVC builtin
`__GetExceptionInfo`, which is now only recognized in namespace `std`
like it always should have been.
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D123345
This patch changes `EmitPPCBuiltinExpr` in `CGBuiltin.cpp` to remove
the loop at the beginning of the function that emits the arguments and
to delay emitting the arguments until inside the switch statement. These
changes will put `EmitPPCBuiltinExpr` in line with the strategy of the
target independent function `EmitBuiltinExpr`. Also, this patch
ensures that arguments are only emitted once.
Tests that included builtins affected by these changes have been
modified to match expected behaviour.
Reviewed By: #powerpc, nemanjai, amyk
Differential Revision: https://reviews.llvm.org/D121637
Add support for builtin_[max|min], which have the prototype below:
A builtin_max(A1, A2, A3, ...)
All arguments must have the same type; they must all be float, double, or
long double. Internally, SelectCC is used to compute the result.
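A hedged usage sketch (assuming the XL-style spelling `__builtin_max`; all arguments must share one floating-point type):
```
// Variadic in the number of arguments, but every argument must be float,
// double, or long double, and all of the same type.
double largest(double a, double b, double c) {
  return __builtin_max(a, b, c);
}
```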
Reviewed By: qiucf
Differential Revision: https://reviews.llvm.org/D122478