Reapply 49e5f603d4
which had been reverted in c94332919b.
It was originally reverted because I hadn't updated the patch in quite a
while before committing it, so it was missing a bunch of changes for
code added since I'd written it.
Reviewers: aaron.ballman
Differential Revision: https://reviews.llvm.org/D76646
The _ExtInt type allows custom-width integers, but an atomic memory
access's operand must have a power-of-two size. _ExtInts with a
non-power-of-two size should not be allowed in atomic intrinsics.
Before this change:
$ cat test.c
typedef unsigned _ExtInt(42) dtype;
void verify_binary_op_nand(dtype* pval1, dtype val2)
{ __sync_nand_and_fetch(pval1, val2); }
$ clang test.c
clang-11:
/home/ubuntu/llvm_workspace/llvm/clang/lib/CodeGen/CGBuiltin.cpp:117:
llvm::Value*
EmitToInt(clang::CodeGen::CodeGenFunction&, llvm::Value*,
clang::QualType, llvm::IntegerType*): Assertion `V->getType() ==
IntType' failed.
PLEASE submit a bug report to https://bugs.llvm.org/ and include the
crash backtrace, preprocessed source, and associated run script.
After this change:
$ clang test.c
test.c:3:30: error: Atomic memory operand must have a power-of-two size
{ __sync_nand_and_fetch(pval1, val2); }
^
List of the atomic intrinsics that have this
problem:
__sync_fetch_and_add
__sync_fetch_and_sub
__sync_fetch_and_or
__sync_fetch_and_and
__sync_fetch_and_xor
__sync_fetch_and_nand
__sync_nand_and_fetch
__sync_and_and_fetch
__sync_add_and_fetch
__sync_sub_and_fetch
__sync_or_and_fetch
__sync_xor_and_fetch
__sync_fetch_and_min
__sync_fetch_and_max
__sync_fetch_and_umin
__sync_fetch_and_umax
__sync_val_compare_and_swap
__sync_bool_compare_and_swap
Differential Revision: https://reviews.llvm.org/D83340
There is a version that just tests (also called
isIntegerConstantExpression), whereas this version is specifically used
when the value is of interest (a few call sites were actually refactored
to call the test-only version), so let's make the API look more like
it.
Reviewers: aaron.ballman
Differential Revision: https://reviews.llvm.org/D76646
We were previously bypassing the conditional expression special case for binary
conditional expressions.
rdar://64134411
Differential revision: https://reviews.llvm.org/D81751
In general, value dependence is a subset of instantiation dependence. This
allows us to produce diagnostics for the align expression (which
is instantiation dependent but not value dependent).
Differential Revision: https://reviews.llvm.org/D83074
This was suggested in D72782 and brings the diagnostics more in line
with how argument references are handled elsewhere.
Reviewers: rjmccall, jfb, Bigcheese
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D82473
Add a new builtin-function __builtin_expect_with_probability and
intrinsic llvm.expect.with.probability.
The interface is __builtin_expect_with_probability(long expr, long
expected, double probability).
It is essentially the same as __builtin_expect, with one more argument
indicating the probability that the expression equals the expected value. The
probability must be a constant floating-point expression and be in the
range [0.0, 1.0] inclusive.
It corresponds to the __builtin_expect_with_probability function among GCC's
built-in functions.
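A minimal usage sketch (an assumed example, not taken from the patch): hint that the comparison is true about 90% of the time.
```
int classify(int x) {
  // Tell the optimizer the condition is expected to be true with
  // probability 0.9; the probability must be a constant in [0.0, 1.0].
  if (__builtin_expect_with_probability(x > 0, 1, 0.9))
    return 1;  // likely branch
  return 0;    // unlikely branch
}
```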
Differential Revision: https://reviews.llvm.org/D79830
In C++17 the operand(s) of an overloaded operator are sequenced as for
the corresponding built-in operator when the overloaded operator is
called with the operator notation ([over.match.oper]p2).
Reported in PR35340.
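An illustration (assumed example, not from the patch) of the C++17 rule being modeled here:
```
#include <iostream>

// In C++17, an overloaded operator<< used with operator notation sequences
// its left operand before its right one, just like the built-in <<, so the
// read of i below is sequenced before the increment.
void print_twice(int i) { std::cout << i << i++; }
```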
Differential Revision: https://reviews.llvm.org/D81330
Reviewed By: rsmith
This patch adds __builtin_matrix_column_major_store to Clang,
as described in clang/docs/MatrixTypes.rst. In the initial version,
the stride is not optional yet.
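A minimal sketch of the builtin's use (assumed example; matrix types need -fenable-matrix):
```
typedef double m4x4_t __attribute__((matrix_type(4, 4)));

// Store a 4x4 matrix to memory in column-major order, with a stride of
// 4 elements between the starts of consecutive columns.
void store(m4x4_t m, double *out) {
  __builtin_matrix_column_major_store(m, out, 4);
}
```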
Reviewers: rjmccall, jfb, rsmith, Bigcheese
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D72782
For example:
svint32_t svget4(svint32x4_t tuple, uint64_t imm_index)
returns the subvector at `index`, which must be in range `0..3`.
svint32x3_t svset3(svint32x3_t tuple, uint64_t index, svint32_t vec)
returns a tuple vector with `vec` inserted into `tuple` at `index`,
which must be in range `0..2`.
Reviewers: c-rhodes, efriedma
Reviewed By: c-rhodes
Tags: #clang
Differential Revision: https://reviews.llvm.org/D81464
This patch adds __builtin_matrix_column_major_load to Clang,
as described in clang/docs/MatrixTypes.rst. In the initial version,
the stride is not optional yet.
Reviewers: rjmccall, rsmith, jfb, Bigcheese
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D72781
_ExtInt types
- Fix computed size for _ExtInt types passed to checked arithmetic
builtins.
- Emit diagnostic when signed _ExtInt larger than 128-bits is passed
to __builtin_mul_overflow.
- Change Sema checks for builtins to accept placeholder types.
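A minimal sketch (assumed example) of the checked-arithmetic case covered by the first two bullets:
```
typedef unsigned _ExtInt(42) u42;

// The computed operand size is now correct for _ExtInt operands; a signed
// _ExtInt wider than 128 bits would instead be diagnosed.
bool checked_mul(u42 a, u42 b, u42 *out) {
  return __builtin_mul_overflow(a, b, out);
}
```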
Differential Revision: https://reviews.llvm.org/D81420
Summary:
__builtin_amdgcn_atomic_inc32(int *Ptr, int Val, unsigned MemoryOrdering, const char *SyncScope)
__builtin_amdgcn_atomic_inc64(int64_t *Ptr, int64_t Val, unsigned MemoryOrdering, const char *SyncScope)
__builtin_amdgcn_atomic_dec32(int *Ptr, int Val, unsigned MemoryOrdering, const char *SyncScope)
__builtin_amdgcn_atomic_dec64(int64_t *Ptr, int64_t Val, unsigned MemoryOrdering, const char *SyncScope)
The first and second arguments are transparently passed to the amdgcn atomic
inc/dec intrinsic. The fifth argument of the intrinsic is set to true if the
first argument of the builtin is a volatile pointer. The third argument of
this builtin is one of the memory-ordering specifiers ATOMIC_ACQUIRE,
ATOMIC_RELEASE, ATOMIC_ACQ_REL, or ATOMIC_SEQ_CST following C++11 memory
model semantics. This is mapped to the corresponding LLVM atomic memory ordering
for the atomic inc/dec instruction using the Clang atomic C ABI. The fourth
argument is an AMDGPU-specific synchronization scope defined as a string.
Reviewers: arsenm, sameerds, JonChesterfield, jdoerfert
Reviewed By: arsenm, sameerds
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, jfb, kerbowa, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D80804
This patch adds __builtin_matrix_transpose to Clang, as described in
clang/docs/MatrixTypes.rst.
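A minimal sketch (assumed example; requires -fenable-matrix):
```
typedef float m2x3_t __attribute__((matrix_type(2, 3)));
typedef float m3x2_t __attribute__((matrix_type(3, 2)));

// Transposing a 2x3 matrix yields a 3x2 matrix.
m3x2_t flip(m2x3_t m) { return __builtin_matrix_transpose(m); }
```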
Reviewers: rjmccall, jfb, rsmith, Bigcheese
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D72778
Summary:
This patch upstreams support for a new storage only bfloat16 C type.
This type is used to implement primitive support for bfloat16 data, in
line with the Bfloat16 extension of the Armv8.6-a architecture, as
detailed here:
https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/arm-architecture-developments-armv8-6-a
The bfloat type, and its properties are specified in the Arm Architecture
Reference Manual:
https://developer.arm.com/docs/ddi0487/latest/arm-architecture-reference-manual-armv8-for-armv8-a-architecture-profile
In detail this patch:
- introduces an opaque, storage-only C-type __bf16, which introduces a new bfloat IR type.
This is part of a patch series, starting with command-line and Bfloat16
assembly support. The subsequent patches will upstream intrinsics
support for BFloat16, followed by Matrix Multiplication and the
remaining Virtualization features of the armv8.6-a architecture.
The following people contributed to this patch:
- Luke Cheeseman
- Momchil Velikov
- Alexandros Lamprineas
- Luke Geeson
- Simon Tatham
- Ties Stuij
Reviewers: SjoerdMeijer, rjmccall, rsmith, liutianle, RKSimon, craig.topper, jfb, LukeGeeson, fpetrogalli
Reviewed By: SjoerdMeijer
Subscribers: labrinea, majnemer, asmith, dexonsmith, kristof.beyls, arphaman, danielkiss, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76077
In C++17 the postfix-expression of a call expression is sequenced before
each expression in the expression-list and any default argument.
Differential Revision: https://reviews.llvm.org/D58579
Reviewed By: rsmith
When I fixed the target-specific builtins to make sure that aux-targets
are checked, it seems I didn't consider cases where the builtins check
the target info for further info. This patch bubbles the target-info
down to the individual checker functions to ensure that they validate
against the aux-target as well.
For non-aux-target invocations, this is an NFC.
Such a builtin function is mostly useful to preserve btf type id
for non-global data. For example,
extern void foo(..., void *data, int size);
int test(...) {
struct t { int a; int b; int c; } d;
d.a = ...; d.b = ...; d.c = ...;
foo(..., &d, sizeof(d));
}
The function "foo" in the above only see raw data and does not
know what type of the data is. In certain cases, e.g., logging,
the additional type information will help pretty print.
This patch implemented a BPF specific builtin
u32 btf_type_id = __builtin_btf_type_id(param, flag)
which will return a btf type id for the "param".
flag == 0 indicates a BTF local relocation,
which means the btf type_id is only adjusted when the bpf program's BTF changes.
flag == 1 indicates a BTF remote relocation,
which means the btf type_id is adjusted against the linux kernel or
other future entities.
Differential Revision: https://reviews.llvm.org/D74668
alignment information on VarDecls in more cases
This commit improves upon https://reviews.llvm.org/D21099. The code that
computes the source alignment now understands array subscript
expressions, binary operators, derived-to-base casts, and several more
expressions.
rdar://problem/59242343
Differential Revision: https://reviews.llvm.org/D78767
I discovered that when using an aux-target builtin, it was recognized as
a builtin but never checked. This patch checks for an aux-target builtin
and instead validates it against the correct target.
It does this by extracting the checking code for target-specific
builtins into its own function, then calling it with either the TargetInfo or
the AuxTargetInfo.
Expose llvm fence instruction as clang builtin for AMDGPU target
__builtin_amdgcn_fence(unsigned int memoryOrdering, const char *syncScope)
The first argument of this builtin is one of the memory-ordering specifiers
__ATOMIC_ACQUIRE, __ATOMIC_RELEASE, __ATOMIC_ACQ_REL, or __ATOMIC_SEQ_CST
following C++11 memory model semantics. This is mapped to the corresponding
LLVM atomic memory ordering for the fence instruction using the LLVM atomic C
ABI. The second argument is an AMDGPU-specific synchronization scope
defined as a string.
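A minimal usage sketch (assumed example; amdgcn target only), using "workgroup" as the synchronization scope:
```
// Release fence limited to the current workgroup.
void release_fence() {
  __builtin_amdgcn_fence(__ATOMIC_RELEASE, "workgroup");
}
```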
Reviewed By: sameerds
Differential Revision: https://reviews.llvm.org/D75917
This patch also adds the enum `sv_prfop` for the prefetch operation specifier
and checks to ensure the passed enum values are valid.
Reviewers: SjoerdMeijer, efriedma, ctetreau
Reviewed By: efriedma
Tags: #clang
Differential Revision: https://reviews.llvm.org/D78674
This is a code clean up of the PropertyAttributeKind and
ObjCPropertyAttributeKind enums in ObjCPropertyDecl and ObjCDeclSpec that are
exactly identical. This non-functional change consolidates these enums
into one. The changes are to many files across clang (and comments in LLVM) so
that everything refers to the new consolidated enum in DeclObjCCommon.h.
2nd Landing Attempt...
Differential Revision: https://reviews.llvm.org/D77233
This is a code clean up of the PropertyAttributeKind and
ObjCPropertyAttributeKind enums in ObjCPropertyDecl and ObjCDeclSpec that are
exactly identical. This non-functional change consolidates these enums
into one. The changes are to many files across clang (and comments in LLVM) so
that everything refers to the new consolidated enum in DeclObjCCommon.h.
Differential Revision: https://reviews.llvm.org/D77233
Adds another bunch of intrinsics that take immediates with
varying ranges; some take a complex rotation immediate,
which is a set of allowed immediates rather than a range.
svmla_lane: lane immediate ranging 0..(128/(1*sizeinbits(elt)) - 1)
svcmla_lane: lane immediate ranging 0..(128/(2*sizeinbits(elt)) - 1)
svdot_lane: lane immediate ranging 0..(128/(4*sizeinbits(elt)) - 1)
svcadd: complex rotate immediate [90, 270]
svcmla, svcmla_lane: complex rotate immediate [0, 90, 180, 270]
Reviewers: efriedma, SjoerdMeijer, rovka
Reviewed By: efriedma
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76680
This patch adds a number of intrinsics that take immediates with
varying ranges based on the element size of one of the operands.
svext: immediate ranging 0 to (2048/sizeinbits(elt) - 1)
svasrd: immediate ranging 1..sizeinbits(elt)
svqshlu: immediate ranging 1..sizeinbits(elt)/2
ftmad: immediate ranging 0..(sizeinbits(elt) - 1)
Reviewers: efriedma, SjoerdMeijer, rovka, rengolin
Reviewed By: SjoerdMeijer
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76679
This reverts commit 61ba1481e2.
I'm reverting this because it breaks the lldb build with
incomplete switch coverage warnings. I would fix it forward,
but am not familiar enough with lldb to determine the correct
fix.
lldb/source/Plugins/TypeSystem/Clang/TypeSystemClang.cpp:3958:11: error: enumeration values 'DependentExtInt' and 'ExtInt' not handled in switch [-Werror,-Wswitch]
switch (qual_type->getTypeClass()) {
^
lldb/source/Plugins/TypeSystem/Clang/TypeSystemClang.cpp:4633:11: error: enumeration values 'DependentExtInt' and 'ExtInt' not handled in switch [-Werror,-Wswitch]
switch (qual_type->getTypeClass()) {
^
lldb/source/Plugins/TypeSystem/Clang/TypeSystemClang.cpp:4889:11: error: enumeration values 'DependentExtInt' and 'ExtInt' not handled in switch [-Werror,-Wswitch]
switch (qual_type->getTypeClass()) {
Introduction/Motivation:
LLVM-IR supports integers of non-power-of-2 bitwidth, in the iN syntax.
Integers of non-power-of-two aren't particularly interesting or useful
on most hardware, so much so that no language in Clang has been
motivated to expose it before.
However, in the case of FPGA hardware normal integer types where the
full bitwidth isn't used, is extremely wasteful and has severe
performance/space concerns. Because of this, Intel has introduced this
functionality in the High Level Synthesis compiler[0]
under the name "Arbitrary Precision Integer" (ap_int for short). This
has been extremely useful and effective for our users, permitting them
to optimize their storage and operation space on an architecture where
both can be extremely expensive.
We are proposing upstreaming a more palatable version of this to the
community, in the form of this proposal and accompanying patch. We are
proposing the syntax _ExtInt(N). We intend to propose this to the WG14
committee[1], and the underscore-capital seems like the active direction
for a WG14 paper's acceptance. An alternative that Richard Smith
suggested on the initial review was __int(N), however we believe that
is much less acceptable by WG14. We considered _Int, however _Int is
used as an identifier in libstdc++ and there is no good way to fall
back to an identifier (since _Int(5) is indistinguishable from an
unnamed initializer of a template type named _Int).
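A small sketch of the proposed syntax (assumed example): the bit-width is a constant expression inside the parentheses.
```
typedef _ExtInt(13) int13_t;            // 13-bit signed integer
typedef unsigned _ExtInt(37) uint37_t;  // 37-bit unsigned integer

uint37_t add(uint37_t a, uint37_t b) { return a + b; }
```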
[0]https://www.intel.com/content/www/us/en/software/programmable/quartus-prime/hls-compiler.html
[1]http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2472.pdf
Differential Revision: https://reviews.llvm.org/D73967
Summary:
This patch adds a mechanism to easily add range checks for a builtin's
immediate operands. This patch is tested with the qdech intrinsic, which takes
both an enum for the predicate pattern and an immediate for the
multiplier.
Reviewers: efriedma, SjoerdMeijer, rovka
Reviewed By: efriedma, SjoerdMeijer
Subscribers: mgorny, tschuett, mgrang, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76678
WG14 has adopted N2480 (http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2480.pdf)
into C2x at the meetings last week, allowing parameter names of a function
definition to be elided. This patch relaxes the error so that C++ and C2x do not
diagnose this situation, and modes before C2x will allow it as an extension.
This also adds the same feature to ObjC blocks under the assumption that ObjC
wishes to follow the C standard in this regard.
instead of recursing on the stack.
This doesn't actually resolve PR45333, because we now hit stack overflow
somewhere else, but it does get us further. I've not found any way of
testing this that doesn't still crash elsewhere.
Summary: If the size parameter of `__builtin_memcpy_inline` comes from an un-instantiated template parameter, the current code would crash.
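A plausible reduction (assumed, not the original reproducer):
```
// The size argument is value-dependent until instantiation; Sema must not
// try to evaluate it while the template is still uninstantiated.
template <unsigned long N>
void copy_fixed(char *dst, const char *src) {
  __builtin_memcpy_inline(dst, src, N);
}
template void copy_fixed<8>(char *, const char *);
```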
Reviewers: efriedma, courbet
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D76504
Summary:
This patch generalizes the existing code to support CDE intrinsics
which will share some properties with existing MVE intrinsics
(some of the intrinsics will be polymorphic and accept/return values
of MVE vector types).
Specifically the patch:
* Adds new tablegen backends -gen-arm-cde-builtin-def,
-gen-arm-cde-builtin-codegen, -gen-arm-cde-builtin-sema,
-gen-arm-cde-builtin-aliases, -gen-arm-cde-builtin-header based on
existing MVE backends.
* Renames the '__clang_arm_mve_alias' attribute into
'__clang_arm_builtin_alias' (it will be used with CDE intrinsics as
well as MVE intrinsics)
* Implements semantic checks for the coprocessor argument of the CDE
intrinsics as well as the existing coprocessor intrinsics.
* Adds one CDE intrinsic __arm_cx1 to test the above changes
Reviewers: simon_tatham, MarkMurrayARM, ostannard, dmgreen
Reviewed By: simon_tatham
Subscribers: sdesmalen, mgorny, kristof.beyls, danielkiss, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D75850
Clang is missing a warning for
__builtin_return_address/__builtin_frame_address called with an argument > 0.
GCC provides a warning for this via -Wframe-address:
https://gcc.gnu.org/onlinedocs/gcc/Return-Address.html
As calling these functions with an argument > 0 has caused several crashes
for us, we would like to have the same warning as gcc here. This diff
adds the warning and makes it part of -Wmost.
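A sketch of code that the new warning flags (assumed example):
```
// Walking further up the call stack than the immediate caller is what
// -Wframe-address is meant to catch.
void *grandparent_return_address() {
  return __builtin_return_address(2);  // warns under -Wframe-address
}
```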
Differential Revision: https://reviews.llvm.org/D75768
__builtin_os_log_format
This is needed to keep all the objects, including temporaries returned
by function calls, written to the buffer alive until os_log_pack_send is
called.
rdar://problem/60105410
Summary:
There was even a TODO for this.
The main motivation is to make use of call-site based
`__attribute__((alloc_align(param_idx)))` validation (D72996).
Reviewers: rsmith, erichkeane, aaron.ballman, jdoerfert
Reviewed By: rsmith
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D73020
Verifies that an argument passed to __builtin_frame_address or __builtin_return_address is within the range [0, 0xFFFF]
Differential revision: https://reviews.llvm.org/D66839
Re-committed after fixed: c93112dc4f
Summary:
As @rsmith notes in https://reviews.llvm.org/D73020#inline-672219
while that is certainly UB land, it may not be actually reachable at runtime, e.g.:
```
template<int N> void *make() {
if ((N & (N-1)) == 0)
return operator new(N, std::align_val_t(N));
else
return operator new(N);
}
void *p = make<7>();
```
and we shouldn't really error-out there.
That being said, i'm not really following the logic here.
Which ones of these cases should remain being an error?
Reviewers: rsmith, erichkeane
Reviewed By: erichkeane
Subscribers: cfe-commits, rsmith
Tags: #clang
Differential Revision: https://reviews.llvm.org/D73996
New intrinsics are implemented for when we need to port SIMD code from other
architectures and only load or store portions of MSA registers.
The following intrinsics are added, which only load/store element 0 of a vector:
v4i32 __builtin_msa_ldrq_w (const void *, imm_n2048_2044);
v2i64 __builtin_msa_ldr_d (const void *, imm_n4096_4088);
void __builtin_msa_strq_w (v4i32, void *, imm_n2048_2044);
void __builtin_msa_str_d (v2i64, void *, imm_n4096_4088);
Differential Revision: https://reviews.llvm.org/D73644
Implement a pessimistic evaluator of the minimal required size for a buffer
based on the format string, and couple that with the fortified version to emit a
warning when the buffer size is lower than the lower bound computed from the
format string.
Differential Revision: https://reviews.llvm.org/D71566
There is llvm::Value::MaximumAlignment, which is numerically
equivalent to these constants, but we can't use it directly
because we can't include llvm IR headers in clang Sema.
So instead, copy-paste the constant, and fixup the places to use it.
This was initially reviewed in https://reviews.llvm.org/D72998
Summary:
`alloc_align` attribute takes parameter number, not the alignment itself,
so given **just** the attribute/function declaration we can't do any
sanity checking for said alignment.
However, at call site, given the actual `Expr` that is passed
into that parameter, we //might// be able to evaluate said `Expr`
as Integer Constant Expression, and perform the sanity checks.
But since there is no requirement for that argument to be an immediate,
we may fail, and that's okay.
However if we did evaluate, we should enforce the same constraints
as with `__builtin_assume_aligned()`/`__attribute__((assume_aligned(imm)))`:
said alignment is a power of two, and is not greater than our magic threshold
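A sketch of the situation (the declaration below is an assumed example, not from the patch):
```
// alloc_align names the 1-based parameter carrying the alignment, so the
// value can only be checked at call sites where that argument happens to
// be a constant expression.
void *my_aligned_alloc(unsigned long size, unsigned long align)
    __attribute__((alloc_align(2)));

void *make_buffer() { return my_aligned_alloc(256, 32); }  // constant: checkable here
```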
This was initially committed in c2a9061ac5
but reverted in 00756b1823 because of
suspicious bot failures.
Reviewers: erichkeane, aaron.ballman, hfinkel, rsmith, jdoerfert
Reviewed By: erichkeane
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D72996
Summary:
I initially encountered those assertions when trying to create
this IR `alignment` attribute from clang's `__attribute__((assume_aligned(imm)))`,
because until D72994 there is no sanity checking for the value of `imm`.
But even then, we have `llvm::Value::MaximumAlignment` constant (which is `536870912`),
which is enforced for clang attributes, and then there is another magical constant
(`0x40000000` i.e. `1073741824` i.e. `2 * 536870912`) in
`Attribute::getWithAlignment()`/`AttrBuilder::addAlignmentAttr()`.
I strongly suspect that `0x40000000` is incorrect,
and that also should be `llvm::Value::MaximumAlignment`.
Reviewers: erichkeane, hfinkel, jdoerfert, gchatelet, courbet
Reviewed By: erichkeane
Subscribers: hiraditya, cfe-commits, llvm-commits
Tags: #llvm, #clang
Differential Revision: https://reviews.llvm.org/D72998
Summary:
`alloc_align` attribute takes parameter number, not the alignment itself,
so given **just** the attribute/function declaration we can't do any
sanity checking for said alignment.
However, at call site, given the actual `Expr` that is passed
into that parameter, we //might// be able to evaluate said `Expr`
as Integer Constant Expression, and perform the sanity checks.
But since there is no requirement for that argument to be an immediate,
we may fail, and that's okay.
However if we did evaluate, we should enforce the same constraints
as with `__builtin_assume_aligned()`/`__attribute__((assume_aligned(imm)))`:
said alignment is a power of two, and is not greater than our magic threshold
Reviewers: erichkeane, aaron.ballman, hfinkel, rsmith, jdoerfert
Reviewed By: erichkeane
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D72996
Summary:
Immediate vmvnq is code-generated as a simple vector constant in IR,
and left to the backend to recognize that it can be created with an
MVE VMVN instruction. The predicated version is represented as a
select between the input and the same constant, and I've added a
Tablegen isel rule to turn that into a predicated VMVN. (That should
be better than the previous VMVN + VPSEL: it's the same number of
instructions but now it can fold into an adjacent VPT block.)
The unpredicated forms of VBIC and VORR are done by enabling the same
isel lowering as for NEON, recognizing appropriate immediates and
rewriting them as ARMISD::VBICIMM / ARMISD::VORRIMM SDNodes, which I
then instruction-select into the right MVE instructions (now that I've
also reworked those instructions to use the same MC operand encoding).
In order to do that, I had to promote the Tablegen SDNode instance
`NEONvorrImm` to a general `ARMvorrImm` available in MVE as well, and
similarly for `NEONvbicImm`.
The predicated forms of VBIC and VORR are represented as a vector
select between the original input vector and the output of the
unpredicated operation. The main convenience of this is that it still
lets me use the existing isel lowering for VBICIMM/VORRIMM, and not
have to write another copy of the operand encoding translation code.
This intrinsic family is the first to use the `imm_simd` system I put
into the MveEmitter tablegen backend. So, naturally, it showed up a
bug or two (emitting bogus range checks and the like). Fixed those,
and added a full set of tests for the permissible immediates in the
existing Sema test.
Also adjusted the isel pattern for `vmovlb.u8`, which stopped matching
because lowering started turning its input into a VBICIMM. Now it
recognizes the VBICIMM instead.
Reviewers: dmgreen, MarkMurrayARM, miyuki, ostannard
Reviewed By: dmgreen
Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D72934
This change introduces three new builtins (which work on both pointers
and integers) that can be used instead of common bitwise arithmetic:
__builtin_align_up(x, alignment), __builtin_align_down(x, alignment) and
__builtin_is_aligned(x, alignment).
I originally added these builtins to the CHERI fork of LLVM a few years ago
to handle the slightly different C semantics that we use for CHERI [1].
Until recently these builtins (or sequences of other builtins) were
required to generate correct code. I have since made changes to the default
C semantics so that they are no longer strictly necessary (but using them
does generate slightly more efficient code). However, based on our experience
using them in various projects over the past few years, I believe that adding
these builtins to clang would be useful.
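A minimal sketch of the three builtins (assumed example): they preserve the argument's type, including const, and take a power-of-two alignment.
```
bool is_page_aligned(const char *p) { return __builtin_is_aligned(p, 4096); }
const char *page_start(const char *p) { return __builtin_align_down(p, 4096); }
const char *next_page(const char *p)  { return __builtin_align_up(p, 4096); }
```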
These builtins have the following benefit over bit-manipulation and casts
via uintptr_t:
- The named builtins clearly convey the semantics of the operation. While
checking alignment using __builtin_is_aligned(x, 16) versus
((x & 15) == 0) is probably not a huge win in readably, I personally find
__builtin_align_up(x, N) a lot easier to read than (x+(N-1))&~(N-1).
- They preserve the type of the argument (including const qualifiers). When
using casts via uintptr_t, it is easy to cast to the wrong type or strip
qualifiers such as const.
- If the alignment argument is a constant value, clang can check that it is
a power-of-two and within the range of the type. Since the semantics of
these builtins is well defined compared to arbitrary bit-manipulation,
it is possible to add a UBSAN checker that the run-time value is a valid
power-of-two. I intend to add this as a follow-up to this change.
- The builtins avoid int-to-pointer casts both in C and LLVM IR.
In the future (i.e. once most optimizations handle it), we could use the new
llvm.ptrmask intrinsic to avoid the ptrtoint instruction that would normally
be generated.
- They can be used to round up/down to the next aligned value for both
integers and pointers without requiring two separate macros.
- In many projects the alignment operations are already wrapped in macros (e.g.
roundup2 and rounddown2 in FreeBSD), so by replacing the macro implementation
with a builtin call, we get improved diagnostics for many call-sites while
only having to change a few lines.
- Finally, the builtins also emit assume_aligned metadata when used on pointers.
This can improve code generation compared to the uintptr_t casts.
[1] In our CHERI compiler we have compilation mode where all pointers are
implemented as capabilities (essentially unforgeable 128-bit fat pointers).
In our original model, casts from uintptr_t (which is a 128-bit capability)
to an integer value returned the "offset" of the capability (i.e. the
difference between the virtual address and the base of the allocation).
This causes problems for cases such as checking the alignment: for example, the
expression `if ((uintptr_t)ptr & 63) == 0` is generally used to check if the
pointer is aligned to a multiple of 64 bytes. The problem with offsets is that
any pointer to the beginning of an allocation will have an offset of zero, so
this check always succeeds in that case (even if the address is not correctly
aligned). The same issues also exist when aligning up or down. Using the
alignment builtins ensures that the address is used instead of the offset. While
I have since changed the default C semantics to return the address instead of
the offset when casting, this offset compilation mode can still be used by
passing a command-line flag.
Reviewers: rsmith, aaron.ballman, theraven, fhahn, lebedev.ri, nlopes, aqjune
Reviewed By: aaron.ballman, lebedev.ri
Differential Revision: https://reviews.llvm.org/D71499
The current handling of the operators ||, && and ?: has a number of false
positives and false negatives. The issues for operator || and && are:
1. We need to add sequencing regions for the LHS and RHS as is done for the
comma operator. Not doing so causes false positives in expressions like
`((a++, false) || (a++, false))` (from PR39779, see PR22197 for another
example).
2. In the current implementation when the evaluation of the LHS fails, the RHS
is added to a worklist to be processed later. This results in false negatives
in expressions like `(a && a++) + a`.
Fix these issues by introducing sequencing regions for the LHS and RHS, and by
not deferring the visitation of the RHS.
The issues with the ternary operator ?: are similar, with the added twist that
we should not warn on expressions like `(x ? y += 1 : y += 2)` since exactly
one of the 2nd and 3rd expression is going to be evaluated, but we should still
warn on expressions like `(x ? y += 1 : y += 2) = y`.
Differential Revision: https://reviews.llvm.org/D57747
Reviewed By: rsmith
NFCs factored out of the following patches:
- Change all of the `Expr *` to `const Expr *` in SequenceChecker for
const-correctness. SequenceChecker should not modify AST nodes.
- Add some comments.
- clang-format
Differential Revision: https://reviews.llvm.org/D57659
Reviewed By: xbolva00
The FP-classification builtins (__builtin_isfinite, etc) use variadic
packs in the definition file to mean an overload set. Because of that,
floats were converted to doubles, which is incorrect. There WAS a patch
to remove the cast after the fact.
This patch switches these builtins to just be custom type checking,
calls the implicit conversions for the integer members, and makes sure
the correct L->R casts are put into place, then does type checking like
normal.
A future direction (that wouldn't be NFC) would consider making
conversions for the floating point parameter legal.
Note: The initial patch for this missed that certain systems need to
still convert half to float, since they don't support that type.
This covers:
* usual arithmetic conversions (comparisons, arithmetic, conditionals)
between different enumeration types
* usual arithmetic conversions between enums and floating-point types
* comparisons between two operands of array type
The deprecation warnings are on-by-default (in C++20 compilations); it
seems likely that these forms will become ill-formed in C++23, so
warning on them now by default seems wise.
For the first two bullets, off-by-default warnings were also added for
all the cases where we didn't already have warnings (covering language
modes prior to C++20). These warnings are in subgroups of the existing
-Wenum-conversion (except that the first case is not warned on if either
enumeration type is anonymous, consistent with our existing
-Wenum-conversion warnings).
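A short sketch (assumed example) of the forms covered by the first two bullets above:
```
enum Fruit { Apple };
enum Color { Red };

// Deprecated in C++20: arithmetic between two different enumeration types.
int mix(Fruit f, Color c) { return f + c; }

// Deprecated in C++20: usual arithmetic conversions between an enumeration
// and a floating-point type.
bool compare(Fruit f, double d) { return f < d; }
```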
This reverts commit b1e542f302.
The original 'hack' didn't chop out fp-16 to double conversions, so
systems that use FP16ConversionIntrinsics end up in IR-CodeGen with an
i16 type instead of a float type (like PPC64-BE). The bots noticed
this.
Reverting until I figure out how to fix this.
The FP-classification builtins (__builtin_isfinite, etc) use variadic
packs in the definition file to mean an overload set. Because of that,
floats were converted to doubles, which is incorrect. There WAS a patch
to remove the cast after the fact.
This patch switches these builtins to just be custom type checking,
calls the implicit conversions for the integer members, and makes sure
the correct L->R casts are put into place, then does type checking like
normal.
A future direction (that wouldn't be NFC) would consider making
conversions for the floating point parameter legal.
Remove the implicit conversion that promotes half to double
for targets that support fp16. If the target doesn't
support fp16, half will still be converted via the fp16 conversion intrinsic.
Summary:
It shouldn't promote half to double or any larger precision types for fp classification builtins,
because the fp classification builtins would get an incorrect result with a promoted argument.
For example, __builtin_isnormal with a subnormal half value should return false, but it does not,
because the subnormal half value is promoted to a normal double value.
Reviewers: aaron.ballman
Reviewed By: aaron.ballman
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D71049
This patch reapplies commit 759948467e. The patch was reverted due to a
clang-tidy test failure on Windows. The test has been modified. There
are no additional code changes.
Patch was tested with ninja check-all on Windows and Linux.
Summary of code changes:
Clang currently crashes for switch statements inside a template when the
condition is a non-integer field member because contextual implicit
conversion is skipped when parsing the condition. This conversion is
however later checked in an assert when the case statement is handled.
The conversion is skipped when parsing the condition because
the field member is set as type-dependent based on its containing class.
This patch sets the type dependency based on the field's type instead.
This patch fixes Bug 40982.
Currently, Clang does not check that features required by built-in functions
are enabled. That causes errors in the backend reported in PR44018.
This patch fixes this bug by checking that required features
are enabled.
This should fix PR44018.
Differential Revision: https://reviews.llvm.org/D70808
We seem to have been gradually growing support for atomic min/max operations
(exposing longstanding IR atomicrmw instructions). But until now there have
been gaps in the expected intrinsics. This adds support for the C11-style
intrinsics (i.e. taking _Atomic, rather than individually blessed by C11
standard), and the variants that return the new value instead of the original
one.
That way, people won't be misled by trying one form and it not working, and the
front-end is more friendly to people using _Atomic types, as we recommend.
Some clients of this function want to know about any expression that is known
to produce a 0/1 value, and others care about expressions that are semantically
boolean.
This fixes a -Wswitch-bool regression I introduced in 8bfb353bb3, pointed out
by Chris Hamilton!
This patch reapplies commit 76945821b9. The first version broke
buildbots due to clang-tidy test failures. The failures are because some
errors in templates are now diagnosed earlier (they do not wait until
instantiation). I have modified the tests to add checks for these
diagnostics/prevent these diagnostics. There are no additional code
changes.
Summary of code changes:
Clang currently crashes for switch statements inside a template when the
condition is a non-integer field member because contextual implicit
conversion is skipped when parsing the condition. This conversion is
however later checked in an assert when the case statement is handled.
The conversion is skipped when parsing the condition because
the field member is set as type-dependent based on its containing class.
This patch sets the type dependency based on the field's type instead.
This patch fixes Bug 40982.
Reviewers: rnk, gribozavr2
Patch by: Elizabeth Andrews (eandrews)
Differential revision: https://reviews.llvm.org/D69950
This commit sets up the infrastructure for auto-generating <arm_mve.h>
and doing clang-side code generation for the builtins it relies on,
and demonstrates that it works by implementing a representative sample
of the ACLE intrinsics, more or less matching the ones introduced in
LLVM IR by D67158,D68699,D68700.
Like NEON, that header file will provide a set of vector types like
uint16x8_t and C functions with names like vaddq_u32(). Unlike NEON,
the ACLE spec for <arm_mve.h> includes a polymorphism system, so that
you can write plain vaddq() and disambiguate by the vector types you
pass to it.
Unlike the corresponding NEON code, I've arranged to make every user-
facing ACLE intrinsic into a clang builtin, and implement all the code
generation inside clang. So <arm_mve.h> itself contains nothing but
typedefs and function declarations, with the latter all using the new
`__attribute__((__clang_builtin))` system to arrange that the user-
facing function names correspond to the right internal BuiltinIDs.
So the new MveEmitter tablegen system specifies the full sequence of
IRBuilder operations that each user-facing ACLE intrinsic should
translate into. Where possible, the ACLE intrinsics map to standard IR
operations such as vector-typed `add` and `fadd`; where no standard
representation exists, I call down to the sample IR intrinsics
introduced in an earlier commit.
Doing it like this means that you get the polymorphism for free just
by using __attribute__((overloadable)): the clang overload resolution
decides which function declaration is the relevant one, and _then_ its
BuiltinID is looked up, so by the time we're doing code generation,
that's all been resolved by the standard system. It also means that
you get really nice error messages if the user passes the wrong
combination of types: clang will show the declarations from the header
file and explain why each one doesn't match.
(The obvious alternative approach would be to have wrapper functions
in <arm_mve.h> which pass their arguments to the underlying builtins.
But that doesn't work in the case where one of the arguments has to be
a constant integer: the wrapper function can't pass the constantness
through. So you'd have to do that case using a macro instead, and then
use C11 `_Generic` to handle the polymorphism. Then you have to add
horrible workarounds because `_Generic` requires even the untaken
branches to type-check successfully, and //then// if the user gets the
types wrong, the error message is totally unreadable!)
Reviewers: dmgreen, miyuki, ostannard
Subscribers: mgorny, javed.absar, kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D67161
The behavior from the original patch has changed, since we're no longer
allowing LLVM to just ignore the alignment. Instead, we're just
assuming the maximum possible alignment.
Differential Revision: https://reviews.llvm.org/D68824
llvm-svn: 374562
The test fails on Windows, with
error: 'warning' diagnostics expected but not seen:
File builtin-assume-aligned.c Line 62: requested alignment
must be 268435456 bytes or smaller; assumption ignored
error: 'warning' diagnostics seen but not expected:
File builtin-assume-aligned.c Line 62: requested alignment
must be 8192 bytes or smaller; assumption ignored
llvm-svn: 374456
Code to handle __builtin_assume_aligned was allowing larger values, but
would convert this to unsigned along the way. This patch removes the
EmitAssumeAligned overloads that take unsigned to do away with this
problem.
Additionally, it adds a warning that values greater than 1 << 29 are
ignored by LLVM.
Differential Revision: https://reviews.llvm.org/D68824
llvm-svn: 374450
The static analyzer is warning about potential null dereferences, but in these cases we should be able to use castAs<> directly, and if not, the assert will fire for us.
llvm-svn: 373911
The warnings now in -Wformat-type-confusion don't align with how we interpret
'pedantic' in clang, and don't belong in -pedantic.
Differential revision: https://reviews.llvm.org/D67775
llvm-svn: 373774
The static analyzer is warning about potential null dereferences, but in these cases we should be able to use castAs<RecordType> directly, and if not, the assert will fire for us.
llvm-svn: 373584
The static analyzer is warning about potential null dereferences, but in these cases we should be able to use castAs<VectorType> directly, and if not, the assert will fire for us.
llvm-svn: 373478
Summary:
- Rearrange the atomic expr order to the API order when rebuilding
atomic expr during template instantiation.
Reviewers: erichkeane
Subscribers: jfb, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D67924
llvm-svn: 372640
Extracted from D63082. GCC has this warning under -Wint-in-bool-context, but as noted in D63082's review, we should put it under TautologicalConstantCompare.
llvm-svn: 372531
Commit c15aa241f8 ("[CLANG][BPF] change __builtin_preserve_access_index()
signature") changed the builtin function signature to
PointerT __builtin_preserve_access_index(PointerT ptr)
with a pointer type as the argument/return type, where argument and
return types must be the same.
There is really no reason for this constraint. The builtin just
delimits a code region so that the IR builtins
__builtin_{array, struct, union}_preserve_access_index
can be applied.
This patch removes the pointer type restriction to permit any
argument type as long as it is permitted by the compiler.
Differential Revision: https://reviews.llvm.org/D67883
llvm-svn: 372516
RebuildAtomicExpr was skipping doing semantic analysis which broke in
the cases where the expressions were not dependent. This resulted in the
ImplicitCastExpr from an array to a pointer being lost, causing a crash
in IR CodeGen.
Differential Revision: https://reviews.llvm.org/D67854
llvm-svn: 372422
The clang intrinsic __builtin_preserve_access_index() currently
has signature:
const void * __builtin_preserve_access_index(const void * ptr)
This may cause compiler warning when:
- parameter type is "volatile void *" or "const volatile void *", or
- the assign-to type of the intrinsic does not have "const" qualifier.
Further, this signature does not allow dereference of the
builtin result pointer as it is a "const void *" type, which
adds extra step for the user to do type casting.
Let us change the signature to:
PointerT __builtin_preserve_access_index(PointerT ptr)
such that the result and argument types are the same.
With this, directly dereferencing the builtin return value
becomes possible.
Differential Revision: https://reviews.llvm.org/D67734
llvm-svn: 372294
Also, add a diagnostic under -Wformat for printing a boolean value as a
character.
rdar://54579473
Differential revision: https://reviews.llvm.org/D66856
llvm-svn: 372247
Also, add a diagnostic group, -Wobjc-signed-char-bool, to control all these
related diagnostics.
rdar://51954400
Differential revision: https://reviews.llvm.org/D67559
llvm-svn: 372183
Currently, for SAE instructions we only allow _MM_FROUND_CUR_DIRECTION (bit 2) or _MM_FROUND_NO_EXC (bit 3) to be used as the immediate passed to the intrinsics. But these instructions don't perform rounding, so _MM_FROUND_CUR_DIRECTION is just sort of a default placeholder for when you don't want to suppress exceptions. Using _MM_FROUND_NO_EXC by itself is really bit-equivalent to (_MM_FROUND_NO_EXC | _MM_FROUND_TO_NEAREST_INT), since _MM_FROUND_TO_NEAREST_INT is 0. Since we aren't rounding on these instructions, we should also accept (_MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC) as equivalent to (_MM_FROUND_NO_EXC). icc allows this, but gcc does not.
Differential Revision: https://reviews.llvm.org/D67289
llvm-svn: 371430
Previously, -Wsizeof-pointer-div failed to catch:
const int *r;
sizeof(r) / sizeof(int);
Now fixed.
Also introduced -Wsizeof-array-div which catches bugs like:
sizeof(r) / sizeof(short);
(Array element type does not match type of sizeof operand).
llvm-svn: 371222
Summary:
This is follow up of https://reviews.llvm.org/D66699.
We might get ISEL ICE if we call vec_dss with non const 3rd arg.
```
Cannot select: intrinsic %llvm.ppc.altivec.dst
```
We should check the constraints in clang and generate better error
messages.
Reviewers: nemanjai, hfinkel, echristo, #powerpc, wuzish
Reviewed By: #powerpc, wuzish
Subscribers: wuzish, kbarton, MaskRay, shchenz, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D66748
llvm-svn: 370912
Summary:
This is similar to vec_ct* in https://reviews.llvm.org/rL304205.
The argument must be a constant, otherwise instruction selection
will fail. always_inline is not enough for isel to always fold
everything away at -O0.
The fix is to turn the function into macros in altivec.h.
Fixes https://bugs.llvm.org/show_bug.cgi?id=43072
Reviewers: nemanjai, hfinkel, #powerpc, wuzish
Reviewed By: #powerpc, wuzish
Subscribers: wuzish, kbarton, MaskRay, shchenz, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D66699
llvm-svn: 370902
The err_typecheck_call_too_few_args diagnostic takes arguments, but
none were provided causing clang to crash when attempting to diagnose
an enqueue_kernel call with too few arguments.
Fixes llvm.org/PR42045
Differential Revision: https://reviews.llvm.org/D66883
llvm-svn: 370322
Only honour format_arg attributes on -[NSBundle localizedStringForKey] when its
argument has a format specifier in it; otherwise it's likely to just be a key to
fetch localized strings.
Fixes rdar://23622446
Differential revision: https://reviews.llvm.org/D27165
llvm-svn: 368878
Clang currently crashes for switch statements inside a template when
the condition is a non-integer field. The crash is due to incorrect
type-dependency of field. Type-dependency of member expressions is
currently set based on the containing class. This patch changes this for
'members of the current instantiation' to set the type dependency based
on the member's type instead.
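A plausible reduction (assumed, not taken from the patch):
```
struct Cond { operator int() const { return 0; } };

template <typename T> struct Holder {
  Cond cond;  // non-integer field; needs a contextual conversion to int
  void run() {
    switch (cond) {  // previously crashed while handling the case label
    case 0:
      break;
    }
  }
};
```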
A few lit tests started to fail once I applied this patch because errors
are now diagnosed earlier (does not wait till instantiation). I've modified
these tests in this patch as well.
Patch fixes PR#40982
Differential Revision: https://reviews.llvm.org/D61027
llvm-svn: 368706
Issue a warning when the code tries to do an implicit int -> float
conversion, where the float type has a narrower significand than the
integer type.
The new warning is controlled by flag -Wimplicit-int-float-conversion,
under -Wimplicit-float-conversion and -Wconversion. It is also silenced
when c++11 narrowing warning is issued.
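A short sketch of code that now warns (assumed example):
```
// A 64-bit integer can exceed float's 24-bit significand, so the implicit
// conversion may lose precision.
float widen(long long v) { return v; }  // -Wimplicit-int-float-conversion
```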
Differential Revision: https://reviews.llvm.org/D64666
llvm-svn: 367497
This CL adds an optional warning to diagnose uses of the
`__builtin_alloca` family of functions. The use of these functions is
discouraged by many, so it seems like a good idea to allow clang to warn
about it.
Patch by Elaina Guan!
Differential Revision: https://reviews.llvm.org/D64883
llvm-svn: 367067
This reverts commit r366972 which broke the following tests:
Clang :: CXX/dcl.decl/dcl.init/dcl.init.list/p7-0x.cpp
Clang :: CXX/dcl.decl/dcl.init/dcl.init.list/p7-cxx11-nowarn.cpp
llvm-svn: 366979
Issue a warning when the code tries to do an implicit int -> float
conversion, where the float type has a narrower significand than the
integer type.
The new warning is controlled by flag -Wimplicit-int-float-conversion,
under -Wimplicit-float-conversion and -Wconversion.
Differential Revision: https://reviews.llvm.org/D64666
llvm-svn: 366972
This patch series adds support for the next-generation arch13
CPU architecture to the SystemZ backend.
This includes:
- Basic support for the new processor and its features.
- Support for low-level builtins mapped to new LLVM intrinsics.
- New high-level intrinsics in vecintrin.h.
- Indicate support by defining __VEC__ == 10303.
Note: No currently available Z system supports the arch13
architecture. Once new systems become available, the
official system name will be added as supported -march name.
llvm-svn: 365933
For background of BPF CO-RE project, please refer to
http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile bpf programs
adjustable on struct/union layout change so the same
program can run on multiple kernels with adjustment
before loading based on native kernel structures.
In order to do this, we need to keep track of the GEP (getelementptr)
instruction's base and result debuginfo types, so we
can adjust on the host based on kernel BTF info.
Capturing such information as an IR optimization is hard,
as various optimizations may have tweaked the GEP, and once a
union is replaced by a structure it is impossible to track the
field index for union member accesses.
Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
addr = preserve_array_access_index(base, index, dimension)
addr = preserve_union_access_index(base, di_index)
addr = preserve_struct_access_index(base, gep_index, di_index)
here,
base: the base pointer for the array/union/struct access.
index: the last access index for array, the same for IR/DebugInfo layout.
dimension: the array dimension.
gep_index: the access index based on IR layout.
di_index: the access index based on user/debuginfo types.
If using these intrinsics blindly, i.e., transforming all GEPs
to these intrinsics and later on reducing them to GEPs, we have
seen up to 7% more instructions generated. To avoid such an overhead,
a clang builtin is proposed:
base = __builtin_preserve_access_index(base)
such that user wraps to-be-relocated GEPs in this builtin
and preserve_*_access_index intrinsics only apply to
those GEPs. Such a builtin will prevent performance degradation
if people do not use CO-RE, even for programs which use
bpf_probe_read().
For example, consider the following source:
$ cat test.c
struct sk_buff {
int i;
int b1:1;
int b2:2;
union {
struct {
int o1;
int o2;
} o;
struct {
char flags;
char dev_id;
} dev;
int netid;
} u[10];
};
static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
= (void *) 4;
#define _(x) (__builtin_preserve_access_index(x))
int bpf_prog(struct sk_buff *ctx) {
char dev_id;
bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
return dev_id;
}
$ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
test.c >& log
The generated IR looks like below:
...
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
%2 = alloca %struct.sk_buff*, align 8
%3 = alloca i8, align 1
store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
%4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
%5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
%6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
%struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
%7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
[10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
%8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
%union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
%9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
%10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
%struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
%11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
%12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
%13 = sext i8 %12 to i32, !dbg !54
call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
ret i32 %13, !dbg !57
}
!19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
!26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
!34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)
Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index
attached to instructions to provide struct/union debuginfo type information.
For &ctx->u[5].dev.dev_id,
. The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
. The "%7 = ..." represents array subscript "5".
. The "%8 = ..." represents union member "dev" with index 1 for DI layout.
. The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.
Basically, traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access index
can be achieved.
The intrinsics also contain enough information to regenerate codes for IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D61809
llvm-svn: 365438
For background of BPF CO-RE project, please refer to
http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile bpf programs
adjustable on struct/union layout change so the same
program can run on multiple kernels with adjustment
before loading based on native kernel structures.
In order to do this, we need to keep track of the GEP (getelementptr)
instruction's base and result debuginfo types, so we
can adjust on the host based on kernel BTF info.
Capturing such information as an IR optimization is hard,
as various optimizations may have tweaked the GEP, and once a
union is replaced by a structure it is impossible to track the
field index for union member accesses.
Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
addr = preserve_array_access_index(base, index, dimension)
addr = preserve_union_access_index(base, di_index)
addr = preserve_struct_access_index(base, gep_index, di_index)
here,
base: the base pointer for the array/union/struct access.
index: the last access index for array, the same for IR/DebugInfo layout.
dimension: the array dimension.
gep_index: the access index based on IR layout.
di_index: the access index based on user/debuginfo types.
If using these intrinsics blindly, i.e., transforming all GEPs
to these intrinsics and later on reducing them to GEPs, we have
seen up to 7% more instructions generated. To avoid such an overhead,
a clang builtin is proposed:
base = __builtin_preserve_access_index(base)
such that user wraps to-be-relocated GEPs in this builtin
and preserve_*_access_index intrinsics only apply to
those GEPs. Such a builtin will prevent performance degradation
if people do not use CO-RE, even for programs which use
bpf_probe_read().
For example, consider the following source:
$ cat test.c
struct sk_buff {
int i;
int b1:1;
int b2:2;
union {
struct {
int o1;
int o2;
} o;
struct {
char flags;
char dev_id;
} dev;
int netid;
} u[10];
};
static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
= (void *) 4;
#define _(x) (__builtin_preserve_access_index(x))
int bpf_prog(struct sk_buff *ctx) {
char dev_id;
bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
return dev_id;
}
$ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
test.c >& log
The generated IR looks like below:
...
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
%2 = alloca %struct.sk_buff*, align 8
%3 = alloca i8, align 1
store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
%4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
%5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
%6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
%struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
%7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
[10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
%8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
%union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
%9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
%10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
%struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
%11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
%12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
%13 = sext i8 %12 to i32, !dbg !54
call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
ret i32 %13, !dbg !57
}
!19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
!26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
!34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)
Note that the @llvm.preserve.{struct,union}.access.index calls have the llvm.preserve.access.index
metadata attached to provide struct/union debuginfo type information.
For &ctx->u[5].dev.dev_id,
. The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
. The "%7 = ..." represents array subscript "5".
. The "%8 = ..." represents union member "dev" with index 1 for DI layout.
. The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.
Basically, by recursively traversing the use-def chain of the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access indices
can be recovered.
The intrinsics also contain enough information to regenerate code for the IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 365435
On macOS, BOOL is a typedef for signed char, but it should never hold a value
that isn't 1 or 0. Any code that expects a different value in their BOOL should
be fixed.
rdar://51954400
Differential revision: https://reviews.llvm.org/D63856
llvm-svn: 365408
Summary:
Since the addition of __builtin_is_constant_evaluated, the result of an expression can change based on whether it is evaluated in a constant context. A lot of semantic checking performs evaluations without specifying the context, which can lead to wrong diagnostics.
for example:
```
constexpr int i0 = (long long)__builtin_is_constant_evaluated() * (1ll << 33); //#1
constexpr int i1 = (long long)!__builtin_is_constant_evaluated() * (1ll << 33); //#2
```
Before the patch, #2 was diagnosed incorrectly and #1 wasn't diagnosed.
After the patch, #1 is diagnosed as it should be and #2 isn't.
Changes:
- add a flag to Sema to pass in constant context mode.
- in SemaChecking.cpp, calls to Expr::Evaluate* are now done in constant context when they should be.
- in SemaChecking.cpp, diagnostics for UB are not checked for in constant context because an error will be emitted by the constant evaluator.
- in SemaChecking.cpp, diagnostics for constructs that cannot appear in constant context are not checked for in constant context.
- in SemaChecking.cpp, diagnostics on constant expressions are always emitted because constant expressions are always evaluated.
- semantic checking for initialization of constexpr variables is now done in constant context.
- adapt tests that were depending on warning changes.
- add tests.
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D62009
llvm-svn: 363488
The `__builtin_msa_ctcmsa` and `__builtin_msa_cfcmsa` builtins are mapped
to the `ctcmsa` and `cfcmsa` instructions respectively. While MSA
control registers have indexes in 0..7 range, the instructions accept
register index in 0..31 range [1].
[1] MIPS Architecture for Programmers Volume IV-j:
The MIPS64 SIMD Architecture Module
https://www.mips.com/?do-download=the-mips64-simd-architecture-module
llvm-svn: 361967
These don't support embedded rounding so we shouldn't be setting HasRC. That way we only
allow current direction and suppress all exceptions.
llvm-svn: 361897
where either the modification or the other access is unreachable.
This reverts r359984 (which reverted r359962). The bug in clang-tidy's
test suite exposed by the original commit was fixed in r360009.
llvm-svn: 360010
us emitting the operand of __builtin_constant_p if it has side-effects.
Original commit message:
Fix interactions between __builtin_constant_p and constexpr to match
current trunk GCC.
GCC permits information from outside the operand of
__builtin_constant_p (but in the same constant evaluation context) to be
used within that operand; clang now does so too. A few other minor
deviations from GCC's behavior showed up in my testing and are also
fixed (matching GCC):
* Clang now supports nullptr_t as the argument type for
__builtin_constant_p
* Clang now returns true from __builtin_constant_p if called with a
null pointer
* Clang now returns true from __builtin_constant_p if called with an
integer cast to pointer type
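For example, a minimal C sketch restating the last two cases above (illustrative, not taken from the patch's tests):
```
int constant_p_examples(void) {
  int from_null = __builtin_constant_p((void *)0); // null pointer argument: now returns 1
  int from_cast = __builtin_constant_p((int *)42); // integer cast to pointer type: now returns 1
  return from_null + from_cast;
}
```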
llvm-svn: 359367
This provides intrinsics support for Memory Tagging Extension (MTE),
which was introduced with the Armv8.5-a architecture.
These intrinsics are available when __ARM_FEATURE_MEMORY_TAGGING is defined.
Each intrinsic is described in detail in the ACLE Q1 2019 documentation:
https://developer.arm.com/docs/101028/latest
Reviewed By: Tim Nortover, David Spickett
Differential Revision: https://reviews.llvm.org/D60485
llvm-svn: 359348
current trunk GCC.
GCC permits information from outside the operand of
__builtin_constant_p (but in the same constant evaluation context) to be
used within that operand; clang now does so too. A few other minor
deviations from GCC's behavior showed up in my testing and are also
fixed (matching GCC):
* Clang now supports nullptr_t as the argument type for
__builtin_constant_p
* Clang now returns true from __builtin_constant_p if called with a
null pointer
* Clang now returns true from __builtin_constant_p if called with an
integer cast to pointer type
llvm-svn: 359059
Summary:
- If a parameter is used, nonnull checking needs function prototype to
retrieve the corresponding parameter's attributes. However, at the
prototype substitution phase when a template is being instantiated,
expression may be created and checked without a fully specialized
prototype. Under such a scenario, skip nonnull checking on that
argument.
Reviewers: rjmccall, tra, yaxunl
Subscribers: javed.absar, kristof.beyls, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D59900
llvm-svn: 357236
This fixes a false positive on the following, where st is configured to have
different sizes based on some preprocessor logic:
if (sizeof(buf) == sizeof(*st))
memcpy(&buf, st, sizeof(*st));
llvm-svn: 357041
Bail-out of CheckArrayAccess when the types of the base expression before
and after eventual casts are dependent. We will get another chance to check
for array bounds during instantiation. Fixes PR41087.
Differential Revision: https://reviews.llvm.org/D59776
Reviewed By: efriedma
llvm-svn: 356957
These diagnose overflowing calls to a subset of fortifiable functions. Some
functions, like sprintf or strcpy, aren't supported right now, but we should
probably support these in the future. We previously supported this kind of
functionality with -Wbuiltin-memcpy-chk-size, but that diagnostic doesn't work
with _FORTIFY implementations that use wrapper functions. Also unlike that
diagnostic, we emit these warnings regardless of whether _FORTIFY_SOURCE is
actually enabled, which is nice for programs that don't enable the runtime
checks.
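For illustration, a minimal sketch of the kind of call these warnings catch (assuming an ordinary memcpy declaration; not taken from the patch's tests):
```
#include <string.h>

char dst[4];

void copy_too_much(const char *src) {
  // The destination is known to be 4 bytes, so a constant 10-byte copy can be
  // diagnosed at compile time, whether or not _FORTIFY_SOURCE is enabled.
  memcpy(dst, src, 10);
}
```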
Why not just use diagnose_if, like Bionic does? We can get better diagnostics in
the compiler (i.e. mention the sizes), and we have the potential to diagnose
sprintf and strcpy which is impossible with diagnose_if (at least, in languages
that don't support C++14 constexpr). This approach also saves standard libraries
from having to add diagnose_if.
rdar://48006655
Differential revision: https://reviews.llvm.org/D58797
llvm-svn: 356397
This patch includes the necessary code for converting between a fixed point type and integer.
This also includes constant expression evaluation for conversions with these types.
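A minimal sketch of such a conversion, assuming Clang's Embedded-C fixed-point support is enabled (e.g. via -ffixed-point):
```
void fixed_point_round_trip(void) {
  _Accum a = 3;    // integer converted to fixed point
  int i = (int)a;  // fixed point converted back to integer
  (void)i;
}
```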
Differential Revision: https://reviews.llvm.org/D56900
llvm-svn: 355462
...instead of just comparing rank. Also, fix a bad warning about
_Float16, since it's declared out of order in BuiltinTypes.def,
meaning comparing rank using BuiltinType::getKind() is incorrect.
Differential revision: https://reviews.llvm.org/D58254
llvm-svn: 354190
We were warning on valid ObjC property reference exprs, and passing
in the wrong arguments to DiagnoseFloatingImpCast (leading to a badly
worded diagnostic).
rdar://47644670
Differential revision: https://reviews.llvm.org/D58145
llvm-svn: 354074
Summary:
This makes it consistent with `memcmp` and `__builtin_bcmp`.
Also see the discussion in https://reviews.llvm.org/D56593.
Reviewers: jyknight
Subscribers: kristina, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D58120
llvm-svn: 354023
This builtin has the same UI as __builtin_object_size, but has the
potential to be evaluated dynamically. It is meant to be used as a
drop-in replacement for libraries that use __builtin_object_size when
a dynamic checking mode is enabled. For instance,
__builtin_object_size fails to provide any extra checking in the
following function:
void f(size_t alloc) {
  char* p = malloc(alloc);
  strcpy(p, "foobar"); // expands to __builtin___strcpy_chk(p, "foobar", __builtin_object_size(p, 0))
}
This is an overflow if alloc < 7, but because LLVM can't fold the
object size intrinsic statically, it folds __builtin_object_size to
-1. With __builtin_dynamic_object_size, alloc is passed through to
__builtin___strcpy_chk.
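A minimal sketch of the difference (illustrative, not from the patch):
```
#include <stdlib.h>

size_t known_at_run_time(size_t alloc) {
  char *p = malloc(alloc);
  // __builtin_object_size(p, 0) would fold to -1 here; the dynamic variant may
  // instead be evaluated at run time and produce 'alloc'.
  return __builtin_dynamic_object_size(p, 0);
}
```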
rdar://32212419
Differential revision: https://reviews.llvm.org/D56760
llvm-svn: 352665
Re-enable format string warnings on printf.
The warnings are still incomplete. Apparently it is undefined to use a
vector specifier without a length modifier, which is not currently
warned on. Additionally, type warnings appear to not be working with
the hh modifier, and aren't warning on all of the special restrictions
from C99 printf.
llvm-svn: 352540
As noted in https://bugs.llvm.org/show_bug.cgi?id=36651, the specialization for
isPodLike<std::pair<...>> did not match the expectation of
std::is_trivially_copyable which makes the memcpy optimization invalid.
This patch renames the llvm::isPodLike trait into llvm::is_trivially_copyable.
Unfortunately std::is_trivially_copyable is not portable across compiler / STL
versions. So a portable version is provided too.
Note that the following specializations were invalid:
std::pair<T0, T1>
llvm::Optional<T>
Tests have been added to assert that the former specializations are respected by the
standard usage of llvm::is_trivially_copyable, and that when a decent version
of std::is_trivially_copyable is available, llvm::is_trivially_copyable is
compared to std::is_trivially_copyable.
As of this patch, llvm::Optional is no longer considered trivially copyable,
even if T is. This is to be fixed in a later patch, as it has impact on a
long-running bug (see r347004)
Note that GCC warns about this UB, but this got silenced by https://reviews.llvm.org/D50296.
Differential Revision: https://reviews.llvm.org/D54472
llvm-svn: 351701
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
This patch includes logic for constant expression evaluation of fixed point additions.
Differential Revision: https://reviews.llvm.org/D55868
llvm-svn: 351593
Summary: In the [expr.sub] p1, we can read that for a given E1[E2], E1 is sequenced before E2.
Patch by Mateusz Janek.
Reviewers: rsmith, Rakete1111
Reviewed By: rsmith, Rakete1111
Subscribers: riccibruno, lebedev.ri, Rakete1111, hiraditya, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D50766
llvm-svn: 350874
When the type of the base expression after IgnoreParenCasts is incomplete,
it is still possible to diagnose an array access which precedes the array
bounds.
This is a follow-up on D55862 which added an early return when the type of
the base expression after IgnoreParenCasts was incomplete.
Differential Revision: https://reviews.llvm.org/D56050
Reviewed By: efriedma
llvm-svn: 350622
When checking that the array access is not out-of-bounds in CheckArrayAccess
it is possible that the type of the base expression after IgnoreParenCasts is
incomplete, even though the type of the base expression before IgnoreParenCasts
is complete. In this case we have no information about whether the array access
is out-of-bounds and we should just bail-out instead. This fixes PR39746 which
was caused by trying to obtain the size of an incomplete type.
Differential Revision: https://reviews.llvm.org/D55862
Reviewed By: efriedma
llvm-svn: 349811
Only explicitly look through integer and floating-point promotion where the result type is actually a promotion, which is not always the case for bit-fields in C.
Patch by Bevin Hansson.
llvm-svn: 349497
Summary:
This patch adds `__builtin_launder`, which is required to implement `std::launder`. Additionally GCC provides `__builtin_launder`, so this brings Clang in line with GCC.
I'm not exactly sure what magic `__builtin_launder` requires, but based on previous discussions this patch applies a `@llvm.invariant.group.barrier`. As noted in previous discussions, this may not be enough to correctly handle vtables.
Reviewers: rnk, majnemer, rsmith
Reviewed By: rsmith
Subscribers: kristina, Romain-Geissler-1A, erichkeane, amharc, jroelofs, cfe-commits, Prazek
Differential Revision: https://reviews.llvm.org/D40218
llvm-svn: 349195
Only explicitly look through integer and floating-point promotion where the result type is actually a promotion, which is not always the case for bit-fields in C.
llvm-svn: 348889
We would issue a false-positive diagnostic for parameters in function declarations shadowing fields; we now only issue the diagnostic on a function definition instead.
llvm-svn: 348400
It seems the two failing tests can be simply fixed after r348037
Fix 3 cases in Analysis/builtin-functions.cpp
Delete the bad CodeGen/builtin-constant-p.c for now
llvm-svn: 348053
Kept the "indirect_builtin_constant_p" test case in test/SemaCXX/constant-expression-cxx1y.cpp
while we are investigating why the following snippet fails:
extern char extern_var;
struct { int a; } a = {__builtin_constant_p(extern_var)};
llvm-svn: 348039
This was reverted in r347656 due to me thinking it caused a miscompile of
Chromium. Turns out it was the Chromium code that was broken.
llvm-svn: 347756
Summary:
Prior to this patch, OpenCL code such as the following would attempt to create
a BranchInst with a non-bool argument:
if (enqueue_kernel(get_default_queue(), 0, nd, ^(void){})) /* ... */
This patch is a follow up on a similar issue with pipe builtin
operations. See commit r280800 and https://bugs.llvm.org/show_bug.cgi?id=30219.
This change, while being conservative on non-builtin functions,
should set the type of expressions invoking builtins to the
proper type, instead of defaulting to `bool` and requiring
manual overrides in Sema::CheckBuiltinFunctionCall.
In addition to tests for enqueue_kernel, the tests are extended to
check other OpenCL builtins.
Reviewers: Anastasia, spatel, rsmith
Reviewed By: Anastasia
Subscribers: kristina, cfe-commits, svenvh
Differential Revision: https://reviews.llvm.org/D52879
llvm-svn: 347658
This caused a miscompile in Chrome (see crbug.com/908372) that's
illustrated by this small reduction:
static bool f(int *a, int *b) {
return !__builtin_constant_p(b - a) || (!(b - a));
}
int arr[] = {1,2,3};
bool g() {
return f(arr, arr + 3);
}
$ clang -O2 -S -emit-llvm a.cc -o -
g() should return true, but after r347417 it became false for some reason.
This also reverts the follow-up commits.
r347417:
> Re-Reinstate 347294 with a fix for the failures.
>
> Don't try to emit a scalar expression for a non-scalar argument to
> __builtin_constant_p().
>
> Third time's a charm!
r347446:
> The result of is.constant() is unsigned.
r347480:
> A __builtin_constant_p() returns 0 with a function type.
r347512:
> isEvaluatable() implies a constant context.
>
> Assume that we're in a constant context if we're asking if the expression can
> be compiled into a constant initializer. This fixes the issue where a
> __builtin_constant_p() in a compound literal was diagnosed as not being
> constant, even though it's always possible to convert the builtin into a
> constant.
r347531:
> A "constexpr" is evaluated in a constant context. Make sure this is reflected
> if a __builtin_constant_p() is a part of a constexpr.
llvm-svn: 347656
Summary:
GCC already catches these situations so we should handle it too.
GCC warns in C++ mode only (does anybody know why?). I think it is useful in C mode too.
Reviewers: rsmith, erichkeane, aaron.ballman, efriedma, xbolva00
Reviewed By: xbolva00
Subscribers: efriedma, craig.topper, scanon, cfe-commits
Differential Revision: https://reviews.llvm.org/D52835
llvm-svn: 346865
This patch fixes a minimum divider for offset in intrinsics
msa_[st/ld]_[b/h/w/d], when value is known in compile time.
Differential revision: https://reviews.llvm.org/D54038
llvm-svn: 346302
A mask type is a 1 to 8-byte string that follows the "mask." annotation
in the format string. This enables obfuscating data in the event the
provided privacy level isn't enabled.
rdar://problem/36756282
llvm-svn: 346211
The size of an os_log buffer is known at any stage of compilation, so making it
a constant expression means that the common idiom of declaring a buffer for it
won't result in a VLA. That allows the compiler to skip saving and restoring
the stack pointer around such buffers.
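For illustration, a sketch of the idiom in question, assuming the size is queried through Clang's __builtin_os_log_format_buffer_size:
```
void log_value(int x) {
  // With the size now being a constant expression, 'buf' is an ordinary
  // fixed-size array rather than a VLA.
  char buf[__builtin_os_log_format_buffer_size("%d", x)];
  (void)buf;
}
```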
This also moves the OSLog and other FormatString helpers from
libclangAnalysis to libclangAST to avoid a circular dependency.
llvm-svn: 345971
We haven't supported compiling ObjC1 for a long time (and never will again), so
there isn't any reason to keep these separate. This patch replaces
LangOpts::ObjC1 and LangOpts::ObjC2 with LangOpts::ObjC.
Differential revision: https://reviews.llvm.org/D53547
llvm-svn: 345637
Summary:
- Add `UETT_PreferredAlignOf` to account for the difference between `__alignof` and `alignof`
- `AlignOfType` now returns ABI alignment instead of preferred alignment iff clang-abi-compat > 7, and one uses _Alignof or alignof
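For illustration, a sketch of the distinction; the concrete values assume a typical 32-bit x86 (i386 System V) target and are not part of the patch:
```
// _Alignof / alignof now report the ABI alignment, while __alignof keeps
// reporting the preferred alignment.
int abi_align(void)       { return _Alignof(long long);  } // 4 on i386
int preferred_align(void) { return __alignof(long long); } // 8 on i386
```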
Patch by Nicole Mazzuca!
Differential Revision: https://reviews.llvm.org/D53207
llvm-svn: 345419
Constructing a global std::map requires clang to generate a linear
amount of code to construct the initializer list if the elements are not
constexpr-constructible. std::vector is not constexpr-constructible, so
this code pattern was generating large amounts of code.
Also, because of PR38829, LLVM is pathologically slow on large basic
blocks, and this causes slow compilation. This works around the bug and
reduces code size.
SemaChecking.cpp -debug-info-kind=limited:
            time        objsize
before:   1m45.023s      9.8M
after:    0m25.205s      6.9M
So, a 42% obj size reduction and 3.2x speedup.
llvm-svn: 345329
Add a warning if a parameter with a named address space is passed
to a to_addr builtin.
For example:
int i;
to_private(&i); // generate warning as conversion from private to private is redundant.
Patch by Alistair Davies.
Differential Revision: https://reviews.llvm.org/D51411
llvm-svn: 342638
unsigned long long builtin_unpack_vector_int128 (vector int128_t, int);
vector int128_t builtin_pack_vector_int128 (unsigned long long, unsigned long long);
Builtins should behave the same way as in GCC.
Patch By: wuzish (Zixuan Wu)
Differential Revision: https://reviews.llvm.org/D52074
llvm-svn: 342614
Summary:
_Atomic and __sync_* operations are implicitly sequentially-consistent. Some
codebases want to force explicit usage of memory order instead. This warning
allows them to know where implicit sequentially-consistent memory order is used.
The warning isn't on by default because _Atomic was purposefully designed to
have seq_cst as the default: the idea was that it's the right thing to use most
of the time. This warning allows developers who disagree to enforce explicit
usage instead.
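For example, code along these lines is what the warning is aimed at (a sketch, not from the patch's tests):
```
#include <stdatomic.h>

_Atomic(int) counter;

void bump(void) {
  counter++;  // implicitly sequentially-consistent: flagged when the warning is enabled
  // Explicit ordering, which such codebases would require instead:
  atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
}
```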
A follow-up patch will take care of C++'s std::atomic. It'll be different enough
from this patch that I think it should be separate: for C++ the atomic
operations all have a memory order parameter (or two), but it's defaulted. I
believe this warning should trigger when the default is used, but not when
seq_cst is used explicitly (or implicitly as the failure order for cmpxchg).
<rdar://problem/28172966>
Reviewers: rjmccall
Subscribers: dexonsmith, cfe-commits
Differential Revision: https://reviews.llvm.org/D51084
llvm-svn: 341860
Namely, print the likely macro name when it's used, and include the actual
computed sizes in the diagnostic message, which are sometimes not obvious.
rdar://43909200
Differential revision: https://reviews.llvm.org/D51697
llvm-svn: 341566
This adds the following intrinsics:
_kshiftli_mask8
_kshiftli_mask16
_kshiftli_mask32
_kshiftli_mask64
_kshiftri_mask8
_kshiftri_mask16
_kshiftri_mask32
_kshiftri_mask64
llvm-svn: 341234
Summary:
C++11 onwards specs the non-member functions atomic_load and atomic_load_explicit as taking the atomic<T> by const (potentially volatile) pointer. C11, in its infinite wisdom, decided to drop the const, and C17 will fix this with DR459 (the current draft forgot to fix B.16, but that’s not the normative part).
clang’s lib/Headers/stdatomic.h implements these as #define to the __c11_* equivalent, which are builtins with custom typecheck. Fix the typecheck.
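A minimal sketch of the kind of code the fixed typecheck accepts (illustrative):
```
#include <stdatomic.h>

int load_it(const atomic_int *p) {
  // Loading through a pointer to a const-qualified atomic now matches the
  // C++11-style signature of atomic_load.
  return atomic_load(p);
}
```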
D47613 takes care of the libc++ side.
Discussion: http://lists.llvm.org/pipermail/cfe-dev/2018-May/058129.html
<rdar://problem/27426936>
Reviewers: rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D47618
llvm-svn: 338743
This diagnoses calls to memset that have the second and third arguments
transposed, for example:
memset(buf, sizeof(buf), 0);
This is done by checking if the third argument is a literal 0, or if the second
is a sizeof expression (and the third isn't). The first check is also done for
calls to bzero.
Differential revision: https://reviews.llvm.org/D49112
llvm-svn: 337470
The '%tu'/'%td' as formatting specifiers have been used to print out the
NSInteger/NSUInteger values for a long time. Typically their ABI matches, but that's
not the case on watchOS. The ABI difference boils down to the following:
- Regular 32-bit darwin targets (like armv7) use 'ptrdiff_t' of type 'int',
which matches 'NSInteger'.
- WatchOS arm target (armv7k) uses 'ptrdiff_t' of type 'long', which doesn't
match 'NSInteger' of type 'int'.
Because of this ABI difference these specifiers trigger -Wformat warnings only
for watchOS builds, which is really inconvenient for cross-platform code.
This patch avoids this -Wformat warning for '%tu'/'%td' and NS[U]Integer only,
and instead uses the new -Wformat-pedantic warning that JF introduced in
https://reviews.llvm.org/D47290. This is acceptable because Darwin guarantees that,
despite the watchOS ABI differences, sizeof(ptrdiff_t) == sizeof(NS[U]Integer),
and alignof(ptrdiff_t) == alignof(NS[U]Integer) so the warning is therefore noisy
for pedantic reasons.
I'll update public documentation to ensure that this behaviour is properly
communicated.
rdar://41739204
Differential Revision: https://reviews.llvm.org/D48852
llvm-svn: 336396
If a function has multiple format_arg attributes, clang only considers
the first it finds (because AttributeLists are in reverse order, not
necessarily the textually first) and ignores all others.
Loop over all FormatArgAttr to print warnings for all declared
format_arg attributes.
For instance, libintl's ngettext (select plural or singular version of
format string) has two __format_arg__ attributes.
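For reference, a libintl-style declaration with two format_arg attributes looks roughly like this (signature written from memory, treat it as an assumption); with this change, format checking follows both attributes instead of only the first one found:
```
extern char *ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
    __attribute__((format_arg(1))) __attribute__((format_arg(2)));
```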
Differential Revision: https://reviews.llvm.org/D48734
llvm-svn: 336239
Summary:
Pick D42933 back up, and make NSInteger/NSUInteger with %zu/%zi specifiers on Darwin warn only in pedantic mode. The default -Wformat recently started warning for the following code because of the added support for analysis for the '%zi' specifier.
NSInteger i = NSIntegerMax;
NSLog(@"max NSInteger = %zi", i);
The problem is that on armv7 %zi is 'long', and NSInteger is typedefed to 'int' in Foundation. We should avoid this warning as it's inconvenient to our users: it's target specific (happens only on armv7 and not arm64), and breaks their existing code. We should also silence the warning for the '%zu' specifier to ensure consistency. This is acceptable because Darwin guarantees that, despite the unfortunate choice of typedef, sizeof(size_t) == sizeof(NS[U]Integer), the warning is therefore noisy for pedantic reasons. Once this is in I'll update public documentation.
Related discussion on cfe-dev:
http://lists.llvm.org/pipermail/cfe-dev/2018-May/058050.html
<rdar://36874921&40501559>
Reviewers: ahatanak, vsapsai, alexshap, aaron.ballman, javed.absar, jfb, rjmccall
Subscribers: kristof.beyls, aheejin, cfe-commits
Differential Revision: https://reviews.llvm.org/D47290
llvm-svn: 335393
dead code.
This is important for C++ templates that essentially compute the valid
input in a way that is constant and will cause all the invalid cases to
be dead code that is deleted. Code in the wild actually does this and
GCC also accepts these kinds of patterns so it is important to support
it.
To make this work, we provide a non-error path to diagnose these issues,
and use a default-error warning instead. This keeps the relatively
strict handling but prevents nastiness like SFINAE on these errors. It
also allows us to safely use the system to diagnose this only when it
occurs at runtime (in emitted code).
Entertainingly, this required fixing the syntax in various other ways
for the x86 test because we never bothered to diagnose that the returns
were invalid.
Since debugging these compile failures was super confusing, I've also
improved the diagnostic to actually say what the value was. Most of the
checks I've made ignore this to simplify maintenance, but I've checked
it in a few places to make sure the diagnostic is working.
Depends on D48462. Without that, we might actually crash some part of
the compiler after bypassing the error here.
Thanks to Richard, Ben Kramer, and especially Craig Topper for all the
help here.
Differential Revision: https://reviews.llvm.org/D48464
llvm-svn: 335309
r242675 changed the signature for the signbit builtin but did not introduce proper semantic checking to ensure the arguments are as-expected. This patch groups the signbit builtin along with the other fp classification builtins. Fixes PR28172.
llvm-svn: 335050
r242675 changed the signature for the signbit builtin but did not introduce proper semantic checking to ensure the arguments are as-expected. This patch groups the signbit builtin along with the other fp classification builtins. Fixes PR28172.
llvm-svn: 335048
The previous names took the shift amount in bits to match gcc and required a multiply by 8 in the header. This creates a misleading error message when we check the range of the immediate to the builtin since the allowed range also got multiplied by 8.
This commit changes the builtins to use a byte shift amount to match the underlying instruction and the Intel intrinsic.
Fixes the remaining issue from PR37795.
llvm-svn: 334773
Summary:
This fixes the ranges for the vcvth family of FP16 intrinsics in the clang front end. Previously it was accepting incorrect ranges.
- Changed builtin range checking in SemaChecking.
- Added tests for the SemaChecking changes - included in their own file since no similar one exists.
- Modified existing tests to reflect the new ranges.
Reviewers: SjoerdMeijer, javed.absar
Reviewed By: SjoerdMeijer
Subscribers: kristof.beyls, cfe-commits
Differential Revision: https://reviews.llvm.org/D47592
llvm-svn: 334489
I'm looking into making the select builtins require avx512f, avx512bw, or avx512vl since masking operations generally require those features.
The extract builtins are funny because the 512-bit versions return a 128 or 256 bit vector with masking even when avx512vl is not supported.
llvm-svn: 334330
These builtins are all handled by CGBuiltin.cpp so it doesn't much matter what the immediate type is, but int matches the intrinsic spec.
llvm-svn: 334310
Test changes are due to differences in how we generate undef elements now. We also changed the types used for extractf128_si256/insertf128_si256 to match the signature of the builtin that previously existed which this patch resurrects. This also matches gcc.
llvm-svn: 334261
Adds support for these intrinsics, which are ARM and ARM64 only:
_interlockedbittestandreset_acq
_interlockedbittestandreset_rel
_interlockedbittestandreset_nf
_interlockedbittestandset_acq
_interlockedbittestandset_rel
_interlockedbittestandset_nf
Refactor the bittest intrinsic handling to decompose each intrinsic into
its action, its width, and its atomicity.
llvm-svn: 334239
We still emit shufflevector instructions we just do it from CGBuiltin.cpp now. This ensures the intrinsics that use this are only available on CPUs that support the feature.
I also added range checking to the immediate, but only checked it is 8 bits or smaller. We should maybe be stricter since we never use all 8 bits, but gcc doesn't seem to do that.
llvm-svn: 334237
We still lower them to native shuffle IR, but we do it in CGBuiltin.cpp now. This allows us to check the target feature and ensure the immediate fits in 8 bits.
This also improves our -O0 codegen slightly because we're able to see the zeroinitializer in the shuffle. It looks like it got lost behind a store+load previously.
llvm-svn: 334208
Summary:
We recently switched to using selects in the intrinsics header files for FMA instructions. But the 512-bit versions support flavors with rounding mode which must be an Integer Constant Expression. This has forced those intrinsics to be implemented as macros. As it stands now the mask and mask3 intrinsics evaluate one of their macro arguments twice. If that argument itself is another intrinsic macro, we can end up over expanding macros. Or if it's something we can CSE later it would show up multiple times when it shouldn't.
I tried adding __extension__ around the macro and making it an expression statement and declaring a local variable. But whatever name you choose for the local variable can never be used as the name of an input to the macro in user code. If that happens you would end up with the same name on the LHS and RHS of an assignment after expansion. We might be safe if we use __ in front of the variable names because those names are reserved and user code shouldn't use that, but I wasn't sure I wanted to make that claim.
The other option which I've chosen here, is to add back _mask, _maskz, and _mask3 flavors of the builtin which we will expand in CGBuiltin.cpp to replicate the argument as needed and insert any fneg needed on the third operand to make a subtract. The _maskz isn't truly necessary if we have an unmasked version or if we use the masked version with a -1 mask and wrap a select around it. But I've chosen to make things more uniform.
I separated out the scalar builtin handling to avoid too many things going on in EmitX86FMAExpr. It was different enough due to the extract and insert that the minor duplication of the CreateCall was probably worth it.
Reviewers: tkrupa, RKSimon, spatel, GBuella
Reviewed By: tkrupa
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D47724
llvm-svn: 334159
Previously we were just using extended vector operations in the header file.
This unfortunately allowed non-constant indices to be used with the intrinsics. This is incompatible with gcc, icc, and MSVC. It also introduces a different performance characteristic because non-constant index gets lowered to a vector store and an element sized load.
By adding the builtins we can check for the index to be a constant and ensure its in range of the vector element count.
User code still has the option to use extended vector operations themselves if they need non-constant indexing.
llvm-svn: 334057
This patch replaces all packed (and scalar without rounding
mode) fused intrinsics with fmadd/fmaddsub variations.
Then fmadd/fmaddsub are lowered to native IR.
Patch by tkrupa
Reviewers: craig.topper, sroland, spatel, RKSimon
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D47444
llvm-svn: 333555
Handling of the third parameter was only checking for *_n and not for the C11 variant, which means that cmpxchg of a 'desired' 0 value was erroneously warning. Handle C11 properly, and add extensive tests for this as well as NULL pointers in a bunch of places.
Fixes r333246 from D47229.
llvm-svn: 333290
Summary:
As a companion to libc++ patch https://reviews.llvm.org/D47225, mark builtin atomic non-member functions which accept pointers as nonnull.
The atomic non-member functions accept pointers to std::atomic / std::atomic_flag as well as to the non-atomic value. These are all dereferenced unconditionally when lowered, and therefore will fault if null. It's a tiny gotcha for new users, especially when they pass in NULL as the expected value (instead of passing a pointer to a NULL value).
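A sketch of the gotcha being targeted (illustrative; the exact diagnostic depends on the build configuration):
```
#include <stdatomic.h>

int bad_load(void) {
  // The pointer operand is dereferenced unconditionally when lowered, so a
  // null argument can now be diagnosed thanks to the nonnull annotation.
  return atomic_load((atomic_int *)0);
}
```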
<rdar://problem/18473124>
Reviewers: arphaman
Subscribers: aheejin, cfe-commits
Differential Revision: https://reviews.llvm.org/D47229
llvm-svn: 333246
Like other conversion warnings, allow float overflow warnings to be disabled
in known dead paths of template instantiation. This often occurs when a
template template type is a numeric type and the template will check the
range of the numeric type before performing the conversion.
llvm-svn: 332310
These intrinsics work exactly like all other atomic_fetch_* intrinsics and allow creating an *atomicrmw* with an ordering.
Updated the Clang extensions document.
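Assuming the new builtins follow the usual __atomic_fetch_* form, usage looks like this (a sketch, not from the patch):
```
int track_minimum(int *shared_min, int candidate) {
  // Atomically computes min(*shared_min, candidate), stores it, and returns the
  // previous value, with an explicit memory ordering.
  return __atomic_fetch_min(shared_min, candidate, __ATOMIC_RELAXED);
}
```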
Differential Revision: https://reviews.llvm.org/D46386
llvm-svn: 332193
This is similar to the LLVM change https://reviews.llvm.org/D46290.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers into our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
Differential Revision: https://reviews.llvm.org/D46320
llvm-svn: 331834
As Eli brought up here: https://reviews.llvm.org/D46535
I'd previously messed up this fix by missing conversions
that are just slightly outside the range. This patch fixes
this by no longer ignoring the return value of
convertToInteger. Additionally, one of the error messages
wasn't very sensical (mentioning out of range value, when it
really was not), so it was cleaned up as well.
llvm-svn: 331812
As identified and briefly discussed here:
https://bugs.llvm.org/show_bug.cgi?id=37305
Converting a floating point number to an integer type when
the integral part is out of the range of the integer type is
undefined behavior in C. Additionally, CodeGen emits an undef
in this situation.
HOWEVER, we've been giving a warning that says that the value is
changed. This patch corrects the warning to state that it is actually
undefined behavior.
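For example (a sketch of the case in question):
```
void float_to_int(void) {
  int i = 1.0e40;  // integral part is far outside the range of 'int':
                   // undefined behavior in C, and the warning now says so
  (void)i;
}
```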
Differential Revision: https://reviews.llvm.org/D46535
llvm-svn: 331673
FunctionProtoType.
We previously re-evaluated the expression each time we wanted to know whether
the type is noexcept or not. We now evaluate the expression exactly once.
This is not quite "no functional change": it fixes a crasher bug during AST
deserialization where we would try to evaluate the noexcept specification in a
situation where we have not deserialized sufficient portions of the AST to
permit such evaluation.
llvm-svn: 331428
When a '>>' token is split into two '>' tokens (in C++11 onwards), or (as an
extension) when we do the same for other tokens starting with a '>', we can't
just use a location pointing to the first '>' as the location of the split
token, because that would result in our miscomputing the length and spelling
for the token. As a consequence, for example, a refactoring replacing 'A<X>'
with something else would sometimes replace one character too many, and
similarly diagnostics highlighting a template-id source range would highlight
one character too many.
Fix this by creating an expansion range covering the first character of the
'>>' token, whose spelling is '>'. For this to work, we generalize the
expansion range of a macro FileID to be either a token range (the common case)
or a character range (used in this new case).
llvm-svn: 331155
These builtins can't be handled by the backend on 64-bit targets. So error up front instead of throwing an isel error.
Fixes PR37225
Differential Revision: https://reviews.llvm.org/D46132
llvm-svn: 330987
Issue a warning when non-trivial C structs are copied or initialized by
calls to memset, bzero, memcpy, or memmove.
rdar://problem/36124208
Differential Revision: https://reviews.llvm.org/D45310
llvm-svn: 330202
The current support for the feature produces only 2 lines in the report:
-Some general Code Generation Time;
-Total time of Backend Consumer actions.
This patch extends the Clang time report with new lines related to the Preprocessor, Include File Search, Parsing, etc.
Differential Revision: https://reviews.llvm.org/D43578
llvm-svn: 329684
Found via codespell -q 3 -I ../clang-whitelist.txt
Where whitelist consists of:
archtype
cas
classs
checkk
compres
definit
frome
iff
inteval
ith
lod
methode
nd
optin
ot
pres
statics
te
thru
Patch by luzpaz! (This is a subset of D44188 that applies cleanly with a few
files that have dubious fixes reverted.)
Differential revision: https://reviews.llvm.org/D44188
llvm-svn: 329399
The diagnostic system for Clang can already handle many AST nodes. Instead
of converting them to strings first, just hand the AST node directly to
the diagnostic system and let it handle the output. Minor changes in some
diagnostic output.
llvm-svn: 328688
Summary:
Libc++'s default allocator uses `__builtin_operator_new` and `__builtin_operator_delete` in order to allow the calls to new/delete to be elided. However, libc++ now needs to support over-aligned types in the default allocator. In order to support this without disabling the existing optimization Clang needs to support calling the aligned new overloads from the builtins.
See llvm.org/PR22634 for more information about the libc++ bug.
This patch changes `__builtin_operator_new`/`__builtin_operator_delete` to call any usual `operator new`/`operator delete` function. It does this by performing overload resolution with the arguments passed to the builtin to determine which allocation function to call. If the selected function is not a usual allocation function a diagnostic is issued.
One open issue is if the `align_val_t` overloads should be considered "usual" when `LangOpts::AlignedAllocation` is disabled.
In order to allow libc++ to detect this new behavior the value for `__has_builtin(__builtin_operator_new)` has been updated to `201802`.
Reviewers: rsmith, majnemer, aaron.ballman, erik.pilkington, bogner, ahatanak
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D43047
llvm-svn: 328134
The patch fixes a number of bugs related to parameter indexing in
attributes:
* Parameter indices in some attributes (argument_with_type_tag,
pointer_with_type_tag, nonnull, ownership_takes, ownership_holds,
and ownership_returns) are specified in source as one-origin
including any C++ implicit this parameter, were stored as
zero-origin excluding any this parameter, and were erroneously
printing (-ast-print) and confusingly dumping (-ast-dump) as the
stored values.
* For alloc_size, the C++ implicit this parameter was not subtracted
correctly in Sema, leading to assert failures or to silent failures
of __builtin_object_size to compute a value.
* For argument_with_type_tag, pointer_with_type_tag, and
ownership_returns, the C++ implicit this parameter was not added
back to parameter indices in some diagnostics.
This patch fixes the above bugs and aims to prevent similar bugs in
the future by introducing careful mechanisms for handling parameter
indices in attributes. ParamIdx stores a parameter index and is
designed to hide the stored encoding while providing accessors that
require each use (such as printing) to make explicit the encoding that
is needed. Attribute declarations declare parameter index arguments
as [Variadic]ParamIdxArgument, which are exposed as ParamIdx[*]. This
patch rewrites all attribute arguments that are processed by
checkFunctionOrMethodParameterIndex in SemaDeclAttr.cpp to be declared
as [Variadic]ParamIdxArgument. The only exception is xray_log_args's
argument, which is encoded as a count not an index.
Differential Revision: https://reviews.llvm.org/D43248
llvm-svn: 326602
So I wrote a clang-tidy check to lint out redundant `isa`, `cast`, and
`dyn_cast`s for fun. This is a portion of what it found for clang; I
plan to do similar cleanups in LLVM and other subprojects when I find
time.
Because of the volume of changes, I explicitly avoided making any change
that wasn't highly local and obviously correct to me (e.g. we still have
a number of foo(cast<Bar>(baz)) that I didn't touch, since overloading
is a thing and the cast<Bar> did actually change the type -- just up the
class hierarchy).
I also tried to leave the types we were cast<>ing to somewhere nearby,
in cases where it wasn't locally obvious what we were dealing with
before.
llvm-svn: 326416
The code for going up the macro arg expansion is duplicated in many
places (and we need it for the analyzer as well, so I did not want to
duplicate it two more times).
This patch is an NFC, so the semantics should remain the same.
Differential Revision: https://reviews.llvm.org/D42458
llvm-svn: 324780
typeof expressions
This commit looks through the typeof type to the original expression when diagnosing
-Wsign-compare to avoid an unfriendly diagnostic.
rdar://36588828
Differential Revision: https://reviews.llvm.org/D42561
llvm-svn: 324514
The 'trivial_abi' attribute can be applied to a C++ class, struct, or
union. It makes the special functions of the annotated class (the destructor
and copy/move constructors) trivial for the purpose of calls and,
as a result, enables the annotated class or containing classes to be
passed or returned using the C ABI for the underlying type.
When a type that is considered trivial for the purpose of calls despite
having a non-trivial destructor (which happens only when the class type
or one of its subobjects is a 'trivial_abi' class) is passed to a
function, the callee is responsible for destroying the object.
For more background, see the discussions that took place on the mailing
list:
http://lists.llvm.org/pipermail/cfe-dev/2017-November/055955.html
http://lists.llvm.org/pipermail/cfe-commits/Week-of-Mon-20180101/thread.html#214043
rdar://problem/35204524
Differential Revision: https://reviews.llvm.org/D41039
llvm-svn: 324269
The constant is already reduced to 8 bits by the time we get here and the checks were just ensuring that it was 8 bits. Thus I don't think there's any way for them to fail.
llvm-svn: 322244
Adding the new enumerator forced a bunch more changes into this patch than I
would have liked. The -Wtautological-compare warning was extended to properly
check the new comparison operator, clang-format needed updating because it uses
precedence levels as weights for determining where to break lines (and several
operators increased their precedence levels with this change), thread-safety
analysis needed changes to build its own IL properly for the new operator.
All "real" semantic checking for this operator has been deferred to a future
patch. For now, we use the relational comparison rules and arbitrarily give
the builtin form of the operator a return type of 'void'.
llvm-svn: 320707
and fold together into a single function.
In so doing, fix a handful of remaining bugs where we would report false
positives or false negatives if we promote a signed value to an unsigned type
for the comparison.
This re-commits r320122 and r320124, minus two changes:
* Comparisons between a constant and a non-constant expression of enumeration
type never warn, not even if the constant is out of range. We should be
warning about the creation of such a constant, not about its use.
* We do not use more precise bit-widths for comparisons against bit-fields.
The more precise diagnostics probably are the right thing, but we should
consider moving them under their own warning flag.
Other than the refactoring, this patch should only change the behavior for the
buggy cases (where the warnings didn't take into account that promotion from
signed to unsigned can leave a range of inaccessible values in the middle of
the promoted type).
llvm-svn: 320211
> Unify implementation of our two different flavours of -Wtautological-compare.
>
> In so doing, fix a handful of remaining bugs where we would report false
> positives or false negatives if we promote a signed value to an unsigned type
> for the comparison.
This caused a new warning in Chromium:
../../base/trace_event/trace_log.cc:1545:29: error: comparison of constant 64
with expression of type 'unsigned int' is always true
[-Werror,-Wtautological-constant-out-of-range-compare]
DCHECK(handle.event_index < TraceBufferChunk::kTraceBufferChunkSize);
~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 'unsigned int' is really a 6-bit bitfield, which is why it's always
less than 64.
I thought we didn't use to warn (with out-of-range-compare) when comparing
against the boundaries of a type?
llvm-svn: 320162
This broke Chromium:
../../base/trace_event/trace_log.cc:1545:29: error: comparison of constant 64
with expression of type 'unsigned int' is always true
[-Werror,-Wtautological-constant-out-of-range-compare]
DCHECK(handle.event_index < TraceBufferChunk::kTraceBufferChunkSize);
~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The 'unsigned int' is really a 6-bit bitfield, which is why it's always
less than 63.
Did this use to fall under the "in-range" case before? I thought we
didn't use to warn when comparing against the boundaries of a type.
llvm-svn: 320133
In so doing, fix a handful of remaining bugs where we would report false
positives or false negatives if we promote a signed value to an unsigned type
for the comparison.
llvm-svn: 320122
This is a follow up of r302131, in which we forgot to add SemaChecking
tests. Adding these tests revealed two problems which have been fixed:
- added missing intrinsic __qdbl,
- properly range checking ssat16 and usat16.
Differential Revision: https://reviews.llvm.org/D40888
llvm-svn: 320019
This is a fix for PR35509 in which we crash because we attempt to compute the
alignment of an incomplete type.
Differential Revision: https://reviews.llvm.org/D40895
llvm-svn: 320017
codepath plus the new "minimum / maximum value of type" diagnostic to get the
same effect.
Move the warning for an in-range but tautological comparison of a constant (0
or 1) against a bool out of -Wtautological-constant-out-of-range-compare into
the more-appropriate -Wtautological-constant-compare.
llvm-svn: 319942
An enumeration with a fixed underlying type can have any value in its
underlying type, not just those spanned by the values of its enumerators.
llvm-svn: 319875
It seems this somehow made -Wempty-body fire in some macro cases where
it didn't before, e.g.
../../third_party/ffmpeg/libavcodec/bitstream.c(169,5): error: if statement has empty body [-Werror,-Wempty-body]
ff_dlog(NULL, "new table index=%d size=%d\n", table_index, table_size);
^
../../third_party/ffmpeg\libavutil/internal.h(276,80): note: expanded from macro 'ff_dlog'
# define ff_dlog(ctx, ...) do { if (0) av_log(ctx, AV_LOG_DEBUG, __VA_ARGS__); } while (0)
^
../../third_party/ffmpeg/libavcodec/bitstream.c(169,5): note: put the
semicolon on a separate line to silence this warning
Reverting until this can be figured out.
> Do not show it when `if` or `else` come from macros.
> E.g.,
>
> #define USED(A) if (A); else
> #define SOME_IF(A) if (A)
>
> void test() {
> // No warnings are shown in those cases now.
> USED(0);
> SOME_IF(0);
> }
>
> Patch by Ilya Biryukov!
>
> Differential Revision: https://reviews.llvm.org/D40185
llvm-svn: 318665
Do not show it when `if` or `else` come from macros.
E.g.,
#define USED(A) if (A); else
#define SOME_IF(A) if (A)
void test() {
// No warnings are shown in those cases now.
USED(0);
SOME_IF(0);
}
Patch by Ilya Biryukov!
Differential Revision: https://reviews.llvm.org/D40185
llvm-svn: 318556