This reverts commit 7c5102eff because it
still breaks the LLDB tests. It was re-landed without addressing the
issue, or even agreement on how to address it. More details and
discussion in https://reviews.llvm.org/D112374.
Without this patch, clang will not wrap types written without a keyword or
nested name qualifier in an ElaboratedType node, which goes against the intent
that we should produce an AST which retains enough details to recover how
things are written.
The lack of this sugar is incompatible with the intent of the type printer
default policy, which is to print types as written, but to fall back and print
them fully qualified when they are desugared.
An ElaboratedTypeLoc without keyword / NNS uses no storage by itself, but still
requires pointer alignment due to a pre-existing bug in the TypeLoc buffer
handling.
---
Troubleshooting list to deal with any breakage seen with this patch:
1) The most likely effect one would see from this patch is a change in how
a type is printed. The type printer will, by design and by default,
print types as written. There are customization options there, but
not that many, and they mainly apply to how to print a type whose
written form we somehow failed to track. This patch fixes a
problem where we failed to distinguish between a type that was
written without any elaborated-type qualifiers, such as
'struct'/'class' tags and namespace specifiers such as 'std::',
and one that has been stripped of any 'metadata' that identifies
such, the so-called canonical type.
Example:
```
namespace foo {
struct A {};
A a;
};
```
If one were to print the type of `foo::a` prior to this patch, the result
would be `foo::A`. This is also how the type printer would, by default, have
printed the canonical type of A. As soon as you add any name qualifiers to A,
the type printer would suddenly start accurately printing the type as written.
This patch makes it print accurately even when written without qualifiers, so
we will just print `A` for the initial example, as the user did not really
write that `foo::` namespace qualifier.
2) This patch could expose a bug in some AST matcher. Matching types
is harder to get right when there is sugar involved. For example,
if you want to match a type against being a pointer to some type A,
then you have to account for getting a type that is sugar for a
pointer to A, or a pointer to sugar to A, or both! Usually
you would get the second part wrong, and this would work for a
very simple test where you don't use any name qualifiers, but
you would discover it is broken when you do. The usual fix is to
use the matchers which strip sugar, which is annoying: for
example, if you match an N-level pointer, you have to put N+1 such
matchers in there, from beginning to end and between all those
levels. But in a lot of cases, if the property you want
to match is present in the canonical type, it's easier and faster
to just match on that. This goes with what is said in 1): if
you want to match against the name of a type, and you want
the name string to be something stable, matching on
the name of the canonical type is probably the better choice.
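For instance, a matcher along these lines (a sketch, not taken from this
patch) matches a variable of type pointer-to-A no matter how much sugar
wraps either level, by going through the canonical type:
```
// Hypothetical example: matches both `foo::A *p;` and `A *p;`, because
// hasCanonicalType() strips sugar such as ElaboratedType before matching.
varDecl(hasType(qualType(hasCanonicalType(
    pointsTo(recordDecl(hasName("foo::A")))))))
```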
3) This patch could expose a bug in how you get the source range of some
TypeLoc. For some reason, a lot of code is using getLocalSourceRange(),
which only looks at the given TypeLoc node. This patch introduces a new,
and more common, TypeLoc node which stores no source locations of its own.
This is not an innovation here; some other, rarer TypeLoc nodes also
have this property, but if you use getLocalSourceRange on them, it's not
going to return any valid locations, because there aren't any. The right fix
here is to always use getSourceRange() or getBeginLoc/getEndLoc, which will
dive into the inner TypeLoc to get the source range if they don't find it on
the top-level one. You can use getLocalSourceRange if you are really into
micro-optimizations and you have some outside knowledge that the TypeLocs you
are dealing with will always include some source location.
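As a rough illustration (not code from this patch), assuming `TL` is some
`TypeLoc` pulled out of the AST:
```
// getSourceRange()/getBeginLoc()/getEndLoc() walk into inner TypeLocs,
// so they still produce valid locations for nodes that store none locally,
// such as a keyword-less ElaboratedTypeLoc.
SourceRange Range = TL.getSourceRange();
SourceLocation Begin = TL.getBeginLoc();
// getLocalSourceRange() only inspects this node and may return an
// invalid (empty) range here.
SourceRange Local = TL.getLocalSourceRange();
```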
4) This patch could expose a bug somewhere in the use of the normal clang Type class API, where you
have some type, you want to see if that type is some particular kind, you try a
`dyn_cast` such as `dyn_cast<TypedefType>`, and that fails because now you have an
ElaboratedType which has a TypedefType inside of it, which is what you wanted to match.
Again, as in 2), this would usually have been tested poorly with some simple tests with
no qualifications, and would have been broken had there been any other kind of type sugar,
be it an ElaboratedType, a TemplateSpecializationType, or a SubstTemplateTypeParmType.
The usual fix here is to use `getAs` instead of `dyn_cast`, which will look deeper
into the type. Or use `getAsAdjusted` when dealing with TypeLocs.
For some reason the API is inconsistent there: on TypeLocs, getAs behaves like a dyn_cast.
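Roughly (a sketch, assuming `QT` is a `QualType` and `TL` is a `TypeLoc`):
```
// Fails if the TypedefType is wrapped in sugar such as ElaboratedType:
const auto *TT1 = dyn_cast<TypedefType>(QT.getTypePtr());
// Looks through the sugar and finds the TypedefType underneath:
const auto *TT2 = QT->getAs<TypedefType>();
// On TypeLocs, getAs behaves like dyn_cast; use getAsAdjusted instead:
auto TTL = TL.getAsAdjusted<TypedefTypeLoc>();
```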
5) It could be a bug in this patch perhaps.
Let me know if you need any help!
Signed-off-by: Matheus Izvekov <mizvekov@gmail.com>
Differential Revision: https://reviews.llvm.org/D112374
This reverts commit bdc6974f92 because it
breaks all the LLDB tests that import the std module.
import-std-module/array.TestArrayFromStdModule.py
import-std-module/deque-basic.TestDequeFromStdModule.py
import-std-module/deque-dbg-info-content.TestDbgInfoContentDequeFromStdModule.py
import-std-module/forward_list.TestForwardListFromStdModule.py
import-std-module/forward_list-dbg-info-content.TestDbgInfoContentForwardListFromStdModule.py
import-std-module/list.TestListFromStdModule.py
import-std-module/list-dbg-info-content.TestDbgInfoContentListFromStdModule.py
import-std-module/queue.TestQueueFromStdModule.py
import-std-module/stack.TestStackFromStdModule.py
import-std-module/vector.TestVectorFromStdModule.py
import-std-module/vector-bool.TestVectorBoolFromStdModule.py
import-std-module/vector-dbg-info-content.TestDbgInfoContentVectorFromStdModule.py
import-std-module/vector-of-vectors.TestVectorOfVectorsFromStdModule.py
https://green.lab.llvm.org/green/view/LLDB/job/lldb-cmake/45301/
Without this patch, clang will not wrap types written without a keyword or
nested name qualifier in an ElaboratedType node, which goes against the intent
that we should produce an AST which retains enough details to recover how
things are written.
The lack of this sugar is incompatible with the intent of the type printer
default policy, which is to print types as written, but to fall back and print
them fully qualified when they are desugared.
An ElaboratedTypeLoc without keyword / NNS uses no storage by itself, but still
requires pointer alignment due to a pre-existing bug in the TypeLoc buffer
handling.
Signed-off-by: Matheus Izvekov <mizvekov@gmail.com>
Differential Revision: https://reviews.llvm.org/D112374
These intrinsics are now fundamental for SVE code generation and have been
present for a year and a half, hence move them out of the experimental
namespace.
Differential Revision: https://reviews.llvm.org/D127976
Previously `#pragma STDC FENV_ACCESS ON` always set dynamic rounding
mode and strict exception handling. That is not correct in the presence
of other pragmas that also modify rounding mode and exception handling.
For example, the effect of a previous pragma FENV_ROUND could be
cancelled, which does not conform to the C standard. Also,
`#pragma STDC FENV_ACCESS OFF` turned off only the FEnvAccess flag, leaving
rounding mode and exception handling unchanged, which is incorrect in the
general case.
The concrete rounding and exception modes depend on a combination of several
factors, such as various pragmas and command-line options. During the review
of this patch an idea was proposed that the semantic actions associated
with such pragmas should only set the appropriate flags; the actual rounding
mode and exception handling should be calculated taking into account the
state of all relevant options. In such an implementation the pragma
FENV_ACCESS should not override properties set by other pragmas, but
should set them only if no such setting is present.
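For example (a sketch of the intended interaction, not code from this patch):
```
#pragma STDC FENV_ROUND FE_UPWARD
#pragma STDC FENV_ACCESS ON
double add(double x, double y) {
  // FENV_ACCESS ON no longer overrides the constant upward rounding mode
  // requested by FENV_ROUND above; it only supplies what was left
  // unspecified (here, strict exception handling).
  return x + y;
}
```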
To implement this approach the following main changes are made:
- The field `FPRoundingMode` is removed from `LangOptions`. There are
actually no options that set it to an arbitrary rounding mode; the choice
was only `dynamic` or `tonearest`. Instead, a new boolean flag
`RoundingMath` is added, with the same meaning as the corresponding
command-line option.
- The type `FPExceptionModeKind` now has a possible value `FPE_Default`. It
does not represent any particular exception mode but indicates that
no such mode was set and the default value should be used. It allows
distinguishing the case:
{
#pragma STDC FENV_ACCESS ON
...
}
where the pragma must set FPE_Strict, from the case:
{
#pragma clang fp exceptions(ignore)
#pragma STDC FENV_ACCESS ON
...
}
where the exception mode should remain `FPE_Ignore`.
- Class `FPOptions` now has methods `getRoundingMode` and
`getExceptionMode`, which calculate the respective properties from
the other specified FP properties.
- Class `LangOptions` now has methods `getDefaultRoundingMode` and
`getDefaultExceptionMode`, which calculate the default modes from the
specified options and should be used instead of `getRoundingMode` and
`getFPExceptionMode` of the same class.
Differential Revision: https://reviews.llvm.org/D126364
Allows emitting `define amdgpu_kernel void @func()` IR from C or C++.
This replaces the current workflow, which is to write a stub in OpenCL that
calls an external C function implemented in C++, combined through llvm-link.
Calling the resulting function still requires a manual implementation of the
ABI from the host side. The primary application is more rapid debugging
of the amdgpu backend, by permuting a C or C++ test file instead of manually
updating an IR file.
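A minimal sketch of the new workflow, assuming the attribute spelling added
here is `amdgpu_kernel` and a target such as amdgcn-amd-amdhsa:
```
// kernel.c (or kernel.cpp)
__attribute__((amdgpu_kernel)) void func(int *out) {
  *out = 42;
}
// clang -target amdgcn-amd-amdhsa -O1 -S -emit-llvm kernel.c
// is expected to produce:  define amdgpu_kernel void @func(...)
```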
The implementation closely follows D54425; non-AMD reviewers were taken from there.
Reviewed By: yaxunl
Differential Revision: https://reviews.llvm.org/D125970
This is extended to all `std::` functions that take a reference to a
value and return a reference (or pointer) to that same value: `move`,
`forward`, `move_if_noexcept`, `as_const`, `addressof`, and the
libstdc++-specific function `__addressof`.
We still require these functions to be declared before they can be used,
but don't instantiate their definitions unless their addresses are
taken. Instead, code generation, constant evaluation, and static
analysis are given direct knowledge of their effect.
This change aims to reduce various costs associated with these functions
-- per-instantiation memory costs, compile time and memory costs due to
creating out-of-line copies and inlining them, code size at -O0, and so
on -- so that they are not substantially more expensive than a cast.
Most of these improvements are very small, but I measured a 3% decrease
in -O0 object file size for a simple C++ source file using the standard
library after this change.
We now automatically infer the `const` and `nothrow` attributes on these
now-builtin functions, in particular meaning that we get a warning for
an unused call to one of these functions.
In C++20 onwards, we disallow taking the addresses of these functions,
per the C++20 "addressable function" rule. In earlier language modes, a
compatibility warning is produced but the address can still be taken.
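A small illustration of the user-visible effects (a sketch, not taken from
the patch's tests):
```
#include <utility>

void consume(int &&);

void f(int x) {
  consume(std::move(x)); // handled directly by CodeGen / constant evaluation;
                         // no definition of std::move is instantiated
  std::move(x);          // now warns: the call has no effect and its result
                         // is unused
  // &std::move<int &>;  // C++20: ill-formed ("addressable function" rule);
                         // earlier modes: compatibility warning only
}
```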
The same infrastructure is extended to the existing MSVC builtin
`__GetExceptionInfo`, which is now only recognized in namespace `std`
like it always should have been.
This is a re-commit of
fc30901096,
a571f82a50,
64c045e25b, and
de6ddaeef3,
and reverts aa643f455a.
This change also includes a workaround for users using libc++ 3.1 and
earlier (!!), as apparently happens on AIX, where std::move sometimes
returns by value.
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D123345
Revert "Fixup D123950 to address revert of D123345"
This reverts commit aa643f455a.
This is extended to all `std::` functions that take a reference to a
value and return a reference (or pointer) to that same value: `move`,
`forward`, `move_if_noexcept`, `as_const`, `addressof`, and the
libstdc++-specific function `__addressof`.
We still require these functions to be declared before they can be used,
but don't instantiate their definitions unless their addresses are
taken. Instead, code generation, constant evaluation, and static
analysis are given direct knowledge of their effect.
This change aims to reduce various costs associated with these functions
-- per-instantiation memory costs, compile time and memory costs due to
creating out-of-line copies and inlining them, code size at -O0, and so
on -- so that they are not substantially more expensive than a cast.
Most of these improvements are very small, but I measured a 3% decrease
in -O0 object file size for a simple C++ source file using the standard
library after this change.
We now automatically infer the `const` and `nothrow` attributes on these
now-builtin functions, in particular meaning that we get a warning for
an unused call to one of these functions.
In C++20 onwards, we disallow taking the addresses of these functions,
per the C++20 "addressable function" rule. In earlier language modes, a
compatibility warning is produced but the address can still be taken.
The same infrastructure is extended to the existing MSVC builtin
`__GetExceptionInfo`, which is now only recognized in namespace `std`
like it always should have been.
This is a re-commit of
fc30901096,
a571f82a50, and
64c045e25b
which were reverted in
e75d8b7037
due to a crasher bug where CodeGen would emit a builtin glvalue as an
rvalue if it constant-folds.
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D123345
We still require these functions to be declared before they can be used,
but don't instantiate their definitions unless their addresses are
taken. Instead, code generation, constant evaluation, and static
analysis are given direct knowledge of their effect.
This change aims to reduce various costs associated with these functions
-- per-instantiation memory costs, compile time and memory costs due to
creating out-of-line copies and inlining them, code size at -O0, and so
on -- so that they are not substantially more expensive than a cast.
Most of these improvements are very small, but I measured a 3% decrease
in -O0 object file size for a simple C++ source file using the standard
library after this change.
We now automatically infer the `const` and `nothrow` attributes on these
now-builtin functions, in particular meaning that we get a warning for
an unused call to one of these functions.
In C++20 onwards, we disallow taking the addresses of these functions,
per the C++20 "addressable function" rule. In earlier language modes, a
compatibility warning is produced but the address can still be taken.
The same infrastructure is extended to the existing MSVC builtin
`__GetExceptionInfo`, which is now only recognized in namespace `std`
like it always should have been.
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D123345
This was skipping specific lifetime + bitcast patterns, but with
opaque pointers the bitcast will not be present, and we did not
perform this fold.
Instead skip over lifetime.end and bitcasts generally, without
trying to correlate them.
Currently, the regcall calling convention in Clang doesn't match
ICC when passing / returning structures: https://godbolt.org/z/axxKMKrW7
This patch tries to fix the problem to match ICC.
Reviewed By: LuoYuanke
Differential Revision: https://reviews.llvm.org/D122104
Motivation:
```
int foo(int x, int y); // defined elsewhere; we want this particular call inlined

int test(int x, int y) {
  int r = 0;
  [[clang::always_inline]] r += foo(x, y); // force the compiler to inline this call here
  return r;
}
```
In 2018, @kuhar proposed "Introduce per-callsite inline intrinsics" in https://reviews.llvm.org/D51200 to solve this motivating case (and many others).
This patch solves the problem with a call site attribute. The "noinline" statement attribute already landed in D119061, and some LLVM Inliner fixes landed as well, so the call site attribute is stronger than the function attribute.
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D120717
Motivation:
```
int foo(int x, int y) { // any compiler will happily inline this function
  return x / y;
}

int test(int x, int y) {
  int r = 0;
  [[clang::noinline]] r += foo(x, y); // for some reason we don't want any inlining here
  return r;
}
```
In 2018, @kuhar proposed "Introduce per-callsite inline intrinsics" in https://reviews.llvm.org/D51200 to solve this motivating case (and many others).
This patch solves the problem with a call site attribute. The implementation is "smaller" compared with the approach that uses new intrinsics, and thanks to https://reviews.llvm.org/D79121 (Add nomerge statement attribute to clang), we already have some basic infrastructure to deal with attributes on statements with call expressions.
GCC devs are more inclined towards a call attribute solution as well, as builtins are problematic for them - https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104187. But they have no patch proposal yet, so we have free hands here.
If this approach makes sense, the next steps would be support for call site attributes for always_inline / flatten.
Reviewed By: aaron.ballman, kuhar
Differential Revision: https://reviews.llvm.org/D119061
This flag was previously renamed from `enable_noundef_analysis` to
`disable-noundef-analysis`, which is not a conventional name (Driver and
CC1 boolean options use the [no-] prefix).
As discussed at https://reviews.llvm.org/D105169, this patch reverts its
name to `[no-]enable_noundef_analysis` and enables noundef analysis by
default.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D119998
To make uses of the deprecated constructor easier to spot, and to
ensure that no new uses are introduced, rename it to
Address::deprecated().
While doing the rename, I've filled in element types in cases
where it was relatively obvious, but we're still left with 135
calls to the deprecated constructor.
The "-fzero-call-used-regs" option tells the compiler to zero out
certain registers before the function returns. It's also available as a
function attribute: zero_call_used_regs.
The two broad categories are:
- "used": Zero out used registers.
- "all": Zero out all registers, whether used or not.
The individual options are:
- "skip": Don't zero out any registers. This is the default.
- "used": Zero out all used registers.
- "used-arg": Zero out used registers that are used for arguments.
- "used-gpr": Zero out used registers that are GPRs.
- "used-gpr-arg": Zero out used GPRs that are used as arguments.
- "all": Zero out all registers.
- "all-arg": Zero out all registers used for arguments.
- "all-gpr": Zero out all GPRs.
- "all-gpr-arg": Zero out all GPRs used for arguments.
This is used to help mitigate Return-Oriented Programming exploits.
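For example, assuming the function attribute takes the same string arguments
as the flag values listed above (a sketch, not taken from the patch):
```
// Roughly equivalent to compiling just this function with
// -fzero-call-used-regs=used-gpr.
__attribute__((zero_call_used_regs("used-gpr")))
int check_secret(const char *a, const char *b) {
  int diff = 0;
  while (*a && *b)
    diff |= *a++ ^ *b++;
  // Used general-purpose registers are zeroed before returning, so stale
  // register contents cannot leak to the caller or feed ROP gadgets.
  return diff;
}
```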
Reviewed By: nickdesaulniers
Differential Revision: https://reviews.llvm.org/D110869
Following the discussion on D118229, this marks all pointer-typed
kernel arguments as having ABI alignment, per section 6.3.5 of
the OpenCL spec:
> For arguments to a __kernel function declared to be a pointer to
> a data type, the OpenCL compiler can assume that the pointee is
> always appropriately aligned as required by the data type.
Differential Revision: https://reviews.llvm.org/D118894
Take the following as an example:
struct z {
  z (*p)();
};
z f();
When we attempt to get the LLVM type of f, we recurse into z. z itself
has a function pointer with the same type as f. Given the recursion,
Clang simply treats z::p as a pointer to an empty struct `{}*`. The
LLVM type of f is as expected. So we have two different potential
LLVM types for a given Clang type. If we store one of those into the
cache, when we access the cache with a different context (e.g. we
are/aren't recursing on z) we may get an incorrect result. There is some
attempt to clear the cache in these cases, but it doesn't seem to handle
all cases.
This change makes it so we only use the cache when we are not in any
sort of function context, i.e. `noRecordsBeingLaidOut() &&
FunctionsBeingProcessed.empty()`, which are the cases where we may
decide to choose a different LLVM type for a given Clang type. LLVM
types for builtin types are never recursive so they're always ok.
This allows us to clear the type cache less often (as seen with the
removal of one of the calls to `TypeCache.clear()`). We
still need to clear it when we use a placeholder type then replace it
later with the final type and other dependent types need to be
recalculated.
I've added a check that the cached type matches what we compute. It
triggered on this test case without the fix. It's currently not
check-clang clean, so it's not on by default even for something like
expensive-checks builds.
This change uncovered another issue where the LLVM types for an argument
and its local temporary don't match. For example in type-cache-3, when
expanding z::dc's argument into a temporary alloca, we ConvertType() the
type of z::p which is `void ({}*)*`, which doesn't match the alloca GEP
type of `{}*`.
No noticeable compile time changes:
https://llvm-compile-time-tracker.com/compare.php?from=3918dd6b8acf8c5886b9921138312d1c638b2937&to=50bdec9836ed40e38ece0657f3058e730adffc4c&stat=instructions
Fixes #53465.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D118744
After fa87fa97fb, this was no longer guaranteed to be the cleanup
just added by this code, if IsEHCleanup got disabled. Instead, use
stable_begin(), which _is_ guaranteed to be the cleanup just added.
This caused a crash when an object that is callee-destroyed (e.g. with the MS ABI) was passed in a call from a noexcept function.
Added a test to verify.
Fixes: fa87fa97fb
Instead use either Type::getPointerElementType() or
Type::getNonOpaquePointerElementType().
This is part of D117885, in preparation for deprecating the API.
When adding new attributes, existing attributes are dropped. While
this appears to be a longstanding issue, this was highlighted by D105169
which dropped a lot of attributes due to adding the new noundef
attribute.
Ahmed Bougacha (@ab) tracked down the issue and provided the fix in
CGCall.cpp. I bundled it up and updated the tests.
Turning on the `enable_noundef_analysis` flag allows better codegen by removing freeze instructions.
I modified clang by renaming the `enable_noundef_analysis` flag to `disable-noundef-analysis` and turning it off by default.
Test updates are made as a separate patch: D108453
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D105169
Minor adjustment in order of noundef analysis to be a bit more optimal (when disabled).
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D117078
sret is special in that it does not use the memory type
representation. Manually construct the LValue using ConvertType
instead of ConvertTypeForMem here.
This fixes matrix-lowering-opt-levels.c on s390x.
This is the last cleanup step resulting from D115804.
Now that clang uses intrinsics when we're in the special FP mode,
we don't need a function attribute as an indicator to the backend.
The LLVM part of the change is in D115885.
Differential Revision: https://reviews.llvm.org/D115886
This no longer allows creating an invalid Address through the regular
constructor. There were only two places that did this (AggValueSlot
and EHCleanupScope), both by converting a potential nullptr into an
Address. I've fixed both of these by directly storing an Address
instead.
This is intended as a bit of preliminary cleanup for D103465.
Differential Revision: https://reviews.llvm.org/D115630
WG14 adopted the _ExtInt feature from Clang for C23, but renamed the
type to be _BitInt. This patch does the vast majority of the work to
rename _ExtInt to _BitInt, which accounts for most of its size. The new
type is exposed in older C modes and all C++ modes as a conforming
extension. However, there are functional changes worth calling out:
* Deprecates _ExtInt with a fix-it to help users migrate to _BitInt.
* Updates the mangling for the type.
* Updates the documentation and adds a release note to warn users what
is going on.
* Adds new diagnostics for use of _BitInt to call out when it's used as
a Clang extension or as a pre-C23 compatibility concern.
* Adds new tests for the new diagnostic behaviors.
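A small illustration of the rename (a sketch, not taken from the new tests):
```
_BitInt(18) a;            // the C23 spelling; available in older C modes and
                          // in C++ as a conforming (diagnosed) extension
unsigned _BitInt(128) b;  // unsigned bit-precise integers work the same way
_ExtInt(18) c;            // still accepted, but deprecated with a fix-it
                          // suggesting _BitInt(18)
```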
I want to call out the ABI break specifically. We do not believe that
this break will cause a significant imposition for early adopters of
the feature, and so this is being done as a full break. If it turns out
there are critical uses where recompilation is not an option for some
reason, we can consider using ABI tags to ease the transition.
This patch removes the assumption propagation that was added in D110655
primarily to get assumption information on opaque call sites for
optimizations. The analysis done in D111445 allows us to do this more
intelligently in the back-end.
Depends on D111445
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D111463
Turning on the `enable_noundef_analysis` flag allows better codegen by removing freeze instructions.
I modified clang by renaming the `enable_noundef_analysis` flag to `disable-noundef-analysis` and turning it off by default.
Test updates are made as a separate patch: D108453
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D105169
[Clang/Test]: Rename enable_noundef_analysis to disable-noundef-analysis and turn it off by default (2)
This patch updates test files after D105169.
Autogenerated tests are updated by `utils/update_cc_test_checks.py`, and non-autogenerated tests are updated as follows:
(1) I wrote a Python script that (partially) updates the tests using regex: {F18594904} The script is not perfect, but I believe it gives hints about which patterns are updated to have `noundef` attached.
(2) The remaining tests are updated manually.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D108453
Resolve lit failures in clang after 8ca4b3e's land
Fix lit test failures in clang-ppc* and clang-x64-windows-msvc
Fix missing failures in clang-ppc64be* and retry fixing clang-x64-windows-msvc
Fix internal_clone(aarch64) inline assembly
Turning on the `enable_noundef_analysis` flag allows better codegen by removing freeze instructions.
I modified clang by renaming the `enable_noundef_analysis` flag to `disable-noundef-analysis` and turning it off by default.
Test updates are made as a separate patch: D108453
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D105169