Summary:
This includes initial support for the (hopefully final) updated Objective-C ABI, developed here:
https://github.com/davidchisnall/clang-gnustep-abi-2
It also includes some cleanups and refactoring from older GNU ABIs.
The current version is ELF only, other formats to follow.
Reviewers: rjmccall, DHowett-MSFT
Reviewed By: rjmccall
Subscribers: smeenai, cfe-commits
Differential Revision: https://reviews.llvm.org/D46052
llvm-svn: 332950
Because the intrinsics in the headers are implemented as macros, we can't just use a select builtin and a pternlog builtin. That would require one of the macro arguments to be used twice; depending on what was passed to the macro, we could end up expanding an expression twice, leading to weird behavior. We could declare a local variable inside the macro, but then we would have to worry about name collisions.
To avoid that, just generate the IR directly in CGBuiltin.cpp.
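To illustrate the double-expansion hazard with a hypothetical macro (not the actual header code):
```cpp
// Hypothetical macro in the spirit of the intrinsic headers: X appears twice
// in the expansion, so any side effects in X are evaluated twice.
#define COMBINE_TWICE(X) (combine_low(X) | combine_high(X))

static int combine_low(int v) { return v & 0xff; }
static int combine_high(int v) { return v & ~0xff; }

int f(int *p) {
  // (*p)++ runs twice because the macro pastes X into two places.
  return COMBINE_TWICE((*p)++);
}
```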
Differential Revision: https://reviews.llvm.org/D47125
llvm-svn: 332891
1. Added restrictions on the memory scope, order, and volatile parameters.
2. Added custom processing for these builtins; this code is currently unused and
is needed to switch off the GCCBuiltin link to the builtins (an ongoing change in
the llvm tree).
3. Renamed the builtins as requested.
Differential Revision: https://reviews.llvm.org/D43281
llvm-svn: 332848
If a variable has an initializer, codegen tries to build its value. If
the variable is large, building its value requires substantial
resources, which causes behavior that looks strange from the user's
viewpoint: compiling a huge zero-initialized array like:
char data_1[2147483648u] = { 0 };
consumes an enormous amount of time and memory.
With this change codegen tries to determine whether the variable's initializer is
equivalent to a zero initializer. In that case the full constant value is not
constructed.
This change fixes PR18978.
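Conceptually, the check amounts to walking the initializer and confirming every element is a zero constant before falling back to a zeroinitializer; a hedged, Clang-independent sketch:
```cpp
// Conceptual sketch only (not Clang's actual implementation): report whether
// an initializer tree is equivalent to zero initialization, so the emitter
// can use a zeroinitializer instead of materializing a huge constant.
#include <vector>

struct InitNode {
  bool IsZeroConstant;             // meaningful for leaf (scalar) nodes
  std::vector<InitNode> Children;  // non-empty for aggregate initializers
};

bool isEquivalentToZeroInit(const InitNode &N) {
  if (N.Children.empty())
    return N.IsZeroConstant;       // scalar: must literally be zero
  for (const InitNode &C : N.Children)
    if (!isEquivalentToZeroInit(C))
      return false;                // any non-zero element disqualifies
  return true;
}
```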
Differential Revision: https://reviews.llvm.org/D46241
llvm-svn: 332847
The first version of the patch (r332228) was flawed because it was
putting structors into C5/D5 comdats very eagerly. This is correct only
if we can ensure the comdat contains all required versions of the
structor (which wasn't the case). This version uses a more nuanced
approach:
- for local structor symbols we use an alias because we don't have to
worry about comdats or other compilation units.
- linkonce symbols are emitted separately, as we cannot guarantee we
will have all symbols we need to form a comdat (they are emitted
lazily, only when referenced).
- available_externally symbols are also emitted separately, as the code
seemed to be worried about emitting an alias in this case.
- other linkage types are not affected by the optimization level. They
either get put into a comdat (weak) or get aliased (external).
Reviewers: rjmccall, aprantl
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D46685
llvm-svn: 332839
When a lambda captures a __block variable in the same statement, the compiler asserts because isCapturedBy assumes that an Expr can only be a BlockExpr or a StmtExpr, and that if it's a Stmt then all of the statement's children are expressions. That's wrong: we need to visit all sub-statements, even if they're not expressions, to see if they also capture.
Fix this issue by rewriting the isCapturedBy logic on top of RecursiveASTVisitor.
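A rough sketch of the visitor-based approach (simplified; the real isCapturedBy checks more than plain references):
```cpp
// Simplified sketch, not the exact CGDecl.cpp code: use RecursiveASTVisitor
// so every sub-statement is visited, not only sub-expressions.
#include "clang/AST/RecursiveASTVisitor.h"

namespace {
class CaptureFinder : public clang::RecursiveASTVisitor<CaptureFinder> {
  const clang::VarDecl *Var;

public:
  bool Found = false;
  explicit CaptureFinder(const clang::VarDecl *V) : Var(V) {}

  bool VisitDeclRefExpr(clang::DeclRefExpr *E) {
    if (E->getDecl() == Var)
      Found = true;
    return !Found; // stop traversing once a reference is found
  }
};
} // namespace

static bool isCapturedBy(const clang::VarDecl &Var, const clang::Stmt *S) {
  CaptureFinder Finder(&Var);
  Finder.TraverseStmt(const_cast<clang::Stmt *>(S));
  return Finder.Found;
}
```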
<rdar://problem/39926584>
llvm-svn: 332801
To support linking device code from different source files, it is necessary to
embed the fat binary at the host linking stage.
This patch emits an external symbol for the fat binary in host codegen; the
fat binary is then embedded by lld through a linker script.
Differential Revision: https://reviews.llvm.org/D46472
llvm-svn: 332724
MethodVFTableLocations in MicrosoftVTableContext contains canonicalized
decls, but it is sometimes asked to look up a non-canonicalized decl,
which causes an assertion failure and then a compilation failure.
Fixes PR37481.
Patch by Taiju Tsuiki!
Differential Revision: https://reviews.llvm.org/D46929
llvm-svn: 332639
lifetime.start/end expects its pointer argument to be in the alloca address space.
However, in C++ a temporary variable is in the default address space.
This patch changes the CreateMemTemp and CreateTempAlloca APIs to
return the original alloca instruction so it can be passed to lifetime.start/end.
It only affects targets with a non-zero alloca address space.
Differential Revision: https://reviews.llvm.org/D45900
llvm-svn: 332593
functions.
If the combined construct is specified in the declare target function
and the device code is emitted, the compiler crashes because of the
incorrectly chosen captured stmt. We should choose the innermost
captured statement, not the outermost.
llvm-svn: 332477
If the orphaned directive is executed in SPMD mode, we need to emit the
check for the SPMD mode and run the orphaned parallel directive in
sequential mode.
llvm-svn: 332467
In generic data-sharing mode we do not need to globalize
variables/parameters of reference/pointer types. They already are placed
in the global memory.
llvm-svn: 332380
The DEBUG() macro is very generic so it might clash with other projects.
The renaming was done as follows:
- git grep -l 'DEBUG' | xargs sed -i 's/\bDEBUG\s\?(/LLVM_DEBUG(/g'
- git diff -U0 master | ../clang/tools/clang-format/clang-format-diff.py -i -p1 -style LLVM
Explicitly avoided changing the strings in the clang-format tests.
Differential Revision: https://reviews.llvm.org/D44975
llvm-svn: 332350
Some targets have a constant address space (e.g. amdgcn). For them, string literals should be
emitted in the constant address space and then cast to the default address space.
Differential Revision: https://reviews.llvm.org/D46643
llvm-svn: 332279
Summary:
Removing the full structor and replacing all usages with the base one
can degrade debug quality as it will leave the debugger unable to locate
the full object structor. This is apparent when evaluating an expression
in the debugger which requires constructing an object of class which has
had this optimization applied to it. When compiling the expression, we
pretend that the class and its methods have been defined in another
compilation unit, so the expression compiler assumes the structor
definition must be available. This didn't use to be the case for
structors with internal linkage. Less aggressive optimizations like
emitting the full structor as an alias remain in place, as they do not
cause the structor symbol to disappear completely.
This improves debug quality on non-darwin platforms (darwin does not
have -mconstructor-aliases on by default, so it is spared these
problems) and enables us to remove some workarounds from LLDB which attempt to
mitigate this issue.
Reviewers: rjmccall, aprantl
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D46685
llvm-svn: 332228
These intrinsics work exactly like all other atomic_fetch_* intrinsics and allow creating *atomicrmw* instructions with an ordering.
Updated the clang-extensions document.
Differential Revision: https://reviews.llvm.org/D46386
llvm-svn: 332193
Summary:
The Itanium ABI requires the type info for pointer-to-incomplete types to have internal linkage, so that it doesn't interfere with the type info once the type is completed. Clang currently marks the type info name as internal as well. However, this causes a bug with STL implementations that use the type info name pointer to perform ordering and hashing of type infos.
For example:
```
// header.h
struct T;
extern std::type_info const& Info;
// tu_one.cpp
#include "header.h"
std::type_info const& Info = typeid(T*);
// tu_two.cpp
#include "header.h"
struct T {};
int main() {
  auto &TI1 = Info;
  auto &TI2 = typeid(T*);
  assert(TI1 == TI2); // Fails
  assert(TI1.hash_code() == TI2.hash_code()); // Fails
}
```
This patch fixes the STL bug by emitting the type info name as linkonce_odr when the type info is for a pointer-to-incomplete type.
Note that libc++ could fix this without a compiler change, but the quality of the fix would be poor. The library would either have to:
(A) Always perform strcmp/string hashes.
(B) Determine if we have a pointer-to-incomplete type, and only do strcmp then. This would require an ABI break for libc++.
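For context, a simplified illustration (not actual libc++/libstdc++ code) of why the name pointer's linkage matters: some implementations compare and hash type_info objects by the address of the name string rather than its contents:
```cpp
#include <cstddef>

// Illustrative only: a type_info-like class that identifies types by the
// address of the name string. If each TU gets its own internal-linkage copy
// of the name, equality and hashing break across TUs.
struct illustrative_type_info {
  const char *__name;

  bool operator==(const illustrative_type_info &RHS) const {
    return __name == RHS.__name;                  // pointer equality, not strcmp
  }
  std::size_t hash_code() const {
    return reinterpret_cast<std::size_t>(__name); // hashes the pointer value
  }
};
```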
Reviewers: rsmith, rjmccall, majnemer, vsapsai
Reviewed By: rjmccall
Subscribers: smeenai, cfe-commits
Differential Revision: https://reviews.llvm.org/D46665
llvm-svn: 332028
This commit relands r331904.
Adding a SrcMgr::CharacteristicKind parameter to the InclusionDirective
in PPCallbacks, and updating calls to that function. This will be useful
in https://reviews.llvm.org/D43778 to determine which includes are system
headers.
Differential Revision: https://reviews.llvm.org/D46614
llvm-svn: 332021
Added initial support for L2 parallelism in SPMD mode. Note, though,
that the orphaned parallel directives are not currently supported in
SPMD mode.
llvm-svn: 332016
This is unnecessary for AVX512VL-supporting CPUs like SKX. We can just emit a 128-bit masked load/store here no matter what. The backend will widen it to 512 bits on KNL CPUs.
Fixes the frontend portion of PR37386. Need to fix the backend to optimize the new sequences well.
llvm-svn: 331958
Summary:
The Itanium ABI requires the type info for pointer-to-incomplete types to have internal linkage, so that it doesn't interfere with the type info once the type is completed. Clang currently marks the type info name as internal as well. However, this causes a bug with STL implementations that use the type info name pointer to perform ordering and hashing of type infos.
For example:
```
// header.h
struct T;
extern std::type_info const& Info;
// tu_one.cpp
#include "header.h"
std::type_info const& Info = typeid(T*);
// tu_two.cpp
#include "header.h"
struct T {};
int main() {
  auto &TI1 = Info;
  auto &TI2 = typeid(T*);
  assert(TI1 == TI2); // Fails
  assert(TI1.hash_code() == TI2.hash_code()); // Fails
}
```
This patch fixes the STL bug by emitting the type info name as linkonce_odr when the type info is for a pointer-to-incomplete type.
Note that libc++ could fix this without a compiler change, but the quality of the fix would be poor. The library would either have to:
(A) Always perform strcmp/string hashes.
(B) Determine if we have a pointer-to-incomplete type, and only do strcmp then. This would require an ABI break for libc++.
Reviewers: rsmith, rjmccall, majnemer, vsapsai
Reviewed By: rjmccall
Subscribers: smeenai, cfe-commits
Differential Revision: https://reviews.llvm.org/D46665
llvm-svn: 331957
Previously we emitted something like
  rotl(x, n) {
    n &= bitwidth - 1;
    return n != 0 ? ((x << n) | (x >> (bitwidth - n))) : x;
  }
We used a select to avoid the undefined behavior on the (bitwidth - n) shift when n is 0.
The middle end and backend don't really recognize this as a rotate and end up emitting a cmov or control flow because of the select.
A better pattern is (x << (n & mask)) | (x >> (-n & mask)), where mask is bitwidth - 1.
Fixes the main complaint in PR37387. There's still some work to be done if the user writes that sequence directly on a short or char where type promotion rules can prevent it from being recognized. The builtin is emitting direct IR with unpromoted types so that isn't a problem for it.
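For reference, the scalar shape of the new pattern written out for a 32-bit rotate-left (this is not Clang's code; the builtin emits the equivalent IR directly):
```cpp
#include <cstdint>

// Both shift amounts stay within [0, 31], so there is no undefined behavior,
// no select is needed, and backends can pattern-match this to a rotate.
uint32_t rotl32(uint32_t x, uint32_t n) {
  return (x << (n & 31)) | (x >> (-n & 31));
}
```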
Differential Revision: https://reviews.llvm.org/D46656
llvm-svn: 331943
Summary:
This attribute tells clang to exclude a function from stack-protector
instrumentation when an -fstack-protector option is passed.
The GCC option for this is
__attribute__((__optimize__("no-stack-protector"))), and the
equivalent clang syntax is __attribute__((no_stack_protector)).
This is used in the Linux kernel to selectively disable the stack
protector for certain functions.
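A small usage example (compile with an -fstack-protector option enabled; the function names are illustrative):
```cpp
// With -fstack-protector-strong, protected_fn() gets stack-protector
// instrumentation, while unprotected_fn() opts out via the new attribute.
__attribute__((no_stack_protector))
int unprotected_fn(int i) {
  char buf[64];
  buf[i & 63] = 1;
  return buf[0];
}

int protected_fn(int i) {
  char buf[64];
  buf[i & 63] = 1;
  return buf[0];
}
```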
Reviewers: aaron.ballman, rsmith, rnk, probinson
Reviewed By: aaron.ballman
Subscribers: probinson, srhines, cfe-commits
Differential Revision: https://reviews.llvm.org/D46300
llvm-svn: 331925
Adding a SrcMgr::CharacteristicKind parameter to the InclusionDirective
in PPCallbacks, and updating calls to that function. This will be useful
in https://reviews.llvm.org/D43778 to determine which includes are system
headers.
Differential Revision: https://reviews.llvm.org/D46614
llvm-svn: 331904
We must emit unique names for the offloading region IDs. This is required
to support compilation and linking of several compilation units.
llvm-svn: 331899
If the global variables are marked as declare target and they need
ctors/dtors, these ctors/dtors are emitted and then invoked by the
offloading runtime library. They are not explicitly used in the emitted
code and thus can be optimized out. The patch marks these functions as used,
so the optimizer cannot remove them during the optimization phase.
llvm-svn: 331879
It broke the Chromium build (see reply on the review).
> Generate DILabel metadata and call llvm.dbg.label after label
> statement to associate the metadata with the label.
>
> Differential Revision: https://reviews.llvm.org/D45045
>
> Patch by Hsiangkai Wang.
This doesn't revert the change to backend-unsupported-error.ll
that seems to correspond to an llvm-side change.
llvm-svn: 331861
Generate DILabel metadata and call llvm.dbg.label after label
statement to associate the metadata with the label.
Differential Revision: https://reviews.llvm.org/D45045
Patch by Hsiangkai Wang.
llvm-svn: 331843
This is similar to the LLVM change https://reviews.llvm.org/D46290.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
Differential Revision: https://reviews.llvm.org/D46320
llvm-svn: 331834
The linkage of the global entries must be weak to enable support of
redefinition of the same target regions in multiple compilation units.
llvm-svn: 331768
This patch addresses some mostly trivial post-commit review comments received
on r331677.
Additionally, this patch fixes an assertion in `getNarrowingKind` caused by
the use of an uninitialized value from `checkThreeWayNarrowingConversion`.
llvm-svn: 331707
Summary:
This patch tackles the low-hanging fruit for the builtin operator<=> expressions. It currently needs some cleanup before landing, but I want to get some initial feedback.
The main changes are:
* Lookup, build, and store the required standard library types and expressions in `ASTContext`. By storing them in ASTContext we don't need to store (and duplicate) the required expressions in the BinaryOperator AST nodes.
* Implement [expr.spaceship] checking, including diagnosing narrowing conversions.
* Implement `ExprConstant` for builtin spaceship operators.
* Implement builtin operator<=> support in `CodeGenAgg`. Initially I emitted the required comparisons using `ScalarExprEmitter::VisitBinaryOperator`, but this caused the operand expressions to be emitted once for every required cmp.
* Implement [builtin.over] with modifications to support the intent of P0946R0. See the note on `BuiltinOperatorOverloadBuilder::addThreeWayArithmeticOverloads` for more information about the workaround.
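A quick usage illustration of the builtin three-way comparison being implemented here (assumes -std=c++2a and a <compare> header providing the comparison category types):
```cpp
#include <compare>

// Builtin operator<=> on scalars yields a comparison category type.
std::strong_ordering compare_ints(int a, int b) {
  return a <=> b;
}

bool less_than(int a, int b) {
  return (a <=> b) < 0; // comparison categories compare against literal 0
}
```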
Reviewers: rsmith, aaron.ballman, majnemer, rnk, compnerd, rjmccall
Reviewed By: rjmccall
Subscribers: rjmccall, rsmith, aaron.ballman, junbuml, mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D45476
llvm-svn: 331677
Added initial codegen for level 2, 3 etc. parallelism. Currently, all
the second, the third etc. parallel regions will run sequentially.
llvm-svn: 331642
Summary:
Passes down the necessary codegen options to the LTO Config to enable
-fdiagnostics-show-hotness and -fsave-optimization-record in the ThinLTO
backend for a distributed build.
Also, remove warning about not having PGO when the input is IR.
Reviewers: pcc
Subscribers: mehdi_amini, inglorion, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D46464
llvm-svn: 331592
Summary:
http://wg21.link/P0664r2 section "Evolution/Core Issues 24" describes a
proposed change to the Coroutines TS that would have any exceptions thrown
after the initial suspend point of a coroutine be caught by the handler
specified by the promise type's 'unhandled_exception' member function.
This commit provides a sample implementation of the specified behavior.
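A sketch of the behavior in user code, using the Coroutines TS spellings of the time (-fcoroutines-ts); the task type and its promise below are illustrative, not part of the patch:
```cpp
#include <cstdio>
#include <experimental/coroutine>

struct task {
  struct promise_type {
    task get_return_object() { return {}; }
    std::experimental::suspend_never initial_suspend() { return {}; }
    std::experimental::suspend_never final_suspend() { return {}; }
    void return_void() {}
    void unhandled_exception() {
      // With this change, an exception thrown after the initial suspend
      // point lands here instead of propagating out of the coroutine body.
      std::puts("caught by the promise");
    }
  };
};

task might_throw() {
  throw 42; // thrown after the initial suspend point
  co_return;
}
```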
Test Plan: `check-clang`
Reviewers: GorNishanov, EricWF
Reviewed By: GorNishanov
Subscribers: cfe-commits, lewissbaker, eric_niebler
Differential Revision: https://reviews.llvm.org/D45860
llvm-svn: 331519
FunctionProtoType.
We previously re-evaluated the expression each time we wanted to know whether
the type is noexcept or not. We now evaluate the expression exactly once.
This is not quite "no functional change": it fixes a crasher bug during AST
deserialization where we would try to evaluate the noexcept specification in a
situation where we have not deserialized sufficient portions of the AST to
permit such evaluation.
llvm-svn: 331428
Some symbols are not allowed to be used as names on some targets. This patch
tries to unify the emission of the names of LLVM globals so they can be
used on different targets.
llvm-svn: 331358
devices.
If the function is an instantiation|specialization of the template and
is used in the device code, the definitions of such functions should be
emitted for the device.
llvm-svn: 331261
This is not yet part of any C++ working draft, and so is controlled by the flag
-fchar8_t rather than a -std= flag. (The GCC implementation is controlled by a
flag with the same name.)
This implementation is experimental, and will be removed or revised
substantially to match the proposal as it makes its way through the C++
committee.
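A small example of what the flag enables (compile with -fchar8_t):
```cpp
// With -fchar8_t, u8 string and character literals take the new char8_t type.
const char8_t *greeting = u8"hello"; // array of const char8_t
char8_t letter = u8'h';
static_assert(sizeof(char8_t) == 1, "char8_t has the same size as char");
```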
llvm-svn: 331244
As suggested in the post-commit thread for rL331056, we should match these
clang options with the established vocabulary of the corresponding sanitizer
option. Also, the use of 'strict' is well-known for these kinds of knobs,
and we can improve the descriptive text in the docs.
So this intends to match the logic of D46135 but only change the words.
Matching LLVM commit to match this spelling of the attribute to follow shortly.
Differential Revision: https://reviews.llvm.org/D46236
llvm-svn: 331209
Emit error messages instead of crashing the compiler when a target region
does not exist in the device code, and fix a crash when the location comes
from macros.
llvm-svn: 331195
When a '>>' token is split into two '>' tokens (in C++11 onwards), or (as an
extension) when we do the same for other tokens starting with a '>', we can't
just use a location pointing to the first '>' as the location of the split
token, because that would result in our miscomputing the length and spelling
for the token. As a consequence, for example, a refactoring replacing 'A<X>'
with something else would sometimes replace one character too many, and
similarly diagnostics highlighting a template-id source range would highlight
one character too many.
Fix this by creating an expansion range covering the first character of the
'>>' token, whose spelling is '>'. For this to work, we generalize the
expansion range of a macro FileID to be either a token range (the common case)
or a character range (used in this new case).
llvm-svn: 331155
As discussed in the post-commit thread for:
rL330437 ( http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20180423/545906.html )
We need a way to opt-out of a float-to-int-to-float cast optimization because too much
existing code relies on the platform-specific undefined result of those casts when the
float-to-int overflows.
The LLVM changes associated with adding this function attribute are here:
rL330947
rL330950
rL330951
Also as suggested, I changed the LLVM doc to mention the specific sanitizer flag that
catches this problem:
rL330958
Differential Revision: https://reviews.llvm.org/D46135
llvm-svn: 331041
The ACLE spec which describes these intrinsics hasn't been published yet, but
this is based on the final draft which will be published soon, and these have
already been implemented by GCC.
Differential revision: https://reviews.llvm.org/D46109
llvm-svn: 331039
SPIR-V encodes the read_only and write_only access qualifiers of pipes,
so separate LLVM IR types are required to target SPIR-V. Other backends
may also find this useful.
These new types are `opencl.pipe_ro_t` and `opencl.pipe_wo_t`, which
replace `opencl.pipe_t`.
This replaces __get_pipe_num_packets(...) and __get_pipe_max_packets(...)
which took a read_only pipe with separate versions for read_only and
write_only pipes, namely:
* __get_pipe_num_packets_ro(...)
* __get_pipe_num_packets_wo(...)
* __get_pipe_max_packets_ro(...)
* __get_pipe_max_packets_wo(...)
These separate versions exist to avoid needing a bitcast to one of the
two qualified pipe types.
Patch by Stuart Brady.
Differential Revision: https://reviews.llvm.org/D46015
llvm-svn: 331026
function if a function delegates to another function.
Fix a bug introduced in r328731, which caused a struct with ObjC __weak
fields that was passed to a function to be destructed twice, once in the
callee function and once in another function the callee function
delegates to. To prevent this, keep track of the callee-destructed
structs passed to a function and disable their cleanups at the point of
the call to the delegated function.
This reapplies r331016, which was reverted in r331019 because it caused
an assertion to fail in EmitDelegateCallArg on a windows bot. I made
changes to EmitDelegateCallArg so that it doesn't try to deactivate
cleanups for structs that have trivial destructors (cleanups for those
structs are never pushed to the cleanup stack in EmitParmDecl).
rdar://problem/39194693
Differential Revision: https://reviews.llvm.org/D45382
llvm-svn: 331020
function if a function delegates to another function.
Fix a bug introduced in r328731, which caused a struct with ObjC __weak
fields that was passed to a function to be destructed twice, once in the
callee function and once in another function the callee function
delegates to. To prevent this, keep track of the callee-destructed
structs passed to a function and disable their cleanups at the point of
the call to the delegated function.
rdar://problem/39194693
Differential Revision: https://reviews.llvm.org/D45382
llvm-svn: 331016
This patch is a tweak of changyu's patch: https://reviews.llvm.org/D40381. It differs in that the recognition of the 'concept' token is moved into the machinery that recognizes declaration-specifiers - this allows us to leverage the attribute handling machinery more seamlessly.
See the test file to get a sense of the basic parsing that this patch supports.
There is much more work to be done before concepts are usable...
Thanks Changyu!
llvm-svn: 330794
HIP is a language similar to CUDA (https://github.com/ROCm-Developer-Tools/HIP/blob/master/docs/markdown/hip_kernel_language.md ).
The language syntax is very similar, which allows a HIP program to be compiled as a CUDA program by Clang. The main difference
is the host API. HIP has a set of vendor-neutral host APIs which can be implemented on different platforms. Currently there is an open source
implementation of the HIP runtime for the amdgpu target (https://github.com/ROCm-Developer-Tools/HIP).
This patch adds support for the hip input kind and language standard.
When a HIP file is compiled, both LangOpts.CUDA and LangOpts.HIP are turned on. This allows compilation of a HIP program as CUDA
in most cases; only where special handling of HIP is needed is LangOpts.HIP checked.
This patch also adds support for kernel launching in HIP programs using the HIP host API.
When -x hip is not specified, there is no behaviour change for CUDA.
Patch by Greg Rodgers.
Revised and lit test added by Yaxun Liu.
Differential Revision: https://reviews.llvm.org/D44984
llvm-svn: 330790
/usr/local/bin/ld.lld: error: undefined symbol: llvm::createAggressiveInstCombinerPass()
>>> referenced by cc1_main.cpp
>>> tools/clang/tools/driver/CMakeFiles/clang.dir/cc1_main.cpp.o:(_GLOBAL__sub_I_cc1_main.cpp)
And so on
The bot coverage is clearly missing.
llvm-svn: 330694
NVPTX target.
When generating the wrapper function for the offloading region, we need
to call the outlined function and cast the arguments correctly to follow
the ABI. Usually, variables captured by value are casted to `uintptr_t`
type. But this should not performed for the variables with pointer type.
llvm-svn: 330620
If an atomic variable is misaligned (and that suspicion is why Clang emits
libcalls at all) the runtime support library will have to use a lock to safely
access it, with potentially very bad performance consequences. There's a very
good chance this is unintentional so it makes sense to issue a warning.
Also give it a named group so people can promote it to an error, or disable it
if they really don't care.
llvm-svn: 330566
Some targets need a special LLVM calling convention for CUDA kernels.
This patch does that through a TargetCodeGenInfo hook.
It only affects the amdgcn target.
Patch by Greg Rodgers.
Revised and lit tests added by Yaxun Liu.
Differential Revision: https://reviews.llvm.org/D45223
llvm-svn: 330447
Summary:
By default Clang outputs its version (including git commit hash, in
case of trunk builds) into object and assembly files. It might be
useful to have an option to disable this, especially for debugging
purposes.
This patch implements new command line flags -Qn and -Qy (the names
are chosen for compatibility with GCC). -Qn disables output of
the 'llvm.ident' metadata string and the 'producer' debug info. -Qy
(enabled by default) does the opposite.
Reviewers: faisalv, echristo, aprantl
Reviewed By: aprantl
Subscribers: aprantl, cfe-commits, JDevlieghere, rogfer01
Differential Revision: https://reviews.llvm.org/D45255
llvm-svn: 330442
nvcc generates a unique registration function for each object file
that contains relocatable device code. Unique names are achieved
with a module id that is also reflected in the function's name.
Differential Revision: https://reviews.llvm.org/D42922
llvm-svn: 330425
This implements support for the previously ignored flag
`-falign-functions`. This allows the frontend to request alignment on
function definitions in the translation unit where they are not
explicitly requested in code. This is compatible with the GCC behaviour
and the ICC behaviour.
The scalar value passed to `-falign-functions` aligns functions to a
power-of-two boundary. If the flag is used without a value, functions are aligned to
16-byte boundaries. If a scalar is specified, it must be an integer
less than or equal to 4096. If the value is not a power of two, the
driver will round it up to the nearest power of two.
llvm-svn: 330378
The force_align_arg_pointer attribute was using a hardcoded 16-byte
alignment value which in combination with -mstack-alignment=32 (or
larger) would produce a misaligned stack which could result in crashes
when accessing stack buffers using aligned AVX load/store instructions.
Fix the issue by using the "stackrealign" function attribute instead
of using a hardcoded 16-byte alignment.
Patch By: Gramner
Differential Revision: https://reviews.llvm.org/D45812
llvm-svn: 330331
This is the patch that lowers x86 intrinsics to native IR
in order to enable optimizations.
Patch by tkrupa
Differential Revision: https://reviews.llvm.org/D44786
llvm-svn: 330323
have a non-trivial destructor.
This fixes a bug introduced in r328731 where CodeGen emits calls to
synthesized destructors for non-trivial C structs in C++ mode when the
struct passed to EmitCallArg doesn't have a non-trivial destructor.
Under Microsoft's ABI, ASTContext::isParamDestroyedInCallee currently
always returns true, so it's necessary to check whether the struct has a
non-trivial destructor before pushing a cleanup in EmitCallArg.
This fixes PR37146.
llvm-svn: 330304
Summary:
A clang builtin for xray typed events. Differs from
__xray_customevent(...) by the presence of a type tag that is vended by
compiler-rt in typical usage. This allows xray handlers to expand logged
events with their type description and plugins to process traced events
based on type.
This change depends on D45633 for the intrinsic definition.
Reviewers: dberris, pelikan, rnk, eizan
Subscribers: cfe-commits, llvm-commits
Differential Revision: https://reviews.llvm.org/D45716
llvm-svn: 330220
to a header file.
This is in preparation for using the visitor classes to warn about
memcpy'ing non-trivial C structs.
See the discussion here:
https://reviews.llvm.org/D45310
rdar://problem/36124208
llvm-svn: 330201
register destructor functions annotated with __attribute__((destructor))
using __cxa_atexit or atexit.
Register destructor functions annotated with __attribute__((destructor))
by calling __cxa_atexit from a synthesized constructor function instead of
emitting references to the functions in a special section.
The primary reason for adding this option is that we are planning to
deprecate the __mod_term_funcs section on Darwin in the future. This
feature is enabled by default only on Darwin. Users who do not want it
can pass the command line option '-fno-register-global-dtors-with-atexit' to
disable it.
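For reference, the kind of function this affects; with the new Darwin-default behavior its registration goes through __cxa_atexit in a synthesized constructor rather than the __mod_term_funcs section:
```cpp
#include <cstdio>

// A global destructor function of the sort registered by this change.
__attribute__((destructor))
static void on_program_exit() {
  std::puts("tearing down");
}
```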
rdar://problem/33887655
Differential Revision: https://reviews.llvm.org/D45578
llvm-svn: 330199
Summary:
The clang driver option -save-temps was not passed to the LTO config,
so when invoking the ThinLTO backends via clang during distributed
builds there was no way to get LTO to save temp files.
Getting this to work with ThinLTO distributed builds also required
changing the driver to avoid a separate compile step to emit unoptimized
bitcode when the input was already bitcode under -save-temps. Not only is
this unnecessary in general, it is problematic for ThinLTO backends since
the temporary bitcode file to the backend would not match the module path
in the combined index, leading to incorrect ThinLTO backend index-based
optimizations.
Reviewers: pcc
Subscribers: mehdi_amini, inglorion, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D45217
llvm-svn: 330194
Global variables marked as declare target are allowed to be used in map
clauses. Patch fixes the crash of the compiler on the declare target
variables in map clauses.
llvm-svn: 330156
volatile array field is copied.
The crash occurs because method 'visitArray' passes a null FieldDecl to
method 'visit' and some of the methods called downstream expect a
non-null FieldDecl to be passed.
This reapplies r330151 with a fix to the test case.
rdar://problem/33599681
llvm-svn: 330155
framework module SomeKitCore {
  ...
  export_as SomeKit
}
Given the module above, while generating autolink information during
codegen, clang should emit '-framework SomeKitCore' only if SomeKit
was not imported in the relevant TU; otherwise it should use '-framework
SomeKit' instead.
rdar://problem/38269782
llvm-svn: 330152
volatile array field is copied.
The crash occurs because method 'visitArray' passes a null FieldDecl to
method 'visit' and some of the methods called downstream expect a
non-null FieldDecl to be passed.
rdar://problem/33599681
llvm-svn: 330151
When emitting CodeView debug information, compiler-generated thunk routines
should be emitted using S_THUNK32 symbols instead of S_GPROC32_ID symbols so
Visual Studio can properly step into the user code. This initial support only
handles standard thunk ordinals.
Differential Revision: https://reviews.llvm.org/D43838
llvm-svn: 330132
Summary:
Clean carriage returns from lib/ and include/. NFC.
(I have to make this change locally in order for `git diff` to show sane output after I edit a file, so I might as well ask for it to be committed. I don't have commit privs myself.)
(Without this patch, `git rebase`ing any change involving SemaDeclCXX.cpp is a real nightmare. :( So while I have no right to ask for this to be committed, geez would it make my workflow easier if it were.)
Here's the command I used to reformat things. (Requires bash and OSX/FreeBSD sed.)
git grep -l $'\r' lib include | xargs sed -i -e $'s/\r//'
find lib include -name '*-e' -delete
Reviewers: malcolm.parsons
Reviewed By: malcolm.parsons
Subscribers: emaste, krytarowski, cfe-commits
Differential Revision: https://reviews.llvm.org/D45591
Patch by Arthur O'Dwyer.
llvm-svn: 330112
Summary:
This change addresses http://llvm.org/PR36926 by allowing users to pick
which instrumentation bundles to use, when instrumenting with XRay. In
particular, the flag `-fxray-instrumentation-bundle=` has four valid
values:
- `all`: the default, emits all instrumentation kinds
- `none`: equivalent to -fno-xray-instrument
- `function`: emits the entry/exit instrumentation
- `custom`: emits the custom event instrumentation
These can be combined either as comma-separated values, or as
repeated flag values.
Reviewers: echristo, kpw, eizan, pelikan
Reviewed By: pelikan
Subscribers: mgorny, cfe-commits
Differential Revision: https://reviews.llvm.org/D44970
llvm-svn: 329985
It means the same thing as -mllvm; there isn't any reason to have two
options which do the same thing.
Differential Revision: https://reviews.llvm.org/D45109
llvm-svn: 329965
Summary:
Protocols that were being referenced but could not be fully realized were being emitted without `properties`/`optional_properties`. Since all v3 protocols must be 9 processor words wide, the lack of these fields is catastrophic for the runtime.
As an example, the runtime cannot know [here](https://github.com/gnustep/libobjc2/blob/master/protocol.c#L73) that `properties` and `optional_properties` are invalid.
Reviewers: rjmccall, theraven
Reviewed By: rjmccall, theraven
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D45305
llvm-svn: 329882
When we enter a __finally block, the CGF's CurCodeDecl will be null
(because CodeGenFunction::StartFunction is given an empty GlobalDecl for
a __finally block), and so the dyn_cast here will result in an assertion
failure. Change it to dyn_cast_or_null to handle this case.
Differential Revision: https://reviews.llvm.org/D45523
llvm-svn: 329836
The current support for the feature produces only 2 lines in the report:
- some general Code Generation time;
- total time of Backend Consumer actions.
This patch extends the Clang time report with new lines related to the Preprocessor, Include Files Search, Parsing, etc.
Differential Revision: https://reviews.llvm.org/D43578
llvm-svn: 329684
Summary:
Right now to disable -fsanitize=kernel-address instrumentation, one needs to use no_sanitize("kernel-address"). Make either no_sanitize("address") or no_sanitize("kernel-address") disable both ASan and KASan instrumentation. Also remove redundant test.
Patch by Andrey Konovalov
Reviewers: eugenis, kcc, glider, dvyukov, vitalybuka
Reviewed By: eugenis, vitalybuka
Differential Revision: https://reviews.llvm.org/D44981
llvm-svn: 329612
I believe all the pieces are now in place in the backend to make this work correctly. We can either mask the input to 32 bits for pmuludq or shl/ashr for pmuldq and use a regular mul instruction. The backend should combine this to PMULUDQ/PMULDQ and then SimplifyDemandedBits will remove the and/shifts.
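A rough scalar model (not the actual intrinsic lowering) of what each 64-bit lane now computes:
```cpp
#include <cstdint>

// pmuludq-style lane: zero-extend (mask) the low 32 bits, then multiply.
uint64_t mul_lo32_unsigned(uint64_t a, uint64_t b) {
  return (a & 0xffffffffu) * (b & 0xffffffffu);
}

// pmuldq-style lane: sign-extend (shl/ashr) the low 32 bits, then multiply.
int64_t mul_lo32_signed(int64_t a, int64_t b) {
  return static_cast<int64_t>(static_cast<int32_t>(a)) *
         static_cast<int64_t>(static_cast<int32_t>(b));
}
```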
Differential Revision: https://reviews.llvm.org/D45421
llvm-svn: 329605
Added NUW flags to all of the add|mul|sub operations and replaced sdiv with udiv,
as we operate only on unsigned values (addresses converted to integers).
llvm-svn: 329411
Found via codespell -q 3 -I ../clang-whitelist.txt
Where whitelist consists of:
archtype
cas
classs
checkk
compres
definit
frome
iff
inteval
ith
lod
methode
nd
optin
ot
pres
statics
te
thru
Patch by luzpaz! (This is a subset of D44188 that applies cleanly with a few
files that have dubious fixes reverted.)
Differential revision: https://reviews.llvm.org/D44188
llvm-svn: 329399
the tail padding is not reused.
We track on the AggValueSlot (and through a couple of other
initialization actions) whether we're dealing with an object that might
share its tail padding with some other object, so that we can avoid
emitting stores into the tail padding if that's the case. We still
widen stores into tail padding when we can do so.
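For example, under the Itanium C++ ABI the following layout is the kind of case being guarded against (illustrative, not taken from the patch):
```cpp
// Base has 3 bytes of tail padding after 'c'. Under the Itanium ABI,
// Derived::d may be laid out inside that padding (sizeof(Derived) ==
// sizeof(Base) == 8), so a store covering all of sizeof(Base) while
// initializing the Base subobject of a Derived could clobber d.
struct Base {
  int i;  // 4 bytes
  char c; // 1 byte + 3 bytes of tail padding
};

struct Derived : Base {
  char d; // may live in Base's tail padding
};
```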
Differential Revision: https://reviews.llvm.org/D45306
llvm-svn: 329342
identifier.
This patch fixes a few places in CGObjCMac.cpp where the class
identifier was used instead of the name specified by objc_runtime_name.
rdar://problem/37910822
Differential Revision: https://reviews.llvm.org/D45101
llvm-svn: 329128
Summary:
Add support for the -fsanitize=shadow-call-stack flag which causes clang
to add ShadowCallStack attribute to functions compiled with that flag
enabled.
Reviewers: pcc, kcc
Reviewed By: pcc, kcc
Subscribers: cryptoad, cfe-commits, kcc
Differential Revision: https://reviews.llvm.org/D44801
llvm-svn: 329122
This reverts r328795 which introduced an issue with referencing __global__
function templates. More details in the original review D44747.
llvm-svn: 329099
Summary:
The following class hierarchy requires that we be able to emit a
this-adjusting thunk for B::foo in C's vftable:
struct Incomplete;
struct A {
virtual A* foo(Incomplete p) = 0;
};
struct B : virtual A {
void foo(Incomplete p) override;
};
struct C : B { int c; };
This TU is valid, but lacks a definition of 'Incomplete', which makes it
hard to build a thunk for the final overrider, B::foo.
Before this change, Clang gives up attempting to emit the thunk, because
it assumes that if the parameter types are incomplete, it must be
emitting the thunk for optimization purposes. This is untrue for the MS
ABI, where the implementation of B::foo has no idea what thunks C's
vftable may require. Clang needs to emit the thunk without necessarily
having access to the complete prototype of foo.
This change makes Clang emit a musttail variadic call when it needs such
a thunk. I call these "unprototyped" thunks, because they only prototype
the "this" parameter, which must always come first in the MS C++ ABI.
These thunks work, but they create ugly LLVM IR. If the call to the
thunk is devirtualized, it will be a call to a bitcast of a function
pointer. Today, LLVM cannot inline through such a call, but I want to
address that soon, because we also use this pattern for virtual member
pointer thunks.
This change also implements an old FIXME in the code about reusing the
thunk's computed CGFunctionInfo as much as possible. Now we don't end up
computing the thunk's mangled name and arranging it's prototype up to
around three times.
Fixes PR25641
Reviewers: rjmccall, rsmith, hans
Subscribers: Prazek, cfe-commits
Differential Revision: https://reviews.llvm.org/D45112
llvm-svn: 329009
This re-lands r328845 with fixes for crbug.com/827810.
The initial motivation was to hoist MethodVFTableLocation to global
scope so it could be forward declared.
In this patch, I noticed that MicrosoftVTableContext uses some risky
patterns. It has methods that return references to data stored in
DenseMaps. I've made some of them return by value for trivial structs
and I've moved some things into separate allocations.
llvm-svn: 329007
CUDA shared variable should be initialized with undef.
Patch by Greg Rodgers.
Revised and lit test added by Yaxun Liu.
Differential Revision: https://reviews.llvm.org/D44985
llvm-svn: 328994
A recent addition to the Coroutines TS (https://wg21.link/p0913) adds a pre-defined
coroutine noop_coroutine that does nothing. To implement this feature, we implemented
an llvm.coro.noop intrinsic that returns a coroutine handle to a coroutine that
does nothing when resumed or destroyed.
This patch adds a builtin __builtin_coro_noop() that maps to the llvm.coro.noop intrinsic.
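A minimal illustration of the builtin, assuming -fcoroutines-ts and the experimental library header of the time (in practice the builtin is reached through the library's noop_coroutine() helper):
```cpp
#include <experimental/coroutine>

void demo_noop_handle() {
  // The builtin returns the address of the no-op coroutine's frame.
  void *frame = __builtin_coro_noop();
  auto h = std::experimental::coroutine_handle<>::from_address(frame);
  h.resume();  // does nothing
  h.destroy(); // does nothing
}
```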
Related llvm change: https://reviews.llvm.org/D45114
llvm-svn: 328993
Summary:
The docs for the LLVM coroutines intrinsic `@llvm.coro.id` state that
"The second argument, if not null, designates a particular alloca instruction
to be a coroutine promise."
However, if the address sanitizer pass is run before the `@llvm.coro.id`
intrinsic is lowered, the `alloca` instruction passed to the intrinsic as its
second argument is converted, as per the
https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm docs, to
an `inttoptr` instruction that accesses the address of the promise.
On optimization levels `-O1` and above, the `-asan` pass is run after
`-coro-early`, `-coro-split`, and `-coro-elide`, and before
`-coro-cleanup`, and so there is no issue. At `-O0`, however, `-asan`
is run in between `-coro-early` and `-coro-split`, which causes an
assertion to be hit when the `inttoptr` instruction is forcibly cast to
an `alloca`.
Rearrange the passes such that the coroutine passes are registered
before the sanitizer passes.
Test Plan:
Compile a simple C++ program that uses coroutines in `-O0` with
`-fsanitize=address`, and confirm no assertion is hit:
`clang++ coro-example.cpp -fcoroutines-ts -g -fsanitize=address -fno-omit-frame-pointer`.
Reviewers: GorNishanov, lewissbaker, EricWF
Reviewed By: GorNishanov
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D43927
llvm-svn: 328951
The problem with the previous logic was that there might not be any
explicit copy/move constructor declarations, e.g. if the type is
trivial and we've never type-checked a copy of it. Relying on Sema's
computation seems much more reliable.
Also, I believe Richard's recommendation is exactly the rule we use
now on the Itanium ABI, modulo the trivial_abi attribute (which this
change of course fixes our handling of in Swift).
This does mean that we have a less portable rule for deciding
indirectness for swiftcall. I would prefer it if we just applied the
Itanium rule universally under swiftcall, but in the meantime, I need
to fix this bug.
This only arises when defining functions with class-type arguments
in C++, as we do in the Swift runtime. It doesn't affect normal Swift
operation because we don't import code as C++.
llvm-svn: 328942
variables.
Added emission of the offloading data sections for the variables within
declare target regions and fixed emission of the variables marked as
declare target that are not within a declare target region.
llvm-svn: 328888
This allows forward declaring it so that we can add it to
MicrosoftMangleContext::mangleVirtualMemPtrThunk without including
VTableBuilder.h. That saves a hashtable lookup when emitting virtual
member pointer functions.
It also shortens a really long type name. This struct has "VFtable" in
the name, so it seems pretty unlikely that someone will assume it is
generally useful for non-MS C++ ABI stuff.
llvm-svn: 328845
This commit generalizes NRVO to cover C structs (both trivial and
non-trivial structs).
rdar://problem/33599681
Differential Revision: https://reviews.llvm.org/D44968
llvm-svn: 328809
This patch sets target specific calling convention for CUDA kernels in IR.
Patch by Greg Rodgers.
Revised and lit test added by Yaxun Liu.
Differential Revision: https://reviews.llvm.org/D44747
llvm-svn: 328795
The conversion of these operations to bitcode helps to eliminate an additional
store in certain cases. We used to lower these load intrinsics during DAG-to-DAG
conversion, by which time the "Dead Store Elimination" pass has
already run. There is an associated LLVM patch.
Patch by Sumanth Gundapaneni.
llvm-svn: 328776
ObjC and ObjC++ pass non-trivial structs in a way that is incompatible
with each other. For example:
typedef struct {
  id f0;
  __weak id f1;
} S;

// this code is compiled in c++.
extern "C" {
void foo(S s);
}

void caller() {
  // the caller passes the parameter indirectly and destructs it.
  foo(S());
}

// this function is compiled in c.
// 'a' is passed directly and is destructed in the callee.
void foo(S a) {
}
This patch fixes the incompatibility by passing and returning structs
with __strong or __weak fields using the C ABI in C++ mode. __strong and
__weak fields in a struct do not cause the struct to be destructed in
the caller and __strong fields do not cause the struct to be passed
indirectly.
Also, this patch fixes the microsoft ABI bug mentioned here:
https://reviews.llvm.org/D41039?id=128767#inline-364710
rdar://problem/38887866
Differential Revision: https://reviews.llvm.org/D44908
llvm-svn: 328731
These instructions have been around for a long time, but we
haven't supported intrinsics for them. The "new" versions use
the CSx register for the start of the buffer instead of the K
field in the Mx register.
There is a related llvm patch.
Patch by Brendon Cahoon.
llvm-svn: 328725
When the declare target variables are emitted for the device,
constructors|destructors for these variables must be emitted and registered
by the runtime in the offloading sections.
llvm-svn: 328705
r327219 added wrappers to std::sort which randomly shuffle the container before
sorting. This will help uncover non-determinism caused by the undefined
sorting order of objects having the same key.
To make use of that infrastructure we need to invoke llvm::sort instead of
std::sort.
llvm-svn: 328636
If the link clause is used on the declare target directive, the object
should be linked on target or target data directives, not during the
codegen. Patch adds support for this clause.
llvm-svn: 328544
Summary:
Disables certain CMP optimizations to improve fuzzing signal under -O1
and -O2.
Switches all fuzzer tests to -O2 except for a few leak tests where the
leak is optimized out under -O2.
Reviewers: kcc, vitalybuka
Reviewed By: vitalybuka
Subscribers: cfe-commits, llvm-commits
Differential Revision: https://reviews.llvm.org/D44798
llvm-svn: 328384
Need to override convertConstraint to recognise amdgpu specific register names.
Differential Revision: https://reviews.llvm.org/D44533
llvm-svn: 328359
Add two additional implicit arguments for OpenCL for the AMDGPU target using the AMDHSA runtime to support device enqueue.
Differential Revision: https://reviews.llvm.org/D44696
llvm-svn: 328350
- Remove use of the opencl and amdopencl environment member of the target triple for the AMDGPU target.
- Use a function attribute to communicate to the AMDGPU backend.
Differential Revision: https://reviews.llvm.org/D43735
llvm-svn: 328347
The issue was that we were setting hidden visibility if, when
processing a hidden class, we found out that we needed to emit a
reference to a vtable provided by the standard library.
Original message:
Set dso_local on vtables.
llvm-svn: 328288
Putting back the code in commit r327189 that was reverted in r322737. The code is being committed in three stages and this one is the last stage: 1) r327455 fp16 feature flags, 2) r327836 pass half type or i16 based on FullFP16, and 3) the code here, which adds the front-end fp16 vector intrinsics for ARM.
Differential revision https://reviews.llvm.org/D43650
llvm-svn: 328277
The difference between CreateRuntimeFunction and CreateBuiltinFunction
is that CreateBuiltinFunction would not set dllimport or dso_local.
To keep the current semantics, just forward to CreateRuntimeFunction
with Local=true so it doesn't add dllimport.
llvm-svn: 328224
Summary: The workers also need to initialize the global stack. The call to the initialization function needs to happen after the kernel_init() function is called by the master. This ensures that the per-team data structures of the runtime have been initialized.
Reviewers: ABataev, grokos, carlo.bertolli, caomhin
Reviewed By: ABataev
Subscribers: jholewinski, guansong, cfe-commits
Differential Revision: https://reviews.llvm.org/D44749
llvm-svn: 328219
Now that LLVM has support for emitting calling conventions in DWARF (see
r328191) have clang emit them.
Patch by: Adrien Guinet
Differential revision: https://reviews.llvm.org/D42351
llvm-svn: 328196
This is needed for the upcoming implementation of the
new 8x32x16 and 32x8x16 variants of WMMA instructions
introduced in CUDA 9.1.
Differential Revision: https://reviews.llvm.org/D44719
llvm-svn: 328158
Summary:
Libc++'s default allocator uses `__builtin_operator_new` and `__builtin_operator_delete` in order to allow the calls to new/delete to be elided. However, libc++ now needs to support over-aligned types in the default allocator. In order to support this without disabling the existing optimization, Clang needs to support calling the aligned new overloads from the builtins.
See llvm.org/PR22634 for more information about the libc++ bug.
This patch changes `__builtin_operator_new`/`__builtin_operator_delete` to call any usual `operator new`/`operator delete` function. It does this by performing overload resolution with the arguments passed to the builtin to determine which allocation function to call. If the selected function is not a usual allocation function a diagnostic is issued.
One open issue is if the `align_val_t` overloads should be considered "usual" when `LangOpts::AlignedAllocation` is disabled.
In order to allow libc++ to detect this new behavior the value for `__has_builtin(__builtin_operator_new)` has been updated to `201802`.
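An example of the newly-allowed forms, assuming aligned allocation is available (C++17 or -faligned-allocation):
```cpp
#include <cstddef>
#include <new>

// Overload resolution now selects the align_val_t "usual" overloads.
void *allocate_aligned(std::size_t n) {
  return __builtin_operator_new(n, std::align_val_t(64));
}

void deallocate_aligned(void *p) {
  __builtin_operator_delete(p, std::align_val_t(64));
}
```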
Reviewers: rsmith, majnemer, aaron.ballman, erik.pilkington, bogner, ahatanak
Reviewed By: rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D43047
llvm-svn: 328134
This way we can support address-space specific variants without explicitly
encoding the space in the name of the intrinsic. Fewer intrinsics to deal with ->
less boilerplate.
Added a bit of tablegen magic to match/replace an intrinsics with a pointer
argument in particular address space with the space-specific instruction
variant.
Updated tests to use non-default address spaces.
Differential Revision: https://reviews.llvm.org/D43268
llvm-svn: 328006
constructs in generic mode.
Fixed codegen for distribute parallel combined constructs. We have to
pass and read the shared lower and upper bound from the distribute
region in the inner parallel region. Patch is for generic mode.
llvm-svn: 327990
If the generic codegen is enabled and a private copy of the original
variable escapes the declaration context, this private copy should be
globalized just like the original variable.
llvm-svn: 327985
source expressions when iterating over a PseudoObjectExpr's semantic
subexpression list.
Previously the loop in emitPseudoObjectExpr would emit the IR for each
OpaqueValueExpr that was in a PseudoObjectExpr's semantic-form
expression list and use the result when the OpaqueValueExpr later
appeared in other expressions. This caused an assertion failure when
AggExprEmitter tried to copy the result of an OpaqueValueExpr and the
copied type didn't have trivial copy/move constructors or assignment
operators.
This patch adds flag IsUnique to OpaqueValueExpr which indicates it is a
unique reference to its source expression (it is not used in multiple
places). The loop in emitPseudoObjectExpr ignores OpaqueValueExprs that
are unique and CodeGen visitors simply traverse the source expressions
of such OpaqueValueExprs.
rdar://problem/34363596
Differential Revision: https://reviews.llvm.org/D39562
llvm-svn: 327939
The inline assembly generated for the ARC autorelease elision marker
must have a funclet token if it's emitted inside a funclet, otherwise
the inline assembly (and all subsequent code in the funclet) will be
marked unreachable. r324689 fixed this issue for regular inline assembly
blocks.
Note that clang only emits the marker at -O0, so this only fixes that
case. The optimizations case (where the marker is emitted by the
backend) will be fixed in a separate change.
Differential Revision: https://reviews.llvm.org/D44640
llvm-svn: 327892
This patch uses the infrastructure added in r326307 for enabling
non-trivial fields to be declared in C structs to allow __weak fields in
C structs in ARC.
This recommits r327206, which was reverted because it caused
module-enabled builders to fail. I discovered that the
CXXRecordDecl::CanPassInRegisters flag wasn't being set correctly in
some cases after I moved it to RecordDecl.
Thanks to Eric Liu for helping me investigate the bug.
rdar://problem/33599681
https://reviews.llvm.org/D44095
llvm-svn: 327870
For generating NEON intrinsics, this determines the NEON data type and whether
it should be a half type or an i16 type. That is, we always pass a half type for
AArch64 (this hasn't changed), and now also for ARM, but only when FullFP16 is
enabled; otherwise we pass i16.
This is intended to be non-functional change, but together with the backend
work in D44538 which adds support for f16 vectors, this enables adding the
AArch32 FP16 (vector) intrinsics.
Differential Revision: https://reviews.llvm.org/D44561
llvm-svn: 327836
Summary:
The codegen for conditions assumes that a normal variable declaration is used in a condition, but this is not the case when a structured binding is used.
This fixes [PR36747](http://llvm.org/pr36747).
Thanks Nicolas Lesser for contributing the patch.
Reviewers: lichray, rsmith
Reviewed By: lichray
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D44534
llvm-svn: 327780
The patch adds the target-independent nocf_check attribute for disabling checks that were enabled by the -fcf-protection flag.
The attribute can appertain to functions and function pointers.
The attribute name follows GCC's similar attribute name.
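A rough usage example (with -fcf-protection enabled); the names here are illustrative:
```cpp
// The attribute opts a function, or calls made through a pointer of an
// attributed type, out of the inserted control-flow checks.
__attribute__((nocf_check))
void interrupt_trampoline(void) {}

typedef void (*nocf_fn_t)(void) __attribute__((nocf_check));

void dispatch(nocf_fn_t fn) {
  fn(); // emitted without the control-flow check (e.g. a notrack indirect call)
}
```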
Differential Revision: https://reviews.llvm.org/D41880
llvm-svn: 327768
Summary:
Previously we tried too hard to uphold the fiction that destructor
variants work like they do on Itanium throughout the ABI-neutral parts
of clang. This led to MS C++ ABI incompatibilities and other bugs. Now,
-mconstructor-aliases will no longer control this ABI detail, and clang
-cc1's LLVM IR output will be this much closer to the clang driver's.
Based on a patch by Zahira Ammarguellat:
https://reviews.llvm.org/D39063
I've tried to move the logic that Zahira added into MicrosoftCXXABI.cpp.
There is only one ABI-specific detail sticking out, and that is in
CodeGenModule::getAddrOfCXXStructor, where we collapse complete dtors to
base dtors in the MS ABI.
This fixes PR32990.
Reviewers: erichkeane, zahiraam, majnemer, rjmccall
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D44505
llvm-svn: 327732
The compiler complained about
../tools/clang/lib/CodeGen/CGOpenMPRuntimeNVPTX.cpp:184:15: error: unused variable 'CSI' [-Werror,-Wunused-variable]
if (auto *CSI = CGF.CapturedStmtInfo) {
^
1 error generated.
I don't know this code but it seems like an easy fix so I push it anyway
to get rid of the warning.
llvm-svn: 327694
If the variable is captured by value and the corresponding parameter in
the outlined function escapes its declaration context, this parameter
must be globalized. To globalize it we need to get the address of the
original parameter, load the value, store it to the global address and
use this global address instead of the original.
Patch improves globalization for parallel|teams regions + functions in
declare target regions.
llvm-svn: 327654
Added initial codegen for device side of declarations inside `omp
declare target` construct + codegen for implicit `declare target`
functions, which are used in the target regions.
llvm-svn: 327636
In this particular case it would be possible to just add an else with
CGM.setDSOLocal(GV), but it seems better to have as many callers as
possible just call setGVProperties so that we can centralize the logic
there.
This patch then makes setGVProperties able to handle null Decls.
llvm-svn: 327543
The recent change r326946 (https://reviews.llvm.org/D34367) causes a regression in Eigen due to the increased
memory footprint of CallArg.
This patch reduces LValue size from 112 to 96 bytes and reduces inline argument count of CallArgList
from 16 to 8.
It has been verified that this will let the added deep AST tree test pass with r326946.
In the long run, CallArg or LValue memory footprint should be further optimized.
Differential Revision: https://reviews.llvm.org/D44445
llvm-svn: 327515
In C, we'll wait until the end of the scope to clean up aggregate
temporaries used for returns from calls. This means in cases like:
{
// Assuming that `Bar` is large enough to warrant indirect returns
struct Bar b = {};
b = foo(&b);
b = foo(&b);
b = foo(&b);
b = foo(&b);
}
...We'll allocate space for 5 Bars on the stack (`b`, and 4
temporaries). This becomes painful in things like large switch
statements.
If cleaning up sooner is trivial, we should do it.
llvm-svn: 327229
This patch uses the infrastructure added in r326307 for enabling
non-trivial fields to be declared in C structs to allow __weak fields in
C structs in ARC.
rdar://problem/33599681
Differential Revision: https://reviews.llvm.org/D44095
llvm-svn: 327206
If CodeGenFunction::EmitCall is:
- asked to emit a call with an indirectly returned value,
- given an invalid return value slot, and
- told the return value of the function it's calling is unused
then it'll make its own temporary, and add lifetime markers so that the
temporary's lifetime ends immediately after the call.
The early lifetime.end becomes problematic when we need to run a
destructor on the result of the function.
Instead of unconditionally saying that results of all calls are used
here (which would be correct, but would also cause us to never emit
lifetime markers for these temporaries), we just build our own temporary
to pass in when a dtor has to be run.
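For illustration, a hedged sketch of the pattern described above (hypothetical types, not from the patch):
```
struct Big {
  char data[128]; // large enough to be returned indirectly (sret)
  ~Big();         // non-trivial destructor must run on the ignored result
};

Big makeBig();

void f() {
  // The result is unused and the return slot is invalid, so codegen builds its
  // own temporary; its lifetime must now extend until the destructor has run.
  makeBig();
}
```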
llvm-svn: 327192
If the initialization of task reductions required a pointer to the original
variable, which is stored in the threadprivate storage, we were using the
address of this pointer instead.
llvm-svn: 327136
Simplify the dispatching for the personality routines. This really had
no test coverage previously, so add test coverage for the various cases.
This turns out to be pretty complicated as the various languages and
models interact to change personalities around.
You really should feel bad for the compiler if you are using exceptions.
There is no reason for this type of cruelty.
llvm-svn: 327105
We may emit the code in the wrong order because of an incorrect implementation
of the runtime functions for task reductions. Threadprivate storages may
be initialized after the real initialization of the reduction items. This patch
fixes the problem.
llvm-svn: 327008
Before this, we'd only emit lifetime.ends for these temps in
non-exceptional paths. This potentially made our stack larger than it
needed to be for any code that follows an EH cleanup. e.g. in
```
struct Foo { char cs[32]; };
void escape(void *);
struct Bar { ~Bar() { char cs[64]; escape(cs); } };
Foo getFoo();
void baz() {
Bar b;
getFoo();
}
```
baz() would require 96 bytes of stack, since the temporary from getFoo()
only had a lifetime.end on the non-exceptional path.
This also makes us keep hold of the Value* returned by
EmitLifetimeStart, so we don't have to remake it later.
llvm-svn: 326988
No effective behavior change, just for cleanliness.
Analysis and typing by me, actual patch mostly by Reid.
Fixes PR36159.
https://reviews.llvm.org/D44223
llvm-svn: 326960
Summary: Remove this scheme for now since it will be covered by another more generic scheme using global memory. This code will be worked into an optimization for the generic data sharing scheme. Removing this completely and then adding it via future patches will make all future data sharing patches cleaner.
Reviewers: ABataev, carlo.bertolli, caomhin
Reviewed By: ABataev
Subscribers: jholewinski, guansong, cfe-commits
Differential Revision: https://reviews.llvm.org/D43625
llvm-svn: 326948
The indirect function argument is in alloca address space in LLVM IR. However,
during Clang codegen for C++, the address space of indirect function argument
should match its address space in the source code, i.e., default addr space, even
for indirect argument. This is because destructor of the indirect argument may
be called in the caller function, and address of the indirect argument may be
taken, in either case the indirect function argument is expected to be in default
addr space, not the alloca address space.
Therefore, the indirect function argument should be mapped to the temp var
casted to default address space. The caller will cast it to alloca addr space
when passing it to the callee. In the callee, the argument is also casted to the
default address space and used.
CallArg is refactored to facilitate this fix.
Differential Revision: https://reviews.llvm.org/D34367
llvm-svn: 326946
OpenCL runtime tracks the invoke function emitted for
any block expression. Due to restrictions on blocks in
OpenCL (v2.0 s6.12.5), it is always possible to know the
block invoke function when emitting call of block expression
or __enqueue_kernel builtin functions. Since __enqueue_kernel
already has an argument for the invoke function, it is redundant
to have invoke function member in the llvm block literal structure.
This patch removes invoke function from the llvm block literal
structure. It also removes the bitcast of block invoke function
to the generic block literal type which is useless for OpenCL.
This will save some space for the kernel argument, and also
eliminate some store instructions.
Differential Revision: https://reviews.llvm.org/D43783
llvm-svn: 326937
We may emit incorrect lifetime info during codegen for loop counters in
OpenMP constructs because of automatic scope cleanup when we needed
temporary locations for private loop counters.
llvm-svn: 326922
EmitLifetimeStart returns a non-null `size` pointer if it actually
emits a lifetime.start. Later in this function, we use `tempSize`'s
nullness to determine whether or not we should emit a lifetime.end.
llvm-svn: 326844
If the task has a reduction construct and this construct requires unique
threadprivate storage for some variable, we may generate different names
for the variable used in the taskgroup task_reduction clause and in the task
in_reduction clause. This patch fixes the problem.
llvm-svn: 326827
Summary:
Currently, only calls to mcount are suppressed with the
no_instrument_function attribute.
The Linux kernel requires that calls to fentry also not be
generated.
This is an extended fix for PR33515.
Reviewers: hfinkel, rengolin, srhines, rnk, rsmith, rjmccall, hans
Reviewed By: rjmccall
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D43995
llvm-svn: 326639
The patch fixes a number of bugs related to parameter indexing in
attributes:
* Parameter indices in some attributes (argument_with_type_tag,
pointer_with_type_tag, nonnull, ownership_takes, ownership_holds,
and ownership_returns) are specified in source as one-origin
including any C++ implicit this parameter, were stored as
zero-origin excluding any this parameter, and were erroneously
printing (-ast-print) and confusingly dumping (-ast-dump) as the
stored values.
* For alloc_size, the C++ implicit this parameter was not subtracted
correctly in Sema, leading to assert failures or to silent failures
of __builtin_object_size to compute a value.
* For argument_with_type_tag, pointer_with_type_tag, and
ownership_returns, the C++ implicit this parameter was not added
back to parameter indices in some diagnostics.
This patch fixes the above bugs and aims to prevent similar bugs in
the future by introducing careful mechanisms for handling parameter
indices in attributes. ParamIdx stores a parameter index and is
designed to hide the stored encoding while providing accessors that
require each use (such as printing) to make explicit the encoding that
is needed. Attribute declarations declare parameter index arguments
as [Variadic]ParamIdxArgument, which are exposed as ParamIdx[*]. This
patch rewrites all attribute arguments that are processed by
checkFunctionOrMethodParameterIndex in SemaDeclAttr.cpp to be declared
as [Variadic]ParamIdxArgument. The only exception is xray_log_args's
argument, which is encoded as a count not an index.
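As a hedged illustration of the one-origin, this-including numbering described above (hypothetical declarations, not from the patch):
```
#include <cstddef>

// Free function: alloc_size(2) names 'bytes', the second declared parameter.
__attribute__((alloc_size(2))) void *pool_alloc(int pool, std::size_t bytes);

struct Buffer {
  // Member function: in source, index 1 is the implicit 'this', so nonnull(2)
  // names 'src'. ParamIdx hides whether that is stored as 2 or as 1 internally.
  void copy_from(const void *src) __attribute__((nonnull(2)));
};
```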
Differential Revision: https://reviews.llvm.org/D43248
llvm-svn: 326602
This patch fixes a problem with functions marked as `declare simd`. If
the canonical declaration does not have an associated `declare simd`
construct, we may not generate the required code even if other
redeclarations are marked as `declare simd`.
llvm-svn: 326594
This makes it easier to debug crashes and hangs in block functions since
users can easily find out where the block is called from. The option
doesn't disable tail-calls from non-escaping blocks since non-escaping
blocks are not as hard to debug as escaping blocks.
rdar://problem/35758207
Differential Revision: https://reviews.llvm.org/D43841
llvm-svn: 326530
This shouldn't change any results for now, but is more consistent with
how we set dllimport/dllexport and will make future changes easier.
Since clang produces IR as it parses, it can find out mid file that
something is dllimport. When that happens we have to drop
dso_local. This is not a problem right now because
CodeGenModule::setDSOLocal is called from relatively few places at
the moment.
llvm-svn: 326527
Since LLVM r326341, default EmulatedTLS mode is decided in backend
according to target triple. Any front-end should pass -f[no]-emulated-tls
to backend and set up ExplicitEmulatedTLS only when the flags are used.
Differential Revision: https://reviews.llvm.org/D43965
llvm-svn: 326499
So I wrote a clang-tidy check to lint out redundant `isa`, `cast`, and
`dyn_cast`s for fun. This is a portion of what it found for clang; I
plan to do similar cleanups in LLVM and other subprojects when I find
time.
Because of the volume of changes, I explicitly avoided making any change
that wasn't highly local and obviously correct to me (e.g. we still have
a number of foo(cast<Bar>(baz)) that I didn't touch, since overloading
is a thing and the cast<Bar> did actually change the type -- just up the
class hierarchy).
I also tried to leave the types we were cast<>ing to somewhere nearby,
in cases where it wasn't locally obvious what we were dealing with
before.
llvm-svn: 326416
This is the next step in setting dso_local for COFF.
The patch changes setGVProperties to first set dllimport/dllexport
and changes a few cases that were setting dllimport/dllexport
manually. With this a few more GVs are marked dso_local.
llvm-svn: 326397
This patch extends the SPMD implementation to all target constructs and guards this implementation under a new flag.
Differential Revision: https://reviews.llvm.org/D43852
llvm-svn: 326368
objc_msgSend_stret takes a hidden parameter for the returned structure's
address for the construction. When the function signature is rewritten
for the inalloca passing, the return type is no longer marked as
indirect but rather inalloca stret. This enhances the test for the
indirect return to check for that case as well. This fixes the
incorrect return classification for Windows x86.
llvm-svn: 326362
Binaries for multiple architectures are combined by fatbinary,
so the current code was effectively not needed.
Differential Revision: https://reviews.llvm.org/D43461
llvm-svn: 326342
Declaring __strong pointer fields in structs was not allowed in
Objective-C ARC until now because that would make the struct non-trivial
to default-initialize, copy/move, and destroy, which is not something C
was designed to do. This patch lifts that restriction.
Special functions for non-trivial C structs are synthesized that are
needed to default-initialize, copy/move, and destroy the structs and
manage the ownership of the objects the __strong pointer fields point
to. Non-trivial structs passed to functions are destructed in the callee
function.
rdar://problem/33599681
Differential Revision: https://reviews.llvm.org/D41228
llvm-svn: 326307
In DWARF v5 the Line Number Program Header is extensible, allowing values with
new content types. This vendor extension to DWARF v5 allows source text to be
embedded directly in the line tables of the debug line section.
Add new flag (-g[no-]embed-source) to Driver and CC1 which indicates
that source should be passed through to LLVM during CodeGen.
Differential Revision: https://reviews.llvm.org/D42766
llvm-svn: 326102
The tests that failed on a windows host have been fixed.
Original message:
Start setting dso_local for COFF.
With this there are still some GVs where we don't set dso_local
because setGVProperties is never called. I intend to fix that in
followup commits. This is just the bare minimum to teach
shouldAssumeDSOLocal what it should do for COFF.
llvm-svn: 325940
With this there are still some GVs where we don't set dso_local
because setGVProperties is never called. I intend to fix that in
followup commits. This is just the bare minimum to teach
shouldAssumeDSOLocal what it should do for COFF.
llvm-svn: 325915
The value of dso_local can be computed from just IR properties and
global information (object file type, command line options, etc).
With this patch we no longer pass in the Decl. It was almost unused
and making it fully unused guarantees that dso_local is consistent
with the rest of the IR.
llvm-svn: 325846
This is a bug fix that removes the emission of reduction support for pragma 'distribute' when found alone or in combinations without simd.
Pragma 'distribute' does not have a reduction clause, but when combined with pragma 'simd' we need to emit the support for simd's reduction clause as part of code generation for distribute. This guard is similar to the one used for reduction support earlier in the same code gen function.
Differential Revision: https://reviews.llvm.org/D43513
llvm-svn: 325822
Summary:
OpenCL 2.0 specification defines '-cl-uniform-work-group-size' option,
which requires that the global work-size be a multiple of the work-group
size specified to clEnqueueNDRangeKernel and allows optimizations that
are made possible by this restriction.
The patch introduces the support of this option.
To keep information about whether an OpenCL kernel has uniform work
group size or not, clang generates 'uniform-work-group-size' function
attribute for every kernel:
- "uniform-work-group-size"="true" for OpenCL 1.2 and lower,
- "uniform-work-group-size"="true" for OpenCL 2.0 and higher if
'-cl-uniform-work-group-size' option was specified,
- "uniform-work-group-size"="false" for OpenCL 2.0 and higher if no
'-cl-uniform-work-group-size' option was specified.
If the function is not an OpenCL kernel, 'uniform-work-group-size'
attribute isn't generated.
Patch by: krisb
Reviewers: yaxunl, Anastasia, b-sumner
Reviewed By: yaxunl, Anastasia
Subscribers: nhaehnle, yaxunl, Anastasia, cfe-commits
Differential Revision: https://reviews.llvm.org/D43570
llvm-svn: 325771
When using blocks with C++ on Windows x86, it is possible to have the
block literal be pushed into the inalloca'ed parameters. Teach IRGen to
handle the case properly by extracting the block literal from the
inalloca parameter. This fixes the use of blocks with C++ on Windows
x86.
llvm-svn: 325724
This patch fixes creating TBAA access descriptors for
may_alias-marked access types. Currently, for such types we
generate ordinary descriptors with char as its access type. The
patch changes this to produce proper may-alias descriptors.
Differential Revision: https://reviews.llvm.org/D42366
llvm-svn: 325575
Currently, clang compiles explicit initializers for array
elements into series of store instructions. For large arrays of
built-in types this results in bloated output code and
significant amount of time spent on the instruction selection
phase. This patch fixes the issue by initializing such arrays
with global constants that store the binary image of the
initializer.
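A hedged sketch of the kind of initializer this affects (hypothetical values, not from the patch):
```
int sum_lut(int i) {
  // A local array with an explicit initializer used to be filled with one
  // scalar store per element; it is now initialized by copying from a global
  // constant that holds the binary image of the initializer.
  int lut[512] = {3, 1, 4, 1, 5, 9, 2, 6 /* remaining elements are zero */};
  return lut[i & 511];
}
```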
Differential Revision: https://reviews.llvm.org/D43181
llvm-svn: 325478
Summary:
Gold plugin does not add pass to ThinLTO modules without useful symbols.
In this case ThinLTO can't create the corresponding index file, and some features, like CFI,
cannot be processed correctly by the backend without the index.
Given that we don't need the backend output, we can ask it to skip
processing the module. This is implemented by this patch using the new
"SkipModuleByDistributedBackend" flag.
Reviewers: pcc, tejohnson
Subscribers: mehdi_amini, inglorion, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D42995
llvm-svn: 325411
Summary:
ThinLTO compilation may decide not to split a module and keep it as regular LTO.
In this case the module has already been processed during indexing and is already a part of
the merged object file. So here we can just skip it.
Reviewers: pcc, tejohnson
Reviewed By: tejohnson
Subscribers: mehdi_amini, inglorion, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D42680
llvm-svn: 325410
Codegen for ordered with the doacross construct might produce incorrect code
because of a missing cleanup scope for the construct. Without this scope
the final runtime function call could be emitted in the wrong order, which
leads to incorrect codegen.
llvm-svn: 325304
The following test case causes issue with codegen of __enqueue_block
void (^block)(void) = ^{ callee(id, out); };
enqueue_kernel(queue, 0, ndrange, block);
Clang first does codegen for the block expression in the first line and deletes its block info.
Clang then tries to do codegen for the same block expression again for the second line,
and fails because the block info is gone.
The fix is to do normal codegen for both lines. Introduce an API to OpenCL runtime to
record llvm block invoke function and llvm block literal emitted for each AST block
expression, and use the recorded information for generating the wrapper kernel.
The EmitBlockLiteral APIs are cleaned up to minimize changes to the normal codegen
of blocks.
Another minor issue is that some cleanup AST expressions are generated for blocks
with captures; these can be stripped by IgnoreImplicit.
Differential Revision: https://reviews.llvm.org/D43240
llvm-svn: 325264
Added support in clang for GCC function attribute 'artificial'. This attribute
is used to control stepping behavior of debugger with respect to inline
functions.
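A small usage sketch, with a hypothetical helper function:
```
// When this wrapper is inlined, the debugger steps over it as if the code
// belonged to the caller, rather than stepping into the helper.
__attribute__((artificial)) static inline int clamp_byte(int v) {
  return v < 0 ? 0 : (v > 255 ? 255 : v);
}
```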
Patch By: Elizabeth Andrews (eandrews)
Differential Revision: https://reviews.llvm.org/D43259
llvm-svn: 325081
Summary:
This patch also adds the 'DW_AT_artificial' flag to the generated variable.
Addresses the issues mentioned in http://llvm.org/PR30553.
Reviewers: CarlosAlbertoEnciso, probinson, aprantl
Reviewed By: aprantl
Subscribers: JDevlieghere, cfe-commits
Differential Revision: https://reviews.llvm.org/D43189
llvm-svn: 324988
As reported here: https://bugs.llvm.org/show_bug.cgi?id=36301
The issue is that the 'use' causes the plain declaration to emit
the attributes to LLVM IR. However, if the definition added them
later, they would silently disappear.
This commit extracts that logic to its own function in CodeGenModule,
and has the attribute-applications done during 'definition' update
the attributes properly.
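A hedged sketch of the declaration/definition ordering involved (hypothetical attribute and names, not the exact case from the bug):
```
int answer();                    // plain declaration
int ask() { return answer(); }   // this use emits the IR declaration of answer()

// The definition introduces an attribute afterwards; it now gets applied to the
// already-emitted IR function instead of silently disappearing.
__attribute__((cold)) int answer() { return 42; }
```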
Differential Revision: https://reviews.llvm.org/D43095
llvm-svn: 324907
Summary:
Right now clang is skipping array cookie poisoning for any operator
new[] which is not part of the set of replaceable global allocation
functions.
This commit adds a flag to tell clang to poison all operator new[]
cookies.
A previous review was poisoning all array cookies unconditionally, but
there is an edge case which would stop working under ASan (a custom
operator new[] saves whatever pointer it returned, and then accesses
it).
This newer revision adds a command line argument to toggle this feature.
Original revision: https://reviews.llvm.org/D41301
Compiler-rt test revision with an explanation of the edge case: https://reviews.llvm.org/D41664
Reviewers: rjmccall, kcc, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D43013
llvm-svn: 324884
Summary:
This change avoids the overhead of storing, and later crawling,
an initializer list of all zeros for arrays. When LLVM
visits this in ConstantArray::getImpl() (llvm/IR/Constants.cpp),
it will scan the list looking for an array of all zeros.
We can avoid the store, and short-cut the scan, by detecting
all zeros when clang builds up the initialization representation.
This was brought to my attention when investigating PR36030
Reviewers: majnemer, rjmccall
Reviewed By: rjmccall
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D42549
llvm-svn: 324776
Summary:
Fixes PR36247, which is where WinEHPrepare replaces inline asm in
funclets with unreachable.
Make getBundlesForFunclet return by value to simplify some call sites.
Reviewers: smeenai, majnemer
Subscribers: eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D43033
llvm-svn: 324689
Summary:
This patch is a fix for following issue:
https://bugs.llvm.org/show_bug.cgi?id=31362 The problem was caused by the front end
lowering C calling conventions without taking into account calling conventions
enforced by attributes. In this case win64cc was not correctly lowered on targets
other than Windows.
Reviewed By: rnk (Reid Kleckner)
Differential Revision: https://reviews.llvm.org/D43016
Author: belickim <mateusz.belicki@intel.com>
llvm-svn: 324594
The difference from the previous try is that we no longer directly
access function declarations from position independent executables. It
should work, but currently doesn't with some linkers.
It now includes a fix to not mark available_externally definitions as
dso_local.
Original message:
Start setting dso_local in clang.
This starts adding dso_local to clang.
The hope is to eventually have TargetMachine::shouldAssumeDsoLocal go
away. My objective for now is to move enough of it to clang to remove
the need for the TargetMachine one to handle PIE copy relocations and
-fno-plt. With that it should then be easy to implement a
-fno-copy-reloc in clang.
This patch just adds the cases where we assume a symbol to be local
based on the file being compiled for an executable or a shared
library.
llvm-svn: 324535
This reverts commit r324500.
The bots found two failures:
ThreadSanitizer-x86_64 :: Linux/pie_no_aslr.cc
ThreadSanitizer-x86_64 :: pie_test.cc
when using gold. The issue is a limitation in gold when building pie
binaries. I will investigate how to work around it.
llvm-svn: 324505
It now includes a fix to not mark available_externally definitions as
dso_local.
Original message:
Start setting dso_local in clang.
This starts adding dso_local to clang.
The hope is to eventually have TargetMachine::shouldAssumeDsoLocal go
away. My objective for now is to move enough of it to clang to remove
the need for the TargetMachine one to handle PIE copy relocations and
-fno-plt. With that it should then be easy to implement a
-fno-copy-reloc in clang.
This patch just adds the cases where we assume a symbol to be local
based on the file being compiled for an executable or a shared
library.
llvm-svn: 324500
I found this while looking at the ppc failures caused by the dso_local
change.
The issue was that the patch would produce the wrong answer for
available_externally. Having ForDefinition_t available in places where
the code can just check the linkage is a bit of a foot gun.
This patch removes the ForDefinition_t argument in places where the
linkage is already known.
llvm-svn: 324499
This patch:
* fixes an incorrect sign-extension of unsigned values, when emitting
debug info metadata for enumerators
* the enumerators metadata is created with a flag, which determines
interpretation of the value bits (signed or unsigned)
* the enumerations metadata contains the underlying integer type and a
flag, indicating whether this is a C++ "fixed enum"
Differential Revision: https://reviews.llvm.org/D42736
llvm-svn: 324490
This adds the frontend support required to support the use of the
comment pragma to enable auto linking on ELFish targets. This is a
generic ELF extension supported by LLVM. We need to change the handling
for the "dependentlib" in order to accommodate the previously discussed
encoding for the dependent library descriptor. Without the custom
handling of the PCK_Lib directive, the -l prefixed option would be
encoded into the resulting object (which is treated as a frontend
error).
llvm-svn: 324438
This change reduces the live range of the loaded function pointer,
resulting in a slight code size decrease (~10KB in clang), and also
improves the security of CFI for virtual calls by making it less
likely that the function pointer will be spilled, and ensuring that
it is not spilled across a function call boundary.
Fixes PR35353.
Differential Revision: https://reviews.llvm.org/D42725
llvm-svn: 324286
The 'trivial_abi' attribute can be applied to a C++ class, struct, or
union. It makes special functions of the annotated class (the destructor
and copy/move constructors) trivial for the purpose of calls and,
as a result, enables the annotated class or containing classes to be
passed or returned using the C ABI for the underlying type.
When a type that is considered trivial for the purpose of calls despite
having a non-trivial destructor (which happens only when the class type
or one of its subobjects is a 'trivial_abi' class) is passed to a
function, the callee is responsible for destroying the object.
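A minimal sketch of how the attribute is used (hypothetical type):
```
struct __attribute__((trivial_abi)) Handle {
  int *ptr;
  ~Handle(); // non-trivial, but treated as trivial for the purpose of calls
};

// Handle is passed using the C ABI for 'int *'; because the type is still
// non-trivially destructible, the callee destroys the parameter.
void consume(Handle h);
```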
For more background, see the discussions that took place on the mailing
list:
http://lists.llvm.org/pipermail/cfe-dev/2017-November/055955.html
http://lists.llvm.org/pipermail/cfe-commits/Week-of-Mon-20180101/thread.html#214043
rdar://problem/35204524
Differential Revision: https://reviews.llvm.org/D41039
llvm-svn: 324269
Summary:
Previously, Clang only emitted label names in assert builds.
However there is a CC1 option -discard-value-names that should have been used to control emission instead.
This patch removes the NDEBUG preprocessor block and instead allows LLVM to handle removing the names in accordance with the option.
Reviewers: erichkeane, aaron.ballman, majnemer
Reviewed By: aaron.ballman
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D42829
llvm-svn: 324127
This starts adding dso_local to clang.
The hope is to eventually have TargetMachine::shouldAssumeDsoLocal go
away. My objective for now is to move enough of it to clang to remove
the need for the TargetMachine one to handle PIE copy relocations and
-fno-plt. With that it should then be easy to implement a
-fno-copy-reloc in clang.
This patch just adds the cases where we assume a symbol to be local
based on the file being compiled for an executable or a shared
library.
llvm-svn: 324107
When trying to track down a different bug, we discovered
that calling __builtin_va_arg on a vec3f type caused
the SROA pass to issue a warning that there was an illegal
access.
Further research showed that the vec3f type is
alloca'ed as size '12', but the __builtin_va_arg code
on x86_64 was always loading this out of registers as
{double, double}. Thus, the 2nd store into the vec3f
was storing in bytes 12-15!
This patch alters the original implementation which always
assumed {double, double} to use the actual coerced type
instead, so the LLVM-IR generated is a load/GEP/store of
a <2 x float> and a float, rather than a double and a double.
Tests were added for all combinations I could think of that
would fit in 2 FP registers, and all work exactly as expected.
Differential Revision: https://reviews.llvm.org/D42811
llvm-svn: 324098
This fixes building Qt as shared libraries with clang in MinGW
mode; previously subclasses of the QObjectData class (in other
DLLs than the base DLL) failed to find the typeinfo symbols
(that neither were emitted in the base DLL nor in the DLL
containing the subclass).
If the virtual destructor in the newly added testcase wouldn't
be pure (or if there'd be another non-pure virtual method),
it'd be a key function and things would work out even before this
change. Make sure to locally emit the typeinfo for these classes
as well.
This matches what GCC does in this specific testcase.
This fixes the root issue that spawned PR35146. (The difference
to GCC that is initially described in that bug still is present
though.)
Differential Revision: https://reviews.llvm.org/D42641
llvm-svn: 324059
Summary:
This patch enables debugging of C99 VLA types by generating more precise
LLVM Debug metadata, using the extended DISubrange 'count' field that
takes a DIVariable.
This should implement:
Bug 30553: Debug info generated for arrays is not what GDB expects (not as good as GCC's)
https://bugs.llvm.org/show_bug.cgi?id=30553
Reviewers: echristo, aprantl, dexonsmith, clayborg, pcc, kristof.beyls, dblaikie
Reviewed By: aprantl
Subscribers: jholewinski, schweitz, davide, fhahn, JDevlieghere, cfe-commits
Differential Revision: https://reviews.llvm.org/D41698
llvm-svn: 323952
This patch fixes a bug in CGRecordLowering::accumulateBitFields where it
unconditionally starts a new run and emits a storage field when it sees
a zero-sized bitfield, which causes an assertion in insertPadding to
fail when -fno-bitfield-type-align is used.
It shouldn't emit new storage if UseZeroLengthBitfieldAlignment and
UseBitFieldTypeAlignment are both false.
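A minimal sketch of the layout this affects, assuming -fno-bitfield-type-align is in effect:
```
// The zero-width bitfield no longer forces a new storage unit when both
// UseZeroLengthBitfieldAlignment and UseBitFieldTypeAlignment are false,
// which previously tripped the insertPadding assertion.
struct Flags {
  unsigned a : 3;
  unsigned   : 0;
  unsigned b : 5;
};
```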
rdar://problem/36762205
llvm-svn: 323943
Summary:
This change is step three in the series of changes to remove alignment argument from
memcpy/memmove/memset in favour of alignment attributes. Steps:
Step 1) Remove alignment parameter and create alignment parameter attributes for
memcpy/memmove/memset. ( rL322965, rC322964, rL322963 )
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
source and dest alignments. ( rL323597 )
Step 3) Update Clang to use the new IRBuilder API.
Step 4) Update Polly to use the new IRBuilder API.
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
and those that use MemIntrinsicInst::[get|set]Alignment() to use getDestAlignment()
and getSourceAlignment() instead.
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
MemIntrinsicInst::[get|set]Alignment() methods.
Reference:
http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html
Reviewers: rjmccall
Subscribers: jyknight, nemanjai, nhaehnle, javed.absar, sbc100, aheejin, kbarton, fedor.sergeev, cfe-commits
Differential Revision: https://reviews.llvm.org/D41677
llvm-svn: 323617
Previously, clang would emit an over-aligned (16-byte) store to
initialize B::x in B's base constructor when compiling the following
code:
struct A {
  __attribute__((aligned(16))) double data1;
};
struct B : public virtual A {
  B() : x(123) {}
  double a;
  int x;
};
struct C : public virtual B {};
void test() { B b; C c; }
This was happening because the code in IRGen that does member
initialization was using the alignment of a complete object instead of
the non-virtual alignment.
This commit fixes the bug.
rdar://problem/36382481
Differential Revision: https://reviews.llvm.org/D42521
llvm-svn: 323578
The MSVC runtime library does not provide a definition of wmemcmp,
so we need an inline implementation.
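Roughly the semantics the inline expansion provides (a sketch, not the actual IR generated):
```
#include <cstddef>

int wmemcmp_like(const wchar_t *a, const wchar_t *b, std::size_t n) {
  // Element-wise comparison of wide characters, returning <0, 0, or >0.
  for (; n != 0; --n, ++a, ++b) {
    if (*a != *b)
      return *a < *b ? -1 : 1;
  }
  return 0;
}
```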
Differential Revision: https://reviews.llvm.org/D42441
llvm-svn: 323362
Hidden visibility is almost the opposite of dllimport. We were
producing them before (dllimport wins in the existing llvm
implementation), but now the llvm verifier produces an error.
llvm-svn: 323361
These symbols are supposed to be preserved even by the linker. Use
`llvm.used` to ensure that the symbols are not removed by DCE in the
linker. This should be a no-op change on MachO since the symbols are
annotated as `no_dead_strip`.
llvm-svn: 323247
Pass and return _Float16 as if it were an int or float for ARM, but with the
top 16 bits unspecified, similarly to what we already do for __fp16.
We will implement proper half-precision function argument lowering in the ARM
backend soon, but want to use this workaround in the meantime.
Differential Revision: https://reviews.llvm.org/D42318
llvm-svn: 323185
When a function taking transparent union is declared as taking one of
union members earlier in the translation unit, clang would hit an
"Invalid cast" assertion during EmitFunctionProlog. This case
corresponds to function f1 in test/CodeGen/transparent-union-redecl.c.
We decided to cast i32 to the union because, after merging the function
declarations, the function parameter type becomes int and the
CGFunctionInfo::ArgInfo type matches the ABIArgInfo type, so we decide
it is a trivial case. But these types should also be castable to the
parameter declaration type, which is not the case here.
Now the fix is to convert from the ABIArgInfo type to the VarDecl type, using
argument demotion when necessary.
Additional tests in Sema/transparent-union.c capture current behavior and make
sure there are no regressions.
rdar://problem/34949329
Reviewers: rjmccall, rafael
Reviewed By: rjmccall
Subscribers: aemerson, cfe-commits, kristof.beyls, ahatanak
Differential Revision: https://reviews.llvm.org/D41311
llvm-svn: 323156
The standard says:
[expr.static.cast] p11: "If the prvalue of type “pointer to cv1 B” points to a B
that is actually a subobject of an object of type D, the resulting pointer points
to the enclosing object of type D. Otherwise, the behavior is undefined."
Therefore, the GEP must be inbounds.
This should solve the failure to optimize away a null check shown in PR35909:
https://bugs.llvm.org/show_bug.cgi?id=35909
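A hedged illustration of the downcast in question (hypothetical classes, not the PR's test case):
```
struct A { int a; };
struct B { int b; };
struct D : A, B { int d; };

int *field(B *bp) {
  // The address adjustment for this downcast (B lives at a non-zero offset
  // inside D) is now an inbounds GEP, which lets the optimizer assume it
  // cannot wrap past null and drop redundant null checks (cf. PR35909).
  return &static_cast<D *>(bp)->d;
}
```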
Differential Revision: https://reviews.llvm.org/D42249
llvm-svn: 322950
Firstly, each offloading entry must have a unique name or the
linker will complain if there are multiple files with target
regions. Secondly, the compiler must not introduce padding so
mark the struct with a PackedAttr.
Differential Revision: https://reviews.llvm.org/D42168
llvm-svn: 322858
When parsing C++ type construction expressions with list initialization,
forward the locations of the braces to Sema.
Without these locations, the code coverage pass crashes on the given test
case, because the pass relies on getLocEnd() returning a valid location.
Here is what this patch does in more detail:
- Forwards init-list brace locations to Sema (ParseExprCXX),
- Builds an InitializationKind with these locations (SemaExprCXX), and
- Uses these locations for constructor initialization (SemaInit).
The remaining changes fall out of introducing a new overload for
creating direct-list InitializationKinds.
Testing: check-clang, and a stage2 coverage-enabled build of clang with
asserts enabled.
Differential Revision: https://reviews.llvm.org/D41921
llvm-svn: 322729
Added host codegen + codegen for devices with default codegen for
`#pragma omp target teams distribute parallel for simd` directive.
llvm-svn: 322515
RISCVABIInfo is implemented in terms of XLen, supporting both RV32 and RV64.
Unfortunately we need to count argument registers in the frontend in order to
determine when to emit signext and zeroext attributes. Integer scalars are
extended according to their type up to 32-bits and then sign-extended to XLen
when passed in registers, but are anyext when passed on the stack. This patch
only implements the base integer (soft float) ABIs.
For more information on the RISC-V ABI, see [the ABI
doc](https://github.com/riscv/riscv-elf-psabi-doc/blob/master/riscv-elf.md),
my [golden model](https://github.com/lowRISC/riscv-calling-conv-model), and
the [LLVM RISC-V calling convention
patch](https://reviews.llvm.org/D39898#2d1595b4) (specifically the comment
documenting frontend expectations).
Differential Revision: https://reviews.llvm.org/D40023
llvm-svn: 322494
Summary:
kunpck intrinsics were removed in favor of native IR a few months ago. The implementation lowers them as operations on the integer types passed to the intrinsic and then just shifting, masking, and oring them together. A special X86 DAG combine was added to recognize this pattern and turn it into a concat_vector operation.
I think it makes more sense to keep the IR implementation closer to vector operations on vXi1. Given that we expect these builtins to be used around other builtins that operate on k-registers which we try to represent in IR with vXi1. InstCombine should be able to get rid of the bitcasts between integers and vXi1 leaving only the vector operations.
Reviewers: RKSimon, spatel, zvi, jina.nahias
Reviewed By: RKSimon
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D42016
llvm-svn: 322461
This alignment can be less than 4 on certain embedded targets, which may
not even be able to deal with 4-byte alignment on the stack.
Patch by Jacob Young!
llvm-svn: 322406
As @rjmccall suggested in D40023, we can get rid of
ABIInfo::shouldSignExtUnsignedType (used to handle cases like the Mips calling
convention where 32-bit integers are always sign extended regardless of the
sign of the type) by adding a SignExt field to ABIArgInfo. In the common case,
this new field is set automatically by ABIArgInfo::getExtend based on the sign
of the type. For targets that want greater control, they can use
ABIArgInfo::getSignExtend or ABIArgInfo::getZeroExtend when necessary. This
change also cleans up logic in CGCall.cpp.
There is no functional change intended in this patch, and all tests pass
unchanged. As noted in D40023, Mips might want to sign-extend unsigned 32-bit
integer return types. A future patch might modify
MipsABIInfo::classifyReturnType to use MipsABIInfo::extendType.
Differential Revision: https://reviews.llvm.org/D41999
llvm-svn: 322396
getAssociatedStmt() returns the outermost captured statement for the
OpenMP directive. It may return an incorrect region in case of combined
constructs. Reworked the code to reduce the number of calls of
getAssociatedStmt() and used getInnermostCapturedStmt() and
getCapturedStmt() functions instead.
In the case of firstprivate variables it may lead to generation of extra allocas
for private copies even if the variable is passed by value
into the outlined function and could be used directly as the private copy.
llvm-svn: 322393
While updating clang tests for having clang set dso_local I noticed
that:
- There are *a lot* of tests to update.
- Many of the updates are redundant.
They are redundant because a GV is "obviously dso_local". This patch
starts formalizing that a bit by requiring that internal and private
GVs be dso_local too. Since they all are, we don't have to print
dso_local to the textual representation, making it a bit more compact
and easier to read.
llvm-svn: 322318
Adds option /guard:cf to clang-cl and -cfguard to cc1 to emit function IDs
of functions that have their address taken into a section named .gfids$y for
compatibility with Microsoft's Control Flow Guard feature.
The original patch didn't have the lit.local.cfg file that restricts the new
test to x86, thus the new test was failing on the non-x86 bots.
Differential Revision: https://reviews.llvm.org/D40531
The reverts r322008, which was a revert of r322005.
This reverts commit a05b89f9aca70597dc79fe97bc49b50b51f525ba.
llvm-svn: 322136
GCOV in the old pass manager also strips debug info (if debug info is
disabled/only produced for profiling anyway) after the GCOV pass runs.
I think the strip pass hasn't been ported to the new pass manager, so it
might take me a little while to wire that up.
llvm-svn: 322126
Cf-protection is a target independent flag that instructs the back-end to instrument control flow mechanisms like: Branch, Return, etc.
For example in X86 this flag will be used to instrument Indirect Branch Tracking instructions.
Differential Revision: https://reviews.llvm.org/D40478
Change-Id: I5126e766c0e6b84118cae0ee8a20fe78cc373dea
llvm-svn: 322063
r322028 attempted to remove something from the "Manglings"
list when it was no longer valid, and did so with 'erase'.
However, StringRefs to these were stored, so these became
dangling references. This patch changes to using 'remove' instead
of 'erase' to keep the strings valid.
llvm-svn: 322052
GCC's attribute 'target', in addition to being an optimization hint,
also allows function multiversioning. We currently have the former
implemented; this patch implements the latter.
This works by enabling functions with the same name/signature to coexist,
so that they can all be emitted. Multiversion state is stored in the
FunctionDecl itself, and SemaDecl manages the definitions.
Note that it ends up having to permit redefinition of functions so
that they can all be emitted. Additionally, all versions of the function
must be emitted, so this also manages that.
Note that this includes some additional rules that GCC does not have, since
defining something as a MultiVersion function after a usage has been made is illegal.
The only 'history rewriting' that happens is if a function is emitted before
it has been converted to a multiversion'ed function, at which point its name
needs to be changed.
Function templates and virtual functions are NOT yet supported (not supported
in GCC either).
Additionally, constructors/destructors are disallowed, but the former is
planned.
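A short usage sketch of the multiversioning form now accepted (hypothetical function, x86 target assumed):
```
__attribute__((target("avx2")))
int pick(void) { return 2; }

__attribute__((target("default")))
int pick(void) { return 1; }

// Both versions coexist with the same name and signature and are both
// emitted; the call dispatches at run time based on the host CPU.
int use(void) { return pick(); }
```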
llvm-svn: 322028
The new test fails on the Hexagon bot. Reverting while I investigate.
This reverts https://reviews.llvm.org/rL322005
This reverts commit b7e0026b4385180c378edc658ec91a39566f2942.
llvm-svn: 322008
Adds option /guard:cf to clang-cl and -cfguard to cc1 to emit function IDs
of functions that have their address taken into a section named .gfids$y for
compatibility with Microsoft's Control Flow Guard feature.
Differential Revision: https://reviews.llvm.org/D40531
llvm-svn: 322005
Adds the -fstack-size-section flag to enable the .stack_sizes section. The flag defaults to on for the PS4 triple.
Differential Revision: https://reviews.llvm.org/D40712
llvm-svn: 321992
These are just overloads for _Float128. They're supported by GCC 7 and used
by glibc. APFloat support is already there so just add the overloads.
__builtin_copysignf128
__builtin_fabsf128
__builtin_huge_valf128
__builtin_inff128
__builtin_nanf128
__builtin_nansf128
This is the same support that GCC has, according to the documentation,
but limited to _Float128.
llvm-svn: 321948
As discussed in the mail thread
<https://groups.google.com/a/isocpp.org/forum/#!topic/std-discussion/T64_dW3WKUk>
"Calling noexcept function throug non-noexcept pointer is undefined behavior?",
such a call should not be UB.
However, Clang currently warns about it.
This change removes exception specifications from the function types recorded
for -fsanitize=function, both in the functions themselves and at the call sites.
That means that calling a non-noexcept function through a noexcept pointer will
also not be flagged as UB. In the review of this change, that was deemed
acceptable, at least for now. (See the "TODO" in compiler-rt
test/ubsan/TestCases/TypeCheck/Function/function.cpp.)
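A minimal sketch of the call pattern that is no longer flagged (hypothetical names):
```
void callee() noexcept {}

void invoke(void (*fp)()) {
  // The recorded function types have their exception specifications removed,
  // so -fsanitize=function no longer reports this call even though the
  // pointer type is not noexcept.
  fp();
}

int main() { invoke(callee); }
```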
To remove exception specifications from types, the existing internal
ASTContext::getFunctionTypeWithExceptionSpec was made public, and some places
otherwise unrelated to this change have been adapted to call it, too.
This is the cfe part of a patch covering both cfe and compiler-rt.
Differential Revision: https://reviews.llvm.org/D40720
llvm-svn: 321859
This implements the DWARF 5 feature described at
http://www.dwarfstd.org/ShowIssue.php?issue=141215.1
This allows a consumer to understand whether a composite data type is
trivially copyable and thus should be passed by value instead of by
reference. The canonical example is being able to distinguish the
following two types:
// S is not trivially copyable because of the explicit destructor.
struct S {
  ~S() {}
};
// T is a POD type.
struct T {
  ~T() = default;
};
<rdar://problem/36034993>
Differential Revision: https://reviews.llvm.org/D41039
llvm-svn: 321845
If the reduction requires a shuffle in the NVPTX codegen, we may need to
cast the reduced value to an integer type. This casting was implemented
incorrectly and could cause a compiler crash. This patch fixes the problem.
llvm-svn: 321818
r320902 fixed the IRGen for some types of checked multiplications. It
did not handle unsigned overflow correctly in the case where the signed
operand is negative (PR35750).
Eli pointed out that on overflow, the result must be equal to the unique
value that is equivalent to the mathematically-correct result modulo two
raised to the k power, where k is the number of bits in the result type.
This patch fixes the specialized IRGen from r320902 accordingly.
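A small worked example of that rule (my own numbers, not taken from the patch's tests):
```
#include <cstdint>

int main() {
  std::uint32_t res;
  // Mixed-sign multiply: the mathematical result is -15, which is not
  // representable in uint32_t, so overflow is reported and res holds
  // (-15) mod 2^32 = 4294967281.
  bool ovf = __builtin_mul_overflow(std::int32_t(-3), std::uint32_t(5), &res);
  return (ovf && res == 4294967281u) ? 0 : 1;
}
```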
Testing: Apart from check-clang, I modified the test harness from
r320902 to validate the results of all multiplications -- not just the
ones which don't overflow:
https://gist.github.com/vedantk/3eb9c88f82e5c32f2e590555b4af5081
llvm.org/PR35750, rdar://34963321
Differential Revision: https://reviews.llvm.org/D41717
llvm-svn: 321771
When a type is only used as a template parameter and that type is the
only type imported from another #include'd module, no skeleton CU for
that module is generated, so a consumer doesn't know where to find the
type definition. By emitting an import declaration, we can force a
skeleton CU to be generated for each imported module.
rdar://problem/36266156
llvm-svn: 321754
Summary:
The C++ Itanium ABI says:
No cookie is required if the new operator being used is ::operator new[](size_t, void*).
We should only avoid poisoning the cookie if we're calling this
operator, not others. This is dealt with before the call to
InitializeArrayCookie.
Reviewers: rjmccall, kcc, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D41301
llvm-svn: 321645
Added support for -fopenmp-simd option that allows compilation of
simd-based constructs without emission of OpenMP runtime calls.
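A hedged sketch of what this enables, assuming the file is built with -fopenmp-simd rather than full -fopenmp:
```
void scale(float *a, float s, int n) {
  // The simd directive still guides vectorization, but no calls into the
  // OpenMP runtime are emitted for it.
  #pragma omp simd
  for (int i = 0; i < n; ++i)
    a[i] *= s;
}
```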
llvm-svn: 321560
...when such an operation is done on an object during con-/destruction.
This is the cfe part of a patch covering both cfe and compiler-rt.
Differential Revision: https://reviews.llvm.org/D40295
llvm-svn: 321519
Now that in the new TBAA format we allow access types to be of
any object types, including aggregate ones, it becomes critical
to specify types of all sub-objects such aggregates comprise as
their members. In order to meet this requirement, this patch
enables generation of field descriptors for members of array
types.
Differential Revision: https://reviews.llvm.org/D41399
llvm-svn: 321352
Now that the MDBuilder helpers generating TBAA type and access
descriptors in the new format are in place, we can teach clang to
use them when requested.
Differential Revision: https://reviews.llvm.org/D41394
llvm-svn: 321351
When a function taking transparent union is declared as taking one of
union members earlier in the translation unit, clang would hit an
"Invalid cast" assertion during EmitFunctionProlog. This case
corresponds to function f1 in test/CodeGen/transparent-union-redecl.c.
We decided to cast i32 to the union because, after merging the function
declarations, the function parameter type becomes int and the
CGFunctionInfo::ArgInfo type matches the ABIArgInfo type, so we decide
it is a trivial case. But these types should also be castable to the
parameter declaration type, which is not the case here.
The fix is to check, for the trivial case, whether the ABIArgInfo type matches the
parameter declaration type. It exposed an inconsistency: we check
hasScalarEvaluationKind for different types in EmitParmDecl and
EmitFunctionProlog, while a comment says they should match.
Additional tests in Sema/transparent-union.c capture current behavior and make
sure there are no regressions.
rdar://problem/34949329
Reviewers: rjmccall, rafael
Reviewed By: rjmccall
Subscribers: aemerson, cfe-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D41311
llvm-svn: 321296
The new format requires specifying both the type of the access
and its size. This patch fixes setting access sizes for TBAA tags
that denote accesses to structure members. This fix affects all
future TBAA metadata tests for the new format, so I guess we
don't need any special tests for this fix.
Differential Revision: https://reviews.llvm.org/D41452
llvm-svn: 321250
Diagnose 'unreachable' UB when a noreturn function returns.
1. Insert a check at the end of functions marked noreturn.
2. A decl may be marked noreturn in the caller TU, but not marked in
the TU where it's defined. To diagnose this scenario, strip away the
noreturn attribute on the callee and insert check after calls to it.
Testing: check-clang, check-ubsan, check-ubsan-minimal, D40700
rdar://33660464
Differential Revision: https://reviews.llvm.org/D40698
llvm-svn: 321231
Summary: Very similar to AddressSanitizer, with the exception of the error type encoding.
Reviewers: kcc, alekseyshl
Subscribers: cfe-commits, kubamracek, llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D41417
llvm-svn: 321203
Summary: Plant an inline version of "((ac+bd)/(cc+dd)) + i((bc-ad)/(cc+dd))" instead.
Patch by Paul Walker.
Reviewed By: hfinkel
Differential Revision: https://reviews.llvm.org/D40299
llvm-svn: 321183
Fixes regression from r320533.
This fixes the undefined behavior, but I'm not sure it's really right...
I think we end up with missing coverage for code in modules.
Differential Revision: https://reviews.llvm.org/D41374
llvm-svn: 321052
At least
<http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-android/builds/6013/steps/annotate/logs/stdio>
complains about __ubsan::__ubsan_handle_function_type_mismatch_abort (compiler-rt
lib/ubsan/ubsan_handlers.cc) returning now despite being declared 'noreturn', so
it looks like a different approach is needed for the function_type_mismatch check
to be called also in cases that may ultimately succeed.
llvm-svn: 320982
As discussed in the mail thread
<https://groups.google.com/a/isocpp.org/forum/#!topic/std-discussion/T64_dW3WKUk>
"Calling noexcept function throug non-noexcept pointer is undefined behavior?",
such a call should not be UB.
However, Clang currently warns about it.
There is no cheap check whether two function type_infos only differ in noexcept,
so pass those two type_infos as additional data to the function_type_mismatch
handler (with the optimization of passing a null "static callee type" info when
that is already noexcept, so the additional check can be avoided anyway). For
the Itanium ABI (which appears to be the only one that happens to be used on
platforms that support -fsanitize=function, and which appears to only record
noexcept information for pointer-to-function type_infos, not for function
type_infos themselves), we then need to check the mangled names for occurrence
of "Do" representing "noexcept".
This is the cfe part of a patch covering both cfe and compiler-rt.
Differential Revision: https://reviews.llvm.org/D40720
llvm-svn: 320978
There are 2 parts to getting the -fassociative-math command-line flag translated to LLVM FMF:
1. In the driver/frontend, we accept the flag and its 'no' inverse and deal with the
interactions with other flags like -ffast-math -fno-signed-zeros -fno-trapping-math.
This was mostly already done - we just need to translate the flag as a codegen option.
The test file is complicated because there are many potential combinations of flags here.
Note that we are matching gcc's behavior that requires 'nsz' and no-trapping-math.
2. In codegen, we map the codegen option to FMF in the IR builder. This is simple code and
corresponding test.
For the motivating example from PR27372:
float foo(float a, float x) { return ((a + x) - x); }
$ ./clang -O2 27372.c -S -o - -ffast-math -fno-associative-math -emit-llvm | egrep 'fadd|fsub'
%add = fadd nnan ninf nsz arcp contract float %0, %1
%sub = fsub nnan ninf nsz arcp contract float %add, %2
So 'reassoc' is off as expected (and so is the new 'afn' but that's a different patch).
This case now works as expected end-to-end although the underlying logic is still wrong:
$ ./clang -O2 27372.c -S -o - -ffast-math -fno-associative-math | grep xmm
addss %xmm1, %xmm0
subss %xmm1, %xmm0
We're not done because the case where 'reassoc' is set is ignored by optimizer passes. Example:
$ ./clang -O2 27372.c -S -o - -fassociative-math -fno-signed-zeros -fno-trapping-math -emit-llvm | grep fadd
%add = fadd reassoc float %0, %1
$ ./clang -O2 27372.c -S -o - -fassociative-math -fno-signed-zeros -fno-trapping-math | grep xmm
addss %xmm1, %xmm0
subss %xmm1, %xmm0
Differential Revision: https://reviews.llvm.org/D39812
llvm-svn: 320920
This patch introduces a specialized way to lower overflow-checked
multiplications with mixed-sign operands. This fixes link failures and
ICEs on code like this:
void mul(int64_t a, uint64_t b) {
  int64_t res;
  __builtin_mul_overflow(a, b, &res);
}
The generic checked-binop irgen would use a 65-bit multiplication
intrinsic here, which requires runtime support for _muloti4 (128-bit
multiplication), and therefore fails to link on i386. To get an ICE
on x86_64, change the example to use __int128_t / __uint128_t.
Adding runtime and backend support for 65-bit or 129-bit checked
multiplication on all of our supported targets is infeasible.
This patch solves the problem by using simpler, specialized irgen for
the mixed-sign case.
llvm.org/PR34920, rdar://34963321
Testing: Apart from check-clang, I compared the output from this fairly
comprehensive test driver using unpatched & patched clangs:
https://gist.github.com/vedantk/3eb9c88f82e5c32f2e590555b4af5081
Differential Revision: https://reviews.llvm.org/D41149
llvm-svn: 320902
Previously the attributes were emitted only for function definitions.
Patch adds emission of the attributes for function declarations.
llvm-svn: 320826
Most of the -Wsign-compare warnings are due to the fact that
enums are signed by default in the MS ABI, while the
tautological comparison warnings trigger on x86 builds where
sizeof(size_t) is 4 bytes, so N > numeric_limits<unsigned>::max()
is always false.
Differential Revision: https://reviews.llvm.org/D41256
llvm-svn: 320750
Summary:
InterlockedCompareExchange128 is a bit more complicated than the other
InterlockedCompareExchange functions, so it requires a bit more work. It
doesn't directly refer to 128bit ints, instead it takes pointers to
64bit ints for Destination and ComparandResult, and exchange is taken as
two 64bit ints (high & low). The previous value is written to
ComparandResult, and success is returned. This implementation does the
following in order to produce a cmpxchg instruction:
1. Casts everything to 128-bit ints or int pointers, and glues together
   the Exchange values
2. Reads from ComparandResult to get the comparand
3. Calls cmpxchg volatile (on X86 this will produce a lock cmpxchg16b
   instruction)
   1. Result 0 (previous value) is written back to ComparandResult
   2. Result 1 (success bool) is zext'ed to a uchar and returned
Resolves bug https://llvm.org/PR35251
Patch by Colden Cullen!
Reviewers: rnk, agutowski
Reviewed By: rnk
Subscribers: majnemer, cfe-commits
Differential Revision: https://reviews.llvm.org/D41032
llvm-svn: 320730
Adding the new enumerator forced a bunch more changes into this patch than I
would have liked. The -Wtautological-compare warning was extended to properly
check the new comparison operator, clang-format needed updating because it uses
precedence levels as weights for determining where to break lines (and several
operators increased their precedence levels with this change), thread-safety
analysis needed changes to build its own IL properly for the new operator.
All "real" semantic checking for this operator has been deferred to a future
patch. For now, we use the relational comparison rules and arbitrarily give
the builtin form of the operator a return type of 'void'.
llvm-svn: 320707
Under the Microsoft ABI, it is possible for an object not to have
a virtual table pointer of its own if all of its virtual functions
were introduced by virtual bases. In that case, we need to load the
vtable pointer from one of the virtual bases and perform the type
check using its type.
Differential Revision: https://reviews.llvm.org/D41036
llvm-svn: 320638
Summary:
The backend should only emit data sharing code for the cases where it is needed.
A new function attribute is used by Clang to enable data sharing only for the cases where OpenMP semantics require it and there are variables that need to be shared.
Reviewers: hfinkel, Hahnfeld, ABataev, carlo.bertolli, caomhin
Reviewed By: ABataev
Subscribers: cfe-commits, jholewinski
Differential Revision: https://reviews.llvm.org/D41123
llvm-svn: 320527
This adds a new command line option -mprefer-vector-width to specify a preferred vector width for the vectorizers. Valid values are 'none' and unsigned integers. The driver will check that it meets those constraints. Specific supported integers will be managed by the targets in the backend.
Clang will take the value and add it as a new function attribute during CodeGen.
This represents the alternate direction proposed by Sanjay in this RFC: http://lists.llvm.org/pipermail/llvm-dev/2017-November/118734.html
The syntax here matches gcc, though gcc treats it as an x86 specific command line argument. gcc only allows values of 128, 256, and 512. I'm not having clang check any values.
Differential Revision: https://reviews.llvm.org/D40230
llvm-svn: 320419
This commit fixes a bug in IRGen where it generates completely broken
code for __fp16 vectors on X86. For example when the following code is
compiled:
half4 hv0, hv1, hv2; // these are vectors of __fp16.
void foo221() {
  hv0 = hv1 + hv2;
}
clang generates the following IR, in which two i16 vectors are added:
@hv1 = common global <4 x i16> zeroinitializer, align 8
@hv2 = common global <4 x i16> zeroinitializer, align 8
@hv0 = common global <4 x i16> zeroinitializer, align 8
define void @foo221() {
  %0 = load <4 x i16>, <4 x i16>* @hv1, align 8
  %1 = load <4 x i16>, <4 x i16>* @hv2, align 8
  %add = add <4 x i16> %0, %1
  store <4 x i16> %add, <4 x i16>* @hv0, align 8
  ret void
}
To fix the bug, this commit uses the code committed in r314056, which
modified clang to promote and truncate __fp16 vectors to and from float
vectors in the AST. It also fixes another IRGen bug where a short value
is assigned to an __fp16 variable without any integer-to-floating-point
conversion, as shown in the following example:
__fp16 a;
short b;
void foo1() {
  a = b;
}
@b = common global i16 0, align 2
@a = common global i16 0, align 2
define void @foo1() #0 {
  %0 = load i16, i16* @b, align 2
  store i16 %0, i16* @a, align 2
  ret void
}
rdar://problem/20625184
Differential Revision: https://reviews.llvm.org/D40112
llvm-svn: 320215
This is a follow-up to r320128. Eli pointed out that there is some gray
area in the language standard about whether the constant size is exact,
or a lower bound.
https://reviews.llvm.org/D40940
llvm-svn: 320185
There is no way to apply sanitizer suppressions to ObjC blocks. A
reasonable default is to have blocks inherit their parent's sanitizer
options.
rdar://32769634
Differential Revision: https://reviews.llvm.org/D40668
llvm-svn: 320132
CreateCoercedLoad/CreateCoercedStore assumes pointer argument of
memcpy is in addr space 0, which is not correct and causes invalid
bitcasts for triple amdgcn---amdgiz.
It is fixed by using alloca addr space instead.
Differential Revision: https://reviews.llvm.org/D40806
llvm-svn: 320000
The adjustment is calculated with CreatePtrDiff() which returns
the difference in (base) elements. This is passed to CreateGEP()
so make sure that the GEP base has the correct pointer type:
It needs to be a pointer to the base type, not a pointer to a
constant sized array.
Differential Revision: https://reviews.llvm.org/D40911
llvm-svn: 319931
Commit 7ac28eb0a5 / r310911 ("[OpenCL] Allow targets to select address
space per type", 2017-08-15) made Basic depend on AST, introducing a
circular dependency. Break this dependency by adding the
OpenCLTypeKind enum in Basic and map from AST types to this enum in
ASTContext.
Differential Revision: https://reviews.llvm.org/D40838
llvm-svn: 319883
Though it is incorrect from the point of view of the OpenMP standard to have
a dependent iteration space in OpenMP loops, the compiler should not crash.
Patch fixes this problem.
llvm-svn: 319700
There are 20 LLVM math intrinsics that correspond to mathlib calls according to the LangRef:
http://llvm.org/docs/LangRef.html#standard-c-library-intrinsics
We were only converting 3 mathlib calls (sqrt, fma, pow) and 12 builtin calls (ceil, copysign,
fabs, floor, fma, fmax, fmin, nearbyint, pow, rint, round, trunc) to their intrinsic-equivalents.
This patch pulls the transforms together and handles all 20 cases. The switch is guarded by a
check for const-ness to make sure we're not doing the transform if errno could possibly be set by
the libcall or builtin.
Differential Revision: https://reviews.llvm.org/D40044
llvm-svn: 319593
Previously we emitted `__tgt_target_teams` only for standalone teams
directives. This patch allows emitting this function for all teams-based
directives.
llvm-svn: 319585
These command line options are not intended for public use, and often
don't even make sense in the context of a particular tool anyway. About
90% of them are already hidden, but when people add new options they
forget to hide them, so if you were to make a brand new tool today, link
against one of LLVM's libraries, and run tool -help you would get a
bunch of junk that doesn't make sense for the tool you're writing.
This patch hides these options. The real solution is to not have
libraries defining command line options, but that's a much larger effort
and not something I'm prepared to take on.
Differential Revision: https://reviews.llvm.org/D40674
llvm-svn: 319505
The basic idea behind this patch is that since in strict aliasing
mode all accesses to union members require their outermost
enclosing union objects to be specified explicitly, then for a
given pair of accesses to union members of the form
p->a.b.c...
q->x.y.z...
it is known they can only alias if both p and q point to the same
union type and offset ranges of members a.b.c... and x.y.z...
overlap. Note that the actual types of the members do not matter.
Specifically, in this patch we do the following:
* Make unions to be valid TBAA base access types. This enables
generation of TBAA type descriptors for unions.
* Encode union types as structures with a single member of a
special "union member" type. Currently we do not encode
information about sizes of types, but conceptually such union
members are considered to be of the size of the whole union.
* Encode accesses to direct and indirect union members, including
member arrays, as accesses to these special members. All
accesses to members of a union thus get the same offset, which
is the offset of the union they are part of. This means the
existing LLVM TBAA machinery is able to handle such accesses
with no changes.
While this is already an improvement comparing to the current
situation, that is, representing all union accesses as may-alias
ones, there are further changes planned to complete the support
for unions. One of them is storing information about access sizes
so we can distinguish accesses to non-overlapping union members,
including accesses to different elements of member arrays.
Another change is encoding type sizes in order to make it
possible to compute offsets within constant-indexed array
elements. These enhancements will be addressed with separate
patches.
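A small hedged example of the kind of accesses this describes (hypothetical types):
```
// Hypothetical types: with unions as TBAA base access types, both accesses below
// get the same offset (that of the enclosing union), so whether they may alias
// depends only on the union type and on overlapping member offset ranges, not on
// the member types themselves.
union U {
  struct { int b; } a;
  float x;
};
int   read_a(union U *p) { return p->a.b; }
float read_x(union U *q) { return q->x; }
```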
Differential Revision: https://reviews.llvm.org/D39455
llvm-svn: 319413
Summary:
The -fxray-always-emit-customevents flag instructs clang to always emit
the LLVM IR for calls to the `__xray_customevent(...)` built-in
function. The default behaviour currently respects whether the function
has an `[[clang::xray_never_instrument]]` attribute, and thus does not lower
the appropriate IR code for the custom event built-in.
This change allows users calling through to the
`__xray_customevent(...)` built-in to always see those calls lowered to
the corresponding LLVM IR to lay down instrumentation points for these
custom event calls.
Using this flag lets us emit just the user-provided custom
events even while never instrumenting the start/end of the function
where they appear. This is useful in cases where "phase markers" using
__xray_customevent(...) contain very few instructions but must never be
instrumented when entered/exited.
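A hedged sketch of the intended use; the (pointer, length) form of the builtin is assumed here, not taken from this patch:
```
// Hedged sketch: a phase marker that must never itself be instrumented, but
// whose custom event should still be lowered when
// -fxray-always-emit-customevents is given. The (pointer, length) builtin
// signature is assumed.
[[clang::xray_never_instrument]] void phase_begin() {
  static const char kEvent[] = "phase:begin";
  __xray_customevent(kEvent, sizeof(kEvent) - 1);
}
```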
Reviewers: rnk, dblaikie, kpw
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D40601
llvm-svn: 319388
Emit a gap area starting after the r-paren location and ending at the
start of the body for the braces-optional statements (for, for-each,
while, etc). The count for the gap area equal to the body's count. This
extends the fix in r317758.
Fixes PR35387, rdar://35570345
Testing: stage2 coverage-enabled build of clang, check-clang
llvm-svn: 319373
Fixes regression introduced by r319297. MSVC environments still use SEH
unwind opcodes but they should use the Microsoft C++ EH personality, not
the mingw one.
llvm-svn: 319363
This is a re-apply of r319294.
adds -fseh-exceptions and -fdwarf-exceptions flags
clang will check if the user has specified an exception model flag,
in the absence of specifying the exception model clang will then check
the driver default and append the model flag for that target to cc1
-fno-exceptions has a higher priority than specifying the model
move __SEH__ macro definitions out of Targets into InitPreprocessor
behind the -fseh-exceptions flag
move __ARM_DWARF_EH__ macro definitions out of various targets and into
InitPreprocessor behind the -fdwarf-exceptions flag and arm|thumb check
remove unused USESEHExceptions from the MinGW Driver
fold USESjLjExceptions into a new GetExceptionModel function that
gives the toolchain classes more flexibility with eh models
Reviewers: rnk, mstorsjo
Differential Revision: https://reviews.llvm.org/D39673
llvm-svn: 319297
adds -fseh-exceptions and -fdwarf-exceptions flags
clang will check if the user has specified an exception model flag,
in the absence of specifying the exception model clang will then check
the driver default and append the model flag for that target to cc1
clang cc1 assumes dwarf is the default if none is passed
and -fno-exceptions has a higher priority than specifying the model
move __SEH__ macro definitions out of Targets into InitPreprocessor
behind the -fseh-exceptions flag
move __ARM_DWARF_EH__ macro definitions out of various targets and into
InitPreprocessor behind the -fdwarf-exceptions flag and arm|thumb check
remove unused USESEHExceptions from the MinGW Driver
fold USESjLjExceptions into a new GetExceptionModel function that
gives the toolchain classes more flexibility with eh models
Reviewers: rnk, mstorsjo
Differential Revision: https://reviews.llvm.org/D39673
llvm-svn: 319294
Currently CodeGen is calling std::sort on the features vector in TargetOptions for every function, but I don't think CodeGen should be modifying TargetOptions.
Differential Revision: https://reviews.llvm.org/D40228
llvm-svn: 319195
These functions were defined as static members of TemplateSpecializationType.
Now they are moved to namespace level. Previously there were different
implementations for lists containing TemplateArgument and TemplateArgumentLoc,
now these implementations share the same code.
This change is a result of refactoring patch D40508. NFC.
llvm-svn: 319178
The information about access and type sizes is necessary for
producing TBAA metadata in the new size-aware format. With this
patch, D39955 and D39956 in place we should be able to change
CodeGenTBAA::createScalarTypeNode() and
CodeGenTBAA::getBaseTypeInfo() to generate metadata in the new
format under the -new-struct-path-tbaa command-line option. For
now, this new information remains unused.
Differential Revision: https://reviews.llvm.org/D40176
llvm-svn: 319012
In the future the compiler will analyze whether the OpenMP
runtime needs to be (fully) initialized and avoid that overhead
if possible. The functions already take an argument to transfer
that information to the runtime, so pass in the default value 1.
(This is needed for binary compatibility with libomptarget-nvptx
currently being upstreamed.)
Differential Revision: https://reviews.llvm.org/D40354
llvm-svn: 318836
This clang patch changes the __tgt_* API function signatures in preparation for the new map interface.
Changes are: Device IDs 32bits --> 64bits, Flags 32bits --> 64bits
Differential revision: https://reviews.llvm.org/D40281
llvm-svn: 318789
This is an instrumentation flag that's similar to
-finstrument-functions, but it only inserts calls on function entry, the
calls are inserted post-inlining, and they don't take any arguments.
This is intended for users who want to instrument function entry with
minimal overhead.
(-pg would be another alternative, but forces frame pointer emission and
affects link flags, so is probably best left alone to be used for
generating gcov data.)
Differential revision: https://reviews.llvm.org/D40276
llvm-svn: 318785
OpenMP 5.0 introduces asynchronous data update/dependency clauses on
target data directives. Patch adds initial support for outer task
regions to use task-based codegen for future async target data
directives.
llvm-svn: 318781
Summary:
This patch is part of the development effort to add support in the current OpenMP GPU offloading implementation for implicitly sharing variables between a target region executed by the team master thread and the worker threads within that team.
This patch is the first of three required for successfully performing the implicit sharing of master thread variables with the worker threads within a team. The remaining two patches are:
- Patch D38978 to the LLVM NVPTX backend which ensures the lowering of shared variables to a device memory space which allows the sharing of references;
- A patch (coming soon) to the libomptarget runtime library which ensures that a list of references to shared variables is properly maintained.
A simple code snippet which illustrates an implicit data sharing situation is as follows:
```
#pragma omp target
{
// master thread only
int v;
#pragma omp parallel
{
// worker threads
// use v
}
}
```
Variable v is implicitly shared from the team master thread which executes the code in between the target and parallel directives. The worker threads must operate on the latest version of v, including any updates performed by the master.
The code generated in this patch relies on the LLVM NVPTX patch (mentioned above) which prevents v from being lowered in the thread local memory of the master thread thus making the reference to this variable un-shareable with the workers. This ensures that the code generated by this patch is correct.
Since the parallel region is outlined the passing of arguments to the outlined regions must preserve the original order of arguments. The runtime therefore maintains a list of references to shared variables thus ensuring their passing in the correct order. The passing of arguments to the outlined parallel function is performed in a separate function which the data sharing infrastructure constructs in this patch. The function is inlined when optimizations are enabled.
Reviewers: hfinkel, carlo.bertolli, arpith-jacob, Hahnfeld, ABataev, caomhin
Reviewed By: ABataev
Subscribers: cfe-commits, jholewinski
Differential Revision: https://reviews.llvm.org/D38976
llvm-svn: 318773
This patch introduces a couple of helper functions that make it
possible to handle the caching logic in a single place.
Differential Revision: https://reviews.llvm.org/D39953
llvm-svn: 318752
https://reviews.llvm.org/D40187
This patch implements code gen for 'teams distribute parallel for' on the host, including all its clauses and related regression tests.
llvm-svn: 318692
The object is provided by the objc runtime and is never visible in the
module itself, but even so, the address point we compute points into it,
and "+16" is guaranteed not to overflow.
This matches the c++ vtable IRGen.
Note that I'm not entirely convinced the 'i8*' type is correct here: at
the IR level, we're accessing memory that's outside the global object.
But we don't control the allocation, so it's not obviously wrong either.
But either way, this is only in a global initializer, so I don't think
it's going to be mucked with. Filed PR35352 to discuss that.
llvm-svn: 318545
Summary:
The MS ABI convention is that the 'this' pointer on entry is the address
of the vfptr that was used to make the virtual method call. In other
words, the pointer on entry always points to the base subobject that
introduced the virtual method. Consider this hierarchy:
struct A { virtual void f() = 0; };
struct B { virtual void g() = 0; };
struct C : A, B {
void f() override;
void g() override;
};
On entry to C::g, [ER]CX will contain the address of C's B subobject,
and C::g will have to subtract sizeof(A) to recover a pointer to C.
Before this change, we applied this adjustment in the prologue and
stored the new value into the "this" local variable alloca used for
debug info. However, MSVC does not do this, presumably because it is
often profitable to fold the adjustment into later field accesses. This
creates a problem, because the debugger expects the variable to be
unadjusted. Unfortunately, CodeView doesn't have anything like DWARF
expressions for computing variables that aren't in the program anymore,
so we have to declare 'this' to be the unadjusted value if we want the
debugger to see the right value.
This has the side benefit that, in optimized builds, the 'this' pointer
will usually be available on function entry because it doesn't require
any adjustment.
Reviewers: hans
Subscribers: aprantl, cfe-commits
Differential Revision: https://reviews.llvm.org/D40109
llvm-svn: 318440
Summary:
Constant samplers are handled as static variables in clang's code generation
library, which leads to llvm::unreachable. We bypass emitting the sampler variable
as static since it's translated to a function call later.
Reviewers: yaxunl, Anastasia
Reviewed By: yaxunl, Anastasia
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D34342
llvm-svn: 318290
LLVM exposes a file in the backend (X86TargetParser.def) that
contains information about the correct list of CpuIs values.
This patch removes 2 of the copied and pasted versions of this
list from clang and instead includes the data from the .def file.
Differential Revision: https://reviews.llvm.org/D40054
llvm-svn: 318234
Lifting from Bob Wilson's notes: The hash value that we compute and
store in PGO profile data to detect out-of-date profiles does not
include enough information. This means that many significant changes to
the source will not cause compiler warnings about the profile being out
of date, and worse, we may continue to use the outdated profile data to
make bad optimization decisions. There is some tension here because
some source changes won't affect PGO and we don't want to invalidate the
profile unnecessarily.
This patch adds a new hashing scheme which is more sensitive to loop
nesting, conditions, and out-of-order control flow. Here are examples
which show snippets which get the same hash under the current scheme,
and different hashes under the new scheme:
Loop Nesting Example
--------------------
// Snippet 1
while (foo()) {
while (bar()) {}
}
// Snippet 2
while (foo()) {}
while (bar()) {}
Condition Example
-----------------
// Snippet 1
if (foo())
bar();
baz();
// Snippet 2
if (foo())
bar();
else
baz();
Out-of-order Control Flow Example
---------------------------------
// Snippet 1
while (foo()) {
if (bar()) {}
baz();
}
// Snippet 2
while (foo()) {
if (bar())
continue;
baz();
}
In each of these cases, it's useful to differentiate between the
snippets because swapping their profiles gives bad optimization hints.
The new hashing scheme considers some logical operators in an effort to
detect more changes in conditions. This isn't a perfect scheme. E.g, it
does not produce the same hash for these equivalent snippets:
// Snippet 1
bool c = !a || b;
if (d && e) {}
// Snippet 2
bool f = d && e;
bool c = !a || b;
if (f) {}
This would require an expensive data flow analysis. Short of that, the
new hashing scheme looks reasonably complete, based on a scan over the
statements we place counters on.
Profiles which use the old version of the PGO hash remain valid and can
be used without issue (there are tests in tree which check this).
rdar://17068282
Differential Revision: https://reviews.llvm.org/D39446
llvm-svn: 318229
This updates -mcount to use the new attribute names (LLVM r318195), and
switches over -finstrument-functions to also use these attributes rather
than inserting instrumentation in the frontend.
It also adds a new flag, -finstrument-functions-after-inlining, which
makes the cygprofile instrumentation get inserted after inlining rather
than before.
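For reference, a hedged sketch of the usual cygprofile hooks the inserted calls target (the hook names below are the ones -finstrument-functions has historically used, assumed rather than taken from this patch):
```
// Hedged sketch: the inserted calls target the usual cygprofile hooks; with
// -finstrument-functions-after-inlining they are inserted after inlining rather
// than in the frontend, so fully inlined functions no longer produce calls.
extern "C" void __cyg_profile_func_enter(void *fn, void *call_site);
extern "C" void __cyg_profile_func_exit(void *fn, void *call_site);
```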
Differential Revision: https://reviews.llvm.org/D39331
llvm-svn: 318199
Summary: Currently the -fdebug-pass-manager flag for clang doesn't enable the debug logging in the analysis managers. This is different than what the switch does when passed to opt.
Reviewers: chandlerc
Reviewed By: chandlerc
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D40007
llvm-svn: 318140
Not much interesting here. Mostly wiring things together.
One thing worth noting is that the approach is substantially different
from the old PM. Here, the -O0 case works fundamentally differently in
that we just directly build the pipeline without any callbacks or other
cruft. In some ways, this is nice and clean. However, I don't like that
it causes the sanitizers to be enabled with different changes at
different times. =/ Suggestions for a better way to do this are welcome.
Differential Revision: https://reviews.llvm.org/D39085
llvm-svn: 318131
Registers it and everything, updates all the references, etc.
Next patch will add support to Clang's `-fexperimental-new-pass-manager`
path to actually enable BoundsChecking correctly.
Differential Revision: https://reviews.llvm.org/D39084
llvm-svn: 318128
cbrt() is always constant because it can't overflow or underflow. Therefore, it can't set errno.
fma() is not always constant because it can overflow or underflow. Therefore, it can set errno.
But we know that it never sets errno on GNU / MSVC, so make it constant in those environments.
Differential Revision: https://reviews.llvm.org/D39641
llvm-svn: 318093
Recommit of r317951 and r317951 along with what I believe should fix
the remaining buildbot failures - the target triple should be specified
for both the ThinLTO pre-thinlink compile and backend (post-thinlink)
compile to ensure it is consistent.
Original description:
The LTO Config field wasn't being set when invoking a ThinLTO backend
via clang (i.e. for distributed builds).
llvm-svn: 318042
Summary:
We don't want the cleanup dest slot to be saved into the coroutine frame (as some of the cleanup code may
access it after the coroutine frame is destroyed).
This is an alternative to https://reviews.llvm.org/D37093
It is possible to do this for all functions, but a cursory check showed that at -O0 we get slightly longer functions (by 1-3 instructions), so we are only limiting cleanup.dest.slot elimination to coroutines.
Reviewers: rjmccall, hfinkel, eric_niebler
Reviewed By: eric_niebler
Subscribers: EricWF, cfe-commits
Differential Revision: https://reviews.llvm.org/D39768
llvm-svn: 317981
llvm-objcopy is getting to where it can be used in non-trivial ways
(such as for dwarf fission in clang). It now supports dwarf fission but
this feature hasn't been thoroughly tested yet. This change allows
people to optionally build clang to use llvm-objcopy rather than GNU
objcopy. By default GNU objcopy is still used so nothing should change.
Differential Revision: https://reviews.llvm.org/D39029
llvm-svn: 317960
Summary:
The LTO Config field wasn't being set when invoking a ThinLTO backend
via clang (i.e. for distributed builds).
Reviewers: danielcdh
Subscribers: mehdi_amini, inglorion, eraman, cfe-commits
Differential Revision: https://reviews.llvm.org/D39923
llvm-svn: 317951
There are some limitations with emitting regions in macro expansions
because we don't gather file IDs within the expansions. Fix the check
that prevents us from emitting deferred regions in expansions to make an
exception for headers, which is something we can handle.
rdar://35373009
llvm-svn: 317760
The area immediately after a terminated region in the function top-level
should have the same count as the label it precedes.
This solves another problem with wrapped segments. Consider:
1| a:
2| return 0;
3| b:
4| return 1;
Without a gap area starting after the first return, the wrapped segment
from line 2 would make it look like line 3 is executed, when it's not.
rdar://35373009
llvm-svn: 317759
The area immediately after the closing right-paren of an if condition
should have a count equal to the 'then' block's count. Use a gap region
to set this count, so that region highlighting for the 'then' block
remains precise.
This solves a problem we have with wrapped segments. Consider:
1| if (false)
2| foo();
Without a gap area starting after the condition, the wrapped segment
from line 1 would make it look like line 2 is executed, when it's not.
rdar://35373009
llvm-svn: 317758
Summary:
This just seems to have been an oversight. We already supported the f64
atomic add with an explicit scope (e.g. "cta"), but not the scopeless
version.
Reviewers: tra
Subscribers: jholewinski, sanjoy, cfe-commits, llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D39638
llvm-svn: 317623
This patch renames some of the flag names of the clang/libomptarget map interface. The old names are slightly misleading, whereas the new ones describe in a better way what each flag is about.
Only the macros within the enumeration are renamed; there is no change in functionality, therefore there are no updated regression tests.
Differential Revision: https://reviews.llvm.org/D39745
llvm-svn: 317598
GNU frontends don't have options like /MT, /MD
This fixes a few link error regressions with libc++ and libc++abi
Reviewers: rnk, mstorsjo, compnerd
Differential Revision: https://reviews.llvm.org/D33620
llvm-svn: 317398
If the thread id is requested in windows mode within funclets, we may
generate an incorrect function call that could lead to broken codegen.
llvm-svn: 317208
The cloning happens before all metadata nodes are resolved. Prevent the value
mapper from running into unresolved or temporary MD nodes.
Differential Revision: https://reviews.llvm.org/D39396
llvm-svn: 317047
Summary:
This change allows generalizing pointers in type signatures used for
cfi-icall by enabling the -fsanitize-cfi-icall-generalize-pointers flag.
This works by 1) emitting an additional generalized type signature
metadata node for functions and 2) llvm.type.test()ing for the
generalized type for translation units with the flag specified.
This flag is incompatible with -fsanitize-cfi-cross-dso because it would
require emitting twice as many type hashes which would increase artifact
size.
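A hedged illustration of what "generalizing pointers" means for type signatures (hypothetical functions; the exact generalization rules are assumed, not taken from this patch):
```
// Hypothetical functions: with -fsanitize-cfi-icall-generalize-pointers, in
// addition to their exact type metadata, f and g are expected to also carry a
// generalized signature in which pointer parameters are treated as void*, and
// indirect calls are llvm.type.test'ed against that generalized type.
void f(int *p);
void g(char *p);
using fn_int = void (*)(int *);
void call(fn_int fp, int *arg) { fp(arg); }  // indirect call site that gets checked
```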
Reviewers: pcc, eugenis
Reviewed By: pcc
Subscribers: kcc
Differential Revision: https://reviews.llvm.org/D39358
llvm-svn: 317044
The LLVM sqrt intrinsic definition changed with:
D28797
...so we don't have to use any relaxed FP settings other than errno handling.
This patch sidesteps a question raised in PR27435:
https://bugs.llvm.org/show_bug.cgi?id=27435
Is a programmer using __builtin_sqrt() invoking the compiler's intrinsic definition of sqrt or the mathlib definition of sqrt?
But we have an answer now: the builtin should match the behavior of the libm function including errno handling.
Differential Revision: https://reviews.llvm.org/D39204
llvm-svn: 317031
This patch fixes various places in clang to propagate may-alias
TBAA access descriptors during construction of lvalues, thus
eliminating the need for the LValueBaseInfo::MayAlias flag.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D39008
llvm-svn: 316988
For non-zero alloca addr space, alloca is usually casted to default addr
space immediately.
For non-vla, alloca is inserted at AllocaInsertPt, therefore the addr
space cast should also be inserted at AllocaInsertPt. However,
for vla, alloca is inserted at the current insertion point of IRBuilder,
therefore the addr space cast should also be inserted at the current
insertion point of IRBuilder.
Currently clang always insert addr space cast at AllocaInsertPt, which
causes invalid IR.
This patch fixes that.
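A minimal hedged illustration (hypothetical code) of the VLA case where the alloca, and therefore the cast, is emitted at the builder's current insertion point:
```
// Hypothetical example: for the VLA below the alloca is emitted at the current
// IRBuilder insertion point, so for a non-zero alloca address space the
// addrspacecast to the default address space has to be emitted there as well,
// not at AllocaInsertPt.
void f(int n) {
  int vla[n];
  vla[0] = 0;
}
```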
Differential Revision: https://reviews.llvm.org/D39374
llvm-svn: 316909
Craig noticed that CodeGen wasn't properly ignoring the
values sent to the target attribute. This patch ignores
them.
This patch also sets the 'default' for this checking to
'supported', since only X86 has implemented the support
for checking valid CPU names and Feature Names.
One test was changed to i686, since it uses a lakemont,
which would otherwise be prohibited in x86_64.
Differential Revision: https://reviews.llvm.org/D39357
llvm-svn: 316783
Fixes an assertion failure when an ivar is a struct containing an incomplete
array. Also completes support for direct flexible array members.
rdar://problem/21054495
Reviewers: rjmccall, theraven
Reviewed By: rjmccall
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D38774
llvm-svn: 316723
Instead of only setting a non-zero debug location on the return
instruction in *_helper_block functions, set a proper location on all
instructions within these functions. Pick the start location of the
block literal expr for maximum clarity.
The debugger does not step into *_helper_block functions during normal
single-stepping because we mark their parameters as artificial. This is
what we want (the functions are implicitly generated and uninteresting
to most users). The stepping behavior is unchanged by this patch.
rdar://32907581
Differential Revision: https://reviews.llvm.org/D39310
llvm-svn: 316704
The existing code goes out of its way to put block parameters into an
alloca only at -O0, and then describes the function argument with a
dbg.declare, which is undocumented in the LLVM-CFE contract and does
not actually behave as intended after LLVM r642022.
This patch just generates the alloca unconditionally, the mem2reg pass
will eliminate it at -O1 and up anyway and points the dbg.declare to
the alloca as intended (which mem2reg will then correctly rewrite into
a dbg.value).
This reapplies r316684 with some dead code removed.
rdar://problem/35043980
Differential Revision: https://reviews.llvm.org/D39305
llvm-svn: 316689
The existing code goes out of its way to put block parameters into an
alloca only at -O0, and then describes the function argument with a
dbg.declare, which is undocumented in the LLVM-CFE contract and does
not actually behave as intended after LLVM r642022.
This patch just generates the alloca unconditionally, the mem2reg pass
will eliminate it at -O1 and up anyway and points the dbg.declare to
the alloca as intended (which mem2reg will then correctly rewrite into
a dbg.value).
rdar://problem/35043980
Differential Revision: https://reviews.llvm.org/D39305
llvm-svn: 316684
Darwin uses char * for the variadic list type (va_list). We use the PPC
SVR4 ABI for PPC, which uses a structure type for the va_list. When
constructing the GEP, we would fail due to the incorrect handling for
the va_list. Correct this to use the right type.
llvm-svn: 316599
Ensure that we check the ivar containing decl for the DLL storage
attribute rather than the ivar itself as the dll storage is associated
to the interface decl not the ivar decl.
llvm-svn: 316545
Builder save/restores insertion pointer when emitting addr space cast
for alloca, but does not save/restore debug loc, which causes verifier
failure for certain call instructions.
This patch fixes that.
Differential Revision: https://reviews.llvm.org/D39069
llvm-svn: 316484
In some cases the compiler can deduce the length of an array section
as constants. With this information, VLAs can be avoided in favor of
a constant sized array or even a scalar value if the length is 1.
Example:
int a[4], b[2];
pragma omp parallel reduction(+: a[1:2], b[1:1])
{ }
For chained array sections, this optimization is restricted to cases
where all array sections except the last have a constant length 1.
This trivially guarantees that there are no holes in the memory region
that needs to be privatized.
Example:
int c[3][4];
pragma omp parallel reduction(+: c[1:1][1:2])
{ }
This relands commit r316229 that I reverted in r316235 because it
failed on some bots. During investigation I found that this was because
Clang and GCC evaluate the two arguments to emplace_back() in
ReductionCodeGen::emitSharedLValue() in a different order, hence
leading to a different order of generated instructions in the final
LLVM IR. Fix this by passing in the arguments from temporary variables
that are evaluated in a defined order.
Differential Revision: https://reviews.llvm.org/D39136
llvm-svn: 316362
In some cases the compiler can deduce the length of an array section
as constants. With this information, VLAs can be avoided in favor of
a constant sized array or even a scalar value if the length is 1.
Example:
int a[4], b[2];
pragma omp parallel reduction(+: a[1:2], b[1:1])
{ }
For chained array sections, this optimization is restricted to cases
where all array sections except the last have a constant length 1.
This trivially guarantees that there are no holes in the memory region
that needs to be privatized.
Example:
int c[3][4];
pragma omp parallel reduction(+: c[1:1][1:2])
{ }
Differential Revision: https://reviews.llvm.org/D39136
llvm-svn: 316229
In function GetIntrinsic, not all types are covered. Types double and long long are missing, and type long is wrongly treated the same as int when it should be treated the same as long long. These problems cause compiler crashes when compiling the code in PR31161. This patch fixes the problem.
Differential Revision: https://reviews.llvm.org/D38820
llvm-svn: 316179
If the variable is boolean and we are generating an inner function with real
types, the codegen may crash because the boolean value is not loaded from
memory.
llvm-svn: 316011
Currently clang assumes the temporary variables emitted during
codegen of atomic builtins have address space 0, which
is not true for target triple amdgcn---amdgiz and causes invalid
bitcasts.
This patch fixes that.
Differential Revision: https://reviews.llvm.org/D38966
llvm-svn: 316000
The main change is that now we generate TBAA info before
constructing the resulting lvalue instead of constructing lvalue
with some default TBAA info and fixing it as necessary
afterwards. We also keep the TBAA info close to lvalue base info,
which is supposed to simplify their future merging.
This patch should not bring in any functional changes.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38947
llvm-svn: 315989
This patch addresses the rest of the cases where we pass lvalue
base info, but do not provide corresponding TBAA info.
This patch should not bring in any functional changes.
This is part of D38126 reworked to be a separate patch to make
reviewing easier.
Differential Revision: https://reviews.llvm.org/D38945
llvm-svn: 315986
A trailing deferred region isn't necessary in a function that ends with
this pattern:
...
else {
...
return;
}
Special-case this pattern so that the closing curly brace of the
function isn't marked as uncovered. This issue came up in PR34962.
llvm-svn: 315982
This makes it possible to view sub-line region counts for the l.h.s of
&& and || expressions in coverage reports.
It also fixes PR33465, which shows an example of incorrect coverage
output for an assignment statement containing '||'.
llvm-svn: 315979
Currently all consecutive bitfields are wrapped as a large integer unless there is an unnamed zero-sized bitfield in between. The patch provides an alternative manner which makes a bitfield be accessed as a separate memory location if it has a legal integer width and is naturally aligned. Such a separate bitfield may split the original consecutive bitfields into subgroups of consecutive bitfields, and each subgroup will be wrapped as an integer. This is all controlled by an option, -ffine-grained-bitfield-accesses. The alternative bitfield access manner can improve the access efficiency of those bitfields with legal width and natural alignment, but may reduce the chance of load/store combining of other bitfields, so it depends on how the bitfields are defined and actually accessed whether to use the option. For now the option is off by default.
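A hedged illustration (hypothetical struct) of the kind of member the option can split out:
```
// Hypothetical struct: with -ffine-grained-bitfield-accesses, a and b each have
// a legal integer width and are naturally aligned, so they may become separate
// 16-bit access units; c and d remain wrapped together as one integer.
struct S {
  unsigned a : 16;
  unsigned b : 16;
  unsigned c : 3;
  unsigned d : 5;
};
```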
Differential revision: https://reviews.llvm.org/D36562
llvm-svn: 315915
Summary:
Convert clang::LangAS to a strongly typed enum
Currently both clang AST address spaces and target specific address spaces
are represented as unsigned which can lead to subtle errors if the wrong
type is passed. It is especially confusing in the CodeGen files as it is
not possible to see what kind of address space should be passed to a
function without looking at the implementation.
I originally made this change for our LLVM fork for the CHERI architecture
where we make extensive use of address spaces to differentiate between
capabilities and pointers. When merging the upstream changes I usually
run into some test failures or runtime crashes because the wrong kind of
address space is passed to a function. By converting the LangAS enum to a
C++11 enum class we can catch these errors at compile time. Additionally, it is now
obvious from the function signature which kind of address space it expects.
I found the following errors while writing this patch:
- ItaniumRecordLayoutBuilder::LayoutField was passing a clang AST address
space to TargetInfo::getPointer{Width,Align}()
- TypePrinter::printAttributedAfter() prints the numeric value of the
clang AST address space instead of the target address space.
However, this code is not used so I kept the current behaviour
- initializeForBlockHeader() in CGBlocks.cpp was passing
LangAS::opencl_generic to TargetInfo::getPointer{Width,Align}()
- CodeGenFunction::EmitBlockLiteral() was passing an AST address space to
TargetInfo::getPointerWidth()
- CGOpenMPRuntimeNVPTX::translateParameter() passed a target address space
to Qualifiers::addAddressSpace()
- CGOpenMPRuntimeNVPTX::getParameterAddress() was using
llvm::Type::getPointerTo() with an AST address space
- clang_getAddressSpace() returns either a LangAS or a target address
space. As this is exposed to C I have kept the current behaviour and
added a comment stating that it is probably not correct.
Other than this the patch should not cause any functional changes.
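A hedged sketch of the idea (not the actual declarations) of why the scoped enum catches these mix-ups:
```
// Sketch only, not the real declarations: with a scoped enum, passing a clang
// AST address space where a plain unsigned target address space is expected no
// longer compiles, instead of silently converting.
enum class LangAS : unsigned { Default, opencl_global, opencl_private };
unsigned getPointerWidth(unsigned TargetAddrSpace);  // expects a target address space
// getPointerWidth(LangAS::opencl_private);          // error: no implicit conversion
```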
Reviewers: yaxunl, pcc, bader
Reviewed By: yaxunl, bader
Subscribers: jlebar, jholewinski, nhaehnle, Anastasia, cfe-commits
Differential Revision: https://reviews.llvm.org/D38816
llvm-svn: 315871
In OpenCL, kernel functions and non-kernel functions have different calling conventions.
For certain targets they have different argument ABIs. Also kernels have special function
attributes and metadata for runtime to launch them.
The blocks passed to enqueue_kernel are supposed to be executed as kernels. As such,
the block invoke function should be emitted as kernel with proper calling convention and
argument ABI.
This patch emits enqueued block as kernel. If a block is both called directly and passed
to enqueue_kernel, separate functions will be generated.
Differential Revision: https://reviews.llvm.org/D38134
llvm-svn: 315804
The function sanitizer only checks indirect calls through function
pointers. This excludes all non-static member functions (constructor
calls, calls through thunks, etc. all use a separate code path). Don't
emit function signatures for functions that won't be checked.
Apart from cutting down on code size, this should fix a regression on
Linux caused by r313096. For context, see the mailing list discussion:
r313096 - [ubsan] Function Sanitizer: Don't require writable text segments
Testing: check-clang, check-ubsan
Differential Revision: https://reviews.llvm.org/D38913
llvm-svn: 315786
Currently Clang uses default address space (0) to represent private address space for OpenCL
in AST. There are two issues with this:
Multiple address spaces, including the private address space, cannot be diagnosed.
There is no mangling for the default address space. For example, if private int* is emitted as
i32 addrspace(5)* in IR, it is supposed to be mangled as PUAS5i but is mangled as
Pi instead.
This patch attempts to represent OpenCL private address space explicitly in AST. It adds
a new enum LangAS::opencl_private and adds it to the variable types which are implicitly
private:
automatic variables without address space qualifier
function parameter
pointee type without address space qualifier (OpenCL 1.2 and below)
Differential Revision: https://reviews.llvm.org/D35082
llvm-svn: 315668
This feature is not (yet) approved by the C++ committee, so this is liable to
be reverted or significantly modified based on committee feedback.
No functionality change intended for existing code (a new type must be defined
in namespace std to take advantage of this feature).
llvm-svn: 315662
Fix PR32990 by effectively reverting r283063 and solving it a different
way.
We want to limit the hack to not replace equivalent available_externally
dtors specifically to libc++, which uses always_inline. It seems certain
versions of libc++ do not provide all the symbols that an explicit
template instantiation is expected to provide.
If we get to the code that forms a real alias, only *then* check if this
is available_externally, and do that by asking a better question, which
is "is this a declaration for the linker?", because *that's* what means
we can't form an alias to it.
As a follow-on simplification, remove the InEveryTU parameter. Its last
use guarded this code for forming aliases, but we should never form
aliases to declarations, regardless of what we know about every TU.
llvm-svn: 315656
reduction.
If the reduction is an array or an array section and the reduction operation
is a declare reduction without an initializer, it may lead to a crash.
llvm-svn: 315611
in C.
When getting the lvalue for thread_id variables in inlined regions,
we did not use the correct version of the function. Fixed this bug by adding
an overridden version of getThreadIDVariableLValue for inlined
regions.
llvm-svn: 315578
This patch enables explicit generation of TBAA information in all
cases where LValue base info is propagated or constructed in
non-trivial ways. Eventually, we will consider each of these
cases to make sure the TBAA information is correct and not too
conservative. For now, we just fall back to generating TBAA info
from the access type.
This patch should not bring in any functional changes.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38733
llvm-svn: 315575
This reverts commit 4e4ee1c507e2707bb3c208e1e1b6551c3015cbf5.
This is failing due to some code that isn't built on MSVC
so I didn't catch it. It's not immediately obvious how to fix this
at first glance, so I'm reverting for now.
llvm-svn: 315536
There's a lot of misuse of Twine scattered around LLVM. This
ranges in severity from benign (returning a Twine from a function
by value that is just a string literal) to pretty sketchy (storing
a Twine by value in a class). While there are some uses for
copying Twines, most of the very compelling ones are confined
to the Twine class implementation itself, and other uses are
either dubious or easily worked around.
This patch makes Twine's copy constructor private, and fixes up
all callsites.
Differential Revision: https://reviews.llvm.org/D38767
llvm-svn: 315530
If both taskloop and task directives are used at the same time in one
program, we may run into a situation where the particular type for the task
directive is reused for taskloop directives. Patch fixes this problem.
llvm-svn: 315464
This change adds a new function, CodeGen::getFieldNumber, that
enables a user of clang's code generation to get the field number
in a generated LLVM IR struct that corresponds to a particular field
in a C struct.
It is important to expose this information in Clang's code generation
interface because there is no reasonable way for users of Clang's code
generation to get this information. In particular:
LLVM struct types do not include field names.
Clang adds a non-trivial amount of logic to the code generation of LLVM IR types for structs, in particular to handle padding and bit fields.
Patch by Michael Ferguson!
Differential Revision: https://reviews.llvm.org/D38473
llvm-svn: 315392
Usually a compare expression should return i1 type, so EmitScalarConversion is called before returning:
return EmitScalarConversion(Result, CGF.getContext().BoolTy, E->getType(), E->getExprLoc());
But when a ppc intrinsic is called to compare vectors, the intrinsic can return i32 even when E->getType() is BoolTy. In this case EmitScalarConversion does nothing, so an i32 result is returned and causes a crash later.
This patch detects this case and truncates the result before returning.
Differential Revision: https://reviews.llvm.org/D38656
llvm-svn: 315358
Besides obvious code simplification, avoiding explicit creation
of LValueBaseInfo objects makes it easier to make TBAA
information part of such objects.
This is part of D38126 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38695
llvm-svn: 315289
This was done for CUDA functions in r261779, and for the same
reason this also needs to be done for OpenCL. An arbitrary
function could have a barrier() call in it, which in turn
requires the calling function to be convergent.
llvm-svn: 315094
The Cpu Init functionality is required for the target
attribute, so this patch simply splits it out into its own
function, exactly like CpuIs and CpuSupports.
llvm-svn: 315075
In C++11, references to global variables are considered constant
expressions and these variables are not captured in the outlined
regions. Patch allows capturing of such variables in the OpenMP regions.
llvm-svn: 315074
This patch is an attempt to clarify and simplify generation and
propagation of TBAA information. The idea is to pack all values
that describe a memory access, namely, base type, access type and
offset, into a single structure. This is supposed to make further
changes, such as adding support for unions and array members,
easier to prepare and review.
DecorateInstructionWithTBAA() is no longer responsible for
converting types to tags. These implicit conversions not only
complicate reading the code, but also suggest assigning scalar
access tags while we generally prefer full-size struct-path tags.
TBAAPathTag is replaced with TBAAAccessInfo; the latter is now
the type of the keys of the cache map that translates access
descriptors to metadata nodes.
Fixed a bug with writing to a wrong map in
getTBAABaseTypeMetadata() (former getTBAAStructTypeInfo()).
We now check for valid base access types every time we
dereference a field. The original code only checks the top-level
base type. See isValidBaseType() / isTBAAPathStruct() calls.
Some entities have been renamed to sound more adequate and less
confusing/misleading in presence of path-aware TBAA information.
Now we do not lookup twice for the same cache entry in
getAccessTagInfo().
Refined relevant comments and descriptions.
Differential Revision: https://reviews.llvm.org/D37826
llvm-svn: 315048
code size.
Currently clang expands a call to __builtin_os_log_format into a long
sequence of instructions at the call site, causing code size to
increase in some cases.
This commit attempts to reduce code size by emitting a helper function
that can be shared by calls to __builtin_os_log_format with similar
formats and arguments. The helper function has linkonce_odr linkage to
enable the linker to merge identical functions across translation units.
Attribute 'noinline' is attached to the helper function at -Oz so that
the inliner doesn't inline functions that can potentially be merged.
This commit also fixes a bug where the generated IR writes past the end
of the buffer when "%m" is the last specifier appearing in the format
string passed to __builtin_os_log_format.
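A hedged sketch of a call site that the shared helper can now serve; the builtin usage below is assumed from existing clang usage, not taken from this patch:
```
// Hedged sketch: call sites with the same format shape can share one
// linkonce_odr helper that fills the buffer, instead of each expanding the full
// instruction sequence inline.
void log_value(void *buf, int v) {
  __builtin_os_log_format(buf, "value: %d", v);
}
```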
Original patch by Duncan Exon Smith.
rdar://problem/34065973
rdar://problem/34196543
Differential Revision: https://reviews.llvm.org/D38606
llvm-svn: 315045
This patch makes it possible to produce access tags in a uniform
manner regardless whether the resulting tag will be a scalar or a
struct-path one. getAccessTagInfo() now takes care of the actual
translation of access descriptors to tags and can handle all
kinds of accesses. Facilities that are specific to scalar accesses
are eliminated.
Some more details:
* DecorateInstructionWithTBAA() is not responsible for conversion
of types to access tags anymore. Instead, it takes an access
descriptor (TBAAAccessInfo) and generates corresponding access
tag from it.
* getTBAAInfoForVTablePtr() reworked to
getTBAAVTablePtrAccessInfo() that now returns the
virtual-pointer access descriptor and not the virtual-pointer
type metadata.
* Added function getTBAAMayAliasAccessInfo() that returns the
descriptor for may-alias accesses.
* getTBAAStructTagInfo() renamed to getTBAAAccessTagInfo() as now
it is the only way to generate access tag by a given access
descriptor. It is capable of producing both scalar and
struct-path tags, depending on options and availability of the
base access type. getTBAAScalarTagInfo() and its cache
ScalarTagMetadataCache are eliminated.
* Now that we do not need to care about whether the resulting
access tag should be a scalar or struct-path one,
getTBAAStructTypeInfo() is renamed to getBaseTypeInfo().
* Added function getTBAAAccessInfo() that constructs access
descriptor by a given QualType access type.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38503
llvm-svn: 314979
This patch makes it possible to produce access tags in a uniform
manner regardless whether the resulting tag will be a scalar or a
struct-path one. getAccessTagInfo() now takes care of the actual
translation of access descriptors to tags and can handle all
kinds of accesses. Facilities that are specific to scalar accesses
are eliminated.
Some more details:
* DecorateInstructionWithTBAA() is not responsible for conversion
of types to access tags anymore. Instead, it takes an access
descriptor (TBAAAccessInfo) and generates corresponding access
tag from it.
* getTBAAInfoForVTablePtr() reworked to
getTBAAVTablePtrAccessInfo() that now returns the
virtual-pointer access descriptor and not the virtual-pointer
type metadata.
* Added function getTBAAMayAliasAccessInfo() that returns the
descriptor for may-alias accesses.
* getTBAAStructTagInfo() renamed to getTBAAAccessTagInfo() as now
it is the only way to generate access tag by a given access
descriptor. It is capable of producing both scalar and
struct-path tags, depending on options and availability of the
base access type. getTBAAScalarTagInfo() and its cache
ScalarTagMetadataCache are eliminated.
* Now that we do not need to care about whether the resulting
access tag should be a scalar or struct-path one,
getTBAAStructTypeInfo() is renamed to getBaseTypeInfo().
* Added function getTBAAAccessInfo() that constructs access
descriptor by a given QualType access type.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38503
llvm-svn: 314977
Currently block is translated to a structure equivalent to
struct Block {
void *isa;
int flags;
int reserved;
void *invoke;
void *descriptor;
};
Except invoke, which is the pointer to the block invoke function,
all other fields are useless for OpenCL, which clutter the IR and
also waste memory since the block struct is passed to the block
invoke function as argument.
On the other hand, the size and alignment of the block struct is
not stored in the struct, which causes difficulty to implement
__enqueue_kernel as library function, since the library function
needs to know the size and alignment of the argument which needs
to be passed to the kernel.
This patch removes the useless fields from the block struct and adds
size and align fields. The equivalent block struct will become
struct Block {
int size;
int align;
generic void *invoke;
/* custom fields */
};
It also changes the pointer to the invoke function to be
a generic pointer since the address space of a function
may not be private on certain targets.
Differential Revision: https://reviews.llvm.org/D37822
llvm-svn: 314932
https://reviews.llvm.org/D38371
This patch implements codegen for the combined 'teams distribute" OpenMP pragma and adds regression tests for all its clauses.
llvm-svn: 314905
This patch fixes clang to propagate complete TBAA information for
atomic accesses and not just the final access types. Prepared
against D38456 and requires it to be committed first.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38460
llvm-svn: 314784
With this patch we implement a concept of TBAA access descriptors
that are capable of representing both scalar and struct-path
accesses in a generic way.
This is part of D37826 reworked to be a separate patch to
simplify review.
Differential Revision: https://reviews.llvm.org/D38456
llvm-svn: 314780
Don't emit alignment checks which the IR constant folder throws away.
I've tested this out on X86FastISel.cpp. While this doesn't decrease
end-to-end compile-time significantly, it results in 122 fewer type
checks (1% reduction) overall, without adding any real complexity.
Differential Revision: https://reviews.llvm.org/D37544
llvm-svn: 314752
directives.
The argument of the `device` clause in target-based executable
directives must be captured to support codegen for the `target`
directives with the `depend` clauses.
llvm-svn: 314686
Simplified and generalized codegen for the non-offloading part that works if
offloading fails or the condition of the `if` clause is `false`.
llvm-svn: 314670
This patch fixes misleading names of entities related to getting,
setting and generation of TBAA access type descriptors.
This is effectively an attempt to provide a review for D37826 by
breaking it into smaller pieces.
Differential Revision: https://reviews.llvm.org/D38404
llvm-svn: 314657
to have child entries describing the template parameters. This will
be on by default for SCE tuning.
Differential Revision: https://reviews.llvm.org/D14358
llvm-svn: 314444
Added missing addrspacecast case in alignment computation
logic of pointer type emission in IR generation.
Differential Revision: https://reviews.llvm.org/D37804
llvm-svn: 314304
Currently, if __attribute__((section())) is used for extern variables,
section information is not emitted in generated IR when the variables are used.
This is expected since sections are not generated for external linkage objects.
However NiosII requires this information as it uses special GP-relative accesses
for any objects that use the section attribute (.sdata). GCC keeps this attribute in the
middle-end.
This change emits the section information for all targets.
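A hedged illustration of the case in question (hypothetical names):
```
// Hypothetical example: an extern variable declared with a section attribute.
// With this change the section information is emitted for the use below even
// though the object has external linkage, which NiosII needs for its
// GP-relative (.sdata) accesses.
extern int table[4] __attribute__((section(".sdata")));
int first() { return table[0]; }
```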
Patch By: Elizabeth Andrews
Differential Revision:https://reviews.llvm.org/D36487
llvm-svn: 314262
This patch fixes clang to decorate reference accesses as pointers
and not as "omnipotent chars".
Differential Revision: https://reviews.llvm.org/D38074
llvm-svn: 314209
directives.
If the variable is used in the target-based region but is not found in
any private|mapping clause, then generate implicit firstprivate|map
clauses for these implicitly mapped variables.
llvm-svn: 314205
Summary:
This is the follow-up patch to D37924.
This change refactors clang to use the newly added section headers
in SpecialCaseList to specify which sanitizers blacklist entries
should apply to, like so:
[cfi-vcall]
fun:*bad_vcall*
[cfi-derived-cast|cfi-unrelated-cast]
fun:*bad_cast*
The SanitizerSpecialCaseList class has been added to allow querying by
SanitizerMask, and SanitizerBlacklist and its downstream users have been
updated to provide that information. Old blacklists not using sections
will continue to function identically since the blacklist entries will
be placed into a '[*]' section by default matching against all
sanitizers.
Reviewers: pcc, kcc, eugenis, vsk
Reviewed By: eugenis
Subscribers: dberris, cfe-commits, mgorny
Differential Revision: https://reviews.llvm.org/D37925
llvm-svn: 314171
This is to fix PR34347. EmitAtomicExpr now only uses alignment information from
Type, instead of Decl, so when the declaration of an atomic variable is marked
to have an alignment equal to its size, EmitAtomicExpr doesn't know about it and
will generate a libcall instead of an atomic op. The patch uses EmitPointerWithAlignment
to get the precise alignment information.
Differential Revision: https://reviews.llvm.org/D37310
llvm-svn: 314145
This commit fixes a bug in the handling of storage-only __fp16 vectors
where clang didn't promote __fp16 vector operands to float vectors.
Conceptually, it performs the following transformation on the AST in
CreateBuiltinBinOp and CreateBuiltinUnaryOp:
(Before)
typedef __fp16 half4 __attribute__ ((vector_size (8)));
typedef float float4 __attribute__ ((vector_size (16)));
half4 hv0, hv1, hv2, hv3;
hv0 = hv1 + hv2 + hv3;
(After)
float4 t0 = (float4)hv1 + (float4)hv2;
float4 t1 = t0 + (float4)hv3;
hv0 = (half4)t1;
Note that this commit fixes the bug for targets that set
HalfArgsAndReturns to true (ARM and ARM64). Targets using intrinsics
such as llvm.convert.to.fp16 to handle __fp16 are still broken.
rdar://problem/20625184
Differential Revision: https://reviews.llvm.org/D32520
llvm-svn: 314056
body of global block invoke functions.
This commit fixes an infinite loop in IRGen that occurs when compiling
the following code:
void FUNC2() {
static void (^const block1)(int) = ^(int a){
if (a--)
block1(a);
};
}
This is how IRGen gets stuck in the infinite loop:
1. GenerateBlockFunction is called to emit the body of "block1".
2. GetAddrOfGlobalBlock is called to get the address of "block1". The
function calls getAddrOfGlobalBlockIfEmitted to check whether the
global block has been emitted. If it hasn't been emitted, it then
tries to emit the body of the block function by calling
GenerateBlockFunction, which goes back to step 1.
This commit prevents the infinite loop by building the global block in
GenerateBlockFunction before emitting the body of the block function.
rdar://problem/34541684
Differential Revision: https://reviews.llvm.org/D38118
llvm-svn: 314029
Add an option to emit limited coverage info for unused decls. It's just a
cl::opt for now to allow us to experiment quickly.
When building llc, this results in an 84% size reduction in the llvm_covmap
section, and a similar size reduction in the llvm_prf_names section. In
practice I expect the size reduction to be roughly quadratic with the size of
the program.
The downside is that coverage for headers will no longer be complete. This will
make the line/function/region coverage metrics incorrect, since they will be
artificially high. One mitigation would be to somehow disable those metrics
when using limited-coverage=true.
This is related to: llvm.org/PR34533 (make SourceBasedCodeCoverage scale)
Differential Revision: https://reviews.llvm.org/D38107
llvm-svn: 314002
If the captured variable has a re-declaration, we may end up in a
situation where the captured variable is the re-declaration while the
referenced variable is the canonical declaration (or vice versa). In
this case we may generate wrong code. The patch fixes this situation.
llvm-svn: 313995
The attribute informs the compiler that the annotated pointer parameter
of a function cannot escape and enables IRGen to attach attribute
'nocapture' to parameters that are annotated with the attribute. That is
the only optimization that currently takes advantage of 'noescape', but
there are other optimizations that will be added later to improve
IRGen for ObjC blocks.
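As a rough usage sketch (the declaration below is invented, not from the patch; compile with -fblocks):
typedef void (^Callback)(int);
// 'noescape' promises the block does not outlive the call, so IRGen can mark
// the parameter 'nocapture'.
void enumerate(int n, __attribute__((noescape)) Callback callback) {
  for (int i = 0; i < n; ++i)
    callback(i);
}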
This recommits r313722, which was reverted in r313725 because clang
couldn't build compiler-rt. It failed to build because there were
function declarations that were missing 'noescape'. That has been fixed
in r313929.
rdar://problem/19886775
Differential Revision: https://reviews.llvm.org/D32210
llvm-svn: 313945
This reverts commit r313722.
It looks like compiler-rt/lib/tsan/rtl/tsan_libdispatch_mac.cc cannot be
compiled because some of the functions declared in the file do not match
the ones in the SDK headers (which are annotated with 'noescape').
llvm-svn: 313725
The attribute informs the compiler that the annotated pointer parameter
of a function cannot escape and enables IRGen to attach attribute
'nocapture' to parameters that are annotated with the attribute. That is
the only optimization that currently takes advantage of 'noescape', but
there are other optimizations that will be added later to improve
IRGen for ObjC blocks.
rdar://problem/19886775
Differential Revision: https://reviews.llvm.org/D32210
llvm-svn: 313722
The attribute informs the compiler that the annotated pointer parameter
of a function cannot escape and enables IRGen to attach attribute
'nocapture' to parameters that are annotated with the attribute. That is
the only optimization that currently takes advantage of 'noescape', but
there are other optimizations that will be added later to improve
IRGen for ObjC blocks.
rdar://problem/19886775
Differential Revision: https://reviews.llvm.org/D32210
llvm-svn: 313720
As a special case, throw away deferred regions for trailing returns.
This allows the closing curly brace to have a count, and is less
distracting.
llvm-svn: 313603
Summary:
Restore the `__builtin_wasm_rethrow` builtin deleted in D37931. On second
thought, it appears it can be used to implement `__cxa_rethrow`.
Reviewers: dschuff, sunfish
Reviewed By: dschuff
Subscribers: jfb, sbc100, jgravelle-google
Differential Revision: https://reviews.llvm.org/D37942
llvm-svn: 313430
This patch replaces the perm2f128 intrinsics with native shuffle vectors.
This uses a pretty simple approach to allocate source 0 to the lower half input and source 1 to the upper half input. Then it's just a matter of filling in the indices to use either the lower or upper half of that specific source. This can result in the same source being used by both operands. InstCombine or SelectionDAGBuilder should be able to clean that up.
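For reference, a small example of the affected intrinsic (illustrative only; compile with -mavx):
#include <immintrin.h>
__m256 mix_halves(__m256 a, __m256 b) {
  // 0x21 puts the upper 128-bit half of 'a' in the result's lower lane and the
  // lower half of 'b' in the upper lane; this now lowers to a shufflevector.
  return _mm256_permute2f128_ps(a, b, 0x21);
}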
Differential Revision: https://reviews.llvm.org/D37892
llvm-svn: 313418
Summary:
Remove `__builtin_wasm_rethrow` builtin. I thought it was required to implement
`__cxa_rethrow` function in libcxxabi, but it turned out it will be using
`__builtin_wasm_throw` instead.
Reviewers: dschuff, jgravelle-google
Reviewed By: jgravelle-google
Subscribers: jfb, sbc100, jgravelle-google
Differential Revision: https://reviews.llvm.org/D37931
llvm-svn: 313402
Summary:
To improve CodeView quality for static member functions, we need to make the
static explicit. In addition to a small change in LLVM's CodeViewDebug to
return the appropriate MethodKind, this requires a small change in Clang to
note the staticness in the debug info metadata.
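A tiny illustrative C++ case (not from the patch):
struct Counter {
  static int total();   // after this change the CodeView method kind is recorded as static
  int value();          // non-static member for comparison
};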
Subscribers: aprantl, hiraditya
Differential Revision: https://reviews.llvm.org/D37715
llvm-svn: 313192
This change will make it possible to use -fsanitize=function on Darwin and
possibly on other platforms. It fixes an issue with the way RTTI is stored into
function prologue data.
On Darwin, addresses stored in prologue data can't require run-time fixups and
must be PC-relative. Run-time fixups are undesirable because they necessitate
writable text segments, which can lead to security issues. And absolute
addresses are undesirable because they break PIE mode.
The fix is to create a private global which points to the RTTI, and then to
encode a PC-relative reference to the global into prologue data.
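An illustrative example (not from the patch) of the kind of check -fsanitize=function performs via the RTTI stored in prologue data:
// clang++ -fsanitize=function example.cpp
void takes_int(int) {}
int main() {
  void (*fp)() = reinterpret_cast<void (*)()>(&takes_int);
  fp();   // indirect call through a mismatched function type; reported at run time
  return 0;
}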
Differential Revision: https://reviews.llvm.org/D37597
llvm-svn: 313096
Summary:
Microsoft Visual Studio expects debug locations to correspond to
statements. We used to emit locations for expressions nested inside statements.
This would confuse the debugger, causing it to stop multiple times on the
same line and breaking the "step into specific" feature. This change inhibits
the emission of debug locations for nested expressions when emitting CodeView
debug information, unless column information is enabled.
Fixes PR34312.
Reviewers: rnk, zturner
Reviewed By: rnk
Subscribers: majnemer, echristo, aprantl, cfe-commits
Differential Revision: https://reviews.llvm.org/D37529
llvm-svn: 312965
This patch teaches the preprocessor to report more precise source ranges for
code that is skipped due to conditional directives.
The new behavior includes the '#' from the opening directive and the full text
of the line containing the closing directive in the skipped area. This matches
clang's behavior (we don't IRGen the code between the closing "endif" and
the end of the line).
This also affects the code coverage implementation. See llvm.org/PR34166 (this
also happens to be rdar://problem/23224058).
The old behavior (report the end of the skipped range as the end
location of the 'endif' token) is preserved for indexing clients.
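A small sketch (not from the patch) of how the reported range changes:
#if 0            // the skipped range now starts at the '#' of this directive
int never_compiled;
#endif           // ...and runs through the end of this line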
Differential Revision: https://reviews.llvm.org/D36642
llvm-svn: 312947
When performing NSFastEnumeration, the compiler synthesizes a call to
`countByEnumeratingWithState:objects:count:` where the `count` parameter
is of type `NSUInteger` and the return type is an `NSUInteger`. We would
previously always use `UnsignedLongTy` for the `NSUInteger` type. On
32-bit targets, `long` is 32-bits which is the same as `unsigned int`.
Most 64-bit targets are LP64, where `long` is 64-bits. However, on
LLP64 targets, such as Windows, `long` is 32-bits. Introduce new
`getNSUIntegerType` and `getNSIntegerType` helpers to allow us to
determine the correct type for the `NSUInteger` type. Wire those
through into the generation of the message dispatch to the selector.
llvm-svn: 312835
This is to fix PR34347. EmitAtomicExpr now only uses alignment information from
the Type, instead of the Decl, so when the declaration of an atomic variable is marked
to have an alignment equal to its size, EmitAtomicExpr doesn't know about it and
will generate a libcall instead of an atomic op. The patch uses EmitPointerWithAlignment
to get the precise alignment information.
Differential Revision: https://reviews.llvm.org/D37310
llvm-svn: 312830
The current coverage implementation doesn't handle region termination
very precisely. Take for example an `if' statement with a `return':
void f() {
if (true) {
return; // The `if' body's region is terminated here.
}
// This line gets the same coverage as the `if' condition.
}
If the function `f' is called, the line containing the comment will be
marked as having executed once, which is not correct.
The solution here is to create a deferred region after terminating a
region. The deferred region is completed once the start location of the
next statement is known, and is then pushed onto the region stack.
In the cases where it's not possible to complete a deferred region, it
can safely be dropped.
Testing: lit test updates, a stage2 coverage-enabled build of clang
This is a reapplication but there are no changes from the original commit.
With D36813, the segment builder in llvm will be able to handle deferred
regions correctly.
llvm-svn: 312818
This is to fix PR34347. EmitAtomicExpr now only uses alignment information from
the Type, instead of the Decl, so when the declaration of an atomic variable is marked
to have an alignment equal to its size, EmitAtomicExpr doesn't know about it and
will generate a libcall instead of an atomic op. The patch uses EmitPointerWithAlignment
to get the precise alignment information.
Differential Revision: https://reviews.llvm.org/D37310
llvm-svn: 312801
This is a recommit of r312781; in some build configurations
variable names are omitted, so changed the new regression
test accordingly.
llvm-svn: 312794
Summary:
1. Updated annotations for include/clang/StaticAnalyzer/Core/PathSensitive/Store.h, which belonged to an older version of clang.
2. Deleted annotations for CodeGenFunction::getEvaluationKind() in clang/lib/CodeGen/CodeGenFunction.h, which belonged to an older version of clang.
Reviewers: bkramer, krasimir, klimek
Reviewed By: bkramer
Subscribers: MTC
Differential Revision: https://reviews.llvm.org/D36330
Contributed by @MTC!
llvm-svn: 312790
This adds _Float16 as a source language type, which is a 16-bit floating point
type defined in C11 extension ISO/IEC TS 18661-3.
In follow up patches documentation and more tests will be added.
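For illustration (not from the patch), assuming a target where the type is available:
// _Float16 arithmetic stays in half precision, unlike __fp16, which is
// storage-only and promotes to float.
_Float16 half_sum(_Float16 a, _Float16 b) {
  return a + b;
}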
Differential Revision: https://reviews.llvm.org/D33719
llvm-svn: 312781
__kmpc_for_static_fini().
Added special flags for calls of __kmpc_for_static_fini(), like previously
for __kmpc_for_static_init(). Added flag OMP_IDENT_WORK_DISTRIBUTE
for the distribute construct, OMP_IDENT_WORK_SECTIONS for sections-based
constructs and OMP_IDENT_WORK_LOOP for loop-based constructs in
location flags.
llvm-svn: 312642
move constructor.
Previously, a user-defined reduction initializer was treated as an
assignment expression, not as an initializer. Fixed this by treating the
initializer expression as an initializer.
llvm-svn: 312638
Summary:
As attributed statements are considered simple statements, no
stoppoint was generated before emitting an attributed do/while/for/range
statement. This led to faulty debug locations.
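One illustrative way (not from the patch) such an attributed loop arises:
void consume(int);
void walk(int n) {
  // The loop pragma turns the 'for' into an attributed statement; it now gets
  // a proper stoppoint before the loop is emitted.
  #pragma clang loop unroll_count(4)
  for (int i = 0; i < n; ++i)
    consume(i);
}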
Reviewers: echristo, aaron.ballman, dblaikie
Reviewed By: dblaikie
Subscribers: bjope, aprantl, cfe-commits
Differential Revision: https://reviews.llvm.org/D37428
llvm-svn: 312623
By exposing the constant initializer, the optimizer can fold many
of these constructs.
This is a recommit of r311857 that was reverted in r311898 because
an assert was hit when building Chromium.
We have to take into account that the GlobalVariable may be first
created with a different type than the initializer. This can
happen for example when the variable is a struct with tail padding
while the initializer does not have padding. In such case, the
variable needs to be destroyed and replaced with a new one with the
type of the initializer.
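A hedged sketch (names invented) of the pattern this helps:
struct Config {
  static constexpr int kMax = 1024;   // out-of-line definition lives in another TU
};
int headroom(const int &limit);        // hypothetical helper; binding the reference odr-uses kMax
int f() {
  // The reference argument turns kMax into a real global access in IR; with the
  // constant emitted as an available_externally definition, LLVM still sees its
  // initializer and can fold it after inlining.
  return headroom(Config::kMax);
}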
Differential Revision: https://reviews.llvm.org/D34992
llvm-svn: 312512
Because it is common to treat vector types as an array of their elements, or
even some other type that's not the element type, and thus index into them, we
can't use struct-path TBAA for these accesses. Even though we already treat all
vector types as equivalent to 'char', we were using field-offset information
for them with TBAA, and this renders undefined the intra-value indexing we
intend to allow. Note that, although 'char' is universally aliasing, with path
TBAA, we can still differentiate between access to s.a and s.b in
struct { char a, b; } s;. We can't use this capability as-is for vector types.
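A brief sketch (not from the commit) of the intra-value indexing being allowed:
typedef float float4 __attribute__((vector_size(16)));
float second_lane(float4 *p) {
  // Indexing into the vector as an array of floats; struct-path field offsets
  // would have made this access undefined under TBAA.
  return ((float *)p)[1];
}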
Fixes PR33967.
llvm-svn: 312447
Not all targets support vararg (e.g. amdgpu). Instead of using vararg in the emitted functions for enqueue_kernel,
this patch creates a temporary array of size_t, stores the size arguments in the temporary array
and passes it to the emitted functions for enqueue_kernel.
Differential Revision: https://reviews.llvm.org/D36678
llvm-svn: 312441
"target" implementation
A small set of refactors that'll make it easier for me to implement 'target'
support.
First, extract the CPUSupports functionality into its own function.
This has the advantage of not wasting time in this builtin dealing with
arguments.
Second, pulls both CPUSupports and CPUIs implementation into a member-function,
so that it can be called from the resolver generation that I'm working on.
Third, creates an overload that takes simply the feature/cpu name (rather than
extracting it from a callexpr), since that info isn't available later.
Note that despite how the 'diff' looks, the EmitX86CPUSupports function simply
takes the implementation out of the 'switch'.
llvm-svn: 312355
This fixes cases where dynamic classes produced RTTI data with
external linkage, producing linker errors about duplicate symbols.
This touches code close to what was changed in SVN r244266, but
this change doesn't break the tests added in that revision.
The previous version had missed to update CodeGenCXX/virt-dtor-key.cpp,
which had a behaviour change only when running the testsuite on windows.
Differential revision: https://reviews.llvm.org/D37327
llvm-svn: 312306
This fixes cases where dynamic classes produced RTTI data with
external linkage, producing linker errors about duplicate symbols.
This touches code close to what was changed in SVN r244266, but
this change doesn't break the tests added in that revision.
Differential revision: https://reviews.llvm.org/D37206
llvm-svn: 312224
Summary:
An implementation of ubsan runtime library suitable for use in production.
Minimal attack surface.
* No stack traces.
* Definitely no C++ demangling.
* No UBSAN_OPTIONS=log_file=/path (very suid-unfriendly). And no UBSAN_OPTIONS in general.
* as simple as possible
Minimal CPU and RAM overhead.
* Source locations unnecessary in the presence of (split) debug info.
* Values and types (as in A+B overflows T) can be reconstructed from register/stack dumps, once you know what type of error you are looking at.
* above two items save 3% binary size.
When UBSan is used with -ftrap-function=abort, sometimes it is hard to reason about failures. This library replaces abort with a slightly more informative message without much extra overhead. Since the ubsan interface is not stable, this code must reside in compiler-rt.
Reviewers: pcc, kcc
Subscribers: srhines, mgorny, aprantl, krytarowski, llvm-commits
Differential Revision: https://reviews.llvm.org/D36810
llvm-svn: 312029
It caused PR759744.
> Emit static constexpr member as available_externally definition
>
> By exposing the constant initializer, the optimizer can fold many
> of these constructs.
>
> Differential Revision: https://reviews.llvm.org/D34992
llvm-svn: 311898
This adds __builtin_cpu_init which will emit a call to __cpu_indicator_init in libgcc or compiler-rt.
This is needed to support __builtin_cpu_supports/__builtin_cpu_is in an ifunc resolver.
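A hedged sketch (function names invented) of the intended ifunc-resolver use, assuming an ELF target:
static void fast_impl() {}
static void generic_impl() {}
extern "C" void (*resolve_compute())() {
  __builtin_cpu_init();   // must run before __builtin_cpu_supports
  return __builtin_cpu_supports("avx2") ? fast_impl : generic_impl;
}
void compute() __attribute__((ifunc("resolve_compute")));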
Differential Revision: https://reviews.llvm.org/D36336
llvm-svn: 311874