This patch generates some helper variables that are used as private copies of the corresponding original variables inside an OpenMP 'parallel' directive. These generated variables are initialized by default (with the default constructor, if any). In the outlined function, references to the original variables are replaced by references to these private helper variables. At the end of the initialization of the private variables, an implicit barrier is emitted by calling the __kmpc_barrier(...) runtime function, to make sure that all threads were initialized using the original values of the variables.
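A minimal sketch of the construct this handles (illustrative code, not from the patch):

    struct S { S(); };            // default constructor runs for each private copy
    void work() {
      S s;
      int i = 0;
    #pragma omp parallel private(s, i)
      {
        // 's' and 'i' here name the per-thread helper copies, which are
        // default-initialized before the implicit __kmpc_barrier(...).
      }
    }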
Differential Revision: http://reviews.llvm.org/D4752
llvm-svn: 220262
Summary:
The general approach is to add extra padding after every field
in AST/RecordLayoutBuilder.cpp, then add code to CTORs/DTORs that poisons the padding
(CodeGen/CGClass.cpp).
Everything is done under the flag -fsanitize-address-field-padding.
The blacklist file (-fsanitize-blacklist) allows the transformation to be avoided
for given classes or source files.
See also https://code.google.com/p/address-sanitizer/wiki/IntraObjectOverflow
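A sketch of the intra-object overflow this is meant to catch (hypothetical code):

    struct Foo {
      char name[8];   // extra padding is added after each field...
      int id;         // ...and poisoned in Foo's constructor
    };
    int read(Foo &f, int i) {
      return f.name[i];  // i == 9 now lands in poisoned padding, not in 'id'
    }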
Test Plan: run SPEC2006 and some of the Chromium tests with -fsanitize-address-field-padding
Reviewers: samsonov, rnk, rsmith
Reviewed By: rsmith
Subscribers: majnemer, cfe-commits
Differential Revision: http://reviews.llvm.org/D5687
llvm-svn: 219961
The functionality contained in CodeGenFunction::EmitAlignmentAssumption has
been moved to IRBuilder (so that it can also be used by LLVM-level code).
Remove this now-duplicate implementation in favor of the IRBuilder code.
llvm-svn: 219877
This change adds UBSan check to upcasts. Namely, when we
perform derived-to-base conversion, we:
1) check that the pointer-to-derived has suitable alignment
and underlying storage, if this pointer is non-null.
2) if vptr-sanitizer is enabled, and we perform conversion to
virtual base, we check that pointer-to-derived has a matching vptr.
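A sketch of a conversion that is now checked (illustrative):

    struct Base { int b; virtual ~Base(); };
    struct Derived : virtual Base { int d; };
    Base *upcast(Derived *dp) {
      // (1) alignment/storage of *dp is checked when dp is non-null;
      // (2) with the vptr sanitizer enabled, the conversion to the
      //     virtual base also checks that *dp has a matching vptr
      return dp;
    }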
llvm-svn: 219642
Moved CGOpenMPRegionInfo from CGOpenMPRuntime.h to the CGOpenMPRuntime.cpp file and reworked the code for this change. Also added processing of the ThreadID variable passed as an argument to outlined functions in parallel and task directives.
llvm-svn: 219490
This patch makes the class OMPPrivateScope a common class for all private variables. Reworked the processing of firstprivate variables (it is now based on OMPPrivateScope too).
llvm-svn: 219486
Make it possible to pass NULL through variadic functions on 64-bit
Windows targets. The Visual C++ headers define NULL to 0, when they
should define it to 0LL on Win64 so that NULL is a pointer-sized
integer.
Fixes PR20949.
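For example (sketch):

    #include <cstdio>
    void log_end_of_list() {
      // The VC++ headers expand NULL to a 32-bit 0; when passed through
      // '...' on Win64 it must still read back as a null pointer.
      std::printf("%p\n", NULL);
    }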
Reviewers: thakis, rsmith
Differential Revision: http://reviews.llvm.org/D5480
llvm-svn: 219456
Includes parsing and semantic analysis for 'omp teams' directive support from OpenMP 4.0. Adds additional analysis for the 'omp target' directive when used with the 'omp teams' directive.
llvm-svn: 219385
This patch generates some helper variables that are used as private copies of the corresponding original variables inside an OpenMP 'parallel' directive. These generated variables are initialized by copy, using the values of the original variables (with the copy constructor, if any). For arrays, an initializer is generated for a single element, and during codegen this initial value is automatically propagated to all elements of the private copy.
In the outlined function, references to the original variables are replaced by references to these private helper variables. At the end of the initialization of the private variables, an implicit barrier is generated by calling the __kmpc_barrier(...) runtime function, to make sure that all threads were initialized using the original values of the variables.
Differential Revision: http://reviews.llvm.org/D5140
llvm-svn: 219306
This patch generates some helper variables that are used as private copies of the corresponding original variables inside an OpenMP 'parallel' directive. These generated variables are initialized by copy, using the values of the original variables (with the copy constructor, if any). For arrays, an initializer is generated for a single element, and during codegen this initial value is automatically propagated to all elements of the private copy.
In the outlined function, references to the original variables are replaced by references to these private helper variables. At the end of the initialization of the private variables, an implicit barrier is generated by calling the __kmpc_barrier(...) runtime function, to make sure that all threads were initialized using the original values of the variables.
Differential Revision: http://reviews.llvm.org/D5140
llvm-svn: 219297
This patch generates some helper variables that are used as private copies of the corresponding original variables inside an OpenMP 'parallel' directive. These generated variables are initialized by copy, using the values of the original variables (with the copy constructor, if any). For arrays, an initializer is generated for a single element, and during codegen this initial value is automatically propagated to all elements of the private copy.
In the outlined function, references to the original variables are replaced by references to these private helper variables. At the end of the initialization of the private variables, an implicit barrier is generated by calling the __kmpc_barrier(...) runtime function, to make sure that all threads were initialized using the original values of the variables.
Differential Revision: http://reviews.llvm.org/D5140
llvm-svn: 219295
Summary:
Previously CodeGen assumed that static locals were emitted before they
could be accessed, which is true for automatic storage duration locals.
However, it is possible to have CodeGen emit a nested function that uses
a static local before emitting the function that defines the static
local, breaking that assumption.
Fix it by creating the static local upon access and ensuring that the
deferred function body gets emitted. We may not be able to emit the
initializer properly from outside the function body, so don't try.
Fixes PR18020. See also previous attempts to fix static locals in
PR6769 and PR7101.
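A sketch of the problematic pattern (hypothetical names):

    inline int &counter() {
      static int n = 0;  // static local of counter()
      struct Helper {
        static int &get() { return n; }  // nested function using it
      };
      return Helper::get();
    }
    // If CodeGen happens to emit Helper::get() before counter(), the static
    // local is now created upon access and counter()'s deferred body emitted.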
Reviewers: rsmith
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D4787
llvm-svn: 219265
Includes parsing and semantic analysis for 'omp teams' directive support from OpenMP 4.0. Adds additional analysis for the 'omp target' directive when used with the 'omp teams' directive.
llvm-svn: 219197
No functional changes intended.
Renamed EmitOMPSimdLoop to EmitOMPInnerLoop; I plan to reuse
it to emit the inner loop in future patches for CodeGen of the
worksharing loop directives (omp for, omp for simd).
llvm-svn: 219195
Summary:
This add support for the C++11 feature, thread_local global variables.
The ABI Clang implements is an improvement of the MSVC ABI. Sadly,
further improvements could be made but not without sacrificing ABI
compatibility.
The feature is implemented as follows:
- All thread_local initialization routines are pointed to from the
.CRT$XDU section.
- All non-weak thread_local variables have their initialization routines
called from a single function instead of getting their own .CRT$XDU
section entry. This is done to open up optimization opportunities to
the compiler.
- All weak thread_local variables have their own .CRT$XDU section entry.
This entry is in a COMDAT with the global variable it is initializing;
this ensures that we will initialize the global exactly once.
- Destructors are registered in the initialization function using
__tlregdtor.
Differential Revision: http://reviews.llvm.org/D5597
llvm-svn: 219074
This patch implements collapsing of loops (in particular, in the
presence of the 'collapse' clause). In Sema, it calculates the number
of iterations N and the expressions necessary to compute the values of
the nested loop counters from the new iteration variable (which goes
from 0 to N-1). It also adds CodeGen for 'omp simd', which uses
(and tests) this feature.
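For example (sketch):

    void add2d(float *out, const float *in, int N, int M) {
    #pragma omp simd collapse(2)
      for (int i = 0; i < N; ++i)      // the two loops are collapsed into a
        for (int j = 0; j < M; ++j)    // single space of N*M iterations
          out[i * M + j] += in[i * M + j];
    }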
Differential Revision: http://reviews.llvm.org/D5184
llvm-svn: 218743
CodeGen would try to come up with an LLVM IR type for a pointer to
member type on the way to forming an LLVM IR type for a pointer to
pointer to member type.
However, if the pointer to member representation has not been locked in yet,
we would not be able to come up with a pointer to member IR type.
In these cases, make the pointer to member type an incomplete type.
This will make the pointer to pointer to member type a pointer to an
incomplete type. If the class eventually obtains an inheritance model,
we will make the pointer to member type represent the actual inheritance
model.
Differential Revision: http://reviews.llvm.org/D5373
llvm-svn: 218084
Summary:
This patch implements a new UBSan check, which verifies
that function arguments declared to be nonnull with __attribute__((nonnull))
are actually nonnull at runtime.
To implement this check, we pass the FunctionDecl to CodeGenFunction::EmitCallArgs
(where applicable), and if the function declaration has the nonnull attribute specified
for a certain formal parameter, we compare the corresponding RValue to null as
soon as it is calculated.
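A minimal example of code the new check catches (illustrative):

    __attribute__((nonnull)) void use(int *p);
    void caller() {
      int *p = nullptr;
      use(p);  // the RValue for 'p' is compared against null at the call site
    }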
Test Plan: regression test suite
Reviewers: rsmith
Reviewed By: rsmith
Subscribers: cfe-commits, rnk
Differential Revision: http://reviews.llvm.org/D5082
llvm-svn: 217389
This makes use of the recently-added @llvm.assume intrinsic to implement a
__builtin_assume(bool) intrinsic (to provide additional information to the
optimizer). This hooks up __assume in MS-compatibility mode to mirror
__builtin_assume (the semantics have been intentionally kept compatible), and
implements GCC's __builtin_assume_aligned as assume((p - o) & mask == 0). LLVM
now contains special logic to deal with assumptions of this form.
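Usage sketch:

    int sum16(const int *p, int n) {
      __builtin_assume(n > 0);  // optimizer hint; by itself emits no code
      const int *q = static_cast<const int *>(__builtin_assume_aligned(p, 16));
      int s = 0;
      for (int i = 0; i < n; ++i)
        s += q[i];
      return s;
    }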
llvm-svn: 217349
If control falls off the end of a function after an __asm block, MSVC
assumes that the inline assembly filled the EAX and possibly EDX
registers with an appropriate return value. This functionality is used
in inline functions returning 64-bit integers in system headers, so we
need some amount of compatibility.
This is implemented in Clang by adding extra output constraints to every
inline asm block, and storing the resulting output registers into the
return value slot. If we see an asm block somewhere in the function
body, we emit a normal epilogue instead of marking the end of the
function with a return type unreachable.
Normal returns in functions not using this functionality will overwrite
the return value slot, and in most cases LLVM should be able to
eliminate the dead stores.
Fixes PR17201.
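For example, on x86 with MS inline assembly (sketch):

    int getAnswer() {
      __asm mov eax, 42
      // control falls off the end; MSVC (and now Clang) assume EAX holds
      // the return value, so a normal epilogue is emitted here
    }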
Reviewed By: majnemer
Differential Revision: http://reviews.llvm.org/D5177
llvm-svn: 217187
into EmitCXXMemberOrOperatorCall methods. In the end we want
to make the declaration visible in the EmitCallArgs() method, which
would allow us to alter CodeGen depending on function/parameter
attributes.
No functionality change.
llvm-svn: 216404
for loops introduce two scopes - one for the outer loop variable and its
initialization, and another for the body of the loop, including any
variable declared inside the loop condition.
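For example (sketch):

    for (int i = 0; int remaining = 3 - i; ++i) {
      // 'i' lives in the outer loop scope; 'remaining', declared in the
      // loop condition, lives in the inner scope together with the body.
      (void)remaining;
    }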
llvm-svn: 216288
Summary:
This refactoring introduces the ClangToLLVMArgMapping class, which
encapsulates the information about the order in which function arguments listed
in CGFunctionInfo should be passed to the actual LLVM IR function, such as:
1) position of the sret argument, if there is one
2) position of the inalloca argument, if there is one
3) position of the helper padding argument for each call argument
4) positions of the regular arguments (there can be several if an argument is expanded).
Simplify several related methods (ConstructAttributeList, EmitFunctionProlog
and EmitCall): now they don't have to maintain iterators over the list
of LLVM IR function arguments, dealing with all the sret/inalloca/this complexities,
and just use expected positions of LLVM IR arguments stored in ClangToLLVMArgMapping.
This may increase the running time of EmitFunctionProlog, as we have to traverse
expandable arguments twice, but in further refactoring we will be able
to speed up EmitCall by passing the already-calculated CallArgsToIRArgsMapping to
ConstructAttributeList, thus avoiding traversing expandable arguments there.
No functionality change.
Test Plan: regression test suite
Reviewers: majnemer, rnk
Reviewed By: rnk
Subscribers: cfe-commits, rjmccall, timurrrr
Differential Revision: http://reviews.llvm.org/D4938
llvm-svn: 216251
Summary:
This is a first small step towards passing a generic "Expr" instead of
an ArgBeg/ArgEnd pair into the EmitCallArgs() family of methods. Having the "Expr"
will allow us to get the corresponding FunctionDecl and its ParmVarDecls,
thus allowing us to alter CodeGen depending on the function/parameter
attributes.
No functionality change.
Test Plan: regression test suite
Reviewers: rnk
Reviewed By: rnk
Subscribers: aemerson, cfe-commits
Differential Revision: http://reviews.llvm.org/D4915
llvm-svn: 216214
to instruct the code generator to not enforce a higher alignment
than the given number (of bytes) when accessing memory via an opaque
pointer or reference. Patch reviewed by John McCall (with post-commit
review pending). rdar://16254558
llvm-svn: 214911
This moves some memptr specific code into the generic thunk emission
codepath.
Fixes PR20053.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D4613
llvm-svn: 214004
The target method of the thunk will perform the cleanup. This can't be
tested in 32-bit x86 yet because passing something by value would create
an inalloca, and we refuse to generate broken code for that.
llvm-svn: 213976
This is used to mark the instructions emitted by Clang to implement a
variety of UBSan checks. Generally, we don't want to instrument these
instructions with other sanitizers (like ASan).
Reviewed in http://reviews.llvm.org/D4544
llvm-svn: 213291
Originally committed in r211722, this fixed one case of dtor calls being
emitted without locations (which causes problems for debug info if the
call is then inlined), but it caught only some of the cases.
Instead of trying to re-enable the location before the cleanup, simply
re-enable the location immediately after the unconditional branches in
question using a scoped device to ensure the no-location state doesn't
leak out arbitrarily.
llvm-svn: 212761
Now CodeGenFunction is responsible for looking at sanitizer blacklist
(in CodeGenFunction::StartFunction) and turning off instrumentation,
if necessary.
No functionality change.
llvm-svn: 212501
This patch removes the dead code, and refines the
getEHResumeBlock() slightly.
The CleanupHackLevel was a hack around the old exception
handling intrinsics, which had several issues with the function
inliner.
Since LLVM 3.0, the new landingpad and resume instructions
have been part of LLVM IR. With the new exception handling
mechanism, most of the issues are fixed now. We should
always use these instructions to implement the exception
handling code nowadays, and we don't need the hack any more.
Besides, the `CleanupHackLevel` is a compile-time constant,
so the other cases have been dead code for a while.
llvm-svn: 212097
This reverts commit r211096. Looks like it broke the msvc build:
SemaOpenMP.cpp(140) : error C4519: default template arguments are only allowed on a class template
llvm-svn: 211113
to the normal non-placement ::operator new and ::operator delete, but allow
optimizations like new-expressions and delete-expressions do.
llvm-svn: 210137
trailing elements as a single loop, rather than sometimes emitting a nest of
several loops. This fixes a bug where CodeGen would sometimes try to emit an
expression with the wrong type for the element being initialized. Plus various
other minor cleanups to the IR produced for array new initialization.
llvm-svn: 210079
A few (mostly CodeGen) parts of Clang were tightly coupled to the
AArch64 backend. Now that it's gone, they will not even compile.
I've also deduplicated RUN lines in many of the AArch64 tests. This
might improve "make check-all" time noticably: some of those NEON
tests were monsters.
llvm-svn: 209578
It also adds a simple initial version of codegen for pragma omp simd (it will change in the future to support all the clauses).
Differential revision: http://reviews.llvm.org/D3644
llvm-svn: 209411
This patch implements global named registers in Clang, lowering them to the
newly created intrinsics in LLVM (@llvm.read/write_register). A new type of LValue
had to be created (Register), which just adds support to carry the metadata
node containing the name of the register. Two new methods to emit loads and
stores interoperate with another to emit the named metadata node.
No guarantees are being made and only non-allocatable global variable named
registers are being supported. Local named register support is unchanged.
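Usage sketch (the register name is target-dependent and illustrative):

    register unsigned long current_sp asm("sp");  // global named register
    unsigned long readStackPointer() {
      return current_sp;  // lowered to a call to @llvm.read_register
    }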
llvm-svn: 209149
CapturedStmt was being ignored by instrumentation based profiling, and
its counters attributed to the containing function. Instead, we need
to treat this as a top level entity, like we do with blocks.
llvm-svn: 206231
are not associated with any source lines.
Previously, if the Location of a Decl was empty, EmitFunctionStart would
just keep using CurLoc, which would sometimes be correct (e.g., thunks)
but in other cases would just point to a hilariously random location.
This patch fixes this by completely eliminating all uses of CurLoc from
EmitFunctionStart and rather have clients explicitly pass in a
SourceLocation for the function header and the function body.
rdar://problem/14985269
llvm-svn: 205999
This adds Clang support for the ARM64 backend. There are definitely
still some rough edges, so please bring up any issues you see with
this patch.
As with the LLVM commit though, we think it'll be more useful for
merging with AArch64 from within the tree.
llvm-svn: 205100
This extends the intrinsic lookup table format slightly, and adds
entries for use by the shared ARM/AArch64 definitions. The benefit is
currently smaller than for the SISD intrinsics (there's more custom
code implementing this set), but a few lines are saved and there's
scope for future expansion.
llvm-svn: 201848
This extracts the table-driven intrinsic lookup phase into a separate
function, to be used by EmitCommonNeonBuiltinExpr soon.
It also simplifies the logic used in that lookup, since VectorCastArgN
and ScalarArgN were actually identical.
llvm-svn: 201847
Previously, we made one traversal of the AST prior to codegen to assign
counters to the ASTs and then propagated the count values during codegen. This
patch now adds a separate AST traversal prior to codegen for the
-fprofile-instr-use option to propagate the count values. The counts are then
saved in a map from which they can be retrieved during codegen.
This new approach has several advantages:
1. It gets rid of a lot of extra PGO-related code that had previously been
added to codegen.
2. It fixes a serious bug. My original implementation (which was mailed to the
list but never committed) used 3 counters for every loop. Justin improved it to
move 2 of those counters into the less-frequently executed breaks and continues,
but that turned out to produce wrong count values in some cases. The solution
requires visiting a loop body before the condition so that the count for the
condition properly includes the break and continue counts. Changing codegen to
visit a loop body first would be a fairly invasive change, but with a separate
AST traversal, it is easy to control the order of traversal. I've added a
testcase (provided by Justin) to make sure this works correctly.
3. It reduces the instrumentation overhead, cutting the number of counters for
a loop from 3 to 1. We no longer need dedicated counters for breaks and
continues, since we can just use the propagated count values when visiting
breaks and continues.
To make this work, I needed to make a change to the way we count case
statements, going back to my original approach of not including the fall-through
in the counter values. This was necessary because there isn't always an AST node
that can be used to record the fall-through count. Now case statements are
handled the same as default statements, with the fall-through paths branching
over the counter increments. While I was at it, I also went back to using this
approach for do-loops -- omitting the fall-through count into the loop body
simplifies some of the calculations and makes them behave the same as other
loops. Whenever we start using this instrumentation for coverage, we'll need
to add the fall-through counts into the counter values.
llvm-svn: 201528
properties by fixing shouldBindAsLValue to accept arrays
(like record types) because we always manipulate
them in memory. Patch suggested by John McCall.
// rdar://15610943
llvm-svn: 201428
When a non-trivial parameter is present, clang now gathers up all the
parameters that lack inreg and puts them into a packed struct. MSVC
always aligns each parameter to 4 bytes and no more, so this is a pretty
simple struct to lay out.
On win64, non-trivial records are passed indirectly. Prior to this
change, clang was incorrectly using byval on win64.
I'm able to self-host a working clang with this change and additional
LLVM patches.
Reviewers: rsmith
Differential Revision: http://llvm-reviews.chandlerc.com/D2636
llvm-svn: 200597
I misunderstood the discussion on this. The complexity here is
justified by the malloc overhead it saves.
This reverts commit r199302.
llvm-svn: 199700
Fix a perennial source of confusion in the clang type system: Declarations and
function prototypes have parameters to which arguments are supplied, so calling
these 'arguments' was a stretch even in C mode, let alone C++ where default
arguments, templates and overloading make the distinction important to get
right.
Readability win across the board, especially in the casting, ADL and
overloading implementations which make a lot more sense at a glance now.
Will keep an eye on the builders and update dependent projects shortly.
No functional change.
llvm-svn: 199686
Way back in r129652 we tried to avoid emitting an empty block at -O0
for switch cases that did nothing but break. This led to a poor
debugging experience as reported in PR9796, so we disabled the
optimization for -O0 but left it in for higher optimization levels in
r154420.
Since the whole point of this was to improve -O0, it's silly to keep
the complexity at all.
llvm-svn: 199302
C and C++ don't emit an extra lexical scope for the compound statement
that is the body of an Objective-C method.
rdar://problem/15010825
llvm-svn: 198699
encodes the canonical rules for LLVM's style. I noticed this had drifted
quite a bit when cleaning up LLVM, so wanted to clean up Clang as well.
llvm-svn: 198686
Unlike Itanium's VTTs, the 'most derived' boolean or bitfield is the
last parameter for non-variadic constructors, rather than the second.
For variadic constructors, the 'most derived' parameter comes after the
'this' parameter. This affects constructor calls and constructor decls
in a variety of places.
Reviewers: timurrrr
Differential Revision: http://llvm-reviews.chandlerc.com/D2405
llvm-svn: 197518
Summary:
MSVC destroys arguments in the callee from left to right. Because C++
objects have to be destroyed in the reverse order of construction, Clang
has to construct arguments from right to left and destroy arguments from
left to right.
This patch fixes the ordering by reversing the order of evaluation of
all call arguments under the MS C++ ABI.
Fixes PR18035.
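Observable effect (sketch):

    struct Noisy {
      Noisy(int id);  // assume the constructor has a visible side effect
      ~Noisy();
    };
    void callee(Noisy a, Noisy b);
    void caller() {
      // Under the MS C++ ABI, Noisy(2) is now constructed before Noisy(1),
      // so the callee's left-to-right destruction of 'a' then 'b' preserves
      // reverse order of construction.
      callee(Noisy(1), Noisy(2));
    }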
Reviewers: rsmith
Differential Revision: http://llvm-reviews.chandlerc.com/D2275
llvm-svn: 196402
Instead of storing the vtable offset directly in the function pointer and
doing a branch to check for virtualness at each call site, the MS ABI
generates a thunk for calling the function at a specific vtable offset,
and puts that in the function pointer.
This patch adds support for emitting such thunks. However, it doesn't support
pointers to virtual member functions that are variadic, have an incomplete
aggregate return type or parameter, or are overriding a function in a virtual
base class.
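A sketch of the construct involved:

    struct S {
      virtual int f();
      int g();
    };
    int invoke(S *s, int (S::*mp)()) {
      return (s->*mp)();       // no virtualness branch at the call site
    }
    int (S::*vmp)() = &S::f;   // MS ABI: the pointer now holds a thunk that
                               // loads the vtable slot and calls through it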
Differential Revision: http://llvm-reviews.chandlerc.com/D2104
llvm-svn: 194827
deallocation function (and the corresponding unsized deallocation function has
been declared), emit a weak discardable definition of the function that
forwards to the corresponding unsized deallocation.
This allows a C++ standard library implementation to provide both a sized and
an unsized deallocation function, where the unsized one does not just call the
sized one, for instance by putting both in the same object file within an
archive.
llvm-svn: 194055
This uses function prefix data to store function type information at the
function pointer.
Differential Revision: http://llvm-reviews.chandlerc.com/D1338
llvm-svn: 193058
These IR instructions are undefined when the amount is equal to operand
size, but NEON right shifts support such shifts. Work around that by
emitting a different IR in these cases.
llvm-svn: 191953
The general strategy is to create template versions of the conversion function and static invoker, and then, during template argument deduction of the conversion function, create the corresponding call-operator and static-invoker specializations; when the conversion function is marked referenced, the body of the conversion function is generated using the corresponding static-invoker specialization. CodeGen does something similar: when asked to emit the IR for a specialized static invoker of a generic lambda, it forwards emission to the corresponding call operator.
This patch has been reviewed in person both by Doug and Richard. Richard gave me the LGTM.
A few minor changes:
- Per Richard's request, I added a simple check to gracefully report that captures (init, explicit, or default) have not been added to generic lambdas just yet (instead of an assertion violation).
- I removed a few lines of code that added the call operator's instantiated parameters to the current instantiation scope. Not only did it not handle parameter packs, but it is more relevant in the patch for nested lambdas which will follow this one and fix that problem more comprehensively.
- Doug had commented that the original implementation strategy of using the TypeSourceInfo of the call operator to create the static invoker was flawed and allowed const as a member qualifier to creep into the type of the static invoker. I currently kludge around it, but after my initial discussion with Doug and a follow-up session with Richard, I have added a FIXME so that a more elegant solution, involving a TrivialTypeSourceInfo call followed by the correct wiring of the template parameters to the FunctionProtoTypeLoc, is forthcoming.
Thanks!
llvm-svn: 191634
This reverts commit r189320.
Alexey Samsonov and Dmitry Vyukov presented some arguments for keeping
these around - though it still seems like those tasks could be solved by
a tool just using the symbol table. In a very small number of cases,
thunks may be inlined & debug info might be able to save profilers &
similar tools from misclassifying those cases as part of the caller.
The extra changes here plumb through the VarDecl for various cases to
CodeGenFunction - this provides better fidelity through a few APIs but
generally just causes the CGF::StartFunction to fallback to using the
name of the IR function as the name in the debug info.
The changes to debug-info-global-ctor-dtor.cpp seem like goodness. The
two names that go missing (in favor of only emitting those names as
linkage names) are names that can be demangled - emitting them only as
the linkage name should encourage tools to do just that.
Again, thanks to Dinesh Dwivedi for investigation/work on this issue.
llvm-svn: 189421
CodeGenFunction is run on only one function - a new object is made for
each new function. I would add an assertion/flag to this effect, but
there's an exception: ObjC properties involve emitting helper functions
that are all emitted by the same CodeGenFunction object, so such a check
is not possible/correct.
llvm-svn: 189277
They were mostly copy&paste of each other, move it to CodeGenFunction. Of course
the two implementations have diverged over time; the one in CGExprCXX seems to
be the more modern one so I picked that one and moved it to CGClass which feels
like a better home for it. No intended functionality change.
llvm-svn: 189203
Refactor the underlying code a bit to remove unnecessary calls to
"hasErrorOccurred" & make them consistently at all the entry points to
the IRGen ASTConsumer.
llvm-svn: 188707
Restore it after each argument is emitted. This fixes the scope info for
inlined subroutines inside of function argument expressions. (E.g.,
anything STL).
rdar://problem/12592135
llvm-svn: 187240
This allows clang to use the backend parameter attribute 'returned' when generating 'this'-returning constructors and destructors in ARM and MSVC C++ ABIs.
llvm-svn: 185291
The backend will now use the generic 'returned' attribute to form tail calls where possible, as well as avoid save-restores of 'this' in some cases (specifically the cases that matter for the ARM C++ ABI).
This patch also reverts a prior front-end only partial implementation of these optimizations, since it's no longer required.
llvm-svn: 184205
Introduce CXXStdInitializerListExpr node, representing the implicit
construction of a std::initializer_list<T> object from its underlying array.
The AST representation of such an expression goes from an InitListExpr with a
flag set, to a CXXStdInitializerListExpr containing a MaterializeTemporaryExpr
containing an InitListExpr (possibly wrapped in a CXXBindTemporaryExpr).
This more detailed representation has several advantages, the most important of
which is that the new MaterializeTemporaryExpr allows us to directly model
lifetime extension of the underlying temporary array. Using that, this patch
*drastically* simplifies the IR generation of this construct, provides IR
generation support for nested global initializer_list objects, fixes several
bugs where the destructors for the underlying array would accidentally not get
invoked, and provides constant expression evaluation support for
std::initializer_list objects.
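For example (a sketch of the new AST shape):

    #include <initializer_list>
    std::initializer_list<int> il = {1, 2, 3};
    // initializer: CXXStdInitializerListExpr
    //   -> MaterializeTemporaryExpr      (lifetime-extended backing array)
    //        -> InitListExpr {1, 2, 3}   (possibly in a CXXBindTemporaryExpr)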
llvm-svn: 183872
were lacking ExprWithCleanups nodes in some cases where the new approach to
lifetime extension needed them).
Original commit message:
Rework IR emission for lifetime-extended temporaries. Instead of trying to walk
into the expression and dig out a single lifetime-extended entity and manually
pull its cleanup outside the expression, instead keep a list of the cleanups
which we'll need to emit when we get to the end of the full-expression. Also
emit those cleanups early, as EH-only cleanups, to cover the case that the
full-expression does not terminate normally. This allows IR generation to
properly model temporary lifetime when multiple temporaries are extended by the
same declaration.
We have a pre-existing bug where an exception thrown from a temporary's
destructor does not clean up lifetime-extended temporaries created in the same
expression and extended to automatic storage duration; that is not fixed by
this patch.
llvm-svn: 183859
into the expression and dig out a single lifetime-extended entity and manually
pull its cleanup outside the expression, instead keep a list of the cleanups
which we'll need to emit when we get to the end of the full-expression. Also
emit those cleanups early, as EH-only cleanups, to cover the case that the
full-expression does not terminate normally. This allows IR generation to
properly model temporary lifetime when multiple temporaries are extended by the
same declaration.
We have a pre-existing bug where an exception thrown from a temporary's
destructor does not clean up lifetime-extended temporaries created in the same
expression and extended to automatic storage duration; that is not fixed by
this patch.
llvm-svn: 183721
No functionality change. CGCleanup.cpp provides the implementation for
EHScopeStack, so it seems more consistent to place the class definition
in CGCleanup.h.
This should also help solve a header ordering problem that I have.
llvm-svn: 183631
While we can't yet emit vbtables, this allows us to find virtual bases
of objects constructed in other TUs.
This makes iostream hello world work, since basic_ostream virtually
inherits from basic_ios.
Differential Revision: http://llvm-reviews.chandlerc.com/D795
llvm-svn: 182870
The most common (non-buggy) case are where such objects are used as
return expressions in bool-returning functions or as boolean function
arguments. In those cases I've used (& added if necessary) a named
function to provide the equivalent (or sometimes negative, depending on
convenient wording) test.
DiagnosticBuilder kept its implicit conversion operator owing to the
prevalent use of it in return statements.
One bug was found in ExprConstant.cpp involving a comparison of two
PointerUnions (PointerUnion did not previously have an operator==, so
instead both operands were converted to bool & then compared). A test
is included in test/SemaCXX/constant-expression-cxx1y.cpp for the fix
(adding operator== to PointerUnion in LLVM).
llvm-svn: 181869
EmitCapturedStmt creates a captured struct containing all of the captured
variables, and then emits a call to the outlined function. This is similar in
principle to EmitBlockLiteral.
GenerateCapturedFunction actually produces the outlined function. It is based
on GenerateBlockFunction, but is much simpler. The function type is determined
by the parameters that are in the CapturedDecl.
Some changes have been added to this patch that were reviewed as part of the
serialization patch and moving the parameters to the captured decl.
Differential Revision: http://llvm-reviews.chandlerc.com/D640
llvm-svn: 181536
Un-break the gdb buildbot.
- Use the debug location of the return expression for the cleanup code
if the return expression is trivially evaluatable, regardless of the
number of stop points in the function.
- Ensure that any EH code in the cleanup still gets the line number of
the closing } of the lexical scope.
- Added a testcase with EH in the cleanup.
rdar://problem/13442648
llvm-svn: 181056
a lambda.
Bug #1 is that CGF's CurFuncDecl was "stuck" at lambda invocation
functions. Fix that by generally improving getNonClosureContext
to look through lambdas and captured statements but only report
code contexts, which is generally what's wanted. Audit uses of
CurFuncDecl and getNonClosureAncestor for correctness.
Bug #2 is that lambdas weren't specially mapping 'self' when inside
an ObjC method. Fix that by removing the requirement for that
and using the normal EmitDeclRefLValue path in LoadObjCSelf.
rdar://13800041
llvm-svn: 181000
- Use the debug location of the return expression for the cleanup code
if the return expression is trivially evaluatable, regardless of the
number of stop points in the function.
- Ensure that any EH code in the cleanup still gets the line number of
the closing } of the lexical scope.
- Added a testcase with EH in the cleanup.
rdar://problem/13442648
llvm-svn: 180982
If there is cleanup code, the cleanup code gets the debug location of
the closing '}'. The subsequent ret IR-instruction does not get a
debug location. The return _expression_ will get the debug location
of the return statement.
If the function contains only a single, simple return statement,
the cleanup code may become the first breakpoint in the function.
In this case we set the debug location for the cleanup code
to the location of the return statement.
rdar://problem/13442648
llvm-svn: 180932
Add a CXXDefaultInitExpr, analogous to CXXDefaultArgExpr, and use it both in
CXXCtorInitializers and in InitListExprs to represent a default initializer.
There's an additional complication here: because the default initializer can
refer to the initialized object via its 'this' pointer, we need to make sure
that 'this' points to the right thing within the evaluation.
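A sketch of the complication described above (hypothetical names):

    struct Widget;
    int tag(Widget *);
    struct Widget {
      int id = tag(this);  // default member initializer referring to 'this'
      Widget() {}          // this ctor's use of it is a CXXDefaultInitExpr
    };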
llvm-svn: 179958
Added TBAABaseType and TBAAOffset in LValue. These two fields are initialized to
the actual type and 0, and are updated in EmitLValueForField.
Path-aware TBAA tags are enabled for EmitLoadOfScalar and EmitStoreOfScalar.
Added command line option -struct-path-tbaa.
llvm-svn: 178797
when we actually end a lexical block.
* Added new test for line table / block cleanup.
* Follow-up to r177819 / rdar://problem/13115369
llvm-svn: 178490
to an out-parameter using the indirect-writeback conversion,
and we copied the current value of the variable to the temporary,
make sure that we register an intrinsic use of that value with
the optimizer so that the value won't get released until we have
a chance to retain it.
rdar://13195034
llvm-svn: 177813
For constructors/destructors that return 'this', if there exists a callsite
that returns 'this' and is immediately before the return instruction, make
sure we are using the return value from the callsite.
We don't need to keep 'this' alive through the callsite. It also enables
optimizations in the backend, such as tail call optimization.
Updated from r177211.
rdar://12818789
llvm-svn: 177541
For constructors/destructors that return 'this', if there exists a callsite
that returns 'this' and is immediately before the return instruction, make
sure we are using the return value from the callsite.
We don't need to keep 'this' alive through the callsite. It also enables
optimizations in the backend, such as tail call optimization.
rdar://12818789
llvm-svn: 177211
aggregate types in a profoundly wrong way that has to be
worked around in every call site, to getEvaluationKind,
which classifies and distinguishes between all of these
cases.
Also, normalize the API for loading and storing complexes.
I'm working on a larger patch and wanted to pull these
changes out, but it would have be annoying to detangle
them from each other.
llvm-svn: 176656
calls and declarations.
LLVM has a default CC determined by the target triple. This is
not always the actual default CC for the ABI we've been asked to
target, and so we sometimes find ourselves annotating all user
functions with an explicit calling convention. Since these
calling conventions usually agree for the simple set of argument
types passed to most runtime functions, using the LLVM-default CC
in principle has no effect. However, the LLVM optimizer goes
into histrionics if it sees this kind of formal CC mismatch,
since it has no concept of CC compatibility. Therefore, if this
module happens to define the "runtime" function, or got LTO'ed
with such a definition, we can miscompile; so it's quite
important to get this right.
Defining runtime functions locally is quite common in embedded
applications.
llvm-svn: 176286
bitfield related issues.
The original commit broke Takumi's builder. The bug was caused by bitfield sizes
being determined by their underlying type, rather than the field info. A similar
issue with bitfield alignments showed up on closer testing. Both have been fixed
in this patch.
llvm-svn: 175389
move-constructors and move-assignment operators, use memcpy to copy adjacent
POD members.
Previously, classes with one or more Non-POD members would fall back on
element-wise copies for all members, including POD members. This often
generated a lot of IR. Without padding metadata, it wasn't often possible
for the LLVM optimizers to turn the element-wise copies into a memcpy.
This code hasn't yet received any serious tuning. I didn't see any serious
regressions on a self-hosted clang build, or any of the nightly tests, but
I think it's important to get this out in the wild to get more testing.
Insights, feedback and comments welcome.
Many thanks to David Blaikie, Richard Smith, and especially John McCall for
their help and feedback on this work.
llvm-svn: 174919
r173593 made us a little too eager to associate all code at the end of a
function with the user-written 'return' line. This caused problems with
breakpoints as they'd be set in exception handling code preceeding the
actual non-exception return handling code, leading to the breakpoint never
being hit in non-exceptional execution.
This change restores the pre-r173593 exception handling line information where
the cleanup code is associated with the '}' not the return line.
llvm-svn: 174206
implementation; this is much more in line with the original implementation
(i.e., pre-ubsan) and does not require run-time library support.
The trapping implementation can be invoked using either '-fcatch-undefined-behavior'
or '-fsanitize=undefined-trap -fsanitize-undefined-trap-on-error', with the latter
being preferred. Eventually, the '-fcatch-undefined-behavior' flag will be removed.
llvm-svn: 173848
One of the gotchas (see changes to CodeGenFunction) was due to the fix in
r139416 (for PR10829). This only worked previously because the top level
lexical block would set the location to the end of the function, the debug
location would be updated (as per r139416), the location would be set to
the end of the function again (but that would no-op, since it was the same
as the previous location), then the return instruction would be emitted using
the debug location.
Once the top level lexical block was no longer emitted, the end-of-function
location change was causing the debug loc to be updated, regressing that bug.
llvm-svn: 173593
Title: [PR9027] volatile struct bug: member is not loaded at -O.
This is caused by the last flag passed to @llvm.memcpy being false,
not honoring that the aggregate has at least one 'volatile' data member
(even though the aggregate itself has not been qualified as 'volatile').
As a result, the optimizer optimizes away the memcpy altogether.
Patch reviewed by John McCall (I still need to fix up a test though).
llvm-svn: 173535
uncovered.
This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.
I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.
llvm-svn: 169237
checks to enable. Remove frontend support for -fcatch-undefined-behavior,
-faddress-sanitizer and -fthread-sanitizer now that they don't do anything.
llvm-svn: 167413
combination of a load+objc_release; this is generally better
for tools that try to track why values are retained and
released. Also use objc_storeStrong when copying a block
(again, only at -O0), which requires us to do a preliminary
store of null in order to compensate for objc_storeStrong's
assign semantics.
llvm-svn: 166085
  if (CGM.getModuleDebugInfo())
    DebugInfo = CGM.getModuleDebugInfo();
into a call:
  maybeInitializeDebugInfo();
This is a simplification for a possible future fix of PR13942.
llvm-svn: 166019
This fixes a regression from r162254, the optimizer has problems reasoning
about the smaller memcpy as it's often not safe to widen a store but making it
smaller is.
llvm-svn: 164917
the trap BB out of the individual checks and into a common function, to prepare
for making this code call into a runtime library. Rename the existing EmitCheck
to EmitTypeCheck to clarify it and to move it out of the way of the new
EmitCheck.
llvm-svn: 163451
AsmStmts. This function is only used by GCCAsmStmts, however. Constraints need
to be properly computed before MSAsmStmts can use EmitAsmStmt. No functional
change intended.
llvm-svn: 162776
* when checking that a pointer or reference refers to appropriate storage for a type, also check the alignment and perform a null check
* check that references are bound to appropriate storage
* check that 'this' has appropriate storage in member accesses and member function calls
llvm-svn: 162523
there's something going on there. Remove the unconditional line entry
and only add one if we're emitting cleanups (any other statements
would be handled normally).
Fixes rdar://9199234
llvm-svn: 160033
if we want to ignore a result, the Dest will be null. Otherwise,
we must copy into it. This means we need to ensure a slot when
loading from a volatile l-value.
With all that in place, fix a bug with chained assignments into
__block variables of aggregate type where we were losing insight into
the actual source of the value during the second assignment.
llvm-svn: 159630
The tablegen'd code does the same thing without this egregious duplication.
In my limited testing everything seems to work, however there can be
differences if the clang and llvm builtin definitions don't match.
llvm-svn: 159371
literal helper functions. All helper functions (global
and local) use block_invoke as their prefix. Local literal
helper names are prefixed by their enclosing mangled function
names. Blocks in non-local initializers (e.g. a global variable
or a C++11 field) are prefixed by their mangled variable name.
The discriminator number added to the end of the name starts off
blank (for the first block) and is _<N> (for the (N+2)-th block).
llvm-svn: 159206
getter result type is safe but does not match the property
type, resulting in a spurious warning followed by a crash in
IRGen. // rdar://11515196
llvm-svn: 157641
When enabled, clang generates bounds checks for array and pointer dereferences. Work to follow in LLVM's backend.
OK'ed by Chad; thanks for the review.
llvm-svn: 156431
and only consider using __cxa_atexit in the Itanium logic. The
default logic is to use atexit().
Emit "guarded" initializers in Microsoft mode unconditionally.
This is definitely not correct, but it's closer to correct than
just not emitting the initializer.
Based on a patch by Timur Iskhodzhanov!
llvm-svn: 155894
attached. Since we do not support any attributes which appertain to a statement
(yet), testing of this is necessarily quite minimal.
Patch by Alexander Kornienko!
llvm-svn: 154723
These patches cause us to miscompile and/or reject code with static
function-local variables in an extern-C context. Previously, we were
papering over this as long as the variables are within the same
translation unit, and had not seen any failures in the wild. We still
need a proper fix, which involves mangling static locals inside of an
extern-C block (as GCC already does), but this patch causes pretty
widespread regressions. Firefox, and many other applications no longer
build.
Lots of test cases have been posted to the list in response to this
commit, so there should be no problem reproducing the issues.
llvm-svn: 153768
For i686 targets (e.g., cygwin), I saw "Range must not be empty!" in the verifier.
It produces (i32)[0x80000000:0x80000000) from (uint64_t)[0xFFFFFFFF80000000ULL:0x0000000080000000ULL), for signed i32 on MDNode::Range.
llvm-svn: 153382
track whether the referenced declaration comes from an enclosing
local context. I'm amenable to suggestions about the exact meaning
of this bit.
llvm-svn: 152491
we correctly emit loads of BlockDeclRefExprs even when they
don't qualify as ODR-uses. I think I'm adequately convinced
that BlockDeclRefExpr can die.
llvm-svn: 152479
NSNumber, and boolean literals. This includes both Sema and Codegen support.
Included is also support for new Objective-C container subscripting.
My apologies for the large patch. It was very difficult to break apart.
The patch introduces changes to the driver as well to cause clang to link
in additional runtime support when needed to support the new language features.
Docs are forthcoming to document the implementation and behavior of these features.
llvm-svn: 152137
Note that this transformation has a substantial semantic effect outside of ARC: it gives the converted lambda lifetime semantics similar to a block literal. With ARC, the effect is much less obvious because the lifetime of blocks is already managed.
llvm-svn: 151797
We now generate temporary arrays to back std::initializer_list objects
initialized with braces. The initializer_list is then made to point at
the array. We support both ptr+size and start+end forms, although
the latter is untested.
Array lifetime is correct for temporary std::initializer_lists (e.g.
call arguments) and local variables. It is untested for new expressions
and member initializers.
Things left to do:
Massively increase the amount of testing. I need to write tests for
start+end init lists, temporary objects created as a side effect of
initializing init list objects, new expressions, member initialization,
creation of temporary objects (e.g. std::vector) for initializer lists,
and probably more.
Get lifetime "right" for member initializers and new expressions. Not
that either are very useful.
Implement list-initialization of array new expressions.
llvm-svn: 150803
conversion to function pointer. Rather than having IRgen synthesize
the body of this function, we instead introduce a static member
function "__invoke" with the same signature as the lambda's
operator() in the AST. Sema then generates a body for the conversion
to function pointer which simply returns the address of __invoke. This
approach makes it easier to evaluate a call to the conversion function
as a constant, makes the linkage of the __invoke function follow the
normal rules for member functions, and may make life easier down the
road if we ever want to constexpr'ify some of lambdas.
Note that IR generation is responsible for filling in the body of
__invoke (Sema just adds a dummy body), because the body can't
generally be expressed in C++.
Eli, please review!
llvm-svn: 150783
-fno-objc-arc-exceptions. This will allow the optimizer to perform
optimizations which are only safe under that flag.
This is a part of rdar://10803830.
llvm-svn: 150644
constructor, and that constructor is used to initialize an object of static
storage duration such that all members and bases are initialized by constant
expressions, constant initialization is performed. In this case, the object
can still have a non-trivial destructor, and if it does, we must emit a dynamic
initializer which performs no initialization and instead simply registers that
destructor.
llvm-svn: 150419
consume one or more of their arguments. If not done, this will cause a leak
as the method will not consume the argument when the receiver is null.
// rdar://10444474
llvm-svn: 149184
- Add atomic-to/from-nonatomic cast types
- Emit atomic operations for arithmetic on atomic types
- Emit non-atomic stores for initialisation of atomic types, but atomic stores and loads for every other store / load
- Add a __atomic_init() intrinsic which does a non-atomic store to an _Atomic() type. This is needed for the corresponding C11 stdatomic.h function.
- Enables the relevant __has_feature() checks. The feature isn't 100% complete yet, but it's done enough that we want people testing it.
Still to do:
- Make the arithmetic operations on atomic types (e.g. _Atomic(int) foo = 1; foo++;) use the correct LLVM intrinsic if one exists, not a loop with a cmpxchg.
- Add a signal fence builtin
- Properly set the fenv state in atomic operations on floating point values
- Correctly handle things like _Atomic(_Complex double) which are too large for an atomic cmpxchg on some platforms (this requires working out what 'correctly' means in this context)
- Fix the many remaining corner cases
llvm-svn: 148242
The test includes a FIXME for a related case involving calls; it's a bit more complicated to fix because the RValue class doesn't keep track of alignment.
<rdar://problem/10463337>
llvm-svn: 145862
generic pushDestroy function.
This would reduce the number of useful declarations in
CGTemporaries.cpp to one. Since CodeGenFunction::EmitCXXTemporary
does not deserve its own file, move it to CGCleanup.cpp and delete
CGTemporaries.cpp.
llvm-svn: 145202
need to provide a 'dominating IP' which is guaranteed to
dominate the (de)activation point but which cannot be avoided
along any execution path from the (de)activation point to
the push-point of the cleanup. Using the entry block is
bad mojo.
llvm-svn: 144276
full-expression. Naturally they're inactive before we enter
the block literal expression. This restores the intended
behavior that blocks belong to their enclosing scope.
There's a useful -O0 / compile-time optimization that we're
missing here with activating cleanups following straight-line
code from their inactive beginnings.
llvm-svn: 144268
property references to use a new PseudoObjectExpr
expression which pairs a syntactic form of the expression
with a set of semantic expressions implementing it.
This should significantly reduce the complexity required
elsewhere in the compiler to deal with these kinds of
expressions (e.g. IR generation's special l-value kind,
the static analyzer's Message abstraction), at the lower
cost of specifically dealing with the odd AST structure
of these expressions. It should also greatly simplify
efforts to implement similar language features in the
future, most notably Managed C++'s properties and indexed
properties.
Most of the effort here is in dealing with the various
clients of the AST. I've gone ahead and simplified the
ObjC rewriter's use of properties; other clients, like
IR-gen and the static analyzer, have all the old
complexity *and* all the new complexity, at least
temporarily. Many thanks to Ted for writing and advising
on the necessary changes to the static analyzer.
I've xfailed a small diagnostics regression in the static
analyzer at Ted's request.
llvm-svn: 143867
The OpenCL single precision division operation is only required to
be accurate to 2.5ulp. Annotate the fdiv instruction with metadata
which signals to the backend that an imprecise divide instruction
may be used.
llvm-svn: 143136
This model uses the 'landingpad' instruction, which is pinned to the top of the
landing pad. (A landing pad is defined as the destination of the unwind branch
of an invoke instruction.) All of the information needed to generate the correct
exception handling metadata during code generation is encoded into the
landingpad instruction.
The new 'resume' instruction takes the place of the llvm.eh.resume intrinsic
call. It's lowered in much the same way as the intrinsic is.
llvm-svn: 140049
possible for that to matter right now, but eventually I think we'll
need to unify this better, and then it might. Also, use a more
efficient looping structure.
llvm-svn: 139788
single code path. Use atomic loads and stores where necessary. Load and
store anything of the appropriate size and alignment with primitive
operations instead of going through the call.
llvm-svn: 139580
Use a more portable heuristic for deciding when to emit a single
atomic store; it's possible that I've lost information here, but
I'm not sure how much of the logic before was intentionally arch-specific
and how much was just not quite consistent.
llvm-svn: 139468
emit call results into potentially aliased slots. This allows us
to properly mark indirect return slots as noalias, at the cost
of requiring an extra memcpy when assigning an aggregate call
result into a l-value. It also brings us into compliance with
the x86-64 ABI.
llvm-svn: 138599
hierarchy of delegation and that EH selector values have the same
meaning everywhere in the function instead of being meaningful only
in the context of a specific selector.
This removes the need for routing edges through EH cleanups,
since a cleanup simply always branches to its enclosing scope.
llvm-svn: 137293
- an off-by-one error in emission of irregular array limits for
InitListExprs
- use an EH partial-destruction cleanup within the normal
array-destruction cleanup
- get the branch destinations right for the empty check
Also some refactoring which unfortunately obscures these changes.
llvm-svn: 134890
- Emit default-initialization of arrays that were partially initialized
with initializer lists with a loop, rather than emitting the default
initializer N times;
- support destroying VLAs of non-trivial type, although this is not
yet exposed to users; and
- support the partial destruction of arrays initialized with
initializer lists when an initializer throws an exception.
llvm-svn: 134784
retain/release the temporary object appropriately. Previously, we
would only perform the retain/release operations when the reference
would extend the lifetime of the temporary, but this does the wrong
thing across calls.
llvm-svn: 133620
existence by always threading an edge from the catchall. Not doing
this was previously causing a crash in the very extreme case where
neither the normal cleanup nor the EH catchall was actually reachable:
we would delete the catchall entry block, which would cause us to
delete the entry block of the finally cleanup as well because the
cleanup logic would merge the blocks, which in turn triggered an assert
because later blocks in the finally would still be using values from the
entry. Laziness turns out to be the most elegant solution to the problem.
llvm-svn: 133601
MaterializeTemporaryExpr captures a reference binding to a temporary
value, making explicit that the temporary value (a prvalue) needs to
be materialized into memory so that its address can be used. The
intended AST invariant here is that a reference will always bind to a
glvalue, and MaterializeTemporaryExpr will be used to convert prvalues
into glvalues for that binding to happen. For example, given
const int& r = 1.0;
The initializer of "r" will be a MaterializeTemporaryExpr whose
subexpression is an implicit conversion from the double literal "1.0"
to an integer value.
IR generation benefits most from this new node, since it was
previously guessing (badly) when to materialize temporaries for the
purposes of reference binding. There are likely more refactoring and
cleanups we could perform there, but the introduction of
MaterializeTemporaryExpr fixes PR9565, a case where IR generation
would effectively bind a const reference directly to a bitfield in a
struct. Addresses <rdar://problem/9552231>.
llvm-svn: 133521
they should still be officially __strong for the purposes of errors,
block capture, etc. Make a new bit on variables, isARCPseudoStrong(),
and set this for 'self' and these enumeration-loop variables. Change
the code that was looking for the old patterns to look for this bit,
and change IR generation to find this bit and treat the resulting
variable as __unsafe_unretained for the purposes of init/destroy in
the two places it can come up.
llvm-svn: 133243
Language-design credit goes to a lot of people, but I particularly want
to single out Blaine Garst and Patrick Beard for their contributions.
Compiler implementation credit goes to Argyrios, Doug, Fariborz, and myself,
in no particular order.
llvm-svn: 133103
to be careful to emit landing pads that are always prepared to handle a
cleanup path. This is correct mostly because of the fix to the LLVM
inliner, r132200.
llvm-svn: 132209
As far as I know, this implementation is complete but might be missing a
few optimizations. Exceptions and virtual bases are handled correctly.
Because I'm an optimist, the web page has appropriately been updated. If
I'm wrong, feel free to downgrade its support categories.
llvm-svn: 130642
__block object copy/dispose helpers for C++ objects with those for
different variables with completely different semantics simply because
they happen to both be no more aligned than a pointer.
Found by inspection.
Also, internalize most of the helper generation logic within CGBlocks.cpp,
and refactor it to fit my peculiar aesthetic sense.
llvm-svn: 128618
simplify the logic of initializing function parameters so that we don't need
both a variable declaration and a type in FunctionArgList. This also means
that we need to propagate the CGFunctionInfo down in a lot of places rather
than recalculating it from the FAL. There's more we can do to eliminate
redundancy here, and I've left FIXMEs behind to do it.
llvm-svn: 127314
class and to bind the shared value using OpaqueValueExpr. This fixes an
unnoticed problem with deserialization of these expressions where the
deserialized form would lose the vital pointer-equality trait; or rather,
it fixes it because this patch also does the right thing for deserializing
OVEs.
Change OVEs to not be a "temporary object" in the sense that copy elision is
permitted.
This new representation is not totally unawkward to work with, but I think
that's really part and parcel with the semantics we're modelling here. In
particular, it's much easier to fix things like the copy elision bug and to
make the CFG look right.
I've tried to update the analyzer to deal with this in at least some
obvious cases, and I think we get a much better CFG out, but the printing
of OpaqueValueExprs probably needs some work.
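For context (editor's illustration): in the GNU binary conditional,
the left operand is evaluated once and reused as both the condition
and the result, which the AST now expresses by binding that shared
value to an OpaqueValueExpr.
int pick(int x, int y) {
  return x ?: y;  // GNU extension: like x ? x : y, but x evaluated once
}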
llvm-svn: 125744
LabelDecl and LabelStmt. There is a 1-1 correspondence between the
two, but this simplifies a bunch of code by itself. This is because
labels are the only place where we previously had references to random
other statements, causing grief for AST serialization and other stuff.
This does cause one regression (attr(unused) doesn't silence unused
label warnings) which I'll address next.
This does fix some minor bugs:
1. "The only valid attribute " diagnostic was capitalized.
2. Various diagnostics printed as ''labelname'' instead of 'labelname'
3. This reduces duplication of label checking between functions and blocks.
Review appreciated, particularly for the cindex and template bits.
llvm-svn: 125733
- Have CGM precompute a number of commonly-used types
- Have CGF copy that during initialization instead of recomputing them
- Use TBAA info when initializing a parameter variable
- Refactor the scalar ++/-- code
llvm-svn: 125562
- BlockDeclRefExprs always store VarDecls
- BDREs no longer store copy expressions
- BlockDecls now store a list of captured variables, information about
how they're captured, and a copy expression if necessary
With that in hand, change IR generation to use the captures data in
blocks instead of walking the block independently.
Additionally, optimize block layout by emitting fields in descending
alignment order, with a heuristic for filling in words when alignment
of the end of the block header is insufficient for the most aligned
field.
llvm-svn: 125005
fixing a crash which probably nobody was ever going to see. In doing so,
fix a horrendous number of problems with the conditional-cleanups code.
Also, make conditional cleanups re-use the cleanup's activation variable,
which avoids some unfortunate repetitiveness.
llvm-svn: 124481
I'm separately committing this because it incidentally changes some
block orderings and fixes minor IR issues, like using a phi instead of
an unnecessary alloca.
llvm-svn: 124277
process, perform a number of refactorings:
- Move MiscNameMangler member functions to MangleContext
- Remove GlobalDecl dependency from MangleContext
- Make MangleContext abstract and move Itanium/Microsoft functionality
to their own classes/files
- Implement ASTContext::createMangleContext and have CodeGen use it
No (intended) functionality change.
llvm-svn: 123386
Fix a bug in the emission of complex compound assignment l-values.
Introduce a method to emit an expression whose value isn't relevant.
Make that method evaluate its operand as an l-value if it is one.
Fixes our volatile compliance in C++.
llvm-svn: 120931
struct X {
  X() : au_i1(123) {}
  union {
    int au_i1;
    float au_f1;
  };
};
clang will now deal with au_i1 explicitly as an IndirectFieldDecl.
llvm-svn: 120900
Also, move the l-value emission code into CGObjC.cpp and teach it, for
completeness, to store away self for a super send.
Also, inline the super cases for property gets and sets and make them
use the correct result type for implicit getter/setter calls.
llvm-svn: 120887
when an initializer is variable (I handled the constant case in a previous
patch). This has three pieces:
1. Enhance AggValueSlot to have a 'isZeroed' bit to tell CGExprAgg that
the memory being stored into has previously been memset to zero.
2. Teach CGExprAgg to not emit stores of zero to isZeroed memory.
3. Teach CodeGenFunction::EmitAggExpr to scan initializers to determine
whether it is profitable to emit a memset plus individual stores rather
than individual stores for everything.
The heuristic used is that a global has to be more than 16 bytes and
has to be 3/4 zero to be a candidate for this xform. The two testcases
are illustrative of the scenarios this catches. We now codegen test9 into:
call void @llvm.memset.p0i8.i64(i8* %0, i8 0, i64 400, i32 4, i1 false)
%.array = getelementptr inbounds [100 x i32]* %Arr, i32 0, i32 0
%tmp = load i32* %X.addr, align 4
store i32 %tmp, i32* %.array
and test10 into:
call void @llvm.memset.p0i8.i64(i8* %0, i8 0, i64 392, i32 8, i1 false)
%tmp = getelementptr inbounds %struct.b* %S, i32 0, i32 0
%tmp1 = getelementptr inbounds %struct.a* %tmp, i32 0, i32 0
%tmp2 = load i32* %X.addr, align 4
store i32 %tmp2, i32* %tmp1, align 4
%tmp5 = getelementptr inbounds %struct.b* %S, i32 0, i32 3
%tmp10 = getelementptr inbounds %struct.a* %tmp5, i32 0, i32 4
%tmp11 = load i32* %X.addr, align 4
store i32 %tmp11, i32* %tmp10, align 4
Previously we produced 99 stores of zero for test9 and also tons for test10.
This xform should substantially speed up -O0 builds when it kicks in,
as well as reducing code size and optimizer heartburn on insane cases.
This resolves PR279.
llvm-svn: 120692
assignment to volatiles in C. This in effect reverts some of mjs's
work in and around r72572. Basically, the C++ standard is quite
clear, except that it lies about volatile behavior approximating
C's, whereas the C standard is almost actively misleading.
llvm-svn: 119344
data members by delaying the emission of the initializer until after
linkage and visibility have been set on the global. Also, don't
emit a guard unless the variable actually ends up with vague linkage,
and don't use thread-safe statics in any case.
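An editor's sketch of the distinction (hypothetical names):
int init() { return 42; }
template <typename T> struct Holder { static int x; };
// The instantiated member has vague linkage, so its dynamic
// initializer may be emitted in several translation units and needs
// a guard.
template <typename T> int Holder<T>::x = init();
// A strong-linkage global is initialized exactly once: no guard.
int plain = init();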
llvm-svn: 118336
__builtin_ia32_vec_init_v8qi
__builtin_ia32_vec_init_v4hi
__builtin_ia32_vec_init_v2si
They are lowered to bitcasts. (These are already tested by the gcc testsuite.)
<rdar://problem/8529957>
llvm-svn: 116147
one of them) was causing a series of failures:
http://google1.osuosl.org:8011/builders/clang-x86_64-darwin10-selfhost/builds/4518
Reverted r114929, r114925, r114924, and r114921 via reverse merges
(svn merge -c -<rev> https://llvm.org/svn/llvm-project/cfe/trunk),
touching include/clang/Sema/Sema.h, include/clang/AST/DeclCXX.h,
lib/Sema/SemaDeclCXX.cpp, lib/Sema/SemaTemplateInstantiateDecl.cpp,
lib/Sema/SemaDecl.cpp, lib/Sema/SemaTemplateInstantiate.cpp,
lib/AST/DeclCXX.cpp, and lib/AST/ASTContext.cpp.
llvm-svn: 114933
the cleanup might not be dominated by the allocation code.
In this case, we have to store aside all the delete arguments
in case we need them later. There's room for optimization here
in cases where we end up not actually needing the cleanup in
different branches (or being able to pop it after the
initialization code).
Also make sure we only call this operator delete along the path
where we actually allocated something.
Fixes rdar://problem/8439196.
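A sketch of the problem case (editor's example): when the 'new' is
conditionally evaluated, the delete-on-exception cleanup is not
dominated by the allocation, so the delete arguments are stored
aside, and operator delete runs only on the branch that actually
allocated.
struct T { T() {} };  // a constructor that could, in general, throw
T *maybeMake(bool b) {
  return b ? new T : nullptr;  // cleanup is live only on the new branch
}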
llvm-svn: 114145
slot. The easiest way to do that was to bundle up the information
we care about for aggregate slots into a new structure which demands
that its creators at least consider the question.
I could probably be convinced that the ObjC 'needs GC' bit should
be rolled into this structure.
Implement generalized copy elision. The main obstacle here is that
IR-generation must be much more careful about making sure that exactly
llvm-svn: 113962
implement ARM array cookies. Also fix a few unfortunate bugs:
- throwing dtors in deletes prevented the allocation from being deleted
- adding the cookie to the new[] size was not being considered for
overflow (and, more seriously, was screwing up the earlier checks)
- deleting an array via a pointer to array of class type was not
causing any destructors to be run and was passing the unadjusted
pointer to the deallocator
- lots of address-space problems, in case anyone wants to support
free store in a variant address space :)
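For reference, what the cookie buys (editor's sketch):
struct S { ~S() {} };  // non-trivial destructor => cookie required
void roundTrip() {
  // new[] over-allocates a small header (the cookie) holding the
  // element count; delete[] reads it back to run three destructors
  // and passes the unadjusted pointer to the deallocation function.
  S *p = new S[3];
  delete[] p;
}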
llvm-svn: 112814
update callers as best I can.
- This is a work in progress, our alignment handling is very horrible / sketchy -- I am just aiming for monotonic improvement.
- Serious review appreciated.
llvm-svn: 111707
This takes some trickery since CastExpr has subclasses (and indeed,
is abstract).
Also, smoosh the CastKind into the bitfield from Expr.
Drops two words of storage from Expr in the common case of expressions
which don't need inheritance paths. Avoids a separate allocation and
another word of overhead in cases needing inheritance paths. Also has
the advantage of not leaking memory, since destructors for AST nodes are
never run.
llvm-svn: 110507
enclosing normal cleanup, not the top of the EH stack. I'm *really*
surprised this hasn't been causing more problems.
Fixes rdar://problem/8231514.
llvm-svn: 109569
initializer of (). Make sure to use a simple memset() when we can, or
fall back to generating a loop when a simple memset will not
suffice. Fixes <rdar://problem/8212208>, a regression due to my work
in r107857.
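The two paths, in an editor's sketch:
struct Pod { int x; };
struct Ctor { Ctor() {} int x; };
void demo() {
  delete[] new Pod[8]();   // trivial: one memset zero-fills the array
  delete[] new Ctor[8]();  // non-trivial: a loop calls Ctor per element
}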
llvm-svn: 108977
which generates more efficient and more obviously conformant
code. We now test for overflow of the multiply then force
the result to -1 if so. On X86, this generates nice code
like this:
__Z4testl: ## @_Z4testl
## BB#0: ## %entry
subl $12, %esp
movl $4, %eax
mull 16(%esp)
testl %edx, %edx
movl $-1, %ecx
cmovel %eax, %ecx
movl %ecx, (%esp)
call __Znam
addl $12, %esp
ret
llvm-svn: 108927
causing clang to compile this code into something that correctly throws a
length error, fixing a potential integer overflow security attack:
void *test(long N) {
  return new int[N];
}
int main() {
  test(1L << 62);
}
We do this even when exceptions are disabled, because it is better for the
code to abort than for the attack to succeed.
This is heavily based on a patch that Fariborz wrote.
llvm-svn: 108915
mostly in avoiding unnecessary work at compile time but also in producing more
sensible block orderings.
Move the destructor cleanups for local variables over to use lazy cleanups.
Eventually all cleanups will do this; for now we have some awkward code
duplication.
Tell IR generation just to never produce landing pads in -fno-exceptions.
This is a much more comprehensive solution to a problem which previously was
half-solved by checks in most cleanup-generation spots.
llvm-svn: 108270
emit metadata associating allocas and global values with a Decl*. This feature
is controlled by an option that (intentionally) cannot be enabled on the command
line.
To use this feature, simply set
CodeGenOptions.EmitDeclMetadata = true;
and then interpret the completely underspecified metadata. :)
llvm-svn: 107739
self-host. Hopefully these results hold up on different platforms.
I tried to keep the GNU ObjC runtime happy, but it's hard for me to test.
Reimplement how clang generates IR for exceptions. Instead of creating new
invoke destinations which sequentially chain to the previous destination,
push a more semantic representation of *why* we need the cleanup/catch/filter
behavior, then collect that information into a single landing pad upon request.
Also reorganizes how normal cleanups (i.e. cleanups triggered by non-exceptional
control flow) are generated, since it's actually fairly closely tied in with
the former. Remove the need to track which cleanup scope a block is associated
with.
Document a lot of previously poorly-understood (by me, at least) behavior.
The new framework implements the Horrible Hack (tm), which requires every
landing pad to have a catch-all so that inlining will work. Clang no longer
requires the Horrible Hack just to make exceptions flow correctly within
a function, however. The HH is an unfortunate requirement of LLVM's EH IR.
llvm-svn: 107631
alloca for an argument. Make sure the argument gets the proper
decl alignment, which may be different from the type alignment.
This fixes PR7567.
llvm-svn: 107627
have CGF create and make accessible standard int32, int64, and
intptr types. This fixes a ton of 80-column violations
introduced by LLVMContextification and cleans up stuff a lot.
llvm-svn: 106977
load/store nonsense in the epilog. For example, for:
int foo(int X) {
  int A[100];
  return A[X];
}
we used to generate:
%arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
%tmp1 = load i32* %arrayidx ; <i32> [#uses=1]
store i32 %tmp1, i32* %retval
%0 = load i32* %retval ; <i32> [#uses=1]
ret i32 %0
}
which codegen'd to this code:
_foo: ## @foo
## BB#0: ## %entry
subq $408, %rsp ## imm = 0x198
movl %edi, 400(%rsp)
movl 400(%rsp), %edi
movslq %edi, %rax
movl (%rsp,%rax,4), %edi
movl %edi, 404(%rsp)
movl 404(%rsp), %eax
addq $408, %rsp ## imm = 0x198
ret
Now we generate:
%arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom ; <i32*> [#uses=1]
%tmp1 = load i32* %arrayidx ; <i32> [#uses=1]
ret i32 %tmp1
}
and:
_foo: ## @foo
## BB#0: ## %entry
subq $408, %rsp ## imm = 0x198
movl %edi, 404(%rsp)
movl 404(%rsp), %edi
movslq %edi, %rax
movl (%rsp,%rax,4), %eax
addq $408, %rsp ## imm = 0x198
ret
This actually does matter, cutting out 2000 lines of IR from CGStmt.ll
for example.
Another interesting effect is that altivec.h functions which are dead
now get dce'd by the inliner. Hence all the changes to
builtins-ppc-altivec.c to ensure the calls aren't dead.
llvm-svn: 106970
'self' variable arising from uses of the 'super' keyword. Also reorganize
some code so that BlockInfo (now CGBlockInfo) can be opaque outside of
CGBlocks.cpp.
Fixes rdar://problem/8010633.
llvm-svn: 104312
__cxa_guard_abort along the exceptional edge into (in effect) a nested
"try" that rethrows after aborting. Fixes PR7144 and the remaining
Boost.ProgramOptions failures, along with the regressions that r103880
caused.
The crucial difference between this and r103880 is that we now follow
LLVM's little dance with the llvm.eh.exception and llvm.eh.selector
calls, then use _Unwind_Resume_or_Rethrow to rethrow.
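The guarded pattern in question (editor's sketch):
struct W { W(); };  // the constructor may throw on first use
W &instance() {
  // guard acquire; run W(); guard release on success, or
  // __cxa_guard_abort on the exceptional edge before rethrowing.
  static W w;
  return w;
}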
llvm-svn: 103892
__cxa_guard_abort along the exceptional edge into (in effect) a nested
"try" that rethrows after aborting. Fixes PR7144 and the remaining
Boost.ProgramOptions failures.
llvm-svn: 103880
implicitly-generated copy constructor. Previously, Sema would perform
some checking and instantiation to determine which copy constructors,
etc., would be called, then CodeGen would attempt to figure out which
copy constructor to call... but would get it wrong, or poke at an
uninstantiated default argument, or fail in other ways.
The new scheme is similar to what we now do for the implicit
copy-assignment operator, where Sema performs all of the semantic
analysis and builds specific ASTs that look similar to the ASTs we'd
get from explicitly writing the copy constructor, so that CodeGen need
only do a direct translation.
However, it's not quite that simple because one cannot explicitly write
elementwise copy-construction of an array. So, I've extended
CXXBaseOrMemberInitializer to contain a list of indexing variables
used to copy-construct the elements. For example, if we have:
struct A { A(const A&); };
struct B {
  A array[2][3];
};
then we generate an implicit copy constructor for B that looks
something like this:
B::B(const B &other) : array[i0][i1](other.array[i0][i1]) { }
CodeGen will loop over the invented variables i0 and i1 to visit all
elements in the array, so that each element in the destination array
will be copy-constructed from the corresponding element in the source
array. Of course, if we're dealing with arrays of scalars or class
types with trivial copy constructors, we just generate a
memcpy rather than a loop.
Fixes PR6928, PR5989, and PR6887. Boost.Regex now compiles and passes
all of its regression tests.
Conspicuously missing from this patch is handling for the exceptional
case, where we need to destruct those objects that we have
constructed. I'll address that case separately.
llvm-svn: 103079
assignment operators.
Previously, Sema provided type-checking and template instantiation for
copy assignment operators, then CodeGen would synthesize the actual
body of the copy constructor. Unfortunately, the two were not in sync,
and CodeGen might pick a copy-assignment operator that is different
from what Sema chose, leading to strange failures, e.g., link-time
failures when CodeGen called a copy-assignment operator that was never
instantiated, run-time failures when copy-assignment operators were
overloaded for const/non-const references and the wrong one was
picked, and run-time failures when by-value copy-assignment operators
did not have their arguments properly copy-initialized.
This implementation synthesizes the implicitly-defined copy assignment
operator bodies in Sema, so that the resulting ASTs encode exactly
what CodeGen needs to do; there is no longer any special code in
CodeGen to synthesize copy-assignment operators. The synthesis of the
body is relatively simple, and we generate one of four different
kinds of copy statements for each base or member (a short sketch
follows the list):
- For a class subobject, call the appropriate copy-assignment
operator, after overload resolution has determined what that is.
- For an array of scalar types or an array of class types that have
trivial copy assignment operators, construct a call to
__builtin_memcpy.
- For an array of class types with non-trivial copy assignment
operators, synthesize a (possibly nested!) for loop whose inner
statement calls the copy-assignment operator.
- For a scalar type, use built-in assignment.
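The four cases in one class (an editor's sketch, not from the commit):
struct Inner { Inner &operator=(const Inner &) { return *this; } };
struct Outer {
  Inner sub;         // subobject: call Inner::operator=
  int trivial[16];   // trivial array: one __builtin_memcpy
  Inner objects[4];  // non-trivial array: a synthesized for loop
  double d;          // scalar: built-in assignment
};
// Outer's implicit operator= body is now synthesized entirely in Sema.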
This patch fixes at least a few test cases in Boost.Spirit that were
failing because CodeGen picked the wrong copy-assignment operator
(leading to link-time failures), and I suspect a number of undiagnosed
problems will also go away with this change.
Some of the diagnostics we had previously have gotten worse with this
change, since we're going through generic code for our
type-checking. I will improve this in a subsequent patch.
llvm-svn: 102853