builds introduced in r134972:
lib/CodeGen/CGExpr.cpp:1294:7: error: no matching function for call to 'EmitBitCastOfLValueToProperType'
lib/CodeGen/CGExpr.cpp:1278:1: note: candidate function not viable: no known conversion from 'CGBuilderTy' (aka 'IRBuilder<false>') to 'llvm::IRBuilder<> &' for 1st argument
This fixes the issue by passing CodeGenFunction on down, and using its
builder directly rather than passing just the builder down.
This may not be the best or cleanest fix; Chris, please review. It at
least fixes the builds.
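A minimal sketch of the shape of the fix (the exact signature is an
assumption; the names are taken from the error above):
static llvm::Value *
EmitBitCastOfLValueToProperType(CodeGenFunction &CGF,
                                llvm::Value *V, llvm::Type *IRType) {
  // CGF.Builder is a CGBuilderTy, so no implicit conversion to
  // llvm::IRBuilder<>& is needed anymore.
  return CGF.Builder.CreateBitCast(V, IRType);
}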
llvm-svn: 134977
uncompleted struct types. We now do what llvm-gcc does and compile
them into [0 x i8]. If the type is later completed, we make sure that
it is appropriately cast.
We compile the terrible example to something like this now:
%struct.A = type { i32, i32, i32 }
@g = external global [0 x i8]
define void @_Z1fv() nounwind {
entry:
call void @_Z3fooP1A(%struct.A* bitcast ([0 x i8]* @g to %struct.A*))
ret void
}
declare void @_Z3fooP1A(%struct.A*)
define %struct.A* @_Z2f2v() nounwind {
entry:
ret %struct.A* getelementptr inbounds ([0 x %struct.A]* bitcast ([0 x i8]* @g to [0 x %struct.A]*), i32 0, i64 1)
}
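A plausible C++ source for this IR (a reconstruction; the original
"terrible example" is not shown in this log):
struct A;
extern A g;                 // incomplete type: emitted as [0 x i8]
void foo(A*);
void f() { foo(&g); }       // &g is bitcast to %struct.A*
struct A { int x, y, z; };  // type completed after the first use
A *f2() { return &g + 1; }  // arithmetic on the now-complete type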
llvm-svn: 134972
stuff like this:
typedef struct {
int x, y, z;
} foo_t;
foo_t g;
into:
%"struct.<anonymous>" = type { i32, i32, i32 }
we now get:
%struct.foo_t = type { i32, i32, i32 }
This doesn't change the behavior of the compiler, but makes the IR much easier to read.
llvm-svn: 134969
an assert on Darwin llvm-gcc builds.
Assertion failed: (castIsValid(op, S, Ty) && "Invalid cast!"), function Create, file /Users/buildslave/zorg/buildbot/smooshlab/slave-0.8/build.llvm-gcc-i386-darwin9-RA/llvm.src/lib/VMCore/Instructions.cpp, line 2067.
etc.
http://smooshlab.apple.com:8013/builders/llvm-gcc-i386-darwin9-RA/builds/2354
--- Reverse-merging r134888 into '.':
U lib/CodeGen/CodeGenModule.cpp
llvm-svn: 134950
- an off-by-one error in emission of irregular array limits for
InitListExprs
- use an EH partial-destruction cleanup within the normal
array-destruction cleanup
- get the branch destinations right for the empty check
Also some refactoring which unfortunately obscures these changes.
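An illustrative case for the first fix (hypothetical, not the
original test): inner initializer lists shorter than the row bound
give each row an "irregular" limit of explicitly-initialized elements.
struct A { A(int); ~A(); };
A grid[2][3] = { { A(1), A(2) }, { A(3) } };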
llvm-svn: 134890
is called whenever a tag type is completed. We previously used that
as the sign to layout the codegen representation for the tag type,
which worked but meant that we laid out *every* completed type, whether
it was used or not.
Now we just lay out the type if we've already seen it some other way.
This means that we lay out types we've used but haven't seen a body
for, but we don't lay out tons of stuff that no one cares about.
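A hypothetical illustration of the policy:
struct Unused { int a; };  // completed but never referenced: not laid out
struct Used;
struct Used *p;            // referenced while incomplete...
struct Used { int b; };    // ...so it is laid out when completed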
llvm-svn: 134866
caused us to skip laying out a function accurately. If
so, flush the type cache for both the function and struct
case to ensure that any pointers to the functions get
recomputed. This is overconservative, but with this patch
clang can build itself again.
llvm-svn: 134863
conservative when converting a function type to IR when in a "pointer within
a struct" context. This has the unfortunate side effect of compiling all
function pointers inside of structs into "{}*" which, though correct, is
ugly. This has the positive side effect of being correct, and it is pretty
straightforward to improve on this.
llvm-svn: 134861
it is a predicate, not an action. Change the return type to be a bool,
not the incomplete member. Enhance it to detect the recursive compilation
case, allowing us to compile Eli's testcase on llvmdev:
struct T {
struct T (*p)(void);
} t;
into:
%struct.T = type { {}* }
@t = common global %struct.T zeroinitializer, align 8
llvm-svn: 134853
expecting so much concentrated oddity on what seemed like a
trivial feature. Thanks to François Pichet for doing the
MSVC legwork here.
llvm-svn: 134813
- Emit default-initialization of arrays that were partially initialized
with initializer lists using a loop, rather than emitting the default
initializer N times;
- support destroying VLAs of non-trivial type, although this is not
yet exposed to users; and
- support the partial destruction of arrays initialized with
initializer lists when an initializer throws an exception.
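A hypothetical illustration of the first and third points:
struct S { S(); S(int); ~S(); };
void f() {
  // s[2..9] are default-initialized with a loop rather than eight
  // unrolled calls; if S(2) throws, the constructed s[0] is destroyed.
  S s[10] = { S(1), S(2) };
}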
llvm-svn: 134784
Note that because we don't usually touch the MMX registers anyway, all -mno-mmx needs to do is tweak the x86-32 calling convention a little for vectors that look like MMX vectors, and prevent the definition of __MMX__.
clang doesn't actually stop the user from using MMX inline asm operands or MMX builtins in -mno-mmx mode; as a QOI issue, it would be nice to diagnose, but I doubt it really matters much.
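For instance (a hypothetical example), a type like this on x86-32 is
affected only in how it is passed and returned:
typedef int v2si __attribute__((vector_size(8)));  // looks like an MMX vector
v2si f(v2si a, v2si b) { return a + b; }           // CC tweaked under -mno-mmx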
<rdar://problem/9694837>
llvm-svn: 134770
the normal case.
Before, for this:
$ cat t.c
int test(int x) { return x * 2; }
We would get this:
addl %edi, %edi
jno LBB0_2
## BB#1: ## %overflow
ud2
LBB0_2: ## %nooverflow
movl %edi, %eax
popq %rbp
ret
Now we get this:
addl %edi, %edi
jo LBB0_2
## BB#1: ## %nooverflow
movl %edi, %eax
popq %rbp
ret
LBB0_2: ## %overflow
ud2
<rdar://problem/8283919>
llvm-svn: 134642
where we have an immediate need of a retained value.
As an exception, don't do this when the call is made as the immediate
operand of a __bridge retain. This is more in the way of a workaround
than an actual guarantee, so it's acceptable to be brittle here.
rdar://problem/9504800
llvm-svn: 134605
structure to hold inferred information, then propagate each individual
bit down to -cc1. Separate the bits of "supports weak" and "has a native
ARC runtime"; make the latter a CodeGenOption.
The tool chain is still driving this decision, because it's the place that
has the required deployment target information on Darwin, but at least it's
better-factored now.
llvm-svn: 134453
trivial default constructors. This generated-code regression was
caused by r131796, which had simplified the handling of default
initialization in Sema. Fixes <rdar://problem/9694300>.
llvm-svn: 134260
The fixed implementation is compatible with the implementation both gcc and llvm-gcc use.
rdar://9686430 . (This is the issue that was reported in the thread "[LLVMdev] Segfault calling LLVM libs from a clang-compiled executable".)
llvm-svn: 134059
arithmetic on a VLA as 'nsw', per discussion with djg, and
implement pointer arithmetic (other than array accesses) and
pointer subtraction for VLA types.
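An illustrative example of what is now supported:
void f(int n, int (*p)[n]) {
  int (*q)[n] = p + 1;  // VLA pointer arithmetic; index math marked 'nsw'
  long d = q - p;       // VLA pointer subtraction
  (void)d;
}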
llvm-svn: 133855
Sorry, this was a bad idea. Within clang these builtins are in a separate
"ARM" namespace, but the actual builtin names should clearly distinguish tha
they are target specific.
llvm-svn: 133833
objects, so that we steal the retain count of a temporary __strong
pointer (zeroing out that temporary), eliding a retain/release
pair. Addresses <rdar://problem/9364932>.
llvm-svn: 133621
retain/release the temporary object appropriately. Previously, we
would only perform the retain/release operations when the reference
would extend the lifetime of the temporary, but this does the wrong
thing across calls.
llvm-svn: 133620
existence by always threading an edge from the catchall. Not doing
this was previously causing a crash in the very extreme case where
neither the normal cleanup nor the EH catchall was actually reachable:
we would delete the catchall entry block, which would cause us to
delete the entry block of the finally cleanup as well because the
cleanup logic would merge the blocks, which in turn triggered an assert
because later blocks in the finally would still be using values from the
entry. Laziness turns out to be the most elegant solution to the problem.
llvm-svn: 133601
- Changes bit-field access policy to try to use (aligned) register-sized accesses.
The idea here is that by using larger accesses we expose more coalescing
potential to the backend when we have situations like adjacent bit-fields in the
same structure (which is common), and that the backend should be smart enough to
narrow the accesses down when no coalescing is done or when it is shown not to
be profitable.
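A plausible t.c consistent with the assembly below (a reconstruction;
the original file is not part of this log):
struct s0 {
  unsigned a0 : 7, b0 : 1, a1 : 7, b1 : 1,
           a2 : 7, b2 : 1, a3 : 7, b3 : 1;
};
void f0(struct s0 *p) {
  p->a0 = 1; p->a1 = 1; p->a2 = 1; p->a3 = 1;
}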
--
$ clang -m32 -O3 -S -o - t.c
_f0: ## @f0
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %eax
movb (%eax), %cl
andb $-128, %cl
orb $1, %cl
movb %cl, (%eax)
movb 1(%eax), %cl
andb $-128, %cl
orb $1, %cl
movb %cl, 1(%eax)
movb 2(%eax), %cl
andb $-128, %cl
orb $1, %cl
movb %cl, 2(%eax)
movb 3(%eax), %cl
andb $-128, %cl
orb $1, %cl
movb %cl, 3(%eax)
popl %ebp
ret
$ clang -m32 -O3 -S -o - t.c -Xclang -fuse-register-sized-bitfield-access
_f0: ## @f0
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %eax
movl $-2139062144, %ecx ## imm = 0xFFFFFFFF80808080
andl (%eax), %ecx
orl $16843009, %ecx ## imm = 0x1010101
movl %ecx, (%eax)
popl %ebp
ret
--
llvm-svn: 133532
MaterializeTemporaryExpr captures a reference binding to a temporary
value, making explicit that the temporary value (a prvalue) needs to
be materialized into memory so that its address can be used. The
intended AST invariant here is that a reference will always bind to a
glvalue, and MaterializeTemporaryExpr will be used to convert prvalues
into glvalues for that binding to happen. For example, given
const int& r = 1.0;
The initializer of "r" will be a MaterializeTemporaryExpr whose
subexpression is an implicit conversion from the double literal "1.0"
to an integer value.
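The initializer's AST then has roughly this shape (a simplified
sketch; the exact dump format may differ):
MaterializeTemporaryExpr 'const int' lvalue
`-ImplicitCastExpr 'int' <FloatingToIntegral>
  `-FloatingLiteral 'double' 1.0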
IR generation benefits most from this new node, since it was
previously guessing (badly) when to materialize temporaries for the
purposes of reference binding. There are likely more refactoring and
cleanups we could perform there, but the introduction of
MaterializeTemporaryExpr fixes PR9565, a case where IR generation
would effectively bind a const reference directly to a bitfield in a
struct. Addresses <rdar://problem/9552231>.
llvm-svn: 133521
an assembly file it worked correctly, while for a .c file it would give
an error about how --noexecstack is not a supported argument to -Wa.
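Illustrative invocations (hypothetical file names):
$ clang -c t.s -Wa,--noexecstack    # assembly input: already worked
$ clang -c t.c -Wa,--noexecstack    # C input: formerly rejected, now accepted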
llvm-svn: 133489
llvm-gcc treats an extern declaration with weak_import
(or follow up) as an actual definition. Make clang follow this behavior.
// rdar://9538608
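For reference, such a declaration looks like this (illustrative):
extern int x __attribute__((weak_import));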
llvm-svn: 133450
Change PHINodes to store simple pointers to their incoming basic blocks,
instead of full-blown Uses.
Note that this loses an optimization in SplitCriticalEdge(), because we
can no longer walk the use list of a BasicBlock to find phi nodes. See
the comment I removed starting "However, the foreach loop is slow for
blocks with lots of predecessors".
Extend replaceAllUsesWith() on a BasicBlock to also update any phi
nodes in the block's successors. This mimics what would have happened
when PHINodes were proper Users of their incoming blocks. (Note that
this only works if OldBB->replaceAllUsesWith(NewBB) is called when
OldBB still has a terminator instruction, so it still has some
successors.)
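A sketch of the replaceAllUsesWith() extension (an assumption, not the
verbatim implementation; the helper name is hypothetical):
void replaceSuccessorPhis(BasicBlock *OldBB, BasicBlock *NewBB) {
  // Phi nodes no longer hold Uses of their incoming blocks, so walk
  // the successors (OldBB must still have a terminator) and rewrite
  // the incoming-block pointers by hand.
  for (succ_iterator SI = succ_begin(OldBB), SE = succ_end(OldBB);
       SI != SE; ++SI)
    for (BasicBlock::iterator I = (*SI)->begin();
         PHINode *PN = dyn_cast<PHINode>(I); ++I)
      for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
        if (PN->getIncomingBlock(i) == OldBB)
          PN->setIncomingBlock(i, NewBB);
}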
llvm-svn: 133435
ConvertType on InitListExprs as they are being converted. This is
needed for a forthcoming patch, and improves the IR generated anyway
(see additional type names in testcases).
This patch also converts a bunch of std::vectors in CGObjCMac to use
C arrays. There are a ton more that should be converted as well.
llvm-svn: 133413
separate aggregate temporary and then memcpy it over to the
destination. This fixes a regression I introduced with r133235, where
the compound literal on the RHS of an assignment makes use of the
structure on the LHS of the assignment.
I'm deeply suspicious of AggExprEmitter::VisitBinAssign()'s
optimization where it emits the RHS of an aggregate assignment
directly into the LHS lvalue without checking whether there is any
aliasing between the LHS/RHS. However, I'm not in a position to
revisit this now.
Big thanks to Eli for finding the regression!
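The kind of case that regressed (illustrative):
struct S { int x, y; };
void f(struct S *s) {
  // The RHS compound literal reads the LHS, so emitting it directly
  // into *s is wrong; it now goes through a temporary plus memcpy.
  *s = (struct S){ s->y, s->x };
}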
llvm-svn: 133261
they should still be officially __strong for the purposes of errors,
block capture, etc. Make a new bit on variables, isARCPseudoStrong(),
and set this for 'self' and these enumeration-loop variables. Change
the code that was looking for the old patterns to look for this bit,
and change IR generation to find this bit and treat the resulting
variable as __unsafe_unretained for the purposes of init/destroy in
the two places it can come up.
llvm-svn: 133243
C++, which means:
- binding the temporary as needed in Sema, so that we generate the
appropriate call to the destructor, and
- emitting the compound literal into the appropriate location for
the aggregate, rather than trying to emit it as a temporary and
memcpy() it.
Fixes PR10138 / <rdar://problem/9615901>.
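An illustrative example (the actual testcase is assumed):
struct S { int a; ~S(); };  // aggregate with a destructor
void g(const S &);
void f() {
  g((S){ 1 });  // GNU extension in C++; the temporary is bound so ~S() runs
}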
llvm-svn: 133235