Commit Graph

114 Commits

Author SHA1 Message Date
Bill Wendling ec9d2633f1 When we adjust the inline asm type, we need to take into account an early
clobber with the 'y' constraint. Otherwise, we get the wrong return type and an
assertion failure, because a '<1 x i64>' vector type was created instead of the
x86_mmx type.

llvm-svn: 127185
2011-03-07 22:47:14 +00:00
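
A minimal GNU C++ sketch of the situation this commit describes: an MMX register output constrained with 'y' and marked early-clobber (function and operand names are illustrative):

#include <mmintrin.h>

__m64 mmx_copy(__m64 src) {
  __m64 dst;
  // "=&y" is an early-clobber MMX-register output; per this commit, the
  // adjusted IR type must be the opaque x86_mmx type, not a <1 x i64> vector.
  __asm__("movq %1, %0" : "=&y"(dst) : "y"(src));
  return dst;
}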
Tilmann Scheller 99cc30c371 Revert "Add CC_Win64ThisCall and set it in the necessary places."
This reverts commit 126863.

llvm-svn: 126886
2011-03-02 21:36:49 +00:00
Tilmann Scheller 454464b491 Add CC_Win64ThisCall and set it in the necessary places.
llvm-svn: 126863
2011-03-02 19:36:23 +00:00
NAKAMURA Takumi f8a6e802f9 lib/CodeGen/TargetInfo.cpp: On Win64, arg i128 should be emitted as INDIRECT.
mingw-w64's i128 tweak should be done with x86_64-mingw32.

llvm-svn: 126186
2011-02-22 03:56:57 +00:00
Peter Collingbourne 8f5cf74c77 Re-instate r125819 and r125820 with no functionality change
llvm-svn: 126060
2011-02-19 23:03:58 +00:00
Rafael Espindola a6d2bff0c5 Revert 125820 and 125819 to fix PR9266.
llvm-svn: 126050
2011-02-19 21:39:31 +00:00
Peter Collingbourne 3ae6caaf1b Move TargetInfo::adjustInlineAsmType to TargetCodeGenInfo
llvm-svn: 125819
2011-02-18 02:24:56 +00:00
NAKAMURA Takumi 31ea2f14bc Triple::MinGW64 is deprecated and removed. We can use Triple::MinGW32 instead.
No one uses *-mingw64. mingw-w64 is represented as {i686|x86_64}-w64-mingw32.

llvm-svn: 125742
2011-02-17 08:51:38 +00:00
NAKAMURA Takumi 029d74b264 Fix whitespace.
llvm-svn: 125741
2011-02-17 08:50:50 +00:00
Benjamin Kramer 24f1d3e60a Add NetBSD target support. Patch by Joerg Sonnenberger.
llvm-svn: 124736
2011-02-02 18:59:27 +00:00
NAKAMURA Takumi e03c603624 lib/CodeGen/TargetInfo.cpp: Fix coding style and erase an obsolete comment.
llvm-svn: 123790
2011-01-19 00:11:33 +00:00
NAKAMURA Takumi bd91f50190 lib/CodeGen/TargetInfo.cpp: Add Win64 calling convention.
FIXME: It would be incompatible with Microsoft's ABI on one point.
On mingw64-gcc, {i128} is expanded for args and returned as {rax, rdx}.

llvm-svn: 123692
2011-01-17 22:56:31 +00:00
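
A hedged illustration of the incompatibility the FIXME mentions (hypothetical function; x86_64-w64-mingw32 vs. an MSVC-style Win64 ABI assumed):

__int128 add128(__int128 a, __int128 b) {
  return a + b;
}
// Per these commits: mingw64-gcc expands each i128 argument into a register
// pair and returns the result in rax:rdx, while an MSVC-compatible Win64 ABI
// passes i128 indirectly, which is the INDIRECT behavior adopted in the
// commit above.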
Bob Wilson b9fa00e0c2 Remove special handling for opaque Neon vector types.
Clang does not wrap the vectors in structs anymore so this isn't needed.

llvm-svn: 123241
2011-01-11 16:53:49 +00:00
Bob Wilson bd4520b535 Move DefaultABIInfo::classifyReturnType where it belongs. No functional change.
llvm-svn: 123195
2011-01-10 23:54:17 +00:00
Wesley Peck 36a1f68fec 1. Add some ABI information for the MicroBlaze.
2. Add attributes "interrupt_handler" and "save_volatiles" for the MicroBlaze target.

llvm-svn: 122184
2010-12-19 19:57:51 +00:00
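
A hedged sketch of the two new attributes on a MicroBlaze target (handler names and bodies are illustrative; the semantics in the comments are assumptions based on the attribute names):

// Full interrupt handler: presumably saves and restores machine state around
// the body and returns with the interrupt-return sequence.
void timer_isr(void) __attribute__((interrupt_handler));
void timer_isr(void) { /* acknowledge and handle the interrupt */ }

// Presumably a lighter-weight variant that saves only the volatile registers.
void fast_handler(void) __attribute__((save_volatiles));
void fast_handler(void) { /* minimal work */ }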
Benjamin Kramer 8c173cc364 Use a twine.
llvm-svn: 118892
2010-11-12 15:42:18 +00:00
Anders Carlsson fd88a6160d Rename getBaseClassOffset to getBaseClassOffsetInBits and introduce a getBaseClassOffset which returns the offset in CharUnits. Do the same thing for getVBaseClassOffset.
llvm-svn: 117881
2010-10-31 23:22:37 +00:00
Michael J. Spencer f5a1fbcdf3 Fix Whitespace.
llvm-svn: 116798
2010-10-19 06:39:39 +00:00
Bill Wendling 9987c0ea42 We shouldn't keep track of MMX registers "needed" separately from the SSE
registers needed.

llvm-svn: 116772
2010-10-18 23:51:38 +00:00
Bill Wendling 5cd41c4b13 Reapply r116684 with fixes. The test cases needed to be updated.
llvm-svn: 116696
2010-10-18 03:41:31 +00:00
Bill Wendling c7c9be661f Temporarily revert r116684. It was causing failures with
    Clang :: CodeGen/x86_32-arguments-darwin.c
    Clang :: CodeGen/x86_32-arguments-linux.c

llvm-svn: 116687
2010-10-17 07:58:46 +00:00
Bill Wendling 812f4b123e The "gcc.dg/compat/vector-1 -m32" test was broken after the MMX rewrite. The
function parameters weren't converted to use the correct type (x86_mmx). Add a
check, similar to the one in llvm-gcc, to see if we need the x86_mmx type for
that function parameter. If so, the parameter is coerced to that type.

llvm-svn: 116684
2010-10-17 07:38:01 +00:00
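
A hedged example of the kind of parameter the new check covers, in the spirit of the gcc.dg/compat test the commit cites (function name illustrative):

#include <mmintrin.h>

// Per this commit, an MMX vector parameter is coerced to the opaque x86_mmx
// IR type rather than being lowered as a <1 x i64> vector.
__m64 pass_mmx(__m64 v) { return v; }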
Chris Lattner a09e8efd1f Per discussion with Sanjiv, remove the PIC16 target from mainline. When/if
it comes back, it will be largely a rewrite, so keeping the old codebase
in tree isn't helping anyone.

llvm-svn: 116191
2010-10-11 05:44:49 +00:00
Daniel Dunbar 19964dbe3b IRgen/ABI/ARM: Return large vectors in memory.
llvm-svn: 114619
2010-09-23 01:54:32 +00:00
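
A hedged sketch of the case this covers, using a GCC-style generic vector (the typedef name is illustrative):

typedef float big_vec __attribute__((vector_size(32)));

// 32 bytes is wider than any single ARM register; per this commit, such a
// vector is now returned through a hidden memory pointer (sret) on ARM.
big_vec double_it(big_vec v) { return v + v; }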
Daniel Dunbar b34b08098c IRgen/ABI/ARM: Trust the backend to pass vectors correctly for the given ABI.
- Therefore, we can lower out the NEON wrapper structs and pass the vectors
  directly. This makes a huge difference in the cleanliness of the IR after
  optimization.
- I will trust, but verify, via future ABITest testing (for APCS-GNU, at
  least).

llvm-svn: 114618
2010-09-23 01:54:28 +00:00
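
A hedged NEON example of the change (assuming an ARM target with NEON available):

#include <arm_neon.h>

// With the wrapper structs lowered away, the arguments and return value here
// are plain <4 x float> IR values instead of structs wrapping them.
float32x4_t add4(float32x4_t a, float32x4_t b) {
  return vaddq_f32(a, b);
}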
Daniel Dunbar dd38fbc7fb IRgen/ABI/x86-32: Realign indirect arguments when the ABI requires us to pass
them with a smaller alignment than the rest of codegen expects.

llvm-svn: 114115
2010-09-16 20:42:06 +00:00
Daniel Dunbar 7b7c2937ef IRgen/ABI: Add support for realigning structures which are passed by indirect
reference.

llvm-svn: 114114
2010-09-16 20:42:02 +00:00
Daniel Dunbar ed23de3348 IRgen/ABI/x86_32/Darwin: On Darwin, only structures with SSE vector types get passed
with a non-default-stack-ABI-alignment (of 16).
 - This fixes the ABI convention, but breaks codegen since we now have
   underaligned arguments. Marginal improvement overall though, and will be
   fixed in the next commit.

llvm-svn: 114113
2010-09-16 20:42:00 +00:00
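
A hedged illustration of the Darwin x86-32 rule the three commits above implement (struct names illustrative):

#include <xmmintrin.h>

struct VecBox { __m128 v; };        // contains an SSE vector: 16-byte aligned
struct IntBox { int a, b, c, d; };  // same size, but no vector member

// Per these commits, VecBox is passed with 16-byte stack alignment (and
// realigned on the callee side when needed), while IntBox is passed with the
// default 4-byte stack alignment.
void takes_vec(struct VecBox b);
void takes_int(struct IntBox b);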
Daniel Dunbar 8a6c91ff76 IRgen/x86_32/Linux: Linux seems to align all stack objects to 4 bytes, unlike
Darwin. Checked against the handiest Linux llvm-gcc I had around; someone on Linux
is welcome to investigate further.

llvm-svn: 114112
2010-09-16 20:41:56 +00:00
Chris Lattner d426c8eae3 fix rdar://8360877, a really nasty miscompilation in Boost.Xpressive
caused by my ABI work.  Passing:

struct outer {
  int x;
  struct epsilon_matcher {} e;
  int f;
};

as {i32,i32} isn't safe, because the offset of the second element
needs to be at 8 when it is interpreted as a memory value.

llvm-svn: 112686
2010-09-01 00:50:20 +00:00
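
A small C++ sketch confirming the layout the commit describes (C++ because epsilon_matcher is an empty class):

#include <cstddef>

struct epsilon_matcher {};
struct outer { int x; epsilon_matcher e; int f; };

// e occupies byte 4 but carries no data, so f lands at offset 8. Coercing
// outer to {i32,i32} would put f at offset 4 of the pair, and storing that
// pair back to memory would write f into the padding.
static_assert(offsetof(outer, f) == 8, "f lives in the second 8-byte unit");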
Chris Lattner be5eb17536 same refactoring as before, this time on the argument side.
llvm-svn: 112684
2010-09-01 00:24:35 +00:00
Chris Lattner 52b3c13149 refactor some code to cut down on redundancy, no functionality change.
llvm-svn: 112683
2010-09-01 00:20:33 +00:00
Chris Lattner 04dc957260 Add support for windows x86-64 varargs, patch by Cameron Esfahani!
llvm-svn: 112603
2010-08-31 16:44:54 +00:00
Chris Lattner a48fbe8c53 Fix PR8029, an x86-32 ABI regression introduced in r112211
llvm-svn: 112537
2010-08-30 22:03:23 +00:00
Chris Lattner d7e54804ee improve comments.
llvm-svn: 112214
2010-08-26 20:08:43 +00:00
Chris Lattner d774ae9ed1 fix 2 x i16 to pass as i32 instead of <2 x i16>. The former passes in
memory (as required), while the latter was being passed in an xmm register.  This
fixes gcc.dg/compat/vector_1 on x86-32.

llvm-svn: 112211
2010-08-26 20:05:13 +00:00
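
A hedged sketch of the affected type (the typedef name is illustrative):

typedef short v2i16 __attribute__((vector_size(4)));

// Per this commit, this 4-byte vector is now lowered as i32 on x86-32, so it
// is passed in memory as the ABI requires rather than in an xmm register.
v2i16 bounce(v2i16 v) { return v; }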
Chris Lattner 69e683fb35 vectors of long and ulong are also classified as INTEGER in the x86-64 ABI;
this fixes rdar://8358475, a failure of the gcc.dg/compat/vector_1 ABI
test.

llvm-svn: 112205
2010-08-26 18:13:50 +00:00
Chris Lattner 46830f2fd6 1 x ulonglong needs to be classified as INTEGER, just like 1 x longlong;
this fixes a miscompilation in the included testcase, rdar://8359248

llvm-svn: 112201
2010-08-26 18:03:20 +00:00
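
A hedged sketch covering both of these classification fixes (typedef names illustrative; an LP64 target is assumed so each vector holds one 8-byte element):

typedef long               v1l   __attribute__((vector_size(8)));
typedef unsigned long long v1ull __attribute__((vector_size(8)));

// Per these commits, both 8-byte integer vectors are classified as INTEGER
// under the x86-64 ABI and travel in a general-purpose register, matching gcc.
v1l   id_l(v1l v)     { return v; }
v1ull id_ull(v1ull v) { return v; }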
Chris Lattner 51e1cc2fe2 tame an assertion, fixing rdar://8357396
llvm-svn: 112174
2010-08-26 06:28:35 +00:00
Chris Lattner 9f8b451876 Finally pass "two floats in a 64-bit unit" as a <2 x float> instead of
as a double in the x86-64 ABI.  This allows us to generate much better
code for certain things, e.g.:

_Complex float f32(_Complex float A, _Complex float B) {
  return A+B;
}

Used to compile into (look at the integer silliness!):

_f32:                                   ## @f32
## BB#0:                                ## %entry
	movd	%xmm1, %rax
	movd	%eax, %xmm1
	movd	%xmm0, %rcx
	movd	%ecx, %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %edx
	shrq	$32, %rax
	movd	%eax, %xmm0
	shrq	$32, %rcx
	movd	%ecx, %xmm1
	addss	%xmm0, %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rax
	addq	%rdx, %rax
	movd	%rax, %xmm0
	ret

Now we get:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$16, %xmm2, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm1
	movdqa	%xmm2, %xmm0
	unpcklps	%xmm1, %xmm0
	ret

and compile stuff like:

extern float _Complex ccoshf( float _Complex ) ;
float _Complex ccosf ( float _Complex z ) {
 float _Complex iz;
 (__real__ iz) = -(__imag__ z);
 (__imag__ iz) = (__real__ z);
 return ccoshf(iz);
}

into:

_ccosf:                                 ## @ccosf
## BB#0:                                ## %entry
	pshufd	$1, %xmm0, %xmm1
	xorps	LCPI4_0(%rip), %xmm1
	unpcklps	%xmm0, %xmm1
	movaps	%xmm1, %xmm0
	jmp	_ccoshf                 ## TAILCALL

instead of:

_ccosf:                                 ## @ccosf
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	movq	%rax, %rcx
	shlq	$32, %rcx
	shrq	$32, %rax
	xorl	$-2147483648, %eax      ## imm = 0xFFFFFFFF80000000
	addq	%rcx, %rax
	movd	%rax, %xmm0
	jmp	_ccoshf                 ## TAILCALL


There is still "stuff to be done" here for the struct case,
but this resolves rdar://6379669 - [x86-64 ABI] Pass and return 
_Complex float / double efficiently

llvm-svn: 112111
2010-08-25 23:39:14 +00:00
Michael J. Spencer b2f376bdd0 Fix horrible white space errors.
llvm-svn: 112067
2010-08-25 18:17:27 +00:00
John McCall a1dee5300b Experiment with using first-class aggregates to represent member function
pointers.  I find the resulting code to be substantially cleaner, and it
makes it very easy to use the same APIs for data member pointers (which I have
conscientiously avoided here), and it avoids a plethora of potential
inefficiencies due to excessive memory copying, but we'll have to see if it
actually works.

llvm-svn: 111776
2010-08-22 10:59:02 +00:00
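
A hedged C++ sketch of what the experiment covers (class and member names illustrative):

struct A {
  void g();
  virtual void v();
};

// Under the Itanium ABI, a member function pointer is a pair: a function
// pointer or vtable offset, plus a this-adjustment. Per this commit, that
// pair is modeled as a first-class two-element aggregate in IR instead of
// being shuffled through memory.
void (A::*p)() = &A::g;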
Chris Lattner 8a2f3c778e fix PR5179 and correctly fix PR5831 to not miscompile.
The X86-64 ABI code didn't handle the case where a struct
gets classified as, for example, "NoClass INTEGER".  This is
perfectly possible when the first slot
is all padding (e.g. due to empty base classes).  In this
situation, the first 8-byte doesn't take a register at all,
only the second 8-byte does.

This patch fixes the problem by enhancing the x86-64 ABI code to
allow and handle this case, reverts the broken fix for PR5831,
and enhances the target-independent code to handle an argument
value in registers being accessed at an offset from the memory
value.

This is the last x86-64 calling convention related miscompile
that I'm aware of.

llvm-svn: 109848
2010-07-30 04:02:24 +00:00
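
A hedged C++ sketch of the "NoClass INTEGER" shape (names illustrative):

struct Empty {};
struct S {
  Empty e;  // occupies byte 0 but carries no actual data
  long  x;  // 8-byte aligned, so it lands at offset 8
};

// The first eightbyte of S is all padding, so it classifies as NoClass; the
// second holds x and classifies as INTEGER. Per this commit, S now consumes
// a single GPR, with x taken from offset 8 of the in-memory value.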
Chris Lattner 1f3a063f00 move the last hunk of getCoerceResult into the place
that needs it and remove getCoerceResult.

llvm-svn: 109807
2010-07-29 21:42:50 +00:00
Chris Lattner 60fbd7744f now that direct and coerce are merged, getCoerceResult gets simpler.
llvm-svn: 109805
2010-07-29 21:29:53 +00:00
Chris Lattner 09794695ef now that GetSSETypeAtOffset handles passing SSE class values as
float, the special case hack in getCoerceResult can go away.

llvm-svn: 109804
2010-07-29 21:22:50 +00:00
Chris Lattner e556a71859 Implement the clang-side of detection for when to pass as
<2 x float> instead of double.  This works but can't be turned
on until I teach codegen to pass <2 x float> as one XMM register
instead of two.

llvm-svn: 109790
2010-07-29 18:39:32 +00:00
Chris Lattner 50a357e962 Look at me, I can count!
llvm-svn: 109786
2010-07-29 18:19:50 +00:00
Chris Lattner 7f4b81af7a fix rdar://8251384, another case where we could access beyond the
end of a struct.  This improves the case where the struct being passed
contains 3 floats, whether as three fields or as an array of 3.  Before
we'd generate this IR for the testcase:

define float @bar(double %X.coerce0, double %X.coerce1) nounwind {
entry:
  %X = alloca %struct.foof, align 8               ; <%struct.foof*> [#uses=2]
  %0 = bitcast %struct.foof* %X to %1*            ; <%1*> [#uses=2]
  %1 = getelementptr %1* %0, i32 0, i32 0         ; <double*> [#uses=1]
  store double %X.coerce0, double* %1
  %2 = getelementptr %1* %0, i32 0, i32 1         ; <double*> [#uses=1]
  store double %X.coerce1, double* %2
  %tmp = getelementptr inbounds %struct.foof* %X, i32 0, i32 2 ; <float*> [#uses=1]
  %tmp1 = load float* %tmp                        ; <float> [#uses=1]
  ret float %tmp1
}

which compiled (with optimization) to:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movd	%xmm1, %rax
	movd	%eax, %xmm0
	ret

Now we produce:

define float @bar(double %X.coerce0, float %X.coerce1) nounwind {
entry:
  %X = alloca %struct.foof, align 8               ; <%struct.foof*> [#uses=2]
  %0 = bitcast %struct.foof* %X to %0*            ; <%0*> [#uses=2]
  %1 = getelementptr %0* %0, i32 0, i32 0         ; <double*> [#uses=1]
  store double %X.coerce0, double* %1
  %2 = getelementptr %0* %0, i32 0, i32 1         ; <float*> [#uses=1]
  store float %X.coerce1, float* %2
  %tmp = getelementptr inbounds %struct.foof* %X, i32 0, i32 2 ; <float*> [#uses=1]
  %tmp1 = load float* %tmp                        ; <float> [#uses=1]
  ret float %tmp1
}

and:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movaps	%xmm1, %xmm0
	ret

llvm-svn: 109776
2010-07-29 18:13:09 +00:00
Chris Lattner c95a398947 start setting up infrastructure for passing multi-floats
as <2 x float> instead of as double.  The backend isn't ready
yet, but the frontend infrastructure can go in now.

llvm-svn: 109768
2010-07-29 17:49:08 +00:00