Commit Graph

662 Commits

Author SHA1 Message Date
Evan Cheng 7a3e750fd2 Fix this xform: (sra (shl X, m), result_size) -> (sign_extend (trunc (shl X, result_size - n - m)))
llvm-svn: 48578
2008-03-20 02:18:41 +00:00
Christopher Lamb 8fe9109469 Fix X86's isTruncateFree to not claim that truncate to i1 is free. This fixes Bill's testcase that failed for r48491.
llvm-svn: 48542
2008-03-19 08:30:06 +00:00
Evan Cheng 484064370a Fix an x86-64 isel lowering bug that's been around forever. An x86-64 varargs function implicitly reads X86::AL, don't clobber it!
llvm-svn: 48515
2008-03-18 23:36:35 +00:00
Chris Lattner 8a923e7c28 Reimplement the parameter attributes support, phase #1. Highlights:
1. There is now a "PAListPtr" class, which is a smart pointer around
   the underlying uniqued parameter attribute list object, and manages
   its refcount.  It is now impossible to mess up the refcount.
2. PAListPtr is now the main interface to the underlying object, and
   the underlying object is now completely opaque.
3. Implementation details like SmallVector and FoldingSet are now no
   longer part of the interface.
4. You can create a PAListPtr with an arbitrary sequence of
   ParamAttrsWithIndex's, no need to make a SmallVector of a specific 
   size (you can just use an array or scalar or vector if you wish).
5. All the client code that had to check for a null pointer before
   dereferencing the pointer is simplified to just access the 
   PAListPtr directly.
6. The interfaces for adding attrs to a list and removing them are a
   bit simpler.

Phase #2 will rename some stuff (e.g. PAListPtr) and do other less 
invasive changes.

llvm-svn: 48289
2008-03-12 17:45:29 +00:00
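
As a rough illustration of point 1, a refcounting smart pointer of this shape keeps the count right by construction; everything below other than the PAListPtr name is a stand-in, not the actual LLVM API:

// Stand-in for the opaque, uniqued list object (name illustrative;
// assumes heap allocation since DropRef deletes at refcount zero).
struct ParamAttributeListImpl {
  unsigned RefCount;
  ParamAttributeListImpl() : RefCount(0) {}
  void AddRef() { ++RefCount; }
  void DropRef() { if (--RefCount == 0) delete this; }
};

// Sketch of the smart-pointer discipline described above: copying,
// assignment, and destruction adjust the refcount automatically, so it
// can no longer be messed up by hand.
class PAListPtr {
  ParamAttributeListImpl *PAList;
public:
  PAListPtr() : PAList(0) {}
  PAListPtr(const PAListPtr &P) : PAList(P.PAList) {
    if (PAList) PAList->AddRef();
  }
  ~PAListPtr() { if (PAList) PAList->DropRef(); }
  PAListPtr &operator=(const PAListPtr &P) {
    if (P.PAList) P.PAList->AddRef();   // AddRef first: self-assignment safe
    if (PAList) PAList->DropRef();
    PAList = P.PAList;
    return *this;
  }
};
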
Chris Lattner 120ad01fcb start handling the 'f' x87 constraint.
llvm-svn: 48239
2008-03-11 19:06:29 +00:00
Chris Lattner 1bd44363f2 Change the model for FP Stack return to use fp operands on the
RET instruction instead of using FpSET_ST0_32.  This also generalizes
the code to handle returning multiple FP results.

llvm-svn: 48209
2008-03-11 03:23:40 +00:00
Chris Lattner 4b3a7fa823 Eliminate the FP_GET_ST0/FP_SET_ST0 target-specific dag nodes, just lower to
copyfromreg/copytoreg instead.

llvm-svn: 48174
2008-03-10 21:08:41 +00:00
Evan Cheng ae2c56d93e Default ISD::PREFETCH to expand.
llvm-svn: 48169
2008-03-10 19:38:10 +00:00
Scott Michel a6729e8666 Give TargetLowering::getSetCCResultType() a parameter so that ISD::SETCC's
return ValueType can depend on its operands' ValueType.

This is a cosmetic change, no functionality impacted.

llvm-svn: 48145
2008-03-10 15:42:14 +00:00
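
Schematically, the hook goes from a fixed per-target answer to one that can inspect the operand; a minimal sketch with stand-in types (the real ones are MVT::ValueType and SDOperand):

// Minimal stand-ins for illustration only.
enum ValueType { i1, v4i32, other };
struct Operand { ValueType VT; };

// Old shape: one fixed SETCC result type per target.
ValueType getSetCCResultTypeOld() { return i1; }

// New shape: the result type may follow the operand, e.g. a vector
// compare yielding a vector result of matching width.
ValueType getSetCCResultType(const Operand &Op) {
  return Op.VT == v4i32 ? v4i32 : i1;
}
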
Dale Johannesen 4e622ec86d Increase ISD::ParamFlags to 64 bits. Increase the ByValSize
field to 32 bits, thus enabling correct handling of ByVal
structs bigger than 0x1ffff.  Abstract interface a bit.
Fixes gcc.c-torture/execute/pr23135.c and 
gcc.c-torture/execute/pr28982b.c in gcc testsuite (were ICE'ing
on ppc32, quietly producing wrong code on x86-32.)

llvm-svn: 48122
2008-03-10 02:17:22 +00:00
Chris Lattner 4c869594bc rename FP_SETRESULT -> FP_SET_ST0
llvm-svn: 48094
2008-03-09 07:08:44 +00:00
Chris Lattner d587e580a6 rename FpGETRESULT32 -> FpGET_ST0_32 etc. Add support for
isel'ing value preserving FP roundings from one fp stack reg to another
into a noop, instead of stack traffic.

llvm-svn: 48093
2008-03-09 07:05:32 +00:00
Chris Lattner b6387c8a74 Finish implementing a readme entry: when inserting an i64 variable
into a vector of zeros or undef, and when the top part is obviously
zero, we can just use movd + shuffle.  This allows us to compile
vec_set-B.ll into:

_test3:
	movl	$1234567, %eax
	andl	4(%esp), %eax
	movd	%eax, %xmm0
	ret

instead of:

_test3:
	subl	$28, %esp
	movl	$1234567, %eax
	andl	32(%esp), %eax
	movl	%eax, (%esp)
	movl	$0, 4(%esp)
	movq	(%esp), %xmm0
	addl	$28, %esp
	ret

llvm-svn: 48090
2008-03-09 05:42:06 +00:00
Chris Lattner eef374c197 Implement a readme entry, compiling
#include <xmmintrin.h>
__m128i doload64(short x) {return _mm_set_epi16(0,0,0,0,0,0,0,1);}

into:
	movl	$1, %eax
	movd	%eax, %xmm0
	ret

instead of a constant pool load.

llvm-svn: 48063
2008-03-09 01:05:04 +00:00
Chris Lattner ad58828354 1) Improve comments.
2) Don't try to insert an i64 value into the low part of a 
   vector with movq on an x86-32 target.  This allows us to 
   compile:

__m128i doload64(short x) {return _mm_set_epi16(0,0,0,0,0,0,0,1);}

into:

_doload64:
	movaps	LCPI1_0, %xmm0
	ret

instead of:

_doload64:
	subl	$28, %esp
	movl	$0, 4(%esp)
	movl	$1, (%esp)
	movq	(%esp), %xmm0
	addl	$28, %esp
	ret

llvm-svn: 48057
2008-03-08 22:59:52 +00:00
Chris Lattner 8a6ebd23a8 minor simplifications to this code, don't create a dead
SCALAR_TO_VECTOR on paths that end up not using it.

llvm-svn: 48056
2008-03-08 22:48:29 +00:00
Evan Cheng 95cf661534 Implement x86 support for @llvm.prefetch. It corresponds to prefetcht{0|1|2} and prefetchnta instructions.
llvm-svn: 48042
2008-03-08 00:58:38 +00:00
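
For reference, the intrinsic is reachable from C via __builtin_prefetch, whose locality argument is what should select among these instructions (a sketch; the exact instruction chosen is up to isel):

void warm(const double *p) {
  __builtin_prefetch(p, /*rw=*/0, /*locality=*/3);  // high locality: prefetcht0
  __builtin_prefetch(p + 1024, 0, 0);               // non-temporal: prefetchnta
}
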
Chris Lattner d4defb00df mark frem as expand for all legal fp types on x86, regardless of whether
we're using SSE or not.  This fixes PR2122.

llvm-svn: 48006
2008-03-07 06:36:32 +00:00
Andrew Lenharth 357061a74d 64bit CAS on 32bit x86.
llvm-svn: 47929
2008-03-05 01:15:49 +00:00
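
A 64-bit compare-and-swap on a 32-bit x86 target has to go through the cmpxchg8b instruction; from source it can be exercised with the GCC-style __sync builtins (a sketch, assuming a GCC-compatible compiler):

#include <stdint.h>

// 64-bit CAS on x86-32; expected to lower to lock cmpxchg8b.
uint64_t cas64(uint64_t *p, uint64_t expected, uint64_t desired) {
  return __sync_val_compare_and_swap(p, expected, desired);
}
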
Andrew Lenharth 4fee9f35b5 x86-64 atomics
llvm-svn: 47903
2008-03-04 21:13:33 +00:00
Dan Gohman a986eea82f Add support for lowering i64 SRA_PARTS and friends on x86-64.
llvm-svn: 47865
2008-03-03 22:22:09 +00:00
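
SRA_PARTS represents an arithmetic right shift of a value split across two registers; a reference version of what the expansion computes, assuming a shift amount strictly between 0 and the part width:

#include <stdint.h>

// Two-part arithmetic right shift by n, 0 < n < 64; hardware shrd/sar
// pairs implement the same computation.
void sra_parts(uint64_t *lo, int64_t *hi, unsigned n) {
  *lo = (*lo >> n) | ((uint64_t)*hi << (64 - n)); // bits crossing the boundary
  *hi = *hi >> n;                                 // arithmetic: sign propagates
}
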
Andrew Lenharth f5c90ec12c make CAS work
llvm-svn: 47799
2008-03-01 22:27:48 +00:00
Andrew Lenharth d032c33300 all but CAS working on x86
llvm-svn: 47798
2008-03-01 21:52:34 +00:00
Evan Cheng c799065cc3 Add a quick and dirty "loop aligner pass". x86 uses it to align its loops to 16-byte boundaries.
llvm-svn: 47703
2008-02-28 00:43:03 +00:00
Chris Lattner 83263b8cfb Make X86TargetLowering::LowerSINT_TO_FP return without creating a dead
stack slot and store if the  SINT_TO_FP is actually legal.  This allows
us to compile:

double a(double b) {return (unsigned)b;}

to:

_a:
	cvttsd2siq	%xmm0, %rax
	movl	%eax, %eax
	cvtsi2sdq	%rax, %xmm0
	ret

instead of:

_a:
	subq	$8, %rsp
	cvttsd2siq	%xmm0, %rax
	movl	%eax, %eax
	cvtsi2sdq	%rax, %xmm0
	addq	$8, %rsp
	ret

crazy.

llvm-svn: 47660
2008-02-27 05:57:41 +00:00
Arnold Schwaighofer 3bfca3e942 Refactor according to Evan's and Anton's suggestions.
llvm-svn: 47635
2008-02-26 22:21:54 +00:00
Arnold Schwaighofer 1f17bf6171 Correct function comments.
llvm-svn: 47606
2008-02-26 17:50:59 +00:00
Arnold Schwaighofer 69a10f4112 Add support for intermodule tail calls on x86/32bit with
GOT-style position-independent code. Before, only tail calls to
protected/hidden functions within the same module were optimized.
Now all function calls are tail call optimized.

llvm-svn: 47594
2008-02-26 10:21:54 +00:00
Arnold Schwaighofer b01b99ec78 Change the lowering of arguments for tail call optimized
calls. Before, arguments that could overwrite each other were
explicitly lowered to a stack slot, not giving the register allocator
a chance to optimize. Now a sequence of copyto/copyfrom virtual
registers ensures that arguments are loaded into (virtual) registers
before they are lowered to the stack slot (and might overwrite each
other). Also, parameter stack slots are marked mutable for
(potentially) tail calling functions.

llvm-svn: 47593
2008-02-26 09:19:59 +00:00
Dale Johannesen 65b404d61c Revise previous patch per review.
llvm-svn: 47573
2008-02-25 22:29:22 +00:00
Dale Johannesen 32d84b1772 Expand removal of MMX memory copies to allow 1 level
of TokenFactor underneath chain (seems to be enough)

llvm-svn: 47554
2008-02-25 19:20:14 +00:00
Dale Johannesen 09f410b6d7 Split ParameterAttributes.h, putting the complicated
stuff into ParamAttrsList.h.  Per feedback from
ParamAttrs changes.

llvm-svn: 47504
2008-02-22 22:17:59 +00:00
Chris Lattner ab8bfc28c8 copy mmx values from/to memory with GPRs on x86-32
instead of with mmx registers.  This horribleness is apparently
done by gcc to avoid having to insert emms in places that really 
should have it.  This is the second half of rdar://5741668.

llvm-svn: 47474
2008-02-22 05:18:04 +00:00
Chris Lattner 997b3a65ca Start using GPRs to copy around mmx values instead of mmx regs.
GCC apparently does this, and code depends on not having to do
emms when this happens.  This is x86-64 only so far, second half
should handle x86-32.

rdar://5741668

llvm-svn: 47470
2008-02-22 02:09:43 +00:00
Anton Korobeynikov 40d67c59d5 Remove a bunch of gcc 4.3-related warnings from Target
llvm-svn: 47369
2008-02-20 11:22:39 +00:00
Evan Cheng 6200c225e0 - When DAG combiner is folding a bit convert into a BUILD_VECTOR, it should check if it's essentially a SCALAR_TO_VECTOR. Avoid turning (v8i16) <10, u, u, u> to <10, 0, u, u, u, u, u, u>. Instead, simply convert it to a SCALAR_TO_VECTOR of the proper type.
- X86 now normalizes SCALAR_TO_VECTOR to (BIT_CONVERT (v4i32 SCALAR_TO_VECTOR)). Get rid of X86ISD::S2VEC.

llvm-svn: 47290
2008-02-18 23:04:32 +00:00
Dan Gohman c589243107 Chris pointed out that it's not necessary to set i64 MUL to Expand
on x86-32 since i64 itself is not a Legal type. And, update some
comments.

llvm-svn: 47282
2008-02-18 19:34:53 +00:00
Dan Gohman a589ee11bb Don't mark scalar integer multiplication as Expand on x86, since x86
has plain one-result scalar integer multiplication instructions.
This avoids expanding such instructions into MUL_LOHI sequences that
must be special-cased at isel time, and avoids the problem with that
code that prevented memory operands from being folded.

This fixes PR1874, addressing the most common case. The uncommon
cases of optimizing multiply-high operations will require work
in DAGCombiner.

llvm-svn: 47277
2008-02-18 17:55:26 +00:00
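
The practical upshot: a plain C multiply can select the one-result, two-operand imul form directly, with the load folded in, rather than being expanded to MUL_LOHI first. For example:

// Only the low 32 bits of the product are needed, so isel can use a
// single imull with the memory operand folded into it, instead of an
// eax/edx-clobbering MUL_LOHI sequence.
int mul(int a, const int *b) { return a * *b; }
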
Andrew Lenharth fedcf477b5 I cannot find a libgcc function for this builtin. Therefore expanding it to a noop (which is how it used to be treated). If someone who knows the x86 backend better than I do could tell me how to get a lock prefix on an instruction, that would be nice to complete x86 support.
llvm-svn: 47213
2008-02-16 14:46:26 +00:00
Duncan Sands 4c95dbd69f In TargetLowering::LowerCallTo, don't assert that
the return value is zero-extended if it isn't
sign-extended.  It may also be any-extended.
Also, if a floating point value was returned
in a larger floating point type, pass 1 as the
second operand to FP_ROUND, which tells it
that all the precision is in the original type.
I think this is right but I could be wrong.
Finally, when doing libcalls, set isZExt on
a parameter if it is "unsigned".  Currently
isSExt is set when signed, and nothing is
set otherwise.  This should be right for all
calls to standard library routines.

llvm-svn: 47122
2008-02-14 17:28:50 +00:00
Nate Begeman 53e1b3f9d5 Change how FP immediates are handled.
1) ConstantFP is now expand by default
2) ConstantFP is not turned into TargetConstantFP during Legalize
   if it is legal.

This allows ConstantFP to be handled like Constant, allowing for 
targets that can encode FP immediates as MachineOperands.

As a bonus, fix up Itanium FP constants, which now correctly match,
and match more constants!  Hooray.

llvm-svn: 47121
2008-02-14 08:57:00 +00:00
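
In setOperationAction terms, a target without FP-immediate moves now expands ConstantFP (typically into a constant-pool load), while one that can encode FP immediates as MachineOperands leaves it legal; a sketch with stand-in types, not the actual TargetLowering code:

// Stand-ins for the TargetLowering vocabulary (illustrative only).
enum OpCode { ConstantFP };
enum FPType { f32, f64 };
enum Action { Legal, Expand };

Action FPConstAction[2];
void setOperationAction(OpCode, FPType VT, Action A) { FPConstAction[VT] = A; }

void configureTargetWithoutFPImmediates() {
  // Expanded ConstantFP becomes a constant-pool load during legalize;
  // a target with FP-immediate encodings would mark these Legal instead.
  setOperationAction(ConstantFP, f32, Expand);
  setOperationAction(ConstantFP, f64, Expand);
}
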
Dan Gohman 9ca025f1dc Assigning an APInt to 0 with plain assignment gives it a one-bit
size. Initialize these APInts to properly-sized zero values.

llvm-svn: 47099
2008-02-13 23:07:24 +00:00
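
The pitfall and the fix, as a minimal sketch against the APInt API (variable names are illustrative):

#include "llvm/ADT/APInt.h"
using llvm::APInt;

void analyze(unsigned BitWidth) {
  // The bug pattern was roughly 'APInt KnownZero = 0;', which yields a
  // one-bit value; bitwise ops against BitWidth-wide APInts then mismatch.
  // The fix spells the width out explicitly:
  APInt KnownZero(BitWidth, 0);
  APInt KnownOne(BitWidth, 0);
  (void)KnownZero; (void)KnownOne;
}
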
Dan Gohman e1d9ee66ed Simplify some logic in ComputeMaskedBits. And change ComputeMaskedBits
to pass the mask APInt by value, not by reference. 

llvm-svn: 47096
2008-02-13 22:28:48 +00:00
Dan Gohman f990faf23b Convert SelectionDAG::ComputeMaskedBits to use APInt instead of uint64_t.
Add an overload that supports the uint64_t interface for use by clients
that haven't been updated yet.

llvm-svn: 47039
2008-02-13 00:35:47 +00:00
Nate Begeman 8ef50214f0 SSE4.1 64b integer insert/extract pattern support
Move formats into the formats file

llvm-svn: 47035
2008-02-12 22:51:28 +00:00
Evan Cheng 4d8c98b8f9 Unbreak various insert_vector_elt and extract_vector_elt tests in the presence of SSE4.
llvm-svn: 47001
2008-02-12 07:59:45 +00:00
Nate Begeman 2d77e8e446 Enable SSE4 codegen and pattern matching.
Add some notes to the README.

llvm-svn: 46949
2008-02-11 04:19:36 +00:00
Dan Gohman 3a4be0fdef Rename MRegisterInfo to TargetRegisterInfo.
llvm-svn: 46930
2008-02-10 18:45:23 +00:00
Dale Johannesen 36c2967d89 64-bit (MMX) vectors do not need restrictive alignment.
128-bit vectors need it only when SSE is on.

llvm-svn: 46890
2008-02-08 19:48:20 +00:00
Dan Gohman 7a55a94ba1 Avoid needlessly casting away const qualifiers.
llvm-svn: 46877
2008-02-08 03:29:40 +00:00
Dan Gohman 16d4bc3dc0 Follow Chris' suggestion; change the PseudoSourceValue accessors
to return pointers instead of references, since this is always what
is needed.

llvm-svn: 46857
2008-02-07 18:41:25 +00:00
Dan Gohman 63a8452e9c Add SourceValue information for outgoing argument stores on x86.
llvm-svn: 46854
2008-02-07 16:28:05 +00:00
Dan Gohman 2d489b5081 Re-apply the memory operand changes, with a fix for the static
initializer problem, a minor tweak to the way the
DAGISelEmitter finds load/store nodes, and a renaming of the
new PseudoSourceValue objects.

llvm-svn: 46827
2008-02-06 22:27:42 +00:00
Dale Johannesen d88f1d060e Implement sseregparm.
llvm-svn: 46764
2008-02-05 20:46:33 +00:00
Nick Lewycky f5b9938ef6 Don't use uninitialized values. Fixes vec_align.ll on X86 Linux.
llvm-svn: 46666
2008-02-02 08:29:58 +00:00
Evan Cheng efd142a920 SDIsel processes llvm.dbg.declare by recording the variable debug information descriptor and its corresponding stack frame index in MachineModuleInfo. This only works if the local variable is "homed" in the stack frame. It does not work for byval parameters, etc.
Added ISD::DECLARE node type to represent llvm.dbg.declare intrinsic. Now the intrinsic calls are lowered into an SDNode and live on throughout the codegen passes.
For now, since all the debugging information recording is done at isel time, when an ISD::DECLARE node is selected, it has the side effect of also recording the variable. This is a short term solution that should be fixed in time.

llvm-svn: 46659
2008-02-02 04:07:54 +00:00
Evan Cheng 27b32b87ed Revert 46556 and 46585. Dan please fix the PseudoSourceValue problem and re-commit.
llvm-svn: 46623
2008-01-31 21:00:00 +00:00
Dan Gohman ed346f2ed5 Avoid unnecessarily casting away const.
llvm-svn: 46590
2008-01-31 01:01:48 +00:00
Dan Gohman 9ba4d76816 Rename ISD::FLT_ROUNDS to ISD::FLT_ROUNDS_ to avoid conflicting
with the real FLT_ROUNDS (defined in <float.h>).

llvm-svn: 46587
2008-01-31 00:41:03 +00:00
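
The conflict is with the standard C macro: <float.h> defines FLT_ROUNDS, often as a function-call macro, so any enumerator literally spelled FLT_ROUNDS gets rewritten by the preprocessor. For example:

#include <cfloat>
#include <cstdio>

int main() {
  // FLT_ROUNDS comes from <float.h>/<cfloat>; any token spelled
  // FLT_ROUNDS, including ISD::FLT_ROUNDS, would be macro-expanded,
  // hence the trailing-underscore rename.
  std::printf("FLT_ROUNDS = %d\n", FLT_ROUNDS); // 1 == round to nearest
  return 0;
}
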
Dan Gohman 3646fdda67 Create a new class, MemOperand, for describing memory references
in the backend. Introduce a new SDNode type, MemOperandSDNode, for
holding a MemOperand in the SelectionDAG IR, and add a MemOperand
list to MachineInstr, and code to manage them. Remove the offset
field from SrcValueSDNode; uses of SrcValueSDNode that were using
it are all using MemOperandSDNode now.

Also, begin updating some getLoad and getStore calls to use the
PseudoSourceValue objects.

Most of this was written by Florian Brander; some
reorganization and updating to TOT by me.

llvm-svn: 46585
2008-01-31 00:25:39 +00:00
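
Roughly, such a record carries everything the backend needs to reason about one memory reference; a sketch with assumed field names, not quoted from the tree:

// Illustrative shape of a backend memory-reference record: which IR
// value is touched, how (load/store/volatile), where, and with what
// size and alignment.
struct MemOperandSketch {
  const void *V;        // originating IR Value / PseudoSourceValue
  unsigned Flags;       // load, store, volatile bits
  long long Offset;     // offset from V (the field moved off SrcValueSDNode)
  unsigned Size;        // access size in bytes
  unsigned Alignment;   // guaranteed alignment in bytes
};
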
Evan Cheng 29cfb67e28 Even though InsertAtEndOfBasicBlock is an ugly hack, it still deserves a proper name. Rename it to EmitInstrWithCustomInserter since it does not necessarily insert
the instruction at the end.

llvm-svn: 46562
2008-01-30 18:18:23 +00:00
Evan Cheng 084a1cdcdd Work in progress. This patch *fixes* x86-64 calls which are modelled as StructRet but really should be returned in registers, e.g. _Complex long double, some 128-bit aggregates. This is a short term solution that is necessary only because llvm, for now, cannot model i128 nor calls with multiple results.
Status: This only works for direct calls, and only the caller side is done. Disabled for now.

llvm-svn: 46527
2008-01-29 19:34:22 +00:00
Dale Johannesen 2b3bc30420 Handle 'X' constraint in asm's better.
llvm-svn: 46485
2008-01-29 02:21:21 +00:00
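
'X' is the inline-asm constraint that accepts any operand at all (register, memory, or immediate), which is what makes it awkward to lower; a small example of it appearing in source, using GCC-style extended asm:

// 'X' places no restriction on the operand's form, so the backend must
// be prepared to see any operand kind here.
void sink(double x) {
  asm volatile("# operand may be reg, mem, or imm" : : "X"(x));
}
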
Chris Lattner d05d2011d0 Use fldz and fld1 for long double constants instead of a constant pool load.
llvm-svn: 46411
2008-01-27 06:19:31 +00:00
Chris Lattner 250789f1bd Remove some code for inferring alignment info from the x86 backend
now that the dag combiner does it.

llvm-svn: 46404
2008-01-26 20:07:42 +00:00
Chris Lattner f4523c35cb optimize fxor like for: give FXOR the same treatment as the FAND/FOR dag combines below.
llvm-svn: 46345
2008-01-25 06:14:17 +00:00
Chris Lattner 84ab724e06 Add target-specific dag combines for FAND(x,0) and FOR(x,0). This allows
us to compile:

double test(double X) {
  return copysign(0.0, X);
}

into:

_test:
	andpd	LCPI1_0(%rip), %xmm0
	ret

instead of:
_test:
	pxor	%xmm1, %xmm1
	andpd	LCPI1_0(%rip), %xmm1
	movapd	%xmm0, %xmm2
	andpd	LCPI1_1(%rip), %xmm2
	movapd	%xmm1, %xmm0
	orpd	%xmm2, %xmm0
	ret

llvm-svn: 46344
2008-01-25 05:46:26 +00:00
Chris Lattner a91f77eaac Significantly simplify and improve handling of FP function results on x86-32.
This case returns the value in ST(0) and then has to convert it to an SSE
register.  This causes significant codegen ugliness in some cases.  For 
example in the trivial fp-stack-direct-ret.ll testcase we used to generate:

_bar:
	subl	$28, %esp
	call	L_foo$stub
	fstpl	16(%esp)
	movsd	16(%esp), %xmm0
	movsd	%xmm0, 8(%esp)
	fldl	8(%esp)
	addl	$28, %esp
	ret

because we move the result of foo() into an XMM register, then have to
move it back for the return of bar.

Instead of hacking ever-more special cases into the call result lowering code
we take a much simpler approach: on x86-32, fp return is modeled as always 
returning into an f80 register which is then truncated to f32 or f64 as needed.
Similarly for a result, we model it as an extension to f80 + return.

This exposes the truncate and extensions to the dag combiner, allowing target
independent code to hack on them, eliminating them in this case.  This gives 
us this code for the example above:

_bar:
	subl	$12, %esp
	call	L_foo$stub
	addl	$12, %esp
	ret

The nasty aspect of this is that these conversions are not legal, but we want
the second pass of dag combiner (post-legalize) to be able to hack on them.
To handle this, we lie to legalize and say they are legal, then custom expand
them on entry to the isel pass (PreprocessForFPConvert).  This is gross, but
less gross than the code it is replacing :)

This also allows us to generate better code in several other cases.  For 
example on fp-stack-ret-conv.ll, we now generate:

_test:
	subl	$12, %esp
	call	L_foo$stub
	fstps	8(%esp)
	movl	16(%esp), %eax
	cvtss2sd	8(%esp), %xmm0
	movsd	%xmm0, (%eax)
	addl	$12, %esp
	ret

where before we produced (incidentally, the old bad code is identical to what
gcc produces):

_test:
	subl	$12, %esp
	call	L_foo$stub
	fstpl	(%esp)
	cvtsd2ss	(%esp), %xmm0
	cvtss2sd	%xmm0, %xmm0
	movl	16(%esp), %eax
	movsd	%xmm0, (%eax)
	addl	$12, %esp
	ret

Note that we generate slightly worse code on pr1505b.ll due to a scheduling 
deficiency that is unrelated to this patch.

llvm-svn: 46307
2008-01-24 08:07:48 +00:00
Evan Cheng 35abd840a6 Let each target decide byval alignment. For X86, it's 4-byte unless the aggregate contains SSE vector(s). For x86-64, it's the max of 8 or the alignment of the type.
llvm-svn: 46286
2008-01-23 23:17:41 +00:00
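
A sketch of the policy as described, with stand-in inputs (the real hook inspects an llvm::Type; names here are illustrative):

// Per-target byval alignment policy from the message above.
struct AggregateInfo { bool HasSSEVector; unsigned TypeAlign; };

unsigned byValAlign_x86_32(const AggregateInfo &A) {
  return A.HasSSEVector ? 16 : 4;            // 4 unless SSE vectors inside
}
unsigned byValAlign_x86_64(const AggregateInfo &A) {
  return A.TypeAlign > 8 ? A.TypeAlign : 8;  // max(8, alignment of the type)
}
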
Duncan Sands 95d46ef887 The last pieces needed for loading arbitrary
precision integers.  This won't actually work
(and most of the code is dead) unless the new
legalization machinery is turned on.  While
there, I rationalized the handling of i1, and
removed some bogus (and unused) sextload patterns.
For i1, this could result in microscopically
better code for some architectures (not X86).
It might also result in worse code if annotating
with AssertZExt nodes turns out to be more harmful
than helpful.

llvm-svn: 46280
2008-01-23 20:39:46 +00:00
Chris Lattner 1ea55cf816 This commit changes:
1. Legalize now always promotes truncstore of i1 to i8. 
2. Remove patterns and gunk related to truncstore i1 from targets.
3. Rename the StoreXAction stuff to TruncStoreAction in TLI.
4. Make the TLI TruncStoreAction table a 2d table to handle from/to conversions.
5. Mark a wide variety of invalid truncstores as such in various targets, e.g.
   X86 currently doesn't support truncstore of any of its integer types.
6. Add legalize support for truncstores with invalid value input types.
7. Add a dag combine transform to turn store(truncate) into truncstore when
   safe.

The latter allows us to compile CodeGen/X86/storetrunc-fp.ll to:

_foo:
	fldt	20(%esp)
	fldt	4(%esp)
	faddp	%st(1)
	movl	36(%esp), %eax
	fstps	(%eax)
	ret

instead of:

_foo:
	subl	$4, %esp
	fldt	24(%esp)
	fldt	8(%esp)
	faddp	%st(1)
	fstps	(%esp)
	movl	40(%esp), %eax
	movss	(%esp), %xmm0
	movss	%xmm0, (%eax)
	addl	$4, %esp
	ret

llvm-svn: 46140
2008-01-17 19:59:44 +00:00
Chris Lattner 72733e573b * Introduce a new SelectionDAG::getIntPtrConstant method
and switch various codegen pieces and the X86 backend over
  to using it.

* Add some comments to SelectionDAGNodes.h

* Introduce a second argument to FP_ROUND, which indicates
  whether the FP_ROUND changes the value of its input. If
  not it is safe to xform things like fp_extend(fp_round(x)) -> x.

llvm-svn: 46125
2008-01-17 07:00:52 +00:00
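
The new method is just the common "constant of pointer-width integer type" pattern in one place; schematically, with stand-in types rather than the literal SelectionDAG code:

#include <stdint.h>

// Stand-ins showing the shape of the convenience wrapper.
enum ValueType { i32, i64 };
struct SDConstant { uint64_t Val; ValueType VT; };

ValueType getPointerTy() { return i64; }  // per-target in reality
SDConstant getConstant(uint64_t V, ValueType VT) {
  SDConstant C = { V, VT };
  return C;
}

// The helper: a constant of the target's pointer-width integer type.
SDConstant getIntPtrConstant(uint64_t V) {
  return getConstant(V, getPointerTy());
}
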
Duncan Sands 32b0ff6814 Trampoline support for x86-64. This looks like
it should work, but I have no machine to test
it on.  Committed because it will at least
cause no harm, and maybe someone can test it
for me!

llvm-svn: 46098
2008-01-16 22:55:25 +00:00
Chris Lattner e8bb9f2190 make it more clear that this predicate only applies to scalar FP types.
llvm-svn: 46058
2008-01-16 06:24:21 +00:00
Chris Lattner 14e616ef0b introduce a isTypeInSSEReg predicate, which allows us to simplify
some code.  No functionality change.

llvm-svn: 46055
2008-01-16 06:19:45 +00:00
Chris Lattner 8f7cec859e My previous commit had an incomplete message; it should have been:
make the 'fp return in ST(0)' optimization smart enough to
look through token factor nodes.  This allows us to compile 
testcases like CodeGen/X86/fp-stack-retcopy.ll into:

_carg:
	subl	$12, %esp
	call	L_foo$stub
	fstpl	(%esp)
	fldl	(%esp)
	addl	$12, %esp
	ret

instead of:

_carg:
	subl	$28, %esp
	call	L_foo$stub
	fstpl	16(%esp)
	movsd	16(%esp), %xmm0
	movsd	%xmm0, 8(%esp)
	fldl	8(%esp)
	addl	$28, %esp
	ret

Still not optimal, but much better and this is a trivial patch.  Fixing 
the rest requires invasive surgery that is not llvm 2.2 material.

llvm-svn: 46054
2008-01-16 05:56:59 +00:00
Chris Lattner ea001f1db7 make the 'fp return in ST(0)' optimization smart enough to
look through token factor

llvm-svn: 46053
2008-01-16 05:53:06 +00:00
Chris Lattner de5c74f18e various whitespace cleanups, no functionality change.
llvm-svn: 46052
2008-01-16 05:52:18 +00:00
Chris Lattner 3c3fefde06 no need to expand ISD::TRAP to X86ISD::TRAP, just match ISD::TRAP.
llvm-svn: 46015
2008-01-15 21:58:22 +00:00
Anton Korobeynikov 6bbbc4cbfa For PR1839: add initial support for __builtin_trap. The llvm-gcc part is missing,
as is PPC codegen

llvm-svn: 46001
2008-01-15 07:02:33 +00:00
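
From C the builtin looks like this; on x86 the resulting ISD::TRAP is expected to become a ud2 (invalid-opcode) instruction:

void check(bool ok) {
  if (!ok)
    __builtin_trap();  // lowers to ISD::TRAP; ud2 on x86
}
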
Duncan Sands 51fe7bbcf5 Whitespace tweak.
llvm-svn: 45940
2008-01-13 21:20:29 +00:00
Evan Cheng 7411b510b2 Code clean up.
llvm-svn: 45898
2008-01-12 01:08:07 +00:00
Arnold Schwaighofer 06da9e2d43 hrm - correct spelling.
Actually we're not riding any arguments. Sadly there is no semantic spell checker that is going to save you from such a mistake.

llvm-svn: 45868
2008-01-11 17:10:15 +00:00
Arnold Schwaighofer 6cf72fbbaf Improve tail call optimized calls' argument lowering. Before this
commit, all arguments were moved to the stack slot where they would
reside on a normal function call before the lowering to the tail call
stack slot. This was done to prevent arguments overwriting each other.
Now only arguments sourcing from a FORMAL_ARGUMENTS node or a
CopyFromReg node with virtual register (could also be a caller's
argument) are lowered indirectly.

llvm-svn: 45867
2008-01-11 16:49:42 +00:00
Arnold Schwaighofer bf1816ea7b Correct a copy and paste error.
llvm-svn: 45865
2008-01-11 14:34:56 +00:00
Evan Cheng a26552493b Mark byval parameter stack objects mutable for now.
llvm-svn: 45813
2008-01-10 02:24:25 +00:00
Evan Cheng fead113fe0 Do not use the stack pointer directly; issue a copyfromreg instead. Otherwise we can end up with something like ADD32ri %esp, x, which the two-address pass won't like.
llvm-svn: 45798
2008-01-10 00:37:26 +00:00
Evan Cheng 73d1017871 Remove comments that do not correspond to anything after recent refactoring.
llvm-svn: 45792
2008-01-10 00:09:10 +00:00
Evan Cheng 8242168ef4 Unbreak x86-64.
llvm-svn: 45725
2008-01-07 23:08:23 +00:00
Nate Begeman 22950d26f5 Remove an incorrect optimization that is performed correctly by
the target independent legalizer.

llvm-svn: 45631
2008-01-05 20:51:30 +00:00
Gordon Henriksen 9231958391 Refactoring the x86 and x86-64 calling convention implementations,
unifying the copied algorithms and saving over 500 LOC. There should
be no functionality change, but please test on your favorite x86
target.

llvm-svn: 45627
2008-01-05 16:56:59 +00:00
Gordon Henriksen f066fc477c First steps in X86 calling convention cleanup.
llvm-svn: 45536
2008-01-03 16:47:34 +00:00
Chris Lattner a10fff51d9 Rename SSARegMap -> MachineRegisterInfo in keeping with the idea
that "machine" classes are used to represent the current state of
the code being compiled.  Given this expanded name, we can start 
moving other stuff into it.  For now, move the UsedPhysRegs and
LiveIn/LiveOuts vectors from MachineFunction into it.

Update all the clients to match.

This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.

llvm-svn: 45467
2007-12-31 04:13:23 +00:00
Chris Lattner a5bb370aa4 Add new shorter predicates for testing machine operands for various types:
e.g. MO.isMBB() instead of MO.isMachineBasicBlock().  I don't plan on 
switching everything over, so new clients should just start using the 
shorter names.

Remove old long accessors, switching everything over to use the short
accessor: getMachineBasicBlock() -> getMBB(), 
getConstantPoolIndex() -> getIndex(), setMachineBasicBlock -> setMBB(), etc.

llvm-svn: 45464
2007-12-30 23:10:15 +00:00
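
In use, client code shrinks accordingly; a stand-in sketch of the renamed accessors (the real class is MachineOperand):

// Stand-in showing the shorter spellings in use.
struct MachineBasicBlock {};
struct MachineOperand {
  MachineBasicBlock *MBB;
  bool isMBB() const { return MBB != 0; }           // was isMachineBasicBlock()
  MachineBasicBlock *getMBB() const { return MBB; } // was getMachineBasicBlock()
};

MachineBasicBlock *targetOf(const MachineOperand &MO) {
  return MO.isMBB() ? MO.getMBB() : 0;
}
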
Chris Lattner f3ebc3f3d2 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Chris Lattner 07ccbfa64a Codegen:
as:

_bar:
	pushl	%esi
	subl	$8, %esp
	movl	16(%esp), %esi
	call	L_foo$stub
	fstps	(%esi)
	addl	$8, %esp
	popl	%esi
	#FP_REG_KILL
	ret

instead of:

_bar:
	pushl	%esi
	subl	$8, %esp
	movl	16(%esp), %esi
	call	L_foo$stub
	fstpl	(%esi)
	cvtsd2ss	(%esi), %xmm0
	movss	%xmm0, (%esi)
	addl	$8, %esp
	popl	%esi
	#FP_REG_KILL
	ret

llvm-svn: 45401
2007-12-29 06:57:38 +00:00
Chris Lattner 8013bd339b avoid going through a stack slot to convert from fpstack to xmm reg
if we are just going to store it back anyway.  This improves things 
like:
double foo();
void bar(double *P) { *P = foo(); }

llvm-svn: 45399
2007-12-29 06:41:28 +00:00
Chris Lattner e3b05fe31b fix a questionable cast, thanks to Mike Stump for pointing this out.
llvm-svn: 45075
2007-12-16 20:26:54 +00:00
Evan Cheng 23d2d4dc6c Make better use of instructions that clear high bits; fix various 2-wide shuffle bugs.
llvm-svn: 45058
2007-12-15 03:00:47 +00:00
Evan Cheng 0e6408124e Fix ctlz and cttz. The llvm definition requires them to return the number of bits of the src type when the value is zero.
llvm-svn: 45029
2007-12-14 08:30:15 +00:00
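
That is, llvm.ctlz/llvm.cttz on a zero input must return the bit width of the type (unlike x86's bsr/bsf, which leave the destination undefined); a reference version of the required semantics:

#include <cstdint>

// Reference semantics for llvm.ctlz.i32 / llvm.cttz.i32 as required here:
// a zero input yields 32, the number of bits in the source type.
unsigned ctlz32(uint32_t x) {
  unsigned n = 0;
  while (n < 32 && !(x & (UINT32_C(0x80000000) >> n))) ++n;
  return n;                       // ctlz32(0) == 32
}
unsigned cttz32(uint32_t x) {
  unsigned n = 0;
  while (n < 32 && !(x & (UINT32_C(1) << n))) ++n;
  return n;                       // cttz32(0) == 32
}
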