Commit Graph

1483 Commits

Author SHA1 Message Date
Michael J. Spencer 8b382e7e10 Fix Whitespace.
llvm-svn: 116800
2010-10-19 07:32:42 +00:00
Eric Christopher 604e142844 Combine these together - should probably have some text associated
that explains why what we just asserted is wrong.

llvm-svn: 116333
2010-10-12 19:44:17 +00:00
Nick Lewycky eb7b91d417 Mark variable 'NoImplicitFloatOps' used only in an assert as used.
llvm-svn: 116323
2010-10-12 18:18:03 +00:00
Dan Gohman 395a898b2b Initial va_arg support for x86-64. Patch by David Meyer!
llvm-svn: 116319
2010-10-12 18:00:49 +00:00
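For context, a minimal C use of va_arg that such lowering must handle; this is only an illustration of the construct, not code from the patch. On x86-64 (SysV ABI), lowering va_arg means walking the register-save area (six GPRs, eight XMM registers) before falling back to the overflow area on the stack.

#include <stdarg.h>

/* Sum a count-prefixed list of ints.  Each va_arg here must check the
 * register-save area first, then the overflow (stack) area. */
int sum(int count, ...) {
    va_list ap;
    va_start(ap, count);
    int total = 0;
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}
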
Andrew Trick e01c9001c9 Fixes bug 8297: i386 cmpxchg8b, missing MachineMemOperand
llvm-svn: 116214
2010-10-11 19:02:04 +00:00
Michael J. Spencer 8dedb62019 X86: Call ulldiv and ftol2 on Windows instead of their libgcc equivalents.
llvm-svn: 116188
2010-10-11 05:29:15 +00:00
Michael J. Spencer 00765e5be0 X86: MinGW should always use libgcc on Windows.
llvm-svn: 116177
2010-10-10 23:11:06 +00:00
Michael J. Spencer 7a573a5e1f X86: Call _alldiv instead of __divdi3 on Windows (excluding cygwin).
llvm-svn: 116174
2010-10-10 22:04:34 +00:00
Michael J. Spencer bee1f7f5ba Fix Whitespace.
llvm-svn: 116173
2010-10-10 22:04:20 +00:00
Cameron Esfahani d57f9ecd4a Recommit 116056, now with the missing file...
llvm-svn: 116083
2010-10-08 19:24:18 +00:00
Andrew Trick cf97db2402 reverting 116056: win64_params.ll may need to be conditionalized?
llvm-svn: 116063
2010-10-08 17:22:42 +00:00
Cameron Esfahani a07b5c291d Small patch to restore home register stack space allocation for the Win64 case. Add test case. This code eventually needs to be tighter, since it's always allocating it, even in leaf routines.
llvm-svn: 116056
2010-10-08 10:31:30 +00:00
Evan Cheng 5c31bf0619 Canonicalize X86ISD::MOVDDUP nodes to v2f64 to make sure all cases match. Also eliminate unneeded isel patterns. rdar://8520311
llvm-svn: 115977
2010-10-07 20:50:20 +00:00
Anton Korobeynikov d77a443631 va_args support for Win64.
Patch by Cameron!

llvm-svn: 115480
2010-10-03 22:52:07 +00:00
Dale Johannesen dd224d2333 Massive rewrite of MMX:
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.

Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics. 

MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces.  Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
pre-existing x86_mmx operation.

The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.

llvm-svn: 115243
2010-09-30 23:57:10 +00:00
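An aside on the EMMS problem mentioned above: MMX registers alias the x87 floating-point stack, so every stretch of MMX code must end with an emms instruction before x87 code runs again. The sketch below (illustrative only, not from the patch) shows why a compiler-introduced MMX operation would be unsafe: the optimizer would also have to place the emms correctly.

#include <mmintrin.h>
#include <stdint.h>

/* Explicit MMX use: add 8 bytes pairwise, then reset the x87 tag word. */
void add_pixels(const uint8_t *a, const uint8_t *b, uint8_t *out) {
    __m64 va = *(const __m64 *)a;
    __m64 vb = *(const __m64 *)b;
    *(__m64 *)out = _mm_add_pi8(va, vb);  /* paddb mm, mm */
    _mm_empty();                          /* emms: x87 stack usable again */
}

double scale(double x) { return x * 2.0; }  /* safe only after emms */
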
Chris Lattner b5b71e07af improve indentation
llvm-svn: 114815
2010-09-27 06:34:01 +00:00
Eric Christopher 422e463be7 This code should never fire on non-darwin subtargets.
llvm-svn: 114811
2010-09-27 06:01:51 +00:00
Dale Johannesen 6a4cd59b08 We can't return SSE/MMX vectors if SSE is disabled.
llvm-svn: 114745
2010-09-24 19:05:48 +00:00
Bob Wilson e1223fb583 Attempt to fix llvm-gcc build. It was crashing when building gcov.o for an
ARM cross-compiler on x86, because the MMO size did not match the type size.
This fixes the MMO size and also the size of the stack object to match the
type size.

llvm-svn: 114554
2010-09-22 17:35:14 +00:00
Chris Lattner 8a236b63d8 reimplement elf TLS support in terms of addressing modes, eliminating SegmentBaseAddress.
llvm-svn: 114529
2010-09-22 04:39:11 +00:00
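For reference, ELF TLS on x86 is exactly a segment-register addressing mode: thread-local variables are reached through %fs (x86-64) or %gs (i386), e.g. movl %fs:x@tpoff, %eax in the local-exec model, which is why it can be modeled as an addressing mode rather than a special SegmentBaseAddress node. A trivial example (illustrative only):

/* Compiled with the local-exec TLS model, the increment below becomes
 * roughly:  movl %fs:counter@tpoff, %eax; addl $1, %eax; ... */
__thread int counter;
int bump(void) { return ++counter; }
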
Chris Lattner a5156c30ed convert the last 4 X86ISD nodes that should have memoperands to have them.
llvm-svn: 114523
2010-09-22 01:28:21 +00:00
Chris Lattner ed85da5600 give X86ISD::FNSTCW16m a memoperand, since it touches memory. It can
only access the stack, though, due to how it is generated.

llvm-svn: 114522
2010-09-22 01:11:26 +00:00
Chris Lattner 78f518b79b give FP_TO_INT16_IN_MEM and friends a memoperand. They are only
used with stack slots, but hey, let's be safe.

llvm-svn: 114521
2010-09-22 01:05:16 +00:00
Chris Lattner 54e5329545 give VZEXT_LOAD a memory operand, it now works with segment registers.
llvm-svn: 114515
2010-09-22 00:34:38 +00:00
Chris Lattner e479e9643b give LCMPXCHG_DAG[8] a memory operand, allowing it to work with addrspace 256/257
llvm-svn: 114508
2010-09-21 23:59:42 +00:00
Owen Anderson 5e65dfbb97 Reimplement r114460 in target-independent DAGCombine rather than target-dependent, by using
the predicate to discover the number of sign bits.  Enhance X86's target lowering to provide
a useful response to this query.

llvm-svn: 114473
2010-09-21 20:42:50 +00:00
Chris Lattner 886250c8f0 convert a couple more places to use the new getStore()
llvm-svn: 114463
2010-09-21 18:51:21 +00:00
Owen Anderson f4b1a5bdc4 When adding the carry bit to another value on X86, exploit the fact that the carry-materialization
(sbbl x, x) sets the registers to 0 or ~0.  Combined with two's complement arithmetic, we can fold
the intermediate AND and the ADD into a single SUB.

This fixes <rdar://problem/8449754>.

llvm-svn: 114460
2010-09-21 18:41:19 +00:00
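A quick sanity check of the arithmetic behind this fold, as read from the message above (illustrative, not the actual DAG combine): sbbl x, x yields m = 0 or m = ~0 = -1, so adding the carry bit to y is y + (m & 1), and for those two values of m this equals y - m, collapsing the AND and the ADD into one SUB.

#include <assert.h>
#include <stdint.h>

int main(void) {
    uint32_t y = 12345;
    uint32_t masks[] = {0u, ~0u};  /* the only values sbb x,x produces */
    for (int i = 0; i < 2; i++) {
        uint32_t m = masks[i];
        /* y + carry == y - mask, since mask is 0 or -1 */
        assert(y + (m & 1) == y - m);
    }
    return 0;
}
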
Chris Lattner 802527adad eliminate some uses of the getStore overload.
llvm-svn: 114453
2010-09-21 17:50:43 +00:00
Chris Lattner 7727d05dbb convert the targets off the non-MachinePointerInfo version of getLoad.
llvm-svn: 114410
2010-09-21 06:44:06 +00:00
Chris Lattner 82fd06d3ce it's more elegant to put the "getConstantPool" and
"getFixedStack" on the MachinePointerInfo class.  While
this isn't the problem I'm setting out to solve, it is the
right way to eliminate PseudoSourceValue, so let's go with it.

llvm-svn: 114406
2010-09-21 06:22:23 +00:00
Chris Lattner c3e05d6e50 update the X86 backend to use the MachinePointerInfo version of one
of the getLoad methods.  This fixes at least one bug where an incorrect
svoffset is passed in (a potential combiner-aa miscompile).

llvm-svn: 114404
2010-09-21 06:02:19 +00:00
Chris Lattner 2510de2bea reimplement memcpy/memmove/memset lowering to use MachinePointerInfo
instead of srcvalue/offset pairs.  This corrects SV info for mem 
operations whose size is > 32 bits.

llvm-svn: 114401
2010-09-21 05:40:29 +00:00
Chris Lattner e3d864b857 convert targets to the new MF.getMachineMemOperand interface.
llvm-svn: 114391
2010-09-21 04:39:43 +00:00
John Thompson 1094c80281 Added skeleton for inline asm multiple alternative constraint support.
llvm-svn: 113766
2010-09-13 18:15:37 +00:00
Bruno Cardoso Lopes 99a9f4661a Minor change. Fix comments and remove unused and redundant code
llvm-svn: 113378
2010-09-08 18:12:31 +00:00
Bruno Cardoso Lopes f7fee1c185 x86 vector shuffle lowering now relies only on target specific
nodes to emit shuffles and no longer does isel mask matching.
- Add the selection of the remaining shuffle opcode (movddup)
- Introduce two new functions to "recognize" where we may get
potential folds and add several comments to them explaining why
they are not yet in the desired shape.
- Add more patterns as a fallback for the case where we select
a specific shuffle opcode as if it could fold a load, but it
can't, so remap to a valid instruction.
- Add a couple of FIXMEs to address in the following days once
there's a good solution to the current folding problem.

llvm-svn: 113369
2010-09-08 17:43:25 +00:00
Bruno Cardoso Lopes 6b1d62c529 Factor out some x86 vector shuffle rewriting and add comments about the direction the shuffle lowering is heading
llvm-svn: 113286
2010-09-07 21:03:14 +00:00
Bruno Cardoso Lopes 7c483028fb Move code around to prepare for consolidating some of the logic into another function
llvm-svn: 113267
2010-09-07 20:20:27 +00:00
Bill Wendling 353802114f Add an MVT::x86mmx type. It will take the place of all current MMX vector types.
llvm-svn: 113261
2010-09-07 20:03:56 +00:00
Bruno Cardoso Lopes 5a45db3e6c decouple the MMX check from the regular splat checks. Some refactoring is coming, and MMX should be left alone so it can be easily removed after moving to intrinsics
llvm-svn: 113247
2010-09-07 18:41:45 +00:00
Bruno Cardoso Lopes 4f5d4b4a6e Remove a now-useless check; the code can be matched below, so there's no need to leave it for isel
llvm-svn: 113242
2010-09-07 18:29:03 +00:00
Bruno Cardoso Lopes c9b3316fea Minor change. Since the checks are equivalent, use isMMX
llvm-svn: 113239
2010-09-07 18:24:00 +00:00
Bruno Cardoso Lopes c6accda78e Remove the last bit of isShuffleMaskLegal checks and improve the comment regarding mmx shuffles
llvm-svn: 113059
2010-09-04 02:58:56 +00:00
Bruno Cardoso Lopes 731bcc1abf make explicit that we do not handle several MMX shuffles
llvm-svn: 113058
2010-09-04 02:50:13 +00:00
Bruno Cardoso Lopes 20779ee157 Emit target specific nodes to handle palignr. Do not touch it for MMX versions yet.
llvm-svn: 113056
2010-09-04 02:36:07 +00:00
Bruno Cardoso Lopes cff7cd18ab Emit target specific nodes to handle splats starting at zero indices
llvm-svn: 113055
2010-09-04 02:02:14 +00:00
Bruno Cardoso Lopes 95759917eb Emit target specific nodes for isPSHUFHWMask and isPSHUFLWMask
llvm-svn: 113050
2010-09-04 01:36:45 +00:00
Bruno Cardoso Lopes 2b57008c72 Emit target specific nodes for isSHUFPMask
llvm-svn: 113048
2010-09-04 01:22:57 +00:00
Bruno Cardoso Lopes 2f7af36134 Previous isMOVLMask matching already emits target nodes, remove check
llvm-svn: 113047
2010-09-04 00:50:08 +00:00
Bruno Cardoso Lopes 9f8e704151 One more check from the original isShuffleMaskLegal goes away
llvm-svn: 113045
2010-09-04 00:46:16 +00:00
Bruno Cardoso Lopes 16959372bb Remove a duplicated but useless check that I inserted in the previous commit.
llvm-svn: 113044
2010-09-04 00:43:12 +00:00
Bruno Cardoso Lopes 44578d38d3 Refactor some code and remove the extra checks for unpckl_undef and unpckh_undef
llvm-svn: 113043
2010-09-04 00:39:43 +00:00
Bruno Cardoso Lopes 7829d0e74b Remove check for unpckh mask
llvm-svn: 113035
2010-09-03 23:32:47 +00:00
Bruno Cardoso Lopes d1dacc57aa Remove check for unpckl mask
llvm-svn: 113034
2010-09-03 23:31:50 +00:00
Bruno Cardoso Lopes 207b9d6218 Inline isShuffleMaskLegal into LowerVECTOR_SHUFFLE, so we can start
checking each standalone condition and decide whether to emit target
specific nodes or to remove the condition if it was already matched before.

llvm-svn: 113031
2010-09-03 23:24:06 +00:00
Bruno Cardoso Lopes 2bef20eda7 Reapply the considered-harmful part of r112934 and r112942:
"Use target specific nodes instead of relying on unpckl and
unpckh pattern fragments during isel time. Also place a
depth limit in getShuffleScalarElt."

llvm-svn: 113020
2010-09-03 22:09:41 +00:00
Bruno Cardoso Lopes fe8717c573 Reintroduce a simple function refactoring done in r112934, also without any functionality changes
llvm-svn: 113008
2010-09-03 20:20:02 +00:00
Bruno Cardoso Lopes 48e589b122 Reapply pieces of r112942 and r112934 which make no
functional changes

llvm-svn: 113007
2010-09-03 20:10:35 +00:00
Bruno Cardoso Lopes 6979cf0808 Reapply "Fix comment"
llvm-svn: 113006
2010-09-03 19:55:05 +00:00
Daniel Dunbar 6f3da24d70 Revert r112934, "- Use specific nodes to match unpckl masks.", which introduced
some infinite loop and select failures.
 - Apologies for eager reverting, but it's branch day.

llvm-svn: 113000
2010-09-03 19:38:11 +00:00
Daniel Dunbar f1aacd55c0 Revert r112938 "Fix comment", which depends on r112934, which introduced some
infinite loop and select failures.

llvm-svn: 112999
2010-09-03 19:38:08 +00:00
Daniel Dunbar 0ffe4db45c Revert r112942, "Use punpckh and unpckh family of nodes instead of using unpckh
mask pattern fragment", which depends on r112934, which introduced some infinite
loop and select failures.

llvm-svn: 112998
2010-09-03 19:38:05 +00:00
Bruno Cardoso Lopes a85ec10483 Use punpckh and unpckh family of nodes instead of using unpckh mask pattern fragment
llvm-svn: 112942
2010-09-03 01:39:08 +00:00
Bruno Cardoso Lopes adc6bca2dd Fix comment
llvm-svn: 112938
2010-09-03 01:28:51 +00:00
Bruno Cardoso Lopes cce44678b4 - Use specific nodes to match unpckl masks.
- Teach getShuffleScalarElt how to handle more target
specific nodes, so the DAGCombine can make use of it.
- Add another hack to avoid the node update problem
during legalization. More details in the comments.

llvm-svn: 112934
2010-09-03 01:24:00 +00:00
Anton Korobeynikov a689c5b2c0 Revert win64 changes. They seem to be incomplete
llvm-svn: 112885
2010-09-02 22:31:32 +00:00
Anton Korobeynikov 56291f7e53 Properly allocate win64 shadow reg area.
Patch by Jan Sjodin!

llvm-svn: 112875
2010-09-02 22:16:28 +00:00
Bruno Cardoso Lopes 489613f1e5 Replace unpckl_undef and unpckh_undef matching with target specific opcodes
llvm-svn: 112806
2010-09-02 05:23:12 +00:00
Bruno Cardoso Lopes e4e4be3885 Move condition out to prepare for more matching
llvm-svn: 112805
2010-09-02 04:20:26 +00:00
Bruno Cardoso Lopes bf7fd146c7 Remove checking for isUNPCKL_v_undef_Mask, the specific node is already emitted for it
llvm-svn: 112804
2010-09-02 03:57:58 +00:00
Bruno Cardoso Lopes 6a7f634487 become more strict about when it's safe to use X86ISD::MOVLPS
llvm-svn: 112799
2010-09-02 02:35:51 +00:00
Bruno Cardoso Lopes 04c25c15c7 Revert r112689; avoid those kinds of checks because they mess up MMX
llvm-svn: 112760
2010-09-01 22:59:03 +00:00
Bruno Cardoso Lopes b3825216ce Use movlps, movlpd, movss and movsd specific nodes instead of pattern matching with movlp pattern fragment
llvm-svn: 112694
2010-09-01 05:08:25 +00:00
Bruno Cardoso Lopes 6aaebe877b minor change, simplify some logic
llvm-svn: 112689
2010-09-01 00:57:08 +00:00
Bruno Cardoso Lopes 2b025707a2 Move some functions around so they can be used by an upcoming function
llvm-svn: 112687
2010-09-01 00:51:36 +00:00
Bruno Cardoso Lopes 4b56d87290 Use x86 specific MOVSLDUP node, add more patterns to match it and remove useless load nodes
llvm-svn: 112661
2010-08-31 22:35:05 +00:00
Bruno Cardoso Lopes 61996ef835 Use x86 specific MOVSHDUP node and add more patterns to match it
llvm-svn: 112657
2010-08-31 22:22:11 +00:00
Bruno Cardoso Lopes 5de15ce468 Use MOVHLPS node instead of matching using movhlps and movhlps_undef pattern fragments
llvm-svn: 112644
2010-08-31 21:38:49 +00:00
Bruno Cardoso Lopes 03e4c35302 Use MOVLHPS and MOVHLPS x86 nodes whenever possible. Also remove some useless nodes
llvm-svn: 112642
2010-08-31 21:15:21 +00:00
Bruno Cardoso Lopes dfd9dd5d75 Use X86ISD::MOVSS and MOVSD to represent the movl mask pattern, also fix the handling of those nodes when searching for scalars inside vector shuffles
llvm-svn: 112570
2010-08-31 02:26:40 +00:00
Chris Lattner 94656b1c8c fix the buildvector->insertp[sd] logic to not always create a redundant
insertp[sd] $0, which is a noop.  Before:

_f32:                                   ## @f32
	pshufd	$1, %xmm1, %xmm2
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm2, %xmm3
	addss	%xmm1, %xmm0
                                        ## kill: XMM0<def> XMM0<kill> XMM0<def>
	insertps	$0, %xmm0, %xmm0
	insertps	$16, %xmm3, %xmm0
	ret

after:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm1, %xmm3
	movdqa	%xmm2, %xmm0
	insertps	$16, %xmm3, %xmm0
	ret

The extra movs are due to a random (poor) scheduling decision.

llvm-svn: 112379
2010-08-28 17:59:08 +00:00
Chris Lattner bcb6090ad0 fix the BuildVector -> unpcklps logic to not do pointless shuffles
when the top elements of a vector are undefined.  This happens all
the time for X86-64 ABI stuff because only the low 2 elements of
a 4 element vector are defined.  For example, on:

_Complex float f32(_Complex float A, _Complex float B) {
  return A+B;
}

We used to produce (with SSE2, SSE4.1+ uses insertps):

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$16, %xmm2, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm1
	movdqa	%xmm2, %xmm0
	unpcklps	%xmm1, %xmm0
	ret

We now produce:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm1, %xmm3
	movaps	%xmm2, %xmm0
	unpcklps	%xmm3, %xmm0
	ret

This implements rdar://8368414

llvm-svn: 112378
2010-08-28 17:28:30 +00:00
Chris Lattner 96db6e66f4 improve comments in the unpcklps generating logic, introduce
a new EltStride variable instead of reusing the NumElems variable
for a non-obvious purpose.  No functionality change.

llvm-svn: 112377
2010-08-28 17:15:43 +00:00
Bruno Cardoso Lopes a982aa24ef Clean up the logic of vector shuffles -> vector shifts.
Also teach this logic how to handle target specific shuffles if
needed, this is necessary while searching recursively for zeroed
scalar elements in vector shuffle operands.

llvm-svn: 112348
2010-08-28 02:46:39 +00:00
Anton Korobeynikov c0b36921c2 Properly handle passing of FP values to a varargs function on Win64:
the value should be copied to the corresponding shadow reg as well.
Patch by Cameron Esfahani!

llvm-svn: 112262
2010-08-27 14:43:06 +00:00
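The Win64 convention involved here: the first four arguments have paired homes (RCX/XMM0, RDX/XMM1, R8/XMM2, R9/XMM3), and for a varargs callee an FP argument passed in an XMM register must also be duplicated into the matching integer register, so the callee can spill all four into the 32-byte shadow (home) area and walk it with va_arg. A small illustration (not from the patch):

#include <stdarg.h>
#include <stdio.h>

double first_double(int n, ...) {
    va_list ap;
    va_start(ap, n);
    double d = va_arg(ap, double);  /* read back from the home area */
    va_end(ap);
    return d;
}

int main(void) {
    /* 3.5 travels in XMM1 and, per the rule above, also in RDX. */
    printf("%f\n", first_double(1, 3.5));
    return 0;
}
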
Bruno Cardoso Lopes e25ba0c7c2 zap the now unused MVT::getIntVectorWithNumElements
llvm-svn: 112218
2010-08-26 20:53:12 +00:00
Chris Lattner eb2cc0ce0e implement SplitVecOp_CONCAT_VECTORS, fixing the included testcase with SSE1.
llvm-svn: 112171
2010-08-26 05:51:22 +00:00
Chris Lattner cc60609cb4 fix sse1 only codegen in x86-64 mode, which is something we
apparently try to support.

llvm-svn: 112168
2010-08-26 05:24:29 +00:00
Bruno Cardoso Lopes d4085f6e91 Revert this for now; PUNPCKLDQ doesn't operate on v4f32
llvm-svn: 112090
2010-08-25 21:26:37 +00:00
Anton Korobeynikov b3b53ecac0 Fix nasty mingw32 bug, which e.g. prevented llvm-gcc bootstrap there.
Mark the _alloca call as clobbering EFLAGS, otherwise some DCE might remove
other flags-clobbering stuff (e.g. cmp instructions) occurring after the
_alloca call.

llvm-svn: 112034
2010-08-25 07:50:11 +00:00
Bruno Cardoso Lopes 0770d25758 PUNPCKLDQ should also be used for v4f32
llvm-svn: 112020
2010-08-25 02:55:40 +00:00
Bruno Cardoso Lopes 2e45d522c1 teach lowering to get target specific nodes for pshufd, emulating the same isel behavior for now, so we can pass all vector shuffle tests
llvm-svn: 112017
2010-08-25 02:35:37 +00:00
Dan Gohman c88fda477a Fix X86's isLegalAddressingMode to recognize that static addresses
need not be RIP-relative in small mode.

llvm-svn: 111917
2010-08-24 15:55:12 +00:00
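Concretely: in the small code model every symbol lives in the low 2 GiB, so a static address can be encoded as a sign-extended 32-bit absolute displacement rather than RIP-relative, which lets it participate in richer addressing modes. A sketch of the difference (illustrative only):

static int table[256];

/* With an absolute displacement the load can fold the index:
 *     movl table(,%rdi,4), %eax
 * whereas RIP-relative addressing allows no index register and would
 * need a separate lea of table(%rip) first. */
int lookup(unsigned i) { return table[i]; }
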
Bruno Cardoso Lopes 758d7b1f5c Use pshufhw and pshuflw in more cases and fix the number of arguments to getTargetShuffleNode
llvm-svn: 111890
2010-08-24 01:16:15 +00:00
Bruno Cardoso Lopes 264d90fff7 Start using target specific nodes for shuffles: pshufhw and pshuflw
llvm-svn: 111837
2010-08-23 20:41:02 +00:00
Anton Korobeynikov cbbe4501df Revert invalid r111792. Jump tables are not broken on x86-64 / coff,
it's the COFF emitter which does not support differences of two symbols
(and needs to be fixed). GAS is fine with the code produced.

llvm-svn: 111801
2010-08-23 07:38:51 +00:00
Michael J. Spencer e87231232a Work around broken jump tables on x86-64 COFF.
llvm-svn: 111792
2010-08-23 04:45:37 +00:00
Bruno Cardoso Lopes 9f20e7a1bf Prepare LowerVECTOR_SHUFFLEv8i16 to use x86 target specific nodes directly
llvm-svn: 111704
2010-08-21 01:32:18 +00:00
Bruno Cardoso Lopes 6f3b38a851 This is the first step towards refactoring the x86 vector shuffle code. The
general idea here is to have a group of x86 target specific nodes which are
going to be selected during lowering and then directly matched in isel.

The commit includes the addition of those specific nodes and a *bunch* of
patterns, and incrementally we're going to switch between them and what we
have right now. Both the patterns and target specific nodes can change as
we move forward with this work.

llvm-svn: 111691
2010-08-20 22:55:05 +00:00