Commit Graph

8585 Commits

Craig Topper 1c0e22f57a Fix AVX vs SSE patterns ordering issue for VPCMPESTRM and VPCMPISTRM.
llvm-svn: 149053
2012-01-26 07:31:30 +00:00
Craig Topper b91760eff8 Remove some more patterns by custom lowering intrinsics to target specific nodes.
llvm-svn: 149052
2012-01-26 07:18:03 +00:00
Chris Lattner 33633a90a0 fix a bug I introduced in r148929, this is not a splat!
Thanks to Eli for noticing.

llvm-svn: 148947
2012-01-25 09:56:22 +00:00
Craig Topper 7834900950 Custom lower PSIGN and PSHUFB intrinsics to their corresponding target specific nodes so we can remove the isel patterns.
llvm-svn: 148933
2012-01-25 06:43:11 +00:00
Chris Lattner 47a86bdbe2 use ConstantVector::getSplat in a few places.
llvm-svn: 148929
2012-01-25 06:02:56 +00:00
Craig Topper ce4f9c5668 Custom lower phadd and phsub intrinsics to target specific nodes. Remove the patterns that are no longer necessary.
llvm-svn: 148927
2012-01-25 05:37:32 +00:00
Craig Topper 5bcf070e68 Remove AVX 256-bit unaligned load intrinsics. 128-bit versions had been removed a while ago.
llvm-svn: 148922
2012-01-25 04:42:03 +00:00
Craig Topper 3ad5bc019a Merge intrinsic pattern and no-pattern versions of VCVTSD2SI instruction definitions. This matches the non-AVX versions of the same instructions.
llvm-svn: 148914
2012-01-25 03:52:09 +00:00
Devang Patel a410ed3ced Intel syntax: Extend the special hand-coded logic to recognize special instructions.
llvm-svn: 148864
2012-01-24 21:43:36 +00:00
Elena Demikhovsky 0b0c5d8c4c The ZERO_EXTEND operation is optimized for AVX.
v8i16 -> v8i32 and v4i32 -> v4i64 now use vpunpck* instructions.

llvm-svn: 148803
2012-01-24 13:54:13 +00:00
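
For illustration only (my own C++ sketch using SSE2 intrinsics, not part of the commit above): the interleave trick that vpunpck* performs, zero-extending packed i16 elements by unpacking them against a zero vector.

    #include <emmintrin.h>

    // Zero-extend eight i16 lanes to i32 by interleaving with zero; the low and
    // high halves correspond to what vpunpcklwd / vpunpckhwd produce.
    static void zext_v8i16_to_v8i32(__m128i v, __m128i out[2]) {
      const __m128i zero = _mm_setzero_si128();
      out[0] = _mm_unpacklo_epi16(v, zero);  // elements 0..3 as i32
      out[1] = _mm_unpackhi_epi16(v, zero);  // elements 4..7 as i32
    }
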
Craig Topper 0d8e67aebd Add comments near load pattern fragments indicating that all integer vector loads are promoted to v2i64 or v4i64 so that no one tries to reintroduce pattern fragments for other types.
llvm-svn: 148771
2012-01-24 03:03:17 +00:00
Devang Patel eba7d3dba9 Fix typo.
llvm-svn: 148751
2012-01-23 23:56:33 +00:00
Devang Patel cf893a437e Intel syntax: Robustify parsing of memory operand's displacement expression.
llvm-svn: 148737
2012-01-23 22:35:25 +00:00
Devang Patel e660fdd953 Intel syntax: Parse memory operand with empty base reg, e.g. DWORD PTR [4*RDI]
llvm-svn: 148721
2012-01-23 20:20:06 +00:00
Devang Patel 880bc1644b Intel syntax: Parse segment registers.
llvm-svn: 148712
2012-01-23 18:31:58 +00:00
Craig Topper edd1d0acfc Custom lower PCMPEQ/PCMPGT intrinsics to target specific nodes and remove the intrinsic patterns.
llvm-svn: 148687
2012-01-23 08:18:28 +00:00
Craig Topper 6b90c5d03e Update more places to use target specific nodes for vector shifts instead of intrinsics.
llvm-svn: 148685
2012-01-23 06:46:22 +00:00
Craig Topper 5e80db4e4f Custom lower vector shift intrinsics to target specific nodes and remove the patterns that are no longer needed.
llvm-svn: 148684
2012-01-23 06:16:53 +00:00
Craig Topper 20c98df340 Remove pattern fragments for v32i8, v16i16, v8i32, v16i8, v8i16, and v4i32 loads. All integer vector loads are promoted to v2i64 or v4i64 so these pattern fragments can never match. Fix or remove patterns that used these fragments.
llvm-svn: 148672
2012-01-23 00:06:44 +00:00
Craig Topper 0b7ad76bd0 Combine X86 CMPPD and CMPPS node types. Simplifies selection code and pattern matching.
llvm-svn: 148670
2012-01-22 23:36:02 +00:00
Craig Topper bd4884371b Merge PCMPEQB/PCMPEQW/PCMPEQD/PCMPEQQ and PCMPGTB/PCMPGTW/PCMPGTD/PCMPGTQ X86 ISD node types into only two node types. Simplifying opcode selection and pattern matching.
llvm-svn: 148667
2012-01-22 22:42:16 +00:00
Craig Topper 094626414d Add target specific ISD node types for SSE/AVX vector shuffle instructions and change all the code that used to create intrinsic nodes to create the new nodes instead.
llvm-svn: 148664
2012-01-22 19:15:14 +00:00
Craig Topper a4ed5246d8 Make code a little less verbose.
llvm-svn: 148651
2012-01-22 03:07:48 +00:00
Craig Topper cb3433cd58 Remove unused X86 ISD node type defines.
llvm-svn: 148644
2012-01-22 01:15:56 +00:00
Craig Topper 123adfa0f3 Move some vector shift patterns into their instruction definitions.
llvm-svn: 148643
2012-01-22 00:41:20 +00:00
Craig Topper dcaa5fbd08 Add memory patterns for some of the fp<->integer conversion instructions. Fold some patterns into instruction definitions.
llvm-svn: 148641
2012-01-21 18:37:15 +00:00
Benjamin Kramer 5cff13a3fb Remove unused variables.
llvm-svn: 148635
2012-01-21 10:42:44 +00:00
Craig Topper 39bc1e4d25 Fix PR11819 introduced by r148537. I'd commit the test case, but the generated code is terrible as it gets fully scalarized. Expect a future commit to fix that.
llvm-svn: 148632
2012-01-21 08:49:33 +00:00
Devang Patel ce6a2ca8c8 Intel syntax: Robustify register parsing.
llvm-svn: 148591
2012-01-20 22:32:05 +00:00
David Blaikie 46a9f016c5 More dead code removal (using -Wunreachable-code)
llvm-svn: 148578
2012-01-20 21:51:11 +00:00
Devang Patel d0930fff85 Intel syntax: Parse ... PTR [-8]
llvm-svn: 148570
2012-01-20 21:21:01 +00:00
Devang Patel f36613cb45 Intel syntax: For now, disable ambiguous JMP64pcrel32 for intel syntax.
llvm-svn: 148569
2012-01-20 21:14:06 +00:00
Craig Topper a409479023 Improve 256-bit shuffle splitting to allow 2 sources in each 128-bit lane, as long as only a single lane of the source is used in the destination lane. This makes the splitting match much closer to what happens with 256-bit shuffles when AVX is disabled and only 128-bit XMM is allowed.
llvm-svn: 148537
2012-01-20 09:29:03 +00:00
Craig Topper 3469212c82 Add support for selecting 256-bit PALIGNR.
llvm-svn: 148532
2012-01-20 05:53:00 +00:00
Eli Friedman 32c7c25dcb Support MSVC x86-32 sret convention. PR11688. Patch by Joe Groff.
llvm-svn: 148513
2012-01-20 00:05:46 +00:00
Devang Patel f83dcfd052 Post-process 'and' and 'sub' instructions and select a better encoding, if available.
llvm-svn: 148489
2012-01-19 18:40:55 +00:00
Devang Patel 2529dd9e00 Intel syntax: There is no need to create a unary expression for a simple negative displacement.
llvm-svn: 148486
2012-01-19 18:15:51 +00:00
Devang Patel 4a62ff9bcb Post-process 'xor', 'or' and 'cmp' instructions and select a better encoding, if available.
llvm-svn: 148485
2012-01-19 17:53:25 +00:00
Craig Topper a875b7ccc7 Folding table additions and fixes for AVX.
llvm-svn: 148467
2012-01-19 08:50:38 +00:00
Craig Topper 80576e8d1f Merge 128-bit and 256-bit SHUFPS/SHUFPD handling.
llvm-svn: 148466
2012-01-19 08:19:12 +00:00
Nick Lewycky ecc0084f72 Add a TargetOption for disabling tail calls.
llvm-svn: 148442
2012-01-19 00:34:10 +00:00
Jakob Stoklund Olesen ff482f733b Add experimental -x86-use-regmask command line option.
It adds register mask operands to x86 call instructions.  Once all the
backend passes support register mask operands, this will be permanently
enabled.

llvm-svn: 148438
2012-01-18 23:52:22 +00:00
Jakob Stoklund Olesen f1fb1d2375 Ignore register mask operands when lowering instructions to MC.
This is similar to implicit register operands.  MC doesn't understand
register liveness and call clobbers.

llvm-svn: 148437
2012-01-18 23:52:19 +00:00
Devang Patel de47cced25 Process instructions after matching to select an alternative encoding which may be more desirable.
llvm-svn: 148431
2012-01-18 22:42:29 +00:00
Jim Grosbach aba3de99c0 Tidy up. MCAsmBackend naming conventions.
llvm-svn: 148400
2012-01-18 18:52:16 +00:00
Jakob Stoklund Olesen f43b599550 Add a CoveredBySubRegs property to Register descriptions.
When set, this bit indicates that a register is completely defined by
the value of its sub-registers.

Use the CoveredBySubRegs property to infer which super-registers are
call-preserved given a list of callee-saved registers.  For example, the
ARM registers D8-D15 are callee-saved.  This now automatically implies
that Q4-Q7 are call-preserved.

Conversely, Win64 callees save XMM6-XMM15, but the corresponding
YMM6-YMM15 registers are not call-preserved because they are not fully
defined by their sub-registers.

llvm-svn: 148363
2012-01-18 00:16:39 +00:00
Jakob Stoklund Olesen d51a710bde Move X86 callee saved register lists to the X86CallConv .td file.
Add a trivial implementation of the getCallPreservedMask() hook.

llvm-svn: 148347
2012-01-17 22:47:01 +00:00
Devang Patel c9ed518792 Intel syntax: Fix parser match class to check memory operand size.
llvm-svn: 148338
2012-01-17 21:48:03 +00:00
Devang Patel a7143b6a2b Intel syntax: Parse "BYTE PTR [RDX + RCX]"
llvm-svn: 148334
2012-01-17 21:25:10 +00:00
Devang Patel 2ed6718616 Untabify.
llvm-svn: 148322
2012-01-17 19:09:22 +00:00
Devang Patel 8b39be79ad Intel syntax: Do not unnecessarily create a plus expression for the memory operand displacement.
llvm-svn: 148321
2012-01-17 19:08:07 +00:00
Devang Patel 41b9ddeb7a Intel syntax: Robustify memory operand parsing.
llvm-svn: 148312
2012-01-17 18:00:18 +00:00
Nadav Rotem 86c3807b99 Fix warning.
llvm-svn: 148301
2012-01-17 09:31:09 +00:00
Nadav Rotem 86e5390dbf Fix PR11769.
In CanXFormVExtractWithShuffleIntoLoad we assumed that EXTRACT_VECTOR_ELT can be later handled by the DAGCombiner.
However, in some cases on AVX, the EXTRACT_VECTOR_ELT is legalized to EXTRACT_SUBVECTOR + EXTRACT_VECTOR_ELT, which
currently is not handled by the DAGCombiner. In this patch I added a check that we only extract from the XMM part.

llvm-svn: 148298
2012-01-17 09:13:19 +00:00
Craig Topper 9cafcd8baa Remove unnecessary AVX check from an assert. hasSSE2 is enough.
llvm-svn: 148295
2012-01-17 08:23:44 +00:00
Craig Topper 37b10ef250 Fix a crasher when PerformShiftCombine receives a BUILD_VECTOR of all UNDEF. Probably could use better handling in DAG combine or getNode. Fixes PR11772.
llvm-svn: 148285
2012-01-17 04:44:50 +00:00
Eli Friedman 206ca569aa Make sure the non-SSE lowering for fences correctly clobbers EFLAGS. PR11768.
llvm-svn: 148240
2012-01-16 16:42:21 +00:00
Eli Friedman 75e3db4c7a Get rid of unused codegen-only instruction.
llvm-svn: 148239
2012-01-16 16:29:35 +00:00
Craig Topper db8890aedd Give priority to AVX over SSE for 128-bit floating point unpck instructions.
llvm-svn: 148233
2012-01-16 09:56:42 +00:00
Nadav Rotem 57935243bd [AVX] Optimize x86 VSELECT instructions using SimplifyDemandedBits.
We know that the blend instructions only use the MSB, so if the mask is
sign-extended then we can convert it into a SHL instruction. This is a
common pattern because the type-legalizer sign-extends the i1 type which
is used by the LLVM-IR for the condition.

Added a new optimization in SimplifyDemandedBits for SIGN_EXTEND_INREG -> SHL.

llvm-svn: 148225
2012-01-15 19:27:55 +00:00
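
A minimal C++ sketch of why the transform above is safe (my own example using the SSE4.1 intrinsic, not from the commit): the blend reads only the sign bit of each mask element, so a mask produced by shifting the i1 condition into bit 31 selects the same lanes as a fully sign-extended mask.

    #include <smmintrin.h>

    // cond holds 0 or 1 per 32-bit lane; shifting it into the sign bit is
    // enough for blendvps, which ignores the other 31 bits of the mask.
    static __m128 select_lanes(__m128i cond, __m128 a, __m128 b) {
      const __m128i mask = _mm_slli_epi32(cond, 31);        // SHL instead of sign-extend
      return _mm_blendv_ps(b, a, _mm_castsi128_ps(mask));   // a where cond == 1, else b
    }
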
Benjamin Kramer 339ced4e34 Return an ArrayRef from ShuffleVectorSDNode::getMask and push it through CodeGen.
llvm-svn: 148218
2012-01-15 13:16:05 +00:00
Craig Topper c10e1abaf3 Fix the memop type on a couple 256-bit AVX instructions that were using f128mem instead of f256mem.
llvm-svn: 148196
2012-01-14 18:29:57 +00:00
Craig Topper d78429f850 Add a bunch of AVX instructions to the folding tables. Also fixed the alignment on 256-bit AVX2 instructions.
llvm-svn: 148194
2012-01-14 18:14:53 +00:00
Chad Rosier 71a185c5c6 Fix pasto from r146196.
llvm-svn: 148167
2012-01-14 01:50:21 +00:00
Devang Patel 7066d28043 Revert r148131, it was committed before it was ready.
llvm-svn: 148134
2012-01-13 19:28:58 +00:00
Devang Patel 7ecdc6d4f5 Refactor.
llvm-svn: 148131
2012-01-13 19:12:18 +00:00
Craig Topper e52d86a740 Convert SHUFPD with the same register for both sources to PSHUFD if it would prevent a register copy. Similar to SHUFPS, but requires the mask to be converted.
llvm-svn: 148112
2012-01-13 09:21:41 +00:00
Craig Topper b1c2ebf6ee use v8i32 as optimal mem type over v8f32 if AVX2 is enabled. Similar to SSE2 vs SSE1.
llvm-svn: 148109
2012-01-13 08:32:21 +00:00
Craig Topper cb7e13d7c0 Make X86 instruction selection use 256-bit VPXOR for build_vector of all ones if AVX2 is enabled. This gives the ExeDepsFix pass a chance to choose FP vs int as appropriate. Also use v8i32 as the type for getZeroVector if AVX2 is enabled. This is consistent with SSE2 preferring v4i32.
llvm-svn: 148108
2012-01-13 08:12:35 +00:00
Craig Topper 9f14d9f939 Add patterns for v16i16 and v32i8 immAllZerosV to select VPXOR to match v4i64 and v8i32.
llvm-svn: 148106
2012-01-13 06:59:47 +00:00
Craig Topper a4c5a47b97 Use a v8i32 constant pool entry for converting AVX2_SETALLONES. Possibly fixes PR11750.
llvm-svn: 148101
2012-01-13 06:12:41 +00:00
Craig Topper 2aa07f832e Fix typo in PerformAddCombine that caused any vector type to be checked for horizontal add/sub if AVX2 is enabled. This caused an assert to fail for non 128/256-bit vectors when done before type legalizing. Fixes PR11749.
llvm-svn: 148096
2012-01-13 05:04:25 +00:00
Bill Wendling 9c8456f7ef Fix off-by-one error.
llvm-svn: 148077
2012-01-13 00:41:53 +00:00
Bill Wendling ee5eaebc58 Fix the code that was WRONG.
The registers are placed into the saved registers list in the reverse order,
which is why the original loop was written to loop backwards.

llvm-svn: 148064
2012-01-12 23:05:03 +00:00
Elena Demikhovsky 060f6ccdb8 Fixed a bug in LowerVECTOR_SHUFFLE that caused an assertion failure:
lc: X86ISelLowering.cpp:6480: llvm::SDValue llvm::X86TargetLowering::LowerVECTOR_SHUFFLE(llvm::SDValue, llvm::SelectionDAG&) const: Assertion `V1.getOpcode() != ISD::UNDEF&&  "Op 1 of shuffle should not be undef"' failed.
Added a test.

llvm-svn: 148044
2012-01-12 20:33:10 +00:00
Rafael Espindola 00e861ed57 Support segmented stacks on 64-bit FreeBSD.
This patch uses tcb_spare field in the tcb structure to store info.
Patch by Jyun-Yan You.

llvm-svn: 148041
2012-01-12 20:24:30 +00:00
Rafael Espindola 10745d3381 Support segmented stacks on win32.
Uses the pvArbitrary slot of the TIB, which is reserved for applications. We
only support frames with a static size.

llvm-svn: 148040
2012-01-12 20:22:08 +00:00
Devang Patel 4a6e778aae Rename X86ATTAsmParser -> X86AsmParser
We are using one parser to parse AT&T as well as Intel style syntax.

llvm-svn: 148032
2012-01-12 18:03:40 +00:00
Benjamin Kramer 9ece950ddb After Jakob's r147938, exception handling on i386 was completely broken.
Restore the (obviously wrong) behavior from before r147938 without relying on
undefined behavior. Add a fat FIXME note.

This should fix nightly tester failures.

llvm-svn: 148030
2012-01-12 17:37:18 +00:00
Nadav Rotem 0a0a829bea Fix a bug in the AVX 256-bit shuffle code in cases where the splat element is on the boundary of two 128-bit vectors.
The attached testcase was stuck in an endless loop.

llvm-svn: 148027
2012-01-12 15:31:55 +00:00
Benjamin Kramer 5b3aa60b44 X86: Generalize the x << (y & const) optimization to also catch masks with more set bits than 31 or 63.
llvm-svn: 148024
2012-01-12 12:41:34 +00:00
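
A small illustration of the commit above (my own example, not a committed test): x86 shift instructions already mask the count to 5 (or 6) bits, so an explicit mask whose low bits are all ones is redundant even when it has more bits set than 31 or 63.

    // The 'and' below can be dropped during selection: shl on x86 only uses the
    // low 5 bits of the count anyway, and for counts >= 32 the shift result is
    // undefined at the IR/DAG level, so the wider 0xff mask changes nothing.
    unsigned shl_masked(unsigned x, unsigned amt) {
      return x << (amt & 0xff);
    }
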
Devang Patel fc6be102ae Add a predicate method check to match memory operand size, if available.
In AT&T-style asm syntax the memory operand size is derived from the suffix attached to the mnemonic.  In Intel-style asm syntax it is part of the memory operand, hence a predicate method check is required to select the appropriate instruction.

llvm-svn: 148006
2012-01-12 01:51:42 +00:00
Devang Patel 46831de240 Add an Intel-style operand parser skeleton.
This is a work in progress.

llvm-svn: 148002
2012-01-12 01:36:43 +00:00
Chandler Carruth eb21da060b Switch all of the uses of my InsertDAGNode helper to follow the exact
same pattern. We already had this pattern in a few places, but others
tried to make a rough approximation of an actual DAG structure. As not
everywhere went to this trouble, nothing could rely on this being done.
In fact, I've checked all references to these node Ids, and the ones
that are using the topo-sort properties are actually satisfied with
a strict-weak-ordering. The requirement appears to be that Use >= Def.

I've added a big blurb of comments to this bit of the transform to
clarify why the order is so important for the next reader of the code.

I'm starting with this change as it is very small, and trivially
reverted if something breaks or the >= above really does need to be >.
If that proves the case, we can hide the problem by reverting this
patch, but the problem exists elsewhere as well, and so a more
comprehensive solution will be needed.

llvm-svn: 148001
2012-01-12 01:34:44 +00:00
Rafael Espindola d90466bcbf Support segmented stacks on mac.
This uses TLS slot 90, which actually belongs to JavaScriptCore. We only support
frames with a static size.
Patch by Brian Anderson.

llvm-svn: 147960
2012-01-11 19:00:37 +00:00
Rafael Espindola 4eecacb9c8 Generate the segmented stack prologue for fastcc too.
Patch by Brian Anderson.

llvm-svn: 147958
2012-01-11 18:41:19 +00:00
Chandler Carruth 3212a34269 Revert r147945 which disabled an addressing mode transformation. I had
hoped this would revive one of the llvm-gcc selfhost build bots, but it
didn't so it doesn't appear that my transform is the culprit.

If anyone else is seeing failures, please let me know!

llvm-svn: 147957
2012-01-11 18:36:12 +00:00
Rafael Espindola 2b89448d60 Use unsigned comparison in segmented stack prologue.
This is a comparison of two addresses, and GCC does the comparison unsigned.

Patch by Brian Anderson.

llvm-svn: 147954
2012-01-11 18:23:35 +00:00
Rafael Espindola 6635ae1c17 Explicitly set the scale to 1 on some segstack prologue instrs.
Patch by Brian Anderson.

llvm-svn: 147952
2012-01-11 18:14:03 +00:00
Jan Sjödin 21f83d9f36 Add XOP Intrinsics and tests
llvm-svn: 147949
2012-01-11 15:20:20 +00:00
Nadav Rotem baae7e4577 Fix a bug in the lowering of BUILD_VECTOR for AVX. SCALAR_TO_VECTOR does not zero untouched elements. Use INSERT_VECTOR_ELT instead.
llvm-svn: 147948
2012-01-11 14:07:51 +00:00
Chandler Carruth 9bc48e5215 Disable the transformation I added in r147936 to see if it fixes some
strange build bot failures that look like a miscompile into an infloop.
I'll investigate this tomorrow, but I'd both like to know whether my
patch is the culprit, and get the bots back to green.

llvm-svn: 147945
2012-01-11 12:17:47 +00:00
Chandler Carruth 3eacfb83fa Hoist a really redundant code pattern into a helper function, and delete
lots of lines of code. No functionality changed.

llvm-svn: 147942
2012-01-11 11:04:36 +00:00
Chandler Carruth b0049f4a43 Simplify the AND-rooted mask+shift checking code to match that of the
SRL-rooted code.

llvm-svn: 147941
2012-01-11 09:35:04 +00:00
Chandler Carruth 3dbcda8478 Unify the interface of the three mask+shift transform helpers, and
factor the differences that were hiding in one of them into its other
caller, the SRL handling code. No change in behavior.

llvm-svn: 147940
2012-01-11 09:35:02 +00:00
Chandler Carruth aa01e6661a Clarify and make explicit some of the requirements for transforming
mask+shift pairs at the beginning of the ISD::AND case block, and then
hoist the final pattern into a helper function, simplifying and
reflowing it appropriately. This should have no observable behavior
change, but several simplifications fell out of this such as directly
computing the new mask constant, etc.

llvm-svn: 147939
2012-01-11 09:35:00 +00:00
Jakob Stoklund Olesen 6039983755 Fix undefined code and reenable test case.
I don't think the compact encoding code is right, but at least it has
defined behavior now.

llvm-svn: 147938
2012-01-11 09:08:04 +00:00
Chandler Carruth 51d3076bbf Hoist the logic to transform shift+mask combinations into sub-register
extracts and scaled addressing modes into its own helper function. No
functionality changed here, just hoisting and layout fixes falling out
of that hoisting.

llvm-svn: 147937
2012-01-11 08:48:20 +00:00
Chandler Carruth 55b2cdee26 Teach the X86 instruction selection to do some heroic transforms to
detect a pattern which can be implemented with a small 'shl' embedded in
the addressing mode scale. This happens in real code as follows:

  unsigned x = my_accelerator_table[input >> 11];

Here we have some lookup table that we look into using the high bits of
'input'. Each entity in the table is 4-bytes, which means this
implicitly gets turned into (once lowered out of a GEP):

  *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));

The shift right followed by a shift left is canonicalized to a smaller
shift right and masking off the low bits. That hides the shift right
which x86 has an addressing mode designed to support. We now detect
masks of this form, and produce the longer shift right followed by the
proper addressing mode. In addition to saving a (rather large)
instruction, this also reduces stalls in Intel chips on benchmarks I've
measured.

In order for all of this to work, one part of the DAG needs to be
canonicalized *still further* than it currently is. This involves
removing pointless 'trunc' nodes between a zextload and a zext. Without
that, we end up generating spurious masks and hiding the pattern.

llvm-svn: 147936
2012-01-11 08:41:08 +00:00
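
A compact sketch of the code shape the commit above targets, restating its own example in one function (the canonicalized form and the final assembly in the comments are illustrative):

    // table[input >> 11] with 4-byte elements is canonicalized in the DAG to
    // roughly  load(table + ((input >> 9) & ~3u)) , hiding the shift-right that
    // x86's scaled addressing mode could absorb.  After this change the selector
    // re-materializes the longer shift and folds the <<2 into the address scale,
    // i.e. something like  mov eax, [table + 4*rcx]  with rcx = input >> 11.
    unsigned lookup(const unsigned *table, unsigned input) {
      return table[input >> 11];
    }
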
Lang Hames 995c63329a Fixed order of operands in comment to match code.
llvm-svn: 147890
2012-01-10 22:53:20 +00:00
Joerg Sonnenberger 96cd35cf6d Default stack alignment for 32-bit x86 should be 4 bytes, not 8 bytes.
Add a test that checks the stack alignment of a simple function for
Darwin, Linux and NetBSD in 32-bit and 64-bit mode.

llvm-svn: 147888
2012-01-10 22:43:53 +00:00
Chad Rosier 1a8f0ccd8c Add missing VEX predicates to VMOVSDto64rr/VMOVSDto64mr. This fixes a few
failing test cases on our internal AVX nightly tester.
rdar://10663637

llvm-svn: 147881
2012-01-10 22:14:06 +00:00
Bill Wendling d5ab02600e For i386, don't use the generic code.
As the comment around 7746 says, it's better to use the x87 extended precision
here than SSE. And the generic code doesn't know how to do that. It also regains
the speed lost for the uint64_to_float.c testcase.
<rdar://problem/10669858>

llvm-svn: 147869
2012-01-10 19:41:30 +00:00
Devang Patel 67bf992a8f Add definition for the Intel asm variant.
Right now this just adds additional entries to the match table. The parser does not use them yet.

llvm-svn: 147859
2012-01-10 17:51:54 +00:00
David Blaikie edbb58c577 Remove unnecessary default cases in switches that cover all enum values.
llvm-svn: 147855
2012-01-10 16:47:17 +00:00
Benjamin Kramer 077ae1d760 Add definitions for AMD's bobcat (aka btver1)
llvm-svn: 147846
2012-01-10 11:50:02 +00:00
Craig Topper 430f3f1bd6 Fix a crash in AVX2 when trying to broadcast a double into a 128-bit vector. There is no vbroadcastsd xmm, but we do need to support 64-bit integers broadcasted into xmm. Also factor the AVX check into the isVectorBroadcast function. This makes more sense since the AVX2 check was already inside.
llvm-svn: 147844
2012-01-10 08:23:59 +00:00
Craig Topper b0c0f72ae6 Remove hasXMM/hasXMMInt functions. Move callers to hasSSE1/hasSSE2. This is the final piece to remove the AVX hack that disabled SSE.
llvm-svn: 147843
2012-01-10 06:54:16 +00:00
Craig Topper d97bbd7b60 Remove hasSSE*orAVX functions and change all callers to use just hasSSE*. AVX is now an SSE level and no longer disables SSE checks.
llvm-svn: 147842
2012-01-10 06:37:29 +00:00
Craig Topper eb8f9e9e5b Instruction selection priority fixes to remove the XMM/XMMInt/orAVX predicates. Another commit will remove orAVX functions from X86SubTarget.
llvm-svn: 147841
2012-01-10 06:30:56 +00:00
Devang Patel 29ba4f97e6 Fix asm string wrt variants.
llvm-svn: 147805
2012-01-09 21:32:02 +00:00
Devang Patel 85d684a4d9 Split AsmParser into two components - AsmParser and AsmParserVariant
AsmParser holds info specific to the target parser.
AsmParserVariant holds info specific to asm variants supported by the target.

llvm-svn: 147787
2012-01-09 19:13:28 +00:00
Chandler Carruth c16622daff Don't rely on the fact that shift values are never very large, and thus
this subtraction will result in small negative numbers at worst, which
become very large positive numbers on assignment and are thus caught by
the <=4 check on the next line. The >0 check was clearly intended to catch
these as negative numbers.

Spotted by inspection, and impossible to trigger given the shift widths
that can be used.

llvm-svn: 147773
2012-01-09 09:47:25 +00:00
Craig Topper f287a4509e Remove AVX hack in X86Subtarget. AVX/AVX2 are now treated as an SSE level. Predicate functions have been altered to maintain previous names and behavior.
llvm-svn: 147770
2012-01-09 09:02:13 +00:00
Craig Topper b89805c77d Add HasAVX predicate to some of the AVX patterns.
llvm-svn: 147769
2012-01-09 08:34:00 +00:00
Craig Topper a51f7f75c2 Reorder a bunch of patterns to put the AVX version first thus giving it priority over the SSE version. Another step towards trying to remove the AVX hack that disables SSE from X86Subtarget.
llvm-svn: 147768
2012-01-09 08:10:38 +00:00
Craig Topper ef7f5bf8c9 Clean up patterns for MOVNT*. Not sure why there were floating point types on MOVNTPS and MOVNTDQ. And v4i64 was completely missing.
llvm-svn: 147767
2012-01-09 06:52:46 +00:00
Craig Topper c1f5622ad3 Mark MOVNTI as being supported in SSE2 OR AVX mode. This instruction has no AVX equivalent so we should use the SSE version.
llvm-svn: 147766
2012-01-09 06:38:55 +00:00
Craig Topper a081644f8a Move SSE2 logical operations PAND/POR/PXOR/PANDN above SSE1 logical operations ANDPS/ORPS/XORPS/ANDNPS. This fixes a pattern ordering issue that meant that the SSE2 instructions could never be directly selected since the SSE1 patterns would always match first. This is largely moot with the ExeDepsFix pass, but I'm trying to audit for all such ordering issues.
llvm-svn: 147765
2012-01-09 05:07:01 +00:00
Craig Topper 210e4f81b3 Change some places that were checking for AVX OR SSE1/2 to use hasXMM/hasXMMInt instead. Also fix one place that checked SSE3, but accidentally excluded AVX, to use hasSSE3orAVX. This is a step towards removing the AVX hack from X86Subtarget.h.
llvm-svn: 147764
2012-01-09 02:28:15 +00:00
Craig Topper 744f6311d3 Don't disable MMX support when AVX is enabled. Fix predicates for MMX instructions that were added along with SSE instructions to check for AVX in addition to SSE level.
llvm-svn: 147762
2012-01-09 00:11:29 +00:00
Craig Topper c1ab7afec8 Enable FISTTP* instructions when AVX is enabled.
llvm-svn: 147758
2012-01-08 23:04:21 +00:00
Victor Umansky 540651cf59 Reverted commit #147601 upon Evan's request.
llvm-svn: 147748
2012-01-08 17:20:33 +00:00
Craig Topper f210619d08 Fix typo in the X86 backend readme. Patch from Jaeden Amero.
llvm-svn: 147739
2012-01-07 20:35:21 +00:00
Benjamin Kramer 6898db6269 Remove VectorExtras. This unused helper was written for a type of API that is discouraged now.
llvm-svn: 147738
2012-01-07 19:42:13 +00:00
Craig Topper ca66bba45e Remove unnecessary check of hasAVX(). It's already included in hasXMM().
llvm-svn: 147734
2012-01-07 18:48:43 +00:00
Eric Christopher c206d46709 Make the 'x' constraint work for AVX registers as well.
Fixes rdar://10614894

llvm-svn: 147704
2012-01-07 01:02:09 +00:00
Craig Topper 29b0737452 Mark scalar FMA4 instructions as ignoring the VEX.L bit.
llvm-svn: 147602
2012-01-05 08:56:10 +00:00
Victor Umansky 9255b6d9fe Peephole optimization of ptest-conditioned branch in X86 arch. Performs instruction combining of sequences generated by ptestz/ptestc intrinsics to ptest+jcc pair for SSE and AVX.
Testing: passed 'make check' including LIT tests for all sequences being handled (both SSE and AVX)

Reviewers: Evan Cheng, David Blaikie, Bruno Lopes, Elena Demikhovsky, Chad Rosier, Anton Korobeynikov
llvm-svn: 147601
2012-01-05 08:46:19 +00:00
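
For reference, a minimal C++ example (mine, not one of the commit's LIT tests) of the kind of source that produces a ptest-conditioned branch via the SSE4.1 intrinsic:

    #include <smmintrin.h>

    // _mm_testz_si128 lowers to ptest; branching on its result is the pattern
    // the peephole rewrites into ptest + jcc instead of ptest + setcc + test + jne.
    int all_bits_clear(__m128i v, __m128i mask) {
      return _mm_testz_si128(v, mask) ? 1 : 0;   // ZF set when (v & mask) == 0
    }
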
Bill Wendling ac27f0c830 Replace the uint64_t -> double conversion algorithm with one that's more efficient.
This small bit of ASM code is sufficient to do what the old algorithm did:

     movq       %rax,  %xmm0
     punpckldq  (c0),  %xmm0  // c0: (uint4){ 0x43300000U, 0x45300000U, 0U, 0U }
     subpd      (c1),  %xmm0  // c1: (double2){ 0x1.0p52, 0x1.0p52 * 0x1.0p32 }
   #ifdef __SSE3__
     haddpd   %xmm0, %xmm0          
   #else
     pshufd   $0x4e, %xmm0, %xmm1 
     addpd    %xmm1, %xmm0
   #endif

It's arguably faster. One caveat, the 'haddpd' instruction isn't very fast on
all processors.
<rdar://problem/7719814>

llvm-svn: 147593
2012-01-05 02:13:20 +00:00
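
A scalar C++ sketch (my own reconstruction, assuming the constants behave as the inline comments above describe) of what the punpckldq/subpd/haddpd sequence computes:

    #include <cstdint>
    #include <cstring>

    double u64_to_double(uint64_t v) {
      // OR the 32-bit halves into the mantissas of 2^52 and 2^84, giving doubles
      // with the exact values (2^52 + lo) and (2^84 + hi * 2^32).
      uint64_t lo_bits = 0x4330000000000000ULL | (v & 0xffffffffULL);
      uint64_t hi_bits = 0x4530000000000000ULL | (v >> 32);
      double lo, hi;
      std::memcpy(&lo, &lo_bits, sizeof lo);
      std::memcpy(&hi, &hi_bits, sizeof hi);
      // Subtract the magic constants (the subpd) and sum the halves (the haddpd).
      return (lo - 0x1.0p52) + (hi - 0x1.0p84);
    }
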
Benjamin Kramer 9c48f26341 Silence warnings of a mysterious compiler that still defaults to C89.
llvm-svn: 147553
2012-01-04 22:06:45 +00:00
Evan Cheng 104dbb0fd1 For x86, canonicalize max
(x > y) ? x : y
=>
(x >= y) ? x : y

So for something like
(x - y) > 0 ? (x - y) : 0
It will be
(x - y) >= 0 ? (x - y) : 0

This makes it possible to test the sign bit and eliminate a comparison against
zero. e.g.
subl   %esi, %edi
testl  %edi, %edi
movl   $0, %eax
cmovgl %edi, %eax
=>
xorl   %eax, %eax
subl   %esi, %edi
cmovsl %eax, %edi

rdar://10633221

llvm-svn: 147512
2012-01-04 01:41:39 +00:00
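
An illustrative source pattern (hypothetical example, not from the commit) that benefits: clamping a difference at zero.

    // Written with '>', canonicalized by this change to '>=' (the two forms are
    // equivalent, since d == 0 yields 0 either way); the >= compare can then
    // reuse the sign flag already set by the subtraction, as in the cmovs
    // sequence shown above.
    int clamp_diff(int x, int y) {
      int d = x - y;
      return d > 0 ? d : 0;
    }
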
Chad Rosier 6ca97df951 Fix 80-column violations.
llvm-svn: 147495
2012-01-03 23:19:12 +00:00
Nadav Rotem 6d31bac85e Revert r147426 because it caused PR11696.
llvm-svn: 147485
2012-01-03 22:19:42 +00:00
Chad Rosier 493c1b3152 Enhance DAGCombine for transforming 128->256 casts into a vmovaps, rather
than a vxorps + vinsertf128 pair if the original vector came from a load.
rdar://10594409

llvm-svn: 147481
2012-01-03 21:05:52 +00:00
Devang Patel c1215324a3 Intel style asm variant does not need '%' prefix.
llvm-svn: 147453
2012-01-03 18:22:10 +00:00
Craig Topper 5bacb7e9e5 Miscellaneous shuffle lowering cleanup. No functional changes. Primarily converting the indexing loops to unsigned to be consistent across functions.
llvm-svn: 147430
2012-01-02 09:17:37 +00:00
Craig Topper 53d559641f Make CanXFormVExtractWithShuffleIntoLoad reject loads with multiple uses. Also make it return false if there's not even a load at all. This makes the code better match the code in DAGCombiner that it tries to match. These two changes prevent some cases where vector_shuffles were making it to instruction selection and causing the older shuffle selection code to be triggered. Also needed to fix a bad pattern that this change exposed. This is the first step towards getting rid of the old shuffle selection support. No test cases yet because there's no way to tell whether a shuffle was handled in the legalize stage or at instruction selection.
llvm-svn: 147428
2012-01-02 08:46:48 +00:00
Nadav Rotem 6c7a0e6c8b Optimize the sequence blend(sign_extend(x)) to blend(shl(x)) since SSE blend instructions only look at the highest bit.
llvm-svn: 147426
2012-01-02 08:05:46 +00:00
Craig Topper b910984458 Allow CRC32 instructions to be selected when AVX is enabled.
llvm-svn: 147411
2012-01-01 19:51:58 +00:00
Craig Topper 1c064e0a89 Fix sfence, lfence, mfence, and clflush to be able to be selected when AVX is enabled. Fix monitor and mwait to require SSE3 or AVX, previously they worked even if SSE3 was disabled. Make prefetch instructions not set the execution domain since they don't use XMM registers.
llvm-svn: 147409
2012-01-01 19:40:22 +00:00
Benjamin Kramer 47aecca51a X86Disassembler: Fix undefined behavior found by GCC 4.6
llvm-svn: 147404
2012-01-01 17:55:36 +00:00
Craig Topper 6e54ba7eee Merge X86 SHUFPS and SHUFPD node types.
llvm-svn: 147394
2011-12-31 23:50:21 +00:00
Craig Topper d51092d93a Add patterns for integer forms of SHUFPD/VSHUFPD with a memory load.
llvm-svn: 147393
2011-12-31 23:24:49 +00:00
Craig Topper 0e796fee11 Fix typo in a SHUFPD and VSHUFPD pattern that prevented SHUFPD/VSHUFPD with a load from being selected.
llvm-svn: 147392
2011-12-31 23:15:11 +00:00
Craig Topper a5d1fc2cc7 Make FMA4 imply AVX so that YMM registers would be available. Necessitates removing it from Bulldozer CPU types since it would enable AVX code generation implicitly. Also make SSE4A imply SSE3. Without some level of SSE implied, XMM registers wouldn't be legal.
llvm-svn: 147369
2011-12-30 07:16:00 +00:00
Craig Topper 2ba766ae84 Add disassembler support for VPERMIL2PD and VPERMIL2PS.
llvm-svn: 147368
2011-12-30 06:23:39 +00:00
Craig Topper 03a0beda88 Add FMA4 instructions to disassembler.
llvm-svn: 147367
2011-12-30 05:20:36 +00:00
Craig Topper cd93de93fa Separate the concept of having memory access in operand 4 from the concept of having the W bit set for XOP instructions. Removes ORing W-bits in the encoder and will similarly simplify the disassembler implementation.
llvm-svn: 147366
2011-12-30 04:48:54 +00:00
Craig Topper c0f9bcb5d5 Combine FMA4 SS/SD patterns with the instruction definitions.
llvm-svn: 147365
2011-12-30 03:33:59 +00:00
Craig Topper 51fe43fcd9 Combine FMA4 PS/PD patterns with the instruction definitions.
llvm-svn: 147364
2011-12-30 03:17:15 +00:00
Craig Topper 6c08930c5e Change FMA4 memory forms to use memopv* instead of alignedloadv*. No need to force alignment on these instructions. Add a couple testcases for memory forms.
llvm-svn: 147361
2011-12-30 02:18:36 +00:00
Craig Topper 2ca79b9d4b Fix load size for FMA4 SS/SD instructions. They need to use f32 and f64 size, but with the special handling to be compatible with the intrinsic expecting a vector. Similar handling is already used elsewhere.
llvm-svn: 147360
2011-12-30 01:49:53 +00:00
Craig Topper d773607eee Fix execution domains for PS/PD FMA3 instructions. Add SS/SD forms of FMA3 instructions.
llvm-svn: 147353
2011-12-29 20:43:40 +00:00
Craig Topper 8cab06a214 Expose FMA3 instructions to the disassembler.
llvm-svn: 147351
2011-12-29 20:03:14 +00:00
Craig Topper e1bd05128e Make FMA3 imply that AVX needs to be enabled, particularly because 256-bit types aren't valid unless AVX is enabled.
llvm-svn: 147349
2011-12-29 19:46:19 +00:00
Craig Topper dd286a5201 Change XOP detection to use the correct CPUID bit instead of using the FMA4 bit.
llvm-svn: 147348
2011-12-29 19:25:56 +00:00
Craig Topper a060afb5ba Add FeaturePOPCNT to all CPU types that lost it when it was removed from SSE42/SSE4A in r147339.
llvm-svn: 147347
2011-12-29 18:47:31 +00:00
Craig Topper 97f05c5768 Mark non-VEX forms of PCLMUL instructions as requiring SSE2 to be enabled along with CLMUL. That's required for the XMM registers to be valid for integer data. Doesn't change any behavior since the CLMUL instructions don't have patterns yet.
llvm-svn: 147345
2011-12-29 18:08:36 +00:00
Craig Topper 1559123c77 Mark non-VEX forms of AES instructions as requiring SSE2 to be enabled along with AES. Since that's required for the XMM registers to be valid for integer data. Doesn't change any behavior though since you can't use an intrinsic with an illegal type anyway. Just makes it consistent with the VEX forms.
llvm-svn: 147344
2011-12-29 18:00:08 +00:00
Craig Topper 9e61291bf5 Remove the separate explicit AES instruction patterns. They are equivalent to the patterns specified by the instructions. Also remove unnecessary bitconverts from the AES patterns.
llvm-svn: 147342
2011-12-29 17:41:56 +00:00
Craig Topper 7bd3305f3e Make SSE42 and SSE4A not imply POPCNT. POPCNT should be able to be disabled on its own without disabling SSE4.2 or SSE4A.
llvm-svn: 147339
2011-12-29 15:51:45 +00:00
Craig Topper 0fdf720ded Make LowerBUILD_VECTOR keep node vector types consistent when creating MOVL for v16i16 and v32i8.
llvm-svn: 147337
2011-12-29 03:34:54 +00:00
Craig Topper 862c9b65be Remove some elses after returns.
llvm-svn: 147336
2011-12-29 03:20:51 +00:00
Craig Topper 274e20a499 Remove trailing spaces. Fix an assert to use && instead of || before string. Add same assert on similar code path.
llvm-svn: 147335
2011-12-29 03:09:33 +00:00
Eli Friedman 3a01ddb7e9 Fix type-checking for load transformation which is not legal on floating-point types. PR11674.
llvm-svn: 147323
2011-12-28 21:24:44 +00:00
Elena Demikhovsky b3515a8d4b Fixed a bug in LowerVECTOR_SHUFFLE and LowerBUILD_VECTOR.
Matching the MOVLP mask for AVX (256-bit vectors) was wrong.
The failure was detected by conformance tests.

llvm-svn: 147308
2011-12-28 08:14:01 +00:00
Craig Topper df34d152bd Add handling of x86_avx2_pmovmskb to computeMaskedBitsForTargetNode for consistency. Add comments and an assert for BMI instructions to PerformXorCombine since the enabling of the combine is conditional on it, but the function itself isn't.
llvm-svn: 147287
2011-12-27 06:27:23 +00:00
Rafael Espindola a56ab0ede7 Section relative fixups are a COFF concept, not an x86 one. Replace the
x86 specific reloc_coff_secrel32 with a generic FK_SecRel_4.

llvm-svn: 147252
2011-12-24 14:47:52 +00:00
Chandler Carruth a3d54fe0ae Use standard promotion for i8 CTTZ nodes and i8 CTLZ nodes when the
LZCNT instructions are available. Force promotion to i32 to get
a smaller encoding since the fix-ups necessary are just as complex for
either promoted type.

We can't do standard promotion for CTLZ when lowering through BSR
because it results in poor code surrounding the 'xor' at the end of this
instruction. Essentially, if we promote the entire CTLZ node to i32, we
end up doing the xor on a 32-bit CTLZ implementation, and then
subtracting appropriately to get back to an i8 value. Instead, our
custom logic just uses the knowledge of the incoming size to compute
a perfect xor. I'd love to know of a way to fix this, but so far I'm
drawing a blank. I suspect the legalizer could be more clever and/or it
could collude with the DAG combiner, but how... ;]

llvm-svn: 147251
2011-12-24 12:12:34 +00:00
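
A C++ sketch of the arithmetic behind the "perfect xor" mentioned above (my own illustration; the helper name is made up):

    #include <cstdint>

    // For a non-zero 8-bit value, bsr gives the index of the highest set bit
    // (0..7), so ctlz8(x) == 7 - bsr(x) == 7 ^ bsr(x).  Emitting the xor-with-7
    // directly avoids the xor-with-31 plus subtract that generic i32 promotion
    // of the CTLZ node would produce.
    unsigned ctlz8(uint8_t x) {                          // precondition: x != 0
      unsigned bsr = 31u - unsigned(__builtin_clz(x));   // index of highest set bit
      return 7u ^ bsr;
    }
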
Chandler Carruth 38ce24455d Add systematic testing for cttz as well, and fix the bug I spotted by
inspection earlier.

llvm-svn: 147250
2011-12-24 11:46:10 +00:00
Benjamin Kramer 767bbe48c1 Chandler fixed this.
llvm-svn: 147247
2011-12-24 11:23:32 +00:00
Chandler Carruth c9fcde2347 Expand more when we have a nice 'tzcnt' instruction, to avoid generating
'bsf' instructions here.

This one is actually debatable to my eyes. It's not clear that any chip
implementing 'tzcnt' would have a slow 'bsf' for any reason, and unless
EFLAGS or a zero input matters, 'tzcnt' is just a longer encoding.
Still, this restores the old behavior with 'tzcnt' enabled for now.

llvm-svn: 147246
2011-12-24 11:11:38 +00:00
Chandler Carruth 7e9453e916 Switch the lowering of CTLZ_ZERO_UNDEF from a .td pattern back to the
X86ISelLowering C++ code. Because this is lowered via an xor wrapped
around a bsr, we want the dagcombine which runs after isel lowering to
have a chance to clean things up. In particular, it is very common to
see code which looks like:

  (sizeof(x)*8 - 1) ^ __builtin_clz(x)

Which is trying to compute the most significant bit of 'x'. That's
actually the value computed directly by the 'bsr' instruction, but if we
match it too late, we'll get completely redundant xor instructions.

The more naive code for the above (subtracting rather than using an xor)
still isn't handled correctly due to the dagcombine getting confused.

Also, while here fix an issue spotted by inspection: we should have been
expanding the zero-undef variants to the normal variants when there is
an 'lzcnt' instruction. Do so, and test for this. We don't want to
generate unnecessary 'bsr' instructions.

These two changes fix some regressions in encoding and decoding
benchmarks. However, there is still a *lot* to be improve on in this
type of code.

llvm-svn: 147244
2011-12-24 10:55:54 +00:00
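
The motivating pattern above as a standalone function (my own example): the xor in the source should cancel against the xor the bsr-based lowering introduces, rather than leaving two back-to-back xors.

    // Index of the most significant set bit -- exactly what bsr computes.  If
    // the xor-wrapped bsr lowering is matched too late, the DAG combiner can no
    // longer cancel the two xors and we emit  bsr; xor $31; xor $31.
    unsigned msb_index(unsigned x) {          // precondition: x != 0
      return (sizeof(x) * 8 - 1) ^ unsigned(__builtin_clz(x));
    }
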
Rafael Espindola 908d2ed14e Move x86 specific bits of the COFF writer to lib/Target/X86.
llvm-svn: 147231
2011-12-24 02:14:02 +00:00
Chad Rosier 00bbedff03 Fix 80-column violations.
llvm-svn: 147192
2011-12-22 22:35:21 +00:00
Chad Rosier 3172488cc0 Fix 80-column violations.
llvm-svn: 147095
2011-12-21 20:59:09 +00:00
Chad Rosier 3ede414127 No case stmt for BUILD_VECTOR in PerformDAGCombine(), so I assume this isn't
necessary.  Please chime in if I'm mistaken.

llvm-svn: 147065
2011-12-21 19:14:52 +00:00
Rafael Espindola b264d33854 Move the X86 specific bits of the ELF writer to the Target/X86 directory.
Other targets will follow shortly.

llvm-svn: 147060
2011-12-21 17:30:17 +00:00
Rafael Espindola 1ad4095d6b Reduce the exposure of Triple::OSType in the ELF object writer. This will
avoid including ADT/Triple.h in many places when the target specific bits are
moved.

llvm-svn: 147059
2011-12-21 17:00:36 +00:00
Craig Topper b8b1b4c1de Remove mode specific disassembler classes and just call X86GenericDisassembler constructor with appropriate argument in the creation functions. This removes a few tables that needed to be anchored.
llvm-svn: 147046
2011-12-21 08:06:52 +00:00
Craig Topper f30188418b Fix typo in a couple comments
llvm-svn: 147045
2011-12-21 06:30:53 +00:00
Elena Demikhovsky ec7e6e0946 This is the second fix related to the VZEXT_MOVL node.
The failure that I see in the current version is:

LLVM ERROR: Cannot select: 0x18b8f70: v4i64 = X86ISD::VZEXT_MOVL 0x18beee0 [ID=14]
  0x18beee0: v4i64 = insert_subvector 0x18b8c70, 0x18b9170, 0x18b9570 [ID=13]
    0x18b8c70: v4i64 = insert_subvector 0x18b9870, 0x18bf4e0, 0x18b9970 [ID=12]
      0x18b9870: v4i64 = undef [ID=4]
      0x18bf4e0: v2i64 = bitcast 0x18bf3e0 [ID=10]
        0x18bf3e0: v4i32 = BUILD_VECTOR 0x18b9770, 0x18b9770, 0x18b9770, 0x18b9770 [ID=8]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
      0x18b9970: i32 = Constant<0> [ID=3]
    0x18b9170: v2i64 = undef [ORD=1] [ID=1]
    0x18b9570: i32 = Constant<2> [ID=5]

llvm-svn: 146975
2011-12-20 13:34:28 +00:00
Chandler Carruth 24680c24d8 Begin teaching the X86 target how to efficiently codegen patterns that
use the zero-undefined variants of CTTZ and CTLZ. These are just simple
patterns for now, there is more to be done to make real world code using
these constructs be optimized and codegen'ed properly on X86.

The existing tests are spiffed up to check that we no longer generate
unnecessary cmov instructions, and that we generate the very important
'xor' to transform bsr which counts the index of the most significant
one bit to the number of leading (most significant) zero bits. Also they
now check that when the variant with defined zero result is used, the
cmov is still produced.

llvm-svn: 146974
2011-12-20 11:19:37 +00:00
Chandler Carruth e805b16e3d Fix up the CMake build for the new files added in r146960, they're
likely to stay either way that discussion ends up resolving itself.

llvm-svn: 146966
2011-12-20 08:42:11 +00:00
David Blaikie a379b18173 Unweaken vtables as per http://llvm.org/docs/CodingStandards.html#ll_virtual_anch
llvm-svn: 146960
2011-12-20 02:50:00 +00:00
Jakob Stoklund Olesen c7b437ae34 Emit a getMatchingSuperRegClass() implementation for every target.
Use information computed while inferring new register classes to emit
accurate, table-driven implementations of getMatchingSuperRegClass().

Delete the old manual, error-prone implementations in the targets.

llvm-svn: 146873
2011-12-19 16:53:34 +00:00
Benjamin Kramer 1b54835a10 Another variadics tweak.
llvm-svn: 146852
2011-12-18 20:51:31 +00:00
Benjamin Kramer 530b820500 Use the fancy new VariadicFunction template instead of a plain variadic function.
Some compilers were complaining about passing StringRef to it.

llvm-svn: 146850
2011-12-18 19:59:20 +00:00
Craig Topper a913dde0ef Remove an unused X86ISD node type.
llvm-svn: 146833
2011-12-17 19:16:44 +00:00
Benjamin Kramer 792edd3c75 X86: Factor the bswap asm matching to be slightly less horrible to read.
llvm-svn: 146831
2011-12-17 14:36:05 +00:00
Rafael Espindola d3df3d3527 Add back the MC bits of 126425. Original patch by Nathan Jeffords. I added the
asm parsing and testcase.

llvm-svn: 146801
2011-12-17 01:14:52 +00:00
Lang Hames da07b3ad42 Make sure that the lower bits on the VSELECT condition are properly set.
llvm-svn: 146800
2011-12-17 01:08:46 +00:00
Craig Topper a4d411cb1b Don't try to match 'unpackl/h v, v' for 32xi8 and 16xi16 when only AVX1 is supported. Fix 'unpackh v, v' for 256-bit types to understand 128-bit lanes.
llvm-svn: 146726
2011-12-16 08:06:31 +00:00
Eli Friedman 64944090ff Make sure we correctly note the existence of an i8 immediate for vblendvps and friends, so we compute fixups correctly. PR11586.
llvm-svn: 146709
2011-12-15 23:46:18 +00:00
Chad Rosier 41dbf59e12 Add missing zmovl AVX patterns which were causing crashes.
Patch by Elena Demikhovsky <elena.demikhovsky@intel.com>!

llvm-svn: 146689
2011-12-15 22:11:31 +00:00
Chad Rosier 75ed9dcbc6 Fix assert in LowerBUILD_VECTOR for v16i16 type on AVX.
Patch by Elena Demikhovsky <elena.demikhovsky@intel.com>!

llvm-svn: 146684
2011-12-15 21:34:44 +00:00
Lang Hames c44b5e469b Fix VSELECT operand order. Was previously backwards, causing bogus vector shift results - <rdar://problem/10559581>.
llvm-svn: 146671
2011-12-15 18:57:27 +00:00
Chad Rosier b7a0b89ff0 Use SmallVector/assign(), rather than std::vector/push_back().
llvm-svn: 146627
2011-12-15 01:16:09 +00:00
Chad Rosier 1940baa76b Add support for lowering fneg when AVX is enabled.
rdar://10566486

llvm-svn: 146625
2011-12-15 01:02:25 +00:00
Bill Wendling ae94fb4009 The saved registers weren't being processed in the correct order. This led to
the compact unwind claiming that one register was saved before another, which
isn't all that great in general. Process them in the natural order. Reverse the
list only when necessary for the algorithm.

llvm-svn: 146612
2011-12-14 23:53:24 +00:00
Evan Cheng 7fae11b231 - Add MachineInstrBundle.h and MachineInstrBundle.cpp. This includes a function
to finalize MI bundles (i.e. adding the BUNDLE instruction and computing the register def
  and use lists of the BUNDLE instruction) and a pass to unpack bundles.
- Teach more of the MachineBasicBlock and MachineInstr methods to be bundle aware.
- Switch Thumb2 IT block to MI bundles and delete the hazard recognizer hack to
  prevent IT blocks from being broken apart.

llvm-svn: 146542
2011-12-14 02:11:42 +00:00
Chandler Carruth 637cc6a8aa Initial CodeGen support for CTTZ/CTLZ where a zero input produces an
undefined result. This adds new ISD nodes for the new semantics,
selecting them when the LLVM intrinsic indicates that the undef behavior
is desired. The new nodes expand trivially to the old nodes, so targets
don't actually need to do anything to support these new nodes besides
indicating that they should be expanded. I've done this for all the
operand types that I could figure out for all the targets. Owners of
various targets, please review and let me know if any of these are
incorrect.

Note that the expand behavior is *conservatively correct*, and exactly
matches LLVM's current behavior with these operations. Ideally this
patch will not change behavior in any way. For example the regtest suite
finds the exact same instruction sequences coming out of the code
generator. That's why there are no new tests here -- all of this is
being exercised by the existing test suite.

Thanks to Duncan Sands for reviewing the various bits of this patch and
helping me get the wrinkles ironed out with expanding for each target.
Also thanks to Chris for clarifying through all the discussions that
this is indeed the approach he was looking for. That said, there are
likely still rough spots. Further review much appreciated.

llvm-svn: 146466
2011-12-13 01:56:10 +00:00
Daniel Dunbar 8889bb08b8 LLVMBuild: Introduce a common section which currently has a list of the
subdirectories to traverse into.
 - Originally I wanted to avoid this and just autoscan, but this has one key
   flaw in that new subdirectories cannot automatically trigger a rerun of the
   llvm-build tool. This is particularly a pain when switching back and forth
   between trees where one has added a subdirectory, as the dependencies will
   tend to be wrong. This will also eliminate the FIXME implicitly.

llvm-svn: 146436
2011-12-12 22:45:54 +00:00
Daniel Dunbar 27a7489a03 LLVMBuild: Remove trailing newline, which irked me.
llvm-svn: 146409
2011-12-12 19:48:00 +00:00
Jan Sjödin 7c0face455 XOP instructions and encoding tests.
llvm-svn: 146407
2011-12-12 19:37:49 +00:00
Jan Sjödin 6dd2488383 XOP encoding bits and logic.
llvm-svn: 146397
2011-12-12 19:12:26 +00:00
Craig Topper 1fdfec63a4 Remove some remnants of the old palign pattern fragment that were still hanging around. Also remove a cast from inside getShuffleVPERM2X128Immediate and getShuffleVPERMILPImmediate since the only caller already had done the cast.
llvm-svn: 146344
2011-12-11 19:12:35 +00:00
Rafael Espindola c7f355b8e1 Handle expressions of the form _GLOBAL_OFFSET_TABLE_-symbol the same way gas
does. The _GLOBAL_OFFSET_TABLE_ is still magical in that we get a R_386_GOTPC,
but it doesn't change the immediate in the same way as when the expression
has no right hand side symbol.

llvm-svn: 146311
2011-12-10 02:28:43 +00:00
Benjamin Kramer 863683c590 This is now implemented.
llvm-svn: 146258
2011-12-09 15:45:57 +00:00
Benjamin Kramer 16bbfbec66 X86: Add patterns for the various rounding ops for SSE4.1 and AVX.
llvm-svn: 146257
2011-12-09 15:44:03 +00:00
Benjamin Kramer 2dc5dec41d X86: Split (v)rounds[sd] into a normal and an intrinsic version.
llvm-svn: 146256
2011-12-09 15:43:55 +00:00
Evan Cheng 557cda7f1d Remove hasSSE1orAVX(). It's the same as hasXMM().
llvm-svn: 146246
2011-12-09 06:32:46 +00:00
Evan Cheng b96bca81e7 Add 256-bit variant vmovss and vmovsd patterns. rdar://10538417
llvm-svn: 146196
2011-12-08 22:30:45 +00:00
Evan Cheng 2a217be25f Add various missing AVX patterns which were causing crashes. Sadly, the generated
code looks pretty bad compared to SSE.

rdar://10538793

llvm-svn: 146191
2011-12-08 22:05:28 +00:00
Owen Anderson 57a7f41d5d Don't explicitly mark libm rounding ops as legal on SSE4.1/AVX. There don't seem to be patterns for these, so I don't know why they were marked legal in the first place.
Fixes failures caused by r146171.

llvm-svn: 146180
2011-12-08 20:51:38 +00:00
Owen Anderson 0b9b9da6c8 Teach SelectionDAG to match more calls to libm functions onto existing SDNodes. Mark these nodes as illegal by default, unless the target declares otherwise.
llvm-svn: 146171
2011-12-08 19:32:14 +00:00
Evan Cheng 4d1a2d449f Many of the SSE patterns should not be selected when AVX is available. This led to the following code in X86Subtarget.cpp
if (HasAVX)
    X86SSELevel = NoMMXSSE;

This is so that patterns predicated on hasSSE3, etc. are not selected when AVX is available. Instead, the AVX variant is selected.
However, this breaks instructions which do not have AVX variants.

The right way to fix this is for the SSE but not-AVX patterns to predicate on something like hasSSE3() && !hasAVX().
Then we can take out the hack in X86Subtarget.cpp. Patterns which do not have AVX variants do not need to change.

However, we need to audit all the patterns before we make the change. This patch is a workaround that fixes one specific case,
the prefetch instructions. rdar://10538297

llvm-svn: 146163
2011-12-08 19:00:42 +00:00
Jan Sjödin d19760a40c Src2 and src3 were accidentally swapped for the FMA4 rr patterns. Undo this and fix the encoding.
llvm-svn: 146151
2011-12-08 14:43:19 +00:00
Craig Topper 1d578e8835 Fix a bunch of SSE/AVX patterns to use proper memop types. In particular, not using integer loads other than v2i64/v4i64 since the others are all promoted.
llvm-svn: 146031
2011-12-07 08:30:53 +00:00
Bill Wendling 302cf8d5d0 Adjust the stack by one pointer size for all frameless stacks.
llvm-svn: 146030
2011-12-07 07:58:55 +00:00
Bill Wendling 3c86459997 Fix off-by-one error when encoding the stack size for a frameless stack.
llvm-svn: 146029
2011-12-07 07:49:49 +00:00
Evan Cheng 7f8e563a69 Add bundle aware API for querying instruction properties and switch the code
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.

For properties like mayLoad / mayStore, look into the bundle and return true if any
of the bundled instructions has the property.
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad, isCompare, conservatively return false for
bundles.

llvm-svn: 146026
2011-12-07 07:15:52 +00:00
Bill Wendling 67a70c995a Explicitly check for the different SUB instructions.
llvm-svn: 145976
2011-12-06 22:14:27 +00:00
Bill Wendling 5a173cd367 Encode the total stack if there isn't a frame.
llvm-svn: 145969
2011-12-06 21:34:01 +00:00
Bill Wendling a73c0c99ea * Add a macro to remove a magic number.
* Rename variables to reflect what they're actually used for.

llvm-svn: 145968
2011-12-06 21:23:42 +00:00
Bill Wendling 87571b6392 Check the correct value for small stack sizes. Also modify some comments.
llvm-svn: 145954
2011-12-06 19:16:17 +00:00
Bill Wendling a4e87944a8 For a small sized stack, we encode that value directly with no "stack adjust" value.
llvm-svn: 145952
2011-12-06 19:09:06 +00:00
Craig Topper 83320e03e6 Add X86ISD::HADD/HSUB to getTargetNodeName
llvm-svn: 145929
2011-12-06 09:31:36 +00:00
Craig Topper 6572e0f203 Fix a bunch of SSE/AVX patterns to use v2i64/v4i64 loads since all other integer vector loads are promoted to those.
llvm-svn: 145927
2011-12-06 09:04:59 +00:00
Craig Topper 8d4ba198d6 Merge floating point and integer UNPCK X86ISD node types.
llvm-svn: 145926
2011-12-06 08:21:25 +00:00
Craig Topper 3cb802c775 Clean up some of the shuffle decoding code for UNPCK instructions. Add instruction commenting for AVX/AVX2 forms for integer UNPCKs.
llvm-svn: 145924
2011-12-06 05:31:16 +00:00
Craig Topper bf41eb3a98 Merge isSHUFPMask and isCommutedSHUFPMask into single function that can do both. Do the same for the 256-bit version. Use loops to reduce size of isVSHUFPYMask. Fix test cases that were incorrectly passing due to isCommutedSHUFPMask not checking for the vector being 128-bit. This caused some 256-bit shuffles to be incorrectly commuted.
llvm-svn: 145921
2011-12-06 04:59:07 +00:00
Bill Wendling 4e87e850a2 Add a comment.
llvm-svn: 145896
2011-12-06 01:57:48 +00:00
Jakob Stoklund Olesen 10e1252269 Use logarithmic units for basic block alignment.
This was actually a bit of a mess. TLI.setPrefLoopAlignment was clearly
documented as taking log2(bytes) units, but the x86 target would still
set a preferred loop alignment of '16'.

CodePlacementOpt passed this number on to the basic block, and
AsmPrinter interpreted it as bytes.

Now both MachineFunction and MachineBasicBlock use logarithmic
alignments.

Obviously, MachineConstantPool still measures alignments in bytes, so we
can emulate the thrill of using as.

llvm-svn: 145889
2011-12-06 01:26:19 +00:00
Bill Wendling f7cef7ecad The compact encodings of the registers are 3 bits each. Make sure we shift the
value over that much.

llvm-svn: 145888
2011-12-06 01:26:14 +00:00
Jim Grosbach 25b63fa117 Move target-specific logic out of generic MCAssembler.
Whether a fixup needs relaxation for the associated instruction is a
target-specific function, as the FIXME indicated. Create a hook for that
and use it.

llvm-svn: 145881
2011-12-06 00:47:03 +00:00
Craig Topper 51bec1a37a Remove some leftover remnants that once tried to create 64-bit MMX PALIGNR instructions.
llvm-svn: 145804
2011-12-05 07:27:14 +00:00
Craig Topper 6a55b1dd9f Clean up and optimizations to the X86 shuffle lowering code. No functional change.
llvm-svn: 145803
2011-12-05 06:56:46 +00:00
Sanjoy Das 006e43bcc0 Check for stack space more intelligently.
libgcc sets the stack limit field in TCB to 256 bytes above the actual
allocated stack limit.  This means if the function's stack frame needs
less than 256 bytes, we can just compare the stack pointer with the
stack limit.  This should result in fewer calls to __morestack.

llvm-svn: 145766
2011-12-03 09:32:07 +00:00
Sanjoy Das 165ca1d4ba Fix a bug in the x86-32 code generated for segmented stacks.
Currently LLVM pads the call to __morestack with an add and sub of 8
bytes to esp.  This isn't correct since __morestack expects the call
to be followed directly by a ret.

This commit also adjusts the relevant test-case.

llvm-svn: 145765
2011-12-03 09:21:07 +00:00
Nick Lewycky 8fd1254a0a Creating multiple JITs on X86 in multiple threads causes multiple writes (of
the same value) to this variable. This code could be refactored, but it doesn't
matter since the old JIT is going away. Add tsan annotations to ignore the
race.

llvm-svn: 145745
2011-12-03 02:45:50 +00:00
Nick Lewycky 50f02cb21b Move global variables in TargetMachine into new TargetOptions class. As an API
change, now you need a TargetOptions object to create a TargetMachine. Clang
patch to follow.

One small functionality change in PTX. PTX had commented out the machine
verifier parts in their copy of printAndVerify. That now calls the version in
LLVMTargetMachine. Users of PTX who need verification disabled should rely on
not passing the command-line flag to enable it.

llvm-svn: 145714
2011-12-02 22:16:29 +00:00
Jan Sjödin 1280eb1d06 Add XOP feature flag.
llvm-svn: 145682
2011-12-02 15:14:37 +00:00
Craig Topper b67440367f Reduce duplicate code in isHorizontalBinOp and add some asserts to protect assumptions
llvm-svn: 145681
2011-12-02 08:18:41 +00:00
Craig Topper abeb79eee3 Add instruction selection support for horizontal add/sub of 256-bit floating point vectors. Also add the test case for 256-bit integer vectors.
llvm-svn: 145680
2011-12-02 07:16:01 +00:00
Sanjoy Das f60485c4cf Dummy commit to check commit access.
llvm-svn: 145619
2011-12-01 19:15:08 +00:00
Eric Christopher 9da7f305a4 For 64-bit the rest of the general regs are ok for the q constraint. Make
sure we can emit both the high and low versions of those registers.

Fixes rdar://10392864

llvm-svn: 145579
2011-12-01 08:12:41 +00:00
Eli Friedman d61887dd0a Pass AVX vectors which are arguments to varargs functions on the stack. <rdar://problem/10463281>.
llvm-svn: 145573
2011-12-01 04:49:21 +00:00
Jan Sjödin 9430e284a9 Support for encoding all FMA4 instructions and tablegen patterns for all
remaining FMA4 instructions and intrinsics with tests.

llvm-svn: 145525
2011-11-30 22:09:42 +00:00
Benjamin Kramer 5feb3dab79 X86: Turns out bulldozer also supports sse42 and lzcnt.
While at it, remove the barcelona/instanbul/shanghai subtargets; they're
unsupported by GCC and look pretty broken.

llvm-svn: 145494
2011-11-30 15:48:16 +00:00
Benjamin Kramer 981f32327d X86: Add subtargets for AMD's bulldozer.
llvm-svn: 145493
2011-11-30 15:27:46 +00:00
Nadav Rotem 96923cc2bb X86: PerformOrCombine introduced a vselect node with a wrong order of operands. This bug was introduced when a dedicated blend sdnode was replaced with the vselect node (in 139479).
llvm-svn: 145488
2011-11-30 10:13:37 +00:00
Craig Topper c4977ba413 Add instruction selection support for AVX2 horizontal add/sub instructions.
llvm-svn: 145487
2011-11-30 09:10:50 +00:00
Craig Topper 0a672eaf9e Merge VPERM2F128/VPERM2I128 ISD node types.
llvm-svn: 145485
2011-11-30 07:47:51 +00:00
Craig Topper bafd224c8b Merge decoding of VPERMILPD and VPERMILPS shuffle masks. Merge X86ISD node type for VPERMILPD/PS. Add instruction selection support for VINSERTI128/VEXTRACTI128.
llvm-svn: 145483
2011-11-30 06:25:25 +00:00
Evan Cheng 648e48d02e Add another missing pattern. llvm-gcc likes f64 but clang likes i64 so it was generating poor code for some SSE builtins.
llvm-svn: 145448
2011-11-29 22:48:34 +00:00
Jakob Stoklund Olesen bde32d36bb Make X86::FsFLD0SS / FsFLD0SD real pseudo-instructions.
Like V_SET0, these instructions are expanded by ExpandPostRA to xorps /
vxorps so they can participate in execution domain swizzling.

This also makes the AVX variants redundant.

llvm-svn: 145440
2011-11-29 22:27:25 +00:00
Daniel Dunbar 539d0a8a09 build/CMake: Finish removal of add_llvm_library_dependencies.
llvm-svn: 145420
2011-11-29 19:25:30 +00:00
Michael J. Spencer de3a2118db MC/X86/COFF: Allow quotes in names when targeting MS/Windows,
as MC is the only assembler we support.

This splits MS/Windows and GNU/Windows ASM infos into two separate classes.
While there is currently only one difference, full MS C++ ABI support will
require many more.

llvm-svn: 145409
2011-11-29 18:00:06 +00:00
Elena Demikhovsky 7a81dea516 Fixed vsqrt.ss intrinsic usage - order of input operands was wrong.
Added a test.
Thanks Bruno for reviewing the patch.

llvm-svn: 145403
2011-11-29 15:00:45 +00:00
Craig Topper 1d63ae3731 Fix shuffle decoding for memory forms for (V)SHUFPS/D.
llvm-svn: 145392
2011-11-29 07:58:09 +00:00
Craig Topper c16db840be Fix issues in shuffle decoding around VPERM* instructions. Fix shuffle decoding for VSHUFPS/D for 256-bit types. Add pattern matching for memory forms of VPERMILPS/VPERMILPD.
llvm-svn: 145390
2011-11-29 07:49:05 +00:00
Craig Topper 12b72def4e Fix VINSERTF128/VEXTRACTF128 to be marked as FP instructions. Allow execution dependency fix pass to convert them to their integer equivalents when AVX2 is enabled.
llvm-svn: 145376
2011-11-29 05:37:58 +00:00
Craig Topper 897a7d4b9c Correctly mark VPERM2F128 as being an FP instruction and add execution domain fixing support to convert it to VPERM2I128 for AVX2.
llvm-svn: 145370
2011-11-29 03:57:34 +00:00
Evan Cheng aa93ceb164 Add missing avx pattern.
llvm-svn: 145272
2011-11-28 20:27:23 +00:00
Craig Topper 818a983e93 Add X86 instruction selection for VPERM2I128 when AVX2 is enabled. Merge VPERMILPS/VPERMILPD detection since they are pretty similar.
llvm-svn: 145238
2011-11-28 10:14:51 +00:00
Craig Topper b0456936da Make isCommutedVSHUFP more like the way isCommutedSHUFP is handled.
llvm-svn: 145218
2011-11-28 01:14:24 +00:00
Craig Topper 79ee88a511 Merge detecting and handling for VSHUFPSY and VSHUFPDY since a lot of the code was similar for both.
llvm-svn: 145199
2011-11-27 21:41:12 +00:00
Craig Topper 51280d565b Merge 128-bit and 256-bit X86ISD node types for VPERMILPS and VPERMILPD. Simplify some shuffle lowering code since V1 can never be UNDEF due to the canonicalization that occurs when shuffle nodes are created.
llvm-svn: 145153
2011-11-26 22:55:48 +00:00
Craig Topper 7704bd7ac3 Collapse X86ISD node types for PUNPCKH*, PUNPCKL*, UNPCKLP*, and UNPCKHP* to not be type specific. Now we just have integer high and low and floating point high and low. Pattern matching will choose the correct instruction based on the vector type.
llvm-svn: 145148
2011-11-26 20:47:44 +00:00
Bruno Cardoso Lopes 0f9a1f5e6c This patch contains support for encoding FMA4 instructions and
tablegen patterns for scalar FMA4 operations and intrinsic. Also
add tests for vfmaddsd.

Patch by Jan Sjodin

llvm-svn: 145133
2011-11-25 19:33:42 +00:00
Craig Topper d65a444478 Remove 256-bit specific node types for UNPCKHPS/D and instead use the 128-bit versions and let the operand type distinguish. Also fix the load form of the v8i32 patterns for these to realize that the load would be promoted to v4i64.
llvm-svn: 145126
2011-11-24 22:57:10 +00:00
Craig Topper d26466748b Remove AVX2 specific X86ISD node types for PUNPCKH/L and instead just reuse the 128-bit versions and let the vector type distinguish.
llvm-svn: 145125
2011-11-24 22:20:08 +00:00
Benjamin Kramer 651db37352 X86: alias cqo to cqto.
llvm-svn: 145121
2011-11-24 12:02:46 +00:00
Benjamin Kramer ebcb451874 X86: Use btq for bit tests if the immediate can't be encoded in 32 bits.
Before:
	movabsq	$4294967296, %rax       ## encoding: [0x48,0xb8,0x00,0x00,0x00,0x00,0x01,0x00,0x00,0x00]
	testq	%rax, %rdi              ## encoding: [0x48,0x85,0xf8]
	jne	LBB0_2                  ## encoding: [0x75,A]

After:
	btq	$32, %rdi               ## encoding: [0x48,0x0f,0xba,0xe7,0x20]
	jb	LBB0_2                  ## encoding: [0x72,A]

btq is usually slower than testq because it doesn't fuse with the jump, but here we're better off
saving one register and a giant movabsq.

llvm-svn: 145103
2011-11-23 13:54:17 +00:00
Elena Demikhovsky 779ba6d7b7 I added several lines to the X86 code generator that allow choosing
VSHUFPS/VSHUFPD instructions while lowering a VECTOR_SHUFFLE node. A commuted VSHUFP mask is also checked.

The patch was reviewed by Bruno.

llvm-svn: 145099
2011-11-23 10:23:16 +00:00
Jakob Stoklund Olesen 02845410f9 Fix PR11422.
This was a bug in keeping track of the available domains when merging
domain values.

The wrong domain mask caused ExecutionDepsFix to try to move VANDPSYrr
to the integer domain which is only available in AVX2.

Also add an assertion to catch future attempts at emitting AVX2
instructions.

llvm-svn: 145096
2011-11-23 04:03:08 +00:00
Craig Topper 83c4592619 More fixes to the X86InstComments for shuffle instructions. In particular add AVX flavors of many instructions and fix the destination operand for some of the existing AVX entries.
llvm-svn: 145063
2011-11-22 14:27:57 +00:00
Craig Topper ccb7097509 Fix shuffle decoding logic to handle UNPCKLPS/UNPCKLPD on 256-bit vectors correctly. Add support for decoding UNPCKHPS/UNPCKHPD for AVX 128-bit and 256-bit forms.
llvm-svn: 145055
2011-11-22 01:57:35 +00:00
Craig Topper f563977795 Add methods for querying minimum SSE version along with AVX. Simplifies all the places that had to check a version of SSE and AVX.
llvm-svn: 145053
2011-11-22 00:44:41 +00:00
Craig Topper 6270d072c5 Lowering for v32i8 to VPUNPCKLBW/VPUNPCKHBW when AVX2 is enabled.
llvm-svn: 145028
2011-11-21 08:26:50 +00:00
Craig Topper 669199ca94 Add support for lowering 256-bit shuffles to VPUNPCKL/H for i16, i32, i64 if AVX2 is enabled.
llvm-svn: 145026
2011-11-21 06:57:39 +00:00
Craig Topper a065238c6e Make LowerSIGN_EXTEND_INREG split 256-bit vectors when AVX1 is enabled and use AVX2 shifts when AVX2 is enabled.
llvm-svn: 145022
2011-11-21 01:12:36 +00:00
Craig Topper e79761df73 Add code for lowering v32i8 shifts by a splat to AVX2 immediate shift instructions. Remove 256-bit splat handling from LowerShift as it was already handled by PerformShiftCombine.
llvm-svn: 145005
2011-11-20 00:12:05 +00:00
Craig Topper a3a6583694 Use 256-bit vcmpeqd for creating an all ones vector when AVX2 is enabled.
llvm-svn: 145004
2011-11-19 22:34:59 +00:00
Craig Topper bac86038ac Remove some of the special classes that worked around an old tablegen limitation of not being able to remove redundant bitconverts from patterns.
llvm-svn: 145003
2011-11-19 21:01:54 +00:00
Craig Topper 3af6ae089f Custom lower AVX2 variable shift intrinsics to shl/srl/sra nodes and remove the intrinsic patterns.
llvm-svn: 144999
2011-11-19 17:46:46 +00:00
Craig Topper f984efbfce Synthesize SSSE3/AVX 128-bit horizontal integer add/sub instructions from add/sub of appropriate shuffle vectors.
llvm-svn: 144989
2011-11-19 09:02:40 +00:00
Craig Topper 81390be00f Collapse X86 PSIGNB/PSIGNW/PSIGND node types.
llvm-svn: 144988
2011-11-19 07:33:10 +00:00
Craig Topper de6b73bb4d Extend VPBLENDVB and VPSIGN lowering to work for AVX2.
llvm-svn: 144987
2011-11-19 07:07:26 +00:00
Craig Topper 66e2b5a61e Remove unused parameters from the AVX maskmov classes.
llvm-svn: 144985
2011-11-19 04:49:22 +00:00
Nadav Rotem 1ec141d0f9 Add AVX2 vpbroadcast support
llvm-svn: 144967
2011-11-18 02:49:55 +00:00
Craig Topper f41e1d0246 Fix SSE/AVX integer comparison patterns to understand that all integer vector loads are promoted to i64 vector loads so patterns need a bitconvert. Also slightly simplify the AVX2 variable shift patterns by using the predefined bitconvert pattern fragments.
llvm-svn: 144896
2011-11-17 07:49:38 +00:00
Craig Topper f17b600577 Remove seemingly unnecessary duplicate VROUND definitions.
llvm-svn: 144885
2011-11-17 07:04:00 +00:00
Eli Friedman 20439a42b0 Turn on vzeroupper insertion on call boundaries for AVX; it works as far as I know, and I'd like to see wider testing.
llvm-svn: 144867
2011-11-17 00:21:52 +00:00
Evan Cheng 011538dc79 Another missing X86ISD::MOVLPD pattern. rdar://10450317
llvm-svn: 144839
2011-11-16 22:24:44 +00:00
Pete Cooper 48784ed5b7 Added missing comment about new custom lowering of DEC64
llvm-svn: 144811
2011-11-16 19:03:23 +00:00
Evan Cheng ecb2908bf9 Sink codegen optimization level into MCCodeGenInfo along side relocation model
and code model. This eliminates the need to pass OptLevel flag all over the
place and makes it possible for any codegen pass to use this information.

llvm-svn: 144788
2011-11-16 08:38:26 +00:00
Craig Topper 3ed7d9ee5a Fix the execution domain on a bunch of SSE/AVX instructions.
llvm-svn: 144784
2011-11-16 07:30:46 +00:00
Craig Topper 07d8b5e2c9 Remove code to enable execution dependency fix pass on VR256. VR128 is sufficient after r144636.
llvm-svn: 144777
2011-11-16 05:02:04 +00:00
Nadav Rotem 37010002f2 AVX: Add support for vbroadcast from BUILD_VECTOR and refactor some of the vbroadcast code.
llvm-svn: 144720
2011-11-15 22:50:37 +00:00
Pete Cooper 7c7ba1baa1 Added custom lowering for load->dec->store sequence in x86 when the EFLAGS registers is used
by later instructions.

Only done for DEC64m right now.

Fixes <rdar://problem/6172640>

llvm-svn: 144705
2011-11-15 21:57:53 +00:00
Jay Foad 0745e645e0 Remove some unnecessary includes of PseudoSourceValue.h.
llvm-svn: 144631
2011-11-15 07:24:32 +00:00
Craig Topper 649d1c5eec Fix PR11370 for real. Prevents converting 256-bit FP instruction to AVX2 256-bit integer instructions when AVX2 isn't enabled.
llvm-svn: 144629
2011-11-15 06:39:01 +00:00
Craig Topper 05baa85f58 Properly qualify AVX2 specific parts of execution dependency table. Also enable converting between 256-bit PS/PD operations when AVX1 is enabled. Fixes PR11370.
llvm-svn: 144622
2011-11-15 05:55:35 +00:00
Jakob Stoklund Olesen f8ad336bc4 Break false dependencies before partial register updates.
Two new TargetInstrInfo hooks lets the target tell ExecutionDepsFix
about instructions with partial register updates causing false unwanted
dependencies.

The ExecutionDepsFix pass will break the false dependencies if the
updated register was written in the previous N instructions.

The small loop added to sse-domains.ll runs twice as fast with
dependency-breaking instructions inserted.

llvm-svn: 144602
2011-11-15 01:15:30 +00:00
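
A toy model of the decision (hypothetical structure, not the new TargetInstrInfo hooks): if the register about to be partially updated has not been fully written within the last N instructions, a dependency-breaking instruction such as xorps is inserted first.

  #include <cstdint>
  #include <vector>

  struct DepBreakSketch {
    std::vector<int64_t> LastFullDef; // instruction index of the last full write per register
    int64_t Now = 0;
    int64_t Clearance = 16;           // the "previous N instructions" window (assumed value)

    explicit DepBreakSketch(unsigned NumRegs) : LastFullDef(NumRegs, INT64_MIN / 2) {}

    // Should a dependency-breaking instruction (e.g. xorps %xmmN, %xmmN) be
    // inserted before a partial update of register Reg?
    bool shouldBreakDependency(unsigned Reg) const {
      return Now - LastFullDef[Reg] > Clearance;
    }

    void recordFullDef(unsigned Reg) { LastFullDef[Reg] = Now; }
    void advance() { ++Now; }
  };
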
Evan Cheng fb13d32b3f Add a missing pattern for X86ISD::MOVLPD. rdar://10436044
llvm-svn: 144566
2011-11-14 20:35:52 +00:00
Pete Cooper 890e02e854 Changed SSE4/AVX <2 x i64> extract and insert ops to be Custom lowered
The constant-index case is still handled in tablegen, but other cases are expanded.

Fixes <rdar://problem/10435460>

llvm-svn: 144557
2011-11-14 19:38:42 +00:00
Craig Topper 182b00a2e0 Add AVX2 version of instructions to load folding tables. Also add a bunch of missing SSE/AVX instructions.
llvm-svn: 144525
2011-11-14 08:07:55 +00:00
Craig Topper a331515c82 Add neverHasSideEffects, mayLoad, and mayStore to many patternless SSE/AVX instructions. Remove MMX check from LowerVECTOR_SHUFFLE since MMX vector types won't go through it anyway.
llvm-svn: 144522
2011-11-14 06:46:21 +00:00
Craig Topper b8bcb473e2 Add BLSI, BLSMSK, and BLSR to getTargetNodeName.
llvm-svn: 144502
2011-11-13 17:31:07 +00:00
Craig Topper 3dc75f9e3b Add more AVX2 shift lowering support. Move AVX2 variable shift to use patterns instead of custom lowering code.
llvm-svn: 144457
2011-11-12 09:58:49 +00:00
Daniel Dunbar 52823cc91c build: Attempt to rectify inconsistencies between CMake and LLVMBuild versions of explicit dependencies.
- The hope is that we have a tool/test to verify these are accurate (and tight) soon.

llvm-svn: 144444
2011-11-12 02:10:57 +00:00
Craig Topper ea28a34c43 Add lowering for AVX2 shift instructions.
llvm-svn: 144380
2011-11-11 07:39:23 +00:00
Bill Wendling 8df8204554 If we have to reset the calculation of the compact encoding, then also reset the
"saved register" index.
<rdar://problem/10430076>

llvm-svn: 144350
2011-11-11 00:59:14 +00:00
Daniel Dunbar 6d617b48c7 LLVMBuild: Add explicit information on whether targets define an assembly printer, assembly parser, or disassembler.
llvm-svn: 144344
2011-11-11 00:23:56 +00:00
Nadav Rotem 0a2f797dec AVX2: Add variable shift from memory.
Note: these patterns only work in some cases because
the load SDNode is often bitcast from a load node of a
different type.

llvm-svn: 144266
2011-11-10 06:54:20 +00:00
Daniel Dunbar 233c9304a8 llvm-build: Add --native-target and --enable-targets options, and add logic to
handle defining the "magic" target related components (like native,
nativecodegen, and engine).
 - We still require these components to be in the project (currently in
   lib/Target) so that we have a place to document them and hopefully make it
   more obvious that they are "magic".

llvm-svn: 144253
2011-11-10 00:50:07 +00:00
Daniel Dunbar 82219ad4dc llvm-build: Add an explicit component type to represent targets.
- Gives us a place to hang target specific metadata (like whether the target has a JIT).

llvm-svn: 144250
2011-11-10 00:49:51 +00:00
Nadav Rotem 1938482bfa AVX2: Add patterns for variable shift operations
llvm-svn: 144212
2011-11-09 21:22:13 +00:00
Devang Patel 2f70bcdb94 Remove unnecessary include.
llvm-svn: 144211
2011-11-09 21:11:02 +00:00
Nadav Rotem 79135d844d Add AVX2 support for vselect of v32i8
llvm-svn: 144187
2011-11-09 13:21:28 +00:00
Craig Topper f87a2bef51 Enable execution dependency fix pass for YMM registers when AVX2 is enabled. Add AVX2 logical operations to list of replaceable instructions.
llvm-svn: 144179
2011-11-09 09:37:21 +00:00
Craig Topper c9eb09d3b8 Add instruction selection for AVX2 integer comparisons.
llvm-svn: 144176
2011-11-09 08:06:13 +00:00
Craig Topper 8c8a431057 Add AVX2 instruction lowering for add, sub, and mul.
llvm-svn: 144174
2011-11-09 07:28:55 +00:00
Pete Cooper 82cd9e81fc Added invariant field to the DAG.getLoad method and changed all calls.
When this field is true it means that the load is from constant memory (run-time or compile-time constant) and so can be hoisted out of loops or moved around other memory accesses.

llvm-svn: 144100
2011-11-08 18:42:53 +00:00
Evan Cheng 91b56e0390 Add x86 isel logic and patterns to match movlps from clang generated IR for _mm_loadl_pi(). rdar://10134392, rdar://10050222
llvm-svn: 144052
2011-11-08 00:31:58 +00:00
Jakob Stoklund Olesen 0241308954 Expand V_SET0 to xorps by default.
The xorps instruction is smaller than pxor, so prefer that encoding.

The ExecutionDepsFix pass will switch the encoding to pxor and xorpd
when appropriate.

llvm-svn: 143996
2011-11-07 19:15:58 +00:00
Craig Topper a6d409d543 Add AVX2 variable shift instructions and intrinsics.
llvm-svn: 143915
2011-11-07 08:26:24 +00:00
Craig Topper ff39be0afc Add AVX2 VPMOVMASK instructions and intrinsics.
llvm-svn: 143904
2011-11-07 03:20:35 +00:00
Craig Topper e122dcbf4a Add AVX2 VEXTRACTI128 and VINSERTI128 instructions. Fix VPERM2I128 to be qualified with HasAVX2 instead of HasAVX. Mark VINSERTF128 and VEXTRACTF128 as never having side effects.
llvm-svn: 143902
2011-11-07 02:00:04 +00:00
Craig Topper f01f1b5cb9 More AVX2 instructions and their intrinsics.
llvm-svn: 143895
2011-11-06 23:04:08 +00:00
Benjamin Kramer 20baffb257 Replace (Lower|Upper)caseString in favor of StringRef's newest methods.
llvm-svn: 143891
2011-11-06 20:37:06 +00:00
Craig Topper 05d1cb98e7 Add more AVX2 instructions and intrinsics.
llvm-svn: 143861
2011-11-06 06:12:20 +00:00
Benjamin Kramer f3da529028 Add more PRI.64 macros for MSVC and use them throughout the codebase.
llvm-svn: 143799
2011-11-05 08:57:40 +00:00
Eli Friedman 8f249600e7 Enhanced vzeroupper insertion pass that avoids inserting vzeroupper where it is unnecessary through local analysis. Patch from Bruno Cardoso Lopes, with some additional changes.
I'm going to wait for any review comments and perform some additional testing before turning this on by default.

llvm-svn: 143750
2011-11-04 23:46:11 +00:00
Daniel Dunbar 4a9c6426ff build/cmake: Use tblgen macro directly instead of llvm_tablegen, which just
added a layer of indirection with no value (not even conciseness).

llvm-svn: 143727
2011-11-04 19:04:23 +00:00
Craig Topper caba032f48 Add intrinsics for X86 vcvtps2ph and vcvtph2ps instructions
llvm-svn: 143683
2011-11-04 06:59:49 +00:00
Dan Gohman 198b7ffc11 Reapply r143206, with fixes. Disallow physical register lifetimes
across calls, and only check for nested dependences on the special
call-sequence-resource register.

llvm-svn: 143660
2011-11-03 21:49:52 +00:00
Daniel Dunbar bf9bba47a1 build: Add initial cut at LLVMBuild.txt files.
llvm-svn: 143634
2011-11-03 18:53:17 +00:00
Craig Topper 0e7cbbabea Add new X86 AVX2 VBROADCAST instructions.
llvm-svn: 143612
2011-11-03 07:35:53 +00:00
Craig Topper a47b05c7f3 More AVX2 instructions and intrinsics.
llvm-svn: 143536
2011-11-02 06:54:17 +00:00
Craig Topper 682b850602 Add a bunch more X86 AVX2 instructions and their corresponding intrinsics.
llvm-svn: 143529
2011-11-02 04:42:13 +00:00
Eli Friedman 3f5eccbe7a Teach the x86 backend a couple tricks for dealing with v16i8 sra by a constant splat value. Fixes PR11289.
llvm-svn: 143498
2011-11-01 21:18:39 +00:00
Craig Topper cfcfdf2aab Begin adding AVX2 instructions. No selection support yet other than intrinsics.
llvm-svn: 143331
2011-10-31 02:15:10 +00:00
Craig Topper 228d9131aa Add intrinsics and feature flag for read/write FS/GS base instructions. Also add AVX2 feature flag.
llvm-svn: 143319
2011-10-30 19:57:21 +00:00
Benjamin Kramer 7402ee6ec2 X86: Emit logical shift by constant splat of <16 x i8> as an <8 x i16> shift and zero out the bits where zeros should've been shifted in.
llvm-svn: 143315
2011-10-30 17:31:21 +00:00
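
A scalar sketch of the trick: two packed bytes are shifted as a single 16-bit value, and a mask then clears the bit positions that should have received zeros in a true per-byte shift.

  #include <cassert>
  #include <cstdint>

  // Left-shift each byte of a 16-bit lane by C, emulated with one 16-bit shift
  // plus a mask that clears the bits that leaked in from the neighbouring byte.
  static uint16_t shlBytesViaWord(uint16_t Lane, unsigned C) {
    uint16_t Shifted = uint16_t(Lane << C);
    uint16_t ByteMask = (0xFF << C) & 0xFF;               // surviving bits per byte
    return Shifted & uint16_t(ByteMask | (ByteMask << 8));
  }

  int main() {
    uint16_t Lane = 0x80FF;   // bytes 0xFF (low) and 0x80 (high)
    unsigned C = 3;
    uint16_t Expect = uint16_t(((0xFF << C) & 0xFF) | (((0x80 << C) & 0xFF) << 8));
    assert(shlBytesViaWord(Lane, C) == Expect);
    return 0;
  }
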
Nadav Rotem c602b2c4de Fix pr11266.
On x86: (shl V, 1) -> add V,V

Hardware support for vector-shift is sparse and in many cases we scalarize the
result. Additionally, on sandybridge padd is faster than shl.

llvm-svn: 143311
2011-10-30 13:24:22 +00:00
Dan Gohman 9b9c970148 Revert r143206, as there are still some failing tests.
llvm-svn: 143262
2011-10-29 00:41:52 +00:00
Dan Gohman 73057ad24f Reapply r143177 and r143179 (reverting r143188), with scheduler
fixes: Use a separate register, instead of SP, as the
calling-convention resource, to avoid spurious conflicts with
actual uses of SP. Also, fix unscheduling of calling sequences,
which can be triggered by pseudo-two-address dependencies.

llvm-svn: 143206
2011-10-28 17:55:38 +00:00
Duncan Sands 225a7037d6 Speculatively disable Dan's commits 143177 and 143179 to see if
it fixes the dragonegg self-host (it looks like gcc is miscompiled).
Original commit messages:
Eliminate LegalizeOps' LegalizedNodes map and have it just call RAUW
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.

Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.

Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.

Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.

This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.

Delete #if 0 code accidentally left in.

llvm-svn: 143188
2011-10-28 09:55:57 +00:00
Dan Gohman 4db3f7dd83 Eliminate LegalizeOps' LegalizedNodes map and have it just call RAUW
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.

Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.

Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.

Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.

This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.

llvm-svn: 143177
2011-10-28 01:29:32 +00:00
Kevin Enderby 49e6a0da7e Change the sysexit mnemonic (and sysexitl) to never have the REX.W prefix and
not depend on In32BitMode.  Use the sysexitq mnemonic for the version with the
REX.W prefix and allow it only In64BitMode.  rdar://9738584

llvm-svn: 143112
2011-10-27 17:40:41 +00:00
Lang Hames 58dba012b6 Rename NonScalarIntSafe to something more appropriate.
llvm-svn: 143080
2011-10-26 23:50:43 +00:00
Rafael Espindola b3285224cd Fixes an issue reported by -verify-machineinstrs.
Patch by Sanjoy Das.

llvm-svn: 143064
2011-10-26 21:16:41 +00:00
Rafael Espindola 66393c127d This commit introduces two fake instructions MORESTACK_RET and
MORESTACK_RET_RESTORE_R10; which are lowered to a RET and a RET
followed by a MOV respectively.  Having a fake instruction prevents
the verifier from seeing a MachineBasicBlock end with a
non-terminator (MOV).  It also prevents the rather eccentric case of a
MachineBasicBlock ending with RET but having successors nevertheless.

Patch by Sanjoy Das.

llvm-svn: 143062
2011-10-26 21:12:27 +00:00
Eli Friedman b72d55353a Add support to the old JIT for acquire/release loads and stores on x86. PR11207.
llvm-svn: 142841
2011-10-24 20:24:21 +00:00
Craig Topper b05d9e9bea Add X86 SARX, SHRX, and SHLX instructions.
llvm-svn: 142779
2011-10-23 22:18:24 +00:00
Craig Topper 980d59832a Add X86 RORX instruction
llvm-svn: 142741
2011-10-23 07:34:00 +00:00
Craig Topper e94d277db8 Add X86 MULX instruction for disassembler.
llvm-svn: 142738
2011-10-23 00:33:32 +00:00
Craig Topper 7412aa9886 Remove some duplicate specifying of neverHasSideEffects and mayLoad from X86 multiply instructions.
llvm-svn: 142737
2011-10-22 23:13:53 +00:00
Nadav Rotem e649d66552 Fix pr11193.
SHL inserts zeros from the right, so even when the original
sign_extend_inreg value is only 1 bit wide, we need to sra.

llvm-svn: 142724
2011-10-22 12:39:25 +00:00
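
A scalar illustration of the point (assuming two's-complement and an arithmetic right shift, which is what the lowering relies on): the shl alone fills with zeros, and only the following sra replicates the sign bit.

  #include <cassert>
  #include <cstdint>

  // sign_extend_inreg of the low bit of V: move the bit up to the MSB with shl,
  // then arithmetically shift it back down so it is copied into every bit.
  static int32_t signExtendLowBit(uint32_t V) {
    return int32_t(V << 31) >> 31; // shl alone would leave 0x80000000, not -1
  }

  int main() {
    assert(signExtendLowBit(1) == -1);
    assert(signExtendLowBit(0) == 0);
    assert(signExtendLowBit(3) == -1); // only the low bit matters
    return 0;
  }
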
Craig Topper 039a79067a Remove intrinsics for X86 BLSI, BLSMSK, and BLSR intrinsics and replace with custom isel lowering code.
llvm-svn: 142642
2011-10-21 06:55:01 +00:00
Evan Cheng 54d678fff4 Fix TLS lowering bug. The CopyFromReg must be glued to the TLSCALL. rdar://10291355
llvm-svn: 142550
2011-10-19 22:22:54 +00:00
Craig Topper ef309c3384 Rename PEXTR to PEXT. Add intrinsics for BMI instructions.
llvm-svn: 142480
2011-10-19 07:48:35 +00:00
Eric Christopher 16ec8c103a Revert "Turn on the vzeroupper pass by default."
This reverts commit 494f7ac3e8d2ab3d94e52317abf9c42a949fe1f3.

llvm-svn: 142455
2011-10-18 23:10:11 +00:00
Eric Christopher 9bede2dd92 Turn on the vzeroupper pass by default.
I'll remove/rename the option in a few days.

llvm-svn: 142439
2011-10-18 22:50:17 +00:00
Lang Hames 7d2f7b5a33 Teach fast isel about vector stores, and make DoSelectCall return false when it fails to emit a store. This fixes <rdar://problem/10215997>.
llvm-svn: 142432
2011-10-18 22:11:33 +00:00
Duncan Sands d278d35b13 Fix a bunch of unused variable warnings when doing a release
build with gcc-4.6.

llvm-svn: 142350
2011-10-18 12:44:00 +00:00
David Meyer 49045ddb4c Remove NaClMode
llvm-svn: 142338
2011-10-18 05:29:23 +00:00
Craig Topper e20793a4f1 Don't use inline assembly in 64-bit Visual Studio. Unfortunately, this means that cpuid leaf 7 can't be queried on versions of Visual Studio earlier than VS 2008 SP1. Fixes PR11147.
llvm-svn: 142177
2011-10-17 05:33:10 +00:00
Craig Topper 96fa597828 Add X86 PEXTR and PDEP instructions.
llvm-svn: 142141
2011-10-16 16:50:08 +00:00
Benjamin Kramer 1930b003fe Add AsmToken::getEndLoc and use it to add ranges to x86 asm register parsing.
<stdin>:1:12: error: register %rax is only available in 64-bit mode
   incl    %rax
           ^~~~

llvm-svn: 142137
2011-10-16 12:10:27 +00:00
Benjamin Kramer d416bae5f2 X86AsmParser: Synthesize EndLoc for tokens out of StartLoc + Length and print ranges for invalid operands.
<stdin>:1:4: error: invalid instruction mnemonic 'abc'
   abc incl    %edi
   ^~~

llvm-svn: 142135
2011-10-16 11:28:29 +00:00
Craig Topper aea148c366 Add X86 BZHI instruction as well as BMI2 feature detection.
llvm-svn: 142122
2011-10-16 07:55:05 +00:00
Craig Topper 0ae8d4d738 Add X86 INVPCID instruction. Add 32/64-bit predicates to INVEPT, INVVPID, VMREAD, and VMWRITE to remove hack from X86RecognizableInstr.
llvm-svn: 142117
2011-10-16 07:05:40 +00:00
Chris Lattner a3a0681083 Enhance llvm::SourceMgr to support diagnostic ranges, the same way clang does. Enhance
the X86 asmparser to produce ranges in the one case that was annoying me, for example:

test.s:10:15: error: invalid operand for instruction
movl 0(%rax), 0(%edx)
              ^~~~~~~

It should be straight-forward to enhance filecheck, tblgen, and/or the .ll parser to use 
ranges where appropriate if someone is interested.

llvm-svn: 142106
2011-10-16 04:47:35 +00:00
Craig Topper 25ea4e5ad3 Add X86 BEXTR instruction. This instruction uses VEX.vvvv to encode Operand 3 instead of Operand 2, so it needs special casing in the disassembler and code emitter. Ultimately, this information should be passed down from tablegen.
llvm-svn: 142105
2011-10-16 03:51:13 +00:00
Craig Topper 6c8879e3ab Add X86 feature detection support for BMI instructions. Added a new cpuid function for accessing leaves with sub-leaves specified in ECX. Also added code to keep track of the max cpuid level supported in both basic and extended leaves, and qualified the existing cpuid calls and the new call to leaf 7.
llvm-svn: 142089
2011-10-16 00:21:51 +00:00
Craig Topper 27ad12539d Add support for X86 blsr, blsmsk, and blsi instructions. Required extra work because these are the first VEX encoded instructions to use the reg field as an opcode extension.
llvm-svn: 142082
2011-10-15 20:46:47 +00:00
Benjamin Kramer 5fb5e3b384 SmallVector -> array
llvm-svn: 142073
2011-10-15 13:28:31 +00:00
Evan Cheng 06fdaeb5d9 A few 80-col violations.
llvm-svn: 141988
2011-10-14 20:36:23 +00:00
Craig Topper 965de2c197 Add X86 ANDN instruction. Including instruction selection.
llvm-svn: 141947
2011-10-14 07:06:56 +00:00
Craig Topper 3657fe4b17 Add X86 TZCNT instruction and patterns to select it. Also added core-avx2 processor which is gcc's name for Haswell.
llvm-svn: 141939
2011-10-14 03:21:46 +00:00
Jakob Stoklund Olesen d9444d455e Ban rematerializable instructions with side effects.
TableGen infers unmodeled side effects on instructions without a
pattern.  Fix some instruction definitions where that was overlooked.

Also raise an error if a rematerializable instruction has unmodeled side
effects. That doesn't make any sense.

llvm-svn: 141929
2011-10-14 01:00:49 +00:00
Jakob Stoklund Olesen eafa9d50c2 V_SET0 has no side effects.
TableGen will mark any pattern-less instruction as having unmodeled side
effects. This is extra bad for V_SET0 which gets rematerialized a lot.

This was part of the cause for PR11125, but the real bug was fixed
in r141923.

llvm-svn: 141924
2011-10-14 00:39:50 +00:00
Eli Friedman a5abd03a8d Simplify assertion, and avoid undefined shift. Based on patch by Ahmed Charles.
llvm-svn: 141912
2011-10-13 23:27:48 +00:00
Bill Wendling 25f6d3e321 More closely follow libgcc, which has code after the `ret' instruction to
release the stack segment and reset the stack pointer. Place the code in its own
MBB to make the verifier happy.

llvm-svn: 141859
2011-10-13 08:24:19 +00:00
Bill Wendling 063f55ffdd Revert r141854 because it was causing failures:
http://lab.llvm.org:8011/builders/llvm-x86_64-linux/builds/101

--- Reverse-merging r141854 into '.':
U    test/MC/Disassembler/X86/x86-32.txt
U    test/MC/Disassembler/X86/simple-tests.txt
D    test/CodeGen/X86/bmi.ll
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86.td
U    lib/Target/X86/X86Subtarget.h

llvm-svn: 141857
2011-10-13 07:48:07 +00:00
Bill Wendling 22a690e3db Should not add instructions to a BB after a return instruction. The machine instruction verifier doesn't like this, nor do I.
llvm-svn: 141856
2011-10-13 07:42:32 +00:00
Craig Topper 8cc9388073 Add X86 TZCNT instruction and patterns to select it. Also added core-avx2 processor which is gcc's name for Haswell.
llvm-svn: 141854
2011-10-13 07:09:14 +00:00
Craig Topper 2fdcb1f045 Add 'implicit EFLAGS' to patterns for popcnt and lzcnt
llvm-svn: 141853
2011-10-13 06:18:52 +00:00
Nick Lewycky 064c1c0e77 Fix indent in comment.
llvm-svn: 141749
2011-10-12 00:14:12 +00:00
Craig Topper 63bc541196 Add HasPOPCNT predicate to the POPCNT instructions. Also mark POPCNT as modifying EFLAGS.
llvm-svn: 141656
2011-10-11 07:13:09 +00:00
Craig Topper 0fbca75c17 Make Ivy Bridge 16-bit floating point conversion instructions require AVX.
llvm-svn: 141654
2011-10-11 07:01:37 +00:00
Craig Topper 271064e873 Add X86 LZCNT instruction. Including instruction selection support.
llvm-svn: 141651
2011-10-11 06:44:02 +00:00
Craig Topper a697852386 Fix disassembling of popcntw. Also remove some code that says it accounts for 64BIT_REXW_XD not existing, but it does exist.
llvm-svn: 141642
2011-10-11 04:34:23 +00:00
Lang Hames f22f46bf25 Fixed natural stack alignment for Linux x86-32. Thanks Eli.
llvm-svn: 141616
2011-10-11 00:51:36 +00:00
Lang Hames de7ab801cc Add a natural stack alignment field to TargetData, and prevent InstCombine from
promoting allocas to preferred alignments that exceed the natural
alignment. This avoids some potentially expensive dynamic stack realignments.

The natural stack alignment is set in target data strings via the "S<size>"
option. Size is in bits and must be a multiple of 8. The natural stack alignment
defaults to "unspecified" (represented by a zero value), and the "unspecified"
value does not prevent any alignment promotions. Target maintainers that care
about avoiding promotions should explicitly add the "S<size>" option to their
target data strings.

llvm-svn: 141599
2011-10-10 23:42:08 +00:00
Eli Friedman 8ec0897db6 Make sure the X86 backend doesn't explode on 128-bit shuffles in AVX mode. Fixes PR11102.
llvm-svn: 141585
2011-10-10 22:28:47 +00:00
Benjamin Kramer 874c519337 X86: Add a subtarget definition for core-avx-i, which is GCC's name for ivy bridge.
llvm-svn: 141571
2011-10-10 19:35:07 +00:00
Nadav Rotem 814598563f Fix 10892 - When lowering SIGN_EXTEND_INREG do not lower v2i64 because the
instruction set has no 64-bit SRA support.

llvm-svn: 141570
2011-10-10 19:31:45 +00:00
Benjamin Kramer 42c0330a79 X86: Add patterns for the movbe instruction (mov + bswap, only available on atom)
llvm-svn: 141563
2011-10-10 18:34:56 +00:00
Craig Topper a14c5723eb Put a bunch of calls to ToggleFeature behind proper if statements.
llvm-svn: 141527
2011-10-10 05:34:02 +00:00
Craig Topper fe9179fa4f Add Ivy Bridge 16-bit floating point conversion instructions for the X86 disassembler.
llvm-svn: 141505
2011-10-09 07:31:39 +00:00
Jakob Stoklund Olesen 513d1213cc Prevent potential NOREX bug.
A GR8_NOREX virtual register is created when extracting a sub_8bit_hi
sub-register:

  %vreg2<def> = COPY %vreg1:sub_8bit_hi; GR8_NOREX:%vreg2 %GR64_ABCD:%vreg1
  TEST8ri_NOREX %vreg2, 1, %EFLAGS<imp-def>; GR8_NOREX:%vreg2

If such a live range is ever split, its register class must not be
inflated to GR8.  The sub-register copy can only target GR8_NOREX.

I don't have a test case for this theoretical bug.

llvm-svn: 141500
2011-10-08 20:20:03 +00:00
Jakob Stoklund Olesen 729abd360e Add TEST8ri_NOREX pseudo to constrain sub_8bit_hi copies.
In 64-bit mode, sub_8bit_hi sub-registers can only be used by NOREX
instructions. The COPY created from the EXTRACT_SUBREG DAG node cannot
target all GR8 registers, only those in GR8_NOREX.

To enforce this, we ensure that all instructions using the
EXTRACT_SUBREG are GR8_NOREX constrained.

This fixes PR11088.

llvm-svn: 141499
2011-10-08 18:28:28 +00:00
Jakob Stoklund Olesen 464fcc0035 Constrain both operands on MOVZX32_NOREXrr8.
This instruction is explicitly encoded without a REX prefix, so both
operands must be *_NOREX.

Also add an assertion to copyPhysReg() that fires when the MOV8rr_NOREX
constraints are not satisfied.

This fixes a miscompilation in 20040709-2 in the gcc test suite.

llvm-svn: 141410
2011-10-07 20:15:54 +00:00
Evan Cheng 74db300f37 High bits of movmskp{s|d} and pmovmskb are known zero. rdar://10247336
llvm-svn: 141371
2011-10-07 17:21:44 +00:00
Craig Topper d9cfddc5cd Add X86 disassembler support for RDFSBASE, RDGSBASE, WRFSBASE, and WRGSBASE.
llvm-svn: 141358
2011-10-07 07:02:24 +00:00
Craig Topper bf136764ae Add X86 disassembler support for XSAVE, XRSTOR, and XSAVEOPT.
llvm-svn: 141354
2011-10-07 05:53:50 +00:00
Craig Topper 5aebebe18d Revert part of r141274. Only need to change encoding for xchg %eax, %eax in 64-bit mode. This is because in 64-bit mode xchg %eax, %eax implies zeroing the upper 32-bits of RAX which makes it not a NOP. In 32-bit mode using NOP encoding is fine.
llvm-svn: 141353
2011-10-07 05:35:38 +00:00
Craig Topper 23eb468b1f Fix assembling of xchg %eax, %eax to not use the NOP encoding of 0x90. This was done by creating a new register group that excludes AX registers. Fixes PR10345. Also added aliases for flipping the order of the operands of xchg <reg>, %eax.
llvm-svn: 141274
2011-10-06 06:44:41 +00:00
Peter Collingbourne fb3d935649 Build system infrastructure for multiple tblgens.
llvm-svn: 141266
2011-10-06 01:51:51 +00:00
Jakob Stoklund Olesen ee9b576a2a Override TRI::getSubClassWithSubReg for X86.
There are fewer registers with sub_8bit sub-registers in 32-bit mode
than in 64-bit mode.  In 32-bit mode, sub_8bit behaves the same as
sub_8bit_hi.

llvm-svn: 141206
2011-10-05 20:26:33 +00:00
Craig Topper b58a9665bd Change C++ style comments to C style comments in X86 disassembler. Patch from Joe Abbey.
llvm-svn: 141162
2011-10-05 03:29:32 +00:00
Owen Anderson 0ca562ec4c Teach the MC to output code/data region marker labels in MachO and ELF modes. These are used by disassemblers to provide better disassembly, particularly on targets like ARM Thumb that like to intermingle data in the TEXT segment.
llvm-svn: 141135
2011-10-04 23:26:17 +00:00
Craig Topper f18c896337 Add support in the disassembler for ignoring the L-bit on certain VEX instructions. Mark instructions that have this behavior. Fixes PR10676.
llvm-svn: 141065
2011-10-04 06:30:42 +00:00
Craig Topper 786bdb9e14 Add support for MOVBE and RDRAND instructions for the assembler and disassembler. Includes feature flag checking, but no intrinsic support. Fixes PR10832, PR11026 and PR11027.
llvm-svn: 141007
2011-10-03 17:28:23 +00:00
Craig Topper 0d0be47d03 Treat VEX.vvvv as a 3-bit field outside of 64-bit mode. Prevents access to registers xmm8-xmm15 outside 64-bit mode.
llvm-svn: 140997
2011-10-03 08:14:29 +00:00
Craig Topper 31854ba017 Fix VEX disassembling to ignore REX.RXBW bits in 32-bit mode.
llvm-svn: 140993
2011-10-03 07:51:09 +00:00
Craig Topper 7aea69d949 Fix some Intel syntax disassembly issues with instructions that implicitly use AL/AX/EAX/RAX such as ADD/SUB/ADC/SUBB/XOR/OR/AND/CMP/MOV/TEST.
llvm-svn: 140974
2011-10-02 21:08:12 +00:00
Craig Topper 21c33657d6 Special case disassembler handling of REX.B prefix on NOP instruction to decode as XCHG R8D, EAX instead. Fixes PR10344.
llvm-svn: 140971
2011-10-02 16:56:09 +00:00
Craig Topper d07a59f288 Fix disassembling of INVEPT and INVVPID to take operands
llvm-svn: 140955
2011-10-01 21:20:14 +00:00
Craig Topper 88cb33e0d4 Fix disassembler handling of CRC32 which is an odd instruction that uses 0xf2 as an opcode extension and allows the opsize prefix. This necessitated adding IC_XD_OPSIZE and IC_64BIT_XD_OPSIZE contexts. Unfortunately, this increases the size of the disassembler tables. Fixes PR10702.
llvm-svn: 140954
2011-10-01 19:54:56 +00:00
Jakob Stoklund Olesen 237dceff90 Store sub-class lists as a bit vector.
This uses less memory and it reduces the complexity of sub-class
operations:

- hasSubClassEq() and friends become O(1) instead of O(N).

- getCommonSubClass() becomes O(N) instead of O(N^2).

In the future, TableGen will infer register classes.  This makes it
cheap to add them.

llvm-svn: 140898
2011-09-30 22:19:07 +00:00
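
A sketch of why the bit-vector representation makes the queries cheap (hypothetical types, not the TableGen output): sub-class membership is a single bit test, and the common sub-class is a scan over the AND of two vectors.

  #include <cstdint>
  #include <vector>

  struct RegClassSketch {
    unsigned ID;
    // One bit per register class; bit i is set if class i is a sub-class of
    // this one (a class counts as a sub-class of itself).
    std::vector<uint32_t> SubClassBits;

    bool hasSubClassEq(const RegClassSketch &RC) const {          // O(1)
      return SubClassBits[RC.ID / 32] & (uint32_t(1) << (RC.ID % 32));
    }
  };
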
Jakob Stoklund Olesen dd1904e7a6 Expand the x86 V_SET0* pseudos right after register allocation.
This also makes it possible to reduce the number of pseudo instructions
and get rid of the encoding information.

llvm-svn: 140776
2011-09-29 05:10:54 +00:00
Eli Friedman 2fb357a5b0 PR11033: Make sure we don't generate PCMPGTQ and PCMPEQQ if the target CPU does not support them.
llvm-svn: 140723
2011-09-28 21:00:25 +00:00
Jakob Stoklund Olesen 934b7d7645 Rename SSEDomainFix -> lib/CodeGen/ExecutionDepsFix.
I'll clean up the source in the next commit.

llvm-svn: 140663
2011-09-28 00:01:54 +00:00
Jakob Stoklund Olesen 30c811246f Remove X86-dependent stuff from SSEDomainFix.
This also enables domain swizzling for AVX code which required a few
trivial test changes.

The pass will be moved to lib/CodeGen shortly.

llvm-svn: 140659
2011-09-27 23:50:46 +00:00
Jakob Stoklund Olesen b48c994cc0 Promote the X86 Get/SetSSEDomain functions to TargetInstrInfo.
I am going to unify the SSEDomainFix and NEONMoveFix passes into a
single target independent pass.  They are essentially doing the same
thing.

llvm-svn: 140652
2011-09-27 22:57:18 +00:00
Craig Topper 45faba98b4 Fix VEX decoding in i386 mode. Fixes PR11008.
llvm-svn: 140515
2011-09-26 05:12:43 +00:00
Jakob Stoklund Olesen 55cf2ed148 Only run MF.verify() with EXPENSIVE_CHECKS=1.
llvm-svn: 140441
2011-09-24 01:11:19 +00:00
Duncan Sands a54fd541c2 Implement Chris's suggestion of legalizing the various SSE and AVX
hadd/hsub intrinsics into the new fhadd/fhsub X86 node.

llvm-svn: 140383
2011-09-23 16:10:22 +00:00
Eli Friedman 87c844cdf8 PR10991: make fast-isel correctly check whether accessing a global through an alias involves thread-local storage. (I'm not entirely sure how this is supposed to work, but this patch makes fast-isel consistent with the normal isel path.)
llvm-svn: 140355
2011-09-22 23:41:28 +00:00
Jakob Stoklund Olesen f05864ad7d Add support for GR32 <-> FR32 cross class copies.
We already support GR64 <-> VR128 copies.  All of these copies break
partial register dependencies by zeroing the high part of the target
register.

llvm-svn: 140348
2011-09-22 22:45:24 +00:00
Duncan Sands 0e4fcb8e3b Synthesize SSE3/AVX 128 bit horizontal add/sub instructions from
floating point add/sub of appropriate shuffle vectors.  Does not
synthesize the 256 bit AVX versions because they work differently.

llvm-svn: 140332
2011-09-22 20:15:48 +00:00
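
A scalar sketch of the 128-bit pattern being matched: an element-wise fadd of two particular shuffles of a and b computes exactly what haddps does.

  #include <cassert>

  // haddps semantics on 4-wide vectors: pairwise sums of a, then pairwise sums of b.
  static void haddps(const float a[4], const float b[4], float r[4]) {
    r[0] = a[0] + a[1];
    r[1] = a[2] + a[3];
    r[2] = b[0] + b[1];
    r[3] = b[2] + b[3];
  }

  int main() {
    // The DAG pattern: shuffle A = <a0,a2,b0,b2>, shuffle B = <a1,a3,b1,b3>,
    // then fadd A, B -- the same four sums that haddps above produces.
    float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, r[4];
    haddps(a, b, r);
    assert(r[0] == 3 && r[1] == 7 && r[2] == 11 && r[3] == 15);
    return 0;
  }
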
Craig Topper 6d1872b77a Fix register printing in disassembling of push/pop of segment registers and in/out in Intel syntax mode. Fixes PR10960
llvm-svn: 140299
2011-09-22 07:01:50 +00:00
Benjamin Kramer cfd26cd744 The SSE version differences for fmin/fmax are more involved than I thought.
- x87: no min or max.
- SSE1: min/max for single precision scalars and vectors.
- SSE2: min/max for single and double precision scalars and vectors.
- AVX: as SSE2, but also supports the wider ymm vectors. (this is covered by the isTypeLegal check)

llvm-svn: 140296
2011-09-22 03:27:22 +00:00
Benjamin Kramer dc397a6402 X86: Don't form min/max nodes if the target is missing SSE.
llvm-svn: 140294
2011-09-22 03:01:42 +00:00
Benjamin Kramer e5e189f669 X86Disassembler: if verbose logging is going to nulls(), disable logging completely.
Otherwise we'll spend a ridiculous amount of time pretty printing debug output and then discarding it.

llvm-svn: 140276
2011-09-21 21:47:35 +00:00
Nadav Rotem 50f123d8e5 fix comment
llvm-svn: 140258
2011-09-21 17:14:40 +00:00
Nadav Rotem c1cd8506ce Insert a sanity check on the combining of x86 truncating-store nodes. This replaces the problematic check that was removed in r139995.
llvm-svn: 140246
2011-09-21 08:45:10 +00:00
Richard Trieu a318b8dce6 Change:
assert(!"error message");

To:

  assert(0 && "error message");

which is more consistent across the code base.

llvm-svn: 140234
2011-09-21 03:09:09 +00:00
Owen Anderson 69fa8ffeef In the disassembler C API, be careful not to confuse the comment streamer that the disassembler outputs annotations on with the streamer that the InstPrinter will print them on.
llvm-svn: 140217
2011-09-21 00:25:23 +00:00
Bruno Cardoso Lopes 8058234b32 Revert r140097, working on a better approach
llvm-svn: 140203
2011-09-20 23:19:29 +00:00
Bruno Cardoso Lopes f7638e1e51 Simplify max/minp[s|d] dagcombine matching
llvm-svn: 140199
2011-09-20 22:34:45 +00:00
Bruno Cardoso Lopes 60aa85b672 Tidy up a bit more, fix tab and remove trailing whitespaces
llvm-svn: 140186
2011-09-20 21:45:26 +00:00
Bruno Cardoso Lopes 33e91a6cf7 The wrong relocation was being emitted for several SSSE3 instructions.
This fixes PR10963. Thanks to Benjamin for finding the wrong tablegen
declaration.

llvm-svn: 140184
2011-09-20 21:39:21 +00:00
Bruno Cardoso Lopes 05f3f4939a Tidy up code!
llvm-svn: 140183
2011-09-20 21:39:06 +00:00
Craig Topper 68c92d86da Extend changes from r139986 to produce 256-bit AVX minps/minpd/maxps/maxpd.
llvm-svn: 140140
2011-09-20 07:38:59 +00:00
Bruno Cardoso Lopes c4398d2c7b Fix PR10949. Fix the encoding of VMOVPQIto64rr.
llvm-svn: 140098
2011-09-19 23:36:59 +00:00
Bruno Cardoso Lopes 51792dcc4d Based on the small opt Zvi's patch was trying to achieve, eliminate
128-bit undef subvector insertion into a 256-bit vector

llvm-svn: 140097
2011-09-19 23:36:50 +00:00
Bruno Cardoso Lopes d4a3d452d4 Match X86ISD::FSETCCsd and X86ISD::FSETCCss while in AVX mode. This fixes
PR10955 and PR10948.

llvm-svn: 140069
2011-09-19 21:29:24 +00:00
Nadav Rotem 763c11cc12 Fix typos in my prev commit, found by Tobi.
llvm-svn: 140003
2011-09-18 19:00:23 +00:00
Nadav Rotem 261a10a007 setOperationAction should be done on the return value type, not the operands.
llvm-svn: 140001
2011-09-18 14:57:03 +00:00
Nadav Rotem 7ae11279e9 When promoting integer vectors we often create ext-loads. This patch adds a
dag-combine optimization to implement the ext-load efficiently (using shuffles).

For example the type <4 x i8> is stored in memory as i32, but it needs to
find its way into a <4 x i32> register. Previously we scalarized the memory
access, now we use shuffles.

llvm-svn: 139995
2011-09-18 10:39:32 +00:00
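
A scalar sketch of what the combine achieves for the <4 x i8> example: the four bytes arrive as one 32-bit load and are then spread (here zero-extended) into four 32-bit lanes, rather than doing four scalar loads.

  #include <cassert>
  #include <cstdint>
  #include <cstring>

  // Widen a <4 x i8> loaded as a single i32 into four i32 lanes, the job the
  // shuffle-based lowering performs in vector registers.
  static void extLoad4xi8(const uint8_t *Mem, uint32_t Out[4]) {
    uint32_t Packed;
    std::memcpy(&Packed, Mem, 4);          // one 32-bit load instead of four byte loads
    for (int i = 0; i < 4; ++i)
      Out[i] = (Packed >> (8 * i)) & 0xFF; // unpack/zero-extend each byte (little-endian)
  }

  int main() {
    uint8_t Mem[4] = {1, 2, 3, 250};
    uint32_t Out[4];
    extLoad4xi8(Mem, Out);
    assert(Out[0] == 1 && Out[3] == 250);
    return 0;
  }
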
Craig Topper d9d01917ee Fix typo by changing Lower256IntVETCC to Lower256IntVSETCC.
llvm-svn: 139993
2011-09-18 08:03:58 +00:00
Duncan Sands f2b8c854dd Synthesize x86 max/min instructions also for vectors (i.e. produce
maxps and maxpd).  This broke the sse41-blend.ll testcase by causing
maxpd to be produced rather than a cmp+blend pair, which is the reason
I tweaked it.  Gives a small speedup on doduc with dragonegg when the
GCC vectorizer is used.

llvm-svn: 139986
2011-09-17 16:49:39 +00:00
Bruno Cardoso Lopes 4641efe304 Describe more AVX 128-bit convert instructions without patterns to have
mayLoad = 1

llvm-svn: 139973
2011-09-16 23:41:29 +00:00
Bruno Cardoso Lopes 5389ed5dfb Add mayLoad attribute to AVX convert instructions, since none of them
are declared with load patterns. This fixes the crash in PR10941. No testcases,
since a fold is triggered and then converted back to the register form
afterwards.

llvm-svn: 139953
2011-09-16 22:02:14 +00:00
Bruno Cardoso Lopes 2d406f02bf Fix PR10884.
This PR basically reports a problem where a crash in generated code
happened due to %rbp being clobbered:

  pushq %rbp
  movq  %rsp, %rbp
  ....
  vmovmskps %ymm12, %ebp
  ....
  movq  %rbp, %rsp
  popq  %rbp
  ret

Since Eric's r123367 commit, the default stack alignment for x86 32-bit
has changed to 16 bytes. Since then, the MaxStackAlignmentHeuristicPass
hasn't been really used, but with AVX it becomes useful again, since per
ABI compliance we don't always align the stack to 256-bit, but only when
there are 256-bit incoming arguments.

ReserveFP was only used by this pass, but there's no RA target hook that
uses getReserveFP() to check for the presence of FP (since nothing was
triggering the pass to run, the uses of getReserveFP() were removed
over time without being noticed). Change this pass to use
setForceFramePointer, which is properly called by MachineFunction
hasFP method.

The testcase is very big and dependent on RA, not sure if it's worth
adding to test/CodeGen/X86.

llvm-svn: 139939
2011-09-16 20:58:28 +00:00
Owen Anderson a0c3b97221 Don't attach annotations to MCInst's. Instead, have the disassembler return, and the printer accept, an annotation string which can be passed through if the client cares about annotations.
llvm-svn: 139876
2011-09-15 23:38:46 +00:00
Bruno Cardoso Lopes 7b43568a93 Add a fixme note!
llvm-svn: 139872
2011-09-15 23:04:24 +00:00
Bruno Cardoso Lopes c69d68a150 Add the remaining AVX versions of instructions to X86InstrInfo, this
time for describing high-latency ones and for recognizing loads
from the same base pointer

llvm-svn: 139864
2011-09-15 22:15:52 +00:00
Bruno Cardoso Lopes 6b302955b1 Factor out partial register update checks for some SSE instructions.
Also add the AVX versions and add comments!

llvm-svn: 139854
2011-09-15 21:42:23 +00:00
Owen Anderson d1814791ad Add support for stored annotations to MCInst, and provide facilities for MC-based InstPrinters to print them out. Enhance the ARM and X86 InstPrinter's to do so in verbose mode.
llvm-svn: 139820
2011-09-15 18:36:29 +00:00
Bruno Cardoso Lopes fa1ca3070b Change all checks regarding the presence of any SSE level to always
take into consideration the presence of AVX. This change, together with
the SSEDomainFix enabled for AVX, makes AVX codegen to always (hopefully)
emit the same code as SSE for 128-bit vector ops. I don't
have a testcase for this, but AVX now beats SSE in performance for
128-bit ops in the majority of programs in the llvm testsuite.

llvm-svn: 139817
2011-09-15 18:27:36 +00:00
Bruno Cardoso Lopes 62d79875d3 Enable SSEDomainFix pass for AVX mode.
llvm-svn: 139816
2011-09-15 18:27:32 +00:00
Eli Friedman da5f010177 Fix the code creating VZEXT_LOAD so that it creates the right memoperand. Issue spotted in -debug output. I can't think of any practical effects at the moment, but it might matter if we start doing more aggressive alias analysis in CodeGen.
llvm-svn: 139758
2011-09-14 23:42:45 +00:00
Craig Topper ee8157cb41 Fix mem type for VEX.128 form of VROUNDP*. Remove filter preventing VROUND from being recognized by disassembler.
llvm-svn: 139691
2011-09-14 06:41:26 +00:00
Craig Topper 96e00e5a24 Make disassembling of VBLEND* print the immediate as an XMM/YMM register name. Fixes PR10917.
llvm-svn: 139690
2011-09-14 05:55:28 +00:00
Bruno Cardoso Lopes d560b8c8e9 Teach the foldable tables about 128-bit AVX instructions and make the
alignment check for 256-bit classes more strict. There are no testcases,
but we catch more folding cases for AVX while running the single- and
multi-source tests in the llvm testsuite.

Since some 128-bit AVX instructions have different number of operands
than their SSE counterparts, they are placed in different tables.

256-bit AVX instructions should also be added in the table soon. And
there a few more 128-bit versions to handled, which should come in
the following commits.

llvm-svn: 139687
2011-09-14 02:36:58 +00:00
Bruno Cardoso Lopes 333a59eced Vector shuffle mask <i32 4, i32 5, i32 2, i32 3> should yield "movsd", not "movss".
llvm-svn: 139686
2011-09-14 02:36:14 +00:00
Nadav Rotem 9cfbeaff15 swap vselect operand order - pr10907
llvm-svn: 139630
2011-09-13 19:56:38 +00:00
Bruno Cardoso Lopes 03d6002d68 Add 256-bit versions of alignedstore and alignedload, to be
more strict about the alignment checking. This was found by inspection
and I don't have any testcases so far, although the llvm testsuite runs
without any problem.

llvm-svn: 139625
2011-09-13 19:33:03 +00:00
Bruno Cardoso Lopes 56d9b51caf Revert the remaining part of r139528. According to PR10907 the bug seems
to be in the VSELECT operands order, so I'll leave the fix for Nadav.

llvm-svn: 139624
2011-09-13 19:33:00 +00:00
Nadav Rotem 52202fbf2d Add vselect target support for targets that do not support blend but do support
xor/and/or (For example SSE2).

llvm-svn: 139623
2011-09-13 19:17:42 +00:00
Craig Topper 8dd7bbcc80 Only disassemble instructions with vvvv != 1111 if the instruction actually uses the vvvv field to encode an operand. Fixes PR10851.
llvm-svn: 139591
2011-09-13 07:37:44 +00:00
Craig Topper e98d8a5c84 Remove filter that was preventing MOVDQU/MOVDQA and their VEX forms from being disassembled. Also added encodings for the other register/register form of these instructions. Fixes PR10848.
llvm-svn: 139588
2011-09-13 06:54:58 +00:00
Craig Topper b7ae29e404 Fix encoding of VMOVDQU to not simultaneously be 'TB OpSize' and 'XS'. 'XS' is correct and seems to have been taking priority.
llvm-svn: 139587
2011-09-13 06:39:34 +00:00
Eli Friedman d68a727bd0 Fix the assembler strings for a couple of atomic instructions. Doesn't really matter much in practice, but it's a bit cleaner.
llvm-svn: 139563
2011-09-13 00:27:04 +00:00
Bruno Cardoso Lopes ff8d8a830e Fix PR10845. SUBREG_TO_REG shouldn't be used when the input and
destination types are equal!

llvm-svn: 139553
2011-09-12 22:59:23 +00:00
Bruno Cardoso Lopes 973d2921e8 Revert the wrong part of r139528, and fix testcases.
llvm-svn: 139541
2011-09-12 21:24:07 +00:00
Bruno Cardoso Lopes be7a086f58 Not sure how CMPPS and CMPPD ever worked before; I guess they didn't.
However, with this fix they do now.

Basically the operand order for the x86 target specific node
is not the same as the instruction's, but since the intrinsic needs that
specific order at the instruction definition, just change the order
during legalization. Also, there were some wrong inversions of condition
codes, such as GE => LE and GT => LT; fix that too. Fixes PR10907.

llvm-svn: 139528
2011-09-12 19:30:40 +00:00
Bruno Cardoso Lopes f6382979f2 Organize the operand names for CMPPS and CMPPD a bit
llvm-svn: 139527
2011-09-12 19:30:36 +00:00
Bruno Cardoso Lopes 2e4bee16bb Realign BLEND patterns to match the general style for patterns in .td file.
llvm-svn: 139526
2011-09-12 19:30:33 +00:00
Bruno Cardoso Lopes 9c9f64918c Fix 80-columns
llvm-svn: 139525
2011-09-12 19:30:29 +00:00
Nadav Rotem c0c71e162a Format patterns, remove unused X86blend patterns
llvm-svn: 139491
2011-09-12 08:41:50 +00:00
Craig Topper 48f2b36911 Fix disassembling of one of the register/register forms of MOVUPS/MOVUPD/MOVAPS/MOVAPD/MOVSS/MOVSD and their VEX equivalents. Fixes PR10877.
llvm-svn: 139486
2011-09-11 23:19:54 +00:00
Craig Topper a88e356017 Fix disassembling of reverse register/register forms of ADD/SUB/XOR/OR/AND/SBB/ADC/CMP/MOV.
llvm-svn: 139485
2011-09-11 21:41:45 +00:00
Nadav Rotem b873b18721 CR fixes per Bruno's request.
Undo the changes from r139285 which added custom lowering to vselect.
Add tablegen lowering for vselect.

llvm-svn: 139479
2011-09-11 15:02:23 +00:00
Eli Friedman 7f50e00203 r139454 activates an assert in a case where we were doing the right thing anyway. Make that explicit, and un-XFAIL the testcase.
llvm-svn: 139458
2011-09-10 02:01:42 +00:00
Richard Trieu 74996f2a79 Fix the asserts in lib/Target/X86/X86ELFWriterInfo.cpp and
lib/ExecutionEngine/MCJIT/MCJIT.cpp from:

  assert("error");

to:

  assert(0 && "error");

llvm-svn: 139456
2011-09-10 01:42:07 +00:00
Richard Trieu d9917bef6c Fixed an assert from:
assert("not implemented for target shuffle node");

to:

  assert(0 && "not implemented for target shuffle node");

This causes a test failure in CodeGen/X86/palignr.ll which has
been marked as XFAIL for the time being.
Test failure filed at PR10901.

llvm-svn: 139454
2011-09-10 01:26:21 +00:00
Nadav Rotem de838daefd Implement vector-select support for avx256. Refactor the vblend implementation to have tablegen match the instruction by the node type
llvm-svn: 139400
2011-09-09 20:29:17 +00:00
Craig Topper 5d5134014f Fix handling of Intel syntax disassembling of movs and stos to stop being blank. Also fixed scas and cmps to always print a size suffix in Intel syntax, since it's ambiguous without arguments. Fixes PR10875.
llvm-svn: 139353
2011-09-09 05:40:53 +00:00
Nadav Rotem b5df62036b Fix the 80-column violations and remove the unsupported v8i16 type from the list of legal vselect types.
llvm-svn: 139324
2011-09-08 22:17:35 +00:00
Bruno Cardoso Lopes 46b9cde019 Add an AVX version of a simple i64 -> f64 bitcast. This could be
triggered using llc with -O0, which wouldn't let it be folded and thus
exposed the lack of this pattern.

llvm-svn: 139320
2011-09-08 21:52:33 +00:00
Bruno Cardoso Lopes 23eb5265b4 * Combines Alignment, AuxInfo, and TB_NOT_REVERSABLE flag into a
single field (Flags), which is a bitwise OR of items from the TB_*
enum. This makes it easier to add new information in the future.

* Gives every static array an equivalent layout: { RegOp, MemOp, Flags }

* Adds a helper function, AddTableEntry, to avoid duplication of the
insertion code.

* Renames TB_NOT_REVERSABLE to TB_NO_REVERSE.

* Adds TB_NO_FORWARD, which is analogous to TB_NO_REVERSE, except that
it prevents addition of the Reg->Mem entry. (This is going to be used
by Native Client, in the next CL).

Patch by David Meyer

llvm-svn: 139311
2011-09-08 18:35:57 +00:00
Bruno Cardoso Lopes fb113a0051 Add AVX versions of blend vector operations and fix some issues noticed
in Nadav's r139285 and r139287 commits.

1) Rename vsel.ll to a more descriptive name
2) Change the order of BLEND operands to "Op1, Op2, Cond", this is
necessary because PBLENDVB is already used in different places with
this order, and it was being emitted in the wrong way for vselect
3) Add AVX patterns and tests for the same SSE41 instructions

llvm-svn: 139305
2011-09-08 18:05:08 +00:00
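A scalar C++ model of the element-wise blend/select operation these patterns implement (illustrative only; which operand a true condition selects is a convention, and the operand order here just mirrors the "Op1, Op2, Cond" ordering mentioned above).

  #include <array>
  #include <cassert>

  // Element-wise select: for each lane, pick Op1 if the condition lane is
  // true, otherwise Op2. This sketch only shows the shape of the operation.
  template <typename T, std::size_t N>
  std::array<T, N> vblend(const std::array<T, N> &Op1,
                          const std::array<T, N> &Op2,
                          const std::array<bool, N> &Cond) {
    std::array<T, N> R{};
    for (std::size_t i = 0; i < N; ++i)
      R[i] = Cond[i] ? Op1[i] : Op2[i];
    return R;
  }

  int main() {
    std::array<int, 4> A{1, 2, 3, 4}, B{10, 20, 30, 40};
    std::array<bool, 4> C{true, false, true, false};
    auto R = vblend(A, B, C);
    assert(R[0] == 1 && R[1] == 20 && R[2] == 3 && R[3] == 40);
    return 0;
  }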
Bruno Cardoso Lopes ea8d803bb0 Fix PR10844: Add patterns to cover non-foldable versions of X86vzmovl.
Triggered using llc -O0. Also fix some SET0PS patterns to use their AVX
forms and test it in the testcase.

llvm-svn: 139304
2011-09-08 18:05:02 +00:00
Nadav Rotem 2550ba2a27 Add X86-SSE4 codegen support for vector-select.
llvm-svn: 139285
2011-09-08 08:11:19 +00:00
Eli Friedman 02f2f89a98 Fix atomic load and store on x86 to pass -verify-machineinstrs (and possibly fix some subtle bugs involving passes which check mayStore()).
This isn't exactly ideal, but it is good enough for the moment.

llvm-svn: 139245
2011-09-07 18:48:32 +00:00
James Molloy 4c493e8050 Refactor instprinter and mcdisassembler to take a SubtargetInfo. Add -mattr= handling to llvm-mc. Reviewed by Owen Anderson.
llvm-svn: 139237
2011-09-07 17:24:38 +00:00
Rafael Espindola 6559656e73 Detect attempts to use segmented stacks on non-ELF systems and error
out (not assert) early.

llvm-svn: 139233
2011-09-07 16:10:57 +00:00
Bill Wendling 226c4ed92a Reenable compact unwind by default. However, also emit the old version of unwind
information for older linkers.

llvm-svn: 139206
2011-09-06 23:47:14 +00:00
Rafael Espindola 9d96c94278 Fix comment. Noticed by Duncan.
llvm-svn: 139161
2011-09-06 19:29:31 +00:00
Duncan Sands f2641e1bc1 Add codegen support for vector select (in the IR this means a select
with a vector condition); such selects become VSELECT codegen nodes.
This patch also removes VSETCC codegen nodes, unifying them with SETCC
nodes (codegen was actually often using SETCC for vector SETCC already).
This ensures that various DAG combiner optimizations kick in for vector
comparisons.  Passes dragonegg bootstrap with no testsuite regressions
(nightly testsuite as well as "make check-all").  Patch mostly by
Nadav Rotem.

llvm-svn: 139159
2011-09-06 19:07:46 +00:00
Rafael Espindola db5823dc77 Fix style issues and typos found by Duncan.
llvm-svn: 139154
2011-09-06 18:43:08 +00:00
Duncan Sands a098436b32 Split the init.trampoline intrinsic, which currently combines GCC's
init.trampoline and adjust.trampoline intrinsics, into two intrinsics
like in GCC.  While having one combined intrinsic is tempting, it is
not natural because typically the trampoline initialization needs to
be done in one function, and the result of adjust trampoline is needed
in a different (nested) function.  To get around this llvm-gcc hacks the
nested function lowering code to insert an additional parent variable
holding the adjust.trampoline result that can be accessed from the child
function.  Dragonegg doesn't have the luxury of tweaking GCC code, so it
stored the result of adjust.trampoline in the memory GCC set aside for
the trampoline itself (this is always available in the child function),
and set up some new memory (using an alloca) to hold the trampoline.
Unfortunately this breaks Go which allocates trampoline memory on the
heap and wants to use it even after the parent has exited (!).  Rather
than doing even more hacks to get Go working, it seemed best to just use
two intrinsics like in GCC.  Patch mostly by Sanjoy Das.

llvm-svn: 139140
2011-09-06 13:37:06 +00:00
Nick Lewycky 73df7e3830 Add a new MC bit for NaCl (Native Client) mode. NaCl requires that certain
instructions are more aligned than the CPU requires, and adds some additional
directives, to follow in future patches. Patch by David Meyer!

llvm-svn: 139125
2011-09-05 21:51:43 +00:00
Benjamin Kramer 7859d2e148 Use internal storage for command line option.
llvm-svn: 139079
2011-09-03 03:45:06 +00:00
Bruno Cardoso Lopes 07d9914620 Add AVX versions to match AESENC/AESDEC intrinsics. This hopefully ends
the cycle of missing AVX counterparts of already present SSE* patterns

llvm-svn: 139073
2011-09-03 00:47:08 +00:00
Bruno Cardoso Lopes 1d5c2d9227 Add AVX version of a SSE4.1 VPBLENDVB pattern
llvm-svn: 139072
2011-09-03 00:47:05 +00:00
Bruno Cardoso Lopes 212a8c4357 Add AVX versions of SSE4.1 EXTRACTPS patterns
llvm-svn: 139071
2011-09-03 00:47:03 +00:00
Bruno Cardoso Lopes 3d581a36b6 Add AVX versions for SSE4.1 MOVZX* patterns
llvm-svn: 139070
2011-09-03 00:47:01 +00:00
Bruno Cardoso Lopes 6d701fcef0 Add one more AVX pattern for MOVZPQILo2PQI
llvm-svn: 139069
2011-09-03 00:46:58 +00:00
Bruno Cardoso Lopes 9923c51564 Move PUNPCKLQDQ splat pattern close to the instruction definition and
duplicate it for AVX mode.

llvm-svn: 139068
2011-09-03 00:46:56 +00:00
Bruno Cardoso Lopes 96b11f39e2 Add AVX pattern versions for PSHUFB,PSIGN{B,W,D}
llvm-svn: 139067
2011-09-03 00:46:54 +00:00
Bruno Cardoso Lopes 9a0da1e57a Add AVX versions of MOVZDI2PDI patterns. Use SUBREG_TO_REG to indicate
that the AVX versions (even the 128-bit ones) all clear the upper part
of the destination register.

llvm-svn: 139066
2011-09-03 00:46:51 +00:00
Bruno Cardoso Lopes 903952223a Enforce subtarget checks in a few places to be explicit about when the
pattern should be matched

llvm-svn: 139065
2011-09-03 00:46:49 +00:00
Bruno Cardoso Lopes 521b0cfdc6 Tidy up code moving patterns to their appropriate place!
llvm-svn: 139064
2011-09-03 00:46:47 +00:00
Bruno Cardoso Lopes aad5e50ded Add AVX versions of FsMOVAPS and FsMOVAPD. Teach X86InstrInfo how to use
them!

llvm-svn: 139063
2011-09-03 00:46:45 +00:00
Bruno Cardoso Lopes d893fc92af Teach X86FastISel to use AVX versions of instructions when possible
llvm-svn: 139062
2011-09-03 00:46:42 +00:00
Bruno Cardoso Lopes 006c9371a1 Fix 80-column and style
llvm-svn: 139061
2011-09-03 00:46:40 +00:00
Bruno Cardoso Lopes dbb40015ff Tidy up some SSE/AVX convert intrinsics. Also add an AVX version of
an OptForSize pattern

llvm-svn: 139060
2011-09-03 00:46:38 +00:00
Jakob Stoklund Olesen 1f72dd40c7 Pseudo CMOV instructions don't clobber EFLAGS.
The explanation about a 0 argument being materialized as xor is no
longer valid.  Rematerialization will check if EFLAGS is live before
clobbering it.

The code produced by X86TargetLowering::EmitLoweredSelect does not
clobber EFLAGS.

This causes one less testb instruction to be generated in the cmov.ll
test case.

llvm-svn: 139057
2011-09-02 23:52:55 +00:00
Jakob Stoklund Olesen f08354d183 Check for EFLAGS live-out before clobbering it.
It is only allowed to clobber EFLAGS at the end of a block if it isn't
live-in to any successor.

llvm-svn: 139056
2011-09-02 23:52:52 +00:00
Jakob Stoklund Olesen d0c8a31c8b Use existing function.
llvm-svn: 139055
2011-09-02 23:52:49 +00:00
Jakob Stoklund Olesen 38019e3188 Remove unused variables.
llvm-svn: 139047
2011-09-02 22:41:25 +00:00
Eli Friedman f3dd6da7a8 Don't fast-isel for atomic load/store; some cases require extra handling missing from fast-isel.
llvm-svn: 139044
2011-09-02 22:33:24 +00:00
Kevin Enderby 5b03f72292 Change X86 disassembly to print immediate values as signed by default.
Special-case those instructions where the immediate is not sign-extended.  radr://8795217

llvm-svn: 139028
2011-09-02 20:01:23 +00:00
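A self-contained C++ sketch of the general idea (not the disassembler's actual code; the helper name and widths are invented for illustration): sign-extend the encoded immediate to its operand width before printing, so e.g. an 8-bit 0xFF prints as -1.

  #include <cstdint>
  #include <cstdio>

  // Sign-extend a 'Width'-bit immediate and print it as a signed value.
  void printSignedImm(uint64_t Imm, unsigned Width) {
    int64_t Signed = (int64_t)(Imm << (64 - Width)) >> (64 - Width);
    std::printf("%lld\n", (long long)Signed);
  }

  int main() {
    printSignedImm(0xFF, 8);    // prints -1
    printSignedImm(0x7F, 8);    // prints 127
    printSignedImm(0xFFF0, 16); // prints -16
    return 0;
  }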
Bill Wendling 4e1d018935 Revert r138826 until PR10834 can be fixed.
llvm-svn: 139018
2011-09-02 18:15:04 +00:00
Bruno Cardoso Lopes f61d1c072e Fix vbroadcast matching logic to bail out early if the node has more
than one use. Fixes PR10825.

llvm-svn: 138951
2011-09-01 18:15:06 +00:00
Bruno Cardoso Lopes a0d85139e5 Move more code around and duplicate AVX patterns: MOVHPS and MOVLPS
llvm-svn: 138897
2011-08-31 21:15:32 +00:00
Bruno Cardoso Lopes 21a180367b Move MOVAPS,MOVUPS patterns close to the instructions definition
llvm-svn: 138896
2011-08-31 21:15:29 +00:00
Bruno Cardoso Lopes 941001312a Remove "_Int" forms of MOVUPSmr and MOVAPSmr
llvm-svn: 138895
2011-08-31 21:15:22 +00:00
Rafael Espindola 6e31dfea35 Spelling and grammar fixes to problems found by Duncan.
llvm-svn: 138858
2011-08-31 16:43:33 +00:00
Eli Friedman 635d9692b6 Make sure we don't crash when -miphoneos-version-min is specified on x86. Hopefully this will fix gcc testsuite failures.
llvm-svn: 138856
2011-08-31 16:19:51 +00:00
Eric Christopher 72d1d5e193 Rework this conditional a bit.
Patch by Sanjoy Das

llvm-svn: 138853
2011-08-31 04:17:21 +00:00
Bruno Cardoso Lopes 9fc6b8be03 - Move all MOVSS and MOVSD patterns close to their definitions
- Duplicate some store patterns to their AVX forms!
- Caught a bug while restricting the patterns' subtarget; fixed it
  and updated a testcase to check it properly

llvm-svn: 138851
2011-08-31 03:04:20 +00:00
Bruno Cardoso Lopes aa1daa63da Remove unnecessary AVX checks
llvm-svn: 138850
2011-08-31 03:04:14 +00:00
Bruno Cardoso Lopes db520db514 Teach more places to use VMOVAPS/VMOVUPS instead of MOVAPS/MOVUPS
whenever AVX is enabled.

llvm-svn: 138849
2011-08-31 03:04:09 +00:00
Evan Cheng cb1e5bae4c Fix (movhps load) lowering / pattern to match more cases. rdar://10050549
llvm-svn: 138848
2011-08-31 02:05:24 +00:00
Bill Wendling 6470e07e20 Fix off-by-one error Benjamin noticed.
llvm-svn: 138832
2011-08-30 21:23:24 +00:00
Bill Wendling 7a9c3033a4 Enable compact unwind info by default. This only applies to Darwin when CFI is
disabled.

llvm-svn: 138826
2011-08-30 20:54:11 +00:00
Jeffrey Yasskin 065c35726f Fix C++0x narrowing errors when char is unsigned.
In the case of EDInstInfo, this would actually cause a bug when -1 became 255
and was then compared >=0 in llvm-mc/Disassembler.cpp.

llvm-svn: 138825
2011-08-30 20:53:29 +00:00
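A tiny C++ illustration of the bug class mentioned above (hypothetical values, not the EDInstInfo code): when the underlying character type is unsigned, storing -1 yields 255, so a later '>= 0' check stops distinguishing the sentinel.

  #include <cstdio>

  int main() {
    unsigned char C = static_cast<unsigned char>(-1); // the -1 sentinel becomes 255
    if (C >= 0)  // trivially true once the type is unsigned
      std::printf("C = %d; the -1 sentinel was lost\n", C);
    return 0;
  }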
Rafael Espindola 94d3253626 Adds support for variable sized allocas. For a variable sized alloca,
code is inserted to first check if the current stacklet has enough
space. If so, space is allocated by simply decrementing the stack
pointer. Otherwise a runtime routine (__morestack_allocate_stack_space
in libgcc) is called which allocates the required memory from the
heap.

Patch by Sanjoy Das.

llvm-svn: 138818
2011-08-30 19:47:04 +00:00
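A rough, self-contained C++ model of the control flow described above (not the actual codegen; the stacklet structure and function names are invented, and the libgcc routine name is only echoed from the commit message in a comment): if the current stacklet has room, bump the pointer, otherwise fall back to the heap.

  #include <cstddef>
  #include <cstdlib>

  // Toy stacklet: a fixed buffer with a bump pointer.
  struct Stacklet {
    char buf[1024];
    std::size_t used = 0;
  };

  // Modeled after the commit's description: try the current stacklet first,
  // otherwise take a heap-backed fallback (standing in for
  // __morestack_allocate_stack_space in libgcc).
  void *allocateVariableSized(Stacklet &S, std::size_t Size) {
    if (S.used + Size <= sizeof(S.buf)) {
      void *P = S.buf + S.used;   // the "adjust the stack pointer" fast path
      S.used += Size;
      return P;
    }
    return std::malloc(Size);      // fallback path
  }

  int main() {
    Stacklet S;
    void *Small = allocateVariableSized(S, 64);    // served from the stacklet
    void *Large = allocateVariableSized(S, 4096);  // served from the heap
    std::free(Large);
    (void)Small;
    return 0;
  }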
Rafael Espindola 3353017668 Adds a SelectionDAG node X86SegAlloca which will be custom lowered
from DYNAMIC_STACKALLOC.

Two new pseudo instructions (SEG_ALLOCA_32 and SEG_ALLOCA_64) which
will match X86SegAlloca (based on word size) are also added.  They
will be custom emitted to inject the actual stack handling code.

Patch by Sanjoy Das.

llvm-svn: 138814
2011-08-30 19:43:21 +00:00
Rafael Espindola c21742112b Emit segmented-stack specific code into function prologues for
X86. Modify the pass added in the previous patch to call this new
code.

The new prologue generated will call a libgcc routine (__morestack)
to allocate more stack space from the heap when required.

Patch by Sanjoy Das.

llvm-svn: 138812
2011-08-30 19:39:58 +00:00
Eli Friedman 850b9a9a84 Explicitly zero out parts of a vector which are required to be zero by the algorithm in LowerUINT_TO_FP_i32. This only has a substantial effect on the generated code when the input is extracted from a vector register; other ways of loading an i32 do the appropriate zeroing implicitly. Fixes PR10802.
llvm-svn: 138768
2011-08-29 21:15:46 +00:00
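For context, a scalar C++ sketch of the standard u32 -> f64 conversion trick that this kind of lowering is built on (likely what the algorithm referred to uses; assuming IEEE-754 doubles, and an illustration rather than the vectorized LLVM code). It shows why the bits that are not explicitly written must be exactly zero.

  #include <cassert>
  #include <cstdint>
  #include <cstring>

  // Classic u32 -> f64 trick: build the bit pattern of 2^52 + x, then
  // subtract 2^52. It only works if the bits that are not explicitly set
  // are exactly zero, which is why the lowering has to zero them out.
  double uintToDouble(uint32_t X) {
    uint64_t Bits = 0x4330000000000000ULL | X; // exponent of 2^52, mantissa = X
    double D;
    std::memcpy(&D, &Bits, sizeof(D));
    return D - 4503599627370496.0;             // subtract 2^52
  }

  int main() {
    assert(uintToDouble(0) == 0.0);
    assert(uintToDouble(7) == 7.0);
    assert(uintToDouble(0xFFFFFFFFu) == 4294967295.0);
    return 0;
  }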
Bruno Cardoso Lopes 50e0170fa5 Move non-instruction patterns to a more appropriate place!
llvm-svn: 138744
2011-08-29 17:51:24 +00:00
Nicolas Geoffray 7ea09c9462 Remove premature previous commit.
llvm-svn: 138725
2011-08-28 14:52:51 +00:00
Nicolas Geoffray f786bae6ac Encoding of instructions referencing segments has changed. Do what X86MCCodeEmitter does.
llvm-svn: 138723
2011-08-28 13:07:57 +00:00
Benjamin Kramer 61a1ff543c Silence GCC warnings and make an array const.
llvm-svn: 138706
2011-08-27 17:36:14 +00:00
Eli Friedman 5e5704277f Add support for generating CMPXCHG16B on x86-64 for the cmpxchg IR instruction.
llvm-svn: 138660
2011-08-26 21:21:21 +00:00
Craig Topper c66d50d1a2 Fix disassembling of VCVTSD2SI
llvm-svn: 138623
2011-08-26 04:49:29 +00:00
Bruno Cardoso Lopes ed834810be Do the same as r138461. Mark VZEROALL as clobbering all YMM registers
llvm-svn: 138592
2011-08-25 22:23:58 +00:00
Bruno Cardoso Lopes 8347b86293 Add support for AVX 256-bit version of MOVDDUP!
llvm-svn: 138588
2011-08-25 21:40:37 +00:00
Bruno Cardoso Lopes 388eacee2c Make isMOVDDUP mask check more strict and update comments!
llvm-svn: 138587
2011-08-25 21:40:34 +00:00
Craig Topper 14380ff9a0 Add more missing TB encodings to VEX instructions to allow them to be disassembled. Fixes remainder of PR10678.
llvm-svn: 138553
2011-08-25 08:11:01 +00:00
Craig Topper e1541838f9 Add TB encoding to VZEROALL, VZEROUPPER, and VCVTPS2PD to allow them to be disassembled. Fixes PR10723.
llvm-svn: 138551
2011-08-25 06:57:46 +00:00
Bruno Cardoso Lopes 296256fb32 Add support for 256-bit versions of VSHUFPD and VSHUFPS.
llvm-svn: 138546
2011-08-25 02:58:26 +00:00
Bruno Cardoso Lopes 54366cc332 Add memory version of SHUFPD to mask decoding!
llvm-svn: 138545
2011-08-25 02:58:21 +00:00
Bruno Cardoso Lopes 50d74211df Create a section for non-instruction patterns at the beginning of the
file, and move more code around!

llvm-svn: 138521
2011-08-24 23:18:11 +00:00
Bruno Cardoso Lopes 2fb51d38e6 Move code around!
llvm-svn: 138520
2011-08-24 23:18:09 +00:00
Bruno Cardoso Lopes fb702fe8d6 Organize UNPCK* patterns, also add remaining for AVX.
llvm-svn: 138519
2011-08-24 23:18:06 +00:00
Bruno Cardoso Lopes 9ade17b7f2 Move remaining MOVDDUP patterns close to the MOVDDUP definition and duplicate
the missing ones for AVX.

llvm-svn: 138518
2011-08-24 23:18:04 +00:00
Bruno Cardoso Lopes c1e1e7ab97 Organize and tidy up MOVDDUP section. Also update comments!
llvm-svn: 138517
2011-08-24 23:18:02 +00:00
Bruno Cardoso Lopes 813891a215 Move MOVHLPS patterns close to MOVHLPS definition, and duplicate the
pattern for 128-bit AVX mode.

llvm-svn: 138516
2011-08-24 23:17:59 +00:00
Bruno Cardoso Lopes 9566a66a7c Move all PSHUF* patterns close to the PSHUF* definitions. Also be
explicit about which subtarget they refer to, and add AVX versions of
the ones we currently don't. Remove old and now wrong comments!

llvm-svn: 138515
2011-08-24 23:17:57 +00:00
Bruno Cardoso Lopes 2953d7b320 Move all SHUFP* patterns close to the SHUFP* definitions. Also be
explicit about which subtarget they refer to, and add AVX versions of
the ones we currently don't. Make the mask check more strict, to be
clear it won't be used to match to 256-bit versions!

llvm-svn: 138514
2011-08-24 23:17:55 +00:00
Eli Friedman 9c73a57b20 Hook up 64-bit atomic load/store on x86-32. I plan to write more efficient implementations eventually.
llvm-svn: 138505
2011-08-24 22:33:28 +00:00
Eli Friedman 38cd821dc4 Fix whitespace.
llvm-svn: 138487
2011-08-24 21:17:30 +00:00
Eli Friedman 342e8df0e0 Basic x86 code generation for atomic load and store instructions.
llvm-svn: 138478
2011-08-24 20:50:09 +00:00
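For reference, the source-level construct this codegen serves is just an atomic load or store; a minimal C++ usage example follows (on x86, an aligned seq_cst load typically lowers to a plain mov and a seq_cst store to an xchg or a mov plus fence, though the exact sequence is the backend's choice).

  #include <atomic>
  #include <cassert>

  std::atomic<int> Flag{0};

  int main() {
    Flag.store(1, std::memory_order_seq_cst);      // atomic store
    int V = Flag.load(std::memory_order_seq_cst);  // atomic load
    assert(V == 1);
    return 0;
  }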
Bruno Cardoso Lopes ce02840633 Mark VZEROALL as clobbering all YMM registers
llvm-svn: 138461
2011-08-24 18:48:33 +00:00
Evan Cheng 2bb4035707 Move TargetRegistry and TargetSelect from Target to Support where they belong.
These are strictly utilities for registering targets and components.

llvm-svn: 138450
2011-08-24 18:08:43 +00:00
Craig Topper de92622aa5 Break 256-bit vector int add/sub/mul into two 128-bit operations to avoid costly scalarization. Fixes PR10711.
llvm-svn: 138427
2011-08-24 06:14:18 +00:00
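A scalar C++ model of the splitting described above (illustrative only; the arrays stand in for 256-bit and 128-bit vector registers): the 8 x i32 addition is carried out as two independent 4 x i32 additions on the low and high halves.

  #include <array>
  #include <cassert>

  using V4 = std::array<int, 4>;
  using V8 = std::array<int, 8>;

  V4 add128(const V4 &A, const V4 &B) {
    V4 R{};
    for (int i = 0; i < 4; ++i) R[i] = A[i] + B[i];
    return R;
  }

  // Split a "256-bit" add into two "128-bit" adds, as the commit describes.
  V8 add256(const V8 &A, const V8 &B) {
    V4 ALo{A[0], A[1], A[2], A[3]}, AHi{A[4], A[5], A[6], A[7]};
    V4 BLo{B[0], B[1], B[2], B[3]}, BHi{B[4], B[5], B[6], B[7]};
    V4 Lo = add128(ALo, BLo), Hi = add128(AHi, BHi);
    return {Lo[0], Lo[1], Lo[2], Lo[3], Hi[0], Hi[1], Hi[2], Hi[3]};
  }

  int main() {
    V8 A{1, 2, 3, 4, 5, 6, 7, 8}, B{8, 7, 6, 5, 4, 3, 2, 1};
    V8 R = add256(A, B);
    for (int X : R) assert(X == 9);
    return 0;
  }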
Bruno Cardoso Lopes 9e9f2ce32d Fix a nasty bug where a v4i64 was being wrongly emitted with 32-bit
permutations. Also tidy up some patterns and move them close to their
instruction definitions!

llvm-svn: 138392
2011-08-23 22:06:37 +00:00
Evan Cheng 4d6c9d711d Some refactoring so TargetRegistry.h no longer has to include any files
from MC.

llvm-svn: 138367
2011-08-23 20:15:21 +00:00
Nick Lewycky 4c8ff77f1b Make PerformSubCombine work on integers larger than i128. Fixes a crasher.
llvm-svn: 138354
2011-08-23 19:01:24 +00:00
Craig Topper 6612e35b0d Add support for breaking 256-bit v16i16 and v32i8 VSETCC into two 128-bit ones, avoiding scalarization. Add VEX forms of pcmpeqq and pcmpgtq. Fixes more cases for PR10712.
llvm-svn: 138321
2011-08-23 04:36:33 +00:00
Bruno Cardoso Lopes 2a3ffb5d97 Introduce a pass to insert vzeroupper instructions to avoid the AVX-to-SSE
transition penalty. The pass is enabled through the "x86-use-vzeroupper"
llc command line option. This is only the first step (a very naive and
conservative one) to sketch out the idea, but a proper DFA is coming next
to allow smarter decisions. Comments and ideas now and in further commits
will be much appreciated.

llvm-svn: 138317
2011-08-23 01:14:17 +00:00
Benjamin Kramer 9dc808e74d X86: Add some operand types required to identify calls.
llvm-svn: 138285
2011-08-22 22:55:32 +00:00
Bruno Cardoso Lopes 74f090d44c Add support for breaking a 256-bit int VSETCC into two 128-bit ones,
avoiding scalarization of the compare. Reduces code from 59 to 6
instructions. Fixes PR10712.

llvm-svn: 138271
2011-08-22 20:31:04 +00:00
Bruno Cardoso Lopes 6e62ca940a Add 128-bit AVX codegen for PCMP* family of integer instructions
llvm-svn: 138270
2011-08-22 20:31:00 +00:00
Bruno Cardoso Lopes d126347f32 Rewrite part of the VEX encoding logic to be easier to read! Also fix
a bug and add a testcase!

llvm-svn: 138123
2011-08-19 22:27:29 +00:00
Craig Topper ba6c2a52c7 Add TB encoding to VEX versions of SSE fp logical operations to fix disassembler
llvm-svn: 138034
2011-08-19 05:28:50 +00:00
Bruno Cardoso Lopes 22241acc29 Fix PR10677. Initial patch and idea by Peter Cooper but I've changed the
implementation!

llvm-svn: 138029
2011-08-19 02:23:56 +00:00
Bruno Cardoso Lopes 5647d84aa4 Re-encode the 128-bit AVX versions of SQRT, RSQRT, and RCP to have 3 operands
instead of 2. They were already defined this way in their regular
versions, but not in the intrinsic versions (*_Int), and that would work
for assembly emission but not for object code, since a MachineOperand
would be missing. This commit fixes PR10697.

Also removed the {VSQRT,VRSQRT,VRCP}r_Int forms and match the intrinsics
via INSERT_SUBREG+EXTRACT_SUBREG patterns. The same couldn't be done for
the memory versions because the sse_load_f32/sse_load_f64 operands need
special handling and don't work like regular "addr" operands.

There are right now 114 "*_Int" and 98 "Int_*" forms! I'm slowly
removing them as I step through, but hope we can get rid of these
someday, they are really annoying :)

llvm-svn: 138012
2011-08-18 23:59:21 +00:00
Bruno Cardoso Lopes 3c7d6eb64c Clean up vector logical ops in AVX and use the int versions for simple
v2i64

llvm-svn: 137919
2011-08-18 02:11:34 +00:00
Bruno Cardoso Lopes 1a87fcb9ba Fix PR10688. Add support for splitting 256-bit vector shifts when the
shift amount is variable

llvm-svn: 137885
2011-08-17 22:12:20 +00:00
Owen Anderson a4043c4b32 Allow the MCDisassembler to return a "soft fail" status code, indicating an instruction that is disassemblable, but invalid. Only used for ARM UNPREDICTABLE instructions at the moment.
Patch by James Molloy.

llvm-svn: 137830
2011-08-17 17:44:15 +00:00
Bruno Cardoso Lopes be5e987379 Introduce matching patterns for vbroadcast AVX instruction. The idea is to
match splats in the form (splat (scalar_to_vector (load ...))) whenever
the load can be folded. All the logic and instruction emission is
working but because of PR8156, there are no ways to match loads, cause
they can never be folded for splats. Thus, the tests are XFAILed, but
I've tested and exercised all the logic using a relaxed version for
checking the foldable loads, as if the bug was already fixed. This
should work out of the box once PR8156 gets fixed since MayFoldLoad will
work as expected.

llvm-svn: 137810
2011-08-17 02:29:19 +00:00
Bruno Cardoso Lopes 6d33c7f303 Update comments about vector splat handling in x86
llvm-svn: 137808
2011-08-17 02:29:13 +00:00
Bruno Cardoso Lopes ed786a346e Now that we have a canonical way to handle 256-bit splats:
vinsertf128 $1 + vpermilps $0, remove the old code that used to first
do the splat in a 128-bit vector and then insert it into a larger one.
This is better because the handling code gets simpler and also leaves
more room for the upcoming vbroadcast!

llvm-svn: 137807
2011-08-17 02:29:10 +00:00
Bruno Cardoso Lopes 2e99f1b3aa Instead of always leaving the work to the generic legalizer when
there is no support for native 256-bit shuffles, be smarter in some
cases, for example, when you can extract specific 128-bit parts and use
regular 128-bit shuffles for them. Example:

For this shuffle:
  shufflevector <4 x i64> %a, <4 x i64> %b, <4 x i32>
                <i32 1, i32 0, i32 7, i32 6>

This was expanded to:
  vextractf128  $1, %ymm1, %xmm2
  vpextrq $0, %xmm2, %rax
  vmovd %rax, %xmm1
  vpextrq $1, %xmm2, %rax
  vmovd %rax, %xmm2
  vpunpcklqdq %xmm1, %xmm2, %xmm1
  vpextrq $0, %xmm0, %rax
  vmovd %rax, %xmm2
  vpextrq $1, %xmm0, %rax
  vmovd %rax, %xmm0
  vpunpcklqdq %xmm2, %xmm0, %xmm0
  vinsertf128 $1, %xmm1, %ymm0, %ymm0
  ret

Now we get:
  vshufpd $1, %xmm0, %xmm0, %xmm0
  vextractf128  $1, %ymm1, %xmm1
  vshufpd $1, %xmm1, %xmm1, %xmm1
  vinsertf128 $1, %xmm1, %ymm0, %ymm0

llvm-svn: 137733
2011-08-16 18:21:54 +00:00
Bruno Cardoso Lopes c1676e41c0 While I'm here, remove the "_alt" hacks to a series of INSERT_SUBREG and
also add the AVX versions of the 128-bit patterns

llvm-svn: 137685
2011-08-15 23:36:51 +00:00
Bruno Cardoso Lopes 67005029bc Reorder the declarations of vmovmskp* and also put the necessary AVX
predicate and TB encoding fields in place. This fixes the encoding for the
attached testcase. This fixes PR10625.

llvm-svn: 137684
2011-08-15 23:36:45 +00:00
Jim Grosbach 120a96a721 MCTargetAsmParser target match predicate support.
Allow a target assembly parser to do context sensitive constraint checking
on a potential instruction match. This will be used, for example, to handle
Thumb2 IT block parsing.

llvm-svn: 137675
2011-08-15 23:03:29 +00:00
Bruno Cardoso Lopes cbe7feeab9 Fix PR10656. It's only profitable to use 128-bit inserts and extracts
when AVX mode is on. Otherwise it's just more work for the type
legalizer.

llvm-svn: 137661
2011-08-15 21:45:54 +00:00
Bruno Cardoso Lopes c53dd2ac01 Fix comment!
llvm-svn: 137521
2011-08-12 21:54:42 +00:00
Bruno Cardoso Lopes f15dfe5818 VPERM2F128 is an AVX instruction which permutes between two 256-bit
vectors. It operates on 128-bit elements instead of regular scalar
types. Recognize shuffles that are suitable for VPERM2F128 and teach
the x86 legalizer how to handle them.

llvm-svn: 137519
2011-08-12 21:48:26 +00:00
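A simplified C++ model of a lane-level permute of this kind (illustrative only; the zeroing bits are ignored and the exact immediate encoding is an assumption made for the sketch): each 128-bit half of the result is picked from one of the four 128-bit halves of the two inputs.

  #include <array>
  #include <cassert>

  using Lane = std::array<float, 4>;   // one 128-bit lane of floats
  using V8f  = std::array<Lane, 2>;    // a 256-bit vector as two lanes

  // Simplified lane permute: selector values 0..3 pick src1.lo, src1.hi,
  // src2.lo, src2.hi; one immediate field per destination lane.
  V8f permuteLanes(const V8f &Src1, const V8f &Src2, unsigned Imm) {
    std::array<Lane, 4> Pool = {Src1[0], Src1[1], Src2[0], Src2[1]};
    V8f R;
    R[0] = Pool[Imm & 0x3];         // low 128-bit lane of the result
    R[1] = Pool[(Imm >> 4) & 0x3];  // high 128-bit lane of the result
    return R;
  }

  int main() {
    V8f A = {Lane{0, 1, 2, 3}, Lane{4, 5, 6, 7}};
    V8f B = {Lane{8, 9, 10, 11}, Lane{12, 13, 14, 15}};
    // Select A's high lane for the low half and B's low lane for the high half.
    V8f R = permuteLanes(A, B, 0x21);
    assert(R[0][0] == 4 && R[1][0] == 8);
    return 0;
  }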
Bruno Cardoso Lopes 960c8f71aa Move code around and add comments
llvm-svn: 137518
2011-08-12 21:48:22 +00:00
Duncan Sands a41634e307 Silence a bunch (but not all) "variable written but not read" warnings
when building with assertions disabled.

llvm-svn: 137460
2011-08-12 14:54:45 +00:00
Andrew Trick 210bf8351d findDeadCallerSavedReg fix: Missing NULL terminator in register arrays.
Fix by Ivan Baev. Sorry I don't have a unit test, but the fix is obvious so I don't want to delay it.

llvm-svn: 137404
2011-08-12 00:49:19 +00:00
Bruno Cardoso Lopes 8fbf023c9b Add a dag combine to xform 256-bit shuffles into simple vector
inserts and extracts. This simple combine makes us generate only 1
instruction instead of 11 in the v8 case.

llvm-svn: 137362
2011-08-11 21:50:44 +00:00
Bruno Cardoso Lopes 043c820800 Fix PR10492 by teaching MOVHLPS and MOVLPS mask matching to be more strict.
llvm-svn: 137324
2011-08-11 18:59:13 +00:00
Nadav Rotem efdd183f52 Add a comment, per Bruno's CR.
llvm-svn: 137313
2011-08-11 17:05:47 +00:00
Nadav Rotem 1542d5a00a [AVX] If the data which is going to be saved is already in two XMM registers
(for example, after an integer operation), do not pack the registers into a YMM
before saving. It's better to save them as two XMM registers.

Before:
                vinsertf128         $1, %xmm3, %ymm0, %ymm3
                vinsertf128         $0, %xmm1, %ymm3, %ymm1
                vmovaps              %ymm1, 416(%rsp)

After:
                vmovaps              %xmm3, 416+16(%rsp)
                vmovaps              %xmm1, 416(%rsp)

llvm-svn: 137308
2011-08-11 16:41:21 +00:00
Bruno Cardoso Lopes dbd1352c80 Cleanup: Remove Int_ CVTSS2SI* forms
llvm-svn: 137297
2011-08-11 02:52:36 +00:00
Bruno Cardoso Lopes a2d8bb97b9 Splats for v8i32/v8f32 can be handled by VPERMILPSY. This was causing
infinite recursive calls in legalize. Fix PR10562

llvm-svn: 137296
2011-08-11 02:49:44 +00:00
Bruno Cardoso Lopes 572c9aaf53 Use the splat index to generate the desired shuffle. Otherwise we
would only get undefs, and the vector shuffle would become an undef,
generating wrong code.

llvm-svn: 137295
2011-08-11 02:49:41 +00:00
Eli Friedman 3ae39f8ad1 Fix X86TargetLowering::LowerExternalSymbol so that it actually works in non-trivial cases. This hasn't been an issue before because the function isn't normally called (but apparently is used to generate a tail-call to sin() on ELF x86-32 with PIC and SSE2).
Fixes PR9693.

llvm-svn: 137292
2011-08-11 01:48:05 +00:00
Nadav Rotem 410a11fe82 When performing a truncating store, it is sometimes possible to rearrange the
data in-register prior to saving it to memory.  By rearranging the data in the
register we avoid the need to store multiple scalars and can instead issue a
single regular store.

llvm-svn: 137238
2011-08-10 19:30:14 +00:00
Bruno Cardoso Lopes 3ff111c12d The following X86 pattern is incorrect:
def : Pat<(X86Movss VR128:$src1,
                   (bc_v4i32 (v2i64 (load addr:$src2)))),
          (MOVLPSrm VR128:$src1, addr:$src2)>;
This matches a MOVSS dag with a MOVLPS instruction. However, MOVSS will replace only the low 32 bits of the register, while the MOVLPS instruction will replace the low 64 bits. A testcase that illustrates the bug is added, and the existing one is updated. Patch by Tanya Lattner.

llvm-svn: 137227
2011-08-10 17:45:17 +00:00
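A scalar C++ model of the semantic difference the commit describes (illustrative only; the real instructions operate on XMM registers and, for MOVLPS, a memory operand): a movss-style merge replaces just the low 32-bit element, while a movlps-style merge replaces the low 64 bits, i.e. two 32-bit elements.

  #include <array>
  #include <cassert>

  using V4f = std::array<float, 4>;

  // movss-like merge: only element 0 comes from Src.
  V4f mergeLow32(V4f Dst, const V4f &Src) {
    Dst[0] = Src[0];
    return Dst;
  }

  // movlps-like merge: elements 0 and 1 come from Src.
  V4f mergeLow64(V4f Dst, const V4f &Src) {
    Dst[0] = Src[0];
    Dst[1] = Src[1];
    return Dst;
  }

  int main() {
    V4f Dst{0, 1, 2, 3}, Src{9, 8, 7, 6};
    V4f A = mergeLow32(Dst, Src);
    V4f B = mergeLow64(Dst, Src);
    assert(A[1] == 1 && B[1] == 8);  // the two merges differ in element 1
    return 0;
  }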
Bruno Cardoso Lopes 278ffd7d8e Fix a bug in vpermilps mask checking. Fix PR10560
llvm-svn: 137194
2011-08-10 01:54:17 +00:00
Bruno Cardoso Lopes 72323966c8 Add 256-bit support for v8i32, v4i64 and v4f64 ISD::SELECT. Fix PR10556
llvm-svn: 137179
2011-08-09 23:27:13 +00:00
Bruno Cardoso Lopes fc481959d2 Add v16i16 and v32i8 store patterns
llvm-svn: 137166
2011-08-09 22:39:53 +00:00
Bruno Cardoso Lopes 6963062a99 Use fp unpack instructions to unpack int types. Until we have AVX2, this
is the best we can do for these patterns. This fixes PR10554.

llvm-svn: 137161
2011-08-09 22:18:37 +00:00
Eli Friedman 4ef2426b87 Fix a couple ridiculous copy-paste errors. rdar://9914773 .
llvm-svn: 137160
2011-08-09 22:17:39 +00:00
Bruno Cardoso Lopes bed48dc8ff Reapply a more appropriate solution than in r137114. AVX supports
v4f64 = sitofp v4i32. This fixes PR10559.
Also add support for v4i32 = fptosi v4f64.

llvm-svn: 137128
2011-08-09 17:39:13 +00:00
Bruno Cardoso Lopes 24dd1d4a27 Revert r137114
llvm-svn: 137127
2011-08-09 17:39:01 +00:00
Bruno Cardoso Lopes ad3453cf2d Handle sitofp from v4i32 to v4f64. Fixes PR10559
llvm-svn: 137114
2011-08-09 05:48:01 +00:00
Bruno Cardoso Lopes 1155b1eafa Add support for avx vector fextend
llvm-svn: 137105
2011-08-09 03:04:29 +00:00
Bruno Cardoso Lopes 0d0964d099 Add AVX versions of 128-bit sitofp and fptosi
llvm-svn: 137104
2011-08-09 03:04:25 +00:00
Bruno Cardoso Lopes 2fc107365b Add two patterns to match special vmovss and vmovsd cases. Also fix
the patterns already there to be more strict regarding the predicate.
This fixes PR10558

llvm-svn: 137100
2011-08-09 01:43:09 +00:00
Bruno Cardoso Lopes af6a85484c Make LowerVSETCC aware of AVX types and add patterns to match them.
llvm-svn: 137090
2011-08-09 00:46:57 +00:00
Bruno Cardoso Lopes c96953c12a Add support for several vector shifts operations while in AVX mode. Fix PR10581
llvm-svn: 137067
2011-08-08 21:31:08 +00:00
Jakob Stoklund Olesen daa2cad723 Hoist hasLoadFromStackSlot and hasStoreToStackSlot.
These methods are target-independent since they simply scan the
memory operands.  They can live in TargetInstrInfoImpl.

llvm-svn: 137063
2011-08-08 20:53:24 +00:00
Jakob Stoklund Olesen 4f0ace5674 Don't clobber pending ST regs when FP regs are killed.
X86FloatingPoint keeps track of pending ST registers for an upcoming
inline asm instruction with fixed stack register constraints.  It does
this by remembering which FP register holds the value that should appear
at a fixed stack position for the inline asm.

When that FP register is killed before the inline asm, make sure to
duplicate it to a scratch register, so the ST register still has a live
FP reference.

This could happen when the same FP register was copied to two ST
registers, or when a spill instruction is inserted between the ST copy
and the inline asm.

This fixes PR10602.

llvm-svn: 137050
2011-08-08 17:15:43 +00:00
Chandler Carruth 2536b51aae Silence unused variable warnings in release builds.
llvm-svn: 136956
2011-08-05 01:08:21 +00:00
Jason W Kim 239370cb3f Fix http://llvm.org/bugs/show_bug.cgi?id=10583 - test for 1 and 2 byte fixups to be added
llvm-svn: 136954
2011-08-05 00:53:03 +00:00
Evan Cheng 19e3f80579 Fix an obvious typo. Patch by Ivan Krasin.
llvm-svn: 136899
2011-08-04 18:38:15 +00:00
Duncan Sands 00f39c1521 Add obviously missing "break". Noticed by Andrey Karpov with
the PVS-studio tool.

llvm-svn: 136878
2011-08-04 15:45:59 +00:00
Jason W Kim e4df09f7ba Fix http://llvm.org/bugs/show_bug.cgi?id=10568
Move the reloc size assert into AsmBackend - where it is more apropos.

llvm-svn: 136855
2011-08-04 00:38:45 +00:00
Bill Wendling e234f6ae0c Only access both operands of an INSERT_SUBVECTOR if it is an INSERT_SUBVECTOR.
Fixes PR10527.

llvm-svn: 136853
2011-08-04 00:32:58 +00:00
Benjamin Kramer 103e2ec2df Remove unused variables.
llvm-svn: 136803
2011-08-03 19:53:48 +00:00
Jakob Stoklund Olesen da618420ee Handle IMPLICIT_DEF instructions in X86FloatingPoint.
This fixes PR10575.

llvm-svn: 136787
2011-08-03 16:33:19 +00:00
Eli Friedman 04c5025cd5 Don't create a ridiculous EXTRACT_ELEMENT. PR10563.
The testcase looks extremely fragile, so I'm adding an assertion which should catch any cases like this.

llvm-svn: 136711
2011-08-02 18:38:35 +00:00
Bruno Cardoso Lopes 5ada908140 Make this kind of lowering be supported by 256-bit instructions:
shuffle (scalar_to_vector (load (ptr + 4))), undef, <0, 0, 0, 0>
To:
  shuffle (vload ptr), undef, <1, 1, 1, 1>
Fixes PR10494

llvm-svn: 136691
2011-08-02 16:06:18 +00:00
Nick Lewycky a530a4d925 Bail from FastISel when we encounter a volatile memset intrinsic. Patch by Ivan
Krasin!

llvm-svn: 136663
2011-08-02 00:40:16 +00:00
Bruno Cardoso Lopes a8e3673816 Add v4f64 -> v2f32 fp_round support. Also add a testcase to exercise
the legalizer. This commit together with the two previous ones fixes
PR10495.

llvm-svn: 136654
2011-08-01 21:54:09 +00:00
Bruno Cardoso Lopes 616fe60548 Teach PreprocessISelDAG to be aware of vector types and to not process them.
llvm-svn: 136653
2011-08-01 21:54:05 +00:00
Bruno Cardoso Lopes bd30a4b584 Lower CONCAT_VECTORS to use two VINSERTF128 instructions instead of
using a stack store.

llvm-svn: 136652
2011-08-01 21:54:02 +00:00
Bruno Cardoso Lopes 7513939ddd Since vectors with all ones can't be created with a 256-bit instruction,
avoid returning early for v8i32 types, which would only be valid for
vectors with all zeros. Also split the handling of zeros and ones into separate
checking logic since they are handled differently. This fixes PR10547

llvm-svn: 136642
2011-08-01 19:51:53 +00:00
Douglas Gregor d41f3a161f Update CMake target names for tablegen-generated data in the X86 and ARM targets. This should fix the CMake build with MSVC.
llvm-svn: 136621
2011-08-01 16:29:27 +00:00
Eli Friedman adec587d5c Misc optimizer+codegen work for 'cmpxchg' and 'atomicrmw'. They appear to be
working on x86 (at least for trivial testcases); other architectures will
need more work so that they actually emit the appropriate instructions for
orderings stricter than 'monotonic'. (As far as I can tell, the ARM, PPC,
Mips, and Alpha backends need such changes.)

llvm-svn: 136457
2011-07-29 03:05:32 +00:00