Commit Graph

1791 Commits

Author SHA1 Message Date
Richard Trieu d9917bef6c Fixed an assert from:
assert("not implemented for target shuffle node");

to:

  assert(0 && "not implemented for target shuffle node");

This causes a test failure in CodeGen/X86/palignr.ll which has
been marked as XFAIL for the time being.
Test failure filed at PR10901.

llvm-svn: 139454
2011-09-10 01:26:21 +00:00
Nadav Rotem de838daefd Implement vector-select support for AVX 256-bit vectors. Refactor the vblend implementation to have tablegen match the instruction by the node type.
llvm-svn: 139400
2011-09-09 20:29:17 +00:00
Nadav Rotem b5df62036b Fix the 80-column violations and remove the unsupported v8i16 type from the list of legal vselect types.
llvm-svn: 139324
2011-09-08 22:17:35 +00:00
Bruno Cardoso Lopes fb113a0051 Add AVX versions of blend vector operations and fix some issues noticed
in Nadav's r139285 and r139287 commits.

1) Rename vsel.ll to a more descriptive name
2) Change the order of BLEND operands to "Op1, Op2, Cond", this is
necessary because PBLENDVB is already used in different places with
this order, and it was being emitted in the wrong way for vselect
3) Add AVX patterns and tests for the same SSE41 instructions

llvm-svn: 139305
2011-09-08 18:05:08 +00:00
Nadav Rotem 2550ba2a27 Add X86-SSE4 codegen support for vector-select.
llvm-svn: 139285
2011-09-08 08:11:19 +00:00
Duncan Sands f2641e1bc1 Add codegen support for vector select (in the IR this means a select
with a vector condition); such selects become VSELECT codegen nodes.
This patch also removes VSETCC codegen nodes, unifying them with SETCC
nodes (codegen was actually often using SETCC for vector SETCC already).
This ensures that various DAG combiner optimizations kick in for vector
comparisons.  Passes dragonegg bootstrap with no testsuite regressions
(nightly testsuite as well as "make check-all").  Patch mostly by
Nadav Rotem.

llvm-svn: 139159
2011-09-06 19:07:46 +00:00
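For illustration, a minimal sketch in LLVM IR of the construct this enables (types and names are arbitrary): the comparison is now a plain vector SETCC, and the select becomes a VSELECT node during codegen.

  %mask = icmp slt <4 x i32> %x, %y                          ; vector compare
  %res  = select <4 x i1> %mask, <4 x i32> %a, <4 x i32> %b  ; becomes VSELECT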
Rafael Espindola db5823dc77 Fix style issues and typos found by Duncan.
llvm-svn: 139154
2011-09-06 18:43:08 +00:00
Duncan Sands a098436b32 Split the init.trampoline intrinsic, which currently combines GCC's
init.trampoline and adjust.trampoline intrinsics, into two intrinsics
like in GCC.  While having one combined intrinsic is tempting, it is
not natural because typically the trampoline initialization needs to
be done in one function, and the result of adjust trampoline is needed
in a different (nested) function.  To get around this llvm-gcc hacks the
nested function lowering code to insert an additional parent variable
holding the adjust.trampoline result that can be accessed from the child
function.  Dragonegg doesn't have the luxury of tweaking GCC code, so it
stored the result of adjust.trampoline in the memory GCC set aside for
the trampoline itself (this is always available in the child function),
and set up some new memory (using an alloca) to hold the trampoline.
Unfortunately this breaks Go which allocates trampoline memory on the
heap and wants to use it even after the parent has exited (!).  Rather
than doing even more hacks to get Go working, it seemed best to just use
two intrinsics like in GCC.  Patch mostly by Sanjoy Das.

llvm-svn: 139140
2011-09-06 13:37:06 +00:00
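A minimal sketch of the split interface described above, in 2011-era LLVM IR (typed pointers; the %tramp buffer and function names are made up):

  declare void @llvm.init.trampoline(i8*, i8*, i8*)
  declare i8* @llvm.adjust.trampoline(i8*)

  ; in the parent function: fill in the trampoline memory
  call void @llvm.init.trampoline(i8* %tramp, i8* %nested_fn, i8* %nest_val)
  ; possibly in the nested function: recover the callable address
  %fnptr = call i8* @llvm.adjust.trampoline(i8* %tramp)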
Jakob Stoklund Olesen d0c8a31c8b Use existing function.
llvm-svn: 139055
2011-09-02 23:52:49 +00:00
Jakob Stoklund Olesen 38019e3188 Remove unused variables.
llvm-svn: 139047
2011-09-02 22:41:25 +00:00
Bruno Cardoso Lopes f61d1c072e Fix vbroadcast matching logic to bail out early if the node has
more than one use. Fix PR10825.

llvm-svn: 138951
2011-09-01 18:15:06 +00:00
Eric Christopher 72d1d5e193 Rework this conditional a bit.
Patch by Sanjoy Das

llvm-svn: 138853
2011-08-31 04:17:21 +00:00
Bruno Cardoso Lopes 9fc6b8be03 - Move all MOVSS and MOVSD patterns close to their definitions
- Duplicate some store patterns to their AVX forms!
- Caught a bug while restricting the patterns to a subtarget; fix it
  and update a testcase to check it properly

llvm-svn: 138851
2011-08-31 03:04:20 +00:00
Bruno Cardoso Lopes db520db514 Teach more places to use VMOVAPS,VMOVUPS instead of MOVAPS,MOVUPS,
whenever AVX is enabled.

llvm-svn: 138849
2011-08-31 03:04:09 +00:00
Evan Cheng cb1e5bae4c Fix (movhps load) lowering / pattern to match more cases. rdar://10050549
llvm-svn: 138848
2011-08-31 02:05:24 +00:00
Rafael Espindola 94d3253626 Adds support for variable-sized allocas. For a variable-sized alloca,
code is inserted to first check if the current stacklet has enough
space. If so, space is allocated by simply decrementing the stack
pointer. Otherwise a runtime routine (__morestack_allocate_stack_space
in libgcc) is called which allocates the required memory from the
heap.

Patch by Sanjoy Das.

llvm-svn: 138818
2011-08-30 19:47:04 +00:00
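For illustration, the kind of alloca this covers (names arbitrary); with segmented stacks, the emitted code checks the current stacklet first and calls the runtime routine only when it is too small:

  %mem = alloca i8, i32 %n    ; size %n is only known at run time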
Rafael Espindola 3353017668 Adds a SelectionDAG node X86SegAlloca which will be custom lowered
from DYNAMIC_STACKALLOC.

Two new pseudo instructions (SEG_ALLOCA_32 and SEG_ALLOCA_64) which
will match X86SegAlloca (based on word size) are also added.  They
will be custom emitted to inject the actual stack handling code.

Patch by Sanjoy Das.

llvm-svn: 138814
2011-08-30 19:43:21 +00:00
Rafael Espindola c21742112b Emit segmented-stack specific code into function prologues for
X86. Modify the pass added in the previous patch to call this new
code.

The new prologue generated will call a libgcc routine (__morestack)
to allocate more stack space from the heap when required.

Patch by Sanjoy Das.

llvm-svn: 138812
2011-08-30 19:39:58 +00:00
Eli Friedman 850b9a9a84 Explicitly zero out parts of a vector which are required to be zero by the algorithm in LowerUINT_TO_FP_i32. This only has a substantial effect on the generated code when the input is extracted from a vector register; other ways of loading an i32 do the appropriate zeroing implicitly. Fixes PR10802.
llvm-svn: 138768
2011-08-29 21:15:46 +00:00
Benjamin Kramer 61a1ff543c Silence GCC warnings and make an array const.
llvm-svn: 138706
2011-08-27 17:36:14 +00:00
Eli Friedman 5e5704277f Add support for generating CMPXCHG16B on x86-64 for the cmpxchg IR instruction.
llvm-svn: 138660
2011-08-26 21:21:21 +00:00
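As a sketch, the IR this lowers (2011-era cmpxchg syntax, which takes a single ordering and returns the loaded value); a 16-byte compare-and-swap of this shape can now select to CMPXCHG16B on x86-64:

  %old = cmpxchg i128* %ptr, i128 %expected, i128 %new seq_cst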
Bruno Cardoso Lopes 8347b86293 Add support for AVX 256-bit version of MOVDDUP!
llvm-svn: 138588
2011-08-25 21:40:37 +00:00
Bruno Cardoso Lopes 388eacee2c Make isMOVDDUP mask check more strict and update comments!
llvm-svn: 138587
2011-08-25 21:40:34 +00:00
Bruno Cardoso Lopes 296256fb32 Add support for 256-bit versions of VSHUFPD and VSHUFPS.
llvm-svn: 138546
2011-08-25 02:58:26 +00:00
Bruno Cardoso Lopes 2953d7b320 Move all SHUFP* patterns close to the SHUFP* definitions. Also be
explicit about which subtarget they refer to, and add AVX versions of
the ones we currently lack. Make the mask check stricter, to make clear
it won't be used to match 256-bit versions!

llvm-svn: 138514
2011-08-24 23:17:55 +00:00
Eli Friedman 9c73a57b20 Hook up 64-bit atomic load/store on x86-32. I plan to write more efficient implementations eventually.
llvm-svn: 138505
2011-08-24 22:33:28 +00:00
Eli Friedman 38cd821dc4 Fix whitespace.
llvm-svn: 138487
2011-08-24 21:17:30 +00:00
Eli Friedman 342e8df0e0 Basic x86 code generation for atomic load and store instructions.
llvm-svn: 138478
2011-08-24 20:50:09 +00:00
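For illustration, the IR forms now handled (2011-era syntax, names arbitrary):

  %v = load atomic i64* %p seq_cst, align 8
  store atomic i64 %v, i64* %q seq_cst, align 8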
Craig Topper de92622aa5 Break 256-bit vector int add/sub/mul into two 128-bit operations to avoid costly scalarization. Fixes PR10711.
llvm-svn: 138427
2011-08-24 06:14:18 +00:00
Bruno Cardoso Lopes 9e9f2ce32d Fix a nasty bug where a v4i64 was being wrongly emitted with 32-bit
permutations. Also tidy up some patterns and move them close to their
instruction definitions!

llvm-svn: 138392
2011-08-23 22:06:37 +00:00
Nick Lewycky 4c8ff77f1b Allow PerformSubCombine to work on integers larger than i128. Fixes a crasher.
llvm-svn: 138354
2011-08-23 19:01:24 +00:00
Craig Topper 6612e35b0d Add support for breaking 256-bit v16i16 and v32i8 VSETCC into two 128-bit ones, avoiding scalarization. Add VEX forms of pcmpeqq and pcmpgtq. Fixes more cases of PR10712.
llvm-svn: 138321
2011-08-23 04:36:33 +00:00
Bruno Cardoso Lopes 74f090d44c Add support for breaking 256-bit int VSETCC into two 128-bit ones,
avoiding scalarization of the compare. Reduces code from 59 to 6
instructions. Fix PR10712.

llvm-svn: 138271
2011-08-22 20:31:04 +00:00
Bruno Cardoso Lopes 1a87fcb9ba Fix PR10688. Add support for splitting 256-bit vector shifts when the
shift amount is variable.

llvm-svn: 137885
2011-08-17 22:12:20 +00:00
Bruno Cardoso Lopes be5e987379 Introduce matching patterns for the vbroadcast AVX instruction. The idea is to
match splats in the form (splat (scalar_to_vector (load ...))) whenever
the load can be folded. All the logic and instruction emission is
working, but because of PR8156 there is no way to match the loads, since
they can never be folded for splats. Thus, the tests are XFAILed, but
I've tested and exercised all the logic using a relaxed version of the
foldable-load check, as if the bug were already fixed. This
should work out of the box once PR8156 gets fixed, since MayFoldLoad will
work as expected.

llvm-svn: 137810
2011-08-17 02:29:19 +00:00
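The splat-of-a-loaded-scalar shape being matched, written out in IR for clarity (types and names are illustrative):

  %s  = load float* %p
  %v0 = insertelement <8 x float> undef, float %s, i32 0
  ; i.e. (splat (scalar_to_vector (load ...)))
  %sp = shufflevector <8 x float> %v0, <8 x float> undef,
                      <8 x i32> zeroinitializer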
Bruno Cardoso Lopes 6d33c7f303 Update comments about vector splat handling in x86
llvm-svn: 137808
2011-08-17 02:29:13 +00:00
Bruno Cardoso Lopes ed786a346e Now that we have a canonical way to handle 256-bit splats:
vinsertf128 $1 + vpermilps $0, remove the old code that used to first
do the splat in a 128-bit vector and then insert it into a larger one.
This is better because the handling code gets simpler and also makes
more room for the upcoming vbroadcast!

llvm-svn: 137807
2011-08-17 02:29:10 +00:00
Bruno Cardoso Lopes 2e99f1b3aa Instead of always leaving the work to the generic legalizer when
there is no support for native 256-bit shuffles, be smarter in some
cases, for example, when you can extract specific 128-bit parts and use
regular 128-bit shuffles for them. Example:

For this shuffle:
  shufflevector <4 x i64> %a, <4 x i64> %b, <4 x i32>
                <i32 1, i32 0, i32 7, i32 6>

This was expanded to:
  vextractf128  $1, %ymm1, %xmm2
  vpextrq $0, %xmm2, %rax
  vmovd %rax, %xmm1
  vpextrq $1, %xmm2, %rax
  vmovd %rax, %xmm2
  vpunpcklqdq %xmm1, %xmm2, %xmm1
  vpextrq $0, %xmm0, %rax
  vmovd %rax, %xmm2
  vpextrq $1, %xmm0, %rax
  vmovd %rax, %xmm0
  vpunpcklqdq %xmm2, %xmm0, %xmm0
  vinsertf128 $1, %xmm1, %ymm0, %ymm0
  ret

Now we get:
  vshufpd $1, %xmm0, %xmm0, %xmm0
  vextractf128  $1, %ymm1, %xmm1
  vshufpd $1, %xmm1, %xmm1, %xmm1
  vinsertf128 $1, %xmm1, %ymm0, %ymm0

llvm-svn: 137733
2011-08-16 18:21:54 +00:00
Bruno Cardoso Lopes cbe7feeab9 Fix PR10656. It's only profitable to use 128-bit inserts and extracts
when AVX mode is on. Otherwise it's just more work for the type
legalizer.

llvm-svn: 137661
2011-08-15 21:45:54 +00:00
Bruno Cardoso Lopes c53dd2ac01 Fix comment!
llvm-svn: 137521
2011-08-12 21:54:42 +00:00
Bruno Cardoso Lopes f15dfe5818 VPERM2F128 is an AVX instruction that permutes between two 256-bit
vectors. It operates on 128-bit elements instead of regular scalar
types. Recognize shuffles that are suitable for VPERM2F128 and teach
the x86 legalizer how to handle them.

llvm-svn: 137519
2011-08-12 21:48:26 +00:00
Bruno Cardoso Lopes 8fbf023c9b Add a dag combine to xform 256-bit shuffles into simple vector
inserts and extracts. This simple combine makes us generate only 1
instruction instead of 11 in the v8 case.

llvm-svn: 137362
2011-08-11 21:50:44 +00:00
Bruno Cardoso Lopes 043c820800 Fix PR10492 by teaching MOVHLPS and MOVLPS mask matching to be more strict.
llvm-svn: 137324
2011-08-11 18:59:13 +00:00
Nadav Rotem efdd183f52 Add a comment, per Bruno's CR.
llvm-svn: 137313
2011-08-11 17:05:47 +00:00
Nadav Rotem 1542d5a00a [AVX] If the data which is going to be saved is already in two XMM registers
(for example, after an integer operation), do not pack the registers into a YMM
before saving. It's better to save them as two XMM registers.

Before:
                vinsertf128         $1, %xmm3, %ymm0, %ymm3
                vinsertf128         $0, %xmm1, %ymm3, %ymm1
                vmovaps              %ymm1, 416(%rsp)

After:
                vmovaps              %xmm3, 416+16(%rsp)
                vmovaps              %xmm1, 416(%rsp)

llvm-svn: 137308
2011-08-11 16:41:21 +00:00
Bruno Cardoso Lopes a2d8bb97b9 Splats for v8i32/v8f32 can be handled by VPERMILPSY. This was causing
infinite recursion in the legalizer. Fix PR10562.

llvm-svn: 137296
2011-08-11 02:49:44 +00:00
Bruno Cardoso Lopes 572c9aaf53 Use the splat index to generate the desired shuffle. Otherwise we
could get only undefs, and the vector shuffle would become an undef,
generating wrong code.

llvm-svn: 137295
2011-08-11 02:49:41 +00:00
Eli Friedman 3ae39f8ad1 Fix X86TargetLowering::LowerExternalSymbol so that it actually works in non-trivial cases. This hasn't been an issue before because the function isn't normally called (but apparently is used to generate a tail-call to sin() on ELF x86-32 with PIC and SSE2).
Fixes PR9693.

llvm-svn: 137292
2011-08-11 01:48:05 +00:00
Nadav Rotem 410a11fe82 When performing a truncating store, it is sometimes possible to rearrange the
data in-register prior to saving to memory.  When we reorder the data
in-register we avoid the need to save multiple scalars to memory, making a
single regular store instead.

llvm-svn: 137238
2011-08-10 19:30:14 +00:00
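A sketch of a truncating store this applies to (types arbitrary): after reordering the narrowed elements in-register, this becomes a single store rather than several scalar stores:

  %t = trunc <4 x i32> %v to <4 x i8>
  store <4 x i8> %t, <4 x i8>* %p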
Bruno Cardoso Lopes 278ffd7d8e Fix a bug in vpermilps mask checking. Fix PR10560
llvm-svn: 137194
2011-08-10 01:54:17 +00:00
Bruno Cardoso Lopes 72323966c8 Add 256-bit support for v8i32, v4i64 and v4f64 ISD::SELECT. Fix PR10556
llvm-svn: 137179
2011-08-09 23:27:13 +00:00
Bruno Cardoso Lopes 6963062a99 Use fp unpack instructions to unpack int types. Until we have AVX2, this
is the best we can do for these patterns. This fixes PR10554.

llvm-svn: 137161
2011-08-09 22:18:37 +00:00
Bruno Cardoso Lopes 24dd1d4a27 Revert r137114
llvm-svn: 137127
2011-08-09 17:39:01 +00:00
Bruno Cardoso Lopes ad3453cf2d Handle sitofp from v4i32 to v4f64. Fix PR10559
llvm-svn: 137114
2011-08-09 05:48:01 +00:00
Bruno Cardoso Lopes af6a85484c Make LowerVSETCC aware of AVX types and add patterns to match them.
llvm-svn: 137090
2011-08-09 00:46:57 +00:00
Bruno Cardoso Lopes c96953c12a Add support for several vector shifts operations while in AVX mode. Fix PR10581
llvm-svn: 137067
2011-08-08 21:31:08 +00:00
Evan Cheng 19e3f80579 Fix an obvious typo. Patch by Ivan Krasin.
llvm-svn: 136899
2011-08-04 18:38:15 +00:00
Bill Wendling e234f6ae0c Only access both operands of an INSERT_SUBVECTOR if it is an INSERT_SUBVECTOR.
Fixes PR10527.

llvm-svn: 136853
2011-08-04 00:32:58 +00:00
Benjamin Kramer 103e2ec2df Remove unused variables.
llvm-svn: 136803
2011-08-03 19:53:48 +00:00
Eli Friedman 04c5025cd5 Don't create a ridiculous EXTRACT_ELEMENT. PR10563.
The testcase looks extremely fragile, so I'm adding an assertion which should catch any cases like this.

llvm-svn: 136711
2011-08-02 18:38:35 +00:00
Bruno Cardoso Lopes 5ada908140 Make this kind of lowering be supported by 256-bit instructions:
  shuffle (scalar_to_vector (load (ptr + 4))), undef, <0, 0, 0, 0>
To:
  shuffle (vload ptr), undef, <1, 1, 1, 1>
Fix PR10494

llvm-svn: 136691
2011-08-02 16:06:18 +00:00
Bruno Cardoso Lopes a8e3673816 Add v4f64 -> v2f32 fp_round support. Also add a testcase to exercise
the legalizer. This commit together with the two previous ones fixes
PR10495.

llvm-svn: 136654
2011-08-01 21:54:09 +00:00
Bruno Cardoso Lopes 616fe60548 Teach PreprocessISelDAG to be aware of vector types and to not process them.
llvm-svn: 136653
2011-08-01 21:54:05 +00:00
Bruno Cardoso Lopes bd30a4b584 Lower CONCAT_VECTORS to use two VINSERTF128 instructions instead of
using a stack store.

llvm-svn: 136652
2011-08-01 21:54:02 +00:00
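For illustration, IR that produces a CONCAT_VECTORS node (values arbitrary), which can now lower to two VINSERTF128s instead of a stack store:

  %cat = shufflevector <4 x float> %a, <4 x float> %b,
                       <8 x i32> <i32 0, i32 1, i32 2, i32 3,
                                  i32 4, i32 5, i32 6, i32 7>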
Bruno Cardoso Lopes 7513939ddd Since vectors with all ones can't be created with a 256-bit instruction,
avoid returning early for v8i32 types, which would only be valid for
vectors with all zeros. Also split the handling of zeros and ones into
separate checking logic, since they are handled differently. This fixes PR10547.

llvm-svn: 136642
2011-08-01 19:51:53 +00:00
Eli Friedman adec587d5c Misc optimizer+codegen work for 'cmpxchg' and 'atomicrmw'. They appear to be
working on x86 (at least for trivial testcases); other architectures will
need more work so that they actually emit the appropriate instructions for
orderings stricter than 'monotonic'. (As far as I can tell, the ARM, PPC,
Mips, and Alpha backends need such changes.)

llvm-svn: 136457
2011-07-29 03:05:32 +00:00
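Minimal examples of the two instructions in question (2011-era syntax; operation, types, and orderings arbitrary):

  %prev = atomicrmw add i32* %counter, i32 1 seq_cst
  %old  = cmpxchg i32* %flag, i32 0, i32 1 acquire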
Bruno Cardoso Lopes 65ce5ea3ba Fix two tests that I broke with the previous commits. The mask elements
on the second half must be reindexed.

llvm-svn: 136454
2011-07-29 02:05:28 +00:00
Bruno Cardoso Lopes 81eb193f2e Match VPERMIL masks more strictly and update the target specific mask
generation to always catch the weird cases.

llvm-svn: 136453
2011-07-29 01:31:15 +00:00
Bruno Cardoso Lopes 795f558532 Add DecodeShuffle support for VPERMILPD variants
llvm-svn: 136452
2011-07-29 01:31:11 +00:00
Bruno Cardoso Lopes c00f6728bc Fix a bug while generating target specific VPERMIL masks: skip
undef mask elements. This fixes PR10529.

llvm-svn: 136450
2011-07-29 01:31:04 +00:00
Bruno Cardoso Lopes b9ba465de8 Enable usage of SSE4 extracts and inserts in their 128-bit AVX forms.
Also tidy up code a bit.

llvm-svn: 136449
2011-07-29 01:31:02 +00:00
Bruno Cardoso Lopes 6aee388423 Clean up PALIGNR handling and remove the old palign pattern fragment.
Also make PALIGNR masks not match 256-bit vectors, which aren't supported.
It's also a step toward solving PR10489.

llvm-svn: 136448
2011-07-29 01:30:59 +00:00
Bruno Cardoso Lopes 8c19a8b5d5 Invert the subvector insertion to be more likely to be taken as a COPY
llvm-svn: 136324
2011-07-28 01:26:53 +00:00
Bruno Cardoso Lopes 9e2a301216 Add SINT_TO_FP and FP_TO_SINT support for v8i32 types. Also move
a convert pattern close to the instruction definition.

llvm-svn: 136320
2011-07-28 01:26:39 +00:00
Eli Friedman 26a484852e Code generation for 'fence' instruction.
llvm-svn: 136283
2011-07-27 22:21:52 +00:00
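For reference, the instruction being code-generated is simply:

  fence seq_cst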
Jeffrey Yasskin 6381c0100b Explicitly cast narrowing conversions inside {}s that will become errors in
C++0x.

llvm-svn: 136211
2011-07-27 06:22:51 +00:00
Bruno Cardoso Lopes f9324f4f6b Move some code around to open opportunity for more shuffle matching
llvm-svn: 136201
2011-07-27 00:56:37 +00:00
Bruno Cardoso Lopes 27a30a7792 The vpermilps and vpermilpd instructions have different behaviour regarding
the usage of the shuffle bitmask. Both work in 128-bit lanes without
crossing, but in the former the mask of the high part is the same
as the one used by the low part, while in the latter both lanes have
independent masks. Handle this properly and add support for vpermilpd.

llvm-svn: 136200
2011-07-27 00:56:34 +00:00
Benjamin Kramer 124ac2b997 Add a neat little two's complement hack for x86.
On x86 we can't encode an immediate LHS of a sub directly. If the RHS comes from an XOR with a constant we can
fold the negation into the xor and add one to the immediate of the sub. Then we can turn the sub into an add,
which can be commuted and encoded efficiently.

This code is generated for __builtin_clz and friends.

llvm-svn: 136167
2011-07-26 22:42:13 +00:00
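The identity behind the combine, written as IR for clarity (the constants are arbitrary examples): since -(x ^ C2) == (x ^ ~C2) + 1, we have C1 - (x ^ C2) == (x ^ ~C2) + (C1 + 1).

Before:
  %t = xor i32 %x, 3
  %r = sub i32 5, %t     ; immediate LHS: not directly encodable
After:
  %t = xor i32 %x, -4    ; -4 == ~3
  %r = add i32 %t, 6     ; 6 == 5 + 1, and add commutes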
Bruno Cardoso Lopes f8fe47bd2b Recognize unpckh* masks and match 256-bit versions. The new versions are
different from the previous 128-bit ones because they work in lanes.
Update a few comments and add testcases.

llvm-svn: 136157
2011-07-26 22:03:40 +00:00
Eli Friedman 93dc04d5ca Prevent x86-specific DAGCombine from creating nodes with illegal type (which could not be selected). Fixes a minor isel issue that was breaking the testcase from r136130.
llvm-svn: 136148
2011-07-26 21:02:58 +00:00
Bruno Cardoso Lopes d77b383199 More movsldup/movshdup cleanup. Rewrite the mask matching function and add
support for 256-bit versions (but no instruction selection yet, coming next).

llvm-svn: 136050
2011-07-26 02:39:28 +00:00
Bruno Cardoso Lopes 5b268a4b82 More cleanup, subtarget info isn't used here.
llvm-svn: 136049
2011-07-26 02:39:25 +00:00
Bruno Cardoso Lopes 9212bf275d Codegen an all-ones vector better while using AVX: vpcmpeqd + vinsertf128.
This also fixes PR10452.

llvm-svn: 136004
2011-07-25 23:05:32 +00:00
Bruno Cardoso Lopes 123dff0f58 - Handle a special scalar_to_vector case: splats. Use a native 128-bit
shuffle before inserting into a 256-bit vector.
- Add AVX versions of movd/movq instructions.
- Introduce a few COPY patterns to match insert_subvector instructions.
This turns a trivial insert_subvector instruction into a register copy,
coalescing the xmm into a ymm and avoiding the emission of one more instruction.

llvm-svn: 136002
2011-07-25 23:05:25 +00:00
Bruno Cardoso Lopes 276eb8debf Reintroduce r135730; this is indeed the right approach, as there is no
native 256-bit vector instruction to do scalar_to_vector.

llvm-svn: 136001
2011-07-25 23:05:16 +00:00
Eli Friedman ea8c66fea5 Get rid of an incorrect optimization for shuffles with PALIGNR and simplify isPALIGNRMask.
Addresses PR10466, although the crash from that PR only triggers in cases where DAGCombine misses optimizing a shuffle.

llvm-svn: 135980
2011-07-25 21:36:45 +00:00
Rafael Espindola 77242dd537 Turn shuffles into unpacks for VT == MVT::v2i64 and MVT::v2f64
too. Patch by Jeff Muizelaar.

llvm-svn: 135789
2011-07-22 18:56:05 +00:00
Dan Gohman c535278cf1 Fix x86's XALUO lowering to return its replacement values instead
of doing the RAUW calls for the overflow value itself. This makes
it more consistent with how the rest of LegalizeDAG works.

llvm-svn: 135788
2011-07-22 18:45:15 +00:00
Benjamin Kramer 959b7e9df7 GCC complains about the angle of this line.
Remove the escaped newline.

llvm-svn: 135739
2011-07-22 01:02:57 +00:00
Bruno Cardoso Lopes 1872173841 Remove the 128-bit special handling from SCALAR_TO_VECTOR. This isn't
the way to go. Doing this here would prevent several node matches later,
and would force looking all the way through several
VINSERTF128/VEXTRACTF128 chains to optimize simple things.

llvm-svn: 135730
2011-07-22 00:15:10 +00:00
Bruno Cardoso Lopes 612e56174b - Inspected an AVX code block added by someone in early Feb. This was never used
and was actually very wrong; fix it and make it simpler. Also remove the
ConcatVectors function, which is unused now.

- Fix an introduction of useless nodes in r126664 and r126264. The
VUNPCKL* nodes should never be introduced, because we don't want duplicate
nodes for 128-bit AVX and non-AVX modes; the actual instruction
difference only exists during isel, not for target-specific DAG
nodes. We only introduce V* target nodes when there is no 128-bit
version already there.

- Fix a fragile test and make it more useful.

llvm-svn: 135729
2011-07-22 00:15:07 +00:00
Bruno Cardoso Lopes 91eff5140f Add a DAGCombine for transforming 128->256 casts into a simple
vxorps + vinsertf128 pair of instructions

llvm-svn: 135727
2011-07-22 00:15:00 +00:00
Bruno Cardoso Lopes dbebd01269 Introduce a new function to lower 256-bit vectors which are not
directly supported and should be promoted and handled by smaller
shuffles.

llvm-svn: 135726
2011-07-22 00:14:56 +00:00
Bruno Cardoso Lopes 95d037721b Rename function to be more specific and be more strict about its usage
llvm-svn: 135725
2011-07-22 00:14:53 +00:00
Bruno Cardoso Lopes 178fb40612 - Register v16i16 as a valid VR256 register class type
- Add more bitcasts for v16i16
- Since r135661 and r135662 already added the splat logic,
just add one more splat test for v16i16

llvm-svn: 135663
2011-07-21 02:24:08 +00:00
Bruno Cardoso Lopes b878caa5e2 Add support for 256-bit versions of the VPERMIL instruction. This is a new
instruction introduced in AVX, which can operate on 128- and 256-bit vectors.
It considers a 256-bit vector as two independent 128-bit lanes. It can permute
any 32- or 64-bit elements inside a lane, and restricts the second lane to
have the same permutation as the first one. With the improved splat support
introduced earlier today, adding codegen for this instruction enables more
efficient 256-bit code:

Instead of:
  vextractf128  $0, %ymm0, %xmm0
  punpcklbw %xmm0, %xmm0
  punpckhbw %xmm0, %xmm0
  vinsertf128 $0, %xmm0, %ymm0, %ymm1
  vinsertf128 $1, %xmm0, %ymm1, %ymm0
  vextractf128  $1, %ymm0, %xmm1
  shufps  $1, %xmm1, %xmm1
  movss %xmm1, 28(%rsp)
  movss %xmm1, 24(%rsp)
  movss %xmm1, 20(%rsp)
  movss %xmm1, 16(%rsp)
  vextractf128  $0, %ymm0, %xmm0
  shufps  $1, %xmm0, %xmm0
  movss %xmm0, 12(%rsp)
  movss %xmm0, 8(%rsp)
  movss %xmm0, 4(%rsp)
  movss %xmm0, (%rsp)
  vmovaps (%rsp), %ymm0
We get:
  vextractf128  $0, %ymm0, %xmm0
  punpcklbw %xmm0, %xmm0
  punpckhbw %xmm0, %xmm0
  vinsertf128 $0, %xmm0, %ymm0, %ymm1
  vinsertf128 $1, %xmm0, %ymm1, %ymm0
  vpermilps $85, %ymm0, %ymm0

llvm-svn: 135662
2011-07-21 01:55:47 +00:00
Bruno Cardoso Lopes fb4920eb25 Improve splat promotion to handle AVX types: v32i8 and v16i16. Also
refactor the code and add a bunch of comments. The final shuffle
emitted by handling 256-bit types is suitable for the VPERM shuffle
instruction, which is going to be introduced in an upcoming commit (with
a testcase covering this one).

llvm-svn: 135661
2011-07-21 01:55:42 +00:00
Bruno Cardoso Lopes 0bdeacf03b Tidy up code
llvm-svn: 135656
2011-07-21 01:55:27 +00:00
Evan Cheng bbf3b0de8b Goodbye TargetAsmInfo. This eliminates the last bit of CodeGen and Target in llvm-mc.
There is still a bit more refactoring left to do in Targets. But we are now very
close to fixing all the layering issues in MC.

llvm-svn: 135611
2011-07-20 19:50:42 +00:00
Evan Cheng d60fa58ba1 Sink getDwarfRegNum, getLLVMRegNum, getSEHRegNum from TargetRegisterInfo down
to MCRegisterInfo. Also initialize the mapping at construction time.

This patch eliminates TargetRegisterInfo from TargetAsmInfo. It's another step
towards fixing the layering violation.

llvm-svn: 135424
2011-07-18 20:57:22 +00:00
Chris Lattner 229907cd11 Land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Bruno Cardoso Lopes 8df9cfc279 Fix a couple of things:
1) Make non-legal 256-bit loads be promoted to v4i64. This lets us
canonicalize the loads and handle things the same way we used to
for 128-bit registers. Despite what one of the removed comments
explained, the load promotion would not mess with VPERM; it's only a
matter of doing the appropriate bitcasts when these instructions come
to be introduced. Also make LOAD v8i32 legal.

2) Doing 1) exposed two bugs:
- v4i64 was being promoted to itself for several opcodes (introduced
in r124447 by David Greene) causing endless recursion and the stack to
explode.
- there was no support for allOnes BUILD_VECTORs and ANDNP would fail to
match because it was generating early target constant pools during
lowering.

3) The testcases are already checked in; doing 1) exposed the
bugs in the current testcases.

4) Tidy up code to be more clear and explicit about AVX.

llvm-svn: 135313
2011-07-15 22:24:33 +00:00
Eric Christopher 92464be28c Check register-class matching instead of type-width matching
when determining the validity of a matching constraint. Allow i1
types access to the GR8 register class for x86.

Fixes PR10352 and rdar://9777108

llvm-svn: 135180
2011-07-14 20:13:52 +00:00
Nadav Rotem 771f29677f [VECTOR-SELECT]
During type legalization we often use the SIGN_EXTEND_INREG SDNode.
When this SDNode is legalized during the LegalizeVector phase, it is
scalarized because non-simple types are automatically marked to be expanded.
In this patch we add support for lowering SIGN_EXTEND_INREG manually.
This fixes CodeGen/X86/vec_sext.ll when running with the '-promote-elements'
flag.

llvm-svn: 135144
2011-07-14 11:11:14 +00:00
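A shape that typically reaches SIGN_EXTEND_INREG during type legalization, for illustration (types and names arbitrary):

  %a = load <4 x i8>* %p
  %s = sext <4 x i8> %a to <4 x i32>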
Bruno Cardoso Lopes 9613b64916 Make X86ISD::ANDNP more general and codegen 256-bit VANDNP. A more
general version of X86ISD::ANDNP also opened room for a little bit
of refactoring.

llvm-svn: 135088
2011-07-13 21:36:51 +00:00
Bruno Cardoso Lopes 7ba479d22f The name of the target-specific node PANDN is misleading. That is because
it's later selected to an ANDNPD/ANDNPS instruction instead of the PANDN
instruction. Rename it.

llvm-svn: 135087
2011-07-13 21:36:47 +00:00
Julien Lerouge 112fcc164a Add _allrem, _aullrem and _allmul to the runtime for MSVC.
http://llvm.org/bugs/show_bug.cgi?id=10305

llvm-svn: 134744
2011-07-08 21:40:25 +00:00
Cameron Zwarich f03fa189ca Add an intrinsic and codegen support for fused multiply-accumulate. The intent
is to use this for architectures that have a native FMA instruction.

llvm-svn: 134742
2011-07-08 21:39:21 +00:00
Nick Lewycky 9badf60203 Let the inline asm 'q' constraint match float, and on 64-bit double too.
Fixes PR9602!

llvm-svn: 134665
2011-07-08 00:19:27 +00:00
Eric Christopher 7a2a0f80de Go ahead and emit the barrier on x86-64 even without SSE2. The
processor supports it just fine.

Fixes PR9675 and rdar://9740801

llvm-svn: 134664
2011-07-08 00:04:56 +00:00
Eric Christopher 9721396dab Add support for the X86 'l' constraint.
Fixes PR10149 and rdar://9738585

llvm-svn: 134648
2011-07-07 22:29:07 +00:00
Eric Christopher 7e5f2350d3 Use getRegForInlineAsmConstraint instead of custom defining regclasses
via vectors.

Part of rdar://9643582

llvm-svn: 134079
2011-06-29 17:23:50 +00:00
Jakob Stoklund Olesen 7297e7e223 Clean up the handling of the x87 fp stack to make it more robust.
Drop the FpMov instructions, use plain COPY instead.

Drop the FpSET/GET instruction for accessing fixed stack positions.
Instead use normal COPY to/from ST registers around inline assembly, and
provide a single new FpPOP_RETVAL instruction that can access the return
value(s) from a call. This is still necessary since you cannot tell from
the CALL instruction alone if it returns anything on the FP stack. Teach
fast isel to use this.

This provides a much more robust way of handling fixed stack registers -
we can tolerate arbitrary FP stack instructions inserted around calls
and inline assembly. Live range splitting could sometimes break x87 code
by inserting spill code in unfortunate places.

As a bonus we handle floating point inline assembly correctly now.

llvm-svn: 134018
2011-06-28 18:32:28 +00:00
Chad Rosier 15db390f8f Replace dyn_cast<> with cast<> since the cast is already guarded by the necessary check.
llvm-svn: 133874
2011-06-25 18:51:28 +00:00
Chad Rosier bde13d3f76 Enable tail call optimization in the presence of a byval (x86-32 and x86-64).
<rdar://problem/9483883>

llvm-svn: 133858
2011-06-25 02:04:56 +00:00
Chad Rosier e553e75b15 Hoist a simple check above the more complex checking to avoid unnecessary
overhead.  No functional change intended.

llvm-svn: 133824
2011-06-24 21:15:36 +00:00
Evan Cheng 3a0c5e52ff Remove TargetOptions.h dependency from X86Subtarget.
llvm-svn: 133726
2011-06-23 17:54:54 +00:00
Benjamin Kramer 25e17b0f89 Remove unused but set variables.
llvm-svn: 133347
2011-06-18 11:09:41 +00:00
John McCall 4b7a8d68ae Add a new function attribute, nonlazybind, which inhibits lazy-loading
optimizations when emitting calls to the function;  instead those calls may
use faster relocations which require the function to be immediately resolved
upon loading the dynamic object featuring the call.  This is useful when it
is known that the function will be called frequently and pervasively and
therefore there is no merit in delaying binding of the function.

Currently only implemented for x86-64, where it turns into a call through
the global offset table.

Patch by Dan Gohman, who assures me that he's going to add LangRef documentation
for this once it's committed.

llvm-svn: 133080
2011-06-15 20:36:13 +00:00
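A minimal usage sketch (the function name is hypothetical); on x86-64 the call is then emitted through the global offset table rather than a lazy stub:

  declare void @hot_callee() nonlazybind

  define void @caller() {
    call void @hot_callee()
    ret void
  }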
Eric Christopher 0713a9d8fc Add a parameter to CCState so that it can access the MachineFunction.
No functional change.

Part of PR6965

llvm-svn: 132763
2011-06-08 23:55:35 +00:00
Stuart Hastings e0d3426e1a Follow-up to r132458: omit an unnecessary stack copy when the x87 input
is a load.  rdar://problem/6373334

llvm-svn: 132696
2011-06-06 23:15:58 +00:00
Stuart Hastings be605494ac Reapply 132424 with fixes. This fixes PR10068.
rdar://problem/5993888

llvm-svn: 132606
2011-06-03 23:53:54 +00:00
Eric Christopher de9399bf76 Have LowerOperandForConstraint handle multiple character constraints.
Part of rdar://9119939

llvm-svn: 132510
2011-06-02 23:16:42 +00:00
Rafael Espindola aa318ae495 Revert 132424 to fix PR10068.
llvm-svn: 132479
2011-06-02 19:57:47 +00:00
Stuart Hastings 8d530ad22a Omit unnecessary stack copy when x87 input is a load.
rdar://problem/6373334

llvm-svn: 132458
2011-06-02 15:57:11 +00:00
Stuart Hastings 7adc95f69e Recommit 132404 with fixes. rdar://problem/5993888
llvm-svn: 132424
2011-06-01 21:33:14 +00:00
Stuart Hastings aab130d995 Revert 132404 to appease a buildbot. rdar://problem/5993888
llvm-svn: 132419
2011-06-01 19:52:20 +00:00
Stuart Hastings 7b7c102f2c Add support for x86 CMPEQSS and friends. These instructions do a
floating-point comparison, generate a mask of 0s or 1s, and generally
DTRT with NaNs.  Only profitable when the user wants a materialized 0
or 1 at runtime.  rdar://problem/5993888

llvm-svn: 132404
2011-06-01 17:17:45 +00:00
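The profitable shape, sketched in IR (values arbitrary): a comparison whose result the user wants materialized as a 0 or 1 at runtime:

  %c = fcmp oeq float %a, %b
  %r = select i1 %c, float 1.0, float 0.0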
Stuart Hastings 9f20804216 FGETSIGN support for x86, using movmskps/pd. Will be enabled with a
patch to TargetLowering.cpp.  rdar://problem/5660695

llvm-svn: 132388
2011-06-01 04:39:42 +00:00
Stuart Hastings 493a12bf5e Reverting 132105: it broke some LLVM-GCC DejaGNU tests.
llvm-svn: 132108
2011-05-26 04:09:49 +00:00
Stuart Hastings 276f231c2f Correctly handle a one-word struct passed byval on x86_64.
rdar://problem/6920088

llvm-svn: 132105
2011-05-26 02:44:56 +00:00
Evan Cheng 88f9137fd7 - Teach SelectionDAG::isKnownNeverZero to return true for (or x, c) when c is
non-zero.
- Teach X86 cmov optimization to eliminate the cmov from ctlz, cttz extension
  when the source of X86ISD::BSR / X86ISD::BSF is proven to be non-zero.

rdar://9490949

llvm-svn: 131948
2011-05-24 01:48:22 +00:00
Chad Rosier 552f8c4819 Don't attempt to tail call optimize for Win64.
llvm-svn: 131709
2011-05-20 00:59:28 +00:00
Evan Cheng e8d2e9eb35 Revert r131664 and fix it in instcombine instead. rdar://9467055
llvm-svn: 131708
2011-05-20 00:54:37 +00:00
Eric Christopher 4014e5e208 Oddly people want to use the 'r' constraint for fp constants on x86.
Fixes rdar://9218925
Fixes PR9601

llvm-svn: 131682
2011-05-19 21:33:47 +00:00
Evan Cheng 2b9bd38678 crc32 with 64-bit output zeros upper 32-bits. rdar://9467055
llvm-svn: 131664
2011-05-19 18:57:12 +00:00
Chad Rosier f4e832b14e Enables vararg functions that pass all arguments via registers to be optimized into tail-calls when possible.
llvm-svn: 131560
2011-05-18 19:59:50 +00:00
Eli Friedman d000a2c26e Clean up the mess created by r131467+r131469.
llvm-svn: 131471
2011-05-17 18:02:22 +00:00
Stuart Hastings c65d8eda7b Revert 131467 due to buildbot complaint.
llvm-svn: 131469
2011-05-17 16:59:46 +00:00
Stuart Hastings 3cf5308890 Fix an obscure issue in X86_64 parameter passing: if a tiny byval is
passed as the fifth parameter, ensure it's passed correctly (in R9).
rdar://problem/6920088

llvm-svn: 131467
2011-05-17 16:45:55 +00:00
Nadav Rotem d8edb1d5cc Fix a bug in PerformEXTRACT_VECTOR_ELTCombine. The code created an ADD SDNode
whose two operands had different types, in cases where the index and the ptr
had different types.

llvm-svn: 131461
2011-05-17 08:31:57 +00:00
Eli Friedman d4a3609d30 Remove dead code. Fix associated test to use FileCheck.
llvm-svn: 131424
2011-05-16 21:28:22 +00:00
Nadav Rotem 8f971c27fb Add custom lowering of X86 vector SRA/SRL/SHL when the shift amount is a splat vector.
llvm-svn: 131179
2011-05-11 08:12:09 +00:00
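For illustration, a shift whose amount is a splat of a non-constant scalar (names arbitrary), the case this custom lowering handles:

  %a0  = insertelement <4 x i32> undef, i32 %s, i32 0
  %amt = shufflevector <4 x i32> %a0, <4 x i32> undef,
                       <4 x i32> zeroinitializer
  %r   = shl <4 x i32> %v, %amt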
Eli Friedman 2518f8376d Make the logic for determining function alignment more explicit. No functionality change.
llvm-svn: 131012
2011-05-06 20:34:06 +00:00
Daniel Dunbar cd01ed5bd6 ADT/Triple: Rename isOSX... methods to isMacOSX for consistency with the OS
triple component.

llvm-svn: 129838
2011-04-20 00:14:25 +00:00
Daniel Dunbar 100455a3c8 Target/X86: Eliminate uses of getDarwinVers().
llvm-svn: 129813
2011-04-19 21:04:12 +00:00
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Evan Cheng ee9d45dd55 Don't try to create zero-sized stack objects.
llvm-svn: 128586
2011-03-30 23:44:13 +00:00
Benjamin Kramer 8d2227373d Make helper static.
llvm-svn: 128338
2011-03-26 12:38:19 +00:00