Commit Graph

Chris Lattner f42d0aeda1 Turn altivec lvx/stvx intrinsics into loads and stores. This allows the
elimination of one load from this:

int AreSecondAndThirdElementsBothNegative( vector float *in ) {
#define QNaN 0x7FC00000
const vector unsigned int testData = (vector unsigned int)( QNaN, 0, 0, QNaN );
vector float test = vec_ld( 0, (float*) &testData );
return ! vec_any_ge( test, *in );
}

Now generating:

_AreSecondAndThirdElementsBothNegative:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI1_0)
        lis r5, ha16(LCPI1_0)
        addi r6, r1, -16
        lvx v0, r5, r4
        stvx v0, 0, r6
        lvx v1, 0, r3
        vcmpgefp. v0, v0, v1
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        xori r3, r3, 1
        cntlzw r3, r3
        srwi r3, r3, 5
        mtspr 256, r2
        blr

llvm-svn: 27352
2006-04-02 05:30:25 +00:00
Chris Lattner 80fdc1eb6b Remove done item
llvm-svn: 27351
2006-04-02 05:28:54 +00:00
Chris Lattner 42a5fca47e Implement promotion for EXTRACT_VECTOR_ELT, allowing v16i8 multiplies to work with PowerPC.
llvm-svn: 27349
2006-04-02 05:06:04 +00:00
Chris Lattner b80f114707 add a note
llvm-svn: 27348
2006-04-02 03:59:11 +00:00
Chris Lattner 87f080949b Implement the Expand action for binary vector operations to break the binop
into elements and operate on each piece.  This allows generic vector integer
multiplies to work on PPC, though the generated code is horrible.

llvm-svn: 27347
2006-04-02 03:57:31 +00:00
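
The Expand action described in the commit above breaks a vector binop into scalar operations on each element. A minimal C sketch of the idea (the type and helper names are assumptions, not the legalizer's code):

/* Conceptual sketch of the Expand action for a vector binop: operate on
 * each element individually.  Names and the fixed width are assumptions. */
typedef struct { int e[4]; } v4i32;

static v4i32 expand_vector_mul(v4i32 a, v4i32 b) {
    v4i32 r;
    for (int i = 0; i < 4; ++i)
        r.e[i] = a.e[i] * b.e[i];   /* one scalar multiply per element */
    return r;
}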
Chris Lattner a9c59156be Intrinsics that just load from memory can be treated like loads: they don't
have to serialize against each other.  This allows us to schedule lvx's
across each other, for example.

llvm-svn: 27346
2006-04-02 03:41:14 +00:00
Chris Lattner 70ec96fa32 Adjust to change in Intrinsics.gen interface.
llvm-svn: 27344
2006-04-02 03:35:01 +00:00
Chris Lattner 0442a18758 Constant fold all of the vector binops. This allows us to compile this:
"vector unsigned char mergeLowHigh = (vector unsigned char)
( 8, 9, 10, 11, 16, 17, 18, 19, 12, 13, 14, 15, 20, 21, 22, 23 );
vector unsigned char mergeHighLow = vec_xor( mergeLowHigh, vec_splat_u8(8));"

aka:

void %test2(<16 x sbyte>* %P) {
  store <16 x sbyte> cast (<4 x int> xor (<4 x int> cast (<16 x ubyte> < ubyte 8, ubyte 9, ubyte 10, ubyte 11, ubyte 16, ubyte 17, ubyte 18, ubyte 19, ubyte 12, ubyte 13, ubyte 14, ubyte 15, ubyte 20, ubyte 21, ubyte 22, ubyte 23 > to <4 x int>), <4 x int> cast (<16 x sbyte> < sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8 > to <4 x int>)) to <16 x sbyte>), <16 x sbyte> * %P
  ret void
}

into this:

_test2:
        mfspr r2, 256
        oris r4, r2, 32768
        mtspr 256, r4
        li r4, lo16(LCPI2_0)
        lis r5, ha16(LCPI2_0)
        lvx v0, r5, r4
        stvx v0, 0, r3
        mtspr 256, r2
        blr

instead of this:

_test2:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI2_0)
        lis r5, ha16(LCPI2_0)
        vspltisb v0, 8
        lvx v1, r5, r4
        vxor v0, v1, v0
        stvx v0, 0, r3
        mtspr 256, r2
        blr

... which occurs here:
http://developer.apple.com/hardware/ve/calcspeed.html

llvm-svn: 27343
2006-04-02 03:25:57 +00:00
Chris Lattner ef598059f2 Add a new -view-legalize-dags command line option
llvm-svn: 27342
2006-04-02 03:07:27 +00:00
Chris Lattner e4e64b6b85 Implement constant folding of bit_convert of arbitrary constant vbuild_vector nodes.
llvm-svn: 27341
2006-04-02 02:53:43 +00:00
Chris Lattner 1c22728787 These entries already exist
llvm-svn: 27340
2006-04-02 02:51:27 +00:00
Chris Lattner 1985e1cbb8 Add some missing node names
llvm-svn: 27339
2006-04-02 02:41:18 +00:00
Chris Lattner 7a29cf3c7f New note
llvm-svn: 27337
2006-04-02 01:47:20 +00:00
Chris Lattner 6b3f475d23 Constant fold casts from things like <4 x int> -> <4 x uint>, likewise int<->fp.
llvm-svn: 27336
2006-04-02 01:38:28 +00:00
Chris Lattner 9b2d6e7886 Custom lower all BUILD_VECTOR's so that we can compile vec_splat_u8(8) into
"vspltisb v0, 8" instead of a constant pool load.

llvm-svn: 27335
2006-04-02 00:43:36 +00:00
Chris Lattner bec582f4cd Prefer larger register classes over smaller ones when a register occurs in
multiple register classes.  This fixes PowerPC/2006-04-01-FloatDoubleExtend.ll

llvm-svn: 27334
2006-04-02 00:24:45 +00:00
Chris Lattner 1b2436a624 add valuemapper support for inline asm
llvm-svn: 27332
2006-04-01 23:17:11 +00:00
Chris Lattner dc72c17798 Implement vnot using VNOR instead of using 'vspltisb v0, -1' and vxor
llvm-svn: 27331
2006-04-01 22:41:47 +00:00
Chris Lattner 6cf4914fd4 Fix InstCombine/2006-04-01-InfLoop.ll
llvm-svn: 27330
2006-04-01 22:05:01 +00:00
Chris Lattner dcd0792622 Fold A^(B&A) -> (B&A)^A
Fold (B&A)^A == ~B & A

This implements InstCombine/xor.ll:test2[56]

llvm-svn: 27328
2006-04-01 08:03:55 +00:00
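
The two folds above are plain Boolean identities. A small, purely illustrative C check (not the InstCombine code) that exercises them over all small bit patterns:

#include <assert.h>

/* Verify A^(B&A) == (B&A)^A == ~B & A for all 2-bit values of A and B.
 * Purely illustrative; InstCombine performs the same rewrite on IR. */
int main(void) {
    for (unsigned a = 0; a < 4; ++a)
        for (unsigned b = 0; b < 4; ++b) {
            unsigned lhs = a ^ (b & a);
            assert(lhs == ((b & a) ^ a));  /* xor is commutative */
            assert(lhs == (~b & a));       /* the second fold */
        }
    return 0;
}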
Chris Lattner 98e9604d5d Fix Transforms/IndVarsSimplify/2006-03-31-NegativeStride.ll and
PR726 by performing consistent signed division, not consistent unsigned
division when evaluating scev's.  Do not touch udivs.

llvm-svn: 27326
2006-04-01 04:48:52 +00:00
Chris Lattner 0baebb11bf Add a note
llvm-svn: 27324
2006-04-01 04:08:29 +00:00
Chris Lattner 8d1d8d364c If we can look through vector operations to find the scalar version of an
extract_element'd value, do so.

llvm-svn: 27323
2006-03-31 23:01:56 +00:00
Chris Lattner ff77dc0a08 Shrinkify some more intrinsic definitions.
llvm-svn: 27322
2006-03-31 22:41:56 +00:00
Evan Cheng dc1161cf53 An entry about packed type alignments.
llvm-svn: 27321
2006-03-31 22:35:14 +00:00
Chris Lattner 20d3f3726f Pull operand asm string into base class, shrinkifying intrinsic definitions.
No functionality change.

llvm-svn: 27320
2006-03-31 22:34:05 +00:00
Evan Cheng a11d834b8c TargetData.cpp::getTypeInfo() was returning alignment of element type as the
alignment of a packed type. This is obviously wrong. Added a workaround that
returns the size of the packed type as its alignment. The correct fix would
be to return a target dependent alignment value provided via TargetLowering
(or some other interface).

llvm-svn: 27319
2006-03-31 22:33:42 +00:00
Chris Lattner 39dcf1a9e2 Delete identity shuffles, implementing CodeGen/Generic/vector-identity-shuffle.ll
llvm-svn: 27317
2006-03-31 22:16:43 +00:00
Chris Lattner 110fc74b97 Fix 80 column violations :)
llvm-svn: 27315
2006-03-31 21:57:36 +00:00
Evan Cheng 5fd7c69473 Use an X86 target-specific node X86ISD::PINSRW instead of a malformed
INSERT_VECTOR_ELT to insert a 16-bit value in a 128-bit vector.

llvm-svn: 27314
2006-03-31 21:55:24 +00:00
Evan Cheng 747e29ef0b Added support for SSE3 horizontal ops: haddp{s|d} and hsub{s|d}.
llvm-svn: 27310
2006-03-31 21:29:33 +00:00
Chris Lattner a4150f751d fix a pasto
llvm-svn: 27308
2006-03-31 21:19:06 +00:00
Chris Lattner e7fd4b0274 Add vperm support for all datatypes
llvm-svn: 27307
2006-03-31 20:00:35 +00:00
Chris Lattner baa73e0d91 Rearrange code a bit
llvm-svn: 27306
2006-03-31 19:52:36 +00:00
Chris Lattner 754b41c84b Add, sub and shuffle are legal for all vector types
llvm-svn: 27305
2006-03-31 19:48:58 +00:00
Evan Cheng cbffa4656b Add support to use pextrw and pinsrw to extract and insert a word element
from a 128-bit vector.

llvm-svn: 27304
2006-03-31 19:22:53 +00:00
Evan Cheng 3296f297d5 Add vector_extract and vector_insert nodes.
llvm-svn: 27303
2006-03-31 19:21:16 +00:00
Chris Lattner 40ff17dc22 add a note
llvm-svn: 27302
2006-03-31 19:00:22 +00:00
Chris Lattner e52f29b243 constant fold extractelement with undef operands.
llvm-svn: 27301
2006-03-31 18:31:40 +00:00
Chris Lattner 92346c315e extractelement(undef,x) -> undef
llvm-svn: 27300
2006-03-31 18:25:14 +00:00
Chris Lattner d9e4daabd2 Do not endian swap split vector loads. This fixes UnitTests/Vector/sumarray-dbl on PPC.
Now all UnitTests/Vector/* tests pass on PPC.

llvm-svn: 27299
2006-03-31 18:22:37 +00:00
Chris Lattner 8d90f526d7 Do not endian swap the operands to a store if the operands came from a vector.
This fixes UnitTests/Vector/simple.c with altivec.

llvm-svn: 27298
2006-03-31 18:20:46 +00:00
Chris Lattner 7e30af3887 Remove dead *extloads. This allows us to codegen vector.ll:test_extract_elt
to:

test_extract_elt:
        alloc r3 = ar.pfs,0,1,0,0
        adds r8 = 12, r32
        ;;
        ldfs f8 = [r8]
        mov ar.pfs = r3
        br.ret.sptk.many rp

instead of:

test_extract_elt:
        alloc r3 = ar.pfs,0,1,0,0
        adds r8 = 28, r32
        adds r9 = 24, r32
        adds r10 = 20, r32
        adds r11 = 16, r32
        ;;
        ldfs f6 = [r8]
        ;;
        ldfs f6 = [r9]
        adds r8 = 12, r32
        adds r9 = 8, r32
        adds r14 = 4, r32
        ;;
        ldfs f6 = [r10]
        ;;
        ldfs f6 = [r11]
        ldfs f8 = [r8]
        ;;
        ldfs f6 = [r9]
        ;;
        ldfs f6 = [r14]
        ;;
        ldfs f6 = [r32]
        mov ar.pfs = r3
        br.ret.sptk.many rp

llvm-svn: 27297
2006-03-31 18:10:41 +00:00
Chris Lattner 2d8551c85b Delete dead loads in the dag. This allows us to compile
vector.ll:test_extract_elt2 into:

_test_extract_elt2:
        lfd f1, 32(r3)
        blr

instead of:

_test_extract_elt2:
        lfd f0, 56(r3)
        lfd f0, 48(r3)
        lfd f0, 40(r3)
        lfd f1, 32(r3)
        lfd f0, 24(r3)
        lfd f0, 16(r3)
        lfd f0, 8(r3)
        lfd f0, 0(r3)
        blr

llvm-svn: 27296
2006-03-31 18:06:18 +00:00
Chris Lattner 6f42325dca Implement PromoteOp for VEXTRACT_VECTOR_ELT. This fixes
Generic/vector.ll:test_extract_elt on non-SSE X86 systems.

llvm-svn: 27294
2006-03-31 17:55:51 +00:00
Chris Lattner 8e1fcab2bc Scalarized vector stores need not be legal, e.g. if the vector element type
needs to be promoted or expanded.  Relegalize the scalar store once created.
This fixes CodeGen/Generic/vector.ll:test1 on non-SSE x86 targets.

llvm-svn: 27293
2006-03-31 17:37:22 +00:00
Jeff Cohen e45355218f Fix build breakage.
llvm-svn: 27292
2006-03-31 07:22:05 +00:00
Chris Lattner 829a061abf note to self: *save* file, then check it in
llvm-svn: 27291
2006-03-31 06:04:53 +00:00
Chris Lattner d4058a59d4 Implement an item from the readme, folding vcmp/vcmp. instructions with
identical operands into a single instruction.  For example, for:

void test(vector float *x, vector float *y, int *P) {
  int v = vec_any_out(*x, *y);
  *x = (vector float)vec_cmpb(*x, *y);
  *P = v;
}

we now generate:

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        lvx v0, 0, r4
        lvx v1, 0, r3
        vcmpbfp. v0, v1, v0
        mfcr r4, 2
        stvx v0, 0, r3
        rlwinm r3, r4, 27, 31, 31
        xori r3, r3, 1
        stw r3, 0(r5)
        mtspr 256, r2
        blr

instead of:

_test:
        mfspr r2, 256
        oris r6, r2, 57344
        mtspr 256, r6
        lvx v0, 0, r4
        lvx v1, 0, r3
        vcmpbfp. v2, v1, v0
        mfcr r4, 2
***     vcmpbfp v0, v1, v0
        rlwinm r4, r4, 27, 31, 31
        stvx v0, 0, r3
        xori r3, r4, 1
        stw r3, 0(r5)
        mtspr 256, r2
        blr

Testcase here: CodeGen/PowerPC/vcmp-fold.ll

llvm-svn: 27290
2006-03-31 06:02:07 +00:00
Chris Lattner 070181c927 compactify some more instruction definitions
llvm-svn: 27288
2006-03-31 05:38:32 +00:00
Chris Lattner 45c709388a Compactify comparisons.
llvm-svn: 27287
2006-03-31 05:32:57 +00:00
Chris Lattner d7495ae7e9 Lower vector compares to VCMP nodes, just like we lower vector comparison
predicates to VCMPo nodes.

llvm-svn: 27285
2006-03-31 05:13:27 +00:00
Chris Lattner e5a6c4f8b7 These are done
llvm-svn: 27284
2006-03-31 04:53:21 +00:00
Chris Lattner b37dfd631c Add a new method to verify intrinsic function prototypes.
llvm-svn: 27282
2006-03-31 04:46:47 +00:00
Chris Lattner ba38035e21 Make sure to pass enough values to phi nodes when we are dealing with
decimated vectors.  This fixes UnitTests/Vector/sumarray-dbl.c

llvm-svn: 27280
2006-03-31 02:12:18 +00:00
Chris Lattner 5fe1f54c17 Significantly improve handling of vectors that are live across basic blocks,
handling cases where the vector elements need promotion, expansion, and when
the vector type itself needs to be decimated.

llvm-svn: 27278
2006-03-31 02:06:56 +00:00
Chris Lattner 051f7861b8 Was returning the wrong type.
llvm-svn: 27277
2006-03-31 01:50:09 +00:00
Chris Lattner bca5fbe914 Mark INSERT_VECTOR_ELT as expand
llvm-svn: 27276
2006-03-31 01:48:55 +00:00
Evan Cheng 1b0d294de0 Expand all INSERT_VECTOR_ELT (obviously bad) for now.
llvm-svn: 27275
2006-03-31 01:30:39 +00:00
Evan Cheng 168e45b0b3 Expand INSERT_VECTOR_ELT to store vec, sp; store elt, sp+k; vec = load sp;
llvm-svn: 27274
2006-03-31 01:27:51 +00:00
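
The expansion above goes through a stack temporary: spill the vector, overwrite element k, reload. A rough C equivalent of that store/store/load sequence (illustrative only; names are made up):

#include <string.h>

/* Expand "insert element k of a 4 x float vector" through memory:
 *   store vec, sp;  store elt, sp+k;  vec = load sp;
 * The struct is just a stand-in for a vector register. */
typedef struct { float e[4]; } v4f32;

static v4f32 insert_vector_elt(v4f32 vec, float elt, unsigned k) {
    float slot[4];                      /* the stack temporary ("sp") */
    memcpy(slot, vec.e, sizeof slot);   /* store vec, sp */
    slot[k] = elt;                      /* store elt, sp + k*sizeof(elt) */
    memcpy(vec.e, slot, sizeof slot);   /* vec = load sp */
    return vec;
}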
Chris Lattner f144dac7b7 Modify the TargetLowering::getPackedTypeBreakdown method to also return the
unpromoted element type.

llvm-svn: 27273
2006-03-31 00:46:36 +00:00
Evan Cheng d9d0bbb5ac Typo
llvm-svn: 27272
2006-03-31 00:33:57 +00:00
Evan Cheng 99d7205fba Ok for vector_shuffle mask to contain undef elements.
llvm-svn: 27271
2006-03-31 00:30:29 +00:00
Chris Lattner 549fb167eb Implement TargetLowering::getPackedTypeBreakdown
llvm-svn: 27270
2006-03-31 00:28:56 +00:00
Chris Lattner c4e3eadf21 Add the rest of the vmul instructions and the vmulsum* instructions.
llvm-svn: 27268
2006-03-30 23:39:06 +00:00
Chris Lattner a23158f1ca Use a new tblgen feature to significantly shrinkify instruction definitions that
directly correspond to intrinsics.

llvm-svn: 27266
2006-03-30 23:21:27 +00:00
Chris Lattner 551d3a11d3 Add a bunch of new instructions for intrinsics.
llvm-svn: 27265
2006-03-30 23:07:36 +00:00
Chris Lattner 612fa8e6f3 Fix Transforms/InstCombine/2006-03-30-ExtractElement.ll
llvm-svn: 27261
2006-03-30 22:02:40 +00:00
Evan Cheng 7e2ff11a42 Make sure all possible shuffles are matched.
Use pshufd, pshufhw, and pshuflw to shuffle v4f32 if shufps doesn't match.
Use shufps to shuffle v4f32 if pshufd, pshufhw, and pshuflw don't match.

llvm-svn: 27259
2006-03-30 19:54:57 +00:00
Evan Cheng dd487d865b More logical ops patterns
llvm-svn: 27257
2006-03-30 07:33:32 +00:00
Evan Cheng c58ef7deeb Add support for _mm_cmp{cc}_ss and _mm_cmp{cc}_ps intrinsics
llvm-svn: 27256
2006-03-30 06:21:22 +00:00
Evan Cheng 593310016d Add 128-bit pmovmskb intrinsic support.
llvm-svn: 27255
2006-03-30 00:33:26 +00:00
Evan Cheng c5cf9bba05 Change SSE pack operation definitions to fit what the intrinsics expect.
For example, packsswb actually creates a v16i8 from a pair of v8i16, but the
intrinsic specification forces the output type to match the operands.

llvm-svn: 27254
2006-03-29 23:53:14 +00:00
Evan Cheng b7fedffc78 - Added some SSE2 128-bit packed integer ops.
- Added SSE2 128-bit integer pack with signed saturation ops.
- Added pshufhw and pshuflw ops.

llvm-svn: 27252
2006-03-29 23:07:14 +00:00
Evan Cheng acc336475e Need to special case splat after all. Make the second operand of splat
vector_shuffle undef.

llvm-svn: 27250
2006-03-29 19:02:40 +00:00
Evan Cheng 3cf95747c7 Floating point logical operation patterns should match bit_convert; otherwise,
integer vector logical operations would match andp{s|d} instead of pand.

llvm-svn: 27248
2006-03-29 18:47:40 +00:00
Evan Cheng 500ec16578 - More shuffle related bug fixes.
- Whenever possible use ops of the right packed types for vector shuffles /
  splats.

llvm-svn: 27246
2006-03-29 03:04:49 +00:00
Evan Cheng 3a1c4e75de Another entry about shuffles.
llvm-svn: 27245
2006-03-29 03:03:46 +00:00
Evan Cheng da59b0d2a8 - Only use pshufd for v4i32 vector shuffles.
- Other shuffle related fixes.

llvm-svn: 27244
2006-03-29 01:30:51 +00:00
Chris Lattner 7d6f4f14b4 add a note
llvm-svn: 27243
2006-03-29 00:24:13 +00:00
Chris Lattner 67271869a8 Bug fixes: handle constantexpr insert/extract element operations
Handle constantpacked vectors with constantexpr elements.

This fixes CodeGen/Generic/vector-constantexpr.ll

llvm-svn: 27241
2006-03-29 00:11:43 +00:00
Evan Cheng 38b34296d0 Added aliases to scalar SSE instructions, e.g. addss, to match x86 intrinsics.
The source operand type is v4sf, with the upper bits passed through.
Added matching code for these.

llvm-svn: 27240
2006-03-28 23:51:43 +00:00
Evan Cheng 8160fd3d42 Fixing buggy code.
llvm-svn: 27239
2006-03-28 23:41:33 +00:00
Chris Lattner 20e619fba3 When building a VVECTOR_SHUFFLE node from extract_element operations, make
sure to build it as SHUFFLE(X, undef, mask), not SHUFFLE(X, X, mask).

The latter is not canonical form, and prevents the PPC splat pattern from
matching.  For a particular splat, we go from generating this:

	li r10, lo16(LCPI1_0)
	lis r11, ha16(LCPI1_0)
	lvx v3, r11, r10
	vperm v3, v2, v2, v3

to generating:

	vspltw v3, v2, 3

llvm-svn: 27236
2006-03-28 22:19:47 +00:00
Chris Lattner a46dfe80c8 Canonicalize VECTOR_SHUFFLE(X, X, Y) -> VECTOR_SHUFFLE(X,undef,Y')
llvm-svn: 27235
2006-03-28 22:11:53 +00:00
Chris Lattner c9992548fc Turn a series of extract_element's feeding a build_vector into a
vector_shuffle node.  For this:

void test(__m128 *res, __m128 *A, __m128 *B) {
  *res = _mm_unpacklo_ps(*A, *B);
}

we now produce this code:

_test:
        movl 8(%esp), %eax
        movaps (%eax), %xmm0
        movl 12(%esp), %eax
        unpcklps (%eax), %xmm0
        movl 4(%esp), %eax
        movaps %xmm0, (%eax)
        ret

instead of this:

_test:
        subl $76, %esp
        movl 88(%esp), %eax
        movaps (%eax), %xmm0
        movaps %xmm0, (%esp)
        movaps %xmm0, 32(%esp)
        movss 4(%esp), %xmm0
        movss 32(%esp), %xmm1
        unpcklps %xmm0, %xmm1
        movl 84(%esp), %eax
        movaps (%eax), %xmm0
        movaps %xmm0, 16(%esp)
        movaps %xmm0, 48(%esp)
        movss 20(%esp), %xmm0
        movss 48(%esp), %xmm2
        unpcklps %xmm0, %xmm2
        unpcklps %xmm1, %xmm2
        movl 80(%esp), %eax
        movaps %xmm2, (%eax)
        addl $76, %esp
        ret

GCC produces this (with -fomit-frame-pointer):

_test:
        subl    $12, %esp
        movl    20(%esp), %eax
        movaps  (%eax), %xmm0
        movl    24(%esp), %eax
        unpcklps        (%eax), %xmm0
        movl    16(%esp), %eax
        movaps  %xmm0, (%eax)
        addl    $12, %esp
        ret

llvm-svn: 27233
2006-03-28 20:28:38 +00:00
Chris Lattner f6f94d3bce Teach Legalize how to pack VVECTOR_SHUFFLE nodes into VECTOR_SHUFFLE nodes.
llvm-svn: 27232
2006-03-28 20:24:43 +00:00
Chris Lattner 8d57da2ffc new node
llvm-svn: 27231
2006-03-28 19:54:42 +00:00
Chris Lattner b7163598f9 Don't crash on X^X if X is a vector. Instead, produce a vector of zeros.
llvm-svn: 27229
2006-03-28 19:11:05 +00:00
Chris Lattner ffec47ebff Add an assertion
llvm-svn: 27228
2006-03-28 19:04:49 +00:00
Chris Lattner 66e1410858 add a note
llvm-svn: 27227
2006-03-28 18:56:23 +00:00
Jim Laskey dea0348853 Refactor address attributes. Add base register to frame info.
llvm-svn: 27226
2006-03-28 14:58:32 +00:00
Jim Laskey d1aa1638c6 Expose base register for DwarfWriter. Refactor code accordingly.
llvm-svn: 27225
2006-03-28 13:48:33 +00:00
Jim Laskey 67a636c587 More bulletproofing of llvm.dbg.declare.
llvm-svn: 27224
2006-03-28 13:45:20 +00:00
Jim Laskey 457e54efc1 Added missing paren on behalf of Ramana Radhakrishnan.
llvm-svn: 27223
2006-03-28 10:17:11 +00:00
Evan Cheng 21e5476deb Missed X86::isUNPCKHMask
llvm-svn: 27222
2006-03-28 08:27:15 +00:00
Evan Cheng be2d9a0e99 movlps and movlpd should be modeled as two address code.
llvm-svn: 27221
2006-03-28 07:01:28 +00:00
Evan Cheng dc57ae0711 Update
llvm-svn: 27220
2006-03-28 06:55:45 +00:00
Evan Cheng 4e7374ff8a Typo
llvm-svn: 27219
2006-03-28 06:53:49 +00:00
Evan Cheng 1a194a5264 * Prefer using operation of matching types. e.g unpcklpd rather than movlhps.
* Bug fixes.

llvm-svn: 27218
2006-03-28 06:50:32 +00:00
Nate Begeman af8c373e77 Fix a couple typos
llvm-svn: 27216
2006-03-28 04:18:18 +00:00
Nate Begeman 1b3928765d Add a few more altivec intrinsics
llvm-svn: 27215
2006-03-28 04:15:58 +00:00
Evan Cheng 08b473c619 Added a couple of entries about movhps and movlhps.
llvm-svn: 27212
2006-03-28 02:49:12 +00:00
Evan Cheng 3765fadef6 All unpack cases are now being handled.
llvm-svn: 27211
2006-03-28 02:44:05 +00:00
Evan Cheng 2bc3280659 - Clean up / consolidate various shuffle masks.
- Some misc. bug fixes.
- Use MOVHPDrm to load from m64 to the upper half of an XMM register.

llvm-svn: 27210
2006-03-28 02:43:26 +00:00
Chris Lattner 3710fca2b8 implement a bunch more intrinsics.
llvm-svn: 27209
2006-03-28 02:29:37 +00:00
Chris Lattner cb5ec07cc3 Use normal lvx for scalar_to_vector instead of lve*x. They do the exact
same thing and we have a dag node for the former.

llvm-svn: 27205
2006-03-28 01:43:22 +00:00
Jim Laskey 8374e9c4eb More bulletproofing of DebugInfoDesc verify.
llvm-svn: 27203
2006-03-28 01:30:18 +00:00
Chris Lattner e55d171ccd Tblgen doesn't like multiple SDNode<> definitions that map to the same enum value. Split them into separate enums.
llvm-svn: 27201
2006-03-28 00:40:33 +00:00
Evan Cheng 5df75889db Model unpack lower and interleave as vector_shuffle so we can lower the
intrinsics as such.

llvm-svn: 27200
2006-03-28 00:39:58 +00:00
Andrew Lenharth d7e612bbc4 If adding a link to a collapsed node, ignore the offset.
Fixes 2006-03-27-LinkedCollapsed.ll

llvm-svn: 27194
2006-03-27 23:39:58 +00:00
Jim Laskey d387cc5cde Reactivate llvm.dbg.declare.
llvm-svn: 27192
2006-03-27 23:31:10 +00:00
Chris Lattner 5bb1d90afd Disable dbg_declare, it currently breaks the CFE build
llvm-svn: 27182
2006-03-27 21:36:03 +00:00
Chris Lattner d5f94c9574 Fix legalization of intrinsics with chain and result values
llvm-svn: 27181
2006-03-27 20:28:29 +00:00
Jim Laskey fa53b276d0 Translate llvm target registers to dwarf register numbers properly.
llvm-svn: 27180
2006-03-27 20:18:45 +00:00
Chris Lattner 018e17c8de unbreak the build
llvm-svn: 27174
2006-03-27 16:52:45 +00:00
Chris Lattner 0e84f1e532 Unbreak the build on non-apple compilers :-(
llvm-svn: 27173
2006-03-27 16:10:59 +00:00
Evan Cheng d09b05b0bc Try again
llvm-svn: 27171
2006-03-27 08:10:26 +00:00
Chris Lattner 939c9ab88f Add a bunch of notes from my journey thus far.
llvm-svn: 27170
2006-03-27 07:41:00 +00:00
Evan Cheng 64efb35c32 Incorrect check for FP all-ones
llvm-svn: 27169
2006-03-27 07:26:17 +00:00
Chris Lattner 22ec3e7b7e Split out altivec notes into their own README
llvm-svn: 27168
2006-03-27 07:04:16 +00:00
Evan Cheng 9b9cc4fb39 Use pcmpeq to generate vector of all ones.
llvm-svn: 27167
2006-03-27 07:00:16 +00:00
Evan Cheng a74792fa9d Changed isBuildVectorAllOnesInteger to isBuildVectorAllOnes.
llvm-svn: 27166
2006-03-27 06:59:32 +00:00
Evan Cheng c70e33cd6e Change isBuildVectorAllOnesInteger to isBuildVectorAllOnes. Also check for
floating point cases.

llvm-svn: 27165
2006-03-27 06:58:47 +00:00
Chris Lattner 52fcad3a37 Instead of printing "INTRINSIC" on intrinsic node, print the intrinsic name.
llvm-svn: 27164
2006-03-27 06:45:25 +00:00
Chris Lattner 42e0ba09aa teach the inliner to work with packed constants
llvm-svn: 27161
2006-03-27 05:50:18 +00:00
Chris Lattner 1738c293b5 Fix the JIT encoding of VSEL
llvm-svn: 27160
2006-03-27 03:34:17 +00:00
Chris Lattner df59d5314c Fix the JIT encoding of VSPLTI*
llvm-svn: 27159
2006-03-27 03:28:57 +00:00
Jim Laskey 6e792f9ff8 Pass llvm/test/Regression/CodeGen/Generic/debug-info.ll.
llvm-svn: 27158
2006-03-27 01:51:47 +00:00
Nate Begeman ed728c1291 SelectionDAGISel can now natively handle Switch instructions, in the same
manner that the LowerSwitch LLVM to LLVM pass does: emitting a binary
search tree of basic blocks.  The new approach has several advantages:
it is faster, it generates significantly smaller code in many cases, and
it paves the way for implementing dense switch tables as a jump table by
handling switches directly in the instruction selector.

This functionality is currently only enabled on x86, but should be safe for
every target.  In anticipation of making it the default, the cfg is now
properly updated in the x86, ppc, and sparc select lowering code.

llvm-svn: 27156
2006-03-27 01:32:24 +00:00
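
The lowering above emits a binary search over the sorted case values instead of a linear chain of compares; a compact C sketch of the control flow it produces (the case values and results here are hypothetical):

/* Sketch of lowering "switch (x) { case 10: ... case 20: ... case 30: ... }"
 * as a binary search tree of basic blocks rather than a linear compare chain. */
static int lower_switch_example(int x) {
    if (x < 20) {                 /* pivot block: splits the sorted cases */
        if (x == 10) return 1;    /* leaf block for case 10 */
    } else {
        if (x == 20) return 2;    /* leaf block for case 20 */
        if (x == 30) return 3;    /* leaf block for case 30 */
    }
    return 0;                     /* default destination */
}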
Jim Laskey 7092888bcc Bulletproof against undefined args produced by upgrading old-style debug info.
llvm-svn: 27155
2006-03-26 22:46:27 +00:00
Jim Laskey 84c2f0a705 How to be dumb on $5/day. Need a tri-state to track valid debug descriptors.
llvm-svn: 27154
2006-03-26 22:45:20 +00:00
Chris Lattner 65473e20d8 add vsel
llvm-svn: 27153
2006-03-26 22:38:43 +00:00
Nate Begeman 68cc9d4540 Readme note
llvm-svn: 27152
2006-03-26 19:19:27 +00:00
Chris Lattner 6961fc76bb Codegen vector predicate compares.
llvm-svn: 27151
2006-03-26 10:06:40 +00:00
Evan Cheng ed6184aef2 Remove X86:isZeroVector, use ISD::isBuildVectorAllZeros instead; some fixes / cleanups
llvm-svn: 27150
2006-03-26 09:53:12 +00:00
Evan Cheng b1ddc988af Remove PPC:isZeroVector, use ISD::isBuildVectorAllZeros instead
llvm-svn: 27149
2006-03-26 09:52:32 +00:00
Evan Cheng 5562f2092f Add immAllZerosV helper
llvm-svn: 27148
2006-03-26 09:51:39 +00:00
Evan Cheng a67899195f Add ISD::isBuildVectorAllZeros predicate
llvm-svn: 27147
2006-03-26 09:50:58 +00:00
Chris Lattner 30ee72586d Allow targets to custom lower their own intrinsics if desired.
llvm-svn: 27146
2006-03-26 09:12:51 +00:00
Chris Lattner 563c7022a5 Update dependencies to reflect split of the Intrinsics.td file
llvm-svn: 27144
2006-03-26 07:45:48 +00:00
Chris Lattner 793cbcb4fd Add all of the altivec comparison instructions. Add patterns for the
non-predicate altivec compare intrinsics.

llvm-svn: 27143
2006-03-26 04:57:17 +00:00
Chris Lattner c6c88b2ea1 Add and 8/16-bit adds, add all integer subtracts, add saturating subtract
intrinsics.

llvm-svn: 27142
2006-03-26 02:39:02 +00:00
Chris Lattner 53e07decd7 implement the vsldoi intrinsic.
llvm-svn: 27139
2006-03-26 00:41:48 +00:00
Chris Lattner 5c0c762443 fix the pattern for vandc, it's NOT vnand
llvm-svn: 27136
2006-03-25 23:10:40 +00:00
Chris Lattner e8c1d04051 add patterns for VANDC/VNOR, implementing
CodeGen/PowerPC/eqv-andc-orc-nor.ll:VNOR/VANDC

llvm-svn: 27135
2006-03-25 23:05:29 +00:00
Chris Lattner b6e2d0027a Add some comments.
llvm-svn: 27133
2006-03-25 23:00:56 +00:00
Chris Lattner 3de9286e09 add a vnot helper node for matching 'not' on vectors
llvm-svn: 27132
2006-03-25 23:00:08 +00:00
Chris Lattner f6e3b957b8 Fix a bug in ISD::isBuildVectorAllOnesInteger that caused it to always return
false

llvm-svn: 27131
2006-03-25 22:59:28 +00:00
Chris Lattner c2d2811a07 Implement the ISD::isBuildVectorAllOnesInteger predicate
llvm-svn: 27130
2006-03-25 22:57:01 +00:00
Chris Lattner dc1eab5886 Don't call SimplifyDemandedBits on vectors
llvm-svn: 27128
2006-03-25 22:19:00 +00:00
Chris Lattner b3617beb52 Add some logical operations
llvm-svn: 27127
2006-03-25 22:16:05 +00:00
Chris Lattner d70d9f5b24 Don't crash on packed logical ops
llvm-svn: 27125
2006-03-25 21:58:26 +00:00
Chris Lattner e8e7ac465d Teach BinaryOperator::createNot to work with packed integer types
llvm-svn: 27124
2006-03-25 21:54:21 +00:00
Jim Laskey b434464d1c Cast instruction not inserted into basic block.
llvm-svn: 27122
2006-03-25 18:40:47 +00:00
Evan Cheng 3e4d38eea5 Added missing (any_extend (load ...)) patterns.
llvm-svn: 27120
2006-03-25 09:45:48 +00:00
Evan Cheng 2bc0941e2a Build arbitrary vector with more than 2 distinct scalar elements with a
series of unpack and interleave ops.

llvm-svn: 27119
2006-03-25 09:37:23 +00:00
Chris Lattner 1b4bb22f8a implement a bunch of intrinsics
llvm-svn: 27118
2006-03-25 08:01:02 +00:00
Chris Lattner 2a85fa1f79 Move all Altivec stuff out into a new PPCInstrAltivec.td file.
Add a bunch of patterns for different datatypes, e.g. bit_convert, undef and
zero vector support.

llvm-svn: 27117
2006-03-25 07:51:43 +00:00
Chris Lattner 1cb91b3cd9 Add some basic patterns for other datatypes
llvm-svn: 27116
2006-03-25 07:39:07 +00:00
Chris Lattner 3a66a75108 add all supported formats to the vector register file
llvm-svn: 27115
2006-03-25 07:36:56 +00:00
Chris Lattner f653cdd3f9 Add support for __builtin_altivec_vnmsubfp /vmaddfp
llvm-svn: 27112
2006-03-25 07:05:55 +00:00
Chris Lattner 5d70a7c4a5 #include Intrinsics.h into all dag isels
llvm-svn: 27109
2006-03-25 06:47:10 +00:00
Chris Lattner 71b8c980da Implement Intrinsic::getName
llvm-svn: 27108
2006-03-25 06:32:47 +00:00
Chris Lattner 2771e2c960 Codegen things like:
<int -1, int -1, int -1, int -1>
and
 <int 65537, int 65537, int 65537, int 65537>

Using things like:
  vspltisb v0, -1
and:
  vspltish v0, 1

instead of using constant pool loads.

This implements CodeGen/PowerPC/vec_splat.ll:splat_imm_i{32|16}.

llvm-svn: 27106
2006-03-25 06:12:06 +00:00
Evan Cheng 79e500ec74 Added SSE cacheability ops
llvm-svn: 27103
2006-03-25 06:03:26 +00:00
Evan Cheng 1aaa7280cd Instruction encoding bug
llvm-svn: 27102
2006-03-25 06:00:03 +00:00
Chris Lattner 9dc2d17ae6 Add new intrinsic node definitions for tblgen use
llvm-svn: 27100
2006-03-25 02:29:35 +00:00
Evan Cheng 6f7d31ea50 Added 128-bit packed integer subtraction.
llvm-svn: 27096
2006-03-25 01:33:37 +00:00
Evan Cheng 8e481df625 Added CVTTPS2PI.
llvm-svn: 27095
2006-03-25 01:31:59 +00:00
Evan Cheng 980c4d5b46 Added CVTSS2SI.
llvm-svn: 27094
2006-03-25 01:00:18 +00:00
Evan Cheng e7ee6a5e32 Support for scalar to vector with zero extension.
llvm-svn: 27091
2006-03-24 23:15:12 +00:00
Chris Lattner 313229c74b fix inverted conditional
llvm-svn: 27089
2006-03-24 22:49:42 +00:00
Jim Laskey bb84eae239 D'oh - should be even numbered.
llvm-svn: 27088
2006-03-24 22:48:02 +00:00
Evan Cheng 2f0277bf48 Added LDMXCSR
llvm-svn: 27087
2006-03-24 22:28:37 +00:00
Chris Lattner 97599f1211 plug the intrinsics into the patterns for movmsk*
llvm-svn: 27083
2006-03-24 21:49:18 +00:00
Jim Laskey f0729b4067 Add dwarf register numbering to register data.
llvm-svn: 27081
2006-03-24 21:15:58 +00:00
Jim Laskey 3b338d5566 Add support for dwarf register numbering.
llvm-svn: 27080
2006-03-24 21:13:21 +00:00
Jim Laskey 3324c7236f Hack no more.
llvm-svn: 27079
2006-03-24 21:10:36 +00:00
Chris Lattner 9f9b6116e1 add another note
llvm-svn: 27077
2006-03-24 20:04:27 +00:00
Chris Lattner 0affd76182 add a note
llvm-svn: 27076
2006-03-24 19:59:17 +00:00
Chris Lattner c6b13e21cc Shuffle some includes around
llvm-svn: 27073
2006-03-24 18:52:35 +00:00
Evan Cheng 68d9bf26c8 Only do vector shuffle for {x,x,y,y} cases when SCALAR_TO_VECTOR is free.
llvm-svn: 27071
2006-03-24 18:45:20 +00:00
Chris Lattner 58a9622957 expose intrinsic info to the targets.
llvm-svn: 27070
2006-03-24 18:44:11 +00:00
Chris Lattner d589dd1352 Fix a bad JIT encoding of VPERM. Why is VPERM D,A,B,C but vfmadd is D,A,C,B ??
llvm-svn: 27069
2006-03-24 18:24:43 +00:00
Chris Lattner f2286d5917 Like the comment says, prefer to use the implicit add done by [r+r] addressing
modes rather than emitting an explicit add and using a base of r0.  This implements
Regression/CodeGen/PowerPC/mem-rr-addr-mode.ll

llvm-svn: 27068
2006-03-24 17:58:06 +00:00
Jim Laskey dd3fa41f0f Fix indent.
llvm-svn: 27065
2006-03-24 10:08:23 +00:00
Jim Laskey 864e444749 Clean up some commentary.
llvm-svn: 27064
2006-03-24 10:00:56 +00:00
Jim Laskey 53f1ecc560 Rename for truth in advertising.
llvm-svn: 27063
2006-03-24 09:50:27 +00:00
Chris Lattner a90b7141ed Disable the i32->float G5 optimization. It is unsafe, as documented in the
comment.

This fixes 177.mesa, and McCat/09-vor with the td scheduler.

llvm-svn: 27060
2006-03-24 07:53:47 +00:00
Chris Lattner ab882abce8 add support for using vxor to build zero vectors. This implements
Regression/CodeGen/PowerPC/vec_zero.ll

llvm-svn: 27059
2006-03-24 07:48:08 +00:00
Evan Cheng 082c8785ef Handle BUILD_VECTOR with all zero elements.
llvm-svn: 27056
2006-03-24 07:29:27 +00:00
Chris Lattner 77e271cb4e prefer to generate constant pool loads over splats. This prevents us from
using a splat for {1.0,1.0,1.0,1.0}

llvm-svn: 27055
2006-03-24 07:29:17 +00:00
Chris Lattner 87b1dddb1c fix spello
llvm-svn: 27053
2006-03-24 07:15:07 +00:00
Chris Lattner f365f5f0c1 Fix spello
llvm-svn: 27052
2006-03-24 07:14:34 +00:00
Chris Lattner 5821a6a17a add the actual cost to the debug info
llvm-svn: 27051
2006-03-24 07:14:00 +00:00
Chris Lattner f5efddf80b Gabor points out that we can't spell. :)
llvm-svn: 27049
2006-03-24 07:12:19 +00:00
Evan Cheng a91d8a5b43 All v2f64 shuffle cases can be handled.
llvm-svn: 27044
2006-03-24 06:40:32 +00:00
Evan Cheng 2595a687da More efficient v2f64 shuffle using movlhps, movhlps, unpckhpd, and unpcklpd.
llvm-svn: 27040
2006-03-24 02:58:06 +00:00
Evan Cheng 6afb3c2de7 A new entry
llvm-svn: 27039
2006-03-24 02:57:03 +00:00
Jeff Cohen 0eafbc3593 Get JIT/Interpreter working on Windows again.
llvm-svn: 27037
2006-03-24 02:53:49 +00:00
Chris Lattner a4f6805a86 legalize vbit_convert nodes whose result is a legal type.
Legalize intrinsic nodes.

llvm-svn: 27036
2006-03-24 02:26:29 +00:00
Chris Lattner d96b09a7b9 Lower target intrinsics into an INTRINSIC node
llvm-svn: 27035
2006-03-24 02:22:33 +00:00
Reid Spencer f9c3dcfdc1 Ignore the burg output files.
llvm-svn: 27033
2006-03-24 02:21:35 +00:00
Chris Lattner 6b05290922 fix some bogus assertions: noop bitconverts are legal
llvm-svn: 27032
2006-03-24 02:20:47 +00:00
Evan Cheng d27fb3e85e Handle more shuffle cases with SHUFP* instructions.
llvm-svn: 27024
2006-03-24 01:18:28 +00:00
Evan Cheng 1d2e995fc1 Lower BUILD_VECTOR to VECTOR_SHUFFLE if there are two distinct nodes (and if
the target can handle it). Issue two SCALAR_TO_VECTOR ops followed by a
VECTOR_SHUFFLE to select from the two vectors.

llvm-svn: 27023
2006-03-24 01:17:21 +00:00
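
The lowering described above maps naturally onto the SSE intrinsics below: two scalar-to-vector moves followed by one shuffle that combines them. This is an assumed illustration, not the lowering code itself:

#include <xmmintrin.h>

/* Build a v4f32 from two distinct scalars a and b: two SCALAR_TO_VECTOR ops
 * (movss) plus one VECTOR_SHUFFLE (unpcklps).  The result is <a, b, 0, 0>;
 * the exact shuffle chosen depends on which lanes BUILD_VECTOR wanted. */
static __m128 build_from_two_scalars(float a, float b) {
    __m128 va = _mm_set_ss(a);        /* SCALAR_TO_VECTOR: <a, 0, 0, 0> */
    __m128 vb = _mm_set_ss(b);        /* SCALAR_TO_VECTOR: <b, 0, 0, 0> */
    return _mm_unpacklo_ps(va, vb);   /* shuffle selecting from both vectors */
}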
Chris Lattner ebac9a4adf Identify the INTRINSIC node
llvm-svn: 27020
2006-03-24 01:04:30 +00:00
Reid Spencer 78eaa10f1a Add new generated files.
llvm-svn: 27013
2006-03-23 23:48:12 +00:00
Evan Cheng 4b5b4e373b Typo
llvm-svn: 27008
2006-03-23 23:24:51 +00:00
Jim Laskey fb39d2a7f7 Unneeded forward.
llvm-svn: 27004
2006-03-23 23:05:52 +00:00
Jim Laskey f7cfa52e7a Make sure types are allocated in the scope of their use.
llvm-svn: 27002
2006-03-23 23:02:34 +00:00
Chris Lattner cbcfe46556 add a note
llvm-svn: 27000
2006-03-23 21:28:44 +00:00
Chris Lattner d7c4e7d255 add support for splitting casts. This implements
CodeGen/Generic/vector.ll:test_cast_2.

llvm-svn: 26999
2006-03-23 21:16:34 +00:00
Evan Cheng f842ea57bb Typo
llvm-svn: 26997
2006-03-23 20:26:04 +00:00
Jim Laskey b119990289 Add some more bulletproofing to auto upgrade of llvm.dbg intrinsics.
llvm-svn: 26996
2006-03-23 20:13:25 +00:00
Chris Lattner 81137629e0 Add PPC vector bit-convert support
llvm-svn: 26995
2006-03-23 19:54:27 +00:00
Jim Laskey 3c43609f1f Add support to locate local variables in frames (early version.)
llvm-svn: 26994
2006-03-23 18:12:57 +00:00
Jim Laskey 8f64426f5c Strip changes to llvm.dbg intrinsics.
llvm-svn: 26993
2006-03-23 18:11:33 +00:00
Jim Laskey 83f99115db Can't combine anymore - we don't have a chain through llvm.dbg intrinsics.
llvm-svn: 26992
2006-03-23 18:10:42 +00:00
Jim Laskey cf0166fbeb Change interface to DwarfWriter.
llvm-svn: 26991
2006-03-23 18:09:44 +00:00
Jim Laskey 267d39d128 Modify how CBE handles #lines.
llvm-svn: 26990
2006-03-23 18:08:29 +00:00
Jim Laskey 2b74656f25 Generate local variable and scope information and equivalent dwarf forms.
llvm-svn: 26989
2006-03-23 18:07:55 +00:00
Jim Laskey a8bdac875d Handle new forms of llvm.dbg intrinsics.
llvm-svn: 26988
2006-03-23 18:06:46 +00:00
Jim Laskey 0cf8ed61cc Simplify handling of llvm.dbg intrinsic operands to one spot.
llvm-svn: 26987
2006-03-23 18:05:12 +00:00
Jim Laskey 01bd749537 Change the argument types of llvm.dbg intrinsics.
llvm-svn: 26985
2006-03-23 18:03:20 +00:00
Chris Lattner ce0206e119 Fix the encodings of these new instructions, hopefully fixing the JIT
failures from last night

llvm-svn: 26981
2006-03-23 16:13:50 +00:00
Evan Cheng 82ed4a42f9 Following icc's lead: use movdqa to load / store 128-bit integer vectors
llvm-svn: 26980
2006-03-23 07:44:07 +00:00
Chris Lattner a011ba8a76 prune #includes
llvm-svn: 26975
2006-03-23 05:43:58 +00:00
Chris Lattner 6f95ab7abb Eliminate IntrinsicLowering from TargetMachine.
Make the CBE and V9 backends create their own, since they're the only ones that use it.

llvm-svn: 26974
2006-03-23 05:43:16 +00:00
Chris Lattner 9ea1b3f9fd simplify some code
llvm-svn: 26972
2006-03-23 05:29:04 +00:00
Chris Lattner 811dd8d009 remove always-null IntrinsicLowering argument.
llvm-svn: 26971
2006-03-23 05:28:02 +00:00
Chris Lattner 0b2de9f2d4 remove the intrinsiclowering hook
llvm-svn: 26970
2006-03-23 05:22:51 +00:00
Evan Cheng 7055878170 Add v4i32 <-> v4f32 bitconvert patterns.
llvm-svn: 26969
2006-03-23 02:36:37 +00:00
Evan Cheng b9b0550dc6 Add 128-bit integer vector load and add (for testing).
llvm-svn: 26967
2006-03-23 01:57:24 +00:00
Nate Begeman fb6e02931c Add support for 8 bit immediates with 16/32 bit cmp instructions
llvm-svn: 26966
2006-03-23 01:29:48 +00:00
Chris Lattner b893d04a67 Fix a typo
llvm-svn: 26965
2006-03-22 22:20:49 +00:00
Evan Cheng 021bb7c956 Added a ValueType operand to isShuffleMaskLegal(). For now, x86 will not do
64-bit vector shuffle.

llvm-svn: 26964
2006-03-22 22:07:06 +00:00
Chris Lattner 2f4119a608 Implement simple support for vector casting. This can currently only handle
casts between legal vector types.

llvm-svn: 26961
2006-03-22 20:09:35 +00:00
Evan Cheng ed794cd27b SHUFP* are two address code.
llvm-svn: 26959
2006-03-22 20:08:18 +00:00
Evan Cheng bc04722860 Some clean up.
llvm-svn: 26957
2006-03-22 19:22:18 +00:00
Evan Cheng d4e1557941 - Supposedly movlhps is faster / better than unpcklpd.
- Don't forget pshufd is only available with sse2.

llvm-svn: 26956
2006-03-22 19:16:21 +00:00
Evan Cheng 68ad48bd1a - Implement X86ISelLowering::isShuffleMaskLegal(). We currently only support
splat and PSHUFD cases.
- Clean up shuffle / splat matching code.

llvm-svn: 26954
2006-03-22 18:59:22 +00:00
Chris Lattner 7d80b4f366 silence a bogus gcc warning
llvm-svn: 26953
2006-03-22 17:27:24 +00:00
Evan Cheng 8fdbdf20cd - VECTOR_SHUFFLE of v4i32 / v4f32 with undef second vector always matches
PSHUFD. We can make permute entries which point to the undef point to
  anything we want.
- Change some names to appease Chris.

llvm-svn: 26951
2006-03-22 08:01:21 +00:00
Chris Lattner e24cf9dfa1 add a note
llvm-svn: 26950
2006-03-22 07:33:46 +00:00
Evan Cheng 3617caf526 Fix PSHUF* and SHUF* jit code emission problems
llvm-svn: 26949
2006-03-22 07:10:28 +00:00
Chris Lattner 2d52c1b8b9 Eliminate the dependency of ExecutionEngine on the JIT/Interpreter libraries.
Now you can build a tool with just the JIT or just the interpreter.

llvm-svn: 26946
2006-03-22 06:07:50 +00:00
Chris Lattner eccf46950c This has been implemented. Tweak it into another note
llvm-svn: 26944
2006-03-22 05:33:23 +00:00
Chris Lattner 4a66d69433 When possible, custom lower 32-bit SINT_TO_FP to this:
_foo2:
        extsw r2, r3
        std r2, -8(r1)
        lfd f0, -8(r1)
        fcfid f0, f0
        frsp f1, f0
        blr

instead of this:

_foo2:
        lis r2, ha16(LCPI2_0)
        lis r4, 17200
        xoris r3, r3, 32768
        stw r3, -4(r1)
        stw r4, -8(r1)
        lfs f0, lo16(LCPI2_0)(r2)
        lfd f1, -8(r1)
        fsub f0, f1, f0
        frsp f1, f0
        blr

This speeds up Misc/pi from 2.44s->2.09s with LLC and from 3.01->2.18s
with llcbeta (16.7% and 38.1% respectively).

llvm-svn: 26943
2006-03-22 05:30:33 +00:00
Chris Lattner 77373d1bea Add support for "ri" addressing modes where the immediate is a 14-bit field
which is shifted left two bits before use.  Instructions like STD use this
addressing mode.

llvm-svn: 26942
2006-03-22 05:26:03 +00:00
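
Concretely, the "ri" mode above encodes only the top 14 bits of a 16-bit displacement, so the displacement must be a multiple of 4. A small sketch of the address computation (illustrative; not the selector code):

#include <stdint.h>

/* Effective-address computation for the "ri" mode used by STD: the 14-bit
 * signed field (imm14, range -8192..8191) is shifted left two bits before
 * use, so only 4-byte-aligned displacements in [-32768, 32764] are encodable. */
static uint64_t ri_form_address(uint64_t base, int32_t imm14) {
    int32_t disp = imm14 * 4;        /* imm14 << 2, avoiding signed-shift UB */
    return base + (int64_t)disp;
}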
Chris Lattner f5e36c8bc0 fix a warning
llvm-svn: 26941
2006-03-22 04:18:34 +00:00
Evan Cheng d097e67544 Some splat and shuffle support.
llvm-svn: 26940
2006-03-22 02:53:00 +00:00
Evan Cheng b1d3c64d1f Add a couple more pseudo instructions.
llvm-svn: 26939
2006-03-22 02:52:03 +00:00
Chris Lattner 8fa445a89d Endianness does not affect the order of vector fields. This fixes
SingleSource/UnitTests/Vector/build.c

llvm-svn: 26936
2006-03-22 01:46:54 +00:00
Chris Lattner 4e7371758f Fix the JIT encoding of the VAForm_1 instructions, including vmaddfp
llvm-svn: 26935
2006-03-22 01:44:36 +00:00
Chris Lattner 5be4352124 Enclose some variables in a scope to avoid error with some gcc versions
llvm-svn: 26934
2006-03-22 00:12:37 +00:00
Evan Cheng baea59c61c Didn't mean to check this in. No MMX support yet.
llvm-svn: 26933
2006-03-21 23:04:23 +00:00
Evan Cheng d5e905d762 - Use movaps to store 128-bit vector integers.
- Each scalar to vector v8i16 and v16i8 is an any_extend followed by a movd.

llvm-svn: 26932
2006-03-21 23:01:21 +00:00
Chris Lattner 340a6b5c26 add expand support for extractelement
llvm-svn: 26931
2006-03-21 21:02:03 +00:00
Chris Lattner 00f4683bf6 These targets don't support EXTRACT_VECTOR_ELT, though, in time, X86 will.
llvm-svn: 26930
2006-03-21 20:51:05 +00:00
Chris Lattner 7c0cd8cafc add some trivial support for extractelement.
llvm-svn: 26928
2006-03-21 20:44:12 +00:00
Chris Lattner 3a2ae6ad3c Don't emit pseudo instructions!
llvm-svn: 26926
2006-03-21 20:19:37 +00:00
Chris Lattner 672a42d731 Add a hacky workaround for crashes due to vectors live across blocks.
Note that this code won't work for vectors that aren't legal on the
target.  Improvements coming.

llvm-svn: 26925
2006-03-21 19:20:37 +00:00
Nate Begeman 013127981a Update readme
llvm-svn: 26924
2006-03-21 18:58:20 +00:00
Chris Lattner 139eac5b71 Print absolute memory references like this:
lwz r2, 8(0)
instead of this:
       lwz r2, 8(r0)

This fixes the llc/llc-beta failures on PPC last night.

llvm-svn: 26922
2006-03-21 17:21:13 +00:00
Evan Cheng 2d819f5fa4 Combine 2 entries
llvm-svn: 26921
2006-03-21 07:18:26 +00:00
Evan Cheng aeebc96099 Add a note about x86 register coalescing
llvm-svn: 26920
2006-03-21 07:12:57 +00:00
Evan Cheng 1208d9179a - Remove scalar to vector pseudo ops. They are just wrong.
- Handle FR32 to VR128:v4f32 and FR64 to VR128:v2f64 with aliases of MOVAPS
and MOVAPD. Mark them as move instructions and *hope* they will be deleted.

llvm-svn: 26919
2006-03-21 07:09:35 +00:00
Chris Lattner bda7310ef7 With Evan's latest tblgen patch, this code is obsolete, thanks Evan!
llvm-svn: 26917
2006-03-21 06:37:40 +00:00
Chris Lattner d2132f87d7 When codegen'ing vector MUL using VFMADD, *add* the 0, don't *mul* the 0.
llvm-svn: 26913
2006-03-21 00:51:38 +00:00
Chris Lattner f194834161 minor note
llvm-svn: 26912
2006-03-21 00:47:09 +00:00
Evan Cheng e4d1416239 x86 ISD::SCALAR_TO_VECTOR support.
llvm-svn: 26911
2006-03-21 00:33:35 +00:00
Evan Cheng fb872b41c0 Junk unused vector register classes.
llvm-svn: 26910
2006-03-21 00:30:59 +00:00
Chris Lattner c8b16d00b9 Handle constant addresses more efficiently, folding the low bits into the
disp field of the load/store if possible.  This compiles
CodeGen/PowerPC/load-constant-addr.ll to:

_test:
        lis r2, 2838
        lfs f1, 26848(r2)
        blr

instead of:

_test:
        lis r2, 2838
        ori r2, r2, 26848
        lfs f1, 0(r2)
        blr

llvm-svn: 26908
2006-03-20 22:38:22 +00:00
Chris Lattner 6d74b09da7 remove dead variable
llvm-svn: 26907
2006-03-20 22:37:23 +00:00
Chris Lattner a1bc294f0c Fix a couple of bugs in permute/splat generation, thanks to Nate for actually
figuring these out! :)

llvm-svn: 26904
2006-03-20 18:26:51 +00:00
Chris Lattner eda030da04 reenable this hack, the tblgen version isn't quite ready
llvm-svn: 26902
2006-03-20 17:54:43 +00:00
Chris Lattner f96d523b8f Fix the pattern for VADDUWM, add i32 splat
llvm-svn: 26901
2006-03-20 17:51:58 +00:00
Evan Cheng 89f3cff0f5 Use tblgen'd VECTOR_SHUFFLE selection code.
llvm-svn: 26900
2006-03-20 08:14:16 +00:00
Chris Lattner a9a1313386 Add support for generating vspltw, instead of a vperm instruction with a
constant pool load.  This generates significantly nicer code for splats.

When tblgen gets bugfixed, we can remove the custom selection code.

llvm-svn: 26898
2006-03-20 06:51:10 +00:00
Chris Lattner a8fbb6dd3d Implement PPC::isSplatShuffleMask and PPC::getVSPLTImmediate.
llvm-svn: 26897
2006-03-20 06:37:44 +00:00
Chris Lattner ffc475689b fix duplicate definition errors
llvm-svn: 26896
2006-03-20 06:33:01 +00:00
Chris Lattner 80b6bd2746 Add a build_vector node
llvm-svn: 26895
2006-03-20 06:18:01 +00:00
Chris Lattner 382f356bd9 Check in some intermediate code that adds a skeleton for matching vsplt*
instructions

llvm-svn: 26894
2006-03-20 06:15:45 +00:00
Evan Cheng e6448448c2 Move a few things around.
llvm-svn: 26893
2006-03-20 06:04:52 +00:00
Chris Lattner e4e1ac37ba add vector_shuffle
llvm-svn: 26891
2006-03-20 05:40:45 +00:00
Chris Lattner 93d99f9928 fix typo
llvm-svn: 26889
2006-03-20 05:05:55 +00:00
Chris Lattner 366b2514fa add vsplat instructions, fix sched description for vperm
llvm-svn: 26888
2006-03-20 04:47:33 +00:00
Chris Lattner a8713b1ee6 Custom lower arbitrary VECTOR_SHUFFLE's to VPERM.
TODO: leave specific ones as VECTOR_SHUFFLE's and turn them into specialized
operations like vsplt*

llvm-svn: 26887
2006-03-20 01:53:53 +00:00
Chris Lattner 0a8b4eaee9 Claim to have v16i8 for perm masks
llvm-svn: 26886
2006-03-20 01:53:02 +00:00
Chris Lattner 21e68c8001 If a target supports splatting with SHUFFLE_VECTOR, lower to it from BUILD_VECTOR(x,x,x,x)
llvm-svn: 26885
2006-03-20 01:52:29 +00:00
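
The splat lowering above replaces BUILD_VECTOR(x,x,x,x) with a single shuffle that replicates one element; an assumed SSE illustration of the same idea:

#include <xmmintrin.h>

/* BUILD_VECTOR(x, x, x, x) lowered as a splat: one scalar insert plus a
 * SHUFFLE_VECTOR whose mask replicates element 0 into every lane. */
static __m128 splat4(float x) {
    __m128 v = _mm_set_ss(x);              /* <x, 0, 0, 0> */
    return _mm_shuffle_ps(v, v, 0x00);     /* mask <0,0,0,0> -> <x, x, x, x> */
}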
Chris Lattner 6b20104410 TargetData doesn't know the alignment of vectors :(
llvm-svn: 26884
2006-03-20 01:51:46 +00:00
Chris Lattner e7a058de7d add the vperm instruction
llvm-svn: 26883
2006-03-20 01:00:56 +00:00
Chris Lattner 00f0589bc0 Add very basic support for VECTOR_SHUFFLE
llvm-svn: 26880
2006-03-19 23:56:04 +00:00
Chris Lattner d16f6fdd49 add a note with a testcase
llvm-svn: 26877
2006-03-19 22:27:41 +00:00
Chris Lattner 169e6238ad Add a note about the MUL -> FMADD vector bug.
llvm-svn: 26874
2006-03-19 22:08:08 +00:00
Chris Lattner d783c76c18 Teach cee to propagate through switch statements. This implements
Transforms/CorrelatedExprs/switch.ll

Patch contributed by Eric Kidd!

llvm-svn: 26872
2006-03-19 19:37:24 +00:00
Evan Cheng f7c2e3628b Vector undef's
llvm-svn: 26870
2006-03-19 09:38:54 +00:00
Chris Lattner 7e9440a4fc Custom lower SCALAR_TO_VECTOR into lve*x.
llvm-svn: 26868
2006-03-19 06:55:52 +00:00