Commit Graph

13778 Commits

Author SHA1 Message Date
Evan Cheng 9fa8959dce Expand a VECTOR_SHUFFLE to a BUILD_VECTOR if the target asks for it to be expanded
or custom lowering fails.

llvm-svn: 27432
2006-04-05 06:07:11 +00:00
Andrew Lenharth 8f321af723 revert this; it is safe, if conservative. leave a note to that effect
llvm-svn: 27428
2006-04-05 02:42:36 +00:00
Evan Cheng 59a6355e82 Handle v8i16 shuffle that must be broken into a pair of pshufhw / pshuflw.
llvm-svn: 27427
2006-04-05 01:47:37 +00:00
Chris Lattner 2f8e2b2895 add vsl
llvm-svn: 27425
2006-04-05 01:16:22 +00:00
Chris Lattner 575352ac20 add vmladduhm
llvm-svn: 27423
2006-04-05 00:49:48 +00:00
Chris Lattner 5a528e565b Add m[tf]vscr instructions.
llvm-svn: 27421
2006-04-05 00:03:57 +00:00
Chris Lattner 0c82447c66 add a note
llvm-svn: 27419
2006-04-04 23:45:11 +00:00
Chris Lattner 281bb5da1d Add missing byte merges.
llvm-svn: 27418
2006-04-04 23:43:56 +00:00
Chris Lattner fc50ae521c Add FP -> Int Conversions
llvm-svn: 27417
2006-04-04 23:25:02 +00:00
Chris Lattner 96338b6a21 add average intrinsics
llvm-svn: 27416
2006-04-04 23:14:00 +00:00
Chris Lattner 4464383a17 add a note
llvm-svn: 27414
2006-04-04 22:43:55 +00:00
Chris Lattner 4a744e5c9d Fix some broken logic that would cause us to codegen {2147483647,2147483647,2147483647,2147483647} as 'vspltisb v0, -1'.
llvm-svn: 27413
2006-04-04 22:28:35 +00:00
Evan Cheng 011c23d9d3 Added pslldq and psrldq.
llvm-svn: 27412
2006-04-04 21:49:39 +00:00
Evan Cheng 8f3b6b8d8a Minor fixes + naming changes.
llvm-svn: 27410
2006-04-04 19:12:30 +00:00
Evan Cheng 802b35c339 PSHUF* encoding bugs.
llvm-svn: 27405
2006-04-04 18:40:36 +00:00
Chris Lattner 4ea52cac01 Do not create ZEXTLOAD's unless we are before legalize or the operation is
legal.

llvm-svn: 27402
2006-04-04 17:39:18 +00:00
Chris Lattner 95c7adc7cb Ask legalize to promote all vector shuffles to be v16i8 instead of having to
handle all 4 PPC vector types.   This simplifies the matching code and allows
us to eliminate a bunch of patterns.  This also adds cases we were missing,
such as CodeGen/PowerPC/vec_splat.ll:splat_h.

llvm-svn: 27400
2006-04-04 17:25:31 +00:00
Chris Lattner 6be79823e7 * Add support for SCALAR_TO_VECTOR operations where the input needs to be
promoted/expanded (e.g. SCALAR_TO_VECTOR from i8/i16 on PPC).
* Add support for targets to request that VECTOR_SHUFFLE nodes be promoted
  to a canonical type, for example, we only want v16i8 shuffles on PPC.
* Move isShuffleLegal out of TLI into Legalize.
* Teach isShuffleLegal to allow shuffles that need to be promoted.

llvm-svn: 27399
2006-04-04 17:23:26 +00:00
Chris Lattner 6b2c9748c3 Signed shr by a constant is not the same as sdiv by 2^k
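
A quick C sketch of the difference (assuming an arithmetic right shift on a
two's-complement target): the shift rounds toward negative infinity, while
sdiv truncates toward zero, so they disagree for negative operands.

#include <stdio.h>

int main(void) {
  int x = -5;
  printf("%d\n", x >> 1);  /* arithmetic shift: -5 >> 1 == -3 */
  printf("%d\n", x / 2);   /* signed division:  -5 / 2  == -2 */
  return 0;
}
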
llvm-svn: 27395
2006-04-04 06:11:42 +00:00
Evan Cheng e91e3bd874 cmpps / cmppd encoding bug
llvm-svn: 27393
2006-04-04 03:04:07 +00:00
Chris Lattner a9e77d14c7 Constant fold bitconvert(undef)
llvm-svn: 27391
2006-04-04 01:02:22 +00:00
Evan Cheng dd2eb27d6d Compact some intrinsic definitions.
llvm-svn: 27388
2006-04-04 00:10:53 +00:00
Chris Lattner b1e6d84544 Plug in the byte and short splats
llvm-svn: 27387
2006-04-04 00:05:13 +00:00
Chris Lattner 447a7968af Revert accidentally committed hunks.
llvm-svn: 27386
2006-04-03 23:58:04 +00:00
Chris Lattner 533aed9a35 Make sure to mark unsupported SCALAR_TO_VECTOR operations as expand.
llvm-svn: 27385
2006-04-03 23:55:43 +00:00
Evan Cheng 0ef83c83e1 Some SSE1 intrinsics: min, max, sqrt, etc.
llvm-svn: 27384
2006-04-03 23:49:17 +00:00
Chris Lattner bf0016f2d4 revert previous patch
llvm-svn: 27383
2006-04-03 23:14:49 +00:00
Evan Cheng b64827e662 Use movlpd to store the lower f64 extracted from a v2f64.
Use movhpd to store the upper f64 extracted from a v2f64.

llvm-svn: 27382
2006-04-03 22:30:54 +00:00
Chris Lattner 5400727595 Force use of a frame-pointer if there is anything on the stack that is aligned
more than the OS keeps the stack aligned.
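
A hypothetical C case that triggers this (the helper is made up): a local
aligned past the OS stack guarantee must be realigned dynamically, and the
realignment needs a frame pointer to find the caller's frame again.

#include <stdalign.h>

void consume(char *p);  /* hypothetical */

void overaligned(void) {
  alignas(64) char buf[64];  /* more aligned than the OS keeps the stack */
  consume(buf);
}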

llvm-svn: 27381
2006-04-03 22:03:29 +00:00
Chris Lattner b710a81e54 The stack alignment is now computed dynamically, just verify it is correct.
llvm-svn: 27380
2006-04-03 21:39:57 +00:00
Chris Lattner 6bc4b9c7f8 Remove unused method
llvm-svn: 27379
2006-04-03 21:39:03 +00:00
Evan Cheng ebf1006d16 - More efficient extract_vector_elt with shuffle and movss, movsd, movd, etc.
- Some bug fixes and naming inconsistency fixes.

llvm-svn: 27377
2006-04-03 20:53:28 +00:00
Chris Lattner 78c788b450 Align vectors to the size in bytes, not bits.
llvm-svn: 27376
2006-04-03 19:28:50 +00:00
Chris Lattner e1e3adf802 Add a missing check, this fixes UnitTests/Vector/sumarray.c
llvm-svn: 27375
2006-04-03 17:29:28 +00:00
Chris Lattner 04c00fc844 Add a missing check, which broke a bunch of vector tests.
llvm-svn: 27374
2006-04-03 17:21:50 +00:00
Chris Lattner 9ccd61c893 Add the full set of min/max instructions
llvm-svn: 27372
2006-04-03 15:58:28 +00:00
Andrew Lenharth df7abf8b74 support x * (c1 + c2) where c1 and c2 are pow2s. special case for c2 == 4
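
A worked instance as a C sketch (values are hypothetical): with c1 = 16 and
c2 = 4, x * 20 becomes two shifts and an add; the c2 == 4 special case
presumably maps onto Alpha's scaled-add instructions (e.g. s4addq).

int mul20(int x) {
  /* x * 20 == x * (16 + 4) == (x << 4) + (x << 2) */
  return (x << 4) + (x << 2);
}
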
llvm-svn: 27370
2006-04-03 04:19:17 +00:00
Andrew Lenharth 4e2c073a33 mul by const conversion sequences. more coming soon
llvm-svn: 27368
2006-04-03 03:18:59 +00:00
Andrew Lenharth 94f012f606 back this out
llvm-svn: 27367
2006-04-03 03:16:50 +00:00
Andrew Lenharth 015eaf5f33 This should be a win on every arch
llvm-svn: 27364
2006-04-02 21:42:45 +00:00
Andrew Lenharth 444bdb069a This makes McCat/12-IOtest go 8x faster or so
llvm-svn: 27363
2006-04-02 21:08:39 +00:00
Andrew Lenharth 01bd5523a3 This will be needed soon
llvm-svn: 27362
2006-04-02 20:13:57 +00:00
Chris Lattner acf1fc8a28 add a note
llvm-svn: 27360
2006-04-02 07:20:00 +00:00
Chris Lattner c5287c0ece Inform the dag combiner that the predicate compares only return a low bit.
llvm-svn: 27359
2006-04-02 06:26:07 +00:00
Chris Lattner 6c1321ca3f relax assertion
llvm-svn: 27358
2006-04-02 06:19:46 +00:00
Chris Lattner e6025525fb Allow targets to compute masked bits for intrinsics.
llvm-svn: 27357
2006-04-02 06:15:09 +00:00
Chris Lattner 4993249a04 Add a little dag combine to compile this:
int %AreSecondAndThirdElementsBothNegative(<4 x float>* %in) {
entry:
        %tmp1 = load <4 x float>* %in           ; <<4 x float>> [#uses=1]
        %tmp = tail call int %llvm.ppc.altivec.vcmpgefp.p( int 1, <4 x float> < float 0x7FF8000000000000, float 0.000000e+00, float 0.000000e+00, float 0x7FF8000000000000 >, <4 x float> %tmp1 )           ; <int> [#uses=1]
        %tmp = seteq int %tmp, 0                ; <bool> [#uses=1]
        %tmp3 = cast bool %tmp to int           ; <int> [#uses=1]
        ret int %tmp3
}

into this:

_AreSecondAndThirdElementsBothNegative:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI1_0)
        lis r5, ha16(LCPI1_0)
        lvx v0, 0, r3
        lvx v1, r5, r4
        vcmpgefp. v0, v1, v0
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        mtspr 256, r2
        blr

instead of this:

_AreSecondAndThirdElementsBothNegative:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI1_0)
        lis r5, ha16(LCPI1_0)
        lvx v0, 0, r3
        lvx v1, r5, r4
        vcmpgefp. v0, v1, v0
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        xori r3, r3, 1
        cntlzw r3, r3
        srwi r3, r3, 5
        mtspr 256, r2
        blr

llvm-svn: 27356
2006-04-02 06:11:11 +00:00
Chris Lattner caba72b6ff vector casts of casts are eliminable. Transform this:
%tmp = cast <4 x uint> %tmp to <4 x int>                ; <<4 x int>> [#uses=1]
        %tmp = cast <4 x int> %tmp to <4 x float>               ; <<4 x float>> [#uses=1]

into:

        %tmp = cast <4 x uint> %tmp to <4 x float>              ; <<4 x float>> [#uses=1]

llvm-svn: 27355
2006-04-02 05:43:13 +00:00
Chris Lattner 7ee10dec05 vector casts never reinterpret bits
llvm-svn: 27354
2006-04-02 05:40:28 +00:00
Chris Lattner ebca476b27 Allow transforming this:
%tmp = cast <4 x uint>* %testData to <4 x int>*         ; <<4 x int>*> [#uses=1]
        %tmp = load <4 x int>* %tmp             ; <<4 x int>> [#uses=1]

to this:

        %tmp = load <4 x uint>* %testData               ; <<4 x uint>> [#uses=1]
        %tmp = cast <4 x uint> %tmp to <4 x int>                ; <<4 x int>> [#uses=1]

llvm-svn: 27353
2006-04-02 05:37:12 +00:00
Chris Lattner f42d0aeda1 Turn altivec lvx/stvx intrinsics into loads and stores. This allows the
elimination of one load from this:

int AreSecondAndThirdElementsBothNegative( vector float *in ) {
#define QNaN 0x7FC00000
const vector unsigned int testData = (vector unsigned int)( QNaN, 0, 0, QNaN );
vector float test = vec_ld( 0, (float*) &testData );
return ! vec_any_ge( test, *in );
}

Now generating:

_AreSecondAndThirdElementsBothNegative:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI1_0)
        lis r5, ha16(LCPI1_0)
        addi r6, r1, -16
        lvx v0, r5, r4
        stvx v0, 0, r6
        lvx v1, 0, r3
        vcmpgefp. v0, v0, v1
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        xori r3, r3, 1
        cntlzw r3, r3
        srwi r3, r3, 5
        mtspr 256, r2
        blr

llvm-svn: 27352
2006-04-02 05:30:25 +00:00
Chris Lattner 80fdc1eb6b Remove done item
llvm-svn: 27351
2006-04-02 05:28:54 +00:00
Chris Lattner 42a5fca47e Implement promotion for EXTRACT_VECTOR_ELT, allowing v16i8 multiplies to work with PowerPC.
llvm-svn: 27349
2006-04-02 05:06:04 +00:00
Chris Lattner b80f114707 add a note
llvm-svn: 27348
2006-04-02 03:59:11 +00:00
Chris Lattner 87f080949b Implement the Expand action for binary vector operations to break the binop
into elements and operate on each piece.  This allows generic vector integer
multiplies to work on PPC, though the generated code is horrible.

llvm-svn: 27347
2006-04-02 03:57:31 +00:00
Chris Lattner a9c59156be Intrinsics that just load from memory can be treated like loads: they don't
have to serialize against each other.  This allows us to schedule lvx's
across each other, for example.

llvm-svn: 27346
2006-04-02 03:41:14 +00:00
Chris Lattner 70ec96fa32 Adjust to change in Intrinsics.gen interface.
llvm-svn: 27344
2006-04-02 03:35:01 +00:00
Chris Lattner 0442a18758 Constant fold all of the vector binops. This allows us to compile this:
"vector unsigned char mergeLowHigh = (vector unsigned char)
( 8, 9, 10, 11, 16, 17, 18, 19, 12, 13, 14, 15, 20, 21, 22, 23 );
vector unsigned char mergeHighLow = vec_xor( mergeLowHigh, vec_splat_u8(8));"

aka:

void %test2(<16 x sbyte>* %P) {
  store <16 x sbyte> cast (<4 x int> xor (<4 x int> cast (<16 x ubyte> < ubyte 8, ubyte 9, ubyte 10, ubyte 11, ubyte 16, ubyte 17, ubyte 18, ubyte 19, ubyte 12, ubyte 13, ubyte 14, ubyte 15, ubyte 20, ubyte 21, ubyte 22, ubyte 23 > to <4 x int>), <4 x int> cast (<16 x sbyte> < sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8, sbyte 8 > to <4 x int>)) to <16 x sbyte>), <16 x sbyte> * %P
  ret void
}

into this:

_test2:
        mfspr r2, 256
        oris r4, r2, 32768
        mtspr 256, r4
        li r4, lo16(LCPI2_0)
        lis r5, ha16(LCPI2_0)
        lvx v0, r5, r4
        stvx v0, 0, r3
        mtspr 256, r2
        blr

instead of this:

_test2:
        mfspr r2, 256
        oris r4, r2, 49152
        mtspr 256, r4
        li r4, lo16(LCPI2_0)
        lis r5, ha16(LCPI2_0)
        vspltisb v0, 8
        lvx v1, r5, r4
        vxor v0, v1, v0
        stvx v0, 0, r3
        mtspr 256, r2
        blr

... which occurs here:
http://developer.apple.com/hardware/ve/calcspeed.html

llvm-svn: 27343
2006-04-02 03:25:57 +00:00
Chris Lattner ef598059f2 Add a new -view-legalize-dags command line option
llvm-svn: 27342
2006-04-02 03:07:27 +00:00
Chris Lattner e4e64b6b85 Implement constant folding of bit_convert of arbitrary constant vbuild_vector nodes.
llvm-svn: 27341
2006-04-02 02:53:43 +00:00
Chris Lattner 1c22728787 These entries already exist
llvm-svn: 27340
2006-04-02 02:51:27 +00:00
Chris Lattner 1985e1cbb8 Add some missing node names
llvm-svn: 27339
2006-04-02 02:41:18 +00:00
Chris Lattner 7a29cf3c7f New note
llvm-svn: 27337
2006-04-02 01:47:20 +00:00
Chris Lattner 6b3f475d23 Constant fold casts from things like <4 x int> -> <4 x uint>, likewise int<->fp.
llvm-svn: 27336
2006-04-02 01:38:28 +00:00
Chris Lattner 9b2d6e7886 Custom lower all BUILD_VECTOR's so that we can compile vec_splat_u8(8) into
"vspltisb v0, 8" instead of a constant pool load.

llvm-svn: 27335
2006-04-02 00:43:36 +00:00
Chris Lattner bec582f4cd Prefer larger register classes over smaller ones when a register occurs in
multiple register classes.  This fixes PowerPC/2006-04-01-FloatDoubleExtend.ll

llvm-svn: 27334
2006-04-02 00:24:45 +00:00
Chris Lattner 1b2436a624 add valuemapper support for inline asm
llvm-svn: 27332
2006-04-01 23:17:11 +00:00
Chris Lattner dc72c17798 Implement vnot using VNOR instead of using 'vspltisb v0, -1' and vxor
llvm-svn: 27331
2006-04-01 22:41:47 +00:00
Chris Lattner 6cf4914fd4 Fix InstCombine/2006-04-01-InfLoop.ll
llvm-svn: 27330
2006-04-01 22:05:01 +00:00
Chris Lattner dcd0792622 Fold A^(B&A) -> (B&A)^A
Fold (B&A)^A == ~B & A

This implements InstCombine/xor.ll:test2[56]
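
Both identities are easy to confirm exhaustively over a single bit, e.g.
with this C sketch:

#include <assert.h>

int main(void) {
  for (unsigned A = 0; A <= 1; ++A)
    for (unsigned B = 0; B <= 1; ++B) {
      assert((A ^ (B & A)) == ((B & A) ^ A));  /* xor commutes      */
      assert(((B & A) ^ A) == (~B & A & 1u));  /* (B&A)^A == ~B & A */
    }
  return 0;
}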

llvm-svn: 27328
2006-04-01 08:03:55 +00:00
Chris Lattner 98e9604d5d Fix Transforms/IndVarsSimplify/2006-03-31-NegativeStride.ll and
PR726 by performing consistent signed division, not consistent unsigned
division when evaluating scev's.  Do not touch udivs.
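
A sketch of why the division must be signed (hypothetical loop): with a
negative stride, the trip count (end - start) / stride only comes out right
under signed division.

#include <stdio.h>

int main(void) {
  /* for (i = 10; i > 0; i -= 2): start 10, stride -2, 5 iterations */
  int start = 10, end = 0, stride = -2;
  printf("%d\n", (end - start) / stride);                     /* 5          */
  printf("%u\n", (unsigned)(end - start) / (unsigned)stride); /* 0 -- wrong */
  return 0;
}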

llvm-svn: 27326
2006-04-01 04:48:52 +00:00
Chris Lattner 0baebb11bf Add a note
llvm-svn: 27324
2006-04-01 04:08:29 +00:00
Chris Lattner 8d1d8d364c If we can look through vector operations to find the scalar version of an
extract_element'd value, do so.

llvm-svn: 27323
2006-03-31 23:01:56 +00:00
Chris Lattner ff77dc0a08 Shrinkify some more intrinsic definitions.
llvm-svn: 27322
2006-03-31 22:41:56 +00:00
Evan Cheng dc1161cf53 An entry about packed type alignments.
llvm-svn: 27321
2006-03-31 22:35:14 +00:00
Chris Lattner 20d3f3726f Pull operand asm string into base class, shrinkifying intrinsic definitions.
No functionality change.

llvm-svn: 27320
2006-03-31 22:34:05 +00:00
Evan Cheng a11d834b8c TargetData.cpp::getTypeInfo() was returning alignment of element type as the
alignment of a packed type. This is obviously wrong. Added a workaround that
returns the size of the packed type as its alignment. The correct fix would
be to return a target dependent alignment value provided via TargetLowering
(or some other interface).
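
For instance, a <4 x float> is 16 bytes and wants 16-byte alignment, but its
element type only guarantees 4. A sketch using the GCC/clang vector
extension:

typedef float v4f __attribute__((vector_size(16)));

_Static_assert(sizeof(v4f) == 16, "packed type is 16 bytes");
/* alignof(float) is 4; using it for v4f would under-align vector
   loads/stores, so the workaround uses the full 16-byte size instead. */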

llvm-svn: 27319
2006-03-31 22:33:42 +00:00
Chris Lattner 39dcf1a9e2 Delete identity shuffles, implementing CodeGen/Generic/vector-identity-shuffle.ll
llvm-svn: 27317
2006-03-31 22:16:43 +00:00
Chris Lattner 110fc74b97 Fix 80 column violations :)
llvm-svn: 27315
2006-03-31 21:57:36 +00:00
Evan Cheng 5fd7c69473 Use an X86 target-specific node X86ISD::PINSRW instead of a malformed
INSERT_VECTOR_ELT to insert a 16-bit value in a 128-bit vector.

llvm-svn: 27314
2006-03-31 21:55:24 +00:00
Evan Cheng 747e29ef0b Added support for SSE3 horizontal ops: haddp{s|d} and hsub{s|d}.
llvm-svn: 27310
2006-03-31 21:29:33 +00:00
Chris Lattner a4150f751d fix a pasto
llvm-svn: 27308
2006-03-31 21:19:06 +00:00
Chris Lattner e7fd4b0274 Add vperm support for all datatypes
llvm-svn: 27307
2006-03-31 20:00:35 +00:00
Chris Lattner baa73e0d91 Rearrange code a bit
llvm-svn: 27306
2006-03-31 19:52:36 +00:00
Chris Lattner 754b41c84b Add, sub and shuffle are legal for all vector types
llvm-svn: 27305
2006-03-31 19:48:58 +00:00
Evan Cheng cbffa4656b Add support to use pextrw and pinsrw to extract and insert a word element
from a 128-bit vector.

llvm-svn: 27304
2006-03-31 19:22:53 +00:00
Evan Cheng 3296f297d5 Add vector_extract and vector_insert nodes.
llvm-svn: 27303
2006-03-31 19:21:16 +00:00
Chris Lattner 40ff17dc22 add a note
llvm-svn: 27302
2006-03-31 19:00:22 +00:00
Chris Lattner e52f29b243 constant fold extractelement with undef operands.
llvm-svn: 27301
2006-03-31 18:31:40 +00:00
Chris Lattner 92346c315e extractelement(undef,x) -> undef
llvm-svn: 27300
2006-03-31 18:25:14 +00:00
Chris Lattner d9e4daabd2 Do not endian swap split vector loads. This fixes UnitTests/Vector/sumarray-dbl on PPC.
Now all UnitTests/Vector/* tests pass on PPC.

llvm-svn: 27299
2006-03-31 18:22:37 +00:00
Chris Lattner 8d90f526d7 Do not endian swap the operands to a store if the operands came from a vector.
This fixes UnitTests/Vector/simple.c with altivec.

llvm-svn: 27298
2006-03-31 18:20:46 +00:00
Chris Lattner 7e30af3887 Remove dead *extloads. This allows us to codegen vector.ll:test_extract_elt
to:

test_extract_elt:
        alloc r3 = ar.pfs,0,1,0,0
        adds r8 = 12, r32
        ;;
        ldfs f8 = [r8]
        mov ar.pfs = r3
        br.ret.sptk.many rp

instead of:

test_extract_elt:
        alloc r3 = ar.pfs,0,1,0,0
        adds r8 = 28, r32
        adds r9 = 24, r32
        adds r10 = 20, r32
        adds r11 = 16, r32
        ;;
        ldfs f6 = [r8]
        ;;
        ldfs f6 = [r9]
        adds r8 = 12, r32
        adds r9 = 8, r32
        adds r14 = 4, r32
        ;;
        ldfs f6 = [r10]
        ;;
        ldfs f6 = [r11]
        ldfs f8 = [r8]
        ;;
        ldfs f6 = [r9]
        ;;
        ldfs f6 = [r14]
        ;;
        ldfs f6 = [r32]
        mov ar.pfs = r3
        br.ret.sptk.many rp

llvm-svn: 27297
2006-03-31 18:10:41 +00:00
Chris Lattner 2d8551c85b Delete dead loads in the dag. This allows us to compile
vector.ll:test_extract_elt2 into:

_test_extract_elt2:
        lfd f1, 32(r3)
        blr

instead of:

_test_extract_elt2:
        lfd f0, 56(r3)
        lfd f0, 48(r3)
        lfd f0, 40(r3)
        lfd f1, 32(r3)
        lfd f0, 24(r3)
        lfd f0, 16(r3)
        lfd f0, 8(r3)
        lfd f0, 0(r3)
        blr

llvm-svn: 27296
2006-03-31 18:06:18 +00:00
Chris Lattner 6f42325dca Implement PromoteOp for VEXTRACT_VECTOR_ELT. This fixes
Generic/vector.ll:test_extract_elt on non-sse X86 systems.

llvm-svn: 27294
2006-03-31 17:55:51 +00:00
Chris Lattner 8e1fcab2bc Scalarized vector stores need not be legal, e.g. if the vector element type
needs to be promoted or expanded.  Relegalize the scalar store once created.
This fixes CodeGen/Generic/vector.ll:test1 on non-SSE x86 targets.

llvm-svn: 27293
2006-03-31 17:37:22 +00:00
Jeff Cohen e45355218f Fix build breakage.
llvm-svn: 27292
2006-03-31 07:22:05 +00:00
Chris Lattner 829a061abf note to self: *save* file, then check it in
llvm-svn: 27291
2006-03-31 06:04:53 +00:00
Chris Lattner d4058a59d4 Implement an item from the readme, folding vcmp/vcmp. instructions with
identical instructions into a single instruction.  For example, for:

void test(vector float *x, vector float *y, int *P) {
  int v = vec_any_out(*x, *y);
  *x = (vector float)vec_cmpb(*x, *y);
  *P = v;
}

we now generate:

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        lvx v0, 0, r4
        lvx v1, 0, r3
        vcmpbfp. v0, v1, v0
        mfcr r4, 2
        stvx v0, 0, r3
        rlwinm r3, r4, 27, 31, 31
        xori r3, r3, 1
        stw r3, 0(r5)
        mtspr 256, r2
        blr

instead of:

_test:
        mfspr r2, 256
        oris r6, r2, 57344
        mtspr 256, r6
        lvx v0, 0, r4
        lvx v1, 0, r3
        vcmpbfp. v2, v1, v0
        mfcr r4, 2
***     vcmpbfp v0, v1, v0
        rlwinm r4, r4, 27, 31, 31
        stvx v0, 0, r3
        xori r3, r4, 1
        stw r3, 0(r5)
        mtspr 256, r2
        blr

Testcase here: CodeGen/PowerPC/vcmp-fold.ll

llvm-svn: 27290
2006-03-31 06:02:07 +00:00
Chris Lattner 070181c927 compactify some more instruction definitions
llvm-svn: 27288
2006-03-31 05:38:32 +00:00
Chris Lattner 45c709388a Compactify comparisons.
llvm-svn: 27287
2006-03-31 05:32:57 +00:00
Chris Lattner d7495ae7e9 Lower vector compares to VCMP nodes, just like we lower vector comparison
predicates to VCMPo nodes.

llvm-svn: 27285
2006-03-31 05:13:27 +00:00
Chris Lattner e5a6c4f8b7 These are done
llvm-svn: 27284
2006-03-31 04:53:21 +00:00
Chris Lattner b37dfd631c Add a new method to verify intrinsic function prototypes.
llvm-svn: 27282
2006-03-31 04:46:47 +00:00
Chris Lattner ba38035e21 Make sure to pass enough values to phi nodes when we are dealing with
decimated vectors.  This fixes UnitTests/Vector/sumarray-dbl.c

llvm-svn: 27280
2006-03-31 02:12:18 +00:00
Chris Lattner 5fe1f54c17 Significantly improve handling of vectors that are live across basic blocks,
handling cases where the vector elements need promotion, expansion, and when
the vector type itself needs to be decimated.

llvm-svn: 27278
2006-03-31 02:06:56 +00:00
Chris Lattner 051f7861b8 Was returning the wrong type.
llvm-svn: 27277
2006-03-31 01:50:09 +00:00
Chris Lattner bca5fbe914 Mark INSERT_VECTOR_ELT as expand
llvm-svn: 27276
2006-03-31 01:48:55 +00:00
Evan Cheng 1b0d294de0 Expand all INSERT_VECTOR_ELT (obviously bad) for now.
llvm-svn: 27275
2006-03-31 01:30:39 +00:00
Evan Cheng 168e45b0b3 Expand INSERT_VECTOR_ELT to store vec, sp; store elt, sp+k; vec = load sp;
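
In C terms the expansion is roughly this sketch: spill the vector to a stack
slot, overwrite element k, and reload the whole vector.

#include <string.h>

typedef float v4f __attribute__((vector_size(16)));

v4f insert_elt(v4f vec, float elt, int k) {
  float slot[4];                    /* stack temporary ("sp") */
  memcpy(slot, &vec, sizeof slot);  /* store vec, sp          */
  slot[k] = elt;                    /* store elt, sp + k*4    */
  memcpy(&vec, slot, sizeof slot);  /* vec = load sp          */
  return vec;
}
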
llvm-svn: 27274
2006-03-31 01:27:51 +00:00
Chris Lattner f144dac7b7 Modify the TargetLowering::getPackedTypeBreakdown method to also return the
unpromoted element type.

llvm-svn: 27273
2006-03-31 00:46:36 +00:00
Evan Cheng d9d0bbb5ac Typo
llvm-svn: 27272
2006-03-31 00:33:57 +00:00
Evan Cheng 99d7205fba Ok for vector_shuffle mask to contain undef elements.
llvm-svn: 27271
2006-03-31 00:30:29 +00:00
Chris Lattner 549fb167eb Implement TargetLowering::getPackedTypeBreakdown
llvm-svn: 27270
2006-03-31 00:28:56 +00:00
Chris Lattner c4e3eadf21 Add the rest of the vmul instructions and the vmulsum* instructions.
llvm-svn: 27268
2006-03-30 23:39:06 +00:00
Chris Lattner a23158f1ca Use a new tblgen feature to significantly shrinkify instruction definitions that
directly correspond to intrinsics.

llvm-svn: 27266
2006-03-30 23:21:27 +00:00
Chris Lattner 551d3a11d3 Add a bunch of new instructions for intrinsics.
llvm-svn: 27265
2006-03-30 23:07:36 +00:00
Chris Lattner 612fa8e6f3 Fix Transforms/InstCombine/2006-03-30-ExtractElement.ll
llvm-svn: 27261
2006-03-30 22:02:40 +00:00
Evan Cheng 7e2ff11a42 Make sure all possible shuffles are matched.
Use pshufd, pshufhw, and pshuflw to shuffle v4f32 if shufps doesn't match.
Use shufps to shuffle v4f32 if pshufd, pshufhw, and pshuflw don't match.

llvm-svn: 27259
2006-03-30 19:54:57 +00:00
Evan Cheng dd487d865b More logical ops patterns
llvm-svn: 27257
2006-03-30 07:33:32 +00:00
Evan Cheng c58ef7deeb Add support for _mm_cmp{cc}_ss and _mm_cmp{cc}_ps intrinsics
llvm-svn: 27256
2006-03-30 06:21:22 +00:00
Evan Cheng 593310016d Add 128-bit pmovmskb intrinsic support.
llvm-svn: 27255
2006-03-30 00:33:26 +00:00
Evan Cheng c5cf9bba05 Change SSE pack operation definitions to fit what the intrinsics expected.
For example, packsswb actually creates a v16i8 from a pair of v8i16, but
the intrinsic specification forces the output type to match the operands.
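
A scalar C sketch of the packsswb semantics described above: each 16-bit
input lane is clamped to the signed 8-bit range, and the two v8i16 inputs
yield one v16i8.

#include <stdint.h>

static int8_t sat8(int16_t x) {  /* signed saturation to i8 */
  if (x > 127)  return 127;
  if (x < -128) return -128;
  return (int8_t)x;
}

void packsswb(const int16_t a[8], const int16_t b[8], int8_t out[16]) {
  for (int i = 0; i < 8; ++i) {
    out[i]     = sat8(a[i]);
    out[i + 8] = sat8(b[i]);
  }
}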

llvm-svn: 27254
2006-03-29 23:53:14 +00:00
Evan Cheng b7fedffc78 - Added some SSE2 128-bit packed integer ops.
- Added SSE2 128-bit integer pack with signed saturation ops.
- Added pshufhw and pshuflw ops.

llvm-svn: 27252
2006-03-29 23:07:14 +00:00
Evan Cheng acc336475e Need to special case splat after all. Make the second operand of splat
vector_shuffle undef.

llvm-svn: 27250
2006-03-29 19:02:40 +00:00
Evan Cheng 3cf95747c7 Floating point logical operation patterns should match bit_convert. Or else
integer vector logical operations would match andp{s|d} instead of pand.

llvm-svn: 27248
2006-03-29 18:47:40 +00:00
Evan Cheng 500ec16578 - More shuffle related bug fixes.
- Whenever possible use ops of the right packed types for vector shuffles /
  splats.

llvm-svn: 27246
2006-03-29 03:04:49 +00:00
Evan Cheng 3a1c4e75de Another entry about shuffles.
llvm-svn: 27245
2006-03-29 03:03:46 +00:00
Evan Cheng da59b0d2a8 - Only use pshufd for v4i32 vector shuffles.
- Other shuffle related fixes.

llvm-svn: 27244
2006-03-29 01:30:51 +00:00
Chris Lattner 7d6f4f14b4 add a note
llvm-svn: 27243
2006-03-29 00:24:13 +00:00
Chris Lattner 67271869a8 Bug fixes: handle constantexpr insert/extract element operations
Handle constantpacked vectors with constantexpr elements.

This fixes CodeGen/Generic/vector-constantexpr.ll

llvm-svn: 27241
2006-03-29 00:11:43 +00:00
Evan Cheng 38b34296d0 Added aliases to scalar SSE instructions, e.g. addss, to match x86 intrinsics.
The source operand types are v4sf, with the upper bits passed through.
Added matching code for these.

llvm-svn: 27240
2006-03-28 23:51:43 +00:00
Evan Cheng 8160fd3d42 Fixing buggy code.
llvm-svn: 27239
2006-03-28 23:41:33 +00:00
Chris Lattner 20e619fba3 When building a VVECTOR_SHUFFLE node from extract_element operations, make
sure to build it as SHUFFLE(X, undef, mask), not SHUFFLE(X, X, mask).

The later is not canonical form, and prevents the PPC splat pattern from
matching.  For a particular splat, we go from generating this:

	li r10, lo16(LCPI1_0)
	lis r11, ha16(LCPI1_0)
	lvx v3, r11, r10
	vperm v3, v2, v2, v3

to generating:

	vspltw v3, v2, 3

llvm-svn: 27236
2006-03-28 22:19:47 +00:00
Chris Lattner a46dfe80c8 Canonicalize VECTOR_SHUFFLE(X, X, Y) -> VECTOR_SHUFFLE(X,undef,Y')
llvm-svn: 27235
2006-03-28 22:11:53 +00:00
Chris Lattner c9992548fc Turn a series of extract_element's feeding a build_vector into a
vector_shuffle node.  For this:

void test(__m128 *res, __m128 *A, __m128 *B) {
  *res = _mm_unpacklo_ps(*A, *B);
}

we now produce this code:

_test:
        movl 8(%esp), %eax
        movaps (%eax), %xmm0
        movl 12(%esp), %eax
        unpcklps (%eax), %xmm0
        movl 4(%esp), %eax
        movaps %xmm0, (%eax)
        ret

instead of this:

_test:
        subl $76, %esp
        movl 88(%esp), %eax
        movaps (%eax), %xmm0
        movaps %xmm0, (%esp)
        movaps %xmm0, 32(%esp)
        movss 4(%esp), %xmm0
        movss 32(%esp), %xmm1
        unpcklps %xmm0, %xmm1
        movl 84(%esp), %eax
        movaps (%eax), %xmm0
        movaps %xmm0, 16(%esp)
        movaps %xmm0, 48(%esp)
        movss 20(%esp), %xmm0
        movss 48(%esp), %xmm2
        unpcklps %xmm0, %xmm2
        unpcklps %xmm1, %xmm2
        movl 80(%esp), %eax
        movaps %xmm2, (%eax)
        addl $76, %esp
        ret

GCC produces this (with -fomit-frame-pointer):

_test:
        subl    $12, %esp
        movl    20(%esp), %eax
        movaps  (%eax), %xmm0
        movl    24(%esp), %eax
        unpcklps        (%eax), %xmm0
        movl    16(%esp), %eax
        movaps  %xmm0, (%eax)
        addl    $12, %esp
        ret

llvm-svn: 27233
2006-03-28 20:28:38 +00:00
Chris Lattner f6f94d3bce Teach Legalize how to pack VVECTOR_SHUFFLE nodes into VECTOR_SHUFFLE nodes.
llvm-svn: 27232
2006-03-28 20:24:43 +00:00
Chris Lattner 8d57da2ffc new node
llvm-svn: 27231
2006-03-28 19:54:42 +00:00
Chris Lattner b7163598f9 Don't crash on X^X if X is a vector. Instead, produce a vector of zeros.
llvm-svn: 27229
2006-03-28 19:11:05 +00:00
Chris Lattner ffec47ebff Add an assertion
llvm-svn: 27228
2006-03-28 19:04:49 +00:00
Chris Lattner 66e1410858 add a note
llvm-svn: 27227
2006-03-28 18:56:23 +00:00
Jim Laskey dea0348853 Refactor address attributes. Add base register to frame info.
llvm-svn: 27226
2006-03-28 14:58:32 +00:00
Jim Laskey d1aa1638c6 Expose base register for DwarfWriter. Refactor code accordingly.
llvm-svn: 27225
2006-03-28 13:48:33 +00:00
Jim Laskey 67a636c587 More bulletproofing of llvm.dbg.declare.
llvm-svn: 27224
2006-03-28 13:45:20 +00:00
Jim Laskey 457e54efc1 Added missing paren on behalf of Ramana Radhakrishnan.
llvm-svn: 27223
2006-03-28 10:17:11 +00:00
Evan Cheng 21e5476deb Missed X86::isUNPCKHMask
llvm-svn: 27222
2006-03-28 08:27:15 +00:00
Evan Cheng be2d9a0e99 movlps and movlpd should be modeled as two address code.
llvm-svn: 27221
2006-03-28 07:01:28 +00:00
Evan Cheng dc57ae0711 Update
llvm-svn: 27220
2006-03-28 06:55:45 +00:00
Evan Cheng 4e7374ff8a Typo
llvm-svn: 27219
2006-03-28 06:53:49 +00:00
Evan Cheng 1a194a5264 * Prefer using operations of matching types, e.g. unpcklpd rather than movlhps.
* Bug fixes.

llvm-svn: 27218
2006-03-28 06:50:32 +00:00
Nate Begeman af8c373e77 Fix a couple typos
llvm-svn: 27216
2006-03-28 04:18:18 +00:00
Nate Begeman 1b3928765d Add a few more altivec intrinsics
llvm-svn: 27215
2006-03-28 04:15:58 +00:00
Evan Cheng 08b473c619 Added a couple of entries about movhps and movlhps.
llvm-svn: 27212
2006-03-28 02:49:12 +00:00
Evan Cheng 3765fadef6 All unpack cases are now being handled.
llvm-svn: 27211
2006-03-28 02:44:05 +00:00
Evan Cheng 2bc3280659 - Clean up / consolidate various shuffle masks.
- Some misc. bug fixes.
- Use MOVHPDrm to load from m64 to upper half of a XMM register.

llvm-svn: 27210
2006-03-28 02:43:26 +00:00
Chris Lattner 3710fca2b8 implement a bunch more intrinsics.
llvm-svn: 27209
2006-03-28 02:29:37 +00:00
Chris Lattner cb5ec07cc3 Use normal lvx for scalar_to_vector instead of lve*x. They do the exact
same thing and we have a dag node for the former.

llvm-svn: 27205
2006-03-28 01:43:22 +00:00
Jim Laskey 8374e9c4eb More bulletproofing of DebugInfoDesc verify.
llvm-svn: 27203
2006-03-28 01:30:18 +00:00
Chris Lattner e55d171ccd Tblgen doesn't like multiple SDNode<> definitions that map to the same enum value. Split them into separate enums.
llvm-svn: 27201
2006-03-28 00:40:33 +00:00
Evan Cheng 5df75889db Model unpack lower and interleave as vector_shuffle so we can lower the
intrinsics as such.

llvm-svn: 27200
2006-03-28 00:39:58 +00:00
Andrew Lenharth d7e612bbc4 If adding a link to a collapsed node, ignore the offset.
Fixes 2006-03-27-LinkedCollapsed.ll

llvm-svn: 27194
2006-03-27 23:39:58 +00:00
Jim Laskey d387cc5cde Reactivate llvm.dbg.declare.
llvm-svn: 27192
2006-03-27 23:31:10 +00:00
Chris Lattner 5bb1d90afd Disable dbg_declare, it currently breaks the CFE build
llvm-svn: 27182
2006-03-27 21:36:03 +00:00
Chris Lattner d5f94c9574 Fix legalization of intrinsics with chain and result values
llvm-svn: 27181
2006-03-27 20:28:29 +00:00
Jim Laskey fa53b276d0 Translate llvm target registers to dwarf register numbers properly.
llvm-svn: 27180
2006-03-27 20:18:45 +00:00
Chris Lattner 018e17c8de unbreak the build
llvm-svn: 27174
2006-03-27 16:52:45 +00:00
Chris Lattner 0e84f1e532 Unbreak the build on non-apple compilers :-(
llvm-svn: 27173
2006-03-27 16:10:59 +00:00
Evan Cheng d09b05b0bc Try again
llvm-svn: 27171
2006-03-27 08:10:26 +00:00
Chris Lattner 939c9ab88f Add a bunch of notes from my journey thus far.
llvm-svn: 27170
2006-03-27 07:41:00 +00:00
Evan Cheng 64efb35c32 Incorrect check for FP all-ones
llvm-svn: 27169
2006-03-27 07:26:17 +00:00
Chris Lattner 22ec3e7b7e Split out altivec notes into their own README
llvm-svn: 27168
2006-03-27 07:04:16 +00:00
Evan Cheng 9b9cc4fb39 Use pcmpeq to generate vector of all ones.
llvm-svn: 27167
2006-03-27 07:00:16 +00:00
Evan Cheng a74792fa9d Changed isBuildVectorAllOnesInteger to isBuildVectorAllOnes.
llvm-svn: 27166
2006-03-27 06:59:32 +00:00
Evan Cheng c70e33cd6e Change isBuildVectorAllOnesInteger to isBuildVectorAllOnes. Also check for
floating point cases.

llvm-svn: 27165
2006-03-27 06:58:47 +00:00
Chris Lattner 52fcad3a37 Instead of printing "INTRINSIC" on intrinsic node, print the intrinsic name.
llvm-svn: 27164
2006-03-27 06:45:25 +00:00
Chris Lattner 42e0ba09aa teach the inliner to work with packed constants
llvm-svn: 27161
2006-03-27 05:50:18 +00:00
Chris Lattner 1738c293b5 Fix the JIT encoding of VSEL
llvm-svn: 27160
2006-03-27 03:34:17 +00:00
Chris Lattner df59d5314c Fix the JIT encoding of VSPLTI*
llvm-svn: 27159
2006-03-27 03:28:57 +00:00
Jim Laskey 6e792f9ff8 Pass llvm/test/Regression/CodeGen/Generic/debug-info.ll.
llvm-svn: 27158
2006-03-27 01:51:47 +00:00
Nate Begeman ed728c1291 SelectionDAGISel can now natively handle Switch instructions, in the same
manner that the LowerSwitch LLVM to LLVM pass does: emitting a binary
search tree of basic blocks.  The new approach has several advantages:
it is faster, it generates significantly smaller code in many cases, and
it paves the way for implementing dense switch tables as a jump table by
handling switches directly in the instruction selector.

This functionality is currently only enabled on x86, but should be safe for
every target.  In anticipation of making it the default, the cfg is now
properly updated in the x86, ppc, and sparc select lowering code.
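
The shape of the emitted code, as a C sketch for a hypothetical switch over
{1, 5, 9, 13}: rather than a linear chain of compares, the cases are split
recursively around a pivot.

int dispatch(int x) {
  if (x < 9) {                 /* left subtree: {1, 5}   */
    if (x == 1) return 10;
    if (x == 5) return 50;
  } else {                     /* right subtree: {9, 13} */
    if (x == 9)  return 90;
    if (x == 13) return 130;
  }
  return -1;                   /* default case */
}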

llvm-svn: 27156
2006-03-27 01:32:24 +00:00
Jim Laskey 7092888bcc Bullet proof against undefined args produced by upgrading old-style debug info.
llvm-svn: 27155
2006-03-26 22:46:27 +00:00
Jim Laskey 84c2f0a705 How to be dumb on $5/day. Need a tri-state to track valid debug descriptors.
llvm-svn: 27154
2006-03-26 22:45:20 +00:00
Chris Lattner 65473e20d8 add vsel
llvm-svn: 27153
2006-03-26 22:38:43 +00:00
Nate Begeman 68cc9d4540 Readme note
llvm-svn: 27152
2006-03-26 19:19:27 +00:00
Chris Lattner 6961fc76bb Codegen vector predicate compares.
llvm-svn: 27151
2006-03-26 10:06:40 +00:00
Evan Cheng ed6184aef2 Remove X86:isZeroVector, use ISD::isBuildVectorAllZeros instead; some fixes / cleanups
llvm-svn: 27150
2006-03-26 09:53:12 +00:00
Evan Cheng b1ddc988af Remove PPC:isZeroVector, use ISD::isBuildVectorAllZeros instead
llvm-svn: 27149
2006-03-26 09:52:32 +00:00
Evan Cheng 5562f2092f Add immAllZerosV helper
llvm-svn: 27148
2006-03-26 09:51:39 +00:00
Evan Cheng a67899195f Add ISD::isBuildVectorAllZeros predicate
llvm-svn: 27147
2006-03-26 09:50:58 +00:00
Chris Lattner 30ee72586d Allow targets to custom lower their own intrinsics if desired.
llvm-svn: 27146
2006-03-26 09:12:51 +00:00
Chris Lattner 563c7022a5 Update dependencies to reflect split of the Intrinsics.td file
llvm-svn: 27144
2006-03-26 07:45:48 +00:00
Chris Lattner 793cbcb4fd Add all of the altivec comparison instructions. Add patterns for the
non-predicate altivec compare intrinsics.

llvm-svn: 27143
2006-03-26 04:57:17 +00:00
Chris Lattner c6c88b2ea1 Add all 8/16-bit adds, add all integer subtracts, add saturating subtract
intrinsics.

llvm-svn: 27142
2006-03-26 02:39:02 +00:00
Chris Lattner 53e07decd7 implement the vsldoi intrinsic.
llvm-svn: 27139
2006-03-26 00:41:48 +00:00
Chris Lattner 5c0c762443 fix the pattern for vandc, it's NOT vnand
llvm-svn: 27136
2006-03-25 23:10:40 +00:00
Chris Lattner e8c1d04051 add patterns for VANDC/VNOR, implementing
CodeGen/PowerPC/eqv-andc-orc-nor.ll:VNOR/VANDC

llvm-svn: 27135
2006-03-25 23:05:29 +00:00
Chris Lattner b6e2d0027a Add some comments.
llvm-svn: 27133
2006-03-25 23:00:56 +00:00
Chris Lattner 3de9286e09 add a vnot helper node for matching 'not' on vectors
llvm-svn: 27132
2006-03-25 23:00:08 +00:00
Chris Lattner f6e3b957b8 Fix a bug in ISD::isBuildVectorAllOnesInteger that caused it to always return
false

llvm-svn: 27131
2006-03-25 22:59:28 +00:00
Chris Lattner c2d2811a07 Implement the ISD::isBuildVectorAllOnesInteger predicate
llvm-svn: 27130
2006-03-25 22:57:01 +00:00
Chris Lattner dc1eab5886 Don't call SimplifyDemandedBits on vectors
llvm-svn: 27128
2006-03-25 22:19:00 +00:00
Chris Lattner b3617beb52 Add some logical operations
llvm-svn: 27127
2006-03-25 22:16:05 +00:00
Chris Lattner d70d9f5b24 Don't crash on packed logical ops
llvm-svn: 27125
2006-03-25 21:58:26 +00:00
Chris Lattner e8e7ac465d Teach BinaryOperator::createNot to work with packed integer types
llvm-svn: 27124
2006-03-25 21:54:21 +00:00
Jim Laskey b434464d1c Cast instruction not inserted into basic block.
llvm-svn: 27122
2006-03-25 18:40:47 +00:00
Evan Cheng 3e4d38eea5 Added missing (any_extend (load ...)) patterns.
llvm-svn: 27120
2006-03-25 09:45:48 +00:00
Evan Cheng 2bc0941e2a Build arbitrary vector with more than 2 distinct scalar elements with a
series of unpack and interleave ops.

llvm-svn: 27119
2006-03-25 09:37:23 +00:00
Chris Lattner 1b4bb22f8a implement a bunch of intrinsics
llvm-svn: 27118
2006-03-25 08:01:02 +00:00
Chris Lattner 2a85fa1f79 Move all Altivec stuff out into a new PPCInstrAltivec.td file.
Add a bunch of patterns for different datatypes, e.g. bit_convert, undef and
zero vector support.

llvm-svn: 27117
2006-03-25 07:51:43 +00:00
Chris Lattner 1cb91b3cd9 Add some basic patterns for other datatypes
llvm-svn: 27116
2006-03-25 07:39:07 +00:00
Chris Lattner 3a66a75108 add all supported formats to the vector register file
llvm-svn: 27115
2006-03-25 07:36:56 +00:00
Chris Lattner f653cdd3f9 Add support for __builtin_altivec_vnmsubfp /vmaddfp
llvm-svn: 27112
2006-03-25 07:05:55 +00:00
Chris Lattner 5d70a7c4a5 #include Intrinsics.h into all dag isels
llvm-svn: 27109
2006-03-25 06:47:10 +00:00
Chris Lattner 71b8c980da Implement Intrinsic::getName
llvm-svn: 27108
2006-03-25 06:32:47 +00:00
Chris Lattner 2771e2c960 Codegen things like:
<int -1, int -1, int -1, int -1>
and
 <int 65537, int 65537, int 65537, int 65537>

Using things like:
  vspltisb v0, -1
and:
  vspltish v0, 1

instead of using constant pool loads.

This implements CodeGen/PowerPC/vec_splat.ll:splat_imm_i{32|16}.

llvm-svn: 27106
2006-03-25 06:12:06 +00:00
Evan Cheng 79e500ec74 Added SSE cacheability ops
llvm-svn: 27103
2006-03-25 06:03:26 +00:00
Evan Cheng 1aaa7280cd Instruction encoding bug
llvm-svn: 27102
2006-03-25 06:00:03 +00:00
Chris Lattner 9dc2d17ae6 Add new intrinsic node definitions for tblgen use
llvm-svn: 27100
2006-03-25 02:29:35 +00:00
Evan Cheng 6f7d31ea50 Added 128-bit packed integer subtraction.
llvm-svn: 27096
2006-03-25 01:33:37 +00:00
Evan Cheng 8e481df625 Added CVTTPS2PI.
llvm-svn: 27095
2006-03-25 01:31:59 +00:00
Evan Cheng 980c4d5b46 Added CVTSS2SI.
llvm-svn: 27094
2006-03-25 01:00:18 +00:00
Evan Cheng e7ee6a5e32 Support for scalar to vector with zero extension.
llvm-svn: 27091
2006-03-24 23:15:12 +00:00
Chris Lattner 313229c74b fix inverted conditional
llvm-svn: 27089
2006-03-24 22:49:42 +00:00
Jim Laskey bb84eae239 D'oh - should be even numbered.
llvm-svn: 27088
2006-03-24 22:48:02 +00:00
Evan Cheng 2f0277bf48 Added LDMXCSR
llvm-svn: 27087
2006-03-24 22:28:37 +00:00
Chris Lattner 97599f1211 plug the intrinsics into the patterns for movmsk*
llvm-svn: 27083
2006-03-24 21:49:18 +00:00
Jim Laskey f0729b4067 Add dwarf register numbering to register data.
llvm-svn: 27081
2006-03-24 21:15:58 +00:00
Jim Laskey 3b338d5566 Add support for dwarf register numbering.
llvm-svn: 27080
2006-03-24 21:13:21 +00:00
Jim Laskey 3324c7236f Hack no more.
llvm-svn: 27079
2006-03-24 21:10:36 +00:00
Chris Lattner 9f9b6116e1 add another note
llvm-svn: 27077
2006-03-24 20:04:27 +00:00
Chris Lattner 0affd76182 add a note
llvm-svn: 27076
2006-03-24 19:59:17 +00:00
Chris Lattner c6b13e21cc Shuffle some includes around
llvm-svn: 27073
2006-03-24 18:52:35 +00:00
Evan Cheng 68d9bf26c8 Only do vector shuffle for {x,x,y,y} cases when SCALAR_TO_VECTOR is free.
llvm-svn: 27071
2006-03-24 18:45:20 +00:00
Chris Lattner 58a9622957 expose intrinsic info to the targets.
llvm-svn: 27070
2006-03-24 18:44:11 +00:00
Chris Lattner d589dd1352 Fix a bad JIT encoding of VPERM. Why is VPERM D,A,B,C but vfmadd is D,A,C,B ??
llvm-svn: 27069
2006-03-24 18:24:43 +00:00
Chris Lattner f2286d5917 Like the comment says, prefer to use the implicit add done by [r+r] addressing
modes than emitting an explicit add and using a base of r0.  This implements
Regression/CodeGen/PowerPC/mem-rr-addr-mode.ll

llvm-svn: 27068
2006-03-24 17:58:06 +00:00
Jim Laskey dd3fa41f0f Fix indent.
llvm-svn: 27065
2006-03-24 10:08:23 +00:00
Jim Laskey 864e444749 Clean up some commentary.
llvm-svn: 27064
2006-03-24 10:00:56 +00:00
Jim Laskey 53f1ecc560 Rename for truth in advertising.
llvm-svn: 27063
2006-03-24 09:50:27 +00:00
Chris Lattner a90b7141ed Disable the i32->float G5 optimization. It is unsafe, as documented in the
comment.

This fixes 177.mesa, and McCat/09-vor with the td scheduler.

llvm-svn: 27060
2006-03-24 07:53:47 +00:00
Chris Lattner ab882abce8 add support for using vxor to build zero vectors. This implements
Regression/CodeGen/PowerPC/vec_zero.ll

llvm-svn: 27059
2006-03-24 07:48:08 +00:00
Evan Cheng 082c8785ef Handle BUILD_VECTOR with all zero elements.
llvm-svn: 27056
2006-03-24 07:29:27 +00:00
Chris Lattner 77e271cb4e prefer to generate constant pool loads over splats. This prevents us from
using a splat for {1.0,1.0,1.0,1.0}

llvm-svn: 27055
2006-03-24 07:29:17 +00:00
Chris Lattner 87b1dddb1c fix spello
llvm-svn: 27053
2006-03-24 07:15:07 +00:00
Chris Lattner f365f5f0c1 Fix spello
llvm-svn: 27052
2006-03-24 07:14:34 +00:00
Chris Lattner 5821a6a17a add the actual cost to the debug info
llvm-svn: 27051
2006-03-24 07:14:00 +00:00
Chris Lattner f5efddf80b Gabor points out that we can't spell. :)
llvm-svn: 27049
2006-03-24 07:12:19 +00:00
Evan Cheng a91d8a5b43 All v2f64 shuffle cases can be handled.
llvm-svn: 27044
2006-03-24 06:40:32 +00:00
Evan Cheng 2595a687da More efficient v2f64 shuffle using movlhps, movhlps, unpckhpd, and unpcklpd.
llvm-svn: 27040
2006-03-24 02:58:06 +00:00
Evan Cheng 6afb3c2de7 A new entry
llvm-svn: 27039
2006-03-24 02:57:03 +00:00