Commit Graph

13778 Commits

Author SHA1 Message Date
Chris Lattner fa5aa396c2 Make the BUILD_VECTOR lowering code much more aggressive w.r.t. constant vectors.
Remove some done items from the todo list.

llvm-svn: 27729
2006-04-16 01:01:29 +00:00
Chris Lattner 9095186deb Fix a bug in the 'shuffle(undef, x, mask) -> shuffle(x, undef, mask')' xform
Make the insert/extract elt -> shuffle code more aggressive.

This fixes CodeGen/PowerPC/vec_shuffle.ll

llvm-svn: 27728
2006-04-16 00:51:47 +00:00
Chris Lattner 34cebe785d Canonicalize shuffle(undef, x, mask) -> shuffle(x, undef, mask').
llvm-svn: 27727
2006-04-16 00:03:56 +00:00
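
The canonicalization above reads most clearly at the source level. A hedged sketch in C, using clang's __builtin_shufflevector (a modern builtin, not part of the 2006 toolchain); the uninitialized vector merely stands in for an undef operand:

typedef int v4i __attribute__((vector_size(16)));

/* Before: the shuffle reads lanes 4-7, i.e. entirely from the second
   operand, while the first operand is undef. */
v4i before(v4i x) {
  v4i undef;  /* deliberately uninitialized: models undef */
  return __builtin_shufflevector(undef, x, 4, 5, 6, 7);
}

/* After canonicalization: x becomes the first operand and the mask
   indices 4-7 are remapped down to 0-3 (the mask'). */
v4i after(v4i x) {
  v4i undef;  /* deliberately uninitialized: models undef */
  return __builtin_shufflevector(x, undef, 0, 1, 2, 3);
}
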
Chris Lattner 24acbe46c0 Fix a crash when faced with a shuffle vector that has an undef in its mask.
llvm-svn: 27726
2006-04-15 23:48:05 +00:00
Chris Lattner 873202fabd Add patterns for matching vnots with bit converted inputs. Most of these will
go away when I start using Evan's binop type canonicalizer.

llvm-svn: 27725
2006-04-15 23:45:24 +00:00
Chris Lattner 41df12ff4c Add a new vnot_conv predicate for matching vnot's where the allones vector is
bitconverted from some other type.

llvm-svn: 27724
2006-04-15 23:39:14 +00:00
Chris Lattner 7e7ad593cc Make these predicates return true for bit_convert(buildvector)'s as well as
buildvectors.

llvm-svn: 27723
2006-04-15 23:38:00 +00:00
Evan Cheng 8f1d801389 More encoding bugs
llvm-svn: 27722
2006-04-15 06:10:09 +00:00
Evan Cheng 91944e8699 pslldrm, psrawrm, etc. encoding bug
llvm-svn: 27721
2006-04-15 05:59:08 +00:00
Evan Cheng 1220b31a31 hsubp{s|d} encoding bug
llvm-svn: 27720
2006-04-15 05:52:42 +00:00
Evan Cheng 6222cf2a36 Silly bug
llvm-svn: 27719
2006-04-15 05:37:34 +00:00
Evan Cheng 65bb720a8b Do not use movs{h|l}dup for a shuffle with a single non-undef node.
llvm-svn: 27718
2006-04-15 03:13:24 +00:00
Chris Lattner 39fac448d6 significant cleanups to code that uses insert/extractelt heavily. This builds
maximal shuffles out of them where possible.

llvm-svn: 27717
2006-04-15 01:39:45 +00:00
Evan Cheng 0ba896c75b Added SSE (and other) entries to foldMemoryOperand().
llvm-svn: 27716
2006-04-14 23:33:27 +00:00
Evan Cheng 00a5b3d9d3 Some clean up
llvm-svn: 27715
2006-04-14 23:32:40 +00:00
Chris Lattner 559c8ba466 Allow undef in a shuffle mask
llvm-svn: 27714
2006-04-14 23:19:08 +00:00
Chris Lattner 0875d94567 Move these ctors out of line
llvm-svn: 27713
2006-04-14 22:20:32 +00:00
Evan Cheng 5d247f81c1 Last few SSE3 intrinsics.
llvm-svn: 27711
2006-04-14 21:59:03 +00:00
Chris Lattner 3323ce165d Teach scalarrepl to promote unions of vectors and floats, producing
insert/extractelement operations.  This implements
Transforms/ScalarRepl/vector_promote.ll

llvm-svn: 27710
2006-04-14 21:42:41 +00:00
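
A hedged sketch of the kind of source this change lets scalarrepl promote (names here are illustrative, not from the test): a union of a vector and scalar floats, whose field accesses become insert/extractelement operations instead of memory traffic:

typedef float v4f __attribute__((vector_size(16)));

float second_lane(v4f v) {
  union { v4f vec; float f[4]; } u;
  u.vec = v;        /* whole-vector store */
  return u.f[1];    /* promoted to an extractelement of lane 1 */
}
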
Evan Cheng 3bd605397b Misc. SSE2 intrinsics: clflush, lfence, mfence
llvm-svn: 27699
2006-04-14 07:43:12 +00:00
Evan Cheng e349d01acf We were not adjusting the frame size to ensure proper alignment when alloca /
VLA are present in the function. This causes a crash when a leaf function
allocates space on the stack that is used for loads and stores with 128-bit
SSE instructions.

llvm-svn: 27698
2006-04-14 07:26:43 +00:00
Evan Cheng 8d76f3922b New entry
llvm-svn: 27697
2006-04-14 07:24:04 +00:00
Reid Spencer ef56d92d6c Don't print out the install command for Intrinsics.gen unless in VERBOSE mode.
llvm-svn: 27696
2006-04-14 06:32:31 +00:00
Chris Lattner 086e986e94 Make this assertion better
llvm-svn: 27695
2006-04-14 06:08:35 +00:00
Chris Lattner 4211ca9108 Move the rest of the PPCTargetLowering::LowerOperation cases out into
separate functions, for simplicity and code clarity.

llvm-svn: 27693
2006-04-14 06:01:58 +00:00
Chris Lattner 19e9055eb5 Pull the VECTOR_SHUFFLE and BUILD_VECTOR lowering code out into separate
functions, which makes the code much cleaner :)

llvm-svn: 27692
2006-04-14 05:19:18 +00:00
Chris Lattner 68c650ca45 Implement value numbering for vector operations, implementing
Regression/Transforms/GCSE/vectorops.ll

llvm-svn: 27691
2006-04-14 05:10:20 +00:00
Evan Cheng eb0063a34f pcmpeq* and pcmpgt* intrinsics.
llvm-svn: 27685
2006-04-14 01:39:53 +00:00
Evan Cheng 16287444ff psll*, psrl*, and psra* intrinsics.
llvm-svn: 27684
2006-04-14 00:14:05 +00:00
Reid Spencer 64f6c11c59 Remove the .cvsignore file so this directory can be pruned.
llvm-svn: 27683
2006-04-13 22:00:10 +00:00
Reid Spencer 497ecf6840 Remove .cvsignore so that this directory can be pruned.
llvm-svn: 27682
2006-04-13 21:59:03 +00:00
Andrew Lenharth 4aa3001625 Handle some kernel code that ends in [0 x sbyte]. I think this is safe.
llvm-svn: 27672
2006-04-13 19:31:49 +00:00
Reid Spencer 709eaacb36 Expand some code with temporary variables to rid ourselves of the warning
about "dereferencing type-punned pointer will break strict-aliasing rules"

llvm-svn: 27671
2006-04-13 18:29:58 +00:00
Evan Cheng a84319719c Doh. PANDrm, etc. are not commutable.
llvm-svn: 27668
2006-04-13 18:11:28 +00:00
Chris Lattner 883fb053bd Force non-darwin targets to use a static relo model. This fixes PR734,
tested by CodeGen/Generic/vector.ll

llvm-svn: 27657
2006-04-13 17:10:48 +00:00
Chris Lattner 5879efe0c8 add a note, move an altivec todo to the altivec list.
llvm-svn: 27654
2006-04-13 16:48:00 +00:00
Andrew Lenharth 92cf71f6d7 linear -> constant time
llvm-svn: 27652
2006-04-13 13:43:31 +00:00
Reid Spencer 9857229aba Add the README files to the distribution.
llvm-svn: 27651
2006-04-13 06:39:24 +00:00
Evan Cheng ed3996743f psad, pmax, pmin intrinsics.
llvm-svn: 27647
2006-04-13 06:11:45 +00:00
Evan Cheng 58dad55959 Various SSE2 packed integer intrinsics: pmulhuw, pavgw, etc.
llvm-svn: 27645
2006-04-13 05:24:54 +00:00
Evan Cheng e4f97ccf7f X86 SSE2 supports v8i16 multiplication
llvm-svn: 27644
2006-04-13 05:10:25 +00:00
Evan Cheng d2eb662415 Update
llvm-svn: 27643
2006-04-13 05:09:45 +00:00
Evan Cheng b3fe00bdc6 padds{b|w}, paddus{b|w}, psubs{b|w}, psubus{b|w} intrinsics.
llvm-svn: 27639
2006-04-13 00:43:35 +00:00
Evan Cheng 0aab735a1a Naming inconsistency.
llvm-svn: 27638
2006-04-13 00:00:23 +00:00
Evan Cheng c88afc36a9 SSE / SSE2 conversion intrinsics.
llvm-svn: 27637
2006-04-12 23:42:44 +00:00
Evan Cheng 92232307d0 All "integer" logical ops (pand, por, pxor) are now promoted to v2i64.
Clean up and fix various logical ops issues.

llvm-svn: 27633
2006-04-12 21:21:57 +00:00
Evan Cheng 119266ea92 Promote vector AND, OR, and XOR
llvm-svn: 27632
2006-04-12 21:20:24 +00:00
Reid Spencer 7c8fef2cd1 Make sure CVS versions of yacc and lex files get distributed.
llvm-svn: 27630
2006-04-12 20:57:05 +00:00
Reid Spencer 13a1a7a4a6 Get rid of a signed/unsigned compare warning.
llvm-svn: 27625
2006-04-12 19:28:15 +00:00
Chris Lattner 147e50e1c5 Add a new way to match vector constants, which makes it easier to bang bits of
different types.

Codegen spltw(0x7FFFFFFF) and spltw(0x80000000) without a constant pool load,
implementing PowerPC/vec_constants.ll:test1.  This compiles:

typedef float vf __attribute__ ((vector_size (16)));
typedef int vi __attribute__ ((vector_size (16)));
void test(vi *P1, vi *P2, vf *P3) {
  *P1 &= (vi){0x80000000,0x80000000,0x80000000,0x80000000};
  *P2 &= (vi){0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF};
  *P3 = vec_abs((vector float)*P3);
}

to:

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        vspltisw v0, -1
        vslw v0, v0, v0
        lvx v1, 0, r3
        vand v1, v1, v0
        stvx v1, 0, r3
        lvx v1, 0, r4
        vandc v1, v1, v0
        stvx v1, 0, r4
        lvx v1, 0, r5
        vandc v0, v1, v0
        stvx v0, 0, r5
        mtspr 256, r2
        blr

instead of (with two constant pool entries):

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        li r6, lo16(LCPI1_0)
        lis r7, ha16(LCPI1_0)
        li r8, lo16(LCPI1_1)
        lis r9, ha16(LCPI1_1)
        lvx v0, r7, r6
        lvx v1, 0, r3
        vand v0, v1, v0
        stvx v0, 0, r3
        lvx v0, r9, r8
        lvx v1, 0, r4
        vand v1, v1, v0
        stvx v1, 0, r4
        lvx v1, 0, r5
        vand v0, v1, v0
        stvx v0, 0, r5
        mtspr 256, r2
        blr

GCC produces (with 2 cp entries):

_test:
        mfspr r0,256
        stw r0,-4(r1)
        oris r0,r0,0xc00c
        mtspr 256,r0
        lis r2,ha16(LC0)
        lis r9,ha16(LC1)
        la r2,lo16(LC0)(r2)
        lvx v0,0,r3
        lvx v1,0,r5
        la r9,lo16(LC1)(r9)
        lwz r12,-4(r1)
        lvx v12,0,r2
        lvx v13,0,r9
        vand v0,v0,v12
        stvx v0,0,r3
        vspltisw v0,-1
        vslw v12,v0,v0
        vandc v1,v1,v12
        stvx v1,0,r5
        lvx v0,0,r4
        vand v0,v0,v13
        stvx v0,0,r4
        mtspr 256,r12
        blr

llvm-svn: 27624
2006-04-12 19:07:14 +00:00
Chris Lattner b19a5c661b Turn casts into getelementptr's when possible. This enables SROA to be more
aggressive in some cases where LLVMGCC 4 is inserting casts for no reason.

This implements InstCombine/cast.ll:test27/28.

llvm-svn: 27620
2006-04-12 18:09:35 +00:00
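
A hedged sketch of the pattern in C (hypothetical names): a struct pointer cast to the type of its first member, which can instead be expressed as a getelementptr to that field, keeping the aggregate visible to SROA:

struct pair { int first; int second; };

int read_first(struct pair *p) {
  int *ip = (int *)p;   /* this cast can become a GEP to p->first */
  return *ip;
}
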
Reid Spencer 175d57c4bc Don't emit useless warning messages.
llvm-svn: 27617
2006-04-12 17:56:16 +00:00
Chris Lattner 74cf9ff761 Rename get_VSPLI_elt -> get_VSPLTI_elt
Canonicalize BUILD_VECTOR's that match VSPLTI's into a single type for each
form, eliminating a bunch of Pat patterns in the .td file and allowing us to
CSE stuff more aggressively.  This implements
PowerPC/buildvec_canonicalize.ll:VSPLTI

llvm-svn: 27614
2006-04-12 17:37:20 +00:00
Evan Cheng e2157c6e41 Promote v4i32, v8i16, v16i8 load to v2i64 load.
llvm-svn: 27612
2006-04-12 17:12:36 +00:00
Chris Lattner e318a7574e Ensure that zero vectors are always v4i32, which forces them to CSE with
each other.  This implements CodeGen/PowerPC/vxor-canonicalize.ll

llvm-svn: 27609
2006-04-12 16:53:28 +00:00
Evan Cheng be8a8933e6 Vector type promotion for ISD::LOAD and ISD::SELECT
llvm-svn: 27606
2006-04-12 16:33:18 +00:00
Chris Lattner d3b504ae10 Implement support for the formal_arguments node. To get this, targets should custom legalize it and remove their XXXTargetLowering::LowerArguments overload.
llvm-svn: 27604
2006-04-12 16:20:43 +00:00
Evan Cheng 29be057d92 Various SSE2 conversion intrinsics
llvm-svn: 27603
2006-04-12 05:20:24 +00:00
Chris Lattner 417b96b6dd Don't memoize vloads in the load map! Don't memoize them anywhere here, let
getNode do it.  This fixes CodeGen/Generic/2006-04-11-vecload.ll

llvm-svn: 27602
2006-04-12 03:25:41 +00:00
Evan Cheng 70c74a3ced Added __builtin_ia32_storelv4si, __builtin_ia32_movqv4si,
__builtin_ia32_loadlv4si, __builtin_ia32_loaddqu, __builtin_ia32_storedqu.

llvm-svn: 27599
2006-04-11 22:28:25 +00:00
Nate Begeman f19bcd5177 Fix SingleSource/UnitTests/Vector/sumarray-dbl
llvm-svn: 27594
2006-04-11 19:44:43 +00:00
Nate Begeman 1bb132099f Fix PR727, correctly handling large stack alignments on ppc
llvm-svn: 27593
2006-04-11 19:29:21 +00:00
Chris Lattner aaa04230bd we have a shuffle instr, add an example.
llvm-svn: 27592
2006-04-11 18:47:03 +00:00
Evan Cheng 6b60357f4a gcc lowers SSE prefetch into the generic prefetch intrinsic. Need to add support
later.

llvm-svn: 27591
2006-04-11 18:04:57 +00:00
Evan Cheng 6ea715af28 Misc. intrinsics.
llvm-svn: 27590
2006-04-11 17:35:57 +00:00
Jim Laskey 02b3b72bfc Suppress the debug label when not in debug mode.
llvm-svn: 27588
2006-04-11 08:11:53 +00:00
Evan Cheng 09a956271a movnt* and maskmovdqu intrinsics
llvm-svn: 27587
2006-04-11 06:57:30 +00:00
Evan Cheng 7256b0ae05 Only get Tmp2 for cases where the number of operands is > 1. Fixed the handling of void returns.
llvm-svn: 27586
2006-04-11 06:33:39 +00:00
Chris Lattner 6cf3bbbe17 add some todos
llvm-svn: 27580
2006-04-11 02:00:08 +00:00
Chris Lattner e4db08a2f1 Vector function results go into V2 according to GCC. The darwin ABI doc
doesn't say where they go :-/

llvm-svn: 27579
2006-04-11 01:38:39 +00:00
Chris Lattner 2eb22eef7d Add basic support for legalizing returns of vectors
llvm-svn: 27578
2006-04-11 01:31:51 +00:00
Chris Lattner 92533cfb4a Move some return-handling code from lowerarguments to the ISD::RET handling stuff.
No functionality change.

llvm-svn: 27577
2006-04-11 01:21:43 +00:00
Evan Cheng 12ba3e23d0 Added support for _mm_move_ss and _mm_move_sd.
llvm-svn: 27575
2006-04-11 00:19:04 +00:00
Jim Laskey dca2655daa Use existing information.
llvm-svn: 27574
2006-04-10 23:09:19 +00:00
Chris Lattner 2d37f920ad Implement vec_shuffle.ll:test3
llvm-svn: 27573
2006-04-10 23:06:36 +00:00
Chris Lattner fbb77a408b Implement InstCombine/vec_shuffle.ll:test[12]
llvm-svn: 27571
2006-04-10 22:45:52 +00:00
Evan Cheng f8ac02283c Remove some bogus patterns; clean up.
llvm-svn: 27569
2006-04-10 22:35:16 +00:00
Chris Lattner d99f57c1e1 add a note
llvm-svn: 27567
2006-04-10 21:51:03 +00:00
Evan Cheng 051de9a82b Remove an entry that is now done.
llvm-svn: 27565
2006-04-10 21:42:57 +00:00
Evan Cheng 76112c3cb8 Added some missing shuffle patterns.
llvm-svn: 27564
2006-04-10 21:42:19 +00:00
Evan Cheng 664fcba5fa Correct an entry
llvm-svn: 27563
2006-04-10 21:41:39 +00:00
Evan Cheng 395fa3d2a6 movups / movupd
llvm-svn: 27562
2006-04-10 21:11:06 +00:00
Andrew Lenharth a9cdcca3c3 Add a simple pass to make sure that all (non-library) calls to malloc and free
are visible to analysis as intrinsics.  That is, make sure someone doesn't pass
free around by address in some struct (as happens in, say, 176.gcc).

This doesn't get rid of any indirect calls; it just ensures calls to free and malloc
are always direct.

llvm-svn: 27560
2006-04-10 19:26:09 +00:00
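
A hedged sketch of the pattern the pass guards against, with hypothetical names (the commit cites 176.gcc):

#include <stdlib.h>

struct hooks {
  void *(*alloc)(size_t);
  void (*dealloc)(void *);
};

void demo(void) {
  struct hooks h = { malloc, free };  /* free escapes by address */
  void *p = h.alloc(32);
  h.dealloc(p);  /* indirect call; without care, analysis no longer
                    sees a direct call to free here */
}
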
Evan Cheng cb73b8d419 Missing break
llvm-svn: 27559
2006-04-10 18:54:36 +00:00
Evan Cheng 617a6a812e Conditional move of vector types.
llvm-svn: 27556
2006-04-10 07:23:14 +00:00
Evan Cheng 014849e121 New entries
llvm-svn: 27555
2006-04-10 07:22:03 +00:00
Evan Cheng c9ed8e4c1a Use movaps to do VR128 reg-to-reg copies for now. It's shorter and available for SSE1.
llvm-svn: 27554
2006-04-10 07:21:31 +00:00
Chris Lattner 3a68f3c3ca properly mark vector selects as expanded to select_cc
llvm-svn: 27544
2006-04-08 22:59:15 +00:00
Chris Lattner 0a3d1bbca4 Add VRRC select support
llvm-svn: 27543
2006-04-08 22:45:08 +00:00
Chris Lattner 02274a5265 Add code generator support for VSELECT
llvm-svn: 27542
2006-04-08 22:22:57 +00:00
Nate Begeman 3f9c17906f Disable switch lowering for targets based on the selection dag isel,
letting the code generator handle them directly.

llvm-svn: 27539
2006-04-08 19:46:55 +00:00
Chris Lattner d9e80f4516 Implement PowerPC/CodeGen/vec_splat.ll:spltish to use vspltish instead of a
constant pool load.

llvm-svn: 27538
2006-04-08 07:14:26 +00:00
Chris Lattner d71a1f946d Change the interface to the predicate that determines if vsplti* can be used.
No functionality changes.

llvm-svn: 27536
2006-04-08 06:46:53 +00:00
Reid Spencer cf905223c5 Initialize SDOperand values because the gcc 4.0.2 compiler complains about
them.

llvm-svn: 27534
2006-04-08 05:38:03 +00:00
Chris Lattner e1401e3610 Canonicalize vvector_shuffle(x,x) -> vvector_shuffle(x,undef) to enable patterns
to match again :)

llvm-svn: 27533
2006-04-08 05:34:25 +00:00
Chris Lattner a93b4b5866 Add constant replacement for insertelement/vectorshuffle constant exprs
llvm-svn: 27532
2006-04-08 05:09:48 +00:00
Chris Lattner 098c01e94e Codegen shufflevector as VVECTOR_SHUFFLE
llvm-svn: 27529
2006-04-08 04:15:24 +00:00
Chris Lattner 101ea66813 add a sanity check: LegalizeOp should return a value that is the same type
as its input.

llvm-svn: 27528
2006-04-08 04:13:17 +00:00
Chris Lattner fc72a6b72d use isValidOperands instead of duplicating checks
llvm-svn: 27527
2006-04-08 04:09:19 +00:00
Chris Lattner 931565caf6 Regenerate
llvm-svn: 27526
2006-04-08 04:09:02 +00:00