Commit Graph

1332 Commits

Author SHA1 Message Date
Dan Gohman 4a87660127 Remove an unused variable.
llvm-svn: 57621
2008-10-16 01:47:47 +00:00
Evan Cheng 3b0f5e4d61 - Add target lowering hooks that specify which setcc conditions are illegal,
i.e. conditions that cannot be checked with a single instruction. For example,
SETONE and SETUEQ on x86.
- Teach the legalizer to implement an *illegal* setcc as an AND / OR of a number of
legal setcc nodes. For now, only FP conditions are implemented, e.g. SETONE is
implemented as SETO & SETNE, SETUEQ as SETUO | SETEQ.
- Move x86 target over.

llvm-svn: 57542
2008-10-15 02:05:31 +00:00
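For illustration, a tiny standalone C++ sketch of the split described in the commit above; the enum values, helper name, and combine representation are invented for this example and are not the actual target lowering hooks or legalizer code:

// Illustrative sketch: an "illegal" FP condition is rewritten as a logical
// combination of two legal ones, e.g. SETONE -> SETO AND SETNE.
#include <cassert>
#include <cstdio>

enum CondCode { SETO, SETUO, SETEQ, SETNE, SETONE, SETUEQ };
enum CombineOp { AND_OP, OR_OP };

struct Split { CondCode CC1, CC2; CombineOp Op; };

Split splitIllegalFPSetCC(CondCode CC) {
  switch (CC) {
  case SETONE: return { SETO,  SETNE, AND_OP };   // ordered and not equal
  case SETUEQ: return { SETUO, SETEQ, OR_OP  };   // unordered or equal
  default:
    assert(false && "condition is already legal in this sketch");
    return { SETO, SETO, AND_OP };
  }
}

int main() {
  Split S = splitIllegalFPSetCC(SETUEQ);
  std::printf("SETUEQ -> cc %d %s cc %d\n", int(S.CC1),
              S.Op == AND_OP ? "AND" : "OR", int(S.CC2));
}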
Dan Gohman e7ced74558 FastISel support for exception-handling constructs.
- Move the EH landing-pad code and adjust it so that it works
   with FastISel as well as with SDISel.
 - Add FastISel support for @llvm.eh.exception and
   @llvm.eh.selector.

llvm-svn: 57539
2008-10-14 23:54:11 +00:00
Evan Cheng 07d53b1d33 Rename LoadX to LoadExt.
llvm-svn: 57526
2008-10-14 21:26:46 +00:00
Chris Lattner 2753955fc0 Change CALLSEQ_BEGIN and CALLSEQ_END to take TargetConstant's as
parameters instead of raw Constants.  This prevents the constants from
being selected by the isel pass, fixing PR2735.

llvm-svn: 57385
2008-10-11 22:08:30 +00:00
Dale Johannesen 4f0bd68cfe Add a "loses information" return value to APFloat::convert
and APFloat::convertToInteger.  Restore return value to
IEEE754.  Adjust all users accordingly.

llvm-svn: 57329
2008-10-09 23:00:39 +00:00
Evan Cheng 94d14f2d45 Fix PR2850 and PR2863. Only generate movddup for 128-bit SSE vector shuffles.
llvm-svn: 57210
2008-10-06 21:13:08 +00:00
Dale Johannesen 8c36a1c09c Make atomic Swap work, 64-bit on x86-32.
Make it all work in non-pic mode.

llvm-svn: 57034
2008-10-03 22:25:52 +00:00
Dale Johannesen 5d60c1ebb1 Pass MemOperand through for 64-bit atomics on 32-bit,
incidentally making the case where the memop is a
pointer deref work.  Fix cmp-and-swap regression.

llvm-svn: 57027
2008-10-03 19:41:08 +00:00
Dan Gohman 0d1e9a8e04 Switch the MachineOperand accessors back to the short names like
isReg, etc., from isRegister, etc.

llvm-svn: 57006
2008-10-03 15:45:36 +00:00
Dale Johannesen 867d549fce Handle some 64-bit atomics on x86-32, some of the time.
llvm-svn: 56963
2008-10-02 18:53:47 +00:00
Bill Wendling 68f12ee567 Implement the -fno-builtin option in the front-end, not in the back-end.
llvm-svn: 56900
2008-10-01 00:59:58 +00:00
Bill Wendling 1782584f56 Just don't transform this memset into "bzero" if no-builtin is specified.
llvm-svn: 56888
2008-09-30 22:05:33 +00:00
Bill Wendling bd09262e97 Add the new `-no-builtin' flag. This flag is meant to mimic the GCC
`-fno-builtin' flag. Currently, it's used to replace "memset" with "_bzero"
instead of "__bzero" on Darwin10+. This arguably violates the meaning of this
flag, but is currently sufficient. The meaning of this flag should become more
specific over time.

llvm-svn: 56885
2008-09-30 21:22:07 +00:00
Dale Johannesen f61a84ec43 Remove misuse of ReplaceNodeResults for atomics with
valid types.  No functional change.

llvm-svn: 56808
2008-09-29 22:25:26 +00:00
Evan Cheng 3774b2f292 Re-apply 56683 with fixes.
llvm-svn: 56748
2008-09-27 01:56:22 +00:00
Bill Wendling c966a737c5 Temporarily reverting r56683. This is causing a failure during the build of llvm-gcc:
/Volumes/Gir/devel/llvm/clean/llvm-gcc.obj/./gcc/xgcc -B/Volumes/Gir/devel/llvm/clean/llvm-gcc.obj/./gcc/ -B/Volumes/Gir/devel/llvm/clean/llvm-gcc.install/i386-apple-darwin9.5.0/bin/ -B/Volumes/Gir/devel/llvm/clean/llvm-gcc.install/i386-apple-darwin9.5.0/lib/ -isystem /Volumes/Gir/devel/llvm/clean/llvm-gcc.install/i386-apple-darwin9.5.0/include -isystem /Volumes/Gir/devel/llvm/clean/llvm-gcc.install/i386-apple-darwin9.5.0/sys-include -mmacosx-version-min=10.4 -O2  -O2 -g -O2  -DIN_GCC    -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition  -isystem ./include  -fPIC -pipe -g -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED  -I. -I. -I../../llvm-gcc.src/gcc -I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include -I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include  -I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber -I/Volumes/Gir/devel/llvm/clean/llvm.obj/include -I/Volumes/Gir/devel/llvm/clean/llvm.src/include -fexceptions -fvisibility=hidden -DHIDE_EXPORTS -c ../../llvm-gcc.src/gcc/unwind-dw2-fde-darwin.c -o libgcc/./unwind-dw2-fde-darwin.o
Assertion failed: (TargetRegisterInfo::isVirtualRegister(regA) && TargetRegisterInfo::isVirtualRegister(regB) && "cannot update physical register live information"), function runOnMachineFunction, file /Volumes/Gir/devel/llvm/clean/llvm.src/lib/CodeGen/TwoAddressInstructionPass.cpp, line 311.
../../llvm-gcc.src/gcc/unwind-dw2.c:1527: internal compiler error: Abort trap
Please submit a full bug report,
with preprocessed source if appropriate.
See <URL:http://developer.apple.com/bugreporter> for instructions.
{standard input}:3521:non-relocatable subtraction expression, "_dwarf_reg_size_table" minus "L20$pb"
{standard input}:3521:symbol: "_dwarf_reg_size_table" can't be undefined in a subtraction expression
{standard input}:3520:non-relocatable subtraction expression, "_dwarf_reg_size_table" minus "L20$pb"
...

llvm-svn: 56703
2008-09-26 22:10:44 +00:00
Dan Gohman 6e0548336a Rename ConstantSDNode's getSignExtended to getSExtValue, for
consistency with ConstantInt, and re-implement it in terms
of ConstantInt's getSExtValue.

llvm-svn: 56700
2008-09-26 21:54:37 +00:00
Evan Cheng d77cbe8947 Fix @llvm.frameaddress codegen. FP elimination optimization should be disabled when frame address is desired. Also add support for depth > 0.
llvm-svn: 56683
2008-09-26 19:48:35 +00:00
Dale Johannesen 0e32a2c935 Add "inreg" field to CallSDNode (doesn't increase
its size).  Adjust various lowering functions to
pass this info through from CallInst.  Use it to
implement sseregparm returns on X86.  Remove
X86_ssecall calling convention.

llvm-svn: 56677
2008-09-26 19:31:26 +00:00
Evan Cheng 9dbe45c000 Prefer movlhps over punpcklqdq, etc. in more cases.
llvm-svn: 56627
2008-09-25 23:35:16 +00:00
Devang Patel 4c758ea3e0 Large mechanical patch.
s/ParamAttr/Attribute/g
s/PAList/AttrList/g
s/FnAttributeWithIndex/AttributeWithIndex/g
s/FnAttr/Attribute/g

This sets the stage 
- to implement function notes as function attributes and 
- to distinguish between function attributes and return value attributes.

This requires corresponding changes in llvm-gcc and clang.

llvm-svn: 56622
2008-09-25 21:00:45 +00:00
Evan Cheng 74c9ed91b0 With SSE3, when the source is a load or has multiple uses, favor movddup over shufp*, pshufd, etc. Without SSE3, or when the source is in a register, make use of movlhps.
llvm-svn: 56620
2008-09-25 20:50:48 +00:00
Evan Cheng 4751549f9b X86ISD::VZEXT_LOAD should produce and fold a chain.
llvm-svn: 56593
2008-09-24 23:26:36 +00:00
Evan Cheng e0add20c1b Properly handle 'm' inline asm constraints. If a GV is being selected for the addressing mode, it requires the same logic for PIC relative addressing, etc.
llvm-svn: 56526
2008-09-24 00:05:32 +00:00
Dan Gohman 918fe08a56 Arrange for FastISel code to have access to the MachineModuleInfo
object. This will be needed to support debug info.

llvm-svn: 56508
2008-09-23 21:53:34 +00:00
Evan Cheng 9e9426cb82 Support x86 specific inline asm modifier 'J'.
llvm-svn: 56483
2008-09-22 23:57:37 +00:00
Dale Johannesen 7a74e71489 Make log, log2, log10, exp, exp2 use Expand by
default.

llvm-svn: 56471
2008-09-22 21:57:32 +00:00
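As a rough illustration of what "Expand by default" means in the commit above, here is a toy per-opcode action table where anything a target has not marked Legal falls back to Expand (the real TargetLowering API keys off ISD opcodes and MVTs; the names below are made up):

// Toy model only: unset opcodes default to Expand, so FLOG/FEXP-style nodes
// get expanded (typically into a libcall) unless a target overrides them.
#include <iostream>
#include <map>
#include <string>

enum LegalizeAction { Legal, Expand };

class ToyLowering {
  std::map<std::string, LegalizeAction> Actions;  // opcode name -> action
public:
  void setOperationAction(const std::string &Op, LegalizeAction A) {
    Actions[Op] = A;
  }
  LegalizeAction getOperationAction(const std::string &Op) const {
    auto It = Actions.find(Op);
    return It == Actions.end() ? Expand : It->second;  // unset => Expand
  }
};

int main() {
  ToyLowering TL;
  TL.setOperationAction("FSQRT", Legal);  // hypothetical target override
  std::cout << "FLOG2: " << (TL.getOperationAction("FLOG2") == Expand ? "Expand" : "Legal") << "\n";
  std::cout << "FSQRT: " << (TL.getOperationAction("FSQRT") == Expand ? "Expand" : "Legal") << "\n";
}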
Arnold Schwaighofer 796a271c5f Change the calling convention used when tail call optimization is enabled from CC_X86_32_TailCall to CC_X86_32_FastCC.
llvm-svn: 56436
2008-09-22 14:50:07 +00:00
Bill Wendling 24c79f28b1 Reverting r56249. On further investigation, this functionality isn't needed.
Apologies for the thrashing.

llvm-svn: 56251
2008-09-16 21:48:12 +00:00
Bill Wendling 8bc392fb1d - Change "ExternalSymbolSDNode" to "SymbolSDNode".
- Add linkage to SymbolSDNode (default to external).
- Change ISD::ExternalSymbol to ISD::Symbol.
- Change ISD::TargetExternalSymbol to ISD::TargetSymbol

These changes pave the way to allowing SymbolSDNodes with non-external linkage.

llvm-svn: 56249
2008-09-16 21:12:30 +00:00
Dan Gohman 38453eebdc Remove isImm(), isReg(), and friends, in favor of
isImmediate(), isRegister(), and friends, to avoid confusion
about having two different names with the same meaning. I'm
not attached to the longer names, and would be ok with
changing to the shorter names if others prefer it.

llvm-svn: 56189
2008-09-13 17:58:21 +00:00
Dan Gohman d3fe174c53 Define CallSDNode, an SDNode subclass for use with ISD::CALL.
Currently it just holds the calling convention and flags
for isVarArgs and isTailCall.

And it has several utility methods, which eliminate magic
5+2*i and similar index computations in several places.

CallSDNodes are not CSE'd. Teach UpdateNodeOperands to handle
nodes that are not CSE'd gracefully.

llvm-svn: 56183
2008-09-13 01:54:27 +00:00
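A small standalone sketch of the kind of accessor this enables; the operand layout, header size, and method names below are hypothetical, purely to show how accessors replace magic 5+2*i index arithmetic:

// Toy call node: a fixed header of operands followed by (value, flags) pairs,
// one pair per argument. Callers ask for getArg(i) instead of Operands[5+2*i].
#include <cassert>
#include <cstdio>
#include <vector>

struct ToyCallNode {
  static const unsigned HeaderSize = 5;   // e.g. chain, callee, cc, varargs, tailcall
  std::vector<int> Operands;

  unsigned getNumArgs() const { return (Operands.size() - HeaderSize) / 2; }
  int getArg(unsigned i) const {          // replaces Operands[5 + 2*i]
    assert(i < getNumArgs());
    return Operands[HeaderSize + 2 * i];
  }
  int getArgFlags(unsigned i) const {     // replaces Operands[5 + 2*i + 1]
    assert(i < getNumArgs());
    return Operands[HeaderSize + 2 * i + 1];
  }
};

int main() {
  ToyCallNode N;
  N.Operands = {0, 1, 2, 3, 4, /*arg0*/ 100, /*flags0*/ 7, /*arg1*/ 200, /*flags1*/ 9};
  std::printf("args=%u arg1=%d flags1=%d\n", N.getNumArgs(), N.getArg(1), N.getArgFlags(1));
}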
Dan Gohman effb894453 Rename ConstantSDNode::getValue to getZExtValue, for consistency
with ConstantInt. This led to fixing a bug in TargetLowering.cpp
using getValue instead of getAPIntValue.

llvm-svn: 56159
2008-09-12 16:56:44 +00:00
Arnold Schwaighofer dd45bc25ac When tailcallopt is enabled all fastcc calls must have an aligned argument stack size. Add a test case.
llvm-svn: 56119
2008-09-11 20:28:43 +00:00
Dale Johannesen 58d084c05b The version of AtomicSDNode::AtomicSDNode used (only) for
cmp-and-swap reversed the Cmp and Swap arguments; comments
make it clear this is unintentional.  Unfortunately, the
x86 BE had a compensating reversal, which is removed here.
PPC is OK.

From inspection of the Alpha code I think it is OK, but
if somebody has that platform please check it out.  I
cannot test on that platform.

llvm-svn: 56091
2008-09-11 03:12:59 +00:00
Dan Gohman 39d82f902a Add X86FastISel support for static allocas, and references
to static allocas. As part of this change, refactor the
address mode code for loads and stores.

llvm-svn: 56066
2008-09-10 20:11:02 +00:00
Evan Cheng 710c3cf36a Fix a fastcc + sret bug. If fastcc and sret, the callee doesn't need to pop the hidden struct ptr. Re-enable fastcc.
llvm-svn: 56061
2008-09-10 18:25:29 +00:00
Dale Johannesen 4cc893bab6 Handle new intrinsics with vector arguments.
Patch by Paul Redmond.

llvm-svn: 56059
2008-09-10 17:31:40 +00:00
Duncan Sands 6d6a65310b Fix name.
llvm-svn: 56055
2008-09-10 13:22:10 +00:00
Duncan Sands 83e45acc25 Add trampoline support for the new FastCC calling
convention (not related to recent Ada testsuite
failures).

llvm-svn: 56054
2008-09-10 13:11:09 +00:00
Duncan Sands 536c399579 Turn off the new FastCC for the moment. It causes
a slew of Ada testsuite failures on x86-32 linux.
Seems to be related to the use of float.

llvm-svn: 56053
2008-09-10 13:09:24 +00:00
Anton Korobeynikov 6acb2219b6 Replace explicit pointer-size constants to TargetData query.
No functionality change.

llvm-svn: 55996
2008-09-09 18:22:57 +00:00
Anton Korobeynikov 2fd24e7713 Reapply 55899: First draft of EH support on x86/64-linux
Now with a fix which prevents a subtle codegen bug from triggering on darwin.
No fix for the bug itself though; it's still there.

llvm-svn: 55955
2008-09-08 21:12:47 +00:00
Anton Korobeynikov 4112634ca6 Reapply blindly reverted 55898: Implement FRAME_TO_ARGS_OFFSET for x86-64
llvm-svn: 55954
2008-09-08 21:12:11 +00:00
Bill Wendling 3871441861 Reverting r55898 as well. This wasn't reverted in the original revert...
llvm-svn: 55938
2008-09-08 19:42:32 +00:00
Bill Wendling 99b83712f3 Reverting r55898 to r55909. One of these patches was causing an ICE during the full bootstrap on Darwin:
/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.obj/./gcc/xgcc
-B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.obj/./gcc/
-B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.4.0/bin/
-B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.4.0/lib/
-isystem /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.4.0/include
-isystem /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.4.0/sys-include
-O2  -O2 -g -O2  -DIN_GCC    -W -Wall -Wwrite-strings
-Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition
-isystem ./include  -fPIC -pipe -g -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2
-D__GCC_FLOAT_NOT_NEEDED  -I. -I. -I../../llvm-gcc.src/gcc
-I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include
-I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include
-I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber
-I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/include
-I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/include
-DSHARED -m64 -DL_negdi2 -c ../../llvm-gcc.src/gcc/libgcc2.c -o
libgcc/x86_64/_negdi2_s.o
Assertion failed: (TargetRegisterInfo::isVirtualRegister(regA) &&
TargetRegisterInfo::isVirtualRegister(regB) && "cannot update physical
register live information"), function runOnMachineFunction, file
/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/lib/CodeGen/TwoAddressInstructionPass.cpp,
line 311.
/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.obj/./gcc/xgcc
-B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.obj/./gcc/
-B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.4.0/bin/
-B/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.4.0/lib/
-isystem /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.4.0/include
-isystem /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.install/i386-apple-darwin9.4.0/sys-include
-O2  -O2 -g -O2  -DIN_GCC    -W -Wall -Wwrite-strings
-Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition
-isystem ./include  -fPIC -pipe -g -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2
-D__GCC_FLOAT_NOT_NEEDED  -I. -I. -I../../llvm-gcc.src/gcc
-I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include
-I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include
-I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber
-I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/include
-I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/include
-DSHARED -m64 -DL_lshrdi3 -c ../../llvm-gcc.src/gcc/libgcc2.c -o
libgcc/x86_64/_lshrdi3_s.o
../../llvm-gcc.src/gcc/unwind-dw2.c:1527: internal compiler error: Abort trap
Please submit a full bug report,
with preprocessed source if appropriate.
See <URL:http://developer.apple.com/bugreporter> for instructions.
{standard input}:unknown:Undefined local symbol LBB21_11
{standard input}:unknown:Undefined local symbol LBB21_12
{standard input}:unknown:Undefined local symbol LBB21_13
{standard input}:unknown:Undefined local symbol LBB21_8

llvm-svn: 55928
2008-09-08 17:59:12 +00:00
Anton Korobeynikov 82b9540046 First draft of EH support on x86/64-linux
llvm-svn: 55899
2008-09-08 14:21:53 +00:00
Anton Korobeynikov cb0655d626 Implement FRAME_TO_ARGS_OFFSET for x86-64
llvm-svn: 55898
2008-09-08 14:21:10 +00:00
Evan Cheng 6f343bd543 Some code clean up.
llvm-svn: 55881
2008-09-07 09:07:23 +00:00
Evan Cheng 6c94b99c62 For whatever reason, x86 CallingConv::Fast (i.e. fastcc) was not passing scalar arguments in registers. This patch defines a new fastcc CC which is slightly different from the FastCall CC. In addition to passing integer arguments in ECX and EDX, it also specifies that doubles are passed in 8-byte slots which are 8-byte aligned (instead of 4-byte aligned). This avoids a potential performance hazard where doubles span cacheline boundaries.
llvm-svn: 55807
2008-09-04 22:59:58 +00:00
Evan Cheng 3152edf474 Remove code that pad number of bytes to pop for X86_FastCall CC. The code doesn't do the "aligning" for Cygwin, Mingw, and Windows. But aligning it on Darwin and Linux breaks gcc compatibility. That ruled out all the platforms we support!
llvm-svn: 55756
2008-09-04 01:04:15 +00:00
Dale Johannesen da2d80688b Add intrinsics for log, log2, log10, exp, exp2.
No functional change (and no FE change to generate them).

llvm-svn: 55753
2008-09-04 00:47:13 +00:00
Dan Gohman 7bda51f5a4 Create HandlePHINodesInSuccessorBlocksFast, a version of
HandlePHINodesInSuccessorBlocks that works FastISel-style. This
allows PHI nodes to be updated correctly while using FastISel.

This also involves some code reorganization; ValueMap and
MBBMap are now members of the FastISel class, so they needn't
be passed around explicitly anymore. Also, SelectInstructions
is changed to SelectInstruction, and only does one instruction
at a time.

llvm-svn: 55746
2008-09-03 23:12:08 +00:00
Evan Cheng 24422d4928 Let tblgen only generate fastisel routines, not the class definition. This makes it easier for targets to define their own fastisel class.
llvm-svn: 55679
2008-09-03 00:03:49 +00:00
Evan Cheng 3fddc7e906 Swap fp comparison operands and change predicate to allow load folding (safely this time).
llvm-svn: 55553
2008-08-29 23:22:12 +00:00
Evan Cheng b3ed09703c Backing out 55521. Not safe.
llvm-svn: 55548
2008-08-29 22:13:21 +00:00
Evan Cheng 960b17a3c2 Swap fp comparison operands and change predicate to allow load folding.
llvm-svn: 55521
2008-08-28 23:48:31 +00:00
Gabor Greif 95d77f5466 remove tabs, fix > 80 cols
llvm-svn: 55511
2008-08-28 23:19:51 +00:00
Gabor Greif f304a7aa4d erect abstraction boundaries for accessing SDValue members, rename Val -> Node to reflect semantics
llvm-svn: 55504
2008-08-28 21:40:38 +00:00
Rafael Espindola 26d54b3ef3 Use resize instead of reserve. Reserve doesn't change size().
llvm-svn: 55486
2008-08-28 18:32:53 +00:00
Dale Johannesen 41be0d4445 Split the ATOMIC NodeType's to include the size, e.g.
ATOMIC_LOAD_ADD_{8,16,32,64} instead of ATOMIC_LOAD_ADD.
Increased the Hardcoded Constant OpActionsCapacity to match.
Large but boring; no functional change.

This is to support partial-word atomics on ppc; i8 is
not a valid type there, so by the time we get to lowering, the
ATOMIC_LOAD nodes look the same whether the type was i8 or i32.
The information can be added to the AtomicSDNode, but that is the
largest SDNode; I don't fully understand the SDNode allocation,
but it is sensitive to the largest node size, so increasing
that must be bad.  This is the alternative.

llvm-svn: 55457
2008-08-28 02:44:49 +00:00
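A toy sketch of the idea of baking the access size into the node kind itself, so an 8-bit and a 32-bit atomic add stay distinct by the time lowering runs; the enumerators and helper are illustrative, not the real ISD node types:

// Toy version: the opcode records the original width even if the value type
// gets promoted (e.g. i8 -> i32) before lowering.
#include <cassert>
#include <cstdio>

enum ToyOpcode {
  ATOMIC_LOAD_ADD_8, ATOMIC_LOAD_ADD_16, ATOMIC_LOAD_ADD_32, ATOMIC_LOAD_ADD_64
};

ToyOpcode atomicLoadAddForSize(unsigned Bits) {
  switch (Bits) {
  case 8:  return ATOMIC_LOAD_ADD_8;
  case 16: return ATOMIC_LOAD_ADD_16;
  case 32: return ATOMIC_LOAD_ADD_32;
  case 64: return ATOMIC_LOAD_ADD_64;
  }
  assert(false && "unsupported atomic width");
  return ATOMIC_LOAD_ADD_32;
}

int main() {
  std::printf("opcode for 8-bit atomic add: %d\n", atomicLoadAddForSize(8));
}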
Gabor Greif abfdf928d8 disallow direct access to SDValue::ResNo, provide a getter instead
llvm-svn: 55394
2008-08-26 22:36:50 +00:00
Chris Lattner 09f8cef571 If an xmm register is referenced explicitly in an inline asm, make sure to
assign it to a version of the xmm register with the regclass that matches its
type.  This fixes PR2715, a bug handling some crazy xpcom case in mozilla.

llvm-svn: 55358
2008-08-26 06:19:02 +00:00
Evan Cheng f00f1e50b5 Try another approach to moving the call address load inside of callseq_start. Now it's done during the preprocessing of x86 isel. callseq_start's chain is changed to the load's chain node, while the load's chain is the last of callseq_start or the loads or copytoreg nodes inserted to move arguments to the right spot.
llvm-svn: 55338
2008-08-25 21:27:18 +00:00
Bill Wendling 5b836c5f77 Temporarily reverting r55292. It's causing a bootstraping failure:
/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm-gcc.obj/./gcc/xgcc ... src/libiberty/make-temp-file.c -o make-temp-file.o
Assertion failed: (Node2Index[SU->NodeNum] > Node2Index[I->Dep->NodeNum] && "Wrong topological sorting"), function InitDAGTopologicalSorting, file /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp, line 508.
../../../../llvm-gcc.src/libiberty/hashtab.c:955: internal compiler error: Abort trap
Please submit a full bug report,
with preprocessed source if appropriate.
See <URL:http://developer.apple.com/bugreporter> for instructions.
make[4]: *** [hashtab.o] Error 1
make[4]: *** Waiting for unfinished jobs....
make[3]: *** [multi-do] Error 1
make[2]: *** [all] Error 2
make[1]: *** [all-target-libiberty] Error 2
make: *** [all] Error 2

llvm-svn: 55295
2008-08-24 21:45:30 +00:00
Evan Cheng 8fa17424f7 Move callseq_start above the call address load to allow load to be folded into the call node.
llvm-svn: 55292
2008-08-24 19:19:55 +00:00
Bill Wendling 2fd7dbaf1d If part of the mask is "undef", then ignore it as we don't care what goes into it.
llvm-svn: 55147
2008-08-21 22:36:36 +00:00
Bill Wendling 765d3e0013 Fix whitespace. No functionality change.
llvm-svn: 55146
2008-08-21 22:35:37 +00:00
Evan Cheng 9534ea03e8 Fix a number of byval / memcpy / memset related codegen issues.
1. x86-64 byval alignment should be the max of 8 and the alignment of the type. Previously the code was not doing what the commit message said.
2. Do not use byte repeat move and store operations. These are slow.

llvm-svn: 55139
2008-08-21 21:00:15 +00:00
Mon P Wang 5c2ac4a5e0 Treat floating point ST1 the same as ST0 when lowering for a call result
llvm-svn: 55135
2008-08-21 19:54:16 +00:00
Dan Gohman 02c84b8910 Simplify FastISel's constructor argument list, make the FastISel
class hold a MachineRegisterInfo member, and make the
MachineBasicBlock be passed in to SelectInstructions rather
than the FastISel constructor.

llvm-svn: 55076
2008-08-20 21:05:57 +00:00
Dale Johannesen 6f765f392c Add remaining 64-bit atomic patterns for x86-64.
llvm-svn: 55029
2008-08-20 00:48:50 +00:00
Bill Wendling f00f3055d8 Revert r55018 and apply the correct "fix" for the 64-bit sub_and_fetch atomic.
Just expand it like the other X-bit sub_and_fetches.

llvm-svn: 55023
2008-08-20 00:28:16 +00:00
Dan Gohman daef7f43af Instantiate FastISel for X86.
llvm-svn: 55011
2008-08-19 21:45:35 +00:00
Dan Gohman 4619e93bd3 The X86 target will soon have an implementation of createFastISel.
llvm-svn: 55010
2008-08-19 21:32:53 +00:00
Dale Johannesen 5afbf510aa Add support for 8 and 16 bit forms of __sync
builtins on X86.

Change "lock" instructions to be on a separate line.
This is needed to work around a bug in the Darwin
assembler.

llvm-svn: 54999
2008-08-19 18:47:28 +00:00
Evan Cheng ab35bfdf18 Fix a (u)comiss intrinsic lowering bug. It was using anyext which can return junk in higher bits. Patch by Nate Begeman.
llvm-svn: 54903
2008-08-17 19:22:34 +00:00
Anton Korobeynikov 93584cd5a0 Use correct name for TLS address resolution routine on x86-64
llvm-svn: 54845
2008-08-16 12:58:29 +00:00
Dan Gohman 7c2bf62b14 Also avoid pinsrw and pinsrb with a variable insertelement index.
llvm-svn: 54803
2008-08-14 22:53:18 +00:00
Dan Gohman 65d83ccf26 Don't try to use the insertps instruction for vector
element inserts with non-constant indices. This fixes
CodeGen/X86/vector-variable-idx.ll on machines that
have SSE4.1.

llvm-svn: 54801
2008-08-14 22:43:26 +00:00
Evan Cheng 7823a411d5 Fix PR2620: Fix X86cmppd selection code so it expects operands to be v2f64.
llvm-svn: 54376
2008-08-05 22:19:15 +00:00
Dan Gohman 8ef79ebd5f Add an assert to catch invalid VECTOR_SHUFFLE mask indices.
llvm-svn: 54329
2008-08-04 23:09:15 +00:00
Andrew Lenharth 77e3e86e70 Add atomic sub for other sizes
llvm-svn: 54314
2008-08-03 20:17:34 +00:00
Dan Gohman 2ce6f2ad5e Rename SDOperand to SDValue.
llvm-svn: 54128
2008-07-27 21:46:04 +00:00
Dan Gohman 91e5dcb680 Tidy SDNode::use_iterator, and complete the transition to have it
parallel its analogue, Value::value_use_iterator. The operator* method
now returns the user, rather than the use.

llvm-svn: 54127
2008-07-27 20:43:25 +00:00
Nate Begeman 283b2da27a Disable mov{L, LP, HP, HLP, *DUP} shuffles for mmx
mmx needs its own fancy shuffle logic based on unpack; for now we get correct but awful code.

Also commit Mon Ping's VSETCC patch

llvm-svn: 54039
2008-07-25 19:05:58 +00:00
Evan Cheng a2b4b4ad99 Fix PR2485: do all 4-element SSE shuffles in max. of 2 shuffle instructions.
Based on patch by Nicolas Capens.

llvm-svn: 53939
2008-07-23 00:22:17 +00:00
Evan Cheng 0c23ed6364 Factor out SSE 4 wide shuffle lowering code into its own function. No functionality changes.
llvm-svn: 53933
2008-07-22 21:13:36 +00:00
Evan Cheng 0384670141 Fix PR2574: implement v2f32 scalar_to_vector.
llvm-svn: 53927
2008-07-22 18:39:19 +00:00
Duncan Sands b0e3938651 Add VerifyNode, a place to put sanity checks on
generic SDNode's (nodes with their own constructors
should do sanity checking in the constructor).  Add
sanity checks for BUILD_VECTOR and fix all the places
that were producing bogus BUILD_VECTORs, as found by
"make check".  My favorite is the BUILD_VECTOR with
only two operands that was being used to build a
vector with four elements!

llvm-svn: 53850
2008-07-21 10:20:31 +00:00
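A minimal standalone version of the BUILD_VECTOR sanity check described in the commit above; the real VerifyNode works on SDNodes, while this toy only checks the operand count against the element count:

// Toy check: a BUILD_VECTOR must supply exactly one scalar operand per
// element of the vector type being built.
#include <cassert>
#include <vector>

struct ToyVectorType { unsigned NumElements; };

void verifyBuildVector(const ToyVectorType &VT, const std::vector<double> &Ops) {
  assert(Ops.size() == VT.NumElements &&
         "BUILD_VECTOR operand count does not match vector element count");
}

int main() {
  ToyVectorType V4 = {4};
  std::vector<double> Ops = {1.0, 2.0, 3.0, 4.0};
  verifyBuildVector(V4, Ops);   // OK; passing only two operands for a
                                // 4-element vector would trip the assert.
  return 0;
}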
Bill Wendling 75840e6435 Fix for first part of PR2562. Generate the "pinsrw" instruction for inserts
into v4i16 vectors.

llvm-svn: 53807
2008-07-20 02:32:23 +00:00
Nate Begeman 55b7becb29 SSE codegen for vsetcc nodes
llvm-svn: 53719
2008-07-17 16:51:19 +00:00
Mon P Wang 1e2c6bfa41 When lowering certain atomics, we need to copy the memoperand from the old
atomic operation to the new one.

llvm-svn: 53714
2008-07-17 04:54:06 +00:00
Evan Cheng cf06fe476f x86-64 PIC JIT fixes: do not generate the extra load for external GV's.
llvm-svn: 53661
2008-07-16 01:34:02 +00:00
Dan Gohman 02c7c6cb33 Include a frame index in the "fixed stack" pseudo source value
instead of using the frame index for the SVOffset, which was
inconsistent.

llvm-svn: 53486
2008-07-11 22:44:52 +00:00
Bill Wendling 5774466a33 The frame address on an x86-64 box needs to be offset by -8, not -4.
llvm-svn: 53450
2008-07-11 07:18:52 +00:00
Dan Gohman 3b46030375 Pool-allocation for MachineInstrs, MachineBasicBlocks, and
MachineMemOperands. The pools are owned by MachineFunctions.

This drastically reduces the number of calls to malloc/free made
during the "Emit" phase of scheduling, as well as later phases
in CodeGen. Combined with other changes, this speeds up the
"instruction selection" phase of CodeGen by 10% in some cases.

llvm-svn: 53212
2008-07-07 23:14:23 +00:00
Duncan Sands 93e180342a Rather than having a different custom legalization
hook for each way in which a result type can be
legalized (promotion, expansion, softening etc),
just use one: ReplaceNodeResults, which returns
a node with exactly the same result types as the
node passed to it, but presumably with a bunch of
custom code behind the scenes.  No change if the
new LegalizeTypes infrastructure is not turned on.

llvm-svn: 53137
2008-07-04 11:47:58 +00:00
Duncan Sands 739a0548c4 Add a new getMergeValues method that does not need
to be passed the list of value types, and use this
where appropriate.  Inappropriate places are where
the value type list is already known and may be
long, in which case the existing method is more
efficient.

llvm-svn: 53035
2008-07-02 17:40:58 +00:00
Duncan Sands b55e5ece96 Highlight that getMergeValues optimization is
being suppressed here.

llvm-svn: 52952
2008-07-01 08:00:49 +00:00
Dan Gohman fb19f9402b Split ISD::LABEL into ISD::DBG_LABEL and ISD::EH_LABEL, eliminating
the need for a flavor operand, and add a new SDNode subclass,
LabelSDNode, for use with them to eliminate the need for a label id
operand.

Change instruction selection to let these label nodes through
unmodified instead of creating copies of them. Teach the MachineInstr
emitter how to emit a MachineInstr directly from an ISD label node.

This avoids the need for allocating SDNodes for the label id and
flavor value, as well as SDNodes for each of the post-isel label,
label id, and label flavor.

llvm-svn: 52943
2008-07-01 00:05:16 +00:00
Dan Gohman 4246cf8eea Update comments to new-style syntax.
llvm-svn: 52925
2008-06-30 21:00:56 +00:00
Dan Gohman 5c73a886b4 Rename ISD::LOCATION to ISD::DBG_STOPPOINT to better reflect its
purpose, and give it a custom SDNode subclass so that it doesn't
need to have line number, column number, filename string, and
directory string, all existing as individual SDNodes to be the
operands.

This was the only user of ISD::STRING, StringSDNode, etc., so
remove those and some associated code.

This makes stop-points considerably easier to read in
-view-legalize-dags output, and reduces overhead (creating new
nodes and copying std::strings into them) on code containing
debugging information.

llvm-svn: 52924
2008-06-30 20:59:49 +00:00
Duncan Sands 1ae6ef83ee Revert the SelectionDAG optimization that makes
it impossible to create a MERGE_VALUES node with
only one result: sometimes it is useful to be able
to create a node with only one result out of one of
the results of a node with more than one result, for
example because the new node will eventually be used
to replace a one-result node using ReplaceAllUsesWith,
cf X86TargetLowering::ExpandFP_TO_SINT.  On the other
hand, most users of MERGE_VALUES don't need this and
for them the optimization was valuable.  So add a new
utility method getMergeValues for creating MERGE_VALUES
nodes which by default performs the optimization.
Change almost everywhere to use getMergeValues (and
tidy some stuff up at the same time).

llvm-svn: 52893
2008-06-30 10:19:09 +00:00
Evan Cheng 3fc2372d3a - Fix a x86 vector isel bug: illegal transformation of a vector_shuffle into a
shift.
- Add a readme entry for a missing vector_shuffle optimization that results in
  awful codegen.

llvm-svn: 52740
2008-06-25 20:52:59 +00:00
Dan Gohman aa01afd47c Remove the OrigVT member from AtomicSDNode, as it is redundant with
the base SDNode's VTList.

llvm-svn: 52722
2008-06-25 16:07:49 +00:00
Mon P Wang 6a490371c9 Added MemOperands to atomic operations since atomics touch memory.
Added abstract class MemSDNode for any node that has an associated MemOperand.
Changed atomic.lcs => atomic.cmp.swap, atomic.las => atomic.load.add, and
atomic.lss => atomic.load.sub

llvm-svn: 52706
2008-06-25 08:15:39 +00:00
Dale Johannesen e5f4ffbdf1 Add v2f32 (MMX) type to X86. Support is primitive:
load,store,call,return,bitcast.  This is enough to
make call and return work.

llvm-svn: 52691
2008-06-24 22:01:44 +00:00
Dan Gohman 1f2b2a4abe Remove unnecessary #includes.
llvm-svn: 52613
2008-06-22 19:21:26 +00:00
Eli Friedman 8d66e98c92 Fix a bug with <8 x i16> shuffle lowering on X86 where parts of the
shuffle could be skipped.  The check is invalid because the loop index i 
doesn't correspond to the element actually inserted. The correct check is
already done a few lines earlier, for whether the element is already in 
the right spot, so this shouldn't have any effect on the codegen for 
code that was already correct.

llvm-svn: 52486
2008-06-19 06:09:51 +00:00
Evan Cheng e47ca0940f Rather than avoiding wrapping the ISD::DECLARE GV operand in X86ISD::Wrapper, simply handle it at dagisel time with x86-specific isel code.
llvm-svn: 52377
2008-06-17 02:01:22 +00:00
Andrew Lenharth f88d50bfcc add missing atomic intrinsic from gcc
llvm-svn: 52270
2008-06-14 05:48:15 +00:00
Anton Korobeynikov 729c4e95e2 Properly lower DYNAMIC_STACKALLOC - bracket all black magic with
CALLSEQ_BEGIN & CALLSEQ_END.

llvm-svn: 52225
2008-06-11 20:16:42 +00:00
Duncan Sands 11dd424539 Remove comparison methods for MVT. The main cause
of apint codegen failure is the DAG combiner doing
the wrong thing because it was comparing MVT's using
< rather than comparing the number of bits.  Removing
the < method makes this mistake impossible to commit.
Instead, add helper methods for comparing bits and use
them.

llvm-svn: 52098
2008-06-08 20:54:56 +00:00
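A toy illustration of the pitfall the commit above removes: ordering value types with raw '<' compares enumeration order, not sizes, so explicit bit-width helpers are safer. The struct below merely stands in for MVT and is not the real API:

// Toy value type: Id is the arbitrary enumeration position, Bits is the
// actual width. Only the width comparison answers the question callers mean.
#include <cassert>

struct ToyVT {
  unsigned Id;      // position in an arbitrary enumeration
  unsigned Bits;    // actual width in bits
  bool bitsLT(const ToyVT &O) const { return Bits < O.Bits; }
};

int main() {
  ToyVT i128 = {1, 128};   // extended type that happens to have a small Id
  ToyVT i64  = {9, 64};
  assert(i128.Id < i64.Id);   // raw enumeration order: misleading answer
  assert(i64.bitsLT(i128));   // comparing bit widths: the intended answer
  return 0;
}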
Duncan Sands 13237ac3b9 Wrap MVT::ValueType in a struct to get type safety
and better control the abstraction.  Rename the type
to MVT.  To update out-of-tree patches, the main
thing to do is to rename MVT::ValueType to MVT, and
rewrite expressions like MVT::getSizeInBits(VT) in
the form VT.getSizeInBits().  Use VT.getSimpleVT()
to extract a MVT::SimpleValueType for use in switch
statements (you will get an assert failure if VT is
an extended value type - these shouldn't exist after
type legalization).
This results in a small speedup of codegen and no
new testsuite failures (x86-64 linux).

llvm-svn: 52044
2008-06-06 12:08:01 +00:00
Dan Gohman 714663ab94 Expand small memmovs using inline code. Set the X86 threshold for expanding
memmove to a more plausible value, now that it's actually being used.

llvm-svn: 51696
2008-05-29 19:42:22 +00:00
Evan Cheng 5e28227dbd Implement vector shift up / down and insert zero with ps{rl}lq / ps{rl}ldq.
llvm-svn: 51667
2008-05-29 08:22:04 +00:00
Nate Begeman f1e18c7c44 Don't attempt to create VZEXT_LOAD out of an extload. This fixes an issue where the
code generator would do something like this:

f64 = load f32 <anyext>, f32mem
v2f64 = insertelt undef, %0, 0
v2f64 = insertelt %1, 0.0, 1

into 

v2f64 = vzext_load f32mem

which on x86 is movsd, when you really wanted a cvtss2sd/movsd pair.

llvm-svn: 51624
2008-05-28 00:24:25 +00:00
Dan Gohman 3388d022ac Use PMULDQ for v2i64 multiplies when SSE4.1 is available. And add
load-folding table entries for PMULDQ and PMULLD.

llvm-svn: 51489
2008-05-23 17:49:40 +00:00
Evan Cheng 29e59ad6c9 Fix typos and comments.
llvm-svn: 51165
2008-05-15 22:13:02 +00:00
Evan Cheng ef377adca0 Make use of vector load and store operations to implement memcpy, memmove, and memset. Currently only X86 target is taking advantage of these.
llvm-svn: 51140
2008-05-15 08:39:06 +00:00
Dan Gohman eabd647cd5 Change target-specific classes to use more precise static types.
This eliminates the need for several awkward casts, including
the last dynamic_cast under lib/Target.

llvm-svn: 51091
2008-05-14 01:58:56 +00:00
Evan Cheng 1120279ae6 Instead of doing a vector load, shuffle, and then extracting an element, load the element directly from the address with an offset.
pshufd $1, (%rdi), %xmm0
        movd %xmm0, %eax
=>
        movl 4(%rdi), %eax

llvm-svn: 51026
2008-05-13 08:35:03 +00:00
Evan Cheng b980f6fb3d Xform bitconvert(build_pair(load a, load b)) to a single load if the load locations are at the right offset from each other.
llvm-svn: 51008
2008-05-12 23:04:07 +00:00
Nate Begeman d875c3e2fd Initial X86 codegen support for VSETCC.
llvm-svn: 51000
2008-05-12 20:34:32 +00:00
Evan Cheng 2609d5e779 Refactor isConsecutiveLoad from X86 to TargetLowering so DAG combiner can make use of it.
llvm-svn: 50991
2008-05-12 19:56:52 +00:00
Dan Gohman 906716c40f Fix a compile error on compilers that still want a return value
in a non-void function that calls abort.

llvm-svn: 50969
2008-05-12 16:17:19 +00:00
Evan Cheng 71b9afb053 When transforming a vector_shuffle to a load, the base address must not be an undef.
llvm-svn: 50940
2008-05-10 06:46:49 +00:00
Dan Gohman 3c0e11af64 For now, abort when an ISD::VAARG is encountered on x86-64, rather
than silently generate invalid code.

llvm-gcc does not currently use VAArgInst; it lowers va_arg in the
front-end.

llvm-svn: 50930
2008-05-10 01:26:14 +00:00
Evan Cheng bb48d55a88 If movl top bits are undef, let it be selected to movlps, etc.
llvm-svn: 50928
2008-05-10 00:58:41 +00:00
Evan Cheng 961339bbdb Handle a few more cases of folding load i64 into xmm and zero top bits.
Note, some of the code will be moved into target independent part of DAG combiner in a subsequent patch.

llvm-svn: 50918
2008-05-09 21:53:03 +00:00
Evan Cheng 78af38c392 Handle vector move / load which zero the destination register top bits (i.e. movd, movq, movss (addr), movsd (addr)) with X86 specific dag combine.
llvm-svn: 50838
2008-05-08 00:57:18 +00:00
Mon P Wang 310a38d51e Improved generated code for atomic operators
llvm-svn: 50677
2008-05-05 22:56:23 +00:00
Evan Cheng dbfcce37fe Code clean up. No functionality change.
llvm-svn: 50675
2008-05-05 22:12:23 +00:00
Mon P Wang 3e58393c3d Added additional atomic intrinsics: and, or, xor, min, and max.
llvm-svn: 50663
2008-05-05 19:05:59 +00:00
Anton Korobeynikov 9205c8562c Add General Dynamic TLS model for X86-64. Some parts look really ugly (look for the tlsaddr pattern),
but should work. Work is in progress; more models will follow.

llvm-svn: 50630
2008-05-04 21:36:32 +00:00
Evan Cheng d9481366e3 Select vector shift with non-immediate i32 shift amount operand by first moving the operand into the right register.
llvm-svn: 50619
2008-05-04 09:15:50 +00:00
Arnold Schwaighofer be0de34ede Tail call optimization improvements:
Move platform independent code (lowering of possibly overwritten
arguments, check for tail call optimization eligibility) from
target X86ISelectionLowering.cpp to TargetLowering.h and
SelectionDAGISel.cpp.

Initial PowerPC tail call implementation:

Support ppc32 implemented and tested (passes my tests and
test-suite llvm-test).  
Support ppc64 implemented and half tested (passes my tests).
On ppc tail call optimization is performed if 
  caller and callee are fastcc
  call is a tail call (in tail call position, call followed by ret)
  no variable argument lists or byval arguments
  option -tailcallopt is enabled
Supported:
 * non pic tail calls on linux/darwin
 * module-local tail calls on linux(PIC/GOT)/darwin(PIC)
 * inter-module tail calls on darwin(PIC)
If constraints are not met a normal call will be emitted.

A test checking the argument lowering behaviour on x86-64 was added.

llvm-svn: 50477
2008-04-30 09:16:33 +00:00
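A standalone sketch of the eligibility conditions listed in the tail-call commit above; the struct and helper are hypothetical, not the actual SelectionDAG lowering code:

// Toy eligibility check: tail call optimization applies only when all of the
// listed conditions hold; otherwise a normal call is emitted.
#include <cstdio>

struct ToyCallSite {
  bool CallerIsFastCC;
  bool CalleeIsFastCC;
  bool InTailPosition;     // call immediately followed by ret
  bool HasVarArgs;
  bool HasByValArgs;
  bool TailCallOptEnabled; // -tailcallopt
};

bool isEligibleForTailCallOpt(const ToyCallSite &CS) {
  return CS.TailCallOptEnabled &&
         CS.CallerIsFastCC && CS.CalleeIsFastCC &&
         CS.InTailPosition &&
         !CS.HasVarArgs && !CS.HasByValArgs;
}

int main() {
  ToyCallSite CS = {true, true, true, false, false, true};
  std::printf("tail call optimized: %s\n",
              isEligibleForTailCallOpt(CS) ? "yes" : "no (normal call emitted)");
}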
Dan Gohman da44054867 Fix the SVOffset values for loads and stores produced by
memcpy/memset expansion. It was a bug for the SVOffset value
to be used in the actual address calculations.

llvm-svn: 50359
2008-04-28 17:15:20 +00:00
Anton Korobeynikov e183b3cd76 Properly lower vararg's FORMAL_ARGUMENTS node on win64
llvm-svn: 50325
2008-04-27 23:15:03 +00:00
Chris Lattner 724539c001 A few inline asm cleanups:
- Make targetlowering.h fit in 80 cols.
  - Make LowerAsmOperandForConstraint const.
  - Make lowerXConstraint -> LowerXConstraint
  - Make LowerXConstraint return a const char* instead of taking a string byref.

llvm-svn: 50312
2008-04-26 23:02:14 +00:00
Evan Cheng 1e78184a99 Extract the lower 64-bit if a MMX value is passed in a XMM register.
llvm-svn: 50292
2008-04-25 20:13:28 +00:00
Evan Cheng ccde6dd016 Special handling for MMX values being passed in either GPR64 or lower 64-bits of XMM registers.
llvm-svn: 50289
2008-04-25 19:11:04 +00:00
Evan Cheng df38b35a1e MMX argument passing fixes:
On Darwin / Linux x86-32, v8i8, v4i16, v2i32 values are passed in MM[0-2].                                                                                                                                      
On Darwin / Linux x86-32, v1i64 values are passed in memory.                                                                                                                                                    
On Darwin x86-64, v8i8, v4i16, v2i32 values are passed in XMM[0-7].                                                                                                                                     
On Darwin x86-64, v1i64 values are passed in 64-bit GPRs.

llvm-svn: 50257
2008-04-25 07:56:45 +00:00
Evan Cheng 9165e165dc Fix bug in x86 memcpy / memset lowering. If there are trailing bytes not handled by rep instructions, a new memcpy / memset is introduced for them. However, since source / destination addresses are already adjusted, their offsets should be zero.
llvm-svn: 50239
2008-04-25 00:26:43 +00:00
Dan Gohman f166d2d0d6 Implement an x86-64 ABI detail of passing structs by hidden first
argument. The x86-64 ABI requires the incoming value of %rdi to
be copied to %rax on exit from a function that is returning a
large C struct.

Also, add a README-X86-64 entry detailing the missed optimization
opportunity and proposing an alternative approach.

llvm-svn: 50075
2008-04-21 23:59:07 +00:00
Chris Lattner 3b18762f40 Switch to using Simplified ConstantFP::get API.
llvm-svn: 49977
2008-04-20 00:41:09 +00:00
Dan Gohman ad4071a9e1 Fix the handling of va_copy on x86-64. As of llvm-gcc r49920
llvm-gcc is now lowering va_copy on x86-64, so this completes
the fix for PR2230.

llvm-svn: 49922
2008-04-18 20:55:41 +00:00
Roman Levenstein a3ee1a38a3 Ongoing work on improving the instruction selection infrastructure:
Rename SDOperandImpl back to SDOperand.
Introduce the SDUse class that represents a use of the SDNode referred by
an SDOperand. Now it is more similar to Use/Value classes.

Patch is approved by Dan Gohman.

llvm-svn: 49795
2008-04-16 16:15:27 +00:00
Dan Gohman d43d3beeb0 Add support for the form of the SSE41 extractps instruction that
puts its result in a 32-bit GPR.

llvm-svn: 49762
2008-04-16 02:32:24 +00:00
Dan Gohman 8c99ccaf96 Recreate the size SDNode instead of reusing the old one in the x86
memcpy lowering code; this ensures that the size node has the desired
result type. This fixes a regression from r49572 with @llvm.memcpy.i64
on x86-32.

llvm-svn: 49761
2008-04-16 01:32:32 +00:00
Dan Gohman 2505d86783 Fix const-correctness issues with the SrcValue handling in the
memory intrinsic expansion code.

llvm-svn: 49666
2008-04-14 17:55:48 +00:00
Arnold Schwaighofer 634fc9a33a This patch corrects the handling of byval arguments for tailcall
optimized x86-64 (and x86) calls so that they work (... at least for
my test cases).

Should fix the following problems:

Problem 1: When I introduced the optimized handling of arguments for
tail called functions (using a sequence of copyto/copyfrom virtual
registers instead of always lowering to the top of the stack) I did not
handle byval arguments correctly, e.g. they did not work at all :).

Problem 2: On x86-64 after the arguments of the tail called function
are moved to their registers (which include ESI/RSI etc), tail call
optimization performs byval lowering, which causes the xSI, xDI, and xCX
registers to be overwritten. This is handled in this patch by moving
the arguments to virtual registers first and after the byval lowering
the arguments are moved from those virtual registers back to
RSI/RDI/RCX.

llvm-svn: 49584
2008-04-12 18:11:06 +00:00
Dan Gohman 544ab2c50b Drop ISD::MEMSET, ISD::MEMMOVE, and ISD::MEMCPY, which are not Legal
on any current target and aren't optimized in DAGCombiner. Instead
of using intermediate nodes, expand the operations, choosing between
simple loads/stores, target-specific code, and library calls,
immediately.

Previously, the code to emit optimized code for these operations
was only used at initial SelectionDAG construction time; now it is
used at all times. This fixes some cases where rep;movs was being
used for small copies where simple loads/stores would be better.

This also cleans up code that checks for alignments less than 4;
let the targets make that decision instead of doing it in
target-independent code. This allows x86 to use rep;movs in
low-alignment cases.

Also, this fixes a bug that resulted in the use of rep;stos for
memsets of 0 with non-constant memory size when the alignment was
at least 4. It's better to use the library in this case, which
can be significantly faster when the size is large.

This also preserves more SourceValue information when memory
intrinsics are lowered into simple loads/stores.

llvm-svn: 49572
2008-04-12 04:36:06 +00:00
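A rough sketch of the expansion-time decision the commit above describes, with made-up names and thresholds; it only illustrates choosing between simple loads/stores, target-specific code, and a library call:

// Toy strategy selector: small known sizes are inlined, unknown sizes go to
// the library, and large known sizes use target block-copy code if available.
#include <cstdio>

enum MemcpyStrategy { InlineLoadStore, TargetSpecific, LibraryCall };

MemcpyStrategy chooseMemcpyStrategy(bool SizeIsConstant, unsigned long long Size,
                                    unsigned long long InlineThreshold,
                                    bool TargetHasFastBlockOps) {
  if (!SizeIsConstant)
    return LibraryCall;                       // unknown size: call the library
  if (Size <= InlineThreshold)
    return InlineLoadStore;                   // small, known size: inline it
  if (TargetHasFastBlockOps)
    return TargetSpecific;                    // e.g. rep;movs on x86
  return LibraryCall;
}

int main() {
  std::printf("%d\n", chooseMemcpyStrategy(true, 16, 64, true));   // inline
  std::printf("%d\n", chooseMemcpyStrategy(false, 0, 64, true));   // libcall
}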
Dan Gohman 8c7cf88f7e Fix a bug that prevented x86-64 from using rep.movsq for
8-byte-aligned data.

llvm-svn: 49571
2008-04-12 02:35:39 +00:00
Dan Gohman 33b3300178 Make isVectorClearMaskLegal's operand list const.
llvm-svn: 49446
2008-04-09 20:09:42 +00:00
Roman Levenstein 51f532f92d Re-commit of the r48822, where the infinite looping problem discovered
by Dan Gohman is fixed.

llvm-svn: 49330
2008-04-07 10:06:32 +00:00
Evan Cheng f77b5ef3d0 Favors pshufd over shufps when shuffling elements from one vector. pshufd is faster than shufps.
llvm-svn: 49244
2008-04-05 00:30:36 +00:00
Evan Cheng 025cea1126 Backing out 48222 temporarily.
llvm-svn: 49124
2008-04-03 03:13:16 +00:00
Dan Gohman cb9f8f6e4e Don't use __bzero for memset if the second argument isn't zero.
llvm-svn: 49050
2008-04-01 20:56:18 +00:00
Dan Gohman 980d7200c1 Speculatively micro-optimize memory-zeroing calls on Darwin 10.
llvm-svn: 49048
2008-04-01 20:38:36 +00:00
Dale Johannesen efa81a6979 Accept 'y' constraint (MMX) in inline asm.
llvm-svn: 49011
2008-04-01 00:57:48 +00:00
Dan Gohman fd2eb00cc2 Fix a tokenfactor node to use the load chain rather than the
load value. This fixes PR2177.

llvm-svn: 48932
2008-03-28 23:45:16 +00:00
Roman Levenstein 358e04a185 Use a linked data structure for the uses lists of an SDNode, just like
LLVM Value/Use does and MachineRegisterInfo/MachineOperand does.
This allows constant time for all uses list maintenance operations.

The idea was suggested by Chris. Reviewed by Evan and Dan.
Patch is tested and approved by Dan.

On normal use-cases compilation speed is not affected. On very big basic
blocks there are compilation speedups in the range of 15-20% or even better. 

llvm-svn: 48822
2008-03-26 12:39:26 +00:00
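A minimal doubly-linked use list in the spirit of the change above; it is a toy, not SDNode/SDUse, but it shows why every add/remove of a use is constant time:

// Toy intrusive use list: adding or removing a use only touches neighbouring
// links, independent of how many uses the node has.
#include <cstdio>

struct ToyUse {
  ToyUse *Prev = nullptr, *Next = nullptr;
};

struct ToyNode {
  ToyUse *UseListHead = nullptr;

  void addUse(ToyUse &U) {            // O(1): push onto the head of the list
    U.Next = UseListHead;
    if (UseListHead) UseListHead->Prev = &U;
    U.Prev = nullptr;
    UseListHead = &U;
  }
  void removeUse(ToyUse &U) {         // O(1): unlink via the stored pointers
    if (U.Prev) U.Prev->Next = U.Next; else UseListHead = U.Next;
    if (U.Next) U.Next->Prev = U.Prev;
    U.Prev = U.Next = nullptr;
  }
};

int main() {
  ToyNode N; ToyUse A, B;
  N.addUse(A); N.addUse(B); N.removeUse(A);
  std::printf("head is B: %s\n", N.UseListHead == &B ? "yes" : "no");
}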
Evan Cheng 615488ab45 - SSE4.1 extractps extracts an f32 into a gr32 register. Very useful! Not. Fix the instruction specification and teach the lowering code to use it only when the only use is a store instruction.
llvm-svn: 48746
2008-03-24 21:52:23 +00:00
Anton Korobeynikov 7f125b2ba5 Add convenient helper for win64 check. Simplify things slightly.
llvm-svn: 48691
2008-03-22 20:57:27 +00:00
Anton Korobeynikov 7b4f4e1a86 Initial support for Win64 calling conventions. Still in early state.
llvm-svn: 48690
2008-03-22 20:37:30 +00:00
Duncan Sands d97eea372a Introduce a new node for holding call argument
flags.  This is needed by the new legalize types
infrastructure which wants to expand the 64 bit
constants previously used to hold the flags on
32 bit machines.  There are two functional changes:
(1) in LowerArguments, if a parameter has the zext
attribute set then that is marked in the flags;
before it was being ignored; (2) PPC had some bogus
code for handling two word arguments when using the
ELF 32 ABI, which was hard to convert because of
the bogusness.  As suggested by the original author
(Nicolas Geoffray), I've disabled it for the moment.
Tested with "make check" and the Ada ACATS testsuite.

llvm-svn: 48640
2008-03-21 09:14:45 +00:00
Chris Lattner 68b11e14bc remove Evan's "ugly hack" that sorta attempted to get
x86-64 return conventions correct, but was never enabled.
We can now do the "right thing" with multiple return values.

llvm-svn: 48635
2008-03-21 06:50:21 +00:00
Evan Cheng 7a3e750fd2 Fix this xform: (sra (shl X, m), result_size) -> (sign_extend (trunc (shl X, result_size - n - m)))
llvm-svn: 48578
2008-03-20 02:18:41 +00:00
Christopher Lamb 8fe9109469 Fix X86's isTruncateFree to not claim that truncate to i1 is free. This fixes Bill's testcase that failed for r48491.
llvm-svn: 48542
2008-03-19 08:30:06 +00:00
Evan Cheng 484064370a Fix an x86-64 isel lowering bug that's been around forever. An x86-64 varargs function implicitly reads X86::AL; don't clobber it!
llvm-svn: 48515
2008-03-18 23:36:35 +00:00
Chris Lattner 8a923e7c28 Reimplement the parameter attributes support, phase #1. hilights:
1. There is now a "PAListPtr" class, which is a smart pointer around
   the underlying uniqued parameter attribute list object, and manages
   its refcount.  It is now impossible to mess up the refcount.
2. PAListPtr is now the main interface to the underlying object, and
   the underlying object is now completely opaque.
3. Implementation details like SmallVector and FoldingSet are now no
   longer part of the interface.
4. You can create a PAListPtr with an arbitrary sequence of
   ParamAttrsWithIndex's, no need to make a SmallVector of a specific 
   size (you can just use an array or scalar or vector if you wish).
5. All the client code that had to check for a null pointer before
   dereferencing the pointer is simplified to just access the 
   PAListPtr directly.
6. The interfaces for adding attrs to a list and removing them is a
   bit simpler.

Phase #2 will rename some stuff (e.g. PAListPtr) and do other less 
invasive changes.

llvm-svn: 48289
2008-03-12 17:45:29 +00:00
Chris Lattner 120ad01fcb start handling the 'f' x87 constraint.
llvm-svn: 48239
2008-03-11 19:06:29 +00:00
Chris Lattner 1bd44363f2 Change the model for FP Stack return to use fp operands on the
RET instruction instead of using FpSET_ST0_32.  This also generalizes
the code to handling returning of multiple FP results.

llvm-svn: 48209
2008-03-11 03:23:40 +00:00
Chris Lattner 4b3a7fa823 Eliminate the FP_GET_ST0/FP_SET_ST0 target-specific dag nodes, just lower to
copyfromreg/copytoreg instead.

llvm-svn: 48174
2008-03-10 21:08:41 +00:00
Evan Cheng ae2c56d93e Default ISD::PREFETCH to expand.
llvm-svn: 48169
2008-03-10 19:38:10 +00:00
Scott Michel a6729e8666 Give TargetLowering::getSetCCResultType() a parameter so that ISD::SETCC's
return ValueType can depend on its operands' ValueType.

This is a cosmetic change, no functionality impacted.

llvm-svn: 48145
2008-03-10 15:42:14 +00:00
Dale Johannesen 4e622ec86d Increase ISD::ParamFlags to 64 bits. Increase the ByValSize
field to 32 bits, thus enabling correct handling of ByVal
structs bigger than 0x1ffff.  Abstract interface a bit.
Fixes gcc.c-torture/execute/pr23135.c and 
gcc.c-torture/execute/pr28982b.c in gcc testsuite (were ICE'ing
on ppc32, quietly producing wrong code on x86-32.)

llvm-svn: 48122
2008-03-10 02:17:22 +00:00
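A toy bit-packing sketch showing how a 64-bit flags word leaves room for a full 32-bit byval size field, so struct sizes above the old 0x1ffff limit no longer overflow; the field layout below is invented for illustration:

// Toy layout: low bits hold miscellaneous flags, the upper 32 bits hold the
// byval size. Only the idea matches the commit, not the actual bit positions.
#include <cassert>
#include <cstdint>
#include <cstdio>

const uint64_t ByValSizeShift = 32;
const uint64_t ByValSizeMask  = 0xffffffffULL << ByValSizeShift;

uint64_t setByValSize(uint64_t Flags, uint32_t Size) {
  return (Flags & ~ByValSizeMask) | (uint64_t(Size) << ByValSizeShift);
}
uint32_t getByValSize(uint64_t Flags) {
  return uint32_t((Flags & ByValSizeMask) >> ByValSizeShift);
}

int main() {
  uint64_t Flags = 0x3;                    // some unrelated low-bit flags
  Flags = setByValSize(Flags, 0x20000);    // larger than the old 0x1ffff limit
  assert(getByValSize(Flags) == 0x20000 && (Flags & 0x3) == 0x3);
  std::printf("byval size = 0x%x\n", getByValSize(Flags));
}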
Chris Lattner 4c869594bc rename FP_SETRESULT -> FP_SET_ST0
llvm-svn: 48094
2008-03-09 07:08:44 +00:00
Chris Lattner d587e580a6 rename FpGETRESULT32 -> FpGET_ST0_32 etc. Add support for
isel'ing value preserving FP roundings from one fp stack reg to another
into a noop, instead of stack traffic.

llvm-svn: 48093
2008-03-09 07:05:32 +00:00
Chris Lattner b6387c8a74 Finish implementing a readme entry: when inserting an i64 variable
into a vector of zeros or undef, and when the top part is obviously
zero, we can just use movd + shuffle.  This allows us to compile
vec_set-B.ll into:

_test3:
	movl	$1234567, %eax
	andl	4(%esp), %eax
	movd	%eax, %xmm0
	ret

instead of:

_test3:
	subl	$28, %esp
	movl	$1234567, %eax
	andl	32(%esp), %eax
	movl	%eax, (%esp)
	movl	$0, 4(%esp)
	movq	(%esp), %xmm0
	addl	$28, %esp
	ret

llvm-svn: 48090
2008-03-09 05:42:06 +00:00
Chris Lattner eef374c197 Implement a readme entry, compiling
#include <xmmintrin.h>
__m128i doload64(short x) {return _mm_set_epi16(0,0,0,0,0,0,0,1);}

into:
	movl	$1, %eax
	movd	%eax, %xmm0
	ret

instead of a constant pool load.

llvm-svn: 48063
2008-03-09 01:05:04 +00:00
Chris Lattner ad58828354 1) Improve comments.
2) Don't try to insert an i64 value into the low part of a 
   vector with movq on an x86-32 target.  This allows us to 
   compile:

__m128i doload64(short x) {return _mm_set_epi16(0,0,0,0,0,0,0,1);}

into:

_doload64:
	movaps	LCPI1_0, %xmm0
	ret

instead of:

_doload64:
	subl	$28, %esp
	movl	$0, 4(%esp)
	movl	$1, (%esp)
	movq	(%esp), %xmm0
	addl	$28, %esp
	ret

llvm-svn: 48057
2008-03-08 22:59:52 +00:00
Chris Lattner 8a6ebd23a8 minor simplifications to this code, don't create a dead
SCALAR_TO_VECTOR on paths that end up not using it.

llvm-svn: 48056
2008-03-08 22:48:29 +00:00
Evan Cheng 95cf661534 Implement x86 support for @llvm.prefetch. It corresponds to prefetcht{0|1|2} and prefetchnta instructions.
llvm-svn: 48042
2008-03-08 00:58:38 +00:00
Chris Lattner d4defb00df mark frem as expand for all legal fp types on x86, regardless of whether
we're using SSE or not.  This fixes PR2122.

llvm-svn: 48006
2008-03-07 06:36:32 +00:00
Andrew Lenharth 357061a74d 64bit CAS on 32bit x86.
llvm-svn: 47929
2008-03-05 01:15:49 +00:00
Andrew Lenharth 4fee9f35b5 x86-64 atomics
llvm-svn: 47903
2008-03-04 21:13:33 +00:00
Dan Gohman a986eea82f Add support for lowering i64 SRA_PARTS and friends on x86-64.
llvm-svn: 47865
2008-03-03 22:22:09 +00:00
Andrew Lenharth f5c90ec12c make CAS work
llvm-svn: 47799
2008-03-01 22:27:48 +00:00
Andrew Lenharth d032c33300 all but CAS working on x86
llvm-svn: 47798
2008-03-01 21:52:34 +00:00
Evan Cheng c799065cc3 Add a quick and dirty "loop aligner pass". x86 uses it to align its loops to 16-byte boundaries.
llvm-svn: 47703
2008-02-28 00:43:03 +00:00
Chris Lattner 83263b8cfb Make X86TargetLowering::LowerSINT_TO_FP return without creating a dead
stack slot and store if the  SINT_TO_FP is actually legal.  This allows
us to compile:

double a(double b) {return (unsigned)b;}

to:

_a:
	cvttsd2siq	%xmm0, %rax
	movl	%eax, %eax
	cvtsi2sdq	%rax, %xmm0
	ret

instead of:

_a:
	subq	$8, %rsp
	cvttsd2siq	%xmm0, %rax
	movl	%eax, %eax
	cvtsi2sdq	%rax, %xmm0
	addq	$8, %rsp
	ret

crazy.

llvm-svn: 47660
2008-02-27 05:57:41 +00:00
Arnold Schwaighofer 3bfca3e942 Refactor according to Evan's and Anton's suggestions.
llvm-svn: 47635
2008-02-26 22:21:54 +00:00
Arnold Schwaighofer 1f17bf6171 Correct function comments.
llvm-svn: 47606
2008-02-26 17:50:59 +00:00
Arnold Schwaighofer 69a10f4112 Add support for intermodule tail calls on x86/32bit with
GOT-style position independent code. Before only tail calls to
protected/hidden functions within the same module were optimized.
Now all function calls are tail call optimized.

llvm-svn: 47594
2008-02-26 10:21:54 +00:00
Arnold Schwaighofer b01b99ec78 Change the lowering of arguments for tail call optimized
calls. Before arguments that could overwrite each other were
explicitly lowered to a stack slot, not giving the register allocator
a chance to optimize. Now a sequence of copyto/copyfrom virtual
registers ensures that arguments are loaded in (virtual) registers
before they are lowered to the stack slot (and might overwrite each
other). Also parameter stack slots are marked mutable for
(potentially) tail calling functions.

llvm-svn: 47593
2008-02-26 09:19:59 +00:00
Dale Johannesen 65b404d61c Revise previous patch per review.
llvm-svn: 47573
2008-02-25 22:29:22 +00:00
Dale Johannesen 32d84b1772 Expand removal of MMX memory copies to allow 1 level
of TokenFactor underneath chain (seems to be enough)

llvm-svn: 47554
2008-02-25 19:20:14 +00:00
Dale Johannesen 09f410b6d7 Split ParameterAttributes.h, putting the complicated
stuff into ParamAttrsList.h.  Per feedback from
ParamAttrs changes.

llvm-svn: 47504
2008-02-22 22:17:59 +00:00
Chris Lattner ab8bfc28c8 copy mmx values from/to memory with GPRs on x86-32
instead of with mmx registers.  This horribleness is apparently
done by gcc to avoid having to insert emms in places that really 
should have it.  This is the second half of rdar://5741668.

llvm-svn: 47474
2008-02-22 05:18:04 +00:00
Chris Lattner 997b3a65ca Start using GPR's to copy around mmx value instead of mmx regs.
GCC apparently does this, and code depends on not having to do
emms when this happens.  This is x86-64 only so far, second half
should handle x86-32.

rdar://5741668

llvm-svn: 47470
2008-02-22 02:09:43 +00:00
Anton Korobeynikov 40d67c59d5 Remove bunch of gcc 4.3-related warnings from Target
llvm-svn: 47369
2008-02-20 11:22:39 +00:00
Evan Cheng 6200c225e0 - When DAG combiner is folding a bit convert into a BUILD_VECTOR, it should check if it's essentially a SCALAR_TO_VECTOR. Avoid turning (v8i16) <10, u, u, u> to <10, 0, u, u, u, u, u, u>. Instead, simply convert it to a SCALAR_TO_VECTOR of the proper type.
- X86 now normalize SCALAR_TO_VECTOR to (BIT_CONVERT (v4i32 SCALAR_TO_VECTOR)). Get rid of X86ISD::S2VEC.

llvm-svn: 47290
2008-02-18 23:04:32 +00:00
Dan Gohman c589243107 Chris pointed out that it's not necessary to set i64 MUL to Expand
on x86-32 since i64 itself is not a Legal type. And, update some
comments.

llvm-svn: 47282
2008-02-18 19:34:53 +00:00
Dan Gohman a589ee11bb Don't mark scalar integer multiplication as Expand on x86, since x86
has plain one-result scalar integer multiplication instructions.
This avoids expanding such instructions into MUL_LOHI sequences that
must be special-cased at isel time, and avoids the problem with that
code that prevented memory operands from being folded.

This fixes PR1874, addressing the most common case. The uncommon
cases of optimizing multiply-high operations will require work
in DAGCombiner.

llvm-svn: 47277
2008-02-18 17:55:26 +00:00
Andrew Lenharth fedcf477b5 I cannot find a libgcc function for this builtin. Therefore expanding it to a noop (which is how it used to be treated). If someone who knows the x86 backend better than me could tell me how to get a lock prefix on an instruction, that would be nice to complete x86 support.
llvm-svn: 47213
2008-02-16 14:46:26 +00:00
Duncan Sands 4c95dbd69f In TargetLowering::LowerCallTo, don't assert that
the return value is zero-extended if it isn't
sign-extended.  It may also be any-extended.
Also, if a floating point value was returned
in a larger floating point type, pass 1 as the
second operand to FP_ROUND, which tells it
that all the precision is in the original type.
I think this is right but I could be wrong.
Finally, when doing libcalls, set isZExt on
a parameter if it is "unsigned".  Currently
isSExt is set when signed, and nothing is
set otherwise.  This should be right for all
calls to standard library routines.

llvm-svn: 47122
2008-02-14 17:28:50 +00:00
Nate Begeman 53e1b3f9d5 Change how FP immediates are handled.
1) ConstantFP is now expanded by default
2) ConstantFP is not turned into TargetConstantFP during Legalize
   if it is legal.

This allows ConstantFP to be handled like Constant, allowing for 
targets that can encode FP immediates as MachineOperands.

As a bonus, fix up Itanium FP constants, which now correctly match,
and match more constants!  Hooray.

llvm-svn: 47121
2008-02-14 08:57:00 +00:00
Dan Gohman 9ca025f1dc Assigning an APInt to 0 with plain assignment gives it a one-bit
size. Initialize these APInts to properly-sized zero values.

llvm-svn: 47099
2008-02-13 23:07:24 +00:00
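
A minimal sketch of the pitfall fixed above, assuming the LLVM APInt header is available; the fix is simply to construct the values with an explicit bit width:

	#include "llvm/ADT/APInt.h"
	using namespace llvm;

	void initKnownBits(unsigned BitWidth) {
	  // Plain assignment from 0 yields a one-bit APInt (the bug described above).
	  APInt OneBitZero(1, 0);
	  // Properly-sized zero values: explicit width, zero value.
	  APInt KnownZero(BitWidth, 0);
	  APInt KnownOne(BitWidth, 0);
	  (void)OneBitZero; (void)KnownZero; (void)KnownOne;
	}
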
Dan Gohman e1d9ee66ed Simplify some logic in ComputeMaskedBits. And change ComputeMaskedBits
to pass the mask APInt by value, not by reference. 

llvm-svn: 47096
2008-02-13 22:28:48 +00:00
Dan Gohman f990faf23b Convert SelectionDAG::ComputeMaskedBits to use APInt instead of uint64_t.
Add an overload that supports the uint64_t interface for use by clients
that haven't been updated yet.

llvm-svn: 47039
2008-02-13 00:35:47 +00:00
Nate Begeman 8ef50214f0 SSE4.1 64b integer insert/extract pattern support
Move formats into the formats file

llvm-svn: 47035
2008-02-12 22:51:28 +00:00
Evan Cheng 4d8c98b8f9 Unbreak various insert_vector_elt and extract_vector_elt tests in presence of SSE4.
llvm-svn: 47001
2008-02-12 07:59:45 +00:00
Nate Begeman 2d77e8e446 Enable SSE4 codegen and pattern matching.
Add some notes to the README.

llvm-svn: 46949
2008-02-11 04:19:36 +00:00
Dan Gohman 3a4be0fdef Rename MRegisterInfo to TargetRegisterInfo.
llvm-svn: 46930
2008-02-10 18:45:23 +00:00
Dale Johannesen 36c2967d89 64-bit (MMX) vectors do not need restrictive alignment.
128-bit vectors need it only when SSE is on.

llvm-svn: 46890
2008-02-08 19:48:20 +00:00
Dan Gohman 7a55a94ba1 Avoid needlessly casting away const qualifiers.
llvm-svn: 46877
2008-02-08 03:29:40 +00:00
Dan Gohman 16d4bc3dc0 Follow Chris' suggestion; change the PseudoSourceValue accessors
to return pointers instead of references, since this is always what
is needed.

llvm-svn: 46857
2008-02-07 18:41:25 +00:00
Dan Gohman 63a8452e9c Add SourceValue information for outgoing argument stores on x86.
llvm-svn: 46854
2008-02-07 16:28:05 +00:00
Dan Gohman 2d489b5081 Re-apply the memory operand changes, with a fix for the static
initializer problem, a minor tweak to the way the
DAGISelEmitter finds load/store nodes, and a renaming of the
new PseudoSourceValue objects.

llvm-svn: 46827
2008-02-06 22:27:42 +00:00
Dale Johannesen d88f1d060e Implement sseregparm.
llvm-svn: 46764
2008-02-05 20:46:33 +00:00
Nick Lewycky f5b9938ef6 Don't use uninitialized values. Fixes vec_align.ll on X86 Linux.
llvm-svn: 46666
2008-02-02 08:29:58 +00:00
Evan Cheng efd142a920 SDIsel processes llvm.dbg.declare by recording the variable debug information descriptor and its corresponding stack frame index in MachineModuleInfo. This only works if the local variable is "homed" in the stack frame. It does not work for byval parameters, etc.
Added ISD::DECLARE node type to represent the llvm.dbg.declare intrinsic. Now the intrinsic calls are lowered into an SDNode and live on throughout the codegen passes.
For now, since all the debugging information recording is done at isel time, when an ISD::DECLARE node is selected, it has the side effect of also recording the variable. This is a short-term solution that should be fixed in time.

llvm-svn: 46659
2008-02-02 04:07:54 +00:00
Evan Cheng 27b32b87ed Revert 46556 and 46585. Dan please fix the PseudoSourceValue problem and re-commit.
llvm-svn: 46623
2008-01-31 21:00:00 +00:00
Dan Gohman ed346f2ed5 Avoid unnecessarily casting away const.
llvm-svn: 46590
2008-01-31 01:01:48 +00:00
Dan Gohman 9ba4d76816 Rename ISD::FLT_ROUNDS to ISD::FLT_ROUNDS_ to avoid conflicting
with the real FLT_ROUNDS (defined in <float.h>).

llvm-svn: 46587
2008-01-31 00:41:03 +00:00
Dan Gohman 3646fdda67 Create a new class, MemOperand, for describing memory references
in the backend. Introduce a new SDNode type, MemOperandSDNode, for
holding a MemOperand in the SelectionDAG IR, and add a MemOperand
list to MachineInstr, and code to manage them. Remove the offset
field from SrcValueSDNode; uses of SrcValueSDNode that were using
it are all using MemOperandSDNode now.

Also, begin updating some getLoad and getStore calls to use the
PseudoSourceValue objects.

Most of this was written by Florian Brander, some
reorganization and updating to TOT by me.

llvm-svn: 46585
2008-01-31 00:25:39 +00:00
Evan Cheng 29cfb67e28 Even though InsertAtEndOfBasicBlock is an ugly hack it still deserves a proper name. Rename it to EmitInstrWithCustomInserter since it does not necessarily insert
the instruction at the end.

llvm-svn: 46562
2008-01-30 18:18:23 +00:00
Evan Cheng 084a1cdcdd Work in progress. This patch *fixes* x86-64 calls which are modelled as StructRet but really should be returned in registers, e.g. _Complex long double, some 128-bit aggregates. This is a short-term solution that is necessary only because llvm, for now, cannot model i128 nor calls with multiple results.
Status: This only works for direct calls, and only the caller side is done. Disabled for now.

llvm-svn: 46527
2008-01-29 19:34:22 +00:00
Dale Johannesen 2b3bc30420 Handle 'X' constraint in asm's better.
llvm-svn: 46485
2008-01-29 02:21:21 +00:00
Chris Lattner d05d2011d0 Use fldz and fld1 for long double constants instead of a constant pool load.
llvm-svn: 46411
2008-01-27 06:19:31 +00:00
Chris Lattner 250789f1bd Remove some code for inferring alignment info from the x86 backend
now that the dag combiner does it.

llvm-svn: 46404
2008-01-26 20:07:42 +00:00
Chris Lattner f4523c35cb optimize fxor like for
llvm-svn: 46345
2008-01-25 06:14:17 +00:00
Chris Lattner 84ab724e06 Add target-specific dag combines for FAND(x,0) and FOR(x,0). This allows
us to compile:

double test(double X) {
  return copysign(0.0, X);
}

into:

_test:
	andpd	LCPI1_0(%rip), %xmm0
	ret

instead of:
_test:
	pxor	%xmm1, %xmm1
	andpd	LCPI1_0(%rip), %xmm1
	movapd	%xmm0, %xmm2
	andpd	LCPI1_1(%rip), %xmm2
	movapd	%xmm1, %xmm0
	orpd	%xmm2, %xmm0
	ret

llvm-svn: 46344
2008-01-25 05:46:26 +00:00
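
A standalone check (not the backend code) of why a single FAND with a constant mask suffices here: copysign(0.0, X) keeps only X's sign bit, i.e. bits(X) & 0x8000000000000000, which is exactly the andpd in the "into" code above:

	#include <cmath>
	#include <cstdint>
	#include <cstdio>
	#include <cstring>

	static double viaSignMask(double X) {
	  uint64_t bits;
	  std::memcpy(&bits, &X, sizeof bits);
	  bits &= 0x8000000000000000ULL;     // keep only the sign bit
	  double R;
	  std::memcpy(&R, &bits, sizeof R);
	  return R;
	}

	int main() {
	  for (double X : { 3.5, -3.5, 0.0, -0.0 })
	    std::printf("copysign(0.0, %g) = %g, mask form = %g\n",
	                X, std::copysign(0.0, X), viaSignMask(X));
	  return 0;
	}
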
Chris Lattner a91f77eaac Significantly simplify and improve handling of FP function results on x86-32.
This case returns the value in ST(0) and then has to convert it to an SSE
register.  This causes significant codegen ugliness in some cases.  For 
example in the trivial fp-stack-direct-ret.ll testcase we used to generate:

_bar:
	subl	$28, %esp
	call	L_foo$stub
	fstpl	16(%esp)
	movsd	16(%esp), %xmm0
	movsd	%xmm0, 8(%esp)
	fldl	8(%esp)
	addl	$28, %esp
	ret

because we move the result of foo() into an XMM register, then have to
move it back for the return of bar.

Instead of hacking ever-more special cases into the call result lowering code
we take a much simpler approach: on x86-32, fp return is modeled as always 
returning into an f80 register which is then truncated to f32 or f64 as needed.
Similarly for a result, we model it as an extension to f80 + return.

This exposes the truncate and extensions to the dag combiner, allowing target
independent code to hack on them, eliminating them in this case.  This gives 
us this code for the example above:

_bar:
	subl	$12, %esp
	call	L_foo$stub
	addl	$12, %esp
	ret

The nasty aspect of this is that these conversions are not legal, but we want
the second pass of dag combiner (post-legalize) to be able to hack on them.
To handle this, we lie to legalize and say they are legal, then custom expand
them on entry to the isel pass (PreprocessForFPConvert).  This is gross, but
less gross than the code it is replacing :)

This also allows us to generate better code in several other cases.  For 
example on fp-stack-ret-conv.ll, we now generate:

_test:
	subl	$12, %esp
	call	L_foo$stub
	fstps	8(%esp)
	movl	16(%esp), %eax
	cvtss2sd	8(%esp), %xmm0
	movsd	%xmm0, (%eax)
	addl	$12, %esp
	ret

where before we produced (incidentally, the old bad code is identical to what
gcc produces):

_test:
	subl	$12, %esp
	call	L_foo$stub
	fstpl	(%esp)
	cvtsd2ss	(%esp), %xmm0
	cvtss2sd	%xmm0, %xmm0
	movl	16(%esp), %eax
	movsd	%xmm0, (%eax)
	addl	$12, %esp
	ret

Note that we generate slightly worse code on pr1505b.ll due to a scheduling 
deficiency that is unrelated to this patch.

llvm-svn: 46307
2008-01-24 08:07:48 +00:00
Evan Cheng 35abd840a6 Let each target decide byval alignment. For X86, it's 4-byte unless the aggregate contains SSE vector(s). For x86-64, it's max of 8 or alignment of the type.
llvm-svn: 46286
2008-01-23 23:17:41 +00:00
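
A rough sketch of the rule described above, written with hypothetical names rather than the actual TargetLowering hook; the 16-byte figure for SSE-containing aggregates is an assumption:

	#include <algorithm>

	struct AggregateInfo {
	  unsigned NaturalAlign;    // natural alignment of the type, in bytes
	  bool ContainsSSEVector;   // does the aggregate contain a 128-bit SSE vector?
	};

	unsigned byValAlignment(const AggregateInfo &AI, bool Is64Bit) {
	  if (Is64Bit)
	    return std::max(8u, AI.NaturalAlign);   // x86-64: max of 8 or type alignment
	  return AI.ContainsSSEVector ? 16u : 4u;   // x86-32: 4 bytes unless SSE vectors
	}
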
Duncan Sands 95d46ef887 The last pieces needed for loading arbitrary
precision integers.  This won't actually work
(and most of the code is dead) unless the new
legalization machinery is turned on.  While
there, I rationalized the handling of i1, and
removed some bogus (and unused) sextload patterns.
For i1, this could result in microscopically
better code for some architectures (not X86).
It might also result in worse code if annotating
with AssertZExt nodes turns out to be more harmful
than helpful.

llvm-svn: 46280
2008-01-23 20:39:46 +00:00
Chris Lattner 1ea55cf816 This commit changes:
1. Legalize now always promotes truncstore of i1 to i8. 
2. Remove patterns and gunk related to truncstore i1 from targets.
3. Rename the StoreXAction stuff to TruncStoreAction in TLI.
4. Make the TLI TruncStoreAction table a 2d table to handle from/to conversions.
5. Mark a wide variety of invalid truncstores as such in various targets, e.g.
   X86 currently doesn't support truncstore of any of its integer types.
6. Add legalize support for truncstores with invalid value input types.
7. Add a dag combine transform to turn store(truncate) into truncstore when
   safe.

The latter allows us to compile CodeGen/X86/storetrunc-fp.ll to:

_foo:
	fldt	20(%esp)
	fldt	4(%esp)
	faddp	%st(1)
	movl	36(%esp), %eax
	fstps	(%eax)
	ret

instead of:

_foo:
	subl	$4, %esp
	fldt	24(%esp)
	fldt	8(%esp)
	faddp	%st(1)
	fstps	(%esp)
	movl	40(%esp), %eax
	movss	(%esp), %xmm0
	movss	%xmm0, (%eax)
	addl	$4, %esp
	ret

llvm-svn: 46140
2008-01-17 19:59:44 +00:00
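
A standalone sketch (not the LLVM code) of the 2-D TruncStoreAction table described above: legality is keyed on both the stored value type and the narrower memory type, and invalid entries are expanded into truncate + plain store.

	#include <cstdio>

	enum LegalizeAction { Legal, Promote, Expand };
	enum SimpleVT { i1, i8, i16, i32, i64, NumVTs };

	static LegalizeAction TruncStoreActions[NumVTs][NumVTs];   // [ValVT][MemVT]

	int main() {
	  // Item 1: truncstore of i1 is always promoted to i8.
	  for (int v = 0; v < NumVTs; ++v)
	    TruncStoreActions[v][i1] = Promote;
	  // Item 5: a target with no truncating integer stores marks them invalid,
	  // so legalize expands store(truncate) pairs instead.
	  TruncStoreActions[i64][i32] = Expand;
	  TruncStoreActions[i32][i16] = Expand;
	  TruncStoreActions[i32][i8]  = Expand;
	  std::printf("truncstore i64->i32 action: %d\n", TruncStoreActions[i64][i32]);
	  return 0;
	}
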
Chris Lattner 72733e573b * Introduce a new SelectionDAG::getIntPtrConstant method
and switch various codegen pieces and the X86 backend over
  to using it.

* Add some comments to SelectionDAGNodes.h

* Introduce a second argument to FP_ROUND, which indicates
  whether the FP_ROUND changes the value of its input. If
  not it is safe to xform things like fp_extend(fp_round(x)) -> x.

llvm-svn: 46125
2008-01-17 07:00:52 +00:00
Duncan Sands 32b0ff6814 Trampoline support for x86-64. This looks like
it should work, but I have no machine to test
it on.  Committed because it will at least
cause no harm, and maybe someone can test it
for me!

llvm-svn: 46098
2008-01-16 22:55:25 +00:00
Chris Lattner e8bb9f2190 make it more clear that this predicate only applies to scalar FP types.
llvm-svn: 46058
2008-01-16 06:24:21 +00:00
Chris Lattner 14e616ef0b introduce a isTypeInSSEReg predicate, which allows us to simplify
some code.  No functionality change.

llvm-svn: 46055
2008-01-16 06:19:45 +00:00
Chris Lattner 8f7cec859e My previous commit had an incomplete message, it should have been:
make the 'fp return in ST(0)' optimization smart enough to
look through token factor nodes.  This allows us to compile 
testcases like CodeGen/X86/fp-stack-retcopy.ll into:

_carg:
	subl	$12, %esp
	call	L_foo$stub
	fstpl	(%esp)
	fldl	(%esp)
	addl	$12, %esp
	ret

instead of:

_carg:
	subl	$28, %esp
	call	L_foo$stub
	fstpl	16(%esp)
	movsd	16(%esp), %xmm0
	movsd	%xmm0, 8(%esp)
	fldl	8(%esp)
	addl	$28, %esp
	ret

Still not optimal, but much better and this is a trivial patch.  Fixing 
the rest requires invasive surgery that is not llvm 2.2 material.

llvm-svn: 46054
2008-01-16 05:56:59 +00:00
Chris Lattner ea001f1db7 make the 'fp return in ST(0)' optimization smart enough to
look through token factor

llvm-svn: 46053
2008-01-16 05:53:06 +00:00
Chris Lattner de5c74f18e various whitespace cleanups, no functionality change.
llvm-svn: 46052
2008-01-16 05:52:18 +00:00
Chris Lattner 3c3fefde06 no need to expand ISD::TRAP to X86ISD::TRAP, just match ISD::TRAP.
llvm-svn: 46015
2008-01-15 21:58:22 +00:00
Anton Korobeynikov 6bbbc4cbfa For PR1839: add initial support for __builtin_trap. The llvm-gcc part is missing,
as is PPC codegen

llvm-svn: 46001
2008-01-15 07:02:33 +00:00
Duncan Sands 51fe7bbcf5 Whitespace tweak.
llvm-svn: 45940
2008-01-13 21:20:29 +00:00
Evan Cheng 7411b510b2 Code clean up.
llvm-svn: 45898
2008-01-12 01:08:07 +00:00
Arnold Schwaighofer 06da9e2d43 hrm - correct spelling.
Actually we're not riding any arguments. Sadly there is no semantic spell checker that is going to save you from such a mistake.

llvm-svn: 45868
2008-01-11 17:10:15 +00:00
Arnold Schwaighofer 6cf72fbbaf Improve tail call optimized call's argument lowering. Before this
commit all arguments were moved to the stack slot where they would
reside on a normal function call before the lowering to the tail call
stack slot. This was done to prevent arguments overwriting each other.
Now only arguments sourcing from a FORMAL_ARGUMENTS node or a
CopyFromReg node with virtual register (could also be a caller's
argument) are lowered indirectly.

llvm-svn: 45867
2008-01-11 16:49:42 +00:00
Arnold Schwaighofer bf1816ea7b Correct a copy and paste error.
llvm-svn: 45865
2008-01-11 14:34:56 +00:00
Evan Cheng a26552493b Mark byval parameter stack objects mutable for now.
llvm-svn: 45813
2008-01-10 02:24:25 +00:00
Evan Cheng fead113fe0 Do not use the stack pointer directly, issue a copyfromreg instead. Otherwise we can end up with something like ADD32ri %esp, x, which the two-address pass won't like.
llvm-svn: 45798
2008-01-10 00:37:26 +00:00
Evan Cheng 73d1017871 Remove comments that do not correspond to anything after recent refactoring.
llvm-svn: 45792
2008-01-10 00:09:10 +00:00
Evan Cheng 8242168ef4 Unbreak x86-64.
llvm-svn: 45725
2008-01-07 23:08:23 +00:00
Nate Begeman 22950d26f5 Remove an incorrect optimization that is performed correctly by
the target independent legalizer.

llvm-svn: 45631
2008-01-05 20:51:30 +00:00
Gordon Henriksen 9231958391 Refactoring the x86 and x86-64 calling convention implementations,
unifying the copied algorithms and saving over 500 LOC. There should
be no functionality change, but please test on your favorite x86
target.

llvm-svn: 45627
2008-01-05 16:56:59 +00:00
Gordon Henriksen f066fc477c First steps in X86 calling convention cleanup.
llvm-svn: 45536
2008-01-03 16:47:34 +00:00
Chris Lattner a10fff51d9 Rename SSARegMap -> MachineRegisterInfo in keeping with the idea
that "machine" classes are used to represent the current state of
the code being compiled.  Given this expanded name, we can start 
moving other stuff into it.  For now, move the UsedPhysRegs and
LiveIn/LiveOuts vectors from MachineFunction into it.

Update all the clients to match.

This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.

llvm-svn: 45467
2007-12-31 04:13:23 +00:00
Chris Lattner a5bb370aa4 Add new shorter predicates for testing machine operands for various types:
e.g. MO.isMBB() instead of MO.isMachineBasicBlock().  I don't plan on 
switching everything over, so new clients should just start using the 
shorter names.

Remove old long accessors, switching everything over to use the short
accessor: getMachineBasicBlock() -> getMBB(), 
getConstantPoolIndex() -> getIndex(), setMachineBasicBlock -> setMBB(), etc.

llvm-svn: 45464
2007-12-30 23:10:15 +00:00
Chris Lattner f3ebc3f3d2 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Chris Lattner 07ccbfa64a Codegen:
as:

_bar:
	pushl	%esi
	subl	$8, %esp
	movl	16(%esp), %esi
	call	L_foo$stub
	fstps	(%esi)
	addl	$8, %esp
	popl	%esi
	#FP_REG_KILL
	ret

instead of:

_bar:
	pushl	%esi
	subl	$8, %esp
	movl	16(%esp), %esi
	call	L_foo$stub
	fstpl	(%esi)
	cvtsd2ss	(%esi), %xmm0
	movss	%xmm0, (%esi)
	addl	$8, %esp
	popl	%esi
	#FP_REG_KILL
	ret

llvm-svn: 45401
2007-12-29 06:57:38 +00:00
Chris Lattner 8013bd339b avoid going through a stack slot to convert from fpstack to xmm reg
if we are just going to store it back anyway.  This improves things 
like:
double foo();
void bar(double *P) { *P = foo(); }

llvm-svn: 45399
2007-12-29 06:41:28 +00:00
Chris Lattner e3b05fe31b fix a questionable cast, thanks to Mike Stump for pointing this out.
llvm-svn: 45075
2007-12-16 20:26:54 +00:00
Evan Cheng 23d2d4dc6c Make better use of instructions that clear high bits; fix various 2-wide shuffle bugs.
llvm-svn: 45058
2007-12-15 03:00:47 +00:00
Evan Cheng 0e6408124e Fix ctlz and cttz. The llvm definition requires them to return the number of bits of the src type when the value is zero.
llvm-svn: 45029
2007-12-14 08:30:15 +00:00
Evan Cheng e9fbc3f014 Implement ctlz and cttz with bsr and bsf.
llvm-svn: 45024
2007-12-14 02:13:44 +00:00
Dan Gohman 7a7742c2fe Allow vector integer constants to be created with
SelectionDAG::getConstant, in the same way as vector floating-point
constants. This allows the legalize expansion code for @llvm.ctpop and
friends to be usable with vector types.

llvm-svn: 44954
2007-12-12 22:21:26 +00:00
Evan Cheng 0f42730722 Use shuffles to implement insert_vector_elt for i32, i64, f32, and f64.
llvm-svn: 44929
2007-12-12 07:55:34 +00:00
Evan Cheng 2a98956796 Lower a build_vector with all constants into a constpool load unless it can be done with a move to low part.
llvm-svn: 44921
2007-12-12 06:45:40 +00:00
Evan Cheng 4fbf459549 - Improved v8i16 shuffle lowering. It now uses pshuflw and pshufhw as much as
possible before resorting to pextrw and pinsrw.
- Better codegen for v4i32 shuffles masquerading as v8i16 or v16i8 shuffles.
- Improves (i16 extract_vector_element 0) codegen by recognizing
  (i32 extract_vector_element 0) does not require a pextrw.

llvm-svn: 44836
2007-12-11 01:46:18 +00:00
Nate Begeman a55a67ae91 x86 doesn't actually want to custom lower v3i32
llvm-svn: 44835
2007-12-11 01:41:33 +00:00
Evan Cheng b41d838d28 Add comment.
llvm-svn: 44686
2007-12-07 21:30:01 +00:00
Evan Cheng bfd373a53e Much improved v8i16 shuffles. (Step 1).
llvm-svn: 44676
2007-12-07 08:07:39 +00:00
Evan Cheng c829e5cdf0 Remove a bogus optimization. It's not possible to do a move to low element of a <8 x i16> or <16 x i8> vector.
llvm-svn: 44669
2007-12-06 22:14:22 +00:00
Duncan Sands ad0ea2d430 Fix PR1146: parameter attributes are no longer part of
the function type, instead they belong to functions
and function calls.  This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll).  Hopefully
a bitcode guru (who might that be? :) ) will fix it.

llvm-svn: 44359
2007-11-27 13:23:08 +00:00
Chris Lattner 5728bdd4db Fix a long standing deficiency in the X86 backend: we would
sometimes emit "zero" and "all one" vectors multiple times,
for example:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M2
	ret

instead of:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	movq	%mm0, _M2
	ret

This patch fixes this by always arranging for zero/one vectors
to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be
any random type.  This ensures they get trivially CSE'd on the dag.
This fix is also important for LegalizeDAGTypes, as it gets unhappy
when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when
'i64' isn't legal.

This patch makes the following changes:

1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into
   their canonical types.
2) The now-dead patterns are removed from the SSE/MMX .td files.
3) All the patterns in the .td file that referred to immAllOnesV or
   immAllZerosV in the wrong form now use *_bc to match them with a
   bitcast wrapped around them.
4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle 
   bitcast'd zero vectors, which simplifies the code actually.
5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that
   is legal, instead of generating one that is illegal and expecting
   a later legalize pass to clean it up.
6) isZeroShuffle is generalized to handle bitcast of zeros.
7) several other minor tweaks.

This patch is definite goodness, but has the potential to cause random
code quality regressions.  Please be on the lookout for these and let 
me know if they happen.

llvm-svn: 44310
2007-11-25 00:24:49 +00:00
Chris Lattner f72ad16263 remove bogus assertion that broke CodeGen/Generic/cast-fp.ll on x86
among others.

llvm-svn: 44302
2007-11-24 18:37:20 +00:00
Chris Lattner f81d5886c6 Several changes:
1) Change the interface to TargetLowering::ExpandOperationResult to 
   take and return entire NODES that need a result expanded, not just
   the value.  This allows us to handle things like READCYCLECOUNTER,
   which returns two values.
2) Implement (extremely limited) support in LegalizeDAG::ExpandOp for MERGE_VALUES.
3) Reimplement custom lowering in LegalizeDAGTypes in terms of the new
   ExpandOperationResult.  This makes the result simpler and fully 
   general.
4) Implement (fully general) expand support for MERGE_VALUES in LegalizeDAGTypes.
5) Implement ExpandOperationResult support for ARM f64->i64 bitconvert and ARM
   i64 shifts, allowing them to work with LegalizeDAGTypes.
6) Implement ExpandOperationResult support for X86 READCYCLECOUNTER and FP_TO_SINT,
   allowing them to work with LegalizeDAGTypes.

LegalizeDAGTypes now passes several more X86 codegen tests when enabled and when
type legalization in LegalizeDAG is ifdef'd out.

llvm-svn: 44300
2007-11-24 07:07:01 +00:00
Anton Korobeynikov 91460e43f1 Implement codegen for flt_rounds on x86
llvm-svn: 44183
2007-11-16 01:31:51 +00:00
Bill Wendling f359fed9f9 Unify CALLSEQ_{START,END}. They take 4 parameters: the chain, two stack
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in
the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If
not, then there is the potential for the stack to be changed while the stack's
being used by another instruction (like a call).

This can only result in tears...

llvm-svn: 44037
2007-11-13 00:44:25 +00:00
Arnold Schwaighofer d2c16ff905 Update tailcall code to include inline attribute operand for memcpy.
llvm-svn: 43978
2007-11-10 10:48:01 +00:00
Evan Cheng 797d56ff17 Much improved pic jumptable codegen:
Then:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        imull   $4, %ecx, %ecx
        leal    LJTI1_0-"L1$pb"(%eax), %edx
        addl    LJTI1_0-"L1$pb"(%ecx,%eax), %edx
        jmpl    *%edx

        .align  2
        .set L1_0_set_3,LBB1_3-LJTI1_0
        .set L1_0_set_2,LBB1_2-LJTI1_0
        .set L1_0_set_5,LBB1_5-LJTI1_0
        .set L1_0_set_4,LBB1_4-LJTI1_0
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

Now:
        call    "L1$pb"
"L1$pb":
        popl    %eax
		...
LBB1_1: # entry
        addl    LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
        jmpl    *%eax

		.align  2
		.set L1_0_set_3,LBB1_3-"L1$pb"
		.set L1_0_set_2,LBB1_2-"L1$pb"
		.set L1_0_set_5,LBB1_5-"L1$pb"
		.set L1_0_set_4,LBB1_4-"L1$pb"
LJTI1_0:
        .long    L1_0_set_3
        .long    L1_0_set_2

llvm-svn: 43924
2007-11-09 01:32:10 +00:00
Rafael Espindola fa0df55bdd Move the LowerMEMCPY and LowerMEMCPYCall to a common place.
Thanks for the suggestions Bill :-)

llvm-svn: 43742
2007-11-05 23:12:20 +00:00
Chris Lattner 296160d443 Fix PR1763 by allowing the 'q' constraint to work with 64-bit
regs on x86-64.

llvm-svn: 43669
2007-11-04 06:51:12 +00:00
Evan Cheng 2b93a20b09 Unbreak tailcall opt.
llvm-svn: 43646
2007-11-02 17:45:40 +00:00
Evan Cheng e453ff4913 Missing a getNumOperands check.
llvm-svn: 43630
2007-11-02 01:26:22 +00:00
Rafael Espindola 419b6d7ce4 Make ARM and X86 LowerMEMCPY identical by moving the isThumb check into getMaxInlineSizeThreshold
and by restructuring the X86 version.

Now I just have to move this to a common place :-)

llvm-svn: 43554
2007-10-31 14:39:58 +00:00
Rafael Espindola 063f177300 Make ARM and X86 memcpy expansion more similar to each other.
Now both subtargets define getMaxInlineSizeThreshold and the expansion uses it.

This should not change generated code.

llvm-svn: 43552
2007-10-31 11:52:06 +00:00
Dale Johannesen b066c1f216 Make i64=expand_vector_elt(v2i64) work in 32-bit mode.
llvm-svn: 43535
2007-10-31 00:32:36 +00:00
Dale Johannesen 6aa304e529 Add missing MMX PSUBQ.
llvm-svn: 43488
2007-10-30 01:18:38 +00:00
Evan Cheng e106e2f142 Enable more fold (sext (load x)) -> (sext (truncate (sextload x)))
transformation. Previously, it was restricted by ensuring the number of load uses
is one. Now the restriction is loosened up by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).

llvm-svn: 43465
2007-10-29 19:58:20 +00:00
Evan Cheng 7b3f7feaea Avoid doing something dumb like rewriting using a 64-bit iv in 32-bit mode.
llvm-svn: 43446
2007-10-29 07:57:50 +00:00
Evan Cheng 7f3d02471d Loosen up iv reuse to allow reuse of the same stride but a larger type when truncating from the larger type to the smaller type is free.
e.g.
Turns this loop:
LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
        movw    %dx, %si
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %edi
        movw    %si, (%edi)
        movl    L_Y$non_lazy_ptr, %edi
        movw    %dx, (%edi)
		addw    $4, %dx
		incw    %si
		incl    %ecx
		cmpl    %eax, %ecx
		jne     LBB1_2  # bb
	
into

LBB1_1: # entry.bb_crit_edge
        xorl    %ecx, %ecx
        xorw    %dx, %dx
LBB1_2: # bb
        movl    L_X$non_lazy_ptr, %esi
        movw    %cx, (%esi)
        movl    L_Y$non_lazy_ptr, %esi
        movw    %dx, (%esi)
        addw    $4, %dx
		incl    %ecx
        cmpl    %eax, %ecx
        jne     LBB1_2  # bb

llvm-svn: 43375
2007-10-26 01:56:11 +00:00
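
A hypothetical source loop of the shape this transformation targets (reconstructed for illustration only, not the actual test case): narrow induction variables that are just truncations of the i32 loop counter, so the counter can be reused once the truncation is free.

	extern short X, Y;

	void f(int n) {
	  for (int i = 0; i < n; ++i) {
	    X = (short)i;         // a 16-bit IV with stride 1: can reuse the i32 counter
	    Y = (short)(4 * i);   // a second 16-bit IV with stride 4
	  }
	}
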
Dale Johannesen 8ee70112ea Allow for copysign having f80 second argument.
Fixes 5550319.

llvm-svn: 43205
2007-10-21 01:07:44 +00:00
Rafael Espindola 846c19dd70 Add support for byval functions whose argument is not 32-bit aligned.
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this node to memmove
and memset.  I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)

llvm-svn: 43172
2007-10-19 10:41:11 +00:00
Chris Lattner 12d5da49d3 Change fp to sint legalization on x86-32 to do 2 x i32
loads instead of 1 x i64 loads.  This doesn't change any functionality yet.

llvm-svn: 43068
2007-10-17 06:17:29 +00:00
Chris Lattner 693cbeadff fix some funny indentation, add comments.
llvm-svn: 43066
2007-10-17 06:02:13 +00:00
Dale Johannesen e5530a35d4 Check for invalid cc's in f80 select.
llvm-svn: 43033
2007-10-16 18:09:08 +00:00
Arnold Schwaighofer b3d58b98d0 Correction to tail call optimization code. The new return address
was stored to the actual stack slot before the parameters were
lowered to their stack slot. This could cause arguments to be
overwritten by the return address if the called function had fewer
parameters than the caller function. The update should remove the
last failing test case of llc-beta: SPASS.

llvm-svn: 43027
2007-10-16 09:05:00 +00:00
Evan Cheng 7bcfd8f880 LowerFP_TO_SINT must not create a stack object if it's not needed.
llvm-svn: 43004
2007-10-15 20:11:21 +00:00
Evan Cheng 4099f4f91a Unbreak x86-64.
llvm-svn: 42962
2007-10-14 10:09:39 +00:00
Arnold Schwaighofer e8d0bf2669 Correcting the corrections. Bad bad baaad emacs!
llvm-svn: 42935
2007-10-12 21:53:12 +00:00
Arnold Schwaighofer 1f0da1fefb Corrected many typing errors. And removed 'nest' parameter handling
for fastcc from X86CallingConv.td.  This means that nested functions
are not supported for calling convention 'fastcc'.

llvm-svn: 42934
2007-10-12 21:30:57 +00:00
Duncan Sands a6286bd502 Due to the new tail call optimization, trampolines can no
longer be created for fastcc functions.

llvm-svn: 42925
2007-10-12 19:37:31 +00:00
Dan Gohman 8d978da3b0 Mark vector ctpop, cttz, and ctlz as Expand on x86.
llvm-svn: 42905
2007-10-12 14:09:42 +00:00
Dan Gohman 482732af9d Set ISD::FPOW to Expand.
llvm-svn: 42881
2007-10-11 23:21:31 +00:00
Arnold Schwaighofer 9ccea99165 Added tail call optimization to the x86 back end. It can be
enabled by passing -tailcallopt to llc.  The optimization is
performed if the following conditions are satisfied:
* caller/callee are fastcc
* elf/pic is disabled OR
  elf/pic enabled + callee is in module + callee has
  visibility protected or hidden

llvm-svn: 42870
2007-10-11 19:40:01 +00:00
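
A hedged sketch of the eligibility conditions listed above, written with hypothetical names; the real check lives in the X86 call lowering code and differs in detail:

	struct TailCallQuery {
	  bool CallerIsFastCC;            // caller uses fastcc
	  bool CalleeIsFastCC;            // callee uses fastcc
	  bool IsELFPIC;                  // ELF target compiled with PIC enabled
	  bool CalleeInSameModule;        // callee is defined in this module
	  bool CalleeHiddenOrProtected;   // callee visibility is hidden or protected
	};

	bool mayTailCallOptimize(const TailCallQuery &Q, bool TailCallOptEnabled) {
	  if (!TailCallOptEnabled)                        // requires -tailcallopt
	    return false;
	  if (!Q.CallerIsFastCC || !Q.CalleeIsFastCC)     // caller/callee are fastcc
	    return false;
	  if (!Q.IsELFPIC)                                // elf/pic disabled: OK
	    return true;
	  return Q.CalleeInSameModule && Q.CalleeHiddenOrProtected;
	}
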
Evan Cheng f5ec10b64c Bug fix. X86 was emitting redundant setcc and test instructions before a conditional move.
llvm-svn: 42774
2007-10-08 22:16:29 +00:00
Dan Gohman a160361c85 Migrate X86 and ARM from using X86ISD::{,I}DIV and ARMISD::MULHILO{U,S} to
use ISD::{S,U}DIVREM and ISD::{S,U}MUL_LOHI. Move the lowering code
associated with these operators into target-independent in LegalizeDAG.cpp
and TargetLowering.cpp.

llvm-svn: 42762
2007-10-08 18:33:35 +00:00
Evan Cheng 6912b50958 Not needed any more.
llvm-svn: 42623
2007-10-05 01:34:14 +00:00
Evan Cheng 5fb5a1f389 Enabling new condition code modeling scheme.
llvm-svn: 42459
2007-09-29 00:00:36 +00:00
Rafael Espindola 6c04ac1db0 Refactor the memcpy lowering for the x86 target.
The only generated code difference is that now we call memcpy when
the size of the array is unknown. This matches GCC behavior and is
better since the run time value can be arbitrarily large.

llvm-svn: 42433
2007-09-28 12:53:01 +00:00
Dale Johannesen b6d56401aa Enable codegen for long double abs, sin, cos
llvm-svn: 42368
2007-09-26 21:10:55 +00:00
Evan Cheng 9b7f0e6eb4 translateX86CC updates the last two operands.
llvm-svn: 42333
2007-09-26 00:45:55 +00:00
Dan Gohman 31599685c7 When both x/y and x%y are needed (x and y both scalar integer), compute
both results with a single div or idiv instruction. This uses new X86ISD
nodes for DIV and IDIV which are introduced during the legalize phase
so that the SelectionDAG's CSE can automatically eliminate redundant
computations.

llvm-svn: 42308
2007-09-25 18:23:27 +00:00
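
A standalone illustration of the pattern above: when both the quotient and the remainder are needed, one hardware div/idiv produces both (quotient in EAX, remainder in EDX), so exposing a single combined node lets CSE keep just one.

	#include <cstdio>

	struct DivRem { int Quot, Rem; };

	static DivRem sdivrem(int x, int y) {
	  // Compilers typically emit a single idiv for this pair of operations.
	  return { x / y, x % y };
	}

	int main() {
	  DivRem R = sdivrem(29, 4);
	  std::printf("29 / 4 = %d, 29 %% 4 = %d\n", R.Quot, R.Rem);
	  return 0;
	}
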
Dan Gohman 5e1a428344 Move the setOperationAction(ISD::DEBUG_LOC, MVT::Other, Expand) and
the check to see if the assembler supports .loc from X86TargetLowering
into the superclass TargetLowering.

llvm-svn: 42297
2007-09-25 15:10:49 +00:00
Evan Cheng e95f391ef1 Added support for new condition code modeling scheme (i.e. physical register dependency). These are a bunch of instructions that are duplicated so the x86 backend can support both the old and new schemes at the same time. They will be deleted after
all the kinks are worked out.

llvm-svn: 42285
2007-09-25 01:57:46 +00:00
Dan Gohman 1b2156fcae Add support on x86 for having Legalize lower ISD::LOCATION to ISD::DEBUG_LOC
instead of ISD::LABEL with a manual .debug_line entry when the assembler
supports .file and .loc directives.

llvm-svn: 42278
2007-09-24 21:54:14 +00:00
Chris Lattner 5b5484db63 claim that "st" is from the 80-bit register file. This causes x87-using inline
asm to die with:

ScheduleDAG.cpp:269: failed assertion `false && "Couldn't find the register class"'

instead of:
failed assertion `RegMap->getRegClass(VReg) == RC && "Register class of operand and regclass of use don't agree!"'

yay.

llvm-svn: 42259
2007-09-24 05:27:37 +00:00
Dale Johannesen e36c400255 Fix PR 1681. When X86 target uses +sse -sse2,
keep f32 in SSE registers and f64 in x87.  This
is effectively a new codegen mode.
Change addLegalFPImmediate to permit float and
double variants to do different things.
Adjust callers.

llvm-svn: 42246
2007-09-23 14:52:20 +00:00
Rafael Espindola 4730c04904 Don't add a default STACK_ALIGN (use the generic ABI alignment)
Implement calls to functions with byval arguments on X86

llvm-svn: 42192
2007-09-21 15:50:22 +00:00
Rafael Espindola f065f0e2a1 small cleanup: use LowerMemArgument in LowerFastCCArguments also
llvm-svn: 42189
2007-09-21 14:55:38 +00:00
Dale Johannesen 7d67e547b5 More long double fixes. x86_64 should build now.
llvm-svn: 42155
2007-09-19 23:55:34 +00:00
Dan Gohman 863bdc332d Emit integer x<1 as x<=0, as comparisons with zero (now including
64-bit) can use test instead of cmp with an immediate.

llvm-svn: 42026
2007-09-17 14:49:27 +00:00
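
A quick standalone check of the rewrite above: for signed integers, x < 1 and x <= 0 are the same predicate, and a compare against zero can be done with test reg,reg rather than cmp reg,imm (notably shorter for 64-bit values).

	#include <cassert>
	#include <cstdint>

	int main() {
	  for (int64_t x : { INT64_MIN, -5LL, 0LL, 1LL, 5LL, INT64_MAX })
	    assert((x < 1) == (x <= 0));
	  return 0;
	}
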
Dale Johannesen 98d3a08d8f Remove the assumption that FP's are either float or
double from some of the many places in the optimizers
it appears, and do something reasonable with x86
long double.
Make APInt::dump() public, remove newline, use it to
dump ConstantSDNode's.
Allow APFloats in FoldingSet.
Expand X86 backend handling of long doubles (conversions
to/from int, mostly).

llvm-svn: 41967
2007-09-14 22:26:36 +00:00
Rafael Espindola 272f7304f0 Add support for functions with byval arguments on x86
llvm-svn: 41953
2007-09-14 15:48:13 +00:00
Dale Johannesen 245dceb06d Add APInt interfaces to APFloat (allows direct
access to bits).  Use them in place of float and
double interfaces where appropriate.
First bits of x86 long double constants handling 
(untested, probably does not work).

llvm-svn: 41858
2007-09-11 18:32:33 +00:00
Duncan Sands 86e0119822 Fold the adjust_trampoline intrinsic into
init_trampoline.  There is now only one
trampoline intrinsic.

llvm-svn: 41841
2007-09-11 14:10:23 +00:00
Dale Johannesen bed9dc423c Next round of APFloat changes.
Use APFloat in UpgradeParser and AsmParser.
Change all references to ConstantFP to use the
APFloat interface rather than double.  Remove
the ConstantFP double interfaces.
Use APFloat functions for constant folding arithmetic
and comparisons.
(There are still way too many places APFloat is
just a wrapper around host float/double, but we're
getting there.)

llvm-svn: 41747
2007-09-06 18:13:44 +00:00
Anton Korobeynikov 50ab26e835 Reapply r41578 with proper fix
llvm-svn: 41680
2007-09-03 00:36:06 +00:00
Rafael Espindola e636fc05d6 Initial support for calling functions with byval arguments on x86-64
llvm-svn: 41643
2007-08-31 15:06:30 +00:00
Dale Johannesen 3cf889f75e Enhance APFloat to retain bits of NaNs (fixes oggenc).
Use APFloat interfaces for more references, mostly
of ConstantFPSDNode.

llvm-svn: 41632
2007-08-31 04:03:46 +00:00
Dale Johannesen d246b2ca5c Change LegalFPImmediates to use APFloat.
Add APFloat interfaces to ConstantFP, SelectionDAG.
Fix integer bit in double->APFloat conversion.
Convert LegalizeDAG to use APFloat interface in
ConstantFPSDNode uses.

llvm-svn: 41587
2007-08-30 00:23:21 +00:00
Duncan Sands 7741427a09 Move getX86RegNum into X86RegisterInfo and use it
in the trampoline lowering.  Lookup the jump and
mov opcodes for the trampoline rather than hard
coding them.

llvm-svn: 41577
2007-08-29 19:01:20 +00:00
Rafael Espindola b602461f48 Add a comment about using libc memset/memcpy or generating inline code.
llvm-svn: 41502
2007-08-27 17:48:26 +00:00
Rafael Espindola ff33241e16 call libc memcpy/memset if array size is bigger than the threshold.
Copying a 100MB array (after a warmup) shows that the glibc 2.6.1 implementation on
x86-64 (core 2) is 30% faster (from 0.270917s to 0.188079s)

llvm-svn: 41479
2007-08-27 10:18:20 +00:00
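
A hedged sketch of the policy described above, with hypothetical names; the actual threshold comes from the subtarget's getMaxInlineSizeThreshold():

	#include <cstddef>

	enum class MemcpyLowering { InlineLoadsStores, CallLibc };

	MemcpyLowering chooseMemcpyLowering(size_t Size, size_t MaxInlineSize) {
	  // Large copies go to the library call, whose tuned implementation wins
	  // for big blocks (the 30% figure measured above).
	  return Size > MaxInlineSize ? MemcpyLowering::CallLibc
	                              : MemcpyLowering::InlineLoadsStores;
	}
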
Chris Lattner d8c9cb9182 rename isOperandValidForConstraint to LowerAsmOperandForConstraint,
changing the interface to allow for future changes.

llvm-svn: 41384
2007-08-25 00:47:38 +00:00
Rafael Espindola 9c3d20d823 Partial implementation of calling functions with byval arguments:
*) The needed information is propagated to the DAG
 *) The X86-64 backend detects it and aborts

llvm-svn: 41179
2007-08-20 15:18:24 +00:00
Anton Korobeynikov 597c8b77e4 Move ReturnAddrIndex variable to X86MachineFunctionInfo structure. This fixed
hard to catch bugs with retaddr lowering

llvm-svn: 41104
2007-08-15 17:12:32 +00:00
Evan Cheng b2823dac69 Fix a typo pointed out by Maarten ter Huurne.
llvm-svn: 41059
2007-08-13 23:27:11 +00:00
Christopher Lamb b372abab14 Increase efficiency of sign_extend_inreg by using subregisters for truncation. As the README suggests sign_extend_subreg is selected to (sext(trunc)).
llvm-svn: 41010
2007-08-10 21:48:46 +00:00
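
A standalone check of the equivalence the change above exploits: sign_extend_inreg of the low 8 bits of an i32 is the same as sext(trunc), i.e. a cast through the 8-bit subregister, instead of the usual shift-left / arithmetic-shift-right pair.

	#include <cassert>
	#include <cstdint>

	// Portable bit-twiddling form of sign_extend_inreg i32 from i8.
	static int32_t signExtendInReg8(int32_t x) {
	  int32_t low = x & 0xff;
	  return (low ^ 0x80) - 0x80;
	}

	// The sext(trunc) / subregister form.
	static int32_t viaSubreg(int32_t x) {
	  return (int32_t)(int8_t)(uint8_t)x;
	}

	int main() {
	  for (int32_t x : { 0x7f, 0x80, 0xff, 0x1234, -1, -200 })
	    assert(signExtendInReg8(x) == viaSubreg(x));
	  return 0;
	}
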
Rafael Espindola 66011c17d5 propagate struct size and alignment of byval arguments to the DAG
llvm-svn: 40986
2007-08-10 14:44:42 +00:00
Dale Johannesen ba1a98a4e0 long double 9 of N. This finishes up the X86-32 bits
(constants are still not handled).  Adds ConvertActions
to control fp-to-fp conversions (these are currently
defaulted for all other targets, so no changes there).

llvm-svn: 40958
2007-08-09 01:04:01 +00:00
Dale Johannesen 57c6ac5fe5 Long double patch 7 of N, unless I lost count:).
Last x87 bits for full functionality (not
thoroughly tested, and long doubles do not work
in SSE modes at all - use -mcpu=i486 for now)

llvm-svn: 40886
2007-08-07 01:17:37 +00:00
Dale Johannesen b1888e73ad Long double patch 4 of N: initial x87 implementation.
Lots of problems yet but some simple things work.

llvm-svn: 40847
2007-08-05 18:49:15 +00:00
Dan Gohman 8932bff7fe Fix the alignment requirements of several unpck and shuf instructions.
Generalize isPSHUFDMask and add a unary SHUFPD pattern so that SHUFPD's
memory operand alignment can be tested as well, with a fix to avoid
breaking MMX's use of isPSHUFDMask.

llvm-svn: 40756
2007-08-02 21:17:01 +00:00
Evan Cheng d3d92890fc Can't handle offset and scale if rip-relative addressing is to be used.
llvm-svn: 40703
2007-08-01 23:46:47 +00:00
Evan Cheng 242a87734a This isn't safe when there are uses of load's chain result.
llvm-svn: 40617
2007-07-31 06:21:44 +00:00
Duncan Sands ce38853cc6 Trampoline codegen support for X86-32.
llvm-svn: 40566
2007-07-27 20:02:49 +00:00
Dan Gohman 4788552deb Re-apply 40504, but with a fix for the segfault it caused in oggenc:
Make the alignedload and alignedstore patterns always require 16-byte
alignment. This way when they are used in the "Fs" instructions, in which
a vector instruction is used for a scalar purpose, they can still require
the full vector alignment. And add a regression test for this.

llvm-svn: 40555
2007-07-27 17:16:43 +00:00
Evan Cheng 931de40afa Reverting 40504 for now. It's breaking oggenc.
llvm-svn: 40547
2007-07-27 01:37:47 +00:00
Dan Gohman 8455bd3fae Remove X86ISD::LOAD_PACK and X86ISD::LOAD_UA and associated code from the
x86 target, replacing them with the new alignment attributes on memory
references.

llvm-svn: 40504
2007-07-26 00:31:09 +00:00
Dan Gohman f906c7286f Use movaps to load a v4f32 build_vector of all-constant values into a
register instead of loading each element individually.

llvm-svn: 40478
2007-07-24 22:55:08 +00:00
Dan Gohman b6a8ae20c7 Fix some uses of dyn_cast to be uses of cast.
llvm-svn: 40443
2007-07-23 20:24:29 +00:00
Evan Cheng 64738536b3 Fix custom lowering of SSE FXOR.
llvm-svn: 40071
2007-07-19 23:36:01 +00:00
Anton Korobeynikov 383a324735 Long live the exception handling!
This patch fills in the last necessary bits to enable exception
handling in LLVM. Currently only on x86-32/linux.

In fact, this patch adds the necessary intrinsics (and their lowering) which
represent really weird target-specific gcc builtins used inside the unwinder.

After the corresponding llvm-gcc patch lands (easy), exceptions should be
more or less workable. However, exception handling support should not be
thought of as 'finished': I expect many small and not so small glitches
everywhere.

llvm-svn: 39855
2007-07-14 14:06:15 +00:00
Dan Gohman 57111e7a60 Define non-intrinsic instructions for vector min, max, sqrt, rsqrt, and rcp,
in addition to the intrinsic forms. Add spill-folding entries for these new
instructions, and for the scalar min and max intrinsic instructions which
were missing. And add some preliminary ISelLowering code for using the new
non-intrinsic vector sqrt instruction, and fneg and fabs.

llvm-svn: 38478
2007-07-10 00:05:58 +00:00
Anton Korobeynikov de9c825859 Proper flag __alloca call
llvm-svn: 37923
2007-07-05 20:36:08 +00:00
Dale Johannesen 3d7008cd49 Refactor X87 instructions. As a side effect, all
their names are changed.

llvm-svn: 37876
2007-07-04 21:07:47 +00:00
Dale Johannesen a2b3c175db Fix for PR 1505 (and 1489). Rewrite X87 register
model to include f32 variants.  Some factoring
improvements forthcoming.

llvm-svn: 37847
2007-07-03 00:53:03 +00:00
Evan Cheng 444d3ca53d No vector fneg.
llvm-svn: 37786
2007-06-29 00:18:15 +00:00
Evan Cheng 3bd318e298 Type of vector extract / insert index operand should be iPTR.
llvm-svn: 37784
2007-06-29 00:01:20 +00:00
Dan Gohman a866514528 Generalize MVT::ValueType and associated functions to be able to represent
extended vector types. Remove the special SDNode opcodes used for pre-legalize
vector operations, and the special MVT::Vector type used with them. Adjust
lowering and legalize to work with the normal SDNode kinds instead, and to
use the normal MVT functions to work with vector types instead of using the
two special operands that the pre-legalize nodes held.

This allows pre-legalize and post-legalize DAGs, and the code that operates
on them, to be more consistent. Pre-legalize vector operators can be handled
more consistently with scalar operators. And, -view-dag-combine1-dags and
-view-legalize-dags now look prettier for vector code.

llvm-svn: 37719
2007-06-25 16:23:39 +00:00
Dan Gohman 309d3d51b3 Move ComputeMaskedBits, MaskedValueIsZero, and ComputeNumSignBits from
TargetLowering to SelectionDAG so that they have more convenient
access to the current DAG, in preparation for the ValueType routines
being changed from standalone functions to members of SelectionDAG for
the pre-legalize vector type changes.

llvm-svn: 37704
2007-06-22 14:59:07 +00:00
Chris Lattner 944200be45 If a function is vararg, never pass inreg arguments in registers. Thanks to
Anton for half of this patch.

llvm-svn: 37641
2007-06-19 00:13:10 +00:00
Evan Cheng cea02ffd05 Look for VECTOR_SHUFFLE that's identity operation on either LHS or RHS. This can happen before DAGCombiner catches it.
llvm-svn: 37636
2007-06-19 00:02:56 +00:00
Bill Wendling 094a4e813a Revert patch. It regresses:
define double @test2(i64 %A) {
   %B = bitcast i64 %A to double
   ret double %B
}

$ llvm-as < t.ll | llc -march=x86-64

before:

         .align  4
         .globl  _test2
_test2:
         movd %rdi, %xmm0
         ret

after:

_test2:
         subq $8, %rsp
         movq %rdi, (%rsp)
         movsd (%rsp), %xmm0
         addq $8, %rsp
         ret

llvm-svn: 37617
2007-06-16 23:57:15 +00:00
Bill Wendling cd9673e565 Fix a failure to bit_convert from integer GPR to MMX register.
llvm-svn: 37611
2007-06-16 06:17:31 +00:00
Dan Gohman 5c4413120f Rename MVT::getVectorBaseType to MVT::getVectorElementType.
llvm-svn: 37579
2007-06-14 22:58:02 +00:00
Chris Lattner 75372ad603 fix x86-64 mmx calling convention for real, which passes in integer gprs.
llvm-svn: 37534
2007-06-09 05:08:10 +00:00
Chris Lattner a4a49e37ab fix mmx handling bug
llvm-svn: 37533
2007-06-09 05:01:50 +00:00
Dan Gohman 703e0f8608 Add explicit qualification for namespace MVT members.
llvm-svn: 37320
2007-05-24 14:33:05 +00:00
Dan Gohman eefa83e67b Use MVT::FIRST_VECTOR_VALUETYPE and MVT::LAST_VECTOR_VALUETYPE.
llvm-svn: 37234
2007-05-18 18:44:07 +00:00
Evan Cheng afa1cb6da3 Fix a bogus check that prevented folding VECTOR_SHUFFLE to UNDEF; add an optimization to fold VECTOR_SHUFFLE to a zero vector.
llvm-svn: 37173
2007-05-17 18:45:50 +00:00
Chris Lattner dade607f19 This is the correct fix for PR1427. This fixes mmx-shuffle.ll and doesn't
cause other regressions.

llvm-svn: 37160
2007-05-17 17:13:13 +00:00
Anton Korobeynikov 1ad4618715 Revert patch for PR1427. It breaks almost all vector tests.
llvm-svn: 37159
2007-05-17 07:50:14 +00:00
Chris Lattner 6a5a46322f Fix PR1427 and test/CodeGen/X86/mmx-shuffle.ll
llvm-svn: 37141
2007-05-17 03:29:42 +00:00
Chris Lattner c8798d085c fix subtle bugs in inline asm operand selection
llvm-svn: 37065
2007-05-15 01:28:08 +00:00
Chris Lattner 83df45a959 Fix two classes of bugs:
1. x86 backend rejected (&gv+c) for the 'i' constraint when in static mode.
  2. the matcher didn't correctly reject and accept some global addresses.
     the right predicate is GVRequiresExtraLoad, not "relomodel = pic".

llvm-svn: 36670
2007-05-03 16:52:29 +00:00
Anton Korobeynikov f1dcf69fc3 Emit correct register move information in eh frames for X86. This allows Shootout-C++/except to pass on x86/linux
with non-llvm-compiled (e.g. "native") unwind runtime.

llvm-svn: 36647
2007-05-02 19:53:33 +00:00
Bill Wendling 591eab8844 Support for the special case of a vector with the canonical form:
vector_shuffle v1, v2, <2, 6, 3, 7>

I.e.

         vector_shuffle v, undef, <2, 2, 3, 3>

MMX only has a shuffle for v4i16 vectors. It needs to use the unpackh for
this type of operation.

llvm-svn: 36403
2007-04-24 21:16:55 +00:00
Lauro Ramos Venancio efb8077ddd X86 TLS: fix and optimize the implementation of "initial exec" model.
llvm-svn: 36355
2007-04-22 22:50:52 +00:00
Lauro Ramos Venancio 4e91908f17 X86 TLS: Implement review feedback.
llvm-svn: 36318
2007-04-21 20:56:26 +00:00
Lauro Ramos Venancio 2518889872 Implement "general dynamic", "initial exec" and "local exec" TLS models for
X86 32 bits.

llvm-svn: 36283
2007-04-20 21:38:10 +00:00
Anton Korobeynikov 9b91d98a30 Add comment
llvm-svn: 36213
2007-04-17 19:34:00 +00:00
Chris Lattner ff0598de75 rename X86FunctionInfo to X86MachineFunctionInfo to match the header file
it is defined in.

llvm-svn: 36196
2007-04-17 17:21:52 +00:00
Anton Korobeynikov 8b7aab009e Implemented correct stack probing on mingw/cygwin for dynamic alloca's.
Also, fixed the static case in the presence of EAX being live-in. This fixes PR331

PS: Why don't we still have push/pop instructions? :)
llvm-svn: 36195
2007-04-17 09:20:00 +00:00
Anton Korobeynikov fb80151c42 Removed tabs everywhere except autogenerated & external files. Add make
target for tabs checking.

llvm-svn: 36146
2007-04-16 18:10:23 +00:00
Chris Lattner 2805bce656 Fix mmx paddq, add support for the 'y' register class, though it isn't tested.
llvm-svn: 35940
2007-04-12 04:14:49 +00:00
Chris Lattner 808ac93f68 remove some dead hooks
llvm-svn: 35845
2007-04-09 23:31:19 +00:00
Chris Lattner 39f65335d5 remove some dead target hooks, subsumed by isLegalAddressingMode
llvm-svn: 35840
2007-04-09 22:27:04 +00:00
Chris Lattner 7451e4d6a1 move a bunch of register constraints from being handled by
getRegClassForInlineAsmConstraint to being handled by
getRegForInlineAsmConstraint.  This allows us to let the llvm register allocator
allocate, which gives us better code.  For example, X86/2007-01-29-InlineAsm-ir.ll
used to compile to:

_run_init_process:
        subl $4, %esp
        movl %ebx, (%esp)
        xorl %ebx, %ebx
        movl $11, %eax
        movl %ebx, %ecx
        movl %ebx, %edx
        # InlineAsm Start
        push %ebx ; movl %ebx,%ebx ; int $0x80 ; pop %ebx
        # InlineAsm End

Now we get:
_run_init_process:
        xorl %ecx, %ecx
        movl $11, %eax
        movl %ecx, %edx
        # InlineAsm Start
        push %ebx ; movl %ecx,%ebx ; int $0x80 ; pop %ebx
        # InlineAsm End

llvm-svn: 35804
2007-04-09 05:49:22 +00:00
Chris Lattner 2b6b4eb471 implement support for CodeGen/X86/inline-asm-x-scalar.ll:test3 - i32/i64 values
used with x constraints.

llvm-svn: 35803
2007-04-09 05:31:48 +00:00
Chris Lattner 590ed5e5b7 implement CodeGen/X86/inline-asm-x-scalar.ll
llvm-svn: 35799
2007-04-09 05:11:28 +00:00
Chris Lattner 1eb94d973a implement the new addressing mode description hook.
llvm-svn: 35521
2007-03-30 23:15:24 +00:00
Bill Wendling 2338e21b2b Remove cruft I put in there...
llvm-svn: 35394
2007-03-28 01:02:54 +00:00
Bill Wendling ad2db4a4cf Unbreak mmx arithmetic. It was barfing trying to do v8i8 arithmetic.
llvm-svn: 35392
2007-03-28 00:57:11 +00:00
Bill Wendling 6dff51ae65 Fix so that pandn is emitted instead of an xor/and combo. Add integer
comparison operators.

llvm-svn: 35385
2007-03-27 20:22:40 +00:00
Bill Wendling 158f6092a2 Promote to v1i64 type...
llvm-svn: 35353
2007-03-26 08:03:33 +00:00
Bill Wendling 98d2104c6f Add support for the v1i64 type. This makes better code for this:
#include <mmintrin.h>

extern __m64 C;

void baz(__v2si *A, __v2si *B)
{
  *A = C;
  _mm_empty();
}

We get this:

_baz:
        call "L1$pb"
"L1$pb":
        popl %eax
        movl L_C$non_lazy_ptr-"L1$pb"(%eax), %eax
        movq (%eax), %mm0
        movl 4(%esp), %eax
        movq %mm0, (%eax)
        emms
        ret

GCC gives us this:

_baz:
        pushl   %ebx
        call    L3
"L00000000001$pb":
L3:
        popl    %ebx
        subl    $8, %esp
        movl    L_C$non_lazy_ptr-"L00000000001$pb"(%ebx), %eax
        movl    (%eax), %edx
        movl    4(%eax), %ecx
        movl    16(%esp), %eax
        movl    %edx, (%eax)
        movl    %ecx, 4(%eax)
        emms
        addl    $8, %esp
        popl    %ebx
        ret

llvm-svn: 35351
2007-03-26 07:53:08 +00:00
Chris Lattner d685514e2e switch TargetLowering::getConstraintType to take the entire constraint,
not just the first letter.  No functionality change.

llvm-svn: 35322
2007-03-25 02:14:49 +00:00
Chris Lattner 03a643aa69 enforce the proper range for the i386 N constraint.
llvm-svn: 35319
2007-03-25 01:57:35 +00:00
Bill Wendling d551a18783 Support added for shifts and unpacking MMX instructions.
llvm-svn: 35266
2007-03-22 18:42:45 +00:00
Dale Johannesen 0c6bb5eab7 repair x86 performance, dejagnu problems from previous change
llvm-svn: 35245
2007-03-21 21:51:52 +00:00
Chris Lattner f01f87bc63 fix a warning
llvm-svn: 35152
2007-03-19 00:39:32 +00:00
Devang Patel b38c2ec89c Support 'I' inline asm constraint.
llvm-svn: 35129
2007-03-17 00:13:28 +00:00
Bill Wendling 144b8bbf17 And now support for MMX logical operations.
llvm-svn: 35125
2007-03-16 09:44:46 +00:00
Bill Wendling e31034125c Multiplication support for MMX.
llvm-svn: 35118
2007-03-15 21:24:36 +00:00
Evan Cheng a1779b9739 Under X86-64 large code model, do not emit 32-bit pc relative calls.
llvm-svn: 35108
2007-03-14 22:11:11 +00:00
Evan Cheng 3ab7ea7965 More flexible TargetLowering LSR hooks for testing whether an immediate is
a legal target address immediate or scale.

llvm-svn: 35073
2007-03-12 23:28:50 +00:00
Evan Cheng 57f261b13a Stupid bug: SSE2 supports v2i64 add / sub.
llvm-svn: 35070
2007-03-12 22:58:52 +00:00
Bill Wendling e9b81f5366 Adding more arithmetic operators to MMX. This is an almost exact copy of
the addition. Please let me know if you have suggestions.

llvm-svn: 35055
2007-03-10 09:57:05 +00:00
Bill Wendling 6092ce25cf Added "padd*" support for MMX. Added MMX move stuff to X86InstrInfo so that
moves, loads, etc. are recognized.

llvm-svn: 35031
2007-03-08 22:09:11 +00:00
Anton Korobeynikov ed4b303c10 Refactoring of formal parameter flags. Enable proper use of
zext/sext/aext stuff.

llvm-svn: 35008
2007-03-07 16:25:09 +00:00
Bill Wendling 97905b4027 Properly support v8i8 and v4i16 types. It now converts them to v2i32 for
load and stores.

llvm-svn: 35002
2007-03-07 05:43:18 +00:00
Bill Wendling bbd25984b7 Add LOAD/STORE support for MMX.
llvm-svn: 34978
2007-03-06 18:53:42 +00:00
Anton Korobeynikov e7ec3bc7bc Use new SDIselParamAttr enumeration. This removes "magic" constants
from formal attributes' flags processing.

llvm-svn: 34963
2007-03-06 08:12:33 +00:00
Evan Cheng deaea25eb9 X86-64 VACOPY needs custom expansion. va_list is a struct { i32, i32, i8*, i8* }.
llvm-svn: 34857
2007-03-02 23:16:35 +00:00
Anton Korobeynikov 57af2a4f3b Simplify things
llvm-svn: 34849
2007-03-02 21:50:27 +00:00
Chris Lattner 9c7e5e365d argument lowering should copy from the vreg shadows of live-in arguments
passed in registers, not directly from the pregs themselves.

llvm-svn: 34838
2007-03-02 05:12:29 +00:00
Anton Korobeynikov af8be4458f Ensure that fastcall'ed function is correctly mangled & stack is
properly aligned

llvm-svn: 34788
2007-03-01 16:29:22 +00:00
Chris Lattner 7373f3a351 remove dead option
llvm-svn: 34754
2007-02-28 18:39:53 +00:00
Chris Lattner 152bfa103e use high-level functions in CCState
llvm-svn: 34739
2007-02-28 07:09:55 +00:00
Chris Lattner 227b6c5d19 make use of helper functions in CCState for analyzing formals and calls.
llvm-svn: 34737
2007-02-28 07:00:42 +00:00
Chris Lattner d439e86078 switch LowerFastCCCallTo over to using the new fastcall description.
llvm-svn: 34734
2007-02-28 06:26:33 +00:00
Chris Lattner 66e1d1dd7e switch LowerFastCCArguments over to using the autogenerated Fastcall description.
llvm-svn: 34733
2007-02-28 06:21:19 +00:00
Chris Lattner 3066beccb5 rearrange code
llvm-svn: 34731
2007-02-28 06:10:12 +00:00
Chris Lattner 3ed3be3b4a remove fastcc (not fastcall) support
llvm-svn: 34730
2007-02-28 06:05:16 +00:00
Chris Lattner b9db225049 switch LowerCCCArguments over to using autogenerated CC.
llvm-svn: 34729
2007-02-28 05:46:49 +00:00
Chris Lattner 5958b176f3 simplify sret handling
llvm-svn: 34728
2007-02-28 05:39:26 +00:00
Chris Lattner be7995953a switch LowerCCCCallTo over to using an autogenerated callingconv
llvm-svn: 34727
2007-02-28 05:31:48 +00:00
Chris Lattner ba3d273122 switch return value passing and the x86-64 calling convention information
over to being autogenerated from the X86CallingConv.td file.

llvm-svn: 34722
2007-02-28 04:55:35 +00:00
Chris Lattner c9eed39a5d switch x86-64 return value lowering over to using same mechanism as argument
lowering uses.

llvm-svn: 34657
2007-02-27 05:28:59 +00:00
Chris Lattner 9f059194a7 Minor refactoring of CC Lowering interfaces
llvm-svn: 34656
2007-02-27 05:13:54 +00:00
Chris Lattner dc3adc83e7 move CC Lowering stuff to its own public interface
llvm-svn: 34655
2007-02-27 04:43:02 +00:00
Chris Lattner 2e5e8407ad refactor x86-64 argument lowering yet again, this time eliminating templates,
'clients', etc, and adding CCValAssign instead.

llvm-svn: 34654
2007-02-27 04:18:15 +00:00
Chris Lattner ff19957468 switch to smallvector
llvm-svn: 34633
2007-02-26 07:59:53 +00:00
Chris Lattner 294780829a initial hack at splitting the x86-64 calling convention info out from the
mechanics that process it.  I'm still not happy with this, but it's a step
in the right direction.

llvm-svn: 34631
2007-02-26 07:50:02 +00:00
Chris Lattner d41bff0f97 the truncate must always be done, it's only the assert that is conditional.
llvm-svn: 34628
2007-02-26 05:21:05 +00:00
Chris Lattner 1db979bae8 in X86-64 CCC, i8/i16 arguments are already properly zext/sext'd on input.
Capture this so that downstream zext/sext's are optimized out.  This
compiles:
  int test(short X) { return (int)X; }

to:

_test:
        movl %edi, %eax
        ret

instead of:

_test:
        movswl %di, %eax
        ret


GCC produces this bizarre code:

_test:
        movw    %di, -12(%rsp)
        movswl  -12(%rsp),%eax
        ret

llvm-svn: 34623
2007-02-26 03:18:56 +00:00
Chris Lattner 8924332e22 Fix an X86-64 abi bug. We now compile:
void foo(short);
void bar(unsigned short A) {
  foo(A);
}

into:

_bar:
        subq $8, %rsp
        movswl %di, %edi
        call _foo
        addq $8, %rsp
        ret

instead of:

_bar:
        subq $8, %rsp
        call _foo
        addq $8, %rsp
        ret

Testcase here: test/CodeGen/X86/x86-64-shortint.ll

llvm-svn: 34615
2007-02-25 23:10:46 +00:00
Chris Lattner 3e0703357f fix CodeGen/X86/2007-02-25-FastCCStack.ll, a regression from my patch last
night:  fastcc returns should only go in XMM0 if we have SSE2 or above.

llvm-svn: 34613
2007-02-25 22:23:46 +00:00
Chris Lattner fcee9b5568 fastcc functions that return double values now return them in xmm0 on x86-32.
This implements CodeGen/X86/fp-stack-ret.ll:test[23]

llvm-svn: 34592
2007-02-25 09:31:16 +00:00
Chris Lattner 9d9cc84f5b allow vectors to be passed to stdcall/fastcall functions
llvm-svn: 34590
2007-02-25 09:14:25 +00:00
Chris Lattner 2fc0d70392 move LowerRET into the 'Return Value Calling Convention Implementation'
section of the file.

llvm-svn: 34589
2007-02-25 09:12:39 +00:00
Chris Lattner ba474f58a4 make all Lower*CallTo implementations use LowerCallResult to handle their
result value stuff.  This eliminates a bunch of duplicated code and now
GetRetValueLocs is the sole place that decides where a value is returned.

llvm-svn: 34588
2007-02-25 09:10:05 +00:00
Chris Lattner 7802f3e2ea pass the calling convention into Lower*CallTo, instead of using ad-hoc flags.
llvm-svn: 34587
2007-02-25 09:06:15 +00:00
Chris Lattner 0cd9960fe7 factor a bunch of code out of LowerCCCCallTo into a new LowerCallResult
function.  This function now uses GetRetValueLocs to determine *where*
the result values are located and concerns itself with *how* to pull the
values out.

llvm-svn: 34586
2007-02-25 08:59:22 +00:00
Chris Lattner 3c76309a5b move some code around, pass in calling conv, even though it is unused
llvm-svn: 34585
2007-02-25 08:29:00 +00:00
Chris Lattner dfda38f7dc simplify result value lowering by splitting the selection of *where* to return
registers out from the logic of *how* to return them.

This changes X86-64 to mark EAX live out when returning a 32-bit value,
where before it marked RAX liveout.

llvm-svn: 34582
2007-02-25 08:15:11 +00:00
Chris Lattner d6b853ad1b make void-return not a special case
llvm-svn: 34579
2007-02-25 07:18:38 +00:00
Chris Lattner 35a08551a5 eliminate a bunch more temporary vectors from X86 lowering.
llvm-svn: 34578
2007-02-25 07:10:00 +00:00
Chris Lattner e56fef9b51 eliminate temporary vectors created during X86 lowering.
llvm-svn: 34577
2007-02-25 06:40:16 +00:00
Chris Lattner 84141d4e99 remove std::vector's in RET lowering.
llvm-svn: 34576
2007-02-25 06:21:57 +00:00
Jim Laskey e0008e23cf Simplify lowering and selection of exception ops.
llvm-svn: 34488
2007-02-22 14:56:36 +00:00
Jim Laskey 3796abea0f Add support for providing exception and selector registers.
llvm-svn: 34482
2007-02-21 22:54:50 +00:00
Evan Cheng 84a041eb98 ELF / PIC requires the GOT pointer to be in the EBX register during calls via the PLT.
Add implicit uses of EBX to calls to ensure live interval analysis does not treat
the move of the GOT into EBX as dead upon definition.
This should fix PR1207.

llvm-svn: 34470
2007-02-21 21:18:14 +00:00
Anton Korobeynikov 1b4e6015b4 Fixed uninitialized stuff inside LegalizeDAG. Fortunately, the only
affected part is codegen of "memmove" inside the x86 backend. This fixes
PR1144

llvm-svn: 33752
2007-02-01 08:39:52 +00:00
Nate Begeman eda5997cc8 Finish off bug 680, allowing targets to custom lower frame and return
address nodes.

llvm-svn: 33636
2007-01-29 22:58:52 +00:00
Nick Lewycky 0c49722b36 Fix compile error "jump to case label crosses initialization".
What compiler are people using that accepts this code?

llvm-svn: 33603
2007-01-28 15:39:16 +00:00
Anton Korobeynikov 037c867b54 Propagate changes from my local tree. This patch includes:
1. New parameter attribute called 'inreg'. It has meaning "place this
parameter in registers, if possible". This is some generalization of
gcc's regparm(n) attribute. It's currently used only in X86-32 backend.
2. Completely rewritten CC handling/lowering code inside X86 backend.
Merged stdcall + c CCs and fastcall + fast CC.
3. Dropped CSRET CC. We cannot add struct return variant for each
target-specific CC (e.g. stdcall + csretcc and so on).
4. Instead of the CSRET CC, introduced the 'sret' parameter attribute. Setting it
on the first parameter means 'This is a hidden pointer to the structure
return. Handle it gently'.
5. Fixed a small bug in llvm-extract and added a new feature to the
FunctionExtraction pass, which relinks all internally-linked callees
of a deleted function to external linkage. This will allow everything to be
linked together later.

NOTEs: 1. Documentation will be updated soon.
       2. llvm-upgrade should be improved to translate csret => sret.
          Until then, there will be some unexpected test failures.
llvm-svn: 33597
2007-01-28 13:31:35 +00:00
Jim Laskey f9e5445ed4 Make LABEL a builtin opcode.
llvm-svn: 33537
2007-01-26 14:34:52 +00:00
Evan Cheng 1281dc32ef Linux GOT indirect reference is only necessary in PIC mode.
llvm-svn: 33441
2007-01-22 21:34:25 +00:00
Anton Korobeynikov a0554d90e8 * PIC codegen for X86/Linux has been implemented
* PIC-aware internal structures in X86 Codegen have been refactored
* Visibility (default/weak) has been added
* Docs fixes (external weak linkage, visibility, formatting)

llvm-svn: 33136
2007-01-12 19:20:47 +00:00
Evan Cheng 82241c86e9 - FCOPYSIGN custom lowering bug. Clear the sign bit of operand 0 first before
or'ing in the sign bit of operand 1.
- Tweaking: rather than left shift the sign bit, fp_extend operand 1 first
  before taking its sign bit if its type is smaller than that of operand 0.

llvm-svn: 32932
2007-01-05 21:37:56 +00:00
Evan Cheng 8c7094a770 Typo
llvm-svn: 32902
2007-01-05 08:32:24 +00:00
Evan Cheng 4363e884c0 With SSE2, expand FCOPYSIGN to a series of SSE bitwise operations.
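
As a rough illustration only (not part of the original commit; the function
name is made up), the expansion amounts to the following bit manipulation,
which with SSE2 can be done with bitwise operations on constant sign masks:

  #include <stdint.h>
  #include <string.h>

  /* clear the sign bit of the magnitude, then OR in the sign bit of sgn */
  double copysign_sketch(double mag, double sgn) {
    uint64_t m, s;
    memcpy(&m, &mag, sizeof m);
    memcpy(&s, &sgn, sizeof s);
    m = (m & 0x7fffffffffffffffULL) | (s & 0x8000000000000000ULL);
    memcpy(&mag, &m, sizeof m);
    return mag;
  }
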
llvm-svn: 32900
2007-01-05 07:55:56 +00:00
Reid Spencer e87b5e9825 Fix a comment that referred to the now defunct ubyte type.
llvm-svn: 32840
2007-01-03 17:24:59 +00:00
Anton Korobeynikov 4efbbc963f Really big cleanup.
- New target type "mingw" was introduced
- Things common to both mingw & cygwin are marked as "cygming" (as in
gcc)
- .lcomm is supported here, so allow LLVM to use it
- Correctly use underscored versions of setjmp & _longjmp for both mingw
& cygwin

llvm-svn: 32833
2007-01-03 11:43:14 +00:00
Reid Spencer e63b6518fa For PR950:
Three changes:
1. Convert signed integer types to signless versions.
2. Implement the @sext and @zext parameter attributes. Previously the
   type of a function parameter was used to determine whether it should
   be sign extended or zero extended before the call. This information is
   now communicated via the function type's parameter attributes.
3. The interface to LowerCallTo had to be changed in order to accommodate
   the parameter attribute information. Although it would have been
   convenient to pass in the FunctionType itself, there isn't always one
   present in the caller. Consequently, a signedness indication for the
   result type and for each parameter was provided for in the interface
   to this method. All implementations were changed to make the adjustment
   necessary.

llvm-svn: 32788
2006-12-31 05:55:36 +00:00
Anton Korobeynikov 430e68a1b9 Refactored JIT codegen for mingw32. Now we're using a standard relocation
type to distinguish JIT & non-JIT instead of "dirty" hacks :)

llvm-svn: 32745
2006-12-22 22:29:05 +00:00
Evan Cheng 735fcf74d8 f64 <-> i64 bit_convert using movq in 64-bit mode.
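
For illustration (not from the commit; names are made up), this is the kind of
source-level reinterpretation the node represents; in 64-bit mode it can be a
single movq between an XMM register and a GPR:

  #include <stdint.h>
  #include <string.h>

  uint64_t f64_bits(double d) {
    uint64_t u;
    memcpy(&u, &d, sizeof u);   /* same bits, different type */
    return u;
  }
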
llvm-svn: 32587
2006-12-14 21:55:39 +00:00
Anton Korobeynikov 3b7c257cae Cleaned up the setjmp/longjmp lowering interfaces. Now we're producing correct
code (both asm & CBE) for the Mingw32 target.
Removed autoconf checks for underscored versions of setjmp/longjmp.

llvm-svn: 32415
2006-12-10 23:12:42 +00:00
Chris Lattner c20b7e878a If we have ScalarSSE, we can select bitconvert into single instructions.
This compiles bitcast.ll:test3/test4 into:

_test3:
        movd %xmm0, %eax
        ret
_test4:
        movd %edi, %xmm0
        ret

llvm-svn: 32230
2006-12-05 18:45:06 +00:00
Chris Lattner 55c17f9177 Fix PR1033 and CodeGen/X86/bitcast.ll, by expanding bitcast to a load/store pair.
This could be better, readme entry pending.

llvm-svn: 32228
2006-12-05 18:22:22 +00:00
Chris Lattner a16201c672 Fix typo noticed by Lauro Ramos Venancio, thanks!
llvm-svn: 32223
2006-12-05 17:29:40 +00:00
Evan Cheng 62cdc3f011 - Fix X86-64 JIT by temporarily disabling code that treats GV address as 32-bit
immediate in small code model. The JIT cannot ensure GV's are placed in the
lower 4G.
- Some preliminary support for large code model.

llvm-svn: 32215
2006-12-05 04:01:03 +00:00
Chris Lattner d139ddd2e6 Add support for 64-bit 'r' constraint. Patch by Rafael Ávila de Espíndola!
This fixes PR1029 and Regression/CodeGen/X86/x86-64-asm.ll

llvm-svn: 32206
2006-12-04 22:38:21 +00:00
Evan Cheng ae1cd75af7 - Use a different wrapper node for RIP-relative GV, etc.
- Proper support for both small static and PIC modes under X86-64
- Some (non-optimal) support for medium modes.

llvm-svn: 32046
2006-11-30 21:55:46 +00:00
Evan Cheng 7a9238c84d In PIC mode, a GV not requiring an extra load can be used as an address immediate.
llvm-svn: 32028
2006-11-29 23:48:14 +00:00
Evan Cheng 0b1692216d Fix for PR1018 - Better support for X86-64 Linux in small code model.
llvm-svn: 32026
2006-11-29 23:19:46 +00:00
Evan Cheng 28a9e9b230 Custom lower READCYCLECOUNTER for x86-64.
llvm-svn: 32017
2006-11-29 08:28:13 +00:00
Evan Cheng 20350c4025 Change MachineInstr ctor's to take a TargetInstrDescriptor reference instead
of opcode and number of operands.

llvm-svn: 31947
2006-11-27 23:37:22 +00:00
Anton Korobeynikov 5b96cdebb0 Refactored *GVRequiresExtraLoad() to Subtarget method.
llvm-svn: 31887
2006-11-21 00:01:06 +00:00
Anton Korobeynikov 37d080b11d We should perform an extra load for dllimported calls
llvm-svn: 31874
2006-11-20 10:46:14 +00:00
Evan Cheng dbd3d294e6 Matches MachineInstr changes.
llvm-svn: 31712
2006-11-13 23:36:35 +00:00
Evan Cheng db04c958a5 Add implicit use / def operands to created MI's.
llvm-svn: 31676
2006-11-11 10:21:44 +00:00
Evan Cheng 49683ba236 Don't dag combine floating point select to max and min intrinsics. Those
take v4f32 / v2f64 operands and may end up causing larger spills / restores.
Added X86 specific nodes X86ISD::FMAX, X86ISD::FMIN instead.

This fixes PR996.

llvm-svn: 31645
2006-11-10 21:43:37 +00:00
Anton Korobeynikov b9c91c265c Fixing PR990: http://llvm.org/PR990.
This should unbreak csretcc on Linux & mingw targets. Several tests from
llvm-test should also be restored (fftbench, bigfib).

llvm-svn: 31613
2006-11-10 00:48:11 +00:00
Evan Cheng 922e191116 Fixed a bug which caused x86 to incorrectly match
shuffle v, undef, <2, ?, 3, ?>
to movhlps
It should match to unpckhps instead.

Added proper matching code for
shuffle v, undef, <2, 3, 2, 3>

llvm-svn: 31519
2006-11-07 22:14:24 +00:00
Reid Spencer de46e48420 For PR786:
Turn on -Wunused and -Wno-unused-parameter. Clean up most of the resulting
fall out by removing unused variables. Remaining warnings have to do with
unused functions (I didn't want to delete code without review) and unused
variables in generated code. Maintainers should clean up the remaining
issues when they see them. All changes pass DejaGnu tests and Olden.

llvm-svn: 31380
2006-11-02 20:25:50 +00:00
Chris Lattner 44daa50bed allow the address of a global to be used with the "i" constraint when in
-static mode.  This implements PR882.
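
A hedged sketch of the kind of use this enables (not from the commit; the
variable and function names are invented): with -static the address of a
global is a link-time constant, so it can satisfy the "i" (immediate)
constraint, and the %c operand modifier prints it without the '$':

  extern int counter;

  void bump(void) {
    asm volatile ("incl %c0" : : "i"(&counter) : "memory");
  }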

llvm-svn: 31326
2006-10-31 20:13:11 +00:00
Chris Lattner f6a6966cd2 handle "st" as "st(0)"
llvm-svn: 31320
2006-10-31 19:42:44 +00:00
Anton Korobeynikov aa4c0f9374 1. Clean up code due to changes in SwitchTo*Section(2)
2. Added partial debug support for mingw\cygwin targets (the same as
   Linux\ELF). Please note that currently mingw\cygwin uses the 'stabs' format
   for storing debug info by default, thus many (runtime) libraries have
   this information included. These formats shouldn't be mixed in one binary
   ('stabs' & 'DWARF'), otherwise binutils tools will be confused.

llvm-svn: 31311
2006-10-31 08:31:24 +00:00
Reid Spencer b51b5c0b1f Add debug support for X86/ELF targets (Linux). This allows llvm-gcc4
generated object modules to be debugged with gdb. Hopefully this helps
pre-release debugging.

llvm-svn: 31299
2006-10-30 22:32:30 +00:00
Evan Cheng 0d41d19427 All targets expand BR_JT for now.
llvm-svn: 31294
2006-10-30 08:02:39 +00:00
Evan Cheng e056dd5928 Fixed a significant bug where unpcklpd was incorrectly used to extract element 1 from a v2f64 value.
llvm-svn: 31228
2006-10-27 21:08:32 +00:00
Evan Cheng bf3df77758 Fix for PR968: expand vector sdiv, udiv, srem, urem.
llvm-svn: 31220
2006-10-27 18:49:08 +00:00
Evan Cheng c415c5be49 During vector shuffle lowering, we sometimes commute a vector shuffle to try
to match MOVL (movss, movsd, etc.). Don't forget to commute it back and try
unpck* and shufp* if that doesn't pan out.

llvm-svn: 31186
2006-10-25 21:49:50 +00:00
Evan Cheng 798b306311 Remove -disable-x86-shuffle-opti
llvm-svn: 31183
2006-10-25 20:48:19 +00:00
Chris Lattner c0fb567e23 Implement branch analysis/xform hooks required by the branch folding pass.
llvm-svn: 31065
2006-10-20 17:42:20 +00:00
Evan Cheng 949bcc94ea Avoid getting into an infinite loop when -disable-x86-shuffle-opti is specified.
llvm-svn: 30974
2006-10-16 06:36:00 +00:00
Evan Cheng ab51cf2e78 Merge ISD::TRUNCSTORE to ISD::STORE. Switch to using StoreSDNode.
llvm-svn: 30945
2006-10-13 21:14:26 +00:00
Evan Cheng 694810c227 Some X86ISD::CMP nodes were created with the wrong ValueTypes.
llvm-svn: 30913
2006-10-12 19:12:56 +00:00
Evan Cheng e646abb7b6 Don't convert to MOVLP if using shufps etc. may allow load folding.
llvm-svn: 30847
2006-10-09 21:39:25 +00:00
Evan Cheng e71fe34d75 Reflects ISD::LOAD / ISD::LOADX / LoadSDNode changes.
llvm-svn: 30844
2006-10-09 20:57:25 +00:00
Evan Cheng df9ac47e5e Make use of getStore().
llvm-svn: 30759
2006-10-05 23:01:46 +00:00
Chris Lattner f2ef243580 Lower some min/max idioms to minss/maxss when unsafe fp math is enabled.
llvm-svn: 30748
2006-10-05 04:11:26 +00:00
Evan Cheng 8c5766ef3f Added option -disable-x86-shuffle-opti to disable X86 specific vector shuffle optimizations.
llvm-svn: 30723
2006-10-04 18:33:38 +00:00
Chris Lattner 9259b1efb6 Pattern match min/max nodes when we have sse. This implements
CodeGen/X86/scalar_sse_minmax.ll
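
For illustration only (function name invented), this is roughly the kind of
scalar select that gets matched to minss when SSE is available:

  float min_float(float a, float b) {
    return a < b ? a : b;
  }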

llvm-svn: 30719
2006-10-04 06:57:07 +00:00
Evan Cheng 5d9fd977d3 Combine ISD::EXTLOAD, ISD::SEXTLOAD, ISD::ZEXTLOAD into ISD::LOADX. Add an
extra operand to LOADX to specify the exact value extension type.

llvm-svn: 30714
2006-10-04 00:56:09 +00:00
Chris Lattner f598d73142 Fix PR933 and CodeGen/X86/2006-10-02-BoolRetCrash.ll
llvm-svn: 30703
2006-10-03 17:18:42 +00:00
Chris Lattner fc36039f86 silence warnings in release build
llvm-svn: 30631
2006-09-27 18:29:38 +00:00
Chris Lattner 104aa5dbc1 Various random and minor code cleanups.
llvm-svn: 30608
2006-09-26 03:57:53 +00:00
Nick Lewycky c68bbef874 Fix compile error.
llvm-svn: 30553
2006-09-21 02:08:31 +00:00
Anton Korobeynikov 3c5b3df6a0 Adding code generation for StdCall & FastCall calling conventions
llvm-svn: 30549
2006-09-20 22:03:51 +00:00
Anton Korobeynikov 6f7072c66a Added some eye-candy for Subtarget type checking
Added X86 StdCall & FastCall calling conventions. Codegen will follow.

llvm-svn: 30446
2006-09-17 20:25:45 +00:00
Anton Korobeynikov 0ab01ff6e2 Small fixes for supporting dll* linkage types
llvm-svn: 30441
2006-09-17 13:06:18 +00:00
Anton Korobeynikov d61d39ec53 Adding dllimport, dllexport and external weak linkage types.
DLL* linkages got full (I hope) code generation support in the C backend & both x86
assembler backends.
External weak linkage is added for future use; we don't provide any
code generation, etc. support for it yet.

llvm-svn: 30374
2006-09-14 18:23:27 +00:00
Chris Lattner 971e33930d Turn X < 0 -> TEST X,X js
llvm-svn: 30294
2006-09-13 17:04:54 +00:00
Chris Lattner 0c9ae46c5f The sense of this branch was inverted :(
llvm-svn: 30293
2006-09-13 16:56:12 +00:00
Chris Lattner 7a627676be Compile X > -1 -> test X,X; js dest
This implements CodeGen/X86/jump_sign.ll.
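
As a rough source-level example (not from the commit; names are invented),
only the sign bit matters in a comparison like this, so it can be emitted as
a test of the value against itself followed by js:

  void use(int);

  void branch_on_sign(int X) {
    if (X > -1)
      use(X);
  }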

llvm-svn: 30283
2006-09-13 03:22:10 +00:00
Evan Cheng 9a083a4121 Reflects MachineConstantPoolEntry changes.
llvm-svn: 30279
2006-09-12 21:04:05 +00:00
Evan Cheng 4259a0f654 X86ISD::CMP now produces a chain as well as a flag. Make that the chain
operand of a conditional branch to allow load folding into CMP / TEST
instructions.

llvm-svn: 30241
2006-09-11 02:19:56 +00:00
Evan Cheng 11b0a5dbd4 Committing X86-64 support.
llvm-svn: 30177
2006-09-08 06:48:29 +00:00
Evan Cheng 89c5d04b9b - Identify a vector_shuffle that can be turned into an undef, e.g.
shuffle V1, <undef>, <undef, undef, 4, 5>
- Fix some suspicious logic in LowerVectorShuffle that caused less than
  optimal code by failing to identify MOVL (move to lowest element of a
  vector).

llvm-svn: 30171
2006-09-08 01:50:06 +00:00
Chris Lattner dc4ff5311f Eliminate X86ISD::TEST, using X86ISD::CMP instead. Match X86ISD::CMP patterns
using test, which provides nice simplifications like:

-       movl %edi, %ecx
-       andl $2, %ecx
-       cmpl $0, %ecx
+       testl $2, %edi
        je LBB1_11      #cond_next90

There are a couple of dagiselemitter deficiencies that this exposes; they will
be handled later.

llvm-svn: 30156
2006-09-07 20:33:45 +00:00
Chris Lattner 162f2d5d4c Revert this patch; the front-end has been fixed to make it unnecessary.
llvm-svn: 29752
2006-08-17 18:43:24 +00:00
Chris Lattner dfb3f0591d 'g' is handled by the front-end.
llvm-svn: 29751
2006-08-17 18:12:28 +00:00
Andrew Lenharth 4a063c5ffb Fix handling of 'g'. Closes 883
llvm-svn: 29750
2006-08-17 17:50:12 +00:00
Andrew Lenharth 1c3210d08d Add the 'c' constraint as needed by the linux kernel
llvm-svn: 29747
2006-08-17 16:07:50 +00:00
Andrew Lenharth fc60fb974c Add support for S and D constraints, as needed to compile the linux kernel.
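
A minimal sketch of how these constraints are typically used (assumed example,
not from the commit): "S" pins an operand to %esi and "D" to %edi, the
registers the x86 string instructions expect:

  void copy_dword(void *dst, void *src) {
    asm volatile ("movsl"
                  : "+S"(src), "+D"(dst)
                  :
                  : "memory");
  }
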
llvm-svn: 29746
2006-08-17 15:35:43 +00:00
Chris Lattner ed728e8dc9 Eliminate use of getNode that takes a vector.
llvm-svn: 29614
2006-08-11 17:38:39 +00:00
Evan Cheng bd1c5a8fb8 Match tablegen changes.
llvm-svn: 29604
2006-08-11 09:08:15 +00:00
Evan Cheng 5c68bba085 Convert more calls of getNode() that takes a vector to pass in the start of an array.
llvm-svn: 29601
2006-08-11 07:35:45 +00:00
Chris Lattner c24a1d3093 Start eliminating temporary vectors used to create DAG nodes. Instead, pass
in the start of an array and a count of operands where applicable.  In many
cases, the number of operands is known, so this static array can be allocated
on the stack, avoiding the heap.  In many other cases, a SmallVector can be
used, which has the same benefit in the common cases.

I updated a lot of code calling getNode that takes a vector, but ran out of
time.  The rest of the code should be updated, and these methods should be
removed.

We should also do the same thing to eliminate the methods that take a
vector of MVT::ValueTypes.

It would be extra nice to convert the dagiselemitter to avoid creating vectors
for operands when calling getTargetNode.

llvm-svn: 29566
2006-08-08 02:23:42 +00:00
Chris Lattner 524129dd64 Fix PR850 and CodeGen/X86/2006-07-31-SingleRegClass.ll.
The CFE refers to all single-register constraints (like "A") by their 16-bit
name, even though the 8 or 32-bit version of the register may be needed.
The X86 backend should realize what is going on and redecode the name back
to its proper form.

llvm-svn: 29420
2006-07-31 23:26:50 +00:00
Chris Lattner 9e56e5c003 Rename RelocModel::PIC to PIC_, to avoid conflicts with -DPIC.
llvm-svn: 29307
2006-07-26 21:12:04 +00:00
Evan Cheng 74065bedf2 This opt is now handled in DAG combine.
llvm-svn: 29243
2006-07-21 08:26:46 +00:00
Evan Cheng 4cf0238720 A splat of a vector constant of all zeros or all ones is the vector constant itself.
llvm-svn: 29234
2006-07-20 23:09:47 +00:00
Chris Lattner c8db10725b Add information preventing several register class constraints from working.
This implements PR828 and CodeGen/X86/2006-07-12-InlineAsmQConstraint.ll

llvm-svn: 29118
2006-07-12 16:59:49 +00:00
Chris Lattner 298ef37e02 Implement the inline asm 'A' constraint. This implements PR825 and
CodeGen/X86/2006-07-10-InlineAsmAConstraint.ll
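
A small assumed example (not from the commit): on x86-32 the 'A' constraint
ties a 64-bit value to the edx:eax register pair, as in the classic rdtsc
idiom:

  unsigned long long read_tsc(void) {
    unsigned long long t;
    asm volatile ("rdtsc" : "=A"(t));
    return t;
  }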

llvm-svn: 29101
2006-07-11 02:54:03 +00:00
Evan Cheng 79cf9a5342 Fixed stack objects do not specify alignments, but their offsets are known.
Use that information when doing the transformation to merge multiple loads
into a 128-bit load.

llvm-svn: 29090
2006-07-10 21:37:44 +00:00
Chris Lattner 9aabc1e16f Mark internal function static
llvm-svn: 29085
2006-07-10 19:53:12 +00:00
Evan Cheng 5987cfb7b1 X86 target specific DAG combine: turn build_vector (load x), (load x+4),
(load x+8), (load x+12), <0, 1, 2, 3> to a single 128-bit load (aligned and
unaligned).

e.g.

__m128 test(float a, float b, float c, float d) {
  return _mm_set_ps(d, c, b, a);
}

_test:
        movups 4(%esp), %xmm0
        ret

llvm-svn: 29042
2006-07-07 08:33:52 +00:00
Evan Cheng 0261242aa6 Reorg. No functionality change.
llvm-svn: 28999
2006-07-05 22:17:51 +00:00
Evan Cheng 38c5aee959 Simplify X86CompilationCallback: always align to 16-byte boundary; don't save EAX/EDX if unnecessary.
llvm-svn: 28910
2006-06-24 08:36:10 +00:00
Evan Cheng de7156f12c Type of vector extract / insert index operand should be iPTR.
llvm-svn: 28796
2006-06-15 08:14:54 +00:00
Evan Cheng ca25486603 Add argument registers to the end of call operand list (partial fix).
llvm-svn: 28783
2006-06-14 18:17:40 +00:00
Evan Cheng 0e14a56d35 Minor compilation speed improvement.
llvm-svn: 28736
2006-06-09 06:24:42 +00:00
Evan Cheng dc614c193e Added X86FunctionInfo subclass of MachineFunction to record whether the
function that is being lowered is forced to use FP. Currently this is only
true for main() / Cygwin.

llvm-svn: 28703
2006-06-06 23:30:24 +00:00
Evan Cheng 2b2c1be49c Typos
llvm-svn: 28617
2006-06-01 05:53:27 +00:00
Evan Cheng 2489ccdd90 Remove a warning
llvm-svn: 28607
2006-06-01 00:30:39 +00:00
Evan Cheng 550cb663e8 Remove dead code.
llvm-svn: 28581
2006-05-31 00:50:42 +00:00
Evan Cheng a3add0fea8 Change RET node to include signedness information of the return values, i.e.
RET chain, value1, sign1, value2, sign2, ...

llvm-svn: 28510
2006-05-26 23:10:12 +00:00
Evan Cheng b92f418408 Vector arguments must be passed in a memory location aligned on a 16-byte boundary.
llvm-svn: 28505
2006-05-26 20:37:47 +00:00
Evan Cheng bfb5ea6875 Mac OS X ABI document lied. The first four XMM registers are used to pass
vector arguments, not three.

llvm-svn: 28504
2006-05-26 19:22:06 +00:00
Evan Cheng a01e799927 Minor update to make the code more clear
llvm-svn: 28499
2006-05-26 18:39:59 +00:00
Evan Cheng cbfb3d07e0 Update more comments.
llvm-svn: 28498
2006-05-26 18:37:16 +00:00
Evan Cheng 763f9b00f0 Fix some comments.
llvm-svn: 28497
2006-05-26 18:25:43 +00:00
Evan Cheng 83dc51d7ff No need to handle illegal types.
llvm-svn: 28496
2006-05-26 18:22:49 +00:00
Evan Cheng 8aca43e8da Consistency
llvm-svn: 28488
2006-05-25 23:31:23 +00:00
Evan Cheng 0421aca87a Some clean up.
llvm-svn: 28483
2006-05-25 22:38:31 +00:00
Evan Cheng 29f805ec65 Remove some dead code.
llvm-svn: 28481
2006-05-25 22:25:52 +00:00
Evan Cheng 5ee96893ae Build breakage.
llvm-svn: 28475
2006-05-25 18:56:34 +00:00
Evan Cheng 2a33094284 Switch X86 over to a call-selection model where the lowering code creates
the copyto/fromregs instead of making the X86ISD::CALL selection code create
them.

llvm-svn: 28463
2006-05-25 00:59:30 +00:00
Chris Lattner a58f559848 Fix file header comment
llvm-svn: 28441
2006-05-23 23:20:42 +00:00
Evan Cheng 7068a93cae Better way to check for vararg.
llvm-svn: 28440
2006-05-23 21:08:24 +00:00
Evan Cheng 17e734f0a6 Remove PreprocessCCCArguments and PreprocessFastCCArguments now that
FORMAL_ARGUMENTS nodes include a token operand.

llvm-svn: 28439
2006-05-23 21:06:34 +00:00
Chris Lattner 8be5be817c Implement an annoying part of the Darwin/X86 abi: the callee of a struct
return argument pops the hidden struct pointer if present, not the caller.

For example, in this testcase:

struct X { int D, E, F, G; };
struct X bar() {
  struct X a;
  a.D = 0;
  a.E = 1;
  a.F = 2;
  a.G = 3;
  return a;
}
void foo(struct X *P) {
  *P = bar();
}

We used to emit:

_foo:
        subl $28, %esp
        movl 32(%esp), %eax
        movl %eax, (%esp)
        call _bar
        addl $28, %esp
        ret
_bar:
        movl 4(%esp), %eax
        movl $0, (%eax)
        movl $1, 4(%eax)
        movl $2, 8(%eax)
        movl $3, 12(%eax)
        ret

This is correct on Linux/X86 but not Darwin/X86.  With this patch, we now
emit:

_foo:
        subl $28, %esp
        movl 32(%esp), %eax
        movl %eax, (%esp)
        call _bar
***     addl $24, %esp
        ret
_bar:
        movl 4(%esp), %eax
        movl $0, (%eax)
        movl $1, 4(%eax)
        movl $2, 8(%eax)
        movl $3, 12(%eax)
***     ret $4

For the record, GCC emits (which is functionally equivalent to our new code):

_bar:
        movl    4(%esp), %eax
        movl    $3, 12(%eax)
        movl    $2, 8(%eax)
        movl    $1, 4(%eax)
        movl    $0, (%eax)
        ret     $4
_foo:
        pushl   %esi
        subl    $40, %esp
        movl    48(%esp), %esi
        leal    16(%esp), %eax
        movl    %eax, (%esp)
        call    _bar
        subl    $4, %esp
        movl    16(%esp), %eax
        movl    %eax, (%esi)
        movl    20(%esp), %eax
        movl    %eax, 4(%esi)
        movl    24(%esp), %eax
        movl    %eax, 8(%esi)
        movl    28(%esp), %eax
        movl    %eax, 12(%esi)
        addl    $40, %esp
        popl    %esi
        ret

This fixes SingleSource/Benchmarks/CoyoteBench/fftbench with LLC and the
JIT, and fixes the X86-backend portion of PR729.  The CBE still needs to
be updated.

llvm-svn: 28438
2006-05-23 18:50:38 +00:00
Chris Lattner 01dd6df5f3 CSRet allows varargs
llvm-svn: 28409
2006-05-19 21:34:04 +00:00
Evan Cheng 8c6b234ce8 Should pass by reference.
llvm-svn: 28357
2006-05-17 19:07:40 +00:00
Chris Lattner c7df70db57 Implement the custom lowering hook right, returning values for all of the
arguments at once.

llvm-svn: 28327
2006-05-16 17:14:26 +00:00
Chris Lattner 7b8b8bbbf9 Fix a bug I introduced yesterday, which broke functions with *no* arguments.
llvm-svn: 28326
2006-05-16 17:08:35 +00:00
Evan Cheng 9fee442e63 X86 integer register classes naming changes. Make them consistent with FP, vector classes.
llvm-svn: 28324
2006-05-16 07:21:53 +00:00
Chris Lattner 3d82699605 Add a chain to FORMAL_ARGUMENTS. This is a minimal port of the X86 backend;
it doesn't currently use/maintain the chain properly.  Also, make the
X86ISelLowering.cpp file 80-col clean.

llvm-svn: 28320
2006-05-16 06:45:34 +00:00
Chris Lattner 22f95b74ba Dead variable
llvm-svn: 28265
2006-05-12 21:12:22 +00:00
Chris Lattner 6d4a2dc4ad Teach the X86 backend about non-i32 inline asm register classes.
llvm-svn: 28139
2006-05-06 00:29:37 +00:00
Chris Lattner 44a73e9fa5 Teach the code generator to use cvtss2sd as extload f32 -> f64
llvm-svn: 28131
2006-05-05 21:35:18 +00:00
Owen Anderson 20a631fde7 Refactor TargetMachine, pushing handling of TargetData into the target-specific subclasses. This has one caller-visible change: getTargetData() now returns a pointer instead of a reference.
This fixes PR 759.

llvm-svn: 28074
2006-05-03 01:29:57 +00:00
Evan Cheng 88decded82 Initial caller side support (for CCC only, not FastCC) of 128-bit vector
passing by value.

llvm-svn: 28015
2006-04-28 21:29:37 +00:00
Evan Cheng 3cd4362ade Implement four-wide shuffle with 2 shufps if no more than two elements come
from each vector. e.g.
        shuffle(G1, G2, 7, 1, 5, 2)
==>
        movaps _G2, %xmm0
        shufps $151, _G1, %xmm0
        shufps $216, %xmm0, %xmm0

llvm-svn: 28011
2006-04-28 07:03:38 +00:00
Evan Cheng d43c5c6046 TargetLowering::LowerArguments should return a VBIT_CONVERT of
FORMAL_ARGUMENTS SDOperand in the return result vector.

llvm-svn: 28009
2006-04-28 05:25:15 +00:00
Evan Cheng f4f3f0d25f Make x86 isel lowering produce tailcall nodes. They are matched to normal calls
for now.

Patch contributed by Alexander Friedman.

llvm-svn: 27994
2006-04-27 08:40:39 +00:00
Evan Cheng 89001ad729 Support for passing 128-bit vector arguments via XMM registers.
llvm-svn: 27992
2006-04-27 08:31:10 +00:00
Evan Cheng a0374e1bed Oops
llvm-svn: 27989
2006-04-27 05:44:50 +00:00
Evan Cheng 24eb3f4765 Bug fix: not updating NumIntRegs.
llvm-svn: 27988
2006-04-27 05:35:28 +00:00
Evan Cheng 48940d16b2 - Clean up formal argument lowering code. Prepare for vector pass by value work.
- Fixed vararg support.

llvm-svn: 27985
2006-04-27 01:32:22 +00:00
Evan Cheng 1c39903297 Fix fastcc failures.
llvm-svn: 27980
2006-04-26 18:21:31 +00:00
Evan Cheng e0bcfbe811 Switching over FORMAL_ARGUMENTS mechanism to lower call arguments.
llvm-svn: 27975
2006-04-26 01:20:17 +00:00
Evan Cheng a9467aab0a Separate LowerOperation() into multiple functions, one per opcode.
llvm-svn: 27972
2006-04-25 20:13:52 +00:00
Evan Cheng 5c2bfb069e Special case handling two wide build_vector(0, x).
llvm-svn: 27961
2006-04-24 22:58:52 +00:00
Evan Cheng b0461080e4 A little bit more build_vector enhancement for v8i16 cases.
llvm-svn: 27959
2006-04-24 18:01:45 +00:00
Evan Cheng b4f31dd1a8 MOVL shuffle (i.e. movd or movss / movsd from memory) of undef, V2 == V2
llvm-svn: 27953
2006-04-23 06:35:19 +00:00
Nate Begeman 4ca2ea5b43 JumpTable support! What this represents is working asm and jit support for
x86 and ppc for 100% dense switch statements when relocations are non-PIC.
This support will be extended and enhanced in the coming days to support
PIC, and less dense forms of jump tables.
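
For illustration (assumed example, not from the commit), a fully dense switch
like this is the shape that gets lowered to an indirect jump through a table
of block addresses:

  int classify(int x) {
    switch (x) {
    case 0: return 10;
    case 1: return 20;
    case 2: return 30;
    case 3: return 40;
    case 4: return 50;
    default: return -1;
    }
  }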

llvm-svn: 27947
2006-04-22 18:53:45 +00:00