Commit Graph

1652 Commits

Author SHA1 Message Date
Benjamin Kramer 959b7e9df7 GCC complains about the angle of this line.
Remove the escaped newline.

llvm-svn: 135739
2011-07-22 01:02:57 +00:00
Bruno Cardoso Lopes 1872173841 Remove the 128-bit special handling from SCALAR_TO_VECTOR. This isn't
the way to go. Doing this here will prevent several node matches later,
and would force looking all the way through several
VINSERTF128/VEXTRACTF128 chains to optimize simple things.

llvm-svn: 135730
2011-07-22 00:15:10 +00:00
Bruno Cardoso Lopes 612e56174b - Inspected an AVX code block added by someone in early February. It was never
used and was actually very wrong; fix it and make it simpler. Also remove the
ConcatVectors function, which is now unused.

- Fix an introduction of useless nodes in r126664 and r126264. The
VUNPCKL* nodes should never be introduced, because we don't want duplicate
nodes for 128-bit AVX and non-AVX modes; the actual instruction
difference only exists during isel, but not for target-specific DAG
nodes. We only introduce V* target nodes when there is no 128-bit
version already there.

- Fix a fragile test and make it more useful.

llvm-svn: 135729
2011-07-22 00:15:07 +00:00
Bruno Cardoso Lopes 91eff5140f Add a DAGCombine for transforming 128->256 casts into a simple
vxorps + vinsertf128 pair of instructions

llvm-svn: 135727
2011-07-22 00:15:00 +00:00
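
As an illustration of the pattern this combine targets (a sketch, not code from the commit): at the source level the same shape can be written with AVX intrinsics, zeroing the upper 128-bit lane and inserting the 128-bit value into the lower lane.

#include <immintrin.h>

/* Sketch only: widen a 128-bit vector to 256 bits with the upper lane
   zeroed -- the vxorps (zero) + vinsertf128 (insert low lane) shape the
   DAGCombine above produces. */
__m256 widen_zero_upper(__m128 x) {
  return _mm256_insertf128_ps(_mm256_setzero_ps(), x, 0);
}
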
Bruno Cardoso Lopes dbebd01269 Introduce a new function to lower 256-bit vectors which are not
directly supported and should be promoted and handled by smaller
shuffles

llvm-svn: 135726
2011-07-22 00:14:56 +00:00
Bruno Cardoso Lopes 95d037721b Rename function to be more specific and be more strict about its usage
llvm-svn: 135725
2011-07-22 00:14:53 +00:00
Bruno Cardoso Lopes 178fb40612 - Register v16i16 as a valid VR256 register class
- Add more bitcasts for v16i16
- Since 135661 and 135662 already added the splat logic,
just add one more splat test for v16i16

llvm-svn: 135663
2011-07-21 02:24:08 +00:00
Bruno Cardoso Lopes b878caa5e2 Add support for 256-bit versions of the VPERMIL instruction. This is a new
instruction introduced in AVX, which can operate on 128- and 256-bit vectors.
It considers a 256-bit vector as two independent 128-bit lanes. It can permute
any 32- or 64-bit elements inside a lane, and restricts the second lane to
have the same permutation as the first one. With the improved splat support
introduced earlier today, adding codegen for this instruction enables more
efficient 256-bit code:

Instead of:
  vextractf128  $0, %ymm0, %xmm0
  punpcklbw %xmm0, %xmm0
  punpckhbw %xmm0, %xmm0
  vinsertf128 $0, %xmm0, %ymm0, %ymm1
  vinsertf128 $1, %xmm0, %ymm1, %ymm0
  vextractf128  $1, %ymm0, %xmm1
  shufps  $1, %xmm1, %xmm1
  movss %xmm1, 28(%rsp)
  movss %xmm1, 24(%rsp)
  movss %xmm1, 20(%rsp)
  movss %xmm1, 16(%rsp)
  vextractf128  $0, %ymm0, %xmm0
  shufps  $1, %xmm0, %xmm0
  movss %xmm0, 12(%rsp)
  movss %xmm0, 8(%rsp)
  movss %xmm0, 4(%rsp)
  movss %xmm0, (%rsp)
  vmovaps (%rsp), %ymm0
We get:
  vextractf128  $0, %ymm0, %xmm0
  punpcklbw %xmm0, %xmm0
  punpckhbw %xmm0, %xmm0
  vinsertf128 $0, %xmm0, %ymm0, %ymm1
  vinsertf128 $1, %xmm0, %ymm1, %ymm0
  vpermilps $85, %ymm0, %ymm0

llvm-svn: 135662
2011-07-21 01:55:47 +00:00
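
For readers decoding the final instruction above: the immediate 85 is 0b01010101, i.e. the 2-bit selector 1 repeated for all four 32-bit slots of each 128-bit lane. A rough equivalent at the intrinsics level (a sketch, not code from the commit) is:

#include <immintrin.h>

/* Sketch: vpermilps $85 broadcasts element 1 within each 128-bit lane,
   since 85 == 0b01'01'01'01 (the same 2-bit selector four times). */
__m256 splat_elt1_per_lane(__m256 v) {
  return _mm256_permute_ps(v, 0x55);  /* 0x55 == 85 */
}
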
Bruno Cardoso Lopes fb4920eb25 Improve splat promotion to handle AVX types: v32i8 and v16i16. Also
refactor the code and add a bunch of comments. The final shuffle
emitted by handling 256-bit types is suitable for the VPERM shuffle
instruction, which will be introduced in a subsequent commit (with
a testcase that covers this commit).

llvm-svn: 135661
2011-07-21 01:55:42 +00:00
Bruno Cardoso Lopes 0bdeacf03b Tidy up code
llvm-svn: 135656
2011-07-21 01:55:27 +00:00
Evan Cheng bbf3b0de8b Goodbye TargetAsmInfo. This eliminates the last bit of CodeGen and Target in llvm-mc.
There is still a bit more refactoring left to do in Targets, but we are now very
close to fixing all the layering issues in MC.

llvm-svn: 135611
2011-07-20 19:50:42 +00:00
Evan Cheng d60fa58ba1 Sink getDwarfRegNum, getLLVMRegNum, getSEHRegNum from TargetRegisterInfo down
to MCRegisterInfo. Also initialize the mapping at construction time.

This patch eliminates TargetRegisterInfo from TargetAsmInfo. It's another step
towards fixing the layering violation.

llvm-svn: 135424
2011-07-18 20:57:22 +00:00
Chris Lattner 229907cd11 land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Bruno Cardoso Lopes 8df9cfc279 Fix a couple of things:
1) Make non-legal 256-bit loads be promoted to v4i64. This lets us
canonicalize the loads and handle things the same way we used to handle
them for 128-bit registers. Despite what one of the removed comments
explained, the load promotion would not mess with VPERM; it's only a
matter of doing the appropriate bitcasts when these instructions come
to be introduced. Also make LOAD v8i32 legal.

2) Doing 1) exposed two bugs:
- v4i64 was being promoted to itself for several opcodes (introduced
in r124447 by David Greene), causing endless recursion and blowing up
the stack.
- there was no support for allOnes BUILD_VECTORs, and ANDNP would fail to
match because it was generating early target constant pools during
lowering.

3) The testcases are already checked in; doing 1) exposed the
bugs in the current testcases.

4) Tidy up the code to be clearer and more explicit about AVX.

llvm-svn: 135313
2011-07-15 22:24:33 +00:00
Eric Christopher 92464be28c Check register class matching instead of width-of-type matching
when determining the validity of a matching constraint. Allow i1
types access to the GR8 register class for x86.

Fixes PR10352 and rdar://9777108

llvm-svn: 135180
2011-07-14 20:13:52 +00:00
Nadav Rotem 771f29677f [VECTOR-SELECT]
During type legalization we often use the SIGN_EXTEND_INREG SDNode.
When this SDNode is legalized during the LegalizeVector phase, it is
scalarized because non-simple types are automatically marked to be expanded.
In this patch we add support for lowering SIGN_EXTEND_INREG manually.
This fixes CodeGen/X86/vec_sext.ll when running with the '-promote-elements'
flag.

llvm-svn: 135144
2011-07-14 11:11:14 +00:00
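
For context (an illustration, not part of the patch): SIGN_EXTEND_INREG sign-extends the low bits of a value in place within a wider type, and the vector form does the same lane-wise. The scalar picture in C:

#include <stdint.h>

/* Scalar equivalent of SIGN_EXTEND_INREG(i32, from i8): reinterpret the
   low 8 bits as signed and extend them back out to the full 32-bit width.
   The classic shift idiom on the right does the same thing. */
int32_t sext_inreg_i8(int32_t x) {
  return (int32_t)(int8_t)x;          /* == (x << 24) >> 24 (arithmetic) */
}
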
Bruno Cardoso Lopes 9613b64916 Make X86ISD::ANDNP more general and codegen 256-bit VANDNP. A more
general version of X86ISD::ANDNP also opened room for a little bit
of refactoring.

llvm-svn: 135088
2011-07-13 21:36:51 +00:00
Bruno Cardoso Lopes 7ba479d22f The name of the target-specific node PANDN is misleading, because it's
later selected to an ANDNPD/ANDNPS instruction instead of the PANDN
instruction. Rename it.

llvm-svn: 135087
2011-07-13 21:36:47 +00:00
Julien Lerouge 112fcc164a Add _allrem, _aullrem and _allmul to the runtime for MSVC.
http://llvm.org/bugs/show_bug.cgi?id=10305

llvm-svn: 134744
2011-07-08 21:40:25 +00:00
Cameron Zwarich f03fa189ca Add an intrinsic and codegen support for fused multiply-accumulate. The intent
is to use this for architectures that have a native FMA instruction.

llvm-svn: 134742
2011-07-08 21:39:21 +00:00
Nick Lewycky 9badf60203 Let the inline asm 'q' constraint match float and, on 64-bit, double too.
Fixes PR9602!

llvm-svn: 134665
2011-07-08 00:19:27 +00:00
Eric Christopher 7a2a0f80de Go ahead and emit the barrier on x86-64 even without sse2. The
processor supports it just fine.

Fixes PR9675 and rdar://9740801

llvm-svn: 134664
2011-07-08 00:04:56 +00:00
Eric Christopher 9721396dab Add support for the X86 'l' constraint.
Fixes PR10149 and rdar://9738585

llvm-svn: 134648
2011-07-07 22:29:07 +00:00
Eric Christopher 7e5f2350d3 Use getRegForInlineAsmConstraint instead of custom defining regclasses
via vectors.

Part of rdar://9643582

llvm-svn: 134079
2011-06-29 17:23:50 +00:00
Jakob Stoklund Olesen 7297e7e223 Clean up the handling of the x87 fp stack to make it more robust.
Drop the FpMov instructions, use plain COPY instead.

Drop the FpSET/GET instruction for accessing fixed stack positions.
Instead use normal COPY to/from ST registers around inline assembly, and
provide a single new FpPOP_RETVAL instruction that can access the return
value(s) from a call. This is still necessary since you cannot tell from
the CALL instruction alone if it returns anything on the FP stack. Teach
fast isel to use this.

This provides a much more robust way of handling fixed stack registers -
we can tolerate arbitrary FP stack instructions inserted around calls
and inline assembly. Live range splitting could sometimes break x87 code
by inserting spill code in unfortunate places.

As a bonus we handle floating point inline assembly correctly now.

llvm-svn: 134018
2011-06-28 18:32:28 +00:00
Chad Rosier 15db390f8f Replace dyn_cast<> with cast<> since the cast is already guarded by the necessary check.
llvm-svn: 133874
2011-06-25 18:51:28 +00:00
Chad Rosier bde13d3f76 Enable tail call optimization in the presence of a byval (x86-32 and x86-64).
<rdar://problem/9483883>

llvm-svn: 133858
2011-06-25 02:04:56 +00:00
Chad Rosier e553e75b15 Hoist a simple check above more complex checking to avoid unnecessary
overhead.  No functional change intended.

llvm-svn: 133824
2011-06-24 21:15:36 +00:00
Evan Cheng 3a0c5e52ff Remove TargetOptions.h dependency from X86Subtarget.
llvm-svn: 133726
2011-06-23 17:54:54 +00:00
Benjamin Kramer 25e17b0f89 Remove unused but set variables.
llvm-svn: 133347
2011-06-18 11:09:41 +00:00
John McCall 4b7a8d68ae Add a new function attribute, nonlazybind, which inhibits lazy-loading
optimizations when emitting calls to the function;  instead those calls may
use faster relocations which require the function to be immediately resolved
upon loading the dynamic object featuring the call.  This is useful when it
is known that the function will be called frequently and pervasively and
therefore there is no merit in delaying binding of the function.

Currently only implemented for x86-64, where it turns into a call through
the global offset table.

Patch by Dan Gohman, who assures me that he's going to add LangRef documentation
for this once it's committed.

llvm-svn: 133080
2011-06-15 20:36:13 +00:00
Eric Christopher 0713a9d8fc Add a parameter to CCState so that it can access the MachineFunction.
No functional change.

Part of PR6965

llvm-svn: 132763
2011-06-08 23:55:35 +00:00
Stuart Hastings e0d3426e1a Followup to 132458, omit unnecessary stack copy when x87 input is a
load.  rdar://problem/6373334

llvm-svn: 132696
2011-06-06 23:15:58 +00:00
Stuart Hastings be605494ac Reapply 132424 with fixes. This fixes PR10068.
rdar://problem/5993888

llvm-svn: 132606
2011-06-03 23:53:54 +00:00
Eric Christopher de9399bf76 Have LowerOperandForConstraint handle multiple character constraints.
Part of rdar://9119939

llvm-svn: 132510
2011-06-02 23:16:42 +00:00
Rafael Espindola aa318ae495 Revert 132424 to fix PR10068.
llvm-svn: 132479
2011-06-02 19:57:47 +00:00
Stuart Hastings 8d530ad22a Omit unnecessary stack copy when x87 input is a load.
rdar://problem/6373334

llvm-svn: 132458
2011-06-02 15:57:11 +00:00
Stuart Hastings 7adc95f69e Recommit 132404 with fixes. rdar://problem/5993888
llvm-svn: 132424
2011-06-01 21:33:14 +00:00
Stuart Hastings aab130d995 Revert 132404 to appease a buildbot. rdar://problem/5993888
llvm-svn: 132419
2011-06-01 19:52:20 +00:00
Stuart Hastings 7b7c102f2c Add support for x86 CMPEQSS and friends. These instructions do a
floating-point comparison, generate a mask of 0s or 1s, and generally
DTRT with NaNs.  Only profitable when the user wants a materialized 0
or 1 at runtime.  rdar://problem/5993888

llvm-svn: 132404
2011-06-01 17:17:45 +00:00
Stuart Hastings 9f20804216 FGETSIGN support for x86, using movmskps/pd. Will be enabled with a
patch to TargetLowering.cpp.  rdar://problem/5660695

llvm-svn: 132388
2011-06-01 04:39:42 +00:00
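
A rough user-level sketch of the movmskps idea the FGETSIGN lowering relies on (assuming the SSE headers; this is not code from the patch):

#include <xmmintrin.h>

/* Sketch: movmskps packs the sign bits of the vector lanes into an integer,
   so the sign bit of a scalar float can be read without bouncing the value
   through memory. */
int float_sign_bit(float x) {
  return _mm_movemask_ps(_mm_set_ss(x)) & 1;
}
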
Stuart Hastings 493a12bf5e Reverting 132105: it broke some LLVM-GCC DejaGNU tests.
llvm-svn: 132108
2011-05-26 04:09:49 +00:00
Stuart Hastings 276f231c2f Correctly handle a one-word struct passed byval on x86_64.
rdar://problem/6920088

llvm-svn: 132105
2011-05-26 02:44:56 +00:00
Evan Cheng 88f9137fd7 - Teach SelectionDAG::isKnownNeverZero to return true for (op x, c) when c is
non-zero.
- Teach X86 cmov optimization to eliminate the cmov from ctlz, cttz extension
  when the source of X86ISD::BSR / X86ISD::BSF is proven to be non-zero.

rdar://9490949

llvm-svn: 131948
2011-05-24 01:48:22 +00:00
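
As a sketch of the second point (a hypothetical example, not from the commit): __builtin_clz is undefined at zero, so the guarded form needs a cmov or branch, but when the input is provably non-zero, e.g. (x | 1), the guard and the cmov can go away.

/* Guarded count-leading-zeros: the x == 0 case forces a cmov (or branch). */
unsigned clz_guarded(unsigned x) {
  return x ? (unsigned)__builtin_clz(x) : 32u;
}

/* (x | 1) is never zero, so the guard folds away and no cmov is needed. */
unsigned clz_or1(unsigned x) {
  return (unsigned)__builtin_clz(x | 1u);
}
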
Chad Rosier 552f8c4819 Don't attempt to tail call optimize for Win64.
llvm-svn: 131709
2011-05-20 00:59:28 +00:00
Evan Cheng e8d2e9eb35 Revert r131664 and fix it in instcombine instead. rdar://9467055
llvm-svn: 131708
2011-05-20 00:54:37 +00:00
Eric Christopher 4014e5e208 Oddly people want to use the 'r' constraint for fp constants on x86.
Fixes rdar://9218925
Fixes PR9601

llvm-svn: 131682
2011-05-19 21:33:47 +00:00
Evan Cheng 2b9bd38678 crc32 with 64-bit output zeros the upper 32 bits. rdar://9467055
llvm-svn: 131664
2011-05-19 18:57:12 +00:00
Chad Rosier f4e832b14e Enables vararg functions that pass all arguments via registers to be optimized into tail-calls when possible.
llvm-svn: 131560
2011-05-18 19:59:50 +00:00
Eli Friedman d000a2c26e Clean up the mess created by r131467+r131469.
llvm-svn: 131471
2011-05-17 18:02:22 +00:00
Stuart Hastings c65d8eda7b Revert 131467 due to buildbot complaint.
llvm-svn: 131469
2011-05-17 16:59:46 +00:00
Stuart Hastings 3cf5308890 Fix an obscure issue in X86_64 parameter passing: if a tiny byval is
passed as the fifth parameter, ensure it's passed correctly (in R9).
rdar://problem/6920088

llvm-svn: 131467
2011-05-17 16:45:55 +00:00
Nadav Rotem d8edb1d5cc Fix a bug in PerformEXTRACT_VECTOR_ELTCombine. The code created an ADD SDNode
whose two operands had different types, in cases where the index and the
pointer had different types.

llvm-svn: 131461
2011-05-17 08:31:57 +00:00
Eli Friedman d4a3609d30 Remove dead code. Fix associated test to use FileCheck.
llvm-svn: 131424
2011-05-16 21:28:22 +00:00
Nadav Rotem 8f971c27fb Add custom lowering of X86 vector SRA/SRL/SHL when the shift amount is a splat vector.
llvm-svn: 131179
2011-05-11 08:12:09 +00:00
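
To make "splat shift amount" concrete (illustrative C with a hypothetical function name, not from the commit): every lane is shifted by the same count, which is exactly what a single hardware vector-shift instruction can do.

/* Every element shifted by the same (uniform) amount -- the case the custom
   lowering above can map to a single vector shift. */
void shl_by_uniform(int *a, int n, int s) {
  for (int i = 0; i < n; ++i)
    a[i] <<= s;
}
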
Eli Friedman 2518f8376d Make the logic for determining function alignment more explicit. No functionality change.
llvm-svn: 131012
2011-05-06 20:34:06 +00:00
Daniel Dunbar cd01ed5bd6 ADT/Triple: Rename isOSX... methods to isMacOSX for consistency with the OS
triple component.

llvm-svn: 129838
2011-04-20 00:14:25 +00:00
Daniel Dunbar 100455a3c8 Target/X86: Eliminate uses of getDarwinVers().
llvm-svn: 129813
2011-04-19 21:04:12 +00:00
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Evan Cheng ee9d45dd55 Don't try to create zero-sized stack objects.
llvm-svn: 128586
2011-03-30 23:44:13 +00:00
Benjamin Kramer 8d2227373d Make helper static.
llvm-svn: 128338
2011-03-26 12:38:19 +00:00
NAKAMURA Takumi 521eb7c11e Target/X86: [PR8777][PR8778] Tweak alloca/chkstk for Windows targets.
FIXME: Some cleanups would be needed.
llvm-svn: 128206
2011-03-24 07:07:00 +00:00
Andrew Trick 4ab9a16569 Revert r128175.
I'm backing this out for the second time. It was supposed to be fixed by r128164, but the mingw self-host must be defeating the fix.

llvm-svn: 128181
2011-03-23 23:11:02 +00:00
Andrew Trick 4046a0de91 Reapply Eli's r127852 now that the pre-RA scheduler can spill EFLAGS.
(target-specific branchless method for double-width relational comparisons on x86)

llvm-svn: 128175
2011-03-23 22:16:02 +00:00
Evan Cheng 0663f23bd8 Re-apply r127953 with fixes: eliminate empty return block if it has no predecessors; update dominator tree if cfg is modified.
llvm-svn: 127981
2011-03-21 01:19:09 +00:00
Daniel Dunbar 327cd36f74 Revert r127953, "SimplifyCFG has stopped duplicating returns into predecessors
to canonicalize IR", it broke a lot of things.

llvm-svn: 127954
2011-03-19 21:47:14 +00:00
Evan Cheng 824a711305 SimplifyCFG has stopped duplicating returns into predecessors to canonicalize IR
to have a single return block (or at least to get there) for optimizations. This
is general goodness, but it would prevent some tail-call optimizations.
One specific case is code like this:
int f1(void);
int f2(void);
int f3(void);
int f4(void);
int f5(void);
int f6(void);
int foo(int x) {
  switch(x) {
  case 1: return f1();
  case 2: return f2();
  case 3: return f3();
  case 4: return f4();
  case 5: return f5();
  case 6: return f6();
  }
}

=>
LBB0_2:                                 ## %sw.bb
  callq   _f1
  popq    %rbp
  ret
LBB0_3:                                 ## %sw.bb1
  callq   _f2
  popq    %rbp
  ret
LBB0_4:                                 ## %sw.bb3
  callq   _f3
  popq    %rbp
  ret

This patch teaches codegenprep to duplicate returns when the return value
is a phi whose operands are produced by tail calls followed by
an unconditional branch:

sw.bb7:                                           ; preds = %entry
  %call8 = tail call i32 @f5() nounwind
  br label %return
sw.bb9:                                           ; preds = %entry
  %call10 = tail call i32 @f6() nounwind
  br label %return
return:
  %retval.0 = phi i32 [ %call10, %sw.bb9 ], [ %call8, %sw.bb7 ], ... [ 0, %entry ]
  ret i32 %retval.0

This allows codegen to generate better code like this:

LBB0_2:                                 ## %sw.bb
        jmp     _f1                     ## TAILCALL
LBB0_3:                                 ## %sw.bb1
        jmp     _f2                     ## TAILCALL
LBB0_4:                                 ## %sw.bb3
        jmp     _f3                     ## TAILCALL

rdar://9147433

llvm-svn: 127953
2011-03-19 17:17:39 +00:00
Nadav Rotem e7a101ccab Add support for legalizing UINT_TO_FP of vectors on platforms which do
not have native support for this operation (such as X86).
The legalized code uses two vector INT_TO_FP operations and is faster
than scalarizing.

llvm-svn: 127951
2011-03-19 13:09:10 +00:00
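
One common way to express an unsigned 32-bit to float conversion in terms of two signed conversions is to split the value into halves (a sketch of the general technique; the patch's exact expansion may differ):

#include <stdint.h>

/* Sketch of the two-conversion idea: split the u32 into halves that are
   each exactly representable via a signed conversion, convert both, and
   recombine.  A vector version does the same lane-wise. */
float u32_to_float(uint32_t x) {
  float hi = (float)(int32_t)(x >> 16);      /* high 16 bits */
  float lo = (float)(int32_t)(x & 0xffffu);  /* low 16 bits  */
  return hi * 65536.0f + lo;
}
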
Eli Friedman 59721e3238 Revert r127852; it's apparently causing an ICE on mingw.
llvm-svn: 127909
2011-03-18 21:12:29 +00:00
Eli Friedman 1a916a3c0c Add a target-specific branchless method for double-width relational
comparisons on x86.  Essentially, the way this works is that SUB+SBB sets
the relevant flags the same way a double-width CMP would.

This is a substantial improvement over the generic lowering in LLVM. The output
is also shorter than the gcc-generated output; I haven't done any detailed
benchmarking, though.

llvm-svn: 127852
2011-03-18 02:34:11 +00:00
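
A small C sketch of the idea (a hypothetical helper operating on a value split into 64-bit halves, not code from the patch): subtracting the low halves produces a borrow, the high halves are subtracted together with that borrow, and the final borrow is exactly the unsigned less-than result, which is what SUB+SBB computes in two instructions.

#include <stdint.h>

/* Sketch: unsigned 128-bit "a < b" the way SUB+SBB evaluates it. */
int u128_less(uint64_t a_lo, uint64_t a_hi, uint64_t b_lo, uint64_t b_hi) {
  int borrow_lo = a_lo < b_lo;                     /* borrow out of the SUB */
  int borrow_hi = (a_hi < b_hi) ||
                  (a_hi == b_hi && borrow_lo);     /* borrow out of the SBB */
  return borrow_hi;
}
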
Cameron Zwarich 2ef0c69df1 Move more logic into getTypeForExtArgOrReturn.
llvm-svn: 127809
2011-03-17 14:53:37 +00:00
Cameron Zwarich 34e7b3f77e Rename getTypeForExtendedInteger() to getTypeForExtArgOrReturn().
llvm-svn: 127807
2011-03-17 14:21:56 +00:00
Cameron Zwarich ac106273d4 The x86-64 ABI says that a bool is only guaranteed to be sign-extended to a byte
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not
generate incorrect code.

This just fixes the zext at the return. We still insert an i32 ZextAssert when
reading a function's arguments, but it is followed by a truncate and another i8
ZextAssert so it is not optimized.

llvm-svn: 127766
2011-03-16 22:20:18 +00:00
Eric Christopher cf56a5034f Change the x86 32-bit scheduler to register pressure and fix up the
corresponding testcases back to the previous versions.

Fixes some performance regressions only seen on 32-bit.

llvm-svn: 127441
2011-03-11 01:05:58 +00:00
Stuart Hastings d17ae4e939 Revert 127359; it broke lencod.
llvm-svn: 127382
2011-03-10 00:25:53 +00:00
Stuart Hastings 9955e2f912 X86 byval copies no longer always_inline. <rdar://problem/8706628>
llvm-svn: 127359
2011-03-09 21:10:30 +00:00
NAKAMURA Takumi 58d1f93b03 Target/X86: Tweak va_arg for Win64 not to miss taking va_start when number of fixed args > 4.
llvm-svn: 127328
2011-03-09 11:33:15 +00:00
Benjamin Kramer 679cfb54ec X86: Fix the (saddo/ssub x, 1) -> incl/decl selection to check the right operand for 1.
Found by inspection.

llvm-svn: 127247
2011-03-08 15:20:20 +00:00
Eric Christopher eb19e9e9fc Turn on list-ilp scheduling by default on x86 and x86-64, fix up
testcases accordingly. Some are currently xfailed and will be filed
as bugs to be fixed or understood.

Performance results:

roughly neutral on SPEC
some micro benchmarks in the llvm suite are up between 100 and 150%, with only
a pair of regressions that remain to be investigated

john-the-ripper saw:
10% improvement in traditional DES
8% improvement in BSDI DES
59% improvement in FreeBSD MD5
67% improvement in OpenBSD Blowfish
14% improvement in LM DES

Small compile time impact.

llvm-svn: 127208
2011-03-08 02:42:25 +00:00
Cameron Zwarich df61694417 Move getRegPressureLimit() from TargetLoweringInfo to TargetRegisterInfo.
llvm-svn: 127175
2011-03-07 21:56:36 +00:00
Andrew Trick 641e2d4f8c Increased the register pressure limit on x86_64 from 8 to 12
regs. This is the only change in this checkin that may affect the
default scheduler. With better register tracking and heuristics, it
doesn't make sense to artificially lower the register limit so much.

Added -sched-high-latency-cycles and X86InstrInfo::isHighLatencyDef to
give the scheduler a way to account for div and sqrt on targets that
don't have an itinerary. It currently defaults to 10 (the actual
number doesn't matter much), but only takes effect on non-default
schedulers: list-hybrid and list-ilp.

Added several heuristics that can be individually disabled for the
non-default sched=list-ilp mode. This helps us determine how much
better we can do on a given benchmark than the default
scheduler. Certain compute intensive loops run much faster in this
mode with the right set of heuristics, and it doesn't seem to have
much negative impact elsewhere. Not all of the heuristics are needed,
but we still need to experiment to decide which should be disabled by
default for sched=list-ilp.

llvm-svn: 127067
2011-03-05 08:00:22 +00:00
David Greene dd567b214b [AVX] Fix mask predicates for 256-bit UNPCKLPS/D and implement
missing patterns for them.

Add a SIMD test subdirectory to hold tests for SIMD instruction
selection correctness and quality.

llvm-svn: 126845
2011-03-02 17:23:43 +00:00
David Greene 20a1cbefad [AVX] Add decode support for VUNPCKLPS/D instructions, both 128-bit
and 256-bit forms.  Because the number of elements in a vector
does not determine the vector type (4 elements could be v4f32 or
v4f64), pass the full type of the vector to decode routines.

llvm-svn: 126664
2011-02-28 19:06:56 +00:00
Owen Anderson b2c80da4ae Allow targets to specify the type of the RHS of a shift, parameterized on the type of the LHS.
llvm-svn: 126518
2011-02-25 21:41:48 +00:00
Chris Lattner 0152b7bc7c remove command line option debugging hook.
llvm-svn: 126441
2011-02-24 21:53:03 +00:00
David Greene 9a6040dc86 [AVX] General VUNPCKL codegen support.
llvm-svn: 126264
2011-02-22 23:31:46 +00:00
Devang Patel f3292b2196 Revert r124611 - "Keep track of incoming argument's location while emitting LiveIns."
In other words, do not keep track of an argument's location. The debugger (gdb) is not prepared to see line table entries for arguments. For the debugger, the "second" line table entry marks the beginning of the function body.
This requires some coordination with the debugger to get this working.
 - The debugger needs to be aware of the prolog_end attribute attached to line table entries.
 - The compiler needs to accurately mark prolog_end in line table entries (at -O0 and at -O1+).

llvm-svn: 126155
2011-02-21 23:21:26 +00:00
Eric Christopher ac6b001f56 If both operands are loads from stores in memory we can't use movlpd/movlps
since one needs to be a register operand. Just use movss instead of forcing
an operand into a register.

Fixes PR9239

llvm-svn: 126072
2011-02-20 05:04:42 +00:00
Eric Christopher c509ff6944 Fix typos.
llvm-svn: 126018
2011-02-19 03:19:09 +00:00
David Greene 3a2b508e8f [AVX] Reorganize X86ShuffleDecode into its own library
(LLVMX86Utils.a) to break cyclic library dependencies between
LLVMX86CodeGen.a and LLVMX86AsmParser.a.  Previously this code was in
a header file and marked static but AVX requires some additional
functionality here that won't be used by all clients.  Since including
unused static functions causes a gcc compiler warning, keeping it as a
header would break builds that use -Werror.  Putting this in its own
library solves both problems at once.

llvm-svn: 125765
2011-02-17 19:18:59 +00:00
Stuart Hastings 81c4306005 Swap VT and DebugLoc operands of getExtLoad() for consistency with
other getNode() methods.  Radar 9002173.

llvm-svn: 125665
2011-02-16 16:23:55 +00:00
Chris Lattner 46c01a30f4 Enhance ComputeMaskedBits to know that aligned frameindexes
have their low bits set to zero.  This allows us to optimize
out explicit stack alignment code like in stack-align.ll:test4 when
it is redundant.

Doing this causes the code generator to start turning FI+cst into
FI|cst all over the place, which is general goodness (that is the
canonical form) except that various pieces of the code generator
don't handle OR aggressively.  Fix this by introducing a new
SelectionDAG::isBaseWithConstantOffset predicate, and using it
in places that are looking for ADD(X,CST).  The ARM backend in
particular was missing a lot of addressing mode folding opportunities
around OR.

llvm-svn: 125470
2011-02-13 22:25:43 +00:00
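
The key fact behind treating FI+cst and FI|cst as the same thing (a standalone illustration, not from the patch): when the base has its low bits known zero because of alignment, adding a small offset cannot carry into the upper bits, so ADD and OR agree.

#include <assert.h>
#include <stdint.h>

/* For a 16-byte-aligned base and any offset below the alignment, the low
   four bits of the base are zero, so base + off and base | off are equal. */
int main(void) {
  uintptr_t base = 0x1000;            /* low bits known zero */
  for (uintptr_t off = 0; off < 16; ++off)
    assert((base + off) == (base | off));
  return 0;
}
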
David Greene 79827a5a26 [AVX] Implement 256-bit vector lowering for SCALAR_TO_VECTOR. This
largely completes support for 128-bit fallback lowering for code that
is not 256-bit ready.

llvm-svn: 125315
2011-02-10 23:11:29 +00:00
David Greene ce318e4958 [AVX] Implement 256-bit vector lowering for EXTRACT_VECTOR_ELT.
llvm-svn: 125284
2011-02-10 16:57:36 +00:00
David Greene b36195ab26 [AVX] Implement 256-bit vector lowering for INSERT_VECTOR_ELT.
llvm-svn: 125187
2011-02-09 15:32:06 +00:00
David Greene 10b0db1d5f [AVX] Implement BUILD_VECTOR lowering for 256-bit vectors. For
anything but the simplest of cases, lower a 256-bit BUILD_VECTOR by
splitting it into 128-bit parts and recombining.

llvm-svn: 125105
2011-02-08 19:04:41 +00:00
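
The split-and-recombine shape, seen from the intrinsics level (a sketch assuming the AVX headers; the lowering itself works on DAG nodes, not intrinsics):

#include <immintrin.h>

/* Sketch: build two 128-bit halves, then glue them into one 256-bit vector
   with an insertf128 -- the same split/recombine the lowering performs. */
__m256 build_8xf32(float a, float b, float c, float d,
                   float e, float f, float g, float h) {
  __m128 lo = _mm_setr_ps(a, b, c, d);
  __m128 hi = _mm_setr_ps(e, f, g, h);
  return _mm256_insertf128_ps(_mm256_castps128_ps256(lo), hi, 1);
}
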
David Greene 79651c527b [AVX] Insert/extract subvector lowering support. This includes a
couple of utility functions that will be used in other places for more
AVX lowering.

llvm-svn: 125029
2011-02-07 19:36:54 +00:00
NAKAMURA Takumi 1850c80afb Target/X86: Tweak allocating the shadow area (aka home) on Win64. It must be enough for the caller to allocate one.
llvm-svn: 124949
2011-02-05 15:11:32 +00:00
NAKAMURA Takumi b21c3db920 lib/Target/X86/X86ISelLowering.cpp: Introduce a new variable "IsWin64". No functional changes.
llvm-svn: 124948
2011-02-05 15:11:13 +00:00
NAKAMURA Takumi f7f319d4d3 Target/X86: Fix whitespace.
llvm-svn: 124946
2011-02-05 15:10:54 +00:00