David Greene
b6f1611928
[AVX] Support EXTRACT_SUBVECTOR on x86. This provides a default
...
implementation of EXTRACT_SUBVECTOR for x86, going through the stack
in a similar fashion to how the codegen implements BUILD_VECTOR.
Eventually this will get matched to VEXTRACTF128 if AVX is available.
llvm-svn: 124292
2011-01-26 15:38:49 +00:00
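A minimal C sketch of what "going through the stack" means here (illustrative only; the function and variable names are made up, and the eventual AVX path would instead use vextractf128):
#include <immintrin.h>
/* Extract the low 128-bit subvector of a 256-bit vector by spilling it to a
   stack buffer and reloading the half we want -- the same round trip the
   default lowering performs. */
__m128 low_half(__m256 v) {
  float tmp[8];
  _mm256_storeu_ps(tmp, v);   /* store the whole 256-bit vector to the stack */
  return _mm_loadu_ps(tmp);   /* reload only the first 128 bits */
}
/* With AVX this should eventually become a single vextractf128,
   e.g. _mm256_extractf128_ps(v, 0). */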
NAKAMURA Takumi
0cfdac078e
Target/X86: Tweak win64's tailcall.
...
llvm-svn: 124272
2011-01-26 02:04:09 +00:00
NAKAMURA Takumi
9d29eff198
Fix whitespace.
...
llvm-svn: 124270
2011-01-26 02:03:37 +00:00
Chris Lattner
218092e68e
fix PR8981, a crash trying to form a conditional inc with a floating point compare.
...
llvm-svn: 123560
2011-01-16 02:56:53 +00:00
Anton Korobeynikov
2f93128109
Rename TargetFrameInfo to TargetFrameLowering. Also, put a couple of FIXMEs and fixes here and there.
...
llvm-svn: 123170
2011-01-10 12:39:04 +00:00
Jakob Stoklund Olesen
2fb5b31578
Simplify a bunch of isVirtualRegister() and isPhysicalRegister() logic.
...
These functions no longer assert when passed 0, but simply return false instead.
No functional change intended.
llvm-svn: 123155
2011-01-10 02:58:51 +00:00
Evan Cheng
078b0b095e
Recognize inline asm 'rev $0, $1' as a bswap intrinsic call.
...
llvm-svn: 123048
2011-01-08 01:24:27 +00:00
Evan Cheng
a048c83fe4
Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
...
llvm-svn: 123015
2011-01-07 19:35:30 +00:00
Evan Cheng
7998b1d6fe
Use movups to lower memcpy and memset even if it's not fast (like corei7).
...
The theory is it's still faster than a pair of movq / a quad of movl. This
will probably hurt older chips like P4 but should run faster on current
and future Intel processors. rdar://8817010
llvm-svn: 122955
2011-01-06 07:58:36 +00:00
Evan Cheng
3ae2b79aa3
Re-implement r122936 with proper target hooks. Now getMaxStoresPerMemcpy
...
etc. take an OptSize argument. If OptSize is true, they return
the inline limit for functions with the OptSize attribute.
llvm-svn: 122952
2011-01-06 06:52:41 +00:00
Benjamin Kramer
6020ed9d99
X86: Lower a select directly to a setcc_carry if possible.
...
int test(unsigned long a, unsigned long b) { return -(a < b); }
compiles to
_test: ## @test
cmpq %rsi, %rdi ## encoding: [0x48,0x39,0xf7]
sbbl %eax, %eax ## encoding: [0x19,0xc0]
ret ## encoding: [0xc3]
instead of
_test: ## @test
xorl %ecx, %ecx ## encoding: [0x31,0xc9]
cmpq %rsi, %rdi ## encoding: [0x48,0x39,0xf7]
movl $-1, %eax ## encoding: [0xb8,0xff,0xff,0xff,0xff]
cmovael %ecx, %eax ## encoding: [0x0f,0x43,0xc1]
ret ## encoding: [0xc3]
llvm-svn: 122451
2010-12-22 23:09:28 +00:00
Benjamin Kramer
f6ddc4a1de
Add some x86 specific dagcombines for conditional increments.
...
(add Y, (sete X, 0)) -> cmp X, 1; adc 0, Y
(add Y, (setne X, 0)) -> cmp X, 1; sbb -1, Y
(sub (sete X, 0), Y) -> cmp X, 1; sbb 0, Y
(sub (setne X, 0), Y) -> cmp X, 1; adc -1, Y
for
unsigned foo(unsigned a, unsigned b) {
if (a == 0) b++;
return b;
}
we now get:
foo:
cmpl $1, %edi
movl %esi, %eax
adcl $0, %eax
ret
instead of:
foo:
testl %edi, %edi
sete %al
movzbl %al, %eax
addl %esi, %eax
ret
llvm-svn: 122364
2010-12-21 21:41:44 +00:00
Chris Lattner
3e5fbd74ed
rename MVT::Flag to MVT::Glue. "Flag" is a terrible name for
...
something that just glues two nodes together, even if it is
sometimes used for flags.
llvm-svn: 122310
2010-12-21 02:38:05 +00:00
Nate Begeman
4b9db07b02
Implement feedback from Bruno on making pblendvb an x86-specific ISD node in addition to being an intrinsic, and convert
...
lowering to use it. Hopefully the pattern fragment is doing the right thing with XMM0, looks correct in testing.
llvm-svn: 122277
2010-12-20 22:04:24 +00:00
Chris Lattner
5c00d41688
now that addc/adde are gone, "ADDC" in the X86 backend uses EFLAGS results,
...
the same as setcc. Optimize ADDC(0,0,FLAGS) -> SET_CARRY(FLAGS). This is
a step towards finishing off PR5443. In the testcase in that bug we now get:
movq %rdi, %rax
addq %rsi, %rax
sbbq %rcx, %rcx
testb $1, %cl
setne %dl
ret
instead of:
movq %rdi, %rax
addq %rsi, %rax
movl $0, %ecx
adcq $0, %rcx
testq %rcx, %rcx
setne %dl
ret
llvm-svn: 122219
2010-12-20 01:37:09 +00:00
Chris Lattner
9c26d2711b
use a for loop over types.
...
llvm-svn: 122214
2010-12-20 01:03:27 +00:00
Chris Lattner
846c20d4e6
Change the X86 backend to stop using the evil ADDC/ADDE/SUBC/SUBE nodes (which
...
model their carry dependencies with MVT::Flag operands) and use clean and beautiful
EFLAGS dependencies instead.
We do this by changing the modelling of SBB/ADC to have EFLAGS input and outputs
(which is what requires the previous scheduler change) and change X86 ISelLowering
to custom lower ADDC and friends down to X86ISD::ADD/ADC/SUB/SBB nodes.
With the previous series of changes, this causes no changes in the testsuite, woo.
llvm-svn: 122213
2010-12-20 00:59:46 +00:00
Mon P Wang
1064992c84
Prevents PerformShuffleCombine from creating a node with an illegal type after legalize types
...
has run, e.g., creating an i64 node from a v2i64 when i64 is not a legal type.
llvm-svn: 122206
2010-12-19 23:55:53 +00:00
Chris Lattner
9edf3f50bf
improve the setcc -> setcc_carry optimization to happen more
...
consistently by moving it out of lowering into dag combine.
Add some missing patterns for matching away extended versions of setcc_c.
llvm-svn: 122201
2010-12-19 22:08:31 +00:00
Chris Lattner
6dddab2ffe
simplify some code to just reuse a setcc if we can instead of
...
going through the CSE maps to get it.
llvm-svn: 122196
2010-12-19 21:23:48 +00:00
Chris Lattner
c37bb023b1
now that generic vector types aren't selected onto MMX operations,
...
we don't need -disable-mmx anymore.
llvm-svn: 122189
2010-12-19 20:19:20 +00:00
Chris Lattner
ae756e1980
reduce copy/paste programming with the power of for loops.
...
llvm-svn: 122187
2010-12-19 20:07:10 +00:00
Chris Lattner
1e8c032a6e
X86 supports i8/i16 overflow ops (except i8 multiplies), so we should
...
generate them.
Now we compile:
define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
entry:
%0 = tail call %0 @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
%cmp = extractvalue %0 %0, 1
br i1 %cmp, label %if.then, label %if.end
into:
_X: ## @X
## BB#0: ## %entry
subl $12, %esp
movb 16(%esp), %al
addb 20(%esp), %al
jo LBB0_2
Before we were generating:
_X: ## @X
## BB#0: ## %entry
pushl %ebp
movl %esp, %ebp
subl $8, %esp
movb 12(%ebp), %al
testb %al, %al
setge %cl
movb 8(%ebp), %dl
testb %dl, %dl
setge %ah
cmpb %cl, %ah
sete %cl
addb %al, %dl
testb %dl, %dl
setge %al
cmpb %al, %ah
setne %al
andb %cl, %al
testb %al, %al
jne LBB0_2
llvm-svn: 122186
2010-12-19 20:03:11 +00:00
Nate Begeman
97b72c99d2
Add support for matching psign & pblendvb to the x86 target
...
Remove unnecessary pandn patterns, 'vnot' patfrag looks through bitcasts
llvm-svn: 122098
2010-12-17 22:55:37 +00:00
Nate Begeman
8b08f5232b
Formalize the notion that AVX and SSE are non-overlapping extensions from the compiler's point of view. Per email discussion, we either want to always use VEX-prefixed instructions or never use them, and are taking "HasAVX" to mean "Always use VEX". Passing -mattr=-avx,+sse42 should serve to restore legacy SSE support when desirable.
...
llvm-svn: 121439
2010-12-10 00:26:57 +00:00
Eric Christopher
a8aaaee379
Rewrite the darwin tlv support to use a chain and return to copying
...
the output to the correct register. Fixes a hidden problem uncovered
by the last patch where we'd try to DAG combine our MVT::Other node
oddly.
llvm-svn: 121358
2010-12-09 06:25:53 +00:00
Eric Christopher
8783074091
Stop confusing people, it's not really a chain, or a tumor.
...
llvm-svn: 121340
2010-12-09 00:57:19 +00:00
Eric Christopher
d84970ae8b
Remove extraneous copy from DAG conversion for darwin tls. This was
...
popping up at O0 when it wasn't folded and the fast allocator would
complain.
llvm-svn: 121330
2010-12-09 00:27:58 +00:00
Chris Lattner
6886171792
Teach X86ISelLowering that the second result of X86ISD::UMUL is a flags
...
result. This allows us to compile:
void *test12(long count) {
return new int[count];
}
into:
test12:
movl $4, %ecx
movq %rdi, %rax
mulq %rcx
movq $-1, %rdi
cmovnoq %rax, %rdi
jmp __Znam ## TAILCALL
instead of:
test12:
movl $4, %ecx
movq %rdi, %rax
mulq %rcx
seto %cl
testb %cl, %cl
movq $-1, %rdi
cmoveq %rax, %rdi
jmp __Znam
Of course it would be even better if the regalloc inverted the cmov to 'cmovoq',
which would eliminate the need for the 'movq %rdi, %rax'.
llvm-svn: 120936
2010-12-05 07:49:54 +00:00
Chris Lattner
364bb0a081
it turns out that when ".with.overflow" intrinsics were added to the X86
...
backend, they were all implemented except umul. This one fell back
to the default implementation that did a hi/lo multiply and compared the
top. Fix this to check the overflow flag that the 'mul' instruction
sets, so we can avoid an explicit test. Now we compile:
void *func(long count) {
return new int[count];
}
into:
__Z4funcl: ## @_Z4funcl
movl $4, %ecx ## encoding: [0xb9,0x04,0x00,0x00,0x00]
movq %rdi, %rax ## encoding: [0x48,0x89,0xf8]
mulq %rcx ## encoding: [0x48,0xf7,0xe1]
seto %cl ## encoding: [0x0f,0x90,0xc1]
testb %cl, %cl ## encoding: [0x84,0xc9]
movq $-1, %rdi ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
cmoveq %rax, %rdi ## encoding: [0x48,0x0f,0x44,0xf8]
jmp __Znam ## TAILCALL
instead of:
__Z4funcl: ## @_Z4funcl
movl $4, %ecx ## encoding: [0xb9,0x04,0x00,0x00,0x00]
movq %rdi, %rax ## encoding: [0x48,0x89,0xf8]
mulq %rcx ## encoding: [0x48,0xf7,0xe1]
testq %rdx, %rdx ## encoding: [0x48,0x85,0xd2]
movq $-1, %rdi ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
cmoveq %rax, %rdi ## encoding: [0x48,0x0f,0x44,0xf8]
jmp __Znam ## TAILCALL
Other than the silly seto+test, this is using the o bit directly, so it's going in the right
direction.
llvm-svn: 120935
2010-12-05 07:30:36 +00:00
Chris Lattner
116580a11c
generalize the previous check to handle -1 on either side of the
...
select, inserting a not to compensate. Add a missing isZero check
that I lost somehow.
This improves codegen of:
void *func(long count) {
return new int[count];
}
from:
__Z4funcl: ## @_Z4funcl
movl $4, %ecx ## encoding: [0xb9,0x04,0x00,0x00,0x00]
movq %rdi, %rax ## encoding: [0x48,0x89,0xf8]
mulq %rcx ## encoding: [0x48,0xf7,0xe1]
testq %rdx, %rdx ## encoding: [0x48,0x85,0xd2]
movq $-1, %rdi ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
cmoveq %rax, %rdi ## encoding: [0x48,0x0f,0x44,0xf8]
jmp __Znam ## TAILCALL
## encoding: [0xeb,A]
to:
__Z4funcl: ## @_Z4funcl
movl $4, %ecx ## encoding: [0xb9,0x04,0x00,0x00,0x00]
movq %rdi, %rax ## encoding: [0x48,0x89,0xf8]
mulq %rcx ## encoding: [0x48,0xf7,0xe1]
cmpq $1, %rdx ## encoding: [0x48,0x83,0xfa,0x01]
sbbq %rdi, %rdi ## encoding: [0x48,0x19,0xff]
notq %rdi ## encoding: [0x48,0xf7,0xd7]
orq %rax, %rdi ## encoding: [0x48,0x09,0xc7]
jmp __Znam ## TAILCALL
## encoding: [0xeb,A]
llvm-svn: 120932
2010-12-05 02:00:51 +00:00
Chris Lattner
342e6ea5f9
Improve an integer select optimization in two ways:
...
1. generalize
(select (x == 0), -1, 0) -> (sign_bit (x - 1))
to:
(select (x == 0), -1, y) -> (sign_bit (x - 1)) | y
2. Handle the identical pattern that happens with !=:
(select (x != 0), y, -1) -> (sign_bit (x - 1)) | y
cmov is often high latency and can't fold immediates or
memory operands. For example for (x == 0) ? -1 : 1, before
we got:
< testb %sil, %sil
< movl $-1, %ecx
< movl $1, %eax
< cmovel %ecx, %eax
now we get:
> cmpb $1, %sil
> sbbl %eax, %eax
> orl $1, %eax
llvm-svn: 120929
2010-12-05 01:23:24 +00:00
Benjamin Kramer
2f489236ab
Add patterns for the x86 popcnt instruction.
...
- Also adds a new POPCNT subtarget feature that is currently enabled if the target
supports SSE4.2 (nehalem) or SSE4A (barcelona).
llvm-svn: 120917
2010-12-04 20:32:23 +00:00
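For illustration (not from the commit), C code that reaches these new patterns through the ctpop intrinsic; the exact feature-flag spelling is an assumption:
/* __builtin_popcount lowers to llvm.ctpop, which can now be selected to the
   hardware popcnt instruction when the POPCNT subtarget feature is enabled. */
unsigned bits_set(unsigned x) {
  return __builtin_popcount(x);
}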
Benjamin Kramer
8ceebfaa04
Simplify code. No functionality change.
...
llvm-svn: 120907
2010-12-04 14:22:24 +00:00
Evan Cheng
419ea286ee
Fix and re-enable tail call optimization of expanded libcalls.
...
llvm-svn: 120622
2010-12-01 22:59:46 +00:00
Duncan Sands
c4fb38b821
I don't think it makes any sense to assert that the target supports SSE3 here.
...
The user (i.e. whoever generated a call to the intrinsic in the first place) is
essentially asking for a particular instruction to be placed in the assembler.
If that instruction won't execute on the target machine, that's their problem
not ours. Two buildbots with processors that don't support SSE3 were barfing
on the apm.ll test in CodeGen/X86 because of this assertion.
llvm-svn: 120574
2010-12-01 12:58:13 +00:00
Evan Cheng
a695abde49
Speculatively disable x86 portion of r120501 to appease the x86_64 buildbot.
...
llvm-svn: 120549
2010-12-01 03:27:20 +00:00
Evan Cheng
d4b0873c06
Enable sibling call optimization of libcalls which are expanded during
...
legalization time. Since at legalization time there is no mapping from
SDNode back to the corresponding LLVM instruction and the return
SDNode is target specific, this requires a target hook to check for
eligibility. Only x86 and ARM support this form of sibcall optimization
right now.
rdar://8707777
llvm-svn: 120501
2010-11-30 23:55:39 +00:00
Eric Christopher
2d1bcf4aea
Fix insertion point in pcmp expander.
...
While I'm there, clean up too many \n even for me.
llvm-svn: 120411
2010-11-30 08:20:21 +00:00
Eric Christopher
1a86e8461a
Fix some cleanups from my last patch.
...
llvm-svn: 120410
2010-11-30 08:10:28 +00:00
Eric Christopher
fa6657cec0
Rewrite mwait and monitor support and custom lower arguments.
...
Fixes PR8573.
llvm-svn: 120404
2010-11-30 07:20:12 +00:00
Rafael Espindola
c4774795ce
Move lowering of TLS_addr32 and TLS_addr64 to X86MCInstLower.
...
llvm-svn: 120263
2010-11-28 21:16:39 +00:00
Rafael Espindola
5d882894d8
Lower TLS_addr32 and TLS_addr64.
...
llvm-svn: 120225
2010-11-27 20:43:02 +00:00
Wesley Peck
527da1b6e2
Renaming ISD::BIT_CONVERT to ISD::BITCAST to better reflect the LLVM IR concept.
...
llvm-svn: 119990
2010-11-23 03:31:01 +00:00
Anton Korobeynikov
0eecf5d201
Move hasFP() and few related hooks to TargetFrameInfo.
...
llvm-svn: 119740
2010-11-18 21:19:35 +00:00
Chris Lattner
edb9d84dcc
add targetoperand flags for jump tables, constant pool and block address
...
nodes to indicate when ha16/lo16 modifiers should be used. This lets
us pass PowerPC/indirectbr.ll.
The one annoying thing about this patch is that the MCSymbolExpr isn't
expressive enough to represent ha16(label1-label2) which we need on
PowerPC. I have a terrible hack in the meantime, but this will have
to be revisited at some point.
Last major conversion item left is global variable references.
llvm-svn: 119105
2010-11-15 02:46:57 +00:00
Chris Lattner
7077efe894
move the pic base symbol stuff up to MachineFunction
...
since it is trivial and will be shared between ppc and x86.
This substantially simplifies the X86 backend also.
llvm-svn: 119089
2010-11-14 22:48:15 +00:00
Chris Lattner
239f9a35ed
simplify getPICBaseSymbol a bit.
...
llvm-svn: 119088
2010-11-14 22:37:11 +00:00
Peter Collingbourne
feea10bcdf
Recognise 32-bit ror-based bswap implementation used by uclibc
...
llvm-svn: 119007
2010-11-13 19:54:30 +00:00
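A rotate-based byte-swap idiom of the kind this recognizer targets, as a hedged C sketch (the exact uclibc source is not quoted here and may differ):
#include <stdint.h>
static uint32_t rotr32(uint32_t x, unsigned n) {
  return (x >> n) | (x << (32 - n));   /* n is 8 or 24 below, so no UB */
}
uint32_t my_bswap32(uint32_t x) {
  /* rotr(x,8) lands original bytes 0 and 2 in their swapped positions;
     rotr(x,24) does the same for bytes 1 and 3. */
  return (rotr32(x, 8) & 0xFF00FF00u) | (rotr32(x, 24) & 0x00FF00FFu);
}
/* The matcher can now turn this pattern into a single llvm.bswap.i32 / bswap. */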
Peter Collingbourne
1c6437a62a
Support ; as asm separator
...
llvm-svn: 119006
2010-11-13 19:54:23 +00:00
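A small hedged example of the inline asm this lets the integrated assembler digest, with two x86 instructions in one asm string separated by ';' (illustrative; names are made up):
static unsigned long read_flags(void) {
  unsigned long f;
  __asm__ volatile("pushf; pop %0" : "=r"(f));   /* ';' separates the two statements */
  return f;
}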
Dale Johannesen
6d95ed1760
Remove possibly useful info from comment, per Chris.
...
llvm-svn: 118865
2010-11-12 00:43:18 +00:00
Duncan Sands
1462777017
Simplify uses of MVT and EVT. An MVT can be compared directly
...
with a SimpleValueType, while an EVT supports equality and
inequality comparisons with SimpleValueType.
llvm-svn: 118169
2010-11-03 12:17:33 +00:00
Duncan Sands
fb0a48ef96
Factorize the duplicated logic for choosing the right argument
...
calling convention out of the fast and normal ISel files, and
into the calling convention TD file.
llvm-svn: 117856
2010-10-31 13:21:44 +00:00
John Thompson
e8360b7182
Inline asm multiple alternative constraints development phase 2 - improved basic logic, added initial platform support.
...
llvm-svn: 117667
2010-10-29 17:29:13 +00:00
Michael J. Spencer
7db918f1e9
x86-Win32: Switch ftol2 calling convention from stdcall to C.
...
llvm-svn: 117474
2010-10-27 18:52:38 +00:00
Dale Johannesen
ec57ac1c3c
An stdcall function calling a non-stdcall function
...
cannot use tailcall. PR 8461.
llvm-svn: 117322
2010-10-25 22:17:05 +00:00
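An illustrative x86-32 shape of the PR 8461 situation (names made up): the two conventions disagree on who pops the argument area, so emitting this as a tail call would be wrong.
int callee(int x);                          /* default (cdecl) convention */
__attribute__((stdcall)) int caller(int x) {
  return callee(x);                         /* must stay a normal call, not a tail call */
}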
Duncan Sands
1f0d37e892
Add parentheses to pacify gcc, which warns otherwise.
...
llvm-svn: 117020
2010-10-21 16:02:12 +00:00
Michael J. Spencer
f509c6ca27
X86: Add alloca probing to dynamic alloca on Windows. Fixes PR8424.
...
llvm-svn: 116984
2010-10-21 01:41:01 +00:00
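As a hedged illustration of the code this affects (names made up): a dynamically sized alloca on Windows must probe the stack page by page so the guard page is extended correctly.
void use_scratch(unsigned n) {
  char *buf = __builtin_alloca(n);   /* dynamic alloca: now emitted with stack probing on Win32 */
  buf[0] = 0;                        /* touch the buffer so it is not optimized away */
}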
Dale Johannesen
320a553319
Remove Synthesizable from the Type system; as MMX vector
...
types are no longer Legal on X86, we don't need it.
No functional change. 8499854.
llvm-svn: 116947
2010-10-20 21:32:10 +00:00
Michael J. Spencer
3e64de9504
X86: Add MS-CRT libcalls.
...
llvm-svn: 116801
2010-10-19 07:32:52 +00:00
Michael J. Spencer
8b382e7e10
Fix Whitespace.
...
llvm-svn: 116800
2010-10-19 07:32:42 +00:00
Eric Christopher
604e142844
Combine these together - should probably have some text associated
...
that says why what we just asserted is wrong.
llvm-svn: 116333
2010-10-12 19:44:17 +00:00
Nick Lewycky
eb7b91d417
Mark variable 'NoImplicitFloatOps' used only in an assert as used.
...
llvm-svn: 116323
2010-10-12 18:18:03 +00:00
Dan Gohman
395a898b2b
Initial va_arg support for x86-64. Patch by David Meyer!
...
llvm-svn: 116319
2010-10-12 18:00:49 +00:00
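A minimal user of va_arg of the kind this initial support covers (illustrative; names are made up):
#include <stdarg.h>
int sum_ints(int count, ...) {
  va_list ap;
  va_start(ap, count);
  int total = 0;
  for (int i = 0; i < count; ++i)
    total += va_arg(ap, int);   /* the va_arg lowering this patch adds for x86-64 */
  va_end(ap);
  return total;
}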
Andrew Trick
e01c9001c9
Fixes bug 8297: i386 cmpxchg8b, missing MachineMemOperand
...
llvm-svn: 116214
2010-10-11 19:02:04 +00:00
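Illustrative source that reaches cmpxchg8b on i386 (assuming a target that has the instruction); the fix attaches the missing MachineMemOperand to the node this produces:
#include <stdint.h>
uint64_t cas64(uint64_t *p, uint64_t expected, uint64_t desired) {
  /* 64-bit compare-and-swap on 32-bit x86 is lowered to lock cmpxchg8b. */
  return __sync_val_compare_and_swap(p, expected, desired);
}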
Michael J. Spencer
8dedb62019
X86: Call ulldiv and ftol2 on Windows instead of their libgcc equivalents.
...
llvm-svn: 116188
2010-10-11 05:29:15 +00:00
Michael J. Spencer
00765e5be0
X86: MinGW should always use libgcc on Windows.
...
llvm-svn: 116177
2010-10-10 23:11:06 +00:00
Michael J. Spencer
7a573a5e1f
X86: Call _alldiv instead of __divdi3 on Windows (excluding cygwin).
...
llvm-svn: 116174
2010-10-10 22:04:34 +00:00
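For illustration, the kind of source that hits this libcall on 32-bit x86, where 64-bit division has no single native instruction and becomes a runtime call:
long long div64(long long a, long long b) {
  return a / b;   /* expands to a call: _alldiv on Windows (non-cygwin), __divdi3 elsewhere */
}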
Michael J. Spencer
bee1f7f5ba
Fix Whitespace.
...
llvm-svn: 116173
2010-10-10 22:04:20 +00:00
Cameron Esfahani
d57f9ecd4a
Recommit 116056, now with the missing file...
...
llvm-svn: 116083
2010-10-08 19:24:18 +00:00
Andrew Trick
cf97db2402
reverting 116056: win64_params.ll may need to be conditionalized?
...
llvm-svn: 116063
2010-10-08 17:22:42 +00:00
Cameron Esfahani
a07b5c291d
Small patch to restore home register stack space allocation for the Win64 case. Add test case. This code eventually needs to be tighter, since it's always allocating it, even in leaf routines.
...
llvm-svn: 116056
2010-10-08 10:31:30 +00:00
Evan Cheng
5c31bf0619
Canonicalize X86ISD::MOVDDUP nodes to v2f64 to make sure all cases match. Also eliminate unneeded isel patterns. rdar://8520311
...
llvm-svn: 115977
2010-10-07 20:50:20 +00:00
Anton Korobeynikov
d77a443631
va_args support for Win64.
...
Patch by Cameron!
llvm-svn: 115480
2010-10-03 22:52:07 +00:00
Dale Johannesen
dd224d2333
Massive rewrite of MMX:
...
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.
Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics.
MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces. Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
previously existing x86_mmx operation.
The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.
llvm-svn: 115243
2010-09-30 23:57:10 +00:00
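A hedged C example of the contract described above: MMX intrinsics keep mapping to MMX instructions via the x86_mmx type, and the user-written emms stays where it is, while generic vector code is no longer turned into MMX behind the programmer's back.
#include <mmintrin.h>
__m64 add_bytes(__m64 a, __m64 b) {
  return _mm_add_pi8(a, b);   /* explicit MMX intrinsic: still selects paddb */
}
void done_with_mmx(void) {
  _mm_empty();                /* emms: the EMMS problem is why optimizers must not invent MMX ops */
}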
Chris Lattner
b5b71e07af
improve indentation
...
llvm-svn: 114815
2010-09-27 06:34:01 +00:00
Eric Christopher
422e463be7
This code should never fire on non-darwin subtargets.
...
llvm-svn: 114811
2010-09-27 06:01:51 +00:00
Dale Johannesen
6a4cd59b08
We can't return SSE/MMX vectors if SSE is disabled.
...
llvm-svn: 114745
2010-09-24 19:05:48 +00:00
Bob Wilson
e1223fb583
Attempt to fix llvm-gcc build. It was crashing when building gcov.o for an
...
ARM cross-compiler on x86, because the MMO size did not match the type size.
This fixes the MMO size and also the size of the stack object to match the
type size.
llvm-svn: 114554
2010-09-22 17:35:14 +00:00
Chris Lattner
8a236b63d8
reimplement elf TLS support in terms of addressing modes, eliminating SegmentBaseAddress.
...
llvm-svn: 114529
2010-09-22 04:39:11 +00:00
Chris Lattner
a5156c30ed
convert the last 4 X86ISD nodes that should have memoperands to have them.
...
llvm-svn: 114523
2010-09-22 01:28:21 +00:00
Chris Lattner
ed85da5600
give X86ISD::FNSTCW16m a memoperand, since it touches memory. It only
...
can access the stack due to how it is generated though.
llvm-svn: 114522
2010-09-22 01:11:26 +00:00
Chris Lattner
78f518b79b
give FP_TO_INT16_IN_MEM and friends a memoperand. They are only
...
used with stack slots, but hey, let's be safe.
llvm-svn: 114521
2010-09-22 01:05:16 +00:00
Chris Lattner
54e5329545
give VZEXT_LOAD a memory operand, it now works with segment registers.
...
llvm-svn: 114515
2010-09-22 00:34:38 +00:00
Chris Lattner
e479e9643b
give LCMPXCHG_DAG[8] a memory operand, allowing it to work with addrspace 256/257
...
llvm-svn: 114508
2010-09-21 23:59:42 +00:00
Owen Anderson
5e65dfbb97
Reimplement r114460 in target-independent DAGCombine rather than target-dependent, by using
...
the predicate to discover the number of sign bits. Enhance X86's target lowering to provide
a useful response to this query.
llvm-svn: 114473
2010-09-21 20:42:50 +00:00
Chris Lattner
886250c8f0
convert a couple more places to use the new getStore()
...
llvm-svn: 114463
2010-09-21 18:51:21 +00:00
Owen Anderson
f4b1a5bdc4
When adding the carry bit to another value on X86, exploit the fact that the carry-materialization
...
(sbbl x, x) sets the registers to 0 or ~0. Combined with two's complement arithmetic, we can fold
the intermediate AND and the ADD into a single SUB.
This fixes <rdar://problem/8449754>.
llvm-svn: 114460
2010-09-21 18:41:19 +00:00
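A small check of the two's-complement identity behind this fold (illustrative, not from the commit): with m being the sbb-materialized mask, adding the carry bit as y + (m & 1) equals y - m.
#include <assert.h>
#include <stdint.h>
int main(void) {
  uint32_t y = 12345u;
  for (unsigned carry = 0; carry <= 1; ++carry) {
    uint32_t m = carry ? ~0u : 0u;   /* what (sbbl x, x) materializes */
    assert(y + (m & 1) == y - m);    /* so the AND + ADD fold into a single SUB */
  }
  return 0;
}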
Chris Lattner
802527adad
eliminate some uses of the getStore overload.
...
llvm-svn: 114453
2010-09-21 17:50:43 +00:00
Chris Lattner
7727d05dbb
convert the targets off the non-MachinePointerInfo overload of getLoad.
...
llvm-svn: 114410
2010-09-21 06:44:06 +00:00
Chris Lattner
82fd06d3ce
it's more elegant to put the "getConstantPool" and
...
"getFixedStack" on the MachinePointerInfo class. While
this isn't the problem I'm setting out to solve, it is the
right way to eliminate PseudoSourceValue, so let's go with it.
llvm-svn: 114406
2010-09-21 06:22:23 +00:00
Chris Lattner
c3e05d6e50
update the X86 backend to use the MachinePointerInfo version of one
...
of the getLoad methods. This fixes at least one bug where an incorrect
svoffset is passed in (a potential combiner-aa miscompile).
llvm-svn: 114404
2010-09-21 06:02:19 +00:00
Chris Lattner
2510de2bea
reimplement memcpy/memmove/memset lowering to use MachinePointerInfo
...
instead of srcvalue/offset pairs. This corrects SV info for mem
operations whose size is > 32-bits.
llvm-svn: 114401
2010-09-21 05:40:29 +00:00
Chris Lattner
e3d864b857
convert targets to the new MF.getMachineMemOperand interface.
...
llvm-svn: 114391
2010-09-21 04:39:43 +00:00
John Thompson
1094c80281
Added skeleton for inline asm multiple alternative constraint support.
...
llvm-svn: 113766
2010-09-13 18:15:37 +00:00
Bruno Cardoso Lopes
99a9f4661a
Minor change. Fix comments and remove unused and redundant code
...
llvm-svn: 113378
2010-09-08 18:12:31 +00:00
Bruno Cardoso Lopes
f7fee1c185
x86 vector shuffle lowering now relies only on target specific
...
nodes to emit shuffles and doesn't do isel mask matching anymore.
- Add the selection of the remaining shuffle opcode (movddup)
- Introduce two new functions to "recognize" where we may get
potential folds and add several comments to them explaining why
they are not yet in the desired shape.
- Add more patterns to fall back on when we select
a specific shuffle opcode as if it could fold a load, but it
can't, so remap to a valid instruction.
- Add a couple of FIXMEs to address in the following days once
there's a good solution to the current folding problem.
llvm-svn: 113369
2010-09-08 17:43:25 +00:00
Bruno Cardoso Lopes
6b1d62c529
Factor out some x86 vector shuffle rewriting and add comments about the direction the shuffle lowering is heading to
...
llvm-svn: 113286
2010-09-07 21:03:14 +00:00
Bruno Cardoso Lopes
7c483028fb
Move code around to prepare for moving some of the logic together to another function
...
llvm-svn: 113267
2010-09-07 20:20:27 +00:00
Bill Wendling
353802114f
Add an MVT::x86mmx type. It will take the place of all current MMX vector types.
...
llvm-svn: 113261
2010-09-07 20:03:56 +00:00