Chris Lattner
531f9e92d4
This mega patch converts us from using Function::a{iterator|begin|end} to
using Function::arg_{iterator|begin|end}. Likewise Module::g* -> Module::global_*.
This patch is contributed by Gabor Greif, thanks!
llvm-svn: 20597
2005-03-15 04:54:21 +00:00
Chris Lattner
6f6ecad995
I didn't mean to check this in. :(
llvm-svn: 20555
2005-03-10 20:59:51 +00:00
Chris Lattner
85e7163947
Fix a bug where we would incorrectly do a sign ext instead of a zero ext
because we were checking the wrong thing. Thanks to Andrew for pointing
this out!
llvm-svn: 20554
2005-03-10 20:55:51 +00:00
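A minimal C++ sketch of the difference at stake (illustrative only, not the code touched by this commit): extending a byte with its high bit set gives different results depending on whether the extension is signed or zero.
#include <cassert>
#include <cstdint>

int main() {
  uint8_t Byte = 0xF0;                       // high bit set
  int32_t Sext = static_cast<int8_t>(Byte);  // sign extend: becomes -16 (0xFFFFFFF0)
  int32_t Zext = Byte;                       // zero extend: becomes 240 (0x000000F0)
  assert(Sext == -16 && Zext == 240);        // picking the wrong one changes the value
  return 0;
}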
Chris Lattner
76aa8e071b
Allow the live interval analysis pass to be a bit more aggressive about
numbering values in live ranges for physical registers.
The alpha backend currently generates code that looks like this:
vreg = preg
...
preg = vreg
use preg
...
preg = vreg
use preg
etc. Because vreg contains the value of preg coming in, each of the
copies back into preg contains that initial value as well.
In the case of the Alpha, this allows this testcase:
void "foo"(int %blah) {
store int 5, int *%MyVar
store int 12, int* %MyVar2
ret void
}
to compile to:
foo:
ldgp $29, 0($27)
ldiq $0,5
stl $0,MyVar
ldiq $0,12
stl $0,MyVar2
ret $31,($26),1
instead of:
foo:
ldgp $29, 0($27)
bis $29,$29,$0
ldiq $1,5
bis $0,$0,$29
stl $1,MyVar
ldiq $1,12
bis $0,$0,$29
stl $1,MyVar2
ret $31,($26),1
This does not seem to have any noticeable effect on X86 code.
This fixes PR535.
llvm-svn: 20536
2005-03-09 23:05:19 +00:00
Chris Lattner
7f26946709
constant fold FP_ROUND_INREG, ZERO_EXTEND_INREG, and SIGN_EXTEND_INREG
This allows the alpha backend to compile:
bool %test(uint %P) {
%c = seteq uint %P, 0
ret bool %c
}
into:
test:
ldgp $29, 0($27)
ZAP $16,240,$0
CMPEQ $0,0,$0
AND $0,1,$0
ret $31,($26),1
instead of:
test:
ldgp $29, 0($27)
ZAP $16,240,$0
ldiq $1,0
ZAP $1,240,$1
CMPEQ $0,$1,$0
AND $0,1,$0
ret $31,($26),1
... and fixes PR534.
llvm-svn: 20534
2005-03-09 18:37:12 +00:00
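Roughly what folding a ZERO_EXTEND_INREG of a constant amounts to, as a C++ sketch (hypothetical helper name, assuming the usual mask semantics; not the SelectionDAG code itself):
#include <cassert>
#include <cstdint>

// Zero-extending a constant "in register" from FromBits up to the full register
// width is just a mask, so the node can be folded away at compile time instead
// of emitting instructions for it (the ldiq/ZAP pair in the Alpha output above).
static uint64_t FoldZeroExtendInReg(uint64_t Val, unsigned FromBits) {
  uint64_t Mask = FromBits >= 64 ? ~0ULL : (1ULL << FromBits) - 1;
  return Val & Mask;
}

int main() {
  assert(FoldZeroExtendInReg(0, 32) == 0);
  assert(FoldZeroExtendInReg(0xFFFFFFFF12345678ULL, 32) == 0x12345678ULL);
  return 0;
}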
Alkis Evlogimenos
b3846f4b06
Lower llvm.isunordered(a, b) into a != a | b != b.
llvm-svn: 20382
2005-03-01 02:07:58 +00:00
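Why that lowering is exact, as a small C++ sketch (illustrative; the helper name is made up): NaN is the only value that compares unequal to itself, so the or of the two self-comparisons is true precisely when either operand is NaN.
#include <cassert>
#include <cmath>

static bool IsUnorderedLowered(double A, double B) {
  return (A != A) | (B != B);   // true iff A or B is NaN
}

int main() {
  assert(IsUnorderedLowered(std::nan(""), 1.0));
  assert(!IsUnorderedLowered(2.0, 1.0));
  return 0;
}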
Chris Lattner
c87e03aeea
Lower prefetch to a noop, patch contributed by Justin Wick!
llvm-svn: 20375
2005-02-28 19:27:23 +00:00
Chris Lattner
a474313902
Fix a bug in the 'store fpimm, ptr' -> 'store intimm, ptr' handling code.
Changing 'op' here caused us to not enter the store into a map, causing
reemission of the code!! In practice, a simple loop like this:
no_exit: ; preds = %no_exit, %entry
%indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ] ; <uint> [#uses=3]
%tmp.4 = getelementptr "complex long double"* %P, uint %indvar, uint 0 ; <double*> [#uses=1]
store double 0.000000e+00, double* %tmp.4
%indvar.next = add uint %indvar, 1 ; <uint> [#uses=2]
%exitcond = seteq uint %indvar.next, %N ; <bool> [#uses=1]
br bool %exitcond, label %return, label %no_exit
was being code gen'd to:
.LBBtest_1: # no_exit
movl %edx, %esi
shll $4, %esi
movl $0, 4(%eax,%esi)
movl $0, (%eax,%esi)
incl %edx
movl $0, (%eax,%esi)
movl $0, 4(%eax,%esi)
cmpl %ecx, %edx
jne .LBBtest_1 # no_exit
Note that we are doing 4 32-bit stores instead of 2. Now we generate:
.LBBtest_1: # no_exit
movl %edx, %esi
incl %esi
shll $4, %edx
movl $0, (%eax,%edx)
movl $0, 4(%eax,%edx)
cmpl %ecx, %esi
movl %esi, %edx
jne .LBBtest_1 # no_exit
This is much happier, though it would be even better if the increment of ESI
was scheduled after the compare :-/
llvm-svn: 20265
2005-02-22 07:23:39 +00:00
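A C++ sketch of the mechanism behind the bug (hypothetical names, not the real instruction selector): selected nodes are memoized in a map so a value used twice is only emitted once; skipping the map insertion makes every use re-emit the code, which is where the duplicated stores came from.
#include <cstdio>
#include <map>

static std::map<int, int> SelectedValues;   // node id -> virtual register
static int NextVReg = 0;

static int SelectNode(int NodeId) {
  auto It = SelectedValues.find(NodeId);
  if (It != SelectedValues.end())
    return It->second;                      // already emitted, reuse the result
  std::printf("emit code for node %d\n", NodeId);
  return SelectedValues[NodeId] = NextVReg++;
}

int main() {
  SelectNode(7);
  SelectNode(7);   // with the map insertion in place, emitted only once
  return 0;
}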
Misha Brukman
73e929f89d
Fix compilation errors with VS 2005, patch by Aaron Gray.
llvm-svn: 20231
2005-02-17 21:39:27 +00:00
Chris Lattner
381dddc90c
Don't rely on doubles comparing identical to each other, which doesn't work
for 0.0 and -0.0.
llvm-svn: 20230
2005-02-17 20:17:32 +00:00
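The pitfall in one small C++ example (illustrative only): == treats +0.0 and -0.0 as equal even though they are distinct constants with different bit patterns, so identity checks on FP constants cannot rely on it.
#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
  double PosZero = 0.0, NegZero = -0.0;
  assert(PosZero == NegZero);            // == says they are "the same"...
  uint64_t A, B;
  std::memcpy(&A, &PosZero, sizeof A);
  std::memcpy(&B, &NegZero, sizeof B);
  assert(A != B);                        // ...but their bit patterns differ
  return 0;
}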
Chris Lattner
0c56a548ed
Don't sink argument loads into loops or other bad places. This disables folding of argument loads with instructions that are not in the entry block.
llvm-svn: 20228
2005-02-17 19:40:32 +00:00
Chris Lattner
145569b076
Print GEP offsets as signed values instead of unsigned values. On X86, this
prints:
getelementptr (int* %A, int -1)
as: "(A) - 4" instead of "(A) + 18446744073709551612", which makes the
assembler much happier.
This fixes test/Regression/CodeGen/X86/2005-02-14-IllegalAssembler.ll,
and Benchmarks/Prolangs-C/cdecl with LLC on X86.
llvm-svn: 20183
2005-02-14 21:40:26 +00:00
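The arithmetic behind the fix, as a tiny C++ sketch (illustrative only): the byte offset for index -1 with 4-byte elements is -4, and reinterpreting that 64-bit value as unsigned yields the huge constant the assembler rejected.
#include <cstdint>
#include <cstdio>

int main() {
  int64_t Index = -1, ElementSize = 4;    // getelementptr (int* %A, int -1)
  int64_t Offset = Index * ElementSize;   // -4
  std::printf("signed: %lld  unsigned: %llu\n",
              (long long)Offset, (unsigned long long)Offset);
  // prints: signed: -4  unsigned: 18446744073709551612
  return 0;
}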
Chris Lattner
0559691163
Fix a case where we incorrectly compiled a cast from short to int on 64-bit
targets.
llvm-svn: 20030
2005-02-04 18:39:19 +00:00
Andrew Lenharth
c8770aa507
fix constant pointer outputting on 64-bit machines
llvm-svn: 20026
2005-02-04 13:47:16 +00:00
Chris Lattner
5aa75e4ce5
Fix yet another memset issue.
llvm-svn: 19986
2005-02-02 03:44:41 +00:00
Chris Lattner
4487b2e5a6
Fix some bugs Andrew noticed when legalizing memset for Alpha
llvm-svn: 19969
2005-02-01 18:38:28 +00:00
Chris Lattner
f6c93e36c7
Improve conformance with the Misha spelling benchmark suite
llvm-svn: 19930
2005-01-30 00:09:23 +00:00
Chris Lattner
e6074aa08b
adjust to ilist changes.
llvm-svn: 19924
2005-01-29 18:41:25 +00:00
Chris Lattner
bc7497d5f5
Alpha doesn't have a native f32 extload instruction.
llvm-svn: 19880
2005-01-28 22:58:25 +00:00
Chris Lattner
bf8c1ad313
implement legalization of truncates whose results and sources need to be
truncated, e.g. (truncate:i8 something:i16) on a 32 or 64-bit RISC.
llvm-svn: 19879
2005-01-28 22:52:50 +00:00
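One way to picture this legalization, as a C++ sketch (hypothetical helper, assuming the result is normalized by masking): on a machine whose registers are all 32 or 64 bits wide, both the i16 source and the i8 result live in full registers, so the truncate reduces to keeping only the low 8 bits.
#include <cassert>
#include <cstdint>

static uint32_t TruncateI16ToI8(uint32_t PromotedI16) {
  return PromotedI16 & 0xFF;   // keep only the low 8 bits of the wide register
}

int main() {
  assert(TruncateI16ToI8(0x1234) == 0x34);
  return 0;
}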
Chris Lattner
a4cfafe31a
Get alpha working with memset/memcpy/memmove
llvm-svn: 19878
2005-01-28 22:29:18 +00:00
Chris Lattner
eb6614d719
CopyFromReg produces two values. Make sure that we remember that both are
legalized, and actually return the correct result when we legalize the chain first.
llvm-svn: 19866
2005-01-28 06:27:38 +00:00
Chris Lattner
0dfd7d3a0d
Silence warnings in optimized builds.
llvm-svn: 19797
2005-01-23 23:19:44 +00:00
Chris Lattner
fb5614506e
Simplify and speed up the PEI (prolog/epilog insertion) by not having to scan for
uses of the callee-saved registers. This information is computed directly by the
register allocator now.
llvm-svn: 19795
2005-01-23 23:13:12 +00:00
Chris Lattner
3d527f7b61
Update PhysRegsUsed info.
llvm-svn: 19793
2005-01-23 22:55:45 +00:00
Chris Lattner
24f0f0e28f
Update this pass to set PhysRegsUsed info in MachineFunction.
llvm-svn: 19792
2005-01-23 22:51:56 +00:00
Chris Lattner
ae09d93b35
Update these register allocators to set the PhysRegsUsed info in MachineFunction.
llvm-svn: 19791
2005-01-23 22:45:13 +00:00
Chris Lattner
304053c6ec
Add support for the PhysRegsUsed array.
llvm-svn: 19789
2005-01-23 22:13:58 +00:00
Chris Lattner
ef2de322c6
Speed this up a bit by making ModifiedRegs a vector<char> instead of a vector<bool>
llvm-svn: 19787
2005-01-23 21:45:01 +00:00
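The reason this helps, in a short C++ sketch (illustrative): vector<bool> is a packed bitset whose operator[] goes through a proxy and does bit masking, while vector<char> indexes bytes directly.
#include <vector>

int main() {
  std::vector<char> ModifiedRegs(256, 0);   // one byte per register number
  ModifiedRegs[42] = 1;                     // plain byte store, no bit twiddling
  return ModifiedRegs[42];
}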
Chris Lattner
4add7e356f
Adjust to changes in SelectionDAG interfaces
The first half of correct chain insertion for libcalls. This is not enough
to fix Fhourstones yet though.
llvm-svn: 19781
2005-01-23 04:42:50 +00:00
Chris Lattner
90b7c13f3a
Remove the 3 HACK HACK HACKs I put in before, fixing them properly with
the new TLI that is available.
Implement support for handling out of range shifts. This allows us to
compile this code (a 64-bit rotate):
unsigned long long f3(unsigned long long x) {
return (x << 32) | (x >> (64-32));
}
into this:
f3:
mov %EDX, DWORD PTR [%ESP + 4]
mov %EAX, DWORD PTR [%ESP + 8]
ret
GCC produces this:
$ gcc t.c -masm=intel -O3 -S -o - -fomit-frame-pointer
..
f3:
push %ebx
mov %ebx, DWORD PTR [%esp+12]
mov %ecx, DWORD PTR [%esp+8]
mov %eax, %ebx
mov %edx, %ecx
pop %ebx
ret
The Simple ISEL produces (eww gross):
f3:
sub %ESP, 4
mov DWORD PTR [%ESP], %ESI
mov %EDX, DWORD PTR [%ESP + 8]
mov %ECX, DWORD PTR [%ESP + 12]
mov %EAX, 0
mov %ESI, 0
or %EAX, %ECX
or %EDX, %ESI
mov %ESI, DWORD PTR [%ESP]
add %ESP, 4
ret
llvm-svn: 19780
2005-01-23 04:39:44 +00:00
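A rough C++ sketch of the out-of-range shift handling on a 32-bit target (hypothetical helper, amounts assumed to be less than 64; not the legalizer code itself): once the amount reaches 32, the low half of a 64-bit shift-left lands entirely in the high half, which is what lets (x << 32) | (x >> 32) collapse into plain register moves.
#include <cassert>
#include <cstdint>

static void ShiftLeft64(uint32_t Lo, uint32_t Hi, unsigned Amt,
                        uint32_t &OutLo, uint32_t &OutHi) {
  if (Amt == 0) {
    OutLo = Lo; OutHi = Hi;
  } else if (Amt < 32) {
    OutLo = Lo << Amt;
    OutHi = (Hi << Amt) | (Lo >> (32 - Amt));
  } else {                        // 32 <= Amt < 64: the "out of range" case
    OutLo = 0;
    OutHi = Lo << (Amt - 32);
  }
}

int main() {
  uint32_t Lo, Hi;
  ShiftLeft64(0x89ABCDEF, 0x01234567, 32, Lo, Hi);
  assert(Lo == 0 && Hi == 0x89ABCDEF);
  return 0;
}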
Chris Lattner
ffcb0ae329
Adjust to changes in SelectionDAG interface.
llvm-svn: 19779
2005-01-23 04:36:26 +00:00
Chris Lattner
eccb73d57f
Get this to work for 64-bit systems.
llvm-svn: 19763
2005-01-22 23:04:37 +00:00
Chris Lattner
52c97fbea9
Implicitly defined registers can clobber callee saved registers too!
This fixes the return-address-not-being-saved problem in the Alpha backend.
llvm-svn: 19741
2005-01-22 00:49:16 +00:00
Chris Lattner
3bc78b2e0b
More bugfixes for IA64 shifts.
llvm-svn: 19739
2005-01-22 00:33:03 +00:00
Chris Lattner
ec2183713c
Fix problems with non-x86 targets.
llvm-svn: 19738
2005-01-22 00:31:52 +00:00
Chris Lattner
d637c96fac
Add a nasty hack to fix Alpha/IA64 multiplies by a power of two.
llvm-svn: 19737
2005-01-22 00:20:42 +00:00
Chris Lattner
d53e763f18
Remove unneeded line.
llvm-svn: 19736
2005-01-21 23:43:12 +00:00
Chris Lattner
4f987bf16d
test commit
llvm-svn: 19735
2005-01-21 23:38:56 +00:00
Chris Lattner
96e809c47d
Unary token factor nodes are unneeded.
llvm-svn: 19727
2005-01-21 18:01:22 +00:00
Chris Lattner
aac464e6c0
Refactor libcall code a bit. Initial implementation of expanding int -> FP
operations for 64-bit integers.
llvm-svn: 19724
2005-01-21 06:05:23 +00:00
Chris Lattner
4d25c04f94
Simplify the shift-expansion code.
llvm-svn: 19721
2005-01-20 20:29:23 +00:00
Chris Lattner
b3f83b28a5
Expand add/sub into ADD_PARTS/SUB_PARTS instead of a non-existent libcall.
llvm-svn: 19715
2005-01-20 18:52:28 +00:00
Chris Lattner
1fe9b40981
implement add_parts/sub_parts.
llvm-svn: 19714
2005-01-20 18:50:55 +00:00
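What an ADD_PARTS-style expansion boils down to, as a C++ sketch (hypothetical names, not the SelectionDAG nodes themselves): add the low halves, then add the high halves plus the carry out of the low add.
#include <cassert>
#include <cstdint>

static void Add64(uint32_t ALo, uint32_t AHi, uint32_t BLo, uint32_t BHi,
                  uint32_t &SumLo, uint32_t &SumHi) {
  SumLo = ALo + BLo;
  uint32_t Carry = SumLo < ALo;   // unsigned overflow of the low half
  SumHi = AHi + BHi + Carry;
}

int main() {
  uint32_t Lo, Hi;
  Add64(0xFFFFFFFF, 0x0, 0x1, 0x0, Lo, Hi);   // 0x00000000FFFFFFFF + 1
  assert(Lo == 0 && Hi == 1);
  return 0;
}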
Chris Lattner
28d15860bd
Add missing entry.
llvm-svn: 19712
2005-01-20 17:32:28 +00:00
Chris Lattner
96c26751ec
Support targets that do not use i8 shift amounts.
llvm-svn: 19707
2005-01-19 22:31:21 +00:00
Chris Lattner
f840289291
Add an assertion that would have made more sense to Duraid
llvm-svn: 19704
2005-01-19 21:32:07 +00:00
Chris Lattner
3d95c14d94
Add support for targets that pass args in registers to calls.
llvm-svn: 19703
2005-01-19 20:24:35 +00:00
Chris Lattner
55562fa99a
Fold single use token factor nodes into other token factor nodes.
llvm-svn: 19701
2005-01-19 19:10:54 +00:00
Chris Lattner
0d03eb45a8
Realize the individual pieces of an expanded copytoreg/store/load are
independent of each other.
llvm-svn: 19700
2005-01-19 18:02:17 +00:00