Commit Graph

197 Commits

Author SHA1 Message Date
Chris Lattner 0048e574fb Fix a fairly major performance problem. If a PHI node had a constant as
an incoming value from a block, the selector would evaluate the constant
at the TOP of the block instead of at the end of the block.  This made the
live range for the constant span the entire block, increasing register
pressure needlessly.
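
A minimal illustration (a sketch, not the original testcase):

declare int %foo()

int %count() {
entry:
        %unrelated = call int %foo()    ; other work in the predecessor
        br label %loop
loop:
        %i = phi int [ 0, %entry ], [ %i2, %loop ]
        %i2 = add int %i, %unrelated
        %done = seteq int %i2, 100
        br bool %done, label %exit, label %loop
exit:
        ret int %i2
}

Previously the selector emitted the copy that materializes the incoming 0
for %i at the top of %entry, above the call; now it is emitted just before
the branch, so the constant is not live across the rest of the block.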

llvm-svn: 12542
2004-03-30 19:10:12 +00:00
Chris Lattner 6ca9b89abb Malloc doesn't kill a load. This patch need not go into 1.2 though.
llvm-svn: 12500
2004-03-18 17:01:26 +00:00
Chris Lattner dc47e27188 Fix a really nasty bug that was breaking ijpeg in LLC mode. We were incorrectly
folding load instructions into other instructions across free instruction
boundaries.  Perhaps this will also fix the other strange failures?
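
The hazard, sketched on a made-up example (not the actual testcase from
ijpeg):

int %test(int* %P, int %A) {
        %v = load int* %P
        free int* %P
        %s = add int %A, %v     ; folding the load into this add would
        ret int %s              ; re-read %P after it has been freed
}

The fold is only safe when nothing between the load and its use can write
memory, and a free certainly can.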

llvm-svn: 12494
2004-03-18 06:29:54 +00:00
Chris Lattner 699aa70f0c It helps if I save the file. :)
llvm-svn: 12357
2004-03-13 00:24:52 +00:00
Chris Lattner 071a5e5649 Rename the intrinsic enum values for llvm.va_* from Intrinsic::va_* to
Intrinsic::va*.  This avoids conflicting with macros in the stdlib.h file.

llvm-svn: 12356
2004-03-13 00:24:00 +00:00
Chris Lattner 653e662a93 Implement folding explicit load instructions into binary operations. For a
testcase like this:

int %test(int* %P, int %A) {
        %Pv = load int* %P
        %B = add int %A, %Pv
        ret int %B
}

We now generate:
test:
        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        add %EAX, DWORD PTR [%ECX]
        ret

Instead of:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %EAX, DWORD PTR [%EAX]
        add %EAX, %ECX
        ret

... saving one instruction, and often a register.  Note that there are a lot
of other instructions that could use this, but they aren't handled.  I'm not
really interested in adding them, but mul/div and all of the FP instructions
could be supported as well if someone wanted to add them.

llvm-svn: 12204
2004-03-08 01:58:35 +00:00
Chris Lattner 1dd6afe6a2 Rearrange and refactor some code. No functionality changes.
llvm-svn: 12203
2004-03-08 01:18:36 +00:00
Misha Brukman a6025e6480 Doxygenify some comments.
llvm-svn: 12064
2004-03-01 23:53:11 +00:00
Chris Lattner 1f4642c47c Handle passing constant integers to functions much more efficiently. Instead
of generating this code:

        mov %EAX, 4
        mov DWORD PTR [%ESP], %EAX
        mov %AX, 123
        movsx %EAX, %AX
        mov DWORD PTR [%ESP + 4], %EAX
        call Y

we now generate:
        mov DWORD PTR [%ESP], 4
        mov DWORD PTR [%ESP + 4], 123
        call Y

Which hurts the eyes less.  :)

Considering that register pressure around call sites is already high (with all
of the callee-clobbered registers and such), this may help a lot.

llvm-svn: 12028
2004-03-01 02:42:43 +00:00
Chris Lattner 5c7d3cda78 Fix a minor code-quality issue. When passing 8- and 16-bit integer constants
to function calls, we would emit dead code, like this:

int Y(int, short, double);
int X() {
  Y(4, 123, 4);
}

--- Old
X:
        sub %ESP, 20
        mov %EAX, 4
        mov DWORD PTR [%ESP], %EAX
***     mov %AX, 123
        mov %AX, 123
        movsx %EAX, %AX
        mov DWORD PTR [%ESP + 4], %EAX
        fld QWORD PTR [.CPIX_0]
        fstp QWORD PTR [%ESP + 8]
        call Y
        mov %EAX, 0
        # IMPLICIT_USE %EAX %ESP
        add %ESP, 20
        ret

Now we emit:
X:
        sub %ESP, 20
        mov %EAX, 4
        mov DWORD PTR [%ESP], %EAX
        mov %AX, 123
        movsx %EAX, %AX
        mov DWORD PTR [%ESP + 4], %EAX
        fld QWORD PTR [.CPIX_0]
        fstp QWORD PTR [%ESP + 8]
        call Y
        mov %EAX, 0
        # IMPLICIT_USE %EAX %ESP
        add %ESP, 20
        ret

Next up, eliminate the mov AX and movsx entirely!

llvm-svn: 12026
2004-03-01 02:34:08 +00:00
Alkis Evlogimenos ea81b79a97 A big X86 instruction rename. The instructions are renamed to make
their names more descriptive. A name consists of the base name, a
default operand size, followed by one character per operand, with an
optional special size suffix. For example:

ADD8rr -> add, 8-bit register, 8-bit register

IMUL16rmi -> imul, 16-bit register, 16-bit memory, 16-bit immediate

IMUL16rmi8 -> imul, 16-bit register, 16-bit memory, 8-bit immediate

MOVSX32rm16 -> movsx, 32-bit register, 16-bit memory

llvm-svn: 11995
2004-02-29 08:50:03 +00:00
Chris Lattner 1e36fb030c Eliminate the X86-specific BMI functions, using BuildMI instead.
Replace uses of addZImm with addImm.
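
For reference, a rough before/after (the exact overloads are from memory,
so treat this as a sketch):

        // X86-specific helper with the old zero-extended immediate form:
        //   BMI(MBB, IP, X86::ADD32ri, 2, DestReg).addReg(SrcReg).addZImm(4);
        // Generic machine instruction builder:
        BuildMI(BB, X86::ADD32ri, 2, DestReg).addReg(SrcReg).addImm(4);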

llvm-svn: 11992
2004-02-29 07:22:16 +00:00
Chris Lattner 9a97573267 Fix a miscompilation of 197.parser that occurs when you have single basic
block loops.

llvm-svn: 11990
2004-02-29 07:10:16 +00:00
Chris Lattner ca89812db7 These two virtual methods are never called.
llvm-svn: 11984
2004-02-29 05:59:33 +00:00
Alkis Evlogimenos fa63580517 SHLD and SHRD take 32-bit operands but an 8-bit immediate. Rename them
to denote this fact.

llvm-svn: 11972
2004-02-28 23:46:44 +00:00
Alkis Evlogimenos 4953ae085a Floating point loads/stores act on memory operands. Rename them to
denote this fact.

llvm-svn: 11971
2004-02-28 23:42:35 +00:00
Alkis Evlogimenos f020dfb43c Rename SHL, SHR, SAR, SHLD and SHLR instructions to make them
consistent with the rest and also prepare for the addition of their
memory operand variants.

llvm-svn: 11902
2004-02-27 06:57:05 +00:00
Alkis Evlogimenos 61719d48d2 Uncomment assertions that register# != 0 on calls to
MRegisterInfo::is{Physical,Virtual}Register. Apply appropriate fixes
to relevant files.
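
The assertions in question are essentially this (paraphrased; register
number 0 means "no register", so it is neither physical nor virtual):

        static bool isPhysicalRegister(unsigned Reg) {
          assert(Reg && "this is not a register!");
          return Reg < MRegisterInfo::FirstVirtualRegister;
        }
        static bool isVirtualRegister(unsigned Reg) {
          assert(Reg && "this is not a register!");
          return Reg >= MRegisterInfo::FirstVirtualRegister;
        }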

llvm-svn: 11882
2004-02-26 22:00:20 +00:00
Chris Lattner 9192bbdad9 Fix some warnings, some of which were spurious, and some of which were real
bugs.  Thanks Brian!

llvm-svn: 11859
2004-02-26 01:20:02 +00:00
Chris Lattner 309327a4b5 Teach the instruction selector how to transform 'array' GEP computations into X86
scaled indexes.  This allows us to compile GEP's like this:

int* %test([10 x { int, { int } }]* %X, int %Idx) {
        %Idx = cast int %Idx to long
        %X = getelementptr [10 x { int, { int } }]* %X, long 0, long %Idx, ubyte 1, ubyte 0
        ret int* %X
}

Into a single address computation:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        lea %EAX, DWORD PTR [%EAX + 8*%ECX + 4]
        ret

Before it generated:
test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        shl %ECX, 3
        add %EAX, %ECX
        lea %EAX, DWORD PTR [%EAX + 4]
        ret

This is useful for things like int/float/double arrays, as the indexing can be folded into
the loads and stores, reducing register pressure and decreasing the pressure on the decode unit.
With these changes, I expect our performance on 256.bzip2 and gzip to improve a lot.  On
bzip2 for example, we go from this:

10665 asm-printer           - Number of machine instrs printed
   40 ra-local              - Number of loads/stores folded into instructions
 1708 ra-local              - Number of loads added
 1532 ra-local              - Number of stores added
 1354 twoaddressinstruction - Number of instructions added
 1354 twoaddressinstruction - Number of two-address instructions
 2794 x86-peephole          - Number of peephole optimization performed

to this:
9873 asm-printer           - Number of machine instrs printed
  41 ra-local              - Number of loads/stores folded into instructions
1710 ra-local              - Number of loads added
1521 ra-local              - Number of stores added
 789 twoaddressinstruction - Number of instructions added
 789 twoaddressinstruction - Number of two-address instructions
2142 x86-peephole          - Number of peephole optimization performed

... and these types of instructions are often in tight loops.

Linear scan is also helped, but not as much.  It goes from:

8787 asm-printer           - Number of machine instrs printed
2389 liveintervals         - Number of identity moves eliminated after coalescing
2288 liveintervals         - Number of interval joins performed
3522 liveintervals         - Number of intervals after coalescing
5810 liveintervals         - Number of original intervals
 700 spiller               - Number of loads added
 487 spiller               - Number of stores added
 303 spiller               - Number of register spills
1354 twoaddressinstruction - Number of instructions added
1354 twoaddressinstruction - Number of two-address instructions
 363 x86-peephole          - Number of peephole optimization performed

to:

7982 asm-printer           - Number of machine instrs printed
1759 liveintervals         - Number of identity moves eliminated after coalescing
1658 liveintervals         - Number of interval joins performed
3282 liveintervals         - Number of intervals after coalescing
4940 liveintervals         - Number of original intervals
 635 spiller               - Number of loads added
 452 spiller               - Number of stores added
 288 spiller               - Number of register spills
 789 twoaddressinstruction - Number of instructions added
 789 twoaddressinstruction - Number of two-address instructions
 258 x86-peephole          - Number of peephole optimization performed

Though I'm not complaining about the drop in the number of intervals.  :)

llvm-svn: 11820
2004-02-25 07:00:55 +00:00
Chris Lattner d1ee55d439 * Make the previous patch more efficient by not allocating a temporary MachineInstr
to do analysis.

*** FOLD getelementptr instructions into loads and stores when possible,
    making use of some of the crazy X86 addressing modes.

For example, the following C++ program fragment:

struct complex {
    double re, im;
    complex(double r, double i) : re(r), im(i) {}
};
inline complex operator+(const complex& a, const complex& b) {
    return complex(a.re+b.re, a.im+b.im);
}
complex addone(const complex& arg) {
    return arg + complex(1,0);
}

Used to be compiled to:
_Z6addoneRK7complex:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
***     mov %EDX, %ECX
        fld QWORD PTR [%EDX]
        fld1
        faddp %ST(1)
***     add %ECX, 8
        fld QWORD PTR [%ECX]
        fldz
        faddp %ST(1)
***     mov %ECX, %EAX
        fxch %ST(1)
        fstp QWORD PTR [%ECX]
***     add %EAX, 8
        fstp QWORD PTR [%EAX]
        ret

Now it is compiled to:
_Z6addoneRK7complex:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        fld QWORD PTR [%ECX]
        fld1
        faddp %ST(1)
        fld QWORD PTR [%ECX + 8]
        fldz
        faddp %ST(1)
        fxch %ST(1)
        fstp QWORD PTR [%EAX]
        fstp QWORD PTR [%EAX + 8]
        ret

Other programs should see similar improvements, across the board.  Note that
in addition to reducing instruction count, this also reduces register pressure
a lot, always a good thing on X86.  :)

llvm-svn: 11819
2004-02-25 06:13:04 +00:00
Chris Lattner d825d30f42 add an inefficient way of folding structure and constant array indexes together
into a single LEA instruction.  This should improve the code generated for
things like X->A.B.C[12].D.

The bigger benefit is still coming though.  Note that this uses an LEA instruction
instead of an add, giving the register allocator more freedom.  We should probably
never generate ADDri32's.

llvm-svn: 11817
2004-02-25 03:45:50 +00:00
Chris Lattner f85e33cd79 Implement special case for storing an immediate into memory so that we don't need
an intermediate register.
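
In other words (illustrative, not the actual diff), storing a constant like
'store int 42, int* %P' can now come out as:

        mov DWORD PTR [%ECX], 42

instead of:

        mov %EAX, 42
        mov DWORD PTR [%ECX], %EAX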

llvm-svn: 11816
2004-02-25 02:56:58 +00:00
Alkis Evlogimenos af2de4848e Refactor rewinding code for finding the first terminator of a basic
block into MachineBasicBlock::getFirstTerminator().

This also fixes a bug in the implementation of the above in both
RegAllocLocal and InstrSched, where instructions were added after the
terminator if the basic block's only instruction was a terminator (it
shouldn't matter for RegAllocLocal since this case never occurs in
practice).
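
The rewinding logic is essentially this (a sketch; isTerminator() stands in
for the target's real terminator query):

        MachineBasicBlock::iterator getFirstTerminator(MachineBasicBlock &MBB) {
          MachineBasicBlock::iterator I = MBB.end();
          while (I != MBB.begin()) {
            MachineBasicBlock::iterator Prev = I;
            --Prev;
            if (!isTerminator(*Prev))
              break;        // hit a non-terminator, stop rewinding
            I = Prev;       // keep rewinding over trailing terminators
          }
          return I;         // first terminator, or end() if there is none
        }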

llvm-svn: 11748
2004-02-23 18:14:48 +00:00
Chris Lattner cb185a34bb Simplify code a bit and don't go off the end of the block, now that the
current block we are in might be empty.

llvm-svn: 11744
2004-02-23 07:42:19 +00:00
Chris Lattner 4ffd4443ce We were forgetting to add FP_REG_KILL instructions to basic blocks which will
eventually get an assignment due to elimination of PHIs.

llvm-svn: 11743
2004-02-23 07:29:45 +00:00
Chris Lattner 7e90628a8a Implement cast fp -> bool
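
(A sketch of the semantics: "%b = cast double %X to bool" is true exactly
when %X is nonzero, so it boils down to a compare against zero.)
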
llvm-svn: 11728
2004-02-23 03:21:41 +00:00
Chris Lattner 6590c29971 Stop passing iterators around by reference now that we have ilists!
Implement cast Type::ULongTy -> double

llvm-svn: 11726
2004-02-23 03:10:10 +00:00
Chris Lattner cdd56634b0 Only insert FP_REG_KILL instructions in MachineBasicBlocks that actually
use FP instructions.  This reduces the number of instructions inserted in
176.gcc (for example) from 58074 to 101 (it doesn't use much FP, which
is typical).  This reduction speeds up the entire code generator.  In the
case of 176.gcc, llc went from taking 31.38s to 24.78s.  The passes that
sped up the most are the register allocator and the 2 live variable analysis
passes, which sped up by 2.3s, 1.3s, and 1.5s respectively.  The asmprinter
pass also sped up because it doesn't print the instructions in comments :)

Note that this patch is likely to expose latent bugs in machine code passes,
because now basic blocks can be empty, where they were never empty before.  I
cleaned out regalloclocal, but who knows about linscan :)

llvm-svn: 11717
2004-02-22 19:47:26 +00:00
Alkis Evlogimenos 8358cc573d Move MOTy::UseType enum into MachineOperand. This eliminates the
switch statements in the constructors and simplifies the
implementation of the getUseType() member function. You will have to
specify defs using MachineOperand::Def instead of MOTy::Def though
(similarly for Use and UseAndDef).
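
Concretely (illustrative only), code like:

        MI->addRegOperand(DestReg, MOTy::Def);

now reads:

        MI->addRegOperand(DestReg, MachineOperand::Def);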

llvm-svn: 11715
2004-02-22 19:23:26 +00:00
Chris Lattner fae7564027 Reduce the number of pointless copies inserted due to constant pointer refs.
Also, make an assertion actually fireable!

llvm-svn: 11713
2004-02-22 17:35:42 +00:00
Chris Lattner fa3ebd6ad5 Fix bug in previous checkin: leave the iterator at the first instruction
AFTER the GEP that was emitted.  :(

llvm-svn: 11712
2004-02-22 17:05:38 +00:00
Chris Lattner 6536519f6e Completely rewrite how getelementptr instructions are expanded. This has two
(minor) benefits right now:

1. An extra dummy MOVrr32 is gone.  This move would often be coalesced by
   both allocators anyway.
2. The code now uses the gep_type_iterator to walk the gep, which should
   future-proof it a bit.  It still assumes that array indexes are Longs though.

These don't really justify rewriting the code.  The big benefit will come later
though.
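
For the record, the iterator-driven walk has roughly this shape (a sketch,
not the committed code):

        gep_type_iterator GTI = gep_type_begin(GEP);
        for (unsigned i = 1, e = GEP->getNumOperands(); i != e; ++i, ++GTI) {
          if (isa<StructType>(*GTI)) {
            // Struct field: operand i is a constant field number, folded
            // into the displacement via the struct layout.
          } else {
            // Sequential index: operand i (assumed to be a Long) is scaled
            // by the element size.
          }
        }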

llvm-svn: 11710
2004-02-22 07:04:00 +00:00
Chris Lattner 3abcdf3b90 Fix the mnemonics for the mov instructions to have the source and destination
order in the correct sense!! Arg!

llvm-svn: 11530
2004-02-17 06:28:19 +00:00
Chris Lattner ebd90733b0 Fix the last crimes against nature that used the 'ir' ordering to use the
'ri' ordering instead... no, it's not possible to store a register into an
immediate!

llvm-svn: 11529
2004-02-17 06:24:02 +00:00
Chris Lattner 288e043e1b Rename MOVi[mr] instructions to MOV[rm]i
llvm-svn: 11527
2004-02-17 06:16:44 +00:00
Chris Lattner 818bcec247 Rename the IMULri* instructions to IMULrri, as they are actually three-address
instructions.  Add forms of these instructions that read from memory.

llvm-svn: 11518
2004-02-17 04:26:43 +00:00
Chris Lattner a9084948ff Implement llvm.(frame|return)address(0) correctly. They are used by the LLVM JIT, among other
applications.
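
Assuming the function keeps a frame pointer, the expected code is trivial
(a sketch):

        mov %EAX, %EBP                      # llvm.frameaddress(0)
        mov %EAX, DWORD PTR [%EBP + 4]      # llvm.returnaddress(0)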

llvm-svn: 11459
2004-02-15 01:04:03 +00:00
Chris Lattner 9f75a55329 finegrainify namespacification, fix 80col prob
llvm-svn: 11445
2004-02-14 06:00:36 +00:00
Chris Lattner 2f49d5bb35 Codegen llvm.memset into rep stos[bwd]. Simplify code for llvm.memcpy
llvm-svn: 11442
2004-02-14 04:46:05 +00:00
Chris Lattner 7b5f374a18 There is no need to emit a shift if the size is constant, which is common
llvm-svn: 11420
2004-02-13 23:36:47 +00:00
Chris Lattner 8dc99feeaf Add support for the rep movs[bwd] instructions, and emit them when code
generating the llvm.memcpy intrinsic.
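
For a constant length that is a multiple of 4, the emitted sequence has
roughly this shape (illustrative operands):

        mov %EDI, DWORD PTR [%ESP + 4]      # destination
        mov %ESI, DWORD PTR [%ESP + 8]      # source
        mov %ECX, 32                        # 128 bytes = 32 dwords
        rep movsd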

llvm-svn: 11351
2004-02-12 17:53:22 +00:00
Alkis Evlogimenos 80da865f77 Change MachineBasicBlock's vector of MachineInstr pointers into an
ilist of MachineInstr objects. This allows constant time removal and
insertion of MachineInstr instances from anywhere in each
MachineBasicBlock. It also allows for constant time splicing of
MachineInstrs into or out of MachineBasicBlocks.
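
For example (a sketch of what this enables; splice mirrors std::list's
interface):

        // Move one instruction to the front of another block in constant
        // time: no copying, just relinking of the intrusive list nodes.
        void hoistToFront(MachineBasicBlock &To, MachineBasicBlock &From,
                          MachineBasicBlock::iterator MI) {
          To.splice(To.begin(), &From, MI);
        }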

llvm-svn: 11340
2004-02-12 02:27:10 +00:00
Chris Lattner ac6db755c3 Adjust to the changed StructType interface. In particular, getElementTypes() is gone.
llvm-svn: 11228
2004-02-09 04:37:31 +00:00
Chris Lattner d1b1992495 Generate ftst instructions for comparison against zero
llvm-svn: 11098
2004-02-03 18:54:04 +00:00
Chris Lattner 30d26ac561 Generate the fchs instruction to negate a floating point number
llvm-svn: 11078
2004-02-02 19:31:38 +00:00
Chris Lattner 298fdd7eb1 Codegen -0.0 correctly. Do not use fldz! This is another -0.0 == +0.0 problem, arg.
llvm-svn: 11070
2004-02-02 18:56:30 +00:00
Chris Lattner 4710add9dd Add (currently disabled) support to the instruction selector to only insert
FP_REG_KILL instructions at the end of blocks involved with critical edges.

Fix a bug where FP_REG_KILL instructions weren't inserted in fall through
unconditional branches.  Perhaps this will fix some linscan problems?

llvm-svn: 11019
2004-01-30 22:13:44 +00:00
Alkis Evlogimenos 975c8bde79 Output mov %REG = 0 instead of xor %REG, %REG, %REG to clear a
register so that LiveVariables analysis is not confused.

llvm-svn: 10773
2004-01-12 07:22:45 +00:00
Chris Lattner 5d236005b0 Clean up a lot of the code I added yesterday by exposing the IntrinsicLowering
implementation from the TargetMachine directly.

llvm-svn: 10636
2003-12-28 21:23:38 +00:00