Commit Graph

1129 Commits

Author SHA1 Message Date
Reid Spencer 2bce8006d5 Get rid of a warning when compiling optimized. Uninitialized variable has
been initialized.

llvm-svn: 15565
2004-08-07 15:19:31 +00:00
Chris Lattner 57cafee576 Ok get rid of the REST of the tabs
llvm-svn: 15564
2004-08-07 07:18:41 +00:00
Chris Lattner 0ee8279979 Death to tabs
llvm-svn: 15563
2004-08-07 07:07:57 +00:00
Alkis Evlogimenos 4242241a69 Clean up whitespace.
llvm-svn: 15490
2004-08-04 09:46:56 +00:00
Alkis Evlogimenos a698308cce Convert indentation to 2 spaces.
llvm-svn: 15489
2004-08-04 09:46:26 +00:00
Brian Gaeke c017a2c269 Include SparcV9TmpInstr.h to pick up the def. of TmpInstruction,
instead of InstrSelection.h, which is dead.

llvm-svn: 15476
2004-08-04 07:34:57 +00:00
Brian Gaeke e31a13ae07 Tighten up some whitespace. Include SparcV9TmpInstr.h to pick up
the def. of TmpInstruction, instead of InstrSelection.h, which is
dead.

llvm-svn: 15475
2004-08-04 07:34:44 +00:00
Chris Lattner 3e689f8070 Squelch warnings in release mode
llvm-svn: 15460
2004-08-04 03:51:55 +00:00
Misha Brukman a297dc2480 Add #include <cstdlib> and abort() to silence a warning
llvm-svn: 15413
2004-08-02 14:02:21 +00:00
Misha Brukman 8481785391 * ceil() requires #include <cmath> for compilation
* Alphabetize #includes
* Fix some lines to fit within 80 cols

llvm-svn: 15412
2004-08-02 13:59:10 +00:00
Tanya Lattner dfd402ea8d Adding ModuloScheduling so that it compiles for everyone.
llvm-svn: 15408
2004-08-01 19:00:17 +00:00
Chris Lattner 356f5a13f5 Dereferencing end() is bad.
llvm-svn: 15402
2004-08-01 09:51:42 +00:00
Alkis Evlogimenos 62c979ba07 Make OptimizeBlock take a MachineFunction::iterator instead of a
MachineBasicBlock* as a parameter so that the next() and prior() helper
functions can work naturally on it.

llvm-svn: 15376
2004-07-31 19:24:41 +00:00
Chris Lattner 0f1c2ed907 next() on a pointer increments the pointer, not an iterator
llvm-svn: 15375
2004-07-31 18:40:36 +00:00
Alkis Evlogimenos 2303d3f660 Use next() helper to make code more readable. Use
MachineFunction::iterator instead of MachineBasicBlock* to avoid
dereferencing end iterators.

llvm-svn: 15373
2004-07-31 15:14:29 +00:00
Alkis Evlogimenos 1e8d8fd81f Use MachineFunction::iterator instead of a MachineBasicBlock* because
FallThrough may be == MF.end().

llvm-svn: 15372
2004-07-31 15:03:52 +00:00
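
The cluster of changes above shares one theme: a MachineBasicBlock* cannot represent "one past the last block", so code whose fall-through position may be MF.end() has to carry a MachineFunction::iterator and step it with a next()/prior() helper rather than pointer arithmetic. A minimal sketch of that idiom follows; the types and helper signatures are simplified stand-ins, not the exact LLVM interfaces of the time:

    #include <list>

    struct MachineBasicBlock { /* block contents omitted */ };

    // Simplified stand-in: treat a MachineFunction as a list of blocks.
    typedef std::list<MachineBasicBlock> MachineFunction;

    // Simplified next()/prior() helpers: step a *copy* of the iterator.
    template <typename It> It next(It I)  { return ++I; }
    template <typename It> It prior(It I) { return --I; }

    // Take an iterator, not a MachineBasicBlock*: the fall-through position
    // may legitimately be MF.end(), which must never be dereferenced, and
    // next()/prior() only make sense on iterators.
    bool OptimizeBlock(MachineFunction &MF, MachineFunction::iterator MBB) {
        // Qualified as ::next to avoid clashing with std::next in modern C++.
        MachineFunction::iterator FallThrough = ::next(MBB);
        if (FallThrough == MF.end())
            return false;            // nothing falls through past the end
        // ... inspect *FallThrough and simplify branches here ...
        return true;
    }
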
Chris Lattner 25e48dd2e0 Implement a simple target-independent CFG cleanup pass
llvm-svn: 15368
2004-07-31 10:01:27 +00:00
Tanya Lattner 081fbd1bde Updated ModuloScheduling. It makes it all the way through register allocation on the new code!!
llvm-svn: 15351
2004-07-30 23:36:10 +00:00
Brian Gaeke b10778dc07 Convert a few assertions with side-effects into regular old runtime checks.
These side-effects seem to make a difference when using llc -march=sparcv9
in Release mode (i.e., with -DNDEBUG); when they are left out, lots of
instructions just get dropped on the floor, because they never end up
in the schedule.

llvm-svn: 15339
2004-07-29 21:31:20 +00:00
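
The pitfall described above is generic: everything inside an assert() vanishes when NDEBUG is defined, so any side effect performed there (such as inserting an instruction into the schedule) is silently lost in Release builds. A small self-contained illustration of the hazard and the fix; the Schedule type and names are made up for the example:

    #include <cassert>
    #include <set>

    struct Schedule {
        std::set<int> slots;
        bool insert(int instr) { return slots.insert(instr).second; }
    };

    void scheduleBad(Schedule &S, int instr) {
        // BAD: with -DNDEBUG the assert, and the insert() inside it, is
        // compiled out, so the instruction never reaches the schedule.
        assert(S.insert(instr) && "instruction not scheduled");
    }

    void scheduleGood(Schedule &S, int instr) {
        // GOOD: do the work unconditionally, then assert on the result.
        bool ok = S.insert(instr);
        assert(ok && "instruction not scheduled");
        (void)ok;   // silence "unused variable" warnings in release builds
    }
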
Misha Brukman 63b38bd2ed Fix #includes of i*.h => Instructions.h as per PR403.
llvm-svn: 15334
2004-07-29 17:30:56 +00:00
Chris Lattner b5ada398a2 Fix #includes of i*.h => Instructions.h as per PR403:
http://llvm.cs.uiuc.edu/PR403 .

llvm-svn: 15333
2004-07-29 17:23:00 +00:00
Chris Lattner 66a64fb997 Fix #includes of i*.h => Instructions.h as per PR403:
http://llvm.cs.uiuc.edu/PR403 .

llvm-svn: 15331
2004-07-29 17:11:37 +00:00
Brian Gaeke 5eb1150db6 TargetInstrInfo::hasOperandInterlock() is always true, because it is
never overridden by any target.

llvm-svn: 15308
2004-07-28 19:24:48 +00:00
Alkis Evlogimenos 83d9b62b28 Add some comments to the backtracking code.
llvm-svn: 15200
2004-07-25 08:10:33 +00:00
Chris Lattner bbe845b969 Fix the sense of joinable
llvm-svn: 15196
2004-07-25 07:47:25 +00:00
Chris Lattner ccc75d4f32 This patch makes use of the infrastructure implemented before to safely and
aggressively coalesce live ranges even if they overlap.  Consider this LLVM
code for example:

int %test(int %X) {
        %Y = mul int %X, 1      ;; Codegens to Y = X
        %Z = add int %X, %Y
        ret int %Z
}

The mul is just there to get a copy into the code stream.  This produces
this machine code:

 (0x869e5a8, LLVM BB @0x869b9a0):
        %reg1024 = mov <fi#-2>, 1, %NOREG, 0    ;; "X"
        %reg1025 = mov %reg1024                 ;; "Y"  (subsumed by X)
        %reg1026 = add %reg1024, %reg1025
        %EAX = mov %reg1026
        ret

Note that the lifetimes of reg1024 and reg1025 overlap, even though they
contain the same value.  This results in this machine code:

test:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        add %EAX, %ECX
        ret

Another, worse case involves loops and PHI nodes.  Consider this trivial loop
testcase:

int %test2(int %X) {
entry:
        br label %Loop
Loop:
        %Y = phi int [%X, %entry], [%Z, %Loop]
        %Z = add int %Y, 1
        %cond = seteq int %Z, 100
        br bool %cond, label %Out, label %Loop
Out:
        ret int %Z
}

Because of interactions between the PHI elimination pass and the register
allocator, this got compiled to this code:

test2:
        mov %ECX, DWORD PTR [%ESP + 4]
.LBBtest2_1:
***     mov %EAX, %ECX
        inc %EAX
        cmp %EAX, 100
***     mov %ECX, %EAX
        jne .LBBtest2_1

        ret

Or on powerpc, this code:

_test2:
        mflr r0
        stw r0, 8(r1)
        stwu r1, -60(r1)
.LBB_test2_1:
        addi r2, r3, 1
        cmpwi cr0, r2, 100
***     or r3, r2, r2
        bne cr0, .LBB_test2_1

***     or r3, r2, r2
        lwz r0, 68(r1)
        mtlr r0
        addi r1, r1, 60
        blr 0



With this improvement in place, we now generate this code for these two
testcases, which is what we want:


test:
        mov %EAX, DWORD PTR [%ESP + 4]
        add %EAX, %EAX
        ret

test2:
        mov %EAX, DWORD PTR [%ESP + 4]
.LBBtest2_1:
        inc %EAX
        cmp %EAX, 100
        jne .LBBtest2_1 # Loop
        ret

Or on PPC:

_test2:
        mflr r0
        stw r0, 8(r1)
        stwu r1, -60(r1)
.LBB_test2_1:
        addi r3, r3, 1
        cmpwi cr0, r3, 100
        bne cr0, .LBB_test2_1

        lwz r0, 68(r1)
        mtlr r0
        addi r1, r1, 60
        blr 0


Static numbers for spill code loads/stores/reg-reg copies (smaller is better):

em3d:       before: 47/25/26         after: 44/22/24
164.gzip:   before: 433/245/310      after: 403/231/278
175.vpr:    before: 3721/2189/1581   after: 4144/2081/1423
176.gcc:    before: 26195/8866/9235  after: 25942/8082/8275
186.crafty: before: 4295/2587/3079   after: 4119/2519/2916
252.eon:    before: 12754/7585/5803  after: 12508/7425/5643
256.bzip2:  before: 463/226/315      after: 482/241/309


Runtime perf number samples on X86:

gzip:  before: 41.09s  after: 39.86s
bzip2: before: 56.71s  after: 57.07s
gcc:   before:  6.16s  after:  6.12s
eon:   before:  2.03s  after:  2.00s
llvm-svn: 15194
2004-07-25 07:11:19 +00:00
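
A rough sketch of what the coalescer does with the copies marked above: for a register-to-register move whose source and destination intervals can be joined, it merges the two intervals and rewrites the copy into an identity move that can then be deleted, which is what removes the extra mov/or instructions in the before/after listings. The types and helpers below are simplified stand-ins, not the actual LiveIntervals interface:

    #include <map>
    #include <vector>

    struct LiveRange { unsigned start, end; };       // half-open [start, end)

    struct LiveInterval {
        unsigned reg;
        std::vector<LiveRange> ranges;
        // Fold the other interval's ranges into this one (simplified: the
        // real code keeps the vector sorted and merges adjacent ranges).
        void join(const LiveInterval &Other) {
            ranges.insert(ranges.end(), Other.ranges.begin(), Other.ranges.end());
        }
    };

    struct CopyInstr { unsigned dstReg, srcReg; };

    // For every copy whose two intervals may be joined (the joinability
    // test itself is omitted here), merge the source interval into the
    // destination interval; the copy then moves a register onto itself.
    void coalesceCopies(std::vector<CopyInstr> &copies,
                        std::map<unsigned, LiveInterval> &intervals) {
        for (CopyInstr &C : copies) {
            LiveInterval &Dst = intervals[C.dstReg];
            LiveInterval &Src = intervals[C.srcReg];
            Dst.join(Src);
            intervals.erase(C.srcReg);
            C.srcReg = C.dstReg;     // now an identity copy, safe to delete
        }
    }
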
Chris Lattner c8002d49e3 Make a method const, no functionality changes
llvm-svn: 15193
2004-07-25 06:23:01 +00:00
Chris Lattner 83b9c50f8f Fix a bug where we incorrectly value-numbered the first PHI definition the
same as the PHI use.  This is not correct, as the PHI use value is different
depending on which branch is taken.  This fixes espresso with aggressive
coalescing, and perhaps others.

llvm-svn: 15189
2004-07-25 05:45:18 +00:00
Chris Lattner af7e898e84 Fix a bug in the range remover
llvm-svn: 15188
2004-07-25 05:43:53 +00:00
Chris Lattner 0e58e5e48c Add debugging output for joining assignments
llvm-svn: 15187
2004-07-25 03:24:11 +00:00
Alkis Evlogimenos e0ab16fe54 Remove implementation of operator= and make it private so that it is
not used accidentally.

llvm-svn: 15172
2004-07-24 18:55:15 +00:00
Alkis Evlogimenos cf72e7f854 Change std::map<unsigned, LiveInterval*> into a std::map<unsigned,
LiveInterval>. This saves some space and removes a level of pointer
indirection.

llvm-svn: 15167
2004-07-24 11:44:15 +00:00
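
For reference, the difference is only whether the map owns the intervals directly or points at separately allocated objects. A tiny sketch of the by-value layout described above, using an illustrative simplified LiveInterval:

    #include <map>
    #include <utility>
    #include <vector>

    struct LiveInterval {
        unsigned reg;
        std::vector<std::pair<unsigned, unsigned> > ranges;
    };

    // Register number -> interval, stored by value: one map lookup reaches
    // the data directly, with no separate new/delete and no pointer to chase.
    std::map<unsigned, LiveInterval> r2iMap_;

    LiveInterval &getInterval(unsigned reg) {
        return r2iMap_[reg];         // default-constructs the entry on first use
    }
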
Chris Lattner 8c595eb4bd whoops, didn't mean to remove this
llvm-svn: 15157
2004-07-24 04:32:22 +00:00
Chris Lattner d9bbbb8484 In the joiner, merge the small interval into the large interval. This gets
us back to taking about 10.5s on gcc, instead of taking 15.6s!  The net result
is that my big patches have had no significant effect on compile time or code
quality.  heh.

llvm-svn: 15156
2004-07-24 03:41:50 +00:00
Chris Lattner c51866a20e Completely eliminate the intervals_ list.  Instead, the r2iMap_ maintains
ownership of the intervals.

llvm-svn: 15155
2004-07-24 03:32:06 +00:00
Chris Lattner 7efcdb7ca3 Big change to compute logical value numbers for each LiveRange added to an
Interval.  This generalizes the isDefinedOnce mechanism that we used before
to help us coalesce ranges that overlap.  As part of this, every logical
range with a different value is assigned a different number in the interval.
For example, for code that looks like this:

0  X = ...
4  X += ...
  ...
N    = X

We now generate a live interval that contains two ranges: [2,6:0),[6,?:1)
reflecting the fact that there are two different values in the range at
different positions in the code.

Currently we are not using this information at all, so this just slows down
liveintervals.  In the future, this will change.

Note that this change also substantially refactors the joinIntervalsInMachineBB
method to merge the cases for virt-virt and phys-virt joining into a single
case, adds comments, and makes the code a bit easier to follow.

llvm-svn: 15154
2004-07-24 02:59:07 +00:00
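
The [2,6:0),[6,?:1) notation above reads as "range [2,6) carries value 0, range [6,?) carries value 1". A minimal sketch of an interval built that way, with simplified LiveRange/LiveInterval stand-ins rather than the real classes; since the end point of the second range is unknown in the example, a placeholder value is used:

    #include <cstdio>
    #include <vector>

    struct LiveRange {
        unsigned start, end;   // half-open [start, end)
        unsigned ValId;        // which numbered value of the interval this range carries
    };

    struct LiveInterval {
        unsigned reg;
        unsigned NumValues;
        std::vector<LiveRange> ranges;

        unsigned getNextValue() { return NumValues++; }
        void addRange(unsigned S, unsigned E, unsigned V) {
            LiveRange LR = { S, E, V };
            ranges.push_back(LR);    // real code keeps ranges sorted and merged
        }
    };

    int main() {
        LiveInterval X;
        X.reg = 1024;
        X.NumValues = 0;

        unsigned V0 = X.getNextValue();  // value defined by "0  X = ..."
        unsigned V1 = X.getNextValue();  // value defined by "4  X += ..."
        X.addRange(2, 6, V0);            // [2,6:0)
        X.addRange(6, 42, V1);           // [6,?:1) -- 42 is only a placeholder end
        for (size_t i = 0; i != X.ranges.size(); ++i)
            std::printf("[%u,%u:%u)\n", X.ranges[i].start, X.ranges[i].end,
                        X.ranges[i].ValId);
        return 0;
    }
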
Chris Lattner d7b9e29327 Add a new differingRegisterClasses method
make overlapsAliases take pointers instead of references
fix indentation

llvm-svn: 15153
2004-07-24 02:53:43 +00:00
Chris Lattner 038747f5c0 Little stuff:
* Fix comment typo
* add dump() methods
* add a few new methods like getLiveRangeContaining, removeRange & joinable
  (which is currently the same as overlaps)
* Remove the unused operator==

Bigger change:

* In LiveInterval, instead of using a boolean isDefinedOnce to keep track of
  whether there are > 1 definitions in a particular interval, keep a counter,
  NumValues, to keep track of exactly how many there are.
* In LiveRange, add a new ValId element to indicate which of the numbered
  values each LiveRange belongs to.   We now no longer merge LiveRanges if
  they are of differing value IDs, even if they are neighbors.

llvm-svn: 15152
2004-07-24 02:52:23 +00:00
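
A small sketch of the new merging rule described above: two ranges of an interval are folded together only when they touch or overlap and also carry the same value number. The LiveRange below is an illustrative stand-in, not the actual class:

    struct LiveRange { unsigned start, end, ValId; };   // half-open [start, end)

    // Ranges with differing ValIds stay separate even if they are adjacent;
    // only same-valued, touching or overlapping ranges may be merged.
    bool mergeableRanges(const LiveRange &A, const LiveRange &B) {
        bool touchOrOverlap = A.end >= B.start && B.end >= A.start;
        return touchOrOverlap && A.ValId == B.ValId;
    }
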
Chris Lattner 1604b02c4c More minor changes:
* Inline some functions
* Eliminate some comparisons from the release build

This is good for another .3 seconds on gcc.

llvm-svn: 15144
2004-07-23 21:24:19 +00:00
Chris Lattner b4acba49b1 Change addRange and join to be a little bit smarter. In particular, we don't
want to insert a new range into the middle of the vector, then delete ranges
one at a time next to the inserted one as they are merged.

Instead, if the inserted interval overlaps, just start merging.  The only time
we insert into the middle of the vector is when we don't overlap at all.  Also
delete blocks of live ranges if we overlap with many of them.

This patch speeds up joining by .7 seconds on a large testcase, but more
importantly gets all of the range adding code into addRangeFrom.

llvm-svn: 15141
2004-07-23 19:38:44 +00:00
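
A simplified sketch of the strategy described above: rather than inserting the new range and then deleting its merged neighbours one at a time, find the first overlapping range, grow it in place, and erase any ranges it swallows as a single block; only a completely non-overlapping range is inserted into the middle of the vector. This is an approximation of the idea, not the actual addRangeFrom code:

    #include <algorithm>
    #include <vector>

    struct LiveRange { unsigned start, end; };   // half-open [start, end)

    // 'ranges' is kept sorted by start point and free of overlaps.
    void addRange(std::vector<LiveRange> &ranges, LiveRange NR) {
        // First existing range whose end reaches the new start (possible overlap).
        std::vector<LiveRange>::iterator it =
            std::lower_bound(ranges.begin(), ranges.end(), NR,
                [](const LiveRange &R, const LiveRange &N) { return R.end < N.start; });

        if (it == ranges.end() || it->start > NR.end) {
            // No overlap at all: the only case that inserts into the vector.
            ranges.insert(it, NR);
            return;
        }

        // Overlap: grow the existing range in place...
        it->start = std::min(it->start, NR.start);
        it->end   = std::max(it->end, NR.end);

        // ...then erase any following ranges the grown range now covers,
        // as one block rather than one element at a time.
        std::vector<LiveRange>::iterator last = it + 1;
        while (last != ranges.end() && last->start <= it->end) {
            it->end = std::max(it->end, last->end);
            ++last;
        }
        ranges.erase(it + 1, last);
    }
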
Chris Lattner 2fcc5e416d Search by the start point, not by the whole interval. This saves some
comparisons, reducing linscan by another .1 seconds :)

llvm-svn: 15139
2004-07-23 18:40:00 +00:00
Chris Lattner 60babd0431 New helper method
llvm-svn: 15138
2004-07-23 18:39:12 +00:00
Chris Lattner 848c7c59d4 Speedup debug builds a bit
llvm-svn: 15137
2004-07-23 18:38:52 +00:00
Chris Lattner c96d299569 Instead of searching for a live interval pair, search for a location. This gives
a very modest speedup of .3 seconds compiling 176.gcc (out of 20s).

llvm-svn: 15136
2004-07-23 18:13:24 +00:00
Chris Lattner 856383326a Rename LiveIntervals.(cpp|h) -> LiveIntervalAnalysis.(cpp|h)
llvm-svn: 15135
2004-07-23 17:56:30 +00:00
Chris Lattner 78f62e37f3 Pull the LiveRange and LiveInterval classes out of LiveIntervals.h (which
will soon be renamed) into their own file.  The new file should not emit
DEBUG output or have other side effects.  The LiveInterval class also now
doesn't know whether it's working on registers or some other thing.

In the future we will want to use the LiveInterval class and friends to do
stack packing.  In addition to a code simplification, this will allow us to
do it more easily.

llvm-svn: 15134
2004-07-23 17:49:16 +00:00
Chris Lattner 53280cd26e Improve comments a bit
Use an explicit LiveRange class to represent ranges instead of an std::pair.
This is a minor cleanup, but is really intended to make a future patch simpler
and less invasive.

Alkis, could you please take a look at LiveInterval::liveAt?  I suspect that
you can add an operator<(unsigned) to LiveRange, allowing us to speed up the
upper_bound call by quite a bit (this would also apply to other callers of
upper/lower_bound).  I would do it myself, but I still don't understand that
crazy liveAt function, despite the comment. :)

Basically I would like to see this:
    LiveRange dummy(index, index+1);
    Ranges::const_iterator r = std::upper_bound(ranges.begin(),
                                                ranges.end(),
                                                dummy);

Turn into:
    Ranges::const_iterator r = std::upper_bound(ranges.begin(),
                                                ranges.end(),
                                                index);

llvm-svn: 15130
2004-07-23 08:24:23 +00:00
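
The suggestion above amounts to giving LiveRange a heterogeneous comparison against a bare index so std::upper_bound can be called without constructing a dummy range. For upper_bound(first, last, index) the comparison applied is "index < range", so the operator needed is the free function sketched below; this is a sketch of the suggested direction, not the code that was eventually committed:

    #include <algorithm>
    #include <vector>

    struct LiveRange {
        unsigned start, end;         // half-open [start, end)
    };

    // Comparison used by std::upper_bound(first, last, index):
    // "is this index ordered strictly before this range?"
    inline bool operator<(unsigned index, const LiveRange &LR) {
        return index < LR.start;
    }

    // First range that starts strictly after 'index', with no dummy
    // LiveRange(index, index+1) built just to have something to compare.
    std::vector<LiveRange>::const_iterator
    firstRangeAfter(const std::vector<LiveRange> &ranges, unsigned index) {
        return std::upper_bound(ranges.begin(), ranges.end(), index);
    }

The call in liveAt would then pass the bare index instead of constructing the dummy range shown above.
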
Chris Lattner 2d75978bc6 Update live intervals more accurately for PHI elim. This slightly reduces
the live intervals for some registers.

llvm-svn: 15125
2004-07-23 05:27:43 +00:00
Chris Lattner b549420cd0 Force coalescing of live ranges that have a single definition, even if they
interfere.  Because these intervals have a single definition, and one of those
definitions is a copy instruction, they are always safe to merge even if their
lifetimes interfere.  This slightly reduces the amount of spill code, for
example on 252.eon, from:

  12837 spiller               - Number of loads added
   7604 spiller               - Number of stores added
   5842 spiller               - Number of register spills
  18155 liveintervals         - Number of identity moves eliminated after coalescing

to:

  12754 spiller               - Number of loads added
   7585 spiller               - Number of stores added
   5803 spiller               - Number of register spills
  18262 liveintervals         - Number of identity moves eliminated after coalescing

The much much bigger win would be to merge intervals with multiple definitions
(aka phi nodes) but this is not that day.

llvm-svn: 15124
2004-07-23 05:26:05 +00:00
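
The safety argument above boils down to a small predicate: if each interval carries exactly one definition and the copy being coalesced is the definition of one of them, then the two registers provably hold the same value everywhere they overlap, so joining them cannot change the program. A sketch of that check, reusing the illustrative interval stand-in from the earlier examples rather than the code of this revision:

    // Illustrative stand-in: an interval that knows how many distinct
    // definitions (values) it contains.
    struct LiveInterval {
        unsigned reg;
        unsigned NumValues;
        // ... ranges ...
    };

    // Normally two overlapping intervals must not be joined.  But when both
    // sides have exactly one definition and the copy under consideration is
    // the definition of one of them, joining despite the overlap is safe.
    bool safeToJoinDespiteOverlap(const LiveInterval &Dst, const LiveInterval &Src) {
        return Dst.NumValues == 1 && Src.NumValues == 1;
    }
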
Chris Lattner 84b93bb107 cosmetic changes
llvm-svn: 15118
2004-07-22 23:05:12 +00:00