Commit Graph

11801 Commits

Eric Christopher 28f4c729f7 Temporarily revert r129408 to see if it brings the bots back.
llvm-svn: 129417
2011-04-13 00:20:59 +00:00
Eric Christopher d829f43c06 Fix a bug where we were counting the alias sets as completely used
registers for fast allocation.

Fixes rdar://9207598

llvm-svn: 129408
2011-04-12 23:23:14 +00:00
Devang Patel 0e821f4673 I missed this new file in the previous commit.
llvm-svn: 129407
2011-04-12 23:21:44 +00:00
Devang Patel 28dce70364 Simplify. There is no need to use a static variable.
llvm-svn: 129406
2011-04-12 23:10:47 +00:00
Devang Patel 13d47f0ddc Do not reuse parameter name.
llvm-svn: 129405
2011-04-12 23:09:06 +00:00
Devang Patel f20c4f715f This mechanical patch moves type handling into CompileUnit from DwarfDebug. In the case of multiple compile units in one object file, each compile unit is responsible for its own set of type entries anyway. This refactoring makes this obvious.
llvm-svn: 129402
2011-04-12 22:53:02 +00:00
Eric Christopher de9d58569f Add more comments... err debug statements to the fast allocator.
llvm-svn: 129400
2011-04-12 22:17:44 +00:00
Jakob Stoklund Olesen c49df2c05a SparseBitVector is SLOW.
Use a BitVector instead; we didn't need the smaller memory footprint anyway.
This makes the greedy register allocator 10% faster.

llvm-svn: 129390
2011-04-12 21:30:53 +00:00
Andrew Trick 1b60ad6644 Revert 129383. It causes some targets to hit a scheduler assert.
llvm-svn: 129385
2011-04-12 20:14:07 +00:00
Andrew Trick c5dd24a542 PreRA scheduler heuristic fixes: VRegCycle, TokenFactor latency.
UnitsSharePred was a source of randomness in the scheduler: node
priority depended on the queue data structure. I rewrote the recent
VRegCycle heuristics to completely replace the old heuristic without
any randomness. To make these heuristic adjustments to node latency work,
I also needed to do something a little more reasonable with TokenFactor. I
gave it zero latency to its consumers and always schedule it as low as
possible.

llvm-svn: 129383
2011-04-12 19:54:36 +00:00
Jakob Stoklund Olesen c70b697a40 Create new intervals for isolated blocks during region splitting.
This merges the behavior of splitSingleBlocks into splitAroundRegion, so the
RS_Region and RS_Block register stages can be coalesced. That means the leftover
intervals after region splitting go directly to spilling instead of a second
pass of per-block splitting.

llvm-svn: 129379
2011-04-12 19:32:53 +00:00
Jakob Stoklund Olesen 0840f50b76 Add SplitKit API to query and select the current interval being worked on.
This makes it possible to target multiple registers in one pass.

llvm-svn: 129374
2011-04-12 18:11:31 +00:00
Jakob Stoklund Olesen 68e84581c5 Fix a bug in RegAllocBase::addMBBLiveIns() where a basic block could accidentally be skipped.
llvm-svn: 129373
2011-04-12 18:11:28 +00:00
Devang Patel 4547a9e658 Remove dead typedef.
llvm-svn: 129368
2011-04-12 17:43:12 +00:00
Devang Patel 5eb4319dba Refactor CompileUnit into a separate header.
llvm-svn: 129367
2011-04-12 17:40:32 +00:00
Eric Christopher c37833625a Fix typo.
llvm-svn: 129334
2011-04-12 00:48:08 +00:00
Jakob Stoklund Olesen 507992e909 Reuse live interval union between functions. This saves a bit of compile time
when compiling many small functions.

llvm-svn: 129321
2011-04-11 23:57:14 +00:00
Nick Lewycky 0f85789800 Just because a GlobalVariable's initializer is [N x { i32, void ()* }] doesn't
mean that it has to be a ConstantArray of ConstantStructs. We might have
ConstantAggregateZero, at either level, so don't crash on that.

Also, semi-deprecate the sentinel value. The linker isn't aware of sentinels, so
we end up with the two lists appended, each with their "sentinels" on them.
Different parts of LLVM treated sentinels differently, so make them all just
ignore the single entry and continue on with the rest of the list.

llvm-svn: 129307
2011-04-11 22:11:20 +00:00
Jakob Stoklund Olesen 0f175ebc32 Speed up eviction by stopping collectInterferingVRegs as soon as the spill
weight limit has been exceeded.

llvm-svn: 129305
2011-04-11 21:47:01 +00:00
Bill Wendling 1e1f1c9ce1 The default of the dispatch switch statement was to branch to a BB that executed
the 'unwind' instruction. However, later on that instruction was converted into
a jump to the basic block it was located in, causing an infinite loop when we
get there.

It turns out, we get there if the _Unwind_Resume_or_Rethrow call returns (which
it's not supposed to do). It returns if it cannot find a place to unwind
to. Thus we would get what appears to be a "hang" when in reality it's just that
the EH couldn't be propagated further along.

Instead of infinitely looping (or calling `unwind', which none of our back-ends
support (it's lowered into nothing...)), call the @llvm.trap() intrinsic
instead. This may not conform to specific rules of a particular language, but
it's rather better than infinitely looping.

<rdar://problem/9175843&9233582>

llvm-svn: 129302
2011-04-11 21:32:34 +00:00
Evan Cheng ef42bea704 Look past copies when determining whether hoisting would end up inserting more copies. rdar://9266679
llvm-svn: 129297
2011-04-11 21:09:18 +00:00
Jakob Stoklund Olesen 7d05bce70c Use a faster algorithm for computing MBB live-in registers after register allocation.
LiveIntervals::findLiveInMBBs has to do a full binary search for each segment.

llvm-svn: 129292
2011-04-11 20:01:41 +00:00
Evan Cheng fe917efc8b Fix a couple of places where changes are made but not tracked.
llvm-svn: 129287
2011-04-11 18:47:20 +00:00
Jakob Stoklund Olesen f8beafe207 Don't add live ranges for sub-registers when clobbering a physical register.
Both coalescing and register allocation already check aliases for interference,
so these extra segments are only slowing us down.

This speeds up both linear scan and the greedy register allocator.

llvm-svn: 129283
2011-04-11 18:08:10 +00:00
Jakob Stoklund Olesen 4fbbe3689d Speed up LiveIntervalUnion::unify by handling end insertion specially.
This particularly helps with the initial transfer of fixed intervals.

llvm-svn: 129277
2011-04-11 15:00:44 +00:00
Jakob Stoklund Olesen bfabc494f5 Time the initial seeding of live registers
llvm-svn: 129276
2011-04-11 15:00:42 +00:00
Jakob Stoklund Olesen 96d04c8e00 Don't shrink live ranges after dead code elimination unless it is going to help.
In particular, don't repeatedly recompute the PIC base live range after rematerialization.

llvm-svn: 129275
2011-04-11 15:00:39 +00:00
Jay Foad 7c14a558fe Don't include Operator.h from InstrTypes.h.
llvm-svn: 129271
2011-04-11 09:35:34 +00:00
Chris Lattner cfe5aa65d2 Avoid excess precision issues that lead to generating host-compiler-specific code.
Switch lowering probably shouldn't be using FP for this.  This resolves PR9581.

llvm-svn: 129199
2011-04-09 06:57:13 +00:00
Jakob Stoklund Olesen ed47ed4e80 Build the Hopfield network incrementally when splitting global live ranges.
It is common for large live ranges to have few basic blocks with register uses
and many live-through blocks without any uses. This approach grows the Hopfield
network incrementally around the use blocks, completely avoiding checking
interference for some through blocks.

llvm-svn: 129188
2011-04-09 02:59:09 +00:00
Jakob Stoklund Olesen 4ad6c160a5 Precompute interference for neighbor blocks as long as there is no interference.
This doesn't require seeking in the live interval union, so it is very cheap.

llvm-svn: 129187
2011-04-09 02:59:05 +00:00
Chris Lattner 41c80e89f3 have dag combine zap "store undef", which can be formed during call lowering
with undef arguments.

llvm-svn: 129185
2011-04-09 02:32:02 +00:00
Devang Patel 778947c203 Simplify array bound checks and clarify comments. A one-element array can have the same non-zero number as both its lower bound and its upper bound.
llvm-svn: 129170
2011-04-08 23:39:38 +00:00
Devang Patel e39647951b Do not emit DW_AT_upper_bound and DW_AT_lower_bound for an unbounded array.
If the lower bound is greater than the upper bound, consider the array unbounded.
An array is also unbounded if a non-zero lower bound is the same as the upper bound.
If the lower bound and the upper bound are both zero, the array has one element.

llvm-svn: 129156
2011-04-08 21:55:10 +00:00
Evan Cheng 74d92c1924 Change -arm-trap-func= into a non-arm specific option. Now Intrinsic::trap is lowered into a call to the specified trap function at sdisel time.
llvm-svn: 129152
2011-04-08 21:37:21 +00:00
Nick Lewycky 466d0c1f93 llvm.global_[cd]tor is defined to be either external, or appending with an array
of { i32, void ()* }. Teach the verifier to verify that, deleting copies of
checks strewn about.

llvm-svn: 129128
2011-04-08 07:30:21 +00:00
Andrew Trick 2ad0b37318 Added a check in the preRA scheduler for potential interference on an
induction variable. The preRA scheduler is unaware of induction vars,
so we look for potential "virtual register cycles" instead.

Fixes <rdar://problem/8946719> Bad scheduling prevents coalescing

llvm-svn: 129100
2011-04-07 19:54:57 +00:00
Jakob Stoklund Olesen 64beb47783 Recompute hasPHIKill flags when shrinking live intervals.
PHI values may be deleted, causing the flags to be wrong. This fixes PR9616.

llvm-svn: 129092
2011-04-07 18:43:14 +00:00
Jakob Stoklund Olesen 994c16833c Avoid moving iterators when the previous block was just visited.
llvm-svn: 129081
2011-04-07 17:27:50 +00:00
Jakob Stoklund Olesen 1c0db0fd21 Prefer multiplications to divisions.
llvm-svn: 129080
2011-04-07 17:27:48 +00:00
Jakob Stoklund Olesen 6d2bbc1c20 Extract SpillPlacement::addLinks for handling the special transparent blocks.
llvm-svn: 129079
2011-04-07 17:27:46 +00:00
Evan Cheng b7c9c407f9 Remove dead code. rdar://9221736.
llvm-svn: 129044
2011-04-07 00:56:37 +00:00
Jakob Stoklund Olesen 8ce2f43694 Also account for the spill code that would be inserted in live-through blocks with interference.
llvm-svn: 129030
2011-04-06 21:32:41 +00:00
Jakob Stoklund Olesen 81439a83f4 Abort the constraint calculation early when all positive bias is lost.
Without any positive bias, there is nothing for the spill placer to do. It will
spill everywhere.

llvm-svn: 129029
2011-04-06 21:32:38 +00:00
Jakob Stoklund Olesen 6895b87dfe Keep track of the number of positively biased nodes when adding constraints.
If there are no positive nodes, the algorithm can be aborted early.

llvm-svn: 129021
2011-04-06 19:14:00 +00:00
Jakob Stoklund Olesen 36b5d8a698 Break the spill placement algorithm into three parts: prepare, addConstraints, and finish.
This will allow us to abort the algorithm early if it is determined to be futile.

llvm-svn: 129020
2011-04-06 19:13:57 +00:00
Jakob Stoklund Olesen f3b2dcc74d Oops. Scary.
llvm-svn: 128986
2011-04-06 04:07:14 +00:00
Jakob Stoklund Olesen bf91c4e85e Analyze blocks with uses separately from live-through blocks without uses.
About 90% of the relevant blocks are live-through without uses, and the only
information required about them is their number. This saves memory and enables
later optimizations that need to look at only the use-blocks.

llvm-svn: 128985
2011-04-06 03:57:00 +00:00
Jakob Stoklund Olesen 858afbb6ac Sign error
llvm-svn: 128963
2011-04-05 23:43:16 +00:00
Jakob Stoklund Olesen 5c482cd38f Don't crash when a value is defined after the last split point.
llvm-svn: 128962
2011-04-05 23:43:14 +00:00
Jakob Stoklund Olesen 30b5473d82 Permit blocks to branch directly to a landing pad.
Treat the landing pad as a normal successor when that happens.

llvm-svn: 128961
2011-04-05 23:43:11 +00:00
Devang Patel 9f738849ab Add support to encode function's template parameters.
llvm-svn: 128947
2011-04-05 22:52:06 +00:00
Jakob Stoklund Olesen 6aa0fbf4c0 Run LiveDebugVariables in RegAllocBasic and RegAllocGreedy.
llvm-svn: 128935
2011-04-05 21:40:37 +00:00
Devang Patel d4e20eacf0 Refactor.
llvm-svn: 128929
2011-04-05 21:08:24 +00:00
Bob Wilson 6c20b88173 Add an assertion instead of crashing when the scavenger goes past the end
of a basic block.

llvm-svn: 128925
2011-04-05 20:44:15 +00:00
Jakob Stoklund Olesen 18fd84c79a When dead code elimination removes all but one use, try to fold the single def into the remaining use.
Rematerialization can leave single-use loads behind that we might as well fold whenever possible.

llvm-svn: 128918
2011-04-05 20:20:26 +00:00
Devang Patel 651d06e036 Do not emit empty name.
llvm-svn: 128914
2011-04-05 20:14:13 +00:00
Jakob Stoklund Olesen 76ad3debab Ensure all defs referring to a virtual register are marked dead by addRegisterDead().
There can be multiple defs for a single virtual register when they are defining
sub-registers.

The missing <dead> flag was stopping the inline spiller from eliminating dead
code after rematerialization.

llvm-svn: 128888
2011-04-05 16:53:50 +00:00
Rafael Espindola 7dd4d6e2e8 Print visibility info for external variables.
llvm-svn: 128887
2011-04-05 15:51:32 +00:00
Jakob Stoklund Olesen fe6e07fd8a Use std::unique instead of a SmallPtrSet to ensure unique instructions in UseSlots.
This allows us to always keep the smaller slot for an instruction which is what
we want when a register has early clobber defines.

Drop the UsingInstrs set and the UsingBlocks map. They are no longer needed.

llvm-svn: 128886
2011-04-05 15:18:18 +00:00
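
As a rough analog of the sort-and-unique idea described in this commit (the container and field names below are hypothetical, not LLVM's UseSlots), sorting by (instruction, slot) and then dropping later entries for the same instruction keeps exactly the smaller slot:

  // Illustrative sketch only. std::unique keeps the first element of each
  // group of entries with the same instruction, i.e. the smaller slot.
  #include <algorithm>
  #include <utility>
  #include <vector>

  using UseSlot = std::pair<const void *, unsigned>; // (instruction, slot index)

  void uniquifyUseSlots(std::vector<UseSlot> &Slots) {
    std::sort(Slots.begin(), Slots.end());
    Slots.erase(std::unique(Slots.begin(), Slots.end(),
                            [](const UseSlot &A, const UseSlot &B) {
                              return A.first == B.first; // same instruction
                            }),
                Slots.end());
  }
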
Jakob Stoklund Olesen d93b0e3ced Stop precomputing last split points, query the SplitAnalysis cache on demand.
llvm-svn: 128875
2011-04-05 04:20:29 +00:00
Jakob Stoklund Olesen 50b2db8a02 Cache the fairly expensive last split point computation and provide a fast
inlined path for the common case.

Most basic blocks don't contain a call that may throw, so the last split point
is simply the first terminator.

llvm-svn: 128874
2011-04-05 04:20:27 +00:00
Bill Wendling dd4dcd549b Revamp the SjLj "dispatch setup" intrinsic.
It needed to be moved closer to the setjmp statement, because the code directly
after the setjmp needs to know about values that are on the stack. Also, the
'bitcast' of the function context was causing a dead load. This wouldn't be too
horrible, except that at -O0 it wasn't optimized out, and because it wasn't
using the correct base pointer (if there is a VLA), it would try to access a
value from a garbage address.
<rdar://problem/9130540>

llvm-svn: 128873
2011-04-05 01:37:43 +00:00
Stuart Hastings ad68c93a2d Revert 123704; it broke threaded LLVM.
llvm-svn: 128868
2011-04-05 00:37:28 +00:00
Jakob Stoklund Olesen 2e85396509 Allow coalescing with reserved physregs in certain cases:
When a virtual register has a single value that is defined as a copy of a
reserved register, permit that copy to be joined. These virtual register are
usually copies of the stack pointer:

  %vreg75<def> = COPY %ESP; GR32:%vreg75
  MOV32mr %vreg75, 1, %noreg, 0, %noreg, %vreg74<kill>
  MOV32mi %vreg75, 1, %noreg, 8, %noreg, 0
  MOV32mi %vreg75<kill>, 1, %noreg, 4, %noreg, 0
  CALLpcrel32 ...

Coalescing these virtual registers early decreases register pressure.
Previously, they were coalesced by RALinScan::attemptTrivialCoalescing after
register allocation was completed.

The lower register pressure causes the mcinst-lowering-cmp0.ll test case to fail
because it depends on linear scan spilling a particular register.

I am deleting 2008-08-05-SpillerBug.ll because it is counting the number of
instructions emitted, and its revision history shows the 'correct' count being
edited many times.

llvm-svn: 128845
2011-04-04 21:00:03 +00:00
Jakob Stoklund Olesen 8de5ca72e3 Extract physreg joining policy to a separate method.
llvm-svn: 128844
2011-04-04 20:59:59 +00:00
Jakob Stoklund Olesen 8933907b51 Stop caching basic block index ranges now that SlotIndexes can keep up.
llvm-svn: 128821
2011-04-04 15:32:15 +00:00
Jakob Stoklund Olesen 956ae3da41 Delete leftover data members.
llvm-svn: 128820
2011-04-04 15:32:11 +00:00
Jakob Stoklund Olesen ca26e0acbb Use InterferenceCache in RegAllocGreedy.
llvm-svn: 128765
2011-04-02 06:03:38 +00:00
Jakob Stoklund Olesen 91cbcaf957 Add an InterferenceCache class for caching per-block interference ranges.
When the greedy register allocator is splitting multiple global live ranges, it
tends to look at the same interference data many times. The InterferenceCache
class caches queries for unaltered LiveIntervalUnions.

llvm-svn: 128764
2011-04-02 06:03:35 +00:00
Jakob Stoklund Olesen 36171288ce Use basic block numbers as indexes when mapping slot index ranges.
This is more compact and faster than using DenseMap.

llvm-svn: 128763
2011-04-02 06:03:31 +00:00
Cameron Zwarich 8c7bbc09e2 Add a RemoveFromWorklist method to DCI. This is needed to do some complicated
transformations in target-specific DAG combines without causing DAGCombiner to
delete the same node twice. If you know of a better way to avoid this (see my
next patch for an example), please let me know.

llvm-svn: 128758
2011-04-02 02:40:26 +00:00
Evan Cheng 8b1bca1998 Add comments.
llvm-svn: 128730
2011-04-01 19:57:01 +00:00
Evan Cheng 8d68ebd42a Assign node order numbers to results of call instruction lowering. This should improve src line debug info when sdisel is used. rdar://9199118
llvm-svn: 128728
2011-04-01 19:42:22 +00:00
Evan Cheng bd76679700 Issue libcalls __udivmod*i4 / __divmod*i4 for div / rem pairs.
rdar://8911343

llvm-svn: 128696
2011-04-01 00:42:02 +00:00
Jakob Stoklund Olesen 6e597dc8e7 The basic register allocator must also use the inline spiller.
It is using a trivial rewriter that doesn't know how to insert spill code
requested by the standard spiller.

llvm-svn: 128688
2011-03-31 23:02:17 +00:00
Jakob Stoklund Olesen e6e6750670 Don't completely eliminate identity copies that also modify super register liveness.
Turn them into noop KILL instructions instead. This lets the scavenger know when
super-registers are killed and defined.

llvm-svn: 128645
2011-03-31 17:55:25 +00:00
Jakob Stoklund Olesen 561cea0480 Allow kill flags on two-address instructions. They are harmless.
llvm-svn: 128643
2011-03-31 17:52:41 +00:00
Jakob Stoklund Olesen 9a78835414 Mark all uses as <undef> when joining a copy.
This way, shrinkToUses() will ignore the instruction that is about to be
deleted, and we avoid leaving invalid live ranges that SplitKit doesn't like.

Fix a misunderstanding in MachineVerifier about <def,undef> operands. The
<undef> flag is valid on def operands where it has the same meaning as <undef>
on a use operand. It only applies to sub-register defines which also read the
full register.

llvm-svn: 128642
2011-03-31 17:23:25 +00:00
Devang Patel e0cbe31ebb Remove dead code.
llvm-svn: 128639
2011-03-31 16:53:49 +00:00
Jakob Stoklund Olesen 2ee5a0fc7f Fix bug found by valgrind.
llvm-svn: 128634
2011-03-31 15:14:11 +00:00
NAKAMURA Takumi 41f32c7127 lib/CodeGen/LiveIntervalAnalysis.cpp: [PR9590] Don't use std::pow(float,float) here.
The real "powf()" may not be available on some hosts (while it is on others).
For consistency, std::pow(double,double) may be called instead.
Otherwise, precision issues might give us unstable regalloc and stack coloring.

llvm-svn: 128629
2011-03-31 12:11:33 +00:00
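
A minimal illustration of the consistency point above: calling std::pow with double arguments selects the double overload on every host instead of depending on a float overload backed by powf(). This is a generic sketch, not the LiveIntervalAnalysis code, and the exponent is arbitrary.

  #include <cmath>
  #include <cstdio>

  int main() {
    float Freq = 12.0f;
    // Force the double overload; every host computes the same value.
    double Weight = std::pow(static_cast<double>(Freq), 3.0);
    std::printf("%.17g\n", Weight);
    return 0;
  }
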
Jakob Stoklund Olesen ae044c06bf Pick a conservative register class when creating a small live range for remat.
The rematerialized instruction may require a more constrained register class
than the register being spilled. In the test case, the spilled register has been
inflated to the DPR register class, but we are rematerializing a load of the
ssub_0 sub-register which only exists for DPR_VFP2 registers.

The register class is reinflated after spilling, so the conservative choice is
only temporary.

llvm-svn: 128610
2011-03-31 03:54:44 +00:00
Jakob Stoklund Olesen ae917a3740 Fix evil VirtRegRewriter bug.
The rewriter can keep track of multiple stack slots in the same register if they
happen to have the same value. When an instruction modifies a stack slot by
defining a register that is mapped to a stack slot, other stack slots in that
register are no longer valid.

This is a very rare problem, and I don't have a simple test case. I get the
impression that VirtRegRewriter knows it is about to be deleted, inventing a
last opaque problem.

<rdar://problem/9204040>

llvm-svn: 128562
2011-03-30 18:14:07 +00:00
Jakob Stoklund Olesen 69129256dd Teach VirtRegRewriter about the new virtual register numbers. No functional change.
llvm-svn: 128561
2011-03-30 18:14:04 +00:00
Jay Foad 52131344a2 Remove PHINode::reserveOperandSpace(). Instead, add a parameter to
PHINode::Create() giving the (known or expected) number of operands.

llvm-svn: 128537
2011-03-30 11:28:46 +00:00
Jay Foad e0938d8a87 (Almost) always call reserveOperandSpace() on newly created PHINodes.
llvm-svn: 128535
2011-03-30 11:19:20 +00:00
Jakob Stoklund Olesen dd9a2ecef7 Treat clones the same as their origin.
When DCE clones a live range because it separates into connected components,
make sure that the clones enter the same register allocator stage as the
register they were cloned from.

For instance, clones may be split even when they were created during spilling.
Other registers created during spilling are not candidates for splitting or even
(re-)spilling.

llvm-svn: 128524
2011-03-30 02:52:39 +00:00
Jim Grosbach 1900c73a97 Tidy up. 80 columns and trailing whitespace.
llvm-svn: 128504
2011-03-29 23:20:22 +00:00
Jakob Stoklund Olesen e991f728d6 Recompute register class and hint for registers created during spilling.
The spill weight is not recomputed for an unspillable register - it stays infinite.

llvm-svn: 128490
2011-03-29 21:20:19 +00:00
Jakob Stoklund Olesen 0ed9ebca58 Remember to use the correct register when rematerializing for snippets.
llvm-svn: 128469
2011-03-29 17:47:02 +00:00
Jakob Stoklund Olesen add79c6abf Run dead code elimination immediately after rematerialization.
This may eliminate some uses of the spilled registers, and we don't want to
insert reloads for that.

llvm-svn: 128468
2011-03-29 17:47:00 +00:00
Bill Wendling dd1cf3279e Inline check that's used only once.
llvm-svn: 128465
2011-03-29 17:12:55 +00:00
Bill Wendling fb63d55fe8 Rework the logic (and removing the bad check for an unreachable block) so that
the FailBB dominator is correctly calculated. Believe it or not, there isn't a
functionality change here.

llvm-svn: 128455
2011-03-29 07:28:52 +00:00
Bill Wendling 220c9f045b Don't try to add stack protector logic to a dead basic block. It messes up
dominator information.

llvm-svn: 128452
2011-03-29 05:15:48 +00:00
Jakob Stoklund Olesen 12877b8a15 Handle the special case when all uses follow the last split point.
llvm-svn: 128450
2011-03-29 03:12:04 +00:00
Jakob Stoklund Olesen d8af5298d1 Properly enable rematerialization when spilling after live range splitting.
The instruction to be rematerialized may not be the one defining the register
that is being spilled. The traceSiblingValue() function sees through sibling
copies to find the remat candidate.

llvm-svn: 128449
2011-03-29 03:12:02 +00:00
Bill Wendling 96f962fdff In some cases, the "fail BB dominator" may be null after the BB was split (and
becomes reachable when before it wasn't). Check to make sure that it's not null
before trying to use it.

llvm-svn: 128434
2011-03-28 23:02:18 +00:00
Daniel Dunbar 3e2b335903 Integrated-As: Add support for setting the AllowTemporaryLabels flag via
integrated-as.

llvm-svn: 128431
2011-03-28 22:49:19 +00:00
Jakob Stoklund Olesen bd6b86e489 Amend debug output.
llvm-svn: 128398
2011-03-27 22:49:23 +00:00
Jakob Stoklund Olesen 28d79cdeab Drop interference reassignment in favor of eviction.
The reassignment phase was able to move interference with a higher spill weight,
but it didn't happen very often and it was fairly expensive.

The existing interference eviction picks up the slack.

llvm-svn: 128397
2011-03-27 22:49:21 +00:00
Jakob Stoklund Olesen e466345675 Use individual register classes when spilling snippets.
The main register class may have been inflated by live range splitting, so that
register class is not necessarily valid for the snippet instructions.

Use the original register class for the stack slot interval.

llvm-svn: 128351
2011-03-26 22:16:41 +00:00
Benjamin Kramer 355ce07425 Turn SelectionDAGBuilder::GetRegistersForValue into a local function.
It couldn't be used outside of the file because SDISelAsmOperandInfo
is local to SelectionDAGBuilder.cpp. Making it a static function avoids
a weird linkage dance.

llvm-svn: 128342
2011-03-26 16:35:10 +00:00
Jakob Stoklund Olesen 9a624fa993 Collect and coalesce DBG_VALUE instructions before emitting the function.
Correctly terminate the range of register DBG_VALUEs when the register is
clobbered or when the basic block ends.

The code is now ready to deal with variables that are sometimes in a register
and sometimes on the stack. We just need to teach emitDebugLoc to say 'stack
slot'.

llvm-svn: 128327
2011-03-26 02:19:36 +00:00
Jakob Stoklund Olesen 1886a4c823 Emit fewer labels for debug info and stop emitting .loc directives for DBG_VALUEs.
The .loc directives don't need labels; that is a leftover from when we created
line number info manually.

Instructions following a DBG_VALUE can share its label since the DBG_VALUE
doesn't produce any code.

llvm-svn: 128284
2011-03-25 17:20:59 +00:00
Andrew Trick 3bd8b7a388 Fix for -pre-RA-sched=source.
Yet another case of unchecked NULL node (for physreg copy).
May fix PR9509.

llvm-svn: 128266
2011-03-25 06:40:55 +00:00
Nick Lewycky d73218e4a3 No functionality change. Fix up some whitespace and switch out "" for '' when
printing a single character.

llvm-svn: 128256
2011-03-25 06:04:26 +00:00
Jakob Stoklund Olesen a1e3156ebd Ignore special ARM allocation hints for unexpected register classes.
Add an assertion to linear scan to prevent it from allocating registers outside
the register class.

<rdar://problem/9183021>

llvm-svn: 128254
2011-03-25 01:48:18 +00:00
Devang Patel e01b75cb89 Keep track of directory name and fix regression caused by Rafael's patch r119613.
A better approach would be to move source id handling inside MC.

llvm-svn: 128233
2011-03-24 20:30:50 +00:00
Eli Friedman 4c192305bf PR9535: add support for splitting and scalarizing vector ISD::FP_ROUND.
Also cleaning up some duplicated code while I'm here.

llvm-svn: 128176
2011-03-23 22:18:48 +00:00
Andrew Trick 13acae040c Ensure that def-side physreg copies are scheduled above any other uses
so the scheduler can't create new interferences on the copies
themselves. Prior to this fix the scheduler could get stuck in a loop
creating copies.
Fixes PR9509.

llvm-svn: 128164
2011-03-23 20:42:39 +00:00
Andrew Trick a8846e0540 whitespace
llvm-svn: 128163
2011-03-23 20:40:18 +00:00
Jakob Stoklund Olesen a87d80cdca Don't coalesce identical DBG_VALUE instructions prematurely.
Each of these instructions may have a RegsClobberInsn entry that can't be
ignored. Consecutive ranges are coalesced later when DwarfDebug::emitDebugLoc
merges entries.

llvm-svn: 128155
2011-03-23 18:37:30 +00:00
Jakob Stoklund Olesen b993f6394f Notify the delegate before removing dead values from a live interval.
The register allocator needs to know when the range shrinks.

llvm-svn: 128145
2011-03-23 04:43:16 +00:00
Jakob Stoklund Olesen f7eb955046 Allow the allocation of empty live ranges that have uses.
Empty ranges may represent undef values.

llvm-svn: 128144
2011-03-23 04:32:51 +00:00
Jakob Stoklund Olesen 710656d7b3 Dump the register map before rewriting.
llvm-svn: 128143
2011-03-23 04:32:49 +00:00
Andrew Trick b1fd328581 Added block number and name to isel debug output.
I'm tired of doing this manually for each checkout.
If anyone knows a better way to debug isel for non-trivial tests, feel
free to revert and let me know how to do it.

llvm-svn: 128132
2011-03-23 01:38:28 +00:00
Jakob Stoklund Olesen ec0ac3ca40 Reapply r128045 and r128051 with fixes.
This will extend the ranges of debug info variables in registers until they are
clobbered.

Fix 1: Don't mistake DBG_VALUE instructions referring to incoming arguments on
the stack with DBG_VALUE instructions referring to variables in the frame
pointer. This fixes the gdb test-suite failure.

Fix 2: Don't trace through copies to physical registers setting up call
arguments. These registers are call clobbered, and the source register is more
likely to be a callee-saved register that can be extended through the call
instruction.

llvm-svn: 128114
2011-03-22 22:33:08 +00:00
Andrew Trick b0f98bb5e9 Revert r128045 and r128051, debug info enhancements.
Temporarily reverting these to see if we can get llvm-objdump to link. Hopefully this is not the problem.

llvm-svn: 128097
2011-03-22 19:18:42 +00:00
Jakob Stoklund Olesen c6f4af028d Clear map after use.
This is likely to fix the segfault in llvm-gcc-x86_64-darwin10-cross-mingw32.

llvm-svn: 128051
2011-03-22 01:03:24 +00:00
Jakob Stoklund Olesen 9c057ee440 Don't emit 'DBG_VALUE %noreg, ...' to terminate user variable ranges.
These ranges get completely jumbled by the post-ra scheduler, and it is not
really reasonable to expect it to make sense of them.

Instead, teach DwarfDebug to notice when user variables in registers are
clobbered, and terminate the ranges there.

llvm-svn: 128045
2011-03-22 00:21:41 +00:00
Eric Christopher 1b4b1e559a Grammar-o.
llvm-svn: 128004
2011-03-21 18:06:21 +00:00
Bill Wendling 00f0cddfd4 We need to pass the TargetMachine object to the InstPrinter if we are printing
the alias of an InstAlias instead of the thing being aliased, because we need to
know the features that are valid for an InstAlias.

This is part of a work-in-progress.

llvm-svn: 127986
2011-03-21 04:13:46 +00:00
Jakob Stoklund Olesen 35502423de Process all dead defs after rematerializing during splitting.
llvm-svn: 127973
2011-03-20 19:46:23 +00:00
Jakob Stoklund Olesen e55003fb04 Also eliminate redundant spills downstream of inserted reloads.
This can happen when multiple sibling registers are spilled after live range
splitting.

llvm-svn: 127965
2011-03-20 05:44:58 +00:00
Jakob Stoklund Olesen 39488642d3 Change an argument to a LiveInterval instead of a register number to save some redundant lookups.
llvm-svn: 127964
2011-03-20 05:44:55 +00:00
Jakob Stoklund Olesen ccacd0df19 Replace a broken LiveInterval::MergeValueInAsValue() with something simpler.
llvm-svn: 127960
2011-03-19 23:02:49 +00:00
Jakob Stoklund Olesen 8698507fe1 Add debug output.
llvm-svn: 127959
2011-03-19 23:02:47 +00:00
Evan Cheng b1f3b4989f Minor code re-structuring.
llvm-svn: 127952
2011-03-19 17:03:16 +00:00
Nadav Rotem e7a101ccab Add support for legalizing UINT_TO_FP of vectors on platforms which do
not have native support for this operation (such as X86).
The legalized code uses two vector INT_TO_FP operations and is faster
than scalarizing.

llvm-svn: 127951
2011-03-19 13:09:10 +00:00
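
One common way an unsigned-to-FP conversion can be assembled from signed conversions, shown here in scalar form for clarity; whether this matches the exact sequence the legalizer emits is not claimed.

  #include <cstdint>
  #include <cstdio>

  // Split the value into halves that are non-negative as signed integers,
  // convert each half, then recombine: two int-to-fp conversions in total.
  double uintToDouble(uint32_t X) {
    double Hi = static_cast<double>(static_cast<int32_t>(X >> 16));    // always >= 0
    double Lo = static_cast<double>(static_cast<int32_t>(X & 0xFFFF)); // always >= 0
    return Hi * 65536.0 + Lo;
  }

  int main() {
    std::printf("%.1f\n", uintToDouble(0xFFFFFFFFu)); // prints 4294967295.0
    return 0;
  }
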
Stuart Hastings 12d5312622 Reapply 127939 since Daniel fixed the breakage. <rdar://problem/9012638>
llvm-svn: 127944
2011-03-19 02:42:31 +00:00
Stuart Hastings 08b4daa191 Revert 127939. <rdar://problem/9012638>
llvm-svn: 127943
2011-03-19 02:33:56 +00:00
Stuart Hastings 83d4a28d1f Revise r126127 to address Daniel's comments. <rdar://problem/9012638>
llvm-svn: 127939
2011-03-19 01:32:01 +00:00
Jim Grosbach 7b162490fd Beginnings of MC-JIT code generation.
Proof-of-concept code that code-gens a module to an in-memory MachO object.
This will be hooked up to a run-time dynamic linker library (see: llvm-rtdyld
for similarly conceptual work for that part) which will take the compiled
object and link it together with the rest of the system, providing back to the
JIT a table of available symbols which will be used to respond to the
getPointerTo*() queries.

llvm-svn: 127916
2011-03-18 22:48:41 +00:00
Jakob Stoklund Olesen 816f5f4c2a Extend live debug values down the dominator tree by following copies.
The llvm.dbg.value intrinsic refers to SSA values, not virtual registers, so we
should be able to extend the range of a value by tracking that value through
register copies. This greatly improves the debug value tracking for function
arguments that for some reason are copied to a second virtual register at the
end of the entry block.

We only extend the debug value range where its register is killed. All original
llvm.dbg.value locations are still respected.

Copies from physical registers are ignored. That should not be a problem since
the entry block already adds DBG_VALUE instructions for the virtual registers
holding the function arguments.

llvm-svn: 127912
2011-03-18 21:42:19 +00:00
Jakob Stoklund Olesen 27320cb864 Hoist spills when the same value is known to be in less loopy sibling registers.
Stack slot real estate is virtually free compared to registers, so it is
advantageous to spill earlier even though the same value is now kept in both a
register and a stack slot.

Also eliminate redundant spills by extending the stack slot live range
underneath reloaded registers.

This can trigger a dead code elimination, removing copies and even reloads that
were only feeding spills.

llvm-svn: 127868
2011-03-18 04:23:06 +00:00
Jakob Stoklund Olesen fdc09941f2 Accept instructions that read undefined values.
This is not supposed to happen, but I have seen the x86 rematter getting
confused when rematerializing partial redefs.

llvm-svn: 127857
2011-03-18 03:06:04 +00:00
Jakob Stoklund Olesen c099dde918 Be more accurate about the slot index reading a register when dealing with defs
and early clobbers.

Assert when trying to find an undefined value.

llvm-svn: 127856
2011-03-18 03:06:02 +00:00
Benjamin Kramer cfcea12fe2 BuildUDIV: If the divisor is even we can simplify the fixup of the multiplied value by introducing an early shift.
This allows us to compile "unsigned foo(unsigned x) { return x/28; }" into
	shrl	$2, %edi
	imulq	$613566757, %rdi, %rax
	shrq	$32, %rax
	ret

instead of
	movl    %edi, %eax
	imulq   $613566757, %rax, %rcx
	shrq    $32, %rcx
	subl    %ecx, %eax
	shrl    %eax
	addl    %ecx, %eax
	shrl    $4, %eax

on x86_64

llvm-svn: 127829
2011-03-17 20:39:14 +00:00
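
A quick sanity check of the identity behind the shorter sequence above, using the constants from the commit message: the early shift turns the problem into x/28 = (x>>2)/7, and 613566757 is ceil(2^32/7). This is an illustration of the arithmetic, not the DAG combine itself.

  #include <cassert>
  #include <cstdint>

  // mulhi32(x >> 2, 613566757) should equal x / 28 for every uint32_t x.
  uint32_t divBy28(uint32_t X) {
    return static_cast<uint32_t>((static_cast<uint64_t>(X >> 2) * 613566757u) >> 32);
  }

  int main() {
    for (uint64_t X = 0; X <= 0xFFFFFFFFull; X += 12345) // sampled check
      assert(divBy28(static_cast<uint32_t>(X)) == static_cast<uint32_t>(X) / 28);
    return 0;
  }
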
Jakob Stoklund Olesen 8630840c30 Dead code elimination may separate the live interval into multiple connected components.
I have convinced myself that it can only happen when a phi value dies. When it
happens, allocate new virtual registers for the components.

llvm-svn: 127827
2011-03-17 20:37:07 +00:00
Cameron Zwarich 2ef0c69df1 Move more logic into getTypeForExtArgOrReturn.
llvm-svn: 127809
2011-03-17 14:53:37 +00:00
Cameron Zwarich 34e7b3f77e Rename getTypeForExtendedInteger() to getTypeForExtArgOrReturn().
llvm-svn: 127807
2011-03-17 14:21:56 +00:00
Jakob Stoklund Olesen 315b42c354 Rewrite instructions as part of ConnectedVNInfoEqClasses::Distribute.
llvm-svn: 127779
2011-03-17 00:23:45 +00:00
Jakob Stoklund Olesen e14b2b226f Add a LiveRangeEdit delegate callback before shrinking a live range.
The register allocator needs to adjust its live interval unions when that happens.

llvm-svn: 127774
2011-03-16 22:56:16 +00:00
Jakob Stoklund Olesen c738c96519 Erase virtual registers that are unused after DCE.
llvm-svn: 127773
2011-03-16 22:56:13 +00:00
Jakob Stoklund Olesen e29d63e98a Tag cached interference with a user-provided tag instead of the virtual register number.
The live range of a virtual register may change which invalidates the cached
interference information.

llvm-svn: 127772
2011-03-16 22:56:11 +00:00
Jakob Stoklund Olesen 557a82c099 Clarify debugging output.
llvm-svn: 127771
2011-03-16 22:56:08 +00:00
Cameron Zwarich ac106273d4 The x86-64 ABI says that a bool is only guaranteed to be sign-extended to a byte
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not
generate incorrect code.

This just fixes the zext at the return. We still insert an i32 ZextAssert when
reading a function's arguments, but it is followed by a truncate and another i8
ZextAssert so it is not optimized.

llvm-svn: 127766
2011-03-16 22:20:18 +00:00
Cameron Zwarich d1ad9bc277 Don't recompute something that we already have in a local variable.
llvm-svn: 127764
2011-03-16 22:20:07 +00:00
Daniel Dunbar fd95b016fb Revert r127757, "Patch to fix a dwarf relocation problem on ARM. One-line fix
plus the test where it used to break.", which broke Clang self-host of a
Debug+Asserts compiler, on OS X.

llvm-svn: 127763
2011-03-16 22:16:39 +00:00
Renato Golin a3aeafeb35 Patch to fix a dwarf relocation problem on ARM. One-line fix plus the test where it used to break.
llvm-svn: 127757
2011-03-16 21:05:52 +00:00
Jakob Stoklund Olesen a0d5ec10d1 Trace back through sibling copies to hoist spills and find rematerializable defs.
After live range splitting, an original value may be available in multiple
registers. Tracing back through the registers containing the same value, find
the best place to insert a spill, determine if the value has already been
spilled, or discover a reaching def that may be rematerialized.

This is only the analysis part. The information is not used for anything yet.

llvm-svn: 127698
2011-03-15 21:13:25 +00:00
Jakob Stoklund Olesen 32210de3e4 Preserve both isPHIDef and isDefByCopy bits when copying parent values.
llvm-svn: 127697
2011-03-15 21:13:22 +00:00
Evan Cheng e4b8ac9fef Add a peephole optimization to optimize pairs of bitcasts. e.g.
v2 = bitcast v1
...
v3 = bitcast v2
...
   = v3
=>
v2 = bitcast v1
...
   = v1
if v1 and v3 are in the same register class.

Bitcasts between i32 and fp (and others) are often not no-ops since they
are in different register classes. These bitcast instructions are often
left because they are in different basic blocks and cannot be
eliminated by dag combine.

rdar://9104514

llvm-svn: 127668
2011-03-15 05:13:13 +00:00
Evan Cheng c5c2cfa381 sext(undef) = 0, because the top bits will all be the same.
zext(undef) = 0, because the top bits will be zero.

llvm-svn: 127649
2011-03-15 02:22:10 +00:00
Bill Wendling 5c25a92011 There are some situations which can cause the URoR hack to infinitely recurse
and then go kablooie. The problem was that it was tracking the PHI nodes anew
each time it entered this function. But it didn't need to. And because the recursion
didn't know that a PHINode was visited before, it would go ahead and call
itself.

There is a testcase, but unfortunately it's too big to add. This problem will go
away with the EH rewrite.
<rdar://problem/8856298>

llvm-svn: 127640
2011-03-15 01:03:17 +00:00
Jakob Stoklund Olesen 59a549b7ec Place context in member variables instead of passing around pointers.
Use the opportunity to get rid of the trailing underscore variable names.

llvm-svn: 127618
2011-03-14 20:57:14 +00:00
Jakob Stoklund Olesen a00bab24c2 Rename members to match LLVM naming conventions more closely.
Remove the unused reserved_ bit vector, no functional change intended.

This doesn't break 'svn blame', this file really is all my fault.

llvm-svn: 127607
2011-03-14 19:56:43 +00:00
Evan Cheng 37139edc8c BIT_CONVERT has been renamed to BITCAST.
llvm-svn: 127600
2011-03-14 18:19:52 +00:00
Evan Cheng d2f3b01797 Minor optimization. sign-ext/anyext of undef is still undef.
llvm-svn: 127598
2011-03-14 18:15:55 +00:00
Jakob Stoklund Olesen e1539cc5b6 Now that we are deleting unused live intervals during allocation, pointers may be reused.
Use the virtual register number as a cache tag instead. They are not reused.

llvm-svn: 127561
2011-03-13 01:29:32 +00:00
Jakob Stoklund Olesen 43a87501b3 Tell the register allocator about new unused virtual registers.
This allows the allocator to free any resources used by the virtual register,
including physical register assignments.

llvm-svn: 127560
2011-03-13 01:23:11 +00:00
Duncan Sands b847bf547b Speculatively revert commit 127478 (jsjodin) in an attempt to fix the
llvm-gcc-i386-linux-selfhost and llvm-x86_64-linux-checks buildbots.
The original log entry:
Remove optimization emitting a reference instead of a label difference, since
it can create more relocations. Removed isBaseAddressKnownZero method,
because it is no longer used.

llvm-svn: 127540
2011-03-12 13:07:37 +00:00
Jakob Stoklund Olesen e77005ef88 Include snippets in the live stack interval.
llvm-svn: 127530
2011-03-12 04:25:36 +00:00
Jakob Stoklund Olesen a86595e06b Spill multiple registers at once.
Live range splitting can create a number of small live ranges containing only a
single real use. Spill these small live ranges along with the large range they
are connected to with copies. This enables memory operand folding and maximizes
the spill to fill distance.

Work in progress with known bugs.

llvm-svn: 127529
2011-03-12 04:17:20 +00:00
Jakob Stoklund Olesen dae1dc1f01 That's it, I am declaring this a failure of the C++03 STL.
There are too many compatibility problems with using mixed types in
std::upper_bound, and I don't want to spend 110 lines of boilerplate setting up
a call to a 10-line function. Binary search is not /that/ hard to implement
correctly.

I tried terminating the binary search with a linear search, but that actually
made the algorithm slower, contrary to my expectation. Most live intervals have
fewer than 4 segments. The early test against endIndex() does pay off, and this version is
25% faster than plain std::upper_bound().

llvm-svn: 127522
2011-03-12 01:50:35 +00:00
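
A minimal sketch of a hand-rolled upper-bound search with an early end test, in the spirit of what this commit describes; the segment type and field names are illustrative, not LLVM's LiveInterval.

  #include <cstddef>

  struct Segment { unsigned Start, End; }; // half-open [Start, End)

  // Return the segment containing Pos, or nullptr. The early test against the
  // last End plays the role of the endIndex() check mentioned above.
  const Segment *find(const Segment *Begin, const Segment *End, unsigned Pos) {
    if (Begin == End || Pos >= End[-1].End)
      return nullptr;
    std::size_t Lo = 0, Hi = static_cast<std::size_t>(End - Begin);
    while (Lo < Hi) { // find the first segment with End > Pos
      std::size_t Mid = Lo + (Hi - Lo) / 2;
      if (Begin[Mid].End <= Pos)
        Lo = Mid + 1;
      else
        Hi = Mid;
    }
    return Begin[Lo].Start <= Pos ? Begin + Lo : nullptr;
  }
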
Cameron Zwarich 4d7d728594 Fix the GCC test suite issue exposed by r127477, which was caused by stack
protector insertion not working correctly with unreachable code. Since that
revision was rolled out, this test doesn't actually fail before this fix.

llvm-svn: 127497
2011-03-11 21:51:56 +00:00
Owen Anderson 66443c034d Teach FastISel to support register-immediate-immediate instructions.
llvm-svn: 127496
2011-03-11 21:33:55 +00:00
Jan Sjödin f3f78583f9 Remove optimization emitting a reference instead of a label difference, since it can create more relocations. Removed isBaseAddressKnownZero method, because it is no longer used.
llvm-svn: 127478
2011-03-11 19:37:02 +00:00
Andrew Trick 710d5da306 Replace -dag-chain-limit flag with constant. It has survived a release cycle without being touched, so no longer needs to pollute the hidden-help text.
llvm-svn: 127468
2011-03-11 17:46:59 +00:00
John Wiegley 8559f5914c Fix use of CompEnd predicate to be standards conforming
The existing CompEnd predicate does not define a strict weak order as required
by the C++03 standard; therefore, its use as a predicate to std::upper_bound
is invalid. For a discussion of this issue, see
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#270

This patch replaces the asymmetrical comparison with an iterator adaptor that
achieves the same effect while being strictly standard-conforming by ensuring
an apples-to-apples comparison.

llvm-svn: 127462
2011-03-11 08:54:34 +00:00
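
The patch itself uses an iterator adaptor; as a different, standards-clean way of expressing the same search without any mixed-type comparison, std::partition_point only needs a unary predicate on the element type. The types below are illustrative, not LLVM's.

  #include <algorithm>
  #include <vector>

  struct Range { unsigned Start, End; };

  // Ranges is assumed sorted by End, so the predicate partitions it.
  // Returns the first range whose End is greater than Pos, or nullptr.
  const Range *firstEndingAfter(const std::vector<Range> &Ranges, unsigned Pos) {
    auto It = std::partition_point(Ranges.begin(), Ranges.end(),
                                   [Pos](const Range &R) { return R.End <= Pos; });
    return It == Ranges.end() ? nullptr : &*It;
  }
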
Evan Cheng adb9c03e41 Avoid replacing the value of a directly stored load with the stored value if the load is indexed. rdar://9117613.
llvm-svn: 127440
2011-03-11 00:48:56 +00:00
Cameron Zwarich 7930407339 Add an option to disable critical edge splitting in PHIElimination.
llvm-svn: 127398
2011-03-10 05:59:17 +00:00
Jakob Stoklund Olesen 4d6eafa138 Change the Spiller interface to take a LiveRangeEdit reference.
This makes it possible to register delegates and get callbacks when the spiller
edits live ranges.

llvm-svn: 127389
2011-03-10 01:51:42 +00:00
Jakob Stoklund Olesen c6cc485051 Make SpillIs an optional pointer. Avoid creating a bunch of temporary SmallVectors.
llvm-svn: 127388
2011-03-10 01:21:58 +00:00
Evan Cheng b4c6a34415 Re-commit 127368 and 127371. They are exonerated.
llvm-svn: 127380
2011-03-10 00:16:32 +00:00
Evan Cheng d4b3f8e009 Revert 127368 and 127371 for now.
llvm-svn: 127376
2011-03-09 23:53:17 +00:00
Evan Cheng ca9a936332 Change the definition of TargetRegisterInfo::getCrossCopyRegClass to be more
flexible.

If it returns a register class that's different from the input, then that's the
register class used for cross-register class copies.
If it returns a register class that's the same as the input, then no cross-
register class copies are needed (normal copies would do).
If it returns null, then it's not at all possible to copy registers of the
specified register class.

llvm-svn: 127368
2011-03-09 22:47:38 +00:00
Jakob Stoklund Olesen d0db705256 Make physreg coalescing independent on the number of uses of the virtual register.
The damage done by physreg coalescing only depends on the number of instructions
the extended physreg live range covers. This fixes PR9438.

The heuristic is still luck-based, and physreg coalescing really should be
disabled completely. We need a register allocator with better hinting support
before that is possible.

Convert a test to FileCheck and force spilling by inserting an extra call. The
previous spilling behavior was dependent on misguided physreg coalescing
decisions.

llvm-svn: 127351
2011-03-09 19:27:06 +00:00
Andrew Trick 072ed2ee0d Improve pre-RA-sched register pressure tracking for duplicate operands.
This helps cases like 2008-07-19-movups-spills.ll, but doesn't have an obvious impact on benchmarks.

llvm-svn: 127347
2011-03-09 19:12:43 +00:00
Benjamin Kramer b2e4d84305 Fix typo, make helper static.
llvm-svn: 127335
2011-03-09 16:19:12 +00:00
Benjamin Kramer 84fccb64c3 Remove unused virtual dtor.
llvm-svn: 127331
2011-03-09 14:20:28 +00:00
Matt Beaumont-Gay df72652fd0 Add a virtual dtor to Delegate to silence -Wnon-virtual-dtor
llvm-svn: 127311
2011-03-09 04:02:15 +00:00
Jakob Stoklund Olesen 8e089640e0 Add a LiveRangeEdit::Delegate protocol.
This will we used for keeping register allocator data structures up to date
while LiveRangeEdit is trimming live intervals.

llvm-svn: 127300
2011-03-09 00:57:29 +00:00
Jakob Stoklund Olesen 06b72e338a Delete dead code.
llvm-svn: 127295
2011-03-09 00:07:39 +00:00
Jakob Stoklund Olesen ea5ebfed15 Delete dead code after rematerializing.
LiveRangeEdit::eliminateDeadDefs() will eventually be used by coalescing,
splitting, and spilling for dead code elimination. It can delete chains of dead
instructions as long as there are no dependency loops.

llvm-svn: 127287
2011-03-08 22:46:11 +00:00
Jakob Stoklund Olesen 880e0b7760 Fix the build for MSVC 9 whose upper_bound() wants to compare elements in the sorted array.
Patch by Olaf Krzikalla!

llvm-svn: 127264
2011-03-08 19:37:54 +00:00
Eric Christopher 7238cba180 Fix some latent bugs if the nodes are unschedulable. We'd gotten away
with this before since none of the register tracking or nightly tests
had unschedulable nodes.

This should probably be refixed with a special default Node that just
returns some "don't touch me" values.

Fixes PR9427

llvm-svn: 127263
2011-03-08 19:35:47 +00:00
Oscar Fuentes a28879b824 Revert "Make a comparator's argument `const'. This fixes the build for
MSVC 9."

The "fix" was meaningless.

This reverts commit r127245.

llvm-svn: 127260
2011-03-08 19:26:21 +00:00
Benjamin Kramer b8ca01fff5 Reduce vector reallocations.
llvm-svn: 127254
2011-03-08 17:28:36 +00:00
Oscar Fuentes 6ec5983a0c Make a comparator's argument `const'. This fixes the build for MSVC 9.
llvm-svn: 127245
2011-03-08 13:52:07 +00:00
Andrew Trick 52b3e38a1f Further improvements to pre-RA-sched=list-ilp.
This change uses the MaxReorderWindow for both height and depth, which
tends to limit the negative effects of high register pressure.

llvm-svn: 127203
2011-03-08 01:51:56 +00:00
Jakob Stoklund Olesen 71c380f6c7 Let shrinkToUses optionally return a list of now dead machine instructions.
llvm-svn: 127192
2011-03-07 23:29:10 +00:00
Jakob Stoklund Olesen 27f942fa60 Make the UselessRegs argument optional in the LiveRangeEdit constructor.
llvm-svn: 127181
2011-03-07 22:42:16 +00:00
Cameron Zwarich df61694417 Move getRegPressureLimit() from TargetLoweringInfo to TargetRegisterInfo.
llvm-svn: 127175
2011-03-07 21:56:36 +00:00
Jakob Stoklund Olesen ac32d8a691 Handle the special case of registers being redefined by early-clobber defs.
In this case, the value needs to be available at the load index instead of the
normal use index.

llvm-svn: 127167
2011-03-07 18:56:16 +00:00
Owen Anderson cd526fa15e Use the correct LHS type when determining the legalization of a shift's RHS type.
llvm-svn: 127163
2011-03-07 18:29:47 +00:00
Eric Christopher 9cb33deebf Typo.
llvm-svn: 127131
2011-03-06 21:13:45 +00:00
NAKAMURA Takumi 0d8150f279 lib/CodeGen/AsmPrinter/CMakeLists.txt: Fix CMake build, following up to r127099.
llvm-svn: 127114
2011-03-06 00:13:15 +00:00
Andrew Trick dd01732e63 Disable a couple of experimental heuristics to get the best results from the current implementation of -pre-RA-sched=list-ilp.
llvm-svn: 127113
2011-03-06 00:03:32 +00:00
Anton Korobeynikov a7ec2dcefd First rudimentary support for ARM EHABI: print the exception table in "text mode".
llvm-svn: 127099
2011-03-05 18:43:15 +00:00
Anton Korobeynikov 65cff414b6 Add FrameSetup MI flags
llvm-svn: 127098
2011-03-05 18:43:04 +00:00
Jakob Stoklund Olesen 27e0a4ab86 Work around a coalescer bug.
The coalescer can in very rare cases leave too large live intervals around after
rematerializing cheap-as-a-move instructions.

Linear scan doesn't really care, but live range splitting gets very confused
when a live range is killed by a ghost instruction.

I will fix this properly in the coalescer after 2.9 branches.

llvm-svn: 127096
2011-03-05 18:33:49 +00:00
Andrew Trick 25cedf3fe4 Be explicit with abs(). Visual Studio workaround.
llvm-svn: 127075
2011-03-05 10:29:25 +00:00
Andrew Trick d7f4c21684 Fix for -sched-high-latency-cycles in sched=list-ilp mode.
llvm-svn: 127071
2011-03-05 09:18:16 +00:00
Andrew Trick b8390b7a25 Missing comment.
llvm-svn: 127068
2011-03-05 08:04:11 +00:00
Andrew Trick 641e2d4f8c Increased the register pressure limit on x86_64 from 8 to 12
regs. This is the only change in this checkin that may affect the
default scheduler. With better register tracking and heuristics, it
doesn't make sense to artificially lower the register limit so much.

Added -sched-high-latency-cycles and X86InstrInfo::isHighLatencyDef to
give the scheduler a way to account for div and sqrt on targets that
don't have an itinerary. It currently defaults to 10 (the actual
number doesn't matter much), but only takes effect on non-default
schedulers: list-hybrid and list-ilp.

Added several heuristics that can be individually disabled for the
non-default sched=list-ilp mode. This helps us determine how much
better we can do on a given benchmark than the default
scheduler. Certain compute intensive loops run much faster in this
mode with the right set of heuristics, and it doesn't seem to have
much negative impact elsewhere. Not all of the heuristics are needed,
but we still need to experiment to decide which should be disabled by
default for sched=list-ilp.

llvm-svn: 127067
2011-03-05 08:00:22 +00:00
Jakob Stoklund Olesen 1a9b66c752 Rework the global split cost calculation.
The global cost is the sum of block frequencies for spill code that must be
inserted because preferences weren't met.

llvm-svn: 127062
2011-03-05 03:28:51 +00:00
Jakob Stoklund Olesen 4b598e156a Compute the constraints for global live range splitting from an interference pattern.
This simplifies the code and makes it faster too.

The interference patterns are saved for each candidate register. It will be
reused for actually executing the split. Work in progress.

llvm-svn: 127054
2011-03-05 01:10:31 +00:00
Jim Grosbach dc55428d7a Teach the register scavenger to take subregs into account when finding a free register.
llvm-svn: 127049
2011-03-05 00:20:19 +00:00
Eric Christopher 403269894f Improve readability with some whitespace!
llvm-svn: 127043
2011-03-04 22:47:12 +00:00
Jakob Stoklund Olesen 05a2f5178e Extract a method. No functional change.
llvm-svn: 127040
2011-03-04 22:11:11 +00:00
Jakob Stoklund Olesen d7e1bb80a9 Go back to comparing spill weights when deciding if interference can be evicted.
It gives better results. Sometimes, a live range can be large and still have
high spill weight. Such a range should not be spilled.

llvm-svn: 127036
2011-03-04 21:32:50 +00:00
Jakob Stoklund Olesen b8e6fdc23c Renumber slot indexes locally when possible.
Initially, slot indexes are quad-spaced. There is room for inserting up to 3
new instructions between the original instructions.

When we run out of indexes between two instructions, renumber locally using
double-spaced indexes. The original quad-spacing means that we catch up quickly,
and we only have to renumber a handful of instructions to get a monotonic
sequence. This is much faster than renumbering the whole function as we did
before.

llvm-svn: 127023
2011-03-04 19:43:38 +00:00
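
A conceptual sketch of the catch-up step described above; the real SlotIndexes data structures are richer, so treat this plain vector of numbers as purely illustrative.

  #include <cstddef>
  #include <vector>

  // Indexes start out quad-spaced (0, 4, 8, ...). After insertions exhaust a
  // gap, renumber from position P with double spacing until the new numbers
  // fall back below the old ones, leaving the rest of the list untouched.
  void renumberLocally(std::vector<unsigned> &Idx, std::size_t P) {
    unsigned Next = Idx[P - 1];
    for (std::size_t I = P; I < Idx.size(); ++I) {
      Next += 2;
      if (Idx[I] > Next)
        break; // caught up with the original quad-spaced numbering
      Idx[I] = Next;
    }
  }
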
Jakob Stoklund Olesen 348d8e8ba6 Number SlotIndexes uniformly without looking at the number of defs on each instruction.
You can't really predict how many indexes will be needed from the number of
defs, so let's keep it simple.

Also remove an extra empty index that was inserted after each basic block. It
was intended for live-out ranges, but it was never used that way.

llvm-svn: 127014
2011-03-04 18:51:09 +00:00
Jakob Stoklund Olesen b88f6adf0f Add SlotIndex statistics.
llvm-svn: 127007
2011-03-04 18:08:29 +00:00
Jakob Stoklund Olesen d4f788952d Tweak debug output. No functional changes.
llvm-svn: 127006
2011-03-04 18:08:26 +00:00
Duncan Sands 6bd1044222 Revert commit 126684 "Use the correct shift amount type". It is only the correct
type after type legalization has completed.  Before then it may simply not be big
enough to hold the shift amount, particularly on x86 which uses a very small type
for shifts (this issue broke stuff in the past which is why LegalizeTypes carefully
uses a large type for shift amounts).

llvm-svn: 127000
2011-03-04 14:28:59 +00:00
Andrew Trick c88b7ecb88 Minor pre-RA-sched fixes and cleanup.
Fix the PendingQueue, then disable it because it's not required for
the current schedulers' heuristics.
Fix the logic for the unused list-ilp scheduler.

llvm-svn: 126981
2011-03-04 02:03:45 +00:00
Jakob Stoklund Olesen c332e727b4 Precompute block frequencies, pow() isn't free.
llvm-svn: 126975
2011-03-04 00:58:40 +00:00
Jakob Stoklund Olesen 1a69e23300 Use an IndexedMap instead of a DenseMap for the live-out cache.
This speeds up updateSSA() so it only accounts for 5% of the live range
splitting time.

llvm-svn: 126972
2011-03-04 00:15:36 +00:00
Bill Wendling f3658f3872 There are times when the landing pad won't have a call to 'eh.selector' in
it. It's been assumed until now that it would be in its immediate
successor. However, this isn't necessarily the case. It could be in one of its
successor's successors.

Modify the code to more thoroughly check for an 'eh.selector' call in
successors. It only looks at a successor if we get there as a result of an
unconditional branch.

Testcase ObjC/exceptions-4.m in r126968.

llvm-svn: 126969
2011-03-03 23:14:05 +00:00
Eli Friedman d8a555bb3b Revert r123908; the code in question is completely untested and wrong.
llvm-svn: 126964
2011-03-03 22:33:23 +00:00
Devang Patel 63b3e76370 Fix typo.
llvm-svn: 126962
2011-03-03 21:49:41 +00:00
Devang Patel 34a7ab400e Fix thinko in previous check-in.
Add comment.

llvm-svn: 126959
2011-03-03 20:08:10 +00:00
Devang Patel 4ab660b077 llvm::Function argument count is not a good indicator of how many arguments the function has at the source level. If we need more space, just resize the vector conservatively. This vector is only used once per function.
llvm-svn: 126957
2011-03-03 20:02:02 +00:00
Jim Grosbach 7e200664f6 Allow a target to choose whether to prefer the scavenger emergency spill slot
be next to the frame pointer or the stack pointer.

llvm-svn: 126956
2011-03-03 20:01:52 +00:00
Jakob Stoklund Olesen bfdbc11554 Renumber slot indexes uniformly instead of spacing according to the number of defs.
There are probably much larger speedups to be had by renumbering locally instead
of looping over the whole function. For now, the greedy register allocator is
25% faster.

llvm-svn: 126926
2011-03-03 06:29:01 +00:00
Jakob Stoklund Olesen 4ec757d588 Represent sentinel slot indexes with a null pointer.
This is much faster than using a pointer to a ManagedStatic object accessed with
a function call. The greedy register allocator is 5% faster overall just from
the SlotIndex default constructor savings.

llvm-svn: 126925
2011-03-03 05:40:04 +00:00
Jakob Stoklund Olesen 67a84d08ce Avoid comparing invalid slot indexes, and assert that it doesn't happen.
The SlotIndex created by the default construction does not represent a position
in the function, and it doesn't make sense to compare it to other indexes.

llvm-svn: 126924
2011-03-03 05:18:19 +00:00
Jakob Stoklund Olesen a04dddf7a1 Avoid comparing invalid slot indexes.
llvm-svn: 126922
2011-03-03 04:23:52 +00:00
Jakob Stoklund Olesen 9a6382fc81 Cache basic block bounds instead of asking SlotIndexes::getMBBRange all the time.
This speeds up the greedy register allocator by 15%.
DenseMap is not as fast as one might hope.

llvm-svn: 126921
2011-03-03 03:41:29 +00:00
Jakob Stoklund Olesen c96019886c Change the SplitEditor interface so that a single instance can be shared for multiple splits.
llvm-svn: 126912
2011-03-03 01:29:13 +00:00
Jakob Stoklund Olesen 5ea0712e96 Only run the updateSSA loop when we have actually seen multiple values.
When only a single value has been seen, new PHIDefs are never needed.

llvm-svn: 126911
2011-03-03 01:29:10 +00:00
Jakob Stoklund Olesen d58c8d12ab Fix PHI handling in LiveIntervals::shrinkToUses().
We need to wait until we meet a PHIDef in its defining block before resurrecting
PHIKills in the predecessors.

This should unbreak the llvm-gcc-build-x86_64-darwin10-x-mingw32-x-armeabi bot.

llvm-svn: 126905
2011-03-03 00:20:51 +00:00
Bob Wilson 24b3ba5990 Avoid exponential blow-up when printing DAGs.
David Greene changed CannotYetSelect() to print the full DAG including multiple
copies of operands reached through different paths in the DAG.  Unfortunately
this blows up exponentially in some cases.  The depth limit of 100 is way too
high to prevent this -- I'm seeing a message string of 150MB with a depth of
only 40 in one particularly bad case, even though the DAG has less than 200
nodes.  Part of the problem is that the printing code is following chain
operands, so if you fail to select an operation with a chain, the printer will
follow all the chained operations back to the entry node.

llvm-svn: 126899
2011-03-02 23:38:06 +00:00
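
A generic illustration of why shared operands make naive DAG printing exponential, and how remembering visited nodes keeps the output linear in the number of nodes; this is a toy node type, not SelectionDAG's printer.

  #include <cstdio>
  #include <set>
  #include <vector>

  struct Node { int Id; std::vector<const Node *> Ops; };

  // Without the Seen set, a node reachable along k different paths is printed
  // k times, and stacked diamonds multiply that factor at every level.
  void printDAG(const Node *N, std::set<const Node *> &Seen) {
    if (!Seen.insert(N).second) { std::printf("(#%d)", N->Id); return; }
    std::printf("(%d", N->Id);
    for (const Node *Op : N->Ops)
      printDAG(Op, Seen);
    std::printf(")");
  }
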
Jakob Stoklund Olesen 815196ca19 Turn the Edit member into a pointer so it can change dynamically.
No functional change.

llvm-svn: 126898
2011-03-02 23:31:50 +00:00
Jakob Stoklund Olesen 503b143a62 Transfer simply defined values directly without recomputing liveness and SSA.
Values that map to a single new value in a new interval after splitting don't
need new PHIDefs, and if the parent value was never rematerialized the live
range will be the same.

llvm-svn: 126894
2011-03-02 23:05:19 +00:00
Jakob Stoklund Olesen 3648263a3e Extract a method. No functional change.
llvm-svn: 126893
2011-03-02 23:05:16 +00:00
Stuart Hastings 6b4007dec6 Can't introduce floating-point immediate constants after legalization.
Radar 9056407.

llvm-svn: 126864
2011-03-02 19:36:30 +00:00
Cameron Zwarich daed6f6c39 Fix some typos.
llvm-svn: 126829
2011-03-02 04:03:46 +00:00
Jakob Stoklund Olesen 48af8923c5 Move extendRange() into SplitEditor and delete the LiveRangeMap class.
Extract the updateSSA() method from the too long extendRange().

LiveOutCache can be shared among all the new intervals since there is at most
one of the new ranges live out from each basic block.

llvm-svn: 126818
2011-03-02 01:59:34 +00:00
Nick Lewycky 68faa2dbbe Quiet a compiler warning about unused variable 'ExtVNI'.
llvm-svn: 126815
2011-03-02 01:43:30 +00:00
Evan Cheng 15fed7af3c Catch more cases where 2-address pass should 3-addressify instructions. rdar://9002648.
llvm-svn: 126811
2011-03-02 01:08:17 +00:00
Jakob Stoklund Olesen b02376198b Rename mapValue to extendRange because that is its function now.
Simplify the signature - The return value and ParentVNI are no longer needed.

llvm-svn: 126809
2011-03-02 00:49:28 +00:00
Jakob Stoklund Olesen f3c6e9211c Simplify LiveIntervals::shrinkToUses() a bit by using the new extendInBlock().
llvm-svn: 126806
2011-03-02 00:33:03 +00:00
Jakob Stoklund Olesen 81eb18df34 Fix typo.
llvm-svn: 126805
2011-03-02 00:33:01 +00:00
Jakob Stoklund Olesen 9e326a8413 Move LiveIntervalMap::extendTo into LiveInterval itself.
This method could probably be used by LiveIntervalAnalysis::shrinkToUses, and
now it can use extendIntervalEndTo() which coalesces ranges.

llvm-svn: 126803
2011-03-02 00:06:15 +00:00
Jakob Stoklund Olesen 2b09bed518 Delete dead code.
llvm-svn: 126801
2011-03-01 23:24:19 +00:00
Jakob Stoklund Olesen 8ef91fc870 Move the value map from LiveIntervalMap to SplitEditor.
The value map is currently not used, all values are 'complex mapped' and
LiveIntervalMap::mapValue is used to dig them out.

This is the first step in a series changes leading to the removal of
LiveIntervalMap. Its data structures can be shared among all the live intervals
created by a split, so it is wasteful to create a copy for each.

llvm-svn: 126800
2011-03-01 23:14:53 +00:00