We want LiveVariables clients to use methods rather than accessing the
getVarInfo data structure directly. That way it will be possible to change the
LiveVariables representation.
llvm-svn: 90240
This means that well-connected blocks are copy-coalesced before less-connected blocks. Well-connected blocks are more difficult to
coalesce because their intervals are more complicated, so handling them first gives a greater chance of success.
llvm-svn: 90194
This helps us avoid silly copies when rematting values that are copied to a physical register:
leaq _.str44(%rip), %rcx
movq %rcx, %rsi
call _strcmp
becomes:
leaq _.str44(%rip), %rsi
call _strcmp
The coalescer will not touch the movq because that would tie down the physical register.
llvm-svn: 90163
branches even when optimizing for code size. Unless we find evidence to the
contrary in the future, the special treatment for indirect branches does not
have a significant effect on code size, and performance still matters with -Os.
llvm-svn: 90147
for all the processors where I have tried it, and even when it might not help
performance, the cost is quite low. The opportunities for duplicating
indirect branches are limited by other factors so code size does not change
much due to tail duplicating indirect branches aggressively.
llvm-svn: 90144
running tail duplication when doing branch folding for if-conversion, and
we also want to be able to run tail duplication earlier to fix some
reg alloc problems. Move the CanFallThrough function from BranchFolding
to MachineBasicBlock so that it can be shared by TailDuplication.
llvm-svn: 89904
Make tail duplication of indirect branches much more aggressive (for targets
that indicate that it is profitable), based on further experience with
this transformation. I compiled 3 large applications with and without
this more aggressive tail duplication and measured minimal changes in code
size. ("size" on Darwin seems to round the text size up to the nearest
page boundary, so I can only say that any code size increase was less than
one 4k page.) Radar 7421267.
llvm-svn: 89814
Note that "hasDotLocAndDotFile"-style debug info was already broken;
people wanting this functionality should implement it in the
AsmPrinter/DwarfWriter code.
llvm-svn: 89711
tell debug info which base register to use to reference a frame index on a
per-index basis. This is useful, for example, in the presence of dynamic
stack realignment when local variables are indexed via the stack pointer and
stack-based arguments via the frame pointer.
llvm-svn: 89620
When splitting a critical edge, the registers live through the edge are:
- Used in a PHI instruction, or
- Live out from the predecessor, and
- Live in to the successor.
This allows the coalescer to eliminate even more phi joins.
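For illustration only, that condition amounts to the following (a standalone sketch; the flags would come from the successor's PHI nodes and from LiveVariables, not from any real PHIElimination helper):

// A register is live through the split edge if the successor uses it in a
// PHI, or if it is both live out of the predecessor and live in to the
// successor.
bool liveThroughEdge(bool UsedInSuccPHI, bool LiveOutOfPred, bool LiveInToSucc) {
  return UsedInSuccPHI || (LiveOutOfPred && LiveInToSucc);
}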
llvm-svn: 89530
DIEs are created from MDNodes, which are already uniqued, and DwarfDebug already uses ValueMaps to find and reuse an existing DIE for a given MDNode.
llvm-svn: 89518
which was an expensive checks failure due to a bug in the checking. This
patch in essence reverts the original fix for PR3393, and refixes it by a
tweak to the way expensive checking is done.
llvm-svn: 89454
critical edges in PHIElimination.
This has a huge impact on regalloc performance, and we recover almost all of
the 10% compile time regression that edge splitting introduced.
llvm-svn: 89381
Add a -linearscan-skip-count argument (defaulting to 0) that tells the
allocator to remember the last N registers it allocated and skip them
when looking for a register candidate. This tends to spread out
register usage and free up post-allocation scheduling at the cost of
slightly more register pressure. The primary benefit is the ability
to backschedule reloads.
This is turned off by default.
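For illustration, a standalone sketch of the bookkeeping this implies (the names and structure are made up; the real logic lives in RegAllocLinearScan):

#include <algorithm>
#include <deque>

// Remember the last N allocated registers and skip them as candidates,
// which spreads register usage across the register file.
class RecentRegs {
  std::deque<unsigned> Recent;
  unsigned SkipCount;                 // -linearscan-skip-count
public:
  explicit RecentRegs(unsigned N) : SkipCount(N) {}
  bool shouldSkip(unsigned Reg) const {
    return std::find(Recent.begin(), Recent.end(), Reg) != Recent.end();
  }
  void recordAllocation(unsigned Reg) {
    if (SkipCount == 0) return;       // feature disabled (the default)
    Recent.push_back(Reg);
    if (Recent.size() > SkipCount)
      Recent.pop_front();
  }
};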
llvm-svn: 89356
All spiller calls in RegAllocLinearScan now go through the new Spiller interface.
The "-new-spill-framework" command line option has been removed. To use the trivial in-place spiller you should now pass "-spiller=trivial -rewriter=trivial".
(Note the trivial spiller/rewriter are only meant to serve as examples of the new in-place modification work. Enabling them will yield terrible, though hopefully functional, code).
llvm-svn: 89311
When TwoAddressInstructionPass deletes a dead instruction, make sure that all
register kills are accounted for. The 2-addr register does not get special
treatment.
llvm-svn: 89246
when LiveVariables is available.
The -split-phi-edges option is now gone, and so is the hack to disable it when using
the local register allocator. The PHIElimination pass no longer has
LiveVariables as a prerequisite - that is what broke the local allocator.
Instead we do critical edge splitting when possible - that is when
LiveVariables is available.
llvm-svn: 89213
contents of the block to be duplicated. Use this for ARM Cortex A8/9 to
be more aggressive about tail duplicating indirect branches, since it makes it
much more likely that they will be predicted in the branch target buffer.
Testcase coming soon.
llvm-svn: 89187
The local register allocator doesn't like it when LiveVariables is run.
We should also disable edge splitting under -O0, but that has to wait a bit.
llvm-svn: 89125
unnecessary. It is broken because the "isIdenticalTo" check should be
negated. If that is fixed, this code causes the CodeGen/X86/tail-opts.ll
test to fail, in the dont_merge_oddly function. And, I confirmed that the
regression is real -- the generated code is worse. As far as I can tell,
that tail-opts.ll test is checking for what this code is supposed to handle
and we're doing the right thing anyway.
llvm-svn: 89121
unconditional branches or fallthroughs. Instcombine/SimplifyCFG
should be simplifying branches with known conditions.
This fixes some problems caused by these transformations not
updating the MachineBasicBlock CFG.
llvm-svn: 89017
Have the asm printer emit a comment if an instruction is a spill or
reload and have the spiller mark copies it introduces so the asm printer
can also annotate those.
llvm-svn: 88911
When splitting an edge after a machine basic block with fall-through, we
forgot to insert a jump instruction. Fix this by calling updateTerminator() on
the fall-through block when relevant.
Also be more precise in PHIElimination::isLiveIn.
llvm-svn: 88728
The BasicBlock associated with a MachineBasicBlock does not necessarily
correspond to the code in the MBB.
Don't insert a new IR BasicBlock when splitting critical edges. We are not
supposed to modify the IR during codegen, and we should be able to do just
fine with a NULL BB.
llvm-svn: 88707
code-size win, and not when it's only likely to be code-size neutral,
such as when only a single instruction would be eliminated and a new
branch would be required.
This fixes rdar://7392894.
llvm-svn: 88692
D0<def,dead> = ...
...
= S0<use, kill>
S0<def> = ...
...
D0<def> =
The first D0 def is correctly marked dead; however, LiveVariables should have
added an implicit def of S0, or we end up with a use without a def.
llvm-svn: 88690
and don't assume that the call doesn't throw. It would be nice if there were a
way to determine which is the callee and which is a parameter. In practice, the
architectures we care about (x86 and ARM) normally have only one operand for a
call instruction.
llvm-svn: 87023
slots. The AsmPrinter will use this information to determine whether to
print a spill/reload comment.
Remove default argument values. It's too easy to pass a wrong argument
value when multiple arguments have default values. Make everything
explicit to trap bugs early.
Update all targets to adhere to the new interfaces.
llvm-svn: 87022
making it visible to clients and adding LLVM-style cast capability.
This will be used by AsmPrinter to determine when to emit spill comments
for an instruction.
llvm-svn: 87019
instead of typedefs for std::pair. This simplifies the type of
SameTails, which previously was std::vector<std::pair<std::vector<std::pair<unsigned, MachineBasicBlock *> >::iterator, MachineBasicBlock::iterator> >.
llvm-svn: 86885
tail merging support to handle more cases.
- Recognize several cases where tail merging is beneficial even when
the tail size is smaller than the generic threshold.
- Make use of MachineInstrDesc::isBarrier to help detect
non-fallthrough blocks.
- Check for and avoid disrupting fall-through edges in more cases.
llvm-svn: 86871
- Edges are split before any phis are eliminated, so the code is SSA.
- Create a proper IR BasicBlock for the split edges.
- LiveVariables::addNewBlock now has the same signature as
MachineDominatorTree::addNewBlock. The algorithm calculates the predecessor
live-out set rather than the successor live-in set.
This feature still causes some miscompilations.
llvm-svn: 86867
constant whose component type is not a legal type for the target.
(If the target ConstantPool cannot handle this type either, it has
an opportunity to merge elements. In practice any target with
8-bit bytes must support i8 *as data*). 7320806 (partial).
llvm-svn: 86751
Critical edges leading to a PHI node are split when the PHI source variable is
live out from the predecessor block. This helps the coalescer eliminate more
PHI joins.
llvm-svn: 86725
This patch forbids implicit conversion of DenseMap::const_iterator to
DenseMap::iterator which was possible because DenseMapIterator inherited
(publicly) from DenseMapConstIterator. Conversion the other way around is now
allowed, as one might expect.
The template DenseMapConstIterator is removed and the template parameter
IsConst which specifies whether the iterator is constant is added to
DenseMapIterator.
Strictly speaking, the IsConst parameter is not necessary, since the constness
can be determined from KeyT, but this is not relevant to the fix and can be
addressed later.
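For illustration, the general shape of the single-template approach, as a standalone sketch (not the actual DenseMap code):

// One iterator template with an IsConst flag: the non-const variant
// converts implicitly to the const one, but not the other way around.
template <bool B, class T, class F> struct conditional { typedef T type; };
template <class T, class F> struct conditional<false, T, F> { typedef F type; };

template <typename T, bool IsConst>
class bucket_iterator {
  typedef typename conditional<IsConst, const T, T>::type value_type;
  value_type *Ptr;
public:
  explicit bucket_iterator(value_type *P) : Ptr(P) {}
  // Allows iterator -> const_iterator; the reverse does not compile.
  bucket_iterator(const bucket_iterator<T, false> &Other) : Ptr(Other.raw()) {}
  value_type &operator*() const { return *Ptr; }
  bucket_iterator &operator++() { ++Ptr; return *this; }
  value_type *raw() const { return Ptr; }
};

typedef bucket_iterator<int, false> iterator;       // like DenseMap::iterator
typedef bucket_iterator<int, true> const_iterator;  // like DenseMap::const_iterator

With this structure the conversion rules fall out of overload resolution rather than an inheritance relationship.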
Patch by Victor Zverovich!
llvm-svn: 86636
except it doesn't care if the definitions' virtual registers differ. This is
used by machine LICM and other MI passes to perform CSE.
- Teach Thumb2InstrInfo::isIdentical() to check whether two t2LDRpci_pic
instructions are identical. Since pc-relative constantpool entries are always
different, this requires checking whether the values can actually be the same.
llvm-svn: 86328
A non-identity copy cannot be coalesced when the phi join destination register
is live at the copy site.
Also verify the condition that the PHI join source register is only used in
the PHI join. Otherwise the coalescing is invalid.
llvm-svn: 86322
The KILL pseudo-instruction may survive to the asm printer pass, just like the IMPLICIT_DEF. Print the KILL as a comment instead of just leaving a blank line in the output.
With -asm-verbose=0, a blank line is printed, like IMPLICIT_DEF.
llvm-svn: 86041
and extract_subreg as a "copy" that defines a valno.
Also fixes a typo. These two issues prevented a simple subreg coalescing from
happening before.
llvm-svn: 86022
This introduces a new pass, SlotIndexes, which is responsible for numbering
instructions for register allocation (and other clients). SlotIndexes numbering
is designed to match the existing scheme, so this patch should not cause any
changes in the generated code.
For consistency, and to avoid naming confusion, LiveIndex has been renamed
SlotIndex.
The processImplicitDefs method of the LiveIntervals analysis has been moved
into its own pass so that it can be run prior to SlotIndexes. This was
necessary to match the existing numbering scheme.
llvm-svn: 85979
an unconditional branch (possibly from tail merging), this code is
trying to redirect all of its predecessors to go directly to the branch
target, but that isn't feasible for indirect branches. The other
predecessors (that don't end with indirect branches) could theoretically
still be handled, but that is not easily done right now.
The AnalyzeBranch interface doesn't currently let us distinguish jump table
branches from indirect branches, and this code is currently handling
jump tables. To avoid punting on address-taken blocks, we would have to give
up handling jump tables. That seems like a bad tradeoff.
llvm-svn: 85975
the loop preheader. Also add instructions that are already in the preheader block
and may be common expressions of those being hoisted out. This does get a few more
instructions CSE'ed.
llvm-svn: 85799
- Be consistent when referring to MachineBasicBlocks: BB#0.
- Be consistent when referring to virtual registers: %reg1024.
- Be consistent when referring to unknown physical registers: %physreg10.
- Be consistent when referring to known physical registers: %RAX.
- Be consistent when referring to register 0: %reg0.
- Be consistent when printing alignments: align=16.
- Print jump table contents.
- Don't print host addresses, in general.
- and various other cleanups.
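A made-up fragment, just to show several of these conventions together (not copied from a real dump):

BB#0: derived from LLVM BB %entry
    %reg1024<def> = MOV64rm %RAX, 1, %reg0, 0, %reg0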
llvm-svn: 85682
previously running CodePlacementOpt. Also print headers before
each dump in -print-machineinstrs mode, so that it's clear which
dump is which.
llvm-svn: 85681
unfolding loads for hoisting. getOpcodeAfterMemoryUnfold returns the
opcode of the original operation without the load, not the load
itself; MachineLICM needs to know the operand index in order to get
the correct register class. Extend getOpcodeAfterMemoryUnfold to
return this information.
llvm-svn: 85622
bunch of associated comments, because it doesn't have anything to do
with DAGs or scheduling. This is another step in decoupling MachineInstr
emitting from scheduling.
llvm-svn: 85517
indexed via the stack pointer, even if a frame pointer is present. Update the
heuristic to place it nearest the stack pointer in that case, rather than
nearest the frame pointer.
llvm-svn: 85474
the second (store) instruction in SpillSlotToUsesMap
consistently. I don't think this matters functionally,
but it's cleaner and Evan wants it this way.
llvm-svn: 85463
to spill after all, we weren't handling 2-instruction
spill sequences correctly (PPC Altivec). We need to
remove the store in this case. Removing the other
instruction(s) would be goodness but is not needed for
correctness, and isn't done here. 7331562.
llvm-svn: 85437
use it to control tail merging when there is a tradeoff between performance
and code size. When there is only 1 instruction in the common tail, we have
been merging. That can be good for code size but is a definite loss for
performance. Now we will avoid tail merging in that case when the
optimization level is "Aggressive", i.e., "-O3". Radar 7338114.
Since the IfConversion pass invokes BranchFolding, it too needs to know
the optimization level. Note that I removed the RegisterPass instantiation
for IfConversion because it required a default constructor. If someone
wants to keep that for some reason, we can add a default constructor with
a hard-wired optimization level.
llvm-svn: 85346
machineinstr whether the aliased register is dead, rather than whether the
original register is dead. This allows it to get the correct answer when examining
an instruction like this:
CALLpcrel32 <ga:foo>, %AL<imp-def>, %EAX<imp-def,dead>
where EAX is dead but a subregister of it is still live. This fixes PR5294.
llvm-svn: 85135
bootstrapping. It's not safe to leave identity subreg_to_reg and insert_subreg
around.
- Relax register scavenging to allow use of partially "not-live" registers. It's
common for targets to operate on registers where the top bits are undef. e.g.
s0 =
d0 = insert_subreg d0<undef>, s0, 1
...
= d0
When the insert_subreg is eliminated by the coalescer, the scavenger used to
complain. The previous fix was to keep the insert_subreg around. But that's
brittle and it's overly conservative when we want to use the scavenger to
allocate registers. It's actually legal and desirable for other instructions
to use the "undef" part of d0. e.g.
s0 =
d0 = insert_subreg d0<undef>, s0, 1
...
s1 =
= s1
= d0
We probably need to add a "partial-undef" marker on machine operands so the
machine verifier would not complain.
llvm-svn: 85091
used elsewhere - an exit block is a block outside the loop branched to
from within the loop. An exiting block is a block inside the loop that
branches out.
llvm-svn: 85019
to break up CFG diamonds by banishing one of the blocks to the end of
the function, which is bad for code density and branch size.
This does pessimize MultiSource/Benchmarks/Ptrdist/yacr2, the
benchmark cited as the reason for the change; however, I've examined
the code and it looks more like a case of gaming a particular
branch than of being generally applicable.
llvm-svn: 84803
tracked. Instead of trying to manually keep track of these locations
while doing complex modifications, just recompute them when they're needed.
This fixes a bug in which the TopMBB and BotMBB were not correctly updated,
leading to invalid transformations.
llvm-svn: 84598
appropriate restore location for the spill as well as perform the actual
save and restore.
The Thumb1 target uses this to make sure R12 is not clobbered while a spilled
scavenger register is live there.
llvm-svn: 84554
stack slots and giving them different PseudoSourceValue's did not fix the
problem of post-alloc scheduling miscompiling llvm itself.
- Apply Dan's conservative workaround by assuming that non-fixed stack slots can
alias other memory locations. This means a load from spill slot #1 cannot
move above a store of spill slot #2.
- Enable post-alloc scheduling for x86 at optimization level Default and above.
llvm-svn: 84424
to be more general and understand more varieties of loops.
Teach CodePlacementOpt to reorganize the basic blocks of a loop so that
they are contiguous. This also includes a fair amount of logic for preserving
fall-through edges while doing so. This fixes a BranchFolding-ism where blocks
which can't be made to use a fall-through edge and don't conveniently fit
anywhere nearby get tossed out to the end of the function.
llvm-svn: 84295
header is just the entry block to the loop, and it needn't be at
the top of the loop in the code layout.
Remove the code that suppressed loop alignment for outer loops,
so that outer loops are aligned.
llvm-svn: 84158
so get rid of eh.selector.i64 and rename eh.selector.i32 to eh.selector.
Likewise for eh.typeid.for. This aligns us with gcc, which always uses a
32 bit value for the selector on all platforms. My understanding is that
the register allocator used to assert if the selector intrinsic size didn't
match the pointer size, and this was the reason for introducing the two
variants. However my testing shows that this is no longer the case (I
fixed some bugs in selector lowering yesterday, and some more today in the
fastisel path; these might have caused the original problems).
llvm-svn: 84106
to remat non-load instructions as loads, and the remat code now uses
the UnmodeledSideEffects flags, MachineMemOperands, and similar things
to decide which instructions are valid for rematerialization.
llvm-svn: 84060
truncating an SDValue (depending on whether the target
type is bigger or smaller than the value's type); or zero
extending or truncating it. Use it in a few places (this
seems to be a popular operation, but I only modified cases
of it in SelectionDAGBuild). In particular, the eh_selector
lowering was doing this wrong due to a repeated rather than an
inverted test; that is fixed with this change.
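For reference, the zero-extend-or-truncate flavor has roughly this shape (an illustrative sketch, not the exact helper added here):

// Zero-extend or truncate Val to DestVT, whichever the size comparison requires.
SDValue zextOrTrunc(SelectionDAG &DAG, DebugLoc dl, SDValue Val, EVT DestVT) {
  if (DestVT.bitsGT(Val.getValueType()))
    return DAG.getNode(ISD::ZERO_EXTEND, dl, DestVT, Val);
  if (DestVT.bitsLT(Val.getValueType()))
    return DAG.getNode(ISD::TRUNCATE, dl, DestVT, Val);
  return Val;                         // already the right size
}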
llvm-svn: 84027
bootstrap of FSF-style PPC, so there is some
reason to believe the original bug (which was
never analyzed) has been fixed, probably by
82266.
llvm-svn: 83871
into MachineInstrs. This is mostly just moving the code from
ScheduleDAGSDNodesEmit.cpp into a new class. This decouples MachineInstr
emitting from scheduling.
llvm-svn: 83699
is trivially rematerializable and integrate it into
TargetInstrInfo::isTriviallyReMaterializable. This way, all places that
need to know whether an instruction is rematerializable will get the
same answer.
This enables the useful parts of the aggressive-remat option by
default -- using AliasAnalysis to determine whether a memory location
is invariant, and removes the questionable parts -- rematting operations
with virtual register inputs that may not be live everywhere.
llvm-svn: 83687
While recording the beginning of a function, use scope info from the first location entry instead of just relying on the first location entry itself.
llvm-svn: 83684
to declare that they preserve other passes without needing to pull in
additional header file or library dependencies. Convert MachineFunctionPass
and CodeGenLICM to make use of this.
llvm-svn: 83555
implementations with a new MachineInstr::isInvariantLoad, which uses
MachineMemOperands and is target-independent. This brings MachineLICM
and other functionality to targets which previously lacked an
isInvariantLoad implementation.
llvm-svn: 83475
a virtual register to eliminate a frame index, it can return that register
and the constant stored there to PEI to track. When scavenging to allocate
for those registers, PEI then tracks the last-used register and value, and
if it is still available and matches the value for the next index, reuses
the existing value and removes the re-materialization instructions.
Fancier tracking and adjustment of scavenger allocations to keep more
values live for longer is possible, but not yet implemented and would likely
be better done via a different, less special-purpose, approach to the
problem.
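Roughly, the reuse check amounts to the following (every name here is hypothetical; the real bookkeeping lives in PEI):

// Reuse the previously scavenged register if it still holds the value
// needed for this frame index; otherwise scavenge afresh and remember
// what the new register holds.
if (LastScavengedReg != 0 && LastScavengedValue == NeededValue &&
    stillAvailable(LastScavengedReg)) {
  Reg = LastScavengedReg;             // the redundant remat instructions go away
} else {
  Reg = scavengeRegister(RC, MI);
  LastScavengedReg = Reg;
  LastScavengedValue = NeededValue;
}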
eliminateFrameIndex() is modified so the target implementations can return
the registers they wish to be tracked for reuse.
ARM Thumb1 implements and utilizes the new mechanism. All other targets are
simply modified to adjust for the changed eliminateFrameIndex() prototype.
llvm-svn: 83467
verbose-asm mode, print comments instead. This eliminates a non-comment
difference between verbose-asm mode and non-verbose-asm mode.
Also, factor the relevant code out of all the targets and into
target-independent code.
llvm-svn: 83392