independent of the order that isel happens to visit the dbg_declare
intrinsics. This fixes a bug in which the formal arguments were
being printed in reverse order, now that fast isel is going bottom up.
llvm-svn: 108369
LiveInterval::overlapsFrom dereferences end() if it is called on an empty
interval.
It would be reasonable to just return false - an empty interval doesn't overlap
anything, but I want to know who is doing it first.
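A minimal sketch of the kind of guard implied here (an assumption on my part; the
actual change is not quoted in this log), placed at the top of
LiveInterval::overlapsFrom():
  // Catch the offending caller instead of silently returning false.
  assert(!empty() && "overlapsFrom called on an empty interval");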
llvm-svn: 108264
they already have one.
This fixes the himenobmtxpa miscompilation on ARM.
The PostRA scheduler got confused by the double memoperand and hoisted a stack
slot load above a store to the same slot.
llvm-svn: 108219
AggressiveAntiDepBreaker should not be using getPhysicalRegisterRegClass. An
instruction might be using a register that can only be replaced with one from
a subclass of getPhysicalRegisterRegClass.
With this patch we use getMinimalPhysRegClass. This is correct, but
conservative. We should check the uses of the register and select the
largest register class that can be used in all of them.
llvm-svn: 108122
physical register can be allocated in the class of the virtual are sufficient.
I think that the test for virtual registers is more strict than it needs to be;
it should be possible to coalesce two virtual registers when the class of one
is a subclass of the other.
llvm-svn: 108118
getMinimalPhysRegClass. It was used to produce spills, and it is better to
use the most specific class if possible.
Update getLoadStoreRegOpcode to handle GR32_AD.
llvm-svn: 108115
Targets must now implement TargetInstrInfo::copyPhysReg instead. There is no
longer a default implementation forwarding to copyRegToReg.
llvm-svn: 108095
The first one was used just to call isSafeToMoveRegClassDefs. In
general, using a more specific reg class is better; in practice, only
x86 implements that method and the results are always the same.
The second one is in FindFreeRegister and is used to check if a register
is in a register class, a much more direct call to contains is better as
it should cover more cases and is faster.
llvm-svn: 108093
assert()s, switching to void-casts. Removed an unneeded Compiler.h include as
a result. There are two other uses in LLVM, but they're not due to assert()s,
so I've left them alone.
llvm-svn: 108088
correct alignment information, which simplifies ExpandRes_VAARG a bit.
The patch introduces new alignment information in TargetLoweringInfo. This is
needed since the two natural candidates cannot be used:
* The 's' in target data: If this is set to the minimal alignment of any
argument, getCallFrameTypeAlignment would return 4 for doubles on ARM for
example.
* The getTransientStackAlignment method. It is possible for an architecture to
have arguments that are less aligned than the alignment we maintain for the stack pointer.
llvm-svn: 108072
The remaining copyRegToReg calls actually check the return value (shock!), so we
cannot trivially replace them with COPY instructions.
llvm-svn: 108069
if a block is split (by a custom inserter), the insert point may be in a
different block than it was originally. This fixes 32-bit llvm-gcc
bootstrap builds, and I haven't been able to reproduce it otherwise.
llvm-svn: 108060
ScheduleDAGEmit, TwoAddressLowering, and PHIElimination.
This switches the bulk of register copies to using COPY, but many less used
copyRegToReg calls remain.
llvm-svn: 108050
- Check getBytesToPopOnReturn().
- Eschew ST0 and ST1 for return values.
- Fix the PIC base register initialization so that it doesn't ever
fail to end up at the top of the entry block.
llvm-svn: 108039
inserted in a MBB, and return an already inserted MI.
This target API change is necessary to allow foldMemoryOperand to call
storeToStackSlot and loadFromStackSlot when folding a COPY to a stack slot
reference in a target independent way.
The foldMemoryOperandImpl hook is going to change in the same way, but I'll wait
until COPY folding is actually implemented. Most targets only fold copies and
won't need to specialize this hook at all.
llvm-svn: 107991
U utils/TableGen/FastISelEmitter.cpp
--- Reverse-merging r107943 into '.':
U test/CodeGen/X86/fast-isel.ll
U test/CodeGen/X86/fast-isel-loads.ll
U include/llvm/Target/TargetLowering.h
U include/llvm/Support/PassNameParser.h
U include/llvm/CodeGen/FunctionLoweringInfo.h
U include/llvm/CodeGen/CallingConvLower.h
U include/llvm/CodeGen/FastISel.h
U include/llvm/CodeGen/SelectionDAGISel.h
U lib/CodeGen/LLVMTargetMachine.cpp
U lib/CodeGen/CallingConvLower.cpp
U lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
U lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
U lib/CodeGen/SelectionDAG/FastISel.cpp
U lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
U lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
U lib/CodeGen/SelectionDAG/InstrEmitter.cpp
U lib/CodeGen/SelectionDAG/TargetLowering.cpp
U lib/Target/XCore/XCoreISelLowering.cpp
U lib/Target/XCore/XCoreISelLowering.h
U lib/Target/X86/X86ISelLowering.cpp
U lib/Target/X86/X86FastISel.cpp
U lib/Target/X86/X86ISelLowering.h
llvm-svn: 107987
disabled and then never turned back on again. Adjust some tests, one because
this change avoids an unnecessary instruction, and the other to make it
continue testing what it was intended to test.
llvm-svn: 107941
the simplification of frame index register scavenging to not have to check
for available registers directly and instead just let scavengeRegister()
handle it.
llvm-svn: 107880
EXTRACT_SUBREG no longer appears as a machine instruction. Use COPY instead.
Add isCopy() checks in many places using isMoveInstr() and isExtractSubreg().
The isMoveInstr hook will be removed later.
llvm-svn: 107879
This target hook is intended to replace copyRegToReg entirely, but for now it
calls copyRegToReg.
Any remaining calls to copyRegToReg will be replaced by COPY instructions.
llvm-svn: 107854
(if there are any) and use the one which remains available for the longest
rather than just using the first one. This should help enable better re-use
of the loaded frame index values. rdar://7318760
llvm-svn: 107847
around everywhere, and also give it an InsertPt member, to enable isel
to operate at an arbitrary position within a block, rather than just
appending to a block.
llvm-svn: 107791
INSERT_SUBREG will now only appear in SSA machine instructions.
Fix the handling of partial redefs in ProcessImplicitDefs. This is now relevant
since partial redef COPY instructions appear.
llvm-svn: 107726
It is OK for an alias live range to overlap if there is a copy to or from the
physical register. CoalescerPair can work out if the copy is coalescable
independently of the alias.
This means that we can join with the actual destination interval instead of
using the getOrigDstReg() hack. It is no longer necessary to merge clobber
ranges into subregisters.
llvm-svn: 107695
This way *only* debug sections can be discarded, not the opposite. This looks like a copy-and-paste error from the ELF code, since there the flag has the reverse meaning ('alloc').
llvm-svn: 107658
The COPY instruction is intended to replace the target specific copy
instructions for virtual registers as well as the EXTRACT_SUBREG and
INSERT_SUBREG instructions in MachineFunctions. It won't be used in a selection
DAG.
COPY is lowered to native register copies by LowerSubregs.
llvm-svn: 107529
new basic blocks, and if used as a function argument, that can cause call frame
setup / destroy pairs to be split across a basic block boundary. That prevents
us from doing a simple assertion to check that the pairs match and alloc/
dealloc the same amount of space. Modify the assertion to only check the
amount allocated when there are matching pairs in the same basic block.
rdar://8022442
llvm-svn: 107517
- X86 unfolding should check whether the instruction being unfolded has memoperands.
If there are no memoperands, then it must assume conservative alignment. If this
would introduce an expensive sse unaligned load / store, then unfoldMemoryOperand
etc. should not unfold the instruction.
llvm-svn: 107509
PrologEpilog code, and use it to determine whether
the asm forces stack alignment or not. gcc consistently
does not do this for GCC-style asms; Apple gcc inconsistently
sometimes does it for asm blocks. There is no
convenient place to put a bit in either the SDNode or
the MachineInstr form, so I've added an extra operand
to each; unlovely, but it does allow for expansion for
more bits, should we need it. PR 5125. Some
existing testcases are affected.
The operand lists of the SDNode and MachineInstr forms
are indexed with awesome mnemonics, like "2"; I may
fix this someday, but not now. I'm not making it any
worse. If anyone is inspired I think you can find all
the right places from this patch.
llvm-svn: 107506
This allows us to recognize the common case where all uses could be
rematerialized, and no stack slot allocation is necessary.
If some values could be fully rematerialized, remove them from the live range
before allocating a stack slot for the rest.
llvm-svn: 107492
Objective-C metadata types which should be marked as "weak", but which the
linker will remove upon final linkage. However, this linkage isn't specific to
Objective-C.
For example, the "objc_msgSend_fixup_alloc" symbol is defined like this:
.globl l_objc_msgSend_fixup_alloc
.weak_definition l_objc_msgSend_fixup_alloc
.section __DATA, __objc_msgrefs, coalesced
.align 3
l_objc_msgSend_fixup_alloc:
.quad _objc_msgSend_fixup
.quad L_OBJC_METH_VAR_NAME_1
This is different from the "linker_private" linkage type, because it can't have
the metadata defined with ".weak_definition".
Currently only supported on Darwin platforms.
llvm-svn: 107433
LocalRewriter::runOnMachineFunction uses this information to mark dead spill
slots.
This means that InlineSpiller now also works for functions that spill.
llvm-svn: 107302
InlineSpiller inserts loads and spills immediately instead of deferring to
VirtRegMap. This is possible now because SlotIndexes allows instructions to be
inserted and renumbered.
This is work in progress, and is mostly a copy of TrivialSpiller so far. It
works very well for functions that don't require spilling.
llvm-svn: 107227
metadata types which should be marked as "weak", but which the linker will
remove upon final linkage. For example, the "objc_msgSend_fixup_alloc" symbol is
defined like this:
.globl l_objc_msgSend_fixup_alloc
.weak_definition l_objc_msgSend_fixup_alloc
.section __DATA, __objc_msgrefs, coalesced
.align 3
l_objc_msgSend_fixup_alloc:
.quad _objc_msgSend_fixup
.quad L_OBJC_METH_VAR_NAME_1
This is different from the "linker_private" linkage type, because it can't have
the metadata defined with ".weak_definition".
llvm-svn: 107205
A partial redefine needs to be treated like a tied operand, and the register
must be reloaded while processing use operands.
This fixes a bug where partially redefined registers were processed as normal
defs with a reload added. The reload could clobber another use operand if it was
a kill that allowed register reuse.
llvm-svn: 107193
The LowerSubregs pass needs to preserve implicit def operands attached to
EXTRACT_SUBREG instructions when it replaces those instructions with copies.
llvm-svn: 107189
of getPhysicalRegisterRegClass with it.
If we want to make a copy (or estimate its cost), it is better to use the
smallest class as more efficient operations might be possible.
llvm-svn: 107140
There are 2 changes relative to the previous version of the patch:
1) For the "simple" if-conversion case, there's no need to worry about
RemoveExtraEdges not handling an unanalyzable branch. Predicated terminators
are ignored in this context, so RemoveExtraEdges does the right thing.
This might break someday if we ever treat indirect branches (BRIND) as
predicable, but for now, I just removed this part of the patch, because
in the case where we do not add an unconditional branch, we rely on keeping
the fall-through edge to CvtBBI (which is empty after this transformation).
The change relative to the previous patch is:
@@ -1036,10 +1036,6 @@
IterIfcvt = false;
}
- // RemoveExtraEdges won't work if the block has an unanalyzable branch,
- // which is typically the case for IfConvertSimple, so explicitly remove
- // CvtBBI as a successor.
- BBI.BB->removeSuccessor(CvtBBI->BB);
RemoveExtraEdges(BBI);
// Update block info. BB can be iteratively if-converted.
2) My patch exposed a bug in the code for merging the tail of a "diamond",
which had previously never been exercised. The code was simply checking that
the tail had a single predecessor, but there was a case in
MultiSource/Benchmarks/VersaBench/dbms where that single predecessor was
neither edge of the diamond. I added the following change to check for
that:
@@ -1276,7 +1276,18 @@
// tail, add a unconditional branch to it.
if (TailBB) {
BBInfo TailBBI = BBAnalysis[TailBB->getNumber()];
- if (TailBB->pred_size() == 1 && !TailBBI.HasFallThrough) {
+ bool CanMergeTail = !TailBBI.HasFallThrough;
+ // There may still be a fall-through edge from BBI1 or BBI2 to TailBB;
+ // check if there are any other predecessors besides those.
+ unsigned NumPreds = TailBB->pred_size();
+ if (NumPreds > 1)
+ CanMergeTail = false;
+ else if (NumPreds == 1 && CanMergeTail) {
+ MachineBasicBlock::pred_iterator PI = TailBB->pred_begin();
+ if (*PI != BBI1->BB && *PI != BBI2->BB)
+ CanMergeTail = false;
+ }
+ if (CanMergeTail) {
MergeBlocks(BBI, TailBBI);
TailBBI.IsDone = true;
} else {
With these fixes, I was able to run all the SingleSource and MultiSource
tests successfully.
llvm-svn: 107110
have to be registers, per gcc documentation. This affects
the logic for determining what "g" should lower to. PR 7393.
A couple of existing testcases are affected.
llvm-svn: 107079
When an instruction has tied operands and physreg defines, we must take extra
care that the tied operands conflict with neither physreg defs nor uses.
The special treatment is given to inline asm and instructions with tied operands
/ early clobbers and physreg defines.
This fixes PR7509.
llvm-svn: 107043
if-conversion. The RemoveExtraEdges function doesn't work for blocks that
end with unanalyzable branches, so in those cases, the "extra" edges must
be explicitly removed. The CopyAndPredicateBlock and MergeBlocks methods
can also avoid copying successor edges due to branches that have already
been removed. The latter case is especially helpful when MergeBlocks is
called for handling "diamond" if-conversions, where otherwise you can end
up with some weird intermediate states in the CFG. Unfortunately I've
been unable to find cases where this cleanup actually makes a significant
difference in the code. There is one test where we manage to remove an
empty block at the end of a function. Radar 6911268.
llvm-svn: 106939
The VNInfo.kills vector was almost unused except for all the code keeping it
updated. The few places using it were easily rewritten to check for interval
ends instead.
The two new methods LiveInterval::killedAt and killedInRange are replacements.
This brings us down to 3 independent data structures tracking kills.
llvm-svn: 106905
for an "i" constraint should get lowered; PR 6309. While
this argument was passed around a lot, this is the only
place it was used, so it goes away from a lot of other
places.
llvm-svn: 106893
are dead, not just the def of this register. I.e., a register could be dead, but
its subreg isn't.
Testcase to follow with a subsequent patch.
llvm-svn: 106878
and CallInst for getting hold of the intrinsic's arguments.
Simplify along the way (at least for me this is much more legible now).
Bill, Baldrick or Anton, please review!
llvm-svn: 106838
This fixes PR7479 and PR7485. The test cases from those PRs are big, so not
included. However, PR7485 comes from self hosting on FreeBSD, so we will surely
hear about any regression.
llvm-svn: 106811
which don't have a catch-all associated with them, not just clean-ups. This fixes
the SingleSource/Benchmarks/Shootout-C++/except.cpp testcase that broke because
of my change r105902.
llvm-svn: 106772
CoalescerPair can determine if a copy can be coalesced, and which register gets
merged away. The old logic in SimpleRegisterCoalescing had evolved into
something a bit too convoluted.
This second attempt fixes some crashes that only occurred on Linux.
llvm-svn: 106769
[L]oad, [u]se, [d]ef, or [S]tore slots.
This makes it easier to see if two indices refer to the same instruction,
avoiding mental mod 4 calculations.
llvm-svn: 106766
In this case it is essential that the kill is real because the spiller will
decide to omit a spill if it thinks there is a later kill.
llvm-svn: 106751
when the condition is constant. This optimization shouldn't be
necessary, because codegen shouldn't be able to find dead control
paths that the IR-level optimizer can't find. And it's undesirable,
because it encourages bugpoint to leave "br i1 false" branches
in its output. And it wasn't updating the CFG.
I updated all the tests I could, but some tests are too reduced
and I wasn't able to meaningfully preserve them.
llvm-svn: 106748
CoalescerPair can determine if a copy can be coalesced, and which register gets
merged away. The old logic in SimpleRegisterCoalescing had evolved into
something a bit too convoluted.
llvm-svn: 106701
atomic intrinsics, either because they use locking instructions for the
atomics, or because they perform the locking directly. Add support in the
DAG combiner to fold away the fences.
llvm-svn: 106630
instructions.
This does not affect codegen much because SUBREG_TO_REG is only used by X86 and
X86 does not use the register scavenger, but it prevents verifier errors.
llvm-svn: 106583
Measurements show that it does not speed up coalescing, so there is no reason
to keep the added complexity around.
Also clean out some unused methods and static functions.
llvm-svn: 106548
opportunities. For example, this lets it emit this:
movq (%rax), %rcx
addq %rdx, %rcx
instead of this:
movq %rdx, %rcx
addq (%rax), %rcx
in the case where %rdx has subsequent uses. It's the same number
of instructions, and usually the same encoding size on x86, but
it appears faster, and in general, it may allow better scheduling
for the load.
llvm-svn: 106493
Split the code for materializing a value out of
SelectionDAGBuilder::getValue into a helper function, so that it can
be used in other ways. Add a new getNonRegisterValue function which
uses it, for use in code which doesn't want a CopyFromReg even
when FuncMap.ValueMap already has an entry for it.
llvm-svn: 106422
- This fixed a number of bugs in if-converter, tail merging, and post-allocation
scheduler. If-converter now runs branch folding / tail merging first to
maximize if-conversion opportunities.
- Also changed the t2IT instruction slightly. It now defines the ITSTATE
register which is read by instructions in the IT block.
- Added Thumb2 specific hazard recognizer to ensure the scheduler doesn't
change the instruction ordering in the IT block (since IT mask has been
finalized). It also ensures no other instructions can be scheduled between
instructions in the IT block.
This is not yet enabled.
llvm-svn: 106344
instructions, but it doesn't really understand live ranges, so the first
INSERT_SUBREG uses an implicitly defined register.
Fix it in LiveVariableAnalysis by adding the <undef> flag.
llvm-svn: 106333
entries used by llvm-gcc. *_[U]MIN and such can be added later if needed.
This enables the front ends to simplify handling of the atomic intrinsics by
removing the target-specific decision about which targets can handle the
intrinsics.
llvm-svn: 106321
so when IfConverter::CopyAndPredicateBlock checks to see if it should ignore
an instruction because it is a branch, it should not check if the branch is
predicated.
This case (when IgnoreBr is true) is only relevant from IfConvertTriangle,
where new branches are inserted after the block has been copied and predicated.
If the original branch is not removed, we end up with multiple conditional
branches (possibly conflicting) at the end of the block. Aside from any
immediate errors resulting from that, this confuses the AnalyzeBranch functions
so that the branches are not analyzable. That in turn causes the IfConverter to
think that the "Simple" pattern can be applied, and things go downhill fast
because the "Simple" pattern does _not_ apply if the block can fall through.
This is pretty fragile. If there are other degenerate cases where AnalyzeBranch
fails, but where the block may still fall through, the IfConverter should not
perform its "Simple" if-conversion. But, I don't know how to do that with the
current AnalyzeBranch interface, so for now, the best thing seems to be to
avoid creating branches that AnalyzeBranch cannot handle.
Evan, please review!
llvm-svn: 106291
switch from this:
if (TimePassesIsEnabled) {
NamedRegionTimer T(Name, GroupName);
do_something();
} else {
do_something(); // duplicate the code, this time without a timer!
}
to this:
{
NamedRegionTimer T(Name, GroupName, TimePassesIsEnabled);
do_something();
}
llvm-svn: 106285
addresses a longstanding deficiency noted in many FIXMEs scattered
across all the targets.
This effectively moves the problem up one level, replacing eleven
FIXMEs in the targets with eight FIXMEs in CodeGen, plus one path
through FastISel where we actually supply a DebugLoc, fixing Radar
7421831.
llvm-svn: 106243
for the moment. The implementation of the libcall will follow.
Currently, llvm-gcc knows when the intrinsics can be correctly handled by
the back end and only generates them in those cases, issuing libcalls directly
otherwise. That's too much coupling. The intrinsics should always be
generated and the back end decide how to handle them, be it with a libcall,
inline code, or whatever. This patch is a step in that direction.
rdar://8097623
llvm-svn: 106227
LiveVariableAnalysis was a bit picky about a register only being redefined once,
but that really isn't necessary.
Here is an example of chained INSERT_SUBREGs that we can handle now:
68 %reg1040<def> = INSERT_SUBREG %reg1040, %reg1028<kill>, 14
register: %reg1040 +[70,134:0)
76 %reg1040<def> = INSERT_SUBREG %reg1040, %reg1029<kill>, 13
register: %reg1040 replace range with [70,78:1) RESULT: %reg1040,0.000000e+00 = [70,78:1)[78,134:0) 0@78-(134) 1@70-(78)
84 %reg1040<def> = INSERT_SUBREG %reg1040, %reg1030<kill>, 12
register: %reg1040 replace range with [78,86:2) RESULT: %reg1040,0.000000e+00 = [70,78:1)[78,86:2)[86,134:0) 0@86-(134) 1@70-(78) 2@78-(86)
92 %reg1040<def> = INSERT_SUBREG %reg1040, %reg1031<kill>, 11
register: %reg1040 replace range with [86,94:3) RESULT: %reg1040,0.000000e+00 = [70,78:1)[78,86:2)[86,94:3)[94,134:0) 0@94-(134) 1@70-(78) 2@78-(86) 3@86-(94)
rdar://problem/8096390
llvm-svn: 106152
will conflict with another live range. The place which creates this scenario is
the code in X86 that lowers a select instruction by splitting the MBBs. This
eliminates the need to check from the bottom up in an MBB for live pregs.
llvm-svn: 106066
SimpleRegisterCoalescing::JoinIntervals() uses CoalescerPair to determine if a
copy is coalescable, and in very rare cases it can return true where LHS is not
live - the coalescable copy can come from an alias of the physreg in LHS.
llvm-svn: 106021
combined to an insert_subreg, i.e., where the destination register is larger
than the source. We need to check that the subregs can be composed for that
case in a symmetrical way to the case when the destination is smaller.
llvm-svn: 106004
Early clobbers defining a virtual register were first allocated to a physreg and
then processed as a physreg EC, spilling the virtreg.
This fixes PR7382.
llvm-svn: 105998
Given a copy instruction, CoalescerPair can determine which registers to
coalesce in order to eliminate the copy. It deals with all the subreg fun to
determine a tuple (DstReg, SrcReg, SubIdx) such that:
- SrcReg is a virtual register that will disappear after coalescing.
- DstReg is a virtual or physical register whose live range will be extended.
- SubIdx is 0 when DstReg is a physical register.
- SrcReg can be joined with DstReg:SubIdx.
CoalescerPair::isCoalescable() determines if another copy instruction is
compatible with the same tuple. This fixes some NEON miscompilations where
shuffles are getting coalesced as if they were copies.
The CoalescerPair class will replace a lot of the spaghetti logic in JoinCopy
later.
llvm-svn: 105997
replacing the overly conservative checks that I had introduced recently to
deal with correctness issues. This makes a pretty noticeable difference
in our testcases where reg_sequences are used. I've updated one test to
check that we no longer emit the unnecessary subreg moves.
llvm-svn: 105991
clean-up to a catch-all after inlining, take into account that there could be
filter IDs as well. The presence of filters doesn't mean that the selector catches
anything. It's just metadata information.
llvm-svn: 105872
This is a bit of a hack to make inline asm look more like call instructions.
It would be better to produce correct dead flags during isel.
llvm-svn: 105749
%reg1025 = <sext> %reg1024
...
%reg1026 = SUBREG_TO_REG 0, %reg1024, 4
into this:
%reg1025 = <sext> %reg1024
...
%reg1027 = EXTRACT_SUBREG %reg1025, 4
%reg1026 = SUBREG_TO_REG 0, %reg1027, 4
The problem here is that SUBREG_TO_REG is there to assert that an implicit zext
occurs. It doesn't insert a zext instruction. If we allow the EXTRACT_SUBREG
here, it will give us the value after the <sext>, not the original value of
%reg1024 before <sext>.
llvm-svn: 105741
register allocation.
Process all of the clobber lists at the end of the function, marking the
registers as used in MachineRegisterInfo.
This is necessary in case the calls clobber callee-saved registers (sic).
llvm-svn: 105473
replace an OpA with a widened OpB, it is possible to get new uses of OpA due to CSE
when recursively updating nodes. Since OpA has been processed, the new uses are
not examined again. The patch checks if this occurred and, if it did, updates the
new uses of OpA to use OpB.
llvm-svn: 105453
Check that all the instructions are in the same basic block, that the
EXTRACT_SUBREGs write to the same subregs that are being extracted, and that
the source and destination registers are in the same regclass. Some of
these constraints can be relaxed with a bit more work. Jakob suggested
that the loop that checks for subregs when NewSubIdx != 0 should use the
"nodbg" iterator, so I made that change here, too.
llvm-svn: 105437
registers it defines then interfere with an existing preg live range.
For instance, if we had something like these machine instructions:
BB#0
... = imul ... EFLAGS<imp-def,dead>
test ..., EFLAGS<imp-def>
jcc BB#2 EFLAGS<imp-use>
BB#1
... ; fallthrough to BB#2
BB#2
... ; No code that defines EFLAGS
jcc ... EFLAGS<imp-use>
Machine sink will come along, see that imul implicitly defines EFLAGS, but
because it's "dead", it assumes that it can move imul into BB#2. But when it
does, imul's "dead" imp-def of EFLAGS is raised from the dead (a zombie) and
messes up the condition code for the jump (and pretty much anything else which
relies upon it being correct).
The solution is to know which pregs are live going into a basic block. However,
that information isn't calculated at this point. Nor does the LiveVariables pass
take into account non-allocatable physical registers. In lieu of this, we do a
*very* conservative pass through the basic block to determine if a preg is live
coming out of it.
llvm-svn: 105387
expansion is the same as that used by LegalizeDAG.
The resulting code sucks in terms of performance/codesize on x86-32 for a
64-bit operation; I haven't looked into whether different expansions might be
better in general.
llvm-svn: 105378
spills and reloads.
This means that a partial define of a register causes a reload so the other
parts of the register are preserved.
The reload can be prevented by adding an <imp-def> operand for the full
register. This is already done by the coalescer and live interval analysis where
relevant.
llvm-svn: 105369
register updates.
These operands tell the spiller that the other parts of the partially defined
register are don't-care, and a reload is not necessary.
llvm-svn: 105361
instruction defines subregisters.
Any existing subreg indices on the original instruction are preserved or
composed with the new subreg index.
Also substitute multiple operands mentioning the original register by using the
new MachineInstr::substituteRegister() function. This is necessary because there
will soon be <imp-def> operands added to non read-modify-write partial
definitions. This instruction:
%reg1234:foo = FLAP %reg1234<imp-def>
will reMaterialize(%reg3333, bar) like this:
%reg3333:bar-foo = FLAP %reg333:bar<imp-def>
Finally, replace the TargetRegisterInfo pointer argument with a reference to
indicate that it cannot be NULL.
llvm-svn: 105358
that are too large. This causes the freebsd bootloader to be too
large apparently.
It's unclear if this should be an -Os or -Oz thing. Thoughts welcome.
llvm-svn: 105228
implementation that is correct for most targets. Tablegen will override where
needed.
Add MachineOperand::subst{Virt,Phys}Reg methods that correctly handle existing
subreg indices when substituting registers.
llvm-svn: 104985
optimization level.
This only really affects llc for now because both the llvm-gcc and clang front
ends override the default register allocator. I intend to remove that code later.
llvm-svn: 104904
implementing pop with a linear search for a "best" element. The priority
queue was a neat idea, but in practice the comparison functions depend
on dynamic information.
llvm-svn: 104718
If you have a setjmp/longjmp situation, it's possible for stack slot coloring to
reuse a stack slot before it's really dead. For instance, if we have something
like this:
1: y = g;
x = sigsetjmp(env, 0);
switch (x) {
case 1:
/* ... */
goto run;
case 0:
run:
do_run(); /* marked as "no return" */
break;
case 3:
if (...) {
/* ... */
goto run;
}
/* ... */
break;
}
2: g = y;
"y" may be put onto the stack, so the expression "g = y" is relying upon the
fact that the stack slot containing "y" isn't modified between (1) and (2). But
it can be, because of the "no return" calls in there. A longjmp might come back
with 3, modify the stack slot, and then go to case 0. And it's perfectly
acceptable to reuse the stack slot there because there's no CFG flow from case 3
to (2).
The fix is to disable certain optimizations in these situations. Ideally, we'd
disable them for all "returns twice" functions. But we don't support that
attribute. Check for "setjmp" and "sigsetjmp" instead.
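A minimal, self-contained sketch of the check described above (the helper name and
the header paths are assumptions taken from the current tree, not the actual LLVM
code of this revision):

#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Hypothetical helper: approximate a "returns twice" check by looking for
// direct calls to setjmp or sigsetjmp anywhere in the function.
static bool callsSetJmpLike(const Function &F) {
  for (Function::const_iterator BB = F.begin(), BE = F.end(); BB != BE; ++BB)
    for (BasicBlock::const_iterator I = BB->begin(), IE = BB->end(); I != IE; ++I)
      if (const CallInst *CI = dyn_cast<CallInst>(&*I))
        if (const Function *Callee = CI->getCalledFunction())
          if (Callee->getName() == "setjmp" || Callee->getName() == "sigsetjmp")
            return true; // disable stack slot sharing for this function
  return false;
}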
llvm-svn: 104640
Mon Ping provided; unfortunately bugpoint failed to
reduce it, but I think it's important to have a test for
this in the suite. 8023512.
llvm-svn: 104624
so that it will continue to test what it was meant to test when I commit a
separate change for better support of BUILD_VECTOR and VECTOR_SHUFFLE for Neon.
Fix a DAG combiner crash exposed by this test change.
llvm-svn: 104380
that are aliases of the specified register.
- Rename modifiesRegister to definesRegister since it's looking for a def of the
specific register or one of its super-registers. It's not looking for a def of a
sub-register or alias that could change the specified register.
- Added modifiesRegister to look for defs of aliases.
llvm-svn: 104377
reads or writes a register.
This takes partial redefines and undef uses into account.
Don't actually use it yet. That caused miscompiles.
llvm-svn: 104372
definitions of the virtual register.
This happens when spilling the registers produced by REG_SEQUENCE:
%reg1047:5<def>, %reg1047:6<def>, %reg1047:7<def> = VLD3d8 %reg1033, 0, pred:14, pred:%reg0
The rewriter would spill the register multiple times, dead store elimination
tried to keep up, but ended up cutting the branch it was sitting on.
llvm-svn: 104321
pipeline stall. It's useful for targets like ARM cortex-a8. NEON has a lot
of long latency instructions so a strict register pressure reduction
scheduler does not work well.
Early experiments show this speeds up some NEON loops by over 30%.
llvm-svn: 104216
A partial redef now triggers a reload if required. Also don't add
<imp-def,dead> operands for physical superregisters.
Kill flags are still treated as full register kills, and <imp-use,kill> operands
are added for physical superregisters as before.
llvm-svn: 104167
partial redefines.
We are going to treat a partial redefine of a virtual register as a
read-modify-write:
%reg1024:6 = OP
Unless the register is fully clobbered:
%reg1024:6 = OP, %reg1024<imp-def>
MachineInstr::readsVirtualRegister() knows the difference. The first case is a
read, the second isn't.
llvm-svn: 104149
need to be promoted. The BUILD_VECTOR and EXTRACT_VECTOR_ELT nodes generated
here already allow the promoted type to be used without further changes, so
just do the promotion. This fixes part of pr7167.
llvm-svn: 104141
The trouble arises when the result of a vector cmp + sext is then and'ed with all ones. Instcombine will turn it into a vector cmp + zext, the DAG combiner will miss turning it into a vsetcc, and hell breaks loose after that.
Teach the DAG combiner to turn a vector cmp + zext into a vsetcc + and 1. This fixes rdar://7923010.
llvm-svn: 104094
This fixes the miscompilations of MultiSource/Applications/JM/l{en,de}cod.
Clang now successfully self hosts in a debug build with the fast register allocator.
llvm-svn: 103975
While that approach works wonders for register pressure, it tends to break
everything.
This should unbreak the arm-linux builder and fix a number of miscompilations.
llvm-svn: 103946
out aliases when allocating. Clean up allocVirtReg().
Use calcSpillCost() to allow more aggressive hinting. Now the hint is always
taken unless blocked by a reserved register. This leads to more coalescing,
lower register pressure, and less spilling.
llvm-svn: 103939
This is safe to do because the physreg has been marked UsedInInstr and the kill flag will be set on the last operand using the virtreg if there are more than one.
llvm-svn: 103933
Debug code doesn't use callee saved registers anyway, and the code is simpler this way. Now spillVirtReg always kills, and the isKill parameter is not needed.
llvm-svn: 103927
The implementation in LegalizeIntegerTypes to handle this as
sint64->float + appropriate power of 2 is subject to double rounding,
considered incorrect by numerics people. Use this implementation only
when it is safe. This leads to using library calls in some cases
that produced inline code before, but it's correct now.
(EVTToAPFloatSemantics belongs somewhere else, any suggestions?)
Add a correctly rounding (though not particularly fast) conversion
that uses X87 80-bit computations for x86-32.
7885399, 5901940. This shows up in gcc.c-torture/execute/ieee/rbug.c
in the gcc testsuite on some platforms.
llvm-svn: 103883
a condition's grouping. Every other use of Allocatable.test(Hint) groups it the
same way as it is indented, so move the parentheses to agree with that
grouping.
llvm-svn: 103869
When working top-down in a basic block, substituting physregs for virtregs, the use-def chains are kept up to date. That means we can recognize a virtreg kill by the use-def chain becoming empty.
This makes the fast allocator independent of incoming kill flags.
llvm-svn: 103866
instructions.
e.g.
%reg1026<def> = VLDMQ %reg1025<kill>, 260, pred:14, pred:%reg0
%reg1027<def> = EXTRACT_SUBREG %reg1026, 6
%reg1028<def> = EXTRACT_SUBREG %reg1026<kill>, 5
...
%reg1029<def> = REG_SEQUENCE %reg1028<kill>, 5, %reg1027<kill>, 6, %reg1028, 7, %reg1027, 8, %reg1028, 9, %reg1027, 10, %reg1030<kill>, 11, %reg1032<kill>, 12
After REG_SEQUENCE is eliminated, we are left with:
%reg1026<def> = VLDMQ %reg1025<kill>, 260, pred:14, pred:%reg0
%reg1029:6<def> = EXTRACT_SUBREG %reg1026, 6
%reg1029:5<def> = EXTRACT_SUBREG %reg1026<kill>, 5
The regular coalescer will not be able to coalesce reg1026 and reg1029 because it doesn't
know how to combine sub-register indices 5 and 6. Now the 2-address pass will consult the
target whether sub-registers 5 and 6 of reg1026 can be combined into a larger
sub-register (or combined to be reg1026 itself, as is the case here). If it is possible,
it will be able to replace references of reg1026 with reg1029 + the larger sub-register
index.
llvm-svn: 103835
the variable actually tracks.
N.B., several back-ends are using "HasCalls" as being synonymous for something
that adjusts the stack. This isn't 100% correct and should be looked into.
llvm-svn: 103802
- Kill is implicit when use and def registers are identical.
- Only virtual registers can differ.
Add a -verify-fast-regalloc to run the verifier before the fast allocator.
llvm-svn: 103797
-filetype=obj test, and -filetype=obj leaks a few objects. Added a FIXME, we
need to sort out the ownership model for the various MC objects.
llvm-svn: 103769
This loop is quadratic in the capacity for a DenseMap:
while(!map.empty())
map.erase(map.begin());
Instead we now do a normal begin() - end() iteration followed by map.clear().
That also has the nice side effect of shrinking the map capacity on demand.
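A minimal sketch of the two patterns in plain C++ (the key/value types and the
per-entry work are placeholders, not the actual code):

#include "llvm/ADT/DenseMap.h"
using namespace llvm;

static void drain(DenseMap<unsigned, unsigned> &Map) {
  // Old pattern: quadratic in the capacity, because every begin() call has to
  // scan the bucket array from the start to find the first live entry.
  //   while (!Map.empty())
  //     Map.erase(Map.begin());

  // New pattern: one linear walk over the entries, then a single clear(),
  // which also gives the map a chance to shrink its capacity.
  for (DenseMap<unsigned, unsigned>::iterator I = Map.begin(), E = Map.end();
       I != E; ++I)
    (void)I->second; // stand-in for the real per-entry work
  Map.clear();
}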
llvm-svn: 103747
Now, the .linkonce directive is emitted as part of MCSectionCOFF::PrintSwitchToSection instead of AsmPrinter::EmitLinkage, since it is an attribute of the section the symbol was placed into, not of the symbol itself.
llvm-svn: 103568
Sorry for the big change. The path leading up to this patch had some TableGen
changes that I didn't want to commit before I knew they were useful. They
weren't, and this version does not need them.
The fast register allocator now does no liveness calculations. Instead it relies
on kill flags provided by isel. (Currently those kill flags are also ignored due
to isel bugs). The allocation algorithm is supposed to work with any subset of
valid kill flags. More kill flags simply means fewer spills inserted.
Registers are allocated from a working set that contains no aliases. That means
most allocations can be done directly without expensive alias checks. When the
working set runs out of registers we do the full alias check to find new free
registers.
llvm-svn: 103488
Move EmitTargetCodeForMemcpy, EmitTargetCodeForMemset, and
EmitTargetCodeForMemmove out of TargetLowering and into
SelectionDAGInfo to exercise this.
llvm-svn: 103481
getConstantFP to accept the two supported long double
target types. This was not the original intent, but
there are other places that assume this works and it's
easy enough to do.
llvm-svn: 103299
with the fix in 103157.
%reg1039:1<def> = VMOVS %S1<kill>, pred:14, pred:%reg0
is not coalescable since none of the super-registers of S1 are in reg1039's
register class: DPR_VFP2. But it is still a legal copy instruction so it should
not assert.
llvm-svn: 103170
Users can write broken code that emits the same label twice with asm renaming;
detect this and emit a fatal backend error instead of aborting.
llvm-svn: 103140
debug output is showing machine instructions, the IR-level basic block names
aren't very meaningful, and because multiple machine basic blocks may be
derived from one IR-level BB, they're also not unique.
llvm-svn: 102960
beneficial cases. See the changes in test/CodeGen/X86/tail-opts.ll and
test/CodeGen/ARM/ifcvt2.ll for details.
The fix is to change HashEndOfMBB to hash at most one instruction,
instead of trying to apply heuristics about when it will be profitable to
consider more than one instruction. The regular tail-merging heuristics
are already prepared to handle the same cases, and they're more precise.
Also, make test/CodeGen/ARM/ifcvt5.ll and
test/CodeGen/Thumb2/thumb2-branch.ll slightly more complex so that they
continue to test what they're intended to test.
And, this eliminates the problem in
test/CodeGen/Thumb2/2009-10-15-ITBlockBranch.ll, the testcase from
PR5204. Update it accordingly.
llvm-svn: 102907
code, and to eliminate the need for the SelectionDAGBuilder
state to be live during CodeGenAndEmitDAG calls.
Call SDB->clear() before CodeGenAndEmitDAG calls instead of
after it, and move the CurDAG->clear() out of SelectionDAGBuilder,
which doesn't own the DAG, and into CodeGenAndEmitDAG.
llvm-svn: 102814
indexes could be of a different value type. Or not even using the same SDNode
for the constant (weird, I know). Compare the actual values instead of the
pointers.
llvm-svn: 102791
call that might throw. The landing pad assumes that all registers are in stack
slots.
We used to spill those dirty CSRs after the call, and the stack slots would be
wrong when arriving at the landing pad.
llvm-svn: 102770
of different register classes. e.g.
%reg1048:3<def> = EXTRACT_SUBREG %RAX<kill>, 3
Where %reg1048 is a GR32 register. This is not impossible to handle, but it is
pretty hard and very rare.
This should unbreak the dragonegg builder.
llvm-svn: 102672
update them. Computing kill flags is notoriously difficult, and the coalescer
would get it wrong sometimes, and it would completely skip physical registers.
Now we simply remove kill flags based on the live intervals after coalescing.
This is a few percent slower, but now we get correct kill flags for physical
registers after coalescing.
llvm-svn: 102510
of the dbg testsuite regressions. I don't think this is
really the right fix; this change exposed an existing problem
upstream somewhere.
llvm-svn: 102410
otherwise labels get incorrectly merged. We handled this by emitting a
".byte 0", but this isn't correct on thumb/arm targets where the text segment
needs to be a multiple of 2/4 bytes. Handle this by emitting a noop. This
is more gross than it should be because arm/ppc are not fully mc'ized yet.
This fixes rdar://7908505
llvm-svn: 102400
alignment of globals with a specified alignment, we fix
common variables to obey their alignment. Add a comment
explaining why this behavior is important.
llvm-svn: 102365
form of DEBUG_VALUE, as it doesn't have reasonable default
behavior for unsupported targets. Add a new hook instead.
No functional change.
llvm-svn: 102320
FunctionLoweringInfo, as it isn't SelectionDAG-specific. This isn't
completely natural, as PHI node state is not per-function but rather
per-basic-block, however there's currently no other convenient
per-basic-block state to group it with.
llvm-svn: 102109
This actually makes everything slower, but the plan is to have isel add <kill>
flags the way it is already adding <dead> flags. Then LiveVariables can be
removed again.
When ignoring the time spent in LiveVariables, -regalloc=fast is now twice as
fast as -regalloc=local.
llvm-svn: 102034
So far this is just a clone of -regalloc=local that has been lobotomized to run
25% faster. It drops the least-recently-used calculations, and is just plain
stupid when it runs out of registers.
The plan is to make this go even faster for -O0 by taking advantage of the short
live intervals in unoptimized code. It should not be necessary to calculate
liveness when most virtual registers are killed 2-3 instructions after they are
born.
llvm-svn: 102006
user-defined operations that use MMX register types, but
the compiler shouldn't generate them on its own. This adds
a Synthesizable abstraction to represent this, and changes
the vector widening computation so it won't produce MMX types.
(The motivation is to remove noise from the ABI compatibility
part of the gcc test suite, which has some breakage right now.)
llvm-svn: 101951
register is not killed in the loop.
This fixes 188.ammp on ARM where the post-ra scheduler would grab a register
that looked available but wasn't.
A testcase would be huge and fragile, sorry.
llvm-svn: 101930
into SelectionDAGBuilder. This avoids a separate pass over the
instructions, and has the side effect of providing debug location
information to the copy.
llvm-svn: 101906
in other types. Fix this by only bumping zero-byte globals
up to a single byte if the *entire global* is zero size,
fixing PR6340.
This also fixes empty arrays etc to be handled correctly,
and only does this on subsection-via-symbols targets (aka
darwin) which is the only place where this matters.
llvm-svn: 101879
const_casts, and it reinforces the design of the Target classes being
immutable.
SelectionDAGISel::IsLegalToFold is now a static member function, because
PIC16 uses it in an unconventional way. There is more room for API
cleanup here.
And PIC16's AsmPrinter no longer uses TargetLowering.
llvm-svn: 101635
the live-in sets of BBs in the loop. Otherwise a later pass may end up using the
registers and overriding the invariant. rdar://7852937
No reasonably sized test case is possible.
llvm-svn: 101626
with a fix for self-hosting
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101465
JIT doesn't use the MC back-end asm printer to emit labels that it uses, the
section for the MCSymbol is never set. And thus the MCSymbol for the EH label
isn't marked as "defined". Because of that, TidyLandingPads removes the needed
landing pads from the JIT output. This breaks EH for every JIT program.
This is a work-around for this limitation. We pass in the label locations
map. If the label has a non-zero value, then it was "emitted" by the JIT and
TidyLandingPads shouldn't remove that label.
A nicer solution would be to mark the MCSymbol as "used" by the JIT and not rely
upon the section being set to determine if it's defined or not.
llvm-svn: 101453
MachineLoopInfo is already available when MachineSinking runs, so the check is
free.
There is no test case because it would require a critical edge into a loop, and
CodeGenPrepare splits those. This check is just to be extra careful.
llvm-svn: 101420
with a fix
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101397
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101364
This doesn't occur much at all; it only seems to be formed in the case
when the trunc optimization kicks in due to phase ordering. In that
case it saves a few bytes on x86-32.
llvm-svn: 101350
a load/or/and/store sequence into a narrower store when it is
safe. Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.
This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll
into:
movl %eax, 36(%rdi)
instead of:
movl $4294967295, %eax ## imm = 0xFFFFFFFF
andq 32(%rdi), %rax
shlq $32, %rcx
addq %rax, %rcx
movq %rcx, 32(%rdi)
and each of the testcases into a single store. Each of them used
to compile into craziness like this:
_test4:
movl $65535, %eax ## imm = 0xFFFF
andl (%rdi), %eax
shll $16, %esi
addl %eax, %esi
movl %esi, (%rdi)
ret
llvm-svn: 101343
Sometimes it is desirable to sink instructions along a critical edge:
x = ...
if (a && b) ...
else use(x);
The 'a && b' condition creates a critical edge to the else block, but we still
want to sink the computation of x into the block. The else block is dominated by
the parent block, so we are not pushing instructions into new code paths.
llvm-svn: 101165
MachineBasicBlock::livein_iterator a const_iterator, because
clients shouldn't ever be using the iterator interface to
mutate the livein set.
llvm-svn: 101147
so the user at least knows what inline asm is a problem. For example:
error: inline asm not supported yet: don't know how to handle tied indirect register inputs
pr8788-1.c:14:10: note: generated from here
asm ("\n" : "+r" (stack->regs)
^
Instead of:
fatal error: error in backend: Don't know how to handle tied indirect register inputs yet!
llvm-svn: 100731
and use it in one place in inline asm handling stuff. Before
we'd generate this for an invalid modifier letter:
$ clang asm.c -c -o t.o
fatal error: error in backend: Invalid operand found in inline asm: 'abc incl ${0:Z}'
INLINEASM <es:abc incl ${0:Z}>, 10, %EAX<def>, 2147483657, %EAX, 14, %EFLAGS<earlyclobber,def,dead>, <!-1>
Now we generate this:
$ clang asm.c -c -o t.o
error: invalid operand in inline asm: 'incl ${0:Z}'
asm.c:3:12: note: generated from here
__asm__ ("incl %Z0" : "+r" (X));
^
1 error generated.
This is much better but still admittedly not great ("why" is the operand
invalid??), codegen should try harder with its diagnostics :)
llvm-svn: 100723
1. Introduce some enums and accessors in the InlineAsm class
that eliminate a ton of magic numbers when handling inline
asm SDNode.
2. Add a new MDNodeSDNode selection dag node type that holds
a MDNode (shocking!)
3. Add a new argument to ISD::INLINEASM nodes that hold !srcloc
metadata, propagating it to the instruction emitter, which
drops it.
No functionality change.
llvm-svn: 100605
A certain GDB testsuite case (local.cc) has a function nested inside a
class nested inside another function. GCC presents the innermost
function to llvm-convert first. Heretofore, the debug info mistakenly
placed the inner function at module scope. This patch walks the GCC
context links and instantiates the outer class and function so the
debug info is properly nested. Radar 7426545.
llvm-svn: 100530
"Print" methods to "Emit". Emit is something that goes
to an mc streamer, Print is something that goes to a
raw_ostream (for inline asm)
llvm-svn: 100337
"asm printering" happens through MCStreamer. This also
Streamerizes PIC16 debug info, which escaped my attention.
This removes a leak from LLVMTargetMachine of the 'legacy'
output stream.
llvm-svn: 100327