and it is shared across CUs.
We add a few maps in DwarfDebug to map MDNodes for the type system to the
corresponding DIEs: MDTypeNodeToDieMap, MDSPNodeToDieMap, and
MDStaticMemberNodeToDieMap. These DIEs can be shared across CUs, that is why we
keep the maps in DwarfDebug instead of CompileUnit.
Sometimes, when we try to add an attribute to a DIE, the DIE has not yet been
added to its owner, so we don't know whether we should use ref_addr or ref4.
We create a worklist that will be processed during finalization to add
attributes with the correct form (ref_addr or ref4).
We add addDIEEntry to DwarfDebug as a wrapper around DIE->addValue. It checks
whether we know the correct form; if not, we update the worklist
(DIEEntryWorklist).
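In rough form, the deferral can be sketched like this (DIE, sameUnit and
addValue below are simplified stand-ins, not the actual DwarfDebug code):

#include <utility>
#include <vector>

struct DIE { DIE *Owner = nullptr; };
enum class Form { Ref4, RefAddr };

static std::vector<std::pair<DIE *, DIE *>> DIEEntryWorklist;

static bool sameUnit(DIE *A, DIE *B) { return A->Owner == B->Owner; }
static void addValue(DIE *, DIE *, Form) { /* emit the attribute */ }

void addDIEEntry(DIE *Referrer, DIE *Target) {
  if (Target->Owner)   // Owner known: we can pick the form right away.
    addValue(Referrer, Target,
             sameUnit(Referrer, Target) ? Form::Ref4 : Form::RefAddr);
  else                 // Owner unknown: defer the choice to finalization.
    DIEEntryWorklist.push_back({Referrer, Target});
}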
A test case is added to show that we only create a single DIE for a type
MDNode and that we use ref_addr to refer to the type DIE.
llvm-svn: 191792
For targets that have instruction itineraries this means no change. Targets
that move over to the new schedule model will be able to use the new schedule
model for instruction latencies in the if-converter (the logic is such that if
there is no itinerary we will use the new sched model for the latencies).
Before, we queried "TTI->getInstructionLatency()" for the instruction latency
and the extra prediction cost. Now, we query the TargetSchedule abstraction for
the instruction latency and TargetInstrInfo for the extra prediction cost. The
TargetSchedule abstraction will internally call "TTI->getInstructionLatency" if
an itinerary exists, otherwise it will use the new schedule model.
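The dispatch described above boils down to a one-liner (a sketch, not the
actual TargetSchedule code):

// Keep the old itinerary-based latency when an itinerary exists;
// otherwise fall back to the new machine model.
unsigned computeInstrLatency(bool HasItinerary, unsigned ItinLatency,
                             unsigned SchedModelLatency) {
  return HasItinerary ? ItinLatency : SchedModelLatency;
}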
ATTENTION: Out of tree targets!
(I will also send out an email later to LLVMDev)
This means, if your target implements
unsigned getInstrLatency(const InstrItineraryData *ItinData,
                         const MachineInstr *MI,
                         unsigned *PredCost);
and returns a value for "PredCost", you now also need to implement
unsigned getPredictationCost(const MachineInstr *MI);
(if your target uses the IfConversion.cpp pass)
radar://15077010
llvm-svn: 191671
SelectionDAG will now attempt to invert an illegal condition in order to
find a legal one, and if that doesn't work, it will attempt to swap the
operands using the inverted condition.
There are no new test cases for this, but a number of the existing R600
tests hit this path.
llvm-svn: 191602
This is useful for targets like R600, which only support GT, GE, NE, and EQ
condition codes as it removes the need to handle unsupported condition
codes in target specific code.
There are no tests with this commit, but R600 has been updated to take
advantage of this new feature, so its existing selectcc tests are now
testing the swapped operands path.
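A toy restatement of the swap (the enum and helpers are illustrative
stand-ins, not the SelectionDAG API): an unsupported LT/LE becomes a
supported GT/GE once the operands are exchanged.

#include <cassert>

enum CondCode { LT, LE, GT, GE, NE, EQ };

static bool isSupported(CondCode CC) {
  return CC == GT || CC == GE || CC == NE || CC == EQ;  // R600-like target
}

// (a < b) == (b > a) and (a <= b) == (b >= a): swap operands, flip the code.
static CondCode swappedCond(CondCode CC) {
  switch (CC) {
  case LT: return GT;
  case LE: return GE;
  case GT: return LT;
  case GE: return LE;
  default: return CC;  // EQ and NE are symmetric.
  }
}

int main() {
  assert(!isSupported(LT) && isSupported(swappedCond(LT)));
  return 0;
}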
llvm-svn: 191601
Interpreting the results of this function is not very intuitive, so I
cleaned it up to make it more clear whether or not a SETCC op was
legalized and how it was legalized (either by swapping LHS and RHS or
replacing with AND/OR).
This patch does change functionality in the LHS and RHS swapping case,
but unfortunately there are no in-tree tests for this. However, this
patch is a prerequisite for R600 to take advantage of the LHS and RHS
swapping, so tests will be added in subsequent commits.
llvm-svn: 191600
No functionality change. Future patches will add analysis which will be used
in other passes (PEI, StackSlot). The end goal is to support ssp-strong stack
layout rules.
WIP.
Differential Revision: http://llvm-reviews.chandlerc.com/D1521
llvm-svn: 191570
This change fixes the problem reported in pr17380 and re-adds the dagcombine
transformation ensuring that the value types are always legal if the
transformation is triggered after Legalization took place.
Added the test case from pr17380.
llvm-svn: 191509
(shl (zext (shr A, X)), X) => (zext (shl (shr A, X), X)).
The rule only triggers when there are no other uses of the
zext to avoid materializing more instructions.
This helps the DAGCombiner understand that the shl/shr
sequence can then be converted into an and instruction.
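A quick brute-force check of the identity (standard C++, for illustration
only), using an 8-bit A zero-extended to 32 bits; it also shows that the
result is A with its low X bits cleared, i.e. an AND with a mask:

#include <cassert>
#include <cstdint>

int main() {
  for (unsigned A = 0; A < 256; ++A)
    for (unsigned X = 0; X < 8; ++X) {
      uint8_t a = uint8_t(A);
      uint32_t lhs = uint32_t(uint8_t(a >> X)) << X;    // shl (zext (shr A, X)), X
      uint32_t rhs = uint32_t(uint8_t((a >> X) << X));  // zext (shl (shr A, X), X)
      assert(lhs == rhs);
      assert(rhs == (A & (0xFFu << X) & 0xFFu));        // an AND mask
    }
  return 0;
}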
llvm-svn: 191393
Ideally, the machine model is added at the time the instructions are
defined. But many instructions in X86InstrSSE.td still need a model.
Without this workaround the scheduler asserts because x86 already has
itinerary classes for these instructions, indicating they should be
modeled by the scheduler. Since we use the new machine model for other
instructions, it expects a new machine model for these too.
llvm-svn: 191391
PEI inserts a save/restore sequence for the link register, according to the
information it gets from the MachineRegisterInfo.
MachineRegisterInfo is populated by the VirtRegMap pass.
This pass was not aware of noreturn calls and was registering the definitions of
these calls the same way as regular operations.
Modify VirtRegPass so that it does not set the isPhysRegUsed information for
registers only defined by noreturn calls.
The rationale is that a noreturn call is the "last instruction" of the program
(if it returns the behavior is undefined), so everything that is defined by it
cannot be used and will not interfere with anything else. Therefore, it is
pointless to account for them.
llvm-svn: 191349
Patch by Ana Pazos.
1. Added support for v1ix and v1fx types.
2. Added Scalar Pairwise Reduce instructions.
3. Added initial implementation of Scalar Arithmetic instructions.
llvm-svn: 191263
Sometimes a vreg-to-vreg copy sneaks into the middle of a terminator
sequence. It is safe to slice this into the stack protector success bb.
This fixes PR16979.
llvm-svn: 191260
a) Make sure we are emitting the correct section in our section labels
when we begin the module.
b) Make sure we are emitting the correct pubtypes section in the
presence of gnu pubtypes.
c) For C++, struct, union, class, and enumeration types default to
external.
llvm-svn: 191225
The size of common symbols is now tracked correctly, so they can be listed in the arange section without needing knowledge of other following symbols.
.comm (and .lcomm) do not indicate to the system assembler any particular section to use, so we have to treat them as having no section.
Test case update to account for this.
llvm-svn: 191210
This makes using array_pod_sort significantly safer. The implementation relies
on function pointer casting but that should be safe as we're dealing with void*
here.
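For context, the underlying idiom is the classic qsort-style comparator over
void*; a minimal standalone illustration (not the array_pod_sort source):

#include <cstdlib>

// Typed comparison funneled through qsort's void* signature; safe because
// the elements really are the type the comparator casts to.
static int compareInts(const void *A, const void *B) {
  int LHS = *static_cast<const int *>(A);
  int RHS = *static_cast<const int *>(B);
  return (LHS > RHS) - (LHS < RHS);  // avoids overflow of LHS - RHS
}

int main() {
  int Vals[] = {3, 1, 2};
  std::qsort(Vals, 3, sizeof(int), compareInts);
  return 0;
}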
llvm-svn: 191175
Previously, the DAGISel function WalkChainUsers was spotting that it
had entered already-selected territory by whether a node was a
MachineNode (amongst other things). Since it's fairly common practice
to insert MachineNodes during ISelLowering, this was not the correct
check.
Looking around, it seems that other nodes get their NodeId set to -1
upon selection, so this makes sure the same thing happens to all
MachineNodes and uses that characteristic to determine whether we
should stop looking for a loop during selection.
This should fix PR15840.
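In toy form (a stand-in struct, not the real SDNode), the invariant this
establishes is:

// After this change every selected node, MachineNode or not, carries
// NodeId == -1, so the chain walk needs exactly one check.
struct Node { int NodeId; };

static bool alreadySelected(const Node &N) { return N.NodeId == -1; }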
llvm-svn: 191165
The Type Legalizer recognizes that VSELECT needs to be split, because the type
is too wide for the given target. The same does not always apply to SETCC,
because less space is required to encode the result of a comparison. As a
result, VSELECT is split and SETCC is unrolled into scalar comparisons.
This commit fixes the issue by checking for VSELECT-SETCC patterns in the DAG
Combiner. If a matching pattern is found, then the result mask of SETCC is
promoted to the expected vector mask for the given target. This mask usually has
the same size as the VSELECT return type (except for Intel KNL). Now the type
legalizer will split both VSELECT and SETCC.
This allows the following X86 DAG Combine code to successfully detect the MIN/MAX
pattern. This fixes PR16695, PR17002, and <rdar://problem/14594431>.
llvm-svn: 191130
The global registry is used to allow command line override of the
scheduler selection, but does not work well as the normal selection
API. For example, the same LLVM process should be able to target
multiple targets or subtargets.
llvm-svn: 191071
This was an experimental scheduler a year ago. It's now used by
several subtargets, both in-order and out-of-order, and it
is about to be enabled by default for x86 and armv7. It will be the
new GenericScheduler for subtargets that don't provide their own
SchedulingStrategy.
llvm-svn: 191051
C-like languages promote types like unsigned short to unsigned int before
performing an arithmetic operation. Currently the rotate matcher in the
DAGCombiner does not consider this situation.
This commit extends the DAGCombiner in the way that the pattern
(or (shl ([az]ext x), (*ext y)), (srl ([az]ext x), (*ext (sub 32, y))))
is folded into
([az]ext (rotl x, y))
The matching is restricted to aext and zext because in these cases the upper
bits are either undefined or known. A test case is included.
This fixes PR16726.
llvm-svn: 191049
C-like languages promote types like unsigned short to unsigned int before
performing an arithmetic operation. Currently the rotate matcher in the
DAGCombiner does not consider this situation.
This commit extends the DAGCombiner in the way that the pattern
(or (shl ([az]ext x), (*ext y)), (srl ([az]ext x), (*ext (sub 32, y))))
is folded into
([az]ext (rotl x, y))
The matching is restricted to aext and zext because in these cases the upper
bits are either undefined or known. A test case is included.
This fixes PR16726.
llvm-svn: 191045
Based on code review feedback from Eric Christopher, unshifting these
constants as they can appear in the gdb_index itself, shifted a further
24 bits. This means that keeping them preshifted is a bit inflexible, so
let's not do that.
Given the motivation, wrap up some nicer enums, more type safety, and
some utility functions.
llvm-svn: 191035
Use the DIVariable::isIndirect() flag set by the frontend instead of
guessing whether to set the machine location's indirection bit.
Paired commit with CFE.
llvm-svn: 190961
Upcoming SLP vectorization improvements will want to be able to estimate costs
of horizontal reductions. Add infrastructure to support this.
We model reductions as a series of (shufflevector,add) tuples ultimately
followed by an extractelement. For example, for an add-reduction of <4 x float>
we could generate the following sequence:
(v0, v1, v2, v3)
 \    \   /   /
  \    \ /   /
   +    +
(v0+v2, v1+v3, undef, undef)
     \        /
((v0+v2) + (v1+v3), undef, undef)
%rdx.shuf = shufflevector <4 x float> %rdx, <4 x float> undef,
            <4 x i32> <i32 2, i32 3, i32 undef, i32 undef>
%bin.rdx = fadd <4 x float> %rdx, %rdx.shuf
%rdx.shuf7 = shufflevector <4 x float> %bin.rdx, <4 x float> undef,
             <4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
%bin.rdx8 = fadd <4 x float> %bin.rdx, %rdx.shuf7
%r = extractelement <4 x float> %bin.rdx8, i32 0
This commit adds a cost model interface "getReductionCost(Opcode, Ty, Pairwise)"
that will allow clients to ask for the cost of such a reduction (as backends
might generate more efficient code than the cost of the individual instructions
summed up). This interface is exercised by the CostModel analysis pass which
looks for reduction patterns like the one above - starting at extractelements -
and if it sees a matching sequence will call the cost model interface.
We will also support a second form of pairwise reduction that is well supported
on common architectures (haddps, vpadd, faddp).
(v0, v1, v2, v3)
  \  /    \  /
(v0+v1, v2+v3, undef, undef)
     \       /
((v0+v1)+(v2+v3), undef, undef, undef)
%rdx.shuf.0.0 = shufflevector <4 x float> %rdx, <4 x float> undef,
                <4 x i32> <i32 0, i32 2, i32 undef, i32 undef>
%rdx.shuf.0.1 = shufflevector <4 x float> %rdx, <4 x float> undef,
                <4 x i32> <i32 1, i32 3, i32 undef, i32 undef>
%bin.rdx.0 = fadd <4 x float> %rdx.shuf.0.0, %rdx.shuf.0.1
%rdx.shuf.1.0 = shufflevector <4 x float> %bin.rdx.0, <4 x float> undef,
                <4 x i32> <i32 0, i32 undef, i32 undef, i32 undef>
%rdx.shuf.1.1 = shufflevector <4 x float> %bin.rdx.0, <4 x float> undef,
                <4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
%bin.rdx.1 = fadd <4 x float> %rdx.shuf.1.0, %rdx.shuf.1.1
%r = extractelement <4 x float> %bin.rdx.1, i32 0
llvm-svn: 190876
When a truncate node defines a legal vector type but uses an illegal
vector type, the legalization process was splitting the vector until
<1 x vector> type, but then it was failing to scalarize the node because
it did not know how to handle TRUNCATE.
<rdar://problem/14989896>
llvm-svn: 190830
DAGCombiner::isAlias can be called with SrcValue1 or SrcValue2 null, and we
can't use AA in this case (if we try, then the casting code in AA will assert).
llvm-svn: 190763
By definition copies across register banks are not coalescable. Still, it may be
possible to get rid of such a copy when the value is available in another
register of the same register file.
Consider the following example, where capital and lowercase letters denote
different register files:
b = copy A <-- cross-bank copy
...
C = copy b <-- cross-bank copy
This could have been optimized this way:
b = copy A <-- cross-bank copy
...
C = copy A <-- same-bank copy
Note: b and C's definitions may be in different basic blocks.
This patch adds a peephole optimization that looks through a chain of copies
leading to a cross-bank copy and reuses a source that is on the same register
file if available.
This solution could also be used to get rid of some copies (e.g., A could have
been used instead of C). However, we do not do so because:
- It may over-constrain the coloring of the source register for coalescing.
- The register allocator may not be able to find a nice split point for the
longer live-range, leading to more spills.
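A toy model of the lookup (illustrative types only, not the actual peephole
code): follow the value backwards through earlier copies and stop at the
first source that already lives in the destination's register bank.

#include <map>
#include <optional>

struct Reg { int Id; int Bank; };

// Maps a register id to the source register it was copied from.
// Copy chains are assumed acyclic, as they are in SSA-like code.
static std::map<int, Reg> CopiedFrom;

std::optional<Reg> findSameBankSource(Reg Dst, Reg Src) {
  for (auto It = CopiedFrom.find(Src.Id); It != CopiedFrom.end();
       It = CopiedFrom.find(It->second.Id))
    if (It->second.Bank == Dst.Bank)
      return It->second;  // e.g. rewrite "C = copy b" into "C = copy A"
  return std::nullopt;
}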
<rdar://problem/14742333>
llvm-svn: 190713
versions of gold. This support is designed to allow gold to produce
gdb_index sections similar to the accelerator tables and consumable
by gdb.
llvm-svn: 190649
The 'Deprecated' class allows you to specify a SubtargetFeature that the
instruction is deprecated on.
The 'ComplexDeprecationPredicate' class allows you to define a custom
predicate that is called to check for deprecation.
For example:
ComplexDeprecationPredicate<"MCR">
would mean you would have to define the following function:
bool getMCRDeprecationInfo(MCInst &MI, MCSubtargetInfo &STI,
                           std::string &Info)
which returns 'false' for not deprecated and 'true' for deprecated,
storing the warning message in 'Info'.
The MCTargetAsmParser constructor was changed to take an extra argument of
the MCInstrInfo class, so out-of-tree targets will need to be changed.
llvm-svn: 190598
If no register classes are added to CriticalPathRCs, then the CriticalPathSet
bitmask will be empty. In that case, ExcludeRegs must remain NULL or else this
line will cause a segfault:
} else if ((ExcludeRegs != NULL) && ExcludeRegs->test(AntiDepReg)) {
I have no in-tree test case.
llvm-svn: 190584
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
llvm-svn: 190542
We try to create the scope children DIEs after we create the scope DIE. But
to avoid emitting an empty lexical block DIE, we first check whether a scope
DIE is going to be null, then create the scope children if it is not null.
From the number of children, we decide whether to actually create the scope DIE.
This patch also removes an early exit which checks for a special condition.
It also removes deletion of unused children DIEs that are generated
because we used to generate children DIEs before the scope DIE.
Deletion of unused children DIEs may cause problems because we sometimes keep
created DIEs in a member variable of a CU.
llvm-svn: 190421
Specialize the constructors for DIRef<DIScope> and DIRef<DIType> to make sure
the Value is indeed a scope ref and a type ref.
Use DIScopeRef for DIScope::getContext and DIType::getContext and use DITypeRef
for getContainingType and getClassType.
DIScope::generateRef now returns a DIScopeRef instead of a "Value *" for
readability and type safety.
llvm-svn: 190418
The vselect mask isn't a setcc.
This breaks in the case when the result of getSetCCResultType
is larger than the vector operands
e.g. %tmp = select i1 %cmp, <2 x i8> %a, <2 x i8> %b
when getSetCCResultType returns <2 x i32>, the assertion that
MaskTy.getSizeInBits() == Op1.getValueType().getSizeInBits()
is hit.
No test since I don't think I can hit this with any of the current
targets. The R600/SI implementation would break, since it returns a
vector of i1 for this, but it doesn't reach ExpandSELECT for other
reasons.
llvm-svn: 190376
This partially reverts r190330. DIScope::getContext now returns DIScopeRef
instead of DIScope. We construct a DIScopeRef from DIScope when we are
dealing with subprogram, lexical block or name space.
llvm-svn: 190362
Arnold's idea.
I generally try to avoid stateful heuristics because it can make
debugging harder. However, we need a way to prevent the latency
priority from dominating, and it somewhat makes sense to schedule
aggressively for latency only within an issue group.
Swift in particular likes this, and it doesn't hurt anyone else:
| Benchmarks/MiBench/consumer-lame | 10.39% |
| Benchmarks/Misc/himenobmtxpa | 9.63% |
llvm-svn: 190360
There are more than one paths to where the frame information is emitted. Place
the call to generateCompactUnwindEncodings() into the method which outputs the
frame information, thus ensuring that the encoding is there for every path. This
involved threading the MCAsmBackend object through to this method.
<rdar://problem/13623355>
llvm-svn: 190335
In DIBuilder, the context field of a TAG_member is updated to use the
scope reference. Verifier is updated accordingly.
DebugInfoFinder now needs to generate a type identifier map to have
access to the actual scope. Same applies for BreakpointPrinter.
processModule of DebugInfoFinder is called during initialization phase
of the verifier to make sure the type identifier map is constructed early
enough.
We are now able to unique a simple class as demonstrated by the added
testing case.
llvm-svn: 190334
DIScope::getContext is a wrapper function that calls the specific getContext
method on each subclass. When we switch DIType::getContext to return DIScopeRef
instead of DIScope, DIScope::getContext can no longer return a DIScope without
a type identifier map.
DIScope::getContext is only used by DwarfDebug, so we move it to DwarfDebug
to have easy access to the type identifier map.
llvm-svn: 190330
The work on this project was left in an unfinished and inconsistent state.
Hopefully someone will eventually get a chance to implement this feature, but
in the meantime, it is better to put things back the way they were. I have
left support in the bitcode reader to handle the case-range bitcode format,
so that we do not lose bitcode compatibility with the llvm 3.3 release.
This reverts the following commits: 155464, 156374, 156377, 156613, 156704,
156757, 156804 156808, 156985, 157046, 157112, 157183, 157315, 157384, 157575,
157576, 157586, 157612, 157810, 157814, 157815, 157880, 157881, 157882, 157884,
157887, 157901, 158979, 157987, 157989, 158986, 158997, 159076, 159101, 159100,
159200, 159201, 159207, 159527, 159532, 159540, 159583, 159618, 159658, 159659,
159660, 159661, 159703, 159704, 160076, 167356, 172025, 186736
llvm-svn: 190328
This helper function needs the type identifier map when we switch
DIType::getContext to return DIScopeRef instead of DIScope.
Since isSubprogramContext is used by DwarfDebug only, we move it to DwarfDebug
to have easy access to the map.
llvm-svn: 190325
A reference to a scope is more general than a reference to a type since
DIType is a subclass of DIScope.
A reference to a type can be either an identifier for the type or
the DIType itself, while a reference to a scope can be either an
identifier for the type (when the scope is indeed a type) or the
DIScope itself. A reference to a type and a reference to a scope
will be resolved in the same way. The only difference is in the
verifier when a field is a reference to a type (i.e. the containing
type field of a DICompositeType) or a field is a reference to a scope
(i.e. the context field of a DIType).
This is to get ready for switching DIType::getContext to return
DIScopeRef instead of DIScope.
Tighten up isTypeRef and isScopeRef to make sure the identifier is not
empty and the MDNode is DIType for TypeRef and DIScope for ScopeRef.
llvm-svn: 190322
We used to generate the compact unwind encoding from the machine
instructions. However, this had the problem that if the user used `-save-temps'
or compiled their hand-written `.s' file (with CFI directives), we wouldn't
generate the compact unwind encoding.
Move the algorithm that generates the compact unwind encoding into the
MCAsmBackend. This way we can generate the encoding whether the code is from a
`.ll' or `.s' file.
<rdar://problem/13623355>
llvm-svn: 190290
Allow subtargets to customize the generic scheduling strategy.
This is convenient for targets that don't need to add new heuristics
by specializing the strategy.
llvm-svn: 190176
Occasionally DAGCombiner can spot that a SETCC operation is completely
redundant and reduce it to "all true" or "all false". If this happens to a
vector, the value produced has to take account of what a normal comparison
would have produced, which may be an all-1s bitmask.
The fix in SelectionDAG.cpp is tested; however, as far as I can see, the code in
TargetLowering.cpp is possibly unreachable and almost certainly irrelevant when
triggered so there are no tests. However, I believe it's still clearly the
right change and may save someone else some hassle if it suddenly becomes
reachable. So I'm doing it anyway.
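For reference, "all true" for a vector comparison normally means each lane is
all-1s rather than the scalar value 1; a small standalone check of that
convention (plain C++, not the SelectionDAG code):

#include <cassert>
#include <cstdint>

int main() {
  int32_t A[4] = {1, 2, 3, 4};
  int32_t Mask[4];
  for (int I = 0; I < 4; ++I)
    Mask[I] = (A[I] == A[I]) ? -1 : 0;  // per-lane vector SETCC result
  for (int I = 0; I < 4; ++I)
    assert(uint32_t(Mask[I]) == 0xFFFFFFFFu);  // the all-1s bitmask
  return 0;
}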
llvm-svn: 190147
ptr_to_member.
We introduce a new class DITypeRef that represents a reference to a DIType.
It wraps around a Value*, which can be either an identifier in MDString
or an actual MDNode. The class has a helper function "resolve" that
finds the actual MDNode for a given DITypeRef.
We specialize getFieldAs to return a field that is a reference to a
DIType. To correctly access the base type field of ptr_to_member,
getClassType now calls getFieldAs<DITypeRef> to return a DITypeRef.
Also add a typedef for DITypeIdentifierMap and a helper
generateDITypeIdentifierMap in DebugInfo.h. In DwarfDebug.cpp, we keep
a DITypeIdentifierMap and call generateDITypeIdentifierMap to actually
populate the map.
Verifier is updated accordingly.
llvm-svn: 190081
Fast register pressure tracking currently only takes effect during
bottom up scheduling. Forcing this is a bit faster and simpler for
targets that don't have many scheduling constraints and don't need
top-down scheduling.
llvm-svn: 190014
If the instruction window is < NumRegs/2, pressure tracking is not
likely to be effective. The scheduler has to process a very large
number of tiny blocks. We want this to be fast.
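The heuristic reduces to a sketch like this (not the literal scheduler code):

// Pressure tracking only pays off when the scheduling window is large
// relative to the register file.
static bool shouldTrackPressure(unsigned NumInstrsInWindow, unsigned NumRegs) {
  return NumInstrsInWindow >= NumRegs / 2;
}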
llvm-svn: 189991
Register pressure tracking is half the complexity of the
scheduler. It's useful to be able to turn it off for compile time and
performance comparisons.
llvm-svn: 189987
This reverts commit r189913.
Talked with Eric on IRC. I am going to XFAIL the failing test since it
is using what Eric described as "the member hack" which was needed on
that old GDB.
Sorry for the noise!
llvm-svn: 189914
This won't affect the kinds of hashes we test for as we actually
do hashing based on form and attribute. Change the fission-hash
testcase one last time to handle DW_AT_comp_dir.
llvm-svn: 189840
There was one case that we could hit a DebugValue where I didn't think
to check. DebugValues are evil. No checkinable test case, sorry. It's
an obvious fix.
llvm-svn: 189717
Created SUPressureDiffs array to hold the per node PDiff computed during DAG building.
Added a getUpwardPressureDelta API that will soon replace the old
one. Compute PressureDelta here from the precomputed PressureDiffs.
Updating for liveness will come next.
llvm-svn: 189640
Revert unintentional commit (of an unreviewed change).
Original commit message:
Add getUnrollingPreferences to TTI
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
llvm-svn: 189566
Allow targets to customize the default behavior of the generic loop unrolling
transformation. This will be used by the PowerPC backend when targeting the A2
core (which is in-order with a deep pipeline), and using more aggressive
defaults is important.
llvm-svn: 189565
This uses the TargetSubtargetInfo::useAA() function to control the defaults of
the -combiner-alias-analysis and -combiner-global-alias-analysis options.
llvm-svn: 189564
There are several optional (off-by-default) features in CodeGen that can make
use of alias analysis. These features are important for generating code for
some kinds of cores (for example the (in-order) PPC A2 core). This adds a
useAA() function to TargetSubtargetInfo to allow these features to be enabled
by default on a per-subtarget basis.
Here is the first use of this function: To control the default of the
-enable-aa-sched-mi feature.
llvm-svn: 189563
when we can. Migrate from using blocks when we're adding just a
single attribute, and floating point values are an unsigned, not signed,
bag of bits.
Update all test cases accordingly.
llvm-svn: 189419
We want to convert code like (or (srl N, 8), (shl N, 8)) into (srl (bswap N),
const), but this is only valid if the bits above 16 on the source pattern are
0, the checks we were doing on this were slightly wrong before.
llvm-svn: 189348
is constructing from as an input and keep the same unique identifier.
We can use this to connect items which must stay in the .o file
(e.g. pubnames and pubtypes) to the skeleton cu rather than having
duplicate unique numbers for the sections and needing to do lookups
based on MDNode.
llvm-svn: 189293
If we have a binary operation like ISD:ADD, we can set the result type
equal to the result type of one of its operands rather than using
TargetLowering::getPointerTy().
Also, any use of DAG.getIntPtrConstant(C) as an operand for a binary
operation can be replaced with:
DAG.getConstant(C, OtherOperand.getValueType());
llvm-svn: 189227
This adds minimal support to the SelectionDAG for handling address spaces
with different pointer sizes. The SelectionDAG should now correctly
lower pointer function arguments to the correct size as well as generate
the correct code when lowering getelementptr.
This patch also updates the R600 DataLayout to use 32-bit pointers for
the local address space.
v2:
- Add more helper functions to TargetLoweringBase
- Use CHECK-LABEL for tests
llvm-svn: 189221
We currently emit labels with the prefix Lllvm$workaround$fake$stub$ if
the target's MCAsmInfo has getLinkOnceDirective() mapped to something
interesting. This was apparently a workaround introduced in r31033 for
binutils that we don't need anymore.
llvm-svn: 189187
Estimate the cyclic critical path within a single block loop. If the
acyclic critical path is longer, then the loop will exhaust OOO
resources after some number of iterations. If lag between the acyclic
critical path and the cyclic critical path is longer than the time it takes
to issue those loop iterations, then aggressively schedule for
latency.
llvm-svn: 189120
This will be used to compute the cyclic critical path and to
update precomputed per-node pressure differences.
In the longer term, it could also be used to speed up LiveInterval
update by avoiding visiting all global vreg users.
llvm-svn: 189118
...so that it can be used for z too. Most of the code is the same.
The only real change is to use TargetTransformInfo to test when a sqrt
instruction is available.
The pass is opt-in because at the moment it only handles sqrt.
llvm-svn: 189097
When truncated vector stores were being custom lowered in
VectorLegalizer::LegalizeOp(), the old (illegal) and new (legal) node pair
was not being added to LegalizedNodes list. Instead of the legalized
result being passed to VectorLegalizer::TranslateLegalizeResult(),
the result was being passed back into VectorLegalizer::LegalizeOp(),
which ended up adding a (new, new) pair to the list instead.
This was causing an assertion failure when a custom lowered truncated
vector store was the last instruction in a basic block and the VectorLegalizer
was unable to find it in the LegalizedNodes list when updating the
DAG root.
llvm-svn: 188953
The small utility function that pattern matches Base + Index +
Offset patterns for loads and stores fails to recognize the base
pointer for loads/stores from/into an array at offset 0 inside a
loop. As a result DAGCombiner::MergeConsecutiveStores was not able
to merge all stores.
This commit fixes the issue by adding an additional pattern match
and also a test case.
Reviewer: Nadav
llvm-svn: 188936
Summary:
LLVM would generate DWARF with version 3 in the .debug_pubnames and
.debug_pubtypes version fields. This would lead SGI dwarfdump to fail to
parse the DWARF; in the case of .debug_pubnames, it would exit with:
  dwarfdump ERROR: dwarf_get_globals: DW_DLE_PUBNAMES_VERSION_ERROR (123)
This fixes PR16950.
Reviewers: echristo, dblaikie
Reviewed By: echristo
CC: cfe-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1454
llvm-svn: 188869
SystemZTargetLowering::emitStringWrapper() previously loaded the character
into R0 before the loop and made R0 live on entry. I'd forgotten that
allocatable registers weren't allowed to be live across blocks at this stage,
and it confused LiveVariables enough to cause a miscompilation of f3 in
memchr-02.ll.
This patch instead loads R0 in the loop and leaves LICM to hoist it
after RA. This is actually what I'd tried originally, but I went for
the manual optimisation after noticing that R0 often wasn't being hoisted.
This bug forced me to go back and look at why, now fixed as r188774.
We should also try to optimize null checks so that they test the CC result
of the SRST directly. The select between null and the SRST GPR result could
then usually be deleted as dead.
llvm-svn: 188779
Post-RA LICM keeps three sets of registers: PhysRegDefs, PhysRegClobbers
and TermRegs. When it sees a definition of R it adds all aliases of R
to the corresponding set, so that when it needs to test for membership
it only needs to test a single register, rather than worrying about
aliases there too. E.g. the final candidate loop just has:
unsigned Def = Candidates[i].Def;
if (!PhysRegClobbers.test(Def) && ...) {
to test whether register Def is multiply defined.
However, there was also a shortcut in ProcessMI to make sure we didn't
add candidates if we already knew that they would fail the final test.
This shortcut was more pessimistic than the final one because it
checked whether _any alias_ of the defined register was multiply defined.
This is too conservative for targets that define register pairs.
E.g. on z, R0 and R1 are sometimes used as a pair, so there is a
128-bit register that aliases both R0 and R1. If a loop used
R0 and R1 independently, and the definition of R0 came first,
we would be able to hoist the R0 assignment (because that used
the final test quoted above) but not the R1 assignment (because
that meant we had two definitions of the paired R0/R1 register
and would fail the shortcut in ProcessMI).
This patch just uses the same check for the ProcessMI shortcut as
we use in the final candidate loop.
llvm-svn: 188774
Previously, generation of stack protectors was done exclusively in the
pre-SelectionDAG Codegen LLVM IR Pass "Stack Protector". This necessitated
splitting basic blocks at the IR level to create the success/failure basic
blocks in the tail of the basic block in question. As a result of this,
calls that would have qualified for the sibling call optimization were no
longer eligible for optimization since said calls were no longer right in
the "tail position" (i.e. the immediate predecessor of a ReturnInst
instruction).
Then it was noticed that since the sibling call optimization causes the
callee to reuse the caller's stack, if we could delay the generation of
the stack protector check until later in CodeGen after the sibling call
decision was made, we get both the tail call optimization and the stack
protector check!
A few goals in solving this problem were:
1. Preserve the architecture independence of stack protector generation.
2. Preserve the normal IR level stack protector check for platforms like
OpenBSD for which we support platform specific stack protector
generation.
The main problem that guided the present solution is that one can not
solve this problem in an architecture independent manner at the IR level
only. This is because:
1. The decision on whether or not to perform a sibling call on certain
platforms (for instance i386) requires lower level information
related to available registers that can not be known at the IR level.
2. Even if the previous point were not true, the decision on whether to
perform a tail call is done in LowerCallTo in SelectionDAG which
occurs after the Stack Protector Pass. As a result, one would need to
put the relevant callinst into the stack protector check success
basic block (where the return inst is placed) and then move it back
later at SelectionDAG/MI time before the stack protector check if the
tail call optimization failed. The MI level option was nixed
immediately since it would require platform specific pattern
matching. The SelectionDAG level option was nixed because
SelectionDAG only processes one IR level basic block at a time
implying one could not create a DAG Combine to move the callinst.
To get around this problem a few things were realized:
1. While one can not handle multiple IR level basic blocks at the
SelectionDAG Level, one can generate multiple machine basic blocks
for one IR level basic block. This is how we handle bit tests and
switches.
2. At the MI level, tail calls are represented via a special return
MIInst called "tcreturn". Thus if we know the basic block in which we
wish to insert the stack protector check, we get the correct behavior
by always inserting the stack protector check right before the return
statement. This is a "magical transformation" since no matter where
the stack protector check intrinsic is, we always insert the stack
protector check code at the end of the BB.
Given the aforementioned constraints, the following solution was devised:
1. On platforms that do not support SelectionDAG stack protector check
generation, allow for the normal IR level stack protector check
generation to continue.
2. On platforms that do support SelectionDAG stack protector check
generation:
a. Use the IR level stack protector pass to decide if a stack
protector is required/which BB we insert the stack protector check
in by reusing the logic already therein. If we wish to generate a
stack protector check in a basic block, we place a special IR
intrinsic called llvm.stackprotectorcheck right before the BB's
returninst or if there is a callinst that could potentially be
sibling call optimized, before the call inst.
b. Then when a BB with said intrinsic is processed, we codegen the BB
normally via SelectBasicBlock. In said process, when we visit the
stack protector check, we do not actually emit anything into the
BB. Instead, we just initialize the stack protector descriptor
class (which involves stashing information/creating the success
mbb and the failure mbb if we have not created one for this
function yet) and export the guard variable that we are going to
compare.
c. After we finish selecting the basic block, in FinishBasicBlock if
the StackProtectorDescriptor attached to the SelectionDAGBuilder is
initialized, we first find a splice point in the parent basic block
before the terminator and then splice the terminator of said basic
block into the success basic block. Then we code-gen a new tail for
the parent basic block consisting of the two loads, the comparison,
and finally two branches to the success/failure basic blocks. We
conclude by code-gening the failure basic block if we have not
code-gened it already (all stack protector checks we generate in
the same function use the same failure basic block).
llvm-svn: 188755
This adds a llvm.copysign intrinsic; We already have Libfunc recognition for
copysign (which is turned into the FCOPYSIGN SDAG node). In order to
autovectorize calls to copysign in the loop vectorizer, we need a corresponding
intrinsic as well.
In addition to the expected changes to the language reference, the loop
vectorizer, BasicTTI, and the SDAG builder (the intrinsic is transformed into
an FCOPYSIGN node, just like the function call), this also adds FCOPYSIGN to a
few lists in LegalizeVector{Ops,Types} so that vector copysigns can be
expanded.
In TargetLoweringBase::initActions, I've made the default action for FCOPYSIGN
be Expand for vector types. This seems correct for all in-tree targets, and I
think is the right thing to do because, previously, there was no way to generate
vector-valued FCOPYSIGN nodes (and most targets don't specify an action for
vector-typed FCOPYSIGN).
llvm-svn: 188728
Until gdb supports the new accelerator tables we should add the
pubnames section so that gdb_index can be generated from gold
at link time. On darwin we already emit the accelerator tables
and so don't need to worry about pubnames.
llvm-svn: 188708
- split WidenVecRes_Binary into WidenVecRes_Binary and WidenVecRes_BinaryCanTrap
- WidenVecRes_BinaryCanTrap preserves the original behaviour for operations
that can trap
- WidenVecRes_Binary simply widens the operation and improves codegen for
3-element vectors by allowing widening and promotion on x86 (matches the
behaviour of unary and ternary operation widening)
- use WidenVecRes_Binary for operations on integers.
Reviewed by: nrotem
llvm-svn: 188699
We had previously been asserting when faced with a FCOPYSIGN f64, ppcf128 node
because there was no way to expand the FCOPYSIGN node. Because ppcf128 is the
sum of two doubles, and the first double must have the larger magnitude, we
can take the sign from the first double. As a result, in addition to fixing the
crash, this is also an optimization.
llvm-svn: 188655
We check this in many/all other cases, just missed this one it seems.
Perhaps it'd be worth unifying this so we never emit zero-length
DW_AT_names.
llvm-svn: 188649
Teach the generic instruction selection helper functions to constrain
the register classes of their input operands. For non-physical register
references, the generic code needs to be careful not to mess that up
when replacing references to result registers. As the comment indicates
for MachineRegisterInfo::replaceRegWith(), it's important to call
constrainRegClass() first.
rdar://12594152
llvm-svn: 188593
Generalize r188163 to cope with return types other than MVT::i32, just
as the existing visitMemCmpCall code did. I've split this out into a
subroutine so that it can be used for other upcoming patches.
I also noticed that I'd used the wrong API to record the out chain.
It's a load that uses DAG.getRoot() rather than getRoot(), so the out
chain should go on PendingLoads. I don't have a testcase for that because
we don't do any interesting scheduling on z yet.
llvm-svn: 188540
When new virtual registers are created during splitting/spilling, defer
creation of the live interval until we need to use the live interval.
Along with the recent commits to notify LiveRangeEdit when new virtual
registers are created, this makes it possible for functions like
TargetInstrInfo::loadRegFromStackSlot() and
TargetInstrInfo::storeRegToStackSlot() to create multiple virtual
registers as part of the process of generating loads/stores for
different register classes, and then have the live intervals for those
new registers computed when they are needed.
llvm-svn: 188437
Add a delegate class to MachineRegisterInfo with a single virtual
function, MRI_NoteNewVirtualRegister(). Update LiveRangeEdit to inherit
from this delegate class and override the definition of the callback
with an implementation that tracks the newly created virtual registers.
llvm-svn: 188435
Track new virtual registers by register number, rather than by the live
interval created for them. This is the first step in separating the
creation of new virtual registers and new live intervals. Eventually
live intervals will be created and populated on demand after the virtual
registers have been created and used in instructions.
llvm-svn: 188434
A common idiom is to use zero and all-ones as sentinel values and to
check for both in a single conditional ("x != 0 && x != (unsigned)-1").
That generates code, for i32, like:
  testl %edi, %edi
  setne %al
  cmpl $-1, %edi
  setne %cl
  andb %al, %cl
With this transform, we generate the simpler:
  incl %edi
  cmpl $1, %edi
  seta %al
Similar improvements for other integer sizes and on other platforms. In
general, combining the two setcc instructions into one is better.
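A brute-force sanity check of the equivalence (plain C++, illustration only):

#include <cassert>
#include <cstdint>

int main() {
  auto Orig = [](uint32_t X) { return X != 0 && X != UINT32_MAX; };
  auto Opt  = [](uint32_t X) { return X + 1u > 1u; };  // incl; cmpl $1; seta
  assert(!Opt(0) && !Opt(UINT32_MAX));                 // sentinels rejected
  for (uint64_t I = 0; I <= UINT32_MAX; I += 65537)    // sample the range
    assert(Orig(uint32_t(I)) == Opt(uint32_t(I)));
  return 0;
}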
rdar://14689217
llvm-svn: 188315
LowerCallTo returns a pair with the return value of the call as the first
element and the chain associated with the return value as the second element. If
we lower a call that has a void return value, LowerCallTo returns an SDValue
with a NULL SDNode and the chain for the call. Thus makeLibCall, by returning
just the first value, makes it impossible to set up the chain so
that the call is not eliminated as dead code.
I also updated all references to makeLibCall to reflect the new return type.
llvm-svn: 188300
Summary:
We need to do two things:
- Initialize BSSSection in MCObjectFileInfo::InitCOFFMCObjectFileInfo
- Teach TargetLoweringObjectFileCOFF::SelectSectionForGlobal what to do
with it
This fixes PR16861.
Reviewers: rnk
Reviewed By: rnk
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1361
llvm-svn: 188244
CUs.
Currently only hashes the name of CUs and the names of any children,
but it's an obvious first step to show the framework. The testcase
should continue to be correct, however, as it's an empty TU.
llvm-svn: 188243
If the tail-callee and caller give the same bits via the same signext/zeroext
attribute then a tail-call should be allowed, since the extension has already
been done by the callee.
llvm-svn: 188159
This patch decouples the stack protector pass so that we can support stack
protector implementations that do not use the IR level generated stack protector
fail basic block.
No codesize increase is caused by this change since the MI level tail merge pass
properly merges together the fail condition blocks (see the updated test).
llvm-svn: 188105
Previously the asserts were only checking that RHS and LHS were the same type and had the same element type as the result. All downstream code for ISD::VECTOR_SHUFFLE requires the types to be the same.
Also removed one unnecessary check of matched element counts that was present in the code.
llvm-svn: 188051
For most libm ISD nodes, TargetLoweringBase::initActions sets the default
scalar-type action to Expand, and leaves the vector-type action default as
Legal. This is not appropriate for the new ISD::FROUND node (which no backend
but PowerPC handles explicitly).
Fixes PR16842.
llvm-svn: 188048
the type exists.
Fix up cases where we weren't checking for optional types and add
an assert to addType to make sure we catch this in the future.
Fix up a testcase that was using the tag for DW_TAG_array_type
when it meant DW_TAG_enumeration_type.
llvm-svn: 187963
This reverts commit r77814.
We were sticking global constants in the .data section instead of in the
.rdata section when emitting for COFF.
This fixes PR16831.
llvm-svn: 187956
Original commit message:
Stop emitting weak symbols into the "coal" sections.
The Mach-O linker has been able to support the weak-def bit on any symbol for
quite a while now. The compiler however continued to place these symbols into a
"coal" section, which required the linker to map them back to the base section
name.
Replace the sections like this:
__TEXT/__textcoal_nt instead use __TEXT/__text
__TEXT/__const_coal instead use __TEXT/__const
__DATA/__datacoal_nt instead use __DATA/__data
<rdar://problem/14265330>
llvm-svn: 187939
All libm floating-point rounding functions, except for round(), had their own
ISD nodes. Recent PowerPC cores have an instruction for round(), and so here I'm
adding ISD::FROUND so that round() can be custom lowered as well.
For the most part, this is straightforward. I've added an intrinsic
and a matching ISD node just like those for nearbyint() and friends. The
SelectionDAG pattern I've named frnd (because ISD::FP_ROUND has already claimed
fround).
This will be used by the PowerPC backend in a follow-up commit.
llvm-svn: 187926
This change came about primarily because of two issues in the existing code.
Neither of:
define i64 @test1(i64 %val) {
  %in = trunc i64 %val to i32
  tail call i32 @ret32(i32 returned %in)
  ret i64 %val
}

define i32 @test2(i32 %val) {
  tail call i32 @ret32(i32 returned undef)
  ret i32 42
}
should be tail calls, and the function sameNoopInput is responsible. The main
problem is that it is completely symmetric in the "tail call" and "ret" value,
but in reality different things are allowed on each side.
For these cases:
1. Any truncation should lead to a larger value being generated by "tail call"
than needed by "ret".
2. Undef should only be allowed as a source for ret, not as a result of the
call.
Along the way I noticed that a mismatch between what this function treats as a
valid truncation and what the backends see can lead to invalid calls as well
(see x86-32 test case).
This patch refactors the code so that instead of being based primarily on
values which it recurses into when necessary, it starts by inspecting the type
and considers each fundamental slot that the backend will see in turn. For
example, given a pathological function that returned {{}, {{}, i32, {}}, i32}
we would consider each "real" i32 in turn, and ask if it passes through
unchanged. This is much closer to what the backend sees as a result of
ComputeValueVTs.
Aside from the bug fixes, this eliminates the recursion that's going on and, I
believe, makes the bulk of the code significantly easier to understand. The
trade-off is the nasty iterators needed to find the real types inside a
returned value.
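To make the "fundamental slot" idea concrete, here is a toy flattener (not
ComputeValueVTs itself) applied to that pathological type, confirming it
contains exactly two real i32 slots:

#include <cassert>
#include <vector>

struct Ty { bool IsInt; std::vector<Ty> Elems; };  // toy type node

static void flatten(const Ty &T, std::vector<int> &Slots) {
  if (T.IsInt) { Slots.push_back(32); return; }
  for (const Ty &E : T.Elems)
    flatten(E, Slots);  // recurse into (possibly empty) structs
}

int main() {
  Ty I32{true, {}}, Empty{false, {}};
  Ty Inner{false, {Empty, I32, Empty}};   // {{}, i32, {}}
  Ty Outer{false, {Empty, Inner, I32}};   // {{}, {{}, i32, {}}, i32}
  std::vector<int> Slots;
  flatten(Outer, Slots);
  assert(Slots.size() == 2);              // two real i32 slots
  return 0;
}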
llvm-svn: 187787
This virtual function can be implemented by targets to specify the type
to use for the index operand of INSERT_VECTOR_ELT, EXTRACT_VECTOR_ELT,
INSERT_SUBVECTOR, EXTRACT_SUBVECTOR. The default implementation returns
the result from TargetLowering::getPointerTy()
The previous code was using TargetLowering::getPointerTy() for vector
indices, because this is guaranteed to be legal on all targets. However,
using TargetLowering::getPointerTy() can be a problem for targets with
pointer sizes that differ across address spaces. On such targets,
when vectors need to be loaded or stored to an address space other than the
default 'zero' address space (which is the address space assumed by
TargetLowering::getPointerTy()), having an index that
is a different size than the pointer can lead to inefficient
pointer calculations, (e.g. 64-bit adds for a 32-bit address space).
There is no intended functionality change with this patch.
llvm-svn: 187748
Function attributes are the future! So just query whether we want to realign the
stack directly from the function instead of through a random target options
structure.
llvm-svn: 187618
For a testcase like the following:
  typedef unsigned long uint64_t;
  typedef struct {
    uint64_t lo;
    uint64_t hi;
  } blob128_t;
  void add_128_to_128(const blob128_t *in, blob128_t *res) {
    asm ("PAND %1, %0" : "+Q"(*res) : "Q"(*in));
  }
where we'll fail to allocate the register for the output constraint,
our matching input constraint will not find a register to match,
and could try to search past the end of the current operands array.
On the idea that we'd like to attempt to keep compilation going
to find more errors in the module, change the error cases when
we're visiting inline asm IR to return immediately and avoid
trying to create a node in the DAG. This leaves us with only
a single error message per inline asm instruction, but allows us
to safely keep going in the general case.
llvm-svn: 187470
When registers must be live throughout the scheduling region, increase
the limit for the register class. Once we exceed the original limit,
they will be spilled, and there's no point further reducing pressure.
This isn't a perfect heuristic but avoids a situation where the
scheduler could become trapped by trying to achieve the impossible.
llvm-svn: 187436
This patch prevents the following combine when the input vector is used more
than once.
insert_vector_elt (build_vector elt0, ..., eltN), NewEltIdx, idx
=>
build_vector elt0, ..., NewEltIdx, ..., eltN
The reasons are:
- Building a vector may be expensive, so try to reuse the existing part of a
vector instead of creating a new one (think big vectors).
- elt0 to eltN now have two users instead of one. This may prevent some other
optimizations.
llvm-svn: 187396
update testcase to make sure we generate debug info for walrus
by adding a non-trivial constructor and verify that we don't
emit an ODR signature for the type.
llvm-svn: 187393
32-bit symbols have "_" as global prefix, but when forming the name of
COMDAT sections this prefix is ignored. The current behavior assumes that
this prefix is always present, which is not the case for 64-bit, and names
are truncated.
llvm-svn: 187356
There doesn't appear to be any reason to put this variable on the heap.
I'm suspicious of the LexicalScope above that we stuff in a map and then
delete afterward, but I'm just trying to get the valgrind bot clean.
llvm-svn: 187301
Adds unit tests for it too.
Split BasicBlockUtils into an analysis-half and a transforms-half, and put the
analysis bits into a new Analysis/CFG.{h,cpp}. Promote isPotentiallyReachable
into llvm::isPotentiallyReachable and move it into Analysis/CFG.
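A hedged usage sketch (the header paths and default arguments here are
assumptions about the tree at this revision; the exact signature may differ):

#include "llvm/Analysis/CFG.h"
#include "llvm/IR/Instruction.h"

// Conservatively answers "could To execute after From?": true unless the
// CFG proves otherwise.
bool mayFollow(const llvm::Instruction *From, const llvm::Instruction *To) {
  return llvm::isPotentiallyReachable(From, To);
}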
llvm-svn: 187283
Merge consecutive if-regions if they contain identical statements.
Both transformations reduce number of branches. The transformation
is guarded by a target-hook, and is currently enabled only for +R600,
but the correctness has been tested on X86 target using a variety of
CPU benchmarks.
Patch by: Mei Ye
llvm-svn: 187278
type units.
Initially this support is used in the computation of an ODR checker
for C++. For now we're attaching it to the DIE, but in the future
it will be attached to the type unit.
This also starts breaking out types into the separation for type
units, but without actually splitting the DIEs.
In preparation for hashing the DIEs this adds a DIEString type
that contains a StringRef with the string contained at the label.
llvm-svn: 187213
CustomLowerNode was not being called during SplitVectorOperand,
meaning custom legalization could not be used by targets.
This also adds a test case for NVPTX that depends on this custom
legalization.
Differential Revision: http://llvm-reviews.chandlerc.com/D1195
Attempt to fix the buildbots by making the X86 test I just added platform independent
llvm-svn: 187202
This reverts commit 187198. It broke the bots.
The soft float test probably needs a -triple because of name differences.
On the hard float test I am getting a "roundss $1, %xmm0, %xmm0", instead of
"vroundss $1, %xmm0, %xmm0, %xmm0".
llvm-svn: 187201
CustomLowerNode was not being called during SplitVectorOperand,
meaning custom legalization could not be used by targets.
This also adds a test case for NVPTX that depends on this custom
legalization.
Differential Revision: http://llvm-reviews.chandlerc.com/D1195
llvm-svn: 187198
The previous change to local live range allocation also suppressed
eviction of local ranges. In rare cases, this could result in more
expensive register choices. This commit actually revives a feature
that I added long ago: check if live ranges can be reassigned before
eviction. But now it only happens in rare cases of evicting a local
live range because another local live range wants a cheaper register.
The benefit is improved code size for some benchmarks on x86 and armv7.
I measured no significant compile time increase and performance
changes are noise.
llvm-svn: 187140
Also avoid locals evicting locals just because they want a cheaper register.
Problem: MI Sched knows exactly how many registers we have and assumes
they can be colored. In cases where we have large blocks, usually from
unrolled loops, greedy coloring fails. This is a source of
"regressions" from the MI Scheduler on x86. I noticed this issue on
x86 where we have long chains of two-address defs in the same live
range. It's easy to see this in matrix multiplication benchmarks like
IRSmk and even the unit test misched-matmul.ll.
A fundamental difference between the LLVM register allocator and
conventional graph coloring is that in our model a live range can't
discover its neighbors; it can only verify its neighbors. That's why
we initially went for greedy coloring and added eviction to deal with
the hard cases. However, for singly defined and two-address live
ranges, we can optimally color without visiting neighbors simply by
processing the live ranges in instruction order.
Other beneficial side effects:
It is much easier to understand and debug regalloc for large blocks
when the live ranges are allocated in order. Yes, global allocation is
still very confusing, but it's nice to be able to comprehend what
happened locally.
Heuristics could be added to bias register assignment based on
instruction locality (think late register pairing, banks...).
Intuitively this will make some test cases that are on the threshold
of register pressure more stable.
llvm-svn: 187139
There's no need to specify a flag to omit frame pointer elimination on non-leaf
nodes...(Honestly, I can't parse that option out.) Use the function attribute
stuff instead.
llvm-svn: 187093
Prior to this patch, IfConverter could widen the set of cases in which a
sequence of instructions is executed because of the way it uses nested
predicates. This results in incorrect execution.
For instance, Let A be a basic block that flows conditionally into B and B be a
predicated block.
B can be predicated with A.BrToBPredicate into A iff B.Predicate is less
"permissive" than A.BrToBPredicate, i.e., iff A.BrToBPredicate subsumes
B.Predicate.
The IfConverter was checking the opposite: B.Predicate subsumes
A.BrToBPredicate.
<rdar://problem/14379453>
llvm-svn: 187071
Use the function attributes to pass along the stack protector buffer size.
Now that we have robust function attributes, don't use a command line option to
specify the stack protector buffer size.
llvm-svn: 186863
These floats all represented block frequencies anyway, so just use the
BlockFrequency class directly.
Some floating point computations remain in tryLocalSplit(). They are
estimating spill weights which are still floats.
llvm-svn: 186435
Original commit message:
Remove floating point computations from SpillPlacement.cpp.
Patch by Benjamin Kramer!
Use the BlockFrequency class instead of floats in the Hopfield network
computations. This rescales the node Bias field from a [-2;2] float
range to two block frequencies BiasN and BiasP pulling in opposite
directions. This construct has a more predictable behavior when block
frequencies saturate.
The per-node scaling factors are no longer necessary, assuming the block
frequencies around a bundle are consistent.
This patch can cause the register allocator to make different spilling
decisions. The differences should be small.
llvm-svn: 186434
We can have a FrameSetup in one basic block and the matching FrameDestroy
in a different basic block when we have struct byval. In that case, SPAdj
is not zero at beginning of the basic block.
Modify PEI to correctly set SPAdj at beginning of each basic block using
DFS traversal. We used to assume SPAdj is 0 at beginning of each basic block.
PEI had an assert SPAdjCount || SPAdj == 0.
If we have a Destroy <n> followed by a Setup <m>, PEI will hit an assertion
failure.
We can add an extra condition to make sure the pairs are matched:
The pairs start with a FrameSetup.
But since we are doing a much better job in the verifier, this patch removes
the check in PEI.
PR16393
llvm-svn: 186364
1> on every path through the CFG, a FrameSetup <n> is always followed by a
FrameDestroy <n> and a FrameDestroy is always followed by a FrameSetup.
2> stack adjustments are identical on all CFG edges to a merge point.
3> frame is destroyed at end of a return block.
PR16393
llvm-svn: 186350
There is a comment at the top of DAGTypeLegalizer::PerformExpensiveChecks
which, in part, says:
// Note that these invariants may not hold momentarily when processing a node:
// the node being processed may be put in a map before being marked Processed.
Unfortunately, this assert would be valid only if the above-mentioned invariant
held unconditionally. This was causing llc to assert when, in fact,
everything was fine.
Thanks to Richard Sandiford for investigating this issue!
Fixes PR16562.
llvm-svn: 186338
Address calculation for gather/scatter in vectorized code can incur a
significant cost, making vectorization unprofitable. Add infrastructure to
model this cost.
Tests and cost model for targets will be in follow-up commits.
radar://14351991
llvm-svn: 186187
MF is normally initialized in AsmPrinter::SetupMachineFunction, but if the file
contains only globals (no functions), then we need this to be initialized
because, when encountering an error, lowerConstant() references it.
This should fix the non-deterministic failures of
test/CodeGen/X86/nonconst-static-iv.ll, etc.
llvm-svn: 186068
When computing currently-live registers, the register scavenger excludes undef
uses. As a result, undef uses are ignored when computing the restore points of
registers spilled into the emergency slots. While the register scavenger
normally excludes the registers used by the current instruction from
consideration when scavenging, undef uses must not be excluded here. Otherwise, we might end
up requiring more emergency spill slots than we have (in the case where the
undef use *is* the currently-spilled register).
Another bug found by llvm-stress.
llvm-svn: 186067
Change the informal convention of DBG_VALUE machine instructions so that
we can express a register-indirect address with an offset of 0.
The old convention was that a DBG_VALUE is a register-indirect value if
the offset (operand 1) is nonzero. The new convention is that a DBG_VALUE
is register-indirect if the first operand is a register and the second
operand is an immediate. For plain register values the combination reg,
reg is used. MachineInstrBuilder::BuildMI knows how to build the new
DBG_VALUEs.
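A small sketch of the new encoding rule, using a simplified stand-in for
MachineOperand (Operand and isIndirectDbgValue are hypothetical names):

#include <cassert>
#include <iostream>

// Hypothetical stand-in for a DBG_VALUE operand: either a register or an
// immediate (the real MachineOperand has many more kinds).
struct Operand {
  enum Kind { Register, Immediate } K;
  long long Val;
};

// New convention: a DBG_VALUE is register-indirect iff operand 0 is a
// register and operand 1 is an immediate offset (which may now be 0).
// A plain register value is encoded as the pair (reg, reg).
bool isIndirectDbgValue(const Operand &Op0, const Operand &Op1) {
  return Op0.K == Operand::Register && Op1.K == Operand::Immediate;
}

int main() {
  Operand Reg{Operand::Register, 29}; // e.g. a frame register
  Operand Zero{Operand::Immediate, 0};
  // Under the old convention (offset != 0 means indirect) an indirect
  // address with offset 0 was inexpressible; now it is simply (reg, imm 0).
  assert(isIndirectDbgValue(Reg, Zero));
  assert(!isIndirectDbgValue(Reg, Reg)); // (reg, reg) = direct register value
  std::cout << "DBG_VALUE forms ok\n";
}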
rdar://problem/13658587
llvm-svn: 185966
Because integer BUILD_VECTOR operands may have a larger type than the result's
vector element type, and all operands must have the same type, when widening a
BUILD_VECTOR node by adding UNDEFs, we cannot use the vector element type, but
rather must use the type of the existing operands.
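A toy illustration of the fix, tracking only bit widths (Op and
widenBuildVector are invented for the example):

#include <iostream>
#include <vector>

// Toy widening step: pad a BUILD_VECTOR's operand list with UNDEFs. The
// UNDEF must use the operands' type (which may be wider than the result's
// element type), illustrated here with plain bit widths.
struct Op {
  unsigned Bits;
  bool Undef;
};

std::vector<Op> widenBuildVector(std::vector<Op> Ops, unsigned WidenedElts) {
  // Wrong: build the UNDEF from the result's element type.
  // Right: clone the type of the operands that are already there.
  Op Undef{Ops.front().Bits, true};
  while (Ops.size() < WidenedElts)
    Ops.push_back(Undef);
  return Ops;
}

int main() {
  // v2i1 result with i32 operands (i1 promoted to i32), widened to 4 elts.
  auto Ops = widenBuildVector({{32, false}, {32, false}}, 4);
  std::cout << Ops.size() << " operands, UNDEF width i"
            << Ops.back().Bits << "\n"; // 4 operands, UNDEF width i32
}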
Another bug found by llvm-stress.
llvm-svn: 185960
in-tree implementations of TargetLoweringBase::isFMAFasterThanMulAndAdd in
order to resolve the following issues with fmuladd (i.e. optional FMA)
intrinsics:
1. On X86(-64) targets, ISD::FMA nodes are formed when lowering fmuladd
intrinsics even if the subtarget does not support FMA instructions, leading
to laughably bad code generation in some situations.
2. On AArch64 targets, ISD::FMA nodes are formed for operations on fp128,
resulting in a call to a software fp128 FMA implementation.
3. On PowerPC targets, FMAs are not generated from fmuladd intrinsics on types
like v2f32, v8f32, v4f64, etc., even though they promote, split, scalarize,
etc. to types that support hardware FMAs.
The function has also been slightly renamed for consistency and to force a
merge/build conflict for any out-of-tree target implementing it. To resolve,
see comments and fixed in-tree examples.
llvm-svn: 185956
When folding sub x, x (and other similar constructs), where x is a vector, the
result is a vector of zeros. After type legalization, make sure that the input
zero elements have a legal type. This type may be larger than the result's
vector element type.
This was another bug found by llvm-stress.
llvm-svn: 185949
This patch broke `make check-asan` on Mac, causing ld warnings like the following one:
ld: warning: direct access in __GLOBAL__I_a to global weak symbol
___asan_mapping_scale means the weak symbol cannot be overridden at
runtime. This was likely caused by different translation units being
compiled with different visibility settings.
The resulting test binaries crashed with incorrect ASan warnings.
llvm-svn: 185923
The Mach-O linker has been able to support the weak-def bit on any symbol for
quite a while now. The compiler however continued to place these symbols into a
"coal" section, which required the linker to map them back to the base section
name.
Replace the sections like this:
__TEXT/__textcoal_nt instead use __TEXT/__text
__TEXT/__const_coal instead use __TEXT/__const
__DATA/__datacoal_nt instead use __DATA/__data
<rdar://problem/14265330>
llvm-svn: 185872
Since the pool indexes are necessarily sequential and contiguous, just
insert things in the right place rather than having to sort the sequence
after the fact.
No functionality change.
llvm-svn: 185842
This fixes a bug (found by llvm-stress) in
DAGTypeLegalizer::PromoteIntRes_BUILD_VECTOR where it assumed that the result
type would always be larger than the original operands. This is not always
true, however, with boolean vectors. For example, promoting a node of type v8i1
(where the operands will be of type i32, the type to which i1 is promoted) will
yield a node with a result vector element type of i16 (and operands of type
i32). As a result, we cannot blindly assume that we can ANY_EXTEND the operands
to the result type.
llvm-svn: 185794
This fixes an oversight that Intrinsic::nearbyint was not being mapped to
ISD::FNEARBYINT (thus fixing the over-optimistic cost we were assigning to
nearbyint calls for some targets).
llvm-svn: 185783
Obviously the personality function should be emitted as the language handler
instead of the hard-coded _GCC_specific_handler. The language-specific
data must be placed after the unwind information; therefore it must not
be emitted into a separate section.
Reviewed by Charles Davis and Nico Rieck.
llvm-svn: 185761
ReduceLoadWidth unconditionally drops extensions from loads. Limit it to the
case when all of the bits the extension would otherwise produce are dropped by
the shrink. It would be possible to shrink the load in more cases by merging
the extensions, but this isn't trivial and is a very rare case. I left a TODO for
that case.
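Roughly, the new restriction can be pictured as follows (the parameters and
the exact inequality are a simplification for illustration, not the
DAGCombiner's real logic):

#include <cassert>
#include <iostream>

// Sketch of the legality condition, with made-up parameters: a load of
// LoadBits feeding an extension may be shrunk to a load of NarrowBits
// starting at bit ShAmt only if every bit the extension would have
// produced beyond the original load is discarded by the shrink.
bool canDropExtension(unsigned LoadBits, unsigned NarrowBits, unsigned ShAmt) {
  // The used slice must lie entirely within the original load, so all of
  // the extension's extra bits fall in the dropped range.
  return ShAmt + NarrowBits <= LoadBits;
}

int main() {
  // Using bits 16..31 of an i32 load: any extended bits are gone -> OK.
  assert(canDropExtension(32, 16, 16));
  // A slice reaching past the original load would keep extended bits.
  assert(!canDropExtension(32, 24, 16));
  std::cout << "shrink checks ok\n";
}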
Fixes PR16551.
llvm-svn: 185755
This prevents the emission of DAG-generated vreg definitions after a
tail call by dropping them entirely (on the grounds that nothing could
use them anyway, and they interfere with O0 CodeGen).
llvm-svn: 185754
The stack coloring pass has code to delete stores and loads that become
trivially dead after coloring. Extend it to cope with single instructions
that copy from one frame index to another.
The testcase happens to show an example of this kicking in at the moment.
It did occur in Real Code too though.
llvm-svn: 185705
The stack coloring pass renumbered frame indexes with a loop of the form:
for each frame index FI
for each instruction I that uses FI
for each use of FI in I
rename FI to FI'
This caused problems if an instruction used two frame indexes F0 and F1
and if F0 was renamed to F1 and F1 to F2. The first time we visited the
instruction we changed F0 to F1, then we changed both F1s to F2.
In other words, the problem was that SSRefs recorded which instructions
used an FI, but not which MachineOperands and MachineMemOperands within
that instruction used it.
This is easily fixed for MachineOperands by walking the instructions
once and processing each operand in turn. There's already a loop to
do that for dead store elimination, so it seemed more efficient to
fuse the two at the block level.
MachineMemOperands are more tricky because they can be shared between
instructions. The patch handles them by making SSRefs an array of
MachineMemOperands rather than an array of MachineInstrs. We might end
up processing the same MachineMemOperand twice, but that's OK because
we always know from the SSRefs index what the original frame index was.
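A compact sketch of the per-operand renaming (Inst and renameOnce are
illustrative; the real code walks MachineOperands and MachineMemOperands):

#include <iostream>
#include <map>
#include <vector>

// Hypothetical instruction: just a list of frame-index operands.
using Inst = std::vector<int>;

// Buggy shape: for each FI, rewrite every instruction that uses it. With a
// mapping {0 -> 1, 1 -> 2}, an instruction using both F0 and F1 gets F0
// renamed to F1 first, and the second pass then renames *both* F1s to F2.
// Correct shape: walk each instruction once and map each operand exactly
// once through the renaming table.
void renameOnce(std::vector<Inst> &Insts, const std::map<int, int> &Renames) {
  for (Inst &I : Insts)
    for (int &FI : I)
      if (auto It = Renames.find(FI); It != Renames.end())
        FI = It->second; // each operand is rewritten at most once
}

int main() {
  std::vector<Inst> Insts = {{0, 1}}; // one instruction using F0 and F1
  renameOnce(Insts, {{0, 1}, {1, 2}});
  std::cout << Insts[0][0] << " " << Insts[0][1] << "\n"; // 1 2, not 2 2
}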
llvm-svn: 185703
SystemZ wants normal register scavenging slots, as close to the stack or
frame pointer as possible. The only reason it was using custom code was
because PrologEpilogInserter assumed an x86-like layout, where the frame
pointer is at the opposite end of the frame from the stack pointer.
This meant that when frame pointer elimination was disabled,
the slots ended up being as close as possible to the incoming
stack pointer, which is the opposite of what we want on SystemZ.
This patch adds a new knob to say which layout is used and converts
SystemZ to use target-independent scavenging slots. It's one of the pieces
needed to support frame-to-frame MVCs, where two slots might be required.
The ABI requires us to allocate 160 bytes for calls, so one approach
would be to use that area as temporary spill space instead. It would need
some surgery to make sure that the slot isn't live across a call though.
I stuck to the "isFPCloseToIncomingSP - ..." style comment on the
"do what the surrounding code does" principle. The FP case is already
covered by several SystemZ/frame-* tests, which fail without the
PrologueEpilogueInserter change, so no new ones are needed.
No behavioural change intended.
llvm-svn: 185696
r179494 switched to using the object file info to retrieve the default text
section for some MC streamers. It is possible that initializing an MC
streamer can request sections before the object file info is initialized
when the AutoInitSections flag is set on the streamer.
llvm-svn: 185670
Stop using the ISD::EXCEPTIONADDR and ISD::EHSELECTION nodes when lowering
landing pad arguments. These nodes were previously legalized into
CopyFromReg nodes, but that never worked properly because the
CopyFromReg nodes weren't guaranteed to be scheduled at the top of the
basic block.
This meant the exception pointer and selector registers could be
clobbered before being copied to a virtual register.
This patch copies the two physical registers to virtual registers at
the beginning of the basic block, and lowers the landingpad instruction
directly to two CopyFromReg nodes reading the *virtual* registers. This
is safe because virtual registers don't get clobbered.
A future patch will remove the ISD::EXCEPTIONADDR and ISD::EHSELECTION
nodes.
llvm-svn: 185617
Compute the insertion point from the end of the basic block instead of
skipping labels from the front.
This caused failures in landing pads when live-in copies were inserted
before instruction selection.
llvm-svn: 185616
Stop using the ISD::EXCEPTIONADDR and ISD::EHSELECTION nodes when lowering
landing pad arguments. These nodes were previously legalized into
CopyFromReg nodes, but that never worked properly because the
CopyFromReg nodes weren't guaranteed to be scheduled at the top of the
basic block.
This meant the exception pointer and selector registers could be
clobbered before being copied to a virtual register.
This patch copies the two physical registers to virtual registers at
the beginning of the basic block, and lowers the landingpad instruction
directly to two CopyFromReg nodes reading the *virtual* registers. This
is safe because virtual registers don't get clobbered.
A future patch will remove the ISD::EXCEPTIONADDR and ISD::EHSELECTION
nodes.
llvm-svn: 185595
Correctly handles ref_addr depending on the Dwarf version. Emit Dwarf with
version from module flag.
TODO: turn on/off features depending on the Dwarf version.
llvm-svn: 185484
This allows getDebugThreadLocalSymbol to return a generic MCExpr
instead of just a MCSymbolRefExpr.
This is in preparation for supporting debug info for TLS variables
on PowerPC, where we need to describe the variable location using
a more complex expression than just MCSymbolRefExpr.
llvm-svn: 185460
This changes the AddrPool infrastructure to enable it to hold
generic MCExpr expressions, not just MCSymbolRefExpr.
This is in preparation for supporting debug info for TLS variables
on PowerPC, where we need to describe the variable location using
a more complex expression than just MCSymbolRefExpr.
llvm-svn: 185459
This partially reverts r185202 and restores DIELabel to hold plain
MCSymbol references. Instead, we add a new subclass DIEExpr of
DIEValue that can hold generic MCExpr references.
This is in preparation for supporting debug info for TLS variables
on PowerPC, where we need to describe the variable location using
a more complex expression than just MCSymbolRefExpr.
llvm-svn: 185458
"Remove floating point computations form SpillPlacement.cpp."
These commits caused test failures in lencod on clang-native-arm-lnt.
I suspect these changes are only exposing an existing issue, but
reverting anyway to keep the bots passing while we investigate.
llvm-svn: 185447
This is dead code since PIC16 was removed in 2010. The result was an odd mix,
where some parts would carefully pass it along and others would assert it was
zero (most of the object streamer for example).
llvm-svn: 185436
DAGCombiner was counting all uses of a load node when considering whether it's
worth combining into a zextload. Really, it wants to ignore the chain and just
count real uses.
rdar://problem/13896307
llvm-svn: 185419
Patch by Benjamin Kramer!
Use the BlockFrequency class instead of floats in the Hopfield network
computations. This rescales the node Bias field from a [-2;2] float
range to two block frequencies BiasN and BiasP pulling in opposite
directions. This construct has a more predictable behavior when block
frequencies saturate.
The per-node scaling factors are no longer necessary, assuming the block
frequencies around a bundle are consistent.
This patch can cause the register allocator to make different spilling
decisions. The differences should be small.
llvm-svn: 185393
Restrict the current TLS support to X86 ELF for now. Test that we don't
produce it on PPC & we can flesh that test case out with the right thing
once someone implements it.
llvm-svn: 185389
When phis get lowered, destination copies are inserted using an iterator that is
determined once for all phis in the block, which BuildMI interprets as a request
to insert an instruction directly before the iterator. In the case of a cyclic
phi, source copies may also be inserted directly before this iterator, which can
cause source copies to be inserted before destination copies. The fix is to keep
an iterator to the last phi and then advance it while lowering each phi in order
to insert destination copies directly after the phis.
llvm-svn: 185363
Based on GCC's output for TLS variables (OP_constNu, x@dtpoff,
OP_lo_user), this implements debug info support for TLS in ELF. Verified
that this output is correct/sufficient on Linux (using gold - if you're
using binutils-ld, you'll need something with the fix for
http://sourceware.org/bugzilla/show_bug.cgi?id=15685 in it).
Support on non-ELF is sort of "arbitrary" at the moment - if Apple folks
want to discuss (or just go ahead & implement) how this should work in
MachO, etc, I'm open.
llvm-svn: 185203
should expand ATOMIC_CMP_SWAP nodes the same way that it does for ATOMIC_SWAP.
Since ATOMIC_LOADs on some targets (e.g. older ARM variants) get legalized to
ATOMIC_CMP_SWAPs, the missing case had been causing i64 atomic loads to crash
during isel.
<rdar://problem/14074644>
llvm-svn: 185186
No functionality change.
It should suffice to check the type of a debug info metadata, instead of
calling Verify. For cases where we know the type of a DI metadata, use
assert.
Also update testing cases to make them conform to the format of DI classes.
llvm-svn: 185135
This is a band-aid to fix the most severe regressions we're seeing from basing
spill decisions on block frequencies, until we have a better solution.
llvm-svn: 184835
This makes it possible to write unit tests that are less susceptible
to minor code motion, particularly copy placement. block-placement.ll
covers this case with -pre-RA-sched=source which will soon be
default. One incorrectly named block is already fixed, but without
this fix, enabling new coalescing and scheduling would cause more
failures.
llvm-svn: 184680
We have no targets on trunk that bundle before regalloc. However, we
have been advertising regalloc as bundle safe for use with out-of-tree
targets. We need to at least contain the parts of the code that are
still unsafe.
llvm-svn: 184620
A FastISel optimization was causing us to emit no information for such
parameters & when they go missing we end up emitting a different
function type. By avoiding that shortcut we not only get types correct
(very important) but also location information (handy) - even if it's
only live at the start of a function & may be clobbered later.
Reviewed/discussion by Evan Cheng & Dan Gohman.
llvm-svn: 184604
Live intervals for dead physregs may be created during coalescing. We
need to update these in the event that their instruction goes away.
crash.ll is the unit test that catches it when MI sched is enabled on
X86.
llvm-svn: 184572
Fix up three tests - one that was relying on abbreviation number,
another relying on a location list in this case (& testing raw asm,
changed that to use dwarfdump on the debug_info now that that's where
the location is), and another which was added in r184368 - exposing a
bug in that fix that is exposed when we emit the location inline rather
than through a location list. Fix that bug while I'm here.
llvm-svn: 184387
We had been papering over a problem with location info for non-trivial
types passed by value by emitting their type as references (this caused
the debugger to interpret the location information correctly, but broke
the type of the function). r183329 corrected the type information but
led to the debugger interpreting the pointer parameter as the value -
the debug info describing the location needed an extra dereference.
Use a new flag in DIVariable to add the extra indirection (either by
promoting an existing DW_OP_reg (parameter passed in a register) to
DW_OP_breg + 0 or by adding DW_OP_deref to an existing DW_OP_breg + n
(parameter passed on the stack)).
llvm-svn: 184368
value is zero.
This allows optimizations to kick in more easily.
Fix some test cases so that they remain meaningful (i.e., not completely dead
code) when optimizations apply.
<rdar://problem/14096009> superfluous multiply by high part of zero-extended
value.
llvm-svn: 184222
The main advantages here are way better heuristics, taking into account not
just loop depth but also __builtin_expect and other static heuristics and will
eventually learn how to use profile info. Most of the work in this patch is
pushing the MachineBlockFrequencyInfo analysis into the right places.
This is good for a 5% speedup on zlib's deflate (x86_64), there were some very
unfortunate spilling decisions in its hottest loop in longest_match(). Other
benchmarks I tried were mostly neutral.
This changes register allocation in subtle ways, update the tests for it.
2012-02-20-MachineCPBug.ll was deleted as it's very fragile and the instruction
it looked for was gone already (but the FileCheck pattern picked up unrelated
stuff).
llvm-svn: 184105
Frame index handling is now target-agnostic, so delete the target hooks
for creation & asm printing of target-specific addressing in DBG_VALUEs
and any related functions.
llvm-svn: 184067
Rather than using the full power of target-specific addressing modes in
DBG_VALUEs with Frame Indices, simply use Frame Index + Offset. This
reduces the complexity of debug info handling down to two
representations of values (reg+offset and frame index+offset) rather
than three or four.
Ideally we could ensure that frame indices had been eliminated by the
time we reach assembly or DWARF generation, but I haven't spent the
time to figure out where the FIs are leaking through into that & whether
there's a good place to convert them. Some FI+offset=>reg+offset
conversion is done (see PrologEpilogInserter, for example) which is
necessary for some SelectionDAG assumptions about registers, I believe,
but it might be possible to make this a more thorough conversion &
ensure there are no remaining FIs no matter how instruction selection
is performed.
llvm-svn: 184066
Replace the ill-defined MinLatency and ILPWindow properties with
straightforward buffer sizes:
MCSchedModel::MicroOpBufferSize
MCProcResourceDesc::BufferSize
These can be used to more precisely model instruction execution if desired.
Disabled some misched tests temporarily. They'll be reenabled in a few commits.
llvm-svn: 184032
"Counts" refer to scaled resource counts within a region. CurrMOps is
simply the number of micro-ops to be issue in the current cycle.
llvm-svn: 184031
Heuristics compare the critical path in the scheduled code, called
ExpectedLatency, with the latency of instructions remaining to be
scheduled. There are two ways to look at remaining latency:
(1) Dependent latency includes the latency between unscheduled and
scheduled instructions.
(2) Independent latency is simply the height (bottom-up) or depth
(top-down) of instructions currently in the ready Q.
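A rough sketch of the two measures over a made-up ready queue (this SUnit is
a simplified stand-in, not the scheduler's real SUnit):

#include <algorithm>
#include <iostream>
#include <vector>

// Hypothetical ready-queue node for a bottom-up scheduling zone.
struct SUnit {
  unsigned Height;     // latency from this node to the end of the region
  unsigned DepLatency; // latency of edges into already-scheduled nodes
};

int main() {
  std::vector<SUnit> ReadyQ = {{7, 3}, {4, 1}, {9, 2}};

  // (2) Independent remaining latency: just the max height (bottom-up)
  // of what is sitting in the ready queue.
  unsigned IndepLat = 0;
  for (auto &SU : ReadyQ)
    IndepLat = std::max(IndepLat, SU.Height);

  // (1) Dependent remaining latency also counts the edges between
  // unscheduled and scheduled instructions.
  unsigned DepLat = 0;
  for (auto &SU : ReadyQ)
    DepLat = std::max(DepLat, SU.Height + SU.DepLatency);

  std::cout << "independent=" << IndepLat // 9
            << " dependent=" << DepLat    // 11
            << "\n";
}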
llvm-svn: 184029
When we're rematerializing into a not-quite-right register we already add the
real definition as an imp-def, but we should also be marking the "official"
register as dead, since nothing else is going to use it as a result of this
remat.
Not doing this can affect pressure tracking.
rdar://problem/14158833
llvm-svn: 184002
in functions which call __builtin_unwind_init()
__builtin_unwind_init() is an undocumented gcc intrinsic which has this effect,
and is used in libgcc_eh.
Goes part of the way toward fixing PR8541.
llvm-svn: 183984
operator<< so that functions are printed as just their name instead of as their
entire definition, which is excessively verbose in this context.
llvm-svn: 183871
instantiation issue with non-standard type.
Add a backend option to warn on a given stack size limit.
Option: -mllvm -warn-stack-size=<limit>
Output (if limit is exceeded):
warning: Stack size limit exceeded (<actual size>) in <functionName>.
The longer term plan is to hook that to a clang warning.
PR:4072
<rdar://problem/13987214>.
llvm-svn: 183595
Option: -mllvm -warn-stack-size=<limit>
Output (if limit is exceeded):
warning: Stack size limit exceeded (<actual size>) in <functionName>.
The longer term plan is to hook that to a clang warning.
PR:4072
<rdar://problem/13987214>
llvm-svn: 183552
Fix an assertion when the compiler encounters big constants whose bit width is
not a multiple of 64-bits.
Although clang would never generate something like this, the backend should be
able to handle any legal IR.
<rdar://problem/13363576>
llvm-svn: 183544
OpenBSD's stack smashing protection differs slightly from other
platforms:
1. The smash handler function is "__stack_smash_handler(const char
*funcname)" instead of "__stack_chk_fail(void)".
2. There's a hidden "long __guard_local" object that gets linked
into each executable and DSO.
Patch by Matthew Dempsky.
llvm-svn: 183533
Seems we emit the parameter ordering number (spuriously named 'arg
number') in the debug info, so there's no need to search through the
variable list to figure out the parameter ordering. This implementation
does 'always' do the work, even in non-optimized debug info (the
previous implementation checked the existence of the 'variables' list on
the subprogram which is only present in optimized builds).
No intended functionality change.
llvm-svn: 183446
The TargetLoweringInfo object is owned by the TargetMachine. In the future, the
TargetMachine object may change, which may also change the TargetLoweringInfo
object.
llvm-svn: 183356
When a function is inlined we lazily construct the variables
representing the function's parameters. After that, we add any remaining
unused parameters.
If the function doesn't use all the parameters, or uses them out of
order, then the DWARF would produce them in that order, producing a
parameter order that doesn't match the source.
This fix causes us to always keep the arg variables at the start of the
variable list & in the original order from the source.
llvm-svn: 183297
(4.58s vs 3.2s on an oldish Mac Tower).
The corresponding source is excerpted below. The loop accounts for about 90% of execution time.
--------------------
cat -n test-suite/MultiSource/Benchmarks/Olden/em3d/make_graph.c
90
91 for (k=0; k<j; k++)
92 if (other_node == cur_node->to_nodes[k]) break;
The defective layout is sketched below, where the two branches need to swap.
------------------------------------------------------------------------
L:
...
if (cond) goto out-of-loop
goto L
While this code sequence is defective, I don't understand why it incurs 1/3 of
execution time. CPU-event profiling indicates the poor layout does not increase
branch mispredictions; it doesn't increase stall cycles at all, and it doesn't
prevent the CPU from detecting the loop (i.e. the Loop Stream Detector seems to
be working fine as well)...
The root cause of the problem is that the layout pass calls AnalyzeBranch()
with basic-block which is not updated to reflect its current layout.
rdar://13966341
llvm-svn: 183174
Account for the cost of scaling factor in Loop Strength Reduce when rating the
formulae. This uses a target hook.
The default implementation of the hook is: if the addressing mode is legal, the
scaling factor is free.
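A hedged sketch of the hook's shape (AddrMode, TargetHook, and the legality
rule here are invented for illustration):

#include <iostream>

// Toy addressing mode: base + scale * index.
struct AddrMode {
  long long BaseOffs;
  long long Scale;
};

struct TargetHook {
  virtual bool isLegalAddressingMode(const AddrMode &AM) const {
    return AM.Scale == 0 || AM.Scale == 1; // made-up target rule
  }
  virtual int getScalingFactorCost(const AddrMode &AM) const {
    // Default implementation: free if the mode is legal, prohibitive
    // otherwise, so LSR's formula rating penalizes illegal scales.
    return isLegalAddressingMode(AM) ? 0 : 1000;
  }
  virtual ~TargetHook() = default;
};

int main() {
  TargetHook T;
  std::cout << T.getScalingFactorCost({0, 1}) << "\n"; // 0: legal, free
  std::cout << T.getScalingFactorCost({0, 8}) << "\n"; // 1000: not legal here
}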
<rdar://problem/13806271>
llvm-svn: 183045
r182872 introduced a bug in how the register-coalescer's rematerialization
handled defining a physical register. It relied on the output of the
coalescer's setRegisters method to determine whether the replacement
instruction needed an implicit-def. However, this value isn't necessarily the
same as the CopyMI's actual destination register which is what the rest of the
basic-block expects us to be defining.
The commit changes the rematerializer to use the actual register attached to
CopyMI in its decision.
This will be tested soon by an X86 patch which moves everything to using
MOV32r0 instead of other sizes.
llvm-svn: 182925
Fixes PR16146: gdb.base__call-ar-st.exp fails after
pre-RA-sched=source fixes.
Patch by Xiaoyi Guo!
This also fixes an unsupported dbg.value test case. Codegen was
previously incorrect but the test was passing by luck.
llvm-svn: 182885
This allows rematerialization during register coalescing to handle
more cases involving operations like SUBREG_TO_REG which might need to
be rematerialized using sub-register indices.
For example, code like:
v1(GPR64):sub_32 = MOVZ something
v2(GPR64) = COPY v1(GPR64)
should be convertible to:
v2(GPR64):sub_32 = MOVZ something
but previously we just gave up in places like this.
llvm-svn: 182872
Since the testing case uses ref_addr, which requires version 3+ to work,
we will solve the dwarf version issue first.
This patch also causes failures in one of the bots. I will update the patch
accordingly in my next attempt.
rdar://13926659
llvm-svn: 182867
from a different CU.
We used to print out an error message and fail to generate inlined_subroutine.
If we use ref_addr in the generated DWARF, the DWARF version should be 3 or
above.
rdar://13926659
llvm-svn: 182791
When -ffast-math is in effect (on Linux, at least), clang defines
__FINITE_MATH_ONLY__ > 0 when including <math.h>. This causes the
preprocessor to include <bits/math-finite.h>, which renames the sqrt functions.
For instance, "sqrt" is renamed as "__sqrt_finite".
This patch adds the 3 new names in such a way that they will be treated
as equivalent to their respective original names.
llvm-svn: 182739
Use a field in the SelectionDAGNode object to track its IR ordering.
This adds fields and utility classes without changing existing
interfaces or functionality.
llvm-svn: 182701
Now that the LiveDebugVariables pass is running *after* register
coalescing, the ConnectedVNInfoEqClasses class needs to deal with
DBG_VALUE instructions.
This only comes up when rematerialization during coalescing causes the
remaining live range of a virtual register to separate into two
connected components.
llvm-svn: 182592
There were bits & pieces of code lying around that may've given the
impression that debug info metadata supported the possibility that a
subprogram's type could be specified by a non-subroutine type describing
the return type of a void function. This support was incomplete &
unnecessary. Asserts & API have been changed to make the desired usage
more clear.
llvm-svn: 182532
This is to fix PR15408 where an undefined symbol Lline_table_start1 is used.
Since we do not generate the debug_line section when .loc is used,
Lline_table_start1 is not emitted and we can't refer to it when calculating
at_stmt_list for a compile unit.
llvm-svn: 182344
This resolves the last of the PR14606 failures in the GDB 7.5 test
suite by implementing an optional name field for
DW_TAG_imported_modules/DIImportedEntities and using that to implement
C++ namespace aliases (eg: "namespace X = Y;").
llvm-svn: 182328
This lane mask provides information about which register lanes
completely cover super-registers. See the block comment before
getCoveringLanes().
llvm-svn: 182034
If the input operands to SETCC are promoted, we need to make sure that we
either use the promoted form of both operands (or neither); a mixture is not
allowed. This can happen, for example, if a target has a custom promoted
i1-returning intrinsic (where i1 is not a legal type). In this case, we need to
use the promoted form of both operands.
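A tiny model of the augmented rule (Value, promote, and the widths are
illustrative):

#include <algorithm>
#include <cassert>
#include <iostream>

// Toy value-type model: a SETCC's two inputs must end up with the same
// type; if one side was promoted (e.g. i1 -> i32), promote the other too.
struct Value {
  unsigned Bits;
};

Value promote(Value V, unsigned To) {
  assert(V.Bits <= To && "promotion only widens");
  return {To};
}

void legalizeSetccOperands(Value &LHS, Value &RHS) {
  // Use the promoted form of both operands, or of neither -- never a mix.
  unsigned Wide = std::max(LHS.Bits, RHS.Bits);
  LHS = promote(LHS, Wide);
  RHS = promote(RHS, Wide);
  assert(LHS.Bits == RHS.Bits);
}

int main() {
  Value L{32}, R{1}; // e.g. a custom intrinsic already promoted to i32 vs i1
  legalizeSetccOperands(L, R);
  std::cout << "setcc operands: i" << L.Bits << ", i" << R.Bits << "\n";
}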
This change only augments the behavior of the existing logic in the case where
the input types (which may or may not have already been legalized) disagree,
and should not affect existing target code because this case would otherwise
cause an assert in the SETCC operand promotion code.
This will be covered by (essentially all of the) tests for the new PPCCTRLoops
infrastructure.
llvm-svn: 181926
IR optimisation passes can result in a basic block that contains:
llvm.lifetime.start(%buf)
...
llvm.lifetime.end(%buf)
...
llvm.lifetime.start(%buf)
Before this change, calculateLiveIntervals() was ignoring the second
lifetime.start() and was regarding %buf as being dead from the
lifetime.end() through to the end of the basic block. This can cause
StackColoring to incorrectly merge %buf with another stack slot.
Fix by removing the incorrect Starts[pos].isValid() and
Finishes[pos].isValid() checks.
Just doing:
Starts[pos] = Indexes->getMBBStartIdx(MBB);
Finishes[pos] = Indexes->getMBBEndIdx(MBB);
unconditionally would be enough to fix the bug, but it causes some
test failures due to stack slots not being merged when they were
before. So, in order to keep the existing tests passing, treat LiveIn
and LiveOut separately rather than approximating the live ranges by
merging LiveIn and LiveOut.
This fixes PR15707.
Patch by Mark Seaborn.
llvm-svn: 181922
BitVector/SmallBitVector::reference::operator bool remain implicit since
they model a bool more exactly, rather than something else that merely
supports boolean testing.
The most common (non-buggy) case are where such objects are used as
return expressions in bool-returning functions or as boolean function
arguments. In those cases I've used (& added if necessary) a named
function to provide the equivalent (or sometimes negative, depending on
convenient wording) test.
One behavior change (YAMLParser) was made, though no test case is
included as I'm not sure how to reach that code path. Essentially any
comparison of llvm::yaml::document_iterators would be invalid if neither
iterator was at the end.
This helped uncover a couple of bugs in Clang - test cases provided for
those in a separate commit along with similar changes to `operator bool`
instances in Clang.
llvm-svn: 181868
The personality function is user defined and may have an arbitrary result type.
The code always assumes i8*. This results in an assertion failure if a different
type is used. A bitcast to i8* is added to prevent this failure.
Reviewed by: Renato Golin, Bob Wilson
llvm-svn: 181802
It was just a less powerful and more confusing version of
MCCFIInstruction. A side effect is that, since MCCFIInstruction uses
dwarf register numbers, calls to getDwarfRegNum are pushed out, which
should allow further simplifications.
I left the MachineModuleInfo::addFrameMove interface unchanged since
this patch was already fairly big.
llvm-svn: 181680
This is only tested for global variables at the moment (& includes tests
for the unnamed parameter case, since apparently this entire function
was completely untested previously)
llvm-svn: 181632
for constructors and destructors since the original declaration given
by the AT_specification both won't and can't.
Patch by Yacine Belkadi, I've cleaned up the testcases.
llvm-svn: 181471
This provides basic functionality for imported declarations. For
subprograms and types some amount of lazy construction is supported (so
the definition of a function can follow the using declaration), but it
still doesn't handle declared-but-not-defined functions (since we don't
generally emit function declarations).
Variable support is really rudimentary at the moment - simply looking up
the existing definition with no support for out of order (declaration,
imported_module, then definition).
llvm-svn: 181392
DIBuilder::createImportedDeclaration isn't fully plumbed through (note,
lacking in AsmPrinter/DwarfDebug support) but this seemed like a
sufficiently useful division of code to make the subsequent patch(es)
easier to follow.
llvm-svn: 181364
Apparently we didn't keep an association of Compile Unit metadata nodes
to DIEs so looking up that parental context failed & thus caused no
DW_TAG_imported_modules to be emitted at the CU scope. Fix this by
adding the mapping & sure up the test case to verify this.
llvm-svn: 181339
Now even small structures can be passed within byval (those small enough
to be stored in GPRs).
In the regression tests, the following function prototypes are checked:
PR15293:
%artz = type { i32 }
define void @foo(%artz* byval %s)
define void @foo2(%artz* byval %s, i32 %p, %artz* byval %s2)
foo: "s" stored in R0
foo2: "s" stored in R0, "s2" stored in R2.
The following AAPCS rules are checked (a toy model in code follows the list):
5.5 Parameters Passing, C.4 and C.5,
"ParamSize" is parameter size in 32bit words:
-- NSAA != 0, NCRN < R4 and NCRN+ParamSize > R4.
Parameter should be sent to the stack; NCRN := R4.
-- NSAA != 0, and NCRN < R4, NCRN+ParamSize < R4.
Parameter stored in GPRs; NCRN += ParamSize.
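A loose, illustrative model of those rules (ArgState, allocateByVal, and the
splitting behavior are simplifications, not the backend's actual
calling-convention code):

#include <algorithm>
#include <iostream>

struct ArgState {
  unsigned NCRN = 0; // next core register number (r0..r3 -> 0..3)
  unsigned NSAA = 0; // next stacked argument address, in 32-bit words
};

// Returns how many words of a byval parameter land in registers; the rest
// goes to the stack.
unsigned allocateByVal(ArgState &S, unsigned ParamSize) {
  if (S.NSAA != 0 && S.NCRN < 4 && S.NCRN + ParamSize > 4) {
    // Stack already in use and the parameter would straddle r4: send the
    // whole parameter to the stack and mark the registers exhausted.
    S.NCRN = 4;
    S.NSAA += ParamSize;
    return 0;
  }
  unsigned InRegs = std::min(4u - S.NCRN, ParamSize);
  S.NCRN += InRegs;
  S.NSAA += ParamSize - InRegs;
  return InRegs;
}

int main() {
  // Mirrors foo2 above: "s" -> r0, "p" -> r1, "s2" -> r2.
  ArgState S;
  std::cout << allocateByVal(S, 1) << " word(s) of \"s\" in regs\n";
  std::cout << allocateByVal(S, 1) << " word(s) of \"p\" in regs\n";
  std::cout << allocateByVal(S, 1) << " word(s) of \"s2\" in regs\n";
}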
llvm-svn: 181148
at all of the operands. Previously it was skipping over implicit operands, which
caused infinite looping when the two-address pass tried to reschedule a
two-address instruction below the kill of the tied operand.
I'm unable to come up with a reasonably sized test case.
rdar://13747577
llvm-svn: 180906
the things, and renames it to CBindingWrapping.h. I also moved
CBindingWrapping.h into Support/.
This new file just contains the macros for defining different wrap/unwrap
methods.
The calls to those macros, as well as any custom wrap/unwrap definitions
(like for array of Values for example), are put into corresponding C++
headers.
Doing this required some #include surgery, since some .cpp files relied
on the fact that including Wrap.h implicitly caused the inclusion of a
bunch of other things.
This also now means that the C++ headers will include their corresponding
C API headers; for example Value.h must include llvm-c/Core.h. I think
this is harmless, since the C API headers contain just external function
declarations and some C types, so I don't believe there should be any
nasty dependency issues here.
llvm-svn: 180881
report a fatal error. This allows us to continue processing the translation
unit. Test case to come on the clang side because we need an inline asm
diagnostics handler in place.
rdar://13446483
llvm-svn: 180873
register-indirect address with an offset of 0.
It used to be that a DBG_VALUE is a register-indirect value if the offset
(operand 1) is nonzero. The new convention is that a DBG_VALUE is
register-indirect if the first operand is a register and the second
operand is an immediate. For plain registers use the combination reg, reg.
rdar://problem/13658587
llvm-svn: 180816
First, taking advantage of the fact that the virtual base registers are
allocated in order of the local frame offsets, remove the quadratic
register-searching behavior. Because of the ordering, we only need to check
the last virtual base register created.
Second, store the frame index in the FrameRef structure, and get the frame
index and the local offset from this structure at the top of the loop
iteration. This allows us to de-nest the loops in
insertFrameReferenceRegisters (and I think makes the code cleaner). I also
moved the needsFrameBaseReg check into the first loop over instructions so
that we don't bother pushing FrameRefs for instructions that don't want a
virtual base register anyway.
Lastly, and this is the only functionality change, avoid the creation of
single-use virtual base registers. These are currently not useful because, in
general, they end up replacing what would be one r+r instruction with an add
and an r+i instruction. Committing this removes the XFAIL in
CodeGen/PowerPC/2007-09-07-LoadStoreIdxForms.ll
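A toy version of the last-register check (BaseReg, Range, and baseRegFor are
invented; the real logic lives in insertFrameReferenceRegisters):

#include <iostream>
#include <vector>

// Toy base register: the frame offset it materializes.
struct BaseReg {
  int Offset;
};

constexpr int Range = 256; // made-up reach of the r+i addressing form

int baseRegFor(std::vector<BaseReg> &Regs, int FrameOffset) {
  // Base registers are created in increasing offset order, so instead of
  // scanning every existing one (quadratic), only the last one created can
  // possibly be reusable.
  if (!Regs.empty() && FrameOffset >= Regs.back().Offset &&
      FrameOffset - Regs.back().Offset < Range)
    return static_cast<int>(Regs.size()) - 1;
  Regs.push_back({FrameOffset});
  return static_cast<int>(Regs.size()) - 1;
}

int main() {
  std::vector<BaseReg> Regs;
  std::cout << baseRegFor(Regs, 0) << "\n";   // creates reg 0
  std::cout << baseRegFor(Regs, 128) << "\n"; // reuses reg 0
  std::cout << baseRegFor(Regs, 512) << "\n"; // creates reg 1
}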
Jim has okayed this off-list.
llvm-svn: 180799
The `llvm.tls_init_funcs' (created by the front-end) holds pointers to the TLS
initialization functions. These need to be placed into the correct section so
that they are run before `main()'.
<rdar://problem/13733006>
llvm-svn: 180737
to determine whether or not we're on a darwin platform for debug code
emitting.
Solves the problem of a module with no triple on the command line
and no triple in the module using non-gdb ok features on darwin. Fix
up the member-pointers test to check the correct things for cross
platform (DW_FORM_flag is a good prefix).
Unfortunately no testcase because I have no idea how to test something
without a triple and without a triple in the module yet check
precisely on two platforms. Ideas welcome.
llvm-svn: 180660
Clarify documentation and API to make the difference between register and
register-indirect addressed locations more explicit. Put in a comment
to point out that with the current implementation we cannot specify
a register-indirect location with offset 0 (a breg 0 in DWARF).
No functionality change intended.
rdar://problem/13658587
llvm-svn: 180641