This is the CodeGen equivalent of r153747. I tested that there is no noticeable
performance difference with any combination of -O0/-O2/-g when compiling
gcc as a single compilation unit.
llvm-svn: 153817
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and narrower interface than this.
llvm-svn: 153813
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.
This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:
- Build a simplification mapping based on the callsite arguments to the
function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
breadth-first ordering) and walk its instructions using a custom
InstVisitor.
- For each instruction's operands, re-map them based on the
simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
cost metric.
- When the terminator is reached, replace any conditional value in the
terminator with any simplifications from the mapping we have, and add
any successors which are not proven to be dead from these
simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
callsite, stop analyzing the function in order to bound cost.
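In rough code, the driver loop looks something like this. This is a minimal
sketch with hypothetical names (analyzeBlock, liveSuccessors, isInlineViable)
standing in for the real InlineCost.cpp machinery, not the actual
implementation:
    #include <deque>
    #include "llvm/ADT/SmallPtrSet.h"
    #include "llvm/ADT/SmallVector.h"
    #include "llvm/BasicBlock.h" // llvm/IR/... in newer trees
    #include "llvm/Function.h"

    // Hypothetical helpers: visit a block's instructions, simplifying them
    // against the callsite mapping and accumulating Cost; and return only
    // the successors not proven dead by those simplifications.
    void analyzeBlock(llvm::BasicBlock *BB, int &Cost);
    llvm::SmallVector<llvm::BasicBlock *, 4> liveSuccessors(llvm::BasicBlock *BB);

    bool isInlineViable(llvm::Function &F, int Threshold) {
      int Cost = 0;
      llvm::SmallPtrSet<llvm::BasicBlock *, 16> Visited;
      std::deque<llvm::BasicBlock *> Worklist;
      Worklist.push_back(&F.getEntryBlock());
      while (!Worklist.empty()) {
        llvm::BasicBlock *BB = Worklist.front(); // *front* => breadth-first
        Worklist.pop_front();
        if (!Visited.insert(BB).second)
          continue;
        analyzeBlock(BB, Cost);
        if (Cost > Threshold)
          return false; // bail out early to bound the analysis itself
        for (llvm::BasicBlock *Succ : liveSuccessors(BB))
          Worklist.push_back(Succ); // dead successors are never pushed
      }
      return true;
    }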
The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.
Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.
Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.
I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch; it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.
As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.
llvm-svn: 153812
visitor will now visit a CallInst and an InvokeInst with
instruction-specific visitors, then visit a generic CallSite visitor,
then delegate back to the Instruction visitor or the TerminatorInst
visitor, depending on whether it was originally a call or an invoke. This will
be used in the soon-to-land inline cost rewrite.
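For example, a visitor that wants uniform handling of calls and invokes can
now hook the CallSite level directly. A sketch against the headers of that
era (llvm/Support/InstVisitor.h and llvm/Support/CallSite.h); the counter
itself is made up:
    struct CallCounter : public llvm::InstVisitor<CallCounter> {
      unsigned Calls;
      CallCounter() : Calls(0) {}
      // Reached for both CallInst and InvokeInst; anything not handled
      // here still falls through to the generic visitors.
      void visitCallSite(llvm::CallSite CS) { ++Calls; }
    };
    // Usage: CallCounter CC; CC.visit(F); // F is an llvm::Function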
llvm-svn: 153811
Use an explicit comparator instead of the default.
The sets are sorted, but not using the default comparator. Hopefully,
this will unbreak the Linux builders.
llvm-svn: 153772
TableGen emits lists of sub-registers, super-registers, and overlaps. Put
them all in a single table and use a SequenceToOffsetTable to share
suffixes.
llvm-svn: 153761
1. The main work is now done in RuntimeDyldImpl, which uses the ObjectFile class. RuntimeDyldMachO and RuntimeDyldELF now only parse relocations and resolve them. This makes it easier to improve RuntimeDyld, and support for COFF can be added easily.
2. Added ARM relocations to RuntimeDyldELF.
3. Added support for stub functions on ARM, allowing long branches.
4. Added support for external functions that are not loaded from the object files but can be loaded from external libraries. MCJIT can now correctly execute code containing calls to printf, putc, etc.
5. Sections are now emitted instead of functions (thanks, Jim Grosbach). MemoryManager.startFunctionBody() and MemoryManager.endFunctionBody() have been removed.
6. MCJITMemoryManager.allocateDataSection() and MCJITMemoryManager.allocateCodeSection() now use JMM->allocateSpace() instead of JMM->allocateCodeSection() and JMM->allocateDataSection(), because the latter produced the error "Cannot allocate an allocated block!" when an object file contains more than one code or data section.
llvm-svn: 153754
Some targets still mess up the liveness information, but that isn't
verified after MRI->invalidateLiveness().
The verifier can still check other useful things like register classes
and CFG, so it should be enabled after all passes.
llvm-svn: 153615
blocks in the function cloner. This removes the last case of trivially
dead code that I've been seeing in the wild getting inlined, analyzed,
re-inlined, optimized, only to be deleted. Nukes a FIXME from the
cleanup tests.
llvm-svn: 153572
Late optimization passes like branch folding and tail duplication can
transform the machine code in a way that makes it expensive to keep the
register liveness information up to date. There is a fuzzy line between
register allocation and late scheduling where the liveness information
degrades.
The MRI::tracksLiveness() flag makes the line clear: While true,
liveness information is accurate, and can be used for register
scavenging. Once the flag is false, liveness information is not
accurate, and can only be used as a hint.
Late passes generally don't need the liveness information, but they will
sometimes use the register scavenger to help update it. The scavenger
enforces strict correctness, and we have to spend a lot of code to
update register liveness that may never be used.
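A sketch of the intended contract, assuming nothing beyond the flag itself
(the wrapper function is hypothetical):
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"

    bool canUseLivenessForScavenging(const llvm::MachineFunction &MF) {
      // While tracksLiveness() is true, kill flags, dead flags, and block
      // live-in lists are accurate and may drive register scavenging;
      // afterwards they are only a hint.
      return MF.getRegInfo().tracksLiveness();
    }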
llvm-svn: 153511
bit simpler by handling a common case explicitly.
Also, refactor the implementation to use a worklist based walk of the
recursive users, rather than trying to use value handles to detect and
recover from RAUWs during the recursive descent. This fixes a very
subtle bug in the previous implementation where degenerate control flow
structures could cause mutually recursive instructions (PHI nodes) to
collapse in just such a way that From became equal to To after some
amount of recursion. At that point, we hit the inf-loop that the assert
at the top attempted to guard against. This problem is defined away when
not using value handles in this manner. There are lots of comments
claiming that the WeakVH will protect against just this sort of error,
but they're not accurate about the actual implementation of WeakVHs,
which do still track RAUWs.
I don't have any test case for the bug this fixes because it requires
running the recursive simplification on unreachable phi nodes. I've no
way to either run this or easily write an input that triggers it. It was
found when using instruction simplification inside the inliner when
running over the nightly test-suite.
llvm-svn: 153393
This is necessary if the client wants to be able to mutate TargetOptions (for example, fast FP math mode) after the initial creation of the ExecutionEngine.
llvm-svn: 153342
(and hopefully on Windows). The bots have been down most of the day
because of this, and it's not clear to me what all will be required to
fix it.
The commits started with r153205, then r153207, r153208, and r153221.
The first commit seems to be the real culprit, but I couldn't revert
a smaller number of patches.
When resubmitting, r153207 and r153208 should be folded into r153205,
they were simple build fixes.
llvm-svn: 153241
1. Declare a virtual function getPointerToNamedFunction() in JITMemoryManager
2. Move the implementation of getPointerToNamedFunction() from JIT/MCJIT to DefaultJITMemoryManager.
llvm-svn: 153205
Remaining "uncategorized" functions have been organized into their
proper place in the hierarchy. Some functions were moved around so
groups are defined together.
No code changes were made.
llvm-svn: 153169
This gives a lot of love to the docs for the C API. Like Clang's
documentation, the C API is now organized into a Doxygen "module"
(LLVMC). Each C header file is a child of the main module. Some modules
(like Core) have a hierarchy of their own. The produced documentation is
thus better organized (before everything was in one monolithic list).
This patch also includes a lot of new documentation for APIs in Core.h.
It doesn't document them all, but it's better than none. Function docs are
missing @param and @return annotations, but the documentation body now
commonly provides helpful details (like the llvm::Value sub-type to
expect).
llvm-svn: 153157
ImmutAVLTree uses random unsigned values as keys into a DenseMap,
which could possibly happen to be the same value as the Tombstone or
Empty keys in the DenseMap.
Test case is hard to come up with. We randomly get failures on the
internal static analyzer bot, which most likely hits this issue
(hard to be 100% sure without the full stack).
llvm-svn: 153148
violations I introduced. Also sort some of the instructions to get
a more consistent ordering.
Suggestions on still better / more consistent formatting would be
welcome. I'm actually tempted to use a macro to define all of the
delegate methods...
llvm-svn: 153030
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.
Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.
The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.
The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern, we were just needlessly paying the uniquing cost on every
insert.
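That pattern is the usual sort+unique dance. A minimal sketch of the idiom
referred to above (the helper name is made up):
    #include <algorithm>
    #include <vector>

    // Phase 1 (elsewhere): bulk-append, duplicates allowed. Phase 2 (here):
    // pay the sort+unique cost once rather than on every insert.
    template <typename T> void sortAndUnique(std::vector<T> &V) {
      std::sort(V.begin(), V.end());
      V.erase(std::unique(V.begin(), V.end()), V.end());
    }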
This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.
I believe there is no functional change here, but yell if you spot one.
None are intended.
Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.
llvm-svn: 152903
analysis implementation. The header was already separated. Also cleanup
all the comments in the header to follow a nice modern doxygen form.
There is still plenty of cruft here, but some of that will fall out in
subsequent refactorings and this was an easy step in the right
direction. No functionality changed here.
llvm-svn: 152898
Only record IVUsers that are dominated by simplified loop
headers. Otherwise SCEVExpander will crash while looking for a
preheader.
I previously tried to work around this in LSR itself, but that was
insufficient. This way, LSR can continue to run if some uses are not
in simple loops, as long as we don't attempt to analyze those users.
Fixes <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce
llvm-svn: 152892
It caused MSP430DAGToDAGISel::SelectIndexedBinOp() to be miscompiled.
When the two ReplaceUses() calls are expanded inline, the base class vtable is stored to the latter (ISelUpdater)ISU.
llvm-svn: 152877
We cannot limit the concatenated instruction names to 64K. ARM is
already at 32K, and it is easy to imagine a target with more
instructions.
llvm-svn: 152817
correlated pairs of pointer arguments at the callsite. This is designed
to recognize the common C++ idiom of begin/end pointer pairs when the
end pointer is a constant offset from the begin pointer. With the
C-based idiom of a pointer and size, the inline cost saw the constant
size calculation, and this provides the same level of information for
begin/end pairs.
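For illustration, a hypothetical function showing the idiom in question:
    // With a call like sum(Buf, Buf + 4), the analysis can now see that
    // End - Begin is a known constant -- the same information the C-style
    // sum(Buf, 4) form already exposed through its constant size argument.
    int sum(const int *Begin, const int *End) {
      int Total = 0;
      for (const int *P = Begin; P != End; ++P)
        Total += *P;
      return Total;
    }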
In order to propagate this information we have to search for candidate
operations on a pair of pointer function arguments (or derived from
them) which would be simplified if the pointers had a known constant
offset. Then the callsite analysis looks for such pointer pairs in the
argument list, and applies the appropriate bonus.
This helps LLVM detect that half of bounds-checked STL algorithms
(such as hash_combine_range, and some hybrid sort implementations)
disappear when inlined with a constant size input. However, it's not
a complete fix due to the inaccuracy of our cost metric for constants in
general. I'm looking into that next.
Benchmarks showed no significant code size change, and very minor
performance changes. However, specific code such as hashing is showing
significantly cleaner inlining decisions.
llvm-svn: 152752
Commit r152704 exposed a latent MSVC limitation (aka bug).
Both ilist and iplist contain the same function:
    template<class InIt> void insert(iterator where, InIt first, InIt last) {
      for (; first != last; ++first) insert(where, *first);
    }
Also, ilist inherits from iplist, and ilist contains a "using iplist<NodeTy>::insert".
MSVC doesn't know which one to pick and complains with an error.
I think it is safe to delete ilist::insert since it is redundant anyway.
llvm-svn: 152746
New flags: -misched-topdown, -misched-bottomup. They can be used with
the default scheduler or with -misched=shuffle. Without either
topdown/bottomup flag -misched=shuffle now alternates scheduling
direction.
LiveIntervals update is unimplemented with bottom-up scheduling, so
only -misched-topdown currently works.
Capped the ScheduleDAG hierarchy with a concrete ScheduleDAGMI class.
ScheduleDAGMI is aware of the top and bottom of the unscheduled zone
within the current region. Scheduling policy can be plugged into
the ScheduleDAGMI driver by implementing MachineSchedStrategy.
ConvergingScheduler is now the default scheduling algorithm.
It exercises the new driver but still does no reordering.
llvm-svn: 152700
take a TargetLibraryInfo parameter. Internally, rather than passing TD, TLI
and DT parameters around all over the place, introduce a struct for holding
them.
llvm-svn: 152623
Also refactor the existing OProfile profiling code to reuse the same interfaces with the VTune profiling code.
In addition, unit tests for the profiling interfaces were added.
This patch was prepared by Andrew Kaylor and Daniel Malea, and reviewed in the llvm-commits list by Jim Grosbach
llvm-svn: 152620
Renamed the methods caseBegin, caseEnd, and caseDefault to case_begin, case_end, and case_default.
Added some notes about case iterators.
llvm-svn: 152532
a common collection of methods on Value, and share their implementation.
We had two variations in two different places already, and I need the
third variation for inline cost estimation.
Reviewed by Duncan Sands on IRC, but further comments here welcome.
llvm-svn: 152490
* Add enums and structures for GNU version information.
* Implement extraction of that information on a per-symbol basis (ELFObjectFile::getSymbolVersion).
* Implement a generic interface, GetELFSymbolVersion(), for getting the symbol version from the ObjectFile (hides the templating).
* Have llvm-readobj print out the version, when available.
* Add a test for the new feature: readobj-elf-versioning.test
llvm-svn: 152436
caused several clients to select the slow variation. =[ This is extra
annoying because we don't have any realistic way of testing this -- by
design, these two functions *must* compute the same value.
Found while inspecting the output of some benchmarks I'm working on.
llvm-svn: 152369
buildbots. Original commit message:
[ADT] Change the trivial FoldingSetNodeID::Add* methods to be inline, reapplied
with a fix for the longstanding over-read of 32-bit pointer values.
llvm-svn: 152304
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html
Implemented CaseIterator, which solves almost all of the issues described: we don't need to mix operand/case/successor indexing anymore. The base iterator class is implemented as a template since it may be initialized either from "const SwitchInst*" or from "SwitchInst*".
ConstCaseIt is just a read-only iterator.
CaseIt is the read-write iterator; it allows changing the case successor and case value.
Using the iterators allows us to remove the resolveXXXX methods entirely. All index conversions are done automatically inside the iterator's getters.
The main way of using the iterators looks like this:
    SwitchInst *SI = ... // initialize it somehow
    for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
      BasicBlock *BB = i.getCaseSuccessor();
      ConstantInt *V = i.getCaseValue();
      // Do something.
    }
If you want to convert a case number to a TerminatorInst successor index, just use the iterator's getSuccessorIndex method.
If you want to initialize an iterator from a TerminatorInst successor index, use the CaseIt::fromSuccessorIndex(...) method.
There are also related changes in llvm-clients: klee and clang.
llvm-svn: 152297
Original commit message:
Use uint16_t to store InstrNameIndices in MCInstrInfo. Add asserts to protect all 16-bit string table offsets. Also make sure the string to offset table string is not larger than 65536 characters since larger string literals aren't portable.
llvm-svn: 152296
analysis to be methods on the cost analysis's function info object
instead of the code metrics object. These really are just users of the
code metrics, they're building the information for the function's
analysis.
This is the first step of growing the amount of information we collect
about a function in order to cope with pair-wise simplifications due to
allocas.
llvm-svn: 152283
Allow targets to provide their own schedulers (subclass of
ScheduleDAGInstrs) to the misched pass. Select schedulers using
-misched=...
llvm-svn: 152278
Original commit message:
Use uint16_t to store InstrNameIndices in MCInstrInfo. Add asserts to protect
all 16-bit string table offsets. Also make sure the string to offset table
string is not larger than 65536 characters since larger string literals aren't
portable.
llvm-svn: 152233
compilers. It seems that GCC 4.3 (and likely older) simply aren't going
to do SFINAE on non-type template parameters the way Clang and modern
GCCs do...
Now we detect the implicit conversion to an integer type, and then
blacklist classes, pointers, and floating point types. This seems to
work well enough, and I'm hopeful it will return the bots to life.
llvm-svn: 152227
traits.
With this change, the pattern used here is *extremely* close to the
pattern used elsewhere in the file, so I'm hoping it survives the
build-bots.
llvm-svn: 152225
template argument and an *implicit* conversion from '0' to a null
pointer. For some bizarre reason, GCC 4.3.2 thinks that the cast to
'(T*)' is invalid inside of an enumerator's value... which it isn't but
whatever. ;] This pattern is used elsewhere in the type_traits header
and so hopefully will survive the wrath of the build bots.
llvm-svn: 152220
Deleting them because they aren't used. =D
Yell if you need these, I'm happy to instead replace them with nice uses
of the new infrastructure.
llvm-svn: 152219
integral and enumeration types. This is accomplished with a bit of
template type trait magic. Thanks to Richard Smith for the core idea
here to detect viable types by detecting the set of types which can be
default constructed in a template parameter.
This is used (in conjunction with a system for detecting nullptr_t
should it exist) to provide an is_integral_or_enum type trait that
doesn't need a whitelist or direct compiler support.
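A rough sketch of the trick -- simplified and hypothetical, and it leans on
C++11 <type_traits> for the exclusions, which the real pre-C++11 trait had
to avoid:
    #include <type_traits>

    template <typename T> struct is_integral_or_enum {
      static char test(long long); // chosen when T converts to an integer
      static int test(...);        // fallback for everything else
      static T &make();            // never defined; only used unevaluated

      static const bool value =
          !std::is_class<T>::value &&           // blacklist classes (operator int())
          !std::is_pointer<T>::value &&         // blacklist pointers
          !std::is_floating_point<T>::value &&  // blacklist float/double
          sizeof(test(make())) == sizeof(char); // detect integer conversion
    };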
With this, the hashing is extended to the more general facility. This
will be used in a subsequent commit to hashing more things, but I wanted
to make sure the type trait magic went through the build bots separately
in case other compilers don't like this formulation.
llvm-svn: 152217
ScheduleDAG is responsible for the DAG: SUnits and SDeps. It provides target hooks for latency computation.
ScheduleDAGInstrs extends ScheduleDAG and defines the current scheduling region in terms of MachineInstr iterators. It has access to the target's scheduling itinerary data. ScheduleDAGInstrs provides the logic for building the ScheduleDAG for the sequence of MachineInstrs in the current region. Targets can implement highly custom schedulers by extending this class.
ScheduleDAGPostRATDList provides the driver and diagnostics for current postRA scheduling. It maintains a current Sequence of scheduled machine instructions and logic for splicing them into the block. During scheduling, it uses the ScheduleHazardRecognizer provided by the target.
Specific changes:
- Removed driver code from ScheduleDAG. clearDAG is the only interface needed.
- Added enterRegion/exitRegion hooks to ScheduleDAGInstrs to delimit the scope of each scheduling region and associated DAG. They should be used to setup and cleanup any region-specific state in addition to the DAG itself. This is necessary because we reuse the same ScheduleDAG object for the entire function. The target may extend these hooks to do things at regions boundaries, like bundle terminators. The hooks are called even if we decide not to schedule the region. So all instructions in a block are "covered" by these calls.
- Added ScheduleDAGInstrs::begin()/end() public API.
- Moved Sequence into the driver layer, which is specific to the scheduling algorithm.
llvm-svn: 152208
"is sized". This prevents every query to isSized() from recursing over
every sub-type of a struct type. This could get *very* slow for
extremely deep nesting of structs, as in 177.mesa.
This change is a 45% speedup for 'opt -O2' of 177.mesa.linked.bc, and
likely a significant speedup for other cases as well. It even impacts
-O0 cases because so many part of the code try to check whether a type
is sized.
Thanks for the review from Nick Lewycky and Benjamin Kramer on IRC.
llvm-svn: 152197
This currently assumes that both sets have the same SmallSize to keep the implementation simple,
a limitation that can be lifted if someone cares.
llvm-svn: 152143
With the new composite physical registers to represent arbitrary pairs
of DPR registers, we don't need the pseudo-registers anymore. Get rid of
a bunch of them that use DPR register pairs and just use the real
instructions directly instead.
llvm-svn: 152045
complains about the truncation of a 64-bit constant to a 32-bit value
when size_t is 32 bits wide, but *only with static_cast*!!! That cast is
the exact signal that should *silence* such a warning, and in fact does
silence it with both GCC and Clang.
Anyways, this was causing grief for all the MSVC builds, so pointless
change made. Thanks to Nikola on IRC for confirming that this works.
llvm-svn: 152021
MachineOperands that define part of a virtual register must have an
<undef> flag if they are not intended as read-modify-write operands.
The old trick of adding an <imp-def> operand doesn't work any longer.
Fixes PR12177.
llvm-svn: 152008
new hash_value infrastructure, and replace their implementations using
hash_combine. This removes a complete copy of Jenkins' lookup3 hash
function (which is both significantly slower and lower quality than the
one implemented in hash_combine) along with a somewhat scary xor-only
hash function.
Now that APInt and APFloat can be passed directly to hash_combine,
simplify the rest of the LLVMContextImpl hashing to use the new
infrastructure.
llvm-svn: 152004
folks who know something about PPC tell me that the byte swap is crazy
fast and without this the bit mixture would actually be different. It
might not be worse, but I've not measured it and so I'd rather not trust
it. This way, the algorithm is identical on both endianness hosts. I'll
look into any performance issues etc stemming from this.
llvm-svn: 151892
just ensure that the number of bytes in the pair is the sum of the bytes
in each side of the pair. As long as that's true, there are no extra
bytes that might be padding.
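In code, the simplified check reads roughly like this -- a sketch reusing
the is_hashable_data trait discussed in the next message; the wrapper name
is made up:
    #include <utility>

    // A pair may be hashed as raw bytes only when both halves may be, and
    // when its size is exactly the sum of the element sizes -- i.e., no
    // padding bytes anywhere in the pair.
    template <typename T, typename U> struct pair_is_hashable_data {
      static const bool value =
          is_hashable_data<T>::value && is_hashable_data<U>::value &&
          sizeof(T) + sizeof(U) == sizeof(std::pair<T, U>);
    };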
Also add a few tests that previously would have slipped through the
checking. The more accurate checking mechanism catches these and ensures
they are handled conservatively correctly.
Thanks to Duncan for prodding me to do this right and more simply.
llvm-svn: 151891
hashable data. This matters when we have pair<T*, U*> as a key, which is
quite common in DenseMap, etc. To that end, we need to detect when this
is safe. The requirements on a generic std::pair<T, U> are:
1) Both T and U must satisfy the existing is_hashable_data trait. Note
that this includes the requirement that T and U have no internal
padding bits or other bits not contributing directly to equality.
2) The alignment constraints of std::pair<T, U> do not require padding
between consecutive objects.
3) The alignment constraints of U and the size of T do not conspire to
require padding between the first and second elements.
Grow two somewhat magical traits to detect this by forming a pod
structure and inspecting offset artifacts on it. Hopefully this won't
cause any compilers to panic.
Added and adjusted tests now that pairs, even nested pairs, are treated
as just sequences of data.
Thanks to Jeffrey Yasskin for helping me sort through this and reviewing
the somewhat subtle traits.
llvm-svn: 151883
an open question of whether we can do better than this by treating pairs
as boring data containers and directly hashing the two subobjects. This
at least makes the API reasonable.
In order to make this change, I reorganized the header a bit. I lifted
the declarations of the hash_value functions up to the top of the header
with their doxygen comments as these are intended for users to interact
with. They shouldn't have to wade through implementation details. I then
defined them at the very end so that they could be defined in terms of
hash_combine or any other hashing infrastructure.
Added various pair-hashing unittests.
llvm-svn: 151882
the hash_code. I'm not sure what I was thinking here; the use cases for
special values are in the *keys*, not in the hashes of those keys.
We can always resurrect this if needed, or clients can accomplish the
same goal themselves. This makes the general case somewhat faster (~5
cycles faster on my machine) and smaller with less branching.
llvm-svn: 151865
of the proposed standard hashing interfaces (N3333), and to use
a modified and tuned version of the CityHash algorithm.
Some of the highlights of this change:
-- Significantly higher quality hashing algorithm with very well
distributed results, and extremely few collisions. Should be close to
a checksum for up to 64-bit keys. Very little clustering or clumping of
hash codes, to better distribute load on probed hash tables.
-- Built-in support for reserved values.
-- Simplified API that composes cleanly with other C++ idioms and APIs.
-- Better scaling performance as keys grow. This is the fastest
algorithm I've found and measured for moderately sized keys (such as
show up in some of the uniquing and folding use cases).
-- Support for enabling per-execution seeds to prevent table ordering
or other artifacts of hashing algorithms from impacting the output of
LLVM. The seeding would make each run different and highlight these
problems during bootstrap.
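As a usage illustration: hash_combine and hash_combine_range are the entry
points in the new header; the surrounding function here is hypothetical:
    #include "llvm/ADT/Hashing.h"

    llvm::hash_code hashRecord(int Kind, const void *Ptr,
                               const int *Data, unsigned N) {
      // Compose fixed fields with a whole range of values in one call.
      return llvm::hash_combine(Kind, Ptr,
                                llvm::hash_combine_range(Data, Data + N));
    }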
This implementation was tested extensively using the SMHasher test
suite, and passed with flying colors, doing better than the original
CityHash algorithm even.
I've included a unittest, although it is somewhat minimal at the moment.
I've also added (or refactored into the proper location) type traits
necessary to implement this, and converted users of GeneralHash over.
My only immediate concerns with this implementation is the performance
of hashing small keys. I've already started working to improve this, and
will continue to do so. Currently, the only algorithms faster produce
lower quality results, but it is likely there is a better compromise
than the current one.
Many thanks to Jeffrey Yasskin who did most of the work on the N3333
paper, pair-programmed some of this code, and reviewed much of it. Many
thanks also go to Geoff Pike and Jyrki Alakuijala, the original
authors of CityHash on which this is heavily based, and Austin Appleby
who created MurmurHash and the SMHasher test suite.
Also thanks to Nadav, Tobias, Howard, Jay, Nick, Ahmed, and Duncan for
all of the review comments! If there are further comments or concerns,
please let me know and I'll jump on 'em.
llvm-svn: 151822
Allows us to de-virtualize the function and provides access to it in
the instruction printer, which is useful for handling composite
physical registers (e.g., ARM register lists).
llvm-svn: 151815
This allows us to make TRC non-polymorphic and value-initializable, eliminating a huge static
initializer and a ton of cruft from the generated code.
Shrinks ARMBaseRegisterInfo.o by ~100k.
llvm-svn: 151806
Simply treat bundles as instructions. Spill code is inserted between
bundles, never inside a bundle. Rewrite all operands in a bundle at
once.
Don't attempt any memory operand folding inside bundles.
llvm-svn: 151787
* Add begin_dynamic_table() / end_dynamic_table() private interface to ELFObjectFile.
* Add begin_libraries_needed() / end_libraries_needed() interface to ObjectFile, for grabbing the list of needed libraries for a shared object or dynamic executable.
* Implement this new interface completely for ELF, leave stubs for COFF and MachO.
* Add 'llvm-readobj' tool for dumping ObjectFile information.
llvm-svn: 151785
This allows the function to be inlined, and makes it suitable for use in
getInstructionIndex().
Also provide a const version. C++ is great for touch typing practice.
llvm-svn: 151782
std::vector.
- Good for 1-2% speedup on writing PCH for Cocoa.h.
- Clang side API match to follow shortly; there wasn't an easy way to make this
non-breaking.
llvm-svn: 151750
Rename ST_External to ST_Unknown, and slightly change its semantics. It now only indicates that the symbol's type
is unknown, not that the symbol is undefined. (For that, use ST_Undefined).
llvm-svn: 151696
This function does more or less the same as
MI::readsWritesVirtualRegister(), but it supports bundles as well.
It also determines if any constraint requires reading and writing
operands to use the same register. Most clients want to know.
Use the more modern MO.readsReg() instead of trying to sort out undefs
and partial redefines. Stop supporting the extra full <imp-def> operand
as an alternative to <def,undef> sub-register defines.
llvm-svn: 151690
Extract a base class and provide four specific sub-classes for iterating
over const/non-const bundles/instructions.
This eliminates the mystery bool constructor argument.
llvm-svn: 151684
SlotIndexes are not assigned to instructions inside bundles, but it is
still valid to look up the index of those instructions.
The reverse getInstructionFromIndex() will return the first instruction
in the bundle.
llvm-svn: 151672
debug info for assembly files. We were already doing the right thing when
producing debug info for C/C++.
ELF linkers don't know dwarf, so they depend on these relocations to produce
valid dwarf output.
llvm-svn: 151655
the processor keeps a return address stack (RAS) which stores the address
and the instruction execution state of the instruction after a function-call
type branch instruction.
Calling a "noreturn" function with normal call instructions (e.g. bl) can
corrupt the RAS and cause 100% return misprediction, so LLVM should use an
unconditional branch instead, i.e.
    mov lr, pc
    b _foo
The "mov lr, pc" is issued in order to get a proper backtrace.
rdar://8979299
llvm-svn: 151623
We rely on the linker to resolve calls to the appropriate BL/BLX instruction
to make interworking function correctly. It uses the symbol in the
relocation to do that, so we need to be careful about being too clever.
To enable this for ARM mode, split the BL/BLX fixup kind off from the
unconditional-branch fixups.
rdar://10927209
llvm-svn: 151571
The MIOperands iterator can visit operands on a single instruction, or
all operands in a bundle. This simplifies code like the register
allocator that treats bundles as a set of operands.
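Typical use, as of the later split into per-instruction and per-bundle
variants noted above, looks something like this (a sketch; the helper and
its policy are hypothetical):
    #include "llvm/ADT/SmallVector.h"
    #include "llvm/CodeGen/MachineInstrBundle.h"

    // Collect every register read by MI. Constructing MIBundleOperands
    // instead of MIOperands would walk all instructions in MI's bundle.
    void collectUses(llvm::MachineInstr *MI,
                     llvm::SmallVectorImpl<unsigned> &Uses) {
      for (llvm::MIOperands MIO(MI); MIO.isValid(); ++MIO)
        if (MIO->isReg() && MIO->readsReg())
          Uses.push_back(MIO->getReg());
    }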
llvm-svn: 151529
verifier does. This correctly handles invoke.
Thanks to Duncan, Andrew and Chris for the comments.
Thanks to Joerg for the early testing.
llvm-svn: 151469
are optimization hints, but at -O0 we're not optimizing. This becomes a problem
when the alwaysinline attribute is abused.
rdar://10921594
llvm-svn: 151429
it with memcpy. This also fixes a problem on big-endian hosts, where
addUnaligned would return different results depending on the alignment
of the data.
llvm-svn: 151247
The standard function epilog includes a .size directive, but ppc64 uses
an alternate local symbol to tag the actual start of each function.
Until recently, binutils accepted the .size directive as:
.size test1, .Ltmp0-test1
however, using this directive with recent binutils will result in the error:
.size expression for XXX does not evaluate to a constant
so we must use the label which actually tags the start of the function.
llvm-svn: 151200
Add some data structures to represent for loops. These will be
referenced during object processing to do any needed iteration and
instantiation.
Add foreach keyword support to the lexer.
Add a mode to indicate that we're parsing a foreach loop. This allows
the value parser to early-out when processing the foreach value list.
Add a routine to parse foreach iteration declarations. This is
separate from ParseDeclaration because the type of the named value
(the iterator) doesn't match the type of the initializer value (the
value list). It also needs to add two values to the foreach record:
the iterator and the value list.
Add parsing support for foreach.
Add the code to process foreach loops and create defs based
on iterator values.
Allow foreach loops to be matched at the top level.
When parsing an IDValue check if it is a foreach loop iterator for one
of the active loops. If so, return a VarInit for it.
Add Emacs keyword support for foreach.
Add VIM keyword support for foreach.
Add tests to check foreach operation.
Add TableGen documentation for foreach.
Support foreach with multiple objects.
Support non-braced foreach body with one object.
Do not require types for the foreach declaration. Assume the iterator
type from the iteration list element type.
llvm-svn: 151164
chip in r139383, and the PSP components of the triple are really
annoying to parse. Let's leave this chapter behind. There is no reason
to expect LLVM to see a PSP-related triple these days, and so no
reasonable motivation to support them.
It might be reasonable to prune a few of the older MIPS triple forms in
general, but as those at least cause no burden on parsing (they aren't
both a chip and an OS!), I'm happy to leave them in for now.
llvm-svn: 151156
Effect on SD scheduling and postRA scheduling:
Printing the DAG will display the nodes in top-down topological order.
This matches the order within the MBB and makes my life much easier in general.
Effect on misched:
We don't need to track virtual register uses at all. This is awesome.
I also intend to rely on the SUnit ID as a topo-sort index. So if A < B then we cannot have an edge B -> A.
llvm-svn: 151135
For objects that can be identified by small unsigned keys, SparseSet
provides constant time clear() and fast deterministic iteration. Insert,
erase, and find operations are typically faster than hash tables.
SparseSet is useful for keeping information about physical registers,
virtual registers, or numbered basic blocks.
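A usage sketch (the values and function are hypothetical; the container is
llvm/ADT/SparseSet.h):
    #include "llvm/ADT/SparseSet.h"

    void trackRegs(unsigned NumRegs) {
      llvm::SparseSet<unsigned> LiveRegs;
      LiveRegs.setUniverse(NumRegs); // every key must be < NumRegs
      LiveRegs.insert(7u);
      llvm::SparseSet<unsigned>::iterator I = LiveRegs.find(7u);
      if (I != LiveRegs.end())
        LiveRegs.erase(I);
      LiveRegs.clear();              // constant time, any universe size
    }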
llvm-svn: 151110
bundles. This method takes a bundle start and an MI being bundled, and makes
the intervals for the MI's operands appear to start/end on the bundle start.
Also fixes some minor cosmetic issues (whitespace, naming convention) in the
HMEditor code.
llvm-svn: 151099
They're private static methods but we can just make them static
functions in the implementation. It makes the implementations a touch
more wordy, but takes another chunk out of the header file.
Also, take the opportunity to switch the names to the new coding
conventions.
No functionality changed here.
llvm-svn: 151047
Passes after RegAlloc should be able to rely on MRI->getNumVirtRegs() == 0.
This makes sharing code for pre/postRA passes more robust.
Now, to check if a pass is running before the RA pipeline begins, use MRI->isSSA().
To check if a pass is running after the RA pipeline ends, use !MRI->getNumVirtRegs().
PEI resets virtual regs when it's done scavenging.
PTX will either have to provide its own PEI pass or assign physregs.
llvm-svn: 151032
construction. Simplify its interface, implementation, and users
accordingly as there is no longer an 'uninitialized' state to check for.
Also, fixes a bug lurking in the interface as there was one method that
didn't correctly check for initialization.
llvm-svn: 151024
MRI keeps track of which physregs have been used. Make sure it gets
updated with all the regmask-clobbered registers.
Delete the closePhysRegsUsed() function which isn't necessary.
llvm-svn: 150830
metadata may still unwind, but only in ways that the ARC
optimizer doesn't need to consider. This permits more
aggressive optimization.
llvm-svn: 150829
any changes.
Internally this adds a private inner class, HMEditor, to LiveIntervals. HMEditor provides
an API for updating live intervals when code is moved or bundled.
llvm-svn: 150826
Fix the type of eh_frame on Solaris so that Sun ld doesn't fail to combine them (thus making it impossible for the unwind library to find them and breaking exceptions).
llvm-svn: 150814
The existing framework for postra scheduling is library local. We want to keep it that way. Soon we will have a more general MachineScheduler interface. At that time, various bits will be exposed to targets. In the meantime, the VLIWPacketizer wants to use ScheduleDAGInstrs directly, so it needs to be wrapped in a PIMPL to avoid exposing it to the target interface.
llvm-svn: 150633
method. This allows the target lowering code to not have to deal with MDNodes.
Also, avoid leaking memory like a sieve by not creating a global variable for
the image info section, but just emitting the code directly.
llvm-svn: 150624
Accomplished by moving the body of StringRef::edit_distance into
a separate function that accepts two ArrayRefs, and making
StringRef::edit_distance a wrapper around the new function.
llvm-svn: 150621
The llc command line options for enabling/disabling passes are local to CodeGen/Passes.cpp. This patch associates those options with standard pass IDs so they work regardless of how the target configures the passes.
A target has two ways of overriding standard passes:
1) Redefine the pass pipeline (override TargetPassConfig::add%Stage)
2) Replace or suppress individual passes with TargetPassConfig::substitutePass.
In both cases, the command line options associated with the pass override the target default.
For example, say a target wants to disable machine instruction scheduling by default:
- The target calls disablePass(MachineSchedulerID) but otherwise does not override any TargetPassConfig methods.
- Without any llc options, no scheduler is run.
- With -enable-misched, the standard machine scheduler is run and honors the -misched=... flag to select the scheduler variant, which may be used for performance evaluation or testing.
Sorry overridePass is ugly. I haven't thought of a better way without replacing the cl::opt framework. I hope to do that one day...
I haven't figured out why CodeGen uses char& for pass IDs. AnalysisID is much easier to use and less bug prone. I'm using it wherever I can for internal implementation. Maybe later we can change the global pass ID definitions as well.
llvm-svn: 150563
Only accept register masks when looking for an 'overlapping' def. When
Overlap is not set, the function searches for a proper definition of
Reg.
This means MI->modifiesRegister() considers register masks, but
MI->definesRegister() doesn't.
llvm-svn: 150529
The MachO back-end needs to emit the garbage collection flags specified in the
module flags. This is a WIP, so the front-end hasn't been modified to emit these
flags just yet. Documentation and front-end switching to occur soon.
llvm-svn: 150507
This is useful for clients that want to maintain compatibility
across multiple releases of LLVM. Currently users like Klee and
Mesa all have to roll their own 'parse llvm-config --version
output and generate defines' solution.
Also reuse the new macros so that version information is less
redundant/likely to fall out of sync again in the future.
llvm-svn: 150405
- Use unsigned literals when the desired result is unsigned. This mostly allows unsigned/signed mismatch warnings to be less noisy even if they aren't on by default.
- Remove misplaced llvm_unreachable.
- Add static to a declaration of a function on MSVC x86 only.
- Change some instances of calling a static function through a variable to simply calling that function while removing the unused variable.
llvm-svn: 150364
to what's done for MachO and COFF. This allows advanced uses of the class to
be implemented outside the Object library. In particular, the DyldELFObject
subclass is now moved into its logical home - ExecutionEngine/RuntimeDyld.
This patch was reviewed by Michael Spencer.
llvm-svn: 150327
Module flags are key-value pairs associated with the module. They include a
'behavior' value, indicating how module flags react when merging two
files. Normally, it's just the union of the two module flags. But if two module
flags have the same key, then the resulting flags are dictated by the behaviors.
Allowable behaviors are:
Error
Emits an error if two values disagree.
Warning
Emits a warning if two values disagree.
Require
Emits an error when the specified value is not present
or doesn't have the specified value. It is an error for
two (or more) llvm.module.flags with the same ID to have
the Require behavior but different values. There may be
multiple Require flags per ID.
Override
Uses the specified value if the two values disagree. It
is an error for two (or more) llvm.module.flags with the
same ID to have the Override behavior but different
values.
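From C++, attaching a flag looks roughly like this. A sketch against the
Module API (header path per the trees of that era; the flag name is made
up):
    #include "llvm/Module.h" // llvm/IR/Module.h in newer trees

    void tagModule(llvm::Module &M) {
      // Error behavior: linking in a module that disagrees on the value
      // of "my-abi-flag" will fail.
      M.addModuleFlag(llvm::Module::Error, "my-abi-flag", 1);
    }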
llvm-svn: 150300
In case the MachineScheduling pass I'm working on doesn't work well
for another target, they can completely override it. This also adds a
hook immediately after the RegAlloc pass to cleanup immediately after
vregs go away. We may want to fold it into the postRA hook later.
llvm-svn: 150298
It can be necessary to detach a register mask pointer from its
MachineOperand. This method is convenient for checking clobbered
physregs on a detached bitmask pointer.
llvm-svn: 150261
These query functions are safe for external use and, furthermore,
are the only way to make queries against the "unknown instructions" array.
BBVectorize will use these functions.
llvm-svn: 150248
is that patterns no longer match for vectors of booleans, because you only get
ConstantDataVector when the vector element type is i8, i16, etc, not when it is
i1). Original commit message:
Remove some dead code and tidy things up now that vectors use ConstantDataVector
instead of always using ConstantVector.
llvm-svn: 150246
Make them accessible through MCInstrInfo. They are only used for debugging purposes so this doesn't
have an impact on performance. X86MCTargetDesc.o goes from 630K to 461K on x86_64.
llvm-svn: 150245
Creates a configurable regalloc pipeline.
Ensure specific llc options do what they say and nothing more: -regalloc=... has no effect other than selecting the allocator pass itself. This patch introduces a new umbrella flag, "-optimize-regalloc", to enable/disable the optimizing regalloc "superpass". This allows, for example, testing coalescing and scheduling under -O0 or vice-versa.
When a CodeGen pass requires the MachineFunction to have a particular property, we need to explicitly define that property so it can be directly queried rather than naming a specific Pass. For example, to check for SSA, use MRI->isSSA, not addRequired<PHIElimination>.
CodeGen transformation passes are never "required" as an analysis.
ProcessImplicitDefs does not require LiveVariables.
We have a plan to massively simplify some of the early passes within the regalloc superpass.
llvm-svn: 150226
Unify default construction of error_code uses on this idiom so that users don't
feel compelled to make static globals for naming convenience. (unfortunately I
couldn't make the original ctor private as some APIs don't return their result,
instead using an out parameter (that makes sense to default construct) - which
is a bit of a pity. I did, however, find/fix some cases of unnecessary default
construction of error_code before I hit the unfixable cases)
llvm-svn: 150197
Moving toward a uniform style of pass definition to allow easier target configuration.
Globally declare Pass ID.
Globally declare pass initializer.
Use INITIALIZE_PASS consistently.
Add a call to the initializer from CodeGen.cpp.
Remove redundant "createPass" functions and "getPassName" methods.
While cleaning up declarations, cleaned up comments (sorry for large diff).
llvm-svn: 150100
Build an ordered vector of register mask operands (i.e., calls) when
computing live intervals. Provide a checkRegMaskInterference() function
that computes a bit mask of usable registers for a live range.
This is a quick way of determining if a live range crosses any calls,
and restricting it to the callee saved registers if it does.
Previously, we had to discover call clobbers for each candidate register
independently.
llvm-svn: 150077
LiveIntervalAnalysis has a number of functions that simply forward to
SlotIndexes. Since SlotIndexes is a stand-alone analysis now, clients
should really refer to it directly.
llvm-svn: 149921
This CL delays reading of function bodies from initial parse until
materialization, allowing overlap of compilation with bitcode download.
llvm-svn: 149918
convert at least one client over to use them. Subsequent patches both to
LLVM and Clang will try to convert more people over to a common set of
predicates.
This round of predicates is focused on OS-categorization predicates.
llvm-svn: 149815
but with a critical fix to the SelectionDAG code that optimizes copies
from strings into immediate stores: the previous code was stopping reading
string data at the first nul. Address this by adding a new argument to
llvm::getConstantStringInfo, preserving the behavior before the patch.
llvm-svn: 149800
A live range that has an early clobber tied redef now looks like a
normal tied redef, except the early clobber def uses the early clobber
slot.
This is enough to handle any strange interference problems.
llvm-svn: 149769
If a value is defined by a COPY, that instruction can easily and cheaply
be found by getInstructionFromIndex(VNI->def).
This reduces the size of VNInfo from 24 to 16 bytes, and improves
llc compile time by 3%.
llvm-svn: 149763
Passes prior to instruction selection are now split into separate configurable stages.
Header dependencies are simplified.
The bulk of this diff is simply removal of the silly DisableVerify flags.
Sorry for the target header churn. Attempting to stabilize them.
llvm-svn: 149754
Calls that use register mask operands don't have implicit defs for
returned values. The register mask operand handles the call clobber,
but it always behaves like a set of dead defs.
Add live implicit defs for any implicitly defined physregs that are
actually used.
llvm-svn: 149715
Allows command line overrides to be centralized in LLVMTargetMachine.cpp.
LLVMTargetMachine can intercept common passes and give precedence to command line overrides.
Allows adding "internal" target configuration options without touching TargetOptions.
Encapsulates the PassManager.
Provides a good point to initialize all CodeGen passes so that Pass ID's can be used in APIs.
Allows modifying the target configuration hooks without rebuilding the world.
llvm-svn: 149672
needed to emit a 64-bit gp-relative relocation entry. Make changes necessary
for emitting jump tables which have entries with directive .gpdword. This patch
does not implement the parts needed for direct object emission or JIT.
llvm-svn: 149668
PHI nodes which were matched, rather than climbing up the
original PHI node's operands to rediscover PHI nodes for
recording, since the PHI nodes found that way are not
necessarily part of the matched set.
This fixes rdar://10589171.
llvm-svn: 149654
that just uses the new toolchain probing logic. This fixes linking with -m32 on
64 bit systems (the /32 dir was not being added to the search).
llvm-svn: 149651
It is simpler to define a composite index directly:
    def ssub_2 : SubRegIndex<[dsub_1, ssub_0]>;
    def ssub_3 : SubRegIndex<[dsub_1, ssub_1]>;
Than specifying the composite indices on each register:
    CompositeIndices = [(ssub_2 dsub_1, ssub_0),
                        (ssub_3 dsub_1, ssub_1)] in ...
This also makes it clear that SubRegIndex composition is supposed to be
unique.
llvm-svn: 149556
This new scheduler plugs into the existing selection DAG scheduling framework. It is a top-down critical path scheduler that tracks register pressure and uses a DFA for pipeline modeling.
Patch by Sergei Larin!
llvm-svn: 149547
The purpose of the refactoring is to hide operand roles from the SwitchInst user (the programmer). If you want to work with operands directly, you will probably need lower-level methods than the SwitchInst ones (TerminatorInst, or maybe User). After this patch we can reorganize SwitchInst operands and successors as we want.
What was done:
1. Changed the semantics of the index inside the getCaseValue method:
getCaseValue(0) means "get the first case", not the condition. Use getCondition() if you want to resolve the condition. I propose not mixing SwitchInst case indexing with low-level indexing (TI successor indexing, User operand indexing), since it may be dangerous.
2. For the same reason, findCaseValue(ConstantInt*) returns the actual case number for the value. 0 means the first case, not the default. If there is no case with the given value, ErrorIndex is returned.
3. Added the getCaseSuccessor method. I propose avoiding TerminatorInst::getSuccessor if you want to resolve the case successor BB. Use getCaseSuccessor instead, since the internal SwitchInst organization of operands/successors is hidden and may change at any moment.
4. Added resolveSuccessorIndex and resolveCaseIndex. The main purpose of these methods is to see how case successors are really mapped in TerminatorInst.
4.1 "resolveSuccessorIndex" was created if you need to level down from SwitchInst to TerminatorInst. It returns TerminatorInst's successor index for given case successor.
4.2 "resolveCaseIndex" converts low level successors index to case index that curresponds to the given successor.
Note: There are also related compatability fix patches for dragonegg, klee, llvm-gcc-4.0, llvm-gcc-4.2, safecode, clang.
llvm-svn: 149481
The pass pointer should never be referenced after sending it to
schedulePass(), which may delete the pass. To fix this bug I had to
clean up the design leading to more goodness.
You may notice now that any non-analysis pass is printed. So things like loop-simplify and lcssa show up, while target lib, target data, and alias analysis do not. Normally, analyses don't mutate the IR, but you can now check this by using both -print-after and -print-before. The effects of an analysis will now show up in between the two.
The llc path is still in bad shape, but I'll be improving it in my next checkin. Meanwhile, print-machineinstrs still works the same way. With print-before/after, many llc passes that were not printed before now are; some of these should be converted to analyses. A few very important passes, isel and the scheduler, are not properly initialized, so they are not printed.
llvm-svn: 149480
This is the initial checkin of the basic-block autovectorization pass along with some supporting vectorization infrastructure.
Special thanks to everyone who helped review this code over the last several months (especially Tobias Grosser).
llvm-svn: 149468
now that this handles the release / retain calls.
Adds a regression test for that bug (which is a compile-time
regression) and for the last two changes to the IntrusiveRefCntPtr,
especially tests for the memory leak due to copy construction of the
ref-counted object and ensuring that the traits are used for release /
retain calls.
llvm-svn: 149411
kicking in the big win of ConstantDataArray. As part of this, change
the implementation of GetConstantStringInfo in ValueTracking to work
with ConstantDataArray (and not ConstantArray) making it dramatically,
amazingly, more efficient in the process and renaming it to
getConstantStringInfo.
This keeps around a GetConstantStringInfo entrypoint that (grossly)
forwards to getConstantStringInfo and constructs the std::string
required, but existing clients should move over to
getConstantStringInfo instead.
llvm-svn: 149351
- Don't call malloc+free in the very hot forward().
- Don't call isTiedToDefOperand().
- Don't create BitVector temporaries.
- Merge DeadRegs into KillRegs.
- Eliminate the early clobber checks, they were irrelevant to scavenging.
- Remove unnecessary code from -Asserts builds.
This speeds up ARM PEI by 3.4x and overall llc -O0 codegen time by 11%.
llvm-svn: 149189
around within a basic block while maintaining live-intervals.
Updated ScheduleTopDownLive in MachineScheduler.cpp to use the moveInstr API
when reordering MIs.
llvm-svn: 149147
we're at it, allow PatternMatch's "neg" pattern to match integer
vector negations, and enhance ComputeNumSignBits to handle
shl of vectors.
llvm-svn: 149082
The live range of the source register may be extended when a redundant
copy is eliminated. Make sure any kill flags between the two copies are
cleared.
This fixes PR11765.
llvm-svn: 149069
This enables the linker to match concrete relocation types (absolute or relative) with whatever library or C++ support code is being linked against.
llvm-svn: 149057
ConstantVector. Fix some outright bugs in the implementation of
ConstantArray and ConstantStruct, which would cause us to not make
one big UndefValue when asking for an array/struct with all undef
elements. Enhance Constant::isAllOnesValue to work with
ConstantDataVector.
llvm-svn: 149021
to reduce the number of cast<>'s we have. This allows someone to use
things like Ty->getVectorNumElements() instead of
cast<VectorType>(Ty)->getNumElements() when you know that a type is a
vector.
It would be a great general cleanup to move the codebase to use these,
I will do so in the code I'm touching.
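For instance -- same types, just less casting (a sketch against the API
added here; headers later moved under llvm/IR/):
    #include "llvm/DerivedTypes.h"

    unsigned numElements(llvm::Type *Ty) {
      // Before: llvm::cast<llvm::VectorType>(Ty)->getNumElements()
      // After, when Ty is already known to be a vector type:
      return Ty->getVectorNumElements();
    }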
llvm-svn: 148999
to 64-bits, and added a new attribute in bit #32. Specifically, remove
this new attribute from the enum used in the C API. It's not yet clear
what the best approach is for exposing these new attributes in the
C API, and several different proposals are on the table. Until then, we
can simply not expose this bit in the API at all.
Also, I've reverted a somewhat unrelated change in the same revision
which switched from "1 << 31" to "1U << 31" for the top enum. While "1
<< 31" is technically undefined behavior, implementations DTRT here.
However, MS and -pedantic mode warn about non-'int' type enumerator
values. If folks feel strongly about this I can put the 'U' back in, but
it seemed best to wait for the proper solution.
llvm-svn: 148937
add a ConstantDataArray::getString method that corresponds to the (to be
removed) StringRef version of ConstantArray::get, but is dramatically more
efficient.
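For example (signature as I recall it; AddNull defaults to true):
  #include "llvm/Constants.h"
  #include "llvm/LLVMContext.h"
  using namespace llvm;

  static Constant *makeStr(LLVMContext &Ctx) {
    // One byte per element, no std::string temporaries.
    return ConstantDataArray::getString(Ctx, "hello", /*AddNull=*/true);
  }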
llvm-svn: 148804
and clean up some other misc stuff. Unlike ConstantArray, we will
prefer to emit .fill directives for "String" arrays that all have
the same value, since they are denser than emitting a .ascii directive.
llvm-svn: 148793
same semantics as ConstantArray's but much more efficient because they
don't have to return std::string's. The ConstantArray methods will
eventually be removed.
llvm-svn: 148792
out into a new ConstantFoldLoadThroughGEPIndices (more useful) function
and rewrite it to be simpler, more efficient, and to handle the new
ConstantDataSequential type.
Enhance ConstantFoldLoadFromConstPtr to handle ConstantDataSequential.
llvm-svn: 148786
violation -- MC cannot depend on CodeGen.
Specifically, the MCTargetDesc component of each target is actually
a subcomponent of the MC library. As such, it cannot depend on the
target-independent code generator, because MC itself cannot depend on
the target-independent code generator. This change moved a flag from the
ARM MCTargetDesc file ARMMCAsmInfo.cpp to the CodeGen layer in
ARMException.cpp, leaving behind an 'extern' to refer back to it. That
layering order isn't viable given the constraints outlined above.
Commandline flags are designed to be static specifically to avoid these
types of bugs.
Fixing this is likely going to require some non-trivial refactoring.
llvm-svn: 148759
classes, per PR1324. Not all of their helper functions are implemented,
nothing creates them, and the rest of the compiler doesn't handle them yet.
llvm-svn: 148741
This change adds a new value to the --arm-enable-ehabi option that
disables emitting unwinding descriptors. This mode gives a working
backtrace() without the (currently broken) exception support.
llvm-svn: 148686
in a subclass named DyldELFObject. This class supports rebasing the object file
it represents by re-mapping section addresses to the actual memory addresses
the object was placed in. This is required for MC-JIT implementation on ELF with
debugging support.
Patch reviewed on llvm-commits.
Developed together with Ashok Thirumurthi and Andrew Kaylor.
llvm-svn: 148653
A register mask operand kills any live physreg that isn't preserved.
Unlike an implicit-def operand, the clobbered physregs are never live
afterwards.
This means LiveVariables has to track a much smaller number of live
physregs, and it should spend much less time in addRegisterDead().
llvm-svn: 148609
Problem: LLVM needs more function attributes than currently available (32 bits).
One such proposed attribute is "address_safety", which indicates that a function is being checked for address safety (by AddressSanitizer, SAFECode, etc.).
Solution:
- extend the Attributes from 32 bits to 64-bits
- wrap the object into a class so that unsigned is never erroneously used instead
- change "unsigned" to "Attributes" throughout the code, including one place in clang.
- the class has no "operator uint64_t()", but it has "uint64_t Raw()" to support packing/unpacking.
- the class has a "safe operator bool()" to support the common idiom: if (Attributes attr = getAttrs()) useAttrs(attr); (see the sketch after this list)
- The CTOR from uint64_t is marked explicit, so I had to add a few explicit CTOR calls
- Add the new attribute "address_safety". Doing it in the same commit to check that attributes beyond first 32 bits actually work.
- Some of the functions from the Attribute namespace are worth moving inside the class, but I'd prefer to have it as a separate commit.
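A minimal sketch of the wrapper shape described above (only Raw() and the
explicit constructor are named in this commit; the rest is assumed, and the
real code uses the safe-bool idiom rather than a plain operator bool):
  #include <stdint.h>

  class Attributes {
    uint64_t Bits; // the 64-bit attribute word
  public:
    Attributes() : Bits(0) {}
    explicit Attributes(uint64_t Val) : Bits(Val) {} // explicit CTOR
    uint64_t Raw() const { return Bits; }            // packing/unpacking
    operator bool() const { return Bits != 0; }      // simplified
  };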
Tested:
"make check" on Linux (32-bit and 64-bit) and Mac (10.6)
built/run spec CPU 2006 on Linux with clang -O2.
This change will break clang build in lib/CodeGen/CGCall.cpp.
The following patch will fix it.
llvm-svn: 148553
LSR has gradually been improved to more aggressively reuse existing code, particularly existing phi cycles. This exposed problems with the SCEVExpander's sloppy treatment of its insertion point. I applied some rigor to the insertion point problem that will hopefully avoid an endless bug cycle in this area. Changes:
- Always use properlyDominates to check safe code hoisting.
- The insertion point provided to SCEV is now considered a lower bound. This is usually a block terminator or the use itself. Under no circumstance may SCEVExpander insert below this point.
- LSR is responsible for finding a "canonical" insertion point across expansion of different expressions.
- Robust logic to determine whether IV increments are in "expanded" form and/or can be safely hoisted above some insertion point.
Fixes PR11783: SCEVExpander assert.
llvm-svn: 148535
to instruction right after the last instruction in the bundle.
- Add a finalizeBundle() variant that doesn't specify LastMI. Instead, the code
will find the last instruction in the bundle by following the 'InsideBundle'
marker. This is useful in case bundles are formed early (i.e. during MI
scheduling) but finalized later (i.e. after register allocator has finished
rewriting virtual registers with physical registers).
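A sketch of calling the new variant (iterator type names are approximate
for this era of the API):
  #include "llvm/CodeGen/MachineInstrBundle.h"
  using namespace llvm;

  static void finalizeAt(MachineBasicBlock &MBB,
                         MachineBasicBlock::instr_iterator FirstMI) {
    // No LastMI: the bundle end is found by following the
    // 'InsideBundle' markers on the instructions.
    finalizeBundle(MBB, FirstMI);
    // The explicit variant is finalizeBundle(MBB, FirstMI, LastMI).
  }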
llvm-svn: 148444
This SelectionDAG node will be attached to call nodes by LowerCall(),
and eventually becomes a MO_RegisterMask MachineOperand on the
MachineInstr representing the call instruction.
LowerCall() will attach a register mask that depends on the calling
convention.
llvm-svn: 148436
When set, this bit indicates that a register is completely defined by
the value of its sub-registers.
Use the CoveredBySubRegs property to infer which super-registers are
call-preserved given a list of callee-saved registers. For example, the
ARM registers D8-D15 are callee-saved. This now automatically implies
that Q4-Q7 are call-preserved.
Conversely, Win64 callees save XMM6-XMM15, but the corresponding
YMM6-YMM15 registers are not call-preserved because they are not fully
defined by their sub-registers.
llvm-svn: 148363
Targets can now add CalleeSavedRegs defs to their *CallingConv.td file.
TableGen will use this to create a *_SaveList array suitable for
returning from getCalleeSavedRegs() as well as a *_RegMask bit mask
suitable for returning from getCallPreservedMask().
llvm-svn: 148346
BitVector uses the native word size for its internal representation.
That doesn't work well for literal bit masks in source code.
This patch adds BitVector operations to efficiently apply literal bit
masks specified as arrays of uint32_t. Since each array entry always
holds exactly 32 bits, these portable bit masks can be source code
literals, probably produced by TableGen.
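For example (a sketch using the new operations; two 32-bit words describe
a portable 64-bit mask, low word first):
  #include "llvm/ADT/BitVector.h"
  #include <stdint.h>
  using namespace llvm;

  static const uint32_t Mask[] = { 0x0000FFFFu, 0x00000001u };

  static void applyMask() {
    BitVector BV(64);
    BV.setBitsInMask(Mask, 2);   // sets bits 0-15 and bit 32
    BV.clearBitsInMask(Mask, 2); // and clears them again
  }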
llvm-svn: 148272
Move to a by-section allocation and relocation scheme. This allows
better support for sections which do not contain externally visible
symbols.
Flesh out the relocation address vs. local storage address separation a
bit more as well. Remote process JITs use this to tell the relocation
resolution code where the code will live when it executes.
The startFunctionBody/endFunctionBody interfaces to the JIT and the
memory manager are deprecated. They'll stick around for as long as the
old JIT does, but the MCJIT doesn't use them anymore.
llvm-svn: 148258
Register masks will be used as a compact representation of large clobber
lists. Currently, an x86 call instruction has some 40 operands
representing call-clobbered registers. That's more than 1kB of useless
operands per call site.
A register mask operand references a bit mask of call-preserved
registers, everything else is clobbered. The bit mask will typically
come from TargetRegisterInfo::getCallPreservedMask().
By abandoning ImplicitDefs for call-clobbered registers, it also becomes
possible to share call instruction descriptions between calling
conventions, and we can get rid of the WINCALL* instructions.
This patch introduces the new operand kind. Future patches will add
RegMask support to target-independent passes before finally the fixed
clobber lists can be removed from call instruction descriptions.
llvm-svn: 148250
or Clang is using this, and it would be hard to use it correctly given
the thread hostility of the function. Also, it never checked the return value,
which is rather dangerous with chdir. If someone was in fact using this,
please let me know, as well as what the use case actually is, so that
I can add it back and make it more correct and secure to use. (That
said, it's never going to be "safe" per-se, but we could at least
document the risks...)
llvm-svn: 148211
The hook returns a bit-mask of call-preserved registers that will
eventually replace the current list of implicit defs on call
instructions. This will make it possible to support multiple calling
conventions without duplicating call instruction descriptors.
The call-preserved mask is slightly different from the list returned by
the getCalleeSavedRegs() hook: it includes all aliases that are
preserved by calls.
The hook takes a CallingConv::ID argument instead of a MachineFunction
pointer, so it can provide information about calls to extern functions,
and even indirect function calls.
TRI::getCalleeSavedRegs() returns information about the function
currently being compiled. TRI::getCallPreservedMask() returns
information about the functions it is calling.
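A sketch of querying the new hook (the exact signature in this era may
differ slightly; the bit test mirrors how register mask operands are
interpreted, with a set bit meaning "preserved"):
  #include "llvm/Target/TargetRegisterInfo.h"
  #include "llvm/CallingConv.h"
  using namespace llvm;

  static bool isPreservedAcrossCall(const TargetRegisterInfo *TRI,
                                    unsigned Reg) {
    const uint32_t *Mask = TRI->getCallPreservedMask(CallingConv::C);
    return Mask && (Mask[Reg / 32] & (1u << (Reg % 32)));
  }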
llvm-svn: 148165
Consider this code:
int h() {
  int x;
  try {
    x = f();
    g();
  } catch (...) {
    return x+1;
  }
  return x;
}
The variable x is undefined on the first edge to the landing pad, but it
has the f() return value on the second edge to the landing pad.
SplitAnalysis::getLastSplitPoint() would assume that the return value
from f() was live into the landing pad when f() throws, which is of
course impossible.
Detect these cases, and treat them as if the landing pad wasn't there.
This allows spill code to be inserted after the function call to f().
<rdar://problem/10664933>
llvm-svn: 147912
Delete the alternative implementation in LiveIntervalAnalysis.
These functions computed the same thing, but SplitAnalysis caches the
result.
llvm-svn: 147911
with other symbols.
An object in the __cfstring section is supposed to be filled with CFString
objects, which have a pointer to ___CFConstantStringClassReference followed by a
pointer to a __cstring. If we allow the object in the __cstring section to be
merged with another global, then it could end up in any section. Because the
linker is going to remove these symbols in the final executable, we shouldn't
bother to merge them.
<rdar://problem/10564621>
llvm-svn: 147899
functional change in r147860 to use DW_TAG_label's instead of DW_TAG_subprogram's.
This only changes names and updates comments. No functional change.
llvm-svn: 147877
of several newly un-defaulted switches. This also helps optimizers
(including LLVM's) recognize that every case is covered, and we should
assume as much.
llvm-svn: 147861
These heuristics are sufficient for enabling IV chains by
default. Performance analysis has been done for i386, x86_64, and
thumbv7. The optimization is rarely important, but can significantly
speed up certain cases by eliminating spill code within the
loop. Unrolled loops are prime candidates for IV chains. In many
cases, the final code could still be improved with more target
specific optimization following LSR. The goal of this feature is for
LSR to make the best choice of induction variables.
Instruction selection may not completely take advantage of this
feature yet. As a result, there could be cases of slight code size
increase.
Code size can be worse on x86 because it doesn't support postincrement
addressing. In fact, when chains are formed, you may see redundant
address plus stride addition in the addressing mode. GenerateIVChains
tries to compensate for the common cases.
On ARM, code size increase can be mitigated by using postincrement
addressing, but downstream codegen currently misses some opportunities.
llvm-svn: 147826
file error checking. Use that to error on an unfinished cfi_startproc.
The error is not nice, but is already better than a segmentation fault.
llvm-svn: 147717
opportunities that only present themselves after late optimizations
such as tail duplication, e.g.:
## BB#1:
        movl %eax, %ecx
        movl %ecx, %eax
        ret
The register allocator also leaves some of them around (due to false
dep between copies from phi-elimination, etc.)
This required some changes in codegen passes. Post-ra scheduler and the
pseudo-instruction expansion passes have been moved after branch folding
and tail merging. They were run before branch folding previously because it did
not always update block live-ins; that's fixed now. The pass change makes sense
independently, since we want to properly schedule instructions after
branch folding / tail duplication.
rdar://10428165
rdar://10640363
llvm-svn: 147716
The register allocators don't currently support adding reserved
registers while they are running. Extend the MRI API to keep track of
the set of reserved registers when register allocation started.
Target hooks like hasFP() and needsStackRealignment() can look at this
set to avoid reserving more registers during register allocation.
llvm-svn: 147577
Using DenseMap iterators isn't free as they have to check for empty
buckets. Dominator queries are common so this gives a minor speedup.
llvm-svn: 147544
Get back getHostTriple.
For JIT compilation, use the host triple instead of the default
target: this fixes some JIT testcases that used to fail when the
compiler has been configured as a cross compiler.
llvm-svn: 147542
captured. This allows the tracker to look at the specific use, which may be
especially interesting for function calls.
Use this to fix 'nocapture' deduction in FunctionAttrs. The existing one does
not iterate until a fixpoint and does not guarantee that it produces the same
result regardless of iteration order. The new implementation builds up a graph
of how arguments are passed from function to function, and uses a bottom-up walk
on the argument-SCCs to assign nocapture. This gets us nocapture more often, and
does so rather efficiently and independent of iteration order.
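A sketch of a client after this change (the exact set of CaptureTracker
virtuals is approximate; FirstCaptureTracker is a made-up example):
  #include "llvm/Analysis/CaptureTracking.h"
  using namespace llvm;

  struct FirstCaptureTracker : CaptureTracker {
    Use *FirstCapture;
    FirstCaptureTracker() : FirstCapture(0) {}
    virtual void tooManyUses() { FirstCapture = 0; }
    virtual bool shouldExplore(Use *U) { return true; }
    virtual bool captured(Use *U) {
      FirstCapture = U; // the specific capturing use is now available
      return true;      // stop the walk
    }
  };
  // Driven via PointerMayBeCaptured(V, &Tracker).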
llvm-svn: 147327
Diagnostics are now emitted via the SourceMgr and we use MemoryBuffer
for buffer management. Switched the code to make use of the trailing
'0' that MemoryBuffer guarantees where it makes sense.
llvm-svn: 147063
unpredicated. That is, turn
  subeq r0, r1, #1
  addne r0, r1, #1
into
  sub   r0, r1, #1
  addne r0, r1, #1
For targets where conditional instructions are always executed, this may be
beneficial. It may remove a pseudo anti-dependency in out-of-order execution
CPUs, e.g.:
  op    r1, ...
  str   r1, [r10]  ; end-of-life of r1 as div result
  cmp   r0, #65
  movne r1, #44    ; raw dependency on previous r1
  moveq r1, #12
If movne is unpredicated, then
  op    r1, ...
  str   r1, [r10]
  cmp   r0, #65
  mov   r1, #44    ; r1 written unconditionally
  moveq r1, #12
Both mov and moveq are no longer dependent on the first instruction. This gives
the out-of-order execution engine more freedom to reorder them.
This has passed the entire LLVM test suite, but it has not been enabled for any
ARM variant pending more performance evaluation.
rdar://8951196
llvm-svn: 146914
Use information computed while inferring new register classes to emit
accurate, table-driven implementations of getMatchingSuperRegClass().
Delete the old manual, error-prone implementations in the targets.
llvm-svn: 146873
make VariadicFunction actually be trivial. Do so, and also make it look
more like your standard trivial functor by making it a struct with no
access specifiers. The unit test is updated to initialize its functors
properly.
llvm-svn: 146827
variadic-like functions in C++98. See the comments in the header file
for a more detailed description of how these work. We plan to use these
extensively in the AST matching library. This code and idea were
originally authored by Zhanyong Wan. I've condensed it using macros
to reduce repetition and adjusted it to fit better with LLVM's ADT.
Thanks to both David Blaikie and Doug Gregor for the review!
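A sketch of the intended usage (example names are mine, not from the
commit; the fixed-arity worker takes pointers to however many arguments
were passed):
  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/VariadicFunction.h"
  using namespace llvm;

  static int SumImpl(ArrayRef<const int *> Args) {
    int S = 0;
    for (unsigned i = 0, e = Args.size(); i != e; ++i)
      S += *Args[i];
    return S;
  }

  // A trivial struct, so aggregate initialization works.
  const VariadicFunction<int, int, SumImpl> Sum = {};
  // Sum(1, 2, 3) packs pointers to its arguments and calls SumImpl.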
llvm-svn: 146729
into Analysis as a standalone function, since there's no need for
it to be in VMCore. Also, update it to use isKnownNonZero and
other goodies available in Analysis, making it more precise,
enabling more aggressive optimization.
llvm-svn: 146610
r0 = mov #0
r0 = moveq #1
Then the second instruction has an implicit data dependency on the first
instruction. Sadly I have yet to come up with a small test case that
demonstrates the post-ra scheduler taking advantage of this.
llvm-svn: 146583
to finalize MI bundles (i.e. add BUNDLE instruction and computing register def
and use lists of the BUNDLE instruction) and a pass to unpack bundles.
- Teach more of the MachineBasicBlock and MachineInstr methods to be bundle aware.
- Switch Thumb2 IT block to MI bundles and delete the hazard recognizer hack to
prevent IT blocks from being broken apart.
llvm-svn: 146542
undefined result. This adds new ISD nodes for the new semantics,
selecting them when the LLVM intrinsic indicates that the undef behavior
is desired. The new nodes expand trivially to the old nodes, so targets
don't actually need to do anything to support these new nodes besides
indicating that they should be expanded. I've done this for all the
operand types that I could figure out for all the targets. Owners of
various targets, please review and let me know if any of these are
incorrect.
Note that the expand behavior is *conservatively correct*, and exactly
matches LLVM's current behavior with these operations. Ideally this
patch will not change behavior in any way. For example the regtest suite
finds the exact same instruction sequences coming out of the code
generator. That's why there are no new tests here -- all of this is
being exercised by the existing test suite.
Thanks to Duncan Sands for reviewing the various bits of this patch and
helping me get the wrinkles ironed out with expanding for each target.
Also thanks to Chris for clarifying through all the discussions that
this is indeed the approach he was looking for. That said, there are
likely still rough spots. Further review much appreciated.
llvm-svn: 146466
indicates whether the intrinsic has a defined result for a first
argument equal to zero. This will eventually allow these intrinsics to
accurately model the semantics of GCC's __builtin_ctz and __builtin_clz
and the X86 instructions (prior to AVX) which implement them.
This patch merely sets the stage by extending the signature of these
intrinsics and establishing auto-upgrade logic so that the old spelling
still works both in IR and in bitcode. The upgrade logic preserves the
existing (inefficient) semantics. This patch should not change any
behavior. CodeGen isn't updated because it can use the existing
semantics regardless of the flag's value.
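A sketch of building a call under the new signature (IRBuilder-based,
with header paths of this era; the i1 flag is the new second argument):
  #include "llvm/Intrinsics.h"
  #include "llvm/Support/IRBuilder.h"
  using namespace llvm;

  static Value *createCttz(IRBuilder<> &B, Module *M, Value *X) {
    Type *Ty = X->getType();
    Value *F = Intrinsic::getDeclaration(M, Intrinsic::cttz, Ty);
    // Second operand: true means the result is undef when X is zero.
    return B.CreateCall2(F, X, B.getInt1(true));
  }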
Note that this will be followed by API updates to Clang and DragonEgg.
Reviewed by Nick Lewycky!
llvm-svn: 146357
The OptLevel is now redundant with the TargetMachine*.
And selectTarget() isn't really JIT-specific and could probably
get refactored into one of the lower level libraries.
llvm-svn: 146355
generates the dwarf Compile Unit DIE and a dwarf subprogram DIE for each
non-temporary label.
The next part will be to get the clang driver to enable this when assembling
a .s file. rdar://9275556
llvm-svn: 146262
Patch by Brendon Cahoon!
This extends the existing LoopUnroll and LoopUnrollPass. Brendon
measured no regressions in the llvm test suite with -unroll-runtime
enabled. This implementation works by using the existing loop
unrolling code to unroll the loop by a power-of-two (default 8). It
generates an if-then-else sequence of code prior to the loop to
execute the extra iterations before entering the unrolled loop.
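Conceptually, for a hand-written C analogue of the transformation (a
prologue loop here stands in for the if-then-else chain the pass emits):
  // original: for (i = 0; i < n; ++i) body(i);
  unsigned rem = n % 8, i = 0;
  for (; i < rem; ++i) body(i);  // extra iterations up front
  for (; i < n; i += 8) {        // loop unrolled by 8
    body(i);   body(i+1); body(i+2); body(i+3);
    body(i+4); body(i+5); body(i+6); body(i+7);
  }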
llvm-svn: 146245
clients to decide whether to look inside bundled instructions and whether
the query should return true if any / all bundled instructions have the
queried property.
llvm-svn: 146168
files. First, add a new block USELIST_BLOCK to the bitcode format. This is
where USELIST_CODE_ENTRYs will be stored. The format of the USELIST_CODE_ENTRYs
has not yet been defined. Add support in the BitcodeReader for parsing the
USELIST_BLOCK.
Part of rdar://9860654 and PR5680.
llvm-svn: 146078
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.
For properties like mayLoad / mayStore, look into the bundle, and if any of the
bundled instructions has the property, return true (sketched below).
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad, isCompare, conservatively return false for
bundles.
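A sketch of the queries this describes (enumerator names as I recall them
from MachineInstr of this era; treat as approximate):
  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  static bool bundleMayLoad(const MachineInstr *MI) {
    // true if *any* instruction in MI's bundle may load
    return MI->mayLoad(MachineInstr::AnyInBundle);
  }
  static bool bundleIsPredicable(const MachineInstr *MI) {
    // true only if *all* bundled instructions are predicable
    return MI->isPredicable(MachineInstr::AllInBundle);
  }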
llvm-svn: 146026
This flag is used when bundling machine instructions. It indicates
whether the operand reads a value defined inside or outside its bundle.
llvm-svn: 145997
For example, ARM allows:
vmov.u32 s4, #0 -> vmov.i32 s4, #0
'u32' is a more specific designator for the 32-bit integer type specifier
and is legal for any instruction which accepts 'i32' as a datatype suffix.
We want to say:
  def : TokenAlias<".u32", ".i32">;
This works by marking the match class of 'From' as a subclass of the
match class of 'To'.
rdar://10435076
llvm-svn: 145992
1. Added opcode BUNDLE
2. Taught MachineInstr class to deal with bundled MIs
3. Changed MachineBasicBlock iterator to skip over bundled MIs; added an iterator to walk all the MIs
4. Taught MachineBasicBlock methods about bundled MIs
llvm-svn: 145975
This was actually a bit of a mess. TLI.setPrefLoopAlignment was clearly
documented as taking log2(bytes) units, but the x86 target would still
set a preferred loop alignment of '16'.
CodePlacementOpt passed this number on to the basic block, and
AsmPrinter interpreted it as bytes.
Now both MachineFunction and MachineBasicBlock use logarithmic
alignments.
Obviously, MachineConstantPool still measures alignments in bytes, so we
can emulate the thrill of using as.
llvm-svn: 145889
Whether a fixup needs relaxation for the associated instruction is a
target-specific function, as the FIXME indicated. Create a hook for that
and use it.
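A sketch of a target override (MyAsmBackend and the 8-bit displacement are
hypothetical; the hook signature is approximate for this era):
  #include "llvm/MC/MCAsmBackend.h"
  using namespace llvm;

  class MyAsmBackend : public MCAsmBackend {
  public:
    virtual bool fixupNeedsRelaxation(const MCFixup &Fixup, uint64_t Value,
                                      const MCInstFragment *DF,
                                      const MCAsmLayout &Layout) const {
      // Relax when the fixed-up value no longer fits a signed 8-bit
      // displacement (illustration only).
      return (int64_t)Value < -128 || (int64_t)Value > 127;
    }
    // ... remaining MCAsmBackend virtuals elided ...
  };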
llvm-svn: 145881
This is a patch by Guoping Long!
As part of utilizing LLVM Dominator computation in Clang, made two changes to LLVM dominators tree implementation:
- (1) Change the recalculate() template function to only rely on GraphTraits.
- (2) Add a size() method to the GraphTraits template class to query the number of nodes in the graph (sketched below).
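A sketch of what (2) asks of a GraphTraits specialization (MyGraph and
MyNode are made-up types; the exact return type of size() is assumed):
  template <> struct GraphTraits<MyGraph *> {
    typedef MyNode NodeType;
    typedef MyNode::iterator ChildIteratorType;
    static NodeType *getEntryNode(MyGraph *G) { return G->getEntry(); }
    static ChildIteratorType child_begin(NodeType *N) { return N->begin(); }
    static ChildIteratorType child_end(NodeType *N) { return N->end(); }
    // New in this change: the number of nodes in the graph.
    static size_t size(MyGraph *G) { return G->getNumNodes(); }
  };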
llvm-svn: 145837
change, now you need a TargetOptions object to create a TargetMachine. Clang
patch to follow.
One small functionality change in PTX. PTX had commented out the machine
verifier parts in their copy of printAndVerify. That now calls the version in
LLVMTargetMachine. Users of PTX who need verification disabled should rely on
not passing the command-line flag to enable it.
llvm-svn: 145714
It was getting ignored after r144788.
Also fix an accidental implicit cast from the OptLevel enum
to an optional bool argument. MSVC warned on this, but gcc
didn't.
llvm-svn: 145633
as MC is the only assembler we support.
This splits MS/Windows and GNU/Windows ASM infos into two separate classes.
While there is currently only one difference, full MS C++ ABI support will
require many more.
llvm-svn: 145409
Now that it needs to be exported in a public header (Valgrind.h)
it should be prefixed to avoid collision with other projects.
Add it to llvm-config.h as well.
This'll require regenerating the configure script after this
commit, but I don't have the required autoconf version.
llvm-svn: 145214
It was out of sync with the description in configure.ac/config.h.in.
Also re-alphabetize it from its position when it was LLVM_HOST_TRIPLE.
llvm-svn: 145213
and code model. This eliminates the need to pass OptLevel flag all over the
place and makes it possible for any codegen pass to use this information.
llvm-svn: 144788
Two new TargetInstrInfo hooks lets the target tell ExecutionDepsFix
about instructions with partial register updates causing false unwanted
dependencies.
The ExecutionDepsFix pass will break the false dependencies if the
updated register was written in the previous N instructions.
The small loop added to sse-domains.ll runs twice as fast with
dependency-breaking instructions inserted.
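A sketch of the two hooks on a hypothetical target (hook names as they
appear in TargetInstrInfo; the bodies and the clearance value of 16 are
made up):
  // Return a non-zero clearance to request dependency breaking when the
  // register was written within the last N instructions.
  unsigned MyInstrInfo::getPartialRegUpdateClearance(
      const MachineInstr *MI, unsigned OpNum,
      const TargetRegisterInfo *TRI) const {
    return 16;
  }

  // Insert a dependency-breaking instruction (e.g. an xorps of a
  // register with itself on x86) before MI.
  void MyInstrInfo::breakPartialRegDependency(
      MachineBasicBlock::iterator MI, unsigned OpNum,
      const TargetRegisterInfo *TRI) const {}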
llvm-svn: 144602
and stores capture) to permit the caller to see each capture point and decide
whether to continue looking.
Use this inside memdep to do an analysis that basicaa won't do. This lets us
solve another devirtualization case, fixing PR8908!
llvm-svn: 144580
These annotations are disabled entirely when ENABLE_THREADS is off or when
building a release build. When enabled, they add calls to functions with no
statements to ManagedStatic's getters.
Use these annotations to inform tsan that the race used inside ManagedStatic
initialization is actually benign. Thanks to Kostya Serebryany for helping
write this patch!
llvm-svn: 144567
time it is queried to compute the probability of a single successor.
This makes computing the probability of every successor of a block in
sequence... really really slow. ;] This switches to a linear walk of the
successors rather than a quadratic one. One of several quadratic
behaviors slowing this pass down.
I'm not really thrilled with moving the sum code into the public
interface of MBPI, but I don't (at the moment) have ideas for a better
interface. The direction I'm thinking of for a better interface is to
have MBPI actually retain much more state and make *all* of these
queries cheap. That's a lot of work, and would require invasive changes.
Until then, this seems like the least bad (ie, least quadratic)
solution. Suggestions welcome.
llvm-svn: 144530
correctly handle blocks whose successor weights sum to more than
UINT32_MAX. This is slightly less efficient, but the entire thing is
already linear in the number of successors. Calling it within any hot
routine is a mistake, and indeed no one is calling it. It also
simplifies the code.
llvm-svn: 144527
the sum of the edge weights not overflowing uint32, and crashed when
they did. This is generally safe as BranchProbabilityInfo tries to
provide this guarantee. However, the CFG can get modified during codegen
in a way that grows the *sum* of the edge weights. This doesn't seem
unreasonable (imagine just adding more blocks all with the default
weight of 16), but it is hard to come up with a case that actually
triggers 32-bit overflow. Fortunately, the single-source GCC build is
good at this. The solution isn't very pretty, but it's no worse than the
previous code. We're already summing all of the edge weights on each
query, we can sum them, check for an overflow, compute a scale, and sum
them again.
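A sketch of that scheme (not the actual MachineBranchProbabilityInfo
code): sum into 64 bits, derive a scale, then re-sum the scaled weights:
  #include <stdint.h>
  #include <vector>

  static uint32_t sumWeights(const std::vector<uint32_t> &Weights) {
    uint64_t Sum = 0;
    for (size_t i = 0, e = Weights.size(); i != e; ++i)
      Sum += Weights[i];
    uint64_t Scale = Sum > UINT32_MAX ? Sum / UINT32_MAX + 1 : 1;
    uint32_t Sum32 = 0;
    for (size_t i = 0, e = Weights.size(); i != e; ++i)
      Sum32 += Weights[i] / (uint32_t)Scale;
    return Sum32;
  }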
I've included a *greatly* reduced test case out of the GCC source that
triggers it. It's a pretty lame test, as it clearly is just barely
triggering the overflow. I'd like to have something that is much more
definitive, but I don't understand the fundamental pattern that triggers
an explosion in the edge weight sums.
The buggy code is duplicated within this file. I'll collapse the copies into
a single implementation in a subsequent commit.
llvm-svn: 144526
The old naming scheme (load/use/def/store) can be traced back to an old
linear scan article, but the names don't match how slots are actually
used.
The load and store slots are not needed after the deferred spill code
insertion framework was deleted.
The use and def slots don't make any sense because we are using
half-open intervals as is customary in C code, but the names suggest
closed intervals. In reality, these slots were used to distinguish
early-clobber defs from normal defs.
The new naming scheme also has 4 slots, but the names match how the
slots are really used. This is a purely mechanical renaming, but some
of the code makes a lot more sense now.
llvm-svn: 144503
RegAllocGreedy has been the default for six months now.
Deleting RegAllocLinearScan makes it possible to also delete
VirtRegRewriter and clean up the spiller code.
llvm-svn: 144475
When this field is true, it means that the load is from constant memory (run-time or compile-time constant), and so it can be hoisted out of loops or moved around other memory accesses.
llvm-svn: 144100