logic to use the new APInt methods. Among other things this
implements rdar://8501501 - llvm.smul.with.overflow.i32 should constant fold
which comes from "clang -ftrapv", originally brought to my attention by PR8221.
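As a rough illustration of what the constant folding amounts to (a plain C++
sketch, not the APInt-based code in this commit): multiply the two known
operands in a wider type and check whether the product still fits in 32 bits:
#include <cstdint>
#include <utility>
// Mirrors the two results of llvm.smul.with.overflow.i32 for constant inputs.
std::pair<int32_t, bool> smulWithOverflowI32(int32_t A, int32_t B) {
  int64_t Wide = static_cast<int64_t>(A) * static_cast<int64_t>(B);
  bool Overflow = Wide < INT32_MIN || Wide > INT32_MAX;
  return {static_cast<int32_t>(Wide), Overflow};  // truncated result + overflow bit
}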
llvm-svn: 116457
getExpandedRegion() enables us to create non-canonical regions. Those regions
can be used to define the largest region that fulfills a certain property.
llvm-svn: 116397
perform initialization without static constructors AND without explicit initialization
by the client. For the moment, passes are required to initialize both their
(potential) dependencies and any passes they preserve. I hope to be able to relax
the latter requirement in the future.
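A minimal sketch of the intended pattern (hypothetical pass names, plain C++
rather than the real pass-registry API): each pass exposes an initialize
function that first initializes its dependencies and preserved passes, guarded
so the work happens exactly once:
#include <mutex>
// Hypothetical initializers for a dependency and for a preserved pass.
void initializeDomTreeAnalysis() { /* register the dependency */ }
void initializeMyCleanupPass()   { /* register the preserved pass */ }
void initializeMyPass() {
  static std::once_flag Once;
  std::call_once(Once, [] {
    initializeDomTreeAnalysis();  // (potential) dependency
    initializeMyCleanupPass();    // pass MyPass claims to preserve
    // ...finally register MyPass itself with the registry...
  });
}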
llvm-svn: 116334
Added ARM specific ELF section types.
Added AttributesSection to ARMElfTargetObject
First step in unifying .cpu assembly tag with ELF/.o
llc now asserts on actual ELF emission on -filetype=obj :-)
llvm-svn: 116257
connected components. These components should be allocated different virtual
registers because there is no reason for them to be allocated together.
Add the ConnectedVNInfoEqClasses class to calculate the connected components,
and move values to new LiveIntervals.
Use it from SplitKit::rewrite by creating new virtual registers for the
components.
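A hedged sketch of the underlying computation (generic union-find in plain
C++, not the actual ConnectedVNInfoEqClasses code): values that are connected,
e.g. through a shared live segment or a phi, get unioned, and each resulting
class can then be moved to its own virtual register:
#include <numeric>
#include <vector>
// Minimal union-find over value numbers 0..N-1.
struct EqClasses {
  std::vector<unsigned> Parent;
  explicit EqClasses(unsigned N) : Parent(N) {
    std::iota(Parent.begin(), Parent.end(), 0u);
  }
  unsigned find(unsigned V) {
    while (Parent[V] != V)
      V = Parent[V] = Parent[Parent[V]];  // path halving
    return V;
  }
  void join(unsigned A, unsigned B) { Parent[find(A)] = find(B); }
};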
llvm-svn: 116006
This function is intended to be used when inserting a machine instruction that
trivially restricts the legal registers, like LEA requiring a GR32_NOSP
argument.
llvm-svn: 115875
allow the target to correctly compute latency for cases where static scheduling
itineraries aren't sufficient, e.g. variable_ops instructions such as
ARM::ldm.
This also allows targets without scheduling itineraries to compute operand
latencies, e.g. X86 can return (approximated) latencies for high-latency
instructions such as division.
- Compute operand latencies for those defined by load multiple instructions,
e.g. ldm and those used by store multiple instructions, e.g. stm.
llvm-svn: 115755
expand to an initializeMyPass() function (in addition to the extant static ctors). Eventually, these will be called
from a big InitializeAllPasses() function, and the PassInfos they create (which would be leaked if this code were used
at the moment) will be handed off to a PassRegistry for ownership.
llvm-svn: 115703
it in with the SSSE3 instructions.
Steward! Could you place this chair by the aft sun deck? I'm trying to get away
from the Astors. They are such boors!
llvm-svn: 115552
1) Changed ValidateDwarfFileNumber() to isValidDwarfFileNumber() to be better
named, since it is just a predicate and isn't actually changing any state.
2) Added a missing return in the comments for setCurrentDwarfLoc() in
include/llvm/MC/MCContext.h to fix the formatting.
3) Changed clearDwarfLocSeen() to ClearDwarfLocSeen() since it does change
state.
4) Simplified the last test in isValidDwarfFileNumber() to just a one-line
boolean test of MCDwarfFiles[FileNumber] != 0 for the final return statement
(sketched below).
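A sketch of what point 4 describes (the container's element type and the
surrounding range checks are assumptions on my part, not taken from the commit):
#include <vector>
// Valid iff the number is in range and a file entry was actually recorded.
bool isValidDwarfFileNumber(const std::vector<const char *> &MCDwarfFiles,
                            unsigned FileNumber) {
  if (FileNumber == 0 || FileNumber >= MCDwarfFiles.size())
    return false;
  return MCDwarfFiles[FileNumber] != nullptr;  // the one-line boolean test
}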
llvm-svn: 115551
is partly because this attribute caused trouble in the past (the
SmallVector one had to be changed from aligned to aligned(8) due
to causing crashes on i386 for example; in theory the same might
be needed in the Allocator case...). But it's mostly because
there seems to be no point in special casing gcc here. Using the
same implementation for all compilers results in better testing.
llvm-svn: 115462
LiveInterval::MergeValueNumberInto instead of trying to extend LiveRanges and
getting it wrong.
This fixed PR8249 where a valno with a multi-segment live range was defined by
an identity copy created by RemoveCopyByCommutingDef. Some of the live
segments disappeared.
llvm-svn: 115385
stick with a constant estimate of 90% (branch predictors are good!), but we might find that we want to provide
more nuanced estimates in the future.
llvm-svn: 115364
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.
Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics.
MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces. Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
pre-existing x86_mmx operation.
The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.
llvm-svn: 115243
resolved or not. Different object files have different restrictions and
different native assemblers have different idiosyncrasies we want to emulate
for now.
Move the existing MachO logic to the new place and implement an ELF one that
gets fixups to globals right.
llvm-svn: 115131
or not. TableGen needs to generate the printInstruction() function as taking
an MCInst* or a MachineInstr*, depending. Default to the old non-MC
version so that everything not yet using MC continues to just work without
fiddling.
llvm-svn: 115126
hide jump threading opportunities by turning control flow into data flow. Run an early JumpThreading pass
(adds approximately an additional 1% to optimization time on SPEC), allowing it to get a shot at these cases
first. Fixes <rdar://problem/8447345>.
llvm-svn: 115099
Rather than having arbitrary cutoffs, actually try to cost model the conversion.
For now, the constants are tuned to more or less match our existing behavior, but these will be
changed to reflect realistic values as this work proceeds.
llvm-svn: 114973
Allocator instances can now be created by calling createPBQPRegisterAllocator.
Tidied up use of CoalescerPair as per Jakob's suggestions.
Made the new PBQPBuilder based construction process the default. The internal construction process
remains in-place and available via -pbqp-builder=false for now. It will be removed shortly if the new
process doesn't cause any regressions.
llvm-svn: 114626
new VariantKind to the MCSymbolExpr seems like overkill, but I'm not sure
there's a more straightforward way to get the printing difference captured.
(i.e., x86 uses @PLT, ARM uses (PLT)).
llvm-svn: 114613
that complex patterns are matched after the entire pattern has
a structural match, therefore the NodeStack isn't in a useful
state when the actual call to the matcher happens.
llvm-svn: 114489
passed the root of the match, even though only a few patterns
actually needed this (one in X86, several in ARM [which should
be refactored anyway], and some in CellSPU that I don't feel
like detangling). Instead of requiring all ComplexPatterns to
take the dead root, have targets opt into getting the root by
putting SDNPWantRoot on the ComplexPattern.
llvm-svn: 114471
I think I've audited all uses, so it should be dependable for address spaces,
and the pointer+offset info should also be accurate when there.
llvm-svn: 114464
instead of calling lower_bound or upper_bound directly.
This cleans up the search logic a bit because {lower,upper}_bound compare
LR->start by default, and it is usually simpler to search LR->end.
Funnelling all searches through one function also makes it possible to replace
the search algorithm with something faster than binary search.
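A hedged sketch of such a search helper (plain C++ over a stand-in LiveRange,
not the real LiveInterval code): a single find() that returns the first range
ending after the query position, built on std::upper_bound:
#include <algorithm>
#include <vector>
struct LiveRange { unsigned start, end; };  // half-open [start, end)
// The first range that ends after Pos is the only one that could contain Pos.
std::vector<LiveRange>::const_iterator
findRange(const std::vector<LiveRange> &Ranges, unsigned Pos) {
  return std::upper_bound(Ranges.begin(), Ranges.end(), Pos,
                          [](unsigned P, const LiveRange &LR) {
                            return P < LR.end;
                          });
}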
llvm-svn: 114448
into OptimizeCompareInstr.
This necessitates the passing of CmpValue around,
so widen the virtual functions to accommodate.
No functionality changes.
llvm-svn: 114428
"getFixedStack" on the MachinePointerInfo class. While
this isn't the problem I'm setting out to solve, it is the
right way to eliminate PseudoSourceValue, so let's go with it.
llvm-svn: 114406
MachinePointerInfo struct, no functionality change.
This also adds an assert to MachineMemOperand::MachineMemOperand
that verifies that the Value* is either null or is an IR pointer type.
llvm-svn: 114389
For now the allocator still uses the old (internal) construction mechanism by default. This will be phased out soon assuming
no issues with the builder system come up.
To invoke the new construction mechanism just pass '-regalloc=pbqp -pbqp-builder' to llc. To provide custom constraints a
Target just needs to extend PBQPBuilder and pass an instance of their derived builder to the RegAllocPBQP constructor.
llvm-svn: 114272
more careful not to call SimplifyInstructionsInBlock() on an unreachable block, the issue has been fixed at a higher level. Add
a big warning to SimplifyInstructionsInBlock() to hopefully prevent this in the future.
llvm-svn: 114117
that all the setup of this class currently happens at static initialization time, this misses the fact
that some later events can cause mutation of the PassRegistrationListeners list, and thus cause race issues.
llvm-svn: 114036
The ELF implementation now creates text, data and bss to match the gnu as
behavior.
The text streamer still has the old MachO specific behavior since
the testsuite checks that it will error when a directive is given
before setting the current section, for example.
A nice benefit is that -n is not required anymore when producing
ELF files.
llvm-svn: 114027
VFP instructions use it for loading some constants, so implement that
handling.
Not thrilled with adding a member to MCOperand, but not sure there's much of
a better option that's not pretty fragile (like putting a double in the
union instead and just assuming that's good enough). Suggestions welcome...
llvm-svn: 113996
The problem was that the test for whether a compiler supports it was
inaccurate, but it has to be accurate: LLVM_LOCAL_VISIBILITY is an optimization
and not needed for correctness, so wrongly thinking a compiler doesn't support
it is not a big deal. LLVM_GLOBAL_VISIBILITY, however, is for correctness and
not an optimization, so getting it wrong is fatal; it would need to be set
based on a configure test rather than on the gcc version. Since dragonegg has
moved to a different scheme, and it was the only user of LLVM_GLOBAL_VISIBILITY,
just remove this macro.
llvm-svn: 113959
isn't a good level of abstraction for memdep. Instead, generalize
AliasAnalysis::alias and related interfaces with a new Location
class for describing a memory location. For now, this is the same
Pointer and Size as before, plus an additional field for a TBAA tag.
Also, introduce a fixed MD_tbaa metadata tag kind.
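A minimal sketch of what such a Location bundles (field names are my
assumption; Value and MDNode are forward-declared stand-ins for the LLVM types
so the snippet is self-contained):
#include <cstdint>
struct Value;   // stand-in for llvm::Value
struct MDNode;  // stand-in for llvm::MDNode
// Everything an alias query needs to describe one memory access.
struct Location {
  const Value *Ptr;       // base pointer being accessed
  uint64_t Size;          // number of bytes accessed (or an "unknown" sentinel)
  const MDNode *TBAATag;  // optional TBAA metadata, may be null
};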
llvm-svn: 113858
the end of the line on a parser error, allowing skipping to happen
for syntactic errors but not for semantic errors. Before we would
miss emitting a diagnostic about the second line, because we skipped
it due to the semantic error on the first line:
foo %eax
bar %al
This fixes rdar://8414033 - llvm-mc ignores lines after an invalid instruction mnemonic errors
llvm-svn: 113688
iterator when an optimization took place. This allows us to do more insane
things with the code than just remove an instruction or two.
llvm-svn: 113640
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. The if-converter should treat them as non-micro-coded
simple instructions.
llvm-svn: 113570
is different from what the code now uses in two ways: NamedMDNodes
were considered Values and included in the numbering, and the
function-local metadata counter wasn't reset between functions.
The latter problem breaks lazy deserialization, so instead of trying
to emulate the old numbering, just drop the old metadata. The only
in-tree use case is debug info with LTO, where the QOI loss is
considered acceptable.
llvm-svn: 113557
not unrolling loops that contain calls that would be better off getting inlined. This mostly
comes up when an interleaved devirtualization pass has devirtualized a call which the inliner
will inline on a future pass. Thus, rather than blocking all loops containing calls, add
a metric for "inline candidate calls" and block loops containing those instead.
llvm-svn: 113535
instruction in the class would be decoded to, or zero if the number of
uOPs must be determined dynamically.
This will be used to determine the cost-effectiveness of predicating a
micro-coded instruction.
llvm-svn: 113513
don't use any InlineCostAnalyzer state, and are useful for other clients who don't necessarily want to use
all of InlineCostAnalyzer's logic, some of which is fairly inlining-specific.
No intended functionality change.
llvm-svn: 113499
AliasAnalysis, and some code for implementing the new query on top of
existing implementations by making standard alias and getModRefInfo
queries.
llvm-svn: 113329
switch to using a ManagedStatic for the global PassRegistry instead of a
ManagedCleanup, and fix a destruction ordering bug this exposed.
llvm-svn: 113283
Fix zeroExtend and signExtend to support empty sets, and to return the smallest
possible result set which contains the extension of each element in their
inputs. For example zext i8 [100, 10) to i16 is now [0, 256), not i16 [100, 10)
which contains 63446 members.
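A hedged sketch of the rule for the i8-to-i16 case from the example (half-open
[Lo, Hi) ranges, wrapped when Lo > Hi; the real ConstantRange also has
dedicated empty/full-set representations that are ignored here):
#include <cstdint>
#include <utility>
std::pair<uint32_t, uint32_t> zextRange8To16(uint8_t Lo, uint8_t Hi) {
  if (Lo > Hi)        // wrapped i8 range, e.g. [100, 10) = {100..255, 0..9}
    return {0, 256};  // smallest i16 range containing every zero-extended member
  return {Lo, Hi};    // a non-wrapped range is unchanged by zext
}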
llvm-svn: 113187
Loop::hasLoopInvariantOperands method. Remove
a useless and confusing Loop::isLoopInvariant(Instruction)
method, which didn't do what you thought it did.
No functionality change.
llvm-svn: 113133
Since mem2reg isn't run at -O0, we get a ton of reloads from the stack,
for example, this code:
int foo(int x, int y, int z) {
return x+y+z;
}
used to compile into:
_foo: ## @foo
subq $12, %rsp
movl %edi, 8(%rsp)
movl %esi, 4(%rsp)
movl %edx, (%rsp)
movl 8(%rsp), %edx
movl 4(%rsp), %esi
addl %edx, %esi
movl (%rsp), %edx
addl %esi, %edx
movl %edx, %eax
addq $12, %rsp
ret
Now we produce:
_foo: ## @foo
subq $12, %rsp
movl %edi, 8(%rsp)
movl %esi, 4(%rsp)
movl %edx, (%rsp)
movl 8(%rsp), %edx
addl 4(%rsp), %edx ## Folded load
addl (%rsp), %edx ## Folded load
movl %edx, %eax
addq $12, %rsp
ret
Fewer instructions and less register use = faster compiles.
llvm-svn: 113102
vabd intrinsic and add and/or zext operations. In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests. Auto-upgrade the old intrinsics.
llvm-svn: 112941
- Add patterns to match the following MMX builtins:
* __builtin_ia32_vec_init_v8qi
* __builtin_ia32_vec_init_v4hi
* __builtin_ia32_vec_init_v2si
* __builtin_ia32_vec_ext_v2si
These builtins do not correspond to a single MMX instruction. They will have
to be lowered -- most likely in the back-end.
llvm-svn: 112881
I'm sure it is harmless. Original commit message:
If PrototypeValue is erased in the middle of using the SSAUpdator
then the SSAUpdator may access freed memory. Instead, simply pass
in the type and name explicitly, which is all that was used anyway.
llvm-svn: 112810
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests. Add auto-upgrade support for the old intrinsics.
llvm-svn: 112773
moment, as there's a testcase that uses it and expects it
to be subject to optimizations; we won't be doing that.
Some adjustments based on feedback from Bill.
llvm-svn: 112754
of a base class.
This makes it possible to unregister the file from FilesToRemove when
the file is done. Also, this eliminates the need for
formatted_tool_output_file.
llvm-svn: 112706
available in normal llvm operators. We aren't going to
use those for MMX any more because it's unsafe for the
optimizers to synthesize new MMX instructions.
llvm-svn: 112685
and output the dwarf line number tables. This takes the current loc info after
an instruction is assembled and saves the needed info into an object, kept in
a vector for each section. These objects will be used for the final patch to
build and emit the encoded dwarf line number tables. Again for now this is only
in the Mach-O streamer but at some point will move to a more generic place.
llvm-svn: 112668
any more. I plan to reimplement alloca promotion using SSAUpdater later.
It looks like Bill's URoR logic really always needs domtree, so the pass
now always asks for domtree info.
llvm-svn: 112597
AssertingVH so we get a violent explosion if the pointer dangles.
2) Fix AliasSetTracker::deleteValue to remove call sites with
by-pointer comparisons instead of by-alias queries. Using
findAliasSetForCallSite can cause alias sets to get merged
when they shouldn't, and can also miss alias sets when the
call is readonly.
#2 fixes PR6889, which only repros with a .c file :(
llvm-svn: 112452
since none of them use it. With this, we now only run
domfrontier (an N^2 analysis) 3 times at clang -O3: once for
"early" per-function cleanup, once at the start of the
per-function pipeline to support SRoA, and once late because
the EHPrepare class uses it.
EHPrepare needs to stop using it; this is silly and wasteful.
llvm-svn: 112420
fix: add a flag to MapValue and friends which indicates whether
any module-level mappings are being made. In the common case of
inlining, no module-level mappings are needed, so MapValue doesn't
need to examine non-function-local metadata, which can be very
expensive in the case of a large module with really deep metadata
(e.g. a large C++ program compiled with -g).
This flag is a little awkward; perhaps eventually it can be moved
into the ClonedCodeInfo class.
llvm-svn: 112190
I think there are good reasons to change this, but in the interests
of short-term stability, make SmallVector<...,0> reserve non-zero
capacity in its constructors. This means that SmallVector<...,0>
uses more memory than SmallVector<...,1> and should really only be
used (unless/until this workaround is removed) by clients that
care about using SmallVector with an incomplete type.
llvm-svn: 112147
expanding: e.g. <2 x float> -> <4 x float> instead of -> 2 floats. This
affects two places in the code: handling cross block values and handling
function return and arguments. Since vectors are already widened by
LegalizeTypes, this gives us much better code and unblocks the x86-64 ABI
and SPU ABI work.
For example, this (which is a silly example of a cross-block value):
define <4 x float> @test2(<4 x float> %A) nounwind {
%B = shufflevector <4 x float> %A, <4 x float> undef, <2 x i32> <i32 0, i32 1>
%C = fadd <2 x float> %B, %B
br label %BB
BB:
%D = fadd <2 x float> %C, %C
%E = shufflevector <2 x float> %D, <2 x float> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
ret <4 x float> %E
}
Now compiles into:
_test2: ## @test2
## BB#0:
addps %xmm0, %xmm0
addps %xmm0, %xmm0
ret
previously it compiled into:
_test2: ## @test2
## BB#0:
addps %xmm0, %xmm0
pshufd $1, %xmm0, %xmm1
## kill: XMM0<def> XMM0<kill> XMM0<def>
insertps $0, %xmm0, %xmm0
insertps $16, %xmm1, %xmm0
addps %xmm0, %xmm0
ret
This implements rdar://8230384
llvm-svn: 112101
needed parsing for the .loc directive and saves the current info from that
into the context. The next patch will take the current loc info after an
instruction is assembled and save that info into a vector for each section for
use to build the line number tables. The patch after that will encode the info
from those vectors into the output file as the dwarf line tables.
llvm-svn: 111956
which does the same thing. This eliminates redundant code and
handles MDNodes better. MDNode linking still doesn't fully
work yet though.
llvm-svn: 111941
- Cache used characters in a bitset to reduce memory overhead to just 32 bytes
(sketched below).
- On my core2 this code is faster except when the checked string is very short
(smaller than the list of delimiters).
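A sketch of the bitset approach mentioned above (plain C++ over std::string,
not the actual StringRef code; names are hypothetical):
#include <bitset>
#include <cstddef>
#include <string>
// Mark each delimiter once (256 bits = 32 bytes), then scan the haystack
// with O(1) membership tests.
std::size_t findFirstOf(const std::string &Haystack, const std::string &Chars) {
  std::bitset<256> CharBits;
  for (unsigned char C : Chars)
    CharBits.set(C);
  for (std::size_t I = 0, E = Haystack.size(); I != E; ++I)
    if (CharBits.test(static_cast<unsigned char>(Haystack[I])))
      return I;
  return std::string::npos;
}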
llvm-svn: 111817
general idea here is to have a group of x86 target specific nodes which are
going to be selected during lowering and then directly matched in isel.
The commit includes the addition of those specific nodes and a *bunch* of
patterns, and incrementally we're going to switch between them and what we
have right now. Both the patterns and target specific nodes can change as
we move forward with this work.
llvm-svn: 111691
It's similar to "linker_private_weak", but it's known that the address of the
object is not taken. For instance, functions that had an inline definition, but
the compiler decided not to inline it. Note, unlike linker_private and
linker_private_weak, linker_private_weak_def_auto may have only default
visibility. The symbols are removed by the linker from the final linked image
(executable or dynamic library).
llvm-svn: 111684
not part of the IR, are not uniqued, and may be safely RAUW'd.
This replaces a variety of alternate mechanisms for achieving
the same effect.
llvm-svn: 111681
functionality that most command-line tools need: ensuring that the
output file gets deleted if the tool is interrupted or encounters an
error.
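A minimal sketch of the idea (not the real tool_output_file API; names are
hypothetical): an RAII wrapper that removes the output file in its destructor
unless the tool got far enough to call keep():
#include <cstdio>
#include <string>
#include <utility>
class ScopedOutputFile {
  std::string Path;
  bool Keep = false;
public:
  explicit ScopedOutputFile(std::string P) : Path(std::move(P)) {}
  void keep() { Keep = true; }      // call only after a fully successful run
  ~ScopedOutputFile() {
    if (!Keep)
      std::remove(Path.c_str());    // interrupted or failed runs leave no file
  }
};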
llvm-svn: 111595
base registers were required. This will allow for slightly better packing
of the locals when alignment padding is necessary after callee saved registers.
llvm-svn: 111508
constructed with an output filename of "-". In particular, allow the
file descriptor to be closed, and close the file descriptor in the
destructor if it hasn't been explicitly closed already, to ensure
that any write errors are detected.
llvm-svn: 111436
We must complete the DFS, otherwise we might miss needed phi-defs, and
prematurely color live ranges with a non-dominating value.
This is not a big deal since we get to color more of the CFG and the next
mapValue call will be faster.
llvm-svn: 111397
Nothing fancy, just ask the target if any currently available base reg
is in range for the instruction under consideration and use the first one
that is. Placeholder ARM implementation simply returns false for now.
ongoing saga of rdar://8277890
llvm-svn: 111374
the local block. Resolve references to those indices to a new base register.
For simplification and testing purposes, a new virtual base register is
allocated for each frame index being resolved. The result is truly horrible,
but correct, code that's good for exercising the new code paths.
Next up is adding thumb1 support, which should be very simple. Following that
will be adding base register re-use and implementing a reasonable ARM
heuristic for when a virtual base register should be generated at all.
llvm-svn: 111315
whether to allocate a virtual frame base register to resolve the frame
index reference in it. Implement a simple version for ARM to aid debugging.
In LocalStackSlotAllocation, scan the function for frame index references
to local frame indices and ask the target whether to allocate virtual
frame base registers for any it encounters. Purely infrastructural for
debug output. Next step is to actually allocate base registers, then add
intelligent re-use of them.
rdar://8277890
llvm-svn: 111262
a Pass abstraction, since that's the level it's actually used at.
Rename Pass' dumpPassStructure to dumpPass.
This eliminates an awkward use of getAsPass() to convert a PMDataManager*
into a Pass* just to permit a dumpPassStructure call.
llvm-svn: 111199
mapping. Have the local block track its alignment requirement, and then
apply that when the block itself is allocated. Previously, offsets could
get adjusted in PEI to be different, relative to one another, than the
block allocation thought they would be, which defeats the point of doing
the allocation this way. Continuing rdar://8277890
llvm-svn: 111197
Introduce a helper method to add a section to the end of a layout. This
will be used by the ELF ObjectWriter code to add the metadata sections
(symbol table, etc) to the end of an object file.
llvm-svn: 111171
expression being loop invariant is not equivalent to the property of
properly dominating the loop header.
Other optimizations have also made this optimization less important.
llvm-svn: 111160
implementations of equality comparison and hash computation. This
can be used to optimize node lookup by avoiding creating lots of
temporary ID values just for hashing and comparison purposes.
llvm-svn: 111130
- Eliminate redundant successors.
- Convert an indirectbr with one successor into a direct branch.
Also, generalize SimplifyCFG so that it can be run on a function entry block.
It knows quite a few simplifications which are applicable to the entry
block, and it only needs a few checks to avoid trouble with the entry block.
llvm-svn: 111060
experimental pass that allocates locals relative to one another before
register allocation and then assigns them to actual stack slots as a block
later in PEI. This will eventually allow targets with limited index offset
range to allocate additional base registers (not just FP and SP) to
more efficiently reference locals, as well as handle situations where
locals cannot be referenced via SP or FP at all (dynamic stack realignment
together with variable sized objects, for example). It's currently
incomplete and almost certainly buggy. Work in progress.
Disabled by default and gated via the -enable-local-stack-alloc command
line option.
rdar://8277890
llvm-svn: 111059
target triple and straightens it out. This does less than gcc's
config.sub script; for example, it turns i386-mingw32 into i386--mingw32, not
i386-pc-mingw32, but it does a decent job of turning funky triples into
something that the rest of the Triple class can understand. The plan
is to use this to canonicalize triples when they are first provided
by users, and have the rest of LLVM only deal with canonical triples.
Once this is done the special case workarounds in the Triple constructor
can be removed, making the class more regular and easier to use. The
comments and unittests for the Triple class are already adjusted in this
patch appropriately for this brave new world of increased uniformity.
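A hedged sketch of the straightening step for the i386-mingw32 example (plain
C++, not the real Triple code; the component-placement rule here is a naive
assumption):
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>
// Split on '-', pad to arch-vendor-os, e.g. "i386-mingw32" -> "i386--mingw32".
std::string normalizeTriple(const std::string &T) {
  std::vector<std::string> Parts;
  std::stringstream SS(T);
  for (std::string Piece; std::getline(SS, Piece, '-');)
    Parts.push_back(Piece);
  // Naively treat a two-piece triple as arch-os and leave the vendor empty;
  // the real code recognizes known vendors and OSes and places pieces accordingly.
  if (Parts.size() == 2)
    Parts.insert(Parts.begin() + 1, "");
  while (Parts.size() < 3)
    Parts.push_back("");
  std::string Out = Parts[0];
  for (std::size_t I = 1; I < Parts.size(); ++I)
    Out += "-" + Parts[I];
  return Out;
}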
llvm-svn: 110909
- remove ashr which never worked.
- fix lshr and shl and add tests.
- remove dead function "intersect1Wrapped".
- add a new sub method to subtract ranges, with test.
llvm-svn: 110861
that many of these things, so the memory savings isn't significant,
and there are now situations where there can be alignments greater
than 128.
llvm-svn: 110836
When splitting a live range, the new registers have fewer uses and the
permissible register class may be less constrained. Recompute the register class
constraint from the uses of new registers created for a split. This may let them
be allocated from a larger set, possibly avoiding a spill.
llvm-svn: 110703
register at a time. This turns out to be slightly faster than iterating over
instructions, but more importantly, it allows us to compute spill weights for
new registers created after the spill weight pass has run.
Also compute the allocation hint at the same time as the spill weight. This
allows us to use the spill weight as a cost metric for copies, and choose the
most profitable hint if there is more than one possibility.
The new hints provide a very small (< 0.1%) but universal code size improvement.
llvm-svn: 110631
previously collected info from the .file directives and outputs the encoded
bytes for it. For now this is only in the Mach-O streamer but at some point
will move to a more generic place.
llvm-svn: 110617
relatively expensive comparison analyzer on each instruction. Also rename the
comparison analyzer method to something more in line with what it actually does.
This pass will eventually be folded into the Machine CSE pass.
llvm-svn: 110539
After heavy editing of a live interval, it is much easier to simply renumber the
live values instead of trying to keep track of the unused ones.
llvm-svn: 110463
Without this what was happening was:
* R3 is not marked as "used"
* ARM backend thinks it has to save it to the stack because of vaarg
* Offset computation correctly ignores it
* Offsets are wrong
llvm-svn: 110446
need the Compare flag after all.
--- Reverse-merging r109901 into '.':
U include/llvm/Target/TargetInstrDesc.h
U include/llvm/Target/Target.td
U utils/TableGen/InstrInfoEmitter.cpp
U utils/TableGen/CodeGenInstruction.cpp
U utils/TableGen/CodeGenInstruction.h
llvm-svn: 110424
This pass tries to remove comparison instructions when possible. For instance,
if you have this code:
sub r1, 1
cmp r1, 0
bz L1
and "sub" either sets the same flag as the "cmp" instruction or could be
converted to set the same flag, then we can eliminate the "cmp" instruction
altogether. This is important for ARM where the ALU instructions could set the
CPSR flag, but need a special suffix ('s') to do so.
llvm-svn: 110423
be killed before being redefined.
These checks are usually disabled, and usually fail when enabled. We de facto
allow live registers to be redefined without a kill; the corresponding
assertions in RegScavenger were removed long ago.
llvm-svn: 110362
eliminate several const_casts.
Make CallSite implicitly convertible to ImmutableCallSite.
Rename the getModRefBehavior for intrinsic IDs to
getIntrinsicModRefBehavior to avoid overload ambiguity with CallSite,
which happens to be implicitly convertible to bool.
llvm-svn: 110155
Add support for using the FPSCR in conjunction with the vcvtr instruction, for controlling fp to int rounding.
Add support for the FLT_ROUNDS_ node now that the FPSCR is exposed.
llvm-svn: 110152
as soon as we properly codegen the simple vector operations in clang, remove the
unnecessary built-ins/intrinsics from clang and llvm.
llvm-svn: 110094
of Value deletions and RAUWs, instead of relying on ScalarEvolution's
Scalars map being notified, as that's complicated at best, and
insufficient in general.
This means SCEVUnknown needs a non-trivial destructor, so introduce
a mechanism to allow ScalarEvolution to locate all the SCEVUnknowns.
llvm-svn: 110086
exactly what bugpoint expected it to do.
There was also only one user of
BlockExtractorPass(const std::vector<BasicBlock*> &B), so just remove it and
make BlockExtractorPass read BlockFile.
This fixes bugpoint's block extraction.
Nick, please review.
llvm-svn: 109936
later to identify and possibly remove superfluous compare instructions -- those
that are testing for and setting a status flag that should already be set.
llvm-svn: 109901
handles with a pointer to the containing map. When a map is copied, these
pointers need to be corrected to point to the new map. If not, then consider
the case of a map M1 which maps a value V to something. Create a copy M2 of
M1. At this point there are two value handles on V, one representing V as a
key in M1, the other representing V as a key in M2. But both value handles
point to M1 as the containing map. Now delete V. The value handles remove
themselves from their containing map (which destroys them), but only the first
value handle is successful: the second one cannot remove itself from M1 as
(once the first one has removed itself) there is nothing there to remove; it
is therefore not destroyed. This causes an assertion failure "All references
to V were not removed?".
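A minimal sketch of the fix (hypothetical names, plain C++): each handle
records its containing map, so the map's copy constructor must re-point the
copied handles at the new map rather than at the one they were copied from:
#include <vector>
struct HandleList;
struct Handle {
  HandleList *Owner;  // the container this handle believes it lives in
  int Key;
};
struct HandleList {
  std::vector<Handle> Handles;
  HandleList() = default;
  HandleList(const HandleList &Other) : Handles(Other.Handles) {
    for (Handle &H : Handles)
      H.Owner = this;  // without this fix-up, copies still point at Other
  }
};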
llvm-svn: 109851
extend it to handle the case where multiple RAUWs affect a single
SCEVUnknown.
Add a ScalarEvolution unittest to test for this situation.
llvm-svn: 109705
the info from the .file directive and makes file and directory tables that
will eventually be put out as part of the dwarf info in the output file.
llvm-svn: 109651
alloca instructions (constrained by their internal encoding),
and add error checking for it. Fix an instcombine bug which
generated huge alignment values (null is infinitely aligned).
This fixes undefined behavior noticed by John Regehr.
llvm-svn: 109643
- Designed as a simple wrapper to allow clients to attempt to catch crashes
(memory errors, assertion violations, etc.) and do some kind of recovery.
- Currently doesn't actually attempt to catch crashes.
llvm-svn: 109586
protectors, to be near the stack protectors on the stack. Accomplish this by
tagging the stack object with a predicate that indicates that it would trigger
this. In the prolog-epilog inserter, assign these objects to the stack after the
stack protector but before the other objects.
llvm-svn: 109481
that the values they refer to aren't being deleted underneath them.
Make sure these containers get cleared by clear(), which IndVarSimplify
and LSR both use before deleting instructions.
llvm-svn: 109478
Simplifying use_iterators by dereferencing
is not a good idea. The codebase does not depend
on this any more, and it may introduce hidden
runtime cost. If you get compile errors, please
dereference your iterator before passing to cast<>
(and friends).
Also: please consider caching the result of
operator* and reusing that instead of dereferencing
many times.
llvm-svn: 109425
dependence on DominanceFrontier. Instead, add an explicit DominanceFrontier
pass in StandardPasses.h to ensure that it gets scheduled at the right
time.
Declare that loop unrolling preserves ScalarEvolution, and shuffle some
getAnalysisUsages.
This eliminates one LoopSimplify and one LCCSA run in the standard
compile opts sequence.
llvm-svn: 109413
appropriate for targets without detailed instruction itineraries.
The scheduler schedules for increased instruction level parallelism in
low register pressure situation; it schedules to reduce register pressure
when the register pressure becomes high.
On x86_64, this is a win for all tests in CFP2000. It also sped up 256.bzip2
by 16%.
llvm-svn: 109300
it's too late to start backing off aggressive latency scheduling when most
of the registers are in use, so the threshold should be a bit tighter.
- Correctly handle live-outs and extract_subreg etc.
- Enable register pressure aware scheduling by default for hybrid scheduler.
For ARM, this is almost always a win on # of instructions. It's runtime
neutral for most of the tests. But for some kernels with high register
pressure it can be a huge win. e.g. 464.h264ref reduced number of spills by
54 and sped up by 20%.
llvm-svn: 109279
is not a good idea. The codebase does not depend
on this any more, and it may introduce hidden
runtime cost. If you get compile errors, please
dereference your iterator before passing to cast<>
(and friends).
Also: please consider caching the result of
operator* and reusing that instead of dereferencing
many times.
llvm-svn: 109220
with an existing allocator. The interesting use case of this
is that it allows "StringMap<whatever, BumpPtrAllocator&>" for
when you want to allocate out of a preexisting bump pointer
allocator owned by someone else.
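A hedged usage sketch of that pattern (assuming only what the commit text
describes: a StringMap whose allocator template parameter is a reference and
whose constructor takes the existing allocator):
#include "llvm/ADT/StringMap.h"
#include "llvm/Support/Allocator.h"
void addOption(llvm::BumpPtrAllocator &SharedAlloc) {
  // The map's string data is carved out of SharedAlloc, which someone else owns.
  llvm::StringMap<int, llvm::BumpPtrAllocator &> Options(SharedAlloc);
  Options["answer"] = 42;
}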
llvm-svn: 109213
ARM/PPC/MSP430-specific code (which are the only targets that
implement the hook) can directly reference their target-specific
InstrInfo classes.
llvm-svn: 109171
The RegionInfo pass detects single entry single exit regions in a function,
where a region is defined as any subgraph that is connected to the remaining
graph at only two spots.
Furthermore, a hierarchical region tree is built.
Use it by calling "opt -regions analyze" or "opt -view-regions".
llvm-svn: 109089
Make MDNode::destroy private.
Fix the one thing that used MDNode::destroy, outside of MDNode itself.
One should never delete or destroy an MDNode explicitly. MDNodes
implicitly go away when there are no references to them (implementation
details aside).
llvm-svn: 109028
to parse Mach-O files. All defines have been renamed to not conflict with
#defines in the Mach header files; all structures were left named the same but
are in the llvm::MachO namespace.
llvm-svn: 108953
bitcode file, so that two bitcode files where the same metadata kind
name happens to have been assigned a different ID can still be
linked together.
Eliminate the restriction that metadata kind IDs can't be 0.
Change MD_dbg from 1 to 0, because we can now, and because it's
less mysterious that way.
llvm-svn: 108939
better in the llvm world. Among other things, this changes:
1. The guts of libedis are now moved into lib/MC/MCDisassembler
2. llvm-mc now depends on lib/MC/MCDisassembler, not tools/edis,
so edis and mc don't have to be built in series.
3. lib/MC/MCDisassembler no longer depends on the C api, the C
API depends on it.
4. Various code cleanup changes.
There is still a lot to be done to make edis fit with the llvm
design, but this is an incremental step in the right direction.
llvm-svn: 108869