Commit Graph

12187 Commits

Author SHA1 Message Date
Eli Friedman 33c133919a Fix a silly mistake in r130338.
llvm-svn: 130360
2011-04-28 00:42:03 +00:00
Rafael Espindola ce83fc3463 Remove unnecessary argument.
llvm-svn: 130343
2011-04-27 23:17:57 +00:00
Rafael Espindola 08704349da Rename getPersonalityPICSymbol to getCFIPersonalitySymbol, document it, and
give it a bit more responsibility. Also implement it for MachO.

If hacked to use cfi, 32 bit MachO will produce

.cfi_personality 155, L___gxx_personality_v0$non_lazy_ptr

and 64 bit will produce

.cfi_personality ___gxx_personality_v0

The general idea is that .cfi_personality gets passed the final symbol. It is
up to codegen to produce it if using indirect representation (like 32 bit
MachO), but it is up to MC to decide which relocations to create.

llvm-svn: 130341
2011-04-27 23:08:15 +00:00
Devang Patel 77dc541b00 Simplify handling of variables with complex address (i.e. blocks variables)
llvm-svn: 130339
2011-04-27 22:45:24 +00:00
Eli Friedman 406c471b69 Make the fast-isel code for literal 0.0 a bit shorter/faster, since 0.0 is common. rdar://problem/9303592 .
llvm-svn: 130338
2011-04-27 22:41:55 +00:00
Eli Friedman 121d27e9e4 Remove unused function.
llvm-svn: 130337
2011-04-27 22:21:02 +00:00
Rafael Espindola 3989776770 Fix indentation.
llvm-svn: 130331
2011-04-27 21:29:52 +00:00
Devang Patel e3745fdcf3 Revert r130178. It turned out to be not the optimal path to emit complex location expressions.
llvm-svn: 130326
2011-04-27 20:29:27 +00:00
Evan Cheng 9808d31b9e The if converter was being too cute. It looked for root BBs (which don't have
successors) and used an inverse depth-first search to traverse the BBs. However
that doesn't work when the CFG has infinite loops. Simply doing a linear
traversal of all BBs works just fine.

rdar://9344645

llvm-svn: 130324
2011-04-27 19:32:43 +00:00
Jakob Stoklund Olesen 71d3b895ba Also add <imp-def> operands for defined and dead super-registers when rewriting.
We cannot rely on the <imp-def> operands added by LiveIntervals in all cases as
demonstrated by the test case.

llvm-svn: 130313
2011-04-27 17:42:31 +00:00
Jakob Stoklund Olesen eef2327360 Add a safe-guard against repeated splitting for some rare cases.
The number of blocks covered by a live range must be strictly decreasing when
splitting, otherwise we can't allow repeated splitting.

llvm-svn: 130249
2011-04-26 22:33:12 +00:00
Evan Cheng 1355bbdd11 Be careful about scheduling nodes above previous calls. It increases usage of
callee-saved registers and introduces copies. Only allow it if scheduling
a node above calls would end up lessening register pressure.

Call operands also have added ABI restrictions for register allocation, so be
extra careful when hoisting them above calls.

rdar://9329627

llvm-svn: 130245
2011-04-26 21:31:35 +00:00
Rafael Espindola a4fa5ce911 Print the label if we will use it in debug_frame.
llvm-svn: 130232
2011-04-26 19:26:53 +00:00
Devang Patel ba5fbf17df Refactor code. Keep dwarf register operation selection logic at one place.
llvm-svn: 130231
2011-04-26 19:06:18 +00:00
Jakob Stoklund Olesen 3b7b7bcc7f Use the new TRI->getLargestLegalSuperClass hook to constrain register class inflation.
This has two effects: 1. We never inflate to a larger register class than what
the sub-target can handle. 2. Completely unconstrained virtual registers get the
largest possible register class.

llvm-svn: 130229
2011-04-26 18:52:36 +00:00
Dan Gohman 7da91aee83 Fast-isel support for simple inline asms.
llvm-svn: 130205
2011-04-26 17:18:34 +00:00
Chris Lattner 189ca1498f don't emit the symbol name twice for local bss and common
symbols.  For example, don't emit:
        .comm   _i,4,2                  ## @i
                                        ## @i

instead emit:
        .comm   _i,4,2                  ## @i

llvm-svn: 130192
2011-04-26 06:14:13 +00:00
Evan Cheng 2f64754031 Fix typo
llvm-svn: 130190
2011-04-26 04:57:37 +00:00
Rafael Espindola 80cb3cb1d6 Print all the moves at a given label instead of just the first one.
Remove previous DwarfCFI hack.

llvm-svn: 130187
2011-04-26 03:58:56 +00:00
Devang Patel cae2fbd6fc Let dwarf writer allocate extra space in the debug location expression. This space, if requested, will be used for complex addresses of the Blocks' variables.
llvm-svn: 130178
2011-04-26 00:12:46 +00:00
Devang Patel 2606f4d6d2 Rename a local variable.
llvm-svn: 130171
2011-04-25 23:05:21 +00:00
Devang Patel 8ce24133fd Rename a method to match what it really does.
s/addVariableAddress/addFrameVariableAddress/g

llvm-svn: 130170
2011-04-25 23:02:17 +00:00
Devang Patel 2688e4aba6 Do not drop a variable's complex address if it is not based on frame base.
Observed this while reading code, so I do not have a test case handy here.

llvm-svn: 130167
2011-04-25 22:52:55 +00:00
Devang Patel 734f2218ac A dbg.declare may not be in the entry block, even if it is referring to an incoming argument. However, it is appropriate to emit a DBG_VALUE referring to this incoming argument in the entry block of the MachineFunction.
llvm-svn: 130129
2011-04-25 16:33:52 +00:00
Rafael Espindola a076199e71 Simplify the logic. Noticed by aKor.
llvm-svn: 130116
2011-04-24 19:55:34 +00:00
Rafael Espindola 5c54ecc9af Synchronize the conditions for producing a .cfi_startproc and a .cfi_endproc.
Fixes PR9787.

llvm-svn: 130115
2011-04-24 19:00:34 +00:00
Sebastian Redl b8a62aa3c9 Give SplitKit.h a header guard.
llvm-svn: 130095
2011-04-24 15:46:51 +00:00
Jay Foad 1a180156b6 Remove unused STL header includes.
llvm-svn: 130068
2011-04-23 19:53:52 +00:00
Owen Anderson dd450b86cf Teach FastISel to deal with instructions that have two immediate operands.
llvm-svn: 130033
2011-04-22 23:38:06 +00:00
Devang Patel 1d6bbd41aa Let front-end tie subprogram declaration with subprogram definition directly.
llvm-svn: 130028
2011-04-22 23:10:17 +00:00
Jakob Stoklund Olesen 032891b718 Always compare the cost of region splitting with the cost of per-block splitting.
Sometimes it is better to split per block, and we missed those cases.

llvm-svn: 130025
2011-04-22 22:47:40 +00:00
Chris Lattner 6d277517d1 Recommit the fix for rdar://9289512 with a couple tweaks to
fix bugs exposed by the gcc dejagnu testsuite:
1. The load may actually be used by a dead instruction, which
   would cause an assert.
2. The load may not be used by the current chain of instructions,
   and we could move it past a side-effecting instruction. Change
   how we process uses to define the problem away.

llvm-svn: 130018
2011-04-22 21:59:37 +00:00
Benjamin Kramer 341c11da3b DAGCombine: fold "(zext x) == C" into "x == (trunc C)" if the trunc is lossless.
On x86 this allows us to fold a load into the cmp, greatly reducing register pressure.
  movzbl	(%rdi), %eax
  cmpl	$47, %eax
->
  cmpb	$47, (%rdi)

This shaves 8k off gcc.o on i386. I'll leave applying the patch in README.txt to Chris :)

llvm-svn: 130005
2011-04-22 18:47:44 +00:00
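For illustration, a minimal C++ source pattern (hypothetical, not from the commit) that produces the folded DAG: the byte load is zero-extended before the compare, and after this combine the comparison can stay at i8 width and fold the load as shown above.

    // Hypothetical example: *p is an i8 load zero-extended to i32 and
    // compared against 47; the combine lets x86 select a single cmpb
    // against memory instead of movzbl + cmpl.
    bool isSlash(const unsigned char *p) {
      return *p == 47;
    }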
Devang Patel ad45d911bb Do not leak argument's DbgVariables.
llvm-svn: 130004
2011-04-22 18:09:57 +00:00
Evan Cheng 8ea3af47bd Typo
llvm-svn: 129970
2011-04-22 01:40:20 +00:00
Bill Wendling c14d7322ee Branch folding is folding a landing pad into a regular BB.
An exception is thrown via a call to __cxa_throw, which we don't expect to
return. Therefore, the "true" part of the invoke goes to a BB that has
'unreachable' as its only instruction. This is lowered into an empty MachineBB.
The landing pad for this invoke, however, is directly after the "true" MBB.
When the empty MBB is removed, the landing pad is directly below the BB with the
invoke call. The unconditional branch is removed and then the two blocks are
merged together.

The testcase is too big for a regression test.
<rdar://problem/9305728>

llvm-svn: 129965
2011-04-22 01:07:09 +00:00
Devang Patel 2266aa84a1 Refactor.
llvm-svn: 129938
2011-04-21 21:07:35 +00:00
Matt Beaumont-Gay 70597d4e50 Don't recycle loop variables.
llvm-svn: 129928
2011-04-21 19:46:23 +00:00
Jakob Stoklund Olesen 6a663b8dc8 Allow allocatable ranges from global live range splitting to be split again.
These intervals are allocatable immediately after splitting, but they may be
evicted because of later splitting. This is rare, but when it happens they
should be split again.

The remainder intervals that cannot be allocated after splitting still move
directly to spilling.

SplitEditor::finish can optionally provide a mapping from new live intervals
back to the original interval indexes returned by openIntv().

Each original interval index can map to multiple new intervals after connected
components have been separated. Dead code elimination may also add existing
intervals to the list.

The reverse mapping allows the SplitEditor client to treat the new intervals
differently depending on the split region they came from.

llvm-svn: 129925
2011-04-21 18:38:15 +00:00
Devang Patel 28f2719d83 Add comment in output stream.
llvm-svn: 129921
2011-04-21 17:50:24 +00:00
Daniel Dunbar 6309828206 Revert r129656, "Fix rdar://9289512 - not folding load into compare at -O0...",
which broke a couple GCC test suite tests at -O0.

llvm-svn: 129914
2011-04-21 16:14:46 +00:00
Jakob Stoklund Olesen 86e53ced08 Add debug output for rematerializable instructions.
llvm-svn: 129883
2011-04-20 22:14:20 +00:00
Jakob Stoklund Olesen 90d79bdcd2 Permit remat when a virtual register has multiple defs.
TII::isTriviallyReMaterializable() shouldn't depend on any properties of the
register being defined by the instruction. Rematerialization is going to create
a new virtual register anyway.

llvm-svn: 129882
2011-04-20 22:14:17 +00:00
Jakob Stoklund Olesen 0e34c1dfac Prefer cheap registers for busy live ranges.
On the x86-64 and thumb2 targets, some registers are more expensive to encode
than others in the same register class.

Add a CostPerUse field to the TableGen register description, and make it
available from TRI->getCostPerUse. This represents the cost of a REX prefix or a
32-bit instruction encoding required by choosing a high register.

Teach the greedy register allocator to prefer cheap registers for busy live
ranges (as indicated by spill weight).

llvm-svn: 129864
2011-04-20 18:19:48 +00:00
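As a rough sketch of how a heuristic might consult the new hook (assuming the hook has a shape along the lines of unsigned getCostPerUse(unsigned Reg) on TargetRegisterInfo; the helper below is hypothetical):

    #include "llvm/Target/TargetRegisterInfo.h"
    using namespace llvm;

    // Hypothetical helper: between two otherwise-equal candidates for a
    // busy live range, prefer the register that is cheaper to encode
    // (e.g. avoids a REX prefix on x86-64 or a 32-bit encoding on Thumb2).
    static unsigned pickCheaper(const TargetRegisterInfo &TRI,
                                unsigned RegA, unsigned RegB) {
      return TRI.getCostPerUse(RegA) <= TRI.getCostPerUse(RegB) ? RegA : RegB;
    }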
Stuart Hastings 45fe3c38c5 ARM byval support. Will be enabled by another patch to the FE. <rdar://problem/7662569>
llvm-svn: 129858
2011-04-20 16:47:52 +00:00
Rafael Espindola e473aaf540 Remove unused arguments.
llvm-svn: 129844
2011-04-20 03:08:09 +00:00
Eric Christopher bcaedb5ce0 Rewrite the expander for umulo/smulo to remember to sign extend the input
manually and pass all (now) 4 arguments to the mul libcall. Add a new
ExpandLibCall for just this (copied gratuitously from type legalization).

Fixes rdar://9292577

llvm-svn: 129842
2011-04-20 01:19:45 +00:00
Daniel Dunbar cd01ed5bd6 ADT/Triple: Rename isOSX... methods to isMacOSX for consistency with the OS
triple component.

llvm-svn: 129838
2011-04-20 00:14:25 +00:00
Daniel Dunbar 4a7783b0c2 CodeGen: Eliminate a use of getDarwinMajorNumber().
- There is a minor semantic change here (evidenced by the test change) for
   Darwin triples that have no version component. I debated changing the default
   behavior of isOSVersionLT, but decided it made more sense for triples to be
   explicit.

llvm-svn: 129802
2011-04-19 20:32:39 +00:00
Stuart Hastings 468086d5e1 Delete unnecessary variable. <rdar://problem/7662569>
llvm-svn: 129796
2011-04-19 20:09:38 +00:00
Bob Wilson df612ba006 Avoid write-after-write issue hazards for Cortex-A9.
Add an avoidWriteAfterWrite() target hook to identify register classes that
suffer from write-after-write hazards. For those register classes, try to avoid
writing the same register in two consecutive instructions.

This is currently disabled by default.  We should not spill to avoid hazards!
The command line flag -avoid-waw-hazard can be used to enable waw avoidance.

llvm-svn: 129772
2011-04-19 18:11:45 +00:00
Jakob Stoklund Olesen af12138d10 Force the greedy register allocator to be linked alongside linear scan.
This means that the new register allocator can be used with 'clang -mllvm -regalloc=greedy'.

llvm-svn: 129764
2011-04-19 17:17:58 +00:00
Eli Friedman bcd09b3a7f SelectBasicBlock is rather slow even when it doesn't do anything; skip the
unnecessary work where possible.

llvm-svn: 129763
2011-04-19 17:01:08 +00:00
Stuart Hastings 0b68c1219f Support nested CALLSEQ_BEGIN/END; necessary for ARM byval support. <rdar://problem/7662569>
llvm-svn: 129761
2011-04-19 16:16:58 +00:00
Chris Lattner 91328b317b Implement support for x86 fastisel of small fixed-sized memcpys, which are generated
en masse for C++ PODs.  On my C++ test file, this cuts the fast isel rejects by 10x
and shrinks the generated .s file by 5%.

llvm-svn: 129755
2011-04-19 05:52:03 +00:00
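An illustrative C++ POD copy of the kind this change targets (hypothetical example): the implicit copy lowers to a small fixed-size memcpy that fast-isel can now select itself instead of rejecting the block back to SelectionDAG.

    // Hypothetical example: assigning one small POD to another emits a
    // fixed-size memcpy (8 bytes here) that x86 fast-isel can now handle.
    struct Point { int x, y; };
    void assign(Point &dst, const Point &src) {
      dst = src;
    }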
Eli Friedman ec138b4b27 Simplify declarations slightly by using typedefs.
llvm-svn: 129720
2011-04-18 21:21:37 +00:00
Devang Patel 17740e70d5 Reduce clutter in asm output. Do not emit source location as comment for each instruction.
llvm-svn: 129715
2011-04-18 20:26:49 +00:00
Jakob Stoklund Olesen 9f294a9e52 Handle spilling around an instruction that has an early-clobber re-definition of
the spilled register.

This is quite common on ARM now that some stores have early-clobber defines.

llvm-svn: 129714
2011-04-18 20:23:27 +00:00
Eric Christopher c37aa0b26a Fix, in a different way, a bug where we were counting the alias sets as
completely used registers for fast allocation. This has us updating
used registers only when we're using that exact register.

Fixes rdar://9207598

llvm-svn: 129711
2011-04-18 19:26:25 +00:00
Chris Lattner 48f75ad678 while we're at it, handle 'sdiv exact' of a power of 2 also,
this fixes a few rejects on c++ iterator loops.

llvm-svn: 129694
2011-04-18 07:00:40 +00:00
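A hypothetical C++ example of where 'sdiv exact' by a power of two shows up: pointer subtraction in iterator loops divides the byte distance by the element size with an exact signed division, which fast-isel can now select as an arithmetic shift.

    // Hypothetical example: (end - begin) divides the byte difference by
    // sizeof(int) with an 'sdiv exact ..., 4', now selectable as a shift.
    long elementCount(const int *begin, const int *end) {
      return end - begin;
    }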
Chris Lattner 562d6e82bd fix rdar://9297011 - udiv by power of two causing fast-isel rejects
llvm-svn: 129693
2011-04-18 06:55:51 +00:00
Chris Lattner b53ccb8e36 1. merge fast-isel-shift-imm.ll into fast-isel-x86-64.ll
2. implement rdar://9289501 - fast isel should fold trivial multiplies to shifts
3. teach tblgen to handle shift immediates that are different sizes than the 
   shifted operands, eliminating some code from the X86 fast isel backend.
4. Have FastISel::SelectBinaryOp use (the poorly named) FastEmit_ri_ function
   instead of FastEmit_ri to simplify code.

llvm-svn: 129666
2011-04-17 20:23:29 +00:00
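A hypothetical example of the trivial-multiply-to-shift folding mentioned above:

    // Hypothetical example: the multiply by 8 can be selected as a left
    // shift by 3 instead of an integer multiply.
    long scaleBy8(long x) {
      return x * 8;
    }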
Chris Lattner 4832660b4d fix an oversight which caused us to compile the testcase (and other
less trivial things) into a dummy lea.  Before we generated:

_test:                                  ## @test
	movq	_G@GOTPCREL(%rip), %rax
	leaq	(%rax), %rax
	ret

now we produce:

_test:                                  ## @test
	movq	_G@GOTPCREL(%rip), %rax
	ret

This is part of rdar://9289558

llvm-svn: 129662
2011-04-17 17:12:08 +00:00
Chris Lattner 045c43855c Fix rdar://9289512 - not folding load into compare at -O0
The basic issue here is that bottom-up isel is matching the branch
and compare, and was failing to fold the load into the branch/compare
combo.  Fixing this (by allowing folding into any instruction of a
sequence that is selected) allows us to produce things like:


cmpb    $0, 52(%rax)
je      LBB4_2

instead of:

movb    52(%rax), %cl
cmpb    $0, %cl
je      LBB4_2

This makes the generated -O0 code run a bit faster, but also speeds up
compile time by putting less pressure on the register allocator and 
generating less code.

This was one of the biggest classes of missing load folding.  Implementing
this shrinks 176.gcc's c-decl.s (as a random example) by about 4% in (verbose-asm)
line count.

llvm-svn: 129656
2011-04-17 06:35:44 +00:00
Chris Lattner d70ff0d807 split a complex predicate out to a helper function. Simplify two for loops,
which don't need to check for falling off the end of a block *and* end of phi
nodes, since terminators are never phis.

llvm-svn: 129655
2011-04-17 06:03:19 +00:00
Chris Lattner fba7ca63cc fix rdar://9289583 - fast isel should handle non-canonical commutative binops
allowing us to fold the immediate into the 'and' in this case:

int test1(int i) {
  return 8&i;
}

llvm-svn: 129653
2011-04-17 01:16:47 +00:00
Eli Friedman 55b0acd624 PR9055: extend the fix to PR4050 (r70179) to apply to zext and anyext.
Returning a new node makes the code try to replace the old node, which
in the included testcase is killed by CSE.

llvm-svn: 129650
2011-04-16 23:25:34 +00:00
Francois Pichet beb17d9359 Unbreak the MSVC 2010 build.
For further information on this particular issue see: http://connect.microsoft.com/VisualStudio/feedback/details/520043/error-converting-from-null-to-a-pointer-type-in-std-pair

llvm-svn: 129642
2011-04-16 14:20:39 +00:00
Benjamin Kramer 659bfb34ff Remove unused variable.
llvm-svn: 129639
2011-04-16 10:30:47 +00:00
Rafael Espindola a83b177035 Put each personality function in a section. This fixes the gnu ld warning:
error in foo.o; no .eh_frame_hdr table will be created.

llvm-svn: 129635
2011-04-16 03:51:21 +00:00
Evan Cheng b14ce09fca Fix divmod libcall lowering. Convert to {S|U}DIVREM first and then expand the node to a libcall. rdar://9280991
llvm-svn: 129633
2011-04-16 03:08:26 +00:00
Devang Patel 514b4006c2 Introduce support to encode Objective-C property information in debugging information generated for an interface.
llvm-svn: 129624
2011-04-16 00:11:51 +00:00
Rafael Espindola beb74c3f00 Some refactoring suggested by Anton Korobeynikov.
llvm-svn: 129600
2011-04-15 20:32:03 +00:00
Jakob Stoklund Olesen 1af8b4dc92 Teach the SplitKit blitter to handle multiply defined values as well.
The transferValues() function can now handle both singly and multiply defined
values, as long as the resulting live range is known. Only rematerialized values
have their live range recomputed by extendRange().

The updateSSA() function can now insert PHI values in bulk across multiple
values in multiple target registers in one pass. The list of blocks received
from transferValues() is in layout order which seems to work well for the
iterative algorithm. Blocks from extendRange() are still in reverse BFS order,
but this function is used so rarely now that it doesn't matter.

llvm-svn: 129580
2011-04-15 17:24:49 +00:00
Jakob Stoklund Olesen 871f70609a Remember to set flag.
llvm-svn: 129579
2011-04-15 17:24:46 +00:00
Rafael Espindola a01cdb0e37 Add 129518 back with a fix for when we are producing eh just because of debug info.
Change ELF systems to use CFI for producing the EH tables. This reduces the
size of the clang binary in Debug builds from 690MB to 679MB.

llvm-svn: 129571
2011-04-15 15:11:06 +00:00
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
NAKAMURA Takumi b5e3e9dd27 Revert r129518, "Change ELF systems to use CFI for producing the EH tables. This reduces the"
It broke several builds.

llvm-svn: 129557
2011-04-15 03:35:57 +00:00
Owen Anderson a519284fec Fix another instance of the DAG combiner not using the correct type for the RHS of a shift.
llvm-svn: 129522
2011-04-14 17:30:49 +00:00
Rafael Espindola aa2a7cd828 Change ELF systems to use CFI for producing the EH tables. This reduces the
size of the clang binary in Debug builds from 690MB to 679MB.

llvm-svn: 129518
2011-04-14 15:18:53 +00:00
Andrew Trick bfbd972b1f In the pre-RA scheduler, maintain cmp+br proximity.
This is done by pushing physical register definitions close to their
use, which happens to handle flag definitions if they're not glued to
the branch. This seems to be generally a good thing though, so I
didn't need to add a target hook yet.

The primary motivation is to generate code closer to what people
expect and rule out missed opportunity from enabling macro-op
fusion. As a side benefit, we get several 2-5% gains on x86
benchmarks. There is one regression:
SingleSource/Benchmarks/Shootout/lists slows down by 10%. But this is
an independent scheduler bug that will be tracked separately.
See rdar://problem/9283108.

Incidentally, pre-RA scheduling is only half the solution. Fixing the
later passes is tracked by:
<rdar://problem/8932804> [pre-RA-sched] on x86, attempt to schedule CMP/TEST adjacent with condition jump

Fixes:
<rdar://problem/9262453> Scheduler unnecessary break of cmp/jump fusion

llvm-svn: 129508
2011-04-14 05:15:06 +00:00
Chris Lattner 493b3e72f2 sink a call into its only use.
llvm-svn: 129503
2011-04-14 04:12:47 +00:00
Owen Anderson 9c12834eed During post-legalization DAG combining, be careful to only create shifts where the RHS is of the legal type for the new operation.
llvm-svn: 129484
2011-04-13 23:22:23 +00:00
Devang Patel e141234940 Remove extra bytes that were added for gdb. We do not have a good pointer to understand the actual reason behind this fixme. Spot checking suggests that newer gdb does not need this.
llvm-svn: 129461
2011-04-13 19:41:17 +00:00
Jakob Stoklund Olesen cda53febec Stop using dead function.
llvm-svn: 129442
2011-04-13 15:00:11 +00:00
Andrew Trick b53a00d2cb Recommit r129383. PreRA scheduler heuristic fixes: VRegCycle, TokenFactor latency.
Additional fixes:
Do something reasonable for subtargets with generic
itineraries by handling node latency the same as for an empty
itinerary. Now nodes default to unit latency unless an itinerary
explicitly specifies a zero cycle stage or it is a TokenFactor chain.

Original fixes:
UnitsSharePred was a source of randomness in the scheduler: node
priority depended on the queue data structure. I rewrote the recent
VRegCycle heuristics to completely replace the old heuristic without
any randomness. To make the node latency adjustments work, I also
needed to do something a little more reasonable with TokenFactor. I
gave it zero latency to its consumers and always schedule it as low as
possible.

llvm-svn: 129421
2011-04-13 00:38:32 +00:00
Eric Christopher 28f4c729f7 Temporarily revert r129408 to see if it brings the bots back.
llvm-svn: 129417
2011-04-13 00:20:59 +00:00
Eric Christopher d829f43c06 Fix a bug where we were counting the alias sets as completely used
registers for fast allocation.

Fixes rdar://9207598

llvm-svn: 129408
2011-04-12 23:23:14 +00:00
Devang Patel 0e821f4673 I missed this new file in previous commit.
llvm-svn: 129407
2011-04-12 23:21:44 +00:00
Devang Patel 28dce70364 Simplify. There is no need to use a static variable.
llvm-svn: 129406
2011-04-12 23:10:47 +00:00
Devang Patel 13d47f0ddc Do not reuse parameter name.
llvm-svn: 129405
2011-04-12 23:09:06 +00:00
Devang Patel f20c4f715f This mechanical patch moves type handling into CompileUnit from DwarfDebug. In case of multiple compile units in one object file, each compile unit is responsible for its own set of type entries anyway. This refactoring makes this obvious.
llvm-svn: 129402
2011-04-12 22:53:02 +00:00
Eric Christopher de9d58569f Add more comments... err debug statements to the fast allocator.
llvm-svn: 129400
2011-04-12 22:17:44 +00:00
Jakob Stoklund Olesen c49df2c05a SparseBitVector is SLOW.
Use a BitVector instead; we didn't need the smaller memory footprint anyway.
This makes the greedy register allocator 10% faster.

llvm-svn: 129390
2011-04-12 21:30:53 +00:00
Andrew Trick 1b60ad6644 Revert 129383. It causes some targets to hit a scheduler assert.
llvm-svn: 129385
2011-04-12 20:14:07 +00:00
Andrew Trick c5dd24a542 PreRA scheduler heuristic fixes: VRegCycle, TokenFactor latency.
UnitsSharePred was a source of randomness in the scheduler: node
priority depended on the queue data structure. I rewrote the recent
VRegCycle heuristics to completely replace the old heuristic without
any randomness. To make these heuristic adjustments to node latency work,
I also needed to do something a little more reasonable with TokenFactor. I
gave it zero latency to its consumers and always schedule it as low as
possible.

llvm-svn: 129383
2011-04-12 19:54:36 +00:00
Jakob Stoklund Olesen c70b697a40 Create new intervals for isolated blocks during region splitting.
This merges the behavior of splitSingleBlocks into splitAroundRegion, so the
RS_Region and RS_Block register stages can be coalesced. That means the leftover
intervals after region splitting go directly to spilling instead of a second
pass of per-block splitting.

llvm-svn: 129379
2011-04-12 19:32:53 +00:00
Jakob Stoklund Olesen 0840f50b76 Add SplitKit API to query and select the current interval being worked on.
This makes it possible to target multiple registers in one pass.

llvm-svn: 129374
2011-04-12 18:11:31 +00:00
Jakob Stoklund Olesen 68e84581c5 Fix a bug in RegAllocBase::addMBBLiveIns() where a basic block could accidentally be skipped.
llvm-svn: 129373
2011-04-12 18:11:28 +00:00
Devang Patel 4547a9e658 Remove dead typedef.
llvm-svn: 129368
2011-04-12 17:43:12 +00:00
Devang Patel 5eb4319dba Refactor CompileUnit into a separate header.
llvm-svn: 129367
2011-04-12 17:40:32 +00:00
Eric Christopher c37833625a Fix typo.
llvm-svn: 129334
2011-04-12 00:48:08 +00:00
Jakob Stoklund Olesen 507992e909 Reuse live interval union between functions. This saves a bit of compile time
when compiling many small functions.

llvm-svn: 129321
2011-04-11 23:57:14 +00:00
Nick Lewycky 0f85789800 Just because a GlobalVariable's initializer is [N x { i32, void ()* }] doesn't
mean that it has to be ConstantArray of ConstantStruct. We might have
ConstantAggregateZero, at either level, so don't crash on that.

Also, semi-deprecate the sentinel value. The linker isn't aware of sentinels so
we end up with the two lists appended, each with their "sentinels" on them.
Different parts of LLVM treated sentinels differently, so make them all just
ignore the single entry and continue on with the rest of the list.

llvm-svn: 129307
2011-04-11 22:11:20 +00:00
Jakob Stoklund Olesen 0f175ebc32 Speed up eviction by stopping collectInterferingVRegs as soon as the spill
weight limit has been exceeded.

llvm-svn: 129305
2011-04-11 21:47:01 +00:00
Bill Wendling 1e1f1c9ce1 The default of the dispatch switch statement was to branch to a BB that executed
the 'unwind' instruction. However, later on that instruction was converted into
a jump to the basic block it was located in, causing an infinite loop when we
get there.

It turns out, we get there if the _Unwind_Resume_or_Rethrow call returns (which
it's not supposed to do). It returns if it cannot find a place to unwind
to. Thus we would get what appears to be a "hang" when in reality it's just that
the EH couldn't be propagated further along.

Instead of infinitely looping (or calling `unwind', which none of our back-ends
support (it's lowered into nothing...)), call the @llvm.trap() intrinsic
instead. This may not conform to specific rules of a particular language, but
it's rather better than infinitely looping.

<rdar://problem/9175843&9233582>

llvm-svn: 129302
2011-04-11 21:32:34 +00:00
Evan Cheng ef42bea704 Look past copies when determining whether hoisting would end up inserting more copies. rdar://9266679
llvm-svn: 129297
2011-04-11 21:09:18 +00:00
Jakob Stoklund Olesen 7d05bce70c Use a faster algorithm for computing MBB live-in registers after register allocation.
LiveIntervals::findLiveInMBBs has to do a full binary search for each segment.

llvm-svn: 129292
2011-04-11 20:01:41 +00:00
Evan Cheng fe917efc8b Fix a couple of places where changes are made but not tracked.
llvm-svn: 129287
2011-04-11 18:47:20 +00:00
Jakob Stoklund Olesen f8beafe207 Don't add live ranges for sub-registers when clobbering a physical register.
Both coalescing and register allocation already check aliases for interference,
so these extra segments are only slowing us down.

This speeds up both linear scan and the greedy register allocator.

llvm-svn: 129283
2011-04-11 18:08:10 +00:00
Jakob Stoklund Olesen 4fbbe3689d Speed up LiveIntervalUnion::unify by handling end insertion specially.
This particularly helps with the initial transfer of fixed intervals.

llvm-svn: 129277
2011-04-11 15:00:44 +00:00
Jakob Stoklund Olesen bfabc494f5 Time the initial seeding of live registers
llvm-svn: 129276
2011-04-11 15:00:42 +00:00
Jakob Stoklund Olesen 96d04c8e00 Don't shrink live ranges after dead code elimination unless it is going to help.
In particular, don't repeatedly recompute the PIC base live range after rematerialization.

llvm-svn: 129275
2011-04-11 15:00:39 +00:00
Jay Foad 7c14a558fe Don't include Operator.h from InstrTypes.h.
llvm-svn: 129271
2011-04-11 09:35:34 +00:00
Chris Lattner cfe5aa65d2 Avoid excess precision issues that lead to generating host-compiler-specific code.
Switch lowering probably shouldn't be using FP for this.  This resolves PR9581.

llvm-svn: 129199
2011-04-09 06:57:13 +00:00
Jakob Stoklund Olesen ed47ed4e80 Build the Hopfield network incrementally when splitting global live ranges.
It is common for large live ranges to have few basic blocks with register uses
and many live-through blocks without any uses. This approach grows the Hopfield
network incrementally around the use blocks, completely avoiding checking
interference for some through blocks.

llvm-svn: 129188
2011-04-09 02:59:09 +00:00
Jakob Stoklund Olesen 4ad6c160a5 Precompute interference for neighbor blocks as long as there is no interference.
This doesn't require seeking in the live interval union, so it is very cheap.

llvm-svn: 129187
2011-04-09 02:59:05 +00:00
Chris Lattner 41c80e89f3 have dag combine zap "store undef", which can be formed during call lowering
with undef arguments.

llvm-svn: 129185
2011-04-09 02:32:02 +00:00
Devang Patel 778947c203 Simplify array bound checks and clarify comments. A one-element array can have the same non-zero number as both its lower bound and its upper bound.
llvm-svn: 129170
2011-04-08 23:39:38 +00:00
Devang Patel e39647951b Do not emit DW_AT_upper_bound and DW_AT_lower_bound for an unbounded array.
If the lower bound is greater than the upper bound, then consider it an unbounded array.
An array is also unbounded if a non-zero lower bound is the same as the upper bound.
If the lower bound and the upper bound are both zero, then the array has one element.

llvm-svn: 129156
2011-04-08 21:55:10 +00:00
Evan Cheng 74d92c1924 Change -arm-trap-func= into a non-arm specific option. Now Intrinsic::trap is lowered into a call to the specified trap function at sdisel time.
llvm-svn: 129152
2011-04-08 21:37:21 +00:00
Nick Lewycky 466d0c1f93 llvm.global_[cd]tor is defined to be either external, or appending with an array
of { i32, void ()* }. Teach the verifier to verify that, deleting copies of
checks strewn about.

llvm-svn: 129128
2011-04-08 07:30:21 +00:00
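For context, a hypothetical C++ snippet that causes the front end to emit an @llvm.global_ctors entry of the appending-array form the verifier now checks:

    // Hypothetical example: a global with a non-trivial constructor is
    // registered through llvm.global_ctors, an appending global of
    // { i32, void ()* } pairs (priority, initializer function).
    struct Logger { Logger(); };
    Logger TheLogger;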
Andrew Trick 2ad0b37318 Added a check in the preRA scheduler for potential interference on an
induction variable. The preRA scheduler is unaware of induction vars,
so we look for potential "virtual register cycles" instead.

Fixes <rdar://problem/8946719> Bad scheduling prevents coalescing

llvm-svn: 129100
2011-04-07 19:54:57 +00:00
Jakob Stoklund Olesen 64beb47783 Recompute hasPHIKill flags when shrinking live intervals.
PHI values may be deleted, causing the flags to be wrong. This fixes PR9616.

llvm-svn: 129092
2011-04-07 18:43:14 +00:00
Jakob Stoklund Olesen 994c16833c Avoid moving iterators when the previous block was just visited.
llvm-svn: 129081
2011-04-07 17:27:50 +00:00
Jakob Stoklund Olesen 1c0db0fd21 Prefer multiplications to divisions.
llvm-svn: 129080
2011-04-07 17:27:48 +00:00
Jakob Stoklund Olesen 6d2bbc1c20 Extract SpillPlacement::addLinks for handling the special transparent blocks.
llvm-svn: 129079
2011-04-07 17:27:46 +00:00
Evan Cheng b7c9c407f9 Remove dead code. rdar://9221736.
llvm-svn: 129044
2011-04-07 00:56:37 +00:00
Jakob Stoklund Olesen 8ce2f43694 Also account for the spill code that would be inserted in live-through blocks with interference.
llvm-svn: 129030
2011-04-06 21:32:41 +00:00
Jakob Stoklund Olesen 81439a83f4 Abort the constraint calculation early when all positive bias is lost.
Without any positive bias, there is nothing for the spill placer to do. It will
spill everywhere.

llvm-svn: 129029
2011-04-06 21:32:38 +00:00
Jakob Stoklund Olesen 6895b87dfe Keep track of the number of positively biased nodes when adding constraints.
If there are no positive nodes, the algorithm can be aborted early.

llvm-svn: 129021
2011-04-06 19:14:00 +00:00
Jakob Stoklund Olesen 36b5d8a698 Break the spill placement algorithm into three parts: prepare, addConstraints, and finish.
This will allow us to abort the algorithm early if it is determined to be futile.

llvm-svn: 129020
2011-04-06 19:13:57 +00:00
Jakob Stoklund Olesen f3b2dcc74d Oops. Scary.
llvm-svn: 128986
2011-04-06 04:07:14 +00:00
Jakob Stoklund Olesen bf91c4e85e Analyze blocks with uses separately from live-through blocks without uses.
About 90% of the relevant blocks are live-through without uses, and the only
information required about them is their number. This saves memory and enables
later optimizations that need to look at only the use-blocks.

llvm-svn: 128985
2011-04-06 03:57:00 +00:00
Jakob Stoklund Olesen 858afbb6ac Sign error
llvm-svn: 128963
2011-04-05 23:43:16 +00:00
Jakob Stoklund Olesen 5c482cd38f Don't crash when a value is defined after the last split point.
llvm-svn: 128962
2011-04-05 23:43:14 +00:00
Jakob Stoklund Olesen 30b5473d82 Permit blocks to branch directly to a landing pad.
Treat the landing pad as a normal successor when that happens.

llvm-svn: 128961
2011-04-05 23:43:11 +00:00
Devang Patel 9f738849ab Add support to encode a function's template parameters.
llvm-svn: 128947
2011-04-05 22:52:06 +00:00
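A hypothetical C++ example of what this enables: the debug info for the instantiation below can now carry template parameter entries.

    // Hypothetical example: the DWARF for maxOf<int> can describe the
    // template parameter T = int in addition to the function itself.
    template <typename T> T maxOf(T a, T b) { return a < b ? b : a; }
    int pickLarger(int a, int b) { return maxOf(a, b); }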
Jakob Stoklund Olesen 6aa0fbf4c0 Run LiveDebugVariables in RegAllocBasic and RegAllocGreedy.
llvm-svn: 128935
2011-04-05 21:40:37 +00:00
Devang Patel d4e20eacf0 Refactor.
llvm-svn: 128929
2011-04-05 21:08:24 +00:00
Bob Wilson 6c20b88173 Add an assertion instead of crashing when the scavenger goes past the end
of a basic block.

llvm-svn: 128925
2011-04-05 20:44:15 +00:00
Jakob Stoklund Olesen 18fd84c79a When dead code elimination removes all but one use, try to fold the single def into the remaining use.
Rematerialization can leave single-use loads behind that we might as well fold whenever possible.

llvm-svn: 128918
2011-04-05 20:20:26 +00:00
Devang Patel 651d06e036 Do not emit empty name.
llvm-svn: 128914
2011-04-05 20:14:13 +00:00
Jakob Stoklund Olesen 76ad3debab Ensure all defs referring to a virtual register are marked dead by addRegisterDead().
There can be multiple defs for a single virtual register when they are defining
sub-registers.

The missing <dead> flag was stopping the inline spiller from eliminating dead
code after rematerialization.

llvm-svn: 128888
2011-04-05 16:53:50 +00:00
Rafael Espindola 7dd4d6e2e8 Print visibility info for external variables.
llvm-svn: 128887
2011-04-05 15:51:32 +00:00
Jakob Stoklund Olesen fe6e07fd8a Use std::unique instead of a SmallPtrSet to ensure unique instructions in UseSlots.
This allows us to always keep the smaller slot for an instruction which is what
we want when a register has early clobber defines.

Drop the UsingInstrs set and the UsingBlocks map. They are no longer needed.

llvm-svn: 128886
2011-04-05 15:18:18 +00:00
Jakob Stoklund Olesen d93b0e3ced Stop precomputing last split points, query the SplitAnalysis cache on demand.
llvm-svn: 128875
2011-04-05 04:20:29 +00:00
Jakob Stoklund Olesen 50b2db8a02 Cache the fairly expensive last split point computation and provide a fast
inlined path for the common case.

Most basic blocks don't contain a call that may throw, so the last split point
is simply the first terminator.

llvm-svn: 128874
2011-04-05 04:20:27 +00:00
Bill Wendling dd4dcd549b Revamp the SjLj "dispatch setup" intrinsic.
It needed to be moved closer to the setjmp statement, because the code directly
after the setjmp needs to know about values that are on the stack. Also, the
'bitcast' of the function context was causing a dead load. This wouldn't be too
horrible, except that at -O0 it wasn't optimized out, and because it wasn't
using the correct base pointer (if there is a VLA), it would try to access a
value from a garbage address.
<rdar://problem/9130540>

llvm-svn: 128873
2011-04-05 01:37:43 +00:00
Stuart Hastings ad68c93a2d Revert 123704; it broke threaded LLVM.
llvm-svn: 128868
2011-04-05 00:37:28 +00:00
Jakob Stoklund Olesen 2e85396509 Allow coalescing with reserved physregs in certain cases:
When a virtual register has a single value that is defined as a copy of a
reserved register, permit that copy to be joined. These virtual register are
usually copies of the stack pointer:

  %vreg75<def> = COPY %ESP; GR32:%vreg75
  MOV32mr %vreg75, 1, %noreg, 0, %noreg, %vreg74<kill>
  MOV32mi %vreg75, 1, %noreg, 8, %noreg, 0
  MOV32mi %vreg75<kill>, 1, %noreg, 4, %noreg, 0
  CALLpcrel32 ...

Coalescing these virtual registers early decreases register pressure.
Previously, they were coalesced by RALinScan::attemptTrivialCoalescing after
register allocation was completed.

The lower register pressure causes the mcinst-lowering-cmp0.ll test case to fail
because it depends on linear scan spilling a particular register.

I am deleting 2008-08-05-SpillerBug.ll because it is counting the number of
instructions emitted, and its revision history shows the 'correct' count being
edited many times.

llvm-svn: 128845
2011-04-04 21:00:03 +00:00
Jakob Stoklund Olesen 8de5ca72e3 Extract physreg joining policy to a separate method.
llvm-svn: 128844
2011-04-04 20:59:59 +00:00
Jakob Stoklund Olesen 8933907b51 Stop caching basic block index ranges now that SlotIndexes can keep up.
llvm-svn: 128821
2011-04-04 15:32:15 +00:00
Jakob Stoklund Olesen 956ae3da41 Delete leftover data members.
llvm-svn: 128820
2011-04-04 15:32:11 +00:00
Jakob Stoklund Olesen ca26e0acbb Use InterferenceCache in RegAllocGreedy.
llvm-svn: 128765
2011-04-02 06:03:38 +00:00
Jakob Stoklund Olesen 91cbcaf957 Add an InterferenceCache class for caching per-block interference ranges.
When the greedy register allocator is splitting multiple global live ranges, it
tends to look at the same interference data many times. The InterferenceCache
class caches queries for unaltered LiveIntervalUnions.

llvm-svn: 128764
2011-04-02 06:03:35 +00:00
Jakob Stoklund Olesen 36171288ce Use basic block numbers as indexes when mapping slot index ranges.
This is more compact and faster than using DenseMap.

llvm-svn: 128763
2011-04-02 06:03:31 +00:00
Cameron Zwarich 8c7bbc09e2 Add a RemoveFromWorklist method to DCI. This is needed to do some complicated
transformations in target-specific DAG combines without causing DAGCombiner to
delete the same node twice. If you know of a better way to avoid this (see my
next patch for an example), please let me know.

llvm-svn: 128758
2011-04-02 02:40:26 +00:00
Evan Cheng 8b1bca1998 Add comments.
llvm-svn: 128730
2011-04-01 19:57:01 +00:00
Evan Cheng 8d68ebd42a Assign node order numbers to results of call instruction lowering. This should improve src line debug info when sdisel is used. rdar://9199118
llvm-svn: 128728
2011-04-01 19:42:22 +00:00
Evan Cheng bd76679700 Issue libcalls __udivmod*i4 / __divmod*i4 for div / rem pairs.
rdar://8911343

llvm-svn: 128696
2011-04-01 00:42:02 +00:00
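A hypothetical C++ example of a div/rem pair that can now be combined: on targets without a hardware divider, both results come from one __divmodsi4 (or __udivmodsi4) call instead of two separate libcalls.

    // Hypothetical example: quotient and remainder of the same operands
    // can be produced by a single divmod libcall.
    void divmod(int a, int b, int &quot, int &rem) {
      quot = a / b;
      rem = a % b;
    }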
Jakob Stoklund Olesen 6e597dc8e7 The basic register allocator must also use the inline spiller.
It is using a trivial rewriter that doesn't know how to insert spill code
requested by the standard spiller.

llvm-svn: 128688
2011-03-31 23:02:17 +00:00
Jakob Stoklund Olesen e6e6750670 Don't completely eliminate identity copies that also modify super register liveness.
Turn them into noop KILL instructions instead. This lets the scavenger know when
super-registers are killed and defined.

llvm-svn: 128645
2011-03-31 17:55:25 +00:00
Jakob Stoklund Olesen 561cea0480 Allow kill flags on two-address instructions. They are harmless.
llvm-svn: 128643
2011-03-31 17:52:41 +00:00
Jakob Stoklund Olesen 9a78835414 Mark all uses as <undef> when joining a copy.
This way, shrinkToUses() will ignore the instruction that is about to be
deleted, and we avoid leaving invalid live ranges that SplitKit doesn't like.

Fix a misunderstanding in MachineVerifier about <def,undef> operands. The
<undef> flag is valid on def operands where it has the same meaning as <undef>
on a use operand. It only applies to sub-register defines which also read the
full register.

llvm-svn: 128642
2011-03-31 17:23:25 +00:00
Devang Patel e0cbe31ebb Remove dead code.
llvm-svn: 128639
2011-03-31 16:53:49 +00:00
Jakob Stoklund Olesen 2ee5a0fc7f Fix bug found by valgrind.
llvm-svn: 128634
2011-03-31 15:14:11 +00:00
NAKAMURA Takumi 41f32c7127 lib/CodeGen/LiveIntervalAnalysis.cpp: [PR9590] Don't use std::pow(float,float) here.
The real "powf()" may not exist on some hosts (while it is available on others).
For consistency, std::pow(double,double) may be called instead.
Otherwise, precision issues might bite us, causing unstable regalloc and stack coloring.

llvm-svn: 128629
2011-03-31 12:11:33 +00:00
Jakob Stoklund Olesen ae044c06bf Pick a conservative register class when creating a small live range for remat.
The rematerialized instruction may require a more constrained register class
than the register being spilled. In the test case, the spilled register has been
inflated to the DPR register class, but we are rematerializing a load of the
ssub_0 sub-register which only exists for DPR_VFP2 registers.

The register class is reinflated after spilling, so the conservative choice is
only temporary.

llvm-svn: 128610
2011-03-31 03:54:44 +00:00
Jakob Stoklund Olesen ae917a3740 Fix evil VirtRegRewriter bug.
The rewriter can keep track of multiple stack slots in the same register if they
happen to have the same value. When an instruction modifies a stack slot by
defining a register that is mapped to a stack slot, other stack slots in that
register are no longer valid.

This is a very rare problem, and I don't have a simple test case. I get the
impression that VirtRegRewriter knows it is about to be deleted, inventing a
last opaque problem.

<rdar://problem/9204040>

llvm-svn: 128562
2011-03-30 18:14:07 +00:00
Jakob Stoklund Olesen 69129256dd Teach VirtRegRewriter about the new virtual register numbers. No functional change.
llvm-svn: 128561
2011-03-30 18:14:04 +00:00
Jay Foad 52131344a2 Remove PHINode::reserveOperandSpace(). Instead, add a parameter to
PHINode::Create() giving the (known or expected) number of operands.

llvm-svn: 128537
2011-03-30 11:28:46 +00:00
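A minimal sketch of the updated call site shape (assuming the LLVM C++ API of this era; the variable names are illustrative):

    // Sketch: reserve space for the expected number of incoming values
    // when creating the PHI, then fill them in with addIncoming().
    PHINode *PN = PHINode::Create(ValueTy, /*NumReservedValues=*/NumPreds,
                                  "merge", InsertBefore);
    for (unsigned i = 0; i != NumPreds; ++i)
      PN->addIncoming(IncomingValues[i], PredBlocks[i]);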
Jay Foad e0938d8a87 (Almost) always call reserveOperandSpace() on newly created PHINodes.
llvm-svn: 128535
2011-03-30 11:19:20 +00:00
Jakob Stoklund Olesen dd9a2ecef7 Treat clones the same as their origin.
When DCE clones a live range because it separates into connected components,
make sure that the clones enter the same register allocator stage as the
register they were cloned from.

For instance, clones may be split even when they where created during spilling.
Other registers created during spilling are not candidates for splitting or even
(re-)spilling.

llvm-svn: 128524
2011-03-30 02:52:39 +00:00
Jim Grosbach 1900c73a97 Tidy up. 80 columns and trailing whitespace.
llvm-svn: 128504
2011-03-29 23:20:22 +00:00
Jakob Stoklund Olesen e991f728d6 Recompute register class and hint for registers created during spilling.
The spill weight is not recomputed for an unspillable register - it stays infinite.

llvm-svn: 128490
2011-03-29 21:20:19 +00:00
Jakob Stoklund Olesen 0ed9ebca58 Remember to use the correct register when rematerializing for snippets.
llvm-svn: 128469
2011-03-29 17:47:02 +00:00
Jakob Stoklund Olesen add79c6abf Run dead code elimination immediately after rematerialization.
This may eliminate some uses of the spilled registers, and we don't want to
insert reloads for that.

llvm-svn: 128468
2011-03-29 17:47:00 +00:00
Bill Wendling dd1cf3279e Inline check that's used only once.
llvm-svn: 128465
2011-03-29 17:12:55 +00:00
Bill Wendling fb63d55fe8 Rework the logic (and remove the bad check for an unreachable block) so that
the FailBB dominator is correctly calculated. Believe it or not, there isn't a
functionality change here.

llvm-svn: 128455
2011-03-29 07:28:52 +00:00
Bill Wendling 220c9f045b Don't try to add stack protector logic to a dead basic block. It messes up
dominator information.

llvm-svn: 128452
2011-03-29 05:15:48 +00:00
Jakob Stoklund Olesen 12877b8a15 Handle the special case when all uses follow the last split point.
llvm-svn: 128450
2011-03-29 03:12:04 +00:00
Jakob Stoklund Olesen d8af5298d1 Properly enable rematerialization when spilling after live range splitting.
The instruction to be rematerialized may not be the one defining the register
that is being spilled. The traceSiblingValue() function sees through sibling
copies to find the remat candidate.

llvm-svn: 128449
2011-03-29 03:12:02 +00:00
Bill Wendling 96f962fdff In some cases, the "fail BB dominator" may be null after the BB was split (and
becomes reachable when it wasn't before). Check to make sure that it's not null
before trying to use it.

llvm-svn: 128434
2011-03-28 23:02:18 +00:00
Daniel Dunbar 3e2b335903 Integrated-As: Add support for setting the AllowTemporaryLabels flag via
integrated-as.

llvm-svn: 128431
2011-03-28 22:49:19 +00:00
Jakob Stoklund Olesen bd6b86e489 Amend debug output.
llvm-svn: 128398
2011-03-27 22:49:23 +00:00
Jakob Stoklund Olesen 28d79cdeab Drop interference reassignment in favor of eviction.
The reassignment phase was able to move interference with a higher spill weight,
but it didn't happen very often and it was fairly expensive.

The existing interference eviction picks up the slack.

llvm-svn: 128397
2011-03-27 22:49:21 +00:00
Jakob Stoklund Olesen e466345675 Use individual register classes when spilling snippets.
The main register class may have been inflated by live range splitting, so that
register class is not necessarily valid for the snippet instructions.

Use the original register class for the stack slot interval.

llvm-svn: 128351
2011-03-26 22:16:41 +00:00
Benjamin Kramer 355ce07425 Turn SelectionDAGBuilder::GetRegistersForValue into a local function.
It couldn't be used outside of the file because SDISelAsmOperandInfo
is local to SelectionDAGBuilder.cpp. Making it a static function avoids
a weird linkage dance.

llvm-svn: 128342
2011-03-26 16:35:10 +00:00
Jakob Stoklund Olesen 9a624fa993 Collect and coalesce DBG_VALUE instructions before emitting the function.
Correctly terminate the range of register DBG_VALUEs when the register is
clobbered or when the basic block ends.

The code is now ready to deal with variables that are sometimes in a register
and sometimes on the stack. We just need to teach emitDebugLoc to say 'stack
slot'.

llvm-svn: 128327
2011-03-26 02:19:36 +00:00
Jakob Stoklund Olesen 1886a4c823 Emit fewer labels for debug info and stop emitting .loc directives for DBG_VALUEs.
The .loc directives don't need labels; that is a leftover from when we created
line number info manually.

Instructions following a DBG_VALUE can share its label since the DBG_VALUE
doesn't produce any code.

llvm-svn: 128284
2011-03-25 17:20:59 +00:00
Andrew Trick 3bd8b7a388 Fix for -pre-RA-sched=source.
Yet another case of unchecked NULL node (for physreg copy).
May fix PR9509.

llvm-svn: 128266
2011-03-25 06:40:55 +00:00
Nick Lewycky d73218e4a3 No functionality change. Fix up some whitespace and switch out "" for '' when
printing a single character.

llvm-svn: 128256
2011-03-25 06:04:26 +00:00
Jakob Stoklund Olesen a1e3156ebd Ignore special ARM allocation hints for unexpected register classes.
Add an assertion to linear scan to prevent it from allocating registers outside
the register class.

<rdar://problem/9183021>

llvm-svn: 128254
2011-03-25 01:48:18 +00:00
Devang Patel e01b75cb89 Keep track of the directory name and fix a regression caused by Rafael's patch r119613.
A better approach would be to move source id handling inside MC.

llvm-svn: 128233
2011-03-24 20:30:50 +00:00
Eli Friedman 4c192305bf PR9535: add support for splitting and scalarizing vector ISD::FP_ROUND.
Also cleaning up some duplicated code while I'm here.

llvm-svn: 128176
2011-03-23 22:18:48 +00:00
Andrew Trick 13acae040c Ensure that def-side physreg copies are scheduled above any other uses
so the scheduler can't create new interferences on the copies
themselves. Prior to this fix the scheduler could get stuck in a loop
creating copies.
Fixes PR9509.

llvm-svn: 128164
2011-03-23 20:42:39 +00:00
Andrew Trick a8846e0540 whitespace
llvm-svn: 128163
2011-03-23 20:40:18 +00:00
Jakob Stoklund Olesen a87d80cdca Don't coalesce identical DBG_VALUE instructions prematurely.
Each of these instructions may have a RegsClobberInsn entry that can't be
ignored. Consecutive ranges are coalesced later when DwarfDebug::emitDebugLoc
merges entries.

llvm-svn: 128155
2011-03-23 18:37:30 +00:00
Jakob Stoklund Olesen b993f6394f Notify the delegate before removing dead values from a live interval.
The register allocator needs to know when the range shrinks.

llvm-svn: 128145
2011-03-23 04:43:16 +00:00
Jakob Stoklund Olesen f7eb955046 Allow the allocation of empty live ranges that have uses.
Empty ranges may represent undef values.

llvm-svn: 128144
2011-03-23 04:32:51 +00:00
Jakob Stoklund Olesen 710656d7b3 Dump the register map before rewriting.
llvm-svn: 128143
2011-03-23 04:32:49 +00:00
Andrew Trick b1fd328581 Added block number and name to isel debug output.
I'm tired of doing this manually for each checkout.
If anyone knows a better way to debug isel for non-trivial tests, feel
free to revert and let me know how to do it.

llvm-svn: 128132
2011-03-23 01:38:28 +00:00
Jakob Stoklund Olesen ec0ac3ca40 Reapply r128045 and r128051 with fixes.
This will extend the ranges of debug info variables in registers until they are
clobbered.

Fix 1: Don't mistake DBG_VALUE instructions referring to incoming arguments on
the stack with DBG_VALUE instructions referring to variables in the frame
pointer. This fixes the gdb test-suite failure.

Fix 2: Don't trace through copies to physical registers setting up call
arguments. These registers are call clobbered, and the source register is more
likely to be a callee-saved register that can be extended through the call
instruction.

llvm-svn: 128114
2011-03-22 22:33:08 +00:00
Andrew Trick b0f98bb5e9 Revert r128045 and r128051, debug info enhancements.
Temporarily reverting these to see if we can get llvm-objdump to link. Hopefully this is not the problem.

llvm-svn: 128097
2011-03-22 19:18:42 +00:00
Jakob Stoklund Olesen c6f4af028d Clear map after use.
This is likely to fix the segfault in llvm-gcc-x86_64-darwin10-cross-mingw32.

llvm-svn: 128051
2011-03-22 01:03:24 +00:00
Jakob Stoklund Olesen 9c057ee440 Don't emit 'DBG_VALUE %noreg, ...' to terminate user variable ranges.
These ranges get completely jumbled by the post-ra scheduler, and it is not
really reasonable to expect it to make sense of them.

Instead, teach DwarfDebug to notice when user variables in registers are
clobbered, and terminate the ranges there.

llvm-svn: 128045
2011-03-22 00:21:41 +00:00
Eric Christopher 1b4b1e559a Grammar-o.
llvm-svn: 128004
2011-03-21 18:06:21 +00:00
Bill Wendling 00f0cddfd4 We need to pass the TargetMachine object to the InstPrinter if we are printing
the alias of an InstAlias instead of the thing being aliased, because we need to
know the features that are valid for an InstAlias.

This is part of a work-in-progress.

llvm-svn: 127986
2011-03-21 04:13:46 +00:00
Jakob Stoklund Olesen 35502423de Process all dead defs after rematerializing during splitting.
llvm-svn: 127973
2011-03-20 19:46:23 +00:00
Jakob Stoklund Olesen e55003fb04 Also eliminate redundant spills downstream of inserted reloads.
This can happen when multiple sibling registers are spilled after live range
splitting.

llvm-svn: 127965
2011-03-20 05:44:58 +00:00
Jakob Stoklund Olesen 39488642d3 Change an argument to a LiveInterval instead of a register number to save some redundant lookups.
llvm-svn: 127964
2011-03-20 05:44:55 +00:00
Jakob Stoklund Olesen ccacd0df19 Replace a broken LiveInterval::MergeValueInAsValue() with something simpler.
llvm-svn: 127960
2011-03-19 23:02:49 +00:00
Jakob Stoklund Olesen 8698507fe1 Add debug output.
llvm-svn: 127959
2011-03-19 23:02:47 +00:00
Evan Cheng b1f3b4989f Minor code re-structuring.
llvm-svn: 127952
2011-03-19 17:03:16 +00:00
Nadav Rotem e7a101ccab Add support for legalizing UINT_TO_FP of vectors on platforms which do
not have native support for this operation (such as X86).
The legalized code uses two vector INT_TO_FP operations and is faster
than scalarizing.

llvm-svn: 127951
2011-03-19 13:09:10 +00:00
Stuart Hastings 12d5312622 Reapply 127939 since Daniel fixed the breakage. <rdar://problem/9012638>
llvm-svn: 127944
2011-03-19 02:42:31 +00:00
Stuart Hastings 08b4daa191 Revert 127939. <rdar://problem/9012638>
llvm-svn: 127943
2011-03-19 02:33:56 +00:00
Stuart Hastings 83d4a28d1f Revise r126127 to address Daniel's comments. <rdar://problem/9012638>
llvm-svn: 127939
2011-03-19 01:32:01 +00:00
Jim Grosbach 7b162490fd Beginnings of MC-JIT code generation.
Proof-of-concept code that code-gens a module to an in-memory MachO object.
This will be hooked up to a run-time dynamic linker library (see: llvm-rtdyld
for similarly conceptual work for that part) which will take the compiled
object and link it together with the rest of the system, providing back to the
JIT a table of available symbols which will be used to respond to the
getPointerTo*() queries.

llvm-svn: 127916
2011-03-18 22:48:41 +00:00
Jakob Stoklund Olesen 816f5f4c2a Extend live debug values down the dominator tree by following copies.
The llvm.dbg.value intrinsic refers to SSA values, not virtual registers, so we
should be able to extend the range of a value by tracking that value through
register copies. This greatly improves the debug value tracking for function
arguments that for some reason are copied to a second virtual register at the
end of the entry block.

We only extend the debug value range where its register is killed. All original
llvm.dbg.value locations are still respected.

Copies from physical registers are ignored. That should not be a problem since
the entry block already adds DBG_VALUE instructions for the virtual registers
holding the function arguments.

llvm-svn: 127912
2011-03-18 21:42:19 +00:00
Jakob Stoklund Olesen 27320cb864 Hoist spills when the same value is known to be in less loopy sibling registers.
Stack slot real estate is virtually free compared to registers, so it is
advantageous to spill earlier even though the same value is now kept in both a
register and a stack slot.

Also eliminate redundant spills by extending the stack slot live range
underneath reloaded registers.

This can trigger a dead code elimination, removing copies and even reloads that
were only feeding spills.

llvm-svn: 127868
2011-03-18 04:23:06 +00:00
Jakob Stoklund Olesen fdc09941f2 Accept instructions that read undefined values.
This is not supposed to happen, but I have seen the x86 rematter getting
confused when rematerializing partial redefs.

llvm-svn: 127857
2011-03-18 03:06:04 +00:00
Jakob Stoklund Olesen c099dde918 Be more accurate about the slot index reading a register when dealing with defs
and early clobbers.

Assert when trying to find an undefined value.

llvm-svn: 127856
2011-03-18 03:06:02 +00:00
Benjamin Kramer cfcea12fe2 BuildUDIV: If the divisor is even we can simplify the fixup of the multiplied value by introducing an early shift.
This allows us to compile "unsigned foo(unsigned x) { return x/28; }" into
	shrl	$2, %edi
	imulq	$613566757, %rdi, %rax
	shrq	$32, %rax
	ret

instead of
	movl    %edi, %eax
	imulq   $613566757, %rax, %rcx
	shrq    $32, %rcx
	subl    %ecx, %eax
	shrl    %eax
	addl    %ecx, %eax
	shrl    $4, %eax

on x86_64

llvm-svn: 127829
2011-03-17 20:39:14 +00:00
Jakob Stoklund Olesen 8630840c30 Dead code elimination may separate the live interval into multiple connected components.
I have convinced myself that it can only happen when a phi value dies. When it
happens, allocate new virtual registers for the components.

llvm-svn: 127827
2011-03-17 20:37:07 +00:00
Cameron Zwarich 2ef0c69df1 Move more logic into getTypeForExtArgOrReturn.
llvm-svn: 127809
2011-03-17 14:53:37 +00:00
Cameron Zwarich 34e7b3f77e Rename getTypeForExtendedInteger() to getTypeForExtArgOrReturn().
llvm-svn: 127807
2011-03-17 14:21:56 +00:00
Jakob Stoklund Olesen 315b42c354 Rewrite instructions as part of ConnectedVNInfoEqClasses::Distribute.
llvm-svn: 127779
2011-03-17 00:23:45 +00:00
Jakob Stoklund Olesen e14b2b226f Add a LiveRangeEdit delegate callback before shrinking a live range.
The register allocator needs to adjust its live interval unions when that happens.

llvm-svn: 127774
2011-03-16 22:56:16 +00:00
Jakob Stoklund Olesen c738c96519 Erase virtual registers that are unused after DCE.
llvm-svn: 127773
2011-03-16 22:56:13 +00:00
Jakob Stoklund Olesen e29d63e98a Tag cached interference with a user-provided tag instead of the virtual register number.
The live range of a virtual register may change which invalidates the cached
interference information.

llvm-svn: 127772
2011-03-16 22:56:11 +00:00
Jakob Stoklund Olesen 557a82c099 Clarify debugging output.
llvm-svn: 127771
2011-03-16 22:56:08 +00:00
Cameron Zwarich ac106273d4 The x86-64 ABI says that a bool is only guaranteed to be sign-extended to a byte
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not
generate incorrect code.

This just fixes the zext at the return. We still insert an i32 ZextAssert when
reading a function's arguments, but it is followed by a truncate and another i8
ZextAssert so it is not optimized.

llvm-svn: 127766
2011-03-16 22:20:18 +00:00
Cameron Zwarich d1ad9bc277 Don't recompute something that we already have in a local variable.
llvm-svn: 127764
2011-03-16 22:20:07 +00:00
Daniel Dunbar fd95b016fb Revert r127757, "Patch to a fix dwarf relocation problem on ARM. One-line fix
plus the test where it used to break.", which broke Clang self-host of a
Debug+Asserts compiler, on OS X.

llvm-svn: 127763
2011-03-16 22:16:39 +00:00
Renato Golin a3aeafeb35 Patch to fix a dwarf relocation problem on ARM. One-line fix plus the test where it used to break.
llvm-svn: 127757
2011-03-16 21:05:52 +00:00
Jakob Stoklund Olesen a0d5ec10d1 Trace back through sibling copies to hoist spills and find rematerializable defs.
After live range splitting, an original value may be available in multiple
registers. Tracing back through the registers containing the same value, find
the best place to insert a spill, determine if the value has already been
spilled, or discover a reaching def that may be rematerialized.

This is only the analysis part. The information is not used for anything yet.

llvm-svn: 127698
2011-03-15 21:13:25 +00:00
Jakob Stoklund Olesen 32210de3e4 Preserve both isPHIDef and isDefByCopy bits when copying parent values.
llvm-svn: 127697
2011-03-15 21:13:22 +00:00
Evan Cheng e4b8ac9fef Add a peephole optimization to optimize pairs of bitcasts. e.g.
v2 = bitcast v1
...
v3 = bitcast v2
...
   = v3
=>
v2 = bitcast v1
...
   = v1
if v1 and v3 are in the same register class.

Bitcasts between i32 and fp (and others) are often not nops since they
are in different register classes. These bitcast instructions are often
left behind because they are in different basic blocks and cannot be
eliminated by DAG combine.

rdar://9104514

llvm-svn: 127668
2011-03-15 05:13:13 +00:00
Evan Cheng c5c2cfa381 sext(undef) = 0, because the top bits will all be the same.
zext(undef) = 0, because the top bits will be zero.

llvm-svn: 127649
2011-03-15 02:22:10 +00:00
Bill Wendling 5c25a92011 There are some situations which can cause the URoR hack to infinitely recurse
and then go kablooie. The problem was that it was tracking the PHI nodes anew
on each entry into this function, but it didn't need to. Because the recursion
didn't know that a PHINode had already been visited, it would go ahead and call
itself.

There is a testcase, but unfortunately it's too big to add. This problem will go
away with the EH rewrite.
<rdar://problem/8856298>

llvm-svn: 127640
2011-03-15 01:03:17 +00:00
Jakob Stoklund Olesen 59a549b7ec Place context in member variables instead of passing around pointers.
Use the opportunity to get rid of the trailing underscore variable names.

llvm-svn: 127618
2011-03-14 20:57:14 +00:00
Jakob Stoklund Olesen a00bab24c2 Rename members to match LLVM naming conventions more closely.
Remove the unused reserved_ bit vector, no functional change intended.

This doesn't break 'svn blame', this file really is all my fault.

llvm-svn: 127607
2011-03-14 19:56:43 +00:00
Evan Cheng 37139edc8c BIT_CONVERT has been renamed to BITCAST.
llvm-svn: 127600
2011-03-14 18:19:52 +00:00
Evan Cheng d2f3b01797 Minor optimization. sign-ext/anyext of undef is still undef.
llvm-svn: 127598
2011-03-14 18:15:55 +00:00
Jakob Stoklund Olesen e1539cc5b6 Now that we are deleting unused live intervals during allocation, pointers may be reused.
Use the virtual register number as a cache tag instead. They are not reused.

llvm-svn: 127561
2011-03-13 01:29:32 +00:00
Jakob Stoklund Olesen 43a87501b3 Tell the register allocator about new unused virtual registers.
This allows the allocator to free any resources used by the virtual register,
including physical register assignments.

llvm-svn: 127560
2011-03-13 01:23:11 +00:00
Duncan Sands b847bf547b Speculatively revert commit 127478 (jsjodin) in an attempt to fix the
llvm-gcc-i386-linux-selfhost and llvm-x86_64-linux-checks buildbots.
The original log entry:
Remove optimization emitting a reference insted of label difference, since
it can create more relocations. Removed isBaseAddressKnownZero method,
because it is no longer used.

llvm-svn: 127540
2011-03-12 13:07:37 +00:00
Jakob Stoklund Olesen e77005ef88 Include snippets in the live stack interval.
llvm-svn: 127530
2011-03-12 04:25:36 +00:00
Jakob Stoklund Olesen a86595e06b Spill multiple registers at once.
Live range splitting can create a number of small live ranges containing only a
single real use. Spill these small live ranges along with the large range they
are connected to with copies. This enables memory operand folding and maximizes
the spill to fill distance.

Work in progress with known bugs.

llvm-svn: 127529
2011-03-12 04:17:20 +00:00
Jakob Stoklund Olesen dae1dc1f01 That's it, I am declaring this a failure of the C++03 STL.
There are too many compatibility problems with using mixed types in
std::upper_bound, and I don't want to spend 110 lines of boilerplate setting up
a call to a 10-line function. Binary search is not /that/ hard to implement
correctly.

I tried terminating the binary search with a linear search, but that actually
made the algorithm slower against my expectation. Most live intervals have less
than 4 segments. The early test against endIndex() does pay, and this version is
25% faster than plain std::upper_bound().

llvm-svn: 127522
2011-03-12 01:50:35 +00:00
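For illustration, a minimal C++ sketch of the kind of hand-rolled search described in r127522 above, assuming sorted, non-overlapping segments; Seg and findEndingAfter are made-up names, not the LiveInterval API.

	// Return the first segment whose end is after Pos, or null if none.
	struct Seg { unsigned start, end; };
	const Seg *findEndingAfter(const Seg *Segs, unsigned N, unsigned Pos) {
	  if (N == 0 || Pos >= Segs[N - 1].end)     // the early endIndex()-style test
	    return 0;
	  unsigned Lo = 0, Hi = N;                  // half-open binary search
	  while (Lo < Hi) {
	    unsigned Mid = Lo + (Hi - Lo) / 2;
	    if (Segs[Mid].end <= Pos)
	      Lo = Mid + 1;
	    else
	      Hi = Mid;
	  }
	  return &Segs[Lo];
	}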
Cameron Zwarich 4d7d728594 Fix the GCC test suite issue exposed by r127477, which was caused by stack
protector insertion not working correctly with unreachable code. Since that
revision was rolled out, this test doesn't actually fail before this fix.

llvm-svn: 127497
2011-03-11 21:51:56 +00:00
Owen Anderson 66443c034d Teach FastISel to support register-immediate-immediate instructions.
llvm-svn: 127496
2011-03-11 21:33:55 +00:00
Jan Sjödin f3f78583f9 Remove optimization emitting a reference instead of label difference, since it can create more relocations. Removed isBaseAddressKnownZero method, because it is no longer used.
llvm-svn: 127478
2011-03-11 19:37:02 +00:00
Andrew Trick 710d5da306 Replace -dag-chain-limit flag with constant. It has survived a release cycle without being touched, so no longer needs to pollute the hidden-help text.
llvm-svn: 127468
2011-03-11 17:46:59 +00:00
John Wiegley 8559f5914c Fix use of CompEnd predicate to be standards conforming
The existing CompEnd predicate does not define a strict weak order as required
by the C++03 standard; therefore, its use as a predicate to std::upper_bound
is invalid. For a discussion of this issue, see
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#270

This patch replaces the asymmetrical comparison with an iterator adaptor that
achieves the same effect while being strictly standard-conforming by ensuring
an apples-to-apples comparison.

llvm-svn: 127462
2011-03-11 08:54:34 +00:00
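As a hedged illustration of the constraint r127462 above addresses, the predicate given to std::upper_bound must form a strict weak order over a single key type. The sketch below gets the apples-to-apples comparison with a dummy key rather than the iterator adaptor the commit actually uses; Seg, CompEndSafe, and firstEndingAfter are made-up names.

	#include <algorithm>
	struct Seg { unsigned start, end; };
	struct CompEndSafe {                        // Seg-vs-Seg, a strict weak order
	  bool operator()(const Seg &A, const Seg &B) const { return A.end < B.end; }
	};
	const Seg *firstEndingAfter(const Seg *Begin, const Seg *End, unsigned Pos) {
	  Seg Key = { 0, Pos };                     // wrap the index so both sides compare on end
	  const Seg *I = std::upper_bound(Begin, End, Key, CompEndSafe());
	  return I == End ? 0 : I;
	}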
Evan Cheng adb9c03e41 Avoid replacing the value of a directly stored load with the stored value if the load is indexed. rdar://9117613.
llvm-svn: 127440
2011-03-11 00:48:56 +00:00
Cameron Zwarich 7930407339 Add an option to disable critical edge splitting in PHIElimination.
llvm-svn: 127398
2011-03-10 05:59:17 +00:00
Jakob Stoklund Olesen 4d6eafa138 Change the Spiller interface to take a LiveRangeEdit reference.
This makes it possible to register delegates and get callbacks when the spiller
edits live ranges.

llvm-svn: 127389
2011-03-10 01:51:42 +00:00
Jakob Stoklund Olesen c6cc485051 Make SpillIs an optional pointer. Avoid creating a bunch of temporary SmallVectors.
llvm-svn: 127388
2011-03-10 01:21:58 +00:00
Evan Cheng b4c6a34415 Re-commit 127368 and 127371. They are exonerated.
llvm-svn: 127380
2011-03-10 00:16:32 +00:00
Evan Cheng d4b3f8e009 Revert 127368 and 127371 for now.
llvm-svn: 127376
2011-03-09 23:53:17 +00:00
Evan Cheng ca9a936332 Change the definition of TargetRegisterInfo::getCrossCopyRegClass to be more
flexible.

If it returns a register class that's different from the input, then that's the
register class used for cross-register class copies.
If it returns a register class that's the same as the input, then no cross-
register class copies are needed (normal copies would do).
If it returns null, then it's not at all possible to copy registers of the
specified register class.

llvm-svn: 127368
2011-03-09 22:47:38 +00:00
Jakob Stoklund Olesen d0db705256 Make physreg coalescing independent of the number of uses of the virtual register.
The damage done by physreg coalescing only depends on the number of instructions
the extended physreg live range covers. This fixes PR9438.

The heuristic is still luck-based, and physreg coalescing really should be
disabled completely. We need a register allocator with better hinting support
before that is possible.

Convert a test to FileCheck and force spilling by inserting an extra call. The
previous spilling behavior was dependent on misguided physreg coalescing
decisions.

llvm-svn: 127351
2011-03-09 19:27:06 +00:00
Andrew Trick 072ed2ee0d Improve pre-RA-sched register pressure tracking for duplicate operands.
This helps cases like 2008-07-19-movups-spills.ll, but doesn't have an obvious impact on benchmarks

llvm-svn: 127347
2011-03-09 19:12:43 +00:00
Benjamin Kramer b2e4d84305 Fix typo, make helper static.
llvm-svn: 127335
2011-03-09 16:19:12 +00:00
Benjamin Kramer 84fccb64c3 Remove unused virtual dtor.
llvm-svn: 127331
2011-03-09 14:20:28 +00:00
Matt Beaumont-Gay df72652fd0 Add a virtual dtor to Delegate to silence -Wnon-virtual-dtor
llvm-svn: 127311
2011-03-09 04:02:15 +00:00
Jakob Stoklund Olesen 8e089640e0 Add a LiveRangeEdit::Delegate protocol.
This will we used for keeping register allocator data structures up to date
while LiveRangeEdit is trimming live intervals.

llvm-svn: 127300
2011-03-09 00:57:29 +00:00
Jakob Stoklund Olesen 06b72e338a Delete dead code.
llvm-svn: 127295
2011-03-09 00:07:39 +00:00
Jakob Stoklund Olesen ea5ebfed15 Delete dead code after rematerializing.
LiveRangeEdit::eliminateDeadDefs() will eventually be used by coalescing,
splitting, and spilling for dead code elimination. It can delete chains of dead
instructions as long as there are no dependency loops.

llvm-svn: 127287
2011-03-08 22:46:11 +00:00
Jakob Stoklund Olesen 880e0b7760 Fix the build for MSVC 9 whose upper_bound() wants to compare elements in the sorted array.
Patch by Olaf Krzikalla!

llvm-svn: 127264
2011-03-08 19:37:54 +00:00
Eric Christopher 7238cba180 Fix some latent bugs if the nodes are unschedulable. We'd gotten away
with this before since none of the register tracking or nightly tests
had unschedulable nodes.

This should probably be refixed with a special default Node that just
returns some "don't touch me" values.

Fixes PR9427

llvm-svn: 127263
2011-03-08 19:35:47 +00:00
Oscar Fuentes a28879b824 Revert "Make a comparator's argument `const'. This fixes the build for
MSVC 9."

The "fix" was meaningless.

This reverts commit r127245.

llvm-svn: 127260
2011-03-08 19:26:21 +00:00
Benjamin Kramer b8ca01fff5 Reduce vector reallocations.
llvm-svn: 127254
2011-03-08 17:28:36 +00:00
Oscar Fuentes 6ec5983a0c Make a comparator's argument `const'. This fixes the build for MSVC 9.
llvm-svn: 127245
2011-03-08 13:52:07 +00:00
Andrew Trick 52b3e38a1f Further improvements to pre-RA-sched=list-ilp.
This change uses the MaxReorderWindow for both height and depth, which
tends to limit the negative effects of high register pressure.

llvm-svn: 127203
2011-03-08 01:51:56 +00:00
Jakob Stoklund Olesen 71c380f6c7 Let shrinkToUses optionally return a list of now dead machine instructions.
llvm-svn: 127192
2011-03-07 23:29:10 +00:00
Jakob Stoklund Olesen 27f942fa60 Make the UselessRegs argument optional in the LiveRangeEdit constructor.
llvm-svn: 127181
2011-03-07 22:42:16 +00:00
Cameron Zwarich df61694417 Move getRegPressureLimit() from TargetLoweringInfo to TargetRegisterInfo.
llvm-svn: 127175
2011-03-07 21:56:36 +00:00
Jakob Stoklund Olesen ac32d8a691 Handle the special case of registers being redefined by early-clobber defs.
In this case, the value needs to be available at the load index instead of the
normal use index.

llvm-svn: 127167
2011-03-07 18:56:16 +00:00
Owen Anderson cd526fa15e Use the correct LHS type when determining the legalization of a shift's RHS type.
llvm-svn: 127163
2011-03-07 18:29:47 +00:00
Eric Christopher 9cb33deebf Typo.
llvm-svn: 127131
2011-03-06 21:13:45 +00:00
NAKAMURA Takumi 0d8150f279 lib/CodeGen/AsmPrinter/CMakeLists.txt: Fix CMake build, following up to r127099.
llvm-svn: 127114
2011-03-06 00:13:15 +00:00
Andrew Trick dd01732e63 Disable a couple of experimental heuristics to get the best results from the current implementation of -pre-RA-sched=list-ilp.
llvm-svn: 127113
2011-03-06 00:03:32 +00:00
Anton Korobeynikov a7ec2dcefd Some first rudimentary support for ARM EHABI: print exception table in "text mode".
llvm-svn: 127099
2011-03-05 18:43:15 +00:00
Anton Korobeynikov 65cff414b6 Add FrameSetup MI flags
llvm-svn: 127098
2011-03-05 18:43:04 +00:00
Jakob Stoklund Olesen 27e0a4ab86 Work around a coalescer bug.
The coalescer can in very rare cases leave too large live intervals around after
rematerializing cheap-as-a-move instructions.

Linear scan doesn't really care, but live range splitting gets very confused
when a live range is killed by a ghost instruction.

I will fix this properly in the coalescer after 2.9 branches.

llvm-svn: 127096
2011-03-05 18:33:49 +00:00
Andrew Trick 25cedf3fe4 Be explicit with abs(). Visual Studio workaround.
llvm-svn: 127075
2011-03-05 10:29:25 +00:00
Andrew Trick d7f4c21684 Fix for -sched-high-latency-cycles in sched=list-ilp mode.
llvm-svn: 127071
2011-03-05 09:18:16 +00:00
Andrew Trick b8390b7a25 Missing comment.
llvm-svn: 127068
2011-03-05 08:04:11 +00:00
Andrew Trick 641e2d4f8c Increased the register pressure limit on x86_64 from 8 to 12
regs. This is the only change in this checkin that may affects the
default scheduler. With better register tracking and heuristics, it
doesn't make sense to artificially lower the register limit so much.

Added -sched-high-latency-cycles and X86InstrInfo::isHighLatencyDef to
give the scheduler a way to account for div and sqrt on targets that
don't have an itinerary. It is currently defaults to 10 (the actual
number doesn't matter much), but only takes effect on non-default
schedulers: list-hybrid and list-ilp.

Added several heuristics that can be individually disabled for the
non-default sched=list-ilp mode. This helps us determine how much
better we can do on a given benchmark than the default
scheduler. Certain compute intensive loops run much faster in this
mode with the right set of heuristics, and it doesn't seem to have
much negative impact elsewhere. Not all of the heuristics are needed,
but we still need to experiment to decide which should be disabled by
default for sched=list-ilp.

llvm-svn: 127067
2011-03-05 08:00:22 +00:00
Jakob Stoklund Olesen 1a9b66c752 Rework the global split cost calculation.
The global cost is the sum of block frequencies for spill code that must be
inserted because preferences weren't met.

llvm-svn: 127062
2011-03-05 03:28:51 +00:00
Jakob Stoklund Olesen 4b598e156a Compute the constraints for global live range splitting from an interference pattern.
This simplifies the code and makes it faster too.

The interference patterns are saved for each candidate register. It will be
reused for actually executing the split. Work in progress.

llvm-svn: 127054
2011-03-05 01:10:31 +00:00
Jim Grosbach dc55428d7a Teach the register scavenger to take subregs into account when finding a free register.
llvm-svn: 127049
2011-03-05 00:20:19 +00:00
Eric Christopher 403269894f Improve readability with some whitespace!
llvm-svn: 127043
2011-03-04 22:47:12 +00:00
Jakob Stoklund Olesen 05a2f5178e Extract a method. No functional change.
llvm-svn: 127040
2011-03-04 22:11:11 +00:00
Jakob Stoklund Olesen d7e1bb80a9 Go back to comparing spill weights when deciding if interference can be evicted.
It gives better results. Sometimes, a live range can be large and still have
high spill weight. Such a range should not be spilled.

llvm-svn: 127036
2011-03-04 21:32:50 +00:00
Jakob Stoklund Olesen b8e6fdc23c Renumber slot indexes locally when possible.
Initially, slot indexes are quad-spaced. There is room for inserting up to 3
new instructions between the original instructions.

When we run out of indexes between two instructions, renumber locally using
double-spaced indexes. The original quad-spacing means that we catch up quickly,
and we only have to renumber a handful of instructions to get a monotonic
sequence. This is much faster than renumbering the whole function as we did
before.

llvm-svn: 127023
2011-03-04 19:43:38 +00:00
Jakob Stoklund Olesen 348d8e8ba6 Number SlotIndexes uniformly without looking at the number of defs on each instruction.
You can't really predict how many indexes will be needed from the number of
defs, so let's keep it simple.

Also remove an extra empty index that was inserted after each basic block. It
was intended for live-out ranges, but it was never used that way.

llvm-svn: 127014
2011-03-04 18:51:09 +00:00
Jakob Stoklund Olesen b88f6adf0f Add SlotIndex statistics.
llvm-svn: 127007
2011-03-04 18:08:29 +00:00
Jakob Stoklund Olesen d4f788952d Tweak debug output. No functional changes.
llvm-svn: 127006
2011-03-04 18:08:26 +00:00
Duncan Sands 6bd1044222 Revert commit 126684 "Use the correct shift amount type". It is only the correct
type after type legalization has completed.  Before then it may simply not be big
enough to hold the shift amount, particularly on x86 which uses a very small type
for shifts (this issue broke stuff in the past which is why LegalizeTypes carefully
uses a large type for shift amounts).

llvm-svn: 127000
2011-03-04 14:28:59 +00:00
Andrew Trick c88b7ecb88 Minor pre-RA-sched fixes and cleanup.
Fix the PendingQueue, then disable it because it's not required for
the current schedulers' heuristics.
Fix the logic for the unused list-ilp scheduler.

llvm-svn: 126981
2011-03-04 02:03:45 +00:00
Jakob Stoklund Olesen c332e727b4 Precompute block frequencies, pow() isn't free.
llvm-svn: 126975
2011-03-04 00:58:40 +00:00
Jakob Stoklund Olesen 1a69e23300 Use an IndexedMap instead of a DenseMap for the live-out cache.
This speeds up updateSSA() so it only accounts for 5% of the live range
splitting time.

llvm-svn: 126972
2011-03-04 00:15:36 +00:00
Bill Wendling f3658f3872 There are times when the landing pad won't have a call to 'eh.selector' in
it. It's been assumed up until now that it would be in its immediate
successor. However, this isn't necessarily the case. It could be in one of its
successor's successors.

Modify the code to more thoroughly check for an 'eh.selector' call in
successors. It only looks at a successor if we get there as a result of an
unconditional branch.

Testcase ObjC/exceptions-4.m in r126968.

llvm-svn: 126969
2011-03-03 23:14:05 +00:00
Eli Friedman d8a555bb3b Revert r123908; the code in question is completely untested and wrong.
llvm-svn: 126964
2011-03-03 22:33:23 +00:00
Devang Patel 63b3e76370 Fix typo.
llvm-svn: 126962
2011-03-03 21:49:41 +00:00
Devang Patel 34a7ab400e Fix thinko in previous check-in.
Add comment.

llvm-svn: 126959
2011-03-03 20:08:10 +00:00
Devang Patel 4ab660b077 llvm::Function argument count is not a good indicator of how many arguments the function has at the source level. If we need more space, just resize the vector conservatively. This vector is only used once per function.
llvm-svn: 126957
2011-03-03 20:02:02 +00:00
Jim Grosbach 7e200664f6 Allow a target to choose whether to prefer the scavenger emergency spill slot
be next to the frame pointer or the stack pointer.

llvm-svn: 126956
2011-03-03 20:01:52 +00:00
Jakob Stoklund Olesen bfdbc11554 Renumber slot indexes uniformly instead of spacing according to the number of defs.
There are probably much larger speedups to be had by renumbering locally instead
of looping over the whole function. For now, the greedy register allocator is
25% faster.

llvm-svn: 126926
2011-03-03 06:29:01 +00:00
Jakob Stoklund Olesen 4ec757d588 Represent sentinel slot indexes with a null pointer.
This is much faster than using a pointer to a ManagedStatic object accessed with
a function call. The greedy register allocator is 5% faster overall just from
the SlotIndex default constructor savings.

llvm-svn: 126925
2011-03-03 05:40:04 +00:00
Jakob Stoklund Olesen 67a84d08ce Avoid comparing invalid slot indexes, and assert that it doesn't happen.
The SlotIndex created by the default construction does not represent a position
in the function, and it doesn't make sense to compare it to other indexes.

llvm-svn: 126924
2011-03-03 05:18:19 +00:00
Jakob Stoklund Olesen a04dddf7a1 Avoid comparing invalid slot indexes.
llvm-svn: 126922
2011-03-03 04:23:52 +00:00
Jakob Stoklund Olesen 9a6382fc81 Cache basic block bounds instead of asking SlotIndexes::getMBBRange all the time.
This speeds up the greedy register allocator by 15%.
DenseMap is not as fast as one might hope.

llvm-svn: 126921
2011-03-03 03:41:29 +00:00
Jakob Stoklund Olesen c96019886c Change the SplitEditor interface to a single instance can be shared for multiple splits.
llvm-svn: 126912
2011-03-03 01:29:13 +00:00
Jakob Stoklund Olesen 5ea0712e96 Only run the updateSSA loop when we have actually seen multiple values.
When only a single value has been seen, new PHIDefs are never needed.

llvm-svn: 126911
2011-03-03 01:29:10 +00:00
Jakob Stoklund Olesen d58c8d12ab Fix PHI handling in LiveIntervals::shrinkToUses().
We need to wait until we meet a PHIDef in its defining block before resurrecting
PHIKills in the predecessors.

This should unbreak the llvm-gcc-build-x86_64-darwin10-x-mingw32-x-armeabi bot.

llvm-svn: 126905
2011-03-03 00:20:51 +00:00
Bob Wilson 24b3ba5990 Avoid exponential blow-up when printing DAGs.
David Greene changed CannotYetSelect() to print the full DAG including multiple
copies of operands reached through different paths in the DAG.  Unfortunately
this blows up exponentially in some cases.  The depth limit of 100 is way too
high to prevent this -- I'm seeing a message string of 150MB with a depth of
only 40 in one particularly bad case, even though the DAG has less than 200
nodes.  Part of the problem is that the printing code is following chain
operands, so if you fail to select an operation with a chain, the printer will
follow all the chained operations back to the entry node.

llvm-svn: 126899
2011-03-02 23:38:06 +00:00
Jakob Stoklund Olesen 815196ca19 Turn the Edit member into a pointer so it can change dynamically.
No functional change.

llvm-svn: 126898
2011-03-02 23:31:50 +00:00
Jakob Stoklund Olesen 503b143a62 Transfer simply defined values directly without recomputing liveness and SSA.
Values that map to a single new value in a new interval after splitting don't
need new PHIDefs, and if the parent value was never rematerialized the live
range will be the same.

llvm-svn: 126894
2011-03-02 23:05:19 +00:00
Jakob Stoklund Olesen 3648263a3e Extract a method. No functional change.
llvm-svn: 126893
2011-03-02 23:05:16 +00:00
Stuart Hastings 6b4007dec6 Can't introduce floating-point immediate constants after legalization.
Radar 9056407.

llvm-svn: 126864
2011-03-02 19:36:30 +00:00
Cameron Zwarich daed6f6c39 Fix some typos.
llvm-svn: 126829
2011-03-02 04:03:46 +00:00
Jakob Stoklund Olesen 48af8923c5 Move extendRange() into SplitEditor and delete the LiveRangeMap class.
Extract the updateSSA() method from the too long extendRange().

LiveOutCache can be shared among all the new intervals since there is at most
one of the new ranges live out from each basic block.

llvm-svn: 126818
2011-03-02 01:59:34 +00:00
Nick Lewycky 68faa2dbbe Quiet a compiler warning about unused variable 'ExtVNI'.
llvm-svn: 126815
2011-03-02 01:43:30 +00:00
Evan Cheng 15fed7af3c Catch more cases where 2-address pass should 3-addressify instructions. rdar://9002648.
llvm-svn: 126811
2011-03-02 01:08:17 +00:00
Jakob Stoklund Olesen b02376198b Rename mapValue to extendRange because that is its function now.
Simplify the signature - The return value and ParentVNI are no longer needed.

llvm-svn: 126809
2011-03-02 00:49:28 +00:00
Jakob Stoklund Olesen f3c6e9211c Simplify LiveIntervals::shrinkToUses() a bit by using the new extendInBlock().
llvm-svn: 126806
2011-03-02 00:33:03 +00:00
Jakob Stoklund Olesen 81eb18df34 Fix typo.
llvm-svn: 126805
2011-03-02 00:33:01 +00:00
Jakob Stoklund Olesen 9e326a8413 Move LiveIntervalMap::extendTo into LiveInterval itself.
This method could probably be used by LiveIntervalAnalysis::shrinkToUses, and
now it can use extendIntervalEndTo() which coalesces ranges.

llvm-svn: 126803
2011-03-02 00:06:15 +00:00
Jakob Stoklund Olesen 2b09bed518 Delete dead code.
llvm-svn: 126801
2011-03-01 23:24:19 +00:00
Jakob Stoklund Olesen 8ef91fc870 Move the value map from LiveIntervalMap to SplitEditor.
The value map is currently not used, all values are 'complex mapped' and
LiveIntervalMap::mapValue is used to dig them out.

This is the first step in a series changes leading to the removal of
LiveIntervalMap. Its data structures can be shared among all the live intervals
created by a split, so it is wasteful to create a copy for each.

llvm-svn: 126800
2011-03-01 23:14:53 +00:00
Jakob Stoklund Olesen 977e3d3c48 Delete dead code.
Local live range splitting is better driven by interference. This code was just
guessing.

llvm-svn: 126799
2011-03-01 23:14:50 +00:00
Jakob Stoklund Olesen ff07178789 Drop RAGreedy::trySpillInterferences().
This is a waste of time since we already know how to evict all interferences
which is a better approach anyway.

llvm-svn: 126798
2011-03-01 23:14:48 +00:00
Devang Patel 6c622ef1bc If argument numbering is encoded in metadata then emit arguments' debug info in that order.
llvm-svn: 126794
2011-03-01 22:58:55 +00:00
Jakob Stoklund Olesen 5f9f081d76 Keep track of which stage produced a live range, and bypass earlier stages when revisiting.
This effectively disables the 'turbo' functionality of the greedy register
allocator where all new live ranges created by splitting would be reconsidered
as if they were originals.

There are two reasons for doing this, 1. It guarantees that the algorithm
terminates. Early versions were prone to infinite looping in certain corner
cases. 2. It is a 2x speedup. We can skip a lot of unnecessary interference
checks that won't lead to good splitting anyway.

The problem is that region splitting only gets one shot, so it should probably
be changed to target multiple physical registers at once.

Local live range splitting is still 'turbo' enabled. It only accounts for a
small fraction of compile time, so it is probably not necessary to do anything
about that.

llvm-svn: 126781
2011-03-01 21:10:07 +00:00
Duncan Sands cb95eeecc6 Add a few missed unary cases when legalizing vector results. Put some cases
in alphabetical order.

llvm-svn: 126745
2011-03-01 15:15:43 +00:00
Jim Grosbach 621818ab1a trailing whitespace.
llvm-svn: 126733
2011-03-01 01:39:05 +00:00
Jim Grosbach 1d479dbc55 Generalize the register matching code in DAGISel a bit.
llvm-svn: 126731
2011-03-01 01:37:19 +00:00
Owen Anderson 0dc63104c6 Use the correct shift amount type.
llvm-svn: 126684
2011-02-28 21:10:10 +00:00
Owen Anderson 4f4df81861 Clean whitespace.
llvm-svn: 126683
2011-02-28 20:57:56 +00:00
Dan Gohman 06d70015ce Delete the GEPSplitter experiment.
llvm-svn: 126671
2011-02-28 19:47:47 +00:00
Stuart Hastings 67c5c3e939 Support for byval parameters on ARM. Will be enabled by a forthcoming
patch to the front-end.  Radar 7662569.

llvm-svn: 126655
2011-02-28 17:17:53 +00:00
Duncan Sands f571290d1e Legalize support for fpextend of vector. PR9309.
llvm-svn: 126574
2011-02-27 14:41:27 +00:00
Nadav Rotem b00913028f Fix typos in the comments.
llvm-svn: 126565
2011-02-27 07:40:43 +00:00
Tobias Grosser 3ac8689fa3 Pass the graph to the DOTGraphTraits.getEdgeAttributes().
This follows the interface of getNodeAttributes.

llvm-svn: 126562
2011-02-27 04:11:03 +00:00
Benjamin Kramer 26691d9660 Add some DAGCombines for (adde 0, 0, glue), which are useful to optimize legalized code for large integer arithmetic.
1. Inform users of ADDEs with two 0 operands that it never sets carry
2. Fold other ADDs or ADDCs into the ADDE if possible

It would be neat if we could do the same thing for SETCC+ADD eventually, but we can't do that in target independent code.

llvm-svn: 126557
2011-02-26 22:48:07 +00:00
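A plain C++ model of the first fold in r126557 above (an illustration of the arithmetic, not SelectionDAG code; adde32 is a made-up helper): an ADDE whose two value operands are zero simply forwards the incoming carry and can never produce a carry out, so its users can treat the carry as known zero.

	// Models adde: result = a + b + carry_in, carry_out = the overflow bit.
	unsigned adde32(unsigned a, unsigned b, unsigned carry_in, unsigned *carry_out) {
	  unsigned long long sum = (unsigned long long)a + b + carry_in;
	  *carry_out = (unsigned)(sum >> 32);
	  return (unsigned)sum;
	}
	// adde32(0, 0, c, &cout) == c with cout == 0 for c in {0, 1}.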
Jim Grosbach 416c47019c Trailing whitespace.
llvm-svn: 126526
2011-02-25 22:53:20 +00:00
Owen Anderson b2c80da4ae Allow targets to specify the type of the RHS of a shift, parameterized on the type of the LHS.
llvm-svn: 126518
2011-02-25 21:41:48 +00:00
Cameron Zwarich fcf51fd298 Roll out r126425 and r126450 to see if it fixes the failures on the buildbots.
llvm-svn: 126488
2011-02-25 16:30:32 +00:00
Jim Grosbach 14a07365cb Fix formatting of debug helper string.
llvm-svn: 126471
2011-02-25 03:59:03 +00:00
Cameron Zwarich 4c82cd21ed Set NumSignBits to 1 if KnownZero/KnownOne are being zero extended. In theory it
is possible to do better if the high bit is set in either KnownZero/KnownOne, but
in practice NumSignBits is always 1 when we are zero extending because nothing
is known about that register.

llvm-svn: 126465
2011-02-25 01:11:01 +00:00
Cameron Zwarich d2f3041c7f We only want to zero extend the existing information if the bit width is
actually larger.

llvm-svn: 126464
2011-02-25 01:10:55 +00:00
Jakob Stoklund Olesen 9918b33451 Try harder to get the hint by preferring to evict hint interference.
llvm-svn: 126463
2011-02-25 01:04:22 +00:00
Jakob Stoklund Olesen e68a27eecd Tweak the register allocator priority queue some more.
New live ranges are assigned in long -> short order, but live ranges that have
been evicted at least once are deferred and assigned in short -> long order.

Also disable splitting and spilling for live ranges seen for the first time.

The intention is to create a realistic interference pattern from the heavy live
ranges before starting splitting and spilling around it.

llvm-svn: 126451
2011-02-24 23:21:36 +00:00
Nick Lewycky 1db7b187cb Remove dead variable.
llvm-svn: 126450
2011-02-24 23:15:43 +00:00
Devang Patel b037383a35 Enable DebugInfo support for COFF object files.
Patch by Nathan Jeffords!

llvm-svn: 126425
2011-02-24 21:04:00 +00:00
Nadav Rotem 502f1b943f Enable support for vector sext and trunc:
Limit the folding of any_ext and sext  into the load operation to scalars.
Limit the active-bits trunc optimization to scalars.
Document vector trunc and vector sext in LangRef.

Similar to commit 126080 (for enabling zext).

llvm-svn: 126424
2011-02-24 21:01:34 +00:00
Rafael Espindola 601a11edd4 Fix llvm-gcc bootstrap with gnu ld.
The problem was codegen guessing the wrong values and printing

	.section	.eh_frame,"aMS",@progbits,4

It is not clear at all if Codegen should try to guess; MC is the
one that should know the default flags.

llvm-svn: 126421
2011-02-24 20:18:01 +00:00
Devang Patel a5d93247c2 Do not use DIFactory.
llvm-svn: 126397
2011-02-24 18:49:30 +00:00
Cameron Zwarich a62fc89a04 Merge information about the number of zero, one, and sign bits of live-out
registers at phis. This enables us to eliminate a lot of pointless zexts during
the DAGCombine phase. This fixes <rdar://problem/8760114>.

llvm-svn: 126380
2011-02-24 10:00:25 +00:00
Cameron Zwarich 3cf9280214 Add a getNumSignBits() method to APInt.
llvm-svn: 126379
2011-02-24 10:00:20 +00:00
Cameron Zwarich 97eb52da7b Add a mechanism for invalidating the LiveOutInfo of a PHI, and use it whenever
a block is visited before all of its predecessors.

llvm-svn: 126378
2011-02-24 10:00:16 +00:00
Cameron Zwarich 988faf91bd Track blocks visited in reverse postorder.
llvm-svn: 126377
2011-02-24 10:00:13 +00:00
Cameron Zwarich 6470647383 Refactor the LiveOutInfo interface into a few methods on FunctionLoweringInfo
and make the actual map private.

llvm-svn: 126376
2011-02-24 10:00:08 +00:00
Cameron Zwarich b670d512e9 Have isel visit blocks in reverse postorder rather than an undefined order. This
allows for the information propagated across basic blocks to be merged at phis.

llvm-svn: 126375
2011-02-24 10:00:04 +00:00
Jakob Stoklund Olesen 2b4ded329d Use the same spill slot for all live ranges that descend from the same original
register.

This avoids some silly stack slot shuffling when both sides of a copy get
spilled.

llvm-svn: 126353
2011-02-24 01:07:55 +00:00
Devang Patel 7b0f796c55 Use DW_FORM_data2 for DW_AT_language and let users use DW_LANG_lo_user=0x8000 to DW_LANG_hi_user=0xffff range.
llvm-svn: 126339
2011-02-23 22:37:04 +00:00
Jakob Stoklund Olesen ed172998a6 It is safe to ignore LastSplitPoint when the variable is not live out.
No code will be inserted after the split point anyway.

llvm-svn: 126319
2011-02-23 18:26:31 +00:00
Stuart Hastings bf83659d11 Omit private_extern declarations of extern symbols; followup to
r124468.  Patch by Rafael Avila de Espindola!

llvm-svn: 126297
2011-02-23 02:27:05 +00:00
Jakob Stoklund Olesen b51f65c297 Keep track of how many times a live range has been dequeued, and prioritize new ranges.
When a large live range is evicted, it will usually be split when it comes
around again. By deferring evicted live ranges, the splitting happens at a time
when the interference pattern is more realistic. This prevents repeated
splitting and evictions.

llvm-svn: 126282
2011-02-23 00:56:56 +00:00
Jakob Stoklund Olesen 37de3235e5 Fix a bug in determining if there is only a single interfering register.
llvm-svn: 126277
2011-02-23 00:29:55 +00:00
Jakob Stoklund Olesen 6bd68cdffb Be more aggressive about evicting interference.
Use interval sizes instead of spill weights to determine if it is legal to evict
interference. A smaller interval can evict interference if all interfering live
ranges are larger.

Allow multiple interferences to be evicted as along as they are all larger than
the live range being allocated.

Spill weights are still used to select the preferred eviction candidate.

llvm-svn: 126276
2011-02-23 00:29:52 +00:00
Jakob Stoklund Olesen 2329c542e9 Change the RAGreedy register assignment order so large live ranges are allocated first.
This is based on the observation that long live ranges are more difficult to
allocate, so there is a better chance of solving the puzzle by handling the big
pieces first. The allocator will evict and split long live ranges when they get
in the way.

RABasic is still using spill weights for its priority queue, so the interface to
the queue has been virtualized.

llvm-svn: 126259
2011-02-22 23:01:52 +00:00
Jakob Stoklund Olesen fbad93fa13 80 Col.
llvm-svn: 126258
2011-02-22 23:01:49 +00:00
Cameron Zwarich 7cf88763e0 MachineConstantPoolValues are not uniqued, so they need to be freed if they
share entries. Add a DenseSet to MachineConstantPool for the MachineCPVs that
it owns.

This will hopefully fix the MC/ARM/elf-reloc-01.ll failure on the leaks bots.

llvm-svn: 126218
2011-02-22 08:54:30 +00:00
Andrew Trick 842921dfc8 VirtRegRewriter assertion fix.
Apparently it's ok for multiple operands to "kill" the same register.
Fixes PR9237.

llvm-svn: 126190
2011-02-22 06:52:56 +00:00
Cameron Zwarich f8b22b3483 Roll out r126169 and r126170 in an attempt to fix the selfhost bot.
llvm-svn: 126185
2011-02-22 03:24:52 +00:00
Cameron Zwarich 800f85baf9 Merge information about the number of zero, one, and sign bits of live-out registers
at phis. This enables us to eliminate a lot of pointless zexts during the DAGCombine
phase. This fixes <rdar://problem/8760114>.

llvm-svn: 126170
2011-02-22 00:46:27 +00:00
Cameron Zwarich f248f945c8 Have isel visit blocks in reverse postorder rather than an undefined order. This
allows for the information propagated across basic blocks to be merged at phis.

llvm-svn: 126169
2011-02-22 00:46:22 +00:00
Eric Christopher 9b48fef478 Revert r125960, it's breaking darwin10 bootstrap.
llvm-svn: 126163
2011-02-21 23:52:19 +00:00
Evan Cheng b8ed462ca2 Add more debugging output.
llvm-svn: 126158
2011-02-21 23:39:48 +00:00
Devang Patel f3292b2196 Revert r124611 - "Keep track of incoming argument's location while emitting LiveIns."
In other words, do not keep track of an argument's location. The debugger (gdb) is not prepared to see line table entries for arguments. For the debugger, the "second" line table entry marks the beginning of the function body.
This requires some coordination with the debugger to get this working.
 - The debugger needs to be aware of prolog_end attribute attached with line table entries.
 - The compiler needs to accurately mark prolog_end in line table entries (at -O0 and at -O1+)

llvm-svn: 126155
2011-02-21 23:21:26 +00:00
Jakob Stoklund Olesen 60a26a6578 Add SplitKit::isOriginalEndpoint and use it to force live range splitting to terminate.
An original endpoint is an instruction that killed or defined the original live
range before any live ranges were split.

When splitting global live ranges, avoid creating local live ranges without any
original endpoints. We may still create global live ranges without original
endpoints, but such a range won't be split again, and live range splitting still
terminates.

llvm-svn: 126151
2011-02-21 23:09:46 +00:00
Stuart Hastings b4863a41e9 Fix to correctly support attribute((section("__DATA, __common"))).
Radar 9012638.

llvm-svn: 126127
2011-02-21 17:27:17 +00:00
Nadav Rotem 25f2ac948b Fix 9267; Add vector zext support.
The DAGCombiner folds the zext into complex load instructions. This patch
prevents this optimization on vectors since none of the supported targets
knows how to perform load+vector_zext in one instruction.

llvm-svn: 126080
2011-02-20 12:37:50 +00:00
Devang Patel 5f1b4cdda1 Do not emit empty DW_TAG_lexical_block DIEs. In one test case, size of debug info reduced by almost 7%.
llvm-svn: 126009
2011-02-19 01:31:27 +00:00
Jakob Stoklund Olesen f1a60a61ba Give SplitAnalysis a VRM member to access VirtRegMap::getOriginal().
llvm-svn: 126005
2011-02-19 00:53:42 +00:00
Jakob Stoklund Olesen 04aff708fd Missed member rename for naming convention.
llvm-svn: 126003
2011-02-19 00:42:33 +00:00
Jakob Stoklund Olesen 13eb3650b0 This method belonged in VirtRegMap.
llvm-svn: 126002
2011-02-19 00:38:43 +00:00
Jakob Stoklund Olesen 609bc44c2e Separate timers for local and global splitting.
llvm-svn: 126001
2011-02-19 00:38:40 +00:00
Devang Patel b7ae3ccb84 Do not lose debug info of an inlined function argument even if the argument is only used through GEPs.
This time with a fix that avoids using invalidated DenseMap iterator.

llvm-svn: 125984
2011-02-18 22:43:42 +00:00
Jakob Stoklund Olesen 4376d67b6f Use VirtRegMap's Virt2SplitMap to keep track of the original live range before splitting.
All new virtual registers created for spilling or splitting point back to their original.

llvm-svn: 125980
2011-02-18 22:35:20 +00:00
Oscar Fuentes 5ed962656c Move library stuff out of the toplevel CMakeLists.txt file.
llvm-svn: 125968
2011-02-18 22:06:14 +00:00
Jakob Stoklund Olesen 5bfec69b1d Add VirtRegMap::rewrite() and use it in the new register allocators.
The rewriter works almost identically to -rewriter=trivial, except it also
eliminates any identity copies.

This makes the new register allocators independent of VirtRegRewriter.cpp which
will be going away at the same time as RegAllocLinearScan.

llvm-svn: 125967
2011-02-18 22:03:18 +00:00
Bill Wendling 8fbe09f160 Reapply r114997 now that the buildbots have been updated.
llvm-svn: 125960
2011-02-18 21:12:58 +00:00
Cameron Zwarich 0a1a36dc46 Roll out r125794 to help diagnose the llvm-gcc-i386-linux-selfhost failure.
llvm-svn: 125830
2011-02-18 04:58:10 +00:00
Jakob Stoklund Olesen 73e203e3d3 Trim debugging output.
llvm-svn: 125802
2011-02-18 00:32:47 +00:00
Devang Patel f922a431ee Do not lose debug info of an inlined function argument even if the argument is only used through GEPs.
llvm-svn: 125794
2011-02-17 23:33:27 +00:00
Jakob Stoklund Olesen 99827e861f Add basic register allocator statistics.
llvm-svn: 125789
2011-02-17 22:53:48 +00:00
Jakob Stoklund Olesen 93c8736abb Split local live ranges.
A local live range is live in a single basic block. If such a range fails to
allocate, try to find a sub-range that would get a larger spill weight than its
interference.

llvm-svn: 125764
2011-02-17 19:13:53 +00:00
Duncan Sands c6196aa481 Fix wrong logic in promotion of signed mul-with-overflow (I pointed this out at
the time but presumably my email got lost).  Examples where the previous logic
got it wrong: (1) a signed i8 multiply of 64 by 2 overflows, but the high part is
zero; (2) a signed i8 multiply of -128 by 2 overflows, but the high part is all
ones. 

llvm-svn: 125748
2011-02-17 12:42:48 +00:00
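A small C++ check of the two counterexamples in r125748 above (illustrative only; smul_i8_overflows is a made-up name): the correct overflow test after widening is whether the product still sign-extends from i8, not whether the high byte is zero or all ones.

	#include <stdint.h>
	int smul_i8_overflows(int8_t a, int8_t b) {
	  int16_t wide = (int16_t)a * (int16_t)b;   // promote and multiply
	  return (int8_t)wide != wide;              // overflow iff it no longer fits in i8
	}
	// 64 * 2   -> 128  overflows, yet the high byte of 0x0080 is 0x00.
	// -128 * 2 -> -256 overflows, yet the high byte of 0xFF00 is 0xFF.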
Cameron Zwarich 83f4cee199 Switch to SmallVector in SimpleRegisterCoalescing for a 3.5% speedup on 403.gcc.
llvm-svn: 125728
2011-02-17 06:52:07 +00:00
Cameron Zwarich ecd44922ab Adjust indenting of arguments.
llvm-svn: 125727
2011-02-17 06:13:46 +00:00
Cameron Zwarich 0b0cc4d75e Return Changed from SplitPHIEdges rather than always returning true.
llvm-svn: 125726
2011-02-17 06:13:43 +00:00
Stuart Hastings 81c4306005 Swap VT and DebugLoc operands of getExtLoad() for consistency with
other getNode() methods.  Radar 9002173.

llvm-svn: 125665
2011-02-16 16:23:55 +00:00
Eric Christopher e5ca1e0506 Refactor zero folding slightly. Clean up todo.
llvm-svn: 125651
2011-02-16 04:50:12 +00:00
Eric Christopher ef72141a75 The change for PR9190 wasn't quite right. We need to avoid making the
transformation if we can't legally create a build vector of the correct
type. Check that we can make the transformation first, and add a TODO to
refactor this code with similar cases.

Fixes: PR9223 and rdar://9000350
llvm-svn: 125631
2011-02-16 01:10:03 +00:00
Evan Cheng 2eecc22fdb Remove a duplicated check.
llvm-svn: 125625
2011-02-16 00:37:02 +00:00
Devang Patel d12c0a2764 Ignore DBG_VALUE machine instructions while constructing instruction ranges based on location info.
A machine instruction range consisting of only DBG_VALUE MIs contributes only consecutive labels in the assembly output, which is harmless, and an empty scope entry in DebugInfo, which confuses debugger tools.

llvm-svn: 125577
2011-02-15 17:56:09 +00:00
Duncan Sands 75b5d27b84 Spelling fix: consequtive -> consecutive.
llvm-svn: 125563
2011-02-15 09:23:02 +00:00
Evan Cheng 98196b4ebb Fix thinko. Cmp can be the first instruction in a MBB.
llvm-svn: 125552
2011-02-15 05:00:24 +00:00
Chris Lattner 69229316aa convert ConstantVector::get to use ArrayRef.
llvm-svn: 125537
2011-02-15 00:14:00 +00:00
Jakob Stoklund Olesen 1dd377d8c8 Move more fragments of spill weight calculation into CalcSpillWeights.h
Simplify the spill weight calculation a bit by bypassing
getApproximateInstructionCount() and using LiveInterval::getSize() directly.
This changes the computed spill weights, but only by a constant factor in each
function. It should not affect how spill weights compare against each other, and
so it shouldn't affect code generation.

llvm-svn: 125530
2011-02-14 23:15:38 +00:00
Rafael Espindola 70d8015063 Switch llvm to using comdats. For now always use groups with a single
section.

llvm-svn: 125526
2011-02-14 22:23:49 +00:00
Evan Cheng 9bf3f8e08b Fix PR8854. Track inserted copies to avoid read before write. Sorry, it's hard to reduce a sensible small test case.
llvm-svn: 125523
2011-02-14 21:50:37 +00:00
Chris Lattner 34442e6ebf revert my ConstantVector patch, it seems to have made the llvm-gcc
builders unhappy.

llvm-svn: 125504
2011-02-14 18:15:46 +00:00
Rafael Espindola 85bc995c5b Move broken HasCommonSymbols to ELFWriter.cpp.
llvm-svn: 125490
2011-02-14 16:51:08 +00:00
Chris Lattner d9f5b88548 Switch ConstantVector::get to use ArrayRef instead of a pointer+size
idiom.  Change various clients to simplify their code.

llvm-svn: 125487
2011-02-14 07:55:32 +00:00
Chris Lattner eff248ca7f fix PR9210 by implementing some type legalization logic for
vector fp conversions.

llvm-svn: 125482
2011-02-14 06:30:45 +00:00
Chris Lattner eaa8341d3b fix two comment thinkos
llvm-svn: 125481
2011-02-14 06:14:42 +00:00
Cameron Zwarich 3005ee396d Add some statistics to StrongPHIElimination.
llvm-svn: 125477
2011-02-14 02:09:18 +00:00
Cameron Zwarich 8790396e6a Add a statistic to PHIElimination tracking the number of critical edges split.
llvm-svn: 125476
2011-02-14 02:09:11 +00:00
Chris Lattner 46c01a30f4 Enhance ComputeMaskedBits to know that aligned frameindexes
have their low bits set to zero.  This allows us to optimize
out explicit stack alignment code like in stack-align.ll:test4 when
it is redundant.

Doing this causes the code generator to start turning FI+cst into
FI|cst all over the place, which is general goodness (that is the
canonical form) except that various pieces of the code generator
don't handle OR aggressively.  Fix this by introducing a new
SelectionDAG::isBaseWithConstantOffset predicate, and using it
in places that are looking for ADD(X,CST).  The ARM backend in
particular was missing a lot of addressing mode folding opportunities
around OR.

llvm-svn: 125470
2011-02-13 22:25:43 +00:00
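The arithmetic fact behind the FI+cst to FI|cst canonicalization in r125470 above, as a hedged C++ sketch (addEqualsOr is a made-up name): when the base has its low bits known zero (an aligned frame index) and the constant fits entirely in those low bits, ADD and OR produce the same value because no carry can propagate.

	// Assumes base is 16-byte aligned and 0 <= cst < 16; then base + cst == (base | cst).
	bool addEqualsOr(unsigned base, unsigned cst) {
	  return (base + cst) == (base | cst);      // no bits overlap, so no carry
	}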
Chris Lattner e95d195014 Revisit my fix for PR9028: the issue is that DAGCombine was
generating i8 shift amounts for things like i1024 types.  Add
an assert in getNode to prevent this from occuring in the future,
fix the buggy transformation, revert my previous patch, and
document this gotcha in ISDOpcodes.h

llvm-svn: 125465
2011-02-13 19:09:16 +00:00
Chris Lattner d5f0b1148a when legalizing extremely wide shifts, make sure that
the shift amounts are in a suitably wide type so that
we don't generate out of range constant shift amounts.

This fixes PR9028.

llvm-svn: 125458
2011-02-13 09:10:56 +00:00
Chris Lattner 2a720d933a fix visitShift to properly zero extend the shift amount if the provided operand
is narrower than the shift register.  Doing an anyext provides undefined bits in
the top part of the register.

llvm-svn: 125457
2011-02-13 09:02:52 +00:00
Nadav Rotem db2f54811d A fix for 9165.
The DAGCombiner created illegal BUILD_VECTOR operations.
The patch added a check that either illegal operations are
allowed or that the created operation is legal.

llvm-svn: 125435
2011-02-12 14:40:33 +00:00
Nadav Rotem a49a02a04f SimplifySelectOps can only handle selects with a scalar condition. Add a check
that the condition is not a vector.

llvm-svn: 125398
2011-02-11 19:57:47 +00:00
Nadav Rotem 18f6a33457 Fix #9190
The bug happens when the DAGCombiner attempts to optimize one of the patterns
of the SUB opcode. It tries to create a zero of type v2i64. This type is legal
on 32bit machines, but the initializer of this vector (i64) is target dependent.
Currently, the initializer attempts to create an i64 zero constant, which fails.
Added a flag to tell the DAGCombiner to create a legal zero, if we require that
the pass would generate legal types.

llvm-svn: 125391
2011-02-11 19:20:37 +00:00
Evan Cheng d4fcc05304 After 3-addressifying a two-address instruction, update the register maps; add a missing check when considering whether it's profitable to commute. rdar://8977508.
llvm-svn: 125259
2011-02-10 02:20:55 +00:00
Jakob Stoklund Olesen 8dafc875bb Delete unused code for analyzing and splitting around loops.
Loop splitting is better handled by the more generic global region splitting
based on the edge bundle graph.

llvm-svn: 125243
2011-02-09 23:56:18 +00:00
Jakob Stoklund Olesen 77ba1cf286 Simplify using the new leaveIntvBefore()
llvm-svn: 125238
2011-02-09 23:33:02 +00:00
Jakob Stoklund Olesen 7cb57b30bd Use the LiveBLocks array for SplitEditor::splitSingleBlocks() as well.
This fixes a bug where splitSingleBlocks() could split a live range after a
terminator instruction.

llvm-svn: 125237
2011-02-09 23:30:25 +00:00
Mikhail Glushenkov 52847a9bb9 Typo.
llvm-svn: 125232
2011-02-09 22:55:48 +00:00
Jakob Stoklund Olesen b1b76adbd9 Move calcLiveBlockInfo() and the BlockInfo struct into SplitAnalysis.
No functional changes intended.

llvm-svn: 125231
2011-02-09 22:50:26 +00:00
Jakob Stoklund Olesen f6e0394d76 Ignore <undef> uses when analyzing and rewriting.
llvm-svn: 125226
2011-02-09 21:52:09 +00:00
Jakob Stoklund Olesen 6d4d8581bc Assert on bad jump tables.
llvm-svn: 125225
2011-02-09 21:52:06 +00:00
Jakob Stoklund Olesen 8f59b46750 Add tags to live interval unions to avoid using stale queries.
The tag is updated whenever the live interval union is changed, and it is tested
before using cached information.

llvm-svn: 125224
2011-02-09 21:52:03 +00:00
Jakob Stoklund Olesen 1305bc0a65 Evict a lighter single interference before attempting to split a live range.
Registers are not allocated strictly in spill weight order when live range
splitting and spilling has created new shorter intervals with higher spill
weights.

When one of the new heavy intervals conflicts with a single lighter interval,
simply evict the old interval instead of trying to split the heavy one.

The lighter interval is a better candidate for splitting, it has a smaller use
density.

llvm-svn: 125151
2011-02-09 01:14:03 +00:00
Jakob Stoklund Olesen 0b2f8d24b3 Set an allocation hint when rematting before a COPY.
This almost guarantees that the COPY will be coalesced.

llvm-svn: 125140
2011-02-09 00:25:36 +00:00
Jakob Stoklund Olesen 5a9683b319 Fix one more case of splitting after the last split point.
llvm-svn: 125137
2011-02-08 23:26:48 +00:00
Jakob Stoklund Olesen f248b20d8c Reorganize interference code to check LastSplitPoint first.
The last split point can be anywhere in the block, so it interferes with the
strictly monotonic requirements of advanceTo().

llvm-svn: 125132
2011-02-08 23:02:58 +00:00
Jakob Stoklund Olesen 93dda45ada Also handle the situation where an indirect branch is the first (and last)
instruction in a basic block.

llvm-svn: 125116
2011-02-08 21:46:11 +00:00
Jakob Stoklund Olesen f2b16dc847 Add LiveIntervals::addKillFlags() to recompute kill flags after register allocation.
This is a lot easier than trying to get kill flags right during live range
splitting and rematerialization.

llvm-svn: 125113
2011-02-08 21:13:03 +00:00
Jakob Stoklund Olesen 4d83c691f6 Trim debug spew
llvm-svn: 125109
2011-02-08 19:33:58 +00:00
Jakob Stoklund Olesen c6a2041d99 Avoid folding a load instruction into an instruction that redefines the register.
The target hook doesn't know how to do that. (Neither do I).

llvm-svn: 125108
2011-02-08 19:33:55 +00:00
Jakob Stoklund Olesen 1749935173 Add SplitEditor::overlapIntv() to create small ranges where both registers are live.
If a live range is used by a terminator instruction, and that live range needs
to leave the block on the stack or in a different register, it can be necessary
to have both sides of the split live at the terminator instruction.

Example:

  %vreg2 = COPY %vreg1
  JMP %vreg1

Becomes after spilling %vreg2:

  SPILL %vreg1
  JMP %vreg1

The spill doesn't kill the register as is normally the case.

llvm-svn: 125102
2011-02-08 18:50:21 +00:00
Jakob Stoklund Olesen 3d11c8eaf2 Add assertion.
llvm-svn: 125101
2011-02-08 18:50:18 +00:00
Andrew Trick 4b4918788b Fix PostRA antidependence breaker.
Avoid using the same register for two def operands or an earlyclobber
def and use operand. This fixes PR8986 and improves on the prior fix
for rdar://problem/8959122.

llvm-svn: 125089
2011-02-08 17:39:46 +00:00
Jakob Stoklund Olesen 55fc1d0b3e Add LiveIntervals::shrinkToUses().
After uses of a live range are removed, recompute the live range to only cover
the remaining uses. This is necessary after rematerializing the value before
some (but not all) uses.

llvm-svn: 125058
2011-02-08 00:03:05 +00:00
Devang Patel 639dd997eb Remove comment about an argument that was removed couple of years ago.
llvm-svn: 125054
2011-02-07 21:58:52 +00:00
Andrew Trick f841571404 Fix an anti-dep breaker corner case.
<rdar://problem/8959122> illegal register operands for UMULL instruction in cfrac nightly test
I'm still working on a unit test, but the case is:
rx = movcc rx, r3
r2 = ldr
r2, r3 = umull r2, r2

The anti-dep breaker should not convert this into an illegal instruction:
r2, r2 = umull

llvm-svn: 124932
2011-02-05 02:58:46 +00:00
Jakob Stoklund Olesen 4ee8990278 Be more strict about the first/last interference-free use.
If the interference overlaps the instruction, we cannot separate it.

llvm-svn: 124918
2011-02-05 01:06:39 +00:00
Jakob Stoklund Olesen 7b73528064 Add assertions to verify that the new interval is clear of the interference.
If these inequalities don't hold, we are creating a live range split that won't
allocate.

llvm-svn: 124917
2011-02-05 01:06:36 +00:00
Jakob Stoklund Olesen e8ac8e93a1 Apparently, it is possible for a block with a landing pad successor to have no calls.
In that case we simply ignore the landing pad and split live ranges before the
first terminator.

llvm-svn: 124907
2011-02-04 23:11:13 +00:00
Devang Patel 116a9d7c38 Merge .debug_loc entries whenever possible to reduce debug_loc size.
llvm-svn: 124904
2011-02-04 22:57:18 +00:00
Nick Lewycky d650b30488 Mark that the return is using EAX so that we don't use it for some other
purpose. Fixes PR9080!

llvm-svn: 124903
2011-02-04 22:44:08 +00:00
Jakob Stoklund Olesen 80a2878b5d Be more accurate about live range splitting at the end of blocks.
If interference reaches the last split point, it is effectively live out and
should be marked as 'MustSpill'.

This can make a difference when the terminator uses a register. There is no way
that register can be reused in the outgoing CFG bundle, even if it isn't live
out.

llvm-svn: 124900
2011-02-04 21:42:06 +00:00
Jakob Stoklund Olesen 096bd8837f Add LiveIntervals::getLastSplitPoint().
A live range cannot be split everywhere in a basic block. A split must go before
the first terminator, and if the variable is live into a landing pad, the split
must happen before the call that can throw.

llvm-svn: 124894
2011-02-04 19:33:11 +00:00
Jakob Stoklund Olesen fefe6ebc73 Verify that one of the ranges produced by region splitting is allocatable.
We should not be attempting a region split if it won't lead to at least one
directly allocatable interval. That could cause infinite splitting loops.

llvm-svn: 124893
2011-02-04 19:33:07 +00:00
Andrew Trick d0548ae750 Introducing a new method of tracking register pressure. We can't
precisely track pressure on a selection DAG, but we can at least keep
it balanced. This design accounts for various interesting aspects of
selection DAGS: register and subregister copies, glued nodes, dead
nodes, unused registers, etc.

Added SUnit::NumRegDefsLeft and ScheduleDAGSDNodes::RegDefIter.

Note: I disabled PrescheduleNodesWithMultipleUses when register
pressure is enabled, based on no evidence other than I don't think it
makes sense to have both enabled.

llvm-svn: 124853
2011-02-04 03:18:17 +00:00
Devang Patel 26ffa01889 DebugLoc associated with a machine instruction is used to emit location entries. DebugLoc associated with a DBG_VALUE is used to identify the lexical scope of the variable. After register allocation, while inserting a DBG_VALUE, remember the original debug location for the first instruction and reuse it; otherwise the dwarf writer may be misled when identifying the variable's scope.
llvm-svn: 124845
2011-02-04 01:43:25 +00:00
Evan Cheng f7073d1445 Update comments.
llvm-svn: 124843
2011-02-04 01:10:12 +00:00
Jakob Stoklund Olesen 3295a99fe9 Skip unused values.
llvm-svn: 124842
2011-02-04 00:59:23 +00:00
Jakob Stoklund Olesen b336c50c81 Also compute interference intervals for blocks with no uses.
When the live range is live through a block that doesn't use the register, but
that has interference, region splitting wants to split at the top and bottom of
the basic block.

llvm-svn: 124839
2011-02-04 00:39:20 +00:00
Jakob Stoklund Olesen 66d0f39904 Verify kill flags conservatively.
Allow a live range to end with a kill flag, but don't allow a kill flag that
doesn't end the live range.

This makes the machine code verifier more useful during register allocation when
kill flag computation is deferred.

llvm-svn: 124838
2011-02-04 00:39:18 +00:00
Andrew Trick 3f924e4e87 whitespace
llvm-svn: 124827
2011-02-03 23:00:17 +00:00
Jakob Stoklund Olesen 4a6518e6a8 Ensure that the computed interference intervals actually overlap their basic blocks.
llvm-svn: 124815
2011-02-03 20:29:43 +00:00
Jakob Stoklund Olesen db4cf7e4a4 Tweak debug output from SlotIndexes.
llvm-svn: 124814
2011-02-03 20:29:41 +00:00
Jakob Stoklund Olesen d8f62e2a62 Add debug output and asserts to the phi-connecting code.
llvm-svn: 124813
2011-02-03 20:29:39 +00:00
Jakob Stoklund Olesen 8c0254870b Fix coloring bug when mapping values in the middle of a live-through block.
If the found value is not live-through the block, we should only add liveness up
to the requested slot index. When the value is live-through, the whole block
should be colored.

Bug found by SSA verification in the machine code verifier.

llvm-svn: 124812
2011-02-03 20:29:36 +00:00
Jakob Stoklund Olesen f12e120743 Return live range end points from SplitEditor::enter*/leave*.
These end points come from the inserted copies, and can be passed directly to
useIntv. This simplifies the coloring code.

llvm-svn: 124799
2011-02-03 17:04:16 +00:00
Jakob Stoklund Olesen 2b855eb69c Silence an MSVC warning
llvm-svn: 124798
2011-02-03 17:04:12 +00:00
Eric Christopher ede6267993 Reapply this.
llvm-svn: 124779
2011-02-03 06:18:29 +00:00
Eric Christopher 21933539f2 Temporarily revert 124765 in an attempt to find the cycle that is breaking bootstrap.
llvm-svn: 124778
2011-02-03 05:40:54 +00:00
Jakob Stoklund Olesen dca2917e25 Defer SplitKit value mapping until all defs are available.
The greedy register allocator revealed some problems with the value mapping in
SplitKit. We would sometimes start mapping values before all defs were known,
and that could change a value from a simple 1-1 mapping to a multi-def mapping
that requires ssa update.

The new approach collects all defs and register assignments first without
filling in any live intervals. Only when finish() is called, do we compute
liveness and mapped values. At this time we know with certainty which values map
to multiple values in a split range.

This also has the advantage that we can compute live ranges based on the
remaining uses after rematerializing at split points.

The current implementation has many opportunities for compile time optimization.

llvm-svn: 124765
2011-02-03 00:54:23 +00:00
Devang Patel be933b470a Add support to describe template value parameter in debug info.
llvm-svn: 124755
2011-02-02 22:35:53 +00:00
Devang Patel 3a9e65efb6 Add support to describe template parameter type in debug info.
llvm-svn: 124752
2011-02-02 21:38:25 +00:00
Evan Cheng d42641c6b5 Given a pair consisting of a floating-point load and store, if there are no
other uses of the load, it may be legal to transform the load and store into an
integer load and store of the same width.

This is done if the target specifies the transformation as profitable, e.g.
on ARM this can transform:
vldr.32 s0, []
vstr.32 s0, []

to

ldr r12, []
str r12, []

rdar://8944252

llvm-svn: 124708
2011-02-02 01:06:55 +00:00
Matt Beaumont-Gay 29c8c8fe92 Take Bill Wendling's suggestion for structuring a couple of asserts.
llvm-svn: 124688
2011-02-01 22:12:50 +00:00
Devang Patel 56cc5fdf09 Keep track of incoming argument's location while emitting LiveIns.
llvm-svn: 124611
2011-01-31 21:38:14 +00:00
Richard Osborne 272e084bca Fix bug where ReduceLoadWidth was creating illegal ZEXTLOAD instructions.
llvm-svn: 124587
2011-01-31 17:41:44 +00:00
Anton Korobeynikov fe3a6e049d Clarify the LSDASection NULL check
llvm-svn: 124569
2011-01-30 22:07:31 +00:00
Jakob Stoklund Olesen 9af7afcb7f Respect the -tail-dup-size command line option even when optimizing for size.
This is similar to the -unroll-threshold option. There should be no change in
behavior when -tail-dup-size is not explicit on the llc command line.

llvm-svn: 124564
2011-01-30 20:38:12 +00:00
Benjamin Kramer 946e1522b6 Teach DAGCombine to fold (sra (trunc (srl x, c1)), c2) -> (trunc (sra x, c1+c2)) when c1 equals the number of bits that are truncated off.
This happens all the time when a smul is promoted to a larger type.

On x86-64 we now compile "int test(int x) { return x/10; }" into
  movslq  %edi, %rax
  imulq $1717986919, %rax, %rax
  movq  %rax, %rcx
  shrq  $63, %rcx
  sarq  $34, %rax <- used to be "shrq $32, %rax; sarl $2, %eax"
  addl  %ecx, %eax

This fires 96 times in gcc.c on x86-64.

llvm-svn: 124559
2011-01-30 16:38:43 +00:00
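A minimal C++ sketch of the equivalence behind the fold above, using invented
function names and the concrete case c1 = 32 (the number of truncated bits)
and c2 = 2; it assumes the usual arithmetic behavior of >> on signed values:

  #include <cstdint>
  int32_t before(int64_t x) {
    int32_t hi = (int32_t)((uint64_t)x >> 32);  // (trunc (srl x, 32))
    return hi >> 2;                             // (sra ..., 2)
  }
  int32_t after(int64_t x) {
    return (int32_t)(x >> 34);                  // (trunc (sra x, 32+2))
  }
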
Benjamin Kramer 65bb14d368 Add the missing sub identity "A-(A-B) -> B" to DAGCombine.
This happens e.g. for code like "X - X%10" where we lower the modulo operation
to a series of multiplies and shifts that are then subtracted from X, leading to
this missed optimization.

llvm-svn: 124532
2011-01-29 12:34:05 +00:00
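A short C++ sketch of how the pattern from the entry above arises (invented
names; the plain division stands in for the multiply/shift sequence the modulo
is actually lowered to):

  #include <cstdint>
  // X % 10 is lowered roughly as X - (X/10)*10, so "X - X % 10" reaches
  // DAGCombine as A - (A - B) with A = X and B = (X/10)*10.
  uint32_t beforeFold(uint32_t X) { return X - (X - (X / 10) * 10); }
  uint32_t afterFold(uint32_t X)  { return (X / 10) * 10; }  // A - (A - B) -> B
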
Evan Cheng d983eba7dc Re-apply r124518 with fix. Watch out for invalidated iterator.
llvm-svn: 124526
2011-01-29 04:46:23 +00:00
Evan Cheng 65b8ccf6ac Revert r124518. It broke Linux self-host.
llvm-svn: 124522
2011-01-29 02:43:04 +00:00
Evan Cheng d4eff31476 Re-commit r124462 with fixes. Tail recursion elim will now dup ret into unconditional predecessor to enable TCE on demand.
llvm-svn: 124518
2011-01-29 01:29:26 +00:00
Evan Cheng aaa9606b2f Revert r124462. There are a few big regressions that I need to fix first.
llvm-svn: 124478
2011-01-28 07:12:38 +00:00
Nick Lewycky 0af77fd45b Fix build with stdcxx by using llvm::next. Patch by Joerg Sonnenberger!
llvm-svn: 124472
2011-01-28 04:00:15 +00:00
Rafael Espindola 6c17d54891 Print the visibility of declarations.
llvm-svn: 124468
2011-01-28 03:20:10 +00:00
Evan Cheng 417fca86c4 - Stop simplifycfg from duplicating "ret" instructions into unconditional
branches. PR8575, rdar://5134905, rdar://8911460.
- Allow codegen tail duplication to dup small return blocks after register
  allocation is done.

llvm-svn: 124462
2011-01-28 02:19:21 +00:00
Andrew Trick c0ca67601a Remove a temporary workaround for a lencod miscompile. Depends on the fix in r124442.
llvm-svn: 124443
2011-01-27 21:28:51 +00:00
Andrew Trick 13bb644fdd VirtRegRewriter fix: update kill flags, which are used by the scavenger.
rdar://problem/8893967: JM/lencod miscompile at -arch armv7 -mthumb -O3

Added ResurrectKill to remove kill flags after we decide to reuse a
physical register, and (hopefully) ensured that we call it in all the
right places.

Sorry, I'm not checking in a unit test given that it's a miscompile I
can't reproduce easily with a toy example. Failures in the rewriter
depend on a series of heuristic decisions made during one of the many
upstream phases in codegen. This case would require coercing regalloc
to generate a couple of rematerializations in a way that causes the
scavenger to reuse the same register at just the wrong point.

The general way to test this is to implement kill flags
verification. Then we could have a simple, robust compile-only unit
test. That would be worth doing if the whole pass was not about to
disappear. At this point we focus verification work on the next
generation of regalloc.

llvm-svn: 124442
2011-01-27 21:26:43 +00:00
Devang Patel 1cec755494 Speculatively revert r124380.
llvm-svn: 124397
2011-01-27 19:15:01 +00:00
Devang Patel 3b266a2780 While legalizing SDValues, do not drop SDDbgValues; transfer them to new legal nodes.
Take 2. This includes a fix for the dragonegg crash.

llvm-svn: 124380
2011-01-27 17:43:53 +00:00
Bob Wilson 2d69fb4184 Avoid modifying the OneClassForEachPhysReg map while iterating over it.
Linear scan regalloc is currently assuming that any register aliased with
a member of a regclass must also be in at least one regclass.  That is not
always true.  For example, for X86, RIP is in a regclass but IP is not.
If you're unlucky, this can cause a crash by invalidating the iterator.

llvm-svn: 124365
2011-01-27 07:26:15 +00:00
Matt Beaumont-Gay a148c59231 Try harder to not have unused variables.
llvm-svn: 124350
2011-01-27 02:39:27 +00:00
Matt Beaumont-Gay 0cddbf2bdf Opt-mode -Wunused-variable cleanup
llvm-svn: 124346
2011-01-27 01:47:50 +00:00
Devang Patel 92b7077f9e Reapply 124301
llvm-svn: 124339
2011-01-27 00:13:27 +00:00
Bill Wendling fb4ee9bbde Initialize variable to get rid of clang warning.
llvm-svn: 124331
2011-01-26 22:21:35 +00:00
Devang Patel b370bf329a Revert 124301.
llvm-svn: 124327
2011-01-26 21:41:22 +00:00
Devang Patel 084e0628e0 Revert r124302
llvm-svn: 124320
2011-01-26 21:12:32 +00:00
David Greene bab5e6ed0e [AVX] Add INSERT_SUBVECTOR and support it on x86. This provides a
default implementation for x86, going through the stack in a similar
fashion to how the codegen implements BUILD_VECTOR.  Eventually this
will get matched to VINSERTF128 if AVX is available.

llvm-svn: 124307
2011-01-26 19:13:22 +00:00
Devang Patel a11210b1b8 While legalizing SDValues, do not drop SDDbgValues; transfer them to new legal nodes.
llvm-svn: 124302
2011-01-26 18:55:05 +00:00
Devang Patel 9d4eb2f480 Process valid SDDbgValues even if the node does not have any order assigned.
llvm-svn: 124301
2011-01-26 18:42:32 +00:00
Devang Patel 1448e7c8b6 Refactor.
llvm-svn: 124300
2011-01-26 18:20:04 +00:00
David Greene b6f1611928 [AVX] Support EXTRACT_SUBVECTOR on x86. This provides a default
implementation of EXTRACT_SUBVECTOR for x86, going through the stack
in a similar fashion to how the codegen implements BUILD_VECTOR.
Eventually this will get matched to VEXTRACTF128 if AVX is available.

llvm-svn: 124292
2011-01-26 15:38:49 +00:00
Jakob Stoklund Olesen b308902024 Rename member variables to follow the rest of LLVM.
No functional change.

llvm-svn: 124257
2011-01-26 00:50:53 +00:00
Devang Patel efc6b16e4b Provide an interface to transfer SDDbgValue from one SDNode to another.
llvm-svn: 124245
2011-01-25 23:27:42 +00:00
Devang Patel 70f8e5962a Resolve DanglingDbgValue of PHI nodes where the use follows the dbg.value intrinsic.
llvm-svn: 124203
2011-01-25 18:09:58 +00:00
Devang Patel 04b649d48a This assertion is too restrictive; it does not apply to dangling dbg value nodes (nodes where the dbg.value intrinsic precedes the use of the value).
llvm-svn: 124202
2011-01-25 18:09:33 +00:00
Anton Korobeynikov b15beb2ae1 Support printing the exception section into the current one. This is the case when LSDASection is blank.
llvm-svn: 124150
2011-01-24 22:38:40 +00:00
Devang Patel 533479544b Speculatively revert r124138.
llvm-svn: 124142
2011-01-24 20:04:37 +00:00
Devang Patel 8cc5355c90 Resolve DanglingDbgValue of PHI nodes where the use follows the dbg.value intrinsic.
llvm-svn: 124138
2011-01-24 19:24:37 +00:00
Andrew Trick a293c49f0d Temporarily workaround JM/lencod miscompile (SIGSEGV).
rdar://problem/8893967

llvm-svn: 124137
2011-01-24 19:08:15 +00:00
Rafael Espindola b3eca9bb71 Add support for the --noexecstack option.
llvm-svn: 124077
2011-01-23 17:55:27 +00:00
Ted Kremenek 3c4408ceb6 Null initialize a few variables flagged by
clang's -Wuninitialized-experimental warning.
While these don't look like real bugs, clang's
-Wuninitialized-experimental analysis is stricter
than GCC's, and these fixes have the benefit
of being generally nice cleanups.

llvm-svn: 124073
2011-01-23 17:05:06 +00:00
Rafael Espindola 4b7b7fba38 Delay the creation of eh_frame so that the user can change the defaults.
Add support for SHT_X86_64_UNWIND.

llvm-svn: 124059
2011-01-23 05:43:40 +00:00
Rafael Espindola 0e7e34e476 Remove more duplicated code.
llvm-svn: 124056
2011-01-23 04:43:11 +00:00
Rafael Espindola aea4958ea6 Remove duplicated code.
llvm-svn: 124054
2011-01-23 04:28:49 +00:00
Andrew Trick bd428ec50f Enable support for precise scheduling of the instruction selection
DAG. It can be disabled with "-disable-sched-cycles".

For ARM, this enables a framework for modeling the cpu pipeline and
counting stalls. It also activates several heuristics to drive
scheduling based on the model. Scheduling is inherently imprecise at
this stage, and until spilling is improved it may defeat attempts to
schedule. However, this framework provides greater control over
tuning codegen.

Although the flag is not target-specific, it should have very little
effect on the default scheduler used by x86. The only two changes that
affect x86 are:
- scheduling a high-latency operation bumps the current cycle so independent
  operations can have their latency covered. i.e. two independent 4
  cycle operations can produce results in 4 cycles, not 8 cycles.
- Two operations with equal register pressure impact and no
  latency-based stalls on their uses will be prioritized by depth before height
  (height is irrelevant if no stalls occur in the schedule below this point).

llvm-svn: 123971
2011-01-21 06:19:05 +00:00
Andrew Trick 47ff14b091 Convert -enable-sched-cycles and -enable-sched-hazard to -disable
flags. They are still not enabled in this revision.

Added TargetInstrInfo::isZeroCost() to fix a fundamental problem with
the scheduler's model of operand latency in the selection DAG.

Generalized unit tests to work with sched-cycles.

llvm-svn: 123969
2011-01-21 05:51:33 +00:00
Jakob Stoklund Olesen 8a46e26b8e SplitKit requires that all defs are in place before calling useIntv().
The value mapping gets confused about which original values have multiple new
definitions so they may need phi insertions.

This could probably be simplified by letting enterIntvBefore() take a live range
to be added following the instruction. As long as the range stays inside the
same basic block, value mapping shouldn't be a problem.

llvm-svn: 123926
2011-01-20 17:45:23 +00:00
Jakob Stoklund Olesen 04e6b3bd21 Add LiveIntervalMap::dumpCache() to print out the cache used by the ssa update algorithm.
llvm-svn: 123925
2011-01-20 17:45:20 +00:00
Eric Christopher 37c4a8be72 My editor's indent went crazy. Fix.
llvm-svn: 123909
2011-01-20 08:56:34 +00:00
Eric Christopher 785db078b4 Expand umulo and smulo with invalid return value types. Handle these similarly
to add/sub by doing the normal operation and then checking for overflow
afterwards. This generally relies on the DAG handling the later invalid
operations as well.

Fixes the 64-bit part of rdar://8622122 and rdar://8774702.

llvm-svn: 123908
2011-01-20 08:54:28 +00:00
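A rough C++ sketch of the "do the normal operation, check overflow afterwards"
idea for the unsigned case (invented names; this is not the actual expansion
code):

  #include <cstdint>
  bool umulo32(uint32_t a, uint32_t b, uint32_t &product) {
    product = a * b;                    // the normal multiply
    return a != 0 && product / a != b;  // overflow iff the full result didn't fit
  }
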
Evan Cheng b8b0ad80a8 Sorry, several patches in one.
TargetInstrInfo:
Change produceSameValue() to take MachineRegisterInfo as an optional argument.
When in SSA form, targets can use it to make more aggressive equality analysis.

Machine LICM:
1. Eliminate isLoadFromConstantMemory, use MI.isInvariantLoad instead.
2. Fix a bug which prevented CSE of instructions which are not re-materializable.
3. Use improved form of produceSameValue.

ARM:
1. Teach ARM produceSameValue to look past some PIC labels.
2. Look for operands from different loads of different constant pool entries
   which have the same values.
3. Re-implement PIC GA materialization using movw + movt. Combine the pair with
   an "add pc" or "ldr [pc]" to form pseudo instructions. This makes it possible
   to re-materialize the instruction, allow machine LICM to hoist the set of
   instructions out of the loop and make it possible to CSE them. It's a bit
   hacky, but it significantly improves code quality.
4. Some minor bug fixes as well.

With the fixes, using movw + movt to materialize GAs significantly outperforms the
load-from-constant-pool method. 186.crafty and 255.vortex improved > 20%, 254.gap
and 176.gcc ~10%.

llvm-svn: 123905
2011-01-20 08:34:58 +00:00
Andrew Trick 2cd1f0beb6 Selection DAG scheduler register pressure heuristic fixes.
Added a check for already live regs before claiming HighRegPressure.
Fixed a few cases of checking the wrong number of successors.
Added some tracing until these heuristics are better understood.

llvm-svn: 123892
2011-01-20 06:21:59 +00:00
Jakob Stoklund Olesen 4060abb4b9 Check that a live range exists before shortening it. This fixes PR8989.
The live range may have been deleted earlier because of rematerialization.

llvm-svn: 123891
2011-01-20 06:20:02 +00:00
Jakob Stoklund Olesen 145755f1d6 Add hidden -verify-coalescing to run the machine code verifier before and after
register coalescing.

llvm-svn: 123890
2011-01-20 06:20:00 +00:00
Jakob Stoklund Olesen 5acd4a6453 Fix bug found by new clang warning.
llvm-svn: 123872
2011-01-20 02:43:19 +00:00
Eric Christopher b2139f655b Use only one API at a time.
llvm-svn: 123866
2011-01-20 01:29:23 +00:00
Eric Christopher bb14f65672 If we can, lower the multiply part of a umulo/smulo call with an invalid
type to a libcall, then split the result and perform the overflow check
normally.

Fixes the 32-bit parts of rdar://8622122 and rdar://8774702.

llvm-svn: 123864
2011-01-20 00:29:24 +00:00
Devang Patel 2d9e532a3a Fix debug info for merged global.
llvm-svn: 123862
2011-01-20 00:02:16 +00:00
Jakob Stoklund Olesen 79be8aecba Divert Hopfield network debug output. It is very noisy.
llvm-svn: 123859
2011-01-19 23:14:59 +00:00
Jakob Stoklund Olesen 509089f5b6 Don't accidentally leave small gaps in the live ranges when leaving the active
interval after an instruction. The leaveIntvAfter() method only adds liveness
from the instruction's boundary index to the inserted copy.

Ideally, SplitKit should be smarter about this, perhaps by combining useIntv()
and leaveIntvAfter() into one method that guarantees continuity.

llvm-svn: 123858
2011-01-19 23:14:56 +00:00
Devang Patel 8698f09dbd Fix register address expression. Patch by Ken Dyck.
llvm-svn: 123856
2011-01-19 23:04:47 +00:00
Jakob Stoklund Olesen 9fb04015ff Implement RAGreedy::splitAroundRegion and remove loop splitting.
Region splitting includes loop splitting as a subset, and it is more generic.
The splitting heuristics for variables that are live in more than one block are
now:

1. Try to create a region that covers multiple basic blocks.
2. Try to create a new live range for each block with multiple uses.
3. Spill.

Steps 2 and 3 are similar to what the standard spiller is doing.

llvm-svn: 123853
2011-01-19 22:11:48 +00:00
Jakob Stoklund Olesen 267f6c1ab2 Add RAGreedy methods for splitting live ranges around regions.
Analyze the live range's behavior entering and leaving basic blocks. Compute an
interference pattern for each allocation candidate, and use SpillPlacement to
find an optimal region where that register can be live.

This code is still not enabled.

llvm-svn: 123774
2011-01-18 21:13:27 +00:00
Jeffrey Yasskin 249fcd4499 Remove unused variables found by gcc-4.6's -Wunused-but-set-variable.
llvm-svn: 123707
2011-01-18 00:51:23 +00:00
Stuart Hastings 4fa832aab0 Remove checking that prevented overlapping CALLSEQ_START/CALLSEQ_END
ranges, add legalizer support for nested calls.  Necessary for ARM
byval support.  Radar 7662569.

llvm-svn: 123704
2011-01-18 00:09:27 +00:00
Benjamin Kramer 45d183ccf0 Fix an off-by-one error in ctpop combining.
llvm-svn: 123664
2011-01-17 18:00:28 +00:00
Benjamin Kramer 24c5184dca Add a DAGCombine to turn (ctpop x) u< 2 into (x & x-1) == 0.
This shaves off 4 popcounts from the hacked 186.crafty source.

This is enabled even when a native popcount instruction is available. The
combined code is one operation longer but it should be faster nevertheless.

llvm-svn: 123621
2011-01-17 12:04:57 +00:00
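A small C++ sketch of the equivalence used by the combine above (invented
names; __builtin_popcount stands in for a native ctpop):

  #include <cstdint>
  bool before(uint32_t x) { return __builtin_popcount(x) < 2; }  // (ctpop x) u< 2
  bool after(uint32_t x)  { return (x & (x - 1)) == 0; }         // zero or a power of two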