Commit Graph

733 Commits

Author SHA1 Message Date
James Molloy dd9137aa56 Revert r142530 at least temporarily while a discussion takes place on llvm-commits regarding exactly how much optsize should optimize for size over performance.
llvm-svn: 143023
2011-10-26 08:53:19 +00:00
Bill Wendling 1414bc5a14 Use a worklist to prevent the iterator from becoming invalidated because of the 'removeSuccessor' call. Noticed in a Release+Asserts+Check buildbot.
llvm-svn: 143018
2011-10-26 07:16:18 +00:00
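For context, the worklist idiom referenced here sidesteps iterator invalidation by snapshotting the elements before mutating the container. A minimal standalone sketch of the idea; the container, predicate, and names are illustrative stand-ins, not the actual MachineBasicBlock code:

```cpp
#include <algorithm>
#include <vector>

// Remove "dead" successors without iterating the container we are mutating.
void removeDeadSuccessors(std::vector<int> &Successors, bool (*IsDead)(int)) {
  // Snapshot into a worklist first; erasing from Successors while walking
  // it directly would invalidate the iterator.
  std::vector<int> Worklist(Successors.begin(), Successors.end());
  for (int S : Worklist)
    if (IsDead(S))
      Successors.erase(std::remove(Successors.begin(), Successors.end(), S),
                       Successors.end());
}
```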
Evan Cheng 043c9d3f7a Revert part of r142530. The patch potentially hurts performance especially
on Darwin platforms where -Os means optimize for size without hurting
performance.

llvm-svn: 143002
2011-10-26 01:17:44 +00:00
Eli Friedman a5e244c08d Don't crash on variable insertelement on ARM. PR10258.
llvm-svn: 142871
2011-10-24 23:08:52 +00:00
Dan Gohman 4ed1afa51d Change this overloaded use of Sched::Latency to be an overloaded
use of Sched::ILP instead, as Sched::Latency is going away.

llvm-svn: 142813
2011-10-24 17:55:11 +00:00
Bill Wendling 94e6643fce The different flavors of ARM have different valid subsets of registers. Check
that the set of callee-saved registers is correct for the specific platform.
<rdar://problem/10313708> & ctor_dtor_count & ctor_dtor_count-2

llvm-svn: 142706
2011-10-22 00:29:28 +00:00
Bill Wendling cf7bdf4438 Add missing operand. <rdar://problem/10313323>
llvm-svn: 142615
2011-10-20 20:37:11 +00:00
James Molloy 2d768fd379 Use literal pool loads instead of MOVW/MOVT for materializing global addresses when optimizing for size.
On spec/gcc, this caused a codesize improvement of ~1.9% for ARM mode and ~4.9% for Thumb(2) mode. This is
codesize including literal pools.

The pools themselves doubled in size for ARM mode and quintupled for Thumb mode, which suggests there
is still some redundancy in LLVM's use of constant pools that could be reduced by sharing entries.

Fixes PR11087.

llvm-svn: 142530
2011-10-19 14:11:07 +00:00
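A rough sketch of the size trade-off described above. The enum, function, and flag names are hypothetical and only illustrate the decision, not LLVM's actual global-address selection code:

```cpp
// Choosing how to materialize a global address on ARM.
enum class GlobalAddrStrategy { ConstantPoolLoad, MovwMovtPair };

GlobalAddrStrategy chooseStrategy(bool SupportsMovt, bool OptimizeForSize) {
  // MOVW/MOVT is two instructions but avoids a load; a literal-pool load is
  // one instruction plus a 4-byte pool entry, and pool entries can be shared
  // between references, which tends to win when optimizing for size.
  if (!SupportsMovt || OptimizeForSize)
    return GlobalAddrStrategy::ConstantPoolLoad;
  return GlobalAddrStrategy::MovwMovtPair;
}
```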
Bill Wendling 2977a15ab1 Make sure we emit the 'movw' and 'movt' only if it's supported. Otherwise, use a constant pool.
llvm-svn: 142485
2011-10-19 09:24:02 +00:00
Bill Wendling 7c1634556d Remove some dead code.
llvm-svn: 142484
2011-10-19 09:04:11 +00:00
Bill Wendling 94f60018e0 Emit the MOVT instruction only if the number of landing pads is > 64K.
llvm-svn: 142460
2011-10-18 23:19:55 +00:00
Bill Wendling 64e6bfc16c For Thumb mode, we need to use a constant pool if the value is too large to be
used with the CMP instruction.

llvm-svn: 142458
2011-10-18 23:11:05 +00:00
Bill Wendling 4969dcdef9 Use the integer compare when the value is small enough. Use the "move into a
register and then compare against that" method when it's too large. We have to
move the value into the register in the "movw, movt" pair of instructions.

llvm-svn: 142440
2011-10-18 22:52:20 +00:00
Bill Wendling 85833f71c6 Use the integer compare when the value is small enough. Use the "move into a
register and then compare against that" method when it's too large. We have to
move the value into the register in the "movw, movt" pair of instructions.

llvm-svn: 142437
2011-10-18 22:49:07 +00:00
Bill Wendling 973c817cde The value we're comparing against may be too large for the ARM CMP
instruction. Move the value into a register and then use that for the CMP.
<rdar://problem/10305266>

llvm-svn: 142431
2011-10-18 22:11:18 +00:00
Bill Wendling b2a703d352 The immediate may be too large for the CMP instruction. Move it into a register
and use that in the CMP.
<rdar://problem/10305266>

llvm-svn: 142429
2011-10-18 21:55:58 +00:00
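For reference, an ARM-mode CMP immediate must be expressible as an 8-bit value rotated right by an even amount, which is why larger constants have to be moved into a register first. A standalone, simplified sketch of that encoding check; the function name is illustrative and this is not LLVM's encoder:

```cpp
#include <cstdint>

// Returns true if Val fits ARM's "modified immediate" encoding:
// an 8-bit constant rotated right by an even amount within 32 bits.
bool isARMModifiedImm(uint32_t Val) {
  for (unsigned Rot = 0; Rot < 32; Rot += 2) {
    // Rotating Val left by Rot undoes a rotate-right of Rot in the encoding.
    uint32_t Undone = Rot == 0 ? Val : ((Val << Rot) | (Val >> (32 - Rot)));
    if ((Undone & ~0xFFu) == 0)
      return true;
  }
  return false;
}
```

When the check fails, the commits above materialize the value with a MOVW/MOVT pair (or a constant-pool load in Thumb mode) and compare against the register instead.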
Andrew Trick 88b2450adc Use ARM/t2PseudoInst class for ARM/Thumb2 special adds/subs patterns.
Clean up the patterns, fix comments, and avoid confusing both tools
and coders. Note that the special adds/subs SelectionDAG nodes no
longer have the dummy cc_out operand.

llvm-svn: 142397
2011-10-18 19:18:52 +00:00
Bob Wilson 93b0f7b319 Use isIntN and isUIntN to check for valid signed/unsigned numbers.
llvm-svn: 142395
2011-10-18 18:46:49 +00:00
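isIntN and isUIntN come from llvm/Support/MathExtras.h and report whether a value fits in an N-bit signed or unsigned integer. A small usage sketch, with the widths chosen here only for illustration:

```cpp
#include "llvm/Support/MathExtras.h"
#include <cassert>

void checkRanges() {
  // 16-bit unsigned range: 0 .. 65535 (e.g. a MOVW immediate).
  assert(llvm::isUIntN(16, 65535) && !llvm::isUIntN(16, 65536));
  // 8-bit signed range: -128 .. 127.
  assert(llvm::isIntN(8, -128) && !llvm::isIntN(8, 128));
}
```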
Andrew Trick 3f07c429b5 whitespace
llvm-svn: 142394
2011-10-18 18:40:53 +00:00
Bill Wendling 617075fcf6 A landing pad could have more than one predecessor. In that case, we want each
predecessor to remove the jump to it as well. Delay clearing the 'landing pad'
flag until after the jumps have been removed. (There is an implicit assumption
in several modules that an MBB which jumps to a landing pad has only two
successors.)
<rdar://problem/10304224>

llvm-svn: 142390
2011-10-18 18:30:49 +00:00
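A rough sketch of the pattern described above, assuming it is applied per landing pad: every predecessor drops its edge first, and only afterwards is the landing-pad flag cleared. This is illustrative only, not the actual patch:

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/MachineBasicBlock.h"

using namespace llvm;

void detachLandingPad(MachineBasicBlock *LP) {
  // Copy the predecessor list first; removeSuccessor mutates it.
  SmallVector<MachineBasicBlock *, 4> Preds(LP->pred_begin(), LP->pred_end());
  for (MachineBasicBlock *Pred : Preds)
    Pred->removeSuccessor(LP);
  // Only after all incoming edges are gone is it safe to treat LP
  // as an ordinary block.
  LP->setIsLandingPad(false);
}
```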
Bob Wilson 9258b76d8d Fix incorrect check for sign-extended constant BUILD_VECTOR.
<rdar://problem/10298332>

llvm-svn: 142371
2011-10-18 17:34:51 +00:00
Duncan Sands d278d35b13 Fix a bunch of unused variable warnings when doing a release
build with gcc-4.6.

llvm-svn: 142350
2011-10-18 12:44:00 +00:00
Bill Wendling 510fbcd440 Don't renumber the blocks here. This could cause problems later on if another
pass renumbers the blocks again.

llvm-svn: 142258
2011-10-17 21:32:56 +00:00
Bill Wendling f7f223f69e Add a call to EmitSjLjDispatchBlock.
Once the intrinsics are marked as having a custom inserter, it will call this
method to emit the dispatch table into the machine function.

llvm-svn: 142245
2011-10-17 20:37:20 +00:00
Bill Wendling 26d2780d07 Add comment explaining that the order of processing doesn't matter here.
llvm-svn: 142176
2011-10-17 05:25:09 +00:00
Nadav Rotem 097106b77a ARM cannot select a pattern for trunc-store v4i8; /ARM/vrev.ll fails when promoting elements.
llvm-svn: 142080
2011-10-15 20:03:12 +00:00
Bill Wendling 9c1019c6c7 Mark registers as DEAD because they're really just clobbers.
llvm-svn: 142027
2011-10-15 00:27:44 +00:00
Eli Friedman 74d1da5a05 Add missing correctness check to ARMTargetLowering::ReconstructShuffle. Fixes PR11129.
llvm-svn: 142022
2011-10-14 23:58:49 +00:00
Bill Wendling 9e0cd1ee17 Make sure that the register is in the register class before adding it as a machine op.
llvm-svn: 142021
2011-10-14 23:55:44 +00:00
Bill Wendling 6f3f9a391e Mark the invoke call instruction as implicitly defining the callee-saved registers.
The callee-saved registers cannot be live across an invoke call because the
control flow may continue along the exceptional edge. When this happens, all of
the callee-saved registers are no longer valid.

llvm-svn: 142018
2011-10-14 23:34:37 +00:00
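A rough sketch of what "implicitly defining the callee-saved registers" can look like when building the call instruction. The MIB and the null-terminated register list are assumed to come from the surrounding lowering code; this is not the exact change:

```cpp
#include "llvm/CodeGen/MachineInstrBuilder.h"

using namespace llvm;

// Mark each callee-saved register as implicitly defined (clobbered) by the
// call, so the register allocator will not keep values live in them across
// the exceptional edge.
void addCalleeSavedClobbers(MachineInstrBuilder &MIB, const unsigned *CSRegs) {
  for (unsigned i = 0; CSRegs[i]; ++i)
    MIB.addReg(CSRegs[i], RegState::ImplicitDefine);
}
```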
Eli Friedman aa6ec39056 Simplify and avoid undefined shift. Based on patch by Ahmed Charles.
llvm-svn: 141903
2011-10-13 22:40:23 +00:00
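The undefined shift in question is the usual one: shifting a value by at least its bit width. A small standalone sketch of the guarded form; the function name is illustrative:

```cpp
#include <cstdint>

// Shifting a 64-bit value by 64 or more is undefined behaviour in C++,
// so guard the width before building a mask.
uint64_t lowBitsMask(unsigned NumBits) {
  return NumBits >= 64 ? ~0ULL : ((1ULL << NumBits) - 1);
}
```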
Bill Wendling a7d697e4a6 Reapply r141365 now that PR11107 is fixed.
llvm-svn: 141591
2011-10-10 22:59:55 +00:00
Bill Wendling 47aac51043 Revert r141365. It was causing MultiSource/Benchmarks/MiBench/consumer-lame to
hang, and possibly SPEC/CINT2006/464_h264ref.

llvm-svn: 141560
2011-10-10 18:27:30 +00:00
Bill Wendling 883ec97115 Take all of the invoke basic blocks and make the dispatch basic block their new
successor. Remove the old landing pad from their successor list, because it's
now the successor of the dispatch block. Now that the landing pad blocks are no
longer the destination of invokes, we can mark them as normal basic blocks
instead of landing pads.

This more closely resembles what the CFG is actually doing.

llvm-svn: 141436
2011-10-07 23:18:02 +00:00
Bill Wendling f9f5e455d4 Take the code that was emitted for the llvm.eh.dispatch.setup intrinsic and emit
it with the new SjLj emitter stuff. This way there's no need to emit that
kind-of-hacky intrinsic.

llvm-svn: 141419
2011-10-07 22:08:37 +00:00
Bill Wendling 7ecfbd90ef Thread the chain through the eh.sjlj.setjmp intrinsic, like it's documented to
do. This will be useful later on with the new SJLJ stuff.

llvm-svn: 141416
2011-10-07 21:25:38 +00:00
Bob Wilson 8decdc472f Reenable tail calls for iOS 5.0 and later.
llvm-svn: 141370
2011-10-07 17:17:49 +00:00
Bob Wilson bc1589945d Reenable use of divmod compiler_rt functions for iOS 5.0 and later.
llvm-svn: 141368
2011-10-07 16:59:21 +00:00
Anton Korobeynikov 318d6bae80 Peephole optimization for ABS on ARM.
Patch by Ana Pazos!

llvm-svn: 141365
2011-10-07 16:15:08 +00:00
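For context, ARM ABS peepholes typically target the branchless sign-mask identity; whether this exact form is what the patch matches is an assumption:

```cpp
#include <cstdint>

// abs(x) via sign mask: Mask is 0 when x >= 0 and all-ones when x < 0,
// so (x ^ Mask) - Mask flips the bits and adds one only for negative x.
// Relies on >> being an arithmetic shift for signed values, as it is on
// common compilers.
int32_t branchlessAbs(int32_t X) {
  int32_t Mask = X >> 31;   // 0 or -1
  return (X ^ Mask) - Mask;
}
```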
Bill Wendling 8d50ea0f82 Use the correct vreg here.
llvm-svn: 141342
2011-10-06 23:41:14 +00:00
Bill Wendling b3d4678877 Generate the dispatch code for a 'thumb' function. This is very similar to the
others: it takes the call site value, determines whether it's a valid value, and
then jumps to the correct call site via a jump table.

llvm-svn: 141341
2011-10-06 23:37:36 +00:00
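A standalone sketch of the dispatch shape described here: read the call-site value, reject anything outside the jump table, and branch through the table. All names are placeholders:

```cpp
#include <cstdlib>

static void handlerA() {}
static void handlerB() {}

void dispatch(unsigned CallSite) {
  static void (*const JumpTable[])() = {handlerA, handlerB};
  const unsigned NumEntries = sizeof(JumpTable) / sizeof(JumpTable[0]);
  if (CallSite >= NumEntries)
    std::abort();           // stands in for the generated 'trap' block
  JumpTable[CallSite]();    // indirect jump via the jump table
}
```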
Bill Wendling 5626c66a89 Generate the dispatch table for ARM mode.
llvm-svn: 141327
2011-10-06 22:53:00 +00:00
Bill Wendling 030b58e5c9 Refactor some of the code that sets up the entry block for SjLj EH. No functionality change.
llvm-svn: 141323
2011-10-06 22:18:16 +00:00
Bill Wendling 31d973cde6 Use a thumb ORR instead of thumb2 ORR when in thumb-only mode. (Picky! Picky!)
Place the immediate to OR into a register so that it works.

llvm-svn: 141319
2011-10-06 21:51:21 +00:00
Bill Wendling 362c1b01cc * Set the low bit of the return address when we are in thumb mode.
* Some code cleanup.

llvm-svn: 141317
2011-10-06 21:29:56 +00:00
Bill Wendling 6134655f08 Add the MBBs before inserting the instructions. Doing it afterwards could lead
to an infinite loop because of the def-use chains.

Also use a frame load instead of store for the LD instruction.

llvm-svn: 141263
2011-10-06 00:53:33 +00:00
Bill Wendling f793e7ed5c Get the proper call site numbers for the landing pads. Also remove a magic
number (18) for the proper addressing mode.

llvm-svn: 141245
2011-10-05 23:28:57 +00:00
Bill Wendling 324be98a3c Look at the number of entries in the jump table and jump to a 'trap' block if
the value exceeds that number.

llvm-svn: 141143
2011-10-05 00:39:32 +00:00
Bill Wendling 202803e39c Checkpoint for SJLJ EH code.
This is a first pass at generating the jump table for the sjlj dispatch. It
currently generates something plausible, but hasn't been tested thoroughly.

llvm-svn: 141140
2011-10-05 00:02:33 +00:00
Bill Wendling 1eab54f8ba Use the PC label ID rather than '1'. Add support for thumb-2, because I heard that some people use it.
llvm-svn: 141042
2011-10-03 22:44:15 +00:00