Commit Graph

916 Commits

Author SHA1 Message Date
Dan Gohman dd550305e6 Give this test an explicit register allocator, so that it can work even if
the default register allocator is changed.

llvm-svn: 130883
2011-05-04 23:14:02 +00:00
Bill Wendling 2a40131f6b SjLj EH could produce a machine basic block that legitimately has more than one
landing pad as its successor.

SjLj exception handling jumps to the correct landing pad via a switch statement
that's generated right before code-gen. Loosen the constraint in the machine
instruction verifier to allow for this. Note, this isn't the most rigorous check
since we cannot determine where that switch statement came from. But it's
marginally better than turning this check off when SjLj exceptions are used.
<rdar://problem/9187612>

llvm-svn: 130881
2011-05-04 22:54:05 +00:00
Galina Kistanova e53ae508ec This test fails on ARM. The test shouldn't explicitly specify alignment (and alignment 4 is wrong), and it requires hard-float.
llvm-svn: 130875
2011-05-04 21:57:44 +00:00
Devang Patel 39ecf816c5 Do not emit location expression size twice.
llvm-svn: 130854
2011-05-04 19:00:57 +00:00
Jakob Stoklund Olesen 51b35f7bb1 Fix a bunch of ARM tests to be register allocation independent.
llvm-svn: 130800
2011-05-03 22:31:21 +00:00
Evan Cheng 93b5cdc5ab Make the test less likely to fail with minor changes.
llvm-svn: 130778
2011-05-03 19:09:32 +00:00
Bob Wilson c5242b0e78 Remove test for iOS divmod function, since that is disabled for now.
llvm-svn: 130769
2011-05-03 17:54:49 +00:00
Bruno Cardoso Lopes 168c9005b5 Add a few ARM coprocessor intrinsics. Testcases included
llvm-svn: 130763
2011-05-03 17:29:22 +00:00
Dan Gohman 6136e94897 Add an unfolded offset field to LSR's Formula record. This is used to
model constants which can be added to base registers via add-immediate
instructions which don't require an additional register to materialize
the immediate.

llvm-svn: 130743
2011-05-03 00:46:49 +00:00
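
A hedged sketch of the kind of code this is aimed at (the function and the
offset are assumptions for illustration, not taken from the commit): the
constant byte offset below is typically too large to fold into the load/store
addressing mode itself, but it can be applied to the base register with a
single add-immediate, so no extra register is needed to materialize the
constant.

    /* Illustration only: the +2000 element (8000 byte) offset is a candidate
     * for the new unfolded-offset modeling, since it can be added to the base
     * pointer with an add-immediate rather than loaded into its own register. */
    void add_offset(int *a, int n) {
        for (int i = 0; i < n; ++i)
            a[i + 2000] += a[i];
    }
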
Jakob Stoklund Olesen edfabc9aad Weekly fix of register allocation dependent unit tests.
llvm-svn: 130567
2011-04-30 01:37:52 +00:00
Eli Friedman 4105ed1523 Make FastEmit_ri_ try a bit harder to succeed for supported operations; FastEmit_i can fail for non-Thumb2 ARM. Makes ARMSimplifyAddress work correctly, and reduces the number of fast-isel bailouts on non-Thumb ARM.
llvm-svn: 130560
2011-04-29 23:34:52 +00:00
Eli Friedman 328bad02fa Switch to ImmLeaf (which can be used by FastISel) for a few more common ARM/Thumb2 patterns.
llvm-svn: 130552
2011-04-29 22:48:03 +00:00
Eli Friedman dd937843d3 Fix run-line, again. :(
llvm-svn: 130540
2011-04-29 21:33:03 +00:00
Eli Friedman 86caced370 Re-committing r130454, which does not in fact break anything.
Fix a rather obscure crash caused by ARM fast-isel generating code which redefines a register.
rdar://problem/9338332 .

llvm-svn: 130539
2011-04-29 21:22:56 +00:00
Eric Christopher 8d46b47787 Add trunc->branch support, this won't help with clang's i8->i1 truncations
for bools, but is a start.

llvm-svn: 130534
2011-04-29 20:02:39 +00:00
Eli Friedman 517728b1ae Revert r130454; apparently this doesn't actually work.
llvm-svn: 130462
2011-04-28 23:55:14 +00:00
Eli Friedman 37b9ede969 Fix runline.
llvm-svn: 130455
2011-04-28 23:12:24 +00:00
Eli Friedman e4ecd42926 Fix a rather obscure crash caused by ARM fast-isel generating code which redefines a register.
rdar://problem/9338332 .

llvm-svn: 130454
2011-04-28 23:03:25 +00:00
Devang Patel 3e021533cd Teach the dwarf writer to handle complex address expressions for .debug_loc entries.
This fixes debug info for variables in clang-generated blocks.
Radar 9279956.

llvm-svn: 130373
2011-04-28 02:22:40 +00:00
Evan Cheng 9808d31b9e The if-converter was being too cute. It looked for root BBs (which don't have
successors) and used an inverse depth-first search to traverse the BBs. However,
that doesn't work when the CFG has infinite loops. Simply doing a linear
traversal of all BBs works just fine.

rdar://9344645

llvm-svn: 130324
2011-04-27 19:32:43 +00:00
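
A minimal illustration of the failure mode described above (the function is an
assumption for the example, not from the commit):

    /* A function whose CFG has no exit block.  An inverse depth-first search
     * seeded only from "root" BBs (blocks with no successors) never reaches
     * the loop below, so a traversal built that way silently skips it; a
     * plain linear walk over all BBs does not have this problem. */
    void spin(volatile int *flag) {
        for (;;) {
            if (*flag)     /* a diamond inside an infinite loop */
                *flag = 0;
        }
    }
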
Jakob Stoklund Olesen 71d3b895ba Also add <imp-def> operands for defined and dead super-registers when rewriting.
We cannot rely on the <imp-def> operands added by LiveIntervals in all cases as
demonstrated by the test case.

llvm-svn: 130313
2011-04-27 17:42:31 +00:00
Evan Cheng 1355bbdd11 Be careful about scheduling nodes above previous calls. It increases the use of
callee-saved registers and introduces copies. Only allow it if scheduling a node
above calls would end up lessening register pressure.

Call operands also carry additional ABI restrictions for register allocation, so
be extra careful about hoisting them above calls.

rdar://9329627

llvm-svn: 130245
2011-04-26 21:31:35 +00:00
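
A hedged sketch of the register-pressure effect mentioned above (the functions
are invented for illustration):

    int f(void);

    int g(int a) {
        int c = f();
        int tmp = a * 7;   /* scheduling this multiply above the call would keep
                              both 'a' and 'tmp' live across it instead of just
                              'a', costing an extra callee-saved register and
                              the copies that come with it */
        return c + tmp + a;
    }
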
Evan Cheng dbb86b8108 This test should be in MC. It breaks with changes to scheduling / register allocation so it's being removed.
llvm-svn: 130243
2011-04-26 21:09:04 +00:00
Chris Lattner 189ca1498f don't emit the symbol name twice for local bss and common
symbols.  For example, don't emit:
        .comm   _i,4,2                  ## @i
                                        ## @i

instead emit:
        .comm   _i,4,2                  ## @i

llvm-svn: 130192
2011-04-26 06:14:13 +00:00
Eric Christopher 238a21f2d5 Make this test disable fast isel as it's not needed.
llvm-svn: 130165
2011-04-25 22:39:46 +00:00
Benjamin Kramer ba446cc12a Make tests more useful.
lit needs a linter ...

llvm-svn: 130126
2011-04-25 10:12:01 +00:00
Andrew Trick 0ed5778a1e Thumb2 and ARM add/subtract with carry fixes.
Fixes Thumb2 ADCS and SBCS lowering: <rdar://problem/9275821>.
t2ADCS/t2SBCS are now pseudo instructions, consistent with ARM, so the
assembly printer correctly prints the 's' suffix.

Fixes Thumb2 adde -> SBC matching to check for live/dead carry flags.

Fixes the internal ARM machine opcode mnemonic for ADCS/SBCS.
Fixes ARM SBC lowering to check for live carry (potential bug).

llvm-svn: 130048
2011-04-23 03:55:32 +00:00
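
For context, a small sketch of where the carry flag matters on 32-bit ARM (the
function is an assumption for illustration):

    /* A 64-bit add on a 32-bit ARM target is the classic consumer of the carry
     * flag: the low words are added with a flag-setting add (adds) and the high
     * words with add-with-carry (adc); subtraction uses the analogous subs/sbc
     * pair.  Whether the carry produced this way is live or dead is exactly
     * what the adde -> SBC matching above has to check. */
    unsigned long long add64(unsigned long long a, unsigned long long b) {
        return a + b;
    }
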
Devang Patel 94ad6ac13c Fix DWARF description of Q registers.
llvm-svn: 129952
2011-04-21 23:22:35 +00:00
Devang Patel 3712c14be9 Fix DWARF description of S registers.
llvm-svn: 129947
2011-04-21 22:48:26 +00:00
Devang Patel be22131c28 Test case for r129922
llvm-svn: 129934
2011-04-21 20:16:43 +00:00
Evan Cheng 5f1ba4cd2d Remove -use-divmod-libcall. Let targets opt in when they are available.
llvm-svn: 129884
2011-04-20 22:20:12 +00:00
Eric Christopher bcaedb5ce0 Rewrite the expander for umulo/smulo to remember to sign extend the input
manually and pass all (now) 4 arguments to the mul libcall. Add a new
ExpandLibCall for just this (copied gratuitously from type legalization).

Fixes rdar://9292577

llvm-svn: 129842
2011-04-20 01:19:45 +00:00
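
A rough sketch of the semantics the expander has to preserve (an illustration
of an overflow-checked multiply, not the expander itself; __int128 support is
assumed):

    #include <stdint.h>

    /* Overflow-checked signed 64-bit multiply, written with a widening
     * multiply for clarity.  The operands are sign extended into the wider
     * type before the multiply; forgetting that step (the bug fixed above)
     * breaks the overflow check for negative inputs. */
    int smul64_overflow(int64_t a, int64_t b, int64_t *prod) {
        __int128 wide = (__int128)a * (__int128)b;   /* sign-extending casts */
        *prod = (int64_t)wide;
        return wide != (__int128)(int64_t)wide;      /* nonzero on overflow */
    }
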
Daniel Dunbar 4a7783b0c2 CodeGen: Eliminate a use of getDarwinMajorNumber().
- There is a minor semantic change here (evidenced by the test change) for
   Darwin triples that have no version component. I debated changing the default
   behavior of isOSVersionLT, but decided it made more sense for triples to be
   explicit.

llvm-svn: 129802
2011-04-19 20:32:39 +00:00
Bob Wilson 0858c3aaed This patch combines several changes from Evan Cheng for rdar://8659675.
Making use of VFP / NEON floating point multiply-accumulate / subtraction is
difficult on current ARM implementations for a few reasons.
1. Even though a single vmla has a latency that is one cycle shorter than a pair
   of vmul + vadd, a RAW hazard during the first few cycles (4? on Cortex-A8) can
   cause an additional pipeline stall. So it's frequently better to simply codegen
   vmul + vadd.
2. A vmla followed by a vmul, vmadd, or vsub causes the second fp instruction to
   stall for 4 cycles. We need to schedule them apart.
3. A vmla followed by a vmla is a special case. Obviously, issuing back-to-back
   RAW vmla + vmla is very bad. But this isn't ideal either:
     vmul
     vadd
     vmla
   Instead, we want to expand the second vmla:
     vmla
     vmul
     vadd
   Even with the 4 cycle vmul stall, the second sequence is still 2 cycles
   faster.

Up to now, isel simply avoided codegen'ing fp vmla / vmls. This works well enough,
but it isn't the optimal solution. This patch attempts to make it possible to
use vmla / vmls in cases where it is profitable.

A. Add missing isel predicates which cause vmla to be codegen'ed.
B. Make sure the fmul in (fadd (fmul)) has a single use. We don't want to
   compute a fmul and a fmla.
C. Add additional isel checks for vmla, avoid cases where vmla is feeding into
   fp instructions (except for the #3 exceptional case).
D. Add ARM hazard recognizer to model the vmla / vmls hazards.
E. Add a special pre-regalloc case to expand vmla / vmls when it's likely the
   vmla / vmls will trigger one of the special hazards.

Enable these fp vmlx codegen changes for Cortex-A9.

llvm-svn: 129775
2011-04-19 18:11:57 +00:00
Bob Wilson d04a83f8f2 Add -mcpu=cortex-a9-mp. It's cortex-a9 with MP extension. rdar://8648637.
llvm-svn: 129774
2011-04-19 18:11:52 +00:00
Bob Wilson a2881ee8a4 Avoid some 16-bit 's' instructions which partially update CPSR
(and add a false dependency) when they aren't dependent on the last CPSR-defining
instruction. rdar://8928208

llvm-svn: 129773
2011-04-19 18:11:49 +00:00
Bob Wilson df612ba006 Avoid write-after-write issue hazards for Cortex-A9.
Add an avoidWriteAfterWrite() target hook to identify register classes that
suffer from write-after-write hazards. For those register classes, try to avoid
writing the same register in two consecutive instructions.

This is currently disabled by default.  We should not spill to avoid hazards!
The command line flag -avoid-waw-hazard can be used to enable waw avoidance.

llvm-svn: 129772
2011-04-19 18:11:45 +00:00
Jakob Stoklund Olesen fb1249548f Tighten test case a bit.
Ideally, we would match an S-register to its containing D-register, but that
requires arithmetic (divide by 2).

llvm-svn: 129756
2011-04-19 06:14:45 +00:00
Jakob Stoklund Olesen bf78618db6 Make tests register allocation independent again.
llvm-svn: 129739
2011-04-19 00:14:43 +00:00
Evan Cheng 4079133796 Do not lose mem_operands while lowering VLD / VST intrinsics.
llvm-svn: 129738
2011-04-19 00:04:03 +00:00
Eric Christopher c37aa0b26a Fix, in a different way, a bug where we were counting the alias sets as
completely used registers for fast allocation. This has us updating used
registers only when we're using that exact register.

Fixes rdar://9207598

llvm-svn: 129711
2011-04-18 19:26:25 +00:00
Evan Cheng b14ce09fca Fix divmod libcall lowering. Convert to {S|U}DIVREM first and then expand the node to a libcall. rdar://9280991
llvm-svn: 129633
2011-04-16 03:08:26 +00:00
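
A small sketch of why forming a combined {S|U}DIVREM node first matters (the
function is illustrative):

    /* Quotient and remainder of the same operands: once the two operations are
     * merged into a single SDIVREM node, the libcall expansion can emit one
     * combined runtime call (e.g. __aeabi_idivmod on ARM EABI) instead of
     * separate divide and modulo calls. */
    void divmod(int a, int b, int *q, int *r) {
        *q = a / b;
        *r = a % b;
    }
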
Cameron Zwarich 9c65e4d69c Add ORR and EOR to the CMP peephole optimizer. It's hard to get isel to generate
a case involving EOR, so I only added a test for ORR.

llvm-svn: 129610
2011-04-15 21:24:38 +00:00
Cameron Zwarich 0829b3065a The AND instruction leaves the V flag unmodified, so it falls victim to the same
problem as all of the other instructions we fold with CMPs.

llvm-svn: 129602
2011-04-15 20:45:00 +00:00
Cameron Zwarich 93eae1571c Add missing register forms of instructions to the ARM CMP-folding code. This
fixes <rdar://problem/9287901>.

llvm-svn: 129599
2011-04-15 20:28:28 +00:00
Evan Cheng 12bb05b75b Fix another fcopysign lowering bug. If src is f64 and destination is f32, don't
forget to right shift the source by 32 first. rdar://9287902

llvm-svn: 129556
2011-04-15 01:31:00 +00:00
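
A hedged sketch of the f64-source case described above (bit-level illustration
only; the helper name is invented):

    #include <stdint.h>
    #include <string.h>

    /* copysign with an f32 result and an f64 sign source: the sign bit of a
     * double is bit 63, so the 64-bit pattern must be shifted right by 32 first
     * to bring the sign into the low word, which is the step the fix above adds. */
    static float copysign_f32_from_f64(float mag, double sign_src) {
        uint64_t s;
        uint32_t m;
        memcpy(&s, &sign_src, sizeof s);
        memcpy(&m, &mag, sizeof m);
        uint32_t sign = (uint32_t)(s >> 32) & 0x80000000u;  /* bit 63 -> bit 31 */
        m = (m & 0x7fffffffu) | sign;
        memcpy(&mag, &m, sizeof m);
        return mag;
    }
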
Cameron Zwarich 415b5e8341 Fix a typo in an ARM-specific DAG combine. This fixes <rdar://problem/9278274>.
llvm-svn: 129468
2011-04-13 21:01:19 +00:00
Cameron Zwarich 70be27e913 Fix an obvious problem with an alignment computation. AsmPrinter actually does
the max itself, so it is not easy to write a test case for this, but I added a
test case that would fail if the code in AsmPrinter were removed.

llvm-svn: 129432
2011-04-13 09:02:43 +00:00
Cameron Zwarich cdf59f7016 If a global variable has a specified alignment that is less than the preferred
alignment for its type, use the minimum of the specified alignment and the ABI
alignment. This fixes <rdar://problem/9275290>.

llvm-svn: 129428
2011-04-13 06:03:16 +00:00
Andrew Trick b53a00d2cb Recommit r129383. PreRA scheduler heuristic fixes: VRegCycle, TokenFactor latency.
Additional fixes:
Do something reasonable for subtargets with generic
itineraries by handling node latency the same as for an empty
itinerary. Now nodes default to unit latency unless an itinerary
explicitly specifies a zero cycle stage or it is a TokenFactor chain.

Original fixes:
UnitsSharePred was a source of randomness in the scheduler: node
priority depended on the queue data structure. I rewrote the recent
VRegCycle heuristics to completely replace the old heuristic without
any randomness. To make the node latency adjustments work, I also
needed to do something a little more reasonable with TokenFactor. I
gave it zero latency to its consumers and always schedule it as low as
possible.

llvm-svn: 129421
2011-04-13 00:38:32 +00:00