Commit Graph

581 Commits

Author SHA1 Message Date
Evan Cheng 3ae2b79aa3 Re-implement r122936 with proper target hooks. Now getMaxStoresPerMemcpy
etc. take an OptSize option. If OptSize is true, they return
the inline limit for functions with the OptSize attribute.

llvm-svn: 122952
2011-01-06 06:52:41 +00:00
Bob Wilson 36be00ceb3 Radar 8803471: Fix expansion of ARM BCCi64 pseudo instructions.
If the basic block containing the BCCi64 (or BCCZi64) instruction ends with
an unconditional branch, that branch needs to be deleted before appending
the expansion of the BCCi64 to the end of the block.

llvm-svn: 122521
2010-12-23 22:45:49 +00:00
Bob Wilson 1a20c2aedd Add ARM-specific DAG combining to cast i64 vector element load/stores to f64.
Type legalization splits up i64 values into pairs of i32 values, which leads
to poor quality code when inserting or extracting i64 vector elements.
If the vector element is loaded or stored, it can be treated as an f64 value
and loaded or stored directly from a VPR register.  Use the pre-legalization
DAG combiner to cast those vector elements to f64 types so that the type
legalizer won't mess them up.  Radar 8755338.

llvm-svn: 122319
2010-12-21 06:43:19 +00:00
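A note on the combine above: the f64 treatment is a pure reinterpretation of the lane's bits (a bit_convert), not a numeric conversion. A minimal C++ sketch of that reinterpretation, with made-up helper names:

#include <cstdint>
#include <cstring>

// Reinterpret an i64 lane as an f64 (bit_convert semantics): the 64 bits are
// unchanged, they are just viewed as a double so the value can live in a
// D register instead of being split into a pair of i32 GPRs.
static double asF64(uint64_t Bits) {
  double D;
  std::memcpy(&D, &Bits, sizeof D);
  return D;
}

static uint64_t asI64(double D) {
  uint64_t Bits;
  std::memcpy(&Bits, &D, sizeof Bits);
  return Bits;
}

int main() {
  uint64_t Lane = 0x4030000000000000ULL;      // an ordinary (non-NaN) bit pattern
  return asI64(asF64(Lane)) == Lane ? 0 : 1;  // round-trips exactly
}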
Chris Lattner 3e5fbd74ed rename MVT::Flag to MVT::Glue. "Flag" is a terrible name for
something that just glues two nodes together, even if it is
sometimes used for flags.

llvm-svn: 122310
2010-12-21 02:38:05 +00:00
Bob Wilson f268d0303b Add some missing entries in ARMTargetLowering::getTargetNodeName.
llvm-svn: 122111
2010-12-18 00:04:26 +00:00
Eric Christopher 347f4c32e8 Don't handle -arm-long-calls in fast isel for now.
llvm-svn: 121919
2010-12-15 23:47:29 +00:00
Evan Cheng c177813755 (bfi A, (and B, C1), C2) -> (bfi A, B, C2) iff C1 & C2 == C1. rdar://8458663
llvm-svn: 121746
2010-12-14 03:22:07 +00:00
Evan Cheng 2e51bb4ff0 Generalize BFI isel lowering a bit.
llvm-svn: 121714
2010-12-13 20:32:54 +00:00
Evan Cheng 3434575704 (or (and (shl A, #shamt), mask), B) => ARMbfi B, A, ~mask where lsb(mask) == #shamt. rdar://8752056
llvm-svn: 121606
2010-12-11 04:11:38 +00:00
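A small C++ sketch of the bit-level identity behind this combine. The bfi() helper models the insert semantics (keep the bits of Base selected by InvMask, drop the low bits of Val into the cleared field); the identity only holds when B's bits inside the mask are zero, which this example assumes and which the combiner has to establish by other means:

#include <cassert>
#include <cstdint>

// Model of ARMbfi(Base, Val, InvMask): preserve Base where InvMask is set,
// insert the low bits of Val into the contiguous cleared field.
static uint32_t bfi(uint32_t Base, uint32_t Val, uint32_t InvMask) {
  uint32_t Mask = ~InvMask;
  unsigned Lsb = __builtin_ctz(Mask);   // GCC/Clang builtin, for the sketch
  return (Base & InvMask) | ((Val << Lsb) & Mask);
}

int main() {
  const uint32_t Mask = 0x00000FF0;  // contiguous field, lsb(mask) == 4
  const unsigned Shamt = 4;          // matches lsb(mask)
  uint32_t A = 0xAB;
  uint32_t B = 0xF000000F;           // B's bits inside the mask are zero

  uint32_t ViaOr  = ((A << Shamt) & Mask) | B;   // the original pattern
  uint32_t ViaBfi = bfi(B, A, ~Mask);            // the combined form
  assert(ViaOr == ViaBfi);
  return 0;
}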
Jay Foad 583abbc4df PR5207: Change APInt methods trunc(), sext(), zext(), sextOrTrunc() and
zextOrTrunc(), and APSInt methods extend(), extOrTrunc() and new method
trunc(), to be const and to return a new value instead of modifying the
object in place.

llvm-svn: 121120
2010-12-07 08:25:19 +00:00
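A minimal C++ sketch of what the API change means for callers, using the post-change signatures described above (the widenToI64 helper is invented for illustration):

#include "llvm/ADT/APInt.h"
using namespace llvm;

// Post-change: zext()/sext()/trunc() are const and return a new APInt.
APInt widenToI64(const APInt &V) {
  return V.zext(64);   // V itself is left untouched
}

// Pre-change the same methods mutated the object in place, so the caller
// had to copy first, roughly:
//   APInt Tmp = V;
//   Tmp.zext(64);     // old in-place style
//   return Tmp;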
Evan Cheng 419ea286ee Fix and re-enable tail call optimization of expanded libcalls.
llvm-svn: 120622
2010-12-01 22:59:46 +00:00
Evan Cheng d4b0873c06 Enable sibling call optimization of libcalls which are expanded during
legalization time. Since at legalization time there is no mapping from
SDNode back to the corresponding LLVM instruction and the return
SDNode is target specific, this requires a target hook to check for
eligibility. Only x86 and ARM support this form of sibcall optimization
right now.
rdar://8707777

llvm-svn: 120501
2010-11-30 23:55:39 +00:00
Bob Wilson 2d790df105 Add support for NEON VLD2-dup instructions.
llvm-svn: 120236
2010-11-28 06:51:26 +00:00
Bob Wilson 62a6f7eda6 Add entry in getTargetNodeName() for ARMISD::VBICIMM.
llvm-svn: 120233
2010-11-28 06:51:11 +00:00
Bob Wilson d7d2cf7842 Recognize sign/zero-extended constant BUILD_VECTORs for VMULL operations.
We need to check if the individual vector elements are sign/zero-extended
values.  For now this only handles constant values.  Radar 8687140.

llvm-svn: 120034
2010-11-23 19:38:38 +00:00
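A small C++ sketch of the kind of element check described above, for the constant case: every element must actually be a narrower value extended into the wide element type. The helper names and the i16-from-i32 width choice are illustrative only:

#include <cstdint>

// Constant BUILD_VECTOR elements can feed a VMULL pattern only if each one
// is really a narrower value that was zero-extended (top half clear) or
// sign-extended (top half a copy of the sign bit) into the wide type.
static bool isZExtFromI16(uint32_t Elt) { return (Elt >> 16) == 0; }
static bool isSExtFromI16(uint32_t Elt) {
  return (int32_t)Elt == (int32_t)(int16_t)Elt;  // top half mirrors bit 15
}

static bool allElementsExtFromI16(const uint32_t *Elts, unsigned N, bool Signed) {
  for (unsigned I = 0; I != N; ++I)
    if (!(Signed ? isSExtFromI16(Elts[I]) : isZExtFromI16(Elts[I])))
      return false;
  return true;
}

int main() {
  uint32_t Ok[2]  = {0x0000FFFF, 0x00001234};
  uint32_t Bad[2] = {0x00010000, 0x00001234};
  return (allElementsExtFromI16(Ok, 2, /*Signed=*/false) &&
          !allElementsExtFromI16(Bad, 2, /*Signed=*/false)) ? 0 : 1;
}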
Wesley Peck 527da1b6e2 Renaming ISD::BIT_CONVERT to ISD::BITCAST to better reflect the LLVM IR concept.
llvm-svn: 119990
2010-11-23 03:31:01 +00:00
Evan Cheng 2debc86138 These instructions are Thumb2 only.
llvm-svn: 119793
2010-11-19 06:28:11 +00:00
Tanya Lattner cd68095650 Fix bug in DAGCombiner for ARM that was trying to do a ShiftCombine on illegal types (vector should be split first).
Added test case.

llvm-svn: 119749
2010-11-18 22:06:46 +00:00
Anton Korobeynikov 0eecf5d201 Move hasFP() and a few related hooks to TargetFrameInfo.
llvm-svn: 119740
2010-11-18 21:19:35 +00:00
Bob Wilson 7d47133ff7 Split up ARM LowerShift function.
This function was being called from two different places for completely
unrelated reasons.  During type legalization, it was called to expand 64-bit
shift operations.  During operation legalization, it was called to handle
Neon vector shifts.  The vector shift code was not written to check for
illegal types, since it was assumed to be only called after type legalization.
Fixed this by splitting off the 64-bit shift expansion into a separate
function.  I don't have a particular testcase for this; I just noticed it
by inspection.

llvm-svn: 119738
2010-11-18 21:16:28 +00:00
Nate Begeman ca52411955 Fix an issue where we tried to turn a v2f32 build_vector into a v4i32 build vector with 2 elts
llvm-svn: 118720
2010-11-10 21:35:41 +00:00
Bob Wilson 193722ebc8 Do not use MEMBARRIER_MCR for any Thumb code.
It is only supported for ARM code.  Normally Thumb2 code would use DMB instead,
but depending on how the compiler is invoked (e.g., -mattr=-db) that might be
disabled.  This prevents a "cannot select MEMBARRIER_MCR" error in that
situation.  Radar 8644195

llvm-svn: 118642
2010-11-09 22:50:44 +00:00
Jim Grosbach a942ad4222 Change the ARMConstantPoolValue modifier string to an enumeration. This will
help in MC'izing the references that use them.

llvm-svn: 118633
2010-11-09 21:36:17 +00:00
Owen Anderson c7baee31ad Add support for ARM's specialized vector-compare-against-zero instructions.
llvm-svn: 118453
2010-11-08 23:21:22 +00:00
Owen Anderson a4076924d1 Disallow certain NEON modified-immediate forms when generating vorr or vbic.
llvm-svn: 118300
2010-11-05 21:57:54 +00:00
Owen Anderson 30c4892ea5 Add codegen and encoding support for the immediate form of vbic.
llvm-svn: 118291
2010-11-05 19:27:46 +00:00
Evan Cheng 21acf9fb38 Fix @llvm.prefetch isel: select between pld / pldw using the first immediate operand, rw. There is currently no intrinsic that maps to pli.
llvm-svn: 118237
2010-11-04 05:19:35 +00:00
Owen Anderson bc9b31c493 Convert VORRIMM to be produced via early target-specific DAG combining, rather than legalization.
This is both the conceptually correct place for it, as well as allowing it to be more aggressive.

llvm-svn: 118204
2010-11-03 23:15:26 +00:00
Owen Anderson 0747307049 Add support for code generation of the one register with immediate form of vorr.
We could be more aggressive about making this work for a larger range of constants,
but this seems like a good start.

llvm-svn: 118201
2010-11-03 22:44:51 +00:00
Bob Wilson ceb49296ef Check for extractelement with a variable operand for the element number.
For NEON we had been assuming this was always an immediate constant.

llvm-svn: 118175
2010-11-03 16:24:50 +00:00
Duncan Sands 1462777017 Simplify uses of MVT and EVT. An MVT can be compared directly
with a SimpleValueType, while an EVT supports equality and
inequality comparisons with SimpleValueType.

llvm-svn: 118169
2010-11-03 12:17:33 +00:00
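A tiny C++ illustration of the comparison rules described above, assuming the ValueTypes.h interfaces of that era (the helper and its type choices are arbitrary):

#include "llvm/CodeGen/ValueTypes.h"
using namespace llvm;

// An EVT supports ==/!= against a SimpleValueType; an MVT can be compared
// with a SimpleValueType directly, with no wrapping needed.
static bool isDoubleOrV2F64(EVT VT) {
  if (VT == MVT::v2f64)                 // EVT vs. SimpleValueType
    return true;
  if (!VT.isSimple())
    return false;
  return VT.getSimpleVT() == MVT::f64;  // MVT vs. SimpleValueType
}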
Evan Cheng 8740ee3637 Fix preload instruction isel. Only v7 supports pli, and only v7 with the MP extension supports pldw. Add a subtarget attribute to denote MP extension support and legalize the illegal ones to nothing.
llvm-svn: 118160
2010-11-03 06:34:55 +00:00
Evan Cheng 6f36042557 Add support to match @llvm.prefetch to pld / pldw / pli. rdar://8601536.
llvm-svn: 118152
2010-11-03 05:14:24 +00:00
Bob Wilson 44be217af1 NEON does not support truncating vector stores. Radar 8598391.
llvm-svn: 117940
2010-11-01 18:31:39 +00:00
Bob Wilson 7ed597149b Overhaul memory barriers in the ARM backend. Radar 8601999.
There were a number of issues to fix up here:
* The "device" argument of the llvm.memory.barrier intrinsic should be
used to distinguish the "Full System" domain from the "Inner Shareable"
domain.  It has nothing to do with using DMB vs. DSB instructions.
* The compiler should never need to emit DSB instructions.  Remove the
ARMISD::SYNCBARRIER node and also remove the instruction patterns for DSB.
* Merge the separate DMB/DSB instructions for options only used for the
disassembler with the default DMB/DSB instructions.  Add the default
"full system" option ARM_MB::SY to the ARM_MB::MemBOpt enum.
* Add a separate ARMISD::MEMBARRIER_MCR node for subtargets that implement
a data memory barrier using the MCR instruction.
* Fix up encodings for these instructions (except MCR).
I also updated the tests and added a few new ones to check for DMB options
that were not currently being exercised.

llvm-svn: 117756
2010-10-30 00:54:37 +00:00
Evan Cheng 0c4c5ca6e1 - Don't schedule nodes with only MVT::Flag and MVT::Other values for latency.
- Compute CopyToReg use operand latency correctly.

llvm-svn: 117674
2010-10-29 18:07:31 +00:00
John Thompson e8360b7182 Inline asm multiple alternative constraints development phase 2 - improved basic logic, added initial platform support.
llvm-svn: 117667
2010-10-29 17:29:13 +00:00
Bob Wilson 6c55007edb Fix compiler warnings about signed/unsigned comparisons.
llvm-svn: 117511
2010-10-27 23:49:00 +00:00
Bob Wilson c7334a146e SelectionDAG shuffle nodes do not allow operands with different numbers of
elements than the result vector type.  So, when an instruction like:

%8 = shufflevector <2 x float> %4, <2 x float> %7, <4 x i32> <i32 1, i32 0, i32 3, i32 2>

is translated to a DAG, each operand is changed to a concat_vectors node that appends 2 undef elements.  That is:

shuffle [a,b], [c,d] is changed to:
shuffle [a,b,u,u], [c,d,u,u]

That's probably the right thing for x86 but for NEON, we'd much rather have:

shuffle [a,b,c,d], undef

Teach the DAG combiner how to do that transformation for ARM.  Radar 8597007.

llvm-svn: 117482
2010-10-27 20:38:28 +00:00
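A toy C++ index-remapping sketch (not the DAG combiner's actual code) showing how the mask of the padded two-operand shuffle maps onto the single NEON-friendly shuffle of the concatenated vector:

#include <cassert>
#include <vector>

int main() {
  // After padding, indices 0-3 pick from [a,b,u,u] and indices 4-7 pick
  // from [c,d,u,u], so the IR mask <1,0,3,2> becomes <1,0,5,4> in the DAG.
  std::vector<int> PaddedMask = {1, 0, 5, 4};

  // In the single shuffle of [a,b,c,d] the second operand's elements start
  // at index 2, so indices >= 4 shift down by 2.
  std::vector<int> SingleMask;
  for (int Idx : PaddedMask)
    SingleMask.push_back(Idx < 4 ? Idx : Idx - 2);

  assert((SingleMask == std::vector<int>{1, 0, 3, 2}));  // shuffle [a,b,c,d], undef
  return 0;
}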
Evan Cheng 817bbac4f7 Enable ARM fastcc.
llvm-svn: 117194
2010-10-23 02:19:37 +00:00
Evan Cheng 08dd8c8295 Add fastcc cc: pass and return VFP / NEON values in registers. Controlled by -arm-fastcc for now.
llvm-svn: 117119
2010-10-22 18:23:05 +00:00
Dale Johannesen ff37675c72 Fix crash introduced in 116852. 8573915.
llvm-svn: 116955
2010-10-20 22:03:37 +00:00
Jim Grosbach bbdc5d2ef9 Add a pre-dispatch SjLj EH hook on the unwind edge for targets to do any
setup they require. Use this for ARM/Darwin to rematerialize the base
pointer from the frame pointer when required. rdar://8564268

llvm-svn: 116879
2010-10-19 23:27:08 +00:00
Dale Johannesen 710a2d9d46 Enable using vdup for vector constants which are splat of
integers by default, and remove the controlling flag, now
that LICM will hoist such vdup's.  8003375.

llvm-svn: 116852
2010-10-19 20:00:17 +00:00
Jim Grosbach 2d00b1b2e5 Don't mark argument value stores as immutable, as otherwise the post-RA
scheduler may reorder loads from them before the stores and other such
badness. PR8347. Patch by David Meyer

llvm-svn: 116602
2010-10-15 18:34:47 +00:00
Bob Wilson 3b1db392fc Remove unused ARMISD::AND selection DAG node.
llvm-svn: 116566
2010-10-15 04:34:40 +00:00
Anton Korobeynikov 81bdc93bbb Use proper libcall names & condcodes when compiling for ARM EABI.
Patch by Evzen Muller!

llvm-svn: 114991
2010-09-28 21:39:26 +00:00
Bob Wilson 3dc97324c1 Add a command line option "-arm-strict-align" to disallow unaligned memory
accesses for ARM targets that would otherwise allow it.  Radar 8465431.

llvm-svn: 114941
2010-09-28 04:09:35 +00:00
Evan Cheng dbcc4b4d4d Enable code placement optimization pass for ARM.
llvm-svn: 114746
2010-09-24 19:07:23 +00:00
Jim Grosbach 85dcd3d0f4 Add support for ELF PLT references for ARM MC asm printing. Adding a
new VariantKind to the MCSymbolExpr seems like overkill, but I'm not sure
there's a more straightforward way to get the printing difference captured.
(i.e., x86 uses @PLT, ARM uses (PLT)).

llvm-svn: 114613
2010-09-22 23:27:36 +00:00
Bob Wilson 463a05342a Change VDUPLANE DAG combiner to just return the result instead of calling
CombineTo to avoid putting the result on the worklist.  I don't think it makes
much difference for now, but it might help someday as we add more DAG
combine optimizations.

llvm-svn: 114595
2010-09-22 22:27:30 +00:00
Bob Wilson 2280674fa9 Combine both VMOVDRR(VMOVRRD) and VMOVRRD(VMOVDRR), instead of just doing one
of those.  Refactor to share code for handling BUILD_VECTOR(VMOVRRD).
I don't have a testcase that exercises this, but it seems like an obvious
good thing to do.

llvm-svn: 114589
2010-09-22 22:09:21 +00:00
Owen Anderson 61158f98ab Enable target-specific mul-lowering on ARM, even at -Os. Remove a test that this makes
irrelevant, but add a new test for the new, improved functionality.

llvm-svn: 114494
2010-09-21 22:51:46 +00:00
Chris Lattner 886250c8f0 convert a couple more places to use the new getStore()
llvm-svn: 114463
2010-09-21 18:51:21 +00:00
Bob Wilson 5549d496dd Define the TargetLowering::getTgtMemIntrinsic hook for ARM so that NEON load
and store intrinsics are represented with MemIntrinsicSDNodes.

llvm-svn: 114454
2010-09-21 17:56:22 +00:00
Chris Lattner 7727d05dbb convert the targets off the non-MachinePointerInfo of getLoad.
llvm-svn: 114410
2010-09-21 06:44:06 +00:00
Chris Lattner 2510de2bea reimplement memcpy/memmove/memset lowering to use MachinePointerInfo
instead of srcvalue/offset pairs.  This corrects SV info for mem 
operations whose size is > 32-bits.

llvm-svn: 114401
2010-09-21 05:40:29 +00:00
Bob Wilson cb6db98897 Add target-specific DAG combiner for BUILD_VECTOR and VMOVRRD. An i64
value should be in GPRs when it's going to be used as a scalar, and we use
VMOVRRD to make that happen, but if the value is converted back to a vector
we need to fold to a simple bit_convert.  Radar 8407927.

llvm-svn: 114233
2010-09-17 22:59:05 +00:00
Eric Christopher 1c06917f15 Split out some of the calling convention bits so that they can be
used for fast-isel.

llvm-svn: 113652
2010-09-10 22:42:06 +00:00
Evan Cheng bf4070756f Teach if-converter to be more careful with predicating instructions that would
take multiple cycles to decode.
For the current if-converter clients (actually only ARM), the instructions that
are predicated on false are not nops. They would still take machine cycles to
decode. Micro-coded instructions such as LDM / STM can potentially take multiple
cycles to decode. The if-converter should treat them as non-micro-coded
simple instructions.

llvm-svn: 113570
2010-09-10 01:29:16 +00:00
Jim Grosbach 535d3b4e09 remove trailing whitespace
llvm-svn: 113338
2010-09-08 03:54:02 +00:00
Bob Wilson f65c9ef720 Replace NEON vabdl, vaba, and vabal intrinsics with combinations of the
vabd intrinsic and add and/or zext operations.  In the case of vaba, this
also avoids the need for a DAG combine pattern to combine vabd with add.
Update tests.  Auto-upgrade the old intrinsics.

llvm-svn: 112941
2010-09-03 01:35:08 +00:00
Bob Wilson 38ab35a911 Remove NEON vmull, vmlal, and vmlsl intrinsics, replacing them with multiply,
add, and subtract operations with zero-extended or sign-extended vectors.
Update tests.  Add auto-upgrade support for the old intrinsics.

llvm-svn: 112773
2010-09-01 23:50:19 +00:00
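A scalar C++ sketch of why the sign/zero extension matters in the replacement pattern: the widening multiply keeps products that do not fit in the element type (the concrete values are arbitrary):

#include <cassert>
#include <cstdint>

int main() {
  int16_t A = 12000, B = 3;

  // The old vmull intrinsic is auto-upgraded to mul(sext(a), sext(b)),
  // which keeps the full 32-bit product.
  int32_t Widened = (int32_t)A * (int32_t)B;
  assert(Widened == 36000 && Widened > INT16_MAX);

  // vmlal / vmlsl are upgraded to an add / sub of that widened product
  // into a 32-bit accumulator element.
  int32_t Acc = 100;
  assert(Acc + Widened == 36100);
  return 0;
}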
Bill Wendling 0a65116cce Create an ARMISD::AND node. This node is exactly like the "ARM::AND" node, but
it sets the CPSR register.

llvm-svn: 112393
2010-08-29 03:02:11 +00:00
Daniel Dunbar a54a1b0edf ARM/Thumb2: Fix a misselect in getARMCmp, when attempting to adjust a signed
comparison that would overflow.
 - The other under/overflow cases can't actually happen because the immediates
   which would trigger them are legal (so we don't enter this code), but
   adjusted the style to make it clear the transform is always valid.

llvm-svn: 112053
2010-08-25 16:58:05 +00:00
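A hedged illustration of the overflow hazard being fixed; the exact predicate pair getARMCmp rewrites may differ from this one, but the shape of the bug is the classic one, "x > C" rewritten as "x >= C+1" when C+1 wraps:

#include <cassert>
#include <cstdint>

int main() {
  int32_t C = INT32_MAX;
  int32_t X = 5;

  bool Gt = X > C;                                // false for every int32_t X
  int32_t CPlusOne = (int32_t)((uint32_t)C + 1);  // wraps to INT32_MIN
  bool GeWrapped = X >= CPlusOne;                 // true for every int32_t X

  assert(Gt != GeWrapped);  // so the adjustment must be skipped at the boundary
  return 0;
}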
Bob Wilson 9a511c07e4 Replace the arm.neon.vmovls and vmovlu intrinsics with vector sign-extend and
zero-extend operations.

llvm-svn: 111614
2010-08-20 04:54:02 +00:00
Bob Wilson fb7eaff759 Expand ZERO_EXTEND operations for NEON vector types.
Testcase from Nick Lewycky.

llvm-svn: 111341
2010-08-18 01:45:52 +00:00
Bob Wilson 411dfad981 Allow more cases of undef shuffle indices and add tests for them.
llvm-svn: 111226
2010-08-17 05:54:34 +00:00
Bob Wilson c350e7a509 Ignore undef shuffle indices when checking for a VTRN shuffle. Radar 8290937.
llvm-svn: 111208
2010-08-16 23:37:17 +00:00
Bob Wilson 3c9ed76ba5 Temporarily disable tail calls on ARM to work around some linker problems.
llvm-svn: 111050
2010-08-13 22:43:33 +00:00
Jim Grosbach 4d5dc3e7e5 Cortex-M4 has floating-point support, but only single precision.
llvm-svn: 110810
2010-08-11 15:44:15 +00:00
Bill Wendling 6a98131468 Consider this code snippet:
float t1(int argc) {
  return (argc == 1123) ? 1.234f : 2.38213f;
}

We would generate truly awful code on ARM (those with a weak stomach should look
away):

_t1:
  movw   r1, #1123
  movs   r2, #1
  movs   r3, #0
  cmp    r0, r1
  mov.w  r0, #0
  it     eq
  moveq  r0, r2
  movs   r1, #4
  cmp    r0, #0
  it     ne
  movne  r3, r1
  adr    r0, #LCPI1_0
  ldr    r0, [r0, r3]
  bx     lr

The problem was that legalization was creating a cascade of SELECT_CC nodes
for the comparison of "argc == 1123", which was fed into a SELECT node for the ?:
statement, which was itself converted to a SELECT_CC node. This is because the
ARM back-end doesn't have custom lowering for SELECT nodes, so it used the
default "Expand".

I added a fairly simple "LowerSELECT" to the ARM back-end. It takes care of this
testcase, but can obviously be expanded to include more cases.

Now we generate this, which looks optimal to me:

_t1:
  movw   r1, #1123
  movs   r2, #0
  cmp    r0, r1
  adr    r0, #LCPI0_0
  it     eq
  moveq  r2, #4
  ldr    r0, [r0, r2]
  bx     lr
  .align  2
LCPI0_0:
  .long   1075344593  @ float 2.382130e+00
  .long   1067316150  @ float 1.234000e+00

llvm-svn: 110799
2010-08-11 08:43:16 +00:00
Evan Cheng 6e809de90c - Add subtarget feature -mattr=+db which determines whether an ARM CPU has the
memory and synchronization barrier dmb and dsb instructions.
- Change instruction names to something more sensible (matching name of actual
  instructions).
- Added tests for memory barrier codegen.

llvm-svn: 110785
2010-08-11 06:22:01 +00:00
Evan Cheng fa16acae44 Delete some unused instructions.
llvm-svn: 110710
2010-08-10 19:36:22 +00:00
Evan Cheng 3f251fb26e Re-apply r110655 with fixes. Epilogue must restore sp from fp if the function stack frame has a var-sized object.
Also added a test case to check for the added benefit of this patch: it's optimizing away the unnecessary restore of sp from fp for some non-leaf functions.

llvm-svn: 110707
2010-08-10 19:30:19 +00:00
Daniel Dunbar 0dd47bfca3 Revert r110655, "Fix ARM hasFP() semantics. It should return true whenever FP
register is", it breaks a couple test-suite tests.

llvm-svn: 110701
2010-08-10 18:32:02 +00:00
Evan Cheng 8d5d1c1331 Fix ARM hasFP() semantics. It should return true whenever FP register is
reserved, not available for general allocation. This eliminates all the
extra checks for Darwin.

This change also fixes the use of FP to access frame indices in leaf
functions and cleans up some confusing code in epilogue emission.

llvm-svn: 110655
2010-08-10 06:26:49 +00:00
Dale Johannesen 21f13209f8 Remove switch for disabling ARM tail calls. They
seem to be working correctly.  No functional change.

llvm-svn: 110226
2010-08-04 18:07:17 +00:00
Bob Wilson 79daf7e0ae Combine NEON VABD (absolute difference) intrinsics with ADDs to make VABA
(absolute difference with accumulate) intrinsics.  Radar 8228576.

llvm-svn: 110170
2010-08-04 00:12:08 +00:00
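An element-wise C++ sketch of the identity the combiner relies on: an absolute difference followed by an add is exactly the accumulate form, i.e. vaba(acc, a, b) == add(acc, vabd(a, b)). These are scalar models, not the intrinsics themselves:

#include <cassert>
#include <cstdint>

static uint8_t vabd(uint8_t A, uint8_t B) {              // absolute difference
  return A > B ? A - B : B - A;
}

static uint8_t vaba(uint8_t Acc, uint8_t A, uint8_t B) { // abs-diff + accumulate
  return Acc + vabd(A, B);
}

int main() {
  uint8_t Acc = 100, A = 7, B = 30;
  assert((uint8_t)(Acc + vabd(A, B)) == vaba(Acc, A, B));  // add(acc, vabd) == vaba
  assert(vaba(Acc, A, B) == 123);                          // 100 + |7 - 30|
  return 0;
}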
Nate Begeman b69b182191 Add support for getting & setting the FPSCR application register on ARM when VFP is enabled.
Add support for using the FPSCR in conjunction with the vcvtr instruction, for controlling fp to int rounding.
Add support for the FLT_ROUNDS_ node now that the FPSCR is exposed.

llvm-svn: 110152
2010-08-03 21:31:55 +00:00
Bob Wilson 728eb292eb Refactor ARM-specific DAG combining in preparation for adding some more
transformations.

llvm-svn: 109800
2010-07-29 20:34:14 +00:00
Dale Johannesen 2bff50546c Implement vector constants which are splat of
integers with mov + vdup.  8003375.  This is
currently disabled by default because LICM will
not hoist a VDUP, so it pessimizes the code if
the construct occurs inside a loop (8248029).

llvm-svn: 109799
2010-07-29 20:10:08 +00:00
Anton Korobeynikov 19edda0323 Hook in GlobalMerge pass
llvm-svn: 109359
2010-07-24 21:52:08 +00:00
Jim Grosbach 0acbcb1a60 Use the appropriate register class for an i32 when adding ARM::LR to the
function live in set. This will give us tGPR for Thumb1 and GPR otherwise,
so the copy will be spillable. rdar://8224931

llvm-svn: 109293
2010-07-23 23:50:35 +00:00
Evan Cheng df907f4594 - Allow the target to specify when register pressure is "too high". In most cases,
it's too late to start backing off aggressive latency scheduling when most
  of the registers are in use so the threshold should be a bit tighter.
- Correctly handle live-outs and extract_subreg etc.
- Enable register pressure aware scheduling by default for hybrid scheduler.
  For ARM, this is almost always a win on # of instructions. It's runtime
  neutral for most of the tests. But for some kernels with high register
  pressure it can be a huge win. e.g. 464.h264ref reduced number of spills by
  54 and sped up by 20%.

llvm-svn: 109279
2010-07-23 22:39:59 +00:00
Chandler Carruth a1d7516cb7 Mark an assert-only variable as used.
llvm-svn: 109091
2010-07-22 08:02:25 +00:00
Evan Cheng 285903853f More register pressure aware scheduling work.
llvm-svn: 109064
2010-07-21 23:53:58 +00:00
Eric Christopher 84bdfd80df Baby steps towards ARM fast-isel.
llvm-svn: 109047
2010-07-21 22:26:11 +00:00
Rafael Espindola 4277e14dc4 Fix calling convention on ARM if vfp2+ is enabled.
llvm-svn: 109009
2010-07-21 11:38:30 +00:00
Evan Cheng a77f3d3b37 Teach bottom up pre-ra scheduler to track register pressure. Work in progress.
llvm-svn: 108991
2010-07-21 06:09:07 +00:00
Jim Grosbach 9c7708cc1b Removed unused code.
llvm-svn: 108841
2010-07-20 14:51:32 +00:00
Evan Cheng 10f99a3490 ARM has to provide its own TargetLowering::findRepresentativeClass because its scalar floating point registers alias its vector registers.
llvm-svn: 108761
2010-07-19 22:15:08 +00:00
Jim Grosbach 8d3ba7349c Since ARM emits inline jump tables as part of the ConstantIsland pass,
it should set the jump table encoding to EK_Inline. This prevents
a second, unused, copy of the table from being emitted after the function
body. PR6581.

llvm-svn: 108730
2010-07-19 17:20:38 +00:00
Jim Grosbach d9ad52adff revert so I can get the right PR# in the log message.
llvm-svn: 108727
2010-07-19 17:19:40 +00:00
Jim Grosbach c685756cfb Since ARM emits inline jump tables as part of the ConstantIsland pass,
it should set the jump table encoding to EK_Inline. This prevents
a second, unused, copy of the table from being emitted after the function
body. PR7499.

llvm-svn: 108722
2010-07-19 17:18:28 +00:00
Jim Grosbach b97e2bbe32 Add combiner patterns to more effectively utilize the BFI (bitfield insert)
instruction for non-constant operands. This includes the case referenced
in the README.txt regarding a bitfield copy.

llvm-svn: 108608
2010-07-17 03:30:54 +00:00
Jim Grosbach 6e3b5fa91c add BFI to getTargetNodeName()
llvm-svn: 108603
2010-07-17 01:50:57 +00:00
Jim Grosbach adc81f8ee8 Fix logic think-o
llvm-svn: 108601
2010-07-17 01:22:19 +00:00
Jim Grosbach 11013eda5a Add basic support to code-gen the ARM/Thumb2 bit-field insert (BFI) instruction
and a combine pattern to use it for setting a bit-field to a constant
value. More to come for non-constant stores.

llvm-svn: 108570
2010-07-16 23:05:05 +00:00
Evan Cheng 55f0c6b9fc Split -enable-finite-only-fp-math into two options:
-enable-no-nans-fp-math and -enable-no-infs-fp-math. All of the current codegen fp math optimizations only care whether the fp arithmetic's arguments and results can never be NaN.

llvm-svn: 108465
2010-07-15 22:07:12 +00:00