Commit Graph

8585 Commits

Author SHA1 Message Date
Craig Topper a29ed865d0 Make a bunch of lowering helper functions static instead of member functions. No functional change.
llvm-svn: 163596
2012-09-11 06:15:32 +00:00
Craig Topper 8702c5b7c0 Change unsigned to a uint16_t in static disassembler tables to reduce the table size.
llvm-svn: 163594
2012-09-11 04:19:21 +00:00
Chad Rosier 38e05a9eb2 Update function names to conform to guidelines. No functional change intended.
llvm-svn: 163561
2012-09-10 22:50:57 +00:00
Chad Rosier 41ff85d754 Revert r163556. Missed updates to tablegen files.
llvm-svn: 163557
2012-09-10 22:30:35 +00:00
Chad Rosier 2089c49db7 Update function names to conform to guidelines. No functional change intended.
llvm-svn: 163556
2012-09-10 22:23:45 +00:00
Dmitri Gribenko ca1e27be0d Remove redundant semicolons which are null statements.
llvm-svn: 163547
2012-09-10 21:26:47 +00:00
Chad Rosier db20a41d99 [ms-inline asm] Pass the correct AsmVariant to the PrintAsmOperand() function
and update the printOperand() function accordingly.

llvm-svn: 163544
2012-09-10 21:10:49 +00:00
Chad Rosier 6f8d8b2406 [ms-inline asm] Add support for .att_syntax directive.
llvm-svn: 163542
2012-09-10 20:54:39 +00:00
Michael Liao 400f7ef871 Enhance PR11334 fix to support extload from v2f32/v4f32
- Fix a remaining issue of PR11674 as well

llvm-svn: 163528
2012-09-10 18:33:51 +00:00
Michael Liao c3d5b21c39 Add boolean simplification support from CMOV
- If a boolean value is generated by a CMOV and then tested as a boolean value,
  simplify the use of the test result by referencing the original condition instead.
  The RDRAND intrinsic is one such case (see the sketch after this entry).

llvm-svn: 163516
2012-09-10 16:36:16 +00:00
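A minimal C++ sketch of the kind of source pattern the combine above targets, assuming the standard _rdrand32_step intrinsic from <immintrin.h> (compile with -mrdrnd); the helper name next_random is made up for illustration and is not part of the patch.

#include <immintrin.h>

// _rdrand32_step() returns 1 on success and 0 on failure. The flag originates
// in the hardware carry flag and is typically materialized as a boolean
// (e.g. via SETcc/CMOV) before the loop tests it again. The combine above lets
// the backend branch on the original condition instead of re-testing the
// materialized boolean.
unsigned next_random() {
  unsigned value;
  while (!_rdrand32_step(&value)) {
    // Retry until the hardware RNG reports success.
  }
  return value;
}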
Elena Demikhovsky 264fb0217e The 256-bit VPSHUFB instruction may be generated when one of the input vectors is undefined or zeroinitializer.
I've added the "zeroinitializer" case in this patch.

llvm-svn: 163506
2012-09-10 12:13:11 +00:00
Nick Lewycky 74bf42c9a1 Add missing space before {. No functionality change.
llvm-svn: 163484
2012-09-09 23:40:55 +00:00
Craig Topper 4ed79bd7d7 Add instruction selection for ffloor of vectors when SSE4.1 or AVX is enabled.
llvm-svn: 163473
2012-09-08 17:42:27 +00:00
Craig Topper 0955a9f4e1 Use 256-bit alignment for constant pool value for 256-bit vector FNEG lowering.
llvm-svn: 163463
2012-09-08 07:46:05 +00:00
Craig Topper 98f2e861a0 Add support for lowering FABS of vector types.
llvm-svn: 163461
2012-09-08 07:31:51 +00:00
Craig Topper 3e41a5bb31 Set operation action for FFLOOR to Expand for all vector types for X86. Set FFLOOR of v4f32 to Expand for ARM. v2f64 was already correct.
llvm-svn: 163458
2012-09-08 04:58:43 +00:00
Benjamin Kramer e3d658bb6c PR13754: llvm-mc/x86 crashes on .cfi directives without the % prefix for registers.
gas accepts this and it seems to be common enough to be worth supporting. This
doesn't affect the parsing of reg operands outside of .cfi directives.

llvm-svn: 163390
2012-09-07 14:51:35 +00:00
Manman Ren 742534c4dc Release build: guard dump functions with "ifndef NDEBUG"
No functional change.

llvm-svn: 163339
2012-09-06 19:06:06 +00:00
Elena Demikhovsky 42777877c2 AVX2 optimization.
Added generation of the VPSHUFB instruction for <32 x i8> vector shuffles when possible.

llvm-svn: 163312
2012-09-06 12:42:01 +00:00
Michael Liao 2d95a2b5c4 Remove duplicated helper function
llvm-svn: 163295
2012-09-06 07:11:22 +00:00
Craig Topper f3e4aa8cdd Use iPTR instead of i32 for extract_subvector/insert_subvector index in lowering and patterns. This makes it consistent with the incoming DAG nodes from the DAG builder.
llvm-svn: 163293
2012-09-06 06:09:01 +00:00
Craig Topper daa5ed1e0a Add patterns for converting stores of subvector_extracts of lower 128-bits of a 256-bit vector to VMOVAPSmr/VMOVUPSmr.
llvm-svn: 163292
2012-09-06 05:15:01 +00:00
Roman Divacky ad06cee239 Stop casting away const qualifier needlessly.
llvm-svn: 163258
2012-09-05 22:26:57 +00:00
Roman Divacky 6792380e7b Use const properly so that we don't remove the const qualifier from region and MII
by casting. Found with gcc48.

llvm-svn: 163247
2012-09-05 21:17:34 +00:00
Craig Topper 81f06df699 Remove some of the patterns added in r163196. Increasing the complexity on insert_subvector into undef accomplishes the same thing.
llvm-svn: 163198
2012-09-05 07:26:35 +00:00
Craig Topper f7c87d6eea Add patterns for integer forms of VINSERTF128/VINSERTI128 folded with loads. Also add patterns to turn subvector inserts with loads to index 0 of an undef into VMOVAPS.
llvm-svn: 163196
2012-09-05 06:58:39 +00:00
Craig Topper 2db2353b21 Convert vextracti128/vextractf128 intrinsics to extract_subvector at DAG build time. Similar was previously done for vinserti128/vinsertf128. Add patterns for folding these extract_subvectors with stores.
llvm-svn: 163192
2012-09-05 05:48:09 +00:00
Chad Rosier a05ea0f3e3 Fix function name per coding standard.
llvm-svn: 163187
2012-09-05 01:15:43 +00:00
Preston Gurd cdf540d5d6 Generic Bypass Slow Div
- CodeGenPrepare pass for identifying div/rem ops
- Backend specifies the type mapping using addBypassSlowDivType
- Enabled only for Intel Atom with O2, 32-bit -> 8-bit
- Replace IDIV with instructions which test its value and use DIVB if the value
  is positive and less than 256 (see the sketch after this entry).
- When both the quotient and remainder of a divide are used, a DIV
  and a REM instruction will be present in the IR. In the non-Atom case
  they are both lowered to IDIVs, and CSE removes the redundant IDIV instruction,
  reusing the quotient and remainder from the first IDIV. However,
  with this optimization CSE is not able to eliminate the redundant
  IDIV instructions because they are located in different basic blocks.
  This is overcome by calculating both the quotient (DIV) and remainder (REM)
  in each basic block inserted by the optimization and reusing the result
  values when a subsequent DIV or REM instruction uses the same operands.
- Test cases check for the presence of the optimization when calculating
  either the quotient, the remainder, or both.

Patch by Tyler Nowicki!

llvm-svn: 163150
2012-09-04 18:22:17 +00:00
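A minimal C++ sketch of the bypass idea described above, assuming a hypothetical helper divRemBypass; the real change operates on LLVM IR in CodeGenPrepare and is keyed off addBypassSlowDivType, so this only illustrates the control flow it creates.

#include <cstdint>

// When both operands fit in 8 bits, the expensive 32-bit divide can be done
// with the much cheaper 8-bit divide (DIVB on Atom). Both the quotient and
// the remainder are computed on each path, mirroring how the pass computes
// DIV and REM in each inserted basic block.
void divRemBypass(uint32_t a, uint32_t b, uint32_t &quot, uint32_t &rem) {
  if (((a | b) & ~0xFFu) == 0) {
    // Fast path: both operands are in [0, 255].
    quot = static_cast<uint8_t>(a) / static_cast<uint8_t>(b);
    rem  = static_cast<uint8_t>(a) % static_cast<uint8_t>(b);
  } else {
    // Slow path: full-width divide.
    quot = a / b;
    rem  = a % b;
  }
}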
Elena Demikhovsky cbe99bbb36 This patch optimizes the shuffle instruction shown below, generating 2 instructions instead of 4.
Since this specific shuffle is widely used in many workloads, we see roughly a 10% performance improvement on them.

shufflevector <8 x float> %A, <8 x float> %B, <8 x i32> <i32 0, i32 8, i32 2, i32 10, i32 4, i32 12, i32 6, i32 14>

Before:
vmovaps (%rdx), %ymm0
vshufps $8, %ymm0, %ymm0, %ymm0
vmovaps (%rcx), %ymm1
vshufps $8, %ymm0, %ymm1, %ymm1
vunpcklps       %ymm0, %ymm1, %ymm0

After:
vmovaps (%rcx), %ymm0
vmovsldup       (%rdx), %ymm1
vblendps        $85, %ymm0, %ymm1, %ymm0

llvm-svn: 163134
2012-09-04 12:49:02 +00:00
Chad Rosier 9e2aff8b6d [ms-inline asm] Asm operands can map to one or more MCOperands. Therefore, add
the NumMCOperands argument to the GetMCInstOperandNum() function that is set
to the number of MCOperands this asm operand mapped to.

llvm-svn: 163124
2012-09-03 20:31:23 +00:00
Chad Rosier 391d299737 [ms-inline asm] Add an interface to the GetMCInstOperandNum() function in the
MCTargetAsmParser class.

llvm-svn: 163122
2012-09-03 18:47:45 +00:00
Chad Rosier a353dba17d Removed unused argument.
llvm-svn: 163104
2012-09-03 03:16:09 +00:00
Chris Lattner ba3ba8fa1f Some peepholes that should match horizontal add/sub operations.
llvm-svn: 163103
2012-09-03 02:58:21 +00:00
Chad Rosier e38bb6a34e [ms-inline asm] Expose the Kind and Opcode variables from the
MatchInstructionImpl() function.

These values are used by the ConvertToMCInst() function to index into the
ConversionTable.  The values are also needed to call the GetMCInstOperandNum()
function.

llvm-svn: 163101
2012-09-03 02:06:46 +00:00
Craig Topper d6cc4062be Typos
llvm-svn: 163053
2012-09-01 06:33:50 +00:00
Manman Ren 26c5d0f607 SelectionDAG: when constructing VZEXT_LOAD from other loads, make sure its
output chain is correctly set up.

As an example, if the original load must happen before later stores, we need
to make sure the constructed VZEXT_LOAD is constrained to be before the stores.

rdar://11457792

llvm-svn: 163036
2012-08-31 23:16:57 +00:00
Craig Topper 908e685102 Mark FMA4 instructions as commutable and add them to the folding tables.
llvm-svn: 163035
2012-08-31 23:10:34 +00:00
Craig Topper 7573c8f081 Add selection of RegOp2MemOpTable3 to canFoldMemoryOperand
llvm-svn: 163029
2012-08-31 22:12:16 +00:00
Michael Liao 3224543bf9 Fix PR12359
- In addition to the undefined case, if V2 is the zero vector, skip the 2nd PSHUFB and POR
  as well, since PSHUFB zeroes elements with negative indices (see the sketch after this entry).

  Patch by Sriram Murali <sriram.murali@intel.com>

llvm-svn: 163018
2012-08-31 20:12:31 +00:00
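A hedged C++ illustration of the PSHUFB property the fix above relies on, using the standard 128-bit _mm_shuffle_epi8 intrinsic from <tmmintrin.h> (SSSE3; compile with -mssse3). Shuffle indices with the high bit set produce zero lanes, so when the second source vector is all zeros the second PSHUFB plus the POR contribute nothing.

#include <tmmintrin.h>
#include <cstdio>

int main() {
  // Source lanes hold the values 0..15.
  __m128i src = _mm_set_epi8(15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0);
  // Lanes 0..7 select elements 0..7; lanes 8..15 use index -1 (high bit set),
  // which PSHUFB turns into zero.
  __m128i idx = _mm_set_epi8(-1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5, 4, 3, 2, 1, 0);
  __m128i res = _mm_shuffle_epi8(src, idx);

  unsigned char out[16];
  _mm_storeu_si128(reinterpret_cast<__m128i *>(out), res);
  for (int i = 0; i < 16; ++i)
    std::printf("%u ", out[i]); // prints 0..7 followed by eight zeros
  std::printf("\n");
  return 0;
}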
Chad Rosier a8f3c4fe35 The ConvertToMCInst() function can't fail, so remove the now dead Match_ConversionFail enum.
llvm-svn: 163002
2012-08-31 16:41:07 +00:00
Craig Topper c0387f6b23 Mark FMA3 instructions as commutable so that the operands to the multiply part can be commuted.
llvm-svn: 163001
2012-08-31 16:31:13 +00:00
Craig Topper c30fdbc46c Add support for converting llvm.fma to fma4 instructions.
llvm-svn: 162999
2012-08-31 15:40:30 +00:00
Michael Liao 969f3913dd Clean up AddedComplexity further after adding UseSSEx
llvm-svn: 162973
2012-08-31 03:01:35 +00:00
Jim Grosbach e423e865fe X86: Fix encoding of 'movd %xmm0, %rax'
The assembly string for the VMOVPQIto64rr instruction incorrectly lacked the 'v'
prefix, resulting in mis-assembly of the vanilla movd instruction.

llvm-svn: 162963
2012-08-31 00:30:30 +00:00
Michael Liao bbd10792c2 Introduce 'UseSSEx' to force SSE legacy encoding
- Add 'UseSSEx' to prevent SSE legacy instructions from being selected when AVX is
  enabled.

  Because of the penalty for inter-mixing SSE and AVX instructions, we need to
  prevent SSE legacy instructions from being generated except when explicitly
  requested through some intrinsics. For patterns supported by both
  SSE and AVX, so far we force the AVX instruction to be tried first, relying on
  AddedComplexity or position in the td file. That approach is error-prone and
  introduces bugs accidentally.

  'UseSSEx' is disabled when AVX is turned on. For SSE instructions inherited
  by AVX, we need this predicate to force either VEX encoding or SSE legacy
  encoding only.

  For instructions not inherited by AVX, we still use the previous predicates,
  i.e. 'HasSSEx'. So far, these instructions fall into the following
  categories:
  * SSE instructions with MMX operands
  * SSE instructions with GPR/MEM operands only (xFENCE, PREFETCH, CLFLUSH,
    CRC, etc.)
  * SSE4A instructions.
  * MMX instructions.
  * x87 instructions added by SSE.

2 test cases are modified:

 - test/CodeGen/X86/fast-isel-x86-64.ll
   AVX code generation differs from the SSE one. 'vcvtsi2sdq' cannot be
   selected by fast-isel due to a complicated pattern, and fast-isel
   falls back to materializing it from the constant pool.

 - test/CodeGen/X86/widen_load-1.ll
   AVX code generation differs from the SSE one after fixing the SSE/AVX
   inter-mixing. Exec-domain fixing prefers 'vmovapd' instead of
   'vmovaps'.

llvm-svn: 162919
2012-08-30 16:54:46 +00:00
Craig Topper e39ad7b549 Only perform DAG combine on FMAs of legal types.
llvm-svn: 162892
2012-08-30 06:56:15 +00:00
Michael Liao 3c8980646b Fix PR13727
- The root cause is that target constant materialization in X86 fast-isel
  creates PC-relative addressing, which may overflow the 32-bit range in a non-Small
  code model if the .rodata section is allocated too far away from the code segment in
  MCJIT, which uses the Large code model so far.
- Follow the similar existing logic and fix this in fast-isel by skipping the
  non-Small code models.

llvm-svn: 162881
2012-08-30 00:30:16 +00:00
Benjamin Kramer 8f5c5ded4e Make helper function static.
llvm-svn: 162843
2012-08-29 16:17:01 +00:00
Craig Topper a999c66292 Convert FMA4 patterns to use target specific nodes instead of intrinsics to align with FMA3.
llvm-svn: 162829
2012-08-29 07:18:25 +00:00