Commit Graph

1009 Commits

Author SHA1 Message Date
David Blaikie b735b4d6db DebugInfo: remove target-specific Frame Index handling for DBG_VALUE MachineInstrs
Frame index handling is now target-agnostic, so delete the target hooks
for creation & asm printing of target-specific addressing in DBG_VALUEs
and any related functions.

llvm-svn: 184067
2013-06-16 20:34:27 +00:00
Tim Northover 6833e3fd75 X86: Stop LEA64_32r doing unspeakable things to its arguments.
Previously LEA64_32r went through virtually the entire backend thinking it was
using 32-bit registers until its blissful illusions were cruelly snatched away
by MCInstLower and 64-bit equivalents were substituted at the last minute.

This patch makes it behave normally, and take 64-bit registers as sources all
the way through. Previous uses (for 32-bit arithmetic) are accommodated via
SUBREG_TO_REG instructions which make the types and classes agree properly.

llvm-svn: 183693
2013-06-10 20:43:49 +00:00
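
A rough illustration of the kind of source the LEA64_32r change above is about (assuming typical x86-64 codegen; the exact instruction selection is not guaranteed): a 32-bit result computed from 64-bit register sources, the shape a single "leal 7(%rdi,%rsi,4), %eax" can cover.

    // Sketch only: a 32-bit value formed from 64-bit sources, which is what
    // LEA64_32r models. With the sources represented as 64-bit registers all
    // the way through, no last-minute substitution in MCInstLower is needed.
    unsigned addressLow(unsigned long base, unsigned long index) {
      return static_cast<unsigned>(base + index * 4 + 7);
    }
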
Bill Wendling 8f26840c5a Don't cache the instruction and register info from the TargetMachine, because
the internals of TargetMachine could change.

No functionality change intended.

llvm-svn: 183571
2013-06-07 21:00:34 +00:00
Tim Northover 339bf154cc Revert r183069: "TMP: LEA64_32r fixing"
Very sorry, it was committed from the wrong branch by mistake.

llvm-svn: 183070
2013-06-01 10:23:46 +00:00
Tim Northover 57954f04b3 TMP: LEA64_32r fixing
llvm-svn: 183069
2013-06-01 10:21:54 +00:00
Tim Northover 64ec0ff433 X86: use sub-register sequences for MOV*r0 operations
Instead of having a bunch of separate MOV8r0, MOV16r0, ... pseudo-instructions,
it's better to use a single MOV32r0 (which will expand to "xorl %reg, %reg")
and obtain other sizes with EXTRACT_SUBREG and SUBREG_TO_REG. The encoding is
smaller and partial register updates can sometimes be avoided.

Until recently, this sequence was a barrier to rematerialization though. That
should now be fixed so it's an appropriate time to make the change.

llvm-svn: 182928
2013-05-30 13:19:42 +00:00
Tim Northover 04eb4234fc X86: change zext moves to use sub-register infrastructure.
32-bit writes on amd64 zero out the high bits of the corresponding 64-bit
register. LLVM makes use of this for zero-extension, but until now relied on
custom MCLowering and other code to fixup instructions. Now we have proper
handling of sub-registers, this can be done by creating SUBREG_TO_REG
instructions at selection-time.

Should be no change in functionality.

llvm-svn: 182921
2013-05-30 10:43:18 +00:00
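
A rough illustration of the implicit zero extension the change above builds on (expected codegen, not guaranteed):

    // On x86-64 a 32-bit write clears bits 63:32 of the full register, so this
    // zero extension typically compiles to just "movl %edi, %eax"; the
    // SUBREG_TO_REG created at selection time only records that fact.
    unsigned long long zeroExtend(unsigned x) {
      return x;
    }
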
Andrew Trick ef9de2a739 Track IR ordering of SelectionDAG nodes 2/4.
Change SelectionDAG::getXXXNode() interfaces as well as call sites of
these functions to pass in SDLoc instead of DebugLoc.

llvm-svn: 182703
2013-05-25 02:42:55 +00:00
David Majnemer 7ea2a52a0c X86: Remove test instructions following shift-by-immediate instructions
Allow LLVM to take advantage of shift instructions that set the ZF flag,
making instructions that test the destination superfluous.

llvm-svn: 182454
2013-05-22 08:13:02 +00:00
David Majnemer 5ba473afb0 X86: Bad peephole interaction between adc, MOV32r0
The peephole tries to reorder MOV32r0 instructions such that they are
before the instruction that modifies EFLAGS.

The problem is that the peephole does not consider the case where the
instruction that modifies EFLAGS also depends on the previous state of
EFLAGS.

Instead, walk backwards until we find an instruction that has a def for
EFLAGS but does not have a use.
If we find such an instruction, insert the MOV32r0 before it.
If it cannot find such an instruction, skip the optimization.

llvm-svn: 182184
2013-05-18 01:02:03 +00:00
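
A self-contained sketch of the backward walk described above, not the actual LLVM code; the Instr struct and its flag predicates are made up for illustration.

    #include <vector>

    struct Instr {
      bool defsEFLAGS;   // hypothetical: instruction writes EFLAGS
      bool usesEFLAGS;   // hypothetical: instruction reads EFLAGS
    };

    // Walk backwards from 'from' until we find an instruction that defines
    // EFLAGS without also using them; MOV32r0 can safely go right before it.
    // Returns -1 if no such instruction exists, in which case the peephole
    // skips the optimization.
    int findInsertionPoint(const std::vector<Instr> &block, int from) {
      for (int i = from; i >= 0; --i)
        if (block[i].defsEFLAGS && !block[i].usesEFLAGS)
          return i;
      return -1;
    }
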
David Majnemer 8f16974273 X86: Remove redundant test instructions
Increase the number of instructions LLVM recognizes as setting the ZF
flag. This allows us to remove test instructions that redundantly
recalculate the flag.

llvm-svn: 181937
2013-05-15 22:03:08 +00:00
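
A rough example of the redundancy being removed (the codegen noted in the comments is what one would typically expect, not a guarantee):

    // "andl %esi, %edi" already sets ZF according to its result, so the "!= 0"
    // check can reuse those flags; a separate "testl %edi, %edi" would be
    // redundant.
    bool anyBitsSet(unsigned x, unsigned mask) {
      unsigned r = x & mask;
      return r != 0;
    }
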
Michael Liao b53d8963ce ArrayRefize getMachineNode(). No functionality change.
llvm-svn: 179901
2013-04-19 22:22:57 +00:00
Preston Gurd d6be4bf87f This patch is a follow-up to r178171, which uses the register
form of call in preference to memory indirect on Atom.

In this case, the patch applies the optimization to the code for reloading
spilled registers.

The patch also includes changes to sibcall.ll and movgs.ll, which were
failing on the Atom buildbot after the first patch was applied.

This patch by Sriram Murali.

llvm-svn: 178193
2013-03-27 23:16:18 +00:00
Chandler Carruth 9fb823bbd4 Move all of the header files which are involved in modelling the LLVM IR
into their new header subdirectory: include/llvm/IR. This matches the
directory structure of lib, and begins to correct a long standing point
of file layout clutter in LLVM.

There are still more header files to move here, but I wanted to handle
them in separate commits to make tracking what files make sense at each
layer easier.

The only really questionable files here are the target intrinsic
tablegen files. But that's a battle I'd rather not fight today.

I've updated both CMake and Makefile build systems (I think, and my
tests think, but I may have missed something).

I've also re-sorted the includes throughout the project. I'll be
committing updates to Clang, DragonEgg, and Polly momentarily.

llvm-svn: 171366
2013-01-02 11:36:10 +00:00
Bill Wendling 698e84fc4f Remove the Function::getFnAttributes method in favor of using the AttributeSet
directly.

This is in preparation for removing the use of the 'Attribute' class as a
collection of attributes. That will shift to the AttributeSet class instead.

llvm-svn: 171253
2012-12-30 10:32:01 +00:00
Craig Topper fe82eb6bcd Remove intrinsic specific instructions for (V)SQRTPS/PD. Instead lower to target-independent ISD nodes and use the existing patterns for those.
llvm-svn: 171237
2012-12-29 18:18:20 +00:00
Craig Topper 6b27251a76 Remove intrinsic specific instructions for SSE/SSE2/AVX floating point max/min instructions. Lower them to target specific nodes and use those patterns instead. This also allows them to be commuted if UnsafeFPMath is enabled.
llvm-svn: 171227
2012-12-29 16:44:25 +00:00
Craig Topper 81d1e596bb Remove alignment from a bunch more VEX encoded operations in the folding tables.
llvm-svn: 171082
2012-12-26 02:44:47 +00:00
Craig Topper b2922164f0 Remove alignment from the folding table for VMOVUPD; as an unaligned instruction it shouldn't require alignment...
llvm-svn: 171081
2012-12-26 02:14:19 +00:00
Craig Topper d09a9af9b6 Remove alignment requirements from (V)EXTRACTPS. This instruction does 32-bit stores which aren't required to be aligned on SSE or AVX.
llvm-svn: 171080
2012-12-26 01:47:12 +00:00
Craig Topper caef1c5d86 Remove alignment requirement from VCVTSS2SD in folding tables. Reverting r171049. This instruction doesn't require alignment.
llvm-svn: 171078
2012-12-26 00:35:47 +00:00
Nadav Rotem 00410ae625 VCVTSS2SD requires a strict alignment. Thanks Elena.
llvm-svn: 171049
2012-12-25 03:29:18 +00:00
Nadav Rotem dc0ad92b64 Some x86 instructions can load/store one of the operands to memory. On SSE, this memory needs to be aligned.
When these instructions are encoded in VEX (on AVX) there is no such requirement. This changes the folding
tables and removes the alignment restrictions from VEX-encoded instructions.

llvm-svn: 171024
2012-12-24 09:40:33 +00:00
Nadav Rotem d5aae980cb In some cases, due to scheduling constraints, we copy EFLAGS.
The only way to read EFLAGS is using push and pop. If we don't
adjust the stack then we run over the first frame index. This is
not something that we want to do, so we have to make sure that
our machine function does not copy the flags. If it does, then
we have to emit the prologue that adjusts the stack.

rdar://12896831

llvm-svn: 170961
2012-12-21 23:48:49 +00:00
Benjamin Kramer 4669d18893 X86: Match the SSE/AVX min/max vector ops using a custom node instead of intrinsics
This is very mechanical, no functionality change. Preparation for PR14667.

llvm-svn: 170898
2012-12-21 14:04:55 +00:00
Jakob Stoklund Olesen b159b5ff0d Remove the explicit MachineInstrBuilder(MI) constructor.
Use the version that also takes an MF reference instead.

It would technically be possible to extract an MF reference from the MI
as MI->getParent()->getParent(), but that would not work for MIs that
are not inserted into any basic block.

Given the reasonably small number of places this constructor was used at
all, I preferred the compile time check to a run time assertion.

llvm-svn: 170588
2012-12-19 21:31:56 +00:00
Bill Wendling 3d7b0b8ac7 Rename the 'Attributes' class to 'Attribute'. It's going to represent a single attribute in the future.
llvm-svn: 170502
2012-12-19 07:18:57 +00:00
Craig Topper f3ff6ae066 Simplify BMI ANDN matching to use patterns instead of a DAG combine. Also add ANDN to isDefConvertible.
llvm-svn: 170305
2012-12-17 05:12:30 +00:00
Craig Topper f924a58af1 Add rest of BMI/BMI2 instructions to the folding tables as well as popcnt and lzcnt.
llvm-svn: 170304
2012-12-17 05:02:29 +00:00
Craig Topper 5b08cf7736 Remove store forms of DEC/INC from isDefConvertible. Since they are stores they don't have a register def.
llvm-svn: 170303
2012-12-17 04:55:07 +00:00
Craig Topper 922f10aec4 Mark MOVDQ(A/U)rm as ReMaterializable. Mark all MOVDQ(A/U) instructions as neverHasSideEffects.
llvm-svn: 169477
2012-12-06 06:49:16 +00:00
Chandler Carruth ed0881b2a6 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

llvm-svn: 169131
2012-12-03 16:50:05 +00:00
Jakob Stoklund Olesen 9de596e650 Remove all references to TargetInstrInfoImpl.
This class has been merged into its super-class TargetInstrInfo.

llvm-svn: 168760
2012-11-28 02:35:17 +00:00
Manman Ren 5b4628201f X86: do not fold load instructions such as [V]MOVS[S|D] to other instructions
when the destination register is wider than the memory load.

These load instructions load from m32 or m64 and set the upper bits to zero,
while the folded instructions may accept m128.

rdar://12721174

llvm-svn: 168710
2012-11-27 18:09:26 +00:00
Craig Topper 3b530ea605 Remove alignments from folding tables for scalar FMA4 instructions.
llvm-svn: 167366
2012-11-04 04:40:08 +00:00
Craig Topper 8cd3b07a51 Add scalar forms of FMA4 VFNMSUB/VFNMADD to folding tables. Patch from Cameron McInally.
llvm-svn: 167106
2012-10-31 04:59:46 +00:00
Bill Wendling c9b22d735a Create enums for the different attributes.
We use the enums to query whether an Attributes object has that attribute. The
opaque layer is responsible for knowing where that specific attribute is stored.

llvm-svn: 165488
2012-10-09 07:45:08 +00:00
Craig Topper 9384902ef1 Move expansion of SETB_C(8/16/32/64)r from MCInstLower to ExpandPostRAPseudos and mark them as pseudos in the td file.
llvm-svn: 165302
2012-10-05 06:05:15 +00:00
Sylvestre Ledru 91ce36c986 Revert 'Fix a typo 'iff' => 'if''. iff is an abbreviation of if and only if. See: http://en.wikipedia.org/wiki/If_and_only_if Commit 164767
llvm-svn: 164768
2012-09-27 10:14:43 +00:00
Sylvestre Ledru 721cffd53a Fix a typo 'iff' => 'if'
llvm-svn: 164767
2012-09-27 09:59:43 +00:00
Bill Wendling 863bab689a Remove the `hasFnAttr' method from Function.
The hasFnAttr method has been replaced by querying the Attributes explicitly. No
intended functionality change.

llvm-svn: 164725
2012-09-26 21:48:26 +00:00
Michael Liao 2b425e1e24 Add SARX/SHRX/SHLX code generation support
llvm-svn: 164675
2012-09-26 08:26:25 +00:00
Michael Liao 2de86af22d Add RORX code generation support
llvm-svn: 164674
2012-09-26 08:24:51 +00:00
Michael Liao f9f7b5518a Add MULX code generation support
llvm-svn: 164673
2012-09-26 08:22:37 +00:00
Michael Liao 3237662b65 Re-work X86 code generation of atomic ops with spin-loop
- Rewrite/merge pseudo-atomic instruction emitters to address the
  following issue:
  * Reduce one unnecessary load in spin-loop

    previously the spin-loop looks like

        thisMBB:
        newMBB:
          ld  t1 = [bitinstr.addr]
          op  t2 = t1, [bitinstr.val]
          not t3 = t2  (if Invert)
          mov EAX = t1
          lcs dest = [bitinstr.addr], t3  [EAX is implicit]
          bz  newMBB
          fallthrough -->nextMBB

    the 'ld' at the beginning of newMBB should be lifted out of the loop,
    as lcs (or CMPXCHG on x86) will load the current memory value into
    EAX. This loop is refined as:

        thisMBB:
          EAX = LOAD [MI.addr]
        mainMBB:
          t1 = OP [MI.val], EAX
          LCMPXCHG [MI.addr], t1, [EAX is implicitly used & defined]
          JNE mainMBB
        sinkMBB:

  * Remove immopc as, so far, all pseudo-atomic instructions have
    all-register forms only; there is no immediate operand.

  * Remove unnecessary attributes/modifiers in pseudo-atomic instruction
    td

  * Fix issues in PR13458

- Add comprehensive tests on atomic ops on various data types.
  NOTE: Some of them are turned off due to missing functionality.

- Revise tests due to the new spin-loop generated.

llvm-svn: 164281
2012-09-20 03:06:15 +00:00
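
A source-level counterpart of the rewritten expansion above, as a sketch: compare_exchange_weak writes the observed memory value back into 'expected' on failure, just as CMPXCHG leaves the current value in EAX, so no extra load is needed inside the loop.

    #include <atomic>

    // One load before the loop (EAX = LOAD [MI.addr]); on a failed CMPXCHG the
    // observed value is already in 'expected', so the loop body reloads nothing.
    unsigned atomicOr(std::atomic<unsigned> &mem, unsigned val) {
      unsigned expected = mem.load();
      while (!mem.compare_exchange_weak(expected, expected | val))
        ; // retry with the value the failed CMPXCHG returned
      return expected;
    }
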
Jan Wen Voung 4ce1d7b4f1 Add some cases to x86 OptimizeCompare to handle DEC and INC, too.
While we are setting the earlier def to true, also make it live.

llvm-svn: 164056
2012-09-17 22:04:23 +00:00
Craig Topper 908e685102 Mark FMA4 instructions as commutable and add them to the folding tables.
llvm-svn: 163035
2012-08-31 23:10:34 +00:00
Craig Topper 7573c8f081 Add selection of RegOp2MemOpTable3 to canFoldMemoryOperand
llvm-svn: 163029
2012-08-31 22:12:16 +00:00
Craig Topper 72f51c3986 Convert V_SETALLONES/AVX_SETALLONES/AVX2_SETALLONES to Post-RA pseudos.
llvm-svn: 162740
2012-08-28 07:30:47 +00:00
Craig Topper bd509eea4a Merge AVX_SET0PSY/AVX_SET0PDY/AVX2_SET0 into a single post-RA pseudo.
llvm-svn: 162738
2012-08-28 07:05:28 +00:00
Jakob Stoklund Olesen 7030427623 Preserve operand flags in convertToThreeAddress() by copying operands.
No test case, this is a generalization of r160260.

llvm-svn: 162485
2012-08-23 22:36:31 +00:00
Craig Topper f911597494 Use a switch statement instead of a bunch of if-else checks and pull out the common function call.
llvm-svn: 162428
2012-08-23 04:57:36 +00:00
Craig Topper bab0c76674 Fix up indentation and remove a couple else's after returns.
llvm-svn: 162270
2012-08-21 08:29:51 +00:00
Craig Topper bfcfdeb563 Use uint16_t for tables of opcodes.
llvm-svn: 162267
2012-08-21 08:23:21 +00:00
Craig Topper a0cabf19f8 Fix up indentation. No functional change.
llvm-svn: 162264
2012-08-21 08:17:07 +00:00
Craig Topper 4bc3e5a1bf Add a couple llvm_unreachables. Add a message to several others.
llvm-svn: 162263
2012-08-21 08:16:16 +00:00
Craig Topper 653e759046 Replace a break with llvm_unreachable in the default case of a nested switch. Condense code a bit. No functional change.
llvm-svn: 162261
2012-08-21 07:32:16 +00:00
Craig Topper b58eec4eaf Remove FMA3 intrinsic instructions in favor of patterns.
llvm-svn: 162194
2012-08-20 06:21:25 +00:00
Manman Ren 959acb106b X86: move Int_CVTSD2SSrr, Int_CVTSI2SSrr, Int_CVTSI2SDrr, Int_CVTSS2SDrr from
OpTbl1 to OpTbl2 since they have 3 operands and the last operand can be changed
to a memory operand.

PR13576

llvm-svn: 161769
2012-08-13 18:29:41 +00:00
Manman Ren 1be131ba27 X86: enable CSE between CMP and SUB
We perform the following:
1> Use SUB instead of CMP for i8,i16,i32 and i64 in ISel lowering.
2> Modify MachineCSE to correctly handle implicit defs.
3> Convert SUB back to CMP if possible at peephole.

Removed pattern matching of (a>b) ? (a-b):0 and the like, since they are handled
by the peephole now.

rdar://11873276

llvm-svn: 161462
2012-08-08 00:51:41 +00:00
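
A rough illustration of the removed pattern-match mentioned above: the subtraction produces both the value and the flags the comparison needs, so once SUB is used for the compare, no separate CMP is required (expected codegen, not guaranteed).

    // (a > b) ? (a - b) : 0 -- a single SUB can feed both the conditional move
    // and the flags it tests, making an explicit CMP redundant.
    unsigned clampedDiff(unsigned a, unsigned b) {
      return a > b ? a - b : 0;
    }
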
Jakob Stoklund Olesen 3b9a442841 Don't scan physreg use-def chains looking for a PIC base.
We can't rematerialize a PIC base after register allocation anyway, and
scanning physreg use-def chains is very expensive in a function with
many calls.

<rdar://problem/12047515>

llvm-svn: 161461
2012-08-08 00:40:47 +00:00
Manman Ren 5759d01230 X86 Peephole: fold loads to the source register operand if possible.
Machine CSE and other optimizations can remove instructions so folding
is possible at peephole while not possible at ISel.

This patch is a rework of r160919 and was tested on clang self-host on my local
machine.

rdar://10554090 and rdar://11873276

llvm-svn: 161152
2012-08-02 00:56:42 +00:00
Elena Demikhovsky 3cb3b0045c Added FMA functionality to X86 target.
llvm-svn: 161110
2012-08-01 12:06:00 +00:00
Manman Ren f87dd7c01b Revert r160920 and r160919 due to dragonegg and clang selfhost failure
llvm-svn: 160927
2012-07-29 02:44:09 +00:00
Manman Ren 0fa3ab88ba X86 Peephole: fold loads to the source register operand if possible.
Machine CSE and other optimizations can remove instructions so folding
is possible at peephole while not possible at ISel.

rdar://10554090 and rdar://11873276

llvm-svn: 160919
2012-07-28 16:48:01 +00:00
Manman Ren 32367c063b X86 Peephole: fix PR13475 in optimizeCompare.
It is possible for an instruction to both use and update EFLAGS.
When checking safety, we should check the usage of EFLAGS before
declaring it safe to optimize based on the update.

llvm-svn: 160912
2012-07-28 03:15:46 +00:00
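
A rough example of an instruction that both uses and updates EFLAGS, the case the fix above guards against (unsigned __int128 is a GCC/Clang extension; the codegen in the comment is typical, not guaranteed):

    // Typically lowers to "addq" for the low word followed by "adcq" for the
    // high word; the ADC reads CF (a use of EFLAGS) and defines EFLAGS again.
    unsigned __int128 add128(unsigned __int128 a, unsigned __int128 b) {
      return a + b;
    }
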
Manman Ren d0a4ee8427 X86: remove redundant cmp against zero.
Updated OptimizeCompare in peephole to remove redundant cmp against zero.
We only remove Compare if CF and OF are not used.

rdar://11855129

llvm-svn: 160454
2012-07-18 21:40:01 +00:00
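
A rough example of the compare being removed: the subtraction already sets ZF, so an explicit compare of its result against zero is redundant as long as the consumer only needs ZF/SF and not CF or OF (expected codegen, not guaranteed).

    // "subl" sets ZF according to d, so the "== 0" test can reuse those flags
    // instead of emitting a separate "cmpl $0" on the result.
    bool diffIsZero(unsigned x, unsigned y) {
      unsigned d = x - y;
      return d == 0;
    }
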
Nadav Rotem 4968e45b9f Fix a bug in the 3-address conversion of LEA when one of the operands is an
undef virtual register. The problem is that ProcessImplicitDefs removes the
definition of the register and marks all uses as undef. If we lose the undef
marker then we get a register which has no def and is not marked as undef. The
live interval analysis does not collect information for these virtual
registers and we crash in later passes.

Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160260
2012-07-16 10:52:25 +00:00
Nadav Rotem ee3552f88d Rename VBROADCASTSDrm into VBROADCASTSDYrm to match the naming convention.
Allow the folding of vbroadcastRR to vbroadcastRM, where the memory operand is a spill slot.

PR12782.

Together with Michael Kuperstein <michael.m.kuperstein@intel.com>

llvm-svn: 160230
2012-07-15 12:26:30 +00:00
Benjamin Kramer abbfe69356 Make helper functions static.
llvm-svn: 160173
2012-07-13 13:25:15 +00:00
Manman Ren 1553ce0e81 X86: Update to peephole optimization to move Movr0 before (Sub, Cmp) pair.
When Movr0 is between sub and cmp, we move Movr0 before sub if it enables
removal of Cmp.

llvm-svn: 160066
2012-07-11 19:35:12 +00:00
Manman Ren 5f6fa428fa X86: implement functions to analyze & synthesize CMOV|SET|Jcc
getCondFromSETOpc, getCondFromCMovOpc, getSETFromCond, getCMovFromCond

No functional change intended.
If we want to update the condition code of CMOV|SET|Jcc, we first analyze the
opcode to get the condition code, then update the condition code, and finally
synthesize the new opcode from the new condition code.

llvm-svn: 159955
2012-07-09 18:57:12 +00:00
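
A self-contained sketch of the analyze/synthesize idea, using toy enums rather than the real X86 opcode and condition-code sets: decode the condition from the opcode, rewrite the condition, then re-encode an opcode from it. A peephole that wants to change the condition simply updates the Cond value between the two calls.

    // Toy stand-ins for the X86 SETcc opcodes and condition codes.
    enum Opc { SETE, SETNE, SETL, SETGE, BAD_OPC };
    enum Cond { COND_E, COND_NE, COND_L, COND_GE, COND_INVALID };

    Cond getCondFromSet(Opc o) {            // analyze: opcode -> condition
      switch (o) {
      case SETE:  return COND_E;
      case SETNE: return COND_NE;
      case SETL:  return COND_L;
      case SETGE: return COND_GE;
      default:    return COND_INVALID;
      }
    }

    Opc getSetFromCond(Cond c) {            // synthesize: condition -> opcode
      switch (c) {
      case COND_E:  return SETE;
      case COND_NE: return SETNE;
      case COND_L:  return SETL;
      case COND_GE: return SETGE;
      default:      return BAD_OPC;
      }
    }
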
Manman Ren bb36074047 X86: Fix optimizeCompare to correctly check the safety condition.
It is safe if EFLAGS is killed or re-defined.
When we are done with the basic block, check whether EFLAGS is live-out.
Do not optimize away cmp if EFLAGS is live-out.

llvm-svn: 159888
2012-07-07 03:34:46 +00:00
Manman Ren c965673707 X86: peephole optimization to remove cmp instruction
For each Cmp, we check whether there is an earlier Sub which makes the Cmp
redundant. We handle the case where SUB operates on the same source operands as
Cmp, including the case where the two source operands are swapped.

llvm-svn: 159838
2012-07-06 17:36:20 +00:00
Jakob Stoklund Olesen 49e4d4b3ef Add early if-conversion support to X86.
Implement the TII hooks needed by EarlyIfConversion to create cmov
instructions and estimate their latency.

Early if-conversion is still not enabled by default.

llvm-svn: 159695
2012-07-04 00:09:58 +00:00
Craig Topper b6eb513c68 Remove codegen only instruction in favor of one that has the same definition. Make some pattern operands more explicit about types.
llvm-svn: 159126
2012-06-25 06:16:00 +00:00
Craig Topper fd5e6e7db1 Remove intrinsic specific instructions for (V)CVTPS2DQ and replace with patterns.
llvm-svn: 159109
2012-06-24 07:07:16 +00:00
Craig Topper b925230fb1 Remove intrinsic specific instructions for (V)CVTPS2DQ and replace with patterns.
llvm-svn: 159108
2012-06-24 06:55:37 +00:00
Craig Topper f48ec7a708 Fix build failures from r159106.
llvm-svn: 159107
2012-06-24 06:08:31 +00:00
Craig Topper 3cee08ce7d Remove intrinsic specific instructions for CVTPD2DQ. Replace with patterns.
llvm-svn: 159105
2012-06-24 05:33:24 +00:00
Craig Topper a899cc15f1 Remove intrinsic specific instructions for (V)CVTDQ2PS. Use a Pat instead.
llvm-svn: 159090
2012-06-23 22:33:14 +00:00
Craig Topper 1cac50bc5e Compress flags in X86 op folding to reduce space in static tables.
llvm-svn: 159073
2012-06-23 08:01:18 +00:00
Craig Topper 431f1e7192 Remove intrinsic specific instructions for 128-bit (V)CVTDQ2PD. Replace with intrinsic patterns. Mem forms omitted because the load size is only 64-bits.
llvm-svn: 159070
2012-06-23 04:23:36 +00:00
Craig Topper 11913052d6 Move AVX version of convert instructions that write to GPRs to the Op1 table.
llvm-svn: 158497
2012-06-15 07:02:58 +00:00
Pete Cooper 8bbce768d8 Move X86::VCVTTSD2SIrr from the 2 operand to 1 operand MemRegOp table.
Can someone with more knowledge of this please look at other entries
to see if others need to be moved.

llvm-svn: 158474
2012-06-14 22:12:58 +00:00
Manman Ren 9c9641812c Revert r157755.
The commit is intended to fix rdar://11540023.
It was implemented as part of the peephole optimization, but we can actually
implement this in the SelectionDAG lowering phase instead.

llvm-svn: 158122
2012-06-06 23:53:03 +00:00
Benjamin Kramer 628a39faa3 Remove unused private fields found by clang's new -Wunused-private-field.
There are some that I didn't remove this round because they looked like
obvious stubs. There are dead variables in gtest too, they should be
fixed upstream.

llvm-svn: 158090
2012-06-06 18:25:08 +00:00
Craig Topper c6ac4cefcc Add intrinsic forms for FMA instructions to opcode folding tables.
llvm-svn: 157917
2012-06-04 07:46:16 +00:00
Craig Topper 3cb143016d Add VFMADDSUB and VFMSUBADD FMA instructions to folding tables. Also add 213 forms of scalar FMA instructions.
llvm-svn: 157914
2012-06-04 07:08:21 +00:00
Manman Ren 5097e4f38a Revert r157831
llvm-svn: 157896
2012-06-03 03:14:24 +00:00
Manman Ren 879ca9d47d X86: peephole optimization to remove cmp instruction
This patch will optimize the following:
  sub r1, r3
  cmp r3, r1 or cmp r1, r3
  bge L1
TO
  sub r1, r3
  bge L1 or ble L1

If the branch instruction can use the flags from "sub", then we can eliminate
the "cmp" instruction.

llvm-svn: 157831
2012-06-01 19:49:33 +00:00
Hans Wennborg 789acfb63d Implement the local-dynamic TLS model for x86 (PR3985)
This implements codegen support for accesses to thread-local variables
using the local-dynamic model, and adds a clean-up pass so that the base
address for the TLS block can be re-used between local-dynamic accesses on
an execution path.

llvm-svn: 157818
2012-06-01 16:27:21 +00:00
Craig Topper 2e127b5274 Add VFNSUB* instructions to folding table.
llvm-svn: 157802
2012-06-01 05:48:39 +00:00
Manman Ren 9bccb64e56 X86: replace SUB with CMP if possible
This patch will optimize the following
        movq    %rdi, %rax
        subq    %rsi, %rax
        cmovsq  %rsi, %rdi
        movq    %rdi, %rax
to
        cmpq    %rsi, %rdi
        cmovsq  %rsi, %rdi
        movq    %rdi, %rax

Perform this optimization if the actual result of SUB is not used.

rdar: 11540023
llvm-svn: 157755
2012-05-31 17:20:29 +00:00
Elena Demikhovsky 602f3a26d6 Added FMA3 Intel instructions.
I disabled FMA3 autodetection, since the result may differ from the expected one for some benchmarks.
I added tests for CodeGen and intrinsics.
I did not change llvm.fma.f32/64 - it may be done later.

llvm-svn: 157737
2012-05-31 09:20:20 +00:00
Jakob Stoklund Olesen 38dcd598f9 Make the global base reg GR32_NOSP.
It can sometimes be used in addressing modes that don't support %ESP.

llvm-svn: 157165
2012-05-20 18:43:00 +00:00
Jakob Stoklund Olesen 3c52f0281f Add an MF argument to TRI::getPointerRegClass() and TII::getRegClass().
The getPointerRegClass() hook can return register classes that depend on
the calling convention of the current function (ptr_rc_tailcall).

So far, we have been able to infer the calling convention from the
subtarget alone, but as we add support for multiple calling conventions
per target, that no longer works.

Patch by Yiannis Tsiouris!

llvm-svn: 156328
2012-05-07 22:10:26 +00:00
Craig Topper abadc660e0 Convert some uses of XXXRegisterClass to &XXXRegClass. No functional change since they are equivalent.
llvm-svn: 155186
2012-04-20 06:31:50 +00:00
Elena Demikhovsky 779a72b49e Added VPERM optimization for AVX2 shuffles
llvm-svn: 154761
2012-04-15 11:18:59 +00:00
Craig Topper b25fda95f6 Reorder includes in Target backends to following coding standards. Remove some superfluous forward declarations.
llvm-svn: 152997
2012-03-17 18:46:09 +00:00
Craig Topper 2dac962864 Use uint16_t to store opcodes in static tables in X86 backend.
llvm-svn: 152391
2012-03-09 07:45:21 +00:00
Craig Topper 760b134ffa Make all pointers to TargetRegisterClass const since they are all pointers to static data that should not be modified.
llvm-svn: 151134
2012-02-22 05:59:10 +00:00
Jia Liu b22310fda6 Emacs-tag and some comment fix for all ARM, CellSPU, Hexagon, MBlaze, MSP430, PPC, PTX, Sparc, X86, XCore.
llvm-svn: 150878
2012-02-18 12:03:15 +00:00
Jakob Stoklund Olesen 97e3115dc2 Use the same CALL instructions for Windows as for everything else.
The different calling conventions and call-preserved registers are
represented with regmask operands that are added dynamically.

llvm-svn: 150708
2012-02-16 17:56:02 +00:00
Jakob Stoklund Olesen 4519fd0b21 Handle register masks when searching for EFLAGS clobbers.
Calls clobber the flags, but when using register masks there is no
EFLAGS<imp-def> operand.

llvm-svn: 150117
2012-02-09 00:17:22 +00:00
Craig Topper 7834900950 Custom lower PSIGN and PSHUFB intrinsics to their corresponding target specific nodes so we can remove the isel patterns.
llvm-svn: 148933
2012-01-25 06:43:11 +00:00
Craig Topper ce4f9c5668 Custom lower phadd and phsub intrinsics to target specific nodes. Remove the patterns that are no longer necessary.
llvm-svn: 148927
2012-01-25 05:37:32 +00:00
David Blaikie 46a9f016c5 More dead code removal (using -Wunreachable-code)
llvm-svn: 148578
2012-01-20 21:51:11 +00:00
Craig Topper a875b7ccc7 Folding table additions and fixes for AVX.
llvm-svn: 148467
2012-01-19 08:50:38 +00:00
Craig Topper d78429f850 Add a bunch of AVX instructions to the folding tables. Also fixed the alignment on 256-bit AVX2 instructions.
llvm-svn: 148194
2012-01-14 18:14:53 +00:00
Craig Topper e52d86a740 Convert SHUFPD with the same register for both sources to PSHUFD if it would prevent a register copy. Similar to SHUFPS, but requires the mask to be converted.
llvm-svn: 148112
2012-01-13 09:21:41 +00:00
Craig Topper cb7e13d7c0 Make X86 instruction selection use 256-bit VPXOR for build_vector of all ones if AVX2 is enabled. This gives the ExeDepsFix pass a chance to choose FP vs int as appropriate. Also use v8i32 as the type for getZeroVector if AVX2 is enabled. This is consistent with SSE2 preferring v4i32.
llvm-svn: 148108
2012-01-13 08:12:35 +00:00
Craig Topper a4c5a47b97 Use 8i32 constant pool entry for converting AVX2_SETALLONES. Possibly fixes PR11750.
llvm-svn: 148101
2012-01-13 06:12:41 +00:00
Evan Cheng 7fae11b231 - Add MachineInstrBundle.h and MachineInstrBundle.cpp. This includes a function
to finalize MI bundles (i.e. adding the BUNDLE instruction and computing the register def
  and use lists of the BUNDLE instruction) and a pass to unpack bundles.
- Teach more of MachineBasicBlock and MachineInstr methods to be bundle aware.
- Switch Thumb2 IT block to MI bundles and delete the hazard recognizer hack to
  prevent IT blocks from being broken apart.

llvm-svn: 146542
2011-12-14 02:11:42 +00:00
Benjamin Kramer 2dc5dec41d X86: Split (v)rounds[sd] into a normal and an intrinsic version.
llvm-svn: 146256
2011-12-09 15:43:55 +00:00
Evan Cheng 7f8e563a69 Add bundle aware API for querying instruction properties and switch the code
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.

For properties like mayLoad / mayStore, look into the bundle and if any of the
bundled instructions has the property it would return true.
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad, isCompare, conservatively return false for
bundles.

llvm-svn: 146026
2011-12-07 07:15:52 +00:00
Jakob Stoklund Olesen bde32d36bb Make X86::FsFLD0SS / FsFLD0SD real pseudo-instructions.
Like V_SET0, these instructions are expanded by ExpandPostRA to xorps /
vxorps so they can participate in execution domain swizzling.

This also makes the AVX variants redundant.

llvm-svn: 145440
2011-11-29 22:27:25 +00:00
Craig Topper 12b72def4e Fix VINSERTF128/VEXTRACTF128 to be marked as FP instructions. Allow execution dependency fix pass to convert them to their integer equivalents when AVX2 is enabled.
llvm-svn: 145376
2011-11-29 05:37:58 +00:00
Craig Topper 897a7d4b9c Correctly mark VPERM2F128 as being an FP instruction and add execution domain fixing support to convert it to VPERM2I128 for AVX2.
llvm-svn: 145370
2011-11-29 03:57:34 +00:00
Jakob Stoklund Olesen 02845410f9 Fix PR11422.
This was a bug in keeping track of the available domains when merging
domain values.

The wrong domain mask caused ExecutionDepsFix to try to move VANDPSYrr
to the integer domain which is only available in AVX2.

Also add an assertion to catch future attempts at emitting AVX2
instructions.

llvm-svn: 145096
2011-11-23 04:03:08 +00:00
Craig Topper a3a6583694 Use 256-bit vcmpeqd for creating an all ones vector when AVX2 is enabled.
llvm-svn: 145004
2011-11-19 22:34:59 +00:00
Jay Foad 0745e645e0 Remove some unnecessary includes of PseudoSourceValue.h.
llvm-svn: 144631
2011-11-15 07:24:32 +00:00
Craig Topper 649d1c5eec Fix PR11370 for real. Prevents converting 256-bit FP instruction to AVX2 256-bit integer instructions when AVX2 isn't enabled.
llvm-svn: 144629
2011-11-15 06:39:01 +00:00
Craig Topper 05baa85f58 Properly qualify AVX2 specific parts of execution dependency table. Also enable converting between 256-bit PS/PD operations when AVX1 is enabled. Fixes PR11370.
llvm-svn: 144622
2011-11-15 05:55:35 +00:00
Jakob Stoklund Olesen f8ad336bc4 Break false dependencies before partial register updates.
Two new TargetInstrInfo hooks let the target tell ExecutionDepsFix
about instructions with partial register updates causing unwanted false
dependencies.

The ExecutionDepsFix pass will break the false dependencies if the
updated register was written in the previous N instructions.

The small loop added to sse-domains.ll runs twice as fast with
dependency-breaking instructions inserted.

llvm-svn: 144602
2011-11-15 01:15:30 +00:00
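
A rough example of the partial-register-update problem the hooks above describe: CVTSI2SS writes only the low lane of its XMM destination, so without a dependency-breaking instruction each iteration carries a false dependency on the register's previous contents (expected codegen, not guaranteed).

    // Each conversion partially updates an XMM register; inserting a
    // dependency-breaking xorps on the destination (as the pass can now do)
    // decouples the loop iterations.
    void convertAll(const int *in, float *out, int n) {
      for (int i = 0; i < n; ++i)
        out[i] = static_cast<float>(in[i]);   // cvtsi2ss: partial XMM write
    }
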
Craig Topper 182b00a2e0 Add AVX2 version of instructions to load folding tables. Also add a bunch of missing SSE/AVX instructions.
llvm-svn: 144525
2011-11-14 08:07:55 +00:00
Craig Topper f87a2bef51 Enable execution dependency fix pass for YMM registers when AVX2 is enabled. Add AVX2 logical operations to list of replaceable instructions.
llvm-svn: 144179
2011-11-09 09:37:21 +00:00
Jakob Stoklund Olesen 0241308954 Expand V_SET0 to xorps by default.
The xorps instruction is smaller than pxor, so prefer that encoding.

The ExecutionDepsFix pass will switch the encoding to pxor and xorpd
when appropriate.

llvm-svn: 143996
2011-11-07 19:15:58 +00:00
Jakob Stoklund Olesen 729abd360e Add TEST8ri_NOREX pseudo to constrain sub_8bit_hi copies.
In 64-bit mode, sub_8bit_hi sub-registers can only be used by NOREX
instructions. The COPY created from the EXTRACT_SUBREG DAG node cannot
target all GR8 registers, only those in GR8_NOREX.

To enforce this, we ensure that all instructions using the
EXTRACT_SUBREG are GR8_NOREX constrained.

This fixes PR11088.

llvm-svn: 141499
2011-10-08 18:28:28 +00:00
Jakob Stoklund Olesen 464fcc0035 Constrain both operands on MOVZX32_NOREXrr8.
This instruction is explicitly encoded without a REX prefix, so both
operands must be *_NOREX.

Also add an assertion to copyPhysReg() that fires when the MOV8rr_NOREX
constraints are not satisfied.

This fixes a miscompilation in 20040709-2 in the gcc test suite.

llvm-svn: 141410
2011-10-07 20:15:54 +00:00
Jakob Stoklund Olesen dd1904e7a6 Expand the x86 V_SET0* pseudos right after register allocation.
This also makes it possible to reduce the number of pseudo instructions
and get rid of the encoding information.

llvm-svn: 140776
2011-09-29 05:10:54 +00:00
Jakob Stoklund Olesen b48c994cc0 Promote the X86 Get/SetSSEDomain functions to TargetInstrInfo.
I am going to unify the SSEDomainFix and NEONMoveFix passes into a
single target independent pass.  They are essentially doing the same
thing.

llvm-svn: 140652
2011-09-27 22:57:18 +00:00
Jakob Stoklund Olesen f05864ad7d Add support for GR32 <-> FR32 cross class copies.
We already support GR64 <-> VR128 copies.  All of these copies break
partial register dependencies by zeroing the high part of the target
register.

llvm-svn: 140348
2011-09-22 22:45:24 +00:00
Bruno Cardoso Lopes 7b43568a93 Add a fixme note!
llvm-svn: 139872
2011-09-15 23:04:24 +00:00
Bruno Cardoso Lopes c69d68a150 Add the remaining AVX versions of instructions to X86InstrInfo, this
time for describing high-latency ones and for recognizing loads
from the same base pointer

llvm-svn: 139864
2011-09-15 22:15:52 +00:00
Bruno Cardoso Lopes 6b302955b1 Factor out partial register update checks for some SSE instructions.
Also add the AVX versions and add comments!

llvm-svn: 139854
2011-09-15 21:42:23 +00:00
Bruno Cardoso Lopes d560b8c8e9 Teach the foldable tables about 128-bit AVX instructions and make the
alignment check for 256-bit classes more strict. There are no testcases,
but we catch more folding cases for AVX while running single and multi
sources in the llvm testsuite.

Since some 128-bit AVX instructions have different number of operands
than their SSE counterparts, they are placed in different tables.

256-bit AVX instructions should also be added in the table soon. And
there are a few more 128-bit versions to be handled, which should come in
the following commits.

llvm-svn: 139687
2011-09-14 02:36:58 +00:00
Bruno Cardoso Lopes 23eb5265b4 * Combines Alignment, AuxInfo, and TB_NOT_REVERSABLE flag into a
single field (Flags), which is a bitwise OR of items from the TB_*
enum. This makes it easier to add new information in the future.

* Gives every static array an equivalent layout: { RegOp, MemOp, Flags }

* Adds a helper function, AddTableEntry, to avoid duplication of the
insertion code.

* Renames TB_NOT_REVERSABLE to TB_NO_REVERSE.

* Adds TB_NO_FORWARD, which is analogous to TB_NO_REVERSE, except that
it prevents addition of the Reg->Mem entry. (This is going to be used
by Native Client, in the next CL).

Patch by David Meyer

llvm-svn: 139311
2011-09-08 18:35:57 +00:00
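
A self-contained sketch of the table layout described above; the flag names and bit values here are illustrative, not the exact ones used in X86InstrInfo.cpp.

    #include <cstdint>

    // Alignment, auxiliary info, and the no-forward/no-reverse bits packed
    // into one Flags word so every static table row has the same shape.
    enum : uint16_t {
      TB_NO_REVERSE = 1 << 0,   // do not add the Mem->Reg (unfold) mapping
      TB_NO_FORWARD = 1 << 1,   // do not add the Reg->Mem (fold) mapping
      TB_ALIGN_16   = 1 << 2,   // memory operand must be 16-byte aligned
    };

    struct FoldTableEntry {
      uint16_t RegOp;   // register-form opcode
      uint16_t MemOp;   // memory-form opcode
      uint16_t Flags;   // bitwise OR of TB_* values
    };
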
Bruno Cardoso Lopes aad5e50ded Add AVX versions of FsMOVAPD and FsMOVAPS. Teach X86InstrInfo how to use
them!

llvm-svn: 139063
2011-09-03 00:46:45 +00:00
Jakob Stoklund Olesen f08354d183 Check for EFLAGS live-out before clobbering it.
It is only allowed to clobber EFLAGS at the end of a block if it isn't
live-in to any successor.

llvm-svn: 139056
2011-09-02 23:52:52 +00:00
Bruno Cardoso Lopes db520db514 Teach more places to use VMOVAPS,VMOVUPS instead of MOVAPS,MOVUPS,
whenever AVX is enabled.

llvm-svn: 138849
2011-08-31 03:04:09 +00:00
Bruno Cardoso Lopes dbd1352c80 Cleanup: Remove Int_ CVTSS2SI* forms
llvm-svn: 137297
2011-08-11 02:52:36 +00:00
Jakob Stoklund Olesen daa2cad723 Hoist hasLoadFromStackSlot and hasStoreToStackSlot.
These methods are target-independent since they simply scan the
memory operands.  They can live in TargetInstrInfoImpl.

llvm-svn: 137063
2011-08-08 20:53:24 +00:00
Bruno Cardoso Lopes 9212bf275d Codegen allonesvector better while using AVX: vpcmpeqd + vinsertf128
This also fixes PR10452

llvm-svn: 136004
2011-07-25 23:05:32 +00:00
Evan Cheng 7e763d86ba Refactor X86 target to separate MC code from Target code.
llvm-svn: 135930
2011-07-25 18:43:53 +00:00
Bruno Cardoso Lopes a89039998d Fix PR10422 by adding the necessary AVX UCOMISD memory versions to
load folding logic

llvm-svn: 135801
2011-07-22 20:53:20 +00:00
Chris Lattner 229907cd11 land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Evan Cheng bc153d49b7 Next round of MC refactoring. This patch factors MC table instantiations, MC
registration and creation code into XXXMCDesc libraries.

llvm-svn: 135184
2011-07-14 20:59:42 +00:00
Bruno Cardoso Lopes 6778597deb Add 256-bit load/store recognition and matching in several places.
llvm-svn: 135171
2011-07-14 18:50:58 +00:00
Evan Cheng 703a0fbf39 Hide the call to InitMCInstrInfo into tblgen generated ctor.
llvm-svn: 134244
2011-07-01 17:57:27 +00:00
Evan Cheng 194c3dc01f Move CallFrameSetupOpcode and CallFrameDestroyOpcode to TargetInstrInfo.
llvm-svn: 134030
2011-06-28 21:14:33 +00:00
Evan Cheng 1e210d08d8 Merge XXXGenRegisterNames.inc into XXXGenRegisterInfo.inc
llvm-svn: 134024
2011-06-28 20:07:07 +00:00
Evan Cheng 6cc775f905 - Rename TargetInstrDesc, TargetOperandInfo to MCInstrDesc and MCOperandInfo and
sink them into the MC layer.
- Added MCInstrInfo, which captures the tablegen-generated static data. Change
TargetInstrInfo so it's based off MCInstrInfo.

llvm-svn: 134021
2011-06-28 19:10:37 +00:00
Evan Cheng 8d71a75777 More refactoring. Move getRegClass from TargetOperandInfo to TargetInstrInfo.
llvm-svn: 133944
2011-06-27 21:26:13 +00:00
Evan Cheng ee9b90a727 Get rid of one getStackAlignment(). RegisterInfo shouldn't need to know about stack alignment.
llvm-svn: 133679
2011-06-23 01:53:43 +00:00
Rafael Espindola defd4b0875 AnalyzeBranch doesn't change which successors a bb has, just the order
we try to branch to them.

Before we were creating successor lists with duplicated entries. Fixing that
found a bug in isBlockOnlyReachableByFallthrough that would cause it to
return the wrong answer for

-----------
...
jne foo
jmp bar

foo:
----------

llvm-svn: 132882
2011-06-12 03:20:32 +00:00
Eli Friedman 87ef38784e PR10092 (second try): Don't crash on a load without a memoperand; fast-isel creates loads like this.
llvm-svn: 132826
2011-06-10 01:13:01 +00:00
Eli Friedman 9008377c2d Revert 132789; it breaks tests. My mistake.
llvm-svn: 132795
2011-06-09 19:33:30 +00:00
Eli Friedman c095116710 Add a check to make sure we don't crash with strange configurations where we do fast-isel, then try to fold instructions. PR10092.
llvm-svn: 132789
2011-06-09 18:55:00 +00:00
Jakob Stoklund Olesen 56ce3a0f01 Fix PR10059 and future variations by handling all register subclasses.
Add TargetRegisterInfo::hasSubClassEq and use it to check for compatible
register classes instead of trying to list all register classes in
X86's getLoadStoreRegOpcode.

llvm-svn: 132398
2011-06-01 15:32:10 +00:00
Jakob Stoklund Olesen 2348cdd67f X86AsmPrinter doesn't know how to handle the X86II::MO_GOT_ABSOLUTE_ADDRESS flag
after folding ADD32ri to ADD32mi, so don't do that.

This only happens when the greedy register allocator gets itself in trouble and
spills %vreg9 here:

16L             %vreg9<def> = MOVPC32r 0, %ESP<imp-use>; GR32:%vreg9
48L             %vreg9<def> = ADD32ri %vreg9, <es:_GLOBAL_OFFSET_TABLE_>[TF=1], %EFLAGS<imp-def,dead>; GR32:%vreg9

That should never happen, the live range should be split instead.

llvm-svn: 130625
2011-04-30 23:00:05 +00:00
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Bill Wendling b902f1dd88 Reapply r129401 with patch for clang.
llvm-svn: 129419
2011-04-13 00:36:11 +00:00
Bill Wendling dbfde42468 Revert r129401 for now. Clang is using the old way of doing things.
llvm-svn: 129403
2011-04-12 22:59:27 +00:00
Bill Wendling 47c24875a1 Remove the unaligned load intrinsics in favor of using native unaligned loads.
Now that we have a first-class way to represent unaligned loads, the unaligned
load intrinsics are superfluous.

First part of <rdar://problem/8460511>.

llvm-svn: 129401
2011-04-12 22:46:31 +00:00
Andrew Trick 641e2d4f8c Increased the register pressure limit on x86_64 from 8 to 12
regs. This is the only change in this checkin that may affect the
default scheduler. With better register tracking and heuristics, it
doesn't make sense to artificially lower the register limit so much.

Added -sched-high-latency-cycles and X86InstrInfo::isHighLatencyDef to
give the scheduler a way to account for div and sqrt on targets that
don't have an itinerary. It currently defaults to 10 (the actual
number doesn't matter much), but only takes effect on non-default
schedulers: list-hybrid and list-ilp.

Added several heuristics that can be individually disabled for the
non-default sched=list-ilp mode. This helps us determine how much
better we can do on a given benchmark than the default
scheduler. Certain compute intensive loops run much faster in this
mode with the right set of heuristics, and it doesn't seem to have
much negative impact elsewhere. Not all of the heuristics are needed,
but we still need to experiment to decide which should be disabled by
default for sched=list-ilp.

llvm-svn: 127067
2011-03-05 08:00:22 +00:00
Evan Cheng 3923466e82 Fix bug in X86 folding / unfolding table. Int_CMPSDrm and Int_CMPSSrm memory
operands start at index 2, not 1.
rdar://9045024
PR9305

llvm-svn: 126359
2011-02-24 02:36:52 +00:00
NAKAMURA Takumi 0cfdac078e Target/X86: Tweak win64's tailcall.
llvm-svn: 124272
2011-01-26 02:04:09 +00:00
NAKAMURA Takumi 9d29eff198 Fix whitespace.
llvm-svn: 124270
2011-01-26 02:03:37 +00:00
Nate Begeman 073901c836 Add support for AVX to materialize +0.0 when doing scalar FP.
llvm-svn: 121415
2010-12-09 21:43:51 +00:00
Anton Korobeynikov d08fbd19f5 Move callee-saved regs spills / reloads to TFI
llvm-svn: 120228
2010-11-27 23:05:03 +00:00
Evan Cheng 63c7608c34 Re-enable register pressure aware machine licm with fixes. Hoist() may have
erased the instruction during LICM so UpdateRegPressureAfter() should not
reference it afterwards.

llvm-svn: 116845
2010-10-19 18:58:51 +00:00
Daniel Dunbar 418204e523 Revert r116781 "- Add a hook for target to determine whether an instruction def
is", which breaks some nightly tests.

llvm-svn: 116816
2010-10-19 17:14:24 +00:00
Evan Cheng 8249dfe6ce - Add a hook for target to determine whether an instruction def is
"long latency" enough to hoist even if it may increase spilling. Reloading
  a value from a spill slot is often cheaper than performing an expensive
  computation in the loop. For X86, that means machine LICM will hoist
  SQRT, DIV, etc. ARM will be somewhat aggressive with VFP and NEON
  instructions.
- Enable register pressure aware machine LICM by default.

llvm-svn: 116781
2010-10-19 00:55:07 +00:00
Jakob Stoklund Olesen aec745326a Remove the x86 MOV{32,64}{rr,rm,mr}_TC instructions.
The reg-reg copies were no longer being generated since copyPhysReg copies
physical registers only.

The loads and stores are not necessary - the TC constraint is imposed by the
TAILJMP and TCRETURN instructions, so there should be no need for constrained loads
and stores.

llvm-svn: 116314
2010-10-12 17:15:00 +00:00
Chris Lattner dd77477690 reapply: Use the new TB_NOT_REVERSABLE flag instead of special
reapply: reimplement the second half of the or/add optimization.  We should now

with no changes.  Turns out that one missing "Defs = [EFLAGS]" can upset things
a bit.

llvm-svn: 116040
2010-10-08 03:57:25 +00:00
Chris Lattner 626656a562 reapply the patch reverted in r116033:
"Reimplement (part of) the or -> add optimization.  Matching 'or' into 'add'"

With a critical fix: the add pseudos clobber EFLAGS.

llvm-svn: 116039
2010-10-08 03:54:52 +00:00
Daniel Dunbar 8f21f9c1fb Revert "Reimplement (part of) the or -> add optimization. Matching 'or' into
'add'", which seems to have broken just about everything.

llvm-svn: 116033
2010-10-08 02:07:32 +00:00
Daniel Dunbar 5b2a411c77 Revert "Use the new TB_NOT_REVERSABLE flag instead of special ", which depends
on r116007, which I am about to revert.

llvm-svn: 116032
2010-10-08 02:07:29 +00:00
Daniel Dunbar efdf08b5b8 Revert "reimplement the second half of the or/add optimization. We should now",
which depends on r116007, which I am about to revert.

llvm-svn: 116031
2010-10-08 02:07:26 +00:00
Chris Lattner 134f415bf8 reimplement the second half of the or/add optimization. We should now
only end up emitting LEA instead of OR.  If we aren't able to promote
something into an LEA, we should never be emitting it as an ADD.

Add some testcases that we emit "or" in cases where we used to produce
an "add".

llvm-svn: 116026
2010-10-08 01:05:10 +00:00
Chris Lattner e2245542ce Use the new TB_NOT_REVERSABLE flag instead of special
casing FsMOVAPDrr/FsMOVAPSrr.

llvm-svn: 116016
2010-10-08 00:03:02 +00:00
Chris Lattner 0921bfdf36 simplify some map operations.
llvm-svn: 116014
2010-10-07 23:57:02 +00:00
Chris Lattner 4fb38d3cd3 Reimplement (part of) the or -> add optimization. Matching 'or' into 'add'
is general goodness because it allows ORs to be converted to LEA to avoid
inserting copies.  However, this is bad because it makes the generated .s
file less obvious and gives valgrind heartburn (tons of false positives in
bitfield code).

While the general fix should be in valgrind, we can at least try to avoid
emitting ADD instructions that *don't* get promoted to LEA.  This is more
work because it requires introducing pseudo instructions to represents
"add that knows the bits are disjoint", but hey, people really love valgrind.

This fixes this testcase:
https://bugs.kde.org/show_bug.cgi?id=242137#c20

the add r/i cases are coming next.

llvm-svn: 116007
2010-10-07 23:36:18 +00:00
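
A rough example of why the transformation is legal and profitable: when the set bits are provably disjoint, OR and ADD compute the same value, and expressed as an add the whole computation can fold into an LEA (the exact codegen is up to the compiler).

    // The low three bits of (index << 3) are known to be zero, so OR-ing in 3
    // equals adding 3; as an add it can become a single LEA such as
    // "leal 3(,%rdi,8), %eax" instead of a shift followed by an OR.
    unsigned makeTag(unsigned index) {
      return (index << 3) | 3;
    }
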
Chris Lattner 1c090c00bc Reduce casting in various tables by defining the table
with the right types.

llvm-svn: 116001
2010-10-07 23:08:41 +00:00
Chris Lattner 70a7b54f97 simplify code: don't build up vector only to assert it is empty.
llvm-svn: 115997
2010-10-07 22:26:19 +00:00
Jakob Stoklund Olesen b19bae4e3e Constrain the offset register to a *_NOSP register class when inserting LEA
instructions.

This unbreaks the machine code verifier and fixes PR8317.

llvm-svn: 115879
2010-10-07 00:07:26 +00:00
Chris Lattner 1a1c600110 Use #NAME# to have the CMOV multiclass define things with the same names as before
(e.g. CMOVBE16rr instead of CMOVBErr16).

llvm-svn: 115705
2010-10-05 23:00:14 +00:00
Chris Lattner 0067ee02f9 switch CMOVBE to the multipattern:
21 insertions(+), 53 deletions(-)

Moar change coming before I switch the rest.

llvm-svn: 115697
2010-10-05 22:23:58 +00:00
Chris Lattner f60062fd55 add basic avx support to the disassembler, also teach it about ssmem/sdmem
operands.

With this done, we can remove the _Int suffixes from the round instructions
without the disassembler blowing up.  This allows the assembler to support
them, implementing rdar://8456376 - llvm-mc rejects 'roundss'

llvm-svn: 115019
2010-09-29 02:57:56 +00:00
Chris Lattner ff3a3930a0 add asmparser support for cvttpd2dq by removing some Int_ prefixes.
Clean up cvttps2dq by removing some redundant implementations of the
same instruction.  rdar://8456382

llvm-svn: 115018
2010-09-29 02:36:32 +00:00
Chris Lattner ef1c2fc305 implement rdar://8456382 - cvtsd2si support, by removing some Int_ prefixes.
llvm-svn: 115017
2010-09-29 02:24:57 +00:00
Chris Lattner 37fc469f88 fix rdar://8456412 - llvm-mc crash in encoder on "mov %rdx, %cr8"
Teaching the code generator about CR8-15, how to rex them up, etc.

llvm-svn: 114533
2010-09-22 05:29:50 +00:00
Dan Gohman 534db8a5c8 Avoid emitting a PIC base register if no PIC addresses are needed.
This fixes rdar://8396318.

llvm-svn: 114201
2010-09-17 20:24:24 +00:00
Anton Korobeynikov c0b36921c2 Properly handle passing of FP stuff to a varargs function on Win64: the
value should be copied to the corresponding shadow reg as well.
Patch by Cameron Esfahani!

llvm-svn: 112262
2010-08-27 14:43:06 +00:00
Anton Korobeynikov 88c09879c7 Revert part of one of the prev. patches - tailjmp will follow later.
llvm-svn: 111291
2010-08-17 21:08:28 +00:00
Anton Korobeynikov cd78af6e3c Enable more win64 calls folding opportunities.
Patch by Cameron Esfahani!

llvm-svn: 111288
2010-08-17 21:06:01 +00:00
Bruno Cardoso Lopes 7f704b31a9 - Teach SSEDomainFix to switch between different levels of AVX instructions. Here we guess that AVX will have domain issues, so just implement them for consistency, and we can remove this in the future if it's unnecessary.
- Make foldMemoryOperandImpl aware of 256-bit zero vectors folding and support the 128-bit counterparts of AVX too.
- Make sure MOV[AU]PS instructions are only selected when SSE1 is enabled, and duplicate the patterns to match AVX.
- Add a testcase for a simple 128-bit zero vector creation.

llvm-svn: 110946
2010-08-12 20:20:53 +00:00
Bruno Cardoso Lopes 1401e040eb Fix comment order
llvm-svn: 110898
2010-08-12 02:08:52 +00:00
Jakob Stoklund Olesen 9c473e46f3 Fix <rdar://problem/8282498> even if it doesn't reproduce on trunk.
When a register is defined by a partial load:

  %reg1234:sub_32 = MOV32mr <fi#-1>; GR64:%reg1234

That load cannot be folded into an instruction using the full 64-bit register.
It would become a 64-bit load.

This is related to the recent change to have isLoadFromStackSlot return false on
a sub-register load.

llvm-svn: 110874
2010-08-11 23:08:22 +00:00
Owen Anderson a7aed18624 Reapply r110396, with fixes to appease the Linux buildbot gods.
llvm-svn: 110460
2010-08-06 18:33:48 +00:00
Owen Anderson bda59bd247 Revert r110396 to fix buildbots.
llvm-svn: 110410
2010-08-06 00:23:35 +00:00
Owen Anderson 755aceb5d0 Don't use PassInfo* as a type identifier for passes. Instead, use the address of the static
ID member as the sole unique type identifier.  Clean up APIs related to this change.

llvm-svn: 110396
2010-08-05 23:42:04 +00:00
Jakob Stoklund Olesen ba0e124aaf Revert r109652, and remove the offending assert in loadRegFromStackSlot instead.
We do sometimes load from a too small stack slot when dealing with x86 arguments
(varargs and smaller-than-32-bit args). It looks like we know what we are doing
in those cases, so I am going to remove the assert instead of artificially
enlarging stack slot sizes.

The assert in storeRegToStackSlot stays in. We don't want to write beyond the
bounds of a stack slot.

llvm-svn: 109764
2010-07-29 17:42:27 +00:00
Jakob Stoklund Olesen 96a890a7f8 The isLoadFromStackSlot and isStoreToStackSlot have no way of reporting
subregister operands like this:

%reg1040:sub_32bit<def> = MOV32rm <fi#-2>, 1, %reg0, 0, %reg0, %reg1040<imp-def>; mem:LD4[FixedStack-2](align=8)

Make them return false when subreg operands are present. VirtRegRewriter is
making bad assumptions otherwise.

This fixes PR7713.

llvm-svn: 109489
2010-07-27 04:17:01 +00:00
Jakob Stoklund Olesen c3c05ed02e Add assertions that expose the PR7713 miscompilation: Accessing a stack slot
with a too-big register class.

llvm-svn: 109488
2010-07-27 04:16:58 +00:00
Chris Lattner 8f3adc9057 remove the JIT "NeedsExactSize" feature and supporting logic.
llvm-svn: 109167
2010-07-22 21:17:55 +00:00
Chris Lattner 083be4d384 instead of migrating it to the MC instruction encoder, just
rip out the implementation of X86InstrInfo::GetInstSizeInBytes.
The code being ripped out just implemented a copied and hacked-up
version of the (old) instruction encoder, and is buggy and 
terrible in other ways.  Since "GetInstSizeInBytes" is really 
only there to support the JIT's "NeedsExactSize" hook (which
no one is using), just rip out the code.  I will rip out the
NeedsExactSize hook next.

This resolves rdar://7617809 - switch X86InstrInfo::GetInstSizeInBytes to use X86MCCodeEmitter

llvm-svn: 109149
2010-07-22 21:05:13 +00:00
Rafael Espindola 350b1a449f Fixes win64. It was broken by a previous patch where I missed the !isWin64 check
and then forced every register to be a vr128 on win64.

llvm-svn: 109060
2010-07-21 23:19:57 +00:00
Nate Begeman 784e062b2a Fix a couple issues with Win64 ABI
1) all registers were spilled as xmm, regardless of actual size
2) win64 abi doesn't do the varargs-size-in-%al thing

Still to look into:

xmm6-15 are marked as clobbered by call instructions on win64 even though they aren't.

llvm-svn: 109035
2010-07-21 20:49:52 +00:00
Jakob Stoklund Olesen 8289f78569 Remove the isMoveInstr() hook.
llvm-svn: 108567
2010-07-16 22:35:46 +00:00
Bill Wendling 499f797cdd Rename DBG_LABEL to PROLOG_LABEL, because it's only used during prolog emission and
thus is a much more meaningful name.

llvm-svn: 108563
2010-07-16 22:20:36 +00:00
Jakob Stoklund Olesen c30b4ddc58 Remove the X86::FP_REG_KILL pseudo-instruction and the X86FloatingPointRegKill
pass that inserted it.

It is no longer necessary to limit the live ranges of FP registers to a single
basic block.

llvm-svn: 108536
2010-07-16 17:41:44 +00:00
Dan Gohman 425b35681f Check begin!=end, rather than !begin.
llvm-svn: 108167
2010-07-12 18:12:35 +00:00
Rafael Espindola 6635f9838e Convert getLoadStoreRegOpcode to use a switch.
llvm-svn: 108123
2010-07-12 03:43:04 +00:00
Rafael Espindola e35d70fafa Convert the last getPhysicalRegisterRegClass in VirtRegRewriter.cpp to
getMinimalPhysRegClass. It was used to produce spills, and it is better to
use the most specific class if possible.

Update getLoadStoreRegOpcode to handle GR32_AD.

llvm-svn: 108115
2010-07-12 00:52:33 +00:00
Jakob Stoklund Olesen e46f3eb0c4 X86InstrInfo::copyRegToReg is dead. Long live copyPhysReg!
llvm-svn: 108076
2010-07-11 05:44:30 +00:00
Jakob Stoklund Olesen de457896b6 Don't emit st(0)/st(1) copies as FpMOV instructions. Use FpSET_ST? instead.
Based on a patch by Rafael Espíndola.

Attempt to make the FpSET_ST1 hack more robust, but we are still relying on
FpSET_ST0 preceding it. This is only for supporting really weird x87 inline
asm.

We support:

  FpSET_ST0
  INLINEASM

  FpSET_ST0
  FpSET_ST1
  INLINEASM

with and without kills on the arguments. We don't support:

  FpSET_ST1
  FpSET_ST0
  INLINEASM

nor

  FpSET_ST1
  INLINEASM

Just Don't Do It!

llvm-svn: 108047
2010-07-10 17:42:34 +00:00
Dan Gohman d7b5ce3312 Reapply bottom-up fast-isel, with several fixes for x86-32:
- Check getBytesToPopOnReturn().
 - Eschew ST0 and ST1 for return values.
 - Fix the PIC base register initialization so that it doesn't ever
   fail to end up at the top of the entry block.

llvm-svn: 108039
2010-07-10 09:00:22 +00:00
Jakob Stoklund Olesen e2614a9979 Remember the *_TC opcodes for load/store
llvm-svn: 108020
2010-07-09 21:27:55 +00:00
Jakob Stoklund Olesen 7a7b55eb67 Automatically fold COPY instructions into stack load/store.
llvm-svn: 108012
2010-07-09 20:43:13 +00:00
Jakob Stoklund Olesen 51702ec46b Fix a few tests
llvm-svn: 108011
2010-07-09 20:43:09 +00:00
Bruno Cardoso Lopes 792e906bef Start the support for AVX instructions with 256-bit %ymm registers. A couple of
notes:
- The instructions are being added with dummy placeholder patterns using some 256-bit
  specifiers; this is not meant to work now, but since there are some multiclasses
  generic enough to accept them, when we go for codegen the stuff will already be
  there.
- Add VEX encoding bits to support YMM
- Add MOVUPS and MOVAPS in the first round
- Use "Y" as suffix for those Instructions: MOVUPSYrr, ...
- All AVX instructions in X86InstrSSE.td will move soon to a new X86InstrAVX
  file.

llvm-svn: 107996
2010-07-09 18:27:43 +00:00
Chris Lattner f469307c77 Change LEA to have 5 operands for its memory operand, just
like all other instructions, even though a segment is not
allowed.  This resolves a bunch of gross hacks in the 
encoder and makes LEA more consistent with the rest of the
instruction set.

No functionality change.

llvm-svn: 107934
2010-07-08 23:46:44 +00:00
Chris Lattner ec536276f0 add some long-overdue enums to refer to the parts of the 5-operand
X86 memory operand.

llvm-svn: 107925
2010-07-08 22:41:28 +00:00
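A minimal sketch of what such enums can look like; the enumerator names and values below are illustrative assumptions rather than necessarily the ones the commit introduced. The five slots of an X86 memory reference are base, scale, index, displacement, and segment:

  // Name the operand slots of an X86 memory reference instead of using
  // bare offsets such as MemOpNo + 3.
  namespace X86 {
    enum {
      AddrBaseReg     = 0,  // base register
      AddrScaleAmt    = 1,  // scale immediate (1, 2, 4 or 8)
      AddrIndexReg    = 2,  // index register, scaled by AddrScaleAmt
      AddrDisp        = 3,  // displacement (immediate or symbol)
      AddrSegmentReg  = 4,  // segment register, 0 if none
      AddrNumOperands = 5
    };
  }

With names like these, code that needs the displacement operand can write MemOpNo + AddrDisp instead of a magic constant.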
Jakob Stoklund Olesen ec58a43d81 Remember the VR64 register class
llvm-svn: 107920
2010-07-08 22:30:35 +00:00
Jakob Stoklund Olesen 930f8082c3 Implement X86InstrInfo::copyPhysReg
llvm-svn: 107898
2010-07-08 19:46:25 +00:00
Jakob Stoklund Olesen 00264624a9 Convert EXTRACT_SUBREG to COPY when emitting machine instrs.
EXTRACT_SUBREG no longer appears as a machine instruction. Use COPY instead.

Add isCopy() checks in many places using isMoveInstr() and isExtractSubreg().
The isMoveInstr hook will be removed later.

llvm-svn: 107879
2010-07-08 16:40:22 +00:00
Jakob Stoklund Olesen a1e883dcf6 Remove references to INSERT_SUBREG after de-SSA.
Fix X86InstrInfo::convertToThreeAddressWithLEA to generate COPY instead of
INSERT_SUBREG.

llvm-svn: 107878
2010-07-08 16:40:15 +00:00
Jakob Stoklund Olesen 6213ab789f fix copies to/from GR8_ABCD_H even more
llvm-svn: 107832
2010-07-07 23:04:56 +00:00
Jakob Stoklund Olesen ddaf0099a5 Allow copies between GR8_ABCD_L and GR8_ABCD_H.
This fixes PR7540.

llvm-svn: 107809
2010-07-07 20:33:27 +00:00
Evan Cheng 0ce84486c3 - Two-address pass should not assume unfolding is always successful.
- X86 unfolding should check if the instruction being unfolded has memoperands.
  If there are no memoperands, then it must assume conservative alignment. If this
  would introduce an expensive SSE unaligned load / store, then unfoldMemoryOperand
  etc. should not unfold the instruction.

llvm-svn: 107509
2010-07-02 20:36:18 +00:00
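A minimal sketch of the memoperand check being described, assuming the usual llvm/CodeGen headers; the helper name and the exact accessor spellings are assumptions, not the committed code:

  // If the folded instruction carries no memoperands we must assume
  // worst-case alignment; unfolding could then introduce an expensive
  // unaligned SSE load or store, so the caller should refuse to unfold.
  static bool hasKnownAlignment(const llvm::MachineInstr *MI,
                                unsigned RequiredAlign) {
    if (MI->memoperands_empty())
      return false;                               // no info: be conservative
    const llvm::MachineMemOperand *MMO = *MI->memoperands_begin();
    return MMO->getAlignment() >= RequiredAlign;
  }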
Bill Wendling 8ce69cd95a Fix the formatting of the switch statement and add a missing break.
llvm-svn: 106586
2010-06-22 22:16:17 +00:00
Rafael Espindola 1cae86f704 Fix an unintentional commit. I think I typed "git svn dcommit" in the wrong branch.
I was trying to do some refactoring of copyRegToReg, but this is really a work in progress and not generally useful yet.

llvm-svn: 106413
2010-06-21 13:31:32 +00:00
Rafael Espindola c596baa56d wip
llvm-svn: 106408
2010-06-21 02:17:34 +00:00
Stuart Hastings 0125b6410a Add a DebugLoc parameter to TargetInstrInfo::InsertBranch(). This
addresses a longstanding deficiency noted in many FIXMEs scattered
across all the targets.

This effectively moves the problem up one level, replacing eleven
FIXMEs in the targets with eight FIXMEs in CodeGen, plus one path
through FastISel where we actually supply a DebugLoc, fixing Radar
7421831.

llvm-svn: 106243
2010-06-17 22:43:56 +00:00
Rafael Espindola e302f833e1 Merge getStoreRegOpcode and getLoadRegOpcode.
llvm-svn: 105900
2010-06-12 20:13:29 +00:00
Jakob Stoklund Olesen a8ad97743d Slightly change the meaning of the reMaterialize target hook when the original
instruction defines subregisters.

Any existing subreg indices on the original instruction are preserved or
composed with the new subreg index.

Also substitute multiple operands mentioning the original register by using the
new MachineInstr::substituteRegister() function. This is necessary because there
will soon be <imp-def> operands added to non read-modify-write partial
definitions. This instruction:

  %reg1234:foo = FLAP %reg1234<imp-def>

will reMaterialize(%reg3333, bar) like this:

  %reg3333:bar-foo = FLAP %reg3333:bar<imp-def>

Finally, replace the TargetRegisterInfo pointer argument with a reference to
indicate that it cannot be NULL.

llvm-svn: 105358
2010-06-02 22:47:25 +00:00
Rafael Espindola f2dffcef82 Remove the TargetRegisterClass member from CalleeSavedInfo
llvm-svn: 105344
2010-06-02 20:02:30 +00:00
Jakob Stoklund Olesen 396c8802b2 Use enums instead of literals for X86 subregisters.
The cases in getMatchingSuperRegClass cannot be broken up until the enums have
unique values.

llvm-svn: 104611
2010-05-25 17:04:16 +00:00
Jakob Stoklund Olesen 9340ea59e1 Rename X86 subregister indices to something shorter.
Use the tablegen-produced enums.

llvm-svn: 104493
2010-05-24 14:48:17 +00:00
Jakob Stoklund Olesen 1c69646e99 Add the SubRegIndex TableGen class.
This is the beginning of purely symbolic subregister indices, but we need a bit
of jiggling before the explicit numeric indices can be completely removed.

llvm-svn: 104492
2010-05-24 14:48:12 +00:00
Evan Cheng 168ced94d8 Implement @llvm.returnaddress. rdar://8015977.
llvm-svn: 104421
2010-05-22 01:47:14 +00:00
Dan Gohman 29790edb93 Fix assembly parsing and encoding of the pushf and popf family of
instructions.

llvm-svn: 104231
2010-05-20 16:16:00 +00:00
Dan Gohman f8bf663873 Teach the load folding and unfolding code about CMP32ri8 and friends.
llvm-svn: 104068
2010-05-18 21:54:15 +00:00
Dan Gohman 887dd1cd31 When converting a test to a cmp to fold a load, use the cmp that has an
8-bit immediate field rather than one with a wider immediate field.

llvm-svn: 104064
2010-05-18 21:42:03 +00:00
Dan Gohman 90c600d6d2 When rematerializing, use the debug location of the original
instruction, rather than a location near where the new instruction
is being inserted.

llvm-svn: 103232
2010-05-07 01:28:10 +00:00
Dan Gohman 779c69bbc5 Add a DebugLoc argument to TargetInstrInfo::copyRegToReg, so that it
doesn't have to guess.

llvm-svn: 103194
2010-05-06 20:33:48 +00:00
Evan Cheng efb126a665 Add argument TargetRegisterInfo to loadRegFromStackSlot and storeRegToStackSlot.
llvm-svn: 103193
2010-05-06 19:06:44 +00:00
Evan Cheng 250e917e9d Frame index can be negative.
llvm-svn: 102577
2010-04-29 01:13:30 +00:00
Chris Lattner 6a5e706e3c on darwin empty functions need to codegen into something of non-zero length,
otherwise labels get incorrectly merged.  We handled this by emitting a 
".byte 0", but this isn't correct on thumb/arm targets where the text segment
needs to be a multiple of 2/4 bytes.  Handle this by emitting a noop.  This
is more gross than it should be because arm/ppc are not fully mc'ized yet.

This fixes rdar://7908505

llvm-svn: 102400
2010-04-26 23:37:21 +00:00
Evan Cheng 1ff9d1b63e Remove a redundant comment.
llvm-svn: 102326
2010-04-26 08:16:57 +00:00
Evan Cheng ed69b382ea - Move TargetLowering::EmitTargetCodeForFrameDebugValue to TargetInstrInfo and rename it to emitFrameIndexDebugValue.
- Teach spiller to modify DBG_VALUE instructions to reference spill slots.

llvm-svn: 102323
2010-04-26 07:38:55 +00:00
Dan Gohman bcaf681cde Add const qualifiers to CodeGen's use of LLVM IR constructs.
llvm-svn: 101334
2010-04-15 01:51:59 +00:00
Evan Cheng 4ca4bc6f95 Re-apply 101075 and fix it properly. Just reuse the debug info of the branch instruction being optimized. There is no need to --I, which can dereference off the start of the BB.
llvm-svn: 101162
2010-04-13 18:50:27 +00:00
Eric Christopher d67f66dc0c Temporarily revert r101075, it's causing invalid iterator assertions
in a nightly tester.

llvm-svn: 101158
2010-04-13 18:37:58 +00:00
Bill Wendling b02bbe416f Micro-optimization:
If we have this situation:

    jCC  L1
    jmp  L2
L1:
  ...
L2:
  ...

We can get a small performance boost by emitting this instead:

    jnCC L2
L1:
  ...
L2:
  ...

This testcase shows an example of this:

float func(float x, float y) {
    double product = (double)x * y;
    if (product == 0.0)
        return product;
    return product - 1.0;
}

llvm-svn: 101075
2010-04-12 22:19:57 +00:00
Chris Lattner 2104b8d36e rename llvm::llvm_report_error -> llvm::report_fatal_error
llvm-svn: 100709
2010-04-07 22:58:41 +00:00
Dale Johannesen 60b289709e Educate GetInstrSizeInBytes implementations that
DBG_VALUE does not generate code.

llvm-svn: 100681
2010-04-07 19:51:44 +00:00
Jakob Stoklund Olesen 1a9b3f3484 Properly enable load clustering.
Operand 2 on a load instruction does not have to be a RegisterSDNode for this to
work.

llvm-svn: 100497
2010-04-05 23:48:02 +00:00
Chris Lattner 6f306d7d30 use DebugLoc default ctor instead of DebugLoc::getUnknownLoc()
llvm-svn: 100214
2010-04-02 20:16:16 +00:00
Dale Johannesen 4244d12769 Teach AnalyzeBranch, RemoveBranch and the branch
folder to be tolerant of debug info following the
branch(es) at the end of a block.

llvm-svn: 100168
2010-04-02 01:38:09 +00:00
Jakob Stoklund Olesen 9986ba954c Replace V_SET0 with variants for each SSE execution domain.
llvm-svn: 99975
2010-03-31 00:40:13 +00:00
Jakob Stoklund Olesen dbff4e8103 Renumber SSE execution domains for better code size.
SSEDomainFix will collapse to the domain with the lower number when it has a
choice. The SSEPackedSingle domain often has smaller instructions, so prefer
that.

llvm-svn: 99952
2010-03-30 22:46:53 +00:00
Eric Christopher 6ad8167714 Remove the pmulld intrinsic and autoupdate it as a vector multiply.
Rewrite the pmulld patterns, and make sure that they fold in loads of
arguments into the instruction.

llvm-svn: 99910
2010-03-30 18:49:01 +00:00
Jakob Stoklund Olesen b551aa4da5 Basic implementation of SSEDomainFix pass.
Cross-block inference is primitive and wrong, but the pass is working otherwise.

llvm-svn: 99848
2010-03-29 23:24:21 +00:00
Jakob Stoklund Olesen 49e121d5e4 Add a late SSEDomainFix pass that twiddles SSE instructions to avoid domain crossings.
On Nehalem and newer CPUs there is a 2 cycle latency penalty on using a register
in a different domain than where it was defined. Some instructions have
equivalents for different domains, like por/orps/orpd.

The SSEDomainFix pass tries to minimize the number of domain crossings by
changing between equivalent opcodes where possible.

This is a work in progress; in particular, the pass doesn't do anything yet. SSE
instructions are tagged with their execution domain in TableGen using the last
two bits of TSFlags. Note that not all instructions are tagged correctly. Life
just isn't that simple.

The SSE execution domain issue is very similar to the ARM NEON/VFP pipeline
issue handled by NEONMoveFixPass. This pass may become target independent to
handle both.

llvm-svn: 99524
2010-03-25 17:25:00 +00:00
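To make the por/orps/orpd example concrete, such a pass can be driven by a small table of equivalent opcodes, one column per execution domain. The rows below are a sketch with placeholder contents, not the committed table:

  // One row per logical operation, one column per SSE execution domain.
  // When a value is produced in one domain and consumed in another,
  // rewriting one side to the equivalent opcode from the same row avoids
  // the cross-domain bypass penalty.
  //                 PackedSingle    PackedDouble    PackedInt
  static const unsigned EquivalentOps[][3] = {
    { X86::ANDPSrr,  X86::ANDPDrr,   X86::PANDrr },
    { X86::ORPSrr,   X86::ORPDrr,    X86::PORrr  },
    { X86::XORPSrr,  X86::XORPDrr,   X86::PXORrr },
  };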
Jakob Stoklund Olesen a86ccbfe88 Revert "Add a late SSEDomainFix pass that twiddles SSE instructions to avoid domain crossings."
This reverts commit 99345. It was breaking buildbots.

llvm-svn: 99352
2010-03-23 23:48:51 +00:00
Jakob Stoklund Olesen 31da45b7af Add a late SSEDomainFix pass that twiddles SSE instructions to avoid domain crossings.
This is work in progress. So far, SSE execution domain tables are added to
X86InstrInfo, and a skeleton pass is enabled with -sse-domain-fix.

llvm-svn: 99345
2010-03-23 23:14:44 +00:00
Evan Cheng b6dee6e015 Teach isSafeToClobberEFLAGS to ignore dbg_value's. Perhaps we need a MachineBasicBlock::iterator that does this automatically?
llvm-svn: 99320
2010-03-23 20:35:45 +00:00
Evan Cheng d703df67ce Do not force indirect tailcall through fixed registers: eax, r11. Add support to allow loads to be folded to tail call instructions.
llvm-svn: 98465
2010-03-14 03:48:46 +00:00
Dan Gohman 772952f46e Don't try to fold V_SET0 and V_SETALLONES to loads in medium and
large code models.

llvm-svn: 98042
2010-03-09 03:01:40 +00:00
Bill Wendling 543ce1f64a Revert r97766. It's deleting a tag.
llvm-svn: 97768
2010-03-05 00:33:59 +00:00
Bill Wendling 6517f88f25 Micro-optimization:
This code:

float floatingPointComparison(float x, float y) {
    double product = (double)x * y;
    if (product == 0.0)
        return product;
    return product - 1.0;
}

produces this:

_floatingPointComparison:
0000000000000000        cvtss2sd        %xmm1,%xmm1
0000000000000004        cvtss2sd        %xmm0,%xmm0
0000000000000008        mulsd           %xmm1,%xmm0
000000000000000c        pxor            %xmm1,%xmm1
0000000000000010        ucomisd         %xmm1,%xmm0
0000000000000014        jne             0x00000004
0000000000000016        jp              0x00000002
0000000000000018        jmp             0x00000008
000000000000001a        addsd           0x00000006(%rip),%xmm0
0000000000000022        cvtsd2ss        %xmm0,%xmm0
0000000000000026        ret

The "jne/jp/jmp" sequence can be reduced to this instead:

_floatingPointComparison:
0000000000000000        cvtss2sd        %xmm1,%xmm1
0000000000000004        cvtss2sd        %xmm0,%xmm0
0000000000000008        mulsd           %xmm1,%xmm0
000000000000000c        pxor            %xmm1,%xmm1
0000000000000010        ucomisd         %xmm1,%xmm0
0000000000000014        jp              0x00000002
0000000000000016        je              0x00000008
0000000000000018        addsd           0x00000006(%rip),%xmm0
0000000000000020        cvtsd2ss        %xmm0,%xmm0
0000000000000024        ret

for a savings of 2 bytes.

This xform can happen when we recognize that jne and jp jump to the same "true"
MBB, the unconditional jump would jump to the "false" MBB, and the "true" branch
is the fall-through MBB.

llvm-svn: 97766
2010-03-05 00:24:26 +00:00
Dan Gohman bdd6405f29 Implement XMM subregs.
Extracting the low element of a vector is now done with EXTRACT_SUBREG,
and the zero-extension performed by load movss is now modeled with
SUBREG_TO_REG, and so on.

Register-to-register movss and movsd are no longer considered copies;
they are two-address instructions which insert a scalar into a vector.

llvm-svn: 97354
2010-02-28 00:17:42 +00:00
Dan Gohman 952f6f98bb movl is a cheaper way to materialize 0 without clobbering EFLAGS than movabsq.
llvm-svn: 97227
2010-02-26 16:49:27 +00:00
Dan Gohman c1a545c307 Fix a typo in a comment.
llvm-svn: 96778
2010-02-22 04:09:26 +00:00
Chris Lattner f7477e599f add a bunch of mod/rm encoding types for fixed mod/rm bytes.
This will work better for the disassembler for modeling things
like lfence/monitor/vmcall etc.

llvm-svn: 95960
2010-02-12 02:06:33 +00:00
Chris Lattner 2b0a7a2592 refactor the conditional jump instructions in the .td file to
use a multipattern that generates both the 1-byte and 4-byte 
versions from the same defm

llvm-svn: 95901
2010-02-11 19:25:55 +00:00
Chris Lattner b06015aa69 move target-independent opcodes out of TargetInstrInfo
into TargetOpcodes.h.  #include the new TargetOpcodes.h
into MachineInstr.  Add new inline accessors (like isPHI())
to MachineInstr, and start using them throughout the 
codebase.

llvm-svn: 95687
2010-02-09 19:54:29 +00:00
Chris Lattner 58827ff98e port X86InstrInfo::determineREX over to the new encoder.
llvm-svn: 95440
2010-02-05 22:10:22 +00:00
Chris Lattner 503243559a move functions for decoding X86II values into the X86II namespace.
llvm-svn: 95410
2010-02-05 19:24:13 +00:00
Chris Lattner b8d375fd21 change getSizeOfImm and getBaseOpcodeFor to just take
TSFlags directly instead of a TargetInstrDesc.

llvm-svn: 95405
2010-02-05 19:16:26 +00:00
Dale Johannesen e5a4134d11 use findDebugLoc in more places.
llvm-svn: 94477
2010-01-26 00:03:12 +00:00
Evan Cheng 16cf934381 Be more conservative with clustering f32 / f64 loads.
llvm-svn: 94254
2010-01-22 23:49:11 +00:00
Evan Cheng 4f026f3750 Add two target hooks to determine whether two loads are near and should be scheduled together.
llvm-svn: 94147
2010-01-22 03:34:51 +00:00
Evan Cheng 5d30f7c91c Fix a minor issue in x86 load / store folding table. movups does an unaligned load so it doesn't require 16-byte alignment.
llvm-svn: 94058
2010-01-21 00:55:14 +00:00
Dale Johannesen c5db599813 make findDebugLoc a class method
llvm-svn: 94032
2010-01-20 21:36:02 +00:00
Dale Johannesen 91970b4ea2 Move findDebugLoc somewhere more central. Fix
more cases where debug declarations affect
debug line info.

llvm-svn: 93953
2010-01-20 00:19:24 +00:00
Jim Grosbach 04770f2aa1 For aligned load/store instructions, it's only required to know whether a
function can support dynamic stack realignment. That's a much easier question
to answer at instruction selection stage than whether the function actually
will have dynamic alignment prologue. This allows the removal of the
stack alignment heuristic pass, and improves code quality for cases where
the heuristic would result in dynamic alignment code being generated when
it was not strictly necessary.

llvm-svn: 93885
2010-01-19 18:31:11 +00:00
Evan Cheng ceb5a4e8f6 For now, avoid issuing extract_subreg to reuse the lower 8 bits; it's not safe in 32-bit mode.
llvm-svn: 93307
2010-01-13 08:01:32 +00:00
Evan Cheng 30bebff456 Add a quick pass to optimize sign / zero extension instructions. For targets where the pre-extension values are available in the subreg of the result of the extension, replace the uses of the pre-extension value with the result + extract_subreg.
For now, this pass is fairly conservative. It only performs the replacement when both the pre- and post-extension values are used in the block. It will miss cases where the post-extension values are live but not used.

llvm-svn: 93278
2010-01-13 00:30:23 +00:00
Dan Gohman c119580307 Reapply the MOV64r0 patch, with a fix: MOV64r0 clobbers EFLAGS.
llvm-svn: 93229
2010-01-12 04:42:54 +00:00
Evan Cheng 4216615f99 Add TargetInstrInfo::isCoalescableInstr. It returns true if the specified
instruction is copy like where the source and destination registers can
overlap. This is to be used by the coalescable to coalesce the source and
destination registers of instructions like X86::MOVSX64rr32. Apparently
some crazy people believe the coalescer is too simple.

llvm-svn: 93210
2010-01-12 00:09:37 +00:00
Evan Cheng 7bdf339602 Revert 93158. It's breaking quite a few x86_64 tests.
llvm-svn: 93185
2010-01-11 21:13:41 +00:00
Dan Gohman 3a55686345 Re-instate MOV64r0 and MOV16r0, with adjustments to work with the
new AsmPrinter. This is perhaps less elegant than describing them
in terms of MOV32r0 and subreg operations, but it allows the
current register allocator to rematerialize them.

llvm-svn: 93158
2010-01-11 17:37:57 +00:00
David Greene d589dafba6 Change errs() to dbgs().
llvm-svn: 92653
2010-01-05 01:29:29 +00:00
Bill Wendling 3179a89067 Remove dead variable.
llvm-svn: 92184
2009-12-28 01:36:02 +00:00
Chris Lattner 518b037620 completely eliminate the MOV16r0 'instruction'. The only
interesting part of this is the divrem changes, which are
already tested by CodeGen/X86/divrem.ll.

llvm-svn: 91975
2009-12-23 01:45:04 +00:00
Evan Cheng 71d7eaa87e Remove target attribute break-sse-dep. Instead, do not fold load into sse partial update instructions unless optimizing for size.
llvm-svn: 91910
2009-12-22 17:47:23 +00:00
Evan Cheng 4cf30b72bf On recent Intel u-arch's, folding loads into some unary SSE instructions can
be non-optimal. To be precise, we should avoid folding loads if the instructions
only update part of the destination register, and the non-updated part is not
needed. e.g. cvtss2sd, sqrtss. Unfolding the load from these instructions breaks
the partial register dependency and it can improve performance. e.g.

movss (%rdi), %xmm0
cvtss2sd %xmm0, %xmm0

instead of
cvtss2sd (%rdi), %xmm0

An alternative method to break dependency is to clear the register first. e.g.
xorps %xmm0, %xmm0
cvtss2sd (%rdi), %xmm0

llvm-svn: 91672
2009-12-18 07:40:29 +00:00
Sean Callanan 04d8cb74f3 Instruction fixes, added instructions, and AsmString changes in the
X86 instruction tables.

Also (while I was at it) cleaned up the X86 tables, removing tabs and
80-line violations.

This patch was reviewed by Chris Lattner, but please let me know if
there are any problems.

* X86*.td
	Removed tabs and fixed 80-line violations

* X86Instr64bit.td
	(IRET, POPCNT, BT_, LSL, SWPGS, PUSH_S, POP_S, L_S, SMSW)
		Added
	(CALL, CMOV) Added qualifiers
	(JMP) Added PC-relative jump instruction
	(POPFQ/PUSHFQ) Added qualifiers; renamed PUSHFQ to indicate
		that it is 64-bit only (ambiguous since it has no
		REX prefix)
	(MOV) Added rr form going the other way, which is encoded
		differently
	(MOV) Changed immediates to offsets, which is more correct;
		also fixed MOV64o64a to have a 64-bit offset
	(MOV) Fixed qualifiers
	(MOV) Added debug-register and condition-register moves
	(MOVZX) Added more forms
	(ADC, SUB, SBB, AND, OR, XOR) Added reverse forms, which
		(as with MOV) are encoded differently
	(ROL) Made REX.W required
	(BT) Uncommented mr form for disassembly only
	(CVT__2__) Added several missing non-intrinsic forms
	(LXADD, XCHG) Reordered operands to make more sense for
		MRMSrcMem
	(XCHG) Added register-to-register forms
	(XADD, CMPXCHG, XCHG) Added non-locked forms
* X86InstrSSE.td
	(CVTSS2SI, COMISS, CVTTPS2DQ, CVTPS2PD, CVTPD2PS, MOVQ)
		Added
* X86InstrFPStack.td
	(COM_FST0, COMP_FST0, COM_FI, COM_FIP, FFREE, FNCLEX, FNOP,
	 FXAM, FLDL2T, FLDL2E, FLDPI, FLDLG2, FLDLN2, F2XM1, FYL2X,
	 FPTAN, FPATAN, FXTRACT, FPREM1, FDECSTP, FINCSTP, FPREM,
	 FYL2XP1, FSINCOS, FRNDINT, FSCALE, FCOMPP, FXSAVE,
	 FXRSTOR)
		Added
	(FCOM, FCOMP) Added qualifiers
	(FSTENV, FSAVE, FSTSW) Fixed opcode names
	(FNSTSW) Added implicit register operand
* X86InstrInfo.td
	(opaque512mem) Added for FXSAVE/FXRSTOR
	(offset8, offset16, offset32, offset64) Added for MOV
	(NOOPW, IRET, POPCNT, IN, BTC, BTR, BTS, LSL, INVLPG, STR,
	 LTR, PUSHFS, PUSHGS, POPFS, POPGS, LDS, LSS, LES, LFS,
	 LGS, VERR, VERW, SGDT, SIDT, SLDT, LGDT, LIDT, LLDT,
	 LODSD, OUTSB, OUTSW, OUTSD, HLT, RSM, FNINIT, CLC, STC,
	 CLI, STI, CLD, STD, CMC, CLTS, XLAT, WRMSR, RDMSR, RDPMC,
	 SMSW, LMSW, CPUID, INVD, WBINVD, INVEPT, INVVPID, VMCALL,
	 VMCLEAR, VMLAUNCH, VMRESUME, VMPTRLD, VMPTRST, VMREAD,
	 VMWRITE, VMXOFF, VMXON) Added
	(NOOPL, POPF, POPFD, PUSHF, PUSHFD) Added qualifier
	(JO, JNO, JB, JAE, JE, JNE, JBE, JA, JS, JNS, JP, JNP, JL,
	 JGE, JLE, JG, JCXZ) Added 32-bit forms
	(MOV) Changed some immediate forms to offset forms
	(MOV) Added reversed reg-reg forms, which are encoded
		differently
	(MOV) Added debug-register and condition-register moves
	(CMOV) Added qualifiers
	(AND, OR, XOR, ADC, SUB, SBB) Added reverse forms, like MOV
	(BT) Uncommented memory-register forms for disassembler
	(MOVSX, MOVZX) Added forms
	(XCHG, LXADD) Made operand order make sense for MRMSrcMem
	(XCHG) Added register-register forms
	(XADD, CMPXCHG) Added unlocked forms
* X86InstrMMX.td
	(MMX_MOVD, MMX_MOVQ) Added forms
* X86InstrInfo.cpp: Changed PUSHFQ to PUSHFQ64 to reflect table
	change

* X86RegisterInfo.td: Added debug and condition register sets
* x86-64-pic-3.ll: Fixed testcase to reflect call qualifier
* peep-test-3.ll: Fixed testcase to reflect test qualifier
* cmov.ll: Fixed testcase to reflect cmov qualifier
* loop-blocks.ll: Fixed testcase to reflect call qualifier
* x86-64-pic-11.ll: Fixed testcase to reflect call qualifier
* 2009-11-04-SubregCoalescingBug.ll: Fixed testcase to reflect call
  qualifier
* x86-64-pic-2.ll: Fixed testcase to reflect call qualifier
* live-out-reg-info.ll: Fixed testcase to reflect test qualifier
* tail-opts.ll: Fixed testcase to reflect call qualifiers
* x86-64-pic-10.ll: Fixed testcase to reflect call qualifier
* bss-pagealigned.ll: Fixed testcase to reflect call qualifier
* x86-64-pic-1.ll: Fixed testcase to reflect call qualifier
* widen_load-1.ll: Fixed testcase to reflect call qualifier

llvm-svn: 91638
2009-12-18 00:01:26 +00:00
Bill Wendling 277381f69a Whitespace changes, comment clarification. No functional changes.
llvm-svn: 91274
2009-12-14 06:51:19 +00:00
Evan Cheng 26fdd7265b Disable r91104 for x86. It causes partial register stalls which pessimize code in 32-bit mode.
llvm-svn: 91223
2009-12-12 20:03:14 +00:00
Evan Cheng 3974c8de51 Add comment about potential partial register stall.
llvm-svn: 91220
2009-12-12 18:55:26 +00:00
Evan Cheng 766a73fb04 Add support to 3-addressify 16-bit instructions.
llvm-svn: 91104
2009-12-11 06:01:48 +00:00
Dan Gohman 047a767d74 Remove the target hook TargetInstrInfo::BlockHasNoFallThrough in favor of
MachineBasicBlock::canFallThrough(), which is target-independent and more
thorough.

llvm-svn: 90634
2009-12-05 00:44:40 +00:00
David Greene 86bafa29a3 Remove an unneeded include.
llvm-svn: 90625
2009-12-04 23:55:07 +00:00
David Greene 0508e435c3 Have hasLoad/StoreFrom/ToStackSlot return the relevant MachineMemOperand.
llvm-svn: 90608
2009-12-04 22:38:46 +00:00
Chris Lattner a48f44d9ee improve portability to avoid conflicting with std::next in c++'0x.
Patch by Howard Hinnant!

llvm-svn: 90365
2009-12-03 00:50:42 +00:00
Dan Gohman de5dea869f Remove ISD::DEBUG_LOC and ISD::DBG_LABEL, which are no longer used.
Note that "hasDotLocAndDotFile"-style debug info was already broken;
people wanting this functionality should implement it in the
AsmPrinter/DwarfWriter code.

llvm-svn: 89711
2009-11-23 23:20:51 +00:00
Evan Cheng 5392cc9d14 Re-apply 89011. It's not to be blamed.
llvm-svn: 89081
2009-11-17 09:51:18 +00:00
Evan Cheng 05938e819b Revert 89011. Buildbot thinks it might be breaking stuff.
llvm-svn: 89076
2009-11-17 09:20:28 +00:00
Evan Cheng ce28f6f478 A few more instructions that should be marked re-materializable.
llvm-svn: 89011
2009-11-17 00:23:22 +00:00
Evan Cheng f25ef4ffb0 - Check memoperand alignment instead of checking stack alignment. Most load / store folding instructions are not referencing spill stack slots.
- Mark MOVUPSrm re-materializable.

llvm-svn: 88974
2009-11-16 21:56:03 +00:00
Evan Cheng 6ad7da96fe - Change TargetInstrInfo::reMaterialize to pass in TargetRegisterInfo.
- If destination is a physical register and it has a subreg index, use the
  sub-register instead.
This fixes PR5423.

llvm-svn: 88745
2009-11-14 02:55:43 +00:00
David Greene 2f4c37425b Fix a bootstrap failure.
Provide special isLoadFromStackSlotPostFE and isStoreToStackSlotPostFE
interfaces to explicitly request checking for post-frame ptr elimination
operands.  This uses a heuristic so it isn't reliable for correctness.

llvm-svn: 87047
2009-11-13 00:29:53 +00:00
David Greene 70fdd57dc1 Add hasLoadFromStackSlot and hasStoreToStackSlot to return whether a
machine instruction loads or stores from/to a stack slot.  Unlike
isLoadFromStackSlot and isStoreToStackSlot, the instruction may be
something other than a pure load/store (e.g. it may be an arithmetic
operation with a memory operand).  This helps AsmPrinter determine when
to print a spill/reload comment.

This is only a hint since we may not be able to figure this out in all
cases.  As such, it should not be relied upon for correctness.

Implement for X86.  Return false by default for other architectures.

llvm-svn: 87026
2009-11-12 20:55:29 +00:00
Jeffrey Yasskin b40d3f76a0 Fix DenseMap iterator constness.
This patch forbids implicit conversion of DenseMap::const_iterator to
DenseMap::iterator which was possible because DenseMapIterator inherited
(publicly) from DenseMapConstIterator. Conversion the other way around is now
allowed as one may expect.

The template DenseMapConstIterator is removed and the template parameter
IsConst which specifies whether the iterator is constant is added to
DenseMapIterator.

Actually, the IsConst parameter is not necessary since the constness can be
determined from KeyT but this is not relevant to the fix and can be addressed
later.

Patch by Victor Zverovich!

llvm-svn: 86636
2009-11-10 01:02:17 +00:00
Dan Gohman 49fa51d936 Fix MachineLICM to use the correct virtual register class when
unfolding loads for hoisting.  getOpcodeAfterMemoryUnfold returns the
opcode of the original operation without the load, not the load
itself; MachineLICM needs to know the operand index in order to get
the correct register class. Extend getOpcodeAfterMemoryUnfold to
return this information.

llvm-svn: 85622
2009-10-30 22:18:41 +00:00
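A sketch of how a caller such as MachineLICM might use the extended hook. The trailing out-parameter follows the description above, but the exact signature and the TargetOperandInfo lookup should be treated as assumptions:

  // Look up the register class of the value that the unfolded load would
  // define, so a matching virtual register can be created for hoisting.
  static const llvm::TargetRegisterClass *
  regClassAfterUnfold(const llvm::TargetInstrInfo *TII,
                      const llvm::TargetRegisterInfo *TRI,
                      const llvm::MachineInstr *MI) {
    unsigned LoadRegIndex = 0;
    unsigned NewOpc =
        TII->getOpcodeAfterMemoryUnfold(MI->getOpcode(),
                                        /*UnfoldLoad=*/true,
                                        /*UnfoldStore=*/false,
                                        &LoadRegIndex);
    if (!NewOpc)
      return 0;                  // the target cannot unfold this instruction
    return TII->get(NewOpc).OpInfo[LoadRegIndex].getRegClass(TRI);
  }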
Dan Gohman 0be8c2b0e3 Make isSafeToClobberEFLAGS more aggressive. Teach it to scan backwards
(for uses marked kill and defs marked dead) a few instructions in
addition to forwards. Also, increase the maximum number of instructions
to scan, as it appears to help in a fair number of cases.

llvm-svn: 84061
2009-10-14 00:08:59 +00:00
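The underlying question, scanning forwards or backwards, is whether EFLAGS is read before it is next defined. A simplified forward-scan sketch follows; the bound and the helper name are illustrative, not the committed code:

  // Walk forward a bounded number of instructions.  A read of EFLAGS before
  // any def means clobbering it here is unsafe; a def before any read means
  // the current value is irrelevant and clobbering it is fine.
  static bool safeToClobberEFLAGSSketch(llvm::MachineBasicBlock::iterator I,
                                        llvm::MachineBasicBlock::iterator E,
                                        const llvm::TargetRegisterInfo *TRI) {
    const unsigned ScanLimit = 4;              // illustrative bound only
    for (unsigned N = 0; I != E && N != ScanLimit; ++I, ++N) {
      if (I->readsRegister(X86::EFLAGS, TRI))
        return false;
      if (I->definesRegister(X86::EFLAGS, TRI))
        return true;
    }
    return false;                              // ran out of room: play safe
  }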
Dan Gohman 1faa11521e Remove a no-longer-necessary #include.
llvm-svn: 83697
2009-10-10 00:36:09 +00:00
Dan Gohman e919de5acf Replace X86's CanRematLoadWithDispOperand by calling the target-independent
MachineInstr::isInvariantLoad instead, which has the benefit of being
more complete.

llvm-svn: 83696
2009-10-10 00:34:18 +00:00
Dan Gohman dd76bb23d1 Add basic infrastructure and x86 support for preserving MachineMemOperand
information when unfolding memory references.

llvm-svn: 83656
2009-10-09 18:10:05 +00:00
Dan Gohman be8137b0b4 Replace TargetInstrInfo::isInvariantLoad and its target-specific
implementations with a new MachineInstr::isInvariantLoad, which uses
MachineMemOperands and is target-independent. This brings MachineLICM
and other functionality to targets which previously lacked an
isInvariantLoad implementation.

llvm-svn: 83475
2009-10-07 17:38:06 +00:00
Jakob Stoklund Olesen dc9efe8078 Introduce the TargetInstrInfo::KILL machine instruction and get rid of the
unused DECLARE instruction.

KILL is not yet used anywhere, it will replace TargetInstrInfo::IMPLICIT_DEF
in the places where IMPLICIT_DEF is just used to alter liveness of physical
registers.

llvm-svn: 83006
2009-09-28 20:32:26 +00:00
Dan Gohman 48b185d6f7 Improve MachineMemOperand handling.
- Allocate MachineMemOperands and MachineMemOperand lists in MachineFunctions.
   This eliminates MachineInstr's std::list member and allows the data to be
   created by isel and live for the remainder of codegen, avoiding a lot of
   copying and unnecessary translation. This also shrinks MemSDNode.
 - Delete MemOperandSDNode. Introduce MachineSDNode which has dedicated
   fields for MachineMemOperands.
 - Change MemSDNode to have a MachineMemOperand member instead of its own
   fields with the same information. This introduces some redundancy, but
   it's more consistent with what MachineInstr will eventually want.
 - Ignore alignment when searching for redundant loads for CSE, but remember
   the greatest alignment.

Target-specific code which previously used MemOperandSDNodes with generic
SDNodes now use MemIntrinsicSDNodes, with opcodes in a designated range
so that the SelectionDAG framework knows that MachineMemOperand information
is available.

llvm-svn: 82794
2009-09-25 20:36:54 +00:00
Dan Gohman 32f71d714b Rename getTargetNode to getMachineNode, for consistency with the
naming scheme used in SelectionDAG, where there are multiple kinds
of "target" nodes, but "machine" nodes are nodes which represent
a MachineInstr.

llvm-svn: 82790
2009-09-25 18:54:59 +00:00
Dan Gohman 1439957928 Fix X86's unfoldMemoryOperand to properly handle MachineMemOperands.
llvm-svn: 82597
2009-09-23 01:29:41 +00:00
Dan Gohman 69499b13fd Add support for rematerializing FsFLD0SS and FsFLD0SD as constant-pool
loads in order to reduce register pressure.

llvm-svn: 82470
2009-09-21 18:30:38 +00:00
Evan Cheng 74a3231de4 Follow up to 81494. When the folded reload is narrowed to a 32-bit load then change the destination register to a 32-bit one or add a sub-register index.
llvm-svn: 81496
2009-09-11 01:01:31 +00:00
Evan Cheng 3cad6283b8 It's not legal to fold a load from a narrower stack slot into a wider instruction. If done, the instruction does a 64-bit load and that's not
safe. This can happen when a subreg_to_reg 0 has been coalesced. One
exception is when the instruction that folds the load is a move; then we
can simply turn it into a 32-bit load from the stack slot.

rdar://7170444

llvm-svn: 81494
2009-09-11 00:39:26 +00:00
Daniel Dunbar f7a14aa43d Remove Offset from ExternalSymbol MachineOperands; this is unused (and at least partly unsupported, at least in the X86 encoding).
llvm-svn: 80726
2009-09-01 22:06:46 +00:00
Anton Korobeynikov f43ab91486 Short-term workaround for frame-related weirdness on win64.
Some other minor win64 fixes as well.

Patch by Michael Beck!

llvm-svn: 80370
2009-08-28 16:06:41 +00:00
Chris Lattner a6f074fb3a remove various std::ostream versions of printing methods from
MachineInstr and MachineOperand.  This required eliminating a
bunch of stuff that was using DOUT, I hope that bill doesn't
mind me stealing his fun. ;-)

llvm-svn: 79813
2009-08-23 03:41:05 +00:00
Chris Lattner 7b26fce23e Rename TargetAsmInfo (and its subclasses) to MCAsmInfo.
llvm-svn: 79763
2009-08-22 20:48:53 +00:00
Devang Patel 0939595711 Record variable debug info at ISel time directly.
llvm-svn: 79742
2009-08-22 17:12:53 +00:00
Owen Anderson 55f1c09e31 Push LLVMContexts through the IntegerType APIs.
llvm-svn: 78948
2009-08-13 21:58:54 +00:00
Owen Anderson 9f94459d24 Split EVT into MVT and EVT, the former representing _just_ a primitive type, while
the latter is capable of representing either a primitive or an extended type.

llvm-svn: 78713
2009-08-11 20:47:22 +00:00
Dan Gohman aa3fb65349 Simplify this code. The case where one class is GR64RegClass and the
other is a subclass of it is effectively handled by the prior tests.

llvm-svn: 78676
2009-08-11 15:59:48 +00:00
Owen Anderson 53aa7a960c Rename MVT to EVT, in preparation for splitting SimpleValueType out into its own struct type.
llvm-svn: 78610
2009-08-10 22:56:29 +00:00
Eric Christopher 7dfa9f2e56 Add crc32 instruction and intrinsics. Add a new class of prefix
bytes for F2 0F 38 and propagate. Add a FIXME for a set
of possibilities which correspond to intrinsics already used.

New test.

llvm-svn: 78508
2009-08-08 21:55:08 +00:00
Dan Gohman 77f33b71c7 Use GR32 for copies between GR32_NOSP and GR32_NOREX, as neither
is a subset of the other, but both are subsets of GR32.

llvm-svn: 78250
2009-08-05 22:18:26 +00:00
Dan Gohman 87cc2c2dce hasSuperClass tests for a strict superset relation, rather than
a superset relation. This code wants to test the regular superset
relation.

llvm-svn: 78236
2009-08-05 20:13:45 +00:00
Chris Lattner e98a3c3ca3 Move the getInlineAsmLength virtual method from TAI to TII, where
the only real caller (GetFunctionSizeInBytes) uses it.

The custom ARM implementation of this is basically reimplementing
an assembler poorly for negligible gain.  It should be removed 
IMNSHO, but I'll leave that to ARMish folks to decide.

llvm-svn: 77877
2009-08-02 05:20:37 +00:00
Owen Anderson 5a1acd9912 Move a few more APIs back to 2.5 forms. The only remaining ones left to change back are
metadata related, which I'm waiting on to avoid conflicting with Devang.

llvm-svn: 77721
2009-07-31 20:28:14 +00:00
Dan Gohman 49a6f16b7c Add a new register class to describe operands that can't be SP,
due to x86 encoding restrictions. This is currently off by default
because it may cause code quality regressions. This is for PR4572.

llvm-svn: 77565
2009-07-30 01:56:29 +00:00
Chris Lattner f3239532cc 1. Introduce a new TargetOperandInfo::getRegClass() helper method
and convert code to using it, instead of having lots of things
   poke the isLookupPtrRegClass() method directly.

2. Make PointerLikeRegClass contain a 'kind' int, and store it in
   the existing regclass field of TargetOperandInfo when the
   isLookupPtrRegClass() predicate is set.  Make getRegClass pass
   this into TargetRegisterInfo::getPointerRegClass(), allowing
   targets to have multiple ptr_rc things.

llvm-svn: 77504
2009-07-29 21:10:12 +00:00
Owen Anderson 47db941fd3 Get rid of the Pass+Context magic.
llvm-svn: 76702
2009-07-22 00:24:57 +00:00
Jakob Stoklund Olesen c7895d3cf6 Silence warning in Linux builds:
X86InstrInfo.cpp:2272: warning: suggest explicit braces to avoid ambiguous 'else'

llvm-svn: 76105
2009-07-16 21:24:13 +00:00
Evan Cheng fdd0eb4011 With recent MC changes, the RIP base register is explicitly modeled. Make sure we add it when an x86 V_SET0 / V_SETALLONES is folded (by transforming it into a constant-pool load) into the use instruction.
llvm-svn: 76094
2009-07-16 18:44:05 +00:00
Evan Cheng 84517443ca Let callers decide the sub-register index on the def operand of rematerialized instructions.
Avoid remat'ing instructions whose def has sub-register indices for now. It's just really, really hard to get all the cases right.

llvm-svn: 75900
2009-07-16 09:20:10 +00:00
Evan Cheng 9e0c7f2c5e Move load / store folding alignment requirements into the table(s).
llvm-svn: 75749
2009-07-15 06:10:07 +00:00
Chris Lattner 79c136d473 reapply r75408, which eliminates MOV64r0 in favor of using
MOV32r0 + subregs to do the same thing.  This should work now
that PR4544 is fixed.  Thanks Evan!

llvm-svn: 75671
2009-07-14 20:19:57 +00:00
Torok Edwin fbcc663cbf llvm_unreachable->llvm_unreachable(0), LLVM_UNREACHABLE->llvm_unreachable.
This adds location info for all llvm_unreachable calls (which is a macro now) in
!NDEBUG builds.
In NDEBUG builds, location info and the message are off (it only prints
"UNREACHABLE executed").

llvm-svn: 75640
2009-07-14 16:55:14 +00:00
Owen Anderson 542619e6d5 Move more functionality over to LLVMContext.
llvm-svn: 75497
2009-07-13 20:58:05 +00:00
Owen Anderson 53a52215b5 Begin the painful process of tearing apart the rat's nest that is Constants.cpp and ConstantFold.cpp.
This involves temporarily hard wiring some parts to use the global context.  This isn't ideal, but it's
the only way I could figure out to make this process vaguely incremental.

llvm-svn: 75445
2009-07-13 04:09:18 +00:00
Bill Wendling 5b76fc03ae Temporarily revert r75408. It appears to break the Apple-style builds:
x86_64-apple-darwin10-gcc -c   -g -O2  -DIN_GCC   -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -Wold-style-definition -Wmissing-format-attribute   -mdynamic-no-pic -DHAVE_CONFIG_H -I. -I. -I/Volumes/Sandbox/Buildbot/llvm/build.llvm-gcc-x86_64-darwin10-selfhost/build/llvmgcc42.roots/llvmgcc42~obj/src/gcc -I/Volumes/Sandbox/Buildbot/llvm/build.llvm-gcc-x86_64-darwin10-selfhost/build/llvmgcc42.roots/llvmgcc42~obj/src/gcc/. -I/Volumes/Sandbox/Buildbot/llvm/build.llvm-gcc-x86_64-darwin10-selfhost/build/llvmgcc42.roots/llvmgcc42~obj/src/gcc/../include -I./../intl -I/Volumes/Sandbox/Buildbot/llvm/build.llvm-gcc-x86_64-darwin10-selfhost/build/llvmgcc42.roots/llvmgcc42~obj/src/gcc/../libcpp/include  -I/Volumes/Sandbox/Buildbot/llvm/build.llvm-gcc-x86_64-darwin10-selfhost/build/llvmgcc42.roots/llvmgcc42~obj/src/gcc/../libdecnumber -I../libdecnumber -I/Volumes/Sandbox/Buildbot/llvm/build.llvm-gcc-x86_64-darwin10-selfhost/build/llvmCore.roots/llvmCore~dst/Developer/usr/local/include -I/Volumes/Sandbox/Buildbot/llvm/build.llvm-gcc-x86_64-darwin10-selfhost/build/llvmCore.roots/llvmCore~obj/src/include -DENABLE_LLVM -I/Volumes/Sandbox/Buildbot/llvm/build.llvm-gcc-x86_64-darwin10-selfhost/build/llvmCore.roots/llvmCore~dst/Developer/usr/local/include  -D_DEBUG  -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -DLLVM_VERSION_INFO='"9999"' -DBUILD_LLVM_APPLE_STYLE   /Volumes/Sandbox/Buildbot/llvm/build.llvm-gcc-x86_64-darwin10-selfhost/build/llvmgcc42.roots/llvmgcc42~obj/src/gcc/tree-ssa-alias.c -o tree-ssa-alias.o
/var/tmp//ccJQ2JBT.s:4134:Incorrect register `%rcx' used with `l' suffix
make[2]: *** [tree-ssa-live.o] Error 1
make[2]: *** Waiting for unfinished jobs....

llvm-svn: 75412
2009-07-12 02:49:22 +00:00
Chris Lattner 02c4339bde eliminate MOV64r0 in favor of a Pat<> pattern. This is only nontrivial because
the div lowering code explicitly references it.

llvm-svn: 75408
2009-07-12 00:47:55 +00:00
Torok Edwin 56d0659726 assert(0) -> LLVM_UNREACHABLE.
Make llvm_unreachable take an optional string, thus moving the cerr<< out of
line.
LLVM_UNREACHABLE is now a simple wrapper that makes the message go away for
NDEBUG builds.

llvm-svn: 75379
2009-07-11 20:10:48 +00:00
Evan Cheng 7997cbf2d5 Undo my brain cramp.
llvm-svn: 75290
2009-07-10 21:31:42 +00:00
Chris Lattner bd3e560f1a some minor simplifications.
llvm-svn: 75274
2009-07-10 20:53:38 +00:00
Evan Cheng bb00fe0dc6 CMOVxx doesn't swap operands when it's commuted.
llvm-svn: 75266
2009-07-10 19:26:57 +00:00
Chris Lattner ca9d784bf1 change isGlobalStubReference to take target flags instead of a MachineOperand.
llvm-svn: 75236
2009-07-10 06:29:59 +00:00
Chris Lattner e6d259340e convert some late code (called by regalloc and code emission)
to use isGlobalStubReference instead of GVRequiresExtraLoad
(which should really be part of isel).

llvm-svn: 75234
2009-07-10 06:07:08 +00:00
Chris Lattner b9af63a4d2 GVRequiresExtraLoad is now never used for calls, simplify it based on this.
llvm-svn: 75232
2009-07-10 05:52:02 +00:00
Evan Cheng 7452c968e4 Targets sometimes assign a fixed stack object to spill certain callee-saved
registers based on dynamic conditions. For example, X86 EBP/RBP, when used as
the frame register, has to be spilled in the first fixed object. The target should
inform PEI of this so the register doesn't get allocated another stack object. Also,
it should not be spilled like other callee-saved registers; rather, its spilling and
restoring are handled by emitPrologue and emitEpilogue. Avoid spilling it twice.

llvm-svn: 75116
2009-07-09 06:53:48 +00:00
Chris Lattner fef11d6e77 simplify some code based on the fact that picstyles != none are only valid
in pic or dynamic-no-pic mode. Also, x86-64 never used picstylegot.

llvm-svn: 75101
2009-07-09 04:39:06 +00:00
Torok Edwin 6dd2730024 Start converting to new error handling API.
cerr+abort -> llvm_report_error
assert(0)+abort -> LLVM_UNREACHABLE (assert(0)+llvm_unreachable-> abort() included)

llvm-svn: 75018
2009-07-08 18:01:40 +00:00
Evan Cheng 0dc101b897 Add a bit IsUndef to MachineOperand. This indicates the def / use register operand is defined by an implicit_def. That means it can def / use any register and passes (e.g. register scavenger) can feel free to ignore them.
The register allocator, when it allocates a register to a virtual register defined by an implicit_def, can allocate any physical register without worrying about overlapping live ranges. It should mark all of the operands of said virtual register so later passes will do the right thing.

This is not the best solution. But it should be a lot less fragile to having the scavenger try to track what is defined by implicit_def.

llvm-svn: 74518
2009-06-30 08:49:04 +00:00
Chris Lattner 9876bd8257 factor some logic out into a helper function, allow remat of loads from constant
globals.  This implements remat-constant.ll even without aggressive-remat.

llvm-svn: 74373
2009-06-27 04:38:55 +00:00
Chris Lattner fea81da433 Reimplement rip-relative addressing in the X86-64 backend. The new
implementation primarily differs from the former in that the asmprinter
doesn't make a zillion decisions about whether or not something will be
RIP relative or not.  Instead, those decisions are made by isel lowering
and propagated through to the asm printer.  To achieve this, we:

1. Represent RIP relative addresses by setting the base of the X86 addr
   mode to X86::RIP.
2. When ISel Lowering decides that it is safe to use RIP, it lowers to
   X86ISD::WrapperRIP.  When it is unsafe to use RIP, it lowers to
   X86ISD::Wrapper as before.
3. This removes isRIPRel from X86ISelAddressMode, representing it with
   a basereg of RIP instead.
4. The addressing mode matching logic in isel is greatly simplified.
5. The asmprinter is greatly simplified, notably the "NotRIPRel" predicate
   passed through various printoperand routines is gone now.
6. The various symbol printing routines in asmprinter now no longer infer
   when to emit (%rip), they just print the symbol.

I think this is a big improvement over the previous situation.  It does have
two small caveats though: 1. I implemented a horrible "no-rip" modifier for
the inline asm "P" constraint modifier.  This is a short term hack, there is
a much better, but more involved, solution.  2. I had to xfail an 
-aggressive-remat testcase because it isn't handling the use of RIP in the
constant-pool reading instruction.  This specific test is easy to fix without
-aggressive-remat, which I intend to do next.

llvm-svn: 74372
2009-06-27 04:16:01 +00:00
Chris Lattner 852739b46f Use target-specific machine operand flags to eliminate a gross hack
from the asmprinter.

llvm-svn: 74184
2009-06-25 17:38:33 +00:00
Chris Lattner 1927844ebf just eliminate the code entirely!
llvm-svn: 74183
2009-06-25 17:28:07 +00:00
Eli Friedman 63488f1fbf PR3739, part 2: Use an explicit store to spill XMM registers. (Previously,
the code tried to use "push", which doesn't exist for XMM registers.)

llvm-svn: 72836
2009-06-04 02:32:04 +00:00
Bill Wendling 2e09bd3d34 The MONITOR and MWAIT instructions have insufficient information for
decoding. Essentially, they both map to the same column in the "opcode
extensions for one- and two-byte opcodes" table in the x86 manual. The RawFrm
complicates decoding this.

Instead, use opcode 0x01, prefix 0x01, and form MRM1r. Then have the code
emitter special case these, a la [SML]FENCE.

llvm-svn: 72556
2009-05-28 23:40:46 +00:00
Bill Wendling f7b83c7ae7 Change MachineInstrBuilder::addReg() to take a flag instead of a list of
booleans. This gives a better indication of what the "addReg()" is
doing. Remembering what all of those booleans mean isn't easy, especially if you
aren't spending all of your time in that code.

I took Jakob's suggestion and made it illegal to pass in "true" for the
flag. This should hopefully prevent any unintended misuse of this (by reverting
to the old way of using addReg()).

llvm-svn: 71722
2009-05-13 21:33:08 +00:00
Evan Cheng 55173b7646 Avoid unneeded SIB byte encoding. Patch by Zoltan Varga.
llvm-svn: 71520
2009-05-12 00:07:35 +00:00
Evan Cheng 2fa281106a Optimize code placement in loops to eliminate unconditional branches or move the unconditional branch to the outside of the loop. e.g.
///       A:
///       ...
///       <fallthrough to B>
///
///       B:  --> loop header
///       ...
///       jcc <cond> C, [exit]
///
///       C:
///       ...
///       jmp B
///
/// ==>
///
///       A:
///       ...
///       jmp B
///
///       C:  --> new loop header
///       ...
///       <fallthrough to B>
///
///       B:
///       ...
///       jcc <cond> C, [exit]

llvm-svn: 71209
2009-05-08 06:34:09 +00:00
Evan Cheng a35aed567a Revert part of 70929 that has to do with determining whether a SIB byte is needed. It causes a lot of x86_64 JIT failures.
llvm-svn: 70986
2009-05-05 18:18:57 +00:00
Evan Cheng c298ccb998 - Avoid the longer SIB encoding on x86_64 when it's not needed.
- Synchronize instruction length computation code in X86InstrInfo with code in X86CodeEmitter.cpp
Patch by Zoltan Varga.

llvm-svn: 70929
2009-05-04 22:49:16 +00:00
Dan Gohman 2986972118 Rename GR8_ABCD to GR8_ABCD_L and create GR8_ABCD_H, and use these
to precisely describe the h-register subreg register classes.
Thanks to Jakob Stoklund Olesen for spotting this and for the
initial patch!

Also, make getStoreRegOpcode and getLoadRegOpcode aware of the
needs of h registers.

llvm-svn: 70211
2009-04-27 16:41:36 +00:00
Dan Gohman ec542ca65e Rename GR8_, GR16_, GR32_, and GR64_ to GR8_ABCD, GR16_ABCD,
GR32_ABCD, and GR64_ABCD, respectively, to help describe them.

llvm-svn: 70210
2009-04-27 16:33:14 +00:00
Dan Gohman 1addf64735 Make X86's copyRegToReg able to handle copies to and from subclasses.
This makes the extra copyRegToReg calls in ScheduleDAGSDNodesEmit.cpp
unnecessary. Derived from a patch by Jakob Stoklund Olesen.

llvm-svn: 69635
2009-04-20 22:54:34 +00:00
Mon P Wang 6c8bcf9da1 Fixed a few 64 bit cases in X86InstrInfo::commuteInstruction
llvm-svn: 69417
2009-04-18 05:16:01 +00:00
Bill Wendling 06684350c4 Recommit r69335 and r69336. These were not causing problems.
llvm-svn: 69394
2009-04-17 22:40:38 +00:00
Bill Wendling 30527b1114 Revert r69335 and r69336. They were causing build failures.
llvm-svn: 69347
2009-04-17 04:19:22 +00:00
Dan Gohman 09dbb0b5e0 MOV8rr_NOREX is a "Move" instruction. This doesn't currently
matter, because this instruction isn't generated until after
things that care.

llvm-svn: 69336
2009-04-17 00:45:17 +00:00
Dan Gohman 74835ce1cb Don't use MOV8rr_NOREX on x86-32. It doesn't actually hurt anything at
present, but it's inconsistent.

llvm-svn: 69335
2009-04-17 00:43:09 +00:00
Dan Gohman de7b3e74be Fix 80-column violations.
llvm-svn: 69204
2009-04-15 19:48:57 +00:00
Dan Gohman 6711216e84 Add a folding table entry for MOV8rr_NOREX.
llvm-svn: 69203
2009-04-15 19:48:28 +00:00
Dan Gohman 7913ea5e4a Add a new MOV8rr_NOREX, and make X86's copyRegToReg use it when
either the source or destination is a physical h register.

This fixes sqlite3 with the post-RA scheduler enabled.

llvm-svn: 69111
2009-04-15 00:04:23 +00:00
Dan Gohman 57d6bd36b2 Implement x86 h-register extract support.
- Add patterns for h-register extract, which avoids a shift and mask,
   and in some cases a temporary register.
 - Add address-mode matching for turning (X>>(8-n))&(255<<n), where
   n is a valid address-mode scale value, into an h-register extract
   and a scaled-offset address.
 - Replace X86's MOV32to32_ and related instructions with the new
   target-independent COPY_TO_SUBREG instruction.

On x86-64 there are complicated constraints on h registers, and
CodeGen doesn't currently provide a high-level way to express all of them,
so they are handled with a bunch of special code. This code currently only
supports extracts where the result is used by a zero-extend or a store,
though these are fairly common.

These transformations are not always beneficial; since there are only
4 h registers, they sometimes require extra move instructions, and
this sometimes increases register pressure because it can force out
values that would otherwise be in one of those registers. However,
this appears to be relatively uncommon.

llvm-svn: 68962
2009-04-13 16:09:41 +00:00
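At the source level, the pattern being recognized is an ordinary byte extract; a plain C++ example of the input shape (this is illustrative source code, not the backend patch):

  // Bits 8..15 of a 32-bit value.  With h-register support this can lower to
  // a single move out of an h register (e.g. %ah) instead of a shift plus a
  // mask; the scaled form from the commit above additionally folds the scale
  // into a scaled-offset address.
  unsigned secondByte(unsigned X) {
    return (X >> 8) & 255;
  }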
Dan Gohman 39aa13a401 Fix another hard-coded constant to use X86AddrNumOperands.
This unbreaks the JIT on x86-64.

llvm-svn: 68948
2009-04-13 15:04:25 +00:00
Chris Lattner bcd2632638 Fix code size computation on x86-64, patch by Zoltan Varga!
llvm-svn: 68690
2009-04-09 06:10:51 +00:00
Rafael Espindola 3b2df10c9e Re-apply 68552.
Tested by bootstrapping llvm-gcc and using that to build llvm.

llvm-svn: 68645
2009-04-08 21:14:34 +00:00
Bill Wendling 4aa25b79f9 Temporarily revert r68552. This was causing a failure in the self-hosting LLVM
builds.

--- Reverse-merging (from foreign repository) r68552 into '.':
U    test/CodeGen/X86/tls8.ll
U    test/CodeGen/X86/tls10.ll
U    test/CodeGen/X86/tls2.ll
U    test/CodeGen/X86/tls6.ll
U    lib/Target/X86/X86Instr64bit.td
U    lib/Target/X86/X86InstrSSE.td
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86RegisterInfo.cpp
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86CodeEmitter.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86InstrInfo.h
U    lib/Target/X86/X86ISelDAGToDAG.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U    lib/Target/X86/X86ISelLowering.h
U    lib/Target/X86/X86InstrInfo.cpp
U    lib/Target/X86/X86InstrBuilder.h
U    lib/Target/X86/X86RegisterInfo.td

llvm-svn: 68560
2009-04-07 22:35:25 +00:00
Rafael Espindola 1edda06792 Reduce code duplication on the TLS implementation.
This introduces a small regression on the generated code
quality in the case where we are just computing addresses, not
loading values.

Will work on it and on X86-64 support.

llvm-svn: 68552
2009-04-07 21:37:46 +00:00
Rafael Espindola 6ff3dabbb4 Have only one definition of X86AddrNumOperands.
llvm-svn: 67949
2009-03-28 18:55:31 +00:00
Rafael Espindola c2a17d3022 Make code a bit less brittle by not hardcoding the number
of operands in an address in so many places.

llvm-svn: 67945
2009-03-28 17:03:24 +00:00