Commit Graph

268 Commits

Evan Cheng 64a223aed8 Do not delete BBs if their addresses are taken. rdar://12396696
llvm-svn: 164866
2012-09-28 23:58:57 +00:00
Bill Wendling 863bab689a Remove the `hasFnAttr' method from Function.
The hasFnAttr method has been replaced by querying the Attributes explicitly. No
intended functionality change.

llvm-svn: 164725
2012-09-26 21:48:26 +00:00
Hans Wennborg 02fbc71647 CodeGenPrep: turn lookup tables into switches for some targets.
This is a follow-up from r163302, which added a transformation to
SimplifyCFG that turns some switches into loads from lookup tables.

It was pointed out that some targets, such as GPUs and deeply embedded
targets, might not find this appropriate, but SimplifyCFG doesn't have
enough information about the target to decide this.

This patch adds the reverse transformation to CodeGenPrep: it turns
loads from lookup tables back into switches for targets where we do not
build jump tables (assuming these are also the targets where lookup
tables are inappropriate).

Hopefully we will eventually get to have target information in
SimplifyCFG, and then this CodeGenPrep transformation can be removed.

llvm-svn: 164206
2012-09-19 07:48:16 +00:00
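A rough source-level sketch of the shape involved in the entry above (hypothetical function, not from the patch): SimplifyCFG may rewrite a dense switch like the one below as a load from a constant lookup table; on targets that would not have built a jump table, the CodeGenPrep change turns such loads back into switches.

// Illustrative only: a dense switch that SimplifyCFG may turn into a load
// from a constant table; the transform above converts the load back into a
// switch for targets where jump/lookup tables are not wanted.
int classify(unsigned x) {
  switch (x) {
  case 0: return 7;
  case 1: return 3;
  case 2: return 9;
  case 3: return 1;
  default: return 0;
  }
}
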
Evan Cheng 71be12b35b Stylistic and 80-col fixes
llvm-svn: 163940
2012-09-14 21:25:34 +00:00
Dmitri Gribenko 2bc1d483fe Fix Doxygen issues:
* wrap code blocks in \code ... \endcode;
* refer to parameter names in paragraphs correctly (\arg is not what most
  people want -- it starts a new paragraph).

llvm-svn: 163790
2012-09-13 12:34:29 +00:00
Preston Gurd cdf540d5d6 Generic Bypass Slow Div
- CodeGenPrepare pass for identifying div/rem ops
- Backend specifies the type mapping using addBypassSlowDivType
- Enabled only for Intel Atom with O2 32-bit -> 8-bit
- Replace IDIV with instructions that test the operand values and use DIVB if
they are positive and less than 256.
- When both the quotient and remainder of a divide are used, a DIV and a REM
instruction will be present in the IR. In the non-Atom case they are both
lowered to IDIVs and CSE removes the redundant IDIV instruction, using the
quotient and remainder from the first IDIV. With this optimization, however,
CSE cannot eliminate the redundant IDIV instructions because they are located
in different basic blocks. This is overcome by calculating both the quotient
(DIV) and remainder (REM) in each basic block inserted by the optimization and
reusing the result values when a subsequent DIV or REM instruction uses the
same operands.
- Test cases check for the presence of the optimization when calculating
either the quotient, the remainder, or both.

Patch by Tyler Nowicki!

llvm-svn: 163150
2012-09-04 18:22:17 +00:00
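A minimal sketch of the bypass idea from the entry above, written at the source level for illustration (the actual pass rewrites the IR; the 256 threshold reflects the 32-bit -> 8-bit mapping, and the function name is hypothetical):

#include <cstdint>

// If both operands are non-negative and below 256, an 8-bit divide (DIVB on
// x86) suffices; otherwise fall back to the slow full-width divide (IDIV).
int32_t bypassedDiv(int32_t a, int32_t b) {
  if ((uint32_t(a) | uint32_t(b)) < 256)
    return int32_t(uint8_t(a) / uint8_t(b));
  return a / b;
}
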
Nadav Rotem 9d83202620 Not all targets have efficient ISel code generation for select instructions.
For example, the ARM target does not have efficient ISel handling for vector
selects with scalar conditions. This patch adds a TLI hook which allows the
different targets to report which selects are well supported and which should
be converted to control flow during CodeGenPrepare.

llvm-svn: 163093
2012-09-02 12:10:19 +00:00
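A hypothetical sketch of such a hook; the names and enumerators below are illustrative and are not the exact TLI API added by this commit:

// Illustrative only: a target-supplied query CodeGenPrepare could consult to
// decide whether a given kind of select should be rewritten as control flow.
enum class SelectKind { ScalarValue, ScalarCondVectorValue, VectorMask };

struct TargetSelectLoweringInfo {
  virtual ~TargetSelectLoweringInfo() = default;
  // Targets override this to report which select forms they handle well.
  virtual bool isSelectSupported(SelectKind) const { return true; }
};
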
Benjamin Kramer 8bcc971174 Make MemoryBuiltins aware of TargetLibraryInfo.
This disables malloc-specific optimization when -fno-builtin (or -ffreestanding)
is specified. This has been a problem for a long time but became more severe
with the recent memory builtin improvements.

Since the memory builtin functions are used everywhere, this required passing
TLI in many places. This means that functions that now have an optional TLI
argument, like RecursivelyDeleteTriviallyDeadInstructions, won't remove dead
mallocs anymore if the TLI argument is missing. I've updated most passes to do
the right thing.

Fixes PR13694 and probably others.

llvm-svn: 162841
2012-08-29 15:32:21 +00:00
Michael Liao 6e12d12830 revise debug output to avoid dangling pointer
llvm-svn: 162256
2012-08-21 05:55:22 +00:00
Bill Wendling 4d5150d978 Remove dead flag.
llvm-svn: 161990
2012-08-15 21:18:10 +00:00
Nadav Rotem 70409991bc During CodeGenPrepare we often lower intrinsics (such as objsize)
and allow some optimizations to turn conditional branches into unconditional ones.
This commit adds a simple control-flow optimization which merges two consecutive
basic blocks that are connected by a single edge. This allows codegen to
operate on larger basic blocks.

rdar://11973998

llvm-svn: 161852
2012-08-14 05:19:07 +00:00
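A small sketch of the single-edge condition implied above, assuming current LLVM C++ APIs (the helper name is hypothetical): two consecutive blocks can be merged when the successor has exactly one predecessor and that predecessor ends in an unconditional branch.

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// True when Pred -> BB is the only edge into BB and Pred branches to BB
// unconditionally, i.e. the two blocks can be folded into one.
static bool canFoldIntoPredecessor(BasicBlock *BB) {
  BasicBlock *Pred = BB->getSinglePredecessor();
  if (!Pred)
    return false;
  auto *BI = dyn_cast<BranchInst>(Pred->getTerminator());
  return BI && BI->isUnconditional();
}
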
Evan Cheng 249716e8ae Teach CodeGenPrep to look past bitcasts when duplicating a return instruction
into predecessor blocks to enable tail call optimization.

rdar://11958338

llvm-svn: 160894
2012-07-27 21:21:26 +00:00
Nuno Lopes 89702e94b5 make all Emit*() functions consult the TargetLibraryInfo information before creating a call to a library function.
Update all clients to pass the TLI information around.
Previous draft reviewed by Eli.

llvm-svn: 160733
2012-07-25 16:46:31 +00:00
Nadav Rotem 465834c85f Clean whitespaces.
llvm-svn: 160668
2012-07-24 10:51:42 +00:00
Benjamin Kramer 396b3adc10 CodeGenPrepare: Don't crash when TLI is not available.
This happens when codegenprepare is invoked via opt.

llvm-svn: 159457
2012-06-29 19:58:21 +00:00
Chandler Carruth aafe0918bc Move llvm/Support/IRBuilder.h -> llvm/IRBuilder.h
This was always part of the VMCore library out of necessity -- it deals
entirely in the IR. The .cpp file in fact was already part of the VMCore
library. This is just a mechanical move.

I've tried to go through and re-apply the coding standard's preferred
header sort, but at 40-ish files, I may have gotten some wrong. Please
let me know if so.

I'll be committing the corresponding updates to Clang and Polly, and
Duncan has DragonEgg.

Thanks to Bill and Eric for giving the green light for this bit of cleanup.

llvm-svn: 159421
2012-06-29 12:38:19 +00:00
Benjamin Kramer 3d38c17b59 Switch the select to branch transformation on by default.
The primitive conservative heuristic seems to give a slight overall
improvement while not regressing stuff. Make it available to wider
testing. If you notice any speed regressions (or significant code
size regressions) let me know!

llvm-svn: 156258
2012-05-06 14:25:16 +00:00
Benjamin Kramer 047d7ca0b1 CodeGenPrepare: Add a transform to turn selects into branches in some cases.
This came up when a change in block placement formed a cmov and slowed down a
hot loop by 50%:

	ucomisd	(%rdi), %xmm0
	cmovbel	%edx, %esi

cmov is a really bad choice in this context because it doesn't get branch
prediction. If we emit it as a branch, an out-of-order CPU can do a better job
(if the branch is predicted right) and avoid waiting for the slow load+compare
instruction to finish. Of course it won't help if the branch is unpredictable,
but those are really rare in practice.

This patch uses a dumb conservative heuristic: it turns all cmovs that have one
use and a direct memory operand into branches. cmovs usually save some code
size, so we disable the transform in -Os mode. In-order architectures are also
unlikely to benefit; those are covered by the
"predictableSelectIsExpensive" flag.

It would be better to reuse branch probability info here, but BPI doesn't
support select instructions currently. It would make sense to use the same
heuristics as the if-converter pass, which does the opposite direction of this
transform.


The test suite shows a small improvement here and there on corei7-level machines,
but the actual results depend a lot on the microarchitecture used. The
transformation is currently disabled by default and available by passing the
-enable-cgp-select2branch flag to the code generator.

Thanks to Chandler for the initial test case and to Evan Cheng for providing
me with comments and test-suite numbers that were more stable than mine :)

llvm-svn: 156234
2012-05-05 12:49:22 +00:00
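A hedged source-level illustration of the pattern discussed above (hypothetical function): a compare against a value loaded through a pointer feeding a select, which would otherwise be emitted as the cmov shown in the message.

// Illustrative only.  Without the transform this tends to become
// "ucomisd (%rdi), %xmm0; cmov..."; with it, the select becomes a branch, so a
// correctly predicted path never has to wait on the load + compare.
int pick(const double *p, double x, int a, int b) {
  return x > *p ? a : b;
}
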
Chandler Carruth cf1b585f60 Refactor the interface for recursively simplifying instructions to be a tad
bit simpler by handling a common case explicitly.

Also, refactor the implementation to use a worklist based walk of the
recursive users, rather than trying to use value handles to detect and
recover from RAUWs during the recursive descent. This fixes a very
subtle bug in the previous implementation where degenerate control flow
structures could cause mutually recursive instructions (PHI nodes) to
collapse in just such a way that From became equal to To after some
amount of recursion. At that point, we hit the inf-loop that the assert
at the top attempted to guard against. This problem is defined away when
not using value handles in this manner. There are lots of comments
claiming that the WeakVH will protect against just this sort of error,
but they're not accurate about the actual implementation of WeakVHs,
which do still track RAUWs.

I don't have any test case for the bug this fixes because it requires
running the recursive simplification on unreachable phi nodes. I've no
way to either run this or easily write an input that triggers it. It was
found when using instruction simplification inside the inliner when
running over the nightly test-suite.

llvm-svn: 153393
2012-03-24 21:11:24 +00:00
Pete Cooper 615fd897e0 Target override to allow CodeGenPrepare to sink address operands to intrinsics in the same way it currently does for loads and stores
llvm-svn: 152666
2012-03-13 20:59:56 +00:00
Bill Wendling 97b9359623 Do trivial CSE of dead BBs during codegen preparation.
Some BBs can become dead after codegen preparation. If we delete them here, it
could help enable tail-call optimizations later on.
<rdar://problem/10256573>

llvm-svn: 152002
2012-03-04 10:46:01 +00:00
Kostya Serebryany a5054ad2f3 Extend Attributes to 64 bits
Problem: LLVM needs more function attributes than currently available (32 bits).
One such proposed attribute is "address_safety", which shows that a function is being checked for address safety (by AddressSanitizer, SAFECode, etc).

Solution:
- extend the Attributes from 32 bits to 64 bits
- wrap the value in a class so that unsigned is never erroneously used instead
- change "unsigned" to "Attributes" throughout the code, including one place in clang.
- the class has no "operator uint64_t()", but it has "uint64_t Raw()" to support packing/unpacking.
- the class has a "safe operator bool()" to support the common idiom: if (Attributes attr = getAttrs()) useAttrs(attr);
- The CTOR from uint64_t is marked explicit, so I had to add a few explicit CTOR calls
- Add the new attribute "address_safety". Doing it in the same commit to check that attributes beyond the first 32 bits actually work.
- Some of the functions from the Attribute namespace are worth moving inside the class, but I'd prefer to have it as a separate commit.

Tested:
"make check" on Linux (32-bit and 64-bit) and Mac (10.6)
built/run spec CPU 2006 on Linux with clang -O2.


This change will break clang build in lib/CodeGen/CGCall.cpp.
The following patch will fix it.

llvm-svn: 148553
2012-01-20 17:56:17 +00:00
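A minimal sketch of the wrapper idea listed above; this is illustrative, not the actual LLVM class, and it uses C++11 explicit operator bool where the original relied on the pre-C++11 "safe bool" idiom.

#include <cstdint>

// Illustrative wrapper: the raw 64-bit payload is hidden so a plain unsigned
// can no longer be passed by accident where attribute bits are expected.
class Attributes {
  uint64_t Bits;
public:
  explicit Attributes(uint64_t B = 0) : Bits(B) {}
  uint64_t Raw() const { return Bits; }                 // packing/unpacking
  explicit operator bool() const { return Bits != 0; }  // if (Attributes a = getAttrs()) ...
};
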
Chad Rosier c24b86ffbe Propagate TargetLibraryInfo throughout ConstantFolding.cpp and
InstructionSimplify.cpp.  Other fixups as needed.
Part of rdar://10500969

llvm-svn: 145559
2011-12-01 03:08:23 +00:00
Nick Lewycky a3e7ffdae8 Fold two identical set lookups into one. No functionality change.
llvm-svn: 140821
2011-09-29 23:40:12 +00:00
Benjamin Kramer 547b6c5ecd Stop emitting instructions with the name "tmp"; they eat up memory and have to be uniqued, without any benefit.
If someone prefers %tmp42 to %42, run instnamer.

llvm-svn: 140634
2011-09-27 20:39:19 +00:00
Devang Patel c10e52a0c4 Use IRBuilder.
llvm-svn: 139156
2011-09-06 18:49:53 +00:00
Devang Patel 53771ba07c Dramatically speed up codegen prepare by a) avoiding use of the dominator tree and b) doing a separate pass over dbg.value instructions.
llvm-svn: 137908
2011-08-18 00:50:51 +00:00
Bill Wendling 8ddfc09e7a Use the getFirstInsertionPt() method instead of getFirstNonPHI + an 'isa<>'
check for a LandingPadInst.

llvm-svn: 137745
2011-08-16 20:45:24 +00:00
Bill Wendling 5a18b7c7c7 In places where it's using "getFirstNonPHI", skip the landingpad instruction if necessary.
llvm-svn: 137679
2011-08-15 23:19:54 +00:00
Bill Wendling b9c0e0db53 Skip the insertion iterator past the landingpad instruction if there.
llvm-svn: 137626
2011-08-15 18:21:07 +00:00
Chris Lattner 229907cd11 land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Nadav Rotem 707f2d7787 Fix warnings due to 132263; Thanks rdivacky.
llvm-svn: 132285
2011-05-29 08:10:47 +00:00
Nadav Rotem a9effb13dd Refactor getActionType and getTypeToTransformTo ; place all of the 'decision'
code in one place. Re-apply 131534 and fix the multi-step promotion of integers.

llvm-svn: 132217
2011-05-27 21:03:13 +00:00
Chandler Carruth 07f5b65e63 Fix warning about || and && without explicit grouping.
This looks like it flagged an actual bug. Devang, please review. I added
the parentheses that change behavior, but make the behavior more closely
match the commit log's intent.

llvm-svn: 132165
2011-05-26 23:37:58 +00:00
Devang Patel bf22998f21 Do not insert anything after terminator.
llvm-svn: 132164
2011-05-26 23:16:48 +00:00
Devang Patel 252f0079a9 Do not move DBG_VALUE in middle of PHI nodes.
llvm-svn: 132161
2011-05-26 22:43:14 +00:00
Devang Patel 0da5250bcd If llvm.dbg.value and the value instruction it refers to are far apart, then isel may not be able to find the corresponding node for llvm.dbg.value during DAG construction. Make isel's life easier by removing this distance between llvm.dbg.value and its value instruction.
llvm-svn: 132151
2011-05-26 21:51:06 +00:00
Frits van Bommel ad964559ef Add a parameter to ConstantFoldTerminator() that callers can use to ask it to also clean up the condition of any conditional terminator it folds to be unconditional, if that turns the condition into dead code. This just means it calls RecursivelyDeleteTriviallyDeadInstructions() in strategic spots. It defaults to the old behavior.
I also changed -simplifycfg, -jump-threading and -codegenprepare to use this to produce slightly better code without any extra cleanup passes (AFAICT this was the only place in -simplifycfg where now-dead conditions of replaced terminators weren't being cleaned up). The only other user of this function is -sccp, but I didn't read that thoroughly enough to figure out whether it might be holding pointers to instructions that could be deleted by this.

llvm-svn: 131855
2011-05-22 16:24:18 +00:00
Duncan Sands 3d9407f4eb Revert commit 131534 since it seems to have broken several buildbots.
Original log entry:
Refactor getActionType and getTypeToTransformTo ; place all of the 'decision'
code in one place.

llvm-svn: 131536
2011-05-18 14:57:56 +00:00
Nadav Rotem c5c27ede55 Refactor getActionType and getTypeToTransformTo ; place all of the 'decision'
code in one place.

llvm-svn: 131534
2011-05-18 12:26:38 +00:00
Chris Lattner af1bccec68 Fix a bug where RecursivelyDeleteTriviallyDeadInstructions could
delete the instruction pointed to by CGP's current instruction
iterator, leading to a crash on the testcase.  This fixes PR9578.

llvm-svn: 129200
2011-04-09 07:05:44 +00:00
Cameron Zwarich 74157ab3e5 Debug intrinsics must be skipped at the beginnings and ends of blocks, lest they
affect the generated code.

llvm-svn: 128217
2011-03-24 16:34:59 +00:00
Cameron Zwarich 2edfe778ec It is enough for the CallInst to have no uses to be made a tail call with a ret
void; it doesn't need to have a void type.

llvm-svn: 128212
2011-03-24 15:54:11 +00:00
Devang Patel 8f606d7b9b s/UpdateDT/ModifiedDT/g
llvm-svn: 128211
2011-03-24 15:35:25 +00:00
Cameron Zwarich 4649f17db1 Do early taildup of ret in CodeGenPrepare for potential tail calls that have a
void return type. This fixes PR9487.

llvm-svn: 128197
2011-03-24 04:52:10 +00:00
Cameron Zwarich 0e331c05ae Use an early return instead of a long if block.
llvm-svn: 128196
2011-03-24 04:52:07 +00:00
Cameron Zwarich dd84bcce8f When UpdateDT is set, DT is invalid, which could cause problems when trying to
use it later. I couldn't make a test that hits this with the current code.

llvm-svn: 128195
2011-03-24 04:52:04 +00:00
Cameron Zwarich 47e7175fe9 Check for TLI so that -codegenprepare can be used from opt.
llvm-svn: 128194
2011-03-24 04:51:51 +00:00
Evan Cheng 0663f23bd8 Re-apply r127953 with fixes: eliminate empty return block if it has no predecessors; update dominator tree if cfg is modified.
llvm-svn: 127981
2011-03-21 01:19:09 +00:00
Daniel Dunbar 327cd36f74 Revert r127953, "SimplifyCFG has stopped duplicating returns into predecessors
to canonicalize IR", it broke a lot of things.

llvm-svn: 127954
2011-03-19 21:47:14 +00:00
Evan Cheng 824a711305 SimplifyCFG has stopped duplicating returns into predecessors to canonicalize IR
to have a single return block (at least getting there) for optimizations. This
is general goodness, but it would prevent some tailcall optimizations.
One specific case is code like this:
int f1(void);
int f2(void);
int f3(void);
int f4(void);
int f5(void);
int f6(void);
int foo(int x) {
  switch(x) {
  case 1: return f1();
  case 2: return f2();
  case 3: return f3();
  case 4: return f4();
  case 5: return f5();
  case 6: return f6();
  }
  return 0;
}

=>
LBB0_2:                                 ## %sw.bb
  callq   _f1
  popq    %rbp
  ret
LBB0_3:                                 ## %sw.bb1
  callq   _f2
  popq    %rbp
  ret
LBB0_4:                                 ## %sw.bb3
  callq   _f3
  popq    %rbp
  ret

This patch teaches codegenprep to duplicate returns when the return value
is a phi and where the phi operands are produced by tail calls followed by
an unconditional branch:

sw.bb7:                                           ; preds = %entry
  %call8 = tail call i32 @f5() nounwind
  br label %return
sw.bb9:                                           ; preds = %entry
  %call10 = tail call i32 @f6() nounwind
  br label %return
return:
  %retval.0 = phi i32 [ %call10, %sw.bb9 ], [ %call8, %sw.bb7 ], ... [ 0, %entry ]
  ret i32 %retval.0

This allows codegen to generate better code like this:

LBB0_2:                                 ## %sw.bb
        jmp     _f1                     ## TAILCALL
LBB0_3:                                 ## %sw.bb1
        jmp     _f2                     ## TAILCALL
LBB0_4:                                 ## %sw.bb3
        jmp     _f3                     ## TAILCALL

rdar://9147433

llvm-svn: 127953
2011-03-19 17:17:39 +00:00
Cameron Zwarich 338d362200 Roll r127459 back in:
Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127498
2011-03-11 21:52:04 +00:00
Daniel Dunbar 94ccb27b43 Revert r127459, "Optimize trivial branches in CodeGenPrepare, which often get
created from the", it broke some GCC test suite tests.

llvm-svn: 127477
2011-03-11 19:30:30 +00:00
Cameron Zwarich cc27b3acc4 Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.

This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.

llvm-svn: 127459
2011-03-11 04:54:27 +00:00
Cameron Zwarich 13c885d193 Fix PR9398 - 10% of llc compile time is spent in Value::getNumUses. This reduces
the percentage of time spent in CodeGenPrepare when llcing 403.gcc from 12.6% to
1.8% of total llc time.

llvm-svn: 127069
2011-03-05 08:12:26 +00:00
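A small illustration of the kind of change such hotspots usually call for, assuming current Value APIs (the commit itself does not spell out its exact fix): Value::getNumUses() walks the whole use list, while threshold checks can stop early.

#include "llvm/IR/Value.h"
using namespace llvm;

// getNumUses() is O(#uses); these queries stop as soon as the answer is known.
static bool hasSingleUse(const Value *V) { return V->hasOneUse(); }
static bool hasAtMostTwoUses(const Value *V) { return !V->hasNUsesOrMore(3); }
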
Cameron Zwarich 86ade9510f Remove some more unused code that I missed.
llvm-svn: 126826
2011-03-02 03:48:29 +00:00
Cameron Zwarich 5dd2aa2615 Eliminate the unused CodeGenPrepare option to split critical edges.
llvm-svn: 126825
2011-03-02 03:31:46 +00:00
Cameron Zwarich b7f8eaafa3 Stop computing the number of uses twice per value in CodeGenPrepare's sinking of
addressing code. On 403.gcc this almost halves CodeGenPrepare time and reduces
total llc time by 9.5%. Unfortunately, getNumUses() is still the hottest function
in llc.

llvm-svn: 126782
2011-03-01 21:13:53 +00:00
Chris Lattner 86d56c651d fix rdar://8878965, a regression I introduced with the recent
llvm.objectsize changes.

llvm-svn: 123771
2011-01-18 20:53:04 +00:00
Chris Lattner af26390790 temporarily revert r123526. While working on a follow-on patch I
realized that ConstantFoldTerminator doesn't preserve dominfo.

llvm-svn: 123527
2011-01-15 07:51:19 +00:00
Chris Lattner 8df83c4a24 fix rdar://8785296 - -fcatch-undefined-behavior generates inefficient code
The basic issue is that isel (very reasonably!) expects conditional branches
to be folded, so CGP leaving around a bunch of dead computation feeding
conditional branches isn't such a good idea.  Just fold branches on constants
into unconditional branches.

llvm-svn: 123526
2011-01-15 07:36:13 +00:00
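A hedged sketch of the cleanup described above, assuming current LLVM utility APIs (the helper name is hypothetical): once the condition is a constant, the conditional branch can be folded so isel only ever sees an unconditional one.

#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Transforms/Utils/Local.h"
using namespace llvm;

// If BB ends in "br i1 <constant>, ...", rewrite it into an unconditional
// branch; ConstantFoldTerminator removes the now-dead edge as well.
static bool foldConstantCondBranch(BasicBlock *BB) {
  auto *BI = dyn_cast<BranchInst>(BB->getTerminator());
  if (!BI || BI->isUnconditional() || !isa<ConstantInt>(BI->getCondition()))
    return false;
  return ConstantFoldTerminator(BB);
}
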
Chris Lattner ee588defc6 simplify code, no functionality change.
llvm-svn: 123525
2011-01-15 07:29:01 +00:00
Chris Lattner 1b93be501d Now that instruction optzns can update the iterator as they go, we can
have objectsize folding recursively simplify away its result when it
folds.  It is important to catch this here, because otherwise we won't
eliminate the cross-block values at isel and other times.

llvm-svn: 123524
2011-01-15 07:25:29 +00:00
Chris Lattner 7a2771440f make the current instruction iterator an ivar, allowing xforms that
potentially invalidate it (like inline asm lowering) to be sunk into
their proper place, cleaning up a ton of code.

llvm-svn: 123523
2011-01-15 07:14:54 +00:00
Cameron Zwarich 84986b298a Make more passes preserve dominators (or state that they preserve dominators if
they already do). This removes two dominator recomputations prior to isel,
which is a 1% improvement in total llc time for 403.gcc.

The only potentially suspect thing is making GCStrategy recompute dominators if
it used a custom lowering strategy.

llvm-svn: 123064
2011-01-08 17:01:52 +00:00
Cameron Zwarich 9ec19ea06a Add the CallInst optimizations that don't involve expanding inline assembly to
OptimizeInst() so that they can be used on a worklist instruction.

llvm-svn: 122945
2011-01-06 02:56:42 +00:00
Cameron Zwarich d28c78eb4f Move the GEP handling in CodeGenPrepare to OptimizeInst().
llvm-svn: 122944
2011-01-06 02:44:52 +00:00
Cameron Zwarich 14ac865ca9 Split the optimizations in CodeGenPrepare that don't manipulate the iterators
into a separate function, so that it can be called from a loop using a worklist
rather than a loop traversing a whole basic block.

llvm-svn: 122943
2011-01-06 02:37:26 +00:00
Cameron Zwarich ce3b930a98 Stop reallocating SunkAddrs for each basic block. When we move to an instruction
worklist, the key will need to become std::pair<BasicBlock*, Value*>.

llvm-svn: 122932
2011-01-06 00:42:50 +00:00
Cameron Zwarich b62ccb241b Add some more statistics to CodeGenPrepare.
llvm-svn: 122891
2011-01-05 17:47:38 +00:00
Cameron Zwarich ced753fadf Add some stats to CodeGenPrepare to make it easier to speed it up without
regressing code quality.

llvm-svn: 122887
2011-01-05 17:27:27 +00:00
Cameron Zwarich f4e13699e7 Avoid finding loop back edges when we are not splitting critical edges in
CodeGenPrepare (which is the default behavior).

llvm-svn: 122801
2011-01-04 04:43:31 +00:00
Cameron Zwarich 43cecb1200 Switch a worklist in CodeGenPrepare to SmallVector and increase the inline
capacity on the Visited SmallPtrSet. On 403.gcc, this is about a 4.5% speedup of
CodeGenPrepare time (which itself is 10% of time spent in the backend).

This is progress towards PR8889.

llvm-svn: 122741
2011-01-03 06:33:01 +00:00
Owen Anderson 5d690d4168 It is possible for SimplifyCFG to cause PHI nodes to become redundant too late in the optimization
pipeline to be caught by instcombine, and it's not feasible to catch them in SimplifyCFG because the
use-lists are in an inconsistent state at the point where it could know that it needs to simplify them.
Instead, have CodeGenPrepare look for trivially redundant PHIs as part of its general cleanup effort.

llvm-svn: 122516
2010-12-23 20:57:35 +00:00
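A minimal sketch of what "trivially redundant" means here, assuming current IR APIs (the helper name is hypothetical): a PHI whose incoming values are all the same value, ignoring self-references, can simply be replaced by that value.

#include "llvm/IR/Instructions.h"
using namespace llvm;

// Returns the single value V such that every incoming value of PN is either V
// or PN itself, or nullptr when the PHI is not trivially redundant.
static Value *getRedundantPHIValue(PHINode *PN) {
  Value *Common = nullptr;
  for (Value *Incoming : PN->incoming_values()) {
    if (Incoming == PN)
      continue;
    if (Common && Incoming != Common)
      return nullptr;
    Common = Incoming;
  }
  return Common;
}
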
Chris Lattner fb888622c3 revert r122164, I'm going to go with a different approach.
llvm-svn: 122168
2010-12-19 04:23:03 +00:00
Chris Lattner 583ec6fa44 first step to fixing PR8642: don't fold away empty basic blocks
which have trapping constant exprs in them due to PHI nodes.
Eliminating them can cause the constant expr to be evaluated
on new paths if the input edges are critical.

llvm-svn: 122164
2010-12-19 03:02:34 +00:00
Owen Anderson 8ba5f39f70 Second attempt at fixing the performance regressions introduced
by my recent GVN improvement.  Rather than looking through a single layer of
PHI nodes when attempting to sink GEPs, we need to iteratively
look through arbitrary PHI nests.

llvm-svn: 120202
2010-11-27 08:15:55 +00:00
Owen Anderson dfb8c3bbfc When folding addressing modes in CodeGenPrepare, attempt to look through PHI nodes
if all the operands of the PHI are equivalent.  This allows CodeGenPrepare to undo
unprofitable PRE transforms.

llvm-svn: 119853
2010-11-19 22:15:03 +00:00
John Thompson e8360b7182 Inline asm multiple alternative constraints development phase 2 - improved basic logic, added initial platform support.
llvm-svn: 117667
2010-10-29 17:29:13 +00:00
Owen Anderson 6c18d1aac0 Get rid of static constructors for pass registration. Instead, every pass exposes an initializeMyPassFunction(), which
must be called in the pass's constructor.  This function uses static dependency declarations to recursively initialize
the pass's dependencies.

Clients that only create passes through the createFooPass() APIs will require no changes.  Clients that want to use the
CommandLine options for passes will need to manually call the appropriate initialization functions in PassInitialization.h
before parsing commandline arguments.

I have tested this with all standard configurations of clang and llvm-gcc on Darwin.  It is possible that there are problems
with the static dependencies that will only be visible with non-standard options.  If you encounter any crash in pass
registration/creation, please send the testcase to me directly.

llvm-svn: 116820
2010-10-19 17:21:58 +00:00
Owen Anderson df7a4f2515 Now with fewer extraneous semicolons!
llvm-svn: 115996
2010-10-07 22:25:06 +00:00
Jakob Stoklund Olesen eb12f49fb7 Try again to disable critical edge splitting in CodeGenPrepare.
The bug that broke i386 linux has been fixed in r115191.

llvm-svn: 115204
2010-09-30 20:51:52 +00:00
Jakob Stoklund Olesen 415a7a6fec Revert "Disable codegen prepare critical edge splitting. Machine instruction passes now"
This reverts revision 114633. It was breaking llvm-gcc-i386-linux-selfhost.

It seems there is a downstream bug that is exposed by
-cgp-critical-edge-splitting=0. When that bug is fixed, this patch can go back
in.

Note that the changes to tailcallfp2.ll are not reverted. They were good and
are required.

llvm-svn: 114859
2010-09-27 18:43:48 +00:00
Evan Cheng 794aaa79e2 Disable codegen prepare critical edge splitting. Machine instruction passes now
break critical edges on demand.

llvm-svn: 114633
2010-09-23 06:55:34 +00:00
Bob Wilson b6832a4372 When moving zext/sext to be folded with a load, ignore the issue of whether
truncates are free only in the case where the extended type is legal but the
load type is not.  If both types are illegal, such as when they are too big,
the load may not be legalized into an extended load.

llvm-svn: 114568
2010-09-22 18:44:56 +00:00
Bob Wilson 4ddcb6a6b4 Move a sign-extend or a zero-extend of a load to the same basic block as the
load when the type of the load is not legal, even if truncates are not free.
The load is going to be legalized to an extending load anyway.

llvm-svn: 114488
2010-09-21 21:54:27 +00:00
Bob Wilson ff714f9992 Clarify a comment.
llvm-svn: 114487
2010-09-21 21:44:14 +00:00
Dale Johannesen f95f59a0c2 When substituting sunkaddrs into indirect arguments of an asm, we were
walking the asm arguments once and stashing their Values.  This is
wrong because the same memory location can be in the list twice, and
if the first one has a sunkaddr substituted, the stashed value for the
second one will be wrong (use-after-free).  PR 8154.

llvm-svn: 114104
2010-09-16 18:30:55 +00:00
Eric Christopher e3a89f9f9c Remove unused variable.
llvm-svn: 113769
2010-09-13 18:27:59 +00:00
John Thompson 1094c80281 Added skeleton for inline asm multiple alternative constraint support.
llvm-svn: 113766
2010-09-13 18:15:37 +00:00
Chris Lattner 8df99b523e remove some llvmcontext arguments that are now dead post-refactoring.
llvm-svn: 112104
2010-08-25 23:00:45 +00:00
Evan Cheng 8b637b177c Add an option to disable codegen prepare critical edge splitting. In theory, PHI elimination is already doing all (most?) of the splitting needed. But machine-licm and machine-sink seem to miss some important optimizations when splitting is disabled.
llvm-svn: 111224
2010-08-17 01:34:49 +00:00
Owen Anderson a7aed18624 Reapply r110396, with fixes to appease the Linux buildbot gods.
llvm-svn: 110460
2010-08-06 18:33:48 +00:00
Owen Anderson bda59bd247 Revert r110396 to fix buildbots.
llvm-svn: 110410
2010-08-06 00:23:35 +00:00
Owen Anderson 755aceb5d0 Don't use PassInfo* as a type identifier for passes. Instead, use the address of the static
ID member as the sole unique type identifier.  Clean up APIs related to this change.

llvm-svn: 110396
2010-08-05 23:42:04 +00:00
Owen Anderson a57b97e7e7 Fix a batch of RegisterPass<> to INITIALIZE_PASS() conversions.
llvm-svn: 109045
2010-07-21 22:09:45 +00:00
Gabor Greif 6d673953e3 eliminate CallInst::ArgOffset
llvm-svn: 108522
2010-07-16 09:38:02 +00:00
Gabor Greif 743b3fd196 use getArgOperand (corrected by CallInst::ArgOffset) instead of getOperand
llvm-svn: 107273
2010-06-30 09:19:23 +00:00
Dale Johannesen ce97d55ad9 The hasMemory argument is irrelevant to how the argument
for an "i" constraint should get lowered; PR 6309.  While
this argument was passed around a lot, this is the only
place it was used, so it goes away from a lot of other
places.

llvm-svn: 106893
2010-06-25 21:55:36 +00:00
Gabor Greif 4a39b84a9d use ArgOperand API
llvm-svn: 106707
2010-06-24 00:44:01 +00:00