Commit Graph

484 Commits

Author SHA1 Message Date
Duncan P. N. Exon Smith 946fdcc50c IR: Remove MDNodeFwdDecl
Remove `MDNodeFwdDecl` (as promised in r226481).  Aside from API
changes, there's no real functionality change here.
`MDNode::getTemporary()` now forwards to `MDTuple::getTemporary()`,
which returns a tuple with `isTemporary()` equal to true.

The main point is that we can now add temporaries of other `MDNode`
subclasses, needed for PR22235 (I introduced `MDNodeFwdDecl` in the
first place because I didn't recognize this need, and thought they were
only needed to handle forward references).

A few things left out of (or highlighted by) this commit:

  - I've had to remove the (few) uses of `std::unique_ptr<>` to deal
    with temporaries, since the destructor is no longer public.
    `getTemporary()` should probably return the equivalent of
    `std::unique_ptr<T, MDNode::deleteTemporary>`.
  - `MDLocation::getTemporary()` doesn't exist yet (worse, it actually
    does exist, but does the wrong thing: `MDNode::getTemporary()` is
    inherited and returns an `MDTuple`).
  - `MDNode` now only has one subclass, `UniquableMDNode`, and the
    distinction between them is actually somewhat confusing.

I'll fix those up next.
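
For illustration, a minimal sketch of the custom-deleter idea from the first
bullet (the `TempMDNodeDeleter` name and the surrounding `Context` are made up
here; only `getTemporary()`, `isTemporary()`, and `MDNode::deleteTemporary()`
come from the actual API):

    // Hypothetical owning wrapper: destroys a temporary node via
    // MDNode::deleteTemporary(), since the destructor is not public.
    struct TempMDNodeDeleter {
      void operator()(MDNode *N) const { MDNode::deleteTemporary(N); }
    };
    typedef std::unique_ptr<MDNode, TempMDNodeDeleter> OwnedTempMDNode;

    OwnedTempMDNode Temp(MDNode::getTemporary(Context, None));
    assert(Temp->isTemporary() && "getTemporary() should mark the node");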

llvm-svn: 226501
2015-01-19 20:36:39 +00:00
Chandler Carruth 66b3130cda [PM] Split the AssumptionTracker immutable pass into two separate APIs:
a cache of assumptions for a single function, and an immutable pass that
manages those caches.

The motivation for this change is twofold. Immutable analyses are
really hacks around the current pass manager design and don't exist in
the new design. This is usually OK, but it requires that the core logic
of an immutable pass be reasonably partitioned off from the pass logic.
This change does precisely that. As a consequence it also paves the way
for the *many* utility functions that deal in the assumptions to live in
both pass manager worlds by creating a separate non-pass object with
its own independent API that they all rely on. Now, the only bits of the
system that deal with the actual pass mechanics are those that actually
need to deal with the pass mechanics.

Once this separation is made, several simplifications become pretty
obvious in the assumption cache itself. Rather than using a set and
callback value handles, it can just be a vector of weak value handles.
The callers can easily skip the handles that are null, and eventually we
can wrap all of this up behind a filter iterator.

For now, this adds boilerplate to the various passes, but this kind of
boilerplate will end up making it possible to port these passes to the
new pass manager, and so it will end up factored away pretty reasonably.
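
Roughly, the cache shape described above looks like this (a sketch; the
container and iteration style are assumptions, not the committed interface):

    // One vector of weak handles per function; entries go null when the
    // underlying @llvm.assume call is deleted, and callers just skip them.
    std::vector<WeakVH> Assumptions;

    for (WeakVH &VH : Assumptions) {
      if (!VH)
        continue;                        // stale handle, skip it
      Value *V = VH;
      auto *Assume = cast<CallInst>(V);  // always an @llvm.assume call
      // ... consult the assumption ...
      (void)Assume;
    }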

llvm-svn: 225131
2015-01-04 12:03:27 +00:00
Michael Kuperstein fffb6996c9 The inliner needs to fix up debug information for llvm.dbg.declare, not only for llvm.dbg.value.
Patch by Amjad Aboud

Differential Revision: http://reviews.llvm.org/D6525

llvm-svn: 224015
2014-12-11 12:41:10 +00:00
Duncan P. N. Exon Smith 5bf8fef580 IR: Split Metadata from Value
Split `Metadata` away from the `Value` class hierarchy, as part of
PR21532.  Assembly and bitcode changes are in the wings, but this is the
bulk of the change for the IR C++ API.

I have a follow-up patch prepared for `clang`.  If this breaks other
sub-projects, I apologize in advance :(.  Help me compile it on Darwin and
I'll try to fix it.  FWIW, the errors should be easy to fix, so it may
be simpler to just fix it yourself.

This breaks the build for all metadata-related code that's out-of-tree.
Rest assured the transition is mechanical and the compiler should catch
almost all of the problems.

Here's a quick guide for updating your code:

  - `Metadata` is the root of a class hierarchy with three main classes:
    `MDNode`, `MDString`, and `ValueAsMetadata`.  It is distinct from
    the `Value` class hierarchy.  It is typeless -- i.e., instances do
    *not* have a `Type`.

  - `MDNode`'s operands are all `Metadata *` (instead of `Value *`).

  - `TrackingVH<MDNode>` and `WeakVH` referring to metadata can be
    replaced with `TrackingMDNodeRef` and `TrackingMDRef`, respectively.

    If you're referring solely to resolved `MDNode`s -- post graph
    construction -- just use `MDNode*`.

  - `MDNode` (and the rest of `Metadata`) have only limited support for
    `replaceAllUsesWith()`.

    As long as an `MDNode` is pointing at a forward declaration -- the
    result of `MDNode::getTemporary()` -- it maintains a side map of its
    uses and can RAUW itself.  Once the forward declarations are fully
    resolved, RAUW support is dropped on the ground.  This means that
    uniquing collisions on changing operands cause nodes to become
    "distinct".  (This already happened fairly commonly, whenever an
    operand went to null.)

    If you're constructing complex (non-self-referential) `MDNode` cycles,
    you need to call `MDNode::resolveCycles()` on each node (or on a
    top-level node that somehow references all of the nodes).  Also,
    don't do that.  Metadata cycles (and the RAUW machinery needed to
    construct them) are expensive.

  - An `MDNode` can only refer to a `Constant` through a bridge called
    `ConstantAsMetadata` (one of the subclasses of `ValueAsMetadata`).

    As a side effect, accessing an operand of an `MDNode` that is known
    to be, e.g., `ConstantInt`, takes three steps: first, cast from
    `Metadata` to `ConstantAsMetadata`; second, extract the `Constant`;
    third, cast down to `ConstantInt`.

    The eventual goal is to introduce `MDInt`/`MDFloat`/etc. and have
    metadata schema owners transition away from using `Constant`s when
    the type isn't important (and they don't care about referring to
    `GlobalValue`s).

    In the meantime, I've added transitional API to the `mdconst`
    namespace that matches semantics with the old code, in order to
    avoid adding the error-prone three-step equivalent to every call
    site.  If your old code was:

        MDNode *N = foo();
        bar(isa             <ConstantInt>(N->getOperand(0)));
        baz(cast            <ConstantInt>(N->getOperand(1)));
        bak(cast_or_null    <ConstantInt>(N->getOperand(2)));
        bat(dyn_cast        <ConstantInt>(N->getOperand(3)));
        bay(dyn_cast_or_null<ConstantInt>(N->getOperand(4)));

    you can trivially match its semantics with:

        MDNode *N = foo();
        bar(mdconst::hasa               <ConstantInt>(N->getOperand(0)));
        baz(mdconst::extract            <ConstantInt>(N->getOperand(1)));
        bak(mdconst::extract_or_null    <ConstantInt>(N->getOperand(2)));
        bat(mdconst::dyn_extract        <ConstantInt>(N->getOperand(3)));
        bay(mdconst::dyn_extract_or_null<ConstantInt>(N->getOperand(4)));

    and when you transition your metadata schema to `MDInt`:

        MDNode *N = foo();
        bar(isa             <MDInt>(N->getOperand(0)));
        baz(cast            <MDInt>(N->getOperand(1)));
        bak(cast_or_null    <MDInt>(N->getOperand(2)));
        bat(dyn_cast        <MDInt>(N->getOperand(3)));
        bay(dyn_cast_or_null<MDInt>(N->getOperand(4)));

  - A `CallInst` -- specifically, intrinsic instructions -- can refer to
    metadata through a bridge called `MetadataAsValue`.  This is a
    subclass of `Value` where `getType()->isMetadataTy()`.

    `MetadataAsValue` is the *only* class that can legally refer to a
    `LocalAsMetadata`, which is a bridged form of non-`Constant` values
    like `Argument` and `Instruction`.  It can also refer to any other
    `Metadata` subclass.
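
    For illustration, the bridging described in the last bullet (a sketch in
    the style of the examples above; `foo()`, `bar()`, and `Ctx` are
    placeholders for an existing node, a local value, and the LLVMContext):

        MDNode *N = foo();                 // any Metadata can be wrapped
        Argument *A = bar();               // a non-constant local value

        Value *NodeArg  = MetadataAsValue::get(Ctx, N);
        Value *LocalArg = MetadataAsValue::get(Ctx, LocalAsMetadata::get(A));
        assert(NodeArg->getType()->isMetadataTy());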

(I'll break all your testcases in a follow-up commit, when I propagate
this change to assembly.)

llvm-svn: 223802
2014-12-09 18:38:53 +00:00
Duncan P. N. Exon Smith de36e8040f Revert "IR: MDNode => Value"
Instead, we're going to separate metadata from the Value hierarchy.  See
PR21532.

This reverts commit r221375.
This reverts commit r221373.
This reverts commit r221359.
This reverts commit r221167.
This reverts commit r221027.
This reverts commit r221024.
This reverts commit r221023.
This reverts commit r220995.
This reverts commit r220994.

llvm-svn: 221711
2014-11-11 21:30:22 +00:00
Reid Kleckner dd3f3edafa Revert "Transforms: reapply SVN r219899"
This reverts commit r220811 and r220839. It made an incorrect change to
musttail handling.

llvm-svn: 221226
2014-11-04 02:02:14 +00:00
Duncan P. N. Exon Smith 3872d0084c IR: MDNode => Value: Instruction::getMetadata()
Change `Instruction::getMetadata()` to return `Value` as part of
PR21433.

Update most callers to use `Instruction::getMDNode()`, which wraps the
result in a `cast_or_null<MDNode>`.
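
For illustration, the transitional call-site pattern this implies (a sketch;
the metadata kind chosen here is arbitrary):

    // getMetadata() now returns a Value*; getMDNode() is the convenience
    // wrapper that casts it back for callers that still want an MDNode.
    if (MDNode *ProfMD = I->getMDNode(LLVMContext::MD_prof)) {
      // same as cast_or_null<MDNode>(I->getMetadata(LLVMContext::MD_prof))
      (void)ProfMD;
    }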

llvm-svn: 221024
2014-11-01 00:10:31 +00:00
Saleem Abdulrasool d178ada55e Transforms: reapply SVN r219899
This restores the commit from SVN r219899 with an additional change to ensure
that the CodeGen is correct for the case that was identified as being incorrect
(originally PR7272).

In the case that during inlining we need to synthesize a value on the stack
(i.e. for passing a value byval), any call involving that alloca must be
stripped of its tailness, as the restriction that it does not access the
parent's stack no longer holds.  Unfortunately, a single alloca can cause a
rippling effect throughout the inlining, as the value may be aliased or may be
mutated through an escaped external call.  As such, we simply track whether an
alloca has been introduced in the frame during inlining, and strip any tail
calls.

llvm-svn: 220811
2014-10-28 18:27:37 +00:00
Paul Robinson f60e0a160f Do not attribute static allocas to the call site's DebugLoc.
When functions are inlined, instructions without debug information are
attributed to the call site's DebugLoc. After inlining, inlined static
allocas are moved to the caller's entry block, adjacent to the caller's
original static alloca instructions. By retaining the call site's
DebugLoc, these instructions could cause instructions that were
subsequently inserted at the entry block to pick up the same DebugLoc.

Patch by Wolfgang Pieb!

llvm-svn: 220255
2014-10-21 01:00:55 +00:00
Hal Finkel 68dc3c7ab2 Preserve non-byval pointer alignment attributes using @llvm.assume when inlining
For pointer-typed function arguments, enhanced alignment can be asserted using
the 'align' attribute. When inlining, if this enhanced alignment information is
not otherwise available, preserve it using @llvm.assume-based alignment
assumptions.
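
A minimal sketch of what the emitted assumption can look like when built with
IRBuilder (variable names and the surrounding setup are assumptions; the patch
itself may use a different helper):

    // Assert that Ptr is at least Align-byte aligned: ptrtoint, mask off the
    // low bits, compare to zero, and feed the result to @llvm.assume.
    Value *PtrInt    = Builder.CreatePtrToInt(Ptr, IntPtrTy);
    Value *LowBits   = Builder.CreateAnd(PtrInt,
                                         ConstantInt::get(IntPtrTy, Align - 1));
    Value *IsAligned = Builder.CreateICmpEQ(LowBits,
                                            ConstantInt::get(IntPtrTy, 0));
    Builder.CreateCall(Intrinsic::getDeclaration(M, Intrinsic::assume),
                       IsAligned);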

llvm-svn: 219876
2014-10-15 23:44:41 +00:00
Benjamin Kramer 0bd147da17 Simplify code. No functionality change.
llvm-svn: 217726
2014-09-13 12:38:49 +00:00
Hal Finkel 60db05896a Make use of @llvm.assume in ValueTracking (computeKnownBits, etc.)
This change, which allows @llvm.assume to be used from within computeKnownBits
(and other associated functions in ValueTracking), adds some (optional)
parameters to computeKnownBits and friends. These functions now (optionally)
take a "context" instruction pointer, an AssumptionTracker pointer, and also a
DomTree pointer, and most of the changes are just to pass this new information
when it is easily available from InstSimplify, InstCombine, etc.
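
Concretely, a call site might now look something like this (a sketch; the
exact parameter order and names are assumptions based on the description
above):

    APInt KnownZero(BitWidth, 0), KnownOne(BitWidth, 0);
    // The trailing parameters are the new, optional context: the assumption
    // tracker, the instruction at which the query is made, and the dom tree.
    computeKnownBits(V, KnownZero, KnownOne, DL, /*Depth=*/0,
                     AT, /*CxtI=*/CxtI, DT);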

As explained below, the significant conceptual change is that known properties
of a value might depend on the control-flow location of the use (we care
that the @llvm.assume dominates the use, since assumptions have
control-flow dependencies). This means that, when we ask if bits are known in a
value, we might get different answers for different uses.

The significant changes are all in ValueTracking. Two main changes: First, as
with the rest of the code, new parameters need to be passed around. To make
this easier, I grouped them into a structure, and I made internal static
versions of the relevant functions that take this structure as a parameter. The
new code does what you might expect: it looks for @llvm.assume calls that make
use of the value we're trying to learn something about (often indirectly),
attempts to pattern match that expression, and uses the result if successful.
By making use of the AssumptionTracker, the process of finding @llvm.assume
calls is not expensive.

Part of the structure being passed around inside ValueTracking is a set of
already-considered @llvm.assume calls. This prevents a query using, for
example, assume(a == b), from recursing on itself. The context and DT params
are used to find applicable assumptions. An assumption needs to dominate the
context instruction, or come after it deterministically. In this latter case we
only handle the specific case where both the assumption and the context
instruction are in the same block, and we need to exclude assumptions from
being used to simplify their own ephemeral values (those which contribute only
to the assumption) because otherwise the assumption would prove its feeding
comparison trivial and would be removed.

This commit adds the plumbing and the logic for a simple masked-bit propagation
(just enough to write a regression test). Future commits add more patterns
(and, correspondingly, more regression tests).

llvm-svn: 217342
2014-09-07 18:57:58 +00:00
Hal Finkel 74c2f355d2 Add an Assumption-Tracking Pass
This adds an immutable pass, AssumptionTracker, which keeps a cache of
@llvm.assume call instructions within a module. It uses callback value handles
to keep stale functions and intrinsics out of the map, and it relies on any
code that creates new @llvm.assume calls to notify it of the new instructions.
The benefit is that code needing to find @llvm.assume intrinsics can do so
directly, without scanning the function, thus allowing the cost of @llvm.assume
handling to be negligible when none are present.

The current design is intended to be lightweight. We don't keep track of
anything until we need a list of assumptions in some function. The first time
this happens, we scan the function. After that, we add/remove @llvm.assume
calls from the cache in response to registration calls and ValueHandle
callbacks.
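
A rough sketch of the resulting flow (the method names follow this description
and are assumptions, not a verbatim copy of the header):

    // Whoever materializes a new assumption must tell the tracker about it.
    CallInst *NewAssume = Builder.CreateCall(AssumeFn, Cond);
    AT->registerAssumption(NewAssume);

    // Consumers then get the (lazily populated) per-function list directly,
    // without rescanning the function.
    for (CallInst *Assume : AT->assumptions(&F)) {
      // ... inspect Assume->getArgOperand(0) ...
    }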

There are no new direct test cases for this pass, but because it calls its
validation function upon module finalization, we'll pick up detectable
inconsistencies from the other tests that touch @llvm.assume calls.

This pass will be used by follow-up commits that make use of @llvm.assume.

llvm-svn: 217334
2014-09-07 12:44:26 +00:00
James Molloy 6b95d8ed36 Enable noalias metadata by default and swap the order of the SLP and Loop vectorizers by default.
After some time maturing, hopefully the flags themselves will be removed.

llvm-svn: 217144
2014-09-04 13:23:08 +00:00
Hal Finkel 0c083024f0 Feed AA to the inliner and use AA->getModRefBehavior in AddAliasScopeMetadata
This feeds AA through the IFI structure into the inliner so that
AddAliasScopeMetadata can use AA->getModRefBehavior to figure out which
functions only access their arguments (instead of just hard-coding some
knowledge of memory intrinsics). Most of the information is only available from
BasicAA; this is important for preserving alias scoping information for
target-specific intrinsics when doing the noalias parameter attribute to
metadata conversion.

llvm-svn: 216866
2014-09-01 09:01:39 +00:00
Hal Finkel cbb85f249e Fix AddAliasScopeMetadata again - alias.scope must be a complete description
I thought that I had fixed this problem in r216818, but I did not do a very
good job. The underlying issue is that when we add alias.scope metadata we are
asserting that this metadata completely describes the aliasing relationships
within the current aliasing scope domain, and so in the context of translating
noalias argument attributes, the pointers must all be based on noalias
arguments (as underlying objects) and have no other kind of underlying object.
In r216818 excluding appropriate accesses from getting alias.scope metadata is
done by looking for underlying objects that are not identified function-local
objects -- but that's wrong because allocas, etc. are also function-local
objects and we need to explicitly check that all underlying objects are the
noalias arguments for which we're adding metadata aliasing scopes.

This fixes the underlying-object check for adding alias.scope metadata, and
does some refactoring of the related capture-checking eligibility logic (and
adds more comments; hopefully making everything a bit clearer).

Fixes self-hosting on x86_64 with -mllvm -enable-noalias-to-md-conversion (the
feature is still disabled by default).

llvm-svn: 216863
2014-09-01 04:26:40 +00:00
Hal Finkel a3708df41a Fix AddAliasScopeMetadata to not add scopes when deriving from unknown pointers
The previous implementation of AddAliasScopeMetadata, which adds noalias
metadata to preserve noalias parameter attribute information when inlining, had
a flaw: it would add alias.scope metadata to accesses which might have been
derived from pointers other than noalias function parameters. This was
incorrect because even an access known not to alias with all noalias function
parameters could easily alias with an access derived from some other pointer.
Instead, when deriving from some unknown pointer, we cannot add alias.scope
metadata at all. This fixes a miscompile of the test-suite's tramp3d-v4.
Furthermore, we cannot add alias.scope to functions unless we know they
access only argument-derived pointers (currently, we know this only for
memory intrinsics).

Also, we fix a theoretical problem with using the NoCapture attribute to skip
the capture check. This is incorrect (as explained in the comment added), but
would not matter in any code generated by Clang because we get only inferred
nocapture attributes in Clang-generated IR.

This functionality is not yet enabled by default.

llvm-svn: 216818
2014-08-30 12:48:33 +00:00
Hal Finkel 2d3d6da44b Fix a typo in AddAliasScopeMetadata
llvm-svn: 216741
2014-08-29 16:33:41 +00:00
Craig Topper e1d1294853 Simplify creation of a bunch of ArrayRefs by using None, makeArrayRef or just letting them be implicitly created.
llvm-svn: 216525
2014-08-27 05:25:25 +00:00
Craig Topper 4627679cec Use range based for loops to avoid needing to re-mention SmallPtrSet size.
llvm-svn: 216351
2014-08-24 23:23:06 +00:00
Craig Topper 71b7b68b74 Replace SmallPtrSet with SmallPtrSetImpl in function arguments to avoid needing to mention the size.
llvm-svn: 216158
2014-08-21 05:55:13 +00:00
Craig Topper 6230691c91 Revert "Replace SmallPtrSet with SmallPtrSetImpl in function arguments to avoid needing to mention the size."
Getting a weird buildbot failure that I need to investigate.

llvm-svn: 215870
2014-08-18 00:24:38 +00:00
Craig Topper 5229cfd163 Replace SmallPtrSet with SmallPtrSetImpl in function arguments to avoid needing to mention the size.
llvm-svn: 215868
2014-08-17 23:47:00 +00:00
Hal Finkel 61c386126b Copy noalias metadata from call sites to inlined instructions
When a call site with noalias metadata is inlined, that metadata can be
propagated directly to the inlined instructions (only to those that might access
memory, since it is not useful on the others). Prior to inlining, the noalias
metadata could express that a call would not alias with some other memory
access, which implies that no instruction within that called function would
alias. By propagating the metadata to the inlined instructions, we preserve
that knowledge.
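
A hedged sketch of that propagation step (the loop over inlined instructions
and the variable names are assumptions):

    MDNode *CallScopes  = CS.getInstruction()->getMetadata(LLVMContext::MD_alias_scope);
    MDNode *CallNoAlias = CS.getInstruction()->getMetadata(LLVMContext::MD_noalias);
    for (Instruction *NI : InlinedInstructions) {
      if (!NI->mayReadOrWriteMemory())
        continue;                        // the metadata is useless here
      // Merge with, don't overwrite, whatever the callee already had.
      NI->setMetadata(LLVMContext::MD_alias_scope,
                      MDNode::concatenate(NI->getMetadata(LLVMContext::MD_alias_scope),
                                          CallScopes));
      NI->setMetadata(LLVMContext::MD_noalias,
                      MDNode::concatenate(NI->getMetadata(LLVMContext::MD_noalias),
                                          CallNoAlias));
    }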

This should complete the enhancements requested in PR20500.

llvm-svn: 215676
2014-08-14 21:09:37 +00:00
Hal Finkel d2dee16c27 Add noalias metadata for general calls (not just memory intrinsics) during inlining
When preserving noalias function parameter attributes by adding noalias
metadata in the inliner, we should do this for general function calls (not just
memory intrinsics). The logic is very similar to what already existed (except
that we want to add this metadata even for functions taking no relevant
parameters). This metadata can be used by ModRef queries in the caller after
inlining.

This addresses the first part of PR20500. Adding noalias metadata during
inlining is still turned off by default.

llvm-svn: 215657
2014-08-14 16:44:03 +00:00
Reid Kleckner e31acf239a Move helper for getting a terminating musttail call to BasicBlock
No functional change.  To be used in future commits that need to look
for such instructions.
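
For reference, a usage sketch of the new helper (the surrounding
insertion-point logic is an assumption):

    // Don't place new instructions between a musttail call and the ret.
    Instruction *InsertPt = BB->getTerminator();
    if (CallInst *MustTailCall = BB->getTerminatingMustTailCall())
      InsertPt = MustTailCall;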

Reviewed By: rafael

Differential Revision: http://reviews.llvm.org/D4504

llvm-svn: 215413
2014-08-12 00:05:15 +00:00
Hal Finkel ff0bcb60c9 Convert noalias parameter attributes into noalias metadata during inlining
This functionality is currently turned off by default.

Part of the motivation for introducing scoped-noalias metadata is to enable the
preservation of noalias parameter attribute information after inlining.
Sometimes this can be inferred from the code in the caller after inlining, but
often we simply lose valuable information.

The overall process is fairly simple:
 1. Create a new unique scope domain.
 2. For each (used) noalias parameter, create a new alias scope.
 3. For each pointer, collect the underlying objects. Add a noalias scope for
    each noalias parameter from which we're not derived (and which has not been
    captured prior to that point).
 4. Add an alias.scope for each noalias parameter from which we might be
    derived (or which has been captured before that point).

Note that the capture checks apply only if one of the underlying objects is not
an identified function-local object.
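
A hedged sketch of steps 1 and 2, written against today's MDBuilder helpers
(whether this patch builds the nodes exactly this way is an assumption):

    MDBuilder MDB(CalledFunc->getContext());
    // 1. One fresh, self-unique domain per inlined call site.
    MDNode *NewDomain =
        MDB.createAnonymousAliasScopeDomain(CalledFunc->getName());
    // 2. One scope per (used) noalias parameter, all inside that domain.
    DenseMap<const Argument *, MDNode *> NewScopes;
    for (const Argument &A : CalledFunc->args())
      if (A.hasNoAliasAttr() && !A.use_empty())
        NewScopes[&A] = MDB.createAnonymousAliasScope(NewDomain, A.getName());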

llvm-svn: 213949
2014-07-25 15:50:08 +00:00
Hal Finkel 9414665a3b Add scoped-noalias metadata
This commit adds scoped noalias metadata. The primary motivations for this
feature are:
  1. To preserve noalias function attribute information when inlining
  2. To provide the ability to model block-scope C99 restrict pointers

Neither of these two abilities is added here, only the necessary
infrastructure. In fact, there should be no change to existing functionality,
only the addition of new features. The logic that converts noalias function
parameters into this metadata during inlining will come in a follow-up commit.

What is added here is the ability to generally specify noalias memory-access
sets. Regarding the metadata, alias-analysis scopes are defined similarly to TBAA
nodes:

!scope0 = metadata !{ metadata !"scope of foo()" }
!scope1 = metadata !{ metadata !"scope 1", metadata !scope0 }
!scope2 = metadata !{ metadata !"scope 2", metadata !scope0 }
!scope3 = metadata !{ metadata !"scope 2.1", metadata !scope2 }
!scope4 = metadata !{ metadata !"scope 2.2", metadata !scope2 }

Loads and stores can be tagged with an alias-analysis scope, and also, with a
noalias tag for a specific scope:

... = load %ptr1, !alias.scope !{ !scope1 }
... = load %ptr2, !alias.scope !{ !scope1, !scope2 }, !noalias !{ !scope1 }

When evaluating an aliasing query, if one of the instructions is associated
with an alias.scope id that is identical to the noalias scope associated with
the other instruction, or is a descendant (in the scope hierarchy) of the
noalias scope associated with the other instruction, then the two memory
accesses are assumed not to alias.

Note that if the first element of the scope metadata is a string, then it can
be combined across functions and translation units. The string can be replaced
by a self-reference to create globally unique scope identifiers.

[Note: This overview is slightly stylized, since the metadata nodes really need
to just be numbers (!0 instead of !scope0), and the scope lists are also global
unnamed metadata.]

Existing noalias metadata in a callee is "cloned" for use by the inlined code.
This is necessary because the aliasing scopes are unique to each call site
(because of possible control dependencies on the aliasing properties). For
example, consider a function: foo(noalias a, noalias b) { *a = *b; } that gets
inlined into bar() { ... if (...) foo(a1, b1); ... if (...) foo(a2, b2); } --
now just because we know that a1 does not alias with b1 at the first call site,
and a2 does not alias with b2 at the second call site, we cannot let inlining
these functions have the metadata imply that a1 does not alias with b2.

llvm-svn: 213864
2014-07-24 14:25:39 +00:00
David Blaikie 644d2eee59 DebugInfo: Preserve debug location information when transforming a call into an invoke during inlining.
This both improves basic debug info quality and fixes a larger
hole whenever we inline a call/invoke without a location (debug info for
the entire inlining is lost and other badness that the debug info
emission code is currently working around but shouldn't have to).

llvm-svn: 212065
2014-06-30 20:30:39 +00:00
Evgeniy Stepanov 2be29929be Fix line numbers for code inlined from __nodebug__ functions.
Instructions from __nodebug__ functions don't have file:line
information even when inlined into non-nodebug functions. As a result,
intrinsics (SSE and other) from <*intrin.h> clang headers _never_
have file:line information.

With this change, an instruction without !dbg metadata gets one from
the call instruction when inlined.
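
In effect (a sketch; the variable names are placeholders):

    // While cloning the callee's body, give location-less instructions the
    // location of the call being inlined.
    DebugLoc TheCallDL = TheCall->getDebugLoc();
    if (TheCallDL && !NI->getDebugLoc())
      NI->setDebugLoc(TheCallDL);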

Fixes PR19001.

llvm-svn: 210459
2014-06-09 09:09:19 +00:00
Reid Kleckner 900d46ff39 Don't insert lifetime.end markers between a musttail call and ret
The allocas going out of scope are immediately killed by the return
instruction.

This is a resend of r208912, which was committed accidentally.

Reviewers: chandlerc

Differential Revision: http://reviews.llvm.org/D3792

llvm-svn: 208920
2014-05-15 21:10:46 +00:00
Reid Kleckner b16564109b Revert "Don't insert lifetime.end markers between a musttail call and ret"
This reverts commit r208912.

It was committed accidentally without review.

llvm-svn: 208914
2014-05-15 20:41:05 +00:00
Reid Kleckner 6af21245eb Remove unused variable in inliner
We have to iterate over all the calls that were inlined to find out if
any were musttail.

Sink another variable down to where it's used.

llvm-svn: 208913
2014-05-15 20:39:42 +00:00
Reid Kleckner 26ab7ead45 Don't insert lifetime.end markers between a musttail call and ret
The allocas going out of scope are immediately killed by the return
instruction.

Reviewers: chandlerc

Differential Revision: http://reviews.llvm.org/D3630

llvm-svn: 208912
2014-05-15 20:39:13 +00:00
Reid Kleckner f0915aa0e6 Teach the inliner how to preserve musttail invariants
The interesting case is what happens when you inline a musttail call
through a musttail call site.  In this case, we can't break perfect
forwarding or allow any stack growth.

Instead of merging control flow from the inlined return instruction
after a musttail call into the body of the caller, leave the inlined
return instruction in the caller so that the musttail call stays in the
tail position.

More work is required in http://reviews.llvm.org/D3630 to handle the
case where the inlined function has dynamic allocas or byval arguments.

Reviewers: chandlerc

Differential Revision: http://reviews.llvm.org/D3491

llvm-svn: 208910
2014-05-15 20:11:28 +00:00
Craig Topper f40110f4d8 [C++] Use 'nullptr'. Transforms edition.
llvm-svn: 207196
2014-04-25 05:29:35 +00:00
Matt Arsenault be55888849 Remove more default address space argument usage.
These places are inconsequential in practice.

llvm-svn: 207021
2014-04-23 20:58:57 +00:00
Reid Kleckner 9b2cc647eb Fix PR7272 in -tailcallelim instead of the inliner
The -tailcallelim pass should be checking if byval or inalloca args can
be captured before marking calls as tail calls.  This was the real root
cause of PR7272.

With a better fix in place, revert the inliner change from r105255.  The
test case it introduced still passes and has been moved to
test/Transforms/Inline/byval-tail-call.ll.

Reviewers: chandlerc

Differential Revision: http://reviews.llvm.org/D3403

llvm-svn: 206789
2014-04-21 20:48:47 +00:00
Julien Lerouge be4fe32eb8 Add lifetime markers for allocas created to hold byval arguments, and make them
appear in the InlineFunctionInfo.

llvm-svn: 206308
2014-04-15 18:06:46 +00:00
Julien Lerouge 957e91c4d8 Split byval argument initialization so the memcpy(s) are injected at the
beginning of the first new block after inlining.

llvm-svn: 206307
2014-04-15 18:01:54 +00:00
Chandler Carruth cdf4788401 [C++11] Add range based accessors for the Use-Def chain of a Value.
This requires a number of steps.
1) Move value_use_iterator into the Value class as an implementation
   detail
2) Change it to actually be a *Use* iterator rather than a *User*
   iterator.
3) Add an adaptor which is a User iterator that always looks through the
   Use to the User.
4) Wrap these in Value::use_iterator and Value::user_iterator typedefs.
5) Add the range adaptors as Value::uses() and Value::users().
6) Update *all* of the callers to correctly distinguish between whether
   they wanted a use_iterator (and to explicitly dig out the User when
   needed), or a user_iterator which makes the Use itself totally
   opaque.

Because #6 requires churning essentially everything that walked the
Use-Def chains, I went ahead and added all of the range adaptors and
switched them to range-based loops where appropriate. Also because the
renaming requires at least churning every line of code, it didn't make
any sense to split these up into multiple commits -- all of which would
touch all of the same lines of code.

The result is still not quite optimal. The Value::use_iterator is a nice
regular iterator, but Value::user_iterator is an iterator over User*s
rather than over the User objects themselves. As a consequence, it fits
a bit awkwardly into the range-based world and it has the weird
extra-dereferencing 'operator->' that so many of our iterators have.
I think this could be fixed by providing something which transforms
a range of T&s into a range of T*s, but that *can* be separated into
another patch, and it isn't yet 100% clear whether this is the right
move.

However, this change gets us most of the benefit and cleans up
a substantial amount of code around Use and User. =]
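
For reference, the new range adaptors in use (a short sketch):

    // Iterate over the Uses themselves, digging out the User explicitly...
    for (Use &U : V->uses()) {
      User *Consumer = U.getUser();
      (void)Consumer;
    }

    // ...or directly over the Users, when the Use itself is opaque.
    for (User *U : V->users())
      errs() << *U << "\n";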

llvm-svn: 203364
2014-03-09 03:16:01 +00:00
Chandler Carruth 9a4c9e597b [Layering] Move DebugInfo.h into the IR library where its implementation
already lives.

llvm-svn: 203046
2014-03-06 00:46:21 +00:00
Chandler Carruth 219b89b987 [Modules] Move CallSite into the IR library where it belongs. It is
abstracting between a CallInst and an InvokeInst, both of which are IR
concepts.
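
For illustration (a sketch):

    // CallSite presents CallInst and InvokeInst through one interface.
    CallSite CS(I);                      // I is a call or invoke instruction
    if (CS) {
      Function *Callee = CS.getCalledFunction();  // null for indirect calls
      unsigned NumArgs = CS.arg_size();
      (void)Callee; (void)NumArgs;
    }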

llvm-svn: 202816
2014-03-04 11:01:28 +00:00
Rafael Espindola 7c68bebb9c Rename some member variables from TD to DL.
TargetData was renamed DataLayout back in r165242.

llvm-svn: 201581
2014-02-18 15:33:12 +00:00
Mark Seaborn 1b3dd3527e Fix inlining to not lose the "cleanup" clause from landingpads
This fixes PR17872.  This bug can lead to C++ destructors not being
called when they should be, when an exception is thrown.

llvm-svn: 196711
2013-12-08 00:51:21 +00:00
Mark Seaborn ef3dbb93ec Fix inlining to not produce duplicate landingpad clauses
Before this change, inlining one "invoke" into an outer "invoke" call
site can lead to the outer landingpad's catch/filter clauses being
copied multiple times into the resulting landingpad.  This happens:

 * when the inlined function contains multiple "resume" instructions,
   because forwardResume() copies the clauses but is called multiple
   times;

 * when the inlined function contains a "resume" and a "call", because
   HandleCallsInBlockInlinedThroughInvoke() copies the clauses but is
   redundant with forwardResume().

Fix this by deduplicating the code.

This problem doesn't lead to any incorrect execution; it's only
untidy.

This change will make fixing PR17872 a little easier.

llvm-svn: 196710
2013-12-08 00:50:58 +00:00
Mark Seaborn d91fa22b06 InlineFunction.cpp: Remove a return value that is always false
Remove some associated dead code.

This cleanup is associated with PR17872.

llvm-svn: 196147
2013-12-02 20:50:59 +00:00
David Majnemer 120f4a06fd Revert "Inliner: Handle readonly attribute per argument when adding memcpy"
This reverts commit r193356, it caused PR17781.

A reduced test case covering this regression has been added to the test suite.

llvm-svn: 193955
2013-11-03 12:22:13 +00:00
Manman Ren 87a2adc7fe Do not convert "call asm" to "invoke asm" in Inliner.
Given that the backend does not handle "invoke asm" correctly ("invoke asm" will be
handled by SelectionDAGBuilder::visitInlineAsm, which does not have the right
setup for LPadToCallSiteMap) and we already made the assumption that inline asm
does not throw in InstCombiner::visitCallSite, we are going to make the same
assumption in Inliner to make sure we don't convert "call asm" to "invoke asm".

If it becomes necessary to add support for "invoke asm" later on, we will need
to modify the backend as well as remove the assumptions that inline asm does
not throw.

Fix rdar://15317907

llvm-svn: 193808
2013-10-31 21:56:03 +00:00
Tom Stellard bc7d87f07c Inliner: Handle readonly attribute per argument when adding memcpy
Patch by: Vincent Lejeune

llvm-svn: 193356
2013-10-24 16:38:33 +00:00
Richard Trieu 624c2ebcbb Fix a use after free. RI is freed before the call to getDebugLoc(). To
prevent this, capture the location before RI is freed.

llvm-svn: 180824
2013-04-30 22:45:10 +00:00
Adrian Prantl 8beccf9e6d Spelling. Thanks, Eric.
llvm-svn: 180794
2013-04-30 17:33:32 +00:00
Adrian Prantl 0941638a1b Set debug locations for branch instructions created during inlining, even when
the inlined function has multiple returns.

rdar://problem/12415623

llvm-svn: 180793
2013-04-30 17:08:16 +00:00
Adrian Prantl 15db52bf6d Make sure the instruction right after an inlined function has a
debug location. This solves a problem where the range of an inlined
subroutine is emitted wrongly.
Patch by Manman Ren.

Fixes rdar://problem/12415623

llvm-svn: 180140
2013-04-23 19:56:03 +00:00
Bill Wendling 56f15bf490 Add all clauses when merging the landing pads. Duplicates will be handled later on.
llvm-svn: 177757
2013-03-22 20:31:05 +00:00
Bill Wendling a397c017bb Don't use the removed API.
llvm-svn: 177749
2013-03-22 18:49:53 +00:00
Bill Wendling 173c71ff3d Always forward 'resume' instructions to the outer landing pad.
How did this ever work?

Basically, if you have a function that's inlined into the caller, it may not
have any 'call' instructions, but any 'resume' instructions it may have should
still be forwarded to the outer (caller's) landing pad. This requires that all
of the 'landingpad' instructions in the callee have their clauses merged with
the caller's outer 'landingpad' instruction (hence the bit of ugly code in the
`forwardResume' method).

Testcase in a follow-up commit to the test-suite repository.

<rdar://problem/13360379> & PR15555

llvm-svn: 177680
2013-03-21 23:30:12 +00:00
Chandler Carruth 9fb823bbd4 Move all of the header files which are involved in modelling the LLVM IR
into their new header subdirectory: include/llvm/IR. This matches the
directory structure of lib, and begins to correct a long-standing point
of file layout clutter in LLVM.

There are still more header files to move here, but I wanted to handle
them in separate commits to make tracking what files make sense at each
layer easier.

The only really questionable files here are the target intrinsic
tablegen files. But that's a battle I'd rather not fight today.

I've updated both CMake and Makefile build systems (I think, and my
tests think, but I may have missed something).

I've also re-sorted the includes throughout the project. I'll be
committing updates to Clang, DragonEgg, and Polly momentarily.

llvm-svn: 171366
2013-01-02 11:36:10 +00:00
Chandler Carruth ed0881b2a6 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

llvm-svn: 169131
2012-12-03 16:50:05 +00:00
Alexey Samsonov cfd662f279 Figure out <size> argument of llvm.lifetime intrinsics at the moment they are created (during function inlining)
llvm-svn: 167821
2012-11-13 07:15:32 +00:00
Micah Villmow cdfe20b97f Move TargetData to DataLayout.
llvm-svn: 165402
2012-10-08 16:38:25 +00:00
Chandler Carruth aafe0918bc Move llvm/Support/IRBuilder.h -> llvm/IRBuilder.h
This was always part of the VMCore library out of necessity -- it deals
entirely in the IR. The .cpp file in fact was already part of the VMCore
library. This is just a mechanical move.

I've tried to go through and re-apply the coding standard's preferred
header sort, but at 40-ish files, I may have gotten some wrong. Please
let me know if so.

I'll be committing the corresponding updates to Clang and Polly, and
Duncan has DragonEgg.

Thanks to Bill and Eric for giving the green light for this bit of cleanup.

llvm-svn: 159421
2012-06-29 12:38:19 +00:00
Bill Wendling e38859dc8e Move lib/Analysis/DebugInfo.cpp to lib/VMCore/DebugInfo.cpp and
include/llvm/Analysis/DebugInfo.h to include/llvm/DebugInfo.h.

The reasoning is because the DebugInfo module is simply an interface to the
debug info MDNodes and has nothing to do with analysis.

llvm-svn: 159312
2012-06-28 00:05:13 +00:00
Dmitri Gribenko dbeafa773a Convert comments to proper Doxygen comments.
llvm-svn: 158248
2012-06-09 00:01:45 +00:00
Eric Christopher 2b40fdf3ae Tidy.
llvm-svn: 153456
2012-03-26 19:09:40 +00:00
Eric Christopher f16bee8682 Tidy.
llvm-svn: 153455
2012-03-26 19:09:38 +00:00
Chad Rosier 07d37bc1ed Add support for disabling llvm.lifetime intrinsics in the AlwaysInliner. These
are optimization hints, but at -O0 we're not optimizing.  This becomes a problem
when the alwaysinline attribute is abused.
rdar://10921594

llvm-svn: 151429
2012-02-25 02:56:01 +00:00
Bill Wendling 0aef16afd5 [unwind removal] Remove all of the code for the dead 'unwind' instruction. There
were no 'unwind' instructions being generated before this, so this is in effect
a no-op.

llvm-svn: 149906
2012-02-06 21:44:22 +00:00
Bill Wendling 3fd879dde2 s/getInnerUnwindDest/getInnerResumeDest/g
llvm-svn: 149328
2012-01-31 01:48:40 +00:00
Bill Wendling ea6e935e95 Remove ivar which is identical to another ivar.
llvm-svn: 149323
2012-01-31 01:25:54 +00:00
Bill Wendling 0c2d82b942 Remove unused ivars and s/getOuterUnwindDest/getOuterResumeDest/g.
llvm-svn: 149322
2012-01-31 01:22:03 +00:00
Bill Wendling 7778e6d818 Remove more dead functions.
llvm-svn: 149318
2012-01-31 01:18:21 +00:00
Bill Wendling 803d6b1b0c s/getInnerUnwindDestNewEH/getInnerUnwindDest/g
llvm-svn: 149317
2012-01-31 01:15:59 +00:00
Bill Wendling 621699de22 Remove some unused, old-EH methods.
llvm-svn: 149316
2012-01-31 01:14:49 +00:00
Bill Wendling 518a205d0a Get rid of references to dead intrinsics.
The eh.selector and eh.resume intrinsics aren't used anymore. Get rid of some
calls to them.

llvm-svn: 149314
2012-01-31 01:05:20 +00:00
Bill Wendling ce0c229234 Formatting cleanups. No functionality change.
llvm-svn: 149312
2012-01-31 01:01:16 +00:00
Bill Wendling f3cae51490 Remove no-longer-useful dyn_casts and pals.
llvm-svn: 149307
2012-01-31 00:56:53 +00:00
Benjamin Kramer 4d2b871cda Fix quadratic behavior in InlineFunction by fetching the personality function of the callee once and not for every invoke in the caller.
The callee is usually smaller than the caller, too. This reduces the compile
time of ARMDisassembler.cpp by 32% (Release build). It still takes ages to
compile though.

llvm-svn: 145690
2011-12-02 18:37:31 +00:00
Nick Lewycky 612d70b19d Refactor code to use new attribute getters on CallSite for NoCapture and ByVal.
Suggested in code review by Eli.

That code in InstCombine looks kinda suspicious.

llvm-svn: 145013
2011-11-20 19:09:04 +00:00
Bill Wendling 55421f0c4d Add inlining for the new EH scheme.
This builds off of the current scheme, but instead of llvm.eh.exception and
llvm.eh.selector, it uses the landingpad instruction. And instead of
llvm.eh.resume, it uses the resume instruction.

Because of the invariants in the landing pad instruction, a lot of code that's
currently needed to find the appropriate intrinsic calls for an invoke
instruction won't be needed once we go to the new EH scheme. The "FIXME"s tell
us what to remove after we switch.

llvm-svn: 137576
2011-08-14 08:01:36 +00:00
Devang Patel bb23a4a9a5 Distinguish between two copies of one inlined variable. Take 2.
llvm-svn: 137253
2011-08-10 21:50:54 +00:00
Chandler Carruth 81b7e11c89 Temporarily revert r135528 which distinguishes between two copies of one
inlined variable, based on the discussion in PR10542.

This explodes the runtime of several passes down the pipeline due to
a large number of "copies" remaining live across a large function. This
only shows up with both debug and opt, but when it does it creates
a many-minute compile when self-hosting LLVM+Clang. There are several
other cases that show these types of regressions.

All of this is tracked in PR10542, and progress is being made on fixing
the issue. Once it's addressed, the change will be reinstated, but until then this
restores the performance for self-hosting and other opt+debug builds.

Devang, let me know if this causes any trouble, or impedes fixing it in
any way, and thanks for working on this!

llvm-svn: 136953
2011-08-05 00:51:31 +00:00
Bill Wendling ad088e6724 Revert r136253, r136263, r136269, r136313, r136325, r136326, r136329, r136338,
r136339, r136341, r136369, r136387, r136392, r136396, r136429, r136430, r136444,
r136445, r136446, r136253 pending review.

llvm-svn: 136556
2011-07-30 05:42:50 +00:00
Bill Wendling 9e5f0f8fce Some minor cleanups. No functional change.
llvm-svn: 136341
2011-07-28 07:44:07 +00:00
Bill Wendling fa28440f15 Leverage some of the code that John wrote to manage the landing pads.
The new EH is more simple in many respects. Mainly, we don't have to worry about
the "llvm.eh.exception" and "llvm.eh.selector" calls being in weird places.

llvm-svn: 136339
2011-07-28 07:31:46 +00:00
Bill Wendling 51affc8258 Automatically merge the landingpad clauses when we come across a callee's
landingpad.

llvm-svn: 136329
2011-07-28 02:40:13 +00:00
Bill Wendling 246eb96c8a Initial stab at getting inlining working with the EH rewrite.
This takes the new 'resume' instruction and turns it into a direct jump to the
caller's landing pad code. The caller's landingpad instruction is merged with
the landingpad instructions of the callee. This is a bit rough and makes some
assumptions about how the code works. But it passes a simple test.

llvm-svn: 136313
2011-07-28 00:38:23 +00:00
Bill Wendling 9c5b7ff807 Refuse to inline two functions which use different personality functions.
llvm-svn: 136269
2011-07-27 21:44:28 +00:00
Devang Patel a59b24b090 Distinguish between two copies of one inlined variable.
llvm-svn: 135528
2011-07-19 22:31:15 +00:00
Chris Lattner 229907cd11 land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00
Jay Foad 5bd375a6cc Convert CallInst and InvokeInst APIs to use ArrayRef.
llvm-svn: 135265
2011-07-15 08:37:34 +00:00
Benjamin Kramer e6e1933f31 Change Intrinsic::getDeclaration and friends to take an ArrayRef.
llvm-svn: 135154
2011-07-14 17:45:39 +00:00
Jay Foad b804a2b751 Second attempt at de-constifying LLVM Types in FunctionType::get(),
StructType::get() and TargetData::getIntPtrType().

llvm-svn: 134982
2011-07-12 14:06:48 +00:00
Bill Wendling a78cd228c2 Revert r134893 and r134888 (and related patches in other trees). It was causing
an assert on Darwin llvm-gcc builds.

Assertion failed: (castIsValid(op, S, Ty) && "Invalid cast!"), function Create, file /Users/buildslave/zorg/buildbot/smooshlab/slave-0.8/build.llvm-gcc-i386-darwin9-RA/llvm.src/lib/VMCore/Instructions.cpp, li\
ne 2067.
etc.

http://smooshlab.apple.com:8013/builders/llvm-gcc-i386-darwin9-RA/builds/2354

--- Reverse-merging r134893 into '.':
U    include/llvm/Target/TargetData.h
U    include/llvm/DerivedTypes.h
U    tools/bugpoint/ExtractFunction.cpp
U    unittests/Support/TypeBuilderTest.cpp
U    lib/Target/ARM/ARMGlobalMerge.cpp
U    lib/Target/TargetData.cpp
U    lib/VMCore/Constants.cpp
U    lib/VMCore/Type.cpp
U    lib/VMCore/Core.cpp
U    lib/Transforms/Utils/CodeExtractor.cpp
U    lib/Transforms/Instrumentation/ProfilingUtils.cpp
U    lib/Transforms/IPO/DeadArgumentElimination.cpp
U    lib/CodeGen/SjLjEHPrepare.cpp
--- Reverse-merging r134888 into '.':
G    include/llvm/DerivedTypes.h
U    include/llvm/Support/TypeBuilder.h
U    include/llvm/Intrinsics.h
U    unittests/Analysis/ScalarEvolutionTest.cpp
U    unittests/ExecutionEngine/JIT/JITTest.cpp
U    unittests/ExecutionEngine/JIT/JITMemoryManagerTest.cpp
U    unittests/VMCore/PassManagerTest.cpp
G    unittests/Support/TypeBuilderTest.cpp
U    lib/Target/MBlaze/MBlazeIntrinsicInfo.cpp
U    lib/Target/Blackfin/BlackfinIntrinsicInfo.cpp
U    lib/VMCore/IRBuilder.cpp
G    lib/VMCore/Type.cpp
U    lib/VMCore/Function.cpp
G    lib/VMCore/Core.cpp
U    lib/VMCore/Module.cpp
U    lib/AsmParser/LLParser.cpp
U    lib/Transforms/Utils/CloneFunction.cpp
G    lib/Transforms/Utils/CodeExtractor.cpp
U    lib/Transforms/Utils/InlineFunction.cpp
U    lib/Transforms/Instrumentation/GCOVProfiling.cpp
U    lib/Transforms/Scalar/ObjCARC.cpp
U    lib/Transforms/Scalar/SimplifyLibCalls.cpp
U    lib/Transforms/Scalar/MemCpyOptimizer.cpp
G    lib/Transforms/IPO/DeadArgumentElimination.cpp
U    lib/Transforms/IPO/ArgumentPromotion.cpp
U    lib/Transforms/InstCombine/InstCombineCompares.cpp
U    lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
U    lib/Transforms/InstCombine/InstCombineCalls.cpp
U    lib/CodeGen/DwarfEHPrepare.cpp
U    lib/CodeGen/IntrinsicLowering.cpp
U    lib/Bitcode/Reader/BitcodeReader.cpp

llvm-svn: 134949
2011-07-12 01:15:52 +00:00
Jay Foad 56cc1530ee De-constify Types in FunctionType::get().
llvm-svn: 134888
2011-07-11 07:56:41 +00:00
Devang Patel 35797406a5 Refactor. It is the inliner's responsibility to update line number information.
llvm-svn: 134708
2011-07-08 18:01:31 +00:00
Jay Foad 61ea0e4692 Reinstate r133513 (reverted in r133700) with an additional fix for a
-Wshorten-64-to-32 warning in Instructions.h.

llvm-svn: 133708
2011-06-23 09:09:15 +00:00
Eric Christopher 96513120b7 Revert r133513:
"Reinstate r133435 and r133449 (reverted in r133499) now that the clang
self-hosted build failure has been fixed (r133512)."

Due to some additional warnings.

llvm-svn: 133700
2011-06-23 06:24:52 +00:00
Jay Foad a97a2c998e Reinstate r133435 and r133449 (reverted in r133499) now that the clang
self-hosted build failure has been fixed (r133512).

llvm-svn: 133513
2011-06-21 10:33:19 +00:00
Chad Rosier 184f3b37e2 Revert r133435 and r133449 to appease buildbots.
llvm-svn: 133499
2011-06-21 02:09:03 +00:00
Jay Foad e03c05c35a Change how PHINodes store their operands.
Change PHINodes to store simple pointers to their incoming basic blocks,
instead of full-blown Uses.

Note that this loses an optimization in SplitCriticalEdge(), because we
can no longer walk the use list of a BasicBlock to find phi nodes. See
the comment I removed starting "However, the foreach loop is slow for
blocks with lots of predecessors".

Extend replaceAllUsesWith() on a BasicBlock to also update any phi
nodes in the block's successors. This mimics what would have happened
when PHINodes were proper Users of their incoming blocks. (Note that
this only works if OldBB->replaceAllUsesWith(NewBB) is called when
OldBB still has a terminator instruction, so it still has some
successors.)

llvm-svn: 133435
2011-06-20 14:38:01 +00:00
John McCall 5af845226c Use IRBuilder to make our intrinsic calls in the inliner so that we pick up
line info correctly.

llvm-svn: 132961
2011-06-14 02:51:53 +00:00
Nick Lewycky 9711b5c70b Use Value::stripPointerCasts instead of reinventing part of the wheel.
llvm-svn: 132954
2011-06-14 00:59:24 +00:00
Nick Lewycky f8e046b148 It's possible that an all-zero GEP may be used as the argument to lifetime
intrinsics. In fact, we'll optimize a bitcast to that when possible. Detect it
when looking for the lifetime intrinsics.

No test case, noticed by inspection.

llvm-svn: 132906
2011-06-13 07:52:46 +00:00
John McCall fc1ca36866 SplitCriticalEdge can sometimes split the edge from an invoke to a landing
pad, separating the exception and selector calls from the new lpad.  Teaching
it not to do that, or to properly adjust the CFG afterwards, is out of
scope because it would require the other edges to the landing pad to be split
as well (effectively).  Instead, just recover from the most likely cases
during inlining.  The best long-term solution is to change the exception
representation and commit to either requiring or not requiring the more
complex edge-splitting logic;  this is just a shorter-term hack.

llvm-svn: 132799
2011-06-09 20:06:24 +00:00
John McCall 729c35b680 Teach the CallGraph to ignore calls to intrinsics.
llvm-svn: 132797
2011-06-09 19:46:27 +00:00
John McCall fca7786267 First, do no harm -- even if we can't find a selector for an enclosing
landing pad, forward llvm.eh.resume calls to it instead of turning them
invalidly into invokes.

llvm-svn: 132382
2011-06-01 02:17:11 +00:00
John McCall 2c6d23fba2 Fix this to work correctly with phis; test case to follow if this successfully
fixes self-host.

llvm-svn: 132275
2011-05-29 03:01:09 +00:00
John McCall 046c47e970 Implement and document the llvm.eh.resume intrinsic, which is
transformed by the inliner into a branch to the enclosing landing pad
(when inlined through an invoke).  If not so optimized, it is lowered
DWARF EH preparation into a call to _Unwind_Resume (or _Unwind_SjLj_Resume
as appropriate).  Its chief advantage is that it takes both the
exception value and the selector value as arguments, meaning that there
is zero effort in recovering these;  however, the frontend is required
to pass these down, which is not actually particularly difficult.

Also document the behavior of landing pads a bit better, and make it
clearer that it's okay that personality functions don't always land at
landing pads.  This is just a fact of life.  Don't write optimizations that
rely on pushing things over an unwind edge.

llvm-svn: 132253
2011-05-28 07:45:59 +00:00
John McCall bd04b74bb2 Fix the inliner to maintain the current de facto invoke semantics:
- the selector for the landing pad must provide all available information
    about the handlers, filters, and cleanups within that landing pad
  - calls to _Unwind_Resume must be converted to branches to the enclosing
    lpad so as to avoid re-entering the unwinder when the lpad claimed it
    was going to handle the exception in some way
This is quite specific to libUnwind-based unwinding.  In an effort to not
interfere too badly with other unwinders, and with existing hacks in frontends,
this only triggers on _Unwind_Resume (not _Unwind_Resume_or_Rethrow) and does
nothing with selectors if it cannot find a selector call for either lpad.

llvm-svn: 132200
2011-05-27 18:34:38 +00:00
Nick Lewycky a68ec83b36 Teach the inliner to emit llvm.lifetime.start/end, to scope the local variables
of the inlinee to the code representing the original function.

llvm-svn: 131838
2011-05-22 05:22:10 +00:00
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Jay Foad 52131344a2 Remove PHINode::reserveOperandSpace(). Instead, add a parameter to
PHINode::Create() giving the (known or expected) number of operands.
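
For example (a sketch; `Ty`, `BB`, and `InsertBefore` are placeholders):

    // Reserve space for the expected number of incoming values up front.
    unsigned NumPreds = std::distance(pred_begin(BB), pred_end(BB));
    PHINode *PN = PHINode::Create(Ty, NumPreds, "merged", InsertBefore);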

llvm-svn: 128537
2011-03-30 11:28:46 +00:00
Jay Foad e0938d8a87 (Almost) always call reserveOperandSpace() on newly created PHINodes.
llvm-svn: 128535
2011-03-30 11:19:20 +00:00
Chris Lattner 20fca48341 switch the inliner alignment enforcement stuff to use the
getOrEnforceKnownAlignment function, which simplifies the code
and makes it stronger.

llvm-svn: 122555
2010-12-25 20:42:38 +00:00
Chris Lattner 0f11495289 when eliding a byval copy due to inlining a readonly function, we have
to make sure that the reused alloca has sufficient alignment.

llvm-svn: 122236
2010-12-20 08:10:40 +00:00
Chris Lattner 0099744506 pull byval processing out to its own helper function.
llvm-svn: 122235
2010-12-20 07:57:41 +00:00
Chris Lattner 7394680a00 fix PR8769, a miscompilation by the inliner when inlining a function with a byval
argument.  The generated alloca has to have at least the alignment of the
byval; if not, the client may be making assumptions that the new alloca won't
satisfy.

llvm-svn: 122234
2010-12-20 07:45:28 +00:00
Chris Lattner cd3af96a8f improve comment
llvm-svn: 120994
2010-12-06 07:43:04 +00:00
Benjamin Kramer ddd1b7b801 Simplify code. No change in functionality.
llvm-svn: 119908
2010-11-20 18:43:35 +00:00
Duncan Sands 9d9a4e2ca2 Have InlineFunction use SimplifyInstruction rather than
hasConstantValue.  I was leery of using SimplifyInstruction
while the IR was still in a half-baked state, which is the
reason for delaying the simplification until the IR is fully
cooked.

llvm-svn: 119494
2010-11-17 11:16:23 +00:00
Rafael Espindola 229e38f0fe Be more consistent in using ValueToValueMapTy.
llvm-svn: 116387
2010-10-13 01:36:30 +00:00
Dan Gohman ca26f79051 Reapply r112091 and r111922, support for metadata linking, with a
fix: add a flag to MapValue and friends which indicates whether
any module-level mappings are being made. In the common case of
inlining, no module-level mappings are needed, so MapValue doesn't
need to examine non-function-local metadata, which can be very
expensive in the case of a large module with really deep metadata
(e.g. a large C++ program compiled with -g).

This flag is a little awkward; perhaps eventually it can be moved
into the ClonedCodeInfo class.

llvm-svn: 112190
2010-08-26 15:41:53 +00:00
Gabor Greif 7b0a5fd2a5 simplify: CallSite::get --> CallSite constructor
llvm-svn: 109506
2010-07-27 15:02:37 +00:00
Gabor Greif 42f620cc55 use callsite to obtain all arguments
llvm-svn: 106728
2010-06-24 09:56:43 +00:00
Devang Patel 0dc3c2d37e Use ValueMap instead of DenseMap.
The ValueMapper used by various cloning utility maps MDNodes also.

llvm-svn: 106706
2010-06-24 00:33:28 +00:00
Devang Patel b8f11de105 Cosmetic change.
Do not use "ValueMap" as a name for a local variable or an argument.

llvm-svn: 106698
2010-06-23 23:55:51 +00:00
Duncan Sands 4c904fa797 Fix PR7272: when inlining through a callsite with byval arguments,
the newly created allocas may be used by inlined calls, so these
need to have their tail call flags cleared.  Fixes PR7272.

llvm-svn: 105255
2010-05-31 21:00:26 +00:00
Chris Lattner c2432b9d44 rename InlineInfo.DevirtualizedCalls -> InlinedCalls to
reflect that it includes all inlined calls now, not just
devirtualized ones.

llvm-svn: 102824
2010-05-01 01:26:13 +00:00
Chris Lattner fc8d9ee6c3 Implement rdar://6295824 and PR6724 with two tiny changes
that can have a big effect :).  The first is to enable the
iterative SCC passmanager juice that kicks in when the
scc passmgr detects that a function pass has devirtualized
a call.  In this case, it will rerun all the passes it 
manages on the SCC, up to the iteration count limit (4). This
is useful because a function pass may devirtualize a call, and
we want the inliner to inline it, or pruneeh to infer stuff
about it, etc.

The second patch is to add *all* call sites to the 
DevirtualizedCalls list the inliner uses.  This list is
about to get renamed, but the gist of this is that the
inliner now reconsiders *all* inlined call sites as candidates
for further inlining.  The intuition is that in cases
like this:

f() { g(1); }     g(int x) { h(x); }

We analyze this bottom up, and may decide that it isn't 
profitable to inline H into G.  Next step, we decide that it is
profitable to inline G into F, and do so, which means that F 
now calls H.  Even though the call from G -> H may not have been
profitable to inline, the call from F -> H may be (in this case
because a constant allows folding etc).

In my spot checks, this doesn't have a big impact on code.  For
example, the LLC output for 252.eon grew by 0.02% (from
317252 to 317308 bytes) and 176.gcc actually shrunk by 0.3% (from 1525612
to 1520964 bytes).  252.eon never iterated in the SCC Passmgr,
176.gcc iterated at most 1 time.

llvm-svn: 102823
2010-05-01 01:15:56 +00:00
Chris Lattner c691de3b4e switch InlineInfo.DevirtualizedCalls's list to be of WeakVH.
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call.  The improved inliner behavior is still
disabled for now.

llvm-svn: 102196
2010-04-23 18:37:01 +00:00
Chris Lattner 2eee5d3467 The inliner was choosing to not consider call sites
that appear in the SCC as a result of inlining as candidates
for inlining.  Change this so that it *does* consider call 
sites that change from being indirect to being direct as a
result of inlining.  This allows it to completely 
"devirtualize" the testcase.

llvm-svn: 102146
2010-04-22 23:37:35 +00:00
Chris Lattner 4ba01ec869 refactor the interface to InlineFunction so that most of the in/out
arguments are handled with a new InlineFunctionInfo class.  This 
makes it easier to extend InlineFunction to return more info in the
future.

llvm-svn: 102137
2010-04-22 23:07:58 +00:00
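A sketch of what a caller of the refactored interface might look like; the
constructor arguments and field names (StaticAllocas, the list of inlined
calls) are recalled from roughly this era of the API and should be treated
as assumptions:

    #include "llvm/Transforms/Utils/Cloning.h"

    using namespace llvm;

    static bool tryToInline(CallSite CS, CallGraph *CG, const TargetData *TD) {
      InlineFunctionInfo IFI(CG, TD);   // bundles the old in/out arguments
      if (!InlineFunction(CS, IFI))
        return false;
      // Results now come back through IFI instead of extra out-parameters,
      // e.g. IFI.StaticAllocas and the call sites introduced by inlining.
      return true;
    }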
Chris Lattner 016c00a311 when inlining something like this:
define void @f3(void (i8*)* %__f) ssp {
entry:
  call void %__f(i8* undef)
  unreachable
}

define void @f4(i8* %this) ssp align 2 {
entry:
  call void @f3(void (i8*)* @f2) ssp
  ret void
}

The inliner is turning the indirect call to %__f into a direct
call to @f2.  Make the call graph more precise when this happens.

The inliner doesn't revisit call sites introduced by inlining,
so there isn't an easy way to test for this, but a more precise
callgraph is a good thing.

llvm-svn: 102131
2010-04-22 21:31:00 +00:00
Chris Lattner 0a3b5b4e39 eliminate dead #include.
llvm-svn: 102119
2010-04-22 20:41:10 +00:00
Eric Christopher 7258dcd77f Revert r101465; it broke internal OpenGL testing.
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.

llvm-svn: 101579
2010-04-16 23:37:20 +00:00
Gabor Greif f375520f7b reapply r101434
with a fix for self-hosting

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101465
2010-04-16 15:33:14 +00:00
Gabor Greif 403e9694f9 back out r101423 and r101397, they break llvm-gcc self-host on darwin10
llvm-svn: 101434
2010-04-16 01:16:20 +00:00
Gabor Greif 33ae80bff7 reapply r101364, which has been backed out in r101368
with a fix

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101397
2010-04-15 20:51:13 +00:00
Gabor Greif 9fd00c7d25 back out r101364, as it trips the linux nightlybot on some clang C++ tests
llvm-svn: 101368
2010-04-15 12:46:56 +00:00
Gabor Greif aafd209632 rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary

llvm-svn: 101364
2010-04-15 10:49:53 +00:00
Mon P Wang c576ee9040 Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100304
2010-04-04 03:10:48 +00:00
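On the C++ side the same change is visible through IRBuilder, which gained
an isVolatile parameter on the memory intrinsics around this time; a sketch
in which Dst, Src, Size and InsertPt are placeholders and the exact overload
is an assumption:

    // Emits the new overloaded form, e.g.
    //   call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %n,
    //                                        i32 1, i1 false)
    IRBuilder<> Builder(InsertPt);
    Builder.CreateMemCpy(Dst, Src, Size, /*Align=*/1, /*isVolatile=*/false);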
Mon P Wang 999c1b927b Revert r100191 since it breaks objc in clang
llvm-svn: 100199
2010-04-02 18:43:02 +00:00
Mon P Wang a972ab8564 Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100191
2010-04-02 18:04:15 +00:00
Bob Wilson 6f7fd28824 Revert Mon Ping's change 99928, since it broke all the llvm-gcc buildbots.
llvm-svn: 99948
2010-03-30 22:27:04 +00:00
Mon P Wang 7460571381 Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent checkin.

llvm-svn: 99928
2010-03-30 20:55:56 +00:00
Eric Christopher 1d38538fb6 Temporarily revert this; it's causing an issue with an internal project.
llvm-svn: 99451
2010-03-24 23:35:21 +00:00
Chris Lattner 00eeac4179 add some accessors to callsite/callinst/invokeinst to check
for the noinline attribute, and make the inliner refuse to
inline a call site when the call site is marked noinline even
if the callee isn't.  This fixes PR6682.

llvm-svn: 99341
2010-03-23 22:59:07 +00:00
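Sketched in the shape such a check might take inside the inliner; the
accessor names (isNoInline on the call site, hasFnAttr on the callee) are
assumptions about the API of that era rather than quoted code:

    // Refuse to inline if either the call site or the callee says noinline.
    if (CS.isNoInline())
      return false;
    Function *Callee = CS.getCalledFunction();
    if (Callee && Callee->hasFnAttr(Attribute::NoInline))
      return false;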
Devang Patel be94f23992 Remove dead debug info intrinsics.
 Intrinsic::dbg_stoppoint
 Intrinsic::dbg_region_start
 Intrinsic::dbg_region_end
 Intrinsic::dbg_func_start
AutoUpgrade simply ignores these intrinsics now.

llvm-svn: 92557
2010-01-05 01:10:40 +00:00
Devang Patel f6eeaebd76 Implement support to debug inlined functions.
llvm-svn: 86748
2009-11-10 23:06:00 +00:00
Chris Lattner c6b3b25f94 Fix a pretty serious misfeature of the inliner: if it inlines a function
with multiple return values it inserts a PHI to merge them all together.
However, if the return values are all the same, it ends up with a pointless
PHI, and that pointless PHI happens to block SRoA in at least a silly C++
example written by Doug, and probably in others.  This 
fixes rdar://7339069.

llvm-svn: 85206
2009-10-27 05:39:41 +00:00
Chris Lattner 88b36f1140 Simplify some code (first hunk) and fix PR5208 (second hunk) by
updating the callgraph when introducing a call.

llvm-svn: 84310
2009-10-17 05:39:39 +00:00
Duncan Sands 9ed7b16bf3 Introduce and use convenience methods for getting pointer types
where the element is of a basic builtin type.  For example, to get
an i8* use getInt8PtrTy.

llvm-svn: 83379
2009-10-06 15:40:36 +00:00
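For example (a sketch; Ctx stands for whatever LLVMContext is in scope):

    // Before: spell out the element type, then wrap it in a pointer type.
    PointerType *Old = PointerType::getUnqual(Type::getInt8Ty(Ctx));
    // After: one call names the same uniqued i8* type.
    PointerType *New = Type::getInt8PtrTy(Ctx);
    assert(Old == New && "pointer types are uniqued per context");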
Nick Lewycky 42fb7452df Instruction::clone does not need to take an LLVMContext&. Remove that and
update all the callers.

llvm-svn: 82889
2009-09-27 07:38:41 +00:00
Eric Christopher 66d8555f7e Fix comment.
llvm-svn: 81138
2009-09-06 22:20:54 +00:00
Chris Lattner 8900f3ec57 remove a bunch of explicit code previously needed to update the
callgraph.  This is now dead because RAUW does the job.

llvm-svn: 80703
2009-09-01 18:44:06 +00:00
Chris Lattner 063d06527e Change CallGraphNode to maintain its Function as an AssertingVH
for sanity.  This didn't turn up any bugs.

Change CallGraphNode to maintain its "callsite" information in the 
call edges list as a WeakVH instead of as an instruction*.  This fixes
a broad class of dangling pointer bugs, and makes CallGraph have a number
of useful invariants again.  This fixes the class of problem indicated
by PR4029 and PR3601.

llvm-svn: 80663
2009-09-01 06:31:31 +00:00
Devang Patel 80ae34974b Reapply 79977.
Use MDNodes to encode debug info in llvm IR.

llvm-svn: 80406
2009-08-28 23:24:31 +00:00
Chris Lattner b1cba3f91e enhance InlineFunction to be able to optionally return
the list of static allocas that it inlined.

llvm-svn: 80203
2009-08-27 04:20:52 +00:00
Chris Lattner d84dbb3443 smallvectorize the list of returns built by CloneAndPruneFunctionInto.
llvm-svn: 80202
2009-08-27 04:02:30 +00:00
Chris Lattner 5eef6ad6a9 reduce indentation, factor some stuff out to a static helper function,
and other code cleanups.  No functionality change.

llvm-svn: 80199
2009-08-27 03:51:50 +00:00
Devang Patel f08e35d9dc Revert 79977. It causes llvm-gcc bootstrap failures on some platforms.
llvm-svn: 80073
2009-08-26 05:01:18 +00:00
Devang Patel 02aac922b4 Update the DebugInfo interface to use metadata, instead of specially named llvm.dbg.... global variables, to encode debugging information in llvm IR. This is mostly a mechanical change that tests metadata support very well.
This change speeds up llvm-gcc by more than 6% at "-O0 -g" (measured by compiling InstructionCombining.cpp!)

llvm-svn: 79977
2009-08-25 05:24:07 +00:00
Owen Anderson 55f1c09e31 Push LLVMContexts through the IntegerType APIs.
llvm-svn: 78948
2009-08-13 21:58:54 +00:00
Owen Anderson b292b8ce70 Move more code back to 2.5 APIs.
llvm-svn: 77635
2009-07-30 23:03:37 +00:00
Owen Anderson 4056ca9568 Move types back to the 2.5 API.
llvm-svn: 77516
2009-07-29 22:17:13 +00:00
Owen Anderson 487375e9a2 Move ConstantExpr to 2.5 API.
llvm-svn: 77494
2009-07-29 18:55:55 +00:00
Owen Anderson edb4a70325 Revert the ConstantInt constructors back to their 2.5 forms where possible, thanks to contexts-on-types. More to come.
llvm-svn: 77011
2009-07-24 23:12:02 +00:00
Owen Anderson 47db941fd3 Get rid of the Pass+Context magic.
llvm-svn: 76702
2009-07-22 00:24:57 +00:00
Owen Anderson 4fdeba9706 Revert yesterday's change by removing the LLVMContext parameter to AllocaInst and MallocInst.
llvm-svn: 75863
2009-07-15 23:53:25 +00:00
Owen Anderson b6b2530000 Move EVER MORE stuff over to LLVMContext.
llvm-svn: 75703
2009-07-14 23:09:55 +00:00
Owen Anderson 1e5f00e7a7 This started as a small change, I swear. Unfortunately, lots of things call the [I|F]CmpInst constructors. Who knew!?
llvm-svn: 75200
2009-07-09 23:48:35 +00:00
Owen Anderson e70b637033 More LLVMContext-ification.
llvm-svn: 74807
2009-07-05 22:41:43 +00:00
Eli Friedman 36b9026fa7 PR4123: don't crash when inlining a call which uses its own result.
llvm-svn: 71199
2009-05-08 00:22:04 +00:00
Devang Patel 046bf624b9 While inlining, clone the llvm.dbg.func.start intrinsic and adjust the
llvm.dbg.region.end intrinsic. This nested llvm.dbg.func.start/llvm.dbg.region.end pair now enables DW_TAG_inlined_subroutine support in the code generator.

llvm-svn: 69118
2009-04-15 00:17:06 +00:00
Devang Patel 4ce6e69022 Update call graph after inlining invoke.
Patch by Jay Foad.

llvm-svn: 68120
2009-03-31 17:36:12 +00:00
Dale Johannesen 845e582cbe Revert unintended commit.
llvm-svn: 66001
2009-03-04 02:09:48 +00:00
Dale Johannesen c8b5a6ef7d Always skip ptr-to-ptr bitcasts when counting,
per Chris' suggestion.  Slightly faster.

llvm-svn: 65999
2009-03-04 01:53:05 +00:00
Chris Lattner feb129e813 Fix a nasty bug (PR3550) where the inline pass could incorrectly mark
calls with the tail marker when inlining them through an invoke.  Patch,
testcase, and perfect analysis by Jay Foad!

llvm-svn: 64364
2009-02-12 07:06:42 +00:00
Nick Lewycky 05daea5d32 Revert r63600. It didn't fix the bug, it just moved it a bit.
llvm-svn: 63618
2009-02-03 06:30:37 +00:00
Nick Lewycky 12a130bd06 Update the callgraph when replacing InvokeInst with CallInst when inlining.
llvm-svn: 63600
2009-02-03 04:34:40 +00:00
Gabor Greif f1abfdccdc introduce typedef for complicated vector, and use it too
llvm-svn: 62384
2009-01-17 00:09:08 +00:00
Gabor Greif 8c573f7e49 typo
llvm-svn: 62377
2009-01-16 23:08:50 +00:00
Gabor Greif 5aa1922614 avoid using iterators that could potentially be invalidated
this fixes PR3332

llvm-svn: 62271
2009-01-15 18:40:09 +00:00
Dale Johannesen 0aeabdff57 Fix testsuite regressions from recursive inlining.
llvm-svn: 62189
2009-01-13 22:43:37 +00:00
Chris Lattner dd7083452f reapply Sanjiv's patch to genericize memcpy/memset/memmove to take an
arbitrary integer width for the count.

llvm-svn: 59823
2008-11-21 16:42:48 +00:00
Bill Wendling 4bce2bff88 Revert r59802. It was breaking the build of llvm-gcc:
g++ -m32 -c -g -DIN_GCC -W -Wall -Wwrite-strings -Wmissing-format-attribute -fno-common -mdynamic-no-pic -DHAVE_CONFIG_H -Wno-unused -DTARGET_NAME=\"i386-apple-darwin9.5.0\" -I. -I. -I../../llvm-gcc.src/gcc -I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include -I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include  -I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/include -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/include -DENABLE_LLVM -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/../llvm.src/include  -D_DEBUG  -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS   -I. -I. -I../../llvm-gcc.src/gcc -I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include -I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include  -I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/include -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/include ../../llvm-gcc.src/gcc/llvm-types.cpp -o llvm-types.o
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemCpy(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1496: error: 'memcpy_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1496: error: 'memcpy_i64' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemMove(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1512: error: 'memmove_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1512: error: 'memmove_i64' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemSet(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1528: error: 'memset_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1528: error: 'memset_i64' is not a member of 'llvm::Intrinsic'
make[3]: *** [llvm-convert.o] Error 1
make[3]: *** Waiting for unfinished jobs....
rm fsf-funding.pod gcov.pod gfdl.pod cpp.pod gpl.pod gcc.pod
make[2]: *** [all-stage1-gcc] Error 2
make[1]: *** [stage1-bubble] Error 2
make: *** [all] Error 2

llvm-svn: 59809
2008-11-21 09:09:41 +00:00
Sanjiv Gupta 09a203765a Make mem[cpy,move,set] intrinsics overloaded.
llvm-svn: 59802
2008-11-21 07:49:09 +00:00
Devang Patel 4c758ea3e0 Large mechanical patch.
s/ParamAttr/Attribute/g
s/PAList/AttrList/g
s/FnAttributeWithIndex/AttributeWithIndex/g
s/FnAttr/Attribute/g

This sets the stage 
- to implement function notes as function attributes and 
- to distinguish between function attributes and return value attributes.

This requires corresponding changes in llvm-gcc and clang.

llvm-svn: 56622
2008-09-25 21:00:45 +00:00
Devang Patel ba3fa6c6e1 s/ParameterAttributes/Attributes/g
llvm-svn: 56513
2008-09-23 23:03:40 +00:00
Duncan Sands 46911f1271 Reapply 55859. This doesn't change anything as
long as the callgraph is correct.  It checks
for wrong callgraphs more strictly.

llvm-svn: 55894
2008-09-08 11:05:51 +00:00
Owen Anderson 1dd2e40521 Revert r55859. This is breaking the build in the absence of its companion commit.
llvm-svn: 55865
2008-09-05 23:36:01 +00:00
Duncan Sands 9e23602849 Delete the removeCallEdgeTo callgraph method,
because it does not maintain a correct list
of callsites.  I discovered (see following
commit) that the inliner will create a wrong
callgraph if it is fed a callgraph with
correct edges but incorrect callsites.  These
were created by Prune-EH, and while it wasn't
done via removeCallEdgeTo, it could have been
done via removeCallEdgeTo, which is an accident
waiting to happen.  Use removeCallEdgeFor
instead.

llvm-svn: 55859
2008-09-05 21:43:04 +00:00
Duncan Sands 7c8fb1ad93 Remove trailing whitespace.
llvm-svn: 55835
2008-09-05 12:37:12 +00:00
Gordon Henriksen d930f913e6 Rename some GC classes so that their role will hopefully be clearer.
In particular, Collector was confusing to implementors. Several
thought that this compile-time class was the place to implement
their runtime GC heap. Of course, it doesn't even exist at runtime.
Specifically, the renames are:

  Collector               -> GCStrategy
  CollectorMetadata       -> GCFunctionInfo
  CollectorModuleMetadata -> GCModuleInfo
  CollectorRegistry       -> GCRegistry
  Function::getCollector  -> getGC (setGC, hasGC, clearGC)

Several accessors and nested types have also been renamed to be
consistent. These changes should be obvious.

llvm-svn: 54899
2008-08-17 18:44:35 +00:00
Dan Gohman fa1211f69b Enable first-class aggregates support.
Remove the GetResultInst instruction. It is still accepted in LLVM assembly
and bitcode, where it is now auto-upgraded to ExtractValueInst. Also, remove
support for return instructions with multiple values. These are auto-upgraded
to use InsertValueInst instructions.

The IRBuilder still accepts multiple-value returns, and auto-upgrades them
to InsertValueInst instructions.

llvm-svn: 53941
2008-07-23 00:34:11 +00:00
Dan Gohman 158ff2c4a9 Use Instruction::eraseFromParent().
llvm-svn: 52606
2008-06-21 22:08:46 +00:00
Dan Gohman 3ada1e118b Clean up a use of std::distance.
llvm-svn: 52544
2008-06-20 17:11:32 +00:00
Dan Gohman 3b18fd7b02 Teach InlineFunction how to differentiate between multiple-value
return statements and aggregate returns so that it handles both
correctly.

llvm-svn: 52519
2008-06-20 01:03:44 +00:00
Gabor Greif 697e94cc22 Fix a bunch of 80col violations that arose from the Create API change. Tweak makefile targets to find these better.
llvm-svn: 51143
2008-05-15 10:04:30 +00:00
Nick Lewycky 4d43d3c72c Remove 'unwinds to' support from mainline. This patch undoes r47802 r47989
r48047 r48084 r48085 r48086 r48088 r48096 r48099 r48109 and r48123.

llvm-svn: 50265
2008-04-25 16:53:59 +00:00
Devang Patel 8f83081fea Check type instead of no. of operands.
llvm-svn: 50179
2008-04-23 20:18:29 +00:00
Duncan Sands 1416ebf1fe The "stacksave is not nounwind problem" no longer
needs to be fixed here - a previous commit made sure
that intrinsics always get the right attributes.
So remove the no-longer-needed code and, while there, use
Intrinsic::getDeclaration rather than getOrInsertFunction. 

llvm-svn: 49337
2008-04-07 13:43:58 +00:00
Dale Johannesen 87e484f08b Mark calls to llvm.stacksave, llvm.stackrestore as
nounwind.  When such calls are inlined into something
else that is invoked, they were getting changed to invokes,
which is badness.

llvm-svn: 49299
2008-04-07 00:08:48 +00:00
Gabor Greif e9ecc68d8f API changes for class Use size reduction, wave 1.
Specifically, introduction of XXX::Create methods
for Users that have a potentially variable number of
Uses.

llvm-svn: 49277
2008-04-06 20:25:17 +00:00
Devang Patel 64d0f07085 Restore optimization that merges blocks when inline function
has single return value.

llvm-svn: 48162
2008-03-10 18:34:00 +00:00
Devang Patel 72ea2dc9a9 Simplify
llvm-svn: 48161
2008-03-10 18:22:16 +00:00
Nick Lewycky 5ce9b521d7 Update the inliner and simplifycfg to handle unwind_to.
llvm-svn: 48086
2008-03-09 05:10:13 +00:00
Devang Patel 780b3ca64b Update inliner to handle functions that return multiple values.
llvm-svn: 48020
2008-03-07 20:06:16 +00:00
Devang Patel 4566d885dd Use while loop.
llvm-svn: 47909
2008-03-04 21:59:49 +00:00
Devang Patel 941ab37ea8 Use cast instead of dyn_cast.
Update test to use multiple return values directly, instead of relying on -sretpromotion.

llvm-svn: 47907
2008-03-04 21:45:28 +00:00
Devang Patel 841322b32a Handle multiple return values.
llvm-svn: 47904
2008-03-04 21:15:15 +00:00
Duncan Sands 053c9871cd Revert r46393: readonly/readnone functions are no
longer allowed to write through byval arguments.

llvm-svn: 46416
2008-01-27 18:12:58 +00:00
Duncan Sands c4dc3dc3a2 Create an explicit copy for byval parameters even
when inlining a readonly function.

llvm-svn: 46393
2008-01-26 06:41:49 +00:00
Duncan Sands f52faf9a64 Do this more neatly.
llvm-svn: 46369
2008-01-25 22:06:51 +00:00
Chris Lattner 4f6c81ac68 we don't have to make an explicit copy of a byval argument when
inlining a function if we know that the function does not write
to *any* memory.  This implements test/Transforms/Inline/byval2.ll

llvm-svn: 45912
2008-01-12 18:54:29 +00:00
Chris Lattner 908117bf69 When inlining a function with a byval argument, make an explicit
copy of it in case the callee modifies the struct.

llvm-svn: 45853
2008-01-11 06:09:30 +00:00
Chris Lattner f3ebc3f3d2 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Gordon Henriksen b969c5981b GC poses hazards to the inliner. Consider:
    define void @f() {
            ...
            call i32 @g()
            ...
    }

    define void @g() {
            ...
    }

The hazards are:

  - @f and @g both have GC, but their GCs differ. Inlining is invalid. This
    may never occur.
  - @f has no GC, but @g does. @g's GC must be propagated to @f.

The other scenarios are safe:

  - @f and @g have the same GC.
  - @f and @g have no GC.
  - @g has no GC.

This patch adds inliner checks for the former two scenarios.

llvm-svn: 45351
2007-12-25 03:10:07 +00:00
Duncan Sands aa31b92508 When inlining through an 'nounwind' call, mark inlined
calls 'nounwind'.  It is important for correct C++
exception handling that nounwind markings do not get
lost, so this transformation is actually needed for
correctness.

llvm-svn: 45218
2007-12-19 21:13:37 +00:00
Duncan Sands 3353ed09ac Rename isNoReturn to doesNotReturn, and isNoUnwind to
doesNotThrow.

llvm-svn: 45160
2007-12-18 09:59:50 +00:00
Duncan Sands b5a79d0eaa Make invokes of inline asm legal. Teach codegen
how to lower them (with no attempt made to be
efficient, since they should only occur for
unoptimized code).

llvm-svn: 45108
2007-12-17 18:08:19 +00:00
Christopher Lamb edf0788758 Change the PointerType api for creating pointer types. The old functionality of PointerType::get() has become PointerType::getUnqual(), which returns a pointer in the generic address space. The new prototype of PointerType::get() requires both a type and an address space.
llvm-svn: 45082
2007-12-17 01:12:55 +00:00
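In other words (a sketch; EltTy is a placeholder for any element type):

    // What PointerType::get(EltTy) used to mean now lives here:
    PointerType *Generic = PointerType::getUnqual(EltTy);  // address space 0
    // The new two-argument form makes the address space explicit:
    PointerType *AS1Ptr  = PointerType::get(EltTy, 1);     // address space 1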
Duncan Sands 56ed48036b Revert this part of r45073 until the verifier is
changed not to reject invoke of inline asm.

llvm-svn: 45077
2007-12-16 21:01:21 +00:00
Duncan Sands 8e4847ee95 Make instcombine promote inline asm calls to 'nounwind'
calls.  Remove special casing of inline asm from the
inliner.  There is a potential problem: the verifier
rejects invokes of inline asm (not sure why).  If an
asm call is not marked "nounwind" in some .ll, and
instcombine is not run, but the inliner is run, then
an illegal module will be created.  This is bad but
I'm not sure what the best approach is.  I'm tempted
to remove the check in the verifier...

llvm-svn: 45073
2007-12-16 15:51:49 +00:00
Duncan Sands 38ef3a8ec7 Rather than having special rules like "intrinsics cannot
throw exceptions", just mark intrinsics with the nounwind
attribute.  Likewise, mark intrinsics as readnone/readonly
and get rid of special aliasing logic (which didn't use
anything more than this anyway).

llvm-svn: 44544
2007-12-03 20:06:50 +00:00
Duncan Sands ad0ea2d430 Fix PR1146: parameter attributes are no longer part of
the function type; instead they belong to functions
and function calls.  This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll).  Hopefully
a bitcode guru (who might that be? :) ) will fix it.

llvm-svn: 44359
2007-11-27 13:23:08 +00:00
David Greene 703623d571 Update InvokeInst to work like CallInst
llvm-svn: 41506
2007-08-27 19:04:21 +00:00
Chris Lattner 343c88cdb9 Fix PR1335 and Transforms/Inline/2007-04-15-InlineEH.ll
llvm-svn: 36090
2007-04-15 21:38:06 +00:00
Dan Gohman dcb291faa4 Change uses of Function::front to Function::getEntryBlock for readability.
llvm-svn: 35265
2007-03-22 16:38:57 +00:00
Dan Gohman 8c8597c4d9 Fix typos in comments.
llvm-svn: 34456
2007-02-20 20:52:03 +00:00
Chris Lattner a06a8fd2d7 Eliminate use of ctors that take vectors.
llvm-svn: 34219
2007-02-13 02:10:56 +00:00
Chris Lattner 1bfc7ab6a7 Switch the inliner over to using DenseMap instead of std::map for ValueMap. This
speeds up the inliner by 16%.

llvm-svn: 33801
2007-02-03 00:08:31 +00:00
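The switch is essentially just a change of the map type used for cloned
values (a sketch; the typedef name is illustrative):

    // Before: std::map<const Value*, Value*>, a node allocation per mapping
    // plus pointer-chasing on every lookup.
    // After: a flat hash table, which is where the 16% comes from.
    typedef DenseMap<const Value*, Value*> ValueMapTy;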
Chris Lattner ad84a730ba The inliner/cloner can now optionally take TargetData info, which can be
used by constant folding.

llvm-svn: 33676
2007-01-30 23:22:39 +00:00
Reid Spencer 5301e7c605 For PR1136: Rename GlobalVariable::isExternal as isDeclaration to avoid
confusion with external linkage types.

llvm-svn: 33663
2007-01-30 20:08:39 +00:00
Chris Lattner d97f1936bb prepare for adjustment to getOrInsertFunction method
llvm-svn: 32985
2007-01-07 07:54:34 +00:00
Reid Spencer c635f47d9a For PR950:
This patch replaces signed integer types with signless ones:
1. [US]Byte -> Int8
2. [U]Short -> Int16
3. [U]Int   -> Int32
4. [U]Long  -> Int64.
5. Removal of isSigned, isUnsigned, getSignedVersion, getUnsignedVersion
   and other methods related to signedness. In a few places this warranted
   identifying the signedness information from other sources.

llvm-svn: 32785
2006-12-31 05:48:39 +00:00
Chris Lattner 6ef6d06d21 Implement the first half of Transforms/Inline/inline_cleanup.ll
llvm-svn: 30303
2006-09-13 19:23:57 +00:00
Chris Lattner fea3974133 silence warnings in a release build
llvm-svn: 29189
2006-07-18 21:48:57 +00:00
Chris Lattner b3c64f7ab3 Handle instructions that are in the map but map to a null pointer.
This unbreaks smg2000.

llvm-svn: 29127
2006-07-12 21:37:11 +00:00
Chris Lattner 6148456ec2 In addition to deleting calls, the inliner can constant fold them as well.
Handle this case, which doesn't require a new callgraph edge.  This fixes
a crash compiling MallocBench/gs.

llvm-svn: 29121
2006-07-12 18:37:18 +00:00
Chris Lattner 5de3b8b262 Change the callgraph representation to store the callsite along with the
target CG node.  This allows the inliner to properly update the callgraph
when using the pruning inliner.  The pruning inliner may not copy over all
call sites from a callee to a caller, so the edges corresponding to those
call sites should not be copied over either.

This fixes PR827 and Transforms/Inline/2006-07-12-InlinePruneCGUpdate.ll

llvm-svn: 29120
2006-07-12 18:29:36 +00:00
Chris Lattner be853d77e9 Switch the inliner over to using CloneAndPruneFunctionInto. This effectively
makes it so that it constant folds instructions on the fly.  This is good
for several reasons:

0. Many instructions are constant foldable after inlining, particularly if
   inlining a call with constant arguments.
1. Without this, the inliner has to allocate memory for all of the instructions
   that can be constant folded, then a subsequent pass has to delete them.  This
   gets the job done without this extra work.
2. This makes the inliner *pass* a bit more aggressive: in particular, it
   partially solves a phase order issue where the inliner would inline lots
   of code that folds away to nothing, but think that the resultant function
   is big because of this code that will be gone.  Now the code never exists.

This is the first part of a 2-step process.  The second part will be smart
enough to see when this implicit constant folding propagates a constant into
a branch or switch instruction, making CFG edges dead.

This implements Transforms/Inline/inline_constprop.ll

llvm-svn: 28521
2006-05-27 01:28:04 +00:00
Chris Lattner 0841fb1d4c Teach the inliner to update the CallGraph itself, and have it add edges to
llvm.stacksave/restore when it inserts calls to them.

llvm-svn: 25320
2006-01-14 20:07:50 +00:00
Chris Lattner 2be0607a8d If inlining a call to a function that contains dynamic allocas, wrap the
resultant code with llvm.stacksave/llvm.stackrestore intrinsics.

llvm-svn: 25286
2006-01-13 19:34:14 +00:00
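A sketch of the wrapping described above, written with IRBuilder; the
insertion points (AllocaInsertPt, ExitInsertPt) and the exact CreateCall
overloads are assumptions, not the code of r25286:

    Module *M = Caller->getParent();
    Function *StackSave    = Intrinsic::getDeclaration(M, Intrinsic::stacksave);
    Function *StackRestore = Intrinsic::getDeclaration(M, Intrinsic::stackrestore);

    // Save the caller's stack pointer before the inlined body runs...
    IRBuilder<> SaveBuilder(AllocaInsertPt);
    Value *SavedSP = SaveBuilder.CreateCall(StackSave, "savedstack");

    // ...and restore it on every path that leaves the inlined code, so the
    // callee's dynamic allocas can't keep growing the caller's stack frame,
    // e.g. when the call site sits in a loop.
    IRBuilder<> RestoreBuilder(ExitInsertPt);
    RestoreBuilder.CreateCall(StackRestore, SavedSP);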
Chris Lattner e24f79a032 Use ClonedCodeInfo to avoid another walk over the inlined code, this
time in common C cases.

llvm-svn: 25285
2006-01-13 19:18:11 +00:00
Chris Lattner 19e6a08d78 Use the ClonedCodeInfo object to avoid scans of the inlined code when
it doesn't contain any calls.  This is a fairly common case for C++ code,
so it will probably speed up the inliner marginally in these cases.

llvm-svn: 25284
2006-01-13 19:15:15 +00:00
Chris Lattner 908d79556d Refactor a bunch of invoke handling stuff out into a new function
"HandleInlinedInvoke".  No functionality change.

llvm-svn: 25283
2006-01-13 19:05:59 +00:00
Chris Lattner 257492c0ab Fix a bug I noticed by inspection: if the first instruction in the inlined
function was not an alloca, we wouldn't check the entry block for any allocas,
leading to increased stack space in some cases.  In practice, allocas are almost
always at the top of the block, so this was never noticed.

llvm-svn: 25280
2006-01-13 18:16:48 +00:00
Jeff Cohen 5f4ef3c5a8 Eliminate all remaining tabs and trailing spaces.
llvm-svn: 22523
2005-07-27 06:12:32 +00:00