entries associated with the value being erased in the
folding set map. These entries used to be harmless, because
a SCEVUnknown doesn't store any information about its Value*,
so having a new Value allocated at the old Value's address
wasn't a problem. But now that ScalarEvolution is storing more
information about values, this is no longer safe.
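For illustration, a constructed C++ example of the hazard (not LLVM
code): a cache keyed by object address goes stale once the object is
freed, because a later allocation can reuse the same address and "hit"
the stale entry.

  #include <cstdio>
  #include <map>

  struct Value { int Data; };

  int main() {
    std::map<const Value *, int> Cache;
    Value *V = new Value{1};
    Cache[V] = 42;            // information derived from *V
    delete V;                 // the entry for V is now stale
    Value *W = new Value{2};  // may land at V's old address
    if (Cache.count(W))       // if it does, W inherits V's stale info
      std::printf("stale hit: %d\n", Cache[W]);
    delete W;
  }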
llvm-svn: 107316
instruction to an add scev, it's not safe to blindly transfer the
inbounds flag from a gep instruction to an nsw on the scev for the
gep.
llvm-svn: 107117
assuming that loops are in canonical form, as ScalarEvolution doesn't
depend on LoopSimplify itself. Also, with indirectbr not all loops can
be simplified. This fixes PR7416.
llvm-svn: 106389
scrounging through SCEVUnknown contents and SCEVNAryExpr operands;
instead just do a simple deterministic comparison of the precomputed
hash data.
Also, since this is more precise, it eliminates the need for the slow
N^2 duplicate detection code.
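A minimal sketch of the idea (the node type here is hypothetical, not
the actual SCEV classes): keep the profile words computed when the node
was built and compare those directly.

  #include <cstring>

  struct Node {
    const unsigned *ProfileData; // precomputed hash data, allocator-owned
    unsigned ProfileSize;

    bool equals(const unsigned *Data, unsigned Size) const {
      return Size == ProfileSize &&
             std::memcmp(Data, ProfileData, Size * sizeof(unsigned)) == 0;
    }
  };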
llvm-svn: 105540
Also, pass true for isSigned even when creating constants for unsigned
comparisons, because the point is to create an all-ones constant, not
UINT64_MAX; the two differ for integers wider than 64 bits.
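A small illustration with llvm::APInt (an analogy; the commit itself
builds IR constants):

  #include "llvm/ADT/APInt.h"
  #include <cstdint>

  // Sign-extending -1 to 128 bits yields all ones; with isSigned == false,
  // only the low 64 bits would be set.
  llvm::APInt AllOnes(128, (uint64_t)-1, /*isSigned=*/true);
  llvm::APInt Low64(128, (uint64_t)-1, /*isSigned=*/false);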
llvm-svn: 102946
SimplifyICmpOperands will simplify such cases to EQ or NE. This makes
the correctness of the code independent of SimplifyICmpOperands doing
certain simplifications.
llvm-svn: 102927
Also, generalize ScalarEvolution's min and max recognition to handle
some new forms of min and max that this change makes more common.
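Constructed C++ analogues of the icmp+select forms involved (function
and variable names are hypothetical):

  int smaxStrict(int X, int Y) { return X > Y ? X : Y; }      // smax(X, Y)
  int smaxNonStrict(int X, int Y) { return X >= Y ? X : Y; }  // also smax
  unsigned uminOf(unsigned P, unsigned Q) { return P < Q ? P : Q; } // umin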
llvm-svn: 102234
AddRecs so that it checks for overflow in the computation that it is
performing, rather than just checking hasNo{Signed,Unsigned}Wrap, since
those flags are for a different computation. This fixes a bug that
impacts an upcoming change.
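A hedged sketch of the principle (operand names are hypothetical; this
is not the commit's code): test the multiplication actually being
performed, rather than trusting flags recorded for a different
computation.

  #include "llvm/ADT/APInt.h"

  bool productFits(const llvm::APInt &BECount, const llvm::APInt &Stride) {
    bool Overflow = false;
    (void)BECount.umul_ov(Stride, Overflow); // checks this computation
    return !Overflow;
  }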
llvm-svn: 101028
BumpPtrAllocator-allocated region to allow it to be stored in a more
compact form and to avoid the need for a non-trivial destructor call.
Use this new mechanism in ScalarEvolution instead of
FastFoldingSetNode to avoid leaking memory in the case where a
FoldingSetNodeID uses heap storage, and to reduce overall memory
usage.
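A sketch of the mechanism (the profiled contents are hypothetical):

  #include "llvm/ADT/FoldingSet.h"
  #include "llvm/Support/Allocator.h"

  void example() {
    llvm::BumpPtrAllocator Alloc;
    llvm::FoldingSetNodeID ID;
    ID.AddInteger(42); // hypothetical profile data
    // Intern copies the scratch words into the allocator, leaving a
    // compact pointer+length reference with no destructor to run.
    llvm::FoldingSetNodeIDRef Ref = ID.Intern(Alloc);
    (void)Ref; // a node stores this instead of a FoldingSetNodeID
  }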
llvm-svn: 98829
pointer and length, and allocate the arrays in ScalarEvolution's
BumpPtrAllocator, so that they get released when their owning
SCEV gets released. SCEVs are immutable, so they don't need to worry
about operand array resizing. This fixes a memory leak reported
in PR6637.
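A minimal sketch of the storage scheme (SCEV stands in for the real
class; the helper and its parameters are hypothetical):

  #include "llvm/Support/Allocator.h"
  #include <algorithm>
  #include <cstddef>

  struct SCEV;

  const SCEV **allocateOperands(llvm::BumpPtrAllocator &Alloc,
                                const SCEV *const *Begin,
                                const SCEV *const *End) {
    size_t NumOps = End - Begin;
    const SCEV **Ops = Alloc.Allocate<const SCEV *>(NumOps);
    std::copy(Begin, End, Ops); // immutable, so never resized
    return Ops; // the node records {Ops, NumOps}; the allocator owns them
  }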
llvm-svn: 98755
which branch on undef to branch on a boolean constant for the edge
exiting the loop. This helps ScalarEvolution compute trip counts for
loops.
Teach ScalarEvolution to recognize single-value PHIs, when safe, and
teach ForgetSymbolicName to forget such single-value PHI nodes as
appropriate.
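A hedged sketch of the single-value case (not the commit's exact code):

  #include "llvm/IR/Instructions.h"

  // A PHI with a single incoming value is equivalent to that value, so it
  // can be looked through when doing so is known to be safe.
  const llvm::Value *lookThroughSingleValuePHI(const llvm::Value *V) {
    if (auto *PN = llvm::dyn_cast<llvm::PHINode>(V))
      if (PN->getNumIncomingValues() == 1)
        return PN->getIncomingValue(0);
    return V;
  }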
llvm-svn: 97126
true or false as its exit condition. These are usually eliminated by
SimplifyCFG, but they may be left around during a pass which wishes
to preserve the CFG.
llvm-svn: 96683
a loop exit value, so that if a loop gets deleted, ScalarEvolution
isn't stuck holding on to dangling SCEVAddRecExprs for that loop. This
fixes PR6339.
llvm-svn: 96626
name, test whether the SCEV itself is that temporary symbolic name,
in addition to checking whether the symbolic name appears as a
possibly-indirect operand.
llvm-svn: 96216
cases, and implement target-independent folding rules for alignof and
offsetof. Also, reassociate reassociative operators when it leads to
more folding.
Generalize ScalarEvolution's isOffsetOf to recognize offsetof on
arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr
to getOffsetOfExpr, for consistency with analogous ConstantExpr routines.
Make the target-dependent folder promote GEP array indices to
pointer-sized integers, to make implicit casting explicit and exposed
to subsequent folding.
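A sketch of the promotion (the helper is hypothetical, and it uses the
ConstantExpr cast API of that era, parts of which recent LLVM has
removed):

  #include "llvm/IR/Constants.h"

  // Widen a 32-bit array index to the pointer width so the implicit
  // extension becomes an explicit cast that later folding can see.
  llvm::Constant *promoteIndex(llvm::Constant *Idx, llvm::Type *IntPtrTy) {
    return llvm::ConstantExpr::getSExt(Idx, IntPtrTy); // sext: signed index
  }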
And add a bunch of testcases for this new functionality, and a bunch
of related existing functionality.
llvm-svn: 94987
use plain SCEVUnknowns with ConstantExpr::getSizeOf and
ConstantExpr::getOffsetOf constants. This eliminates a bunch of
special-case code.
Also add code for pattern-matching these expressions, for clients that
want to recognize them.
Move the logic for expanding array and vector sizeof expressions into
an element count times the element size from ScalarEvolution into the
regular constant folder, exposing the multiplication to subsequent
folding.
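A sketch of the expansion, using the ConstantExpr helpers named above
as they existed at the time (several have since been removed from
LLVM):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/DerivedTypes.h"

  // sizeof([N x T]) becomes N * sizeof(T), leaving the multiply visible
  // to the regular constant folder.
  llvm::Constant *expandArraySizeOf(llvm::ArrayType *ATy) {
    llvm::Constant *EltSize =
        llvm::ConstantExpr::getSizeOf(ATy->getElementType()); // i64 expr
    llvm::Constant *Count =
        llvm::ConstantInt::get(EltSize->getType(), ATy->getNumElements());
    return llvm::ConstantExpr::getMul(Count, EltSize);
  }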
llvm-svn: 94737
have trouble with an intermediate add overflowing. Also, be more conservative
about the case where the induction variable in an SLT loop exit can step past
the RHS of the SLT and overflow in a single step.
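A constructed example of the single-step overflow, using an 8-bit
induction variable for brevity:

  #include <cstdint>
  #include <cstdio>

  int main() {
    // One step of +10 carries I from 120 past the RHS 125 to 130, which
    // wraps to -126 in int8_t, so the loop keeps running; a trip count
    // that ignores the overflow would predict a single iteration.
    int Trips = 0;
    for (int8_t I = 120; I < 125; I = (int8_t)(I + 10))
      ++Trips;
    std::printf("trips: %d\n", Trips);
  }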
Make getSignedRange more aggressive, to recover some common cases that
the above fixes pessimized.
This addresses rdar://7561161.
llvm-svn: 94512
This new version is much more aggressive about doing "full" reduction in
cases where it reduces register pressure, and also more aggressive about
rewriting induction variables to count down (or up) to zero when doing so
reduces register pressure.
It currently uses fairly simplistic algorithms for finding reuse
opportunities, but it introduces a new framework that allows it to combine
multiple strategies at once to form hybrid solutions, instead of doing
all full-reduction or all base+index.
llvm-svn: 94061