isel has its own particular features that it wants in the CFG, in order to
reduce the number of times a constant is computed, etc. Make sure that we
clean up the CFG before doing anything else for isel. Doing so can
dramatically reduce the number of split edges and reduce the number of
places that constants get computed. For example, this shrinks
CodeGen/Generic/phi-immediate-factoring.ll from 44 to 37 instructions on X86,
and from 21 to 17 MBB's in the output. This is primarily a code size win,
not a performance win.
This implements CodeGen/Generic/phi-immediate-factoring.ll and PR1296.
llvm-svn: 35575
1. Flatten nested calls of APInt::zext/trunc into separate statements.
2. Make more use of APInt::getHighBitsSet/getLowBitsSet.
3. Use APInt's operator[] instead of expressions like "APIntVal & SignBit" (both
idioms are sketched below).
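As a quick illustration of idioms 2 and 3, here is a minimal standalone sketch
(assuming LLVM's ADT headers are on the include path; the 32-bit width and
8-bit counts are picked only for the example):
#include "llvm/ADT/APInt.h"
#include <cstdio>
using llvm::APInt;
int main() {
  APInt V(32, 0x80000012ULL);                     // an arbitrary 32-bit value
  APInt LowMask  = APInt::getLowBitsSet(32, 8);   // 0x000000FF, no hand-rolled shifting
  APInt HighMask = APInt::getHighBitsSet(32, 8);  // 0xFF000000
  bool Negative  = V[31];                         // test the sign bit via operator[]
  std::printf("low=%llx high=%llx neg=%d\n",
              (unsigned long long)LowMask.getZExtValue(),
              (unsigned long long)HighMask.getZExtValue(), (int)Negative);
  return 0;
}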
llvm-svn: 35444
2. Use APInt's operator[] instead of "X & SignBit".
3. Clean up some code.
4. Make expressions like "ShiftAmt = ShiftAmtC->getZExtValue()" safe.
llvm-svn: 35424
1. Flatten nested uses of zext/trunc into separate statements.
2. Make more use of getHighBitsSet/getLowBitsSet.
3. Test bits with APInt's operator[] instead of "(APInt & SignBit) != 0".
llvm-svn: 35408
When converting an add/xor/and triplet into a trunc/sext, only do so if the
intermediate integer type has a bit width the target can handle.
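For reference, a quick standalone check of the identity behind this transform;
the 32-to-8-bit widths are chosen only for the example and assume a
two's-complement target:
#include <cassert>
#include <cstdint>
// and/xor/add form: mask to the low byte, flip its sign bit, subtract 128.
static int32_t viaAndXorAdd(int32_t x) { return ((x & 0xFF) ^ 0x80) - 0x80; }
// trunc/sext form: truncate to i8, then sign-extend back to i32.
static int32_t viaTruncSext(int32_t x) { return (int32_t)(int8_t)(uint8_t)x; }
int main() {
  for (int32_t x = -70000; x <= 70000; ++x)
    assert(viaAndXorAdd(x) == viaTruncSext(x));
  return 0;
}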
llvm-svn: 35400
original and new instruction. This incurs a slight performance hit from
ostringstream, but it is only for debugging.
Also, clean up an uninitialized variable warning noticed in a release build.
llvm-svn: 35358
Fix SingleSource/Regression/C/2003-05-21-UnionBitFields.c by changing a
getHighBitsSet call to a getLowBitsSet call that was incorrectly converted
from the original lshr constant expression.
llvm-svn: 35348
Remove a use of getLowBitsSet that caused the mask used for replacement of
shl/lshr pairs with an AND instruction to be computed incorrectly. It's not
clear exactly why this is the case. This solves the disappearing shifts
problem, but it doesn't fix Regression/C/2003-05-21-UnionBitFields. It
seems there is more going on.
llvm-svn: 35342
* Don't assume shift amounts are <= 64 bits
* Avoid creating an extra APInt in SubOne and AddOne by using -- and ++
* Add another use of getLowBitsSet
* Convert a series of if statements to a switch
llvm-svn: 35339
using the facilities of APInt. While this duplicates a tiny fraction of
the constant folding code, it also makes the code easier to read and
avoids large ConstantExpr overhead for simple, known computations.
llvm-svn: 35335
* Convert the last use of a uint64_t that should have been an APInt.
* Change ComputeMaskedBits to have a const reference argument for the Mask
so that recursions don't cause unneeded temporaries. This causes temps
to be needed in other places (where the mask has to change) but this
change optimizes for the recursion which is more frequent.
* Remove two instances of &'ing a Mask with getAllOnesValue. It's no longer
needed because APInt is accurate in its bit computations.
* Start using the getLowBitsSet and getHighBitsSet methods on APInt
instead of shifting. This makes it clearer what the code is doing.
llvm-svn: 35321
* APIntify visitAdd and visitSelectInst
* Remove unused uint64_t versions of utility functions that have been
replaced with APInt versions.
This completes most of the changes for APIntification of InstCombine. This
passes llvm-test and llvm/test/Transforms/InstCombine/APInt.
Patch by Zhou Sheng.
llvm-svn: 35287
* Re-enable the APInt version of MaskedValueIsZero.
* APIntify the Compute{Un}SignedMinMaxValuesFromKnownBits functions
* APIntify visitICmpInst.
llvm-svn: 35270
Analyze GEPs. If the indices are all zero, transfer whether the pointer is
known to be not null through the GEP.
Add a few more cases for xor and shift instructions.
llvm-svn: 35257
* Fix some indentation and comments in InsertRangeTest
* Add an "IsSigned" parameter to AddWithOverflow and make it handle signed
additions. Also, APIntify this function so it works with any bitwidth.
* For the icmp pred ([us]div %X, C1), C2 transforms, exit early if the
div instruction's RHS is zero.
* Finally, for icmp pred (sdiv %X, C1), -C2, fix an off-by-one error. The
HiBound needs to be incremented in order to get the range test correct.
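For illustration only, a minimal sketch (not the routine from this patch) of
how a signed add-with-overflow check works, hard-coded to 32 bits rather than
an arbitrary APInt width:
#include <cstdint>
#include <cstdio>
// Signed overflow occurs exactly when both operands have the same sign and the
// wrapped result's sign differs from it.
static bool addWithOverflow32(int32_t A, int32_t B, int32_t &Result) {
  Result = (int32_t)((uint32_t)A + (uint32_t)B);  // wrapping add
  return ((A ^ Result) & (B ^ Result)) < 0;
}
int main() {
  int32_t R;
  std::printf("%d\n", (int)addWithOverflow32(INT32_MAX, 1, R));  // 1: overflows
  std::printf("%d\n", (int)addWithOverflow32(40, 2, R));         // 0: fits
  return 0;
}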
llvm-svn: 35247
Add new micro-optimizations.
Add icmp predicate snuggling. Given %x ULT 4, "icmp ugt %x, 2" becomes
"icmp eq %x, 3". This doesn't apply in any non-trivial cases yet due to missing
support for NE values in ValueRanges.
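A quick standalone check of the snuggling example above:
#include <cassert>
int main() {
  // When %x is known unsigned-less-than 4, "ugt %x, 2" and "eq %x, 3" agree.
  for (unsigned x = 0; x < 4; ++x)
    assert((x > 2u) == (x == 3u));
  return 0;
}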
llvm-svn: 35119
1. Replace getSignedMinValue() with getSignBit() for better code readability.
2. Replace APIntOps::shl() with operator<<= for convenience.
3. Make APInt construction more efficient.
llvm-svn: 35060
Provide an APIntified version of MaskedValueIsZero. This will (temporarily)
cause a "defined but not used" message from the compiler. It will be used
in the next patch in this series.
Patch by Sheng Zhou.
llvm-svn: 35019
Add a new ComputeMaskedBits function that is APIntified. We'll slowly
convert things over to use this version. When it's all done, we'll remove
the existing version.
llvm-svn: 35018
scalarrepl things down to elements, but mem2reg can't promote elements that
are memset/memcpy'd. Until then, the code is disabled with "0 &&".
llvm-svn: 34924
would scan the entire loop body, then scan all users of instructions in the
loop, looking for users outside the loop. Now, since we know that the
loop is in LCSSA form, we know that any users outside the loop will be LCSSA
phi nodes. Just scan them.
This speeds up indvars significantly.
llvm-svn: 34898
the order that instcombine processed instructions in the testcase. The end
result is that instcombine finished with:
define i16 @test1(i16 %a) {
%tmp = zext i16 %a to i32 ; <i32> [#uses=2]
%tmp21 = lshr i32 %tmp, 8 ; <i32> [#uses=1]
%tmp5 = shl i32 %tmp, 8 ; <i32> [#uses=1]
%tmp.upgrd.32 = or i32 %tmp21, %tmp5 ; <i32> [#uses=1]
%tmp.upgrd.3 = trunc i32 %tmp.upgrd.32 to i16 ; <i16> [#uses=1]
ret i16 %tmp.upgrd.3
}
which can't get matched as a bswap.
This patch makes instcombine more sophisticated about removing truncating
casts, allowing it to turn this into:
define i16 @test2(i16 %a) {
%tmp211 = lshr i16 %a, 8
%tmp52 = shl i16 %a, 8
%tmp.upgrd.323 = or i16 %tmp211, %tmp52
ret i16 %tmp.upgrd.323
}
which then matches as bswap. This fixes bswap.ll and implements
InstCombine/cast2.ll:test[12]. This also implements cast elimination of
add/sub.
llvm-svn: 34870
a value from the worklist required scanning the entire worklist to remove all
entries. We now use a combination map+vector to prevent duplicates from
happening and prevent the scan. This speeds up instcombine on a large file
from the llvm-gcc bootstrap from 189.7s to 4.84s in a debug build and from
5.04s to 1.37s in a release build.
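A toy sketch of the map + vector combination, with plain ints standing in for
instruction pointers (the real worklist certainly differs in detail): the map
gives O(1) duplicate detection and O(1) removal by tombstoning, while the
vector keeps cheap LIFO processing.
#include <cstdio>
#include <unordered_map>
#include <vector>
struct Worklist {
  std::vector<int> Items;               // -1 is reserved as a tombstone
  std::unordered_map<int, size_t> Pos;  // item -> index in Items
  void add(int I) {
    if (Pos.count(I)) return;           // duplicate: already queued
    Pos[I] = Items.size();
    Items.push_back(I);
  }
  void remove(int I) {
    auto It = Pos.find(I);
    if (It == Pos.end()) return;
    Items[It->second] = -1;             // tombstone; no linear scan needed
    Pos.erase(It);
  }
  bool pop(int &I) {
    while (!Items.empty()) {
      I = Items.back();
      Items.pop_back();
      if (I != -1) { Pos.erase(I); return true; }
    }
    return false;
  }
};
int main() {
  Worklist W;
  W.add(1); W.add(2); W.add(2); W.add(3);
  W.remove(2);
  int I;
  while (W.pop(I)) std::printf("%d\n", I);  // prints 3 then 1
  return 0;
}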
llvm-svn: 34848
First round of ConstantRange changes. This makes all CR constructors use
only APInt and not use ConstantInt. Clients are adjusted accordingly.
llvm-svn: 34756
as its main datastructure. There are many improvements yet to be made, but
this speeds up opt --std-compile-opts on 447.dealII by 7.3%.
llvm-svn: 34193
This happened because deadargelim now causes VMCore to auto-rename every
function that it hacks arguments out of. Because it hacks arguments out of
functions in a non-deterministic order, this caused the resultant numbering
to be nondeterministic. The fix is simply to be careful not to rename functions!
llvm-svn: 34005
BBNumbers. Instead of using a bi-directional mapping, just use a single
densemap. This speeds up mem2reg on 176.gcc by 8%, from 1.3489 to
1.2485s.
llvm-svn: 33940
the Transforms library. This reduces debug library size by 132 KB, debug
binary size by 376 KB, and reduces link time for llvm tools slightly.
llvm-svn: 33939
This patch replaces the SymbolTable class with ValueSymbolTable which does
not support type planes. This means that all symbol names in LLVM must now
be unique. The patch addresses the necessary changes to deal with this and
removes code no longer needed as a result. This completes the bulk of the
changes for this PR. Some cleanup patches will follow.
llvm-svn: 33918
Learn from sext and zext. The destination value falls within the range of the
source type.
Generalize properties regarding constant ints.
Get smarter about marking blocks as unreachable: if 1 >= 2 would have to hold
for this block to execute, then it isn't reachable.
llvm-svn: 33889
Make the Module's dependent library list use a std::vector instead of a
SetVector, and adjust #includes in .cpp files because SetVector.h is no longer
included.
llvm-svn: 33855
This feature is needed in order to support shifts of more than 255 bits
on large integer types. This changes the syntax for llvm assembly to
make shl, ashr and lshr instructions look like a binary operator:
shl i32 %X, 1
instead of
shl i32 %X, i8 1
Additionally, this should help a few passes perform additional optimizations.
llvm-svn: 33776
pessimization where instcombine can sink a load (good for code size) that
prevents an alloca from being promoted by mem2reg (bad for everything).
llvm-svn: 33771
updating. These were exposed by Devang's recent passmgr changes (with
non-default pass orderings) because now the inliner can be interleaved with
the LCSSA pass.
llvm-svn: 33760
ConstantFoldInstOperands/ConstantFoldCall to take a pointer to an array
of operands + size, instead of an std::vector.
In some cases, switch to using a SmallVector instead of a vector.
This allows us to get rid of some gross special-case code that was there
to avoid the cost of constructing a vector.
llvm-svn: 33670
This occurs in C++ code like:
#include <iostream>
#include <iterator>
int a[] = { 1, 2, 3, 4, 5 };
int main() {
using namespace std;
copy(a, a + sizeof(a)/sizeof(a[0]), ostream_iterator<int>(cout, "\n"));
return 0;
}
Before, we would compute the loop trip count as:
sdiv (i32 sub (i32 ptrtoint (i32* getelementptr ([5 x i32]* @a, i32 0, i32 5) to i32), i32 ptrtoint ([5 x i32]* @a to i32)), i32 4)
Now we decide it is "5". Amazing.
This code will need to be refactored, but I'm doing that as a separate
commit.
llvm-svn: 33665
Fix initializeConstant, now initializeInt. Fixes major performance
bottleneck.
X == Y || X->DominatedBy(Y) is redundant. Remove the X == Y part.
Fix crasher in makeEqual where getOrInsertNode would add a new constant,
producing an NE relationship between the two members we're trying to make
equal. This now allows us to mark more BBs as unreachable.
llvm-svn: 33612
1. New parameter attribute called 'inreg'. It has meaning "place this
parameter in registers, if possible". This is some generalization of
gcc's regparm(n) attribute. It's currently used only in the X86-32 backend.
2. Completely rewritten CC handling/lowering code inside X86 backend.
Merged stdcall + c CCs and fastcall + fast CC.
3. Dropped CSRET CC. We cannot add struct return variant for each
target-specific CC (e.g. stdcall + csretcc and so on).
4. Instead of the CSRET CC, introduced an 'sret' parameter attribute. Setting it
on the first parameter means 'this is a hidden pointer to the structure
return value; handle it gently' (illustrated in C terms below).
5. Fixed a small bug in llvm-extract and added a new feature to the
FunctionExtraction pass, which relinks all internally linked callees of a
deleted function to external linkage. This allows everything to be linked
together afterwards.
NOTEs: 1. Documentation will be updated soon.
2. llvm-upgrade should be improved to translate csret => sret.
Until then, there will be some unexpected test failures.
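As a rough illustration of what 'sret' models (not code from this patch):
returning a struct by value is lowered to the caller passing a hidden pointer
to its own result slot.
#include <cstdio>
struct Big { int a[8]; };
// Source level: the struct is returned by value.
static Big makeBig() { Big B = {}; B.a[0] = 42; return B; }
// Roughly what sret lowering produces: the callee writes through a hidden
// result pointer supplied by the caller.
static void makeBigLowered(Big *Result) {
  Big Tmp = {};
  Tmp.a[0] = 42;
  *Result = Tmp;
}
int main() {
  Big X = makeBig();
  Big Y;
  makeBigLowered(&Y);
  std::printf("%d %d\n", X.a[0], Y.a[0]);  // 42 42
  return 0;
}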
llvm-svn: 33597
The Module::setEndianness and Module::setPointerSize methods have been
removed. Instead you can get/set the DataLayout. Adjust these accordingly.
llvm-svn: 33530
changes: (1) don't special case for i1 any more, (2) use the new
TargetData::getTypeSizeInBits method to ensure source and dest are the
same bit width.
llvm-svn: 33427
We only want to do this if the src and destination types have the same
bit width. This patch uses TargetData::getTypeSizeInBits() instead of
making a special case for integer types and avoiding the transform if
they don't match.
llvm-svn: 33414
This is the final patch for this PR. It implements some minor cleanup
in the use of IntegerType, to wit:
1. Type::getIntegerTypeMask -> IntegerType::getBitMask
2. Type::Int*Ty changed to IntegerType* from Type*
3. ConstantInt::getType() returns IntegerType* now, not Type*
This also fixes PR1120.
Patch by Sheng Zhou.
llvm-svn: 33370
transform. Change some variable names so it is clear what is source and
what is the dest of the cast. Also, add an assert so that the integer-to-integer
case asserts if the bitwidths are different. This prevents
illegal casts from being formed and catches bitwidth bugs sooner.
llvm-svn: 33337
because TargetData::getTypeSize() returns the same for i1 and i8. This fix
is not right for the full generality of bitwise types, but it fixes the
regression.
llvm-svn: 33237
the basic block and is stable across runs in gdb or valgrind.
Make Node::update handle edges which dominate and are tighter than
existing edges.
Replace makeEqual's "squeeze theorem" code. Fixes miscompilation.
Gate the calls to defToOps and opsToDef. Before this, we were getting IG
edges about values which weren't even defined in the dominated area. This
reduces the size of the IG by about half.
llvm-svn: 33236
rename Type::getIntegralTypeMask to Type::getIntegerTypeMask.
This makes naming much more consistent. For example, there are now no longer any
instances of IntegerType that are not considered isInteger! :)
llvm-svn: 33225
that properties were being applied where they didn't belong. Fixes crash
in new MiBench testcase.
Also mark debugging code as such in #ifdef.
llvm-svn: 33177
Implement the arbitrary bit-width integer feature. The feature allows
integers of any bitwidth (up to 64) to be defined instead of just 1, 8,
16, 32, and 64 bit integers.
This change does several things:
1. Introduces a new Derived Type, IntegerType, to represent the number of
bits in an integer. The Type class's SubclassData field is used to
store the number of bits. This allows 2^23 bits in an integer type.
2. Removes the five integer Type::TypeID values for the 1, 8, 16, 32 and
64-bit integers. These are replaced with just IntegerType which is not
a primitive any more.
3. Adjust the rest of LLVM to account for this change.
Note that while this incremental change lays the foundation for arbitrary
bit-width integers, LLVM has not yet been converted to actually deal with
them in any significant way. Most optimization passes, for example, will
still only deal with the byte-width integer types. Future increments
will rectify this situation.
llvm-svn: 33113
recommended that getBoolValue be replaced with getZExtValue and that
get(bool) be replaced by get(const Type*, uint64_t). This implements
those changes.
llvm-svn: 33110
This patch converts getPrimitiveSize to getPrimitiveSizeInBits where it is
appropriate to do so (comparison of integer primitive types).
llvm-svn: 33012
Enable complex addressing modes on 64-bit platforms involving two induction
variables by keeping a size and scale in 64-bits not 32.
Patch by Dan Gohman.
llvm-svn: 33011
Take an incremental step towards type plane elimination. This change
separates types from values in the symbol tables by finally making use
of the TypeSymbolTable class. This yields more natural interfaces for
dealing with types and unclutters the SymbolTable class.
llvm-svn: 32956
This patch replaces signed integer types with signless ones:
1. [US]Byte -> Int8
2. [U]Short -> Int16
3. [U]Int -> Int32
4. [U]Long -> Int64.
5. Removal of isSigned, isUnsigned, getSignedVersion, getUnsignedVersion
and other methods related to signedness. In a few places this warranted
identifying the signedness information from other sources.
llvm-svn: 32785
Fix this by ensuring that a bitcast is inserted to do sign switching. This
is only temporarily needed as the merging of signed and unsigned is next
on the SignlessTypes plate.
llvm-svn: 32757
This patch removes the SetCC instructions and replaces them with the ICmp
and FCmp instructions. The SetCondInst instruction has been removed and
replaced with ICmpInst and FCmpInst.
llvm-svn: 32751
code to handle instructions as well, so that we properly fold things like
X & undef -> 0.
This fixes Transforms/SCCP/2006-12-19-UndefBug.ll
llvm-svn: 32715
creation. These changes are still temporary but at least this pushes
knowledge of signedness out closer to where it can be determined properly
and allows signedness to be removed from VMCore.
llvm-svn: 32654
rework the hacks that had us passing OStream in. We pass in std::ostream*
instead, check for null, and then dispatch to the correct print() method.
llvm-svn: 32636
used to determine whether a ZExt or SExt cast is performed. Instead, pass
an "isSigned" bool to the function and determine its value from the opcode
of the cast involved.
Also, clean up some cruft from previous patches.
llvm-svn: 32548
The cast patch introduced the possibility that the wrong cast opcode
could be used and that this transform could trigger on different kinds
of cast operations. This patch rectifies that.
llvm-svn: 32538
The long awaited CAST patch. This introduces 12 new instructions into LLVM
to replace the cast instruction. Corresponding changes throughout LLVM are
provided. This passes llvm-test, llvm/test, and SPEC CPUINT2000 with the
exception of 175.vpr which fails only on a slight floating point output
difference.
llvm-svn: 31931
Remove predicate simplifier from default gcc3 pipeline. New design is too
slow to enable by default.
Add new testcases for problems encountered in development.
llvm-svn: 31895
This patch converts the old SHR instruction into two instructions,
AShr (Arithmetic) and LShr (Logical). The Shr instructions no longer depend
on the sign of their operands.
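The distinction the two instructions capture, illustrated with ordinary C++
shifts (on the usual two's-complement targets, signed >> is the arithmetic
shift):
#include <cstdint>
#include <cstdio>
int main() {
  int32_t  v = -16;
  uint32_t u = (uint32_t)v;
  std::printf("ashr: %d\n", v >> 2);   // -4: the sign bit is replicated
  std::printf("lshr: %u\n", u >> 2);   // 1073741820: zero-filled from the top
  return 0;
}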
llvm-svn: 31542
Turn on -Wunused and -Wno-unused-parameter. Clean up most of the resulting
fall out by removing unused variables. Remaining warnings have to do with
unused functions (I didn't want to delete code without review) and unused
variables in generated code. Maintainers should clean up the remaining
issues when they see them. All changes pass DejaGnu tests and Olden.
llvm-svn: 31380
Make necessary changes to support DIV -> [SUF]Div. This changes llvm to
have three division instructions: signed, unsigned, floating point. The
bytecode and assembler are backwards compatible, however.
llvm-svn: 31195
1. Better document what is going on here.
2. Only hack on one branch per iteration, making the results less conservative.
3. Handle the problematic case by marking edges executable instead of by
playing with value lattice states. This is far less pessimistic, and fixes
SCCP/ipsccp-gvar.ll.
llvm-svn: 31106
This patch implements the first increment for the Signless Types feature.
All changes pertain to removing the ConstantSInt and ConstantUInt classes
in favor of just using ConstantInt.
llvm-svn: 31063
SimplifyDemandedBits. The idea is that some operations can be simplified if
not all of the computed elements are needed. Some targets (like x86) have a
large number of intrinsics that operate on a single element, but pass other
elts through unmodified. If those other elements are not needed, the
intrinsics can be simplified to scalar operations, and insertelement ops can
be removed.
This turns (for example):
ushort %Convert_sse(float %f) {
%tmp = insertelement <4 x float> undef, float %f, uint 0 ; <<4 x float>> [#uses=1]
%tmp10 = insertelement <4 x float> %tmp, float 0.000000e+00, uint 1 ; <<4 x float>> [#uses=1]
%tmp11 = insertelement <4 x float> %tmp10, float 0.000000e+00, uint 2 ; <<4 x float>> [#uses=1]
%tmp12 = insertelement <4 x float> %tmp11, float 0.000000e+00, uint 3 ; <<4 x float>> [#uses=1]
%tmp28 = tail call <4 x float> %llvm.x86.sse.sub.ss( <4 x float> %tmp12, <4 x float> < float 1.000000e+00, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp37 = tail call <4 x float> %llvm.x86.sse.mul.ss( <4 x float> %tmp28, <4 x float> < float 5.000000e-01, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp37, <4 x float> < float 6.553500e+04, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> zeroinitializer ) ; <<4 x float>> [#uses=1]
%tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 ) ; <int> [#uses=1]
%tmp69 = cast int %tmp to ushort ; <ushort> [#uses=1]
ret ushort %tmp69
}
into:
ushort %Convert_sse(float %f) {
entry:
%tmp28 = sub float %f, 1.000000e+00 ; <float> [#uses=1]
%tmp37 = mul float %tmp28, 5.000000e-01 ; <float> [#uses=1]
%tmp375 = insertelement <4 x float> undef, float %tmp37, uint 0 ; <<4 x float>> [#uses=1]
%tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp375, <4 x float> < float 6.553500e+04, float undef, float undef, float undef > ) ; <<4 x float>> [#uses=1]
%tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> < float 0.000000e+00, float undef, float undef, float undef > ) ; <<4 x float>> [#uses=1]
%tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 ) ; <int> [#uses=1]
%tmp69 = cast int %tmp to ushort ; <ushort> [#uses=1]
ret ushort %tmp69
}
which improves codegen from:
_Convert_sse:
movss LCPI1_0, %xmm0
movss 4(%esp), %xmm1
subss %xmm0, %xmm1
movss LCPI1_1, %xmm0
mulss %xmm0, %xmm1
movss LCPI1_2, %xmm0
minss %xmm0, %xmm1
xorps %xmm0, %xmm0
maxss %xmm0, %xmm1
cvttss2si %xmm1, %eax
andl $65535, %eax
ret
to:
_Convert_sse:
movss 4(%esp), %xmm0
subss LCPI1_0, %xmm0
mulss LCPI1_1, %xmm0
movss LCPI1_2, %xmm1
minss %xmm1, %xmm0
xorps %xmm1, %xmm1
maxss %xmm1, %xmm0
cvttss2si %xmm0, %eax
andl $65535, %eax
ret
This is just a first step, it can be extended in many ways. Testcase here:
Transforms/InstCombine/vec_demanded_elts.ll
llvm-svn: 30752
The critical edge block dominates the dest block if the destblock dominates
all edges other than the one incoming from the critical edge.
llvm-svn: 30696
or when splitting loops with a common header into multiple loops. In particular
the old code would always insert the preheader before the old loop header. This
is disastrous in cases where the loop hasn't been rotated. For example, it can
produce code like:
.. outside the loop...
jmp LBB1_2 #bb13.outer
LBB1_1: #bb1
movsd 8(%esp,%esi,8), %xmm1
mulsd (%edi), %xmm1
addsd %xmm0, %xmm1
addl $24, %edi
incl %esi
jmp LBB1_3 #bb13
LBB1_2: #bb13.outer
leal (%edx,%eax,8), %edi
pxor %xmm1, %xmm1
xorl %esi, %esi
LBB1_3: #bb13
movapd %xmm1, %xmm0
cmpl $4, %esi
jl LBB1_1 #bb1
Note that the loop body is actually LBB1_1 + LBB1_3, which means that the
loop now contains an uncond branch WITHIN it to jump around the inserted
loop header (LBB1_2). Doh.
This patch changes the preheader insertion code to insert it in the right
spot, producing this code:
... outside the loop, fall into the header ...
LBB1_1: #bb13.outer
leal (%edx,%eax,8), %esi
pxor %xmm0, %xmm0
xorl %edi, %edi
jmp LBB1_3 #bb13
LBB1_2: #bb1
movsd 8(%esp,%edi,8), %xmm0
mulsd (%esi), %xmm0
addsd %xmm1, %xmm0
addl $24, %esi
incl %edi
LBB1_3: #bb13
movapd %xmm0, %xmm1
cmpl $4, %edi
jl LBB1_2 #bb1
Totally crazy, no branch in the loop! :)
llvm-svn: 30587
reachable, making it general purpose enough for use by InsertPreheaderForLoop.
Eliminate custom dominfo updating code in InsertPreheaderForLoop, using
UpdateDomInfoForRevectoredPreds instead.
llvm-svn: 30586
DLL* linkages got full (I hope) code generation support in C & both x86
assembler backends.
External weak linkage was added for future use; we don't provide any
code generation, etc. support for it yet.
llvm-svn: 30374
operations (like findProperties) should be faster, at the expense of
unionSets being slower in cases that are rare in practice.
Don't erase a dead Instruction. This fixes a memory corruption issue.
llvm-svn: 30235
Reorder operations to remove duplicated work.
Fix to leave floating-point types out of the optimization.
Add tests to predsimplify.ll for SwitchInst and SelectInst handling.
llvm-svn: 30055
If a branch's condition has become a ConstantBool, simplify it immediately.
Removing the edge saves work and exposes more optimization opportunities
in the pass.
Add support for SelectInst.
llvm-svn: 29970
Not only will this take huge amounts of compile time, the resultant loop nests
won't be useful for optimization. This reduces loopsimplify time on
Transforms/LoopSimplify/2006-08-11-LoopSimplifyLongTime.ll from ~32s to ~0.4s
with a debug build of llvm on a 2.7Ghz G5.
llvm-svn: 29647
blocks that target loop blocks.
Before, the code was run once per loop, and depended on the number of
predecessors each block in the loop had. Unfortunately, scanning preds can
be really slow when huge numbers of phis exist or when phis with huge numbers
of inputs exist.
Now, the code is run once per function and scans successors instead of preds,
which is far faster. In addition, the new code is simpler and is goto free,
woo.
This change speeds up a nasty testcase Duraid provided me from taking hours to
taking ~72s with a debug build. The functionality this implements is already
tested in the testsuite as Transforms/CodeExtractor/2004-03-13-LoopExtractorCrash.ll.
llvm-svn: 29644
SlowOperatingInfo, Statistics). Besides providing an example of how to
use these facilities, it also serves to debug problems with runtime linking
when dlopening a loadable module. These three support facilities exercise
different combinations of Text/Weak, Weak/Text, and Text/Text linking
between the executable and the module.
llvm-svn: 29552
1. Change the usage of LOADABLE_MODULE so that it implies all the things
necessary to make a loadable module. This reduces the user's burden to
get a loadable module correctly built.
2. Document the usage of LOADABLE_MODULE in the MakefileGuide
3. Adjust the makefile for lib/Transforms/Hello to use the new specification
for building loadable modules
4. Adjust the sample project to not attempt to build a shared library for
its little library. This was just wasteful and not instructive at all.
llvm-svn: 29551
1. Update an obsolete comment.
2. Make the sorting by base an explicit (though still N^2) step, so
that the code is more clear on what it is doing.
3. Partition uses so that uses inside the loop are handled before uses
outside the loop.
Note that none of these changes currently changes the code inserted by LSR,
but they are a stepping stone to getting there.
This code is the result of some crazy pair programming with Nate. :)
llvm-svn: 29493
down approach, inspired by discussions with Tanya.
This approach is significantly faster, because it does not need dominator
frontiers and it does not insert extraneous unused PHI nodes. For example, on
252.eon, in a release-asserts build, this speeds up LCSSA (which is the slowest
pass in gccas) from 9.14s to 0.74s on my G5. This code is also slightly smaller
and significantly simpler than the old code.
Amusingly, in a normal Release build (which includes the
"assert(L->isLCSSAForm());" assertion), asserting that the result of LCSSA
is in LCSSA form is actually slower than the LCSSA transformation pass
itself on 252.eon. I will see if Loop::isLCSSAForm can be sped up next.
llvm-svn: 29463
target CG node. This allows the inliner to properly update the callgraph
when using the pruning inliner. The pruning inliner may not copy over all
call sites from a callee to a caller, so the edges corresponding to those
call sites should not be copied over either.
This fixes PR827 and Transforms/Inline/2006-07-12-InlinePruneCGUpdate.ll
llvm-svn: 29120
will be profitable. This is mainly to remove some cases where excessive
unswitching would result in long compile times and/or huge generated code.
Once someone comes up with a better heuristic that avoids these cases, this
should be switched out.
llvm-svn: 28962
Remove the Function pointer cast in these calls, converting it to
a cast of the argument.
%tmp60 = tail call int cast (int (ulong)* %str to int (int)*)( int 10 )
%tmp60 = tail call int cast (int (ulong)* %str to int (int)*)( uint %tmp51 )
llvm-svn: 28953
"LCSSA" phi node causes indvars to break dominance properties. This fixes
causes indvars to avoid inserting aggressive code in this case, instead
indvars should be fixed to be more aggressive in the face of lcssa phi's.
llvm-svn: 28850
not handling PHI nodes correctly when determining if a value was live-out.
This patch reduces the number of detected live-out variables in the testcase
from 6565 to 485.
llvm-svn: 28771
If a single exit block has multiple predecessors within the loop, it will
appear in the exit blocks list more than once. LCSSA needs to take that into
account so that it doesn't double process that exit block.
llvm-svn: 28750
post-increment value, should first be cast to the appropriate type (to the
type of the common expr). Otherwise, the rewrite of a use based on (common +
iv) may end up with an incorrect type.
llvm-svn: 28735
code (while cloning) it often gets the branch/switch instructions. Since it
knows that edges of the CFG are dead, it need not clone (or even look) at
the obviously dead blocks. This should speed up the inliner substantially on
code where there are lots of inlinable calls to functions with constant
arguments. On C++ code in particular, this kicks in.
llvm-svn: 28641
the iterated Dominance Frontier of the loop-closure Phi's. This is the
second phase of the LCSSA pass. The third phase (coming soon) will be to
update all uses of loop variables to use the loop-closure Phi's instead.
llvm-svn: 28524
makes it so that it constant folds instructions on the fly. This is good
for several reasons:
0. Many instructions are constant foldable after inlining, particularly if
inlining a call with constant arguments.
1. Without this, the inliner has to allocate memory for all of the instructions
that can be constant folded, then a subsequent pass has to delete them. This
gets the job done without this extra work.
2. This makes the inliner *pass* a bit more aggressive: in particular, it
partially solves a phase order issue where the inliner would inline lots
of code that folds away to nothing, but think that the resultant function
is big because of this code that will be gone. Now the code never exists.
This is the first part of a 2-step process. The second part will be smart
enough to see when this implicit constant folding propagates a constant into
a branch or switch instruction, making CFG edges dead.
This implements Transforms/Inline/inline_constprop.ll
llvm-svn: 28521
When doing the initial pass of constant folding, if we get a constantexpr,
simplify the constant expr like we would do if the constant is folded in the
normal loop.
This fixes the missed-optimization regression in
Transforms/InstCombine/getelementptr.ll last night.
llvm-svn: 28224
1. Implement InstCombine/deadcode.ll by not adding instructions in unreachable
blocks (due to constants in conditional branches/switches) to the worklist.
This causes them to be deleted before instcombine starts up, leading to
better optimization.
2. In the prepass over instructions, do trivial constprop/dce as we go. This
has the effect of improving the effectiveness of #1. In addition, it
*significantly* speeds up instcombine on test cases with large amounts of
constant folding code (for example, that produced by code specialization
or partial evaluation). In one example, it speeds up instcombine from
0.0589s to 0.0224s with a release build (a 2.6x speedup).
llvm-svn: 28215
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation
only apply when both casts really will cause code to be generated. If one or
both doesn't, then this xform doesn't remove a cast.
This fixes Transforms/InstCombine/2006-05-06-Infloop.ll
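The identity behind the transform, checked here for zero-extending casts from
8 to 32 bits (the widths are chosen only for the example):
#include <cassert>
#include <cstdint>
int main() {
  for (unsigned a = 0; a < 256; ++a)
    for (unsigned b = 0; b < 256; ++b) {
      uint32_t AndOfCasts = (uint32_t)(uint8_t)a & (uint32_t)(uint8_t)b;
      uint32_t CastOfAnd  = (uint32_t)(uint8_t)((uint8_t)a & (uint8_t)b);
      assert(AndOfCasts == CastOfAnd);
    }
  return 0;
}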
llvm-svn: 28141
nondeterminism being bad) could cause some trivial missed optimizations (dead
phi nodes being left around for later passes to clean up).
With this, llvm-gcc4 now bootstraps and correctly compares. I don't know
why I never tried to do it before... :)
llvm-svn: 27984
are visible to analysis as intrinsics. That is, make sure someone doesn't pass
free around by address in some struct (as happens in say 176.gcc).
This doesn't get rid of any indirect calls, just ensures that calls to free and malloc
are always direct.
llvm-svn: 27560
%tmp = cast <4 x uint> %tmp to <4 x int> ; <<4 x int>> [#uses=1]
%tmp = cast <4 x int> %tmp to <4 x float> ; <<4 x float>> [#uses=1]
into:
%tmp = cast <4 x uint> %tmp to <4 x float> ; <<4 x float>> [#uses=1]
llvm-svn: 27355
%tmp = cast <4 x uint>* %testData to <4 x int>* ; <<4 x int>*> [#uses=1]
%tmp = load <4 x int>* %tmp ; <<4 x int>> [#uses=1]
to this:
%tmp = load <4 x uint>* %testData ; <<4 x uint>> [#uses=1]
%tmp = cast <4 x uint> %tmp to <4 x int> ; <<4 x int>> [#uses=1]
llvm-svn: 27353
stride. For a set of uses of the IV of a stride which is a multiple
of another stride, do not insert a new IV expression. Rather, reuse the
previous IV and rewrite the uses as uses of IV expression multiplied by
the factor.
e.g.
x = 0 ...; x ++
y = 0 ...; y += 4
then use of y can be rewritten as use of 4*x for x86.
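The reuse spelled out in C: with x stepping by 1 and y stepping by 4, every
use of y is just 4*x, so the second induction variable can be dropped.
#include <cassert>
int main() {
  int y = 0;
  for (int x = 0; x < 100; ++x) {
    assert(y == 4 * x);  // a use of y, rewritten as a use of 4*x
    y += 4;              // the stride-4 IV that reuse makes redundant
  }
  return 0;
}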
llvm-svn: 26803
the pointer is known to come from either a global variable, alloca or
malloc. This allows us to compile this:
P = malloc(28);
memset(P, 0, 28);
into explicit stores on PPC instead of a memset call.
llvm-svn: 26577
Make this code more powerful by using ComputeMaskedBits instead of looking
for an AND operand. This lets us fold this:
int %test23(int %a) {
%tmp.1 = and int %a, 1
%tmp.2 = seteq int %tmp.1, 0
%tmp.3 = cast bool %tmp.2 to int ;; xor tmp1, 1
ret int %tmp.3
}
into: xor (and a, 1), 1
llvm-svn: 26396
caused SPASS to fail building last night.
We can't trivially unswitch a loop if the exit block has phi nodes in it,
because we don't know which predecessor to use.
llvm-svn: 26320
This fixes a testcase that nate reduced from spass.
Also included are a couple minor code changes that don't affect the generated
code at all.
llvm-svn: 26235
unswitch this loop on 2 before sweating to unswitch on 1/3.
void test4(int N, int i, int C, int*P, int*Q) {
int j;
for (j = 0; j < N; ++j) {
switch (C) { // general unswitching.
default: P[i+j] = 0; break;
case 1: Q[i+j] = 0; break;
case 3: P[i+j] = Q[i+j]; break;
case 2: break; // TRIVIAL UNSWITCH on C==2
}
}
}
llvm-svn: 26223
this for example:
for (j = 0; j < N; ++j) { // trivial unswitch
if (C)
P[i+j] = 0;
}
turning it into the obvious code without bothering to duplicate an empty loop.
llvm-svn: 26220
1. Teach GetConstantInType to handle boolean constants.
2. Teach instcombine to fold (compare X, CST) when X has known 0/1 bits.
Testcase here: set.ll:test22
3. Improve the "(X >> c1) & C2 == 0" folding code to allow a noop cast
between the shift and the and. More aggressive bitfolding for other reasons
was turning signed shr's into unsigned shr's, leaving the noop cast in
the way.
llvm-svn: 26131
This allows us to simplify on conditions where bits are not known, but they
are not demanded either! This also fixes a couple of bugs in
ComputeMaskedBits that were exposed during this work.
In the future, swaths of instcombine should be removed, as this code
subsumes a bunch of ad-hockery.
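A minimal sketch (not LLVM's interface) of the known-zero/known-one
bookkeeping that ComputeMaskedBits maintains, shown for the AND case on
32-bit values:
#include <cstdint>
#include <cstdio>
struct KnownBits32 {
  uint32_t Zero;  // bits known to be 0
  uint32_t One;   // bits known to be 1
};
// AND: a bit is known zero if it is known zero on either side, and known one
// only if it is known one on both sides.
static KnownBits32 knownAnd(KnownBits32 L, KnownBits32 R) {
  KnownBits32 Out;
  Out.Zero = L.Zero | R.Zero;
  Out.One  = L.One & R.One;
  return Out;
}
int main() {
  KnownBits32 X = { 0xFFFFFF00u, 0x00000001u };  // high 24 bits zero, bit 0 one
  KnownBits32 C = { 0xFFFFFFF0u, 0x0000000Fu };  // the constant 0xF: fully known
  KnownBits32 R = knownAnd(X, C);
  std::printf("zero=%08x one=%08x\n", R.Zero, R.One);  // zero=fffffff0 one=00000001
  return 0;
}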
llvm-svn: 26122
1. Teach it new tricks: in particular how to propagate through signed shr and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero.
3. Teach instcombine (AND X, C) to fold when we know all C bits of X.
This implements Regression/Transforms/InstCombine/bittest.ll, and allows
future things to be simplified.
llvm-svn: 26087
instruction onto the worklist (in case they are now dead).
Add a really trivial local DSE implementation to help out bitfield code.
We now fold this:
struct S {
unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
S();
};
S::S() : a(0), b(0), c(1), d(0), e(6) {}
to this:
void %_ZN1SC1Ev(%struct.S* %this) {
entry:
%tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
store ubyte 38, ubyte* %tmp.1
ret void
}
much earlier (in gccas instead of only in gccld after DSE runs).
llvm-svn: 26050
mask. This allows the code to be simpler and more efficient.
Also, generalize some of the cases in MVIZ a bit, making it slightly more aggressive.
llvm-svn: 26035
'demanded bits', inspired by Nate's work in the dag combiner. This isn't
complete, but it needs unrelated instcombiner changes to continue.
llvm-svn: 26033
1. When rewriting code in outer loops, sometimes we would insert code into
inner loops that is invariant in that loop.
2. Notice that 4*(2+x) is 8+4*x and use that to simplify expressions.
This is a performance neutral change.
llvm-svn: 25964
1. Do not statically construct a map when the program starts up, this
is expensive and cannot be optimized. Instead, create a list.
2. Do not insert entries for all functions in the module into a hashmap
that lives the full life of the compiler.
llvm-svn: 25512
1. Use the varargs version of getOrInsertFunction to simplify code.
2. Remove an #include.
3. Reduce the number of #ifdef's.
4. Remove extraneous vertical whitespace.
llvm-svn: 25508
Don't do floor->floorf conversion if floorf is not available. This checks
the compiler's host, not its target, which is incorrect for cross-compilers.
Not sure that's important, as we don't build many cross-compilers.
llvm-svn: 25456
This patch is an incremental step towards supporting a flat symbol table.
It de-overloads the intrinsic functions by providing type-specific intrinsics
and arranging for automatically upgrading from the old overloaded name to
the new non-overloaded name. Specifically:
llvm.isunordered -> llvm.isunordered.f32, llvm.isunordered.f64
llvm.sqrt -> llvm.sqrt.f32, llvm.sqrt.f64
llvm.ctpop -> llvm.ctpop.i8, llvm.ctpop.i16, llvm.ctpop.i32, llvm.ctpop.i64
llvm.ctlz -> llvm.ctlz.i8, llvm.ctlz.i16, llvm.ctlz.i32, llvm.ctlz.i64
llvm.cttz -> llvm.cttz.i8, llvm.cttz.i16, llvm.cttz.i32, llvm.cttz.i64
New code should not use the overloaded intrinsic names. Warnings will be
emitted if they are used.
llvm-svn: 25366
it doesn't contain any calls. This is a fairly common case for C++ code,
so it will probably speed up the inliner marginally in these cases.
llvm-svn: 25284
function was not an alloca, we wouldn't check the entry block for any allocas,
leading to increased stack space in some cases. In practice, allocas are almost
always at the top of the block, so this was never noticed.
llvm-svn: 25280
the shifts.
This allows us to fold this (which is the 'integer add a constant' sequence
from cozmic's scheme compiler):
int %x(uint %anf-temporary776) {
%anf-temporary777 = shr uint %anf-temporary776, ubyte 1
%anf-temporary800 = cast uint %anf-temporary777 to int
%anf-temporary804 = shl int %anf-temporary800, ubyte 1
%anf-temporary805 = add int %anf-temporary804, -2
%anf-temporary806 = or int %anf-temporary805, 1
ret int %anf-temporary806
}
into this:
int %x(uint %anf-temporary776) {
%anf-temporary776 = cast uint %anf-temporary776 to int
%anf-temporary776.mask1 = add int %anf-temporary776, -2
%anf-temporary805 = or int %anf-temporary776.mask1, 1
ret int %anf-temporary805
}
note that instcombine already knew how to eliminate the AND that the two
shifts fold into. This is tested by InstCombine/shift.ll:test26
-Chris
llvm-svn: 25128
a) use better local variable names (OldMT -> OldFT) where "M" is used to
mean "Function" (perhaps it was previously "Method"?)
b) print out the module identifier in a warning message so that it is
possible to track down in which module the error occurred.
llvm-svn: 24698
186.crafty by about 16% (from 15.109s to 13.045s) on my system.
This turns allocas with unions/casts into scalars. For example crafty has
something like this:
union doub {
unsigned short i[4];
long long d;
};
int f(long long a) {
return ((union doub){.d=a}).i[1];
}
Instead of generating loads and stores to an alloca, we now promote the
whole thing to a scalar long value.
This implements: Transforms/ScalarRepl/AggregatePromote.ll
llvm-svn: 24667
The code is organized into 3 parts (2 passes)
1) a linked set of profiling passes, which implement an analysis group (linked, like alias analysis are). These insert profiling into the program, and remember what they inserted, so that at a later time they can be queried about any instruction.
2) a pass that handles inserting the random sampling framework. This also has options to control how random samples are chosen. Currently implemented are Global counters, register allocated global counters, and read cycle counter (see? there was a reason for it).
The profiling passes are almost identical to the existing ones (block, function, and null profiling is supported right now), and they are valid passes without the sampling framework (hence the existing passes can be unified with the new ones, not done yet).
Some things are a bit ugly still, but that should be fixed up soon enough.
Other todo? making the counter values not "magic 2^16 -1" values, but dynamically choosable.
llvm-svn: 24493
has a single def. In this case, look for uses that are dominated by the def
and attempt to rewrite them to directly use the stored value.
This speeds up mem2reg on these values and reduces the number of phi nodes
inserted. This should address PR665.
llvm-svn: 24411
Add support for specifying alignment and size of setjmp jmpbufs.
No targets currently do anything with this information, nor is it preserved
in the bytecode representation. That's coming up next.
llvm-svn: 24196
a few times in crafty:
OLD: %tmp.36 = div int %tmp.35, 8 ; <int> [#uses=1]
NEW: %tmp.36 = div uint %tmp.35, 8 ; <uint> [#uses=0]
OLD: %tmp.19 = div int %tmp.18, 8 ; <int> [#uses=1]
NEW: %tmp.19 = div uint %tmp.18, 8 ; <uint> [#uses=0]
OLD: %tmp.117 = div int %tmp.116, 8 ; <int> [#uses=1]
NEW: %tmp.117 = div uint %tmp.116, 8 ; <uint> [#uses=0]
OLD: %tmp.92 = div int %tmp.91, 8 ; <int> [#uses=1]
NEW: %tmp.92 = div uint %tmp.91, 8 ; <uint> [#uses=0]
Which all turn into shrs.
llvm-svn: 24190
8 times in vortex, allowing the srems to be turned into shrs:
OLD: %tmp.104 = rem int %tmp.5.i37, 16 ; <int> [#uses=1]
NEW: %tmp.104 = rem uint %tmp.5.i37, 16 ; <uint> [#uses=0]
OLD: %tmp.98 = rem int %tmp.5.i24, 16 ; <int> [#uses=1]
NEW: %tmp.98 = rem uint %tmp.5.i24, 16 ; <uint> [#uses=0]
OLD: %tmp.91 = rem int %tmp.5.i19, 8 ; <int> [#uses=1]
NEW: %tmp.91 = rem uint %tmp.5.i19, 8 ; <uint> [#uses=0]
OLD: %tmp.88 = rem int %tmp.5.i14, 8 ; <int> [#uses=1]
NEW: %tmp.88 = rem uint %tmp.5.i14, 8 ; <uint> [#uses=0]
OLD: %tmp.85 = rem int %tmp.5.i9, 1024 ; <int> [#uses=2]
NEW: %tmp.85 = rem uint %tmp.5.i9, 1024 ; <uint> [#uses=0]
OLD: %tmp.82 = rem int %tmp.5.i, 512 ; <int> [#uses=2]
NEW: %tmp.82 = rem uint %tmp.5.i1, 512 ; <uint> [#uses=0]
OLD: %tmp.48.i = rem int %tmp.5.i.i161, 4 ; <int> [#uses=1]
NEW: %tmp.48.i = rem uint %tmp.5.i.i161, 4 ; <uint> [#uses=0]
OLD: %tmp.20.i2 = rem int %tmp.5.i.i, 4 ; <int> [#uses=1]
NEW: %tmp.20.i2 = rem uint %tmp.5.i.i, 4 ; <uint> [#uses=0]
it also occurs 9 times in gcc, but with odd constant divisors (1009 and 61)
so the payoff isn't as great.
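The sign reasoning behind these rewrites, checked for non-negative 32-bit
values: a signed div/rem by a power of two matches the unsigned form, which
lowers to a shift or a mask.
#include <cassert>
#include <cstdint>
int main() {
  for (int32_t x = 0; x < 100000; ++x) {
    assert(x / 8  == (int32_t)((uint32_t)x >> 3));   // udiv by 8 is a shift
    assert(x % 16 == (int32_t)((uint32_t)x & 15u));  // urem by 16 is a mask
  }
  return 0;
}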
llvm-svn: 24189
into the LLVMAnalysis library.
This allows LLVMTranform and LLVMTransformUtils to be archives and linked
with LLVMAnalysis.a, which provides any missing definitions.
llvm-svn: 24036
SparcV9 JIT.
2. Make LLVMTransformUtils a relinked object file and always link it before
LLVMAnalysis.a. These two libraries have circular dependencies on each
other, which creates problems when building the SparcV9 JIT. This change
fixes the dependency problems on all platforms with a minimum of fuss.
llvm-svn: 24023
one use (but one is a cast). This handles the very common case of:
X = alloc [n x byte]
Y = cast X to somethingbetter
seteq X, null
In order to avoid infinite looping when there are multiple casts, we only
allow this if the xform is strictly increasing the alignment of the
allocation.
llvm-svn: 23961
where the second has less alignment required. If we had explicit alignment
support in the IR, we could handle this case, but we can't until we do.
llvm-svn: 23960
pointer marking the end of the list, the zero *must* be cast to the pointer
type. An un-cast zero is a 32-bit int, and at least on x86_64, gcc will
not extend the zero to 64 bits, thus allowing the upper 32 bits to be
random junk.
The new END_WITH_NULL macro may be used to annotate such a function
so that GCC (version 4 or newer) will detect the use of un-casted zero
at compile time.
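A standalone illustration of the pitfall; countArgs here is a made-up varargs
function, not anything from the tree.
#include <cstdarg>
#include <cstdio>
// Counts string arguments up to a null-pointer terminator.
static int countArgs(const char *First, ...) {
  int N = 0;
  va_list AP;
  va_start(AP, First);
  for (const char *P = First; P; P = va_arg(AP, const char *))
    ++N;
  va_end(AP);
  return N;
}
int main() {
  // Correct: the terminator is an explicit pointer-sized null.
  std::printf("%d\n", countArgs("a", "b", (const char *)0));  // 2
  // Wrong (undefined behavior): a plain 0 is an int, so on x86_64 the upper
  // 32 bits that va_arg reads as a pointer may be garbage:
  //   countArgs("a", "b", 0);
  return 0;
}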
llvm-svn: 23888
check the presplit pred, not the post-split pred. This was causing us
to make the wrong decision in some cases, leaving the critical edge block
in the loop.
llvm-svn: 23601
is performed so it happens at most once per function that contains an invoke
instead of once per invoke in the function. This patch has the following perks:
1. It fixes PR631, which complains about slowness.
2. It fixes PR240, which complains about non-volatile vars being live across
setjmp/longjmps.
3. It improves (but does not fix) the jmpbuf alignment issue on itanium by not
forcing the jmpbufs to always be 8-bytes off the alignment of the structure.
4. It speeds up 253.perlbmk from 338s to 13.70s (a 25x improvement!), making us
now about 4% faster than GCC.
Further improvements are also possible.
llvm-svn: 23477
Implement the start of global ctor optimization. It is currently smart
enough to remove the global ctor for cases like this:
struct foo {
foo() {}
} x;
... saving a bit of startup time for the program.
llvm-svn: 23433
not define a value that is used outside of its block. This catches many
more simplifications, e.g. 854 in 176.gcc, 137 in vpr, etc.
This implements branch-phi-thread.ll:test3.ll
llvm-svn: 23397
if () { store A -> P; } else { store B -> P; }
into a PHI node with one store, in the most trivial case. This implements
load.ll:test10.
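The shape of the rewrite in C terms (a sketch of the idea, not the pass's
actual output): select the value first, then perform a single store.
#include <cstdio>
static void before(int c, int A, int B, int *P) {
  if (c) *P = A; else *P = B;  // two stores, one on each path
}
static void after(int c, int A, int B, int *P) {
  *P = c ? A : B;              // one store of a PHI/select of A and B
}
int main() {
  int X = 0, Y = 0;
  before(1, 10, 20, &X);
  after(1, 10, 20, &Y);
  std::printf("%d %d\n", X, Y);  // 10 10
  return 0;
}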
llvm-svn: 23324
load are exactly consecutive. This is picked up by other passes, but this
triggers thousands of times in fortran programs that use static locals
(and is thus a compile-time speedup).
llvm-svn: 23320
code for IV uses outside of loops that are not dominated by the latch block.
We should only convert these uses to use the post-inc value if they ARE
dominated by the latch block.
Also use a new LoopInfo method to simplify some code.
This fixes Transforms/LoopStrengthReduce/2005-09-12-UsesOutOutsideOfLoop.ll
llvm-svn: 23318
in building maximal expressions before simplifying them. In particular, in
cases like this:
X-(A+B+X)
the code would consider A+B+X to be a maximal expression (not understanding
that the single use '-' would be turned into a + later), simplify it (a noop),
and then later simplify it again.
Each of these simplify steps is where the cost of reassociation comes from,
so this patch should speed up the already fast pass a bit.
Thanks to Dan for noticing this!
llvm-svn: 23214
Do not claim not to change the CFG. We do change the CFG to split critical
edges. This isn't causing us a problem now, but could likely do so in the
future.
llvm-svn: 22824
edge so that the code is not always executed for both operands. This
prevents LSR from inserting code into loops whose exit blocks contain
PHI uses of IV expressions (which are outside of loops). On gzip, for
example, we turn this ugly code:
.LBB_test_1: ; loopentry
add r27, r3, r28
lhz r27, 3(r27)
add r26, r4, r28
lhz r26, 3(r26)
add r25, r30, r28 ;; Only live if exiting the loop
add r24, r29, r28 ;; Only live if exiting the loop
cmpw cr0, r27, r26
bne .LBB_test_5 ; loopexit
into this:
.LBB_test_1: ; loopentry
or r27, r28, r28
add r28, r3, r27
lhz r28, 3(r28)
add r26, r4, r27
lhz r26, 3(r26)
cmpw cr0, r28, r26
beq .LBB_test_3 ; shortcirc_next.0
.LBB_test_2: ; loopentry.loopexit_crit_edge
add r2, r30, r27
add r8, r29, r27
b .LBB_test_9 ; loopexit
.LBB_test_2: ; shortcirc_next.0
...
blt .LBB_test_1
into this:
.LBB_test_1: ; loopentry
or r27, r28, r28
add r28, r3, r27
lhz r28, 3(r28)
add r26, r4, r27
lhz r26, 3(r26)
cmpw cr0, r28, r26
beq .LBB_test_3 ; shortcirc_next.0
.LBB_test_2: ; loopentry.loopexit_crit_edge
add r2, r30, r27
add r8, r29, r27
b .LBB_test_3 ; shortcirc_next.0
.LBB_test_3: ; shortcirc_next.0
...
blt .LBB_test_1
Next step: get the block out of the loop so that the loop is all
fall-throughs again.
llvm-svn: 22766
Instead, just update the BB in-place. This is both faster, and it prevents
split-critical-edges from shuffling the PHI argument list unnecessarily.
llvm-svn: 22765
into just Y. This often occurs when it separates loops that have collapsed loop
headers. This implements LoopSimplify/phi-node-simplify.ll
llvm-svn: 22746
For code like this:
void foo(float *a, float *b, int n, int stride_a, int stride_b) {
int i;
for (i=0; i<n; i++)
a[i*stride_a] = b[i*stride_b];
}
we now emit:
.LBB_foo2_2: ; no_exit
lfs f0, 0(r4)
stfs f0, 0(r3)
addi r7, r7, 1
add r4, r2, r4
add r3, r6, r3
cmpw cr0, r7, r5
blt .LBB_foo2_2 ; no_exit
instead of:
.LBB_foo_2: ; no_exit
mullw r8, r2, r7 ;; multiply!
slwi r8, r8, 2
lfsx f0, r4, r8
mullw r8, r2, r6 ;; multiply!
slwi r8, r8, 2
stfsx f0, r3, r8
addi r2, r2, 1
cmpw cr0, r2, r5
blt .LBB_foo_2 ; no_exit
loops with variable strides occur pretty often. For example, in SPECFP2K
there are 317 variable strides in 177.mesa, 3 in 179.art, 14 in 188.ammp,
56 in 168.wupwise, 36 in 172.mgrid.
Now we can allow indvars to turn functions written like this:
void foo2(float *a, float *b, int n, int stride_a, int stride_b) {
int i, ai = 0, bi = 0;
for (i=0; i<n; i++)
{
a[ai] = b[bi];
ai += stride_a;
bi += stride_b;
}
}
into code like the above for better analysis. With this patch, they generate
identical code.
llvm-svn: 22740
The termination condition actually wants to use the post-incremented value
of the loop, not a new indvar with an unusual base.
On PPC, for example, this allows us to compile
LoopStrengthReduce/exit_compare_live_range.ll to:
_foo:
li r2, 0
.LBB_foo_1: ; no_exit
li r5, 0
stw r5, 0(r3)
addi r2, r2, 1
cmpw cr0, r2, r4
bne .LBB_foo_1 ; no_exit
blr
instead of:
_foo:
li r2, 1 ;; IV starts at 1, not 0
.LBB_foo_1: ; no_exit
li r5, 0
stw r5, 0(r3)
addi r5, r2, 1
cmpw cr0, r2, r4
or r2, r5, r5 ;; Reg-reg copy, extra live range
bne .LBB_foo_1 ; no_exit
blr
This implements LoopStrengthReduce/exit_compare_live_range.ll
llvm-svn: 22699
* Teach this code to move allocas out of the loop when tail call eliminating
a call marked 'tail'. This implements TailCallElim/move_alloca_for_tail_call.ll
* Do not perform this transformation if a call is marked 'tail' and if there
are allocas that we cannot move out of the loop in #2. Doing so would increase
the stack usage of the function. This fixes PR615 and implements
TailCallElim/dont-tce-tail-marked-call.ll.
llvm-svn: 22690
BasicBlock's removePredecessor routine. This requires shuffling around
the definition and implementation of hasConstantValue from Utils.h,cpp into
Instructions.h,cpp
llvm-svn: 22664
that the symbolic evaluator is not always able to use subtraction to remove
expressions. This makes the code faster, and fixes the last crash on 178.galgel.
Finally, add a statistic to see how many phi nodes are inserted.
On 178.galgel, we get the follow stats:
2562 loop-reduce - Number of PHIs inserted
3927 loop-reduce - Number of GEPs strength reduced
llvm-svn: 22662
method.
* Fix a crash on 178.galgel, where we would insert expressions before PHI
nodes instead of into the PHI node predecessor blocks.
llvm-svn: 22657
for (i = 0; i < N; ++i)
A[i][foo()] = 0;
here we still want to strength reduce the A[i] part, even though foo() is
loop-variant.
This also simplifies some of the 'CanReduce' logic.
This implements Transforms/LoopStrengthReduce/ops_after_indvar.ll
llvm-svn: 22652
1. We only analyze instructions once, guaranteed
2. AnalyzeGetElementPtrUsers has been ripped apart and replaced with
something much simpler.
The next step is to handle expressions that are not all indvar+loop-invariant
values (e.g. handling indvar+loopvariant).
llvm-svn: 22649
Only emit one PHI node for IV uses with identical bases and strides (after
moving foldable immediates to the load/store instruction).
This implements LoopStrengthReduce/dont_insert_redundant_ops.ll, allowing
us to generate this PPC code for test1:
or r30, r3, r3
.LBB_test1_1: ; Loop
li r2, 0
stw r2, 0(r30)
stw r2, 4(r30)
bl L_pred$stub
addi r30, r30, 8
cmplwi cr0, r3, 0
bne .LBB_test1_1 ; Loop
instead of this code:
or r30, r3, r3
or r29, r3, r3
.LBB_test1_1: ; Loop
li r2, 0
stw r2, 0(r29)
stw r2, 4(r30)
bl L_pred$stub
addi r30, r30, 8 ;; Two iv's with step of 8
addi r29, r29, 8
cmplwi cr0, r3, 0
bne .LBB_test1_1 ; Loop
llvm-svn: 22635
map from instruction* to SCEVHandles. When we delete instructions, we have
to tell it about it. We would run into nasty cases where new instructions
were reallocated at old instruction addresses and get the old map values.
Bad bad bad :(
llvm-svn: 22632
consideration the case where a reference in an unreachable block could
occur. This fixes Transforms/SimplifyCFG/2005-08-01-PHIUpdateFail.ll,
something I ran into while bugpoint'ing another pass.
llvm-svn: 22584
SimplifyLibCalls probably has to be audited to make sure it does not make
this mistake elsewhere. Also, if this code knows that the type will be
unsigned, obviously one arm of this is dead.
Reid, can you take a look into this further?
llvm-svn: 22566
target data to decide which loop induction variables to strength reduce
and how to do so. This work is mostly by Chris Lattner, with tweaks by
me to get it working on some of MultiSource.
llvm-svn: 22558
Because instcombine has to scan the entire function when it starts up anyway,
we might as well do it in depth-first order so we can nuke unreachable code.
This fixes: Transforms/InstCombine/2005-07-07-DeadPHILoop.ll
llvm-svn: 22348
The optimization for locally used allocas was not safe for allocas that
were read before they were written. This change disables that optimization
in that case.
llvm-svn: 22318
is a mismatch in their character type pointers (i.e. fprintf() prints an
array of ubytes while fwrite() takes an array of sbytes).
We can probably do better than this (such as casting the ubyte to an
sbyte).
llvm-svn: 22310
* Check for availability of ffsll call in configure script
* Support ffs, ffsl, and ffsll conversion to constant value if the argument
is constant.
llvm-svn: 22027
This makes reassociate realize that loads should be treated as unmovable, and
gives distinct ranks to distinct values defined in the same basic block, allowing
reassociate to do its thing.
llvm-svn: 21783
of trying to do local reassociation tweaks at each level, only process an expression
tree once (at its root). This does not improve the reassociation pass in any real way.
llvm-svn: 21768
strlen(x) != 0 -> *x != 0
strlen(x) == 0 -> *x == 0
* Change nested statistics to use style of other LLVM statistics so that
only the name of the optimization (simplify-libcalls) is used as the
statistic name, and the description indicates which specific call is
optimized. Cuts down on some redundancy and saves a few bytes of space.
* Make note of stpcpy optimization that could be done.
llvm-svn: 21766
the result, turn signed shift rights into unsigned shift rights if possible.
This leads to later simplification and happens *often* in 176.gcc. For example,
this testcase:
struct xxx { unsigned int code : 8; };
enum codes { A, B, C, D, E, F };
int foo(struct xxx *P) {
if ((enum codes)P->code == A)
bar();
}
used to be compiled to:
int %foo(%struct.xxx* %P) {
%tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0 ; <uint*> [#uses=1]
%tmp.2 = load uint* %tmp.1 ; <uint> [#uses=1]
%tmp.3 = cast uint %tmp.2 to int ; <int> [#uses=1]
%tmp.4 = shl int %tmp.3, ubyte 24 ; <int> [#uses=1]
%tmp.5 = shr int %tmp.4, ubyte 24 ; <int> [#uses=1]
%tmp.6 = cast int %tmp.5 to sbyte ; <sbyte> [#uses=1]
%tmp.8 = seteq sbyte %tmp.6, 0 ; <bool> [#uses=1]
br bool %tmp.8, label %then, label %UnifiedReturnBlock
Now it is compiled to:
%tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0 ; <uint*> [#uses=1]
%tmp.2 = load uint* %tmp.1 ; <uint> [#uses=1]
%tmp.2 = cast uint %tmp.2 to sbyte ; <sbyte> [#uses=1]
%tmp.8 = seteq sbyte %tmp.2, 0 ; <bool> [#uses=1]
br bool %tmp.8, label %then, label %UnifiedReturnBlock
which is the difference between this:
foo:
subl $4, %esp
movl 8(%esp), %eax
movl (%eax), %eax
shll $24, %eax
sarl $24, %eax
testb %al, %al
jne .LBBfoo_2
and this:
foo:
subl $4, %esp
movl 8(%esp), %eax
movl (%eax), %eax
testb %al, %al
jne .LBBfoo_2
This occurs 3243 times total in the External tests, 215x in povray,
6x in each f2c'd program, 1451x in 176.gcc, 7x in crafty, 20x in perl,
25x in gap, 3x in m88ksim, 25x in ijpeg.
Maybe this will cause a little jump on gcc tomorrow :)
llvm-svn: 21715
This implements set.ll:test20.
This triggers 2x on povray, 9x on mesa, 11x on gcc, 2x on crafty, 1x on eon,
6x on perlbmk and 11x on m88ksim.
It allows us to compile these two functions into the same code:
struct s { unsigned int bit : 1; };
unsigned foo(struct s *p) {
if (p->bit)
return 1;
else
return 0;
}
unsigned bar(struct s *p) { return p->bit; }
llvm-svn: 21690
library function:
isdigit(chr) -> 0 or 1 if chr is constant
isdigit(chr) -> chr - '0' <= 9 otherwise
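Note that the non-constant form is only correct when the subtraction is
compared as an unsigned value, so characters below '0' wrap around and also
fail the test. A quick check of the equivalence in the "C" locale (my
illustration, not the IR the pass emits):
#include <cassert>
#include <cctype>
int main() {
  for (int c = 0; c < 256; ++c) {
    bool folded = (unsigned)(c - '0') <= 9u;    // the "chr - '0' <= 9" form
    assert(folded == (std::isdigit(c) != 0));
  }
  return 0;
}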
Although there are many calls to isdigit in llvm-test, most of them are
compiled away by macros, leaving only this:
2 MultiSource/Applications/hexxagon
llvm-svn: 21688
actual spec (int -> uint)
* Add the ability to get/cache the strlen function prototype.
* Make sure generated values are appropriately named for debugging purposes
* Add the SPrintFOptimization for 4 cases of sprintf optimization (illustrated below):
sprintf(str,cstr) -> llvm.memcpy(str,cstr) (if cstr has no %)
sprintf(str,"") -> store sbyte 0, str
sprintf(str,"%s",src) -> llvm.memcpy(str,src) (if src is constant)
sprintf(str,"%c",chr) -> store chr, str ; store sbyte 0, str+1
The sprintf optimization didn't fire as much as I had hoped:
2 MultiSource/Applications/SPASS
5 MultiSource/Benchmarks/McCat/18-imp
22 MultiSource/Benchmarks/Prolangs-C/TimberWolfMC
1 MultiSource/Benchmarks/Prolangs-C/assembler
6 MultiSource/Benchmarks/Prolangs-C/unix-smail
2 MultiSource/Benchmarks/mediabench/mpeg2/mpeg2dec
llvm-svn: 21679
Neither of these activated as many times as was hoped:
strchr:
9 MultiSource/Applications/siod
1 MultiSource/Applications/d
2 MultiSource/Prolangs-C/archie-client
1 External/SPEC/CINT2000/176.gcc/176.gcc
llvm.memset:
no hits
llvm-svn: 21669
strings passed to Statistic's constructor are not destructible. The stats
are printed during static destruction, and the SimplifyLibCalls module was
being destroyed before the statistics.
llvm-svn: 21661
type be obtained from a CallInst we're optimizing.
* Make it possible for getConstantStringLength to return the ConstantArray
that it extracts in case the content is needed by an Optimization.
* Implement the strcmp optimization
* Implement the toascii optimization
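The commit does not spell out the rewrites; as I understand them (an
assumption on my part), they are of this shape:
#include <cstring>
// toascii(c) keeps only the low 7 bits, so the call can be replaced by a
// bitwise AND.
int toascii_rewritten(int c) { return c & 0x7f; }
// strcmp with two constant-string operands has a result that is known at
// compile time, so the call can be folded to a constant.
int strcmp_foldable() { return std::strcmp("hello", "help"); }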
This pass now fires anywhere from several to many times in the following MultiSource
tests:
Applications/Burg - 7 (strcat,strcpy)
Applications/siod - 13 (strcat,strcpy,strlen)
Applications/spiff - 120 (exit,fputs,strcat,strcpy,strlen)
Applications/treecc - 66 (exit,fputs,strcat,strcpy)
Applications/kimwitu++ - 34 (strcmp,strcpy,strlen)
Applications/SPASS - 588 (exit,fputs,strcat,strcpy,strlen)
llvm-svn: 21626
sinh, cosh, etc.
* Make the name comparisons for the fp libcalls a little more efficient by
switching on the first character of the name before doing comparisons.
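A sketch of that dispatch shape (the function and the exact set of names are
assumptions of mine, not the commit's code):
#include <cstring>
bool isHandledFPName(const char *Name) {
  switch (Name[0]) {               // reject most names with one character test
  case 'c': return !std::strcmp(Name, "cos") || !std::strcmp(Name, "cosh");
  case 's': return !std::strcmp(Name, "sin") || !std::strcmp(Name, "sinh") ||
                   !std::strcmp(Name, "sqrt");
  case 't': return !std::strcmp(Name, "tan") || !std::strcmp(Name, "tanh");
  default:  return false;
  }
}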
llvm-svn: 21611
* Correct stale documentation in a few places
* Re-order the file to better associate things and reduce line count
* Make the pass thread safe by caching the Function* objects needed by the
optimizers in the pass object instead of globally.
* Provide the SimplifyLibCalls pass object to the optimizer classes so they
can access cached Function* objects and TargetData info
* Make sure the pass resets its cache if the Module passed to runOnModule
changes
* Rename CallOptimizer to LibCallOptimization. All the classes are named
*Optimization while the objects are *Optimizer.
* Don't cache Function* in the optimizer objects because they could be used
by multiple PassManager's running in multiple threads
* Add an optimization for strcpy which is similar to strcat
* Add a "TODO" list at the end of the file for ideas on additional libcall
optimizations that could be added (get ideas from other compilers).
Sorry for the huge diff. It's mostly reorganization of code. That won't
happen again, as I believe the design and infrastructure for this pass are
now done or close to it.
llvm-svn: 21589
call to them into an 'unreachable' instruction.
This triggers a bunch of times, particularly on gcc:
gzip: 36
gcc: 601
eon: 12
bzip: 38
llvm-svn: 21587
* MemCpyOptimization can only optimize a call if the 3rd and 4th arguments
are constants, and we weren't checking for that.
* The result of llvm.memcpy (and llvm.memmove) is void*, not sbyte*, so put
in a cast.
llvm-svn: 21570
* Have the SimplifyLibCalls pass acquire the TargetData and pass it down to
the optimization classes so they can use it to make better choices for
the signatures of functions, etc.
* Rearrange the code a little so the utility functions are closer to their
usage and keep the core of the pass near the top of the files.
* Adjust the StrLen pass to get/use the correct prototype depending on the
TargetData::getIntPtrType() result. The result of strlen is size_t which
could be either uint or ulong depending on the platform.
* Clean up some coding nits (cast vs. dyn_cast, remove redundant items from
a switch, etc.)
* Implement the MemMoveOptimization as a twin of MemCpyOptimization (they
only differ in name).
llvm-svn: 21569
named getConstantStringLength. This is the common part of StrCpy and
StrLen optimizations and probably several others, yet to be written. It
performs all the validity checks for looking at constant arrays that are
supposed to be null-terminated strings and then computes the actual
length of the string.
* Implement the MemCpyOptimization class. This just turns memcpy of 1, 2, 4
and 8 byte data blocks that are properly aligned on those boundaries into
a load and a store. Much more could be done here but alignment
restrictions and lack of knowledge of the target instruction set prevent
us from doing significantly more. That will have to be delegated to the
code generators as they lower llvm.memcpy calls.
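The rewrite at the C level (my own sketch, not the pass's code): a memcpy of
1, 2, 4 or 8 bytes between suitably aligned pointers is just a load and a
store of an integer of that width.
#include <cstdint>
#include <cstring>
// Before: an llvm.memcpy of 4 properly aligned bytes.
void copy4_before(void *Dst, const void *Src) { std::memcpy(Dst, Src, 4); }
// After: a single 32-bit load and a single 32-bit store.
void copy4_after(std::uint32_t *Dst, const std::uint32_t *Src) { *Dst = *Src; }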
llvm-svn: 21562
* Change signatures of OptimizeCall and ValidateCalledFunction so they are
non-const, allowing the optimization object to be modified. This is in
support of caching things used across multiple calls.
* Provide two functions for constructing and caching function types
* Modify the StrCatOptimization to cache Function objects for strlen and
llvm.memcpy so it doesn't regenerate them on each call site. Make sure
these are invalidated each time we start the pass.
* Handle both a GEP Instruction and a GEP ConstantExpr
* Add additional checks to make sure we really are dealing with an array of
sbyte and that all the element initializers are ConstantInt or
ConstantExpr that reduce to ConstantInt.
* Make sure the GlobalVariable is constant!
* Don't use ConstantArray::getString as it can fail and it doesn't give us
the right thing: we must check for null bytes in the middle of the array
(see the sketch below).
* Use llvm.memcpy instead of memcpy so we can factor alignment into it.
* Don't use void* types in signatures, replace with sbyte* instead.
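A sketch of the validity checks above, expressed over the raw initializer
bytes (the name and signature are mine; the real code inspects the
ConstantArray directly):
#include <cstddef>
bool constantStringLength(const char *Bytes, std::size_t NumBytes,
                          std::size_t &Len) {
  if (NumBytes == 0 || Bytes[NumBytes - 1] != '\0')
    return false;                   // must end in a NUL terminator
  for (std::size_t i = 0; i + 1 < NumBytes; ++i)
    if (Bytes[i] == '\0')
      return false;                 // no NUL in the middle of the array
  Len = NumBytes - 1;               // length excludes the terminator
  return true;
}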
llvm-svn: 21555
* Don't use std::string for the function names, const char* will suffice
* Allow each CallOptimizer to validate the function signature before
doing anything
* Repeatedly loop over the functions until an iteration produces
no more optimizations (see the sketch below). This allows one optimization
to insert a call that is then optimized by another optimization.
* Implement the ConstantArray portion of the StrCatOptimization
* Provide a template for the MemCpyOptimization
* Make ExitInMainOptimization split the block, not delete everything
after the return instruction.
(This covers revision 1.3 and 1.4, as the 1.3 comments were botched)
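A minimal sketch of the "loop until an iteration changes nothing" driver
described above (the shape is mine, not the pass's actual code):
// Hypothetical: one sweep over all interesting call sites; returns true if
// it rewrote anything.
bool runOnePassOverCalls();

void runToFixpoint() {
  bool Changed = true;
  while (Changed)                       // a call inserted by one optimization
    Changed = runOnePassOverCalls();    // is picked up on the next sweep
}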
llvm-svn: 21548
* Fix comments at top of file
* Change algorithm for running the call optimizations from n*n to something
closer to n.
* Use a hash_map to store and look up the optimizations, since there will
eventually (or potentially) be a large number of them. This makes lookup
by function name O(1); see the sketch below. Each CallOptimizer now has a
std::string member named func_name that tracks the name of the function
it applies to. It is this string that is entered into the hash_map
for fast comparison against the function names encountered in the module.
* Cleanup some style issues pertaining to iterator invalidation
* Don't pass the Function pointer to the OptimizeCall function because if
the optimization needs it, it can get it from the CallInst passed in.
* Add the skeleton for a new CallOptimizer, StrCatOptimizer which will
eventually replace strcat's of constant strings with direct copies.
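A sketch of the lookup structure described above, using std::unordered_map
in place of the old SGI hash_map (the surrounding names are mine):
#include <string>
#include <unordered_map>
struct CallOptimizer {                 // stand-in for the real class
  virtual ~CallOptimizer() = default;
};
// One table maps each library function name to its optimizer, so dispatch
// at a call site is a single hash lookup instead of a scan of all optimizers.
std::unordered_map<std::string, CallOptimizer *> Optimizations;
CallOptimizer *lookupOptimizer(const std::string &FuncName) {
  auto It = Optimizations.find(FuncName);   // expected O(1) lookup by name
  return It == Optimizations.end() ? nullptr : It->second;
}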
llvm-svn: 21526
calls. The pass visits all external functions in the module and determines
if such function calls can be optimized. The optimizations are specific to
the library calls involved. This initial version only optimizes calls to
exit(3) when they occur in main(): it changes them to ret instructions.
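The shape of the rewrite in C terms (the two functions below are stand-ins
for main before and after the pass; my illustration, not the pass's code):
#include <cstdlib>
void do_work() {}                  // hypothetical body for main
int main_before() {
  do_work();
  std::exit(3);                    // exit() called directly from main()
}
int main_after() {
  do_work();
  return 3;                        // the call becomes a plain return
}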
llvm-svn: 21522
Completely rework the 'setcc (cast x to larger), y' code. This code has
the advantage of implementing setcc.ll:test19 (being more general than
the previous code) and being correct in all cases.
This allows us to unxfail 2004-11-27-SetCCForCastLargerAndConstant.ll,
and close PR454.
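The commit does not enumerate the cases, but one representative fold in this
family (my example, not from the commit) is a zero-extended 8-bit value
compared against a constant that cannot fit in 8 bits, which has a known
result:
#include <cassert>
bool cmp(unsigned char x) {
  unsigned int wide = x;           // "cast x to larger"
  return wide == 300u;             // 300 does not fit in 8 bits: always false
}
int main() {
  for (int i = 0; i < 256; ++i)
    assert(!cmp(static_cast<unsigned char>(i)));
  return 0;
}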
llvm-svn: 21491