This patch adds a new SSA MI pass that runs on little-endian PPC64
code with VSX enabled. Loads and stores of 4x32 and 2x64 vectors
without alignment constraints are accomplished for little-endian using
lxvd2x/xxswapd and xxswapd/stxvd2x. The existence of the additional
xxswapd instructions hurts performance in comparison with big-endian
code, but they are necessary in the general case to support correct
semantics.
However, the general case does not apply to most vector code. Many
vector instructions are lane-insensitive; they do not "care" which
lanes the parallel computations are performed within, provided that
the resulting data is stored into the correct locations. Thus this
pass looks for computations that perform only lane-insensitive
operations, and removes the unnecessary swaps from loads and stores in
such computations.
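As a rough illustration (hypothetical IR, not taken from the new test case), a
computation such as
%v = load <4 x i32>, <4 x i32>* %p
%w = add <4 x i32> %v, %v
store <4 x i32> %w, <4 x i32>* %q
is lane-insensitive end to end: whichever lanes the add is performed in, the
store writes the bytes back where the load found them, so both xxswapd
instructions around the lxvd2x/stxvd2x pair can be dropped.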
Future improvements will allow computations using certain
lane-sensitive operations to also be optimized in this manner, by
modifying the lane-sensitive operations to account for the permuted
order of the lanes. However, this patch only adds the infrastructure
to permit this; no lane-sensitive operations are optimized at this
time.
This code is heavily exercised by the various vectorizing applications
in the projects/test-suite tree. For the time being, I have only added
one simple test case to demonstrate what the pass is doing. Although
it is quite simple, it provides coverage for much of the code,
including the special case handling of copies and subreg-to-reg
operations feeding the swaps. I plan to add additional tests in the
future as I fill in more of the "special handling" code.
Two existing tests were affected, because they expected the swaps to
be present, but they are now removed.
llvm-svn: 235910
Use a loop instruction with a constant extender for a hardware
loop instruction that is too far away from the start of the loop.
This is cheaper than changing the SA register value.
Differential Revision: http://reviews.llvm.org/D9262
llvm-svn: 235882
Summary:
Changed the warning message to show the current value of $at, similar to what clang does for typedefs, and renamed warnIfAssemblerTemporary to a more descriptive name.
I also changed the type of variables which store registers from int to unsigned, updated the relevant test and tried to make the related comments clearer.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8479
llvm-svn: 235881
This reapplies r235194, which was reverted in r235495 because it was causing a
failure in our out-of-tree buildbots for MIPS. With the sign-extension patch
in r235718, this patch doesn't cause any problems anymore.
llvm-svn: 235878
Summary:
When used, it is substituted with the number of .macro instantiations we've done up to that point in time.
So if this is the 1st time we've instantiated a .macro (any .macro, regardless of name), \@ will instantiate to 0, if it's the 2nd .macro instantiation, it will instantiate to 1 etc.
It can only be used inside a .macro definition, an .irp definition or an .irpc definition (those last 2 uses are undocumented).
Reviewers: echristo, rafael
Reviewed By: rafael
Subscribers: dsanders, llvm-commits
Differential Revision: http://reviews.llvm.org/D9197
llvm-svn: 235862
Summary:
This patch adds constant folding of the insertelement instruction to an undef value when the index operand is a constant that is not less than the vector size, or is undef.
InstCombine does not support this case, but I'm happy to add it there also if this change is accepted.
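For illustration, a minimal sketch (hypothetical constants) of the two cases
that now fold:
; index 4 is out of range for a <4 x i32>, so this folds to undef
%a = insertelement <4 x i32> <i32 1, i32 2, i32 3, i32 4>, i32 7, i32 4
; an undef index folds to undef as well
%b = insertelement <4 x i32> <i32 1, i32 2, i32 3, i32 4>, i32 7, i32 undef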
Test Plan: Unittests and regression tests for ConstProp pass.
Reviewers: majnemer
Reviewed By: majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9287
llvm-svn: 235854
Patch to allow int8 vectors to be multiplied on the SSE unit instead of being scalarized.
The patch sign extends the i8 lanes to i16, uses the SSE2 pmullw multiplication instruction, then packs the lower byte from each result.
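In IR terms, the change applies to multiplies such as this hypothetical one:
%prod = mul <16 x i8> %a, %b
; per 8-element half: sign-extend the i8 lanes to <8 x i16>, multiply the
; halves with pmullw, then pack the low byte of each i16 product back into
; a <16 x i8> result.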
Differential Revision: http://reviews.llvm.org/D9115
llvm-svn: 235837
There can be various constant pointers in the IR which do not get relocated at a safepoint. One example is the address of a global variable. Another example is a pointer created via inttoptr. Note that the optimizer itself likes to create such inttoptrs when locally propagating constants through dynamically dead code.
To deal with this, we need to exclude uses of constants from contributing to the liveness of a safepoint which might reach that use. At some later date, it might be worth exploring what could be done to support the relocation of various special types of "constants", but that's future work.
Differential Revision: http://reviews.llvm.org/D9236
llvm-svn: 235821
The llvm.frameescape() intrinsic is not a real call. The intrinsic can only exist in the entry block. Inserting a gc.statepoint() before llvm.frameescape() may split the entry block and push the intrinsic out of the entry block.
Patch by: Swaroop.Sridhar@microsoft.com
Differential Revision: http://reviews.llvm.org/D8910
llvm-svn: 235820
This is a follow-on to D8833 (insertps optimization when the zero mask is not used).
In this patch, we check for the case where the zmask is used, but both input vectors
to the insertps intrinsic are the same operand or the zmask overrides the destination
lane. This lets us replace the 2nd shuffle input operand with the zero vector.
Differential Revision: http://reviews.llvm.org/D9257
llvm-svn: 235810
Add serialization support for function metadata attachments (added in
r235783). The syntax is:
define @foo() !attach !0 {
Metadata attachments are only allowed on functions with bodies. Since
they come before the `{`, they're not really part of the body; since
they require a body, they're not really part of the header. In
`LLParser` I gave them a separate function called from `ParseDefine()`,
`ParseOptionalFunctionMetadata()`.
In bitcode, I'm using the same `METADATA_ATTACHMENT` record used by
instructions. Instruction metadata attachments are included in a
special "attachment" block at the end of a `Function`. The attachment
records are laid out like this:
InstID (KindID MetadataID)+
Note that these records always have an odd number of fields. The new
code takes advantage of this to recognize function attachments (which
don't need an instruction ID):
(KindID MetadataID)+
This means we can use the same attachment block already used for
instructions.
This is part of PR23340.
llvm-svn: 235785
When using bit tests for hole checks, we call AddPredecessorToBlock to give the
phi node a value from the bit test block. This would break if we've
previously called removePredecessor on the default destination because the
switch is fully covered.
Test case by Mark Lacey.
llvm-svn: 235771
This introduces an intrinsic called llvm.eh.exceptioncode. It is lowered
by copying the EAX value live into whatever basic block it is called
from. Obviously, this only works if you insert it late during codegen,
because otherwise mid-level passes might reschedule it.
llvm-svn: 235768
Same as r235145 for the call instruction - the justification, tradeoffs,
etc are all the same. The conversion script worked the same without any
false negatives (after replacing 'call' with 'invoke').
llvm-svn: 235755
Check that `@main` is calling `@foo2` (the renamed internal function),
not the `@foo` with external linkage that's been pulled in from the
override file.
llvm-svn: 235730
Summary:
Fixes a bug in the NVPTX codegen. The code used to miss necessary "generic()"
on aggregates of addrspacecasts.
Test Plan: addrspacecast-gvar.ll
Reviewers: eliben, jholewinski
Reviewed By: jholewinski
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D9130
llvm-svn: 235689
Match binutils by supporting the optional register name prefix for new vector
registers ("vs" for VSX registers and "q" for QPX registers).
llvm-svn: 235665
Add assembler/disassembler support for dcbt/dcbtst (and aliases) with the hint
field specified (non-zero). Unfortunately, the syntax for this instruction is
special in that it differs for server vs. embedded cores:
dcbt ra, rb, th [server]
dcbt th, ra, rb [embedded]
where th can be omitted when it is 0. dcbtst is the same. Thus we need to play
games in the parser and the printer to flip the operands around on the embedded
cores. We'll use the server syntax as the default (binutils currently uses the
embedded form by default, but IBM is changing that).
We also stop marking dcbtst as having unmodeled side effects (this is not
necessary, it is just a hint like dcbt -- noticed by inspection, so no separate
test case).
llvm-svn: 235657
We were asserting on code like this:
extern "C" unsigned long _exception_code();
void might_crash(unsigned long);
void foo() {
__try {
might_crash(0);
} __except(1) {
might_crash(_exception_code());
}
}
Gtest and many other libraries get the exception code from the __except
block. What's supposed to happen here is that EAX is live into the
__except block, and it contains the exception code. Eventually we'll
represent that as a use of the landingpad ehptr value, but for now we
can replace it with undef.
llvm-svn: 235649
Do not remove copies that are useful after breaking some hardware dependencies.
In other words, handle this kind of situation conservatively by assuming that
reg2 is redefined by the undef flag:
reg1 = copy reg2
= inst reg2<undef>
reg2 = copy reg1
Copy propagation used to remove the last copy.
This is incorrect because the undef flag on reg2 in inst allows later
passes to put whatever trashed value in reg2 they find helpful.
In practice we end up with this code:
reg1 = copy reg2
reg2 = 0
= inst reg2<undef>
reg2 = copy reg1
This fixes PR21743.
llvm-svn: 235647
When the base register index of the vector plus the constant offset
was less than zero, we were passing the wrong base register to the indirect
addressing instruction.
In this case, we need to set the base register to v0 and then add
the computed (negative) index to m0.
llvm-svn: 235641
The order in which branches appear in ImmBranches is approximately their
order within the function body. By visiting later branches first, we reduce
the distance between earlier forward branches and their targets, making it
more likely that the cbn?z optimization, which can only apply to forward
branches, will succeed for those earlier branches.
Differential Revision: http://reviews.llvm.org/D9185
llvm-svn: 235640
In particular, this preserves the kill flag, which allows the Thumb2 cbn?z
optimization to be applied in cases where a branch has been re-created after
the live variables analysis pass, e.g. by the machine block placement pass.
This appears to be low risk; a number of other targets seem to already be
doing something similar, e.g. AArch64, PowerPC.
Differential Revision: http://reviews.llvm.org/D9184
llvm-svn: 235639
This allows the constant island pass to lower these branches to cbn?z
instructions, resulting in a shorter instruction sequence.
Differential Revision: http://reviews.llvm.org/D9183
llvm-svn: 235638
This makes it more likely that we can use the 16-bit push and pop instructions
on Thumb-2, saving around 4 bytes per function.
Differential Revision: http://reviews.llvm.org/D9165
llvm-svn: 235637
This appears to have been introduced back in r76698 as part of an unrelated
change. I can find no official ARM documentation stating that Thumb-2 functions
require 4-byte alignment; in fact, ARM documentation appears to contradict
this (see, e.g., ARM Architecture Reference Manual Thumb-2 Supplement,
section 2.6.1: "Thumb-2 enforces 16-bit alignment on all instructions.").
Also remove code that sets alignment for ARM functions, which is redundant
with code in the MachineFunction constructor, and remove the hidden
-arm-align-constant-islands flag, which has been enabled by default since
r146739 (Dec 2011) and has probably received sufficient testing by now.
Differential Revision: http://reviews.llvm.org/D9138
llvm-svn: 235636
Specifically, if a pointer accesses different underlying objects in each
iteration, don't look through the phi node defining the pointer.
The motivating case is the underlying-objects-2.ll testcase. Consider
the loop nest:
int **A;
for (i)
for (j)
A[i][j] = A[i-1][j] * B[j]
This loop is transformed by Load-PRE to stash away A[i] for the next
iteration of the outer loop:
Curr = A[0]; // Prev_0
for (i: 1..N) {
Prev = Curr; // Prev = PHI (Prev_0, Curr)
Curr = A[i];
for (j: 0..N)
Curr[j] = Prev[j] * B[j]
}
Since A[i] and A[i-1] are likely to be independent pointers,
getUnderlyingObjects should not assume that Curr and Prev share the same
underlying object in the inner loop.
If it did we would try to dependence-analyze Curr and Prev and the
analysis of the corresponding SCEVs would fail with non-constant
distance.
To fix this, the getUnderlyingObjects API is extended with an optional
LoopInfo parameter. This is effectively what controls whether we want
the above behavior or the original. Currently, I only changed to use
this approach for LoopAccessAnalysis.
The other testcase is to guard the opposite case where we do want to
look through the loop PHI. If we step through an array by incrementing
a pointer, the underlying object is the incoming value of the phi as the
loop is entered.
Fixes rdar://problem/19566729
llvm-svn: 235634
Summary:
We pick this order because SeparateConstOffsetFromGEP may create more
opportunities for SLSR.
Test Plan:
reassociate-geps-and-slsr.ll
no performance regression on internal benchmarks
Reviewers: meheff
Subscribers: llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D9230
llvm-svn: 235632
TableGen had been nicely generating code to print a number of instructions using
shorter aliases (and PowerPC has plenty of short mnemonics), but we were not
calling it. For some of the aliases we support in the parser, TableGen can't
infer the "inverse" alias relationship, so there is still more to do.
Thus, after some hours of updating test cases...
llvm-svn: 235616
Summary:
Constant stores of f16 vectors can create NvCast nodes from various
operand types to v4f16 or v8f16 depending on patterns in the stored
constants. This patch adds nvcast rules with v4f16 and v8f16 values.
AArch64ISelLowering::LowerBUILD_VECTOR has the details on which constant
patterns generate the nvcast nodes.
Reviewers: jmolloy, srhines, ab
Subscribers: rengolin, aemerson, llvm-commits
Differential Revision: http://reviews.llvm.org/D9201
llvm-svn: 235610
Summary:
Set operation action for SINT_TO_FP and UINT_TO_FP nodes with v4i32,
v8i8, v8i16 inputs to allow promotion of v4f16 results.
Add tests for sitofp and uitofp for vec4, vec8, vec16, and i8, i16, i32,
and i64 vectors. Only missing tests are for v16i8 and v16i16 as the
shift operations are too complicated to write a proper check sequence.
The conversions from v4i64 to v4f16 do not depend on this patch - v4i64
is split and the conversion gets handled while lowering v2i64. I am
adding a test here for completeness.
Reviewers: aemerson, rengolin, ab, jmolloy, srhines
Subscribers: rengolin, aemerson, llvm-commits
Differential Revision: http://reviews.llvm.org/D9166
llvm-svn: 235609
Third time's the charm. The previous commit was reverted as a
reverse for-loop in SelectionDAGBuilder::lowerWorkItem did 'I--'
on an iterator at the beginning of a vector, causing asserts
when using debugging iterators. This commit fixes that.
llvm-svn: 235608
Summary:
Make sure the abbrev operands are valid and that we can read/skip them
afterwards.
Bug found with AFL fuzz.
Reviewers: rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9030
llvm-svn: 235595
Patch to remove extra bitcasts from shuffles, this is often a legacy of XformToShuffleWithZero being used to combine bitmaskings (of float vectors bitcast to integer vectors) into shuffles: bitcast(shuffle(bitcast(s0),bitcast(s1))) -> shuffle(s0,s1)
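Written as IR for readability (the combine itself runs on DAG nodes, and the
types and mask here are hypothetical), the pattern is:
%b0 = bitcast <4 x float> %s0 to <4 x i32>
%b1 = bitcast <4 x float> %s1 to <4 x i32>
%sh = shufflevector <4 x i32> %b0, <4 x i32> %b1, <4 x i32> <i32 0, i32 5, i32 2, i32 7>
%r = bitcast <4 x i32> %sh to <4 x float>
which can be replaced by a single shuffle of %s0 and %s1 with the same mask.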
Differential Revision: http://reviews.llvm.org/D9097
llvm-svn: 235578
This is a re-commit of r235101, which also fixes the problems with the previous patch:
- Switches with only a default case and non-fallthrough were handled incorrectly
- The previous patch tickled a bug in PowerPC Early-Return Creation which is fixed here.
> This is a major rewrite of the SelectionDAG switch lowering. The previous code
> would lower switches as a binary tree, discovering clusters of cases
> suitable for lowering by jump tables or bit tests as it went along. To increase
> the likelihood of finding jump tables, the binary tree pivot was selected to
> maximize case density on both sides of the pivot.
>
> By not selecting the pivot in the middle, the binary trees would not always
> be balanced, leading to performance problems in the generated code.
>
> This patch rewrites the lowering to search for clusters of cases
> suitable for jump tables or bit tests first, and then builds the binary
> tree around those clusters. This way, the binary tree will always be balanced.
>
> This has the added benefit of decoupling the different aspects of the lowering:
> tree building and jump table or bit tests finding are now easier to tweak
> separately.
>
> For example, this will enable us to balance the tree based on profile info
> in the future.
>
> The algorithm for finding jump tables is quadratic, whereas the previous algorithm
> was O(n log n) for common cases, and quadratic only in the worst-case. This
> doesn't seem to be major problem in practice, e.g. compiling a file consisting
> of a 10k-case switch was only 30% slower, and such large switches should be rare
> in practice. Compiling e.g. gcc.c showed no compile-time difference. If this
> does turn out to be a problem, we could limit the search space of the algorithm.
>
> This commit also disables all optimizations during switch lowering in -O0.
>
> Differential Revision: http://reviews.llvm.org/D8649
llvm-svn: 235560
Only clear out the NSW/NUW flags if we are optimizing 'add'/'sub' while
taking advantage that the sign bit is not set. We do this optimization
to further shrink the mask but shrinking the mask isn't NSW/NUW
preserving in this case.
llvm-svn: 235558
This removes the -sehprepare flag and makes __C_specific_handler
functions always use WinEHPrepare.
This was tested by building all of chromium_builder_tests and running a
few tests that use SEH, but if something breaks, we can revert this.
llvm-svn: 235557
In particular, this handles SSA values that are live *out* of a handler.
The existing code only handles values that are live *in* to a handler.
It also handles phi nodes in the block where normal control should
resume after the end of a catch handler. When EH return points have phi
nodes, we need to split the return edge. It is impossible for phi
elimination to emit copies in the previous block if that block gets
outlined. The indirectbr that we leave in the function is only notional,
and is eliminated from the MachineFunction CFG early on.
Reviewers: majnemer, andrew.w.kaylor
Differential Revision: http://reviews.llvm.org/D9158
llvm-svn: 235545
An nsw/nuw operation relies on the values feeding into it to not
overflow if 'poison' is not to be produced. This means that
optimizations which make modifications to the bottom of a chain (like
SimplifyDemandedBits) must strip out nsw/nuw if they cannot ensure that
they will be preserved.
This fixes PR23309.
llvm-svn: 235544
Summary:
Remove the CHECK-DAG calls introduced in r235341, and add a comment that
this test may break due to scheduling variations.
This patch completes the fix discussed in http://reviews.llvm.org/D8804
Reviewers: dsanders, srhines
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9178
llvm-svn: 235530
ARM32 ELF R_ARM_V4BX relocation format is a special relocation type
that records the location of an ARMv4t BX instruction to enable a
static linker to generate ARMv4 compatible instructions. This
relocation does not contain a reference symbol.
This patch enables its creation by removing the requirement of a
relocation symbol target in ELFState<ELFT>::writeSectionContent.
llvm-svn: 235513
An assert was triggered when attempting to create a new SCEV
with operands of different types in the visitAddRecExpr. In this
test case, the operand types of the numerator and denominator
are different. The SCEV division code should generate a
conservative answer when this happens.
Differential Revision: http://reviews.llvm.org/D9021
llvm-svn: 235511
This fixes a regression introduced at revision 218263.
On AVX, if we optimize for size, a splat build_vector of a load
is lowered into a VBROADCAST node. This is done even if the value type of the
splat build_vector node is v2i64.
Since AVX doesn't support v2f64/v2i64 broadcasts, revision 218263 added two
extra tablegen patterns to allow selecting a VMOVDDUPrm from an X86VBroadcast
where the scalar element comes from a loadi64/loadf64.
However, revision 218263 forgot to add an extra fallback pattern for the case
where we have a X86VBroadcast of a loadi64 with multiple uses.
This patch adds the missing tablegen pattern in X86InstrSSE.td.
This patch also adds an extra test to 'splat-for-size.ll' to verify that ISel
doesn't crash with a 'fatal error in the backend' due to a missing AVX pattern
to select v2i64 X86ISD::BROADCAST nodes.
llvm-svn: 235509
This turned up after r235333, but was a pre-existing bug. The optimization
which transforms select(c, load, load) into a load of a select of the addresses
does not handle indexed loads (pre/post inc/dec). However, it did not check for
them either, leading to a crash if it tried to transform one of them.
llvm-svn: 235497
Enough concerns were raised that this optimization is pessimising some code patterns.
The obvious fix, to add a Reassociate run afterwards, causes even more pessimisation in some cases due to fewer complex addressing modes being matched. As there isn't a trivial fix for this, backing this out by default until someone gets a chance to fix the addressing mode matcher.
llvm-svn: 235491
X86 backend.
The code generated for symbolic targets is identical to the code generated for
constant targets, except that a relocation is emitted to fix up the actual
target address at link-time. This allows IR and object files containing
patchpoints to be cached across JIT-invocations where the target address may
change.
llvm-svn: 235483
Add a flag to lib/Linker (and `llvm-link`) to override linkage rules.
When set, the functions in the source module *always* replace those in
the destination module.
The `llvm-link` option is `-override=abc.ll`. All the "regular" modules
are loaded and linked first, followed by the `-override` modules. This
is useful for debugging workflows where some subset of the module (e.g.,
a single function) is extracted into a separate file where it's
optimized differently, before being merged back in.
Patch by Luqman Aden!
llvm-svn: 235473
With SSE2, we can generate a 'movq' or other 64-bit store op on a 32-bit system
even though 64-bit integers are not legal types.
So instead of producing this:
pshufd $229, %xmm0, %xmm1 ## xmm1 = xmm0[1,1,2,3]
movd %xmm0, (%eax)
movd %xmm1, 4(%eax)
We can do:
movq %xmm0, (%eax)
This is a fix for the problem noted in D7296.
Differential Revision: http://reviews.llvm.org/D9134
llvm-svn: 235460
We should also teach the inliner to collapse framerecover of
frameaddress of the current frame down to an alloca, but that can happen
later.
llvm-svn: 235459
https://llvm.org/bugs/show_bug.cgi?id=23163.
Gep merging sometimes behaves like a reverse CSE/LICM optimization,
which has a negative impact on performance. In this patch we restrict
gep merging to happen only when the indexes to be merged are both constants,
which ensures such a merge is always beneficial.
The patch makes gep merging only happen in very restrictive cases.
It is possible that some analysis/optimization passes rely on the merged
geps to get better results, and we haven't noticed them yet. We will be ready
to further improve it once we see such cases.
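A minimal sketch of the case that is still merged (hypothetical values):
%g1 = getelementptr i32, i32* %p, i64 4
%g2 = getelementptr i32, i32* %g1, i64 3
; both indexes are constants, so folding into a single GEP with index 7 is
; always profitable; merges involving variable indexes are no longer done.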
Differential Revision: http://reviews.llvm.org/D8911
llvm-svn: 235455
https://llvm.org/bugs/show_bug.cgi?id=23163.
Gep merging sometimes behaves like a reverse CSE/LICM optimization,
which has a negative impact on performance. In this patch we restrict
gep merging to happen only when the indexes to be merged are both constants,
which ensures such a merge is always beneficial.
The patch makes gep merging only happen in very restrictive cases.
It is possible that some analysis/optimization passes rely on the merged
geps to get better results, and we haven't noticed them yet. We will be ready
to further improve it once we see such cases.
Differential Revision: http://reviews.llvm.org/D9007
llvm-svn: 235451
MemIntrinsic::getDest() looks through pointer casts, and using it
directly when building the new GEP+memset results in stuff like:
%0 = getelementptr i64* %p, i32 16
%1 = bitcast i64* %0 to i8*
call ..memset(i8* %1, ...)
instead of the correct:
%0 = bitcast i64* %p to i8*
%1 = getelementptr i8* %0, i32 16
call ..memset(i8* %1, ...)
Instead, use getRawDest, which just gives you the i8* value.
While there, use the memcpy's dest, as it's live anyway.
In most cases, when the optimization triggers, the memset and memcpy
sizes are the same, so the built memset is 0-sized and eliminated.
The problem occurs when they're different.
Fixes a regression caused by r235232: PR23300.
llvm-svn: 235419
Summary:
After we rewrite a candidate, the instructions used by the old form may
become unused. This patch cleans up these unused instructions so that we
needn't run DCE after SLSR.
Test Plan: removed -dce in all the SLSR tests
Reviewers: broune, meheff
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9101
llvm-svn: 235410
Summary: so that we needn't run DCE after this pass.
Test Plan: removed -dce from the commandline in split-gep.ll and split-gep-and-gvn.ll
Reviewers: meheff
Subscribers: llvm-commits, HaoLiu, hfinkel, jholewinski
Differential Revision: http://reviews.llvm.org/D9096
llvm-svn: 235409
Keep the old SEH fan-in lowering on by default for now, since projects
rely on it. This will make it easy to test this change with a simple
flag flip.
llvm-svn: 235399
There doesn't seem to be a reason to perform this target ISD node matching
in a DAGCombine; moving it to lowering fixes PR23296.
Differential Revision: http://reviews.llvm.org/D9137
llvm-svn: 235394
Summary:
This directive is exactly the same as .asciz, except it's only used by MIPS.
It is used to store null terminated strings in object files.
Reviewers: rafael, dsanders, echristo
Reviewed By: dsanders, echristo
Subscribers: echristo, llvm-commits
Differential Revision: http://reviews.llvm.org/D7530
llvm-svn: 235382
Summary:
The 64-bit version of the variable shift instructions uses the
shift_rotate_reg class which uses a GPR32Opnd to specify the variable
shift amount. With this patch we avoid the generation of a redundant
SLL instruction for the variable shift instructions in 64-bit targets.
Reviewers: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7413
llvm-svn: 235376
This is an updated version of Chandler's patch D7402 that got accepted but never committed, and has bit-rotted a bit since.
I've updated the execution domain declarations to match the approach of the packed templates and also added some extra scalar unary tests.
Differential Revision: http://reviews.llvm.org/D9095
llvm-svn: 235372
Fixed issue with the combine of CONCAT_VECTOR of 2 BUILD_VECTOR nodes - the optimisation wasn't ensuring that the scalar operands of both nodes were the same type/size for implicit truncation.
Test case spotted by Patrik Hagglund.
llvm-svn: 235371
Summary:
This fixes http://llvm.org/bugs/show_bug.cgi?id=16439.
This is one possible way to approach this. The other would be to split InL>>(nbits-Amt) into (InL>>(nbits-1-Amt))>>1, which is also valid since we only need to care about Amt up to nbits-1. It's hard to tell which one is better since the shift might be expensive if the type at this stage of expansion is not yet a legal machine integer type, whereas comparisons with zero are relatively cheap at all sizes, but more expensive than a shift if the shift is on a legal machine type.
Patch by Keno Fischer!
Test Plan: regression test from http://reviews.llvm.org/D7752
Reviewers: chfast, resistor
Reviewed By: chfast, resistor
Subscribers: sanjoy, resistor, chfast, llvm-commits
Differential Revision: http://reviews.llvm.org/D4978
llvm-svn: 235370
X86ISD::ADDSUB, X86ISD::(F)HADD, X86ISD::(F)HSUB should not be selected
if the operand types do not match the result type because vector type
legalization cannot deal with this for custom nodes.
A testcase for X86ISD::ADDSUB is attached. I could not create a testcase for
the FHADD/FHSUB cases because of: https://llvm.org/bugs/show_bug.cgi?id=23296
Differential Revision: http://reviews.llvm.org/D9120
llvm-svn: 235367
Summary:
Bundle alignment requires that the functions always start at an aligned address.
Usually this is ensured by the compiler, but assembly code does not always
begin with a .align directive.
This change ensures that sections get the correct alignment if they contain
any instructions and bundling is enabled. (It also makes LLVM match the
behavior of GNU as).
Differential Revision: http://reviews.llvm.org/D9131
llvm-svn: 235365
Summary:
In the f16-promote test, make the checks for native conversion instructions
similar to the libcall checks:
- Remove hard coded register names
- Do not check exact instruction sequences.
This fixes test flakiness due to non-determinism in instruction
scheduling and register allocation. I also fixed a few minor things in
the CHECK-LIBCALL checks.
I'll try to find a way to check that unnecessary loads, stores, or
conversions don't happen.
Reviewers: mzolotukhin, srhines, ab
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9112
llvm-svn: 235363
Summary:
This patch adds two flags to `bugpoint`: "-replace-funcs-with-null" and "-disable-pass-list-reduction".
When "-replace-funcs-with-null" is specified, bugpoint will, instead of simply deleting function bodies, replace all uses of functions and then will delete functions completely from the test module, correctly handling aliasing and @llvm.used && @llvm.compiler.used. This part was conceived while trying to debug the PNaCl IR simplification passes, which don't allow undefined functions (ie no declarations).
With "-disable-pass-list-reduction", bugpoint won't try to reduce the set of passes causing the "crash". This is needed in cases where one is trying to debug an issue inside the PNaCl IR simplification passes which is causing an PNaCl ABI verification error, for example.
Reviewers: jfb
Reviewed By: jfb
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D8555
llvm-svn: 235362
Summary:
Set operation action for FP16 conversion opcodes, so the Op legalizer
can choose the gnu_* libcalls for Mips.
Set LoadExtAction and TruncStoreAction for f16 scalars and vectors to
prevent (fpext (load )) and (store (fptrunc)) from getting combined into
unsupported operations.
Added test cases to test that these operations are handled correctly
for f16 scalars and vectors. This patch depends on
http://reviews.llvm.org/D8755.
Reviewers: srhines
Subscribers: llvm-commits, ab
Differential Revision: http://reviews.llvm.org/D8804
llvm-svn: 235341
This commit fixes the code which adds lifetime markers in InlineFunction to skip
zero-sized allocas instead of asserting on them.
rdar://problem/20531155
llvm-svn: 235312
n/1 generates a quotient equal to n and a remainder of 0.
If this case is not recognized, then the SCEV divide() function
can return a remainder that is greater than or equal to the
denominator, which means the delinearized subscripts for the
test case will be incorrect.
Differential Revision: http://reviews.llvm.org/D9003
llvm-svn: 235311
We have to avoid converting a reference to a global into a reference to a local,
but it is fine to look past a local.
Patch by Vasileios Kalintiris.
I just moved the comment and added the test.
llvm-svn: 235300
This fixes a regression introduced at revision 231243.
The target-independent selection algorithm in FastISel knows how to select
a SINT_TO_FP if the target is SSE but not AVX. That is because on X86, the
tablegen'd 'fastEmit' functions know how to select CVTSI2SSrr and CVTSI2SDrr.
Method X86FastISel::X86SelectSIToFP was therefore working under the
wrong assumption that the target was AVX. That assumption was incorrect since
we can have a target that is neither AVX nor SSE.
So, rather than asserting for the presence of AVX, we should have had an
early exit from 'X86SelectSIToFP' if the target was not AVX.
This patch fixes the issue replacing the invalid assertion with an early exit.
Thanks to Dimitry Andric for reporting this problem and for providing a small
reproducible testcase. Added test pr23273.ll.
llvm-svn: 235295
When an inline asm call has an output register marked as early-clobber, but
that same register is also an input operand, what should we do? GCC accepts
this, and is documented to accept this for read/write operands saying,
"Furthermore, if the earlyclobber operand is also a read/write operand, then
that operand is written only after it's used." For write-only operands, the
situation seems less clear, but I have at least one existing codebase that
assumes this will work, in part because it has syscall macros like this:
({ \
register uint64_t r0 __asm__ ("r0") = (__NR_ ## name); \
register uint64_t r3 __asm__ ("r3") = ((uint64_t) (arg0)); \
register uint64_t r4 __asm__ ("r4") = ((uint64_t) (arg1)); \
register uint64_t r5 __asm__ ("r5") = ((uint64_t) (arg2)); \
__asm__ __volatile__ \
("sc" \
: "=&r"(r0),"=&r"(r3),"=&r"(r4),"=&r"(r5) \
: "0"(r0), "1"(r3), "2"(r4), "3"(r5) \
: "r6","r7","r8","r9","r10","r11","r12","cr0","memory"); \
r3; \
})
Furthermore, with register aliases and subregister relationships that only the
backend knows about, rejecting this in the frontend seems like a difficult
proposition (if we wanted to do so). However, keeping the early-clobber flag on
the INLINEASM MI does not work for us, because it will cause the register's
live interval to end too soon (so it will not appear defined to be used as an
input).
Fortunately, fixing this does not seem hard: When forming the INLINEASM MI,
check to see if any of the early-clobber outputs are also inputs, and if so,
remove the early-clobber flag.
llvm-svn: 235283
The fix ensures that scalar sources inserted into a vector are the correct bit size.
Integer scalar sources from BUILD_VECTOR and SCALAR_TO_VECTOR nodes may require truncation that this function doesn't currently support.
llvm-svn: 235281
Harden r235258 to support any integer bitwidth. The quick glance at
the reference made me think only i32 and i64 were valid types, but
they're not special, so any overload is legal.
Thanks to David Majnemer for noticing!
llvm-svn: 235261
Followup to r235232, which caused PR23278.
We can't assume the memset and memcpy sizes have the same type, as
nothing in the language reference prevents that.
Instead, zext both to i64 if they disagree.
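For instance (hypothetical calls, with the sizes carrying different integer
types):
call void @llvm.memset.p0i8.i32(i8* %dst, i8 %c, i32 %dst_size, i32 1, i1 false)
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %src_size, i32 1, i1 false)
; the sizes disagree (i32 vs. i64), so both are extended to i64 before
; computing dst_size - src_size for the shortened memset.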
While there, robustify tests by using i8 %c rather than i8 0 for the
memset character.
llvm-svn: 235258
Instead of merging everything together, look at the users of
GlobalVariables, and try to group them by function, to create
sets of globals used "together".
Using that information, a less-aggressive alternative is to keep merging
everything together *except* globals that are only ever used alone, that
is, those for which it's clearly non-profitable to merge with others.
In my testing, grouping by Function is too aggressive, but grouping by
BasicBlock is too conservative. Anything in-between isn't trivially
available, so stick with Function grouping for now.
cl::opts are added for testing; both enabled by default.
A few of the testcases aren't testing the merging proper, but just
various edge cases when merging does occur. Update them to use the
previous grouping behavior. Also, one of the tests is unrelated to
GlobalMerge; change it accordingly.
While there, switch to r234666's flags rather than the brutal -O3.
Differential Revision: http://reviews.llvm.org/D8070
llvm-svn: 235249
The result is either an Untyped reg sequence, on ldN with N > 1, or
just the type of the input vector, on ld1. Don't force Untyped.
Instead, just use the type of the reg sequence.
This mirrors the behavior of createTuple, which feeds the LD1*_POST.
The narrow code path wasn't actually covered by tests, because V64
insert_vector_elt is usually widened to V128 before the LD1LANEpost
combine has a chance to run.
The only case where it does run on V64 vectors is if the vector ops
legalizer ran. So, tickle the code with a ctpop.
Fixes PR23265.
llvm-svn: 235243
A common idiom in some code is to do the following:
memset(dst, 0, dst_size);
memcpy(dst, src, src_size);
Some of the memset is redundant; instead, we can do:
memcpy(dst, src, src_size);
memset(dst + src_size, 0,
dst_size <= src_size ? 0 : dst_size - src_size);
Original patch by: Joel Jones
Differential Revision: http://reviews.llvm.org/D498
llvm-svn: 235232
Similar to r235222, but for the weak symbol case.
In an "ideal" assembler/object format an expression would always refer to the
final value and A-B would only be computed from a section in the same
comdat as A and B with A and B strong.
Unfortunately that is not the case with debug info on ELF, so we need a
heuristic. Since we need a heuristic, we may as well use the same one as
gas:
* call weak_sym : produces a relocation, even if in the same section.
* A - weak_sym and weak_sym -A: don't produce a relocation if we can
compute it.
This fixes pr23272 and changes the fix of pr22815 to match what gas does.
llvm-svn: 235227
Part of pr23272.
A small annoyance with the assembly syntax we implement is that given an
expression there is no way to know if what is desired is the value of that
expression for the symbols in this file or for the final values of those
symbols in a link.
The first case is useful for use in sections that get discarded or ignored
if the section they are describing is discarded.
For example, consider A-B where A and B are in the same comdat section.
We can compute the value of the difference in the section that is present in
the current .o, and if that section survives to the final DSO the value will
still be correct.
But the section is in a comdat. Another section from another object file
might be used instead. We know that that section will define A and B, but
we have no idea what the value of A-B might be.
In practice we have to assume that the intention is to compute the value
in the current section, since otherwise there is no way to create something like
the debug aranges section.
llvm-svn: 235222
Summary:
This patch adds legalization support to operate on FP16 as a load/store type
and do operations on it as floats.
Tests for ARM are added to test/CodeGen/ARM/fp16-promote.ll
Reviewers: srhines, t.p.northover
Differential Revision: http://reviews.llvm.org/D8755
llvm-svn: 235215
Summary: Implement the method FastMaterializeAlloca in Mips fast-isel
Based on a patch by Reed Kotler.
Test Plan:
Passes test-suite at O0/O2 for mips32 r1/r2
fastalloca.ll
Reviewers: dsanders, rkotler
Subscribers: rfuhler, llvm-commits
Differential Revision: http://reviews.llvm.org/D6742
llvm-svn: 235213
The v1i128 type is needed for the quadword add/subtract instructions introduced
in POWER8. Furthermore, the PowerPC ABI specifies that parameters of type v1i128
are to be passed in a single vector register, while parameters of type i128 are
passed in pairs of GPRs. Thus, it is necessary to be able to differentiate
between v1i128 and i128 in LLVM.
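A sketch of the distinction at the IR level (hypothetical declarations):
declare void @takes_vector(<1 x i128>) ; passed in a single vector register
declare void @takes_int(i128)          ; passed in a pair of GPRs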
http://reviews.llvm.org/D8564
llvm-svn: 235198
Summary:
Add shift operator implementations to fast-isel for Mips. These are shift ops
for non-legal forms, i.e. i8 and i16.
Based on a patch by Reed Kotler.
Test Plan:
Reviewers: dsanders
Subscribers: echristo, rfuhler, llvm-commits
Differential Revision: http://reviews.llvm.org/D6726
llvm-svn: 235194
This is a followon to r233681 - I'd misunderstood the semantics of FTRUNC,
and had confused it with (FP_ROUND ..., 0).
Thanks to Ahmed Bougacha for his post-commit review!
llvm-svn: 235191
Summary:
Previously, the presence of KILL instructions would block valid candidates
from filling a specific delay slot. With the elimination of the KILL
instructions, in the appropriate range, we are able to fill more slots and
keep the information from future def/use analysis consistent.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: hfinkel, llvm-commits
Differential Revision: http://reviews.llvm.org/D7724
llvm-svn: 235183
Linkers normally read all the relocations upfront to compute the references
between sections. Putting them together is a bit more cache friendly.
I benchmarked linking a Release+Asserts clang with gold on a vm. I tried all
4 combinations of --gc-sections/no --gc-section hot and cold cache.
I cleared the cache with
echo 3 > /proc/sys/vm/drop_caches
and warmed it up by running the link once before timing the subsequent ones.
With cold cache and --gc-sections the time goes from
1.86130781665 +- 0.01713126697463843 seconds
to
1.82370735105 +- 0.014127522318814516 seconds
With cold cache and no --gc-sections the time goes from
1.6087245435500002 +- 0.012999066825178644 seconds
to
1.5687122041500001 +- 0.013145850126026619 seconds
With hot cache and no --gc-sections the time goes from
0.926200939 ( +- 0.33% ) seconds
to
0.907200079 ( +- 0.31% ) seconds
With hot cache and gc sections the time goes from
1.183038049 ( +- 0.34% ) seconds
to
1.147355862 ( +- 0.39% ) seconds
llvm-svn: 235165
Use an extra bit in the CCInfo to flag the newer version of the
instruction that includes the type explicitly.
Tested the newer error cases I added, but didn't add tests for the finer
granularity improvements to existing error paths.
llvm-svn: 235160
This now emits simple, unoptimized xdata tables for __C_specific_handler
based on the handlers listed in @llvm.eh.actions calls produced by
WinEHPrepare.
This adds support for running __finally blocks when exceptions are
thrown, and removes the old landingpad fan-in codepath.
I ran some manual execution tests on small basic test cases with and
without optimization, as well as on Chrome base_unittests, which uses a
small amount of SEH. I'm sure there are bugs, and we may need to
revert.
llvm-svn: 235154
Summary:
An alternative is to use a worklist approach. However, that approach
would break the traversing order so that we couldn't lookup SeenExprs
efficiently. I don't see a clear winner here, so I picked the easier approach.
Along with two minor improvements:
1. preserves ScalarEvolution by forgetting instructions replaced
2. removes dead code locally, avoiding the need to run DCE afterwards
Test Plan: add to slsr-add.ll a test that requires multiple iterations
Reviewers: broune, dberlin, atrick, meheff
Reviewed By: atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9058
llvm-svn: 235151
Found by code inspection, but breaking i16 at least breaks other tests.
They aren't checking this in particular though, so also add some
explicit tests for the already working types.
llvm-svn: 235148
See r230786 and r230794 for similar changes to gep and load
respectively.
Call is a bit different because it often doesn't have a single explicit
type - usually the type is deduced from the arguments, and just the
return type is explicit. In those cases there's no need to change the
IR.
When that's not the case, the IR usually contains the pointer type of
the first operand - but since typed pointers are going away, that
representation is insufficient so I'm just stripping the "pointerness"
of the explicit type away.
This does make the IR a bit weird - it /sort of/ reads like the type of
the first operand: "call void () %x(" but %x is actually of type "void
()*" and will eventually be just of type "ptr". But this seems not too
bad and I don't think it would benefit from repeating the type
("void (), void () * %x(" and then eventually "void (), ptr %x(") as has
been done with gep and load.
This also has a side benefit: since the explicit type is no longer a
pointer, there's no ambiguity between an explicit type and a function
that returns a function pointer. Previously this case needed an explicit
type (eg: a function returning a void() function was written as
"call void () () * @x(" rather than "call void () * @x(" because of the
ambiguity between a function returning a pointer to a void() function
and a function returning void).
No ambiguity means even function pointer return types can just be
written alone, without writing the whole function's type.
This leaves /only/ the varargs case where the explicit type is required.
Given the special type syntax in call instructions, the regex-fu used
for migration was a bit more involved in its own unique way (as every
one of these is) so here it is. Use it in conjunction with the apply.sh
script and associated find/xargs commands I've provided in r230786 to
migrate your out of tree tests. Do let me know if any of this doesn't
cover your cases & we can iterate on a more general script/regexes to
help others with out of tree tests.
About 9 test cases couldn't be automatically migrated - half of those
were functions returning function pointers, where I just had to manually
delete the function argument types now that we didn't need an explicit
function type there. The other half were typedefs of function types used
in calls - just had to manually drop the * from those.
import fileinput
import sys
import re
pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
func_end = re.compile("(?:void.*|\)\s*)\*$")
def conv(match, line):
if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
return line
return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]
for line in sys.stdin:
sys.stdout.write(conv(re.search(pat, line), line))
llvm-svn: 235145
r235050 dropped the inlined-at field from `MDLocalVariable`, deferring
to the `!dbg` attachments. Fix `UserValue` to take the `!dbg` into
account when differentiating between variables.
llvm-svn: 235140
A big-endian vector return needs a byte-swap which we aren't doing right now.
For now just bail on these cases to get correctness back.
llvm-svn: 235133
Summary:
If a pointer is marked as dereferenceable_or_null(N), LLVM assumes it
is either `null` or `dereferenceable(N)` or both. This change only
introduces the attribute and adds a token test case for the `llvm-as`
/ `llvm-dis`. It does not hook up other parts of the optimizer to
actually exploit the attribute -- those changes will come later.
For pointers in address space 0, `dereferenceable(N)` is now exactly
equivalent to `dereferenceable_or_null(N)` && `nonnull`. For other
address spaces, `dereferenceable(N)` is potentially weaker than
`dereferenceable_or_null(N)` && `nonnull` (since we could have a null
`dereferenceable(N)` pointer).
The motivating case for this change is Java (and other managed
languages), where pointers are either `null` or dereferenceable up to
some usually known-at-compile-time constant offset.
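For example (hypothetical declarations):
declare void @f(i8* dereferenceable_or_null(8))
declare void @g(i8* nonnull dereferenceable_or_null(8))
; in address space 0, the attributes on @g's argument together say the same
; thing as dereferenceable(8).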
Reviewers: rafael, hfinkel
Reviewed By: hfinkel
Subscribers: nicholas, llvm-commits
Differential Revision: http://reviews.llvm.org/D8650
llvm-svn: 235132
Summary:
This fixes a left-over efficiency issue in D8950.
As Andrew and Daniel suggested, we can store the candidates in a stack
and pop the top element when it does not dominate the current
instruction. This reduces the worst-case time complexity to O(n).
Test Plan: a new test in nary-add.ll that exercises this optimization.
Reviewers: broune, dberlin, meheff, atrick
Reviewed By: atrick
Subscribers: llvm-commits, sanjoy
Differential Revision: http://reviews.llvm.org/D9055
llvm-svn: 235129
This is very similar to D8486 / r232852 (vperm2). If we treat insertps intrinsics
as shufflevectors, we can optimize them better.
I've left all but the full zero case of the zero mask variants out of this patch.
I don't think those can be converted into a single shuffle in all cases, but I'd
be happy to be proven wrong as I was for vperm2f128.
Either way, we'd need to support whatever sequence we come up with for those cases
in the backend before converting them here.
Differential Revision: http://reviews.llvm.org/D8833
llvm-svn: 235124
This started failing on Windows after my switch lowering change in r235101.
I suspect the error is unrelated, so adding a triple to pacify it
until it can be fixed. See the PR for details.
llvm-svn: 235106
This is a major rewrite of the SelectionDAG switch lowering. The previous code
would lower switches as a binary tree, discovering clusters of cases
suitable for lowering by jump tables or bit tests as it went along. To increase
the likelihood of finding jump tables, the binary tree pivot was selected to
maximize case density on both sides of the pivot.
By not selecting the pivot in the middle, the binary trees would not always
be balanced, leading to performance problems in the generated code.
This patch rewrites the lowering to search for clusters of cases
suitable for jump tables or bit tests first, and then builds the binary
tree around those clusters. This way, the binary tree will always be balanced.
This has the added benefit of decoupling the different aspects of the lowering:
tree building and jump table or bit tests finding are now easier to tweak
separately.
For example, this will enable us to balance the tree based on profile info
in the future.
The algorithm for finding jump tables is O(n^2), whereas the previous algorithm
was O(n log n) for common cases, and quadratic only in the worst-case. This
doesn't seem to be major problem in practice, e.g. compiling a file consisting
of a 10k-case switch was only 30% slower, and such large switches should be rare
in practice. Compiling e.g. gcc.c showed no compile-time difference. If this
does turn out to be a problem, we could limit the search space of the algorithm.
This commit also disables all optimizations during switch lowering in -O0.
Differential Revision: http://reviews.llvm.org/D8649
llvm-svn: 235101
Summary: Previously, this was only happening for functions, but because of .insn, objects can also be marked now.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8007
llvm-svn: 235095
Summary:
This assembler directive marks the current label as an instruction label in microMIPS and MIPS16.
This initial implementation works only for microMIPS.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8006
llvm-svn: 235084
Fix for a test case found by James Molloy - TRUNCATE of constant build vectors can be achieved more simply by replacing the node with a new build vector node with the truncated value type - no need to touch the scalar operands at all.
llvm-svn: 235079
The only type that isn't an integer, isn't floating point, and isn't
a vector; ladies and gentlemen, the gift that keeps on giving: x86_mmx!
Fixes PR23246.
Original message (reverted in r235062):
[CodeGen] Combine concat_vectors of scalars into build_vector.
Combine something like:
(v8i8 concat_vectors (v2i8 bitcast (i16)) x4)
into:
(v8i8 (bitcast (v4i16 BUILD_VECTOR (i16) x4)))
If any of the scalars are floating point, use that throughout.
Differential Revision: http://reviews.llvm.org/D8948
llvm-svn: 235072
The way we split SEH catch-all blocks can leave some dead EH values
behind at -O0. Try to remove them, and if we fail, replace them all with
undef.
Fixes a crash when removing the old unreachable landingpad which is
still used by extractvalue instructions in the catch-all block.
llvm-svn: 235061
Remove 'inlinedAt:' from MDLocalVariable. Besides saving some memory
(variables with it seem to be the single largest `Metadata` contributor to
memory usage right now in -g -flto builds), this stops optimization and
backend passes from having to change local variables.
The 'inlinedAt:' field was used by the backend in two ways:
1. To tell the backend whether and into what a variable was inlined.
2. To create a unique id for each inlined variable.
Instead, rely on the 'inlinedAt:' field of the intrinsic's `!dbg`
attachment, and change the DWARF backend to use a typedef called
`InlinedVariable` which is `std::pair<MDLocalVariable*, MDLocation*>`.
This `DebugLoc` is already passed reliably through the backend (as
verified by r234021).
This commit removes the check from r234021, but I added a new check
(that will survive) in r235048, and changed the `DIBuilder` API in
r235041 to require a `!dbg` attachment whose 'scope:` is in the same
`MDSubprogram` as the variable's.
If this breaks your out-of-tree testcases, perhaps the script I used
(mdlocalvariable-drop-inlinedat.sh) will help; I'll attach it to PR22778
in a moment.
llvm-svn: 235050
Before we start to rely on valid `!dbg` attachments, add a check to the
verifier that `@llvm.dbg.*` intrinsics always have one. Also check that
the `scope:` fields point at the same `MDSubprogram`.
This is in the context of PR22778. The check that the `inlinedAt:`
fields agree has baked for a while (since r234021), so I'll kill [1] the
`MDLocalVariable::getInlinedAt()` field soon.
[1]: http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150330/269387.html
Unfortunately, that means it's impossible to keep the current `Verifier`
checks, which rely on comparing `inlinedAt:` fields. We'll be able to
keep the checks I'm adding here.
If this breaks your out-of-tree testcases, the upgrade script
(add-dbg-to-intrinsics.sh) attached to PR22778 that I used for r235040
might fix them for you.
llvm-svn: 235048
Add missing `!dbg` attachments to `@llvm.dbg.*` intrinsics. I updated
these using a script (add-dbg-to-intrinsics.sh) that I'll attach to
PR22778 for posterity.
llvm-svn: 235040
This avoids emitting code for unreachable landingpad blocks that contain
calls to llvm.eh.actions and indirectbr.
It's also a first step towards unifying the SEH and WinEH lowering
codepaths. I'm keeping the old fan-in lowering of SEH around until the
preparation version works well enough that we can switch over without
breaking existing users.
llvm-svn: 235037
BXJ was incorrectly said to be unsupported in ARMv8-A. It is not
supported in the A64 instruction set, but it is supported in the T32
and A32 instruction sets, because it's listed as an instruction in the
ARM ARM section F7.1.28.
Using SP as an operand to BXJ changed from UNPREDICTABLE to
PREDICTABLE in v8-A. This patch reflects that update as well.
This was found by MCHammer.
llvm-svn: 235024
Summary:
With this patch, SLSR may rewrite
S1: X = B + i * S
S2: Y = B + i' * S
to
S2: Y = X + (i' - i) * S
A secondary improvement: if (i' - i) is a power of 2, emit Y as X + (S << log(i' - i)). (S << log(i' -i)) is in a canonical form and thus more likely GVN'ed than (i' - i) * S.
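A sketch of the rewrite in IR, with hypothetical strides:
%i1 = mul i32 %s, 4
%x = add i32 %b, %i1 ; S1: X = B + 4 * S
%i2 = mul i32 %s, 6
%y = add i32 %b, %i2 ; S2: Y = B + 6 * S
; SLSR may rewrite S2 in terms of S1; 6 - 4 is a power of 2, so the
; difference is emitted as a shift:
%d = shl i32 %s, 1
%y2 = add i32 %x, %d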
Test Plan: slsr-add.ll
Reviewers: hfinkel, sanjoy, meheff, broune, eliben
Reviewed By: eliben
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8983
llvm-svn: 235019
Many of these predate llvm-readobj. With elf-dump we had to match
a relocation to symbol number and symbol number to symbol name or
section number.
llvm-svn: 235015
This is a 1-line patch (with a TODO for AVX because that will affect
even more regression tests) that lets us substitute the appropriate
64-bit store for the float/double/int domains.
It's not clear to me exactly what the difference is between the 0xD6 (MOVPQI2QImr) and
0x7E (MOVSDto64mr) opcodes, but this is apparently the right choice.
Differential Revision: http://reviews.llvm.org/D8691
llvm-svn: 235014
Set the transform bar at 2 divisions because the fastest current
x86 FP divider circuit is in SandyBridge / Haswell at 10 cycle
latency (best case) relative to a 5 cycle multiplier.
So that's the worst case for this transform (no latency win),
but multiplies are obviously pipelined while divisions are not,
so there's still a big throughput win which we would expect to
show up in typical FP code.
These are the sequences I'm comparing:
divss %xmm2, %xmm0
mulss %xmm1, %xmm0
divss %xmm2, %xmm0
Becomes:
movss LCPI0_0(%rip), %xmm3 ## xmm3 = mem[0],zero,zero,zero
divss %xmm2, %xmm3
mulss %xmm3, %xmm0
mulss %xmm1, %xmm0
mulss %xmm3, %xmm0
[Ignore for the moment that we don't optimize the chain of 3 multiplies
into 2 independent fmuls followed by 1 dependent fmul...this is the DAG
version of: https://llvm.org/bugs/show_bug.cgi?id=21768 ...if we fix that,
then the transform becomes even more profitable on all targets.]
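At the source level, the transform corresponds roughly to the following hypothetical C sketch (valid only under the fast-math assumptions this combine requires; the names are placeholders):
float recip_demo(float a, float b, float c) {
  float before = ((a / c) * b) / c;  /* two divisions by the same value c */
  float t = 1.0f / c;                /* after: one division for the reciprocal... */
  float after = ((a * t) * b) * t;   /* ...and the divisions become multiplies */
  return before - after;
}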
Differential Revision: http://reviews.llvm.org/D8941
llvm-svn: 235012
This commit makes LLVM not estimate branch probabilities when doing a
single bit bitmask tests.
The code that originally made me discover this is:
if ((a & 0x1) == 0x1) {
..
}
In this case we don't actually have any branch probability information
and should not assume to have any. LLVM transforms this into:
%and = and i32 %a, 1
%tobool = icmp eq i32 %and, 0
So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.
CodeGen/ARM/2013-10-11-select-stalls.ll started failing because the
changed probabilities changed the results of
ARMBaseInstrInfo::isProfitableToIfCvt() and led to an Ifcvt of the
diamond in the test. AFAICT, the test was never meant to test this and
thus changing the test input slightly to not change the probabilities
seems like the best way to preserve the meaning of the test.
llvm-svn: 234979
Inlining such intrinsics is very difficult, since you need to
simultaneously transform many calls to llvm.framerecover and potentially
duplicate the functions containing them. Normally this intrinsic isn't
added until EH preparation, which is part of the backend pass pipeline
after inlining. However, if it were to get fed through the inliner,
this change will ensure that it doesn't break the code.
llvm-svn: 234937
This causes badness for GDB which expects to find a definition in any
compile_unit that has an entry for the variable in its pubnames.
llvm-svn: 234915
I'd added some stuff to this test case without adding the original
source, which makes updating/adding further stuff rather difficult. So
update it first (& it seems in the interim Clang's changed its output
order a bit, so adjust the CHECK lines to account for that - rather than
hand hacking the IR order which just makes it harder to maintain/change
next time)
llvm-svn: 234911
if ((a & 0x1) == 0x1) {
..
}
In this case we don't actually have any branch probability information and
should not assume to have any. LLVM transforms this into:
%and = and i32 %a, 1
%tobool = icmp eq i32 %and, 0
So, in this case, the result of a bitwise and is compared against 0,
but nevertheless, we should not assume to have probability
information.
llvm-svn: 234898
The ARMv8 ARMARM states that for these instructions in A64 state:
"Unspecified bits in "imm5" are ignored but should be set to zero by an assembler.", (imm4 for INS).
Make the disassembler accept any encoding with these ignored bits set to 1.
llvm-svn: 234896
This pass will always try to insert llvm.SI.ifbreak intrinsics
in the same block that its conditional value is computed in. This is
a problem when conditions for breaks or continue are computed outside
of the loop, because the llvm.SI.ifbreak intrinsic ends up being inserted
outside of the loop.
This patch fixes this problem by inserting the llvm.SI.ifbreak
intrinsics in the loop header when the condition is computed outside
the loop.
llvm-svn: 234891
Summary:
Without this check the following case failed:
Skip a SubBlock which is not a MODULE_BLOCK_ID nor a BLOCKINFO_BLOCK_ID
Got to end of file
TheModule would still be == nullptr, and we would subsequently fail
when materializing the Module (assert at the start of
BitcodeReader::MaterializeModule).
Bug found with AFL.
Reviewers: dexonsmith, rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9014
llvm-svn: 234887
Some targets (ie. Mips) have additional rules for ordering the relocation
table entries. Allow them to override generic sortRelocs(), which sorts
entries by Offset.
Then override this function for Mips, to emit HI16 and GOT16 relocations
against the local symbol in pair with the corresponding LO16 relocation.
Patch by Vladimir Stefanovic.
Differential Revision: http://reviews.llvm.org/D7414
llvm-svn: 234883
The testcase that is included in the patch caused a crash when doing DebugInfoFinder::processModule
on the module due to DCT->getElements() returning nullptr in DebugInfoFinder::processType.
By doing "DCT->getElements()" instead of "DCT->getElements()->operands()" one gets a DIArray
instead of a raw MDTuple. The former has code to handle null as a 0-element array and
therefore avoids the crash.
Differential Revision: http://reviews.llvm.org/D9008
llvm-svn: 234875
Summary:
This transformation reassociates an n-ary add so that the add can partially reuse
existing instructions. For example, this pass can simplify
void foo(int a, int b) {
bar(a + b);
bar((a + 2) + b);
}
to
void foo(int a, int b) {
int t = a + b;
bar(t);
bar(t + 2);
}
saving one add instruction.
Fixes PR22357 (https://llvm.org/bugs/show_bug.cgi?id=22357).
Test Plan: nary-add.ll
Reviewers: broune, dberlin, hfinkel, meheff, sanjoy, atrick
Reviewed By: sanjoy, atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8950
llvm-svn: 234855
Summary:
Runtime unrolling of loops needs to emit an expression to compute the
loop's runtime trip-count. Avoid runtime unrolling if this computation
will be expensive.
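As a hypothetical illustration (not taken from the patch): for a loop like the one below, the runtime trip count is roughly n / step, and expanding that expression means emitting a real division instruction, which is the kind of cost this change avoids paying just to enable unrolling.
void zero_strided(int *a, int n, int step) {
  for (int i = 0; i < n; i += step)  /* trip count ~ n / step needs a udiv */
    a[i] = 0;
}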
Depends on D8993.
Reviewers: atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8994
llvm-svn: 234846
Summary:
Teach `isHighCostExpansion` to consider divisions by power-of-two
constants as cheap and add a test case. This change is needed for a new
user of `isHighCostExpansion` that will be added in a subsequent change.
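For example (a hypothetical sketch, not from the patch), a division by a power-of-two constant lowers to a shift, so treating it as cheap is reasonable:
unsigned cheap_div(unsigned n) {
  return n / 8;  /* lowers to n >> 3 */
}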
Depends on D8995.
Reviewers: atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8993
llvm-svn: 234845
Summary:
This is the first in a series of patches to eventually add support for TLS relocations to RuntimeDyld. This patch resolves an issue in the current GOT handling, where GOT entries would be reused between object files, which leads to the same situation that necessitates the GOT in the first place, i.e. that the 32-bit offset can not cover all of the address space. Thus this patch makes the GOT object-file-local.
Unfortunately, this still isn't quite enough, because the MemoryManager does not yet guarantee that sections are allocated sufficiently close to each other, even if they belong to the same object file. To address this concern, this patch also adds a small API abstraction on top of the GOT allocation mechanism that will allow (temporarily, until the MemoryManager is improved) using the stub mechanism instead of allocating a different section. The actual switch from separate section to stub mechanism will be part of a follow-on commit, so that it can be easily reverted independently at the appropriate time.
Test Plan: Includes a test case where the GOT of two object files is artificially forced to be apart by several GB.
Reviewers: lhames
Reviewed By: lhames
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8813
llvm-svn: 234839
Fix oversight in -analyze output. PtrRtCheck contains the pointers that
need to be checked against each other and not whether memchecks are
necessary.
For instance, in the testcase PtrRtCheck has four elements, but they are
all no-alias, so no checking is necessary.
llvm-svn: 234833
This keeps the program and JIT output in sync, enabling FileCheck to test the
order of target program and JIT events.
In particular we can now test that main is not compiled until after the global
constructor has run.
llvm-svn: 234815
Combine something like:
(v8i8 concat_vectors (v2i8 bitcast (i16)) x4)
into:
(v8i8 (bitcast (v4i16 BUILD_VECTOR (i16) x4)))
If any of the scalars are floating point, use that throughout.
Differential Revision: http://reviews.llvm.org/D8948
llvm-svn: 234809
This reverts commit r234717, reapplying r234698 (in spirit).
As described in r234717, the original `Verifier` check had a
use-after-free. Instead of storing pointers to "interesting" debug info
intrinsics whose bit piece expressions should be verified once we have
typerefs, do a second traversal. I've added a testcase to catch the
`llc` crasher.
Original commit message:
Verifier: Check for incompatible bit piece expressions
Convert an assertion into a `Verifier` check. Bit piece expressions
must fit inside the variable, and mustn't be the entire variable.
Catching this in the verifier will help us find bugs sooner, and makes
`DIVariable::getSizeInBits()` dead code.
llvm-svn: 234776
Revert "Remove default in fully-covered switch (to fix Clang -Werror -Wcovered-switch-default)"
Revert "R600: Add carry and borrow instructions. Use them to implement UADDO/USUBO"
Revert "LegalizeDAG: Try to use Overflow operations when expanding ADD/SUB"
Using overflow operations fails CodeGen/Generic/2011-07-07-ScheduleDAGCrash.ll
on hexagon, nvptx, and r600. Revert while I investigate.
llvm-svn: 234768
When the conditions of the selects have different types, the
select(select) -> select(and) normalisation cannot be performed.
See also: http://reviews.llvm.org/D7622
llvm-svn: 234763
v2: tighten the sub64 tests
v3: rename to CARRY/BORROW
v4: fixup test cmdline
add known bits computation
use sign extend instead of sub 0,x
better add test
v5: remove redundant break
move lowering to separate functions
fix comments
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewers: arsenm
llvm-svn: 234759
Fill in the TODO in CodeGenPrepare::OptimizeCallInst so that global
variables that are passed to memory intrinsics are aligned in the same
way that allocas are.
Differential Revision: http://reviews.llvm.org/D8421
llvm-svn: 234735
Summary:
When instruction bundling is enabled and the -mc-relax-all flag is
set, we can write bundle padding directly into fragments and avoid
creating a large number of fragments, significantly reducing LLVM MC
memory usage.
Test Plan: Regression test attached
Reviewers: eliben
Subscribers: jfb, mseaborn
Differential Revision: http://reviews.llvm.org/D8072
llvm-svn: 234714
When I fixed these a couple of days ago to iterate over all loops, not just
depth == 1 loops, I inadvertently made it such that we'd only look at the first
top-level loop. Make sure that we really look at all of them.
llvm-svn: 234705
Clean up a predicate I added in r229731, fix the relevant comment and
add a test case. The earlier version is confusing to read and was also
buggy (probably not a coincidence) till Alexey fixed it in r233881.
llvm-svn: 234701
As it turns out, even though these are part of ISA 2.06, the P7 does not
support them (or, at least, not any P7s we've tested so far).
llvm-svn: 234686
This patch corresponds to review:
http://reviews.llvm.org/D8928
It adds direct move instructions between VSX registers and GPRs. These are
exploited for FP <-> INT conversions.
llvm-svn: 234682
This pass had the same problem as the data-prefetching pass: it was only
checking for depth == 1 loops in practice. Fix that, add some debugging
statements, and make sure that, when we grab an AddRec, it is for the loop we
expect.
llvm-svn: 234670
Currently, there's a single flag, checked by the pass itself.
It can't force-enable the pass (and is on by default), because it
might not even have been created, as that's the target's decision.
Instead, have separate explicit flags, so that the decision is
consistently made in the target.
Keep the flag as a last-resort "force-disable GlobalMerge" for now,
for backwards compatibility.
llvm-svn: 234666
This allows winehprepare to build sensible llvm.eh.actions calls for SEH
finally blocks. The pattern matching in this change is brittle and
should be replaced with something more robust soon. In the meantime,
this will let us write the code that produces __C_specific_handler xdata
tables, which we need regardless of how we decide to get finally blocks
through EH preparation.
llvm-svn: 234663
When rewriting statepoints to make relocations explicit, we need to have a conservative but consistent notion of where a particular pointer is live at a particular site. The old code just used dominance, which is correct, but decidedly more conservative than it needed to be. This patch implements a simple dataflow algorithm that's run once per function (well, twice counting fixup after base pointer insertion). There's still lots of room to make this faster, but it's fast enough for all practical purposes today.
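For reference, a minimal self-contained sketch of the standard backward liveness dataflow described above; the block/value representation is hypothetical and this is not the patch's actual code:
#include <stdbool.h>
#include <string.h>
/* 'use' and 'def' are bitsets over value ids; 'succ' lists successor blocks. */
struct Block {
  unsigned use, def;
  int succ[2];
  int nsucc;
};
void compute_liveness(const struct Block *b, int n,
                      unsigned *livein, unsigned *liveout) {
  memset(livein, 0, n * sizeof *livein);
  memset(liveout, 0, n * sizeof *liveout);
  bool changed = true;
  while (changed) {  /* iterate to a fixed point */
    changed = false;
    for (int i = n - 1; i >= 0; --i) {
      unsigned out = 0;
      for (int s = 0; s < b[i].nsucc; ++s)
        out |= livein[b[i].succ[s]];
      unsigned in = b[i].use | (out & ~b[i].def);
      if (in != livein[i] || out != liveout[i]) {
        livein[i] = in;
        liveout[i] = out;
        changed = true;
      }
    }
  }
}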
Differential Revision: http://reviews.llvm.org/D8674
llvm-svn: 234657
Two related small changes:
Various dominance based queries about liveness can get confused if we're talking about unreachable blocks. To avoid reasoning about such cases, just remove them before rewriting statepoints.
Remove single entry phis (likely left behind by LCSSA) to reduce the number of live values.
Both of these are motivated by http://reviews.llvm.org/D8674 which will be submitted shortly.
Differential Revision: http://reviews.llvm.org/D8675
llvm-svn: 234651
This patch adds limited support for inserting explicit relocations when there's a vector of pointers live over the statepoint. This doesn't handle the case where the vector contains a mix of base and non-base pointers; that's future work.
The current implementation just scalarizes the vector over the gc.statepoint before doing the explicit rewrite. An alternate approach would be to plumb the vector all the way through the backend lowering, but doing that appears challenging. In particular, the size of the indirect spill slot is currently assumed to be sizeof(pointer) throughout the backend.
In practice, this is enough to allow running the SLP and Loop vectorizers before RewriteStatepointsForGC.
Differential Revision: http://reviews.llvm.org/D8671
llvm-svn: 234647
Summary:
This change moves creating calls to `llvm.uadd.with.overflow` from
InstCombine to CodeGenPrep. Combining overflow check patterns into
calls to the said intrinsic in InstCombine inhibits optimization because
it introduces an intrinsic call that not all other transforms and
analyses understand.
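One common source pattern that gets matched into the intrinsic looks roughly like this (a hypothetical C sketch):
unsigned checked_add(unsigned a, unsigned b, int *overflow) {
  unsigned sum = a + b;
  *overflow = (sum < a);  /* unsigned wrap-around check */
  return sum;
}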
Depends on D8888.
Reviewers: majnemer, atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8889
llvm-svn: 234638
WinEH currently turns invokes into calls. Long term, we will reconsider
this, but for now, make sure we remap the operands and clone the
successors of the new terminator.
llvm-svn: 234608
Iterating over loops from the LoopInfo instance only provides top-level loops.
We need to search the whole tree of loops to find the inner ones.
llvm-svn: 234603
Summary:
Some optimizations such as jump threading and loop unswitching can negatively
affect performance when applied to divergent branches. The divergence analysis
added in this patch conservatively estimates which branches in a GPU program
can diverge. This information can then help LLVM to run certain optimizations
selectively.
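As a hypothetical sketch of what "divergent" means here (tid stands in for a per-thread id, n for a kernel argument; neither name comes from the patch):
int divergence_demo(int tid, int n) {
  int r = 0;
  if (tid & 1)  /* divergent: threads in the same warp can disagree */
    r += 1;
  if (n > 100)  /* uniform: every thread evaluates the same condition */
    r += 2;
  return r;
}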
Test Plan: test/Analysis/DivergenceAnalysis/NVPTX/diverge.ll
Reviewers: resistor, hfinkel, eliben, meheff, jholewinski
Subscribers: broune, bjarke.roune, madhur13490, tstellarAMD, dberlin, echristo, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D8576
llvm-svn: 234567
When we have an instruction for this (and, thus, don't generate a runtime
call), we need to custom type legalize this (in a trivial way, just as we do
for fp_to_sint).
Fixes PR23173.
llvm-svn: 234561
For the most common ones (such as fadd), we already did the promotion.
Do the same thing for all the others.
Currently, we'll just crash/assert on all these operations, as
there's no hardware or libcall support whatsoever.
f16 (half) is specified as an interchange - not arithmetic - format,
and is expected to be promoted to single-precision for arithmetic
operations.
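For context, a hypothetical example using Clang's __fp16 extension; the half operands are converted to float for the add, as described above:
float half_add_demo(void) {
  __fp16 x = 1.0f, y = 2.0f;
  return x + y;  /* performed as (float)x + (float)y */
}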
While there, teach the legalizer about promoting some of the (mostly
floating-point) operations that we never needed before.
Differential Revision: http://reviews.llvm.org/D8648
See related discussion on the thread for: http://reviews.llvm.org/D8755
llvm-svn: 234550
This is the patch corresponding to review:
http://reviews.llvm.org/D8406
It adds some missing instructions from ISA 2.06 to the PPC back end.
llvm-svn: 234546
We already do:
concat_vectors(scalar, undef) -> scalar_to_vector(scalar)
When the scalar is legal.
When it's not, but is a truncated legal scalar, we can also do:
concat_vectors(trunc(scalar), undef) -> scalar_to_vector(scalar)
Which is equivalent, since the upper lanes are undef anyway.
While there, teach the combine to look at more than 2 operands.
Differential Revision: http://reviews.llvm.org/D8883
llvm-svn: 234530
The integer extend optimization tries to fold the extend into the load
instruction. This requires us to identify if the extend has already been
emitted or not and act accordingly on it.
The check that was originally performed for this was not sufficient. Besides
checking the ValueMap for a mapped register we also need to check if the
virtual register has already an associated machine instruction that defines it.
This fixes rdar://problem/20470788.
llvm-svn: 234529
Currently, llvm (backend) doesn't know cortex-r4, even though it is the
default target for armv7r. Using "--target=armv7r-arm-none-eabi" provokes
"'cortex-r4' is not a recognized processor for this target" from llvm.
This patch adds support for cortex-r4 and, very closely related, r4f.
llvm-svn: 234486
restrictions when choosing a type for small-memcpy inlining in
SelectionDAGBuilder.
This ensures that the loads and stores output for the memcpy won't be further
expanded during legalization, which would cause the total number of instructions
for the memcpy to exceed (often significantly) the inlining thresholds.
<rdar://problem/17829180>
llvm-svn: 234462
The bug manifests when there are two loads and two stores chained as follows in
a DAG,
(ld v3f32) -> (st f32) -> (ld v3f32) -> (st f32)
and the stores' values are extracted from the preceding vector loads.
MergeConsecutiveStores would replace the first store in the chain with the
merged vector store, which would create a cycle between the merged store node
and the last load node that appears in the chain.
This commit fixes the bug by replacing the last store in the chain instead.
rdar://problem/20275084
Differential Revision: http://reviews.llvm.org/D8849
llvm-svn: 234430
(Re-apply r234361 with a fix and a testcase for PR23157)
Both run-time pointer checking and the dependence analysis are capable
of dealing with uniform addresses. I.e. it's really just an orthogonal
property of the loop that the analysis computes.
Run-time pointer checking will only try to reason about SCEVAddRec
pointers or else gives up. If the uniform pointer turns out to be a
SCEVAddRec in an outer loop, the run-time checks generated will be
correct (start and end bounds would be equal).
In case of the dependence analysis, we work again with SCEVs. When
compared against a loop-dependent address of the same underlying object,
the difference of the two SCEVs won't be constant. This will result in
returning an Unknown dependence for the pair.
When compared against another uniform access, the difference would be
constant and we should return the right type of dependence
(forward/backward/etc).
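A hypothetical example of the kind of uniform address being discussed (the load of *p does not vary with the induction variable):
void uniform_addr_demo(int *a, int *p, int n) {
  for (int i = 0; i < n; i++)
    a[i] += *p;  /* *p is a uniform (loop-invariant) address */
}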
The changes also add support to query this property of the loop and
modify the vectorizer to use this.
Patch by Ashutosh Nema!
llvm-svn: 234424
Because -menable-no-nans causes fcmp conditions to be rewritten
without 'o' or 'u', the recognition code needs to cope. Also
extended it to handle 'le' and 'ge'.
Differential Revision: http://reviews.llvm.org/D8725
llvm-svn: 234421
Summary:
Even though there is no 2nd register operand in the "lw/sw $8, symbol" case, we still try to find one,
and we end up with $0, which makes us generate an unnecessary "addu $8, $8, $0" (a.k.a. "move $8, $8").
We can avoid this by checking if the 2nd register operand is different from $0, before generating the addu.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8055
llvm-svn: 234406
Summary:
They are of the form "bnezl/beqzl $rs, offset" and expand to "bnel/beql $rs, $zero, offset".
These instructions are used in Linux inline assembly.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8540
llvm-svn: 234401
One could make the argument for writing it immediately after the ELF header,
but writing it in the middle of the sections like we were doing just makes
it harder for no reason.
llvm-svn: 234400
Summary: Looks like new code from [[ http://reviews.llvm.org/rL222057 | rL222057 ]] doesn't account for early `return` in `ARMFrameLowering::emitPrologue`, which leads to losing the `.cfi_def_cfa_offset` directive for functions without a stack frame.
Reviewers: echristo, rengolin, asl, t.p.northover
Reviewed By: t.p.northover
Subscribers: llvm-commits, rengolin, aemerson
Differential Revision: http://reviews.llvm.org/D8606
llvm-svn: 234399
Summary:
These AssemblerPredicate's are unnecessary and actually make some instructions unusable when assembling pre-MIPS32 ISAs.
For example, this was causing the IAS to reject the 'j' instruction for MIPS I-V.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8300
llvm-svn: 234398
This is currently considered experimental, but most of the more
commonly used instructions should work.
So far only SI has been extensively tested, CI and VI probably work too,
but may be buggy. The current set of test cases does not give complete
coverage, but I think it is sufficient for an experimental assembler.
See the documentation in R600Usage for more information.
llvm-svn: 234381
We weren't checking the sign of the floating point immediate before translating
it to "fmov sD, wzr". Similarly for D-regs.
Technically "movi vD.2s, #0x80, lsl #24" would work most of the time, but it's
not a blessed alias (and I don't think it should be since people expect writing
sD to zero out the high lanes, and there's no dD equivalent). So an error it is.
rdar://20455398
llvm-svn: 234372
Both run-time pointer checking and the dependence analysis are capable
of dealing with uniform addresses. I.e. it's really just an orthogonal
property of the loop that the analysis computes.
Run-time pointer checking will only try to reason about SCEVAddRec
pointers or else gives up. If the uniform pointer turns out to be a
SCEVAddRec in an outer loop, the run-time checks generated will be
correct (start and end bounds would be equal).
In case of the dependence analysis, we work again with SCEVs. When
compared against a loop-dependent address of the same underlying object,
the difference of the two SCEVs won't be constant. This will result in
returning an Unknown dependence for the pair.
When compared against another uniform access, the difference would be
constant and we should return the right type of dependence
(forward/backward/etc).
The changes also add support to query this property of the loop and
modify the vectorizer to use this.
Patch by Ashutosh Nema!
llvm-svn: 234361
Summary:
This is not possible when using the IAS for MIPS, but it is possible when using the IAS for other architectures and when using GAS for MIPS.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8578
llvm-svn: 234316
Summary:
The loop which emits AssemblerPredicate conditions also links them together by emitting a '&&'.
If the 1st predicate is not an AssemblerPredicate, while the 2nd one is, nothing gets emitted for the 1st one, but we still emit the '&&' because of the 2nd predicate.
This generated code looks like "( && Cond2)" and is invalid.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: dsanders, llvm-commits
Differential Revision: http://reviews.llvm.org/D8294
llvm-svn: 234312
Fast isel used to zero extend immediates to 64 bits. This normally goes
unnoticed because the value is truncated to 32 bits for output.
Two cases where it is noticed:
* We fail to use smaller encodings.
* If the original constant was smaller than i32.
In the tests using i1 constants, codegen would change to use -1, which is fine
(and matches what regular isel does) since only the lowest bit is then used.
Instead, this patch changes the IR to use i8 constants, which looks more
like what clang produces.
llvm-svn: 234249
The uselist isn't enough to infer anything about the lifetime of such
allocas. If we want to re-add this optimization, we will need to
leverage lifetime markers to do it.
Fixes PR23122.
llvm-svn: 234196
After recognising that a certain narrow instruction might need a relocation to
be represented, we used to unconditionally relax it to a Thumb2 instruction to
permit this. Unfortunately, some CPUs (e.g. v6m) don't even have most Thumb2
instructions, so we end up emitting a completely invalid instruction.
Theoretically, ELF does have relocations for these situations; but they are
fairly unusable with such short ranges and the ABI document even says they're
documented "for completeness". So an error is probably better there too.
rdar://20391953
llvm-svn: 234195
This patch allows SSE4.1 targets to use (V)PINSRB to create 16i8 vectors by inserting i8 scalars directly into a XMM register instead of merging pairs of i8 scalars into a i16 and using the SSE2 PINSRW instruction.
This allows folding of byte loads and reduces scalar register usage as well.
Differential Revision: http://reviews.llvm.org/D8839
llvm-svn: 234193
Add missing checks for `templateParams:` in `MDCompositeType`. Pull the
current check for `MDSubprogram` to reduce duplicated code and fix it up
to print a good message when the immediate operand isn't an `MDTuple`
(as a drive-by, make the same fix to `variables:` in `MDSubprogram`).
llvm-svn: 234177
Before, when deciding if we needed a relocation in A-B, we were only checking
if A was weak.
This fixes the asymmetry.
The "InSet" argument should probably be renamed to "ForValue", since InSet is
very MachO specific, but doing so in this patch would make it hard to read.
This fixes PR22815.
llvm-svn: 234165
This allows the compiler/assembly programmer to switch back to a
section. This in turn fixes the bootstrap failure on powerpc (tested
on gcc110) without changing the ppc codegen at all.
I will try to cleanup the various getELFSection overloads in a followup patch.
Just using a default argument now would lead to ambiguities.
llvm-svn: 234099
Previously the patterns didn't have high enough priority and we would only use the GR32 form if only the upper 32 or 56 bits were zero.
Fixes PR23100.
llvm-svn: 234075
This adds support for catching an exception such that an exception object
available to the catch handler will be initialized by the runtime.
llvm-svn: 234062
We don't need to represent UnwindHelp in IR. Instead, we can use the
knowledge that we are emitting the parent function to decide if we
should create the UnwindHelp stack object.
llvm-svn: 234061
InstCombine didn't realize that it needs to use DataLayout to determine
how wide pointers are. This led to assertion failures.
This fixes PR23113.
llvm-svn: 234046
Check that the `MDLocalVariable::getInlinedAt()` in a debug info
intrinsic's variable always matches the `MDLocation::getInlinedAt()` of
its `!dbg` attachment.
The goal here is to get rid of `MDLocalVariable::getInlinedAt()`
entirely (PR22778), since it's expensive and unnecessary, but I'll let
this verifier check bake for a while (a week maybe?) first. I've
updated the testcases that had the wrong value for `inlinedAt:`.
This checks that things are sane in the IR, but currently things go out
of whack in a few places in the backend. I'll follow shortly with
assertions in the backend (with code fixes).
If you have out-of-tree testcases that just started failing, here's how
I updated these ones:
1. The verifier check gives you the basic block, function, instruction,
and relevant metadata arguments (metadata numbering doesn't
necessarily match the source file, unfortunately).
2. Look at the `@llvm.dbg.*()` instruction, and compare the
`inlinedAt:` fields of the variable argument (second `metadata`
argument) and the `!dbg` attachment.
3. Figure out based on the variable `scope:` chain and the functions in
the file whether the variable has been inlined (and into what), so
you can determine which `inlinedAt:` is actually correct. In all of
the in-tree testcases, the `!MDLocation()` was correct and the
`!MDLocalVariable()` was wrong, but YMMV.
4. Duplicate the metadata that you're going to change, and add/drop the
`inlinedAt:` field from one of them. Be careful that the other
references to the same metadata node point at the correct one.
llvm-svn: 234021
When enabling PPC64LE, I disabled some optimizations of BUILD_VECTOR
nodes for little endian because wrong results were produced. I've
subsequently investigated and found this is due to a call to
BuildVectorSDNode::isConstantSplat that was always specifying
big-endian. With this changed to correctly identify the target
endianness, the optimizations work as expected.
I found another case of a call to the same method with big-endian
hardcoded, in PPC::isAllNegativeZeroVector(). I discovered this was
an orphaned method with no callers, so I've just removed it.
The existing test/CodeGen/PowerPC/vec_constants.ll checks these
optimizations, so for testing I've just added a variant for little
endian.
llvm-svn: 234011
This patch attempts to fold the shuffling of 'scalar source' inputs - BUILD_VECTOR and SCALAR_TO_VECTOR nodes - if the shuffle node is the only user. This folds away a lot of unnecessary shuffle nodes, and allows quite a bit of constant folding that was being missed.
Differential Revision: http://reviews.llvm.org/D8516
llvm-svn: 234004
Fixes PR19582.
Previously, when an asm assignment (.set or =) was created, we would look up
the section immediately in MCSymbol::setVariableValue. This caused symbols
to receive the wrong section if the RHS of the assignment had not been seen
yet. This had a knock-on effect in the object file emitters, causing them
to emit extra symbols, or to give symbols the wrong visibility or the wrong
section. For example, in the following asm:
.data
.Llocal:
.text
leaq .Llocal1(%rip), %rdi
.Llocal1 = .Llocal2
.Llocal2 = .Llocal
the first assignment would give .Llocal1 a null section, which would never get
fixed up by the second assignment. This would cause the ELF object file emitter
to consider .Llocal1 to be an undefined symbol and give it external linkage,
even though .Llocal1 should not have been emitted at all in the object file.
Or in the following asm:
alias_to_local = Ltmp0
Ltmp0:
the Mach-O object file emitter would give the alias_to_local symbol a n_type
of N_SECT and a n_sect of 0. This is invalid under the Mach-O specification,
which requires N_SECT symbols to receive a non-zero section number if the
symbol is defined in a section in the object file.
https://developer.apple.com/library/mac/documentation/DeveloperTools/Conceptual/MachORuntime/#//apple_ref/c/tag/nlist
After this change we do not look up the section when the assignment is created,
but instead look it up on demand and store it in Section, which is treated
as a cache if the symbol is a variable symbol.
This change also fixes a bug in MCExpr::FindAssociatedSection. Previously,
if we saw a subtraction, we would return the first referenced section, even in
cases where we should have been returning the absolute pseudo-section. Now we
always return the absolute pseudo-section for expressions that subtract two
section-derived expressions. This isn't always correct (e.g. if one of the
sections ends up being laid out at an absolute address), but it's probably
the best we can do without more context.
This allows us to remove code in two places where we appear to have been
working around this bug, in MachObjectWriter::markAbsoluteVariableSymbols
and in X86AsmPrinter::EmitStartOfAsmFile.
Re-applies r233595 (aka D8586), which was reverted in r233898.
Differential Revision: http://reviews.llvm.org/D8798
llvm-svn: 233995
Register coalescing can change the target of a RegPair hint to a
physreg; we should not crash on this. This also slightly improved the
way ARMBaseRegisterInfo::updateRegAllocHint() works.
llvm-svn: 233987
This prevents us from running out of registers in the backend.
Introducing stack malloc calls prevents the backend from recognizing the
inline asm operands as stack objects. When the backend recognizes a
stack object, it doesn't need to materialize the address of the memory
in a physical register. Instead it generates a simple SP-based memory
operand. Introducing a stack malloc forces the backend to find a free
register for every memory operand. 32-bit x86 simply doesn't have enough
registers for this to succeed in most cases.
Reviewers: kcc, samsonov
Differential Revision: http://reviews.llvm.org/D8790
llvm-svn: 233979
Summary:
The old requirement on GEP candidates being in bounds is unnecessary.
For off-bound GEPs, we still have
&B[i * S] = B + (i * S) * e = B + (i * e) * S
Test Plan: slsr_offbound_gep in slsr-gep.ll
Reviewers: meheff
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8809
llvm-svn: 233949
This makes it possible to use the same representation of llvm.eh.actions
in outlined handlers as we use in the parent function because i32's are
just constants that can be copied freely between functions.
I had to add a sentinel alloca to the list of child allocas so that we
don't try to sink the catch object into the handler. Normally, one would
use nullptr for this kind of thing, but TinyPtrVector doesn't support
null elements. More than that, its elements have to have a suitable
alignment. Therefore, I settled on this for my sentinel:
AllocaInst *getCatchObjectSentinel() {
return static_cast<AllocaInst *>(nullptr) + 1;
}
llvm-svn: 233947
Without this patch, we split the 256-bit vector into halves and produced something like:
movzwl (%rdi), %eax
vmovd %eax, %xmm0
vxorps %xmm1, %xmm1, %xmm1
vblendps $15, %ymm0, %ymm1, %ymm0 ## ymm0 = ymm0[0,1,2,3],ymm1[4,5,6,7]
Now, we eliminate the xor and blend because those zeros are free with the vmovd:
movzwl (%rdi), %eax
vmovd %eax, %xmm0
This should be the final fix needed to resolve PR22685:
https://llvm.org/bugs/show_bug.cgi?id=22685
llvm-svn: 233941
For code like this:
define <8 x i32> @load_v8i32() {
ret <8 x i32> <i32 7, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0>
}
We produce this AVX code:
_load_v8i32: ## @load_v8i32
movl $7, %eax
vmovd %eax, %xmm0
vxorps %ymm1, %ymm1, %ymm1
vblendps $1, %ymm0, %ymm1, %ymm0 ## ymm0 = ymm0[0],ymm1[1,2,3,4,5,6,7]
retq
There are at least 2 bugs in play here:
We're generating a blend when a move scalar does the same job using 2 less instruction bytes (see FIXMEs).
We're not matching an existing pattern that would eliminate the xor and blend entirely. The zero bytes are free with vmovd.
The 2nd fix involves an adjustment of "AddedComplexity" [1] and mostly masks the 1st problem.
[1] AddedComplexity has close to no documentation in the source.
The best we have is this comment: "roughly corresponds to the number of nodes that are covered".
It appears that x86 has bastardized this definition by inflating its values for some other
undocumented reason. For example, we have a pattern with "AddedComplexity = 400" (!).
I searched my way to this page:
https://groups.google.com/forum/#!topic/llvm-dev/5UX-Og9M0xQ
Differential Revision: http://reviews.llvm.org/D8794
llvm-svn: 233931
I'm playing with supporting custom stack map formats with statepoints. While
doing so, I noticed that the existing implementation didn't indicate inherently
unsized frames. This change essentially just ports the functionality that already
exists for the default StackMaps section to custom stackmaps.
llvm-svn: 233891
use these to add support for C++ static ctors/dtors to the Orc-lazy JIT in LLI.
Replace the trivial_retval_1 regression test - the new 'hello' test is covering
strictly more code.
llvm-svn: 233885
print the Objective-C runtime meta data for Mach-O files.
There are three types of Objective-C runtime meta data, Objc2 64-bit,
Objc2 32-bit and Objc1 32-bit. This prints the first of these types. The
changes to print the others will follow next.
llvm-svn: 233840
addl has higher throughput and this was needlessly picking a suboptimal
encoding, causing PR23098.
I wish there was a way of doing this without further duplicating tbl-
generated patterns, but so far I haven't found one.
llvm-svn: 233832
Summary:
This change teaches ScalarEvolution::isLoopBackedgeGuardedByCond to look
at edges within the loop body that dominate the latch. We don't do an
exhaustive search for all possible edges, but only a quick walk up the
dom tree.
This re-lands r233447. r233447 was reverted because it caused massive
compile-time regressions. This change has a fix for the same issue.
llvm-svn: 233829
Summary:
This is part 1 of fixes to address the problems described in
https://llvm.org/bugs/show_bug.cgi?id=22719.
The restriction to limit loop scales to 4,096 does not really prevent
overflows anymore, as the underlying algorithm has changed and does
not seem to suffer from this problem.
Additionally, artificially restricting loop scales to such a low number
skews frequency information, making loops of equal hotness appear to
have very different hotness properties.
The only loops that are artificially restricted to a scale of 4096 are
infinite loops (those loops with an exit mass of 0). This prevents
infinite loops from skewing the frequencies of other regions in the CFG.
At the end of propagation, frequencies are scaled to values that take no
more than 64 bits to represent. When the range of frequencies to be
represented fits within 61 bits, it pushes up the scaling factor to a
minimum of 8 to better distinguish small frequency values. Otherwise,
small frequency values are all saturated down at 1.
Tested on x86_64.
Reviewers: dexonsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8718
llvm-svn: 233826
v8.1a is renamed to an architecture, following the current entity naming approach.
The excess generic cpu is removed. Intended use: the "generic" cpu with the "v8.1a" subtarget feature
Reviewers: jmolloy
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8767
llvm-svn: 233811
We used to do this before refactorings around r225640.
Some clang users checked for _chk libcall availability using:
__has_builtin(__builtin___memcpy_chk)
When compiling with -fno-builtin, this is always true.
When passing -ffreestanding/-mkernel, which both imply -fno-builtin, we
end up with fortified libcalls, which isn't acceptable in a freestanding
environment which only provides their non-fortified counterparts.
Until we change clang and/or teach external users to check for availability
differently, disregard the "nobuiltin" attribute and TLI::has.
Workaround for PR23093.
llvm-svn: 233776
Under normal circumstances, use of CR bits is disabled when running at -O0, but
it is enabled by default otherwise, and if you have optnone functions, they'll
still generally be generated with crbits turned on (because nothing else turns
them off). FastISel can't handle most things dealing with i1 values when using
CR bits, and checks for that, but was not checking the return type on
functions; we can't fast-isel function calls with i1 return values either when
using CR bits for boolean values.
Fixes PR22664.
llvm-svn: 233775
This lets us catch exceptions in simple cases.
N.B. Things that do not work include (but are not limited to):
- Throwing from within a catch handler.
- Catching an object with a named catch parameter.
- 'CatchHigh' is fictitious, we aren't sure of its purpose.
- We aren't entirely efficient with regards to the number of EH states
that we generate.
- IP-to-State tables are sensitive to the order of emission.
llvm-svn: 233767
Even at -O0, we fall back to SDAG when we hit intrinsics, and if the intrinsic
is a memset/memcpy/etc. we might normally use vector types. At -O0, this is
probably not a good idea (because, if there is a bug in the lowering code,
there would be no good way to turn it off). At -O0, only use scalar preferred
types.
Related to PR22754.
llvm-svn: 233755
extended loads.
Implement the related target lowering hook so that the optimization has a better
estimation of the cost of an extension.
rdar://problem/19267165
llvm-svn: 233753
The existing code in getMemsetValue only handled integer-preferred types when
the fill value was not a constant. Make this more robust in two ways:
1. If the preferred type is a floating-point value, do the mul-splat trick on
the corresponding integer type and then bitcast.
2. If the preferred type is a vector, do the mul-splat trick on one vector
element, and then build a vector out of them.
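Point 1 can be sketched in C as follows (hypothetical, not the actual DAG code): splat a non-constant byte v across an integer by multiplying with 0x01010101, then reinterpret the bits as the preferred floating-point type.
#include <stdint.h>
#include <string.h>
float splat_byte_as_float(uint8_t v) {
  uint32_t bits = (uint32_t)v * 0x01010101u;  /* replicate v into all 4 bytes */
  float f;
  memcpy(&f, &bits, sizeof f);                /* the "bitcast" step */
  return f;
}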
Fixes PR22754 (although, we should also turn off use of vector types at -O0).
llvm-svn: 233749
These regression tests are supposed to test small code model support, but have
been XFAIL'd because we don't have an in-tree memory manager that can guarantee
a small-code-model compatible memory layout. Unfortunately, they can
occasionally pass if they get lucky with memory allocation, causing unexpected
passes on the bots. That's not very helpful.
I'm going to remove these until we have the infrastructure (small-code-model
compatible memory manager) to run them properly.
llvm-svn: 233722
I suggested this change in D7898 (http://llvm.org/viewvc/llvm-project?view=revision&revision=231354)
It improves the v4i64 case although not optimally. This AVX codegen:
vmovq {{.*#+}} xmm0 = mem[0],zero
vxorpd %ymm1, %ymm1, %ymm1
vblendpd {{.*#+}} ymm0 = ymm0[0],ymm1[1,2,3]
Becomes:
vmovsd {{.*#+}} xmm0 = mem[0],zero
Unfortunately, this doesn't completely solve PR22685. There are still at least 2 problems under here:
We're not handling v32i8 / v16i16.
We're not getting the FP / int domains right for instruction selection.
But since this patch alone appears to do no harm, reduces code duplication, and helps v4i64,
I'm submitting this patch ahead of fixing the above.
Differential Revision: http://reviews.llvm.org/D8341
llvm-svn: 233704
So far, we do not yet support any instruction specific to zEC12.
Most of the facilities added with zEC12 are indeed not very useful
to compiler code generation, but there is one exception: the
miscellaneous-extensions facility provides the RISBGN instruction,
which is a variant of RISBG that does not set the condition code.
Add support for this facility, MC support for RISBGN, and CodeGen
support for preferring RISBGN over RISBG on zEC12, unless we can
actually make use of the condition code set by RISBG.
llvm-svn: 233690
We already exploit a number of instructions specific to z196,
but not yet POPCNT. Add support for the population-count
facility, MC support for the POPCNT instruction, CodeGen
support for using POPCNT, and implement the getPopcntSupport
TargetTransformInfo hook.
llvm-svn: 233689
This hooks up the TargetTransformInfo machinery for SystemZ,
and provides an implementation of getIntImmCost.
In addition, the patch adds the isLegalICmpImmediate and
isLegalAddImmediate TargetLowering overrides, and updates
a couple of test cases where we now generate slightly
better code.
llvm-svn: 233688
it more liberally.
SplitVecOp_TRUNCATE has logic for recursively splitting oversize vectors
that need more than one round of splitting to become legal. There are many
other ISD nodes that could benefit from this logic, so factor it out and
use it for FP_TO_UINT,FP_TO_SINT,SINT_TO_FP,UINT_TO_FP and FTRUNC.
llvm-svn: 233681
We used to miss non-Q YMM integer vectors, and, non-Q/D XMM integer
vectors.
While there, change the v4i32 patterns to prefer MOVNTDQ.
llvm-svn: 233668
Unify the error messages for the various tools when `verifyModule()`
fails on an input module. The "brave new way" is:
lltool: path/to/input.ll: error: input module is broken!
llvm-svn: 233667
Most of these checks were already in the `Verifier` so this is more of a
cleanup. Now almost everything is over there.
Now that we require a `name:` for `MDGlobalVariable`, add a check in
`LLParser` for it.
llvm-svn: 233657
Move over some more checks from `DIDescriptor::Verify()`, and change
`LLParser` to require non-null `file:` fields in compile units.
I've ignored the comment in test/Assembler/metadata-null-operands.ll
since I disagree with it. At the time that test was written (r229960),
the debug info verifier wasn't on by default, so my comment there is in
the context of not expecting the verifier to be useful. It is now, and
besides that, since r233394 we can check when parsing textual IR whether
an operand is null that shouldn't be.
llvm-svn: 233654
Generate tables in the .xdata section representing what actions to take
when an exception is thrown. This currently fills in state for
cleanups, catch handlers are still unfinished.
llvm-svn: 233636
When we expand the RET_ReallyLR pseudo instruction we also need to transfer the
implicit operands.
The return register is an implicit operand and without it the liveness
calculation generates an incorrect live-out set for the patchpoint.
This fixes rdar://problem/19068476.
llvm-svn: 233635
This fixes the visibility of symbols in certain edge cases involving aliases
with multiple levels of indirection.
Fixes PR19582.
Differential Revision: http://reviews.llvm.org/D8586
llvm-svn: 233595
Adds a test to verify the behavior that r233153 restored: 'optnone'
does not spuriously disable the DAG combiner, and in fact there are
cases where the DAG combiner must run (even at -O0 or 'optnone') in
order for codegen to succeed.
Differential Revision: http://reviews.llvm.org/D8614
llvm-svn: 233584
When a new SM architecture is introduced, it is only supported by the
current PTX version and later. Make sure we are using at least the
minimum PTX version for the target architecture.
This also removes support for PTX ISA < 3.2.
llvm-svn: 233583
Add operand checks for `MDLexicalBlock` and `MDLexicalBlockFile`. Like
`MDLocalVariable` and `MDLocation`, these nodes always require a scope.
There was no test bitrot to fix here (just updated the serialization
tests in test/Assembler/mdlexicalblock.ll).
llvm-svn: 233561
Momentarily (but never in tree), the `scope:` field was called
`parent:`. Apparently a few testcases were left behind with "parent" in
the name, so rename them.
llvm-svn: 233560
Compiling the following function with -O0 would crash, since LLVM would
hit an assertion in getTestUnderMaskCond:
int test(unsigned long x)
{
return x >= 0 && x <= 15;
}
Fixed by detecting the case in the caller of getTestUnderMaskCond.
llvm-svn: 233541
Summary:
The 'R' constraint is actually supposed to be much more complicated than
this and is defined in terms of whether it will cause macro expansion in
the assembler. 'R' is getting less useful due to architecture changes and
ought to be replaced by other constraints. We therefore implement 9-bit
offsets which will work for all subtargets and all instructions.
Reviewers: vkalintiris
Reviewed By: vkalintiris
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8440
llvm-svn: 233537
The asm syntax for the 32-bit rotate-and-mask instructions can take a 32-bit
bitmask instead of an (mb, me) pair. This syntax is not specified in the Power
ISA manual, but is accepted by GNU as, and is documented in IBM's Assembler
Language Reference. The GNU Multiple Precision Arithmetic Library (gmp)
contains assembly that uses this syntax.
To implement this, I moved the isRunOfOnes utility function from
PPCISelDAGToDAG.cpp to PPCMCTargetDesc.h.
llvm-svn: 233483
PPC_FP128 is really the sum of two consecutive doubles, where the first double
is always stored first in memory, regardless of the target endianness. The
memory layout of i128, however, depends on the target endianness, and so we
can't fold this without target endianness information. As a result, we must not
do this folding in lib/IR/ConstantFold.cpp (it could be done instead in
Analysis/ConstantFolding.cpp, but that's not done now).
Fixes PR23026.
llvm-svn: 233481
Loosen one check from r233446: as long as `DIBuilder` requires a
non-null type for every subprogram, we should allow a null type array.
Also add tests for the rest of `MDSubroutineType`, which were somehow
missing.
llvm-svn: 233468
Fix testcases that don't pass the verifier after a WIP patch to check
`MDSubprogram` operands more effectively. I found the following issues:
- When `isDefinition: false`, the `variables:` field might point at
`!{i32 786468}`, or at a tuple that pointed at an empty tuple with
the comment "previously: invalid DW_TAG_base_type" (I vaguely recall
adding those comments during an upgrade script). In these cases, I
just dropped the array.
- The `variables:` field might point at something like `!{!{!8}}`,
where `!8` was an `MDLocation`. I removed the extra layer of
indirection.
- Invalid `type:` (not an `MDSubroutineType`).
llvm-svn: 233466
Summary:
This change teaches ScalarEvolution::isLoopBackedgeGuardedByCond to look
at edges within the loop body that dominate the latch. We don't do an
exhaustive search for all possible edges, but only a quick walk up the
dom tree.
Reviewers: atrick, hfinkel
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8627
llvm-svn: 233447
Change `llc` and `opt` to run `verifyModule()`. This ensures that we
check the full module before `FunctionPass::doInitialization()` ever
gets called (I was getting crashes in `DwarfDebug` instead of verifier
failures when testing a WIP patch that checks operands of compile
units). In `opt`, also move up debug-info-stripping so that it still
runs before verification.
There was a fair bit of broken code that was sitting in tree.
Interestingly, some were cases of a `select` that referred to itself in
`-instcombine` tests (apparently an intermediate result). I split them
off to `*-noverify.ll` tests with RUN lines like this:
opt < %s -S -disable-verify -instcombine | opt -S | FileCheck %s
This avoids verifying the input file (so we can get the broken code into
`-instcombine`), but still verifies the output with a second call to
`opt` (to verify that `-instcombine` will clean it up like it should).
llvm-svn: 233432
Fix debug info in these tests, which started failing with a WIP patch to
verify compile units and types. The problems look like they were all
caused by bitrot. They fell into these categories:
- Using `!{i32 0}` instead of `!{}`.
- Using `!{null}` instead of `!{}`.
- Using `!MDExpression()` instead of `!{}`.
- Using `!8` instead of `!{!8}`.
- `file:` references that pointed at `MDCompileUnit`s instead of the
same `MDFile` as the compile unit.
- `file:` references that were numerically off-by-one (or off-by-ten).
llvm-svn: 233415
method.
This enables the instprinter to print a different system register name based on
the feature bits of the per-function subtarget.
Differential Revision: http://reviews.llvm.org/D8668
llvm-svn: 233412
Tailcalls are only OK with forwarded sret pointers. With explicit sret,
one approximation is to check that the pointer isn't an Instruction, as
in that case it might point into some local memory (alloca). That's not
OK with tailcalls.
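A hypothetical C illustration of the constraint (the names are placeholders):
struct Big { int x[32]; };
struct Big make_big(void);
struct Big forward_big(void) {
  /* The hidden sret pointer is simply forwarded from our caller, so a
     tail call to make_big() is safe. If it could instead point at one
     of our own locals (an alloca), tail-calling would leave it pointing
     into a dead stack frame. */
  return make_big();
}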
Explicit sret counterpart to r233409.
Differential Revison: http://reviews.llvm.org/D8510
llvm-svn: 233410
Tailcalls are only OK with forwarded sret pointers. With sret demotion,
they're not, as we'd have a pointer into a soon-to-be-dead stack frame.
Differential Revison: http://reviews.llvm.org/D8510
llvm-svn: 233409
Expose bpf pseudo load instruction via intrinsic. It is used by front-ends that
can encode file descriptors directly into IR instead of relying on relocations.
llvm-svn: 233396
nodes.
When a node is terminal it is pushed at the end of the list of the copies to
coalesce instead of being completely ignored. In effect, this reduces its
priority over non-terminal nodes.
Because of that, we do not miss the rematerialization opportunities, nor the
copies that can be merged with more complex, than the terminal rule,
interference checks.
Related to PR22768.
llvm-svn: 233395
Change `LLParser` to require a non-null `scope:` field for both
`MDLocation` and `MDLocalVariable`. There's no need to wait for the
verifier for this check. This also allows their `::getImpl()` methods
to assert that the incoming scope is non-null.
llvm-svn: 233394
Check fields from `MDLocalVariable` and `MDGlobalVariable` and change
the accessors to downcast to the right types. `getType()` still returns
`Metadata*` since it could be an `MDString`-based reference.
Since local variables require non-null scopes, I also updated `LLParser`
to require a `scope:` field.
A number of testcases had grown bitrot and started failing with this
patch; I committed them separately in r233349. If I just broke your
out-of-tree testcases, you're probably hitting similar problems (so have
a look there).
llvm-svn: 233389
Dropping old debug format requires the entire module to be read upfront.
This was failing only with the gold plugin, but that is just because
llvm-link was not upgrading metadata.
The new testcase using llvm-link shows the problem.
llvm-svn: 233381
This re-adds float2int to the tree, after fixing PR23038. It turns
out the argument to APSInt() is true-if-unsigned, rather than
true-if-signed :(. Added testcase and explanatory comment.
llvm-svn: 233370
"Fix the MachineScheduler's logic for updating ready times for in-order.
Now the scheduler updates a node's ready time as soon as it is
scheduled, before releasing dependent nodes."
This fix was only made in one variant of the ScheduleDAGMI driver.
Francois de Ferriere reported the issue in the other bit of code where
it was also needed.
I never got around to coming up with a test case, but it's an
obvious fix that shouldn't be delayed any longer.
I'll try to refactor this code a little better.
I did verify performance on a wide variety of targets and saw no
negative impact with this fix.
llvm-svn: 233366
This was discussed a while back and I left it optional for migration. Since it's been far more than the 'week or two' that was discussed, time to actually make this mandatory.
llvm-svn: 233357
This patch adds support for explicitly provided spill slots in the GC arguments of a gc.statepoint. This is somewhat analogous to gcroot, but leverages the STATEPOINT MI node and StackMap infrastructure. The motivation for this is:
1) The stack spilling code for gc.statepoints hasn't advanced as fast as I'd like. One major option is to give up on doing spilling in the backend and do it at the IR level instead. We'd give up the ability to have gc values in registers, but that's a minor cost in practice. We are not necessarily moving in that direction, but having the ability to prototype such a thing cheaply is interesting.
2) I want to port the gcroot lowering to use the statepoint infrastructure. Given the metadata printers for gcroot expect a fixed set of stack roots, it's easiest to just reuse the explicit stack slots and pass them directly to the underlying statepoint.
I'm holding off on the documentation for the new feature until I'm reasonably sure this is going to stick around.
llvm-svn: 233356
This test returns nonnative integer types which aren't supported on all targets.
The real issue with the SelectionDAG scheduler is with x86 EFLAGS.
llvm-svn: 233355
We don't have any logic to emit those tables yet, so the SDAG lowering
of this intrinsic is just a stub. We can see the intrinsic in the
prepared IR, though.
llvm-svn: 233354
It can happen (by line CurSU->isPending = true; // This SU is not in
AvailableQueue right now.) that a SUnit is marked as available but is
not in the AvailableQueue. For SUnit being selected for scheduling
both conditions must be met.
This patch mainly defensively protects against invalidly removing a node
from a queue. Sometimes nodes are marked isAvailable but are not in
the queue because they have been deferred due to some hazard.
Patch by Pawel Bylica!
llvm-svn: 233351
Fix testcases whose variables are invalid. I'm working on a patch that
adds `Verifier` checks for `MDLocalVariable` (and `MDGlobalVariable`),
and these failed because:
- `scope:` fields need to point at `MDLocalScope` and can't be null.
- `file:` fields need to point at `MDFile`.
- `inlinedAt:` fields need to point at `MDLocation`.
llvm-svn: 233349
Summary:
The ARM backend can use a loop to implement copying byval parameters before
a call. In non-thumb2 mode it uses a constant pool load to materialize the
trip count. For targets that need movt instead (e.g. Native Client), use
the same code as in thumb2 mode to materialize the trip count.
Reviewers: jfb, t.p.northover
Differential Revision: http://reviews.llvm.org/D8442
llvm-svn: 233324
Check accessors of `MDLocation`, and change them to `cast<>` down to the
right types. Also add type-safe factory functions.
All the callers that handle broken code need to use the new versions of
the accessors (`getRawScope()` instead of `getScope()`) that still
return `Metadata*`. This is also necessary for things like
`MDNodeKeyImpl<MDLocation>` (in LLVMContextImpl.h) that need to unique
the nodes when their operands might still be forward references of the
wrong type.
In the `Value` hierarchy, consumers that handle broken code use
`getOperand()` directly. However, debug info nodes have a ton of
operands, and their order (even their existence) isn't stable yet. It's
safer and more maintainable to add an explicit "raw" accessor on the
class itself.
llvm-svn: 233322
There is something in link.exe that requires a relocation to use a
global symbol. Not doing so breaks the chrome build on windows.
This patch sets isWeak for that to work. To compensate,
we then need to look past those symbols when not creating relocations.
This patch includes an ELF test that matches GNU as behaviour.
I am still reducing the chrome build issue and will add a test
once that is done.
llvm-svn: 233318
The "|&" operator isn't POSIX, so it can fail depending on the host's
default shell. Avoid it.
There were also a couple of places that did "2>1", but this creates a
file called "1". They clearly meant "2>&1".
llvm-svn: 233309
This reverts commit r233254, effectively reapplying r233164 (and its
successors), with an additional testcase for when subprograms match
exactly. This fixes PR22792 (again).
I'm using the same approach, but I've moved up the call to
`stripReplacedSubprograms()`. The function pointers need to be dropped
before mapping any metadata from the source module, or else this can
drop the function from new subprograms that have merged (via Metadata
uniquing) with the old ones. Dropping the pointers first prevents them
from merging.
**** The original commit message follows. ****
Linker: Drop function pointers for overridden subprograms
Instead of dropping subprograms that have been overridden, just set
their function pointers to `nullptr`. This is a minor adjustment to the
stop-gap fix for PR21910 committed in r224487, and fixes the crasher
from PR22792.
The problem that r224487 put a band-aid on: how do we find the canonical
subprogram for a `Function`? Since the backend currently relies on
`DebugInfoFinder` (which does a naive in-order traversal of compile
units and picks the first subprogram) for this, r224487 tried dropping
non-canonical subprograms.
Dropping subprograms fails because the backend *also* builds up a map
from subprogram to compile unit (`DwarfDebug::SPMap`) based on the
subprogram lists. A missing subprogram causes segfaults later when an
inlined reference (such as in this testcase) is created.
Instead, just drop the `Function` pointer to `nullptr`, which nicely
mirrors what happens when an already-inlined `Function` is optimized
out. We can't really be sure that it's the same definition anyway, as
the testcase demonstrates.
This still isn't completely satisfactory. Two flaws at least that I can
think of:
- I still haven't found a straightforward way to make this symmetric
in the IR. (Interestingly, the DWARF output is already symmetric,
and I've tested for that to be sure we don't regress.)
- Using `DebugInfoFinder` to find the canonical subprogram for a
function is kind of crazy. We should just attach metadata to the
function, like this:
define weak i32 @foo(i32, i32) !dbg !MDSubprogram(...) {
llvm-svn: 233302
Summary:
With the introduction of MarkPendingLoopPredicates in r157092, I don't
think the bailout is needed anymore.
Reviewers: atrick, nicholas
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8624
llvm-svn: 233296
Summary:
This patch enhances SLSR to handle another candidate form &B[i * S]. If
we found two candidates
S1: X = &B[i * S]
S2: Y = &B[i' * S]
and S1 dominates S2, we can replace S2 with
Y = &X[(i' - i) * S]
Test Plan:
slsr-gep.ll
X86/no-slsr.ll: verify that we do not run SLSR on GEPs that already fit into
an addressing mode
Reviewers: eliben, atrick, meheff, hfinkel
Reviewed By: hfinkel
Subscribers: sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D7459
llvm-svn: 233286
This patch teaches fast-isel how to select 128-bit vector load instructions.
Added test CodeGen/X86/fast-isel-vecload.ll
Differential Revision: http://reviews.llvm.org/D8605
llvm-svn: 233270
those are in the same basic block.
The previous approach was the topological order of the basic block.
By default this rule is disabled.
Related to PR22768.
llvm-svn: 233241
Otherwise, broken input modules can cause assertions. I've updated two
of the testcases that started failing (modules that had `Require` flags
but didn't meet their own requirements), but Rafael and I decided that
test/Linker/2011-08-22-ResolveAlias.ll should just be deleted outright
-- it's a leftover of the way llvm-gcc used to implement weakref.
llvm-svn: 233229
Summary:
`ComputeNumSignBits` returns incorrect results for `srem` instructions.
This change fixes the issue and adds a test case.
Reviewers: nadav, nicholas, atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8600
llvm-svn: 233225
This patch adds supports for the vector constant folding of TRUNCATE and FP_EXTEND instructions and tidies up the SINT_TO_FP and UINT_TO_FP instructions to match.
It also moves the vector constant folding for the FNEG and FABS instructions to use the DAG.getNode() functionality like the other unary instructions.
Differential Revision: http://reviews.llvm.org/D8593
llvm-svn: 233224
As dblaikie pointed out, if I stop setting `emissionKind: 2` then the
backend won't do magical things on Linux vs. Darwin. I had wrongly
assumed that there were stricter requirements on the input if we weren't
in line-tables-only mode, but apparently not.
With that knowledge, clean up this testcase a little more.
- Set `emissionKind: 1`.
- Add back checks for the weak version of @foo.
- Check more robustly that we have the right subprograms by checking
the `DW_AT_decl_file` and `DW_AT_decl_line` which now show up.
- Check the line table in isolation (since it's no longer doubling as
an indirect test for the subprogram of the weak version of @foo).
llvm-svn: 233221
We don't have any logic to emit those tables yet, so the sdag lowering
of this intrinsic is just a stub. We can see the intrinsic in the
prepared IR, though.
llvm-svn: 233209
Rewrite the checks from r233164 that I temporarily disabled in r233165.
It turns out that the line-tables only debug info we emit from `llc` is
(intentionally) different on Linux than on Darwin. r218129 started
skipping emission of subprograms with no inlined subroutines, and
r218702 was a spiritual revert of that behaviour for Darwin.
I think we can still test this in a platform-neutral way.
- Stop checking for the possibly missing `DW_TAG_subprogram` defining
the debug info for the real version of `@foo`.
- Start checking the line tables, ensuring that the right debug info
was used to generate them (grabbing `DW_AT_low_pc` from the compile
unit).
- I changed up the line numbers used in the "weak" version so it's
easier to follow.
This should hopefully finish off PR22792.
llvm-svn: 233207
This patch adds Hardware Transactional Memory (HTM) support as introduced in ISA 2.07
(POWER8). The intrinsic support is based on the GCC one [1], but currently only the
'PowerPC HTM Low Level Built-in Functions' are implemented.
The HTM instructions follow the RC ones and the transaction initiation result
is set on CR0 (with the exception of tcheck). The current approach is to create a
register copy from CR0 to a GPR and compare it. Although this is suboptimal, since
the branch could be taken directly by comparing the CR0 value, it generates correct
code for both the test-and-branch and the plain return-value cases. A possible future
optimization would be to eliminate the MFCR instruction and branch directly.
HTM usage requires a reasonably recent kernel with PPC HTM enabled. Tested on
powerpc64 and powerpc64le.
This is sent along with a clang patch to enable the builtins and the option switch.
[1] https://gcc.gnu.org/onlinedocs/gcc/PowerPC-Hardware-Transactional-Memory-Built-in-Functions.html
Phabricator Review: http://reviews.llvm.org/D8247
llvm-svn: 233204
Some languages, such as Go, have pre-defined structure types (e.g. "string"
is essentially a pointer/length pair) or pre-defined "typedef" types
(e.g. "error" is essentially a typedef for a specific interface type).
Such types do not have an associated source location, so a Go frontend would
be correct not to associate a file name with them.
This change relaxes the DIType verifier to permit unlocated types with
these tags.
Differential Revision: http://reviews.llvm.org/D8588
llvm-svn: 233200
This patch allows AVX blend instructions to handle insertion into the low
element of a 256-bit vector for the appropriate data types.
For f32, instead of:
vblendps $1, %xmm1, %xmm0, %xmm1 ## xmm1 = xmm1[0],xmm0[1,2,3]
vblendps $15, %ymm1, %ymm0, %ymm0 ## ymm0 = ymm1[0,1,2,3],ymm0[4,5,6,7]
we get:
vblendps $1, %ymm1, %ymm0, %ymm0 ## ymm0 = ymm1[0],ymm0[1,2,3,4,5,6,7]
For f64, instead of:
vmovsd %xmm1, %xmm0, %xmm1 ## xmm1 = xmm1[0],xmm0[1]
vblendpd $3, %ymm1, %ymm0, %ymm0 ## ymm0 = ymm1[0,1],ymm0[2,3]
we get:
vblendpd $1, %ymm1, %ymm0, %ymm0 ## ymm0 = ymm1[0],ymm0[1,2,3]
For the hardware-neglected integer data types, I left a TODO comment in the
code and added regression tests for a follow-on patch.
Differential Revision: http://reviews.llvm.org/D8609
llvm-svn: 233199
1. There were no CHECK-LABELs, so we could match instructions from the wrong function.
2. The use of zero operands meant multiple xor instructions could match some CHECKs.
3. The test was over-specified to need a Sandybridge CPU and Darwin triple.
llvm-svn: 233198
The previous logic was to first try without relocations at all
and failing that stop on the first defined symbol.
That was inefficient and incorrect in the case part of the
expression could be simplified and another part could not
(see included test).
We now stop the evaluation when we get to a variable whose value
can change (i.e. is weak).
llvm-svn: 233187
This ensures that we're building and testing the CompileOnDemand layer, at least
in a basic way.
Currently x86-64 only, and with limited to no library calls enabled (depending
on host platform). Patches welcome. ;)
To enable access to the lazy JIT, this patch replaces the '-use-orcmcjit' lli
option with a new option:
'-jit-kind={ mcjit | orc-mcjit | orc-lazy }'.
All regression tests are updated to use the new option, and one trivial test of
the new lazy JIT is added.
llvm-svn: 233182
In r233009 we gained specific check-llvm-* build targets for invoking
specific parts of the test suite, but they were copying the
dependencies for check-all, rather than just listing the dependencies
for check-llvm.
This moves the creation of these targets next to the check-llvm
target, and uses that target's configuration rather than the check-all
config.
llvm-svn: 233174
At least one Linux bot [1] doesn't like my dwarfdump checks, so I've
disabled those until I can investigate what's going on there. I'll
continue to track this in PR22792.
[1]: http://bb.pgr.jp/builders/cmake-llvm-x86_64-linux/builds/22863
llvm-svn: 233165
Instead of dropping subprograms that have been overridden, just set
their function pointers to `nullptr`. This is a minor adjustment to the
stop-gap fix for PR21910 committed in r224487, and fixes the crasher
from PR22792.
The problem that r224487 put a band-aid on: how do we find the canonical
subprogram for a `Function`? Since the backend currently relies on
`DebugInfoFinder` (which does a naive in-order traversal of compile
units and picks the first subprogram) for this, r224487 tried dropping
non-canonical subprograms.
Dropping subprograms fails because the backend *also* builds up a map
from subprogram to compile unit (`DwarfDebug::SPMap`) based on the
subprogram lists. A missing subprogram causes segfaults later when an
inlined reference (such as in this testcase) is created.
Instead, just drop the `Function` pointer to `nullptr`, which nicely
mirrors what happens when an already-inlined `Function` is optimized
out. We can't really be sure that it's the same definition anyway, as
the testcase demonstrates.
This still isn't completely satisfactory. Two flaws at least that I can
think of:
- I still haven't found a straightforward way to make this symmetric
in the IR. (Interestingly, the DWARF output is already symmetric,
and I've tested for that to be sure we don't regress.)
- Using `DebugInfoFinder` to find the canonical subprogram for a
function is kind of crazy. We should just attach metadata to the
function, like this:
define weak i32 @foo(i32, i32) !dbg !MDSubprogram(...) {
llvm-svn: 233164
Reverts the code change from r221168 and the relevant test.
It was a mistake to disable the combiner, and based on the ultimate
definition of 'optnone' we shouldn't have considered the test case
as failing in the first place.
llvm-svn: 233153
A load from an invariant location is assumed to not alias any otherwise potentially aliasing stores. Our implementation only applied this rule to store instructions themselves, whereas it should apply to any memory accessing instruction. This results in both FRE and PRE becoming more effective at eliminating invariant loads.
Note that as a follow on change I will likely move this into AliasAnalysis itself. That's where the TBAA constant flag is handled and the semantics are essentially the same. I'd like to separate the semantic change from the refactoring and thus have extended the hack that's already in MemoryDependenceAnalysis for this change.
Differential Revision: http://reviews.llvm.org/D8591
llvm-svn: 233140
In a subtraction of the form A - B, if B is weak, there is no way to represent
that on ELF since all relocations add the value of a symbol.
llvm-svn: 233139
We can't use TargetFrameLowering::getFrameIndexOffset directly, because
Win64 really wants the offset from the stack pointer at the end of the
prologue. Instead, use X86FrameLowering::getFrameIndexOffsetFromSP(),
which is a pretty close approximation of that. It fails to handle cases
with interestingly large stack alignments, which is pretty uncommon on
Win64 and is left as a TODO.
llvm-svn: 233137
A while ago llvm-cov gained support for clang's instrumentation based
profiling in addition to its gcov support, and subcommands were added
to choose which behaviour to use. When no subcommand was specified, we
fell back to gcov compatibility with a warning that a subcommand would
be required in the future. Now, we require the subcommand.
Note that if the basename of llvm-cov is gcov (via symlink or
hardlink, for example), we still use the gcov compatible behaviour
with no subcommand required.
llvm-svn: 233132
It seems one Windows bot has been failing since I added line table linking to
llvm-dsymutil (see the r232333 commit thread).
Disable the affected tests until I can figure out what's happening.
llvm-svn: 233130
This patch tries to merge duplicate landing pads when they branch to a common shared target.
Given IR that looks like this:
lpad1:
%exn = landingpad {i8*, i32} personality i32 (...)* @__gxx_personality_v0
cleanup
br label %shared_resume
lpad2:
%exn2 = landingpad {i8*, i32} personality i32 (...)* @__gxx_personality_v0
cleanup
br label %shared_resume
shared_resume:
call void @fn()
ret void
}
We can rewrite the users of both landing pad blocks to use one of them. This will generally allow the shared_resume block to be merged with the common landing pad as well.
Without this change, tail duplication would likely kick in - creating N (2 in this case) copies of the shared_resume basic block.
Differential Revision: http://reviews.llvm.org/D8297
llvm-svn: 233125
Assert that this doesn't fire - I'll remove all of this later, but just
leaving it in for a while in case this is firing & we just don't have
test coverage.
llvm-svn: 233116
This is the IR optimizer follow-on patch for D8563: the x86 backend patch
that converts this kind of shuffle back into a vperm2.
This is also a continuation of the transform that started in D8486.
In that patch, Andrea suggested that we could convert vperm2 intrinsics that
use zero masks into a single shuffle.
This is an implementation of that suggestion.
Differential Revision: http://reviews.llvm.org/D8567
llvm-svn: 233110
This caused PR23008, compiles failing with: "Use still stuck around after Def is
destroyed: %.sroa.speculated"
Also reverting follow-up r233064.
llvm-svn: 233105
vperm2x128 instructions have the special ability (aka free hardware capability)
to shuffle zero values into a vector.
This patch recognizes that type of shuffle and generates the appropriate
control byte.
https://llvm.org/bugs/show_bug.cgi?id=22984
Differential Revision: http://reviews.llvm.org/D8563
llvm-svn: 233100
The main verifier already recurses through the other entry points, so we
might as well descend here too.
This temporarily duplicates some work already done in
`verifyDebugInfo()`, but eventually I'll be removing the other side.
llvm-svn: 233095
Summary:
Previous behaviour of 'R' and 'm' has been preserved for now. They will be
improved in subsequent commits.
The offset permitted by ZC varies according to the subtarget since it is
intended to match the restrictions of the pref, ll, and sc instructions.
The restrictions on these instructions are:
* For microMIPS: 12-bit signed offset.
* For Mips32r6/Mips64r6: 9-bit signed offset.
* Otherwise: 16-bit signed offset.
Reviewers: vkalintiris
Reviewed By: vkalintiris
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8414
llvm-svn: 233063
It is possible to have code that converts from integer to float, performs operations then converts back, and the result is provably the same as if integers were used.
This can come from different sources, but the most obvious is a helper function that uses floats but whose arguments at an inlined callsite are integers.
This pass considers all integers requiring a bitwidth less than or equal to the bitwidth of the mantissa of a floating point type (23 for floats, 52 for doubles) as exactly representable in floating point.
To reduce the risk of harming efficient code, the pass only attempts to perform complete removal of inttofp/fptoint operations, not just move them around.
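As a quick sanity check of that representability bound (a Python sketch, not part of the patch): a 23-bit mantissa plus the implicit leading bit makes every integer up to 2**24 exact in a 32-bit float, while 2**24 + 1 is not.
import struct
def roundtrip_f32(n):
    # Store n as a 32-bit float and read it back.
    return struct.unpack('f', struct.pack('f', float(n)))[0]
assert roundtrip_f32(2**24) == 2**24          # exactly representable
assert roundtrip_f32(2**24 + 1) != 2**24 + 1  # rounds away, so not exact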
llvm-svn: 233062
While the uitofp scalar constant folding treats an integer as an unsigned value (from lang ref):
%X = sitofp i8 -1 to double ; yields double:-1.0
%Y = uitofp i8 -1 to double ; yields double:255.0
The vector constant folding was always using sitofp:
%X = sitofp <2 x i8> <i8 -1, i8 -1> to <2 x double> ; yields <double -1.0, double -1.0>
%Y = uitofp <2 x i8> <i8 -1, i8 -1> to <2 x double> ; yields <double -1.0, double -1.0>
This patch fixes this so that the correct opcode is used for sitofp and uitofp.
%X = sitofp <2 x i8> <i8 -1, i8 -1> to <2 x double> ; yields <double -1.0, double -1.0>
%Y = uitofp <2 x i8> <i8 -1, i8 -1> to <2 x double> ; yields <double 255.0, double 255.0>
Differential Revision: http://reviews.llvm.org/D8560
llvm-svn: 233033
Continue to simplify the `DIDescriptor` subclasses, so that they behave
more like raw pointers. Remove `getRaw()`, replace it with an
overloaded `get()`, and overload the arrow and cast operators. Two
testcases started to crash on the arrow operators with this change
because of `scope:` references that weren't real scopes. I fixed them.
Soon I'll add verifier checks for them too.
This also adds explicit dereference operators. Previously, the builtin
dereference against `operator MDNode *()` would have worked, but now the
builtins are ambiguous.
llvm-svn: 233030
There is now a canonical symbol at the end of a section that different
passes can request.
This also allows us to assert that we don't switch back to a section whose
end symbol has already been printed.
llvm-svn: 233026
The pass used to be enabled by default with CodeGenOpt::Less (-O1).
This is too aggressive, considering the pass indiscriminately merges
all globals together.
Currently, performance doesn't always improve, and on code that uses
few globals (e.g., the odd file- or function-static), it is more often
than not degraded by the optimization. Lengthy discussion can be found
on llvmdev (AArch64-focused; ARM has similar problems):
http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-February/082800.html
Also, it makes tooling and debuggers less useful when dealing with
globals and data sections.
GlobalMerge needs to better identify those cases that benefit, and this
will be done separately. In the meantime, move the pass to run with
-O3 rather than -O1, on both ARM and AArch64.
llvm-svn: 233024
This enables very common cases to switch to the
smaller encoding.
All of the standard LLVM canonicalizations of comparisons
are the opposite of what we want. Compares with constants
are moved to the RHS, but the first operand can be an inline
immediate, literal constant, or SGPR using the 32-bit VOPC
encoding.
There are additional bad canonicalizations that should
also be fixed, such as canonicalizing ge x, k to gt x, (k - 1)
if this makes the constant no longer an inline immediate value.
llvm-svn: 232988
This change is incorrect since it converts double rounding into single rounding,
which can produce different results. Instead this optimization will be done by
modifying Clang's codegen to not produce double rounding in the first place.
This reverts commit r232954.
llvm-svn: 232962
Specifically when the conversion is done in two steps, f16 -> f32 -> f64.
For example:
%1 = tail call float @llvm.convert.from.fp16.f32(i16 %0)
%conv = fpext float %1 to double
to:
vcvtb.f64.f16
llvm-svn: 232954
Fixing sign extension in makeLibCall for MIPS64. In MIPS64 architecture all
32 bit arguments (int, unsigned int, float 32 (soft float)) must be sign
extended. This fixes test "MultiSource/Applications/oggenc/".
Patch by Strahinja Petrovic.
Differential Revision: http://reviews.llvm.org/D7791
llvm-svn: 232943
Because the operands of a vector SETCC node can be of a different type from the
result (and often are), it can happen that even if we'd prefer to widen the
result type of the SETCC, the operands have been split instead. In this case,
the SETCC result also must be split. This mirrors what is done in
WidenVecRes_SELECT, and should be NFC elsewhere because if the operands are not
widened the following calls to GetWidenedVector will assert (which is what was
happening in the test case).
llvm-svn: 232935
A build directory with a name like `build-Werror` would hit a false
positive on these `CHECK-NOT`s before, since the actual error line looks
like:
.../build-Werror/bin/llvm-as <stdin>:1:2: error: ...
Switch to using:
CHECK-NOT: error:
(note the trailing colon) to avoid matching almost any file path.
llvm-svn: 232917
strchr("123!", C) != nullptr is a common pattern to check if C is one
of 1, 2, 3 or !. If the largest element of the string is smaller than
the target's register size we can easily create a bitfield and just
do a simple test for set membership.
int foo(char C) { return strchr("123!", C) != nullptr; } now becomes
cmpl $64, %edi ## range check
sbbb %al, %al
movabsq $0xE000200000001, %rcx
btq %rdi, %rcx ## bit test
sbbb %cl, %cl
andb %al, %cl ## and the two conditions
andb $1, %cl
movzbl %cl, %eax ## returning an int
ret
(imho the backend should expand this into a series of branches, but
that's a different story)
The code is currently limited to bit fields that fit in a register, so
usually 64 or 32 bits. Sadly, this misses anything using alpha chars
or {}. This could be fixed by just emitting an i128 bit field, but that
can generate really ugly code so we have to find a better way. To some
degree this is also recreating switch lowering logic, but we can't
simply emit a switch instruction and thus change the CFG within
instcombine.
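A Python sketch of the bit-set construction used above (the constant matches the movabsq immediate; the terminating NUL is part of the set because strchr(s, 0) returns a pointer to the terminator, which is non-null):
chars = "123!\0"                 # the string's characters plus its NUL terminator
mask = 0
for c in chars:
    mask |= 1 << ord(c)
assert mask == 0xE000200000001   # the immediate loaded by movabsq above
def in_set(c):
    # Mirrors the generated code: a range check against 64, then a bit test.
    return ord(c) < 64 and (mask >> ord(c)) & 1 == 1
assert in_set("2") and not in_set("4")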
llvm-svn: 232902
r216771 introduced a change to MemoryDependenceAnalysis that allowed it
to reason about acquire/release operations. However, this change does
not ensure that the acquire/release operations pair. Unfortunately,
this leads to miscompiles, as we won't treat an acquire load as properly
affecting memory. This largely reverts r216771.
This fixes PR22708.
llvm-svn: 232889
As preparation for removing the getSubtargetImpl() call from
TargetMachine go ahead and flip the switch on caching the function
dependent subtarget and remove the bare getSubtargetImpl call
from the X86 port. As part of this add a few tests that show we
can generate code and assemble on X86 based on features/cpu on
the Function.
llvm-svn: 232879
If we couldn't analyze its terminator (i.e., it's an indirectbr, or some
other weirdness), we can't safely re-if-convert a predicated block,
because we can't tell whether the predicated terminator can
fallthrough (it does).
Currently, we would completely ignore the fallthrough successor. In
the added testcase, this means we used to generate:
...
@ %entry:
cmp r5, #21
ittt ne
@ %cc1f:
cmpne r7, #42
@ %cc2t:
strne.w r5, [r8]
movne pc, r10
@ %cc1t:
...
Whereas the successor of %cc1f was originally %bb1.
With the fix, we get the correct:
...
@ %entry:
cmp r5, #21
itt eq
@ %cc1t:
streq.w r5, [r11]
moveq pc, r0
@ %cc1f:
cmp r7, #42
itt ne
@ %cc2t:
strne.w r5, [r8]
movne pc, r10
@ %bb1:
...
rdar://20192768
Differential Revision: http://reviews.llvm.org/D8509
llvm-svn: 232872
vperm2* intrinsics are just shuffles.
In a few special cases, they're not even shuffles.
Optimizing intrinsics in InstCombine is better than
handling this in the front-end for at least two reasons:
1. Optimizing custom-written SSE intrinsic code at -O0 makes vector coders
really angry (and so I have regrets about some patches from last week).
2. Doing mask conversion logic in header files is hard to write and
subsequently read.
There are a couple of TODOs in this patch to complete this optimization.
Differential Revision: http://reviews.llvm.org/D8486
llvm-svn: 232852
With this patch, for this one exact case, we'll generate:
blendps %xmm0, %xmm1, $1
instead of:
insertps %xmm0, %xmm1, $0
If there's a memory operand available for load folding and we're
optimizing for size, we'll still generate the insertps.
The detailed performance data motivation for this may be found in D7866;
in summary, blendps has 2-3x throughput vs. insertps on widely used chips.
Differential Revision: http://reviews.llvm.org/D8332
llvm-svn: 232850
The code this patch removes was there to make sure the text sections went
before the dwarf sections. That is necessary because MachO uses offsets
relative to the start of the file, so adding a section can change relaxations.
The dwarf sections were being printed at the start just to produce symbols
pointing at the start of those sections.
The underlying issue was fixed in r231898. The dwarf sections are now printed
when they are about to be used, which is after we printed the text sections.
To make sure we don't regress, the patch makes the MachO streamer assert
if CodeGen puts anything unexpected after the DWARF sections.
llvm-svn: 232842
The main differences are:
* Split in 32 and 64 bit functions.
* First switch on the Modifier so that we have only one non fully covered
switch.
* Map the fixup kind first to a x86_64 (or i386) specific enum, to make
it easy to handle cases like X86::reloc_riprel_4byte_movq_load.
* Switch on IsPCRel last, which reduces code duplication.
Fixes pr22308.
llvm-svn: 232837
A WIP patch makes `DIDescriptor` accessors more strict, which in turn
causes the `DebugInfoFinder` to crash on wrongly typed `!dbg`
attachments. Catch that error up front in
`Verifier::visitInstruction()`.
Also remove a test that we "handle" invalid `!dbg` attachments, added
back in r99938. We don't want to handle those anymore.
Note: I'm *not* recursing and verifying the debug info graph reachable
from this node; that work is already done by `verifyDebugInfo()`.
llvm-svn: 232834
This test is supposed to be testing whether metadata attachments to
instructions work, but it was using invalid debug info to do so. (This
was causing assertion failures in the `DebugInfoFinder` with a WIP patch
to be more strict about `DIDescriptor` accessors.)
Rather than fix the debug info -- which is better tested elsewhere --
just test the IR feature directly.
llvm-svn: 232828
When estimating SROA savings, we want to see if an address is derived
off an alloca in the caller. For store instructions, operand 1 is the
address operand, but the current code uses operand 0. Use
getPointerOperand for loads and stores to fix this.
Patch by Easwaran Raman.
http://reviews.llvm.org/D8425
llvm-svn: 232827
LocalStackSlotPass assumes that isFrameOffsetLegal doesn't change its
answer when the base register changes. Unfortunately this isn't true
in thumb1, where SP-based loads allow a larger offset than
non-SP-based loads, and this causes the base register reuse code to
generate instructions that are unencodable, causing an assertion
failure.
Solve this by adding a BaseReg parameter to isFrameOffsetLegal, which
ARMBaseRegisterInfo can then make use of to give the correct answer.
Differential Revision: http://reviews.llvm.org/D8419
llvm-svn: 232825
With the option -outline-optional-branches, LLVM will place optional
branches out of line (more details on r231230).
With this patch, this is not done for short optional branches. A short
optional branch is a branch containing a single block with an
instruction count below a certain threshold (defaulting to 3). Everything
is still guarded under -outline-optional-branches.
Outlining a short branch can't significantly improve code locality. It
can, however, decrease performance because of the additional jmp, especially in
cases where the optional branch is hot. This fixes a compile time
regression I have observed in a benchmark.
Review: http://reviews.llvm.org/D8108
llvm-svn: 232802
This works in a similar way to the gold plugin tests. We search for a compatible
linker on $PATH and use it to run tests against our just-built libLTO. To start
with, test the just added opt level functionality.
Differential Revision: http://reviews.llvm.org/D8472
llvm-svn: 232785
This is very related to the bug fixed in r174431. The problem is that
SelectionDAG does not include alignment in the uniquing of loads and
stores. When an otherwise no-op DAGCombine would increase the alignment
of a load or store, the original node would be returned (with the
alignment increased), which would cause the node not to be processed by
any further DAGCombines.
I don't have a direct testcase for this that manifests on an in-tree
target, but I did see some noise in the tests for other targets and have
updated them for it.
llvm-svn: 232780
This switches the sense of the i32 values and updates the test cases.
We can also use CHECK-SAME to clean up some tests, and reduce the visual
noise from bitcasts.
llvm-svn: 232774
Another case of x86-specific shuffle strength reduction:
avoid generating insert*128 instructions with index 0 because
they are slower than their non-lane-changing blend equivalents.
Shuffle lowering already catches most of these cases, but
the zero vector case and some other paths such as in the
modified test in vector-shuffle-256-v32.ll were getting
through.
Differential Revision: http://reviews.llvm.org/D8366
llvm-svn: 232773
Each use of the byte array uses a different alias. This makes the
backend less likely to reuse previously computed byte array addresses,
improving the security of the CFI mechanism based on this pass.
Differential Revision: http://reviews.llvm.org/D8455
llvm-svn: 232770
This change also introduces a link-time optimization level of 1. This
optimization level runs only the globaldce pass as well as cleanup passes for
passes that run at -O0, specifically simplifycfg which cleans up lowerbitsets.
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150316/266951.html
llvm-svn: 232769
Summary:
CUDA 7.0's libdevice uses slightly different IR to call __nvvm_reflect
and that triggers an assertion in nvvm_reflect optimization pass. This
change allows nvvm_reflect pass to deal with both old and new ways to
pass an argument to __nvvm_reflect.
Test Plan: ninja check-all
Reviewers: eliben, echristo
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D8399
llvm-svn: 232732
This should bring the windows bots back.
It is a bit ugly, but it is better than what we had before: The triple would
say that the object format was COFF, but llc/llvm-mc would produce an ELF.
llvm-svn: 232683
No outlining is necessary for SEH catch blocks. Use the blockaddr of the
handler in place of the usual outlined function.
Reviewers: majnemer, andrew.w.kaylor
Differential Revision: http://reviews.llvm.org/D8370
llvm-svn: 232664
Currently v2i64 vector shifts (with non-equal shift amounts) are scalarized, costing 4 x extract, 2 x x86-shifts and 2 x insert instructions - and it gets even more awkward on 32-bit targets.
This patch separately shifts the vector by both shift amounts and then shuffles the partial results back together, costing 2 x shuffles and 2 x sse-shifts instructions (+ 2 movs on pre-AVX hardware).
Note - this patch only improves the SHL / LSHR logical shifts as only these are supported in SSE hardware.
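Roughly, the approach is (a Python sketch of the logical shift-left case, with made-up lane values):
MASK64 = (1 << 64) - 1
vec  = [0x1234, 0xABCD]          # assumed v2i64 lanes
amts = [3, 17]                   # non-equal per-lane shift amounts
by0 = [(x << amts[0]) & MASK64 for x in vec]   # whole vector shifted by amount 0
by1 = [(x << amts[1]) & MASK64 for x in vec]   # whole vector shifted by amount 1
result = [by0[0], by1[1]]                      # shuffle the partial results together
assert result == [(vec[0] << amts[0]) & MASK64, (vec[1] << amts[1]) & MASK64]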
Differential Revision: http://reviews.llvm.org/D8416
llvm-svn: 232660
When calculating the lanemask of a register class we have to include the
masks of subregisters supported by any of the class members, not just
the ones supported by all class members.
This fixes problems when coalescing towards a subclass with additional
subregisters available.
The attached testcase works fine as is, but does crash if you enable
subregister liveness on x86 without this change applied.
llvm-svn: 232652
The checks here were so vague that we could nuke intrinsics
from existence and still pass the test because we'd match
the function name.
llvm-svn: 232647
The 'vmovntdq' was only passing due to a fluke in
SandyBridge codegen that splits 32-byte stores in half,
but that meant that the test was not correctly checking
for the 32-byte store that we thought we were generating.
The lax checking in this file will be addressed in
another commit. There are bigger problems here.
llvm-svn: 232644
The two hot blocks are right next to each other and I verified that
there is no performance regression by compressing/uncompressing some
files with a minigzip built with the different options.
llvm-svn: 232629
Memcpy, and other memory intrinsics, typically try to use LDM/STM if
the source and target addresses are 4-byte aligned. In CodeGenPrepare
look for calls to memory intrinsics and, if the object is on the
stack, 4-byte align it if it's large enough that we expect that memcpy
would want to use LDM/STM to copy it.
Differential Revision: http://reviews.llvm.org/D7908
llvm-svn: 232627
These BEXTR cases are a check for the 64-bit load form and two negative cases where the bitrange is non-contiguous. From a private patch equivalent to r189742/PR17028.
llvm-svn: 232580
Summary:
This change teaches isImpliedCond to infer things like "X sgt 0" => "X -
1 sgt -1". The `ConstantRange` class has the logic to do the heavy
lifting, this change simply gets ScalarEvolution to exploit that when
reasonable.
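A quick spot check of the implication for signed 32-bit values (illustrative only; the real work is done by ConstantRange):
INT32_MAX = 2**31 - 1
for x in (1, 2, 1000, INT32_MAX):
    # "X sgt 0" holds for these values, and since X - 1 cannot wrap when X >= 1,
    # "X - 1 sgt -1" follows.
    assert x > 0 and (x - 1) > -1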
Depends on D8345
Reviewers: atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8346
llvm-svn: 232576
Targets which provide a rotate make it possible to replace a sequence of
(XOR (SHL 1, x), -1) with (ROTL ~1, x). This saves an instruction on
architectures like X86 and POWER(64).
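The identity being exploited, checked by a small Python sketch for the 64-bit case:
MASK64 = (1 << 64) - 1
def rotl64(v, n):
    n &= 63
    return ((v << n) | (v >> (64 - n))) & MASK64
for x in range(64):
    # (XOR (SHL 1, x), -1) == (ROTL ~1, x)
    assert ((1 << x) ^ MASK64) == rotl64(~1 & MASK64, x)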
Differential Revision: http://reviews.llvm.org/D8350
llvm-svn: 232572
COFF COMDATs (for selection kinds other than 'select any') require at
least one non-section symbol in the symbol table.
Satisfy this by morally enhancing the linkage from private to internal.
Differential Revision: http://reviews.llvm.org/D8394
llvm-svn: 232570
Summary: Building FP16 constant vectors caused the FP16 data to be bitcast to i64. This patch creates a BITCAST node with the correct value, and adds a test to verify correct handling.
Reviewers: mcrosier
Reviewed By: mcrosier
Subscribers: mcrosier, jmolloy, ab, srhines, llvm-commits, rengolin, aemerson
Differential Revision: http://reviews.llvm.org/D8369
llvm-svn: 232562
Summary:
COFF COMDATs (for selection kinds other than 'select any') require at
least one non-section symbol in the symbol table.
Satisfy this by morally enhancing the linkage from private to internal.
Reviewers: rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8374
llvm-svn: 232539
Before this patch, code wanting to create temporary labels for a given entity
(function, cu, exception range, etc) had to keep its own counter to have stable
symbol names.
createTempSymbol would still add a suffix to make sure a new symbol was always
returned, but it kept a single counter. Because of that, if we were to use
just createTempSymbol("cu_begin"), the label could change from cu_begin42 to
cu_begin43 because some other code started using temporary labels.
Simplify this by just keeping one counter per prefix and removing the various
specialized counters.
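A minimal sketch of the per-prefix scheme (names and structure assumed, not the actual MCContext code):
from collections import defaultdict
counters = defaultdict(int)
def create_temp_symbol(prefix):
    name = "%s%d" % (prefix, counters[prefix])
    counters[prefix] += 1
    return name
assert create_temp_symbol("cu_begin") == "cu_begin0"
assert create_temp_symbol("func_begin") == "func_begin0"
# Other prefixes no longer perturb the numbering of "cu_begin" labels.
assert create_temp_symbol("cu_begin") == "cu_begin1"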
llvm-svn: 232535
Also, add several entries to vectorizable functions table, and
corresponding tests. The table isn't complete, it'll be populated later.
Review: http://reviews.llvm.org/D8131
llvm-svn: 232531
The input offset to needsFrameBaseReg is a negative value below the top of the
stack frame, but when converting to a positive offset from the bottom of the
stack frame this value was negated, causing the final offset to be too large
by twice the input offset's magnitude. Fix that by not negating the offset.
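The arithmetic of the bug, as a Python sketch with made-up numbers (not part of the patch):
frame_size = 64
offset = -8                      # input: a negative offset below the top of the frame
correct = frame_size + offset    # 56: positive offset from the bottom of the frame
buggy = frame_size - offset      # 72: what negating the input offset produced
assert buggy - correct == 2 * abs(offset)   # too large by twice the offset's magnitude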
Patch by John Brawn
Differential Revision: http://reviews.llvm.org/D8316
llvm-svn: 232513
The experiments can be used to evaluate potential optimizations that remove
instrumentation (assess false negatives). Instead of completely removing
some instrumentation, you set Exp to a non-zero value (mask of optimization
experiments that want to remove instrumentation of this instruction).
If Exp is non-zero, this pass will emit special calls into runtime
(e.g. __asan_report_exp_load1 instead of __asan_report_load1). These calls
make the runtime terminate the program in a special way (with a different
exit status). Then you run the new compiler on a buggy corpus, collect
the special terminations (ideally, you don't see them at all -- no false
negatives) and make the decision on the optimization.
The exact reaction to experiments in runtime is not implemented in this patch.
It will be defined and implemented in a subsequent patch.
http://reviews.llvm.org/D8198
llvm-svn: 232502
The VSX stores are sometimes generated with an undefined index register, causing %noreg to be used and R0 to be emitted later on. The semantics of the VSX store (e.g. stdsdx) require R0 to be used as the base if we want zero to be used in the computation of the effective address instead of the content of R0. This patch checks if no index register was generated and forces R0 to be used as the base address.
llvm-svn: 232486
Summary:
This adds a MipsInstAlias which expands to XORi $reg,$reg,imm. For example, "xor $6, 0x3A" should be expanded to "xori $6, $6, 58".
This should work for all MIPS ISAs.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8284
llvm-svn: 232473
ARMv6K is another layer between ARMV6 and ARMV6T2. This is the LLVM
side of the changes.
ARMV6 family LLVM implementation.
+-------------------------------------+
| ARMV6 |
+----------------+--------------------+
| ARMV6M (thumb) | ARMV6K (arm,thumb) | <- From ARMV6K and ARMV6M processors
+----------------+--------------------+ have support for hint instructions
| ARMV6T2 (arm,thumb,thumb2) | (SEV/WFE/WFI/NOP/YIELD). They can
+-------------------------------------+ be either real or default to NOP.
| ARMV7 (arm,thumb,thumb2) | The two processors also use
+-------------------------------------+ different encoding for them.
Patch by Vinicius Tinti.
llvm-svn: 232468
Optimize concat_vectors of truncated vectors, where the intermediate
type is illegal, to avoid said illegality, e.g.,
(v4i16 (concat_vectors (v2i16 (truncate (v2i64))),
(v2i16 (truncate (v2i64)))))
->
(v4i16 (truncate (v4i32 (concat_vectors (v2i32 (truncate (v2i64))),
(v2i32 (truncate (v2i64)))))))
This isn't really target-specific, and, as such, would best go in the
DAGCombiner. However, ISD::TRUNCATE legality isn't keyed on both input
and result type, so we might generate worse code when we don't know
better. On AArch64 we know it's fine for v2i64->v4i16 and v4i32->v8i8.
rdar://20022387
llvm-svn: 232459
Re-commit the test cases added in r232444. These now use
-irce-print-changed-loops and -irce-print-range-checks so they run
correctly on a build of LLVM without asserts.
llvm-svn: 232452
I accidentally checked in two tests that used -debug-only -- these fail
on a release LLVM build. Temporarily delete these from the repo to keep
the bots green while I fix this locally.
llvm-svn: 232446
This change to IRCE gets it to recognize "half" range checks. Half
range checks are range checks that only either check if the index is
`slt` some positive integer ("length") or if the index is `sge` `0`.
The range solver does not try to be clever / aggressive about solving
half-range checks -- it transforms "I < L" to "0 <= I < L" and "0 <= I"
to "0 <= I < INT_SMAX". This is safe, but not always optimal.
llvm-svn: 232444
By default we want our gcov emission to stay 4.2 compatible, which
means we need to continue to emit the exit block last by default. We add
an option to emit it before the body for users that need it.
llvm-svn: 232438
This makes the reader check the endianness of the object file it's
given and behave appropriately. For the test I dug up a really old
linker and created a ppc-apple-darwin file for llvm-cov to read.
llvm-svn: 232422
We removed @llvm.eh.typeid.for.i32 and replaced it with
@llvm.eh.typeid.for quite some time ago. Fix up some test cases which
never got updated.
llvm-svn: 232421
(turns out I had regressed this when sinking handling of this type down
into GetElementPtrInst::Create - since that asserted before the error
handling was performed)
llvm-svn: 232420
As part of PR22777, fix testcases that fail the debug info verifier.
The changes fall into the following categories:
- Empty `filename:` fields in `MDFile`s. Compile units and some types
require non-empty filenames. A number of testcases have empty
filenames, probably due to hand-reduction of testcases.
- Not-quite empty arrays: `!{i32 0}`. This used to be equivalent in
the debug info schema to `!{}`. They cause problems for
`!MDSubroutineType`'s `types:` array, since it requires all operands
to be valid types. (Note that `!{null}` is the correct type array
for functions that take no arguments and return `void`.)
- Significantly bitrotted testcases. Nodes got left behind a few
upgrades ago because of missing or invalid tags.
llvm-svn: 232415
Turns out `visitIntrinsicFunctionCall()` descends into all operands
already, so explicitly descending in `visitDbgIntrinsic()` (part of
r232296) isn't useful.
Updating a testcase that doesn't really need `-verify-debug-info` (since
r231082) as confirmation.
llvm-svn: 232408
r230877 optimized which fields are written out for `CHECK`-ability, but
apparently missed changing some of them to optional in `LLParser`.
Fixes PR22921.
llvm-svn: 232400
Fix justify error for small structures bigger than 32 bits in fixed
arguments for MIPS64 big endian. There was a problem when small structures
are passed as fixed arguments. The structures that are bigger than 32 bits
but smaller than 64 bits were not left justified properly on MIPS64 big
endian. This is fixed by shifting the value to make it left justified when
appropriate.
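For example (a Python sketch with an assumed 48-bit structure in a 64-bit register; not part of the patch):
struct_bits = 48                      # bigger than 32 bits but smaller than 64 bits
value = 0x112233445566                # assumed raw structure contents
left_justified = value << (64 - struct_bits)   # shifted into the most significant bits
assert left_justified == 0x1122334455660000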
Patch by Aleksandar Beserminji.
Differential Revision: http://reviews.llvm.org/D8174
llvm-svn: 232382
This still doesn't actually work correctly for big endian input files,
but since these tests all use little endian input files they don't
actually fail. I'll be committing a real fix for big endian soon, but
I don't have proper tests for it yet.
llvm-svn: 232354
The problem here is the infamous one-direction known safe. I was
hesitant to turn it off before because of the potential for regressions
without an actual bug from users hitting the problem. This is that bug ;).
The main performance impact of having known safe in both directions is
that often times it is very difficult to find two releases without a use
in-between them since we are so conservative with determining potential
uses. The one direction known safe gets around that problem by taking
advantage of many situations where we have two retains in a row,
allowing us to avoid that problem. That being said, the one direction
known safe is unsafe. Consider the following situation:
retain(x)
retain(x)
call(x)
call(x)
release(x)
Then we know the following about the reference count of x:
// rc(x) == N (for some N).
retain(x)
// rc(x) == N+1
retain(x)
// rc(x) == N+2
call A(x)
call B(x)
// rc(x) >= 1 (since we can not release a deallocated pointer).
release(x)
// rc(x) >= 0
That is all the information that we can know statically. That means that
we know that A(x), B(x) together can release (x) at most N+1 times. Lets
say that we remove the inner retain, release pair.
// rc(x) == N (for some N).
retain(x)
// rc(x) == N+1
call A(x)
call B(x)
// rc(x) >= 1
release(x)
// rc(x) >= 0
We knew before that A(x), B(x) could release x up to N+1 times meaning
that rc(x) may be zero at the release(x). That is not safe. On the other
hand, consider the following situation where we have a must use of
release(x) that x must be kept alive for after the release(x)**. Then we
know that:
// rc(x) == N (for some N).
retain(x)
// rc(x) == N+1
retain(x)
// rc(x) == N+2
call A(x)
call B(x)
// rc(x) >= 2 (since we know that we are going to release x and that that release can not be the last use of x).
release(x)
// rc(x) >= 1 (since we can not deallocate the pointer since we have a must use after x).
…
// rc(x) >= 1
use(x)
Thus we know that statically the calls to A(x), B(x) can together only
release rc(x) N times. Thus if we remove the inner retain, release pair:
// rc(x) == N (for some N).
retain(x)
// rc(x) == N+1
call A(x)
call B(x)
// rc(x) >= 1
…
// rc(x) >= 1
use(x)
We are still safe unless in the final … there are unbalanced retains,
releases which would have caused the program to blow up anyways even
before optimization occurred. The simplest form of must use is an
additional release that has not been paired up with any retain (if we
had paired the release with a retain and removed it we would not have
the additional use). This fits nicely into the ARC framework since
basically what you do is say that given any nested releases regardless
of what is in between, the inner release is known safe. This enables us to get
back the lost performance.
<rdar://problem/19023795>
llvm-svn: 232351
This code was casting regions of a memory buffer to a couple of
different structs. This is wrong in a few ways:
1. It breaks aliasing rules.
2. If the buffer isn't aligned, it hits undefined behaviour.
3. It completely ignores endianness differences.
4. The structs being defined for this aren't specifying their padding
properly, so this doesn't even represent the data properly on some
platforms.
This commit is mostly NFC, except that it fixes reading coverage for
32 bit binaries as a side effect of getting rid of the mispadded
structs. I've included a test for that.
I've also baked in that we only handle little endian more explicitly,
since that was true in practice already. I'll fix this to handle
endianness properly in a followup commit.
llvm-svn: 232346
The information gathering part of the patch stores a bit more information
than what is strictly necessary for these 2 sections. The rest will
become useful when we start emitting __apple_* type accelerator tables.
llvm-svn: 232342
This code comes with a lot of cruft that is meant to mimic darwin's
dsymutil behavior. A much simpler approach (described in the numerous
FIXMEs that I put in there) gives the right output for the vast
majority of cases. The extra corner cases that are handled differently
need to be investigated: they seem to correctly handle debug info that
is in the input, but that info looks suspicious in the first place.
Anyway, the current code needs to handle this, but I plan to revisit it
as soon as the big round of validation against the classic dsymutil is
over.
llvm-svn: 232333
The debug map embedded by ld64 in binaries contains function sizes.
These sizes are less precise than the ones given by the debug information
(byte granularity vs linker atom granularity), but they might cover code
that is referenced in the line table but not in the DIE tree (that might
very well be a compiler bug that I need to investigate later).
Anyway, extracting that information is necessary to be able to mimic
dsymutil's behavior exactly.
llvm-svn: 232300
Verify that debug info intrinsic arguments are valid. (These checks
will not recurse through the full debug info graph, so they don't need
to be cordoned of in `DebugInfoVerifier`.)
With those checks in place, changing the `DbgIntrinsicInst` accessors to
downcast to `MDLocalVariable` and `MDExpression` is natural (added isa
specializations in `Metadata.h` to support this).
Added tests to `test/Verifier` for the new -verify checks, and fixed the
debug info in all the in-tree tests.
If you have out-of-tree testcases that have started to fail to -verify,
hopefully the verify checks are helpful. The most likely problem is
that the expression argument is `!{}` (instead of `!MDExpression()`).
llvm-svn: 232296
This test for function-local metadata did strange things, and never
really sent in valid arguments for `llvm.dbg.declare` and
`llvm.dbg.value` intrinsics. Those that might have once been valid have
bitrotted.
Rewrite it to be a targeted test for function-local metadata --
unrelated to debug info, which is tested elsewhere -- and rename it to
better match other metadata-related tests.
(Note: the scope of function-local metadata changed drastically during
the metadata/value split, but I didn't properly clean up this testcase.
Most of the IR in this file, while invalid for debug info intrinsics,
used to provide coverage for various (now illegal) forms of
function-local metadata.)
llvm-svn: 232290
Summary: This is a first step toward getting proper support for aggregate loads and stores.
Test Plan: Added unittests
Reviewers: reames, chandlerc
Reviewed By: chandlerc
Subscribers: majnemer, joker.eph, chandlerc, llvm-commits
Differential Revision: http://reviews.llvm.org/D7780
Patch by Amaury Sechet
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 232284
There is no need to look into the location expressions to transfer them,
the only modification to apply is to patch their base address to reflect
the linked function address.
llvm-svn: 232267
Specifically, if there are copy-like instructions in the loop header
they are moved into the loop close to their uses. This reduces the live
intervals of the values and can avoid register spills.
This is working towards a fix for http://llvm.org/PR22230.
Review: http://reviews.llvm.org/D7259
Next steps:
- Find a better cost model (which non-copy instructions should be sunk?)
- Make this dependent on register pressure
llvm-svn: 232262
This actually shares most of its implementation with the generation
of the debug_ranges (the absence of 'a' is not a typo) contribution
for the unit's DW_AT_ranges attribute.
llvm-svn: 232246
The linker on that platform may re-order symbols or strip dead symbols, which
will break bit set checks. Avoid this by hiding the symbols from the linker.
llvm-svn: 232235
Nothing fancy, just a straightforward offset to apply to the original
debug_ranges entries to get them in line with the linked addresses.
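Conceptually (a Python sketch with an assumed address delta for one function):
delta = 0x100000                       # linked address minus original object address
ranges = [(0x10, 0x40), (0x80, 0xC0)]  # original debug_ranges entries (low, high)
linked = [(lo + delta, hi + delta) for lo, hi in ranges]
assert linked == [(0x100010, 0x100040), (0x100080, 0x1000C0)]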
llvm-svn: 232232
This fixes pr22854.
The core issue on the bug is that there are multiple instructions that
print the same in assembly. In fact, there doesn't seem to be any
syntax for specifying that a constant that fits in 8 bits should use a 32 bit
immediate.
The attached patch changes fast isel to consider i16immSExt8,
i32immSExt8, and i64immSExt8. They were disabled because fastisel didn’t know
to call the predicate back in the day.
llvm-svn: 232223
This reapplies the patch previously committed at revision 232190. This was
reverted at revision 232196 as it caused test failures in tests that did not
expect operands to be commuted. I have made the tests more resilient to
reassociation in revision 232206.
llvm-svn: 232209
As a follow-up to r232200, add an `-instcombine` to canonicalize scalar
allocations to `i32 1`. Since r232200, `iX 1` (for X != 32) are only
created by RAUWs, so this shouldn't fire too often. Nevertheless, it's
a cheap check and a nice cleanup.
llvm-svn: 232202
Write the `alloca` array size explicitly when it's non-canonical.
Previously, if the array size was `iX 1` (where X is not 32), the type
would mutate to `i32` when round-tripping through assembly.
The testcase I added fails in `verify-uselistorder` (as well as
`FileCheck`), since the use-lists for `i32 1` and `i64 1` change.
(Manman Ren came across this when running `verify-uselistorder` on some
non-trivial, optimized code as part of PR5680.)
The type mutation started with r104911, which allowed array sizes to be
something other than an `i32`. Starting with r204945, we
"canonicalized" to `i64` on 64-bit platforms -- and then on every
round-trip through assembly, mutated back to `i32`.
I bundled a fixup for `-instcombine` to avoid r204945 on scalar
allocations. (There wasn't a clean way to sequence this into two
commits, since the assembly change on its own caused testcase churn, and
the `-instcombine` change can't be tested without the assembly changes.)
An obvious alternative fix -- change `AllocaInst::AllocaInst()`,
`AsmWriter` and `LLParser` to treat `intptr_t` as the canonical type for
scalar allocations -- was rejected out of hand, since this required
teaching them each about the data layout.
A follow-up commit will add an `-instcombine` to canonicalize the scalar
allocation array size to `i32 1` rather than leaving `iX 1` alone.
rdar://problem/20075773
llvm-svn: 232200
We recorded the forward references in the CU that holds the referenced
DIE, but this is wrong as those will get resolved *after* the CU that
holds the reference. Record the references in their originating CU along
with a pointer to the remote CU to be able to compute the fixed up
offset at the right time.
llvm-svn: 232193
The typo went unnoticed because we were testing only on Dwarf 2. Add a
Dwarf4 test that exercises the code path, and also tests some newer
FORMs that the other test doesn't cover.
llvm-svn: 232191
This patch adds initial support for vector instructions to the reassociation
pass. It enables most parts of the pass to work with vectors but to keep the
size of the patch small, optimization of Xor trees, canonicalization of
negative constants and converting shifts to muls, etc., have been left out.
This will be handled in later patches.
The patch is based on an initial patch by Chad Rosier.
Differential Revision: http://reviews.llvm.org/D7566
llvm-svn: 232190
Summary:
ScalarEvolutionExpander assumes that the header block of a loop is a
legal place to have a use for a phi node. This is true only for phis
that are either in the header or dominate the header block, but it is
not true for phi nodes that are strictly internal to the loop body.
This change teaches ScalarEvolutionExpander to place uses of PHI nodes
in the basic block the PHI nodes belong to. This is always legal, and
`hoistIVInc` ensures that the said position dominates `IsomorphicInc`.
Reviewers: atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8311
llvm-svn: 232189
Similar to the gep (r230786) and load (r230794) changes.
A similar migration script can be used to update test cases, which
successfully migrated all of LLVM and Polly, but about 4 test cases
needed manually changes in Clang.
(this script will read the contents of stdin and massage it into stdout
- wrap it in the 'apply.sh' script shown in previous commits + xargs to
apply it over a large set of test cases)
import fileinput
import sys
import re
rep = re.compile(r"(getelementptr(?:\s+inbounds)?\s*\()((<\d*\s+x\s+)?([^@]*?)(|\s*addrspace\(\d+\))\s*\*(?(3)>)\s*)(?=$|%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|zeroinitializer|<|\[\[[a-zA-Z]|\{\{)", re.MULTILINE | re.DOTALL)
def conv(match):
  line = match.group(1)
  line += match.group(4)
  line += ", "
  line += match.group(2)
  return line
line = sys.stdin.read()
off = 0
for match in re.finditer(rep, line):
  sys.stdout.write(line[off:match.start()])
  sys.stdout.write(conv(match))
  off = match.end()
sys.stdout.write(line[off:])
llvm-svn: 232184
using numeric values and not their symbolic constant names.
The routines that print Mach-O stuff already had a verbose parameter, and this
change just switches them from passing true to passing !NonVerbose, with just a
couple of fixes and a bunch of test case updates.
llvm-svn: 232182
This patch fixes a bug in the shuffle lowering logic implemented by function
'lowerV2X128VectorShuffle'.
There are a few cases where function 'lowerV2X128VectorShuffle' wrongly expands a
shuffle of two v4X64 vectors into a CONCAT_VECTORS of two EXTRACT_SUBVECTOR
nodes. The problematic expansion only occurs when the shuffle mask M has an
'undef' element at position 2, and M is equivalent to mask <0,1,4,5>.
In that case, the algorithm propagates the wrong vector to one of the two
new EXTRACT_SUBVECTOR nodes.
Example:
;;
define <4 x double> @test(<4 x double> %A, <4 x double> %B) {
entry:
%0 = shufflevector <4 x double> %A, <4 x double> %B, <4 x i32><i32 undef, i32 1, i32 undef, i32 5>
ret <4 x double> %0
}
;;
Before this patch, llc (-mattr=+avx) generated:
vinsertf128 $1, %xmm0, %ymm0, %ymm0
With this patch, llc correctly generates:
vinsertf128 $1, %xmm1, %ymm0, %ymm0
Added test lower-vec-shuffle-bug.ll
Differential Revision: http://reviews.llvm.org/D8259
llvm-svn: 232179
Constant folding for shift IR instructions ignores all bits above 32 of the
second argument (the shift amount).
Because of that, some undef results are not recognized, and APInt can
raise an assert failure if the second argument has more than 64 bits.
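As a minimal C sketch of the failure mode, assuming a simplified 32-bit
truncation model of the bug (the real code operates on APInt): in IR, a shift
by an amount that is at least the bit width yields undef, but a fold that only
inspects the low 32 bits of the amount misses that case.
#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint64_t value = 1;
  uint64_t amount = (uint64_t)1 << 32;   /* shift amount of 2^32 */
  unsigned width = 64;
  uint32_t truncated = (uint32_t)amount; /* a 32-bit-only check sees 0 */

  if (amount >= width)
    puts("correct: the IR-level result is undef (amount >= bit width)");
  if (truncated < width)
    printf("a truncating fold would instead compute %llu\n",
           (unsigned long long)(value << truncated));
  return 0;
}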
Patch by Paweł Bylica!
Differential Revision: http://reviews.llvm.org/D7701
llvm-svn: 232176
%Q5_Q6<def> = COPY %Q2_Q3
%D5<def> =
%D3<def> =
%D3<def> = COPY %D6 // Incorrectly removed in MachineCopyPropagation
The use of %D3 then produces an incorrect result ...
Reviewed in http://reviews.llvm.org/D8242
llvm-svn: 232142
There's a missed optimization opportunity where we could look at the full chain of computation and take the intersection of the flags instead of only looking one instruction deep.
llvm-svn: 232134
This should complete the job started in r231794 and continued in r232045:
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.
AVX2 introduced proper integer variants of the hacked integer insert/extract
C intrinsics that were created for this same functionality with AVX1.
This should complete the removal of insert/extract128 intrinsics.
The Clang precursor patch for this change was checked in at r232109.
llvm-svn: 232120
Instead print them as part of the $dst operand. The AsmMatcher
requires the 32-bit and 64-bit encodings have the same mnemonic in
order to parse them correctly.
llvm-svn: 232105
The permps and permd instructions have their operands swapped compared to the
intrinsic definition. Therefore, they do not fall into the INTR_TYPE_2OP
category.
I did not create a new category for those two, as they are the only ones, AFAICT,
in that case.
<rdar://problem/20108262>
llvm-svn: 232085
Part of the folding logic implemented by function 'PerformISDSETCCCombine'
only worked under the assumption that the input condition code could have
been either SETNE or SETEQ.
Unfortunately that assumption was incorrect, and in some cases the algorithm
ended up incorrectly folding SETCC nodes.
The incorrect folding only affected SETCC dag nodes where:
- one of the operands was a build_vector of all zeroes;
- the other operand was a SIGN_EXTEND from a vector of MVT:i1 elements;
- the condition code was neither SETNE nor SETEQ.
Example:
(setcc (v4i32 (sign_extend v4i1:%A)), (v4i32 VectorOfAllZeroes), setge)
Before this patch, the entire dag node sequence from the example was
incorrectly folded to node %A.
With this patch, the dag node sequence is folded to a
(xor %A, (v4i1 VectorOfAllOnes)).
Added test setcc-combine.ll.
Thanks to Greg Bedwell for spotting this issue.
llvm-svn: 232046
Now that we've replaced the vinsertf128 intrinsics,
do the same for their extract twins.
This is very much like D8086 (checked in at r231794):
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.
This is also the LLVM sibling to the cfe D8275 patch.
Differential Revision: http://reviews.llvm.org/D8276
llvm-svn: 232045
This was first committed at r231630 and reverted at r231635.
Function pass InstructionSimplifier is inserted as a barrier to
make sure the loop unroll pass won't affect the LICM pass.
llvm-svn: 232011
Instead, run both EH preparation passes, and have them both ignore
functions with unrecognized EH personalities. Pass delegation involved
some hacky code for creating an AnalysisResolver that we don't need now.
llvm-svn: 231995
CodeGen incorrectly ignores (asserting in APInt) constant indices bigger
than 2^64 in getelementptr instructions. This is a test and fix for that.
Patch by Paweł Bylica!
Reviewed By: rnk
Subscribers: majnemer, rnk, mcrosier, resistor, llvm-commits
Differential Revision: http://reviews.llvm.org/D8219
llvm-svn: 231984
If a function is going into a unique section (because of -ffunction-sections
for example), putting a jump table in .rodata will keep .rodata alive and
that will keep alive any other function that also has a jump table.
Instead, put the jump table in a unique section that is associated with the
function.
llvm-svn: 231961
The main issue being fixed here is that APCS targets handling a "byval align N"
parameter with N > 4 were miscounting what objects were where on the stack,
leading to FrameLowering setting the frame pointer incorrectly and clobbering
the stack.
But byval handling had grown over many years, and had multiple layers of cruft
trying to compensate for each other and calculate padding correctly. This only
really needs to be done once, in the HandleByVal function. Elsewhere should
just do what it's told by that call.
I also stripped out unnecessary APCS/AAPCS distinctions (now that Clang emits
byvals with the correct C ABI alignment), which simplified HandleByVal.
rdar://20095672
llvm-svn: 231959
DW_AT_low_pc on functions is taken care of by the relocation processing, but
DW_AT_high_pc and DW_AT_low_pc on other lexical scopes need special handling.
llvm-svn: 231955
This is a follow-up to r231182. This adds the "vbroadcasti128" instruction
back, but without the intrinsic mapping. Also add a test to check the
instruction encoding.
This is related to rdar://problem/18742778.
llvm-svn: 231945
Summary:
The generic ELF TargetObjectFile defaults to .ctors, but Linux's
defaults to .init_array by calling InitializeELF with the value of
UseInitArray from TargetMachine. Make NaCl's behavior match.
Reviewers: jvoung
Differential Revision: http://reviews.llvm.org/D8240
llvm-svn: 231934
The CallGraphNode function "addCalledFunction()" asserts that edges are not to intrinsics.
This patch makes sure that the Inliner does not add such an edge to the callgraph.
Fix for clang crash by assertion: https://llvm.org/bugs/show_bug.cgi?id=22857
Differential Revision: http://reviews.llvm.org/D8231
llvm-svn: 231927
As of r231908, the test I added in r231902 actually gets run - but I'd
checked in a stale version of the input so it didn't pass. Fix the
input and un-xfail the test.
llvm-svn: 231911
This causes a crash if the referenced intrinsic was malformed. In this case, we
would already have reported an error on the referenced intrinsic, but then
crashed on the second one when it tried to introspect the first without
error checking.
llvm-svn: 231910
There were also errors in the CHECK lines, which I fixed, but the test
still doesn't pass because the "100" is on the wrong line. Not sure
whether this is a test failure or a coverage failure, so I'm marking the test
XFAIL for now.
llvm-svn: 231908
Given that large parts of inst combine are restricted to instructions which have one use, getting rid of a use on the condition can help the effectiveness of the optimizer. Also, it allows the condition to potentially be deleted by instcombine rather than waiting for another pass.
I noticed this completely by accident in another test case. It's not anything that actually came from a real workload.
p.s. We should probably do the same thing for switch instructions.
Differential Revision: http://reviews.llvm.org/D8220
llvm-svn: 231881
There are still 4 tests that check for DW_AT_MIPS_linkage_name,
because they specify DWARF 2 or 3 in the module metadata. So, I didn't
create an explicit version-based test for the attribute.
Differential Revision: http://reviews.llvm.org/D8227
llvm-svn: 231880
This patch adds limited support in ValueTracking for inferring known bits of a value from conditional expressions which must be true to reach the instruction we're trying to optimize. At this time, the feature is off by default. Once landed, I'm hoping for feedback from others on both profitability and compile time impact.
Forms of conditional value propagation have been tried in LLVM before and have failed due to compile time problems. In an attempt to side step that, this patch only considers conditions where the edge leaving the branch dominates the context instruction. It does not attempt full dataflow. Even with that restriction, it handles many interesting cases:
* Early exits from functions
* Early exits from loops (for context instructions in the loop and after the check)
* Conditions which control entry into loops, including multi-version loops (such as those produced during vectorization, IRCE, loop unswitch, etc..)
Possible applications include optimizing using information provided by constructs such as: preconditions, assumptions, null checks, & range checks.
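As a hedged, purely illustrative C example (not taken from the patch) of the kind of early-exit code this can help with:
int scale(int x) {
  if (x < 0)
    return 0;   /* early exit */
  /* The edge reaching this point implies x >= 0, so facts such as the sign
   * bit being clear become available here, e.g. letting the signed division
   * below be treated like a simple right shift. */
  return x / 4;
}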
This patch implements two approaches to the problem that need further benchmarking. Approach 1 is to directly walk the dominator tree looking for interesting conditions. Approach 2 is to inspect other uses of the value being queried for interesting comparisons. From initial benchmarking, it appears that Approach 2 is faster than Approach 1, but this needs to be further validated.
Differential Revision: http://reviews.llvm.org/D7708
llvm-svn: 231879
- Use TargetLowering to check for the actual cost of each extension.
- Provide a factorized method to check for the cost of an extension:
TargetLowering::isExtFree.
- Provide a virtual method TargetLowering::isExtFreeImpl for targets to be able
to tune the cost of non-free extensions.
This refactoring offers a better granularity to model what really happens on
different targets.
No performance changes and very few code differences.
Part of <rdar://problem/19267165>
llvm-svn: 231855
This adds new node types for each intrinsic.
For instance, for addv, we have AArch64ISD::UADDV, such that:
(v4i32 (uaddv ...))
is the same as
(v4i32 (scalar_to_vector (i32 (int_aarch64_neon_uaddv ...))))
that is,
(v4i32 (INSERT_SUBREG (v4i32 (IMPLICIT_DEF)),
(i32 (int_aarch64_neon_uaddv ...)), ssub))
In a combine, we transform all such across-vector-lanes intrinsics to:
(i32 (extract_vector_elt (uaddv ...), 0))
This has one big advantage: by making the extract_element explicit, we
enable the existing patterns for lane-aware instructions to fire.
This lets us avoid needlessly going through the GPRs. Consider:
uint32x4_t test_mul(uint32x4_t a, uint32x4_t b) {
return vmulq_n_u32(a, vaddvq_u32(b));
}
We now generate:
addv.4s s1, v1
mul.4s v0, v0, v1[0]
instead of the previous:
addv.4s s1, v1
fmov w8, s1
dup.4s v1, w8
mul.4s v0, v1, v0
rdar://20044838
llvm-svn: 231840
Follow up from r231505.
Fix the non-determinism by using a MapVector and reintroduce the AArch64
testcase. Defer deleting the GOT candidates to the end and remove
them in bulk, avoiding linear-time removal of each element.
Thanks to Renato Golin for trying it out on other platforms.
llvm-svn: 231830
The dependences are now exposed through the new getInterestingDependences
API so we can use that with -analyze too and fix the FIXME.
This lets us remove the test that relied on -debug to check the
dependences.
llvm-svn: 231807
Also it extracts getCopyFromRegs helper function in SelectionDAGBuilder as we need to be able to customize type of the register exported from basic block during lowering of the gc.result.
(Resubmitting this change after not being able to reproduce buildbot failure)
Differential Revision: http://reviews.llvm.org/D7760
llvm-svn: 231800
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.
This is the sibling patch for the Clang half of this change:
http://reviews.llvm.org/D8088
Differential Revision: http://reviews.llvm.org/D8086
llvm-svn: 231794
This crash occurs due to memory corruption when trying to update dependency
direction based on Constraints.
This crash was observed during lnt regression of Polybench benchmark test case dynprog.
Review: http://reviews.llvm.org/D8059
llvm-svn: 231788
This crash in Dependency analysis is because we assume here that in the case of UsefulGEP
both source and destination have the same number of operands, which may not be true.
This incorrect assumption results in a crash while populating Pairs. Fix the same.
This crash was observed during lnt regression for code such as:
struct s{
  int A[10][10];
  int C[10][10][10];
} S;
void dep_constraint_crash_test(int k,int N) {
  for( int i=0;i<N;i++)
    for( int j=0;j<N;j++)
      S.A[0][0] = S.C[0][0][k];
}
Review: http://reviews.llvm.org/D8162
llvm-svn: 231784
We failed to use a marking set to properly handle recursive types, which caused us
to recurse infinitely and eventually overflow the stack.
llvm-svn: 231760
In this situation we would always have already flagged an error on the statepoint intrinsic,
but then we carry on to parse other, related GC intrinsics, and could end up crashing during that
verification when they try to access data from the malformed statepoint.
llvm-svn: 231759
ReplaceInstUsesWith needs to return nullptr when the input has no users,
because in that case it does not mutate the program. Otherwise, we can
get stuck in an infinite loop of repeatedly attempting to constant fold
an instruction with no users.
llvm-svn: 231755
They mark the start of a compile unit, so name them .Lcu_*. Using
Section->getLabelBeginName() makes it look like they mark the start of the
section.
While at it, switch to createTempSymbol to avoid collisions with labels
created in inline assembly. Not sure if a "don't crash" test is worth it.
With this getLabelBeginName is dead, delete it.
llvm-svn: 231750
CFLAA didn't know how to properly handle ConstantExprs; it would silently
ignore them. This was a problem if the ConstantExpr is, say, a GEP of a global,
because CFLAA wouldn't realize that there's a global there. :)
llvm-svn: 231743
We now treat pointers given to ptrtoint and pointers retrieved from
inttoptr as similar to arguments or globals (can alias anything, etc.)
This solves some of the problems we were having with giving incorrect
results.
llvm-svn: 231741
It turns out accelerator tables were totally broken if they contained
entries with colliding hashes. The failure mode is pretty bad, as it not
only impacted the colliding entries, but would basically make all the
entries after the first hash collision point to the wrong place.
The testcase uses the symbol names that were found to collide during a
clang build.
From a performance point of view, the patch adds a sort and a linear
walk over each bucket contents. While it has a measurable impact on the
accelerator table emission, it's not showing up significantly in clang
profiles (and I'd argue that correctness is priceless :-)).
llvm-svn: 231732
This fixes a subtle issue that was introduced in r205153.
When reusing a store for the extractelement expansion (to load directly
from it, instead of going through the stack), later stores to the
same location might have overwritten the data we were expecting to
extract from.
To fix that, we need to explicitly replace the chain going out of the
reused store, so that later stores also have an explicit dependency on
the generated element-extracting loads, and can't clobber them.
rdar://20066785
Differential Revision: http://reviews.llvm.org/D8180
llvm-svn: 231721
Fix the double-deletion of AnalysisResolver when delegating through to
Dwarf EH preparation by creating one from scratch. Hopefully the new
pass manager simplifies this.
This reverts commit r229952.
llvm-svn: 231719
Summary:
This removes some duplicated code, and also helps optimization: e.g. in
the test case added, `%idx ULT 128` in `@x` is not currently optimized
to `true` by `-indvars` but will be, after this change.
The only functional change in this commit is that for add recurrences,
ScalarEvolution::getRange will be more aggressive -- computing the
unsigned (resp. signed) range for a SCEVAddRecExpr will now look at the
NSW (resp. NUW) bits and check for signed (resp. unsigned) overflow.
This can be a strict improvement in some cases (such as the attached
test case), and should be no worse in other cases.
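A hedged, hand-written C analogue (not the `@x` test from the patch) of the kind of comparison that becomes foldable:
int all_below_128(int n) {
  if (n > 100)
    n = 100;
  for (int i = 0; i < n; ++i)   /* i is an add recurrence bounded below 128 */
    if (!(i < 128))             /* analogous to `%idx ULT 128`; provably never taken */
      return 0;
  return 1;
}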
Reviewers: atrick, nlewycky
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8142
llvm-svn: 231709
Summary:
Unused in this commit, but will be used in a subsequent change (D8142)
by a FileCheck test.
Reviewers: atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8143
llvm-svn: 231708
In the case where just the tables are part of the function section, this produces
more readable assembly by avoiding switching to the EH section and back
to .text.
This would also break with non-unique section names, as trying to switch to
a unique section actually creates a new one.
llvm-svn: 231677
Summary:
Code is mostly copied from AArch64 port and modified where needed for Mips.
This handles the non-legal cases of logical ops. Legal cases are handled by tablegen patterns.
Test Plan:
Make check test logopm.ll
All of test-suite passes at O0/O2 and mips32 r1/r2 with this new change.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: echristo, llvm-commits, aemerson, rfuhler
Differential Revision: http://reviews.llvm.org/D6599
llvm-svn: 231665
Fixing this also exposed a related issue where the landingpad under construction was not
cleaned up when an error was raised, which would cause bad reference errors before the
error could actually be printed.
llvm-svn: 231634
For the inner one of nested loops, it is more likely to be a hot loop,
and the runtime check can be promoted out by patch 0001, so the
overhead is lower; we can try a doubled threshold to unroll more loops.
llvm-svn: 231632
Runtime unrolling is an expensive optimization which can bring benefit
only if the loop is hot and the iteration count is relatively large.
For some loops, we know they are not worth runtime unrolling.
The scalar loop from vectorization is one of those cases.
llvm-svn: 231631
Runtime unrolling will introduce a runtime check in the loop prologue.
If the unrolled loop is an inner loop, then the prologue will be inside
the outer loop. The LICM pass can help to promote the runtime check out if
the checked value is loop invariant.
llvm-svn: 231630
Summary:
See the two test cases.
; Can fold fcmp with undef on one side by choosing NaN for the undef
; Can fold fcmp with undef on both side
; fcmp u_pred undef, undef -> true
; fcmp o_pred undef, undef -> false
; because whatever you choose for the first undef
; you can choose NaN for the other undef
Reviewers: hfinkel, chandlerc, majnemer
Reviewed By: majnemer
Subscribers: majnemer, llvm-commits
Differential Revision: http://reviews.llvm.org/D7617
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 231626
There were cases where the backend computed a wrong permute mask for a VPERM2X128 node.
Example:
\code
define <8 x float> @foo(<8 x float> %a, <8 x float> %b) {
%shuffle = shufflevector <8 x float> %a, <8 x float> %b, <8 x i32> <i32 undef, i32 undef, i32 6, i32 7, i32 undef, i32 undef, i32 6, i32 7>
ret <8 x float> %shuffle
}
\code end
Before this patch, llc (with -mattr=+avx) emitted the following vperm2f128:
vperm2f128 $0, %ymm0, %ymm0, %ymm0 # ymm0 = ymm0[0,1,0,1]
With this patch, llc emits a vperm2f128 with a correct permute mask:
vperm2f128 $17, %ymm0, %ymm0, %ymm0 # ymm0 = ymm0[2,3,2,3]
Differential Revision: http://reviews.llvm.org/D8119
llvm-svn: 231601
Provide basic support for dynamically loadable coff objects. Only handles a subset of x64 currently.
Patch by Andy Ayers!
Differential Revision: http://reviews.llvm.org/D7793
llvm-svn: 231574
This patch fixes the logic in the DAGCombiner that folds an AND node according
to rule: (and (X (load V)), C) -> (X (load V))
An AND between a vector load 'X' and a constant build_vector 'C' can be folded
into the load itself only if we can prove that the AND operation is redundant.
The algorithm implemented by 'visitAND' firstly computes the splat value 'S'
from C, and then checks if S has the lower 'B' bits set (where B is the size in
bits of the vector element type). The algorithm takes into account also the
'undef' bits in the splat mask.
Unfortunately, the algorithm only worked under the assumption that the size of S
is a multiple of the size of the vector element type. With this patch, we conservatively
avoid folding the AND if the splat bits are not compatible with the vector
element type.
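As a hedged sketch (the helper name and the use of plain C integers are mine), the compatibility check amounts to: for a zero-extended B-bit element, the AND is a no-op only when the low B bits of the splat are all ones.
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True if ANDing a zero-extended B-bit element with 'splat' cannot change it. */
static bool and_is_redundant(uint64_t splat, unsigned elt_bits) {
  uint64_t low_mask =
      elt_bits >= 64 ? ~(uint64_t)0 : (((uint64_t)1 << elt_bits) - 1);
  return (splat & low_mask) == low_mask;
}

int main(void) {
  printf("%d\n", and_is_redundant(0xFF, 8));  /* 1: keeps all 8 element bits */
  printf("%d\n", and_is_redundant(0xFF, 16)); /* 0: would clear bits 8..15 */
  return 0;
}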
Added X86 test and-load-fold.ll
Differential Revision: http://reviews.llvm.org/D8085
llvm-svn: 231563
This patch attempts to convert a SCALAR_TO_VECTOR using an operand from an EXTRACT_VECTOR_ELT into a VECTOR_SHUFFLE.
This prevents many cases of spilling scalar data between the gpr + simd registers.
At present the optimization only accepts cases where there is no TRUNC of the scalar type (i.e. all types must match).
Differential Revision: http://reviews.llvm.org/D8132
llvm-svn: 231554
Doing this gets a function's low_pc and a global variable's location right
in the output debug info. It could also get right other attributes
that need to be relocated (in linker terms), but I don't know of any
other than the address attributes.
This doesn't fix up low_pc attributes in compile_unit, lexical_block
or inlined subroutine, nor does it get right high_pc attributes
for functions. This will come in a subsequent commit.
llvm-svn: 231544
to disable lane switching if we don't actually have the instruction
set we want to switch to. Models the earlier check above the
conditional for the pass.
The testcase is one that triggered the assert that's added
as part of the fix; use it to avoid adding a new testcase, as it
highlights the same problem.
llvm-svn: 231539
Reference attributes are mainly handled by just creating DIEEntry
attributes for them. There is a special case for DW_FORM_ref_addr
attributes though, because the DIEEntry code needs a DwarfDebug
object to emit them (and we don't have one as we do no CodeGen).
In that case, just use DIEInteger attributes with the right form.
llvm-svn: 231531
Teach the load store optimizer how to sign extend a result of a load pair when
it helps creating more pairs.
The rationale is that loads are more expensive than sign extensions, so if we
gather some in one instruction this is better!
<rdar://problem/20072968>
llvm-svn: 231527
This is based on the following equivalences:
select(C0 & C1, X, Y) <=> select(C0, select(C1, X, Y), Y)
select(C0 | C1, X, Y) <=> select(C0, X, select(C1, X, Y))
Many targets cannot perform and/or on the CPU flags and therefore the
right side should be chosen to avoid materializing the i1 flags in an
integer register. If the target can perform this operation efficiently,
we normalize to the left form.
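A hedged C rendering of the two equivalences above (ordinary 0/1 ints standing in for the i1 conditions; the function names are illustrative):
/* select(C0 & C1, X, Y) <=> select(C0, select(C1, X, Y), Y) */
int sel_and(int c0, int c1, int x, int y)        { return (c0 && c1) ? x : y; }
int sel_and_nested(int c0, int c1, int x, int y) { return c0 ? (c1 ? x : y) : y; }

/* select(C0 | C1, X, Y) <=> select(C0, X, select(C1, X, Y)) */
int sel_or(int c0, int c1, int x, int y)         { return (c0 || c1) ? x : y; }
int sel_or_nested(int c0, int c1, int x, int y)  { return c0 ? x : (c1 ? x : y); }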
Differential Revision: http://reviews.llvm.org/D7622
llvm-svn: 231507
The checking for extgotequiv and localgotequiv relies on the emission
order, which is not guaranteed because we use a DenseMap to hold the GOT
equivalents. XFAIL this now until I get time to use MapVector and test
out the solution. In the meantime, appease buildbots.
llvm-svn: 231497
Add MachO 32-bit (i.e. arm and x86) support for replacing global GOT equivalent
symbol accesses. Unlike 64-bit targets, there's no GOTPCREL relocation, and
access through a non_lazy_symbol_pointers section is used instead.
-- before
_extgotequiv:
.long _extfoo
_delta:
.long _extgotequiv-_delta
-- after
_delta:
.long L_extfoo$non_lazy_ptr-_delta
.section __IMPORT,__pointers,non_lazy_symbol_pointers
L_extfoo$non_lazy_ptr:
.indirect_symbol _extfoo
.long 0
llvm-svn: 231475
Follow up r230264 and add ARM64 support for replacing global GOT
equivalent symbol accesses by references to the GOT entry for the final
symbol instead, example:
-- before
.globl _foo
_foo:
.long 42
.globl _gotequivalent
_gotequivalent:
.quad _foo
.globl _delta
_delta:
.long _gotequivalent-_delta
-- after
.globl _foo
_foo:
.long 42
.globl _delta
Ltmp3:
.long _foo@GOT-Ltmp3
llvm-svn: 231474
Summary:
None of the .set directives can be used before the .module directives. The .set mips0/pop/push directives were not triggering this constraint.
Also added testing for all the other implemented directives which are supposed to trigger this constraint.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7140
llvm-svn: 231465
This pass interchanges loops to provide a more cache-friendly memory access.
For example, given a loop like:
for(int i=0;i<N;i++)
  for(int j=0;j<N;j++)
    A[j][i] = A[j][i]+B[j][i];
it is interchanged to:
for(int j=0;j<N;j++)
  for(int i=0;i<N;i++)
    A[j][i] = A[j][i]+B[j][i];
This pass is currently disabled by default.
To give a brief introduction, it consists of 3 stages:
LoopInterchangeLegality : Checks the legality of loop interchange based on Dependency matrix.
LoopInterchangeProfitability: A very basic heuristic has been added to check for profitability. This will evolve over time.
LoopInterchangeTransform : Which does the actual transform.
LNT Performance tests show improvement in the Polybench/linear-algebra/kernels/mvt and Polybench/linear-algebra/kernels/gemver benchmarks.
TODO:
1) Add support for reductions and lcssa phi.
2) Improve profitability model.
3) Improve loop selection algorithm to select best loop for interchange. Currently the innermost loop is selected for interchange.
4) Improve compile time regression found in llvm lnt due to this pass.
5) Fix issues in Dependency Analysis module.
A special thanks to Hal for reviewing this code.
Review: http://reviews.llvm.org/D7499
llvm-svn: 231458
We supported forming IMGREL relocations from ConstantExprs involving
__ImageBase if the minuend was a GlobalVariable. Extend this
functionality to all GlobalObjects.
llvm-svn: 231456
We would set the body of a struct type (therefore making it non-opaque)
but were forgetting to move it to the non-opaque set.
Fixes pr22807.
llvm-svn: 231442
At this point, we should have decent coverage of the involved code. I've got a few more test cases to cleanup and submit, but what's here is already reasonable.
I've got a collection of liveness tests which will be posted for review along with a decent liveness algorithm in the next few days. Once those are in, the code in this file should be well tested and I can start renaming things without risk of serious breakage.
llvm-svn: 231414
This patch reduces code size for all AVX targets and increases speed for some chips.
SSE 4.1 introduced the useless (see code comments) 2-register form of BLENDV and
only in the packed float/double flavors.
AVX subsequently made the instruction useful by adding a 4-register operand form.
So we just need to paper over the lack of scalar forms of this instruction, complicate
the code to choose float or double forms, and use blendv on scalars since all FP is in
xmm registers anyway.
This gives us an approximately 50% speed up for a blendv microbenchmark sequence
on SandyBridge and Haswell:
blendv : 29.73 cycles/iter
logic : 43.15 cycles/iter
No new test cases with this patch because:
1. fast-isel-select-sse.ll tests the positive side for regular X86 lowering and fast-isel
2. sse-minmax.ll and fp-select-cmp-and.ll confirm that we're not firing for scalar selects without AVX
3. fp-select-cmp-and.ll and logical-load-fold.ll confirm that we're not firing for scalar selects with constants.
http://llvm.org/bugs/show_bug.cgi?id=22483
Differential Revision: http://reviews.llvm.org/D8063
llvm-svn: 231408
This commit enables forming vector extloads for ARM.
It only does so for legal types, and when we can't fold the extension
in a wide/long form of the user instruction.
Enabling it for larger types isn't as good an idea on ARM as it is on
X86, because:
- we pretend that extloads are legal, but end up generating vld+vmov
- we have instructions like vld {dN, dM}, which can't be generated
when we "manually expand" extloads to vld+vmov.
For legal types, the combine doesn't fire that often: in the
integration tests only in a big endian testcase, where it removes a
pointless AND.
Related to rdar://19723053
Differential Revision: http://reviews.llvm.org/D7423
llvm-svn: 231396
Currently shuffles may only be combined if they are of the same type, despite the fact that bitcasts are often introduced in between shuffle nodes (e.g. x86 shuffle type widening).
This patch allows a single input shuffle to peek through bitcasts and if the input is another shuffle will merge them, shuffling using the smallest sized type, and re-applying the bitcasts at the inputs and output instead.
Dropped old ShuffleToZext test - this patch removes the use of the zext and vector-zext.ll covers these anyhow.
Differential Revision: http://reviews.llvm.org/D7939
llvm-svn: 231380
Added lowering for ISD::CONCAT_VECTORS and ISD::INSERT_SUBVECTOR for i1 vectors;
it is needed to pass all masked_memop.ll tests for SKX.
llvm-svn: 231371
Also it extracts getCopyFromRegs helper function in SelectionDAGBuilder as we need to be able to customize type of the register exported from basic block during lowering of the gc.result.
llvm-svn: 231366
isNormalFp and isFiniteNonZeroFp should not assume vector operands can not be constant expressions.
Patch by Pawel Jurek <pawel.jurek@intel.com>
Differential Revision: http://reviews.llvm.org/D8053
llvm-svn: 231359
just arbitrarily interleaving unrelated control flows once they get
moved "out-of-line" (both outside of natural CFG ordering and with
diamonds that cannot be fully laid out by chaining fallthrough edges).
This easy solution doesn't work in practice, and it isn't just a small
bug. It looks like a very different strategy will be required. I'm
working on that now, and it'll again go behind some flag so that
everyone can experiment and make sure it is working well for them.
llvm-svn: 231332
Improve test robustness in preparation for coming commits:
- Avoid undefs which may get propagated too much.
- Remove several pointless 'add 0' instructions.
llvm-svn: 231307
Summary:
Teach SCEV to prove no overflow for an add recurrence by proving
something about the range of another add recurrence a loop-invariant
distance away from it.
Reviewers: atrick, hfinkel
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7980
llvm-svn: 231305
This commit adds code to emit DIE trees that have been pruned from the
parts that haven't been marked as kept in the previous pass.
It works by 'cloning' the input DIE tree (as read by libDebugInfoDwarf)
into a tree of DIE objects. Cloning the DIEs means essentially cloning
their attributes. The code in this commit only handles scalar and
block attributes (scalar because they are trivial, blocks because they
can't be easily replaced by a scalar placeholder); all the other ones
are replaced by placeholder zero values and will be handled in
further commits.
The added tests mostly check that the DIE tree has the correct layout and
also verify that a few chosen scalar and block attributes correctly make
their way into the output.
llvm-svn: 231300
The issue was that we were always printing the remarks. Fix that and add a test
showing that it prints nothing if -pass-remarks is not given.
Original message:
Correctly handle -pass-remarks in the gold plugin.
llvm-svn: 231273
Summary:
DataLayout keeps the string used for its creation.
As a side effect it is no longer needed in the Module.
This is "almost" NFC, the string is no longer
canonicalized, you can't rely on two "equals" DataLayout
having the same string returned by getStringRepresentation().
Get rid of DataLayoutPass: the DataLayout is in the Module
The DataLayout is "per-module", let's enforce this by not
duplicating it more than necessary.
One more step toward non-optionality of the DataLayout in the
module.
Make DataLayout Non-Optional in the Module
Module->getDataLayout() will never return nullptr anymore.
Reviewers: echristo
Subscribers: resistor, llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D7992
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 231270
The target-independent selection algorithm in FastISel already knows how
to select a SINT_TO_FP if the target is SSE but not AVX.
On targets that have SSE but not AVX, the tablegen'd 'fastEmit' functions
for ISD::SINT_TO_FP know how to select instruction X86::CVTSI2SSrr
(for an i32 to f32 conversion) and X86::CVTSI2SDrr (for an i32 to f64
conversion).
This patch simplifies the logic in method X86SelectSIToFP knowing that
the code would not be reachable if the subtarget doesn't have AVX.
No functional change intended.
llvm-svn: 231243
Do not instrument direct accesses to stack variables that can be
proven to be inbounds, e.g. accesses to fields of structs on stack.
But it eliminates 33% of instrumentation on webrtc/modules_unittests
(the number of memory accesses goes down from 290152 to 193998),
reduces binary size by 15% (from 74M to 64M), and improves compilation time by 6-12%.
The optimization is guarded by asan-opt-stack flag that is off by default.
http://reviews.llvm.org/D7583
llvm-svn: 231241
a flag for now.
First off, thanks to Daniel Jasper for really pointing out the issue
here. It's been here forever (at least, I think it was there when
I first wrote this code) without getting really noticed or fixed.
The key problem is what happens when two reasonably common patterns
happen at the same time: we outline multiple cold regions of code, and
those regions in turn have diamonds or other CFGs for which we can't
just topologically lay them out. Consider some C code that looks like:
if (a1()) { if (b1()) c1(); else d1(); f1(); }
if (a2()) { if (b2()) c2(); else d2(); f2(); }
done();
Now consider the case where a1() and a2() are unlikely to be true. In
that case, we might lay out the first part of the function like:
a1, a2, done;
And then we will be out of successors in which to build the chain. We go
to find the best block to continue the chain with, which is perfectly
reasonable here, and find "b1" let's say. Laying out successors gets us
to:
a1, a2, done; b1, c1;
At this point, we will refuse to lay out the successor to c1 (f1)
because there are still un-placed predecessors of f1 and we want to try
to preserve the CFG structure. So we go get the next best block, d1.
... wait for it ...
Except that the next best block *isn't* d1. It is b2! d1 is waaay down
inside these conditionals. It is much less important than b2. Except
that this is exactly what we didn't want. If we keep going we get the
entire set of the rest of the CFG *interleaved*!!!
a1, a2, done; b1, c1; b2, c2; d1, f1; d2, f2;
So we clearly need a better strategy here. =] My current favorite
strategy is to actually try to place the block whose predecessor is
closest. This very simply ensures that we unwind these kinds of CFGs the
way that is natural and fitting, and should minimize the number of cache
lines instructions are spread across.
It also happens to be *dead simple*. It's like the datastructure was
specifically set up for this use case or something. We only push blocks
onto the work list when the last predecessor for them is placed into the
chain. So the back of the worklist *is* the nearest next block.
Unfortunately, a change like this is going to cause *soooo* many
benchmarks to swing wildly. So for now I'm adding this under a flag so
that we and others can validate that this is fixing the problems
described, that it seems possible to enable, and hopefully that it fixes
more of our problems long term.
llvm-svn: 231238
In a CFG with the edges A->B->C and A->C, B is an optional branch.
LLVM's default behavior is to lay the blocks out naturally, i.e. A, B,
C, in order to improve code locality and fallthroughs. However, if a
function contains many of those optional branches only a few of which
are taken, this leads to a lot of unnecessary icache misses. Moving B
out of line can work around this.
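A hedged C sketch of that CFG shape (the function names are placeholders, not from the patch):
void rare_work(void);   /* B: the optional, rarely taken block */
void common_work(void); /* C */

void f(int cond) {
  /* A */
  if (cond)
    rare_work();        /* moving this out of line keeps A and C adjacent */
  common_work();
}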
Review: http://reviews.llvm.org/D7719
llvm-svn: 231230
As is described at http://llvm.org/bugs/show_bug.cgi?id=22408, the GNU linkers
ld.bfd and ld.gold currently only support a subset of the whole range of AArch64
ELF TLS relocations. Furthermore, they assume that some of the code sequences to
access thread-local variables are produced in a very specific sequence.
When the sequence is not as the linker expects, it can silently mis-relax/mis-optimize
the instructions.
Even if that wouldn't be the case, it's good to produce the exact sequence,
as that ensures that linkers can perform optimizing relaxations.
This patch:
* implements support for 16MiB TLS area size instead of 4GiB TLS area size. Ideally clang
would grow an -mtls-size option to allow support for both, but that's not part of this patch.
* by default doesn't produce local dynamic access patterns, as even modern ld.bfd and ld.gold
linkers do not support the associated relocations. An option (-aarch64-elf-ldtls-generation)
is added to enable generation of local dynamic code sequence, but is off by default.
* makes sure that the exact expected code sequence for local dynamic and general dynamic
accesses is produced, by making use of a new pseudo instruction. The patch also removes
two (AArch64ISD::TLSDESC_BLR, AArch64ISD::TLSDESC_CALL) pre-existing AArch64-specific pseudo
SDNode instructions that are superseded by the new one (TLSDESC_CALLSEQ).
llvm-svn: 231227
When trying to convert a BUILD_VECTOR into a shuffle, we try to split a single source vector that is twice as wide as the destination vector.
We can not do this when we also need the zero vector to create a blend.
This fixes PR22774.
Differential Revision: http://reviews.llvm.org/D8040
llvm-svn: 231219
This will now display enum definitions both at the global
scope as well as nested inside of classes. Additionally,
it will no longer display enums at the global scope if the
enum is nested. Instead, it will omit the definition of
the enum globally and instead emit it in the corresponding
class definition.
llvm-svn: 231215
RewriteStatepointsForGC pass emits an alloca for each GC pointer which will be relocated. It then inserts stores after def and all relocations, and inserts loads before each use as well. In the end, mem2reg is used to update IR with relocations in SSA form.
However, there is a problem with inserting stores for values defined by invoke instructions. The code didn't expect that a def could be a terminator instruction, and inserting instructions after these terminators resulted in malformed IR.
This patch fixes this problem by handling invoke instructions as a special case. If the def is an invoke instruction, the store will be inserted at the beginning of the normal destination block. Since return value from invoke instruction does not dominate the unwind destination block, no action is needed there.
Patch by: Chen Li
Differential Revision: http://reviews.llvm.org/D7923
llvm-svn: 231183
The intrinsic is no longer generated by the front-end. Remove the intrinsic and
auto-upgrade it to a vector shuffle.
Reviewed by Nadav
This is related to rdar://problem/18742778.
llvm-svn: 231182
Introduce -mllvm -sanitizer-coverage-8bit-counters=1
which adds imprecise thread-unfriendly 8-bit coverage counters.
The run-time library maps these 8-bit counters to 8-bit bitsets in the same way
AFL (http://lcamtuf.coredump.cx/afl/technical_details.txt) does:
counter values are divided into 8 ranges and based on the counter
value one of the bits in the bitset is set.
The AFL ranges are used here: 1, 2, 3, 4-7, 8-15, 16-31, 32-127, 128+.
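A hedged sketch of that mapping in C (this is my reading of the description above, not the run-time library's actual code):
#include <stdint.h>

/* Map an 8-bit counter into one set bit of an 8-bit bitset, using the
 * AFL-style ranges 1, 2, 3, 4-7, 8-15, 16-31, 32-127, 128+. */
static uint8_t counter_to_bitset(uint8_t counter) {
  unsigned bit;
  if (counter == 0)       return 0;
  else if (counter == 1)  bit = 0;
  else if (counter == 2)  bit = 1;
  else if (counter == 3)  bit = 2;
  else if (counter < 8)   bit = 3;
  else if (counter < 16)  bit = 4;
  else if (counter < 32)  bit = 5;
  else if (counter < 128) bit = 6;
  else                    bit = 7;
  return (uint8_t)(1u << bit);
}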
These counters provide a search heuristic for single-threaded
coverage-guided fuzzers; we do not expect them to be useful for other purposes.
Depending on the value of -fsanitize-coverage=[123] flag,
these counters will be added to the function entry blocks (=1),
every basic block (=2), or every edge (=3).
Use these counters as an optional search heuristic in the Fuzzer library.
Add a test where this heuristic is critical.
llvm-svn: 231166
Ultimately, we'll need to leave something behind to indicate which
alloca will hold the exception, but we can figure that out when it comes
time to emit the __CxxFrameHandler3 catch handler table.
llvm-svn: 231164
Selection conditions may be vectors or scalars. Make sure InstCombine
doesn't indiscriminately assume that a select which is value dependent
on another select has identical select condition types.
This fixes PR22773.
llvm-svn: 231156
From:
int M, total;
void foo() {
int i;
for (i = 0; i < M; i++) {
total = total + i / 2;
}
}
This is the kernel loop:
.LBB0_2: # %for.body
=>This Inner Loop Header: Depth=1
movl %edx, %esi
movl %ecx, %edx
shrl $31, %edx
addl %ecx, %edx
sarl %edx
addl %esi, %edx
incl %ecx
cmpl %eax, %ecx
jl .LBB0_2
--------------------------
The first mov insn "movl %edx, %esi" could be removed if we change "addl %esi, %edx" to "addl %edx, %esi".
The IR before TwoAddressInstructionPass is:
BB#2: derived from LLVM BB %for.body
Predecessors according to CFG: BB#1 BB#2
%vreg3<def> = COPY %vreg12<kill>; GR32:%vreg3,%vreg12
%vreg2<def> = COPY %vreg11<kill>; GR32:%vreg2,%vreg11
%vreg7<def,tied1> = SHR32ri %vreg3<tied0>, 31, %EFLAGS<imp-def,dead>; GR32:%vreg7,%vreg3
%vreg8<def,tied1> = ADD32rr %vreg3<tied0>, %vreg7<kill>, %EFLAGS<imp-def,dead>; GR32:%vreg8,%vreg3,%vreg7
%vreg9<def,tied1> = SAR32r1 %vreg8<kill,tied0>, %EFLAGS<imp-def,dead>; GR32:%vreg9,%vreg8
%vreg4<def,tied1> = ADD32rr %vreg9<kill,tied0>, %vreg2<kill>, %EFLAGS<imp-def,dead>; GR32:%vreg4,%vreg9,%vreg2
%vreg5<def,tied1> = INC64_32r %vreg3<kill,tied0>, %EFLAGS<imp-def,dead>; GR32:%vreg5,%vreg3
CMP32rr %vreg5, %vreg0, %EFLAGS<imp-def>; GR32:%vreg5,%vreg0
%vreg11<def> = COPY %vreg4; GR32:%vreg11,%vreg4
%vreg12<def> = COPY %vreg5<kill>; GR32:%vreg12,%vreg5
JL_4 <BB#2>, %EFLAGS<imp-use,kill>
Now TwoAddressInstructionPass will choose vreg9 to be tied with vreg4. However, it doesn't see that there is a copy from vreg4 to vreg11 and another copy from vreg11 to vreg2 inside the loop body. To remove those copies, it is necessary to choose vreg2 to be tied with vreg4 instead of vreg9. This code pattern commonly appears when there is a reduction operation in a loop.
So check for a reversed copy chain, and if we encounter one we can commute the add instruction to avoid a copy.
Patch by Wei Mi.
http://reviews.llvm.org/D7806
llvm-svn: 231148
Previously we had only Linux using DTPOFF for these; all X86 ELF
targets should. Fixes a side issue mentioned in PR21077.
Differential Revision: http://reviews.llvm.org/D8011
llvm-svn: 231130
frame register before checking if there is a DWARF register number for it.
Thanks to H.J. Lu for diagnosing this and providing the testcase!
llvm-svn: 231121
Ultimately, __CxxFrameHandler3 needs us to put a stack offset in a
table, and it will take responsibility for copying the exception object
into that slot. Modelling the exception object as an SSA value returned
by begincatch isn't going to work in general, so make it use an output
parameter.
Reviewers: andrew.w.kaylor
Differential Revision: http://reviews.llvm.org/D7920
llvm-svn: 231086
Move the specialized metadata nodes for the new debug info hierarchy
into place, finishing off PR22464. I've done bootstraps (and all that)
and I'm confident this commit is NFC as far as DWARF output is
concerned. Let me know if I'm wrong :).
The code changes are fairly mechanical:
- Bumped the "Debug Info Version".
- `DIBuilder` now creates the appropriate subclass of `MDNode`.
- Subclasses of DIDescriptor now expect to hold their "MD"
counterparts (e.g., `DIBasicType` expects `MDBasicType`).
- Deleted a ton of dead code in `AsmWriter.cpp` and `DebugInfo.cpp`
for printing comments.
- Big update to LangRef to describe the nodes in the new hierarchy.
Feel free to make it better.
Testcase changes are enormous. There's an accompanying clang commit on
its way.
If you have out-of-tree debug info testcases, I just broke your build.
- `upgrade-specialized-nodes.sh` is attached to PR22564. I used it to
update all the IR testcases.
- Unfortunately I failed to find a way to script the updates to CHECK
lines, so I updated all of these by hand. This was fairly painful,
since the old CHECKs are difficult to reason about. That's one of
the benefits of the new hierarchy.
This work isn't quite finished, BTW. The `DIDescriptor` subclasses are
almost empty wrappers, but not quite: they still have loose casting
checks (see the `RETURN_FROM_RAW()` macro). Once they're completely
gutted, I'll rename the "MD" classes to "DI" and kill the wrappers. I
also expect to make a few schema changes now that it's easier to reason
about everything.
llvm-svn: 231082
This prevents the behavior observed in llvm.org/PR22369. I am not sure
whether I am reading the code correctly, but the early exit based on
isLiveOutPastPHIs() seems to make the wrong assumption that
RegisterCoalescer won't be able to coalesce those copies later.
This change hides the new behavior behind -no-phi-elim-live-out-early-exit
as it currently breaks four tests:
* Assertion in:
CodeGen/Hexagon/hwloop-cleanup.ll
* Worse code in:
CodeGen/X86/coalescer-commute4.ll
CodeGen/X86/phys_subreg_coalesce-2.ll
CodeGen/X86/zlib-longest-match.ll
The root cause here seems to be that the heuristic that determines
the visitation order in RegisterCoalescer gets less lucky.
llvm-svn: 231064
This lets us avoid a few copies that are otherwise hard to get rid of.
The way this is done is, the custom-inserter looks at the following
instruction for another CMOV, and replaces both at the same time.
A previous version used a new CMOV2 opcode, but the custom inserter
is expected to be able to return a different basic block anyway, which
means it's OK - though far from ideal - to alter that block's contents.
Explicitly document that, in case it ever makes a difference.
Alternatives welcome!
Follow-up to r231045.
rdar://19767934
Closes http://reviews.llvm.org/D8019
llvm-svn: 231046
Fold and/or of setcc's to double CMOV:
(CMOV F, T, ((cc1 | cc2) != 0)) -> (CMOV (CMOV F, T, cc1), T, cc2)
(CMOV F, T, ((cc1 & cc2) != 0)) -> (CMOV (CMOV T, F, !cc1), F, !cc2)
When we can't use the CMOV instruction, it might increase branch
mispredicts. When we can, or when there is no mispredict, this
improves throughput and reduces register pressure.
These can't be caught by generic combines, because the pattern can
appear when legalizing some instructions (such as fcmp une).
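A hedged scalar C rendering of the two folds, reading CMOV(F, T, cc) as cc ? T : F and assuming cc1/cc2 are 0-or-1 flag values (this only demonstrates the equivalences, not the DAG code):
/* (CMOV F, T, cc1 | cc2) -> (CMOV (CMOV F, T, cc1), T, cc2) */
int cmov_or(int f, int t, int cc1, int cc2)        { return (cc1 | cc2) ? t : f; }
int cmov_or_double(int f, int t, int cc1, int cc2) {
  int inner = cc1 ? t : f;       /* CMOV F, T, cc1 */
  return cc2 ? t : inner;        /* CMOV inner, T, cc2 */
}

/* (CMOV F, T, cc1 & cc2) -> (CMOV (CMOV T, F, !cc1), F, !cc2) */
int cmov_and(int f, int t, int cc1, int cc2)        { return (cc1 & cc2) ? t : f; }
int cmov_and_double(int f, int t, int cc1, int cc2) {
  int inner = !cc1 ? f : t;      /* CMOV T, F, !cc1 */
  return !cc2 ? f : inner;       /* CMOV inner, F, !cc2 */
}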
rdar://19767934
http://reviews.llvm.org/D7634
llvm-svn: 231045
By loading from indexed offsets into a byte array and applying a mask, a
program can test bits from the bit set with a relatively short instruction
sequence. For example, suppose we have 15 bit sets to lay out:
A (16 bits), B (15 bits), C (14 bits), D (13 bits), E (12 bits),
F (11 bits), G (10 bits), H (9 bits), I (7 bits), J (6 bits), K (5 bits),
L (4 bits), M (3 bits), N (2 bits), O (1 bit)
These bits can be laid out in a 16-byte array like this:
Byte Offset
0123456789ABCDEF
Bit
7 HHHHHHHHHIIIIIII
6 GGGGGGGGGGJJJJJJ
5 FFFFFFFFFFFKKKKK
4 EEEEEEEEEEEELLLL
3 DDDDDDDDDDDDDMMM
2 CCCCCCCCCCCCCCNN
1 BBBBBBBBBBBBBBBO
0 AAAAAAAAAAAAAAAA
For example, to test bit X of A, we evaluate ((bits[X] & 1) != 0), or to
test bit X of I, we evaluate ((bits[9 + X] & 0x80) != 0). This can be done
in 1-2 machine instructions on x86, or 4-6 instructions on ARM.
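A hedged C sketch of those two checks, assuming the 16-byte layout shown above (the array and function names are illustrative):
#include <stdint.h>

/* A occupies bit 0 of bytes 0..15; I occupies bit 7 of bytes 9..15. */
static int test_bit_of_A(const uint8_t bits[16], unsigned x) {
  return (bits[x] & 1) != 0;
}
static int test_bit_of_I(const uint8_t bits[16], unsigned x) {
  return (bits[9 + x] & 0x80) != 0;
}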
This uses the LPT multiprocessor scheduling algorithm to lay out the bits
efficiently.
Saves ~450KB of instructions in a recent build of Chromium.
Differential Revision: http://reviews.llvm.org/D7954
llvm-svn: 231043
There's really no reason to have them have entries in the symbol table
anymore. Old versions of ld64 had some bugs in this area but those have
been fixed long ago.
llvm-svn: 231041
In the future, we should run the output of clang through instnamer to
make it easier to manually edit test cases.
No functionality change.
llvm-svn: 231037
TargetRegisterInfo. DebugLocEntry now holds a buffer with the raw bytes
of the pre-calculated DWARF expression.
Ought to be NFC, but it does slightly alter the output format of the
textual assembly.
This reapplies 230930 without the assertion in DebugLocEntry::finalize()
because not all Machine registers can be lowered into DWARF register
numbers and floating point constants cannot be expressed.
llvm-svn: 231023
This re-lands change r230921. r230921 was reverted because it broke a
clang test; a checkin fixing the clang test will be commited shortly.
Summary:
As far as I can tell, the real bug causing the issue was fixed in
r230533. SCEVExpander should mark an increment operation as nuw or nsw
only if it can *prove* that the operation does not overflow. There
shouldn't be any situation where we have to do something different
because of no-wrap flags generated by SCEVExpander.
Revert "IndVarSimplify: Allow LFTR to fire more often"
This reverts commit 1ade0f0faa98877b688e0b9da58e876052c1e04e (SVN: 222213).
Revert "IndVarSimplify: Don't let LFTR compare against a poison value"
This reverts commit c0f2b8b528d8a37b0a1522aae90af649d6357eb5 (SVN: 217102).
Reviewers: majnemer, atrick, spatel
Differential Revision: http://reviews.llvm.org/D7979
llvm-svn: 231018
The internal shell was already threading around a 'cwd' parameter. We
just have to make it mutable so that we can update it as the test script
executes.
If the shell ever grows support for environment variable substitution,
we could also implement support for export.
llvm-svn: 231017
The issue is that now we have a diag handler during optimizations
and every optimization remark gets forwarded to it, flooding stdout.
The same filtering should probably be done with or without a
custom handler, but for now just ignore remarks.
Original message:
gold-plugin: "Upgrade" debug info and handle its warnings.
The gold plugin never calls MaterializeModule, so any old debug info
was not deleted and could cause crashes.
Now that it is being "upgraded", the plugin also has to handle warnings
and create Modules with a nice id (it shows in the warning).
llvm-svn: 230991
TargetRegisterInfo. DebugLocEntry now holds a buffer with the raw bytes
of the pre-calculated DWARF expression.
Ought to be NFC, but it does slightly alter the output format of the
textual assembly.
This reapplies 230930 with a relaxed assertion in DebugLocEntry::finalize()
that allows for empty DWARF expressions for constant FP values.
llvm-svn: 230975
Summary:
When the RHS of a conditional move node is zero, we can utilize the $zero
register by inverting the conditional move instruction and by swapping the
order of its True/False operands.
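A hedged C sketch of the idea (the movz comment describes standard MIPS conditional-move semantics; the exact instruction choice here is my illustration, not necessarily the backend's):
int sel(int cond, int x) {   /* computes cond ? x : 0 */
  int d = x;                 /* start from the True value */
  if (cond == 0)             /* movz d, $zero, cond : d = $zero if cond == 0 */
    d = 0;
  return d;                  /* the zero comes from $zero, nothing to materialize */
}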
Reviewers: dsanders
Differential Revision: http://reviews.llvm.org/D7945
llvm-svn: 230956
Previously this would result in assertion failures or simply crashes
at various points in the optimizer when trying to create types of zero
bit width.
llvm-svn: 230936
A short list of some of the improvements:
1) Now supports -all command line argument, which implies many
other command line arguments to simplify usage.
2) Now supports -no-compiler-generated command line argument to
exclude compiler generated types.
3) Prints base class list.
4) -class-definitions implies -types.
5) Proper display of bitfields.
6) Can now distinguish between struct/class/interface/union.
And a few other minor tweaks.
llvm-svn: 230933
TargetRegisterInfo. DebugLocEntry now holds a buffer with the raw bytes
of the pre-calculated DWARF expression.
Ought to be NFC, but it does slightly alter the output format of the
textual assembly.
llvm-svn: 230930
Summary:
As far as I can tell, the real bug causing the issue was fixed in
r230533. SCEVExpander should mark an increment operation as nuw or nsw
only if it can *prove* that the operation does not overflow. There
shouldn't be any situation where we have to do something different
because of no-wrap flags generated by SCEVExpander.
Revert "IndVarSimplify: Allow LFTR to fire more often"
This reverts commit 1ade0f0faa98877b688e0b9da58e876052c1e04e (SVN: 222213).
Revert "IndVarSimplify: Don't let LFTR compare against a poison value"
This reverts commit c0f2b8b528d8a37b0a1522aae90af649d6357eb5 (SVN: 217102).
Reviewers: majnemer, atrick, spatel
Differential Revision: http://reviews.llvm.org/D7979
llvm-svn: 230921
We were missing a check for the following fold in DAGCombiner:
// fold (fmul (fmul x, c1), c2) -> (fmul x, (fmul c1, c2))
If 'x' is also a constant, then we shouldn't do anything. Otherwise, we could end up swapping the operands back and forth forever.
This should fix:
http://llvm.org/bugs/show_bug.cgi?id=22698
Differential Revision: http://reviews.llvm.org/D7917
llvm-svn: 230884
r228631 stopped using `DW_OP_piece` inside `DIExpression`s in the IR,
but it apparently missed updating these testcases. Caught by verifier
checks for `MDExpression` while working on moving the new hierarchy into
place.
llvm-svn: 230882
There are two types of files in the old (current) debug info schema.
!0 = !{!"some/filename", !"/path/to/dir"}
!1 = !{!"0x29", !0} ; [ DW_TAG_file_type ]
!1 has a wrapper class called `DIFile` which inherits from `DIScope` and
is referenced in 'scope' fields.
!0 is called a "file node", and debug info nodes with a 'file' field
point at one of these directly -- although they're built in `DIBuilder`
by sending in a `DIFile` and reaching into it.
In the new hierarchy, I unified these nodes as `MDFile` (which `DIFile`
is a lightweight wrapper for) in r230057. Moving the new hierarchy into
place (and upgrading testcases) caused CodeGen/X86/unknown-location.ll
to start failing -- apparently "0x29" was previously showing up in the
linetable as a filename, causing:
.loc 2 4 3
(where 2 points at filename "0x29") instead of:
.loc 1 4 3
(where 1 points at the actual filename).
Change the testcase to use the old schema correctly.
llvm-svn: 230880
While gaining practical experience hand-updating CHECK lines (for moving
the new debug info hierarchy into place), I learnt a few things about
CHECK-ability of the specialized node assembly output.
- The first part of a `CHECK:` is to identify the "right" node (this
is especially true if you intend to use the new `CHECK-SAME`
feature, since the first CHECK needs to identify the node correctly
before you can split the line).
- If there's a `tag:`, it should go first.
- If there's a `name:`, it should go next (followed by the
`linkageName:`, if any).
- If there's a `scope:`, it should follow after that.
- When a node type supports multiple DW_TAGs, but one is implied by
its name and is overwhelmingly more common, the `tag:` field is
terribly uninteresting unless it's different.
- `MDBasicType` is almost always `DW_TAG_base_type`.
- `MDTemplateValueParameter` is almost always
`DW_TAG_template_value_parameter`.
- Printing `name: ""` doesn't improve CHECK-ability, and there are far
more nodes than I realized that are commonly nameless.
- There are a few other fields that similarly aren't very interesting
when they're empty.
This commit updates the `AsmWriter` as suggested above (and makes
necessary changes in `LLParser` for round-tripping).
llvm-svn: 230877
Properly escape string fields in metadata. I've added a spot-check with
direct coverage for `MDFile::getFilename()`, but we'll get more coverage
once the hierarchy is moved into place (since this comes up in various
checked-in testcases).
I've replicated the `if` logic using the `ShouldSkipEmpty` flag
(although a follow-up commit is going to change how often this flag is
specified); no NFCI other than escaping the string fields.
llvm-svn: 230875
Leaving empty blocks around just opens up a can of bugs like PR22704. Deleting
them early also slightly simplifies code.
Thanks to Sanjay for the IR test case.
llvm-svn: 230856
It turns out the naming of inserted phis and selects is sensitive to the order in which two sets are iterated. We need to nail this down to avoid non-deterministic output and possible test failures.
The modified test is the one I first noticed something odd in. The change makes it stricter about reporting the error. With the test change, but without the code change, the test fails roughly 1 in 5 runs. With the code change, I've run ~30 runs without error.
Long term, the right fix here is to adjust the naming scheme. I'm checking in this hack to avoid any possible non-determinism in the tests over the weekend. Just because I only noticed one case doesn't mean it's actually the only case. I hope to get to the right change Monday.
std->llvm data structure changes bugfix change #3
llvm-svn: 230835
This class is responsible for getting the linked data to the
disk in the appropriate form. Today it is an empty shell that
just instantiates an MC layer.
As we do not put anything in the resulting file yet, we just
check it has the right architecture (and check that -o does
the right thing).
To be able to create all the components, this commit adds a
few dependencies to llvm-dsymutil, namely all-targets, MC and
AsmPrinter.
Also add a -no-output option, so that tests that do not need
the binary result can continue to run even if they do not have
the required target linked in.
llvm-svn: 230824
These tests cover the 'base object' identification and rewriting portion of RewriteStatepointsForGC. These aren't completely exhaustive, but they've proven to be reasonably effective over time at finding regressions.
In the process of porting these tests over, I found my first "cleanup per llvm code style standards" bug. We were relying on the order of iteration when testing the base pointers found for a derived pointer. When we switched from std::set to DenseSet, this stopped being a safe assumption. I'm suspecting I'm going to find more of those. In particular, I'm now really wondering about the main iteration loop for this algorithm. I need to go take a closer look at the assumptions there.
I'm not really happy with the fact these are testing what is essentially debug output (i.e. enabled via command line flags). Suggestions for how to structure this better are very welcome.
llvm-svn: 230818
Straightforward patch to emit an alignment directive when emitting a
TOC entry. The test case was generated from the test in PR22711 that
demonstrated a misaligned .toc section. The object code is run
through llvm-readobj to verify that the correct alignment has been
applied to the .toc section.
Thanks to Ulrich Weigand for running down where the fix was needed.
llvm-svn: 230801
Essentially the same as the GEP change in r230786.
A similar migration script can be used to update test cases, though a few more
test case improvements/changes were required this time around: (r229269-r229278)
import fileinput
import sys
import re
pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")
for line in sys.stdin:
  sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))
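For reference, a minimal sketch of the kind of rewrite the script performs (illustrative, not an actual test case):
define i32 @example(i32* %ptr) {
  ; old form:  %v = load i32* %ptr, align 4
  ; new form: the result type is spelled explicitly before the pointer operand
  %v = load i32, i32* %ptr, align 4
  ret i32 %v
}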
Reviewers: rafael, dexonsmith, grosser
Differential Revision: http://reviews.llvm.org/D7649
llvm-svn: 230794
Summary:
Until now, we did this (among other things) based on whether or not the
target was Windows. This is clearly wrong, not just for Win64 ABI functions
on non-Windows, but for System V ABI functions on Windows, too. In this
change, we make this decision based on the ABI the calling convention
specifies instead.
Reviewers: rnk
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7953
llvm-svn: 230793
When using Altivec, we can use vector loads and stores for aligned memcpy and
friends. Starting with the P7 and VSX, we have reasonable unaligned vector
stores. Starting with the P8, we have fast unaligned loads too.
For QPX, we use vector loads and stores, but only for aligned memory accesses.
llvm-svn: 230788
One of several parallel first steps to remove the target type of pointers,
replacing them with a single opaque pointer type.
This adds an explicit type parameter to the gep instruction so that when the
first parameter becomes an opaque pointer type, the type to gep through is
still available to the instructions.
* This doesn't modify gep operators, only instructions (operators will be
handled separately)
* Textual IR changes only. Bitcode (including upgrade) and changing the
in-memory representation will be in separate changes.
* geps of vectors are transformed as:
getelementptr <4 x float*> %x, ...
->getelementptr float, <4 x float*> %x, ...
Then, once the opaque pointer type is introduced, this will ultimately look
like:
getelementptr float, <4 x ptr> %x
with the unambiguous interpretation that it is a vector of pointers to float.
* address spaces remain on the pointer, not the type:
getelementptr float addrspace(1)* %x
->getelementptr float, float addrspace(1)* %x
Then, eventually:
getelementptr float, ptr addrspace(1) %x
Importantly, the massive amount of test case churn has been automated by
the same crappy python code. I had to manually update a few test cases that
wouldn't fit the script's model (r228970,r229196,r229197,r229198). The
python script just massages stdin and writes the result to stdout, I
then wrapped that in a shell script to handle replacing files, then
using the usual find+xargs to migrate all the files.
update.py:
import fileinput
import sys
import re
ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
normrep = re.compile( r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
def conv(match, line):
  if not match:
    return line
  line = match.groups()[0]
  if len(match.groups()[5]) == 0:
    line += match.groups()[2]
  line += match.groups()[3]
  line += ", "
  line += match.groups()[1]
  line += "\n"
  return line
for line in sys.stdin:
  if line.find("getelementptr ") == line.find("getelementptr inbounds"):
    if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
      line = conv(re.match(ibrep, line), line)
  elif line.find("getelementptr ") != line.find("getelementptr ("):
    line = conv(re.match(normrep, line), line)
  sys.stdout.write(line)
apply.sh:
for name in "$@"
do
  python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
  rm -f "$name.tmp"
done
The actual commands:
From llvm/src:
find test/ -name *.ll | xargs ./apply.sh
From llvm/src/tools/clang:
find test/ -name *.mm -o -name *.m -o -name *.cpp -o -name *.c | xargs -I '{}' ../../apply.sh "{}"
From llvm/src/tools/polly:
find test/ -name *.ll | xargs ./apply.sh
After that, check-all (with llvm, clang, clang-tools-extra, lld,
compiler-rt, and polly all checked out).
The extra 'rm' in the apply.sh script is due to a few files in clang's test
suite using interesting unicode stuff that my python script was throwing
exceptions on. None of those files needed to be migrated, so it seemed
sufficient to ignore those cases.
Reviewers: rafael, dexonsmith, grosser
Differential Revision: http://reviews.llvm.org/D7636
llvm-svn: 230786
This work is currently being rethought along different lines and
if this work is needed it can be resurrected out of svn. Remove it
for now as no current work in ongoing on it and it's unused. Verified
with the authors before removal.
llvm-svn: 230780
In the review for r230567, it was pointed out we should really test
the lib/Object part of that change. This does so using llvm-readobj.
llvm-svn: 230779
Summary:
Currently fast-isel-abort will only abort for regular instructions,
and just warn for function calls, terminators, function arguments.
There is already fast-isel-abort-args but nothing for calls and
terminators.
This change turns the fast-isel-abort options into an integer option,
so that multiple levels of strictness can be defined.
This will help avoid surprises when the "abort" option does not actually
abort, and makes it possible to write tests that verify that no
intrinsics are forgotten by fast-isel.
Reviewers: resistor, echristo
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D7941
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 230775
This removes a bit of duplicated code and more importantly, remembers the
labels so that they don't need to be looked up by name.
This in turn allows for any name to be used and avoids a crash if the name
we wanted was already taken.
llvm-svn: 230772
Not passing mtriple for one of the tests caused a regression failure
on MIPS buildbot. The issue was introduced by r230651.
Differential Revision: http://reviews.llvm.org/D7938
llvm-svn: 230756
vectors. This lets us fix the rest of the v16 lowering problems when
pshufb is clearly better.
We might still be able to improve some of the lowerings by enabling the
other combine-based rewriting to fire for non-128-bit vectors, but this
at least should remove any regressions from using the fancy v16i16
lowering strategy.
llvm-svn: 230753
repeated 128-bit lane shuffles of wider vector types and use it to lower
256-bit v16i16 vector shuffles where applicable.
This should let us perfectly lower the pattern of pshuflw and pshufhw
even for AVX2 256-bit patterns.
I've not added AVX-512 support, but it should be trivial for someone
working on that to wire up.
Note that currently this generates bad, long shuffle chains because we
don't combine 256-bit target shuffles. The subsequent patches will fix
that.
llvm-svn: 230751
by mirroring v8i16 test cases across both 128-bit lanes. This should
highlight problems where we aren't correctly using 128-bit shuffles to
implement things.
llvm-svn: 230750
Summary:
We identify the cases where the operand to an ADDE node is a constant
zero. In such cases, we can avoid generating an extra ADDu instruction
disguised as an identity move alias (ie. addu $r, $r, 0 --> move $r, $r).
Reviewers: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7906
llvm-svn: 230742
Currently, the ASan executables built with -O0 are unnecessarily slow.
The main reason is that ASan instrumentation pass inserts redundant
checks around promotable allocas. These allocas do not get instrumented
under -O1 because they get converted to virtual registers by mem2reg.
With this patch, the ASan instrumentation pass will only instrument
non-promotable allocas, giving us a speedup of 39% on a collection of
benchmarks with -O0. (There is no measurable speedup at -O1.)
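A minimal IR sketch of the distinction (names are illustrative, not from the test suite):
define void @example() sanitize_address {
  %kept = alloca i32             ; promotable: only direct loads/stores, so
  store i32 0, i32* %kept        ; mem2reg removes it and no checks are added
  %escaped = alloca i32          ; non-promotable: its address escapes below,
  call void @use(i32* %escaped)  ; so it is still instrumented
  ret void
}
declare void @use(i32*)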
llvm-svn: 230724
Summary:
This change causes us to actually save non-volatile registers in a Win64
ABI function that calls a System V ABI function, and vice-versa.
Reviewers: rnk
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7919
llvm-svn: 230714
Function pointers were not correctly handled by the dumper, and
they would print as "* name". They now print as
"int (__cdecl *name)(int arg1, int arg2)" as they should.
Also, doubles were being printed as floats. This fixes that bug
as well, and adds tests for all builtin types, as well as a test
for function pointers.
llvm-svn: 230703
blend as legal.
We made the same mistake in two different places. Whenever we are custom
lowering a v32i8 blend we need to check whether we are custom lowering
it only for constant conditions that can be shuffled, or whether we
actually have AVX2 and full dynamic blending support on bytes. Both are
fixed, with comments added to make it clear what is going on and a new
test case.
llvm-svn: 230695
On 32-bit x86 Darwin, the register mappings for the eh_frame and
debug_frame sections are different. Thus the same CFI instructions
should result in different registers in the object file. The
problem isn't target specific though, but it requires that the
mappings for EH register numbers be different from the standard
Dwarf one.
The patch looks a bit clumsy. LLVM uses the EH mapping as
canonical for everything frame related. Thus we need to do a
double conversion EH -> LLVM -> Non-EH, when emitting the
debug_frame section.
Fixes PR22363.
Differential Revision: http://reviews.llvm.org/D7593
llvm-svn: 230670
InstCombine has long had logic to convert aligned Altivec load/store intrinsics
into regular loads and stores. This mirrors that functionality for QPX vector
load/store intrinsics.
llvm-svn: 230660
have the debugger step through each one individually. Turn off the
combine for adjacent stores at -O0 so we get this behavior.
Possibly, DAGCombine shouldn't run at all at -O0, but that's for
another day; see PR22346.
Differential Revision: http://reviews.llvm.org/D7181
llvm-svn: 230659
There was a problem when passing structures as variable arguments.
The structures smaller than 64 bit were not left justified on MIPS64
big endian. This is now fixed by shifting the value to make it left-
justified when appropriate.
This fixes the bug http://llvm.org/bugs/show_bug.cgi?id=21608
Patch by Aleksandar Beserminji.
Differential Revision: http://reviews.llvm.org/D7881
llvm-svn: 230657
The gold plugin never calls MaterializeModule, so any old debug info
was not deleted and could cause crashes.
Now that it is being "upgraded", the plugin also has to handle warnings
and create Modules with a nice id (it shows in the warning).
llvm-svn: 230655
In case of "krait" CPU, asm printer doesn't emit any ".cpu" so the
features bits are not computed. This patch lets the asm printer
emit ".cpu cortex-a9" directive for krait and the hwdiv feature is
enabled through ".arch_extension". In short, krait is treated
as "cortex-a9" with hwdiv. We can not emit ".krait" as CPU since
it is not supported bu GNU GAS yet
llvm-svn: 230651
accesses are via different types
Noticed this while generalizing the code for loop distribution.
I confirmed with Arnold that this was indeed a bug and managed to create
a testcase.
llvm-svn: 230647
Turns out that after the past MMX commits, we don't need to rely on this
flag to get better codegen for MMX. Also update the tests to become
triple neutral.
llvm-svn: 230637
InstCombine has logic to convert aligned Altivec load/store intrinsics into
regular loads and stores. Unfortunately, there seems to be no regression test
covering this behavior. Adding one...
llvm-svn: 230632
Add `CHECK-SAME`, which requires that the pattern matches on the *same*
line as the previous `CHECK`/`CHECK-NEXT` -- in other words, no newline
is allowed in the skipped region. This is similar to `CHECK-NEXT`,
which requires exactly 1 newline in the skipped region.
My motivation is to simplify checking the long lines of LLVM assembly
for the new debug info hierarchy. This allows CHECK sequences like the
following:
CHECK: ![[REF]] = !SomeMDNode(
CHECK-SAME: file: ![[FILE:[0-9]+]]
CHECK-SAME: otherField: 93{{[,)]}}
which is equivalent to:
CHECK: ![[REF]] = !SomeMDNode({{.*}}file: ![[FILE:[0-9]+]]{{.*}}otherField: 93{{[,)]}}
While this example just has two fields, many nodes in debug info have
more than that. `CHECK-SAME` will keep the logic easy to follow.
Moreover, it enables interleaving `CHECK-NOT`s without allowing newlines.
Consider the following:
CHECK: ![[REF]] = !SomeMDNode(
CHECK-SAME: file: ![[FILE:[0-9]+]]
CHECK-NOT: unexpectedField:
CHECK-SAME: otherField: 93{{[,)]}}
CHECK-NOT: otherUnexpectedField:
CHECK-SAME: )
which doesn't seem to have an equivalent `CHECK` line.
llvm-svn: 230612
Use the IRBuilder helpers for gc.statepoint and gc.result, instead of
coding the construction by hand. Note that the gc.statepoint IRBuilder
handles only CallInst, not InvokeInst; retain that part of hand-coding.
Differential Revision: http://reviews.llvm.org/D7518
llvm-svn: 230591
This symbol exists only to pull in the required pieces of the runtime,
so nothing ever needs to refer to it. Making it hidden avoids the
potential for issues with duplicate symbols when linking profiled
libraries together.
llvm-svn: 230566
This is a follow-on to r227491 which tightens the check for propagating FP
values. If a non-constant value happens to be a zero, we would hit the same
bug as before.
Bug noted and patch suggested by Eli Friedman.
llvm-svn: 230564
Summary: SROA generates code that isn't quite as easy to optimize and contains unusual-sized shuffles, but that code is generally correct. As discussed in D7487 the right place to clean things up is InstCombine, which will pick up the type-punning pattern and transform it into a more obvious bitcast+extractelement, while leaving the other patterns SROA encounters as-is.
Test Plan: make check
Reviewers: jvoung, chandlerc
Subscribers: llvm-commits
llvm-svn: 230560
LDtocL, and other loads that roughly correspond to the TOC_ENTRY SDAG node,
represent loads from the TOC, which is invariant. As a result, these loads can
be hoisted out of loops, etc. In order to do this, we need to generate
GOT-style MMOs for TOC_ENTRY, which requires treating it as a legitimate memory
intrinsic node type. Once this is done, the MMO transfer is automatically
handled for TableGen-driven instruction selection, and for nodes generated
directly in PPCISelDAGToDAG, we need to transfer the MMOs manually.
Also, we were not transferring MMOs associated with pre-increment loads, so do
that too.
Lastly, this fixes an exposed bug where R30 was not added as a defined operand of
UpdateGBR.
This problem was highlighted by an example (used to generate the test case)
posted to llvmdev by Francois Pichet.
llvm-svn: 230553
The Win64 epilogue structure is very restrictive, it permits a very
small number of opcodes and none of them are 'mov'.
This means that given:
mov %rbp, %rsp
pop %rbp
The mov isn't the epilogue, only the pop is. This is problematic unless
a frame pointer is present in which case we are free to do whatever we'd
like in the "body" of the function. If a frame pointer is present,
unwinding will undo the prologue operations in reverse order regardless
of the fact that we are at an instruction which is reseting the stack
pointer.
llvm-svn: 230543
This change aligns globals to the next highest power of 2 bytes, up to a
maximum of 128. This makes it more likely that we will be able to compress
bit sets with a greater alignment. In many more cases, we can now take
advantage of a new optimization also introduced in this patch that removes
bit set checks if the bit set is all ones.
The 128 byte maximum was found to provide the best tradeoff between instruction
overhead and data overhead in a recent build of Chromium. It allows us to
remove ~2.4MB of instructions at the cost of ~250KB of data.
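E.g. (sketch), a 40-byte vtable-like global participating in a bit set would now get:
@vtable = private unnamed_addr constant [5 x i8*] zeroinitializer, align 64
i.e. the next power of two above its size, but never more than align 128.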
Differential Revision: http://reviews.llvm.org/D7873
llvm-svn: 230540
(The change was landed in r230280 and caused the regression PR22674.
This version contains a fix and a test-case for PR22674).
When emitting the increment operation, SCEVExpander marks the
operation as nuw or nsw based on the flags on the preincrement SCEV.
This is incorrect because, for instance, it is possible that {-6,+,1}
is <nuw> while {-6,+,1}+1 = {-5,+,1} is not.
This change teaches SCEV to mark the increment as nuw/nsw only if it
can explicitly prove that the increment operation won't overflow.
Apart from the attached test case, another (more realistic)
manifestation of the bug can be seen in
Transforms/IndVarSimplify/pr20680.ll.
Differential Revision: http://reviews.llvm.org/D7778
llvm-svn: 230533
With a diabolically crafted test case, we could recurse
through this code and return true instead of false.
The larger engineering crime is the use of magic numbers.
Added FIXME comments for those.
llvm-svn: 230515
Reapply r230248.
Teach the peephole optimizer to work with MMX instructions by adding
entries into the foldable tables. This covers folding opportunities not
handled during isel.
llvm-svn: 230499
Thumb-1 only allows SP-based LDR and STR to be word-sized, and SP-based LDR,
STR, and ADD only allow offsets that are a multiple of 4. Make some changes
to better make use of these instructions:
* Use word loads for anyext byte and halfword loads from the stack.
* Enforce 4-byte alignment on objects accessed in this way, to ensure that
the offset is valid.
* Do the same for objects whose frame index is used, in order to avoid having
to use more than one ADD to generate the frame index.
* Correct how many bits of offset we think AddrModeT1_s has.
Patch by John Brawn.
llvm-svn: 230496
Summary:
This change fixes the FIXME that you recently added when you committed
(a modified version of) my patch. When `InstCombine` combines a load and
store of an pointer to those of an equivalently-sized integer, it currently
drops any `!nonnull` metadata that might be present. This change replaces
`!nonnull` metadata with `!range !{ 1, -1 }` metadata instead.
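Roughly, in IR terms (a sketch with illustrative types; the range operands are the ones quoted above):
define i64 @example(i64* %p) {
  ; the integer load that replaces the pointer load keeps the non-null fact
  ; as a range that excludes zero
  %v = load i64, i64* %p, !range !0
  ret i64 %v
}
!0 = !{i64 1, i64 -1}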
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7621
llvm-svn: 230462
Since r199356, we've printed a warning when dropping debug info.
r225562 started crashing on that, since it registered a diagnostic
handler that only expected errors. This fixes the handler to expect
other severities. As a side effect, it now prints "error: " at the
start of error messages, similar to `llvm-as`.
There was a testcase for r199356, but it only really checked the
assembler. Move `test/Bitcode/drop-debug-info.ll` to `test/Assembler`,
and introduce `test/Bitcode/drop-debug-info.3.5.ll` (and companion
`.bc`) to test the bitcode reader.
Note: tools/gold/gold-plugin.cpp has an equivalent bug, but I'm not sure
what the best fix is there. I'll file a PR.
llvm-svn: 230416
Like r230414, add bitcode support including backwards compatibility, for
an explicit type parameter to GEP.
At the suggestion of Duncan I tried coalescing the two older bitcodes into a
single new bitcode, though I did hit a wrinkle: I couldn't figure out how to
create an explicit abbreviation for a record with a variable number of
arguments (the indices to the gep). This means the discriminator between
inbounds and non-inbounds gep is a full variable-length field I believe? Is my
understanding correct? Is there a way to create such an abbreviation? Should I
just use two bitcodes as before?
Reviewers: dexonsmith
Differential Revision: http://reviews.llvm.org/D7736
llvm-svn: 230415
This adds support for the QPX vector instruction set, which is used by the
enhanced A2 cores on the IBM BG/Q supercomputers. QPX vectors are 256 bits
wide, holding 4 double-precision floating-point values. Boolean values, modeled
here as <4 x i1> are actually also represented as floating-point values
(essentially { -1, 1 } for { false, true }). QPX shares many features with
Altivec and VSX, but is distinct from both of them. One major difference is
that, instead of adding completely-separate vector registers, QPX vector
registers are extensions of the scalar floating-point registers (lane 0 is the
corresponding scalar floating-point value). The operations supported on QPX
vectors mirrors that supported on the scalar floating-point values (with some
additional ones for permutations and logical/comparison operations).
I've been maintaining this support out-of-tree, as part of the bgclang project,
for several years. This is not the entire bgclang patch set, but is most of the
subset that can be cleanly integrated into LLVM proper at this time. Adding
this to the LLVM backend is part of my efforts to rebase bgclang to the current
LLVM trunk, but is independently useful (especially for codes that use LLVM as
a JIT in library form).
The assembler/disassembler test coverage is complete. The CodeGen test coverage
is not, but I've included some tests, and more will be added as follow-up work.
llvm-svn: 230413
This patch unifies the comdat and non-comdat code paths. By doing this
it add missing features to the comdat side and removes the fixed
section assumptions from the non-comdat side.
In ELF there is no one true section for "4 byte mergeable" constants.
We are better off computing the required properties of the section
and asking the context for it.
llvm-svn: 230411
The builder is based on a layout algorithm that tries to keep members of
small bit sets together. The new layout compresses Chromium's bit sets to
around 15% of their original size.
Differential Revision: http://reviews.llvm.org/D7796
llvm-svn: 230394
This test checks that the symbols instrprof creates have appropriate
linkage. The tests already exist in clang in a slightly different form
from before we sunk profile generation into an LLVM pass, but that's
an awkward place for them now. I'll remove/simplify the clang versions
shortly.
llvm-svn: 230383
Author: Simon Pilgrim <llvm-dev@redking.me.uk>
Date: Mon Feb 23 23:04:28 2015 +0000
Fix based on post-commit comment on D7816 & rL230177 - BUILD_VECTOR operand truncation was using the BV's output scalar type instead of the input type.
and
Author: Simon Pilgrim <llvm-dev@redking.me.uk>
Date: Sun Feb 22 18:17:28 2015 +0000
[DagCombiner] Generalized BuildVector Vector Concatenation
The CONCAT_VECTORS combiner pass can transform the concat of two BUILD_VECTOR nodes into a single BUILD_VECTOR node.
This patch generalises this to support any number of BUILD_VECTOR nodes, and also permits UNDEF nodes to be included as well.
This was noticed as AVX vec128 -> vec256 canonicalization sometimes creates a CONCAT_VECTOR with a real vec128 lower and a vec128 UNDEF upper.
Differential Revision: http://reviews.llvm.org/D7816
as the root cause of PR22678 which is causing an assertion inside the DAG combiner.
I'll follow up to the main thread as well.
llvm-svn: 230358
The reason why these large shift sizes happen is because OpaqueConstants
currently inhibit alot of DAG combining, but that has to be addressed in
another commit (like the proposal in D6946).
Differential Revision: http://reviews.llvm.org/D6940
llvm-svn: 230355
The logic is almost there already, with our special homogeneous aggregate
handling. Tweaking it like this allows front-ends to emit AAPCS compliant code
without ever having to count registers or add discarded padding arguments.
Only arrays of i32 and i64 are needed to model AAPCS rules, but I decided to
apply the logic to all integer arrays for more consistency.
llvm-svn: 230348
Summary: Begin to add various address modes; including alloca.
Test Plan: Make sure there are no regressions in test-suite at O0/02 in mips32r1/r2
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: echristo, rfuhler, llvm-commits
Differential Revision: http://reviews.llvm.org/D6426
llvm-svn: 230300
The bug was a result of getPreStartForExtend interpreting nsw/nuw
flags on an add recurrence more strongly than is legal. {S,+,X}<nsw>
implies S+X is nsw only if the backedge of the loop is taken at least
once.
NOTE: I had accidentally committed an unrelated change with the commit
message of this change in r230275 (r230275 was reverted in r230279).
This is the correct change for this commit message.
Differential Revision: http://reviews.llvm.org/D7808
llvm-svn: 230291
When debugging LTO issues with ld64, we use -save-temps to save the merged
optimized bitcode file, then invoke ld64 again on the single bitcode file to
speed up debugging code generation passes and ld64 stuff after code generation.
llvm linking a single bitcode file via lto_codegen_add_module will generate a
different bitcode file from the single input. With the newly-added
lto_codegen_set_module, we can make sure the destination module is the same as
the input.
lto_codegen_set_module will transfer ownership of the module to the code
generator.
rdar://19024554
llvm-svn: 230290
This case is interesting because ScalarEvolutionExpander lowers min(a,
b) as ~max(~a,~b). I think the profitability heuristics can be made
more clever/aggressive, but this is a start.
Differential Revision: http://reviews.llvm.org/D7821
llvm-svn: 230285
When emitting the increment operation, SCEVExpander marks the
operation as nuw or nsw based on the flags on the preincrement SCEV.
This is incorrect because, for instance, it is possible that {-6,+,1}
is <nuw> while {-6,+,1}+1 = {-5,+,1} is not.
This change teaches SCEV to mark the increment as nuw/nsw only if it
can explicitly prove that the increment operation won't overflow.
Apart from the attached test case, another (more realistic) manifestation
of the bug can be seen in Transforms/IndVarSimplify/pr20680.ll.
NOTE: this change was landed with an incorrect commit message in
rL230275 and was reverted for that reason in rL230279. This commit
message is the correct one.
Differential Revision: http://reviews.llvm.org/D7778
llvm-svn: 230280
230275 got committed with an incorrect commit message due to a mixup
on my side. Will re-land in a few moments with the correct commit
message.
llvm-svn: 230279
This patch teaches the backend how to expand a double-half conversion into
a double-float conversion immediately followed by a float-half conversion.
We do this only under fast-math, and if float-half conversions are legal
for the target.
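In IR terms, the expansion is roughly (sketch only):
define half @round(double %d) {
  ; instead of a single (libcall-expanded) double-to-half conversion:
  %f = fptrunc double %d to float
  %h = fptrunc float %f to half
  ret half %h
}
The intermediate rounding through float is why this is only done under fast-math.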
Added test CodeGen/X86/fastmath-float-half-conversion.ll
Differential Revision: http://reviews.llvm.org/D7832
llvm-svn: 230276
The bug was a result of getPreStartForExtend interpreting nsw/nuw
flags on an add recurrence more strongly than is legal. {S,+,X}<nsw>
implies S+X is nsw only if the backedge of the loop is taken at least
once.
Differential Revision: http://reviews.llvm.org/D7808
llvm-svn: 230275
Prologue emission, in some cases, requires calls to a stack probe helper
function. The amount of stack to probe is passed as a register
argument in the Win64 ABI but the instruction sequence used is
pessimistic: it assumes that the number of bytes to probe is greater
than 4 GB.
Instead, select a more appropriate opcode depending on the number of
bytes we are going to probe.
llvm-svn: 230270
Front-ends could use global unnamed_addr to hold pointers to other
symbols, like @gotequivalent below:
@foo = global i32 42
@gotequivalent = private unnamed_addr constant i32* @foo
@delta = global i32 trunc (i64 sub (i64 ptrtoint (i32** @gotequivalent to i64),
i64 ptrtoint (i32* @delta to i64))
to i32)
The global @delta holds a data "PC"-relative offset to @gotequivalent,
an unnamed pointer to @foo. The darwin/x86-64 assembly output for this follows:
.globl _foo
_foo:
.long 42
.globl _gotequivalent
_gotequivalent:
.quad _foo
.globl _delta
_delta:
.long _gotequivalent-_delta
Since unnamed_addr indicates that the address is not significant, only
the content, we can optimize the case above by replacing pc-relative
accesses to "GOT equivalent" globals, by a PC relative access to the GOT
entry of the final symbol instead. Therefore, "delta" can contain a pc
relative relocation to foo's GOT entry and we avoid the emission of
"gotequivalent", yielding the assembly code below:
.globl _foo
_foo:
.long 42
.globl _delta
_delta:
.long _foo@GOTPCREL+4
There are a couple of advantages of doing this: (1) Front-ends that need
to emit a great deal of data to store pointers to external symbols could
save space by not emitting such "got equivalent" globals and (2) IR
constructs combined with this opt open a way to represent GOT pcrel
relocations by using the LLVM IR, which is something we previously had
no way to express.
Differential Revision: http://reviews.llvm.org/D6922
rdar://problem/18534217
llvm-svn: 230264
When multiple regions start on the same line, llvm-cov was just
showing the count of the last one as the line count. This can be
confusing and misleading for things like one-liner loops, where the
count at the end isn't very interesting, or even "if" statements with
an opening brace at the end of the line.
Instead, use the maximum of all of the region start counts.
llvm-svn: 230263
This patch adds the isProfitableToHoist API. For AArch64, we want to prevent a
fmul from being hoisted in cases where it is more profitable to form a
fmsub/fmadd.
Phabricator Review: http://reviews.llvm.org/D7299
Patch by Lawrence Hu <lawrence@codeaurora.org>
llvm-svn: 230241
Summary:
-mno-odd-spreg prohibits the use of odd-numbered single-precision floating
point registers. However, vector insert/extract was still using them when
manipulating the subregisters of an MSA register. Fixed this by ensuring
that insertion/extraction is only performed on even-numbered vector
registers when -mno-odd-spreg is given.
Reviewers: vmedic, sstankovic
Reviewed By: sstankovic
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7672
llvm-svn: 230235
Teach the peephole optimizer to work with MMX instructions by adding
entries into the foldable tables. This covers folding opportunities not
handled during isel.
llvm-svn: 230226
Add tests to cover the RR form of the pslli, psrli and psrai intrinsics.
In the next commit, the loads are going to be folded and the
instructions use the RM form.
llvm-svn: 230224
This adds the --class-definitions flag. If specified, when dumping
types, instead of "class Foo" you will see the full class definition,
with member functions, constructors, access specifiers.
NOTE: Using this option can be very slow, as generating a full class
definition requires accessing many different parts of the PDB.
llvm-svn: 230203
This increases the flexibility of how to dump different
symbol types -- necessary for context-sensitive formatting of
symbol types -- and also improves the modularity by allowing
the dumping to be implemented in the actual dumper, as opposed
to in the PDB library.
llvm-svn: 230184
This removes a wealth of options, and instead now only provides
three options. -symbols, -types, and -compilands. This greatly
simplifies use of the tool, and makes it easier to understand
what you're going to see when you run the tool.
llvm-svn: 230182
While fuzzing LLVM bitcode files, I discovered that (1) the bitcode reader doesn't check that alignments are no larger than 2**29; (2) downstream code doesn't check the range; and (3) for values out of range, corresponding large memory requests (based on alignment size) will fail. This code fixes the bitcode reader to check for valid alignments, fixing this problem.
This CL fixes alignment value on global variables, functions, and instructions: alloca, load, load atomic, store, store atomic.
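For reference, the boundary case that remains accepted (sketch):
@ok = global i32 0, align 536870912    ; 2^29, the largest valid alignment
; anything larger on a global, function, alloca, load or store is now
; rejected by the reader instead of triggering a huge memory request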
Patch by Karl Schimpf (kschimpf@google.com).
llvm-svn: 230180
The CONCAT_VECTORS combiner pass can transform the concat of two BUILD_VECTOR nodes into a single BUILD_VECTOR node.
This patch generalises this to support any number of BUILD_VECTOR nodes, and also permits UNDEF nodes to be included as well.
This was noticed as AVX vec128 -> vec256 canonicalization sometimes creates a CONCAT_VECTOR with a real vec128 lower and a vec128 UNDEF upper.
Differential Revision: http://reviews.llvm.org/D7816
llvm-svn: 230177
The issue was that the test Makefile had not been updated to
provide a value for HAVE_DIA_SDK, so it was being initialized
incorrectly. Hopefully this brings everything back to green.
llvm-svn: 230162
NOTE: This patch intentionally breaks the build. It attempts
to resubmit r230083, but with some debug logging in the CMake
and lit config files to determine why certain bots do not
correctly disable the DIA tests when DIA is not available.
After a sufficient number of bots fail, this patch will either
be reverted or, if the cause of the failure becomes obvious,
a fix submitted with the log statements removed.
llvm-svn: 230161
calculations. Semantically non-functional change.
This gets rid of some of the SCEV -> Value -> SCEV round tripping and
the Construct(SMin|SMax)Of and MaybeSimplify helper routines.
llvm-svn: 230150
Stack realignment occurs after the prolog, not during, for Win64.
Because of this, don't factor in the maximum stack alignment when
establishing a frame pointer.
This fixes PR22572.
llvm-svn: 230113
Parse (and write) symbolic constants in debug info `flags:` fields.
This prevents a readability (and CHECK-ability) regression with the new
debug info hierarchy.
Old (well, current) assembly, with pretty-printing:
!{!"...\\0016387", ...} ; ... [public] [rvalue reference]
Flags field without this change:
!MDDerivedType(flags: 16387, ...)
Flags field with this change:
!MDDerivedType(flags: DIFlagPublic | DIFlagRValueReference, ...)
As discussed in the review thread, this isn't a final state. Most of
these flags correspond to `DW_AT_` symbolic constants, and we might
eventually want to support arbitrary attributes in some form. However,
as it stands now, some of the flags correspond to other concepts (like
`FlagStaticMember`); until things are refactored this is the simplest
way to move forward without regressing assembly.
llvm-svn: 230111
Previously, this pass ran over every function in the Module if added to the pass order. With this change, it runs only over those with a GC attribute where the GC explicitly opts in. A GC can also choose which of entry safepoint polls, backedge safepoint polls, and call safepoints it wants. I hope to get these exposed as checks on the GCStrategy at some point, but for now, the checks are manual string comparisons.
llvm-svn: 230097
This adds only a very basic set of tests that dump a few
functions and object files.
Differential Revision: http://reviews.llvm.org/D7656
Reviewed By: David Blaikie
llvm-svn: 230083
The expansion code does the same thing. Since
the operands were not defined with the correct
types, this has the side effect of fixing operand
folding since the expanded pseudo would never use
SGPRs or inline immediates.
llvm-svn: 230072
This enables a few useful combines that used to only
use fma.
Also since v_mad_f32 apparently does not support denormals,
disable the existing cases that are custom handled if they are
requested.
llvm-svn: 230071
Yet another chapter in the endless story. While this looks like we leave
the loop in a non-canonical state this replicates the logic in
LoopSimplify so it doesn't diverge from the canonical form in any way.
PR21968
llvm-svn: 230058
In the old (well, current) schema, there are two types of file
references: untagged and tagged (the latter references the former).
!0 = !{!"filename", !"/directory"}
!1 = !{!"0x29", !1} ; DW_TAG_file_type [filename] [/directory]
The interface to `DIBuilder` universally takes the tagged version,
described by `DIFile`. However, most `file:` references actually use
the untagged version directly.
In the new hierarchy, I'm merging this into a single node: `MDFile`.
Originally I'd planned to keep the old schema unchanged until after I
moved the new hierarchy into place.
However, it turns out to be trivial to make `MDFile` match both nodes at
the same time.
- Anyone referencing !1 does so through `DIFile`, whose implementation
I need to gut anyway (as I do the rest of the `DIDescriptor`s).
- Anyone referencing !0 just references an `MDNode`, and expects a
node with two `MDString` operands.
This commit achieves that, and updates all the testcases for the parts
of the new hierarchy that used the two-node schema (I've replaced the
untagged nodes with `distinct !{}` to make the diff clear (otherwise the
metadata all gets renumbered); it might be worthwhile to come back and
delete those nodes and renumber the world, not sure).
llvm-svn: 230057
This patch introduces a new mechanism that allows IR modules to co-operatively
build pointer sets corresponding to addresses within a given set of
globals. One particular use case for this is to allow a C++ program to
efficiently verify (at each call site) that a vtable pointer is in the set
of valid vtable pointers for the class or its derived classes. One way of
doing this is for a toolchain component to build, for each class, a bit set
that maps to the memory region allocated for the vtables, such that each 1
bit in the bit set maps to a valid vtable for that class, and lay out the
vtables next to each other, to minimize the total size of the bit sets.
The patch introduces a metadata format for representing pointer sets, an
'@llvm.bitset.test' intrinsic and an LTO lowering pass that lays out the globals
and builds the bitsets, and documents the new feature.
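A sketch of what a call-site check looks like with the new intrinsic (the bit set name and surrounding code are illustrative; the metadata layout itself is described in the docs added here):
declare i1 @llvm.bitset.test(i8*, metadata)
declare void @llvm.trap()
define void @check(i8* %vtable) {
  %ok = call i1 @llvm.bitset.test(i8* %vtable, metadata !"_ZTS1A")
  br i1 %ok, label %cont, label %bad
bad:
  call void @llvm.trap()
  unreachable
cont:
  ret void
}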
Differential Revision: http://reviews.llvm.org/D7288
llvm-svn: 230054
usage of instruction ADDU16 by CodeGen. For this instruction an improper
register is allocated, i.e. a register that is not from the register set defined
for the instruction.
llvm-svn: 230053
This patch teaches X86FastISel how to select intrinsic 'convert_from_fp16' and
intrinsic 'convert_to_fp16'.
If the target has F16C, we can select VCVTPS2PHrr for a float-half conversion,
and VCVTPH2PSrr for a half-float conversion.
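The intrinsics in question (sketch, assuming the overloaded intrinsic names):
declare i16 @llvm.convert.to.fp16.f32(float)
declare float @llvm.convert.from.fp16.f32(i16)
define i16 @to_half(float %f) {
  %h = call i16 @llvm.convert.to.fp16.f32(float %f)   ; -> vcvtps2ph with F16C
  ret i16 %h
}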
Differential Revision: http://reviews.llvm.org/D7673
llvm-svn: 230043
Before calling Function::getGC to test for enablement, we need to make sure there's actually a GC at all via Function::hasGC. Otherwise, we'd crash on functions without a GC. Thankfully, this only mattered if you manually scheduled the pass, but still, oops. :(
llvm-svn: 230040
The IBM BG/Q supercomputer's A2 cores have a hardware prefetching unit, the
L1P, but it does not prefetch directly into the A2's L1 cache. Instead, it
prefetches into its own L1P buffer, and the latency to access that buffer is
significantly higher than that to the L1 cache (although smaller than the
latency to the L2 cache). As a result, especially when multiple hardware
threads are not actively busy, explicitly prefetching data into the L1 cache is
advantageous.
I've been using this pass out-of-tree for data prefetching on the BG/Q for well
over a year, and it has worked quite well. It is enabled by default only for
the BG/Q, but can be enabled for other cores as well via a command-line option.
Eventually, we might want to add some TTI interfaces and move this into
Transforms/Scalar (there is nothing particularly target dependent about it,
although only machines like the BG/Q will benefit from its simplistic
strategy).
llvm-svn: 229966
The new shuffle lowering has been the default for some time. I've
enabled the new legality testing by default with no really blocking
regressions. I've fuzz tested this very heavily (many millions of fuzz
test cases have passed at this point). And this cleans up a ton of code.
=]
Thanks again to the many folks that helped with this transition. There
was a lot of work by others that went into the new shuffle lowering to
make it really excellent.
In case you aren't using a diff algorithm that can handle this:
X86ISelLowering.cpp: 22 insertions(+), 2940 deletions(-)
llvm-svn: 229964
is going well, remove the flag and the code for the old legality tests.
This is the first step toward removing the entire old vector shuffle
lowering. *Much* more code to delete coming up next.
llvm-svn: 229963
When writing the bitcode serialization for the new debug info hierarchy,
I assumed two fields would never be null.
Drop that assumption, since it's brittle (and crashes the
`BitcodeWriter` if wrong), and is a check better left for the verifier
anyway. (No need for a bitcode upgrade here, since the new hierarchy is
still not in place.)
The fields in question are `MDCompileUnit::getFile()` and
`MDDerivedType::getBaseType()`, the latter of which isn't null in
test/Transforms/Mem2Reg/ConvertDebugInfo2.ll (see !14, a pointer to
nothing). While the testcase might have bitrotted, there's no reason
for the bitcode format to rely on non-null for metadata operands.
This also fixes a bug in `AsmWriter` where if the `file:` is null it
isn't emitted (caught by the double-round trip in the testcase I'm
adding) -- this is a required field in `LLParser`.
I'll circle back to ConvertDebugInfo2. Once the specialized nodes are
in place, I'll be trying to turn the debug info verifier back on by
default (in the newer module pass form committed r206300) and throwing
more logic in there. If the testcase has bitrotted (as opposed to me
not understanding the schema correctly) I'll fix it then.
llvm-svn: 229960
This change addresses a deficiency pointed out in PR22629. To copy from the bug
report:
[from the bug report]
Consider this code:
int f(int x) {
int a[] = {12};
return a[x];
}
GCC knows to optimize this to
movl $12, %eax
ret
The code generated by recent Clang at -O3 is:
movslq %edi, %rax
movl .L_ZZ1fiE1a(,%rax,4), %eax
retq
.L_ZZ1fiE1a:
.long 12 # 0xc
[end from the bug report]
This definitely seems worth fixing. I've also seen this kind of code before (as
the base case of generic vector wrapper templates with one element).
The general idea is to look at the GEP feeding a load or a store, which has
some variable as its first non-zero index, and determine if that index must be
zero (or else an out-of-bounds access would occur). We can do this for allocas
and globals with constant initializers where we know the maximum size of the
underlying object. When we find such a GEP, we create a new one for the memory
access with that first variable index replaced with a constant zero.
Even if we can't eliminate the memory access (and sometimes we can't), it is
still useful because it removes unnecessary indexing calculations.
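In IR terms, the transformation is roughly (a sketch of the C example above):
@a = private unnamed_addr constant [1 x i32] [i32 12]
define i32 @f(i64 %x) {
  ; %x must be 0 for this access to be in bounds, so the variable index is
  ; replaced with a constant 0; the load can then fold to 12
  %p = getelementptr inbounds [1 x i32], [1 x i32]* @a, i64 0, i64 %x
  %v = load i32, i32* %p, align 4
  ret i32 %v
}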
llvm-svn: 229959
reflects the fact that the x86 backend can in fact lower any shuffle you
want it to with reasonably high code quality.
My recent work on the new vector shuffle has made this regress *very*
little. The diff in the test cases makes me very, very happy.
llvm-svn: 229958
one test case that is only partially tested in 32-bits into two test
cases so that the script doesn't generate massive spews of tests for the
cases we don't care about.
llvm-svn: 229955
When back merging the changes in 229945 I noticed that I forgot to mark the test cases with the appropriate GC. We want the rewriting to be off by default (even when manually added to the pass order), not on-by default. To keep the current test working, mark them as using the statepoint-example GC and whitelist that GC.
Longer term, we need a better selection mechanism here for both actual usage and testing. As I migrate more tests to the in tree version of this pass, I will probably need to update the enable/disable logic as well.
llvm-svn: 229954
This patch consists of a single pass whose only purpose is to visit previously inserted gc.statepoints which do not have gc.relocates inserted yet, and insert them. This can be used either immediately after IR generation to perform 'early safepoint insertion' or late in the pass order to perform 'late insertion'.
This patch is setting the stage for work to continue in tree. In particular, there are known naming and style violations in the current patch. I'll try to get those resolved over the next week or so. As I touch each area to make style changes, I need to make sure we have adequate testing in place. As part of the cleanup, I will be cleaning up a collection of test cases we have out of tree and submitting them upstream. The tests included in this change are very basic and mostly to provide examples of usage.
The pass has several main subproblems it needs to address:
- First, it has to identify any live pointers. In the current code, the use of address spaces to distinguish pointers to GC managed objects is hard coded, but this will become parametrizable in the near future. Note that the current change doesn't actually contain a useful liveness analysis. It was separated into a follow-up change as the code wasn't ready to be shared. Instead, the current implementation just considers any dominating def of appropriate pointer type to be live.
- Second, it has to identify base pointers for each live pointer. This is a fairly straightforward data flow algorithm.
- Third, the information in the previous steps is used to actually introduce rewrites. Rather than trying to do this by hand, we simply re-purpose the code behind Mem2Reg to do this for us.
llvm-svn: 229945
Today a simple function that only catches exceptions and doesn't run
destructor cleanups ends up containing a dead call to _Unwind_Resume
(PR20300). We can't remove these dead resume instructions during normal
optimization because inlining might introduce additional landingpads
that do have cleanups to run. Instead we can do this during EH
preparation, which is guaranteed to run after inlining.
Fixes PR20300.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D7744
llvm-svn: 229944
When trying to match the current schema with the new debug info
hierarchy, I downgraded `SizeInBits`, `AlignInBits` and `OffsetInBits`
to 32-bits (oops!). Caught this while testing my upgrade script to move
the hierarchy into place. Bump it back up to 64-bits and update tests.
llvm-svn: 229933
This re-applies r223862, r224198, r224203, and r224754, which were
reverted in r228129 because they exposed Clang misalignment problems
when self-hosting.
The combine caused the crashes because we turned ISD::LOAD/STORE nodes
to ARMISD::VLD1/VST1_UPD nodes. When selecting addressing modes, we
were very lax for the former, and only emitted the alignment operand
(as in "[r1:128]") when it was larger than the standard alignment of
the memory type.
However, for ARMISD nodes, we just used the MMO alignment, no matter
what. In our case, we turned ISD nodes to ARMISD nodes, and this
caused the alignment operands to start being emitted.
And that's how we exposed alignment problems that were ignored before
(but I believe would have been caught with SCTLR.A==1?).
To fix this, we can just mirror the hack done for ISD nodes: only
take into account the MMO alignment when the access is overaligned.
Original commit message:
We used to only combine intrinsics, and turn them into VLD1_UPD/VST1_UPD
when the base pointer is incremented after the load/store.
We can do the same thing for generic load/stores.
Note that we can only combine the first load/store+adds pair in
a sequence (as might be generated for a v16f32 load for instance),
because other combines turn the base pointer addition chain (each
computing the address of the next load, from the address of the last
load) into independent additions (common base pointer + this load's
offset).
rdar://19717869, rdar://14062261.
llvm-svn: 229932
There's no way for `DIBuilder` to create a subprogram or global variable
where `getName()` and `getDisplayName()` give different answers. This
testcase managed to achieve the feat though. This was probably just
left behind in some sort of upgrade along the way.
llvm-svn: 229930
The LoopInfo in combination with depth_first is used to enumerate the
loops.
Right now -analyze is not yet complete. It only prints the result of
the analysis, the report and the run-time checks. Printing the unsafe
dependences will require a bit more reshuffling which I'd like to do in a
follow-on to this patchset. Unsafe dependences are currently checked
via -debug-only=loop-accesses in the new test.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229898
X86 load folding is fragile; eg, the tests here
don't work without AVX even though they should. This
is because we have a mix of tablegen patterns that have
been added over time, and we have a load folding table
used by the peephole optimizer that has to be kept in
sync with the ever-changing ISA and tablegen defs.
llvm-svn: 229870
systematic lowering of v8i16.
This required a slight strategy shift to prefer unpack lowerings in more
places. While this isn't a cut-and-dry win in every case, it is in the
overwhelming majority. There are only a few places where the old
lowering would probably be a touch faster, and then only by a small
margin.
In some cases, this is yet another significant improvement.
llvm-svn: 229859
addition to lowering to trees rooted in an unpack.
This saves shuffles and or registers in many various ways, lets us
handle another class of v4i32 shuffles pre SSE4.1 without domain
crosses, etc.
llvm-svn: 229856
terribly complex partial blend logic.
This code path was one of the more complex and bug prone when it first
went in and it hasn't fared much better. Ultimately, with the simpler
basis for unpack lowering and support bit-math blending, this is
completely obsolete. In the worst case without this we generate
different but equivalent instructions. However, in many cases we
generate much better code. This is especially true when blends or pshufb
is available.
This does expose one (minor) weakness of the unpack lowering that I'll
try to address.
In case you were wondering, this is actually a big part of what I've
been trying to pull off in the recent string of commits.
llvm-svn: 229853
needed, and significantly improve the SSSE3 path.
This makes the new strategy much more clear. If we can blend, we just go
with that. If we can't blend, we try to permute into an unpack so
that we handle cases where the unpack doing the blend also simplifies
the shuffle. If that fails and we've got SSSE3, we now call into
factored-out pshufb lowering code so that we leverage the fact that
pshufb can set up a blend for us while shuffling. This generates great
code, especially because we *know* we don't have a fast blend at this
point. Finally, we fall back on decomposing into permutes and blends
because we do at least have a bit-math-based blend if we need to use
that.
This pretty significantly improves some of the v8i16 code paths. We
never need to form pshufb for the single-input shuffles because we have
effective target-specific combines to form it there, but we were missing
its effectiveness in the blends.
llvm-svn: 229851
them into permutes and a blend with the generic decomposition logic.
This works really well in almost every case and lets the code only
manage the expansion of a single input into two v8i16 vectors to perform
the actual shuffle. The blend-based merging is often much nicer than the
pack based merging that this replaces. The only place where it isn't we
end up blending between two packs when we could do a single pack. To
handle that case, just teach the v2i64 lowering to handle these blends
by digging out the operands.
With this we're down to only really random permutations that cause an
explosion of instructions.
llvm-svn: 229849
v16i8 shuffles, and replace it with new facilities.
This uses precise patterns to match exact unpacks, and the new
generalized unpack lowering only when we detect a case where we will
have to shuffle both inputs anyways and they terminate in exactly
a blend.
This fixes all of the blend horrors that I uncovered by always lowering
blends through the vector shuffle lowering. It also removes *sooooo*
much of the crazy instruction sequences required for v16i8 lowering
previously. Much cleaner now.
The only "meh" aspect is that we sometimes use pshufb+pshufb+unpck when
it would be marginally nicer to use pshufb+pshufb+por. However, the
difference there is *tiny*. In many cases it's a win because we re-use
the pshufb mask. In others, we get to avoid the pshufb entirely. I've
left a FIXME, but I'm dubious we can really do better than this. I'm
actually pretty happy with this lowering now.
For SSE2 this exposes some horrors that were really already there. Those
will have to be fixed by changing a different path through the v16i8
lowering.
llvm-svn: 229846
lowering paths. I'm going to be leveraging this to simplify a lot of the
overly complex lowering of v8 and v16 shuffles in pre-SSSE3 modes.
Sadly, this isn't profitable on v4i32 and v2i64. There, the float and
double blending instructions for pre-SSE4.1 are actually pretty good,
and we can't beat them with bit math. And once SSE4.1 comes around we
have direct blending support and this ceases to be relevant.
Also, some of the test cases look odd because the domain fixer
canonicalizes these to floating point domain. That's OK, it'll use the
integer domain when it matters and some day I may be able to update
enough of LLVM to canonicalize the other way.
This restores almost all of the regressions from teaching x86's vselect
lowering to always use vector shuffle lowering for blends. The remaining
problems are because the v16 lowering path is still doing crazy things.
I'll be re-arranging that strategy in more detail in subsequent commits
to finish recovering the performance here.
llvm-svn: 229836
First, don't combine bit masking into vector shuffles (even ones the
target can handle) once operation legalization has taken place. Custom
legalization of vector shuffles may exist for these patterns (making the
predicate return true) but that custom legalization may in some cases
produce the exact bit math this matches. We only really want to handle
this prior to operation legalization.
However, the x86 backend, in a fit of awesome, relied on this. What it
would do is mark VSELECTs as expand, which would turn them into
arithmetic, which this would then match back into vector shuffles, which
we would then lower properly. Amazing.
Instead, the second change is to teach the x86 backend to directly form
vector shuffles from VSELECT nodes with constant conditions, and to mark
all of the vector types we support lowering blends as shuffles as custom
VSELECT lowering. We still mark the forms which actually support
variable blends as *legal* so that the custom lowering is bypassed, and
the legal lowering can even be used by the vector shuffle legalization
(yes, i know, this is confusing. but that's how the patterns are
written).
This makes the VSELECT lowering much more sensible, and in fact should
fix a bunch of bugs with it. However, as you'll see in the test cases,
right now what it does is point out the *hilarious* deficiency of the
new vector shuffle lowering when it comes to blends. Fortunately, my
very next patch fixes that. I can't submit it yet, because that patch,
somewhat obviously, forms the exact and/or pattern that the DAG combine
is matching here! Without this patch, teaching the vector shuffle
lowering to produce the right code infloops in the DAG combiner. With
this patch alone, we produce terrible code but at least lower through
the right paths. With both patches, all the regressions here should be
fixed, and a bunch of the improvements (like using 2 shufps with no
memory loads instead of 2 andps with memory loads and an orps) will
stay. Win!
There is one other change worth noting here. We had hilariously wrong
vectorization cost estimates for vselect because we fell through to the
code path that assumed all "expand" vector operations are scalarized.
However, the "expand" lowering of VSELECT is vector bit math, most
definitely not scalarized. So now we go back to the correct if horribly
naive cost of "1" for "not scalarized". If anyone wants to add actual
modeling of shuffle costs, that would be cool, but this seems an
improvement on its own. Note the removal of 16 and 32 "costs" for doing
a blend. Even in SSE2 we can blend in fewer than 16 instructions. ;] Of
course, we don't right now because of OMG bad code, but I'm going to fix
that. Next patch. I promise.
llvm-svn: 229835
A null MCTargetStreamer allows IRObjectFile to ignore target-specific
directives. Previously we were crashing.
Differential Revision: http://reviews.llvm.org/D7711
llvm-svn: 229797
Follow-up to r229740, which removed `DITemplate*::getContext()` after my
upgrade script revealed that scopes are always `nullptr` for template
parameters. This is the other shoe: drop `scope:` from
`MDTemplateParameter` and its two subclasses. (Note: a bitcode upgrade
would be pointless, since the hierarchy hasn't been moved into place.)
llvm-svn: 229791
It turns out that `count: -1` is a special value indicating an empty
array, such as `Values` in:
  struct T {
    unsigned Count;
    int Values[];
  };
Handle it.
llvm-svn: 229769
Put the name before the value in assembly for `MDEnum`. While working
on the testcase upgrade script for the new hierarchy, I noticed that it
"looks nicer" to have the name first, since it lines the names up in the
(somewhat typical) case that they have a common prefix.
llvm-svn: 229747
Don't spend the entire iteration space in the scalar loop prologue if
computing the trip count overflows. This change also gets rid of the
backedge check in the prologue loop and the extra check for
overflowing trip-count.
Differential Revision: http://reviews.llvm.org/D7715
llvm-svn: 229731
This will help us study the format of individual symbol
records more closely.
Differential Revision: http://reviews.llvm.org/D7664
Reviewed by: Timur Iskhodzhanov
llvm-svn: 229730
Summary:
These ISAs didn't add any instructions, so they are almost identical to
Mips32r2 and Mips64r2. Even the ELF e_flags are the same; however, the ISA
revision in .MIPS.abiflags is 3 or 5 respectively instead of 2.
Reviewers: vmedic
Reviewed By: vmedic
Subscribers: tomatabacu, llvm-commits, atanasyan
Differential Revision: http://reviews.llvm.org/D7381
llvm-svn: 229695
Summary:
Parse for an MCExpr instead of an Identifier and use the symbol for relocations, not just the symbol's name.
This fixes errors when using local labels in .cpsetup (PR22518).
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: seanbruno, emaste, llvm-commits
Differential Revision: http://reviews.llvm.org/D7697
llvm-svn: 229671
Add some of the missing M and R class Cortex CPUs, namely:
Cortex-M0+ (called Cortex-M0plus for GCC compatibility)
Cortex-M1
SC000
SC300
Cortex-R5
llvm-svn: 229660
1) We should not try to simplify if the sext has multiple uses
2) There is no need to simplify if the source value is already sign-extended.
Patch by Gil Rapaport <gil.rapaport@intel.com>
Differential Revision: http://reviews.llvm.org/D6949
llvm-svn: 229659
code.
While this didn't have the miscompile (it used MatchLeft consistently)
it missed some cases where it could use right shifts. I've added a test
case Craig Topper came up with to exercise the right shift matching.
This code is really identical between the two. I'm going to merge them
next so that we don't keep two copies of all of this logic.
llvm-svn: 229655
The current SystemZ back-end only supports the local-exec TLS access model.
This patch adds all required CodeGen support for the other TLS models, which
means in particular:
- Expand initial-exec TLS accesses by loading TLS offsets from the GOT
using @indntpoff relocations.
- Expand general-dynamic and local-dynamic accesses by generating the
appropriate calls to __tls_get_offset. Note that this routine has
a non-standard ABI and requires loading the GOT pointer into %r12,
so the patch also adds support for the GLOBAL_OFFSET_TABLE ISD node.
- Add a new platform-specific optimization pass to remove redundant
__tls_get_offset calls in the local-dynamic model (modeled after
the corresponding X86 pass).
- Add test cases verifying all access models and optimizations.
llvm-svn: 229654
The current SystemZ back-end only supports the local-exec TLS access model.
This patch adds all required MC support for the other TLS models, which
means in particular:
- Support additional relocation types for
Initial-exec model: R_390_TLS_IEENT
Local-dynamic-model: R_390_TLS_LDO32, R_390_TLS_LDO64,
R_390_TLS_LDM32, R_390_TLS_LDM64, R_390_TLS_LDCALL
General-dynamic model: R_390_TLS_GD32, R_390_TLS_GD64, R_390_TLS_GDCALL
- Support assembler syntax to generate additional relocations
for use with __tls_get_offset calls:
:tls_gdcall:
:tls_ldcall:
The patch also adds a new test to verify fixups and relocations,
and removes the (already unused) FK_390_PLT16DBL/FK_390_PLT32DBL
fixup kinds.
llvm-svn: 229652
r229622: "[LoopAccesses] Make VectorizerParams global"
r229623: "[LoopAccesses] Stash the report from the analysis rather than emitting it"
r229624: "[LoopAccesses] Cache the result of canVectorizeMemory"
r229626: "[LoopAccesses] Create the analysis pass"
r229628: "[LoopAccesses] Change debug messages from LV to LAA"
r229630: "[LoopAccesses] Add canAnalyzeLoop"
r229631: "[LoopAccesses] Add missing const to APIs in VectorizationReport"
r229632: "[LoopAccesses] Split out LoopAccessReport from VectorizerReport"
r229633: "[LoopAccesses] Add -analyze support"
r229634: "[LoopAccesses] Change LAA:getInfo to return a constant reference"
r229638: "Analysis: fix buildbots"
llvm-svn: 229650
track state.
I didn't like this in the code review because the pattern tends to be
error prone, but I didn't see a clear way to rewrite it. Turns out that
there were bugs here; I found them when fuzz testing our shuffle
lowering for correctness on x86.
The core of the problem is that we need to consistently test all our
preconditions for the same directionality of shift and the same input
vector. Instead, formulate this as two predicates (one doesn't depend on
the input in any way), pass things like the directionality and input
vector as inputs, and loop over the alternatives.
This fixes a pattern of very rare miscompiles coming out of this code.
Turned up roughly 4 out of every 1 million v8 shuffles in my fuzz
testing. The new code is over half a million test runs with no failures
yet. I've also fuzzed every other function in the lowering code with
over 3.5 million test cases and not discovered any other miscompiles.
llvm-svn: 229642
The LoopInfo in combination with depth_first is used to enumerate the
loops.
Right now -analyze is not yet complete. It only prints the result of
the analysis, the report and the run-time checks. Printing the unsafe
dependences will require a bit more reshuffling, which I'd like to do in a
follow-on to this patchset. Unsafe dependences are currently checked
via -debug-only=loop-accesses in the new test.
This is part of the patchset that converts LoopAccessAnalysis into an
actual analysis pass.
llvm-svn: 229633
InstCombiner::visitGetElementPtrInst was using getFirstNonPHI to compute the
insertion point, which caused the verifier to complain when a GEP was inserted
before a landingpad instruction. This commit fixes it to use getFirstInsertionPt
instead.
rdar://problem/19394964
llvm-svn: 229619
When visiting the initial list of "root" instructions (those which must always
be alive), for those that are integer-valued (such as invokes returning an
integer), we mark their bits as (initially) all dead (we might, obviously, find
uses of those bits later, but all bits are assumed dead until proven
otherwise). Don't do so, however, if we've already seen a use of those bits by
another root instruction (such as a store).
Fixes a miscompile of the sanitizer unit tests on x86_64.
Also, add a debug line for visiting the root instructions, and remove a debug
line which tried to print instructions being removed (printing dead
instructions is dangerous, and can sometimes crash).
llvm-svn: 229618
extensions.
This change also removes `DEBUG(dbgs() << "SCEV: untested prestart
overflow check\n");` because that case has a unit test now.
Differential Revision: http://reviews.llvm.org/D7645
llvm-svn: 229600
This patch teaches fast-isel how to select a (V)CVTSI2SSrr for an integer to
float conversion, and how to select a (V)CVTSI2SDrr for an integer to double
conversion.
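For illustration, the source-level conversions involved (the function names
are made up):
  float  int_to_float(int x)  { return (float)x;  }  /* selects (V)CVTSI2SSrr */
  double int_to_double(int x) { return (double)x; }  /* selects (V)CVTSI2SDrr */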
Added test 'fast-isel-int-float-conversion.ll'.
Differential Revision: http://reviews.llvm.org/D7698
llvm-svn: 229589
The problem in the original patch was not switching back to .text after printing
an eh table.
Original message:
On ELF, put PIC jump tables in a non executable section.
Fixes PR22558.
llvm-svn: 229586
If an EH table is printed in between the function and the jump table we would
fail to switch back to the text section to print the jump table.
llvm-svn: 229580
with the Mach-O S_LITERAL_POINTERS section type.
Also fix the printing of the leading addresses for literal sections to be consistent and
not print the 0x prefix. Updated test cases to match.
llvm-svn: 229548
Change the memory operands in sse12_fp_packed_scalar_logical_alias from scalars to vectors.
That's what the hardware packed logical FP instructions define: 128-bit memory operands.
There are no scalar versions of these instructions...because this is x86.
Generating the wrong code (folding a scalar load into a 128-bit load) is still possible
using the peephole optimization pass and the load folding tables. We won't completely
solve this bug until we either fix the lowering in fabs/fneg/fcopysign and any other
places where scalar FP logic is created or fix the load folding in foldMemoryOperandImpl()
to make sure it isn't changing the size of the load.
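As a rough sketch of where such scalar FP logic comes from (illustrative
only), fabs is lowered to bit math on the sign bit, and on x86 that bit math
maps onto the packed logical instructions discussed above:
  #include <stdint.h>
  #include <string.h>

  float my_fabsf(float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    bits &= 0x7FFFFFFFu;            /* clear the sign bit */
    memcpy(&x, &bits, sizeof bits);
    return x;
  }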
Differential Revision: http://reviews.llvm.org/D7474
llvm-svn: 229531
The 64-bit MIPS ELF archive file format is used by MIPS64 targets.
The main difference from a regular archive file is the symbol table format:
1. ar_name is equal to "/SYM64/"
2. number of symbols and offsets are 64-bit integers
http://techpubs.sgi.com/library/manuals/4000/007-4658-001/pdf/007-4658-001.pdf
Page 96
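A rough sketch of the symbol table member layout described above (the field
order and 64-bit on-disk encoding are assumptions based on the referenced
document, not on this patch):
  #include <stdint.h>

  struct Sym64Table {
    uint64_t NumSymbols;   /* number of symbols, 64-bit */
    uint64_t Offsets[];    /* 64-bit offsets of the member headers */
    /* ...followed by the NUL-terminated symbol name strings. */
  };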
The patch allows reading of such archive files by llvm-nm, llvm-objdump
and other tools. But it does not support archive files where the number of
symbols and/or offsets exceeds 2^32. I think that is a rather rare case which
would require a more significant modification of the `Archive` class code.
http://reviews.llvm.org/D7546
llvm-svn: 229520
This is a follow-on patch to:
http://reviews.llvm.org/D7093
That patch canonicalized constant splats as build_vectors,
and this patch removes the constant check so we can canonicalize
all splats as build_vectors.
This fixes the 2nd test case in PR22283:
http://llvm.org/bugs/show_bug.cgi?id=22283
The unfortunate code duplication between SelectionDAG and DAGCombiner
is discussed in the earlier patch review. At least this patch is just
removing code...
This improves an existing x86 AVX test and changes codegen in an ARM test.
Differential Revision: http://reviews.llvm.org/D7389
llvm-svn: 229511
The problem was in store-sink barrier check.
Store sink barrier should be checked for ModRef (read-write) mode.
http://llvm.org/bugs/show_bug.cgi?id=22613
llvm-svn: 229495
Flag -fast-isel-abort is required in order to verify that X86FastISel
never fails to select FPExt (float-to-double) and FPTrunc (double-to-float).
No functional change intended.
llvm-svn: 229489
- added mask types v8i1 and v16i1 to possible function parameters
- enabled passing 512-bit vectors in standard CC
- added a test for KNL intel_ocl_bi conventions
llvm-svn: 229482
Vector zext tends to get legalized into a vector anyext, represented as a vector shuffle with an undef vector + a bitcast, that gets ANDed with a mask that zeroes the undef elements.
Combine this into an explicit shuffle with a zero vector instead. This allows shuffle lowering to match it as a zext, instead of matching it as an anyext and emitting an explicit AND.
This combine only covers a subset of the cases, but it's a start.
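A per-element scalar analogy (purely illustrative) of the two forms being
combined:
  #include <stdint.h>

  /* "anyext + mask": widen with unspecified high bits, then AND them away. */
  uint32_t widen_anyext_and(uint16_t x, uint32_t junk_high_bits) {
    uint32_t any = (junk_high_bits << 16) | x;  /* high half is "undef" */
    return any & 0x0000FFFFu;                   /* mask zeroes the undef half */
  }

  /* "zext": widen with zeros directly -- the form shuffle lowering can now
     match as a zext. */
  uint32_t widen_zext(uint16_t x) {
    return (uint32_t)x;
  }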
Differential Revision: http://reviews.llvm.org/D7666
llvm-svn: 229480
This allows it to match still more places where previously we would have
to fall back on floating point shuffles or other more complex lowering
strategies.
I'm hoping to replace some of the hand-rolled unpack matching with this
routine as it gets more and more clever.
llvm-svn: 229463
BDCE is a bit-tracking dead code elimination pass. It is based on ADCE (the
"aggressive DCE" pass), with the added capability to track dead bits of integer
valued instructions and remove those instructions when all of the bits are
dead.
Currently, it does not actually do this all-bits-dead removal, but rather
replaces the instruction's uses with a constant zero, and lets instcombine (and
the later run of ADCE) do the rest. Because we essentially get a run of ADCE
"for free" while tracking the dead bits, we also do what ADCE does and remove
actually-dead instructions as well (this includes instructions newly trivially
dead because all bits were dead, but not all such instructions can be removed).
The motivation for this is a case like:
  int __attribute__((const)) foo(int i);
  int bar(int x) {
    x |= (4   & foo(5));
    x |= (8   & foo(3));
    x |= (16  & foo(2));
    x |= (32  & foo(1));
    x |= (64  & foo(0));
    x |= (128 & foo(4));
    return x >> 4;
  }
As it turns out, if you order the bit-field insertions so that all of the dead
ones come last, then instcombine will remove them. However, if you pick some
other order (such as the one above), the fact that some of the calls to foo()
are useless (here, the insertions masked with 4 and 8 only touch bits that the
final `>> 4` discards) is not locally obvious, and we don't remove them
(without this pass).
I did a quick compile-time overhead check using sqlite from the test suite
(Release+Asserts). BDCE took ~0.4% of the compilation time (making it about
twice as expensive as ADCE).
I've not looked at why yet, but we eliminate instructions due to having
all-dead bits in:
External/SPEC/CFP2006/447.dealII/447.dealII
External/SPEC/CINT2006/400.perlbench/400.perlbench
External/SPEC/CINT2006/403.gcc/403.gcc
MultiSource/Applications/ClamAV/clamscan
MultiSource/Benchmarks/7zip/7zip-benchmark
llvm-svn: 229462
This test was failing on non-x86 hosts because it specified a cpu of x86_64,
but not an architecture. x86_64 is obviously not a valid cpu on all
architectures.
llvm-svn: 229460
Our register allocation has become better recently, it seems, and is now
starting to generate cross-block copies into inflated register classes. These
copies are not transformed into subregister insertions/extractions by the
PPCVSXCopy class, and so need to be handled directly by
PPCInstrInfo::copyPhysReg. The code to do this was *almost* there, but not
quite: it was unnecessarily restricting itself to only the direct
sub/super-register-class case, not copying between, for example, something in
VRRC and the lower half of VSRC, which are super-registers of F8RC.
Triggering this behavior manually is difficult; I'm including two
bugpoint-reduced test cases from the test suite.
llvm-svn: 229457
Patch to explicitly add the SSE MOVQ (rr,mr,rm) instructions to SSEPackedInt domain - prevents a number of costly domain switches.
Differential Revision: http://reviews.llvm.org/D7600
llvm-svn: 229439
This adds a safe interface to the machine independent InputArg struct
for accessing the index of the original (IR-level) argument. When a
non-native return type is lowered, we generate the hidden
machine-level sret argument on-the-fly. Before this fix, we were
representing this argument as OrigArgIndex == 0, which is an outright
lie. In particular this crashed in the AArch64 backend where we
actually try to access the type of the original argument.
Now we use a sentinel value for machine arguments that have no
original argument index. AArch64, ARM, Mips, and PPC now check for this
case before accessing the original argument.
Fixes <rdar://19792160> Null pointer assertion in AArch64TargetLowering
llvm-svn: 229413
We won't find a root with index zero in any loop that we are able to reroll.
However, we may find one in a non-rerollable loop, so bail gracefully instead
of failing hard.
llvm-svn: 229406
If a PHI has no users, don't crash; bail gracefully. This shouldn't
happen often, but we can make no guarantees that previous passes didn't leave
dead code around.
llvm-svn: 229405
to generically lower blends and is particularly nice because it is
available frome SSE2 onward. This removes a lot of the remaining domain
crossing blends in SSE2 code.
I'm hoping to replace some of the "interleaved" lowering hacks with
something closer to this which should be more principled. First, this
needs to learn how to detect and use other interleavings besides that of
the natural type provided. That will be a follow-up patch though.
llvm-svn: 229378
For #pragma comment(linker, ...) MSVC expects the comment string to be quoted, but for #pragma comment(lib, ...) the compiler itself quotes the library name.
Since this distinction disappears by the time the directive reaches the backend, move quoting for the "lib" version to the frontend.
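For reference, the two source forms involved (the library and symbol names
are made up):
  #pragma comment(lib, "mylib")                /* compiler itself quotes the name */
  #pragma comment(linker, "/include:_my_sym")  /* string is expected to carry its
                                                  own quoting where needed */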
Differential Revision: http://reviews.llvm.org/D7652
llvm-svn: 229375
This blend instruction is ... really lame. The register usage is insane.
As a consequence this is probably only *barely* better than 2 pshufbs
followed by a por, and that mostly because it only has to read from
a single memory location.
However, this doesn't fix as much as I kind of expected, so more to go.
Pretty sure that the ordering and delegation of v16i8 is just really,
really bad.
llvm-svn: 229373
We didn't properly handle the out-of-bounds case for
ConstantAggregateZero and UndefValue. This would manifest as a crash
when the constant folder was asked to fold a load of a constant global
whose struct type has no operands.
This fixes PR22595.
llvm-svn: 229352
advantage of the existence of a reasonable blend instruction.
The 256-bit vector shuffle lowering has leveraged the general technique
of decomposed shuffles and blends for quite some time, but this never
made it back into the 128-bit code, and there are a large number of
patterns where this is substantially better. For example, this removes
almost all domain crossing in vector shuffles that involve some blend
and some permutation with SSE4.1 and later. See the massive reduction
in 'shufps' for integer test cases in this commit.
This isn't perfect yet for a few reasons:
1) The v8i16 shuffle lowering continues to plague me. We don't always
form an unpack-based blend when that would be better. But the wins
pretty drastically outstrip the losses here.
2) The v16i8 shuffle lowering is just a disaster here. I never went and
implemented blend support here for some terrible reason. I'll do
that next probably. I've not updated it for now.
More variations on this technique are coming as well -- we don't
shuffle-into-unpack or shuffle-into-palignr, both of which would also be
profitable.
Note that some test cases grow significantly in the number of
instructions, but I expect them to actually be faster. We use
pshufd+pshufd+blendw instead of a single shufps, but the pshufd's are
very likely to pipeline well (two ports on most modern intel chips) and
the blend is a *very* fast instruction. The domain switch penalty will
essentially always be more than a blend instruction, which is the only
increase in tree height.
llvm-svn: 229350
Summary:
When creating {insert,extract}value instructions from a BitcodeReader, we
weren't verifying the fields were valid.
Bugs found with afl-fuzz
Reviewers: rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7325
llvm-svn: 229345
This patch refactors the existing lowerVectorShuffleAsByteShift function to add support for 256-bit vectors on AVX2 targets.
It also fixes a tablegen issue that prevented the lowering of vpslldq/vpsrldq vec256 instructions.
Differential Revision: http://reviews.llvm.org/D7596
llvm-svn: 229311
when that will allow it to lower with a single permute instead of
multiple permutes.
It tries to detect when it will only have to do a single permute in
either case to maximize folding of loads and such.
This cuts a *lot* of the avx2 shuffle permute counts in half. =]
llvm-svn: 229309
directly into blends of the splats.
These patterns show up even very late in the vector shuffle lowering
where we don't have any chance for DAG combining to kick in, and
blending is a tremendously simpler operation to model. By coercing the
shuffle into a blend we can much more easily match and lower shuffles of
splats.
Immediately with this change there are significantly more blends being
matched in the x86 vector shuffle lowering.
llvm-svn: 229308
I was somewhat surprised this pattern really came up, but it does. It
seems better to just directly handle it than try to special case every
place where we end up forming a shuffle that devolves to a shuffle of
a zero vector.
llvm-svn: 229301
subvectors from buildvectors. That doesn't really make any sense and it
breaks all of the down-stream matching of buildvectors to cleverly lower
shuffles.
With this, we now get the shift-based lowering of 256-bit vector
shuffles with AVX1 when we split them into 128-bit vectors. We also do
much better on the zero-extension patterns, although there remains quite
a bit of room for improvement here.
llvm-svn: 229299
GNU ld sets default, not hidden, visibility on local symbols.
Having default or hidden visibility on local symbols makes no difference in run-time behavior.
Patch by: H.J. Lu <hjl.tools@gmail.com>
llvm-svn: 229297
lowerings -- one which decomposes into an initial blend followed by
a permute.
Particularly on newer chips, blends are handled independently of
shuffles and so this is much less bottlenecked on the single port that
floating point shuffles are executed with on Intel.
I'll be adding this lowering to a bunch of other code paths in
subsequent commits to handle still more places where we can effectively
leverage blends when they're available in the ISA.
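A scalar model of that decomposition (lane counts, masks, and values are made
up for illustration):
  #include <stdio.h>

  int main(void) {
    int a[4] = {10, 11, 12, 13};
    int b[4] = {20, 21, 22, 23};

    /* Target shuffle: result = { a[2], b[1], a[0], b[3] } */

    /* Step 1: per-lane blend -- lane i takes a[i] or b[i]. */
    int pick_a[4] = {1, 0, 1, 0};
    int c[4];
    for (int i = 0; i < 4; ++i)
      c[i] = pick_a[i] ? a[i] : b[i];   /* c = { a[0], b[1], a[2], b[3] } */

    /* Step 2: a single-input permute of the blended vector. */
    int perm[4] = {2, 1, 0, 3};
    for (int i = 0; i < 4; ++i)
      printf("%d ", c[perm[i]]);        /* prints: 12 21 10 23 */
    printf("\n");
    return 0;
  }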
llvm-svn: 229292
test.
This was just a matter of the DAG combine for vector shuffles being too
aggressive. This is a bit of a grey area, but I think generally if we
can re-use intermediate shuffles, we should. Certainly, given the test
cases I have available, this seems like the right call.
llvm-svn: 229285
legality test (essentially, everything is legal).
I'm planning to make this the default shortly, but I'd like to fix
a collection of the bugs it exposes first, and this will let me easily
test them. It also showcases both the improvements and a few of the
regressions triggered by the change. The biggest improvements by far are
the significantly reduced shuffling and domain crossing in the combining
test case. The biggest regressions are missing some clever blending
patterns.
llvm-svn: 229284
asm and port the mmx vector shuffle test to it.
Not thrilled with how it handles the stack manipulation logic, but I'm
much less bothered by that than I am by updating the test manually. =]
If anyone wants to teach the test checks management script about stack
adjustment patterns, that'd be cool too.
llvm-svn: 229268
Patch to allow XOP instructions (integer comparison and integer multiply-add) to be commuted. The comparison instructions sometimes require the compare mode to be flipped but the remaining instructions can use default commutation modes.
This patch also sets the SSE domains of all the XOP instructions.
Differential Revision: http://reviews.llvm.org/D7646
llvm-svn: 229267
The "dereferenceable" attribute cannot be added via .addAttribute(),
since it also expects a size in bytes. AttrBuilder#addAttribute or
AttributeSet#addAttribute is wrapped by classes Function, InvokeInst,
and CallInst. Add corresponding wrappers around
AttrBuilder#addDereferenceableAttr to those classes.
Having done this, propagate the dereferenceable attribute via
gc.relocate, adding a test to exercise it. Note that -datalayout is
required during execution over and above -instcombine, because
InstCombine only optionally requires DataLayoutPass.
Differential Revision: http://reviews.llvm.org/D7510
llvm-svn: 229265
a gold binary explicitly. Substitute this binary into the tests rather
than just directly executing the 'ld' binary.
This should allow folks to inject a cross compiling gold binary, or in
my case to use a gold binary built and installed somewhere other than
/usr/bin/ld. It should also allow the tests to find 'ld.gold' so that
things work even if gold isn't the default on the system.
I've only stubbed out support in the makefile to preserve the existing
behavior with none of the fancy logic. If someone else wants to add
logic here, they're welcome to do so.
llvm-svn: 229251
This should allow finally fixing the f64 fdiv implementation.
Test is disabled for VI since there seems to be a problem with one
of the buffer load instructions on it.
llvm-svn: 229236
This code didn't really make sense as is. If a filename is passed in,
the user obviously wants the coverage *for that file*, not *for
everything*.
llvm-svn: 229217
If we know that the sign bit of a value being sign extended is zero, we can use a zero extension instead. This is motivated by the fact that zero extensions are generally cheaper on x86 (and most other architectures?). We already apply a similar transform in DAGCombine, this just extends that to the IR level.
This comes up when we eagerly canonicalize gep indices to the width of a machine register (i64 on x86_64). To do so, we insert sign extensions (sext) to promote smaller types.
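A minimal sketch of the kind of case involved (the names are illustrative):
  #include <stdint.h>

  int64_t widen_index(int32_t i) {
    int32_t small = i & 0xFF;          /* sign bit is known to be zero */
    /* Widening for a 64-bit GEP index: because small is non-negative,
       a zero extension produces the same value as a sign extension. */
    return (int64_t)(uint32_t)small;
  }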
Differential Revision: http://reviews.llvm.org/D7255
llvm-svn: 229189
With this commit, llvm-dsymutil learns how to choose which DIEs
it will link in the final output and which ones it won't. This
is based on the 'valid relocation' information that has been
built in the previous commits.
The test only tests that we choose the right 'root DIEs'. The
selection algorithm (and especially the part that walks the
dependencies of a root DIE) lacks a bit of test coverage. This
will be much easier to cover when we output actual Dwarf and
thus can use llvm-dwarfdump to verify the structure of the
emitted DIE trees. I'll add more tests then.
llvm-svn: 229183
This takes the preposterous number of patterns in this section
that were last added to in r219033 down to just plain obnoxious.
With a little more work, we might get this down to just comical.
I've added more test cases to the existing file that checks these
patterns, but it seems that some of these patterns simply don't
exist with today's shuffle lowering.
llvm-svn: 229158
llc would hang trying to write output to a full pipe that FileCheck
wasn't reading. FileCheck wasn't reading from stdin because it needs a
file as a positional argument.
llvm-svn: 229157
This patch adds functionality to the MIPS delay slot filler: if the filler
would have to put a NOP instruction into the delay slot of a microMIPS JR
instruction, then instead of emitting the NOP, the jump is replaced by the
compact jump instruction JRC.
Differential Revision: http://reviews.llvm.org/D7522
llvm-svn: 229128
This patch fixes a problem I accidentally introduced in an instruction combine
on select instructions added at r227197. That revision taught the instruction
combiner how to fold a cttz/ctlz followed by a icmp plus select into a single
cttz/ctlz with flag 'is_zero_undef' cleared.
However, the new rule added at r227197 would have produced wrong results in the
case where a cttz/ctlz with flag 'is_zero_undef' cleared was followed by a
zero-extend or truncate. In that case, the folded instruction would have
been inserted in a wrong location thus leaving the CFG in an inconsistent
state.
This patch fixes the problem and adds two reproducible test cases to
existing test 'InstCombine/select-cmp-cttz-ctlz.ll'.
llvm-svn: 229124
SimplifyCFG now knows how to speculate calls to intrinsic cttz/ctlz that are
'cheap' for the target. Therefore, some of the logic in CodeGenPrepare
that was originally added at revision 224899 can now be removed.
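The kind of source pattern involved looks roughly like this (illustrative):
  unsigned safe_ctz(unsigned x) {
    if (x == 0)               /* guard, since __builtin_ctz(0) is undefined */
      return 32;
    return __builtin_ctz(x);  /* a cheap cttz can be speculated, turning the
                                 branch into a select */
  }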
This patch is essentially a non-functional change. It removes the duplicated
logic in CodeGenPrepare and converts all the existing target specific tests
for cttz/ctlz into SimplifyCFG tests.
Differential Revision: http://reviews.llvm.org/D7608
llvm-svn: 229105
Summary:
Made the following changes:
Added calls to emitDirectiveSetNoAt() and emitDirectiveSetAt().
Added special emit function for .set at=$reg, emitDirectiveSetAtWithArg(unsigned RegNo).
Improved parsing error checks for .set at.
Refactored parser code for .set at.
Improved testing of both directives.
Improved code readability and comments.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7176
llvm-svn: 229097
regressions for LLDB on Linux. Rafael indicated on lldb-dev that we
should just go ahead and revert these but that he wasn't at a computer.
The patches backed out are as follows:
r228980: Add support for having multiple sections with the name and ...
r228889: Invert the section relocation map.
r228888: Use the existing SymbolTableIndex intsead of doing a lookup.
r228886: Create the Section -> Rel Section map when it is first needed.
These patches look pretty nice to me, so hoping it's not too hard to get
them reinstated. =D
llvm-svn: 229080
The issues with the new unroll analyzer are more fundamental than code
cleanup, algorithm, or data structure changes. I've sent an email to the
original commit thread with details and a proposal for how to redesign
things. I'm disabling this for now so that we don't spend time
debugging issues with it in its current state.
llvm-svn: 229064
- First, there's a crash when we try to combine two pointers into an `icmp`
directly by creating a `bitcast`, which is invalid if the two pointers are
from different address spaces.
- It's not always appropriate to cast one pointer to another if they are from
different address spaces, as that is not a no-op cast. Instead, we only
combine `icmp` from `ptrtoint` if the two pointers are of the same address
space.
llvm-svn: 229063
propagating of metadata.
We were propagating !nonnull metadata even when the newly formed load is
no longer of a pointer type. This is clearly broken and results in LLVM
failing the verifier and aborting. This patch just restricts the
propagation of !nonnull metadata to when we actually have a pointer
type.
This bug report and the initial version of this patch was provided by
Charles Davis! Many thanks for finding this!
We still need to add logic to round-trip the metadata correctly if we
combine from pointer types to integer types and then back by using range
metadata for the integer type loads. But this is the minimal and safe
version of the patch, which is important so we can backport it into 3.6.
llvm-svn: 229029
This commit makes the following changes:
- Stop issuing a warning when the triples' string representations do not match
exactly if the Triple objects generated from the strings compare equal.
- On Apple platforms, choose the triple that has the larger minimum version
number.
rdar://problem/16743513
Differential Revision: http://reviews.llvm.org/D7591
llvm-svn: 228999
Using this in combination with -ffunction-sections allows LLVM to output a .o
file with multiple sections named .text. This saves space by avoiding long
unique names of the form .text.<C++ mangled name>.
llvm-svn: 228980
Constant pool entries are uniqued by their contents regardless of their
type. This means that a pshufb can have a shuffle mask which isn't a
simple array of bytes.
The code path which attempts to decode the mask didn't check for
failure, causing PR22559.
llvm-svn: 228979
I'd modify my migration tool to account for this, but this is the only
instance of a typedef'd pointer type to a gep I found in the whole test
suite, so it didn't seem worthwhile.
llvm-svn: 228970
The PowerPC backend has long promoted some floating-point vector operations
(such as select) to integer vector operations. Unfortunately, this behavior was
broken by r216555. When using FP_EXTEND/FP_ROUND for promotions, we must check
that both the old and new types are floating-point types. Otherwise, we must
use BITCAST as we did prior to r216555 for everything.
llvm-svn: 228969
Summary:
Implement the bulk of returning values in Mips fast-isel
Test Plan:
reatabi.ll
Passes test-suite at -O0,-O2 and with mips32r2 and mips32r1.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits, aemerson, rfuhler
Differential Revision: http://reviews.llvm.org/D5920
llvm-svn: 228958
Summary:
Instances of the AssumptionCache are per function, so we can't re-use
the same AssumptionCache instance when recursing in the CallAnalyzer to
analyze a different function. Instead we have to pass the
AssumptionCacheTracker to the CallAnalyzer so it can get the right
AssumptionCache on demand.
Reviewers: hfinkel
Subscribers: llvm-commits, hans
Differential Revision: http://reviews.llvm.org/D7533
llvm-svn: 228957
We can't solve the full subgraph isomorphism problem. But we can
allow obvious cases, where for example two instructions of different
types are out of order. Due to them having different types/opcodes,
there is no ambiguity.
llvm-svn: 228931
Now that SimplifyCFG uses TTI for the cost heuristic, we can teach BasicTTIImpl
how to query TLI in order to get a more accurate cost for truncates and
zero-extends.
Before this patch, the basic cost heuristic in TargetTransformInfoImplCRTPBase
would have conservatively returned a 'default' TCC_Basic for all zero-extends,
and TCC_Free for truncates on native types.
This patch improves the heuristic so that we query TLI (if available) to get
more accurate answers. If TLI is available, then methods 'isZExtFree' and
'isTruncateFree' can be used to check if a zext/trunc is free for the target.
Added more test cases to SimplifyCFG/X86/speculate-cttz-ctlz.ll.
With this change, SimplifyCFG is now able to speculate a 'cheap' cttz/ctlz
immediately followed by a free zext/trunc.
Differential Revision: http://reviews.llvm.org/D7585
llvm-svn: 228923
The changes in r223113 (ARM modified-immediate syntax) have broken
instructions like:
mov r0, #~0xffffff00
The problem is that I've added a spurious range check on the immediate
operand to ensure that it lies between INT32_MIN and UINT32_MAX. While
this range check is correct in theory, it causes problems because the
operand is stored in an int64_t (by MC). So valid 32-bit constants like
#~0xffffff00 become out of range. The solution is to simply remove this
range check. It is not possible to validate the range of the immediate
operand with the current setup because: 1) The operand is stored in an
int64_t by MC, 2) The immediate can be of the forms #imm, #-imm, #~imm
or even #((~imm)) etc. So we just chop the value to 32 bits and use it.
Also noted that the original range check was not tested by any of the
unit tests. I've added a new test to cover #~imm kind of operands.
Change-Id: I411e90d84312a2eff01b732bb238af536c4a7599
llvm-svn: 228920
I've built some tests in WebRTC with and without this change. With this change number of __tsan_read/write calls is reduced by 20-40%, binary size decreases by 5-10% and execution time drops by ~5%. For example:
$ ls -l old/modules_unittests new/modules_unittests
-rwxr-x--- 1 dvyukov 41708976 Jan 20 18:35 old/modules_unittests
-rwxr-x--- 1 dvyukov 38294008 Jan 20 18:29 new/modules_unittests
$ objdump -d old/modules_unittests | egrep "callq.*__tsan_(read|write|unaligned)" | wc -l
239871
$ objdump -d new/modules_unittests | egrep "callq.*__tsan_(read|write|unaligned)" | wc -l
148365
http://reviews.llvm.org/D7069
llvm-svn: 228917
Using KORTESTW to compare an i1 value with zero was wrong since the instruction tests 16 bits.
KORTESTW may be used with KSHIFTL+KSHIFTR that clear the 15 upper bits.
I removed the (X86cmp i1, 0) pattern; we now zero-extend i1 to i8 and then use TESTB.
There are some cases where i1 is in the mask register and the upper bits are already zeroed.
Then KORTESTW is the better solution, but it is subject for optimization.
Meanwhile, I'm fixing the correctness issue.
llvm-svn: 228916
This gives a rough estimate of whether using pushes instead of movs is profitable, in terms of size.
We go over all calls in the MachineFunction and compute:
a) For each callsite that can not use pushes, the penalty of not having a reserved call frame.
b) For each callsite that can use pushes, the gain of actually replacing the movs with pushes (and the potential penalty of having to readjust the stack).
Differential Revision: http://reviews.llvm.org/D7561
llvm-svn: 228915
We used to do this DAG combine, but it's not always correct:
If the first fp_round isn't a value preserving truncation, it might
introduce a tie in the second fp_round, that wouldn't occur in the
single-step fp_round we want to fold to.
In other words, double rounding isn't the same as rounding. (As a decimal
analogy with round-half-to-even: 2.501 rounds directly to 3, but rounding it
first to one decimal place gives 2.5, which then rounds to 2.)
Differential Revision: http://reviews.llvm.org/D7571
llvm-svn: 228911
We would crash if we couldn't locate a Function that either Location's
Value belonged to. Now we just print out a debug message and return
conservatively.
llvm-svn: 228901
Apparently some code finally started to tickle this after my
canonicalization changes to instcombine.
The bug stems from trying to form a vector type out of scalars that
aren't compatible at all. In this example, from x86_mmx values. The code
in the vectorizer that checks for reasonable types was checking for
aggregates or vectors, but there are lots of other types that should
just never reach the vectorizer.
Debugging this was made more confusing by the lie in an assert in
VectorType::get() -- it isn't that the types are *primitive*. The types
must be integer, pointer, or floating point types. No other types are
allowed.
I've improved the assert and added a helper to the vectorizer to handle
the element type validity checks. It now re-uses the VectorType static
function and then further excludes weird target-specific types that we
probably shouldn't be touching here (x86_fp80 and ppc_fp128). Neither of
these are really reachable anyways (neither 80-bit nor 128-bit things
will get vectorized) but it seems better to just eagerly exclude such
nonsense.
I've added a test case, but while it definitely covers two of the paths
through this code there may be more paths that would benefit from test
coverage. I'm not familiar enough with the SLP vectorizer to synthesize
test cases for all of these, but was able to update the code itself by
inspection.
llvm-svn: 228899
On PowerPC, which has a full set of logical operations on (its multiple sets
of) condition-register bits, it is not profitable to break complex
conditions feeding a jump into multiple jumps. We can turn off this feature of
CGP/SDAGBuilder by marking jumps as "expensive".
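A sketch of the shape of code affected (illustrative):
  void maybe_call(int a, int b, void (*f)(void)) {
    /* CGP/SelectionDAGBuilder would normally split this short-circuited
       condition into two branches; with jumps marked "expensive", the
       predicates are combined in condition-register bits and we branch once. */
    if (a > 0 && b > 0)
      f();
  }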
P7 test-suite speedups (no regressions):
MultiSource/Benchmarks/FreeBench/pcompress2/pcompress2
-0.626647% +/- 0.323583%
MultiSource/Benchmarks/Olden/power/power
-18.2821% +/- 8.06481%
llvm-svn: 228895
I mistakenly thought the liveness of each "RetVal(F, i)" depended only on F. It
actually depends on the index too, which means we need to be careful about how
the results are combined before return. In particular if a single Use returns
Live, that counts for the entire object, at the granularity we're considering.
llvm-svn: 228885
Summary:
When trying to canonicalize negative constants out of
multiplication expressions, we need to check that the
constant is not INT_MIN which cannot be negated.
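A small sketch of the hazard (illustrative):
  #include <limits.h>

  /* Rewriting (x * -C) as -(x * C) requires negating the constant, but
     -INT_MIN is not representable in a 32-bit int, so the canonicalization
     must be skipped when C == INT_MIN. */
  int can_negate(int c) {
    return c != INT_MIN;
  }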
Reviewers: mcrosier
Reviewed By: mcrosier
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7286
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 228872
Summary:
Move calls to get_input_file and release_input_file out of
getModuleForFile(). Otherwise release_input_file may end up
unmapping a view of the file while the view is still being
used by the Module (on 32-bit hosts).
Fix for PR22482.
Test Plan: Add test using --no-map-whole-files.
Reviewers: rafael, nlewycky
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7539
llvm-svn: 228842
This is a union of these commits:
* R600/SI: Enable more tests for VI which need no changes
* R600/SI: Enable V_BCNT tests for VI
Differences:
- v_bcnt_..._e32 -> _e64
- s_load_dword* inline offset is in bytes instead of dwords
* R600/SI: Enable all tests for VI which use S_LOAD_DWORD
The inline offset is changed from dwords to bytes.
* R600/SI: Enable LDS tests for VI
Differences:
- the s_load_dword inline offset changed from dwords to bytes
- the tests checked very little on CI, so they have been fixed to check all
instructions that "SI" checked
* R600/SI: Enable lshr tests for VI
* R600/SI: Fix divrem64 tests
- "v_lshl_64" was missing "b" before "64"
- added VI-NOT checks
* R600/SI: Enable the SI.tid test for VI
* R600/SI: Enable the frem test for VI
Also, the frem_f64 checking is added for CI-VI.
* R600/SI: Add VI tests for rsq.clamped
llvm-svn: 228830
This patch is a follow-up of r228826 (see code-review: D7506).
Now that SimplifyCFG uses TargetTransformInfo for cost analysis, we
have to fix the cost heuristic for intrinsic calls to cttz/ctlz.
This patch defines method 'getIntrinsicCost' in BasicTTIImpl: now, BasicTTIImpl
queries TLI to check if a call to cttz/ctlz is cheap for the target.
Added test cases in Transforms/SimplifyCFG/X86 to verify that on x86,
SimplifyCFG only speculates a call to cttz/ctlz if it is cheap.
Differential Revision: http://reviews.llvm.org/D7554
llvm-svn: 228829
analysis.
We're already using TTI in SimplifyCFG, so remove the hard-baked "cheapness"
heuristic and use TTI directly. Generally NFC intended, but we're using a slightly
different heuristic now so there is a slight test churn.
Test changes:
* combine-comparisons-by-cse.ll: Removed unneeded branch check.
* 2014-08-04-muls-it.ll: Test now doesn't branch but emits muleq.
* coalesce-subregs.ll: Superfluous block check.
* 2008-01-02-hoist-fp-add.ll: fadd is safe to speculate. Change to udiv.
* PhiBlockMerge.ll: Superfluous CFG checking code. Main checks still present.
* select-gep.ll: A variable GEP is not expensive, just TCC_Basic, according to the TTI.
llvm-svn: 228826
Summary:
Currently we have Mips32 and Mips64 disassemblers and this causes the target
triple to affect the disassembly despite all the relevant information being in
the ELF header. These implementations do not need to be separate.
This patch merges them together such that the appropriate tables are checked
for the subtarget (e.g. Mips64 is checked when GP64 is enabled).
Reviewers: vmedic
Reviewed By: vmedic
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7498
llvm-svn: 228825
A DAGRootSet models an induction variable being used in a rerollable
loop. For example:
x[i*3+0] = y1
x[i*3+1] = y2
x[i*3+2] = y3
Base instruction -> i*3
                 +---+----+
                /     |    \
            ST[y1]   +1     +2   <-- Roots
                      |      |
                    ST[y2] ST[y3]
There may be multiple DAGRootSets, for example:
x[i*2+0] = ... (1)
x[i*2+1] = ... (1)
x[i*2+4] = ... (2)
x[i*2+5] = ... (2)
x[(i+1234)*2+5678] = ... (3)
x[(i+1234)*2+5679] = ... (3)
This concept is similar to the "Scale" member used previously, but allows
multiple independent sets of roots based off the same induction variable.
llvm-svn: 228821
If the landingpad of the invoke is using a personality function that
catches asynch exceptions, then it can catch a trap.
Also add some landingpads to invalid LLVM IR test cases that lack them.
Over-the-shoulder reviewed by David Majnemer.
llvm-svn: 228782
The isSigned argument of the makeLibCall function was hard-coded to false
(unsigned). This caused zero extension on MIPS64 soft float.
As a result, the SingleSource/Benchmarks/Stanford/FloatMM and
SingleSource/UnitTests/2005-07-17-INT-To-FP tests failed.
The solution was to use the proper argument.
Patch by Strahinja Petrovic.
Differential Revision: http://reviews.llvm.org/D7292
llvm-svn: 228765