Summary:
According to PTX ISA:
For convenience, ld, st, and cvt instructions permit source and destination data operands to be wider than the instruction-type size, so that narrow values may be loaded, stored, and converted using regular-width registers. For example, 8-bit or 16-bit values may be held directly in 32-bit or 64-bit registers when being loaded, stored, or converted to other types and sizes. The operand type checking rules are relaxed for bit-size and integer (signed and unsigned) instruction types; floating-point instruction types still require that the operand type-size matches exactly, unless the operand is of bit-size type.
So the ISA does not support extending loads or truncating stores for floating-point values. This is reflected in the code by setting the loadext/truncstore actions to Expand for floating-point types, but vectors of floating-point values are not handled.
As a result, a load of a vector of floats followed by an fp_extend may be combined by DAGCombiner into an extload, and that extload may be lowered to NVPTXISD::LoadV2 carrying extension information. However, NVPTXISD::LoadV2 does not perform the extension, and no extending instructions are inserted. The end result is PTX instructions with mismatched types, like
ld.v2.f32 {%fd3, %fd4}, [%rd2]
This patch adds the correct actions for vectors of floats, so that DAGCombiner does not create extending loads, and correct code is generated.
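As a rough, hedged sketch (not the actual diff), the fix amounts to marking those combinations as Expand in the NVPTX target lowering, along these lines:

  // Inside the NVPTXTargetLowering constructor (sketch only; the exact set of
  // value/memory type pairs handled by the real patch may differ).
  setLoadExtAction(ISD::EXTLOAD, MVT::v2f64, MVT::v2f32, Expand);
  setTruncStoreAction(MVT::v2f64, MVT::v2f32, Expand);

With the actions set to Expand, DAGCombiner will not form the illegal extending load in the first place.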
Patched by Gang Hu.
Test Plan: Test case attached.
Reviewers: jingyue
Reviewed By: jingyue
Subscribers: llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D10876
llvm-svn: 241191
Summary:
The offset of a frame index is calculated by NVPTXPrologEpilogPass. Before
that pass runs, the correct offsets of stack objects cannot be obtained, which
leads to wrong offsets when there are more than 2 frame objects. This patch
moves NVPTXPeephole after NVPTXPrologEpilogPass. Because the frame index
is already replaced by %VRFrame in NVPTXPrologEpilogPass, we check the
%VRFrame register instead, and try to remove %VRFrame if it has no remaining
uses after the NVPTXPeephole pass.
Patched by Xuetian Weng.
Test Plan:
Strengthened test/CodeGen/NVPTX/local-stack-frame.ll to check the
offset calculation based on SP and SPL.
Reviewers: jholewinski, jingyue
Reviewed By: jingyue
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10853
llvm-svn: 241185
When adding little-endian vector support for PowerPC last year, I
inadvertently disabled an optimization that recognizes a load-splat
idiom and generates the lxvdsx instruction. This patch moves the
offending logic so lxvdsx is once again generated.
This pattern is frequently generated by the vectorizer for scalar
loads of an effective constant. Previously the lxvdsx instruction was
wrongly listed as lane-sensitive for the VSX swap optimization (since
both doublewords are identical, swaps are safe). This patch fixes
this as well, so that vectorized code using lxvdsx can now have swaps
removed from the computation.
There is an existing test (@test50) in test/CodeGen/PowerPC/vsx.ll
that checks for the missing optimization. However, vsx.ll was only
being tested for POWER7 with big-endian code generation. I've added
a little-endian RUN statement and expected LE code generation for all
the tests in vsx.ll to give us a bit better VSX coverage, including
what's needed for this patch.
llvm-svn: 241183
This patch is not intended to change existing codegen behavior for any target.
It just exposes the JumpIsExpensive setting on the command-line to allow for
easier testing and emergency overrides.
Also, change the existing regression test to use FileCheck, explicitly specify
the jump-is-expensive option, and use more precise checks.
Differential Revision: http://reviews.llvm.org/D10846
llvm-svn: 241179
The EH code might have been deleted as unreachable and the personality
pruned while the filter is still present. Currently I'm hitting this at
-O0 due to the clang bug PR24009.
llvm-svn: 241170
The incoming EBP value established by the runtime is actually a pointer
to the end of the EH registration object, and not the true parent
function frame pointer. Clang doesn't need llvm.x86.seh.exceptioninfo
anymore because we know that the exception info pointer is at a fixed
offset from this incoming EBP.
The llvm.x86.seh.recoverfp intrinsic takes an EBP value provided by the
EH runtime and returns a pointer that is usable with llvm.framerecover.
The llvm.x86.seh.restoreframe intrinsic is inserted by the 32-bit
specific preparation pass in blocks targeted by the EH runtime. It
re-establishes any physical registers used by the parent function to
address the stack, such as the frame, base, and stack pointers.
Neither of these intrinsics correctly handles stack realignment prologues
yet, but it's possible to add that later.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D10848
llvm-svn: 241125
Summary:
This change introduces a !make.implicit metadata that allows the
frontend to pre-select the set of explicit null checks that will be
considered for transformation into implicit null checks.
The reason for not using profiling data instead of !make.implicit is
explained in the change to `FaultMaps.rst`.
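As a hedged illustration of how a frontend might opt a null check in (a sketch against the C++ API, not code from this change; the branch variable is hypothetical):

  // Attach the (empty) !make.implicit metadata node to the conditional branch
  // of an explicit null check so the implicit-null-check transform may pick it up.
  llvm::LLVMContext &Ctx = NullCheckBr->getContext();
  NullCheckBr->setMetadata("make.implicit", llvm::MDNode::get(Ctx, {}));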
Reviewers: atrick, reames, pgavlin, JosephTremoulet
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10824
llvm-svn: 241116
It is mandatory to specify a comdat in order to receive comdat semantics
for a symbol. We were previously getting this wrong in -function-sections
mode; linker-weak symbols were being emitted in a selectany comdat. This
change causes such symbols to use a noduplicates comdat instead, fixing
the inconsistency.
Also correct an inaccuracy in the docs.
Differential Revision: http://reviews.llvm.org/D10828
llvm-svn: 241103
Summary:
Really check that %SP is not used in other places, instead of only checking
for exactly one non-debug use.
Patched by Xuetian Weng.
Test Plan:
Added @foo4 in test/CodeGen/NVPTX/local-stack-frame.ll to create a case in
which %SP appears twice.
Reviewers: jholewinski, jingyue
Reviewed By: jingyue
Subscribers: llvm-commits, sfantao, jholewinski
Differential Revision: http://reviews.llvm.org/D10844
llvm-svn: 241099
This commit implements serialization of the machine basic block successors. It
uses a YAML flow sequence of strings containing the MBB references.
The MBB references in those strings use the same syntax as the MBB machine
operands in the machine instruction strings.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10699
llvm-svn: 241093
Duplicating an FP register "as itself" is a bad idea, since it violates the
invariant that every FP register is mapped to at most one FPU stack slot.
Use the scratch FP register instead.
This fixes PR23957.
llvm-svn: 241069
A call to removeEmptySubranges() is necessary after every operation that
potentially removes all segments from a subregister range; this case in
the register coalescer was missing.
llvm-svn: 241027
This change unifies how LTOModule and the backend obtain linker flags
for globals: via a new TargetLoweringObjectFile member function named
emitLinkerFlagsForGlobal. A new function LTOModule::getLinkerOpts() returns
the list of linker flags as a single concatenated string.
This change affects the C libLTO API: the function lto_module_get_*deplibs now
exposes an empty list, and lto_module_get_*linkeropts exposes a single element
which combines the contents of all observed flags. libLTO should never have
tried to parse the linker flags; it is the linker's job to do so. Because
linkers will need to be able to parse flags in regular object files, it
makes little sense for libLTO to have a redundant mechanism for doing so.
The new API is compatible with the old one. It is valid for a user to specify
multiple linker flags in a single pragma directive like this:
#pragma comment(linker, "/defaultlib:foo /defaultlib:bar")
The previous implementation would not have exposed
either flag via lto_module_get_*deplibs (as the test in
TargetLoweringObjectFileCOFF::getDepLibFromLinkerOpt was case sensitive)
and would have exposed "/defaultlib:foo /defaultlib:bar" as a single flag via
lto_module_get_*linkeropts. This may have been a bug in the implementation,
but it does give us a chance to fix the interface.
Differential Revision: http://reviews.llvm.org/D10548
llvm-svn: 241010
When the store sequence being combined actually stores the base register, we
should not mark it as killed until the end.
rdar://21504262
llvm-svn: 241003
This is a new version of http://reviews.llvm.org/D10260.
It turned out that when you specify an integer register in inline asm on
x86 you get the register of the required type size back. That means that
X86TargetLowering::getRegForInlineAsmConstraint() has to accept any of
the integer registers and adapt its size to the given target size which
may be any 8/16/32/64 bit sized type. Surprisingly that means given a
constraint of "{ax}" and a type of MVT::F32 we need to return X86::EAX.
This change makes this fact explicit; the previous code seemed to work by
accident because it never returned an error once a register was found. On the
other hand, this rewrite allows us to actually return errors for invalid
situations like requesting an integer register for an i128 type.
Related to rdar://21042280
Differential Revision: http://reviews.llvm.org/D10813
llvm-svn: 241002
Summary: This patch fixes the cases of sext/zext constant folding in the DAG combiner where constants do not fit in 64 bits. The fix simply removes un…
Test Plan: New regression test included.
Reviewers: RKSimon
Reviewed By: RKSimon
Subscribers: RKSimon, llvm-commits
Differential Revision: http://reviews.llvm.org/D10607
llvm-svn: 240991
This commit implements serialization of the register mask machine
operands. This commit serializes only the call-preserved register
masks that are defined by a target; it doesn't serialize arbitrary
register masks.
This commit also extends the TargetRegisterInfo class and TableGen so that
the users of TRI can get the list of all the call preserved register masks and
their names.
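A hedged sketch of how the serializer can use the new hooks (the accessor names and surrounding variables below are assumptions based on the description, not quoted from the patch):

  // Map a machine operand's register mask back to its target-defined name.
  const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
  ArrayRef<const uint32_t *> Masks = TRI->getRegMasks();
  ArrayRef<const char *> Names = TRI->getRegMaskNames();
  for (unsigned I = 0, E = Masks.size(); I != E; ++I)
    if (Op.getRegMask() == Masks[I])
      OS << Names[I]; // e.g. the symbolic name of a call-preserved mask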
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10673
llvm-svn: 240966
Some of the permissible ARM -mfpu options, which are supported in GCC,
are currently not present in llvm/clang. This patch adds the options
'neon-fp16', 'vfpv3-fp16', 'vfpv3-d16-fp16', 'vfpv3xd' and 'vfpv3xd-fp16'.
These are related to half-precision and single-precision floating point.
Reviewers: rengolin, ranjeet.singh
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10645
llvm-svn: 240930
We had a hack in SDAGBuilder in place to work around this but now we
can avoid that. Call BuildExactSDIV from BuildSDIV so DAGCombiner can
perform this trick automatically.
The added check in DAGCombiner is necessary to prevent exact sdiv by pow2
from regressing as the target-specific pow2 lowering is not aware of
exact bits yet.
This is mostly covered by existing tests. One side effect is that we
get the better lowering for exact vector sdivs now too :)
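To make the trick concrete (a hedged, standalone illustration rather than code from the patch): an exact signed division by a constant needs no divide instruction, because the odd part of the divisor has a multiplicative inverse modulo 2^32.

  #include <cassert>
  #include <cstdint>

  // Exact division by 6 without a divide: 6 = 2 * 3, so shift out the power
  // of two and multiply by the inverse of 3 modulo 2^32 (0xAAAAAAAB, since
  // 3 * 0xAAAAAAAB wraps to 1 in 32 bits). Only valid when x is an exact
  // multiple of 6 -- which is what the 'exact' flag on sdiv promises.
  int32_t exact_div_by_6(int32_t x) {
    assert(x % 6 == 0 && "requires an exact multiple of 6");
    int32_t half = x >> 1; // exact: x is even (arithmetic shift assumed)
    return static_cast<int32_t>(static_cast<uint32_t>(half) * 0xAAAAAAABu);
  }

  int main() {
    assert(exact_div_by_6(42) == 7);
    assert(exact_div_by_6(-42) == -7);
    return 0;
  }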
llvm-svn: 240891
This commit serializes the global address machine operands.
This commit doesn't serialize the operand's offset and target
flags; it serializes only the global value reference.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10671
llvm-svn: 240851
Summary:
Some front ends make kernel pointers global already. In that case,
handlePointerParams does nothing.
Test Plan: more tests in lower-kernel-ptr-arg.ll
Reviewers: grosser
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10779
llvm-svn: 240849
Summary: We need to set MTYPE = 2 for VI shaders when targeting the HSA runtime.
Reviewers: arsenm
Differential Revision: http://reviews.llvm.org/D10777
llvm-svn: 240841
We support invoking a subset of llvm's intrinsics, but the verifier didn't account for this. We had previously added a special case to verify invokes of statepoints. By generalizing the code in terms of CallSite, we can verify invokes of other intrinsics as well. Interestingly, this found one test case which was invalid.
Note: I'm deliberately leaving the naming change from CI to CS to a follow up change. That will happen shortly, I just wanted to reduce the diff to make it clear what was happening with this one.
Differential Revision: http://reviews.llvm.org/D10118
llvm-svn: 240836
Summary:
This way the function symbol points to the start of amd_kernel_code_t
rather than the start of the function.
Reviewers: arsenm
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10705
llvm-svn: 240829
If pseudoToMCOpcode failed, we would return the original opcode, so operands
would be swapped, but the instruction would remain the same.
It resulted in LSHLREV a, b ---> LSHLREV b, a.
This fixes Glamor text rendering and
piglit/arb_sample_shading-builtin-gl-sample-mask on VI.
This is a candidate for stable branches.
v2: the test was simplified by Tom Stellard
llvm-svn: 240824
This patch corresponds to review:
http://reviews.llvm.org/D10638
This is the back end portion of patch
http://reviews.llvm.org/D10637
It just adds the code gen and intrinsic functions necessary to support that patch to the back end.
llvm-svn: 240820
This patch fixes the error in ARM.td which stated that the Cortex-R5
floating-point unit can do only single precision, when it can do double as well.
Reviewers: rengolin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10769
llvm-svn: 240799
This commit serializes machine basic block operands. The
machine basic block operands use the following syntax:
%bb.<id>[.<name>]
This commit also modifies the YAML representation for the
machine basic blocks - a new, required field 'id' is added
to the MBB YAML mapping.
The id is used to resolve MBB references to the
actual MBBs. While the name of the MBB can be
included in an MBB reference, this name isn't used to
resolve the reference, as it's possible that multiple
MBBs will reference the same BB and thus have the
same name. If the name is specified, the parser will verify
that it is equal to the name of the MBB with the specified id.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10608
llvm-svn: 240792
The Cortex-R4F TRM states that its FPU supports both single and double precision.
This patch corrects the information in the ARM.td file and the corresponding test.
Reviewers: rengolin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10763
llvm-svn: 240776
This patch adds support for the vector merge even word and vector merge odd word
instructions introduced in POWER8.
Phabricator review: http://reviews.llvm.org/D10704
llvm-svn: 240650
We don't always have FMA, for example when using 'clang -mavx512f'
without an explicit CPU.
Also check for an explicit +avx512f instead of CPUs in a couple
related tests.
llvm-svn: 240616
Summary:
This change turns on the emission of the
__LLVM_Stackmaps section when generating COFF binaries.
Test Plan:
Added a scenario to the test case
test/CodeGen/X86/statepoint-stackmap-format.ll.
Code Review:
http://reviews.llvm.org/D10680
llvm-svn: 240613
Summary:
This patch first changes the register that holds the local address of the stack
frame to %SPL. Then the new NVPTXPeephole pass will try to scan for the
following pattern
%vreg0<def> = LEA_ADDRi64 <fi#0>, 4
%vreg1<def> = cvta_to_local %vreg0
and transform it into
%vreg1<def> = LEA_ADDRi64 %VRFrameLocal, 4
Patched by Xuetian Weng
Test Plan: test/CodeGen/NVPTX/local-stack-frame.ll
Reviewers: jholewinski, jingyue
Reviewed By: jingyue
Subscribers: eliben, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10549
llvm-svn: 240587
This commit serializes the 3 scalar boolean attributes from the
MachineRegisterInfo class: IsSSA, TracksRegLiveness, and
TracksSubRegLiveness. These attributes are serialized as part
of the machine function YAML mapping.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10618
llvm-svn: 240579
This commit serializes the null register machine operands.
It uses the '_' keyword to represent them, but the parser
also allows the '%noreg' named register syntax.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10580
llvm-svn: 240558
Summary:
This patch fixes PR23405 (https://llvm.org/bugs/show_bug.cgi?id=23405).
During node unscheduling, an entry in LiveRegGens can be replaced with a new value. That corrupts the live-register tracking, and the LiveReg* structure is not cleared as it should be during unscheduling. The problematic condition that forces the Gen replacement is `I->getSUnit()->getHeight() < LiveRegGens[I->getReg()]->getHeight()`. This condition should be checked only if LiveRegGen was set during the current node's unscheduling.
Test Plan: Regression test included.
Reviewers: hfinkel, atrick
Reviewed By: atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9993
llvm-svn: 240538
We used to erroneously match:
(v4i64 shuffle (v2i64 load), <0,0,0,0>)
Whereas vbroadcasti128 is more like:
(v4i64 shuffle (v2i64 load), <0,1,0,1>)
This problem doesn't exist for vbroadcastf128, which kept matching
the intrinsic after r231182. We should perhaps re-introduce the
intrinsic here as well, but that's a separate issue still being
discussed.
While there, add some proper vbroadcastf128 tests. We don't currently
match those, like for loading vbroadcastsd/ss on AVX (the reg-reg
broadcasts were added in AVX2).
Fixes PR23886.
llvm-svn: 240488
This commit translates the source locations for MIParser diagnostics from
the locations in the machine instruction string to the locations in the
MIR file.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10574
llvm-svn: 240474
This commit introduces functionality that's used to serialize machine operands.
Only the physical register operands are serialized by this commit.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10525
llvm-svn: 240425
This causes errors like:
ld: error: blah.o: requires dynamic R_X86_64_PC32 reloc against '' which
may overflow at runtime; recompile with -fPIC
blah.cc:function f(): error: undefined reference to ''
blah.o:g(): error: undefined reference to ''
I have not yet come up with an appropriate reproduction.
llvm-svn: 240394
Currently ( D10321, http://reviews.llvm.org/rL239486 ), we can use the machine combiner pass
to reassociate the following sequence to reduce the critical path:
A = ? op ?
B = A op X
C = B op Y
-->
A = ? op ?
B = X op Y
C = A op B
'op' is currently limited to x86 AVX scalar FP adds (with fast-math on), but in theory, it could
be any associative math/logic op (see TODO in code comment).
This patch generalizes the pattern match to ignore the instruction that defines 'A'. So instead of
a sequence of 3 adds, we now only need to find 2 dependent adds and decide if it's worth
reassociating them.
This generalization has a compile-time cost because we can now match more instruction sequences
and we rely more heavily on the machine combiner to discard sequences where reassociation doesn't
improve the critical path.
For example, in the new test case:
A = M div N
B = A add X
C = B add Y
We'll match 2 reassociation patterns, but this transform doesn't reduce the critical path:
A = M div N
B = A add Y
C = B add X
We need the combiner to reject that pattern but select this:
A = M div N
B = X add Y
C = B add A
Differential Revision: http://reviews.llvm.org/D10460
llvm-svn: 240361
The _Int instructions are special, in that they operate on the full
VR128 instead of FR32. The load folding then looks at MOVSS, at the
user, and bails out when it sees a size mismatch.
What we really know is that the rm_Int instructions don't load the
higher lanes, so folding is fine.
This happens for the straightforward intrinsic code, e.g.:
_mm_add_ss(a, _mm_load_ss(p));
Fixes PR23349.
Differential Revision: http://reviews.llvm.org/D10554
llvm-svn: 240326
This commit adds a function that tokenizes the string containing
the machine instruction. This commit also adds a struct called
'MIToken' which is used to represent the lexer's tokens.
Reviewers: Sean Silva
Differential Revision: http://reviews.llvm.org/D10521
llvm-svn: 240323
D8982 (checked in at http://reviews.llvm.org/rL239001) added command-line
options to allow reciprocal estimate instructions to be used in place of
divisions and square roots.
This patch changes the default settings for x86 targets to allow that recip
codegen (except for scalar division because that breaks too much code) when
using -ffast-math or its equivalent.
This matches GCC behavior for this kind of codegen.
Differential Revision: http://reviews.llvm.org/D10396
llvm-svn: 240310
Summary:
The parser is exercised by llvm-objdump using -print-fault-maps. As is
probably obvious, the code itself was "heavily inspired" by
http://reviews.llvm.org/D10434.
Reviewers: reames, atrick, JosephTremoulet
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10491
llvm-svn: 240304
Now that pr23900 is fixed, we can bring it back with no changes.
Original message:
Make all temporary symbols unnamed.
What this does is make all symbols that would otherwise start with a .L
(or L on MachO) unnamed.
Some of these symbols still show up in the symbol table, but we can just
make them unnamed.
In order to make sure we produce identical results when going through assembly,
all .L (not just the compiler produced ones), are now unnamed.
Running llc on llvm-as.opt.bc, the peak memory usage goes from 208.24MB to
205.57MB.
llvm-svn: 240302
This commit implements initial machine instruction serialization. It
serializes machine instruction names. The instructions are represented
using a YAML sequence of string literals and are a part of machine
basic block YAML mapping.
This commit introduces a class called 'MIParser' which will be used to
parse the machine instructions and operands.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10481
llvm-svn: 240295
Summary: The code responsible for shl folding in the DAGCombiner was incorrectly assuming that all constants fit in 64 bits. This patch simply changes the way the values are compared.
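A hedged sketch of the kind of comparison change involved (illustrative helper, not the patch's code):

  #include "llvm/ADT/APInt.h"

  // Compare the shift amount as an APInt instead of calling getZExtValue(),
  // which is only valid when the constant's active bits fit in 64 bits.
  static bool shiftIsOutOfRange(const llvm::APInt &ShAmt, unsigned BitWidth) {
    return ShAmt.uge(BitWidth); // safe even for constants wider than 64 bits
  }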
Test Plan: A regression test included.
Reviewers: andreadb
Reviewed By: andreadb
Subscribers: andreadb, test, llvm-commits
Differential Revision: http://reviews.llvm.org/D10602
llvm-svn: 240291
This allows more call sequences to use pushes instead of movs when optimizing for size.
In particular, calling conventions that pass some parameters in registers (e.g. thiscall) are now supported.
Differential Revision: http://reviews.llvm.org/D10500
llvm-svn: 240257
Sparse switches with profile info are lowered as weight-balanced BSTs. For
example, if the node weights are {1,1,1,1,1,1000}, the right-most node would
end up in a tree by itself, bringing it closer to the top.
However, a leaf in this BST can contain up to 3 cases, and having a single
case in a leaf node as in the example means the tree might become
unnecessarily high.
This patch adds a heuristic to the pivot selection algorithm that moves more
cases into leaf nodes unless that would lower their rank. It still doesn't
yield the optimal tree in every case, but I believe it's conservatively correct.
llvm-svn: 240224
This commit implements the initial serialization of machine basic blocks in a
machine function. Only the simple, scalar MBB attributes are serialized. The
reference to LLVM IR's basic block is preserved when that basic block has a name.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10465
llvm-svn: 240145
What this does is make all symbols that would otherwise start with a .L
(or L on MachO) unnamed.
Some of these symbols still show up in the symbol table, but we can just
make them unnamed.
In order to make sure we produce identical results when going through assembly,
all .L (not just the compiler produced ones), are now unnamed.
Running llc on llvm-as.opt.bc, the peak memory usage goes from 208.24MB to
205.57MB.
llvm-svn: 240130
Currently, we canonicalize shuffles that produce a result larger than
their operands with:
shuffle(concat(v1, undef), concat(v2, undef))
->
shuffle(concat(v1, v2), undef)
because we can access quad vectors (see PerformVECTOR_SHUFFLECombine).
This is useful in the general case, but there are special cases where
native shuffles produce larger results: the two-result ops.
We can look through the concat when lowering them:
shuffle(concat(v1, v2), undef)
->
concat(VZIP(v1, v2):0, :1)
This lets us generate the native shuffles instead of scalarizing to
dozens of VMOVs.
Differential Revision: http://reviews.llvm.org/D10424
llvm-svn: 240118
The test 'llvm/test/CodeGen/MIR/machine-function.mir' was disabled on
x86 msc18 in r239805 as it failed. My commit r240054 has fixed the
problem, so this commit reverts the commit that disabled the test as
it should pass now.
llvm-svn: 240074
In a relocation, the target can take 3 basic forms:
* A r_value in scattered relocations.
* A symbol in external relocations.
* A section in non-external relocations.
Have the dump reflect that. With this change we go from
CHECK-NEXT: Extern: 0
CHECK-NEXT: Type: X86_64_RELOC_SUBTRACTOR (5)
CHECK-NEXT: Symbol: 0x2
CHECK-NEXT: Scattered: 0
To just
// CHECK-NEXT: Type: X86_64_RELOC_SUBTRACTOR (5)
// CHECK-NEXT: Section: __data (2)
Since the relocation is with a section, we print the section name and don't
need to say that it is not scattered or external.
Someone motivated can add further special cases for things like
ARM64_RELOC_ADDEND and ARM_RELOC_PAIR.
llvm-svn: 240073
To save compile time, the analysis to find dense case-clusters in switches is
not done at -O0. However, when the whole switch is dense enough, it is easy to
turn it into a jump table, resulting in much faster code with no extra effort.
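For illustration (a hedged example, not taken from the patch), a switch like this is dense enough to become a jump table even at -O0:

  // Case values 0..5 cover a contiguous range with no gaps, so the switch can
  // be lowered to a table indexed by 'kind' instead of a chain of compares.
  int cost(int kind) {
    switch (kind) {
    case 0: return 1;
    case 1: return 3;
    case 2: return 2;
    case 3: return 7;
    case 4: return 5;
    case 5: return 4;
    default: return 0;
    }
  }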
llvm-svn: 240071
1. Used update_llc_test_checks.py to tighten checks
2. Fixed triple (nothing Darwin-specific here)
3. Replaced CPU specifiers with attributes
4. Fixed comments
5. Removed IvyBridge run because it did not add any coverage
llvm-svn: 240058
They had been getting emitted as a section + offset reference, which
is bogus since the value needs to be the offset within the GOT, not
the actual address of the symbol's object.
Differential Revision: http://reviews.llvm.org/D10441
llvm-svn: 240020
- zext the value to the alloc size first, then check if the value repeats
  with the zero padding included. If so, we can still emit a .space
- Do the checking with APInt::isSplat(8), which handles non-pow2 types
  (see the sketch after this list)
- Also handle large constants (bit width > 64)
- In a ConstantArray all elements have the same type, so it's sufficient
  to check the first constant recursively and then just check that all
  following constants are the same by pointer comparison
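A hedged sketch of the splat check described above (the helper name and surrounding context are assumptions, not the patch's code):

  #include "llvm/ADT/APInt.h"
  #include <cstdint>

  // Return true if Val, zero-extended to the in-memory allocation size, is a
  // repetition of a single byte; that byte can then be emitted with .space.
  static bool isRepeatedByte(llvm::APInt Val, unsigned AllocBits,
                             uint8_t &Byte) {
    if (Val.getBitWidth() < AllocBits)
      Val = Val.zext(AllocBits);             // include the zero padding
    if (Val.getBitWidth() % 8 != 0 || !Val.isSplat(8))
      return false;
    Byte = static_cast<uint8_t>(Val.getRawData()[0]); // low byte is the pattern
    return true;
  }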
llvm-svn: 239977
Added explicit sign extension for v4i16/v8i16 to v4i32/v8i32 before conversion to floats. Matches existing support for v4i8/v8i8.
Follow up to D10433
llvm-svn: 239966
Summary:
This is done by first adding two additional instructions that convert the
address returned by the alloca to local and then convert it back to generic.
Then all uses of the alloca instruction are replaced with the converted
generic address. After that we can rely on the NVPTXFavorNonGenericAddrSpace
pass to combine the generic address cast and the corresponding Load, Store,
Bitcast, and GEP instructions.
Patched by Xuetian Weng (xweng@google.com).
Test Plan: test/CodeGen/NVPTX/lower-alloca.ll
Reviewers: jholewinski, jingyue
Reviewed By: jingyue
Subscribers: meheff, broune, eliben, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10483
llvm-svn: 239964
The personality routine currently lives in the LandingPadInst.
This isn't desirable because:
- All LandingPadInsts in the same function must have the same
personality routine. This means that each LandingPadInst beyond the
first has an operand which produces no additional information.
- There is ongoing work to introduce EH IR constructs other than
LandingPadInst. Moving the personality routine off of any one
particular Instruction and onto the parent function seems a lot better
than having N different places a personality function can sneak onto an
exceptional function.
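A hedged sketch of what using the function-level personality might look like after this change (illustrative helper; the surrounding names are assumptions):

  #include "llvm/IR/Function.h"

  // With the personality routine stored on the Function, a frontend sets it
  // once per function instead of repeating it on every LandingPadInst.
  static void setPersonalityOnce(llvm::Function &F,
                                 llvm::Constant *PersonalityRoutine) {
    if (!F.hasPersonalityFn())
      F.setPersonalityFn(PersonalityRoutine);
  }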
Differential Revision: http://reviews.llvm.org/D10429
llvm-svn: 239940
It's been used before to avoid infinite loops caused by separate CGP
optimizations undoing one another. We found one more such issue
caused by r238054. To avoid it, generalize the "InsertedTruncs"
set to any inst, and use it to avoid touching those again.
llvm-svn: 239938
Change the builtin function name and signature (add a third parameter, the rounding mode).
Added tests for the intrinsics.
Differential Revision: http://reviews.llvm.org/D10473
llvm-svn: 239888
The patch triggers a miscompile on SPEC 2006 403.gcc with the (ref)
200.i and scilab.i inputs. I opened PR23866 to track analysis of this.
This reverts commit r238793.
llvm-svn: 239880
This patch enables support for the conversion of v2i32 to v2f64 to use the CVTDQ2PD xmm instruction and stay on the SSE unit instead of scalarizing, sign extending to i64 and using CVTSI2SDQ scalar conversions.
Differential Revision: http://reviews.llvm.org/D10433
llvm-svn: 239855
The original change broke clang-side tests. I will be submitting those momentarily. This change includes post-commit feedback on the original change from Pete Cooper.
Original Submission comments:
If a parameter to a function is known non-null, use the existing parameter attributes to record that fact at the call site. This has no optimization benefit by itself - that I know of - but is an enabling change for http://reviews.llvm.org/D9129.
Differential Revision: http://reviews.llvm.org/D9132
llvm-svn: 239849
This commit reports an error when a machine function from a MIR file that contains
LLVM IR can't find a function with the same name in the loaded LLVM IR module.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10468
llvm-svn: 239831
The mftb instruction was incorrectly marked as deprecated in the PPC
Backend. Instead, it should not be treated as deprecated, but rather be
implemented using the mfspr instruction. A similar patch was put into GCC last
year. Details can be found at:
https://sourceware.org/ml/binutils/2014-11/msg00383.html.
This change will replace instances of the mftb instruction with the mfspr
instruction for all CPUs except 601 and pwr3. This will also be the default
behaviour.
Additional details can be found in:
https://llvm.org/bugs/show_bug.cgi?id=23680
Phabricator review: http://reviews.llvm.org/D10419
llvm-svn: 239827
Reapply r239539. Don't assume the collected number of
stores matches the vector size. Just take the first N
stores to fill the vector.
llvm-svn: 239825
When we multiply two 64-bit vectors, we extract lower and upper part and use the PMULUDQ instruction.
When one of the operands is a constant, the upper part may be zero, we know this at compile time.
Example: %a = mul <4 x i64> %b, <4 x i64> < i64 5, i64 5, i64 5, i64 5>.
I check the value of the upper part and avoid redundant "multiply", "shift" and "add" operations.
llvm-svn: 239802
These are really immediate DUPs, and suffer from the same problem
with long instructions with a high/2 variant (e.g. smull).
By extending a MOVI (or DUP, before this patch), we can avoid an ext
on the other operand of the long instruction, e.g. turning:
ext.16b v0, v0, v0, #8
movi.4h v1, #0x53
smull.4s v0, v0, v1
into:
movi.8h v1, #0x53
smull2.4s v0, v0, v1
While there, add a now-necessary combine to fold (VT NVCAST (VT x)).
llvm-svn: 239799
If a parameter to a function is known non-null, use the existing parameter attributes to record that fact at the call site. This has no optimization benefit by itself - that I know of - but is an enabling change for http://reviews.llvm.org/D9129.
Differential Revision: http://reviews.llvm.org/D9132
llvm-svn: 239795
This commit serializes the simple, scalar attributes from the
'MachineFunction' class.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10449
llvm-svn: 239790
This commit creates a dummy LLVM IR function with one basic block and an unreachable
instruction for each parsed machine function when the MIR file doesn't have LLVM IR.
This change is required as the machine function analysis pass creates machine
functions only for the functions that are defined in the current LLVM module.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10135
llvm-svn: 239778
This commit reports an error when the MIR parser encounters a machine
function whose name is the same as the name of a different
machine function.
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10130
llvm-svn: 239774
This commit connects the machine function analysis pass (which creates machine
functions) to the MIR parser, which will initialize the machine functions
with the state from the MIR file and reconstruct the machine IR.
This commit introduces a new interface called 'MachineFunctionInitializer',
which can be used to provide custom initialization for the machine functions.
This commit also introduces a new diagnostic class called
'DiagnosticInfoMIRParser' which is used for MIR parsing errors.
This commit modifies the default diagnostic handling in LLVMContext - now the
diagnostics are printed directly to llvm::errs() so that the MIR parsing
errors can be printed with colours.
Reviewers: Justin Bogner
Differential Revision: http://reviews.llvm.org/D9928
llvm-svn: 239753
LLVM targeting aarch64 doesn't correctly produce aligned accesses for non-aligned
data at -O0/fast-isel (-mno-unaligned-access).
The root cause seems to be in fast-isel not producing unaligned access correctly
for -mno-unaligned-access.
The patch just aborts fast-isel for loads and stores when -mno-unaligned-access is
present.
The regression test is updated to check this new test case (-mno-unaligned-access
together with fast-isel).
Differential Revision: http://reviews.llvm.org/D10360
llvm-svn: 239732
Re-commit after adding "-aarch64-neon-syntax=generic" to fix the failure on OS X.
This patch was firstly committed in r239514, then reverted in r239544 because of a syntax incompatible failure on OS X.
llvm-svn: 239711
- Add glc, slc, and tfe operands to flat instructions
- Add missing flat instructions
- Fix the encoding of flat_load_dwordx3 and flat_store_dwordx3.
llvm-svn: 239637
ARMTargetParser::getFPUFeatures should disable fp16 whenever it
disables vfp4, as otherwise something like -mcpu=cortex-a7 -mfpu=none
leaves us with fp16 enabled (though the only effect that will have is
a wrong build attribute).
Differential Revision: http://reviews.llvm.org/D10397
llvm-svn: 239599
We were putting them in the filter field, which is correct for 64-bit
but wrong for 32-bit.
Also switch the order of scope table entry emission so outermost entries
are emitted first, and fix an obvious state assignment bug.
llvm-svn: 239574
This intrinsic is like framerecover plus a load. It recovers the EH
registration stack allocation from the parent frame and loads the
exception information field out of it, giving back a pointer to an
EXCEPTION_POINTERS struct. It's designed for clang to use in SEH filter
expressions instead of accessing the EXCEPTION_POINTERS parameter that
is available on x64.
This required a minor change to MC to allow defining a label variable to
another absolute framerecover label variable.
llvm-svn: 239567
We cannot prepend __imp_ in the IR mangler because a function reference may
be emitted unmangled in a constant initializer. The linker is expected to
resolve such references to thunks. This is covered by the new test case.
Strictly speaking we ought to emit two undefined symbols, one with __imp_ and
one without, as we cannot know which symbol the final object file will refer
to. However, this would require rather intrusive changes to IRObjectFile,
and lld works fine without it for now.
This reimplements r239437, which was reverted in r239502.
Differential Revision: http://reviews.llvm.org/D10400
llvm-svn: 239560
Revert "[AArch64] Match interleaved memory accesses into ldN/stN instructions."
Revert "Fixing MSVC 2013 build error."
The test/CodeGen/AArch64/aarch64-interleaved-accesses.ll test was failing on OS X.
llvm-svn: 239544
Now actually stores the non-zero constant instead of 0.
I somehow forgot to include this part of r238108.
The test change was just an independent instruction order swap,
so just add another check line to satisfy CHECK-NEXT.
llvm-svn: 239539
This patch ensures that SHL/SRL/SRA shifts for i8 and i16 vectors avoid scalarization. It builds on the existing i8 SHL vectorized implementation of moving the shift bits up to the sign bit position and separating the 4, 2 & 1 bit shifts with several improvements:
1 - SSE41 targets can use (v)pblendvb directly with the sign bit instead of performing a comparison to feed into a VSELECT node.
2 - pre-SSE41 targets were masking + comparing with an 0x80 constant - we avoid this by using the fact that a set sign bit means a negative integer which can be compared against zero to then feed into VSELECT, avoiding the need for a constant mask (zero generation is much cheaper).
3 - SRA i8 needs to be unpacked to the upper byte of a i16 so that the i16 psraw instruction can be correctly used for sign extension - we have to do more work than for SHL/SRL but perf tests indicate that this is still beneficial.
The i16 implementation is similar but simpler than for i8 - we have to do 8, 4, 2 & 1 bit shifts but less shift masking is involved. SSE41 use of (v)pblendvb requires that the i16 shift amount is splatted to both bytes however.
Tested on SSE2, SSE41 and AVX machines.
Differential Revision: http://reviews.llvm.org/D9474
llvm-svn: 239509
This patch corresponds to review:
http://reviews.llvm.org/D10096
This is the back end portion of the patch related to D10095.
The patch adds the instructions and back end intrinsics for:
vbpermq
vgbbd
llvm-svn: 239505
This is a reimplementation of D9780 at the machine instruction level rather than the DAG.
Use the MachineCombiner pass to reassociate scalar single-precision AVX additions (just a
starting point; see the TODO comments) to increase ILP when it's safe to do so.
The code is closely based on the existing MachineCombiner optimization that is implemented
for AArch64.
This patch should not cause the kind of spilling tragedy that led to the reversion of r236031.
Differential Revision: http://reviews.llvm.org/D10321
llvm-svn: 239486
During statepoint lowering we can sometimes avoid spilling of the value if we know that it was already spilled for previous statepoint.
We were doing this by checking whether the incoming statepoint value was lowered into a load from a stack slot. This worked only within the boundaries of one basic block.
But instead of looking at the lowered node, we can look directly at the LLVM IR value, and if it was a gc.relocate (or some simple modification of it), look up the stack slot for its derived pointer and reuse that stack slot. This allows us to look across basic block boundaries.
Differential Revision: http://reviews.llvm.org/D10251
llvm-svn: 239472
Use a "safeseh" string attribute to do this. You would think we chould
just accumulate the set of personalities like we do on dwarf, but this
fails to account for the LSDA-loading thunks we use for
__CxxFrameHandler3. Each of those needs to make it into .sxdata as well.
The string attribute seemed like the most straightforward approach.
llvm-svn: 239448
Summary:
We used to assume V->RAUW only modifies the operand list of V's user.
However, if V and V's user are Constants, RAUW may replace and invalidate V's
user entirely.
This patch fixes the above issue by letting the caller replace the
operand instead of calling RAUW on Constants.
Test Plan: @nested_const_expr and @rauw in access-non-generic.ll
Reviewers: broune, jholewinski
Reviewed By: broune, jholewinski
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10345
llvm-svn: 239435
This gets all the handler info through to the asm printer and we can
look at the .xdata tables now. I've convinced one small catch-all test
case to work, but other than that, it would be a stretch to say this is
functional.
The state numbering algorithm avoids doing any scope reconstruction as
we do for C++ to simplify the implementation.
llvm-svn: 239433
Store instructions do not modify register values and therefore it's safe
to form a store pair even if the source register has been read in between
the two store instructions.
Previously, the read of w1 (see below) prevented the formation of a stp.
str w0, [x2]
ldr w8, [x2, #8]
add w0, w8, w1
str w1, [x2, #4]
ret
We now generate the following code.
stp w0, w1, [x2]
ldr w8, [x2, #8]
add w0, w8, w1
ret
All correctness tests with -Ofast on A57 with Spec200x and EEMBC pass.
Performance results for SPEC2K were within noise.
llvm-svn: 239432
This removes the DisableTailCalls target option and the code in
TargetMachine::resetTargetOptions that was resetting it.
Remove the uses of DisableTailCalls in subclasses of TargetLowering and use
the value of the function attribute "disable-tail-calls" instead. Also,
unconditionally add the TailCallElim pass to the pipeline and check the
function attribute at the start of runOnFunction to disable the pass on a
per-function basis.
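A hedged sketch of the per-function check (illustrative helper, not the patch's code):

  #include "llvm/IR/Function.h"

  // Honour the "disable-tail-calls" function attribute at the start of
  // runOnFunction; returns true if the pass should skip this function.
  static bool tailCallsDisabled(const llvm::Function &F) {
    return F.getFnAttribute("disable-tail-calls").getValueAsString() == "true";
  }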
This is part of the work to remove TargetMachine::resetTargetOptions, and since
DisableTailCalls was the last non-fast-math option that was being reset in that
function, we should be able to remove the function entirely after the work to
propagate IR-level fast-math flags to DAG nodes is completed.
Out-of-tree users should remove the uses of DisableTailCalls and make changes
to attach attribute "disable-tail-calls"="true" or "false" to the functions in
the IR.
rdar://problem/13752163
Differential Revision: http://reviews.llvm.org/D10099
llvm-svn: 239427
The PTX backend emits certain values as an array of bytes. The generation
of these byte arrays was expecting the host to be little endian, which
prevented big-endian hosts from being used in the generation of the PTX
code. This patch fixes the problem by changing the way the bytes are
extracted so that it works for both little- and big-endian hosts.
llvm-svn: 239412
Summary:
This cleans up most allocas NVPTXLowerKernelArgs emits for byval
parameters.
Test Plan: makes bug21465.ll stronger to verify that no redundant local load/store is generated.
Reviewers: eliben, jholewinski
Reviewed By: eliben, jholewinski
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10322
llvm-svn: 239368
This patch makes it possible to enable or disable ARM's backend passes
on a per-function basis.
Previously some of the passes were conditionally added to ARM's pass pipeline
based on the target machine's subtarget. This patch adds those passes
unconditionally and executes them conditionally based on the predicate functor
passed to the pass constructors. This enables running different sets of
passes for different functions in the module.
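A hedged sketch of the predicate-functor idea (the class and names below are illustrative assumptions, not the patch's code):

  #include "llvm/IR/Function.h"
  #include <functional>
  #include <utility>

  // The pass is always added to the pipeline, but each function can opt out
  // through a predicate supplied at construction time.
  class ExamplePass {
    std::function<bool(const llvm::Function &)> ShouldRun;
  public:
    explicit ExamplePass(std::function<bool(const llvm::Function &)> Pred)
        : ShouldRun(std::move(Pred)) {}
    bool runOnFunction(llvm::Function &F) {
      if (!ShouldRun(F))
        return false; // skip this function; the pass stays in the pipeline
      // ... actual transformation would go here ...
      return true;
    }
  };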
rdar://problem/20542263
Differential Revision: http://reviews.llvm.org/D8717
llvm-svn: 239325
While we have some code to transform a specification like {ax} into
{eax}/{rax} if the operand type isn't 16 bit, we should reject cases
where there is no sane way to do this, like the i128 type in the
example.
Related to rdar://21042280
Differential Revision: http://reviews.llvm.org/D10260
llvm-svn: 239309
Implemented DAG lowering for all these forms.
Added tests for DAG lowering and encoding.
Differential Revision: http://reviews.llvm.org/D10310
llvm-svn: 239300