Commit Graph

20094 Commits

Author SHA1 Message Date
Craig Topper c0d0e6b198 [X86] Recognize CVTPH2PS from STRICT_FP_EXTEND
This should avoid scalarizing the cvtph2ps intrinsics with D75162.

Differential Revision: https://reviews.llvm.org/D75304
2020-02-28 10:19:57 -08:00
Sanjay Patel 90fd859f51 [x86] use instruction-level fast-math-flags to drive MachineCombiner
The code changes here are hopefully straightforward:

1. Use MachineInstr flags to decide if FP ops can be reassociated
   (use both "reassoc" and "nsz" to be consistent with IR transforms;
   we probably don't need "nsz", but that's a safer interpretation of
   the FMF).
2. Check that both nodes allow reassociation to change instructions.
   This is a stronger requirement than we've usually implemented in
   IR/DAG, but this is needed to solve the motivating bug (see below),
   and it seems unlikely to impede optimization at this late stage.
3. Intersect/propagate MachineIR flags to enable further reassociation
   in MachineCombiner (sketched below).
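
A minimal, self-contained sketch of checks (2) and (3), using stub types
in place of the real MachineInstr API (the flag values and names are
illustrative):

    #include <cstdint>
    #include <cstdio>

    enum FMFlag : uint16_t { FmReassoc = 1 << 0, FmNsz = 1 << 1 };

    struct MIStub {
      uint16_t Flags;
      bool hasFlag(FMFlag F) const { return Flags & F; }
    };

    // Point 2: both instructions must carry "reassoc" and "nsz" before
    // we allow MachineCombiner to reassociate them.
    static bool canReassociate(const MIStub &A, const MIStub &B) {
      return A.hasFlag(FmReassoc) && A.hasFlag(FmNsz) &&
             B.hasFlag(FmReassoc) && B.hasFlag(FmNsz);
    }

    // Point 3: propagate only the flags common to both sources so that
    // any instruction we create can still be reassociated later.
    static uint16_t intersectFlags(const MIStub &A, const MIStub &B) {
      return A.Flags & B.Flags;
    }

    int main() {
      MIStub Add{FmReassoc | FmNsz}, Mul{FmReassoc | FmNsz};
      if (canReassociate(Add, Mul))
        printf("combined flags = 0x%x\n",
               (unsigned)intersectFlags(Add, Mul));
    }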

We managed to make MachineCombiner flexible enough that no changes are
needed to that pass itself. So this patch should only affect x86
(assuming no other targets have implemented the hooks using MachineIR
flags yet).

The motivating example in PR43609 is another case of fast-math transforms
interacting badly with special FP ops created during lowering:
https://bugs.llvm.org/show_bug.cgi?id=43609
The special fadd ops used for converting int to FP assume that they will
not be altered, so those are created without FMF.

However, the MachineCombiner pass was being enabled for FP ops using the
global/function-level TargetOptions flag for "UnsafeFPMath". We managed to
plumb instruction/node-level FMF all the way down to MachineIR sometime in
the last 1-2 years, though, so we can do better now.

The test diffs require some explanation:

1. llvm/test/CodeGen/X86/fmf-flags.ll - no target option for unsafe math was
   specified here, so MachineCombiner kicks in where it did not previously;
   to make it behave consistently, we need to specify a CPU schedule model,
   so use the default model, and there are no code diffs.
2. llvm/test/CodeGen/X86/machine-combiner.ll - replace the target option for
   unsafe math with the equivalent IR-level flags, and there are no code diffs;
   we can't remove the NaN/nsz options because those are still used to drive
   x86 fmin/fmax codegen (special SDAG opcodes).
3. llvm/test/CodeGen/X86/pow.ll - similar to #1
4. llvm/test/CodeGen/X86/sqrt-fastmath.ll - similar to #1, but MachineCombiner
   does some reassociation of the estimate sequence ops; presumably these are
   perf wins based on latency/throughput (and we get some reduction of move
   instructions too); I'm not sure how it affects numerical accuracy, but the
   test reflects reality better now because we would expect MachineCombiner to
   be enabled if the IR was generated via something like "-ffast-math" with clang.
5. llvm/test/CodeGen/X86/vec_int_to_fp.ll - this is the test added to model PR43609;
   the fadds are not reassociated now, so we should get the expected results.
6. llvm/test/CodeGen/X86/vector-reduce-fadd-fast.ll - similar to #1
7. llvm/test/CodeGen/X86/vector-reduce-fmul-fast.ll - similar to #1

Differential Revision: https://reviews.llvm.org/D74851
2020-02-27 15:19:37 -05:00
Simon Pilgrim 168a44a70e [CostModel][X86] Improve extract/insert element costs (PR43605)
This tries to improve the accuracy of extract/insert element costs by accounting for subvector extraction/insertion for >128-bit vectors and the shuffling of elements to/from the 0th index.

It also adds INSERTPS for f32 types and PINSR/PEXTR costs for integer types (at the moment we assume the same cost as MOVD/MOVQ - which isn't always true).
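
As a rough, self-contained illustration of the costing idea (the constants
are hypothetical, not the real per-subtarget X86 cost tables):

    #include <cstdio>

    // Extracting element Idx from a >128-bit vector is modeled as: pull
    // out the 128-bit subvector holding it, shuffle the element down to
    // index 0 of that lane if needed, then do the movd/movq/pextr itself.
    static int extractElementCost(int VectorBits, int EltBits, int Idx) {
      const int LaneBits = 128;
      int EltsPerLane = LaneBits / EltBits;
      int Cost = 1;                      // the extract instruction
      if (VectorBits > LaneBits && Idx >= EltsPerLane)
        Cost += 1;                       // subvector extraction first
      if (Idx % EltsPerLane != 0)
        Cost += 1;                       // shuffle down to lane index 0
      return Cost;
    }

    int main() {
      // Element 5 of a 256-bit v8i32: subvector extract + shuffle + pextr.
      printf("cost = %d\n", extractElementCost(256, 32, 5));
    }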

Differential Revision: https://reviews.llvm.org/D74976
2020-02-27 15:54:13 +00:00
Simon Pilgrim f90cc633de Fix cppcheck definition/declaration arg mismatch warnings. NFCI. 2020-02-27 14:35:20 +00:00
Simon Pilgrim fe6bcfaf3b [X86] Use Subtarget.useSoftFloat() in X86TargetLowering constructor
Avoid use of X86TargetLowering::useSoftFloat() in the constructor as it's a virtual function.
2020-02-27 14:35:20 +00:00
Simon Pilgrim e61e7f0794 Fix shadow variable warning. NFC. 2020-02-27 14:23:05 +00:00
Simon Pilgrim dc7ac563ac Fix shadow variable warnings. NFC. 2020-02-27 14:21:30 +00:00
Simon Pilgrim efe2f59ec4 [X86] LowerMSCATTER/MGATHER - reduce scope of MaskVT. NFCI.
Fixes cppcheck warning.
2020-02-27 14:20:44 +00:00
Simon Pilgrim fabe52a741 Fix uninitialized variable warning. NFC. 2020-02-27 14:20:43 +00:00
Simon Pilgrim 6bdd63dc28 [X86] createVariablePermute - handle case where recursive createVariablePermute call fails
Account for the case where a recursive createVariablePermute call with a wider vector type fails.

Original test case from @craig.topper (Craig Topper)
2020-02-27 13:52:31 +00:00
Craig Topper 82a21c1655 [X86] Add proper MachinePointerInfo to stack store created in LowerWin64_i128OP. 2020-02-26 16:55:24 -08:00
Craig Topper 870363a22d [X86] Explicitly pass Destination VT and debug location to BuildFILD. NFC
We'd already passed almost everything else. Might as well pass
these two things and stop passing Op.
2020-02-26 16:26:46 -08:00
Craig Topper 15e2831fcd [X86] Explicitly pass Pointer, MachinePointerInfo and Alignment to BuildFILD.
Previously this code was called in two ways: either a FrameIndexSDNode
was passed in StackSlot, or a load node was passed in the argument
called StackSlot. The two cases were distinguished by a dyn_cast to
FrameIndexSDNode.

In the case of a load, we had to go find the real pointer from
operand 0 and cast the node to MemSDNode to find the pointer info.

For the stack slot case, the code assumed that the stack slot
was perfectly aligned despite not being the creator of the slot.

This commit modifies the interface to make the caller responsible
for passing all of the required information to avoid all the
guess work and reverse engineering.

I'm not aware of any issues with the original code after an
earlier commit to fix the alignment of one of the stack objects.
This is just clean up to make the code less surprising.
2020-02-26 16:26:26 -08:00
Craig Topper 77d9b7b2cd [X86] Query constant pool object alignment instead of hardcoding. 2020-02-26 14:45:39 -08:00
Craig Topper 9c1a707ba3 [X86] Use proper alignment for stack temporary and correct MachinePointerInfo for stack accesses in LowerUINT_TO_FP. 2020-02-26 14:45:38 -08:00
Craig Topper a8186935ae [X86] Use correct MachineMemOperand for stack load in LowerFLT_ROUNDS_ 2020-02-26 14:45:38 -08:00
Craig Topper 14306ce80c [X86] Add proper MachinePointerInfo to the loads/stores created for moving data between SSE and X87 in X86DAGToDAGISel::PreprocessISelDAG 2020-02-26 14:45:37 -08:00
Eric Astor 85b641c27a [ms] Rename ParsingInlineAsm functions/variables to reflect MS-specificity.
Summary: ParsingInlineAsm was a misleading name. These values are only set for MS-style inline assembly.

Reviewed By: rnk

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75198
2020-02-26 15:19:40 -05:00
Craig Topper 735d27dc40 [SelectionDAG][PowerPC][AArch64][X86][ARM] Add chain input and output to ISD::FLT_ROUNDS_
This node reads the rounding control, which means it needs to be ordered properly with operations that change the rounding control. So it needs to be chained to maintain order.

This patch adds a chain input and output to the node and connects it to the chain in SelectionDAGBuilder. I've updated all in-tree targets to connect their chain through their lowering code.
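
A sketch of that wiring, roughly as it would sit in SelectionDAGBuilder
(the surrounding names like dl and I are assumed context, not this
patch's exact diff):

    // The node now produces {i32 value, chain}: consume the current root
    // as the input chain and publish the output chain so operations that
    // change the rounding mode stay ordered relative to this read.
    SDVTList VTs = DAG.getVTList(MVT::i32, MVT::Other);
    SDValue Res = DAG.getNode(ISD::FLT_ROUNDS_, dl, VTs, DAG.getRoot());
    setValue(&I, Res);             // the rounding-mode value
    DAG.setRoot(Res.getValue(1));  // thread the output chain back in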

Differential Revision: https://reviews.llvm.org/D75132
2020-02-25 16:58:23 -08:00
Vedant Kumar 8594f3d899 Revert "[X86MCTargetDesc.h] Speculative fix for macro collision with sys/param.h"
This reverts commit eee22ec3c3.

This is not the correct fix; the root cause seems to be a bug in the
stage1 host clang compiler. See https://reviews.llvm.org/D75091 for more
discussion.
2020-02-25 14:38:46 -08:00
Vedant Kumar eee22ec3c3 [X86MCTargetDesc.h] Speculative fix for macro collision with sys/param.h
See discussion on https://reviews.llvm.org/D75091 for information about
the build failure and alternatives considered.
2020-02-25 10:52:37 -08:00
Craig Topper 89ba4acad6 [X86] Pass parameters into selectVectorAddr to remove dependency on X86MaskedGatherScatterSDNode.
We might be able to get rid of X86ISD::SCATTER and some uses of
X86ISD::GATHER, which would require isel to use ISD::SCATTER and
ISD::GATHER as well.
2020-02-24 23:56:34 -08:00
Craig Topper 9238dfb4d8 [X86] Remove mask output from X86 gather/scatter ISD opcodes.
Instead add it when we make the machine nodes during instruction
selection.

This makes this ISD node closer to ISD::MGATHER. Trying to see
if we can remove the X86-specific ones.
2020-02-24 23:56:28 -08:00
Craig Topper 727328433a [X86] Add back fmaddsub intrinsics to work towards fixing the strict fp implementation
Previously we emitted an fmadd and an fmadd+fneg and combined them with a shufflevector. But this doesn't follow the correct exception behavior for unselected elements, so the backend can't merge them into the fmaddsub/fmsubadd instructions.

This patch restores the fmaddsub intrinsics so we don't have two arithmetic operations. We lose out on optimization opportunity in the non-strict FP case, but I don't think this is a big loss. If someone gives us a test case we can look into adding instcombine/dagcombine improvements. I'd rather not have the frontend do completely different things for strict and non-strict.

This still has problems because target specific intrinsics don't support strict semantics yet. We also still have all of the problems with masking. But we at least generate the right instruction in constrained mode now.
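
For reference, a self-contained scalar model of the fmaddsub lane
semantics involved here (an illustration, not code from the patch):

    #include <array>
    #include <cstdio>

    // Even lanes compute a*b - c and odd lanes compute a*b + c in a
    // single operation; the old expansion computed an fmadd and a negated
    // fmadd as two full-width ops and blended them with a shufflevector.
    static std::array<float, 4> fmaddsub(const std::array<float, 4> &A,
                                         const std::array<float, 4> &B,
                                         const std::array<float, 4> &C) {
      std::array<float, 4> R{};
      for (int I = 0; I < 4; ++I)
        R[I] = (I % 2 == 0) ? A[I] * B[I] - C[I]  // even lane: subtract
                            : A[I] * B[I] + C[I]; // odd lane: add
      return R;
    }

    int main() {
      auto R = fmaddsub({1, 2, 3, 4}, {5, 6, 7, 8}, {1, 1, 1, 1});
      printf("%g %g %g %g\n", R[0], R[1], R[2], R[3]);
    }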

Differential Revision: https://reviews.llvm.org/D74268
2020-02-24 12:07:21 -08:00
Simon Pilgrim daac8dba77 [X86] combineX86ShuffleChain - select X86ISD::FAND/ISD::AND based on MaskVT
Noticed by inspection: we shouldn't use FloatDomain directly; we've already bitcast both inputs to MaskVT, so select the opcode using that.
2020-02-24 18:24:44 +00:00
Simon Pilgrim 59d8d13c7b [X86] getTargetShuffleInputs - check that the source inputs are all the right size.
I'm hoping to begin improving shuffle combining across different vector sizes, but before that we must ensure that all existing getTargetShuffleInputs calls bail if the inputs aren't the same size.
2020-02-24 16:26:10 +00:00
Simon Pilgrim b82438872b [CostModel][X86] We don't need a scale factor for SLM extract costs
D74976 will handle larger vector types, but since SLM doesn't support AVX+, we will always be extracting from 128-bit vectors, so we don't need to scale the cost.
2020-02-24 14:23:04 +00:00
Craig Topper 7a7146cf72 [X86] When creating X86ISD::MGATHER nodes from AVX2 gather intrinsics, cast the mask to integer type.
The gather intrinsics use a floating point mask when the result
type is FP. But we call DemandedBits on the mask assuming it's an
integer type. We also use integer types when we create it from
generic IR. So add a bitcast to the intrinsic path to guarantee
the integer type.
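
A sketch of the shape of the fix, assuming the usual SelectionDAG
lowering context (variable names are illustrative):

    // If the AVX2 gather intrinsic supplied an FP-typed mask (because the
    // result type is FP), bitcast it to the same-width integer vector
    // type before building X86ISD::MGATHER, so DemandedBits reasoning
    // sees an integer mask.
    EVT MaskVT = Mask.getValueType();
    if (MaskVT.isFloatingPoint())
      Mask = DAG.getBitcast(MaskVT.changeVectorElementTypeToInteger(),
                            Mask);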
2020-02-23 23:00:41 -08:00
Craig Topper f1b8ec3398 [X86] Use custom isel for gather/scatter instructions.
The type profile we use for the isel patterns lied about how
many operands the gather/scatter node has, in order to skip the
index and scale operands. This allowed us to expand the baseptr
operand into base, displacement, and segment and then merge
the index and scale with them in the final instruction during
isel. This is kind of a hack that relies on isel not checking the
number of operands at all.

This commit switches to custom isel where we can manage this
directly without relying on holes in the isel checking.
2020-02-23 22:33:06 -08:00
Craig Topper 5a70518660 [X86] Remove most X86 specific subclasses of MemSDNode. Just use a MemIntrinsicSDNode as we usually do.
Leave the gather/scatter subclasses, but make them inherit from
MemIntrinsicSDNode and delete their constructor and destructor.
This way we can still have the getIndex, getMask, etc. convenience
functions.
2020-02-23 15:13:32 -08:00
Craig Topper 15b6aa7448 [X86] Enable the use of movlps for i64 atomic load on 32-bit targets with sse1.
Still a little room for improvement by using movlps to store to
the stack temporary needed to move data out of the xmm register
after the load.
2020-02-23 15:11:38 -08:00
Craig Topper 2a10f8019d [X86] Use FIST for i64 atomic stores on 32-bit targets without SSE. 2020-02-23 15:11:38 -08:00
Craig Topper 84cd968f75 [X86] Add AddToWorklist(N) after calls to SimplifyDemandedBits/SimplifyDemandedVectorElts that are called on an operand of N.
If a simplification occurs, the operand will be added to the worklist.
But since the demanded mask was based on N, we need to make sure
we revisit N in case there are more simplifications to be done.
Returning SDValue(N, 0) as we do only tells DAG combine that
something changed, but that won't make it add anything to the
worklist.
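
A sketch of the pattern this establishes, in a typical target combine
(the demanded-elements mask is a placeholder):

    // After simplifying an operand of N, re-add N itself: the
    // simplification only queues the changed operand, but N may now
    // fold further too.
    APInt DemandedElts = APInt::getAllOnesValue(NumElts);  // placeholder
    if (TLI.SimplifyDemandedVectorElts(Op, DemandedElts, DCI)) {
      DCI.AddToWorklist(N);   // make sure DAG combine revisits N
      return SDValue(N, 0);   // signal that something changed
    }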

Found while playing around with using VEXTRACT_STORE in more cases.
But I guess this doesn't affect any of our existing tests.
2020-02-22 21:42:59 -08:00
Craig Topper bdb1729c83 [X86] Teach EltsFromConsecutiveLoads that it's ok to form a v4f32 VZEXT_LOAD with a 64 bit memory size on SSE1 targets.
We can use MOVLPS which will load 64 bits, but we need a v4f32
result type. We already have isel patterns for this.

The code here is a little hacky. We can probably improve it with
more isel patterns.
2020-02-22 18:50:52 -08:00
Craig Topper e7a184fc7c [X86] Use movlps for i64 atomic stores on 32-bit targets with sse1.
This is similar to using movd which we do for sse2 targets.

I've added a DAG combine for VEXTRACT_STORE to use SimplifyDemandedVectorElts
to clean up some artifacts from type legalization.
2020-02-22 18:22:47 -08:00
Craig Topper 228a2bc9b7 [X86] Teach combineCVTPH2PS to shrink v8i16 loads when the output type is v4f32. Remove extra isel patterns.
Similar to what we do for other operations that use a subset of bits.
Allows us to remove a pattern that shrank a load, which was
incorrect if the load was volatile.
2020-02-21 18:11:07 -08:00
Francis Visoiu Mistrih a32d539798 [Target] Remove libObject dependency in lib/Target
This removes a couple useless includes and the dependency of X86Desc on Object,
which was useless as well.
2020-02-21 14:52:31 -08:00
Fangrui Song fad1c750f1 [AArch64][SVE] Fix -DBUILD_SHARED_LIBS=on builds after D74808/1874dee5662603c9251228c71b66de72cec0c979 2020-02-21 13:59:47 -08:00
Francis Visoiu Mistrih 1874dee566 [macho][NFC] Extract all CPU_(SUB_)TYPE logic to BinaryFormat
This moves all the logic of converting LLVM Triples to
MachO::CPU_(SUB_)TYPE from the specific target (Target)AsmBackend to
more convenient functions in lib/BinaryFormat.

This also gets rid of the separate two X86AsmBackend classes.

The previous attempt was to add it to libObject, but that would add
an unnecessary dependency on libObject from all the targets.

Differential Revision: https://reviews.llvm.org/D74808
2020-02-21 12:43:29 -08:00
Craig Topper 8875ee18d7 [X86] Add a new format type for instructions that represent named prefix bytes like data16 and rep. Use it to make a simpler version of isPrefix.
isPrefix was added to support the patches to align branches.
It relies on a switch over instruction names.

This moves those opcodes to a new format so the information lives
in tablegen, and we can just check for a specific value in some bits
of TSFlags instead.
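
A self-contained sketch of the resulting check (the field layout and
values are hypothetical, not X86's real TSFlags encoding):

    #include <cstdint>

    // With a dedicated instruction format for named prefix bytes,
    // isPrefix becomes a mask-and-compare on TSFlags instead of a
    // switch over instruction names.
    constexpr uint64_t FormMask   = 0x7F; // hypothetical format field
    constexpr uint64_t PrefixByte = 0x0C; // hypothetical format value

    static bool isPrefix(uint64_t TSFlags) {
      return (TSFlags & FormMask) == PrefixByte;
    }

    int main() { return isPrefix(0x0C) ? 0 : 1; }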

I've left the other function in place for now so that the
existing patches in phabricator will still work. I'll work with
the owner to get them migrated.
2020-02-21 12:34:59 -08:00
Nikita Popov c90ea87cfd [X86] Fix SDLoc initialization
Fixes -Wparentheses warning, in this case indicating a genuine
bug.
2020-02-21 18:26:05 +01:00
Craig Topper 97f11600e0 [X86] Don't bother avoiding illegal FCMOVs if we don't have the cmov subtarget feature.
We'll be forced to emit branches so we might as well use the most
direct condition.
2020-02-21 00:34:15 -08:00
Craig Topper 263bef2bbc [X86] Make combineCMov not create unsupported FCMOVs when f32/f64 are using X87.
This makes the behavior consistent with what's in LowerSELECT.
2020-02-21 00:34:15 -08:00
Craig Topper 4576606831 [X86] Remove unnecessary isNullConstant in LowerSelect. NFC
At this point in the code we know that Op1 or Op2 is
all ones. Y points to the other operand. In the case that
Op2 is zero, Op1 must be all ones and Y is Op2. The OR combines
Y into Res, but if Y is 0 the OR will be folded away by getNode,
so we don't need to check for it.
2020-02-20 21:41:13 -08:00
Craig Topper 78be618717 [X86] Add CMOV_VR64 pseudo instruction for MMX. Remove mmx handling from combineSelect.
The combineSelect code was casting to i64 without any check that
i64 was legal. This can break after type legalization.

It also required splitting the mmx register on 32-bit targets.
It's not clear that this makes sense. Instead switch to using
a cmov pseudo like we do for XMM/YMM/ZMM.
2020-02-20 20:30:56 -08:00
Craig Topper e5782377f3 [X86] Add CMOV_VK1 pseudo so we don't crash on v1i1 ISD::SELECT 2020-02-20 15:13:48 -08:00
Craig Topper 7e92769862 [X86] Expand vselect of v1i1 under avx512.
We already do this for v2i1, v4i1, etc.
2020-02-20 15:13:47 -08:00
Craig Topper b00ef8951b [X86] Custom legalize v1i1 UADDSAT/USUBSAT/SADDSAT/SSUBSAT to match v2i1/v4i1/v8i1 etc. 2020-02-20 15:13:46 -08:00
Craig Topper 5228a5544b [X86] Fix a couple copy mistakes in v4i1 or/and/xor isel patterns.
VK1 was being used as the output of the copy to regclass, but it
should be VK2/VK4. Shouldn't matter in practice though since
VK1/VK2/VK4/VK8/VK16 are all identical and just have different VTs.
2020-02-20 15:13:45 -08:00
Craig Topper d95a10a7f9 [X86] Custom legalize v1i1 add/sub/mul to xor/xor/and with avx512.
We already did this for v2i1, v4i1, v8i1, etc.
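
A self-contained check of the single-bit identities behind this mapping:

    #include <cassert>

    int main() {
      // Over a single bit, add and sub are both XOR, and mul is AND.
      for (unsigned A = 0; A < 2; ++A)
        for (unsigned B = 0; B < 2; ++B) {
          assert(((A + B) & 1) == (A ^ B)); // add -> xor
          assert(((A - B) & 1) == (A ^ B)); // sub -> xor
          assert(((A * B) & 1) == (A & B)); // mul -> and
        }
      return 0;
    }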
2020-02-20 15:13:44 -08:00