Commit Graph

7035 Commits

Simon Pilgrim a9ee3ffbc0 [X86] Move BEXTR DemandedBits handling inside SimplifyDemandedBitsForTargetNode
Some prep work for PR39153.
2020-02-03 15:16:40 +00:00
Craig Topper cf20fde1d1 [X86] Remove a couple unnecessary calls to ConvertCmpIfNecessary.
We only need to call this on floating point comparisons. In this
case these are known to be integer compares. One of them even
has a SUB opcode instead of CMP.
2020-02-02 21:36:51 -08:00
Craig Topper ee85415dbb [X86] Use MVT::f80 for the result type of the FLD used to convert from SSE register to X87 register in FP_TO_INTHelper. 2020-02-02 13:24:37 -08:00
Simon Pilgrim 5d86ac82a6 Fix a few spelling mistakes in comments. NFCI. 2020-02-02 18:27:43 +00:00
Simon Pilgrim 17e91b7dd2 [X86][SSE] combineBitcastvxi1 - add pre-AVX512 v64i1 handling 2020-02-02 18:00:09 +00:00
Guillaume Chatelet 3c89b75f23 [NFC] Introduce a type to model memory operation
Summary: This is a first step before changing the types to llvm::Align and introduce functions to ease client code.

Reviewers: courbet

Subscribers: arsenm, sdardis, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, jrtc27, atanasyan, jsji, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73785
2020-01-31 17:29:01 +01:00
Craig Topper 90c31b0f42 [X86] Custom lower ISD::FROUND with SSE4.1 to avoid a libcall.
ISD::FROUND is defined to round to nearest with ties rounding
away from 0. This mode isn't supported in hardware on X86.

But as long as we aren't compiling with trapping math, we can
emulate this with floor(X + copysign(nextafter(0.5, 0.0), X)).

We have to use nextafter to avoid some corner cases that adding
0.5 would have. For example, if X is nextafter(0.5, 0.0) it should
round to 0.0, but adding 0.5 would need one extra bit of mantissa
than can be stored so it rounds to 1.0. Adding nextafter(0.5, 0.0)
instead will just increase the exponent by 1 and leave the mantissa
as all 1s. This would be nextafter(1.0, 0.0) which will floor to 0.0.

Technically this requires -fno-trapping-math, which isn't our default.
But if we care about exceptions we should be using constrained
intrinsics. Constrained intrinsics would use STRICT_FROUND which
won't go through this code.

Fixes PR42195.

Differential Revision: https://reviews.llvm.org/D73607
2020-01-29 09:10:02 -08:00
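
A small C++ check of the corner case called out in the commit above (an illustrative sketch by the editor, not the DAG lowering itself):

    #include <cmath>
    #include <cstdio>

    int main() {
      float x = std::nextafter(0.5f, 0.0f);   // largest float below 0.5
      // Adding plain 0.5 needs one more mantissa bit than float has, so the
      // sum rounds up to 1.0 and floors to 1.0 -- the wrong answer.
      std::printf("%g\n", std::floor(x + 0.5f));                        // prints 1
      // Adding nextafter(0.5, 0.0) keeps the sum just below 1.0.
      std::printf("%g\n", std::floor(x + std::nextafter(0.5f, 0.0f)));  // prints 0
    }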
Craig Topper e5edd641fd [X86] Use a shorter sequence to implement FLT_ROUNDS
This code needs to map from the FPCW 2-bit encoding for rounding mode to the 2-bit encoding defined for FLT_ROUNDS. The previous implementation did some clever swapping of bits and adding 1 modulo 4 to do the mapping.

This patch instead uses an 8-bit immediate as a lookup table of four 2-bit values. Then we use the 2-bit FPCW encoding to index the lookup table by using a right shift and an AND. This requires extracting the 2-bit value from FPCW and multiplying it by 2 to make it usable as a shift amount. But it still results in less code.

Differential Revision: https://reviews.llvm.org/D73599
2020-01-29 08:56:33 -08:00
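
A C++ sketch of the lookup-table idea above (illustrative only; the exact immediate in the emitted code depends on how the table is packed). The x87 FPCW rounding-control field encodes nearest/down/up/toward-zero as 0-3, while FLT_ROUNDS wants toward-zero=0, nearest=1, up=2, down=3:

    #include <cstdint>

    // Pack the four 2-bit FLT_ROUNDS values into one byte, indexed by the
    // 2-bit FPCW rounding-control field: RC=0 -> 1, RC=1 -> 3, RC=2 -> 2, RC=3 -> 0.
    constexpr uint8_t kTable = (0 << 6) | (2 << 4) | (3 << 2) | 1;   // 0x2D

    int flt_rounds_from_fpcw_rc(unsigned rc) {   // rc is FPCW bits 11:10
      return (kTable >> (rc * 2)) & 0x3;         // a shift and an AND, no branches
    }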
Craig Topper ca2abea29a [X86] Use SelectionDAG::getZExtOrTrunc to simplify some code. NFCI 2020-01-28 16:27:59 -08:00
Wang, Pengfei 3d1f0ce3b9 [X86] Add combination for fma and fneg on X86 under strict FP.
Summary: X86 has instructions to calculate fma and fneg at the same time. But we combine the fneg and fma only when fneg is the source operand under strict FP.

Reviewers: craig.topper, andrew.w.kaylor, uweigand, RKSimon, LiuChen3

Subscribers: LuoYuanke, llvm-commits, cfe-commits, jdoerfert, hiraditya

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72824
2020-01-28 20:09:56 +08:00
Simon Pilgrim 2d5e281b0f [X86][AVX] Add a more aggressive SimplifyMultipleUseDemandedBits to simplify masked store masks.
Fixes a poor codegen issue noticed in PR11210.
2020-01-27 16:44:25 +00:00
Simon Pilgrim fa19d67a2a [X86][AVX] Extend combineCommutableSHUFP to handle v8f32 and v16f32 commutable shufps patterns 2020-01-26 19:04:12 +00:00
Simon Pilgrim 1a81b296cd [X86][SSE] combineCommutableSHUFP - permilps(shufps(load(),x)) --> permilps(shufps(x,load()))
Pull out combineTargetShuffle code added in rG3fd5d1c6e7db into a helper function and extend it to handle shufps(shufps(load(),x),y) and shufps(y,shufps(load(),x)) cases as well.
2020-01-26 14:36:23 +00:00
Craig Topper 3fdd435a4b [X86] Use a macro to convert X86ISD names to strings in getTargetNodeName.
Every case in the switch had a string version of itself. Two
of them had a typo that used : instead of ::

By using a macro we can automate the string creation and avoid
the possibility of typos like this.

This is similar to what is done on the AMDGPU target.
2020-01-25 18:27:29 -08:00
Craig Topper 2c1decc040 [X86] Break the loop in LowerReturn into 2 loops. NFCI
I believe for STRICT_FP I need to use a STRICT_FP_EXTEND for the extending to f80 for returning f32/f64 in 32-bit mode when SSE is enabled. The STRICT_FP_EXTEND node requires a Chain. I need to get that node onto the chain before any CopyToRegs are emitted. This is because all the CopyToRegs are glued and chained together. So I can't put a STRICT_FP_EXTEND on the chain between the glued nodes without also gluing the STRICT_FP_EXTEND.

This patch moves all the extend creation to a first pass and then creates the copytoregs and fills out RetOps in a second pass.

Differential Revision: https://reviews.llvm.org/D72665
2020-01-24 14:44:38 -08:00
Simon Pilgrim 3fd5d1c6e7 [X86][SSE] combineTargetShuffle - permilps(shufps(load(),x)) --> permilps(shufps(x,load()))
Moves lowerShuffleWithSHUFPS commutation code from rG30fcd29fe479 to catch cases during combine
2020-01-24 15:23:20 +00:00
Simon Pilgrim 30fcd29fe4 [X86][SSE] lowerShuffleWithSHUFPS - commute '2*V1+2*V2 elements' mask if it allows a load fold
As mentioned on D73023.
2020-01-24 12:04:10 +00:00
Guillaume Chatelet 805c157e8a [Alignment][NFC] Deprecate Align::None()
Summary:
This is a follow up on https://reviews.llvm.org/D71473#inline-647262.
There's a caveat here that `Align(1)` relies on the compiler understanding of `Log2_64` implementation to produce good code. One could use `Align()` as a replacement but I believe it is less clear that the alignment is one in that case.

Reviewers: xbolva00, courbet, bollu

Subscribers: arsenm, dylanmckay, sdardis, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, jrtc27, atanasyan, jsji, Jim, kerbowa, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D73099
2020-01-24 12:53:58 +01:00
Simon Pilgrim 0ec25a0316 [X86] LowerRotate - early out for vector rotates by zero 2020-01-23 17:48:09 +00:00
Guillaume Chatelet 279fa8e006 [Alignment][NFC] Deprecate untyped CreateAlignedLoad
Summary:
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet

Subscribers: arsenm, jvesely, nhaehnle, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73260
2020-01-23 13:34:32 +01:00
Sanjay Patel 363d27c871 [x86] fold vperm2x128 to concat of 128-bit high half vectors
vperm (ins ?, X, C), (ins ?, Y, C), 0x31 --> concat X, Y

This is another shuffle problem seen with PR42024:
https://bugs.llvm.org/show_bug.cgi?id=42024

We have this small crack in legalization/lowering/combining/demanded
that allows forming a vperm2f128 of high halves with AVX1 when we
could do better by peeking through the insert_subvector nodes.
AFAICT, it requires IR as shown in the diffs - much larger than legal
vectors - to avoid all of the usual folds.

Another option would prevent forming the 256-bit vperm in lowering.

Differential Revision: https://reviews.llvm.org/D73197
2020-01-22 15:35:50 -05:00
Simon Pilgrim 5340434c94 [X86][SSE] combineExtractWithShuffle - extract(bitcast(broadcast(x))) --> x
Removes some unnecessary gpr<-->fpu traffic
2020-01-22 18:02:58 +00:00
Simon Pilgrim a14aa7dabd [X86][SSE] combineExtractWithShuffle - extract(bitcast(scalar_to_vector(x))) --> x
Removes some unnecessary gpr<-->fpu traffic
2020-01-22 16:11:08 +00:00
Simon Pilgrim c784e5451b Use SelectionDAG::getShiftAmountConstant(). NFCI. 2020-01-22 13:52:43 +00:00
Simon Pilgrim 963f268186 [X86][SSE] combineExtractWithShuffle - pull out repeated extract index code. NFCI. 2020-01-22 12:08:58 +00:00
Simon Pilgrim b065902ed4 [X86] combineBT - use SimplifyDemandedBits instead of GetDemandedBits
Another step towards removing SelectionDAG::GetDemandedBits entirely
2020-01-21 14:24:46 +00:00
Simon Pilgrim eaa4548459 [X86][SSE] Add PACKSS SimplifyMultipleUseDemandedBits 'sign bit' handling.
Attempt to use SimplifyMultipleUseDemandedBits to simplify PACKSS if we're only after the sign bit.
2020-01-20 10:48:54 +00:00
Florian Hahn 0ee1db2d1d [X86] Try to avoid casts around logical vector ops recursively.
Currently PromoteMaskArithmetic only looks at a single operation to
skip casts. This means we miss cases where we combine multiple masks.

This patch updates PromoteMaskArithmetic to try to recursively promote
AND/XOR/OR nodes that terminate in truncates of the right size or
constant vectors.

Reviewers: craig.topper, RKSimon, spatel

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D72524
2020-01-19 17:22:43 -08:00
Craig Topper 5fa2022ec0 [X86] Remove X86ISD::FILD_FLAG and stop gluing nodes together.
Summary:
I think whatever problem the gluing was fixing has long since been fixed. We don't have any of the restrictions on FP stack stuff that existed back when this was first added.

I had to change which type we use for FILD in BuildFILD when X86 was enabled because most of the isel patterns block f32/f64 instructions when SSE1/SSE2 are enabled. So I needed to use the f80 pattern, but this shouldn't have an effect on the generated code since there is only one FILD instruction anyway. We already use f80 explicitly in other places.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: andrew.w.kaylor, scanon, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72805
2020-01-18 23:44:05 -06:00
Simon Pilgrim 69bc450882 [X86] Rename lowerShuffleAsRotate -> lowerShuffleAsVALIGN
Since it can only ever create VALIGN nodes.
2020-01-18 11:29:14 +00:00
Michael Liao 6d0d86a64d [DAG] Add helper for creating constant vector index with correct type. NFC. 2020-01-18 01:23:36 -05:00
Sanjay Patel 43f60e614a [x86] try harder to form 256-bit unpck*
This is another part of a problem noted in PR42024:
https://bugs.llvm.org/show_bug.cgi?id=42024

The AVX2 code may use awkward 256-bit shuffles vs. the AVX code that gets split
into the expected 128-bit unpack instructions. We have to be selective in
matching the types where we try to do this though. Otherwise, we can end up
with more instructions (in the case of v8x32/v4x64).

Differential Revision: https://reviews.llvm.org/D72575
2020-01-17 10:42:39 -05:00
Craig Topper e445447921 [X86] When handling i64->f32 sint_to_fp on 32-bit targets only bitcast to f64 if sse2 is enabled.
The code is trying to copy the i64 value to an xmm register to
use a 64-bit store so that the 64-bit fild can benefit from
store forwarding.

But this trick only works if f64 is going to be stored in an
XMM register. If we only have SSE1 then only float lives in an XMM
register. So this trick just causes 2 i32 stores, an f64 load into
the x87, an f64 store from the x87, and a 64-bit fild. So we end up
with an extra stack temporary and still don't get store forwarding.

We might be able to use v2f32 here instead, but I didn't check. I
just wanted the code to make sense.

Found by inspection as I continue to stare too hard at our
int_to_fp conversions.
2020-01-15 18:26:28 -08:00
Craig Topper be8f217b18 [X86] Don't call LowerUINT_TO_FP_i32 for i32->f80 on 32-bit targets with sse2.
We were performing an emulated i32->f64 in the SSE registers, then
storing that value to memory and doing a extload into the X87
domain.

After this patch we'll now just store the i32 to memory along
with an i32 0. Then do a 64-bit FILD to f80 completely in the X87
unit. This matches what we do without SSE.
2020-01-15 00:43:07 -08:00
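
A scalar C++ analogue of the sequence described above (a sketch by the editor, assuming the stored pair {value, 0} is read back as one little-endian i64; the helper name is hypothetical):

    #include <cstdint>

    long double u32_to_f80(uint32_t x) {
      // Storing the i32 value followed by an i32 zero yields the bit pattern of
      // a non-negative i64, so a signed 64-bit x87 load (FILD) converts it exactly.
      int64_t widened = static_cast<int64_t>(static_cast<uint64_t>(x));
      return static_cast<long double>(widened);
    }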
Reid Kleckner 40cd26c700 [Win64] Handle FP arguments more gracefully under -mno-sse
Pass small FP values in GPRs or stack memory according to the normal
convention. This is what gcc -mno-sse does on Win64.

I adjusted the conditions under which we emit an error to check if the
argument or return value would be passed in an XMM register when SSE is
disabled. This has a side effect of no longer emitting an error for FP
arguments marked 'inreg' when targeting x86 with SSE disabled. Our
calling convention logic was already assigning it to FP0/FP1, and then
we emitted this error. That seems unnecessary, we can ignore 'inreg' and
compile it without SSE.

Reviewers: jyknight, aemerson

Differential Revision: https://reviews.llvm.org/D70465
2020-01-14 17:19:35 -08:00
Craig Topper 76291e1158 [X86] Drop an unneeded FIXME. NFC
The extload on X87 is free.
2020-01-14 17:05:46 -08:00
Craig Topper 57eb56b839 [X86] Swap the 0 and the fudge factor in the constant pool for the 32-bit mode i64->f32/f64/f80 uint_to_fp algorithm.
This allows us to generate better code for selecting the fixup
to load.

Previously when the sign was set we had to load offset 0. And
when it was clear we had to load offset 4. This required a testl,
setns, zero extend, and finally a mul by 4. By switching the offsets
we can just shift the sign bit into the lsb and multiply it by 4.
2020-01-14 17:05:23 -08:00
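
A C++ sketch of the new offset computation (the helper name and the idea that the input is the high 32-bit word of the i64 are the editor's assumptions; the real code works on DAG nodes):

    #include <cstdint>

    // With the constant-pool entries swapped, the fixup offset is 4 when the
    // sign bit is set and 0 otherwise: shift the sign bit down to bit 0 and
    // scale it, instead of testl + setns + zero extend + mul.
    uint32_t fudge_offset(uint32_t high_word) {
      return (high_word >> 31) * 4;
    }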
Craig Topper 98c54fb1fe [X86] Directly emit a BROADCAST_LOAD from constant pool in lowerUINT_TO_FP_vXi32 to avoid double loads seen in D71971
By directly emitting the constants as a constant pool load we seem to avoid the build_vector/extract_subvector combines that resulted in the duplicate loads we had before.

Differential Revision: https://reviews.llvm.org/D72307
2020-01-14 10:50:39 -08:00
Simon Pilgrim 66e39067ed [X86][AVX] Use lowerShuffleAsLanePermuteAndSHUFP to lower binary v4f64 shuffles.
Only perform this if we are shuffling lower and upper lane elements across the lanes (otherwise splitting to lower xmm shuffles would be better).

This is a regression if we shuffle build_vectors due to getVectorShuffle canonicalizing 'blend of splat' build vectors; for now I've set this not to shuffle build_vector nodes at all to avoid this.
2020-01-12 12:29:41 +00:00
Simon Pilgrim b375f28b0e [X86][AVX] lowerShuffleAsLanePermuteAndSHUFP - only set the demanded elements of the lane mask.
Fixes a cyclic dependency issue with an upcoming patch where getVectorShuffle canonicalizes masks with splat build vector sources.
2020-01-12 09:41:40 +00:00
Craig Topper d692f0f6c8 [X86] Don't call LowerSETCC from LowerSELECT for STRICT_FSETCC/STRICT_FSETCCS nodes.
This causes the STRICT_FSETCC/STRICT_FSETCCS nodes to be lowered
early while lowering SELECT, but the output chain doesn't get
connected. Then we visit the node again when it is its turn
because we haven't replaced the use of the chain result. In the
case of the fp128 libcall lowering, after D72341 this will cause
the libcall to be emitted twice.
2020-01-11 20:43:00 -08:00
Craig Topper ed679804d5 [TargetLowering][X86] Connect the chain from STRICT_FSETCC in TargetLowering::expandFP_TO_UINT and X86TargetLowering::FP_TO_INTHelper. 2020-01-11 17:50:20 -08:00
Simon Pilgrim 24763734e7 [X86] Fix outdated comment
The generic saturated math opcodes are no longer widened inside X86TargetLowering
2020-01-11 14:37:18 +00:00
Simon Pilgrim ce35010d78 [X86][AVX] Add lowerShuffleAsLanePermuteAndSHUFP lowering
Add initial support for lowering v4f64 shuffles to SHUFPD(VPERM2F128(V1, V2), VPERM2F128(V1, V2)), eventually this could be used for v8f32 (and maybe v8f64/v16f32) but I'm being conservative for the initial implementation as only v4f64 can always succeed.

This currently is only called from lowerShuffleAsLanePermuteAndShuffle so only gets used for unary shuffles, and we limit this to cases where we use upper elements as otherwise concating 2 xmm shuffles is probably the better case.

Helps with poor shuffles mentioned in D66004.
2020-01-11 12:42:00 +00:00
Simon Pilgrim a5bdada09d [X86][AVX] lowerShuffleAsLanePermuteAndShuffle - consistently normalize multi-input shuffle elements
We only use lowerShuffleAsLanePermuteAndShuffle for unary shuffles at the moment, but we should consistently handle lane index calculations for multiple inputs in both the AVX1 and AVX2 paths.

Minor (almost NFC) tidyup as I'm hoping to use lowerShuffleAsLanePermuteAndShuffle for binary shuffles soon.
2020-01-10 17:21:20 +00:00
Matt Arsenault 255cc5a760 CodeGen: Use LLT instead of EVT in getRegisterByName
Only PPC seems to be using it, and only checks some simple cases and
doesn't distinguish between FP. Just switch to using LLT to simplify
use from GlobalISel.
2020-01-09 17:37:52 -05:00
Craig Topper 3811417f39 [X86] Custom type legalize v4i64->v4f32 uint_to_fp on sse4.1 targets in 64-bit mode
For v4i64->v4f32 uint_to_fp on pre-avx targets where v4i64 isn't legal we create two v2i64->v2f32 uint_to_fp nodes that need to be shuffled together. Our codegen for v2i64->v2f32 involves detecting if the number is larger than (2^63 - 1); if so we do a special division by 2 so we can do a signed conversion, which we need to scalarize, then do a multiply by 2 at the end if we divided earlier.

When v4i64 isn't legal we need to split the checking for a larger number and dividing by 2 into two v2i64 vectors. The scalar part can extract the 4 i64 values from those 4 splits. But we can reassemble the 4 scalar f32 results directly into a single v4f32 vector. Then we just need to combine the fixup indications from the 2 halves and we can do the final multiply by 2 fixup on all 4 values if needed at once using a single v4f32 blend and v4f32 fadd.

Differential Revision: https://reviews.llvm.org/D72368
2020-01-08 10:06:01 -08:00
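
A scalar analogue of the per-element conversion described above (an editor's sketch, not the vector DAG sequence; the OR of the low bit is the usual rounding refinement and is an assumption about the exact lowering):

    #include <cstdint>

    float u64_to_f32(uint64_t x) {
      if (static_cast<int64_t>(x) >= 0)            // fits a signed conversion
        return static_cast<float>(static_cast<int64_t>(x));
      // Too large: halve it (keeping the low bit so rounding stays correct),
      // convert signed, then multiply the result by 2.
      uint64_t halved = (x >> 1) | (x & 1);
      float f = static_cast<float>(static_cast<int64_t>(halved));
      return f + f;
    }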
Wang, Pengfei 9a621de1ec [X86] Adding fp128 support for strict fcmp
Summary: Adding fp128 support for strict fcmp

Reviewers: craig.topper, LiuChen3, andrew.w.kaylor, RKSimon, uweigand

Subscribers: hiraditya, llvm-commits, LuoYuanke

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71897
2020-01-08 12:59:31 +08:00
Craig Topper 9685cf709f [X86] Enable v2i64->v2f32 uint_to_fp code in ReplaceNodeResults on SSE4.1 target
Now that we generate decent code for (v2i64 (setlt zero, X)) on pre-sse4.2 targets, I think we can use this.

Differential Revision: https://reviews.llvm.org/D72354
2020-01-07 13:25:29 -08:00
Craig Topper afa8211e97 [X86] Improve lowering of (v2i64 (setgt X, -1)) on pre-SSE2 targets. Enable v2i64 in foldVectorXorShiftIntoCmp.
Similar to D72302 but for the canonical form for the opposite case. I've changed foldVectorXorShiftIntoCmp to form a target independent setcc node instead of PCMPGT now and enabled it for v2i64 on pre-SSE4.2 targets. The setcc should eventually get lowered to PCMPGT or the new v2i64 sequence.

Differential Revision: https://reviews.llvm.org/D72318
2020-01-07 11:22:04 -08:00
Craig Topper b9376690a0 [X86] Improve lowering of v2i64 sign bit tests on pre-sse4.2 targets
Without sse4.2 a v2i64 setlt needs to expand into a pcmpgtd, pcmpeqd, 3 shuffles, and 2 logic ops. But if we're only interested in the sign bit of the i64 elements, we can just use one pcmpgtd and shuffle the odd elements to the even elements.

Differential Revision: https://reviews.llvm.org/D72302
2020-01-07 11:22:03 -08:00
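
A sketch of the idea above with SSE2 intrinsics (assumed equivalent by the editor, not necessarily the exact instruction sequence the DAG produces):

    #include <emmintrin.h>

    // Build a v2i64 "is negative" mask with a single 32-bit compare: the sign
    // of each i64 lives in its high (odd) 32-bit lane, so compare against zero
    // and broadcast the odd lanes onto the even ones.
    __m128i v2i64_sign_mask(__m128i v) {
      __m128i gt = _mm_cmpgt_epi32(_mm_setzero_si128(), v);   // 0 > v, per i32
      return _mm_shuffle_epi32(gt, _MM_SHUFFLE(3, 3, 1, 1));
    }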
Simon Pilgrim 0e912e22b6 [X86] Pull out repeated SrcVT.getVectorNumElements() call. NFCI. 2020-01-07 16:51:10 +00:00
Simon Pilgrim c0365aaaa4 [X86] Standardize shuffle match/lowering function names. NFC.
We mainly use lowerShuffle*/matchShuffle* - replace the (few) lowerVectorShuffle*/matchVectorShuffle* cases to be consistent.
2020-01-07 13:41:52 +00:00
Craig Topper 6a0564adcf [X86] Improve v4i32->v4f64 uint_to_fp for AVX1/AVX2 targets.
Use zext+or+fsub to do the conversion. Similar to D71971.

Differential Revision: https://reviews.llvm.org/D71971
2020-01-06 14:07:35 -08:00
Craig Topper 95840866b7 [X86] Improve v2i64->v2f32 and v4i64->v4f32 uint_to_fp on avx and avx2 targets.
Summary:
Based on Simon's D52965, but improved to handle strict fp and improve some of the shuffling.

Rather than use v2i1/v4i1 and let type legalization continue, just generate all the code with legal types and use an explicit shuffle.

I also added an explicit setcc to the v4i64 code to match the semantics of vselect which doesn't just use the sign bit. I'm also using a v4i64->v4i32 truncate instead of the shuffle in Simon's original code. With the setcc this will become a pack.

Future work can look into using X86ISD::BLENDV and a different shuffle that only moves the sign bit.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71956
2020-01-05 17:44:08 -08:00
Liu, Chen3 ca3bf289a7 [NFC] Modify the format:
Drop the else since we already returned in the if.
2020-01-06 09:35:19 +08:00
Simon Pilgrim e3bd011890 [X86][SSE] Combine combineLogicBlendIntoConditionalNegate for VSELECT nodes (PR43660)
Attempt to use combineLogicBlendIntoConditionalNegate for (select M, (sub 0, X), X) -> (sub (xor X, M), M)

We limit this to cases where we can't easily replace the VSELECT with a shuffle (non-constant masks) or where a BLENDV is likely to occur (which tends to result in slower codegen).
2020-01-05 18:50:44 +00:00
Simon Pilgrim 6a6e6f04ec [X86] Move combineLogicBlendIntoConditionalNegate before combineSelect. NFCI.
Updates function order in preparation of future fix for PR43660
2020-01-05 17:17:41 +00:00
Simon Pilgrim 3db84f142a [X86] Merge (identical) LowerGC_TRANSITION_START and LowerGC_TRANSITION_END (NFC)
Silences a copy+paste analyzer warning - all they do is insert NOOPs in exactly the same way.
2020-01-05 15:24:57 +00:00
Craig Topper 2875cc6b29 [X86] Improve for v2i32->v2f64 uint_to_fp
This uses an alternative implementation of this conversion derived
from our v2i32->v2f32 handling. We can zero extend the v2i32 to
v2i64, or it with the bit representation of 2.0^52 which will give
us 2.0^52 plus the 32-bit integer since double's mantissa is 52 bits.
Then we just need to subtract 2.0^52 as a double and let the floating
point unit normalize the remaining bits into a valid double.

This is fewer instructions than our previous code, but it does require
a port 5 shuffle for the zero extend or unpack.

Differential Revision: https://reviews.llvm.org/D71945
2020-01-03 11:39:08 -08:00
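
A scalar C++ sketch of the bit trick described above (illustrative only; the vector code does the same with a zero extend/unpack, an OR, and a subtract):

    #include <cstdint>
    #include <cstring>

    double u32_to_f64(uint32_t x) {
      // OR the zero-extended value into the mantissa of 2^52: the result is
      // exactly 2^52 + x, because double has a 52-bit mantissa.
      uint64_t bits = 0x4330000000000000ULL | static_cast<uint64_t>(x);
      double d;
      std::memcpy(&d, &bits, sizeof(d));
      return d - 4503599627370496.0;   // subtract 2^52; the FPU renormalizes
    }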
Reid Kleckner 9c2b72821b Move tail call disabling code to target independent code
When the "disable-tail-calls" attribute was added, checks were added for
it in various backends. Now this code has proliferated, and it is
something the target is responsible for checking. Move that
responsibility back to the ISels (fast, global, and SD).

There's no major functionality change, except for targets that never
implemented this check.

This LLVM attribute was originally added in
d9699bc7bd (2015).

Reviewers: echristo, MaskRay

Differential Revision: https://reviews.llvm.org/D72118
2020-01-03 11:27:41 -08:00
Craig Topper bd46e29742 [X86] Re-enable lowerUINT_TO_FP_vXi32 under fast-math by using an FSUB instead of an FADD.
Summary:
We previously disabled this under fast math due to aggressive
reassociation by the machine combiner. But I think we can work
around this by using a FSUB instead of FADD for the first
operation.

This matches the similar algorithm we do for uint_to_fp i64->f64
in TargetLowering::expandUINT_TO_FP. If reassociation hasn't
been a problem for that, hopefully it's not a problem here.

Reviewers: RKSimon, spatel, scanon

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71968
2020-01-02 21:46:53 -08:00
Wang, Pengfei 60333a5317 [X86] Enable strict FP by default and remove option -disable-strictnode-mutation. NFCI. 2020-01-03 10:59:34 +08:00
Wang, Pengfei 9dc9e0ea64 [X86] Optimization of inserting vxi1 sub vector into vXi1 vector
Summary:
After fixing the undef value case here, we used more operations to implement inserting a vXi1 subvector into a vXi1 vector; I optimized it to use fewer operations.

The history information at https://reviews.llvm.org/D68311

Reviewers: craig.topper, LuoYuanke, yubing, annita.zhang, pengfei, LiuChen3, RKSimon

Reviewed By: craig.topper

Subscribers: hiraditya, llvm-commits

Patch by Xiang Zhang (xiangzhangllvm)

Differential Revision: https://reviews.llvm.org/D71917
2020-01-03 09:25:25 +08:00
Craig Topper c36763d894 [X86] Call SimplifyMultipleUseDemandedBits from combineVSelectToBLENDV if the condition is used by something other than select conditions.
We might be able to bypass some nodes on the condition path.

Differential Revision: https://reviews.llvm.org/D71984
2020-01-01 11:16:52 -08:00
Liu, Chen3 8af492ade1 add strict float for round operation
Differential Revision: https://reviews.llvm.org/D72026
2020-01-01 20:42:12 +08:00
Craig Topper 26bdc603f7 [X86] Constant fold KSHIFT of an all zeros vector to just an all zeros vector. 2019-12-31 15:57:39 -08:00
Craig Topper 1cc8a74de3 [X86] Use carry flag from add for (seteq (add X, -1), -1).
If we just subtracted 1 and are checking if the result is -1, we can use the carry flag from the ADD instead of an explicit CMP. I'm using the same checks for the add users as EmitTest.

Fixes one case from PR44412

Differential Revision: https://reviews.llvm.org/D72019
2019-12-31 15:05:23 -08:00
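
A C++ sketch of why the ADD's carry flag already answers the compare (editor's illustration with a hypothetical helper name):

    #include <cstdint>

    // x - 1 == 0xFFFFFFFF exactly when x == 0, and that is exactly when the
    // ADD of -1 produces no unsigned carry-out, so no separate CMP is needed.
    bool result_is_all_ones(uint32_t x) {
      uint32_t sum = x + 0xFFFFFFFFu;   // the "add X, -1"
      bool carry = sum < x;             // carry-out of that addition
      return !carry;                    // same as sum == 0xFFFFFFFFu
    }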
Craig Topper e898ba2d15 [X86] Slightly improve our attempted error recovery for 64-bit -mno-sse2 in LowerCallResult to use FP1 if there are two return values.
If the return value is a struct of 2 doubles we need two return
registers.

If SSE2 is disabled we can't return in XMM registers like the ABI says.
After logging an error we attempt to recover by using FP0 instead
of an XMM register. But if the return needs two registers, we may have
already used FP0. So if the register we were supposed to copy to is
XMM1, copy to FP1 in the recovery instead.

This seems to fix the assertion/crash in PR44413.
2019-12-31 00:16:13 -08:00
Craig Topper 47a2fd2df4 [X86] Add X86ISD::PCMPGT to SimplifyMultipleUseDemandedBitsForTargetNode.
If only the sign bit is demanded, and the LHS is all zeroes, then
we can bypass the PCMPGT.
2019-12-30 10:50:25 -08:00
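
A tiny scalar justification of the simplification above (an editor's sketch of the reasoning, not the DAG code):

    #include <cstdint>

    // pcmpgt(0, x) is all-ones exactly when x is negative, so its sign bit
    // always equals x's sign bit -- the compare can be bypassed when only the
    // sign bit is demanded.
    bool sign_bits_match(int32_t x) {
      int32_t cmp = (0 > x) ? -1 : 0;
      return (cmp < 0) == (x < 0);   // holds for every x
    }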
Craig Topper 266cd7717c [X86] Use APInt::isOneValue and ConstantSDNode::isOne. NFC
These are implemented slightly more efficiently than comparing
to 1 in the case that the value is more than 64 bits.
2019-12-29 17:35:49 -08:00
Craig Topper b2f19320dc [X86] Use isOneConstant to simplify some code. NFC 2019-12-29 16:53:38 -08:00
Craig Topper 599d070910 [X86] Remove dyn_casts to ConstantSDNode for operand 1 of X86ISD::VSRLI/VSRAI/VSRLI. Use getConstantOperandVal and APInt operations.
These nodes should only ever be formed with an i8 TargetConstant
so we don't need to check for it to be a constant. It's also
always 8-bits so we don't need to use APInt compare functions.
2019-12-29 16:53:38 -08:00
Craig Topper a5c96e326a [X86] Stop accidentally custom type legalizing v4i32->v4f32 on SSE1 only targets.
We had a Custom operation action for v4i32 on SSE1. But since
v4i32 isn't legal until SSE2 this was not what was intended. The
code that got executed was intended for op legalization and
creates a bunch of v4i32 nodes that all end up scalarized.
2019-12-28 23:11:48 -08:00
Craig Topper ae321faeed [X86] Remove a redundant (scalar_to_vector (extract_vector_elt X))) in LowerUINT_TO_FP_i32. NFCI 2019-12-28 21:49:22 -08:00
Craig Topper fca4736874 [X86] Allow v2i32->v2f32 strict and non-strict uint_to_fp to be widened to v4i32->v4f32 under avx512.
With avx512vl we get v4i32->v4f32 uint_to_fp instructions. With
avx512f we get v16i32->v16f32 instructions which we can use to
emulate v4i32->v4f32.
2019-12-27 00:28:44 -08:00
Craig Topper 20aab49492 [X86] Custom widen v2i32->v2f32 strict_sint_to_fp to avoid scalarization. 2019-12-27 00:28:44 -08:00
Fangrui Song 7a7334663c Delete llvm.{sig,}{setjmp,longjmp} remnant after r136821
Intrinsic has incorrect argument type!
  i32 (i32*)* @llvm.setjmp

*wipes tear*
2019-12-27 00:00:14 -08:00
Craig Topper ecbaf152f8 [X86] Custom widen 128/256-bit vXi32 fp_to_uint on avx512f targets without avx512vl. Similar for vXi64 on avx512dq without avx512vl.
Summary:
Previously we did this with isel patterns that used garbage in
the widened part of the source. But that's not valid for strictfp.
So now we custom widen and use zeroes for the widened elements for
strictfp.

This replaces D71864.

Reviewers: RKSimon, spatel, andrew.w.kaylor, pengfei, LiuChen3

Reviewed By: pengfei

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71879
2019-12-26 22:04:40 -08:00
Craig Topper 50fb3957c1 [X86] Custom widen strict v2f32->v2i32 by padding with zeroes.
For non-strict, generic type legalization will take care of this,
but that doesn't happen currently for strict nodes.
2019-12-26 21:45:18 -08:00
Fangrui Song c4a97b64e3 [X86] Fix -Wmisleading-indentation after D71892 2019-12-26 21:41:16 -08:00
Craig Topper 53ee806d93 [X86][FPEnv] Promote some float strictfp operations to double on i686-pc-windows-msvc to match what we do for non-strict.
The float libcalls are inlined in MSVC's math header where they
just cast to double and use the double libcall. Do the same when
we emit libcalls.
2019-12-26 20:22:24 -08:00
Craig Topper a5d266b9cf [X86] Add custom legalization for strict_uint_to_fp v2i32->v2f32.
I believe the algorithm we use for non-strict is exception safe
for strict. The fsub won't generate any exceptions. After it we
will have an exact version of the i32 integer in a double. Then
we just round it to f32. That rounding will generate a precision
exception if it can't be represented exactly.
2019-12-26 19:10:26 -08:00
Liu, Chen3 1a7b69f5dd add custom operation for strict fpextend/fpround
Differential Revision: https://reviews.llvm.org/D71892
2019-12-27 08:28:33 +08:00
Eric Christopher 1584e2f987 Remove SrcVT only used in an assert and propagate query. 2019-12-26 15:28:32 -08:00
Craig Topper f953882113 [X86] Custom widen 128/256-bit vXi32 uint_to_fp on avx512f targets without avx512vl. Similar for vXi64 sint_to_fp/uint_to_fp on avx512dq without avx512vl.
Previously we widened these through isel patterns, but that
didn't work for STRICT_ nodes. Those need to be padded with
zeroes in the upper bits which is harder to do in isel patterns.
2019-12-26 14:46:56 -08:00
Craig Topper 90ff34e6ab [X86] Add custom widening for v2i32->v2f64 strict_uint_to_fp with AVX512F, but not AVX512VL.
Previously we were widening with isel patterns, but that wasn't
exception safe for strict FP. So now we widen to v4i32->v4f64
during type legalization. And then let op legalization further
widen to v8i32->v8f64.

The vec_int_to_fp.ll changes are caused by us no longer narrowing
extracts of strict_uint_to_fp to the v4i32->v2f64 instruction
without AVX512VL only to have isel rewiden it. Now we just keep
it wide throughout. So we don't have an opportunity to narrow
the load.
2019-12-26 13:40:56 -08:00
Craig Topper bb0138729b [X86] Add custom widening for v2f64->v2i32 strict_fp_to_uint with avx512f, but not avx512vl.
AVX512F added instructions for vector fp_to_uint conversions. With
AVX512VL we can use a specific instruction that does v2f64->v4i32 with
zeroes in the 2 extra elements. For non-strict nodes without AVX512VL
we relied on type legalization to turn it to v4f64->v4i32 which would
later be widened by op legalization to v8f64->v8i32. But type legalization
doesn't currently widen strict nodes since it doesn't know how to
safely and efficiently pad the extra elements. But for X86 we know
padding with zeroes is safe and efficient so do that ourselves.
2019-12-26 12:42:27 -08:00
Craig Topper c91bf72e2c [X86] Merge the SINT_TO_FP/UINT_TO_FP handlers in ReplaceNodeResults since the AVX512DQ+AVX512VL code is very similar in both. NFC 2019-12-26 08:58:34 -08:00
Craig Topper 4e6b0dd681 [X86] Add custom lowering for v2i64->v2f32 strict_sint_to_fp/strict_uint_to_fp for avx512dq+avx512vl targets.
With avx512dq+avx512vl we have an instruction that implements this and
places zeroes in the upper 64-bits of the destination xmm register.
2019-12-26 08:58:34 -08:00
Wang, Pengfei 472bded3ed [X86] Enable STRICT_SINT_TO_FP/STRICT_UINT_TO_FP on X86 backend
Summary: Enable STRICT_SINT_TO_FP/STRICT_UINT_TO_FP on X86 backend

Reviewers: craig.topper, RKSimon, LiuChen3, uweigand, andrew.w.kaylor

Subscribers: hiraditya, llvm-commits, LuoYuanke

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71871
2019-12-26 08:15:13 +08:00
Craig Topper c5b4a2386b [X86] Use zero vector to extend to 512-bits for strict_fp_to_uint v2i1->v2f64 on targets with AVX512F, but not AVX512VL.
In the worst case, this requires a 128-bit move instruction to
implicitly zero the upper bits. In the common case, we should
recognize the producing instruction already zeroed the upper bits.
2019-12-25 10:46:00 -08:00
Craig Topper 2498d88259 [X86] Merge together some common code in LowerFP_TO_INT now that we have STRICT_CVTTP2SI/STRICT_CVTTP2UI nodes. NFC 2019-12-25 09:57:27 -08:00
Liu, Chen3 8304781cae Add missing strict_fp_to_int
Differential Revision: https://reviews.llvm.org/D71867
2019-12-25 16:10:10 +08:00
Craig Topper c06e53119b [X86] Use 128-bit vector instructions for f32/f64->i64 conversions on 32-bit targets with avx512dq and avx512vl instructions.
On 32-bit targets we can't use the scalar instruction so we
insert the scalar into a vector and use packed conversions.
Previously we used either v4f32->v4i64 or v4f64->v4i64 to avoid
some complexity creating target specific ISD opcodes for
v4f32->v2i64. But this causes extra vzeroupper instructions and
possibly frequency throttling on Intel CPUs.

This patch changes this to create a 128-bit vector and uses a
target specific ISD opcode if needed.
2019-12-24 11:20:10 -08:00
Craig Topper a21beccea2 [X86] Add STRICT versions of CVTTP2SI, CVTTP2UI, CMPM, and CMPP.
Differential Revision: https://reviews.llvm.org/D71850
2019-12-24 10:07:04 -08:00
Ulrich Weigand 0d3f782e41 [FPEnv][X86] More strict int <-> FP conversion fixes
Fix several additional problems with the int <-> FP conversion
logic both in common code and in the X86 target. In particular:

- The STRICT_FP_TO_UINT expansion emits a floating-point compare. This
  compare can raise exceptions and therefore needs to be a strict compare.
  I've made it signaling (even though quiet would also be correct) as
  signaling is the more usual default for an LT. This code exists both
  in common code and in the X86 target.

- The STRICT_UINT_TO_FP expansion algorithm was incorrect for strict mode:
  it emitted two STRICT_SINT_TO_FP nodes and then used a select to choose one
  of the results. This can cause spurious exceptions by the STRICT_SINT_TO_FP
  that ends up not chosen. I've fixed the algorithm to use only a single
  STRICT_SINT_TO_FP instead.

- The !isStrictFPEnabled logic in DoInstructionSelection would sometimes do
  the wrong thing because it calls getOperationAction using the result VT.
  But for some opcodes, including [SU]INT_TO_FP, getOperationAction needs to
  be called using the operand VT.

- Remove some (obsolete) code in X86DAGToDAGISel::Select that would mutate
  STRICT_FP_TO_[SU]INT to non-strict versions unnecessarily.

Reviewed by: craig.topper

Differential Revision: https://reviews.llvm.org/D71840
2019-12-23 21:11:45 +01:00
Sanjay Patel 8cefc37be5 [DAGCombine] visitEXTRACT_SUBVECTOR - 'little to big' extract_subvector(bitcast()) support
This moves the X86 specific transform from rL364407
into DAGCombiner to generically handle 'little to big' cases
(for example: extract_subvector(v2i64 bitcast(v16i8))). This
allows us to remove both the x86 implementation and the aarch64
bitcast(extract_subvector(bitcast())) combine.

Earlier patches that dealt with regressions initially exposed
by this patch:
rG5e5e99c041e4
rG0b38af89e2c0

Patch by: @RKSimon (Simon Pilgrim)

Differential Revision: https://reviews.llvm.org/D63815
2019-12-23 10:11:45 -05:00
Craig Topper de2378b4f3 [X86] Fix a KNL miscompile caused by combineSetCC swapping LHS/RHS variables before a later use.
The setcc operands are copied into LHS and RHS variables at the top of the function. We also capture the condition code.

A later piece of code swaps the operands and changes the CC variable as part of a canonicalization to make some other checks simpler. But we might not make the transform we canonicalized for. So we continue on through the function where we can use the swapped LHS/RHS variables and access the original condition code operand instead of the modified CC variable. This leads to a setcc being created with the original condition code, but with swapped operands.

To mitigate this, this patch does a couple things. The LHS/RHS/CC variables are made const to keep them from being modified like this again. The transform that needs the swap now uses temporary copies of the variables. And the transform that used the original condition code operand has been altered to use the CC variable we cached originally. Either of these changes is enough to fix the issue, but we do both to make this code very safe.

I also considered rewriting the swap code in some way to check both permutations without explicitly swapping or needing temporary variables, but held off on that.

Differential Revision: https://reviews.llvm.org/D71736
2019-12-20 11:24:45 -08:00
Craig Topper bf507d4259 [X86] Make EmitCmp into a static function and explicitly return chain result for STRICT_FCMP. NFCI
The only thing it's getting from the X86TargetLowering class is
the subtarget, which we can easily pass. This function only has
one call site now, so this might help the compiler inline it.

Explicitly return both the flag result and the chain result for
STRICT_FCMP nodes. This removes an assumption in the caller that
getValue(1) is the right way to get the chain.
2019-12-19 23:03:15 -08:00
Craig Topper 9b6fafa399 [X86] Directly call EmitTest in two places instead of creating a null constant and calling EmitCmp. NFCI
EmitCmp will just immediately call EmitTest and discard the null
constant only to have EmitTest create it again if it doesn't fold.

So just skip all that and go directly to EmitTest.
2019-12-19 23:03:06 -08:00
Liu, Chen3 2f932b5729 Enable STRICT_FP_TO_SINT/UINT on X86 backend
This patch is mainly for custom lowering the vector operation.

Differential Revision: https://reviews.llvm.org/D71592
2019-12-19 14:49:13 +08:00
Wang, Pengfei 1949235d13 [X86] Add strict fma support
Summary: Add strict fma support

Reviewers: craig.topper, RKSimon, LiuChen3

Subscribers: hiraditya, llvm-commits, LuoYuanke

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71604
2019-12-18 11:44:00 +08:00
Craig Topper 004fdbe041 [X86] Manually format some setOperationAction calls to line up arguments to improve readability. NFC 2019-12-17 16:11:31 -08:00
Kevin P. Neal b1d8576b0a This adds constrained intrinsics for the signed and unsigned conversions
of integers to floating point.

This includes some of Craig Topper's changes for promotion support from
D71130.

Differential Revision: https://reviews.llvm.org/D69275
2019-12-17 10:06:51 -05:00
Alex Richardson be15dfa88f [NFC] Use EVT instead of bool for getSetCCInverse()
Summary:
The use of a boolean isInteger flag (generally initialized using
VT.isInteger()) caused errors in our out-of-tree CHERI backend
(https://github.com/CTSRD-CHERI/llvm-project).

In our backend, pointers use a separate ValueType (iFATPTR) and therefore
.isInteger() returns false. This meant that getSetCCInverse() was using the
floating-point variant and generated incorrect code for us:
`(void *)0x12033091e < (void *)0xffffffffffffffff` would return false.

Committing this change will significantly reduce our merge conflicts
for each upstream merge.

Reviewers: spatel, bogner

Reviewed By: bogner

Subscribers: wuzish, arsenm, sdardis, nemanjai, jvesely, nhaehnle, hiraditya, kbarton, jrtc27, atanasyan, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70917
2019-12-13 12:22:03 +00:00
Sanjay Patel cdf5cfea8e Revert "[SDAG] remove use restriction in isNegatibleForFree() when called from getNegatedExpression()"
This reverts commit d1f0bdf2d2.
The patch can cause infinite loops in DAGCombiner.
2019-12-11 16:56:58 -05:00
Sanjay Patel d1f0bdf2d2 [SDAG] remove use restriction in isNegatibleForFree() when called from getNegatedExpression()
This is an alternate fix for the bug discussed in D70595.
This also includes minimal tests for other in-tree targets
to show the problem more generally.

We check the number of uses as a predicate for whether some
value is free to negate, but that use count can change as we
rewrite the expression in getNegatedExpression(). So something
that was marked free to negate during the cost evaluation
phase becomes not free to negate during the rewrite phase (or
the inverse - something that was not free becomes free).
This can lead to a crash/assert because we expect that
everything in an expression that is negatible to be handled
in the corresponding code within getNegatedExpression().

This patch skips the use check during the rewrite phase.
So we determine that some expression isNegatibleForFree
(identically to without this patch), but during the rewrite,
don't rely on use counts to decide how to create the optimal
expression.

Differential Revision: https://reviews.llvm.org/D70975
2019-12-11 13:30:39 -05:00
Craig Topper 935d41e4bd [X86] Split v64i1 arguments into 2 v32i1s that will be promoted to v32i8 under min-legal-vector-width=256
This is an improvement to 88dacbd436
2019-12-10 17:29:02 -08:00
Wang, Pengfei 21bc8631fe [FPEnv][X86] Constrained FCmp intrinsics enabling on X86
Summary: This is a follow up of D69281, it enables the X86 backend support for the FP comparision.

Reviewers: uweigand, kpn, craig.topper, RKSimon, cameron.mcinally, andrew.w.kaylor

Subscribers: hiraditya, llvm-commits, annita.zhang, LuoYuanke, LiuChen3

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70582
2019-12-11 08:23:09 +08:00
Craig Topper 88dacbd436 [X86] Go back to considering v64i1 as a legal type under min-legal-vector-width=256. Scalarize v64i1 arguments and shuffles under min-legal-vector-width=256.
This reverts 3e1aee2ba7 in favor
of a different approach.

Scalarizing isn't great codegen, but making the type illegal was
interfering with k constraint in inline assembly.
2019-12-10 15:07:55 -08:00
Liu, Chen3 bbf7860b93 add support for strict operation fpextend/fpround/fsqrt on X86 backend
Differential Revision: https://reviews.llvm.org/D71184
2019-12-10 09:04:28 +08:00
Amara Emerson 84fdd9d7a5 [X86] Fix prolog/epilog mismatch for stack protectors on win32-macho.
The xor'ing behaviour is only used for msvc/crt environments; when we're targeting
macho the guard load code doesn't know about the xor in the epilog. Disable xor'ing
when targeting win32-macho to be consistent.

Differential Revision: https://reviews.llvm.org/D71095
2019-12-06 14:44:56 -08:00
Craig Topper 28b573d249 [TargetLowering] Fix another potential FPE in expandFP_TO_UINT
D53794 introduced code to perform the FP_TO_UINT expansion via FP_TO_SINT in a way that would never expose floating-point exceptions in the intermediate steps. Unfortunately, I just noticed there is still a way this can happen. As discussed in D53794, the compiler now generates this sequence:

// Sel = Src < 0x8000000000000000
// Val = select Sel, Src, Src - 0x8000000000000000
// Ofs = select Sel, 0, 0x8000000000000000
// Result = fp_to_sint(Val) ^ Ofs
The problem is with the Src - 0x8000000000000000 expression. As I mentioned in the original review, that expression can never overflow or underflow if the original value is in range for FP_TO_UINT. But I missed that we can get an Inexact exception in the case where Src is a very small positive value. (In this case the result of the sub is ignored, but that doesn't help.)

Instead, I'd suggest to use the following sequence:

// Sel = Src < 0x8000000000000000
// FltOfs = select Sel, 0, 0x8000000000000000
// IntOfs = select Sel, 0, 0x8000000000000000
// Result = fp_to_sint(Val - FltOfs) ^ IntOfs
In the case where the value is already in range of FP_TO_SINT, we now simply compute Val - 0, which now definitely cannot trap (unless Val is a NaN in which case we'd want to trap anyway).

In the case where the value is not in range of FP_TO_SINT, but still in range of FP_TO_UINT, the sub can never be inexact, as Val is between 2^(n-1) and (2^n)-1, i.e. always has the 2^(n-1) bit set, and the sub is always simply clearing that bit.

There is a slight complication in the case where Val is a constant, so we know at compile time whether Sel is true or false. In that scenario, the old code would automatically optimize the sub away, while this no longer happens with the new code. Instead, I've added extra code to check for this case and then just fall back to FP_TO_SINT directly. (This seems to catch even slightly more cases.)

Original version of the patch by Ulrich Weigand. X86 changes added by Craig Topper

Differential Revision: https://reviews.llvm.org/D67105
2019-12-06 14:11:04 -08:00
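
A scalar C++ sketch of the new sequence for f64 -> u64 (an editor's illustration assuming 'Val' in the pseudo-code above denotes the source value, and that the input is in range with round-to-nearest):

    #include <cstdint>

    uint64_t fptoui_via_fptosi(double src) {
      const double two63 = 9223372036854775808.0;            // 2^63, exact in double
      bool sel = src < two63;
      double flt_ofs   = sel ? 0.0 : two63;                   // subtracted in FP
      uint64_t int_ofs = sel ? 0u  : 0x8000000000000000ULL;   // re-added via XOR
      // When src is already in signed range we compute src - 0, which can never
      // be inexact; otherwise the subtraction simply clears bit 63.
      return static_cast<uint64_t>(static_cast<int64_t>(src - flt_ofs)) ^ int_ofs;
    }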
Reid Kleckner c089f02898 [X86] Don't setup and teardown memory for a musttail call
Summary:
musttail calls should not require allocating extra stack for arguments.
Updates to arguments passed in memory should happen in place before the
epilogue.

This bug was mostly a missed optimization, unless inalloca was used and
store to push conversion fired.

If a reserved call frame was used for an inalloca musttail call, the
call setup and teardown instructions would be deleted, and SP
adjustments would be inserted in the prologue and epilogue. You can see
these are removed from several test cases in this change.

In the case where the stack frame was not reserved, i.e. call frame
optimization fires and turns argument stores into pushes, then the
imbalanced call frame setup instructions created for inalloca calls
become a problem. They remain in the instruction stream, resulting in a
call setup that allocates zero bytes (expected for inalloca), and a call
teardown that deallocates the inalloca pack. This deallocation was
unbalanced, leading to subsequent crashes.

Reviewers: hans

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71097
2019-12-06 12:58:54 -08:00
Craig Topper 8267be2995 [X86] Make X86TargetLowering::BuildFILD return a std::pair of SDValues so we explicitly return the chain instead of calling getValue on the single SDValue.
We shouldn't assume that the returned result can be used to get
the other result.

This is prep-work for strict FP where we will also need to pass
the chain result along in more cases.
2019-12-05 17:54:21 -08:00
Liu, Chen3 3041434450 Add strict fp support for instructions fadd/fsub/fmul/fdiv
Differential Revision: https://reviews.llvm.org/D68757
2019-12-06 09:44:33 +08:00
Craig Topper 3d43c73f26 [X86] Remove override of shouldUseStrictFP_TO_INT for fp80. NFC
I suspect this became unnecessary after r354161. Prior to that
we may have been going through the default expansion of FP_TO_UINT
on 64-bit targets and then ending up back in Custom X86 handling
to handle the FP_TO_SINT for it. Now we just Custom handle the
FP_TO_UINT directly. We already need to handle it for 32-bit mode
during type legalization so we wouldn't save any code by using
the default expansion on 64-bit.
2019-12-04 17:58:10 -08:00
Amy Huang 9e978bb01c Add support for lowering 32-bit/64-bit pointers
Summary:
This follows a previous patch that changes the X86 datalayout to represent
mixed size pointers (32-bit sext, 32-bit zext, and 64-bit) with address spaces
(https://reviews.llvm.org/D64931)

This patch implements the address space cast lowering to the corresponding
sign extension, zero extension, or truncate instructions.

Related to https://bugs.llvm.org/show_bug.cgi?id=42359

Reviewers: rnk, craig.topper, RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69639
2019-12-04 11:39:03 -08:00
Craig Topper 8f73a93b2d [X86] Add support for STRICT_FP_TO_UINT/SINT from fp128. 2019-11-27 18:38:32 -08:00
Craig Topper cfce8f2cfb [X86] Add strict fp support for operations of X87 instructions
This is the following patch of D68854.

This patch adds basic operations of X87 instructions, including +, -, *, /, fp extensions and fp truncations.

Patch by Chen Liu(LiuChen3)

Differential Revision: https://reviews.llvm.org/D68857
2019-11-26 10:59:41 -08:00
David Green b5315ae8ff [Codegen][ARM] Add addressing modes from masked loads and stores
MVE has a basic symmetry between its normal load/store operations and
the masked variants. This means that masked loads and stores can use
pre-inc and post-inc addressing modes, just like the standard loads and
stores already do.

To enable that, this patch adds all the relevant infrastructure for
treating masked loads/stores addressing modes in the same way as normal
loads/stores.

This involves:
- Adding an AddressingMode to MaskedLoadStoreSDNode, along with an extra
   Offset operand that is added after the PtrBase.
- Extending the IndexedModeActions from 8 bits to 16 bits to store the
   legality of masked operations as well as normal ones. This array is
   fairly small, so doubling the size still won't make it very large.
   Offset masked loads can then be controlled with
   setIndexedMaskedLoadAction, similar to standard loads.
- The same methods that combine to indexed loads, such as
   CombineToPostIndexedLoadStore, are adjusted to handle masked loads in
   the same way.
- The ARM backend is then adjusted to make use of these indexed masked
   loads/stores.
- The X86 backend is adjusted, hopefully with no functional changes.

Differential Revision: https://reviews.llvm.org/D70176
2019-11-26 16:21:01 +00:00
Craig Topper 1b20908334 [X86] Return Op instead of SDValue() for lowering flags_read/write intrinsics
Returning SDValue() means we didn't handle it and the common
code should try to expand it. But it's a target intrinsic so
expanding won't do anything and will just leave the node alone. But
it will print confusing debug messages.

By returning Op we tell the common code that the node is legal
and shouldn't receive any further processing.
2019-11-25 23:13:30 -08:00
Craig Topper c43b8ec735 [X86] Add support for STRICT_FP_ROUND/STRICT_FP_EXTEND from/to fp128 to/from f32/f64/f80 in 64-bit mode.
These need to emit a libcall like we do for the non-strict version.

32-bit mode needs SoftenFloat support to be implemented for strict FP nodes.

Differential Revision: https://reviews.llvm.org/D70504
2019-11-25 18:18:39 -08:00
Simon Pilgrim 5d9a259ad5 [X86][SSE] Split off generic isLaneCrossingShuffleMask helper. NFC.
Avoid MVT dependency which will be needed in a future patch.
2019-11-23 12:41:03 +00:00
Craig Topper b29e5cdb7c [X86] Add test cases for most of the constrained fp libcalls with fp128.
Add explicit setOperationAction calls for some to match their
non-strict counterparts. This isn't required, but it makes the code
self-documenting that we didn't forget about strict fp. I've
used LibCall instead of Expand since that's more explicitly what
we want.

Only lrint/llrint/lround/llround are missing now.
2019-11-21 18:17:59 -08:00
Craig Topper fc4020dbbe [X86] Mark fp128 FMA as LibCall instead of Expand. Add STRICT_FMA as well.
The Expand code would fall back to LibCall, but this makes it
more explicit.
2019-11-21 18:17:57 -08:00
Craig Topper 7696b99258 [LegalizeDAG][X86] Add support for turning STRICT_FADD/SUB/MUL/DIV into libcalls. Use it for fp128 on x86-64.
This requires a minor hack for f32/f64 strict fadd/fsub to avoid
turning those into libcalls.
2019-11-21 16:19:25 -08:00
Craig Topper 95f44cf44a [X86] Mark vector STRICT_FADD/STRICT_FSUB as Legal and add mutation to X86ISelDAGToDAG
This prevents LegalizeVectorOps from scalarizing them. We'll need
to remove the X86 mutation code when we add isel patterns.
2019-11-21 16:19:18 -08:00
Hiroshi Yamauchi 52e377497d [PGO][PGSO] DAG.shouldOptForSize part.
Summary:
(Split of off D67120)

SelectionDAG::shouldOptForSize changes for profile guided size optimization.

Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70095
2019-11-21 14:16:00 -08:00
Craig Topper 1439059cc7 [X86] Change legalization action for f128 fadd/fsub/fmul/fdiv from Custom to LibCall.
The custom code just emits a libcall, but we can do the same
with generic code. The only difference is that the generic code
can form tail calls where the custom code couldn't. This is
responsible for the test changes.

This avoids needing to modify the Custom handling for strict fp.
2019-11-21 11:44:29 -08:00
Craig Topper 27da569a7a [X86] Fix i16->f128 sitofp to promote the i16 to i32 before trying to form a libcall.
Previously one of the test cases added here gave an error.
2019-11-20 17:09:32 -08:00
Craig Topper 5f3bf5967b [X86] Fix f128->i16 fptosi to promote the i16 to i32 before trying to form a libcall.
Previously one of the test cases added here gave an error.
2019-11-20 17:09:31 -08:00
Craig Topper 7488c0a6f5 [X86] Mark vector STRICT_FP_ROUND as Legal instead of Custom.
The Custom handler doesn't do anything for these nodes anyway.

SelectionDAGISel won't mutate them if they are Legal or Custom.
X86 has custom code for mutating them due to missing isel patterns.
When the isel patterns are added Legal will be the right answer.
So go ahead and change it now since that's where we'll end up.
2019-11-20 13:03:51 -08:00
Reid Kleckner 606a2bd621 [musttail] Don't forward AL on Win64
AL is only used for varargs on SysV platforms. Don't forward it on
Windows. This allows control flow guard to set up an extra hidden
parameter in RAX, as described in PR44049.

This also has the effect of freeing up RAX for use in virtual member
pointer thunks, which may also be a nice little code size improvement on
Win64.

Fixes PR44049

Reviewers: ajpaverd, efriedma, hans

Differential Revision: https://reviews.llvm.org/D70413
2019-11-19 16:54:00 -08:00
Craig Topper c4b41e8d1d [LegalizeDAG][X86] Enable STRICT_FP_TO_SINT/UINT to be promoted
Differential Revision: https://reviews.llvm.org/D70220
2019-11-19 16:14:37 -08:00
Craig Topper 85589f8077 [X86] Add custom type legalization and lowering for scalar STRICT_FP_TO_SINT/UINT
This is a first pass at Custom lowering for these operations. I also updated some of the vector code where it was obviously easy and straightforward. More work needed in follow up.

This enables these operations to be handled with X87 where special rounding control adjustments are needed to perform a truncate.

Still need to fix Promotion in the target independent code in LegalizeDAG.
llrint/llround split into separate test file because we can't make a strict libcall properly yet either and we need to do that when i64 isn't a legal type.

This does not include any isel support. So we still rely on the mutation in SelectionDAGIsel to remove the strict from this stuff later. Except for the X87 stuff which goes through custom nodes that already had chains.

Differential Revision: https://reviews.llvm.org/D70214
2019-11-19 16:05:22 -08:00
Matt Arsenault b696b9dba7 DAG: Add function context to isFMAFasterThanFMulAndFAdd
AMDGPU needs to know the FP mode for the function to answer this
correctly when this is removed from the subtarget.

AArch64 had to make this more complicated by using this from an IR
hook, so add an IR typed overload.
2019-11-19 19:25:26 +05:30
Simon Pilgrim bbf4af3109 [X86][SSE] Remove XFormVExtractWithShuffleIntoLoad to prevent legalization infinite loops (PR43971)
As detailed in PR43971/D70267, the use of XFormVExtractWithShuffleIntoLoad causes issues where we end up in infinite loops of extract(targetshuffle(vecload)) -> extract(shuffle(vecload)) -> extract(vecload) -> extract(targetshuffle(vecload)); there are just too many legalization checks at every stage for us to guarantee that extract(shuffle(vecload)) -> scalarload can occur.

At the moment we see a number of minor regressions as we don't fold extract(shuffle(vecload)) -> scalarload before legal ops, these can be addressed in future patches and extension of X86ISelLowering's combineExtractWithShuffle.
2019-11-19 11:55:44 +00:00
Graham Hunter 3f08ad611a [SVE][CodeGen] Scalable vector MVT size queries
* Implements scalable size queries for MVTs, split out from D53137.

* Contains a fix for FindMemType to avoid using a scalable vector type
  to contain non-scalable types.

* Explicit casts for several places where implicit integer sign
  changes or promotion from 32 to 64 bits caused problems.

* CodeGenDAGPatterns will treat scalable and non-scalable vector types
  as different.

Reviewers: greened, cameron.mcinally, sdesmalen, rovka

Reviewed By: rovka

Differential Revision: https://reviews.llvm.org/D66871
2019-11-18 12:30:59 +00:00
Craig Topper f7e9d81a8e [X86] Don't set the operation action for i16 SINT_TO_FP to Promote just because SSE1 is enabled.
Instead do custom promotion in the handler so that we can still
allow i16 to be used with fp80, and f64 without SSE2.
2019-11-13 14:07:56 -08:00
Craig Topper 787595b2e7 [X86] Fix typo in comment. NFC 2019-11-13 14:07:55 -08:00
Craig Topper fee9067261 [X86] Move all the FP_TO_XINT/XINT_TO_FP setOperationActions into the same !useSoftFloat block. Qualify all of the Promote actions for these with !useSoftFloat too. NFCI
The Promote action doesn't apply until LegalizeDAG. By the time
we get there, we would have already softened all the FP operations
if useSoftFloat was true. So there wouldn't be any operation left
to Promote.
2019-11-13 14:07:54 -08:00
Craig Topper a4b7613a49 [X86] Remove setOperationAction for FP_TO_SINT v8i16.
This is no longer needed after widening legalization as we
custom legalize v8i8 ourselves.

Added entries to the cost model, but bumped the cost slightly
to account for the truncate shuffle that wasn't costed before.
2019-11-12 22:45:52 -08:00
Craig Topper 3e1aee2ba7 [X86] Don't consider v64i1 as a legal type unless v64i8 is also a legal type.
This avoids some nasty issues with argument passing and lowering of
arbitrary v64i8 shuffles.
2019-11-12 14:56:02 -08:00
Craig Topper 0f04ffc073 [X86] Only pass v64i8/v32i16 as v16i32 on non-avx512bw targets if the v16i32 type won't be split by prefer-vector-width=256
Otherwise just let the v64i8/v32i16 types be split to v32i8/v16i16.

In reality this shouldn't happen because it means we have a 512-bit
vector argument, but min-legal-vector-width says a value less than
512. But a 512-bit argument should have been factored into the
preferred vector width.
2019-11-12 14:56:01 -08:00
Craig Topper ff1504da6f [X86] Update stale comment. NFC 2019-11-11 23:55:12 -08:00
Craig Topper 578f3b5dce [X86] Remove setOperationAction lines that say to promote MVT::i1
MVT::i1 should be removed by type legalization before we reach
any code that would act on the promote action.

Mainly to avoid replicating this for strict FP versions of these
operations.
2019-11-11 18:35:57 -08:00
Craig Topper 6c86d6efaf [X86] Remove some else branches after checking for !useSoftFloat() that set operations to Expand.
If we're using soft floats, then these operations should be
softened during type legalization. They'll never get to
LegalizeVectorOps or LegalizeDAG so they don't need to be
Expanded there.
2019-11-11 16:32:19 -08:00
Eli Friedman 5df3a87224 [AArch64][X86] Don't assume __powidf2 is available on Windows.
We had some code for this for 32-bit ARM, but this doesn't really need
to be in target-specific code; generalize it.

(I think this started showing up recently because we added an
optimization that converts pow to powi.)

Differential Revision: https://reviews.llvm.org/D69013
2019-11-08 12:43:21 -08:00
Craig Topper 17eb12fa6d [X86] Remove unused variable. NFC 2019-11-06 22:53:48 -08:00
Craig Topper 1c8460d6e1 [X86] Remove dead code from combineStore.
Leftovers from before we switched to widening legalization.

Fixes PR43919.
2019-11-06 22:24:47 -08:00
Craig Topper 641d2e5232 [X86] Clamp large constant shift amounts for MMX shift intrinsics to 8-bits.
The MMX intrinsics for shift by immediate take a 32-bit shift
amount, but the hardware for shifting by immediate only encodes
8 bits. For the intrinsic we don't require the shift amount to
fit in 8 bits in the frontend because we don't check that it's an
immediate in the frontend. If it is not an immediate, we move it
to an MMX register and use the shift by register.

But if it is an immediate, we'll use the shift by immediate
instruction. But we need to change the shift amount to 8 bits.
We were previously doing this accidentally by masking it in the
encoder. But this can turn a large shift amount into a small
in-bounds shift amount. Instead we should clamp larger shift
amounts to 255 so that they don't become in bounds.

Fixes PR43922
2019-11-06 13:03:18 -08:00
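
A minimal scalar sketch of the clamping rule described in the commit above (illustrative only; the helper name is hypothetical and this is not the actual backend code):

#include <cstdint>

// Truncating a wide shift amount modulo 256 could turn an out-of-range shift
// into an in-range one, changing results. Clamping to 255 keeps any amount
// that doesn't fit in 8 bits safely out of range for MMX elements.
static uint8_t clampMMXShiftImm(uint64_t ShiftAmt) {
  return ShiftAmt > 255 ? 255 : static_cast<uint8_t>(ShiftAmt);
}
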
Dávid Bolvanský ca7f5becf9 [X86ISelLowering] Fixed typo in assert. NFCI. 2019-11-06 20:04:15 +01:00
Sanjay Patel 8e34dd941c [x86] avoid crashing when splitting AVX stores with non-simple type (PR43916)
The store splitting transform was assuming a simple type (MVT),
but that's not necessarily the case as shown in the test.
2019-11-06 09:28:41 -05:00
Simon Pilgrim 37cdac6344 [X86] LowerAVXExtend - fix dodgy self-comparison assert.
PVS Studio noticed that we were asserting "VT.getVectorNumElements() == VT.getVectorNumElements()" instead of "VT.getVectorNumElements() == InVT.getVectorNumElements()".
2019-11-06 12:50:29 +00:00
Benjamin Kramer 5f158d8e21 [X86] Gate select->fmin/fmax transform on NoSignedZeros instead of UnsafeFPMath 2019-11-05 21:28:41 +01:00
Philip Reames 027aa27d95 [X86/Atomics] (Semantically) revert G246098, switch back to the old atomic example
When writing an email for a follow up proposal, I realized one of the diffs in the committed change was incorrect.  Digging into it revealed that the fix is complicated enough to require some thought, so reverting in the meantime.

The problem is visible in this diff (from the revert):
 ; X64-SSE-LABEL: store_fp128:
 ; X64-SSE:       # %bb.0:
-; X64-SSE-NEXT:    movaps %xmm0, (%rdi)
+; X64-SSE-NEXT:    subq $24, %rsp
+; X64-SSE-NEXT:    .cfi_def_cfa_offset 32
+; X64-SSE-NEXT:    movaps %xmm0, (%rsp)
+; X64-SSE-NEXT:    movq (%rsp), %rsi
+; X64-SSE-NEXT:    movq {{[0-9]+}}(%rsp), %rdx
+; X64-SSE-NEXT:    callq __sync_lock_test_and_set_16
+; X64-SSE-NEXT:    addq $24, %rsp
+; X64-SSE-NEXT:    .cfi_def_cfa_offset 8
 ; X64-SSE-NEXT:    retq
   store atomic fp128 %v, fp128* %fptr unordered, align 16
   ret void

The problem here is three fold:
1) x86-64 doesn't guarantee atomicity of anything larger than 8 bytes.  Some platforms observably break this guarantee, others don't, but the codegen isn't considering this, so it's wrong on at least some platforms.
2) When I started to track down the problem, I discovered that DAGCombiner had stripped the atomicity off the store entirely.  This comes down to idiomatic usage of DAG.getStore passing all MMO components separately as opposed to just passing the MMO.
3) On x86 (not -64), there are cases where 8-byte atomicity is supported, but only for floating point operations.  This would seem to imply that operation typing matters for correctness, and DAGCombine happily folds away bitcasts.  I'm not 100% sure there's a problem here, but I'm not entirely sure there isn't either.

I plan on returning to each issue in turn;  sorry for the churn here.
2019-11-05 11:24:27 -08:00
Benjamin Kramer 00e53d912d [X86] Specifically limit fmin/fmax commutativity to NoNaNs + NoSignedZeros
The backend UnsafeFPMath flag is not a superset of all the others, so
limit it to the exact bits needed.
2019-11-05 19:34:06 +01:00
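
A small scalar model of why the stricter gating above is needed (an assumption-laden sketch, not the backend code): the x86 MINSS/MINPS semantics return the second operand when the inputs are unordered or compare equal, so the operation is not commutative for NaN inputs or for +0.0 vs -0.0.

// Models MINSS: the second operand is returned for NaN and for +/-0.0 ties,
// so minLikeX86(0.0f, -0.0f) == -0.0f but minLikeX86(-0.0f, 0.0f) == +0.0f.
static float minLikeX86(float a, float b) {
  return a < b ? a : b;
}
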
Simon Pilgrim 9ad9d1531b [X86] Convert ShrinkMode to scoped enum class. NFCI. 2019-11-04 15:35:20 +00:00
Simon Pilgrim 31ed36d044 [X86] SimplifyDemandedVectorElts - attempt to recombine target shuffle using DemandedElts mask (REAPPLIED)
If we don't demand all elements, then attempt to combine to a simpler shuffle.

At the moment we can only do this if Depth == 0 as combineX86ShufflesRecursively uses Depth to track whether the shuffle has really changed or not - we'll need to change this before we can properly start merging combineX86ShufflesRecursively into SimplifyDemandedVectorElts (see D66004).

This reapplies rL368307 (reverted at rL369167) after the fix for the infinite loop reported at PR43024 was applied at rG3f087e38a2e7b87a5adaaac1c1b61e51220e7ff3
2019-11-04 11:37:57 +00:00
Simon Pilgrim 3f087e38a2 [X86][SSE] combineX86ShufflesRecursively - at Depth==0, only resolve KnownZero if it removes an input.
This stops infinite loops where KnownUndef elements are converted to Zeroable, resulting in KnownZero elements which are then simplified (via SimplifyDemandedElts etc.) back to KnownUndef elements.

Prep fix for PR43024 which will allow rL368307 to be re-applied.
2019-11-03 21:10:47 +00:00
Simon Pilgrim 8f29e4407c [X86][SSE] combineX86ShufflesRecursively - don't bother merging shuffles with empty roots. NFCI.
This doesn't affect actual codegen, but is a minor refactor toward fixing PR43024 where we need to avoid excess changes (folding zeroables etc.) to the shuffle mask at Depth == 0.
2019-11-03 17:46:00 +00:00
Simon Pilgrim 297d96bb60 Fix uninitialized variable warning. NFCI. 2019-11-03 11:15:55 +00:00
Simon Pilgrim 254b8461ac [X86] Move computeZeroableShuffleElements before getTargetShuffleAndZeroables. NFCI.
Prep work toward merging some of the functionality.
2019-11-02 13:38:35 +00:00
Craig Topper eeeb18cd07 [X86] Change the behavior of canWidenShuffleElements used by lowerV2X128Shuffle to match the behavior in lowerVectorShuffle with regards to zeroable elements.
Previously we marked zeroable elements in a way that prevented
the widening check from recognizing that it could widen. Now
we only mark them zeroable if V2 is an all zeros vector. This
matches what we do for widening elements in lowerVectorShuffle.

Fixes PR43866.
2019-11-01 13:06:03 -07:00
Simon Pilgrim 9b0dfdf5e1 [X86][AVX] Add support for and/or scalar bool reduction with AVX512 mask registers
combineBitcastvxi1 only handles bitcast->MOVMSK combines, with mask registers we use BITCAST directly.
2019-11-01 17:55:31 +00:00
Simon Pilgrim ea27d82814 [X86] isFNEG - use switch() instead of if-else tree. NFCI.
In a future patch this will avoid some checks which don't need to be done for some opcodes.
2019-11-01 17:09:04 +00:00
Simon Pilgrim a780b94cd1 [X86][SSE] Convert computeZeroableShuffleElements to emit KnownUndef and KnownZero 2019-10-31 11:21:39 +00:00
Simon Pilgrim f25f3d39df [X86] Add FIXME comment to merge more of computeZeroableShuffleElements and getTargetShuffleAndZeroables 2019-10-30 18:30:01 +00:00
Simon Pilgrim 94a4a2c97f [X86][SSE] combineX86ShuffleChain - use resolveZeroablesFromTargetShuffle helper. NFCI. 2019-10-30 18:30:01 +00:00
Simon Pilgrim 81399002ae [X86] combineOrShiftToFunnelShift - use isOperationLegalOrCustom to check FSHL/FSHR support
Remove hard wired legality check.
2019-10-30 11:52:22 +00:00
Simon Pilgrim 26655376fe [X86] combineOrShiftToFunnelShift - use getShiftAmountTy instead of hardwiring to MVT::i8 2019-10-30 11:52:22 +00:00
Guillaume Chatelet 119b436da1 [Alignment] Use Align for TFI.getStackAlignment() in X86ISelLowering
Summary:
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Reviewers: courbet, craig.topper, rnk

Reviewed By: rnk

Subscribers: rnk, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69034
2019-10-30 10:35:13 +01:00
David Zarzycki f68925d450 [X86] Make memcmp vector lowering handle arbitrary expansions
Teach combineVectorSizedSetCCEquality() to handle arbitrary memcmp
expansions but do not change any default policy for now.

This also fixes a bug in the memcmp expansion itself when large
displacements are needed.

https://reviews.llvm.org/D69507
2019-10-30 09:12:57 +02:00
Philip Reames 2460989eab [SelectionDAG] Enable lowering unordered atomics loads w/LoadSDNode (and stores w/StoreSDNode) by default
Enable the new SelectionDAG representation for unordered loads and stores introduced in r371441 by default.  As a reminder, the new lowering changes the representation of an unordered atomic load from an AtomicSDNode - which is essentially a black box which gets passed through without combines messing with it - to a LoadSDNode w/ an atomic marker on the MMO. The latter parallels the way we handle volatiles, and I've audited the code to ensure that every location which checks one checks the other.

This has been fairly heavily fuzzed, and I examined diffs in a reasonably large corpus of assembly by hand, so I'm reasonably sure this is correct for the common case.  Late in the review for this, it was discovered that I hadn't correctly handled cases which could be legalized into CAS operations. This points out that there's a strong bias in the IR of the frontend I'm working with towards only legal atomics.  If there are problems with this patch, the most likely area will be legalization.

Differential Revision: https://reviews.llvm.org/D69219
2019-10-29 12:46:24 -07:00
Craig Topper 772533d921 [X86] Narrow i64 compares with constant to i32 when the upper 32-bits are known zero.
This catches some cases. There are probably ways to improve this.
I tried doing it as a combine on the setcc, but that broke
some cases involving flag reuse in place of test.

I renamed the isX86CCUnsigned to isX86CCSigned and flipped its
polarity to make it consistent with the similar functions for
ISD::SETCC. This avoids having to classify EQ/NE as signed or unsigned.

Fixes PR43823.

Differential Revision: https://reviews.llvm.org/D69499
2019-10-29 11:38:15 -07:00
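
A scalar model of the equivalence the commit above exploits (hypothetical helper, illustrative only):

#include <cstdint>

// When the upper 32 bits of both sides are known to be zero, a 64-bit
// equality compare gives the same answer as a 32-bit compare of the low halves.
static bool narrowedEquals(uint64_t X, uint64_t C) {
  // The combine is assumed to have proven (X >> 32) == 0 and (C >> 32) == 0.
  return static_cast<uint32_t>(X) == static_cast<uint32_t>(C);
}
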
Simon Pilgrim 501cf25839 [X86] Pull out combineOrShiftToFunnelShift helper. NFCI. 2019-10-29 15:29:51 +00:00
Craig Topper 3da269a248 [X86] Add a DAG combine to turn (and (bitcast (vXi1 (concat_vectors (vYi1 setcc), undef,))), C) into (bitcast (vXi1 (concat_vectors (vYi1 setcc), zero,)))
The legalization of v2i1->i2 or v4i1->i4 bitcasts followed by a setcc can create an and after the bitcast. If we're lucky enough that the input to the bitcast is a concat_vectors where the first operand is a setcc that can natively zero all the upper bits of a k-register, then we should replace the other operands of the concat_vectors with zero in order to remove the AND.

With the AND removed we might be able to use a kortest on the result.

Differential Revision: https://reviews.llvm.org/D69205
2019-10-28 11:27:01 -07:00
David Zarzycki 657e4240b1 [X86] Fix 48/96 byte memcmp code gen
Detect scalar ISD::ZERO_EXTEND generated by memcmp lowering and convert
it to ISD::INSERT_SUBVECTOR.

https://reviews.llvm.org/D69464
2019-10-28 08:41:45 +02:00
David Zarzycki 11c920207a [X86] Prefer KORTEST on Knights Landing or later for memcmp()
PTEST and especially the MOVMSK instructions are slow on Knights Landing
or later. As a bonus, this patch increases instruction parallelism by
emitting:
    KORTEST(PCMPNEQ(a, b), PCMPNEQ(c, d)) == 0
Instead of:
    KORTEST(AND(PCMPEQ(a, b), PCMPEQ(c, d))) == ~0

https://reviews.llvm.org/D69157
2019-10-26 21:14:57 +03:00
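
A rough intrinsics sketch of the preferred pattern from the commit above (assumes AVX512BW and a hypothetical 128-byte equality check; not the code the backend actually emits):

#include <immintrin.h>

// Compare two pairs of 64-byte blocks independently, then OR the "not equal"
// masks and test for zero - the same shape as KORTEST(PCMPNEQ, PCMPNEQ) == 0.
static bool blocksEqual(__m512i a, __m512i b, __m512i c, __m512i d) {
  __mmask64 ne0 = _mm512_cmpneq_epi8_mask(a, b);
  __mmask64 ne1 = _mm512_cmpneq_epi8_mask(c, d);
  return (ne0 | ne1) == 0; // KORTEST sets ZF when the OR of the masks is zero
}
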
Craig Topper 3dd0a896b6 [X86] Add a check for SSE2 to the top of combineReductionToHorizontal.
Without this, we can create a PSADBW node that isn't legal.
2019-10-25 11:11:32 -07:00
Simon Pilgrim a4d55a2c36 [X86] combineX86ShufflesRecursively - assert the root mask is legal. NFCI. 2019-10-23 07:33:29 -07:00
Simon Pilgrim b446356bf3 [X86][SSE] Add OR(EXTRACTELT(X,0),OR(EXTRACTELT(X,1))) -> MOVMSK+CMP reduction combine
llvm-svn: 375463
2019-10-21 22:36:31 +00:00
Simon Pilgrim 7c15c4fb17 [X86] Rename matchBitOpReduction to matchScalarReduction. NFCI.
This doesn't need to be just for bitops, but the ops do need to be fully associative.

llvm-svn: 375445
2019-10-21 19:19:50 +00:00
Craig Topper e78414622d [X86] Check Subtarget.hasSSE3() before calling shouldUseHorizontalOp and emitting X86ISD::FHADD in LowerUINT_TO_FP_i64.
This was a regression from r375341.

Fixes PR43729.

llvm-svn: 375381
2019-10-20 23:54:19 +00:00
Simon Pilgrim 10213b9073 [X86] Pulled out helper to decode target shuffle element sentinel values to 'Zeroable' known undef/zero bits. NFCI.
Renamed 'resolveTargetShuffleAndZeroables' to 'resolveTargetShuffleFromZeroables' to match.

llvm-svn: 375348
2019-10-19 16:58:24 +00:00
Simon Pilgrim b5088aa944 [X86][SSE] lowerV16I8Shuffle - tryToWidenViaDuplication - undef unpack args
tryToWidenViaDuplication lowers using the shuffle_v8i16(unpack_v16i8(shuffle_v8i16(x),shuffle_v8i16(x))) pattern, but the unpack only needs the even/odd 16i8 args if the original v16i8 shuffle mask references the even/odd elements - which isn't true for many extension style shuffles.

llvm-svn: 375342
2019-10-19 13:18:02 +00:00
Simon Pilgrim 6ada70d1b5 [X86][SSE] LowerUINT_TO_FP_i64 - only use HADDPD for size/fast-hops
We were always generating a single source HADDPD, but really we should only do this if shouldUseHorizontalOp says it's a good idea.

Differential Revision: https://reviews.llvm.org/D69175

llvm-svn: 375341
2019-10-19 11:53:48 +00:00
Simon Pilgrim 696794b66e [X86] combineX86ShufflesRecursively - pull out isTargetShuffleVariableMask. NFCI.
llvm-svn: 375253
2019-10-18 16:39:01 +00:00
David Zarzycki 7b9fd37fa1 [X86] Emit KTEST when possible
https://reviews.llvm.org/D69111

llvm-svn: 375197
2019-10-18 03:45:52 +00:00
Sam Parker 39af8a3a3b [DAGCombine][ARM] Enable extending masked loads
Add generic DAG combine for extending masked loads.

Allow us to generate sext/zext masked loads which can access v4i8,
v8i8 and v4i16 memory to produce v4i32, v8i16 and v4i32 respectively.

Differential Revision: https://reviews.llvm.org/D68337

llvm-svn: 375085
2019-10-17 07:55:55 +00:00
Simon Pilgrim 50dc09dd16 [X86] combineX86ShufflesRecursively - split the getTargetShuffleInputs call from the resolveTargetShuffleAndZeroables call.
Exposes an issue in getFauxShuffleMask where the OR(SHUFFLE,SHUFFLE) decode should always resolve zero/undef elements.

Part of the fix for PR43024 where ideally we shouldn't call resolveTargetShuffleAndZeroables for Depth == 0

llvm-svn: 374928
2019-10-15 17:59:13 +00:00
David Zarzycki 59390efef2 [X86] Make memcmp() use PTEST if possible and also enable AVX1
llvm-svn: 374922
2019-10-15 17:40:12 +00:00
Simon Pilgrim 70778444c7 [X86] Resolve KnownUndef/KnownZero bits into target shuffle masks in helper. NFCI.
llvm-svn: 374878
2019-10-15 11:13:51 +00:00
Craig Topper b2661a2d15 [X86] Don't check for VBROADCAST_LOAD being a user of the source of a VBROADCAST when trying to share broadcasts.
The only things VBROADCAST_LOAD uses are an address and a chain
node. It has no vector inputs.

So if it's a user of the source of another broadcast, that could
only mean one of two things. The other broadcast is broadcasting
the address of the broadcast_load. Or the source is a load and
the use we're seeing is the chain result from that load. Neither
of these cases make sense to combine here.

This issue was reported post-commit r373871. Test case has not
been reduced yet.

llvm-svn: 374862
2019-10-15 06:10:11 +00:00
Craig Topper f4d03213f3 [X86] Teach EmitTest to handle ISD::SSUBO/USUBO in order to use the Z flag from the subtract directly during isel.
This prevents isel from emitting a TEST instruction that
optimizeCompareInstr will need to remove later.

In some of the modified tests, the SUB gets duplicated due to
the flags being needed in two places and being clobbered in
between. optimizeCompareInstr was able to optimize away the TEST
that was using the result of one of them, but optimizeCompareInstr
doesn't know to turn SUB into CMP after removing the TEST. It
only knows how to turn SUB into CMP if the result was already
dead.

With this change the TEST never exists, so optimizeCompareInstr
doesn't have to remove it. Then it can just turn the SUB into
CMP immediately.

Fixes PR43649.

llvm-svn: 374755
2019-10-14 06:47:56 +00:00
Simon Pilgrim 11495e5acb [X86] getTargetShuffleInputs - Control KnownUndef mask element resolution as well as KnownZero.
We were already controlling whether the KnownZero elements were being written to the target mask; this extends it to the KnownUndef elements as well, so we can prevent the target shuffle mask from being manipulated at all.

llvm-svn: 374732
2019-10-13 19:35:35 +00:00
Craig Topper 25eb219959 [X86] Enable use of avx512 saturating truncate instructions in more cases.
This enables use of the saturating truncate instructions when the
result type is less than 128 bits. It also enables the use of
saturating truncate instructions on KNL when the input is less
than 512 bits. We can do this by widening the input and then
extracting the result.

llvm-svn: 374731
2019-10-13 19:07:28 +00:00
Simon Pilgrim 3efafd6c38 [X86] SimplifyMultipleUseDemandedBitsForTargetNode - use getTargetShuffleInputs with KnownUndef/Zero results.
llvm-svn: 374725
2019-10-13 17:03:11 +00:00
Simon Pilgrim e4c58db8bc [X86] getTargetShuffleInputs - add KnownUndef/Zero output support
Adjust SimplifyDemandedVectorEltsForTargetNode to use the known elts masks instead of recomputing it locally.

llvm-svn: 374724
2019-10-13 17:03:02 +00:00
Craig Topper d50cb9ac8c [X86] Add a one use check on the setcc to the min/max canonicalization code in combineSelect.
This seems to improve std::midpoint code where we have a min and
a max with the same condition. If we split the setcc, we can end
up with two compares if one of the operands is a constant, since
we aggressively canonicalize compares with constants.
For non-constants it can interfere with our ability to share
control flow if we need to expand cmovs into control flow.

I'm also not sure I understand this min/max canonicalization code.
The motivating case talks about comparing with 0. But we don't
check for 0 explicitly.

Removes one instruction from the codegen for PR43658.

llvm-svn: 374706
2019-10-13 06:48:05 +00:00
Craig Topper bf57aa2b25 [X86] Enable v4i32->v4i16 and v8i16->v8i8 saturating truncates to use pack instructions with avx512.
llvm-svn: 374705
2019-10-13 05:47:47 +00:00
Simon Pilgrim 6716512670 [X86] scaleShuffleMask - use size_t Scale to avoid overflow warnings
llvm-svn: 374674
2019-10-12 18:33:47 +00:00
Simon Pilgrim 66417a9f03 Replace for-loop of SmallVector::push_back with SmallVector::append. NFCI.
llvm-svn: 374669
2019-10-12 16:37:02 +00:00
Simon Pilgrim 37041c7d22 Fix cppcheck shadow variable name warnings. NFCI.
llvm-svn: 374668
2019-10-12 16:36:52 +00:00
Simon Pilgrim 6446079add [X86] Use any_of/all_of patterns in shuffle mask pattern recognisers. NFCI.
llvm-svn: 374667
2019-10-12 16:36:44 +00:00
Simon Pilgrim 9f0885d38d [X86][SSE] Avoid unnecessary PMOVZX in v4i8 sum reduction
This should go away once D66004 has landed and we can simplify shuffle chains using demanded elts.

llvm-svn: 374658
2019-10-12 15:19:13 +00:00
Craig Topper 9bd542dcd5 [X86] Use pack instructions for packus/ssat truncate patterns when 256-bit is the largest legal vector and the result type is at least 256 bits.
Since the input type is larger than 256 bits, we'll need to do some
concatenating to reassemble the results. The pack instructions'
ability to concatenate while packing makes this a shorter/faster
sequence.

llvm-svn: 374643
2019-10-12 07:59:29 +00:00
Craig Topper 3472feb94c [X86] Fold a VTRUNCS/VTRUNCUS+store into a saturating truncating store.
We already did this for VTRUNCUS with a specific combination of
types. This extends this to VTRUNCS and handles any types where
a truncating store is legal.

llvm-svn: 374615
2019-10-12 00:01:08 +00:00
Simon Pilgrim af6c15f679 [X86][SSE] Add support for v4i8 add reduction
llvm-svn: 374579
2019-10-11 17:54:15 +00:00
Simon Pilgrim 6434eac860 [X86] isFNEG - add recursion depth limit
Now that it's used by isNegatibleForFree, we should try to avoid costly deep recursion

llvm-svn: 374534
2019-10-11 11:34:18 +00:00
Craig Topper ccc85ac855 [X86] Add a DAG combine to turn v16i16->v16i8 VTRUNCUS+store into a saturating truncating store.
llvm-svn: 374509
2019-10-11 04:16:49 +00:00
Craig Topper b560fd6c52 [X86] Improve the AVX512 bailout in combineTruncateWithSat to allow pack instructions in more situations.
If we don't have VLX we won't end up selecting a saturating
truncate for 256-bit or smaller vectors so we should just use
the pack lowering.

llvm-svn: 374487
2019-10-11 00:38:51 +00:00
Craig Topper 4ee7f8365f [X86] Guard against leaving a dangling node in combineTruncateWithSat.
When handling the packus pattern for i32->i8, we do a two-step
process using a packss to i16 followed by a packus to i8. If the
final i8 step produces a type with fewer than 64 bits, the packus step
will return SDValue(), but the i32->i16 step might have succeeded.
This leaves the nodes from the middle step dangling.

Guard against this by pre-checking that the number of elements is
at least 8 before doing the middle step.

With that check in place this should mean the only other
case the middle step itself can fail is when SSE2 is disabled. So
add an early SSE2 check then just assert that neither the middle
or final step ever fail.

llvm-svn: 374460
2019-10-10 21:46:52 +00:00
Craig Topper 0e561437c5 [X86] Use packusdw+vpmovuswb to implement v16i32->V16i8 that clamps signed inputs to be between 0 and 255 when zmm registers are disabled on SKX.
If we've disabled zmm registers, the v16i32 will need to be split. This split will propagate through the min/max and the truncate. This creates two sequences that need to be concatenated back to v16i8. We can instead use packusdw to do part of the clamping, truncating, and concatenating all at once. Then we can use a vpmovuswb to finish off the clamp.

Differential Revision: https://reviews.llvm.org/D68763

llvm-svn: 374431
2019-10-10 19:40:44 +00:00
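
A per-lane scalar model of what packusdw followed by vpmovuswb achieve together in the commit above (illustrative only; requires C++17 for std::clamp):

#include <algorithm>
#include <cstdint>

// Each signed 32-bit input lane is clamped to [0, 255] and truncated to 8 bits.
static uint8_t clampSignedToU8(int32_t V) {
  return static_cast<uint8_t>(std::clamp(V, 0, 255));
}
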
Simon Pilgrim 6a38474f77 [X86] combineFMA - Convert to use isNegatibleForFree/GetNegatedExpression.
Split off from D67557.

llvm-svn: 374356
2019-10-10 14:14:12 +00:00
Simon Pilgrim f096443a98 [X86] combineFMADDSUB - Convert to use isNegatibleForFree/GetNegatedExpression.
Split off from D67557, fixes the compile time regression mentioned in rL372756

llvm-svn: 374351
2019-10-10 13:46:44 +00:00
Simon Pilgrim 08c2f530ec [DAG][X86] Add isNegatibleForFree/GetNegatedExpression override placeholders. NFCI.
Continuing to undo the rL372756 reversion.

Differential Revision: https://reviews.llvm.org/D67557

llvm-svn: 374345
2019-10-10 13:29:35 +00:00
Philip Reames 931120846e Conservatively add volatility and atomic checks in a few places
As background, starting in D66309, I'm working on supporting unordered atomics analogous to volatile flags on normal LoadSDNode/StoreSDNodes for X86.

As part of that, I spent some time going through usages of LoadSDNode and StoreSDNode looking for cases where we might have missed a volatility check or need an atomic check. I couldn't find any cases that clearly miscompile - i.e. no test cases - but a couple of pieces of code look suspicious, though I can't figure out how to exercise them.

This patch adds defensive checks and asserts in the places my manual audit found. If anyone has any ideas on how to either a) disprove any of the checks, or b) hit the bug they might be fixing, I welcome suggestions.

Differential Revision: https://reviews.llvm.org/D68419

llvm-svn: 374261
2019-10-09 23:43:33 +00:00
Craig Topper be7f81ece9 [X86] Shrink zero extends of gather indices from type less than i32 to types larger than i32.
Gather instructions can use i32 or i64 elements for indices. If
the index is zero extended from a type smaller than i32 to i64, we
can shrink the extend to just extend to i32.

llvm-svn: 373982
2019-10-07 23:03:12 +00:00
Reid Kleckner f9b67b810e [X86] Add new calling convention that guarantees tail call optimization
When the target option GuaranteedTailCallOpt is specified, calls with
the fastcc calling convention will be transformed into tail calls if
they are in tail position. This diff adds a new calling convention,
tailcc, currently supported only on X86, which behaves the same way as
fastcc, except that the GuaranteedTailCallOpt flag does not need to
be enabled in order to enable tail call optimization.

Patch by Dwight Guth <dwight.guth@runtimeverification.com>!

Reviewed By: lebedev.ri, paquette, rnk

Differential Revision: https://reviews.llvm.org/D67855

llvm-svn: 373976
2019-10-07 22:28:58 +00:00
Simon Pilgrim 9c2e123043 [X86][SSE] getTargetShuffleInputs - move VT.isSimple/isVector checks inside. NFCI.
Stop all the callers from having to check the value type before calling getTargetShuffleInputs.

llvm-svn: 373915
2019-10-07 16:15:20 +00:00
Simon Pilgrim b4ba3cbda0 [X86][AVX] Access a scalar float/double as a free extract from a broadcast load (PR43217)
If a fp scalar is loaded and then used as both a scalar and a vector broadcast, perform the load as a broadcast and then extract the scalar for 'free' from the 0th element.

This involved switching the order of the X86ISD::BROADCAST combines so we only convert to X86ISD::BROADCAST_LOAD once all other canonicalizations have been attempted.

Adds a DAGCombinerInfo::recursivelyDeleteUnusedNodes wrapper.

Fixes PR43217

Differential Revision: https://reviews.llvm.org/D68544

llvm-svn: 373871
2019-10-06 21:11:45 +00:00
Simon Pilgrim d84cd7caa8 Fix signed/unsigned warning. NFCI
llvm-svn: 373870
2019-10-06 19:54:20 +00:00
Simon Pilgrim 739c9f0b79 [X86][SSE] Remove resolveTargetShuffleInputs and use getTargetShuffleInputs directly.
Move the resolveTargetShuffleInputsAndMask call to after the shuffle mask combine before the undef/zero constant fold instead.

llvm-svn: 373868
2019-10-06 19:07:00 +00:00
Simon Pilgrim 42010dc810 [X86][SSE] Don't merge known undef/zero elements into target shuffle masks.
Replaces setTargetShuffleZeroElements with getTargetShuffleAndZeroables which reports the Zeroable elements but doesn't merge them into the decoded target shuffle mask (the merging has been moved up into getTargetShuffleInputs until we can get rid of it entirely).

This is part of the work to fix PR43024 and allow us to use SimplifyDemandedElts to simplify shuffle chains - we need to get to a point where the target shuffle mask isn't adjusted by its source inputs but instead we cache them in a parallel Zeroable mask.

llvm-svn: 373867
2019-10-06 19:06:45 +00:00
Craig Topper 570ae49d03 [X86] Add custom type legalization for v16i64->v16i8 truncate and v8i64->v8i8 truncate when v8i64 isn't legal
Summary:
The default legalization for v16i64->v16i8 tries to create a multiple stage truncate concatenating after each stage and truncating again. But avx512 implements truncates with multiple uops. So it should be better to truncate all the way to the desired element size and then concatenate the pieces using unpckl instructions. This minimizes the number of 2 uop truncates. The unpcks are all single uop instructions.

I tried to handle this by just custom splitting the v16i64->v16i8 shuffle. And hoped that the DAG combiner would leave the two halves in the state needed to make D68374 do the job for each half. This worked for the first half, but the second half got messed up. So I've implemented custom handling for v8i64->v8i8 when v8i64 needs to be split to produce the VTRUNCs directly.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68428

llvm-svn: 373864
2019-10-06 18:43:08 +00:00
Simon Pilgrim 5c876303ec [X86][SSE] resolveTargetShuffleInputs - call getTargetShuffleInputs instead of using setTargetShuffleZeroElements directly. NFCI.
llvm-svn: 373855
2019-10-06 15:42:25 +00:00
Simon Pilgrim 2dee7e5561 [X86][AVX] combineExtractSubvector - merge duplicate variables. NFCI.
llvm-svn: 373849
2019-10-06 13:25:10 +00:00
Simon Pilgrim 032dd9b086 [X86][SSE] matchVectorShuffleAsBlend - use Zeroable element mask directly.
We can make use of the Zeroable mask to indicate which elements we can safely set to zero instead of creating a target shuffle mask on the fly.

This allows us to remove createTargetShuffleMask.

This is part of the work to fix PR43024 and allow us to use SimplifyDemandedElts to simplify shuffle chains - we need to get to a point where the target shuffle mask isn't adjusted by its source inputs in setTargetShuffleZeroElements but instead we cache them in a parallel Zeroable mask.

llvm-svn: 373846
2019-10-06 12:38:38 +00:00
David Zarzycki 7653ff398d [X86] Enable AVX512BW for memcmp()
llvm-svn: 373845
2019-10-06 10:25:52 +00:00
Simon Pilgrim 8815be04ec [X86][AVX] Push sign extensions of comparison bool results through bitops (PR42025)
As discussed on PR42025, with more complex boolean math we can end up with many truncations/extensions of the comparison results through each bitop.

This patch handles the cases introduced in combineBitcastvxi1 by pushing the sign extension through the AND/OR/XOR ops so it's just the original SETCC ops that get extended.

Differential Revision: https://reviews.llvm.org/D68226

llvm-svn: 373834
2019-10-05 20:49:34 +00:00
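
The rewrite above relies on sign extension distributing over bitwise ops for boolean (0 / all-ones) lanes, i.e. sext(a) OP sext(b) == sext(a OP b). A quick single-lane scalar check of that identity (illustrative only):

#include <cassert>
#include <cstdint>

static void checkSextDistributes(bool a, bool b) {
  auto Sext = [](bool V) { return static_cast<int32_t>(V ? -1 : 0); };
  assert((Sext(a) & Sext(b)) == Sext(a && b));
  assert((Sext(a) | Sext(b)) == Sext(a || b));
  assert((Sext(a) ^ Sext(b)) == Sext(a != b));
}
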
Simon Pilgrim 9ecacb0d54 [X86] lowerShuffleAsLanePermuteAndRepeatedMask - variable renames. NFCI.
Rename some variables to match lowerShuffleAsRepeatedMaskAndLanePermute - prep work toward adding some equivalent sublane functionality.

llvm-svn: 373832
2019-10-05 16:08:30 +00:00
Craig Topper 074fa390d2 [X86] Add DAG combine to form saturating VTRUNCUS/VTRUNCS from VTRUNC
We already do this for ISD::TRUNCATE, but we can do the same for X86ISD::VTRUNC

Differential Revision: https://reviews.llvm.org/D68432

llvm-svn: 373765
2019-10-04 17:53:18 +00:00
Craig Topper 185ee6ec7c [X86] Add v32i8 shuffle lowering strategy to recognize two v4i64 vectors truncated to v4i8 and concatenated into the lower 8 bytes with undef/zero upper bytes.
This patch recognizes the shuffle pattern we get from a
v8i64->v8i8 truncate when v8i64 isn't a legal type.

With VLX we can use two VTRUNCs, unpckldq, and a insert_subvector.

Differential Revision: https://reviews.llvm.org/D68374

llvm-svn: 373645
2019-10-03 18:34:42 +00:00
Simon Pilgrim eb8d85e5db [X86] matchShuffleWithSHUFPD - use Zeroable element mask directly. NFCI.
We can make use of the Zeroable mask to indicate which elements we can safely set to zero instead of creating a target shuffle mask on the fly.

This only leaves one user of createTargetShuffleMask which we can hopefully get rid of in a similar manner.

This is part of the work to fix PR43024 and allow us to use SimplifyDemandedElts to simplify shuffle chains - we need to get to a point where the target shuffle mask isn't adjusted by its source inputs in setTargetShuffleZeroElements but instead we cache them in a parallel Zeroable mask.

llvm-svn: 373641
2019-10-03 18:13:50 +00:00
Craig Topper eb420aa379 [X86] Add DAG combine to turn (bitcast (vbroadcast_load)) into just a vbroadcast_load if the scalar size is the same.
This improves broadcast load folding of i64 elements on 32-bit
targets where i64 isn't legal.

Previously we had to represent these as vXf64 vbroadcast_loads and
a bitcast to vXi64. But we didn't have any isel patterns
looking for that.

This also allows us to remove or simplify some isel patterns that
were looking for bitcasted vbroadcast_loads.

llvm-svn: 373566
2019-10-03 05:30:02 +00:00
Craig Topper 74c7d6be28 [X86] Rewrite to the vXi1 subvector insertion code to not rely on the value of bits that might be undef
The previous code tried to do a trick where we would extract the subvector from the location we were inserting. Then xor that with the new value. Take the xored value and clear out the bits above the subvector size. Then shift that xored subvector to the insert location. And finally xor that with the original vector. Since the old subvector was used in both xors, this would leave just the new subvector at the inserted location. Since the surrounding bits had been zeroed no other bits of the original vector would be modified.

Unfortunately, if the old subvector came from undef we might aggressively propagate the undef. Then we end up with the XORs not cancelling because they aren't using the same value for the two uses of the old subvector. @bkramer gave me a case that demonstrated this, but we haven't reduced it enough to make it easily readable to see what's happening.

This patch uses a safer, but more costly, approach. It isolates the bits above the insertion point and the bits below it and ORs those together, leaving 0 at the insertion location. Then it widens the subvector with 0s in the upper bits and shifts it into position with 0s in the lower bits. Then we do another OR.

Differential Revision: https://reviews.llvm.org/D68311

llvm-svn: 373495
2019-10-02 17:47:09 +00:00
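
A scalar model of the safer scheme described above, treating a vXi1 vector as bits of a 64-bit mask (hypothetical helper, not the actual lowering; assumes Idx + SubWidth <= 64):

#include <cstdint>

// The old bits at the insertion point are never read, so undef in them cannot
// leak into the result the way it could with the XOR trick.
static uint64_t insertSubMask(uint64_t Vec, uint64_t Sub, unsigned Idx,
                              unsigned SubWidth) {
  uint64_t LowBits = (SubWidth == 64) ? ~0ULL : ((1ULL << SubWidth) - 1);
  uint64_t Hole = LowBits << Idx;            // bits covered by the inserted subvector
  uint64_t Kept = Vec & ~Hole;               // bits below and above, zero in the hole
  uint64_t Widened = (Sub & LowBits) << Idx; // zero-extended subvector in position
  return Kept | Widened;                     // final OR combines the pieces
}
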
Craig Topper 8c19925f42 [X86] Add a DAG combine to shrink vXi64 gather/scatter indices that are constant with sufficient sign bits to fit in vXi32
The gather/scatter instructions can implicitly sign extend the indices. If we're operating on 32-bit data, a v16i64 index can force a v16i32 gather to be split in two since the index needs 2 registers. If we can shrink the index to i32, we can avoid the split. It should always be safe to shrink the index regardless of the number of elements. We have gather/scatter instructions that can use a v2i32 index stored in a v4i32 register with a v2i64 data size.

I've limited this to before type legalization to avoid creating a v2i32 after type legalization. We could check for it, but we'd also need testing. I'm also only handling build_vectors with no bitcasts to be sure the truncate will constant fold.

Differential Revision: https://reviews.llvm.org/D68247

llvm-svn: 373408
2019-10-01 23:18:31 +00:00
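
The narrowing condition in the commit above amounts to checking that the 64-bit constant index survives a round trip through i32 sign extension (illustrative check only):

#include <cstdint>

// True when the index has at least 33 sign bits, i.e. sign-extending its low
// 32 bits reproduces the original value, so truncating to i32 is lossless.
static bool fitsInSignedI32(int64_t Idx) {
  return Idx == static_cast<int32_t>(Idx);
}
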
Craig Topper 105e82edde [X86] Add a VBROADCAST_LOAD ISD opcode representing a scalar load broadcasted to a vector.
Summary:
This adds the ISD opcode and a DAG combine to create it. There are
probably some places where we can directly create it, but I'll
leave that for future work.

This updates all of the isel patterns to look for this new node.
I had to add a few additional isel patterns for aligned extloads
which we should probably fix with a DAG combine or something. This
does mean that the broadcast load folding for avx512 can no
longer match a broadcasted aligned extload.

There's still some work to do here for combining a broadcast of
a broadcast_load. We also need to improve extractelement or
demanded vector elements of a broadcast_load. I'll try to get
those done before I submit this patch.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68198

llvm-svn: 373349
2019-10-01 16:28:20 +00:00
Matt Arsenault f24ac13aaa TLI: Remove DAG argument from getRegisterByName
Replace with the MachineFunction. X86 is the only user, and only uses
it for the function. This removes one obstacle from using this in
GlobalISel. The other is the more tolerable EVT argument.

The X86 use of the function seems questionable to me. It checks
hasFP before frame lowering.

llvm-svn: 373292
2019-10-01 01:44:39 +00:00
Craig Topper 3405237f77 [X86] Mask off upper bits of splat element in LowerBUILD_VECTORvXi1 when forming a SELECT.
The i1 scalar would have been type legalized to i8, but that
doesn't guarantee anything about the upper bits. If we're going
to use it as a condition, we need to make sure the upper bits are 0.

I've special cased ISD::SETCC conditions since that should
guarantee zero upper bits. We could go further and use
computeKnownBits, but we have no tests that would need that.

Fixes PR43507.

llvm-svn: 373246
2019-09-30 18:43:44 +00:00
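
A scalar model of the fix above (illustrative only): an i1 promoted to i8 may carry garbage in bits 7..1, so the splat element is masked down to bit 0 before being used as a condition.

#include <cstdint>

static bool asCondition(uint8_t PromotedI1) {
  return (PromotedI1 & 1) != 0; // only bit 0 is meaningful after promotion
}
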
Craig Topper 8216414fd1 [X86] Address post-commit review from code I accidentally commited in r373136.
See https://reviews.llvm.org/D68167

llvm-svn: 373245
2019-09-30 18:43:27 +00:00
Craig Topper 299ebacfe9 [X86] Add ANY_EXTEND to switch in ReplaceNodeResults, but just fall back to default handling.
ANY_EXTEND of v8i8 is marked Custom on AVX512 for handling extends
from v8i8. But the type legalization infrastructure will call
ReplaceNodeResults for v8i8 results. We should just defer to the
default handling instead of asserting in the default case of the switch.

Fixes PR43509.

llvm-svn: 373234
2019-09-30 17:14:22 +00:00
Craig Topper 1b0ea0a12e [X86] Split v16i32/v8i64 bitreverse on avx512f targets without avx512bw to enable the use of vpshufb on the 256-bit halves.
llvm-svn: 373177
2019-09-30 03:14:38 +00:00
Fangrui Song 6c320b22cd [X86] Fix -Wunused-variable in -DLLVM_ENABLE_ASSERTIONS=off builds after r373174
llvm-svn: 373175
2019-09-30 02:06:23 +00:00
Craig Topper 1069c01924 [X86] Remove -x86-experimental-vector-widening-legalization command line flag
This was added back to allow some performance regressions to be
investigated. The main perf issue was fixed shortly after adding
this back and no other major issues have been reported. So I
think its safe to remove this again.

llvm-svn: 373174
2019-09-29 23:32:37 +00:00
Craig Topper 0ac4aacea8 [X86] Enable canonicalizeBitSelect for AVX512 since we can use VPTERNLOG now.
llvm-svn: 373155
2019-09-29 01:24:22 +00:00
Craig Topper 82a707e941 [X86] Stop using UpdateNodeOperands in combineGatherScatter. Create new nodes like most other DAG combines.
Creating new nodes is what we usually do. Having to explicitly
check that we don't update to an existing node and having
to manually manage the worklist is unusual.

We can probably add a helper function to reduce the duplication
of having to check if we should create a gather or scatter, but
I wanted to just get the simple thing done.

llvm-svn: 373137
2019-09-28 01:08:46 +00:00