Commit Graph

4101 Commits

Author SHA1 Message Date
Elena Demikhovsky 2c0780b8e5 AVX-512: Fixed BT instruction selection.
The condition expression (a >> n) & 1 is converted to a "bt a, n" instruction. It works on all Intel targets.
But on AVX-512 it was broken because the expression is modified to (truncate (a >> n) to i1).

I added the new sequence (truncate (a >> n) to i1) to the BT pattern.
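
As a source-level illustration (a hypothetical sketch, not code from the patch), this is the kind of expression the BT pattern matches:

```
// Hypothetical example: the (a >> n) & 1 idiom that should select
// "bt a, n" (followed by setb) rather than a shift+and sequence.
bool bit_is_set(unsigned long a, unsigned n) {
  return (a >> n) & 1;
}
```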

Differential Revision: https://reviews.llvm.org/D22354

llvm-svn: 275950
2016-07-19 07:14:21 +00:00
Simon Pilgrim c941f6b329 [X86][AVX] Add target shuffle decode support for VBROADCAST
Currently we only decode broadcasts from a vector of the same size.

llvm-svn: 275823
2016-07-18 17:32:59 +00:00
Simon Pilgrim 1be1222293 [X86][SSE] lowerVectorShuffleAsPermuteAndUnpack tidyup. NFCI.
Moved unpack type determination into TryUnpack lambda.

Added missing comment describing lowerVectorShuffleAsPermuteAndUnpack call.

llvm-svn: 275708
2016-07-17 15:48:25 +00:00
Guy Blank 3357ba36e2 test commit
llvm-svn: 275703
2016-07-17 12:10:35 +00:00
Justin Lebar 9c375817ac [SelectionDAG] Get rid of bool parameters in SelectionDAG::getLoad, getStore, and friends.
Summary:
Instead, we take a single flags arg (a bitset).

Also add a default 0 alignment, and change the order of arguments so the
alignment comes before the flags.

This greatly simplifies many callsites, and fixes a bug in
AMDGPUISelLowering, wherein the order of the args to getLoad was
inverted.  It also greatly simplifies the process of adding another flag
to getLoad.

Reviewers: chandlerc, tstellarAMD

Subscribers: jholewinski, arsenm, jyknight, dsanders, nemanjai, llvm-commits

Differential Revision: http://reviews.llvm.org/D22249

llvm-svn: 275592
2016-07-15 18:27:10 +00:00
Simon Pilgrim 2683ad54ad [X86][AVX2] Improve lowerShuffleAsRepeatedMaskAndLanePermute permutation of 64-bit sub-lanes
As discussed on PR28136, lowerShuffleAsRepeatedMaskAndLanePermute was attempting to match repeated masks at the 128-bit level and then permute the resultant lanes at the 128-bit (AVX1) or 64-bit (AVX2) sub-lane level.

This change allows us to create the repeated masks at the sub-lane level (and then concat them together to create a 128-bit repeated mask) and then select which sub-lane to permute. This has no effect on the AVX1 codegen.

Fixes PR28136.

llvm-svn: 275543
2016-07-15 09:49:12 +00:00
Simon Pilgrim 420b266d0a [X86][AVX2] Allow VPERMPD/VPERMQ shuffles to call combineShuffle (reapplied)
This improves the situation discussed in D19228 where we were forcing VPERMPD/VPERMQ where VPERM2F128/VPERM2I128 would have been better.

This was incorrectly reverted in rL275421 during triage of PR28552.

llvm-svn: 275497
2016-07-14 23:05:09 +00:00
Nico Weber 3afaf16abc Revert r275411, it caused PR28552.
llvm-svn: 275421
2016-07-14 14:49:35 +00:00
Simon Pilgrim 534e3240e8 Remove trailing whitespace.
llvm-svn: 275412
2016-07-14 13:29:23 +00:00
Simon Pilgrim 3ecb6bdd5f [X86][AVX2] Allow VPERMPD/VPERMQ shuffles to call combineShuffle
This improves the situation discussed in D19228 where we were forcing VPERMPD/VPERMQ where VPERM2F128/VPERM2I128 would have been better.

llvm-svn: 275411
2016-07-14 13:28:43 +00:00
Simon Pilgrim 053d32906f [X86][AVX] Add support for narrowing 128-bit+ shuffle mask elements to 64-bits to allow combining
Primarily this is to allow blend with zero instead of having to use vperm2f128, but we can use this in the future to deal with AVX512 cases where we need to keep the original element size to correctly fold masked operations.

llvm-svn: 275406
2016-07-14 12:58:04 +00:00
Sanjay Patel 610a2f6525 [x86][SSE/AVX] optimize pcmp results better (PR28484)
We know that pcmp produces all-ones/all-zeros bitmasks, so we can use that behavior to avoid unnecessary constant loading.

One could argue that load+and is actually a better solution for some CPUs (Intel big cores) because shifts don't have the
same throughput potential as load+and on those cores, but that should be handled as a CPU-specific later transformation if
it ever comes up. Removing the load is the more general x86 optimization. Note that the uneven usage of vpbroadcast in the
test cases is filed as PR28505:
https://llvm.org/bugs/show_bug.cgi?id=28505
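
A minimal intrinsics-level sketch of the idea (illustrative, not taken from the patch):

```
#include <immintrin.h>

// pcmpgtd lanes are already all-ones/all-zeros, so masking the result
// down to 0/1 can use a logical shift right by 31 instead of loading a
// splat(1) constant from memory.
__m128i cmp_to_bool(__m128i a, __m128i b) {
  __m128i m = _mm_cmpgt_epi32(a, b); // lanes: 0 or 0xFFFFFFFF
  return _mm_srli_epi32(m, 31);      // lanes: 0 or 1, no constant load
}
```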

Differential Revision: http://reviews.llvm.org/D22225

llvm-svn: 275276
2016-07-13 16:04:07 +00:00
Simon Pilgrim 48d8340760 [X86][AVX] Add support for target shuffle combining to VPERMILPS variable shuffle mask
Added AVX512F VPERMILPS shuffle decoding support

llvm-svn: 275270
2016-07-13 15:10:43 +00:00
Simon Pilgrim 57548a6fa6 [X86][SSE] Check for lane crossing shuffles before trying to combine to PSHUFB
Removes a return-on-fail that was making it tricky to add other variable mask shuffles.

llvm-svn: 275262
2016-07-13 12:48:41 +00:00
Simon Pilgrim 6fa71da4a4 [X86][AVX] Add support for target shuffle combining to VPERM2F128/VPERM2I128
llvm-svn: 275212
2016-07-12 20:27:32 +00:00
Simon Pilgrim 832463eada [X86][SSE] Generalise target shuffle combine of shuffles using variable masks
At present the only shuffle with a variable mask that we recognise is PSHUFB, which influences whether it's worth the cost of mask creation/loading for a combined target shuffle with a variable mask. This change sets up the infrastructure to support other shuffles in the future but has no effect yet.

llvm-svn: 275059
2016-07-11 12:49:35 +00:00
Craig Topper 7ee070e7bc [AVX512] Add support for 512-bit ANDN now that all ones build vectors survive long enough to allow the matching.
llvm-svn: 275046
2016-07-11 05:36:53 +00:00
Craig Topper 516e14cd8e [AVX512] Use vpternlog with an immediate of 0xff to create 512-bit all one vectors.
llvm-svn: 275045
2016-07-11 05:36:48 +00:00
Simon Pilgrim ee4a33ae46 [X86][SSE] Relax type assertions for matchVectorShuffleAsInsertPS
Calls to matchVectorShuffleAsInsertPS only need to ensure the inputs are 128-bit vectors. Only lowerVectorShuffleAsInsertPS needs to ensure that they are v4f32.

llvm-svn: 275028
2016-07-10 22:26:05 +00:00
Simon Pilgrim 2191faa433 [X86][SSE] Add support for target shuffle combining to PSHUFLW/PSHUFHW
llvm-svn: 275022
2016-07-10 21:02:47 +00:00
Craig Topper 0b0954570a [AVX512] Add support for lowering to 512-bit SHUFPS.
llvm-svn: 275011
2016-07-10 05:55:53 +00:00
Simon Pilgrim 606126e848 [X86][SSE] Add support for target shuffle combining to INSERTPS
llvm-svn: 274990
2016-07-09 21:47:55 +00:00
Simon Pilgrim 59c6a211cd [X86][SSE] Use scaleShuffleMask helper. NFCI.
llvm-svn: 274988
2016-07-09 21:12:03 +00:00
Craig Topper 70610cf7b6 [X86] Remove and autoupgrade 512-bit non-temporal store intrinsics.
llvm-svn: 274966
2016-07-09 04:38:27 +00:00
Simon Pilgrim 950419f948 [X86][AVX2] Add support for target shuffle combining to VPERMPD/VPERMQ
llvm-svn: 274908
2016-07-08 19:23:29 +00:00
Simon Pilgrim 828c731880 [X86][SSE] Accept any shuffle mask that is all zeroes
Until we have a better way to extract constants through bitcasted build vectors (and to handle undefs of partial lanes, etc.), at least accept build vectors that are all zeroes.

llvm-svn: 274833
2016-07-08 10:39:12 +00:00
Craig Topper f7bf6de0af [AVX512] Remove and autoupgrade a duplicate set of 512-bit masked shift intrinsics.
I'm not sure if clang ever used these builtin names or not.

llvm-svn: 274827
2016-07-08 06:14:47 +00:00
Simon Pilgrim 8ff7157513 [X86][SSE] Fixed typo in insertps lowering.
We were checking for 2 insertions (which is caught earlier in the pattern matching loop) instead of the case where we have no insertions.

It turns out this code never fires, as we always try to lower to insertps after trying to lower to blendps, which would catch these cases. I'm about to make some changes to support combining to insertps which could cause this to fire, so I don't want to remove it.

llvm-svn: 274648
2016-07-06 18:09:08 +00:00
Elena Demikhovsky ad0a56f3da Re-commit of 274613.
The previous commit failed to compile.
A minor change in one pattern in lib/Target/X86/X86InstrAVX512.td fixes the failure.

llvm-svn: 274626
2016-07-06 14:15:43 +00:00
Elena Demikhovsky 02ced295aa Reverted 274613 due to a compilation failure.
llvm-svn: 274615
2016-07-06 09:11:49 +00:00
Elena Demikhovsky 5a4f2476fd AVX-512: Optimization for patterns with i1 scalar type
The patch removes redundant kmov instructions (not all, we still have a lot of work here) and redundant "and" instructions after "setcc".
I use an "AssertZero" marker between the X86ISD::SETCC node and "truncate" to eliminate the extra "and $1" instruction.
I also changed the zext, aext and trunc patterns in the .td file. This allows removal of extra "kmov" instructions.

This patch fixes https://llvm.org/bugs/show_bug.cgi?id=28173.

FastISel mode is not supported correctly for AVX-512. Scalar ICMP/FCMP instructions should return their result in a k-register. This will be fixed in one of the next patches. I redirected the handling of "cmp" to the DAG builder mode. (The code looks worse in one specific test case, but without this fix the new patch fails.)

Differential revision: http://reviews.llvm.org/D21956

llvm-svn: 274613
2016-07-06 09:01:20 +00:00
Simon Pilgrim 7643b337a2 [X86][AVX2] Simplified BROADCAST combining to avoid repeated matching attempts
llvm-svn: 274583
2016-07-05 22:41:04 +00:00
Simon Pilgrim bec6543d17 [X86][AVX2] Add support for target shuffle combining to BROADCAST
Only broadcast from a vector register is supported so far; memory folding support will have to wait.

llvm-svn: 274572
2016-07-05 20:11:29 +00:00
Simon Pilgrim 48adedffb7 [X86][AVX512] Fixed decoding of permd/permpd variable mask shuffles + enabled them for target shuffle combining
Corrected element mask masking to extract the bottom index bits (now matches the perm2 implementation but for unary inputs).

llvm-svn: 274571
2016-07-05 18:31:17 +00:00
Simon Pilgrim 3ad040909a [X86][AVX512] Add support for lowering shuffles to VSHUFPD
llvm-svn: 274520
2016-07-04 20:41:24 +00:00
Simon Pilgrim c804751a18 [X86] Add shuffle mask rescaling helper function. NFCI.
llvm-svn: 274476
2016-07-03 21:28:17 +00:00
Simon Pilgrim 8e84fcf118 [X86][AVX2] Merge unary permute matching behind the same V2.isUndef() condition. NFCI.
llvm-svn: 274474
2016-07-03 20:39:42 +00:00
Simon Pilgrim 7f096de0b8 [X86][AVX512] Add support for 512-bit shuffle lowering to VPERMPD/VPERMQ
llvm-svn: 274473
2016-07-03 19:50:06 +00:00
Simon Pilgrim a0d73835b2 [X86][AVX512] Add support for 512-bit shuffle decoding of VPERMPD/VPERMQ
llvm-svn: 274468
2016-07-03 18:27:37 +00:00
Simon Pilgrim 129b720c18 [X86][AVX512] Add support for lowering shuffles to VPERMILPS
llvm-svn: 274458
2016-07-03 12:47:21 +00:00
Simon Pilgrim a7329dac6f Fix spelling.
llvm-svn: 274451
2016-07-02 20:21:39 +00:00
Simon Pilgrim 99e8a1aa0b [X86][AVX512] Add support for lowering shuffles to VPERMILPD
llvm-svn: 274450
2016-07-02 20:20:12 +00:00
Simon Pilgrim cde7c54baa [X86][AVX512] Add support for 512-bit PSHUFB lowering
llvm-svn: 274444
2016-07-02 18:14:31 +00:00
Simon Pilgrim f040d8c061 [X86][AVX512] Add support for lowering shuffles to MOVDDUP/MOVSLDUP/MOVSHDUP
llvm-svn: 274436
2016-07-02 12:45:03 +00:00
Craig Topper 2bd8b4b180 [CodeGen,Target] Remove the version of DAG.getVectorShuffle that takes a pointer to a mask array. Convert all callers to use the ArrayRef version. No functional change intended.
For the most part this simplifies all callers. There were two places in X86 that needed an explicit makeArrayRef to shorten a statically sized array.

llvm-svn: 274337
2016-07-01 06:54:47 +00:00
Duncan P. N. Exon Smith e4f5e4f4d1 CodeGen: Use MachineInstr& in TargetLowering, NFC
This is a mechanical change to make TargetLowering API take MachineInstr&
(instead of MachineInstr*), since the argument is expected to be a valid
MachineInstr.  In one case, changed a parameter from MachineInstr* to
MachineBasicBlock::iterator, since it was used as an insertion point.

As a side effect, this removes a bunch of MachineInstr* to
MachineBasicBlock::iterator implicit conversions, a necessary step
toward fixing PR26753.

llvm-svn: 274287
2016-06-30 22:52:52 +00:00
Duncan P. N. Exon Smith 9cfc75c214 CodeGen: Use MachineInstr& in TargetInstrInfo, NFC
This is mostly a mechanical change to make TargetInstrInfo API take
MachineInstr& (instead of MachineInstr* or MachineBasicBlock::iterator)
when the argument is expected to be a valid MachineInstr.  This is a
general API improvement.

Although it would be possible to do this one function at a time, that
would demand a quadratic amount of churn since many of these functions
call each other.  Instead I've done everything as a block and just
updated what was necessary.

This is mostly mechanical fixes: adding and removing `*` and `&`
operators.  The only non-mechanical change is to split
ARMBaseInstrInfo::getOperandLatencyImpl out from
ARMBaseInstrInfo::getOperandLatency.  Previously, the latter took a
`MachineInstr*` which it updated to the instruction bundle leader; now,
the latter calls the former either with the same `MachineInstr&` or the
bundle leader.

As a side effect, this removes a bunch of MachineInstr* to
MachineBasicBlock::iterator implicit conversions, a necessary step
toward fixing PR26753.

Note: I updated WebAssembly, Lanai, and AVR (despite being
off-by-default) since it turned out to be easy.  I couldn't run tests
for AVR since llc doesn't link with it turned on.

llvm-svn: 274189
2016-06-30 00:01:54 +00:00
Ahmed Bougacha 15a2f6d58c [X86] Lower blended PACKUSes using appropriate types.
When lowering two blended PACKUS, we used to disregard the types
of the PACKUS inputs, indiscriminately generating a v16i8 PACKUS.

This leads to non-selectable things like:
    (v16i8 (PACKUS (v4i32 v0), (v4i32 v1)))

Instead, check that the PACKUSes have the same type, and use that
as the final result type.

llvm-svn: 274138
2016-06-29 16:56:09 +00:00
David Majnemer 1c7d532cde [X86] Make WRPKRU/RDPKRU pass -verify-machineinstrs
The original implementation attempted to zero registers using
XOR %foo, %foo.  This is problematic because it constitutes a
read-modify-write of a register which might not be defined.

Instead, use MOV32r0 to avoid these problems; expandPostRAPseudo does
the right thing here.

llvm-svn: 274024
2016-06-28 16:04:46 +00:00
Simon Pilgrim 5f71c909f0 [X86][AVX] Peek through bitcasts to find the source of broadcasts (reapplied)
AVX1 can only broadcast vectors as floats/doubles, so for 256-bit vectors we insert bitcasts if we are shuffling v8i32/v4i64 types. Unfortunately the presence of these bitcasts prevents the current broadcast lowering code from peeking through cases where we have concatenated / extracted vectors to create the 256-bit vectors.

This patch allows us to peek through bitcasts as long as the number of elements doesn't change (i.e. element bitwidth is the same) so the broadcast index is not affected.

Note this bitcast peek is different from the stage later on which doesn't care about the type and is just trying to find a load node.

As we're being more aggressive with bitcasts, we also need to ensure that the broadcast type is correctly bitcasted.

Differential Revision: http://reviews.llvm.org/D21660

llvm-svn: 274013
2016-06-28 13:24:05 +00:00
Simon Pilgrim c15d217831 [X86][SSE] Added support for combining target shuffles to (V)PSHUFD/VPERMILPD/VPERMILPS immediate permutes
This patch allows target shuffles to be combined to single input immediate permute instructions - (V)PSHUFD/VPERMILPD/VPERMILPS - allowing more general pattern matching than what we current do and improves the likelihood of memory folding compared to existing patterns which tend to reuse the input in multiple arguments.

Further permute instructions (V)PSHUFLW/(V)PSHUFHW/(V)PERMQ/(V)PERMPD may be added in the future, but it's proven tricky to create test cases for them so far. (V)PSHUFLW/(V)PSHUFHW is already handled quite well in combineTargetShuffle, so it may be that removing some of that code would allow us to perform more of the combining in one place without duplication.

Differential Revision: http://reviews.llvm.org/D21148

llvm-svn: 273999
2016-06-28 08:08:15 +00:00
Nick Lewycky 9980075133 NFC. Fix popular typo in comment 'deferencing' --> 'dereferencing'.
Bonus changes: '*' placement in X86ISelLowering and 'exerce' -> 'exercise' in a test.

llvm-svn: 273984
2016-06-28 01:45:05 +00:00
Elena Demikhovsky ad3929cc64 X86 Lowering - Fixed a crash in ICMP scalar instruction
Fixed a bug in the EmitTest() function when combining shl + icmp.

https://llvm.org/bugs/show_bug.cgi?id=28119

llvm-svn: 273899
2016-06-27 18:07:16 +00:00
Nico Weber 1e058160dd Revert 273848, it caused PR28329
llvm-svn: 273879
2016-06-27 14:36:46 +00:00
Simon Pilgrim a45da385f8 [X86][AVX] Peek through bitcasts to find the source of broadcasts
AVX1 can only broadcast vectors as floats/doubles, so for 256-bit vectors we insert bitcasts if we are shuffling v8i32/v4i64 types. Unfortunately the presence of these bitcasts prevents the current broadcast lowering code from peeking through cases where we have concatenated / extracted vectors to create the 256-bit vectors.

This patch allows us to peek through bitcasts as long as the number of elements doesn't change (i.e. element bitwidth is the same) so the broadcast index is not affected.

Note this bitcast peek is different from the stage later on which doesn't care about the type and is just trying to find a load node.

Differential Revision: http://reviews.llvm.org/D21660

llvm-svn: 273848
2016-06-27 07:44:32 +00:00
Rafael Espindola ae0d866f56 Refactor a duplicated predicate. NFC.
llvm-svn: 273826
2016-06-26 22:13:55 +00:00
Craig Topper 8f577fd5b5 [X86] Rewrite lowerVectorShuffleWithPSHUFB to not require a ZeroableMask to be created. We can do everything with the starting mask and zeroable bit vector. This removes the last usage of isSingleInputShuffleMask. NFC
llvm-svn: 273804
2016-06-26 05:10:56 +00:00
Craig Topper 8bba749a48 [X86] Replace calls to isSingleInputShuffleMask with just checking if V2 is UNDEF. Canonicalization and creation of shuffle vector ensures this is equivalent.
llvm-svn: 273803
2016-06-26 05:10:53 +00:00
Craig Topper 9a2e979b3d [X86] Convert ==/!= comparisons with -1 for checking undef in shuffle lowering to comparisons of <0 or >=0. While there, do the same for other kinds of index checks that can just check for greater than 0. No functional change intended.
llvm-svn: 273788
2016-06-25 19:05:29 +00:00
Craig Topper 53a39d1a63 [X86] Pull similar bitcasts on different paths to earlier shared point. NFC
llvm-svn: 273787
2016-06-25 19:05:23 +00:00
Ahmed Bougacha 0851ecd1b0 [X86] Remove dead ISD opcodes. NFC.
llvm-svn: 273716
2016-06-24 20:37:55 +00:00
David Majnemer d770877328 Switch more loops to be range-based
This makes the code a little more concise, no functional change is
intended.

llvm-svn: 273644
2016-06-24 04:05:21 +00:00
Craig Topper 024402dcdf [X86] Combine two nearby calls to isSingleInputShuffleVector. NFC
llvm-svn: 273643
2016-06-24 03:06:11 +00:00
Craig Topper 597aa42fec [AVX512] Remove masked unpack intrinsics and autoupgrade to vectorshuffle and selects.
llvm-svn: 273543
2016-06-23 07:37:33 +00:00
Craig Topper 8f8bd37dd3 [X86] Add assert to ensure only 128-bit vector types are used. 256 or 512-bit would require lane handling which is missing.
llvm-svn: 273542
2016-06-23 07:37:26 +00:00
Krzysztof Parzyszek e116d500a7 [SDAG] Remove FixedArgs parameter from CallLoweringInfo::setCallee
The setCallee function will set the number of fixed arguments based
on the size of the argument list. The FixedArgs parameter was often
explicitly set to 0, leading to an inconsistent value for non-vararg
functions.

Differential Revision: http://reviews.llvm.org/D20376

llvm-svn: 273403
2016-06-22 12:54:25 +00:00
Etienne Bergeron f6be62f2c8 [StackProtector] Fix computation of GSCookieOffset and EHCookieOffset with SEH4
Summary:
Fix the computation of the offsets present in the scopetable when using
SEH (__except_handler4).

This patch adds an intrinsic to track the position of the EHGuard allocation
on the stack. This position is needed when producing the ScopeTable.

```
    struct _EH4_SCOPETABLE {
        DWORD GSCookieOffset;
        DWORD GSCookieXOROffset;
        DWORD EHCookieOffset;
        DWORD EHCookieXOROffset;
        _EH4_SCOPETABLE_RECORD ScopeRecord[1];
    };

    struct _EH4_SCOPETABLE_RECORD {
        DWORD EnclosingLevel;
        long (*FilterFunc)();
            union {
            void (*HandlerAddress)();
            void (*FinallyFunc)();
        };
    };
```

The code to generate the EHCookie is added in `X86WinEHState.cpp`,
which emits these instructions when using SEH4:

```
Lfunc_begin0:
# BB#0:                                 # %entry
	pushl	%ebp
	movl	%esp, %ebp
	pushl	%ebx
	pushl	%edi
	pushl	%esi
	subl	$28, %esp
	movl	%ebp, %eax                <<-- Loading FramePtr
	movl	%esp, -36(%ebp)
	movl	$-2, -16(%ebp)
	movl	$L__ehtable$use_except_handler4_ssp, %ecx
	xorl	___security_cookie, %ecx
	movl	%ecx, -20(%ebp)
	xorl	___security_cookie, %eax  <<-- XOR FramePtr and Cookie
	movl	%eax, -40(%ebp)           <<-- Storing EHGuard
	leal	-28(%ebp), %eax
	movl	$__except_handler4, -24(%ebp)
	movl	%fs:0, %ecx
	movl	%ecx, -28(%ebp)
	movl	%eax, %fs:0
	movl	$0, -16(%ebp)
	calll	_may_throw_or_crash
LBB1_1:                                 # %cont
	movl	-28(%ebp), %eax
	movl	%eax, %fs:0
	addl	$28, %esp
	popl	%esi
	popl	%edi
	popl	%ebx
	popl	%ebp
	retl

```

And the corresponding offset is computed:
```
Luse_except_handler4_ssp$parent_frame_offset = -36
	.p2align	2
L__ehtable$use_except_handler4_ssp:
	.long	-2                      # GSCookieOffset
	.long	0                       # GSCookieXOROffset
	.long	-40                     # EHCookieOffset    <<----
	.long	0                       # EHCookieXOROffset
	.long	-2                      # ToState
	.long	_catchall_filt          # FilterFunction
	.long	LBB1_2                  # ExceptionHandler

```

Clang does not yet produce functions using SEH4, but it's a work in progress.
This patch is a step toward having a valid implementation of SEH4.
Unfortunately, it is not yet fully working: the EH registration block is not
allocated at the right offset on the stack.

Reviewers: rnk, majnemer

Subscribers: llvm-commits, chrisha

Differential Revision: http://reviews.llvm.org/D21231

llvm-svn: 273281
2016-06-21 15:58:55 +00:00
Craig Topper e4cf09ad07 [X86] Pre-allocate SmallVector instead of using push_back in a loop. NFC
llvm-svn: 273234
2016-06-21 03:05:40 +00:00
Simon Pilgrim 225b2e37a0 [X86][X87] Fix issue with sitofp i64 -> fp128 on 32-bit targets
Fix for PR27726 - sitofp i64 to fp128 was loading the merged load i64 to a x87 register preventing legalization for conversion to fp128.

Added 32-bit tests for fp128 cast/conversions.

llvm-svn: 273210
2016-06-20 22:41:17 +00:00
Igor Breger e59165ca63 [AVX512] [AVX512/AVX][Intrinsics] Fix Variable Bit Shift Right Arithmetic intrinsic lowering.
Differential Revision: http://reviews.llvm.org/D20897

llvm-svn: 273138
2016-06-20 07:05:43 +00:00
Craig Topper 4296c025c0 [X86] Pass the SDLoc and Mask ArrayRef down from lowerVectorShuffle through all of the other routines instead of recreating them in the handlers for each type. NFC
llvm-svn: 273137
2016-06-20 04:00:55 +00:00
Craig Topper ddf5d2a4a5 [X86] Use existing ArrayRef variable instead of calling SVOp->getMask() repeatedly. Remove nearby else after return as well. NFC
llvm-svn: 273136
2016-06-20 04:00:53 +00:00
Craig Topper 01ef65dd79 [X86] Avoid making a copy of a shuffle mask until we're sure we really need to. And just use a SmallVector to do the copy because it's easy.
llvm-svn: 273135
2016-06-20 04:00:50 +00:00
Simon Pilgrim 0887d5b02e [X86][AVX512] Added 512-bit BITREVERSE tests and enabled AVX512BW lowering support
llvm-svn: 273125
2016-06-19 20:59:19 +00:00
Simon Pilgrim 0c62bc0324 Strip trailing whitespace. NFCI.
llvm-svn: 273124
2016-06-19 20:22:43 +00:00
Simon Pilgrim 2b007189b0 Fixed signed/unsigned warning.
llvm-svn: 273120
2016-06-19 18:20:44 +00:00
Simon Pilgrim 3d881a0230 [X86][SSE] Allow target shuffle combining to match masks with SM_Sentinel values
We currently only allow exact matches of shuffle mask patterns during target shuffle combining.

This patch relaxes this to permit SM_SentinelUndef in the combined shuffle to always be accepted as well as allowing exact matching of the SM_SentinelZero value.

I've adjusted some tests that were requiring exact shuffle masks to now include undef values.

Differential Revision: http://reviews.llvm.org/D21495

llvm-svn: 273119
2016-06-19 18:03:52 +00:00
Craig Topper bbb9a8d255 [X86] Add an assert to ensure that a routine is only used with 128-bit vectors. Reduce SmallVector size accordingly.
llvm-svn: 273117
2016-06-19 15:37:39 +00:00
Craig Topper 969457e0e3 [X86] Make is128BitLaneRepeatedShuffleMask correct the indices of the second vector for the smaller mask. This removes some custom correction code and can potentially provide other benefits in the future.
llvm-svn: 273116
2016-06-19 15:37:37 +00:00
Craig Topper 54ec3d6b1b [X86] Remove a dead path through one of the shuffle lowering routines. It's only called on single input shuffles masks already. Add an assert instead to verify.
llvm-svn: 273115
2016-06-19 15:37:35 +00:00
Craig Topper ae21810ce4 [X86] Pre-allocate a SmallVector instead of using push_back in a loop. NFC
llvm-svn: 273114
2016-06-19 15:37:33 +00:00
Craig Topper 4181c03c54 [X86] Use SmallVector::assign instead of resize to ensure we really start with a vector of all -1s. Otherwise we're trusting the caller to pass the right thing.
This should be no functional change with current code.

llvm-svn: 273113
2016-06-19 15:37:30 +00:00
Craig Topper 1f083543c9 [X86] Pre-size several SmallVectors instead of calling push_back in a loop. NFC
llvm-svn: 272997
2016-06-17 12:20:50 +00:00
Craig Topper 07984f2068 [X86] Fix formatting. NFC
llvm-svn: 272996
2016-06-17 12:20:48 +00:00
Rafael Espindola 498b9e06c8 Refactor more duplicated code.
llvm-svn: 272939
2016-06-16 19:30:55 +00:00
Rafael Espindola ed44cf6ccd Refactor duplicated code.
llvm-svn: 272936
2016-06-16 18:50:12 +00:00
Craig Topper 97b1fc92e8 [X86] Pre-size some SmallVectors using the constructor in the shuffle lowering code instead of using push_back. Some of these already did this but used resize or assign instead of the constructor. NFC
llvm-svn: 272872
2016-06-16 03:58:45 +00:00
Craig Topper 66f1a8b608 [X86] Remove else after return. NFC
llvm-svn: 272871
2016-06-16 03:58:42 +00:00
Craig Topper ceda65bdc4 [X86] Inline a couple lambdas into their callers since they are only used once and it all fits on a single line. NFC
llvm-svn: 272869
2016-06-16 03:11:00 +00:00
Sanjay Patel 1a4569df54 [x86] add folds for x86 vector compare nodes (PR27924)
Ideally, we can get rid of most x86 LLVM intrinsics by transforming them to IR (and some of that happened 
with http://reviews.llvm.org/rL272807), but it doesn't cost much to have some simple folds in the backend
too while we're working on that and as a backstop.

This fixes:
https://llvm.org/bugs/show_bug.cgi?id=27924

Differential Revision: http://reviews.llvm.org/D21356

llvm-svn: 272828
2016-06-15 20:26:58 +00:00
Kevin B. Smith acbda9ef30 [X86]: Updated r272801 to promote 16 bit compares with immediate operand
to 32 bits. This is in response to a comment by Eli Friedman.

llvm-svn: 272814
2016-06-15 18:18:05 +00:00
Sanjay Patel 30e0456562 [x86] fix function name; NFC
llvm-svn: 272805
2016-06-15 17:12:29 +00:00
Kevin B. Smith 54566a0e9a [X86]: Quit promoting 8 and 16 bit compares to 32 bit.
Differential Revision: http://reviews.llvm.org/D21144

llvm-svn: 272801
2016-06-15 16:37:46 +00:00
Wei Mi b799a625f9 [X86] Reduce the width of multiplication when its operands are extended from i8 or i16
For an <N x i32> type mul, pmuludq will be used for targets without SSE41, which
often introduces many extra pack and unpack instructions in the vectorized loop
body because pmuludq generates <N/2 x i64> type values. However, when the operands
of the <N x i32> mul are extended from smaller sizes like i8 and i16, the type
of the mul may be shrunk to use pmullw + pmulhw/pmulhuw instead of pmuludq, which
generates better code. For targets with SSE41, pmulld is supported, so no
shrinking is needed.
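
A sketch of the kind of loop this targets (hypothetical example, not from the patch):

```
#include <cstddef>
#include <cstdint>

// The i16 operands are zero-extended to i32 for the multiply, so on
// pre-SSE41 targets the <N x i32> mul can be shrunk to pmullw/pmulhuw
// pairs instead of going through pmuludq.
void mul_widened(const uint16_t *a, const uint16_t *b, uint32_t *out,
                 size_t n) {
  for (size_t i = 0; i != n; ++i)
    out[i] = uint32_t(a[i]) * uint32_t(b[i]);
}
```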

Differential Revision: http://reviews.llvm.org/D20931

llvm-svn: 272694
2016-06-14 18:53:20 +00:00
Haojian Wu 7900ca1e7e Fix an enumeral mismatch warning.
Summary:
The "-Werror=enum-compare" diagnostic shows that the statement is using two different enums:

enumeral mismatch in conditional expression: 'llvm::X86ISD::NodeType' vs 'llvm::ISD::NodeType'

A follow-up fix on D21235.
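
A minimal reproduction of this class of diagnostic (hypothetical, not the actual statement fixed here):

```
// GCC's -Wenum-compare fires when the two arms of ?: have different
// enum types, as X86ISD::NodeType vs ISD::NodeType did here.
enum First { F0 };
enum Second { S0 };
unsigned pick(bool c) {
  return c ? F0 : S0; // warning: enumeral mismatch in conditional expression
}
```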

Reviewers: klimek

Subscribers: spatel, cfe-commits

Differential Revision: http://reviews.llvm.org/D21278

llvm-svn: 272539
2016-06-13 09:03:45 +00:00
Benjamin Kramer bdc4956bac Pass DebugLoc and SDLoc by const ref.
This used to be free, but copying and moving DebugLocs became expensive
after the metadata rewrite. Passing by reference eliminates a ton of
track/untrack operations. No functionality change intended.
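
A small stand-alone sketch of the change's shape (illustrative names, not the real API):

```
// A stand-in for DebugLoc, whose copies became costly after the
// metadata rewrite (each copy triggers metadata track/untrack work).
struct Loc {
  Loc() = default;
  Loc(const Loc &) { /* expensive: track/untrack bookkeeping */ }
};

// Before: void emit(Loc DL);  -- copies on every call
void emit(const Loc &DL) { (void)DL; } // after: no copy made
```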

llvm-svn: 272512
2016-06-12 15:39:02 +00:00
Sanjay Patel 977530a8c9 [x86, SSE] change patterns for CMPP to float types to allow matching with SSE1 (PR28044)
This patch is intended to solve:
https://llvm.org/bugs/show_bug.cgi?id=28044

By changing the definition of X86ISD::CMPP to use float types, we allow it to be created 
and pass legalization for an SSE1-only target where v4i32 is not legal.

The motivational trail for this change includes:
https://llvm.org/bugs/show_bug.cgi?id=28001

and eventually makes this trigger:
http://reviews.llvm.org/D21190

Ie, after this step, we should be free to have Clang generate FP compare IR instead of x86
intrinsics for SSE C packed compare intrinsics. (We can auto-upgrade and remove the LLVM 
sse.cmp intrinsics as a follow-up step.) Once we're generating vector IR instead of x86
intrinsics, a big pile of generic optimizations can trigger.

Differential Revision: http://reviews.llvm.org/D21235

llvm-svn: 272511
2016-06-12 15:03:25 +00:00
Simon Pilgrim 5b9bade8dd [X86][SSSE3] Added PSHUFB LUT implementation of BITREVERSE
PSHUFB can speed up BITREVERSE of byte vectors by performing a LUT lookup on the low/high nibbles separately and ORing the results. Wider integer vector types are already BSWAP'd beforehand, so they also make use of this approach.
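
A scalar model of the nibble-LUT trick (illustrative only; the real code does this 16 bytes at a time with PSHUFB):

```
#include <cstdint>

// Reverse each 4-bit nibble via a 16-entry table, then swap nibbles.
uint8_t reverse_byte(uint8_t b) {
  static const uint8_t LUT[16] = {0x0, 0x8, 0x4, 0xC, 0x2, 0xA, 0x6, 0xE,
                                  0x1, 0x9, 0x5, 0xD, 0x3, 0xB, 0x7, 0xF};
  return uint8_t((LUT[b & 0xF] << 4) | LUT[b >> 4]);
}
```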

llvm-svn: 272477
2016-06-11 15:44:13 +00:00
Craig Topper 504fba5c8a [AVX512] Lower v8i64 and v16i32 to pshufd when possible.
llvm-svn: 272473
2016-06-11 13:43:21 +00:00
Simon Pilgrim 6800a45790 [X86][SSE] Added PSLLDQ/PSRLDQ as a target shuffle type
Ensure that PALIGNR/PSLLDQ/PSRLDQ are byte vectors so that they can be correctly decoded for target shuffle combining.

llvm-svn: 272471
2016-06-11 13:38:28 +00:00
Simon Pilgrim 255fdd0666 [X86][SSE] Use vXi8 return type for PSLLDQ/PSRLDQ instructions
These are byte shift instructions, and shuffle combining will be a lot more straightforward if we can assume a vXi8 vector of bytes, so that decoded shuffle masks match the return type's number of elements.

llvm-svn: 272468
2016-06-11 12:54:37 +00:00
Craig Topper 40abd1cc61 [AVX512] Add support for lowering v32i16 shuffles with repeated lanes. This allows us to create 512-bit PSHUFLW/PSHUFHW.
llvm-svn: 272450
2016-06-11 03:27:42 +00:00
Craig Topper b9b86fcfff [AVX512] No need to check for BWI being enabled before lowering v32i16 and v64i8 shuffles. If we get this far the types are already legal which means BWI must be enabled.
llvm-svn: 272449
2016-06-11 03:27:37 +00:00
Sanjoy Das 39c226fdba [STLExtras] Introduce and use llvm::count_if; NFC
(This is split out from was D21115)

llvm-svn: 272435
2016-06-10 21:18:39 +00:00
Simon Pilgrim 47c76e201a [X86][AVX512] Fixed issue with v16i32 shuffles lowering to VPALIGNR
llvm-svn: 272307
2016-06-09 20:53:12 +00:00
Simon Pilgrim 0ab9d3026a [X86][AVX512] Added support for lowering 512-bit vector shuffles to bit/byte shifts
512-bit VPSLLDQ/VPSRLDQ can only be used for AVX512BW targets, so lowerVectorShuffleAsShift had to be adjusted to include the subtarget.

llvm-svn: 272300
2016-06-09 20:13:58 +00:00
Igor Breger f635367e2b [AVX512] Remove masked_move/blendm intrinsic from back-end.
This is a complement patch to D21060.

Differential Revision: http://reviews.llvm.org/D21174

llvm-svn: 272257
2016-06-09 11:46:55 +00:00
Craig Topper 565a5b5451 [X86] Fix bad comment in assert. NFC
llvm-svn: 272248
2016-06-09 07:06:33 +00:00
Benjamin Kramer 46e38f3678 Avoid copies of std::strings and APInt/APFloats where we only read from them
As suggested by clang-tidy's performance-unnecessary-copy-initialization.
This can easily hit lifetime issues, so I audited every change and ran the
tests under asan, which came back clean.
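
The pattern being cleaned up looks roughly like this (hypothetical example):

```
#include <string>
#include <vector>

void use(const std::vector<std::string> &Names) {
  // Before: std::string First = Names.front(); // needless copy
  const std::string &First = Names.front();     // read-only, no copy;
  (void)First;                                  // requires auditing lifetime
}
```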

llvm-svn: 272126
2016-06-08 10:01:20 +00:00
Etienne Bergeron 22bfa83208 [stack-protection] Add support for MSVC buffer security check
Summary:
This patch is adding support for the MSVC buffer security check implementation

The buffer security check is turned on with the '/GS' compiler switch.
  * https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
  * To be added to clang here: http://reviews.llvm.org/D20347

Some overview of buffer security check feature and implementation:
  * https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
  * http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
  * http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html


For the following example:
```
int example(int offset, int index) {
  char buffer[10];
  memset(buffer, 0xCC, index);
  return buffer[index];
}
```

The MSVC compiler adds these instructions to perform the stack integrity check:
```
        push        ebp  
        mov         ebp,esp  
        sub         esp,50h  
  [1]   mov         eax,dword ptr [__security_cookie (01068024h)]  
  [2]   xor         eax,ebp  
  [3]   mov         dword ptr [ebp-4],eax  
        push        ebx  
        push        esi  
        push        edi  
        mov         eax,dword ptr [index]  
        push        eax  
        push        0CCh  
        lea         ecx,[buffer]  
        push        ecx  
        call        _memset (010610B9h)  
        add         esp,0Ch  
        mov         eax,dword ptr [index]  
        movsx       eax,byte ptr buffer[eax]  
        pop         edi  
        pop         esi  
        pop         ebx  
  [4]   mov         ecx,dword ptr [ebp-4]  
  [5]   xor         ecx,ebp  
  [6]   call        @__security_check_cookie@4 (01061276h)  
        mov         esp,ebp  
        pop         ebp  
        ret  
```

The instrumentation above is:
  * [1] loads the global security canary,
  * [3] stores the locally computed ([2]) canary to the guard slot,
  * [4] loads the guard slot and ([5]) re-computes the global canary,
  * [6] validates the resulting canary with '__security_check_cookie' and performs error handling.

Overview of the current stack-protection implementation:
  * lib/CodeGen/StackProtector.cpp
    * There is a default stack-protection implementation applied on intermediate representation.
    * The target can overload 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
    * An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
    * Basic Blocks are added to every instrumented function to receive the code for handling stack guard validation and error handling.
    * Guard manipulation and comparison are added directly to the intermediate representation.

  * lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
  * lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
    * There is an implementation that adds instrumentation during instruction selection (for better handling of sibling calls).
      * see long comment above 'class StackProtectorDescriptor' declaration.
    * The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation. (note: getIRStackGuard MUST be nullptr).
      * 'getSDagStackGuard' returns the appropriate stack guard (security cookie)
    * The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.

  * include/llvm/Target/TargetLowering.h
    * Contains a function to retrieve the default Guard 'Value'; it should be overridden by each target to select which implementation is used and provide the Guard 'Value'.

  * lib/Target/X86/X86ISelLowering.cpp
    * Contains the x86 specialisation; Guard 'Value' used by the SelectionDAG algorithm.

Function-based Instrumentation:
  * MSVC doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
  * To support function-based instrumentation, this patch is
    * adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h),
      * If provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the prologue.
    * modifying SelectionDAGISel.cpp to avoid producing basic blocks used for inline instrumentation,
    * generating the function-based instrumentation during the ISEL pass (SelectionDAGBuilder.cpp),
    * if FastISel (not SelectionDAG) is used, falling back to the same function-based implementation over the intermediate representation (StackProtector.cpp).

Modifications
  * adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
  * adding support function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)

Results

  * IR generated instrumentation:
```
clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
```

```
*** Final LLVM Code input to ISel ***

; Function Attrs: nounwind sspstrong
define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
entry:
  %StackGuardSlot = alloca i8*                                                  <<<-- Allocated guard slot
  %0 = call i8* @llvm.stackguard()                                              <<<-- Loading Stack Guard value
  call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot)                  <<<-- Prologue intrinsic call (store to Guard slot)
  %index.addr = alloca i32, align 4
  %offset.addr = alloca i32, align 4
  %buffer = alloca [10 x i8], align 1
  store i32 %index, i32* %index.addr, align 4
  store i32 %offset, i32* %offset.addr, align 4
  %arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
  %1 = load i32, i32* %index.addr, align 4
  call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
  %2 = load i32, i32* %index.addr, align 4
  %arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
  %3 = load i8, i8* %arrayidx, align 1
  %conv = sext i8 %3 to i32
  %4 = load volatile i8*, i8** %StackGuardSlot                                  <<<-- Loading Guard slot
  call void @__security_check_cookie(i8* %4)                                    <<<-- Epilogue function-based check
  ret i32 %conv
}
```

  * SelectionDAG generated instrumentation:

```
clang-cl /GS test.cc /O1 /c /FA
```

```
"?example@@YAHHH@Z":                    # @"\01?example@@YAHHH@Z"
# BB#0:                                 # %entry
        pushl   %esi
        subl    $16, %esp
        movl    ___security_cookie, %eax                                        <<<-- Loading Stack Guard value
        movl    28(%esp), %esi
        movl    %eax, 12(%esp)                                                  <<<-- Store to Guard slot
        leal    2(%esp), %eax
        pushl   %esi
        pushl   $204
        pushl   %eax
        calll   _memset
        addl    $12, %esp
        movsbl  2(%esp,%esi), %esi
        movl    12(%esp), %ecx                                                  <<<-- Loading Guard slot
        calll   @__security_check_cookie@4                                      <<<-- Epilogue function-based check
        movl    %esi, %eax
        addl    $16, %esp
        popl    %esi
        retl
```

Reviewers: kcc, pcc, eugenis, rnk

Subscribers: majnemer, llvm-commits, hans, thakis, rnk

Differential Revision: http://reviews.llvm.org/D20346

llvm-svn: 272053
2016-06-07 20:15:35 +00:00
Simon Pilgrim ca1da1bf07 [X86][SSE] Improved blend+zero target shuffle combining to use combined shuffle mask directly
We currently only combine to blend+zero if the target value type has 8 elements or fewer, but this was missing a lot of cases where the combined mask had been widened.

This change makes it so we use the combined mask to determine the blend value type, allowing us to catch more widened cases.

llvm-svn: 272003
2016-06-07 12:20:14 +00:00
Igor Breger edafb0595e [KNL] Fix UMULO lowering.
Differential Revision: http://reviews.llvm.org/D21013

llvm-svn: 271891
2016-06-06 12:24:52 +00:00
Craig Topper 143446d5c1 [AVX512] Add PALIGNR shuffle lowering for v32i16 and v16i32.
llvm-svn: 271870
2016-06-06 05:39:10 +00:00
Simon Pilgrim 64c6de4525 [X86][XOP] Added VPERMIL2PD/VPERMIL2PS raw mask decoding for target shuffle combines
llvm-svn: 271834
2016-06-05 15:21:30 +00:00
Simon Pilgrim 478295dadd [X86][XOP] Added VPERMIL2PD/VPERMIL2PS as a target shuffle type
llvm-svn: 271831
2016-06-05 15:01:45 +00:00
Craig Topper 8eeda57a40 [AVX512] Add support for lowering PALIGNR for v64i8.
Could do this for other types too, but this is what's needed to replace the intrinsic with native IR in clang.

llvm-svn: 271828
2016-06-05 06:29:12 +00:00
Craig Topper 9f51c9ef15 [AVX512] Fix PANDN combining for v4i32/v8i32 when VLX is enabled.
v4i32/v8i32 ANDs aren't promoted to v2i64/v4i64 when VLX is enabled.

llvm-svn: 271826
2016-06-05 05:35:11 +00:00
Craig Topper e609bd6600 [X86] Add the VR128L/H and VR256L/H to the list of vector register classes for inline asm constraints. Also fix the comment on the function.
llvm-svn: 271802
2016-06-04 20:15:08 +00:00
Saleem Abdulrasool 1fcdc23a6e X86: enable TLS on Windows itanium
Windows itanium is nearly identical to windows-msvc (MS ABI for C, itanium for
C++).  Enable TLS support for the target, similar to the MSVC model.

llvm-svn: 271797
2016-06-04 18:27:22 +00:00
Simon Pilgrim fd2eda4f64 [X86][AVX2] Fix v16i16 SHL lowering (PR27730)
The AVX2 v16i16 shift lowering works by unpacking to 2 x v8i32, performing the shift and then truncating the result.

The unpacking is used to place the values in the upper 16 bits so that we can correctly sign-extend for SRA shifts. Unfortunately we weren't ensuring that the lower 16 bits were zero, which SHL needs in order to shift in zero bits.
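
A scalar model of the lowering (illustrative only):

```
#include <cstdint>

// The 16-bit lane is placed in the upper half of a 32-bit value so SRA
// can sign-extend correctly; for SHL the lower half must be zero, or
// stale bits would be shifted up into the truncated result.
uint16_t shl_via_i32(uint16_t v, unsigned amt) {
  uint32_t widened = uint32_t(v) << 16; // upper 16 bits = v, lower = 0
  return uint16_t((widened << amt) >> 16);
}
```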

llvm-svn: 271796
2016-06-04 16:45:33 +00:00
Simon Pilgrim e85506b6e0 [X86][XOP] Support for VPERMIL2PD/VPERMIL2PS 2-input shuffle instructions
This patch begins adding support for lowering to the XOP VPERMIL2PD/VPERMIL2PS shuffle instructions - adding the X86ISD::VPERMIL2 opcode and cleaning up the usage.

The internal llvm intrinsics were assuming the shuffle mask operand was the same type as the float/double input operands (I guess to simplify the intrinsic definitions in X86InstrXOP.td to a single value type). These needed changing to integer types (matching the clang builtin and the AMD intrinsics definitions); an auto-upgrade path is added to convert old calls.

Mask decoding/target shuffle support will be added in future patches.

Differential Revision: http://reviews.llvm.org/D20049

llvm-svn: 271633
2016-06-03 08:06:03 +00:00
Dimitry Andric 6a482a73d6 Only attempt to detect AVG if SSE2 is available
Summary:
In PR27973 Sanjay Patel reported an assertion failure when a certain
loop was optimized, for a target without SSE2 support.  It turned out
this was because of the AVG pattern detection introduced in rL253952.

Prevent the assertion failure by bailing out early in
`detectAVGPattern()`, if the target does not support SSE2.

Also add a minimized test case.
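
The rounding-average idiom detectAVGPattern() recognizes looks like this (a hypothetical sketch):

```
#include <cstddef>
#include <cstdint>

// (a + b + 1) >> 1 over bytes: with SSE2 this can become pavgb; without
// SSE2 the backend must bail out instead of asserting.
void avg_round(const uint8_t *a, const uint8_t *b, uint8_t *out, size_t n) {
  for (size_t i = 0; i != n; ++i)
    out[i] = uint8_t((uint16_t(a[i]) + uint16_t(b[i]) + 1) >> 1);
}
```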

Reviewers: congh, eli.friedman, spatel

Subscribers: emaste, llvm-commits

Differential Revision: http://reviews.llvm.org/D20905

llvm-svn: 271548
2016-06-02 17:30:49 +00:00
Craig Topper 5bb9cda620 [AVX512] Remove LOADA/LOADU/STOREA/STOREU intrinsic types now that they are unused.
llvm-svn: 271479
2016-06-02 04:19:40 +00:00
Yaron Keren a34bfa000f Do not modify a std::vector while looping over it.
Introduced in r271244, this is probably undefined behaviour and asserts when
compiled with Visual C++ debug mode. 

As a further note, the loop is quadratic with regard to the number of successors,
since removeSuccessor is linear; it could probably be made linear.
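
The general shape of the bug and one fix (hypothetical names, not the actual code):

```
#include <algorithm>
#include <vector>

void removeAll(std::vector<int> &Succs, int Bad) {
  // Buggy: erasing inside "for (int S : Succs)" invalidates iterators.
  // One linear-time fix is the erase-remove idiom:
  Succs.erase(std::remove(Succs.begin(), Succs.end(), Bad), Succs.end());
}
```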

llvm-svn: 271278
2016-05-31 13:45:05 +00:00
Igor Breger 73ee8ba9b0 [AVX512] Fix intrinsic vcvtps2ph lowering.
Differential Revision: http://reviews.llvm.org/D20788

llvm-svn: 271255
2016-05-31 08:04:21 +00:00
Igor Breger 52bd1d5fcc Fix intrinsic vbroadcast{i32|f32}x2 lowering.
Differential Revision: http://reviews.llvm.org/D20780

llvm-svn: 271254
2016-05-31 07:43:39 +00:00
Saleem Abdulrasool d2f705ddf9 X86: permit using SjLj EH on x86 targets as an option
This adds support to the backend for SjLj EH as an exception
model.  This is *NOT* the default model, and requires explicitly opting into it
from the frontend.  GCC supports this model, and for MinGW it can still be enabled
via the `--using-sjlj-exceptions` option.

Addresses PR27749!

llvm-svn: 271244
2016-05-31 01:48:07 +00:00
Ahmed Bougacha a3dc1ba142 [X86] Try to zero elts when lowering 256-bit shuffle with PSHUFB.
Otherwise we fallback to a blend of PSHUFBs later on.

Differential Revision: http://reviews.llvm.org/D19661

llvm-svn: 271113
2016-05-28 14:38:04 +00:00
Michael Kuperstein a75c77b127 [X86] Detect SAD patterns and emit psadbw instructions.
This recommits r267649 with a fix for PR27539.
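
The kind of loop the SAD detection targets (illustrative sketch):

```
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// A sum of absolute differences over bytes; psadbw computes this for
// eight byte lanes at a time.
uint32_t sad(const uint8_t *a, const uint8_t *b, size_t n) {
  uint32_t sum = 0;
  for (size_t i = 0; i != n; ++i)
    sum += uint32_t(std::abs(int(a[i]) - int(b[i])));
  return sum;
}
```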

Differential Revision: http://reviews.llvm.org/D20598

llvm-svn: 271033
2016-05-27 18:53:22 +00:00
Ahmed Bougacha 346b98011c [X86] Clarify PSHUFB+blend lowering function name. NFC.
Also guard against v32i8 users.

llvm-svn: 271024
2016-05-27 17:58:17 +00:00
Simon Pilgrim cf340bd9c1 [X86][SSE] When lowering a 256-bit shuffle as PMOVZX, reduce the input vector to the lower 128-bit subvector.
More often than not this is what it started out as; the extraction is zero-cost on AVX, and the PMOVZX/PMOVSX folding logic is based around 128-bit loads.

llvm-svn: 270858
2016-05-26 15:40:36 +00:00
Igor Breger 8437bb70fd [AVX512] Fix intrinsic cmp{sd|ss} lowering.
Differential Revision: http://reviews.llvm.org/D20615

llvm-svn: 270843
2016-05-26 12:42:25 +00:00
Simon Pilgrim 4810683ddf Simplify std::all_of/any_of predicates by using llvm::all_of/any_of. NFCI.
llvm-svn: 270753
2016-05-25 20:41:11 +00:00
Igor Breger 23c2090606 [llvm][AVX512][intrinsics] Fix vperm{b|w|d|q|ps|pd} intrinsics. The index is the second argument to the builtin function but the first instruction operand.
Differential Revision: http://reviews.llvm.org/D20515

llvm-svn: 270548
2016-05-24 11:06:22 +00:00
Igor Breger 2ba64ab9ae [AVX512] Implement missing patterns for any_extend load lowering.
Differential Revision: http://reviews.llvm.org/D20513

llvm-svn: 270357
2016-05-22 10:21:04 +00:00
Michael Zuckerman a63a129749 [Clang][AVX512][intrinsics] Fix rcp and sqrt intrinsics.
Differential Revision: http://reviews.llvm.org/D20438

llvm-svn: 270322
2016-05-21 14:44:18 +00:00
Michael Zuckerman 11b55b29d1 [Clang][AVX512][intrinsics] Fix vscalef intrinsics.
Differential Revision: http://reviews.llvm.org/D20324

llvm-svn: 270321
2016-05-21 11:09:53 +00:00
Simon Pilgrim 55ef3da27b [X86][AVX] Generalized matching for target shuffle combines
This patch is a first step towards a more extensible method of matching combined target shuffle masks.

Initially this just pulls out the existing basic mask matches and adds support for some 256/512 bit equivalents. Future patterns will require a number of features to be added but I wanted to keep this patch simple.

I hope we can avoid duplication between shuffle lowering and combining and share more complex pattern match functions in future commits.

Differential Revision: http://reviews.llvm.org/D19198

llvm-svn: 270230
2016-05-20 16:19:30 +00:00
Rafael Espindola c7e9813228 Refactor X86 symbol access classification.
This refactors the logic in X86 to avoid code duplication. It also
splits it into two steps: it first decides if a symbol is local to the DSO
and then uses that information to decide how to access it.

The first part is implemented by shouldAssumeDSOLocal. It is not in any
way specific to X86. In a followup patch I intend to move it to
somewhere common and reuse it in other backends.

llvm-svn: 270209
2016-05-20 12:20:10 +00:00
Rafael Espindola ab03eb007c Record a TargetMachine instead of a Reloc::Model.
Addresses r270095's code review.

llvm-svn: 270147
2016-05-19 22:07:57 +00:00
Hans Wennborg 172eee9cfc X86: Don't reset the stack after calls that don't return (PR27117)
Since the calls don't return, the instruction afterwards will never run,
and is just taking up unnecessary space in the binary.
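
For illustration, a hypothetical sketch of the situation:

```
[[noreturn]] void die();

int f(int x) {
  if (x < 0)
    die(); // control never returns, so the stack-pointer reset that
           // would normally follow this call is dead code
  return x;
}
```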

Differential Revision: http://reviews.llvm.org/D20406

llvm-svn: 270109
2016-05-19 20:15:33 +00:00
Rafael Espindola 46107b9e62 Remember the relocation model. NFC.
This avoids passing a TargetMachine in a few places.

llvm-svn: 270095
2016-05-19 18:49:29 +00:00
Rafael Espindola cb2d266360 Style fixes. NFC.
llvm-svn: 270093
2016-05-19 18:34:20 +00:00
Hans Wennborg 8eb336c14e Re-commit r269828 "X86: Avoid using _chkstk when lowering WIN_ALLOCA instructions"
with an additional fix to make RegAllocFast ignore undef physreg uses. It would
previously get confused about the "push %eax" instruction's use of eax. That
method for adjusting the stack pointer is used in X86FrameLowering::emitSPUpdate
as well, but since that runs after register-allocation, we didn't run into the
RegAllocFast issue before.

llvm-svn: 269949
2016-05-18 16:10:17 +00:00
Ashutosh Nema 348af9cc6b Add new flag and intrinsic support for MWAITX and MONITORX instructions
Summary:

MONITORX/MWAITX instructions provide similar capability to the MONITOR/MWAIT
pair while adding a timer function, such that another termination of the MWAITX
instruction occurs when the timer expires. The presence of the MONITORX and
MWAITX instructions is indicated by CPUID 8000_0001, ECX, bit 29.

The MONITORX and MWAITX instructions are intercepted by the same bits that
intercept MONITOR and MWAIT. The MONITORX instruction establishes a range to be
monitored. The MWAITX instruction causes the processor to stop instruction execution
and enter an implementation-dependent optimized state until occurrence of a
class of events.

The opcode of the MONITORX instruction is "0F 01 FA"; the opcode of the MWAITX
instruction is "0F 01 FB". This opcode information is used in adding tests for the
disassembler.

These instructions are enabled for AMD's bdver4 architecture.

Patch by Ganesh Gopalasubramanian!

Reviewers: echristo, craig.topper, RKSimon
Subscribers: RKSimon, joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D19795

llvm-svn: 269911
2016-05-18 11:59:12 +00:00
Hans Wennborg 759af30109 Revert r269828 "X86: Avoid using _chkstk when lowering WIN_ALLOCA instructions"
Seems to have broken the Windows ASan bot. Reverting while investigating.

llvm-svn: 269833
2016-05-17 20:38:56 +00:00
Hans Wennborg c3fb51171e X86: Avoid using _chkstk when lowering WIN_ALLOCA instructions
This patch moves the expansion of WIN_ALLOCA pseudo-instructions
into a separate pass that walks the CFG and lowers the instructions
based on a conservative estimate of the offset between the stack
pointer and the lowest accessed stack address.

The goal is to reduce binary size and run-time costs by removing
calls to _chkstk. While it doesn't fix all the code quality problems
with inalloca calls, it's an incremental improvement for PR27076.

Differential Revision: http://reviews.llvm.org/D20263

llvm-svn: 269828
2016-05-17 20:13:29 +00:00
Michael Kuperstein ac2088d122 [X86] Remove transformVSELECTtoBlendVECTOR_SHUFFLE
The new X86 shuffle lowering can do just fine without transforming vselects
into vector_shuffles. It looks like the only thing this code does right now
is cause trouble - in particular, it can lead to combine/legalization infinite
loops.

Note that it's not completely NFC, since some of the shuffle masks get inverted,
which may cause slight differences further down the line. We may want to find
a way to invert those masks, but that's orthogonal to this commit.

This fixes the hang in PR27689.

llvm-svn: 269676
2016-05-16 18:27:00 +00:00
Simon Pilgrim 0d05484db6 Fixed unused variable warning
llvm-svn: 269650
2016-05-16 11:48:54 +00:00
Simon Pilgrim 265995ef53 [X86][SSSE3] Lower vector CTLZ with PSHUFB lookups
This patch uses PSHUFB to lower vector CTLZ and avoid (slower) scalarizations.

The leading zero count of each 4-bit nibble of the vector is determined by using a PSHUFB lookup. Pairs of results are then repeatedly combined up to the original element width.
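
A scalar model of the per-nibble lookup (illustrative only; the vector code evaluates 16 lanes at once with PSHUFB):

```
#include <cstdint>

// LUT[n] = number of leading zeros of the 4-bit value n.
uint8_t ctlz8(uint8_t b) {
  static const uint8_t LUT[16] = {4, 3, 2, 2, 1, 1, 1, 1,
                                  0, 0, 0, 0, 0, 0, 0, 0};
  uint8_t hi = b >> 4;
  return hi ? LUT[hi] : uint8_t(4 + LUT[b & 0xF]); // ctlz8(0) == 8
}
```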

Differential Revision: http://reviews.llvm.org/D20016

llvm-svn: 269646
2016-05-16 11:19:11 +00:00