Commit Graph

21769 Commits

Author SHA1 Message Date
Simon Pilgrim 7b32e8b96a [X86] combineSetCCAtomicArith - pull out repeated ops. NFCI.
Reduces diff in D101074
2021-04-23 14:19:24 +01:00
Wang, Pengfei 151e244fe6 [X86][AMX][NFC] Make comparison operators to be complete
The previous D101039 didn't fix the SmallSet insertion issue, because we
always returned false when comparing two different non-null BBs.
This patch makes the comparison complete by comparing `MBB` first, so
that a single operator always yields an invariant order.
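A minimal sketch of the "compare `MBB` first" ordering described above (field names are illustrative, not the actual MIRef layout):

```
// Illustrative only: ordering by the parent block before the position
// within the block yields one strict weak ordering across different BBs.
struct MIRefSketch {
  int BlockNumber; // stand-in for the owning MachineBasicBlock's number
  int Position;    // position of the instruction within that block
  bool operator<(const MIRefSketch &RHS) const {
    if (BlockNumber != RHS.BlockNumber)
      return BlockNumber < RHS.BlockNumber; // compare the block first
    return Position < RHS.Position;         // then the position inside it
  }
};
```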
2021-04-23 17:38:54 +08:00
Wang, Pengfei 53673fd1bf [X86][AMX][NFC] Avoid assert for the same immediate value
The previous condition in the assert was overly strict. We ought to allow
the same immediate value to be loaded more than once. The intention of the
assert is to check whether the same AMX register uses multiple different
immediate shapes. So this fix is supposed to be NFC.

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D101124
2021-04-23 12:17:00 +08:00
Wang, Pengfei 90118563ad [X86][AMX] Try to hoist AMX shapes' def
We require that there be no intersection between AMX instructions and their
shapes' defs when we insert ldtilecfg. However, this does not always hold,
both because users don't follow the AMX API model and because of optimizations.

This patch adds a mechanism that tries to hoist AMX shapes' defs as well.
It only hoists shapes within a BB; we can extend it to cases across BBs in
the future. Currently, it only hoists shapes whose sources are all defined
above the first AMX instruction. We can also improve the case where the only
source below an AMX instruction is one that moves an immediate value into a register.

Differential Revision: https://reviews.llvm.org/D101067
2021-04-23 12:17:00 +08:00
Wang, Pengfei e8bce83996 [X86] Enable compilation of user interrupt handlers.
Add __uintr_frame structure and use UIRET instruction for functions with
x86 interrupt calling convention when UINTR is present.

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D99708
2021-04-23 11:43:57 +08:00
Wang, Pengfei aafb6d81cf [X86][AMX][NFC] Remove assert for comparison between different BBs.
SmallSet may use operator `<` when we insert MIRef elements, so we
cannot forbid comparisons between different BBs.

We allow MIRef() to compare less than any initialized MIRef object; otherwise,
we would always return false when comparing between different BBs.

Differential Revision: https://reviews.llvm.org/D101039
2021-04-22 20:41:59 +08:00
Simon Pilgrim a511b55cfd [X86][SSE] getFauxShuffleMask - don't decode OR(SHUFFLE,SHUFFLE) containing UNDEFs. (PR50049)
PR50049 demonstrated an infinite loop between OR(SHUFFLE,SHUFFLE) <-> BLEND(SHUFFLE,SHUFFLE) patterns.

The UNDEF elements were allowing a combined shuffle mask to be widened, which lost the undef element, resulting in us needing to use the BLEND pattern (as the undef element would need to be zero for the OR pattern). But then bitcast folds would re-expose the undef element, allowing us to use OR again.
2021-04-21 18:47:00 +01:00
Anirudh Prasad 8f6185c713 [AsmParser][ms][X86] Fix possible misbehaviour in parsing of special tokens at start of string.
- Previously, https://reviews.llvm.org/D72680 introduced a new attribute called `AllowSymbolAtNameStart` (in relation to the MAsmParser changes) in `MCAsmInfo.h` which (according to the comment in the header) allows the following behaviour:

```
  /// This is true if the assembler allows $ @ ? characters at the start of
  /// symbol names. Defaults to false.
```

- However, the usage of this field in AsmLexer.cpp doesn't seem completely accurate* for a couple of reasons.

```
  default:
    if (MAI.doesAllowSymbolAtNameStart()) {
      // Handle Microsoft-style identifier: [a-zA-Z_$.@?][a-zA-Z0-9_$.@#?]*
      if (!isDigit(CurChar) &&
          isIdentifierChar(CurChar, MAI.doesAllowAtInName(),
                           AllowHashInIdentifier))
        return LexIdentifier();
    }
```

1. The Dollar and At tokens, when occurring at the start of the string, are treated as separate tokens (AsmToken::Dollar and AsmToken::At respectively) and not lexed as an Identifier.
2. I'm not too sure why `MAI.doesAllowAtInName()` is used when `AllowAtInIdentifier` could be used. For X86 platforms, afaict, this shouldn't be an issue, since the `CommentString` attribute isn't "@". (Alternatively, the call to the setter can be placed anywhere else as needed.) The `AllowAtInName` attribute does have an additional important meaning, but in the context of AsmLexer it shouldn't mean anything different compared to `AllowAtInIdentifier`.

My proposal is the following:

- Introduce 3 new fields called `AllowQuestionTokenAtStartOfString`, `AllowDollarTokenAtStartOfString` and `AllowAtTokenAtStartOfString` in MCAsmInfo.h which will encapsulate the previously documented behaviour of "allowing $, @, ? characters at the start of symbol names".
- Introduce these fields where "$", "@" are lexed, and treat them as identifiers depending on whether `Allow[Dollar|At]TokenAtStartOfString` is set.
- For the sole case of "?", append it to the existing logic for treating a "default" token as an Identifier.

z/OS (HLASM) will also make use of some of these fields in follow up patches.

completely accurate* - This was based on the comments and the intended behaviour of the code. I might have completely misinterpreted it, and if that is the case my sincere apologies. We can close this patch if necessary, if there are no changes to be made :)

Depends on https://reviews.llvm.org/D99374

Reviewed By: Jonathan.Crowther

Differential Revision: https://reviews.llvm.org/D99889
2021-04-21 10:21:09 -04:00
Philip Reames 4824d876f0 Revert "Allow invokable sub-classes of IntrinsicInst"
This reverts commit d87b9b81cc.

Post commit review raised concerns, reverting while discussion happens.
2021-04-20 15:38:38 -07:00
Philip Reames d87b9b81cc Allow invokable sub-classes of IntrinsicInst
It used to be that all of our intrinsics were call instructions, but over time, we've added more and more invokable intrinsics. According to the verifier, we're up to 8 right now. As IntrinsicInst is a sub-class of CallInst, this puts us in an awkward spot where the idiomatic means to check for an intrinsic has a false negative if the intrinsic is invoked.

This change switches IntrinsicInst from being a sub-class of CallInst to being a subclass of CallBase. This allows invoked intrinsics to be instances of IntrinsicInst, at the cost of requiring a few more casts to CallInst in places where the intrinsic really is known to be a call, not an invoke.
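A hedged illustration (not code from this patch) of what the subclass change enables: the idiomatic `dyn_cast<IntrinsicInst>` now also matches invoked intrinsics, since it can operate on any `CallBase`.

```
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;

// After this change an invoked intrinsic is an IntrinsicInst too, so one
// check covers both calls and invokes.
static bool isAnyLifetimeMarker(const CallBase &CB) {
  if (const auto *II = dyn_cast<IntrinsicInst>(&CB))
    return II->getIntrinsicID() == Intrinsic::lifetime_start ||
           II->getIntrinsicID() == Intrinsic::lifetime_end;
  return false;
}
```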

After this lands and has baked for a couple days, planned cleanups:
    Make GCStatepointInst an IntrinsicInst subclass.
    Merge intrinsic handling in InstCombine and use idiomatic visitIntrinsicInst entry point for InstVisitor.
    Do the same in SelectionDAG.
    Do the same in FastISel.

Differential Revision: https://reviews.llvm.org/D99976
2021-04-20 15:03:49 -07:00
Simon Pilgrim 2a419a0b99 [X86][SSE] combineX86ShuffleChain - check if we're blending with zero into already zero elements
Add a SelectionDAG::MaskedElementsAreZero helper that wraps SelectionDAG::MaskedValueIsZero testing for entirely zero vector elements
2021-04-20 17:09:49 +01:00
Nick Desaulniers c440b97d89 [TargetLowering] move "o" and "X" constraint handling to base class
These constraints are machine agnostic; there's no reason to handle
these per-arch. If arches don't support these constraints, then they
will fail elsewhere during instruction selection. We don't need virtual
calls to look these up; TargetLowering::getInlineAsmMemConstraint should
only be overridden by architectures with additional unique memory
constraints.

Reviewed By: echristo, MaskRay

Differential Revision: https://reviews.llvm.org/D100416
2021-04-19 10:53:31 -07:00
Roman Lebedev df9597cf5a
[X86][CostModel] X86TTIImpl::getShuffleCost(): subvector insertions are cheap
This is similar to the subvector extractions,
except that the 0'th subvector isn't free to insert,
because we generally don't know whether or not
the upper elements need to be preserved:
https://godbolt.org/z/rsxP5W4sW

This is needed to avoid regressions in D100684

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D100698
2021-04-19 13:24:58 +03:00
Serge Guelton d6de1e1a71 Normalize interaction with boolean attributes
Such attributes can either be unset, or set to "true" or "false" (as a string).
Throughout the codebase, this led to inelegant checks ranging from

        if (Fn->getFnAttribute("no-jump-tables").getValueAsString() == "true")

to

        if (Fn->hasAttribute("no-jump-tables") && Fn->getFnAttribute("no-jump-tables").getValueAsString() == "true")

Introduce a getValueAsBool that normalizes the check, with the following
behavior:

no attributes or attribute set to "false" => return false
attribute set to "true" => return true

Differential Revision: https://reviews.llvm.org/D99299
2021-04-17 08:17:33 +02:00
Roman Lebedev b06c55a698
[X86][CostModel] Fix cost model for non-power-of-two vector load/stores
Sometimes LV has to produce really wide vectors,
and sometimes they end up not being powers of two.
As can be seen from the diff, the cost computation
is currently completely nonsensical in those cases.

Instead of just scalarizing everything, split/factorize the wide vector
into a number of subvectors, each having a power-of-two number of elements,
and recurse to get the cost of the op on each subvector. Also, check how we'd
legalize each subvector, and if the legalized type is scalar,
also account for the scalarization cost.

Note that for sub-vector loads, we might be able to do better,
when the vectors are properly aligned.
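A rough sketch of the factorization described above (my illustration, not the actual X86TTIImpl code; `costOfPow2Subvector` is a hypothetical stand-in for the per-subvector legalization/cost query):

```
// Split a non-power-of-two element count into power-of-two chunks and sum
// the per-chunk costs, mirroring the recursion described above.
unsigned costOfPow2Subvector(unsigned NumElts); // hypothetical helper

unsigned costForWidth(unsigned NumElts) {
  unsigned Cost = 0;
  while (NumElts != 0) {
    unsigned Pow2 = 1;
    while (Pow2 * 2 <= NumElts) // largest power of two <= NumElts
      Pow2 *= 2;
    Cost += costOfPow2Subvector(Pow2);
    NumElts -= Pow2; // continue on the remainder
  }
  return Cost;
}
```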

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D100099
2021-04-16 15:30:57 +03:00
Simon Pilgrim 9d57a77b81 [X86] combineCMP - fold cmpEQ/NE(TRUNC(X),0) -> cmpEQ/NE(X,0)
If we are truncating from an i32 source before comparing the result against zero, then see if we can directly compare the source value against zero.

If the upper (truncated) bits are known to be zero then we can compare against that, hopefully increasing the chances of us folding the compare into an EFLAGS result of the source's operation.
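A source-level illustration of the fold (my example, not from the patch): once the upper bits are known zero, the truncated compare and the full-width compare are the same test.

```
#include <cstdint>

// cmpeq(trunc(x), 0): compare the low 8 bits after truncation.
bool cmpTrunc(uint32_t X) { return (uint8_t)(X & 0xFF) == 0; }

// Equivalent full-width compare, valid because bits 8..31 are known zero.
bool cmpFull(uint32_t X) { return (X & 0xFF) == 0; }
```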

Fixes PR49028.

Differential Revision: https://reviews.llvm.org/D100491
2021-04-15 13:55:51 +01:00
Sander de Smalen 4f42d873c2 [TTI] NFC: Change getArithmeticInstrCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100317
2021-04-14 17:20:36 +01:00
Sander de Smalen 1af35e77f4 [TTI] NFC: Change getVectorInstrCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100315
2021-04-14 17:20:35 +01:00
Sander de Smalen 174e8f6c5e [TTI] NFC: Change getShuffleCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100314
2021-04-14 17:20:35 +01:00
Sander de Smalen 14b934f8a6 [TTI] NFC: Change getCFInstrCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: samparker

Differential Revision: https://reviews.llvm.org/D100313
2021-04-14 17:20:34 +01:00
Simon Pilgrim 4fbe761572 [X86][SSE] canonicalizeShuffleWithBinOps - check for more combos of merge-able binary shuffles.
In the fold SHUFFLE(BINOP(X,Y),BINOP(Z,W)) -> BINOP(SHUFFLE(X,Z),SHUFFLE(Y,W)), check if both X/Z AND Y/W have at least one merge-able shuffle, in which case the total number of shuffles should still fall.

Helps with instruction count regressions we saw while fixing PR48823
2021-04-14 15:24:41 +01:00
Simon Pilgrim 73737fe990 [X86] Fold cmpeq/ne(trunc(x),0) --> cmpeq/ne(x,0)
Relax the fold from rGbaadbe04bf75 to compare any op, not just logic ops, now that the movmsk regressions have been handled.
2021-04-14 11:02:02 +01:00
Simon Pilgrim 016ceb8382 [X86][SSE] combineSetCCMOVMSK - allow comparison with upper (known zero) bits in MOVMSK(SHUFFLE(X,u)) -> MOVMSK(X) fold
As an extension to rG74f98391a7a4, we can also include any of the upper (known zero) bits in the comparison in the shuffle removal fold, just as long as we demand all the elements of the movmsk source vector.
2021-04-14 11:02:01 +01:00
Bogdan Graur 0acf4e5005
[NFC] Fix unused warning.
Differential Revision: https://reviews.llvm.org/D100449
2021-04-14 09:09:20 +02:00
Wang, Pengfei a3b52a9d13 [X86][AMX] Refactor for PostRA ldtilecfg pass.
This is a follow-up to D99010. We didn't consider the live range of shape registers when hoisting ldtilecfg. There may be risks, e.g. we may happen to insert it into an invalid range of some registers and get an unexpected error.

This patch fixes this problem by storing the value to the corresponding stack slot of ldtilecfg immediately after each of its definitions.

This patch also fixes a problem in the previous code: if we don't have a ldtilecfg which dominates all AMX instructions, we cannot initialize shapes for other ldtilecfg.

There are still some optimization opportunities left, e.g. eliminating unused mov instructions, breaking the def-use dependency before RA, etc.

Reviewed By: LuoYuanke, xiangzhangllvm

Differential Revision: https://reviews.llvm.org/D99966
2021-04-14 10:08:23 +08:00
Simon Pilgrim 74f98391a7 [X86][SSE] combineSetCCMOVMSK - allow comparison with upper (known zero) bits in CMP(MOVMSK(PACKSS())) -> CMP(MOVMSK()) fold
We already allow the comparison of the upper bits of 'IsAllOf' (allbits) patterns, but we can safely compare the known zero bits for 'IsAnyOf' (zerobits) patterns as well.

This fixes an issue where we are comparing a type wider than the number of vector elements, which avoids a regression mentioned in rGbaadbe04bf75.
2021-04-13 17:37:24 +01:00
Sander de Smalen 03f47bdcb1 [TTI] NFC: Change get[Interleaved]MemoryOpCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100205
2021-04-13 14:21:02 +01:00
Sander de Smalen d676b5749d [TTI] NFC: Change getMaskedMemoryOpCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100204
2021-04-13 14:21:01 +01:00
Sander de Smalen db134e2428 [TTI] NFC: Change getCmpSelInstrCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100203
2021-04-13 14:21:01 +01:00
Sander de Smalen 2285dfb73f [TTI] NFC: Change getMinMaxReductionCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100202
2021-04-13 14:21:00 +01:00
Sander de Smalen bd86824d98 [TTI] NFC: Change getArithmeticReductionCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

This patch is practically NFC, with the exception of an AArch64 SVE related
cost-model change, where we can now return an Invalid cost instead of some
bogus number.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100201
2021-04-13 14:20:59 +01:00
Sander de Smalen fd1f8a5462 [TTI] NFC: Change getGatherScatterOpCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100200
2021-04-13 14:20:59 +01:00
Sander de Smalen 92d8421f49 [TTI] NFC: Change getCastInstrCost and getExtractWithExtendCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D100199
2021-04-13 14:20:58 +01:00
Freddy Ye 3fc1fe8db8 [X86] Support -march=rocketlake
Reviewed By: skan, craig.topper, MaskRay

Differential Revision: https://reviews.llvm.org/D100085
2021-04-13 09:48:13 +08:00
Simon Pilgrim baadbe04bf [X86] Fold cmpeq/ne(trunc(logic(x)),0) --> cmpeq/ne(logic(x),0)
Fixes the issues noted in PR48768, where the and/or/xor instruction had been promoted to avoid i8/i16 partial-dependencies, but the test against zero had not.

We can almost certainly relax this fold to work for any truncation, although it breaks a number of existing folds (notably movmsk folds, which tend to rely on the truncate to determine the demanded bits/elts in the source vector).

There is a reverse combine in TargetLowering.SimplifySetCC so we must wait until after legalization before attempting this.
2021-04-12 16:05:34 +01:00
Wang, Pengfei 4cbaaf4a24 [X86][AMX] Hoist ldtilecfg
The previous code calculated the first ldtilecfg by dominating all AMX registers' def. This may result in the ldtilecfg being inserted into a loop.

This patch tries to calculate the nearest point where all shapes of AMX registers are reachable.

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D99010
2021-04-12 22:36:41 +08:00
Bing1 Yu 747111ea71 [X86] Pass to transform tdpbsud&tdpbusd&tdpbuud intrinsics to scalar operation
Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D99244
2021-04-12 13:58:14 +08:00
Freddy Ye 5cb47be410 [X86] Remove FeatureCLWB from FeaturesICLClient
Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D100279
2021-04-12 12:08:59 +08:00
Simon Pilgrim 231b87618b [X86][AVX512] Fold not(kmov(x)) -> kmov(not(x)) and not(widen_subvector(x)) -> widen_subvector(not(x))
Improve AVX512 mask inversion, rG38c799bce801 exposed some missing opportunities to move scalar not() back onto the boolvector types for folding with setcc etc.
2021-04-11 20:07:09 +01:00
Simon Pilgrim 13bdac5709 [X86] combineXor - Pull out repeated getOperand() calls. NFCI. 2021-04-11 19:01:59 +01:00
Simon Pilgrim 38c799bce8 [X86] Fold cmpeq/ne(and(X,Y),Y) --> cmpeq/ne(and(~X,Y),0)
Followup to D100177, handle a similar (De Morgan inverse style) case from PR47797 as well

The AVX512 test cases could be further improved if we folded not(iX bitcast(vXi1)) -> (iX bitcast(not(vXi1)))

Alive2: https://alive2.llvm.org/ce/z/AnA_-W
2021-04-11 18:42:01 +01:00
dfukalov 8f4b7e94a2 [AMDGPU][CostModel] Refine cost model for control-flow instructions.
Added cost estimation for the switch instruction, updated costs of branches, and fixed
phi cost.
Had to increase the `-amdgpu-unroll-threshold-if` default value since conditional
branch cost (size) was corrected to a higher value.
Test renamed to "control-flow.ll".

Removed redundant code in `X86TTIImpl::getCFInstrCost()` and
`PPCTTIImpl::getCFInstrCost()`.

Reviewed By: rampitec

Differential Revision: https://reviews.llvm.org/D96805
2021-04-10 09:20:24 +03:00
Simon Pilgrim d8bc4de3cf [X86] Fold cmpeq/ne(or(X,Y),X) --> cmpeq/ne(and(~X,Y),0) on non-BMI targets (PR44136)
Followup to D100177, enable the fold for non-BMI targets as well.
2021-04-09 16:11:11 +01:00
Simon Pilgrim 245036950a [X86][BMI] Fold cmpeq/ne(or(X,Y),X) --> cmpeq/ne(and(~X,Y),0) (PR44136)
I've initially just enabled this for BMI, which has the ANDN instruction for i32/i64 - the i16/i8 cases give an idea of what we'd get when we enable it in all cases (I'll do this as a later commit).

Additionally, the i16/i8 cases could be freely promoted to i32 (as the args are already zeroext) and we could then make use of ANDN + the free cmp0 there as well - this has come up in PR48768 and PR49028 so I'm going to look at this soon.

https://alive2.llvm.org/ce/z/QVWHP_
https://alive2.llvm.org/ce/z/pLngT-

Vector cases do not appear to benefit from this as we end up with having to generate the zero vector as well - this is one of the reasons I didn't try to tie this into hasAndNot/hasAndNotCompare.
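The identity behind the fold, checked exhaustively for i8 (my demonstration, consistent with the Alive2 proofs above):

```
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned XI = 0; XI < 256; ++XI)
    for (unsigned YI = 0; YI < 256; ++YI) {
      uint8_t X = XI, Y = YI;
      bool LHS = (uint8_t)(X | Y) == X;  // cmpeq(or(X,Y), X)
      bool RHS = (uint8_t)(~X & Y) == 0; // cmpeq(and(~X,Y), 0) -- one ANDN
      assert(LHS == RHS);
    }
  return 0;
}
```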

Differential Revision: https://reviews.llvm.org/D100177
2021-04-09 15:52:03 +01:00
dfukalov d066079728 [NFC][AA] Prepare to convert AliasResult to class with PartialAlias offset.
The main reason is preparation for transforming AliasResult into a class that contains
an offset for the PartialAlias case.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D98027
2021-04-09 12:54:22 +03:00
Simon Pilgrim 3ae0a405fc [X86] combineHorizOpWithShuffle - peek through one use bitcasts when decoding shuffles.
Checking for one use, peeking through bitcasts of the horizop args allows us to merge shuffles of different widths through the horizop.
2021-04-09 10:51:04 +01:00
Simon Pilgrim 302e748065 [X86] Improve optimizeCompareInstr for signed comparisons after AND/OR/XOR instructions
Extend D94856 to handle 'and', 'or' and 'xor' instructions as well

We still fail on many i8/i16 cases as the test and the logic-op are performed on different widths
2021-04-07 14:28:42 +01:00
Simon Pilgrim 583258723f [X86] Improve optimizeCompareInstr for signed comparisons after BZHI instructions
Extend D94856 to handle 'bzhi' instructions as well
2021-04-07 12:07:26 +01:00
Nicolás Alvarez a1aada75f5 [docs] Fix doxygen comments wrongly attached to the llvm namespace
Looking at the Doxygen-generated documentation for the llvm namespace
currently shows all sorts of random comments from different parts of the
codebase. These are mostly caused by:

- File doc comments that aren't marked with \file, so they're attached to
  the next declaration, which is usually "namespace llvm {".
- Class doc comments placed before the namespace rather than before the
  class.
- Code comments before the namespace that (in my opinion) shouldn't be
  extracted by doxygen at all.

This commit fixes these comments. The generated doxygen documentation now
has proper docs for several classes and files, and the docs for the llvm
and llvm::detail namespaces are now empty.

Reviewed By: thakis, mizvekov

Differential Revision: https://reviews.llvm.org/D96736
2021-04-07 01:20:18 +02:00
Simon Pilgrim 53283cc2f1 [X86][SSE] canonicalizeShuffleWithBinOps - add MOVSD/MOVSS handling. 2021-04-06 16:42:18 +01:00
Simon Pilgrim 1dcb5b5e89 [X86] Improve optimizeCompareInstr for signed comparisons after ANDN instructions
Extend D94856 to handle 'andn' instructions as well
2021-04-06 14:16:16 +01:00
Simon Pilgrim 201877d572 [CostModel][X86] Improve accuracy of vXi8 multiply reduction costs
After rG47321c311bdbe0145b9bf45d822185c37b19fa50 we promote vXi8 reductions to vXi16 to create a much faster PMULLW mul reduction, followed by a (free) truncation. This avoids the high cost of repeated vXi8 multiplications (which extend+multiply+truncate to/from vXi16 types...).

Fixes the missing vXi8 mul reduction vectorization in PR42674 (Comment #20) 'mul16' test case.
2021-04-06 11:53:22 +01:00
Simon Pilgrim ddbb58736a [KnownBits] Rename KnownBits::computeForMul to KnownBits::mul. NFCI.
As promised in D98866
2021-04-06 10:11:41 +01:00
Simon Pilgrim 36d4f6d7f8 [X86] Fold xor(zext(xor(x,c1)),c2) -> xor(zext(x),xor(zext(c1),c2))
Fixes PR47603 (second case) by extending rG89afec348dbd3e5078f176e978971ee2d3b5dec8
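The underlying identity (my example): zero-extension distributes over xor, so the two constants can be folded together after the extend.

```
#include <cstdint>

// xor(zext(xor(x, c1)), c2) ...
uint32_t foldBefore(uint8_t X, uint8_t C1, uint32_t C2) {
  return (uint32_t)(uint8_t)(X ^ C1) ^ C2;
}

// ... equals xor(zext(x), xor(zext(c1), c2)).
uint32_t foldAfter(uint8_t X, uint8_t C1, uint32_t C2) {
  return (uint32_t)X ^ ((uint32_t)C1 ^ C2);
}
```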
2021-04-05 11:40:37 +01:00
Roman Lebedev 7727cc242d
[NFC][X86] Split VPMOV* AVX2 instructions into their own sched class
At least on all three Zen generations, all such instructions cleanly map
into this new class with no overrides needed.
2021-04-03 22:39:07 +03:00
Nikita Popov 665065821e [FastISel] Remove kill tracking
This is a followup to D98145: As far as I know, tracking of kill
flags in FastISel is just a compile-time optimization. However,
I'm not actually seeing any compile-time regression when removing
the tracking. This probably used to be more important in the past,
before FastRA was switched to allocate instructions in reverse
order, which means that it discovers kills as a matter of course.

As such, the kill tracking doesn't really seem to serve a purpose
anymore, and just adds additional complexity and potential for
errors. This patch removes it entirely. The primary changes are
dropping the hasTrivialKill() method and removing the kill
arguments from the emitFast methods. The rest is mechanical fixup.

Differential Revision: https://reviews.llvm.org/D98294
2021-04-03 15:50:13 +02:00
Simon Pilgrim 89afec348d [X86] Fold xor(truncate(xor(x,c1)),c2) -> xor(truncate(x),xor(truncate(c1),c2))
Fixes PR47603

This should probably be transferable to DAGCombine - the main limitation with the existing trunc(logicop) DAG fold is we don't know if legalization has tried to promote truncated logicops already. We might be able to peek through extensions as well.
2021-04-03 12:43:05 +01:00
Simon Pilgrim 7c17f1ea84 [X86][SSE] isHorizontalBinOp - use getTargetShuffleInputs helper (REAPPLIED)
Use the getTargetShuffleInputs helper for all shuffle decoding

Reapplied (after reversion in rGfa0aff6d6960) with fix+test for subvector splitting - we weren't accounting for peeking through bitcasts changing the vector element count of the shuffle sources.
2021-04-03 11:59:19 +01:00
Nico Weber fa0aff6d69 Revert "[X86][SSE] isHorizontalBinOp - use getTargetShuffleInputs helper"
This reverts commit 500969f1d0.
Makes clang assert compiling avx2 code, see
https://bugs.chromium.org/p/chromium/issues/detail?id=1195353#c4
for a standalone repro.
2021-04-02 09:55:55 -04:00
Simon Pilgrim 500969f1d0 [X86][SSE] isHorizontalBinOp - use getTargetShuffleInputs helper
Use the getTargetShuffleInputs helper for all shuffle decoding
2021-04-02 11:50:18 +01:00
Yang Fan bc6001ce1e
[X86] Fix -Wunused-function warning (NFC)
GCC warning:
```
/llvm-project/llvm/lib/Target/X86/X86ISelLowering.cpp:9212:13: warning: ‘bool isHorizOp(unsigned int)’ defined but not used [-Wunused-function]
 9212 | static bool isHorizOp(unsigned Opcode) {
      |             ^~~~~~~~~
```
2021-04-02 09:38:12 +08:00
Mircea Trofin ce61def529 [regalloc] Ensure Query::collectInterferringVregs is called before interval iteration
The main part of the patch is the change in RegAllocGreedy.cpp: Q.collectInterferringVregs()
needs to be called before iterating the interfering live ranges.

The rest of the patch ensures that is the case: instead of clearing the query's
InterferingVRegs field, we invalidate it. The clearing happens when the live reg matrix
is invalidated (existing triggering mechanism).

Without the change in RegAllocGreedy.cpp, the compiler ICEs.

This patch should make it more easily discoverable by developers that
collectInterferringVregs needs to be called before iterating.
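A hedged sketch of the required calling pattern (abbreviated; the exact query API may differ from this shape):

```
// Populate the interference list before iterating it, per this patch.
Q.collectInterferingVRegs();               // must come first
for (const LiveInterval *Intf : Q.interferingVRegs()) {
  // ... examine each interfering live range ...
}
```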

I will follow up with a subsequent patch to improve the usability and maintainability of Query.

Differential Revision: https://reviews.llvm.org/D98232
2021-04-01 08:33:28 -07:00
Simon Pilgrim abbe80fa52 [X86][SSE] Fold HOP(HOP(X,X),HOP(Y,Y)) -> HOP(PERMUTE(HOP(X,Y)),PERMUTE(HOP(X,Y)))
For slow-hop targets, attempt to merge HADD/SUB pairs used in chains.
2021-04-01 11:54:10 +01:00
Simon Pilgrim 301319840e [X86][SSE] Enable (F)HADD/SUB handling to SimplifyMultipleUseDemandedVectorElts
Attempt to bypass unused horiz-op operands.

This is very similar to the PACKSS/PACKUS handling - we should try to merge these.
2021-04-01 11:54:09 +01:00
Simon Pilgrim f7aeaced65 [X86][SSE] Add isHorizOp helper function. NFCI. 2021-04-01 11:54:09 +01:00
YangKeao 1c268a8ff4 [X86] add dwarf annotation for inline stack probe
While probing the stack, the stack register is moved without DWARF
information, which could cause a panic when unwinding the backtrace.
This commit only adds the annotation for the inline stack probe case.
DWARF information for the loop case should be done in another
patch and needs further discussion.

Reviewed By: nagisa

Differential Revision: https://reviews.llvm.org/D99579
2021-04-01 00:32:50 +03:00
Craig Topper 437958d9fd [X86] Improve SMULO/UMULO codegen for vXi8 vectors.
The default expansion creates a MUL and either a MULHS/MULHU. Each
of those separately expand to sequences that use one or more
PMULLW instructions as well as additional instructions to
extend the types to vXi16. The MULHS/MULHU expansion computes the
whole 16-bit product, but only keeps the high part.

We can improve the lowering of SMULO/UMULO for some cases by using the MULHS/MULHU
expansion but keeping both the high and low parts, and using
those parts to calculate the overflow.

For AVX512 we might have vXi1 overflow outputs. We can improve those by using
vpcmpeqw to produce a k register if AVX512BW is enabled. This is a little better
than truncating the high result to use vpcmpeqb. If we don't have avx512bw we
can extend up to v16i32 to use vpcmpeqd to produce a k register.
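A scalar model of the improved lowering (my sketch, not the DAG code): compute the whole 16-bit product once, keep both halves, and derive the overflow flag from them.

```
#include <cstdint>

// Signed 8-bit multiply-with-overflow via one full 16-bit product.
bool smuloI8(int8_t A, int8_t B, int8_t &Lo) {
  int16_t P = (int16_t)(A * B); // full product: low and high halves together
  Lo = (int8_t)P;               // the multiply result (low part)
  // Overflow iff the high half differs from the sign-extension of the low.
  return (P >> 8) != (Lo >> 7);
}
```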

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D97624
2021-03-31 10:13:50 -07:00
Craig Topper 50b8634a99 [X86] Improve optimizeCompareInstr for signed comparisons after BMI/TBM instructions
We previously couldn't optimize out a TEST if the branch/setcc/cmov
used the overflow flag. This patch allows the TEST to be removed
if the flag-producing instruction is known to clear the OF flag.
That's what the TEST instruction would have done, so that should be
equivalent.

Need to add test cases. I'll try to get back to this if I have bandwidth.

Fixes PR48768.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D94856
2021-03-31 09:45:29 -07:00
Sander de Smalen 2f6f249a49 NFC: Change getIntrinsicInstrCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Depends on D97468

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D97469
2021-03-31 14:04:41 +01:00
Sander de Smalen 2f56e1c6b1 NFC: Change getTypeBasedIntrinsicCost to return InstructionCost
This patch migrates the TTI cost interfaces to return an InstructionCost.

See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html

Depends on D97466

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D97468
2021-03-31 14:04:41 +01:00
Roman Lebedev ce548aa236
[X86] AMD Zen 3 has macro fusion
This is an improvement over Zen 2, where only branch fusion is supported,
as per Agner, 21.4 Instruction fusion.
AMD SOG 17h has no mention of fusion.

AMD SOG 19h, 2.9.3 Branch Fusion
The following flag writing instructions support branch fusion
with their reg/reg, reg/imm and reg/mem forms
* CMP
* TEST
* SUB
* ADD
* INC (no fusion with branches dependent on CF)
* DEC (no fusion with branches dependent on CF)
* OR
* AND
* XOR

Agner, 22.4 Instruction fusion
<...> This applies to CMP, TEST, ADD, SUB, AND, OR, XOR, INC, DEC and
all conditional jumps, except if the arithmetic or logic instruction has a rip-relative address or
both an address displacement and an immediate operand.
2021-03-31 14:31:50 +03:00
Tomas Matheson a9968c0a33 [NFC][CodeGen] Tidy up TargetRegisterInfo stack realignment functions
Currently needsStackRealignment returns false if canRealignStack returns false.
This means that the behavior of needsStackRealignment does not correspond to
its name and description; a function might need stack realignment, but if it
is not possible then this function returns false. Furthermore,
needsStackRealignment is not virtual and therefore some backends have made use
of canRealignStack to indicate whether a function needs stack realignment.

This patch attempts to clarify the situation by separating them and introducing
new names:

 - shouldRealignStack - true if there is any reason the stack should be
   realigned

 - canRealignStack - true if we are still able to realign the stack (e.g. we
   can still reserve/have reserved a frame pointer)

 - hasStackRealignment = shouldRealignStack && canRealignStack (not target
   customisable)
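A compact sketch of how these three relate (signatures illustrative, not the actual TargetRegisterInfo declarations):

```
class MachineFunction; // forward declaration for the sketch

struct RealignmentSketch {
  virtual ~RealignmentSketch() = default;
  virtual bool shouldRealignStack(const MachineFunction &MF) const = 0;
  virtual bool canRealignStack(const MachineFunction &MF) const = 0;
  // Non-virtual combination: realign iff it is both wanted and possible.
  bool hasStackRealignment(const MachineFunction &MF) const {
    return shouldRealignStack(MF) && canRealignStack(MF);
  }
};
```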

Targets can now override shouldRealignStack to indicate that stack realignment
is required.

This change will make it easier in a future change to handle the case where we
need to realign the stack but can't do so (for example when the register
allocator creates an aligned spill after the frame pointer has been
eliminated).

Differential Revision: https://reviews.llvm.org/D98716

Change-Id: Ib9a4d21728bf9d08a545b4365418d3ffe1af4d87
2021-03-30 17:31:39 +01:00
Sanjay Patel e694e19a79 [x86] enhance matching of pmaddwd
This was crashing with the example from:
https://llvm.org/PR49716
...and that was avoided with a283d72583,
but as we can see from the SSE vs. AVX test code diff,
we can try harder to match the pattern.

This matcher code was adapted from another pmadd pattern
match in D49636, but it needs different ops to deal with
size mismatches.

Differential Revision: https://reviews.llvm.org/D99531
2021-03-30 07:28:33 -04:00
Bing1 Yu 0c63b862c4 Revert "[X86] Pass to transform tdpbsud&tdpbusd&tdpbuud intrinsics to scalar operation"
This reverts commit 275df61f04.
2021-03-30 16:33:07 +08:00
Bing1 Yu 275df61f04 [X86] Pass to transform tdpbsud&tdpbusd&tdpbuud intrinsics to scalar operation
Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D99244
2021-03-30 16:21:10 +08:00
Nikita Popov 7669455df4 [X86][FastISel] Fix with.overflow eflags clobber (PR49587)
If the successor block has a phi node, then additional moves may
be inserted into predecessors, which may clobber eflags. Don't try
to fold the with.overflow result into the branch in that case.

This is done by explicitly checking for any phis in successor
blocks, not sure if there's some more principled way to address
this. Other fused compare and branch patterns avoid the issue by
emitting the comparison when handling the branch, so that no
instructions may be inserted in between. In this case, the
with.overflow call is emitted separately (and I don't think this
is avoidable, as it will generally have at least two users).

Fixes https://bugs.llvm.org/show_bug.cgi?id=49587.

Differential Revision: https://reviews.llvm.org/D98600
2021-03-29 23:08:47 +02:00
Craig Topper 54bacaf311 [X86] Always use rip-relative addressing on 64-bit when rematerializing all zeros/ones registers using a folded load.
Previously we only used RIP-relative addressing when PIC was enabled. But
we know we're in the small/kernel code model here, so we should
always be able to use RIP-relative addressing, which will give a smaller
encoding.

Here's a godbolt link that demonstrates the current codegen https://godbolt.org/z/j3158o
Note in the non-PIC version the load from .LCPI0_0 doesn't use
RIP-relative addressing, but if you change the constant in the
source from 0.0 to 1.0 it will become RIP-relative.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D97208
2021-03-29 10:06:17 -07:00
Simon Pilgrim 805148eaf2 [X86][SSE] combineHorizOpWithShuffle - consistently use getTargetShuffleInputs to decode shuffles
Minor cleanup before I start trying to merge the unary/binary shuffle combining paths.
2021-03-29 11:31:19 +01:00
Craig Topper 69bdf35dc7 [X86] Optimize vXi8 MULHS on targets where we can't sign_extend to the next register size.
For these cases we need to extract the upper or lower elements,
multiply them using 16-bit multiplies and repack them.

Previously we used punpcklbw/punpckhbw+psraw or pmovsxbw+pshufd to
extract and sign extend so we could use pmullw to compute the 16-bit
product and then shift down the high bits.

We can avoid the need to sign extend if we unpack the bytes into
the high byte of each word and fill the lower byte with 0 using
pxor. This puts the sign bit of each byte into the sign bit of
each word. Since the LHS and RHS have 8 trailing zeros, the full
32-bit product of those 16-bit values will have 16 trailing zeros.
This means the 16-bit product of the original bytes is in the upper
16 bits which we can calculate using pmulhw.
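A scalar model of the unpack trick (my sketch): with each byte placed in the high byte of a 16-bit lane, PMULHW's high-16 result is exactly the 16-bit product of the original bytes.

```
#include <cstdint>

// Returns the full signed 16-bit product of two signed bytes, computed the
// way the vector lowering does: shift each byte into the high byte of a
// 16-bit lane (preserving its sign bit), multiply, take the high 16 bits.
int16_t mulViaHighBytes(int8_t A, int8_t B) {
  int16_t WA = (int16_t)((uint16_t)(uint8_t)A << 8); // byte in high half
  int16_t WB = (int16_t)((uint16_t)(uint8_t)B << 8);
  return (int16_t)(((int32_t)WA * WB) >> 16);        // what PMULHW yields
}
```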

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D98587
2021-03-28 11:41:29 -07:00
Simon Pilgrim 2a0d5da917 [X86][SSE] foldShuffleOfHorizOp - remove broadcast handling.
Remove VBROADCAST/MOVDDUP/splat-shuffle handling from foldShuffleOfHorizOp

This can all be handled by canonicalizeShuffleMaskWithHorizOp as long as we check that the HADD/SUB are only used once (to prevent infinite loops on slow-horizop targets which will try to reuse the nodes again followed by a post-hop shuffle).
2021-03-27 15:09:23 +00:00
Simon Pilgrim 41146bfe82 [X86][SSE] combineX86ShuffleChain - attempt to recognise 'hidden' identity shuffles
See if the combined shuffle mask is equivalent to an identity shuffle, typically this is due to repeated LHS/RHS ops in horiz-ops, but isTargetShuffleEquivalent might see other patterns as well.

This is another small step towards getting rid of foldShuffleOfHorizOp and relying on canonicalizeShuffleMaskWithHorizOp and generic shuffle combining.
2021-03-27 11:09:30 +00:00
Sanjay Patel a283d72583 [x86] prevent crashing while matching pmaddwd
This could crash in 2 ways: either one or both of
the input vectors could be a different size than
the math ops.

https://llvm.org/PR49716
2021-03-27 05:27:14 -04:00
Simon Pilgrim c769ba9514 [X86][AVX] combineHorizOpWithShuffle - improve SHUFFLE(HOP(LOSUBVECTOR(X),HISUBVECTOR(X))) folding
Peek through bitcasts to find subvector splits and use getTargetShuffleInputs to decode target shuffles as well as ShuffleVectorSDNode
2021-03-26 17:23:54 +00:00
Simon Pilgrim 36e3c6c841 [X86][AVX] Truncate vectors with PACKSS/PACKUS on AVX2 targets
Until AVX512 we don't have any vector truncation instructions, and always lower using shuffles instead.

combineVectorTruncation performs this earlier than lowering as it makes it easier to use any sign/zero-extended bits in the truncated bits with PACKSS/PACKUS to perform the shuffle.

We currently don't attempt to use combineVectorTruncation on AVX2 targets as in the past 256-bit PACKSS/PACKUS tended to cause 128-bit lane shuffle regressions - but these should now be all resolved with combineHorizOpWithShuffle and in all cases we now reduce the amount of cross-lane shuffling and variable shuffle mask usage.

Differential Revision: https://reviews.llvm.org/D96609
2021-03-25 10:34:34 +00:00
Simon Pilgrim 9fde88c3e2 [X86][AVX] splitIntVSETCC - handle separate (canonicalized) SETCC operands
LowerVSETCC calls splitIntVSETCC after canonicalizing certain patterns, in particular (X & CPow2 != 0) -> (X & CPow2 == CPow2).

Unfortunately if we're splitting for AVX1/non-AVX512BW cases, we lose these canonicalizations as we call the split with the original SetCC node, and when the split nodes are later lowered in LowerVSETCC the patterns are lost behind extract_subvector etc. But if we pass the canonicalized operands for splitting we retain the optimizations.

Differential Revision: https://reviews.llvm.org/D99256
2021-03-25 10:18:44 +00:00
Sander de Smalen 55d18b3cc2 [TTI] Return a TypeSize from getRegisterBitWidth.
This patch changes the interface to take a RegisterKind, to indicate
whether the register bitwidth of a scalar register, fixed-width vector
register, or scalable vector register must be returned.

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D98874
2021-03-24 14:45:13 +00:00
Simon Pilgrim 7920527796 [X86][AVX] combineBitcastvxi1 - improve handling of vectors truncated to vXi1
If we're truncating to vXi1 from a wider type, then prefer the original wider vector, as it simplifies folding the separate truncations + extensions.

On AVX1 this is only worth it for v8i1 cases, not v4i1, where we're always better off truncating down to v4i32 for movmsk.

Helps with some regressions encountered in D96609
2021-03-24 14:05:59 +00:00
Simon Pilgrim e9015bd595 [X86][AVX] lowerShuffleAsBroadcast - MOVDDUP(SCALAR_TO_VECTOR(X)) -> BROADCAST(X)
Prefer broadcast from scalar on AVX targets as this makes it easier for later folds to strip away bitcasts etc.

This helps a lot with the AVX1 poor codegen from PR49658.

There's a trivial regression in bitcast-int-to-vector-bool-*ext.ll tests due to SimplifyDemandedBits not being able to see a multi-use case, but there's bigger existing codegen issues to be addressed first in those tests (unnecessary NOTs).
2021-03-24 11:31:56 +00:00
Simon Pilgrim c1ef642ad8 [X86] Remove unused 'OneUse' option from IsNOT helper. NFCI. 2021-03-24 11:14:38 +00:00
Craig Topper 6204ac4536 [X86] Bail out of X86FastISel::X86SelectCmp for vectors.
None of the code in this function was written to handle
vectors.  Most of the cases already fail for vectors for one
reason or another. The exception is an optimization that
detects identical operands. This can be triggered by vectors,
but the code always creates a 0 or 1 constants in a scalar
register which is incorrect for vectors.

Fixes PR49706.
2021-03-23 20:16:04 -07:00
Simon Pilgrim 080cb83e52 [X86][AVX] Narrow VPBROADCASTQ->VPBROADCASTD if we don't need the upper bits.
Helps fix cases where we've splatted smaller types to a wider vector element type without needing the upper bits.

Avoid this on AVX512 targets as that can affect broadcast folding.
2021-03-23 09:41:02 +00:00
serge-sans-paille e617cf9576 [NFC] Restore original SmallString size for X86TargetMachine::getSubtargetImpl lookup
Better safe than sorry here, quoting Craig Topper:

> Clang passes a pretty lengthy feature string.
2021-03-22 19:19:46 +01:00
serge-sans-paille b2f7ce91a6 [NFC] Simpler and faster key computation for getSubtargetImpl memoization
There's no use in computing a large key that's only used for a memoization
optimization.
2021-03-22 10:02:51 +01:00
Bing1 Yu 113f077f80 [X86] Pass to transform tdpbf16ps intrinsics to scalar operation.
In the previous patch https://reviews.llvm.org/D93594, we only scalarized tilezero, tileload, tilestore and tiledpbssd. In this patch we scalarize the tdpbf16ps intrinsic.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D96110
2021-03-22 13:00:40 +08:00
Simon Pilgrim 3179588947 [X86][AVX] ComputeNumSignBitsForTargetNode - add X86ISD::VBROADCAST handling for scalar sources
The target shuffle code handles vector sources, but X86ISD::VBROADCAST can also accept a scalar source for splatting.

Added as an extension to PR49658
2021-03-21 12:22:51 +00:00
Simon Pilgrim 297b9bc3fa [X86][AVX] computeKnownBitsForTargetNode - add X86ISD::VBROADCAST handling for scalar sources
The target shuffle code handles vector sources, but X86ISD::VBROADCAST can also accept a scalar source for splatting.

Suggested by @craig.topper on PR49658
2021-03-21 10:40:57 +00:00
Simon Pilgrim 54a05f2ec8 [X86] computeKnownBitsForTargetNode - add X86ISD::PMULUDQ handling
Reuse the existing KnownBits multiplication code to handle what is effectively an ISD::UMUL_LOHI variant
2021-03-21 09:57:20 +00:00
Wang, Pengfei 2327513b85 [X86] Fix a bug when calculating the ldtilecfg insertion points.
The BB in which we initialize the ldtilecfg is special. We don't need to
check whether its predecessor BBs need to insert ldtilecfg for calls.

We reused the flag HasCallBeforeAMX, so that the predecessors won't be
added to CfgNeedInsert.

This case happens only when the entry BB is in a loop. We need to hoist
the first tile config point out of the loop in the future.

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D98845
2021-03-20 17:48:59 +08:00
Fangrui Song c241659d15 [X86] Fix -Wunused-function in -DLLVM_ENABLE_ASSERTIONS=off builds 2021-03-18 23:22:58 -07:00
Bing1 Yu 0002d4bf36 [X86][AMX][NFC] Give correct Passname for Tile Register Pre-configure 2021-03-18 17:15:44 +08:00
Luo, Yuanke e64adc0b88 [X86] Fix compile time regression of D93594.
D93594 depends on the dominator tree and loop information. It increased
the compile time when building with -O0. However, it only needs them to amend the
dominator tree and loop information, so it is unnecessary to
re-analyze them. Given that the dominator tree and loop information are
absent in this pass, we can avoid amending them.

Differential Revision: https://reviews.llvm.org/D98773
2021-03-18 16:52:43 +08:00
David Green e2935dcfc4 [TTI] Add a Mask to getShuffleCost
This adds a Mask ArrayRef to getShuffleCost, so that if an exact mask
can be provided, a more accurate cost can be provided by the backend.
For example, VREV costs could be returned by the ARM backend. This should
be NFC until then, laying the groundwork for that to be added.

Differential Revision: https://reviews.llvm.org/D98206
2021-03-17 17:46:26 +00:00
Simon Pilgrim 64687f2cc3 [X86][SSE] canonicalizeShuffleWithBinOps - add PERMILPS/PERMILPD + PERMPD/PERMQ + INSERTPS handling.
Bail if the INSERTPS would introduce zeros across the binop.
2021-03-16 13:52:08 +00:00
Bing1 Yu 4f198b0c27 [X86] Pass to transform amx intrinsics to scalar operation.
This pass runs in all situations, but we skip it when the optimization level is
not O0 and the function doesn't have the optnone attribute. With -O0, the def of a shape
for amx intrinsics is near the amx intrinsics code. We are not able to find a
point which post-dominates all the shapes and dominates all amx intrinsics.
To decouple the dependency on the shape, we transform amx intrinsics
to scalar operations, so that compiling doesn't fail. In the long term, we
should improve fast register allocation to allocate amx registers.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D93594
2021-03-16 10:40:22 +08:00
Fangrui Song 5d44c92bf8 Change void getNoop(MCInst &NopInst) to MCInst getNop()
Prefer (self-documenting) return values to output parameters (which are
liable to be misused).
While here, rename Noop to Nop, which is more widely used and improves
consistency with hasEmitNops/setEmitNops/emitNop/etc.
2021-03-15 12:05:34 -07:00
Simon Pilgrim 772155793b [X86][SSE] isHorizontalBinOp - ensure we clear any unused source operands to improve HADD/SUB matching
Our shuffle matching for HADD/SUB patterns wasn't clearing repeated ops in 'fake unary' style shuffle masks (unpack(x,x) etc.), preventing matching of add(fakeunary(),fakeunary()) style patterns.
2021-03-15 16:24:29 +00:00
Simon Pilgrim 814339454d [X86][SSE] canonicalizeShuffleWithBinOps - handle target shuffles.
Fold SHUFFLE(BINOP(SHUFFLE(X),SHUFFLE(Y))) -> BINOP(SHUFFLE'(X),SHUFFLE'(Y)) style patterns as well as the existing shuffles of constants.
2021-03-15 15:01:29 +00:00
Simon Pilgrim 07232f4507 [X86][SSE] canonicalizeShuffleWithBinOps - add X86ISD::PSHUFB handling.
Recommit rGcd938ab162b0ac560dd0e9fee290980c7e0e47e5 with an early-out if the pshub would introduce zeros across the binop.
2021-03-15 12:43:30 +00:00
Simon Pilgrim 75a184dacf Revert rG9ba577eca2e339726bfaad4e615c6324a705b292 "[X86][SSE] canonicalizeShuffleWithBinOps - handle target shuffles. NFCI."
Sorry this wasn't supposed to be committed yet (and certainly not tagged as NFCI....)
2021-03-15 12:23:44 +00:00
Simon Pilgrim 9ba577eca2 [X86][SSE] canonicalizeShuffleWithBinOps - handle target shuffles. NFCI.
Fold SHUFFLE(BINOP(SHUFFLE(X),SHUFFLE(Y))) -> BINOP(SHUFFLE'(X),SHUFFLE'(Y)) style patterns as well as the existing shuffles of constants.
2021-03-15 11:59:25 +00:00
Simon Pilgrim 6878be5dc3 [X86][SSE] Attempt to merge single-op hops for slow targets.
For slow-hop targets, see if any single-op hops are duplicating work already done on another (dual-op) hop, which can sometimes occur as isHorizontalBinOp tries to find potential duplicates (but can't merge them itself). If so, reuse the other hop and shuffle the result.
2021-03-15 09:30:20 +00:00
Saleem Abdulrasool c9fce5f0c3 X86: adjust the windows 64 calling convention for Swift
Adjust the Win64 calling convention for Swift to pass self in R13, which
is traditionally a CSR.  This makes the behaviour similar to the SysV CC
for Swift as well.  This should improve the argument passing on Windows,
although it comes at a high cost of ABI incompatibility.  Fortunately in
this case, there is no guarantee of ABI stability, and so we can make
this incompatible change.
2021-03-13 16:53:20 -08:00
Simon Pilgrim 6cb7dddaf4 [X86][AVX] Insert zero byte elements into 256/512-bit vectors using shuffle/and
Avoid extracting/inserting subvectors which makes it more difficult for shuffle combining to merge them together.
2021-03-12 15:16:36 +00:00
Simon Pilgrim 33dcdd414c [X86] Provide lighter weight getTargetShuffleMask wrapper. NFCI.
Most callers to getTargetShuffleMask don't use the IsUnary flag.
2021-03-12 15:16:35 +00:00
Matt Arsenault 6b76d82853 GlobalISel: Fix marking byval arguments as immutable
byval arguments need to be assumed writable. Only implicitly stack
passed arguments which aren't addressable in the IR can be assumed
immutable.

Mips is still broken since for some reason it's doing its own thing
with the ValueHandlers (and x86 doesn't actually handle byval
arguments now, although some of the code is there).
2021-03-12 09:01:53 -05:00
Simon Pilgrim bc5e9ec2dc Revert rGcd938ab162b0ac560dd0e9fee290980c7e0e47e5 "[X86] canonicalizeShuffleWithBinOps - add X86ISD::PSHUFB handling."
Investigating an issue reported by @bkramer, possibly when the PSHUFB mask generates zero elements.
2021-03-11 13:14:00 +00:00
Simon Pilgrim 77394c12a4 [X86] Don't attempt to fold sub(C1, xor(X, C2)) with opaque constants
Fixes PR49451
2021-03-11 12:06:40 +00:00
Simon Pilgrim d0884541cc [X86] canonicalizeShuffleWithBinOps - add binary shuffle handling 2021-03-09 13:57:03 +00:00
Liu, Chen3 b70e02a7e7 [X86][NFC] Move instruction selection of the x86_tdpb[s,u]d_internal and x86_tilezero_internal to X86InstrAMX.td
Differential Revision: https://reviews.llvm.org/D97997
2021-03-09 21:27:39 +08:00
Liu, Chen3 3618b21298 [X86][NFC] Adding one flag to imply whether the instruction should check the predicate when compressing EVEX instructions to VEX encoding.
Some EVEX instructions should check the predicates when compressed to VEX
encoding. For example, avx512vnni instructions. This is because avx512vnni
doesn't mean that avxvnni is supported on the target.
This patch moves the manually added check into the .inc file generated by TableGen.

Differential Revision: https://reviews.llvm.org/D98011
2021-03-09 19:58:01 +08:00
Simon Pilgrim f71cee136d [X86] Break if-else chain. NFCI.
Both if blocks affect control flow - we don't need the else.

Fixes clang-tidy warning.
2021-03-08 11:44:31 +00:00
Freddy Ye 5f9489b754 [X86] Refine "Support -march=alderlake"
Refine "Support -march=alderlake"
Compared with Tremont, it includes 25 new features. They are
adx, aes, avx, avx2, avxvnni, bmi, bmi2, cldemote, f16c, fma, hreset, invpcid,
kl, lzcnt, movdir64b, movdiri, pclmulqdq, pconfig, pku, serialize, shstk, vaes,
vpclmulqdq, waitpkg, widekl.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D97832
2021-03-08 13:17:18 +08:00
Simon Pilgrim cd938ab162 [X86] canonicalizeShuffleWithBinOps - add X86ISD::PSHUFB handling. 2021-03-07 12:56:35 +00:00
Simon Pilgrim 772a501bf4 [X86] canonicalizeShuffleWithBinOps - shuffle oneuse constants.
We can freely shuffle all ones/zeros constants but we can also freely shuffle other constants as long as they only have one use.
2021-03-07 11:17:03 +00:00
Alexey Lapshin cf7cdaff64 [X86][VARARG] Avoid spilling xmm registers for va_start.
This review is extracted from D69372.
It fixes the https://bugs.llvm.org/show_bug.cgi?id=42219 bug.

For the noimplicitfloat mode, the compiler mustn't generate
floating-point code if it was not asked directly to do so.
This rule does not currently work with variable function arguments.
Though the compiler correctly guards the block of code which copies xmm vararg
parameters with a check of %al, it does not protect the spills of xmm registers.
Thus, such spills are generated in non-protected areas and could break code
which does not expect floating-point data. The problem happens in the -O0
optimization mode. At this optimization level the
FastRegisterAllocator is used, which spills virtual registers at basic block boundaries.
The register allocator does not protect spills with additional control-flow modifications.
Thus, to resolve that problem, it is suggested not to copy incoming physical
registers into virtual registers. Instead, store incoming physical xmm registers
into memory from scratch.

Differential Revision: https://reviews.llvm.org/D80163
2021-03-06 15:25:47 +03:00
Fangrui Song 4f7562d52f [MC][X86] Support .reloc *, BFD_RELOC_{NONE,8,16,32,64}, *
The names are unfortunate, but BFD_RELOC_NONE provides a generic way of indicating
a dependency between two sections, which is useful for ld --gc-sections.
See https://sourceware.org/bugzilla/show_bug.cgi?id=27530
2021-03-05 21:31:05 -08:00
Simon Pilgrim 87d5b34c24 [X86] X86ISelDAGToDAG.cpp - include cstdint instead of stdint.h NFCI.
Fixes clang-tidy warning
2021-03-05 15:58:20 +00:00
Simon Pilgrim f11f86c114 [X86] X86DAGToDAGISel::Select - merge X86::TEST load bitsize checks. NFCI. 2021-03-05 15:58:20 +00:00
Stephen Tozer f677413071 Reapply "[DebugInfo] Add new instruction and DIExpression operator for variadic debug values"
Rewrites test to use correct architecture triple; fixes incorrect
reference in SourceLevelDebugging doc; simplifies `spillReg` behaviour
so as to not be dependent on changes elsewhere in the patch stack.

This reverts commit d2000b45d0.
2021-03-05 12:32:05 +00:00
Simon Pilgrim 3fd2fa1220 Revert rG8198d83965ba4b9db6922b44ef3041030b2bac39: "[X86] Pass to transform amx intrinsics to scalar operation."
This reverts commit 8198d83965ba4b9db6922b44ef3041030b2bac39 due to buildbot breakages.
2021-03-05 11:09:14 +00:00
Simon Pilgrim d7b8cb4d57 [X86] X86ISelLowering.cpp - try to use for-range loops. NFCI. 2021-03-05 11:09:14 +00:00
Luo, Yuanke 8198d83965 [X86] Pass to transform amx intrinsics to scalar operation.
This pass runs in all situations, but we skip it when the optimization level is
not O0 and the function doesn't have the optnone attribute. With -O0, the def of a shape
for amx intrinsics is near the amx intrinsics code. We are not able to find a
point which post-dominates all the shapes and dominates all amx intrinsics.
To decouple the dependency on the shape, we transform amx intrinsics
to scalar operations, so that compiling doesn't fail. In the long term, we
should improve fast register allocation to allocate amx registers.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D93594
2021-03-05 16:02:02 +08:00
Simon Pilgrim 7cbc5df438 [X86] X86TargetLowering::isSafeMemOpType - break if-else chain. NFCI.
All if-else blocks return - fixes clang-tidy warning.
2021-03-04 12:15:08 +00:00
Stephen Tozer d2000b45d0 Revert "[DebugInfo] Add new instruction and DIExpression operator for variadic debug values"
This reverts commit d07f106f4a.
2021-03-04 11:59:21 +00:00
gbtozers d07f106f4a [DebugInfo] Add new instruction and DIExpression operator for variadic debug values
This patch adds a new instruction that can represent variadic debug values,
DBG_VALUE_VAR. This patch alone covers the addition of the instruction and a set
of basic code changes in MachineInstr and a few adjacent areas, but does not
correctly handle variadic debug values outside of these areas, nor does it
generate them at any point.

The new instruction is similar to the existing DBG_VALUE instruction, with the
following differences: the operands are in a different order, any number of
values may be used in the instruction following the Variable and Expression
operands (these are referred to in code as “debug operands”) and are indexed
from 0 so that getDebugOperand(X) == getOperand(X+2), and the Expression in a
DBG_VALUE_VAR must use the DW_OP_LLVM_arg operator to pass arguments into the
expression.

The new DW_OP_LLVM_arg operator is only valid in expressions appearing in a
DBG_VALUE_VAR; it takes a single argument and pushes the debug operand at the
index given by the argument onto the Expression stack. For example the
sub-expression `DW_OP_LLVM_arg, 0` has the meaning “Push the debug operand at
index 0 onto the expression stack.”

Differential Revision: https://reviews.llvm.org/D82363
2021-03-04 11:45:35 +00:00
Simon Pilgrim 1584e55a26 [X86] canonicalizeShuffleWithBinOps - handle general unaryshuffle(binop(x,c)) patterns not just xor(x,-1)
Generalize the shuffle(not(x)) -> not(shuffle(x)) fold to handle any binop with 0/-1.

Hopefully we can further generalize to help push target unary/binary shuffles through binops similar to what we do in DAGCombiner::visitVECTOR_SHUFFLE
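
A scalar model of the identity being exploited (an illustration, not the
DAG code): shuffling after an elementwise binop with a constant is the
same as applying the binop after shuffling both the input and the constant.

#include <array>
#include <cassert>
#include <cstdint>

using V4 = std::array<uint32_t, 4>;

static V4 shuffle(const V4 &v, const std::array<int, 4> &m) {
  return {v[m[0]], v[m[1]], v[m[2]], v[m[3]]};
}

int main() {
  const V4 x = {1, 2, 3, 4}, c = {10, 20, 30, 40};
  const std::array<int, 4> m = {3, 1, 2, 0};

  // shuffle(add(x,c)) == add(shuffle(x), shuffle(c)); for xor(x,-1) the
  // shuffled constant is still all-ones, so shuffle(not(x)) == not(shuffle(x)).
  V4 lhs, rhs;
  for (int i = 0; i < 4; ++i) lhs[i] = x[i] + c[i];
  lhs = shuffle(lhs, m);
  rhs = shuffle(x, m);
  const V4 cs = shuffle(c, m);
  for (int i = 0; i < 4; ++i) rhs[i] += cs[i];
  assert(lhs == rhs);
}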
2021-03-04 10:44:38 +00:00
Simon Pilgrim aa4afebbf9 [X86] Fold scalar_to_vector(x) -> extract_subvector(broadcast(x),0) iff broadcast(x) exists
Add handling for reusing an existing broadcast(x) to a wider vector.
2021-03-03 15:50:37 +00:00
Amara Emerson 8a316045ed [AArch64][GlobalISel] Enable use of the optsize predicate in the selector.
To do this while supporting the existing functionality in SelectionDAG of using
PGO info, we add the ProfileSummaryInfo and LazyBlockFrequencyInfo analysis
dependencies to the instruction selector pass.

Then, use the predicate to generate constant pool loads for f32 materialization,
if we're targeting optsize/minsize.

Differential Revision: https://reviews.llvm.org/D97732
2021-03-02 12:55:51 -08:00
Benjamin Kramer 10c256ccaf Revert "[X86] Fold shuffle(not(x),undef) -> not(shuffle(x,undef))"
This reverts commit 925093d88a.

Causes an infinite loop when compiling some shuffles:

$ cat bugpoint-reduced-simplified.ll
target triple = "x86_64-unknown-linux-gnu"

define void @foo() {
entry:
  %0 = load i8, i8* undef, align 1
  %broadcast.splatinsert = insertelement <16 x i8> poison, i8 %0, i32 0
  %1 = icmp ne <16 x i8> %broadcast.splatinsert, zeroinitializer
  %2 = shufflevector <16 x i1> %1, <16 x i1> undef, <16 x i32> zeroinitializer
  %wide.load = load <16 x i8>, <16 x i8>* undef, align 1
  %3 = icmp ne <16 x i8> %wide.load, zeroinitializer
  %4 = and <16 x i1> %3, %2
  %5 = zext <16 x i1> %4 to <16 x i8>
  store <16 x i8> %5, <16 x i8>* undef, align 1
  ret void
}

$ llc < bugpoint-reduced-simplified.ll
<timeout>
2021-03-02 11:24:07 +01:00
Simon Pilgrim 925093d88a [X86] Fold shuffle(not(x),undef) -> not(shuffle(x,undef))
Move NOT out to expose more AND -> ANDN folds
2021-03-01 14:47:39 +00:00
Matt Arsenault 6c260d3bc0 GlobalISel: Move splitToValueTypes to generic code
I copied the nearly identical function from AArch64 into AMDGPU, so
fix this duplication.

Mips and X86 have their own more exotic versions which should be
removed. However replacing those is better left for a separate patch
since it requires other changes to avoid regressions.
2021-03-01 08:58:18 -05:00
Simon Pilgrim ab3ea27b6f [X86][AVX] Reuse existing VBROADCAST(x) for SCALAR_TO_VECTOR(x)
Similar to what we already do for BROADCASTs of different vector sizes - if we're going to broadcast it anyway might as well reuse it.
2021-02-28 11:37:27 +00:00
Craig Topper 993f4d8ffa [X86] Fix a couple comments that said LHS where they meant RHS. NFC 2021-02-27 17:14:17 -08:00
Wang, Pengfei 42e025f9de [X86] Disable rematerializion for PTILELOADDV
Per the discussion in D97453, we currently disable it because it's not a
common scenario and the implementation has some problems.

Differential Revision: https://reviews.llvm.org/D97453
2021-02-27 21:08:58 +08:00
James Y Knight 6de6455752 Use getAlign() on atomicrmw/cmpxchg instructions, now that it's available.
These locations were missed as part of adding alignment to the
instructions, and were still making their own alignment assumptions.
2021-02-26 15:06:15 -05:00
Simon Pilgrim ed1f45bce9 [X86][AVX] SimplifyDemandedBitsForTargetNode - add basic X86ISD::VBROADCAST handling.
Simplify through to the scalar/vector source operand.
2021-02-26 16:13:14 +00:00
Wang, Pengfei ad9091c5fa [X86] Allow PTILEZEROV and PTILELOADDV to be rematerializable
Spilling and reloading AMX registers are expensive. We allow PTILEZEROV
and PTILELOADDV to be rematerializable to avoid the register spilling.

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D97453
2021-02-26 21:55:59 +08:00
Simon Pilgrim 7ac4c956af [X86] Remove unnecessary custom lowering of vXi1 SADDSAT/SSUBSAT/UADDSAT/USUBSAT
As discussed on D97478. The removal of the custom tag causes some changes in the add/sub-overflow expansion as it no longer expands to sat-arith codegen.
2021-02-26 12:10:23 +00:00
Simon Pilgrim aefe8f2f6c [DAG] Fold vXi1 multiplies -> and
This allows us to remove X86 custom lowering of vXi1 MUL, which helps simplify a load of mask math.

Mentioned in D97478 post review.
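
The underlying identity is exhaustively checkable for 1-bit values (a
scalar sketch, not the DAG code):

#include <cassert>

int main() {
  // For i1 operands, multiply and AND coincide: 0*0=0&0, 0*1=0&1, 1*1=1&1.
  for (unsigned a = 0; a <= 1; ++a)
    for (unsigned b = 0; b <= 1; ++b)
      assert(a * b == (a & b));
}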
2021-02-26 11:46:12 +00:00
Simon Pilgrim 40b8b4a466 [X86] Remove unnecessary custom lowering of v16i1/v32i1 ADD/SUB
These were missed in D97478
2021-02-26 11:46:11 +00:00
Bill Wendling a9f9ceb35f [X86] Use correct padding when in 16-bit mode
In 16-bit mode, some of the nop patterns used in 32-bit mode can end up
mangling other instructions. For instance, an aligned "movz" instruction
may have the 0x66 and 0x67 prefixes omitted, because the nop that's used
messes things up.

       xorl    %ebx, %ebx
       .p2align 4, 0x90
       movzbl  (%esi,%ebx), %ecx

Instead, use nop patterns that we know 16-bit mode can handle.

Differential Revision: https://reviews.llvm.org/D97268
2021-02-25 20:05:45 -08:00
Craig Topper ceaedfb5fc [X86] Remove custom lowering of vXi1 ADD/SUB now that they are canonicalized to XOR in getNode.
Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D97478
2021-02-25 08:52:41 -08:00
Simon Pilgrim 8b82669d56 [X86][SSE] Move unaryshuffle(xor(x,-1)) -> xor(unaryshuffle(x),-1) fold into helper. NFCI.
We should be able to extend this "canonicalizeShuffleWithBinOps" to handle more generic binop cases where either/both operands can be cheaply shuffled.
2021-02-25 10:56:23 +00:00
Liu, Chen3 4bc7c8631a [X86] Support amx-bf16 intrinsic.
Adding support for intrinsics of AMX-BF16.
This patch also fixes a bug where AMX-INT8 instructions would be selected
with the wrong predicate.

Differential Revision: https://reviews.llvm.org/D97358
2021-02-25 09:06:48 +08:00
Liu, Chen3 f8b9035aae [X86] Support amx-int8 intrinsic.
Adding support for intrinsics of TDPBSUD/TDPBUSD/TDPBUUD.
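
For context, a hedged sketch of how these intrinsics are used from C/C++
(hypothetical helper, not from the patch; assumes the OS has granted AMX
state permission and that cfg holds a valid 64-byte tile configuration):

#include <immintrin.h> // compile with -mamx-tile -mamx-int8
#include <cstdint>

void dpbsud_tile(const int8_t *a, const uint8_t *b, int32_t *acc,
                 long stride, const void *cfg) {
  _tile_loadconfig(cfg);        // program tile shapes/palette
  _tile_loadd(0, acc, stride);  // tmm0: i32 accumulator tile
  _tile_loadd(1, a, stride);    // tmm1: signed i8 tile
  _tile_loadd(2, b, stride);    // tmm2: unsigned i8 tile
  _tile_dpbsud(0, 1, 2);        // tmm0 += dot-product(signed, unsigned)
  _tile_stored(0, acc, stride); // write the accumulator back
  _tile_release();              // clear tile state
}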

Differential Revision: https://reviews.llvm.org/D97259
2021-02-23 17:08:05 +08:00
Luo, Yuanke 8f48ddd193 [X86][AMX] Lower tile copy instruction.
Since there is no tile copy instruction, we need to store the tile
register to the stack and load it from the stack into another tile
register. We need an extra GPR to hold the stride and a stack slot to
hold the tile data. We run this pass after copy propagation, so that we
don't miss copy optimizations, and before prolog/epilog insertion, so
that we can allocate the stack slot.

Differential Revision: https://reviews.llvm.org/D97112
2021-02-23 07:49:42 +08:00
Simon Pilgrim b568d3d6c9 [X86] Add vector support to sub(C1, xor(X, C2)) -> add(xor(X, ~C2), C1+1) fold. 2021-02-21 21:51:27 +00:00
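A quick scalar check of the fold above (an illustration, not the DAG
code): it is the standard two's-complement identity -y == ~y + 1 combined
with ~(x ^ c) == x ^ ~c.

#include <cassert>
#include <cstdint>

int main() {
  const uint32_t C1 = 0xDEADBEEF, C2 = 0x0F0F0F0F;
  for (uint32_t i = 0; i < 65536; ++i) {
    uint32_t x = i * 0x9E3779B1u; // spread sample values across the range
    assert(C1 - (x ^ C2) == (x ^ ~C2) + C1 + 1); // holds mod 2^32
  }
}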
Simon Pilgrim 3ab32c94a4 [X86] Replace explicit constant handling in sub(C1, xor(X, C2)) -> add(xor(X, ~C2), C1+1) fold. NFCI.
NFC cleanup before adding vector support - rely on the SelectionDAG to handle everything for us.
2021-02-21 21:40:32 +00:00
Simon Pilgrim bae04a3e2d [X86][AVX] canonicalizeLaneShuffleWithRepeatedOps - remove unnecessary BITCASTs.
In conjunction with the 'vperm2x128(bitcast(x),bitcast(y),c) -> bitcast(vperm2x128(x,y,c))' fold in combineTargetShuffle, this should remove any unnecessary bitcasts around vperm2x128 lane shuffles.
2021-02-21 18:40:32 +00:00
Simon Pilgrim a6a258f1da [X86][AVX] Fold concat(extract_subvector(v0,c0), extract_subvector(v1,c1)) -> vperm2x128
Fixes regression exposed by removing bitcasts across logic-ops in D96206.

Differential Revision: https://reviews.llvm.org/D96206
2021-02-21 14:50:43 +00:00
Simon Pilgrim 2885d1251f [X86] Fold bitcast(logic(bitcast(X), Y)) --> logic'(X, bitcast(Y)) for int-int bitcasts
Extend the existing combine that handles bitcasting for fp-logic ops to also help remove logic ops across bitcasts to/from the same integer types.

This helps improve AVX512 predicate handling for D/Q logic ops and also allows DAGCombine's scalarizeExtractedBinop to remove some annoying gpr->simd->gpr transfers.

The concat_vectors regression in pr40891.ll will be addressed in a followup commit on this patch.

Differential Revision: https://reviews.llvm.org/D96206
2021-02-21 14:40:54 +00:00
Simon Pilgrim 761bbed264 [DAG] foldSubToUSubSat - fold sub(a,trunc(umin(zext(a),b))) -> usubsat(a,trunc(umin(b,SatLimit)))
This moves the last custom x86 USUBSAT fold to generic DAGCombine.

Completes PR40111

Differential Revision: https://reviews.llvm.org/D96703
2021-02-20 12:02:07 +00:00
Simon Pilgrim 2258b367db [X86][AVX] getFauxShuffleMask - decode VBROADCAST(EXTRACT_VECTOR_ELT(V,0))
Handle the case where we're broadcasting a scalar extracted from another vector.
2021-02-19 11:06:53 +00:00
Wang, Pengfei c98644c2ec [X86] Fix a codegen crash in getSetCCResultType
This patch fixes some crashes coming from
X86ISelLowering::getSetCCResultType, which would occasionally return
an EVT constructed from an invalid MVT, which has a null Type pointer.

This patch refers to D95434.

Differential Revision: https://reviews.llvm.org/D97036
2021-02-19 17:30:10 +08:00
Wang, Pengfei e9c11c1934 [X86] Zero AMX config buffer for non AVX512 cases.
Zero AMX config buffer for non AVX512 cases.

Differential Revision: https://reviews.llvm.org/D96927
2021-02-18 13:26:09 +08:00
Simon Pilgrim 05c64ea672 [DAG] Fold shuffle(bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d))) (REAPPLIED)
Fold shuffle(bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d))) -> bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d))

Attempt to fold from a shuffle of a pair of binops to a binop of shuffles, as long as one/both of the binop sources are also shuffles that can be merged with the outer shuffle. This should guarantee that we remove one binop without introducing any additional shuffles.

Technically there's potential for a merged shuffle's lowering to be poorer than the original shuffle, but it could also be better, and I'm not seeing any regressions as long as we keep the 'don't merge splats' rule already present in MergeInnerShuffle.

This expands and generalizes an existing X86 combine and attempts to merge either of each binop's sources (with an on-the-fly commutation of the shuffle mask) - we couldn't do that in the x86 version as it had to stay in a form that DAGCombine's MergeInnerShuffle would still recognise.

Fixes issue raised by @saugustine in rG5aa8f4c0843a where we were failing to replace null shuffle operands from MergeInnerShuffle to UNDEFs.

Differential Revision: https://reviews.llvm.org/D96345
2021-02-17 11:42:43 +00:00
Sterling Augustine 5aa8f4c084 Revert "[DAG] Fold shuffle(bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d)))"
This reverts commit 5dfba562dd.

That commit causes an assertion failure with the following repro:

typedef long b __attribute__((__vector_size__(16)));
b *d;
b e;
b __attribute__((__always_inline__)) c(b h, b i) {
  return (__attribute__((__vector_size__(8 * sizeof(short)))) short)h + i;
}
j() {
  b k, l, m, n, o[6], p, q;
  m = d[5];
  b r = m;
  b s = f(r, 8);
  q = s;
  l = d[1];
  p = l;
  t(q);
  n = c(m, l);
  o[1] = c(s, f(p, 8));
  k = __builtin_shufflevector(n, o[1], 0, 2);
  e = __builtin_ia32_psrlwi128(k, j);
}

./bin/clang -cc1 -triple x86_64-grtev4-linux-gnu -emit-obj -O1 -std=c99 test.c
2021-02-16 12:48:15 -08:00
Simon Pilgrim 5dfba562dd [DAG] Fold shuffle(bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d)))
Fold shuffle(bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d))) -> bop(shuffle(x,y),shuffle(z,w)),bop(shuffle(a,b),shuffle(c,d))

Attempt to fold from a shuffle of a pair of binops to a binop of shuffles, as long as one/both of the binop sources are also shuffles that can be merged with the outer shuffle. This should guarantee that we remove one binop without introducing any additional shuffles.

Technically there's potential for a merged shuffle's lowering to be poorer than the original shuffle, but it could also be better, and I'm not seeing any regressions as long as we keep the 'don't merge splats' rule already present in MergeInnerShuffle.

This expands and generalizes an existing X86 combine and attempts to merge either of each binop's sources (with an on-the-fly commutation of the shuffle mask) - we couldn't do that in the x86 version as it had to stay in a form that DAGCombine's MergeInnerShuffle would still recognise.

Differential Revision: https://reviews.llvm.org/D96345
2021-02-16 15:46:34 +00:00
Arlo Siemsen 080866470d Add ehcont section support
In the future Windows will enable Control-flow Enforcement Technology (CET aka shadow stacks). To protect the path where the context is updated during exception handling, the binary is required to enumerate valid unwind entrypoints in a dedicated section which is validated when the context is being set during exception handling.

This change allows llvm to generate the section that contains the appropriate symbol references in the form expected by the msvc linker.

This feature is enabled through a new module flag, ehcontguard, which was modelled on the cfguard flag.

The change includes a test that when the module flag is enabled the section is correctly generated.

The set of exception continuation information includes returns from exceptional control flow (catchret in llvm).

In order to collect catchret targets we:
1) include an additional flag on machine basic blocks to indicate that the given block is the target of a catchret operation,
2) introduce a new machine function pass to insert and collect symbols at the start of each block, and
3) combine these targets with the other EHCont targets that were already being collected.

Change originally authored by Daniel Frampton <dframpto@microsoft.com>

For more details, see MSVC documentation for `/guard:ehcont`
  https://docs.microsoft.com/en-us/cpp/build/reference/guard-enable-eh-continuation-metadata

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D94835
2021-02-15 14:27:12 +08:00
Kazu Hirata 910e2d1e57 [llvm] Use llvm::is_contained (NFC) 2021-02-14 08:36:20 -08:00
Simon Pilgrim 4841a225b7 [DAG] Move basic USUBSAT pattern matches from X86 to DAGCombine
Begin transitioning the X86 vector code to recognise sub(umax(a,b),b) or sub(a,umin(a,b)) USUBSAT patterns to make it more generic and available to all targets.

This initial patch just moves the basic umin/umax patterns to DAG, removing some vector-only checks on the way - these are some of the patterns that the legalizer will try to expand back to so we can be reasonably relaxed about matching these pre-legalization.

We can handle the trunc(sub(..))) variants as well, which helps with patterns where we were promoting to a wider type to detect overflow/saturation.

The remaining x86 code requires some cleanup first - some of it isn't actually tested etc. I also need to resurrect D25987.
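
Both patterns compute unsigned saturating subtraction; a scalar model,
checked exhaustively for i8 (an illustration, not the DAG code):

#include <algorithm>
#include <cassert>
#include <cstdint>

// usubsat(a,b) == max(a,b)-b == a-min(a,b) for unsigned a, b.
static uint8_t usubsat(uint8_t a, uint8_t b) { return a > b ? a - b : 0; }

int main() {
  for (unsigned a = 0; a < 256; ++a)
    for (unsigned b = 0; b < 256; ++b) {
      const uint8_t x = a, y = b;
      assert(uint8_t(std::max(x, y) - y) == usubsat(x, y));
      assert(uint8_t(x - std::min(x, y)) == usubsat(x, y));
    }
}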

Differential Revision: https://reviews.llvm.org/D96413
2021-02-12 18:22:57 +00:00
Sanjay Patel 79b1b4a581 [Vectorizers][TTI] remove option to bypass creation of vector reduction intrinsics
The vector reduction intrinsics started life as experimental ops, so backend support
was lacking. As part of promoting them to 1st-class intrinsics, however, codegen
support was added/improved:
D58015
D90247

So I think it is safe to now remove this complication from IR.

Note that we still have an IR-level codegen expansion pass for these as discussed
in D95690. Removing that is another step in simplifying the logic. Also note that
x86 was already unconditionally forming reductions in IR, so there should be no
difference for x86.

I spot checked a couple of the tests here by running them through opt+llc and did
not see any asm diffs.

If we do find functional differences for other targets, it should be possible
to (at least temporarily) restore the shuffle IR with the ExpandReductions IR
pass.

Differential Revision: https://reviews.llvm.org/D96552
2021-02-12 08:13:50 -05:00
Craig Topper 5189c5b940 [X86] Simplify patterns for avx512 vpcmp. NFC
This removes the commuted PatFrags that only existed to carry
an SDNodeXForm in its OperandTransform field. We know all the places
that need to use the commuted SDNodeXForm and there is one transform
shared by signed and unsigned compares. So just hardcode the
the SDNodeXForm where it is needed and use the non commuted PatFrag
in the pattern.

I think when I wrote this I thought the SDNodeXForm name had to
match what is in the PatFrag that is being used. But that's not
true. The OperandTransform is only used when the PatFrag is used
in an instruction pattern and not a separate Pat pattern. All
the commuted cases are Pat patterns.
2021-02-10 19:24:27 -08:00
Simon Pilgrim eb31c3c5cb Revert rGe1172959226689a "[X86][AVX] canonicalizeLaneShuffleWithRepeatedOps - merge VPERMILPD ops with different low/high masks."
Revert this while I investigate a downstream breakage report.
2021-02-10 10:26:44 +00:00
Matt Arsenault b72a23650f GlobalISel: Fix using wrong calling convention for callees
This was taking the calling convention from the parent function,
instead of the callee. Avoids regressions in a future patch when the
caller and callee have different type breakdowns.

For some reason AArch64's lowerFormalArguments seems to intentionally
ignore the parent isVarArg.
2021-02-09 13:48:56 -05:00
Simon Pilgrim 89d9ff8229 [X86][SSE] foldShuffleOfHorizOp - add SHUFPS v4f32 handling
Fold shufps(hop(x,y),hop(z,w)) -> permute(hop(x,z)) - this is very similar to the equivalent unpack fold.

I did start trying to convert foldShuffleOfHorizOp to handle generic shuffle masks but we're relying on a lot of special cases at the moment.
2021-02-09 14:18:45 +00:00
Simon Pilgrim 598ceb25d4 [X86][AVX] Fold extract_subvector(splat, c) -> extract_subvector(splat, 0)
We already do this for VBROADCASTs, extend this for any splat that SelectionDAG::isSplatValue recognises as well.
2021-02-07 11:42:41 +00:00
Craig Topper 6f4f0efd89 [X86] Don't pass a 1 to the second argument of ISD::FP_ROUND in LowerFCOPYSIGN.
I don't think we have any reason to believe the FP_ROUND here doesn't change the value.

Found while trying to see if we still need the fp128 block in CanCombineFCOPYSIGN_EXTEND_ROUND.
Removing that check caused this FP_ROUND to fire for fp128 which introduced a libcall expansion that asserted for this being a 1.

Reviewed By: RKSimon, pengfei

Differential Revision: https://reviews.llvm.org/D96098
2021-02-06 10:29:01 -08:00
Simon Pilgrim e117295922 [X86][AVX] canonicalizeLaneShuffleWithRepeatedOps - merge VPERMILPD ops with different low/high masks.
Now that PR48908 has been dealt with, we can handle v4f64 permute cases by extracting the low/high lane VPERMILPD masks and creating a new mask based on which lanes are referenced by the VPERM2F128 mask.
2021-02-06 15:58:02 +00:00
Matheus Izvekov 1ac98044df [X86] Generate unaligned access for fixed slots in unaligned stack
loadRegFromStackSlot()/storeRegToStackSlot() can generate aligned access
instructions for stack slots even if the stack is unaligned, based on the
assumption that the stack can be realigned.
However, this doesn't work for fixed slots, which are e.g. used for
spilling XMM registers in a non-leaf function with
`__attribute__((preserve_all))`.
When compiling such code with `-mstack-alignment=8`, this causes general
protection faults.

Fix it by only considering stack realignment for non-fixed slots.

Note that this changes the output of three existing tests which spill AVX
registers, since AVX requires higher alignment than the ABI provides on
stack frame entry.

Reviewed By: rnk, jyknight

Differential Revision: https://reviews.llvm.org/D73126
2021-02-05 11:36:54 +08:00
Craig Topper 11ef356d9e [TargetLowering] Use Align in allowsMisalignedMemoryAccesses.
Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D96097
2021-02-04 19:22:06 -08:00
Simon Pilgrim 31b85e1c0b [X86] Use VT::changeVectorElementType helper where possible. NFCI. 2021-02-04 15:03:56 +00:00
Simon Pilgrim fa2cdb8140 [X86] Remove stale TODO comment. NFC.
We now handle implicit zero-extension shuffle mask cases.
2021-02-04 12:14:05 +00:00
Simon Pilgrim 32b7c2fa42 [X86][SSE] Support variable-index float/double vector insertion on SSE41+ targets (PR47924)
Extends D95779 to permit insertion into float/double vectors while avoiding a lot of aliased memory traffic.

The scalar value is already on the simd unit, so we only need to transfer and splat the index value, then perform the select.

SSE4 codegen is a little bulky due to the tied register requirements of (non-VEX) BLENDPS/PD but the extra moves are cheap so shouldn't be an actual problem.
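
Conceptually, the lowering is a per-lane select against a splatted index;
a scalar model (an illustration, not the actual codegen):

#include <array>
#include <cassert>
#include <cstddef>

// Variable-index insert as broadcast + compare + blend: lane i takes the
// new scalar iff i == idx, with no store/reload through memory.
static std::array<float, 4> insert_var(std::array<float, 4> v, float s,
                                       std::size_t idx) {
  for (std::size_t i = 0; i < v.size(); ++i)
    v[i] = (i == idx) ? s : v[i];
  return v;
}

int main() {
  const auto r = insert_var({1.f, 2.f, 3.f, 4.f}, 9.f, 2);
  assert(r[2] == 9.f && r[0] == 1.f && r[3] == 4.f);
}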

Differential Revision: https://reviews.llvm.org/D95866
2021-02-03 14:14:35 +00:00
Wang, Pengfei fae6d129da [X86] Correct types in tablegen multiclasses found by D95874.
Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D95926
2021-02-03 16:05:05 +08:00
Simon Pilgrim 8c2e075c2c [X86][SSE] LowerINSERT_VECTOR_ELT - pull out repeated EltSizeInBits calls. NFCI. 2021-02-02 13:45:18 +00:00
Simon Pilgrim d46a6b3d55 [X86][AVX512] Support variable-index vector insertion on AVX512 targets (PR47924)
With predicate masks, AVX512 can efficiently perform variable-index vector insertion with 2 broadcasts + 1 comparison, avoiding a lot of aliased memory traffic.

Differential Revision: https://reviews.llvm.org/D95779
2021-02-02 11:41:18 +00:00
Andrew Ng 94fedd2661 [X86] Fix disassembly of x86-64 GDTLS code sequence
For x86-64 the REX.w prefix takes precedence over any other size
override (i.e. 0x66). Therefore, for x86-64 when REX.w is present set
'hasOpSize' to false to ensure that any size override is ignored.

Fixes PR48901.

Differential Revision: https://reviews.llvm.org/D95682
2021-02-02 11:35:00 +00:00
Simon Pilgrim 4d904776a7 [X86][AVX] Add missing VEX_WIG tags from VPACKUSDW/VPHSUBD/VPCMPISTRI/VPCMPISTRM/VPCMPESTRI/VPCMPESTRM
Fixes PR48877

Differential Revision: https://reviews.llvm.org/D95801
2021-02-02 11:25:44 +00:00
Philip Reames 46e764a628 [x86] introduce no_callee_saved_registers attribute
This is directly analogous to the existing no_caller_saved_registers, but with the opposite intention. A function or call so marked shifts the responsibility of spilling the usual CSRs to its caller.

An indirect call site and callee that don't agree on the attribute are ill-defined.

The motivation for this change is that being able to prune callee saves (without modifying other details of the calling convention) is sometimes useful when generating stubs and adapters.  There's no intention to expose this as a source language feature; this is expected to be used by frontends to implement adapters where warranted.

Some specific examples of use cases:
* GC compatible compiled code wants to call an externally defined library function without needing to track pointer values through CSRs.
* debug-enabled code wants to call a precompiled library which doesn't provide enough information to track CSRs while preserving debug quality in the caller.
* adapter stub entering hand written assembler which doesn't follow normal calling conventions.
2021-02-01 16:19:14 -08:00
Philip Reames 9d09db941f [NFC][X86] Use CallBase interface to simplify code 2021-02-01 15:24:41 -08:00
Philip Reames bb6c23b1f5 [NFC][X86] Avoid redundant work inspecting callee 2021-02-01 15:24:41 -08:00
Craig Topper c691fe14da [X86] Accept 64-bit GPRs for vextractps when using a register that requires EVEX.
This is consistent with the VEX version. It also fixes a sorting
issue in the matching table that caused the EVEX version to be
prioritized over VEX in intel syntax.

Fixes issue [2] from PR48991.
2021-02-01 11:01:32 -08:00
Simon Pilgrim e640b209b2 [X86][SSE] LowerScalarImmediateShift - use APInt::getLowBitsSet for vXi8 ISD::SRL mask generation. NFCI.
Match what we do for ISD::SHL
2021-02-01 18:17:40 +00:00
Simon Pilgrim 5211af4818 [X86][AVX] combineExtractWithShuffle - combine extracts from 256/512-bit vector shuffles.
We can only legally extract from the lowest 128-bit subvector, so extract the correct subvector to allow us to handle 256/512-bit vector element extracts.
2021-02-01 10:31:43 +00:00
Craig Topper ff46026897 [X86] Cleanup isel patterns to use 'vnot' instead of (xor X, immAllOnesV) to improve readability. NFC 2021-01-31 18:53:40 -08:00
Kazu Hirata 3d1200b9f6 [llvm] Drop unnecessary const from return types (NFC)
Identified with const-return-type.
2021-01-31 10:23:43 -08:00
Kazu Hirata 1a2d67fa23 [llvm] Use llvm::lower_bound and llvm::upper_bound (NFC) 2021-01-29 23:23:36 -08:00
Wang, Pengfei a5d9e0c79b [X86] Fix tile config register spill issue.
This is an optimized approach for D94155.

The previous code built a model in which the tile config register is a
use of each AMX instruction. That is a problem for spilling the tile
config register: across function calls, an ldtilecfg instruction may be
inserted at each AMX instruction that uses the tile config register,
clobbering all tile data registers.

To fix this issue, we remove the tile config register from the model.
Instead, we analyze the AMX instructions between one call and the next,
and insert ldtilecfg after the first call if we find any AMX instructions.

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D95136
2021-01-30 12:53:57 +08:00
Simon Pilgrim d6b68d1344 [X86][SSE] combineExtractWithShuffle - support zero-extending to allow extracting from narrow shuffle masks
If the shuffle mask can't be widened to match the original extracted element width, see if the upper bits are zeroable - which allows us to extract+zero-extend the smaller extraction.
2021-01-29 14:22:10 +00:00
Christudasan Devadasan 892e4567e1 Support a list of CostPerUse values
This patch allows targets to define multiple cost values for each
register so that the cost model can be more flexible and better
exploited during register allocation, as per the target's requirements.

For AMDGPU the VGPR allocation will be more efficient
if the register cost can be associated dynamically
based on the calling convention.

Reviewed By: qcolombet

Differential Revision: https://reviews.llvm.org/D86836
2021-01-29 10:14:52 +05:30
Simon Pilgrim f84efe97bc [X86][AVX] combineHorizOpWithShuffle - fix valuetype comparison typo.
Ensure we check the valuetypes of all the HOP(SHUFFLE(X,Y),SHUFFLE(X,Y)) shuffle input ops - there was a copy+paste typo (noticed by MSVC analyzer) that meant we were checking the same input from one of the shuffles twice.

I haven't been able to create a test case for this yet - I don't think it's currently possible to create a target/faux binary shuffle that scales to a 2x128 shuffle mask from two different value types.
2021-01-28 16:36:23 +00:00
Simon Pilgrim 6663330bc8 [X86][AVX] canonicalizeLaneShuffleWithRepeatedOps - don't merge VPERMILPD ops with different low/high masks.
Unlike VPERMILPS, VPERMILPD can have non-repeating masks in each 128-bit subvector; we weren't accounting for this when folding vperm2f128(vpermilpd(x,c),vpermilpd(y,c)) -> vpermilpd(vperm2f128(x,y),c).

I'm intending to add support for this but wanted to get a minimal fix in first for merging into 12.xx.

Fixes PR48908
2021-01-28 12:11:31 +00:00
Luo, Yuanke bf64918150 [X86][AMX] Prevent shape def being scheduled across ldtilecfg.
Differential Revision: https://reviews.llvm.org/D95582
2021-01-28 16:20:16 +08:00
Kazu Hirata f890fd5f91 [llvm] Use llvm::is_sorted (NFC) 2021-01-27 23:25:39 -08:00
Craig Topper 74784a5aa4 [X86] In shrinkAndImmediate, place the new constant into the topological sort.
Revert the change to use APInt::isSignedIntN from
5ff5cf8e05.

It's clear that the games we were playing to avoid the topological
sort aren't working, so just fix it once and for all.

Fixes PR48888.
2021-01-26 13:18:04 -08:00
Freddy Ye b3b0acdc6f [NFC] Refine some uninitialized used variables.
These warnings are reported by the static code analysis tool Klocwork.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D95421
2021-01-26 16:51:05 +08:00
Reid Kleckner 988a5334ed [Win64] Ensure all stack frames are 8 byte aligned
The unwind info format requires that all adjustments are 8 byte aligned,
and the bottom three bits are masked out. Most Win64 calling conventions
have 32 bytes of shadow stack space for spilling parameters, and I
believe that constructing these fixed stack objects had the side effect
of ensuring an alignment of 8. However, the Intel regcall convention
does not have this shadow space, so when using that convention, it was
possible to make a 4 byte stack frame, which was impossible to describe
with unwind info.

Fixes pr48867
2021-01-25 10:39:27 -08:00
Simon Pilgrim 13f2aee783 [X86][AVX] Generalize vperm2f128/vperm2i128 patterns to support all legal 256-bit vector types
Remove bitcasts to/from v4x64 types through vperm2f128/vperm2i128 ops to help improve shuffle combining and demanded vector elts folding.
2021-01-25 15:35:36 +00:00
Simon Pilgrim 821a51a9ca [X86][AVX] combineX86ShuffleChainWithExtract - widen to at least original root size. NFCI.
We're relying on the source inputs for shuffle combining having already been widened to the root size (otherwise the offset logic falls over) - we're going to be supporting different sized shuffle inputs soon, so we need to explicitly make the minimum widened width the original root size.
2021-01-25 13:45:37 +00:00
Simon Pilgrim 1b780cf32e [X86][AVX] LowerTRUNCATE - avoid bitcasts around extract_subvectors.
We allow extract_subvector lowering of all legal types, so pre-bitcast the source type to try and reduce bitcast pollution.
2021-01-25 12:10:36 +00:00
Simon Pilgrim f461e35cba [X86][AVX] combineX86ShuffleChain - avoid bitcasts around insert_subvector() shuffle patterns.
We allow insert_subvector lowering of all legal types, so don't always cast to the vXi64/vXf64 shuffle types - this is only necessary for X86ISD::SHUF128/X86ISD::VPERM2X128 patterns later.
2021-01-25 11:35:45 +00:00
Fangrui Song d745b82de1 [XRay] Support DW_TAG_call_site and delete unneeded PATCHABLE_EVENT_CALL/PATCHABLE_TYPED_EVENT_CALL lowering 2021-01-25 00:49:18 -08:00
Kazu Hirata 054444177b [Target] Use llvm::append_range (NFC) 2021-01-24 12:18:56 -08:00
Kazu Hirata e4847a7fcf Revert "[Target] Use llvm::append_range (NFC)"
This reverts commit cc7a238286.

The X86WinEHState.cpp hunk seems to break certain builds.
2021-01-23 11:25:27 -08:00
Kazu Hirata cc7a238286 [Target] Use llvm::append_range (NFC) 2021-01-23 10:56:31 -08:00
Kazu Hirata 49231c1f80 [llvm] Use static_assert instead of assert (NFC)
Identified with misc-static-assert.
2021-01-22 23:25:05 -08:00
Simon Pilgrim bd122f6d21 [X86][AVX] canonicalizeLaneShuffleWithRepeatedOps - handle vperm2x128(movddup(x),movddup(y)) cases
Fold vperm2x128(movddup(x),movddup(y)) -> movddup(vperm2x128(x,y))
2021-01-22 16:05:19 +00:00
Simon Pilgrim c33d36e066 [X86][AVX] canonicalizeLaneShuffleWithRepeatedOps - handle unary vperm2x128(permute/shift(x,c),undef) cases
Fold vperm2x128(permute/shift(x,c),undef) -> permute/shift(vperm2x128(x,undef),c)
2021-01-22 15:47:23 +00:00
Simon Pilgrim 4846f6ab81 [X86][AVX] combineTargetShuffle - simplify the X86ISD::VPERM2X128 subvector matching
Simplify vperm2x128(concat(X,Y),concat(Z,W)) folding.

Use collectConcatOps / ISD::INSERT_SUBVECTOR to find the source subvectors instead of hardcoded immediate matching.
2021-01-22 15:47:22 +00:00
Simon Pilgrim b1166e1317 [X86][AVX] combineX86ShufflesRecursively - attempt to constant fold before widening shuffle inputs
combineX86ShufflesConstants/canonicalizeShuffleMaskWithHorizOp can both handle/earlyout shuffles with inputs of different widths, so delay widening as late as possible to make it easier to match constant folds etc.

The plan is to eventually move the widening inside combineX86ShuffleChain so that we don't create any new nodes unless we successfully combine the shuffles.
2021-01-22 13:19:35 +00:00
Simon Pilgrim ffe72f987f [X86][SSE] Don't fold shuffle(binop(),binop()) -> binop(shuffle(),shuffle()) if the shuffle are splats
rGbe69e66b1cd8 added the fold, but DAGCombiner.visitVECTOR_SHUFFLE doesn't merge shuffles if the inner shuffle is a splat, so we need to bail.

The non-fast-horiz-ops paths see some minor regressions; we might be able to improve on this after lowering to target shuffles.

Fix PR48823
2021-01-22 11:31:38 +00:00
Duncan P. N. Exon Smith f2fd41d789 X86: Fix use-after-realloc in X86AsmParser::ParseIntelExpression
`X86AsmParser::ParseIntelExpression` has a while loop. In the body,
calls to MCAsmLexer::UnLex can force a reallocation in the MCAsmLexer's
`CurToken` SmallVector, invalidating saved references to
`MCAsmLexer::getTok()`.

`const MCAsmToken &Tok` is such a saved reference, and this moves it
from outside the while loop to inside the body, fixing a
use-after-realloc.
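
The hazard generalizes to any growable buffer; a minimal sketch with
std::vector (not the parser code):

#include <vector>

int main() {
  std::vector<int> tokens;
  tokens.reserve(1);              // small capacity so growth reallocates
  tokens.push_back(42);
  const int &tok = tokens.back(); // saved reference, like `const MCAsmToken &Tok`
  (void)tok;                      // fine to read here
  tokens.push_back(7);            // may reallocate: `tok` now dangles
  // Reading `tok` past this point is use-after-free; re-fetch instead:
  const int &fresh = tokens[0];
  (void)fresh;
}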

`Tok` will still be reused across calls to `Lex()`, each of which
effectively destroys and constructs the pointed-to token. I'm a bit
skeptical of this usage pattern, but it seems broadly used in the
X86AsmParser (and others) so I'm leaving it alone (for now).

Somehow this bug was exposed by https://reviews.llvm.org/D94739,
resulting in test failures in dot-operator related tests in
llvm/test/tools/llvm-ml. I suspect the exposure path is related to
optimizer changes from splitting up the grow operation, but I haven't
dug all the way in. Regardless, there are already tests in tree that
cover this; they might fail consistently if we added ASan
instrumentation to SmallVector.

Differential Revision: https://reviews.llvm.org/D95112
2021-01-21 11:24:35 -08:00
Simon Pilgrim 86021d98d3 [X86] Avoid a std::string copy by replacing auto with const auto&. NFC.
Fixes msvc analyzer warning.
2021-01-21 11:04:07 +00:00
Luo, Yuanke 64132f541e Revert "[X86][AMX] Fix tile config register spill issue."
This reverts commit 20013d02f3.
2021-01-21 18:11:43 +08:00
Luo, Yuanke 20013d02f3 [X86][AMX] Fix tile config register spill issue.
The previous code built a model in which the tile config register is a
use of each AMX instruction. That is a problem for spilling the tile
config register: across function calls, an ldtilecfg instruction may be
inserted at each AMX instruction that uses the tile config register,
clobbering all tile data registers.
To fix this issue, we remove the tile config register from the model. We
analyze the regmask of each call instruction and insert ldtilecfg if any
tile data register is live across the call. Inserting sttilecfg before
the call is unnecessary, because the tile config doesn't change and we
can simply reload it.
Besides that, we also need to check tile config register interference.
Since we don't model the config register, we check interference from the
ldtilecfg to each tile data register def.
             ldtilecfg
             /       \
            BB1      BB2
            /         \
           call       BB3
           /           \
       %1=tileload   %2=tilezero
We can start from each tile def instruction and walk backward to the
ldtilecfg. If there is any call instruction on the way and the tile data
registers are not preserved across it, we insert ldtilecfg after that
call instruction.

Differential Revision: https://reviews.llvm.org/D94155
2021-01-21 16:01:50 +08:00
Max Kazantsev d6bb96e677 [X86] Add experimental option to separately tune alignment of innermost loops
We already have an experimental option to tune loop alignment. Its impact
is very wide (and there is a suspicion that it's not always profitable).
We want something narrower to play with. This patch adds a similar option
that overrides the preferred alignment for innermost loops. This is for
experimental purposes; the default values do not change the existing
behavior.

Differential Revision: https://reviews.llvm.org/D94895
Reviewed By: pengfei
2021-01-21 11:15:16 +07:00
Simon Pilgrim b8b5e87e6b [X86][AVX] Handle vperm2x128 shuffling of a subvector splat.
We already handle "vperm2x128 (ins ?, X, C1), (ins ?, X, C1), 0x31" for shuffling of the upper subvectors, but we weren't dealing with the case when we were splatting the upper subvector from a single source.
2021-01-20 18:16:33 +00:00
Simon Pilgrim 19d02842ee [X86][AVX] Fold extract_subvector(VSRLI/VSHLI(x,32)) -> VSRLI/VSHLI(extract_subvector(x),32)
As discussed on D56387, if we're shifting to extract the upper/lower half of a vXi64 vector then we're actually better off performing this at the subvector level as its very likely to fold into something.

combineConcatVectorOps can perform this in reverse if necessary.
2021-01-20 14:34:54 +00:00
Bill Wendling e22295385c [X86] Add segment and address-size override prefixes
X86 allows for the "addr32" and "addr16" address size override prefixes.
Also, these and the segment override prefixes should be recognized as
valid prefixes.

Differential Revision: https://reviews.llvm.org/D94726
2021-01-19 23:54:31 -08:00
Simon Pilgrim 5626adcd6b [X86][SSE] combineVectorSignBitsTruncation - fold trunc(srl(x,c)) -> packss(sra(x,c))
If a srl doesn't introduce any sign bits into the truncated result, then replace with a sra to let us use a PACKSS truncation - fixes a regression noticed in D56387 on pre-SSE41 targets that don't have PACKUSDW.
2021-01-19 11:04:13 +00:00
Sanjay Patel d27bb5c375 [x86] add cast to avoid compile-time warning; NFC 2021-01-18 17:47:04 -05:00
Kazu Hirata 23b0ab2acb [llvm] Use the default value of drop_begin (NFC) 2021-01-18 10:16:36 -08:00
Simon Pilgrim ce06475da9 [X86][AVX] IsElementEquivalent - add matchShuffleWithUNPCK + VBROADCAST/VBROADCAST_LOAD handling
Specify LHS/RHS operands in matchShuffleWithUNPCK's calls to isTargetShuffleEquivalent, and handle VBROADCAST/VBROADCAST_LOAD matching in IsElementEquivalent
2021-01-18 15:55:00 +00:00
Simon Pilgrim 770d1e0a88 [X86][SSE] isHorizontalBinOp - reuse any existing horizontal ops.
If we already have similar horizontal ops using the same args, then match that, even if we are on a target with slow horizontal ops.
2021-01-18 10:14:45 +00:00
Fangrui Song a048ce13e3 [X86] Default to -x86-pad-for-align=false to drop assembler difference with or w/o -g
Fix PR48742: the D75203 assembler optimization locates
MCRelaxableFragments between two MCSymbols and relaxes some of them to
reduce the size of an MCAlignFragment. A -g build has more MCSymbols and
may therefore produce different assembler output (e.g. an
MCRelaxableFragment (jmp) may take 5 bytes with -O1 but 2 bytes with
-O1 -g).

`.p2align 4, 0x90` is common due to loops. For a larger program with a
lot of temporary labels, an assembly output difference is all but
inevitable. The cost seems to outweigh the benefits, so we default to
-x86-pad-for-align=false until the heuristic is improved.

Reviewed By: skan

Differential Revision: https://reviews.llvm.org/D94542
2021-01-16 16:39:54 -08:00
Kazu Hirata 2082b10d10 [llvm] Use *::empty (NFC) 2021-01-16 09:40:55 -08:00
Simon Pilgrim be69e66b1c [X86][SSE] Attempt to fold shuffle(binop(),binop()) -> binop(shuffle(),shuffle())
If this will help us fold shuffles together, then push the shuffle through the merged binops.

Ideally this would be performed in DAGCombiner::visitVECTOR_SHUFFLE but getting an efficient+legal merged shuffle can be tricky - on SSE we can be confident that for 32/64-bit elements vectors shuffles should easily fold.
2021-01-15 16:25:25 +00:00
Simon Pilgrim 1dfd5c9ad8 [X86][AVX] combineHorizOpWithShuffle - support target shuffles in HOP(SHUFFLE(X,Y),SHUFFLE(X,Y)) -> SHUFFLE(HOP(X,Y))
Be more aggressive on (AVX2+) folds of lane shuffles of 256-bit horizontal ops by working on target/faux shuffles as well.
2021-01-15 13:55:30 +00:00
Kazu Hirata 7dc3575ef2 [llvm] Remove redundant return and continue statements (NFC)
Identified with readability-redundant-control-flow.
2021-01-14 20:30:34 -08:00
Kazu Hirata 2efcbe24a7 [llvm] Use llvm::drop_begin (NFC) 2021-01-14 20:30:33 -08:00
Hiroshi Yamauchi 202d359753 [X86] Add the FSRM feature (Fast Short Rep Mov) to Zen3.
Note -x86-use-fsrm-for-memcpy is still disabled by default and there's no
default behavior change.

Differential Revision: https://reviews.llvm.org/D94436
2021-01-14 10:47:33 -08:00
Simon Pilgrim cbbfc82586 [X86][SSE] canonicalizeShuffleMaskWithHorizOp - simplify shuffle(HOP(HOP(X,Y),HOP(Z,W))) style chains.
See if we can remove the shuffle by resorting a HOP chain so that the HOP args are pre-shuffled.

This initial version just handles (the most common) v4i32/v4f32 hadd/hsub reduction patterns - future work can extend this to v8i16 types plus PACK chains (2f64 HADD/HSUB should already be handled in the half-lane combine code later on).
2021-01-13 17:19:40 +00:00
Simon Pilgrim 0a0ee7f5a5 [X86] canonicalizeShuffleMaskWithHorizOp - minor refactor to support multiple src ops. NFCI.
canonicalizeShuffleMaskWithHorizOp currently only supports shuffles with 1 or 2 sources, but PR41813 will require us to support higher numbers of sources.

This patch just generalizes the initial setup stages to ensure all src ops are the same type and opcode and then will continue to early out if we have more than 2 sources.
2021-01-13 13:59:56 +00:00
Simon Pilgrim 0f59d09957 [X86][AVX] combineVectorSignBitsTruncation - limit AVX512 truncations to 128-bits (PR48727)
rG73a44f437bf1 resulted in 256-bit packss/packus ops with additional
shuffles that shuffle combining can sometimes try to convert back into a
truncation.
Bevin Hansson 07605ea1f3 [X86] Improved lowering for saturating float to int.
Adapted from D54696 by @nikic.

This patch improves lowering of saturating float to
int conversions, FP_TO_[SU]INT_SAT, for X86.
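
For reference, the target-independent semantics being lowered
(llvm.fptosi.sat): clamp to the integer range, with NaN mapping to zero.
A scalar model (an illustration, not the X86 lowering):

#include <cassert>
#include <cmath>
#include <cstdint>
#include <limits>

static int32_t fptosi_sat_i32(float x) {
  if (std::isnan(x)) return 0;
  if (x <= -2147483648.0f) return std::numeric_limits<int32_t>::min();
  if (x >= 2147483648.0f) return std::numeric_limits<int32_t>::max();
  return static_cast<int32_t>(x); // in range: plain truncation
}

int main() {
  assert(fptosi_sat_i32(1.9f) == 1);
  assert(fptosi_sat_i32(-1e30f) == std::numeric_limits<int32_t>::min());
  assert(fptosi_sat_i32(1e30f) == std::numeric_limits<int32_t>::max());
  assert(fptosi_sat_i32(std::nanf("")) == 0);
}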

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D86079
2021-01-12 15:44:41 +01:00
Simon Pilgrim 2ed914cb7e [X86][SSE] getFauxShuffleMask - handle PACKSS(SRAI(),SRAI()) shuffle patterns.
We can't easily treat ASHR as a faux shuffle, but if it was just feeding a PACKSS then it was likely being used as sign-extension for a truncation, so just peek through and adjust the mask accordingly.
2021-01-12 14:07:53 +00:00
Simon Pilgrim 7e44208115 [X86][SSE] combineSubToSubus - add v16i32 handling on pre-AVX512BW targets.
v16i32 -> v16i16/v8i16 truncation is now good enough using PACKSS/PACKUS + shuffle combining that it's no longer necessary to early-out on pre-AVX512BW targets.

This was noticed while looking at completing PR40111 and moving combineSubToSubus to DAGCombine entirely.
2021-01-12 13:44:11 +00:00
Simon Pilgrim a5212b5c91 [X86][SSE] combineSubToSubus - remove SSE2 early-out.
SSE2 truncation codegen has improved over the past few years (mainly due to better shuffle lowering/combining and computeKnownBits) - it's no longer necessary to early-out from v8i32/v8i64 truncations.

This was noticed while looking at completing PR40111 and moving combineSubToSubus to DAGCombine entirely.
2021-01-12 12:52:11 +00:00
Simon Pilgrim 4214ca9614 [X86][AVX] Attempt to fold vpermf128(op(x,i),op(y,i)) -> op(vpermf128(x,y),i)
If vpermf128/vpermi128 is acting on 2 similar 'inlane' ops, then try to perform the vpermf128 first which will allow us to merge the ops.

This will help us fix one of the regressions in D56387
2021-01-11 16:59:25 +00:00
Simon Pilgrim 41bf338dd1 Revert rGd43a264a5dd3 "Revert "[X86][SSE] Fold unpack(hop(),hop()) -> permute(hop())""
This reapplies commit rG80dee7965dffdfb866afa9d74f3a4a97453708b2.

[X86][SSE] Fold unpack(hop(),hop()) -> permute(hop())

UNPCKL/UNPCKH only uses one op from each hop, so we can merge the hops and then permute the result.

REAPPLIED with a fix for unary unpacks of HOP.
2021-01-11 11:29:04 +00:00
Nico Weber d43a264a5d Revert "[X86][SSE] Fold unpack(hop(),hop()) -> permute(hop())"
This reverts commit 80dee7965d.
Makes clang sometimes hang forever. See
https://bugs.chromium.org/p/chromium/issues/detail?id=1164786#c6 for a
stand-alone repro.
2021-01-10 20:22:53 -05:00
Ganesh Gopalasubramanian 9386483b71 [X86] Add TLBSYNC, INVLPGB and SNP instructions
Differential Revision: https://reviews.llvm.org/D94134
2021-01-08 22:28:53 +05:30
Simon Pilgrim 80dee7965d [X86][SSE] Fold unpack(hop(),hop()) -> permute(hop())
UNPCKL/UNPCKH only uses one op from each hop, so we can merge the hops and then permute the result.
2021-01-08 15:22:17 +00:00
Kazu Hirata b934160aaa [Target] Use llvm::find_if (NFC) 2021-01-07 20:29:36 -08:00
Kazu Hirata cfeecdf7b6 [llvm] Use llvm::all_of (NFC) 2021-01-06 18:27:36 -08:00
Reid Kleckner 08e5e91e45 [X86] Remove [ER]SP from all CSR lists
The CSR lists control which registers are spilled and reloaded in the
prologue and epilogue. The stack pointer is managed explicitly, and
should never be pushed or popped. Remove it from these lists. This
affected regcall and preserves all / most.

Differential Revision: https://reviews.llvm.org/D94118
2021-01-06 09:50:46 -08:00
Christudasan Devadasan d68458bd56 [GlobalISel] Base implementation for sret demotion.
If the return values can't be lowered to registers
SelectionDAG performs the sret demotion. This patch
contains the basic implementation for the same in
the GlobalISel pipeline.

Furthermore, targets should bring relevant changes
during lowerFormalArguments, lowerReturn and
lowerCall to make use of this feature.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D92953
2021-01-06 10:30:50 +05:30
Juneyoung Lee 8444a2494d [X86] Update X86InstCombineIntrinsic to use CreateShuffleVector with one vector
This patch updates X86InstCombineIntrinsic.cpp to use the newly updated CreateShuffleVector.

The tests are updated because the updated CreateShuffleVector uses a
poison value for the second vector. If I didn't miss something, the masks
in the tests choose elements from the first vector only; therefore the
tests have equivalent behavior.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D94059
2021-01-06 11:42:45 +09:00
Simon Pilgrim 73a44f437b [X86][AVX] combineVectorSignBitsTruncation - use PACKSS/PACKUS in more AVX cases
AVX512 has fast truncation ops, but if the truncation source is a concatenation of subvectors then it's likely that we can use PACK more efficiently.

This is only guaranteed to work for truncations to 128/256-bit vectors as the PACK works across 128-bit sub-lanes, for now I've just disabled 512-bit truncation cases but we need to get them working eventually for D61129.
2021-01-05 15:01:45 +00:00
Simon Pilgrim dc74d7ed1f [X86] getMemoryOpCost - use dyn_cast_or_null<StoreInst>. NFCI.
Use this instead of isa_and_nonnull<StoreInst>, and use the StoreInst::getPointerOperand wrapper instead of a hardcoded Instruction::getOperand.

Looks cleaner and avoids a spurious clang static analyzer null dereference warning.
2021-01-05 13:23:09 +00:00
QingShan Zhang 2962f1149c [NFC] Add the getSizeInBytes() interface for MachineConstantPoolValue
The current implementation assumes that each MachineConstantPoolValue
takes up sizeof(MachineConstantPoolValue::Ty) bytes. For PowerPC, we want
to lump all the constants of the same type into one
MachineConstantPoolValue to save the cost of calculating a TOC entry for
each constant. So we need to extend MachineConstantPoolValue in a way
that breaks this assumption.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D89108
2021-01-05 03:22:45 +00:00
Kazu Hirata 985f899bf2 [Target] Use llvm::append_range (NFC) 2021-01-03 09:57:43 -08:00
Juneyoung Lee 49c2d703d3 [X86] Make deinterleave8bitStride3 use unary CreateShuffleVector
This patch makes X86InterleavedAccessGroup::deinterleave8bitStride3 use the unary CreateShuffleVector.

This is a continuation of D93923. There were a few missing replacements.

IIUC, this patch does not change the generated programs' semantics,
because the function inserts shufflevectors that only choose elements
from the first vector.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D93993
2021-01-04 02:10:51 +09:00
Luo, Yuanke 08665b1805 Support tilezero intrinsic and c interface for AMX.
Differential Revision: https://reviews.llvm.org/D92837
2020-12-31 13:24:57 +08:00
Fangrui Song 6be0b9a8dd [X86] Don't fold negative offset into 32-bit absolute address (e.g. movl $foo-1, %eax)
When building abseil-cpp `bin/absl_hash_test` with Clang in -fno-pic
mode, an instruction like `movl $foo-2147483648, %eax` may be produced
(subtracting a number from the address of a static variable). If foo's
address is smaller than 2147483648, GNU ld/gold/LLD will error because
R_X86_64_32 cannot represent a negative value.

```
using absl::Hash;
struct NoOp {
  template < typename HashCode >
  friend HashCode AbslHashValue(HashCode , NoOp );
};
template <typename> class HashIntTest : public testing::Test {};
TYPED_TEST_SUITE_P(HashIntTest);
TYPED_TEST_P(HashIntTest, BasicUsage) {
  if (std::numeric_limits< TypeParam >::min )
    EXPECT_NE(Hash< NoOp >()({}),
              Hash< TypeParam >()(std::numeric_limits< TypeParam >::min()));
}
REGISTER_TYPED_TEST_CASE_P(HashIntTest, BasicUsage);
using IntTypes = testing::Types< int32_t>;
INSTANTIATE_TYPED_TEST_CASE_P(My, HashIntTest, IntTypes);

ld: error: hash_test.cc:(function (anonymous namespace)::gtest_suite_HashIntTest_::BasicUsage<int>::TestBody(): .text+0x4E472): relocation R_X86_64_32 out of range: 18446744071564237392 is not in [0, 4294967295]; references absl::hash_internal::HashState::kSeed
```

Actually any negative offset is not allowed because the symbol address
can be zero (e.g. set by `-Wl,--defsym=foo=0`). So disallow such folding.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D93931
2020-12-30 18:47:26 -08:00
Juneyoung Lee 9b29610228 Use unary CreateShuffleVector if possible
As mentioned in D93793, there are quite a few places where unary `IRBuilder::CreateShuffleVector(X, Mask)` can be used
instead of `IRBuilder::CreateShuffleVector(X, Undef, Mask)`.
Let's update them.

Actually, it would have been more natural if the patches were made in this order:
(1) let them use unary CreateShuffleVector first
(2) update IRBuilder::CreateShuffleVector to use poison as a placeholder value (D93793)

The order is swapped, but in terms of correctness it is still fine.
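
A small sketch of the unary form using the LLVM C++ API (illustrative;
builds a trivial function and prints it, assuming LLVM-12-era headers):

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  LLVMContext Ctx;
  Module M("demo", Ctx);
  auto *VecTy = FixedVectorType::get(Type::getInt32Ty(Ctx), 4);
  Function *F = Function::Create(FunctionType::get(VecTy, {VecTy}, false),
                                 Function::ExternalLinkage, "splat0", M);
  IRBuilder<> B(BasicBlock::Create(Ctx, "entry", F));
  // Unary form: the builder supplies the placeholder second operand itself,
  // instead of CreateShuffleVector(V, UndefValue::get(V->getType()), Mask).
  Value *S = B.CreateShuffleVector(F->getArg(0), {0, 0, 0, 0});
  B.CreateRet(S);
  M.print(outs(), nullptr); // shufflevector <4 x i32> %0, <4 x i32> poison, ...
}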

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D93923
2020-12-30 22:36:08 +09:00
Luo, Yuanke 981a0bd858 [X86] Add x86_amx type for intel AMX.
The x86_amx type is used for AMX intrinsics. A <256 x i32> is bitcast to
x86_amx when it is used by AMX intrinsics, and x86_amx is bitcast to
<256 x i32> when it is used by a load/store instruction. So amx
intrinsics only operate on the x86_amx type, which helps separate amx
intrinsics from llvm IR instructions (+-*/). Thanks to Craig for the
idea. This patch depends on https://reviews.llvm.org/D87981.

Differential Revision: https://reviews.llvm.org/D91927
2020-12-30 13:52:13 +08:00
Craig Topper 33051d5d61 [X86] Remove X86Fmadd SDNode from tablegen. Use standard fma instead. NFC
I guess I missed this in 4252f7773a
when I modified most patterns.
2020-12-26 23:41:46 -08:00
Matt Arsenault 581d13f8ae GlobalISel: Return APInt from getConstantVRegVal
Returning int64_t was arbitrarily limiting for wide integer types, and
the functions should handle the full generality of the IR.

Also changes the full form which returns the originally defined
vreg. Add another wrapper for the common case of just immediately
converting to int64_t (arguably this would be useful for the full
return value case as well).

One possible issue with this change is that some of the existing uses
did break without conversion to getConstantVRegSExtVal, and it's possible
that some uses without adequate test coverage are now broken.
2020-12-22 22:23:58 -05:00
Craig Topper f47b07315a [X86] Teach assembler to accept vmsave/vmload/vmrun/invlpga/skinit with or without the fixed register operands
These instructions read their inputs from fixed registers rather
than using a modrm byte. We shouldn't require the user to list them
when parsing assembly. This matches the GNU assembler.

This patch adds InstAliases so we can accept either form. It also
changes the printing code to use the form without registers. This
will change the behavior of llvm-objdump, but should be consistent
with binutils objdump. This also matches what we already do in LLVM for
clzero and monitorx which also used fixed registers.

I need to add and improve tests before this can be committed. The
disassembler tests exist, but weren't checking the fixed registers, so
they pass both before and after this change.

Fixes https://github.com/ClangBuiltLinux/linux/issues/1216

Differential Revision: https://reviews.llvm.org/D93524
2020-12-19 11:01:55 -08:00
Harald van Dijk adc55b5a5a
[X86] Avoid generating invalid R_X86_64_GOTPCRELX relocations
We need to make sure not to emit R_X86_64_GOTPCRELX relocations for
instructions that use a REX prefix. If a REX prefix is present, we need to
instead use a R_X86_64_REX_GOTPCRELX relocation. The existing logic for
CALL64m, JMP64m, etc. already handles this by checking the HasREX parameter
and using it to determine which relocation type to use. Do this for all
instructions that can use relaxed relocations.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D93561
2020-12-18 23:38:38 +00:00
Simon Pilgrim 8767f3bb97 [X86][AVX] Remove X86ISD::SUBV_BROADCAST (PR38969)
Followup to D92645 - remove the remaining places where we create X86ISD::SUBV_BROADCAST, and fold splatted vector loads to X86ISD::SUBV_BROADCAST_LOAD instead.

Remove all the X86SubVBroadcast isel patterns, including all the fallbacks for when memory folding fails.
2020-12-18 15:49:53 +00:00
Simon Pilgrim 992fad03e2 [X86][AVX] Replace extract_subvector(broadcast(), 0) folds with generic SimplifyDemandedVectorEltsForTargetNode handling.
Simplifies a few more cases, notably shuffle demanded elts cases.
2020-12-18 11:51:10 +00:00
Simon Pilgrim 931e66bd89 [X86] Remove extract_subvector(subv_broadcast_load()) fold.
This was needed in an earlier version of D92645, but isn't now - and I've just noticed that it was potentially flawed depending on the relevant widths of the broadcasted and extracted subvectors.
2020-12-17 11:02:49 +00:00
Simon Pilgrim cdb692ee0c [X86] Add X86ISD::SUBV_BROADCAST_LOAD and begin removing X86ISD::SUBV_BROADCAST (PR38969)
Subvector broadcasts are only load instructions, yet X86ISD::SUBV_BROADCAST treats them more generally, requiring a lot of fallback tablegen patterns.

This initial patch replaces constant vector lowering inside lowerBuildVectorAsBroadcast with direct X86ISD::SUBV_BROADCAST_LOAD loads which helps us merge a number of equivalent loads/broadcasts.

As well as general plumbing/analysis additions for SUBV_BROADCAST_LOAD, I needed to wrap SelectionDAG::makeEquivalentMemoryOrdering so it can handle result chains from non generic LoadSDNode nodes.

Later patches will continue to replace X86ISD::SUBV_BROADCAST usage.

Differential Revision: https://reviews.llvm.org/D92645
2020-12-17 10:25:25 +00:00
QingShan Zhang ebdd20f430 Expand the fp_to_int/int_to_fp/fp_round/fp_extend as libcall for fp128
X86 and AArch64 expand these as libcalls inside the target, and PowerPC
also wants to expand them as libcalls for P8. So, implement this in the
legalizer to common up the logic and remove the duplicated code from
X86/AArch64.

Reviewed By: Craig Topper

Differential Revision: https://reviews.llvm.org/D91331
2020-12-17 07:59:30 +00:00
Harald van Dijk 09d0e7a7c1
[X86] Avoid %fs:(%eax) references in x32 mode
The ABI explains that %fs:(%eax) zero-extends %eax to 64 bits and adds
the TLS base address, but the TLS base address need not be at the start
of the TLS block; TLS references may use negative offsets.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D93158
2020-12-16 22:39:57 +00:00
Simon Pilgrim 553808d456 [X86] Rename reduction combiners to make it clearer whats happening. NFCI.
Since these are all working on reduction patterns, actually use that term in the function name to make them easier to search for.

At some point we're likely to start working with the ISD::VECREDUCE_* opcodes directly in the x86 backend, but that is still some way off.
2020-12-16 14:48:21 +00:00
Simon Pilgrim e55f7de946 [X86][SSE] combineReductionToHorizontal - don't rely on widenSubVector to handle illegal vector types.
Thanks to @asbirlea for reporting the bug.
2020-12-16 11:24:40 +00:00
Harald van Dijk 2aae2136d5
[X86] Add REX prefix for GOTTPOFF/TLSDESC relocs in x32 mode
The REX prefix is needed to allow linker relaxations: even if the
instruction we emit may not need it, the linker may change it to a
different instruction which does need it.
2020-12-15 23:07:34 +00:00
Simon Pilgrim 712117338a [X86] Explicitly use SDValue instead of auto. NFCI.
Fix static analyzer warning about not using an SDValue&
2020-12-15 17:27:25 +00:00
Simon Pilgrim b0e5aea557 [X86] Remove unnecessary SUBV_BROADCAST combines. NFCI.
Noticed while dealing with D92645 - these are now handled by getFauxShuffleMask + shuffle combining code.
2020-12-15 16:54:34 +00:00
Simon Pilgrim bd07092669 [X86] Remove trailing whitespace. NFC. 2020-12-15 10:11:38 +00:00
Simon Pilgrim 15a31389b2 [X86][AVX] LowerBUILD_VECTOR - reduce 256/512-bit build vectors with zero/undef upper elements + pad.
As discussed on D92645, we don't do a good job of recognising when we don't require the full width of a ymm/zmm build vector because the upper elements are undef/zero.

This commit allows us to make use of implicit zeroing of upper elements with AVX instructions, which we emulate in DAG with an INSERT_SUBVECTOR into the bottom of an undef/zero vector of the original type.

This exposed a limitation in getTargetConstantBitsFromNode, which didn't extract bits from INSERT_SUBVECTORs of different element widths; a fix for that is included as well to prevent a couple of regressions.
2020-12-15 10:11:38 +00:00
Harald van Dijk 9eac818370
[X86] Fix variadic argument handling for x32
The X86-64 ABI defines va_list as

  typedef struct {
    unsigned int gp_offset;
    unsigned int fp_offset;
    void *overflow_arg_area;
    void *reg_save_area;
  } va_list[1];

This means the size, alignment, and reg_save_area offset will depend on
whether we are in LP64 or in ILP32 mode, so this commit adds the checks.
Additionally, the VAARG_64 pseudo-instruction assumed 64-bit pointers, so
this commit adds a VAARG_X32 pseudo-instruction that behaves just like
VAARG_64, except for assuming 32-bit pointers.
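As a hedged illustration of why the checks are needed (the layouts below are worked out by hand, not taken from the patch): with 8-byte pointers (LP64) the struct is 24 bytes with reg_save_area at offset 16, while with 4-byte pointers (ILP32/x32) it is 16 bytes with reg_save_area at offset 12.

```
#include <cstddef>
#include <cstdint>

// Hand-modelled va_list layouts for the two pointer sizes; illustration only.
struct VAListLP64  { uint32_t gp_offset, fp_offset; uint64_t overflow_arg_area, reg_save_area; };
struct VAListILP32 { uint32_t gp_offset, fp_offset; uint32_t overflow_arg_area, reg_save_area; };

static_assert(sizeof(VAListLP64) == 24 && offsetof(VAListLP64, reg_save_area) == 16, "LP64");
static_assert(sizeof(VAListILP32) == 16 && offsetof(VAListILP32, reg_save_area) == 12, "ILP32");
```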

Some of these changes were originally done by
Michael Liao <michael.hliao@gmail.com>.

Fixes https://bugs.llvm.org/show_bug.cgi?id=48428.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D93160
2020-12-14 23:47:27 +00:00
Simon Pilgrim 5f5a2547c1 [X86] LowerBUILD_VECTOR - track zero/nonzero elements with APInt masks. NFCI.
Prep work for undef/zero 'upper elements' handling as proposed in D92645.
2020-12-14 16:28:45 +00:00
Kazu Hirata 913515e465 [Target] Use llvm::is_contained (NFC) 2020-12-13 19:35:10 -08:00
Craig Topper 0261ce9e17 [X86] Add ExeDomain = SSEPackedSingle to cvtss2sd and cvtsd2ss instructions.
Prep for D92993
2020-12-13 12:35:33 -08:00
Craig Topper fa31f337a2 [X86] Add isel patterns to form VPDPWSSD from (add (vpmaddwd X, Y), Z) when AVXVNNI is enabled.
We already have these patterns for AVX512VNNI.
2020-12-13 12:02:07 -08:00
Simon Pilgrim d5c434d7dd [X86][SSE] combineX86ShufflesRecursively - add basic handling for combining shuffles of different widths (PR45974)
If a faux shuffle uses smaller shuffle inputs, try to recursively combine with those inputs directly instead of widening them immediately. Then widen all smaller inputs at the bottom of the recursion.

This will still mean we're generating nodes on the fly (PR45974) even if we don't combine to a new shuffle but it does help AVX2+ targets combine across xmm/ymm/zmm types, mainly as variable shuffles.
2020-12-13 17:18:07 +00:00
Simon Pilgrim 47321c311b [X86][SSE] combineReductionToHorizontal - add vXi8 ISD::MUL reduction handling (PR39709)
Default expansion leads to repeated extensions/truncations to/from vXi16 which shuffle combining and demanded elts can't completely unravel.

Better just to promote (any_extend) the input and perform a vXi16 reduction.

We'll be able to remove a lot of this if we ever get decent legalization support for reduction intrinsics in SelectionDAG.
2020-12-13 15:22:54 +00:00
Chris Sears 36a23b33aa X86: Correcting X86OutgoingValueHandler typo (NFC)
https://reviews.llvm.org/D92631
2020-12-12 20:28:37 -05:00
Harald van Dijk f61e5ecb91
[X86] Avoid data16 prefix for lea in x32 mode
The ABI demands a data16 prefix for lea in 64-bit LP64 mode, but not in
64-bit ILP32 mode. In both modes this prefix would ordinarily be
ignored, but the instructions may be changed by the linker to
instructions that are affected by the prefix.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D93157
2020-12-12 17:05:24 +00:00
Luo, Yuanke e52bc1d2bb [X86] Add chain in ISel for x86_tdpbssd_internal intrinsic. 2020-12-12 21:14:38 +08:00
Sanjay Patel 204bdc5322 [InstCombine][x86] fix insertion point bug in vector demanded elts fold (PR48476)
This transform was added at:
c63799fc52

From what I see, it's the first demanded elements transform that adds
a new instruction using the IRBuilder. There are similar folds in
the generic demanded bits chunk of instcombine that also use the
InsertPointGuard code pattern.

The tests here would assert/crash because the new instruction was
being added at the start of the demanded elements analysis rather
than at the instruction that is being replaced.
2020-12-11 17:23:35 -05:00
Luo, Yuanke f80b29878b [X86] AMX programming model.
This patch implements the AMX programming model discussed on llvm-dev
 (http://lists.llvm.org/pipermail/llvm-dev/2020-August/144302.html).
 Thanks to Hal for the good suggestion on the RA. The fast RA is not in the patch yet.
 This patch implements 7 components.

1. The c interface to end user.
2. The AMX intrinsics in LLVM IR.
3. Transform load/store <256 x i32> to AMX intrinsics or split the
   type into two <128 x i32>.
4. The Lowering from AMX intrinsics to AMX pseudo instruction.
5. Insert pseudo ldtilecfg and build the def-use chain between ldtilecfg and AMX
   instructions.
6. The register allocation for tile register.
7. Morph AMX pseudo instruction to AMX real instruction.

Change-Id: I935e1080916ffcb72af54c2c83faa8b2e97d5cb0

Differential Revision: https://reviews.llvm.org/D87981
2020-12-10 17:01:54 +08:00
Saleem Abdulrasool ee74d1b420 X86: use a data driven configuration of Windows x86 libcalls (NFC)
Rather than creating a series of associated calls and ensuring that
everything is lined up, use a table-driven approach that ensures that
the two always stay in sync.
2020-12-09 22:49:11 +00:00
Craig Topper 5ff5cf8e05 [X86] Use APInt::isSignedIntN instead of isIntN for 64-bit ANDs in X86DAGToDAGISel::IsProfitableToFold
Pretty sure we meant to be checking signed 32-bit immediates here
rather than unsigned 32-bit. I suspect I messed this up because
in MathExtras.h we have isIntN and isUIntN so isIntN differs in
signedness depending on whether you're using APInt or plain integers.

This fixes a case where we didn't fold a constant created
by shrinkAndImmediate. Since shrinkAndImmediate doesn't topologically
sort constants it creates, we can fail to convert the Constant
to a TargetConstant. This leads to very strange behavior later.

Fixes PR48458.
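
For readers unfamiliar with the gotcha, a short sketch of the differing conventions (my example, not from the patch):

```
#include "llvm/ADT/APInt.h"
#include "llvm/Support/MathExtras.h"

// -1 fits a signed 32-bit immediate but not an unsigned 32-bit one.
void signednessGotcha() {
  llvm::APInt V(64, uint64_t(-1), /*isSigned=*/true); // the 64-bit value -1
  bool U = V.isIntN(32);          // false: unsigned check, -1 has 64 active bits
  bool S = V.isSignedIntN(32);    // true: -1 fits in the signed 32-bit range
  bool M = llvm::isIntN(32, -1);  // true: MathExtras' isIntN is the signed check
  (void)U; (void)S; (void)M;
}
```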
2020-12-09 13:39:07 -08:00
Simon Pilgrim 24184dbb82 [X86] Fold CONCAT(VPERMV3(X,Y,M0),VPERMV3(Z,W,M1)) -> VPERMV3(CONCAT(X,Z),CONCAT(Y,W),CONCAT(M0,M1))
Further prep work toward supporting different subvector sizes in combineX86ShufflesRecursively
2020-12-09 14:29:32 +00:00
Kerry McLaughlin 4519ff4b6f [SVE][CodeGen] Add the ExtensionType flag to MGATHER
Adds the ExtensionType flag, which reflects the LoadExtType of a MaskedGatherSDNode.
Also updated SelectionDAGDumper::print_details so that details of the gather
load (is signed, is scaled & extension type) are printed.

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D91084
2020-12-09 11:19:08 +00:00
Harald van Dijk 29c8ea6f1a
[X86] Handle localdynamic TLS model in x32 mode
D92346 added TLS_(base_)addrX32 to handle TLS in x32 mode, but missed the
different TLS models. This diff fixes the logic for the local dynamic model
where `RAX` was used when `EAX` should be, and extends the tests to cover
all four TLS models.

Fixes https://bugs.llvm.org/show_bug.cgi?id=26472.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D92737
2020-12-08 21:06:00 +00:00
Tim Northover c5978f42ec UBSAN: emit distinctive traps
Sometimes people get minimal crash reports after a UBSAN incident. This change
tags each trap with an integer representing the kind of failure encountered,
which can aid in tracking down the root cause of the problem.
2020-12-08 10:28:26 +00:00
Simon Pilgrim c86c024e10 [X86] Fix static analyzer warnings. NFCI.
Replace '|' with '||' in condition, and fix case of SignedMode variable.
2020-12-07 18:23:55 +00:00
Fangrui Song 9fe1809f8c [X86] Delete 3 unused declarations 2020-12-06 15:13:39 -08:00
Simon Pilgrim 0101fb73de [X86] Fold MOVMSK(ICMP_SGT(X,-1)) -> NOT(MOVMSK(X)))
Noticed while triaging PR37506
2020-12-06 17:56:41 +00:00
Layton Kifer ac522f8700 [DAGCombiner] Fold (sext (not i1 x)) -> (add (zext i1 x), -1)
Move fold of (sext (not i1 x)) -> (add (zext i1 x), -1) from X86 to DAGCombiner to improve codegen on other targets.
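
A quick case check of the fold, worked out for the two possible i1 values:

```
// For i1 x: sext(not x) gives -1 when x==0 and 0 when x==1;
// add(zext x, -1) gives 0 + -1 = -1 when x==0 and 1 + -1 = 0 when x==1.
int foldModel(bool X) { return static_cast<int>(X) - 1; } // models both forms
```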

Differential Revision: https://reviews.llvm.org/D91589
2020-12-06 11:52:10 -05:00
Simon Pilgrim db900995ed [CostModel][X86] getGatherScatterOpCost - use default implementation for alt costkinds
Noticed while looking at D92701 - we only really handle TCK_RecipThroughput gather/scatter costs - for now drop back to the default implementation for non-legal gathers/scatters.
2020-12-06 14:08:26 +00:00
Fangrui Song 687b83ceab [X86FastISel] Fix MO_GOTPCREL GlobalValue reference in static relocation model
This fixes the bug referenced by 5582a79876
which was exposed by 961f31d8ad.

With this change, `movq src@GOTPCREL, %rcx` => `movq src@GOTPCREL(%rip), %rcx`
2020-12-05 23:13:28 -08:00
Fangrui Song a084c0388e [TargetMachine] Don't imply dso_local on function declarations in Reloc::Static model for ELF/wasm
clang/lib/CodeGen/CodeGenModule sets dso_local on applicable function declarations,
we don't need to duplicate the work in TargetMachine:shouldAssumeDSOLocal.
(Actually the long-term goal (started by r324535) is to drop TargetMachine::shouldAssumeDSOLocal.)

By not implying dso_local, we will respect dso_local/dso_preemptable specifiers
set by the frontend. This allows the proposed -fno-direct-access-external-data
option to work with -fno-pic and prevent a canonical PLT entry (SHN_UNDEF with non-zero st_value)
when taking the address of a function symbol.

This patch should be NFC in terms of the Clang emitted assembly because the case
we don't set dso_local is a case Clang sets dso_local. However, some tests don't
set dso_local on some function declarations and expose some differences. Most
tests have been fixed to be more robust in the previous commit.
2020-12-05 14:54:37 -08:00
Fangrui Song 37f0c8df47 [X86] Emit @PLT for x86-64 and keep unadorned symbols for x86-32
This essentially reverts the x86-64 side effect of r327198.

For x86-32, @PLT (R_386_PLT32) is not suitable in -fno-pic mode so the
code forces MO_NO_FLAG (like a forced dso_local) (https://bugs.llvm.org//show_bug.cgi?id=36674#c6).

For x86-64, both `call/jmp foo` and `call/jmp foo@PLT` emit R_X86_64_PLT32
(https://sourceware.org/bugzilla/show_bug.cgi?id=22791) so there is no
difference using @PLT. Using @PLT is actually favorable because this drops
a difference with -fpie/-fpic code and makes it possible to avoid a canonical
PLT entry when taking the address of an undefined function symbol.
2020-12-05 13:17:47 -08:00
Fangrui Song db13a138bd [TargetMachine] Move X86 specific shouldAssumeDSOLocal logic to X86Subtarget::classifyGlobalFunctionReference 2020-12-05 12:32:50 -08:00
Simon Pilgrim b96a521077 [X86] LowerRotate - enable custom lowering of ROTL/ROTR vXi16 on VBMI2 targets. 2020-12-04 12:16:59 +00:00
Simon Pilgrim d073805be6 [X86] LowerRotate - VBMI2 targets can lower vXi16 rotates using funnel shifts.
Ideally we'd do this inside DAGCombine but until we can make the FSHL/FSHR opcodes legal for VBMI2 it won't help us.
2020-12-04 11:29:23 +00:00
Simon Pilgrim df1ddc4234 [X86] Let VBMI2 non-VLX targets still use funnel shifts instructions 2020-12-04 11:06:43 +00:00
Simon Pilgrim 8eedd18fcb [X86] Remove unnecessary bitcast. NFC.
The X86ISD::SUBV_BROADCAST node is already of type VT.
2020-12-04 09:44:57 +00:00
Xiang1 Zhang f2e2924463 [X86] Unbind the ebx with GOT address in regcall calling convention
No register can be allocated for an indirect call when it uses the
regcall calling convention and passes 5 or more args.
For example:
call vreg (ag1, ag2, ag3, ag4, ag5, ...) --> 5 regs (EAX, ECX, EDX, ESI, EDI)
used for passing args, 1 reg (EBX) used to hold the GOT pointer, so no
regs can be allocated to vreg.

The Intel386 architecture provides 8 general purpose 32-bit registers. RA
mostly uses 6 of them (EAX, EBX, ECX, EDX, ESI, EDI). 5 of these registers
can be used to pass function arguments (EAX, ECX, EDX, ESI, EDI).
EBX is used to hold the GOT pointer when making function calls via the PLT.
ESP and EBP are usually reserved in register allocation.

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D91020
2020-12-04 10:00:13 +08:00
Kazu Hirata a333071754 [X86] Remove DecodeVPERMVMask and DecodeVPERMV3Mask
This patch removes the variants of DecodeVPERMVMask and
DecodeVPERMV3Mask that take "const Constant *C" as they are not used
anymore.

They were introduced on Sep 8, 2015 in commit
e88038f235.

The last use of DecodeVPERMVMask(const Constant *C, ...)  was removed
on Feb 7, 2016 in commit 73fc26b44a.

The last use of DecodeVPERMV3Mask(const Constant *C, ...) was removed
on May 28, 2018 in commit dcfcfdb0d1.

Differential Revision: https://reviews.llvm.org/D91926
2020-12-03 09:12:02 -08:00
Harald van Dijk c9be4ef184
[X86] Add TLS_(base_)addrX32 for X32 mode
LLVM has TLS_(base_)addr32 for 32-bit TLS addresses in 32-bit mode, and
TLS_(base_)addr64 for 64-bit TLS addresses in 64-bit mode. x32 mode wants 32-bit
TLS addresses in 64-bit mode, which were not yet handled. This adds
TLS_(base_)addrX32 as copies of TLS_(base_)addr64, except that they use
tls32(base)addr rather than tls64(base)addr, and then restricts
TLS_(base_)addr64 to 64-bit LP64 mode, TLS_(base_)addrX32 to 64-bit ILP32 mode.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D92346
2020-12-02 22:20:36 +00:00
H.J. Lu 18ce612353
Use PC-relative address for x32 TLS address
Since x32 supports PC-relative addressing, it shouldn't use EBX for the
TLS address.  Instead of checking N.getValueType(), we should check
Subtarget->is32Bit().  This fixes PR22676.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D16474
2020-12-02 22:20:36 +00:00
Hongtao Yu 24d4291ca7 [CSSPGO] Pseudo probes for function calls.
An indirect call site needs to be probed for its potential call targets. With CSSPGO a direct call also needs a probe so that a calling context can be represented by a stack of callsite probes. Unlike pseudo probes for basic blocks that are in form of standalone intrinsic call instructions, pseudo probes for callsites have to be attached to the call instruction, thus a separate instruction would not work.

One possible way of attaching a probe to a call instruction is to use a special metadata that carries information about the probe. The special metadata will have to make its way through the optimization pipeline down to object emission. This requires additional efforts to maintain the metadata in various places. Given that the `!dbg` metadata is a first-class metadata and has all essential support in place, leveraging the `!dbg` metadata as a channel to encode pseudo probe information is probably the easiest solution.

With the requirement of not inflating `!dbg` metadata that is allocated for almost every instruction, we found that the 32-bit DWARF discriminator field which mainly serves AutoFDO can be reused for pseudo probes. DWARF discriminators distinguish identical source locations between instructions and with pseudo probes such support is not required. In this change we are using the discriminator field to encode the ID and type of a callsite probe and the encoded value will be unpacked and consumed right before object emission. When a callsite is inlined, the callsite discriminator field will go with the inlined instructions. The `!dbg` metadata of an inlined instruction is in the form of a scope stack. The top of the stack is the instruction's original `!dbg` metadata and the bottom of the stack is for the original callsite of the top-level inliner. Except for the top of the stack, all other elements of the stack actually refer to the nested inlined callsites whose discriminator field (which actually represents a callsite probe) can be used together to represent the inline context of an inlined PseudoProbeInst or CallInst.

To avoid collision with the baseline AutoFDO in various places that handle DWARF discriminators where a check against the `-pseudo-probe-for-profiling` switch is not available, a special encoding scheme is used to tell a pseudo probe discriminator apart from a regular discriminator. For a regular discriminator, if all of the lowest 3 bits are non-zero, it means the discriminator is basically empty and all higher 29 bits can be reserved for pseudo probe use.
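
A minimal sketch of the tag test implied by that scheme (the exact bit layout here is my assumption, for illustration only):

```
#include <cstdint>

// Treat a discriminator whose lowest 3 bits are all set as a pseudo-probe
// discriminator, leaving the upper 29 bits free. Layout assumed, not verified.
bool isPseudoProbeDiscriminator(uint32_t D) {
  return (D & 0x7) == 0x7;
}
```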

Callsite pseudo probes are inserted in `SampleProfileProbePass` and a target-independent MIR pass `PseudoProbeInserter` is added to unpack the probe ID/type from `!dbg`.

Note that with this work the switch -debug-info-for-profiling will not work with -pseudo-probe-for-profiling anymore. They cannot be used at the same time.

Reviewed By: wmi

Differential Revision: https://reviews.llvm.org/D91756
2020-12-02 13:45:20 -08:00
Simon Pilgrim f019362329 [X86] EltsFromConsecutiveLoads - remove old FIXME comment. NFC.
It's unlikely an undef element in a zero vector will be of any use.
2020-12-02 17:21:41 +00:00
Simon Pilgrim 3900ec6f05 [X86] combineX86ShufflesRecursively - remove old FIXME comment. NFC.
It's unlikely an undef element in a zero vector will be of any use, and SimplifyDemandedVectorElts now calls combineX86ShufflesRecursively so it's unlikely we actually have a dependency on these specific elements.
2020-12-02 16:29:38 +00:00
Simon Pilgrim 0dab7ecc5d [X86] EltsFromConsecutiveLoads - pull out repeated NumLoadedElts. NFCI. 2020-12-02 16:29:37 +00:00
Fangrui Song f0659c0673 [X86] Support modifier @PLTOFF for R_X86_64_PLTOFF64
`gcc -mcmodel=large` can emit @PLTOFF.

Reviewed By: grimar

Differential Revision: https://reviews.llvm.org/D92294
2020-12-01 08:39:01 -08:00
Sanjay Patel 136f98e523 [x86] adjust cost model values for minnum/maxnum with fast-math-flags
Without FMF, we lower these intrinsics into something like this:

vmaxsd	%xmm0, %xmm1, %xmm2
vcmpunordsd	%xmm0, %xmm0, %xmm0
vblendvpd	%xmm0, %xmm1, %xmm2, %xmm0

But if we can ignore NANs, the single min/max instruction is enough
because there is no need to fix up the x86 logic that corresponds to
X > Y ? X : Y.
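
A scalar model of why the fixup is needed without fast-math (illustrative only):

```
// x86's maxsd implements X > Y ? X : Y, which returns the second operand
// whenever either input is NaN; maxnum must instead return the non-NaN operand.
double maxsdLike(double X, double Y) { return X > Y ? X : Y; }
// maxsdLike(NaN, 1.0) == 1.0, but maxsdLike(1.0, NaN) is NaN -- hence the
// extra cmpunord/blend when NaNs cannot be ignored.
```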

We probably want to make other adjustments for FP intrinsics with FMF
to account for specialized codegen (for example, FSQRT).

Differential Revision: https://reviews.llvm.org/D92337
2020-12-01 10:45:53 -05:00
Simon Pilgrim 1b209ff9e3 [DAG] Move vselect(icmp_ult, 0, sub(x,y)) -> usubsat(x,y) to DAGCombine (PR40111)
Move the X86 VSELECT->USUBSAT fold to DAGCombiner - there's nothing target specific about these folds.
2020-12-01 14:25:29 +00:00
Simon Pilgrim 6dbd0d36a1 [DAG] Move vselect(icmp_ult, -1, add(x,y)) -> uaddsat(x,y) to DAGCombine (PR40111)
Move the X86 VSELECT->UADDSAT fold to DAGCombiner - there's nothing target specific about these folds.

The SSE42 test diffs are relatively benign - it's avoiding an extra constant load in exchange for an extra xor operation - there are extra register moves, which is annoying as those operations should all commute away.

Differential Revision: https://reviews.llvm.org/D91876
2020-12-01 11:56:26 +00:00
Simon Pilgrim c63799fc52 [InstCombine][X86] Fold addsub intrinsic to fadd/fsub depending on demanded elts (PR46277) 2020-12-01 11:27:40 +00:00
Harald van Dijk cdac34bd47
[X86] Zero-extend pointers to i64 for x86_64
For LP64 mode, this has no effect as pointers are already 64 bits.
For ILP32 mode (x32), this extension is specified by the ABI.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D91338
2020-11-30 18:51:23 +00:00
Simon Pilgrim e425d0b92a [InstCombine][X86] Add basic addsub intrinsic SimplifyDemandedVectorElts support (PR46277)
Pass through the demanded elts mask to the source operands.

The next step will be to add support for folding to add/sub if we only demand odd/even elements.
2020-11-30 18:40:16 +00:00
Fangrui Song 25c8fbb3d9 [X86] Don't emit R_X86_64_[REX_]GOTPCRELX for a GOT load with an offset
clang may produce `movl x@GOTPCREL+4(%rip), %eax` when loading the high
32 bits of the address of a global variable in -fpic/-fpie mode.

If assembled by GNU as, the fixup emits R_X86_64_GOTPCRELX with an addend != -4.
The instruction loads from the GOT entry with an offset and thus it is incorrect
to relax the instruction.

This patch does not emit a relaxable relocation for a GOT load with an offset
because R_X86_64_[REX_]GOTPCRELX do not make sense for instructions which cannot
be relaxed.  The result is good enough for LLD to work. GNU ld relaxes
mov+GOTPCREL as well, but it suppresses the relaxation if addend != -4.

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D92114
2020-11-30 08:27:31 -08:00
Simon Pilgrim 83d79ca5bf [X86][AVX512] Only lower to VPALIGNR if we have BWI (PR48322) 2020-11-30 10:51:24 +00:00
Harald van Dijk 47e2fafbf3
[X86] Do not allow FixupSetCC to relax constraints
The build bots caught two additional pre-existing problems exposed by the test change part of my change https://reviews.llvm.org/D91339, when expensive checks are enabled. https://reviews.llvm.org/D91924 fixes one of them, this fixes the other.

FixupSetCC will change code in the form of

  %setcc = SETCCr ...
  %ext1 = MOVZX32rr8 %setcc

to

  %zero = MOV32r0
  %setcc = SETCCr ...
  %ext2 = INSERT_SUBREG %zero, %setcc, %subreg.sub_8bit

and replace uses of %ext1 with %ext2.

The register class for %ext2 did not take into account any constraints on %ext1, which may have been required by its uses. This change ensures that the original constraints are honoured, by instead of creating a new %ext2 register, reusing %ext1 and further constraining it as needed. This requires a slight reorganisation to account for the fact that it is possible for the constraining to fail, in which case no changes should be made.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D91933
2020-11-28 17:46:56 +00:00
Harald van Dijk 47c902ba84
[X86] Have indirect calls take 64-bit operands in 64-bit modes
The build bots caught two additional pre-existing problems exposed by the test change part of my change https://reviews.llvm.org/D91339, when expensive checks are enabled. This fixes one of them.

X86 has CALL64r and CALL32r opcodes, where CALL64r takes a 64-bit register, and CALL32r takes a 32-bit register. CALL64r can only be used in 64-bit mode, CALL32r can only be used in 32-bit mode. LLVM would assume that after picking the appropriate CALLr opcode, a pointer-sized register would be a valid operand, but in x32 mode, which is a 64-bit mode, pointers are 32 bits. In this mode, it is invalid to directly pass a pointer to CALL64r; it needs to be extended to 64 bits first.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D91924
2020-11-28 16:46:30 +00:00
Simon Pilgrim 969918e177 [DAG] Legalize umin(x,y) -> sub(x,usubsat(x,y)) and umax(x,y) -> add(x,usubsat(y,x)) iff usubsat is legal
If usubsat() is legal, this is likely to result in smaller codegen expansion than the default cmp+select codegen expansion.

Allows us to move the x86-specific lowering to the generic expansion code.
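
The identities being exploited, written out as a scalar sketch (my illustration, not the patch's code):

```
#include <cstdint>

// usubsat(x, y) == (x > y ? x - y : 0) in unsigned arithmetic, so:
//   x - usubsat(x, y) == umin(x, y)
//   x + usubsat(y, x) == umax(x, y)
uint32_t usubsat(uint32_t X, uint32_t Y) { return X > Y ? X - Y : 0; }
uint32_t uminViaUsubsat(uint32_t X, uint32_t Y) { return X - usubsat(X, Y); }
uint32_t umaxViaUsubsat(uint32_t X, uint32_t Y) { return X + usubsat(Y, X); }
```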

Differential Revision: https://reviews.llvm.org/D92183
2020-11-27 11:18:58 +00:00
Simon Pilgrim 8057ebf4a0 Revert rG12d59b696b330 "[DAG] Legalize umin(x,y) -> sub(x,usubsat(x,y)) and umax(x,y) -> add(x,usubsat(y,x)) iff usubsat is legal"
This reverts commit 12d59b696b.

Prematurely pushed this to trunk
2020-11-26 15:07:45 +00:00
Simon Pilgrim 12d59b696b [DAG] Legalize umin(x,y) -> sub(x,usubsat(x,y)) and umax(x,y) -> add(x,usubsat(y,x)) iff usubsat is legal
If usubsat() is legal, this is likely to result in smaller codegen expansion than the default cmp+select codegen expansion.

Allows us to move the x86-specific lowering to the generic expansion code.
2020-11-26 14:47:28 +00:00
Simon Pilgrim 385a27d6cd [CostModel][X86] Refresh ISD::ABS costs
Update costs now that D92095 and D92102 have tweaked the SSE2 implementation

The SSE42 BLENDVPD cost can actually be used on SSE41 as we don't attempt to generate PCMPGT anymore

Add scalar i16/i32/i64 costs as we can do this cheaply with CMOV
2020-11-25 18:40:19 +00:00
Martin Storsjö 6f792041a5 Reapply "[CodeGen] [WinException] Only produce handler data at the end of the function if needed"
This reapplies 36c64af9d7 in updated
form.

Emit the xdata for each function at .seh_endproc. This keeps the
exact same output header order for most code generated by the LLVM
CodeGen layer. (Sections still change order for code built from
assembly where functions lack an explicit .seh_handlerdata
directive, and functions with chained unwind info.)

The practical effect should be that assembly output lacks
superfluous ".seh_handlerdata; .text" pairs at the end of functions
that don't handle exceptions, which allows such functions to use
the AArch64 packed unwind format again.

Differential Revision: https://reviews.llvm.org/D87448
2020-11-23 23:17:03 +02:00
Craig Topper 4252f7773a [SelectionDAG][ARM][AArch64][Hexagon][RISCV][X86] Add SDNPCommutative to fma and fmad nodes in tablegen. Remove explicit commuted patterns from targets.
X86 was already specially marking fma as commutable which allowed
tablegen to autogenerate commuted patterns. This moves it to the target
independent definition and fix up the targets to remove now
unneeded patterns.

Unfortunately, the tests change because the commuted versions of
the patterns generate operands in a different order than the
explicit patterns.

Differential Revision: https://reviews.llvm.org/D91842
2020-11-23 10:09:20 -08:00
Simon Pilgrim 791040cd8b [DAG] LowerMINMAX - move default expansion to generic TargetLowering::expandIntMINMAX
This is part of the discussion on D91876 about trying to reduce custom lowering of MIN/MAX ops on older SSE targets - if we can improve generic vector expansion we should be able to relax the limitations in SelectionDAGBuilder when it will let MIN/MAX ops be generated, and avoid having to flag so many ops as 'custom'.
2020-11-22 13:02:27 +00:00
Harald van Dijk 4629afa101 [X86] Include %rip for 32-bit RIP-relative relocs for x32
%rip was only included for 64-bit RIP-relative relocations, but needs to be included for 32-bit as well.

Reviewed By: MaskRay, RKSimon

Differential Revision: https://reviews.llvm.org/D91339
2020-11-21 09:20:20 -08:00
Simon Pilgrim 0341029bb4 [X86][AVX] LowerADDSAT_SUBSAT - avoid X86ISD::BLENDV in UADDSAT/USUBSAT v8i32/v4i64 lowering
Use the OR(CMP,ADD) / AND(CMP,SUB) patterns like we do on SSE targets.

Enable custom lowering for v8i32/v4i64 and generalize the 128-bit lowering code for any vector size - this also lets us use the slightly cheaper codegen for icmp_ugt instead of umin/umax.
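
A scalar model of the OR(CMP,ADD) / AND(CMP,SUB) patterns (illustrative; the vector compares produce all-ones or all-zeros lane masks):

```
#include <cstdint>

uint32_t uaddsatModel(uint32_t X, uint32_t Y) {
  uint32_t Sum = X + Y;
  uint32_t Mask = Sum < X ? ~0u : 0u; // CMP: overflowed lane -> all-ones
  return Sum | Mask;                  // OR(CMP, ADD): saturate to all-ones
}
uint32_t usubsatModel(uint32_t X, uint32_t Y) {
  uint32_t Diff = X - Y;
  uint32_t Mask = X > Y ? ~0u : 0u;   // CMP: non-underflowing lane -> all-ones
  return Diff & Mask;                 // AND(CMP, SUB): saturate to zero
}
```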
2020-11-20 18:16:44 +00:00
Craig Topper a7eae62a42 [SelectionDAG][X86][PowerPC][Mips] Replace the default implementation of LowerOperationWrapper with the X86 and PowerPC version.
The default version only works if the returned node has a single
result. The X86 and PowerPC versions support multiple results
and allow a single result to be returned from a node with
multiple outputs. And allow a single result that is not result 0
of the node.

Also replace the Mips version since the new version should work
for it. The original version handled multiple results, but only
if the new node and original node had the same number of results.

Differential Revision: https://reviews.llvm.org/D91846
2020-11-20 10:06:53 -08:00
Simon Pilgrim 09a081f221 [X86][SSE] LowerADDSAT_SUBSAT - avoid X86ISD::BLENDV in UADDSAT/USUBSAT custom lowering
Use the OR(CMP,ADD) / AND(CMP,SUB) patterns like we do on pre-SSE4 targets.

We're still using X86ISD::BLENDV on some AVX targets as we don't do custom lowering for >= 256-bit vectors.

Really this (and combineVSelectWithAllOnesOrZeros) needs moving to DAGCombiner, but pre-SSE42 we see the vXi64 comparison type as a 2 x 32-bit result so we can't just rely on ComputeNumSignBits to give us the 'all bits' result we need.
2020-11-20 16:53:01 +00:00
Liu, Chen3 776f92e067 [X86] Add support for vex, vex2, vex3, and evex for MASM
For MASM syntax, the prefixes are not enclosed in braces.
The assembly code should look like:
  "evex vcvtps2pd xmm0, xmm1"

Differential Revision: https://reviews.llvm.org/D90441
2020-11-20 16:20:19 +08:00
Simon Pilgrim 14ae02fb33 [X86][AVX] Only share broadcasts of different widths from the same SDValue of the same SDNode (PR48215)
D57663 allowed us to reuse broadcasts of the same scalar value by extracting low subvectors from the widest type.

Unfortunately we weren't ensuring the broadcasts were from the same SDValue, just the same SDNode - which failed on multiple-value nodes like ISD::SDIVREM

FYI: I intend to request this be merged into the 11.x release branch.

Differential Revision: https://reviews.llvm.org/D91709
2020-11-19 12:15:18 +00:00
Craig Topper f0b0bab34d [X86] Use GF2P8AFFINEQB to implement vector bitreverse.
We can use GF2P8AFFINEQB to reverse bits in a byte. Shuffles are needed to reverse the bytes in elements larger than i8. LegalizeVectorOps takes care of inserting the shuffle for the larger element size.
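
A scalar model of the per-byte operation being vectorized (illustrative sketch; the instruction performs this on every byte of the vector given a suitable 8x8 bit-matrix constant):

```
#include <cstdint>

// Reverse the bit order within a single byte, the operation GF2P8AFFINEQB
// is being used to vectorize here.
uint8_t reverseByte(uint8_t B) {
  uint8_t R = 0;
  for (int I = 0; I < 8; ++I)
    R = static_cast<uint8_t>(R | (((B >> I) & 1) << (7 - I)));
  return R;
}
```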

We already have Custom lowering for v16i8 with SSSE3, v32i8 with AVX, and v64i8 with AVX512BW.

I think we might be able to use this for scalars too by moving into a vector and back. But I'll save that for a follow-up as it's a little more involved.

Reviewed By: RKSimon, pengfei

Differential Revision: https://reviews.llvm.org/D91515
2020-11-17 23:49:06 -08:00
Florian Hahn b2f4c5fddc
[AsmWriter] Factor out mnemonic generation to accessible getMnemonic.
This patch factors out the part of printInstruction that gets the
mnemonic string for a given MCInst. This is intended to be used
subsequently for the instruction-mix remarks to display the final
mnemonic (D90040).

Unfortunately making `getMnemonic` available to the AsmPrinter
seems to require making it virtual. Not sure if there's a way around
that with the current layering of the AsmPrinters.

Reviewed By: Paul-C-Anagnostopoulos

Differential Revision: https://reviews.llvm.org/D90039
2020-11-17 09:47:38 +00:00
Craig Topper 57c0c4a275 [X86] Fix crash with i64 bitreverse on 32-bit targets with XOP.
We unconditionally marked i64 as Custom, but did not install a
handler in ReplaceNodeResults when i64 isn't a legal type. This
leads to ReplaceNodeResults asserting.

We have two options to fix this. Only mark i64 as Custom on
64-bit targets and let it expand to two i32 bitreverses which
each need a VPPERM. Or the other option is to add the Custom
handling to ReplaceNodeResults. This is what I went with.
2020-11-15 19:02:34 -08:00
serge-sans-paille 9218ff50f9 llvmbuildectomy - replace llvm-build by plain cmake
No longer rely on an external tool to build the llvm component layout.

Instead, leverage the existing `add_llvm_componentlibrary` cmake function and
introduce `add_llvm_component_group` to accurately describe component behavior.

These functions store extra properties in the created targets. These properties
are processed once all components are defined to resolve library dependencies
and produce the header expected by llvm-config.

Differential Revision: https://reviews.llvm.org/D90848
2020-11-13 10:35:24 +01:00
Craig Topper 114f044640 [X86] Use EVT::getIntegerVT instead of MVT::getIntegerVT where the type can be i2 or i4.
This was a mistake introduced in D91294. I'm not sure how to
exercise this with the existing code, but I hit it while trying
some follow up experiments.
2020-11-12 21:48:45 -08:00
Craig Topper a4124e455e [X86] When storing v1i1/v2i1/v4i1 to memory, make sure we store zeros in the rest of the byte
We can't store garbage in the unused bits. It's possible that something like zextload from i1/i2/i4 is created to read the memory. Those zextloads would be legalized assuming the extra bits are 0.
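
An illustration of the invariant (my sketch): a v4i1 value <1,0,1,1> stored to one byte must leave bits 4-7 zero, because a later zextload is legalized assuming exactly that.

```
#include <cstdint>

// Pack four i1 elements into the low bits of a byte; the unused high bits
// must be zero, never garbage.
uint8_t storeV4I1(bool E0, bool E1, bool E2, bool E3) {
  return static_cast<uint8_t>(E0 | (E1 << 1) | (E2 << 2) | (E3 << 3));
}
```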

I'm not sure that the code in lowerStore is executed for the v1i1/v2i1/v4i1 case. It looks like the DAG combine in combineStore may have converted them to v8i1 first. And I think we're missing some cases to avoid going to the stack in the first place. But I don't have time to investigate those things at the moment so I wanted to focus on the correctness issue.

Should fix PR48147.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D91294
2020-11-12 21:28:18 -08:00
Simon Pilgrim 1a62ca65c1 [KnownBits] Add KnownBits::commonBits helper. NFCI.
We have a frequent pattern where we're merging two KnownBits to get the common/shared bits, and I just fell for the gotcha where I tried to use the & operator to merge them...
2020-11-11 12:15:54 +00:00
Kerry McLaughlin ffbbfc76ca [SVE][CodeGen] Add the isTruncatingStore flag to MSCATTER
This patch adds the IsTruncatingStore flag to MaskedScatterSDNode, set by getMaskedScatter().
Updated SelectionDAGDumper::print_details for MaskedScatterSDNode to print
the details of masked scatters (is truncating, signed or scaled).

This is the first in a series of patches which adds support for scalable masked scatters

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D90939
2020-11-11 10:58:24 +00:00
Gaurav Jain 3726b14428 [NFC] Use [MC]Register for x86 target
Differential Revision: https://reviews.llvm.org/D91161
2020-11-10 15:49:39 -08:00
Eric Astor d657f7cd30 [ms] [llvm-ml] Support MASM's relational operators (EQ, LT, etc.)
Support the named relational operators (EQ, LT, etc.).

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D89733
2020-11-09 14:01:36 -05:00
Craig Topper f40925aa8b [X86] Improve lowering of fptoui
Invert the select condition when masking in the sign bit of a fptoui operation. Also, rather than lowering the sign mask to select/xor and expecting the select to get cleaned up later, directly lower to shift/xor.
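
For context, a scalar model of the fptoui-via-fptosi expansion whose sign-bit masking is being tuned (my sketch, not the patch's code):

```
#include <cstdint>

// Values below 2^63 convert directly via signed conversion; larger values
// are biased down by 2^63, converted, and the sign bit is xor'ed back in.
uint64_t fptoui64Model(double X) {
  const double Two63 = 9223372036854775808.0; // 2^63
  if (X < Two63)
    return static_cast<uint64_t>(static_cast<int64_t>(X));
  return static_cast<uint64_t>(static_cast<int64_t>(X - Two63)) ^ (1ULL << 63);
}
```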

Patch by Layton Kifer!

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D90658
2020-11-07 23:50:03 -08:00
Eric Astor 5afb360808 [ms] [llvm-ml] Allow arbitrary strings as integer constants
MASM interprets strings in expression contexts as integers expressed in big-endian base-256, treating each character as its ASCII representation.
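
A worked example of the rule (mine, not from the patch): in an expression context 'ab' evaluates to 'a'*256 + 'b'.

```
// 'a' == 0x61, 'b' == 0x62, so the MASM expression 'ab' means 0x6162 (24930).
constexpr unsigned MasmAB = 'a' * 256 + 'b';
static_assert(MasmAB == 0x6162, "big-endian base-256");
```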

This completely eliminates the need to special-case single-character strings.

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D90788
2020-11-06 17:15:49 -05:00
Sander de Smalen d57bba7cf8 [SVE] Return StackOffset for TargetFrameLowering::getFrameIndexReference.
To accommodate frame layouts that have both fixed and scalable objects
on the stack, describing a stack location or offset using a pointer + uint64_t
is not sufficient. For this reason, we've introduced the StackOffset class,
which models both the fixed- and scalable-sized offsets.
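
A minimal sketch of the idea (the shape below is my assumption for illustration, not LLVM's actual class):

```
#include <cstdint>

// An offset with independent fixed and scalable parts; the scalable part is
// multiplied by the runtime vscale when the offset is finally resolved.
struct StackOffsetSketch {
  int64_t Fixed = 0;    // bytes
  int64_t Scalable = 0; // bytes, times vscale
  int64_t resolve(int64_t VScale) const { return Fixed + Scalable * VScale; }
};
```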

The TargetFrameLowering::getFrameIndexReference is made to return a StackOffset,
so that this can be used in other interfaces, such as to eliminate frame indices
in PEI or to emit Debug locations for variables on the stack.

This patch is purely mechanical and doesn't change the behaviour of how
the result of this function is used for fixed-sized offsets. The patch adds
various checks to assert that the offset has no scalable component, as frame
offsets with a scalable component are not yet supported in various places.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D90018
2020-11-05 11:02:18 +00:00
Fangrui Song 96b0b9a5e3 [X86] Enable shrink-wrapping for no-frame-pointer non-nounwind functions on platforms not using compact unwind
The current compact unwind scheme does not work when the prologue is not at the
start (the instructions before the prologue cannot be described).  (Technically
this is fixable, but it requires multiple compact unwind descriptors for one
function.)

rL255175 chose not to perform shrink-wrapping for no-frame-pointer functions not
marked as nounwind to work around PR25614. This is overly limited, as platforms
not supporting compact unwind (all non-Darwin) do not need the workaround.
This patch restricts the limitation to compact unwind platforms.

Reviewed By: qcolombet

Differential Revision: https://reviews.llvm.org/D89930
2020-11-04 16:51:48 -08:00
Eric Astor 07c4f1d10b [ms] [llvm-ml] Lex MASM strings, including escaping
Allow single-quoted strings and double-quoted character values, as well as doubled-quote escaping.

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D89731
2020-11-04 15:28:43 -05:00
Sanjay Patel 9af561ec99 [x86] update cost table comments for maxnum; NFC
Follow-up suggested in D90613.
2020-11-03 08:09:59 -05:00
Sanjay Patel 35fa3c474f [x86] add AVX2 cost model entries for maxnum of 256-bit vectors
As noticed in D90554 ,
the AVX2 costs for 256-bit vectors did not include FMAXNUM entries,
so we fell back to AVX1 which assumes those ops will be split into
128-bit halves or something close to that.

Differential Revision: https://reviews.llvm.org/D90613
2020-11-02 12:20:17 -05:00
Florian Hahn b3b993a7ad Reland "[TTI] Add VecPred argument to getCmpSelInstrCost."
This reverts the revert commit 408c4408fa.

This version of the patch includes a fix for a crash caused by
treating ICmp/FCmp constant expressions as instructions.

Original message:

On some targets, like AArch64, vector selects can be efficiently lowered
if the vector condition is a compare with a supported predicate.

This patch adds a new argument to getCmpSelInstrCost, to indicate the
predicate of the feeding select condition. Note that it is not
sufficient to use the context instruction when querying the cost of a
vector select starting from a scalar one, because the condition of the
vector select could be composed of compares with different predicates.

This change greatly improves modeling the costs of certain
compare/select patterns on AArch64.

I am also planning on putting up patches to make use of the new argument in
SLPVectorizer & LV.
2020-11-02 15:39:29 +00:00
Simon Pilgrim 9e406ee808 [X86] Make some basic VarArgsLoweringHelper helper methods const. NFCI.
Fixes a number of cppcheck remarks.
2020-10-31 12:16:49 +00:00
Simon Pilgrim e0cbcf96ce [X86] Make the X86FrameSortingComparator operator const. NFCI.
Fixes a cppcheck remark.
2020-10-31 12:16:49 +00:00
Simon Pilgrim 55dbb7d823 [X86] X86MCTargetDesc - ensure the declaration/definition variable names match. NFCI.
Silences cppcheck mismatch warnings.
2020-10-31 11:50:00 +00:00
Simon Pilgrim 30a1d91127 [X86] Reduce scope of DestReg and use specific Register type not unsigned. NFCI. 2020-10-31 11:46:07 +00:00
Simon Pilgrim ae80ac6db2 [X86] printAsmMRegister - make the X86AsmPrinter arg a const reference. NFC.
Fixes cppcheck warning.
2020-10-31 11:41:14 +00:00
Simon Pilgrim 39f77b3224 [X86] assignValueToReg - fix Wshadow warning. NFCI.
X86OutgoingValueHandler already has a MIB member
2020-10-31 11:39:26 +00:00
Simon Pilgrim 33e20008d1 [X86] printAsmVRegister - remove unused argument. NFC. 2020-10-31 11:34:28 +00:00
Simon Pilgrim ec547a7517 [X86] X86AsmPrinter - ensure the declaration/definition variable names match. NFCI.
Silences cppcheck mismatch warnings.
2020-10-31 11:31:46 +00:00
Simon Pilgrim 5eec049689 [X86] No need to determine pointer when the type is already a MachineInstr*. NFCI.
Caught by cppcheck - appears to be a copy+paste typo as the other var is an iterator that does need the &* pointer operation.
2020-10-31 11:26:25 +00:00
Liu, Chen3 756f597841 [X86] Support Intel avxvnni
This patch mainly made the following changes:

1. Support AVX-VNNI instructions;
2. Introduce an ExplicitVEXPrefix flag so that vpdpbusd/vpdpbusds/vpdpwssd/vpdpwssds instructions only use VEX encoding when the user explicitly adds the {vex} prefix.

Differential Revision: https://reviews.llvm.org/D89105
2020-10-31 12:39:51 +08:00
Florian Hahn 408c4408fa Revert "[TTI] Add VecPred argument to getCmpSelInstrCost."
This reverts commit 73f01e3df5.

This appears to break
http://lab.llvm.org:8011/#/builders/85/builds/383.
2020-10-30 21:26:14 +00:00
Sanjay Patel 251dd7c0f9 [x86] add cost overrides for mul with overflow
I'm assuming the standard size integer instructions for this end up as something like:
mulq %rsi
seto %al

And the 'mul' generally has reciprocal throughput of 1 on typical implementations
(higher latency, but that's not handled here).
The default costs may end up much higher than that, and that's what we see in the test diffs.

Vector types are left as a 'TODO'.

Differential Revision: https://reviews.llvm.org/D90431
2020-10-30 12:38:16 -04:00
serge-sans-paille 0f60bcc36c [stack-clash] Fix probing of dynamic alloca
- Perform the probing in the correct direction.
  Related to https://github.com/rust-lang/rust/pull/77885#issuecomment-711062924

- The first touch on a dynamic alloca cannot use a mov because it clobbers
  existing space. Use an xor with 0 instead.

Differential Revision: https://reviews.llvm.org/D90216
2020-10-30 15:34:00 +01:00
Florian Hahn 73f01e3df5 [TTI] Add VecPred argument to getCmpSelInstrCost.
On some targets, like AArch64, vector selects can be efficiently lowered
if the vector condition is a compare with a supported predicate.

This patch adds a new argument to getCmpSelInstrCost, to indicate the
predicate of the feeding select condition. Note that it is not
sufficient to use the context instruction when querying the cost of a
vector select starting from a scalar one, because the condition of the
vector select could be composed of compares with different predicates.

This change greatly improves modeling the costs of certain
compare/select patterns on AArch64.

I am also planning on putting up patches to make use of the new argument in
SLPVectorizer & LV.

Reviewed By: dmgreen, RKSimon

Differential Revision: https://reviews.llvm.org/D90070
2020-10-30 13:49:08 +00:00
Sanjay Patel 7c395f31a6 [CostModel][x86] remove cost-kind predicate for intrinsic costs
We model cost as number of instructions / uops, so it does not
make sense to treat size/blended costs any differently than
throughput.
2020-10-28 14:33:37 -04:00
Benjamin Kramer 35f7cbf9df [X86] Don't crash on CVTPS2PH with wide vector inputs. 2020-10-27 14:42:02 +01:00
Craig Topper f385823e04 [X86] Alternate implementation of D88194.
This uses PreprocessISelDAG to replace the constant before
instruction selection instead of matching opcodes after.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D89178
2020-10-27 00:20:03 -07:00
Wei Wang d602e79a81 [X86] Encode global address in small code model
In the small code model, the program and its symbols are linked in the lower
2 GB of the address space. Try encoding the global address even when the
range is unknown in such cases.

Differential Revision: https://reviews.llvm.org/D89341
2020-10-26 23:14:06 -07:00
Bing1 Yu 2c08f1b4b6 [CostModel][X86] teach TTI to calculate the cost of a chain of vector inserts/extracts more precisely and correctly: in each 128-bit lane, if at least one index is demanded and not all indices are demanded...
In each 128-bit lane, if at least one index is demanded but not all
indices are demanded, and this 128-bit lane is not the first 128-bit lane
of the legalized vector, then this lane needs an extracti128;
if at least one index in a 128-bit lane is demanded, that lane
needs an inserti128.

The following cases will help you build a better understanding:
Assume we insert several elements into a v8i32 vector in avx2,
Case#1: inserting into the 1st index needs vpinsrd + inserti128
Case#2: inserting into 5th index needs extracti128 + vpinsrd +
inserti128
Case#3: inserting into 4,5,6,7 index needs 4*vpinsrd + inserti128.

Reviewed By: pengfei, RKSimon

Differential Revision: https://reviews.llvm.org/D89767
2020-10-27 11:21:13 +08:00
Craig Topper 82974e0114 [X86] Don't disassemble wbinvd with 0xf2 or 0x66 prefix.
The 0xf3 prefix has been defined as wbnoinvd on Icelake Server. So
the prefix isn't ignored by the CPU. AMD documentation suggests that
wbnoinvd is treated as wbinvd on older processors. Intel documentation
is not clear. Perhaps 0xf2 and 0x66 are treated the same, but it's
not documented.

This patch changes TB to PS in the td file so 0xf2 and 0x66 will
be treated as errors. This matches versions of objdump after
wbnoinvd was added.
2020-10-25 20:56:01 -07:00
Liu, Chen3 180548c5c7 [X86] VEX/EVEX prefix doesn't work for inline assembly.
Currently, we lose the encoding information when using inline assembly.
The encoding for inline assembly stays at the default even if we add
the vex/evex prefix.

Differential Revision: https://reviews.llvm.org/D90009
2020-10-26 08:37:45 +08:00
Craig Topper 63ba82ed00 [X86] Use TargetConstant for immediates for VASTART_SAVE_XMM_REGS. 2020-10-25 12:52:56 -07:00
Craig Topper 2ed16aa66f [X86] Use TargetConstant instead of Constant for operands to X86vaarg64. 2020-10-25 12:24:59 -07:00
Craig Topper a222d832d5 [X86] Use TargetConstant for FPDiff with X86::TC_RETURN.
It's required to be a constant and can never be in a register so
make it explicit.
2020-10-25 00:29:11 -07:00
Fangrui Song f04d92af94 [X86] Produce R_X86_64_GOTPCRELX for test/binop instructions (MOV32rm/TEST32rm/...) when -Wa,-mrelax-relocations=yes is enabled
We have been producing R_X86_64_REX_GOTPCRELX (MOV64rm/TEST64rm/...) and
R_X86_64_GOTPCRELX for CALL64m/JMP64m without the REX prefix since 2016 (to be
consistent with GNU as), but not for MOV32rm/TEST32rm/...
2020-10-24 15:14:17 -07:00
Benjamin Kramer 39a0d6889d [X86] Add a stub for Intel's alderlake.
No scheduling, no autodetection.
2020-10-24 19:01:22 +02:00
Benjamin Kramer bd2cf96c09 [X86] Add a stub for znver3 based on the little public information there is in AMD's manuals
No scheduling, no autodetection. Just enough so -march=znver3 works.
2020-10-24 19:01:22 +02:00
Simon Pilgrim ce356e1546 [DAG] Add BuildVectorSDNode::getRepeatedSequence helper to recognise multi-element splat patterns
Replace the X86 specific isSplatZeroExtended helper with a generic BuildVectorSDNode method.

I've just used this to simplify the X86ISD::BROADCASTM lowering so far (and remove isSplatZeroExtended), but we should be able to use this in more places to lower to complex broadcast patterns.

Differential Revision: https://reviews.llvm.org/D87930
2020-10-24 12:23:09 +01:00
Simon Pilgrim 936ef89ebe [X86] lowerShuffleWithPERMV - use MVT::changeTypeToInteger helper. NFCI. 2020-10-23 12:35:27 +01:00
Simon Pilgrim 2692978050 [X86] X86AsmParser - make methods const where possible. NFCI.
Reported by cppcheck
2020-10-22 15:55:06 +01:00
Simon Pilgrim 091b18ba81 [X86] Return const& in IntelExprStateMachine::getIdentifierInfo(). NFCI.
Avoid unnecessary copy in X86AsmParser::ParseIntelOperand
2020-10-22 15:55:06 +01:00
Simon Pilgrim 794dc7ad26 [CodeGen] Split MVT::changeTypeToInteger() functionality from EVT::changeTypeToInteger().
Add the MVT equivalent handling for EVT changeTypeToInteger/changeVectorElementType/changeVectorElementTypeToInteger.

All the SimpleVT code already exists inside the EVT equivalents, but by splitting this out we can use these directly inside MVT types without converting to/from EVT.
2020-10-22 14:27:42 +01:00
Jeremy Morse d73275993b [DebugInstrRef] Substitute debug value numbers to handle optimizations
This patch touches two optimizations, TwoAddressInstruction and X86's
FixupLEAs pass, both of which optimize by re-creating instructions. For
LEAs, various bits of arithmetic are better represented as LEAs on X86,
while TwoAddressInstruction sometimes converts instructions into
three-address instructions if it's profitable.

For debug instruction referencing, both of these require substitutions to
be created -- the old instruction number must be pointed to the new
instruction number, as illustrated in the added test. If this isn't done,
any variable locations based on the optimized instruction are
conservatively dropped.

Differential Revision: https://reviews.llvm.org/D85756
2020-10-22 13:01:03 +01:00
Tianqing Wang be39a6fe6f [X86] Add User Interrupts(UINTR) instructions
For more details about these instructions, please refer to the latest
ISE document:
https://software.intel.com/en-us/download/intel-architecture-instruction-set-extensions-programming-reference.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D89301
2020-10-22 17:33:07 +08:00