Commit Graph

6983 Commits

Author SHA1 Message Date
Martin Storsjö cc3affd8b0 [clang] [MSVC] Implement __mulh and __umulh builtins for aarch64
The code is based on the existing __mulh and __umulh intrinsics for
x86.

This should fix PR51128.

Differential Revision: https://reviews.llvm.org/D106721
2021-08-19 11:29:55 +03:00
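
For context, a minimal sketch of using the builtins above from C in MSVC-compatible mode (assumes clang-cl or --target=aarch64-windows-msvc; not taken from the commit itself):

```
#include <intrin.h>

/* High 64 bits of the full 128-bit product a*b, via the MSVC-compatible
   __umulh builtin; __mulh is the signed counterpart. */
unsigned __int64 high_bits(unsigned __int64 a, unsigned __int64 b) {
  return __umulh(a, b);
}
```
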
Christopher Tetreault 2afb9394a7 [hwasan] Flag stack safety check as requiring aarch64
Reviewed By: fmayer

Differential Revision: https://reviews.llvm.org/D108241
2021-08-18 11:14:01 -07:00
Wang, Pengfei 5aeca3b0a5 [CFE][X86] Enable complex _Float16 support
Support complex _Float16 on X86 in C/C++ following the latest X86 psABI. (https://gitlab.com/x86-psABIs)

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D105331
2021-08-18 11:16:14 +08:00
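
A minimal sketch of the feature described above at the source level (assumes a target where clang accepts _Float16; illustrative only):

```
/* Complex half-precision arithmetic enabled by this change. */
_Complex _Float16 cmul(_Complex _Float16 a, _Complex _Float16 b) {
  return a * b;
}
```
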
Wang, Pengfei 2379949aad [X86] AVX512FP16 instructions enabling 3/6
Enable FP16 conversion instructions.

Ref.: https://software.intel.com/content/www/us/en/develop/download/intel-avx512-fp16-architecture-specification.html

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D105265
2021-08-18 09:03:41 +08:00
Dylan Fleming ef198cd99e [SVE] Remove usage of getMaxVScale for AArch64, in favour of IR Attribute
Removed AArch64 usage of the getMaxVScale interface, replacing it with
the vscale_range(min, max) IR Attribute.

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D106277
2021-08-17 14:42:47 +01:00
Nikita Popov 570c9beb8e [MemorySSA] Remove unnecessary MSSA dependencies
LoopLoadElimination, LoopVersioning and LoopVectorize currently
fetch MemorySSA when constructing LoopAccessAnalysis. However,
LoopAccessAnalysis does not actually use MemorySSA, so we can pass
nullptr instead.

This saves one MemorySSA calculation in the default pipeline, and
thus improves compile-time.

Differential Revision: https://reviews.llvm.org/D108074
2021-08-16 20:40:55 +02:00
Nikita Popov 0a031449b2 [PassBuilder] Don't use MemorySSA for standalone LoopRotate passes
Two standalone LoopRotate passes scheduled using
createFunctionToLoopPassAdaptor() currently enable MemorySSA.
However, while LoopRotate can preserve MemorySSA, it does not use
it, so requiring MemorySSA is unnecessary.

This change doesn't have a practical compile-time impact by itself,
because subsequent passes still request MemorySSA.

Differential Revision: https://reviews.llvm.org/D108073
2021-08-16 20:34:18 +02:00
Wang, Pengfei f1de9d6dae [X86] AVX512FP16 instructions enabling 2/6
Enable FP16 binary operator instructions.

Ref.: https://software.intel.com/content/www/us/en/develop/download/intel-avx512-fp16-architecture-specification.html

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D105264
2021-08-15 08:56:33 +08:00
Craig Topper 4190d99dfc [X86] Add parentheses around casts in some of the X86 intrinsic headers.
This covers the SSE and AVX/AVX2 headers. AVX512 has a lot more macros
due to rounding mode.

Fixes part of PR51324.

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D107843
2021-08-13 09:36:16 -07:00
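
A generic illustration of why the casts in the commit above are parenthesized (the macro below is hypothetical, not one of the real header macros):

```
typedef int v4i __attribute__((vector_size(16)));

#define AS_V4I_BAD(x)  (v4i)(x)     /* cast not wrapped in parentheses      */
#define AS_V4I_GOOD(x) ((v4i)(x))   /* cast wrapped, as done in this change */

/* Postfix [] binds tighter than a cast, so:
     AS_V4I_BAD(v)[0]  expands to (v4i)(v[0])   -- the cast applies to v[0]
     AS_V4I_GOOD(v)[0] expands to ((v4i)(v))[0] -- what the caller expects  */
```
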
Lei Huang 8930af45c3 [PowerPC] Implement XL compatibility builtin __addex
Add builtin and intrinsic for `__addex`.

This patch is part of a series of patches to provide builtins for
compatibility with the XL compiler.

Reviewed By: stefanp, nemanjai, NeHuang

Differential Revision: https://reviews.llvm.org/D107002
2021-08-12 16:38:21 -05:00
Thomas Preud'homme 1e11ccad83 [clang/test] Run thinlto-clang-diagnostic-handler-in-be.c on x86
Clang test CodeGen/thinlto-clang-diagnostic-handler-in-be.c fails on
some non-x86 targets, e.g. hexagon. Since the test already requires x86
to be available as a target, this commit forces the target to x86_64.

Reviewed By: steven_wu

Differential Revision: https://reviews.llvm.org/D107667
2021-08-12 21:38:35 +01:00
Florian Hahn f999312872
Recommit "[Matrix] Overload stride arg in matrix.columnwise.load/store."
This reverts the revert 28c04794df.

The failing MLIR test that caused the revert should be fixed in this
version.

Also includes a PPC test fix previously in 1f87c7c478.
2021-08-12 18:31:57 +01:00
Mehdi Amini 28c04794df Revert "[Matrix] Overload stride arg in matrix.columnwise.load/store."
This reverts commit a1ef81de35.

Broke the MLIR buildbot.
2021-08-12 11:57:19 +00:00
Florian Hahn a1ef81de35
[Matrix] Overload stride arg in matrix.columnwise.load/store.
This patch adjusts the intrinsic definitions of
llvm.matrix.column.major.load and llvm.matrix.column.major.store to
allow overloading the type of the stride. The bitwidth of the stride is
used to perform the offset computation.

This fixes a crash when using __builtin_matrix_column_major_load or
__builtin_matrix_column_major_store on 32-bit platforms. The stride argument
of the builtins is defined as `size_t`, which is 32 bits wide on 32-bit
platforms.

Note that we still perform offset computations with 64-bit width on 32-bit
platforms for accesses that do not take a user-specified stride.
This can be fixed separately.

Fixes PR51304.

Reviewed By: erichkeane

Differential Revision: https://reviews.llvm.org/D107349
2021-08-12 10:45:25 +01:00
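
A minimal sketch of the builtin whose stride argument is discussed above (requires -fenable-matrix; the type alias and function are illustrative, not from the patch):

```
typedef float m4x4_t __attribute__((matrix_type(4, 4)));

/* Loads a 4x4 column-major tile; 'stride' is the distance between columns. */
m4x4_t load_tile(float *p, unsigned long stride) {
  return __builtin_matrix_column_major_load(p, 4, 4, stride);
}
```
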
Fangrui Song 76093b1739 [InlineAdvisor] Add single quotes around caller/callee names
Clang diagnostics refer to identifier names in quotes.
This patch makes inline remarks conform to the convention.
New behavior:

```
% clang -O2 -Rpass=inline -Rpass-missed=inline -S a.c
a.c:4:25: remark: 'foo' inlined into 'bar' with (cost=-30, threshold=337) at callsite bar:0:25; [-Rpass=inline]
int bar(int a) { return foo(a); }
                        ^
```

Reviewed By: hoy

Differential Revision: https://reviews.llvm.org/D107791
2021-08-10 11:51:31 -07:00
Thomas Preud'homme 1397e19129 Set supported target for asan-use-callbacks test
Explicitly set x86_64-linux-gnu as the target for the asan-use-callbacks
clang test since some targets do not support -fsanitize=address (e.g.
i386-pc-openbsd). Also remove a redundant -fsanitize=address and move
-emit-llvm right after -S.

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D107633
2021-08-10 15:01:44 +01:00
Wang, Pengfei 6f7f5b54c8 [X86] AVX512FP16 instructions enabling 1/6
1. Enable FP16 type support and basic declarations used by the following patches.
2. Enable new instructions VMOVW and VMOVSH.

Ref.: https://software.intel.com/content/www/us/en/develop/download/intel-avx512-fp16-architecture-specification.html

Reviewed By: LuoYuanke

Differential Revision: https://reviews.llvm.org/D105263
2021-08-10 12:46:01 +08:00
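
As a quick source-level illustration of the _Float16 support enabled in the series above (assumes a target where clang accepts _Float16; illustrative only):

```
/* Basic half-precision arithmetic. */
_Float16 mul_add(_Float16 a, _Float16 b, _Float16 c) {
  return a * b + c;
}
```
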
Hsiangkai Wang 5f996705e0 [RISCV] Half-precision for vget/vset.
Differential Revision: https://reviews.llvm.org/D107433
2021-08-09 17:38:15 +08:00
Zahira Ammarguellat 4389a413e2 Revert "[clang][fpenv][patch] Change clang option -ffp-model=precise to select ffp-contract=on"
This reverts commit 48ad446a0f.
2021-08-06 12:01:47 -07:00
Serge Pavlov 4c4093e6e3 Introduce intrinsic llvm.isnan
This is a recommit of 16ff91ebcc,
reverted in 0c28a7c990 because it had
an error in the call to getFastMathFlags (the base type should be
FPMathOperator, not Instruction). The original commit message is duplicated below:

    Clang has the builtin function '__builtin_isnan', which implements the C
    library function 'isnan'. This function is currently implemented entirely
    in clang codegen, which expands it into a set of IR operations. There are
    three mechanisms by which the expansion can be made.

    * The most common mechanism is an unordered comparison made by the
      instruction 'fcmp uno'. This simple solution is target-independent
      and works well in most cases, but it is not suitable if floating
      point exceptions are tracked. The corresponding IEEE 754 operation
      and C function must never raise an FP exception, even if the argument
      is a signaling NaN. Compare instructions usually do not have this
      property; they raise the 'invalid' exception in that case. So this
      mechanism is unsuitable when exception behavior is strict. In
      particular, it could result in unexpected trapping if the argument
      is a SNaN.

    * Another solution was implemented in https://reviews.llvm.org/D95948.
      It is used in cases where 'isnan' is not allowed to raise FP
      exceptions, and implements 'isnan' using integer operations.
      It solves the problem of exceptions, but offers a single solution
      for all targets, even though some could do the check more efficiently.

    * The solution implemented by https://reviews.llvm.org/D96568 introduced
      a hook, 'clang::TargetCodeGenInfo::testFPKind', which injects
      target-specific code into the IR. Currently only SystemZ implements
      this hook, and it generates a call to a target-specific intrinsic
      function.

    Although these mechanisms allow 'isnan' to be implemented efficiently,
    expanding 'isnan' in clang has drawbacks:

    * The operation 'isnan' is hidden behind generic integer operations or
      target-specific intrinsics. This complicates analysis and can prevent
      some optimizations.

    * IR can be created by tools other than clang; in that case the
      treatment of 'isnan' has to be duplicated in each such tool.

    Another issue with the current implementation of 'isnan' comes from the
    options '-ffast-math' and '-fno-honor-nans'. If such an option is
    specified, 'fcmp uno' may be optimized to 'false'. This is a valid
    optimization in general, but it results in 'isnan' always returning
    'false'. For example, in some libc++ implementations the following code
    returns 'false':

        std::isnan(std::numeric_limits<float>::quiet_NaN())

    The options '-ffast-math' and '-fno-honor-nans' imply that FP operation
    operands are never NaNs. This assumption, however, should not be applied
    to functions that check FP number properties, including 'isnan'. If
    such a function returns the assumed result instead of actually performing
    the check, it becomes useless in many cases. The option '-ffast-math' is
    often used for performance-critical code, as it can speed up execution
    at the expense of manual treatment of corner cases. If 'isnan' returns
    an assumed result, a user cannot use it in that manual treatment of NaNs
    and has to invent replacements, such as making the check with integer
    operations. The discussion in https://reviews.llvm.org/D18513#387418
    also expresses the opinion that the limitations imposed by
    '-ffast-math' should apply only to 'math' functions but not to 'tests'.

    To overcome these drawbacks, this change introduces a new IR intrinsic
    function, 'llvm.isnan', which implements the check as specified by the
    IEEE-754 and C standards in a target-agnostic way. During IR
    transformations it does not undergo undesirable optimizations. It
    reaches instruction selection, where it is lowered in a target-dependent
    way. The lowering can vary depending on options like '-ffast-math' or
    '-ffp-model' so that the resulting code satisfies the requested semantics.

    Differential Revision: https://reviews.llvm.org/D104854
2021-08-06 14:32:27 +07:00
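
A minimal C-level illustration (not from the patch): with the change above, the check below is emitted as a call to the new llvm.isnan intrinsic rather than being expanded inline in clang codegen.

```
/* Returns nonzero if x is a NaN; clang now lowers __builtin_isnan to the
   llvm.isnan intrinsic instead of an inline expansion. */
int is_nan(double x) {
  return __builtin_isnan(x);
}
```
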
Fangrui Song c38efb4899 [clang] Implement -falign-loops=N (N is a power of 2) for non-LTO
GCC supports multiple forms of -falign-loops=.
-falign-loops= is currently ignored in Clang.

This patch implements the simplest but most useful form, where N is a
power of 2.

The underlying implementation uses an `llvm::TargetOptions` option for now.
Bitcode generation ignores this option.

Differential Revision: https://reviews.llvm.org/D106701
2021-08-05 12:17:50 -07:00
Sean Fertile f888e442bc [PowerPC][AIX] attribute aligned cannot decrease align of a vector var.
On AIX, the aligned attribute cannot decrease the alignment of a variable
when placed on a declaration of vector type.

Differential Revision: https://reviews.llvm.org/D107522
2021-08-05 11:15:12 -04:00
Bradley Smith e57e1e4e00 [clang][AArch64][SVE] Avoid going through memory for fixed/scalable predicate casts
For fixed SVE types, predicates are represented using vectors of i8,
whereas for scalable types they are represented using vectors of i1. We
can avoid going through memory for casts between these by bitcasting the
i1 scalable vectors to/from a scalable i8 vector of matching size, which
can then use the existing vector insert/extract logic.

Differential Revision: https://reviews.llvm.org/D106860
2021-08-04 16:10:37 +00:00
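
A sketch of the source-level casts affected by the change above (assumes SVE support and -msve-vector-bits=512; illustrative only):

```
#include <arm_sve.h>

/* Fixed-length predicate type via the ACLE arm_sve_vector_bits attribute. */
typedef svbool_t fixed_pred_t __attribute__((arm_sve_vector_bits(512)));

/* Conversions between the fixed and scalable predicate types; with this
   change they no longer need to go through memory. */
fixed_pred_t to_fixed(svbool_t p) { return p; }
svbool_t to_scalable(fixed_pred_t p) { return p; }
```
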
Serge Pavlov 0c28a7c990 Revert "Introduce intrinsic llvm.isnan"
This reverts commit 16ff91ebcc.
Several errors were reported, mainly in test-suite execution time. Reverted
for investigation.
2021-08-04 17:18:15 +07:00
Serge Pavlov 16ff91ebcc Introduce intrinsic llvm.isnan
Clang has the builtin function '__builtin_isnan', which implements the C
library function 'isnan'. This function is currently implemented entirely
in clang codegen, which expands it into a set of IR operations. There are
three mechanisms by which the expansion can be made.

* The most common mechanism is an unordered comparison made by the
  instruction 'fcmp uno'. This simple solution is target-independent
  and works well in most cases, but it is not suitable if floating
  point exceptions are tracked. The corresponding IEEE 754 operation
  and C function must never raise an FP exception, even if the argument
  is a signaling NaN. Compare instructions usually do not have this
  property; they raise the 'invalid' exception in that case. So this
  mechanism is unsuitable when exception behavior is strict. In
  particular, it could result in unexpected trapping if the argument
  is a SNaN.

* Another solution was implemented in https://reviews.llvm.org/D95948.
  It is used in cases where 'isnan' is not allowed to raise FP
  exceptions, and implements 'isnan' using integer operations.
  It solves the problem of exceptions, but offers a single solution
  for all targets, even though some could do the check more efficiently.

* The solution implemented by https://reviews.llvm.org/D96568 introduced
  a hook, 'clang::TargetCodeGenInfo::testFPKind', which injects
  target-specific code into the IR. Currently only SystemZ implements
  this hook, and it generates a call to a target-specific intrinsic
  function.

Although these mechanisms allow 'isnan' to be implemented efficiently,
expanding 'isnan' in clang has drawbacks:

* The operation 'isnan' is hidden behind generic integer operations or
  target-specific intrinsics. This complicates analysis and can prevent
  some optimizations.

* IR can be created by tools other than clang; in that case the
  treatment of 'isnan' has to be duplicated in each such tool.

Another issue with the current implementation of 'isnan' comes from the
options '-ffast-math' and '-fno-honor-nans'. If such an option is
specified, 'fcmp uno' may be optimized to 'false'. This is a valid
optimization in general, but it results in 'isnan' always returning
'false'. For example, in some libc++ implementations the following code
returns 'false':

    std::isnan(std::numeric_limits<float>::quiet_NaN())

The options '-ffast-math' and '-fno-honor-nans' imply that FP operation
operands are never NaNs. This assumption, however, should not be applied
to functions that check FP number properties, including 'isnan'. If
such a function returns the assumed result instead of actually performing
the check, it becomes useless in many cases. The option '-ffast-math' is
often used for performance-critical code, as it can speed up execution
at the expense of manual treatment of corner cases. If 'isnan' returns
an assumed result, a user cannot use it in that manual treatment of NaNs
and has to invent replacements, such as making the check with integer
operations. The discussion in https://reviews.llvm.org/D18513#387418
also expresses the opinion that the limitations imposed by
'-ffast-math' should apply only to 'math' functions but not to 'tests'.

To overcome these drawbacks, this change introduces a new IR intrinsic
function, 'llvm.isnan', which implements the check as specified by the
IEEE-754 and C standards in a target-agnostic way. During IR
transformations it does not undergo undesirable optimizations. It
reaches instruction selection, where it is lowered in a target-dependent
way. The lowering can vary depending on options like '-ffast-math' or
'-ffp-model' so that the resulting code satisfies the requested semantics.

Differential Revision: https://reviews.llvm.org/D104854
2021-08-04 15:27:49 +07:00
Hsiangkai Wang 8b33839f01 [RISCV] Rename vector inline constraint from 'v' to 'vr' and 'vm' in IR.
Differential Revision: https://reviews.llvm.org/D107139
2021-08-01 05:58:17 +08:00
Eli Friedman bdd55b2f18 Fix the default alignment of i1 vectors.
Currently, the default alignment is much larger than the actual size of
the vector in memory.  Fix this to use a sane default.

For SVE, temporarily remove lowering of load/store operations for
predicates with less than 16 elements. The layout the backend was
assuming for SVE predicates with less than 16 elements doesn't agree
with the frontend. More work probably needs to be done here.

This change is, strictly speaking, not backwards-compatible at the
bitcode level. But probably nobody is actually depending on that; i1
vectors in memory are rare, and the code that does use them probably
ends up forcing the alignment to something sane anyway.  If we think
this is a concern, I can restrict this to scalable vectors for now
(where it's actually causing issues for me at the moment).

Differential Revision: https://reviews.llvm.org/D88994
2021-07-31 14:09:59 -07:00
Eli Friedman 6eb2ffbaeb Fix a couple regression tests I missed updating in 2a284782 2021-07-31 13:41:15 -07:00
Eli Friedman 2a2847823f [ConstantFold] Get rid of special cases for sizeof etc.
Target-dependent constant folding will fold these down to simple
constants (or at least, expressions that don't involve a GEP).  We don't
need heroics to try to optimize the form of the expression before that
happens.

Fixes https://bugs.llvm.org/show_bug.cgi?id=51232 .

Differential Revision: https://reviews.llvm.org/D107116
2021-07-31 13:20:47 -07:00
Alexandros Lamprineas 29b263a34f [Clang][AArch64] Inline assembly support for the ACLE type 'data512_t'
In LLVM IR terms the ACLE type 'data512_t' is essentially an aggregate
type { [8 x i64] }. When emitting code for inline assembly operands,
clang tries to scalarize aggregate types to an integer of the equivalent
length; otherwise it passes them by reference. This patch adds a target
hook to tell whether a given inline assembly operand is scalarizable
so that clang can emit code to pass/return it by value.

Differential Revision: https://reviews.llvm.org/D94098
2021-07-31 09:51:28 +01:00
Fanbo Meng bdf4c7b738 [z/OS] Remove overriding default attribute aligned value
Make DefaultAlignForAttributeAligned consistent with SystemZ.

Reviewed By: abhina.sreeskantharajan, anirudhp

Differential Revision: https://reviews.llvm.org/D107189
2021-07-30 15:51:40 -04:00
Nemanja Ivanovic 9019b55b60 [PowerPC] Fix byte ordering of ld/st with length on BE
The builtins vec_xl_len_r and vec_xst_len_r actually use the
wrong side of the vector on big endian Power9 systems. We never
spotted this before because there was no such thing as a big
endian distro that supported Power9. Now we have AIX, and the
elements are in the wrong part of the vector. This just fixes
it so the elements are loaded to and stored from the right
side of the vector.
2021-07-30 14:37:24 -05:00
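
A minimal sketch of the builtin pair fixed above (assumes -mcpu=pwr9 and <altivec.h>; not from the patch):

```
#include <stddef.h>
#include <altivec.h>

/* Loads 'len' bytes (1..16), right-justified in the vector register. */
vector unsigned char load_tail(unsigned char *p, size_t len) {
  return vec_xl_len_r(p, len);
}
```
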
Amy Kwan 5ea6117a9e [PowerPC] Emit error for Altivec vector initializations when -faltivec-src-compat=gcc is specified
Under the -faltivec-src-compat=gcc option, AltiVec vector initializations should
be treated as if they were compiled with gcc; that is, an error is emitted when
the vectors are initialized in the parenthesized or non-parenthesized manner.
This patch implements this behaviour.

Differential Revision: https://reviews.llvm.org/D106410
2021-07-30 09:35:43 -05:00
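
A short illustration of the pattern diagnosed by the change above (assumes -maltivec; illustrative only):

```
/* Accepted in all source-compat modes. */
vector int a = {1, 2, 3, 4};

/* AltiVec-style parenthesized initialization: with -faltivec-src-compat=gcc
   this is now rejected with an error, matching gcc's behaviour. */
vector int b = (vector int)(1, 2, 3, 4);
```
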
Melanie Blower 0a175ad445 [clang][patch][FPEnv] Fix syntax errors in pragma float_control test
In a post-commit message to https://reviews.llvm.org/D102343
@MaskRay pointed out syntax errors in one of the test cases. This
patch fixes those problems; I had forgotten the colon after the CHECK- strings.
2021-07-30 09:59:45 -04:00
Melanie Blower fd251d903b [clang][patch] Remove erroneous run line committed in D102343 2021-07-29 12:42:04 -04:00
Melanie Blower bc5b5ea037 [clang][patch][FPEnv] Make initialization of C++ globals strictfp aware
@kpn pointed out that the global variable initialization functions didn't
have the "strictfp" metadata set correctly, and @rjmccall said that there
was buggy code in SetFPModel and StartFunction; this patch solves those
problems. When Sema creates a FunctionDecl, it sets
FunctionDeclBits.UsesFPIntrin to "true" if the lexical FP settings
(i.e. a combination of command line options and #pragma float_control
settings) correspond to ConstrainedFP mode. That bit is used when CodeGen
starts codegen for an LLVM function, and it translates into the
"strictfp" function attribute. See bugs.llvm.org/show_bug.cgi?id=44571

Reviewed By: Aaron Ballman

Differential Revision: https://reviews.llvm.org/D102343
2021-07-29 12:02:37 -04:00
Kai Luo e4902e69e9 [PowerPC] Fix return type of XL compat CAS
`__compare_and_swap*` should return `i32` rather than `i1`.

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D107077
2021-07-29 14:49:26 +00:00
Freddy Ye 58712987e5 [NFC][X86] add missing tests in clang/test/CodeGen/attr-target-mv.c
Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D106849
2021-07-29 13:28:10 +08:00
Melanie Blower 66ddac22e2 [CLANG][PATCH][FPEnv] Add support for option -ffp-eval-method and extend #pragma float_control similarly
The Intel compiler ICC supports the option "-fp-model=(source|double|extended)",
which causes the compiler to use a wider type for intermediate floating point
calculations. Also supported is a way to embed this effect in the source
program with #pragma float_control(source|double|extended).
This patch extends pragma float_control syntax, and also adds support
for a new floating point option "-ffp-eval-method=(source|double|extended)".
source: intermediate results use source precision
double: intermediate results use double precision
extended: intermediate results use extended precision

Reviewed By: Aaron Ballman

Differential Revision: https://reviews.llvm.org/D93769
2021-07-28 10:50:32 -04:00
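
An illustration of the semantics described above (the function is illustrative, not part of the patch):

```
/* With -ffp-eval-method=double (or the extended float_control pragma), the
   intermediate product a*b is evaluated in double precision and only the
   final result is rounded back to float. */
float scaled_sum(float a, float b, float c) {
  return a * b + c;
}
```
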
Jinsong Ji edbdf8e5b5 [AIX] Update fetch_and_add type
It turns out that the AIX kernel is defining int instead of unsigned int for fetch_and_add.

Legacy XL also defines this to be signed.

https://www.ibm.com/docs/en/aix/7.2?topic=f-fetch-add-kernel-services

So update the type for compat.

Reviewed By: hubert.reinterpretcast

Differential Revision: https://reviews.llvm.org/D106920
2021-07-27 22:13:29 +00:00
Florian Mayer 835ef6f93d [hwasan] Fix stack safety test for old PM.
With the old PM, the stub for __hwasan_generate_tag is still generated
in the IR, but never called.

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D106858
2021-07-27 20:50:46 +01:00
Melanie Blower 48ad446a0f [clang][fpenv][patch] Change clang option -ffp-model=precise to select ffp-contract=on
Change -ffp-model=precise to enable -ffp-contract=on (previously
-ffp-model=precise enabled -ffp-contract=fast). This is a follow-up
to Andy Kaylor's comments in the llvm-dev discussion "Floating Point
semantic modes". From the same email thread, I put Andy's distillation
of floating point options and floating point modes into UsersManual.rst.
Also fixes bugs.llvm.org/show_bug.cgi?id=50222.

I had to revert this a few times because of failures on the x86-64
buildbot but I think we finally have that fixed by LNT/79f2b03c51.

Reviewed By: rjmccall, andrew.kaylor

Differential Revision: https://reviews.llvm.org/D74436
2021-07-27 13:55:31 -04:00
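
A small illustration of the behavioural difference described above (illustrative function, not from the patch):

```
/* With -ffp-contract=on (now implied by -ffp-model=precise), the multiply-add
   below may be fused into a single FMA only within this one expression, per
   the language's FP_CONTRACT rules; -ffp-contract=fast could also fuse
   across statements. */
double mad(double a, double b, double c) {
  return a * b + c;
}
```
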
Thomas Lively 33786576fd [WebAssembly] Codegen for extmul SIMD instructions
Replace the clang builtins and LLVM intrinsics for the SIMD extmul instructions
with normal codegen patterns.

Differential Revision: https://reviews.llvm.org/D106724
2021-07-27 08:41:30 -07:00
Albion Fung 18526b0d66 [PowerPC] Changed sema checking range for tdw td builtin
To match xlc's behaviour and the definition in the PowerPC ISA 3.1,
ibm-clang should produce an error when 0 is passed to the builtin.
This patch changes the accepted range from 0-31 to 1-31.

Differential revision: https://reviews.llvm.org/D106817
2021-07-26 18:44:33 -05:00
Reid Kleckner f9f56488e0 [DebugInfo] Use per-enumerator signedness for DIEnumerator
Allegedly the DWARF backend ignores this field of DIEnumerator, but we
set it nonetheless in case we decide to use it in the future.
Alternatively, we could remove it, but it is simpler to pass down the
signed bit as it is in the AST for now.

Implemented to address comments on D106585
2021-07-26 16:14:28 -07:00
Nemanja Ivanovic 1c50a5da36 [PowerPC] Implement partial vector ld/st builtins for XL compatibility
XL provides functions __vec_ldrmb/__vec_strmb for loading/storing a
sequence of 1 to 16 bytes in big endian order, right justified in the
vector register (regardless of target endianness).
This is equivalent to vec_xl_len_r/vec_xst_len_r, which are only
available on Power9.

This patch simply uses the Power9 functions when compiled for Power9,
but provides a more general implementation for Power8.

Differential revision: https://reviews.llvm.org/D106757
2021-07-26 13:19:52 -05:00
Qiu Chaofan 240dde9482 [PowerPC] Change altivec indexed load/store builtins argument type
This patch changes the index argument of the lvxl?/lve[bhw]x and
stvxl?/stve[bhw]x builtins from int to long, because on 64-bit
subtargets an extra extsw would otherwise always be generated, which is
incorrect.

Reviewed By: nemanjai

Differential Revision: https://reviews.llvm.org/D106530
2021-07-27 00:26:50 +08:00
Ulrich Weigand 8cd8120a7b [SystemZ] Add support for new cpu architecture - arch14
This patch adds support for the next-generation arch14
CPU architecture to the SystemZ backend.

This includes:
- Basic support for the new processor and its features.
- Detection of arch14 as host processor.
- Assembler/disassembler support for new instructions.
- New LLVM intrinsics for certain new instructions.
- Support for low-level builtins mapped to new LLVM intrinsics.
- New high-level intrinsics in vecintrin.h.
- Indicate support by defining __VEC__ == 10304.

Note: No currently available Z system supports the arch14
architecture. Once new systems become available, the
official system name will be added as a supported -march name.
2021-07-26 16:57:28 +02:00
Thomas Lively 85157c0079 [WebAssembly] Codegen for pmin and pmax
Replace the clang builtins and LLVM intrinsics for {f32x4,f64x2}.{pmin,pmax}
with standard codegen patterns. Since wasm_simd128.h uses an integer vector as
the standard single vector type, the IR for the pmin and pmax intrinsic
functions contains bitcasts that would not be there otherwise. Add extra codegen
patterns that can still select the pmin and pmax instructions in the presence of
these bitcasts.

Differential Revision: https://reviews.llvm.org/D106612
2021-07-23 14:49:21 -07:00
namazso 91bc85b1eb [MS] Preserve base register %esi around movs[bwl]
Fix for the behavior reported in https://bugs.llvm.org/show_bug.cgi?id=51100; a workaround for the root cause, https://bugs.llvm.org/show_bug.cgi?id=16830.

Similar to https://reviews.llvm.org/D101338.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D106210
2021-07-23 16:28:32 +08:00
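
A minimal sketch of one of the builtins affected by the change above (assumes MSVC-compatible mode, e.g. clang-cl targeting 32-bit x86; not from the patch):

```
#include <stddef.h>
#include <intrin.h>

/* __movsb lowers to "rep movsb", whose implicit source index is %esi --
   the register this change now preserves around the inline sequence. */
void copy_bytes(unsigned char *dst, const unsigned char *src, size_t n) {
  __movsb(dst, src, n);
}
```
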