Commit Graph

31454 Commits

Clement Courbet cb15ba84fe Reland "[DAGCombiner] Allow zextended load combines."
Check that the generated type is simple.
2019-11-22 14:47:18 +01:00
Roman Lebedev 96cf5c8d47
[Codegen] TargetLowering::prepareUREMEqFold(): `x u% C1 ==/!= C2` (PR35479)
Summary:
The current lowering is:
```
Name: (X % C1) == C2 -> X * C3 <= C4 || false
Pre: (C2 == 0 || C1 u<= C2) && (C1 u>> countTrailingZeros(C1)) * C3 == 1
%zz = and i8 C3, 0 ; trick alive into making C3 available in precondition
%o0 = urem i8 %x, C1
%r = icmp eq i8 %o0, C2
  =>
%zz = and i8 C3, 0 ; and silence it from complaining about said reg
%C4 = -1 /u C1
%n0 = mul i8 %x, C3
%n1 = lshr i8 %n0, countTrailingZeros(C1) ; rotate right
%n2 = shl i8 %n0, ((8-countTrailingZeros(C1)) %u 8) ; rotate right
%n3 = or i8 %n1, %n2 ; rotate right
%is_tautologically_false = icmp ule i8 C1, C2
%C4_fixed = select i1 %is_tautologically_false, i8 -1, i8 %C4
%res = icmp ule i8 %n3, %C4_fixed
%r = xor i1 %res, %is_tautologically_false
```
https://rise4fun.com/Alive/2xC
https://rise4fun.com/Alive/jpb5
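
For concreteness, here is a minimal standalone sketch (mine, not code from
the patch) that exhaustively checks the core divisibility test above for i8
and C1 = 6: C3 = 171 is the multiplicative inverse of C1 >> ctz(C1) = 3
mod 2^8, and C4 = -1 /u 6 = 42.
```
#include <cassert>
#include <cstdint>

// The lshr/shl/or triple in the proof is just a rotate right.
static uint8_t rotr8(uint8_t v, unsigned n) {
  n &= 7;
  return (uint8_t)((v >> n) | (v << ((8 - n) & 7)));
}

int main() {
  for (unsigned x = 0; x < 256; ++x) {
    bool lowered = rotr8((uint8_t)(x * 171), 1) <= 42; // X * C3, rotr, cmp
    assert(lowered == (x % 6 == 0));                   // the C2 == 0 case
  }
}
```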

However, we can support non-tautological cases `C1 u> C2` too.
Said handling consists of two parts:
* `C2 u<= (-1 %u C1)`. It just works. We only have to change `(X % C1) == C2` into `((X - C2) % C1) == 0` (both cases are checked in the combined sketch after the second proof below)
```
Name: (X % C1) == C2 -> (X - C2) * C3 <= C4   iff C2 u<= (-1 %u C1)
Pre: (C1 u>> countTrailingZeros(C1)) * C3 == 1 && C2 u<= (-1 %u C1)
%zz = and i8 C3, 0 ; trick alive into making C3 available in precondition
%o0 = urem i8 %x, C1
%r = icmp eq i8 %o0, C2
  =>
%zz = and i8 C3, 0 ; and silence it from complaining about said reg
%C4 = (-1 /u C1)
%n0 = sub i8 %x, C2
%n1 = mul i8 %n0, C3
%n2 = lshr i8 %n1, countTrailingZeros(C1) ; rotate right
%n3 = shl i8 %n1, ((8-countTrailingZeros(C1)) %u 8) ; rotate right
%n4 = or i8 %n2, %n3 ; rotate right
%is_tautologically_false = icmp ule i8 C1, C2
%C4_fixed = select i1 %is_tautologically_false, i8 -1, i8 %C4
%res = icmp ule i8 %n4, %C4_fixed
%r = xor i1 %res, %is_tautologically_false
```
https://rise4fun.com/Alive/m4P
https://rise4fun.com/Alive/SKrx
* `C2 u> (-1 %u C1)`. We also have to change `(X % C1) == C2` into `((X - C2) % C1) == 0`,
  and we have to decrement C4:
```
Name: (X % C1) == C2 -> (X - C2) * C3 <= C4   iff C2 u> (-1 %u C1)
Pre: (C1 u>> countTrailingZeros(C1)) * C3 == 1 && C2 u> (-1 %u C1)
%zz = and i8 C3, 0 ; trick alive into making C3 available in precondition
%o0 = urem i8 %x, C1
%r = icmp eq i8 %o0, C2
  =>
%zz = and i8 C3, 0 ; and silence it from complaining about said reg
%C4 = (-1 /u C1)-1
%n0 = sub i8 %x, C2
%n1 = mul i8 %n0, C3
%n2 = lshr i8 %n1, countTrailingZeros(C1) ; rotate right
%n3 = shl i8 %n1, ((8-countTrailingZeros(C1)) %u 8) ; rotate right
%n4 = or i8 %n2, %n3 ; rotate right
%is_tautologically_false = icmp ule i8 C1, C2
%C4_fixed = select i1 %is_tautologically_false, i8 -1, i8 %C4
%res = icmp ule i8 %n4, %C4_fixed
%r = xor i1 %res, %is_tautologically_false
```
https://rise4fun.com/Alive/d40
https://rise4fun.com/Alive/8cF
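
To double-check both non-zero cases, a combined standalone sketch (mine, not
from the patch), again for i8 and C1 = 6, where ctz(C1) = 1, C3 = 171,
-1 /u 6 = 42 and -1 %u 6 = 3:
```
#include <cassert>
#include <cstdint>

static uint8_t rotr8(uint8_t v, unsigned n) {
  n &= 7;
  return (uint8_t)((v >> n) | (v << ((8 - n) & 7)));
}

int main() {
  const unsigned C1 = 6, C3 = 171;
  const uint8_t MaxQ = 0xFF / C1;   // 42, i.e. -1 /u C1
  const uint8_t RemMax = 0xFF % C1; // 3,  i.e. -1 %u C1
  for (unsigned c2 = 0; c2 < C1; ++c2) {
    // The first proof keeps C4 = 42; the second (C2 u> -1 %u C1) decrements.
    uint8_t C4 = (c2 <= RemMax) ? MaxQ : (uint8_t)(MaxQ - 1);
    for (unsigned x = 0; x < 256; ++x) {
      uint8_t n0 = (uint8_t)(x - c2); // X - C2, wrapping mod 2^8
      bool lowered = rotr8((uint8_t)(n0 * C3), 1) <= C4;
      assert(lowered == (x % C1 == c2));
    }
  }
}
```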

I believe this concludes `x u% C1 ==/!= C2` lowering.
In fact, clang may now be better in this regard than gcc:
as can be seen from the `@t32_6_4` test, we do lower `x % 6 == 4`
via this pattern, while gcc does not: https://godbolt.org/z/XNU2z9
All the general alive proofs say this is legal,
and manual checking agrees: https://rise4fun.com/Alive/WA2

Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=35479 | PR35479 ]].

Reviewers: RKSimon, craig.topper, spatel

Reviewed By: RKSimon

Subscribers: nick, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70053
2019-11-22 15:22:42 +03:00
Roman Lebedev 3f46022e33
[Codegen] TargetLowering::prepareUREMEqFold(): `x u% C1 ==/!= C2` with tautological C1 u<= C2 (PR35479)
Summary:
This is a preparatory cleanup before I add more
of this fold to deal with comparisons with non-zero.

In essence, the current lowering is:
```
Name: (X % C1) == 0 -> X * C3 <= C4
Pre: (C1 u>> countTrailingZeros(C1)) * C3 == 1
%zz = and i8 C3, 0 ; trick alive into making C3 available in precondition
%o0 = urem i8 %x, C1
%r = icmp eq i8 %o0, 0
  =>
%zz = and i8 C3, 0 ; and silence it from complaining about said reg
%C4 = -1 /u C1
%n0 = mul i8 %x, C3
%n1 = lshr i8 %n0, countTrailingZeros(C1) ; rotate right
%n2 = shl i8 %n0, ((8-countTrailingZeros(C1)) %u 8) ; rotate right
%n3 = or i8 %n1, %n2 ; rotate right
%r = icmp ule i8 %n3, %C4
```
https://rise4fun.com/Alive/oqd

It kinda just works; really, no weird edge cases.
But it isn't all that great when comparing with non-zero.
In particular, given `(X % C1) == C2`, there will be problems
in the always-false tautological case where `C2 u>= C1`:
https://rise4fun.com/Alive/pH3

That case is tautological, always-false:
```
Name: (X % Y) u>= Y
%o0 = urem i8 %x, %y
%r = icmp uge i8 %o0, %y
  =>
%r = false
```
https://rise4fun.com/Alive/ofu

While we can't/shouldn't get such a tautological case normally,
we do deal with non-splat vectors, so unless we want to give up
in this case, we need to fix up/short-circuit such lanes.

There are three lowering variants:
1. We can blend between the computed result and the correct tautological result
```
Name: (X % C1) == C2 -> X * C3 <= C4 || false
Pre: (C2 == 0 || C1 u<= C2) && (C1 u>> countTrailingZeros(C1)) * C3 == 1
%zz = and i8 C3, 0 ; trick alive into making C3 available in precondition
%o0 = urem i8 %x, C1
%r = icmp eq i8 %o0, C2
  =>
%zz = and i8 C3, 0 ; and silence it from complaining about said reg
%C4 = -1 /u C1
%n0 = mul i8 %x, C3
%n1 = lshr i8 %n0, countTrailingZeros(C1) ; rotate right
%n2 = shl i8 %n0, ((8-countTrailingZeros(C1)) %u 8) ; rotate right
%n3 = or i8 %n1, %n2 ; rotate right
%is_tautologically_false = icmp ule i8 C1, C2
%res = icmp ule i8 %n3, %C4
%r = select i1 %is_tautologically_false, i1 0, i1 %res
```
https://rise4fun.com/Alive/PjT5
https://rise4fun.com/Alive/1KV

2. We can invert the comparison result
```
Name: (X % C1) == C2 -> X * C3 <= C4 || false
Pre: (C2 == 0 || C1 u<= C2) && (C1 u>> countTrailingZeros(C1)) * C3 == 1
%zz = and i8 C3, 0 ; trick alive into making C3 available in precondition
%o0 = urem i8 %x, C1
%r = icmp eq i8 %o0, C2
  =>
%zz = and i8 C3, 0 ; and silence it from complaining about said reg
%C4 = -1 /u C1
%n0 = mul i8 %x, C3
%n1 = lshr i8 %n0, countTrailingZeros(C1) ; rotate right
%n2 = shl i8 %n0, ((8-countTrailingZeros(C1)) %u 8) ; rotate right
%n3 = or i8 %n1, %n2 ; rotate right
%is_tautologically_false = icmp ule i8 C1, C2
%C4_fixed = select i1 %is_tautologically_false, i8 -1, i8 %C4
%res = icmp ule i8 %n3, %C4_fixed
%r = xor i1 %res, %is_tautologically_false
```
https://rise4fun.com/Alive/2xC
https://rise4fun.com/Alive/jpb5

3. We can expand into `and`/`or`:
https://rise4fun.com/Alive/WGn
https://rise4fun.com/Alive/lcb5

The blend variant (1) is likely the best, since we avoid having to load
the replacement from the constant pool. The `xor` variant (2) is second
best since it's still pretty general. I'm not adding the `and`/`or`
variants.
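
To make the fixup concrete, a small sketch (my own, not from the patch)
showing that variants 1 and 2 agree on an always-false lane, for i8 with
C1 = 6 (C3 = 171, C4 = 42) and a tautological C2 = 7:
```
#include <cassert>
#include <cstdint>

static uint8_t rotr8(uint8_t v, unsigned n) {
  n &= 7;
  return (uint8_t)((v >> n) | (v << ((8 - n) & 7)));
}

int main() {
  const uint8_t C1 = 6, C2 = 7, C3 = 171, C4 = 42;
  bool taut_false = C1 <= C2; // x % 6 == 7 can never hold
  for (unsigned x = 0; x < 256; ++x) {
    uint8_t rot = rotr8((uint8_t)(x * C3), 1);
    bool blend = taut_false ? false : (rot <= C4);                  // variant 1
    bool inverted = (rot <= (taut_false ? 0xFF : C4)) ^ taut_false; // variant 2
    assert(blend == inverted && blend == false);
  }
}
```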

Reviewers: RKSimon, craig.topper, spatel

Reviewed By: RKSimon

Subscribers: nick, hiraditya, xbolva00, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70051
2019-11-22 15:16:03 +03:00
Simon Pilgrim 5aaca2355e [X86] Updated strict fp scalar tests and add fp80 tests for D68857 2019-11-22 11:57:21 +00:00
QingShan Zhang a4cc895aee [PowerPC] Implement the vector extend sign instruction pattern match
Power9 has instructions to implement the semantics of SIGN_EXTEND_INREG for vector types.
Mark it as legal and add the match pattern.
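
A rough scalar model (my illustration, not from the patch) of what
SIGN_EXTEND_INREG computes, here for an i16 value kept in an i32 element;
the vector instructions do this independently per lane:
```
#include <cstdint>

int32_t sext_inreg_i16(int32_t lane) {
  return (int32_t)(int16_t)lane; // keep the low 16 bits, sign-extend to 32
}
```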

Differential Revision: https://reviews.llvm.org/D69601
2019-11-22 08:58:27 +00:00
Clement Courbet 88e205525c Revert "[DAGCombiner] Allow zextended load combines."
Breaks some bots.
2019-11-22 09:01:08 +01:00
Clement Courbet 036790f988 [DAGCombiner] Allow zextended load combines.
Summary: or(zext(load8(base)), zext(load8(base+1))) -> zext(load16 base)

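A minimal source-level example (mine, assuming a little-endian target) of
the shape this combine targets: two adjacent zero-extended byte loads OR'd
together become one 16-bit load.
```
#include <cstdint>

// or(zext(load8(base)), shl(zext(load8(base + 1)), 8)) -> zext(load16 base)
uint16_t load16_le(const uint8_t *base) {
  return (uint16_t)(base[0] | (base[1] << 8));
}
```
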
Reviewers: apilipenko, RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70487
2019-11-22 08:40:19 +01:00
czhengsz 29f6f9b2b2 [PowerPC] combine rlwinm+rlwinm to rlwinm
combine
x3 = rlwinm x3, 27, 5, 31
x3 = rlwinm x3, 19, 0, 12

to
x3 = rlwinm x3, 14, 0, 12

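A standalone check (mine, not backend code) that the fold is sound,
modelling rlwinm(SH, MB, ME) as rotate-left-by-SH then AND with the mask of
bits MB..ME in IBM numbering (bit 0 = MSB), sampling the 32-bit input space:
```
#include <cassert>
#include <cstdint>

// Assumes 0 <= sh < 32 and mb <= me.
static uint32_t rlwinm(uint32_t x, unsigned sh, unsigned mb, unsigned me) {
  uint32_t rot = (x << sh) | (x >> ((32 - sh) & 31));
  uint32_t mask = (0xFFFFFFFFu >> mb) & (0xFFFFFFFFu << (31 - me));
  return rot & mask;
}

int main() {
  for (uint64_t i = 0; i < (1ull << 32); i += 12345) {
    uint32_t x = (uint32_t)i;
    assert(rlwinm(rlwinm(x, 27, 5, 31), 19, 0, 12) == rlwinm(x, 14, 0, 12));
  }
}
```
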
Reviewed by: steven.zhang

Differential Revision: https://reviews.llvm.org/D70374
2019-11-22 00:00:33 -05:00
Wang, Pengfei 085d7847aa [X86] Add option 'disable-strictnode-mutation' for tests that respect
strict fp semantics. NFCI.
2019-11-22 12:26:55 +08:00
Craig Topper b29e5cdb7c [X86] Add test cases for most of the constrained fp libcalls with fp128.
Add explicit setOperationAction calls for some to match their
non-strict counterparts. This isn't required, but makes the code
self-documenting that we didn't forget about strict fp. I've
used LibCall instead of Expand since that's more explicitly what
we want.

Only lrint/llrint/lround/llround are missing now.
2019-11-21 18:17:59 -08:00
Craig Topper fc4020dbbe [X86] Mark fp128 FMA as LibCall instead of Expand. Add STRICT_FMA as well.
The Expand code would fall back to LibCall, but this makes it
more explicit.
2019-11-21 18:17:57 -08:00
Craig Topper 7696b99258 [LegalizeDAG][X86] Add support for turning STRICT_FADD/SUB/MUL/DIV into libcalls. Use it for fp128 on x86-64.
This requires a minor hack for f32/f64 strict fadd/fsub to avoid
turning those into libcalls.
2019-11-21 16:19:25 -08:00
Craig Topper 95f44cf44a [X86] Mark vector STRICT_FADD/STRICT_FSUB as Legal and add mutation to X86ISelDAGToDAG
This prevents LegalizeVectorOps from scalarizing them. We'll need
to remove the X86 mutation code when we add isel patterns.
2019-11-21 16:19:18 -08:00
Craig Topper 0cc12b8a83 [X86] Remove regcall calling convention from fp-strict-scalar.ll. Add 32-bit and 64-bit check prefixes.
The regcall was making 32-bit mode pass things in xmm registers
which made 32-bit and 64-bit more similar. But I have upcoming
patches that require them to be separated anyway.
2019-11-21 16:18:55 -08:00
David Tellenbach 75434366ce [AArch64] [FrameLowering] Allow conditional insertion of CFI instruction
Summary:
The insertion of most CFI instructions during AArch64 frame lowering can
be disabled (e.g. using the function attribute `nounwind`).

This patch enables conditional insertion for one more CFI instruction.

Reviewers: t.p.northover, ostannard

Reviewed By: ostannard

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70129
2019-11-22 00:27:41 +01:00
Craig Topper 1439059cc7 [X86] Change legalization action for f128 fadd/fsub/fmul/fdiv from Custom to LibCall.
The custom code just emits a libcall, but we can do the same
with generic code. The only difference is that the generic code
can form tail calls where the custom code couldn't. This is
responsible for the test changes.

This avoids needing to modify the Custom handling for strict fp.
2019-11-21 11:44:29 -08:00
Craig Topper fea8288c17 [X86] Add test case for f128 fma. NFC
This should be turned into a libcall to fmal. We already do it
correctly, but we had no test to confirm.
2019-11-21 11:44:27 -08:00
Bjorn Pettersson 898de30291 [BranchFolding] Fix PR43964 about branch folder not being debug invariant
Summary:
The fix in BranchFolder for debug-invariance problems,
done in commit ec32dff0b0, actually introduced some new
problems with debug invariance.

Before that patch ComputeCommonTailLength would move iterators
back, past debug instructions, in order to make ProfitableToMerge
make consistent answers "when one block differs from the other
only by whether debugging pseudos are present at the beginning".
But the changes in ec32dff0b0 undid that by moving the iterators
forward again.

This patch refactors ComputeCommonTailLength. The function was
really complex, considering that the SkipTopCFIAndReturn part
always moved the iterators forward to the first "real" instruction
in the found tail after ec32dff0b0.

The patch also restores the logic to "back past possible debugging
pseudos at beginning of block" to make sure ProfitableToMerge
gives consistent answers independent of DBG_VALUE instructions
before the tail. That is now done by ProfitableToMerge instead of
being hidden as a side-effect in ComputeCommonTailLength.

Reviewers: probinson, yechunliang, jmorse

Reviewed By: jmorse

Subscribers: Orlando, mehdi_amini, dexonsmith, aprantl, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70091
2019-11-21 18:13:32 +01:00
Krzysztof Parzyszek e8d1578131 [Hexagon] Remove incorrect intrinsic definition and invalid testcase
The intrinsic int_hexagon_S2_asr_i_vh was mapped to S2_asr_r_vh, which
is wrong. The testcase vasrh.select.ll was using an invalid immediate
for that intrinsic. This is not a proper testcase, since at the MIR
level such use of this intrinsic should never appear.

Together with 824b25fc02, this completes the fix for llvm.org/PR44090.
2019-11-21 09:18:15 -06:00
Miloš Stojanović 6ba5cbf3ea [mips] Add a 'generic' Mips CPU
Having a generic CPU removes a warning when creating a subtarget without
the CPU being explicitly specified.

Differential Revision: https://reviews.llvm.org/D70490
2019-11-21 15:17:21 +01:00
Clement Courbet b25f985848 [DAGCombiner] Use the right thumbv7meb triple for ARM big-endian test. 2019-11-21 15:07:35 +01:00
Clement Courbet 12a88f0128 [DAGCombiner] Add tests for thumb load-combine. 2019-11-21 13:12:12 +01:00
Tim Corringham 6821a3ccd6 [AMDGPU] Add attribute for target loop unroll threshold default
Summary:
Add a function attribute to allow the target specific default loop unroll threshold
to be specified on a per-function basis. This allows a front-end to give guidance
where it has insight that is not available to the back-end, while still allowing the
target specific heuristics to also have an effect.

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D68873
2019-11-21 09:47:28 +00:00
Craig Topper 27da569a7a [X86] Fix i16->f128 sitofp to promote the i16 to i32 before trying to form a libcall.
Previously one of the test cases added here gave an error.
2019-11-20 17:09:32 -08:00
Craig Topper 5f3bf5967b [X86] Fix f128->i16 fptosi to promote the i16 to i32 before trying to form a libcall.
Previously one of the test cases added here gave an error.
2019-11-20 17:09:31 -08:00
Yonghong Song a0841dfe85 [BPF] Fix a bug in peephole optimization
One of the current peephole optimizations is to remove SLL/SRA if
the subregister has been zero extended. This phase has two bugs
and one limitation.

First, for the physical subregister used in a pseudo COPY insn
like below, it permits an incorrect optimization.
    %0:gpr32 = COPY $w0
    ...
    %4:gpr = MOV_32_64 %0:gpr32
    %5:gpr = SLL_ri %4:gpr(tied-def 0), 32
    %6:gpr = SRA_ri %5:gpr(tied-def 0), 32
The $w0 could come from the return value of a previous function call,
and its upper 32 bits might contain non-zero values.
The same applies to function arguments.

Second, the current code may permit removing SLL/SRA like below:
    %0:gpr32 = COPY $w0
    %1:gpr32 = COPY %0:gpr32
    ...
    %4:gpr = MOV_32_64 %1:gpr32
    %5:gpr = SLL_ri %4:gpr(tied-def 0), 32
    %6:gpr = SRA_ri %5:gpr(tied-def 0), 32
The reason is that it does not follow the def-use chain to skip all
intermediate 32-bit-to-32-bit COPY instructions.

The current implementation is also very conservative for PHI
instructions. If any PHI insn component is another PHI or COPY insn,
it will just keep the SLL/SRA.

This patch fixes the issues as follows:
 - During def/use chain traversal, if any physical register is read,
   SLL/SRA will be preserved as these physical registers are mostly
   from function return values or current function arguments.
 - Recursively visit all COPY and PHI instructions.
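
A simplified sketch of that traversal (hypothetical code using generic
MachineInstr APIs, not the actual BPF peephole pass):
```
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"

using namespace llvm;

// Return true if Reg is (transitively, through COPYs/PHIs) read from a
// physical register, in which case the SLL/SRA pair must be preserved.
// (Real code must also guard against PHI cycles with a visited set.)
static bool readsPhysReg(Register Reg, MachineRegisterInfo &MRI) {
  if (Reg.isPhysical())
    return true; // e.g. $w0: the upper 32 bits are unknown
  MachineInstr *Def = MRI.getVRegDef(Reg);
  if (!Def)
    return true; // no def found: stay conservative
  if (Def->isCopy())
    return readsPhysReg(Def->getOperand(1).getReg(), MRI);
  if (Def->isPHI()) // PHI operands come in (reg, block) pairs
    for (unsigned I = 1, E = Def->getNumOperands(); I < E; I += 2)
      if (readsPhysReg(Def->getOperand(I).getReg(), MRI))
        return true;
  return false; // a genuine 32-bit def; the real pass checks it zero-extends
}
```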
2019-11-20 15:19:59 -08:00
Piotr Sobczak 4a801170f3 [AMDGPU][SILoadStoreOptimizer] Merge TBUFFER loads/stores
Summary: Extend SILoadStoreOptimizer to merge tbuffer loads and stores.

Reviewers: nhaehnle

Reviewed By: nhaehnle

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D69794
2019-11-20 22:59:30 +01:00
Krzysztof Parzyszek 824b25fc02 [Hexagon] Fix two testcase errors
This fixes issues discovered in https://reviews.llvm.org/D63973.
2019-11-20 15:06:35 -06:00
Stanislav Mekhanoshin 899cdf95d9 [AMDGPU] Fixed mfma test check. NFC. 2019-11-20 12:33:12 -08:00
Michael Liao 4a308d302c [AMDGPU] Keep consistent check of legal addressing mode.
Summary:
- Add test cases for GFX10, which has a narrower offset range compared
  to GFX9.

Reviewers: rampitec, arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70473
2019-11-20 15:08:17 -05:00
Ulrich Weigand 97743089bf [SystemZ] Avoid mixing strict and non-strict FP operations in tests
This is to prepare for having the IR verifier reject mixed functions.
Note that fp-strict-mul-02.ll and fp-strict-mul-04.ll still remain
to be fixed.
2019-11-20 19:51:30 +01:00
Ulrich Weigand ac37755c60 [SystemZ] Use fneg in test cases
Now that we have fneg, prefer using it over "fsub -0.0, ...".
This helps in particular with strict FP tests, as fneg does
not raise any exceptions.
2019-11-20 19:08:27 +01:00
Xing Xue 5665fc91fe [AIX][XCOFF] Add support for generating assembly code for one-byte mergeable strings
This patch adds support for generating assembly code for one-byte mergeable strings.

Generating assembly code for multi-byte mergeable strings and the `XCOFF` object code for mergeable strings will be supported later.

Reviewers: hubert.reinterpretcast, jasonliu, daltenty, sfertile, DiggerLin, Xiangling_L

Reviewed by: daltenty

Subscribers: wuzish, nemanjai, hiraditya, kbarton, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70310
2019-11-20 11:26:49 -05:00
Xiangling Liao ca33727abe [AIX] Lowering jump table, constant pool and block address in asm
This patch lowers jump tables, constant pools and block addresses in assembly:
1. On AIX, jump table indices are always relative;
2. Put CPI and JTI into the ReadOnlySection until we support unique data sections;
3. Create the temp symbol for the block address symbol;
4. Update MIR testcases and add the related assembly parts.

Differential Revision: https://reviews.llvm.org/D70243
2019-11-20 10:27:15 -05:00
David Zarzycki 257acbf6ae
[SelectionDAG] Combine U{ADD,SUB}O diamonds into {ADD,SUB}CARRY
Summary:
Convert (uaddo (uaddo x, y), carryIn) into addcarry x, y, carryIn if and only if the carry flags of the two uaddo nodes are merged via OR or XOR.

Work remaining: match ADD, etc.

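A hand-rolled wide add of the shape this targets (my example, not from the
patch): the two uaddo nodes' carry bits merge via OR into the carry-out.
```
#include <cstdint>

struct U128 { uint64_t Lo, Hi; };

U128 add128(U128 A, U128 B, bool CarryIn) {
  uint64_t S = A.Lo + B.Lo; // first uaddo...
  bool C1 = S < A.Lo;       // ...and its carry flag
  uint64_t T = S + CarryIn; // second uaddo...
  bool C2 = T < S;          // ...and its carry flag
  bool CarryOut = C1 | C2;  // the OR merging the diamond
  return {T, A.Hi + B.Hi + CarryOut};
}
```
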
Reviewers: craig.topper, RKSimon, spatel, niravd, jonpa, uweigand, deadalnix, nikic, lebedev.ri, dmgreen, chfast

Reviewed By: lebedev.ri

Subscribers: chfast, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70079
2019-11-20 16:25:42 +02:00
Anna Welker 96e94e37e3 [ARM][MVE] Select vqabs
Adds a pattern to ARMInstrMVE.td to use a VQABS
instruction if an equivalent multi-instruction
construct is found.

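A scalar model (my sketch, not from the patch) of the per-lane semantics,
whose vectorized compare/negate/select form is the kind of multi-instruction
construct the new pattern replaces:
```
#include <cstdint>

int8_t vqabs_lane(int8_t x) {
  if (x == INT8_MIN)
    return INT8_MAX;             // saturate instead of wrapping
  return x < 0 ? (int8_t)-x : x;
}
```
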
Differential Revision: https://reviews.llvm.org/D70181
2019-11-20 13:58:38 +00:00
Clement Courbet 23c7679208 [CodeGen][NFC] Regenerate load-combine test with update_llc_test.
To prepare for D27861.
2019-11-20 13:27:31 +01:00
Sameer Sahasrabuddhe 52c5014da0 [AMDGPU] add support for hostcall buffer pointer as hidden kernel argument
Hostcall is a service that allows a kernel to submit requests to the
host using shared buffers, and block until a response is
received. This will eventually replace the shared buffer currently
used for printf, and repurposes the same hidden kernel argument. This
change introduces a new ValueKind in the HSA metadata to represent the
hostcall buffer.

Differential Revision: https://reviews.llvm.org/D70038
2019-11-20 15:53:55 +05:30
Austin Kerbow f3225f2abe AMDGPU/GlobalISel: Legalize FDIV64
Reviewers: arsenm

Reviewed By: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, rovka, dstuttard, tpr, t-tye, hiraditya, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70403
2019-11-19 21:02:27 -08:00
Reid Kleckner 606a2bd621 [musttail] Don't forward AL on Win64
AL is only used for varargs on SysV platforms. Don't forward it on
Windows. This allows control flow guard to set up an extra hidden
parameter in RAX, as described in PR44049.

This also has the effect of freeing up RAX for use in virtual member
pointer thunks, which may also be a nice little code size improvement on
Win64.

Fixes PR44049

Reviewers: ajpaverd, efriedma, hans

Differential Revision: https://reviews.llvm.org/D70413
2019-11-19 16:54:00 -08:00
Craig Topper c4b41e8d1d [LegalizeDAG][X86] Enable STRICT_FP_TO_SINT/UINT to be promoted
Differential Revision: https://reviews.llvm.org/D70220
2019-11-19 16:14:37 -08:00
Craig Topper 85589f8077 [X86] Add custom type legalization and lowering for scalar STRICT_FP_TO_SINT/UINT
This is a first pass at Custom lowering for these operations. I also updated some of the vector code where it was obviously easy and straightforward. More work is needed in follow-ups.

This enables these operations to be handled with X87 where special rounding control adjustments are needed to perform a truncate.

Still need to fix Promotion in the target independent code in LegalizeDAG.
llrint/llround are split into a separate test file because we can't make a strict libcall properly yet either, and we need to do that when i64 isn't a legal type.

This does not include any isel support. So we still rely on the mutation in SelectionDAGIsel to remove the strict from this stuff later. Except for the X87 stuff which goes through custom nodes that already had chains.

Differential Revision: https://reviews.llvm.org/D70214
2019-11-19 16:05:22 -08:00
Amara Emerson cf823ce4ad [AArch64] Fix MIR test instruction to not have invalid operand.
In anticipation of an improved verifier in D63973.
2019-11-19 13:40:11 -08:00
Evgenii Stepanov 2535fe5ad3 MTE: add more unchecked instructions.
Summary:
In particular, 1- and 2-byte loads and stores ignore the pointer tag
when using SP as the base register.

Reviewers: pcc, ostannard

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70341
2019-11-19 11:19:53 -08:00
David Green 882f23caea [ARM] MVE interleaving loads and stores.
Now that we have the intrinsics, we can add VLD2/4 and VST2/4 lowering
for MVE. This works the same way as Neon, recognising the load/shuffles
combination and converting them into intrinsics in a pre-isel pass,
which just calls getMaxSupportedInterleaveFactor, lowerInterleavedLoad
and lowerInterleavedStore.

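Typical C-level source (my example) whose vectorized load/shuffle form is
what gets recognized and lowered to VLD2 here:
```
#include <cstdint>

void deinterleave(const int16_t *src, int16_t *even, int16_t *odd, int n) {
  for (int i = 0; i < n; i++) {
    even[i] = src[2 * i];     // lane 0 of each pair
    odd[i]  = src[2 * i + 1]; // lane 1 of each pair
  }
}
```
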
The main difference from Neon is that we do not have a VLD3 instruction.
Otherwise most of the code works very similarly, with just some minor
differences in the form of the intrinsics to work around. VLD3 is
disabled by making isLegalInterleavedAccessType return false for those
cases.

We may need some other future adjustments, such as VLD4 taking up half
the available registers and so perhaps costing more. This patch should
get the basics in, though.

Differential Revision: https://reviews.llvm.org/D69392
2019-11-19 18:37:30 +00:00
David Green 411bfe476b [ARM] Add and update a lot of VLDn tests. NFC 2019-11-19 18:37:30 +00:00
jasonliu c9edaa828e [AIX][XCOFF] Write Function descriptors and TOC base to data section
This patch implements writing function descriptors and the TOC base into
the data section, and also adds function descriptors (both csect and label)
and TOC base symbols to the symbol table.
2019-11-19 16:11:00 +00:00
Simon Tatham 254b4f2500 [ARM,MVE] Add intrinsics for scalar shifts.
This fills in the small family of MVE intrinsics that have nothing to
do with vectors: they implement bit-shift operations on 32- or 64-bit
values held in one or two general-purpose registers. Most of these
shift operations saturate if shifting left, and round to nearest if
shifting right, although LSLL and ASRL behave like ordinary shifts.

When these instructions take a variable shift count in a register,
they pay attention to its sign, so that (for example) LSLL or UQRSHLL
will shift left if given a positive number but right if given a
negative one. That makes even LSLL and ASRL different enough from
standard LLVM IR shift semantics that I couldn't see any better
alternative than to simply model the whole family as a set of
MVE-specific IR intrinsics.

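A rough scalar model (an assumption on my part, not the ACLE definition) of
the sign-directed behaviour described above, for a variable-count LSLL:
```
#include <cstdint>

uint64_t lsll_model(uint64_t v, int8_t sh) {
  if (sh >= 0)
    return sh >= 64 ? 0 : v << sh; // non-negative count: shift left
  return -sh >= 64 ? 0 : v >> -sh; // negative count: shift right instead
}
```
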
(The //immediate// forms of LSLL and ASRL, on the other hand, do
behave exactly like a standard IR shift of a 64-bit value. In fact,
those forms don't have ACLE intrinsics defined at all, because you can
just write an ordinary C shift operation if you want one of those.)

The 64-bit shifts have to be instruction-selected in C++, because they
deliver two output values. But the 32-bit ones are simple enough that
I could write a DAG isel pattern directly into each Instruction
record.

Reviewers: ostannard, MarkMurrayARM, dmgreen

Reviewed By: dmgreen

Subscribers: kristof.beyls, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D70319
2019-11-19 14:47:29 +00:00
Matt Arsenault db0ed3e429 AMDGPU: Refactor treatment of denormal mode
Start moving towards treating this as a property of the calling
convention, and not the subtarget. The default denormal mode should
not be part of the subtarget, and should be moved into a separate
function attribute.

This patch is still NFC. The denormal mode remains as a subtarget
feature for now, but make the necessary changes to switch to using an
attribute.
2019-11-19 19:55:43 +05:30
Matt Arsenault ea23b6428b AMDGPU: Be explicit about denormal mode in MIR tests
Start checking the machine function in GlobalISel instead of the
target directly.

This temporarily breaks fcanonicalize selection in GlobalISel.
2019-11-19 19:55:43 +05:30