Commit Graph

30512 Commits

Author SHA1 Message Date
Vedant Kumar 4fe81b6b6a Revert "[DebugInfo] Shorten legacy [s|z]ext dwarf expressions"
This reverts commit 2ce36ebca5. It depends
on https://reviews.llvm.org/D89838, which needs to be reverted.
2020-10-28 18:57:17 -07:00
Derek Schuff 77973f8dee [WebAssembly] Add support for DWARF type units
Since Wasm comdat sections work similarly to ELF, we can use that mechanism
to eliminate duplicate dwarf type information in the same way.

Differential Revision: https://reviews.llvm.org/D88603
2020-10-28 17:41:22 -07:00
Amy Huang 7669f3c0f6 Recommit "[CodeView] Emit static data members as S_CONSTANTs."
We used to only emit static const data members in CodeView as
S_CONSTANTS when they were used; this patch makes it so they are always emitted.

This changes CodeViewDebug.cpp to find the static const members from the
class debug info instead of creating DIGlobalVariables in the IR
whenever a static const data member is used.

Bug: https://bugs.llvm.org/show_bug.cgi?id=47580

Differential Revision: https://reviews.llvm.org/D89072

This reverts commit 504615353f.
2020-10-28 16:35:59 -07:00
Gaurav Jain f719fd7ade [NFC] Use [MC]Register in CSE & LICM
Differential Revision: https://reviews.llvm.org/D90327
2020-10-28 15:53:26 -07:00
Alok Kumar Sharma a6dd01afa3 [DebugInfo] Support for DW_TAG_generic_subrange
This is needed to support Fortran assumed rank arrays, which
have runtime rank.

  Summary:
Fortran assumed rank arrays have dynamic rank. The DWARF tag
DW_TAG_generic_subrange is needed to support that.

  Testing:
unit test cases added (hand-written)
check llvm
check debug-info

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D89218
2020-10-29 01:34:15 +05:30
Aditya Nandakumar bed8394047 [GISel]: Few InsertVecElt combines
https://reviews.llvm.org/D88060

This adds the following combines
1) build_vector formation from insert_vec_elts
2) insert_vec_elts (build_vector) -> build_vector
2020-10-28 12:27:07 -07:00
Mircea Trofin f0a98ad820 [NFC] Use Register in RegisterPressure APIs
Some related changes as well.

Differential Revision: https://reviews.llvm.org/D90268
2020-10-28 12:14:08 -07:00
Vedant Kumar 2ce36ebca5 [DebugInfo] Shorten legacy [s|z]ext dwarf expressions
Take advantage of the emitConstu helper to emit slightly shorter dwarf
expressions to implement legacy [s|z]ext operations.
2020-10-28 12:06:02 -07:00
Vedant Kumar 9905346221 [DebugInfo] Fix legacy ZExt emission when FromBits >= 64 (PR47927)
Fix an out-of-bounds shift in emitLegacyZExt by using a slightly more
complicated dwarf expression to create the zext mask.

This addresses a UBSan diagnostic seen when compiling compiler-rt
(llvm.org/PR47927).

rdar://70307714

Differential Revision: https://reviews.llvm.org/D89838
2020-10-28 12:06:02 -07:00
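A minimal plain-C++ illustration of the shift hazard described above (this is not the DWARF expression the patch emits, just the underlying pitfall it works around): a mask built as `(1 << FromBits) - 1` becomes an out-of-bounds shift once FromBits reaches 64.

```cpp
#include <cassert>
#include <cstdint>

// Naive mask: undefined behavior once FromBits == 64, since shifting a
// 64-bit value by 64 is out of bounds.
uint64_t zextMaskNaive(unsigned FromBits) {
  return (UINT64_C(1) << FromBits) - 1; // UB for FromBits == 64
}

// Safer form: shift an all-ones value down instead; the shift amount is
// always in [0, 63] for FromBits in [1, 64].
uint64_t zextMaskSafe(unsigned FromBits) {
  assert(FromBits >= 1 && FromBits <= 64 && "unsupported width");
  return UINT64_MAX >> (64 - FromBits);
}
```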
Matt Arsenault b9c21d43bb RegAlloc: Clear isSSA
The MIR parser may infer SSA, so -run-pass=regallocgreedy would hit a
verifier error after multiple vreg defs are added.
2020-10-28 12:02:16 -04:00
Djordje Todorovic 6384378582 [NFC][IntrRefLDV] Improve the Value printing
Basically, this just improves the dump of the Value stored within a location.

If the defining instruction number is zero, it means it is "live-in".

Before the patch:

  ESI --> bb 0 inst 0 loc ESI

After:

  ESI --> Value{bb: 0, inst: live-in, loc: ESI}

This is an NFC.

Differential Revision: https://reviews.llvm.org/D90309
2020-10-28 07:39:08 -07:00
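For illustration only, a tiny standalone C++ sketch of the new dump format; the struct below is a made-up stand-in, not the actual InstrRefBasedLDV value type.

```cpp
#include <cstdio>

// Illustrative stand-in for a machine value: block number, defining
// instruction number (0 means the value is live into the block), and the
// location it currently lives in.
struct ValueID {
  unsigned Block;
  unsigned Inst;
  const char *Loc;
};

void printValue(const ValueID &V) {
  if (V.Inst == 0)
    std::printf("Value{bb: %u, inst: live-in, loc: %s}\n", V.Block, V.Loc);
  else
    std::printf("Value{bb: %u, inst: %u, loc: %s}\n", V.Block, V.Inst, V.Loc);
}

int main() {
  printValue({0, 0, "ESI"}); // prints: Value{bb: 0, inst: live-in, loc: ESI}
  return 0;
}
```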
Simon Pilgrim f53d7f55f1 [DAG] Move canFoldInAddressingMode before foldBinOpIntoSelect. NFC.
Reduces the diff in D90113.
2020-10-28 12:16:05 +00:00
Luqman Aden 4c0a016927 Rename EHPersonality::MSVC_Win64SEH to EHPersonality::MSVC_TableSEH. NFC.
The types of SEH aren't x86(-32) vs x64 but rather stack-based exception chaining
vs table-based exception handling. x86-32 is the only arch for which Windows
uses the former. 32-bit ARM would use what is called Win64SEH today, which
is a bit confusing so instead let's just rename it to be a bit more clear.

Reviewed By: compnerd, rnk

Differential Revision: https://reviews.llvm.org/D90117
2020-10-27 23:22:13 -07:00
Derek Schuff 44eea0b1a7 Revert "[WebAssembly] Add support for DWARF type units"
This reverts commit bcb8a119df.
2020-10-27 17:57:32 -07:00
Derek Schuff bcb8a119df [WebAssembly] Add support for DWARF type units
Since Wasm comdat sections work similarly to ELF, we can use that mechanism
to eliminate duplicate dwarf type information in the same way.

Differential Revision: https://reviews.llvm.org/D88603
2020-10-27 17:13:41 -07:00
Nicolai Hähnle e025d09b21 Revert multiple patches based on "Introduce CfgTraits abstraction"
These logically belong together since it's a base commit plus
followup fixes to less common build configurations.

The patches are:

Revert "CfgInterface: rename interface() to getInterface()"

This reverts commit a74fc48158.

Revert "Wrap CfgTraitsFor in namespace llvm to please GCC 5"

This reverts commit f2a06875b6.

Revert "Try to make GCC5 happy about the CfgTraits thing"

This reverts commit 03a5f7ce12.

Revert "Introduce CfgTraits abstraction"

This reverts commit c0cdd22c72.
2020-10-27 20:33:30 +01:00
Amy Huang 504615353f Revert "[CodeView] Emit static data members as S_CONSTANTs."
Seems like there's an assert in here that we shouldn't be running into.

This reverts commit 515973222e.
2020-10-27 11:29:58 -07:00
Vedant Kumar 5a3ef55a52 [Utils] Skip RemoveRedundantDbgInstrs in MergeBlockIntoPredecessor (PR47746)
This patch changes MergeBlockIntoPredecessor to skip the call to
RemoveRedundantDbgInstrs, in effect partially reverting D71480 due to
some compile-time issues spotted in LoopUnroll and SimplifyCFG.

The call to RemoveRedundantDbgInstrs appears to have changed the
worst-case behavior of the merging utility. Loosely speaking, it seems
to have gone from O(#phis) to O(#insts).

It might not be possible to mitigate this by scanning a block to
determine whether there are any debug intrinsics to remove, since such a
scan costs O(#insts).

So: skip the call to RemoveRedundantDbgInstrs. There's surprisingly
little fallout from this, and most of it can be addressed by doing
RemoveRedundantDbgInstrs later. The exception is (the block-local
version of) SimplifyCFG, where it might just be too expensive to call
RemoveRedundantDbgInstrs.

Differential Revision: https://reviews.llvm.org/D88928
2020-10-27 10:12:59 -07:00
Djordje Todorovic cca049ad2b [NFC][IntrRefLDV] Some code clean up
While reading the source code, I found some minor nits:
  -Use using instead of typedef
  -Fix a comment
  -Refactor

Differential Revision: https://reviews.llvm.org/D90155
2020-10-27 05:31:24 -07:00
Sven van Haastregt 5d03080092 [TargetLowering] Add i1 condition for bit comparison fold
For i1 types, boolean true is represented identically regardless of
the boolean content, so we can allow optimizations that otherwise
would not be correct for booleans with true represented as negative
one.

Patch by Erik Hogeman.

Differential Revision: https://reviews.llvm.org/D90145
2020-10-27 12:22:20 +00:00
Med Ismail Bennani a3aea0193d [llvm/DebugInfo] Simplify DW_OP_implicit_value condition (NFC)
Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
2020-10-27 11:25:19 +01:00
Gaurav Jain 17cdba61d4 [NFC] Use [MC]Register in RegAllocPBQP & RegisterCoalescer
Differential Revision: https://reviews.llvm.org/D90008
2020-10-26 17:13:32 -07:00
Rahman Lavaee 0b2f4cdf2b Explicitly check for entry basic block, rather than relying on MachineBasicBlock::pred_empty.
Sometimes in unoptimized code, we have dangling unreachable basic blocks with no predecessors. Basic block sections should be emitted for those as well. Without this patch, the included test fails with a fatal error in `AsmPrinter::emitBasicBlockEnd`.

Reviewed By: tmsriram

Differential Revision: https://reviews.llvm.org/D89423
2020-10-26 16:15:56 -07:00
Amy Huang 515973222e [CodeView] Emit static data members as S_CONSTANTs.
We used to only emit static const data members in CodeView as
S_CONSTANTS when they were used; this patch makes it so they are always emitted.

I changed CodeViewDebug.cpp to find the static const members from the
class debug info instead of creating DIGlobalVariables in the IR
whenever a static const data member is used.

Bug: https://bugs.llvm.org/show_bug.cgi?id=47580

Differential Revision: https://reviews.llvm.org/D89072
2020-10-26 15:30:35 -07:00
Peter Waller 5b742a0c10 [SVE][CodeGen][DAGCombiner] Fix TypeSize warning in redundant store elimination
The modified code in visitSTORE was missing a scalable vector check, and still
using the now deprecated implicit cast of TypeSize to uint64_t through the
overloaded operator. This patch fixes these issues.

This brings the logic in line with the comment on the context line immediately
above the added precondition.

Add a test in sve-redundant-store.ll that the warning is not triggered.

Differential Revision: https://reviews.llvm.org/D89701
2020-10-26 16:37:48 +00:00
Peter Waller 6536d6040f Revert "[SVE][CodeGen][DAGCombiner] Fix TypeSize warning in redundant store elimination"
This reverts commit 4604441386.

Reverting because it was not the intended version of the patch, which
follows this patch.
2020-10-26 16:37:00 +00:00
Peter Waller 4604441386 [SVE][CodeGen][DAGCombiner] Fix TypeSize warning in redundant store elimination
The modified code in visitSTORE was missing a scalable vector check, and still
using the now deprecated implicit cast of TypeSize to uint64_t through the
overloaded operator. This patch fixes these issues.

This brings the logic in line with the comment on the context line immediately
above the added precondition.

Add a test in Redundantstores.ll that the warning is not triggered.
2020-10-26 16:23:42 +00:00
Djordje Todorovic a64b2c9366 [NFC][InstrRefLDV] Fix a typo 2020-10-26 04:04:16 -07:00
Florian Hahn b2bec7cece [AsmPrinter] Add per BB instruction mix remark.
This patch adds a remark that provides counts for each opcode per basic block.

A snippet of the generated information can be seen below.

The current implementation uses the target-specific opcode for the counts. For example, on AArch64 this means we currently get 2 entries for `add` instructions if the block contains 32 and 64 bit adds. Similarly, immediate versions are treated differently.

Unfortunately there seems to be no convenient way to get only the mnemonic part of the instruction as a string AFAIK. This could be improved in the future.

```
--- !Analysis
Pass:            asm-printer
Name:            InstructionMix
DebugLoc:        { File: arm64-instruction-mix-remarks.ll, Line: 30, Column: 30 }
Function:        foo
Args:
  - String:          'BasicBlock: '
  - BasicBlock:      else
  - String:          "\n"
  - String:          INST_MADDWrrr
  - String:          ': '
  - INST_MADDWrrr:   '2'
  - String:          "\n"
  - String:          INST_MOVZWi
  - String:          ': '
  - INST_MOVZWi:     '1'
```

Reviewed By: anemet, thegameg, paquette

Differential Revision: https://reviews.llvm.org/D89892
2020-10-26 09:25:45 +00:00
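A rough sketch of how such per-block opcode counts could be gathered (an assumed shape, not the actual AsmPrinter implementation); it relies on TargetInstrInfo::getName, which returns the target-specific opcode names that appear in the remark above (the remark keys add an INST_ prefix).

```cpp
#include "llvm/ADT/StringMap.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/TargetInstrInfo.h"

// Count how often each target-specific opcode appears in a basic block.
llvm::StringMap<unsigned> countOpcodes(const llvm::MachineBasicBlock &MBB,
                                       const llvm::TargetInstrInfo &TII) {
  llvm::StringMap<unsigned> Counts;
  for (const llvm::MachineInstr &MI : MBB)
    if (!MI.isMetaInstruction()) // skip DBG_VALUE and friends
      ++Counts[TII.getName(MI.getOpcode())];
  return Counts;
}
```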
David Green 61bc18de0b [Schedule] Add a MultiHazardRecognizer
This adds a MultiHazardRecognizer and starts to make use of it in the
ARM backend. The idea of the class is to allow multiple independent
hazard recognizers to be added to a single base MultiHazardRecognizer,
allowing them to all work in parallel without requiring them to be
chained into subclasses. They can then be added or not based on cpu or
subtarget features, which will become useful in the ARM backend once
more hazard recognizers are being used for various things.

This also renames ARMHazardRecognizer to ARMHazardRecognizerFPMLx in the
process, to more clearly explain what that recognizer is designed for.

Differential Revision: https://reviews.llvm.org/D72939
2020-10-26 08:06:17 +00:00
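A simplified sketch of the composition idea, using an assumed interface rather than the real ScheduleHazardRecognizer API: child recognizers are consulted independently, and any reported hazard wins.

```cpp
#include <memory>
#include <vector>

struct HazardRecognizer {
  enum HazardType { NoHazard, Hazard };
  virtual ~HazardRecognizer() = default;
  virtual HazardType getHazardType(const void *Unit) = 0;
};

// Forwards queries to any number of independent recognizers; they do not
// need to be chained into subclasses, and can be added per-subtarget.
struct MultiHazardRecognizer : HazardRecognizer {
  std::vector<std::unique_ptr<HazardRecognizer>> Recognizers;

  void add(std::unique_ptr<HazardRecognizer> R) {
    Recognizers.push_back(std::move(R));
  }

  HazardType getHazardType(const void *Unit) override {
    for (auto &R : Recognizers)
      if (R->getHazardType(Unit) == Hazard)
        return Hazard; // any child hazard blocks scheduling
    return NoHazard;
  }
};
```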
Simon Pilgrim ce356e1546 [DAG] Add BuildVectorSDNode::getRepeatedSequence helper to recognise multi-element splat patterns
Replace the X86 specific isSplatZeroExtended helper with a generic BuildVectorSDNode method.

I've just used this to simplify the X86ISD::BROADCASTM lowering so far (and remove isSplatZeroExtended), but we should be able to use this in more places to lower to complex broadcast patterns.

Differential Revision: https://reviews.llvm.org/D87930
2020-10-24 12:23:09 +01:00
Simon Pilgrim 62b17a7697 [LegalizeTypes] Legalize vector rotate operations
Lower vector rotate operations as long as the legalization occurs outside of LegalizeVectorOps.

This fixes https://bugs.llvm.org/show_bug.cgi?id=47320

Patch By: @rsanthir.quic (Ryan Santhirarajan)

Differential Revision: https://reviews.llvm.org/D89497
2020-10-24 11:30:32 +01:00
Med Ismail Bennani 64c4dac60e [llvm/DebugInfo] Emit DW_OP_implicit_value when tuning for LLDB
This patch enables emitting DWARF `DW_OP_implicit_value` opcode when
tuning debug information for LLDB (`-debugger-tune=lldb`).

This will also propagate to Darwin platforms, since they use LLDB tuning
as a default.

rdar://67406059

Differential Revision: https://reviews.llvm.org/D90001

Signed-off-by: Med Ismail Bennani <medismail.bennani@gmail.com>
2020-10-24 06:45:33 +02:00
Mehdi Amini 8f492f6467 Remove unused verifyRegStateMapping() function in RegAllocFast (NFC)
This fixes compiler warning when building with assertions.
2020-10-24 00:36:51 +00:00
Cameron McInally a1cc274cb3 [SVE] Lower fixed length VECREDUCE_SEQ_FADD operation
Differential Revision: https://reviews.llvm.org/D89162
2020-10-23 16:24:02 -05:00
Nick Desaulniers b7926ce6d7 [IR] add fn attr for no_stack_protector; prevent inlining on mismatch
It's currently ambiguous in IR whether the source language explicitly
did not want a stack protector (in C, via the function attribute
no_stack_protector) or simply doesn't care for any given function.

Code that manipulates the stack via inline assembly or that has to set
up its own stack canary (such as the Linux kernel) often needs to avoid
stack protectors in certain functions. In such cases, we've been bitten
by numerous bugs where a callee with a stack protector is inlined into
an __attribute__((__no_stack_protector__)) caller, which generally
breaks the caller's assumptions about not having a stack protector.
LTO exacerbates the issue.

While developers can avoid this by putting all no_stack_protector
functions in one translation unit together and compiling those with
-fno-stack-protector, that's generally not as ergonomic as a function
attribute, and still doesn't work for LTO. See also:
https://lore.kernel.org/linux-pm/20200915172658.1432732-1-rkir@google.com/
https://lore.kernel.org/lkml/20200918201436.2932360-30-samitolvanen@google.com/T/#u

Typically, when inlining a callee into a caller, the caller will be
upgraded in its level of stack protection (see adjustCallerSSPLevel()).
By adding an explicit attribute in the IR when the function attribute is
used in the source language, we can now identify such cases and prevent
inlining. Inlining is blocked when the callee and caller differ, i.e. when one
of them has `nossp` while the other has `ssp`, `sspstrong`, or `sspreq`.

Fixes pr/47479.

Reviewed By: void

Differential Revision: https://reviews.llvm.org/D87956
2020-10-23 11:55:39 -07:00
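For reference, a minimal example of the source-level attribute this models in IR (Clang/GCC attribute syntax; the function name and body are placeholders):

```cpp
// A function that sets up its own stack canary or manipulates the stack
// directly must not gain a compiler-inserted stack protector; with the new
// IR attribute the inliner can see the mismatch and refuse to inline an
// ssp/sspstrong/sspreq callee into it.
__attribute__((no_stack_protector))
void early_boot_setup(void) {
  // ... manipulates the stack / installs its own canary ...
}
```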
Mircea Trofin 819044ad2d [NFC] Use [MC]Register in RegAllocGreedy
This was initiated from the uses of MCRegUnitIterator, so while likely
not exhaustive, it's a step forward.

Differential Revision: https://reviews.llvm.org/D89975
2020-10-23 11:30:53 -07:00
Paulo Matos 69e2797eae [WebAssembly] Implementation of (most) table instructions
Implementation of instructions table.get, table.set, table.grow,
table.size, table.fill, table.copy.

Missing instructions are table.init and elem.drop as they deal with
element sections which are not yet implemented.

Added more tests to tables.s

Differential Revision: https://reviews.llvm.org/D89797
2020-10-23 08:42:54 -07:00
Jeremy Morse b1b2c6ab66 [DebugInstrRef] Handle DBG_INSTR_REFs use-before-defs in LiveDebugValues
Deciding where to place debugging instructions when normal instructions
sink between blocks is difficult -- see PR44117. Dealing with this with
instruction-referencing variable locations is simple: we just tolerate
DBG_INSTR_REFs referring to values that haven't been computed yet. This
patch adds support into InstrRefBasedLDV to record when a variable value
appears in the middle of a block, and should have a DBG_VALUE added when it
appears (a debug use before def).

While described simply, this relies heavily on the value-propagation
algorithm in InstrRefBasedLDV. The implementation doesn't attempt to verify
the location of a value unless something non-trivial occurs to merge
variable values in vlocJoin. This means that a variable with a value that
has no location can retain it across all control flow (including loops).
It's only when another debug instruction specifies a different variable
value that we have to check, and find there's no location.

This property means that if a machine value is defined in a block dominated
by a DBG_INSTR_REF that refers to it, all the successor blocks can
automatically find a location for that value (if it's not clobbered). Thus
in a sense, InstrRefBasedLDV is already supporting and implementing
use-before-defs. This patch allows us to specify a variable location in the
block where it's defined.

When loading live-in variable locations, TransferTracker currently discards
those where it can't find a location for the variable value. However, we
can tell from the machine value number whether the value is defined in this
block. If it is, add it to a set of use-before-def records. Then, once the
relevant instruction has been processed, emit a DBG_VALUE immediately after
it.

Differential Revision: https://reviews.llvm.org/D85775
2020-10-23 16:33:23 +01:00
Denis Antrushin 4f7ee55971 Revert "[Statepoints] Allow deopt GC pointer on VReg if gc-live bundle is empty."
Downstream testing revealed some problems with this patch.
Reverting while investigating.
This reverts commit 2b96dcebfa.
2020-10-23 21:55:06 +07:00
Jeremy Morse 68f4715716 [DebugInstrRef] Convert DBG_INSTR_REFs into variable locations
Handle DBG_INSTR_REF instructions in LiveDebugValues, to determine and
propagate variable locations. The logic is fairly straightforward:
collect a map of debug-instruction-number to the machine value numbers
generated in the first walk through the function. When building the
variable value transfer function and we see a DBG_INSTR_REF, look up the
instruction it refers to, and pick the machine value number it generates.
That's it; the rest of LiveDebugValues continues as normal.

Awkwardly, there are two kinds of instruction numbering happening here: the
offset into the block (which is how machine value numbers are determined),
and the numbers that we label instructions with when generating
DBG_INSTR_REFs.

I've also restructured the TransferTracker redefVar code a little, to
separate some DBG_VALUE specific operations into its own method. The
changes around redefVar should be largely NFC, while allowing
DBG_INSTR_REFs to specify a value number rather than just a location.

Differential Revision: https://reviews.llvm.org/D85771
2020-10-23 14:50:02 +01:00
Jeremy Morse ab93e71065 [DebugInstrRef] NFC: Separate collection of machine/variable values
This patch adjusts _when_ something happens in LiveDebugValues /
InstrRefBasedLDV, to make it more amenable to dealing with DBG_INSTR_REF
instructions. There's no functional change.

In the current InstrRefBasedLDV implementation, we collect the machine
value-number transfer function for blocks at the same time as the
variable-value transfer function. After solving machine value numbers, the
variable-value transfer function is updated so that DBG_VALUEs of live-in
registers have the correct value. The same would need to be done for
DBG_INSTR_REFs, to connect instruction-references with machine value
numbers.

Rather than writing more code for that, this patch separates the two: we
collect the (machine-value-number) transfer function and solve for
machine value numbers, then step through the MachineInstrs again collecting
the variable value transfer function. This simplifies things for the next
few patches.

Differential Revision: https://reviews.llvm.org/D85760
2020-10-23 11:13:20 +01:00
David Blaikie 4437df8eed DebugInfo: Hash DIE references (DW_OP_convert) when computing Split DWARF signatures 2020-10-22 20:09:33 -07:00
Han Shen e42f6c0ac0 Revert "[MBP] Add whole chain to BlockFilterSet instead of individual BB"
This reverts commit adfb541501.

This is reverted because it caused a Chrome error: https://crbug.com/1140168
2020-10-22 17:31:01 -07:00
David Blaikie a66311277a DWARFv5: Disable DW_OP_convert for configurations that don't yet support it
Testing reveals that lldb and gdb have some problems with supporting
DW_OP_convert - gdb with Split DWARF tries to resolve the CU-relative
DIE offset relative to the skeleton DIE. lldb tries to treat the offset
as absolute, which judging by the llvm-dsymutil support for
DW_OP_convert, I guess works OK in MachO? (though probably llvm-dsymutil
is producing invalid DWARF by resolving the relative reference to an
absolute one?).

Specifically this disables DW_OP_convert usage in DWARFv5 if:
* Tuning for GDB and using Split DWARF
* Tuning for LLDB and not targeting MachO
2020-10-22 12:02:33 -07:00
Mircea Trofin e24537d48f [NFC][MC] Use MCRegister for ReachingDefAnalysis APIs
Also updated the users of the APIs; and a drive-by small change to
RDFRegister.cpp

Differential Revision: https://reviews.llvm.org/D89912
2020-10-22 08:47:35 -07:00
Jeremy Morse 68ac02c0dd [DebugInstrRef] Pass DBG_INSTR_REFs through register allocation
Both FastRegAlloc and LiveDebugVariables/greedy need to cope with
DBG_INSTR_REFs. None of them actually need to take any action, other than
passing DBG_INSTR_REFs through: variable location information doesn't refer
to any registers at this stage.

LiveDebugVariables stashes the instruction information in a tuple, then
re-creates it later. This is only necessary as the register allocator
doesn't expect to see any debug instructions while it's working. No
equivalence classes or interval splitting is required at all!

No changes are needed for the fast register allocator, as it just ignores
debug instructions. The test added checks that both of them preserve
DBG_INSTR_REFs.

This also expands ScheduleDAGInstrs.cpp to treat DBG_INSTR_REFs the same as
DBG_VALUEs when rescheduling instructions. The current movement of
DBG_VALUEs around is less than ideal, but it's not a regression to make
DBG_INSTR_REFs subject to the same movement.

Differential Revision: https://reviews.llvm.org/D85757
2020-10-22 15:51:22 +01:00
Matt Arsenault 188df17420 ScheduleDAGInstrs: Skip debug instructions at end of scheduling region
If the end instruction of the scheduling region was a DBG_VALUE, the
uses of the debug instruction were tracked as if they were real
uses. This would then hit the deadDefHasNoUse assertion in
addVRegDefDeps if the only use was the debug instruction.
2020-10-22 10:16:45 -04:00
Jeremy Morse d73275993b [DebugInstrRef] Substitute debug value numbers to handle optimizations
This patch touches two optimizations, TwoAddressInstruction and X86's
FixupLEAs pass, both of which optimize by re-creating instructions. For
LEAs, various bits of arithmetic are better represented as LEAs on X86,
while TwoAddressInstruction sometimes converts instructions into three-address
instructions if it's profitable.

For debug instruction referencing, both of these require substitutions to
be created -- the old instruction number must be pointed to the new
instruction number, as illustrated in the added test. If this isn't done,
any variable locations based on the optimized instruction are
conservatively dropped.

Differential Revision: https://reviews.llvm.org/D85756
2020-10-22 13:01:03 +01:00
Fangrui Song b0c12474ed [ShrinkWrap] Delete unneeded nullptr checks for the save point. NFC
findNearestCommonDominator never returns nullptr.
2020-10-22 00:27:01 -07:00
Xiang1 Zhang 7c3fea7721 [X86] Support customizing stack protector guard
Reviewed By: nickdesaulniers, MaskRay

Differential Revision: https://reviews.llvm.org/D88631
2020-10-22 10:08:14 +08:00
Craig Topper 9e884169a2 [FPEnv][X86][SystemZ] Use different algorithms for i64->double uint_to_fp under strictfp to avoid producing -0.0 when rounding toward negative infinity
Some of our conversion algorithms produce -0.0 when converting unsigned i64 to double when the rounding mode is round toward negative. This switches them to other algorithms that don't have this problem. Since it is undefined behavior to change rounding mode with the non-strict nodes, this patch only changes the behavior for strict nodes.

There are still problems with unsigned i32 conversions too which I'll try to fix in another patch.

Fixes part of PR47393

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D87115
2020-10-21 18:12:54 -07:00
Gaurav Jain 4634ad6c0b [NFC] Set return type of getStackPointerRegisterToSaveRestore to Register
Differential Revision: https://reviews.llvm.org/D89858
2020-10-21 16:19:38 -07:00
Jeremy Morse 537f0fbe82 [DebugInfo] Follow up c521e44def with an API improvement
As mentioned post-commit in D85749, the 'substituteDebugValuesForInst'
method added in c521e44def would be better off with a limit on the
number of operands to substitute. This handles the common case of
"substitute the first operand between these two differing instructions",
or possibly up to N first operands.
2020-10-21 14:45:55 +01:00
Simon Pilgrim 88523f6f4b [DAG] getNode(ISD::EXTRACT_SUBVECTOR) Drop unnecessary N2C null check - we assert that this isn't null and have already used the pointer. NFCI.
Fixes cppcheck + null dereference warning.
2020-10-21 11:53:44 +01:00
Nicholas Guy 9a2d2bedb7 Add "SkipDead" parameter to TargetInstrInfo::DefinesPredicate
Some instructions may be removable through processes such as IfConversion,
however DefinesPredicate cannot be made aware of when this should be considered.
This parameter allows DefinesPredicate to distinguish these removable instructions
on a per-call basis, allowing for more fine-grained control from processes like
IfConversion.

Renames DefinesPredicate to ClobbersPredicate, to better reflect its purpose.

Differential Revision: https://reviews.llvm.org/D88494
2020-10-21 11:52:47 +01:00
Sven van Haastregt bfc961aeb2 [TargetLowering] Check boolean content when folding bit compare
Updates an optimization that relies on boolean contents being either 0
or 1 to properly check for this before triggering.

The following:
  (X & 8) != 0 --> (X & 8) >> 3
Produces unexpected results when a boolean 'true' value is represented
by negative one.

Patch by Erik Hogeman.

Differential Revision: https://reviews.llvm.org/D89390
2020-10-21 11:46:55 +01:00
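A scalar illustration of why the fold needs the boolean-content check, assuming an i8 "boolean" where true is all-ones (-1): the shifted form produces 1 where the comparison form produces -1.

```cpp
#include <cassert>
#include <cstdint>

// Comparison form, with 'true' represented as -1 (all ones).
int8_t cmpForm(uint8_t X) { return (X & 8) != 0 ? int8_t(-1) : int8_t(0); }

// Folded form (X & 8) >> 3: only ever yields 0 or 1.
int8_t shiftForm(uint8_t X) { return int8_t((X & 8) >> 3); }

int main() {
  assert(cmpForm(8) == -1);
  assert(shiftForm(8) == 1); // differs whenever 'true' must be -1
  return 0;
}
```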
David Sherwood 5b17b323a6 [SVE][CodeGen] Replace use of TypeSize comparison operator in CreateStackTemporary
We were previously relying upon the TypeSize comparison operators to
obtain the maximum size of two types, however use of such operators is
being deprecated in favour of making the caller aware that it could
be dealing with scalable vector types. I have changed the code to assert
that the two types have the same scalable property and thus we can
simply take the maximum of the known minimum sizes instead.

Differential Revision: https://reviews.llvm.org/D88563
2020-10-21 08:31:36 +01:00
Mircea Trofin 5e731625f3 [NFC][MC] Use [MC]Register in MachineVerifier
Differential Revision: https://reviews.llvm.org/D89815
2020-10-20 20:42:35 -07:00
Hubert Tong 134ffa8138 NFC: Fix -Wsign-compare warnings on 32-bit builds
Comparing 32-bit `ptrdiff_t` against 32-bit `unsigned` results in
`-Wsign-compare` warnings for both GCC and Clang.

The warning for the cases in question appear to identify an issue
where the `ptrdiff_t` value would be mutated via conversion to an
unsigned type.

The warning is resolved by using the usual arithmetic conversions to
safely preserve the value of the `unsigned` operand while trying to
convert to a signed type. Host platforms where `unsigned` has the same
width as `unsigned long long` will need to make a different change, but
using an explicit cast has disadvantages that can be avoided for now.

Reviewed By: dantrushin

Differential Revision: https://reviews.llvm.org/D89612
2020-10-20 20:52:10 -04:00
Austin Kerbow 37d907899f [HazardRec] Allow inserting multiple wait-states simultaneously
If a target can encode multiple wait-states into a noop, allow emitting such
instructions directly.

Reviewed By: rampitec, dmgreen

Differential Revision: https://reviews.llvm.org/D89753
2020-10-20 17:03:47 -07:00
Nicolai Hähnle c0cdd22c72 Introduce CfgTraits abstraction
The CfgTraits abstraction simplifies writing algorithms that are
generic over the type of CFG, and enables writing such algorithms
as regular non-template code that operates on opaque references
to CFG blocks and values.

Implementations of CfgTraits provide operations on the concrete
CFG types, e.g. `IrCfgTraits::BlockRef` is `BasicBlock *`.

CfgInterface is an abstract base class which provides operations
on opaque types CfgBlockRef and CfgValueRef. Those opaque types
encapsulate a `void *`, but the meaning depends on the concrete
CFG type. For example, MachineCfgTraits -- for use with MachineIR
in SSA form -- encodes a Register inside CfgValueRef. Converting
between concrete references and opaque/generic ones is done by
CfgTraits::{fromGeneric,toGeneric}. Convenience methods
CfgTraits::{un}wrap{Iterator,Range} are available as well.

Writing algorithms in terms of CfgInterface adds some overhead
(virtual method calls, plus in some cases it removes the
opportunity to inline iterators), but can be much more convenient
since generic algorithms can be written as non-templates.

This patch adds implementations of CfgTraits for all CFGs on
which dominator trees are calculated, so that the dominator
tree can be ported to this machinery. Only IrCfgTraits (LLVM IR)
and MachineCfgTraits (Machine IR in SSA form) are complete, the
other implementations are limited to the absolute minimum
required to make the upcoming dominator tree changes work.

v5:
- fix MachineCfgTraits::blockdef_iterator and allow it to iterate over
  the instructions in a bundle
- use MachineBasicBlock::printName

v6:
- implement predecessors/successors for all CfgTraits implementations
- fix error in unwrapRange
- rename toGeneric/fromGeneric into wrapRef/unwrapRef to have naming
  that is consistent with {wrap,unwrap}{Iterator,Range}
- use getVRegDef instead of getUniqueVRegDef

v7:
- std::forward fix in wrapping_iterator
- fix typos

v8:
- cleanup operators on CfgOpaqueType
- address other review comments

Change-Id: Ia75f4f268fded33fca11218a7d578c9aec1f3f4d

Differential Revision: https://reviews.llvm.org/D83088
2020-10-20 13:50:52 +02:00
Luqman Aden 51892a42da [COFF][ARM] Fix CodeView for Windows on 32bit ARM targets.
Create the LLVM / CodeView register mappings for the 32-bit ARM Windows targets.

Reviewed By: compnerd

Differential Revision: https://reviews.llvm.org/D89622
2020-10-19 22:16:16 -07:00
Qiu Chaofan 1b2fe71ecf [DAGCombiner] Tighten reassociation of visitFMA
From LangRef, FMF contract should not enable reassociating to form
arbitrary contractions. So it should not help rearrange nodes like
(fma (fmul x, c1), c2, y) into (fma x, c1*c2, y).

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D89527
2020-10-20 10:13:01 +08:00
Craig Topper edd0cb11bd [SelectionDAG][X86] Enable SimplifySetCC CTPOP transforms for vector splats
This enables these transforms for vectors:
(ctpop x) u< 2 -> (x & x-1) == 0
(ctpop x) u> 1 -> (x & x-1) != 0
(ctpop x) == 1 --> (x != 0) && ((x & x-1) == 0)
(ctpop x) != 1 --> (x == 0) || ((x & x-1) != 0)

All enabled if CTPOP isn't Legal. This differs from the scalar
behavior where the first two are done unconditionally and the
last two are done if CTPOP isn't Legal or Custom. The Legal
check produced better results for vectors based on X86's
custom handling. Might be worth re-visiting scalars here.

I disabled the looking through truncate for vectors. The
code that creates new setcc can use the same result VT as the
original setcc even if we truncated the input. That may work
for most scalars, but definitely wouldn't work for vectors
unless it was a vector of i1.

Fixes or at least improves PR47825

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D89346
2020-10-19 12:56:59 -07:00
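A scalar sanity check of the first two transforms above, in plain C++: popcount(x) < 2 is the classic "x & (x - 1) == 0" power-of-two-or-zero test, and its negation covers the u> 1 case.

```cpp
#include <cassert>
#include <cstdint>

unsigned popcount32(uint32_t X) {
  unsigned N = 0;
  for (; X; X &= X - 1) // clear the lowest set bit each iteration
    ++N;
  return N;
}

bool ctpopULT2(uint32_t X) { return popcount32(X) < 2; } // (ctpop x) u< 2
bool maskForm(uint32_t X) { return (X & (X - 1)) == 0; } // (x & x-1) == 0

int main() {
  for (uint32_t X : {0u, 1u, 2u, 3u, 8u, 12u, 0x80000000u, 0xFFFFFFFFu}) {
    assert(ctpopULT2(X) == maskForm(X));
    assert((popcount32(X) > 1) == ((X & (X - 1)) != 0));
  }
  return 0;
}
```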
Amy Kwan 6a946fd06f [DAGCombiner][PowerPC] Remove isMulhCheaperThanMulShift TLI hook, Use isOperationLegalOrCustom directly instead.
MULH is often expanded on targets.
This patch removes the isMulhCheaperThanMulShift hook and uses
isOperationLegalOrCustom instead.

Differential Revision: https://reviews.llvm.org/D80485
2020-10-19 12:23:04 -05:00
Mircea Trofin 225065b9a8 [NFC][MC] Type [MC]Register uses in MachineTraceMetrics
Differential Revision: https://reviews.llvm.org/D89710
2020-10-19 09:49:52 -07:00
David Sherwood 3945b69e81 [SVE][CodeGen] Replace more TypeSize comparison operators with their scalar equivalents
In certain places in llvm/lib/CodeGen we were relying upon the TypeSize
comparison operators when in fact the code was only ever expecting
either scalar values or fixed width vectors. This patch changes a few
functions that were always expecting to work on scalar or fixed width
types:

1. DAGCombiner::mergeTruncStores - deals with scalar integers only.
2. DAGCombiner::ReduceLoadWidth - not valid for vectors.
3. DAGCombiner::createBuildVecShuffle - should only be used for
   fixed width vectors.
4. SelectionDAGLegalize::ExpandFCOPYSIGN and
   SelectionDAGLegalize::getSignAsIntValue - only work on scalars.

Differential Revision: https://reviews.llvm.org/D88562
2020-10-19 08:38:50 +01:00
David Sherwood 35a531fb45 [SVE][CodeGen][NFC] Replace TypeSize comparison operators with their scalar equivalents
In certain places in llvm/lib/CodeGen we were relying upon the TypeSize
comparison operators when in fact the code was only ever expecting
either scalar values or fixed width vectors. I've changed some of these
places to use the equivalent scalar operator.

Differential Revision: https://reviews.llvm.org/D88482
2020-10-19 08:30:31 +01:00
David Sherwood f693f915a0 [SVE][CodeGen] Replace uses of TypeSize comparison operators
In certain places in the code we can never end up in a situation where
we're mixing fixed width and scalable vector types. For example,
we can't have truncations and extends that change the lane count. Also,
in other places such as GenWidenVectorStores and GenWidenVectorLoads we
know from the behaviour of FindMemType that we can never choose a vector
type with a different scalable property.

In various places I have used EVT::bitsXY functions instead of
TypeSize::isKnownXY, where it probably makes sense to keep an assert
that scalable properties match.

Differential Revision: https://reviews.llvm.org/D88654
2020-10-19 08:08:41 +01:00
Alok Kumar Sharma 0538353b3b [DebugInfo] Support for DWARF operator DW_OP_over
LLVM rejects DWARF operator DW_OP_over. This DWARF operator is needed
for Flang to support assumed rank array.

  Summary:
Currently LLVM rejects DWARF operator DW_OP_over. The error below is
produced when LLVM finds this operator.
[..]
invalid expression
!DIExpression(151, 20, 16, 48, 30, 35, 80, 34, 6)
warning: ignoring invalid debug info in over.ll
[..]
There were some parts missing in support of this operator, which are
now completed.

  Testing
-added a unit testcase
-check-debuginfo
-check-llvm

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D89208
2020-10-17 08:42:28 +05:30
Craig Topper 278bd06891 [TargetLowering] Extract simplifySetCCs ctpop into a separate function. NFCI
As requested in D89346. This allows us to add some early outs.

I reordered some checks a little bit to make the more common bail outs happen earlier, like checking the opcode before checking hasOneUse. I also moved the bit width check that makes sure it is safe to look through a truncate to the spot where we actually look through truncates, instead of after it.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D89494
2020-10-16 19:47:56 -07:00
Jameson Nash 4242df1470 Revert "make the AsmPrinterHandler array public"
I messed up one of the tests.
2020-10-16 17:22:07 -04:00
Jameson Nash ac2def2d8d make the AsmPrinterHandler array public
This lets external consumers customize the output, similar to how
AssemblyAnnotationWriter lets the caller define callbacks when printing
IR. The array of handlers already existed, this just cleans up the code
so that it can be exposed publicly.

Differential Revision: https://reviews.llvm.org/D74158
2020-10-16 16:27:31 -04:00
Amara Emerson 6042c25b0a [GlobalISel] Add translation support for vector reduction intrinsics.
In order to prevent the ExpandReductions pass from expanding some intrinsics
before they get to codegen, I had to add a -disable-expand-reductions flag
for testing purposes.

Differential Revision: https://reviews.llvm.org/D89028
2020-10-16 10:17:53 -07:00
Jay Foad 0c1381d795 [llc] Use -filetype=null to disable MIR printing
If you use -stop-after or similar options, llc will normally print MIR.
This patch checks for -filetype=null as a special case to disable MIR
printing. As the comment says, "The Null output is intended for use for
performance analysis ...", and I found this useful for timing a subset
of the passes that llc runs without the significant overhead of printing
MIR just to send it to /dev/null.

Differential Revision: https://reviews.llvm.org/D89476
2020-10-16 16:51:56 +01:00
Amara Emerson c2551c1f40 [GlobalISel] Remove scalar src from non-sequential fadd/fmul reductions.
It's probably better to split these into separate G_FADD/G_FMUL + G_VECREDUCE
operations in the translator rather than carrying the scalar around. The
majority of the time it'll get simplified away as the scalars are probably
identity values.

Differential Revision: https://reviews.llvm.org/D89150
2020-10-15 15:51:44 -07:00
Denis Antrushin 8f0ddd4a1a [Statepoints] Remove MI limit on number of tied operands.
After D87915 statepoint can have more than 15 tied operands.
Remove this restriction from statepoint lowering code.
2020-10-15 19:02:38 +07:00
Adrian Kuegel ead2aa7098 Fix unused variable warning when compiling with asserts disabled.
Differential Revision: https://reviews.llvm.org/D89454
2020-10-15 12:50:19 +02:00
Jeremy Morse c521e44def [DebugInstrRef] Support recording of instruction reference substitutions
Add a table recording "substitutions" between pairs of <instruction,
operand> numbers, from old pairs to new pairs. Post-isel optimizations are
able to record the outcome of an optimization in this way. For example, if
there were a divide instruction that generated the quotient and remainder,
and it were replaced by one that only generated the quotient:

  $rax, $rcx = DIV-AND-REMAINDER $rdx, $rsi, debug-instr-num 1
  DBG_INSTR_REF 1, 0
  DBG_INSTR_REF 1, 1

Became:

  $rax = DIV $rdx, $rsi, debug-instr-num 2
  DBG_INSTR_REF 1, 0
  DBG_INSTR_REF 1, 1

We could enter a substitution from <1, 0> to <2, 0>, and no substitution
for <1, 1> as it's no longer generated.

This approach means that if an instruction or value is deleted once we've
left SSA form, all variables that used the value implicitly become
"optimized out", something that isn't true of the current DBG_VALUE
approach.

Differential Revision: https://reviews.llvm.org/D85749
2020-10-15 11:30:14 +01:00
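A minimal sketch of such a substitution table in plain C++; the names and shape are illustrative assumptions, not the in-tree data structure. For the example above it would record <1, 0> -> <2, 0>, and <1, 1> would simply have no entry.

```cpp
#include <map>
#include <optional>
#include <utility>

using InstrOperandPair = std::pair<unsigned, unsigned>;

struct DebugSubstitutionTable {
  std::map<InstrOperandPair, InstrOperandPair> Substitutions;

  void record(InstrOperandPair Old, InstrOperandPair New) {
    Substitutions[Old] = New;
  }

  // Follow chains of substitutions; std::nullopt means no replacement was
  // ever recorded for this pair.
  std::optional<InstrOperandPair> resolve(InstrOperandPair P) const {
    bool Found = false;
    for (auto It = Substitutions.find(P); It != Substitutions.end();
         It = Substitutions.find(P)) {
      P = It->second;
      Found = true;
    }
    return Found ? std::optional<InstrOperandPair>(P) : std::nullopt;
  }
};
```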
Denis Antrushin 8c2b69d53a [Statepoints] Unlimited tied operands.
The current limit on the number of tied operands (15) is sometimes too
low for statepoints. We may get a couple dozen GC pointer operands on a
statepoint.
Review D87154 changed the format of statepoint to list every gc pointer
only once, which makes it trivial to find the tiedness relation between
statepoint operands: defs are mapped 1-1 to gc pointer operands passed
on registers.

Reviewed By: skatkov

Differential Revision: https://reviews.llvm.org/D87915
2020-10-15 16:16:11 +07:00
Craig Topper 50c9f1e11d [TargetLowering] Replace Log2_32_Ceil with Log2_32 in SimplifySetCC ctpop combine.
This combine can look through (trunc (ctpop X)). When doing this
it tries to make sure the trunc doesn't lose any information
from the ctpop. It does this by checking that the truncated type
has more bits than Log2_32_Ceil of the ctpop type. The Ceil is
unnecessary and pessimizes non-power of 2 types.

For example, ctpop of i256 requires 9 bits to represent the max
value of 256. But ctpop of i255 only requires 8 bits to represent
the max result of 255. Log2_32_Ceil of 256 and 255 both return 8
while Log2_32 returns 8 for 256 and 7 for 255

The code with popcnt enabled is a regression for this test case,
but it does match what already happens with i256 truncated to i9.
Since power of 2 is more likely, I don't think it should block
this change.

Differential Revision: https://reviews.llvm.org/D89412
2020-10-15 01:05:07 -07:00
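A worked check of the widths quoted above, using plain C++ stand-ins for Log2_32 and Log2_32_Ceil (assuming their usual floor/ceil definitions):

```cpp
#include <cassert>

unsigned log2_32(unsigned V) { // floor(log2(V))
  unsigned R = 0;
  while (V >>= 1)
    ++R;
  return R;
}

unsigned log2_32_ceil(unsigned V) { // ceil(log2(V))
  return V <= 1 ? 0 : log2_32(V - 1) + 1;
}

int main() {
  assert(log2_32_ceil(256) == 8 && log2_32_ceil(255) == 8);
  assert(log2_32(256) == 8 && log2_32(255) == 7);
  // Bits needed to hold ctpop's maximum result N are log2_32(N) + 1:
  assert(log2_32(256) + 1 == 9); // ctpop of i256 can be 256 -> needs 9 bits
  assert(log2_32(255) + 1 == 8); // ctpop of i255 can be 255 -> needs 8 bits
  return 0;
}
```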
Snehasish Kumar 24bf6ff4e0 [llvm] Update default cutoff threshold for machine function splitter.
Based on internal testing at Google we found that setting the profile
summary cutoff threshold to 999950 yields the best results in terms of
itlb and icache metrics (as observed on Intel CPUs).

*default* = Split out code if no profile count available for block
*size-%*  = The fraction of bytes split out of .text and .text.hot
*itlb*    = Misses per kilo instructions (MPKI) for itlb
*icache*  = Misses per kilo instructions (MPKI) for L1 icache

Search1

| cutoff  | size-%  | itlb      | icache  |
|---------|---------|-----------|---------|
| default | 42.5861 | 0.0822151 | 2.46363 |
|  999999 | 44.9350 | 0.0767194 | 2.44416 |
|  999950 | 50.0660 |  0.075744 |  2.4091 |
|  999500 | 56.9158 |  0.082564 |  2.4188 |
|  995000 | 63.8625 | 0.0814927 | 2.42832 |
|  990000 | 71.7314 |  0.106906 | 2.57785 |

Search2

| cutoff  | size-% | itlb     | icache  |
|---------|--------|----------|---------|
| default | 2.8845 | 0.626712 | 4.73245 |
|  999999 | 3.3291 | 0.602309 | 4.70045 |
|  999950 | 3.8577 | 0.587842 | 4.71632 |
|  999500 | 4.4170 |  0.63577 | 4.68351 |
|  995000 | 5.1020 | 0.657969 | 4.82272 |
|  990000 | 5.7153 | 0.719122 | 5.39496 |

Differential Revision: https://reviews.llvm.org/D89085
2020-10-14 12:48:10 -07:00
Snehasish Kumar 77638a5343 [llvm] Set the default for -bbsections-cold-text-prefix to .text.split.
After using this for a while, we find that it is generally useful to
have it set to .text.split. by default, removing the need for an
additional -mllvm option.

Differential Revision: https://reviews.llvm.org/D88997
2020-10-14 12:16:36 -07:00
Guozhi Wei adfb541501 [MBP] Add whole chain to BlockFilterSet instead of individual BB
Currently we add individual BB to BlockFilterSet if its frequency satisfies

LoopFreq / Freq <= LoopToColdBlockRatio

LoopFreq is edge frequency from outside to loop header.
LoopToColdBlockRatio is a command line parameter.

It doesn't make sense since we always lay out a whole chain, not individual BBs.

It may also cause a tricky problem. Sometimes it is possible that the LoopFreq
of an inner loop is smaller than LoopFreq of outer loop. So a BB can be in
BlockFilterSet of inner loop, but not in BlockFilterSet of outer loop,
like .cold in the test case. So it is added to the chain of inner loop. When
working on the outer loop, .cold is not added to the BlockFilterSet, so the
edge to successor .problem is not counted in UnscheduledPredecessors of the
.problem chain. But other blocks in the inner loop are added to the
BlockFilterSet, so the whole inner loop chain can be laid out, and
markChainSuccessors is called to decrease UnscheduledPredecessors of
following chains. markChainSuccessors calls markBlockSuccessors for every BB,
even if it is not in the BlockFilterSet, like .cold, so the .problem chain's
UnscheduledPredecessors is decreased, but this edge was not counted in
fillWorkLists, so the .problem chain's UnscheduledPredecessors becomes 0
while it still has an unscheduled predecessor .pred! And it causes
problems in following various successor BB selection algorithms.

Differential Revision: https://reviews.llvm.org/D89088
2020-10-14 11:55:10 -07:00
jasonliu f85bcc21dd [AIX] Turn -fdata-sections on by default in Clang
Summary:

This patch does the following:
1. Make InitTargetOptionsFromCodeGenFlags() accepts Triple as a
 parameter, because some options' default value is triple dependent.
2. DataSections is turned on by default on AIX for llc.
3. Test cases change accordingly because of the default behaviour change.
4. Clang Driver passes in -fdata-sections by default on AIX.

Reviewed By: MaskRay, DiggerLin

Differential Revision: https://reviews.llvm.org/D88737
2020-10-14 15:58:31 +00:00
Mircea Trofin c8fcffe775 [NFC][MC] Use MCRegister in Machine{Sink|Pipeliner}.cpp
Differential Revision: https://reviews.llvm.org/D89328
2020-10-14 08:42:17 -07:00
Michael Liao ae40d2858e Fix an apparent typo. `assert()` must not contain side-effects. NFC. 2020-10-14 11:33:34 -04:00
Jeremy Morse c4e7857d4e [DebugInstrRef] Create DBG_INSTR_REFs in SelectionDAG
When given the -experimental-debug-variable-locations option (via -Xclang
or to llc), have SelectionDAG generate DBG_INSTR_REF instructions instead
of DBG_VALUE. For now, this only happens in a limited circumstance: when
the value referred to is not a PHI and is defined in the current block.
Other situations introduce interesting problems, addressed in later patches.

Practically, this patch hooks into InstrEmitter and if it can find a
defining instruction for a value, gives it an instruction number, and
points the DBG_INSTR_REF at that <instr, operand> pair.

Differential Revision: https://reviews.llvm.org/D85747
2020-10-14 14:24:08 +01:00
Jeremy Morse 2c5f3d54c5 [DebugInstrRef] Parse debug instruction-references from/to MIR
This patch defines the MIR format for debug instruction references: it's an
integer trailing an instruction, marked out by "debug-instr-number", much
like how "debug-location" identifies the DebugLoc metadata of an
instruction. The instruction number is stored directly in a MachineInstr.

Actually referring to an instruction comes in a later patch, but is done
using one of these instruction numbers.

I've added a round-trip test and two verifier checks: that we don't label
meta-instructions as generating values, and that there are no duplicates.

Differential Revision: https://reviews.llvm.org/D85746
2020-10-14 10:57:09 +01:00
Aditya Nandakumar ef3d17482f [GISel] Add combine for constant G_PTR_ADD offsets.
https://reviews.llvm.org/D88865

This adds a single combine for GlobalISel to fold:

ptradd (inttoptr C1) C2
Into:

C1 + C2
Additionally, a small test for AArch64 is added.

Patch by pnappa.
2020-10-13 17:26:12 -07:00
Andrew Paverd cfd8481da1 Reland [CFGuard] Add address-taken IAT tables and delay-load support
This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D87544
2020-10-13 13:20:52 -07:00
Mircea Trofin 08097fc6a9 [NFC][Regalloc] Use MCRegister in MachineCopyPropagation
Differential Revision: https://reviews.llvm.org/D89250
2020-10-13 09:05:08 -07:00
Mirko Brkusanin 52ba4fa6aa [GlobalISel] Avoid making G_PTR_ADD with nullptr
When the first operand is a null pointer we can avoid making a G_PTR_ADD and
make a G_INTTOPTR with the offset operand.
This helps us avoid making add with 0 later on for targets such as AMDGPU.

Differential Revision: https://reviews.llvm.org/D87140
2020-10-13 13:02:55 +02:00
Craig Topper 1687a8d83b [X86][SelectionDAG] Add SADDO_CARRY and SSUBO_CARRY to support multipart signed add/sub overflow legalization.
This passes existing X86 test but I'm not sure if it handles all type
legalization cases it needs to.

Alternative to D89200

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D89222
2020-10-12 23:18:29 -07:00
Konstantin Schwarz 7341123439 [GlobalISel][KnownBits] Early return on out of bound shift amounts
If the known shift amount is bigger than or equal to the bitwidth of the type of the value to be shifted,
the result is target dependent, so don't try to infer any bits.

This fixes a crash we've seen in one of our internal test suites.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D89232
2020-10-12 18:39:19 +02:00
Mircea Trofin 43d347995c [NFC][MC] Use MCRegister in LiveRangeMatrix
The change starts from LiveRangeMatrix and also checks the users of the
APIs are typed accordingly.

Differential Revision: https://reviews.llvm.org/D89145
2020-10-12 08:54:36 -07:00
Mircea Trofin 596a9f6b89 [NFC][Regalloc] Pass VirtRegMap by reference.
It's never null - the reason it's modeled as a pointer is because the
pass can't init it in its ctor. Passing by ref simplifies the code, too,
as the null checks were unnecessary complexity.

Differential Revision: https://reviews.llvm.org/D89171
2020-10-12 08:32:30 -07:00
Simon Pilgrim c252200e4d [DAG][ARM][MIPS][RISCV] Improve funnel shift promotion to use 'double shift' patterns
Based on a discussion on D88783, if we're promoting a funnel shift to a width at least twice the size as the original type, then we can use the 'double shift' patterns (shifting the concatenated sources).

Differential Revision: https://reviews.llvm.org/D89139
2020-10-12 14:11:02 +01:00
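A sketch of the "double shift" idea for a promoted funnel shift, assuming an i16 fshl promoted to i32 arithmetic: shift the concatenated sources and keep the high half.

```cpp
#include <cassert>
#include <cstdint>

// fshl(A, B, C) on i16, computed in the promoted i32 type.
uint16_t fshl16(uint16_t A, uint16_t B, unsigned C) {
  C %= 16;
  uint32_t Concat = (uint32_t(A) << 16) | B; // A:B
  return uint16_t((Concat << C) >> 16);      // high 16 bits after the shift
}

int main() {
  // Reference: fshl(a, b, c) = (a << c) | (b >> (16 - c)) for c % 16 != 0.
  assert(fshl16(0x1234, 0xABCD, 4) == 0x234A);
  assert(fshl16(0x1234, 0xABCD, 0) == 0x1234);
  return 0;
}
```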
David Sherwood c5ba0d33cc [SVE] Make ElementCount and TypeSize use a new PolySize class
I have introduced a new template PolySize class, where the template
parameter determines the type of quantity, i.e. for an element
count this is just an unsigned value. The ElementCount class is
now just a simple derivation of PolySize<unsigned>, whereas TypeSize
is more complicated because it still needs to contain the uint64_t
cast operator, since there are still many places in the code that
rely upon this implicit cast. As such the class also still needs
some of its own operators.

I've tried to minimise the amount of code in the base PolySize
class, which led to a couple of changes:

1. In some places we were relying on '==' operator comparisons
between ElementCounts and the scalar value 1. I didn't put this
operator in the new PolySize class, and thought it was actually
clearer to use the isScalar() function instead.
2. I removed the isByteSized function and replaced it with calls
to isKnownMultipleOf(8).

I've also renamed NextPowerOf2 to be coefficientNextPowerOf2 so
that it's more consistent with coefficientDivideBy.

Differential Revision: https://reviews.llvm.org/D88409
2020-10-12 08:23:38 +01:00
Fangrui Song cddb49bcc0 [SchedDAGInstrs] Delete redundant contains(). NFC 2020-10-11 20:58:30 -07:00
Krzysztof Parzyszek 61eaa2e14a [SDAG] Remember to set UndefElts in isSplatValue for SPLAT_VECTOR 2020-10-10 19:42:24 -05:00
David Green cb27006a94 [ARM] Attempt to make Tail predication / RDA more resilient to empty blocks
There are a number of places in RDA where we assume the block will not
be empty. This isn't necessarily true for tail predicated loops where we
have removed instructions. This attempts to make the pass more resilient
to empty blocks, not casting pointers to machine instructions where they
would be invalid.

The test contains a case that was previously failing, but has recently been
hidden on trunk. It contains an empty block to begin with to show a
similar error.

Differential Revision: https://reviews.llvm.org/D88926
2020-10-10 14:50:25 +01:00
Alok Kumar Sharma 96bd4d34a2 [DebugInfo] Support for DWARF attribute DW_AT_rank
This patch adds support for DWARF attribute DW_AT_rank.

  Summary:
Fortran assumed rank arrays have dynamic rank. DWARF attribute
DW_AT_rank is needed to support that.

  Testing:
unit test cases added (hand-written)
check llvm
check debug-info

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D89141
2020-10-10 17:51:12 +05:30
Denis Antrushin 2b96dcebfa [Statepoints] Allow deopt GC pointer on VReg if gc-live bundle is empty.
Currently we allow passing pointers from deopt bundle on VReg only if
they were seen in list of gc-live pointers passed on VRegs.
This means that for the case of empty gc-live bundle we spill deopt
bundle's pointers. This change allows lowering deopt pointers to VRegs
in case of empty gc-live bundle. In case of non-empty gc-live bundle,
behavior does not change.

Reviewed By: skatkov

Differential Revision: https://reviews.llvm.org/D88999
2020-10-10 14:58:08 +07:00
Mircea Trofin c11c20fb00 [NFC][Regalloc] VirtRegAuxInfo::Hint does not need to be a field
It is only used in weightCalcHelper, and cleared upon its finishing its
job there.

The patch further cleans up style guide discrepancies, and simplifies
CopyHint by removing duplicate 'IsPhys' information (it's what the Reg
field would report).
2020-10-09 13:42:23 -07:00
Mircea Trofin 62e2ac6461 [NFC][Regalloc] Fix coding style in CalcSpillWeights 2020-10-09 12:22:12 -07:00
Scott Linder 4a98cf7867 [NFC] Reformat MILexer.cpp:getIdentifierKind
Reformat to avoid unrelated changes in diff of future patch.
Committed as obvious.
2020-10-09 15:21:24 +00:00
Esme-Yi e9fd8823ba [DAGCombiner] Add decomposition patterns for Mul-by-Imm.
Summary: This patch is derived from D87384.
In this patch we expand the existing decomposition of mul-by-constant to be more general by implementing 2 patterns:
```
  mul x, (2^N + 2^M) --> (add (shl x, N), (shl x, M))
  mul x, (2^N - 2^M) --> (sub (shl x, N), (shl x, M))
```
The conversion will be triggered if the multiplier is a big constant that the target can't use a single multiplication instruction to handle. This is controlled by the hook `decomposeMulByConstant`.
Moreover, the conversion benefits from an ILP improvement since the instructions are independent. A case with a sequence like the following also gets a benefit since a shift instruction is saved.

```
*res1 = a * 0x8800;
*res2 = a * 0x8080;
```

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D88201
2020-10-09 08:51:40 +00:00
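A worked instance of the first pattern for the 0x8800 multiplier from the example above (0x8800 == 2^15 + 2^11), in plain C++:

```cpp
#include <cassert>
#include <cstdint>

// mul x, (2^15 + 2^11)  -->  (add (shl x, 15), (shl x, 11))
uint32_t mulBy0x8800(uint32_t X) { return (X << 15) + (X << 11); }

int main() {
  for (uint32_t X : {0u, 1u, 7u, 123456u, 0xDEADBEEFu})
    assert(mulBy0x8800(X) == X * 0x8800u); // both wrap mod 2^32 identically
  return 0;
}
```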
Fangrui Song 2c4c2dc2d9 [MCRegister] Simplify isStackSlot & isPhysicalRegister and delete isPhysical. NFC 2020-10-08 22:08:33 -07:00
Fangrui Song c3de9a9e69 Fix incorect Register -> MCRegister conversion
getReg returns a Register which may represent a virtual register.
2020-10-08 21:40:48 -07:00
Kazushi (Jam) Marukawa 1d1c1f8ff2 [VE] Add new MVT types for NEC SX Aurora VE vector
This patch adds entries for:
    v64i64
    v128i64
    v256i64
    v64f64
    v128f64
    v256f64

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D88776
2020-10-09 12:07:41 +09:00
Kai Luo 8a5858c8fd [TwoAddressInstruction][PowerPC] Call `regOverlapsSet` to find out real clobbers and uses
In `rescheduleKillAboveMI`, the current implementation uses a `SmallSet` to track a reg's defs and uses. When comparing, it uses `SmallSet.count` to find out if the reg is clobbered or used. This is not correct when subregisters are involved. This patch uses `regOverlapsSet`, already used by `rescheduleMIBelowKill`, to fix the issue.

Fixed https://bugs.llvm.org/show_bug.cgi?id=47707.

Reviewed By: #powerpc, nemanjai

Differential Revision: https://reviews.llvm.org/D88716
2020-10-09 02:34:54 +00:00
Mircea Trofin 4cfc4025cc [NFC][MC] MCRegister API typing.
Mostly LiveIntervals, with their effects (users).

Differential Revision: https://reviews.llvm.org/D89018
2020-10-08 15:08:34 -07:00
Quentin Colombet fd8275e04a [GlobalISel] Add missing pass dependencies for IRTranslator
The IRTranslator depends on the branch probability info pass when the
optimization level is different from None, and it always depends on
the StackProtector pass.

We have to explicitly call out pass dependencies otherwise the pass manager
may not be able to schedule the IRTranslator.

Before this patch, we were lucky because previous passes depend on the branch
probability info pass (like the Global Variable Optimization) and the stack
protector pass is initialized in initializeCodeGen.
However, if the target has a custom pipeline without any passes like Global
Variable Optimization, the pipeline creation will fail, at least because of
the branch probability info pass dependency (it is unlikely that
initializeCodeGen is not called).

This patch adds the missing dependencies to the IRTranslator.

Differential Revision: https://reviews.llvm.org/D89063
2020-10-08 13:57:21 -07:00
Rahman Lavaee 2b0c5d76a6 Introduce and use a new section type for the bb_addr_map section.
This patch lets the bb_addr_map (renamed to __llvm_bb_addr_map) section use a special section type (SHT_LLVM_BB_ADDR_MAP) instead of SHT_PROGBITS. This would help parsers, dumpers and other tools to use the sh_type ELF field to identify this section rather than relying on string comparison on the section name.

Reviewed By: jhenderson

Differential Revision: https://reviews.llvm.org/D88199
2020-10-08 11:13:19 -07:00
Amara Emerson 283b4d6ba3 [GlobalISel] Add G_VECREDUCE_* opcodes for vector reductions.
These mirror the IR and SelectionDAG intrinsics & nodes.

Opcodes added:
G_VECREDUCE_SEQ_FADD
G_VECREDUCE_SEQ_FMUL
G_VECREDUCE_FADD
G_VECREDUCE_FMUL
G_VECREDUCE_FMAX
G_VECREDUCE_FMIN
G_VECREDUCE_ADD
G_VECREDUCE_MUL
G_VECREDUCE_AND
G_VECREDUCE_OR
G_VECREDUCE_XOR
G_VECREDUCE_SMAX
G_VECREDUCE_SMIN
G_VECREDUCE_UMAX
G_VECREDUCE_UMIN

Differential Revision: https://reviews.llvm.org/D88750
2020-10-08 10:33:19 -07:00
diggerlin 92bca12843 [AIX] add new option -mignore-xcoff-visibility
SUMMARY:

In the IBM compiler xlclang there is an option -fnovisibility which suppresses visibility. For more details see: https://www.ibm.com/support/knowledgecenter/SSGH3R_16.1.0/com.ibm.xlcpp161.aix.doc/compiler_ref/opt_visibility.html.

We need to add the option -mignore-xcoff-visibility for compatibility with the IBM AIX OS (the option is enabled by default on AIX). With this option, LLVM does not emit any visibility attribute to the assembly or XCOFF object file.

The option only works on AIX; on other operating systems, using the option reports an unsupported-option error.

On AIX:

1.1 The option -mignore-xcoff-visibility is enabled by default if neither -fvisibility=* nor -mignore-xcoff-visibility is given explicitly on the clang command line.

1.2 If -fvisibility=* is given explicitly but -mignore-xcoff-visibility is not, visibility attributes are generated.

1.3 If both -fvisibility=* and -mignore-xcoff-visibility are given explicitly, -mignore-xcoff-visibility wins and no visibility attribute is emitted.

The option -mignore-xcoff-visibility has no effect on visibility attributes when compiling with -emit-llvm to generate LLVM IR.

Reviewers: daltenty, Jason Liu

Differential Revision: https://reviews.llvm.org/D87451
2020-10-08 09:34:58 -04:00
Geoffrey Martin-Noble 93db4a8ce6 Remove unused variables
These are unused since
https://reviews.llvm.org/rG35cb45c533fb76dcfc9f44b4e8bbd5d8a855ed2a
causing `-Wunused` warnings.

Differential Revision: https://reviews.llvm.org/D89022
2020-10-07 18:30:12 -07:00
Anna Thomas 35cb45c533 [ImplicitNullChecks] Support complex addressing mode
The pass is updated to handle loads through complex addressing mode,
specifically, when we have a scaled register and a scale.
It requires two API updates in TII which have been implemented for X86.

See added IR and MIR testcases.

Tests-Run: make check
Reviewed-By: reames, danstrushin
Differential Revision: https://reviews.llvm.org/D87148
2020-10-07 20:55:38 -04:00
Mircea Trofin 297655c123 [NFC][regalloc] Use MCRegister instead of unsigned in InterferenceCache
Also changed users of APIs.

Differential Revision: https://reviews.llvm.org/D88930
2020-10-07 14:48:43 -07:00
Rahman Lavaee 34cd06a9b3 [BasicBlockSections] Make sure that the labels for address-taken blocks are emitted after switching the section.
Currently, AsmPrinter code is organized in a way in which the labels of address-taken blocks are emitted in the previous section, which makes the relocation incorrect.
This patch reorganizes the code to switch to the basic block section before handling address-taken blocks.

Reviewed By: snehasish, MaskRay

Differential Revision: https://reviews.llvm.org/D88517
2020-10-07 13:22:38 -07:00
Amara Emerson e72cfd938f Rename the VECREDUCE_STRICT_{FADD,FMUL} SDNodes to VECREDUCE_SEQ_{FADD,FMUL}.
The STRICT was causing unnecessary confusion. I think SEQ is a more accurate
name for what they actually do, and the other obvious option of "ORDERED"
has the issue of already having a meaning in FP contexts.

Differential Revision: https://reviews.llvm.org/D88791
2020-10-07 10:45:09 -07:00
Amara Emerson 322d0afd87 [llvm][mlir] Promote the experimental reduction intrinsics to be first class intrinsics.
This change renames the intrinsics to not have "experimental" in the name.

The autoupgrader will handle legacy intrinsics.

Relevant ML thread: http://lists.llvm.org/pipermail/llvm-dev/2020-April/140729.html

Differential Revision: https://reviews.llvm.org/D88787
2020-10-07 10:36:44 -07:00
Jay Foad 1aa8e6a51a [SDag] SimplifyDemandedBits: simplify to FP constant if all bits known
We were already doing this for integer constants. This patch implements
the same thing for floating point constants.

Differential Revision: https://reviews.llvm.org/D88570
2020-10-07 09:24:38 +01:00
Chen Zheng ed46e84c7a [MachineInstr] exclude call instruction in mayAlias
Currently we get a NoAlias result when querying mayAlias between a call
instruction and other load/store/call instructions.
This is not right: a call instruction is not marked mayLoad or mayStore,
but it may still alter memory.

This patch fixes this wrong alias query.

Differential Revision: https://reviews.llvm.org/D87490
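
A minimal standalone model of the conservative rule this patch enforces (illustrative types only, not the MachineInstr API): any query involving a call must not claim "no alias", even though the call carries neither mayLoad nor mayStore.

```
#include <cassert>

struct Inst {
  bool MayLoad = false;
  bool MayStore = false;
  bool IsCall = false;
};

// Returns true if the two instructions may access the same memory.
bool mayAlias(const Inst &A, const Inst &B) {
  if (A.IsCall || B.IsCall)
    return true;  // A call can touch arbitrary memory: stay conservative.
  return (A.MayLoad || A.MayStore) && (B.MayLoad || B.MayStore);
}

int main() {
  Inst Store; Store.MayStore = true;
  Inst Call;  Call.IsCall = true;
  assert(mayAlias(Store, Call));  // Must not be reported as "no alias".
}
```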
2020-10-07 00:12:21 -04:00
Bill Wendling d2c61d2bf9 [CodeGen][TailDuplicator] Don't duplicate blocks with INLINEASM_BR
Tail duplication of a block with an INLINEASM_BR may result in a PHI
node on the indirect branch. This is okay, but it also introduces a copy
for that PHI node *after* the INLINEASM_BR, which is not okay.

See: https://github.com/ClangBuiltLinux/linux/issues/1125

Differential Revision: https://reviews.llvm.org/D88823
2020-10-06 18:44:59 -07:00
Mircea Trofin d85b845cb2 [NFC][MC] Type uses of MCRegUnitIterator as MCRegister
This is one of many subsequent similar changes. Note that we're ok with
the parameter being typed as MCPhysReg, as MCPhysReg -> MCRegister is a
correct conversion; Register -> MCRegister assumes the former is indeed
physical, so we stop relying on the implicit conversion and use the
explicit, value-asserting asMCReg().

Differential Revision: https://reviews.llvm.org/D88862
2020-10-06 12:09:56 -07:00
Dmitri Gribenko b3876ef490 Silence -Wunused-variable in NDEBUG mode 2020-10-06 16:02:17 +02:00
Denis Antrushin c08d48fc2d [Statepoints] Change statepoint machine instr format to better suit VReg lowering.
Current Statepoint MI format is this:

   STATEPOINT
   <id>, <num patch bytes >, <num call arguments>, <call target>,
   [call arguments...],
   <StackMaps::ConstantOp>, <calling convention>,
   <StackMaps::ConstantOp>, <statepoint flags>,
   <StackMaps::ConstantOp>, <num deopt args>, [deopt args...],
   <gc base/derived pairs...> <gc allocas...>

Note that GC pointers are listed in pairs <base,derived>.
This causes base pointers to appear many times (at least twice) in the
instruction, which is bad for us when VReg lowering is on.
The problem is that machine operand tiedness is a 1-1 relation, so
it might look like this:

  %vr2 = STATEPOINT ... %vr1, %vr1(tied-def0)

Since only one instance of %vr1 is tied, that may lead to incorrect
codegen (see PR46917 for more details), so we have to always spill
base pointers. This mostly defeats new VReg lowering scheme.

This patch changes statepoint instruction format so that every
gc pointer appears only once in operand list. That way they all can
be tied. Additional set of operands is added to preserve base-derived
relation required to build stackmap.
New statepoint has following format:

  STATEPOINT
  <id>, <num patch bytes>, <num call arguments>, <call target>,
  [call arguments...],
  <StackMaps::ConstantOp>, <calling convention>,
  <StackMaps::ConstantOp>, <statepoint flags>,
  <StackMaps::ConstantOp>, <num deopt args>, [deopt args...],
  <StackMaps::ConstantOp>, <num gc pointers>, [gc pointers...],
  <StackMaps::ConstantOp>, <num gc allocas>,  [gc allocas...]
  <StackMaps::ConstantOp>, <num entries in gc map>, [base/derived indices...]

Changes are:
  - every gc pointer is listed only once in a flat length-prefixed list;
  - the alloca list is prefixed with its length too;
  - following the alloca list is a length-prefixed list of base-derived
    indices of pointers from the gc pointer list. Note that the indices are
    logical (the number of the pointer), not absolute (the index of the machine operand).

Differential Revision: https://reviews.llvm.org/D87154
2020-10-06 17:40:29 +07:00
David Sherwood 4ed47d50ea [SVE][CodeGen] Fix DAGCombiner::ForwardStoreValueToDirectLoad for scalable vectors
In DAGCombiner::ForwardStoreValueToDirectLoad I have fixed up some
implicit casts from TypeSize -> uint64_t and replaced calls to
getVectorNumElements() with getVectorElementCount(). There are some
simple cases of forwarding that we can definitely support for
scalable vectors, i.e. when the store and load are both scalable
vectors and have the same size. I have added tests for the new
code paths here:

  CodeGen/AArch64/sve-forward-st-to-ld.ll

Differential Revision: https://reviews.llvm.org/D87098
2020-10-06 08:04:03 +01:00
Carl Ritson ea9d6392f4 Fix reordering of instructions during VirtRegRewriter unbundling
When unbundling COPY bundles in VirtRegRewriter the start of the
bundle is not correctly referenced in the unbundling loop.

The effect of this is that unbundled instructions are sometimes
inserted out of order, particularly in cases where multiple
reorderings have been applied to avoid clobbering dependencies.
The resulting instruction sequence then clobbers dependencies.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D88821
2020-10-06 09:43:02 +09:00
Mircea Trofin b268e24d43 [NFC][regalloc] Separate iteration from AllocationOrder
This separates the two concerns - encapsulation of traversal order; and
iteration.

Differential Revision: https://reviews.llvm.org/D88256
2020-10-05 16:13:18 -07:00
Craig Topper 1127662c6d [SelectionDAG] Make sure FMF are propagated when getSetcc canonicalizes FP constants to RHS.
getNode handling for ISD::SETCC calls FoldSETCC, which can canonicalize
FP constants to the RHS. When this happens we should create the node
with the FMF that was requested. By using FlagInserter we can ensure
any calls to getNode/getSetcc during canonicalization also get the flags.

Differential Revision: https://reviews.llvm.org/D88063
2020-10-05 14:55:23 -07:00
Fangrui Song 27e1cc6f39 Cleanup CodeGen/CallingConvLower.cpp
Patch by pi1024e (email unavailable)

Differential Revision: https://reviews.llvm.org/D82593
2020-10-05 14:47:46 -07:00
Jon Roelofs db80cc397e [CodeGen][MachineSched] Fixup function name typo. NFC 2020-10-05 12:43:50 -07:00
Mircea Trofin 82ebbcfb05 [NFC][regalloc] Model weight normalization as a virtual
Continuing from D88499, we can now model the normalization function as a
virtual member of VirtRegAuxInfo. Note that the default
(normalizeSpillWeight) is also used stand-alone in RAGreedy.

Differential Revision: https://reviews.llvm.org/D88713
2020-10-05 11:33:07 -07:00
David Sherwood fa0293081d [SVE][CodeGen] Fix TypeSize/ElementCount related warnings in sve-split-store.ll
I have fixed up a number of warnings resulting from TypeSize -> uint64_t
casts and calling getVectorNumElements() on scalable vector types. I
think most of the changes are fairly trivial except for those in
DAGTypeLegalizer::SplitVecRes_MSTORE, where I've tried to ensure we create
the MachineMemOperands in a sensible way for scalable vectors.

I have added a CHECK line to the following test:

 CodeGen/AArch64/sve-split-store.ll

that ensures no new warnings are added.

Differential Revision: https://reviews.llvm.org/D86928
2020-10-05 19:27:00 +01:00
Amara Emerson c2bce848ec [GlobalISel] Fix CSEMIRBuilder silently allowing use-before-def.
If a CSEMIRBuilder query hits the instruction at the current insert point,
move insert point ahead one so that subsequent uses of the builder don't end up with
uses before defs.

This fix also shows that AMDGPU was also affected by this bug often, but got away
with it because it was using a G_IMPLICIT_DEF before the use.

Differential Revision: https://reviews.llvm.org/D88605
2020-10-05 11:00:00 -07:00
Qiu Chaofan b326d4ff94 [SelectionDAG] Don't remove unused negated constant immediately
This partially reverts a2fb5446 (actually, 2508ef01), which removed a
negated FP constant immediately if it had no uses. However, as discussed
in bug 47517, there are cases where NegX is folded into a constant from
other places while NegY is removed by that line of code, and NegX is
equal to NegY. In these cases, NegX is deleted before it is used and a crash
happens. So revert the code and add the necessary test case.
2020-10-06 01:16:45 +08:00
Sanjay Patel 2ccbf3dbd5 [SDAG] fold x * 0.0 at node creation time
In the motivating case from https://llvm.org/PR47517
we create a node that does not get constant folded
before getNegatedExpression is attempted from some
other node, and we crash.

By moving the fold into SelectionDAG::simplifyFPBinop(),
we get the constant fold sooner and avoid the problem.
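
For context, a small standalone C++ illustration of why x * 0.0 cannot be folded to 0.0 unconditionally and is only folded under the appropriate fast-math conditions (the exact guard lives in the patch, not in this sketch): NaNs, infinities, and signed zeros all break the naive fold.

```
#include <cassert>
#include <cmath>
#include <limits>

int main() {
  const double Inf = std::numeric_limits<double>::infinity();
  const double NaN = std::numeric_limits<double>::quiet_NaN();

  assert(std::isnan(NaN * 0.0));    // NaN * 0.0 is NaN, not 0.0.
  assert(std::isnan(Inf * 0.0));    // Inf * 0.0 is NaN, not 0.0.
  assert(std::signbit(-1.0 * 0.0)); // -1.0 * 0.0 is -0.0, not +0.0.
}
```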
2020-10-04 11:31:57 -04:00
Denis Antrushin 7b19cd06d7 [Statepoints][ISEL] visitGCRelocate: chain to current DAG root.
This is similar to D87251, but for CopyFromRegs nodes.
Even for local statepoint uses we generate CopyToRegs/CopyFromRegs
nodes.  When generating CopyFromRegs in visitGCRelocate, we must chain
to current DAG root, not EntryNode, to ensure proper ordering of copy
w.r.t. statepoint node producing result for it.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D88639
2020-10-02 21:41:22 +07:00
David Sherwood b0ce9f0f4c [SVE][CodeGen] Fix implicit TypeSize->uint64_t casts in TypePromotion
The TypePromotion pass only operates on scalar types so I've fixed up
all places where we were relying upon the implicit cast from
TypeSize->uint64_t.

Differential Revision: https://reviews.llvm.org/D88575
2020-10-02 08:12:11 +01:00
David Sherwood b8ce6a6756 [SVE][CodeGen] Add new EVT/MVT getFixedSizeInBits() functions
When we know that a particular type is always going to be fixed
width we have so far been writing code like this:

  getSizeInBits().getFixedSize()

Since we are doing this in quite a few places now it seems to make
sense to add a new helper function that allows us to replace
these calls with a single getFixedSizeInBits() call.

Differential Revision: https://reviews.llvm.org/D88649
2020-10-02 07:47:31 +01:00
Carl Ritson 5136f4748a CodeGen: Fix livein calculation in MachineBasicBlock splitAt
Fix and simplify computation of liveins for new block.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D88535
2020-10-02 10:45:04 +09:00
jasonliu 78a9e62aa6 [XCOFF] Enable -fdata-sections on AIX
Summary:
Some design decisions worth noting:

I've noticed a recent mailing list discussion about why string literals are
not affected by -fdata-sections for ELF targets:
http://lists.llvm.org/pipermail/llvm-dev/2020-September/145121.html

But on AIX, our linker cannot split mergeable strings the way other targets can.
So I think it makes more sense for us to emit a separate csect for
every mergeable string in -fdata-sections mode,
as there might not be another way for the linker to do garbage collection
on unused mergeable strings.

Reviewed By: daltenty, hubert.reinterpretcast

Differential Revision: https://reviews.llvm.org/D88339
2020-10-02 00:16:24 +00:00
Arthur Eubanks 499260c03b Revert "[CFGuard] Add address-taken IAT tables and delay-load support"
This reverts commit ef4e971e5e.
2020-10-01 11:29:54 -07:00
David Sherwood 15474d7691 [SVE][CodeGen] Replace use of TypeSize operator< in GlobalMerge::doMerge
We don't support global variables with scalable vector types so I've
changed the code to compare the fixed sizes instead.

Differential Revision: https://reviews.llvm.org/D88564
2020-10-01 14:06:59 +01:00
Andrew Paverd ef4e971e5e [CFGuard] Add address-taken IAT tables and delay-load support
This patch adds support for creating Guard Address-Taken IAT Entry Tables (.giats$y sections) in object files, matching the behavior of MSVC. These contain lists of address-taken imported functions, which are used by the linker to create the final GIATS table.
Additionally, if any DLLs are delay-loaded, the linker must look through the .giats tables and add the respective load thunks of address-taken imports to the GFIDS table, as these are also valid call targets.

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D87544
2020-10-01 12:45:07 +01:00
Kerry McLaughlin fcf70e1e3b [SVE][CodeGen] Lower scalable fp_extend & fp_round operations
This patch adds FP_EXTEND_MERGE_PASSTHRU & FP_ROUND_MERGE_PASSTHRU
ISD nodes, used to lower scalable vector fp_extend/fp_round operations.
fp_round has an additional argument, the 'trunc' flag, which is an integer of zero or one.

This also fixes a warning introduced by the new tests added to sve-split-fcvt.ll,
resulting from an implicit TypeSize -> uint64_t cast in SplitVecOp_FP_ROUND.

Reviewed By: sdesmalen, paulwalker-arm

Differential Revision: https://reviews.llvm.org/D88321
2020-10-01 12:17:37 +01:00
Rahman Lavaee 8955950c12 Exception support for basic block sections
This is part of the Propeller framework to do post link code layout optimizations. Please see the RFC here: https://groups.google.com/forum/#!msg/llvm-dev/ef3mKzAdJ7U/1shV64BYBAAJ and the detailed RFC doc here: https://github.com/google/llvm-propeller/blob/plo-dev/Propeller_RFC.pdf

This patch provides exception support for basic block sections by splitting the call-site table into call-site ranges corresponding to different basic block sections. Still, all landing pads must reside in the same basic block section (which is guaranteed by the core basic block section patch D73674 (ExceptionSection)). Each call-site table will refer to the landing pad fragment by explicitly specifying @LPstart (which is omitted in the normal non-basic-block section case). All these call-site tables will share their action and type tables.

The C++ ABI somehow assumes that no landing pads point directly to LPStart (which works in the normal case since the function begin is never a landing pad), and uses LP.offset = 0 to specify no landing pad. In the case of basic block section where one section contains all the landing pads, the landing pad offset relative to LPStart could actually be zero. Thus, we avoid zero-offset landing pads by inserting a **nop** operation as the first non-CFI instruction in the exception section.

**Background on Exception Handling in C++ ABI**
https://github.com/itanium-cxx-abi/cxx-abi/blob/master/exceptions.pdf

Compiler emits an exception table for every function. When an exception is thrown, the stack unwinding library queries the unwind table (which includes the start and end of each function) to locate the exception table for that function.

The exception table includes a call site table for the function, which is used to guide the exception handling runtime to take the appropriate action upon an exception. Each call site record in this table is structured as follows:

| CallSite                       |  -->  Position of the call site (relative to the function entry)
| CallSite length           |  -->  Length of the call site.
| Landing Pad               |  -->  Position of the landing pad (relative to the landing pad fragment’s begin label)
| Action record offset  |  -->  Position of the first action record

The call site records partition a function into different pieces and describe what action must be taken for each callsite. The callsite fields are relative to the start of the function (as captured in the unwind table).

The landing pad entry is a reference into the function and corresponds roughly to the catch block of a try/catch statement. When execution resumes at a landing pad, it receives an exception structure and a selector value corresponding to the type of the exception thrown, and executes similar to a switch-case statement. The landing pad field is relative to the beginning of the procedure fragment which includes all the landing pads (@LPStart). The C++ ABI requires all landing pads to be in the same fragment. Nonetheless, without basic block sections, @LPStart is the same as the function @Start (found in the unwind table) and can be omitted.

The action record offset is an index into the action table which includes information about which exception types are caught.

**C++ Exceptions with Basic Block Sections**
Basic block sections break the contiguity of a function fragment. Therefore, call sites must be specified relative to the beginning of the basic block section. Furthermore, the unwinding library should be able to find the corresponding callsites for each section. To do so, the .cfi_lsda directive for a section must point to the range of call-sites for that section.
This patch introduces a new **CallSiteRange** structure which specifies the range of call-sites which correspond to every section:

  `struct CallSiteRange {
    // Symbol marking the beginning of the procedure fragment.
    MCSymbol *FragmentBeginLabel = nullptr;
    // Symbol marking the end of the procedure fragment.
    MCSymbol *FragmentEndLabel = nullptr;
    // LSDA symbol for this call-site range.
    MCSymbol *ExceptionLabel = nullptr;
    // Index of the first call-site entry in the call-site table which
    // belongs to this range.
    size_t CallSiteBeginIdx = 0;
    // Index just after the last call-site entry in the call-site table which
    // belongs to this range.
    size_t CallSiteEndIdx = 0;
    // Whether this is the call-site range containing all the landing pads.
    bool IsLPRange = false;
  };`

With N basic-block-sections, the call-site table is partitioned into N call-site ranges.

Conceptually, we emit the call-site ranges for sections sequentially in the exception table as if each section has its own exception table. In the example below, two sections result in the two call site ranges (denoted by LSDA1 and LSDA2) placed next to each other. However, their call-sites will refer to records in the shared Action Table. We also emit the header fields (@LPStart and CallSite Table Length) for each call site range in order to place the call site ranges in separate LSDAs. We note that with -basic-block-sections, the CallSiteTableLength will not actually represent the length of the call site table, but rather the reference to the action table. Since the only purpose of this field is to locate the action table, correctness is guaranteed.

Finally, every call site range has one @LPStart pointer so the landing pads of each section must all reside in one section (not necessarily the same section). To make this easier, we decide to place all landing pads of the function in one section (hence the `IsLPRange` field in CallSiteRange).

|  @LPStart                   |  --->  Landing pad fragment     ( LSDA1 points here)
| CallSite Table Length | ---> Used to find the action table.
| CallSites                     |
| …                                 |
| …                                 |
| @LPStart                    |  --->  Landing pad fragment ( LSDA2 points here)
| CallSite Table Length |
| CallSites                     |
| …                                 |
| …                                 |
…
…
|      Action Table          |
|      Types Table           |

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D73739
2020-09-30 11:05:55 -07:00
Mircea Trofin d6de40f886 [NFC][regalloc] Make VirtRegAuxInfo part of allocator state
All the state of VRAI is allocator-wide, so we can avoid creating it
every time we need it. In addition, the normalization function is
allocator-specific. In a next change, we can simplify that design in
favor of just having it as a virtual member.

Differential Revision: https://reviews.llvm.org/D88499
2020-09-30 08:13:05 -07:00
Matt Arsenault 5aa1119537 GlobalISel: Assert if MoreElements uses a non-vector type 2020-09-30 10:36:00 -04:00
Matt Arsenault d93459992e LiveDebugValues: Fix typos and indentation 2020-09-30 10:35:25 -04:00
Matt Arsenault a66fca44ac RegAllocFast: Add extra DBG_VALUE for live out spills
This allows LiveDebugValues to insert the proper DBG_VALUEs in live
out blocks if a spill is inserted before the use of a
register. Previously, this would see the register use as the last
DBG_VALUE, even though the stack slot should be treated as the live
out value.

This avoids an lldb test regression when D52010 is re-applied.
2020-09-30 10:35:25 -04:00
Matt Arsenault 89baeaef2f Reapply "RegAllocFast: Rewrite and improve"
This reverts commit 73a6a164b8.
2020-09-30 10:35:25 -04:00
Gabriel Hjort Åkerlund 43d239d0fa [GlobalISel] Fix incorrect setting of ValNo when splitting
Before, for each original argument i, ValNo was set to i + PartIdx, but
ValNo is intended to reflect the index of the value before splitting.
Hence, ValNo should always be set to i and not consider the PartIdx.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D86511
2020-09-30 16:08:51 +02:00
Sam Parker 3f88c10a6b [RDA] isSafeToDefRegAt: Look at global uses
We weren't looking at global uses of a value, so we could happily
overwrite the register incorrectly.

Differential Revision: https://reviews.llvm.org/D88554
2020-09-30 14:06:45 +01:00
Jay Foad cdac4492b4 [SplitKit] Cope with no live subranges in defFromParent
Following on from D87757 "[SplitKit] Only copy live lanes", it is
possible to split a live range at a point when none of its subranges
are live. This patch handles that case by inserting an implicit def
of the superreg.

Patch by Quentin Colombet!

Differential Revision: https://reviews.llvm.org/D88397
2020-09-30 10:16:25 +01:00
Sam Parker 779a8a028f [ARM][LowOverheadLoops] TryRemove helper.
Make a helper function that wraps around RDA::isSafeToRemove and
utilises the existing DCE IT block checks.
2020-09-30 09:37:24 +01:00
Sam Parker 700f93e92b [RDA] Switch isSafeToMove iterators
So forwards is forwards and backwards is reverse. Also add a check
so that we know the instructions are in the expected order.

Differential Revision: https://reviews.llvm.org/D88419
2020-09-30 08:10:48 +01:00
Amara Emerson 1d54e75cf2 [GlobalISel] Fix multiply with overflow intrinsics legalization generating invalid MIR.
During lowering of G_UMULO and friends, the previous code moved the builder's
insertion point to be after the legalizing instruction. When that happened, if
there happened to be a "G_CONSTANT i32 0" immediately after, the CSEMIRBuilder
would try to find that constant during the buildConstant(zero) call, and since
it dominates itself would return the iterator unchanged, even though the def
of the constant was *after* the current insertion point. This resulted in the
compare being generated *before* the constant which it was using.

There's no need to modify the insertion point before building the mul-hi or
constant. Delaying moving the insert point ensures those are built/CSEd before
the G_ICMP is built.

Fixes PR47679

Differential Revision: https://reviews.llvm.org/D88514
2020-09-29 18:40:58 -07:00
Zequan Wu 6c91e623e5 [CodeGen] emit CG profile for COFF object file
Differential Revision: https://reviews.llvm.org/D87811
2020-09-29 12:03:30 -07:00
Mircea Trofin 6d193ba333 [NFC][regalloc] Unit test for AllocationOrder iteration.
Added unittests. In the process, separated core construction - which just
needs the hints, order, and 'HardHints' values - from construction from
current register allocation state, to simplify testing.

Differential Revision: https://reviews.llvm.org/D88455
2020-09-29 10:48:07 -07:00
Krzysztof Parzyszek db04bec5f1 [SDAG] Do not convert undef to 0 when folding CONCAT/BUILD_VECTOR
Differential Revision: https://reviews.llvm.org/D88273
2020-09-29 09:12:26 -05:00
Dominik Montada 113114a5da [GlobalISel] fix widenScalarUnmerge if widen type is not a multiple of destination type
Fix creation of an illegal unmerge when widening was requested to a type which
is not a multiple of the destination type. E.g. when trying to widen
an s48 unmerge to s64 the existing code would create an illegal unmerge
from s64 to s48.

Instead, create further unmerges to a GCD type, then use this to remerge
these intermediate results to the actual destinations.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D88422
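
A standalone sketch of the arithmetic behind the fix, using the s64/s48 example above (plain integers, not the LegalizerHelper code): the GCD type determines how many pieces each widened value unmerges into and how many pieces re-merge into each destination.

```
#include <cassert>
#include <numeric>

int main() {
  const unsigned WideBits = 64, DstBits = 48;
  const unsigned GCD = std::gcd(WideBits, DstBits);
  assert(WideBits % DstBits != 0); // A direct s64 -> s48 unmerge would be illegal.
  assert(GCD == 16);               // Unmerge to the GCD type instead.
  assert(WideBits / GCD == 4);     // Each s64 splits into four s16 pieces.
  assert(DstBits / GCD == 3);      // Every three s16 pieces re-merge into one s48.
}
```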
2020-09-29 15:52:20 +02:00
Jay Foad 781edd501c [SDag] Verify DAG divergence after dumping. NFC.
When debugging, it's useful to be able to see the DAG that has just
failed divergence verification.
2020-09-29 14:05:07 +01:00
Jay Foad d6b04f3937 [SDag] Refactor and simplify divergence calculation and checking. NFC. 2020-09-29 14:05:07 +01:00
Ruiling Song 73805329ba [RegisterCoalescer] Pass Undefs to extendToIndices()
When extending the subranges, the reaching def may be an undef. When
extending such a subrange, it will try to search for the reaching
def first. If the reaching def is an undef and we did not provide 'Undefs',
findReachingDefs() will fail with the message:
"Use of $noreg does not have a corresponding definition on every path:
 LLVM ERROR: Use not jointly dominated by defs."
So we call computeSubRangeUndefs() and pass the result to extendToIndices().

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D87744
2020-09-29 08:14:24 +08:00
Fangrui Song bd08a87cfe [EHStreamer] Simplify sharedTypeIDs with std::mismatch
(Note that EHStreamer.cpp is largely under-tested. The only test checking the prefix sharing is CodeGen/WebAssembly/eh-lsda.ll)
2020-09-28 15:05:59 -07:00
Amara Emerson 082321909e [GlobalISel] Add support for lowering of vector G_SELECT and use for AArch64.
The lowering is a port of the SDAG expansion.

Differential Revision: https://reviews.llvm.org/D88364
2020-09-28 14:00:46 -07:00
Jessica Paquette a52e78012a [GlobalISel] Combine (xor (and x, y), y) -> (and (not x), y)
When we see this:

```
%and = G_AND %x, %y
%xor = G_XOR %and, %y
```

Produce this:

```
%not = G_XOR %x, -1
%new_and = G_AND %not, %y
```

as long as we are guaranteed to eliminate the original G_AND.

Also matches all commuted forms. E.g.

```
%and = G_AND %y, %x
%xor = G_XOR %y, %and
```

will be matched as well.

Differential Revision: https://reviews.llvm.org/D88104
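
A standalone brute-force check of the underlying boolean identity (not the GlobalISel matcher): (x & y) ^ y equals (~x) & y for every bit pattern, which is what makes the combine safe whenever the original G_AND can be eliminated.

```
#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t X = 0; X < 256; ++X)
    for (uint32_t Y = 0; Y < 256; ++Y) {
      uint8_t Lhs = static_cast<uint8_t>((X & Y) ^ Y); // (xor (and x, y), y)
      uint8_t Rhs = static_cast<uint8_t>(~X & Y);      // (and (not x), y)
      assert(Lhs == Rhs);
    }
}
```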
2020-09-28 10:08:14 -07:00
David Sherwood bafdd11326 [SVE] Replace / operator in TypeSize/ElementCount with divideCoefficientBy
After some recent upstream discussion we decided that it was best
to avoid having the / operator for both ElementCount and TypeSize,
since this could give the impression that these classes can be used
in the same way as basic integer types.
for scalable types is a bit odd because we are only dividing the
minimum quantity by a value, as opposed to something like:

  (MinSize * Vscale) / SomeValue

This is why when performing division it's important the caller
first establishes whether the operation makes sense, perhaps by
calling isKnownMultipleOf() prior to division. The caller must now
explicitly call divideCoefficientBy() on the class to perform the
operation.

Differential Revision: https://reviews.llvm.org/D87700
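
A tiny standalone model of the idea (the struct below only mirrors the description; it is not the real ElementCount/TypeSize API): division scales the known minimum coefficient and leaves the scalable factor alone, and callers are expected to check divisibility first.

```
#include <cassert>

// Illustrative stand-in: only the minimum element count ("coefficient") is
// known at compile time; a scalable count is also multiplied by a runtime vscale.
struct SimpleElementCount {
  unsigned Min;
  bool Scalable;

  bool isKnownMultipleOf(unsigned N) const { return Min % N == 0; }
  SimpleElementCount divideCoefficientBy(unsigned N) const {
    return {Min / N, Scalable};
  }
};

int main() {
  SimpleElementCount EC{8, /*Scalable=*/true};   // <vscale x 8 x ...>
  assert(EC.isKnownMultipleOf(2));               // Caller establishes divisibility...
  SimpleElementCount Half = EC.divideCoefficientBy(2);
  assert(Half.Min == 4 && Half.Scalable);        // ...then gets <vscale x 4 x ...>.
}
```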
2020-09-28 08:03:00 +01:00
Arthur Eubanks a2578e92e2 Revert "Reland [CodeGen] emit CG profile for COFF object file"
This reverts commit 506b6170cb.

This still causes link errors, see https://crbug.com/1130780.
2020-09-27 22:43:14 -07:00
Nikita Popov f229bf2e12 [Legalize][X86] Improve nnan fmin/fmax vector reduction
Use +/-Inf or +/-Largest as neutral element for nnan fmin/fmax
reductions. This avoids dropping any FMF flags. Preserving the
nnan flag in particular is important to get a good lowering on X86.

Differential Revision: https://reviews.llvm.org/D87586
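
A small standalone sketch of the neutral-element idea (plain scalar C++, not the legalizer code): when NaNs are excluded, +infinity is an identity for fmin and -infinity for fmax, so seeding the reduction with it changes nothing and no element has to be peeled off.

```
#include <algorithm>
#include <cassert>
#include <limits>
#include <vector>

int main() {
  const std::vector<double> V = {3.0, -1.5, 7.25, 0.5};
  double Mn = std::numeric_limits<double>::infinity();  // Neutral element for fmin.
  double Mx = -std::numeric_limits<double>::infinity(); // Neutral element for fmax.
  for (double X : V) {
    Mn = std::min(Mn, X);
    Mx = std::max(Mx, X);
  }
  assert(Mn == -1.5 && Mx == 7.25); // Seeding with +/-Inf did not change the result.
}
```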
2020-09-27 10:47:35 +02:00
Chen Zheng c8f6c0f961 [Machinesink] add one more profitable loop related pattern
Reviewed By: qcolombet

Differential Revision: https://reviews.llvm.org/D86925
2020-09-26 21:02:21 -04:00
Simon Pilgrim a61272a900 [DAG] Fold vector mul(x,0)/mul(x,1) to a clearing mask
If we're multiplying all elements of a vector by '0' or '1' then we can more efficiently perform this as a clearing mask (that is likely to further simplify to a shuffle blend).

This was noticed when reviewing D87502 but seems to help idiv/irem by constant cases even more as '0'/'1' values are often used for 'passthrough' cases.

Differential Revision: https://reviews.llvm.org/D88225
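
A small standalone sketch of the observation (a scalar loop instead of real vector nodes): when every multiplier lane is 0 or 1, the multiply is equivalent to keeping or clearing the lane, i.e. a mask or blend with zero.

```
#include <array>
#include <cassert>
#include <cstdint>

int main() {
  const std::array<uint32_t, 4> X = {7, 9, 11, 13};
  const std::array<uint32_t, 4> M = {0, 1, 0, 1}; // Multiplier lanes are only 0 or 1.
  for (std::size_t I = 0; I < X.size(); ++I) {
    uint32_t ViaMul = X[I] * M[I];
    uint32_t ViaMask = M[I] ? X[I] : 0;           // Clearing mask / blend with zero.
    assert(ViaMul == ViaMask);
  }
}
```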
2020-09-26 14:31:57 +01:00
Simon Pilgrim decc1944f3 MachineCSE.cpp - use auto const& iterators in for-range loops to avoid copies. NFCI. 2020-09-26 14:31:57 +01:00
Simon Atanasyan c6c5629f2f [CodeGen] Do not call `emitGlobalConstantLargeInt` for constants that require 8 bytes to store
This is a fix for PR47630. The regression was caused by D78011. After
that change the code started to call `emitGlobalConstantLargeInt` even
for constants which require only eight bytes to store.

Differential revision: https://reviews.llvm.org/D88261
2020-09-26 08:58:46 +03:00
Qiu Chaofan c0f8e4c06c [SelectionDAG] Add guard to automatically insert flags
This is like FastMathFlagGuard in IR. Since we use the SDAG instance to
create values, it lives with SelectionDAG. By creating a FlagInserter in the
current scope, all values created by getNode will get the flags if no Flags
argument is provided.

In this patch, I applied it to the floating-point operation folding part of the
DAG combiner, and removed the Flags passing to getNode to show its effect.
Other places in the DAG combiner and other helper methods similar to getNode
also need this; they can be done in follow-up patches.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D87361
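
A minimal standalone sketch of the RAII guard pattern being described (illustrative names; not the SelectionDAG FlagInserter itself): while the guard is alive, newly created values pick up the ambient flags, and the previous state is restored on scope exit.

```
#include <cassert>

struct Flags { bool NoNaNs = false; };

struct Builder {
  const Flags *Ambient = nullptr;           // Set while a guard is in scope.
  Flags create() { return Ambient ? *Ambient : Flags{}; }
};

class FlagGuard {
  Builder &B;
  const Flags *Saved;
public:
  FlagGuard(Builder &Bld, const Flags &F) : B(Bld), Saved(Bld.Ambient) {
    B.Ambient = &F;                         // Install the ambient flags.
  }
  ~FlagGuard() { B.Ambient = Saved; }       // Restore on scope exit.
};

int main() {
  Builder B;
  Flags FMF;
  FMF.NoNaNs = true;
  {
    FlagGuard G(B, FMF);
    assert(B.create().NoNaNs);              // Flags applied implicitly.
  }
  assert(!B.create().NoNaNs);               // Back to the default after the scope.
}
```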
2020-09-26 13:57:52 +08:00
Dávid Bolvanský 179e15d53a [SystemZ] Optimize bcmp calls (PR47420)
Solves https://bugs.llvm.org/show_bug.cgi?id=47420

Reviewed By: uweigand

Differential Revision: https://reviews.llvm.org/D87988
2020-09-25 17:55:39 +02:00
Jay Foad b34ddfcc76 [SplitKit] In addDeadDef tolerate parent range that defines more lanes
Following on from D87757 "[SplitKit] Only copy live lanes", in
SplitEditor::addDeadDef, when we're checking whether the parent live
interval has a subrange defining the same lanes, tolerate the case
where the parent subrange defines a superset of the lanes. This can
happen when the child subrange comes from SplitEditor::buildCopy
decomposing a partial copy into a sequence of subreg copies that cover
the required lanes.

Differential Revision: https://reviews.llvm.org/D88020
2020-09-25 11:31:56 +01:00
Sam Parker a399d1880b [ARM] Find VPT implicitly predicated by VCTP
On failing to find a VCTP in the list of instructions that explicitly
predicate the entry of a VPT block, inspect whether the block is
controlled via VPT which is implicitly predicated due to its
predicated operand(s).

Differential Revision: https://reviews.llvm.org/D87819
2020-09-25 08:50:53 +01:00
Snehasish Kumar d2696dec45 [llvm] Add -bbsections-cold-text-prefix to emit cold clusters to a different section.
This change adds an option to basic block sections to allow cold
clusters to be assigned a custom text prefix. With a custom prefix such
as ".text.split." (D87840), lld can place them in a separate output section.
The benefits are -

* Empirically shown to improve icache and itlb metrics by 3-5%
(absolute) compared to placing split parts in .text.unlikely.
* Mitigates against poor profiles, eg samplePGO profiles used with the
machine function splitter. Optimizations such as hugepage remapping can
make different decisions at the section granularity.
* Enables section granularity hotness monitoring (checking on the
decisions made during compilation vs sample data from production).

Differential Revision: https://reviews.llvm.org/D87813
2020-09-24 15:26:15 -07:00
Sriraman Tallam e39286510d Temporary fix for D85085 debug_loc bug with basic block sections.
Until a proper fix lands, this one-line fix removes the assertion failure with basic block sections
and debug info. Bug tracking this: #47549
This fix does not generate loc list or DW_AT_const_value if the argument is
mentioned in a different section than the start of the function.

Temporarily fixes bugzilla : https://bugs.llvm.org/show_bug.cgi?id=47549

Differential Revision: https://reviews.llvm.org/D87787
2020-09-24 14:41:49 -07:00
Zequan Wu 506b6170cb Reland [CodeGen] emit CG profile for COFF object file
This reverts commit 90242caca2.

Error fixed at f5435399e8

Differential Revision: https://reviews.llvm.org/D87811
2020-09-24 14:38:53 -07:00
Bill Wendling 0c0c57f7b2 Revert "[CodeGen] Postprocess PHI nodes for callbr"
Accidental commit.

This reverts commit 7f4c940bd0.
2020-09-24 14:35:23 -07:00
Bill Wendling 7f4c940bd0 [CodeGen] Postprocess PHI nodes for callbr
When processing PHI nodes after a callbr, we need to make sure that the
PHI nodes on the default branch are resolved after the callbr
(inserted after INLINEASM_BR). The PHI node values on the indirect
branches are processed before the INLINEASM_BR.

Differential Revision: https://reviews.llvm.org/D86260
2020-09-24 14:34:28 -07:00
Mircea Trofin 89aad892a5 [NFC][regalloc] Remove unused API in AllocationOrder
Differential Revision: https://reviews.llvm.org/D88197
2020-09-24 12:25:56 -07:00
Matt Arsenault e75afc9acf GlobalISel: Use unmerge when copying wide vectors to result registers
Avoid using G_EXTRACT and move towards a more consistent vector
legalization strategy.
2020-09-24 15:19:51 -04:00
vpykhtin d9beff04a3 [RegisterCoalescer] Fix IMPLICIT_DEF init removal for a register on joining
This patch removes redundant IMPLICIT_DEF for subregs which was leading to
incorrect register initialization on joining in some cases.

Reviewed by: qcolombet

Differential revision: https://reviews.llvm.org/D82258
2020-09-24 17:37:03 +03:00
Alexandre Ganea 4b64ce7428 Improve 723fea2307 - Silence 'warning: unused variable' when compiling with Clang 10.0 2020-09-24 09:07:22 -04:00
Pushpinder Singh 41d6669f1f [GlobalISel][AMDGPU] Lower G_SMULH/G_UMULH
Reviewed By: arsenm, foad

Differential Revision: https://reviews.llvm.org/D85653
2020-09-23 22:25:29 -04:00
Eli Friedman 3f739f736b [SelectionDAG][GISel] Make LegalizeDAG lower FNEG using integer ops.
Previously, if a floating-point type was legal, but FNEG wasn't legal,
we would use FSUB.  Instead, we should use integer ops, to preserve the
semantics.  (Alternatively, there's a compiler-rt call we could use, but
there isn't much reason to use that.)

It turns out we actually are still using this obscure codepath in a few
cases: on some targets, we have "legal" floating-point types that don't
actually support any floating-point operations.  In particular, ARM and
AArch64 are using this path.

The implementation for SelectionDAG is pretty simple because we can
reuse the infrastructure from FCOPYSIGN.

See also 9a3dc3e, the corresponding change to type legalization.

Also includes a "bonus" change to STRICT_FSUB legalization, so we can
lower a STRICT_FSUB to a float libcall.

Includes the changes to both LegalizeDAG and GlobalISel so we don't have
inconsistent results in the future.

Fixes https://bugs.llvm.org/show_bug.cgi?id=46792 .

Differential Revision: https://reviews.llvm.org/D84287
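
A minimal standalone sketch of the integer-op lowering being described (not the LegalizeDAG code): negation by XORing the IEEE sign bit is exact for all inputs, including signed zeros and NaNs, which is why it preserves the semantics where an arithmetic expansion may not.

```
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

float fnegViaInt(float X) {
  uint32_t Bits;
  std::memcpy(&Bits, &X, sizeof(Bits));   // Reinterpret the float as bits.
  Bits ^= 0x80000000u;                    // Flip only the sign bit.
  std::memcpy(&X, &Bits, sizeof(Bits));
  return X;
}

int main() {
  assert(fnegViaInt(1.5f) == -1.5f);
  assert(std::signbit(fnegViaInt(0.0f)));   // -(+0.0) is -0.0 ...
  assert(!std::signbit(fnegViaInt(-0.0f))); // ... and -(-0.0) is +0.0.
  assert(std::isnan(fnegViaInt(NAN)));      // A NaN stays a NaN, sign flipped.
}
```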
2020-09-23 14:10:33 -07:00
Guozhi Wei fd75ad8662 [MBFIWrapper] Add a new function getBlockProfileCount
MBFIWrapper keeps track of the block frequencies of newly created and modified
blocks, and those modified block frequencies should also affect block profile
counts. This class doesn't provide a getBlockProfileCount interface, so users can
only use the underlying MBFI to query profile counts. The underlying MBFI doesn't
know about the modifications made in MBFIWrapper, so it either provides stale
profile counts for modified blocks or simply crashes on new blocks.

So this patch adds a getBlockProfileCount function to MBFIWrapper to handle
new and modified blocks.

Differential Revision: https://reviews.llvm.org/D87802
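
For context, a standalone sketch of how a block profile count is conventionally derived from block frequencies (the function below is illustrative and ignores overflow and scaling details; it is not the MBFIWrapper code): the entry count is scaled by the block's frequency relative to the entry block, so a wrapper that tracks updated frequencies has to do this arithmetic itself.

```
#include <cassert>
#include <cstdint>

// Scale the function entry count by the block's frequency relative to the
// entry block's frequency (overflow handling omitted for brevity).
uint64_t blockProfileCount(uint64_t EntryCount, uint64_t EntryFreq,
                           uint64_t BlockFreq) {
  return EntryFreq ? (EntryCount * BlockFreq) / EntryFreq : 0;
}

int main() {
  // Entry executed 1000 times; a block twice as frequent as the entry.
  assert(blockProfileCount(1000, 8, 16) == 2000);
  // A cold block at a quarter of the entry frequency.
  assert(blockProfileCount(1000, 8, 2) == 250);
}
```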
2020-09-23 09:31:45 -07:00
Matt Arsenault c463fd136e GlobalISel: Fix truncating shift amount in trunc (shl) combine
The shift amount type does not necessarily match the result type. This
was inserting a trunc from s32 to s32, which asserted. Just preserve
the original shift amount type which can be legalized later.
2020-09-23 09:07:50 -04:00
David Sherwood e077367a28 [SVE] Make EVT::getScalarSizeInBits and others consistent with Type::getScalarSizeInBits
An existing function Type::getScalarSizeInBits returns a uint64_t
instead of a TypeSize class because the caller is requesting a
scalar size, which cannot be scalable. This patch makes other
similar functions requesting a scalar size consistent with that,
thereby eliminating more than 1000 implicit TypeSize -> uint64_t
casts.

Differential revision: https://reviews.llvm.org/D87889
2020-09-23 09:20:08 +01:00
Fangrui Song bee68b2956 [EHStreamer] Ensure CallSiteEntry::{BeginLabel,EndLabel} are non-null. NFC
... to simplify the code a bit.

Reviewed By: rahmanl

Differential Revision: https://reviews.llvm.org/D87999
2020-09-22 17:34:43 -07:00
Reid Kleckner 90242caca2 Revert "[CodeGen] emit CG profile for COFF object file"
This reverts commit 91aed9bf97, it is
causing link errors.
2020-09-22 13:47:39 -07:00
Stefanos Baziotis 89c1e35f3c [LoopInfo] empty() -> isInnermost(), add isOutermost()
Differential Revision: https://reviews.llvm.org/D82895
2020-09-22 23:28:51 +03:00
Mircea Trofin d1e0f9f3cf [NFC][regalloc] Simplify/conform to style guide indvars in Greedy
Differential Revision: https://reviews.llvm.org/D88055
2020-09-22 10:55:52 -07:00
Simon Pilgrim 4dada8d617 [DAG] Remove DAGTypeLegalizer::GenWidenVectorTruncStores (PR42046)
Just scalarize trunc stores - GenWidenVectorTruncStores does the same thing but is flawed (PR42046) and unused.

Differential Revision: https://reviews.llvm.org/D87708
2020-09-22 17:24:45 +01:00
Alexandre Ganea 723fea2307 Silence 'warning: unused variable' when compiling with Clang 10.0 2020-09-22 12:17:40 -04:00
Michael Liao 534f6e1718 [PeepholeOptimizer] Enhance the redundant COPY elimination.
- Eliminate redundant COPYs from the same register & subregister pair.

Differential Revision: https://reviews.llvm.org/D87939
2020-09-22 10:11:37 -04:00
Muhammad Omair Javaid 73a6a164b8 Revert "Reapply Revert "RegAllocFast: Rewrite and improve""
This reverts commit 55f9f87da2.

Breaks following buildbots:
http://lab.llvm.org:8011/builders/lldb-arm-ubuntu/builds/4306
http://lab.llvm.org:8011/builders/lldb-aarch64-ubuntu/builds/9154
2020-09-22 14:40:06 +05:00
Mircea Trofin 6a6b06f526 [NFC][regalloc] Use reverse iterator ranges for improved readability
Differential Revision: https://reviews.llvm.org/D88047
2020-09-21 14:58:37 -07:00
Martin Storsjö 36c64af9d7 [CodeGen] [WinException] Only produce handler data at the end of the function if needed
If we are going to write handler data (that is written as variable
length data following after the unwind info in .xdata), we need to
emit the handler data immediately, but for cases where no such
info is going to be written, skip emitting it right away. (Unwind
info for all remaining functions that haven't had it emitted
directly is emitted at the end.)

This does slightly change the ordering of sections (triggering a
bunch of updates to DebugInfo/COFF tests), but the change should be
benign.

This also matches GCC's assembly output, which doesn't output
.seh_handlerdata unless it actually is needed.

For ARM64, the unwind info can be packed into the runtime function
entry itself (leaving no data in the .xdata section at all), but
that can only be done if there's no follow-on data in the .xdata
section. If emission of the unwind info is triggered via
EmitWinEHHandlerData (or the .seh_handlerdata directive), which
implicitly switches to the .xdata section, there's a chance of the
caller wanting to pass further data there, so the packed format
can't be used in that case.

Differential Revision: https://reviews.llvm.org/D87448
2020-09-21 23:42:59 +03:00
Matt Arsenault 55f9f87da2 Reapply Revert "RegAllocFast: Rewrite and improve"
This reverts commit dbd53a1f0c.

Needed lldb test updates
2020-09-21 15:45:27 -04:00
Simon Pilgrim 6a0ed57a22 ImplicitNullChecks.cpp - use auto const& iterators in for-range loops to avoid copies. NFCI. 2020-09-21 17:42:57 +01:00
Simon Pilgrim 3ae07b2a33 TargetPassConfig.cpp - use auto const& iterator in for-range loop to avoid copies. NFCI. 2020-09-21 17:17:11 +01:00
Simon Pilgrim ce294ff8cd MachineCSE.cpp - use auto const& iterator in for-range loop to avoid copies. NFCI. 2020-09-21 16:54:26 +01:00
Denis Antrushin ee86688b81 [Statepoints][ISEL] gc.relocate uniquification should be based on SDValue, not IR Value.
When exporting statepoint results to virtual registers we try to avoid
generating exports for duplicated inputs. But we erroneously use
IR Value* to check if inputs are duplicated. Instead, we should use
SDValue, because even different IR values can get lowered to the same
SDValue.
I'm adding a (degenerate) test case which emphasizes the importance of this
feature for invoke statepoints.
If we fail to export only unique values we will end up with something
like this:

  %0 = STATEPOINT
  %1 = COPY %0

landing_pad:
  <use of %1>

And when the exceptional path is taken, %1 is left uninitialized (the COPY is never
executed).

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D87695
2020-09-21 19:44:46 +07:00
Alexander Belyaev 17dc729bd4 Revert "[NFC][ScheduleDAG] Remove unused EntrySU SUnit"
This reverts commit 0345d88de6.

Google's internal backend uses EntrySU; we are looking into removing
the dependency on it.

Differential Revision: https://reviews.llvm.org/D88018
2020-09-21 13:33:05 +02:00
Lucas Prates 53d238a961 [CodeGen] Fixing inconsistent ABI mangling of values in SelectionDAGBuilder
SelectionDAGBuilder was inconsistently mangling values based on ABI
Calling Conventions when getting them through copyFromRegs in
SelectionDAGBuilder, causing duplicate value type conversions for
function arguments. The check for the mangling requirement was based
on the value's originating instruction and was performed outside of, and
in spite of, the regular Calling Convention Lowering.

The issue could be observed in a scenario such as:

```
%arg1 = load half, half* %const, align 2
%arg2 = call fastcc half @someFunc()
call fastcc void @otherFunc(half %arg1, half %arg2)
; Here, %arg2 was incorrectly mangled twice, as the CallConv data from
; the call to @someFunc() was taken into consideration for the check
; when getting the value for processing the call to @otherFunc(...),
; after the proper convertion had taken place when lowering the return
; value of the first call.
```

This patch fixes the issue by disregarding the Calling Convention
information for such copyFromRegs, making sure the ABI mangling is
properly contained in the Calling Convention Lowering.

This fixes Bugzilla #47454.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D87844
2020-09-21 10:05:34 +01:00
Fangrui Song dbc616e982 [EHStreamer] Fix a "Continue to action" -fverbose-asm comment when multi-byte LEB128 encoding is needed
This only happens with more than 64 action records and it is difficult to construct a test.
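
For readers unfamiliar with the encoding, a standalone ULEB128 encoder sketch (not the LLVM emitter) showing where the multi-byte case starts: values up to 127 fit in one byte, while larger offsets, such as action-table offsets once there are many action records, need two or more bytes.

```
#include <cassert>
#include <cstdint>
#include <vector>

std::vector<uint8_t> encodeULEB128(uint64_t Value) {
  std::vector<uint8_t> Out;
  do {
    uint8_t Byte = Value & 0x7f;
    Value >>= 7;
    if (Value)
      Byte |= 0x80;        // High bit set: more bytes follow.
    Out.push_back(Byte);
  } while (Value);
  return Out;
}

int main() {
  assert(encodeULEB128(127).size() == 1);                           // Fits in one byte.
  assert((encodeULEB128(128) == std::vector<uint8_t>{0x80, 0x01})); // First two-byte value.
}
```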
2020-09-20 21:41:48 -07:00
Qiu Chaofan 1d782c2987 [PowerPC] Pass nofpexcept flag to custom lowered constrained ops
This is a follow-up of D86605. For a strict DAG FP node, if its FP
exception behavior metadata is 'ignore', it should have the nofpexcept flag.
But during custom lowering, this flag isn't passed down.

This is also seen on the X86 target.

Reviewed By: uweigand

Differential Revision: https://reviews.llvm.org/D87390
2020-09-21 10:44:25 +08:00
Fangrui Song d06485685d [XRay] Change mips to use version 2 sled (PC-relative address)
Follow-up to D78590. All targets use PC-relative addresses now.

Reviewed By: atanasyan, dberris

Differential Revision: https://reviews.llvm.org/D87977
2020-09-20 17:59:57 -07:00
Fangrui Song 6913812abc Fix some clang-tidy bugprone-argument-comment issues 2020-09-19 20:41:25 -07:00
Eric Christopher dbd53a1f0c Temporarily Revert "RegAllocFast: Rewrite and improve"
as it's breaking a few tests in the lldb test suite.

Bot: http://lab.llvm.org:8011/builders/lldb-arm-ubuntu/builds/4226/steps/test/logs/stdio

This reverts commit c8757ff3aa.
2020-09-18 18:11:21 -07:00
Fangrui Song 2ac06241d2 [LiveDebugValues] Add `#if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)` to suppress -Wunused-function 2020-09-18 17:25:37 -07:00
Amara Emerson 5d34d7f1a0 [GlobalISel] Add lowering support for G_ABS and use for AArch64.
Differential Revision: https://reviews.llvm.org/D87952
2020-09-18 16:17:18 -07:00
Reid Kleckner 9932561b48 [COFF] Move per-global .drective emission from AsmPrinter to TLOFCOFF
This changes the order of output sections and the output assembly, but
is otherwise NFC.

It simplifies the TLOF interface by removing two COFF-only methods.
2020-09-18 14:31:01 -07:00
James Y Knight f7a53d82c0 PR47468: Fix findPHICopyInsertPoint, so that copies aren't incorrectly inserted after an INLINEASM_BR.
findPHICopyInsertPoint special cases placement in a block with a
callbr or invoke in it. In that case, we must ensure that the copy is
placed before the INLINEASM_BR or call instruction, if the register is
defined prior to that instruction, because it may jump out of the
block.

Previously, the code placed it immediately after the last def _or
use_. This is wrong if the use is the instruction which may jump. We
could correctly place it immediately after the last def (ignoring
uses), but that is non-optimal for register pressure.

Instead, place the copy after the last def, or before the
call/inlineasm_br, whichever is later.

Differential Revision: https://reviews.llvm.org/D87865
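
A standalone sketch of the placement rule in the last paragraph, reduced to plain instruction indices (illustrative only, not the findPHICopyInsertPoint code): the copy goes at the later of "just after the last def" and "just before the potentially-jumping instruction".

```
#include <algorithm>
#include <cassert>

// Returns the index at which the copy should be inserted, given the index of
// the last def of the source register and the index of the call/INLINEASM_BR.
unsigned copyInsertIndex(unsigned LastDefIdx, unsigned CallIdx) {
  unsigned AfterLastDef = LastDefIdx + 1; // Just after the defining instruction.
  unsigned BeforeCall = CallIdx;          // Just before the potentially-jumping call.
  return std::max(AfterLastDef, BeforeCall);
}

int main() {
  // Def at 3, call at 10: inserting right before the call keeps the new
  // value's live range short without breaking correctness.
  assert(copyInsertIndex(3, 10) == 10);
  // Def at 12 (on the fallthrough path, after the call): follow the def.
  assert(copyInsertIndex(12, 10) == 13);
}
```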
2020-09-18 14:14:04 -04:00
Matt Arsenault 3105d0f84b CodeGen: Move split block utility to MachineBasicBlock
AMDGPU needs this in several places, so consolidate them here.
2020-09-18 14:05:18 -04:00
Matt Arsenault c8757ff3aa RegAllocFast: Rewrite and improve
This rewrites big parts of the fast register allocator. The basic
strategy of doing block-local allocation hasn't changed but I tweaked
several details:

- Track register state on register units instead of physical registers.
  This simplifies and speeds up handling of register aliases.
- Process basic blocks in reverse order: definitions are known to end
  register lifetimes when walking backwards (whereas when walking
  forward a use may or may not be a kill, so we need heuristics).
- Check register mask operands (calls) instead of conservatively
  assuming everything is clobbered.
- Enhance heuristics to detect killing uses: in case of a small number of
  defs/uses, check if they are all in the same basic block, and if so the
  last one is a killing use.
- Enhance the heuristic for copy-coalescing through hinting: we check the
  first k defs of a register for COPYs rather than relying on there just
  being a single definition.

When testing this on the full llvm test-suite including SPEC externals I measured:
- an average 5.1% reduction in code size for X86 and 4.9% on aarch64
  (ranging between 0% and 20% depending on the test)
- 0.5% faster compile time (some analysis suggests the pass is slightly slower
  than before, but we more than make up for it because later passes are
  faster with the reduced instruction count)

Also adds a few testcases that were broken without this patch, in
particular bug 47278.

Patch mostly by Matthias Braun
2020-09-18 14:05:18 -04:00
Matt Arsenault 870fd53e4f Reapply "RegAllocFast: Record internal state based on register units"
The regressions this caused should be fixed when
https://reviews.llvm.org/D52010 is applied.

This reverts commit a21387c654.
2020-09-18 14:05:18 -04:00
Zequan Wu 91aed9bf97 [CodeGen] emit CG profile for COFF object file
I forgot to add emission of CG profile for COFF object file, when adding the support (https://reviews.llvm.org/D81775)

Differential Revision: https://reviews.llvm.org/D87811
2020-09-18 10:57:54 -07:00
Francis Visoiu Mistrih 0345d88de6 [NFC][ScheduleDAG] Remove unused EntrySU SUnit
EntrySU doesn't seem to be used at all when building the ScheduleDAG.

Differential Revision: https://reviews.llvm.org/D87867
2020-09-18 09:50:47 -07:00
Simon Pilgrim d967aaa8fa [DAG] BuildVectorSDNode::getSplatValue - pull out repeated getNumOperands() calls. NFCI. 2020-09-18 16:10:23 +01:00
Matt Arsenault 751a6c5760 IR: Move denormal mode parsing from MachineFunction to Function
This was just inspecting the IR to begin with, and is useful to check
in some places in the IR.
2020-09-18 09:55:47 -04:00
Philip Reames b04c181ed7 [AArch64] Enable implicit null check transformation
This change enables the generic implicit null transformation for the AArch64 target. As background for those unfamiliar with our implicit null check support:

- An implicit null check is the use of a signal handler to catch a null pointer dereference and redirect it to a handler. Specifically, it replaces an explicit conditional branch with such a redirect. This is only done for very cold branches under frontend control with appropriate metadata.
- FAULTING_OP is used to wrap the faulting instruction. It is modelled as being a conditional branch to reflect the fact it can transfer control in the CFG.
- FAULTING_OP does not need to be an analyzable branch to achieve its purpose. (Or at least, that's the x86 model. I find this slightly questionable.)
- When lowering to MC, we convert the FAULTING_OP back into the actual instruction, record the labels, and lower the original instruction.

As can be seen in the test changes, currently the AArch64 backend does not eliminate the unconditional branch to the fallthrough block. I've tried two approaches, neither of which worked. I plan to return to this in a separate change set once I've wrapped my head around the interactions a bit better. (X86 handles this via AllowModify on analyzeBranch, but adding the obvious code causes BranchFolding to crash. I haven't yet figured out if it's a latent bug in BranchFolding, or something I'm doing wrong.)

Differential Revision: https://reviews.llvm.org/D87851
2020-09-17 16:00:19 -07:00
Quentin Colombet 99e865b618 [TargetRegisterInfo] Add a couple of target hooks for the greedy register allocator
Before this patch, the last chance recoloring and deferred spilling
techniques were solely controlled by command line options.
This patch adds target hooks for these two techniques so that it
is easier for backend writers to override the default behavior.

The default behavior of the hooks preserves the default values of
the related command line options.

NFC
2020-09-17 15:23:15 -07:00
Derek Schuff 0ff28fa6a7 Support dwarf fission for wasm object files
Initial support for dwarf fission sections (-gsplit-dwarf) on wasm.
The most interesting change is support for writing 2 files (.o and .dwo) in the
wasm object writer. My approach moves object-writing logic into its own function
and calls it twice, swapping out the endian::Writer (W) in between calls.
It also splits the import-preparation step into its own function (and skips it when writing a dwo).

Differential Revision: https://reviews.llvm.org/D85685
2020-09-17 14:42:41 -07:00
Victor Huang a4bb71b1c0 Disable hoisting MI to hotter basic blocks when using pgo
This is a follow up patch for https://reviews.llvm.org/D63676 to
enable the feature when using pgo.

Differential Revision: https://reviews.llvm.org/D85240
2020-09-17 14:17:00 -05:00
Amara Emerson 79b21fc187 [AArch64][GlobalISel] Fix bug in fewVectorElts action while legalizing oversize G_FPTRUNC vectors.
For <8 x s32> = fptrunc <8 x s64> the fewerElementsVector action tries to break
down the source vector into the final source vectors of <2 x s64> using unmerge.
This fixes a crash due to using the wrong number of elements for the breakdown
type.

Also add some legalizer tests for explicitly G_FPTRUNC which we didn't have.

Differential Revision: https://reviews.llvm.org/D87814
2020-09-17 08:56:26 -07:00
Simon Pilgrim 2a56a0ba08 ModuloSchedule.cpp - remove unnecessary includes. NFCI.
Already included in ModuloSchedule.h
2020-09-17 16:47:48 +01:00
Simon Pilgrim 85ba2f1663 LiveDebugVariables.cpp - remove unnecessary Compiler.h include. NFCI.
Already included in LiveDebugVariables.h
2020-09-17 15:06:02 +01:00
Simon Pilgrim 46e59062a0 DwarfExpression.cpp - remove unnecessary includes. NFCI.
Already included in DwarfExpression.h
2020-09-17 15:06:02 +01:00
Simon Pilgrim 67ae46c820 SafeStackLayout.cpp - remove unnecessary StackLifetime.h include. NFCI.
Already included in SafeStackLayout.h
2020-09-17 14:56:46 +01:00
Simon Pilgrim 572e542c5e DwarfStringPool.cpp - remove unnecessary StringRef include. NFCI.
Already included in DwarfStringPool.h
2020-09-17 12:18:27 +01:00
Simon Pilgrim 71f237506b DwarfFile.h - remove unnecessary includes. NFCI.
Use forward declarations where possible, move includes down to DwarfFile.cpp and avoid duplicate includes.
2020-09-17 12:12:18 +01:00
Simon Pilgrim 550b1a6fd4 [AsmPrinter] DwarfDebug - use DebugLoc const references where possible. NFC.
Avoid unnecessary copies.
2020-09-17 10:45:54 +01:00
Simon Pilgrim 4ae1bb193a [AsmPrinter] Remove orphan DwarfUnit::shareAcrossDWOCUs declaration. NFCI.
Method implementation no longer exists.
2020-09-17 10:45:52 +01:00
Jay Foad 6f6d389da5 [SplitKit] Only copy live lanes
When splitting a live interval with subranges, only insert copies for
the lanes that are live at the point of the split. This avoids some
unnecessary copies and fixes a problem where copying dead lanes was
generating MIR that failed verification. The test case for this is
test/CodeGen/AMDGPU/splitkit-copy-live-lanes.mir.

Without this fix, some earlier live range splitting would create %430:

%430 [256r,848r:0)[848r,2584r:1)  0@256r 1@848r L0000000000000003 [848r,2584r:0)  0@848r L0000000000000030 [256r,2584r:0)  0@256r weight:1.480938e-03
...
256B     undef %430.sub2:vreg_128 = V_LSHRREV_B32_e32 16, %20.sub1:vreg_128, implicit $exec
...
848B     %430.sub0:vreg_128 = V_AND_B32_e32 %92:sreg_32, %20.sub1:vreg_128, implicit $exec
...
2584B    %431:vreg_128 = COPY %430:vreg_128

Then RAGreedy::tryLocalSplit would split %430 into %432 and %433 just
before 848B giving:

%432 [256r,844r:0)  0@256r L0000000000000030 [256r,844r:0)  0@256r weight:3.066802e-03
%433 [844r,848r:0)[848r,2584r:1)  0@844r 1@848r L0000000000000030 [844r,2584r:0)  0@844r L0000000000000003 [844r,844d:0)[848r,2584r:1)  0@844r 1@848r weight:2.831776e-03
...
256B     undef %432.sub2:vreg_128 = V_LSHRREV_B32_e32 16, %20.sub1:vreg_128, implicit $exec
...
844B     undef %433.sub0:vreg_128 = COPY %432.sub0:vreg_128 {
           internal %433.sub2:vreg_128 = COPY %432.sub2:vreg_128
848B     }
  %433.sub0:vreg_128 = V_AND_B32_e32 %92:sreg_32, %20.sub1:vreg_128, implicit $exec
...
2584B    %431:vreg_128 = COPY %433:vreg_128

Note that the copy from %432 to %433 at 844B is a curious
bundle-without-a-BUNDLE-instruction that SplitKit creates deliberately,
and it includes a copy of .sub0 which is not live at this point, and
that causes it to fail verification:

*** Bad machine code: No live subrange at use ***
- function:    zextload_global_v64i16_to_v64i64
- basic block: %bb.0  (0x7faed48) [0B;2848B)
- instruction: 844B    undef %433.sub0:vreg_128 = COPY %432.sub0:vreg_128
- operand 1:   %432.sub0:vreg_128
- interval:    %432 [256r,844r:0)  0@256r L0000000000000030 [256r,844r:0)  0@256r weight:3.066802e-03
- at:          844B

Using real bundles with a BUNDLE instruction might also fix this
problem, but the current fix is less invasive and also avoids some
unnecessary copies.

https://bugs.llvm.org/show_bug.cgi?id=47492

Differential Revision: https://reviews.llvm.org/D87757
2020-09-17 09:26:11 +01:00
Qiu Chaofan a2fb5446be [SelectionDAG] Check any use of negation result before removal
2508ef01 fixed a bug about constant removal in negation, but after
a sanitizer check I found there was still an issue with it, so it was
reverted.

Temporary nodes are removed if they turn out to be useless in negation. Before the
removal, they are checked for any other users, which is why the removal
was moved after getNode. However, in rare cases the node to be removed is
the same as the result of getNode. We missed that case; it is fixed by this
patch.

Reviewed By: steven.zhang

Differential Revision: https://reviews.llvm.org/D87614
2020-09-17 16:00:54 +08:00
Igor Kudrin 027d47d1c7 [DebugInfo] Simplify DIEInteger::SizeOf().
An AsmPrinter should always be provided to the method because some forms
depend on its parameters. The only place in the codebase which passed
a nullptr value was found in the unit tests, so the patch updates it to
use some dummy AsmPrinter instead.

Differential Revision: https://reviews.llvm.org/D85293
2020-09-17 12:47:38 +07:00
Craig Topper e30371d99d [DAGCombiner] Teach visitMSTORE to replace an all ones mask with an unmasked store.
Similar to what was done in D87788 for MLOAD.

Again I've skipped indexed, truncating, and compressing stores.
2020-09-16 16:42:22 -07:00
Craig Topper 89ee4c0314 [DAGCombiner] Teach visitMLOAD to replace an all ones mask with an unmasked load
If we have an all-ones mask, we can just use a regular unmasked load. InstCombine already gets this in IR, but the all-ones mask can appear after type legalization.

Only avx512 test cases are affected because the X86 backend already looks for element 0 and the last element being 1. It replaces this with an unmasked load and blend. The all-ones mask is a special case of that where the blend will be removed. That transform is only enabled on avx2 targets. I believe that's because a non-zero passthru on avx2 already requires a separate blend, so it's more profitable to handle mixed constant masks.

This patch adds dedicated all-ones handling to the target-independent DAG combiner. I've skipped extending, expanding, and indexed loads for now. X86 doesn't use indexed loads, so I don't know much about them. Extending made me nervous because I wasn't sure I could trust that the memory VT had the right element count, due to some weirdness in vector splitting. For expanding I wasn't sure if we needed different undef handling.
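
For intuition, a standalone C++ model (not the DAG combiner code) of masked-load semantics, showing why an all-ones mask degenerates to an ordinary load; the helper name is made up:

  #include <array>
  #include <cassert>
  #include <cstddef>

  // Element-wise model of a masked load: lanes with mask=false take the
  // passthru value instead of reading memory.
  template <typename T, std::size_t N>
  std::array<T, N> maskedLoad(const T *Mem, const std::array<bool, N> &Mask,
                              const std::array<T, N> &Passthru) {
    std::array<T, N> R{};
    for (std::size_t I = 0; I != N; ++I)
      R[I] = Mask[I] ? Mem[I] : Passthru[I];
    return R;
  }

  int main() {
    int Mem[4] = {1, 2, 3, 4};
    std::array<bool, 4> AllOnes = {true, true, true, true};
    std::array<int, 4> Passthru = {9, 9, 9, 9};
    // With an all-ones mask the passthru is irrelevant: the result is exactly
    // what a regular (unmasked) vector load would produce.
    std::array<int, 4> R = maskedLoad<int, 4>(Mem, AllOnes, Passthru);
    for (int I = 0; I != 4; ++I)
      assert(R[I] == Mem[I]);
  }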

Differential Revision: https://reviews.llvm.org/D87788
2020-09-16 13:21:16 -07:00
Matt Arsenault 88bdcbbf1a GlobalISel: Lift store value widening restriction
This doesn't change the memory size and doesn't need to worry about
non-power-of-2 sizes.
2020-09-16 14:25:07 -04:00
Michael Kitzan c4e589b795 [GISel] Add new combines for unary FP instrs with constant operand
https://reviews.llvm.org/D86393

Patch adds five new `GICombinerRules`, one for each of the following unary
FP instrs: `G_FNEG`, `G_FABS`, `G_FPTRUNC`, `G_FSQRT`, and `G_FLOG2`. The
combine rules perform the FP operation on the constant operand and replace
the original instr with the result. Patch additionally adds new combiner
tests for the AArch64 target to test these new combiner rules.
2020-09-16 10:34:15 -07:00
Simon Pilgrim 8f7d6b2375 DwarfUnit.h - remove unnecessary includes. NFCI. 2020-09-16 18:32:29 +01:00
Simon Pilgrim 69682f993c InterferenceCache.cpp - remove duplicate includes. NFCI.
Remove headers already included in InterferenceCache.h
2020-09-16 18:32:28 +01:00
Matt Arsenault 738c73a454 RegAllocFast: Make self loop live-out heuristic more aggressive
This currently has no impact on code, but prevents sizeable code size
regressions after D52010. This prevents spilling and reloading all
values inside blocks that loop back. Add a baseline test which would
regress without this patch.
2020-09-16 13:12:38 -04:00
Matt Arsenault 8d8a496356 LocalStackSlotAllocation: Swap order of check 2020-09-16 12:56:40 -04:00
Francesco Petrogalli 15e9a6c211 [llvm][CodeGen] Do not scalarize `llvm.masked.[gather|scatter]` operating on scalable vectors.
This patch prevents the `llvm.masked.gather` and `llvm.masked.scatter` intrinsics from being scalarized when invoked on scalable vectors.

The change in `Function.cpp` is needed to prevent the warning that is raised when `getNumElements` is used in place of `getElementCount` on `VectorType` instances. The tests guard against regressions on this change.

The tests make sure that calls to `llvm.masked.[gather|scatter]` are still scalarized when:

  1. the intrinsics are operating on fixed-size vectors, and
  2. the compiler is not targeting fixed-length SVE code generation.

Reviewed By: efriedma, sdesmalen

Differential Revision: https://reviews.llvm.org/D86249
2020-09-16 16:00:28 +00:00
Mircea Trofin 6e85c3d5c7 [NFC][Regalloc] accessors for 'reg' and 'weight'
Also renamed the fields to follow style guidelines.

Accessors help with readability - weight mutation, in particular,
is easier to follow this way.

Differential Revision: https://reviews.llvm.org/D87725
2020-09-16 08:28:57 -07:00
Sebastian Neubauer 833b3b0d3a [AMDGPU] Add v3f16/v3i16 support to SDag
Fix lowering and instruction selection for v3x16 types
and enable InstCombine to emit them.

This patch only implements it for the selection dag.
GlobalISel tests in GlobalISel/llvm.amdgcn.image.load.1d.d16.ll and
GlobalISel/llvm.amdgcn.image.store.2d.d16.ll still don't work.

Differential Revision: https://reviews.llvm.org/D84420
2020-09-16 17:20:27 +02:00
Sam Parker 1c421046d7 [RDA] Fix getUniqueReachingDef for self loops
We've fixed the case where this could return an instruction after the
given instruction, but this also means that we could falsely return a
'unique' def when another one could come from the backedge of a
loop.

Differential Revision: https://reviews.llvm.org/D87751
2020-09-16 12:44:23 +01:00
Simon Pilgrim 3f682611ab [DAG] Remover getOperand() call. NFCI. 2020-09-16 11:18:58 +01:00
Volkan Keles 79378b1b75 GlobalISel: Fix a failing combiner test
test/CodeGen/AArch64/GlobalISel/combine-trunc.mir was failing
due to the different order for evaluating function arguments.
This patch updates the related code to fix the issue.
2020-09-15 16:40:38 -07:00
Aditya Nandakumar 97203cfd6b [GISel] Add new GISel combiners for G_MUL
https://reviews.llvm.org/D87668

Patch adds two new GICombinerRules, one for G_MUL(X, 1) and another for G_MUL(X, -1).
G_MUL(X, 1) is an identity combine, and G_MUL(X, -1) gets replaced with G_SUB(0, X).
Patch additionally adds new combiner tests for the AArch64 target to test these
new combiner rules, as well as updates AMDGPU GISel tests.
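
As a standalone sanity check of the identities being exploited (plain C++, not GISel code; the arithmetic is done in uint32_t to model two's-complement wrap-around):

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint32_t X : {0u, 1u, 42u, 0x7FFFFFFFu, 0x80000000u, 0xFFFFFFFFu}) {
      assert(X * 1u == X);               // G_MUL(X, 1)  -> X
      assert(X * 0xFFFFFFFFu == 0u - X); // G_MUL(X, -1) -> G_SUB(0, X)
    }
  }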

Patch by mkitzan
2020-09-15 16:08:47 -07:00
Volkan Keles a4e35cc2ec GlobalISel: Add combines for G_TRUNC
https://reviews.llvm.org/D87050
2020-09-15 15:50:34 -07:00
Guozhi Wei 243ffd0cad [MachineBasicBlock] Fix a typo in function copySuccessor
The condition used to decide whether the probability needs to be copied should be reversed.

Differential Revision: https://reviews.llvm.org/D87417
2020-09-15 09:18:18 -07:00
Qiu Chaofan e1669843f2 Revert "[SelectionDAG] Remove unused FP constant in getNegatedExpression"
2508ef01 doesn't totally fix the issue, since we did not handle the case
where the unused temporary negated result is the same as the result, which
was found by the address sanitizer.
2020-09-15 22:03:50 +08:00
Hans Wennborg a21387c654 Revert "RegAllocFast: Record internal state based on register units"
This seems to have caused incorrect register allocation in some cases,
breaking tests in the Zig standard library (PR47278).

As discussed on the bug, revert back to green for now.

> Record internal state based on register units. This is often more
> efficient as there are typically fewer register units to update
> compared to iterating over all the aliases of a register.
>
> Original patch by Matthias Braun, but I've been rebasing and fixing it
> for almost 2 years and fixed a few bugs causing intermediate failures
> to make this patch independent of the changes in
> https://reviews.llvm.org/D52010.

This reverts commit 66251f7e1d, and
follow-ups 931a68f26b
and 0671a4c508. It also adjust some
test expectations.
2020-09-15 13:25:41 +02:00
Simon Pilgrim 6c1f2a34fb SpillPlacement.cpp - remove unnecessary includes. NFCI.
These are all directly included in SpillPlacement.h
2020-09-15 12:18:24 +01:00
Simon Pilgrim 1abb4461ea StatepointLowering.cpp - remove unnecessary includes. NFCI.
These are all directly included in StatepointLowering.h
2020-09-15 12:18:23 +01:00
Simon Pilgrim bee79cdcc6 SelectionDAGBuilder.h - remove unnecessary includes. NFCI.
Reduce to forward declarations and move implicit dependencies down to the cpp files.
2020-09-15 12:18:22 +01:00
Qiu Chaofan 2508ef014e [SelectionDAG] Remove unused FP constant in getNegatedExpression
960cbc53 immediately removes nodes that won't be used, to avoid
compilation-time explosion. This patch extends that removal to constants to
fix PR47517.

Reviewed By: RKSimon, steven.zhang

Differential Revision: https://reviews.llvm.org/D87614
2020-09-15 17:59:10 +08:00
Petar Avramovic 9b4fa85434 GlobalISel/IRTranslator resetTargetOptions based on function attributes
Update TargetMachine.Options with function attributes before we start
to generate MIR instructions. This allows access to correct function
attributes via TargetMachine.Options (it used to access attributes of
the function that was translated first).
This affects some existing tests with "no-nans-fp-math" attribute.
Follow-up on D87456.

Differential Revision: https://reviews.llvm.org/D87511
2020-09-15 10:26:09 +02:00
Igor Kudrin a845ebd633 [DebugInfo] Make offsets of dwarf units 64-bit (19/19).
In the case of LTO, several DWARF units can be emitted in one section.
For an extremely large application, they may exceed the limit of 4GiB
for 32-bit offsets. As it is now possible to emit 64-bit debugging info,
the patch enables storing the larger offsets.

Differential Revision: https://reviews.llvm.org/D87026
2020-09-15 12:23:32 +07:00
Igor Kudrin 8c19ac23bd [DebugInfo] Make the offset of string pool entries 64-bit (18/19).
The string pool is shared among several units in the case of LTO,
and it potentially can exceed the limit of 4GiB for an extremely
large application. As it is now possible to emit 64-bit debugging
info, the limitation can be removed.

Differential Revision: https://reviews.llvm.org/D87025
2020-09-15 12:23:32 +07:00
Igor Kudrin 7e1e4e81cb [DebugInfo] Fix emitting DWARF64 .debug_macro[.dwo] sections (17/19).
The patch fixes emitting flags and the debug_line_offset field in
the header, as well as the reference to the macro string for
a pre-standard GNU .debug_macro extension.

Differential Revision: https://reviews.llvm.org/D87024
2020-09-15 12:23:31 +07:00
Igor Kudrin a93dd26d8c [DebugInfo] Fix emitting DWARF64 .debug_names sections (16/19).
The patch fixes emitting the unit length field in the header of
the table and offsets to the entry pool. Note that while the patch
changes the common method to emit offsets, in fact, nothing is changed
for Apple accelerator tables, because we do not yet support DWARF64 for
those targets.

Differential Revision: https://reviews.llvm.org/D87023
2020-09-15 12:23:31 +07:00
Igor Kudrin 00ce54689d [DebugInfo] Fix emitting DWARF64 .debug_addr sections (15/19).
The patch fixes emitting the header of the table. The content is
independent of the DWARF format.

Differential Revision: https://reviews.llvm.org/D87022
2020-09-15 12:23:31 +07:00
Igor Kudrin 3158d3dd4b [DebugInfo] Fix emitting DWARF64 .debug_loclists sections (14/19).
The size of the offsets in the table depends on the DWARF format.

Differential Revision: https://reviews.llvm.org/D87020
2020-09-15 12:23:31 +07:00
Igor Kudrin f9b242fe24 [DebugInfo] Fix emitting DWARF64 .debug_rnglists sections (13/19).
The size of the offsets in the table depends on the DWARF format.

Differential Revision: https://reviews.llvm.org/D87019
2020-09-15 12:23:31 +07:00
Igor Kudrin 03b09c6b68 [DebugInfo] Fix emitting pre-v5 name lookup tables in the DWARF64 format (12/19).
The transition is done by using methods of AsmPrinter which
automatically emit values in compliance with the selected DWARF format.

Differential Revision: https://reviews.llvm.org/D87013
2020-09-15 12:23:30 +07:00
Igor Kudrin b118030f3f [DebugInfo] Fix emitting DWARF64 .debug_aranges sections (11/19).
The patch fixes calculating the size of the table and emitting
the fields which depend on the DWARF format by using methods that
choose appropriate sizes automatically.

Differential Revision: https://reviews.llvm.org/D87012
2020-09-15 12:23:30 +07:00
Igor Kudrin 18f23b3ecc [DebugInfo] Fix emitting DWARF64 type units (10/19).
The patch fixes emitting the offset to the type DIE. All other fields
are already fixed in previous patches.

Differential Revision: https://reviews.llvm.org/D87021
2020-09-15 11:31:07 +07:00
Igor Kudrin 924dc58076 [DebugInfo] Fix emitting DWARF64 DWO compilation units and string offset tables (9/19).
These two fixes are better to go together because llvm-dwarfdump is
unable to dump a table when another one is malformed.

Differential Revision: https://reviews.llvm.org/D87018
2020-09-15 11:31:00 +07:00
Igor Kudrin 383d34c077 [DebugInfo] Fix emitting DWARF64 .debug_str_offsets sections (8/19).
The patch fixes calculating the size of the table and emitting the unit
length field.

Differential Revision: https://reviews.llvm.org/D87017
2020-09-15 11:30:53 +07:00
Igor Kudrin 26f1f18831 [DebugInfo] Fix emitting the DW_AT_location attribute for 64-bit DWARFv3 (7/19).
The patch uses a common method to determine the appropriate form for
the value of the attribute.

Differential Revision: https://reviews.llvm.org/D87016
2020-09-15 11:30:46 +07:00
Igor Kudrin cae7c1eb78 [DebugInfo] Use a common method to determine a suitable form for section offsets (6/19).
This is mostly an NFC patch because the involved methods are used when
emitting DWO files, which is incompatible with DWARFv3, or for platforms
where DWARF64 is not supported yet.

Differential Revision: https://reviews.llvm.org/D87015
2020-09-15 11:30:38 +07:00
Igor Kudrin 5dd1c59188 [DebugInfo] Fix emitting DWARF64 compilation units (5/19).
The patch also adds a method to choose an appropriate DWARF form
to represent section offsets according to the version and the format
of producing debug info.

Differential Revision: https://reviews.llvm.org/D87014
2020-09-15 11:30:30 +07:00
Igor Kudrin 982b31fad2 [DebugInfo] Add the -dwarf64 switch to llc and other internal tools (4/19).
The patch adds a switch to enable emitting debug info in the 64-bit
DWARF format. Most emitter for sections will be updated in the subsequent
patches, whereas for .debug_line and .debug_frame the emitters are in
the MC library, which is already updated.

For now, the switch is enabled only for 64-bit ELF targets.

Differential Revision: https://reviews.llvm.org/D87011
2020-09-15 11:30:18 +07:00
Igor Kudrin c3c501f5d7 [DebugInfo] Add new emitting methods for values which depend on the DWARF format (3/19).
These methods are going to be used in subsequent patches.

Differential Revision: https://reviews.llvm.org/D87010
2020-09-15 11:30:10 +07:00
Igor Kudrin a8058c6f8d [DebugInfo] Fix DIE value emitters to be compatible with DWARF64 (2/19).
DW_FORM_sec_offset and DW_FORM_strp imply values of different sizes with
DWARF32 and DWARF64. The patch fixes DIE value classes to use correct
sizes when emitting their values. For DIELocList it ensures that the
requested DWARF form matches the current DWARF format because that class
uses a method that selects the size automatically.

Differential Revision: https://reviews.llvm.org/D87009
2020-09-15 11:30:02 +07:00
Igor Kudrin 380e746bcc [DebugInfo] Fix methods of AsmPrinter to emit values corresponding to the DWARF format (1/19).
These methods are used to emit values which are 32-bit in DWARF32 and
64-bit in DWARF64. The patch fixes them so that they choose the length
automatically, depending on the DWARF format set in the Context.

Differential Revision: https://reviews.llvm.org/D87008
2020-09-15 11:29:48 +07:00
Quentin Colombet b3afad0463 [GlobalISel] Add a `X, Y = G_UNMERGE(G_ZEXT Z)` -> X = G_ZEXT Z; Y = 0 combine
Add a combiner helper to transform unmerge of zext into one zext and
a constant 0
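
A value-level illustration of the combine in standalone C++ (not the combiner itself), splitting a zero-extended 64-bit value into two 32-bit halves:

  #include <cassert>
  #include <cstdint>

  int main() {
    uint32_t Z = 0xDEADBEEFu;
    uint64_t Wide = (uint64_t)Z;          // G_ZEXT s32 -> s64
    uint32_t Lo = (uint32_t)Wide;         // first G_UNMERGE result
    uint32_t Hi = (uint32_t)(Wide >> 32); // second G_UNMERGE result
    assert(Lo == Z); // low half is just (a zext of) Z
    assert(Hi == 0); // high half folds to the constant 0
  }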

Differential Revision: https://reviews.llvm.org/D87427
2020-09-14 17:27:23 -07:00
Quentin Colombet d2321129bd [GlobalISel] Add `X,Y<dead> = G_UNMERGE Z` -> X = G_TRUNC Z
Add a combiner helper that replaces G_UNMERGE where all the destination lanes
are dead except the first one with a G_TRUNC.

Differential Revision: https://reviews.llvm.org/D87174
2020-09-14 17:27:23 -07:00
Quentin Colombet a36278c2f8 [GlobalISel] Add G_UNMERGE(Cst) -> Cst1, Cst2, ... combine
Add a combiner helper that replaces G_UNMERGE of big constants into direct
use of smaller constants.

Differential Revision: https://reviews.llvm.org/D87166
2020-09-14 16:30:18 -07:00
Aditya Nandakumar 46f9137e43 [GISel]: Add combine for G_FABS to G_FABS
https://reviews.llvm.org/D87554

Patch adds one new GICombinerRule for G_FABS. The combine rule folds G_FABS(G_FABS(X)) to G_FABS(X).
Patch additionally adds new combiner tests for the AArch64 target to test this new combiner rule.

Patch by mkitzan.
2020-09-14 15:56:24 -07:00
Quentin Colombet 670c276232 [GlobalISel] Add G_UNMERGE_VALUES(G_MERGE_VALUES) combine
Add the matching and applying function to the combiner helper for
G_UNMERGE_VALUES(G_MERGE_VALUES).

This combine also supports any merge-like input nodes, like G_BUILD_VECTORS
and is robust against bitcasts in between the unmerge and merge nodes.

When the input type of the merge node and the output type of the unmerge
node are not the same, but the sizes are, the combine still applies but
creates bitcasts between the sources and the destinations instead of
reusing the destinations directly.

Long term, the artifact combiner should probably reuse that helper, but
as of today, it doesn't use any outside helper, so I kept it this way.

Differential Revision: https://reviews.llvm.org/D87117
2020-09-14 15:45:06 -07:00
Craig Topper c193a689b4 [SelectionDAG] Use Align/MaybeAlign in calls to getLoad/getStore/getExtLoad/getTruncStore.
The versions that take 'unsigned' will be removed in the future.

I tried to use getOriginalAlign instead of getAlign in some
places. getAlign factors in the minimum alignment implied by
the offset in the pointer info. Since we're also passing the
pointer info we can use the original alignment.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D87592
2020-09-14 13:54:50 -07:00
Craig Topper 4208ea3e19 [FastISel] Bail out of selectGetElementPtr for vector GEPs.
The code that decomposes the GEP into ADD/MUL doesn't work properly
for vector GEPs. It can create bad COPY instructions or possibly
assert.

For now just bail out to SelectionDAG.

Fixes PR45906
2020-09-14 12:53:06 -07:00
Nikita Popov 53f36f06af [Legalize][ARM][X86] Add float legalization for VECREDUCE
This adds SoftenFloatRes, PromoteFloatRes and SoftPromoteHalfRes
legalizations for VECREDUCE, to fill the remaining hole in the SDAG
legalization. These legalizations simply expand the reduction and
let it be recursively legalized. For the PromoteFloatRes case at
least it is possible to do better than that, but it's pretty tricky
(because we need to consider the interaction of three different
vector legalizations and the type promotion) and probably not
really worthwhile.

I haven't added ExpandFloatRes support, as I am not familiar with
ppc_fp128.

Differential Revision: https://reviews.llvm.org/D87569
2020-09-14 20:42:09 +02:00
Nikita Popov 8e69c3cde8 [DAGCombiner] Fold fmin/fmax with INF / FLT_MAX
Similar to D87415, this folds the various float min/max opcodes
with a constant INF or -INF operand, or FLT_MAX / -FLT_MAX operand
if the ninf flag is set. Some of the folds are only possible under
nnan.

The fminnum(X, INF) with nnan and fmaxnum(X, -INF) with nnan cases
are needed to improve the VECREDUCE_FMIN/FMAX lowerings on X86,
the rest is here for the sake of completeness.

Differential Revision: https://reviews.llvm.org/D87571
2020-09-14 19:59:33 +02:00
Rahman Lavaee 7841e21c98 Let -basic-block-sections=labels emit basicblock metadata in a new .bb_addr_map section, instead of emitting special unary-encoded symbols.
This patch introduces the new .bb_addr_map section feature which allows us to emit the bits needed for mapping binary profiles to basic blocks into a separate section.
The format of the emitted data is represented as follows. It includes a header for every function:

|  Address of the function                      |  -> 8 bytes (pointer size)
|  Number of basic blocks in this function (>0) |  -> ULEB128

The header is followed by a BB record for every basic block. These records are ordered in the same order as MachineBasicBlocks are placed in the function. Each BB Info is structured as follows:

|  Offset of the basic block relative to function begin |  -> ULEB128
|  Binary size of the basic block                       |  -> ULEB128
|  BB metadata                                          |  -> ULEB128  [ MBB.isReturn() OR MBB.hasTailCall() << 1  OR  MBB.isEHPad() << 2 ]
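
For illustration only, a standalone C++ sketch of a reader for this layout (the helper names are made up; it assumes a little-endian host and standard ULEB128 encoding):

  #include <cassert>
  #include <cstdint>
  #include <cstring>
  #include <vector>

  // Standard ULEB128 decode, advancing the cursor.
  static uint64_t readULEB128(const uint8_t *&P) {
    uint64_t Value = 0;
    unsigned Shift = 0;
    uint8_t Byte;
    do {
      Byte = *P++;
      Value |= uint64_t(Byte & 0x7f) << Shift;
      Shift += 7;
    } while (Byte & 0x80);
    return Value;
  }

  struct BBEntry { uint64_t Offset, Size, Metadata; };

  // Parse one per-function record of the layout described above.
  static void parseFunctionRecord(const uint8_t *&P, uint64_t &FuncAddr,
                                  std::vector<BBEntry> &BBs) {
    std::memcpy(&FuncAddr, P, 8); // 8-byte function address (little-endian host)
    P += 8;
    uint64_t NumBBs = readULEB128(P);
    for (uint64_t I = 0; I != NumBBs; ++I) {
      BBEntry E;
      E.Offset = readULEB128(P);   // offset of the BB relative to function begin
      E.Size = readULEB128(P);     // binary size of the BB
      E.Metadata = readULEB128(P); // isReturn | hasTailCall << 1 | isEHPad << 2
      BBs.push_back(E);
    }
  }

  int main() {
    // Synthetic record: FuncAddr = 0x1000, one BB at offset 0, size 16,
    // metadata = isReturn.
    const uint8_t Buf[] = {0x00, 0x10, 0, 0, 0, 0, 0, 0, // address
                           0x01,                         // number of BBs
                           0x00, 0x10, 0x01};            // offset, size, metadata
    const uint8_t *P = Buf;
    uint64_t Addr;
    std::vector<BBEntry> BBs;
    parseFunctionRecord(P, Addr, BBs);
    assert(Addr == 0x1000 && BBs.size() == 1 && BBs[0].Size == 16);
  }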

The new feature will replace the existing "BB labels" functionality with -basic-block-sections=labels.
The .bb_addr_map section scrubs the specially-encoded BB symbols from the binary and makes it friendly to profilers and debuggers.
Furthermore, the new feature reduces the binary size overhead from 70% bloat to only 12%.

For more information and results please refer to the RFC: https://lists.llvm.org/pipermail/llvm-dev/2020-July/143512.html

Reviewed By: MaskRay, snehasish

Differential Revision: https://reviews.llvm.org/D85408
2020-09-14 10:16:44 -07:00
David Green 06fb4e9064 [CGP] Limit converting phi types to simple loads and stores
Instcombine limits converting phi types to simple loads and stores. This
does the same in codegenprepare, not processing phis that are not
simple.

Note that for volatile loads/stores, ISel will happily convert between float
and int. Atomics are more likely to always be integer. This just keeps
things simple and doesn't process either.

Differential Revision: https://reviews.llvm.org/D83770
2020-09-14 12:08:34 +01:00
Petar Avramovic 6e2a86ed5a AMDGPU/GlobalISel Check for NoNaNsFPMath in isKnownNeverSNaN
Check for NoNaNsFPMath function attribute in isKnownNeverSNaN.
Function attributes are in held in 'TargetMachine.Options'.
Among other things, this allows selection of some patterns imported
in D87351 since G_FCANONICALIZE is not generated when isKnownNeverSNaN
returns true in lowerFMinNumMaxNum.

However, we notice some incorrect results since function attributes are
not correctly written to TargetMachine.Options when the next function is
processed. Take a look at @v_test_no_global_nnans_med3_f32_pat0_srcmod0:
it has "no-nans-fp-math"="false", but TargetMachine.Options still has it
set to true since the first function in the test file had this attribute
set to true. This will be fixed in D87511.

Differential Revision: https://reviews.llvm.org/D87456
2020-09-14 12:11:00 +02:00
Simon Pilgrim 00e5676cf6 [LegalizeDAG] Fix MSVC "result of 32-bit shift implicitly converted to 64 bits" warning. NFCI. 2020-09-14 11:09:43 +01:00
Jeremy Morse d3af441dfe [DebugInstrRef][1/9] Add fields for instr-ref variable locations
Add a DBG_INSTR_REF instruction and a "debug instruction number" field to
MachineInstr. The two allow variable values to be specified by
identifying where the value is computed, rather than the register it lies
in, like so:

  %0 = fooinst, debug-instr-number 1
  [...]
  DBG_INSTR_REF 1, 0

See the original RFC for motivation:
http://lists.llvm.org/pipermail/llvm-dev/2020-February/139440.html

This patch is NFCI; it only adds fields and other boilerplate.

Differential Revision: https://reviews.llvm.org/D85741
2020-09-14 10:06:52 +01:00
David Sherwood 15bff4dec4 [CodeGen] Fix bug in IncrementPointer
In an earlier patch I meant to add the correct flags to the ADD
node when incrementing the pointer, but forgot to pass them to
SelectionDAG::getNode.

Differential Revision: https://reviews.llvm.org/D87496
2020-09-14 08:03:55 +01:00
Yevgeny Rouban 88690a9658 [CodeGenPrepare] Fix zapping dead operands of assume
This patch fixes a problem introduced by commit 52cc97a0.
A test case is created to demonstrate the crash caused by
the instruction iterator being invalidated by the recursive
removal of dead operands of assume. The solution restarts
from the block's first instruction in case CurInstIterator
is invalidated by RecursivelyDeleteTriviallyDeadInstructions().

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D87434
2020-09-14 11:46:34 +07:00
Craig Topper 56b33391d3 [SelectionDAG] Move ISD:PARITY formation from DAGCombine to SimplifyDemandedBits.
Previously, we formed ISD::PARITY by looking for (and (ctpop X), 1)
but the AND might be separated from the ctpop. For example if the
parity result is multiplied by 2, we'll pull the AND through the
shift.

So to handle more cases, move to SimplifyDemandedBits where we
can handle more cases that result in only the LSB of the CTPOP
being used.
2020-09-13 21:04:13 -07:00
Qiu Chaofan a4c5351986 [DAGCombiner] Propagate FMF flags in FMA folding
The DAG combiner folds (fma a 1.0 b) into (fadd a b), but the flags aren't
propagated into the new fadd. This patch fixes that.

Some code in visitFMA is redundant, and support for vector constants
is missing. A follow-up patch is needed to clean that up.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D87037
2020-09-14 00:19:06 +08:00
David Green 9237fde481 [CGP] Prevent optimizePhiType from iterating forever
The recently added optimizePhiType algorithm had no checks to make sure
it didn't continually iterate back and forth between float and int
types. This means that given an input like store(phi(bitcast(load))), we
could convert that back and forth to store(bitcast(phi(load))). This
particular case would usually have been simplified to a different load
type (folding the bitcast into the load) before CGP, but other cases can
occur. The one that came up was phi(bitcast(phi)), where the two phi's
of different types were bitcast between. That was not helped by a dead
bitcast being kept around which could make conversion look profitable.

This adds an extra check of the bitcast Uses or Defs, to make sure that
at least one is grounded and will not end up being converted back. It
also makes sure that dead bitcasts are removed, and there is a minor
change to include newly created Phi nodes in the Visited set so that
they do not need to be revisited.

Differential Revision: https://reviews.llvm.org/D82676
2020-09-13 16:11:01 +01:00
Craig Topper 61d29e0dff [LegalizeTypes] Remove a few cases from SplitVectorOperand that should never happen. NFC
CTTZ, CTLZ, CTPOP, and FCANONICALIZE all have the same input and
output types so the operand should have already been legalized when the
result type was legalized.
2020-09-12 20:59:14 -07:00
Craig Topper ad3d6f993d [SelectionDAG][X86][ARM][AArch64] Add ISD opcode for __builtin_parity. Expand it to shifts and xors.
Clang emits (and (ctpop X), 1) for __builtin_parity. If ctpop
isn't natively supported by the target, this leads to poor codegen
due to the expansion of ctpop being more complex than what is needed
for parity.

This adds a DAG combine to convert the pattern to ISD::PARITY
before operation legalization. Type legalization is updated
to handled Expanding and Promoting this operation. If after type
legalization, CTPOP is supported for this type, LegalizeDAG will
turn it back into CTPOP+AND. Otherwise LegalizeDAG will emit a
series of shifts and xors followed by an AND with 1.
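
For reference, the kind of shift/xor sequence this expands to for a 32-bit value, written as a standalone C++ sketch (not the LegalizeDAG code itself):

  #include <cassert>
  #include <cstdint>

  // Parity via shifts and xors followed by an AND with 1.
  static uint32_t parity32(uint32_t X) {
    X ^= X >> 16; // fold the upper bits into the lower bits
    X ^= X >> 8;
    X ^= X >> 4;
    X ^= X >> 2;
    X ^= X >> 1;
    return X & 1; // the low bit now holds the parity of all 32 bits
  }

  int main() {
    assert(parity32(0) == 0);
    assert(parity32(1) == 1);
    assert(parity32(0b1011) == 1);      // three set bits -> odd
    assert(parity32(0xFFFFFFFFu) == 0); // 32 set bits -> even
  }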

I've avoided vectors in this patch to avoid additional legalization complexity.

X86 previously had a custom DAG combiner for this. This is now
moved to Custom lowering for the new opcode. There is a minor
regression in vector-reduce-xor-bool.ll, but a follow up patch
can easily fix that.

Fixes PR47433

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D87209
2020-09-12 11:42:18 -07:00
Sanjay Patel 3a8ea8609b [Intrinsics] define semantics for experimental fmax/fmin vector reductions
As discussed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2020-April/140729.html

This is hopefully the final remaining showstopper before we can remove
the 'experimental' from the reduction intrinsics.

No behavior was specified for the FP min/max reductions, so we have a
mess of different interpretations.

There are a few potential options for the semantics of these max/min ops.
I think this is the simplest based on current behavior/implementation:
make the reductions inherit from the existing llvm.maxnum/minnum intrinsics.
These correspond to libm fmax/fmin, and those are similar to the (now
deprecated?) IEEE-754 maxNum/minNum functions (NaNs are treated as missing
data). So the default expansion creates calls to libm functions.

Another option would be to inherit from llvm.maximum/minimum (NaNs propagate),
but most targets just crash in codegen when given those nodes because no
default expansion was ever implemented AFAICT.

We could also just assume 'nnan' semantics by default (we are already
assuming 'nsz' semantics in the maxnum/minnum intrinsics), but some targets
(AArch64, PowerPC) support the more defined behavior, so it doesn't make much
sense to not allow a tighter spec. Fast-math-flags (nnan) can be used to
loosen the semantics.

(Note that D67507 was proposed to update the LangRef to acknowledge the more
recent IEEE-754 2019 standard, but that patch seems to have stalled. If we do
update based on the new standard, the reduction instructions can seamlessly
inherit from whatever updates are made to the max/min intrinsics.)

x86 sees a regression here on 'nnan' tests because we have underlying,
longstanding bugs in FMF creation/propagation. Those need to be fixed apart
from this change (for example: https://llvm.org/PR35538). The expansion
sequence before this patch may not have been correct.

Differential Revision: https://reviews.llvm.org/D87391
2020-09-12 09:10:28 -04:00
Yuanfang Chen ad99e34c59 Revert "[NewPM][CodeGen] Introduce CodeGenPassBuilder to help build codegen pipeline"
This reverts commit 31ecf8d29d.
This reverts commit 3fdaa8602a.

There is a layering violation for Target->CodeGen.
2020-09-11 18:52:32 -07:00
Yuanfang Chen 31ecf8d29d [NewPM][CodeGen] Introduce CodeGenPassBuilder to help build codegen pipeline
Following up on D67687.
Please refer to the RFC here http://lists.llvm.org/pipermail/llvm-dev/2020-July/143309.html

`CodeGenPassBuilder` is the NPM counterpart of `TargetPassConfig` with below differences.
- Debugging features (MIR print/verify, disable pass, start/stop-before/after, etc.) living in `TargetPassConfig` are moved to use PassInstrument as much as possible. (Implementation also lives in `TargetPassConfig.cpp`)
- `TargetPassConfig` is a polymorphic base (virtual inheritance) to build the target-dependent pipeline whereas `CodeGenPassBuilder` is the CRTP base/helper to implement the target-dependent pipeline. The motivation is flexibility for targets to customize the pipeline, inlining opportunity, and fits the overall NPM value semantics design.
- `TargetPassConfig` is a legacy immutable pass to declare hooks for targets to customize some target-independent codegen layer behavior. This is partially ported to TargetMachine::options. The rest, such as `createMachineScheduler/createPostMachineScheduler`, are left out for now. They should be implemented in LLVMTargetMachine in the future.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D83608
2020-09-11 16:41:17 -07:00
Matt Arsenault 382b2b1b51 RegAllocFast: Fix typo in comment 2020-09-11 18:06:14 -04:00
Matt Arsenault e21bb31eb6 CodeGen: Require SSA to run PeepholeOptimizer 2020-09-11 18:03:04 -04:00
Jeremy Morse 0caeaff123 [LiveDebugValues][NFC] Re-land 60db26a66d, add instr-ref tests
This was landed but reverted in 5b9c2b1bea due to asan picking up a memory
leak. This is fixed in the change to InstrRefBasedImpl.cpp. Original
commit message follows:

[LiveDebugValues][NFC] Add instr-ref tests, adapt old tests

This patch adds a few tests in DebugInfo/MIR/InstrRef/ of interesting
behaviour that the instruction referencing implementation of
LiveDebugValues has. Mostly, these tests exist to ensure that if you
give the "-experimental-debug-variable-locations" command line switch,
the right implementation runs; and to ensure it behaves the same way as
the VarLoc LiveDebugValues implementation.

I've also touched roughly 30 other tests, purely to make the tests less
rigid about what output to accept. DBG_VALUE instructions are usually
printed with a trailing !debug-location indicating its scope:

  !debug-location !1234

However InstrRefBasedLDV produces new DebugLoc instances on the fly,
meaning there sometimes isn't a numbered node when they're printed,
making the output:

  !debug-location !DILocation(line: 0, blah blah)

Which causes a ton of these tests to fail. This patch removes checks for
that final part of each DBG_VALUE instruction. None of them appear to
be actually checking the scope is correct, just that it's present, so
I don't believe there's any loss in coverage here.

Differential Revision: https://reviews.llvm.org/D83054
2020-09-11 12:14:44 +01:00
Benjamin Kramer 5405ee553a [CodeGenPrepare] Simplify code. NFCI. 2020-09-11 11:24:08 +02:00
Martin Storsjö 46416f0803 [CodeGen] [WinException] Remove a redundant explicit section switch for aarch64
The following EmitWinEHHandlerData() implicitly switches to .xdata, just
like on x86_64.

This became orphaned from the original code requiring it in
0b61d220c9 / https://reviews.llvm.org/D61095.

Differential Revision: https://reviews.llvm.org/D87447
2020-09-11 10:31:04 +03:00
Alok Kumar Sharma e45b0708ae [DebugInfo] Fixing CodeView assert related to lowerBound field of DISubrange.
This is to fix the CodeView build failure https://bugs.llvm.org/show_bug.cgi?id=47287
after the DISubrange upgrade in D80197.

The assert condition is now removed, and Count is calculated in case LowerBound
is absent or zero and Count or UpperBound is constant. If Count is unknown,
it is later handled as a VLA (currently Count is set to zero).

Reviewed By: rnk

Differential Revision: https://reviews.llvm.org/D87406
2020-09-11 11:12:49 +05:30
Reid Kleckner 2c73bef7fa Fix wrong comment about enabling optimizations to work around a bug 2020-09-10 16:45:20 -07:00
Reid Kleckner 4e3edef4b8 Use pragmas to work around MSVC x86_32 debug miscompile bug
Halide users reported this here: https://llvm.org/pr46176
I reported the issue to MSVC here:
https://developercommunity.visualstudio.com/content/problem/1179643/msvc-copies-overaligned-non-trivially-copyable-par.html

This codepath is apparently not covered by LLVM's unit tests, so I added
coverage in a unit test.

If we want to support this configuration going forward, it means that it is
in general not safe to pass a SmallVector<T, N> by value if alignof(T)
is greater than 4. This doesn't appear to come up often because passing
a SmallVector by value is inefficient and not idiomatic: it copies the
inline storage. In this case, the SmallVector<LLT,4> is captured by
value by a lambda, and the lambda is passed by value into std::function,
and that's how we hit the bug.

Differential Revision: https://reviews.llvm.org/D87475
2020-09-10 14:50:01 -07:00
Volkan Keles d4bf90271f GlobalISel: Combine fneg(fneg x) to x
https://reviews.llvm.org/D87473
2020-09-10 12:57:38 -07:00
Anna Thomas b1b9806370 [ImplicitNullChecks] NFC: Remove unused PointerReg arg in dep analysis
The PointerReg arg was passed into the dependence function for an
assertion which no longer exists. So, this patch updates the dependence
functions to avoid the PointerReg in the signature.

Tests-Run: make check
2020-09-10 15:31:57 -04:00
Anna Thomas 46329f6079 [ImplicitNullCheck] Handle instructions that preserve zero value
This is the first in a series of patches to make implicit null checks
more general. This patch identifies instructions that preserves zero
value of a register and considers that as a valid instruction to hoist
along with the faulting load. See added testcases.

Reviewed-By: reames, dantrushin

Differential Revision: https://reviews.llvm.org/D87108
2020-09-10 13:39:50 -04:00
Simon Pilgrim f42f733af9 SwitchLoweringUtils.h - reduce TargetLowering.h include. NFCI.
Only include the headers we actually need, and move the remaining includes down to implicit dependent files.
2020-09-10 17:42:18 +01:00
Jay Foad 517202c720 [TargetLowering] Fix comments describing XOR -> OR/AND transformations 2020-09-10 13:56:34 +01:00
Kerry McLaughlin cd89f5c91b [SVE][CodeGen] Legalisation of truncate for scalable vectors
Truncating from an illegal SVE type to a legal type, e.g.
`trunc <vscale x 4 x i64> %in to <vscale x 4 x i32>`
fails after PromoteIntOp_CONCAT_VECTORS attempts to
create a BUILD_VECTOR.

This patch changes the promote function to create a sequence of
INSERT_SUBVECTORs if the return type is scalable, and replaces
these with UNPK+UZP1 for AArch64.

Reviewed By: paulwalker-arm

Differential Revision: https://reviews.llvm.org/D86548
2020-09-10 11:35:33 +01:00
Nikita Popov 0a5dc7effb [DAGCombiner] Fold fmin/fmax of NaN
fminnum(X, NaN) is X, fminimum(X, NaN) is NaN. This mirrors the
behavior of existing InstSimplify folds.
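
A standalone C++ example of the fminnum/fmaxnum (libm fmin/fmax) behaviour being folded here, where NaN is treated as missing data:

  #include <cassert>
  #include <cmath>

  int main() {
    // libm fmin/fmax return the non-NaN operand, so fminnum(X, NaN) folds to X.
    assert(std::fmin(3.0, std::nan("")) == 3.0);
    assert(std::fmax(3.0, std::nan("")) == 3.0);
    // fminimum/fmaximum, by contrast, propagate the NaN (not shown here).
  }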

This is expected to improve the reduction lowerings in D87391,
which use NaN as a neutral element.

Differential Revision: https://reviews.llvm.org/D87415
2020-09-09 23:53:32 +02:00
Amara Emerson e5784ef8f6 [GlobalISel] Enable usage of BranchProbabilityInfo in IRTranslator.
We weren't using this before, so none of the MachineFunction CFG edges had the
branch probability information added. As a result, block placement later in the
pipeline was flying blind.

This is enabled only when optimizations are enabled, as with SelectionDAG.

Differential Revision: https://reviews.llvm.org/D86824
2020-09-09 14:31:12 -07:00
Amara Emerson 467a071285 [GlobalISel][IRTranslator] Generate better conditional branch lowering.
This is a port of the functionality from SelectionDAG, which tries to find
a tree of conditions from compares that are then combined using OR or AND,
before using that result as the input to a branch. Instead of naively
lowering the code as is, this change converts that into a sequence of
conditional branches on the sub-expressions of the tree.
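
Roughly speaking, a standalone C++ sketch of the shape of the lowering (made-up names, not the IRTranslator code): a branch on `a && b` becomes two conditional branches instead of materializing the AND:

  #include <cstdio>

  static void truebb() { std::puts("true"); }
  static void falsebb() { std::puts("false"); }

  // Branch on the sub-expressions of (a && b) one at a time, instead of
  // computing the AND into a value and branching on it once.
  static void branchOnAnd(int a, int b) {
    if (a == 0) goto FalseBB; // first sub-condition fails
    if (b == 0) goto FalseBB; // second sub-condition fails
    truebb();
    return;
  FalseBB:
    falsebb();
  }

  int main() {
    branchOnAnd(1, 1); // prints "true"
    branchOnAnd(1, 0); // prints "false"
  }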

Like SelectionDAG, we re-use the case block codegen functionality from
the switch lowering utils, which causes us to generate some different code.
The result of which I've tried to mitigate in earlier combine patches.

Differential Revision: https://reviews.llvm.org/D86665
2020-09-09 13:16:11 -07:00
Amara Emerson cc76da7ada [GlobalISel] Rewrite the elide-br-by-swapping-icmp-ops combine to do less.
This combine previously tried to take sequences like:
  %cond = G_ICMP pred, a, b
  G_BRCOND %cond, %truebb
  G_BR %falsebb
%truebb:
  ...
%falsebb:
  ...

and by inverting the compare predicate and swapping branch targets, delete the
G_BR and instead have a single conditional branch to the falsebb. Since in an
earlier patch we have a combine to fold not(icmp) into just an inverted icmp,
we don't need this combine to do as much. This patch instead generalizes the
combine by just looking for:
  G_BRCOND %cond, %truebb
  G_BR %falsebb
%truebb:
  ...
%falsebb:
  ...

and then inverting the condition using a not (xor). The xor can be folded away
in a separate combine. This change also lets us avoid some optimization code
in the IRTranslator.

I also think that deleting G_BRs in the combiner is unnecessary. That's
something that targets can decide to do at selection time and could simplify
generic code in future.

Differential Revision: https://reviews.llvm.org/D86664
2020-09-09 13:08:16 -07:00
Ulrich Weigand 1a25133bcd [DAGCombine] Skip re-visiting EntryToken to avoid compile time explosion
During the main DAGCombine loop, whenever a node gets replaced, the new
node and all its users are pushed onto the worklist.  Omit this if the
new node is the EntryToken (e.g. if a store managed to get optimized
out), because re-visiting the EntryToken and its users will not uncover
any additional opportunities, but there may be a large number of such
users, potentially causing compile time explosion.

This compile time explosion showed up in particular when building the
SingleSource/UnitTests/matrix-types-spec.cpp test-suite case on any
platform without SIMD vector support.

Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D86963
2020-09-09 19:13:46 +02:00
Alon Kom 818cf30b83 [MachinePipeliner] Fix II_setByPragma initialization
II_setByPragma was not reset between two calls of the MachinePipeliner pass.

Reviewed By: bcahoon

Differential Revision: https://reviews.llvm.org/D87088
2020-09-09 13:38:35 +00:00
Denis Antrushin 4358fa782e [Statepoints] Update DAG root after emitting statepoint.
Since we always generate CopyToRegs for statepoint results,
we must update DAG root after emitting statepoint, so that
these copies are scheduled before any possible local uses.
Note: getControlRoot() flushes all PendingExports, not only
those we generate for relocates. If that becomes a problem,
we can change it to flush relocate exports only.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D87251
2020-09-09 20:22:10 +07:00
Simon Pilgrim d816499f95 [KnownBits] Move SelectionDAG::computeKnownBits ISD::ABS handling to KnownBits::abs
Move the ISD::ABS handling to a KnownBits::abs handler, to simplify future implementations in ValueTracking/GlobalISel.
2020-09-09 13:22:58 +01:00
Denis Antrushin 2a52c3301a [Statepoints] Properly handle const base pointer.
Current code in InstrEmitter assumes all GC pointers are either
VRegs or stack slots - hence, taking only one operand.
But it is possible to have a constant base, in which case it
occupies two machine operands.

Add a convenience function to StackMaps to get the index of the next
meta argument and use it in InstrEmitter to properly advance to
the next statepoint meta operand.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D87252
2020-09-09 14:07:00 +07:00
Craig Topper 844e94a502 [SelectionDAGBuilder] Remove Unnecessary FastMathFlags temporary. Use SDNodeFlags instead. NFCI
This was a missed simplication in D87200
2020-09-08 15:50:12 -07:00
Craig Topper b1e68f885b [SelectionDAGBuilder] Pass fast math flags to getNode calls rather than trying to set them after the fact.
This removes the after the fact FMF handling from D46854 in favor of passing fast math flags to getNode. This should be a superset of D87130.

This required adding a SDNodeFlags to SelectionDAG::getSetCC.

Now we manage to constant fold some cases involving undefs during the
initial getNode that we don't handle in later DAG combines.

Differential Revision: https://reviews.llvm.org/D87200
2020-09-08 15:27:21 -07:00
Volkan Keles 1242dd330d GlobalISel: Combine `op undef, x` to 0
https://reviews.llvm.org/D86611
2020-09-08 09:46:38 -07:00
Simon Pilgrim 3c83b967cf LiveRegUnits.h - reduce MachineRegisterInfo.h include. NFC.
We only need to include MachineInstrBundle.h, but this exposes an implicit dependency in MachineOutliner.h.

Also, remove duplicate includes from LiveRegUnits.cpp + MachineOutliner.cpp.
2020-09-08 17:27:00 +01:00
Jonas Paulsson 6dc3e22b57 [DAGTypeLegalizer] Handle ZERO_EXTEND of promoted type in WidenVecRes_Convert.
On SystemZ, a ZERO_EXTEND of an i1 vector handled by WidenVecRes_Convert()
always ended up being scalarized, because the type action of the input is
promotion which was previously an unhandled case in this method.

This fixes https://bugs.llvm.org/show_bug.cgi?id=47132.

Differential Revision: https://reviews.llvm.org/D86268

Patch by Eli Friedman.
Review: Ulrich Weigand
2020-09-08 16:49:51 +02:00
Craig Topper da79b1eecc [SelectionDAG][X86][ARM] Teach ExpandIntRes_ABS to use sra+add+xor expansion when ADDCARRY is supported.
Rather than using SELECT instructions, use SRA, UADDO/ADDCARRY and
XORs to expand ABS. This is the multi-part version of the sequence
we use in LegalizeDAG.
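
For a single 32-bit part this is the classic sign-mask trick; a standalone C++ sketch (the multi-part i64/i128 expansion additionally threads the carry through UADDO/ADDCARRY):

  #include <cassert>
  #include <cstdint>

  // abs(X) = (X + Sign) ^ Sign, where Sign is X arithmetically shifted right
  // by 31 (all-ones if X is negative, zero otherwise).
  static uint32_t abs32(int32_t X) {
    uint32_t U = (uint32_t)X;
    uint32_t Sign = X < 0 ? 0xFFFFFFFFu : 0u; // models the SRA by 31
    return (U + Sign) ^ Sign;                 // ADD then XOR; no selects
  }

  int main() {
    assert(abs32(5) == 5);
    assert(abs32(-5) == 5);
    assert(abs32(INT32_MIN) == 0x80000000u); // wraps (two's complement)
  }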

It's also the same as the Custom sequence used for i64 on 32-bit targets
and i128 on 64-bit targets. So we can remove the X86 customization.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D87215
2020-09-07 13:15:26 -07:00
Sanjay Patel 7a06b166b1 [DAGCombiner] allow more store merging for non-i8 truncated ops
This is a follow-up suggested in D86420 - if we have a pair of stores
in inverted order for the target endian, we can rotate the source
bits into place.
The "be_i64_to_i16_order" test shows a limitation of the current
function (which might be avoided if we integrate this function with
the other cases in mergeConsecutiveStores). In the earlier
"be_i64_to_i16" test, we skip the first 2 stores because we do not
match the full set as consecutive or rotate-able, but then we reach
the last 2 stores and see that they are an inverted pair of 16-bit
stores. The "be_i64_to_i16_order" test alters the program order of
the stores, so we miss matching the sub-pattern.
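
For intuition, a standalone little-endian C++ check (not the DAG code) that storing the two 16-bit halves in inverted order writes the same bytes as one 32-bit store of the value rotated by 16; the assert would fail on a big-endian host:

  #include <cassert>
  #include <cstdint>
  #include <cstring>

  int main() {
    uint32_t V = 0xAABBCCDDu;

    // Two i16 stores in "inverted" order: high half first, then low half.
    uint8_t A[4];
    uint16_t Hi = (uint16_t)(V >> 16), Lo = (uint16_t)V;
    std::memcpy(A + 0, &Hi, 2);
    std::memcpy(A + 2, &Lo, 2);

    // One i32 store of the value rotated by 16 bits.
    uint8_t B[4];
    uint32_t Rot = (V << 16) | (V >> 16);
    std::memcpy(B, &Rot, 4);

    assert(std::memcmp(A, B, 4) == 0); // identical memory images
  }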

Differential Revision: https://reviews.llvm.org/D87112
2020-09-07 14:12:36 -04:00
Momchil Velikov eb482afaf5 Reduce the number of memory allocations when displaying
a warning about clobbering reserved registers (NFC).

Also address some minor inefficiencies and style issues.

Differential Revision: https://reviews.llvm.org/D86088
2020-09-07 17:04:00 +01:00
Simon Pilgrim 6670f5d1e6 MachineStableHash.h - remove MachineInstr.h include. NFC.
Use forward declarations and move the include to MachineStableHash.cpp
2020-09-07 13:33:48 +01:00
Simon Wallis 79ea83e104 [SelectionDAG] memcpy expansion of const volatile struct ignores const zero
In getMemcpyLoadsAndStores(), a memcpy where the source is a zero constant is expanded to a MemOp::Set instead of a MemOp::Copy, even when the memcpy is volatile.
This is incorrect.

The fix is to add a check for volatile, and expand to MemOp::Copy in the volatile case.

Reviewed By: chill

Differential Revision: https://reviews.llvm.org/D87134
2020-09-07 13:22:09 +01:00
Simon Pilgrim e57cbcbdc1 LegalizeTypes.h - remove orphan SplitVSETCC declaration. NFCI.
The implementation no longer exists
2020-09-07 13:11:49 +01:00
Jay Foad 713c2ad60c [GlobalISel] Extend not_cmp_fold to work on conditional expressions
Differential Revision: https://reviews.llvm.org/D86709
2020-09-07 09:31:08 +01:00
Jay Foad 5350e1b509 [KnownBits] Implement accurate unsigned and signed max and min
Use the new implementation in ValueTracking, SelectionDAG and
GlobalISel.

Differential Revision: https://reviews.llvm.org/D87034
2020-09-07 09:09:01 +01:00
Amara Emerson d0abc75749 [GlobalISel] Disable the indexed loads combine completely unless forced. NFC.
The post-index matcher, before it queries the target legality, walks uses
of some instructions which in pathological cases can be massive. Since
no targets actually support indexed loads yet, disable this to stop wasting
compile time on something which is going to fail anyway.
2020-09-05 21:04:03 -07:00
Jonas Paulsson 714ceefad9 [SelectionDAG] Always intersect SDNode flags during getNode() node memoization.
Previously SDNodeFlags::intersectWith(Flags) would do nothing if Flags was
in an undefined state, which is very bad given that this is the default when
getNode() is called without passing an explicit SDNodeFlags argument.

This meant that if an already existing and reused node had a flag which the
second caller to getNode() did not set, that flag would remain uncleared.

This was exposed by https://bugs.llvm.org/show_bug.cgi?id=47092, where an NSW
flag was incorrectly set on an add instruction (which did in fact overflow in
one of the two original contexts), so when SystemZElimCompare removed the
compare with 0 trusting that flag, wrong-code resulted.

There is more that needs to be done in this area as discussed here:

Differential Revision: https://reviews.llvm.org/D86871

Review: Ulrich Weigand, Sanjay Patel
2020-09-05 10:30:38 +02:00
Fangrui Song 398ba37230 [LiveDebugVariables] Delete unneeded doInitialization 2020-09-04 13:27:42 -07:00
Simon Pilgrim 7582c5c023 CallingConvLower.h - remove unnecessary MachineFunction.h include. NFC.
Reduce to forward declaration, add the Register.h include that we still needed, move CCState::ensureMaxAlignment into CallingConvLower.cpp as it was the only function that needed the full definition of MachineFunction.

Fix a few implicit dependencies further down.
2020-09-04 12:16:48 +01:00
David Sherwood 73a3d350a4 [SVE][CodeGen] Fix up warnings in sve-split-insert/extract tests
I have fixed up some more ElementCount/TypeSize related warnings in
the following tests:

  CodeGen/AArch64/sve-split-extract-elt.ll
  CodeGen/AArch64/sve-split-insert-elt.ll

In SelectionDAG::CreateStackTemporary we were relying upon the implicit
cast from TypeSize -> uint64_t when calling MachineFrameInfo::CreateStackObject.
I've fixed this by passing in the known minimum size instead, which I
believe is fine because the associated stack id indicates whether this
is a scalable object or not.

I've also fixed up a case in TargetLowering::SimplifyDemandedBits when
extracting a vector element from a scalable vector. The result is a scalar,
hence it wasn't caught at the start of the function. If the vector is
scalable we just bail out for now.

Differential Revision: https://reviews.llvm.org/D86431
2020-09-04 09:51:31 +01:00
Michael Liao bf41c4d29e [codegen] Ensure target flags are cleared/set properly. NFC.
- When an operand is changed into an immediate value or the like, ensure its
  target flags are cleared or set properly.

Differential Revision: https://reviews.llvm.org/D87109
2020-09-03 18:37:39 -04:00
Puyan Lotfi 7fff1fbd3c [MIRVRegNamer] Experimental MachineInstr stable hashing (Fowler-Noll-Vo)
This hashing scheme has been useful out of tree, and I want to start
experimenting with it. Specifically I want to experiment on the
MIRVRegNamer, MIRCanonicalizer, and eventually the MachineOutliner.

This diff is a first step, that optionally brings stable hashing to the
MIRVRegNamer (and as a result, the MIRCanonicalizer).  We've tested this
hashing scheme on a lot of MachineOperand types that llvm::hash_value
can not handle in a stable manner.

This stable hashing was also the basis for

"Global Machine Outliner for ThinLTO" in EuroLLVM 2020

http://llvm.org/devmtg/2020-04/talks.html#TechTalk_58

Credits: Kyungwoo Lee, Nikolai Tillmann

Differential Revision: https://reviews.llvm.org/D86952
2020-09-03 16:13:09 -04:00
Amy Huang 5fe33f7399 [DebugInfo] Make DWARF ignore sizes on forward declared class types.
Make sure the sizes for forward declared classes aren't emitted in
DWARF.

This comes before https://reviews.llvm.org/D87062, which adds sizes to
all classes with definitions.

Bug: https://bugs.llvm.org/show_bug.cgi?id=47338

Differential Revision: https://reviews.llvm.org/D87070
2020-09-03 11:01:49 -07:00
Simon Pilgrim 1673a08044 SelectionDAG.h - remove unnecessary FunctionLoweringInfo.h include. NFCI.
Use forward declarations and move the include down to dependent files that actually use it.

This also exposes a number of implicit dependencies on KnownBits.h
2020-09-03 18:33:25 +01:00
Simon Pilgrim 46780cc0ee PHIEliminationUtils.cpp - remove unnecessary MachineBasicBlock.h include. NFCI.
This is already included in PHIEliminationUtils.h
2020-09-03 17:43:34 +01:00
Simon Pilgrim 6731eb644a Fix Wdocumentation trailing comments warnings. NFCI. 2020-09-03 17:43:34 +01:00
Simon Pilgrim b196c7192f Fix Wdocumentation warning. NFCI.
Remove \returns tag from a void function
2020-09-03 17:43:34 +01:00
Simon Pilgrim 898e42db93 GlobalISel/Utils.h - remove unused includes. NFCI.
Twine is unused, and TargetLowering can be reduced to a forward declaration and moved to Utils.cpp
2020-09-03 15:59:12 +01:00
Simon Pilgrim 91848b11b4 LowerEmuTLS.cpp - remove unused TargetLowering.h include. NFC.
We only needed llvm/IR/Constants.h.
2020-09-03 14:40:09 +01:00
Amara Emerson 2878ecc90f [StackProtector] Fix crash with vararg due to not checking LocationSize validity.
Differential Revision: https://reviews.llvm.org/D87074
2020-09-03 00:08:48 -07:00
Craig Topper b16e8687ab [CodeGenPrepare][X86] Teach optimizeGatherScatterInst to turn a splat pointer into GEP with scalar base and 0 index
This helps SelectionDAGBuilder recognize the splat can be used as a uniform base.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D86371
2020-09-02 20:44:12 -07:00
Jay Foad 099c089d4b [APInt] New member function setBitVal
Differential Revision: https://reviews.llvm.org/D87033
2020-09-02 21:40:31 +01:00
Anna Thomas 425573a2fa [ImplicitNullChecks] NFC: Refactor dependence safety check
After computing dependence, we check if it is safe to hoist by
identifying if it clobbers any liveIns in the sibling block (NullSucc).
This check is moved to its own function which will be used in the
soon-to-be modified dependence checking algorithm for implicit null
checks pass.

Tests-Run: lit tests on X86/implicit-*
2020-09-02 10:29:44 -04:00
Anna Thomas 6f7737c468 [ImplicitNullChecks] NFC: Separated out checks and added comments
Separated out some checks in isSuitableMemoryOp and added comments
explaining why some of those checks are done.

Tests-Run:X86 implicit null checks tests.
2020-09-02 10:29:44 -04:00
Paul Walker f72121254d [SVE] Don't reorder subvector/binop sequences when the resulting binop is not legal.
When lowering fixed length vector operations for SVE the subvector
operations are used extensively to marshall data between scalable
and fixed-length vectors. This means that sequences like:

  extract_subvec(binop(insert_subvec(a), insert_subvec(b)))

are very common. DAGCombine only checks if the resulting binop is
legal or can be custom lowered when undoing such sequences. When
it's custom lowering that is introducing them the result is an
infinite legalise->combine->legalise loop.

This patch extends the isOperationLegalOr... functions to include
a "LegalOnly" parameter to restrict the check to legal operations
only. Although isOperationLegal could be used it's common for
the affected code paths to be visited pre and post legalisation,
so the extra parameter keeps the code tidy.

Differential Revision: https://reviews.llvm.org/D86450
2020-09-02 11:01:33 +01:00
Sander de Smalen f13beac51b [AArch64][SVE] Preserve full vector regs over EH edge.
Unwinders may only preserve the lower 64bits of Neon and SVE registers,
as only the registers in the base ABI are guaranteed to be preserved
over the exception edge. The caller will need to preserve additional
registers for when the call throws an exception and the unwinder has
tried to recover state.

For example:

    svint32_t bar(svint32_t);
    svint32_t foo(svint32_t x, bool *err) {
      try { bar(x); } catch (...) { *err = true; }
      return x;
    }

`z0` needs to be spilled before the call to `bar(x)` and reloaded before
returning from foo, as the exception handler may have clobbered z0.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D84737
2020-09-02 10:54:18 +01:00
Igor Kudrin 3445ec9ba7 [DebugInfo] Emit a 1-byte value as a terminator of entries list in the name index.
As stated in section 6.1.1.2, DWARFv5, p. 142,
| The last entry for each name is followed by a zero byte that
| terminates the list. There may be gaps between the lists.

The patch changes emitting a 4-byte zero value to a 1-byte one, which
effectively removes the gap between entry lists, and thus saves
approximately 3 bytes per name; the calculation is not exact because
the total size of the table is aligned to 4.

Differential Revision: https://reviews.llvm.org/D86927
2020-09-02 16:12:39 +07:00
Igor Kudrin 71eed4808f [DebugInfo] Remove Dwarf5AccelTableWriter::Header::UnitLength. NFC.
The member is not in use; the unit length for the table is emitted as
a difference between two labels. Moreover, the type of the member might
be misleading, because for DWARF64 the field should be 64 bit long.

Differential Revision: https://reviews.llvm.org/D86912
2020-09-02 16:11:45 +07:00
Cameron McInally cfe2b81710 [SVE] Update INSERT_SUBVECTOR DAGCombine to use getVectorElementCount().
A small piece of the project to replace getVectorNumElements() with getVectorElementCount().

Differential Revision: https://reviews.llvm.org/D86894
2020-09-01 16:51:44 -05:00
Amara Emerson 520ab710fb Revert "Revert "[GlobalISel] Fold xor(cmp(pred, _, _), 1) -> cmp(inverse(pred), _, _)" (and dependent patch "Optimize away a Not feeding a brcond by using tbz instead of tbnz.")"
This reverts commit 8693ddc743.

Re-committing with the test requiring asserts.
2020-09-01 14:29:04 -07:00
Jordan Rupprecht 8693ddc743 Revert "[GlobalISel] Fold xor(cmp(pred, _, _), 1) -> cmp(inverse(pred), _, _)" (and dependent patch "Optimize away a Not feeding a brcond by using tbz instead of tbnz.")
This reverts commit 8ad8f484b6. It causes crashes when running `ninja check-llvm-codegen-aarch64-globalisel`, e.g.
http://lab.llvm.org:8011/builders/clang-with-thin-lto-ubuntu/builds/24132/steps/test-stage1-compiler/logs/stdio.
Note that the crash does not seem to reproduce in debug builds.

5ded444252 depends on this, so revert that too.
2020-09-01 13:31:57 -07:00
Craig Topper 4783e2c9c6 [MachineCopyPropagation] In isNopCopy, check the destination registers match in addition to the source registers.
Previously, if the sources matched we asserted that the destinations
matched too. But GPR <-> mask register copies on X86 can violate this
since we use the same K-registers for multiple sizes.

Fixes this ISPC issue https://github.com/ispc/ispc/issues/1851

Differential Revision: https://reviews.llvm.org/D86507
2020-09-01 12:44:32 -07:00
Amara Emerson 8ad8f484b6 [GlobalISel] Fold xor(cmp(pred, _, _), 1) -> cmp(inverse(pred), _, _)
This is needed for an upcoming change to how we translate conditional branches
which might generate these.
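For reference, the fold relies on compares producing a single bit, so xor-ing with 1 is exactly predicate inversion. A minimal standalone C++ check (illustrative only, not the GISel combine itself):

```cpp
#include <cassert>

int main() {
  for (int A = -2; A <= 2; ++A)
    for (int B = -2; B <= 2; ++B) {
      // (A < B) is 0 or 1, so xor-ing it with 1 flips the predicate.
      assert(((A < B) ^ 1) == (A >= B));
      assert(((A == B) ^ 1) == (A != B));
    }
  return 0;
}
```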

Differential Revision: https://reviews.llvm.org/D86383
2020-09-01 10:57:17 -07:00
Matt Arsenault 32a8a10b42 GlobalISel: Implement computeNumSignBits for G_SELECT 2020-09-01 12:50:19 -04:00
Matt Arsenault 35c94d3f7e GlobalISel: Port smarter known bits for umin/umax from DAG 2020-09-01 12:50:15 -04:00
Matt Arsenault 759482ddaa GlobalISel: Implement computeKnownBits for G_BSWAP and G_BITREVERSE 2020-09-01 12:49:57 -04:00
Sam Tebbs 15e880a04f [DAGCombiner] Fold an AND of a masked load into a zext_masked_load
This patch folds an AND of a masked load and build vector into a zero
extended masked load.

Differential Revision: https://reviews.llvm.org/D86789
2020-09-01 17:02:07 +01:00
Volkan Keles 061182b7ba GlobalISel: Add combines for extend operations
https://reviews.llvm.org/D86516
2020-09-01 08:50:06 -07:00
Matt Arsenault 9e7e1b2d4b GlobalISel: Implement computeNumSignBits for G_SEXTLOAD/G_ZEXTLOAD 2020-09-01 11:20:02 -04:00
Matt Arsenault 92090e8bd8 GlobalISel: Implement computeKnownBits for G_UNMERGE_VALUES 2020-09-01 11:19:27 -04:00
Sourabh Singh Tomar 03812041d8 [NFCI] Removed an un-used declaration got accidentally introduced in f91d18eaa9 2020-09-01 15:59:04 +05:30
David Sherwood 9fbb113247 [SVE][CodeGen] Fix TypeSize/ElementCount related warnings in sve-split-load.ll
I have fixed up a number of warnings resulting from TypeSize -> uint64_t
casts and calling getVectorNumElements() on scalable vector types. I
think most of the changes are fairly trivial except for those in
DAGTypeLegalizer::SplitVecRes_MLOAD, where I've tried to ensure we create
the MachineMemoryOperands in a sensible way for scalable vectors.

I have added a CHECK line to the following test:

  CodeGen/AArch64/sve-split-load.ll

that ensures no new warnings are added.

Differential Revision: https://reviews.llvm.org/D86697
2020-09-01 07:47:59 +01:00
Qiu Chaofan 5475154865 [NFC] [DAGCombiner] Refactor bitcast folding within fabs/fneg
fabs and fneg share a common transformation:

(fneg (bitconvert x)) -> (bitconvert (xor x sign))
(fabs (bitconvert x)) -> (bitconvert (and x ~sign))

This patch separates the shared code into a single method.
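For reference, a scalar sketch of the shared bit trick on an IEEE-754 float in plain C++ (illustrative only; the helper names are not from the patch):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// fneg via "bitcast, xor with the sign mask, bitcast back".
static float fnegViaXor(float X) {
  uint32_t Bits;
  std::memcpy(&Bits, &X, sizeof(Bits));
  Bits ^= 0x80000000u;
  std::memcpy(&X, &Bits, sizeof(X));
  return X;
}

// fabs via "bitcast, and with the inverted sign mask, bitcast back".
static float fabsViaAnd(float X) {
  uint32_t Bits;
  std::memcpy(&Bits, &X, sizeof(Bits));
  Bits &= ~0x80000000u;
  std::memcpy(&X, &Bits, sizeof(X));
  return X;
}

int main() {
  assert(fnegViaXor(1.5f) == -1.5f);
  assert(fabsViaAnd(-2.25f) == 2.25f);
  return 0;
}
```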

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D86862
2020-09-01 00:48:12 +08:00
Qiu Chaofan eb2a405c18 [NFC] [DAGCombiner] Remove unnecessary negation in visitFNEG
In visitFNEG of DAGCombiner, the folding of (fneg (fsub c, x)) is
redundant since getNegatedExpression already handles it.
2020-09-01 00:35:01 +08:00
Sanjay Patel 1c9a09f42e [DAGCombiner] skip reciprocal divisor optimization for x/sqrt(x), better
I tried to fix this in:
rG716e35a0cf53
...but that patch depends on the order that we encounter the
magic "x/sqrt(x)" expression in the combiner's worklist.

This patch should improve that by waiting until we walk the
user list to decide if there's a use to skip.

The AArch64 test reveals another (existing) ordering problem
though - we may try to create an estimate for plain sqrt(x)
before we see that it is part of a 1/sqrt(x) expression.
2020-08-31 09:35:59 -04:00
Sanjay Patel 716e35a0cf [DAGCombiner] skip reciprocal divisor optimization for x/sqrt(x)
In general, we probably want to try the multi-use reciprocal
transform before sqrt transforms, but x/sqrt(x) is a special-case
because that will always reduce to plain sqrt(x) or an estimate.

The AArch64 tests show that the transform is limited by TLI
hook to patterns where there are 3 or more uses of the divisor.
So this change can result in an extra division compared to
what we had, but that's the intended behavior based on the
current setting of that hook.
2020-08-30 10:55:45 -04:00
Nikita Popov 51d34c0c53 [TargetLowering] Strip tailing whitespace (NFC) 2020-08-29 18:09:08 +02:00
Kai Luo b904324788 [DAGCombiner] Enhance (zext(setcc))
Currently, `v:t = zext(setcc x,y,cc)` is transformed to `select x, y, 1:t, 0:t, cc`. This misses some opportunities if x's type size is less than `t`'s size. This patch enhances the above transformation.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D86687
2020-08-29 03:37:41 +00:00
Matt Arsenault 1b201914b5 GlobalISel: Combine out redundant sext_inreg
The scalar tests don't work yet, since computeNumSignBits apparently
doesn't handle sextload yet, and sext folds into the load first.
2020-08-28 17:57:31 -04:00
Jon Roelofs b15f2bd3ad [early-ifcvt] Add OptRemarks 2020-08-28 15:51:18 -06:00
Craig Topper aab90384a3 [Attributes] Add a method to check if an Attribute has AttrKind None. Use instead of hasAttribute(Attribute::None)
There's a special case in hasAttribute for None when pImpl is null. If pImpl is not null we dispatch to pImpl->hasAttribute which will always return false for Attribute::None.

So if we just want to check for None, it's sufficient to just check that pImpl is null, which can even be done inline.

This patch adds a helper for that case which I hope will speed up our getSubtargetImpl implementations.

Differential Revision: https://reviews.llvm.org/D86744
2020-08-28 13:23:45 -07:00
Benjamin Kramer 52cc97a0db [CodeGenPrepare] Zap the argument of llvm.assume when deleting it
We know that the argument is most likely dead, so we can purge it
early. Otherwise it would make it to codegen, and can block further
optimizations.
2020-08-28 20:52:22 +02:00
Snehasish Kumar 94faadaca4 [llvm][CodeGen] Machine Function Splitter
We introduce a codegen optimization pass which splits functions into hot and cold
parts. This pass leverages the basic block sections feature recently
introduced in LLVM from the Propeller project. The pass targets
functions with profile coverage, identifies cold blocks and moves them
to a separate section. The linker groups all cold blocks across
functions together, decreasing fragmentation and improving icache and
itlb utilization.

We evaluated the Machine Function Splitter pass on clang bootstrap and
SPECInt 2017.

For clang bootstrap we observe a mean 2.33% runtime improvement with a
~32% reduction in itlb and stlb misses. Additionally, L1 icache misses
reduced by 9.5% while L2 instruction misses reduced by 20%.

For SPECInt we report the change in IntRate for the C/C++
benchmarks. All benchmarks apart from mcf and x264 improve, on average
by 0.6% with the max for deepsjeng at 1.6%.

Benchmark		% Change
500.perlbench_r		 0.78
502.gcc_r		 0.82
505.mcf_r		-0.30
520.omnetpp_r		 0.18
523.xalancbmk_r		 0.37
525.x264_r		-0.46
531.deepsjeng_r		 1.61
541.leela_r		 0.83
557.xz_r		 0.15

Differential Revision: https://reviews.llvm.org/D85368
2020-08-28 11:10:14 -07:00
Denis Antrushin fabd4c1ae1 [Statepoint] Always spill base pointer.
There is a subtle problem with new statepoint lowering scheme
when base and pointers are the same (see PR46917 for more context):

%1 = STATEPOINT ... %0, %0(tied-def 0)...

if, for some reason, the register allocator decides to put two instances
of %0 into two different objects (registers or spill slots), we may
end up with

$reg3 = STATEPOINT ... $reg2, $reg1(tied-def 0)...

and nothing will prevent later passes from sinking uses of $reg2 below
the statepoint, which is incorrect.

As a short term solution, always put base pointers on stack during
lowering.
A longer term solution may be to rework MIR statepoint format to
avoid GC pointer duplication in statepoint argument list.

Reviewed By: reames

Differential Revision: https://reviews.llvm.org/D86712
2020-08-28 23:22:07 +07:00
Yonghong Song 443d352a1c [GlobalISel] fix a compilation error with gcc 6.3.0
With gcc 6.3.0, I hit the following compilation error:
  ../lib/CodeGen/GlobalISel/Combiner.cpp: In member function
      ‘bool llvm::Combiner::combineMachineInstrs(llvm::MachineFunction&,
       llvm::GISelCSEInfo*)’:
  ../lib/CodeGen/GlobalISel/Combiner.cpp:156:54: error: suggest parentheses
       around ‘&&’ within ‘||’ [-Werror=parentheses]
     assert(!CSEInfo || !errorToBool(CSEInfo->verify()) &&
                        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~
                            "CSEInfo is not consistent. Likely missing calls to "
                            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                            "observer on mutations");

Fix the code as suggested by the compiler.
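A standalone illustration of the warning and the suggested fix (not the actual LLVM assertion; the behaviour is identical either way, since the string literal is always truthy):

```cpp
#include <cassert>

void check(bool HasCSEInfo, bool Verified) {
  // gcc 6.3 warns here because the message string binds to '&&' inside '||':
  //   assert(!HasCSEInfo || Verified && "CSEInfo is not consistent");

  // Parenthesizing the '&&' term, as the compiler suggests, silences it:
  assert(!HasCSEInfo || (Verified && "CSEInfo is not consistent"));
}
```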
2020-08-28 09:16:52 -07:00
QingShan Zhang deb4b25807 [DAGCombine] Don't delete the node if it has uses immediately
This is the follow-up patch for https://reviews.llvm.org/D86183, as we failed to delete the node if NegX == NegY, which still has uses after we create the new node.
```
    if (NegX && (CostX <= CostY)) {
      Cost = std::min(CostX, CostZ);
      RemoveDeadNode(NegY);
      return DAG.getNode(Opcode, DL, VT, NegX, Y, NegZ, Flags);  #<-- NegY is used here if NegY == NegX.
    }
```

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D86689
2020-08-28 16:13:43 +00:00
David Sherwood f4257c5832 [SVE] Make ElementCount members private
This patch changes ElementCount so that the Min and Scalable
members are now private and can only be accessed via the get
functions getKnownMinValue() and isScalable(). In addition I've
added some other member functions for more commonly used operations.
Hopefully this makes the class more useful and will reduce the
need for calling getKnownMinValue().
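A rough sketch of the resulting shape of the class (simplified; not the exact LLVM definition):

```cpp
// Simplified model: the members are private and only reachable through
// the query functions.
class ElementCount {
  unsigned Min;   // known minimum number of elements
  bool Scalable;  // if true, the real count is a runtime multiple of Min

public:
  ElementCount(unsigned Min, bool Scalable) : Min(Min), Scalable(Scalable) {}

  unsigned getKnownMinValue() const { return Min; }
  bool isScalable() const { return Scalable; }
};
```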

Differential Revision: https://reviews.llvm.org/D86065
2020-08-28 14:43:53 +01:00
Denis Antrushin 248a67f144 [Statepoint] Turn assert into check in foldPatchpoint.
The original D81646 had a check for tied regs in foldPatchpoint().
Due to unfortunate miscommunication over review comments and
addressing some comments post commit, it turned into an assertion.

We had an offline talk and agreed that with the current implementation
this path is possible, so I'm changing it back to a check.

Note that this is a workaround until the issues described in PR46917 are
resolved.
2020-08-28 20:00:23 +07:00
Sam Parker b30adfb529 [ARM][LowOverheadLoops] Liveouts and reductions
Remove the code that tried to look for reduction patterns, since the
vectorizer and isel can now produce predicated arithmetic instructions
within the loop body. This has required some reorganisation and fixes
around live-out and predication checks, as well as looking for cases
where an input/output is initialised to zero.

Differential Revision: https://reviews.llvm.org/D86613
2020-08-28 13:56:16 +01:00
Matt Arsenault 5feca7c9c3 GlobalISel: Implement computeNumSignBits for G_SEXT_INREG 2020-08-27 19:44:37 -04:00
Matt Arsenault f08bbde83f Correctly revert "GlobalISel: Use & operator on KnownBits"
I mis-resolved the revert through moving the code to another function.
2020-08-27 19:08:31 -04:00
Matt Arsenault 6cf4f25670 Revert "GlobalISel: Use & operator on KnownBits"
This reverts commit e53b799779.

Confusingly, this does not simply and the two sets of known bits, but
implements known bits for the and operator.
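To spell out the distinction with a hypothetical two-mask struct (not LLVM's actual KnownBits API):

```cpp
#include <cstdint>

struct Known {
  uint64_t Zero; // bits known to be 0
  uint64_t One;  // bits known to be 1
};

// "Simply and-ing the two sets of known bits": keep only the facts that
// both sources agree on.
Known keepCommonFacts(Known A, Known B) {
  return {A.Zero & B.Zero, A.One & B.One};
}

// What the & operator actually implements: the known bits of the result
// of ANDing two values together.
Known knownBitsOfAnd(Known LHS, Known RHS) {
  return {LHS.Zero | RHS.Zero,  // a known zero on either side gives a zero
          LHS.One & RHS.One};   // a known one needs known ones on both sides
}
```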
2020-08-27 18:52:34 -04:00
Brad Smith d870e36326 [SSP] Restore setting the visibility of __guard_local to hidden for better code generation.
Patch by: Philip Guenther
2020-08-27 17:17:38 -04:00
Matt Arsenault abc99ab572 GlobalISel: Implement known bits for min/max 2020-08-27 16:56:17 -04:00
Matt Arsenault ee679638d7 MIR: Infer not-SSA for subregister defs
It's possible to have a single virtual register def with a subreg
index that would pass the previous check, but it's not possible to
have a subregister def in SSA.

This is in preparation for adding stricter checks for SSA MIR.
2020-08-27 16:56:16 -04:00
Eli Friedman 8d21985a75 [RegisterScavenging] Delete dead function unprocess(). 2020-08-27 13:19:32 -07:00
Matt Arsenault e53b799779 GlobalISel: Use & operator on KnownBits
Avoid repeating for zero and one
2020-08-27 14:07:18 -04:00
Matt Arsenault 531f7063ba GlobalISel: Implement known bits for G_MERGE_VALUES 2020-08-27 14:07:18 -04:00
Aditya Nandakumar db464a3dbf [GISel] Add new GISel combiners for G_SELECT
https://reviews.llvm.org/D83833

Patch adds two new GICombinerRules for G_SELECT. The rules include:
combining selects with undef comparisons into their first selectee value,
and combining away selects with constant comparisons. Patch additionally
adds a new combiner test for the AArch64 target to test these new G_SELECT
combiner rules and the existing select_same_val combiner rule.

Patch by  mkitzan
2020-08-27 09:40:15 -07:00
Aditya Nandakumar 5c2db1655b [GISel]: Fix one more CSE Non determinism
https://reviews.llvm.org/D86676

Sometimes we can have the following code

 x:gpr(s32) = G_OP

Say we build G_OP2 to the same x and then delete the previous instruction. Using something like

 Register X = ...;
 auto NewMIB = CSEBuilder.buildOp2(X, ... args);

Currently there's a mismatch in how NewMIB is profiled and inserted into the CSEMap (i.e. it doesn't consider register bank/register class along with type). Unify the profiling by refactoring and calling the common method.

This was found by turning on CSEInfo::verify at the end of each of our GISel passes, which turns inconsistent state/non-determinism in CSEing into crashes; this usually indicates missing calls to the Observer on mutations (the most common case). Here non-determinism usually means failing to CSE sometimes, and almost never producing incorrect code.
This patch also adds this verification at the end of the combiners.
2020-08-27 09:06:21 -07:00
Lucas Prates 3d943bcd22 [CodeGen] Properly propagating Calling Convention information when lowering vector arguments
When joining the legal parts of vector arguments into their original value
during the lowering of Formal Arguments in SelectionDAGBuilder, the Calling
Convention information was not being propagated when handling each
individual part. The same omission did not happen when lowering calls,
causing a mismatch.

This patch fixes the issue by properly propagating the Calling
Convention details.

This fixes Bugzilla #47001.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D86715
2020-08-27 17:01:10 +01:00
Drew Wock 0ec098e22b [FPEnv] Allow fneg + strict_fadd -> strict_fsub in DAGCombiner
This is the first of a set of DAGCombiner changes enabling strictfp
optimizations. I want to test the waters with this to make sure changes
like these are acceptable for the strictfp case - this particular change
should preserve exception ordering and result precision perfectly, and
many other possible changes appear to be able to as well.

Copied from regular fadd combines but modified to preserve ordering via
the chain, this change allows strict_fadd x, (fneg y) to become
strict_fsub x, y and strict_fadd (fneg x), y to become strict_fsub y, x.

Differential Revision: https://reviews.llvm.org/D85548
2020-08-27 08:17:01 -04:00
OCHyams b6cca0ec05 Revert "[DWARF] Add cuttoff guarding quadratic validThroughout behaviour"
This reverts commit b9d977b0ca.

This cutoff is no longer required. The commit 34ffa7fc501 (D86153) introduces a
performance improvement which was tested against the motivating case for this
patch.

Discussed in differential revision: https://reviews.llvm.org/D86153
2020-08-27 11:52:30 +01:00
OCHyams 57d8acac64 [DwarfDebug] Improve validThroughout performance (4/4)
Almost NFC (see end).

The backwards scan in validThroughout significantly contributed to compile time
for a pathological case, causing the 'X86 Assembly Printer' pass to account for
roughly 70% of the run time. This patch guards the loop against running
unnecessarily, bringing the pass contribution down to 4%.

Almost NFC: There is a hack in validThroughout which promotes single constant
value DBG_VALUEs in the prologue to be live throughout the function. We're more
likely to hit this code path with this patch applied. Similarly to the parent
patches there is a small coverage change reported in the order of 10s of bytes.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D86153
2020-08-27 11:52:30 +01:00
OCHyams 3c491881d2 [DwarfDebug] Improve multi-BB single location detection in validThroughout (3/4)
With the changes introduced in D86151 we can now check for single locations
which span multiple blocks for inlined scopes and blocks.

D86151 introduced the InstructionOrdering parameter, replacing a scan through
MBB instructions. The functionality to compare instruction positions across
blocks was add there, and this patch just removes the exit checks that were
previously (but no longer) required.

CTMark shows a geomean binary size reduction of 2.2% for RelWithDebInfo builds.
llvm-locstats (using D85636) shows a very small variable location coverage
change in 5 of 10 binaries, but just like in D86151 it is only in the order of
10s of bytes.

Reviewed By: djtodoro

Differential Revision: https://reviews.llvm.org/D86152
2020-08-27 11:52:29 +01:00
OCHyams 0b5a8050ea [DwarfDebug] Improve single location detection in validThroughout (2/4)
With this patch we're now accounting for two more cases which should be
considered 'valid throughout': First, where RangeEnd is ScopeEnd. Second, where
RangeEnd comes before ScopeEnd when including meta instructions, but both are
preceded by the same non-meta instruction.

CTMark shows a geomean binary size reduction of 1.5% for RelWithDebInfo builds.
`llvm-locstats` (using D85636) shows a very small variable location coverage
change in 2 of 10 binaries, but it is in the order of 10s of bytes which lines
up with my expectations.

I've added a test which checks both of these new cases. The first check in the
test isn't strictly necessary for this patch. But I'm not sure that it is
explicitly tested anywhere else, and is useful for the final patch in the
series.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D86151
2020-08-27 11:52:29 +01:00
OCHyams e048ea7b1a [NFC][DebugInfo] Create InstructionOrdering helper class (1/4)
Group the map and methods used to query instruction ordering for trimVarLocs
(D82129) into a class. This will make it easier to reuse the functionality
in upcoming patches.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D86150
2020-08-27 11:52:29 +01:00
Sander de Smalen 4e9b66de3f [AArch64][SVE] Add missing debug info for ACLE types.
This patch adds type information for SVE ACLE vector types,
by describing them as vectors, with a lower bound of 0, and
an upper bound described by a DWARF expression using the
AArch64 Vector Granule register (VG), which contains the
runtime multiple of 64bit granules in an SVE vector.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D86101
2020-08-27 10:56:42 +01:00
Sam Parker a3e41d4581 [ARM] Make MachineVerifier more strict about terminators
Fix the ARM backend's analyzeBranch so it doesn't ignore predicated
return instructions, and make the MachineVerifier rule more strict.

Differential Revision: https://reviews.llvm.org/D40061
2020-08-27 07:10:20 +01:00
Matt Arsenault 5207545a86 GlobalISel: IRTranslate minimum of pointer sizes on memcpy
I forgot to squash this with 0b7f6cc71a
2020-08-26 20:10:00 -04:00
Matt Arsenault 0b7f6cc71a GlobalISel: Add generic instructions for memory intrinsics
AArch64, X86 and Mips currently consume these directly and custom
lower them to produce a libcall, but really these should follow the
normal legalization process through the libcall/lower action.
2020-08-26 20:08:45 -04:00
Alina Sbirlea 0b34226304 Use properlyDominates in RDFLiveness when sorting on dominance.
Summary:
When looking for all reaching definitions, we sort basic blocks on dominance. When sorting, using properlyDominates() handles the case A == B.

Authored by: pranavb

Differential Revision: https://reviews.llvm.org/D86661
2020-08-26 15:16:40 -07:00
Sanjay Patel 54a5dd485c [DAGCombiner] allow store merging non-i8 truncated ops
We have a gap in our store merging capabilities for shift+truncate
patterns as discussed in:
https://llvm.org/PR46662

I generalized the code/comments for this function in earlier commits,
so we only need to ease the type restriction and adjust the address/endian
checking to make this work.

AArch64 lets us switch endian to make sure that patterns are matched
either way.
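As a scalar illustration of the kind of shift+truncate pattern involved (a hedged C++ sketch; assumes a little-endian layout, and the function names are made up):

```cpp
#include <cstdint>
#include <cstring>

// Two truncating 16-bit stores of the halves of a 32-bit value...
void storeHalves(uint8_t *P, uint32_t X) {
  uint16_t Lo = uint16_t(X);
  uint16_t Hi = uint16_t(X >> 16);
  std::memcpy(P, &Lo, sizeof(Lo));
  std::memcpy(P + 2, &Hi, sizeof(Hi));
}

// ...produce the same bytes as a single 32-bit store on a little-endian
// target, which is the merged form the combine wants to reach.
void storeWord(uint8_t *P, uint32_t X) {
  std::memcpy(P, &X, sizeof(X));
}
```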

Differential Revision: https://reviews.llvm.org/D86420
2020-08-26 15:23:08 -04:00
aartbik 72305a08ff [llvm] [DAG] Fix bug in llvm.get.active.lane.mask lowering
This intrinsic previously only accepted proper machine vector lengths.
That is fixed by this change, with unit tests.

https://bugs.llvm.org/show_bug.cgi?id=47299

Reviewed By: SjoerdMeijer

Differential Revision: https://reviews.llvm.org/D86585
2020-08-26 10:16:31 -07:00
Craig Topper 28bd47fc47 [LegalizeTypes] Remove WidenVecRes_Shift and just use WidenVecRes_Binary
This function seems to allow for the shift amount to have a different type than the result, but I don't think we do that anywhere else for vector shifts. We also don't have any support for legalizing the shift amount alone if the result is legal and the shift amount type isn't. The code coverage report here shows this code as uncovered http://lab.llvm.org:8080/coverage/coverage-reports/coverage/Users/buildslave/jenkins/workspace/coverage/llvm-project/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp.html

Differential Revision: https://reviews.llvm.org/D86475
2020-08-26 09:57:41 -07:00
Jay Foad 75d159f924 [LegalizeTypes] Add ROTL/ROTR to ScalarizeVectorResult.
We can scalarize these just like any other binary operation.

Fixes https://bugs.llvm.org/show_bug.cgi?id=47303 caused by D77152.

Differential Revision: https://reviews.llvm.org/D86601
2020-08-26 14:42:57 +01:00
Matt Arsenault eb074088c9 GlobalISel: Combine G_ADD of G_PTRTOINT to G_PTR_ADD
This produces less work for addressing mode matching. I think this is
safe since I don't think machine IR is supposed to give the same
aliasing properties as getelementptr in the IR.
2020-08-26 08:57:15 -04:00
QingShan Zhang ebf3b188c6 [Scheduling] Implement a new way to cluster loads/stores
Before calling the target hook to determine if two loads/stores are clusterable,
we put them into different groups to avoid fake clustering due to dependencies.
For now, we put the loads/stores into the same group if they have
the same predecessor. We assume that, if two loads/stores have the same
predecessor, it is likely that they do not depend on each other.

However, one SUnit might have several predecessors and, for now, we just
pick the first predecessor that has a non-data/non-artificial dependency,
which is too arbitrary, and we have been struggling to fix it.

So, I am proposing a better implementation:
1. Collect all the loads/stores that have memory info first to reduce the complexity.
2. Sort these loads/stores so that we can stop the search as early as possible.
3. For each load/store, search for the first non-dependent instruction in the
   sorted order, and check whether they can be clustered.

Reviewed By: Jay Foad

Differential Revision: https://reviews.llvm.org/D85517
2020-08-26 12:33:59 +00:00
Sam Tebbs 85dd852a0d [RDA] Don't visit the BB of the instruction in getReachingUniqueMIDef
If the basic block of the instruction passed to getUniqueReachingMIDef
is a transitive predecessor of itself and has a definition of the
register, the function will return that definition even if it is after
the instruction given to the function. This patch stops the function
from scanning the instruction's basic block to prevent this.

Differential Revision: https://reviews.llvm.org/D86607
2020-08-26 12:40:39 +01:00
Jay Foad b7e3599a22 [SelectionDAG] Handle non-power-of-2 bitwidths in expandROT
Differential Revision: https://reviews.llvm.org/D86449
2020-08-26 09:20:46 +01:00
Fangrui Song 82d0749749 [TargetLoweringObjectFileImpl] Make .llvmbc and .llvmcmd non-SHF_ALLOC
There are two ways .llvmbc can be produced:

* clang -c -fembed-bitcode=all (which also produces .llvmcmd)
* LTO backend: ld.lld -mllvm -lto-embed-bitcode or -plugin-opt=-lto-embed-bitcode

.llvmbc and .llvmcmd have the SHF_ALLOC flag, so they can be dropped by
--gc-sections.

This patch sets SectionKind::Metadata to drop the SHF_ALLOC flag. This
is conceptually correct: the two sections are not part of the process
image, so SHF_ALLOC is not appropriate.

`test/LTO/X86/embed-bitcode.ll`: changed `llvm-objcopy -O binary --only-section` to
`llvm-objcopy --dump-section`. `-O binary` does not dump non-SHF_ALLOC sections.

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D86374
2020-08-25 13:37:29 -07:00
Sjoerd Meijer 39522b1e10 [SelectionDAG] Legalize intrinsic get.active.lane.mask
This adapts legalization of intrinsic get.active.lane.mask to the new semantics
as described in D86147. Because the second argument is now the loop tripcount,
we legalize this intrinsic to an 'icmp ULT' instead of an ULE when it was the
backedge-taken count.
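A scalar sketch of the difference, with made-up helper names (illustrative only):

```cpp
#include <cstdint>
#include <vector>

// New semantics: the second operand is the loop trip count, so lane I is
// active iff Base + I < TripCount (an unsigned 'ULT' compare).
std::vector<bool> maskFromTripCount(uint64_t Base, uint64_t TripCount,
                                    unsigned NumLanes) {
  std::vector<bool> Mask(NumLanes);
  for (unsigned I = 0; I < NumLanes; ++I)
    Mask[I] = Base + I < TripCount;
  return Mask;
}

// Old semantics: the second operand was the backedge-taken count, so the
// compare was an unsigned 'ULE' instead.
std::vector<bool> maskFromBTC(uint64_t Base, uint64_t BTC, unsigned NumLanes) {
  std::vector<bool> Mask(NumLanes);
  for (unsigned I = 0; I < NumLanes; ++I)
    Mask[I] = Base + I <= BTC;
  return Mask;
}
```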

Differential Revision: https://reviews.llvm.org/D86302
2020-08-25 15:00:10 +01:00
Jeremy Morse 121a49d839 [LiveDebugValues] Add switches for using instr-ref variable locations
This patch adds the -Xclang option
"-fexperimental-debug-variable-locations" and the same LLVM CodeGen option,
to pick which variable location tracking solution to use.

Right now all the switch does is pick which LiveDebugValues
implementation to use, the normal VarLoc one or the instruction
referencing one in rGae6f78824031. Over time, the aim is to add fragments
of support in aid of the value-tracking RFC:

  http://lists.llvm.org/pipermail/llvm-dev/2020-February/139440.html

also controlled by this command line switch. That will slowly move
variable locations to be defined by an instruction calculating a value,
and a DBG_INSTR_REF instruction referring to that value. Thus, this is
going to grow into a "use the new kind of variable locations" switch,
rather than just "use the new LiveDebugValues implementation".

Differential Revision: https://reviews.llvm.org/D83048
2020-08-25 14:58:48 +01:00
David Green 5b7e27a4db [ARM][CGP] Fix scalar condition selects for MVE
The arm backend does not handle select/select_cc on vectors with scalar
conditions, preferring to expand them in codegenprepare instead. This
usually works except when optimizing for size, where the optsize check
would end up overruling the backend isSelectSupported check.

We could handle the selects in ISel too, but this seems like smaller
code than trying to splat the condition to all lanes.

Differential Revision: https://reviews.llvm.org/D86433
2020-08-25 12:09:06 +01:00
Paul Walker 73ac3c0ede [SVE] Lower scalable vector ISD::FNEG operations.
Also updates isConstOrConstSplatFP to allow the mul(A,-1) -> neg(A)
transformation when -1 is expressed as an ISD::SPLAT_VECTOR.

Differential Revision: https://reviews.llvm.org/D86415
2020-08-25 11:22:28 +01:00
Sam Parker 85a5c65f69 [NFC][RDA] Add explicit def check
Explicitly check that there is a local def prior to the given
instruction in getReachingLocalMIDef instead of just relying on
a nullptr return from getInstFromId.
2020-08-25 08:37:45 +01:00
Venkataramanan Kumar 62e91bf563 [DAGCombine]: Fold X/Sqrt(X) to Sqrt(X)
With FMF ("nsz" and "reassoc"), fold X/Sqrt(X) to Sqrt(X).

This is done after targets have the chance to produce a
reciprocal sqrt estimate sequence because that expansion
is probably more efficient than an expansion of a
non-reciprocal sqrt. That is also why we deferred doing
this transform in IR (D85709).
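The flags are needed because the fold is not value-exact; a quick standalone C++ check (illustrative, not from the patch):

```cpp
#include <cmath>
#include <cstdio>

int main() {
  // For positive X the two forms agree mathematically; in floating point
  // they may still differ in the last ulp, which is why relaxed FP math
  // is needed to license the rewrite.
  double X = 2.0;
  std::printf("%.17g vs %.17g\n", X / std::sqrt(X), std::sqrt(X));

  // At X == 0 they differ qualitatively: 0/sqrt(0) is 0/0 == NaN, while
  // sqrt(0) == 0, so strict FP semantics would forbid the fold entirely.
  std::printf("%g vs %g\n", 0.0 / std::sqrt(0.0), std::sqrt(0.0));
  return 0;
}
```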

Differential Revision: https://reviews.llvm.org/D86403
2020-08-24 18:16:13 -04:00
Craig Topper 43465a4375 [LegalizeTypes][X86] Add ROTL/ROTR to WidenVectorResult.
We can widen these just like any other binary operation.

Added test cases for v2i32 for X86 for coverage.

Fixes failures seen after D77152.
2020-08-24 10:10:20 -07:00
Jay Foad a522067692 [SDAG] Convert FSHL <--> FSHR if the target only supports one of them
D77152 tried to do this but got it wrong in the shift-by-zero case.
D86430 reverted the wrong code. Reimplement the optimization with
different code depending on whether the shift amount is known to be
non-zero (modulo bitwidth).

This improves code quality for fshl tests on AMDGPU, which only has an
fshr instruction.

Differential Revision: https://reviews.llvm.org/D86438
2020-08-24 17:47:10 +01:00
Matt Arsenault 517caca359 GlobalISel: Improve dead instruction debug printing
This was printing the "Is dead" on a separate line from the
instruction, which was harder to follow.
2020-08-24 10:12:00 -04:00
Matt Arsenault e1644a3779 GlobalISel: Reduce G_SHL width if source is extension
shl ([sza]ext x, y) => zext (shl x, y).

This turns expensive 64-bit shifts into 32-bit ones if the shift does not
overflow the source type.

This is a port of an AMDGPU DAG combine added in
5fa289f0d8. InstCombine does this
already, but we need to do it again here to apply it to shifts
introduced for lowered getelementptrs. This will help matching
addressing modes that use 32-bit offsets in a future patch.

TableGen annoyingly assumes only a single match data operand, so
introduce a reusable struct. However, this still requires defining a
separate GIMatchData for every combine which is still annoying.

Adds a morally equivalent function to the existing
getShiftAmountTy. Without this, we would have to try to repeatedly
query the legalizer info and guess at what type to use for the shift.
2020-08-24 09:42:40 -04:00
Bjorn Pettersson 7a4e26adc8 [SelectionDAG] Fix miscompile bug in expandFunnelShift
This is a fixup of commit 0819a6416f (D77152) which could
result in miscompiles. The miscompile could only happen for targets
where isOperationLegalOrCustom could return different values for
FSHL and FSHR.

The commit mentioned above added logic in expandFunnelShift to
convert between FSHL and FSHR by swapping direction of the
funnel shift. However, that transform is only legal if we know
that the shift count (modulo bitwidth) isn't zero.

Basically, since fshr(-1,0,0)==0 and fshl(-1,0,0)==-1, doing a
rewrite such as fshr(X,Y,Z) => fshl(X,Y,0-Z) would be incorrect if
Z modulo bitwidth could be zero.

```
$ ./alive-tv /tmp/test.ll

----------------------------------------
define i32 @src(i32 %x, i32 %y, i32 %z) {
%0:
  %t0 = fshl i32 %x, i32 %y, i32 %z
  ret i32 %t0
}
=>
define i32 @tgt(i32 %x, i32 %y, i32 %z) {
%0:
  %t0 = sub i32 32, %z
  %t1 = fshr i32 %x, i32 %y, i32 %t0
  ret i32 %t1
}
Transformation doesn't verify!
ERROR: Value mismatch

Example:
i32 %x = #x00000000 (0)
i32 %y = #x00000400 (1024)
i32 %z = #x00000000 (0)

Source:
i32 %t0 = #x00000000 (0)

Target:
i32 %t0 = #x00000020 (32)
i32 %t1 = #x00000400 (1024)
Source value: #x00000000 (0)
Target value: #x00000400 (1024)
```

It would be possible to add the transform back, provided logic
is added to check that (Z % BW) can't be zero. Since there were
no test cases proving that such a transform would actually be useful,
I decided to simply remove the faulty code in this patch.
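A standalone C++ restatement of the corner case (reference semantics only, not the LLVM helper):

```cpp
#include <cassert>
#include <cstdint>

// Reference semantics of 32-bit funnel shifts; the shift amount is taken
// modulo the bit width, matching the IR intrinsics.
static uint32_t fshl32(uint32_t X, uint32_t Y, uint32_t Z) {
  Z &= 31;
  return Z == 0 ? X : (X << Z) | (Y >> (32 - Z));
}

static uint32_t fshr32(uint32_t X, uint32_t Y, uint32_t Z) {
  Z &= 31;
  return Z == 0 ? Y : (X << (32 - Z)) | (Y >> Z);
}

int main() {
  // With Z % 32 == 0 the two operations return different operands, so
  // rewriting fshl(X, Y, Z) as fshr(X, Y, 0 - Z) is unsound when Z == 0:
  uint32_t X = 0, Y = 1024, Z = 0;
  assert(fshl32(X, Y, Z) == 0);          // the correct result
  assert(fshr32(X, Y, 0 - Z) == 1024);   // what the removed rewrite produced
  return 0;
}
```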

Reviewed By: foad, lebedev.ri

Differential Revision: https://reviews.llvm.org/D86430
2020-08-24 09:52:11 +02:00
Fangrui Song fd485673da [LiveDebugVariables] Internalize class DbgVariableValue. NFC 2020-08-23 22:53:46 -07:00
Qiu Chaofan 1bc45b2fd8 [PowerPC] Support lowering int-to-fp on ppc_fp128
D70867 introduced support for expanding most ppc_fp128 operations. But
sitofp/uitofp is missing. This patch adds that after D81669.

Reviewed By: uweigand

Differential Revision: https://reviews.llvm.org/D81918
2020-08-24 11:18:16 +08:00
QingShan Zhang 960cbc53ca [DAGCombine] Remove dead node when it is created by getNegatedExpression
We hit the compile-time problem reported by https://bugs.llvm.org/show_bug.cgi?id=46877,
and the reason is the same as in D77319. So we need to remove the dead node we created
to avoid increasing the problem size for the DAGCombiner.

Reviewed By: Spatel

Differential Revision: https://reviews.llvm.org/D86183
2020-08-24 02:50:58 +00:00
Sanjay Patel 1d0fa79824 [DAGCombiner] restrict store merge of truncs to early combining
The pattern matching does not account for truncating stores,
so it is unlikely to work at later stages. So we are likely
wasting compile-time with no hope of improvement by running
this later.
2020-08-23 10:44:23 -04:00
Sanjay Patel 79cb289a95 [DAGCombiner] add early exit for store merging of truncs
This should be NFC in terms of output because the endian
check further down would bail out too, but we are wasting
time by waiting to that point to give up. If we generalize
that function to deal with more than i8 types, we should
not have to deal with the degenerate case.
2020-08-22 16:25:16 -04:00
Jeremy Morse 93af37043b Follow-up build fix for rGae6f78824031
One of the bots objects to brace-initializing a tuple:

  http://lab.llvm.org:8011/builders/clang-cmake-x86_64-sde-avx512-linux/builds/43595/steps/build%20stage%201/logs/stdio

As the tuple constructor is apparently explicit. Fall back to the (not
as pretty) explicit construction of a tuple. I'd thought this was
permitted behaviour; will investigate why this fails later.
2020-08-22 19:09:30 +01:00
Fangrui Song 60bcec4eea [LiveDebugValues] Delete unneeded copy constructor after D83047
It will suppress the implicitly-declared copy assignment operator in C++20.
2020-08-22 10:55:28 -07:00
Jeremy Morse ae6f788240 [LiveDebugValues] Add instruction-referencing LDV implementation
This patch imports the instruction-referencing implementation of
LiveDebugValues proposed here:

  http://lists.llvm.org/pipermail/llvm-dev/2020-June/142368.html

The new implementation is unreachable in this patch, it's the next patch
that enables it behind a command line switch. Briefly, rather than
tracking variable locations by just their location as the 'VarLoc'
implementation does, this implementation does it by value:
 * Each value defined in a function is numbered, and propagated through
   dataflow,
 * Each DBG_VALUE reads a machine value number from a machine location,
 * Variable _values_ are propagated through dataflow,
 * Variable values are translated back into locations, DBG_VALUEs
   inserted to specify where those locations are.

The ultimate aim of this is to enable referring to variable values
throughout post-isel code, rather than locations. Those patches will
build on top of this new LiveDebugValues implementation in later patches
-- it can't be done with the VarLoc implementation as we don't have
value information, only locations.

Differential Revision: https://reviews.llvm.org/D83047
2020-08-22 18:31:08 +01:00
Matt Arsenault 901e3317fe GlobalISel: Merge FewerElements for G_BUILD_VECTOR/G_CONCAT_VECTORS
This switches from using G_EXTRACT in odd cases to widen with undef
and unmerge.
2020-08-22 10:25:53 -04:00
Jeremy Morse 2d9be9e318 Fix some builds after 20bb9fe565
-Wsuggest-override indicates this VarLocBasedLDV method needs the
override keyword.
2020-08-22 15:20:42 +01:00
Jeremy Morse 20bb9fe565 [LiveDebugValues] Install an implementation-picking LiveDebugValues pass
This patch renames the current LiveDebugValues class to "VarLocBasedLDV"
and removes the pass-registration code from it. It creates a separate
LiveDebugValues class that deals with pass registration and management,
that calls through to VarLocBasedLDV::ExtendRanges when
runOnMachineFunction is called. This is done through the "LDVImpl"
abstract class, so that a future patch can install the new
instruction-referencing LiveDebugValues implementation and have it
picked at runtime.

No functional change is intended, just shuffling responsibilities.

Differential Revision: https://reviews.llvm.org/D83046
2020-08-22 14:50:22 +01:00
Sanjay Patel 2fc7c85201 [DAGCombiner] clean up merge of truncated stores; NFC
This code handles the special-case of i8 stores,
but it could be generalized to deal with other types.
2020-08-22 09:23:32 -04:00
Jeremy Morse fba06e3c85 [LiveDebugValues][NFC] Move LiveDebugValues source for refactor
This is a pure file move of LiveDebugValues.cpp ahead of the pass being
refactored, with an experimental new implementation to follow.

The motivation for these changes can be found here:

  http://lists.llvm.org/pipermail/llvm-dev/2020-June/142368.html

And the other related changes can be found in the phabricator stack for
this revision:

Differential Revision: https://reviews.llvm.org/D83304
2020-08-22 12:58:30 +01:00
Sourabh Singh Tomar f91d18eaa9 [DebugInfo][flang]Added support for representing Fortran assumed length strings
This patch adds support for representing Fortran `character(n)`.

Primarily patch is based out of D54114 with appropriate modifications.

Test case IR is generated using our downstream classic-flang. We're in the process
of upstreaming flang PRs, but classic-flang has dependencies on llvm, so
this has to go in first.

Patch includes functional test case for both IR and corresponding
dwarf, furthermore it has been manually tested as well using GDB.

Source snippet:
```
 program assumedLength
   call sub('Hello')
   call sub('Goodbye')
   contains
   subroutine sub(string)
           implicit none
           character(len=*), intent(in) :: string
           print *, string
   end subroutine sub
 end program assumedLength
```

GDB:
```
(gdb) ptype string
type = character (5)
(gdb) p string
$1 = 'Hello'
```

Reviewed By: aprantl, schweitz

Differential Revision: https://reviews.llvm.org/D86305
2020-08-22 10:13:40 +05:30
Nicolai Hähnle b37db11d95 MachineSSAUpdater: Allow initialization with just a register class
The register class is required for inserting PHIs, but the "current
virtual register" isn't actually used for anything, so let's remove it
while we're at it.

Differential Revision: https://reviews.llvm.org/D85602

Change-Id: I1e647f31570ef21a7ea8e20db3454178e98a6a8b
2020-08-21 23:04:35 +02:00
Jay Foad 0819a6416f [SelectionDAG] Better legalization for FSHL and FSHR
In SelectionDAGBuilder always translate the fshl and fshr intrinsics to
FSHL and FSHR (or ROTL and ROTR) instead of lowering them to shifts and
ORs. Improve the legalization of FSHL and FSHR to avoid code quality
regressions.

Differential Revision: https://reviews.llvm.org/D77152
2020-08-21 10:32:49 +01:00
Yevgeny Rouban 18bc400f97 [NewPM][PassInstrumentation] Add PreservedAnalyses parameter to AfterPass* callbacks
Both AfterPass and AfterPassInvalidated pass instrumentation
callbacks get additional parameter of type PreservedAnalyses.
This patch was created by @fedor.sergeev. I have just slightly
changed it.

Reviewers: fedor.sergeev

Differential Revision: https://reviews.llvm.org/D81555
2020-08-21 16:10:42 +07:00
Justin Bogner 1283dca007 [GISel] Correct the known bits of G_ANYEXT
Known bits for G_ANYEXT was incorrectly using KnownBits::zext, causing
us to treat the high bits as zero even though they're (by definition)
unknown.
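Roughly, with a hypothetical two-mask representation (not the real KnownBits interface):

```cpp
#include <cstdint>

struct Known {
  uint64_t Zero, One; // bits known to be 0 / known to be 1
};

// zext: the new high bits are filled with zeroes, so they become known-zero.
Known extendAsZext(Known K, unsigned FromBits) {
  uint64_t HighMask = ~0ull << FromBits; // assumes FromBits < 64
  return {K.Zero | HighMask, K.One & ~HighMask};
}

// anyext: the new high bits are undefined, so nothing is known about them.
Known extendAsAnyext(Known K, unsigned FromBits) {
  uint64_t HighMask = ~0ull << FromBits;
  return {K.Zero & ~HighMask, K.One & ~HighMask};
}
```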

Differential Revision: https://reviews.llvm.org/D86323
2020-08-20 17:17:04 -07:00
Jon Roelofs 74ca5275e9 Fix a couple of typos. NFC 2020-08-20 14:56:57 -06:00
Matt Arsenault 79ce9bb380 CodeGen: Don't drop AA metadata when splitting MachineMemOperands
Assuming this is used to split a memory access into smaller pieces,
the new access should still have the same aliasing properties as the
original memory access. As far as I can tell, this wasn't
intentionally dropped. It may be necessary to drop this if you are
moving the operand outside of the bounds of the original object in
such a way that it may alias another IR object, but I don't think any
of the existing users are doing this. Some of the uses widen into
unused alignment padding, which I think is OK.
2020-08-20 16:17:30 -04:00
Jay Foad 4aaf772542 [PeepholeOptimizer] Remove dead code
At this point we have already ruled out all def operands, so we can't
possibly see a dead implicit def operand.
2020-08-20 16:48:57 +01:00
Bjorn Pettersson b43235a76c [DebugInfo] Fix DwarfExpression::addConstantFP for float on big-endian
The byte swapping for 4-byte (float) FP constants
in DwarfExpression::addConstantFP, added in commit ef8992b9f0,
was not correct. It always performed byte swapping using a
uint64_t value. When dealing with 4-byte values, the 4 interesting
bytes ended up in the big end of the uint64_t, but later we emitted
the 4 bytes at the little end. So we ended up with zeroes being
emitted and faulty debug information.

This patch simplifies things a bit, IMHO, by using the APInt
representation throughout the function, instead of looking at
the internal representation using getRawBytes and
reinterpret_cast etc. Using API.byteSwap() should result in
correct byte swapping independent of the APInt being 4 or 8 bytes.
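A standalone sketch of the failure mode with illustrative values (using GCC/Clang byte-swap builtins; not the DwarfExpression code):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // Bit pattern of the 4-byte float 12.5 held in a uint64_t.
  uint64_t Bits = 0x41480000;

  // Swapping at 64-bit width pushes the interesting bytes into the high
  // half, so emitting only the low 4 bytes afterwards yields zeroes:
  uint64_t Swapped64 = __builtin_bswap64(Bits);
  std::printf("%08x\n", (uint32_t)Swapped64);               // 00000000

  // Swapping at the value's own width keeps the payload in the emitted bytes:
  std::printf("%08x\n", __builtin_bswap32((uint32_t)Bits)); // 00004841
  return 0;
}
```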

Differential Revision: https://reviews.llvm.org/D86272
2020-08-20 11:48:05 +02:00
Konstantin Schwarz 7497b861f4 [GlobalISel][IRTranslator] Support PHI instructions in landingpad blocks
The check for the landingpad instructions was overly restrictive. In optimized builds PHI nodes can appear
before the landingpad instructions, resulting in a fallback to SelectionDAG.

This change relaxes the check to allow PHI nodes.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D86141
2020-08-20 10:49:31 +02:00
Matt Arsenault 31adc28d24 GlobalISel: Implement fewerElementsVector for G_CONCAT_VECTORS sources
This fixes <6 x s16> = G_CONCAT_VECTORS from <3 x s16> handling.
2020-08-19 18:53:24 -04:00
Sourabh Singh Tomar ef8992b9f0 Re-apply "[DebugInfo] Emit DW_OP_implicit_value for Floating point constants"
This patch was reverted in 7c182663a8 due to some failures
observed on PPC-based machines. The failures were due to an endianness issue and
long double representation issues.

The patch is revised to address the endianness issue. Furthermore, support
for emission of `DW_OP_implicit_value` for `long double` has been removed
(since it was unclean at the moment). Planning to handle this in
a clean way soon!

For more context, please refer to following review link.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D83560
2020-08-20 01:39:42 +05:30
Sourabh Singh Tomar 9937872c02 Revert "[DebugInfo] Emit DW_OP_implicit_value for Floating point constants"
This reverts commit 15801f1619.
arc's land messed up! It removed the new commit message and took it
from revision.
2020-08-20 01:28:03 +05:30
Sourabh Singh Tomar 15801f1619 [DebugInfo] Emit DW_OP_implicit_value for Floating point constants
llvm is missing support for the DW_OP_implicit_value operation.
The DW_OP_implicit_value op is indispensable for cases such as
optimized out long double variables.

For intro refer: DWARFv5 Spec Pg: 40 2.6.1.1.4 Implicit Location Descriptions

Consider the following example:
```
int main() {
        long double ld = 3.14;
        printf("dummy\n");
        ld *= ld;
        return 0;
}
```
when compiled with trunk `clang` as
`clang test.c -g -O1`, it produces the following location description
of variable `ld`:
```
DW_AT_location        (0x00000000:
                     [0x0000000000201691, 0x000000000020169b): DW_OP_constu 0xc8f5c28f5c28f800, DW_OP_stack_value, DW_OP_piece 0x8, DW_OP_constu 0x4000, DW_OP_stack_value, DW_OP_bit_piece 0x10 0x40, DW_OP_stack_value)
                  DW_AT_name    ("ld")
```
Here one may notice that this representation is incorrect (the DWARF4
stack can only hold integers, and only up to the size of an address).
Here the variable itself is `128` bits in size.
GDB and LLDB confirm this:
```
(gdb) p ld
$1 = <invalid float value>
(lldb) frame variable ld
(long double) ld = <extracting data from value failed>
```

GCC represents/uses DW_OP_implicit_value in these sort of situations.
Based on the discussion with Jakub Jelinek regarding GCC's motivation
for using this, I concluded that DW_OP_implicit_value is most appropriate
in this case.

Link: https://gcc.gnu.org/pipermail/gcc/2020-July/233057.html

GDB seems happy after this patch (LLDB doesn't have support
for DW_OP_implicit_value):
```
(gdb) p ld
p ld
$1 = 3.14000000000000012434
```

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D83560
2020-08-20 01:20:40 +05:30
Matt Arsenault adbcc8e733 GlobalISel: Add TargetLowering member to LegalizerHelper 2020-08-19 14:50:35 -04:00
Matt Arsenault d64ad3f051 GlobalISel: Don't check for verifier enforced constraint
Loads are always required to have a single memory operand.
2020-08-19 14:15:38 -04:00
Matt Arsenault e95c08432a GlobalISel: Use Register 2020-08-19 13:45:31 -04:00
Mehdi Amini a407ec9b6d Revert "Revert "[NFC][llvm] Make the contructors of `ElementCount` private.""
Was reverted because MLIR/Flang builds were broken, these APIs have been
fixed in the meantime.
2020-08-19 17:26:36 +00:00
Mehdi Amini 4fc56d70aa Revert "[NFC][llvm] Make the contructors of `ElementCount` private."
This reverts commit 264afb9e6a.
(and dependent 6b742cc48 and fc53bd610f)

MLIR/Flang are broken.
2020-08-19 17:21:37 +00:00
Jessica Paquette d25b12bdc3 [GlobalISel] Add combine for (x & mask) -> x when (x & mask) == x
If we have a mask, and a value x, where (x & mask) == x, we can drop the AND
and just use x.

This is about a 0.4% geomean code size improvement on CTMark at -O3 for AArch64.

In AArch64, this is most useful post-legalization. Patterns like this often
show up when legalizing s1s, which must be extended to larger types.

e.g.

```
%cmp:_(s32) = G_ICMP ...
%and:_(s32) = G_AND %cmp, 1
```

Since G_ICMP only produces a single bit, there's no reason to mask it with the
G_AND.

Differential Revision: https://reviews.llvm.org/D85463
2020-08-19 10:20:57 -07:00
Francesco Petrogalli 264afb9e6a [NFC][llvm] Make the contructors of `ElementCount` private.
Differential Revision: https://reviews.llvm.org/D86120
2020-08-19 16:26:44 +00:00
David Sherwood 3f36561f69 [SVE][CodeGen] Fix scalable vector issues in DAGTypeLegalizer::GenWidenVectorLoads
In DAGTypeLegalizer::GenWidenVectorLoads the algorithm assumes it only
ever deals with fixed width types, hence the offsets for each individual
store never take 'vscale' into account. I've changed the code in that
function to use TypeSize instead of unsigned for tracking the remaining
load amount. In addition, I've changed the load loop to use the new
IncrementPointer helper function for updating the addresses in each
iteration, since this handles scalable vector types.

Also, I've added report_fatal_errors in GenWidenVectorExtLoads,
TargetLowering::scalarizeVectorLoad and TargetLowering::scalarizeVectorStores,
since these functions currently use a sequence of element-by-element
scalar loads/stores. In a similar vein, I've also added a fatal error
report in FindMemType for the case when we decide to return the element
type for a scalable vector type.

I've added new tests in

  CodeGen/AArch64/sve-split-load.ll
  CodeGen/AArch64/sve-ld-addressing-mode-reg-imm.ll

for the changes in GenWidenVectorLoads.

Differential Revision: https://reviews.llvm.org/D85909
2020-08-19 07:54:32 +01:00
Amara Emerson ed35344524 Use std::make_tuple instead of initializer lists to make a bot happy:
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-avx2-linux
2020-08-18 14:55:52 -07:00
David Blaikie 1870b52f0c Recommit "PR44685: DebugInfo: Handle address-use-invalid type units referencing non-type units"
Originally committed as be3ef93bf5.
Reverted by b4bffdbadf due to bot
failures:
http://green.lab.llvm.org/green/job/clang-stage1-cmake-RA-expensive/17380/testReport/junit/LLVM/DebugInfo_X86/addr_tu_to_non_tu_ll/
http://45.33.8.238/win/22216/step_11.txt

MacOS failure due to testing Split DWARF which isn't compatible with
MachO.
Windows failure due to testing type units which aren't enabled on
Windows.

Fix both of these by applying an explicit x86 linux triple to the test.
2020-08-18 13:43:28 -07:00
Jessica Paquette bf36e90295 [GlobalISel][CallLowering] NFC: Unify flag-setting from CallBase + AttributeList
It's annoying to have to maintain multiple, nearly identical chains of if
statements which all set the same attributes.

Add a helper function, `addFlagsUsingAttrFn` which performs the attribute
setting.

Then, use wrappers for that function in `lowerCall` and `setArgFlags`.

(Note that the flag-setting code in `setArgFlags` was missing the returned
attribute. There's no selection for this yet, so no test. It's an example of
the kind of thing this lets us avoid, though.)

Differential Revision: https://reviews.llvm.org/D86159
2020-08-18 11:07:33 -07:00
Jessica Paquette f29e6277ad [GlobalISel][CallLowering] Don't tail call with non-forwarded explicit sret
Similar to this commit:

faf8065a99

Testcase is pretty much the same as

test/CodeGen/AArch64/tailcall-explicit-sret.ll

Except it uses i64 (since we don't handle the i1024 return values yet), and
doesn't have indirect tail call testcases (because we can't translate those
yet).

Differential Revision: https://reviews.llvm.org/D86148
2020-08-18 11:06:57 -07:00
Matt Arsenault 5a15f6628e GlobalISel: Implement fewerElementsVector for G_INSERT_VECTOR_ELT
Add unit tests since AMDGPU will only trigger this for gigantic
vectors, and won't use the annoying odd sized breakdown case.
2020-08-18 13:51:19 -04:00
Amara Emerson 04a6ea5d77 [GlobalISel] Add a combine for sext_inreg(load x), c --> sextload x
This is restricted to single-use loads; if we fold them to sextloads, we can
find more optimal addressing modes on AArch64.

This also fixes an overload of the MachineFunction::getMachineMemOperand() method
which was incorrectly using the MF alignment instead of the MMO alignment.

Differential Revision: https://reviews.llvm.org/D85966
2020-08-18 10:42:15 -07:00
Amara Emerson 40e269ea6d [GlobalISel] Add a combine for ashr(shl x, c), c --> sext_inreg x, c'
By detecting this sign extend pattern early, we can uncover opportunities for
more optimizations.

Differential Revision: https://reviews.llvm.org/D85965
2020-08-18 10:42:15 -07:00
Jessica Paquette 224a8c639e [GlobalISel][CallLowering] Look through call parameters for flags
We weren't looking through the parameters on calls at all.

E.g., say you had

```
declare i32 @zext(i32 zeroext %x)

...
%y = call i32 @zext(i32 %something)
...

```

At the point of the call, we wouldn't know that the %something should have the
zeroext attribute.

This sets flags in about the same way as
TargetLoweringBase::ArgListEntry::setAttributes.

Differential Revision: https://reviews.llvm.org/D86125
2020-08-18 08:48:56 -07:00
Nico Weber b4bffdbadf Revert "PR44685: DebugInfo: Handle address-use-invalid type units referencing non-type units"
This reverts commit be3ef93bf5.
Test fails on macOS and Windows, e.g. http://45.33.8.238/win/22216/step_11.txt
2020-08-18 08:40:36 -04:00
David Blaikie be3ef93bf5 PR44685: DebugInfo: Handle address-use-invalid type units referencing non-type units
Theory was that we should never reach a non-type unit (eg: type in an
anonymous namespace) when we're already in the invalid "encountered an
address-use, so stop emitting types for now, until we throw out the
whole type tree to restart emitting in non-type unit" state. But that's
not the case (prior commit cleaned up one reason this wasn't exposed
sooner - but also makes it easier to test/demonstrate this issue)
2020-08-17 21:42:00 -07:00
David Blaikie 24c3dabef4 DebugInfo: Emit class template parameters first, before members
This reads more like what you'd expect the DWARF to look like (from the
lexical order of C++ - template parameters come before members, etc),
and also happens to make it easier to tickle (& thus test) a bug related
to type units and Split DWARF I'm about to fix.
2020-08-17 21:42:00 -07:00
Matt Arsenault a128292b90 GlobalISel: Make type for lower action more consistently optional
Some of the lower implementations were relying on this; however, the
type was not always set, depending on which .lower* helper form you were
using. For instance, if you used an unconditional lower(), the type was
never set. Most of the lower actions do not benefit from a type
parameter, and just expand in terms of the original operation's types.

However, some lowerings could benefit from an additional type hint to
combine a promotion and an expansion. An example of this is for
add/sub sat. The DAG integer legalization tries to use smarter
expansions directly when promoting the integer type, and doesn't
always produce the same instruction with a wider type.

Treat this as an optional hint argument, that only means something for
specific lower actions. It may be useful to generalize this mechanism
to pass a full list of type indexes and desired types, but I haven't
run into a case like that yet.
2020-08-17 16:24:55 -04:00
Alexandre Ganea 98e01f56b0 Revert "Re-Re-land: [CodeView] Add full repro to LF_BUILDINFO record"
This reverts commit a3036b3863.

As requested in: https://reviews.llvm.org/D80833#2221866
Bug report: https://crbug.com/1117026
2020-08-17 15:49:18 -04:00
Sanjay Patel f925fd3304 [DAGCombiner] give magic number a name in getStoreMergeCandidates; NFC 2020-08-17 15:37:55 -04:00
Sanjay Patel 046b4a550a [DAGCombiner] reduce code duplication in getStoreMergeCandidates; NFC 2020-08-17 15:37:55 -04:00
Sanjay Patel 20c85fd1ab [DAGCombiner] simplify bool return in getStoreMergeCandidates; NFC 2020-08-17 15:37:55 -04:00
Sanjay Patel 52cd8f1ecb [DAGCombiner] clean up getStoreMergeCandidates(); NFC
1. Move bailouts and local var declarations.
2. Convert if-chain to switch on StoreSource with unreachable default.
2020-08-17 15:37:54 -04:00
Sanjay Patel 27708db3e3 [DAGCombiner] convert StoreSource if-chain to switch; NFC
The "isa" checks were less constrained because they allow
target constants, but the later matching code would bail
out on those anyway, so this should be slightly more
efficient.
2020-08-17 15:37:54 -04:00
Matt Arsenault a275acc4a9 GlobalISel: Early continue to reduce loop indentation 2020-08-17 13:51:08 -04:00
Matt Arsenault 5b53b17cd3 DAG: Add missing comment for transform 2020-08-17 10:01:12 -04:00
Matt Arsenault 924f31bc3c GlobalISel: Remove unnecessary check for copy type
COPY isn't allowed to change the type, but can mix no type with type.
2020-08-17 09:19:25 -04:00
Matt Arsenault 04a288f0f0 GlobalISel: Remove unnecessary llvm:: 2020-08-15 12:12:50 -04:00
Philip Reames a96fc4638b Remove deopt and gc transition arguments from gc.statepoint intrinsic
(Forgot to land this a couple of weeks back.)

In a recent series of changes, I've introduced support for using the respective operand bundle kinds on the statepoint. At the moment, the code supports either/or, but there's no need to keep the old support around. For the moment, I am simply changing the specification and verifier to require zero-length argument sets in the intrinsic.

The intrinsic itself is experimental. Given that, there's no forward serialization needed. The in-tree uses and generation have already been updated to use the new operand bundle based forms; the only folks broken by the change will be those whose frontends generate statepoints directly, and the updates should be easy.

Why not go ahead and just remove the arguments entirely? Well, I plan to. But while working on this I've found that almost all of the arguments to the statepoint can be expressed via operand bundles or attributes. Given that, I'm planning a radical simplification of the arguments and figured I'd do one update not several small ones.

Differential Revision: https://reviews.llvm.org/D80892
2020-08-14 16:07:40 -07:00
Craig Topper c7a0b2684f [X86][MC][Target] Initial backend support a tune CPU to support -mtune
This patch implements initial backend support for a -mtune CPU controlled by a "tune-cpu" function attribute. If the attribute is not present X86 will use the resolved CPU from target-cpu attribute or command line.

This patch adds MC layer support for a tune CPU. Each CPU now has two sets of features stored in its GenSubtargetInfo.inc tables. These feature lists are passed separately to the Processor and ProcessorModel classes in tablegen. The tune list defaults to an empty list to avoid changes to non-X86 targets. This annoyingly increases the size of the static tables on all targets, as we now store 24 more bytes per CPU. I haven't quantified the overall impact, but I can if we're concerned.

One new test is added to X86 to show a few tuning features with mismatched tune-cpu and target-cpu/target-feature attributes to demonstrate independent control. Another new test is added to demonstrate that the scheduler model follows the tune CPU.

I have not added a -mtune to llc/opt or MC layer command line yet. With no attributes we'll just use the -mcpu for both. MC layer tools will always follow the normal CPU for tuning.

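A hedged sketch of how a backend might resolve the tune CPU from these attributes
(resolveTuneCPU and DefaultCPU are illustrative names, not the actual X86 code):

```
// Hedged sketch: prefer the "tune-cpu" attribute; otherwise fall back to the
// resolved target CPU, matching the behavior described above.
#include "llvm/IR/Function.h"
using namespace llvm;

static StringRef resolveTuneCPU(const Function &F, StringRef DefaultCPU) {
  StringRef CPU = F.getFnAttribute("target-cpu").getValueAsString();
  if (CPU.empty())
    CPU = DefaultCPU;
  StringRef TuneCPU = F.getFnAttribute("tune-cpu").getValueAsString();
  return TuneCPU.empty() ? CPU : TuneCPU;
}
```
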
Differential Revision: https://reviews.llvm.org/D85165
2020-08-14 15:31:50 -07:00
Matt Arsenault 5c5e6d951e TableGen/GlobalISel: Partially handle immAllOnesV/immAllZerosV
These should really match either G_BUILD_VECTOR or
G_BUILD_VECTOR_TRUNC, but there doesn't seem to be an existing
mechanism for matching alternative opcodes. There is GIM_SwitchOpcode,
but it seems to assume it's only used for matcher optimization.

I could also omit any opcode check and rely on the matcher directly
checking the opcode, but the table optimizer currently assumes there
has to be an opcode check.

Also doesn't try to handle undef elements like the DAG version.
2020-08-14 13:55:30 -04:00
Jordan Rupprecht fd9187f746 [NFC] Silence variables unused in release builds 2020-08-14 08:35:58 -07:00
Denis Antrushin 1c80a6ce5f [Statepoints] FixupStatepoint: properly set isKill on spilled register.
When spilling a statepoint meta-arg register it is incorrect to blindly
mark it as killed - it may also be used in non-meta args (e.g., as a call
parameter).
2020-08-14 22:19:20 +07:00
Denis Antrushin 5f6bee77fa [Statepoints] Spill GC Ptr regs in FixupStatepoints.
Extend FixupStatepointCallerSaved pass with ability to spill
statepoint GC pointer arguments (optionally allowing them on CSRs).
Special handling is required for invoke statepoints, because at the MI
level a single landing pad may be shared by multiple statepoints, so
we must ensure we spill the landing pad's live-ins into the same stack
slots.

Full statepoint refactoring change set is available at D81603.

Reviewed By: skatkov

Differential Revision: https://reviews.llvm.org/D81647
2020-08-14 20:21:19 +07:00
Yuanfang Chen a5ed20b549 [NewPM][CodeGen] Add machine code verification callback
D83608 needs this.

Reviewed By: aeubanks

Differential Revision: https://reviews.llvm.org/D85916
2020-08-13 16:13:01 -07:00
Matt Arsenault c7191e3185 DAG: Don't pass 0 alignment value to allowsMisalignedMemoryAccesses
I think not unconditionally passing getDstAlign is broken, but leave
that for another change.
2020-08-13 09:33:17 -04:00
Kerry McLaughlin 30af595f05 [SVE][CodeGen] Legalisation of EXTRACT_VECTOR_ELT for scalable vectors
This patch changes SplitVecOp_EXTRACT_VECTOR_ELT to work correctly
for scalable vectors and also fixes a bug in DAGCombiner where
the scalable property is dropped in visitTRUNCATE when attempting
to fold an extract + a truncate.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D85754
2020-08-13 12:32:59 +01:00
Simon Pilgrim 8a41a1f567 BranchFolding.cpp - removes includes already included by BranchFolding.h. NFC. 2020-08-13 12:14:31 +01:00
Simon Pilgrim ebfa410433 SplitKit.cpp - removes includes already included by SplitKit.h. NFC.
Don't duplicate includes already provided by the module header.
2020-08-13 11:43:28 +01:00
Simon Pilgrim c4c1267cad DwarfDebug.cpp - removes includes already included by DwarfDebug.h. NFC.
Don't duplicate includes already provided by the module header.
2020-08-13 11:43:28 +01:00
David Sherwood 6af1677161 [SVE][CodeGen] Fix scalable vector issues in DAGTypeLegalizer::GenWidenVectorStores
In DAGTypeLegalizer::GenWidenVectorStores the algorithm assumes it only
ever deals with fixed width types, hence the offsets for each individual
store never take 'vscale' into account. I've changed the main loop in
that function to use TypeSize instead of unsigned for tracking the
remaining store amount and offset increment. In addition, I've changed
the loop to use the new IncrementPointer helper function for updating
the addresses in each iteration, since this handles scalable vector
types.

Whilst fixing this function I also fixed a minor issue in
IncrementPointer whereby we were not adding the no-unsigned-wrap flag
for the add instruction in the same way as the fixed width case does.

Also, I've added a report_fatal_error in GenWidenVectorTruncStores,
since this code currently uses a sequence of element-by-element scalar
stores.

I've added new tests in

  CodeGen/AArch64/sve-intrinsics-stores.ll
  CodeGen/AArch64/sve-st1-addressing-mode-reg-imm.ll

for the changes in GenWidenVectorStores.

Differential Revision: https://reviews.llvm.org/D84937
2020-08-13 11:07:17 +01:00
David Sherwood 3ec3fcb97a [CodeGen] In narrowExtractedVectorLoad bail out for scalable vectors
In narrowExtractedVectorLoad there is an optimisation that tries to
combine extract_subvector with a narrowing vector load. At the moment
this produces warnings due to the incorrect calls to
getVectorNumElements() for scalable vector types. I've got this
working for scalable vectors too when the extract subvector index
is a multiple of the minimum number of elements. I have added a
new variant of the function:

  MachineFunction::getMachineMemOperand

that copies an existing MachineMemOperand, but replaces the pointer
info with a null version since we cannot currently represent scaled
offsets.

I've added a new test for this particular case in:

  CodeGen/AArch64/sve-extract-subvector.ll

Differential Revision: https://reviews.llvm.org/D83950
2020-08-13 10:46:18 +01:00
Amara Emerson 2ff14957e8 [GlobalISel] Implement bit-test switch table optimization.
This is mostly a straight port from SelectionDAG. We re-use the actual bit-test
analysis part from SwitchLoweringUtils, which was factored out earlier to
support jump-tables.

Differential Revision: https://reviews.llvm.org/D85233
2020-08-12 11:31:39 -07:00
David Sherwood 88bbd30736 [SVE][CodeGen] Fix issues with EXTRACT_SUBVECTOR when using scalable FP vectors
In this patch I have fixed two issues:

1. Our SVE tuple get/set intrinsics were using the wrong constant type
for the index passed to EXTRACT_SUBVECTOR. I have fixed this by using the
function SelectionDAG::getVectorIdxConstant to create the value. Also, I
have updated the documentation for EXTRACT_SUBVECTOR describing what type
the constant index should be and we now enforce this when creating the
node.
2. The AArch64 backend was missing the appropriate patterns for
extracting certain subvectors (nxv4f16 and nxv2f32) from legal SVE types.
I have added them as part of this patch.

The only way that I could find to test the new patterns was to use the
SVE tuple get intrinsics, although I realise it looks a bit unusual.
Tests added here:

  test/CodeGen/AArch64/sve-extract-subvector.ll

Differential Revision: https://reviews.llvm.org/D85516
2020-08-12 08:35:46 +01:00
diggerlin e9ac1495e2 [AIX][XCOFF] change the operand of branch instruction from symbol name to qualified symbol name for function declarations
SUMMARY:

1. Remove the setting of the storage class in the getXCOFFSection function and in the constructor of the MCSectionXCOFF class.
The MCSectionXCOFF class has

XCOFF::StorageMappingClass MappingClass;
XCOFF::SymbolType Type;
XCOFF::StorageClass StorageClass;

but these attributes are only used in the XCOFFObjectWriter (the asm path does not need the StorageClass), and we had to compute the StorageClass, Type, and MappingClass values before every call to getXCOFFSection.

Actually, we can get the StorageClass of an MCSectionXCOFF from its delegated symbol.

2. We also change the operand of branch instructions from the symbol name to the qualified symbol name.
For example, change
bl .foo
extern .foo
to
bl .foo[PR]
extern .foo[PR]

3. If there is an indirect call to a function bar, we also add
  extern .bar[PR]

Reviewers:  Jason liu, Xiangling Liao

Differential Revision: https://reviews.llvm.org/D84765
2020-08-11 15:26:19 -04:00
Yuanfang Chen 39617aaed9 NFC. Constify MachineVerifier::verify parameter 2020-08-11 11:59:45 -07:00
Jessica Paquette bebe6a6449 [GlobalISel] Combine (logic_op (op x...), (op y...)) -> (op (logic_op x, y))
This implements

```
(logic_op (op x...), (op y...)) -> (op (logic_op x, y))
```

when `op` is an extend, a shift, or an and.

This is similar to `DAGCombiner::hoistLogicOpWithSameOpcodeHands`
(with a bunch of missing cases, e.g. G_TRUNC, G_BITCAST, etc.)

This is implemented so it works both pre and post-legalization.

This also adds a general way to add a series of instructions in a combine.
(`applyBuildInstructionSteps`).

Differential Revision: https://reviews.llvm.org/D85050
2020-08-11 10:40:06 -07:00
Jay Foad fa2b836ea3 [GlobalISel] Add G_ABS
This is equivalent to the new llvm.abs intrinsic added by D84125 with
is_int_min_poison=0.

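For context, one common branch-free expansion of integer abs with the wrapping
INT_MIN behavior that is_int_min_poison=0 implies (a hedged standalone sketch,
not necessarily how any backend lowers G_ABS):

```
#include <cstdint>

// abs(INT_MIN) wraps back to 0x80000000 here, matching is_int_min_poison=0.
uint32_t abs32(int32_t X) {
  uint32_t U = static_cast<uint32_t>(X);
  uint32_t Mask = (U >> 31) ? 0xFFFFFFFFu : 0u; // all-ones when X is negative
  return (U + Mask) ^ Mask;                     // conditional negate via add/xor
}
```
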
Differential Revision: https://reviews.llvm.org/D85718
2020-08-11 16:34:37 +01:00
David Stenberg e2f3240472 [DebugInfo] Allow GNU macro extension to be emitted
Allow the GNU .debug_macro extension to be emitted for DWARF versions
earlier than 5. The extension is basically what became DWARF 5's format,
except that a DW_AT_GNU_macros attribute is emitted, and some entries
like the strx entries are missing. In this patch I emit GNU's indirect
entries, which are the same as DWARF 5's strp entries.

This patch adds the extension behind a hidden LLVM flag,
-use-gnu-debug-macro. I would later want to enable it by default when
tuning for GDB and targeting DWARF versions earlier than 5.

The size of a Clang 8.0 binary built with RelWithDebInfo and the flags
"-gdwarf-4 -fdebug-macro" reduces from 1533 MB to 1349 MB with
.debug_macro (compared to 1296 MB without -fdebug-macro).

Reviewed By: SouraVX, dblaikie

Differential Revision: https://reviews.llvm.org/D82975
2020-08-11 17:00:25 +02:00
David Stenberg bb640645f5 [DebugInfo] Simplify DwarfDebug::emitMacro
Broken out from a review comment on D82975. This is NFC except that
the Macinfo macro string is now emitted using a single emitBytes()
invocation, so it can be done using a single string directive.

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D83557
2020-08-11 17:00:25 +02:00
Benjamin Kramer d287a5a33f [GlobalISel] Remove unused variable. NFC. 2020-08-11 16:56:45 +02:00
Matt Arsenault e2f1b48f86 GlobalISel: Implement bitcast action for G_INSERT_VECTOR_ELT
This mirrors the support for the equivalent extracts. This also
creates a huge mess that would be greatly improved if we had any bit
operation combines.
2020-08-11 10:39:14 -04:00
Kerry McLaughlin 455ed56d48 [SVE][CodeGen] Legalisation of INSERT_VECTOR_ELT for scalable vectors
When the result type of insertelement needs to be split,
SplitVecRes_INSERT_VECTOR_ELT will try to store the vector to a
stack temporary, store the element at the location of the stack
temporary plus the index, and reload the Lo/Hi parts.

This patch does the following to ensure this works for scalable vectors:
 - Sets the StackID with getStackIDForScalableVectors() in CreateStackTemporary
 - Adds an IsScalable flag to getMemBasePlusOffset() and scales the
    offset by VScale when this is true
 - Ensures the immediate is clamped correctly by clampDynamicVectorIndex
    so that we don't try to use an out of range index

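The store-to-stack strategy, as a hedged plain-C++ analogy (insertEltViaStack is
an illustrative name; the real code works on SDValues and stack frame objects):

```
// Conceptual analogy only: spill the vector to a temporary, clamp the index,
// store the element, then reload. Assumes a non-empty vector.
#include <algorithm>
#include <cstddef>
#include <vector>

template <typename T>
void insertEltViaStack(std::vector<T> &Vec, T Elt, std::size_t Idx) {
  std::vector<T> Tmp(Vec);             // 1. spill the whole vector
  Idx = std::min(Idx, Tmp.size() - 1); // 2. clamp a possibly-dynamic index
  Tmp[Idx] = Elt;                      // 3. store the element at base + index
  Vec = Tmp;                           // 4. reload the parts (here: everything)
}
```
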
Reviewed By: david-arm

Differential Revision: https://reviews.llvm.org/D84874
2020-08-11 12:57:28 +01:00
David Stenberg a73008c1ae [DebugInfo] Refactor .debug_macro checks. NFCI
Move the Dwarf version checks that determine if the .debug_macro section
should be emitted, into a DwarfDebug member. This is a preparatory
refactoring for allowing the GNU .debug_macro extension, which is a
precursor to the DWARF 5 format, to be emitted by LLVM for earlier DWARF
versions.

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D82971
2020-08-11 13:30:52 +02:00
Kerry McLaughlin 85c7e89f3b [CodeGen] Refactor getMemBasePlusOffset & getObjectPtrOffset to accept a TypeSize
Changes the Offset arguments to both functions from int64_t to TypeSize
& updates all uses of the functions to create the offset using TypeSize::Fixed()

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D85220
2020-08-11 12:17:10 +01:00
Sam Parker 8f92f3c2ea [RDA] Fix DBG_VALUE issues
We skip debug instructions in RDA so we cannot attempt to look them
up in our instruction map without causing a crash. But some of the
methods select the last instruction in the block, and this
instruction may be a debug instruction. So, use getLastNonDebugInstr
instead of calling back() on the MachineBasicBlock.

MachineBasicBlock iterators have also been updated to use
instructionsWithoutDebug so we can avoid the manual checks for debug
instructions.

Differential Revision: https://reviews.llvm.org/D85658
2020-08-11 09:03:09 +01:00
QingShan Zhang 61ede38da0 [CodeGen] Expand float operand for STRICT_FSETCC/STRICT_FSETCCS
This patch continues the work of https://reviews.llvm.org/D69281
and implements the expansion of STRICT_FSETCC/STRICT_FSETCCS.

Reviewed By: uweigand

Differential Revision: https://reviews.llvm.org/D81906
2020-08-11 05:55:00 +00:00
jasonliu 20abff0481 [XCOFF][AIX] Use TE storage mapping class when large code model is enabled
Summary:
Use TE SMC instead of TC SMC in large code model mode,
so that large code model TOC entries could get placed after all
the small code model TOC entries, which reduces the chance of TOC overflow.

Reviewed By: Xiangling_L

Differential Revision: https://reviews.llvm.org/D85455
2020-08-10 19:52:10 +00:00
Stanislav Mekhanoshin 08803f0e62 Unbundle KILL bundles in VirtRegRewriter
SplitKit forms invalid COPY subreg bundles without a leading
BUNDLE instruction. That manifests itself in post-RA scheduler
counting instruction and asserting on "Instruction count mismatch".

The bundle shall be undone by VirtRegRewriter::expandCopyBundle(),
but it does not because VirtRegRewriter::handleIdentityCopy() can
turn COPY bundle into a KILL bundle.

Process KILLs as well.

Differential Revision: https://reviews.llvm.org/D85484
2020-08-10 11:58:37 -07:00
Alexandre Ganea a3036b3863 Re-Re-land: [CodeView] Add full repro to LF_BUILDINFO record
This patch adds the missing information to the LF_BUILDINFO record, which allows for rebuilding a .CPP without any external dependency but the .OBJ itself (other than the compiler).

Some external tools that we are using (Recode, Live++) are extracting the information to reproduce a build without any knowledge of the build system. The LF_BUILDINFO stores a full path to the compiler, the PWD (CWD at program startup), a relative or absolute path to the TU, and the full CC1 command line. The command line needs to be freestanding (not depend on any environment variables). In the same way, MSVC doesn't store the provided command-line, but an expanded version (somehow their equivalent of CC1) which is also freestanding.

For more information see PR36198 and D43002.

Differential Revision: https://reviews.llvm.org/D80833
2020-08-10 13:36:30 -04:00
Craig Topper 96dfc783b2 [BreakFalseDeps][X86] Move operand loop out of X86's getUndefRegClearance and put in the pass.
X86 is the only user of this interface in tree. Previously the
X86 pass would loop over operands looking for one undef operand for
the pass to fix. But there could theoretically be multiple operands
to fix. So it makes more sense for the pass to do the looping and
ask the target if an operand needs to be fixed.
2020-08-10 10:32:29 -07:00
Xiangling Liao 6ef801aa6b [AIX] Static init frontend recovery and backend support
On the frontend side, this patch recovers the AIX static init implementation to
use the linkage type and function names Clang chooses for sinit related functions.

On the backend side, this patch sets correct linkage and function names on aliases
created for sinit/sterm functions.

Differential Revision: https://reviews.llvm.org/D84534
2020-08-10 10:10:49 -04:00
Matt Arsenault f9c279b057 PeepholeOptimizer: Use Register 2020-08-10 08:49:36 -04:00
Matt Arsenault 0bbf4bb8db GlobalISel: Remove redundant check for empty blocks 2020-08-10 08:46:30 -04:00
Simon Pilgrim c0c3b9a25f [ScalarizeMaskedMemIntrin] Scalarize constant mask expandload as shuffle(build_vector,pass_through)
As noticed on D66004, scalarization of an expandload with a constant mask as a chain of irregular loads+inserts makes it tricky to optimize before lowering, resulting in difficulties in merging loads etc.

This patch instead scalarizes the expansion to a build_vector(load0, load1, undef, load2,....) style pattern and then performs a blend shuffle with the pass through vector. This allows us to more easily make use of all the build_vector combines, merging of consecutive loads etc.

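For reference, the expandload semantics being scalarized, modeled as a hedged
plain-C++ sketch (not the pass code):

```
#include <array>
#include <cstddef>

// Consecutive values from Mem fill the lanes where Mask is set; the remaining
// lanes keep the pass-through value. This is what the build_vector + blend
// shuffle described above reproduces for a constant mask.
template <typename T, std::size_t N>
std::array<T, N> expandLoad(const T *Mem, const std::array<bool, N> &Mask,
                            const std::array<T, N> &PassThru) {
  std::array<T, N> Result = PassThru;
  std::size_t MemIdx = 0;
  for (std::size_t I = 0; I < N; ++I)
    if (Mask[I])
      Result[I] = Mem[MemIdx++];
  return Result;
}
```
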
Differential Revision: https://reviews.llvm.org/D85416
2020-08-10 11:05:57 +01:00
Igor Kudrin d400606f8c [DebugInfo] Fix initialization of DwarfCompileUnit::LabelBegin.
This also fixes the condition in the assertion in
DwarfCompileUnit::getLabelBegin() because it checked something unrelated
to the returned value.

Differential Revision: https://reviews.llvm.org/D85437
2020-08-10 15:57:21 +07:00
Craig Topper fdfdee98ac [DAGCombiner] Teach SimplifySetCC SETUGE X, SINTMIN -> SETLT X, 0 and SETULE X, SINTMAX -> SETGT X, -1.
These aren't the canonical forms we'd get from InstCombine, but
we do have X86 tests for them. Recognizing them is pretty cheap.

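As a quick standalone sanity check of the signed/unsigned equivalences (illustrative
only, not part of the patch):

```
// Exhaustive 8-bit check: (X u>= 0x80) == (X s< 0) and (X u<= 0x7f) == (X s> -1).
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned V = 0; V < 256; ++V) {
    uint8_t U = static_cast<uint8_t>(V);
    int8_t S = static_cast<int8_t>(U);
    assert((U >= 0x80) == (S < 0));
    assert((U <= 0x7f) == (S > -1));
  }
  return 0;
}
```
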
While there, make use of APInt::isSignedMinValue/isSignedMaxValue
instead of creating a new APInt to compare with. Also use
SelectionDAG::getAllOnesConstant helper to hide the all ones
APInt creation.
2020-08-08 22:27:16 -07:00
Sanjay Patel f22ac1d15b [DAGCombiner] reassociate reciprocal sqrt expression to eliminate FP division, part 2
Follow-up to D82716 / rGea71ba11ab11
We do not have the fabs removal fold in IR yet for the case
where the sqrt operand is repeated, so that's another potential
improvement.
2020-08-08 10:38:06 -04:00
Benjamin Kramer 38537307e5 lib/CodeGen doesn't depend on lib/Passes. 2020-08-08 13:40:24 +02:00
Yuanfang Chen f5b5ccf2a6 Reland "Revert "[NewPM][CodeGen] Introduce machine pass and machine pass manager""
This relands commit 320eab2d55.

The test failed because it was looking for x86-linux target
unconditionally. Now it gets the default target.
2020-08-07 16:40:49 -07:00
Yuanfang Chen 320eab2d55 Revert "[NewPM][CodeGen] Introduce machine pass and machine pass manager"
This reverts commit 911565d108.

Broke some non-Linux bots.
2020-08-07 11:59:58 -07:00
Yuanfang Chen 911565d108 [NewPM][CodeGen] Introduce machine pass and machine pass manager
A machine pass could define four methods:
- `PreservedAnalyses run(MachineFunction &, MachineFunctionAnalysisManager &)`
- `Error doInitialization(Module &, MachineFunctionAnalysisManager &)`
- `Error doFinalization(Module &, MachineFunctionAnalysisManager &)`
- `Error run(Module &, MachineFunctionAnalysisManager &)`

machine pass manager:
- MachineFunctionAnalysisManager:
  Basically an AnalysisManager<MachineFunction> augmented with the ability to
  register and query IR analyses
- MachineFunctionPassManager: support only two methods, `addPass` and `run`

Reviewed By: arsenm, asbirlea, aeubanks

Differential Revision: https://reviews.llvm.org/D67687
2020-08-07 11:00:31 -07:00
Bevin Hansson 5de6c56f7e [Intrinsic] Add sshl.sat/ushl.sat, saturated shift intrinsics.
Summary:
This patch adds two intrinsics, llvm.sshl.sat and llvm.ushl.sat,
which perform signed and unsigned saturating left shift,
respectively.

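A hedged reference sketch of the saturating-shift semantics for 32-bit values (not
the LLVM expansion code; shift amounts >= the bit width are left unhandled here):

```
#include <cstdint>
#include <limits>

// Unsigned saturating shift-left: clamp to UINT32_MAX when set bits would be lost.
uint32_t ushl_sat32(uint32_t X, unsigned Amt) { // assumes Amt < 32
  if (Amt && X > (std::numeric_limits<uint32_t>::max() >> Amt))
    return std::numeric_limits<uint32_t>::max();
  return X << Amt;
}

// Signed saturating shift-left: clamp to INT32_MIN/INT32_MAX on overflow.
int32_t sshl_sat32(int32_t X, unsigned Amt) { // assumes Amt < 32
  int64_t Wide = static_cast<int64_t>(X) * (int64_t(1) << Amt); // exact, no UB
  if (Wide > std::numeric_limits<int32_t>::max())
    return std::numeric_limits<int32_t>::max();
  if (Wide < std::numeric_limits<int32_t>::min())
    return std::numeric_limits<int32_t>::min();
  return static_cast<int32_t>(Wide);
}
```
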
These are useful for implementing the Embedded-C fixed point
support in Clang, originally discussed in
http://lists.llvm.org/pipermail/llvm-dev/2018-August/125433.html
and
http://lists.llvm.org/pipermail/cfe-dev/2018-May/058019.html

Reviewers: leonardchan, craig.topper, bjope, jdoerfert

Subscribers: hiraditya, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83216
2020-08-07 15:09:24 +02:00
Simon Pilgrim 66a163f328 [DAG] GetDemandedBits - remove custom AND handling.
As mentioned on D85463, we should be using SimplifyMultipleUseDemandedBits (which is the default fallback).

The minor regression in illegal-bitfield-loadstore.ll will be addressed properly by D77804.
2020-08-07 12:55:47 +01:00
Simon Pilgrim fcefb53222 Remove unreachable break. NFC 2020-08-07 12:37:49 +01:00
Igor Kudrin 1eade73d8b [DebugInfo] Remove DwarfUnit::getDwarfVersion(). NFC.
This helper method was used only in one place, which can easily use the
direct call.

Differential revision: https://reviews.llvm.org/D85438
2020-08-07 15:55:44 +07:00
Igor Kudrin b6b0ff18a3 [DebugInfo] Clean up DIEUnit. NFC.
This removes members of the DIEUnit class which were used only in unit
tests. Note also that child classes shadowed some of these methods,
namely, getDwarfVersion() was overridden in DwarfUnit and getLength()
was overridden in DwarfCompileUnit.

Differential Revision: https://reviews.llvm.org/D85436
2020-08-07 15:55:44 +07:00
QingShan Zhang 2b2bfdb474 [NFC] Add the stats for load/store cluster
We have the stats for MacroFusion but not for load/store clustering.
2020-08-07 07:09:48 +00:00
QingShan Zhang 3359ea62ed [Scheduling] Create the missing dependency edges for store cluster
If it is a load cluster, we don't need to create a dependency edge from SUb to SUa,
as they both already depend on the base register "reg".

     +-------+
+---->  reg  |
|    +---+---+
|        ^
|        |
|        |
|        |
|    +---+---+
|    |  SUa  |  Load 0(reg)
|    +---+---+
|        ^
|        |
|        |
|    +---+---+
+----+  SUb  |  Load 4(reg)
     +-------+

But if it is a store cluster, we need to create the edge as shown below, to avoid the
instruction that the store depends on being scheduled in between SUb and SUa.

     +-------+
+---->  reg  |
|    +---+---+
|        ^
|        |         Missing       +-------+
|        | +-------------------->+   y   |
|        | |                     +---+---+
|    +---+-+-+                       ^
|    |  SUa  |  Store x 0(reg)       |
|    +---+---+                       |
|        ^                           |
|        |  +------------------------+
|        |  |
|    +---+--++
+----+  SUb  |  Store y 4(reg)
     +-------+

Reviewed By: evandro, arsenm, rampitec, foad, fhahn

Differential Revision: https://reviews.llvm.org/D72031
2020-08-07 04:58:03 +00:00
Matt Arsenault 1ad051dd8c GlobalISel: Implement lower for G_INSERT_VECTOR_ELT 2020-08-06 19:29:17 -04:00
Craig Topper ffc248f3b8 [LegalTypes] Move VSELECT node creation out of WidenVSELECTAndMask and push to 2 of the 3 callers.
One of the callers only wants the condition, but the vselect can
be simplified by getNode making it hard or impossible to retrieve
the condition.

Instead, return the condition and make the other 2 callers
responsible for creating the vselect node using the condition.
Rename the function to WidenVSELECTMask accordingly.

Differential Revision: https://reviews.llvm.org/D85468
2020-08-06 13:18:16 -07:00
Snehasish Kumar 8d943a928d [NFC] Rename BBSectionsPrepare -> BasicBlockSections.
Rename the BBSectionsPrepare pass as suggested by the review comment in
https://reviews.llvm.org/D85368.

Differential Revision: https://reviews.llvm.org/D85380
2020-08-06 13:12:06 -07:00
Matt Arsenault e00201539f GlobalISel: Implement fewerElementsVector for G_EXTRACT_VECTOR_ELT
Use the same basic strategy as LegalizeVectorTypes. Try to index into
smaller pieces if there's a constant index, and otherwise fall back to
a stack temporary.
2020-08-06 14:33:16 -04:00
jasonliu e5062a6caf [XCOFF][AIX] Put each jump table in an independent section if -ffunction-sections is specified
If a function is in a unique section, putting all jump tables in
 .rodata will prevent functions that have a jump table from being
garbage collected by the linker.
Therefore, we need to put each jump table into a unique section as well.

Reviewed By: Xiangling_L

Differential Revision: https://reviews.llvm.org/D84761
2020-08-06 14:31:04 +00:00
Petar Avramovic d893278bba [GlobalISel][InlineAsm] Fix matching input constraint to physreg
Add the given input and mark it as tied.
This doesn't create an additional copy, compared to
matching an input constraint to a virtual register.

Differential Revision: https://reviews.llvm.org/D85122
2020-08-06 14:35:51 +02:00
Paul Walker 0d33a8ef5b [SVE] Lower scalable vector mul operations.
This allows us to remove extra patterns from AArch64SVEInstrInfo.td
because we can reuse those required for fixed length vectors.

Differential Revision: https://reviews.llvm.org/D85328
2020-08-06 11:15:35 +01:00
Rahman Lavaee 20a568c29d [Propeller]: Use a descriptive temporary symbol name for the end of the basic block.
This patch changes the functionality of AsmPrinter to name the basic block end labels as LBB_END${i}_${j}, with ${i} being the identifier for the function and ${j} being the identifier for the basic block. The new naming scheme is consistent with how basic block labels are named (.LBB${i}_${j}), and how function end symbols are named (.Lfunc_end${i}), and helps to write stronger tests for the upcoming patch for the BB-Info section (as proposed in https://lists.llvm.org/pipermail/llvm-dev/2020-July/143512.html). The end label is used with basicblock-labels (BB-Info section in future) and basicblock-sections to compute the size of basic blocks and basic block sections, respectively. For BB sections, the section containing the entry basic block will not have a BB end label since it already gets the function end-label.
This label is cached for every basic block (CachedEndMCSymbol) like the label for the basic block (CachedMCSymbol).

Differential Revision: https://reviews.llvm.org/D83885
2020-08-05 13:17:19 -07:00
Denis Antrushin d21ce40821 [Statepoints] Operand folding in presence of tied registers.
Implement proper folding of statepoint meta operands (deopt and GC)
when statepoint uses tied registers.
For deopt operands it is just about properly preserving tiedness
in the new instruction.
For tied GC operands folding is a little bit more tricky.
We can fold tied GC operands only from InlineSpiller, because it knows
how to properly reload a tied def after it was turned into a memory operand.
Other users (e.g. peephole) cannot properly fold such operands as they
do not know how (or when) to reload them from memory.
We do this by un-tying the operand we want to fold in InlineSpiller
and allowing only untied operands to be folded in foldPatchpoint.
2020-08-05 20:18:28 +07:00
Simon Pilgrim 4aaf301fb8 [DAG] Fold vector (aext (load x)) -> (zext (truncate (zextload x)))
We currently don't do anything to fold any_extend vector loads as no target has such an instruction.

Instead I've added support for folding to a zextload; SimplifyDemandedBits does a good job of adjusting the zext(truncate()) stages as required later on.

We still need the custom scalar extload handling instead of using the tryToFoldExtOfLoad helper as it has different legality tests - we can probably tweak that to reduce most of the code duplication.

Fixes the regression I mentioned in rG99a971cadff7

Differential Revision: https://reviews.llvm.org/D85129
2020-08-05 11:22:23 +01:00
Georgii Rymar f97019ad6e [llvm-readobj/elf] - Add a testing for --stackmap and refine the implementation.
Currently, we only test the `--stackmap` option here:
https://github.com/llvm/llvm-project/blob/master/llvm/test/Object/stackmap-dump.test
it uses a precompiled MachO binary currently and I've found no tests for this option for ELF.

The implementation also has issues. For example, it might assert on a wrong version
of the .llvm-stackmaps section. Or it might crash on an empty or truncated section.

This patch introduces a new tools/llvm-readobj/ELF test file and implements a few
basic checks to catch simple crashes/issues.

It also eliminates `unwrapOrError` calls in `printStackMap()`.

Differential revision: https://reviews.llvm.org/D85208
2020-08-05 13:09:04 +03:00
Matt Arsenault 93cebb190a GlobalISel: Use buildAnyExtOrTrunc 2020-08-04 22:04:04 -04:00
Matt Arsenault 1ea182ce79 GlobalISel: Simplify code
This cannot be a vector of pointers, so using getScalarSizeInBits just
added a bit of extra noise.
2020-08-04 22:03:59 -04:00
Matt Arsenault 8f65c933c4 GlobalISel: Fix redundant variable and shadowing 2020-08-04 22:03:55 -04:00
Matt Arsenault 54615ec48f GlobalISel: Move load/store lowering to separate functions 2020-08-04 22:03:51 -04:00
Krzysztof Parzyszek 06d425737b [RDF] Add operator<<(raw_ostream&, RegisterAggr), NFC 2020-08-04 18:40:07 -05:00
Krzysztof Parzyszek 9521704553 [RDF] Use hash-based containers, cache extra information
This improves performance.
2020-08-04 18:36:49 -05:00
Krzysztof Parzyszek 4b25f67299 [RDF] Really remove remaining uses of PhysicalRegisterInfo::normalize 2020-08-04 18:23:38 -05:00
Krzysztof Parzyszek f0f467aeec [RDF] Cache register aliases in PhysicalRegisterInfo
This improves performance of PhysicalRegisterInfo::makeRegRef.
2020-08-04 18:10:00 -05:00
Krzysztof Parzyszek 47fe1b63f4 [RDF] Lower the sorting complexity in RDFLiveness::getAllReachingDefs
The sorting is needed, because reaching defs are (logically) ordered,
but are not collected in that order. This change will break up the
single call to std::sort into a series of smaller sorts, each of which
should use a cheaper comparison function than the original.
2020-08-04 18:06:37 -05:00
Eli Friedman 4a47f1c4ce [SelectionDAG][SVE] Support scalable vectors in getConstantFP()
Differential Revision: https://reviews.llvm.org/D85249
2020-08-04 15:32:43 -07:00
Krzysztof Parzyszek 09897b146a [RDF] Remove uses of RDFRegisters::normalize (deprecate)
This function has been reduced to an identity function for some time.
2020-08-04 17:02:12 -05:00
Matt Arsenault f8fb7835d6 GlobalISel: Add utility for getting function argument live ins
Get the argument register and ensure there's a copy to the virtual
register. AMDGPU and AArch64 have similarish code to get the livein
value, and I also want to use this in multiple places.

This is a bit more aggressive about setting the register class than
the original function, but that's probably OK.

I think we're missing a few verifier checks for function live ins. I
noticed AArch64's calling convention code is not actually adding
liveins to functions, only the entry block (which apparently might not
matter that much?). There should probably be a verifier check that
entry block live ins are also live into the function. We also might
need a verifier check that the copy to the livein virtual register is
in the entry block.
2020-08-04 16:55:55 -04:00
Cameron McInally 0f2b47b6da [FastISel] Don't transform FSUB(-0, X) -> FNEG(X) in FastISel
This corresponds with the SelectionDAGISel change in D84056.

Also, rename some poorly named tests in CodeGen/X86/fast-isel-fneg.ll with NFC.

Differential Revision: https://reviews.llvm.org/D85149
2020-08-04 14:42:53 -05:00
Matt Arsenault 3e16e2152c GlobalISel: Handle llvm.localescape
This one is pretty easy and shrinks the list of unhandled
intrinsics. I'm not sure how relevant the insert point is. Using the
insert position of EntryBuilder will place this after
constants. SelectionDAG seems to end up emitting these after argument
copies and before anything else, but I don't think it really
matters. This also ends up emitting these in the opposite order from
SelectionDAG, but I don't think that matters either.

This also needs a fix to stop the later passes dropping this as a dead
instruction. DeadMachineInstructionElim's version of isDead special
cases LOCAL_ESCAPE for some reason, and I'm not sure why it's excluded
from MachineInstr::isLabel (or why isDead doesn't check it).

I also noticed DeadMachineInstructionElim never considers inline asm
as dead, but GlobalISel will drop asm with no constraints.
2020-08-04 15:19:02 -04:00
Cameron McInally 23adbac9ee [GlobalISel] Don't transform FSUB(-0, X) -> FNEG(X) in GlobalISel.
This patch stops unconditionally transforming FSUB(-0, X) into an FNEG(X) while building the MIR.

This corresponds with the SelectionDAGISel change in D84056.

Differential Revision: https://reviews.llvm.org/D85139
2020-08-04 11:27:09 -05:00
Jay Foad 28e322ea93 [PowerPC] Custom lowering for funnel shifts
The custom lowering saves an instruction over the generic expansion, by
taking advantage of the fact that PowerPC shift instructions are well
defined in the shift-by-bitwidth case.

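To make the shift-by-bitwidth point concrete, here is a hedged plain-C++ sketch of
the funnel-shift-left semantics (not the PowerPC lowering itself); the generic
expansion needs the extra zero-amount guard because a shift by the full bit width
is not well defined on most targets:

```
#include <cstdint>

// fshl(Hi, Lo, Amt): shift the concatenation Hi:Lo left by Amt % 32 and
// return the upper 32 bits.
uint32_t fshl32(uint32_t Hi, uint32_t Lo, uint32_t Amt) {
  Amt %= 32;
  if (Amt == 0)
    return Hi; // guard: Lo >> (32 - 0) would be a shift by the full bit width
  return (Hi << Amt) | (Lo >> (32 - Amt));
}
```
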
Differential Revision: https://reviews.llvm.org/D83948
2020-08-04 16:30:49 +01:00
Sander de Smalen fd6584a220 [AArch64][SVE] Fix CFA calculation in presence of SVE objects.
The CFA is calculated as (SP/FP + offset), but when there are
SVE objects on the stack the SP offset is partly scalable and
should instead be expressed as the DWARF expression:

     SP + offset + scalable_offset * VG

where VG is the Vector Granule register, containing the
number of 64bits 'granules' in a scalable vector.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D84043
2020-08-04 11:47:06 +01:00
Fangrui Song 11bb7c220c [MC] Set sh_link to 0 if the associated symbol is undefined
Part of https://bugs.llvm.org/show_bug.cgi?id=41734

LTO can drop externally available definitions. Such AssociatedSymbol is
not associated with a symbol. ELFWriter::writeSection() will assert.

Allow a SHF_LINK_ORDER section to have sh_link=0.

We need to give sh_link a syntax, a literal zero in the linked-to symbol
position, e.g. `.section name,"ao",@progbits,0`

Reviewed By: pcc

Differential Revision: https://reviews.llvm.org/D72899
2020-08-03 13:43:48 -07:00
Jon Roelofs 7f1556f292 Fix typo: s/epomymous/eponymous/ NFC 2020-08-03 14:09:46 -06:00
Cameron McInally 31c7a2fd5c [FPEnv] Don't transform FSUB(-0,X)->FNEG(X) in SelectionDAGBuilder.
This patch stops unconditionally transforming FSUB(-0,X) into an FNEG(X) while building the DAG. There is also one small change to handle the new FSUB(-0,X) similarly to FNEG(X) in the AMDGPU backend.

Differential Revision: https://reviews.llvm.org/D84056
2020-08-03 10:22:25 -05:00
Matt Arsenault 42a9f6c554 GlobalISel: Handle arbitrary FewerElementsVector for G_IMPLICIT_DEF 2020-08-03 09:14:08 -04:00
Matt Arsenault 1782fbbc69 GlobalISel: Reimplement moreElementsVectorDst
Use pad with undef and unmerge with unused results. This is annoyingly
similar to several other places in LegalizerHelper, but they're all
slightly different.
2020-08-03 09:03:48 -04:00
Igor Kudrin 414b9bec6d [DebugInfo] Make DIEDelta::SizeOf() more explicit. NFCI.
The patch restricts DIEDelta::SizeOf() to accept only DWARF forms that
are actually used in the LLVM codebase. This should make the use of the
class more explicit and help to avoid issues similar to fixed in D83958
and D84094.

Differential Revision: https://reviews.llvm.org/D84095
2020-08-03 15:04:15 +07:00
Igor Kudrin f98e03a35d [DebugInfo] Fix misleading using of DWARF forms with DIELabel. NFCI.
DIELabel can emit only 32- or 64-bit values, while it was created in
some places with DW_FORM_udata, which implies emitting uleb128.
Nevertheless, these places also expected to emit U32 or U64, but just
used a misleading DWARF form. The patch updates those places to use more
appropriate DWARF forms and restricts DIELabel::SizeOf() to accept only
forms that are actually used in the LLVM codebase.

Differential Revision: https://reviews.llvm.org/D84094
2020-08-03 15:04:08 +07:00
Igor Kudrin 8feff8d14f [DebugInfo] Fix a comment and a variable name. NFC.
DebugLocListIndex keeps the index of an entry list, not the offset.

Differential Revision: https://reviews.llvm.org/D84093
2020-08-03 15:04:00 +07:00
Igor Kudrin 4e10a18972 [DebugInfo] Make DIELocList::SizeOf() more explicit. NFCI.
DIELocList is used with a limited number of DWARF forms, see the only
place where it is instantiated, DwarfCompileUnit::addLocationList().

The patch marks the unexpected execution path in DIELocList::SizeOf()
as unreachable, to reduce ambiguity.

Differential Revision: https://reviews.llvm.org/D84092
2020-08-03 15:03:37 +07:00
Matt Arsenault 212570abcf GlobalISel: Implement bitcast action for G_EXTRACT_VECTOR_ELT
For AMDGPU, vectors with elements < 32 bits should be indexed in
32-bit elements and the desired bits extracted from there. For
elements > 64-bits, these should be reduced to 64/32 elements to enable
the normal dynamic indexing paths.

In the dynamic index cases, this produces shorter code most of the
time. This does immediately regress the constant index cases, but this
should be fixed once we have the most basic of shift combines.

The element size > 64 case is pretty much ported from the existing
DAG implementation for extract element promote. The increasing element
size case is new.
2020-08-02 10:42:07 -04:00
Simon Pilgrim b8ffbf0e02 [DAG] TargetLowering::expandMUL_LOHI - pass SDLoc as const&
Try to be more consistent with the SDLoc param in the TargetLowering methods.

This also exposes an issue where we were passing an SDNode as an SDLoc, relying on the implicit SDLoc(SDNode) constructor.
2020-08-02 15:31:36 +01:00
Simon Pilgrim d14a22da5e [DAG] TargetLowering::LowerAsmOutputForConstraint - pass SDLoc as const&
Try to be more consistent with the SDLoc param in the TargetLowering methods.
2020-08-02 15:12:02 +01:00
Kazu Hirata 60434989e5 Use llvm::is_contained where appropriate (NFC)
Use llvm::is_contained where appropriate (NFC)

Reviewed By: kazu

Differential Revision: https://reviews.llvm.org/D85083
2020-08-01 21:51:06 -07:00
Evgeny Leviant e73f5d86f1 [MachineVerifier] Refactor calcRegsPassed. NFC
Patch improves performance of verify-machineinstrs pass up to 10x.
Differential revision: https://reviews.llvm.org/D84105
2020-08-01 12:58:52 +03:00
Sriraman Tallam ca6b6d40ff Rename basic block sections options to be consistent.
D68049 created options for basic block sections: -fbasic-block-sections=,
-funique-basic-block-section-names. Rename options in llc and lld (--lto-)
to be consistent. Specifically,

+ Rename basicblock-sections to basic-block-sections
+ Rename unique-bb-section-names to unique-basic-block-section-names

Differential Revision: https://reviews.llvm.org/D84462
2020-07-31 11:50:55 -07:00
Aditya Nandakumar 2144a3bdbb [GISel] Add combiners for G_INTTOPTR and G_PTRTOINT
https://reviews.llvm.org/D84909

Patch adds two new GICombinerRules, one for G_INTTOPTR and one for
G_PTRTOINT. The G_INTTOPTR elides ptr2int(int2ptr(x)) to a copy of x, if
the cast is within the same address space. The G_PTRTOINT elides
int2ptr(ptr2int(x)) to a copy of x. Patch additionally adds new combiner
tests for the AArch64 target to test these new combiner rules.

Patch by mkitzan
2020-07-31 10:13:36 -07:00
Matt Arsenault 57bd64ff84 Support addrspacecast initializers with isNoopAddrSpaceCast
Moves isNoopAddrSpaceCast to the TargetMachine. It logically belongs
with the DataLayout.
2020-07-31 10:42:43 -04:00
Vitaly Buka b0eb40ca39 [NFC] Remove unused GetUnderlyingObject parameter
Depends on D84617.

Differential Revision: https://reviews.llvm.org/D84621
2020-07-31 02:10:03 -07:00
Vitaly Buka 89051ebace [NFC] GetUnderlyingObject -> getUnderlyingObject
I am going to touch them in the next patch anyway
2020-07-30 21:08:24 -07:00
Eli Friedman 7e88efa7c5 [LegalizeTypes][SVE] Support widen/split legalization for SPLAT_VECTOR
Just the obvious implementation that rewrites the result type. Also fix
warning from EXTRACT_SUBVECTOR legalization that triggers on the test.

Differential Revision: https://reviews.llvm.org/D84706
2020-07-30 16:17:45 -07:00
Jon Roelofs afae6d97fa [SelectionDAG] Fix lowering of vector geps
This fixes an assertion failure that was being triggered in
SelectionDAG::getZeroExtendInReg(), where it was trying to extend the <2xi32>
to i64 (which should have been <2xi64>).

Fixes: rdar://66016901

Differential Revision: https://reviews.llvm.org/D84884
2020-07-30 14:56:53 -06:00
Brendon Cahoon 7b114446c3 Align store conditional address
In cases where the alignment of the datatype is smaller than
expected by the instruction, the address is aligned. The aligned
address is used for the load, but wasn't used for the store
conditional, which resulted in a run-time alignment exception.
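
A hedged illustration of the alignment computation involved (not the actual backend change):

```
#include <cstdint>

// Clear the low bits so both the load and the store conditional use the same
// aligned address. Align must be a power of two.
uint64_t alignDown(uint64_t Addr, uint64_t Align) {
  return Addr & ~(Align - 1);
}
// e.g. alignDown(0x1003, 4) == 0x1000
```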
2020-07-30 10:42:00 -05:00
jasonliu 04dc9691eb [XCOFF][AIX] Enable -ffunction-sections
Summary:
This patch implements -ffunction-sections on AIX.
This patch focuses on assembly generation.
Follow-on patch needs to handle:
1. -ffunction-sections implication for jump table.
2. Object file generation path and associated testing.

Differential Revision: https://reviews.llvm.org/D83875
2020-07-30 13:30:01 +00:00
Sam Tebbs 276ed5f7e4 [DAGCombiner] Fold sext_inreg of a masked load into a sign extended masked load
This patch adds a DAG combine fold for a sext(masked_load) into a sign extended masked load.

Differential Revision: https://reviews.llvm.org/D84332
2020-07-30 10:34:02 +01:00
Kang Zhang 0037a5f894 [PHIElimination] Fix the killed flag for LowerPHINode()
Summary:
In the phi-node-elimination pass, we set the killed flag incorrectly.
When we eliminate the PHI node, we replace the PHI with a copy for the
incoming value.

Before this patch, we would set the incoming value as killed (the PHICopy), and
we would remove the killed flag from the last use of the incoming value (OldKill).
This is only correct if the new PHICopy comes after the OldKill.

Reviewed By: bjope

Differential Revision: https://reviews.llvm.org/D80886
2020-07-30 08:18:50 +00:00
Matt Arsenault 7d0b32c268 GlobalISel: Use result of find rather than rechecking map 2020-07-29 21:26:20 -04:00
Matt Arsenault 66c572af55 GlobalISel: Handle assorted no-op intrinsics
SelectionDAGBuilder just drops these, so do the same.
2020-07-29 21:26:20 -04:00
Matt Arsenault 0da582d9b6 GlobalISel: Handle llvm.roundeven
I still think it's highly questionable that we have two intrinsics
with identical behavior and only vary by the name of the libcall used
if it happens to be lowered that way, but try to reduce the feature
delta between SDAG and GlobalISel for recently added intrinsics. I'm
not sure which opcode should be considered the canonical one, but
lower roundeven back to round.
2020-07-29 20:01:12 -04:00
Philip Reames 755f91f12c [Statepoint] Enable cross block relocates w/vreg lowering
This change is mechanical, it just removes the restriction and updates tests.  The key building blocks were submitted in 31342eb and 8fe2abc.

Note that this (and the preceding changes) entirely subsumes D83965. I did include a couple of its tests.

From the codegen changes, an interesting observation: this doesn't actually reduce spilling, it just lets the register allocator do its job. That results in a slightly different overall result which has both pros and cons over the eager spill lowering. (i.e. We'll have some perf tuning to do once this is stable.)
2020-07-29 13:32:51 -07:00
Amara Emerson 0c0e36061a [GlobalISel] Add G_INTRINSIC_LRINT and translate from llvm.lrint
Differential Revision: https://reviews.llvm.org/D84551
2020-07-29 11:51:04 -07:00
Philip Reames 8fe2abc190 [Statepoint] Consolidate relocation type tracking [NFC]
Change the way we track how a particular pointer was relocated at a statepoint in selection dag.  Previously, we used an optional<location> for the spill lowering, and a block local Register for the newly introduced vreg lowering.  Combine all three lowerings (norelocate, spill, and vreg) into a single helper class, and keep a single copy of the information.

This is submitted separately as it really does make the code more readable on its own, but the indirect motivation is to move vreg tracking from StatepointLowering to FunctionLoweringInfo. This is the last piece needed to support cross block relocations with vregs; that will follow in a separate (non-NFC) patch.
2020-07-29 11:45:31 -07:00
Amara Emerson d8ba622209 [AArch64][GlobalISel] Selection support for vector DUP[X]lane instructions.
In future, we'd like to use the perfect-shuffle mechanism to deal with these
shuffle permutations. For now, this improves performance by avoiding the
super-expensive const-pool load + tbl instruction.

Differential Revision: https://reviews.llvm.org/D84866
2020-07-29 11:41:37 -07:00
Matt Arsenault 0b7de7966f GlobalISel: Implement lower for G_EXTRACT_VECTOR_ELT
Use the basic store to stack and reload.
2020-07-29 14:16:28 -04:00
Matt Arsenault 90b76dac57 GlobalISel: Remove unreachable condition
Fixes bug 46882
2020-07-29 13:42:22 -04:00
Simon Pilgrim fdc902774e [DAG][AMDGPU][X86] Add SimplifyMultipleUseDemandedBits handling for SIGN/ZERO_EXTEND + SIGN/ZERO_EXTEND_VECTOR_INREG
Peek through multiple use ops like we already do for ANY_EXTEND/ANY_EXTEND_VECTOR_INREG

Differential Revision: https://reviews.llvm.org/D84863
2020-07-29 18:10:59 +01:00
Philip Reames 31342eb63e [Statepoint] When using the tied def lowering, unconditionally use vregs [almost NFC]
This builds on 3da1a96 on the path towards supporting invokes and cross block relocations. The actual change attempts to be NFC, but does fail in one corner-case explained below.

The change itself is fairly mechanical. Rather than remember SDValues - which are inherently block local - immediately produce a virtual register copy and remember that.

Once this lands, we'll update the FunctionLoweringInfo::StatepointSpillMap map to allow register based lowerings, delete VirtRegs from StatepointLowering, and drop the restriction against cross block relocations. I deliberately separate the semantic part into its own change for ease of understanding and fault isolation.

The corner-case which isn't quite NFC is that the old implementation implicitly CSEd gc.relocates of the same SDValue regardless of type. The new implementation still only relocates once, but it produces distinct vregs for the bitcast and its source, whereas SelectionDAG's generic CSE was able to remove the bitcast in the old implementation. Note that the final assembly doesn't change (at least in the test), as our MI level optimizations catch the duplication.

I assert that this is an uninteresting corner-case. It's functionally correct, and if we find a case where this influences performance, we should really be canonicalizing types to i8* at the IR level.

Differential Revision: https://reviews.llvm.org/D84692
2020-07-29 09:23:52 -07:00
Kang Zhang a4ade9ed21 [MachineVerifier] Handle the PHI node for verifyLiveVariables()
Summary:
When verifying LiveVariables, the MachineVerifier pass
will calculate the LiveVariables and compare the result with the
result the livevars pass gave. If they are different, verifyLiveVariables()
will give an error.

But when we calculate the LiveVariables in MachineVerifier, we don't
consider the PHI node, while livevars does.

This patch fixes the above bug.

Reviewed By: bjope

Differential Revision: https://reviews.llvm.org/D80274
2020-07-29 15:43:47 +00:00
Simon Wallis 6a05c6bfc8 [MachineCopyPropagation] BackwardPropagatableCopy: add check for hasOverlappingMultipleDef
In MachineCopyPropagation::BackwardPropagatableCopy(),
a check is added for multiple destination registers.

The copy propagation is avoided if the copied destination register
is the same register as another destination on the same instruction.

A new test is added.  This used to fail on ARM like this:
error: unpredictable instruction, RdHi and RdLo must be different
        umull   r9, r9, lr, r0

Reviewed By: lkail

Differential Revision: https://reviews.llvm.org/D82638
2020-07-29 16:21:01 +01:00
David Sherwood 2078771759 [SVE][CodeGen] Add simple integer add tests for SVE tuple types
I have added tests to:

  CodeGen/AArch64/sve-intrinsics-int-arith.ll

for doing simple integer add operations on tuple types. Since these
tests introduced new warnings due to incorrect use of
getVectorNumElements() I have also fixed up these warnings in the
same patch. These fixes are:

1. In narrowExtractedVectorBinOp I have changed the code to bail out
early for scalable vector types, since we've not yet hit a case that
proves the optimisations are profitable for scalable vectors.
2. In DAGTypeLegalizer::WidenVecRes_CONCAT_VECTORS I have replaced
calls to getVectorNumElements with getVectorMinNumElements in cases
that work with scalable vectors. For the other cases I have added
asserts that the vector is not scalable because we should not be
using shuffle vectors and build vectors in such cases.

Differential revision: https://reviews.llvm.org/D84016
2020-07-29 13:32:10 +01:00
David Sherwood 5d84eafc6b [CodeGen] Remove calls to getVectorNumElements in DAGTypeLegalizer::SplitVecOp_EXTRACT_SUBVECTOR
In DAGTypeLegalizer::SplitVecOp_EXTRACT_SUBVECTOR I have replaced
calls to getVectorNumElements with getVectorMinNumElements, since
this code path works for both fixed and scalable vector types. For
scalable vectors the index will be multiplied by VSCALE.

Fixes warnings in this test:

  sve-sext-zext.ll

Differential revision: https://reviews.llvm.org/D83198
2020-07-29 13:05:39 +01:00
Daniel Sanders abf1ed70d6 [globalisel][cse] Merge debug locations when CSE'ing
Reviewed By: aditya_nandakumar

Differential Revision: https://reviews.llvm.org/D78388
2020-07-28 14:25:26 -07:00
Matt Arsenault e87356b498 GlobalISel: Don't assert on operations with no type indices
Fix not marking G_FENCE as legal on AMDGPU. This was apparently
defaulting to legal using the "legacy" rules, whatever those are.
2020-07-28 16:49:55 -04:00
Mircea Trofin 1e027b77f0 [llvm][NFC] refactor setBlockFrequency for clarity.
The refactoring encapsulates frequency calculation in MachineBlockFrequencyInfo,
and renames the API to clarify its motivation. It should clarify
frequencies may not be reset 'freely' by users of the analysis, as the
API serves as a partial update to avoid a full analysis recomputation.

Differential Revision: https://reviews.llvm.org/D84427
2020-07-28 13:04:11 -07:00
Simon Pilgrim b4b6e77454 [DAG] isSplatValue - add support for TRUNCATE/SIGN_EXTEND/ZERO_EXTEND
These are just pass-throughs to the source operand - we can't assume that ANY_EXTEND(splat) will still be a splat though.
2020-07-28 19:56:11 +01:00
Matt Arsenault 97b5fb78d1 GlobalISel: Translate llvm.convert.{to|from}.fp16 intrinsics
I think these were added as a workaround for SelectionDAG lacking half
legalization support in the past. I think they should probably be
removed from the IR, but clang does still have a target control to
emit these instead of the native half fpext/fptrunc.
2020-07-28 11:46:05 -04:00
Matt Arsenault 5f802be4e5 GlobalISel: Don't fail translate on intrinsics with metadata 2020-07-27 19:00:25 -04:00
Sridhar Gopinath 4b5412b5db Fix the move constructor of MMI to move MachineFunctions map
The move constructor of MachineModuleInfo currently does not copy the
MachineFunctions map. This commit fixes this issue.

Patch by Sridhar Gopinath. Thanks!

Differential Revision: https://reviews.llvm.org/D84274
2020-07-27 14:10:05 -07:00
Kazu Hirata 902cbcd59e Use llvm::is_contained where appropriate (NFC)
Summary:
This patch replaces std::find with llvm::is_contained where
appropriate.

Reviewers: efriedma, nhaehnle

Reviewed By: nhaehnle

Subscribers: arsenm, jvesely, nhaehnle, hiraditya, rogfer01, kerbowa, llvm-commits, vkmr

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D84489
2020-07-27 10:20:44 -07:00
Nadav Rotem df880b7730 [StackProtector] Speed up RequiresStackProtector
Speed up the method RequiresStackProtector by checking the intrinsic
value of the call. The original code calls getName() that returns an
allocating std::string on each check. This change removes about 96072
std::string instances when compiling sqlite3.c; The function was
discovered with a Facebook-internal performance tool.

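A hedged sketch of the kind of check involved (not the actual RequiresStackProtector
code): an intrinsic-ID comparison is a single integer compare and allocates nothing.

```
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Intrinsics.h"
using namespace llvm;

// CallBase::getIntrinsicID() returns Intrinsic::not_intrinsic for ordinary calls,
// so this is just an integer comparison - no name lookup, no string allocation.
static bool callIsIntrinsic(const CallBase &CB, Intrinsic::ID ID) {
  return CB.getIntrinsicID() == ID;
}
```
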
Differential Revision: https://reviews.llvm.org/D84620
2020-07-27 10:07:47 -07:00
Amy Kwan 7c182663a8 Revert "Re-apply:" Emit DW_OP_implicit_value for Floating point constants""
This patch reverts commit `59a76d957a26` as it has caused failures on the
big endian PowerPC buildbots (as well as the SystemZ buildbots).
2020-07-27 09:44:13 -05:00
David Sherwood 14bc85e0eb [SVE] Don't use LocalStackAllocation for SVE objects
I have introduced a new TargetFrameLowering query function:

  isStackIdSafeForLocalArea

that queries whether or not it is safe for objects of a given stack
id to be bundled into the local area. The default behaviour is to
always bundle regardless of the stack id, however for AArch64 this is
overriden so that it's only safe for fixed-size stack objects.
There is future work here to extend this algorithm for multiple local
areas so that SVE stack objects can be bundled together and accessed
from their own virtual base-pointer.

Differential Revision: https://reviews.llvm.org/D83859
2020-07-27 08:22:01 +01:00
QingShan Zhang a6e9f5264c [Scheduling] Improve group algorithm for store cluster
Store Addr and Store Addr+8 are a clusterable pair. They have a memory (ctrl) dependency on different loads.
The current implementation will put these two stores into different groups and fail to cluster them.

Reviewed By: evandro

Differential Revision: https://reviews.llvm.org/D84139
2020-07-27 02:02:40 +00:00
Matt Arsenault f6176f8a5f GlobalISel: Handle G_PTR_ADD in narrowScalar 2020-07-26 10:08:17 -04:00
Matt Arsenault 3e8bb7a000 GlobalISel: Handle fewerElementsVector for G_PTR_ADD 2020-07-26 10:08:09 -04:00
Matt Arsenault 61ced4b87a GlobalISel: Handle 'n' inline asm constraint 2020-07-26 09:30:41 -04:00
Changpeng Fang 9162b70e51 DAGCombiner: Don't simplify the token factor if the node's number of operands already exceeds TokenFactorInlineLimit
Summary:
  In parallelizeChainedStores, a TokenFactor was created with a size greater than 3000.
We found that DAGCombiner::visitTokenFactor will consume a huge amount of time on
such nodes. Since the number of operands already exceeds TokenFactorInlineLimit, we propose
to give up the simplification, for the sake of compile time.

Reviewers:
  @spatel, @arsenm

Differential Revision:
  https://reviews.llvm.org/D84204
2020-07-25 21:20:59 -07:00
Eric Christopher 18975762c1 Fold StatepointBB into checks as it's only used from an NDEBUG or ASSERT
context, fixing an unused variable warning.
2020-07-25 18:36:53 -07:00
Philip Reames 55dae9c20c [Statepoints] Style cleanup after 3da1a963 [NFC]
Just fixing a few minor stylistic issues.
2020-07-25 16:40:39 -07:00
Philip Reames 3da1a9634e [Statepoints] Support lowering gc relocations to virtual registers
(Disabled under flag for the moment)

This is part of a larger project wherein we are finally integrating lowering of gc live operands with the register allocator.  Today, we force-spill all operands in SelectionDAG.  The code to do so is distinctly non-optimal.  The approach this patch is working towards is to instead lower the relocations directly into the MI form, and let the register allocator pick which ones get spilled and which stack slots they get spilled to.  In terms of performance, the latter part is actually more important as it avoids redundant shuffling of values between stack slots.

This particular change adds ISEL support to produce the variadic def STATEPOINT form required by the above.  In particular, the first N relocations are lowered to variadic tied def/use pairs.  So the new statepoint looks like this:
reloc1,reloc2,... = STATEPOINT ..., base1, derived1<tied-def0>, base2, derived2<tied-def1>, ...

N is limited by the maximal number of tied registers a machine instruction can have (15 at the moment).

The current patch is restricted to handling relocations within a single basic block.  Cross block relocations (e.g. invokes) are handled via the legacy mechanism.  This restriction will be relaxed in future patches.

Patch By: dantrushin
Differential Revision: https://reviews.llvm.org/D81648
2020-07-25 14:26:05 -07:00
Matt Arsenault 4b53072ee5 GlobalISel: Define mulfix/divfix opcodes
The full expansion involves funnel shifts, which depend on another
patch for their expansion.
2020-07-24 20:02:20 -04:00
Nicolai Hähnle 5934df0c9a MachineBasicBlock: add printName method
Common up some existing MBB name printing logic into a single place.
Note that basic block dumping now prints the same set of attributes as
the MIRPrinter.

Change-Id: I8f022bbd922e831bc96d63143d7472c03282530b

Differential Revision: https://reviews.llvm.org/D83253
2020-07-24 18:18:09 +02:00
Djordje Todorovic 6371a0a00e [DWARF][EntryValues] Emit GNU extensions in the case of DWARF 4 + SCE
Emit DWARF 5 call-site symbols even though DWARF 4 is set,
only in the case of LLDB tuning.

This patch addresses PR46643.

Differential Revision: https://reviews.llvm.org/D83463
2020-07-24 14:33:57 +02:00
Simon Pilgrim 0128b9505c Revert rG5dd566b7c7b78bd- "PassManager.h - remove unnecessary Function.h/Module.h includes. NFCI."
This reverts commit 5dd566b7c7.

Causing some buildbot failures that I'm not seeing on MSVC builds.
2020-07-24 13:02:33 +01:00
Simon Pilgrim 5dd566b7c7 PassManager.h - remove unnecessary Function.h/Module.h includes. NFCI.
PassManager.h is one of the top headers in the ClangBuildAnalyzer frontend worst offenders list.

This exposes a large number of implicit dependencies on various forward declarations/includes in other headers that need addressing.
2020-07-24 12:40:50 +01:00
Djordje Todorovic cbb3571b0d [DWARF] Avoid entry_values production for SCE
The SONY debugger does not prefer the debug entry values feature, so
the plan is to avoid producing entry values
by default when the tuning is the SCE debugger.

The feature can still be enabled with the -debug-entry-values
option for testing/development purposes.

This patch addresses PR46643.

Differential Revision: https://reviews.llvm.org/D83462
2020-07-24 13:34:05 +02:00
Craig Topper 8131e19064 [LegalizeTypes] Teach DAGTypeLegalizer::GenWidenVectorLoads to pad with undef if needed when concatenating small or loads to match a larger load
In the included test case the align 16 allowed the v23f32 load to be handled as load v16f32, load v4f32, and load v4f32 (one element not used). These loads all need to be concatenated together into a final vector. In this case we tried to concatenate the two v4f32 loads to match the type of the v16f32 load so we could do a second concat_vectors, but those loads alone only add up to v8f32. So we need two v4f32 undefs to pad it.

It appears we've tried to hack around a similar issue in this code before by adding undef padding to loads in one of the earlier loops in this function: originally in r147964 by padding all loads narrower than previous loads to the same size, and later modified in r293088 to pad only the last load. This patch removes that earlier code and just handles it on demand where we know we need it.

Fixes PR46820

Differential Revision: https://reviews.llvm.org/D84463
2020-07-23 19:02:03 -07:00
Matt Arsenault 891759db73 GlobalISel: Add scalarSameSizeAs LegalizeRule
Widen or narrow a type to a type with the same scalar size as
another. This can be used to force G_PTR_ADD/G_PTRMASK's scalar
operand to match the bitwidth of the pointer type. Use this to
disallow narrower types for G_PTRMASK.
2020-07-23 21:17:31 -04:00
Amara Emerson 645e7fc542 [GlobalISel] Use existing MIR builder instead of creating one in combiner. 2020-07-23 14:16:45 -07:00
Amara Emerson 3b10e42ba1 [AArch64][GlobalISel] Add post-legalize combine for sext(trunc(sextload)) -> trunc/copy
On AArch64 we generate redundant G_SEXTs or G_SEXT_INREGs because of this.

Differential Revision: https://reviews.llvm.org/D81993
2020-07-23 12:06:35 -07:00
Nikita Popov deb4bb2b3a [IR] Add min/max/abs intrinsics
This adds the llvm.abs(), llvm.umin(), llvm.umax(), llvm.smin(),
and llvm.smax() intrinsics specified in D81829. For SelectionDAG,
the ISD opcodes and all the legalization and lowering already exist,
so this just wires them up to the intrinsic in the SDAG builder and
adds rudimentary tests. For GlobalISel only the min/max intrinsics
are wired up, as llvm.abs() will require the addition of a G_ABS op,
and corresponding legalization support.

Differential Revision: https://reviews.llvm.org/D84125
2020-07-23 20:56:19 +02:00
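As a small, hedged illustration of how the new intrinsics can be emitted from C++, this assumes the pre-existing IRBuilder::CreateBinaryIntrinsic helper and a builder already positioned inside a function; it is not code from the patch itself.

```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"

using namespace llvm;

// Emit smax(A, B) followed by umin(..., B) via the generic helper.
static Value *emitMinMaxExample(IRBuilder<> &Builder, Value *A, Value *B) {
  Value *SMax = Builder.CreateBinaryIntrinsic(Intrinsic::smax, A, B);
  return Builder.CreateBinaryIntrinsic(Intrinsic::umin, SMax, B);
}
```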
Mircea Trofin 302e91baf4 [llvm][NFC] Add comments and common-case API to MachineBlockFrequencyInfo
Clarify the relation between a block's BlockFrequency and the
getEntryFreq() API, and add an API for the relatively common case of
finding a block's frequency relative to the entry point.

Added / moved some comments to header.

Differential Revision: https://reviews.llvm.org/D84357
2020-07-23 08:42:34 -07:00
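A hedged sketch of the "frequency relative to the entry point" computation described above, written against the existing getBlockFreq/getEntryFreq APIs; the dedicated helper added by the patch is not named here.

```
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineBlockFrequencyInfo.h"

#include <cstdint>

using namespace llvm;

// Rough relative hotness of MBB versus the function's entry block.
static double relativeToEntry(const MachineBlockFrequencyInfo &MBFI,
                              const MachineBasicBlock &MBB) {
  uint64_t Entry = MBFI.getEntryFreq();                    // entry block frequency
  uint64_t Block = MBFI.getBlockFreq(&MBB).getFrequency(); // this block's frequency
  return Entry ? double(Block) / double(Entry) : 0.0;
}
```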
Evgeny Leviant dc619f3d7a [CodeGen][TargetPassConfig] Add unreachable-mbb-elimination pass explicitly
Differential revision: https://reviews.llvm.org/D84228
2020-07-23 18:05:11 +03:00
Jay Foad b35833b84e [GlobalISel][AMDGPU] Legalize saturating add/subtract
Add support in LegalizerHelper for lowering G_SADDSAT etc. either
using add/subtract-with-overflow or using max/min instructions.

Enable this lowering for AMDGPU so it can be tested. The legalization
rules are still approximate and skip using the clamp bit to
treat these as legal, which has never been done before. This also
doesn't yet try to deal with expanding the SALU cases.
2020-07-23 09:06:42 -04:00
Simon Pilgrim 1003113ef0 Fix -Wparentheses warning - add missing brackets around the entire assertion condition 2020-07-23 13:33:24 +01:00
Konstantin Schwarz 931488779f [GlobalISel][InlineAsm] Add register class ID to the flags of register input operands
Summary: We do this already for output operands, but missed it for (non-tied) input operands.

Reviewers: arsenm, Petar.Avramovic

Reviewed By: arsenm

Subscribers: jvesely, wdng, nhaehnle, rovka, hiraditya, llvm-commits, kerbowa

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83763
2020-07-23 13:35:01 +02:00
Florian Hahn 6c9da995fc [ScheduleDAGRRList] Pacify overload mismatch in std::min.
On systems where size() doesn't return unsigned long, this leads to an
overload mismatch. Convert the constant to whatever type is used for
Q.size() on the system.
2020-07-23 11:56:50 +01:00
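A self-contained illustration of the overload problem and the style of fix: when Q.size() and the other operand have different unsigned types, std::min has no single deducible template argument.

```
#include <algorithm>
#include <cstddef>
#include <vector>

std::size_t pickCount(const std::vector<int> &Q, unsigned Limit) {
  // std::min(Limit, Q.size());  // fails to compile where size_type != unsigned
  return std::min(static_cast<decltype(Q.size())>(Limit), Q.size());
}
```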
Florian Hahn 2f8e6b5f3c [ScheduleDAGRRList] Limit number of candidates to explore.
Currently popFromQueueImpl iterates over all candidates to find the best
one. While the candidate queue is small, this is not a problem. But it
becomes a problem once the queue gets larger. For example, the snippet
below takes 330s to compile with llc -O0, but completes in 3s with this
patch.

define void @test(i4000000* %ptr) {
entry:
  store i4000000 0, i4000000* %ptr, align 4
  ret void
}

This patch limits the number of candidates to check to 1000. This limit
ensures that it never triggers for test-suite/SPEC2000/SPEC2006 on X86
and AArch64 with -O3, while still drastically limiting the compile-time
in case of very large queues.

It would be even better to use a binary heap to manage the queue
(D83335), but some heuristics change the score of a node in the queue
after another node has been scheduled. I plan to address this for
backends that use the MachineScheduler in the future, but that requires
a more careful evaluation. In the meantime, the limit should help users
impacted by this issue.

The patch includes a slightly smaller version of the motivating example
as test case, to guard against the issue.

Reviewers: efriedma, paquette, niravd

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D84328
2020-07-23 11:35:33 +01:00
Sourabh Singh Tomar 8998f8ab66 [DebugInfo] Attempt to fix regression test failure after 59a76d957a
Test case `test/CodeGen/WebAssembly/stackified-debug.ll`
was failing due to malformed DwarfExpression.

This failure has been seen in lot of bots, for instance in:
http://lab.llvm.org:8011/builders/lld-x86_64-ubuntu-fast/builds/18794

: 'RUN: at line 1'
/home/buildbot/as-builder-4/lld-x86_64-ubuntu-fast/build/bin/llc
/home/buildbot/as-builder-4/lld-x86_64-ubuntu-fast/build/bin/FileCheck /home/buildbot/as-builder-4/lld-x86_64-ubuntu-fast/llvm-project/llvm/test/CodeGen/WebAssembly/stackified-debug.ll
home/buildbot/as-builder-4/lld-x86_64-ubuntu-fast/llvm-project/llvm/test/CodeGen/WebAssembly/stackified-debug.ll:26:10: error: CHECK: expected string not found in input
 CHECK: .int16 4 # Loc expr size
         ^
<stdin>:34:2: note: scanning from here
 .int16 3 # Loc expr size

Differential Revision: https://reviews.llvm.org/D83560
2020-07-23 14:55:30 +05:30
Sourabh Singh Tomar 59a76d957a Re-apply:" Emit DW_OP_implicit_value for Floating point constants"
This patch was reverted in 9d2da6759b due to an assertion failure seen
in `test/DebugInfo/Sparc/subreg.ll`. The assertion failure was happening
due to a malformed/unhandled DwarfExpression.

Differential Revision: https://reviews.llvm.org/D83560
2020-07-23 13:56:20 +05:30
Sourabh Singh Tomar 9d2da6759b Revert "[DebugInfo] Emit DW_OP_implicit_value for Floating point constants"
This reverts commit 6b55a95898.
Temporary revert due to an assertion failure in a test case in the Sparc backend.
`test/DebugInfo/Sparc/subreg.ll`
Seen in lot of bots, for instance in:
`http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win/builds/24679`
2020-07-23 08:50:01 +05:30
Sourabh Singh Tomar 6b55a95898 [DebugInfo] Emit DW_OP_implicit_value for Floating point constants
Summary:
llvm is missing support for DW_OP_implicit_value operation.
DW_OP_implicit_value op is indispensable for cases such as
optimized out long double variables.

For intro refer: DWARFv5 Spec Pg: 40 2.6.1.1.4 Implicit Location Descriptions

Consider the following example:
```
int main() {
        long double ld = 3.14;
        printf("dummy\n");
        ld *= ld;
        return 0;
}
```
when compiled with trunk `clang` as
`clang test.c -g -O1`, it produces the following location description
of variable `ld`:
```
DW_AT_location        (0x00000000:
                     [0x0000000000201691, 0x000000000020169b): DW_OP_constu 0xc8f5c28f5c28f800, DW_OP_stack_value, DW_OP_piece 0x8, DW_OP_constu 0x4000, DW_OP_stack_value, DW_OP_bit_piece 0x10 0x40, DW_OP_stack_value)
                  DW_AT_name    ("ld")
```
Here one may notice that this representation is incorrect (the DWARF4
stack can only hold integers, and only up to the size of an address),
while the variable itself is `128` bits in size.
GDB and LLDB confirm this:
```
(gdb) p ld
$1 = <invalid float value>
(lldb) frame variable ld
(long double) ld = <extracting data from value failed>
```

GCC represents/uses DW_OP_implicit_value in these sorts of situations.
Based on the discussion with Jakub Jelinek regarding GCC's motivation
for using this, I concluded that DW_OP_implicit_value is most appropriate
in this case.

Link: https://gcc.gnu.org/pipermail/gcc/2020-July/233057.html

GDB seems happy after this patch (LLDB doesn't have support
for DW_OP_implicit_value):
```
(gdb) p ld
p ld
$1 = 3.14000000000000012434
```

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D83560
2020-07-23 07:21:49 +05:30
Christopher Tetreault ae35c09c34 [MVT] Fix getTypeForEVT for v64f16 and v128f16
Summary: These should have half float as the element type

Reviewers: cameron.mcinally, efriedma, sdesmalen, paulwalker-arm

Reviewed By: paulwalker-arm

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D84211
2020-07-22 14:27:08 -07:00
David Blaikie 5c2451785d DebugInfo: Use debug_line.dwo for debug_macro.dwo
This is an alternative proposal to D81476 (and D82084) - the details were sufficiently confusing to me that it seemed easier to write some code and see how it looks.

Reviewers: SouraVX

Differential Revision: https://reviews.llvm.org/D84278
2020-07-22 14:06:33 -07:00
Mircea Trofin 111a018b36 [llvm][NFC] const-ed MachineBlockFrequencyInfo::isIrrLoopHeader 2020-07-22 13:06:34 -07:00
Andrew Litteken bcbc6117b5 [CGP] Add Pass Dependencies
Add pass dependencies:
  - TargetTransformInfoWrapperPass
  - TargetPassConfig
  - LoopInfoWrapperPass
  - TargetLibraryInfoWrapperPass

To fix inconsistencies when passes are added to the pipeline.

Reviewers: efriedma, kmclaughlin, paquette

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D84346
2020-07-22 12:02:53 -07:00
Simon Pilgrim 1c060aa988 DwarfCompileUnit.cpp - remove duplicate includes that already exist in DwarfCompileUnit.h. NFC.
Also remove DIE.h include from DwarfCompileUnit.h and replace with forward declarations.
2020-07-22 19:25:27 +01:00
Simon Pilgrim cd0a36bbda CodeViewDebug.cpp - remove duplicate includes that already exist in CodeViewDebug.h. NFC. 2020-07-22 19:25:27 +01:00
Matt Arsenault b98f902f18 GlobalISel: Restructure argument lowering loop in handleAssignments
This was structured in a way that implied every split argument is in
memory, or in registers. It is possible to pass an original argument
partially in registers, and partially in memory. Transpose the logic
here to only consider a single piece at a time. Every individual
CCValAssign should be treated independently, and any merge to original
value needs to be handled later.

This is in preparation for merging some preprocessing hacks in the
AMDGPU calling convention lowering into the generic code.

I'm also not sure what the correct behavior is for memlocs where the
promoted size is larger than the original value. I've opted to clamp
the memory access size to not exceed the value register to avoid the
explicit trunc/extend/vector widen/vector extract instruction. This
happens for AMDGPU for i8 arguments that end up stack passed, which
are promoted to i16 (I think this is a preexisting DAG bug though, and
they should not really be promoted when in memory).
2020-07-22 13:31:11 -04:00
jasonliu b98b1700ef [XCOFF] Enable symbol alias for AIX
Summary:
AIX assembly's .set directive is not usable for aliasing purpose.
We need to use the extra-label-at-definition strategy to generate symbol
aliasing on AIX.

Reviewed By: DiggerLin, Xiangling_L

Differential Revision: https://reviews.llvm.org/D83252
2020-07-22 14:03:55 +00:00
Simon Pilgrim fa95688237 SelectionDAGBuilder.cpp - remove duplicate includes that already exist in SelectionDAGBuilder.h. NFC. 2020-07-22 14:19:41 +01:00
OCHyams ce6de3747b [DebugInfo] Drop location ranges for variables which exist entirely outside the variable's scope
Summary:
This patch reduces file size in debug builds by dropping variable locations a
debugger user will not see.

After building the debug entity history map we loop through it. For each
variable we look at each entry. If the entry opens a location range which does
not intersect any of the variable's scope's ranges then we mark it for removal.
After visiting the entries for each variable we also mark any clobbering
entries which will no longer be referenced for removal, and then finally erase
the marked entries. This all requires the ability to query the order of
instructions, so before this runs we number them.

Tests:
Added llvm/test/DebugInfo/X86/trim-var-locs.mir

Modified llvm/test/DebugInfo/COFF/register-variables.ll
  Branch folding merges the tails of if.then and if.else into if.else. Each
  block's debug locations point to different scopes, so when they're merged we
  can't use either. Because of this the variable 'c' ends up with a location
  range which doesn't cover any instructions in its scope; with the patch
  applied the location range is dropped and its flag changes to IsOptimizedOut.

Modified llvm/test/DebugInfo/X86/live-debug-variables.ll
Modified llvm/test/DebugInfo/ARM/PR26163.ll
  In both tests an out of scope location is now removed. The remaining location
  covers the entire scope of the variable allowing us to emit it as a single
  location.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D82129
2020-07-22 12:45:21 +01:00
Matt Arsenault bf6bc62d1f GlobalISel: Use Register and update comment physical register syntax 2020-07-21 19:11:57 -04:00
Amara Emerson 791544422a Revert "[AArch64][GlobalISel] Add post-legalize combine for sext_inreg(trunc(sextload)) -> copy"
This reverts commit 64eb3a4915.

It caused miscompiles with optimizations enabled. Reverting while I investigate.
2020-07-21 16:01:18 -07:00
Matt Arsenault 7cd8a0256d GlobalISel: Legalize G_FPOWI 2020-07-21 18:13:04 -04:00
Matt Arsenault 7941dc5041 GlobalISel: Translate llvm.powi intrinsic
There are a few questionable things about this intrinsic and the existing
DAG implementation. For some reason the intrinsic hardcodes the second
operand to be scalar-only i32, and SelectionDAG builder makes a
legalization decision based on whether the operand is constant.
2020-07-21 18:13:04 -04:00
Matt Arsenault f659c44016 CodeGen: Add support for lowering byref attribute 2020-07-21 17:38:15 -04:00
Matt Arsenault 2fe0ea8261 DAG: Handle expanding strict_fsub into fneg and strict_fadd
The AMDGPU handling of f16 vectors is still terrible since they get
scalarized even when the vector operation is legal.

The code is essentially duplicated between the non-strict and
strict cases. Apparently no other expansions are currently trying to do
this. This is mostly because I found the behavior of
getStrictFPOperationAction to be confusing. In the ARM case, it would
expand strict_fsub even though it shouldn't due to the later check. At
that point, the logic required to check for legality was more complex
than just duplicating the 2 instruction expansion.
2020-07-21 16:17:10 -04:00
Guozhi Wei 28759e9fcc [MBP] Use profile count to compute tail dup cost if it is available
Current tail duplication in the machine block placement pass uses block frequency
information in its cost model. But a frequency number has only relative meaning
compared to other basic blocks in the same function: a large frequency number
doesn't mean a block is hot, and a small frequency number doesn't mean it is cold.

To overcome this problem, this patch uses the profile count in the cost model if it's
available, so we can tail-duplicate really hot basic blocks.

Differential Revision: https://reviews.llvm.org/D83265
2020-07-21 11:18:06 -07:00
David Blaikie 38fbba4cb8 DebugInfo: Move getMD5AsBytes from DwarfUnit to DwarfDebug
It wasn't using any state from DwarfUnit anyway.
2020-07-20 19:21:39 -07:00
Matt Arsenault 1ef3ed0eb4 GlobalISel: Rewrite getLCMType
Try to make the behavior more consistent with getGCDType, and bias
towards returning something closer to the source type whenever there's
an ambiguity.
2020-07-20 21:06:30 -04:00
Matt Arsenault 12d5bec8c7 GlobalISel: Handle more cases in getGCDType
Try harder to find a canonical unmerge type when trying to cover the
desired target type. Handle finding a compatible unmerge type for two
vectors with different element types. This will return the largest
multiple of the source vector element that will evenly divide the
target vector type.

Also improve the handling when mixing scalars and vectors, and prefer the
source element type as the unmerge target type.
2020-07-20 20:53:35 -04:00
Eli Friedman b8f765a1e1 [AArch64][SVE] Add support for trunc to <vscale x N x i1>.
This isn't a natively supported operation, so convert it to a
mask+compare.

In addition to the operation itself, fix up some surrounding stuff to
make the testcase work: we need concat_vectors on i1 vectors, we need
legalization of i1 vector truncates, and we need to fix up all the
relevant uses of getVectorNumElements().

Differential Revision: https://reviews.llvm.org/D83811
2020-07-20 13:11:02 -07:00
Yuanfang Chen efcb8a1903 [NFC] remove unneeded TargetLoweringObjectFile init after 85c30f3374 2020-07-20 10:43:28 -07:00
Yuanfang Chen 589c646a7e [llc] (almost) remove `--print-machineinstrs`
Its effect could be achieved by
`-stop-after`, `-print-after`, or `-print-after-all`. But a few tests need to
print MIR after ISel, which could not be done with
`-print-after`/`-stop-after` since the ISel pass does not have a command-line name.
That's the reason `--print-machineinstrs` is downgraded to
`--print-after-isel` in this patch. `--print-after-isel` could be
removed after we switch to the new pass manager, since the ISel pass would then have a
command-line name usable with `print-after` or equivalent switches.

The motivation of this patch is to reduce tests' dependency on a
would-be-deprecated feature.

Reviewed By: arsenm, dsanders

Differential Revision: https://reviews.llvm.org/D83275
2020-07-20 10:43:28 -07:00
Alok Kumar Sharma 2d10258a31 [DebugInfo] Support for DW_AT_associated and DW_AT_allocated.
Summary:
This support is needed for Fortran array variables with the pointer/allocatable
attribute. It enables the debugger to identify whether the variable
is currently allocated/associated.

  for pointer array (before allocation/association)
  without DW_AT_associated

(gdb) pt ptr
type = integer (140737345375288:140737354129776)
(gdb) p ptr
value requires 35017956 bytes, which is more than max-value-size

  with DW_AT_associated

(gdb) pt ptr
type = integer (:)
(gdb) p ptr
$1 = <not associated>

  for allocatable array (before allocation)

  without DW_AT_allocated

(gdb) pt arr
type = integer (140737345375288:140737354129776)
(gdb) p arr
value requires 35017956 bytes, which is more than max-value-size

  with DW_AT_allocated

(gdb) pt arr
type = integer, allocatable (:)
(gdb) p arr
$1 = <not allocated>

    Testing
- unit test cases added
- check-llvm
- check-debuginfo

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D83544
2020-07-20 19:54:35 +05:30
Petar Avramovic 6a1030aa0e AMDGPU/GlobalISel: Legalize s16->s64 G_FPEXT
Legalize using narrowScalar as s16->s32 G_FPEXT
followed by s32->s64 G_FPEXT.

Differential Revision: https://reviews.llvm.org/D84030
2020-07-20 16:12:19 +02:00
Matt Arsenault 5cbd4e415e GlobalISel: Don't handle widenScalar for vector G_INSERT
This handling didn't make any sense for vectors.
2020-07-20 10:06:18 -04:00
Matt Arsenault a679f27e98 GlobalISel: Consistently get TII from MIRBuilder 2020-07-20 10:06:18 -04:00
Petar Avramovic ba938f6388 AMDGPU/GlobalISel: Legalize s16->s64 G_FPTOSI/G_FPTOUI
Add narrowScalarFor action.
Add narrow scalar for typeIndex == 0 for G_FPTOSI/G_FPTOUI.
Legalize using narrowScalarFor as s16->s32 G_FPTOSI/G_FPTOUI
followed by s32->s64 G_SEXT/G_ZEXT.

Differential Revision: https://reviews.llvm.org/D84010
2020-07-20 11:06:11 +02:00
Evgeny Leviant 24089928be [CodeGen][TargetPassConfig] Add TargetTransformInfo pass correctly
The patch adds the TTI pass directly, enforcing its execution with a correctly set
TargetTransformInfo.

Differential revision: https://reviews.llvm.org/D84047
2020-07-18 14:11:40 +03:00
Aditya Nandakumar 63c081e73d [GISel] Add support for CSEing SrcOps which are immediates
https://reviews.llvm.org/D84072

Add G_EXTRACT to CSEConfigFull and add unit test as well.
2020-07-17 16:04:24 -07:00
Sam Tebbs 6c348e4067 [HWLoops] Stop converting to a while loop when it would be unsafe to
There were cases where a do-while loop would be converted to a while
loop before finding out that it would be unsafe to expand the SCEV in
this situation and then bailing out of hardware loop conversion.

This patch checks if it would be unsafe to expand the SCEV and if so stops converting the do-while into a while, allowing conversion to a hardware loop.

Differential Revision: https://reviews.llvm.org/D83953
2020-07-17 11:47:08 +01:00
Jay Foad 62fd7f767c [MachineScheduler] Fix the TopDepth/BotHeightReduce latency heuristics
tryLatency compares two sched candidates. For the top zone it prefers
the one with lesser depth, but only if that depth is greater than the
total latency of the instructions we've already scheduled -- otherwise
its latency would be hidden and there would be no stall.

Unfortunately it only tests the depth of one of the candidates. This can
lead to situations where the TopDepthReduce heuristic does not kick in,
but a lower priority heuristic chooses the other candidate, whose depth
*is* greater than the already scheduled latency, which causes a stall.

The fix is to apply the heuristic if the depth of *either* candidate is
greater than the already scheduled latency.

All this also applies to the BotHeightReduce heuristic in the bottom
zone.

Differential Revision: https://reviews.llvm.org/D72392
2020-07-17 11:02:13 +01:00
Florian Hahn e297006d6f [ScheduleDAG] Move DBG_VALUEs after first term forward.
MBBs are not allowed to have non-terminator instructions after the first
terminator. Currently in some cases (see the modified test),
EmitSchedule can add DBG_VALUEs after the last terminator, for example
when referring a debug value that gets folded into a TCRETURN
instruction on ARM.

This patch updates EmitSchedule to move inserted DBG_VALUEs just before
the first terminator. I am not sure if there are terminators that produce
values that can in turn be used by a DBG_VALUE. In that case, moving the
DBG_VALUE might result in referencing an undefined register. But in any
case, it seems like there is currently no way to insert a proper DBG_VALUE
for such registers anyway.

Alternatively it might make sense to just remove those extra DBG_VALUES.

I am not too familiar with the details of debug info in the backend and
would appreciate any suggestions on how to address the issue in the best
possible way.

Reviewers: vsk, aprantl, jpaquette, efriedma, paquette

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D83561
2020-07-17 10:27:43 +01:00
Igor Kudrin f76a0cd97a [DebugInfo] Fix a misleading usage of DWARF forms with DIEExpr. NFCI.
For now, DIEExpr is used only in two places:

 1) in the debug info library unit test suite to emit
    a DW_AT_str_offsets_base attribute with the DW_FORM_sec_offset
    form, see dwarfgen::DIE::addStrOffsetsBaseAttribute();

 2) in DwarfCompileUnit::addLocationAttribute() to generate the location
    attribute for a TLS variable.

The latter case used an incorrect DWARF form of DW_FORM_udata, which
implies storing an uleb128 value, not a 4/8 byte constant. The generated
result was as expected because DIEExpr::SizeOf() did not handle the used
form, but returned the size of the code pointer by default.

The patch fixes the issue by using more appropriate DWARF forms for
the problematic case and making DIEExpr::SizeOf() more straightforward.

Differential Revision: https://reviews.llvm.org/D83958
2020-07-17 13:49:27 +07:00
Denis Antrushin e04fe9aefd [Statepoint] Fix bug found by sanitizer.
Statepoint has no static operands, so it cannot be verified
against MCInstrDesc. Revert the NumDefs change introduced by ef658ebd62.
2020-07-16 23:06:53 +03:00
Nadav Rotem a394aa1b97 [LiveVariables] Replace std::vector with SmallVector.
Replace std::vector with SmallVector to reduce the number of mallocs.
This method is frequently executed, and the number of elements in the
vector is typically small.

https://reviews.llvm.org/D83920
2020-07-16 11:39:54 -07:00
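A minimal sketch of the container swap described above; the element type and inline capacity are illustrative, not the ones used in LiveVariables.

```
#include "llvm/ADT/SmallVector.h"

#include <vector>

// Before: growth beyond the initial capacity always goes through malloc.
std::vector<unsigned> DefRegsOld;

// After: the first few elements live inline, so the common small case
// never touches the heap (the inline capacity of 4 is illustrative).
llvm::SmallVector<unsigned, 4> DefRegsNew;
```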
Matt Arsenault 9d3e56e2ee DAG: Try scalarizing when expanding saturating add/sub
In an upcoming AMDGPU patch, the scalar cases will be legal and vector
ops should be scalarized, rather than producing a long sequence of
vector ops which will also need to be scalarized.

Use a lazy heuristic that seems to work and improves the thumb2 MVE
test.
2020-07-16 14:05:16 -04:00
Denis Antrushin ef658ebd62 MIR Statepoint refactoring. Part 1: Basic MI level changes.
Basic support for variadic-def MIR Statepoint:
- Change TableGen STATEPOINT description to variadic out list
  (For self-documentation purpose; by itself it does not affect
  code generation in any way).
- Update StatepointOpers helper class to handle variadic defs.
- Update MachineVerifier to properly handle them, too.

With this change, the new Statepoint instruction can be passed through
the backend (excluding ISel) without errors.

Full change set is available at D81603.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D81645
2020-07-17 00:57:21 +07:00
Matt Arsenault 023883a834 IR: Rename Argument::hasPassPointeeByValueAttr to prepare for byref
When the byref attribute is added, there will need to be two similar
functions for the existing cases which have an associate value copy,
and byref which does not. Most, but not all of the existing uses will
use the existing version.

The associated size function added by D82679 also needs to
contextually differ, and will help eliminate a few places still
relying on pointee element types.
2020-07-16 13:50:49 -04:00
Petar Avramovic 6850033ca6 AMDGPU/GlobalISel: Legalize s64->s16 G_SITOFP/G_UITOFP
Add widenScalar for TypeIdx == 0 for G_SITOFP/G_UITOFP.
Legalize, using widenScalar, as s64->s32 G_SITOFP/G_UITOFP
followed by s32->s16 G_FPTRUNC.

Differential Revision: https://reviews.llvm.org/D83880
2020-07-16 16:31:57 +02:00
James Y Knight 60433c63ac Remove TwoAddressInstructionPass::sink3AddrInstruction.
This function has a bug which will incorrectly reschedule instructions
after an INLINEASM_BR (which can branch). (The bug may also allow
scheduling past a throwing-CALL, I'm not certain.)

I could fix that bug, but, as the removed FIXME notes, it's better to
attempt rescheduling before converting to 3-addr form, as that may
remove the need to convert in the first place. In fact, the code to do
such reordering was added to this pass only a few months later, in
2011, via the addition of the function rescheduleMIBelowKill. That
code does not contain the same bug.

The removal of the sink3AddrInstruction function is not a no-op: in
some cases it would move an instruction post-conversion, when
rescheduleMIBelowKill would not move the instruction pre-conversion.
However, this does not appear to be important: the machine instruction
scheduler can reorder the after-conversion instructions, in any case.

This patch fixes a kernel panic in 4.4 LTS x86_64 Linux kernels when
built with clang after 4b0aa5724f.

Link: https://github.com/ClangBuiltLinux/linux/issues/1085

Differential Revision: https://reviews.llvm.org/D83708
2020-07-16 10:02:52 -04:00
Kerry McLaughlin 2762da0a16 [SVE][CodeGen] Legalisation of masked loads and stores
Summary:
This patch modifies IncrementMemoryAddress to use a vscale
when calculating the new address if the data type is scalable.

Also adds TableGen patterns which match an extract_subvector
of a legal predicate type with zip1/zip2 instructions.

Reviewers: sdesmalen, efriedma, david-arm

Reviewed By: efriedma, david-arm

Subscribers: tschuett, hiraditya, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83137
2020-07-16 10:55:45 +01:00
Quentin Colombet 294be6b5d3 [CalcSpillWeights] Propagate the fact that a live-interval is not spillable
When we calculate the weight of a live-interval, add some code to
check if the original live-interval was marked as not spillable and,
if so, propagate that information down to the new interval.

Previously we would just recompute a weight for the new interval;
thus, we could in theory spill live-intervals marked as not
spillable simply by splitting them. That goes against the spirit of
a non-spillable live-interval.

E.g., previously we could do:
v1 =  // v1 must not be spilled
...
= v1

Split:
v1 = // v1 must not be spilled
...
v2 = v1 // v2 can be spilled
...
v3 = v2 // v3 can be spilled
= v3

There's no test case for that one as we would need to split a
non-spillable live-interval without using LiveRangeEdit to see this
happening.
RegAlloc inserts non-spillable intervals only as part of the spilling
mechanism, thus at this point the intervals are not splittable anymore.
On top of that, RegAlloc uses the LiveRangeEdit API, which already
properly propagates that information.

In other words, this could only happen if a target was to mark
a live-interval as not spillable before register allocation and
split it without using LRE, e.g., through
LiveIntervals::splitSeparateComponent.
2020-07-15 17:57:36 -07:00
Hiroshi Yamauchi f233b92f92 [PGO][PGSO] Add profile guided size optimization to LegalizeDAG.
Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83333
2020-07-15 10:03:38 -07:00
Cameron McInally ae51a70030 [Legalize] Hoist invariant condition in ExpandVectorBuildThroughStack(...)
The operands of a BUILD_VECTOR must all have the same type, so we can hoist this invariant condition out of the loop.

Differential Revision: https://reviews.llvm.org/D83882
2020-07-15 11:05:20 -05:00
Tim Northover 37b96d51d0 CodeGenPrep: remove AssertingVH references before deleting dead instructions.
CodeGenPrepare keeps fairly close track of various instructions it's
seen, particularly GEPs, in maps and vectors. However, sometimes those
instructions become dead and get removed while it's still executing.
This triggers AssertingVH references to them in an asserts build and
could lead to miscompiles in a release build (I've only seen a later
segfault though).

So this patch adds a callback to
RecursivelyDeleteTriviallyDeadInstructions which can make sure the
instruction about to be deleted is removed from CodeGenPrepare's data
structures.
2020-07-15 15:19:21 +01:00
Tim Northover 5165b2b5fd AArch64+ARM: make LLVM consider system registers volatile.
Some of the system registers readable on AArch64 and ARM platforms
return different values with each read (for example a timer counter),
these shouldn't be hoisted outside loops or otherwise interfered with,
but the normal @llvm.read_register intrinsic is only considered to read
memory.

This introduces a separate @llvm.read_volatile_register intrinsic and
maps all system-registers on ARM platforms to use it for the
__builtin_arm_rsr calls. Registers declared with asm("r9") or similar
are unaffected.
2020-07-15 09:47:36 +01:00
Roger Ferrer Ibanez 14bc5e149d [DAGCombiner] Rebuild (setcc x, y, ==) from (xor (xor x, y), 1)
The existing code already considered this case. Unfortunately a typo in
the condition prevents it from triggering. Also the existing code, had
it run, forgot to do the folding.

This fixes PR42876.

Differential Revision: https://reviews.llvm.org/D65802
2020-07-15 07:34:22 +00:00
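A small C++ analogue of the folded pattern: for boolean values, (xor (xor x, y), 1) is simply x == y, which is what the rebuilt setcc expresses.

```
// Both functions compute the same boolean; the first spells out the
// (xor (xor x, y), 1) shape that the combine now folds back into a setcc.
bool viaXor(bool X, bool Y) { return (X ^ Y) ^ true; }
bool viaSetcc(bool X, bool Y) { return X == Y; }
```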
Krzysztof Pszeniczny c3e6555616 Call Frame Information (CFI) Handling for Basic Block Sections
This patch handles CFI with basic block sections, which unlike DebugInfo does
not support ranges. The DWARF standard explicitly requires emitting separate
CFI Frame Descriptor Entries for each contiguous fragment of a function. Thus,
the CFI information for all callee-saved registers (possibly including the
frame pointer, if necessary) has to be emitted along with redefining the
Call Frame Address (CFA), viz. where the current frame starts.

CFI directives are emitted in FDE’s in the object file with a low_pc, high_pc
specification. So, a single FDE must point to a contiguous code region unlike
debug info which has the support for ranges. This is what complicates CFI for
basic block sections.

Now, what happens when we start placing individual basic blocks in unique
sections:

* Basic block sections allow the linker to randomly reorder basic blocks in the
address space such that a given basic block can become non-contiguous with the
original function.
* The different basic block sections can no longer share the cfi_startproc and
cfi_endproc directives. So, each basic block section should emit this
independently.
* Each (cfi_startproc, cfi_endproc) directive will result in a new FDE that
caters to that basic block section.
* Now, this basic block section needs to duplicate the information from the
entry block to compute the CFA as it is an independent entity. It cannot refer
to the FDE of the original function and hence must duplicate all the stuff that
is needed to compute the CFA on its own.
* We are working on a de-duplication patch that can share common information in
FDEs in a CIE (Common Information Entry) and we will present this as a follow up
patch. This can significantly reduce the duplication overhead and is
particularly useful when several basic block sections are created.
* The CFI directives are emitted similarly for registers that are pushed onto
the stack, like callee saved registers in the prologue. There are cfi
directives that emit how to retrieve the value of the register at that point
when the push happened. This has to be duplicated too in a basic block that is
floated as a separate section.

Differential Revision: https://reviews.llvm.org/D79978
2020-07-14 12:54:12 -07:00
Logan Smith a19461d9e1 [NFC] Add 'override' keyword where missing in include/ and lib/.
This fixes warnings raised by Clang's new -Wsuggest-override, in preparation for enabling that warning in the LLVM build. This patch also removes the virtual keyword where redundant, but only in places where doing so improves consistency within a given file. It also removes a couple unnecessary virtual destructor declarations in derived classes where the destructor inherited from the base class is already virtual.

Differential Revision: https://reviews.llvm.org/D83709
2020-07-14 09:47:29 -07:00
Paul Walker 6e198aae1d [SelectionDAG] Prevent warnings when extracting fixed length vector from scalable.
ComputeNumSignBits and computeKnownBits both trigger "Scalable flag
may be dropped" warnings when a fixed length vector is extracted
from a scalable vector.  This patch assumes nothing about the
demanded elements thus matching the behaviour when extracting a
scalable vector from a scalable vector.

Differential Revision: https://reviews.llvm.org/D83642
2020-07-14 11:12:56 +00:00
Sam Elliott 1d15bbb9d9 Revert "[RISCV] Avoid Splitting MBB in RISCVExpandPseudo"
This reverts commit 97106f9d80.

This is based on feedback from https://reviews.llvm.org/D82988#2147105
2020-07-14 11:15:01 +01:00
David Sherwood 3b8eaf26db [SVE][CodeGen] Fix implicit TypeSize->uint64_t conversion in TransformFPLoadStorePair
In DAGCombiner::TransformFPLoadStorePair we were dropping the scalable
property of TypeSize when trying to create an integer type of equivalent
size. In fact, this optimisation makes no sense for scalable types
since we don't know the size at compile time. I have changed the code
to bail out when encountering scalable type sizes.

I've added a test to

  llvm/test/CodeGen/AArch64/sve-fp.ll

that exercises this code path. The test already emits an error if it
encounters warnings due to implicit TypeSize->uint64_t conversions.

Differential Revision: https://reviews.llvm.org/D83572
2020-07-14 08:07:30 +01:00
Amara Emerson 64eb3a4915 [AArch64][GlobalISel] Add post-legalize combine for sext_inreg(trunc(sextload)) -> copy
On AArch64 we generate redundant G_SEXTs or G_SEXT_INREGs because of this.

Differential Revision: https://reviews.llvm.org/D81993
2020-07-13 20:27:45 -07:00
zuojian lin fefe6a6642 Fix undefined behavior in DWARF emission
Caused by uninitialized load of llvm::DwarfDebug::PrevCU:
llvm::DwarfCompileUnit::addRange () at ../lib/CodeGen/AsmPrinter/DwarfCompileUnit.cpp:276
llvm::DwarfDebug::endFunctionImpl () at ../lib/CodeGen/AsmPrinter/DwarfDebug.cpp:1586
llvm::DebugHandlerBase::endFunction () at ../lib/CodeGen/AsmPrinter/DebugHandlerBase.cpp:319
llvm::AsmPrinter::EmitFunctionBody () at ../lib/CodeGen/AsmPrinter/AsmPrinter.cpp:1230
llvm::ARMAsmPrinter::runOnMachineFunction () at ../lib/Target/ARM/ARMAsmPrinter.cpp:161

Most of the DebugInfo tests fail under `LLVM_LIT_ARGS:STRING=-sv --vg` prior to this fix, and pass with the fix applied.

Reviewed By: aprantl, dblaikie

Differential Revision: https://reviews.llvm.org/D81631
2020-07-13 18:32:36 -07:00
Matt Arsenault 23ec773d19 GlobalISel: Implement fewerElementsVector for saturating add/sub 2020-07-13 14:46:40 -04:00
Matt Arsenault 6a8c11a11f GlobalISel: Implement widenScalar for saturating add/sub
Add a placeholder legality rule for AMDGPU until the rest of the
actions are handled.
2020-07-13 14:46:40 -04:00
Sanjay Patel 8779b11410 [DAGCombiner] rot i16 X, 8 --> bswap X
We have this generic transform in IR (instcombine),
but as shown in PR41098:
http://bugs.llvm.org/PR41098
...the pattern may emerge in codegen too.

x86 has a potential refinement/reversal opportunity here,
but that should come later or needs a target hook to
avoid the transform. Converting to bswap is the more
specific form, so we should use it if it is available.
2020-07-13 12:01:53 -04:00
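A self-contained example of source code that produces the 16-bit rotate-by-8 pattern; after this combine it can be lowered as a byte swap (for example rev16 on AArch64), subject to the target's own lowering.

```
#include <cstdint>

// Rotating a 16-bit value by 8 swaps its two bytes; the combine
// canonicalizes this rotate into a bswap node.
uint16_t swapBytes(uint16_t X) {
  return static_cast<uint16_t>((X << 8) | (X >> 8));
}
```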
Sanjay Patel 2df46a5743 [DAGCombiner] allow load/store merging if pairs can be rotated into place
This carves out an exception for a pair of consecutive loads that are
reversed from the consecutive order of a pair of stores. All of the
existing profitability/legality checks for the memops remain between
the 2 altered hunks of code.

This should give us the same x86 base-case asm that gcc gets in
PR41098 and PR44895:
http://bugs.llvm.org/PR41098
http://bugs.llvm.org/PR44895

I think we are missing a potential subsequent conversion to use "movbe"
if the target supports that. That might be similar to what AArch64
would use to get "rev16".

Differential Revision: https://reviews.llvm.org/D83567
2020-07-13 08:57:00 -04:00
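A hedged sketch of the kind of source pattern this exception targets: two adjacent halves loaded in one order and stored in the reversed order, which can now be merged into one wide memory operation plus a rotate (or movbe/rev16 where available).

```
#include <cstdint>

// The two 16-bit halves are stored in the reverse of the order in which
// they were loaded; the pair can be merged into a single 32-bit
// load/store with a rotate instead of two narrow memory operations.
void storeSwappedHalves(uint16_t *Dst, const uint16_t *Src) {
  uint16_t Lo = Src[0];
  uint16_t Hi = Src[1];
  Dst[0] = Hi;
  Dst[1] = Lo;
}
```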
Sanjay Patel f1bbf3acb4 Revert "[DAGCombiner] allow load/store merging if pairs can be rotated into place"
This reverts commit 591a3af5c7.
The commit message was cut off and failed to include the review citation.
2020-07-13 08:55:29 -04:00
Sanjay Patel 591a3af5c7 [DAGCombiner] allow load/store merging if pairs can be rotated into place
This carves out an exception for a pair of consecutive loads that are
reversed from the consecutive order of a pair of stores. All of the
existing profitability/legality checks for the memops remain between
the 2 altered hunks of code.

This should give us the same x86 base-case asm that gcc gets in
PR41098 and PR44895:
http://bugs.llvm.org/PR41098
http://bugs.llvm.org/PR44895

I think we are missing a potential subsequent conversion to use "movbe"
if the target supports that. That might be similar to what AArch64
would use to get "rev16".

Differential Revision:
2020-07-13 08:53:06 -04:00
Kerry McLaughlin afcc9a81d2 [SVE][Codegen] Add a helper function for pointer increment logic
Summary:
Helper used when splitting load & store operations to calculate
the pointer + offset for the high half of the split

Reviewers: efriedma, sdesmalen, david-arm

Reviewed By: efriedma

Subscribers: tschuett, hiraditya, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83577
2020-07-13 10:53:40 +01:00
Petar Avramovic fd85b40aee [GlobalISel][InlineAsm] Fix buildCopy for inputs
Check that input size matches size of destination reg class.
Attempt to extend input size when needed.

Differential Revision: https://reviews.llvm.org/D83384
2020-07-13 10:52:33 +02:00
Sanjay Patel 39009a8245 [DAGCombiner] tighten fast-math constraints for fma fold
fadd (fma A, B, (fmul C, D)), E --> fma A, B, (fma C, D, E)

This is only allowed when "reassoc" is present on the fadd.

As discussed in D80801, this transform goes beyond
what is allowed by "contract" FMF (-ffp-contract=fast).
That is because we are fusing the trailing add of 'E' with a
multiply, but without "reassoc", the code mandates that the
products A*B and C*D are added together before adding in 'E'.

I've added this example to the LangRef to try to clarify the
meaning of "contract". If that seems reasonable, we should
probably do something similar for the clang docs because
there does not appear to be any formal spec for the behavior
of -ffp-contract=fast.

Differential Revision: https://reviews.llvm.org/D82499
2020-07-12 08:51:49 -04:00
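A concrete source-level shape of the transform, as a hedged example rather than a test from the patch: only with reassoc-style fast-math on the final add may the whole expression become fma(A, B, fma(C, D, E)).

```
// Under plain contraction the products A*B and C*D must be summed before
// adding E; with reassoc the outer add can be folded into a nested fma.
double fusedCandidate(double A, double B, double C, double D, double E) {
  return (A * B + C * D) + E; // --> fma(A, B, fma(C, D, E)) under reassoc
}
```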
Alexandre Ganea b71499ac9e Revert "Re-land [CodeView] Add full repro to LF_BUILDINFO record"
This reverts commit add59ecb34 and 41d2813a5f.
2020-07-10 19:46:16 -04:00
Alexandre Ganea add59ecb34 Re-land [CodeView] Add full repro to LF_BUILDINFO record
This patch adds some missing information to the LF_BUILDINFO which allows for rebuilding an .OBJ without any external dependency but the .OBJ itself (other than the compiler executable).

Some tools need this information to reproduce a build without any knowledge of the build system. The LF_BUILDINFO therefore stores a full path to the compiler, the PWD (which is the CWD at program startup), a relative or absolute path to the TU, and the full CC1 command line. The command line needs to be freestanding (not depend on any environment variable). In the same way, MSVC doesn't store the provided command-line, but an expanded version (somehow their equivalent of CC1) which is also freestanding.

For more information see PR36198 and D43002.

Differential Revision: https://reviews.llvm.org/D80833
2020-07-10 13:59:28 -04:00
Sanjay Patel 02fec9d2a5 [DAGCombiner] move/rename variables for readability; NFC 2020-07-10 11:28:51 -04:00
David Sherwood da731894a2 [CodeGen] Replace calls to getVectorNumElements() in DAGTypeLegalizer::SetSplitVector
In DAGTypeLegalizer::SetSplitVector I have changed calls in the assert
from getVectorNumElements() to getVectorElementCount(), since this
code path works for both fixed and scalable vectors.

This fixes up one warning in the test:

  sve-sext-zext.ll

Differential Revision: https://reviews.llvm.org/D83196
2020-07-10 08:29:17 +01:00
David Sherwood 229dfb4728 [CodeGen] Replace calls to getVectorNumElements() in SelectionDAG::SplitVector
This patch replaces some invalid calls to getVectorNumElements() with calls
to getVectorMinNumElements() instead, since the code paths changed in this
patch work for both fixed and scalable vector types.

Fixes warnings in this test:

  sve-sext-zext.ll

Differential Revision: https://reviews.llvm.org/D83203
2020-07-10 08:11:30 +01:00
Sanjay Patel a46cf40240 [DAGCombiner] convert if-chain in store merging to switch; NFC 2020-07-09 17:20:04 -04:00
Sanjay Patel b476e6a642 [DAGCombiner] add helper function for store merging of loaded values; NFC 2020-07-09 17:20:04 -04:00
Sanjay Patel f98a602c2e [DAGCombiner] add helper function for store merging of extracts; NFC 2020-07-09 17:20:03 -04:00
Sanjay Patel 8d74cb01b7 [DAGCombiner] add helper function for store merging of constants; NFC 2020-07-09 17:20:03 -04:00
Sanjay Patel 6890e2a17b [DAGCombiner] add helper function to manage list of consecutive stores; NFC 2020-07-09 17:20:03 -04:00
Christopher Tetreault ff5b9a7b3b [SVE] Remove calls to VectorType::getNumElements from CodeGen
Reviewers: efriedma, fpetrogalli, sdesmalen, RKSimon, arsenm

Reviewed By: RKSimon

Subscribers: wdng, tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82210
2020-07-09 12:43:36 -07:00
Sam Elliott 97106f9d80 [RISCV] Avoid Splitting MBB in RISCVExpandPseudo
Since the `RISCVExpandPseudo` pass has been split from
`RISCVExpandAtomicPseudo` pass, it would be nice to run the former as
early as possible (The latter has to be run as late as possible to
ensure correctness). Running earlier means we can reschedule these pairs
as we see fit.

Running earlier in the machine pass pipeline is good, but would mean
teaching many more passes about `hasLabelMustBeEmitted`. Splitting the
basic blocks also pessimises possible optimisations because some
optimisations are MBB-local, and others are disabled if the block has
its address taken (which is notionally what `hasLabelMustBeEmitted`
means).

This patch uses a new approach of setting the pre-instruction symbol on
the AUIPC instruction to a temporary symbol and referencing that. This
avoids splitting the basic block, but allows us to reference exactly the
instruction that we need to. Notionally, this approach seems more
correct because we do actually want to address a specific instruction.

This then allows the pass to be moved much earlier in the pass pipeline,
before both scheduling and register allocation. However, to do so we
must leave the MIR in SSA form (by not redefining registers), and so use
a virtual register for the intermediate value. By using this virtual
register, this pass now has to come before register allocation.

Reviewed By: luismarques, asb

Differential Revision: https://reviews.llvm.org/D82988
2020-07-09 13:54:13 +01:00
Lucas Prates fc39a9ca0e [CodeGen] Matching promoted type for 16-bit integer bitcasts from fp16 operand
Summary:
When legalizing a bitcast operation from an fp16 operand to an i16 on a
target that requires both input and output types to be promoted to
32 bits, an assertion can fail when building the new node due to a
mismatch between the operation's result size and the type specified to
the node.

This patch fixes the issue by making sure the bit widths of the types
match for the FP_TO_FP16 node, covering the difference with an extra
ANYEXTEND operation.

Reviewers: ostannard, efriedma, pirama, jmolloy, plotfi

Reviewed By: efriedma

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82552
2020-07-09 09:46:17 +01:00
serge-sans-paille a60c31fd62 Fix return status of AtomicExpandPass
Correctly reflect change in the return status.

Differential Revision: https://reviews.llvm.org/D83457
2020-07-09 10:27:48 +02:00
Qiu Chaofan 4254ed5c32 [Legalizer] Fix wrong operand in split vector helper
This fixes a typo introduced in D69275, which may cause an unexpected
segmentation fault in getNode.

Reviewed By: uweigand

Differential Revision: https://reviews.llvm.org/D83376
2020-07-09 09:57:29 +08:00
Matt Arsenault 18bd821f02 DAG: Remove redundant finalizeLowering call
9cac4e6d1403554b06ec2fc9d834087b1234b695/D32628 intended to eliminate
this, and move all isel pseudo expansion to FinalizeISel. This was a
bad rebase or something, and failed to actually delete this call.

GlobalISel also has a redundant call of finalizeLowering. However, it
requires more work to remove it since it currently triggers a lot of
verifier errors in tests.
2020-07-08 18:48:20 -04:00
Matt Arsenault 2ec5fc0c61 DAG: Remove redundant handling of reg fixups
It looks like 9cac4e6d14 accidentally
added a second copy of this from a bad rebase or something. This
second copy was added, and the finalizeLowering call was not deleted
as intended.
2020-07-08 18:32:43 -04:00
Matt Arsenault 74a148ad39 GlobalISel: Verify G_BITCAST changes the type
Updated the AArch64 tests the best I could with my vague, inferred
understanding of AArch64 register banks. As far as I can tell, there
is only one 32-bit/64-bit type which will use the gpr register bank,
so we have to use the fpr bank for the other operand.
2020-07-08 17:16:27 -04:00
Sanjay Patel 1265eb2d5f [DAGCombiner] clean up in mergeConsecutiveStores(); NFC 2020-07-08 14:48:05 -04:00
Sanjay Patel 12c2271e53 [DAGCombiner] fix code comment and improve readability; NFC 2020-07-08 14:48:05 -04:00
Sanjay Patel 683a7f7025 [DAGCombiner] fix function-name formatting; NFC 2020-07-08 12:49:59 -04:00
Sanjay Patel 39329d5724 [DAGCombiner] add enum for store source value; NFC
This removes existing code duplication and allows us to
assert that we are handling the expected cases.

We have a list of outstanding bugs that could benefit by
handling truncated source values, so that's a possible
addition going forward.
2020-07-08 12:49:59 -04:00
Evgeny Leviant a074984250 [MIR] Speedup parsing of function with large number of basic blocks
The patch eliminates the string length calculation when lexing a token. The speedup can be up to
1000x.

Differential revision: https://reviews.llvm.org/D83389
2020-07-08 18:50:00 +03:00
Paul Walker bb35f0fd89 [SelectionDAG] Fix incorrect offset when expanding CONCAT_VECTORS.
ExpandVectorBuildThroughStack is also used for CONCAT_VECTORS.
However, when calculating the offsets for each of the operands we
incorrectly use the element size rather than the actual operand size, and thus
the stores overlap.

Differential Revision: https://reviews.llvm.org/D83303
2020-07-08 15:39:25 +00:00
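A schematic of the offset bookkeeping bug, using hypothetical names rather than the actual SelectionDAG code: the running offset must advance by each operand's full store size, not by the element size.

```
#include <cstdint>
#include <vector>

struct OperandInfo {
  uint64_t StoreSizeInBytes;   // full size of the (sub)vector operand
  uint64_t ElementSizeInBytes; // size of a single element
};

// Hypothetical sketch of laying CONCAT_VECTORS operands out in a stack slot.
uint64_t layOutOperands(const std::vector<OperandInfo> &Ops) {
  uint64_t Offset = 0;
  for (const OperandInfo &Op : Ops) {
    // ... emit a store of Op at BaseAddr + Offset ...
    Offset += Op.StoreSizeInBytes; // the bug advanced by ElementSizeInBytes
  }
  return Offset; // total bytes written; operands no longer overlap
}
```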
Ties Stuij 26a22478cd [CodeGen] Don't combine extract + concat vectors with non-legal types
Summary:
The following combine currently breaks in the DAGCombiner:

```
extract_vector_elt (concat_vectors v4i16:a, v4i16:b), x
   -> extract_vector_elt a, x
```

This happens because after we have combined these nodes we have inserted nodes
that use individual instances of the vector element type (in the above example,
i16). However, this isn't a legal type on all backends, and when the combining pass calls
the legalizer it breaks as it expects types to already be legal. The type legalizer has
already been run, and running it again would make a mess of the nodes.

In the example code at least, the generated code is still efficient after the change.

Reviewers: miyuki, arsenm, dmgreen, lebedev.ri

Reviewed By: miyuki, lebedev.ri

Subscribers: lebedev.ri, wdng, hiraditya, steven.zhang, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83231
2020-07-08 15:29:57 +01:00
Petar Avramovic 419c92a749 [GlobalISel][InlineAsm] Fix matching input constraints to mem operand
Mark matching input constraint to mem operand as not supported.

Differential Revision: https://reviews.llvm.org/D83235
2020-07-08 12:32:17 +02:00
Jeremy Morse b9d977b0ca [DWARF] Add cutoff guarding quadratic validThroughout behaviour
Occasionally we see absolutely massive basic blocks, typically in global
constructors that are vulnerable to heavy inlining. When these blocks are
dense with DBG_VALUE instructions, we can hit near quadratic complexity in
DwarfDebug's validThroughout function. The problem is caused by:

  * validThroughout having to step through all instructions in the block to
    examine their lexical scope,
  * and a high proportion of instructions in that block being DBG_VALUEs
    for a unique variable fragment,

Leading to us stepping through every instruction in the block, for (nearly)
each instruction in the block.

By adding this guard, we force variables in large blocks to use a location
list rather than a single-location expression, as shown in the added test.
This shouldn't change the meaning of the output DWARF at all: instead we
use a less efficient DWARF encoding to avoid a poor-performance code path.

Differential Revision: https://reviews.llvm.org/D83236
2020-07-08 10:30:09 +01:00
David Sherwood 9e66e9c30a [CodeGen] Fix wrong use of getVectorNumElements() in DAGTypeLegalizer::SplitVecRes_ExtendOp
In DAGTypeLegalizer::SplitVecRes_ExtendOp I have replaced an invalid
call to getVectorNumElements() with a call to getVectorMinNumElements(),
since the code path works for both fixed and scalable vectors.

This fixes up a warning in the following test:

  sve-sext-zext.ll

Differential Revision: https://reviews.llvm.org/D83197
2020-07-08 09:53:20 +01:00
David Sherwood 5b14f5051f [CodeGen] Fix wrong use of getVectorNumElements in PromoteIntRes_EXTRACT_SUBVECTOR
Calling getVectorNumElements() is not safe for scalable vectors and we
should normally use getVectorElementCount() instead. However, for the
code changed in this patch I decided to simply move the instantiation of
the variable 'OutNumElems' lower down to the place where only fixed-width
vectors are used, and hence it is safe to call getVectorNumElements().

Fixes up one warning in this test:

  sve-sext-zext.ll

Differential Revision: https://reviews.llvm.org/D83195
2020-07-08 09:36:34 +01:00
David Sherwood 15aeb805dc [CodeGen] Fix warnings in sve-ld1-addressing-mode-reg-imm.ll
For the GetElementPtr case in function
  AddressingModeMatcher::matchOperationAddr
I've changed the code to use the TypeSize class instead of relying
upon the implicit conversion to a uint64_t. As part of this we now
check for scalable types and if we encounter one we just bail out for
now, as the subsequent optimisations don't currently support them.

This change fixes up all warnings in the following tests:

  llvm/test/CodeGen/AArch64/sve-ld1-addressing-mode-reg-imm.ll
  llvm/test/CodeGen/AArch64/sve-st1-addressing-mode-reg-imm.ll

Differential Revision: https://reviews.llvm.org/D83124
2020-07-08 09:16:00 +01:00
Heejin Ahn 7e6793aa33 [WebAssembly] Generate unreachable after __stack_chk_fail
`__stack_chk_fail` does not return, but `unreachable` was not generated
following `call __stack_chk_fail`. This could generate an invalid binary for
functions with a return type, because `__stack_chk_fail`'s return type is void
while `call __stack_chk_fail` can be the last instruction in a function whose
return type is non-void.
Generating `unreachable` after it makes sure CFGStackify's
`fixEndsAtEndOfFunction` handles it correctly.

Reviewed By: tlively

Differential Revision: https://reviews.llvm.org/D83277
2020-07-08 01:02:05 -07:00
serge-sans-paille edc7da2405 Upgrade TypePromotionTransaction to be able to report changes in CodeGenPrepare
optimizeMemoryInst was reporting no change while still modifying the IR.
Inspect the status of the TypePromotionTransaction to report a more accurate
change status.

Related to https://reviews.llvm.org/D80916

Differential Revision: https://reviews.llvm.org/D81256
2020-07-08 08:35:44 +02:00
Philip Reames 22596e7b2f [Statepoint] Use early return to reduce nesting and clarify comments [NFC] 2020-07-07 16:19:05 -07:00
Philip Reames 9955876d74 [Statepoint] Reduce indentation and change a variable name [NFC] 2020-07-07 16:19:05 -07:00
Matt Arsenault 23157f3bdb GlobalISel: Handle EVT argument lowering correctly
handleAssignments was assuming every argument type is an MVT, and
assignArg would always fail. This fixes one of the hacks in the
current AMDGPU calling convention code that pre-processes the
arguments.
2020-07-07 16:36:14 -04:00
Philip Reames b172cd7812 [Statepoint] Factor out logic for non-stack non-vreg lowering [almost NFC]
This is inspired by D81648.  The basic idea is to have the set of SDValues which are lowered as either constants or direct frame references explicit in one place, and to separate them clearly from the spilling logic.

This is not NFC in that the handling of constants larger than 64 bits has changed.  The old lowering would crash on values which could not be encoded as a sign extended 64 bit value.  The new lowering just spills all constants wider than 64 bits.  We could be consistent about doing the sext(Con64) optimization, but I happen to know that this code path is utterly unexercised in practice, so simple is better for now.
2020-07-07 13:34:28 -07:00
Stanislav Mekhanoshin 7c03872645 LIS: fix handleMove to properly extend main range
handleMoveDown or handleMoveUp cannot properly repair a main
range of a LiveInterval since they only get a LiveRange. There
is a problem if a certain use has moved a few segments away and
there is a hole in the main range in between these two
locations. We may get a SubRange with a very extended Segment
spanning several Segments of the main range and also spanning
that hole. If that happens then we end up with the main range
not covering its SubRange, which is an error.

It might be possible to attempt to fix the main range in place
just between the old and new indexes by extending all of its
Segments in between, but it is unclear whether this logic would be
faster than a straight constructMainRangeFromSubranges, which
itself is pretty cheap since it only contains interval logic. It
would also require a shrinkToUses() call afterwards, which is
probably even more expensive.

In the test, the second move is from 64B to 92B for sub1.
The subrange is correctly fixed:

L000000000000000C [16r,32B:0)[32B,92r:1)  0@16r 1@32B-phi

But the main range has a hole in between 80d and 88r after
updateRange():

%1 [16r,32B:0)[32B,80r:4)[80r,80d:3)[88r,96r:1)[96r,160B:2)

Since the source position is 64B, this segment is not even considered
by updateRange().

Differential Revision: https://reviews.llvm.org/D82916
2020-07-07 11:52:32 -07:00
Kerry McLaughlin cdf2eef613 [SVE][CodeGen] Legalisation of unpredicated store instructions
Summary:
When splitting a store of a scalable type, the new address is
calculated in SplitVecOp_STORE using a vscale and an add instruction.
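
Below is a minimal sketch of the offset computation this implies, with hypothetical
names and parameters (it is not the SplitVecOp_STORE code): the low half of a
scalable vector occupies vscale * (min elements / 2) * element-size bytes, and
vscale is only known at run time, so the upper half's address must be formed
with a multiply by vscale and an add rather than a constant offset.

```
#include <cstdint>

// Hypothetical stand-ins: base address, runtime vscale, minimum element count
// of the low half, and element size in bytes.
static uint64_t upperHalfAddress(uint64_t base, uint64_t vscale,
                                 uint64_t halfMinElts, uint64_t eltBytes) {
  return base + vscale * halfMinElts * eltBytes;
}

int main() {
  // e.g. splitting a <vscale x 4 x i32> store with vscale == 2:
  // the upper half starts 2 * 2 * 4 == 16 bytes past the base.
  return upperHalfAddress(0x1000, 2, 2, 4) == 0x1010 ? 0 : 1;
}
```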

Reviewers: sdesmalen, efriedma, david-arm

Reviewed By: david-arm

Subscribers: tschuett, hiraditya, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83041
2020-07-07 11:47:10 +01:00
Kerry McLaughlin 5e8084beba [SVE][CodeGen] Legalisation of unpredicated load instructions
Summary:
When splitting a load of a scalable type, the new address is
calculated in SplitVecRes_LOAD using a vscale and an add instruction.

This patch also adds a DAG combiner fold to visitADD for vscale:
 - Fold (add (vscale(C0)), (vscale(C1))) to vscale(C0 + C1)
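
As a quick sanity check of the fold above, here is a scalar sketch with vscale
modelled as an arbitrary runtime multiplier (illustrative only, not the DAG
combiner code):

```
#include <cassert>
#include <cstdint>

// vscale * C0 + vscale * C1 == vscale * (C0 + C1) by distributivity, for any
// runtime value of vscale.
int main() {
  for (int64_t vscale : {1, 2, 4, 16})
    assert(vscale * 3 + vscale * 5 == vscale * (3 + 5));
  return 0;
}
```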

Reviewers: sdesmalen, efriedma, david-arm

Reviewed By: david-arm

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82792
2020-07-07 11:05:03 +01:00
David Sherwood 79d34a5a1b [SVE][CodeGen] Fix bug when falling back to DAG ISel
In an earlier commit 584d0d5c17 I
added functionality to allow AArch64 CodeGen support for falling
back to DAG ISel when Global ISel encounters scalable vector
types. However, it seems that we were not falling back early
enough as llvm::getLLTForType was still being invoked for scalable
vector types.

I've added a new fallback function to the call lowering class in
order to catch this problem early enough, rather than wait for
lowerFormalArguments to reject scalable vector types.

Differential Revision: https://reviews.llvm.org/D82524
2020-07-07 09:23:04 +01:00
David Sherwood c061e56e88 [CodeGen] Fix warnings in sve-vector-splat.ll and sve-trunc.ll
This patch fixes all remaining warnings in:

  llvm/test/CodeGen/AArch64/sve-trunc.ll
  llvm/test/CodeGen/AArch64/sve-vector-splat.ll

I hit some warnings related to getCopyToPartsVector. I fixed two
issues:

1. In widenVectorToPartType() we assumed that we'd always be
using BUILD_VECTOR nodes to expand from one vector type to another,
which is incorrect for scalable vector types. I've fixed this for now
by simply bailing out immediately for scalable vectors.
2. In getCopyToPartsVector() I've changed the code to compare
the element counts of different types.

Differential Revision: https://reviews.llvm.org/D83028
2020-07-07 09:21:47 +01:00
Sanjay Patel ea71ba11ab [DAGCombiner] reassociate reciprocal sqrt expression to eliminate FP division
X / (fabs(A) * sqrt(Z)) --> X / sqrt(A*A*Z) --> X * rsqrt(A*A*Z)

In the motivating case from PR46406:
https://bugs.llvm.org/show_bug.cgi?id=46406
...this is restoring the sequence that was originally in the source code.
We extracted a term from within the sqrt because we do not know in
instcombine whether a target will expand a sqrt call.
Note: we could say that the transform in IR should be restricted, but
that would not solve the problem if the source was originally in the
pattern shown here.
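
A numerical sketch of the identity behind the transform, assuming fast-math-style
reassociation is allowed (this only illustrates the math, not the DAGCombiner
implementation):

```
#include <cassert>
#include <cmath>

// X / (fabs(A) * sqrt(Z)) == X / sqrt(A*A*Z) == X * (1 / sqrt(A*A*Z)),
// since fabs(A) * sqrt(Z) == sqrt(A*A) * sqrt(Z) == sqrt(A*A*Z).
int main() {
  double X = 3.0, A = -2.0, Z = 5.0;
  double before = X / (std::fabs(A) * std::sqrt(Z));
  double after = X * (1.0 / std::sqrt(A * A * Z));
  assert(std::fabs(before - after) < 1e-12);
  return 0;
}
```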

This is a gray area for fast-math-flag requirements. I think we should at
least check fast-math-flags on the fdiv and fmul because I view this
transform as 2 pieces: reassociate the fmul operands and form reciprocal
from the fdiv (as with the existing transform). We could argue that the
sqrt also needs FMF, but that was not required before, so we should change
that in a follow-up patch if that seems better.

We don't currently have a way to check that the target will produce a sqrt
or recip estimate without actually creating nodes (the APIs are SDValue
getSqrtEstimate() and SDValue getRecipEstimate()), so we clean up
speculatively created nodes if we are not able to create an estimate.
The x86 test with doubles verifies that we are not changing a test with
no estimate sequence.

Differential Revision: https://reviews.llvm.org/D82716
2020-07-06 19:12:21 -04:00
Yuanfang Chen 1e495e10e6 [NFC] change getLimitedCodeGenPipelineReason to static function 2020-07-06 15:39:27 -07:00
Nicolai Hähnle 76c5cb05a3 DomTree: Remove getChildren() accessor
Summary:
Avoid exposing details about how children are stored. This will enable
subsequent type-erasure changes.

New methods are introduced to cover common access patterns.

Change-Id: Idb5f4b1b9c84e4cc71ddb39bb52a388682f5674f

Reviewers: arsenm, RKSimon, mehdi_amini, courbet

Subscribers: qcolombet, sdardis, wdng, hiraditya, jrtc27, zzheng, atanasyan, asbirlea, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83083
2020-07-06 21:58:11 +02:00
jasonliu 6d3ae365bd [XCOFF][AIX] Give symbol an internal name when desired symbol name contains invalid character(s)
Summary:

When a desired symbol name contains an invalid character that the
system assembler cannot process, we need to emit a .rename
directive in the assembly path in order for that desired symbol name
to appear in the symbol table.

Reviewed By: hubert.reinterpretcast, DiggerLin, daltenty, Xiangling_L

Differential Revision: https://reviews.llvm.org/D82481
2020-07-06 15:49:15 +00:00
Matt Arsenault 521ebc1681 GlobalISel: Move finalizeLowering call later
This matches the DAG behavior where this is called after the loop
checking for calls. The AMDGPU implementation depends on knowing if
there are calls in the function or not, so move this later.

Another problem is finalizeLowering is actually called twice; I was
seeing weird inconsistencies since the first call would produce
unexpected results and the second run would correct them in some
contexts. Since this requires disabling the verifier, and it's useful
to serialize the MIR immediately after selection, FinalizeISel should
probably not be a real pass.
2020-07-06 09:19:40 -04:00
Jay Foad babbeafa00 [TargetLowering] Improve expansion of FSHL/FSHR by non-zero amount
Use a simpler code sequence when the shift amount is known not to be
zero modulo the bit width.
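
A scalar sketch of the simpler sequence, under the stated assumption that the
amount is non-zero modulo the bit width (illustrative only, not the
TargetLowering code):

```
#include <cassert>
#include <cstdint>

// fshl(hi, lo, amt) concatenates hi:lo and takes the high 32 bits after a left
// shift by amt. When amt % 32 != 0, both component shifts are well defined and
// no guarding select/mask is needed.
static uint32_t fshl_nonzero(uint32_t hi, uint32_t lo, unsigned amt) {
  amt %= 32;
  assert(amt != 0 && "simpler sequence assumes a non-zero amount");
  return (hi << amt) | (lo >> (32 - amt)); // 32 - amt is in [1, 31]
}

int main() {
  assert(fshl_nonzero(0x00000001u, 0x80000000u, 1) == 0x00000003u);
  return 0;
}
```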

Nothing much uses this until D77152 changes the translation of fshl and
fshr intrinsics.

Differential Revision: https://reviews.llvm.org/D82540
2020-07-06 12:07:14 +01:00
Jay Foad e7a4a24dc5 [TargetLowering] Improve expansion of ROTL/ROTR
Using a negation instead of a subtraction from a constant can save an
instruction on some targets.
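
For a power-of-two bit width the two expansions are equivalent, since (bw - s)
and -s are congruent modulo bw; a scalar sketch of the idea (not the
TargetLowering code):

```
#include <cassert>
#include <cstdint>

// rotl(x, s) for a 32-bit value, written two ways:
//   subtract-from-constant:  (x << (s & 31)) | (x >> ((32 - s) & 31))
//   negate:                  (x << (s & 31)) | (x >> (-s & 31))
// Both compute the same rotate, and the negate form can save an instruction
// on some targets.
static uint32_t rotl_sub(uint32_t x, uint32_t s) {
  return (x << (s & 31)) | (x >> ((32 - s) & 31));
}
static uint32_t rotl_neg(uint32_t x, uint32_t s) {
  return (x << (s & 31)) | (x >> (-s & 31));
}

int main() {
  for (uint32_t s = 0; s < 64; ++s)
    assert(rotl_sub(0x12345678u, s) == rotl_neg(0x12345678u, s));
  return 0;
}
```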

Nothing much uses this until D77152 changes the translation of fshl and
fshr intrinsics.

Differential Revision: https://reviews.llvm.org/D82539
2020-07-06 12:07:14 +01:00
Craig Topper 76123d338d [DAGCombiner] visitSIGN_EXTEND_INREG should fold sext_vector_inreg(undef) to 0 not undef.
We need to ensure that the sign bits of the result all match
so we can't fold to undef.

Similar to PR46585.

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D83163
2020-07-04 14:35:49 -07:00
Craig Topper 120c5f1057 [DAGCombiner] Don't fold zext_vector_inreg/sext_vector_inreg(undef) to undef. Fold to 0.
zext_vector_inreg needs to produces 0s in the extended bits and
sext_vector_inreg needs to produce upper bits that are all the
same. So we should fold them to a 0 vector instead of undef.

Fixes PR46585.
2020-07-04 11:42:53 -07:00
Simon Pilgrim 56a8a5c9fe [DAG] matchBinOpReduction - match subvector reduction patterns beyond a matched shufflevector reduction
Currently matchBinOpReduction only handles shufflevector reduction patterns, but in many cases these only occur in the final stages of a reduction, once we're down to legal vector widths.

Before this, it's likely that we are performing reductions using subvector extractions to repeatedly split the source vector in half and perform the binop on the halves.

Assuming we've found a non-partial reduction, this patch continues looking for subvector reductions as far as it can beyond the last shufflevector.
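
A scalar model of the subvector-halving reduction shape being matched, assuming a
power-of-two element count (a sketch only; the real matcher works on
extract_subvector/binop DAG nodes):

```
#include <cassert>
#include <numeric>
#include <vector>

// Repeatedly split the vector in half and add the halves element-wise; the
// last few steps, once the width is legal, correspond to the shufflevector
// stages the matcher already handled.
static int reduceByHalving(std::vector<int> v) {
  while (v.size() > 1) {
    size_t half = v.size() / 2;
    for (size_t i = 0; i < half; ++i)
      v[i] += v[i + half]; // binop on the two extracted subvectors
    v.resize(half);
  }
  return v[0];
}

int main() {
  std::vector<int> v{1, 2, 3, 4, 5, 6, 7, 8};
  assert(reduceByHalving(v) == std::accumulate(v.begin(), v.end(), 0));
  return 0;
}
```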

Fixes PR37890
2020-07-04 15:28:15 +01:00
David Green 9e03547cab [ARM][HWLoops] Create hardware loops for sibling loops
Given a loop with two subloops, it should be possible for both to be
converted to hardware loops. That's what this patch does, simply enough.
It slightly alters the loop iterating order to try and convert all
subloops. If one (or more) succeeds, it stops as before.

Differential Revision: https://reviews.llvm.org/D78502
2020-07-03 17:20:02 +01:00
Guillaume Chatelet 87e2751cf0 [Alignment][NFC] Use proper getter to retrieve alignment from ConstantInt and ConstantSDNode
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D83082
2020-07-03 08:06:43 +00:00
Sanjay Patel bc110de78a [SelectionDAG] don't split branch on logic-of-vector-compares
SelectionDAGBuilder converts logic-of-compares into multiple branches based
on a boolean TLI setting in isJumpExpensive(). But that probably never
considered the pattern of extracted bools from a vector compare - it seems
unlikely that we would want to turn vector logic into control-flow.

The motivating x86 reduction case is shown in PR44565:
https://bugs.llvm.org/show_bug.cgi?id=44565
...and that test shows the expected improvement from using pmovmsk codegen.

For AArch64, I modified the test to include an extra op because the simpler
test gets transformed by a codegen invocation of SimplifyCFG.

Differential Revision: https://reviews.llvm.org/D82602
2020-07-02 17:05:24 -04:00
Sander de Smalen 143e324e75 [CodeGen][SVE] Don't drop scalable flag in DAGCombiner::visitEXTRACT_SUBVECTOR
There was a rogue 'assert' in AArch64ISelLowering for the tuple.get intrinsics,
that shouldn't really have been there (I suspect this was a remnant from when
we expected the wider vector always to have come from a vector CONCAT).

When I tried to create a more minimal reproducer, I found a bug in
DAGCombiner where it drops the scalable flag when trying to fold:

      extract_subv (bitcast X), Index --> bitcast (extract_subv X, Index')

This patch fixes both issues.

Reviewers: david-arm, efriedma, spatel

Reviewed By: efriedma

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82910
2020-07-02 10:16:43 +01:00
David Sherwood c7df35d2b2 [CodeGen] Fix warnings in getCopyToPartsVector
Whilst trying to assemble the following test:

  clang/test/CodeGen/aarch64-sve-intrinsics/acle_sve_set2.c

I discovered we were hitting some warnings about possible invalid
calls to getVectorNumElements() in getCopyToPartsVector(). I've
tried to fix these by using ElementCount types where possible and
I've made the assumption that we don't support using a fixed width
vector to copy parts of a scalable vector, and vice versa. Looking
at how the copy is implemented I think that's the right thing for
now.

Differential Revision: https://reviews.llvm.org/D82744
2020-07-02 09:08:20 +01:00
Krzysztof Pszeniczny e4b3c138de This patch adds basic debug info support with basic block sections.
This patch uses ranges for debug information when a function contains basic block sections rather than using [lowpc, highpc]. This is also the first in a series of patches for debug info and does not contain the support for linker relaxation. That will be done as a follow up patch.

Differential Revision: https://reviews.llvm.org/D78851
2020-07-01 23:53:00 -07:00
Matt Arsenault afb3bd9914 RegAllocGreedy: Use TargetInstrInfo already in the class 2020-07-01 18:58:59 -04:00
Craig Topper 51e92b223b [X86] Speculatively apply the same fix from 361853c96f to PromoteIntOp_MGATHER.
The UpdateNodeOperands here is also subject to CSE.
2020-07-01 11:57:59 -07:00
Craig Topper 361853c96f [LegalizeTypes] Properly handle the case when UpdateNodeOperands in PromoteIntOp_MLOAD triggers CSE instead of updating the node in place.
The caller can't handle the node having multiple results like a
masked load does. So we need to detect the case and do our own
result replacement.

Fixes PR46532.
2020-07-01 11:48:50 -07:00
David Sherwood f11305780f [CodeGen] Fix warnings in DAGCombiner::visitSCALAR_TO_VECTOR
In visitSCALAR_TO_VECTOR we try to optimise cases such as:

  scalar_to_vector (extract_vector_elt %x)

into vector shuffles of %x. However, it led to numerous warnings
when %x is a scalable vector type, so for now I've changed the
code to only perform the combination on fixed length vectors.
Although we probably could change the code to work with scalable
vectors in certain cases, without a proper profit analysis it
doesn't seem worth it at the moment.

This change fixes up one of the warnings in:

  llvm/test/CodeGen/AArch64/sve-merging-stores.ll

I've also added a simplified version of the same test to:

  llvm/test/CodeGen/AArch64/sve-fp.ll

which already has checks for no warnings.

Differential Revision: https://reviews.llvm.org/D82872
2020-07-01 18:47:13 +01:00
James Y Knight 4b0aa5724f Change the INLINEASM_BR MachineInstr to be a non-terminating instruction.
Before this instruction supported output values, it fit fairly
naturally as a terminator. However, being a terminator while also
supporting outputs causes some trouble, as the physreg->vreg COPY
operations cannot be in the same block.

Modeling it as a non-terminator allows it to be handled the same way
as invoke is handled already.

Most of the changes here were created by auditing all the existing
users of MachineBasicBlock::isEHPad() and
MachineBasicBlock::hasEHPadSuccessor(), and adding calls to
isInlineAsmBrIndirectTarget or mayHaveInlineAsmBr, as appropriate.

Reviewed By: nickdesaulniers, void

Differential Revision: https://reviews.llvm.org/D79794
2020-07-01 12:51:50 -04:00
Yuanfang Chen 78c69a00a4 [NFC] Clean up uses of MachineModuleInfoWrapperPass 2020-07-01 09:45:05 -07:00
David Green ca4c1ad854 [Outliner] Set nounwind for outlined functions
This prevents the outlined functions from pulling in a lot of unnecessary code
in our downstream libraries/linker. Which stops outlining making codesize
worse in c++ code with no-exceptions.

Differential Revision: https://reviews.llvm.org/D57254
2020-07-01 17:18:34 +01:00
Guillaume Chatelet ef36f5143d [Alignment] TargetLowering::hasPairedLoad must use Align for RequiredAlignment
As per the documentation of `hasPairedLoad`:
"`RequiredAlignment` gives the minimal alignment constraints that must be met to be able to select this paired load."
In this sense, `0` is strictly equivalent to `1`. We make this obvious by using `Align` instead of unsigned.
There is only one implementor of this interface.

Differential Revision: https://reviews.llvm.org/D82958
2020-07-01 14:32:30 +00:00
Guillaume Chatelet d3085c2501 [Alignment][NFC] Transition and simplify calls to DL::getABITypeAlignment
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82956
2020-07-01 14:31:56 +00:00
Guillaume Chatelet 27bbc8ede1 [Alignment][NFC] Migrate TargetTransformInfo::CreateVariableSizedObject to Align
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82939
2020-07-01 14:31:21 +00:00
David Sherwood 97a7a9abb2 [CodeGen] Fix up warnings in visitEXTRACT_SUBVECTOR
It's perfectly valid to do certain DAG combines where we extract
subvectors from a concat vector when we have scalable vector types.
However, we can do this in a way that avoids generating compiler
warnings by replacing calls to getVectorNumElements() with
getVectorMinNumElements(). Due to the way subvector extracts are
designed to work with scalable vector types this is ok.

This eliminates some warnings from existing tests in this file:

  llvm/test/CodeGen/AArch64/sve-intrinsics-loads.ll

Differential Revision: https://reviews.llvm.org/D82655
2020-07-01 15:10:53 +01:00
David Stenberg 85460c4ea2 [DebugInfo] Do not emit entry values for composite locations
Summary:
This is a fix for PR45009.

When working on D67492 I made DwarfExpression emit a single
DW_OP_entry_value operation covering the whole composite location
description that is produced if a register does not have a valid DWARF
number, and is instead composed of multiple register pieces. Looking
closer at the standard, this appears to not be valid DWARF. A
DW_OP_entry_value operation's block can only be a DWARF expression or a
register location description, so it appears to not be valid for it to
hold a composite location description like that.

See DWARFv5 sec. 2.5.1.7:

"The DW_OP_entry_value operation pushes the value that the described
 location held upon entering the current subprogram. It has two
 operands: an unsigned LEB128 length, followed by a block containing a
 DWARF expression or a register location description (see Section
 2.6.1.1.3 on page 39)."

Here is a dwarf-discuss mail thread regarding this:

http://lists.dwarfstd.org/pipermail/dwarf-discuss-dwarfstd.org/2020-March/004610.html

There was not a strong consensus reached there, but people seem to lean
towards that operations specified under 2.6 (e.g. DW_OP_piece) may not
be part of a DWARF expression, and thus the DW_OP_entry_value operation
can't contain those.

Perhaps we instead want to emit an entry value operation for each
DW_OP_reg* operation, e.g.:

  - DW_OP_entry_value(DW_OP_regx sub_reg0),
    DW_OP_stack_value,
    DW_OP_piece 8,
  - DW_OP_entry_value(DW_OP_regx sub_reg1),
    DW_OP_stack_value,
    DW_OP_piece 8,
  [...]

The question then becomes how the call site should look; should a
composite location description be emitted there, and we then leave it up
to the debugger to match those two composite location descriptions?
Another alternative could be to emit a call site parameter entry for
each sub-register, but firstly I'm unsure if that is even valid DWARF,
and secondly it seems like that would complicate the collection of call
site values quite a bit. As far as I can tell GCC does not emit any
entry values / call sites in these cases, so we do not have something to
compare with, but the former seems like the more reasonable approach.

Currently when trying to emit a call site entry for a parameter composed
of multiple DWARF registers a (DwarfRegs.size() == 1) assert is
triggered in addMachineRegExpression(). Until the call site
representation is figured out, and until there is use for these entry
values in practice, this commit simply stops the invalid DWARF from
being emitted.

Reviewers: djtodoro, vsk, aprantl

Reviewed By: djtodoro, vsk

Subscribers: jyknight, hiraditya, fedor.sergeev, jrtc27, llvm-commits

Tags: #debug-info, #llvm

Differential Revision: https://reviews.llvm.org/D75270
2020-07-01 10:50:55 +02:00
Guillaume Chatelet 7f37d88306 [Alignment][NFC] Migrate MachineFrameInfo::CreateSpillStackObject to Align
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82934
2020-07-01 08:49:28 +00:00
Sam Parker 3ee580d017 [ARM][LowOverheadLoops] Handle reductions
While validating live-out values, record instructions that look like
a reduction. This will consist of a vector op (for now only vadd),
a vorr (vmov) which stores the previous value of vadd, and then a vpsel
in the exit block which is predicated upon a vctp. This vctp will
combine the last two iterations using the vmov and vadd into a vector
which can then be consumed by a vaddv.

Once we have determined that it's safe to perform tail-predication,
we need to change this sequence of instructions so that the
predication doesn't produce incorrect code. This involves changing
the register allocation of the vadd so it updates itself and the
predication on the final iteration will not update the falsely
predicated lanes. This mimics what the vmov, vctp and vpsel do and
so we then don't need any of those instructions.

Differential Revision: https://reviews.llvm.org/D75533
2020-07-01 08:31:49 +01:00
Guillaume Chatelet 28de229bc6 [Alignment][NFC] Migrate MachineFrameInfo::CreateStackObject to Align
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82894
2020-07-01 07:28:11 +00:00
JF Bastien ca134e4c52 [NFC] fix diagnostic
It's pretty silly to diagnose on a scalar copy but the build does that:
  loop variable 'SibReg' of type 'const llvm::Register' creates a copy from type 'const llvm::Register' [-Wrange-loop-analysis]
2020-06-30 21:49:01 -07:00
Matt Arsenault e9eab30339 GlobalISel: Disallow undef generic virtual register uses
With an undef operand, it's possible for getVRegDef to fail and return
null. This is an edge case very little code bothered to
consider. Proper gMIR should use G_IMPLICIT_DEF instead.

I initially tried to apply this restriction to all SSA MIR, so then
getVRegDef would never fail anywhere. However, ProcessImplicitDefs
does technically run while the function is in SSA. ProcessImplicitDefs
and DetectDeadLanes would need to either move, or a new pseudo-SSA
type of function property would need to be introduced.
2020-06-30 19:18:01 -04:00
Hendrik Greving 50ac7ce94f [ModuloSchedule] Make PeelingModuloScheduleExpander inheritable.
Basically an NFC, but allows subclasses access to the entire PeelingModuloScheduleExpander
class. We are doing this to allow backends, particularly ones that are not necessarily
upstreamed, to inherit from PeelingModuloScheduleExpander and access its basic structures.

Renames Info into LoopInfo for consistency in PeelingModuloScheduleExpander.

Differential Revision: https://reviews.llvm.org/D82673
2020-06-30 15:56:13 -07:00
Hsiangkai Wang a7b0f39185 [MVT] Add new MVT types for RISC-V vector.
In the RISC-V vector extension, users can group multiple vector registers
as one pseudo register. In mixed-width operations, users can use
partial vector registers to reduce register pressure. The parameter
that controls register grouping and partial use is called LMUL. LMUL is a
part of the type. So, we have a bunch of vector types. In order to
support all these types, we need new MVT types in LLVM. In this patch, I
added several MVT types that are used in RISC-V vector implementation.
This is a standalone patch for MVT types without RISC-V related implementation.

Differential revision: https://reviews.llvm.org/D81724
2020-07-01 01:07:50 +08:00
Matt Arsenault b7f6ecf0c7 RegAlloc: Start using Register 2020-06-30 12:13:08 -04:00
Matt Arsenault af1eeaf380 BranchFolding: Use Register 2020-06-30 12:13:08 -04:00
Matt Arsenault edb4a5cb36 TailDuplicator: Use Register 2020-06-30 12:13:08 -04:00
Guillaume Chatelet 423458ec09 [Alignment][NFC] TargetLowering::allowsMemoryAccessForAlignment
First patch of a series to adapt TargetLowering::allowsXXX functions

This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D81372
2020-06-30 15:31:24 +00:00
Guillaume Chatelet c1cd61e02a [Alignment][NFC] Migrate SelectionDAGTargetInfo::EmitTargetCodeForMemcpy to Align
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82849
2020-06-30 13:12:31 +00:00
Guillaume Chatelet 306d7c6929 [Alignment][NFC] Migrate SelectionDAGTargetInfo::EmitTargetCodeForMemmove to Align
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82850
2020-06-30 12:46:59 +00:00
Guillaume Chatelet 6a6af30d43 [Alignment][NFC] Migrate SelectionDAGTargetInfo::EmitTargetCodeForMemset to Align
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82851
2020-06-30 12:46:26 +00:00
Guillaume Chatelet 2c5ff48e61 [Alignment][NFC] Migrate AtomicExpandPass to Align
This is a followup on D78403.
I'm unsure about the `getAtomicOpAlign` overloads that take `AtomicRMWInst` and `AtomicCmpXchgInst`; shouldn't `getAlign` provide the correct answer already?

Differential Revision: https://reviews.llvm.org/D81369
2020-06-30 09:54:45 +00:00
Petar Avramovic 4b980cc9ca [GlobalISel][InlineAsm] Add support for matching input constraints
Find the def operand that corresponds to the matching constraint and
tie the input to that operand.

Differential Revision: https://reviews.llvm.org/D82651
2020-06-30 10:49:05 +02:00
Guillaume Chatelet 5f8bdb3e6a [Alignment][NFC] TargetLowering::allowsMemoryAccess
Second patch of a series to adapt TargetLowering::allowsXXX functions

This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82785
2020-06-30 08:17:00 +00:00
David Sherwood c02332a693 [CodeGen] Fix warning in getNode for EXTRACT_SUBVECTOR
Fix a warning in getNode() when extracting a subvector from a
concat vector. We can simply replace the call to getVectorNumElements
with getVectorMinNumElements as this follows the defined behaviour
for EXTRACT_SUBVECTOR.

Differential Revision: https://reviews.llvm.org/D82746
2020-06-30 08:11:41 +01:00
David Sherwood 46a7f4d6f4 [SVE][CodeGen] Fix bug in DAGCombiner::reduceBuildVecToShuffle
When trying to reduce a BUILD_VECTOR to a SHUFFLE_VECTOR it's
important that we carefully check the vector types that led to
that BUILD_VECTOR. In the test I have attached to this commit
there is a case where the results of two SVE faddv instructions
are being stored to consecutive memory locations. With my fix,
as part of merging those stores we discover that each BUILD_VECTOR
element came from an extract of a SVE vector element and
therefore bail out.

Differential Revision: https://reviews.llvm.org/D82564
2020-06-30 07:28:15 +01:00
Guillaume Chatelet 368a5e3a66 [Alignment][NFC] migrate DataLayout::getPreferredAlignment
This patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82752
2020-06-29 11:24:36 +00:00
Simon Pilgrim 3521ecf1f8 [X86] Add vector support to targetShrinkDemandedConstant for OR/XOR opcodes
If a constant is only allsignbits in the demanded/active bits, then sign extend it to an allsignbits bool pattern for OR/XOR ops.

This also requires SimplifyDemandedBits XOR handling to be modified to call ShrinkDemandedConstant on any (non-NOT) XOR pattern to account for non-splat cases.

Next step towards fixing PR45808 - with this patch we now get a <-1,-1,0,0> v4i64 constant instead of <1,1,0,0>.
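
A rough model of the per-element check this relies on, under the assumption that
"allsignbits in the demanded bits" means the demanded bits of the element are
either all ones or all zeros (a sketch, not the X86 hook itself):

```
#include <cassert>
#include <cstdint>

// If every demanded bit of an element is the same (all ones or all zeros), the
// element can be rewritten as the canonical bool pattern 0 / -1 without
// changing any demanded bit.
static bool demandedBitsAreAllSignBits(uint64_t elt, uint64_t demanded) {
  uint64_t bits = elt & demanded;
  return bits == 0 || bits == demanded;
}

int main() {
  // With only the low bit demanded, the element 1 behaves like the all-ones
  // bool -1, which is why <1,1,0,0> can become <-1,-1,0,0>.
  assert(demandedBitsAreAllSignBits(1, 0x1));
  assert(demandedBitsAreAllSignBits(uint64_t(-1), 0x1));
  assert(!demandedBitsAreAllSignBits(2, 0x3));
  return 0;
}
```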

Differential Revision: https://reviews.llvm.org/D82257
2020-06-29 12:19:05 +01:00
Simon Pilgrim 973685fc78 [TargetLowering] Add DemandedElts arg to ShrinkDemandedConstant
Pre-commit for D82257, this adds a DemandedElts arg to ShrinkDemandedConstant/targetShrinkDemandedConstant which will allow future patches to (optionally) add vector support.
2020-06-29 11:46:58 +01:00
Guillaume Chatelet 3500d9ec95 Fix invalid alignment in DAGCombiner::isLegalNarrowLdSt
`ShAmt / 8` can be a non-power-of-two value, which can lead to an invalid alignment.
context: https://reviews.llvm.org/D41350#inline-749165

Differential Revision: https://reviews.llvm.org/D82565
2020-06-29 09:22:15 +00:00
madhur13490 299dee91b3 Revert accidentally landed patch citing build errors
Summary: This reverts commit c73966c2f7.

2020-06-28 11:52:33 +00:00
madhur13490 c73966c2f7 Improve stack object printing. NFC.
Reviewers: madhur13490

Reviewed By: madhur13490

Subscribers: qcolombet, arsenm, jvesely, nhaehnle, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82712
2020-06-28 11:43:33 +00:00
Simon Pilgrim 6bdb3ce452 [DAG] reduceBuildVecExtToExtBuildVec - don't combine if it would break a splat.
reduceBuildVecExtToExtBuildVec was breaking a splat(zext(x)) pattern into buildvector(x, 0, x, 0, ..) resulting in much more complex insert+shuffle codegen.

We already go to some lengths to avoid this in SimplifyDemandedVectorElts etc. when we encounter splat buildvectors.

It should be OK to fold all splat(aext(x)) patterns - we might need to tighten this if we find a case where we mustn't introduce a buildvector(x, undef, x, undef, ..) but I can't find one.

Fixes PR46461.
2020-06-27 11:03:57 +01:00
Matt Arsenault c2e403c19d GlobalISel: Don't fail translate on weak cmpxchg
The translation of cmpxchg added by
9481399c0f specifically skipped weak
cmpxchg due to not understanding the meaning. Weak cmpxchg was added
in 420a216817. As explained in the
commit message, the weak mode is implicit in how
ATOMIC_CMP_SWAP_WITH_SUCCESS is lowered. If it's expanded to a regular
ATOMIC_CMP_SWAP, it's replaced with a strong cmpxchg.

This handling seems weird to me, but this was already following the
DAG behavior. I would expect the strong IR instruction to not have the
boolean output. Failing that, I might expect the IRTranslator to emit
ATOMIC_CMP_SWAP and a constant for the boolean.
2020-06-26 17:52:18 -04:00
Sanjay Patel e7f7715eb9 [DAGCombiner] rename variables for readability; NFC
PR46406 shows a pattern where we can do better, so try to clean this up
before adding more code.
2020-06-26 14:22:11 -04:00
Guillaume Chatelet fdc7c7fb87 [Alignment][NFC] Migrate TTI::getInterleavedMemoryOpCost to Align
This is patch is part of a series to introduce an Alignment type.
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
See this patch for the introduction of the type: https://reviews.llvm.org/D64790

Differential Revision: https://reviews.llvm.org/D82573
2020-06-26 11:00:53 +00:00
Simon Pilgrim da426ead73 LiveRangeEdit.h - reduce AliasAnalysis.h include to forward declaration. NFC.
Move include to LiveRangeEdit.cpp and replace legacy AliasAnalysis typedef with AAResults where necessary.
2020-06-26 09:58:21 +01:00
Sjoerd Meijer 243a5329d4 [SelectionDAG] Lower @llvm.get.active.lane.mask to setcc
This lowers the intrinsic @llvm.get.active.lane.mask to a setcc node, i.e. an icmp
ule, and creates vectors for its two arguments on which the comparison is
performed.
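
A scalar model of the semantics being lowered, assuming lane i of the result is
the unsigned comparison base + i < n (a sketch of the intrinsic's meaning, not
the DAG lowering itself):

```
#include <array>
#include <cassert>
#include <cstdint>

// Each lane of the mask is active while base + i is (unsigned) below n; the
// lowering expresses this as a single vector compare.
static std::array<bool, 4> activeLaneMask(uint32_t base, uint32_t n) {
  std::array<bool, 4> mask{};
  for (uint32_t i = 0; i < mask.size(); ++i)
    mask[i] = base + i < n;
  return mask;
}

int main() {
  auto m = activeLaneMask(6, 8); // lanes 6 and 7 are active, 8 and 9 are not
  assert(m[0] && m[1] && !m[2] && !m[3]);
  return 0;
}
```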

Differential Revision: https://reviews.llvm.org/D82292
2020-06-26 07:46:38 +01:00
Igor Kudrin 70165bb7e9 [DebugInfo] Fix emitting offsets to CUs with -dwarf-sections-as-references=Enable.
The size of the field depends on the DWARF format, not the address size
of the target.

Differential Revision: https://reviews.llvm.org/D82311
2020-06-26 12:12:26 +07:00
Wouter van Oortmerssen b9a539c010 [WebAssembly] Adding 64-bit versions of __stack_pointer and other globals
We have 6 globals, all of which except for __table_base are 64-bit under wasm64.

Differential Revision: https://reviews.llvm.org/D82130
2020-06-25 15:52:44 -07:00
Paul Walker 2c09e91054 [MVT] Add missing floating point types for 1024/2048-bit vectors.
Summary:
This patch adds entries for:
  v64f16
  v128f16
  v64bf16
  v128bf16
  v32f64

Subscribers: dschuff, hiraditya, aheejin, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82466
2020-06-25 21:13:31 +00:00
Simon Pilgrim 1815b77c3e LiveIntervals.h - reduce AliasAnalysis.h include to forward declaration. NFC.
Fix implicit include dependencies in source files and replace legacy AliasAnalysis typedef with AAResults where necessary.
2020-06-25 14:22:21 +01:00
Simon Pilgrim 792e4a8c97 CodeGenPrepare.cpp - remove unused IntrinsicsX86.h header. NFC. 2020-06-25 14:22:19 +01:00
Simon Pilgrim 172c36a100 Fix typos in CodeGenPrepare::splitLargeGEPOffsets comments. 2020-06-25 14:22:19 +01:00
Scott Linder 4d81aec40c [MIR] Fix CFI_INSTRUCTION escape printing
Summary:
The printer seems to intend not to print the trailing comma, but has a
copy-paste error for the last value in the escape. The parser
enforces having no trailing comma, but somehow a test was never included
to actually confirm this.

Reviewers: thegameg, arsenm

Reviewed By: thegameg, arsenm

Subscribers: wdng, arsenm, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82478
2020-06-24 18:15:28 -04:00
Simon Pilgrim a53dddb3e9 Local.h - reduce includes to forward declarations. NFC.
Fix implicit include dependencies in source files and replace legacy AliasAnalysis typedef with AAResults where necessary.
2020-06-24 19:27:37 +01:00
Simon Pilgrim bf77c7ef2d Loads.h - reduce AliasAnalysis.h include to forward declarations. NFC.
Fix implicit include dependencies in source files.
2020-06-24 13:49:04 +01:00
Eli Friedman a2caa3b614 Remove GlobalValue::getAlignment().
This function is deceptive at best: it doesn't return what you'd expect.
If you have an arbitrary GlobalValue and you want to determine the
alignment of that pointer, Value::getPointerAlignment() returns the
correct value.  If you want the actual declared alignment of a function
or variable, GlobalObject::getAlignment() returns that.

This patch switches all the users of GlobalValue::getAlignment to an
appropriate alternative.

Differential Revision: https://reviews.llvm.org/D80368
2020-06-23 19:13:42 -07:00
Eli Friedman e9d4e34ab8 [AArch64][SVE] Add legalization support for i32/i64 vector srem/urem
Implement them on top of sdiv/udiv, similar to what we do for integer
types.

Potential future work: implementing i8/i16 srem/urem, optimizations for
constant divisors, optimizing the mul+sub to mls.

Differential Revision: https://reviews.llvm.org/D81511
2020-06-23 16:27:52 -07:00
hsmahesha 5832950adb [AMDGPU/MemOpsCluster] Compute `width` for `MIMG` instruction class.
Summary:
`width` computation is missing for the newly added `MIMG`
instruction class. Add it.

Reviewers: foad, rampitec, arsenm

Reviewed By: foad

Subscribers: MatzeB, javed.absar, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D81649
2020-06-23 17:32:17 +05:30
Kerry McLaughlin 5080503174 [SVE][CodeGen] Legalisation of vsetcc with scalable types
Summary: Changes SplitVecOp_VSETCC to use getVectorElementCount()

Reviewers: sdesmalen, efriedma, dancgr

Reviewed By: efriedma

Subscribers: david-arm, tschuett, hiraditya, rkruppe, psnobl, huihuiz, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79167
2020-06-23 11:56:29 +01:00
Simon Pilgrim bcc0dc3832 [DAG] visitSIGN_EXTEND_INREG - rename EVT variable. NFCI.
We had an EVT type variable called EVT, which isn't a good idea...
2020-06-23 10:45:27 +01:00
Paul Walker 499c63288f [SVE] Code generation for fixed length vector loads & stores.
Summary:
This patch adds base support for code generating fixed length
vector operations targeting a known SVE vector length. To achieve
this we lower fixed length vector operations to equivalent scalable
vector operations, whereby SVE predication is used to limit the
elements processed to those present within the fixed length vector.

Specifically this patch implements load and store operations, which
get lowered to their masked counterparts thusly:

  V = load(Addr) =>
    V = extract_fixed_vector(masked_load(make_pred(V.NumElts), Addr))

  store(V, Addr) =>
    masked_store(insert_fixed_vector(V), make_pred(V.NumElts), Addr)

Reviewers: rengolin, efriedma

Subscribers: tschuett, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D80385
2020-06-23 09:39:03 +00:00
Simon Pilgrim 0acd22b8fb StatepointLowering.cpp - fix implicit CommandLine.h dependency. NFC.
StatepointLowering defines a cl::opt but doesn't include CommandLine.h.
2020-06-23 09:43:39 +01:00
Michael Liao b1360caa82 [SDAG] Add new AssertAlign ISD node.
Summary:
- AssertAlign node records the guaranteed alignment on its source node,
  where these alignments are retrieved from alignment attributes in LLVM
  IR. These tracked alignments could help DAG combining and lowering
  generating efficient code.
- In this patch, the basic support of AssertAlign node is added. So far,
  we only generate AssertAlign nodes on return values from intrinsic
  calls.
- Addressing selection in AMDGPU is revised accordingly to capture the
  new (base + offset) patterns.

Reviewers: arsenm, bogner

Subscribers: jvesely, wdng, nhaehnle, tpr, hiraditya, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D81711
2020-06-23 00:51:11 -04:00
stozer 539381da26 [DebugInfo] Update MachineInstr to help support variadic DBG_VALUE instructions
Following on from this RFC[0] from a while back, this is the first patch towards
implementing variadic debug values.

This patch specifically adds a set of functions to MachineInstr for performing
operations specific to debug values, and replacing uses of the more general
functions where appropriate. The most prevalent of these is replacing
getOperand(0) with getDebugOperand(0) for debug-value-specific code, as the
operands corresponding to values will no longer be at index 0, but index 2 and
upwards: getDebugOperand(x) == getOperand(x+2). Similar replacements have been
added for the other operands, along with some helper functions to replace
oft-repeated code and operate on a variable number of value operands.

[0] http://lists.llvm.org/pipermail/llvm-dev/2020-February/139376.html

Differential Revision: https://reviews.llvm.org/D81852
2020-06-22 16:01:12 +01:00
Simon Pilgrim 48d1a2d6d0 [DAG] Add SimplifyMultipleUseDemandedVectorElts helper for SimplifyMultipleUseDemandedBits. NFCI.
We have many cases where we call SimplifyMultipleUseDemandedBits and demand specific vector elements, but all the bits from them - this adds a helper wrapper to handle this.
2020-06-22 14:24:39 +01:00
Simon Pilgrim ecc5d7ee0d [DAG] SimplifyMultipleUseDemandedBits - drop unnecessary *_EXTEND_VECTOR_INREG cases
For little endian targets, if we only need the lowest element and none of the extended bits then we can just use the (bitcasted) source vector directly.

We already do this in SimplifyDemandedBits, this adds the SimplifyMultipleUseDemandedBits equivalent.
2020-06-22 12:35:32 +01:00
Tres Popp 09d72ad399 Revert "[CGP] Enable CodeGenPrepares phi type convertion."
This reverts commit 67121d7b82.

This is causing compile times to be 2x slower on some large binaries.
2020-06-22 13:06:18 +02:00
David Green 67121d7b82 [CGP] Enable CodeGenPrepares phi type convertion. 2020-06-21 16:46:16 +01:00
David Green 730ecb63ec [CGP] Convert phi types
If a collection of interconnected phi nodes is only ever loaded, stored
or bitcast then we can convert the whole set to the bitcast type,
potentially helping to reduce the number of register moves needed as the
phi's are passed across basic block boundaries. This has to be done in
CodegenPrepare as it naturally straddles basic blocks.

The algorithm just looks from phi nodes, examining their uses and operands for
a collection of nodes that all together are bitcast between float and
integer types. We record visited phi nodes to avoid processing them
more than once. The whole subgraph is then replaced with a new type.
Loads and Stores are bitcast to the correct type, which should then be
folded into the load/store, changing its type.

This comes up in the biquad testcase due to the way MVE needs to keep
values in integer registers. I have also seen it come up from aarch64
partner example code, where a complicated set of sroa/inlining produced
integer phis, where float would have been a better choice.

I also added undef and extract element handling which increased the
potency in some cases.

This adds it with an option that defaults to off, and disabled for 32bit
X86 due to potential issues around canonicalizing NaNs.

Differential Revision: https://reviews.llvm.org/D81827
2020-06-21 15:54:17 +01:00
David Sherwood 584d0d5c17 [SVE] Fall back on DAG ISel at -O0 when encountering scalable types
At the moment we use Global ISel by default at -O0, however it is
currently not capable of dealing with scalable vectors for two
reasons:

1. The register banks know nothing about SVE registers.
2. The LLT (Low Level Type) class knows nothing about scalable
   vectors.

For now, the easiest way to avoid users hitting issues when using
the SVE ACLE is to fall back on normal DAG ISel when encountering
instructions that operate on scalable vector types.

I've added a couple of RUN lines to existing SVE tests to ensure
we can compile at -O0. I've also added some new tests to

  CodeGen/AArch64/GlobalISel/arm64-fallback.ll

that demonstrate we correctly fallback to DAG ISel at -O0 when
lowering formal arguments or translating instructions that involve
scalable vector types.

Differential Revision: https://reviews.llvm.org/D81557
2020-06-19 10:57:00 +01:00
Jay Foad 7cdf4326a8 [LiveIntervals] Fix early-clobber handling in handleMoveUp
Without this fix, handleMoveUp can create an invalid live range like
this:

[98904e,98908r:0)[98908e,227504r:1)

where the two segments overlap, but only because we have lost the "e"
(early-clobber) on the end point of the first segment.

Differential Revision: https://reviews.llvm.org/D82110
2020-06-19 10:17:04 +01:00
David Sherwood 7edc7f6edb [CodeGen] Fix SimplifyDemandedBits for scalable vectors
For now I have changed SimplifyDemandedBits and its various callers
to assume we know nothing for scalable vectors and to ignore the
demanded bits completely. I have also done something similar for
SimplifyDemandedVectorElts. These changes fix up lots of warnings
due to calls to EVT::getVectorNumElements() for types with scalable
vectors. These functions are all used for optimisations, rather than
functional requirements. In future we can revisit this code if
there is a need to improve code quality for SVE.

Differential Revision: https://reviews.llvm.org/D80537
2020-06-19 07:59:35 +01:00
David Sherwood 9e811b0d93 [CodeGen] Fix ComputeNumSignBits for scalable vectors
When trying to calculate the number of sign bits for scalable vectors
we should just bail out for now and pretend we know nothing.

Differential Revision: https://reviews.llvm.org/D81093
2020-06-19 07:58:42 +01:00
Vitaly Buka fcd67665a8 [StackSafety] Add "Must Live" logic
Summary:
Extend StackLifetime with an option to calculate liveness
where an alloca is only considered alive on basic block entry
if all non-dead predecessors had it alive at their terminators.

Depends on D82043.

Reviewers: eugenis

Reviewed By: eugenis

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82124
2020-06-18 16:53:37 -07:00
Nathan James 8b0df1c1a9 [NFC] Refactor Registry loops to range for 2020-06-19 00:40:10 +01:00
Matt Arsenault 95605b784b AMDGPU/GlobalISel: Implement computeKnownAlignForTargetInstr
We probably need to move where intrinsics are lowered to copies to
make this useful.
2020-06-18 17:28:00 -04:00
Matt Arsenault 7f8b2e1b91 GlobalISel: Pass LegalizerHelper to custom legalize callbacks
This was passing in all the parameters needed to construct a
LegalizerHelper in the custom legalization, when it's simpler to just
pass in the existing helper.

This is slightly more annoying to use in the common case where you
don't need the legalizer helper, but we could add the common
parameters back in addition to the helper.

I didn't propagate this to all the internal target changes that this
logically implies, but did update a sample one for
legalizeMinNumMaxNum.

This is in preparation for moving AMDGPU load/store legalization
entirely into custom lowering. The current set of legalization actions
is really constraining and not really capable of expressing all the
actions needed to legalize loads/stores. In particular there's no way
to express when the memory access itself needs to change size vs. the
result type. There's also a lot of redundancy since the same
split/widen actions need to be applied in both vector and scalar
cases. All of the sub-cases logically belong as steps in the legalizer
helper, but it will be easier to consider everything at once in custom
lowering.
2020-06-18 17:17:38 -04:00
Alexandre Ganea 2ae0df5be7 [CodeView] Revert 8374bf4363 and 403f953792
This reverts:
8374bf4363 [CodeView] Fix generated command-line expansion in LF_BUILDINFO. Fix the 'pdb' entry which was previously a null reference, now an empty string.
403f953792 [CodeView] Add full repro to LF_BUILDINFO record

This is causing the lld/test/COFF/pdb-relative-source-lines.test to fail: http://lab.llvm.org:8011/builders/lld-x86_64-win/builds/1096/steps/test-check-all/logs/FAIL%3A%20lld%3A%3Apdb-relative-source-lines.test
And clang/test/CodeGen/debug-info-codeview-buildinfo.c fails as well: http://lab.llvm.org:8011/builders/clang-s390x-linux/builds/33346/steps/ninja%20check%201/logs/FAIL%3A%20Clang%3A%3Adebug-info-codeview-buildinfo.c
2020-06-18 16:18:46 -04:00
Simon Pilgrim 2474421398 [TargetLowering] SimplifyMultipleUseDemandedBits - drop already extended ISD::SIGN_EXTEND_INREG nodes.
If the source of the SIGN_EXTEND_INREG node is already sign extended, use the source directly.
2020-06-18 16:41:08 +01:00
Alexandre Ganea 8374bf4363 [CodeView] Fix generated command-line expansion in LF_BUILDINFO. Fix the 'pdb' entry which was previously a null reference, now an empty string.
Previously, the DIA SDK didn't like the empty reference in the 'pdb' entry.
2020-06-18 10:07:30 -04:00
Alexandre Ganea 403f953792 [CodeView] Add full repro to LF_BUILDINFO record
This patch adds some missing information to the LF_BUILDINFO which allows for rebuilding an .OBJ without any external dependency but the .OBJ itself (other than the compiler executable).

Some tools need this information to reproduce a build without any knowledge of the build system. The LF_BUILDINFO therefore stores a full path to the compiler, the PWD (which is the CWD at program startup), a relative or absolute path to the TU, and the full CC1 command line. The command line needs to be freestanding (not depend on any environment variable). In the same way, MSVC doesn't store the provided command-line, but an expanded version (somehow their equivalent of CC1) which is also freestanding.

For more information see PR36198 and D43002.

Differential Revision: https://reviews.llvm.org/D80833
2020-06-18 09:17:15 -04:00
Lucas Prates a255931c40 [ARM] Supporting lowering of half-precision FP arguments and returns in AArch32's backend
Summary:
Half-precision floating point arguments and returns are currently
promoted to either float or int32 in clang's CodeGen and there's
no existing support for the lowering of `half` arguments and returns
from IR in AArch32's backend.

Such frontend coercions, implemented as coercion through memory
in clang, can cause a series of issues in argument lowering, such as
arguments being stored in the wrong bits on big-endian architectures
and missed overflow detections in the return of certain
functions.

This patch introduces the handling of half-precision arguments and returns in
the backend using the actual "half" type in the IR. Using the "half"
type, the backend is able to properly enforce the AAPCS' directions for
those arguments, making sure they are stored in the proper bits of the
registers and performing the necessary floating point conversions.

Reviewers: rjmccall, olista01, asl, efriedma, ostannard, SjoerdMeijer

Reviewed By: ostannard

Subscribers: stuij, hiraditya, dmgreen, llvm-commits, chill, dnsampaio, danielkiss, kristof.beyls, cfe-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D75169
2020-06-18 13:15:13 +01:00
Jeremy Morse 3626eba11f [NFC][LiveDebugValues] Document how LiveDebugValues operates
We're missing a plain English explanation of how this pass is supposed
to operate -- add one to the file comment.

Differential Revision: https://reviews.llvm.org/D80929
2020-06-18 10:54:09 +01:00
David Sherwood 7e30ef77f6 [CodeGen] Fix warnings in getVectorTypeBreakdown
Added a NextPowerOf2() routine to TypeSize and rewrote the code
in getVectorTypeBreakdown to avoid warnings being generated.
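
A minimal sketch of a next-power-of-two helper of the kind described
(illustrative only; the actual TypeSize/MathExtras implementation may differ):

```
#include <cassert>
#include <cstdint>

// Smear the highest set bit to the right, then add one; the result is the
// smallest power of two strictly greater than the input.
static uint64_t nextPowerOf2(uint64_t v) {
  v |= v >> 1;
  v |= v >> 2;
  v |= v >> 4;
  v |= v >> 8;
  v |= v >> 16;
  v |= v >> 32;
  return v + 1;
}

int main() {
  assert(nextPowerOf2(3) == 4);
  assert(nextPowerOf2(4) == 8);
  assert(nextPowerOf2(0) == 1);
  return 0;
}
```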

Differential Revision: https://reviews.llvm.org/D81578
2020-06-18 09:54:16 +01:00
David Sherwood 65912a9768 [CodeGen] Fix warnings in foldCONCAT_VECTORS
Instead of asserting the number of elements is the same, we should be
comparing the element counts instead. In addition, when looking at
concats of extract_subvectors it's fine to use getVectorMinNumElements()
for scalable vectors.

I discovered these warnings when compiling the structured loads tests in
this file:

  test/CodeGen/AArch64/sve-intrinsics-loads.ll

Differential Revision: https://reviews.llvm.org/D81936
2020-06-18 09:29:37 +01:00
Nick Desaulniers e7816f263b [InlineSpiller] add assert about spills post terminators
Summary:
This invariant is being violated in the test case
https://reviews.llvm.org/D77849, related to the use of the relatively
new ability for callbr to have return values, and MachineBasicBlocks
with INLINEASM_BR terminators to emit live out register defs.

As noted in the comment, this triggers invariant violations in
MachineVerifier via `llc -verify-machineinstrs` or
`llc -verify-regalloc`, since only MachineInstrs that are terminators
are allowed to follow the first terminator.

https://reviews.llvm.org/D75098 may rework this very assertion if we're
spilling via a (proposed) TCOPY MachineInstr.

Reviewers: void, efriedma, arsenm

Reviewed By: efriedma

Subscribers: qcolombet, wdng, hiraditya, llvm-commits, srhines

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D78166
2020-06-17 11:51:58 -07:00
Davide Italiano 1cbaf847ab [CGP] Reset the debug location when promoting zext(s).
When the zext gets promoted, it used to retain the original location,
which pessimizes the debugging experience, causing an unexpected
jump in stepping at -Og.

Fixes https://bugs.llvm.org/show_bug.cgi?id=46120 (which also
contains a full C repro).

Differential Revision:  https://reviews.llvm.org/D81437
2020-06-17 11:13:13 -07:00
Ian Levesque 7c7c8e0da4 [xray] Option to omit the function index
Summary:
Add a flag to omit the xray_fn_idx to cut size overhead and relocations
roughly in half, at the cost of reduced performance for single-function
patching. Minor additions to compiler-rt to support per-function patching
without the index.

Reviewers: dberris, MaskRay, johnislarry

Subscribers: hiraditya, arphaman, cfe-commits, #sanitizers, llvm-commits

Tags: #clang, #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D81995
2020-06-17 13:49:01 -04:00
Vitaly Buka d812efb121 [SafeStack,NFC] Fix names after files move
Summary: Depends on D81831.

Reviewers: eugenis, pcc

Reviewed By: eugenis

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D81832
2020-06-17 01:08:40 -07:00
Vitaly Buka 6754a0e2ed [SafeStack,NFC] Move SafeStackColoring code
Summary:
This code is going to be used in StackSafety.
This patch is a file move with minimal changes. Identifiers
will be fixed in the followup patch.

Reviewers: eugenis, pcc

Reviewed By: eugenis

Subscribers: mgorny, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D81831
2020-06-17 01:07:47 -07:00
Aaron Smith 7e01675ea5 [SelectionDAG] Add MVT::bf16 to getConstantFP()
Summary:
This was probably overlooked in recent bfloat patches.
Needed to handle bf16 constants in SelectionDAG.

  ConstantFP:bf16<APFloat(0)>
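
A minimal usage sketch of what this enables (the wrapper name is just for
illustration):

```
#include "llvm/CodeGen/SelectionDAG.h"

// Request a bf16 zero; with the patch getConstantFP can build the
// ConstantFP:bf16<APFloat(0)> node shown above.
static llvm::SDValue makeBF16Zero(llvm::SelectionDAG &DAG,
                                  const llvm::SDLoc &DL) {
  return DAG.getConstantFP(0.0, DL, llvm::MVT::bf16);
}
```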

Reviewers: stuij

Reviewed By: stuij

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D81779
2020-06-16 15:10:05 -07:00
Matt Arsenault e4f19d1dda GlobalISel: Fix not failing on widening G_INSERT_VECTOR_ELT
This doesn't actually handle type idx 0, but was reporting Legalized
on it. No test changes because nothing was trying to use this.
2020-06-16 15:48:57 -04:00
Matt Arsenault 8a3340d25d GlobalISel: Use early return and reduce indentation 2020-06-16 14:47:08 -04:00
Fangrui Song 4799fb63b5 [GlobalISel] Delete unused variable after r353432 2020-06-16 08:32:09 -07:00
Jessica Paquette 5a4c3f6b06 [GlobalISel] Look through extends etc in CombinerHelper::matchConstantOp
It's possible to end up with a zext or something in the way of a G_CONSTANT,
even pre-legalization. This can happen with memsets.

e.g.

https://godbolt.org/z/Bjc8cw

To make sure we can catch these cases, use `getConstantVRegValWithLookThrough`
instead of `mi_match`.
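
A sketch of the lookup (simplified; the helper name and its first two
parameters follow the GlobalISel Utils.h of that time, everything else is
illustrative):

```
#include "llvm/CodeGen/GlobalISel/Utils.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/Register.h"

// True if Reg is a G_CONSTANT, possibly hidden behind copies or extends,
// so a zext between the use and the constant no longer defeats the match.
static bool isKnownConstant(llvm::Register Reg,
                            const llvm::MachineRegisterInfo &MRI) {
  auto ValAndVReg = llvm::getConstantVRegValWithLookThrough(Reg, MRI);
  return static_cast<bool>(ValAndVReg);
}
```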

Differential Revision: https://reviews.llvm.org/D81875
2020-06-15 16:34:25 -07:00
Amara Emerson fc905ae003 [GlobalISel] Don't emit multiply by magic constant for zero memset values. 2020-06-15 14:42:14 -07:00
Davide Italiano c2dccf9d5e [CodeGenPrepare] Reset the debug location when promoting trunc(s)
The promotion machinery in CGP moves instructions while retaining their
debug locations. When the transformation is local, this is mostly
correct, but when instructions are moved across basic blocks it is not
always true and causes jumpiness in line tables. This is the first
of a series of commits; sext(s) and zext(s) need to be treated
similarly.
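
A minimal sketch of the idea, with illustrative names rather than the actual
CGP code (the real change may simply drop the location whenever the
instruction is moved):

```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/DebugLoc.h"
#include "llvm/IR/Instruction.h"

// If the promoted ext/trunc ended up in a different block than the value it
// replaced, drop its debug location so stepping does not jump back to the
// original source line.
static void resetDebugLocIfMoved(llvm::Instruction *Promoted,
                                 const llvm::BasicBlock *OriginalBB) {
  if (Promoted->getParent() != OriginalBB)
    Promoted->setDebugLoc(llvm::DebugLoc());
}
```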

Differential Revision:  https://reviews.llvm.org/D81879
2020-06-15 14:25:43 -07:00
Jessica Paquette 1ac8451a9b [GlobalISel] Simplify G_ADD when it has (0-X) on the LHS or RHS
This implements the following combines:

((0-A) + B) -> B-A
(A + (0-B)) -> A-B

Porting over the basic algebraic combines from the DAGCombiner. There are
several combines which fold adds away into subtracts. This is just the simplest
one.

I noticed that add combines are some of the most commonly hit across CTMark
(via print statements when they fire), so I'm porting over some of the obvious
ones.

This gives some minor code size improvements on CTMark at -O3 on AArch64.
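
A rough sketch of the first fold with simplified plumbing; `isZeroConstant`
is a stand-in for the combiner's own constant matching, not a real helper:

```
#include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/TargetOpcodes.h"

// Hypothetical helper: true if Reg is a constant zero.
bool isZeroConstant(llvm::Register Reg, const llvm::MachineRegisterInfo &MRI);

// ((0 - A) + B) -> B - A
static bool tryFoldAddOfNeg(llvm::MachineInstr &Add,
                            llvm::MachineRegisterInfo &MRI,
                            llvm::MachineIRBuilder &Builder) {
  if (Add.getOpcode() != llvm::TargetOpcode::G_ADD)
    return false;
  llvm::Register Dst = Add.getOperand(0).getReg();
  llvm::Register LHS = Add.getOperand(1).getReg();
  llvm::Register RHS = Add.getOperand(2).getReg();
  llvm::MachineInstr *Sub = MRI.getVRegDef(LHS);
  if (!Sub || Sub->getOpcode() != llvm::TargetOpcode::G_SUB ||
      !isZeroConstant(Sub->getOperand(1).getReg(), MRI))
    return false;
  llvm::Register A = Sub->getOperand(2).getReg();
  // Rewrite Dst = G_ADD (G_SUB 0, A), B  as  Dst = G_SUB B, A.
  Builder.setInstrAndDebugLoc(Add);
  Builder.buildSub(Dst, RHS, A);
  Add.eraseFromParent();
  return true;
}
```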

Differential Revision: https://reviews.llvm.org/D77453
2020-06-15 09:43:24 -07:00
Dominik Montada 87e5742654 [NFC] Add braces to if-statement in MachineVerifier 2020-06-15 16:33:56 +02:00
Matt Arsenault 33e9086501 GlobalISel: Support lowering vector->vector G_BITCAST
Extract subvectors and cast to the result element type before
remerging.
2020-06-15 07:36:30 -04:00
Dominik Montada c87bf29149 [MachineVerifier][GlobalISel] Check that branches have a MBB operand or are declared indirect. Add missing properties to G_BRJT, G_BRINDIRECT
Summary:
Teach MachineVerifier to check branches for MBB operands if they are not declared indirect.

Add `isBarrier`, `isIndirectBranch` to `G_BRINDIRECT` and `G_BRJT`.
Without these, `MachineInstr.isConditionalBranch()` was giving a
false-positive for those instructions.
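
A sketch of the shape of the new check (simplified; the reporting hook here
is hypothetical, not the verifier's actual interface):

```
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineOperand.h"

// Hypothetical reporting hook standing in for MachineVerifier's own.
void reportBranchError(const llvm::MachineInstr &MI, const char *Msg);

// Direct (non-indirect) branches must name at least one MBB operand.
static void checkBranchOperands(const llvm::MachineInstr &MI) {
  if (!MI.isBranch() || MI.isIndirectBranch())
    return;
  for (const llvm::MachineOperand &MO : MI.operands())
    if (MO.isMBB())
      return;
  reportBranchError(MI, "direct branch has no MBB operand");
}
```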

Reviewers: aemerson, qcolombet, dsanders, arsenm

Reviewed By: dsanders

Subscribers: hiraditya, wdng, simoncook, s.egerton, arsenm, rovka, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D81587
2020-06-15 11:17:09 +02:00
Vitaly Buka ca2dcbd030 [SafeStack,NFC] Make StackColoring read-only
Move the core code which removes markers out of StackColoring.
2020-06-14 23:05:43 -07:00
Vitaly Buka c6426e2657 [SafeStack,NFC] Remove unneeded branch 2020-06-14 23:05:43 -07:00
Vitaly Buka 7282da1ea8 [SafeStack,NFC] Fix naming style 2020-06-14 23:05:42 -07:00
Vitaly Buka 2f5e535a84 [SafeStack,NFC] Cleanup LiveRange interface 2020-06-14 23:05:42 -07:00
Vitaly Buka adefa9ca2e [SafeStack,NFC] "const" cleanup 2020-06-14 23:05:42 -07:00
Vitaly Buka fb1e0f324f [SafeStack,NFC] Add BlockLifetimeInfo constructor 2020-06-14 23:05:42 -07:00
Vitaly Buka 645058036a [SafeStack,NFC] Use IntrinsicInst instead of Instruction 2020-06-14 23:05:41 -07:00
Vitaly Buka f8e411656e [SafeStack,NFC] Move ClColoring into SafeStack.cpp
This allows the code to be reused in other components.
2020-06-14 23:05:41 -07:00
Vitaly Buka 05590a9cb8 [SafeStack,NFC] Move unconditional code into constructor
Prepare to move ClColoring from SafeStackCode to SafeStackLayout.
This will allow the code to be reused in other components.
2020-06-14 23:05:41 -07:00
Chen Zheng bd7096b977 [PowerPC] fma chain break to expose more ILP
This patch tries to reassociate two patterns related to FMA to expose
more ILP on PowerPC.

// Pattern 1:
//   A =  FADD X,  Y          (Leaf)
//   B =  FMA  A,  M21,  M22  (Prev)
//   C =  FMA  B,  M31,  M32  (Root)
// -->
//   A =  FMA  X,  M21,  M22
//   B =  FMA  Y,  M31,  M32
//   C =  FADD A,  B

// Pattern 2:
//   A =  FMA  X,  M11,  M12  (Leaf)
//   B =  FMA  A,  M21,  M22  (Prev)
//   C =  FMA  B,  M31,  M32  (Root)
// -->
//   A =  FMUL M11,  M12
//   B =  FMA  X,  M21,  M22
//   D =  FMA  A,  M31,  M32
//   C =  FADD B,  D
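
(Reading `FMA A, M21, M22` as `A + M21 * M22`, i.e. with the addend first,
both rewrites are reassociations of the same sum: pattern 1 computes
`X + Y + M21*M22 + M31*M32` either way, but after the rewrite the two FMAs
no longer depend on each other, which is what exposes the extra ILP; pattern
2 pays one extra instruction for the same dependence break. As with other
machine-combiner reassociations, this presumably only applies under the
usual fast-math reassociation checks.)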

Reviewed By: jsji

Differential Revision: https://reviews.llvm.org/D80175
2020-06-15 00:00:04 -04:00
Qiu Chaofan f8ef7c99a0 [DAGCombiner] Require ninf for division estimation
The current implementation of division estimation isn't correct for some
cases like 1.0/0.0 (the result is NaN, not the expected inf).
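
A small standalone illustration of why an estimate-based division can return
NaN here (this models a generic Newton-Raphson refinement, not the exact
sequence the backend emits):

```
#include <cstdio>

int main() {
  // One Newton-Raphson refinement of a reciprocal estimate r of d:
  //   r' = r * (2 - d * r)
  // For d == 0 the estimate is +inf, so d * r is 0 * inf == NaN and the
  // refined reciprocal (and hence x * r') is NaN rather than the IEEE +inf.
  double d = 0.0;
  double r = 1.0 / d;                  // stand-in for the hardware estimate
  double refined = r * (2.0 - d * r);  // 0 * inf poisons the refinement
  std::printf("exact: %f  estimated: %f\n", 1.0 / d, refined);
  return 0;
}
```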

This change also exposes a potential infinite loop: we use
isConstOrConstSplatFP in combineRepeatedFPDivisors to check whether the
divisor is a constant, but that doesn't work after legalization on some
platforms. This patch restricts the method to act before LegalDAG.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D80542
2020-06-14 22:58:22 +08:00
Amanieu d'Antras 6973125cb7 Fix FastISel dropping srcloc metadata from InlineAsm
Summary:
Bugzilla: https://bugs.llvm.org/show_bug.cgi?id=46060

I've also added the Extra_IsConvergent flag which was missing from FastISel.

Reviewers: echristo

Reviewed By: echristo

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D80759
2020-06-13 16:52:37 +01:00
Roman Lebedev 17f7654152 [NFCI][MachineCopyPropagation] invalidateRegister(): use SmallSet<8> instead of DenseSet.
This decreases the time consumed by the pass [during RawSpeed unity build]
by 25% (0.0586 s -> 0.04388 s).

While that isn't really impressive overall, that wasn't the goal here.
The memory results here are noticeable.
The baseline results are:
```
total runtime: 55.65s.
calls to allocation functions: 19754254 (354960/s)
temporary memory allocations: 4951609 (88974/s)
peak heap memory consumption: 239.13MB
peak RSS (including heaptrack overhead): 463.79MB
total memory leaked: 198.01MB
```
While with this patch the results are:
```
total runtime: 55.37s.
calls to allocation functions: 19068237 (344403/s)   # -3.47 %
temporary memory allocations: 4261772 (76974/s)      # -13.93 % (!!!)
peak heap memory consumption: 239.13MB
peak RSS (including heaptrack overhead): 463.73MB
total memory leaked: 198.01MB
```

So we get rid of *a lot* of temporary allocations.

Using `SmallSet<8>` makes sense to me because, at least here
for x86 BdVer2, the size of that set is *never* more than 3
over all of the llvm test-suite plus RawSpeed.

The story might be different on other targets;
I'm not sure it will ever justify a full DenseSet,
but if it does, SmallDenseSet might be a compromise.
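
A minimal illustration of the trade-off (element type simplified to a plain
register-unit integer):

```
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/SmallSet.h"

// DenseSet heap-allocates its bucket table as soon as elements are inserted,
// so building and clearing one per invalidation produces many short-lived
// allocations.
llvm::DenseSet<unsigned> HeapBackedRegUnits;

// SmallSet keeps up to N elements in inline storage and only falls back to
// std::set beyond that, so the common "3 or fewer entries" case never
// touches the heap.
llvm::SmallSet<unsigned, 8> InlineRegUnits;
```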
2020-06-12 23:10:54 +03:00
Michael Liao e7b920e6fe [DAGCombine] Generalize the case (add (or x, c1), c2) -> (add x, (c1 + c2))
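
(This fold is only sound when x and c1 share no set bits, so that the `or`
behaves exactly like an add: with x = 1, c1 = 1, c2 = 1 the original
computes (1 | 1) + 1 = 2, while the folded form would compute
1 + (1 + 1) = 3. The generalized combine presumably keeps the usual
no-common-bits guard.)
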
Reviewers: arsenm

Subscribers: sdardis, wdng, hiraditya, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, Jim, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, ecnelises, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D81708
2020-06-12 13:53:08 -04:00
Matt Arsenault 350ee7fb3f GlobalISel: Fix not erasing old instruction in sitofp/uitofp lowering 2020-06-12 10:33:23 -04:00
Simon Pilgrim 5509e2cc2e [DAG] foldAddSubOfSignBit - add support for non-uniform vector constants 2020-06-12 14:58:15 +01:00
diggerlin c6be3ea524 [NFC] Clean up the AIX part of AsmPrinter::emitLinkage
SUMMARY:

Since AIX linkage emission is handled in PPCAIXAsmPrinter::emitLinkage() as of the patch https://reviews.llvm.org/D75866, the AIX path no longer goes through AsmPrinter::emitLinkage(), so this cleans up the AIX-related code in AsmPrinter::emitLinkage().

Reviewers:  Jason liu

Differential Revision: https://reviews.llvm.org/D81613
2020-06-11 13:33:51 -04:00
Petar Avramovic bd3d951b8b AMDGPU/GlobalISel: Fix lower for f64->f16 G_FPTRUNC
Put AND before ADD in LegalizerHelper::lowerFPTRUNC_F64_TO_F16
in order to match the algorithm from AMDGPUTargetLowering::LowerFP_TO_FP16.

Differential Revision: https://reviews.llvm.org/D81666
2020-06-11 18:19:27 +02:00