Before this patch DbgInfoAvailable was set to true in
DwarfDebug::beginModule() or CodeViewDebug::CodeViewDebug(). This made
MIR testing weird since passes would suddenly stop dealing with debug
info just because we stopped the pipeline before the debug printers.
This patch changes the logic to initialize DbgInfoAvailable based on the
fact that debug_compile_units exist in the llvm Module. The debug
printers may then override it to false if debug printing is disabled.
Differential Revision: https://reviews.llvm.org/D53885
llvm-svn: 345740
The debug-use flag must be set exactly for uses on DBG_VALUEs. This is
so obvious that it can be trivially inferred while parsing. This will
reduce noise when printing, while omitting information that has little
value to the user.
The parser will keep recognizing the flag for compatibility with old
`.mir` files.
Differential Revision: https://reviews.llvm.org/D53903
llvm-svn: 345671
Summary:
Normalize the offset for endianness before checking
if the store covers the load in ForwardStoreValueToDirectLoad.
Without this we missed out on some optimizations for big-endian
targets. If we have, for example, a 4-byte store followed
by a 1-byte load that loads the least significant byte from the
store, the STCoversLD check would fail (see @test4 in
test/CodeGen/AArch64/load-store-forwarding.ll).
This patch also fixes a problem seen in an out-of-tree target.
The target has i40 as a legal type, it is big endian,
and the StoreSize for i40 is 48 bits. So when normalizing
the offset for endianness we need to take the StoreSize into
account (assuming that the padding added when storing into
a larger StoreSize is always added at the most significant
end).
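To illustrate the normalization (a minimal sketch of the arithmetic only; the function name and signature are illustrative, not the actual DAGCombiner interface):
```
#include <cstdint>

// For little endian, a byte offset into the stored value is also the byte
// offset in memory. For big endian, byte 0 of the value is the *most*
// significant byte, so the offset is mirrored -- using the StoreSize rather
// than the value size, since padding for types like i40 is assumed to sit
// at the most significant end.
int64_t normalizeOffset(int64_t Offset, int64_t StoreSizeInBits,
                        int64_t LoadSizeInBits, bool IsBigEndian) {
  if (IsBigEndian)
    Offset = (StoreSizeInBits - LoadSizeInBits) / 8 - Offset;
  return Offset;
}
```
For the i40 example (StoreSize 48, a 1-byte load of the least significant byte, offset 0 on little endian), this yields byte 5, the last byte of the 6-byte store.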
Reviewers: niravd
Reviewed By: niravd
Subscribers: javed.absar, kristof.beyls, llvm-commits, uabelho
Differential Revision: https://reviews.llvm.org/D53776
llvm-svn: 345636
The SchedModel allows the addition of ReadAdvances to express that certain
operands of the instructions are needed at a later point than the others.
RegAlloc may add pseudo operands that are not part of the instruction
descriptor, and therefore cannot have any read advance entries. This meant
that in some cases the desired read advance was nullified by such a pseudo
operand, which still had the original latency.
This patch fixes this by making sure that such pseudo operands get a zero
latency during DAG construction.
Review: Matthias Braun, Ulrich Weigand.
https://reviews.llvm.org/D49671
llvm-svn: 345606
Narrowing vector binops came up in the demanded bits discussion in D52912.
I don't think we're going to be able to do this transform in IR as a canonicalization
because of the risk of creating unsupported widths for vector ops, but we already have
a DAG TLI hook to allow what I was hoping for: isExtractSubvectorCheap(). This is
currently enabled for x86, ARM, and AArch64 (although only x86 has existing regression
test diffs).
This is artificially limited to not look through bitcasts because there are so many
test diffs already, but that's marked with a TODO and is a small follow-up.
Differential Revision: https://reviews.llvm.org/D53784
llvm-svn: 345602
Similar to FoldCONCAT_VECTORS, this patch adds FoldBUILD_VECTOR to simplify cases that can avoid the creation of the BUILD_VECTOR - if all the operands are UNDEF or if the BUILD_VECTOR simplifies to a copy.
This exposed an assumption in some AMDGPU code that getBuildVector was guaranteed to return a BUILD_VECTOR node, which I've tried to handle.
Differential Revision: https://reviews.llvm.org/D53760
llvm-svn: 345578
Summary: Previously if we had a bitcast vector output type that needs promotion and a vector input type that needs widening we would just do a stack store and load to handle the conversion. We can do a little better if we can widen the bitcast to a legal vector type the same size as the widened input type. Then we can do the bitcast between this widened type and the widened input type. Afterwards we can extract_subvector back to the original output and any_extend that. Type legalization will then circle back and handle promotion of the extract_subvector and the any_extend will just be removed. This will avoid going through the stack and allows us to remove a custom version of this legalization from X86.
Reviewers: efriedma, RKSimon
Reviewed By: efriedma
Subscribers: javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D53229
llvm-svn: 345567
If a function has target features, it may contain instructions that aren't
represented in the default set of instructions. If the outliner pulls out one
of these instructions, and the function doesn't have the right attributes
attached, we'll run into an LLVM error explaining that the target doesn't
support the necessary feature for the instruction.
This makes outlined functions inherit target features from their parents.
It also updates the machine-outliner.ll test to check that we're properly
inheriting target features.
llvm-svn: 345535
Add an intrinsic that takes 2 integers and performs saturation subtraction on
them.
This is a part of implementing fixed point arithmetic in clang where some of
the more complex operations will be implemented as intrinsics.
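As an illustration of the intended semantics (not the lowering), a signed variant clamps instead of wrapping on overflow; the helper below is hypothetical:
```
#include <cstdint>
#include <limits>

// Saturating subtraction: compute exactly in a wider type, then clamp the
// result to the range of the narrow type instead of wrapping.
int32_t ssub_sat(int32_t a, int32_t b) {
  int64_t wide = (int64_t)a - (int64_t)b;
  if (wide > std::numeric_limits<int32_t>::max())
    return std::numeric_limits<int32_t>::max();
  if (wide < std::numeric_limits<int32_t>::min())
    return std::numeric_limits<int32_t>::min();
  return (int32_t)wide;
}
```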
Differential Revision: https://reviews.llvm.org/D53783
llvm-svn: 345512
Add vector support to TargetLowering::expandFP_TO_UINT.
This exposes an issue in X86TargetLowering::LowerVSELECT which was assuming that the select mask was the same width as the LHS/RHS ops - as long as the result is a sign splat we can easily sext/trunc this.
llvm-svn: 345473
Add ARM64 unwind codes to MCLayer, as well as SEH directives that will be emitted
by the frame lowering patch to follow. We only emit unwind codes into object
files for now.
Differential Revision: https://reviews.llvm.org/D50166
llvm-svn: 345450
The DAGTypeLegalizer::getSETCCWidenedResultTy was widening the MaskVT, but the code in convertMask called after getSETCCWidenedResultTy had no idea this widening had occurred. So none of the operands were widened when convertMask created new setccs with the widened VT.
This patch removes the widening and adds some asserts to getNode to validate the types of setccs to prevent issues like this in the future.
Differential Revision: https://reviews.llvm.org/D53743
llvm-svn: 345428
The "dead" markings allow existing target-independent optimizations,
like MachineSink, to trigger more frequently. The CPSR defs would have
eventually been marked dead by LiveVariables, so this only affects
optimizations before regalloc.
The ARMBaseInstrInfo.cpp change is fixing a bug which is only visible
with this change: the transform adds a use to an otherwise dead def
of CPSR. This is covered by existing regression tests.
thumb2-tbh.ll breaks for Thumb1 due to MachineLICM changing the
generated code; I'll fix it in D53452.
Differential Revision: https://reviews.llvm.org/D53453
llvm-svn: 345420
.debug_loclists is the DWARF 5 version of .debug_loc.
With this patch, it will be emitted when DWARF 5 is used.
Differential revision: https://reviews.llvm.org/D53365
llvm-svn: 345377
Summary:
This adds support for LSDA (exception table) generation for wasm EH.
Wasm EH mostly follows the structure of Itanium-style exception tables,
with one exception: a call site table entry in wasm EH corresponds not
to a call site but to a landing pad.
In wasm EH, the VM is responsible for stack unwinding. After an
exception occurs and the stack is unwound, control flow is
transferred to the wasm 'catch' instruction by the VM, after which the
personality function is called from the compiler-generated code. (Refer
to the WasmEHPrepare pass for more information on this part.)
This patch:
- Changes the wasm.landingpad.index intrinsic to take a token argument, to
make this a 1:1 match with a catchpad instruction
- Stores landingpad index info and catch type info in MachineFunction
before instruction selection
- Lowers wasm.lsda intrinsic to an MCSymbol pointing to the start of an
exception table
- Adds WasmException class with overridden methods for table generation
- Adds support for LSDA section in Wasm object writer
Reviewers: dschuff, sbc100, rnk
Subscribers: mgorny, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D52748
llvm-svn: 345345
Handle the case where getCurrentFunction() returns nullptr by passing -1 to
printIRSlotNumber(). This will result in <badref> being printed instead of an
assertion failure.
Review: Francis Visoiu Mistrih
https://reviews.llvm.org/D53333
llvm-svn: 345342
This isn't the most object-size efficient encoding, but it's the only
one GDB supports for the pre-standard fission format. I've written fixes
for this twice now... - so perhaps this comment will help me remember
why neither of these have been committed and why I shouldn't try to
write a third fix another year from now...
llvm-svn: 345326
The artificial dependencies are not real dependencies. In some cases, they
form circuits with bigger MII. However, they are used to schedule instructions
better.
Differential Revision: https://reviews.llvm.org/D53450
llvm-svn: 345319
Replacing BinaryOperator::isFNeg(...) to avoid regressions when we
separate FNeg from the FSub IR instruction.
Differential Revision: https://reviews.llvm.org/D53650
llvm-svn: 345295
As noticed on D52965, the SINT_TO_FP i64 to f32 legalization code has been dead for years - protected by an assert.
Differential Revision: https://reviews.llvm.org/D53703
llvm-svn: 345290
As suggested on D52965, this patch moves the i64 to f64 UINT_TO_FP expansion code from LegalizeDAG into TargetLowering and makes it available to LegalizeVectorOps as well.
Not only does this help perform X86 lowering as a true vectorization instead of (partially vectorized) scalar conversions, it avoids the HADDPD op from the scalar code which can be slow on most targets.
The AVX512F does have the vcvtusi2sdq scalar operation but we don't unroll to use it as it seems to only help for the v2f64 case - otherwise the unrolling cost will certainly be too high. My feeling is that we should leave it to the vectorizers - and if it generates the vector UINT_TO_FP we should use it.
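In spirit, the i64 to f64 expansion is the classic two-halves conversion sketched below (the actual DAG expansion builds the partial doubles with exponent bit tricks rather than a multiply, but the idea is the same):
```
#include <cstdint>

// A double cannot hold all 64 bits exactly, but each 32-bit half converts
// exactly; the single rounding in the final add then gives the correctly
// rounded result.
double u64_to_f64(uint64_t x) {
  double hi = (double)(uint32_t)(x >> 32) * 4294967296.0; // * 2^32, exact
  double lo = (double)(uint32_t)x;                        // exact
  return hi + lo;                                         // one rounding
}
```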
Differential Revision: https://reviews.llvm.org/D53649
llvm-svn: 345256
This makes the offsets larger (since they are further from the base
address) but those are in the .dwo - and allows removing addresses and
relocations from the .o file.
This could be built into the AddressPool more fundamentally, perhaps -
when you ask for an AddressPool entry you could say "or give me some
other entry and an offset I need to use" - though what to do about
situations where the first use of an address in a section is not the
earliest address in that section... is tricky.
At least with range addresses we can be fairly sure we've seen the
earliest address first because we see the start address for the
function.
llvm-svn: 345224
Summary:
Changes all uses of minnan/maxnan to minimum/maximum
globally. These names emphasize that the semantic difference between
these operations is more than just NaN-propagation.
Reviewers: arsenm, aheejin, dschuff, javed.absar
Subscribers: jholewinski, sdardis, wdng, sbc100, jgravelle-google, jrtc27, atanasyan, llvm-commits
Differential Revision: https://reviews.llvm.org/D53112
llvm-svn: 345218
Until now, we've only checked whether merging stores would cause a cycle via
the value argument, but the address and indexed offset arguments are also
capable of creating cycles in some situations.
The addresses are all base+offset with notionally the same base, but the base
SDNode may still be different (e.g. via an indexed load in one case, and an
ISD::ADD elsewhere). This allows cycles to creep in if one of these sources
depends on another.
The indexed offset is usually undef (representing a non-indexed store), but on
some architectures (e.g. 32-bit ARM-mode ARM) it can be an arbitrary value,
again allowing dependency cycles to creep in.
llvm-svn: 345200
Summary:
This renames the IsParsingMSInlineAsm member variable of AsmLexer to
LexMasmIntegers and moves it up to MCAsmLexer. This is the only behavior
controlled by that variable. I added a public setter, so that it can be
set from outside or from the llvm-mc command line. We may need to
arrange things so that users can get this behavior from clang, but
that's future work.
I also put additional hex literal lexing functionality under this flag
to fix PR32973. It appears that this hex literal parsing wasn't intended
to be enabled in non-masm-style blocks.
Now, masm integers (0b1101 and 0ABCh) work in __asm blocks from clang,
but 0b label references work when using .intel_syntax in standalone .s
files.
However, 0b label references will *not* work from __asm blocks in clang.
They will work from GCC inline asm blocks, which it sounds like is
important for Crypto++ as mentioned in PR36144.
Essentially, we only lex masm literals for inline asm blobs that use
intel syntax. If the .intel_syntax directive is used inside a gnu-style
inline asm statement, masm literals will not be lexed, which is
compatible with gas and llvm-mc standalone .s assembly.
This fixes PR36144 and PR32973.
Reviewers: Gerolf, avt77
Subscribers: eraman, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D53535
llvm-svn: 345189
A lifetime end intrinsic between a tail call and the return should not
prevent the call from being tail call optimized.
Differential Revision: https://reviews.llvm.org/D53519
llvm-svn: 345163
Summary:
If the target does not support `.asciz` and `.ascii` directives, the
strings are represented as bytes and each byte is placed on a new line
as a separate byte directive `.b8 <data>`. The NVPTX target can
represent a sequence of values of the same type as a vector, with the
values separated by the `,` symbol: `.b8 <data1>,<data2>,...`. This
reduces the size of the final PTX file. The ptxas tool includes PTX
files in the resulting binary object, so reducing the size of the PTX
file is important.
Reviewers: tra, jlebar, echristo
Subscribers: jholewinski, llvm-commits
Differential Revision: https://reviews.llvm.org/D45822
llvm-svn: 345142
When implementing memsets today we often see this pattern:
$x0 = MOV 0xXYXYXYXYXYXYXYXY
store $x0, ...
$w1 = MOV 0xXYXYXYXY
store $w1, ...
We first create a 64bit constant in a 64bit register with all bytes the
same and then create a 32bit constant with all bytes the same in a 32bit
register. In many targets we could just access the lower byte of the
64bit register instead.
- Ideally this would be handled by the ConstantHoist pass but it runs
too early when memset isn't expanded yet.
- The memset expansion code already had this optimization implemented;
however, SelectionDAG constant folding would fold the
"trunc(bigconstant)" pattern to "smallconstant".
- This patch makes the memset expansion mark the constant as Opaque and
stop DAGCombiner from constant folding in this situation. (Similar to
how ConstantHoisting marks things as Opaque to avoid folding
ADD/SUB/etc.)
Differential Revision: https://reviews.llvm.org/D53181
llvm-svn: 345102
Clearing LargeOffsetGEPMap at the end fixes a bug where if a large
offset GEP is in a dead basic block, we fail an assertion when trying
to delete the block due to the asserting VH in LargeOffsetGEPMap.
Differential Revision: https://reviews.llvm.org/D53464
llvm-svn: 345082
As suggested on D53258, this patch moves the CTPOP expansion code from SelectionDAGLegalize to TargetLowering to allow it to be reused by the VectorLegalizer.
Proper vector support will be added by D53258.
llvm-svn: 345066
As suggested on D53258, this patch shares common CTLZ expansion code between VectorLegalizer and SelectionDAGLegalize by putting it in TargetLowering.
Extension to D53474
llvm-svn: 345060
Vector types are not possible here because this code only starts
matching from the scalar bool value of a conditional branch, but
this is another step towards completely removing the fake binop
queries for not/neg/fneg.
llvm-svn: 345041
As suggested on D53258, this patch demonstrates sharing common CTTZ expansion code between VectorLegalizer and SelectionDAGLegalize by putting it in TargetLowering.
I intend to move CTLZ and (scalar) CTPOP over as well and then update D53258 accordingly.
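The shared CTTZ expansion reduces to a CTPOP when that is available; in plain C++ the trick looks like this (a sketch of the bit manipulation, not the DAG code):
```
#include <bitset>
#include <cstdint>

// The bits strictly below the lowest set bit of v are exactly the set bits
// of ~v & (v - 1), so counting them counts the trailing zeros. For v == 0
// this is all 32 bits, the defined CTTZ(0) result.
uint32_t cttz32(uint32_t v) {
  return std::bitset<32>(~v & (v - 1)).count();
}
```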
Differential Revision: https://reviews.llvm.org/D53474
llvm-svn: 345039
Split MachinePipeliner code into header and cpp files to allow inheritance from SwingSchedulerDAG.
Differential Revision: https://reviews.llvm.org/D53477
llvm-svn: 345008
Add an intrinsic that takes 2 integers and performs unsigned saturation
addition on them.
This is a part of implementing fixed point arithmetic in clang where some of
the more complex operations will be implemented as intrinsics.
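The intended semantics (illustration only, not the lowering): unsigned wrap-around is detected by the sum being smaller than an operand, in which case the result clamps to the maximum:
```
#include <cstdint>
#include <limits>

// Unsigned saturating addition: clamp to the maximum on wrap instead of
// wrapping around modulo 2^32.
uint32_t uadd_sat(uint32_t a, uint32_t b) {
  uint32_t sum = a + b; // may wrap
  return sum < a ? std::numeric_limits<uint32_t>::max() : sum;
}
```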
Differential Revision: https://reviews.llvm.org/D53340
llvm-svn: 344971
I've included a fix to DAGCombiner::ForwardStoreValueToDirectLoad that I believe will prevent the previous miscompile.
Original commit message:
Theoretically this was done to simplify the amount of isel patterns that were needed. But it also meant a substantial number of our isel patterns have to match an explicit bitcast. By making the vXi32/vXi16/vXi8 types legal for loads, DAG combiner should be able to change the load type to remove the bitcast.
I had to add some additional plain load instruction patterns and a few other special cases, but overall the isel table has reduced in size by ~12000 bytes. So it looks like this promotion was hurting us more than helping.
I still have one crash in vector-trunc.ll that I'm hoping @RKSimon can help with. It seems to relate to using getTargetConstantFromNode on a load that was shrunk due to an extract_subvector combine after the constant pool entry was created. So we end up decoding more mask elements than the load size.
I'm hoping this patch will simplify the number of patterns needed to remove the and/or/xor promotion.
Reviewers: RKSimon, spatel
Reviewed By: RKSimon
Subscribers: llvm-commits, RKSimon
Differential Revision: https://reviews.llvm.org/D53306
llvm-svn: 344965
Logs provided by @stella.stamenova indicate that on Linux, lldb adds a
spurious slide offset to the return PC it loads from AT_call_return_pc
attributes (see the list thread: "[PATCH] D50478: Add support for
artificial tail call frames").
This patch side-steps the issue by getting rid of the load address
calculation in lldb's CallEdge::GetReturnPCAddress.
The idea is to have the DWARF writer emit function-local offsets to the
instruction after a call. I.e. return-pc = label-after-call-insn -
function-entry. LLDB can simply add this offset to the base address of a
function to get the return PC.
Differential Revision: https://reviews.llvm.org/D53469
llvm-svn: 344960
Recommits r342942, which was reverted in r343189, with a fix for an
issue where we would propagate unsafely if we defined only the upper
part of a register.
Original message:
Change the copy tracker to keep a single map of register units
instead of 3 maps of registers. This gives a very significant
compile time performance improvement to the pass. I measured a
30-40% decrease in time spent in MCP on x86 and AArch64 and much
more significant improvements on out of tree targets with more
registers.
llvm-svn: 344942
Introduce new versions that follow the IEEE semantics
to help with legalization that may need quieted inputs.
There are some regressions from inserting unnecessary
canonicalizes when these are matched from fast math
fcmp + select which should be fixed in a future commit.
llvm-svn: 344914
This is a late backend subset of the IR transform added with:
D52439
We can confirm that the conversion to a 'trunc' is correct by running:
$ opt -instcombine -data-layout="e"
(assuming the IR transforms are correct; change "e" to "E" for big-endian)
As discussed in PR39016:
https://bugs.llvm.org/show_bug.cgi?id=39016
...the pattern may emerge during legalization, so that's why we are waiting for an
insertelement to become a scalar_to_vector in the pattern matching here.
The DAG allows for fun variations that are not possible in IR. Result types for
extracts and scalar_to_vector don't necessarily match input types, so that means
we have to be a bit more careful in the transform (see code comments).
The tests show that we don't handle cases that require a shift (as we did in the
IR version). I've left that as a potential follow-up because I'm not sure if
that's a real concern at this late stage.
Differential Revision: https://reviews.llvm.org/D53201
llvm-svn: 344872
Using a base address specifier even for a single-element range is a size
win for object files (7 words versus 8 words). The savings are more
significant if the debug info is compressed, since it's 3 words of
uncompressable reloc + 4 compressable words compared to 6 words of
uncompressable reloc + 2 compressable words. It does trade off an
executable size increase, though.
llvm-svn: 344841
Putting addresses in the address pool, even with non-fission, can reduce
relocations - reusing the addresses from debug_info and debug_rnglists
(the latter coming soon)
llvm-svn: 344834
Change of approach, it looks like it's a much better idea to deal with
the vregs that have LLTs and reg classes both properly, than trying to
avoid creating those across all GlobalISel passes and all targets.
The change mostly touches MachineRegisterInfo::constrainRegClass,
which is apparently only used by MachineCSE. The changes are NFC for
any pipeline but one that contains MachineCSE mid-GlobalISel.
NOTE on isCallerPreservedOrConstPhysReg change in MachineCSE:
There is no test covering it as the only way to insert a new pass
(MachineCSE) from a command line I know of is llc's -run-pass option,
which only works with MIR, but MIRParser freezes reserved registers upon
MachineFunctions creation, making it impossible to reproduce the state
that exposes the issue.
Reviewed By: aditya_nandakumar
Differential Revision: https://reviews.llvm.org/D53144
llvm-svn: 344822
https://reviews.llvm.org/D53304
Currently dead phis are not cleaned up during DCE. This patch allows
dead PHI and G_PHI insts to be deleted.
Reviewed by: dsanders
llvm-svn: 344811
Summary:
Large GEP splitting, introduced in rL332015, uses a `DenseMap<AssertingVH<Value>, ...>`. This causes an assertion to fail (in debug builds) or undefined behaviour to occur (in release builds) when a value is RAUWed.
This manifested itself in the 7zip benchmark from the llvm test suite built on ARM with `-fstrict-vtable-pointers` enabled while RAUWing invariant group launders and splits in CodeGenPrepare.
This patch merges the large offsets of the argument and the result of an invariant.group strip/launder intrinsic before RAUWing.
Reviewers: Prazek, javed.absar, haicheng, efriedma
Reviewed By: Prazek, efriedma
Subscribers: kristof.beyls, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D51936
llvm-svn: 344802
In a loop, create artificial dependences between the source of a
COPY/REG_SEQUENCE and its use in the next iteration.
Eg:
SRC ----Data Dep--> COPY
COPY ---Anti Dep--> PHI (implies, to be used in next iteration)
PHI ----Data Dep--> USE
This patch creates
USE ----Artificial Dep---> SRC
This will effectively schedule the COPY late to eliminate additional copies.
Before this patch, the schedule can be
SRC, COPY, USE : The COPY is used in the next iteration and it needs to be
preserved.
After this patch, the schedule can be
USE, SRC, COPY : The COPY is used in the next iteration and the live interval is
reduced.
Differential Revision: https://reviews.llvm.org/D53303
llvm-svn: 344748
Add an intrinsic that takes 2 integers and performs saturation addition on them.
This is a part of implementing fixed point arithmetic in clang where some of
the more complex operations will be implemented as intrinsics.
Differential Revision: https://reviews.llvm.org/D53053
llvm-svn: 344629
Summary:
This adds support for LSDA (exception table) generation for wasm EH.
Wasm EH mostly follows the structure of Itanium-style exception tables,
with one exception: a call site table entry in wasm EH corresponds not
to a call site but to a landing pad.
In wasm EH, the VM is responsible for stack unwinding. After an
exception occurs and the stack is unwound, control flow is
transferred to the wasm 'catch' instruction by the VM, after which the
personality function is called from the compiler-generated code. (Refer
to the WasmEHPrepare pass for more information on this part.)
This patch:
- Changes the wasm.landingpad.index intrinsic to take a token argument, to
make this a 1:1 match with a catchpad instruction
- Stores landingpad index info and catch type info in MachineFunction
before instruction selection
- Lowers wasm.lsda intrinsic to an MCSymbol pointing to the start of an
exception table
- Adds WasmException class with overridden methods for table generation
- Adds support for LSDA section in Wasm object writer
Reviewers: dschuff, sbc100, rnk
Subscribers: mgorny, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D52748
llvm-svn: 344575
This is intended to make the backend on par with functionality that was
added to the IR version of SimplifyDemandedVectorElts in:
rL343727
...and the original motivation is that we need to improve demanded-vector-elements
in several ways to avoid problems that would be exposed in D51553.
Differential Revision: https://reviews.llvm.org/D52912
llvm-svn: 344541
Make variables that are initialized by `getTerminator()` calls instead be declared as `Instruction`.
This is the biggest remaining chunk of the usage of `getTerminator()`
that insists on the narrow type and so is an easy batch of updates.
Several files saw more extensive updates where this would cascade to
requiring API updates within the file to use `Instruction` instead of
`TerminatorInst`. All of these were trivial in nature (pervasively using
`Instruction` instead just worked).
llvm-svn: 344502
Summary:
TwoAddressInstruction pass typically rewrites
%1:short = foo %0.sub_lo:long
as
%1:short = COPY %0.sub_lo:long
%1:short = foo %1:short
when having tied operands.
If there are extra un-tied operands that use the same reg and
subreg, such as the second and third inputs to fie here:
%1:short = fie %0.sub_lo:long, %0.sub_hi:long, %0.sub_lo:long
then there was a bug which replaced the register %0 also for
the un-tied operand, but without changing the subregister indices.
So we used to get:
%1:short = COPY %0.sub_lo:long
%1:short = fie %1, %1.sub_hi:short, %1.sub_lo:short
With this fix we instead get:
%1:short = COPY %0.sub_lo:long
%1:short = fie %1, %0.sub_hi:long, %1
Reviewers: arsenm, JesperAntonsson, kparzysz, MatzeB
Reviewed By: MatzeB
Subscribers: bjope, kparzysz, wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D36224
llvm-svn: 344492
The final stage of CTPOP expansion (v = (v * 0x01010101...) >> (Len - 8)) is completely pointless for the byte (Len = 8) case as it reduces to (v = (v * 0x01...) >> 0), but annoyingly this doesn't always get optimized away.
Found while investigating generic vector CTPOP expansion (PR32655).
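For reference, the full 32-bit expansion looks like this in C++; the final multiply/shift sums the per-byte counts into the top byte, and for Len = 8 it degenerates to a no-op, which is the stage being removed:
```
#include <cstdint>

uint32_t popcount32(uint32_t v) {
  v = v - ((v >> 1) & 0x55555555);                // 2-bit partial sums
  v = (v & 0x33333333) + ((v >> 2) & 0x33333333); // 4-bit partial sums
  v = (v + (v >> 4)) & 0x0F0F0F0F;                // per-byte counts
  return (v * 0x01010101) >> 24;                  // Len = 32: >> (Len - 8)
}
```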
llvm-svn: 344477
The CTPOP case has been changed from VT.getSizeInBits to VT.getScalarSizeInBits - but this fits in with future work for vector support (PR32655) and doesn't affect any current (scalar) uses.
llvm-svn: 344461
Summary:
getShiftAmountTy for X86 returns MVT::i8. If a BSWAP or BITREVERSE is created that requires promotion and the difference between the original VT and the promoted VT is more than 255, then we won't be able to create the constant.
This patch adds a check to replace the result from getShiftAmountTy with MVT::i32 if the difference won't fit. This should get legalized later when the shift is ultimately expanded, since it's clearly an illegal type that we're only promoting to make it a power of 2 bit width. Alternatively we could base the decision completely on the largest shift amount the promoted VT could use.
Vectors should be immune here because getShiftAmountTy always returns the incoming VT for vectors. Only the scalar shift amount can be changed by the targets.
Reviewers: eli.friedman, RKSimon, spatel
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D53232
llvm-svn: 344460
There is one remnant - AVX1 custom splitting of 256-bit vectors - which is due to a regression where the X86ISD::ANDNP is still performed as a YMM.
I've also tightened the CTLZ or CTPOP lowering in SelectionDAGLegalize::ExpandBitCount to require a legal CTLZ - it doesn't affect existing users and fixes an issue with AVX512 codegen.
llvm-svn: 344457
Summary:
These new intrinsics have the semantics of the `minimum` and `maximum`
operations specified by the latest draft of IEEE 754-2018. Unlike
llvm.minnum and llvm.maxnum, these new intrinsics propagate NaNs and
always treat -0.0 as less than 0.0. `minimum` and `maximum` lower
directly to the existing `fminnan` and `fmaxnan` ISel DAG nodes. It is
safe to reuse these DAG nodes because before this patch they were only
emitted in situations where there were known to be no NaN arguments or
where NaN propagation was correct and there were known to be no zero
arguments. I know of only four backends that lower fminnan and
fmaxnan: WebAssembly, ARM, AArch64, and SystemZ, and each of these
lowers fminnan and fmaxnan to instructions that are compatible with
the IEEE 754-2018 semantics.
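A C++ sketch of the IEEE 754-2018 semantics (illustration only):
```
#include <cmath>
#include <limits>

// `minimum`: NaN propagates (unlike minnum, which prefers the non-NaN
// operand), and -0.0 orders as less than +0.0.
double ieee_minimum(double a, double b) {
  if (std::isnan(a) || std::isnan(b))
    return std::numeric_limits<double>::quiet_NaN();
  if (a == 0.0 && b == 0.0)
    return std::signbit(a) ? a : b; // prefer -0.0 over +0.0
  return a < b ? a : b;
}
```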
Reviewers: aheejin, dschuff, sunfish, javed.absar
Subscribers: kristof.beyls, dexonsmith, kristina, llvm-commits
Differential Revision: https://reviews.llvm.org/D52764
llvm-svn: 344437
This is more consistent with what we usually do and matches some code X86 custom emits in some cases that I think I can clean up.
The MIPS test change just looks to be an instruction ordering change.
llvm-svn: 344422
The initial patch was not reviewed, and does not have any tests;
it should not have been merged.
This reverts 344395, 344390, 344387, 344385, 344381, 344376,
and 344366.
llvm-svn: 344405
This saves a conversion to extracts and build_vector. We already do this when both the result and the input need to be widened to the same type.
This changed the sse-intrinsics-fast-isel test because we don't lower (insert_vector_elt (scalar_to_vector X), Y, 1) well. We turn it into (vector_shuffle (scalar_to_vector X), (scalar_to_vector Y), <0, 4, 2, 3>) losing track of the fact that the upper elts could be undef.
We should probably find a way to prevent the scalarization of the <2 x f32> load on these tests.
llvm-svn: 344404
If the input type is widened as well, but we still were forced to unroll, we shouldn't be considering the widened input element count. We should only create as many scalar operations as the original type called for.
This will be important for an upcoming patch.
llvm-svn: 344403
Summary: We can fill in the command line and compiler path later if we want.
Reviewers: zturner
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D53179
llvm-svn: 344393
This a resubmission of a patch which was previously reverted
due to breaking several lld tests. The issues causing those
failures have been fixed, so the patch is now resubmitted.
---Original Commit Message---
While it doesn't make a *ton* of sense for POSIX paths to be
in PDBs, it's possible to occur in real scenarios involving
cross compilation.
The tools need to be able to handle this, because certain types
of debugging scenarios are possible without a running process
and so don't necessarily require you to be on a Windows system.
These include post-mortem debugging and binary forensics (e.g.
using a debugger to disassemble functions and examine symbols
without running the process).
There are changes in clang, LLD, and lldb in this patch. After
this the cross-platform disassembly and source-list tests pass
on Linux.
Furthermore, the behavior of LLD can now be summarized by a much
simpler rule than before: Unless you specify /pdbsourcepath and
/pdbaltpath, the PDB ends up with paths that are valid within
the context of the machine that the link is performed on.
Differential Revision: https://reviews.llvm.org/D53149
llvm-svn: 344377
* Move #include outside of namespaces
* Add missing #include
* Add out-of-line virtual destructor to BTFTypeEntry
designated initializers should also be fixed
llvm-svn: 344376
BTF is the debug format for BPF, a kernel virtual machine
widely used for tracing, networking, security, etc. ([1]).
Currently only instruction streams are passed to the kernel,
and the kernel verifier verifies them before execution. In order
to give user space tools better visibility into bpf programs,
some debug information, e.g., function names and debug line
information, is desirable in the kernel so that tools can better
annotate jited instructions for performance analysis or other purposes.
DWARF is too complicated for the kernel and for BPF.
Hence, BTF is designed to be the debug format for BPF ([2]).
Right now, pahole supports BTF for types, which
are generated based on dwarf sections in the ELF file.
In order to annotate performance metrics for jited bpf insns,
it is necessary to pass debug line info to the kernel.
Furthermore, we want to pass the actual code to the
kernel for the following reasons:
. a bpf program is typically small, so the storage overhead
should be small.
. in bpf land, it is entirely possible that
an application loads the bpf program into the
kernel and then quits, so having the user space
application hold the debug info is not practical.
. having the source code kept directly by the kernel
would ease deployment, since the original source
code does not need to ship on every host and the
kernel-devel package does not need to be
deployed even if kernel headers are used.
The only reliable time to capture the source code is
at compilation time. This results in both more
accurate information and easier deployment, as
stated above.
Another consideration is JIT. Projects like bcc
use MCJIT to compile a C program into bpf insns and
load them into the kernel ([3]). The generated BTF sections
will be readily available for such cases as well.
This patch implements generation of BTF info in the llvm
compiler. The BTF related sections are generated
when both -target bpf and -g are specified. Two sections
are generated:
.BTF contains all the type and string information, and
.BTF.ext contains the func_info and line_info.
The separation reflects how the two sections are used
differently by a bpf loader, e.g., linux libbpf ([4]).
The .BTF section can be loaded into the kernel directly,
while .BTF.ext needs loader manipulation before loading
into the kernel. The format of each section is roughly
defined in llvm:include/llvm/MC/MCBTFContext.h and
by the implementation in llvm:lib/MC/MCBTFContext.cpp.
A later example shows the contents of each section.
The type and func_info are gathered during CodeGen/AsmPrinter
by traversing dwarf debug_info. The line_info is
gathered in MCObjectStreamer before writing to
the object file. After all the information is gathered,
the two sections are emitted in MCObjectStreamer::finishImpl.
With cmake CMAKE_BUILD_TYPE=Debug, the compiler can
dump out all the tables except insn offset, which
will be resolved later as relocation records.
The debug type "btf" is used for BTFContext dump.
For dwarf, debug info generation is tested with
llvm-dwarfdump, which decodes the binary sections and
checks whether the result is as expected. We do not
have such a tool for BTF yet. We will implement
btf dump functionality in bpftool ([5]), as bpftool is
considered the recommended tool for bpf introspection.
The implementation for type and func_info is tested
with linux kernel test cases. The line_info is visually
checked against the dump from linux kernel libbpf ([4]) and
against the section raw data dumped by readelf.
Note that the .BTF and .BTF.ext information will not
be emitted to assembly code and there is no assembler
support for BTF either.
Below, with a clang/llvm built with CMAKE_BUILD_TYPE=Debug,
the contents of each table are shown for a simple C program.
-bash-4.2$ cat -n test.c
1 struct A {
2 int a;
3 char b;
4 };
5
6 int test(struct A *t) {
7 return t->a;
8 }
-bash-4.2$ clang -O2 -target bpf -g -mllvm -debug-only=btf -c test.c
Type Table:
[1] FUNC name_off=1 info=0x0c000001 size/type=2
param_type=3
[2] INT name_off=12 info=0x01000000 size/type=4
desc=0x01000020
[3] PTR name_off=0 info=0x02000000 size/type=4
[4] STRUCT name_off=16 info=0x04000002 size/type=8
name_off=18 type=2 bit_offset=0
name_off=20 type=5 bit_offset=32
[5] INT name_off=22 info=0x01000000 size/type=1
desc=0x02000008
String Table:
0 :
1 : test
6 : .text
12 : int
16 : A
18 : a
20 : b
22 : char
27 : test.c
34 : int test(struct A *t) {
58 : return t->a;
FuncInfo Table:
sec_name_off=6
insn_offset=<Omitted> type_id=1
LineInfo Table:
sec_name_off=6
insn_offset=<Omitted> file_name_off=27 line_off=34 line_num=6 column_num=0
insn_offset=<Omitted> file_name_off=27 line_off=58 line_num=7 column_num=3
-bash-4.2$ readelf -S test.o
......
[12] .BTF PROGBITS 0000000000000000 0000028d
00000000000000c1 0000000000000000 0 0 1
[13] .BTF.ext PROGBITS 0000000000000000 0000034e
0000000000000050 0000000000000000 0 0 1
[14] .rel.BTF.ext REL 0000000000000000 00000648
0000000000000030 0000000000000010 16 13 8
......
-bash-4.2$
The latest linux kernel ([6]) already supports .BTF with type information.
[7] has the reference implementation on the linux kernel side
to support .BTF.ext func_info. The .BTF.ext line_info support is not
implemented yet. If you have difficulty accessing [6], you can
manually do the following to access the code:
git clone https://github.com/yonghong-song/bpf-next-linux.git
cd bpf-next-linux
git checkout btf
The change will be pushed to the linux kernel soon once this patch has landed.
References:
[1]. https://www.kernel.org/doc/Documentation/networking/filter.txt
[2]. https://lwn.net/Articles/750695/
[3]. https://github.com/iovisor/bcc
[4]. https://github.com/torvalds/linux/tree/master/tools/lib/bpf
[5]. https://github.com/torvalds/linux/tree/master/tools/bpf/bpftool
[6]. https://github.com/torvalds/linux
[7]. https://github.com/yonghong-song/bpf-next-linux/tree/btf
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Differential Revision: https://reviews.llvm.org/D52950
llvm-svn: 344366
Summary:
Reland of
- r344197 "[MC][ELF] compute entity size for explicit sections"
- r344206 "[MC][ELF] Fix section_mergeable_size.ll"
after being reverted in r344278 due to build breakages from not
specifying a target triple.
Move test from test/CodeGen/Generic/ to test/MC/ELF/.
Add an explicit target triple so we don't try to run
this test on non-ELF targets.
Reported: https://reviews.llvm.org/D53056#1261707
Reviewers: fhahn, rnk, espindola, NoQ
Reviewed By: fhahn, rnk
Subscribers: NoQ, MaskRay, rengolin, emaste, arichardson, llvm-commits, pirama, srhines
Differential Revision: https://reviews.llvm.org/D53146
llvm-svn: 344360
Generalize SelectionDAGLegalize's CTLZ expansion to handle vectors - this lets VectorLegalizer::ExpandCTLZ just pass the expansion on instead of repeating the same codegen.
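The expansion being shared is the usual smear-then-popcount sequence; in C++ terms:
```
#include <bitset>
#include <cstdint>

// Smear the highest set bit into every lower position; the leading-zero
// count is then the number of bits still clear.
uint32_t ctlz32(uint32_t v) {
  v |= v >> 1;
  v |= v >> 2;
  v |= v >> 4;
  v |= v >> 8;
  v |= v >> 16;
  return 32 - std::bitset<32>(v).count();
}
```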
llvm-svn: 344349
I want to add another pattern here that includes scalar_to_vector,
so this makes that patch smaller. I was hoping to remove the
hasOneUse() check because it shouldn't be necessary for common
codegen, but an AMDGPU test has a comment suggesting that the
extra check makes things better on one of those targets.
llvm-svn: 344320
It originally triggered a stepping problem in the debugger, which could
be fixed by adjusting CodeGen/LexicalScopes.cpp; however, it seems we prefer
the previous behavior anyway.
See the discussion for details: http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20181008/593833.html
This reverts commit r343880.
This reverts commit r343874.
llvm-svn: 344318
In the future, if we add new DAG mutations other than artificial dependencies,
the NodeNum may no longer be valid. Instead, the index from the topological
schedule DAG can be used, as long as we update it when the DAG changes.
Differential Revision: https://reviews.llvm.org/D53104
llvm-svn: 344283
This patch updates the topological ordering of the nodes to reflect
changes to the DAG.
Differential Revision: https://reviews.llvm.org/D53105
llvm-svn: 344282
This was originally causing some test failures on non-Windows
platforms, which required fixes in the compiler and linker. After
those fixes, however, other tests started failing. Reverting
temporarily until I can address everything.
llvm-svn: 344279
Revert r344206 "[MC][ELF] Fix section_mergeable_size.ll"
They were causing failures on too many important buildbots for too long.
Please revert eagerly if your fix takes more than a couple of hours to land!
llvm-svn: 344278
While it doesn't make a *ton* of sense for POSIX paths to be
in PDBs, it's possible to occur in real scenarios involving
cross compilation.
The tools need to be able to handle this, because certain types
of debugging scenarios are possible without a running process
and so don't necessarily require you to be on a Windows system.
These include post-mortem debugging and binary forensics (e.g.
using a debugger to disassemble functions and examine symbols
without running the process).
There are changes in clang, LLD, and lldb in this patch. After
this the cross-platform disassembly and source-list tests pass
on Linux.
Furthermore, the behavior of LLD can now be summarized by a much
simpler rule than before: Unless you specify /pdbsourcepath and
/pdbaltpath, the PDB ends up with paths that are valid within
the context of the machine that the link is performed on.
Differential Revision: https://reviews.llvm.org/D53149
llvm-svn: 344269
Summary:
Global variables might declare themselves to be in explicit sections.
Always calculate the entity size to prevent the assembler warning
"entity size for SHF_MERGE not specified" when sections are to be
marked mergeable.
Fixes PR31828.
Reviewers: rnk, echristo
Reviewed By: rnk
Subscribers: llvm-commits, pirama, srhines
Differential Revision: https://reviews.llvm.org/D53056
llvm-svn: 344197
Moving away from UnknownSize is part of the effort to migrate us to
LocationSizes (e.g. the cleanup promised in D44748).
This doesn't entirely remove all of the uses of UnknownSize; some uses
require tweaks to assume that UnknownSize isn't just some kind of int.
This patch is intended to just be a trivial replacement for all places
where LocationSize::unknown() will Just Work.
llvm-svn: 344186
Summary:
Extend the analysis forwarding loads from preceding stores to work with
extended loads and truncated stores to the same address, so long as the
load is fully subsumed by the store.
Hexagon's swp-epilog-phis.ll and swp-memrefs-epilog1.ll tests are
deleted as they no longer seem to be relevant.
Reviewers: RKSimon, rnk, kparzysz, javed.absar
Subscribers: sdardis, nemanjai, hiraditya, atanasyan, llvm-commits
Differential Revision: https://reviews.llvm.org/D49200
llvm-svn: 344142
Similar to what already happens in the DAGCombiner wrappers, this patch adds the root nodes back onto the worklist if the DCI wrappers' SimplifyDemandedBits/SimplifyDemandedVectorElts were successful.
Differential Revision: https://reviews.llvm.org/D53026
llvm-svn: 344132
We already do the following combines:
(bitcast int (and (bitcast fp X to int), 0x7fff...) to fp) -> fabs X
(bitcast int (xor (bitcast fp X to int), 0x8000...) to fp) -> fneg X
when the target has "bit preserving fp logic". This patch just extends it
to also combine:
(bitcast int (or (bitcast fp X to int), 0x8000...) to fp) -> fneg (fabs X)
As some targets have fnabs and even those that don't can efficiently lower
both the fabs and the fneg.
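At the bit level the new combine is simply an OR with the sign-bit mask, which is why it folds to fneg (fabs X); a scalar C++ illustration:
```
#include <cstdint>
#include <cstring>

// OR-ing the sign bit into the integer image of a float forces the sign
// negative regardless of its prior value: fneg(fabs(x)).
float fneg_fabs(float x) {
  uint32_t bits;
  std::memcpy(&bits, &x, sizeof bits); // bitcast fp -> int
  bits |= 0x80000000u;                 // or with the sign-bit mask
  std::memcpy(&x, &bits, sizeof x);    // bitcast int -> fp
  return x;
}
```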
Differential revision: https://reviews.llvm.org/D44548
llvm-svn: 344093
One case left nonsensical operands around for the KILL instruction,
which the machine verifier checks for nowadays. While this should not
hurt in release builds, we should fix the machine verifier errors anyway.
llvm-svn: 344008
The comment contradicted the code. Looking at the history, the feature
was implemented a day after the comment was written, but the comment was
never updated.
llvm-svn: 344005
r126518 introduced a type parameter to the getShiftAmountTy target hook. It
produces the type of the shift (RHSTy), parameterised by the type of the value
being shifted (LHSTy). SelectionDAGBuilder::visitShift passed RHSTy rather
than LHSTy and this patch corrects this. The change is a no-op because in LLVM
IR the LHS and RHS types for a shift must be equal anyway.
llvm-svn: 343955
This is where we legalize gather and masked load so this is consistent.
Since these ops are always on vectors I've chosen to go with LegalizeDAG since that's what we do for other vector only ops like BUILD_VECTOR, VECTOR_SHUFFLE, etc. The ScalarizeMaskedMemIntrinsic pass should take care of scalarizing these before SelectionDAG so hopefully we don't need to worry about illegally typed scalar ops being emitted in the legalizing. If we did we would need to do this in LegalizeVectorOps so we could get the second type legalization that runs between LegalizeVectorOps and LegalizeDAG.
llvm-svn: 343947
This change is proposed as a part of D44548, but we
need this independently to avoid regressions from improved
undef propagation in SimplifyDemandedVectorElts().
llvm-svn: 343940
rL343913 was using SimplifyDemandedBits's original demanded mask instead of the adjusted 'NewMask' that accounts for multiple uses of the op (those variable names really need improving....).
Annoyingly many of the test changes (back to pre-rL343913 state) are actually safe - but only because their multiple uses are all by PMULDQ/PMULUDQ.
Thanks to Jan Vesely (@jvesely) for bisecting the bug.
llvm-svn: 343935
It was always returning the chain which seems to be the result number of the SDValue in the lit tests we have. But I don't know if that's guaranteed.
llvm-svn: 343933
This patch enables SimplifyDemandedBits to call SimplifyDemandedVectorElts in cases where the demanded bits mask covers entire elements of a bitcasted source vector.
There are a couple of cases here where simplification at a deeper level (such as through bitcasts) prevents further simplification - CommitTargetLoweringOpt only adds immediate uses/users back to the worklist when we might want to combine the original caller again to see what else it can simplify.
As well as that I had to disable handling of bool vectors until SimplifyDemandedVectorElts better supports some of their opcodes (SETCC, shifts etc.).
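The bits-to-elements mapping works roughly as follows (an illustrative sketch with plain integers; the real code uses APInt and must also account for endianness):
```
#include <cstdint>

// For a scalar bitcast from a vector of NumElts equal lanes (NumElts must
// divide 64 here), lane i of the source is demanded iff any demanded bit
// falls inside lane i.
uint64_t demandedEltsFromBits(uint64_t DemandedBits, unsigned NumElts) {
  unsigned LaneBits = 64 / NumElts;
  uint64_t LaneMask = LaneBits == 64 ? ~0ULL : (1ULL << LaneBits) - 1;
  uint64_t DemandedElts = 0;
  for (unsigned i = 0; i != NumElts; ++i)
    if (DemandedBits & (LaneMask << (i * LaneBits)))
      DemandedElts |= 1ULL << i;
  return DemandedElts;
}
```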
Fixes PR39178
Differential Revision: https://reviews.llvm.org/D52935
llvm-svn: 343913
ASan often introduces basic blocks consisting exclusively of
instructions without debug locations, or with line 0 debug locations.
LiveDebugValues needs to extend variable ranges through these artificial
blocks. Otherwise, a lot of variables disappear -- even at -O0.
Typically, LiveDebugValues does not extend a variable's range into a
block unless the block is essentially "part of" the variable's scope
(for a precise definition, see LexicalScopes::dominates). This patch
relaxes the lexical dominance check for artificial blocks.
This makes the following Swift program debuggable at -O0:
```
1| var x = 100
2| print("x = \(x)")
```
rdar://39127144
Differential Revision: https://reviews.llvm.org/D52921
llvm-svn: 343890
Port over the implementation in SelectionDAGBuilder.cpp into the IRTranslator
and update the arm64-irtranslator test.
These were causing fallbacks in CTMark/Bullet (-Rpass-missed=gisel-select),
and this patch fixes that.
https://reviews.llvm.org/D52945
llvm-svn: 343885
DWARF v5 introduces DW_AT_call_all_calls, a subprogram attribute which
indicates that all calls (both regular and tail) within the subprogram
have call site entries. The information within these call site entries
can be used by a debugger to populate backtraces with synthetic tail
call frames.
Tail calling frames go missing in backtraces because the frame of the
caller is reused by the callee. Call site entries allow a debugger to
reconstruct a sequence of (tail) calls which led from one function to
another. This improves backtrace quality. There are limitations: tail
recursion isn't handled, variables within synthetic frames may not
survive to be inspected, etc. This approach is not novel, see:
https://gcc.gnu.org/wiki/summit2010?action=AttachFile&do=get&target=jelinek.pdf
This patch adds an IR-level flag (DIFlagAllCallsDescribed) which lowers
to DW_AT_call_all_calls. It adds the minimal amount of DWARF generation
support needed to emit standards-compliant call site entries. For easier
deployment, when the debugger tuning is LLDB, the DWARF requirement is
adjusted to v4.
Testing: Apart from check-{llvm, clang}, I built a stage2 RelWithDebInfo
clang binary. Its dSYM passed verification and grew by 1.4% compared to
the baseline. 151,879 call site entries were added.
rdar://42001377
Differential Revision: https://reviews.llvm.org/D49887
llvm-svn: 343883
Context: Compiler generated instructions do not have a debug location
assigned to them. However emitting 0-line records for all of them bloats
the line tables for very little benefit so we usually avoid doing that.
Not emitting anything will lead to the previous debug location getting
applied to the locationless instructions. This is not desirable for
block begin and after labels. Previously we would simply emit line-0
records in this case; this patch changes the behavior to do a forward
search for a debug location in these cases before emitting a line-0
record, to further reduce line table bloat.
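The forward search amounts to scanning a block for the first instruction that carries a real location before giving up and emitting line 0; a sketch of the idea (not the actual DwarfDebug code):
```
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/IR/DebugLoc.h"
using namespace llvm;

// Scan forward from the start of the block for the first nonzero debug
// location; the caller falls back to a line-0 record if none is found.
static DebugLoc findForwardDebugLoc(const MachineBasicBlock &MBB) {
  for (const MachineInstr &MI : MBB)
    if (MI.getDebugLoc() && MI.getDebugLoc().getLine() != 0)
      return MI.getDebugLoc();
  return DebugLoc();
}
```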
Inspired by the discussion in https://reviews.llvm.org/D52862
llvm-svn: 343874