Commit 8922adf646 recently made JITTargetMachineBuilder honor the
hasJIT property of the target. LLVM supports just-in-time compilation
on RISC-V, so set the flag.
Differential Revision: https://reviews.llvm.org/D131617
This works with the automatic export of all symbols; in MinGW mode,
when a DLL has no explicit dllexports, it exports all symbols (except
for some that are hardcoded to be excluded, including some toolchain
libraries).
By hooking up the hidden visibility to the -exclude-symbols: directive,
the automatic export of all symbols can be controlled in an easier
way (with a mechanism that doesn't require strict annotation of every
single symbol, but which allows gradually marking more unnecessary
symbols as hidden).
The primary use case is dylib builds of LLVM/Clang. These can be done
in MinGW mode but not in MSVC mode, as MinGW builds can export all
symbols (and the calling code can use APIs without corresponding
dllimport directives). However, as all symbols are exported, it can
easily overflow the max number of exported symbols in a DLL (65536).
In the llvm-mingw distribution, only the X86, ARM and AArch64 backends
are enabled; for the LLVM 13.0.0 release, libLLVM-13.dll ended up with
58112 exported symbols. For LLVM 14.0.0, it was 62015 symbols. Current
builds of the 15.x branch end up at around 64650 symbols - i.e. extremely
close to the limit.
The msys2 packages of LLVM have had to progressively disable more
of their backends in their builds, to be able to keep building with a
dylib.
This allows improving the current mingw dylib situation significantly,
by using the same hidden visibility options and attributes as on Unix.
With those in place, a current build of LLVM git main ends up at 35142
symbols instead of 64650.
For code using hidden visibility, this now requires linking with either
a current git lld or ld.bfd. (Older lld errors out on the unknown
directives, older ld.bfd will successfully link, but will print huge
amounts of warnings.)
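For illustration, a minimal sketch of how source code opts a symbol out of the automatic export under this scheme (the attribute and -fvisibility=hidden are the standard clang/gcc visibility controls; this change is what maps them to -exclude-symbols: in MinGW mode):
```
// Hidden visibility now turns into an -exclude-symbols: directive in
// MinGW mode, so this symbol stops counting against the 65536 limit.
__attribute__((visibility("hidden")))
void internalHelper() {}

// Default visibility: still auto-exported, as before.
void publicApi() {}
```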
Differential Revision: https://reviews.llvm.org/D130121
Expand TypePromotion pass to try to promote PHI-nodes in loops that are the
operand of a ZExt, using the ZExt's result type to determine the Promote Width.
Differential Revision: https://reviews.llvm.org/D111237
Other sanitizers (ASan, TSan, see added tests) already handle
memcpy.inline and memset.inline by not relying on InstVisitor to turn
the intrinsics into calls. Only MSan instrumentation currently does not
support them due to missing InstVisitor callbacks.
Fix it by actually making InstVisitor handle Mem*InlineInst.
While the mem*.inline intrinsics promise no calls to external functions
as an optimization, for the sanitizers we need to break this guarantee
since access into the runtime is required either way, and performance
can no longer be guaranteed. All other cases, where generating a call is
incorrect, should instead use no_sanitize.
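As a minimal sketch of that opt-out (`__builtin_memcpy_inline` is the clang builtin that produces the memcpy.inline intrinsic; the function name here is invented):
```
// With no_sanitize("memory"), MSan adds no instrumentation here, so
// the no-external-calls guarantee of memcpy.inline is preserved.
__attribute__((no_sanitize("memory")))
void earlyCopy(void *Dst, const void *Src) {
  __builtin_memcpy_inline(Dst, Src, 64); // size must be a constant
}
```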
Fixes: https://github.com/llvm/llvm-project/issues/57048
Reviewed By: vitalybuka, dvyukov
Differential Revision: https://reviews.llvm.org/D131577
Introduces COFFVCRuntimeBootstrapper that loads/initializes vc runtime libraries. In COFF, we *must* jit-link vc runtime libraries as COFF relocation types have no proper way to deal with out-of-reach data symbols regardless of linking mode. (Even the dynamic version, msvcrt.lib, has tons of static data symbols that must be jit-linked.) This class tries to load vc runtime library files from msvc installations, with an option to override the path.
There are some complications when dealing with the static version of the vc runtimes. First, they need static initializers to be run, which requires COFFPlatform support, but the orc runtime will not be usable before the vc runtimes are fully initialized (as the orc runtime uses the msvc stl libraries). The COFFPlatform that will be introduced in a follow-up patch will collect static initializers and run them manually in the host before bootstrapping itself. So, the user will have to do the following.
1. Create COFFPlatform that adds static initializer collecting passes.
2. LoadVCRuntime
3. InitializeVCRuntime
4. COFFPlatform.bootstrap()
Second, the internal crt initialization function had to be reimplemented on the orc side. There are other ways of doing this, but this is the simplest implementation that makes the platform fully responsible for static initializers. The complication comes from the fact that crt initialization functions (such as acrt_initialize or dllmain_crt_process_attach) actually run all static initializers by traversing from the `__xi_a` symbol to `__xi_z`. This requires symbols to be contiguously allocated in sections alphabetically sorted in memory, which is not possible right now and not practical in a jit setting. We might ignore emission of the `__xi_a` and `__xi_z` symbols and allocate them ourselves, but we have to take extra care after the orc runtime bootstrap has been done -- at that point the orc runtime should be the one running the static initializers.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D130456
Stubs the SECREL relocation to external symbols. In order to deal with this correctly, we want to request that the memory manager keep track of the address of the first block of a specific section, and keep addresses only increasing from that point. We also need a way for jitlink to get information about the global section. The relocation is only used for debug and tls info, which we don't support yet anyway, so just stub it for now.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D130451
Implements SECTION/SECREL relocation. These are used by debug info (pdb) data.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D130275
Summary: AIX XCOFF doesn't support the cold feature.
However, it shouldn't be a functional error when XCOFF encounters the cold attribute.
As with the behavior of other formats, we just ignore the attribute for now.
Reviewed By: DiggerLin
Differential Revision: https://reviews.llvm.org/D131473
This is not used as a general CPU alias, only to support -mtune. Name it as such.
Reviewed By: kito-cheng
Differential Revision: https://reviews.llvm.org/D131602
This patch makes the variants of `mm*_cast*` intel intrinsics that use `shufflevector(freeze(poison), ..)` emit efficient assembly.
(These intrinsics are planned to use `shufflevector(freeze(poison), ..)` after shufflevector's semantics update; relevant thread: D103874)
To do so, this patch
1. Updates `LowerAVXCONCAT_VECTORS` in X86ISelLowering.cpp to recognize `FREEZE(UNDEF)` operand of `CONCAT_VECTOR` in addition to `UNDEF`
2. Updates X86InstrVecCompiler.td to recognize `insert_subvector` of `FREEZE(UNDEF)` vector as its first operand.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D130339
As extending from or truncating to a mask vector does not use the same instructions as a normal cast, this patch changed the cost to 2, which is the number of instructions we actually use.
Differential Revision: https://reviews.llvm.org/D131552
Since we don't yet implement PROC's PROLOGUE and EPILOGUE support, we can safely ignore the option that disables them.
Reviewed By: thakis
Differential Revision: https://reviews.llvm.org/D131524
All the prologue instructions should have unknown source location
coordinates, while the epilogue instructions should have the source
location of the last non-debug instruction after which the epilogue
instructions are inserted.
This ensures the prologue/epilogue markers are generated correctly
in the line table.
Changes are brought in from the downstream CFI patches.
Reviewed By: scott.linder
Differential Revision: https://reviews.llvm.org/D131485
The C++ Standard requires a complete type T when using any members of
`vector<T>`, see
https://eel.is/c++draft/vector#overview-4.
This only breaks with latest libc++ in C++20 mode and does not show up
in common configurations.
We have an internal experimental configuration that discovered this.
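A minimal sketch of the failure mode (hypothetical types; per [vector.overview]/4, members of `vector<T>` may only be instantiated once T is complete):
```
#include <vector>

struct Foo; // incomplete here

struct Bar {
  Bar();              // defined out of line, where Foo is complete
  ~Bar();             // ditto: must not be implicitly instantiated here
  std::vector<Foo> V; // declaring the member alone is fine...
};

// ...but anything that instantiates vector<Foo>'s members here (e.g.
// an implicit ~Bar()) requires Foo to be complete, which the latest
// libc++ in C++20 mode now enforces.
```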
Reviewed By: alexfh
Differential Revision: https://reviews.llvm.org/D131595
This patch fixes:
llvm/lib/MC/MCParser/COFFMasmParser.cpp:333:28: error: comparison of
integers of different signs: 'unsigned int' and 'int'
[-Werror,-Wsign-compare]
This is done by calling __msan_set_alloca_origin and providing the location of the variable by using the call stack.
This is preparatory work for dropping variable names when track-origins is enabled.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D131205
These changes address issue
https://github.com/llvm/llvm-project/issues/55857.
Since R30/S30 is used as the pointer (32 bits) for the GOT table in the ppc32 ABI,
remove it from the SPE callee-saved registers when PIC is enabled.
This prevents emitting the SPE load and store for the S30 and S31 regs.
Differential revision: https://reviews.llvm.org/D127495
This addresses various regressions from D131260, as well as being a useful optimization in itself.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D131358
SPE doesn't have a fmadd instruction, so don't bother hoisting a
multiply and add sequence to this, as it'd become just a library call.
Hoisting happens too late for the CTR usability test to veto using the
CTR in a loop, and results in an assert "Invalid PPC CTR loop!".
From the opengroup specifications, atan2 may fail if the result
underflows and atan may fail if the argument is subnormal, but
we assume that does not happen and eliminate the calls if we
can constant fold the result at compile-time.
Differential Revision: https://reviews.llvm.org/D127964
canCreateUndefOrPoison currently only handles unary ops, but we intend to change that soon - this more closely matches the pushFreezeToPreventPoisonFromPropagating behaviour where the freeze is pushed up to a single operand value, as long as all others are guaranteed not to be poison/undef.
However, pushFreezeToPreventPoisonFromPropagating would freeze all uses of the value - whilst this variant requires the frozen value to be only used in the op - we can look at generalizing multiple uses later if the need arises.
Currently fcopysign for VLS vectors lowers through NEON even when the
vector width is wider than a NEON vector, causing bad codegen as the
vectors are split. This patch causes SVE to be used for these vectors
instead, giving much better codegen on wide VLS vectors.
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D128642
Prior to this patch, libcalls inserted by the SelectionDAG legalizer
could never be tailcalled. The eligibility of libcalls for tail calling
is partly determined by checking TargetLowering::isInTailCallPosition
and comparing the return type of the libcall and the caller.
isInTailCallPosition in turn calls TargetLowering::isUsedByReturnOnly
(which always returns false if not implemented by the target).
This patch provides a minimal implementation of
TargetLowering::isUsedByReturnOnly - enough to support tail calling
libcalls on hard float ABIs. Soft-float ABIs are left for a follow on
patch. libcall-tail-calls.ll also shows missed opportunities to tail
call integer libcalls, but this is due to issues outside of
the isUsedByReturnOnly hook.
Differential Revision: https://reviews.llvm.org/D131087
This specific optimisation is handled in OptimizeBlock in BranchFolding
so is redundant. As discussed on the review thread, I've verified that
we have test coverage for that optimisation within test/CodeGen/X86 by
disabling the BranchFolding version of this transform after applying
this patch and rerunning the test suite.
Differential Revision: https://reviews.llvm.org/D129204
WebAssembly globals are represented as IR globals with the wasm_var
address space (AS1). Prior to this patch, a wasm global load that isn't
lowerable will produce a failure to select, while a wasm global store
will produce incorrect code. This patch ensures we consistently produce
a clear error.
As noted in the test cases, it's conceivable that a frontend or an
optimisation pass could produce similar IR even in the presence of the
semantic restrictions on pointers to Wasm globals in the frontend, which
is a separate problem to address.
Differential Revision: https://reviews.llvm.org/D131387
The cost of converting from or to a mask vector is different from other cases. We could not use PowDiff to calculate it. This patch sets it to 3, as we use 3 instructions to do it.
Differential Revision: https://reviews.llvm.org/D131149
The relevant property of allocation functions of interest here is
their uniqueness (in the sense of disjoint provenance), which is
encoded by the noalias return attribute.
Differential Revision: https://reviews.llvm.org/D130225
This patch teaches llvm-profdata to output the sample profile in the
JSON format. The new option is intended to be used for research and
development purposes. For example, one can write a Python script to
take a JSON file and analyze how similar different inline instances of
a given function are to each other.
I've chosen JSON because Python can parse it reasonably fast, and it
just takes a couple of lines to read the whole data:
import json
with open('profile.json') as f:
    profile = json.load(f)
Differential Revision: https://reviews.llvm.org/D130944
This change finalizes the series of patches aiming to replace the old
strategy of VGPR to SGPR copies lowering. Following
https://reviews.llvm.org/D128252 and https://reviews.llvm.org/D130367, code
parts that are no longer used were removed. The pass's main loop no longer
makes the MIR changes but collects information for further analysis. The
actual MIR lowering happens later, according to the analysis result, in a set
of separate functions. Another important change concerns the order of
lowering: VGPR to SGPR copies lowering is done first, to have priority over
the rest of the MIR changes.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D131246
ROPI/RWPI are not supported with LOAD_STACK_GUARD currently.
Reviewed By: nickdesaulniers, rengolin
Differential Revision: https://reviews.llvm.org/D131427
After D121595 was committed, I noticed regressions associated with small trip
count vectorisation by tail folding with scalable vectors. As a solution
for those issues I propose to introduce a minimal trip count threshold value.
Differential Revision: https://reviews.llvm.org/D130755
To move from the TF C API to TFLite, we found that the argmax op in TFLite does not work for int64 inputs, so cast the int64 inputs to int32 to make the TFLite argmax op work.
Differential Revision: https://reviews.llvm.org/D131462
This patch adds the names of the Arm Architecture Reference Manual (ARM)
features to the corresponding Subtarget Features in the AArch64 backend
and target parser.
The aim of this is to make it clearer what architectural features a
subtarget feature might enable (so, which features a CPU must provide to
support that subtarget feature), and so make it easier to add new CPUs
in the future.
Differential Revision: https://reviews.llvm.org/D131257
This is a follow-up patch to D130999. In the test, the MIR contains an
unreachable MBB but the code attempts to look it up in MLocs. This
patch fixes this issue by checking for the default-constructed value.
rdar://97226240
Differential Revision: https://reviews.llvm.org/D131453
si-annotate-control-flow does a depth-first traversal of the BB's of
a function to insert amdgcn if intrinsics for conditional
branches so that isel can generate correct instructions later.
si-annotate-control-flow checks whether the successor BB for the 'else'
branch of a conditional branch has been visited. If it has been
visited, si-annotate-control-flow assumes the conditional
branch has been handled and will not try to insert if intrinsic
for it.
This assumption is not correct when the IR contains multiple
unreachable BB's. Then 'if' intrinsics are not inserted and incorrect
ISA is generated.
This patch fixes the issue by letting amdgpu-unify-divergent-exit-nodes
unify unreachables even if they are uniformly reached. In this way
the IR will not contain multiple exits, and the structurizer is able to
structurize the IR containing one unified exit.
Reviewed by: Ruiling Song, Matt Arsenault
Differential Revision: https://reviews.llvm.org/D131181
Fixes: SWDEV-343244
This adds a +forced-atomics target feature with the same semantics
as +atomics-32 on ARM (D130480). For RISCV targets without the +a
extension, this forces LLVM to assume that lock-free atomics
(up to 32/64 bits for riscv32/64 respectively) are available.
This means that atomic load/store are lowered to a simple load/store
(and fence as necessary), as these are guaranteed to be atomic
(as long as they're aligned). Atomic RMW/CAS are lowered to __sync
(rather than __atomic) libcalls. Responsibility for providing the
__sync libcalls lies with the user (for privileged single-core code
they can be implemented by disabling interrupts). Code using
+forced-atomics and -forced-atomics is not ABI compatible if atomic
variables cross the ABI boundary.
For context, the difference between __sync and __atomic is that the
former are required to be lock-free, while the latter requires a
shared global lock provided by a shared object library. See
https://llvm.org/docs/Atomics.html#libcalls-atomic for a detailed
discussion on the topic.
This target feature will be used by Rust's riscv32i target family
to support the use of atomic load/store without atomic RMW/CAS.
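As a rough sketch of the "privileged single-core" option mentioned above (an illustration only, not part of this patch: it assumes M-mode, a single hart, and the standard __sync_val_compare_and_swap_4 libcall name):
```
#include <cstdint>

// CAS made atomic by masking interrupts: valid only for privileged
// single-core code, where nothing else can touch *P concurrently.
extern "C" uint32_t __sync_val_compare_and_swap_4(volatile uint32_t *P,
                                                  uint32_t Expected,
                                                  uint32_t Desired) {
  unsigned long Saved;
  // Atomically clear mstatus.MIE (bit 3) and capture the old value.
  asm volatile("csrrci %0, mstatus, 8" : "=r"(Saved) : : "memory");
  uint32_t Old = *P;
  if (Old == Expected)
    *P = Desired;
  // Restore the previous interrupt-enable state.
  asm volatile("csrs mstatus, %0" : : "r"(Saved & 8) : "memory");
  return Old;
}
```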
Differential Revision: https://reviews.llvm.org/D130621
This allows relaxing some relocations to STT_SECTION symbol+offset
instead of emitting a relocation against a symbol.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D131433
ARMAsmPrinter::emitFunctionEntryLabel() was not calling the base class
function so the $local alias was not being emitted. This should not have
any functional effect right now since ARM does not generate different code
for the $local symbols, but it could be improved in the future.
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D131392
The RelLookupTableConverter pass currently only supports 64-bit
pointers. This is currently enforced using an isArch64Bit() check
on the target triple. However, we consider x32 to be a 64-bit target,
even though the pointers are 32-bit. (And independently of that
specific example, there may be address spaces with different pointer
sizes.)
As such, add an additional guard for the size of the pointers that
are actually part of the lookup table.
Differential Revision: https://reviews.llvm.org/D131399
This allows a number of optimisation passes to work.
E.g. BranchFolding and MachineBlockPlacement.
Differential Revision: https://reviews.llvm.org/D131316
The register operand of DBG_VALUE is not selected to a proper register
bank in either AArch64 or X86. This would cause getRegClass to crash after
global ISel. After discussion, we think the MIR should assume all
virtual registers have a proper register class set after global ISel,
so this patch fixes the gap for DBG_VALUE on AArch64 and X86.
Differential Revision: https://reviews.llvm.org/D129037
The previous code overwrites VRMap for prologue stages during Phi
generation if a register spans many stages.
As a result, the wrong register is used as the one coming from
the prologue in Phis at later stages. (A process exists to correct
this, but it does not work in all cases.)
In addition, VRMap for prologue must be preserved until addBranches().
This patch fixes them by separating the map for Phis into a different
variable (VRMapPhi).
Reviewed By: bcahoon
Differential Revision: https://reviews.llvm.org/D127840
1. Missing instruction information (FTSSEL, FMSB, PFIRST and RDFFR)
is added and CompleteModel is set to one.
2. Information for pseudo SVE instructions is added. Those
instructions are present at the time of scheduling.
3. Resource and latency information for SVE instructions is modified
to be more accurate.
For example, the description for CMPEQ, which consumes one cycle
each of unit FLA and PPR, is as follows.
```
Previous:
def A64FXGI01 : ProcResGroup<[A64FXIPFLA, A64FXIPPR]>;
def A64FXWrite_4Cyc_GI01 : SchedWriteRes<[A64FXGI01]> {...
Modified:
def A64FXGI0 : ProcResGroup<[A64FXIPFLA]>;
def A64FXGI1 : ProcResGroup<[A64FXIPPR]>;
def A64FXWrite_CMP : SchedWriteRes<[A64FXGI0, A64FXGI1]> {...
```
Reference: A64FX Microarchitecture Manual (Table 16-3)
https://github.com/fujitsu/A64FX/blob/master/doc/A64FX_Microarchitecture_Manual_en_1.7.pdf
Reviewed By: dmgreen, kawashima-fj
Differential Revision: https://reviews.llvm.org/D131165
Map hardware loop intrinsics loop_decrement and set_loop_iteration
to the new PowerPC pseudo instructions, so that the hardware loop
intrinsics will be expanded to normal cmp+branch form or ctrloop
form based on the CTR register usage on MIR level.
Reviewed By: lkail
Differential Revision: https://reviews.llvm.org/D123366
Exclude the terminating end opcode from the epilog - it doesn't
correspond to an actual instruction that is included in the epilog
itself (within the .seh_startepilogue/.seh_endepilogue range).
In most (all?) cases, an epilog is followed by a matching terminating
instruction though (a ret or a branch to a tail call), but it's not
strictly within the .seh_startepilogue/.seh_endepilogue range.
This fixes a number of failed asserts in cases where the codegen
has incorrectly reordered SEH opcodes so they don't match up
exactly with their instructions.
However this still just avoids failing the assertion; the root cause
of generating unexpected epilogs is still present (and fixing that is
a less obvious issue).
Differential Revision: https://reviews.llvm.org/D131393
With profile data, non-trivial LoopUnswitch will only apply to non-cold loops, as unswitching cold loops may not gain much benefit but significantly increases the code size.
Reviewed By: aeubanks, asbirlea
Differential Revision: https://reviews.llvm.org/D129599
Implements the pc element for the symbolizing filter, including its
"ra" and "pc" modes. Return addresses ("ra") are adjusted by
decrementing one. By default, {{{pc}}} elements are assumed to point to
precise code ("pc") locations. Backtrace elements will adopt the
opposite convention.
Along the way, some minor refactors of value printing and colorization.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D131115
We are seeing significant performance loss when an alloca fails to get promoted
to register. I have observed that this is due to the common type found when
attempting to rewrite partition users being unviable for promotion, whereas if
we had continued looking for a type, we would have found a subtype in the
original allocated type that would have enabled promotion. Thus, first check if
the initial common type found is viable for promotion, and if not, continue
looking instead of stopping with the initial common type found.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D128073
The floating point stores use a different register class, so it
probably makes sense to have a different SchedRead.
Reviewed By: monkchiang
Differential Revision: https://reviews.llvm.org/D131379
Previously we'd go off the end of the BI iterator because we expected
that the relative positions of common blocks before and after were
consistent. That's not always true though, for example with
jump-threading.
Reviewed By: jamieschmeiser
Differential Revision: https://reviews.llvm.org/D130596
The build breakages should be addressed by d4abdd2e3d:
[CMake] Check CMAKE_CXX_STANDARD and error if it's to old
Thanks to Tobias and Roy for addressing these issues.
This patch adds basic support for a DAG variant of the canCreateUndefOrPoison call and updates DAGCombiner::visitFREEZE to use it, further Opcodes (including target specific Opcodes) can be handled when we have test coverage.
So far, I've left visitFREEZE to just use this for unary nodes (which currently means the existing BITCAST/FREEZE cases) - later patches will add other unary opcodes (with test coverage) and we can also refactor visitFREEZE to support a general number of operands like we do in InstCombinerImpl::pushFreezeToPreventPoisonFromPropagating.
I'm not aware of any vector test freeze coverage so the DemandedElts (and the Depth) args are not being used yet - but they are in place. Similarly we will be able to handle poison generating SDNodeFlags as and when it becomes an issue.
Part of the work for D106675 / PR50468
Differential Revision: https://reviews.llvm.org/D130646
These are new debug types that ship with the latest
Windows SDK and would warn and finally fail lld-link.
The symbols seem to be related to Microsoft's XFG,
which is their version of CFG. We can't handle any of
this yet, so for now we can just ignore these types
so that lld doesn't fail with a new version of the Windows
SDK.
Fixes: #56285
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D129378
When EarlyCSE tries to common vector masked loads/stores, it first checks that
they have the same base operand and then assumes that this is enough for the mask
types to be equal. This is true for typed pointers but false for opaque ones:
two loads of different vector sizes from the same base pointer `%b` look the same,
`ptr %b`. (For typed pointers, `%b` was cast to a vector pointer type, so the
bases were different.)
Change the assert to a return in the lambda `isSubmask` so this transformation
works properly with opaque pointers.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D131251
This patch emits table lookup in expandCTTZ.
Context -
https://reviews.llvm.org/D113291 transforms set of IR instructions to
cttz intrinsic but there are some targets which do not support CTTZ or
CTLZ. Hence, I generate a table lookup in TargetLowering::expandCTTZ().
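For reference, the classic de Bruijn multiply-and-lookup is the kind of table lookup such an expansion performs (an illustrative sketch; the table the pass actually emits may differ in detail):
```
#include <cstdint>

// cttz for nonzero 32-bit values: (V & -V) isolates the lowest set
// bit, and multiplying by the de Bruijn constant maps each power of
// two to a unique 5-bit table index.
int cttz32(uint32_t V) {
  static const int Table[32] = {
      0,  1,  28, 2,  29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4,  8,
      31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6,  11, 5,  10, 9};
  return Table[((V & -V) * 0x077CB531u) >> 27];
}
```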
Differential Revision: https://reviews.llvm.org/D128911
FoldConstantArithmetic can fold constant vectors hidden behind bitcasts (e.g. vXi64 -> v2Xi32 on 32-bit platforms), but currently bails if either vector contains undef elements. These undefs can often occur due to SimplifyDemandedBits/VectorElts calls recognising that the upper bits are often unnecessary (e.g. funnel-shift/rotate implicit-modulo and AND masks).
This patch adds a basic 'FoldValueWithUndef' handler that will attempt to constant fold if one or both of the ops are undef - so far this just handles the AND and MUL cases where we always fold to zero.
The RISCV codegen increase is interesting - it looks like the BUILD_VECTOR lowering was loading a constant pool entry but now (with all elements defined constant) it can materialize the constant instead?
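A minimal sketch of the handler's shape (assumed, not the actual code): for AND and MUL, zero is always a valid choice for the undef operand, so the fold is unconditional.
```
#include <optional>
#include "llvm/ADT/APInt.h"
#include "llvm/CodeGen/ISDOpcodes.h"

// If either constant operand is undef, AND and MUL can resolve the
// undef to 0, collapsing the whole result to the zero constant.
std::optional<llvm::APInt> foldValueWithUndef(unsigned Opcode,
                                              unsigned BitWidth) {
  switch (Opcode) {
  case llvm::ISD::AND: // undef & C -> 0
  case llvm::ISD::MUL: // undef * C -> 0
    return llvm::APInt::getZero(BitWidth);
  default:
    return std::nullopt; // leave other opcodes unfolded
  }
}
```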
Differential Revision: https://reviews.llvm.org/D130839
The ABI for big-endian AArch32, as specified by AAELF32, is above-
averagely complicated. Relocatable object files are expected to store
instruction encodings in byte order matching the ELF file's endianness
(so, big-endian for a BE ELF file). But executable images can
//either// do that //or// store instructions little-endian regardless
of data and ELF endianness (to support BE32 and BE8 platforms
respectively). They signal the latter by setting the EF_ARM_BE8 flag
in the ELF header.
(In the case of the Thumb instruction set, this all means that each
16-bit halfword of a Thumb instruction is stored in one or other
endianness. The two halfwords of a 32-bit Thumb instruction must
appear in the same order no matter what, because the first halfword is
the one that must avoid overlapping the encoding of any 16-bit Thumb
instruction.)
llvm-objdump was unconditionally expecting Arm instructions to be
stored little-endian. So it would correctly disassemble a BE8 image,
but if you gave it a BE32 image or a BE object file, it would retrieve
every instruction in byte-swapped form and disassemble it to
nonsense. (Even an object file output by LLVM itself, because
ARMMCCodeEmitter outputs instructions big-endian in big-endian mode,
which is correct for writing an object file.)
This patch allows llvm-objdump to correctly disassemble all three of
those classes of Arm ELF file. It does it by introducing a new
SubtargetFeature for big-endian instructions, setting it from the ELF
image type and flags during llvm-objdump setup, and teaching both
ARMDisassembler and llvm-objdump itself to pay attention to it when
retrieving instruction data from a section being disassembled.
Differential Revision: https://reviews.llvm.org/D130902
Calling reserve() used to require an RPC call. This commit allows large
ranges of executor address space to be reserved. Subsequent calls to
reserve() will return subranges of already reserved address space while
there is still space available.
Differential Revision: https://reviews.llvm.org/D130392
D129150 added a combine from shuffles to And that creates a BUILD_VECTOR
of constant elements. We need to ensure that the elements are of a legal
type, to prevent asserts during lowering.
Fixes #56970.
Differential Revision: https://reviews.llvm.org/D131350
Add patterns to select predicated instructions when lowering:
fadd(a, select(mask, b, splat(0)))
fsub(a, select(mask, b, splat(0)))
'fadd' is unsafe unless the no-signed-zeros fast-math flag is set, since
-0.0 + 0.0 = 0.0
changes the sign. Alive2: https://alive2.llvm.org/ce/z/wbhJh_
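That sign argument can be checked directly in plain C++ (assuming IEEE-754 round-to-nearest):
```
#include <cassert>
#include <cmath>

int main() {
  double A = -0.0;
  double Folded = A + 0.0;       // what an unpredicated fadd computes
  assert(!std::signbit(Folded)); // -0.0 + 0.0 == +0.0: sign is lost
  assert(std::signbit(A));       // the masked-off lane must stay -0.0
}
```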
Also adds FMA patterns for:
fadd(a, select(mask, mul(b, c), splat(0))) -> fmla(a, mask, b, c)
fsub(a, select(mask, mul(b, c), splat(0))) -> fmla(a, mask, b, c)
These patterns require the 'contract' fast-math flag to be set, and the
fadd 'nsz' as above.
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D130564
Given a TypeIndex or CVType, return its size in bytes. Basically it
is the inverse of 'CodeViewDebug::lowerTypeBasic', which returns a
TypeIndex based on a size.
Reviewed By: rnk, djtodoro
Differential Revision: https://reviews.llvm.org/D129846
This patch ensures the `$fp` always points to the bottom of the vararg
spill region.
Includes support for expanding `ISD::DYNAMIC_STACKALLOC`.
Differential Revision: https://reviews.llvm.org/D130250
This allows optimization without LTO. Also remove some useless else-ifs.
Signed-off-by: Jun Zhang <jun@junz.org>
Differential Revision: https://reviews.llvm.org/D131313
This reverts commit 6ea5bf436a.
6ea5bf436a made use of new c++17 rules regarding
order of evaluation (specifically: in function calls the expression naming the
function should be sequenced before the evalution of any operands) to simplify
some continuation-passing calls. Unfortunately this appears to break at least
one MSVC bot: https://lab.llvm.org/buildbot/#/builders/123/builds/12149 .
Includes an update to the comments to note that the workaround is now based on
MSVC limitations, not on LLVM adopting c++17.
std::iterator has been deprecated in C++17 and some standard library implementations such as MS STL or libc++ emit deprecation messages when using the class.
Since LLVM has now switched to C++17 these will emit warnings on these implementations, or worse, errors in build configurations using -Werror.
This patch fixes these issues by replacing them with LLVMs own llvm::iterator_facade_base which offers a superset of functionality of std::iterator.
Differential Revision: https://reviews.llvm.org/D131320
This function heap-allocates a ThreadSafeModule (the current C bindings assume
that TSMs are always heap-allocated), but was failing to free it.
Should fix http://llvm.org/PR56953.
If we're adding a constant that can't use addi we try a few tricks,
one of which is using li+sh3add. We should not do this if lui+add
would work. For example adding 8192. Using sh3add prevents folding
a sext.w to form addw, thus increasing instruction count.
This fixes a bug reported privately by @craig.topper. Here's an example which illustrates the problem:
vsetivli a1, a0, e32, m1, ta, mu # both DefInfo and PrevInfo
vsetivli a2, a1, e32, m4, ta, mu
With the unsound result being:
vsetivli a1, a0, e32, m1, ta, mu
vsetivli a2, a0, e32, m4, ta, mu
Consider the case where this is running on a machine with VLEN=512. For this case, the VLMAXs are 16 and 64 respectively.
Consider a0 = 33. The correct result is: a1 = 16, and a2 = 16.
After the unsound optimization: a1 = 16 and a2 = 33.
This particular example used VLMAXs which differed by more than a power of two. With a difference of only one power of two, there's another form of this bug which involves the AVL < 2 x VLMAX special case, but that one's more complicated to construct as many examples turn out accidentally sound.
This patch takes the approach of simply removing the unsound optimization, but there are multiple sound sub-cases of it. I plan to return to at least a couple of them, but figured it was cleaner to remove the unsound optimization (for ease of backporting), and then review the new optimizations on their own.
Differential Revision: https://reviews.llvm.org/D131264
Create function segments and emit unwind info for them.
A segment must be less than 1MB, and no prolog or epilog is split between two
segments.
This patch should generate correct, though not optimal, unwind info for large
functions. Currently it only generates packed info (.pdata) for functions
that are less than 1MB (single-segment functions). This is NFC from before this
patch.
The next step is to enable (.pdata)-only unwind info for the first segment or
for segments that have neither prolog nor epilog in a multi-segment function.
Another future work item is to further split segments that require more than 255
code words or have more than 65535 epilogs.
Reference:
https://docs.microsoft.com/en-us/cpp/build/arm64-exception-handling#function-fragments
Differential Revision: https://reviews.llvm.org/D130049
NOTE: i8 vector splats are ignored because the immediate range of
DUP already has full coverage.
Differential Revision: https://reviews.llvm.org/D131078
We get a couple of improvements from recognizing swapped
operand patterns that were not handled by the replicated
code.
This should also enable simplifying larger patterns as
seen in issue #56653 and issue #56654, but that requires
enhancements to isImpliedCondition() itself.
There are no AMDGPUSampleVariant versions for _G16; it is treated more like a
modifier for derivatives (_D) (also for intrinsics, where it is an overloaded
type instead of part of the intrinsic name), so we ended up making more
variants for these instructions than we actually needed.
32-bit derivatives need 6 dwords at most, while 16-bit need 4 at most. Using the
same AMDGPUSampleVariant for both, we ended up creating 2 more variants per
instruction than were necessary.
In total this deletes 260 unused tablegen records.
Differential Revision: https://reviews.llvm.org/D131252
Given a poison constant as input, the dyn_cast to a ConstantInt would
fail so we would fall through to the generic code that attempts to fold
each element of the input vectors. The inputs to these intrinsics are
not vectors though, leading to a compile time crash. Instead bail out
properly for poison values by returning nullptr. This doesn't try to
define what poison means for these intrinsics.
Fixes #56945
This fixes 69 llvm tests that failed when EXPENSIVE_CHECKS was enabled.
llvm/test/Transforms/IROutliner/outlining-commutative-operands-opposite-order.ll
is one example.
When we have EXPENSIVE_CHECKS, _GLIBCXX_DEBUG is defined. This means
that libstdc++ will call the compare function to check if it is
implemented correctly (that !(a < a) is true).
This happens even if there is only one item and here, we expect
to see one return void or multiple returns of constant integers.
Don't sort if we have 1 item, but do assert that it is the 1
ret void we expect. In the comparator, assert that neither
Value is a nullptr in case one ended up in the list somehow.
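A minimal sketch of that shape (assumed names, not the actual IROutliner code):
```
// _GLIBCXX_DEBUG may evaluate cmp(A, A) even for tiny ranges, so skip
// the sort for a single element and defend against stray nullptrs.
if (SortedKeys.size() == 1) {
  assert(isa<ReturnInst>(SortedKeys.front()) && "expected lone ret void");
} else {
  llvm::stable_sort(SortedKeys, [](const Value *LHS, const Value *RHS) {
    assert(LHS && RHS && "nullptr Value in sort keys");
    return cast<ConstantInt>(LHS)->getZExtValue() <
           cast<ConstantInt>(RHS)->getZExtValue();
  });
}
```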
Reviewed By: AndrewLitteken
Differential Revision: https://reviews.llvm.org/D130230
According to the description of the LoongArch abi documentation,
(https://loongson.github.io/LoongArch-Documentation/LoongArch-ELF-ABI-EN.html)
the calling convention of LoongArch is almost the same as RISCV's
(except for the vector part), so we borrow the implementation of RISCV.
This patch only guarantees the correctness of lp64d, because only the
part of lp64d is described in detail in the documentation.
Differential Revision: https://reviews.llvm.org/D130249
Closing https://github.com/llvm/llvm-project/issues/56919
It is meaningless to preserve the lifetime markers for the spilled
allocas in the coroutine frames and it would block some optimizations
too.
This is a simple addition to emitConditionalComparison, to match CCMP
with immediates using getIConstantVRegValWithLookThrough, letting it
select the CCMPri variants of the instructions.
Differential Revision: https://reviews.llvm.org/D131073
`BMI` new instruction `tzcnt` has better performance than `bsf` on new
processors. Its encoding has a mandatory prefix '0xf3' compared to
`bsf`. If we force emit `rep` prefix for `bsf`, we will gain better
performance when the same code run on new processors.
GCC has already done it this way: https://c.godbolt.org/z/6xere6fs1
Fixes #34191
Reviewed By: craig.topper, skan
Differential Revision: https://reviews.llvm.org/D130956
A const reference is preferred over a non-null const pointer.
`Type *` is kept as is to match the other overload.
Reviewed By: davidxl
Differential Revision: https://reviews.llvm.org/D131197
When folding (sra (add (shl X, 32), C1), 32 - C) -> (shl (sext_inreg (add X, C1), i32), C)
it's possible that the add is used by multiple sras. We should
allow the combine if all the SRAs will eventually be updated.
After transforming all of the sras, the shls will share a single
(sext_inreg (add X, C1), i32).
This pattern occurs if an sra with 32 is used as index in multiple
GEPs with different scales. The shl from the GEPs will be combined
with the sra before we get a chance to match the sra pattern.
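A hypothetical C++ source shape that hits this (illustrative only; riscv64, names invented):
```
// The sign-extended (x + 1) indexes two GEPs with scales 4 and 8, so a
// single (add (shl X, 32), 1<<32) feeds two sras with different shift
// amounts once the GEP shls are combined in.
long f(int x, int *a, long *b) {
  int i = x + 1;      // becomes (sext_inreg (add X, 1), i32)
  return a[i] + b[i]; // both sras end up sharing one sext_inreg + add
}
```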
COFF has a verifier check that private global variables don't have a comdat of the same name.
Reviewed By: rnk
Differential Revision: https://reviews.llvm.org/D131043
1) Overloaded (instruction-based) method is a wrapper around the current (opcode-based) method.
2) This patch also changes a few callsites (VectorCombine.cpp,
SLPVectorizer.cpp, CodeGenPrepare.cpp) to call the overloaded method.
3) This is a split of D128302.
Differential Revision: https://reviews.llvm.org/D131114
In contrast to AAPotentialValues, the constant values version can
contain implicit `undef` in the set. We had an assertion that could
misfire before. Handle it properly now.
When folding (sra (add (shl X, 32), C1), 32 - C) -> (shl (sext_inreg (add X, C1), i32), C)
ignore the use count on the (shl X, 32).
The sext_inreg after the transform is free. So we're only making
2 new instructions, the add and the shl. So we only need to be
concerned with replacing the original sra+add. The original shl
can have other uses. This helps if there are multiple different
constants being added to the same shl.
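A hypothetical source shape for that multiple-constants case (illustrative only):
```
// (add X, 1) and (add X, 2) are sign-extended separately but share the
// original (shl X, 32), so the shl keeps extra uses that the fold can
// now safely ignore.
long g(int x, int *a) {
  return a[x + 1] + a[x + 2];
}
```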
This connects the Symbolizer to the markup filter and enables the first
working end-to-end flow using the filter.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D130187
As discussed in [0], this diff adds the `skipprofile` attribute to
prevent the function from being profiled while allowing profiled
functions to be inlined into it. The `noprofile` attribute remains
unchanged.
The `noprofile` attribute is used for functions where it is
dangerous to add instrumentation to while the `skipprofile` attribute is
used to reduce code size or performance overhead.
[0] https://discourse.llvm.org/t/why-does-the-noprofile-attribute-restrict-inlining/64108
Reviewed By: phosek
Differential Revision: https://reviews.llvm.org/D130807
This allows the construct to be shared between different backends. However, it
still remains illegal to use TypedPointerType in LLVM IR--the type is intended
to remain an auxiliary type, not a real LLVM type. So no support is provided for
LLVM-C, nor bitcode, nor LLVM assembly (besides the bare minimum needed to make
Type->dump() work properly).
Reviewed By: beanz, nikic, aeubanks
Differential Revision: https://reviews.llvm.org/D130592
This fixes warnings like these:
../lib/ExecutionEngine/Orc/MemoryMapper.cpp:364:9: warning: ignoring return value of function declared with 'warn_unused_result' attribute [-Wunused-result]
joinErrors(std::move(Err),
^~~~~~~~~~ ~~~~~~~~~~~~~~~
Differential Revision: https://reviews.llvm.org/D131056
In RVV, we use vwredsum.vs and vwredsumu.vs for vecreduce.add(ext(Ty A)) if the result type's width is twice the input vector's SEW-width. In this situation, the cost of the extended add reduction should be the same as that of the single-width add reduction. The same applies to the vector float widening reduction.
Differential Revision: https://reviews.llvm.org/D129994
The isOnlyUserOf check prevented the fold if the chain result had any
users. What we really care about is that the data result from the
AND is only used by the TEST, and the flags results from the ANDs
aren't used at all. It's ok if the chain has users, we just need
to replace those users with the chain from the TESTrm.
Reviewed By: LuoYuanke
Differential Revision: https://reviews.llvm.org/D131117
BoundsChecking uses ObjectSizeOffsetEvaluator to keep track of the
underlying size/offset of pointers in allocations. However,
ObjectSizeOffsetVisitor (something ObjectSizeOffsetEvaluator
uses to check for constant sizes/offsets)
doesn't quite treat sizes and offsets the same way as
BoundsChecking. BoundsChecking wants to know the size of the
underlying allocation and the current pointer's offset within
it, but ObjectSizeOffsetVisitor only cares about the size
from the pointer to the end of the underlying allocation.
This only comes up when merging two size/offset pairs. Add a new mode to
ObjectSizeOffsetVisitor which cares about the underlying size/offset
rather than the size from the current pointer to the end of the
allocation.
Fixes a false positive with -fsanitize=bounds.
Reviewed By: vitalybuka, asbirlea
Differential Revision: https://reviews.llvm.org/D131001
This is the 2nd patch of the two-patch series (D130188, D130189) that
fix PR56275 (https://github.com/llvm/llvm-project/issues/56275) which
is a missed opportunity for loop interchange.
As a follow-up to the dependence analysis (DA) patch D130188, this patch
normalizes DA results in loop interchange, such that negative dependence
vectors queried by loop interchange are reversed to be non-negative.
Now all tests in PR56275 can get interchanged. Those tests are added
in lit test as `pr56275.ll`.
Reviewed By: kawashima-fj, bmahjour, Meinersbur, #loopoptwg
Differential Revision: https://reviews.llvm.org/D130189
This patch is the first of the two-patch series (D130188, D130179) that
resolve PR56275 (https://github.com/llvm/llvm-project/issues/56275)
which is a missed opportunity, where a perfectly valid case for loop
interchange failed interchange legality.
If the distance/direction vector produced by dependence analysis (DA) is
negative, it needs to be normalized (reversed). This patch provides helper
functions `isDirectionNegative()` and `normalize()` in DA that does the
normalization, and clients can query DA to do normalization if needed.
A pass option `<normalized-results>` is added to DependenceAnalysisPrinterPass,
and we leverage it to update DA test cases to make sure of test coverage. The
test cases added in `Banerjee.ll` shows that negative vectors are normalized
with `print<da><normalized-results>`.
Reviewed By: bmahjour, Meinersbur, #loopoptwg
Differential Revision: https://reviews.llvm.org/D130188
During LTO a local promoted to a global gets a unique suffix based on
a hash of the module IR. This means that changes in the local's module
can affect the contents in another module that imported it (because the name
of the imported promoted local is changed, but that doesn't reflect a
real change in the importing module). So any tool that's
validating changes to the importing module will see a superficial change.
Instead of using the module hash, we can use the "source_filename" if it
exists to generate a unique identifier that doesn't change due to LTO
shenanigans.
Differential Revision: https://reviews.llvm.org/D128863
This just shuffles implementations and declarations around. Now the
logger and the TF C API-based model evaluator are separate.
Differential Revision: https://reviews.llvm.org/D131116
D129980 converts (seteq (i64 (and X, 0xffffffff)), C1) into
(seteq (i64 (sext_inreg X, i32)), C1). If bit 31 of X is 0, it
will be turned back into an 'and' by SimplifyDemandedBits which
can cause an infinite loop.
To prevent this, check if bit 31 is 0 with computeKnownBits before
doing the transformation.
Fixes PR56905.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D131113
If we're going to emit a rep prefix before bsf as proposed in
D130956, it makes sense to promote i16 operations to i32 to avoid
the false dependency of tzcntw.
Reviewed By: skan, pengfei
Differential Revision: https://reviews.llvm.org/D130995
The testcase was delta-reduced from an LTO build with sanitizer
coverage and the MIR tail duplication pass caused a machine basic
block to become unreachable in MIR. This caused the MBB to be invisible
to the reverse post-order traversal used to initialize the MBB <->
RPONumber lookup tables.
rdar://97226240
Differential Revision: https://reviews.llvm.org/D130999
The function `handleDebugValue` has custom logic to handle certain kinds
of constants, namely integers, floats and null pointers. However, it does
not handle constant pointers created from IntToPtr ConstantExpressions.
This patch addresses the issue by replacing the Constant with its
integer operand.
A similar bug was addressed for GlobalISel in D130642.
Reviewed By: aprantl, #debug-info
Differential Revision: https://reviews.llvm.org/D130908
This is mostly a stylistic change to make the uniform memop widening cost
code fit more naturally with the surrounding code. It's not strictly
speaking NFC, as I added in the store-with-invariant-value case, and we
could in theory have a target where a gather/scatter is cheaper than a
single load/store... but it's probably NFC in practice. Note that the
scatter/gather result can still be overridden later if the result is
uniform-by-parts.
This patch ensures consistency in the construction of FP_ROUND nodes
such that they always use ISD::TargetConstant instead of ISD::Constant.
This additionally fixes a bug in the AArch64 SVE backend where patterns
were matching against TargetConstant nodes and sometimes failing when
passed a Constant node.
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D130370
An unnecessary sext.w is generated when masking the result of the
riscv_masked_cmpxchg_i64 intrinsic. Implementing handling of the
intrinsic in ComputeNumSignBitsForTargetNode allows it to be removed.
Although this isn't a particularly important optimisation, removing the
sext.w simplifies implementation of an additional cmpxchg-related
optimisation in D130192.
Although I can't produce a test with different codegen for the other
atomics intrinsics, these are added as well for completeness.
Differential Revision: https://reviews.llvm.org/D130191
Enable SGPRs for the following operands of these opcodes:
- src operands of VOP3 variant.
- src2 operand of DPP variants.
Differential Revision: https://reviews.llvm.org/D130989
`BMI` new instruction `tzcnt` has better performance than `bsf` on new
processors. Its encoding has a mandatory prefix '0xf3' compared to
`bsf`. If we force emit `rep` prefix for `bsf`, we will gain better
performance when the same code run on new processors.
GCC has already done it this way: https://c.godbolt.org/z/6xere6fs1
Fixes #34191
Reviewed By: skan
Differential Revision: https://reviews.llvm.org/D130956
Unfortunately, this overflow is extremely hard to reproduce reliably (in fact, I was unable to do so). The issue is that:
- getOperandsToCreate sometimes skips creating an SCEV for the LHS
- then, createSCEV is called for the BinaryOp
- ... which calls getNoWrapFlagsFromUB
- ... which under certain circumstances calls isSCEVExprNeverPoison
- ... which under certain circumstances requires the SCEVs of all operands
For certain deep dependency trees, this causes a stack overflow.
Reviewed By: bkramer, fhahn
Differential Revision: https://reviews.llvm.org/D129745
Mark ModRefInfo as a bitmask enum, which allows using normal
& and | operators on it. This supersedes various functions like
unionModRef() and intersectModRef(). I think this makes the code
cleaner than going through helper functions...
Differential Revision: https://reviews.llvm.org/D130870
Sometimes SCEV cannot infer nuw/nsw from something as simple as
```
len in [0, MAX_INT]
...
iv = phi(0, iv.next)
guard(iv <s len)
guard(iv <u len)
iv.next = iv + 1
```
just because flag strengthening only relies on the definition and does not use local facts.
This patch adds support for the simplest case: inference of flags of `add(x, constant)`
if we can contextually prove that `x <= max_int - constant`.
In case it has a negative CT impact, we can add an option to switch it off. I wouldn't
expect that though.
Differential Revision: https://reviews.llvm.org/D129643
Reviewed By: apilipenko
In the case that opaque pointers are not enabled, there may be some constexpr
bitcast uses of thread local variables, and the design of llvm allows
people to sink constants arbitrarily. This breaks the assumption of
IRBuilder::CreateThreadLocalAddress. This patch tries to handle the
case.
In this patch we replace common code patterns with the use of utility
functions for dealing with profiling metadata. There should be no change
in functionality, as the existing checks should be preserved in all
cases.
Reviewed By: bogner, davidxl
Differential Revision: https://reviews.llvm.org/D128860
When compiling for multiple targets the scheduler that is selected via the
-misched option is applied globally. This patch adds a target CL option instead.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D131022
This used to print from the ADDI where the operand number was
correct. It recently changed to print from the LUI or AUIPC which
needs to use operand 1 instead of 2.
This shows up as a crash with -debug.
Prefer using these accessors to access the special sub-commands
corresponding to the top-level (no subcommand) and all sub-commands.
This is a preparatory step towards removing the use of ManagedStatic:
with a subsequent change, these global instances will be moved to
be regular function-scope statics.
It is split up to give downstream projects a (albeit short) window in
which they can switch to using the accessors in a forward-compatible
way.
Differential Revision: https://reviews.llvm.org/D129118
Creates a new scheduling strategy that attempts to maximize ILP for a single
wave.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D130869
When emitting an object file from clang directly, the DirectX backend will hit an assert caused by not initializing the passes for AsmPrinter.
The fix initializes the passes by calling createPassConfig.
Also ignore global variables which do not have a section in DXILAsmPrinter::emitGlobalVariable, to avoid hitting llvm_unreachable in DXILTargetObjectFile::SelectSectionForGlobal.
Reviewed By: beanz
Differential Revision: https://reviews.llvm.org/D130856
Fixes code in OrderedChangedData<T>::report which assumes that a string will only appear once in Before/After.
Reviewed By: jamieschmeiser
Differential Revision: https://reviews.llvm.org/D130587