DebugLocEntry assumes that it either contains 1 item that has no fragment
or many items that all have fragments (see the assert in addValues).
When EXPENSIVE_CHECKS is enabled, _GLIBCXX_DEBUG is defined. On a few machines
I've checked, this causes std::sort to call the comparator even
if there is only 1 item to sort, perhaps to check that the ordering is
implemented properly; I didn't find out exactly why.
operator< for a DbgValueLoc will crash if this happens because the
optional Fragment is empty.
Whether this happens seems to depend on compiler, linker, and optimisation
level: I've seen it happen on x86 Ubuntu, but the buildbot
for release EXPENSIVE_CHECKS did not have this issue.
Add an explicit check whether we have 1 item.
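A minimal sketch of the idea (illustrative, not the exact code):

```cpp
// Only sort when there is more than one value. With a single fragment-less
// DbgValueLoc, a _GLIBCXX_DEBUG std::sort may still invoke operator<, which
// dereferences the empty optional Fragment and crashes.
if (Values.size() > 1)
  llvm::sort(Values);
```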
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D130156
As the test in PR56672 shows, LAA produces different results, which lead to
either positive or negative vectorization decisions, depending on the order of
blocks in the loop. The exact reason for this is not clear to me, however it
makes investigation of related bugs extremely complex.
The current order of blocks in the loop is arbitrary. It may change, for
example, if loop info analysis is dropped and recomputed, and it seems to
interfere with LAA's logic.
This patch chooses a fixed traversal order of blocks in loops, making it RPOT
(reverse post-order traversal).
Note: this is *not* a fix for bug with incorrect analysis result. It just makes
the answer more robust to make the investigation easier.
Differential Revision: https://reviews.llvm.org/D130482
Reviewed By: aeubanks, fhahn
For complex FMUL with a mask, we will insert a new operand which is
identical to the Dest. https://godbolt.org/z/eTEdnYv3q
Complex FMA and FMUL with maskz don't have this problem.
Reviewed By: LuoYuanke, skan
Differential Revision: https://reviews.llvm.org/D130638
By not clustering loads and adjusting heuristics to more aggressively reduce
register pressure, we may be able to increase occupancy for the function if it
was dropped in the first scheduling pass.
Similarly, try to reduce spilling if register usage exceeds the lower-bound
occupancy.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D130329
Clear all kill flags on source register when folding a COPY.
This is necessary because the kills may now be out of order with the uses.
Reviewed By: foad
Differential Revision: https://reviews.llvm.org/D130622
InstCombine and DAGCombine prefer to keep shl before binops.
This patch teaches isel to convert to (shl (and/or/xor X, C1 >> C2), C2)
if (C1 >> C2) is a simm12. The idea was taken from X86's isel code.
There's a special case implemented for a sext_inreg between the
shift and the binop.
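A hedged sketch of the basic fold, ignoring the sext_inreg special case (names and the helper are illustrative, not the exact isel code):

```cpp
// Fold (and (shl X, C2), C1) --> (shl (and X, C1 >> C2), C2) when
// (C1 >> C2) fits in a simm12, so the AND needs no constant
// materialization. For or/xor, C1's low C2 bits must already be zero.
static SDValue foldShlBinop(SelectionDAG &DAG, const SDLoc &DL, EVT VT,
                            SDValue X, int64_t C1, unsigned C2) {
  if (!isInt<12>(C1 >> C2))
    return SDValue();
  SDValue NewAnd =
      DAG.getNode(ISD::AND, DL, VT, X, DAG.getConstant(C1 >> C2, DL, VT));
  return DAG.getNode(ISD::SHL, DL, VT, NewAnd,
                     DAG.getConstant(C2, DL, VT));
}
```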
Differential Revision: https://reviews.llvm.org/D130610
tryLastChanceRecoloring iterated over the set of LiveInterval pointers
and used that to seed the recoloring stack, which was
nondeterministic. This fixes a future test that failed about 20% of the time.
The fix just takes the order in which the interfering vregs were encountered.
I'm not sure if we should try to order this more intelligently.
In this patch we replace common code patterns with the use of utility
functions for dealing with profiling metadata. There should be no change
in functionality, as the existing checks should be preserved in all
cases.
Reviewed By: bogner, davidxl
Differential Revision: https://reviews.llvm.org/D128860
Currently, there is significant code duplication for dealing with
MD_prof metadata throughout the compiler. These utility functions can
improve code reuse and simplify boilerplate code when dealing with
profiling metadata, such as branch weights. The intent is to provide a
uniform set of APIs that allow common tasks, such as identifying
specific types of MD_prof metadata and extracting branch weights.
Future patches can build on this initial implementation and clean up the
different implementations across the compiler.
Reviewed By: bogner
Differential Revision: https://reviews.llvm.org/D128858
Currently, the IR to MIR translator can only handle two kinds of constant
inputs to dbg.values intrinsics: constant integers and constant floats. In
particular, it cannot handle pointers created from IntToPtr ConstantExpression
objects.
This patch addresses the limitation above by replacing the IntToPtr with
its input integer prior to converting the dbg.value input.
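A rough sketch of the idea (the helper name is hypothetical, not the upstream code):

```cpp
// Look through an IntToPtr constant expression so the dbg.value operand
// becomes a plain integer constant the translator already understands.
static const Value *stripIntToPtr(const Value *V) {
  if (const auto *CE = dyn_cast<ConstantExpr>(V))
    if (CE->getOpcode() == Instruction::IntToPtr)
      return CE->getOperand(0); // translate the underlying integer instead
  return V;
}
```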
Patch by Felipe Piovezan!
Differential Revision: https://reviews.llvm.org/D130642
This change enables vectorization (using scalable vectorization only, fixed vectors are not yet enabled) for RISCV when vector instructions are available for the target configuration.
At this point, the resulting configuration should be both stable (e.g. no crashes) and profitable (i.e. few cases where scalar loops beat vector ones), but it is not going to be particularly well tuned (i.e. we do not necessarily emit the best possible vector loop). The goal of this change is to align testing across organizations and ensure the default configuration matches what downstreams are using as closely as possible.
This exposes a large amount of code which hasn't otherwise been on by default, and thus may not have been fully exercised. Given that, having issues fall out is not unexpected. If you find issues, please make sure to include as much information as you can when reverting this change.
Differential Revision: https://reviews.llvm.org/D129013
The instruction is used to modify wave priority with the intent
of affecting VALU execution, but currently we can reschedule VALU
instructions around it since those VALUs have no side effects.
Differential Revision: https://reviews.llvm.org/D130654
Teach libDebugInfo (llvm-dwarfdump) and lldb about DWARF tags and
attributes for pointer authentication. These values have been emitted by
Apple clang for several releases. Although upstream LLVM doesn't emit
these values yet, we hope to upstream that part sometime soon.
Differential revision: https://reviews.llvm.org/D130215
Without this, the intrinsic will be expanded to an integer, and thereby an
explicit copy (from GPR to SIMD register) will be codegen'd. This matches the
general convention of using "v1" types to represent scalar integer operations in
vector registers.
A similar approach is seen in D56616, and the pattern likely applies to
other intrinsics that accept integer scalars (e.g.,
int_aarch64_neon_sqdmulls_scalar).
Differential Revision: https://reviews.llvm.org/D130548
This adds similar heuristics to G_GLOBAL_VALUE, querying the cost of
materializing a specific constant in code size. Doing so prevents us from
sinking constants which require multiple instructions to generate into
use blocks.
Code size savings on CTMark -Os:
Program                            size.__text
                                   before      after       diff
ClamAV/clamscan                    381940.00   382052.00    0.0%
lencod/lencod                      428408.00   428428.00    0.0%
SPASS/SPASS                        411868.00   411876.00    0.0%
kimwitu++/kc                       449944.00   449944.00    0.0%
Bullet/bullet                      463588.00   463556.00   -0.0%
sqlite3/sqlite3                    284696.00   284668.00   -0.0%
consumer-typeset/consumer-typeset  414492.00   414424.00   -0.0%
7zip/7zip-benchmark                595244.00   594972.00   -0.0%
mafft/pairlocalalign               247512.00   247368.00   -0.1%
tramp3d-v4/tramp3d-v4              372884.00   372044.00   -0.2%
Geomean difference                                         -0.0%
Differential Revision: https://reviews.llvm.org/D130554
I am playing with the LoopDataPrefetch pass and found out that it
bails out on pointers in a non-zero address space. This patch adds
the target callback to check whether an address space is to be
considered for prefetching. The default implementation still only
allows address space 0, so this is NFCI.
This does not currently affect any known targets, but seems to be
generally useful for the future.
Differential Revision: https://reviews.llvm.org/D129795
I don't have any evidence these particular uses are actually causing any
issues, but we should avoid accidentally truncating immediate values
depending on the host.
We can use slli.uw by C followed by sh1add. The same can be done for
multiples of 5 and 9, using sh2add and sh3add. We need to make sure that
C is less than 32 to stay in bounds of the 5-bit immediate for slli.uw.
We have existing patterns for (mul X, 3<<C) that use sh1add
followed by slli. That order doesn't allow the and to be folded.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D130146
A mul by a negated power of 2 is a slli followed by neg. This doesn't
require any constant materialization and may be lower latency than mul.
The neg may also be foldable into other arithmetic.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D130047
We can't guarantee that long is always 64 bits, e.g. on Windows or other
LLP64 data models (rare, but something we should account for).
So use int64_t from inttypes.h, which is safe in this case.
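For illustration, a minimal example of the safe pattern (not taken from the patch):

```cpp
#include <inttypes.h>
#include <stdio.h>

int main(void) {
  // long is 32-bit on LLP64 targets such as 64-bit Windows; int64_t is
  // guaranteed 64 bits, and PRId64 is the matching printf format macro.
  int64_t Value = INT64_C(1234567890123);
  printf("%" PRId64 "\n", Value);
  return 0;
}
```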
Fixes https://github.com/llvm/llvm-project/issues/55911 .
It errors out in the Bazel CI:
AMDGPULowerModuleLDSPass.cpp:384:12: error: chosen constructor is
explicit in copy-initialization
return {SGV, std::move(Map)};
Reviewed By: rupprecht
Differential Revision: https://reviews.llvm.org/D130623
The behaviour of this patch is not great, but it has some side effects
that are required for OpenMPOpt to work. The problem is that when we use
`-mlink-builtin-bitcode` we only import used symbols from the runtime.
Then OpenMPOpt will insert calls to symbols that were not previously
included. This patch removed that implicit behaviour, as these functions
were kept alive by `noinline` simply because it kept calls to them
in the module. This caused regressions in some tests that relied on some
OpenMPOpt passes without using LTO. Reverting for the LLVM 15 release, but
I will try to fix it more correctly on main.
This reverts commit d61d72dae6.
Fixes #56752
This patch changes legacy LTO to set data-sections by default. The user can
explicitly unset data-sections. The reason for this patch is to match the
behaviour of lld and gold plugin. Both lld and gold plugin have data-sections on
by default.
This patch also fixes the forwarding of the clang options -fno-data-sections and
-fno-function-sections to libLTO. Now, when -fno-data/function-sections are
specified in clang, -data/function-sections=0 will be passed to libLTO to
explicitly unset data/function-sections.
Reviewed By: w2yehia, MaskRay
Differential Revision: https://reviews.llvm.org/D129401
Instructions between two adjacent loops will be hoisted above the first
loop, or sunk below the second, to facilitate loop fusion. Hoisting will
be attempted for an instruction that dominates the first loop.
Otherwise, sinking the instruction will be attempted.
Instructions with side effects will not be considered for sinking or
hoisting. Hoisting/sinking of any instructions between loops will only
be performed if all the instructions can be moved. As well,
sinking/hoisting is considered for each instruction in isolation,
without taking into account sinking/hoisting decisions for other
instructions in the preheader.
Differential Revision: https://reviews.llvm.org/D118076
While working on D118450 <https://reviews.llvm.org/D118450>, I noticed that
`sys::getHostCPUName` lacks SPARC support.
This patch implements it. The code is taken from/inspired by GCC's
`gcc/config/sparc/driver-sparc.cc`. There's one caveat: since LLVM, unlike
GCC, doesn't support the SPARC-M7, -S7, and -M8 CPUs, I map all those to
the latest supported one (UltraSparc T4/`niagara4`).
Tested on `sparcv9-sun-solaris2.11` and `sparc64-unknown-linux-gnu` by
running `savcov --version` on
- Netra SPARC S7-2 (SPARC-S7, Solaris 11.4)
- SPARC T5-2 (SPARC T5, Solaris 11.4)
- SPARC Enterprise T5220 (UltraSPARC T2, Solaris 11.3)
- SPARC T5 (UltraSPARC T5, Debian sid)
- SPARC T3 (UltraSPARC T3, Debian sid)
- SPARC Enterprise T5220 (Debian sid)
Differential Revision: https://reviews.llvm.org/D130272
SimplifyDemandedBits currently early-outs for multi-use values beyond the root node (just returning the knownbits), which is missing a number of optimizations as there are plenty of cases where we can still simplify when initially demanding all elements/bits.
@lenary has confirmed that the test cases in aea-erratum-fix.ll need refactoring and the current codegen increase is not a major concern.
Differential Revision: https://reviews.llvm.org/D129765
Normally, the generic processor does not have any SubtargetFeature, and it
can just generate the most basic instructions, which have no Predicates to
guard them.
But it needs to enable the predicate for the btsti16 instruction, as it is one
of the most basic instructions; otherwise the generic processor can't finish
the codegen process. So add the FeatureBTST16 SubtargetFeature to the generic
ProcessorModel.
Given a patch like D129506, using instructions not valid for the current
feature set becomes an error. This updates the Arm hint-space
instructions for pac/bti to require thumbv7m as opposed to 8.1-m.main, to
make them valid when compiling for thumbv7m with -mbranch-protection.
Differential Revision: https://reviews.llvm.org/D129692
This adds a +atomic-32 target feature, which instructs LLVM to assume
that lock-free 32-bit atomics are available for this target, even
if they usually wouldn't be.
If only atomic loads/stores are used, then this won't emit libcalls.
If atomic CAS is used, then the user is responsible for providing
any necessary __sync implementations (e.g. by masking interrupts
for single-core privileged use cases).
See https://reviews.llvm.org/D120026#3674333 for context on this
change. The tl;dr is that the thumbv6m target in Rust has
historically made atomic load/store only available, which is
incompatible with the change from D120026, which switched these to
use libatomic.
Differential Revision: https://reviews.llvm.org/D130480
Added an alloca optimization which was missed during the implementation of D112098.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D130503
This patch fixes the following error with MSVC 16.9.2 in case of NDEBUG and LLVM_ENABLE_DUMP:
llvm/lib/CodeGen/CodeGenPrepare.cpp(2581): error C2872: 'ExtAddrMode': ambiguous symbol
llvm/include/llvm/CodeGen/TargetInstrInfo.h(86): note: could be 'llvm::ExtAddrMode'
llvm/lib/CodeGen/CodeGenPrepare.cpp(2447): note: or '`anonymous-namespace'::ExtAddrMode'
llvm/lib/CodeGen/CodeGenPrepare.cpp(2581): error C2039: 'print': is not a member of 'llvm::ExtAddrMode'
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D130426
This is an alternate to D129155 that uses TTI.haveFastSqrt() to avoid a
potential miscompile for programs with reads of errno. Moving the transform
to AggressiveInstCombine provides access to TTI.
If a sqrt call has "nnan", that implies that the input argument is never
negative because sqrt of {negative number} --> NAN.
If the argument is never negative and the call can be lowered without a
libcall, then we can assume that errno accesses are unchanged after lowering,
so the call can be translated to the LLVM intrinsic (which is expected to
become inline code).
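A hedged sketch of the gating condition (illustrative, not the exact AggressiveInstCombine code):

```cpp
// Only turn the libcall into the intrinsic when nnan rules out negative
// inputs and the target lowers sqrt inline (no errno-setting libcall).
static bool foldSqrt(CallInst *Call, TargetTransformInfo &TTI) {
  if (!Call->hasNoNaNs() || !TTI.haveFastSqrt(Call->getType()))
    return false;
  IRBuilder<> B(Call);
  Value *Sqrt =
      B.CreateUnaryIntrinsic(Intrinsic::sqrt, Call->getArgOperand(0), Call);
  Call->replaceAllUsesWith(Sqrt);
  Call->eraseFromParent();
  return true;
}
```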
This affects codegen for targets like x86 that have sqrt instructions, but
still have to conservatively assume that a libcall may be needed to set
errno as shown in issue #52620 and issue #56383.
This patch won't solve those examples - we will need to extend this to use
CannotBeOrderedLessThanZero or similar, enhance that analysis for new
operators, and/or deal with llvm.assume too.
Differential Revision: https://reviews.llvm.org/D129167
We already call the more general isImpliedCondition() (which calls
isImpliedTrueByMatchingCmp() internally) from simplifyAndInst()
and simplifyOrInst().
There was a difference visible with this change on a vector test
before a925bef70c, but I can't find any gaps now.
These are aliases that allow the immediate offset to be omitted.
We had predicates for the RV64, RV32+F, and D versions, but
not the base versions.
I've also re-ordered them to share Predicate lines to improve
readability.
-print-changed for the new pass manager is handy alongside -print-after-all.
Port it to MachineFunctionPass.
Note: lib/Passes/StandardInstrumentations.cpp implements a number of
misc features. If we want to use them for codegen, we may need to lift
some functionality to LLVMIR.
Reviewed By: aeubanks, jamieschmeiser
Differential Revision: https://reviews.llvm.org/D130434
Tries to make the different scheduling stages a bit more self-contained
and modifiable. Intended to be NFC. A preface to other changes.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D130147
WinEHPrepare marks any function call from EH funclets as unreachable, if it's not a nounwind intrinsic or has no proper funclet bundle operand. This
affects ARC intrinsics on Windows, because they are lowered to regular function calls in the PreISelIntrinsicLowering pass. It caused silent binary truncations and crashes during unwinding with the GNUstep ObjC runtime: https://github.com/gnustep/libobjc2/issues/222
This patch adds a new function `llvm::IntrinsicInst::mayLowerToFunctionCall()` that aims to collect all affected intrinsic IDs.
* Clang CodeGen uses it to determine whether or not it must emit a funclet bundle operand.
* PreISelIntrinsicLowering asserts that the function returns true for all ObjC runtime calls it lowers.
* LLVM uses it to determine whether or not a funclet bundle operand must be propagated to inlined call sites.
Reviewed By: theraven
Differential Revision: https://reviews.llvm.org/D128190
This patch starts small, only detecting sequences of the form
<a, a+n, a+2n, a+3n, ...> where a and n are ConstantSDNodes.
Differential Revision: https://reviews.llvm.org/D125194
Turning on opaque pointers has uncovered an issue with WPD where we currently pattern match away `assume(type.test)` in WPD so that a later LTT doesn't resolve the type test to undef and introduce an `assume(false)`. The pattern matching can fail in cases where we transform two `assume(type.test)`s into `assume(phi(type.test.1, type.test.2))`.
Currently we create `assume(type.test)` for all virtual calls that might be devirtualized. This is to support `-Wl,--lto-whole-program-visibility`.
To prevent this, all virtual calls that may not be in the same LTO module instead use a new `llvm.public.type.test` intrinsic in place of the `llvm.type.test`. Then when we know if `-Wl,--lto-whole-program-visibility` is passed or not, we can either replace all `llvm.public.type.test` with `llvm.type.test`, or replace all `llvm.public.type.test` with `true`. This prevents WPD from trying to pattern match away `assume(type.test)` for public virtual calls when failing the pattern matching will result in miscompiles.
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D128955
This helps fold away the ptest instructions, which requires knowing whether
the general predicate is known to zero the inactive lanes.
This fixes some PTEST regressions introduced by D129282.
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D129852
I think what we need is that the least significant Log2(EltSize) bits are known to be ones.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D130251
The current DWARFLinker implementation does not support some debug sections
(mainly DWARF v5 sections). This patch adds diagnostics for such sections.
A warning is displayed for critical sections (those that cannot be removed)
and the source file is skipped. Other unsupported sections
are removed and a warning message is displayed. A zero exit
status is returned in both cases.
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D123623
Optimizing (a * 0 + b) to (b) requires assuming that a is finite and not
NaN. DAGCombiner will do this optimization when the reassoc fast math
flag is set, which is not correct. Change DAGCombiner to only consider
UnsafeMath for this optimization.
Differential Revision: https://reviews.llvm.org/D130232
Co-authored-by: Andrea Faulds <andrea.faulds@arm.com>
Currently, when llvm-objdump is disassembling a code section and
encounters a point where no instruction can be decoded, it uses the
same policy on all targets: consume one byte of the section, emit it
as "<unknown>", and try disassembling from the next byte position.
On an architecture where instructions are always 4 bytes long and
4-byte aligned, this makes no sense at all. If a 4-byte word cannot be
decoded as an instruction, then the next place that a valid
instruction could //possibly// be found is 4 bytes further on.
Disassembling from a misaligned address can't possibly produce
anything that the code generator intended, or that the CPU would even
attempt to execute.
This patch introduces a new MCDisassembler virtual method called
`suggestBytesToSkip`, which allows each target to choose its own
resynchronization policy. For Arm (as opposed to Thumb) and AArch64,
I've filled in the new method to return a fixed width of 4.
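For such fixed-width targets the override can be as simple as the following sketch (the exact upstream signature may differ slightly):

```cpp
// AArch64 instructions are always 4 bytes and 4-byte aligned, so resume
// disassembly at the next word boundary rather than the next byte.
uint64_t AArch64Disassembler::suggestBytesToSkip(ArrayRef<uint8_t> Bytes,
                                                 uint64_t Address) const {
  return 4;
}
```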
Thumb is a more interesting case, because the criterion for
identifying 2-byte and 4-byte instruction encodings is very simple,
and doesn't require the particular instruction to be recognized. So
`suggestBytesToSkip` is also passed an ArrayRef of the bytes in
question, so that it can take that into account. The new test case
shows Thumb disassembly skipping over two unrecognized instructions,
and identifying one as 2-byte and one as 4-byte.
For targets other than Arm and AArch64, this is NFC: the base class
implementation of `suggestBytesToSkip` still returns 1, so that the
existing behavior is unchanged. Other targets can fill in their own
implementations as they see fit; I haven't attempted to choose a new
behavior for each one myself.
I've updated all the call sites of `MCDisassembler::getInstruction` in
llvm-objdump, and also one in sancov, which was the only other place I
spotted the same idiom of `if (Size == 0) Size = 1` after a call to
`getInstruction`.
Reviewed By: DavidSpickett
Differential Revision: https://reviews.llvm.org/D130357
The ARM attribute parser for Tag_CPU_arch is missing value descriptions
for Armv8-A and Armv8-R.
This patch adds these descriptions.
Reviewed By: pratlucas
Differential Revision: https://reviews.llvm.org/D129631
It would be better for CodeMetrics to use hasOneLiveUse while analyzing
static and called-once call sites, since inline cost now uses
hasOneLiveUse instead of hasOneUse to avoid overpessimization on dead
constant cases (since this patch https://reviews.llvm.org/D109294).
This change has no noticeable influence now, but it helps improve the
accuracy of cost models of passes that use CodeMetrics.
Reviewed By: fhahn, nikic
Differential Revision: https://reviews.llvm.org/D130461
A new helper class DXILOpBuilder is added to create DXIL op function calls.
The TableGen backend for DXILOperation will create a table of DXIL op function
parameter types. When creating a DXIL op function, these parameter types will
be used to create the function type.
Reviewed By: bogner
Differential Revision: https://reviews.llvm.org/D130291
Adds keep-alive edges to the pdata section to prevent dead-stripping of a block when its parent function is alive.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D129945
Currently the X86 shuffle lowering will widen the element type for a
shuffle if the mask element values are adjacent. For the example below:

```
%t2 = add nsw <16 x i32> %t0, %t1
%t3 = sub nsw <16 x i32> %t0, %t1
%t4 = shufflevector <16 x i32> %t2, <16 x i32> %t3,
                    <16 x i32> <i32 16, i32 17, i32 2, i32 3, i32 4,
                                i32 5, i32 6, i32 7, i32 8, i32 9, i32 10,
                                i32 11, i32 12, i32 13, i32 14, i32 15>
ret <16 x i32> %t4
```

the compiler would transform the shuffle to:

```
%t4 = shufflevector <8 x i64> %t2, <8 x i64> %t3,
                    <8 x i64> <i32 8, i32 1, i32 2, i32 3, i32 4,
                               i32 5, i32 6, i32 7>
```

This may lose the opportunity to let ISel select a mask instruction when
avx512 is enabled. This patch prevents the transform when the avx512
feature is enabled. Thanks to Simon for the idea.
Implement IMAGE_COMDAT_SELECT_LARGEST partially. It will fail if a larger symbol appears, but this hasn't happened, at least in the vcruntime library.
We probably will not implement this fully, as it requires complicated runtime patching, which is not in the nature of a JIT. However, we'd like to validate whether a larger section appears and report it to the user in the near future.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D129941
Implement the IMAGE_WEAK_EXTERN_SEARCH_NOLIBRARY/LIBRARY characteristics flags.
Since the COFFObjectFile class will set the undefined flag for symbols with no alias flag, the ORC ObjectFileInterface will not pull in this symbol, so we only need to make sure the scope is local. NOLIBRARY and LIBRARY are handled in the same way for now (which is what lld does right now).
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D129939
Handles duplicate external symbols. This happens in a few static libraries generated by the MSVC toolchain.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D129937
We no longer need specialized knowledge of these allocator functions in
this file since we have the correct attributes available now.
As far as I can tell the changes in the attributor tests are due to
things getting more consistent on alloc-family once we remove the static
list entries.
The two test changes in NewGVN merit extra scrutiny: NewGVN appears to
be _extremely_ sensitive to inaccessiblememonly for reasons that
are beyond me. As a result, I hand-enumerated all the attributes on
allocation functions in those two tests instead of using -inferattrs.
I assume that the two -disable-simplify-libcalls tests there are no
longer sensible since the function declaration now includes all the
relevant attributes.
Differential Revision: https://reviews.llvm.org/D130107
Only Emscripten supports dynamic linking with threads. To use
thread-local storage for other targets, this change defaults to the
`localexec` model.
Differential Revision: https://reviews.llvm.org/D130053
Set the priorities consistently to the number of registers in the tuple
minus 1. Previously we started at 1, and also tried to give SGPRs higher
values than VGPRs. There's no point in assigning SGPRs higher values
now that those are allocated in a separate regalloc run.
This avoids overflowing the 5 bits used for the class priority in the
allocation heuristic for 32 element tuples. This avoids some cases
where smaller registers unexpectedly get prioritized over larger.
We previously used the `noinline` attribute to specify some definitions
which should be kept alive in the runtime. These were then stripped
immediately in the OpenMPOpt module pass. However, since the changes in
D130298, we now explicitly state which functions will have external
visibility in the bitcode library. Additionally, the OpenMPOpt module pass
should run before the inliner pass, so this shouldn't make a difference
in whether or not the functions will be alive for the initial pass of
OpenMPOpt. This should simplify the interface, and additionally save
time spent on scanning function names for noinline.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D130368
We call tail-call-elim near the beginning of the pipeline,
but that is too early to annotate calls that get added later.
In the motivating case from issue #47852, the missing 'tail'
on memset leads to sub-optimal codegen.
I experimented with removing the early instance of
tail-call-elim instead of just adding another pass, but that
appears to be slightly worse for compile-time:
+0.15% vs. +0.08% time.
"tailcall" shows adding the pass; "tailcall2" shows moving
the pass to later, then adding the original early pass back
(so 1596886802 is functionally equivalent to 180b0439dc ):
https://llvm-compile-time-tracker.com/index.php?config=NewPM-O3&stat=instructions&remote=rotateright
Note that there was an effort to split the tail call functionality
into 2 passes - that could help reduce compile-time if we find
that this change costs more in compile-time than expected based
on the preliminary testing:
D60031
Differential Revision: https://reviews.llvm.org/D130374
In D129523, it was noted that there are some questionable naked casts
from Instruction to BinaryOperator, which could be addressed by doing a
dyn_cast directly to BinaryOperator, avoiding the need for the later cast.
This cleans up that casting.
Reviewed By: nikic, spatel, RKSimon
Differential Revision: https://reviews.llvm.org/D130448
abs(i32 X, i1 1) always produces a positive result. The 'i1 1'
means an INT_MIN input produces poison. If the result is sign extended,
InstCombine will convert it to zext. This does not produce ideal
code for RISCV.
This patch reverses the zext back to sext which can be folded
into a subw or negw. Ideally we'd do this in SelectionDAG, but
we lose the INT_MIN poison flag when llvm.abs becomes ISD::ABS.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D130412
With SSE4.1 and above we were using 3 multiply instructions. This
was due to type legalization widening to v4i32 and the low half
being done with pmulld while the high half used two pmuldq/pmuludq.
Instead of that, we can use a single pmuludq/pmuldq to calculate
the full product at once, extract the high and low bits and compare
to check for overflow.
I've restricted SMULO to sse4.1 to get pmuldq. We can probably
do a fixup to pmuludq on earlier targets, but that's for another day.
I was going through my git stash and found an early version of this patch
from a year or two ago so I went ahead and finished it.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D130432
Due to the way fixed length SVE lowering works, we sometimes introduce
ext/trunc nodes very late, these nodes then immediately get converted
into target specific nodes (UUNPKLO/UZP1) before they get a chance to be
folded into a load/store.
This patch introduces target specific dag combines for these nodes so that
we can still create extending loads/truncating stores out of them.
Differential Revision: https://reviews.llvm.org/D128065
Reimplements the ADDR32NB/REL32 relocations properly; out-of-reach targets will be dealt with in a separate patch that will generate stubs for dllimport symbols.
Reviewed By: sgraenitz
Differential Revision: https://reviews.llvm.org/D129936
Properly set the weak flag on COMDAT symbols so that no duplicate definition error is generated. There is an inaccuracy in setting plain weak for the largest selection type, which will be dealt with soon when the largest type is properly implemented.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D129764
Implements IMAGE_SYM_CLASS_LABEL. It's simply a section plus an offset. This is not used much by llvm-mc but is very commonly used by the MSVC compiler.
Reviewed By: sgraenitz
Differential Revision: https://reviews.llvm.org/D129754
Handle out-of-order COMDAT second symbols. In llvm codegen, the second symbol of a COMDAT sequence always follows the first symbol in the global symbol list. But when the object file comes from the MSVC compiler, these can appear out of order.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D129721
This prevents dead-stripping of an associative COMDAT section when its parent section is alive.
Reviewed By: sgraenitz
Differential Revision: https://reviews.llvm.org/D129720
This patch adds shouldScalarizeBinop to RISCV target in order to convert an extract element of a vector binary operation into an extract element followed by a scalar binary operation.
Differential Revision: https://reviews.llvm.org/D129545
EXPENSIVE_CHECKS enables _GLIBCXX_DEBUG, which makes std::sort
check that the compare function is implemented correctly.
To do this it calls it with the first item as both sides.
This trips the assert here because we think they're
2 capture ranges that overlap, when it's just the same range twice.
Check up front for the two sides being the same item
(same address, not just ==).
Reviewed By: kazu
Differential Revision: https://reviews.llvm.org/D130282
When using a ptrtoint to a size larger than the pointer width in a
global initializer, we currently create a ptr & low_bit_mask style
MCExpr, which will later result in a relocation error during object
file emission.
This patch rejects the constant expression already during
lowerConstant(), which results in a much clearer error message
that references the constant expression at fault.
This fixes https://github.com/llvm/llvm-project/issues/56400,
for certain definitions of "fix".
Differential Revision: https://reviews.llvm.org/D130366
This patch recognizes f16 immediates as legal and adds the necessary
patterns. This allows the fadda folding introduced in 05d424d165
to be applied to the f16 cases.
Differential Revision: https://reviews.llvm.org/D129989
The code in this `#if 0` block appears to be a net benefit. Put it
behind a switch defaulting to off to support experimentation, and as a
request for comment.
The codegen impact of enabling this that I'm currently pursuing is that
it allows PRE to take place more frequently, particularly in loops with
second-order recurrences.
Preliminary experimental data:
Across LNT on AArch64, 54 benchmarks are sped up by >1%, and 42 are
regressed by >1%, the geomean (exec_time_enabled / exec_time_disabled)
of these 96 "1% or greater significance" benchmarks is 0.991. For the
full set of 770 benchmarks it's 0.998.
There are two benchmarks which experience a >30% speedup, and the worst
slowdown is ~12%; for every benchmark with a slowdown there is a
benchmark which is sped up by a greater factor.
Differential Revision: https://reviews.llvm.org/D130241
When lowering add(a, select(mask, b, splat(0))) the sel instruction can
be removed by using predicated add/sub instructions.
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D129751
Handle guards uniformly with assumes, rather than iterating through all
block instructions in an attempt to find them.
Differential Revision: https://reviews.llvm.org/D129874
Reviewed By: nikic
The last use was removed on May 3, 2017 in commit
2af5037d34.
This patch also removes isLateResultInstr and isEarlySourceInstr as
they become dead once we remove isLateInstrFeedsEarlyInstr.
In D129523, it was noted that the approach to check whether a value can
have FastMathFlags was done in different ways, and they should be made
consistent. This patch makes minor changes to fix that.
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D130408
This would create a new interval missing the subrange and hit this
verifier error:
*** Bad machine code: Live interval for subreg operand has no subranges ***
- function: test_remat_subreg_def
- basic block: %bb.0 (0xa568758) [0B;128B)
- instruction: 32B dead undef %4.sub0:vreg_64 = V_MOV_B32_e32 2, implicit $exec
For mask ops like these, the other operand's corresponding element might be zero (result = zero) - so we must demand all the bits and that element.
This appears to be what D128570 was trying to fix - both sides of the funnel shift mask of the vXi64 (legalized to v2Xi32) were incorrectly simplifying the upper 32-bit halves to undef, resulting in bad folds later on.
I intend to address the test case regressions, but this close to the release branch I'd prefer to get a fix in first.
We still haven't found a solution that correctly handles 'don't care' sub elements properly - given how close it is to the next release branch, I'm making this fail safe change and we can revisit this later if we can't find alternatives.
NOTE: This isn't a reversion of D128570 - it's the removal of undef handling across bitcasts entirely
Fixes #56520
Minor bit of prep work toward not unnecessarily widening shuffle operands in combineX86ShufflesRecursively, instead only widening in combineX86ShuffleChain if we actually find a match - see Issue #45319
This patch fixes the following error with MSVC 16.9.2:
llvm/lib/IR/GCStrategy.cpp(35): error C2668: 'llvm::report_fatal_error': ambiguous call to overloaded function
llvm/include/llvm/Support/ErrorHandling.h(75): note: could be 'void llvm::report_fatal_error(const llvm::Twine &,bool)'
llvm/include/llvm/Support/ErrorHandling.h(73): note: or 'void llvm::report_fatal_error(llvm::StringRef,bool)'
llvm/lib/IR/GCStrategy.cpp(35): note: while trying to match the argument list '(const std::string)'
Reviewed By: RKSimon, barannikov88
Differential Revision: https://reviews.llvm.org/D130407
llvm::sort is beneficial even when we use the iterator-based overload,
since it can optionally shuffle the elements (to detect
non-determinism). However llvm::sort is not usable everywhere, for
example, in compiler-rt.
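For example (a trivial sketch of the preferred call):

```cpp
#include "llvm/ADT/STLExtras.h"
#include <vector>

void sortValues(std::vector<int> &Vec) {
  // Prefer llvm::sort over std::sort: with EXPENSIVE_CHECKS enabled it can
  // shuffle the range first, exposing order-dependent comparators.
  llvm::sort(Vec.begin(), Vec.end());
}
```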
Reviewed By: nhaehnle
Differential Revision: https://reviews.llvm.org/D130406
This will fix the SystemZ v3i31 memcpy regression in D77804 (with the help of D129765 as well....).
It should also allow us to /bend/ the one-use limitation for cases where we can use demanded bits to safely peek through multiple uses of the AND ops.
As noticed on D127115, when splitting ADD/SUB nodes we often end up with cases where overflow from the lower bits is impossible - in such cases we're better off breaking the carry chain dependency as soon as possible.
This path is being exercised by llvm/test/CodeGen/ARM/dsp-mlal.ll, although I haven't been able to get any codegen diff without a topological worklist.
Concat KnownBits from ISD::SHL_PARTS / ISD::SRA_PARTS / ISD::SRL_PARTS lo/hi operands and perform the KnownBits calculation by the shift amount on the extended type, before splitting the KnownBits based on the requested lo/hi result.
If we look at a write, we should not enact the "has been written to"
logic introduced to avoid spurious write -> read dependences. Doing so
led to the elimination of stores we needed, which is obviously bad.
This change adds a nop instruction if a section starts with a landing pad. It is like [D73739](https://reviews.llvm.org/D73739), which avoids zero-offset landing pads in basic block sections.
Detailed description:
The current machine function splitter can create sections which start with a landing pad themselves. This places the landing pad at offset zero from LPStart.
```
.section .text.split.foo10,"ax",@progbits
foo10.cold: # %lpad
.cfi_startproc
.cfi_personality 3, __gxx_personality_v0
.cfi_lsda 3, .Lexception5
.cfi_def_cfa %rsp, 16
.Ltmp11: <--- This is a landing pad and also LPStart, as it is the start of this section
movq %rax, %rdi <--- first instruction is at offset 0 from LPStart
callq _Unwind_Resume@PLT
```
This will cause landing pad entries to become zero (.Ltmp11-foo10.cold)
```
.Lcst_begin4:
.uleb128 .Ltmp9-.Lfunc_begin2 # >> Call Site 1 <<
.uleb128 .Ltmp10-.Ltmp9 # Call between .Ltmp9 and .Ltmp10
.uleb128 .Ltmp11-foo10.cold <---This is zero # jumps to .Ltmp11
.byte 3 # On action: 2
.uleb128 .Ltmp10-.Lfunc_begin2 # >> Call Site 2 <<
.uleb128 .Lfunc_end9-.Ltmp10 # Call between .Ltmp10 and .Lfunc_end9
.byte 0 # has no landing pad
.byte 0 # On action: cleanup
.p2align 2
```
The C++ ABI somehow assumes that no landing pads point directly to LPStart (which works in the normal case, since the function begin is never a landing pad), and uses LP.offset = 0 to specify no landing pad. This change adds a nop instruction at the start of such sections so that this case is avoided. Output:
```
.section .text.split.foo10,"ax",@progbits
foo10.cold: # %lpad
.cfi_startproc
.cfi_personality 3, __gxx_personality_v0
.cfi_lsda 3, .Lexception5
.cfi_def_cfa %rsp, 16
nop <--- new instruction that is added
.Ltmp11:
movq %rax, %rdi
callq _Unwind_Resume@PLT
```
Reviewed By: modimo, snehasish, rahmanl
Differential Revision: https://reviews.llvm.org/D130133
Today llc will crash when attempting to use non-power-of-two integer types as
function arguments or returns. This patch enables passing non-standard integer
values in functions by promoting them before store and truncating after load.
The main motivation for implementing this change is that Rust casts small
structs (less than pointer size) into an integer of the same size. As an
example, if a struct contains three u8 then it will be passed as an i24. This
patch is a step towards enabling Rust compilation to ptx while retaining the
target independent optimizations.
More context can be found in https://github.com/llvm/llvm-project/issues/55764
Differential Revision: https://reviews.llvm.org/D129291
An async suspend models the split between two partial async functions.
`llvm.swift.async.context.addr ` will have a different value in the two
partial functions so it is not correct to generally CSE the instruction.
rdar://97336162
Differential Revision: https://reviews.llvm.org/D130201
As noticed on D129765 and reported on Issue #56531 - aarch64 targets can use the neon ctpop + add-reduce instructions to speed up scalar ctpop instructions, but we fail to do this for parity calculations.
I'm not sure where the cutoff should be for specific CPUs, but i64 (+ i128 special case) shows a definite reduction in instruction count. i32 is about the same (but scalar <-> neon transfers are probably more costly?), and sub-i32 promotion looks to be a definite regression compared to parity expansion optimized for those widths.
Differential Revision: https://reviews.llvm.org/D130246
The name `getEntrySamples` was misleading for 2 reasons. One, it's
close in name to `Function::getEntryCount`, but the equivalent here is
`getHeadSamples`; second, as opposed to the other get* APIs in
`FunctionSamples`, it performs an estimate/heuristic rather than just
retrieving raw data (or a non-heuristic derivative of that data, like
`getMaxCountInside`).
The new name should more clearly communicate its intent; and, being
close (in name) to `getHeadSamples`, it should allow the reader to discover
the relation between them.
Also updated the doc comments for both `getHeadSamples[Estimate]` so a
reader may better understand the relation between them.
Differential Revision: https://reviews.llvm.org/D130281
We were looking for loads or any_extend+load. reduceLoadWidth
hasn't known how to look through such an any_extend to find the
load since D40667 almost 5 years ago.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D130333
Reorganize the code to make it clear what is and isn't handled, and why.
Restructure the bailout to remove the (false and confusing) dependence on
CM_Scalarize; just return an invalid cost and propagate it - that's what
it is for.
This patch adds a command line flag to be able to test
the type based cost-model analysis for Intrinsics.
Differential Revision: https://reviews.llvm.org/D129109
GEPs across basic blocks were not getting split due to EnableGEPOpt,
which was turned off by default. Hence, EarlyCSE missed the opportunity
to eliminate common parts of GEPs. This can be achieved by simply
turning the GEP pass on.
- This patch moves SeparateConstOffsetFromGEPPass() just before LSR.
- It enables EnableGEPOpt by default.
Resolves - https://github.com/llvm/llvm-project/issues/50528
Added a unit test.
Differential Revision: https://reviews.llvm.org/D128582
Move this out of the switch, so that different branches can
indicate an error by breaking out of the switch. This becomes
important if there are more than the two current error cases.
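A sketch of the resulting shape (all names here are hypothetical stand-ins):

```cpp
// Error handling hoisted out of the switch: each case signals failure by
// breaking out, and all cases share a single error exit.
int process(Kind K) {
  switch (K) {
  case KindA:
    if (int R = handleA())
      return R;
    break; // error: fall out of the switch
  case KindB:
    if (int R = handleB())
      return R;
    break; // error: fall out of the switch
  }
  return reportError(); // shared error path, reached from any case
}
```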
The internalize pass supports an option to provide a list of symbols
that should not be internalized. This is useful for retaining certain
definitions that should be kept alive. However, this interface is
somewhat difficult to use, as it requires knowing every single symbol's
name and specifying it. Many APIs provide common prefixes for the
symbols exported by the library, so it would make sense to be able to
match these using a simple glob pattern. This patch changes the handling
from a simple string comparison to a glob pattern match, as sketched below.
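A hedged sketch of the matching (the helper name is illustrative):

```cpp
#include "llvm/Support/Error.h"
#include "llvm/Support/GlobPattern.h"

// Match an exported-symbol name against a user-provided glob such as
// "__llvm_*" instead of requiring an exact string per symbol.
static bool shouldPreserve(llvm::StringRef SymbolName, llvm::StringRef Glob) {
  auto Pat = llvm::GlobPattern::create(Glob);
  if (!Pat) {
    llvm::consumeError(Pat.takeError());
    return false; // invalid pattern: preserve nothing
  }
  return Pat->match(SymbolName);
}
```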
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D130319
If a function is non-recursive we only performed intra-procedural
reasoning for reachability (via AA::isPotentiallyReachable). However,
if it is re-entrant that doesn't mean a location can't be reached again.
Instead of this problematic logic in the reachability reasoning we
utilize logic in AAPointerInfo: if a location is for sure written by a
function, then even if the function can be re-entrant or recursive, we
know intra-procedural reasoning alone is sufficient.
The existing code doesn't expect dummy values (undef, poison, null-derived
constants, etc.) as arguments of these intrinsics. However, they can be there
in unreached code. Currently we fail when trying to find a base for them.
Handle these cases separately. Return null as the base for them, to be
consistent with the handling in the main algorithm in findBaseDefiningValue.
Differential Revision: https://reviews.llvm.org/D129561
Reviewed By: apilipenko
If we have a dominating must-write access we do not need to know the
initial value of some object to perform reasoning about the potential
values. The dominating must-write has overwritten the initial value.
lowerGlobalAddress added by D128427 can be used for PIC. The actual condition is
that the global value needs to be dso_local (a dso_preemptable one needs GOT
indirection).
load-store.ll has UB due to out-of-bounds load/store. Fix the UB in the variable
test and add an array test. Note: NOPIC array index is currently wrong.
Reviewed By: wangleiat
Differential Revision: https://reviews.llvm.org/D129977
The patch adds the SPIRVPrepareFunctions pass, which modifies function
signatures containing aggregate arguments and/or return values before
IR translation. Information about the original signatures is stored in
metadata and is used during call lowering to restore the correct SPIR-V
types of function arguments and return values. This pass also substitutes
some llvm intrinsic calls with function calls, generating the necessary
functions in the module, as the SPIRV translator does.
The patch also includes changes in other modules, fixing errors and
enabling many SPIR-V features that were omitted earlier. 15 LIT tests
are also added to demonstrate the new functionality.
Differential Revision: https://reviews.llvm.org/D129730
Co-authored-by: Aleksandr Bezzubikov <zuban32s@gmail.com>
Co-authored-by: Michal Paszkowski <michal.paszkowski@outlook.com>
Co-authored-by: Andrey Tretyakov <andrey1.tretyakov@intel.com>
Co-authored-by: Konrad Trifunovic <konrad.trifunovic@intel.com>
This code confuses LV's "Uniform" and LVL/LAI's "Uniform". Despite the
common name, these are different.
* LV's notion means that only the first lane *of each unrolled part* is
required. That is, lanes within a single unroll factor are considered
uniform. This allows e.g. widenable memory ops to be considered
uses of uniform computations.
* LVL and LAI's notion refers to all lanes across all unrollings.
IsUniformMem is in turn defined in terms of LAI's notion. Thus a
UniformMemOp is a memory operation with a loop invariant address,
meaning the same address is accessed in every iteration.
The tweaked piece of code was trying to match a uniform mem op (i.e.
fully loop invariant address), but instead checked for LV's notion of
uniformity. In theory, this meant with UF > 1, we could speculate
a load which wasn't safe to execute.
This ends up being mostly silent in current code as it is nearly
impossible to create a case where this difference is visible. The
closest I've come is the test case from 54cb87, but even then, the
incorrect result is only visible in the vplan debug output; before this
change we sink the unsafely speculated load back into the user's predicate
blocks before emitting IR. Both before and after IR are correct so the
differences aren't "interesting".
The other test changes are uninteresting. They're cases where LV's uniform
analysis is slightly weaker than SCEV isLoopInvariant.
We can always fold zext.b since it is just andi. The others require
Zba/Zbb.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D130302
Adds a number of utilities that are used to help create and update
memprof related metadata. These will be used during profile matching
and annotation, as well as by the inliner when updating the metadata.
Also adds unit tests for the utilities.
See also related RFCs:
RFC: Sanitizer-based Heap Profiler [1]
RFC: A binary serialization format for MemProf [2]
RFC: IR metadata format for MemProf [3]
(Note that the IR metadata format has changed from the RFC during
implementation, as described in the preceding patch adding the basic
metadata and verification support.)
Depends on D128141.
Differential Revision: https://reviews.llvm.org/D128854
If we build the Target libraries with -fvisibility=hidden, then
LLVM_EXTERNAL_VISIBILITY must also be able to override it back
to default visibility.
Currently, the LLVM_EXTERNAL_VISIBILITY define is a no-op for
mingw targets, thus set CMAKE_CXX_VISIBILITY_PRESET correspondingly.
This unbreaks the mingw dylib build, if the compiler actually
takes hidden visibility into account (e.g. after D130121).
(Later, once hidden visibility can be used for MinGW targets, we
can make LLVM_EXTERNAL_VISIBILITY and LLVM_LIBRARY_VISIBILITY expand
to actual attributes, and reverse this commit.)
Differential Revision: https://reviews.llvm.org/D130200
This probably should have been part of D123089, but the effects of it
don't show up until we start removing functions from the table in
D130107. Oops.
Differential Revision: https://reviews.llvm.org/D130184
This allows us to accept annotations from out-of-tree languages (the
example test is derived from Rust) so they can enjoy the benefits of
LLVM's optimizations without requiring LLVM to have language-specific
knowledge.
Differential Revision: https://reviews.llvm.org/D123091
Prior to this change, we relied on the hard-coded list for all of the
information used by MemoryBuiltins. With this change, we're able to
start relying on properties of functions described in attributes, which
opens the door to out-of-tree compilers being able to describe their
allocator functions to LLVM's optimizer logic without having to register
their implementation details with LLVM.
Differential Revision: https://reviews.llvm.org/D123090
The InstCombine test is reduced from issue #56601. Without the more
liberal match for ConstantExpr, we try to rearrange constants in
Negator forever.
Alternatively, we could adjust the definition of m_ImmConstant to be
more conservative, but that's probably a larger patch, and I don't
see any downside to changing m_ConstantExpr. We never capture and
modify a ConstantExpr; transforms just want to avoid it.
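For illustration, a small sketch of how a transform can require a "real" immediate (not the exact Negator code):

```cpp
#include "llvm/IR/PatternMatch.h"
using namespace llvm;
using namespace llvm::PatternMatch;

static bool isNegationOfImmediate(Value *V) {
  Value *X;
  Constant *C;
  // m_ImmConstant matches a constant that is not a ConstantExpr, so the
  // transform never captures an expression it would keep re-canonicalizing.
  return match(V, m_Sub(m_ImmConstant(C), m_Value(X)));
}
```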
Differential Revision: https://reviews.llvm.org/D130286
No need to add checks for every type per pointer that we couldn't create
a check for the first time around; just retry the types that weren't
successful.
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D119376
This change implements the contextual symbolizer markup elements: reset,
module, and mmap. These provide information about the runtime context of
the binary necessary to resolve addresses to symbolic values.
Summary information is printed to the output about this context.
Multiple mmap elements for the same module line are coalesced together.
The standard requires that such elements occur on their own lines to
allow for this; accordingly, anything after a contextual element on a
line is silently discarded.
Implementing this cleanly requires that the filter drive the parser;
this allows skipped sections to avoid being parsed. This also makes the
filter quite a bit easier to use, at the cost of some unused
flexibility.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D129519
If an error occurs while constructing coverage info for one of the object files, print the name of that object file, so that users know which one caused the error.
Differential Revision: https://reviews.llvm.org/D130196
Ignore the flag for now, but we can start using it for verbose output if needed.
Reviewed By: thakis
Differential Revision: https://reviews.llvm.org/D130202
This patch adds the AArch64 hook for preferPredicateOverEpilogue,
which currently returns true if SVE is enabled and one of the
following conditions (non-exhaustive) is met:
1. The "sve-tail-folding" option is set to "all", or
2. The "sve-tail-folding" option is set to "all+noreductions"
and the loop does not contain reductions,
3. The "sve-tail-folding" option is set to "all+norecurrences"
and the loop has no first-order recurrences.
Currently the default option is "disabled", but this will be
changed in a later patch.
I've added new tests to show the options behave as expected here:
Transforms/LoopVectorize/AArch64/sve-tail-folding-option.ll
Differential Revision: https://reviews.llvm.org/D129560
Replace the value-accepting isReallocLikeFn() overload with a
getReallocatedOperand() function, which returns which operand is
the one being reallocated. Currently, this is always the first one,
but once allockind(realloc) is respected, the reallocated operand
will be determined by the allocptr parameter attribute.
Remove isFreeCall() in favor of getFreedOperand(). Replace the
two remaining uses with a getFreedOperand() != nullptr check, as
they only care that something is getting freed. (The usage in DSE
is correct as such. The allocator-related checks in CFLGraph look
rather questionable in general.)
Use getFreedOperand() instead of isFreeCall() to remove the
implicit assumption that any pointer operand to a free function
is the operand being freed. This won't actually matter until we
handle allockind(free).
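A hedged sketch of a migrated call site:

```cpp
#include "llvm/Analysis/MemoryBuiltins.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
using namespace llvm;

void visitCall(CallBase &CB, const TargetLibraryInfo &TLI) {
  // Ask *which* operand is freed instead of assuming operand 0 is; this
  // stays correct once allockind("free")/allocptr mark another parameter.
  if (Value *FreedOp = getFreedOperand(&CB, &TLI)) {
    (void)FreedOp; // reason about exactly this pointer
  }
}
```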
This patch merges a consecutive sequence of
  s_or_saveexec s_o, s_i
  s_xor exec, exec, s_o
into a single
  s_andn2_saveexec s_o, s_i
instruction.
This patch also cleans up the SIOptimizeExecMasking pass a bit.
Reviewed By: nhaehnle
Differential Revision: https://reviews.llvm.org/D129073
DWARF files may contain overlapping address ranges. E.g. this can happen if two
copies of a function have identical instruction sequences and end up being shared.
That looks incorrect from the point of view of the DWARF spec. The current
implementation of DWARFLinker does not combine overlapped address ranges. It would
be good if such ranges were handled in some useful way. Thus, this patch allows
DWARFLinker to combine overlapping ranges into a single one.
Depends on D86539
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D123469
This patch lowers
  duplane128(insert_subvector(undef, bitcast(op(128bitsubvec)), 0), 0)
to
  bitcast(duplane128(insert_subvector(undef, op(128bitsubvec), 0), 0)).
This enables floating-point loads to match patterns added in
https://reviews.llvm.org/D130010
Differential Revision: https://reviews.llvm.org/D130013
We currently assume in a number of places that free-like functions
free their first argument. This is true for all hardcoded free-like
functions, but with the new attribute-based design, the freed
argument is supposed to be indicated by the allocptr attribute.
To make sure we handle this correctly once allockind(free) is
respected, add a getFreedOperand() helper which returns the freed
argument, rather than just indicating whether the call frees *some*
argument.
This migrates most but not all users of isFreeCall() to the new
API. The remaining users are a bit more tricky.
Reapply the patch with getObjectSize() replaced by getAllocSize().
The former will also look through calls that return their argument,
and we'll end up placing dereferenceable attributes on intrinsics
like llvm.launder.invariant.group. While this isn't wrong, it also
doesn't seem to be particularly useful. For now, use getAllocSize()
instead, which sticks closer to the original behavior of this code.
-----
This code is just interested in the allocsize, not any other
allocator properties.
Default getAllocSize() to use the trivial mapper. Also switch
from using std::function to function_ref.
Furthermore, update the doc comment to point out a subtle difference
between getAllocSize() and getObjectSize(): The latter may also
return something for calls that return their argument (via "returned"
attribute or special intrinsics like invariant groups).
Fetching allocation data for calls is a rather hot operation, and
TLI lookups are slow. We can greatly reduce the number of calls
for which TLI is queried by checking that they return a pointer
value first, as this is a requirement for allocation functions
anyway.
- Test cases for arches that only have 16-bit instructions, such as ck801/ck802, need
  to be compiled with -mattr=+btst16
- Fix the GPR copy instruction with MOV16 for 16-bit-only arches.
Vector fptosi_sat and fptoui_sat were being expanded by unrolling the
vector operation. This doesn't work for scalable vectors, so this patch
adds a call to TLI.expandFP_TO_INT_SAT if the vector is scalable.
Scalable tests are added for AArch64 and RISCV. Some of the AArch64
fptoi_sat operations should be legal, but that will be handled in
another patch.
Differential Revision: https://reviews.llvm.org/D130028
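As a rough illustration (function name mine), this is the kind of scalable operation that cannot be expanded by unrolling, since the element count is not a compile-time constant:
```
define <vscale x 4 x i32> @fptosi_sat_scalable(<vscale x 4 x float> %x) {
  %r = call <vscale x 4 x i32> @llvm.fptosi.sat.nxv4i32.nxv4f32(<vscale x 4 x float> %x)
  ret <vscale x 4 x i32> %r
}
declare <vscale x 4 x i32> @llvm.fptosi.sat.nxv4i32.nxv4f32(<vscale x 4 x float>)
```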
There is a problem in loop cache analysis that the types of SCEV variables
`Coeff` and `ElemSize` in function `isConsecutive()` may not match. The
mismatch would cause SCEV failures when `Coeff` is multiplied with `ElemSize`.
The fix in this patch is to extend the types of both `Coeff` and `ElemSize` to
whichever of the two is wider. As a clean-up, the duplicate calculation
of `Stride` in `computeRefCost()` is then removed.
Reviewed By: Meinersbur, #loopoptwg
Differential Revision: https://reviews.llvm.org/D128877
(srl (and X, 1<<C), C) is the form we receive for testing bit C.
An earlier combine removed the setcc so it wasn't there to match
when we created the SELECT_CC. This doesn't happen for BR_CC because
generic DAG combine rebuilds the setcc if it is used by BRCOND.
We can shift X left by XLen-1-C to put the bit to be tested in the
MSB, and use a signed compare with 0 to test the MSB.
The only difference between the combines was in the calls to getNode
that include the true/false values for SELECT_CC or the chain
and branch target for BR_CC.
Wrap the rest of the code into a helper that reads LHS, RHS, and
CC and outputs new values and a bool if a new node needs to be
created.
If C > 10, this will require a constant to be materialized for the
And. To avoid this, we can shift X left by XLen-1-C bits to put the
tested bit in the MSB, then we can do a signed compare with 0 to
determine if the MSB is 0 or 1. Thanks to @reames for the suggestion.
I've implemented this inside of translateSetCCForBranch which is
called when setcc+brcond or setcc+select is converted to br_cc or
select_cc during lowering. It doesn't make sense to do this for
general setcc since we lack a sgez instruction.
I've tested bits 10, 11, 31, 32, and 63, plus a couple of bits between 11 and 31
and between 32 and 63, for both i32 and i64 where applicable. Select
has some deficiencies where we receive (and (srl X, C), 1) instead.
This doesn't happen for br_cc due to the call to rebuildSetCC in the
generic DAGCombiner for brcond. I'll explore improving select in a
future patch.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D130203
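A minimal sketch of the idea on RV64 (the function and constants are my own; the exact emitted sequence may differ):
```
; Testing bit 11 of %x: 1 << 11 = 2048 does not fit in a simm12, so the
; bit is shifted into the MSB and tested with a signed compare against
; zero instead of materializing the AND mask.
define i64 @select_bit11(i64 %x, i64 %a, i64 %b) {
  %m = and i64 %x, 2048              ; 1 << 11
  %t = icmp ne i64 %m, 0
  %r = select i1 %t, i64 %a, i64 %b
  ret i64 %r
}
; Roughly: slli a0, a0, 52           ; 52 = XLen-1-C = 63-11
;          bltz a0, ...              ; MSB set <=> bit 11 was set
```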
MapperJITLinkMemoryManager supports executor memory management using any
implementation of MemoryMapper to do the transfer such as InProcessMapper or
SharedMemoryMapper.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D129495
Since 67220c2ad7 empty SPSSequence<char>s deserialize to default-constructed
ArrayRef<char>s, which have a null data field. We need to check for this to
avoid memcpy'ing from a nullptr.
This should fix the bot failure in
https://lab.llvm.org/buildbot/#/builders/85/builds/9323
We were quite conservative when it came to PHI node handling, in order to
avoid recursive reasoning. Now we check more directly whether we have seen
a PHI already or not. This allows non-recursive PHI chains to be handled.
This also exposed a bug as we did only model the effect of one loop
traversal. `phi_no_store_3` has been adapted to show how we would have
used `undef` instead of `1` before. With this patch we don't replace
it at all, which is expected as we do not argue about loop iterations
(or alignments).
If we only have exact accesses we should never require the bit-pattern
to be uniform (in this case 0). Only a non-exact access should force us
to require only 0 values.
Add basic support for the MemProf metadata (!memprof and !callsite)
which was initially described in "RFC: IR metadata format for MemProf"
(https://discourse.llvm.org/t/rfc-ir-metadata-format-for-memprof/59165).
The bulk of the patch is verification support, along with some tests.
There are a couple of changes to the format described in the original
RFC:
First, initial measurements suggested that a tree format for the stack ids in
the contexts would be more efficient, but subsequent evaluation with
large applications showed that in fact the cost of the additional
metadata nodes required by this deduplication scheme overwhelmed the
benefit from sharing stack id nodes. Therefore, the implementation here
and in follow on patches utilizes a simpler scheme of lists of stack id
integers in the memprof profile contexts and callsite metadata. The
follow on matching patch employs context trimming optimizations to
reduce the cost.
Secondly, instead of verbosely listing all profiled fields in each
profiled context (memory info block or MIB), and deferring the
interpretation of the profile data, the profile data is evaluated and
converted into string tags specifying the behavior (e.g. "cold") during
profile matching. This reduces the verbosity of the profile metadata,
and allows additional context trimming optimizations. As a result, the
named metadata schema description is also no longer needed.
Differential Revision: https://reviews.llvm.org/D128141
Noticed via inspection; to my knowledge, impossible to hit today. In theory, we could have a fixed stride check be analyzed, then a scalable one. With the old code, the scalable one would be silently dropped, and the runtime guard would go ahead with only the fixed one. This would be a miscompile.
If we are right shifting a multiply by a negated power of 2 where
the power of 2 is the same as the shift amount, we can replace with
a negate followed by an And.
New tests have not been committed yet but the patch shows the diffs.
Let me know if you want any changes or additional tests.
Differential Revision: https://reviews.llvm.org/D130103
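A minimal before/after sketch of the fold (i32 with C = 3; the example is mine):
```
define i32 @shift_of_negated_mul(i32 %x) {
  %m = mul i32 %x, -8                ; -(1 << 3)
  %s = lshr i32 %m, 3
  ret i32 %s
}
; Since x * -8 == (-x) << 3, the right shift just masks off the top
; three bits of the negated value:
;   %neg = sub i32 0, %x
;   %res = and i32 %neg, 536870911   ; (1 << 29) - 1
```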
VOPC DPP should not be formed when the row_mask and bank_mask are not
0xf (full) because the resulting VOP DPP would have different semantics
than the MOV DPP followed by VOP. Existing checks in GCNDPPCombine cover
this case but for different reasons, so assert the property for
future-proofing.
Reviewed By: nhaehnle
Differential Revision: https://reviews.llvm.org/D130101
Putting the AllocationFn check before I->willReturn() allows CodeGenPrepare to remove useless malloc instructions.
Differential Revision: https://reviews.llvm.org/D130126
This change improves ctags generation for tablegen files.
For the following example
```
class A;
class A {
int a;
}
```
Previously, tags were generated only for the forward declaration of class 'A'.
This patch allows generating tags for both the forward declaration
and the subsequent definition of class 'A'.
Reviewed By: barannikov88
Original patch by: rusyaev-roman (Roman Rusyaev)
Some adjustments by: nhaehnle (Nicolai Hähnle)
Differential Revision: https://reviews.llvm.org/D129935
An srem or sdiv has two cases which can cause undefined behavior, not just one: division by zero, and signed overflow when dividing INT_MIN by -1. The existing code did not account for this, and as a result, we miscompiled when we encountered e.g. a srem i64 %v, -1 in a conditional block.
Instead of hand rolling the logic, just use the utility function which exists exactly for this purpose.
Differential Revision: https://reviews.llvm.org/D130106
When F calls G calls H, G is nounwind, and G is inlined into F, then the
inlined call-site to H should be effectively nounwind so as not to lose
information during inlining.
If H itself is nounwind (which often happens when H is an intrinsic), we
no longer mark the callsite explicitly as nounwind. Previously, there
were cases where the inlined call-site of H differs from a pre-existing
call-site of H in F *only* in the explicitly added nounwind attribute,
thus preventing common subexpression elimination.
v2:
- just check CI->doesNotThrow
v3 (resubmit after revert at 3443788087):
- update Clang tests
Differential Revision: https://reviews.llvm.org/D129860
In fact, in unreachable code we can say that every fact is true, so do not waste time trying to
do something smarter.
Formally it's not an NFC because it may change query results in unreachable code, but they
won't have any impact on execution.
Hypothetical CT boost expected but not measured in practice.
Differential Revision: https://reviews.llvm.org/D129878
The n_type field in the symbol table entry has two interpretations in XCOFF32, and a single interpretation in XCOFF64.
The new interpretation is used in XCOFF32 if the value of the o_vstamp field in the auxiliary header is 2.
In XCOFF64 and the new XCOFF32 interpretation, the n_type field is used for the symbol type and visibility.
The patch writes the aux header with an o_vstamp field value of 2 when visibility is specified in XCOFF32, so that the new XCOFF32 interpretation is used.
Reviewed By: DiggerLin, jhenderson
Differential Revision: https://reviews.llvm.org/D128148
Similar to what we already do in getNode for basic ADD/SUB nodes, return the X operand directly, but here we know that there will be no/zero overflow as well.
As noted on D127115 - this path is being exercised by llvm/test/CodeGen/ARM/dsp-mlal.ll, although I haven't been able to get any codegen without a topological worklist.
Given a patch like D129506, using instructions not valid for the current
target feature set becomes an error. This fixes an issue in
ARMExpandPseudo::ExpandCMP_SWAP where Thumb2 compares were used in
Thumb1Only code, such as thumbv8m.baseline targets.
Differential Revision: https://reviews.llvm.org/D129695
This patch introduces some initial def-use verification. This catches
cases like the one fixed by D129436.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D129717
PromoteIntRes_BUILD_VECTOR currently always ANY_EXTENDs build vector operands, but if this is a constant boolean vector we're losing the useful ability to keep the vector matching the BooleanContents mode used by the target.
This patch extends constant boolean vectors according to target BooleanContents, allowing a number of additional all-bits folds (notably XOR -> NOT conversions) to occur.
Differential Revision: https://reviews.llvm.org/D129641
By default if SVE is enabled we want the select instruction used for
reductions to be inside the loop, rather than outside. This makes it
possible for the backend to fold the select into the operation to
produce a single predicated add, fadd, etc.
Differential Revision: https://reviews.llvm.org/D129763
Add promotion and expansion of integer operands for
experimental_vp_strided SelectionDAG nodes; the expansion is actually
just a truncation of the stride operand.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D123112
When we lower a dynamic stack allocation, we need to subtract the pattern `x15 << 4` from sp.
The subtract instruction with an arithmetic shifted register (SUBXrs) can't refer to sp. So for now we need two extra movs, like:
```
mov x0, sp
sub x0, x0, x15, lsl #4
mov sp, x0
```
If we want to refer to sp in the subtract instruction, like this:
```
sub sp, sp, x15, lsl #4
```
we must use the arithmetic extended register version (SUBXrx).
So in this patch, when we find a sub with sp as its src0 operand, we try to select it to SUBXrx64.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D129932
To solve the readnone problems in coroutines. See
https://discourse.llvm.org/t/address-thread-identification-problems-with-coroutine/62015
for details.
According to the discussion, we decided to fix the problem by inserting
isPresplitCoroutine() checks in different passes instead of
wrapping/unwrapping readnone attributes in CoroEarly/CoroCleanup passes.
In this direction, we might not be able to cover every case at first.
Let's take a "find and fix" strategy.
Reviewed By: nikic, nhaehnle, jyknight
Differential Revision: https://reviews.llvm.org/D127383
The first attempt missed changing test files for tools
(update_llc_test_checks.py).
Original commit message:
This implements the main suggested change from issue #56498.
Using the shorter (non-extending) instruction with only
-Oz ("minsize") rather than -Os ("optsize") is left as a
possible follow-up.
As noted in the bug report, the zero-extending load may have
shorter latency/better throughput across a wide range of x86
micro-arches, and it avoids a potential false dependency.
The cost is an extra instruction byte.
This could cause perf ups and downs from secondary effects,
but I don't think it is possible to account for those in
advance, and that will likely also depend on exact micro-arch.
This does bring LLVM x86 codegen more in line with existing
gcc codegen, so if problems are exposed they are more likely
to occur for both compilers.
Differential Revision: https://reviews.llvm.org/D129775
There were two problems with the previous setup:
1. We weren't setting its size, which caused problems when `__llvm_addrsig`
wasn't the last section. In particular, `__debug_line` (if created) is
generated and placed after `__llvm_addrsig`, and would result in an
invalid object file w/ overlapping sections being emitted.
2. The symbol indices could be invalidated if e.g. `llvm-strip` ran on
the object file. See discussion [here][1].
To fix both these issues, we use symbol relocations instead of encoding
symbol indices directly in the section contents. The section itself
doesn't contain any data. That sidesteps the layout problem in addition
to solving the second issue.
The corresponding LLD change to read in this new format: {D128938}.
It will fix the icf-safe.ll test failure on this diff.
[1]: https://discourse.llvm.org/t/problems-with-mach-o-address-significance-table-generation/63392/
Reviewed By: #lld-macho, alx32
Differential Revision: https://reviews.llvm.org/D127637
...with more fixes.
The original patch was reverted in 3e9cc543f2 due to bot failures caused by
a missing dependence on librt. That issue was fixed in 32d8d23cd0, but that
commit also broke sanitizer bots due to a bug in SimplePackedSerialization:
empty ArrayRef<char>s triggered a zero-byte memcpy from a null source. The
ArrayRef<char> serialization issue was fixed in 67220c2ad7, and this patch has
also been updated with a new custom SharedMemorySegFinalizeRequest message that
should avoid serializing empty ArrayRefs in the first place.
https://reviews.llvm.org/D128544
DWARFRecordSectionSplitter pre-builds a splitBlock cache, but wasn't passing it
to the call to splitBlock. This was an oversight in the original patch.
For the longest time we used `AAValueSimplify` and
`genericValueTraversal` to determine "potential values". This was
problematic for many reasons:
- We recomputed the result a lot as there was no caching for the 9
locations calling `genericValueTraversal`.
- We added the idea of "intra" vs. "inter" procedural simplification
only as an afterthought. `genericValueTraversal` did offer an option
but `AAValueSimplify` did not. Thus, we might end up with "too much"
simplification in certain situations and then gave up on it.
- Because `genericValueTraversal` was not a real `AA` we ended up with
problems like the infinite recursion bug (#54981) as well as code
duplication.
This patch introduces `AAPotentialValues` and replaces the
`AAValueSimplify` uses with it. `genericValueTraversal` is folded into
`AAPotentialValues` as are the instruction simplifications performed in
`AAValueSimplify` before. We further distinguish "intra" and "inter"
procedural simplification now.
`AAValueSimplify` was not deleted as we haven't ported the
re-materialization of instructions yet. There are other differences from
the former handling, e.g., we may not fold trivially foldable
instructions right now: `add i32 1, 1` is not folded to `i32 2`,
but if an operand would be simplified to `i32 1` we would still fold it.
We are also even more aware of function/SCC boundaries in CGSCC passes,
which is good even if some tests look like they regress.
Fixes: https://github.com/llvm/llvm-project/issues/54981
Note: A previous version was flawed and consequently reverted in
6555558a80.
- add zstd to `llvm::compression` namespace
- add a CMake option `LLVM_ENABLE_ZSTD` with behavior mirroring that of `LLVM_ENABLE_ZLIB`
- add tests for zstd to `llvm/unittests/Support/CompressionTest.cpp`
- Debian users should install libzstd when using `LLVM_ENABLE_ZSTD=FORCE_ON` from source due to this bug: https://bugs.launchpad.net/ubuntu/+source/libzstd/+bug/1941956
Reviewed By: leonardchan, MaskRay
Differential Revision: https://reviews.llvm.org/D128465
Implement an intrinsic for use lowering LDS variables to different
addresses from different kernels. This will allow kernels that cannot
reach an LDS variable to avoid wasting space for it.
There are a number of implicit arguments accessed by intrinsics already,
so this implementation closely follows the existing handling. It is slightly
novel in that this SGPR is written by the kernel prologue.
In the general case it is necessary to put variables at different addresses
so that they can be compactly allocated, and thus an indirect function call
needs some means of determining where a given variable was allocated.
Claiming an arbitrary SGPR into which an integer can be written by the
kernel (in this implementation based on metadata associated with that
kernel), and which is then passed on to indirect call sites, is sufficient
to determine the variable address.
The intent is to emit a __const array of LDS addresses and index into it.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D125060
powi() is not a standard math library function; it is specified
with non-strict semantics in the LangRef. We currently require
'afn' to do this transform when it needs a sqrt(), so I just
extended that requirement to the whole-number exponent too.
This bug was introduced with:
b17754bcaa
...where we deferred expansion of pow() to later passes.
When determining if an `and` should be merged into an extending load
the constant argument to the `and` is currently not checked if the
argument requires truncation. This prevents the combine happening when
the vector width is half the normal available vector width for SVE VLA
vectors.
Reviewed By: c-rhodes
Differential Revision: https://reviews.llvm.org/D129281
This is helpful for debugging issues with very large functions or SCCs.
It is also helpful when function names are very large and it's hard to tell the number of nodes in an SCC.
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D128003
For most DPP instructions, the old operand stores the value that was in
the current lane before the DPP operation, and is tied to the
destination. For VOPC DPP, this is unnecessary and incorrect.
There appears to have been a latent bug related to D122737 in
SIInstrInfo::isOperandLegal: if you checked whether a register operand was
legal when the InstructionDesc expected an immediate, it reported that it
was valid. The fix is necessary for, and tested in, this patch.
Reviewed By: foad, rampitec
Differential Revision: https://reviews.llvm.org/D130040
Use the FreeBSD AArch64 memory layout values when building for it.
These are based on the x86_64 values, scaled to take into account the
larger address space on AArch64.
Reviewed by: vitalybuka
Differential Revision: https://reviews.llvm.org/D125883
AArch64 has a larger address space than 64-bit x86. Use the larger
shadow offset on FreeBSD AArch64.
Reviewed by: vitalybuka
Differential Revision: https://reviews.llvm.org/D125873
We currently assert in vectorizeTree(TreeEntry*) when processing a PHI
bundle in a block containing a catchswitch. We attempt to set the
IRBuilder insertion point following the catchswitch, which is invalid.
This is done so that ShuffleBuilder.finalize() knows where to insert
a shuffle if one is needed.
To avoid this occurring, watch out for catchswitch blocks during
buildTree_rec() processing, and avoid adding PHIs in such blocks to
the vectorizable tree. It is unlikely that constraining vectorization
over an exception path will cause a noticeable performance loss, so
this seems preferable to trying to anticipate when a shuffle will and
will not be required.
Since D129288, callbr is allowed to have duplicate successors. This
patch removes a limitation which prevents optimizations from actually
producing such callbrs.
Differential Revision: https://reviews.llvm.org/D129997
This patch implements the proposal from https://lists.llvm.org/pipermail/llvm-dev/2020-August/144579.html.
llvm-dwarfutil is a tool used for processing debug info (DWARF) located in built binary files, in order to improve debug info quality and reduce debug info size. The patch currently implements a smaller set of command-line options (compared to the proposal):
```
./llvm-dwarfutil [options] <input file> <output file>
--garbage-collection Do garbage collection for debug info(default)
-j <value> Alias for --num-threads
--no-garbage-collection Don`t do garbage collection for debug info
--no-odr-deduplication Don`t do ODR deduplication for debug types
--no-odr Alias for --no-odr-deduplication
--no-separate-debug-file
Create single output file, containing debug tables(default)
--num-threads <threads> Number of available threads for multi-threaded execution. Defaults to the number of cores on the current machine
--odr-deduplication Do ODR deduplication for debug types(default)
--odr Alias for --odr-deduplication
--separate-debug-file Create two output files: file w/o debug tables and file with debug tables
--tombstone [bfd,maxpc,exec,universal]
Tombstone value used as a marker of invalid address(default: universal)
=bfd - Zero for all addresses and [1,1] for DWARF v4 (or less) address ranges and exec
=maxpc - Minus 1 for all addresses and minus 2 for DWARF v4 (or less) address ranges
=exec - Match with address ranges of executable sections
=universal - Both: bfd and maxpc
```
Reviewed By: clayborg
Differential Revision: https://reviews.llvm.org/D86539
Contrary to what the name suggests, this can reuse any store as a base for a
memory-based vector extract. If that store is underaligned the loads
created to extract will have an invalid alignment. Since most CPUs are
forgiving wrt alignment this is almost never an issue, on x86 this is
only reproducible by extracting a 128 bit vector out of a wider vector.
I tried making a test case in the context of
https://reviews.llvm.org/D127982 but it's really really fragile, as the
output pretty much looks like a missed optimization.
If none of the bits of a VBICimm are demanded, we can remove the node
entirely using the input operand instead.
Differential Revision: https://reviews.llvm.org/D129966
The "xor (X >> ShiftC), XorC --> (not X) >> ShiftC" fold is currently limited to the XOR mask being a shifted all-bits mask, but we can relax this to only need to match under the demanded bits.
This helps expose more bit extraction/clearing patterns and fixes the PowerPC testCompares*.ll regressions from D127115
Alive2: https://alive2.llvm.org/ce/z/fl7T7K
Differential Revision: https://reviews.llvm.org/D129933
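For reference, a sketch of the existing fold in IR form (example values mine); the patch relaxes the requirement that XorC equal the shifted all-ones mask exactly, asking only that it match under the demanded bits:
```
define i32 @xor_to_not_shift(i32 %x) {
  %s = lshr i32 %x, 4
  %r = xor i32 %s, 268435455   ; 0x0FFFFFFF = 0xFFFFFFFF >> 4
  ret i32 %r
}
; => %n = xor i32 %x, -1      ; not X
;    %r = lshr i32 %n, 4
```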
AMDGPUPerfHintAnalysis doesn't set the memory bound attribute if
FuncInfo::InstCost outweighs MemInstCost even if we have a basic block
with relatively high global memory access. GCNSchedStrategy could revert
optimal scheduling in favour of occupancy which seems to degrade
performance for some kernels. This change introduces the
HasDenseGlobalMemAcc metric in the heuristic that makes the analysis
more conservative in these cases.
This fixes SWDEV-334259/SWDEV-343932
Differential Revision: https://reviews.llvm.org/D129759
Fixes https://github.com/llvm/llvm-project/issues/56484
H registers are 16 bit views of AArch64's Neon registers and
B are the 8 bit views.
MSVC does not support 16-bit floats (there is some mention in DirectX but I
couldn't find a way to get to it), so for lack of a better reference
I'm using:
85c9b41b33/server/references/dia/include/cvconst.h
(the other microsoft-pdb repo is no longer up to date)
Luckily clang does support fp16 so a test is added for that.
There is no 8 bit float type so I had to get creative with the
test case. We're not testing for correct debug info here just
that we can select the B register and not crash in the process.
For FPCR it's never going to be passed as an argument so I've
not added a test for it. It is included to keep our list looking
the same as the reference.
Reviewed By: majnemer
Differential Revision: https://reviews.llvm.org/D129774
One of the transforms in LoopSimplifyCFG demands that the LCSSA form is
truly maintained for all values, tokens included, otherwise it may end up creating
a use that is not dominated by def (and Phi creation for tokens is impossible).
Detect this situation and prevent transform for it early.
Differential Revision: https://reviews.llvm.org/D129984
Reviewed By: efriedma
This patch adds an SVE pattern to recognize the use of a select with an
fadda in the form fadda(ptrue, x, select(mask, y, -0.0)). In this case
the select can be folded away, with the select mask used as the
predicate for fadda. This improves the codegen when vectorizing loops
with ordered fp reductions.
Differential Revision: https://reviews.llvm.org/D129623
Following the discussion in PR56243, we need to somehow detect the situation
when token values penetrate LCSSA form, for transforms that require that
it is maintained by all values (for example, to sustain use-def dominance
invariants). This patch introduces a parameter to the LCSSA checkers to
control whether they ignore tokens.
Differential Revision: https://reviews.llvm.org/D129983
Reviewed By: efriedma
This revision adds support for scalarizing a binary operation of two scalable splat vectors.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D122791
This patch implements the recently ratified extension Zmmul, a subextension
of M (Integer Multiplication and Division) consisting of only the
multiplication part of it.
Differential Revision: https://reviews.llvm.org/D103313
Reviewed By: craig.topper, jrtc27, asb
The flag `-fcs-profile-generate` for enabling CSIRPGO moves the pass
`pgo-instrumentation` after inlining. Function entry coverage works fine
with this change, so remove the assert. I had originally left this
assert in because I had not tested this at the time.
Reviewed By: davidxl, MaskRay
Differential Revision: https://reviews.llvm.org/D129407
This was stored in LiveIntervals, but not actually used for anything
related to LiveIntervals. It was only used in one check for if a load
instruction is rematerializable. I also don't think this was entirely
correct, since it was implicitly assuming constant loads are also
dereferenceable.
Remove this and rely only on the invariant+dereferenceable flags in
the memory operand. Set the flag based on the AA query upfront. This
should have the same net benefit, but has the possible disadvantage of
making this AA query nonlazy.
Preserve the behavior of assuming pointsToConstantMemory implying
dereferenceable for now, but maybe this should be changed.
r175673 changed repairIntervalsInRange to find anchoring end points for
ranges automatically, but the calculation of Begin included the first
instruction found that already had an index. This patch changes it to
exclude that instruction:
1. For symmetry, so that the half open range [Begin,End) only includes
instructions that do not already have indexes.
2. As a possible performance improvement, since repairOldRegInRange
will scan fewer instructions.
3. Because repairOldRegInRange hits assertion failures in some cases
when it sees a def that already has a live interval.
(3) fixes about ten tests in the CodeGen lit test suite when
-early-live-intervals is forced on.
Differential Revision: https://reviews.llvm.org/D110182
This patch relands the fix for the f32 vararg assertion on z/OS, which was reverted previously due to the testcase failing on non-z/OS platforms. It is now passing.
The tablegen lines that specify the XPLINK64 calling convention for promoting an f32 vararg to an f64 are effectively overwritten by the following tablegen line, which bitcasts an f64 vararg to an i64 (so that it can be used in the GPRs). Thus it becomes a bitcast from f32 to i64. We don't handle bitcasts for f32s, and so this causes an assertion to be thrown.
We fix this by simplifying the tablegen lines to explicitly show this behaviour, and allow the f32 in the bitcast case by first promoting it to an f64.
(and X, 0xffffffff) requires 2 shifts in the base ISA. Since we
know the result is being used by a compare, we can use a sext_inreg
instead of an AND if we also modify C1 to have 33 sign bits instead
of 32 leading zeros. This can also improve the generated code for
materializing C1.
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D129980
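A rough sketch of the idea (constants are mine):
```
; (and %x, 0xffffffff) needs slli+srli in the base ISA, but since the
; result only feeds a compare, a sext_inreg (sext.w) works just as well
; if the compare constant is sign-extended from 32 bits too.
define i1 @masked_cmp(i64 %x) {
  %m = and i64 %x, 4294967295        ; 0xFFFFFFFF
  %c = icmp eq i64 %m, 2147483653    ; C1 = 0x80000005
  ret i1 %c
}
; => sext.w %x, then compare against 0xFFFFFFFF80000005 (C1 with 33
;    sign bits), which is also cheaper to materialize (lui+addi).
```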
The original patch revealed an issue of reading incorrect values on BE hosts.
That is now changed to use `endian::read32le()` and `endian::read64le()`.
Original commit message:
The current implementation assumes that all pointers used in the
initialization of an aggregate are aligned according to the pointer size
of the target; that might not be so if the object is packed. In that
case, an array of .u8 should be used and pointers should be decorated
with the mask() operator.
The operator was introduced in PTX ISA 7.1, so an error is issued if the
case is detected for an earlier version.
Differential Revision: https://reviews.llvm.org/D127504
setge/le/uge/ule selected by themselves require an xori with 1.
If we're negating the setcc, we can fold the xori with the neg
to create an addi with -1.
This works because xori X, 1 is equivalent to 1 - X if X is either
0 or 1. So we're doing -(1 - X) which is X-1 or X+-1.
This improves the code for selecting between 0 and -1 based on a
condition for some conditions.
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D129957
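A small worked example (function mine):
```
; Select -1 or 0 based on a >= b.
define i64 @sge_mask(i64 %a, i64 %b) {
  %c = icmp sge i64 %a, %b
  %r = sext i1 %c to i64       ; -1 if a >= b, else 0
  ret i64 %r
}
; Before: slt a0, a0, a1 ; xori a0, a0, 1 ; neg a0, a0
; After:  slt a0, a0, a1 ; addi a0, a0, -1
; since -(1 - slt) == slt - 1 when slt is 0 or 1.
```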
D123493 introduced llvm::Module::Min to encode module flags metadata for AArch64
BTI/PAC-RET. llvm::Module::Min does not take effect when the flag is absent in
one module. This behavior is misleading and does not address backward
compatibility problems (when a bitcode with "branch-target-enforcement"==1 and
another without the flag are merged, the merge result is 1 instead of 0).
To address the problems, require Min flags to be non-negative and treat absence
as having a value of zero. For an old bitcode without
"branch-target-enforcement"/"sign-return-address", its value is treated as 0.
Differential Revision: https://reviews.llvm.org/D129911
The DAG Combiner unnecessarily restricts commutative CSE
to nodes with a single result value. This commit removes
that restriction.
Signed-off-by: Itay Bookstein <ibookstein@gmail.com>
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D129666
We can use lw to load 4 bytes from the stack and sign extend them
instead of loading all 8 bytes.
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D129948
When F calls G calls H, G is nounwind, and G is inlined into F, then the
inlined call-site to H should be effectively nounwind so as not to lose
information during inlining.
If H itself is nounwind (which often happens when H is an intrinsic), we
no longer mark the callsite explicitly as nounwind. Previously, there
were cases where the inlined call-site of H differs from a pre-existing
call-site of H in F *only* in the explicitly added nounwind attribute,
thus preventing common subexpression elimination.
v2:
- just check CI->doesNotThrow
Differential Revision: https://reviews.llvm.org/D129860
Parse op_sel for *_e64_dpp VOP3 opcodes.
Depends on D129637 and setting of VOP3_OPSEL in dpp pseudos.
Differential Revision: https://reviews.llvm.org/D129767
```
llvm/lib/Analysis/LoopAccessAnalysis.cpp:916:12: error: no viable conversion
from returned value of type 'SmallVector<[...], 2>' to function return type
'SmallVector<[...], (default) CalculateSmallVectorDefaultInlinedElements<T>::value aka 3>'
  return Scevs;
         ^~~~~
```
If a DISubprogram was not cloned (e.g. we are cloning a function that has other
functions inlined into it, and subprograms of the inlined functions are
not supposed to be cloned), it doesn't make sense to clone its DILexicalBlocks
as well. Otherwise we'll get duplicated DILexicalBlocks that may confuse
debug info emission in AsmPrinter.
I believe it also makes no sense to clone any DILocalVariables or maybe
other local entities if their parent subprogram was not cloned, because
they will be dangling and will not participate in further emission.
Reviewed By: aprantl
Differential Revision: https://reviews.llvm.org/D127102
This simplifies NVPTXAsmPrinter::AggBuffer and its usage.
It is also a preparation for D127504.
Differential Revision: https://reviews.llvm.org/D129773
This builds on the previous forked pointers patch, which only accepted
a single select as the pointer to check. A recursive function to walk
through IR has been added, which searches for either a loop-invariant
or addrec SCEV.
This will only handle a single fork at present, so selects of selects
or a GEP with a select for both the base and offset will be rejected.
There is also a recursion limit, with a CLI option to change it.
Reviewed By: fhahn, david-arm
Differential Revision: https://reviews.llvm.org/D108699
As callbr is now allowed to have duplicate destinations, we can
have a callbr with a unique successor. Make sure it doesn't get
dropped, as we still need to preserve the side-effect.
Saves some add instructions on a couple Rage 2 shaders and is also a
prerequisite for a coming-soon change matching (register + immediate)
offsets.
Reviewed By: foad, arsenm
Differential Revision: https://reviews.llvm.org/D129095
We only need to check the block for the normal/default destination,
not for other destinations. Using the value in those would be
illegal anyway.
The callbr case cannot actually happen here, because callbr is
currently limited to inline asm. Retaining it to match the spirit
of the original code.
Undef tokens may appear in unreachable code as a result of RAUW by some optimization,
and this should not be considered bad IR.
Patch by Dmitry Bakunevich!
Differential Revision: https://reviews.llvm.org/D128904
Reviewed By: mkazantsev
We added the following ISEL pattern for i64 immediates in D87384; this patch does the same for i32.
`mul with (2^N * int16_imm) -> MULLI + RLWINM`
Reviewed By: shchenz
Differential Revision: https://reviews.llvm.org/D129708
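A minimal sketch of the new i32 case (example constant mine):
```
; 40 = 2^3 * 5, and 5 fits in a 16-bit immediate.
define i32 @mul40(i32 %x) {
  %m = mul i32 %x, 40
  ret i32 %m
}
; Roughly: mulli r3, r3, 5    ; x * 5
;          slwi  r3, r3, 3    ; (x * 5) << 3, an RLWINM alias
```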
At the moment, the VPPredInstPHIRecipe is not used by subsequent users of
the predicated instruction's recipe. This incorrectly models the def-use
chains, as all later uses should use the phi recipe. Fix that by delaying
recording of the recipe.
Reviewed By: Ayal
Differential Revision: https://reviews.llvm.org/D129436
Callbr is no longer an indirect terminator in the sense that is
relevant here (that its successors cannot be updated). The primary
effect of this change is that callbr no longer prevents formation
of loop simplify form.
I decided to drop the isIndirectTerminator() method entirely and
replace it with isa<IndirectBrInst>() checks. I assume this method
was added to abstract over indirectbr and callbr, but it never
really caught on, and there is nothing left to abstract anymore
at this point.
Differential Revision: https://reviews.llvm.org/D129849
Added a function to the ExpandVectorPredication pass to handle VP loads
and stores.
Reviewed By: simoll
Differential Revision: https://reviews.llvm.org/D109584
This change introduces the dynamic stack boolean field to code-object-v3
and above under the code properties of the kernel descriptor and under
the kernel metadata map of NT_AMDGPU_METADATA. This field corresponds to
the is_dynamic_callstack field of amd_kernel_code_t.
Differential Revision: https://reviews.llvm.org/D128344
This patch replaces some foreach loops with ArrayRef, and hoists some repeated literal arrays into variables.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D125656
This patch extends D124824. It uses SHXADD+SLLI to emit multiplication by 3, 5, or 9 times a power of 2.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D129179
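A rough example of the extended pattern (assumes RV64 with Zba; function mine):
```
; 20 = 5 * 2^2, so the multiply can be emitted as sh2add + slli
; instead of materializing 20 for a mul.
define i64 @mul20(i64 %x) {
  %m = mul i64 %x, 20
  ret i64 %m
}
; Roughly: sh2add a0, a0, a0  ; (x << 2) + x = x * 5
;          slli   a0, a0, 2   ; (x * 5) * 4
```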
These instructions are only available when fp is available, so cannot be
used with just +mve. Add predicates to ensure we fall-back under the
right circumstances.
This PR adds support for creating leaf functions when no CSRs are used, no function calls are made, no stack frame is acquired, and no try/catch/throw statements are present.
Reviewed By: uweigand
Differential Revision: https://reviews.llvm.org/D129687
If X is known positive by a dominating condition, we can fill in
ones into the upper bits of C1 if that would allow it to become a
simm12, allowing the use of ANDI.
This pattern often occurs in unrolled loops where the induction
variable has been widened.
To get the best benefit from this, I had to move the pass above
ConstantHoisting which is in addIRPasses. Otherwise the AND constant
is often hoisted away from the AND.
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D129888
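A minimal sketch of the simplest case, where only the sign bit is filled in (the motivating widened-IV case has more known-zero upper bits; example mine):
```
define i64 @mask_known_positive(i64 %x) {
entry:
  %pos = icmp sge i64 %x, 0
  br i1 %pos, label %then, label %exit
then:                                    ; sign bit of %x known zero
  %a = and i64 %x, 9223372036854775800   ; 0x7FFF...FFF8, not a simm12
  ret i64 %a                             ; => andi a0, a0, -8
exit:
  ret i64 0
}
```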
We're creating a single instruction to replace another instruction.
We can insert it using the InsertBefore operand of the constructor,
then copy the debug location.
Further improve liveness copying for the CC register post-optimization
by mirroring live internal splits.
This fixes a bug in register allocation when CC register liveness
is extended across branches instead of being split.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D129557
trunc (sign_ext_inreg X, iM) to iN --> sign_ext_inreg (trunc X to iN), iM
There are improvements on existing tests from this, and there are a pair
of large regressions in D127115 for Thumb2 caused by not folding this
pattern.
Differential Revision: https://reviews.llvm.org/D129890
At the moment, the cost of runtime checks for scalable vectors is
overestimated due to creating separate vscale * VF expressions for each
check. Instead re-use the first expression.
D127595 added the ability to recurse up a (one-use) INSERT_VECTOR_ELT chain to create a BUILD_VECTOR before other combines manage to break the chain, something that is particularly bad in D127115.
The patch generalises this so it doesn't have to build the chain starting from the last element insertion, instead it can now start from any insertion and will recurse up the chain until it finds all elements or finds a UNDEF/BUILD_VECTOR/SCALAR_TO_VECTOR which represents that start of the chain.
Fixes several regressions in D127115
In https://reviews.llvm.org/D30114, support for mismatching address
spaces was introduced to CodeGenPrepare's optimizeMemoryInst, using
addrspacecast as it was argued that only no-op addrspacecasts would be
considered when constructing the address mode. However, by doing
inttoptr/ptrtoint, it's possible to get CGP to emit an addrspacecast
that's not actually a no-op, introducing a miscompilation:
define void @kernel(i8* %julia_ptr) {
%intptr = ptrtoint i8* %julia_ptr to i64
%ptr = inttoptr i64 %intptr to i32 addrspace(3)*
br label %end
end:
store atomic i32 1, i32 addrspace(3)* %ptr unordered, align 4
ret void
}
Gets compiled to:
define void @kernel(i8* %julia_ptr) {
end:
%0 = addrspacecast i8* %julia_ptr to i32 addrspace(3)*
store atomic i32 1, i32 addrspace(3)* %0 unordered, align 4
ret void
}
In the case of NVPTX, this introduces a cvta.to.shared, whereas
leaving out the %end block and branch doesn't trigger this
optimization. This results in illegal memory accesses as seen in
https://github.com/JuliaGPU/CUDA.jl/issues/558
In this change, I introduced a check before doing the pointer cast
that verifies address spaces are the same. If not, it emits a
ptrtoint/inttoptr combination to get a no-op cast between address
spaces. I decided against disallowing ptrtoint/inttoptr with
non-default AS in matchOperationAddr, because it's still possible
to look through multiple sequences of them that ultimately do not
result in an address space mismatch (i.e. the second lit test).
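For completeness, a sketch of what the output looks like with this fix (value numbering mine): the mismatched cast becomes a ptrtoint/inttoptr pair that round-trips the pointer value without asserting anything about address spaces.
```
define void @kernel(i8* %julia_ptr) {
end:
  %0 = ptrtoint i8* %julia_ptr to i64
  %1 = inttoptr i64 %0 to i32 addrspace(3)*
  store atomic i32 1, i32 addrspace(3)* %1 unordered, align 4
  ret void
}
```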
The backend getShuffleCosts do not currently handle shuffles that change
size very well. Limit the shuffles we collect to the same type to make
sure they do not cause issues as reported in D128732.
As mentioned on D127115, this patch attempts to recognise shuffle masks that could be simplified to an AND mask - we already have a similar transform that will fold AND -> 'clear mask' shuffle, but this patch handles cases where the referenced elements are not from the same lane indices but are known to be zero.
Differential Revision: https://reviews.llvm.org/D129150
When using weak symbols, the WinCOFFObjectWriter keeps a list (`WeakDefaults`)
that's used to make names unique. This list should be reset when the object
writer is reset, because otherwise reuse of the object writer can result in
freed symbols being accessed. With some added output, this becomes clear when
using `llc` in `--run-twice` mode:
```
$ ./llc --compile-twice -mtriple=x86_64-pc-win32 trivial.ll -filetype=obj
DefineSymbol::WeakDefaults
- .weak.foo.default
- .weak.bar.default
DefineSymbol::WeakDefaults
- .weak.foo.default
- áÑJij⌂ p§┼Ø┐☺
- .debug_macinfo.dw
- .weak.bar.default
```
This does not seem to leak into the output object file though, so I couldn't
come up with a test. I added one that just does `--run-twice` (and verified
that it does access freed memory), which should result in detecting the
invalid memory accesses when running under ASAN.
Observed in a Julia PR where we started using weak symbols:
https://github.com/JuliaLang/julia/pull/45649
Reviewed By: mstorsjo
Differential Revision: https://reviews.llvm.org/D129840
combineShiftAnd1ToBitTest already matches "and (not (srl X, C)), 1 --> (and X, 1<<C) == 0" patterns, but we can end up with situations where the not is before the shift.
Part of some yak shaving for D127115 to generalise the "xor (X >> ShiftC), XorC --> (not X) >> ShiftC" fold.
This is a follow-up of D107082, which enabled vector support according to the psABI.
Reviewed By: skan
Differential Revision: https://reviews.llvm.org/D127982
This patch reports number of counts being dropped when a hash-mismatch
happens. This information will be helpful to the users -- if the dropped
counts are large, the user should redo the instrumentation build and
recollect the profile.
Differential Revision: https://reviews.llvm.org/D129001
Don't cross reference CSFDO profile and non-CSFDO profile when
checking the function hash. Only return hash_mismatch when
CS bits match, and return unknown_function otherwise.
Differential Revision: https://reviews.llvm.org/D129000
This patch improves FDO hash-mismatch handling:
(1) filter out warnings for weak functions.
A weak function's definition may be overridden by a strong definition by the linker,
so a hash mismatch in a profile-use compilation is expected.
Make the profile hash mismatch warning under the existing option (default true).
(2) add an option to trace the hash of functions with the specific string.
Note that an empty string parameter will trace all functions.
Differential Revision: https://reviews.llvm.org/D129002
Changes since initial commit:
* Wrapping a pointer in an SCEV unknown hides the base, and SCEV is only able to compute a subtraction when the bases are known to be equal. This results in a SCEVCouldNotCompute flowing forward and triggering asserts. Test case added in d767b392.
* isLoopInvariant returns true for instructions outside the loop, but not necessarily *above* the loop. Since this code is allowed to visit uses of an IV outside of a loop, we have to make sure the operands of the compare are both invariant and dominating the header. Test case added in 2aed3cdb.
Original commit message follows...
The ICmpZero matching is checking to see if the expression is loop invariant per SCEV and expandable. This allows expressions inside the loop which can be made loop invariant to be seamlessly expanded, but is overly conservative for expressions which already *are* loop invariant.
As a simple justification for why this is correct, consider a loop invariant urem as RHS vs an alternate function with that same urem wrapped inside a helper call. Why would it be legal to match the latter, but not the former?
Differential Revision: https://reviews.llvm.org/D129793
Compiling with '-ffast-math' turns on all the FastMathFlags (FMF), as
expected, and that enables FP reassociation. Only the two FMF flags
'reassoc' and 'nsz' are technically required to perform reassociation,
but disabling other unrelated FMF bits is needlessly suppressing the
optimization.
This patch fixes that needless suppression, and makes appropriate
adjustments to test-cases, fixing some outstanding TODOs in the process.
Fixes: #56483
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D129523
For scalable vectors, it is not sufficient to only check
MinProfitableTripCount if it is >= VF.getKnownMinValue() * UF, because
this property may not hold for larger values of vscale. In those
cases, compute umax(VF * UF, MinProfTC) instead.
This should fix
https://lab.llvm.org/buildbot/#/builders/197/builds/2262
The original commit was reverted in 3e9cc543f2 due to buildbot failures, which
should be fixed by the addition of dependencies on librt.
Differential Revision: https://reviews.llvm.org/D128544
Explicitly list all binops rather than having a default case. There
were two bugs here:
1. U->getOpcode() was used instead of BO->Opcode, which means we
used the logic for the wrong opcode in some cases.
2. SCEV construction does not support LShr. We should return
unknown for it rather than recursing into the operands.
SimplifyDemandedBits is called slightly later which allows the not(sext(x)) -> sext(not(x)) fold to occur via foldLogicOfShifts
As mentioned on D127115, we should be able to further generalise this based off the demanded bits.
This is similar to D125680, but for llvm.experimental.patchpoint
(instead of llvm.experimental.stackmap).
Differential Revision: https://reviews.llvm.org/D129268
When folding a binop into a select, we need to ensure that one
of the select arms actually does constant fold, otherwise we'll
create two binop instructions and perform the reverse transform.
Ensure this by performing an explicit constant folding attempt,
and failing the transform if neither side simplifies.
A simple alternative here would have been to limit the fold to
ImmConstants, but given the current representation of scalable
vector splats, this wouldn't be ideal.
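A small example of the profitable case (mine):
```
define i32 @binop_into_select(i1 %c) {
  %s = select i1 %c, i32 0, i32 8
  %r = add i32 %s, 4
  ret i32 %r
}
; Both arms constant-fold, so this becomes:
;   %r = select i1 %c, i32 4, i32 12
; If neither arm folded, the transform would just create two adds and
; immediately be undone by the reverse fold.
```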
Following some recent discussions, this changes the representation
of callbrs in IR. The current blockaddress arguments are replaced
with `!` label constraints that refer directly to callbr indirect
destinations:
; Before:
%res = callbr i8* asm "", "=r,r,i"(i8* %x, i8* blockaddress(@test8, %foo))
to label %asm.fallthrough [label %foo]
; After:
%res = callbr i8* asm "", "=r,r,!i"(i8* %x)
to label %asm.fallthrough [label %foo]
The benefit of this is that we can easily update the successors of
a callbr, without having to worry about also updating blockaddress
references. This should allow us to remove some limitations:
* Allow unrolling/peeling/rotation of callbr, or any other
clone-based optimizations
(https://github.com/llvm/llvm-project/issues/41834)
* Allow duplicate successors
(https://github.com/llvm/llvm-project/issues/45248)
This is just the IR representation change though; I will follow up
with patches to remove limitations in various transformation passes
that are no longer needed.
Differential Revision: https://reviews.llvm.org/D129288
This is a followup to D129630, which switches LSR to the member
isSafeToExpand() variant, and removes the freestanding function.
This is done by creating the SCEVExpander early (already during the
analysis phase). Because the SCEVExpander is now available for the
whole lifetime of LSRInstance, I've also made it into a member
variable, rather than passing it around in even more places.
Differential Revision: https://reviews.llvm.org/D129769
Even with D129349 <https://reviews.llvm.org/D129349>, various SPARC JIT
tests still `FAIL`. They are guarded by `host-supports-jit`. This is set
from `clang-repl --host-supports-jit` which still returns `true`. It turns
out the `JITTargetMachineBuilder` doesn't consider `hasJIT` at all.
This patch corrects this, turning the affected tests `UNSUPPORTED`.
Tested on `sparcv9-sun-solaris2.11`.
Differential Revision: https://reviews.llvm.org/D129350
Until D118450 <https://reviews.llvm.org/D118450> lands, there's no JIT
support on SPARC, but the backend claims otherwise, leading to various
testsuite failures.
This patch corrects this.
Tested on `sparcv9-sun-solaris2.11`.
Differential Revision: https://reviews.llvm.org/D129349
clang 14 removed -gz=zlib-gnu and ld.lld/llvm-objcopy removed .zdebug support
recently. llvm-dwp currently doesn't support SHF_COMPRESSED. Add support and
remove .zdebug support.
Simplify llvm::object::Decompressor which has no .zdebug user now.
While here, add tests for ELF32LE, ELF32BE, and ELF64BE.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D129728
If we have a variable shift amount and the demanded mask has leading
zeros, we can propagate those leading zeros to not demand those bits
from operand 0. This can allow zero_extend/sign_extend to become
any_extend. This pattern can occur due to C integer promotion rules.
This transform is already done by InstCombineSimplifyDemanded.cpp where
sign_extend can be turned into zero_extend for example.
Reviewed By: spatel, foad
Differential Revision: https://reviews.llvm.org/D121833
Since the divergence-driven instruction selection has been enabled for AMDGPU,
all the uniform instructions are expected to be selected to SALU form, except those not having one.
VGPR to SGPR copies appear in MIR to connect value producers and consumers. This change implements an algorithm
that strikes a reasonable tradeoff between the profit achieved from keeping the uniform instructions in SALU form
and the overhead introduced by the data transfer between the VGPRs and SGPRs.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D128252
Widening a G_FCONSTANT by extending and then generating G_FPTRUNC doesn't produce
the same result all the time. Instead, we can just transform it to a G_CONSTANT
of the same bit pattern and truncate using a plain G_TRUNC instead.
Fixes https://github.com/llvm/llvm-project/issues/56454
Differential Revision: https://reviews.llvm.org/D129743
Initial optimization is to convert (i64 (zext (i32 X))) to
(i64 (sext (i32 X))) if the dominating condition for the basic block
guaranteed the sign bit of X is zero.
This frequently occurs in loop preheaders where a signed induction
variable that can never be negative has been widened. There will be
a dominating check that the 32-bit trip count isn't negative or zero.
The check here is not restricted to that specific case though.
A i32->i64 sext is cheaper than zext on RV64 without the Zba
extension. Later optimizations can often remove the sext from the
preheader basic block because the dominating block also needs a sext to
evaluate the greater than 0 check.
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D129732
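A minimal sketch of the preheader pattern (function mine):
```
define i64 @widen_trip_count(i32 %n) {
entry:
  %pos = icmp sgt i32 %n, 0
  br i1 %pos, label %preheader, label %exit
preheader:                       ; %n known positive here
  %wide = zext i32 %n to i64     ; => sext i32 %n to i64, cheaper on RV64
  ret i64 %wide
exit:
  ret i64 0
}
```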
If an MI will not generate a target instruction, we should not compute its
latency. Then we can compute a more precise instruction sequence cost and
get a better result.
Differential Revision: https://reviews.llvm.org/D129615
I happened to notice two places where the enum was being passed
directly to the bool IsSigned argument of createExtendInst. This
was functionally OK since SignExtended in the enum has a value
of 1, but the code shouldn't rely on that.
Using an enum class prevents the enum from being convertible to bool,
but does make writing the enum values more verbose. Since we now
have to write ExtendKind:: in front of them, I've shortened the
names of ZeroExtended and SignExtended.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D129733
After 675080a453, we always create SCEVs for all operands of a
SelectInst. This can cause notable compile-time regressions compared to
the recursive algorithm, which only evaluates the operands if the select
is in a form we can create a usable expression.
This approach adds additional logic to getOperandsToCreate to only
queue operands for selects if we will later be able to construct a
usable SCEV.
Unfortunately this introduces a bit of coupling between actual SCEV
construction for selects and getOperandsToCreate, but I am not sure if
there are better alternatives to address the regression mentioned for
675080a453.
This doesn't have any notable compile-time impact on CTMark.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D129731
We previously enabled subregister liveness by default when compiling
with RVV. This has been shown to cause miscompilations where RVV
register operand constraints are not met. A test was added for this in
D129639 which explains the issue in more detail.
Until this issue is fixed in some way, we should not be enabling
subregister liveness unless the user asks for it.
Reviewed By: craig.topper, rogfer01, kito-cheng
Differential Revision: https://reviews.llvm.org/D129646
The motivation here is to unblock LSR's ability to use ICmpZero uses - the major effect of which is to enable count-down IVs. The test changes reflect this goal, but the potential impact is much broader since this isn't a change in LSR at all.
SCEVExpander needs(*) to prove that expanding the expression is safe anywhere the SCEV expression is valid. In general, we can't expand any node which might fault (or exhibit UB) unless we can either a) prove it won't fault, or b) guard the faulting case. We'd been allowing non-zero constants here; this change extends it to non-zero values.
vscale is never zero. This is already implemented in ValueTracking, and this change just adds the same logic in SCEV's range computation (which in turn drives isKnownNonZero). We should common up some logic here, but let's do that in separate changes.
(*) As an aside, "needs" is such an interesting word here. First, we don't actually need to guard this at all; we could choose to emit a select for the RHS of every udiv and remove this code entirely. Secondly, the property being checked here is way too strong. What the client actually needs is to expand the SCEV at some particular point in some particular loop. In the examples, the original urem dominates that loop and yet we completely ignore that information when analyzing legality. I don't plan to actively pursue either direction, just noting it for future reference.
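A minimal sketch (not from the patch) of an expression that becomes safe to expand with this change:
```
define i64 @expand_udiv(i64 %n) {
  ; the denominator is vscale-based and therefore provably non-zero, so
  ; SCEVExpander no longer needs to guard against division by zero here
  %vs = call i64 @llvm.vscale.i64()
  %q = udiv i64 %n, %vs
  ret i64 %q
}

declare i64 @llvm.vscale.i64()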
Differential Revision: https://reviews.llvm.org/D129710
In analyzing issue #56483, it was noticed that running `opt` with
`-reassociate` was missing some minor optimizations. For example,
there were cases where running `opt` on IR with floating-point
instructions that have the `fast` flags applied, sometimes resulted in
less efficient code than the input IR (things like dead instructions
left behind, and missed reassociations). These were sometimes noted
in the test-files with TODOs, to investigate further. This commit
fixes some of these problems, removing some TODOs in the process.
FTR, I refer to these as "minor" missed optimizations, because when
running a full clang/llvm compilation, these inefficiencies are not
happening, as other passes clean that residue up. Regardless, having
cleaner IR produced by `opt`, makes assessing the quality of fixes done
in `opt` easier.
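As a hypothetical illustration (the function and values are mine, not taken from the patch's tests), a chain that `-reassociate` with `fast` flags should reduce without leaving residue behind:
```
define float @chain(float %a) {
  ; with fast-math flags, reassociate can fold this to 4.0 * %a; the fix
  ; aims to avoid leaving dead fadds behind after such rewrites
  %t0 = fadd fast float %a, %a
  %t1 = fadd fast float %t0, %a
  %t2 = fadd fast float %t1, %a
  ret float %t2
}
```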
This reverts commit f1b05a0a2b.
Need to revert due to issues identified in testing. The
transformation is incorrect for blocks that contain convergent
instructions.
This patch turns on the flag `-enable-no-rerun-simplification-pipeline`, which means the simplification pipeline will not be rerun on unchanged functions in the CGSCCPass Manager.
Compile time improvement:
https://llvm-compile-time-tracker.com/compare.php?from=17457be1c393ff691cca032b04ea1698fedf0301&to=882301ebb893c8ef9f09fe1ea871f7995426fa07&stat=instructions
No meaningful run time regressions observed in the llvm test suite and
in additional internal workloads at this time.
The example test in `test/Other/no-rerun-function-simplification-pipeline.ll` is a good means to understand the effect of this change:
```
define void @f1(void()* %p) alwaysinline {
call void %p()
ret void
}
define void @f2() #0 {
call void @f1(void()* @f2)
call void @f3()
ret void
}
define void @f3() #0 {
call void @f2()
ret void
}
```
There are two SCCs formed by the ModuleToPostOrderCGSCCAdaptor: (f1) and (f2, f3).
The pass manager runs on the first SCC, leading to running the simplification pipeline (function and loop passes) on f1. With the flag on, after this, the output will have `Running analysis: ShouldNotRunFunctionPassesAnalysis on f1`.
Next, the pass manager runs on the second SCC: (f2, f3). Since f1() was inlined, f2() now calls itself, and also calls f3(), while f3() only calls f2().
So the pass manager for the SCC first runs the Inliner on (f2, f3), then the simplification pipeline on f2.
With the flag on, the output will have `Running analysis: ShouldNotRunFunctionPassesAnalysis on f2`; unless the inliner makes a change, this analysis remains preserved, which means there's no reason to rerun the simplification pipeline. With the flag off, the simplification pipeline is run a second time on f2.
Next, the same flow occurs for f3. The simplification pipeline is run on f3 a single time with the flag on, along with `ShouldNotRunFunctionPassesAnalysis on f3`, and twice with the flag off.
The reruns occur only on f2 and f3 due to the additional ref edges.
As a followup to D129630, this switches a usage of the freestanding
function in LoopPredication to use the member variant instead. This
was the last use of the freestanding function, so drop it entirely.
isSafeToExpand() for addrecs depends on whether the SCEVExpander
will be used in CanonicalMode. At least one caller currently gets
this wrong, resulting in PR50506.
Fix this by a) making the CanonicalMode argument on the freestanding
functions required and b) adding member functions on SCEVExpander
that automatically take the SCEVExpander mode into account. We can
use the latter variant nearly everywhere, and thus make sure that
there is no chance of CanonicalMode mismatch.
Fixes https://github.com/llvm/llvm-project/issues/50506.
Differential Revision: https://reviews.llvm.org/D129630
D114999 added code to kill an immediate def if it was folded into its
only use by convertToThreeAddress. This patch updates LiveVariables when
that happens in order to fix verification failures exposed by D129213.
Differential Revision: https://reviews.llvm.org/D129661
In the Cortex A57 Optimization Guide [1], VCVTAU (AArch32) is incorrectly
listed among the AArch64 instructions for the instruction groups:
- ASIMD FP convert, other, D-form
- ASIMD FP convert, other, Q-form
It's meant to be FCVTAU; this will be fixed in future releases of the guide.
[1] https://developer.arm.com/documentation/uan0015/b
Make the implementation more similar to other functions, by
explicitly skipping an unknown/undef first, and always falling
back to overdefined at the end. I don't think it makes a difference
now, but could make one once the constant evaluation can fail. In
that case we would directly mark the result as overdefined now,
rather than keeping it unknown (and later making it overdefined
because we think it's undef-based).
D25618 added a method to verify the instruction predicates for an
emitted instruction, through verifyInstructionPredicates added into
<Target>MCCodeEmitter::encodeInstruction. This is a very useful idea,
but the implementation inside MCCodeEmitter made it only fire for object
files, not assembly which most of the llvm test suite uses.
This patch moves the code into the <Target>_MC::verifyInstructionPredicates
method, inside the InstrInfo. This allows it to be called from other
places, such as in this patch where it is called from the
<Target>AsmPrinter::emitInstruction methods which should trigger for
both assembly and object files. It can also be called from other places
such as verifyInstruction, but that is not done here (it tends to catch
errors earlier, but in reality just shows all the mir tests that have
incorrect feature predicates). The interface was also simplified
slightly, moving computeAvailableFeatures into the function so that it
does not need to be called externally.
The ARM, AMDGPU (but not R600), AVR, Mips and X86 backends all currently
show errors in the test-suite, so have been disabled with FIXME
comments.
Recommitted with some fixes for the leftover MCII variables in release
builds.
Differential Revision: https://reviews.llvm.org/D129506
The value lattice explicitly represents undef, and markConstant()
internally checks for UndefValue and will create an undef rather
than constant lattice element in that case.
This is mostly a code simplification, it has little practical impact
because we usually get undef results from undef operands, and those
don't get processed.
Only leave the check behind for the CmpInst case, because it
currently goes through this incorrect code in the getCompare()
implementation: f98697642c/llvm/include/llvm/Analysis/ValueLattice.h (L456-L457)
Differential Revision: https://reviews.llvm.org/D128330
The SI machine scheduler inherits from ScheduleDAGMI.
This patch adds support for a few features that are implemented
in ScheduleDAGMI (or its base classes) that were missing so far
because their support is implemented in overridden functions.
* Support cl::opt -view-misched-dags
This option makes it possible to open a graphical window showing the scheduling DAG.
* Support cl::opt -misched-print-dags
This option prints the scheduling DAG in text form.
* After constructing the scheduling DAG, call postprocessDAG()
to apply any registered DAG mutations.
Note that there are currently no mutations defined in AMDGPUTargetMachine.cpp
for the case where SIScheduler is used.
postprocessDAG() is still called anyway, to avoid surprises in the future should mutations be added.
Differential Revision: https://reviews.llvm.org/D128808
- add `FindZSTD.cmake`
- add zstd to `llvm::compression` namespace
- add a CMake option `LLVM_ENABLE_ZSTD` with behavior mirroring that of `LLVM_ENABLE_ZLIB`
- add tests for zstd to `llvm/unittests/Support/CompressionTest.cpp`
Reviewed By: leonardchan, MaskRay
Differential Revision: https://reviews.llvm.org/D128465
It's more natural to use uint8_t * (std::byte needs C++17 and llvm
already has a lot of uint8_t *), and most callers use uint8_t * rather than char *.
The functions were recently moved into `llvm::compression::zlib::`, so
downstream projects need to adapt anyway.
This is an implementation of orc::MemoryMapper that maps shared memory
pages in both the executor and controller processes and writes directly to
them, avoiding transferring content over EPC. All allocations are properly
deinitialized automatically on the executor side at shutdown by the
ExecutorSharedMemoryMapperService.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D128544
While experimenting in the kernel with the kernel data structure sockptr_t
in a CO-RE operation, I hit an assertion error. The sockptr_t definition
and usage look like this:
#pragma clang attribute push (__attribute__((preserve_access_index)), apply_to = record)
typedef struct {
  union {
    void *kernel;
    void *user;
  };
  unsigned is_kernel : 1;
} sockptr_t;
#pragma clang attribute pop
int test(sockptr_t *arg) {
  return arg->is_kernel;
}
The assertion error looks like:
clang: ../lib/Target/BPF/BPFAbstractMemberAccess.cpp:878: llvm::Value*
{anonymous}::BPFAbstractMemberAccess::computeBaseAndAccessKey(llvm::CallInst*,
{anonymous}::BPFAbstractMemberAccess::CallInfo&, std::__cxx11::string&,
llvm::MDNode*&): Assertion `TypeName.size()' failed.
In this particular case, the clang frontend attaches the debuginfo metadata
associated with the anon structure to the preserve_access_index IR intrinsic.
But the first debuginfo type has to be a named type so libbpf has a sound
starting point for CO-RE relocation.
Besides the above approach of using a pragma to push the attribute, the below
typedef/struct definition can apply preserve_access_index directly to the anon struct:
typedef struct {
  union {
    void *kernel;
    void *user;
  };
  unsigned is_kernel : 1;
} __attribute__((preserve_access_index)) sockptr_t;
This patch fixed the issue by preprocessing function argument/return types
and local variable types used by other CO-RE intrinsics. For any
typedef struct/union { ... } typedef_name
an association of <anon struct/union, typedef> is recorded to replace
the IR intrinsic metadata 'anon struct/union' with 'typedef'.
It is possible that two different 'typedef' types may have identical
anon struct/union type. For such a case, the association will be
<anon struct/union, nullptr> to indicate the invalid case.
Differential Revision: https://reviews.llvm.org/D129621
Hwasan includes instructions in the prologue that mix the PC and SP and store
it into the stack ring buffer stored at __hwasan_tls. This is a thread_local
global exposed from the hwasan runtime. However, if TLS-mechanisms or the
hwasan runtime haven't been set up yet, it will be invalid to access __hwasan_tls.
This is the case for Fuchsia where we instrument libc, so some functions that
are instrumented but can run before hwasan initialization will incorrectly
access this global. Additionally, libc cannot have any TLS variables, so we
cannot weakly define __hwasan_tls until the runtime is loaded.
A way we can work around this is by moving the instructions into a hwasan
function that does the store into the ring buffer and creating a weak definition
of that function locally in libc. This way __hwasan_tls will not actually be
referenced. This is not our long-term solution, but this will allow us to roll
out hwasan in the meantime.
This patch includes:
- A new llvm flag for choosing to emit a libcall rather than instructions in the
prologue (off by default)
- The libcall for storing into the ringbuffer (__hwasan_add_frame_record)
Differential Revision: https://reviews.llvm.org/D128387
- add a check before truncating the (un)compressed data buffer when the buffer is already exactly the right length, to avoid triggering the truncate assertion in an edge case.
- explicitly coerce LLVM_ENABLE_ZLIB to a 0 or 1 value in the OFF case, to match the current ON and FORCE_ON behavior.
- fix code style nits in the zlib tests
Reviewed By: phosek
Differential Revision: https://reviews.llvm.org/D129698
We shouldn't use getOpcodeDef() if we need to guarantee the def has only one
user since under the hood it may look through copies and optimization hints,
which themselves may have multiple users.
The former pattern will select as slliw+sraiw while the latter
will select as slli+srai. This can enable the slli+srai to be
compressed.
Differential Revision: https://reviews.llvm.org/D129688
Hwasan includes instructions in the prologue that mix the PC and SP and store
it into the stack ring buffer stored at __hwasan_tls. This is a thread_local
global exposed from the hwasan runtime. However, if TLS-mechanisms or the
hwasan runtime haven't been set up yet, it will be invalid to access __hwasan_tls.
This is the case for Fuchsia where we instrument libc, so some functions that
are instrumented but can run before hwasan initialization will incorrectly
access this global. Additionally, libc cannot have any TLS variables, so we
cannot weakly define __hwasan_tls until the runtime is loaded.
A way we can work around this is by moving the instructions into a hwasan
function that does the store into the ring buffer and creating a weak definition
of that function locally in libc. This way __hwasan_tls will not actually be
referenced. This is not our long-term solution, but this will allow us to roll
out hwasan in the meantime.
This patch includes:
- A new llvm flag for choosing to emit a libcall rather than instructions in the
prologue (off by default)
- The libcall for storing into the ringbuffer (__hwasan_record_frame_record)
Differential Revision: https://reviews.llvm.org/D128387
When doing scalable vectorization, the loop vectorizer uses a urem in the computation of the vector trip count. The RHS of that urem is a (possibly shifted) call to @llvm.vscale.
vscale is effectively the number of "blocks" in the vector register. (That is, types such as <vscale x 8 x i8> and <vscale x 1 x i8> both fill one 64 bit block, and vscale is essentially how many of those blocks there are in a single vector register at runtime.)
We know from the RISCV V extension specification that VLEN must be a power of two between ELEN and 2^16. Since our block size is 64 bits, there must be a power-of-two number of blocks. (This breaks down for VLEN<=32, but that configuration is already broken.)
It is worth noting that the AArch64 SVE specification explicitly allows non-power-of-two sizes for the vector registers, and thus can't claim that vscale is a power of two by this logic.
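For reference, a simplified sketch of the trip count computation in question (the shift amount is hypothetical):
```
define i64 @vector_trip_count(i64 %tc) {
  ; VF = vscale * 4; once vscale is known to be a power of two, this urem
  ; can be strength-reduced to a mask
  %vscale = call i64 @llvm.vscale.i64()
  %vf = shl i64 %vscale, 2
  %rem = urem i64 %tc, %vf
  %vtc = sub i64 %tc, %rem
  ret i64 %vtc
}

declare i64 @llvm.vscale.i64()
```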
Differential Revision: https://reviews.llvm.org/D129609
The request is mentioned in D129053. I feel that having this functionality is
mildly useful (though not strongly).
* Rename .ctors to .init_array and change sh_type to SHT_INIT_ARRAY (GNU objcopy
detects the special name but we don't).
* Craft tests for a new SHT_LLVM_* extension
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D129337
For MTE globals, we should have clang emit the attribute for all GVs
that it creates, and then use that in the upcoming AArch64 global
tagging IR pass. We need a positive attribute for this sanitizer (rather
than implicit sanitization of all globals) because it needs to interact
with other parts of LLVM, including:
1. Suppressing certain global optimisations (like merging),
2. Emitting extra directives by the ASM writer, and
3. Putting extra information in the symbol table entries.
While this does technically make the LLVM IR / bitcode format
non-backwards-compatible, nobody should have used this attribute yet,
because it's a no-op.
Reviewed By: eugenis
Differential Revision: https://reviews.llvm.org/D128950
We have the same fold in InstCombine - though implemented via OrZero flag on isKnownToBePowerOfTwo. The reasoning here is that either a) the result of the lshr is a power-of-two, or b) we have a div-by-zero triggering UB which we can ignore.
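A minimal sketch of the reasoning (the constants are hypothetical):
```
define i64 @udiv_by_lshr(i64 %a, i64 %s) {
  ; %d = 8 >> %s is either a power of two or zero; the zero case would make
  ; the udiv below undefined, so we may assume %d is a non-zero power of two
  %d = lshr i64 8, %s
  %r = udiv i64 %a, %d
  ret i64 %r
}
```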
Differential Revision: https://reviews.llvm.org/D129606
This reverts commit e2fb8c0f4b as it does
not build in Release configurations, and some buildbots are giving more
warnings than I saw locally. Reverting to fix those issues.
D25618 added a method to verify the instruction predicates for an
emitted instruction, through verifyInstructionPredicates added into
<Target>MCCodeEmitter::encodeInstruction. This is a very useful idea,
but the implementation inside MCCodeEmitter made it only fire for object
files, not assembly which most of the llvm test suite uses.
This patch moves the code into the <Target>_MC::verifyInstructionPredicates
method, inside the InstrInfo. This allows it to be called from other
places, such as in this patch where it is called from the
<Target>AsmPrinter::emitInstruction methods which should trigger for
both assembly and object files. It can also be called from other places
such as verifyInstruction, but that is not done here (it tends to catch
errors earlier, but in reality just shows all the mir tests that have
incorrect feature predicates). The interface was also simplified
slightly, moving computeAvailableFeatures into the function so that it
does not need to be called externally.
The ARM, AMDGPU (but not R600), AVR, Mips and X86 backends all currently
show errors in the test-suite, so have been disabled with FIXME
comments.
Differential Revision: https://reviews.llvm.org/D129506
`isSafeToLoadUnconditionally` currently assumes sized types. Bail out for now.
This fixes a TypeSize warning reachable from instcombine via (load (select
cond, ptr, ptr)).
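For illustration, a hypothetical reproducer shape for the warning:
```
define <vscale x 4 x i32> @load_of_select(i1 %c, ptr %a, ptr %b) {
  ; speculating this load requires a size query; for the scalable type,
  ; isSafeToLoadUnconditionally now bails out instead of asking for a
  ; fixed size
  %p = select i1 %c, ptr %a, ptr %b
  %v = load <vscale x 4 x i32>, ptr %p
  ret <vscale x 4 x i32> %v
}
```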
Differential Revision: https://reviews.llvm.org/D129477
When vectorising ordered reductions we call a function
LoopVectorizationPlanner::adjustRecipesForReductions to replace the
existing VPWidenRecipe for the fadd instruction with a new
VPReductionRecipe. We attempt to insert the new recipe in the same
place, but this is wrong because createBlockInMask may have
generated new recipes that VPReductionRecipe now depends upon. I
have changed the insertion code to append the recipe to the
VPBasicBlock instead.
Added a new RUN with tail-folding enabled to the existing test:
Transforms/LoopVectorize/AArch64/scalable-strict-fadd.ll
Differential Revision: https://reviews.llvm.org/D129550
After replacing a loop phi with the preheader value, it's usually
possible to simplify some of the using instructions, so do that as
part of replaceLoopPHINodesWithPreheaderValues().
Doing this as part of IndVars is valuable, because it may make GEPs
in the loop have constant offsets and allow the following SROA run
to succeed (as demonstrated in the PhaseOrdering test).
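A minimal sketch, assuming a loop that IndVars proves runs exactly once (names are illustrative):
```
define void @replace_phi(ptr %p) {
preheader:
  br label %loop

loop:
  ; once %iv is replaced by its preheader value 0, the gep folds to a
  ; constant offset from %p, which can unblock a later SROA run
  %iv = phi i64 [ 0, %preheader ], [ %iv.next, %loop ]
  %gep = getelementptr i32, ptr %p, i64 %iv
  store i32 0, ptr %gep
  %iv.next = add nuw i64 %iv, 1
  %done = icmp eq i64 %iv.next, 1
  br i1 %done, label %exit, label %loop

exit:
  ret void
}
```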
Differential Revision: https://reviews.llvm.org/D129293
Introduce an off-by default `-Winvalid-utf8` warning
that detects invalid UTF-8 code unit sequences in comments.
Invalid UTF-8 in other places is already diagnosed,
as that cannot appear in identifiers and other grammar constructs.
The warning is off by default as it's likely to be somewhat disruptive
otherwise.
This warning allows clang to conform to the yet-to-be-approved WG21
"P2295R5 Support for UTF-8 as a portable source file encoding"
paper.
Reviewed By: aaron.ballman, #clang-language-wg
Differential Revision: https://reviews.llvm.org/D128059
This patch was split off from D126465, where an early exit is necessary
because the code checks the VLEN, which asserts that V instructions are present.
Since this makes logical sense on its own, I think it's worth landing
regardless of D126465.
Reviewed By: kito-cheng
Differential Revision: https://reviews.llvm.org/D129617
Currently we only call replaceLoopPHINodesWithPreheaderValues() if
optimizeLoopExits() replaces the exit with an unconditional exit.
However, it is very common that this already happens as part of
eliminateIVComparison(), in which case we're leaving behind the
dead header phi.
Tweak the early bailout for already-constant exits to also call
replaceLoopPHINodesWithPreheaderValues().
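A minimal sketch of the situation, assuming eliminateIVComparison has already folded the exit condition to a constant:
```
define i64 @dead_header_phi() {
preheader:
  br label %loop

loop:
  ; the exit condition is already constant, so the loop runs exactly once;
  ; with this change %iv is rewritten to its preheader value 0 instead of
  ; being left behind as a dead header phi
  %iv = phi i64 [ 0, %preheader ], [ %iv.next, %loop ]
  %iv.next = add nuw i64 %iv, 1
  br i1 true, label %exit, label %loop

exit:
  ret i64 %iv
}
```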
Differential Revision: https://reviews.llvm.org/D129214
Currently, dsymutil treats the DW_OP_convert operand as absolute instead
of CU relative, as described in the DWARFv5 spec, 2.5.1.6:
"[DW_OP_convert] takes one operand, which is an unsigned LEB128 integer
that represents the offset of a debugging information entry in the current
compilation unit"
This patch makes dsymutil correctly treat the offset as CU relative,
preventing a crash when there are multiple compilation units.
Big thanks to Akira Hatanaka for figuring out this issue and providing
both a reduced test case and a proposed fix.
This reverts commit cc309721d2 because it
breaks the following tests on GreenDragon:
TestDataFormatterObjCCF.py
TestDataFormatterObjCExpr.py
TestDataFormatterObjCKVO.py
TestDataFormatterObjCNSBundle.py
TestDataFormatterObjCNSData.py
TestDataFormatterObjCNSError.py
TestDataFormatterObjCNSNumber.py
TestDataFormatterObjCNSURL.py
TestDataFormatterObjCPlain.py
TestDataFormatterObjNSException.py
https://green.lab.llvm.org/green/view/LLDB/job/lldb-cmake/45288/
Currently, dsymutil treats the DW_OP_convert operand as absolute instead
of CU relative, as described in the DWARFv5 spec, 2.5.1.6:
"[DW_OP_convert] takes one operand, which is an unsigned LEB128 integer
that represents the offset of a debugging information entry in the current
compilation unit"
This patch makes dsymutil correctly treat the offset as CU relative,
preventing a crash when there are multiple compilation units.
Big thanks to Akira Hatanaka for figuring out this issue and providing
both a reduced test case and a proposed fix.
Some rework of getStackGuard() based on comments in
https://reviews.llvm.org/D129505.
- getStackGuard() now creates and returns the destination
register, simplifying calls
- the pointer type is passed to getStackGuard() to avoid
recomputation
- removed PtrMemTy in emitSPDescriptorParent(), because
this type is only used here when loading the value but
not when storing the value
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D129576
I have no idea what's going on here. This code was moved
around/introduced in change cb26b01d57 and starts crashing with a NULL
dereference once I apply https://reviews.llvm.org/D123090. I assume that
I've unwittingly taught the attributor enough that it's able to do more
clever things than in the past, and it's able to trip on this case. I
make no claims about the correctness of this patch, but it passes tests
and seems to fix all the crashes I've been seeing.
Differential Revision: https://reviews.llvm.org/D129589
Adds initial COFF support in JITLink. This is able to run a hello-world C program on x86 Windows successfully.
Implemented:
- COFF object loader
- Static local symbols
- Absolute symbols
- External symbols
- Weak external symbols
- Common symbols
- COFF jitlink-check support
- All COMDAT selection types except largest
- Implicit symbol size calculation
- Rel32 relocation with PLT stub.
- IMAGE_REL_AMD64_ADDR32NB relocation
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D128968
To convert CTLZ to popcount we do:
  x = x | (x >> 1);
  x = x | (x >> 2);
  ...
  x = x | (x >> 16);
  x = x | (x >> 32); // for 64-bit input
  return popcount(~x);
This smears the most significant set bit across all of the bits below it,
then inverts the result and does a population count, which yields exactly
the number of leading zeros. For example, for the i8 value 00010000,
smearing gives 00011111, inverting gives 11100000, and popcount returns
3 = ctlz.
To support non-power-of-2 types, the last shift amount must be
more than half of the size of the type. For i15, the last shift
was previously a shift by 4; with this patch we add another shift
of 8.
Fixes PR56457.
Differential Revision: https://reviews.llvm.org/D129431
It is illegal to merge two `llvm.coro.save` calls unless their
`llvm.coro.suspend` users are also merged. Mark the intrinsic "nomerge" for
the moment.
This reverts D129025.
Alternative to D129025, which affects other token type users like WinEH.
Reviewed By: ChuanqiXu
Differential Revision: https://reviews.llvm.org/D129530