The name `MCFixedLenDisassembler.h` is out of date after D120958.
Rename it to `MCDecoderOps.h` to reflect the change.
Reviewed By: myhsu
Differential Revision: https://reviews.llvm.org/D124987
At some point in instruction selection, A2_tfrsi Constant:i32<...> was
created, where the "Constant" came from SelectAnyInt. Since it wasn't
a TargetConstant, it was selected again, leading to
%vreg = A2_tfrsi ...
... = A2_tfrsi %vreg
which is not valid code.
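A minimal sketch of the kind of fix involved, assuming the constant is
produced in a custom selection routine such as SelectAnyInt (illustrative,
not the exact patch):

  // Emit a TargetConstant so the operand is treated as already selected;
  // a plain Constant node would be selected again into a second A2_tfrsi.
  SDValue Imm = CurDAG->getTargetConstant(Val, SDLoc(N), MVT::i32);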
Use the same enum as the other atomic instructions for consistency, in
preparation for adding another strategy.
Introduce a new "Expand" option, since the store expansion does not
use cmpxchg. Alternatively, the existing CmpXChg strategy could be
renamed to Expand.
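For reference, a rough sketch of the kind of expansion-strategy enum this
refers to; the member list here is illustrative, not the authoritative
definition:

  // Illustrative subset of an atomic expansion-strategy enum.
  enum class AtomicExpansionKind {
    None,    // Do not expand the instruction.
    LLSC,    // Expand using load-linked/store-conditional loops.
    CmpXChg, // Expand using a cmpxchg loop.
    Expand,  // Generic expansion that does not go through cmpxchg.
  };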
All LLVM backends use MCDisassembler as a base class for their
instruction decoders. Use "const MCDisassembler *" for the decoder
instead of "const void *". Remove unnecessary static casts.
Reviewed By: skan
Differential Revision: https://reviews.llvm.org/D122245
There are a few relevant forward declarations in there that may require
downstream code to add explicit includes:
llvm/MC/MCContext.h no longer includes llvm/BinaryFormat/ELF.h, llvm/MC/MCSubtargetInfo.h, llvm/MC/MCTargetOptions.h
llvm/MC/MCObjectStreamer.h no longer includes llvm/MC/MCAssembler.h
llvm/MC/MCAssembler.h no longer includes llvm/MC/MCFixup.h, llvm/MC/MCFragment.h
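Downstream code that relied on one of these transitive includes may need to
add it explicitly, for example:

  #include "llvm/BinaryFormat/ELF.h"     // if ELF constants are used
  #include "llvm/MC/MCSubtargetInfo.h"   // if MCSubtargetInfo is needed
  #include "llvm/MC/MCTargetOptions.h"   // if MCTargetOptions is needed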
Counting preprocessed lines required to rebuild llvm-project on my setup:
before: 1052436830
after: 1049293745
This is a significant reduction and backs up the change, in addition to the
usual benefits of decreasing coupling between headers and compilation units.
Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D119244
This reverts commit ef82063207.
- It conflicts with the existing llvm::size in STLExtras, which will now
never be called.
- Calling it without llvm:: breaks C++17 compat
Instead use either Type::getPointerElementType() or
Type::getNonOpaquePointerElementType().
This is part of D117885, in preparation for deprecating the API.
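A minimal illustration of the replacement (variable names are placeholders):

  // Replaces the deprecated Ty->getPointerElementType() call; only valid
  // while the pointer type is still non-opaque.
  Type *ElemTy = PtrVal->getType()->getNonOpaquePointerElementType();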
This is a fix for a crash in the HexagonOptAddrMode pass that was looking
for the third operand (offset) in the following instruction that does not,
in fact, have a third operand:
$r1 = L2_loadw_locked $r1
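The fix amounts to a guard of roughly this shape (illustrative, not the
exact code):

  // Only read an offset operand when the addressing form actually has one;
  // L2_loadw_locked has no offset operand.
  if (MI.getNumOperands() > 2 && MI.getOperand(2).isImm()) {
    int64_t Offset = MI.getOperand(2).getImm();
    // ... use Offset as before
  }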
This patch also adds an addrMode value to vgather pseudos in the Hexagon
backend.
Differential Revision: https://reviews.llvm.org/D117133
Rename the argument 'Fatal' to 'ReportErrors'. HexagonShuffler refers to
this arg as 'ReportErrors' and calling it 'Fatal' in HexagonMCShuffler is
misleading and inconsistent.
Previously, compounding was all-or-nothing. Now the compounding attempts
iterate and yield the maximum number of compounds that still result in a
valid packet.
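A rough sketch of the iterative shape of the new approach (helper names are
hypothetical):

  // Start from the maximal set of compound candidates and back off one
  // compound at a time until the packet becomes valid.
  while (!packetIsValid(Bundle) && undoOneCompound(Bundle)) {
    // retry with one fewer compound
  }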
Lower select(I1,Q,Q) by converting vector predicate Q to vector register V,
doing select(I1,V,V), and then converting the resulting V back to Q. Also,
try to avoid creating such situations in the first place.
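Conceptually, the lowering does something like the following (a sketch, not
the exact code; names are illustrative):

  // Convert both predicate operands to vector registers, select on the
  // vectors, then convert the result back to a predicate.
  SDValue VA = DAG.getNode(HexagonISD::Q2V, dl, VecTy, QA);
  SDValue VB = DAG.getNode(HexagonISD::Q2V, dl, VecTy, QB);
  SDValue VS = DAG.getSelect(dl, VecTy, Cond, VA, VB);
  SDValue QR = DAG.getNode(HexagonISD::V2Q, dl, PredTy, VS);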
This change extends the addressing mode optimization pass to HVX vgather.
It is specifically intended to resolve the compiler not generating indexed
addresses for vgather stores to vtcm. The vgather pseudo instructions now
accept an immediate operand, and the addressing mode optimization pass adds
the appropriate immediate operand.
Ideally we should mark USR as a Def for these floating-point instructions.
However, that violates some assembler MCChecker rules. This patch fixes
the issue by marking these FP instructions as non-sinkable.
For the code below:
{
r7 = addasl(r3,r0,#2)
r8 = addasl(r3,r2,#2)
r5 = memw(r3+r0<<#2)
r6 = memw(r3+r2<<#2)
}
{
p1 = cmp.gtu(r6,r5)
if (p1.new) memw(r8+#0) = r5
if (p1.new) memw(r7+#0) = r6
}
{
r0 = mux(p1,r2,r4)
}
In the packetizer, a new packet is created for the cmp instruction since
there aren't enough resources in the previous packet. It is also determined
that the cmp stalls by 2 cycles since it depends on the prior load of r5.
In the current packetizer implementation, the predicated store is evaluated
for whether it can go in the same packet as the compare, and since the
compare stalls, the stall of the predicated store does not matter and it
can go in the same packet as the cmp. However, the predicated store will
stall for more cycles because of its dependence on the addasl instruction,
and to avoid that stall we can put it in a new packet.
Improve the packetizer to check whether an instruction being added to a
packet will stall longer than the instructions already in the packet, and
if so, create a new packet.
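A sketch of the added check (helper names are hypothetical):

  // Accept MI into the current packet only if its stall does not exceed the
  // stall the packet already pays; otherwise start a new packet.
  unsigned StallOfMI = calcStall(MI);             // hypothetical helper
  unsigned StallOfPacket = worstStallInPacket();  // hypothetical helper
  if (StallOfMI > StallOfPacket)
    endPacket(MBB, MI);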
When checking resources in the post RA scheduler, see if a .new
vector store should be used instead of a regular vector store.
It may not be possible to schedule a regular vector store, but
it may be possible to schedule a .new version. If the correct one
isn't used, then the post RA scheduler may not generate the best
schedule.
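Roughly, the resource check now does something like this (a sketch; helper
names and signatures may differ from the actual code):

  // If the regular vector store does not fit the remaining packet resources,
  // also consider its .new form before reporting a hazard.
  bool Fits = Resources->canReserveResources(*MI);
  if (!Fits) {
    int NewOpc = TII->getDotNewOp(*MI);
    if (NewOpc > 0)
      Fits = Resources->canReserveResources(&TII->get(NewOpc));
  }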
If there are multiple uses of the def of COPY/REG_SEQUENCE, set the
latency only if the latencies on all the uses are equal; otherwise, set
it to the default.
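A sketch of the rule (not the exact implementation; the collection of use
latencies is assumed):

  // With multiple uses of the COPY/REG_SEQUENCE def, only override the
  // latency when every use would get the same value.
  bool AllEqual = !UseLatencies.empty();
  for (unsigned L : UseLatencies)
    AllEqual &= (L == UseLatencies.front());
  if (AllEqual)
    Dep.setLatency(UseLatencies.front());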
The HexagonVectorCombine pass was moving an instruction incorrectly, which
caused a GEP to use a value that had not yet been defined.
HexagonVectorCombine removes a load from a group due to its
dependences, but in realignGroup the load is processed anyway.
In realignGroup, when determining the maximum alignment, only
those instructions still in the group should be considered.
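In pseudocode form, the intended behavior is roughly (names are assumptions):

  // Compute the required alignment only over instructions that are still
  // members of the move group, skipping any load dropped earlier.
  Align MaxNeeded;
  for (Instruction *In : Candidates)
    if (Group.count(In))
      MaxNeeded = std::max(MaxNeeded, neededAlignFor(In));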
This reverts commit fd4808887e.
This patch causes gcc to issue a lot of warnings like:
warning: base class ‘class llvm::MCParsedAsmOperand’ should be
explicitly initialized in the copy constructor [-Wextra]
This reverts commit ba07f300c6.
A build-vector sequence is made of pairs: rotate+insert. When constructing
a single vector, this results in a chain of 2*N instructions. The rotate
operation is a permute operation, but the insert uses a multiplication
resource: insert and rotate can execute in the same cycle, but obviously
they cannot operate on the same vector. The original halving idea is still
beneficial since it does allow for insert/rotate overlap, and for hiding
insert's latency.
For vectors with repeating values, the old codegen would rotate and insert
every duplicate element. This patch replaces that behavior with a splat of
the most common element; vinsert/vror only occur when needed.
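A sketch of the new strategy (helper names are hypothetical):

  // Splat the most frequent element once, then rotate+insert only the
  // positions that hold a different value.
  SDValue Common = mostFrequentElement(Elems);
  SDValue V = splat(DAG, dl, VecTy, Common);
  for (unsigned I = 0; I != Elems.size(); ++I)
    if (Elems[I] != Common)
      V = rotateAndInsert(DAG, dl, V, Elems[I], I);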
They are the same as for the other HVX vectors, but types need to be
listed explicitly. Also, add a detailed codegen testcase.
Co-authored-by: Abhikrant Sharma <quic_abhikran@quicinc.com>