functions.
This makes the ownership of the resulting MCObjectWriter clear, and allows us
to remove one instance of MCObjectStreamer's bizarre "holding ownership via
someone else's reference" trick.
llvm-svn: 315327
Previously, the code that implemented the GNU assembler aliases for the
LDRD and STRD instructions (where the second register is omitted)
assumed that the input was a valid instruction. This caused assertion
failures for every example in ldrd-strd-gnu-bad-inst.s.
This patch improves the code so that if the instruction is not in the
expected format, the check bails out and the asm parser is run on the
unmodified instruction.
It also relaxes the alias on thumb targets, so that unaligned pairs of
registers can be used. The restriction that Rt must be even-numbered
only applies to the ARM versions of these instructions.
Differential revision: https://reviews.llvm.org/D36732
llvm-svn: 315305
This adds diagnostic strings for the ARM floating-point register
classes, which will be used when these classes are expected by the
assembler, but the provided operand is not valid.
One of these, DPR, requires C++ code to select the correct error
message, as that class contains different registers depending on the
FPU. The rest can all have their diagnostic strings stored in the
tablegen description of them.
Differential revision: https://reviews.llvm.org/D36693
llvm-svn: 315304
This adds diagnostic strings for the ARM general-purpose register
classes, which will be used when these classes are expected by the
assembler, but the provided operand is not valid.
One of these, rGPR, requires C++ code to select the correct error
message, as that class contains different registers in pre-v8 and v8
targets. The rest can all have their diagnostic strings stored in the
tablegen description of them.
Differential revision: https://reviews.llvm.org/D36692
llvm-svn: 315303
createWinCOFFObjectWriter to WinCOFFObjectWriter's constructor.
Fixes the same ownership issue for COFF that r315245 did for MachO:
WinCOFFObjectWriter takes ownership of its MCWinCOFFObjectTargetWriter, so we
want to pass this through to the constructor via a unique_ptr, rather than a
raw ptr.
llvm-svn: 315257
ELFObjectWriter's constructor.
Fixes the same ownership issue for ELF that r315245 did for MachO:
ELFObjectWriter takes ownership of its MCELFObjectTargetWriter, so we want to
pass this through to the constructor via a unique_ptr, rather than a raw ptr.
llvm-svn: 315254
to MCObjectWriter's constructor.
MCObjectWriter takes ownership of its MCMachObjectTargetWriter argument -- this
patch plumbs that ownership relationship through the constructor (which
previously took raw MCMachObjectTargetWriter*) and the createMachObjectWriter
function.
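As a rough sketch of the pattern being adopted here (illustrative class names, not the real LLVM signatures), the writer now owns its target writer outright:
```
#include <memory>
#include <utility>

// Illustrative stand-ins for MCMachObjectTargetWriter / MCObjectWriter.
struct TargetWriter {};

class ObjectWriter {
  std::unique_ptr<TargetWriter> TW; // the writer owns its target writer
public:
  explicit ObjectWriter(std::unique_ptr<TargetWriter> TW)
      : TW(std::move(TW)) {}
};

int main() {
  // The call site hands over ownership explicitly instead of passing a raw
  // pointer whose lifetime is ambiguous.
  ObjectWriter W(std::make_unique<TargetWriter>());
}
```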
llvm-svn: 315245
We end up creating COPYs that are either truncating or extending, and this
should be illegal.
https://reviews.llvm.org/D37640
Patch for X86 and ARM by igorb, rovka
llvm-svn: 315240
Unfortunately TableGen doesn't handle this yet:
Unable to deduce gMIR opcode to handle Src (which is a leaf).
Just add some temporary hand-written code to generate the proper MOVsr.
llvm-svn: 315071
This adds diagnostics for invalid immediate operands to the MOVW and MOVT
instructions (ARM and Thumb).
Differential revision: https://reviews.llvm.org/D31879
llvm-svn: 314888
Currently, our diagnostics for assembly operands are not consistent.
Some start with (for example) "immediate operand must be ...",
and some with "operand must be an immediate ...". I think the latter
form is preferable for a few reasons:
* It's unambiguous that it is referring to the expected type of operand, not
the type the user provided. For example, the user could provide a register
operand and get a message talking about the operand as if it were already an
immediate, just not in the accepted range.
* It allows us to have a consistent style once we add diagnostics for operands
that could take two forms, for example a label or pc-relative memory operand.
Differential revision: https://reviews.llvm.org/D36689
llvm-svn: 314887
This switches the ARM AsmParser to use assembly operand diagnostics from
tablegen, rather than a switch statement on the ARMMatchResultTy. It
moves the existing diagnostic strings to tablegen, but adds no new ones,
so this is NFC except for one diagnostic string that had an off-by-1 error
in the hand-written switch statement.
Differential revision: https://reviews.llvm.org/D31607
llvm-svn: 314804
tryParseRegister advances the lexer, so we need to take copies of the start and
end locations of the register operand before calling it.
Previously, the caret in the diagnostic pointed to the comma after the r0
operand in the test, rather than the start of the operand.
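A hedged sketch of the fix (the parser member and helper names here are assumptions based on the ARM AsmParser, not the exact patch):
```
// Take the locations *before* tryParseRegister(), which advances the lexer
// past the register token.
SMLoc RegStartLoc = Parser.getTok().getLoc();
SMLoc RegEndLoc = Parser.getTok().getEndLoc();
int RegNo = tryParseRegister();
if (RegNo == -1)
  return Error(RegStartLoc, "register expected");
// Any diagnostic emitted later uses RegStartLoc/RegEndLoc, so the caret
// points at the operand itself rather than the token after it.
```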
Differential revision: https://reviews.llvm.org/D31537
llvm-svn: 314799
In this code, we use ~0U as a sentinel value for any operand class that doesn't
have a user-friendly error message, but this value isn't in range of the
MatchClassKind enum, so we need to ensure it does not get passed to isSubclass.
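A sketch of the guard this implies (variable names are illustrative, not the generated matcher's):
```
// ~0U marks "no user-friendly diagnostic for this operand class"; it is not
// a MatchClassKind value, so never hand it to isSubclass().
if (OperandDiagClass != ~0U &&
    isSubclass((MatchClassKind)OperandDiagClass, RequiredClass)) {
  // ... report the class-specific near-miss diagnostic ...
}
```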
llvm-svn: 314793
This converts the ARM AsmParser to use the new assembly matcher error
reporting mechanism, which allows errors to be reported for multiple
instruction encodings when it is ambiguous which one the user intended
to use.
By itself this doesn't improve many error messages, because we don't have
diagnostic text for most operand types, but as we add that then this will allow
more of those diagnostic strings to be used when they are relevant.
Differential revision: https://reviews.llvm.org/D31530
llvm-svn: 314779
This adds some more debug messages to the type legalizer and functions
like PromoteNode, ExpandNode, ExpandLibCall in an attempt to make
the debug messages a little bit more informative and useful.
Differential Revision: https://reviews.llvm.org/D38450
llvm-svn: 314773
Implement shouldCoalesce() to help regalloc avoid running out of GR128
registers.
If a COPY involving a subreg of a GR128 is coalesced, the live range of the
GR128 virtual register will be extended. If this happens where there are
enough phys-reg clobbers present, regalloc will run out of registers (if
there is not a single GR128 allocatable register available).
This patch tries to allow coalescing only when it can prove that this will be
safe by checking the (local) interval in question.
Review: Ulrich Weigand, Quentin Colombet
https://reviews.llvm.org/D37899
https://bugs.llvm.org/show_bug.cgi?id=34610
llvm-svn: 314516
New instructions are added to AArch32 and AArch64 to aid
floating-point multiplication and addition of complex numbers, where
the complex numbers are packed in a vector register as a pair of
elements. The Imaginary part of the number is placed in the more
significant element, and the Real part of the number is placed in the
less significant element.
This patch adds assembler for the ARM target.
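Purely for illustration, the packing described above for one single-precision complex element pair (assuming lane 0 is the less significant element):
```
// One complex number occupies two adjacent vector lanes:
//   lane 0 (less significant)  -> real part
//   lane 1 (more significant)  -> imaginary part
struct PackedComplexF32 {
  float Real; // lane 0
  float Imag; // lane 1
};
```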
Differential Revision: https://reviews.llvm.org/D36789
llvm-svn: 314511
LR is an atypical callee-saved register in that it is restored into a
different register (PC) and thus is not live out of the return block.
This case requires the `Restored` flag in CalleeSavedInfo to be cleared.
This fixes a number of cases where this wasn't handled correctly yet.
llvm-svn: 314471
In setupEntryBlockAndCallSites in CodeGen/SjLjEHPrepare.cpp,
we fetch and store the actual frame pointer, but on return via
the longjmp intrinsic, it was always restored into r7.
On windows, the frame pointer should be restored into r11 instead of r7.
On Darwin (where sjlj exception handling is used by default), the frame
pointer is always r7, both in arm and thumb mode, and likewise, on
windows, the frame pointer always is r11.
On linux however, if sjlj exception handling is enabled (which it isn't
by default), libcxxabi and the user code can be built in differing modes
using different registers as frame pointer. Therefore, when restoring
registers on a platform where we don't always use the same register
depending on code mode, restore both r7 and r11.
Differential Revision: https://reviews.llvm.org/D38253
llvm-svn: 314451
I implemented isTruncateFree in rL313533; this patch fixes the logic
to match my comment, as the previous logic was too general. Now the
only truncates that are free are i64 -> i32.
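A hedged sketch of the narrowed hook (not the exact target implementation):
```
// Only a scalar i64 -> i32 truncate is treated as free.
bool isTruncateFree(EVT SrcVT, EVT DstVT) const override {
  if (SrcVT.isVector() || DstVT.isVector() ||
      !SrcVT.isInteger() || !DstVT.isInteger())
    return false;
  return SrcVT.getSizeInBits() == 64 && DstVT.getSizeInBits() == 32;
}
```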
Differential Revision: https://reviews.llvm.org/D38234
llvm-svn: 314280
It leads to some improvements, but also a regression for the simple
case, so it's not clearly a good idea.
test/CodeGen/ARM/vcvt.ll now has test coverage to show the difference.
Ultimately, the right solution is probably to custom-lower fp-to-int
conversions, to something like ARMISD::VCVT_F32_S32 plus a bitcast.
It's hard to do the right thing when the implicit bitcast isn't visible
to DAG transforms.
llvm-svn: 314169
This teaches SimplifyDemandedBits to handle constant splat vector shifts.
This required changing some uses of getZExtValue to getLimitedValue since we can't rely on legalization using getShiftAmountTy for the shift amount.
I believe there may have been a bug in the ((X << C1) >>u ShAmt) handling where we didn't check if the inner shift was too large. I've fixed that here.
I had to add new patterns to ARM because the zext/sext that the patterns were trying to look for got turned into an any_extend with this patch. Happy to split that out too, but not sure how to test without this change.
Differential Revision: https://reviews.llvm.org/D37665
llvm-svn: 314139
For the following function:
double fn1(double d0, double d1, double d2) {
  double a = -d0 - d1 * d2;
  return a;
}
on ARM, LLVM generates code along the lines of
vneg.f64 d0, d0
vmls.f64 d0, d1, d2
i.e., a negate and a multiply-subtract.
The attached patch adds instruction selection patterns to allow it to generate the single instruction
vnmla.f64 d0, d1, d2
(multiply-add with negation) instead, like GCC does.
Committed on behalf of @gergo- (Gergö Barany)
Differential Revision: https://reviews.llvm.org/D35911
llvm-svn: 313972
The ARM docs suggest in examples that the flags can have either case, and there
are applications in the wild (libopencm3, for example) that expect to be
able to use the uppercase spelling.
https://reviews.llvm.org/D37953
llvm-svn: 313680
This is a preparatory step for D34515.
This change:
- makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
- lowering is done by first converting the boolean value into the carry flag
using (_, C) ← (ARMISD::ADDC R, -1) and converted back to an integer value
using (R, _) ← (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE between the two
operations does the actual addition.
- for subtraction, given that ISD::SUBCARRY's second result is actually a
borrow, we need to invert the value of the second operand and result before
and after using ARMISD::SUBE. We need to invert the carry result of
ARMISD::SUBE to preserve the semantics.
- given that the generic combiner may lower ISD::ADDCARRY and
ISD::SUBCARRY into ISD::UADDO and ISD::USUBO we need to update their lowering
as well otherwise i64 operations now would require branches. This implies
updating the corresponding test for unsigned.
- add new combiner to remove the redundant conversions from/to carry flags
to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
- fixes PR34045
- fixes PR34564
Differential Revision: https://reviews.llvm.org/D35192
llvm-svn: 313618
Implement the isTruncateFree hooks, lifted from AArch64, that are
used by TargetTransformInfo. This allows simplifycfg to reduce the
test case into a single basic block.
Differential Revision: https://reviews.llvm.org/D37516
llvm-svn: 313533
The indexed dot product instructions only accept the lower 16 D-registers as
the indexed register, but we were e.g. incorrectly accepting:
vudot.u8 d16,d16,d18[0]
Differential Revision: https://reviews.llvm.org/D37968
llvm-svn: 313531
This was causing PR34045 to fire again.
> This is a preparatory step for D34515 and also is being recommitted as its
> first version caused PR34045.
>
> This change:
> - makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
> - lowering is done by first converting the boolean value into the carry flag
> using (_, C) ← (ARMISD::ADDC R, -1) and converted back to an integer value
> using (R, _) ← (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE between the two
> operations does the actual addition.
> - for subtraction, given that ISD::SUBCARRY second result is actually a
> borrow, we need to invert the value of the second operand and result before
> and after using ARMISD::SUBE. We need to invert the carry result of
> ARMISD::SUBE to preserve the semantics.
> - given that the generic combiner may lower ISD::ADDCARRY and
> ISD::SUBCARRYinto ISD::UADDO and ISD::USUBO we need to update their lowering
> as well otherwise i64 operations now would require branches. This implies
> updating the corresponding test for unsigned.
> - add new combiner to remove the redundant conversions from/to carry flags
> to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
> - fixes PR34045
>
> Differential Revision: https://reviews.llvm.org/D35192
Also revert follow-up r313010:
> [ARM] Fix typo when creating ISD::SUB nodes
>
> In D35192, I accidentally introduced a typo when creating ISD::SUB nodes,
> giving them two values instead of one.
>
> This fails when the merge_values combiner finds one of these nodes.
>
> This change fixes PR34564.
>
> Differential Revision: https://reviews.llvm.org/D37690
llvm-svn: 313044
In D35192, I accidentally introduced a typo when creating ISD::SUB nodes,
giving them two values instead of one.
This fails when the merge_values combiner finds one of these nodes.
This change fixes PR34564.
Differential Revision: https://reviews.llvm.org/D37690
llvm-svn: 313010
This is a preparatory step for D34515 and also is being recommitted as its
first version caused PR34045.
This change:
- makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
- lowering is done by first converting the boolean value into the carry flag
using (_, C) ← (ARMISD::ADDC R, -1) and converted back to an integer value
using (R, _) ← (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE between the two
operations does the actual addition.
- for subtraction, given that ISD::SUBCARRY's second result is actually a
borrow, we need to invert the value of the second operand and result before
and after using ARMISD::SUBE. We need to invert the carry result of
ARMISD::SUBE to preserve the semantics.
- given that the generic combiner may lower ISD::ADDCARRY and
ISD::SUBCARRY into ISD::UADDO and ISD::USUBO we need to update their lowering
as well otherwise i64 operations now would require branches. This implies
updating the corresponding test for unsigned.
- add new combiner to remove the redundant conversions from/to carry flags
to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
- fixes PR34045
Differential Revision: https://reviews.llvm.org/D35192
llvm-svn: 313009
It caused PR34564.
> This is a preparatory step for D34515 and also is being recommitted as its
> first version caused PR34045.
>
> This change:
> - makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
> - lowering is done by first converting the boolean value into the carry flag
> using (_, C) ← (ARMISD::ADDC R, -1) and converted back to an integer value
> using (R, _) ← (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE between the two
> operations does the actual addition.
> - for subtraction, given that ISD::SUBCARRY second result is actually a
> borrow, we need to invert the value of the second operand and result before
> and after using ARMISD::SUBE. We need to invert the carry result of
> ARMISD::SUBE to preserve the semantics.
> - given that the generic combiner may lower ISD::ADDCARRY and
> ISD::SUBCARRYinto ISD::UADDO and ISD::USUBO we need to update their lowering
> as well otherwise i64 operations now would require branches. This implies
> updating the corresponding test for unsigned.
> - add new combiner to remove the redundant conversions from/to carry flags
> to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
> - fixes PR34045
>
> Differential Revision: https://reviews.llvm.org/D35192
llvm-svn: 312980
This is a preparatory step for D34515 and also is being recommitted as its
first version caused PR34045.
This change:
- makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
- lowering is done by first converting the boolean value into the carry flag
using (_, C) ← (ARMISD::ADDC R, -1) and converted back to an integer value
using (R, _) ← (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE between the two
operations does the actual addition.
- for subtraction, given that ISD::SUBCARRY's second result is actually a
borrow, we need to invert the value of the second operand and result before
and after using ARMISD::SUBE. We need to invert the carry result of
ARMISD::SUBE to preserve the semantics.
- given that the generic combiner may lower ISD::ADDCARRY and
ISD::SUBCARRY into ISD::UADDO and ISD::USUBO we need to update their lowering
as well otherwise i64 operations now would require branches. This implies
updating the corresponding test for unsigned.
- add new combiner to remove the redundant conversions from/to carry flags
to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
- fixes PR34045
Differential Revision: https://reviews.llvm.org/D35192
llvm-svn: 312898
These don't add any value as they're just compositions of existing
patterns. However, they can confuse the cost logic in ISel, leading to
duplicated vcvt instructions like in PR33199.
llvm-svn: 312724
Globals that are promoted to an ARM constant pool may alias with another
existing constant pool entry. We need to keep a reference to all globals
that were promoted to each constant pool value so that we can emit a
distinct label for each promoted global. These labels are necessary so
that debug info can refer to the promoted global without an undefined
reference during linking.
Patch by Stephen Crane!
llvm-svn: 312692
Tail merging can convert an undef use into a normal one when creating a
common tail. Doing so can make the register live out from a block which
previously contained the undef use. To keep the liveness up-to-date,
insert IMPLICIT_DEFs in such blocks when necessary.
To enable this patch the computeLiveIns() function which used to
compute live-ins for a block and set them immediately is split into new
functions:
- computeLiveIns() just computes the live-ins in a LivePhysRegs set.
- addLiveIns() applies the live-ins to a block live-in list.
- computeAndAddLiveIns() is a convenience function combining the other
two functions and behaving like computeLiveIns() before this patch.
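A short usage sketch of the three functions described above (assuming the declarations in LivePhysRegs.h; LiveRegs is a LivePhysRegs set initialized for the target):
```
computeLiveIns(LiveRegs, MBB);   // compute MBB's live-ins into the set
addLiveIns(MBB, LiveRegs);       // write them into MBB's live-in list
// or, combining both steps exactly as the old computeLiveIns() did:
computeAndAddLiveIns(LiveRegs, MBB);
```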
Based on a patch by Krzysztof Parzyszek <kparzysz@codeaurora.org>
Differential Revision: https://reviews.llvm.org/D37034
llvm-svn: 312668
Missing these could potentially screw up post-ra scheduling.
Issue found by inspection, so I don't have a real testcase. Included
test just verifies the expected operands after expansion.
Differential Revision: https://reviews.llvm.org/D35156
llvm-svn: 312589
Use the STI member of ARMInstructionSelector instead of
TII.getSubtarget() and also make use of STI's methods instead of
checking the object format manually.
llvm-svn: 312522
In RWPI code, globals that are not read-only are accessed relative to
the SB register (R9). This is achieved by explicitly generating an ADD
instruction between SB and an offset that we either load from a constant
pool or movw + movt into a register.
llvm-svn: 312521
In the ROPI relocation model, read-only variables are accessed relative
to the PC. We use the (MOV|LDRLIT)_ga_pcrel pseudoinstructions for this.
llvm-svn: 312323
This adds 2-operand assembly aliases for these instructions:
add r0, r1 => add r0, r0, r1
sub r0, r1 => sub r0, r0, r1
Previously this syntax was only accepted for Thumb2 targets, where the
wide versions of the instructions were used.
This patch allows the 2-operand syntax to be used for Thumb1 targets,
and selects the narrow encoding when it is used for Thumb2 targets.
Differential revision: https://reviews.llvm.org/D37377
llvm-svn: 312321
Replace the UsePostRAScheduler SubtargetFeature with
DisablePostRAScheduler, which is then used by Swift and Cyclone.
This patch keeps PostRA scheduling enabled for other Thumb2-capable
cores and/or for functions which are being compiled in Arm mode.
Differential Revision: https://reviews.llvm.org/D37055
llvm-svn: 312226
Summary:
Remove a check for `ARMSubtarget::isTargetDarwin` when determining
whether to use Swift error registers, so that Swift errors work
properly on non-Darwin ARM32 targets (specifically Android).
Before this patch, generated code would save and restore ARM register r8 at
the entry and return points of a function that throws. As r8 is used as a virtual
return value for the object being thrown, this gets overwritten by the restore,
and calling code is unable to catch the error. In turn this caused Swift code
that used `do`/`try`/`catch` to work improperly on Android ARM32 targets.
Addresses Swift bug report https://bugs.swift.org/browse/SR-5438.
Patch by John Holdsworth.
Reviewers: manmanren, rjmccall, aschwaighofer
Reviewed By: aschwaighofer
Subscribers: srhines, aschwaighofer, aemerson, javed.absar, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D35835
llvm-svn: 312164
Support the selection of G_GLOBAL_VALUE in the PIC relocation model. For
simplicity we use the same pseudoinstructions for both Darwin and ELF:
(MOV|LDRLIT)_ga_pcrel(_ldr).
This is new for ELF, so it requires a small update to the ARM pseudo
expansion pass to make sure it adds the correct constant pool modifier
and add-current-address in the case of ELF.
Differential Revision: https://reviews.llvm.org/D36507
llvm-svn: 311992
ARMv4 doesn't support the "BX" instruction, which has been introduced
with ARMv4t. Adjust the call lowering and tail call implementation
accordingly.
Further changes are necessary to ensure that presence of the v4t feature
is correctly set. Most importantly, the "generic" CPU for thumb-*
triples should include ARMv4t, since thumb mode without thumb support
would naturally be pointless.
Add a couple of asserts to ensure thumb instructions are not emitted
without CPU support.
Differential Revision: https://reviews.llvm.org/D37030
llvm-svn: 311921
Summary:
ARMLoadStoreOpt::FixInvalidRegPairOp() was only checking if one of the
load destination registers to be split overlapped with the base register
if the base register was marked as killed. Since kill flags may not
always be present, this can lead to incorrect code.
This bug was exposed by my MachineCopyPropagation change D30751 breaking
the sanitizer-x86_64-linux-android buildbot.
Also clean up some dead code and add an assert that a register offset is
never encountered by this code, since it does not handle them correctly.
Reviewers: MatzeB, qcolombet, t.p.northover
Subscribers: aemerson, javed.absar, kristof.beyls, mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D37164
llvm-svn: 311907
ARMTargetLowering::isLegalAddressingMode can accept illegal addressing modes
for the Thumb1 target. This causes generation of redundant code and affects
performance.
This fixes PR34106: https://bugs.llvm.org/show_bug.cgi?id=34106
Differential Revision: https://reviews.llvm.org/D36467
llvm-svn: 311649
Switching to external relocations for ARM-mode branches (to allow Thumb
interworking when the offset is unencodable) causes calls to temporary symbols
to be miscompiled and instead go to the parent externally visible symbol.
Calling a temporary never happens in compiled code, but can occasionally in
hand-written assembly.
llvm-svn: 311611
G_PHI has the same semantics as PHI but also has types.
This lets us verify that the types in the G_PHI are consistent.
This also allows specifying legalization actions for G_PHIs.
https://reviews.llvm.org/D36990
llvm-svn: 311596
Summary: In some cases, a shufflevector instruction can be transformed into a sequence involving insert_subvector instructions. The ARM backend was missing some insert_subvector patterns, causing a failure during instruction selection. AArch64 has similar patterns.
Reviewers: t.p.northover, olista01, javed.absar, rengolin
Reviewed By: javed.absar
Subscribers: aemerson, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D36796
llvm-svn: 311543
Adds infrastructure to clone whole instruction bundles rather than just
single instructions. This fixes a bug where tail duplication would
unbundle instructions while cloning.
This should unbreak the "Clang Stage 1: cmake, RA, with expensive checks
enabled" build on greendragon. The bot broke with r311139 hitting this
pre-existing bug.
A proper testcase will come next.
llvm-svn: 311511
Armv8.3-A adds instructions that convert a double-precision floating
point number to a signed 32-bit integer with round towards zero,
designed for improving Javascript performance.
Differential Revision: https://reviews.llvm.org/D36785
llvm-svn: 311448
The ARM backend should call setBooleanContents so that it can
use known bits to make some optimizations.
Review: D35821
Patch by Joel Galenson <jgalenson@google.com>
llvm-svn: 311446
The calling convention can be specified by the user in IR. Failing to support
a particular calling convention isn't a programming error, and so relying on
llvm_unreachable to catch and report an unsupported calling convention is not
appropriate.
Differential Revision: https://reviews.llvm.org/D36830
llvm-svn: 311435
This is the exact same fix as in SVN r247254. In that commit, the fix was
applied only for isVTRNMask and isVTRN_v_undef_Mask, but the same issue
is present for VZIP/VUZP as well.
This fixes PR33921.
Differential Revision: https://reviews.llvm.org/D36899
llvm-svn: 311258
This doesn't really change anything as Tablegen would have inferred
those indices anyway; defining them gives us shorter names that are
easier to read while debugging (i.e. "ssub_4" rather than
"dsub2_then_ssub_0")
llvm-svn: 311218
This patch adds the option to allow also using the PostRA scheduler,
which brings the ARM backend in line with AArch64 targets. The
SchedModel can also set 'PostRAScheduler', as the R52 does, so also
query this property in the overridden function.
Differential Revision: https://reviews.llvm.org/D36866
llvm-svn: 311162
When lowering a VLA, we emit a __chkstk call. However, this call can
internally clobber CPSR. We did not mark this register as an ImpDef,
which could potentially allow a comparison to be hoisted above the call
to `__chkstk`. In such a case, the CPSR could be clobbered, and the
check invalidated. When the support was initially added, it seemed that
the call would take care of preventing CPSR from being clobbered, but
this is not the case. Mark the register as clobbered to fix a possible
state corruption.
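A hedged sketch of the kind of change described (MachineInstrBuilder operand flags; not necessarily the exact patch):
```
// When building the __chkstk call, record that CPSR is clobbered so nothing
// keeps flags live across it.
MIB.addReg(ARM::CPSR, RegState::ImplicitDefine | RegState::Dead);
```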
llvm-svn: 311061
- Set the default runtime unroll count to 4 and use the newly added
UnrollRemainder option.
- Create loop cost and force unroll for a cost less than 12.
- Disable unrolling on Thumb1 only targets.
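A hedged sketch of how these choices map onto the unrolling preferences (field names from TargetTransformInfo::UnrollingPreferences; not the exact patch):
```
// Inside the target's getUnrollingPreferences hook:
UP.Runtime = true;                  // allow runtime unrolling
UP.Partial = true;                  // allow partial unrolling
UP.UnrollRemainder = true;          // use the newly added remainder option
UP.DefaultUnrollRuntimeCount = 4;   // default runtime unroll count
```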
Differential Revision: https://reviews.llvm.org/D36134
llvm-svn: 310997
This reverts commit r310425, thus reapplying r310335 with a fix for link
issue of the AArch64 unittests on Linux bots when BUILD_SHARED_LIBS is ON.
Original commit message:
[GlobalISel] Remove the GISelAccessor API.
Its sole purpose was to avoid spreading around ifdefs related to
building global-isel. Since r309990, GlobalISel is not optional anymore,
thus, we can get rid of this mechanism all together.
NFC.
----
The fix for the link issue consists in adding the GlobalISel library in
the list of dependencies for the AArch64 unittests. This dependency
comes from the use of AArch64Subtarget that needs to know how
to destruct the GISel related APIs when being destroyed.
Thanks to Bill Seurer and Ahmed Bougacha for helping me reproducing and
understand the problem.
llvm-svn: 310969
Summary:
Without the SrcVT it's hard to know what is really being asked for. For example, if your target has 128-, 256-, and 512-bit vectors, maybe extracting 128 from 256 is cheap, but maybe extracting 128 from 512 is not.
For x86 we do support extracting a quarter of a 512-bit register. But for i1 vectors we don't have isel patterns for extracting arbitrary pieces. So we need this to have a correct implementation of isExtractSubvectorCheap for mask vectors.
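An illustration of why the extra type matters, with a made-up cost rule (not the X86 or ARM implementation; signature as described in the summary):
```
bool isExtractSubvectorCheap(EVT ResVT, EVT SrcVT, unsigned Index) const override {
  // With only ResVT we could not tell "128 out of 256" (cheap in this
  // made-up rule) apart from "128 out of 512" (not cheap here).
  return Index == 0 && ResVT.getSizeInBits() == 128 &&
         SrcVT.getSizeInBits() == 256;
}
```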
Reviewers: RKSimon, zvi, efriedma
Reviewed By: RKSimon
Subscribers: aemerson, javed.absar, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D36649
llvm-svn: 310793
Summary:
isThumb returns true for Thumb triples (little and big endian), isARM
returns true for ARM triples (little and big endian).
There are a few more checks using arm/thumb that are not covered by
those functions, e.g. that the architecture is either ARM or Thumb
(little endian) or ARM/Thumb little endian only.
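A brief usage sketch of the helpers the summary describes (assuming llvm::Triple):
```
Triple TT(TheModule.getTargetTriple());
if (TT.isThumb()) {
  // thumb or thumbeb triple (little or big endian)
}
if (TT.isARM()) {
  // arm or armeb triple (little or big endian)
}
```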
Reviewers: javed.absar, rengolin, kristof.beyls, t.p.northover
Reviewed By: rengolin
Subscribers: llvm-commits, aemerson
Differential Revision: https://reviews.llvm.org/D34682
llvm-svn: 310781
The existing code is very clever, but not clear, which seems
like the wrong tradeoff here.
Differential Revision: https://reviews.llvm.org/D36559
llvm-svn: 310653
The liveness-tracking code assumes that the registers that were saved
in the function's prolog are live outside of the function. Specifically,
that registers that were saved are also live-on-exit from the function.
This isn't always the case as illustrated by the LR register on ARM.
Differential Revision: https://reviews.llvm.org/D36160
llvm-svn: 310619
Clean up after my misguided attempt in r304267 to "fix" CMP_SWAP
returning an uninitialized status value.
- I was always using tMOVi8 to zero the status register (which cannot
encode higher register numbers, so llvm would silently miscompile).
- Nobody was ever looking at that status value outside the expansion.
ARMDAGToDAGISel::SelectCMP_SWAP() the only place creating CMP_SWAP
instructions was not mapping anything to it. (The cmpxchg status value
from llvm IR is lowered to a manual comparison after the CMP_SWAP)
So this:
- Renames the register from "status" to "temp" to make it obvious that
it isn't used outside the expansion.
- Removes the zeroing of the status/temp register.
- Keeps the live-in list improvements from r304267.
Fixes http://llvm.org/PR34056
llvm-svn: 310534
Summary:
A similar error message has been removed from the ARMTargetMachineBase
constructor in r306939. With this patch, we generate an error message
for the example below, compiled with -mcpu=cortex-m0, which does not
have ARM execution mode.
__attribute__((target("arm"))) int foo(int a, int b)
{
  return a + b % a;
}

__attribute__((target("thumb"))) int bar(int a, int b)
{
  return a + b % a;
}
By adding this error message to ARMBaseTargetMachine::getSubtargetImpl,
we can deal with functions that set -thumb-mode in target-features.
At the moment it seems like Clang does not have access to target-feature
specific information, so adding the error message to the frontend will
be harder.
Reviewers: echristo, richard.barton.arm, t.p.northover, rengolin, efriedma
Reviewed By: echristo, efriedma
Subscribers: efriedma, aemerson, javed.absar, kristof.beyls
Differential Revision: https://reviews.llvm.org/D35627
llvm-svn: 310486
Summary:
By removing FeatureNoARM implies ModeThumb, we can detect cases where a
function's target-features contain -thumb-mode (enables ARM codegen for the
function), but the architecture does not support ARM mode. Previously, the
implication caused the FeatureNoARM bit to be cleared for functions with
-thumb-mode, making the assertion in ARMSubtarget::ARMSubtarget [1]
pointless for such functions.
This assertion is the only guard against generating ARM code for
architectures without ARM codegen support. Is there a place where we
could easily generate error messages for the user? At the moment, we
would generate ARM code for Thumb-only architectures. X86 has the same
behavior as ARM, as in it only has an assertion and no error message,
but I think for ARM an error message would be helpful. What do you
think?
For the example below, `llc -mtriple=armv7m-eabi test.ll -o -` will
generate ARM assembler (or fail with an assertion error with this patch).
Note that if we run the resulting assembler through llvm-mc, we get
an appropriate error message, but not when codegen is handled
through clang.
```
define void @bar() #0 {
entry:
  ret void
}
attributes #0 = { "target-features"="-thumb-mode" }
```
[1] c1f7b54cef/lib/Target/ARM/ARMSubtarget.cpp (L147)
Reviewers: t.p.northover, rengolin, peter.smith, aadg, silviu.baranga, richard.barton.arm, echristo
Reviewed By: rengolin, echristo
Subscribers: efriedma, aemerson, javed.absar, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D35569
llvm-svn: 310476
This reverts commit r310115.
It causes a linker failure for the one of the unittests of AArch64 on one
of the linux bot:
http://lab.llvm.org:8011/builders/clang-ppc64le-linux-multistage/builds/3429
: && /home/fedora/gcc/install/gcc-7.1.0/bin/g++ -fPIC
-fvisibility-inlines-hidden -Werror=date-time -std=c++11 -Wall -W
-Wno-unused-parameter -Wwrite-strings -Wcast-qual
-Wno-missing-field-initializers -pedantic -Wno-long-long
-Wno-maybe-uninitialized -Wdelete-non-virtual-dtor -Wno-comment
-ffunction-sections -fdata-sections -O2
-L/home/fedora/gcc/install/gcc-7.1.0/lib64 -Wl,-allow-shlib-undefined
-Wl,-O3 -Wl,--gc-sections
unittests/Target/AArch64/CMakeFiles/AArch64Tests.dir/InstSizes.cpp.o -o
unittests/Target/AArch64/AArch64Tests
lib/libLLVMAArch64CodeGen.so.6.0.0svn lib/libLLVMAArch64Desc.so.6.0.0svn
lib/libLLVMAArch64Info.so.6.0.0svn lib/libLLVMCodeGen.so.6.0.0svn
lib/libLLVMCore.so.6.0.0svn lib/libLLVMMC.so.6.0.0svn
lib/libLLVMMIRParser.so.6.0.0svn lib/libLLVMSelectionDAG.so.6.0.0svn
lib/libLLVMTarget.so.6.0.0svn lib/libLLVMSupport.so.6.0.0svn -lpthread
lib/libgtest_main.so.6.0.0svn lib/libgtest.so.6.0.0svn -lpthread
-Wl,-rpath,/home/buildbots/ppc64le-clang-multistage-test/clang-ppc64le-multistage/stage1/lib
&& :
unittests/Target/AArch64/CMakeFiles/AArch64Tests.dir/InstSizes.cpp.o:(.toc+0x0):
undefined reference to `vtable for llvm::LegalizerInfo'
unittests/Target/AArch64/CMakeFiles/AArch64Tests.dir/InstSizes.cpp.o:(.toc+0x8):
undefined reference to `vtable for llvm::RegisterBankInfo'
The particularity of this bot is that it is built with
BUILD_SHARED_LIBS=ON
However, I was not able to reproduce the problem so far.
Reverting to unblock the bot.
llvm-svn: 310425
This reverts r310243. Only MVFR2 is actually restricted to v8 and it'll be a
little while before we can get a proper fix together. Better that we allow
incorrect code than reject correct in the meantime.
llvm-svn: 310384
This patch addresses two issues with assembly and disassembly for VMRS/VMSR:
1. Currently, VMRS/VMSR instructions accessing fpsid, mvfr{0-2} and fpexc are
accepted for non-ARMv8-A targets.
2. All VMRS/VMSR instructions accept writing/reading to PC and SP, when only
ARMv7-A and ARMv8-A should be allowed to write/read SP, and none should access PC.
This patch addresses those issues and adds tests for these cases.
Differential Revision: https://reviews.llvm.org/D36306
llvm-svn: 310243
Its sole purpose was to avoid spreading around ifdefs related to
building global-isel. Since r309990, GlobalISel is not optional anymore,
thus, we can get rid of this mechanism all together.
NFC.
llvm-svn: 310115
This is a continuation of https://reviews.llvm.org/D36219
This patch uses reverse mapping (encoding->name) in
ARMInstPrinter::printBankedRegOperand to get rid of
hard-coded values (as pointed out by @olista01).
Reviewed by: @fhahn, @rovka, @olista01
Differential Revision: https://reviews.llvm.org/D36260
llvm-svn: 310072
With this change, the GlobalISel library is always built. In
particular, it is no longer possible to opt GlobalISel out of the build
using the LLVM_BUILD_GLOBAL_ISEL variable.
llvm-svn: 309990
Add support in the instruction selector for G_GLOBAL_VALUE for ELF and
MachO for the static relocation model. We don't handle Windows yet
because that's Thumb-only, and we don't handle Thumb in general at the
moment.
Support for PIC, ROPI, RWPI and TLS will be added in subsequent commits.
Differential Revision: https://reviews.llvm.org/D35883
llvm-svn: 309927
This patch:
- makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
- lowering is done by first converting the boolean value into the carry flag
using (_, C) <- (ARMISD::ADDC R, -1) and converted back to an integer value
using (R, _) <- (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE between the two
operations does the actual addition.
- for subtraction, given that ISD::SUBCARRY's second result is actually a
borrow, we need to invert the value of the second operand and result before
and after using ARMISD::SUBE. We need to invert the carry result of
ARMISD::SUBE to preserve the semantics.
- given that the generic combiner may lower ISD::ADDCARRY and
ISD::SUBCARRY into ISD::UADDO and ISD::USUBO we need to update their lowering
as well otherwise i64 operations now would require branches. This implies
updating the corresponding test for unsigned.
- add new combiner to remove the redundant conversions from/to carry flags
to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) -> C
Differential Revision: https://reviews.llvm.org/D35192
llvm-svn: 309923
IMHO it is an antipattern to have an enum value that is Default.
At any given piece of code it is not clear if we have to handle
Default or if has already been mapped to a concrete value. In this
case in particular, only the target can do the mapping and it is nice
to make sure it is always done.
This deletes the two default enum values of CodeModel and uses an
explicit Optional<CodeModel> when it is possible that it is
unspecified.
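A small sketch of the resulting idiom (illustrative, not the exact API change):
```
// Callers leave the code model unset instead of passing a Default enum value;
// only the target turns "unspecified" into a concrete model.
Optional<CodeModel::Model> CM;              // unspecified by the user
CodeModel::Model Effective =
    CM.hasValue() ? CM.getValue() : CodeModel::Small; // target-chosen default
```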
llvm-svn: 309911
Moves encoding (SYSm) information of banked registers to ARMSystemRegister.td,
where it rightly belongs and forms a single point of reference in the code.
Reviewed by: @fhahn, @rovka, @olista01
Differential Revision: https://reviews.llvm.org/D36219
llvm-svn: 309910
This patch enables choosing how the thread local
storage pointer is accessed (like '-mtp' in gcc).
Differential Revision: https://reviews.llvm.org/D34408
llvm-svn: 309381
The code assumed that unclobbered/unspilled callee saved registers are
unused in the function. This is not true for callee saved registers that are
also used to pass parameters such as swiftself.
rdar://33401922
llvm-svn: 309350
Summary:
This change makes it easier to experiment with the MachineScheduler in
the ARM backend and also makes it very explicit which CPUs use the
MachineScheduler (currently only swift and cyclone).
Reviewers: MatzeB, t.p.northover, javed.absar
Reviewed By: MatzeB
Subscribers: aemerson, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D35935
llvm-svn: 309316
Summary:
Using c++11 enum classes ensures that only valid enum values are used
for ArchKind, ProfileKind, VersionKind and ISAKind. This removes the
need for checks that the provided values map to a proper enum value,
allows us to get rid of AK_LAST and prevents comparing values from
different enums. It also removes a bunch of static_cast
from unsigned to enum values and vice versa, at the cost of introducing
static casts to access AArch64ARCHNames and ARMARCHNames by ArchKind.
FPUKind and ArchExtKind are the only remaining old-style enum in
TargetParser.h. I think it's beneficial to keep ArchExtKind as old-style
enum, but FPUKind can be converted too, but this patch is quite big, so
could do this in a follow-up patch. I could also split this patch up a
bit, if people would prefer that.
Reviewers: rengolin, javed.absar, chandlerc, rovka
Reviewed By: rovka
Subscribers: aemerson, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D35882
llvm-svn: 309287
The comment at the top of the switch statement indicates that the
fall-through behavior is intentional. Using LLVM_FALLTHROUGH silences
-Wimplicit-fallthrough, which is enabled by default in GCC 7.
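For reference, the idiom being applied (LLVM_FALLTHROUGH comes from llvm/Support/Compiler.h; the case labels are made up):
```
switch (Kind) {
case KindA:
  handleA();
  LLVM_FALLTHROUGH; // intentional: KindA also needs the KindB handling
case KindB:
  handleB();
  break;
}
```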
llvm-svn: 309272
Summary:
Now that we have control flow in place, fuse the per-rule tables into a
single table. This is a compile-time saving at this point. However, this will
also enable the optimization of a table so that similar instructions can be
tested together, reducing the time spent on matching the code.
This is NFC in terms of externally visible behaviour but some internals have
changed slightly. State.MIs is no longer reset between each rule that is
attempted because it's not necessary to do so. As a consequence of this the
restriction on the order that instructions are added to State.MIs has been
relaxed to only affect recorded instructions that require new elements to be
added to the vector. GIM_RecordInsn can now write to any element from 1 to
State.MIs.size() instead of just State.MIs.size().
The compile-time regressions from the last commit were caused by the ARM target
including a non-const variable (zero_reg) in the table and therefore generating
an initializer for it. That variable is now const.
Reviewers: ab, t.p.northover, qcolombet, rovka, aditya_nandakumar
Reviewed By: rovka
Subscribers: kristof.beyls, igorb, llvm-commits
Differential Revision: https://reviews.llvm.org/D35681
llvm-svn: 309264
Changing mask argument type from const SmallVectorImpl<int>& to
ArrayRef<int>.
This came up in D35700 where a mask is received as an ArrayRef<int> and
we want to pass it to TargetLowering::isShuffleMaskLegal().
Also saves a few lines of code.
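A hedged sketch of why ArrayRef<int> helps at the call site (the surrounding code is illustrative):
```
// ArrayRef<int> binds to a SmallVector<int, N>, a std::vector<int>, or another
// ArrayRef without copying, so the mask from D35700 passes straight through.
ArrayRef<int> Mask = SVN->getMask();           // already an ArrayRef
bool Legal = TLI.isShuffleMaskLegal(Mask, VT); // no SmallVector temporary needed
```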
llvm-svn: 309085
Enable runtime and partial loop unrolling of simple loops without
calls on M-class cores. The thresholds are calculated based on
whether the target is Thumb or Thumb-2.
Differential Revision: https://reviews.llvm.org/D34619
llvm-svn: 308956
This patch makes LSR generate better code for SystemZ in the cases of memory
intrinsics, Load->Store pairs or comparison of immediate with memory.
In order to achieve this, the following common code changes were made:
* New TTI hook: LSRWithInstrQueries(), which defaults to false. Controls if
LSR should do instruction-based addressing evaluations by calling
isLegalAddressingMode() with the Instruction pointers.
* In LoopStrengthReduce: handle address operands of memset, memmove and memcpy
as address uses, and call isFoldableMemAccessOffset() for any LSRUse::Address,
not just loads or stores.
SystemZ changes:
* isLSRCostLess() implemented with Insns first, and without ImmCost.
* New function supportedAddressingMode() that is a helper for TTI methods
looking at Instructions passed via pointers.
Review: Ulrich Weigand, Quentin Colombet
https://reviews.llvm.org/D35262
https://reviews.llvm.org/D35049
llvm-svn: 308729
This patch cleans up and fixes issues in the M-Class system register handling:
1. It defines the system registers and the encoding (SYSm values) in one place:
a new ARMSystemRegister.td using SearchableTable, thereby removing the
hand-coded values which existed in multiple places.
2. Some system registers e.g. BASEPRI_MAX_NS which do not exist were being allowed!
Ref: ARMv6/7/8M architecture reference manual.
Reviewed by: @t.p.northover, @olist01, @john.brawn
Differential Revision: https://reviews.llvm.org/D35209
llvm-svn: 308456
Cleaned up the code in FastISel a bit.
Had to add make_range to MCInstrDesc as that was needed and seems missing.
Reviewed by: @t.p.northover
Differential Revision: https://reviews.llvm.org/D35494
llvm-svn: 308291
Insert a TSTri to set the flags and a Bcc to branch based on their
values. This is a bit inefficient in the (common) cases where the
condition for the branch comes from a compare right before the branch,
since we set the flags both as part of the compare lowering and as part
of the branch lowering. We're going to live with that until we settle on
a principled way to handle this kind of situation, which occurs with
other patterns as well (combines might be the way forward here).
llvm-svn: 308009
Constants are crucial for code size in the ARM Thumb-1 instruction
set. The 16 bit instruction size often does not offer enough space
for immediate arguments. This means that additional instructions are
frequently used to load constants into registers. Since constants are
hoisted, this can lead to significant register spillage if they are
used multiple times in a single function. This can be avoided by
rematerialization, i.e. recomputing a constant instead of reloading
it from the stack. This patch fixes the rematerialization of literal
pool loads in the ARM Thumb instruction set.
Patch by Philip Ginsbach
Differential Revision: https://reviews.llvm.org/D33936
llvm-svn: 308004
This boils down to not crashing in reg bank select due to the lack of
register operands on this instruction, and adding some tests. The
instruction selection is already covered by the TableGen'erated code.
llvm-svn: 307904
This patch tidies up and organises ARM.td
so that it is easier to understand
and extend in the future.
Reviewed by: @hahn, @rovka
Differential Revision: https://reviews.llvm.org/D35248
llvm-svn: 307897
Summary:
Similar to X86, it should be safe to inline callees if their
target-features are a subset of the caller. As some subtarget features
provide different instructions depending on whether they are set or
unset (e.g. ThumbMode and ModeSoftFloat), we use a whitelist of
target-features describing hardware capabilities only.
Reviewers: kristof.beyls, rengolin, t.p.northover, SjoerdMeijer, peter.smith, silviu.baranga, efriedma
Reviewed By: SjoerdMeijer, efriedma
Subscribers: dschuff, efriedma, aemerson, sdardis, javed.absar, arichardson, eraman, llvm-commits
Differential Revision: https://reviews.llvm.org/D34697
llvm-svn: 307889
When we have a diamond ifcvt, the fallthrough block will have a branch at the end
of it that disappears when predicated, so discount it from the predication cost.
Differential Revision: https://reviews.llvm.org/D34952
llvm-svn: 307788
Refactor CmpHelper into something simpler. It was overkill to use
templates for this - instead, use a simple CmpConstants structure to
hold the opcodes and other constants that are different when selecting
int / float / double comparisons. Also, extract some of the helpers that
were in CmpHelper into ARMInstructionSelector and make use of some of
them when selecting other things than just compares.
llvm-svn: 307766
The issue is not if the value is pcrel. It is whether we have a
relocation or not.
If we have a relocation, the static linker will select the upper
bits. If we don't have a relocation, we have to do it.
llvm-svn: 307730
OpenCL 2.0 introduces the notion of memory scopes in atomic operations to
global and local memory. These scopes restrict how synchronization is
achieved, which can result in improved performance.
This change extends existing notion of synchronization scopes in LLVM to
support arbitrary scopes expressed as target-specific strings, in addition to
the already defined scopes (single thread, system).
The LLVM IR and MIR syntax for expressing synchronization scopes has changed
to use *syncscope("<scope>")*, where <scope> can be "singlethread" (this
replaces *singlethread* keyword), or a target-specific name. As before, if
the scope is not specified, it defaults to CrossThread/System scope.
Implementation details:
- Mapping from synchronization scope name/string to synchronization scope id
is stored in LLVM context;
- CrossThread/System and SingleThread scopes are pre-defined to efficiently
check for known scopes without comparing strings;
- Synchronization scope names are stored in SYNC_SCOPE_NAMES_BLOCK in
the bitcode.
Differential Revision: https://reviews.llvm.org/D21723
llvm-svn: 307722
For ELF, a movw+movt pair is handled as two separate relocations.
If an offset should be applied to the symbol address, this offset is
stored as an immediate in the instruction (as opposed to stored as an
offset in the relocation itself).
Even though the actual value stored in the movt immediate after linking
is the top half of the value, we need to store the unshifted offset
prior to linking. When the relocation is made during linking, the offset
gets added to the target symbol value, and the upper half of the value
is stored in the instruction.
This makes sure that movw+movt with offset symbols get properly
handled, in case the offset addition in the lower half should be
carried over to the upper half.
This makes the output from the additions to the test case match
the output from GNU binutils.
For COFF and MachO, the movw/movt relocations are handled as a pair,
and the overflow from the lower half gets carried over to the movt,
so they should keep the shifted offset just as before.
Differential Revision: https://reviews.llvm.org/D35242
llvm-svn: 307713
This change allows the pc to be used as a destination register for the
pseudo instruction LDR pc, =expression. The pseudo instruction must not be
transformed into a MOV, but it can use the Thumb2 LDR (literal) instruction
to a constant pool entry. See (A7.7.43 from ARMv7M ARM ARM).
Differential Revision: https://reviews.llvm.org/D34751
llvm-svn: 307640
We used to forget to erase the original instruction when replacing a
G_FCMP true/false. Fix this bug and make sure the tests check for it.
llvm-svn: 307639
r306334 fixed a bug in AArch64 dealing with wide interleaved accesses having
pointer types. The bug also exists in ARM, so this patch copies over the fix.
llvm-svn: 307409