shuffle before inserting on a 256-bit vector.
- Add AVX versions of movd/movq instructions
- Introduce a few COPY patterns to match insert_subvector instructions.
This turns a trivial insert_subvector instruction into a register copy,
coalescing the xmm into a ymm and avoiding emitting one more instruction.
llvm-svn: 136002
Fix the Rn register encoding for both SSAT and USAT. Update the parsing of the
shift operand to correctly handle the allowed shift types and immediate ranges
and issue meaningful diagnostics when an illegal value or shift type is
specified. Add aliases to parse an omitted shift operand (default value of
'lsl #0').
Add tests for diagnostics and proper encoding.
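For reference, a rough model of the SSAT semantics behind those ranges (an
illustrative sketch, not LLVM code; the ssat helper name is made up):

#include <algorithm>
#include <cstdint>

// Illustrative model only. 'sat' is 1-32; the only legal shifts are
// lsl #0-31 and asr #1-32, and an omitted shift means lsl #0.
int32_t ssat(unsigned sat, int32_t rn, bool isASR = false, unsigned amt = 0) {
  int64_t v = isASR ? (int64_t)rn >> amt : (int64_t)rn << amt;
  int64_t lo = -(int64_t(1) << (sat - 1));
  int64_t hi = (int64_t(1) << (sat - 1)) - 1;
  return (int32_t)std::clamp(v, lo, hi); // hardware also sets Q on saturation
}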
llvm-svn: 135990
The shift immediate encoding, printing, etc. is handled directly by the
enclosing operand definition, so it should be a vanilla immediate, not a
nested complex operand (shift_imm).
llvm-svn: 135968
Remove some initializers that are the same as the default, move defs next to
their (singular) uses and generally simplify some formatting of asm operand
definitions.
llvm-svn: 135946
unwind encoding for that function. This simply crawls through the prolog looking
for machine instrs marked as "frame setup". It can calculate from these what the
compact unwind should look like.
This is currently disabled because the needed linker support isn't in place
yet, but initial tests look good.
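The crawl itself is simple to sketch. A minimal version, assuming current
MachineFunction/MachineInstr APIs (the helper name below is invented):

#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Collect the contiguous prefix of the entry block that is tagged as
// frame setup; these are the instructions the compact unwind encoding
// would be computed from.
static SmallVector<const MachineInstr *, 8>
collectPrologFrameSetup(const MachineFunction &MF) {
  SmallVector<const MachineInstr *, 8> Prolog;
  for (const MachineInstr &MI : MF.front()) {
    if (!MI.getFlag(MachineInstr::FrameSetup))
      break; // the prolog ends at the first non-frame-setup instruction
    Prolog.push_back(&MI);
  }
  return Prolog;
}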
llvm-svn: 135922
The immediate is in the range 1-32, but is encoded as 0-31 in a 5-bit bitfield.
Update the representation such that we store the operand as 0-31, allowing us
to remove the encoder method and the special case handling in the disassembler.
Update the assembly parser and the instruction printer accordingly.
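Concretely, the asm-level value and the stored operand/bitfield now differ by
one in both directions; a tiny illustrative pair of helpers (names invented,
not LLVM's actual code):

#include <cassert>
#include <cstdint>

// asm form (1-32) -> stored operand and 5-bit encoding (0-31)
uint32_t encodeImm(uint32_t imm) {
  assert(imm >= 1 && imm <= 32);
  return imm - 1;
}
// 5-bit bitfield (0-31) -> asm form (1-32)
uint32_t decodeImm(uint32_t field) {
  assert(field <= 31);
  return field + 1;
}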
llvm-svn: 135823
The header file was already properly located. The previous need for it
in Support had to do with the version string printing which was fixed in
r135757.
Also update build dependencies where libraries that needed the
functionality of the Target library (in the form of the TargetRegistry)
were picking it up via Support. This is pretty pervasive, essentially
every TargetInfo library (ARMInfo, etc) uses TargetRegistry, making it
depend on Target. All of these were previously just sneaking by.
llvm-svn: 135760
the way to go. Doing this here will prevent several node matches later,
and would otherwise force looking all the way through several
VINSERTF128/VEXTRACTF128 chains to optimize simple things.
llvm-svn: 135730
and was actually very wrong, fix it and make it simpler. Also remove the
ConcatVectors function, which is unused now.
- Fix an introduction of useless nodes in r126664 and r126264. The
VUNPCKL* nodes should never be introduced because we don't want duplicate
nodes for the 128-bit AVX and non-AVX modes; the actual instruction
difference only exists during isel, not for target-specific DAG
nodes. We only introduce V* target nodes when there is no 128-bit
version already there.
- Fix a fragile test and make it more useful.
llvm-svn: 135729
Aliases for LDM/STM. The single-register versions should encode to LDR/STR
with writeback, but we don't (yet) get that correct. Neither does Darwin's
system assembler, though, so the limitation isn't a deal-breaker.
llvm-svn: 135702
Stefanovic. I removed the part that actually emits the instructions because
I want that to get into better shape first, in incremental steps. This
also makes it easier to review the upcoming parts.
llvm-svn: 135678
instruction introduced in AVX, which can operate on 128- and 256-bit vectors.
It considers a 256-bit vector as two independent 128-bit lanes. It can permute
any 32- or 64-bit elements inside a lane, and restricts the second lane to
have the same permutation as the first one. With the improved splat support
introduced earlier today, adding codegen for this instruction enables more
efficient 256-bit code:
Instead of:
vextractf128 $0, %ymm0, %xmm0
punpcklbw %xmm0, %xmm0
punpckhbw %xmm0, %xmm0
vinsertf128 $0, %xmm0, %ymm0, %ymm1
vinsertf128 $1, %xmm0, %ymm1, %ymm0
vextractf128 $1, %ymm0, %xmm1
shufps $1, %xmm1, %xmm1
movss %xmm1, 28(%rsp)
movss %xmm1, 24(%rsp)
movss %xmm1, 20(%rsp)
movss %xmm1, 16(%rsp)
vextractf128 $0, %ymm0, %xmm0
shufps $1, %xmm0, %xmm0
movss %xmm0, 12(%rsp)
movss %xmm0, 8(%rsp)
movss %xmm0, 4(%rsp)
movss %xmm0, (%rsp)
vmovaps (%rsp), %ymm0
We get:
vextractf128 $0, %ymm0, %xmm0
punpcklbw %xmm0, %xmm0
punpckhbw %xmm0, %xmm0
vinsertf128 $0, %xmm0, %ymm0, %ymm1
vinsertf128 $1, %xmm0, %ymm1, %ymm0
vpermilps $85, %ymm0, %ymm0
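For reference, a rough C++ model of the immediate form of vpermilps
(illustrative only; the helper name is made up). The same 2-bit selector is
applied per position in both 128-bit lanes, so imm 85 (0b01010101) selects
element 1 everywhere, i.e. the per-lane splat used above:

#include <array>
#include <cstdint>

std::array<float, 8> vpermilps(std::array<float, 8> src, uint8_t imm) {
  std::array<float, 8> dst;
  for (int lane = 0; lane < 2; ++lane)
    for (int i = 0; i < 4; ++i) {
      unsigned sel = (imm >> (2 * i)) & 3; // same selector for both lanes
      dst[lane * 4 + i] = src[lane * 4 + sel];
    }
  return dst;
}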
llvm-svn: 135662
refactor the code and add a bunch of comments. The final shuffle
emitted by handling 256-bit types is suitable for the VPERM shuffle
instruction which is going to be introduced in a subsequent commit (with
a testcase which covers this commit).
llvm-svn: 135661
Move the shift operator and special value (32 encoded as 0 for PKHTB) handling
into the instruction printer. This cleans up a bit of the disassembler
special casing for these instructions, more easily handles not printing the
operand at all for "lsl #0" and prepares for correct asm parsing of these
operands.
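The special case amounts to very little code; a sketch of the printing rule
(illustrative only, not the actual InstPrinter code):

#include <string>

// PKHTB takes asr #1-32 with 32 encoded as 0; PKHBT takes lsl #0-31
// and the default lsl #0 is simply not printed.
std::string printPKHShift(bool isTB, unsigned field) {
  if (isTB)
    return ", asr #" + std::to_string(field == 0 ? 32 : field);
  if (field == 0)
    return ""; // lsl #0 is the default; omit it
  return ", lsl #" + std::to_string(field);
}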
llvm-svn: 135626
Move common definitions for ARM and Thumb2 into ARMInstrFormats.td and rename
them to make it clearer that they're for the PKH instructions.
llvm-svn: 135617
The shift type is implied by the instruction (PKHBT vs. PKHTB) and so shouldn't
be also encoded as part of the shift value immediate. Otherwise we're able to
represent invalid instructions, plus it needlessly complicates the
representation. Preparatory work for asm parsing of these instructions.
llvm-svn: 135616
- Introduce the JITDefault code model. This tells targets to set a different
default code model for JIT. This eliminates the ugly hack in TargetMachine where
code model is changed after construction.
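A JIT client then just picks the model up front; a minimal sketch, assuming
the CodeModel enum in llvm/Support/CodeGen.h (the helper name is invented):

#include "llvm/Support/CodeGen.h"

llvm::CodeModel::Model pickCodeModel(bool forJIT) {
  return forJIT ? llvm::CodeModel::JITDefault // target maps to its JIT default
                : llvm::CodeModel::Default;   // normal static default
}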
llvm-svn: 135580
The system register spec should be case insensitive. The preferred form for
output with mask values of 4, 8, and 12 references APSR rather than CPSR.
Update and tidy up tests accordingly.
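The preferred-form mapping is small; an illustrative sketch (the function is
made up, and the APSR_* suffixes follow the ARM ARM spellings):

#include <string>

std::string preferredSysReg(unsigned mask) {
  switch (mask) {
  case 4:  return "APSR_g";      // GE bits
  case 8:  return "APSR_nzcvq";  // flags
  case 12: return "APSR_nzcvqg"; // both
  default: return "CPSR";        // other masks keep the CPSR form
  }
}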
llvm-svn: 135532