This compiles with no changes to clang/lld/lldb with MSVC and includes
overloads of various functions used by those projects and by llvm that
take OwningPtrs as parameters. This should allow out-of-tree projects
some time to move. There are also no changes to lib/Target, which should
give out-of-tree targets time to move, if necessary.
llvm-svn: 203083
Unfortunately, it is currently impossible to use a PatFrag as part of an output
pattern (the part of the pattern that has instructions in it) in TableGen.
Looking at the current implementation, this was clearly intended to work (there
is already code in place to expand patterns in the output DAG), but is
currently broken by the baked-in type-checking assumption and the order in which
the pattern fragments are processed (output pattern fragments need to be
processed after the instruction definitions are processed).
Fixing this is fairly simple, but requires some way of differentiating output
patterns from the existing input patterns. The simplest way to handle this
seems to be to create a subclass of PatFrag, and so that's what I've done here.
As a simple example, this allows us to write:
def crnot : OutPatFrag<(ops node:$in),
(CRNOR $in, $in)>;
def : Pat<(not i1:$in),
(crnot $in)>;
which captures the core use case: handling of repeated subexpressions inside
of complicated output patterns.
This will be used by an upcoming commit to the PowerPC backend.
llvm-svn: 202450
should not be marked nounwind.
Marking them nounwind caused crashes in the WebKit FTL JIT, because if we enable
sufficient optimizations, LLVM starts eliding compact_unwind sections (or any unwind
data for that matter), making deoptimization via stackmaps impossible.
This changes the stackmap intrinsic to be may-throw, adds a test for exactly the
symptom that WebKit saw, and fixes TableGen to handle un-attributed intrinsics.
Thanks to atrick and philipreames for reviewing this.
llvm-svn: 201826
Original commits messages:
Add MRMXr/MRMXm form to X86 for use by instructions which treat the 'reg' field of modrm byte as a don't care value. Will allow for simplification of disassembler code.
Simplify a bunch of code by removing the need for the x86 disassembler table builder to know about extended opcodes. The modrm forms are sufficient to convey the information.
llvm-svn: 201065
r201059 appears to cause a crash in a bootstrapped build of clang. Craig
isn't available to look at it right now, so I'm reverting it while he
investigates.
llvm-svn: 201064
According to the AAPCS, when a CPRC is allocated to the stack, all other
VFP registers should be marked as unavailable.
I have also modified the rules for allocating non-CPRCs to the stack, to make
it more explicit that all GPRs must be made unavailable. I cannot think of a
case where the old version would produce incorrect answers, so there is no test
for this.
llvm-svn: 200970
The addition of IC_OPSIZE_ADSIZE in r198759 wasn't quite complete. It
also turns out to have been unnecessary. The disassembler handles the
AdSize prefix for itself, and doesn't care about the difference between
(e.g.) MOV8ao8 and MOV8ao8_16 definitions. So just let them coexist and
don't worry about it.
llvm-svn: 199654
promotion code, Tablegen will now select FPExt for floating point promotions
(previously it had returned AExt, which is not valid for floating point types).
Any out-of-tree targets that were relying on AExt being returned for FP
promotions will need to update their code to check for FPExt instead.
llvm-svn: 199252
Declaring or defining reserved identifiers is undefined behaviour in standard
C++. This needs to be addressed in compiler-rt before it can be used in LLVM.
See the list discussion for details.
This reverts commit r198858.
llvm-svn: 198884
It seems there is no separate instruction class for having AdSize *and*
OpSize bits set, which is required in order to disambiguate between all
these instructions. So add that to the disassembler.
Hm, perhaps we do need an AdSize16 bit after all?
llvm-svn: 198759
A ValueType in a pattern dag is a type cast, and GetNumNodeResults should
handle it (the type cast has only one result).
This comes up, for example, during the type checking of pattern fragments, for
example, AArch64's Neon_combine_2d fragment is:
dag Operands = (ops node:$Rm, node:$Rn);
dag Fragment = (v2f64 (concat_vectors (v1f64 node:$Rm), (v1f64 node:$Rn)));
llvm-svn: 198347
That's what it actually means, and with 16-bit support it's going to be
a little more relevant since in a few corner cases we may actually want
to distinguish between 16-bit and 32-bit mode (for example the bare 'push'
aliases to pushw/pushl etc.)
Patch by David Woodhouse
llvm-svn: 197768
Unfortunately, the PowerPC instruction definitions make heavy use of the
positional operand encoding heuristic to map operands onto bitfield variables
in the instruction definitions. Changing this to use name-based mapping is not
trivial, however, because additional infrastructure needs to be designed to
handle mapping of complex operands (with multiple suboperands) onto multiple
bitfield variables.
In the mean time, this adds support for positionally encoded operands to
FixedLenDecoderEmitter, so that we can generate a disassembler for the PowerPC
backend. To prevent an accidental reliance on this feature, and to prevent an
undesirable interaction with existing disassemblers, a backend must opt-in to
this support by setting the new decodePositionallyEncodedOperands
instruction-set bit to true.
When enabled, this iterates the variables that contribute to the instruction
encoding, just as the encoder does, and emulates the procedure the encoder uses
to map "numbered" operands to variables. The bit range for each variable is
also determined as the encoder determines them. This map is then consulted
during the decoder-generator's loop over operands to decode, allowing the
decoder to understand both position-based and name-based operand-to-variable
mappings.
As noted in the comment on the decodePositionallyEncodedOperands definition,
this support should be removed once it is no longer needed. There should be no
change to existing disassemblers.
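As a rough sketch, opting in would look something like the following, assuming
the bit lives on the target's InstrInfo definition like the other
instruction-set-wide TableGen bits (the MyTargetInstrInfo name is a placeholder):
def MyTargetInstrInfo : InstrInfo {
  let decodePositionallyEncodedOperands = 1;  // opt-in; off by default
}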
llvm-svn: 197691
This is more prep for adding the PowerPC disassembler. FixedLenDecoderEmitter
should recognize PointerLikeRegClass operands as register types, and generate
register-like decoding calls instead of treating them like immediates.
llvm-svn: 197680
The convention used to specify the PowerPC ISA is that bits are numbered in
reverse order (0 is the index of the high bit). To support this "little endian"
encoding convention, CodeEmitterGen will reverse the bit numberings prior to
generating the encoding tables. In order to generate a disassembler,
FixedLenDecoderEmitter needs to do the same.
This moves the bit reversal logic out of CodeEmitterGen and into CodeGenTarget
(where it can be used by both CodeEmitterGen and FixedLenDecoderEmitter). This
is prep work for disassembly support in the PPC backend (which is the only
in-tree user of this little-endian encoding support).
llvm-svn: 197532
Added scalar compare VCMPSS, VCMPSD.
Implemented LowerSELECT for scalar FP operations.
I replaced FSETCCss, FSETCCsd with one node type FSETCCs.
Node extract_vector_elt(v16i1/v8i1, idx) returns an element of type i1.
llvm-svn: 197384
This patch places class definitions in implementation files into anonymous
namespaces to prevent weak vtables. This eliminates the need of providing an
out-of-line definition to pin the vtable explicitly to the file.
llvm-svn: 195092
This patch removes most of the trivial cases of weak vtables by pinning them to
a single object file. The memory leaks in this version have been fixed. Thanks
Alexey for pointing them out.
Differential Revision: http://llvm-reviews.chandlerc.com/D2068
Reviewed by Andy
llvm-svn: 195064
This change is incorrect. If you delete the virtual destructor of both a base class
and a subclass, then the following code:
Base *foo = new Child();
delete foo;
will not call the destructors for the members of the Child class. As a result, I observe
plenty of memory leaks. Notable examples I investigated are:
ObjectBuffer and ObjectBufferStream, AttributeImpl and StringSAttributeImpl.
llvm-svn: 194997
This patch removes most of the trivial cases of weak vtables by pinning them to
a single object file.
Differential Revision: http://llvm-reviews.chandlerc.com/D2068
Reviewed by Andy
llvm-svn: 194865
These used to be referenced by the CGI->AWI map (in AsmWriterEmitter), but
stored in a vector local to EmitPrintInstruction. Move the vector to
AsmWriterEmitter too.
llvm-svn: 193525
The old code skipped one of the sorting criteria if either pattern had
no types. This could lead to cycles of the form X < Y, Y < Z, Z < X.
llvm-svn: 191735
Add VEX_LIG to scalar FMA4 instructions.
Use VEX_LIG in some of the inheriting checks in disassembler table generator.
Make use of VEX_L_W, VEX_L_W_XS, VEX_L_W_XD contexts.
Don't let VEX_L_W, VEX_L_W_XS, VEX_L_W_XD, VEX_L_W_OPSIZE inherit from their non-L forms unless VEX_LIG is set.
Let VEX_L_W, VEX_L_W_XS, VEX_L_W_XD, VEX_L_W_OPSIZE inherit from all of their non-L or non-W cases.
Increase ranking on VEX_L_W, VEX_L_W_XS, VEX_L_W_XD, VEX_L_W_OPSIZE so they get chosen over non-L/non-W forms.
llvm-svn: 191649
Ideally, the machine model is added at the time the instructions are
defined. But many instructions in X86InstrSSE.td still need a model.
Without this workaround the scheduler asserts because x86 already has
itinerary classes for these instructions, indicating they should be
modeled by the scheduler. Since we use the new machine model for other
instructions, it expects a new machine model for these too.
llvm-svn: 191391
Patch by Ana Pazos.
1. Added support for v1ix and v1fx types.
2. Added Scalar Pairwise Reduce instructions.
3. Added initial implementation of Scalar Arithmetic instructions.
llvm-svn: 191263
This makes using array_pod_sort significantly safer. The implementation relies
on function pointer casting but that should be safe as we're dealing with void*
here.
llvm-svn: 191175
TableGen was sorting the entries in some of its internal data
structures by pointer. This order filtered through to the final
matching table and affected the diagnostics produced on bad assembly
occasionally.
It also turns out STL algorithms are ridiculously easy to misuse on
containers with custom order methods. (No bugs before, or now that I
know of, but plenty in the middle).
This should fix the sanitizer bot, which ends up with weird pointers.
llvm-svn: 190793
The 'Deprecated' class allows you to specify a SubtargetFeature that the
instruction is deprecated on.
The 'ComplexDeprecationPredicate' class allows you to define a custom
predicate that is called to check for deprecation.
For example:
ComplexDeprecationPredicate<"MCR">
would mean you would have to define the following function:
bool getMCRDeprecationInfo(MCInst &MI, MCSubtargetInfo &STI,
std::string &Info)
which returns 'false' for not deprecated and 'true' for deprecated,
and stores the warning message in 'Info'.
The MCTargetAsmParser constructor was changed to take an extra argument of
the MCInstrInfo class, so out-of-tree targets will need to be changed.
llvm-svn: 190598
Link.exe's command line options are case-insensitive. This patch
adds a new attribute to OptTable to let the option parser compare
options while ignoring case.
Command lines are generally case-insensitive on Windows. CL.exe is an
exception. So this new attribute should be useful for other commands
running on Windows.
Differential Revision: http://llvm-reviews.chandlerc.com/D1485
llvm-svn: 189416
This field specifies registers that are preserved across function calls,
but that should not be included in the generated SaveList array.
This can be used to generate regmasks for architectures that save
registers through other means, like SPARC's register windows.
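A minimal sketch of how this could be used, assuming the field is the
OtherPreserved member of the CalleeSavedRegs class and using made-up register
names:
def CSR_MyTarget : CalleeSavedRegs<(add L0, L1)> {
  // Preserved by the register-window mechanism, so kept out of SaveList
  // but still included in the generated register mask.
  let OtherPreserved = (add I0, I1, I2, I3);
}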
llvm-svn: 189084
Back in the mists of time (2008), it seems TableGen couldn't handle the
patterns necessary to match ARM's CMOV node that we convert select operations
to, so we wrote a lot of fairly hairy C++ to do it for us.
TableGen can deal with it now: there were a few minor differences to CodeGen
(see tests), but nothing obviously worse that I could see, so we should
probably address anything that *does* come up in a localised manner.
llvm-svn: 188995
clang bootstraps intermittently failed for me due to a difference in
the MCK_Reg ordering in ARMGenAsmMatcher.inc. E.g. in my latest
run the stage 1 and stage 3 versions were the same but the stage 2
one was different (though still functionally correct). This meant
that the .o comparison failed.
MCK_Regs were assigned by iterating over a std::set< std::set<Record*> >,
and since std::set is sorted lexicographically, the order depended on the
order of the pointer values. This patch replaces the pointer ordering
with LessRecordByID.
llvm-svn: 188164
LLVM's coding standards recommend raw_ostream and MemoryBuffer for
reading and writing text.
This has the side effect of allowing clang to compile more of Support
and TableGen in the Microsoft C++ ABI.
llvm-svn: 187826
This makes option aliases more powerful by enabling them to
pass along arguments to the option they're aliasing.
For example, if we have a joined option "-foo=", we can now
specify a flag option "-bar" to be an alias of that, with the
argument "baz".
This is especially useful for the cl.exe compatible clang driver,
where many options are aliases. For example, this patch enables
us to alias "/Ox" to "-O3" (-O is a joined option), and "/WX" to
"-Werror" (again, -W is a joined option).
Differential Revision: http://llvm-reviews.chandlerc.com/D1245
llvm-svn: 187537
For two intrinsics 'llvm.nvvm.texsurf.handle' and 'llvm.nvvm.texsurf.handle.internal',
TableGen was emitting matching code like:
if (Name.startswith("llvm.nvvm.texsurf.handle")) ...
if (Name.startswith("llvm.nvvm.texsurf.handle.internal")) ...
We can never match "llvm.nvvm.texsurf.handle.internal" here because it will
always be erroneously matched by the first condition.
The fix is to sort the intrinsic names and emit them in reverse order.
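With the reversed emission order, the generated checks look roughly like this
(a sketch of the intent, not the literal emitted code):
if (Name.startswith("llvm.nvvm.texsurf.handle.internal")) ...  // checked first
else if (Name.startswith("llvm.nvvm.texsurf.handle")) ...      // no longer shadows the longer name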
llvm-svn: 187119
This removes the need to store the asm variant in each row of the single table that existed before. Shaves ~16K off the size of X86AsmParser.o.
llvm-svn: 187026
algorithm when assigning EnumValues to the synthesized registers.
The current algorithm, LessRecord, uses the StringRef compare_numeric
function. This function compares strings, while handling embedded numbers.
For example, the R600 backend registers are sorted as follows:
T1
T1_W
T1_X
T1_XYZW
T1_Y
T1_Z
T2
T2_W
T2_X
T2_XYZW
T2_Y
T2_Z
In this example, the 'scaling factor' is dEnum/dN = 6 because T0, T1, T2
have an EnumValue offset of 6 from one another. However, in other parts
of the register bank, the scaling factors are different:
dEnum/dN = 5:
KC0_128_W
KC0_128_X
KC0_128_XYZW
KC0_128_Y
KC0_128_Z
KC0_129_W
KC0_129_X
KC0_129_XYZW
KC0_129_Y
KC0_129_Z
The diff lists do not work correctly because different kinds of registers have
different 'scaling factors'. This new algorithm, LessRecordRegister, tries to
enforce a scaling factor of 1. For example, the registers are now sorted as
follows:
T1
T2
T3
...
T0_W
T1_W
T2_W
...
T0_X
T1_X
T2_X
...
KC0_128_W
KC0_129_W
KC0_130_W
...
For Mips and R600 I see a 19% and 6% reduction in size, respectively. I
did see a few small regressions, but the differences were on the order of a
few bytes (e.g., AArch64 was 16 bytes). I suspect there will be even
greater wins for targets with larger register files.
Patch reviewed by Jakob.
rdar://14006013
llvm-svn: 185094
This patch modifies TableGen to generate a function in
${TARGET}GenInstrInfo.inc called getNamedOperandIdx(), which can be used
to look up indices for operands based on their names.
In order to activate this feature for an instruction, you must set the
UseNamedOperandTable bit.
For example, if you have an instruction like:
def ADD : TargetInstr <(outs GPR:$dst), (ins GPR:$src0, GPR:$src1)>;
You can look up the operand indices using the new function, like this:
Target::getNamedOperandIdx(Target::ADD, Target::OpName::dst) => 0
Target::getNamedOperandIdx(Target::ADD, Target::OpName::src0) => 1
Target::getNamedOperandIdx(Target::ADD, Target::OpName::src1) => 2
The operand names are case sensitive, so $dst and $DST are considered
different operands.
This change is useful for R600 which has instructions with a large number
of operands, many of which model single bit instruction configuration
values. These configuration bits are common across most instructions,
but may have a different operand index depending on the instruction type.
It is useful to have a convenient way to look up the operand indices,
so these bits can be generically set on any instruction.
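For completeness, a minimal sketch of opting in, assuming the bit is set on the
instruction definition itself (reusing the hypothetical ADD above):
let UseNamedOperandTable = 1 in
def ADD : TargetInstr<(outs GPR:$dst), (ins GPR:$src0, GPR:$src1)>;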
llvm-svn: 184879
For decoding, keep the current behavior of always decoding these as their REP
versions. In the future, this could be improved to recognize the cases where
these behave as XACQUIRE and XRELEASE and decode them as such.
llvm-svn: 184207
Replace the ill-defined MinLatency and ILPWindow properties with
straightforward buffer sizes:
MCSchedModel::MicroOpBufferSize
MCProcResourceDesc::BufferSize
These can be used to more precisely model instruction execution if desired.
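A hedged TableGen-side sketch of how these might be set (the def names and the
numbers are illustrative only):
def MyCPUModel : SchedMachineModel {
  let MicroOpBufferSize = 32;   // size of the out-of-order micro-op buffer
}
def MyALU : ProcResource<2> {
  let BufferSize = 8;           // per-resource buffer, overriding the model default
}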
Disabled some misched tests temporarily. They'll be reenabled in a few commits.
llvm-svn: 184032
The element passed to push_back is not copied before the vector reallocates.
The client needs to copy the element first before passing it to push_back.
No test case; this will be tested by a follow-up Swift scheduler model change
(it segfaults without this change).
llvm-svn: 183459
Don't output data if we are supposed to ignore the record.
Reapply of 183255, I don't think this was causing the tablegen segfault on linux
testers.
llvm-svn: 183311
This fixes some of the ridiculously complex code for optimizing the
machine model tables that are shared among all processors of a given
target. A9 and Swift both use the "special" feature that maps old
itinerary classes to new machine model defs. They map different
overlapping subsets of instructions, which wasn't handled correctly.
llvm-svn: 183302
NOTE: If this broke your out-of-tree backend, in *RegisterInfo.td, change
the instances of SubRegIndex that have a comps template arg to use the
ComposedSubRegIndex class instead.
In TableGen land, this adds Size and Offset attributes to SubRegIndex,
and the ComposedSubRegIndex class, for which the Size and Offset are
computed by TableGen. This also adds an accessor in MCRegisterInfo, and
Size/Offsets for the X86 and ARM subreg indices.
llvm-svn: 183020
The size reduction in the RegDiffLists are rather dramatic. Here are a few
size differences for MCTargetDesc.o files (before and after) in bytes:
R600 - 36160B - 11184B - 69% reduction
ARM - 28480B - 8368B - 71% reduction
Mips - 816B - 576B - 29% reduction
One side effect of dynamically computing the aliases is that the iterator does
not guarantee that the entries are ordered or that duplicates have been removed.
The documentation implies this is a safe assumption and I found no clients that
require these attributes (i.e., strict ordering and uniqueness).
My local LNT tester results showed no execution-time failures or significant
compile-time regressions (i.e., beyond what I would consider noise) for -O0g,
-O2 and -O3 runs on x86_64 and i386 configurations.
rdar://12906217
llvm-svn: 182783
Currently the fast-isel table generator recognizes registers, register
classes, and immediates for source pattern operands. ValueType
operands are not recognized. This is not a problem for existing
targets with fast-isel support, but will not work for targets like
PowerPC and SPARC that use types in source patterns.
The proposed patch allows ValueType operands and treats them in the
same manner as register classes. There is no convenient way to map
from a ValueType to a register class, but there's no need to do so.
The table generator already requires that all types in the source
pattern be identical, and we know the register class of the output
operand already. So we just assign that register class to any
ValueType operands we encounter.
No functional effect on existing targets. Testing deferred until the
PowerPC target implements fast-isel.
llvm-svn: 182512
This lane mask provides information about which register lanes
completely cover super-registers. See the block comment before
getCoveringLanes().
llvm-svn: 182034
The problem this patch addresses is the handling of register tie
constraints in AsmMatcherEmitter, where one operand is tied to a
sub-operand of another operand. The typical scenario for this to
happen is the tie between the "write-back" register of a pre-inc
instruction, and the base register sub-operand of the memory address
operand of that instruction.
The current AsmMatcherEmitter code attempts to handle tied
operands by emitting the operand as usual first, and emitting
a CVT_Tied node when handling the second (tied) operand. However,
this really only works correctly if the tied operand does not
have sub-operands (and isn't a sub-operand itself); otherwise,
a wrong MC operand list is generated.
In discussions with Jim Grosbach, it turned out that the MC operand
list really ought not to contain tied operands in the first place;
instead, it ought to consist of exactly those operands that are
named in the AsmString. However, getting there requires significant
rework of (some) targets.
This patch fixes the immediate problem, and at the same time makes
one (small) step in the direction of the long-term solution, by
implementing two changes:
1. Restricts the AsmMatcherEmitter handling of tied operands to
apply solely to simple operands (not complex operands or
sub-operands of such).
This means that at least we don't get silently corrupt MC operand
lists as output. However, if we do have tied sub-operands, they
would now no longer be handled at all, except for:
2. If we have an operand that does not occur in the AsmString,
and also isn't handled as tied operand, simply emit a dummy
MC operand (constant 0).
This works as long as target code never attempts to access
MC operands that do not occur in the AsmString (and are
not tied simple operands), which happens to be the case for
all targets where this situation can occur (ARM and PowerPC).
[ Note that this change means that many of the ARM custom
converters are now superfluous, since they implement the
same "hack" now performed already by common code. ]
Longer term, we ought to fix targets to never access *any*
MC operand that does not occur in the AsmString (including
tied simple operands), and then finally completely remove
all such operands from the MC operand list.
Patch approved by Jim Grosbach.
llvm-svn: 180677
Super-resources and resource groups are two ways of expressing
overlapping sets of processor resources. Now we generate table entries
the same way for both so the scheduler never needs to explicitly check
for super-resources.
llvm-svn: 180162
variant/dialect. Addresses a FIXME in the emitMnemonicAliases function.
Use and test case to come shortly.
rdar://13688439 and part of PR13340.
llvm-svn: 179804
As these two instructions in the AVX extension are privileged instructions for
special purposes, they are only expected to be used in inline assembly.
llvm-svn: 179266
A9 uses itinerary classes, Swift uses RW lists. This tripped some
verification when we're expanding variants. I had to refine the
verification a bit.
llvm-svn: 178357
This syntax is now preferred:
def : Pat<(subc i32:$b, i32:$c), (SUBCCrr $b, $c)>;
There is no reason to repeat the types in the output pattern.
llvm-svn: 177844
This makes it possible to define instruction patterns like this:
def LDri : F3_2<3, 0b000000,
(outs IntRegs:$dst), (ins MEMri:$addr),
"ld [$addr], $dst",
[(set i32:$dst, (load ADDRri:$addr))]>;
llvm-svn: 177834
Just like register classes, value types can be used in two ways in
patterns:
(sext_inreg i32:$src, i16)
In a named leaf node like i32:$src, the value type simply provides the
type of the node directly. This simplifies type inference a lot compared
to the current practice of specifying types indirectly with register
classes.
As an unnamed leaf node, like i16 above, the value type represents
itself as an MVT::Other immediate.
llvm-svn: 177828
A register class can appear as a leaf TreePatternNode with and without a
name:
(COPY_TO_REGCLASS GPR:$src, F8RC)
In a named leaf node like GPR:$src, the register class provides type
information for the named variable represented by the node. The TypeSet
for such a node is the set of value types that the register class can
represent.
In an unnamed leaf node like F8RC above, the register class represents
itself as a kind of immediate. Such a node has the type MVT::i32;
we'll never create a virtual register representing it.
This change makes it possible to remove the special handling of
COPY_TO_REGCLASS in CodeGenDAGPatterns.cpp.
llvm-svn: 177825
To use this in conjunction with exuberant ctags to generate a single
combined tags file, run tblgen first and then
$ ctags --append [...]
Since some identifiers have corresponding definitions in C++ code,
it can be useful (if using vim) to also use cscope, and
:set cscopetagorder=1
so that
:tag X
will preferentially select the tablegen symbol, while
:cscope find g X
will always find the C++ symbol.
Patch by Kevin Schoedel!
(a couple small formatting changes courtesy of clang-format)
llvm-svn: 177682
of complex instruction operands (e.g. address modes).
Currently, if a Pat pattern creates an instruction that has a complex
operand (i.e. one that consists of multiple sub-operands at the MI
level), this operand must match a ComplexPattern DAG pattern with the
correct number of output operands.
This commit extends TableGen to alternatively allow matching a complex
operand against multiple separate operands at the DAG level.
This allows using Pat patterns to match pre-increment nodes like
pre_store (which must have separate operands at the DAG level) onto
an instruction pattern that uses a multi-operand memory operand,
like the following example on PowerPC (will be committed as a
follow-on patch):
def STWU : DForm_1<37, (outs ptr_rc:$ea_res), (ins GPRC:$rS, memri:$dst),
"stwu $rS, $dst", LdStStoreUpd, []>,
RegConstraint<"$dst.reg = $ea_res">, NoEncode<"$ea_res">;
def : Pat<(pre_store GPRC:$rS, ptr_rc:$ptrreg, iaddroff:$ptroff),
(STWU GPRC:$rS, iaddroff:$ptroff, ptr_rc:$ptrreg)>;
Here, the pair of "ptroff" and "ptrreg" operands is matched onto the
complex operand "dst" of class "memri" in the "STWU" instruction.
Approved by Jakob Stoklund Olesen.
llvm-svn: 177428
Properly handle cases where a group of instructions have different
SchedRW lists with the same itinerary class.
This was supposed to work, but I left in an early break.
llvm-svn: 177317
We always supported a mixture of the old itinerary model and new
per-operand model, but it required a level of indirection to map
itinerary classes to SchedRW lists. This was done for ARM A9.
Now we want to define x86 SchedRW lists, with the goal of removing its
itinerary classes, but still support the itineraries in the meantime.
When I originally developed the model, Atom did not have
itineraries, so there was no reason to expect this requirement.
llvm-svn: 177226
Don't require instructions to inherit Sched<...>. Sometimes it is more
convenient to say:
let SchedRW = ... in {
...
}
Which is now possible.
llvm-svn: 177199
This allows arbitrary groups of processor resources. Using something in
a subset automatically counts against the superset. Currently, this
only works if the superset is also a ProcResGroup as opposed to a
SuperUnit.
This allows SandyBridge to be expressed naturally, which will be
checked in shortly.
def SBPort01 : ProcResGroup<[SBPort0, SBPort1]>;
def SBPort15 : ProcResGroup<[SBPort1, SBPort5]>;
def SBPort23 : ProcResGroup<[SBPort2, SBPort3]>;
def SBPort015 : ProcResGroup<[SBPort0, SBPort1, SBPort5]>;
llvm-svn: 177112
Fix the way resources are counted. I'm taking some time to clean up the
way MachineScheduler handles in-order machine resources. Eventually
we'll need more PPC/Atom test cases in tree.
llvm-svn: 176390
For example, ARM has several instructions with a literal '#0' immediate in the syntax
that's not represented as an actual operand. The asm matcher expects a token
operand, but the parser will have created an immediate operand. This is currently
handled by dedicated per-instruction C++ munging of the ParsedAsmOperand list, but
will be better handled by this hook.
llvm-svn: 174487
and enables the instruction printer to print aliased
instructions.
Due to the usage of RegisterOperands, a change in common
code (utils/TableGen/AsmWriterEmitter.cpp) is required
to get the correct register value if it is a RegisterOperand.
Contributor: Vladimir Medic
llvm-svn: 174358
Drive by fix. I noticed some missing logic that might bite future
users. This shouldn't affect the final output on currently modeled
targets.
llvm-svn: 174142
This patch adds support for AArch64 (ARM's 64-bit architecture) to
LLVM in the "experimental" category. Currently, it won't be built
unless requested explicitly.
This initial commit should have support for:
+ Assembly of all scalar (i.e. non-NEON, non-Crypto) instructions
(except the late addition CRC instructions).
+ CodeGen features required for C++03 and C99.
+ Compilation for the "small" memory model: code+static data <
4GB.
+ Absolute and position-independent code.
+ GNU-style (i.e. "__thread") TLS.
+ Debugging information.
The principal omission, currently, is performance tuning.
This patch excludes the NEON support also reviewed due to an outbreak of
batshit insanity in our legal department. That will be committed soon bringing
the changes to precisely what has been approved.
Further reviews would be gratefully received.
llvm-svn: 174054
In the future, AttributeWithIndex won't be used anymore. Besides, it exposes the
internals of the AttributeSet to outside users, which isn't goodness.
llvm-svn: 173606
// FIXME: Constraints are hard coded to 'm', but we need an 'r'
// constraint for addressof. This needs to be cleaned up!
Test cases are already in place. Specifically,
clang/test/CodeGen/ms-inline-asm.c t15(), t16(), and t24().
llvm-svn: 172569
The purpose of this patch is to allow PredicateMethods to be set to something
like "isUImm<8>", calling a C++ template method to reduce code duplication. For
this to work, the PredicateMethod must be mangled into a valid C++ identifier
for insertion into an enum.
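A small sketch of the intended usage, with made-up names: the .td side sets the
predicate to a template call, and the target's operand class provides a matching
C++ member such as template<int Bits> bool isUImm() const.
def UImm8Operand : AsmOperandClass {
  let Name = "UImm8";
  let PredicateMethod = "isUImm<8>";  // mangled into a valid identifier for the enum
}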
llvm-svn: 172073
When processing possible aliases, TableGen assumes that if an operand *can* be
an immediate, then it always *will* be. This is incorrect for the AArch64
backend. This patch inserts a check in the generated code to make sure isImm is
true first.
llvm-svn: 171972
This was an experimental option, but needs to be defined
per-target. e.g. PPC A2 needs to aggressively hide latency.
I converted some in-order scheduling tests to A2. Hal is working on
more test cases.
llvm-svn: 171946
MC disassembler clients (LLDB) are interested in querying if an
instruction may affect control flow other than by virtue of being
an explicit branch instruction. For example, instructions which
write directly to the PC on some architectures.
llvm-svn: 170610
beyond array bounds.
No test case since I cannot reproduce an ICE with this bug. According
to Carlos -- the bug reporter -- a segfault occurs only when LLVM is
compiled with a specific version of GCC.
llvm-svn: 169783
This is much simpler to reason about, more efficient, and
fixes some corner cases involving implicit super-register defs.
Fixed rdar://12797931.
llvm-svn: 169425
At build-time register pressure was always computed in terms of
register units. But the compile-time API was expressed in terms of
register classes because it was intended for virtual registers (and
physical register units weren't yet used anywhere in codegen).
Now that the codegen uses physreg units consistently, prepare for
tracking register pressure also in terms of live units, not live
registers.
llvm-svn: 169360
When code deletes the context, the AttributeImpls that the AttrListPtr points to
are now invalid. Therefore, instead of keeping a separate managed static for the
AttrListPtrs that's reference counted, move it into the LLVMContext and delete
it when deleting the AttributeImpls.
llvm-svn: 168354
This patch replaces the hard coded GPR pair [R0, R1] of
Intrinsic:arm_ldrexd and [R2, R3] of Intrinsic:arm_strexd with
even/odd GPRPair reg class.
Similar to the lowering of atomic_64 operation.
llvm-svn: 168207
- Add RTM code generation support through 3 X86 intrinsics:
xbegin()/xend() to start/end a transaction region, and xabort() to abort a
transaction region
llvm-svn: 167573
"../llvm-git/utils/TableGen/CodeGenSchedule.cpp", line 1594.12: 1540-0218 (S) The call does not match any parameter list for "operator+".
"../llvm-git/include/llvm/ADT/STLExtras.h", line 130.1: 1540-1283 (I) "template <class _Iterator, class Func> llvm::operator+(mapped_iterator<_Iterator,Func>::difference_type, const mapped_iterator<_Iterator,Func> &)" is not a viable candidate.
Patch by Kai.
llvm-svn: 167311
Explicitly allow composition of null sub-register indices, and handle
that common case in an inlinable stub.
Use a compressed table implementation instead of the previous nested
switches which generated pretty bad code.
llvm-svn: 167190
Most places can use PrintFatalError as the unwinding mechanism was not
used for anything other than printing the error. The single exception
was CodeGenDAGPatterns.cpp, where intermediate errors during type
resolution were ignored to simplify incremental platform development.
This use is replaced by an error flag in TreePattern and by bailing out earlier
in various places if it is set.
llvm-svn: 166712
Relationship maps are represented as InstrMapping records which are parsed by
TableGen and the information is used to construct mapping tables to represent
appropriate relations between instructions. These tables are emitted into
XXXGenInstrInfo.inc file along with the functions to query them.
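A hedged sketch of what such a relationship map can look like (the filter class,
field names' values, and the record name are placeholders modeled on the
documented usage):
def getMyPredOpcode : InstrMapping {
  let FilterClass = "MyPredRel";         // instructions participating in this map
  let RowFields = ["BaseOpcode"];        // fields identifying a row of related instructions
  let ColFields = ["PredSense"];         // field distinguishing the columns
  let KeyCol = ["none"];                 // the key instruction of each row
  let ValueCols = [["true"], ["false"]]; // columns returned by the generated query
}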
Patch by Jyotsna Verma <jverma@codeaurora.org>.
llvm-svn: 166685
Convert the internal representation of the Attributes class into a pointer to an
opaque object that's uniqued by and stored in the LLVMContext object. The
Attributes class then becomes a thin wrapper around this opaque
object. Eventually, the internal representation will be expanded to include
attributes that represent code generation options, etc.
llvm-svn: 165917
Some of these dyn_cast<>'s would be better phrased as isa<> or cast<>.
That will happen in a future patch.
There are also two dyn_cast_or_null<>'s slipped in instead of
dyn_cast<>'s, since they were causing crashes with just dyn_cast<>.
llvm-svn: 165646
This is a mechanical change of dynamic_cast<> to dyn_cast<>. A number of
these uses are actually more like isa<> or cast<>, and will be changed
to the semantically appropriate one in a future patch.
llvm-svn: 165291
This allows the processor-specific machine model to override selected
base opcodes without any fanciness.
e.g. InstRW<[CoreXWriteVANDP], (instregex "VANDP")>.
llvm-svn: 165180
A processor can now arbitrarily alias one SchedWrite onto
another. Only the SchedAlias definition need be within the processor
model. The aliased SchedWrite may be a SchedVariant, WriteSequence, or
transitively refer to another alias.
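As a sketch (with made-up SchedWrite names), an alias inside a processor model
might be written as:
def : SchedAlias<WriteIMul, MyCPUWriteIMul>;  // resolve WriteIMul to this CPU's definition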
llvm-svn: 165179
map constraints and MCInst operands to inline asm operands. This replaces the
getMCInstOperandNum() function.
The logic to determine the constraints is not in place, so we still default to
a register constraint (i.e., "r"). Also, we no longer build the MCInst but
rather return just the opcode to get the MCInstrDesc.
llvm-svn: 164979
This is a generally useful utility; there's no reason to have it hidden
in CodeGenDAGPatterns.cpp.
Also, rename it to fit the other comparators in Record.h
Review by Jakob.
llvm-svn: 164189
Now where we used to call ReInitMCSubtargetInfo, we actually recompute
the same information as InitMCSubtargetInfo instead of only setting
the feature bits.
llvm-svn: 164105
Map the CodeGenSchedule object model onto data tables. The structure
of the data tables is defined in MC, so for convenience we include
MCSchedule.h. The alternative is maintaining a redundant copy of the
table structure definitions. Mapping the object model onto data tables
is sufficiently complicated that it should not be interleaved with
emitting source code. This avoids a major problem with the backend for
itinerary generation.
llvm-svn: 164059
Keep GCC's warnings happy. It can't reason out that the state machine won't
ever hit the potentially uninitialized use in OPC_FilterValue.
llvm-svn: 164041
48-bit if necessary, in order to reduce the generated code size.
We have 900 cases not covered by OpcodeInfo in ATT AsmWriter and more in Intel
AsmWriter and ARM AsmWriter.
This patch reduced the clang Release build size by 50k, running on a Mac Pro.
llvm-svn: 163814
* wrap code blocks in \code ... \endcode;
* refer to parameter names in paragraphs correctly (\arg is not what most
people want -- it starts a new paragraph).
llvm-svn: 163790
Sub-register lane masks are bitmasks that can be used to determine if
two sub-registers of a virtual register will overlap. For example, ARM's
ssub0 and ssub1 sub-register indices don't overlap each other, but both
overlap dsub0 and qsub0.
The lane masks will be accurate on most targets, but on targets that use
sub-register indexes in an irregular way, the masks may conservatively
report that two sub-register indices overlap when the eventually
allocated physregs don't.
Irregular register banks also mean that the bits in a lane mask can't be
mapped onto register units, but the concept is similar.
llvm-svn: 163630
Preserve the Composites map in the CodeGenSubRegIndex class so it can be
used to determine which sub-register indices can actually be composed.
llvm-svn: 163629
Apparently, NumSubRegIndices was completely unused before. Adjust it by
one to include the null subreg index, just like getNumRegs() includes
the null register.
llvm-svn: 163628
- This patch is inspired by the failure of the following code snippet
which is used to convert enumerable values into encoding bits to
improve the readability of td files.
class S<int s> {
bits<2> V = !if(!eq(s, 8), {0, 0},
!if(!eq(s, 16), {0, 1},
!if(!eq(s, 32), {1, 0},
!if(!eq(s, 64), {1, 1}, {?, ?}))));
}
Later, PR8330 was found to report a closely related (though not identical)
issue with bit/bits values.
- Instead of resolving bit/bits values separately through
resolveBitReference(), this patch adds getBit() for all Inits and
resolves a bit value by resolving, then reading, the specified bit. This
unifies the resolving of bit with other values and removes redundant
logic for resolving bit only. In addition,
BitsInit::resolveReferences() is optimized to take advantage of this
organization by resolving VarBitInit's variable reference first and
then getting bits from it.
- The type inference in the '!if' operator is revised to support possible
combinations of int and bits/bit in MHS and RHS.
- As there may be illegal assignments from an integer value to a bit (say,
assigning 2 to a bit), and we only check this during instantiation in some
cases, e.g.
bit V = !if(!eq(x, 17), 0, 2);
A verbose diagnostic message is generated when an invalid value is
resolved, to help locate the error.
- PR8330 is fixed as well.
llvm-svn: 163360
MatchInstructionImpl() function.
These values are used by the ConvertToMCInst() function to index into the
ConversionTable. The values are also needed to call the GetMCInstOperandNum()
function.
llvm-svn: 163101
AsmMatcherEmitter. This function maps inline assembly operands to MCInst
operands.
For example, '__asm mov j, eax' is represented by the follow MCInst:
<MCInst 1460 <MCOperand Reg:0> <MCOperand Imm:1> <MCOperand Reg:0>
<MCOperand Expr:(j)> <MCOperand Reg:0> <MCOperand Reg:43>>
The first 5 MCInst operands are a result of j matching as a memory operand
consisting of a BaseReg (Reg:0), MemScale (Imm:1), MemIndexReg(Reg:0),
Expr (Expr:(j)), and a MemSegReg (Reg:0). The 6th MCInst operand represents
the eax register (Reg:43).
This translation is necessary to determine the Input and Output Exprs. If a
single asm operand maps to multiple MCInst operands, the index of the first
MCInst operand is returned. Ideally, it would return the operand we really
care about (i.e., the Expr:(j) in this case), but I haven't found an easy way
of doing this yet.
llvm-svn: 162920
Adding arbitrary records to ARM.td would break
basic-arm-instructions.s because selection of nop vs mov r0,r0 was
ambiguous (this will be tested by a subsequent addition to ARM.td).
An imperfect but sensible fix is to give precedence to match rules
that have more constraints.
llvm-svn: 162824
Previously, instructions without a primary pattern wouldn't get their
properties inferred. Now, we use all single-instruction patterns for
inference, including 'def : Pat<>' instances.
This causes a lot of instruction flags to change.
- Many instructions no longer have the UnmodeledSideEffects flag because
their flags are now inferred from a pattern.
- Instructions with intrinsics will get a mayStore flag if they already
have UnmodeledSideEffects and a mayLoad flag if they already have
mayStore. This is because intrinsic properties are linear.
- Instructions with atomic_load patterns get a mayStore flag because
atomic loads can't be reordered. The correct workaround is to create
pseudo-instructions instead of using normal loads. PR13693.
llvm-svn: 162614
Instructions are now only marked as variadic if they use variable_ops in
their ins list.
A variadic SDNode is typically used for call nodes that have the call
arguments as operands.
A variadic MachineInstr can actually encode a variable number of
operands, for example ARM's stm/ldm instructions. A call instruction
does not have to be variadic. The call argument registers are added as
implicit operands.
This change removes the MCID::Variadic flags from most call and return
instructions, allowing us to better verify their operands.
llvm-svn: 162599
It is now allowed to explicitly set hasSideEffects, mayStore, and
mayLoad on instructions with patterns.
Verify that the patterns are consistent with the explicit flags.
llvm-svn: 162569
Emit TableGen errors if guessInstructionProperties is 0 and
instruction properties can't be inferred from patterns.
Allow explicit instruction properties even when they can be inferred.
This patch doesn't change the TableGen output. Redundant properties
are not yet verified because the tree has errors.
llvm-svn: 162516
Currently, TableGen just guesses instruction properties when it can't
infer them from patterns.
This adds a guessInstructionProperties flag to the instruction set
definition that will be used to disable guessing. The flag is intended
as a migration aid. It will be removed again when no more targets need
their properties guessed.
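A minimal sketch of the opt-out, assuming the flag sits on the target's
InstrInfo definition (the def name is a placeholder):
def MyTargetInstrInfo : InstrInfo {
  let guessInstructionProperties = 0;  // error instead of guessing missing properties
}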
llvm-svn: 162460
When reporting an error for a defm, we would previously only report the
location of the outer defm, which is not always where the error is.
Now we also print the location of the expanded multiclass defs:
lib/Target/X86/X86InstrSSE.td:2902:12: error: foo
defm ADD : basic_sse12_fp_binop_s<0x58, "add", fadd, SSE_ALU_ITINS_S>,
^
lib/Target/X86/X86InstrSSE.td:2801:11: note: instantiated from multiclass
defm PD : sse12_fp_packed<opc, !strconcat(OpcodeStr, "pd"), OpNode, VR128,
^
lib/Target/X86/X86InstrSSE.td:194:5: note: instantiated from multiclass
def rm : PI<opc, MRMSrcMem, (outs RC:$dst), (ins RC:$src1, x86memop:$src2),
^
llvm-svn: 162409
No change in interface or functionality. Only under-the-hood
details of the generated function change.
The X86 assembly parser is reduced in size by over 15% and ARM by
over 10%.
No performance change by my measurements.
llvm-svn: 162337
Select instructions pick one of two virtual registers based on a
condition, like x86 cmov. On targets like ARM that support predication,
selects can sometimes be eliminated by predicating the instruction
defining one of the operands.
Teach PeepholeOptimizer to recognize select instructions, and ask the
target to optimize them.
llvm-svn: 162059
TableGen sometimes synthesizes missing sub-register indexes. Emit these
indexes as enumerators in the target namespace along with the
user-defined ones.
Also take this opportunity to stop creating new Record objects for
synthetic indexes.
llvm-svn: 161964
Refactor the TableGen'erated fixed length disassembler to use a
table-driven state machine rather than a massive set of nested
switch() statements.
As a result, the ARM Disassembler (ARMDisassembler.cpp) builds much more
quickly and generates a smaller end result. For a Release+Asserts build on
a 16GB 3.4GHz i7 iMac w/ SSD:
Time to compile at -O2 (averaged w/ hot caches):
Previous: 35.5s
New: 8.9s
TEXT size:
Previous: 447,251
New: 297,661
Builds in 25% of the time previously required and generates code 66% of
the size.
Execution time of the disassembler is only slightly slower (7% disassembling
10 million ARM instructions, 19.6s vs 21.0s). The new implementation has
not yet been tuned, however, so the performance should almost certainly
be recoverable should it become a concern.
llvm-svn: 161888
These tables were indexed by [register][subreg index], which made them
very large and sparse.
Replace them with lists of sub-register indexes that match the existing
lists of sub-registers. MCRI::getSubReg() becomes a very short linear
search, like getSubRegIndex() already was.
llvm-svn: 160843
Now that the weird X86 sub_ss and sub_sd sub-register indexes are gone,
there is no longer a need for the CompositeIndices construct in .td
files. Sub-register index composition can be specified on the
SubRegIndex itself using the ComposedOf field.
Also enforce unique names for sub-registers in TableGen. The same
sub-register cannot be available with multiple sub-register indexes.
llvm-svn: 160842
A standalone pattern defined in a multiclass expansion should handle
null_frag references just like patterns on instructions. Follow-up to
r160333.
llvm-svn: 160384
Define a 'null_frag' SDPatternOperator node, which if referenced in an
instruction Pattern, results in the pattern being collapsed to be as-if
'[]' had been specified instead. This allows supporting a multiclass
definition where some instantiations have ISel patterns associated and
others do not.
For example,
multiclass myMulti<RegisterClass rc, SDPatternOperator OpNode = null_frag> {
def _x : myI<(outs rc:), (ins rc:), []>;
def _r : myI<(outs rc:), (ins rc:), [(set rc:, (OpNode rc:))]>;
}
defm foo : myMulti<GRa, not>;
defm bar : myMulti<GRb>;
llvm-svn: 160333
Make sure the tblgen'erated asm matcher correctly returns numoperands+1
as the ErrorInfo when the problem was that there weren't enough operands
specified.
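A hedged sketch of how a target's MatchAndEmitInstruction can act on this (the
variable names follow the common pattern in target asm parsers and are not
taken verbatim from any particular target):
  case Match_InvalidOperand: {
    if (ErrorInfo != ~0U && ErrorInfo >= Operands.size())
      return Error(IDLoc, "too few operands for instruction");
    // otherwise ErrorInfo indexes the offending operand in Operands
    return Error(IDLoc, "invalid operand for instruction");
  }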
rdar://9142751
llvm-svn: 160144
subtarget CPU descriptions and support new features of
MachineScheduler.
MachineModel has three categories of data:
1) Basic properties for coarse grained instruction cost model.
2) Scheduler Read/Write resources for simple per-opcode and operand cost model (TBD).
3) Instruction itineraties for detailed per-cycle reservation tables.
These will all live side-by-side. Any subtarget can use any
combination of them. Instruction itineraries will not change in the
near term. In the long run, I expect them to only be relevant for
in-order VLIW machines that have complex constraints and require a
precise scheduling/bundling model. Once itineraries are only actively
used by VLIW-ish targets, they could be replaced by something more
appropriate for those targets.
This tablegen backend rewrite sets things up for introducing
MachineModel type #2: per opcode/operand cost model.
llvm-svn: 159891
The TargetInstrInfo::getNumMicroOps API does not change, but soon it
will be used by MachineScheduler. Now each subtarget can specify the
number of micro-ops per itinerary class. For ARM, this is currently
always dynamic (-1), because it is used for load/store multiple which
depends on the number of register operands.
Zero is now a valid number of micro-ops. This can be used for
nop pseudo-instructions or instructions that the hardware can squash
during dispatch.
llvm-svn: 159406
When generating selection tables for Pat instances, TableGen relied on
an output Instruction's Pattern field being set to infer whether a
chain should be added.
This patch adds additional logic to check various flag fields so that
correct code can be generated even if Pattern is unset.
llvm-svn: 159217
"Invalid operand" may be a completely correct diagnostic, but it's often
insufficiently specific to really help identify and fix the problem in
assembly source. Allow a target to specify a more-specific diagnostic kind
for each AsmOperandClass derived definition and use that to provide
more detailed diagnostics when an operand of that class resulted in a
match failure.
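A small sketch of the .td side, with a made-up operand class (the generated
matcher is assumed to surface this as a Match_<DiagnosticType> failure kind):
def MyUImm8AsmOperand : AsmOperandClass {
  let Name = "UImm8";
  let DiagnosticType = "UImm8";  // lets the target report a specific message for this operand kind
}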
rdar://8987109
llvm-svn: 159050
Original commit message:
Allow up to 64 functional units per processor itinerary.
This patch changes the type used to hold the FU bitset from unsigned to uint64_t.
This will be needed for some upcoming PowerPC itineraries.
llvm-svn: 159027
This makes it explicit when ScoreboardHazardRecognizer will be used.
"GenericItineraries" would only make sense if it contained real
itinerary values and still required ScoreboardHazardRecognizer.
llvm-svn: 158963
This patch changes the type used to hold the FU bitset from unsigned to uint64_t.
This will be needed for some upcoming PowerPC itineraries.
llvm-svn: 158679
When returning a 'cannot match due to missing CPU features' error code,
if there are multiple potential matches with different feature sets,
return the smallest set of missing features from the alternatives as
that's most likely to be the one that's desired.
llvm-svn: 158673
LLVM is now -Wunused-private-field clean except for
- lib/MC/MCDisassembler/Disassembler.h. Not sure why it keeps all those inaccessible fields.
- gtest.
llvm-svn: 158096
There are some that I didn't remove this round because they looked like
obvious stubs. There are dead variables in gtest too; they should be
fixed upstream.
llvm-svn: 158090
This allows a subtarget to explicitly specify the issue width and
other properties without providing pipeline stage details for every
instruction.
llvm-svn: 157979
Each register unit has one or two root registers. The full set of
registers containing a given register unit can be computed as the union
of the root registers and their super-registers.
Provide an MCRegUnitRootIterator class to enumerate the roots.
llvm-svn: 157753
Register units are already used internally in TableGen to compute
register pressure sets and overlapping registers. This patch makes them
available to the code generators.
The register unit lists are differentially encoded so they can be reused
for many related registers. This keeps the total size of the lists below
200 bytes for most targets. ARM has the largest table at 560 bytes.
Add an MCRegUnitIterator for traversing the register unit lists. It
provides an abstract interface so the representation can be changed in
the future without changing all clients.
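A short usage sketch (Reg, MCRI, and handleUnit are placeholder names; the
iterator is assumed to follow the usual isValid()/operator++ pattern):
  // Visit every register unit of the physical register Reg.
  for (MCRegUnitIterator Units(Reg, MCRI); Units.isValid(); ++Units)
    handleUnit(*Units);   // *Units is the register unit number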
llvm-svn: 157650
This required light surgery on the assembler and disassembler
because the instructions use an uncommon encoding. They are
the only two instructions in x86 that use register operands
and two immediates.
llvm-svn: 157634
making it stronger and more sane.
Delete the code from tblgen that produced the old code.
Besides being a path forward in intrinsic sanity, this also eliminates a bunch of
machine generated code that was compiled into Function.o
llvm-svn: 157545
separate side table, using the handy SequenceToOffsetTable class. This encodes all
these weird things into another 256 bytes, allowing all intrinsics to be encoded this way.
llvm-svn: 156995
TableGen already computes register units as the basic unit of
interference. We can use that to compute the set of overlapping
registers.
This means that we can easily compute overlap sets for one register at a
time. There is no benefit to computing all registers at once.
llvm-svn: 156960
generated code (for Intrinsic::getType) into a table. This handles common cases right now,
but I plan to extend it to handle all cases and merge in type verification logic as well
in follow-on patches.
llvm-svn: 156905
Many targets always use the same bitwise encoding value for physical
registers in all (or most) instructions. Add this mapping to the
.td files and TableGen'erate the information and expose an accessor
in MCRegisterInfo.
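A hedged sketch of the .td side, assuming the field is the HWEncoding member of
the Register class (the register name is made up); the value can then be read
back at the MC level through the new MCRegisterInfo accessor:
def R5 : Register<"r5"> {
  let HWEncoding = 5;   // bit pattern used when encoding this register in instructions
}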
patch by Tom Stellard.
llvm-svn: 156829
Besides the weight, we also want to store up to two root registers per
unit. Most units will have a single root, the leaf register they
represent. Units created for ad hoc aliasing get two roots: The two
aliasing registers.
The root registers can be used to compute the set of overlapping
registers.
llvm-svn: 156792
Register units can be used to compute if two registers overlap:
A overlaps B iff units(A) intersects units(B).
With this change, the above holds true even on targets that use ad hoc
aliasing (currently only ARM). This means that register units can be
used to implement regsOverlap() more efficiently, and the register
allocator can use the concept to model interference.
When there is no ad hoc aliasing, the register units correspond to the
maximal cliques in the register overlap graph. This is optimal, no other
register unit assignment can have fewer units.
With ad hoc aliasing, weird things are possible, and we don't try too
hard to compute the maximal cliques. The current approach is always
correct, and it works very well (probably optimally) as long as the ad
hoc aliasing doesn't have cliques larger than pairs. It seems unlikely
that any target would need more.
llvm-svn: 156763
The ad hoc aliasing specified in the 'Aliases' list in .td files is
currently only used by computeOverlaps(). It will soon be needed to
build accurate register units as well, so build the undirected graph in
CodeGenRegister::buildObjectGraph() instead.
Aliasing is a symmetric relationship with only one direction specified
in the .td files. Make sure both directions are represented in
getExplicitAliases().
llvm-svn: 156762
TableGen creates new register classes and sub-register indices based on
the sub-register structure present in the register bank. So far, it has
been doing that on a per-register basis, but that is not very efficient.
This patch teaches TableGen to compute topological signatures for
registers, and use that to reduce the amount of redundant computation.
Registers get the same TopoSig if they have identical sub-register
structure.
TopoSigs are not currently exposed outside TableGen.
llvm-svn: 156761
Don't compute the SuperRegs list until the sub-register graph is
completely finished. This guarantees that the list of super-registers is
properly topologically ordered, and has no duplicates.
llvm-svn: 156629
The sub-registers explicitly listed in SubRegs in the .td files form a
tree. In a complicated register bank, it is possible to have
sub-register relationships across sub-trees. For example, the ARM NEON
double vector Q0_Q1 is a tree:
Q0_Q1 = [Q0, Q1], Q0 = [D0, D1], Q1 = [D2, D3]
But we also define the DPair register D1_D2 = [D1, D2] which is fully
contained in Q0_Q1.
This patch teaches TableGen to find such sub-register relationships, and
assign sub-register indices to them. In the example, TableGen will
create a dsub_1_dsub_2 sub-register index, and add D1_D2 as a
sub-register of Q0_Q1.
This will eventually enable the coalescer to handle copies of skewed
sub-registers.
llvm-svn: 156587
The .td files specify a tree of sub-registers. Store that tree as
ExplicitSubRegs lists in CodeGenRegister instead of extracting it from
the Record when needed.
llvm-svn: 156555
This mapping is for internal use by TableGen. It will not be exposed in
the generated files.
Unfortunately, the mapping is not completely well-defined. The X86 xmm
registers appear with multiple sub-register indices in the ymm
registers. This is because of the odd idempotent sub_sd and sub_ss
sub-register indices. I hope to be able to eliminate them entirely, so
we can require the sub-registers to form a tree.
For now, just place the canonical sub_xmm index in the mapping, and
ignore the idempotents.
llvm-svn: 156519
Previously, if an instruction definition was missing the mnemonic,
the next line would just assert(). Issue a real diagnostic instead.
llvm-svn: 156263
This is still a topological ordering such that every register class gets
a smaller enum value than its sub-classes.
Placing the smaller spill sizes first makes a difference for the
super-register class bit masks. When looking for a super-register class,
we usually want the smallest possible kind of super-register. That is
now available as the first bit set in the bit mask.
llvm-svn: 156222
This manually enumerated list of super-register classes has been
superseded by the automatically computed super-register class masks
available through SuperRegClassIterator.
llvm-svn: 156151
This is a pointer into one of the tables used by
getMatchingSuperRegClass(). It makes it possible to use a shared
implementation of that function.
llvm-svn: 156121
The RC->getSubClassMask() pointer now points to a sequence of register
class bit masks. The first bit mask is the normal sub-class mask. The
following masks are super-reg class masks used by
getMatchingSuperRegClass().
llvm-svn: 156120
Many register classes only have a few super-registers, so it is not
necessary to keep individual bit masks for all possible sub-register
indices.
llvm-svn: 156083
Some targets have no sub-registers at all. Use the TargetRegisterInfo
versions of composeSubRegIndices(), getSubClassWithSubReg(), and
getMatchingSuperRegClass() for those targets.
llvm-svn: 156075
When an instruction match is found, but the subtarget features it
requires are not available (missing floating point unit, or thumb vs arm
mode, for example), issue a diagnostic that identifies what the feature
mismatch is.
rdar://11257547
llvm-svn: 155499
Assembly matchers for instructions with a two-operand form. ARM is full
of these, for example:
add {Rd}, Rn, Rm // Rd is optional and is the same as Rn if omitted.
The property TwoOperandAliasConstraint on the instruction definition controls
when, and if, an alias will be formed. No explicit InstAlias definitions
are required.
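A sketch of setting the property on a definition (the instruction class and
operand names are made up, and the exact constraint spelling is per-target):
def MyADDrr : MyInst<(outs GPR:$Rd), (ins GPR:$Rn, GPR:$Rm)> {
  let TwoOperandAliasConstraint = "$Rn = $Rd"; // "add Rd, Rm" fills in Rn as Rd
}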
rdar://11255754
llvm-svn: 155172
This is a new algorithm that finds sets of register units that can be
used to model registers pressure. This handles arbitrary, overlapping
register classes. Each register class is associated with a (small)
list of pressure sets. These are the dimensions of pressure affected
by the register class's liveness.
llvm-svn: 154374
This is a new algorithm that associates registers with weighted
register units to accurately model their effect on register
pressure. This handles registers with multiple overlapping
subregisters. It is possible, but almost inconceivable that the
algorithm fails to find an exact solution for a target description. If
an exact solution cannot be found, an inexact, but reasonable solution
will be chosen.
llvm-svn: 154373
This also avoids emitting the information twice, which led to code bloat. On i386-linux-Release+Asserts
with all targets built this change shaves a whopping 1.3 MB off clang. The number is probably exaggerated
by recent inliner changes but the methods were already enormous with the old inline cost computation.
The DWARF reg -> LLVM reg mapping doesn't seem to have holes in it, so it could be a simple lookup table.
I didn't implement that optimization yet to avoid potentially changing functionality.
There is still some duplication both in tablegen and the generated code that should be cleaned up eventually.
llvm-svn: 153837
First small step toward modeling multi-register multi-pressure. In the
future, register units can also be used to model liveness and
aliasing.
llvm-svn: 153794
Use an explicit comparator instead of the default.
The sets are sorted, but not using the default comparator. Hopefully,
this will unbreak the Linux builders.
llvm-svn: 153772
TableGen emits lists of sub-registers, super-registers, and overlaps. Put
them all in a single table and use a SequenceToOffsetTable to share
suffixes.
llvm-svn: 153761
This is similar to the StringToOffsetTable we use to produce string
tables, but it can be used for other sequences than strings, and it
eliminates entries for suffixes.
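Roughly, a TableGen backend would use it like this (a sketch; the element type
and variable names are placeholders):
  SequenceToOffsetTable<std::vector<unsigned>> Lists;
  Lists.add(SubRegList);                    // first, register every sequence
  Lists.layout();                           // then lay out the table, sharing suffixes
  unsigned Offset = Lists.get(SubRegList);  // offset of a sequence in the combined table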
llvm-svn: 153760
The arm_neon intrinsics can create virtual registers from the DPair
register class which allows both even-odd and odd-even D-register pairs.
This fixes PR12389.
llvm-svn: 153603
We cannot limit the concatenated instruction names to 64K. ARM is
already at 32K, and it is easy to imagine a target with more
instructions.
llvm-svn: 152817
This patch limited the concatenated register names to 64K which meant
that the total number of registers was many times less than 64K.
If any compilers actually enforce the 64K limit on string literals, and
it turns out to be a problem, we should fix that problem by not using
long string literals.
llvm-svn: 152816
~0U might be i32 on 32-bit hosts; then (uint64_t)~0U might not be the expected (i64)0xFFFFFFFF_FFFFFFFF, but rather (i64)0x00000000_FFFFFFFF.
llvm-svn: 152407
Original commit message:
Use uint16_t to store InstrNameIndices in MCInstrInfo. Add asserts to protect all 16-bit string table offsets. Also make sure the string to offset table string is not larger than 65536 characters since larger string literals aren't portable.
llvm-svn: 152296
Original commit message:
Use uint16_t to store InstrNameIndices in MCInstrInfo. Add asserts to protect
all 16-bit string table offsets. Also make sure the string to offset table
string is not larger than 65536 characters since larger string literals aren't
portable.
llvm-svn: 152233