This separation improves the layering and paves the way for additional interfaces in the future.
Differential Revision: https://reviews.llvm.org/D95941
Add printer and parser hooks for a custom directive that allows
parsing and printing of idioms that represent a list of values, each
of which is either an integer or an SSA value. For example, in
`subview %source[%offset_0, 1] [4, %size_1] [%stride_0, 3]`
each list (representing offsets, sizes, and strides) is a mix of
statically known integer values and dynamically computed SSA
values. Since this idiom is used in many places, adding a custom
directive to parse/print it allows using the assembly format on
operations that use it.
Differential Revision: https://reviews.llvm.org/D95773
This revision creates a build method of PadTensorOp which can be mapped to
the SimplePad op. The verifier is updated to accept a static custom result type,
which has the same semantics as SimplePadOp.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D95555
OffsetSizeAndStrideOpInterface now has the ability to specify only a leading subset of
the offset, sizes, and strides operands/attributes.
The size of that leading subset must be limited by the corresponding entry in `getArrayAttrMaxRanks` to avoid overflows.
Missing trailing dimensions are assumed to span the whole range (i.e. [0 .. dim)).
This brings more natural semantics to slice-like ops on top of subview and simplifies removing all uses of SliceOp in dependent projects.
Differential Revision: https://reviews.llvm.org/D95441
The operation is an identity if the values yielded by the operation
are the arguments of its basic block. Add this missing check.
Differential Revision: https://reviews.llvm.org/D94819
Add a canonicalization to replace uses of the result of a linalg
operation on tensors in a dim operation with one of the operands of
the linalg operation instead. This allows the linalg op itself to be
deleted when all its non-dim uses are removed (say through tiling, etc.).
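A hedged sketch of the rewrite (the exact linalg.generic syntax at this revision may differ; `#id` and the value names are hypothetical):
```mlir
#id = affine_map<(d0, d1) -> (d0, d1)>
// Before: %d queries the shape of the linalg result %0.
%0 = linalg.generic {indexing_maps = [#id, #id],
                     iterator_types = ["parallel", "parallel"]}
    ins(%arg0 : tensor<?x?xf32>) outs(%arg1 : tensor<?x?xf32>) {
  ^bb0(%in: f32, %out: f32):
    %s = addf %in, %in : f32
    linalg.yield %s : f32
} -> tensor<?x?xf32>
%c0 = constant 0 : index
%d = dim %0, %c0 : tensor<?x?xf32>
// After: %d = dim %arg0, %c0; the linalg op can then be deleted once its
// remaining uses go away.
```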
Differential Revision: https://reviews.llvm.org/D93076
linalg.generic/indexed_generic operations on tensors whose body just
yields the (non-induction-variable) arguments of the operation can be
canonicalized by replacing uses of the result with the corresponding
arguments.
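A hedged sketch of the pattern (op syntax approximate for this point in the timeline; `#id` and the value names are hypothetical):
```mlir
#id = affine_map<(d0) -> (d0)>
// The body only forwards the input block argument, so every use of %0 can be
// replaced by %arg0, after which the op becomes dead.
%0 = linalg.generic {indexing_maps = [#id, #id],
                     iterator_types = ["parallel"]}
    ins(%arg0 : tensor<?xf32>) outs(%arg1 : tensor<?xf32>) {
  ^bb0(%in: f32, %out: f32):
    linalg.yield %in : f32
} -> tensor<?xf32>
```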
Differential Revision: https://reviews.llvm.org/D94581
The existing verification of reshape ops in linalg (linalg.reshape and
linalg.tensor_reshape) allows specification of illegal ops, where
- a dynamic dimension is expanded into multiple dynamic dimensions,
  which is ill-specified;
- a static dimension is expanded into a dynamic dimension, or vice versa;
- the product of the extents of the static dimensions in the expanded type
  doesn't match the static dimension of the collapsed type.
This change makes all of these illegal. It also implies that some
pessimization in canonicalization due to the incomplete semantics of the
operation can be dropped.
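For illustration, a hedged sketch of the first kind of op that is now rejected (reassociation maps apply to the higher-rank type):
```mlir
// Ill-specified: one dynamic dimension expanded into two dynamic dimensions.
%0 = linalg.tensor_reshape %arg0 [affine_map<(d0, d1) -> (d0, d1)>]
    : tensor<?xf32> into tensor<?x?xf32>
```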
Differential Revision: https://reviews.llvm.org/D93724
This revision drops init_tensor arguments from Linalg on tensors and instead makes the output buffers and output tensors uniform and consistent.
This significantly simplifies the usage of Linalg on tensors and is a stepping stone for
its evolution towards a mixed tensor and shape abstraction discussed in https://llvm.discourse.group/t/linalg-and-shapes/2421/19.
Differential Revision: https://reviews.llvm.org/D93469
This operation is used to materialize a tensor of a particular
shape. The shape can be specified as a mix of static and dynamic
values.
This operation is intended to be an `init` tensor for a Linalg
structured operation on tensors where the bounds of the computation
depend on the shape of the output of the linalg operation. The result
of this operation will be used as the `init` tensor of such Linalg
operations. Note that
1) the values in the materialized tensor are not used; any operation to
   which this is an init tensor is expected to overwrite the entire
   tensor;
2) the tensor is materialized only for its shape, to make the loop
   bounds depend only on operands of the structured operation.
Based on (1) and (2) it is assumed that these operations eventually go
away, since they are only used in `dim` operations that can be
canonicalized to make this operation dead. Such canonicalizations are
added here too.
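A minimal sketch of the op together with the dim canonicalization (my reading of the syntax introduced here; the value names are hypothetical):
```mlir
// Materialize an init tensor with one dynamic and one static extent.
%0 = linalg.init_tensor [%d0, 42] : tensor<?x42xf32>
// A dim on the static extent folds away, keeping loop bounds shape-only.
%c1 = constant 1 : index
%s = dim %0, %c1 : tensor<?x42xf32>  // canonicalizes to: constant 42 : index
```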
Differential Revision: https://reviews.llvm.org/D93374
This revision refactors code used in various Linalg transformations and makes it a first-class citizen of the LinalgStructuredOpInterface. This is in preparation for allowing more advanced Linalg behavior but is otherwise NFC.
Differential Revision: https://reviews.llvm.org/D91863
Rationale:
Make sure preconditions are tested already during verification.
Currently, the only way a sparse rewriting rule can fail is if
(1) the linalg op does not have sparse annotations, or
(2) a yet-to-be-handled operation is encountered inside the op.
Reviewed By: penpornk
Differential Revision: https://reviews.llvm.org/D91748
These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.
Differential Revision: https://reviews.llvm.org/D91572
This CL integrates the new sparse annotations (heretofore merely added as fully
transparent attributes) more tightly with the generic linalg op in order to add
verification of the annotations' consistency as well as to make other
passes more aware of their presence (in the long run, rewriting rules must
preserve the integrity of the annotations).
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D91224
I ran into this pattern when converting elementwise ops like
`addf %arg0, %arg0 : tensor<?xf32>` to linalg. Redundant arguments can
also easily arise from linalg-fusion-for-tensor-ops.
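A hedged sketch of the redundancy (op syntax approximate for this point in the timeline; `#map` and the value names are hypothetical):
```mlir
#map = affine_map<(d0) -> (d0)>
// %arg0 is passed twice; the pattern dedupes the operands and rewires the
// second block argument to the first.
%0 = linalg.generic {indexing_maps = [#map, #map, #map],
                     iterator_types = ["parallel"]}
    ins(%arg0, %arg0 : tensor<?xf32>, tensor<?xf32>)
    outs(%init : tensor<?xf32>) {
  ^bb0(%a: f32, %b: f32, %out: f32):
    %sum = addf %a, %b : f32
    linalg.yield %sum : f32
} -> tensor<?xf32>
```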
Also, fix some small bugs in the logic in
LinalgStructuredOpsInterface.td.
Differential Revision: https://reviews.llvm.org/D90812
The LinalgDependenceGraph and alias analysis provide the necessary analysis for the Linalg fusion on buffers case.
However this is not enough for linalg on tensors which require proper memory effects to play nicely with DCE and other transformations.
This revision adds side effects to Linalg ops that were previously missing and has 2 consequences:
1. one example in the copy removal pass now fails since the linalg.generic op has side effects and the pass does not perform alias analysis / distinguish between reads and writes.
2. a few examples in fusion-tensor.mlir need to return the resulting tensor otherwise DCE automatically kicks in as part of greedy pattern application.
Differential Revision: https://reviews.llvm.org/D90762
Adds support for:
- Dropping unit-dimension loops for indexed_generic ops.
- Folding consecutive collapsing (or expanding) reshapes when the result
  (or source) is a scalar.
- Fixes to indexed_generic -> generic fusion when zero-dim tensors are
  involved.
Differential Revision: https://reviews.llvm.org/D90118
The current fusion on tensors fuses reshape ops with generic ops by
linearizing the indexing maps of the fused tensor in the generic
op. This has some limitations:
- It only works for static shapes.
- The resulting indexing map has a linearization that could
  potentially prevent fusion later on (e.g. tile + fuse).
Instead, try to fuse the reshape consumer (producer) with the generic op
producer (consumer) by expanding the dimensionality of the generic op
when the reshape is expanding (folding). This approach conflicts with
the linearization approach, so the expansion method is used instead of
the linearization method.
This also includes further refactoring that changes the fusion on
tensors to be a collection of patterns.
Differential Revision: https://reviews.llvm.org/D89002
This revision reduces the number of places that specific information needs to be modified when adding new named Linalg ops.
Differential Revision: https://reviews.llvm.org/D89223
This revision adds init_tensors support to buffer allocation for Linalg on tensors.
Currently makes the assumption that the init_tensors fold onto the first output tensors.
This assumption is not currently enforced or cast in stone and requires experimenting with tiling linalg on tensors for ops **without reductions**.
Still this allows progress towards the end-to-end goal.
This canonicalization is the counterpart of MemRefCastOp -> LinalgOp but on tensors.
This is needed to properly canonicalize post linalg tiling on tensors.
Differential Revision: https://reviews.llvm.org/D88729
While affine maps are part of the builtin memref type, there is very
limited support for manipulating them in the standard dialect. Add
transpose to the set of ops to complement the existing view/subview ops.
This is a metadata transformation that encodes the transpose into the
strides of a memref.
I'm planning to use this when lowering operations on strided memrefs,
using the transpose to remove the stride without adding a dependency on
the linalg dialect.
Differential Revision: https://reviews.llvm.org/D88651
- Use TypeRange instead of ArrayRef<Type> where possible.
- Change some of the custom builders to also use TypeRange
Differential Revision: https://reviews.llvm.org/D87944
A sequence of two reshapes, where one of them is just adding unit-extent
dims, can be folded into a single reshape.
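A hypothetical sketch (reassociation maps follow the convention of applying to the higher-rank type; the shapes are illustrative):
```mlir
// %0 only adds a unit dimension; the two reshapes fold into one.
%0 = linalg.tensor_reshape %arg0
    [affine_map<(d0, d1, d2) -> (d0, d1)>, affine_map<(d0, d1, d2) -> (d2)>]
    : tensor<4x512xf32> into tensor<1x4x512xf32>
%1 = linalg.tensor_reshape %0
    [affine_map<(d0, d1, d2) -> (d0, d1, d2)>]
    : tensor<1x4x512xf32> into tensor<2048xf32>
// Folds to:
%1 = linalg.tensor_reshape %arg0
    [affine_map<(d0, d1) -> (d0, d1)>]
    : tensor<4x512xf32> into tensor<2048xf32>
```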
Differential Revision: https://reviews.llvm.org/D88057
This revision allows representing a reduction at the level of linalg on tensors for named ops. When a structured op has a reduction and returns tensor(s), new conventions are added and documented.
As an illustration, the syntax for a `linalg.matmul` writing into a buffer is:
```
linalg.matmul ins(%a, %b : memref<?x?xf32>, tensor<?x?xf32>)
              outs(%c : memref<?x?xf32>)
```
whereas the syntax for a `linalg.matmul` returning a new tensor is:
```
%d = linalg.matmul ins(%a, %b : tensor<?x?xf32>, memref<?x?xf32>)
                   init(%c : memref<?x?xf32>)
                   -> tensor<?x?xf32>
```
Other parts of linalg will be extended accordingly to allow mixed buffer/tensor semantics in the presence of reductions.
Now backends spell out which namespace they want to be in, instead of relying on
clients #including them inside already-opened namespaces. This also means that
cppNamespaces should be fully qualified, and there's no implicit "::mlir::"
prepended to them anymore.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D86811
This revision refactors and cleans up a bunch of things to simplify StructuredOpInterface
before work can proceed on Linalg on tensors:
- break out pieces of the StructuredOps trait that are part of the StructuredOpInterface,
- drop referenceIterators and referenceIndexingMaps that end up being more confusing than useful,
- drop NamedStructuredOpTrait
Previously only the input type was printed, and the parser applied it to
both input and output, creating an invalid transpose. Print and parse
both types, and verify that they match.
Differential Revision: https://reviews.llvm.org/D87462
This patch adds a new named structured op to accompany linalg.matmul and
linalg.matvec. We needed it for our codegen, so I figured it would be useful
to add it to Linalg.
Reviewed By: nicolasvasilache, mravishankar
Differential Revision: https://reviews.llvm.org/D87292
With `dynamic_tensor_from_elements`, tensor values of dynamic size can be
created. The body of the operation essentially maps the index space to tensor
elements.
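A small sketch of the op as I read the new syntax (`%n` is a hypothetical dynamic extent):
```mlir
// One block argument per result dimension; the new std.yield produces the
// element value, here the index itself.
%t = dynamic_tensor_from_elements %n {
^bb0(%i : index):
  yield %i : index
} : tensor<?xindex>
```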
Declare SCF operations in the `scf` namespace to avoid name clash with the new
`std.yield` operation. Resolve ambiguities between `linalg/shape/std/scf.yield`
operations.
Differential Revision: https://reviews.llvm.org/D86276
The tensor_reshape op was fusible only in the collapsing case. Now we
propagate the op to all the operands so there is a further chance to fuse it
with a generic op. The preconditions are:
1) The producer is not an indexed_generic op.
2) All the shapes of the operands are the same.
3) All the indexing maps are identity.
4) All the loops are parallel loops.
5) The producer has a single user.
It is possible to fuse the ops even if the producer is an indexed_generic op;
we can still compute the original indices. E.g., if the reshape op collapses d0
and d1, we can use DimOp to get the width of d1 and calculate the index
`d0 * width + d1`, then replace all the uses with it. However, this pattern is
not implemented in this patch.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D86314
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects into the context as it encounters them
during parsing. This is the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager is loading Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example, in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context, all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can appear in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callback, as well as frontend, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
Differential Revision: https://reviews.llvm.org/D85622
When the operand to the linalg.tensor_reshape op is a splat constant,
the result can be replaced with a splat constant of the same value but
different type.
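A hedged sketch of the fold (reassociation map syntax approximate for this era):
```mlir
// Before: a reshape of a splat constant.
%cst = constant dense<1.0> : tensor<4x2xf32>
%0 = linalg.tensor_reshape %cst [affine_map<(d0, d1) -> (d0, d1)>]
    : tensor<4x2xf32> into tensor<8xf32>
// After: a splat constant of the result type.
%0 = constant dense<1.0> : tensor<8xf32>
```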
Differential Revision: https://reviews.llvm.org/D86117
When any of the memrefs in a structured linalg op has a zero dimension, it becomes dead.
This is consistent with the fact that linalg ops deduce their loop bounds from their operands.
Note however that this is not the case for the `tensor<0xelt_type>` which is a special convention
that must be lowered away into either `memref<elt_type>` or just `elt_type` before this
canonicalization can kick in.
Differential Revision: https://reviews.llvm.org/D85413
Replaced the definition of named ND ConvOps with tensor comprehension
syntax, which reduces boilerplate code significantly. Furthermore,
new ops were added to support TF convolutions (without strides and dilations).
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D84628
This commit is part of a greater project which aims to add
full end-to-end support for convolutions inside mlir. The
reason behind having conv ops for each rank rather than
having one generic ConvOp is to enable better optimizations
for every N-D case which reflects memory layout of input/kernel
buffers better and simplifies code as well. We expect plain linalg.conv
to be progressively retired.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D83879
- Replace DotOp, now that DRR rules have been dropped.
- Catch argument mismatches in the parser. The number of parsed arguments must
equal the number of expected arguments.
Reviewed By: ftynse, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D82952
linalg.conv does not support memrefs with rank smaller than 3, as stated here:
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/convolution
However, it does not verify this and thus crashes with an "LLVM ERROR: out of memory"
error for the 1-D case and an `nWin > 0 && "expected at least one window dimension"`
assertion for the 2-D case. This commit adds a check for that in the verification method.
Differential Revision: https://reviews.llvm.org/D84317
This commit adds functionality needed for implementation of convolutions with
linalg.generic op. Since linalg.generic right now expects indexing maps to be
just permutations, offset indexing needed in convolutions is not possible.
Therefore in this commit we address the issue by adding support for symbols inside
indexing maps which enables more advanced indexing. The upcoming commit will
solve the problem of computing loop bounds from such maps.
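For intuition, here is the kind of indexing map this enables (an illustrative sketch, not the exact maps from the patch):
```mlir
// Offset indexing for a 1-D convolution-style access: the element at output
// index d0, shifted by a symbol s0 that stands for the filter position.
#conv_input_access = affine_map<(d0)[s0] -> (d0 + s0)>
```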
Differential Revision: https://reviews.llvm.org/D83158
This also fixes the outdated use of `n_views` in the documentation.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D83795
Summary:
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).
During the replacement the i32 version had to be dropped because only the EDSC operators +, *, etc support type inference.
Deciding on a type-polymorphic behavior, and implementing it, is left for future work.
Reviewers: aartbik
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D81935
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).
During the replacement the i32 version had to be dropped because only the EDSC operators +, *, etc support type inference.
Deciding on a type-polymorphic behavior, and implementing it, is left for future work.
Differential Revision: https://reviews.llvm.org/D79762
Remove unit-extent dimensions from the shapes of the operands of Generic ops.
Unit-extent dimensions are typically used for achieving broadcasting
behavior. The pattern added (along with canonicalization patterns
added previously) removes the use of unit-extent dimensions, and
instead uses a more canonical representation of the computation. This
new pattern is not added as a canonicalization for now since it
entails adding additional reshape operations. A pass is added to
exercise these patterns, along with an API entry to populate a
patterns list with these patterns.
Differential Revision: https://reviews.llvm.org/D79766
This is only valid if the source tensor (result tensor) is statically
shaped with all unit extents when the reshape is collapsing
(expanding) dimensions.
Differential Revision: https://reviews.llvm.org/D79764
Summary:
This makes a common pattern of
`dyn_cast_or_null<OpTy>(v.getDefiningOp())` more concise.
Differential Revision: https://reviews.llvm.org/D79681
Summary:
This revision adds a conservative canonicalization pattern for MemRefCastOps that are typically inserted during ViewOp and SubViewOp canonicalization.
Ideally such canonicalizations would propagate the type to consumers, but this is not a local behavior. As a consequence, MemRefCastOps are introduced to keep type compatibility but need to be cleaned up later, in the case where more dynamic behavior than necessary is introduced.
Differential Revision: https://reviews.llvm.org/D79438
This revision adds support to allow named ops to lower to loops.
Linalg.batch_matmul successfully lowers to loops and to LLVM.
In the process, this also activates lowering from linalg to affine loops.
However, padded convolutions do not lower to affine.load at the moment, so this revision overrides the type of the underlying load / store operation.
Differential Revision: https://reviews.llvm.org/D79135
As we start defining more complex Ops, we increasingly see the need for
Ops-with-regions to be able to construct Ops within their regions in
their ::build methods. However, these methods only have access to
Builder, and not OpBuilder. Creating a local instance of OpBuilder
inside ::build and using it fails to trigger the operation creation
hooks in derived builders (e.g., ConversionPatternRewriter). In this
case, we risk breaking the logic of the derived builder. At the same
time, OpBuilder::create, which is by far the largest user of ::build,
already passes "this" as the first argument, so an OpBuilder instance is
already available.
Update all ::build methods in all Ops in MLIR and Flang to take
"OpBuilder &" instead of "Builder *". Note the change from pointer and
to reference to comply with the common style in MLIR, this also ensures
all other users must change their ::build methods.
Differential Revision: https://reviews.llvm.org/D78713
LinalgOps.cpp:232:71: error: specialization of 'template<class GenericOpType> static mlir::LogicalResult {anonymous}::BlockArgsVerifier<GenericOpType>::verify(GenericOpType, mlir::Block&)' in different namespace [-fpermissive]
This can help provide a common interface for view-like
ops so that for example Linalg's dependency analysis
can avoid relying on concrete ops.
Differential Revision: https://reviews.llvm.org/D78645
This revision is the first in a set of improvements that aim at allowing
more generalized named Linalg op generation from a mathematical
specification.
This revision allows creating a new op and checks that the parser,
printer and verifier are hooked up properly.
This opened up a few design points that will be addressed in the future:
1. A named linalg op has a static region builder instead of an
explicitly parsed region. This is not currently compatible with
assemblyFormat so a custom parser / printer are needed.
2. The convention for structured ops and tensor return values needs to
evolve to allow tensor-land and buffer-land specifications to agree.
3. ReferenceIndexingMaps and referenceIterators will need to become
static to allow building attributes at parse time.
4. Error messages will be improved once we have 3. and we pretty print
in custom form.
Differential Revision: https://reviews.llvm.org/D78327
The function attribute in generic ops is not paying for itself.
A region is the more standardized way of specifying a custom computation.
If needed this region can call a function directly.
This is deemed more natural than managing a dedicated function attribute.
This also simplifies named ops generation by trimming unnecessary complexity.
Differential Revision: https://reviews.llvm.org/D78266
Summary:
Modified AffineMap::get to remove support for the overload which allowed
an ArrayRef of AffineExpr but no context (and gathered the context from a
presumed first entry, resulting in bugs when there were 0 results).
Instead, we support only an ArrayRef and a context, and a version which
takes a single AffineExpr.
Additionally, removed some now-needless case logic which previously
special-cased which call to AffineMap::get to use.
Reviewers: flaub, bondhugula, rriddle!, nicolasvasilache, ftynse, ulysseB, mravishankar, antiagainst, aartbik
Subscribers: mehdi_amini, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, bader, grosul1, frgossen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D78226
These have proved incredibly useful for interleaving the values of a range when printing to streams. After this revision, mlir/Support/STLExtras.h is empty. A followup revision will remove it from the tree.
Differential Revision: https://reviews.llvm.org/D78067
Summary: Functional.h contains many different methods that have a direct, and more efficient, equivalent in LLVM. This revision replaces all usages with the LLVM equivalent, and removes the header. This is part of larger cleanup, pr45513, merging MLIR support facilities into LLVM.
Differential Revision: https://reviews.llvm.org/D78053
Summary:
This revision adds a tensor_reshape operation that operates on tensors.
In the tensor world the constraints are less stringent and we can allow more
arbitrary dynamic reshapes, as long as they are contractions.
The expansion of a dynamic dimension into multiple dynamic dimensions is under-specified and is punted on for now.
Differential Revision: https://reviews.llvm.org/D77360
Summary:
Performs an N-D pooling operation similarly to the description in the TF
documentation:
https://www.tensorflow.org/api_docs/python/tf/nn/pool
Different from the description, this operation doesn't operate on the batch and
channel dimensions. It only takes tensors of rank `N`.
```
output[x[0], ..., x[N-1]] =
    REDUCE_{z[0], ..., z[N-1]}
        input[
          x[0] * strides[0] - pad_before[0] + dilation_rate[0] * z[0],
          ...
          x[N-1] * strides[N-1] - pad_before[N-1] + dilation_rate[N-1] * z[N-1]
        ]
```
The optional arguments are:
- strides: an i64 array specifying the stride (i.e. step) for window
loops.
- dilations: an i64 array specifying the filter upsampling/input
downsampling rate
- padding: an i64 array of pairs (low, high) specifying the number of
elements to pad along a dimension.
If strides or dilations attributes are missing then the default value is
one for each of the input dimensions. Similarly, padding values are zero
for both low and high in each of the dimensions, if not specified.
Differential Revision: https://reviews.llvm.org/D76414
Summary:
To enable this, two changes are needed:
1) Add an optional attribute `padding` to linalg.conv.
2) Compute whether the access indices are out of bounds in the loops. If so, use the
padding value `0`; otherwise, use the value derived from the load.
In the patch, the padding only works for lowering without other transformations,
e.g., tiling, fusion, etc.
Differential Revision: https://reviews.llvm.org/D75722
This revision takes advantage of the empty AffineMap to specify the
0-D edge case. This allows removing a bunch of annoying corner cases
that ended up impacting users of Linalg.
Differential Revision: https://reviews.llvm.org/D75831
This revision performs some basic refactoring towards more easily defining Linalg "named" ops. Such named ops form the backbone of operations that are ubiquitous in the ML application domain.
Summary:
NFC - Moved StandardOps/Ops.h to a StandardOps/IR dir to better match surrounding
directories. This is to match other dialects, and to prepare for moving
StandardOps-related transforms out of Transforms and into StandardOps/Transforms.
Differential Revision: https://reviews.llvm.org/D74940
Thus far IntegerType has been signless: a value of IntegerType does
not have a sign intrinsically and it's up to the specific operation
to decide how to interpret those bits. For example, std.addi does
two's complement arithmetic, and std.divis/std.diviu treats the first
bit as a sign.
This design choice was made some time ago when we didn't have lots
of dialects and dialects were more rigid. Today we have much more
extensible infrastructure and different dialects may want different
modelling of integer signedness. So while we can say we want
signless integers in the standard dialect, we cannot dictate for
others. Requiring each dialect to model the signedness semantics
with another set of custom types would duplicate the functionality
everywhere, considering the fundamental role integer types play.
This CL extends IntegerType with a signedness semantics bit.
This gives each dialect an option to opt into signedness semantics
if that's what they want, and helps code sharing. The parser is
modified to recognize `si[1-9][0-9]*` and `ui[1-9][0-9]*` as
signed and unsigned integer types, respectively, leaving the
original `i[1-9][0-9]*` to continue to mean no indication of
signedness semantics. Existing dialects are not affected (yet),
as this is an opt-in feature.
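For illustration, a hypothetical function mixing the new and existing spellings:
```mlir
// Signed (si32), unsigned (ui8), and signless (i16) integer types side by side.
func @signedness_example(%a: si32, %b: ui8, %c: i16) {
  return
}
```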
More discussions can be found at:
https://groups.google.com/a/tensorflow.org/d/msg/mlir/XmkV8HOPWpo/7O4X0Nb_AQAJ
Differential Revision: https://reviews.llvm.org/D72533
Fixing a bug where using a zero-rank shaped type operand to
linalg.generic ops hit an unrelated assert. This also meant that
lowering the operation to loops was not supported. Adding roundtrip
tests and lowering to loops test for zero-rank shaped type operand
with fixes to make the test pass.
Differential Revision: https://reviews.llvm.org/D74638
This revision prepares the ground for declaratively defining Linalg "named" ops.
Such named ops form the backbone of operations that are ubiquitous in the ML
application domain.
This revision closely related to the definition of a "Tensor Computation
Primitives Dialect" and demonstrates that ops can be expressed as declarative
configurations of the `linalg.generic` op.
Differential Revision: https://reviews.llvm.org/D74491
The refactored MemRefType::get() calls all intend to clone from another
memref type, with some modifications. In fact, some calls dropped memory space
during the cloning. Migrate them to the cloning API so that nothing gets
dropped if they are not explicitly listed.
It's close to NFC but not quite, as it helps with propagating memory spaces in
some places.
Differential Revision: https://reviews.llvm.org/D73296
Summary:
Canonicalization and folding patterns in StandardOps may interfere with the needs
of Linalg. This revision introduces specific foldings for dynamic memrefs that can
be proven to be static.
Very concretely, the pattern determines whether it is possible to fold the cast
away in the parent Linalg op, rewriting:
```mlir
%1 = memref_cast %0 : memref<8x16xf32> to memref<?x?xf32>
%2 = linalg.slice %1 ... : memref<?x?xf32> ...
// or
%1 = memref_cast %0 : memref<8x16xf32, affine_map<(i, j)->(16 * i + j)>>
to memref<?x?xf32>
linalg.generic(%1 ...) : memref<?x?xf32> ...
```
into
```mlir
%2 = linalg.slice %0 ... : memref<8x16xf32> ...
// or
linalg.generic(%0 ... : memref<8x16xf32, affine_map<(i, j)->(16 * i + j)>>
```
Reviewers: ftynse, aartbik, jsetoain, tetuante, asaadaldien
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73565
Summary:
This diff fixes issues with the semantics of linalg.generic on tensors that appeared when converting directly from HLO to linalg.generic.
The changes are self-contained within MLIR and can be captured and tested independently of XLA.
The linalg.generic and indexed_generic are updated to:
To allow progressive lowering from the value world (a.k.a. tensor values) to
the buffer world (a.k.a. memref values), a linalg.generic op accepts
mixing input and output ranked tensor values with input and output memrefs.
```
%1 = linalg.generic #trait_attribute %A, %B {other-attributes} :
tensor<?x?xf32>,
memref<?x?xf32, stride_specification>
-> (tensor<?x?xf32>)
```
In this case, the number of outputs (args_out) must match the sum of (1) the
number of output buffer operands and (2) the number of tensor return values.
The semantics is that the linalg.indexed_generic op produces (i.e.
allocates and fills) its return values.
Tensor values must be legalized by a buffer allocation pass before most
transformations can be applied. Such legalization moves tensor return values
into output buffer operands and updates the region argument accordingly.
Transformations that create control-flow around linalg.indexed_generic
operations are not expected to mix with tensors because SSA values do not
escape naturally. Still, transformations and rewrites that take advantage of
tensor SSA values are expected to be useful and will be added in the near
future.
Subscribers: bmahjour, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72555
Summary: The current syntax for AffineMapAttr and IntegerSetAttr conflict with function types, making it currently impossible to round-trip function types(and e.g. FuncOp) in the IR. This revision changes the syntax for the attributes by wrapping them in a keyword. AffineMapAttr is wrapped with `affine_map<>` and IntegerSetAttr is wrapped with `affine_set<>`.
Reviewed By: nicolasvasilache, ftynse
Differential Revision: https://reviews.llvm.org/D72429
Summary:
This diff makes it easier to create a `linalg.reshape` op
and adds an EDSC builder api test to exercise the new builders.
Reviewers: ftynse, jpienaar
Subscribers: mehdi_amini, rriddle, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72580
Summary:
This diff adds a new operation to linalg to allow reshaping of an
existing view into a new view in the same buffer at the same offset.
More specifically:
The `linalg.reshape` op produces a new view whose sizes are a reassociation
of the original `view`. Depending on whether or not the reassociated
MemRefType is contiguous, the resulting memref may require explicit alloc
and copies.
A reassociation is defined as a continuous grouping of dimensions and is
represented with an affine map array attribute. In the future, non-continuous
groupings may be allowed (i.e. permutations, reindexings, etc.).
For now, it is assumed that either:
1. a reassociation produces and consumes contiguous MemRefType or,
2. the reshape op will be folded into its consumers (by changing the shape
of the computations).
All other cases are undefined behavior and a reshape op may not lower to
LLVM if it cannot be proven statically that it does not require alloc+copy.
A reshape may either collapse or expand dimensions, depending on the
relationship between source and target memref ranks. The verification rule
is that the reassociation maps are applied to the memref with the larger
rank to obtain the memref with the smaller rank. In the case of a dimension
expansion, the reassociation maps can be interpreted as inverse maps.
Examples:
```mlir
// Dimension collapse (i, j) -> i' and k -> k'
%1 = linalg.reshape %0 [(i, j, k) -> (i, j),
                        (i, j, k) -> (k)] :
  memref<?x?x?xf32, stride_spec> into memref<?x?xf32, stride_spec_2>
```
```mlir
// Dimension expansion i -> (i', j') and (k) -> (k')
%1 = linalg.reshape %0 [(i, j, k) -> (i, j),
                        (i, j, k) -> (k)] :
  memref<?x?xf32, stride_spec> into memref<?x?x?xf32, stride_spec_2>
```
The relevant invalid and roundtripping tests are added.
Reviewers: AlexEichenberger, ftynse, rriddle, asaadaldien, yangjunpro
Subscribers: kiszk, merge_guards_bot, mehdi_amini, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72168
Summary:
This diff adds support to allow `linalg.generic` and
`linalg.indexed_generic` to take tensor input and output
arguments.
The subset of output tensor operand types must appear
verbatim in the result types after an arrow. The parser,
printer and verifier are extended to accommodate this
behavior.
The Linalg operations now support variadic ranked tensor
return values. This extension exhibited issues with the
current handling of NativeCall in RewriterGen.cpp. As a
consequence, an explicit cast to `SmallVector<Value, 4>`
is added in the proper place to support the new behavior
(better suggestions are welcome).
Relevant cleanups and name uniformization are applied.
Relevant invalid and roundtrip tests are added.
Reviewers: mehdi_amini, rriddle, jpienaar, antiagainst, ftynse
Subscribers: burmako, shauheen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72022
Summary: This is part of an ongoing cleanup and uniformization work.
Reviewers: ftynse
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D72084
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ
This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.
PiperOrigin-RevId: 286844725
This CL allows specifying an additional name for specifying the .td file that is used to generate the doc for a dialect. This is necessary for a dialect like Linalg which has different "types" of ops that are used in different contexts.
This CL also restructures the Linalg documentation and renames LinalgLibraryOps -> LinalgStructuredOps but is otherwise NFC.
PiperOrigin-RevId: 286450414
'```mlir' is used to indicate the code block is MLIR code/should use MLIR syntax
highlighting, while '{.mlir}' was a markdown extension that used a style file
to color the background of the code block differently. The background color
extension was a custom one that we can retire given we have syntax
highlighting.
Also change '```td' to '```tablegen' to match chroma syntax highlighting
designation.
PiperOrigin-RevId: 286222976
This cleans up the implementation of the various operation print methods. This is done via a combination of code cleanup, adding new streaming methods to the printer(e.g. operand ranges), etc.
PiperOrigin-RevId: 285285181
This CL adds more common information to StructuredOpsUtils.h
The n_view attribute is retired in favor of args_in + args_out but the CL is otherwise NFC.
PiperOrigin-RevId: 285000621
This allows for users to provide operand_range and result_range in builder.create<> calls, instead of requiring an explicit copy into a separate data structure like SmallVector/std::vector.
PiperOrigin-RevId: 284360710
This CL uses the now standard std.subview in linalg.
Two shortcuts are currently taken to allow this port:
1. the type resulting from a view is currently degraded to fully dynamic to pass the SubViewOp verifier.
2. indexing into SubViewOp may access out of bounds since lowering to LLVM does not currently enforce it by construction.
These will be fixed in subsequent commits after discussions.
PiperOrigin-RevId: 280250129
This operation is a companion operation to the std.view operation added as proposed in "Updates to the MLIR MemRefType" RFC.
PiperOrigin-RevId: 279766410
Now that a view op has graduated to the std dialect, we can update Linalg to use it and remove ops that have become obsolete. As a byproduct, the linalg buffer and associated ops can also disappear.
PiperOrigin-RevId: 279073591
When the implementation of the strided memref [RFC](https://groups.google.com/a/tensorflow.org/forum/#!msg/mlir/MaL8m2nXuio/1scRqZa6AQAJ) landed, linalg started using this type instead of the now retired !linalg.view.
As static and partially static cases appear, the stride information needs to be maintained properly. In particular, the result type of the subview op was generally incorrect.
This CL fixes the issue by computing a return type that:
1. always has dynamic sizes, which is generally the only correct way to construct a subview in the absence of data padding and/or code versioning.
2. has the same strides as the base strided memref.
Point 1. above can be further refined but will need further analysis and canonicalization to optimize the particular case where:
1. The base memref has static size along a given dimension.
2. The subview size can be statically derived (e.g. after canonicalization).
3. *And* the subview size is an even divisor of the base memref.
This 3rd constraint is well-known in the case of tiled layouts that don't assume implicit padding: the boundary tile may be only partial and has size given by `problem_size % tile_size`.
Tests are updated as appropriate.
PiperOrigin-RevId: 274578624
This also adds coverage with a missing test, which uncovered a bug in the conditional for testing whether an offset is dynamic or not.
PiperOrigin-RevId: 272505798
This CL finishes the implementation of the Linalg + Affine type unification of the [strided memref RFC](https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/MaL8m2nXuio).
As a consequence, the !linalg.view type, linalg::DimOp, linalg::LoadOp and linalg::StoreOp can now disappear and Linalg can use standard types everywhere.
PiperOrigin-RevId: 272187165
- also remove stale terminology/references in docs
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#148
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/148 from bondhugula:cleanup e846b641a3c2936e874138aff480a23cdbf66591
PiperOrigin-RevId: 271618279
A linalg.transpose op is a pure metadata operation that takes a view and a
permutation map and produces another view of the same underlying data, with a
different reindexing; it does not touch the underlying data.
Example:
```
%t = linalg.transpose %v (i, j) -> (j, i) : !linalg.view<?x?xf32>
```
PiperOrigin-RevId: 265139429
Operation interfaces generally require a bit of boilerplate code to connect all of the pieces together. This CL introduces mechanisms in the ODS to allow for generating operation interfaces via the 'OpInterface' class.
Providing a definition of the `OpInterface` class will auto-generate the c++
classes for the interface. An `OpInterface` includes a name, for the c++ class,
along with a list of interface methods. There are two types of methods that can be used with an interface, `InterfaceMethod` and `StaticInterfaceMethod`. They are both comprised of the same core components, with the distinction that `StaticInterfaceMethod` models a static method on the derived operation.
An `InterfaceMethod` is comprised of the following components:
* ReturnType
  - A string corresponding to the c++ return type of the method.
* MethodName
  - A string corresponding to the desired name of the method.
* Arguments
  - A dag of strings that correspond to a c++ type and variable name
    respectively.
* MethodBody (Optional)
  - An optional explicit implementation of the interface method.
def MyInterface : OpInterface<"MyInterface"> {
  let methods = [
    // A simple non-static method with no inputs.
    InterfaceMethod<"unsigned", "foo">,
    // A new non-static method accepting an input argument.
    InterfaceMethod<"Value *", "bar", (ins "unsigned":$i)>,
    // Query a static property of the derived operation.
    StaticInterfaceMethod<"unsigned", "fooStatic">,
    // Provide the definition of a static interface method.
    // Note: `ConcreteOp` corresponds to the derived operation typename.
    StaticInterfaceMethod<"Operation *", "create",
      (ins "OpBuilder &":$builder, "Location":$loc), [{
        return builder.create<ConcreteOp>(loc);
    }]>,
    // Provide a definition of the non-static method.
    // Note: `op` corresponds to the derived operation variable.
    InterfaceMethod<"unsigned", "getNumInputsAndOutputs", (ins), [{
      return op.getNumInputs() + op.getNumOutputs();
    }]>,
  ];
}
PiperOrigin-RevId: 264754898