Commit Graph

247 Commits

Author SHA1 Message Date
Alexander Belyaev 2ea6e13bf8 [mlir] Add an optional distributionTypes attribute to TiledLoopOp.
Differential Revision: https://reviews.llvm.org/D103104
2021-05-25 20:04:41 +02:00
Nicolas Vasilache 4519ca3d2e [mlir][Linalg] NFC - Drop Linalg EDSC usage
Drop the Linalg dialect EDSC subdirectory and update all uses.

Differential Revision: https://reviews.llvm.org/D102848
2021-05-20 15:33:56 +00:00
Tobias Gysi 9a2769db80 [mlir][Python][linalg] Support OpDSL extensions in C++.
The patch extends the yaml code generation to support the following new OpDSL constructs:
- captures
- constants
- iteration index accesses
- predefined types
These changes have been introduced by revision
https://reviews.llvm.org/D101364.

Differential Revision: https://reviews.llvm.org/D102075
2021-05-19 13:36:56 +00:00
Rob Suderman 7b57517507 [mlir][linalg] Fixed issue generating reassociation map with Rank-0 types
The rank-0 case caused a crash during the linalg reshape operation.

Differential Revision: https://reviews.llvm.org/D102282
2021-05-12 11:00:51 -07:00
Aart Bik bf812ea484 [mlir][linalg] remove the -now- obsolete sparse support in linalg
All glue and clutter in the linalg ops has been replaced by proper
sparse tensor type encoding. This code is no longer needed. Thanks
to ntv@ for giving us a temporary home in linalg.

So long, and thanks for all the fish.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102098
2021-05-10 16:49:33 -07:00
Tobias Gysi 26e916334e [mlir][linalg] Add IndexedGenericOp to GenericOp canonicalization.
Replace all `linalg.indexed_generic` ops by `linalg.generic` ops that access the iteration indices using the `linalg.index` op.
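For illustration, a minimal before/after sketch of this rewrite; the maps, shapes, and value names are assumptions rather than snippets from the patch, and the exact printed form of these ops at the time may differ slightly:

```mlir
#id = affine_map<(d0) -> (d0)>

// Before: linalg.indexed_generic receives the iteration index as a leading
// block argument.
func @add_index_old(%in: tensor<?xf32>, %init: tensor<?xf32>) -> tensor<?xf32> {
  %0 = linalg.indexed_generic
      {indexing_maps = [#id, #id], iterator_types = ["parallel"]}
      ins(%in : tensor<?xf32>) outs(%init : tensor<?xf32>) {
    ^bb0(%i: index, %a: f32, %b: f32):
      %iv = index_cast %i : index to i64
      %fv = sitofp %iv : i64 to f32
      %s = addf %a, %fv : f32
      linalg.yield %s : f32
  } -> tensor<?xf32>
  return %0 : tensor<?xf32>
}

// After: the equivalent linalg.generic queries the index with linalg.index.
func @add_index_new(%in: tensor<?xf32>, %init: tensor<?xf32>) -> tensor<?xf32> {
  %0 = linalg.generic
      {indexing_maps = [#id, #id], iterator_types = ["parallel"]}
      ins(%in : tensor<?xf32>) outs(%init : tensor<?xf32>) {
    ^bb0(%a: f32, %b: f32):
      %i = linalg.index 0 : index
      %iv = index_cast %i : index to i64
      %fv = sitofp %iv : i64 to f32
      %s = addf %a, %fv : f32
      linalg.yield %s : f32
  } -> tensor<?xf32>
  return %0 : tensor<?xf32>
}
```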

Differential Revision: https://reviews.llvm.org/D101612
2021-05-07 06:00:16 +00:00
Alexander Belyaev 2865d114f9 [mlir] Use ReassociationIndices instead of affine maps in linalg.reshape.
Differential Revision: https://reviews.llvm.org/D101861
2021-05-05 12:59:57 +02:00
MaheshRavishankar a6e09391bb [mlir][Linalg] Add a utility method to get reassociations maps for reshape.
Given the source and destination shapes, if they are static, or if the
expanded/collapsed dimensions are unit-extent, it is possible to
compute the reassociation maps that can be used to reshape one type
into another. Add a utility method to return the reassociation maps
when possible.
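As a minimal illustration (the shapes, value names, and the affine-map-based reassociation spelling are assumptions, not taken from the patch), a static collapse has exactly one valid reassociation:

```mlir
// Collapsing tensor<2x3x4xf32> into tensor<6x4xf32>: dimensions (0, 1) are
// grouped together and dimension 2 stays on its own.
func @collapse(%src: tensor<2x3x4xf32>) -> tensor<6x4xf32> {
  %0 = linalg.tensor_reshape %src
      [affine_map<(d0, d1, d2) -> (d0, d1)>,
       affine_map<(d0, d1, d2) -> (d2)>]
      : tensor<2x3x4xf32> into tensor<6x4xf32>
  return %0 : tensor<6x4xf32>
}
```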

This utility function can be used to fuse a sequence of reshape ops,
given the type of the source of the producer and the final result
type. This pattern supersedes a more constrained folding pattern added
to the DropUnitDims pass.

Differential Revision: https://reviews.llvm.org/D101343
2021-05-03 14:40:15 -07:00
Aart Bik 319072f4e3 [mlir][sparse] migrate sparse operations into new sparse tensor dialect
This is the very first step toward removing the glue and clutter from linalg and
replacing it with proper sparse tensor types. This revision migrates the LinalgSparseOps
into SparseTensorOps of a sparse tensor dialect. This also provides a new home for
sparse tensor related transformations.

NOTE: the actual replacement with sparse tensor types (and removal of linalg glue/clutter)
will follow but I am trying to keep the amount of changes per revision manageable.

Differential Revision: https://reviews.llvm.org/D101573
2021-04-29 15:52:35 -07:00
Mehdi Amini 086e0f05bf Revert "[mlir][sparse] migrate sparse operations into new sparse tensor dialect"
This reverts commit a6d92a9711.

The build with -DBUILD_SHARED_LIBS=ON is broken.
2021-04-29 20:59:41 +00:00
Aart Bik a6d92a9711 [mlir][sparse] migrate sparse operations into new sparse tensor dialect
This is the very first step toward removing the glue and clutter from linalg and
replacing it with proper sparse tensor types. This revision migrates the LinalgSparseOps
into SparseTensorOps of a sparse tensor dialect. This also provides a new home for
sparse tensor related transformations.

NOTE: the actual replacement with sparse tensor types (and removal of linalg glue/clutter)
will follow but I am trying to keep the amount of changes per revision manageable.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D101488
2021-04-29 12:09:10 -07:00
Alexander Belyaev fa0d044c44 [mlir] Fix canonicalization of tiled_loop if not all op results fold.
The current canonicalization did not remap operation results correctly
and attempted to erase the TiledLoopOp, which is incorrect if not all tensor
results are folded.
2021-04-28 19:57:48 +02:00
Alexander Belyaev 9a66d33452 [mlir] Address the post-submit comments in https://reviews.llvm.org/D101445 2021-04-28 14:58:02 +02:00
Alexander Belyaev 29dbac0ae2 [mlir] Add folding for tensor inputs and memref.cast in linalg.tiled_loop.
Tensor inputs, if not used in the body of TiledLoopOp, can be removed.
memref::CastOp can be folded into TiledLoopOp as well.

Differential Revision: https://reviews.llvm.org/D101445
2021-04-28 14:36:07 +02:00
Alexander Belyaev 5291a7a3c7 [mlir] Add block arguments for input/output operands of `linalg.tiled_loop`.
Differential Revision: https://reviews.llvm.org/D101186
2021-04-23 20:55:20 +02:00
Alexander Belyaev cf761904a2 [mlir] Add verification for `linalg.tiled_loop` op.
Differential Revision: https://reviews.llvm.org/D100555
2021-04-15 20:50:36 +02:00
Tobias Gysi 93f9922d65 [mlir][linalg] adding operation to access the iteration index of enclosing linalg ops.
The `linalg.index` operation provides access to the iteration indexes of immediately enclosing linalg operations. It takes a dimension `dim` attribute and returns the iteration index in the given dimension. Having `linalg.index` allows us to unify `linalg.generic` and `linalg.indexed_generic` and also enables index access in named operations.
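A minimal sketch of the new op in use (the function and value names are assumptions):

```mlir
#id = affine_map<(d0) -> (d0)>

// Writes each element's own iteration index into a 1-d tensor of indices.
func @fill_with_indices(%init: tensor<?xindex>) -> tensor<?xindex> {
  %0 = linalg.generic
      {indexing_maps = [#id], iterator_types = ["parallel"]}
      outs(%init : tensor<?xindex>) {
    ^bb0(%out: index):
      %i = linalg.index 0 : index
      linalg.yield %i : index
  } -> tensor<?xindex>
  return %0 : tensor<?xindex>
}
```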

Differential Revision: https://reviews.llvm.org/D100292
2021-04-12 13:37:17 +00:00
MaheshRavishankar f4eb681dc3 [mlir][Linalg] Drop unit-trip loops of reductions only if other reduction loops exist.
A recent change enabled dropping unit-trip loops of "reduction" iterator
type as well. This is fine as long as there is one other "reduction"
iterator in the operation. Without this, the initialized value (the value
of `out`) is not read, which leads to a correctness issue.

Also fix a bug in the `fill` -> `tensor_reshape` folding. The `out`
operand of the `fill` needs to be reshaped to get the `out` operand of
the generated `fill` operation.

Differential Revision: https://reviews.llvm.org/D100145
2021-04-08 22:31:29 -07:00
MaheshRavishankar 9b0517035f [mlir] Enhance InferShapedTypeOpInterface and move LinalgOps to use them.
A new `InterfaceMethod` is added to `InferShapedTypeOpInterface` that
allows an operation to return the `Value`s for each dim of its
results. It is intended for the case where the `Value` returned for
each dim is computed using the operands and operation attributes. This
interface method is for cases where the result dim of an operation can
be computed independently, and it avoids the need to aggregate all
dims of a result into a single shape value. This also implies that
this is not suitable for cases where the result type is unranked (for
which the existing interface methods are to be used).

Also added is a canonicalization pattern that uses this interface and
resolves the shapes of the output in terms of the shapes of the
inputs. Linalg ops are moved to use this interface so that the many
canonicalization patterns implemented for individual linalg ops to
achieve the same result can be removed in favor of the added
canonicalization pattern.

Differential Revision: https://reviews.llvm.org/D97887
2021-03-29 11:39:48 -07:00
Alexander Belyaev 7f2236cf58 [mlir][linalg] Add output tensor args folding for linalg.tiled_loop.
Folds away TiledLoopOp output tensors when the following conditions are met:
* result of `linalg.tiled_loop` has no uses
* output tensor is the argument of `linalg.yield`

Example:

```
%0 = linalg.tiled_loop ...  outs (%out, %out_buf:tensor<...>, memref<...>) {
  ...
  linalg.yield %out : tensor ...
}
```

Becomes

```
linalg.tiled_loop ...  outs (%out_buf:memref<...>) {
  ...
  linalg.yield
}
```

Differential Revision: https://reviews.llvm.org/D99333
2021-03-25 18:11:05 +01:00
Lei Zhang 19435d3863 [mlir][linalg] Fold fill -> tensor_reshape chain
For such op chains, we can create new linalg.fill ops
with the result type of the linalg.tensor_reshape op.

Differential Revision: https://reviews.llvm.org/D99116
2021-03-24 18:17:58 -04:00
River Riddle 76f3c2f3f3 [mlir][Pattern] Add better support for using interfaces/traits to match root operations in rewrite patterns
To match an interface or trait, users currently have to use the `MatchAny` tag. This tag can be quite problematic for compile time for things like the canonicalizer, as the `MatchAny` patterns may get applied to *every* operation. This revision adds better support by bucketing interface/trait patterns based on which registered operations have them registered. This means that moving forward we will only attempt to match these patterns to operations that have this interface registered. To simplify defining patterns that match traits and interfaces, two new utility classes have been added: OpTraitRewritePattern and OpInterfaceRewritePattern.

Differential Revision: https://reviews.llvm.org/D98986
2021-03-23 14:05:33 -07:00
Chris Lattner dc4e913be9 [PatternMatch] Big mechanical rename OwningRewritePatternList -> RewritePatternSet and insert -> add. NFC
This doesn't change APIs; it just cleans up the many in-tree uses of these
names to use the new preferred names. We'll keep the old names around for a
couple of weeks to help transitions.

Differential Revision: https://reviews.llvm.org/D99127
2021-03-22 17:20:50 -07:00
Alexander Belyaev 628f5c9da2 [mlir] Add a roundtrip test for 'linalg.tiled_loop' on buffers.
https://llvm.discourse.group/t/rfc-add-linalg-tileop/2833

Differential Revision: https://reviews.llvm.org/D98900
2021-03-19 09:38:20 +01:00
Alexander Belyaev 283799157e [mlir][linalg] Add support for memref inputs/outputs for `linalg.tiled_loop`.
Also use `ArrayAttr` to pass iterator types to the TiledLoopOp builder.

Differential Revision: https://reviews.llvm.org/D98871
2021-03-18 16:11:03 +01:00
Julian Gross e2310704d8 [MLIR] Create memref dialect and move dialect-specific ops from std.
Create the memref dialect and move dialect-specific ops
from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
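For illustration, a hedged before/after sketch of the renaming for a few of the ops listed above (the function and value names are made up):

```mlir
// Before the split, these lived in the std dialect:
//   %m = alloc(%n) : memref<?xf32>
//   store %v, %m[%i] : memref<?xf32>
// After the split they carry the memref prefix:
func @alloc_and_store(%n: index, %i: index, %v: f32) -> memref<?xf32> {
  %m = memref.alloc(%n) : memref<?xf32>
  memref.store %v, %m[%i] : memref<?xf32>
  return %m : memref<?xf32>
}
```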

Differential Revision: https://reviews.llvm.org/D98041
2021-03-15 11:14:09 +01:00
Nicolas Vasilache a756f12b4d [mlir][Linalg] Add folding of linalg.copy that are in fact identities.
Differential Revision: https://reviews.llvm.org/D97939
2021-03-04 13:37:26 +00:00
MaheshRavishankar c118fdcd59 [mlir] Remove incorrect folding for SubTensorInsertOp
The SubTensorInsertOp has a requirement that dest type and result
type match. Just folding the tensor.cast operation violates this and
creates verification errors during canonicalization. Also fix other
canonicalization methods that weren't inserting casts properly.

Differential Revision: https://reviews.llvm.org/D97800
2021-03-03 13:58:05 -08:00
Stella Laurenzo d36a15de1f [mlir][linalg] Memoize indexing map generation.
Differential Revision: https://reviews.llvm.org/D97602
2021-03-01 21:15:40 -08:00
Stella Laurenzo 2ceedc3a20 [mlir][linalg] Add symbolic type conversion to linalg named ops.
This enables this kind of construct in the DSL to generate a named op that is polymorphic over numeric type variables `T` and `U`, generating the correct arithmetic casts at construction time:

```
@tc_def_op
def polymorphic_matmul(A=TensorDef(T1, S.M, S.K),
                       B=TensorDef(T2, S.K, S.N),
                       C=TensorDef(U, S.M, S.N, output=True)):
  implements(ContractionOpInterface)
  C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```

Presently, this only supports type variables that are bound to the element type of one of the arguments, although a further extension that allows binding a type variable to an attribute would allow some more expressiveness and may be useful for some formulations. This is left to a future patch. In addition, this patch does not yet materialize the verifier support which ensures that types are bound correctly (for such simple examples, failing to do so will yield IR that fails verification, it just won't yet fail with a precise error).

Note that the full grid of extensions/truncation/int<->float conversions are supported, but many of them are lossy and higher level code needs to be mindful of numerics (it is not the job of this level).

As-is, this should be sufficient for most integer matmul scenarios we work with in typical quantization schemes.

Differential Revision: https://reviews.llvm.org/D97603
2021-02-27 15:52:35 -08:00
Christian Sigg 8c074cb0b7 [mlir] Mark OpState::getAttrs() deprecated.
Fix call sites.

The method will be removed 2 weeks later.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97464
2021-02-25 20:54:42 +01:00
Alexander Belyaev 7377ef9357 [mlir] Add a builder to `linalg.tiled_loop`.
https://llvm.discourse.group/t/rfc-add-linalg-tileop/2833

Differential Revision: https://reviews.llvm.org/D97372
2021-02-24 14:47:27 +01:00
Alexander Belyaev 945b76d428 [mlir][linalg] Fix Linalg roundtrip test.
The test did not check whether the operations can be parsed again after
printing them once.

Differential Revision: https://reviews.llvm.org/D97368
2021-02-24 11:31:09 +01:00
Stella Laurenzo 6c9541d4dd Implement simple type polymorphism for linalg named ops.
* It was decided that this was the end of the line for the existing custom tc parser/generator, and this is the first step to replacing it with a declarative format that maps well to mathy source languages.
* One such source language is implemented here: https://github.com/stellaraccident/mlir-linalgpy/blob/main/samples/mm.py
  * In fact, this is the exact source of the declarative `polymorphic_matmul` in this change.
  * I am working separately to clean this python implementation up and add it to MLIR (probably as `mlir.tools.linalg_opgen` or equiv). The scope of the python side is greater than just generating named ops: the ops are callable and directly emit `linalg.generic` ops fully dynamically, and this is intended to be a feature for frontends like npcomp to define custom linear algebra ops at runtime.
* There is more work required to handle full type polymorphism, especially with respect to integer formulations, since they require more specificity wrt types.
* Followups to this change will bring the new generator to feature parity with the current one and delete the current. Roughly, this involves adding support for interface declarations and attribute symbol bindings.

Differential Revision: https://reviews.llvm.org/D97135
2021-02-21 14:30:31 -08:00
Geoffrey Martin-Noble 236aab0b0c [MLIR] Delete unused functions getCollapsedInitTensor and getExpandedInitTensor
These are unused since
https://reviews.llvm.org/rG81264dfbe80df08668a325a61613b64243b99c01

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D97014
2021-02-19 09:23:54 -08:00
Alexander Belyaev 624fccba87 [mlir] Add `linalg.tiled_loop` op.
`subtensor_insert` was used instead of `linalg.subtensor_yield` to make this PR
smaller. Verification will be added in a follow-up PR.

Differential Revision: https://reviews.llvm.org/D96943
2021-02-18 13:23:00 +01:00
Alexander Belyaev a89035d750 Revert "[MLIR] Create memref dialect and move several dialect-specific ops from std."
This commit introduced a cyclic dependency:
The MemRef dialect depends on Standard because it uses ConstantIndexOp.
Std depends on the MemRef dialect in its EDSC/Intrinsics.h.

Working on a fix.

This reverts commit 8aa6c3765b.
2021-02-18 12:49:52 +01:00
Julian Gross 8aa6c3765b [MLIR] Create memref dialect and move several dialect-specific ops from std.
Create the memref dialect and move several dialect-specific ops without
dependencies to other ops from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
DeallocOp -> MemRef_DeallocOp
MemRefCastOp -> MemRef_CastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
TransposeOp -> MemRef_TransposeOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D96425
2021-02-18 11:29:39 +01:00
MaheshRavishankar 81264dfbe8 [mlir][Linalg] Add utility method to reshape ops to express output shape in terms of input shape.
Resolving the dim of outputs of a tensor_reshape op in terms of its
input shape allows the op to be eliminated when it is used only in its
dims. The init_tensor -> tensor_reshape canonicalization can be
simplified to use the dims of the output of the tensor_reshape which
gets canonicalized away later making the tensor_reshape dead.

Differential Revision: https://reviews.llvm.org/D96635
2021-02-16 13:42:08 -08:00
Mehdi Amini aa4e466caa [mlir][Linalg] Improve region support in Linalg ops
This revision takes advantage of the newly extended `ref` directive in assembly format
to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization.

This reverts commit 3f22547fd1 and relands 973e133b76 with a workaround for
a gcc bug that does not accept lambda default parameters:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59949

Differential Revision: https://reviews.llvm.org/D96598
2021-02-12 19:11:24 +00:00
Mehdi Amini 3f22547fd1 Revert "[mlir][Linalg] Improve region support in Linalg ops."
This reverts commit 973e133b76.

It triggers an issue in gcc5 that requires investigation; the build is
broken with:

/tmp/ccdpj3B9.s: Assembler messages:
/tmp/ccdpj3B9.s:5821: Error: symbol `_ZNSt17_Function_handlerIFvjjEUljjE2_E9_M_invokeERKSt9_Any_dataOjS6_' is already defined
/tmp/ccdpj3B9.s:5860: Error: symbol `_ZNSt14_Function_base13_Base_managerIUljjE2_E10_M_managerERSt9_Any_dataRKS3_St18_Manager_operation' is already defined
2021-02-12 18:15:51 +00:00
Nicolas Vasilache f3fb2dd147 [mlir][Linalg] NFC - Add an OpFoldResult-based builder for InitTensorOp 2021-02-12 16:03:51 +00:00
Nicolas Vasilache 973e133b76 [mlir][Linalg] Improve region support in Linalg ops.
This revision takes advantage of the newly extended `ref` directive in assembly format
to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization.

Differential Revision: https://reviews.llvm.org/D96598
2021-02-12 14:51:03 +00:00
Aart Bik 0b1764a3d7 [mlir][sparse] sparse tensor storage implementation
This revision connects the generated sparse code with an actual
sparse storage scheme, which can be initialized from a test file.
Lacking a first-class citizen SparseTensor type (with buffer),
the storage is hidden behind an opaque pointer with some "glue"
to bring the pointer back to tensor land. Rather than generating
sparse setup code for each different annotated tensor (viz. the
"pack" methods in TACO), a single "one-size-fits-all" implementation
has been added to the runtime support library.  Many details and
abstractions need to be refined in the future, but this revision
allows full end-to-end integration testing and performance
benchmarking (with, on one end, an annotated Linalg
op and, on the other end, a JIT/AOT executable).

Reviewed By: nicolasvasilache, bixia

Differential Revision: https://reviews.llvm.org/D95847
2021-02-10 11:57:24 -08:00
Hanhan Wang e8d31754a2 [mlir][Linalg] Add a build method for linalg.pad_tensor
Add a build method that pads the source with a scalar value.

Reviewed By: nicolasvasilache, antiagainst

Differential Revision: https://reviews.llvm.org/D96343
2021-02-09 10:19:57 -08:00
Nicolas Vasilache d57a305fdf [mlir][Linalg] Fix padding related bugs.
This revision fixes the fact that the padding transformation did not have enough information to set the proper type for the padding value.
Additionally, the verifier for Yield in the presence of PadTensorOp is fixed to properly report incorrect number of results or operands. Previously, the error would be silently ignored which made the core issue difficult to debug.

Differential Revision: https://reviews.llvm.org/D96264
2021-02-08 18:59:24 +00:00
Tres Popp c2c83e97c3 Revert "Revert "Reorder MLIRContext location in BuiltinAttributes.h""
This reverts commit 511dd4f438 along with
a couple fixes.

Original message:
Now the context is the first, rather than the last input.

This better matches the rest of the infrastructure and makes
it easier to move these types to being declaratively specified.

Phabricator: https://reviews.llvm.org/D96111
2021-02-08 10:39:58 +01:00
Tres Popp 511dd4f438 Revert "Reorder MLIRContext location in BuiltinAttributes.h"
This reverts commit 7827753f98.
2021-02-08 09:32:42 +01:00
Tres Popp 7827753f98 Reorder MLIRContext location in BuiltinAttributes.h
Now the context is the first, rather than the last input.

This better matches the rest of the infrastructure and makes
it easier to move these types to being declaratively specified.

Differential Revision: https://reviews.llvm.org/D96111
2021-02-08 09:28:09 +01:00
Nicolas Vasilache 7f58196ec7 [mlir][linalg] Linalg.fill on tensor should not have side-effects
Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96094
2021-02-05 08:22:14 +00:00
Nicolas Vasilache f4ac9f0334 [mlir][Linalg] Drop SliceOp
This op is subsumed by rank-reducing SubViewOp and has become useless.

Differential revision: https://reviews.llvm.org/D95317
2021-02-04 11:22:01 +00:00
Nicolas Vasilache 1029c82c1e [mlir][Linalg] NFC - Extract a standalone LinalgInterfaces
This separation improves the layering and paves the way for more interfaces coming up in the future.

Differential revision: https://reviews.llvm.org/D95941
2021-02-04 07:19:38 +00:00
MaheshRavishankar 342d4662e1 [mlir] Add custom directive hooks for printing mixed integer or value operands.
Add printer and parser hooks for a custom directive that allows
parsing and printing of idioms that can represent a list of values
each of which is either an integer or an SSA value. For example in

`subview %source[%offset_0, 1] [4, %size_1] [%stride_0, 3]`

each of the list (which represents offset, size and strides) is a mix
of either statically known integer values or dynamically computed SSA
values. Since this is used in many places adding a custom directive to
parse/print this idiom allows using assembly format on operations
which use this idiom.

Differential Revision: https://reviews.llvm.org/D95773
2021-02-01 19:03:49 -08:00
Hanhan Wang c818fa6729 [mlir][Linalg] Replace SimplePad with PadTensor in tile-and-pad
This revision creates a build method of PadTensorOp which can be mapped to
the SimplePad op. The verifier is updated to accept a static custom result type,
which has the same semantics as SimplePadOp.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D95555
2021-01-28 06:50:26 -08:00
Nicolas Vasilache 5133673df4 [mlir] Extend semantic of OffsetSizeAndStrideOpInterface.
OffsetSizeAndStrideOpInterface now has the ability to specify only a leading subset of
the offset, size, and stride operands/attributes.
The size of that leading subset must be limited by the corresponding entry in `getArrayAttrMaxRanks` to avoid overflows.
Missing trailing dimensions are assumed to span the whole range (i.e. [0 .. dim)).
This brings more natural semantics to slice-like ops on top of subview and simplifies removing all uses of SliceOp in dependent projects.

Differential revision: https://reviews.llvm.org/D95441
2021-01-27 09:02:35 +00:00
MaheshRavishankar 7c15e0f64c [mlir][Linalg] Add canonicalization for init_tensor -> subtensor op.
Differential Revision: https://reviews.llvm.org/D95305
2021-01-26 23:22:28 -08:00
MaheshRavishankar 6e8ef3b76a [mlir][Linalg] Make Fill operation work on tensors.
Depends on D95109
2021-01-22 14:39:27 -08:00
Hanhan Wang 16d4bbef30 [mlir][Linalg] Introduce linalg.pad_tensor op.
`linalg.pad_tensor` is an operation that pads the `source` tensor
with given `low` and `high` padding config.

Example 1:

```mlir
  %pad_value = ... : f32
  %1 = linalg.pad_tensor %0 low[1, 2] high[2, 3] {
  ^bb0(%arg0 : index, %arg1 : index):
    linalg.yield %pad_value : f32
  } : tensor<?x?xf32> to tensor<?x?xf32>
```

Example 2:
```mlir
  %pad_value = ... : f32
  %1 = linalg.pad_tensor %arg0 low[2, %arg1, 3, 3] high[3, 3, %arg1, 2] {
  ^bb0(%arg2: index, %arg3: index, %arg4: index, %arg5: index):
    linalg.yield %pad_value : f32
  } : tensor<1x2x2x?xf32> to tensor<6x?x?x?xf32>
```

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D93704
2021-01-21 22:09:28 -08:00
MaheshRavishankar d7bc3b7ce2 [mlir][Linalg] Add missing check to canonicalization of GenericOp that are identity ops.
The operation is an identity if the values yielded by the operation
are the arguments of the basic block of that operation. Add this missing check.

Differential Revision: https://reviews.llvm.org/D94819
2021-01-15 13:55:35 -08:00
MaheshRavishankar 774c9c6ef3 [mlir][Linalg] Add canonicalization of linalg op -> dim op.
Add a canonicalization to replace the use of the result of a linalg
operation on tensors in a dim operation with one of the operands of
the linalg operation instead. This allows the linalg op itself to be
deleted when all its non-dim uses are removed (say through tiling, etc.)
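A minimal sketch of the pattern (value names and shapes are assumptions): the `dim` of a `linalg.matmul` result can be rewritten in terms of the matmul operands, after which a matmul used only by `dim` ops becomes dead:

```mlir
func @matmul_dim(%a: tensor<?x?xf32>, %b: tensor<?x?xf32>,
                 %init: tensor<?x?xf32>) -> index {
  %c0 = constant 0 : index
  %0 = linalg.matmul ins(%a, %b : tensor<?x?xf32>, tensor<?x?xf32>)
                     outs(%init : tensor<?x?xf32>) -> tensor<?x?xf32>
  // Canonicalizes to `dim %a, %c0`, so a matmul whose only remaining uses
  // are dim ops can then be erased.
  %d = dim %0, %c0 : tensor<?x?xf32>
  return %d : index
}
```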

Differential Revision: https://reviews.llvm.org/D93076
2021-01-14 16:17:08 -08:00
MaheshRavishankar 722ae10907 [mlir][Linalg] Add canonicalization to remove no-op linalg operations.
linalg.generic/indexed_generic operations on tensors whose body is
just yielding the (non-induction variable) arguments of the operation
can be canonicalized by replacing uses of the result with the
corresponding arguments.
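For illustration, a minimal sketch of such a no-op generic (names and shapes are assumptions):

```mlir
#id = affine_map<(d0) -> (d0)>

// The body only forwards its input, so uses of %0 fold to %x.
func @noop_generic(%x: tensor<?xf32>, %init: tensor<?xf32>) -> tensor<?xf32> {
  %0 = linalg.generic
      {indexing_maps = [#id, #id], iterator_types = ["parallel"]}
      ins(%x : tensor<?xf32>) outs(%init : tensor<?xf32>) {
    ^bb0(%in: f32, %out: f32):
      linalg.yield %in : f32
  } -> tensor<?xf32>
  return %0 : tensor<?xf32>
}
```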

Differential Revision: https://reviews.llvm.org/D94581
2021-01-14 14:59:24 -08:00
MaheshRavishankar 9c0dc0b2c1 [mlir][Linalg] Fold init_tensor -> linalg.tensor_reshape.
Reshaping an init_tensor can be folded to an init_tensor op of the
final type.

Differential Revision: https://reviews.llvm.org/D93773
2021-01-11 09:22:35 -08:00
MaheshRavishankar ec13f6c3e5 [mlir][Linalg] Add verification checks to disallow illegal reshape ops.
The existing verification of reshape ops in linalg (linalg.reshape and
linalg.tensor_reshape) allows specification of illegal ops, where
- A dynamic dimension is expanded into multiple dynamic
  dimensions. This is ill-specified.
- A static dimension is expanded into a dynamic dimension or vice versa,
- The product of extents of the static dimensions in the expanded type
  doesn't match the static dimension of the collapsed type.
This patch makes all of these illegal. This also implies that some pessimization
in canonicalization due to incomplete semantics of the operation can
be dropped.
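As an illustration of the first case, a sketch of IR that the verifier now rejects (the value names and the reassociation spelling are assumptions):

```mlir
func @illegal_expansion(%arg0: tensor<?xf32>) -> tensor<?x?xf32> {
  // Rejected: a single dynamic dimension expanded into two dynamic
  // dimensions leaves the split point unspecified.
  %0 = linalg.tensor_reshape %arg0 [affine_map<(d0, d1) -> (d0, d1)>]
      : tensor<?xf32> into tensor<?x?xf32>
  return %0 : tensor<?x?xf32>
}
```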

Differential Revision: https://reviews.llvm.org/D93724
2021-01-08 10:54:46 -08:00
Alexander Belyaev 89ae5b5b6a [mlir] Add canonicalization pattern out_tensor->linalg->dim to out_tensor->dim.
Differential Revision: https://reviews.llvm.org/D94079
2021-01-05 15:15:21 +01:00
nicolasvasilache b7ae1d3d2b [mlir][Linalg] Revisit the Linalg on tensors abstraction
This revision drops init_tensor arguments from Linalg on tensors and instead uniformizes the output buffers and output tensors to be consistent.
This significantly simplifies the usage of Linalg on tensors and is a stepping stone for
its evolution towards a mixed tensor and shape abstraction discussed in https://llvm.discourse.group/t/linalg-and-shapes/2421/19.

Differential Revision: https://reviews.llvm.org/D93469
2020-12-21 12:29:10 -08:00
Sean Silva 129d6e554e [mlir] Move `std.tensor_cast` -> `tensor.cast`.
This is almost entirely mechanical.
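A minimal sketch of the rename (function and value names are assumptions):

```mlir
// Before: %c = tensor_cast %t : tensor<4xf32> to tensor<?xf32>
// After:
func @relax_shape(%t: tensor<4xf32>) -> tensor<?xf32> {
  %c = tensor.cast %t : tensor<4xf32> to tensor<?xf32>
  return %c : tensor<?xf32>
}
```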

Differential Revision: https://reviews.llvm.org/D93357
2020-12-17 16:06:56 -08:00
MaheshRavishankar 118a715654 [mlir][Linalg] Define a linalg.init_tensor operation.
This operation is used to materialize a tensor of a particular
shape. The shape could be specified as a mix of static and dynamic
values.

The use of this operation is to be an `init` tensor for Linalg
structured operations on tensors where the bounds of the computation
depend on the shape of the output of the linalg operation. The result
of this operation will be used as the `init` tensor of such Linalg
operations. To note,

1) The values in the materialized tensor are not used. Any operation to
   which this is an init tensor is expected to overwrite the entire
   tensor.
2) The tensor is materialized only for the shape of the output and to
   make the loop bounds depend only on operands of the structured
   operation.

Based on (1) and (2) it is assumed that these operations eventually go
away since they are only used in `dim` operations that can be
canonicalized to make this operation dead. Such canonicalizations are
added here too.
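A minimal usage sketch (the shape and names are assumptions):

```mlir
// Materializes only a shape; the element values are undefined and are
// expected to be fully overwritten by the consuming structured op.
func @make_init(%d0: index) -> tensor<?x42xf32> {
  %0 = linalg.init_tensor [%d0, 42] : tensor<?x42xf32>
  return %0 : tensor<?x42xf32>
}
```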

Differential Revision: https://reviews.llvm.org/D93374
2020-12-17 14:45:51 -08:00
Christian Sigg 0bf4a82a5a [mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation. This is a preparation step to remove the corresponding methods from OpState.
Reviewed By: silvas, rriddle

Differential Revision: https://reviews.llvm.org/D92878
2020-12-09 12:11:32 +01:00
Nicolas Vasilache 9ac0b314a4 [mlir][Linalg] Drop symbol_source abstraction which does not pay for itself.
Differential Revision: https://reviews.llvm.org/D91956
2020-11-23 12:43:02 +00:00
Nicolas Vasilache 01c4418544 [mlir][Linalg] NFC - Factor out Linalg functionality for shape and loop bounds computation
This revision refactors code used in various Linalg transformations and makes it a first class citizen to the LinalgStructureOpInterface. This is in preparation to allowing more advanced Linalg behavior but is otherwise NFC.

Differential revision: https://reviews.llvm.org/D91863
2020-11-23 10:17:18 +00:00
River Riddle 65fcddff24 [mlir][BuiltinDialect] Resolve comments from D91571
* Move ops to a BuiltinOps.h
* Add file comments
2020-11-19 11:12:49 -08:00
Aart Bik 9ad62f62b9 [mlir][sparse] remove a few rewriting failures
Rationale:
Make sure preconditions are already tested during verification.
Currently, the only way a sparse rewriting rule can fail is if
(1) the linalg op does not have sparse annotations, or
(2) a yet-to-be-handled operation is encountered inside the op

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D91748
2020-11-18 17:29:40 -08:00
River Riddle 73ca690df8 [mlir][NFC] Remove references to Module.h and Function.h
These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.

Differential Revision: https://reviews.llvm.org/D91572
2020-11-17 00:55:47 -08:00
Aart Bik e1dbc25ee2 [mlir][sparse] integrate sparse annotation into generic linalg op
This CL integrates the new sparse annotations (heretofore merely added as fully
transparent attributes) more tightly into the generic linalg op in order to add
verification of the annotations' consistency as well as to make other
passes more aware of their presence (in the long run, rewriting rules must
preserve the integrity of the annotations).

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D91224
2020-11-11 17:26:30 -08:00
Sean Silva e6e9e7eedf [mlir][Linalg] Canonicalize duplicate args.
I ran into this pattern when converting elementwise ops like
`addf %arg0, %arg : tensor<?xf32>` to linalg. Redundant arguments can
also easily arise from linalg-fusion-for-tensor-ops.
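For illustration, a hedged sketch of such a duplicate-argument generic, written in the later outs-based tensor syntax for readability (names and shapes are assumptions):

```mlir
#id = affine_map<(d0) -> (d0)>

// %x is passed twice; the canonicalization drops the duplicate input and
// rewires both block arguments to the single remaining operand.
func @double(%x: tensor<?xf32>, %init: tensor<?xf32>) -> tensor<?xf32> {
  %0 = linalg.generic
      {indexing_maps = [#id, #id, #id], iterator_types = ["parallel"]}
      ins(%x, %x : tensor<?xf32>, tensor<?xf32>)
      outs(%init : tensor<?xf32>) {
    ^bb0(%a: f32, %b: f32, %out: f32):
      %s = addf %a, %b : f32
      linalg.yield %s : f32
  } -> tensor<?xf32>
  return %0 : tensor<?xf32>
}
```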

Also, fix some small bugs in the logic in
LinalgStructuredOpsInterface.td.

Differential Revision: https://reviews.llvm.org/D90812
2020-11-06 14:40:51 -08:00
Mehdi Amini f580a49d27 Fix gcc warning by removing extra `;` after a macro (NFC) 2020-11-06 20:47:40 +00:00
Nicolas Vasilache ecca7852d9 [mlir][Linalg] Side effects interface for Linalg ops
The LinalgDependenceGraph and alias analysis provide the necessary analysis for the Linalg fusion on buffers case.

However this is not enough for linalg on tensors, which requires proper memory effects to play nicely with DCE and other transformations.
This revision adds the previously missing side effects to Linalg ops, which has 2 consequences:
1. one example in the copy removal pass now fails since the linalg.generic op has side effects and the pass does not perform alias analysis / distinguish between reads and writes.
2. a few examples in fusion-tensor.mlir need to return the resulting tensor otherwise DCE automatically kicks in as part of greedy pattern application.

Differential Revision: https://reviews.llvm.org/D90762
2020-11-05 09:00:28 +00:00
Kazuaki Ishizaki 41b09f4eff [mlir] NFC: fix trivial typos
fix typos in comments and documents

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D90089
2020-10-29 04:05:22 +09:00
MaheshRavishankar 78f37b74da [mlir][Linalg] Miscalleneous enhancements to cover more fusion cases.
Adds support for
- Dropping unit dimension loops for indexed_generic ops.
- Folding consecutive collapsing (or expanding) reshapes when the result
  (or source) is a scalar.
- Fixes to indexed_generic -> generic fusion when zero-dim tensors are
  involved.

Differential Revision: https://reviews.llvm.org/D90118
2020-10-26 16:17:24 -07:00
Federico Lebrón 256492677d Fix pretty printing of linalg GenericOps when there are no inputs.
Differential Revision: https://reviews.llvm.org/D89825
2020-10-20 14:58:32 -07:00
MaheshRavishankar de2568aab8 [mlir][Linalg] Rethink fusion of linalg ops with reshape ops.
The current fusion on tensors fuses reshape ops with generic ops by
linearizing the indexing maps of the fused tensor in the generic
op. This has some limitations:
- It only works for static shapes.
- The resulting indexing map has a linearization that could
  potentially prevent fusion later on (for ex. tile + fuse).

Instead, try to fuse the reshape consumer (producer) with generic op
producer (consumer) by expanding the dimensionality of the generic op
when the reshape is expanding (folding).  This approach conflicts with
the linearization approach. The expansion method is used instead of
the linearization method.

This also further refactors the fusion on tensors to be a collection of
patterns.

Differential Revision: https://reviews.llvm.org/D89002
2020-10-14 13:50:31 -07:00
Nicolas Vasilache 69d3247f35 [mlir][Linalg] NFC - Automate the printing of canonicalizers and folders for named Linalg ops.
This revision reduces the number of places where specific information needs to be modified when adding new named Linalg ops.

Differential Revision: https://reviews.llvm.org/D89223
2020-10-12 11:22:29 +00:00
Nicolas Vasilache d8ee28b96e [mlir][Linalg] Extend buffer allocation to support Linalg init tensors
This revision adds init_tensors support to buffer allocation for Linalg on tensors.
Currently makes the assumption that the init_tensors fold onto the first output tensors.

This assumption is not currently enforced or cast in stone and requires experimenting with tiling linalg on tensors for ops **without reductions**.

Still this allows progress towards the end-to-end goal.
2020-10-06 13:24:27 +00:00
Nicolas Vasilache 4a8c70c319 [mlir][Linalg] Reintroduced missing verification check
A verification check on the number of indexing maps seems to have been dropped inadvertently. Also update the relevant roundtrip tests.
2020-10-06 07:59:59 +00:00
Nicolas Vasilache 346b9d1772 [mlir][Linalg] Canonicalize TensorCastOp away when it feeds a LinalgOp.
This canonicalization is the counterpart of MemRefCastOp -> LinalgOp but on tensors.

This is needed to properly canonicalize post linalg tiling on tensors.

Differential Revision: https://reviews.llvm.org/D88729
2020-10-05 14:48:21 +00:00
Benjamin Kramer 6e2b267d1c Promote transpose from linalg to standard dialect
While affine maps are part of the builtin memref type, there is very
limited support for manipulating them in the standard dialect. Add
transpose to the set of ops to complement the existing view/subview ops.
This is a metadata transformation that encodes the transpose into the
strides of a memref.

I'm planning to use this when lowering operations on strided memrefs,
using the transpose to remove the stride without adding a dependency on
linalg dialect.

Differential Revision: https://reviews.llvm.org/D88651
2020-10-05 10:58:20 +02:00
Rahul Joshi 08e4f07852 [MLIR][NFC] Adopt use of TypeRange in build() methods.
- Use TypeRange instead of ArrayRef<Type> where possible.
- Change some of the custom builders to also use TypeRange

Differential Revision: https://reviews.llvm.org/D87944
2020-09-23 09:07:57 -07:00
MaheshRavishankar b62f9f4407 [mlir][Linalg] Add pattern to fold linalg.tensor_reshape that add unit extent dims.
A sequence of two reshapes such that one of them is just adding unit
extent dims can be folded to a single reshape.

Differential Revision: https://reviews.llvm.org/D88057
2020-09-23 00:01:58 -07:00
Nicolas Vasilache ed229132f1 [mlir][Linalg] Uniformize linalg.generic with named ops.
This revision allows representing a reduction at the level of linalg on tensors for generic ops by uniformizing with the named ops approach.
2020-09-22 04:13:22 -04:00
Nicolas Vasilache 93fd30bac3 [mlir][Linalg] Evolve named ops to use assembly form and support linalg on tensors.
This revision allows representing a reduction at the level of linalg on tensors for named ops. When a structured op has a reduction and returns tensor(s), new conventions are added and documented.

As an illustration, the syntax for a `linalg.matmul` writing into a buffer is:

```
  linalg.matmul ins(%a, %b : memref<?x?xf32>, tensor<?x?xf32>)
               outs(%c : memref<?x?xf32>)
```

, whereas the syntax for a `linalg.matmul` returning a new tensor is:

```
  %d = linalg.matmul ins(%a, %b : tensor<?x?xf32>, memref<?x?xf32>)
                    init(%c : memref<?x?xf32>)
                      -> tensor<?x?xf32>
```

Other parts of linalg will be extended accordingly to allow mixed buffer/tensor semantics in the presence of reductions.
2020-09-18 06:14:30 -04:00
Federico Lebrón 7d1ed69c8a Make namespace handling uniform across dialect backends.
Now backends spell out which namespace they want to be in, instead of relying on
clients #including them inside already-opened namespaces. This also means that
cppNamespaces should be fully qualified, and there's no implicit "::mlir::"
prepended to them anymore.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D86811
2020-09-14 20:33:31 +00:00
Nicolas Vasilache e6f2f17f05 [mlir][Linalg] Refactor StructuredOpInterface - NFC
This revision refactors and cleans up a bunch of things to simplify StructuredOpInterface
before work can proceed on Linalg on tensors:
- break out pieces of the StructuredOps trait that are part of the StructuredOpInterface,
- drop referenceIterators and referenceIndexingMaps that end up being more confusing than useful,
- drop NamedStructuredOpTrait
2020-09-11 07:53:12 -04:00
Benjamin Kramer a0e0d30a29 [mlir][Linalg] Print both types for linalg.transpose
Previously only the input type was printed, and the parser applied it to
both input and output, creating an invalid transpose. Print and parse
both types, and verify that they match.

Differential Revision: https://reviews.llvm.org/D87462
2020-09-11 11:16:51 +02:00
Eugene Burmako 5638df1950 Introduce linalg.vecmat
This patch adds a new named structured op to accompany linalg.matmul and
linalg.matvec. We needed it for our codegen, so I figured it would be useful
to add it to Linalg.
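A minimal usage sketch in the ins/outs named-op form (shapes and value names are assumptions; the exact printed form at the time of this commit may have differed slightly):

```mlir
// Vector-matrix product y = x * A on buffers.
func @vecmat(%x: memref<?xf32>, %A: memref<?x?xf32>, %y: memref<?xf32>) {
  linalg.vecmat ins(%x, %A : memref<?xf32>, memref<?x?xf32>)
               outs(%y : memref<?xf32>)
  return
}
```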

Reviewed By: nicolasvasilache, mravishankar

Differential Revision: https://reviews.llvm.org/D87292
2020-09-10 18:48:14 +02:00
Frederik Gossen 136eb79a88 [MLIR][Standard] Add `dynamic_tensor_from_elements` operation
With `dynamic_tensor_from_elements` tensor values of dynamic size can be
created. The body of the operation essentially maps the index space to tensor
elements.
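A minimal sketch of the op (the names, the element computation, and minor syntax details are assumptions):

```mlir
// Builds a tensor<?xf32> whose i-th element is i, converted to f32;
// one operand (%n) is required per dynamic dimension of the result.
func @make_range(%n: index) -> tensor<?xf32> {
  %t = dynamic_tensor_from_elements %n {
  ^bb0(%i: index):
    %iv = index_cast %i : index to i64
    %e = sitofp %iv : i64 to f32
    yield %e : f32
  } : tensor<?xf32>
  return %t : tensor<?xf32>
}
```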

Declare SCF operations in the `scf` namespace to avoid name clash with the new
`std.yield` operation. Resolve ambiguities between `linalg/shape/std/scf.yield`
operations.

Differential Revision: https://reviews.llvm.org/D86276
2020-09-07 11:44:43 +00:00
Hanhan Wang eb4efa8832 [mlir][Linalg] Enhance Linalg fusion on generic op and tensor_reshape op.
The tensor_reshape op was fusible only in the collapsing case. Now we
propagate the op to all the operands so there is a further chance to fuse it
with a generic op. The pre-conditions are:

1) The producer is not an indexed_generic op.
2) All the shapes of the operands are the same.
3) All the indexing maps are identity.
4) All the loops are parallel loops.
5) The producer has a single user.

It is possible to fuse the ops if the producer is an indexed_generic op. We
still can compute the original indices. E.g., if the reshape op collapses the d0
and d1, we can use DimOp to get the width of d1, and calculate the index
`d0 * width + d1`. Then replace all the uses with it. However, this pattern is
not implemented in the patch.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D86314
2020-08-28 01:55:49 -07:00
Benjamin Kramer b98e25b6d7 Make helpers static. NFC. 2020-08-19 16:00:03 +02:00
Mehdi Amini f9dc2b7079 Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser is lazily loading Dialects in the context as it encounters them
during parsing. This is the only purpose for registering dialects and not loading
them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager is loading Dialects into
the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only
needs to load the dialect for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context; all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.

To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.

1) For passes, you need to override the method:

virtual void getDependentDialects(DialectRegistry &registry) const {}

and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.

2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.

3) For loading IR, dialects that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` is taking an explicit registry for
this purpose. See how the standalone-opt.cpp example is setup:

  mlir::DialectRegistry registry;
  registry.insert<mlir::standalone::StandaloneDialect>();
  registry.insert<mlir::StandardOpsDialect>();

Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:

  mlir::registerAllDialects(registry);

4) For `mlir-translate` callback, as well as frontend, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()

Differential Revision: https://reviews.llvm.org/D85622
2020-08-19 01:19:03 +00:00
Mehdi Amini e75bc5c791 Revert "Separate the Registration from Loading dialects in the Context"
This reverts commit d14cf45735.
The build is broken with GCC-5.
2020-08-19 01:19:03 +00:00
Mehdi Amini d14cf45735 Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser is lazily loading Dialects in the context as it encounters them
during parsing. This is the only purpose for registering dialects and not loading
them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager is loading Dialects into
the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only
needs to load the dialect for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context; all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.

To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.

1) For passes, you need to override the method:

virtual void getDependentDialects(DialectRegistry &registry) const {}

and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.

2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.

3) For loading IR, dialects that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` is taking an explicit registry for
this purpose. See how the standalone-opt.cpp example is setup:

  mlir::DialectRegistry registry;
  registry.insert<mlir::standalone::StandaloneDialect>();
  registry.insert<mlir::StandardOpsDialect>();

Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:

  mlir::registerAllDialects(registry);

4) For `mlir-translate` callback, as well as frontend, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()

Differential Revision: https://reviews.llvm.org/D85622
2020-08-18 23:23:56 +00:00