Commit Graph

632 Commits

Author SHA1 Message Date
Nicolas Vasilache 2240568579 [MLIR][Linalg] Hoist padding across multiple levels of tiling
This revision introduces proper backward slice computation during the hoisting of
PadTensorOp. This allows hoisting padding even across multiple levels of tiling.
Such hoisting requires the proper handling of loop bounds that may depend on enclosing
loop variables.

Differential revision: https://reviews.llvm.org/D98965
2021-03-23 17:47:32 +00:00
Chris Lattner 79d7f618af Rename FrozenRewritePatternList -> FrozenRewritePatternSet; NFC.
This nicely aligns the naming with RewritePatternSet.  This type isn't
as widely used, but we keep a using declaration in place to help with
downstream consumption of this change.
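A minimal sketch of the compatibility shim this describes (illustrative only; the exact in-tree spelling may differ):

```cpp
// C++: keep the old name alive as an alias of the new one so that
// downstream code continues to compile during the transition window.
using FrozenRewritePatternList = FrozenRewritePatternSet;
```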

Differential Revision: https://reviews.llvm.org/D99131
2021-03-22 17:40:45 -07:00
Chris Lattner dc4e913be9 [PatternMatch] Big mechanical rename OwningRewritePatternList -> RewritePatternSet and insert -> add. NFC
This doesn't change APIs; it just cleans up the many in-tree uses of these
names to use the new preferred names. We'll keep the old names around for a
couple of weeks to help transitions.
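A hedged sketch of the rename at a call site; `MyPattern` stands for any RewritePattern subclass and is hypothetical, not from this commit:

```cpp
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

void collectPatterns(MLIRContext *ctx) {
  // Old spelling:
  //   OwningRewritePatternList patterns(ctx);
  //   patterns.insert<MyPattern>(ctx);
  // New preferred spelling: RewritePatternSet with `add`.
  RewritePatternSet patterns(ctx);
  patterns.add<MyPattern>(ctx); // `MyPattern` is hypothetical
}
```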

Differential Revision: https://reviews.llvm.org/D99127
2021-03-22 17:20:50 -07:00
Nicolas Vasilache bcd6424f9b [mlir][Linalg] Fix linalg on tensor fusion
- Drop unnecessary occurrences of rewriter.eraseOp: dead linalg ops on tensors should be cleaned up by DCE.
- Reimplement the part of Linalg-on-tensors fusion that constructs the body and block arguments: the previous implementation had too much magic. Instead, this spells out all cases explicitly and asserts / introduces TODOs for incorrect cases.

As a consequence, we can use the default traversal order for this pattern.

Differential Revision: https://reviews.llvm.org/D99070
2021-03-22 13:29:40 +00:00
Adrian Kuegel c691b9686b [mlir] Add an option to still use bottom-up traversal
GreedyPatternRewriteDriver was changed from bottom-up traversal to top-down traversal. Not all passes work yet with the new traversal order. To give some time for fixing, add an option to switch back to bottom-up traversal. Use this option in FusionOfTensorOpsPass, which fails otherwise.

Differential Revision: https://reviews.llvm.org/D99059
2021-03-22 09:49:44 +01:00
Chris Lattner 3a506b31a3 Change OwningRewritePatternList to carry an MLIRContext with it.
This updates the codebase to pass the context when creating an instance of
OwningRewritePatternList, and starts removing extraneous MLIRContext
parameters.  There are many many more to be removed.
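An illustrative construction-site sketch (assuming the accessor is spelled `getContext()`; verify against the tree):

```cpp
// Before: the list was default-constructed, and every API that populated
// it needed its own MLIRContext parameter.
//   OwningRewritePatternList patterns;
//   populateSomePatterns(patterns, ctx);   // hypothetical helper
// After: the context is supplied once, at construction.
OwningRewritePatternList patterns(ctx);
MLIRContext *sameCtx = patterns.getContext();
```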

Differential Revision: https://reviews.llvm.org/D99028
2021-03-21 10:06:31 -07:00
Benjamin Kramer 6327a7cfd7 [mlir][Linalg] Make LLVM_DEBUG region bigger to avoid warnings in Release builds
Transforms.cpp:586:16: error: unused variable 'v' [-Werror,-Wunused-variable]
    for (Value v : operands)
               ^
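The shape of the fix, as a sketch (`operands`, the debug type, and the debug text are assumptions, not the actual diff):

```cpp
#include "llvm/Support/Debug.h"
#define DEBUG_TYPE "linalg-transforms" // illustrative debug type

// Wrapping the whole loop in LLVM_DEBUG compiles it out of Release
// builds, so `v` can no longer be diagnosed as unused.
LLVM_DEBUG({
  for (Value v : operands)
    llvm::dbgs() << "operand: " << v << "\n";
});
```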
2021-03-19 20:56:59 +01:00
Nicolas Vasilache 5b2d8503d1 [mlir][Linalg] NFC - Expose helper function `substituteMin`. 2021-03-19 16:26:52 +00:00
Lei Zhang fcc1ce0093 Revert "Revert "[mlir] Add linalg.fill bufferization conversion""
This reverts commit c69550c132 with the
proper fix applied.
2021-03-18 17:21:58 -04:00
Mehdi Amini c69550c132 Revert "[mlir] Add linalg.fill bufferization conversion"
This reverts commit 32a744ab20.

CI is broken:

test/Dialect/Linalg/bufferize.mlir:274:12: error: CHECK: expected string not found in input
 // CHECK: %[[MEMREF:.*]] = tensor_to_memref %[[IN]] : memref<?xf32>
           ^
2021-03-18 21:18:07 +00:00
Eugene Zhulenev 32a744ab20 [mlir] Add linalg.fill bufferization conversion
`BufferizeAnyLinalgOp` fails on `FillOp` because it is not a `LinalgGenericOp`: the pattern fails while reading the operand-sizes attribute.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98671
2021-03-18 13:41:16 -07:00
thomasraoux 16947650d5 [mlir][linalg] Extend linalg vectorization to support non-identity input maps
This propagates the affine map to transfer_read op in case it is not a
minor identity map.

Differential Revision: https://reviews.llvm.org/D98523
2021-03-18 12:32:35 -07:00
Julian Gross e2310704d8 [MLIR] Create memref dialect and move dialect-specific ops from std.
Create the memref dialect and move dialect-specific ops
from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D98041
2021-03-15 11:14:09 +01:00
Aart Bik e7ee4eaaf7 [mlir][sparse] disable nonunit stride dense vectorization
This is a temporary work-around to get our all-annotations-all-flags
stress-testing effort to run clean. In the long run, though, we want to
provide efficient implementations of strided loads and stores.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D98563
2021-03-12 16:49:32 -08:00
Inho Seo 2ce4caf414 Moved getStaticLoopRanges and getStaticShape methods to LinalgInterfaces.td to add static shape verification
This makes the methods in LinalgInterfaces.cpp usable for additional static shape verification that matches the shaped operands against the loops of linalg ops. Using the existing free-standing methods would have created a circular linking dependency; now they can be used as methods of LinalgOp.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D98163
2021-03-10 04:06:22 -08:00
Tobias Gysi c1a4cd551f [mlir][linalg] refactor the result handling during vectorization.
Return the vectorization results using a vector passed by reference instead of returning them embedded in a structure.

Differential Revision: https://reviews.llvm.org/D98182
2021-03-09 07:11:57 +00:00
Aart Bik adc35b689f [mlir][sparse] mask reduction update
Reduction updates should be masked, just like the loads and stores.
Note that alternatively, we could use the fact that masked values are
zero for += updates and mask invariants to get this working, but that
would not work for *= updates. Masking the update itself is cleanest.
This change also replaces the constant mask with a broadcast of "true"
since this constant folds much better for various folding patterns.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98000
2021-03-05 08:56:10 -08:00
Nicolas Vasilache c86d3c1a38 [mlir][Linalg] Fix order of dimensions in hoistPaddingOnTensors. 2021-03-05 15:11:35 +00:00
Aart Bik 553cb6d473 [mlir][sparse] fix bug in reduction chain
Found with exhaustive testing: it is possible that a while loop
appears in between chainable for-loops. As long as we don't
scalarize reductions in while loops, this means we need to
terminate the chain at the while. This also refactors the
reduction code into more readable helper methods.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97886
2021-03-03 17:38:22 -08:00
Aart Bik 5b333d3449 [mlir][sparse] do not ignore ordering for "dense" tensor linked with sparse type
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97795
2021-03-02 15:21:51 -08:00
Frederik Gossen bcc9b371e4 Split `ElementwiseMappable` trait into four more precise traits.
Some elementwise operations are not scalarizable, vectorizable, or tensorizable.
Split `ElementwiseMappable` trait into the following, more precise traits.
  - `Elementwise`
  - `Scalarizable`
  - `Vectorizable`
  - `Tensorizable`
This allows for reuse of `Elementwise` in dialects like HLO.

Differential Revision: https://reviews.llvm.org/D97674
2021-03-02 15:31:19 +01:00
KareemErgawy-TomTom 3b021fbdc0 [MLIR][LinAlg] Detensorize internal function control flow.
This patch continues the detensoring implementation by detensoring
internal control flow in functions.

In order to detensorize functions, all the non-entry block's arguments
are detensored and branches between such blocks are properly updated to
reflect the detensored types as well. Function entry block (signature)
is left intact.

This continues work towards handling github/google/iree#1159.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D97148
2021-03-02 11:46:20 +01:00
Aart Bik 6afaea6682 [mlir][sparse] fixed inaccuracy in maintaining universal index
The universal index was maintained if dense indices were still
in place, and lattice points followed. However, it should only
be kept if any of those following lattice points actually
consumes the universal index. This change also fixes an
inaccuracy with a missing broadcast around a vector invariant.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97594
2021-02-27 17:32:57 -08:00
Aart Bik df5ccf5a94 [mlir][vector] add higher dimensional support to gather/scatter
Similar to mask-load/store and compress/expand, the gather and
scatter operations now allow for higher-dimensional uses. Note that
to support the mixed-type index, the new syntax is:
   vector.gather %base [%i,%j] [%kvector] ....
The first client of this generalization is the sparse compiler,
which needs to define scatter and gathers on dense operands
of higher dimensions too.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97422
2021-02-26 14:20:19 -08:00
Christian Sigg dffc487b07 [mlir] Mark OpState::removeAttr() deprecated.
Fix call sites.

The method will be removed 2 weeks later.
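A sketch of the migration (`someOp` and the attribute name are illustrative):

```cpp
// Deprecated: going through OpState directly.
//   someOp.removeAttr("my.attr");
// Preferred: reach the underlying Operation via OpState::operator->().
someOp->removeAttr(Identifier::get("my.attr", someOp->getContext()));
```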

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97530
2021-02-26 12:04:41 +01:00
Aart Bik 17fa919847 [mlir][sparse] incorporate vector index into address computation
When computing a dense address, a vectorized index must be accounted
for properly. This bug was formerly undetected because we get 0 * prev + i
in most cases, which folds away the scalar part. Now it works for all cases.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97317
2021-02-23 13:25:51 -08:00
Nicolas Vasilache 8cf14b8dec [mlir][Linalg] Retire hoistViewAllocOps.
This transformation was only used for quick experimentation and is not general enough.
Retire it.

Differential Revision: https://reviews.llvm.org/D97266
2021-02-23 11:45:19 +00:00
KareemErgawy-TomTom 67e0d58de4 [MLIR][LinAlg] Start detensoring implementation.
This commit is the first baby step towards detensoring in
linalg-on-tensors.

Detensoring is the process through which a tensor value is converted to one
or potentially more primitive value(s). During this process, operations with
such detensored operands are also converted to an equivalent form that works
on primitives.

The detensoring process is driven by linalg-on-tensor ops. In particular, a
linalg-on-tensor op is checked to see whether *all* its operands can be
detensored. If so, those operands are converted to their primitive
counterparts and the linalg op is replaced by an equivalent op that takes
those new primitive values as operands.

This works towards handling github/google/iree#1159.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96271
2021-02-23 08:27:58 +01:00
Aart Bik 0df59f234b [sparse][mlir] simplify lattice optimization logic
Simplifies the way lattices are optimized with fewer, but more
powerful, rules. This also fixes an inaccuracy where too many
lattices resulted (expecting a non-existing universal index).
Also puts no-side-effects on all proper getters and unifies
bufferization flags order in integration tests (for future,
more complex use cases).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97134
2021-02-22 16:52:06 -08:00
Nicolas Vasilache 62f5c46eec [mlir][Linalg] NFC - Expose more options to the CodegenStrategy 2021-02-19 14:01:44 +00:00
Alexander Belyaev a89035d750 Revert "[MLIR] Create memref dialect and move several dialect-specific ops from std."
This commit introduced a cyclic dependency:
Memref dialect depends on Standard because it used ConstantIndexOp.
Std depends on the MemRef dialect in its EDSC/Intrinsics.h

Working on a fix.

This reverts commit 8aa6c3765b.
2021-02-18 12:49:52 +01:00
Julian Gross 8aa6c3765b [MLIR] Create memref dialect and move several dialect-specific ops from std.
Create the memref dialect and move several dialect-specific ops without
dependencies to other ops from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
DeallocOp -> MemRef_DeallocOp
MemRefCastOp -> MemRef_CastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
TransposeOp -> MemRef_TransposeOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D96425
2021-02-18 11:29:39 +01:00
Aart Bik ff6c84b803 [mlir][sparse] generalize sparse storage format to many more types
Rationale:
Narrower types for overhead storage yield a smaller memory footprint for
sparse tensors and thus need to be supported. Also, more value types
need to be supported to deal with all kinds of kernels. Since the
"one-size-fits-all" sparse storage scheme implementation is used
instead of actual codegen, the library needs to be able to support
all combinations of desired types. With some crafty templating and
overloading, the actual code for this is kept reasonably sized though.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D96819
2021-02-17 18:20:23 -08:00
Nicolas Vasilache 21debeae78 [mlir][Linalg] Generalize vector::transfer hoisting on tensors.
This revision adds support for hoisting "subtensor + vector.transfer_read" / "subtensor_insert + vector.transfer_write" pairs across scf.for.
The unit of hoisting becomes a HoistableRead / HoistableWrite struct which contains a pair of "vector.transfer_read + optional subtensor" / "vector.transfer_write + optional subtensor_insert".
scf::ForOp canonicalization patterns are applied greedily on the successful application of the transformation to cleanup the IR more eagerly and potentially expose more transformation opportunities.

Differential revision: https://reviews.llvm.org/D96731
2021-02-16 09:45:14 +00:00
Nicolas Vasilache d01ea0edaa [mlir] Drop reliance of SliceAnalysis on specific ops.
SliceAnalysis originally was developed in the context of affine.for within mlfunc.
It predates the notion of region.
This revision updates it to not hardcode specific ops like scf::ForOp.
When rooted at an op, the behavior of the slice computation changes as it recurses into the regions of the op. It no longer supports gathering all values transitively depending on a loop induction variable.
Additional variants rooted at a Value are added to also support the existing behavior.

Differential revision: https://reviews.llvm.org/D96702
2021-02-16 06:34:32 +00:00
Nicolas Vasilache 428bc6feed [mlir][Linalg] Fix constant detection in linalg.pad_tensor vectorization. 2021-02-14 15:53:39 +00:00
Mehdi Amini aa4e466caa [mlir][Linalg] Improve region support in Linalg ops
This revision takes advantage of the newly extended `ref` directive in assembly format
to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization.

This reverts commit 3f22547fd1 and relands 973e133b76 with a workaround for
a gcc bug that does not accept lambda default parameters:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59949

Differential Revision: https://reviews.llvm.org/D96598
2021-02-12 19:11:24 +00:00
Mehdi Amini 3f22547fd1 Revert "[mlir][Linalg] Improve region support in Linalg ops."
This reverts commit 973e133b76.

It triggers an issue in gcc5 that requires investigation; the build is
broken with:

/tmp/ccdpj3B9.s: Assembler messages:
/tmp/ccdpj3B9.s:5821: Error: symbol `_ZNSt17_Function_handlerIFvjjEUljjE2_E9_M_invokeERKSt9_Any_dataOjS6_' is already defined
/tmp/ccdpj3B9.s:5860: Error: symbol `_ZNSt14_Function_base13_Base_managerIUljjE2_E10_M_managerERSt9_Any_dataRKS3_St18_Manager_operation' is already defined
2021-02-12 18:15:51 +00:00
Nicolas Vasilache 973e133b76 [mlir][Linalg] Improve region support in Linalg ops.
This revision takes advantage of the newly extended `ref` directive in assembly format
to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization.

Differential Revision: https://reviews.llvm.org/D96598
2021-02-12 14:51:03 +00:00
Stephan Herhut 4348d8ab7f [mlir][math] Split off the math dialect.
This does not split transformations, yet. Those will be done as future clean ups.

Differential Revision: https://reviews.llvm.org/D96272
2021-02-12 10:55:12 +01:00
Nicolas Vasilache 5bc4f8846c [mlir] Tighten computation of inferred SubView result type.
The AffineMap in the MemRef inferred by SubViewOp may have uncompressed symbols, which result in type mismatches on otherwise unused symbols. Make the computation of the AffineMap compress those unused symbols, which results in better canonical types.
Additionally, improve the error message to report which inferred type was expected.

Differential Revision: https://reviews.llvm.org/D96551
2021-02-11 22:38:16 +00:00
Hanhan Wang 9325b8da17 [mlir][Linalg] Add conv ops with TF definition.
The dimension order of a filter in TensorFlow is
[filter_height, filter_width, in_channels, out_channels], which is different
from the current definition. The current definition follows the TOSA spec. Add TF
version conv ops to .tc, so we do not have to insert a transpose op around a
conv op.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D96038
2021-02-10 22:59:38 -08:00
Sanjoy Das bac1f12727 NFC; fix typo in comment
This should have gone in with a76761cf0d.
2021-02-10 21:34:29 -08:00
Sanjoy Das a76761cf0d NFC comment-only cleanups
- Remove leftover comment from de2568aab8
- Fix a typo in a comment
2021-02-10 21:30:52 -08:00
Nicolas Vasilache 4643fd27c8 [mlir][Linalg] Fix crash when tileSizeComputationFunction is left unspecified 2021-02-10 22:47:05 +00:00
Aart Bik 0b1764a3d7 [mlir][sparse] sparse tensor storage implementation
This revision connects the generated sparse code with an actual
sparse storage scheme, which can be initialized from a test file.
Lacking a first-class citizen SparseTensor type (with buffer),
the storage is hidden behind an opaque pointer with some "glue"
to bring the pointer back to tensor land. Rather than generating
sparse setup code for each different annotated tensor (viz. the
"pack" methods in TACO), a single "one-size-fits-all" implementation
has been added to the runtime support library.  Many details and
abstractions need to be refined in the future, but this revision
allows full end-to-end integration testing and performance
benchmarking (with, on one end, an annotated Linalg
op and, on the other end, a JIT/AOT executable).

Reviewed By: nicolasvasilache, bixia

Differential Revision: https://reviews.llvm.org/D95847
2021-02-10 11:57:24 -08:00
Nicolas Vasilache 0ac3d97bf4 [mlir][Linalg] Fix pad hoisting.
This revision fixes the indexing logic into the packed tensor that results from hoisting padding. Previously, the index was incorrectly set to the loop induction variable when in fact we need to compute the iteration count (i.e. `(iv - lb).ceilDiv(step)`).
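A sketch of that iteration-count expression built with AffineExpr overloads (the helper name is hypothetical; d0 stands for the induction variable, s0/s1 for the lower bound and step):

```cpp
#include "mlir/IR/AffineExpr.h"
using namespace mlir;

// Builds (iv - lb).ceilDiv(step), the number of iterations executed
// before `iv`, rather than using `iv` itself as the packing index.
AffineExpr buildIterationCount(MLIRContext *ctx) {
  AffineExpr iv = getAffineDimExpr(0, ctx);
  AffineExpr lb = getAffineSymbolExpr(0, ctx);
  AffineExpr step = getAffineSymbolExpr(1, ctx);
  return (iv - lb).ceilDiv(step);
}
```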

Differential Revision: https://reviews.llvm.org/D96417
2021-02-10 16:49:38 +00:00
Nicolas Vasilache bb69de3f41 [mlir][Linalg] Add a vectorization pattern for linalg::PadTensorOp
The new pattern is exercised from the TestLinalgTransforms pass.

Differential Revision: https://reviews.llvm.org/D96410
2021-02-10 14:13:49 +00:00
Nicolas Vasilache d57a305fdf [mlir][Linalg] Fix padding related bugs.
This revision fixes the padding transformation, which did not have enough information to set the proper type for the padding value.
Additionally, the verifier for Yield in the presence of PadTensorOp is fixed to properly report an incorrect number of results or operands. Previously, the error was silently ignored, which made the core issue difficult to debug.

Differential Revision: https://reviews.llvm.org/D96264
2021-02-08 18:59:24 +00:00
Tres Popp c2c83e97c3 Revert "Revert "Reorder MLIRContext location in BuiltinAttributes.h""
This reverts commit 511dd4f438 along with
a couple fixes.

Original message:
Now the context is the first, rather than the last input.

This better matches the rest of the infrastructure and makes
it easier to move these types to being declaratively specified.

Phabricator: https://reviews.llvm.org/D96111
2021-02-08 10:39:58 +01:00
Tres Popp 511dd4f438 Revert "Reorder MLIRContext location in BuiltinAttributes.h"
This reverts commit 7827753f98.
2021-02-08 09:32:42 +01:00
Tres Popp 7827753f98 Reorder MLIRContext location in BuiltinAttributes.h
Now the context is the first, rather than the last input.

This better matches the rest of the infrastructure and makes
it easier to move these types to being declaratively specified.
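An illustrative before/after for one builtin attribute (a sketch; exact overloads may differ):

```cpp
#include "mlir/IR/BuiltinAttributes.h"
using namespace mlir;

Attribute makeAttr(MLIRContext *ctx) {
  // Before: StringAttr::get("hello", ctx);  // context came last
  // After: the context leads the parameter list.
  return StringAttr::get(ctx, "hello");
}
```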

Differential Revision: https://reviews.llvm.org/D96111
2021-02-08 09:28:09 +01:00
Nicolas Vasilache 0fcbbde2c7 [mlir][Linalg] NFC - Refactor vectorization to be more composable
Differential Revision: https://reviews.llvm.org/D96116
2021-02-05 12:03:14 +00:00
River Riddle e21adfa32d [mlir] Mark LogicalResult as LLVM_NODISCARD
This makes ignoring a result an explicit act by the user, and helps prevent accidental errors from dropped results. Marking LogicalResult as nodiscard was always the intention from the beginning, but got lost along the way.
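What this means at a call site, sketched under the assumption of a function `doTransform` returning LogicalResult:

```cpp
#include "mlir/Support/LogicalResult.h"
using namespace mlir;

LogicalResult doTransform(); // hypothetical

void caller() {
  // doTransform();            // now diagnosed: result silently dropped
  if (failed(doTransform()))   // handle the failure explicitly...
    return;
  (void)doTransform();         // ...or discard it explicitly
}
```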

Differential Revision: https://reviews.llvm.org/D95841
2021-02-04 15:10:10 -08:00
Mehdi Amini 215441fcb7 Remove dead code from Linalg vectorization to fix GCC warning (NFC) 2021-02-04 17:37:25 +00:00
Nicolas Vasilache e4a503a26d [mlir][Linalg] Introduce a ContractionOpInterface
This revision takes advantage of recent extensions to vectorization to refactor contraction detection into a bona fide Linalg interface.
The mlir-linalg-ods-gen parser is extended to support adding such interfaces.
The detection that originally enabled vectorization is refactored to serve both as a check on a generic LinalgOp and to verify ops that declare conformance to that interface.

This is plugged through Linalg transforms and strategies, but it quickly becomes evident that the complexity and rigidity of the C++ class-based templating does not pay for itself.
Therefore, this revision changes the API for vectorization patterns to get rid of templates as much as possible.
Variadic templates are relegated to the internals of LinalgTransformationFilter as much as possible and away from the user-facing APIs.

It is expected other patterns / transformations will follow the same path and drop as much C++ templating as possible from the class definition.

Differential revision: https://reviews.llvm.org/D95973
2021-02-04 16:53:24 +00:00
Nicolas Vasilache f4ac9f0334 [mlir][Linalg] Drop SliceOp
This op is subsumed by rank-reducing SubViewOp and has become useless.

Differential revision: https://reviews.llvm.org/D95317
2021-02-04 11:22:01 +00:00
Nicolas Vasilache f245b7ad36 [mlir][Linalg] Generalize the definition of a Linalg contraction.
This revision defines a Linalg contraction in general terms:

  1. Has two input shapes and one output shape.
  2. Has at least one reduction dimension.
  3. Has only projected permutation indexing maps.
  4. Its body computes `u5(u1(c) + u2(u3(a) * u4(b)))` on some field
    (AddOpType, MulOpType), where u1, u2, u3, u4 and u5 represent scalar unary
    operations that may change the type (e.g. for mixed-precision).

As a consequence, when vectorization of such an op occurs, the only special
behavior is that the (unique) MulOpType is vectorized into a
`vector.contract`. All other ops are handled in a generic fashion.

 In the future, we may wish to allow more input arguments and elementwise and
 constant operations that do not involve the reduction dimension(s).

A test is added to demonstrate the proper vectorization of matmul_i8_i8_i32.

Differential revision: https://reviews.llvm.org/D95939
2021-02-04 07:50:44 +00:00
Benjamin Kramer 94f540cc7c [mlir][Linalg] Fix unused variable warning in Release builds. NFC. 2021-02-02 12:59:41 +01:00
Nicolas Vasilache 0a2a260aab [mlir][Linalg] Refactor Linalg vectorization for better reuse and extensibility.
This revision unifies Linalg vectorization and paves the way for vectorization of Linalg ops with mixed-precision operations.
The new algorithm traverses the ops in the linalg block in order and avoids recursion.
It uses a BlockAndValueMapping to keep track of vectorized operations.

The revision makes the following modifications but is otherwise NFC:
1. vector.transfer_read ops are created eagerly and may appear in a different order than the original order.
2. A more progressive vectorization to vector.contract results in only the multiply operation being converted to `vector.contract %a, %b, %zero`, where `%zero` is a
constant of the proper type. Later vector canonicalizations are assumed to rewrite `vector.contract %a, %b, %zero` + add to a proper accumulate form.

Differential revision: https://reviews.llvm.org/D95797
2021-02-02 11:31:09 +00:00
Hanhan Wang b3f611bfe7 [mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding
This is the last revision in migrating from SimplePadOp to PadTensorOp, and
SimplePadOp is removed in the patch. SliceAnalysis is updated slightly because
PadTensorOp takes a region, unlike SimplePadOp; this is not covered by
LinalgOp because it is not a structured op.

Also, remove a duplicated comment from the cpp file, which is already described
in a header file, and update the pseudo-MLIR in the comment.

This is the same as D95615 but fixes one dep in CMakeLists.txt.

Unlike D95671, the fix is applied to the run target.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D95785
2021-02-01 11:38:43 -08:00
Tres Popp 2790cbedd0 Revert "[mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding"
This reverts commit d9b953d84b.

This commit resulted in build bot failures and the author is away from a
computer, so I am reverting on their behalf until they have a chance to
look into this.
2021-02-01 09:43:55 +01:00
Hanhan Wang d9b953d84b [mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding
This is the last revision in migrating from SimplePadOp to PadTensorOp, and
SimplePadOp is removed in the patch. SliceAnalysis is updated slightly because
PadTensorOp takes a region, unlike SimplePadOp; this is not covered by
LinalgOp because it is not a structured op.

Also, remove a duplicated comment from the cpp file, which is already described
in a header file, and update the pseudo-MLIR in the comment.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D95671
2021-02-01 00:02:37 -08:00
MaheshRavishankar 98835e3d98 [mlir][Linalg] Enable TileAndFusePattern to work with tensors.
Differential Revision: https://reviews.llvm.org/D94531
2021-01-28 14:13:01 -08:00
Hanhan Wang 2c7cc5fd20 Revert "[mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding"
This reverts commit 1e790b745d.

Differential Revision: https://reviews.llvm.org/D95636
2021-01-28 11:25:02 -08:00
Hanhan Wang 1e790b745d [mlir][Linalg] Replace SimplePad with PadTensor in hoist-padding
This is the last revision in migrating from SimplePadOp to PadTensorOp, and
SimplePadOp is removed in the patch. SliceAnalysis is updated slightly because
PadTensorOp takes a region, unlike SimplePadOp; this is not covered by
LinalgOp because it is not a structured op.

Also, remove a duplicated comment from the cpp file, which is already described
in a header file, and update the pseudo-MLIR in the comment.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D95615
2021-01-28 11:09:57 -08:00
Hanhan Wang c818fa6729 [mlir][Linalg] Replace SimplePad with PadTensor in tile-and-pad
This revision adds a build method to PadTensorOp which can be mapped to the
SimplePad op. The verifier is updated to accept a static custom result type,
which has the same semantics as SimplePadOp.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D95555
2021-01-28 06:50:26 -08:00
Nicolas Vasilache 299cc5da6d [mlir][Linalg] Further improve codegen strategy and add a linalg.matmul_i8_i8_i32
This revision adds a layer of SFINAE to the composable codegen strategy so it does
not have to require statically defined ops but instead can also be used with OpInterfaces, Operation* and an op name string.

A linalg.matmul_i8_i8_i32 is added to the .tc spec to demonstrate how all this works end to end.

Differential Revision: https://reviews.llvm.org/D95600
2021-01-28 13:02:42 +00:00
Nicolas Vasilache d0c9fb1b8e [mlir][Linalg] Improve codegen strategy
This revision improves the usage of the codegen strategy by adding a few flags that
make it easier to control from the CLI.
Usage of ModuleOp is replaced by FuncOp as this created issues in multi-threaded mode.

A simple benchmarking capability is added for linalg.matmul as well as linalg.matmul_column_major.
This latter op is also added to linalg.

Now-obsolete linalg integration tests that also take too long are deleted.

Correctness checks are still missing at this point.

Differential revision: https://reviews.llvm.org/D95531
2021-01-28 10:59:16 +00:00
Alex Zinenko 91bd1156f3 [mlir] drop unused statics 2021-01-26 13:30:45 +01:00
Nicolas Vasilache 05d5125d8a [mlir] Generalize OpFoldResult usage in ops with offsets, sizes and operands.
This revision starts evolving the APIs to manipulate ops with offsets, sizes and operands towards a ValueOrAttr abstraction that is already used in folding under the name OpFoldResult.

The objective, in the future, is to allow such manipulations all the way to the level of ODS to avoid all the genuflexions involved in distinguishing between values and attributes for generic constant foldings.

Once this evolution is accepted, the next step will be a mechanical OpFoldResult -> ValueOrAttr.
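`OpFoldResult` is a `PointerUnion<Attribute, Value>`; a hedged sketch of dispatching on one (the helper is hypothetical):

```cpp
#include "mlir/Dialect/StandardOps/IR/Ops.h"
using namespace mlir;

// Materialize an OpFoldResult as an SSA value: reuse the Value when
// present, otherwise build a constant from the Attribute.
Value materializeOpFoldResult(OpBuilder &b, Location loc, OpFoldResult ofr) {
  if (Value v = ofr.dyn_cast<Value>())
    return v;
  auto intAttr = ofr.get<Attribute>().cast<IntegerAttr>();
  return b.create<ConstantIndexOp>(loc, intAttr.getInt());
}
```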

Differential Revision: https://reviews.llvm.org/D95310
2021-01-25 14:17:03 +00:00
Nicolas Vasilache 52e25523a9 [mlir][Linalg] Fix incorrect erase order 2021-01-25 14:04:06 +00:00
Nicolas Vasilache 68eee55ce6 [mlir][Linalg] Address missed review item
This revision addresses a remaining comment that was overlooked in https://reviews.llvm.org/D95243:
the pad hoisting transformation is made to additionally bail out on side-effecting ops other than LoopLikeOps.
2021-01-25 13:47:44 +00:00
Nicolas Vasilache dbf9bedf40 [mlir][Linalg] Add a hoistPaddingOnTensors transformation
This transformation anchors on a padding op whose result is only used as an input
to a Linalg op and pulls it out of a given number of loops.
The result is a packing of padded tiles that is amortized just before
the outermost loop from which the pad operation is hoisted.

Differential revision: https://reviews.llvm.org/D95243
2021-01-25 12:41:18 +00:00
Nicolas Vasilache 3747eb9c85 [mlir][Linalg] Add a padding option to Linalg tiling
This revision allows the base Linalg tiling pattern to optionally require padding to
a constant bounding shape.
When requested, a simple analysis is performed, similar to buffer promotion.
A temporary `linalg.simple_pad` op is added to model padding for the purpose of
connecting the dots. This will be replaced by a more fleshed out `linalg.pad_tensor`
op when it is available.
In the meantime, this temporary op serves the purpose of exhibiting the necessary
properties required from a more fleshed out pad op, to compose with transformations
properly.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D95149
2021-01-25 09:17:30 +00:00
MaheshRavishankar 430d43e010 [mlir][Linalg] Disable fusion of tensor_reshape op by expansion when unit-dims are involved
Fusion of generic/indexed_generic operations with tensor_reshape by
expansion when the latter just adds/removes unit-dimensions is
disabled, since it just adds unit trip-count loops.

Differential Revision: https://reviews.llvm.org/D94626
2021-01-22 12:55:25 -08:00
MaheshRavishankar 01defcc8d7 [mlir][Linalg] Extend tile+fuse to work on Linalg operation on tensors.
Differential Revision: https://reviews.llvm.org/D93086
2021-01-22 11:33:35 -08:00
MaheshRavishankar bce318f58d [mlir][Linalg] NFC: Refactor LinalgDependenceGraphElem to allow
representing dependence from producer result to consumer.

With Linalg on tensors the dependence between operations can be from
the result of the producer to the consumer. This change just does a
NFC refactoring of the LinalgDependenceGraphElem to allow representing
both OpResult and OpOperand*.

Differential Revision: https://reviews.llvm.org/D95208
2021-01-22 11:19:59 -08:00
Nicolas Vasilache 8dd58a509c [mlir][Linalg] NFC - Fully compose map and operands when creating AffineMin in tiling.
This may simplify the composition of patterns but is otherwise NFC.
2021-01-20 20:36:18 +00:00
Nicolas Vasilache c075572646 [mlir][Linalg] NFC - Expose getSmallestBoundingIndex as an utility function 2021-01-20 19:53:09 +00:00
Aart Bik b5c542d64b [mlir][sparse] add narrower choices for pointers/indices
Use cases with 16- or even 8-bit pointer/index structures have been identified.

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D95015
2021-01-19 20:20:38 -08:00
Mehdi Amini 7dadcd02d6 Fix a few GCC compiler warnings (NFC) 2021-01-19 06:00:04 +00:00
Thomas Raoux fd2083d73c [mlir] Fixing potential build break in my previous commit 2021-01-15 17:38:16 -08:00
Thomas Raoux 3afbfb4145 [mlir][NFC] Move helper substWithMin into Affine utils
This allows using this helper outside of the linalg canonicalization.

Differential Revision: https://reviews.llvm.org/D94826
2021-01-15 17:13:56 -08:00
Aart Bik 5508516b06 [mlir][sparse] retry sparse-only for cyclic iteration graphs
This is a very minor improvement during iteration graph construction.
If the first attempt considering the dimension order of all tensors fails,
a second attempt is made using the constraints of sparse tensors only.
Dense tensors prefer dimension order (locality) but provide random access
if needed, enabling the compilation of more sparse kernels.

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D94709
2021-01-14 22:39:29 -08:00
MaheshRavishankar 42444d0cf0 [mlir][Linalg] NFC: Verify tiling on linalg.generic operation on tensors.
With the recent changes to linalg-on-tensors semantics, the tiling
operations work out of the box for generic operations. Add a test to
verify that, plus some minor refactoring.

Differential Revision: https://reviews.llvm.org/D93077
2021-01-14 16:17:08 -08:00
Aart Bik f4f158b2f8 [mlir][sparse] add vectorization strategies to sparse compiler
Similar to the parallelization strategies, the vectorization strategies
provide control over which loops should be vectorized. Unlike the parallel
strategies, only innermost loops are considered (though including reductions),
with control over vectorizing dense loops only or both dense and sparse loops.

The vectorized loops are always controlled by a vector mask to avoid
overrunning the iterations, but subsequent vector operation folding removes
redundant masks and replaces the operations with more efficient counterparts.
Similarly, we will rely on subsequent loop optimizations to further optimize
masking, e.g. using an unconditional full vector loop and scalar cleanup loop.

The current strategy already demonstrates a nice interaction between the
sparse compiler and all prior optimizations that went into the vector dialect.

Ongoing discussion at:
https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020/10

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D94551
2021-01-13 11:55:23 -08:00
David Blaikie 0d88d7d82b Delete unused function (was breaking the -Werror build) 2021-01-12 15:29:44 -08:00
Nicolas Vasilache 80f0785488 [mlir][Linalg] NFC - Refactor fusion APIs
This revision uniformizes fusion APIs to allow passing OpOperand and OpResult, and adds a finer level of control over fusion.

Differential Revision: https://reviews.llvm.org/D94493
2021-01-12 14:27:15 +00:00
Rob Suderman f75f391fc6 [MLIR][Linalg] Refactor transforms to use linalg::getDynOperands helper
getDynOperands behavior is commonly used in a number of passes. Refactored to
use a helper function and avoid code duplication.

Differential Revision: https://reviews.llvm.org/D94340
2021-01-11 16:24:59 -08:00
MaheshRavishankar c4486cfd55 [mlir][Linalg] Fix reshape fusion to reshape the outs instead of creating new tensors.
When fusing tensor_reshape ops with generic/indexed_generic ops, new
linalg.init_tensor operations were created for the `outs` of the fused
op. While technically correct, it is better to just reshape the
original `outs` operands and rely on canonicalization of init_tensor
-> tensor_reshape to achieve the same effect.

Differential Revision: https://reviews.llvm.org/D93774
2021-01-11 09:26:22 -08:00
Lei Zhang 55225471d9 [mlir][linalg] Support permutation when lowering to loop nests
Linalg ops are perfect loop nests. When materializing the concrete
loop nest, the default order specified by the Linalg op's iterators
may not be the best for further CodeGen: targets frequently need
to plan the loop order in order to gain better data access. And
different targets can have different preferences. So there should
exist a way to control the order.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D91795
2021-01-11 09:13:06 -05:00
MaheshRavishankar fa8c397dfa [mlir][Linalg] NFC: Refactor fusion of LinalgOp with TensorReshapeOp by expansion.
Change the implementation of fusing LinalgOp with TensorReshapeOp by
expansion to be more modular and easier to follow.

Differential Revision: https://reviews.llvm.org/D93748
2021-01-08 11:58:19 -08:00
Kazuaki Ishizaki f88fab5006 [mlir] NFC: fix trivial typos
fix typos under the include and lib directories

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D94220
2021-01-08 02:10:12 +09:00
Thomas Raoux efd05040e1 [mlir] Add hoisting transformation for transfer ops on tensor
Add the same hoisting transformation that exists for transfer ops on buffers for
transfer ops on tensors. The logic is significantly different, so this is done as
a separate transformation, and it is expected that the user knows which
transformation to use based on the flow.

Differential Revision: https://reviews.llvm.org/D94115
2021-01-06 14:23:59 -08:00
Aart Bik 8b124c19f5 [mlir][sparse] adjust output shape inference to new tensor abstraction
Nicolas changed the tensor abstraction so that every output has
its own shape definition. This simplifies the "inference" that
was used in the sparse compiler.

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D94119
2021-01-05 15:31:39 -08:00
Thomas Raoux cf216670a0 [mlir][linalg] Add vectorization for linalg on tensor ops
Support vectorization of linalg ops using tensor inputs/outputs.

Differential Revision: https://reviews.llvm.org/D93890
2020-12-29 09:02:23 -08:00
Aart Bik 9a8cab8bac [mlir][sparse] adjust output tensor to synthetic tensor
Fixes a merge conflict with the previous two CLs.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D93664
2020-12-21 14:13:54 -08:00
nicolasvasilache b7ae1d3d2b [mlir][Linalg] Revisit the Linalg on tensors abstraction
This revision drops init_tensor arguments from Linalg on tensors and instead uniformizes the output buffers and output tensors to be consistent.
This significantly simplifies the usage of Linalg on tensors and is a stepping stone for
its evolution towards a mixed tensor and shape abstraction discussed in https://llvm.discourse.group/t/linalg-and-shapes/2421/19.

Differential Revision: https://reviews.llvm.org/D93469
2020-12-21 12:29:10 -08:00
Thomas Raoux 26c8f9081b [mlir][vector] Extend Transfer read/write ops to support tensor types.
Transfer ops can now work on both buffers and tensors. Right now, lowering of
the tensor case is not supported yet.

Differential Revision: https://reviews.llvm.org/D93500
2020-12-21 08:55:04 -08:00
Aart Bik 14da25b4b2 [mlir][sparse] scalarize reductions in for-loops during sparse codegen
Reductions in innermost loops become harder for the backend to disambiguate
after bufferization into memrefs, resulting in less efficient load-update-store
cycles. By scalarizing innermost reductions, the backend is more likely to assign
a register to perform the reduction (also prepares vectorization). Even though
we could scalarize reductions for more outer loops and while-loops as well,
currently scalarization is only done for chains of innermost for-loops, where
it matters most, to avoid complicating codegen unnecessarily (viz. adding lots
of yield instructions).

This CL also refactors condition simplification into the merger class,
where it belongs, so that conditions are simplified only once per loop
nest and not repeatedly as was previously done. This CL also fixes a few
minor bugs, some layout issues, and comments.

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D93143
2020-12-17 16:12:21 -08:00
Sean Silva 129d6e554e [mlir] Move `std.tensor_cast` -> `tensor.cast`.
This is almost entirely mechanical.

Differential Revision: https://reviews.llvm.org/D93357
2020-12-17 16:06:56 -08:00
River Riddle 1b97cdf885 [mlir][IR][NFC] Move context/location parameters of builtin Type::get methods to the start of the parameter list
This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified.
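An illustrative before/after (a sketch):

```cpp
#include "mlir/IR/BuiltinTypes.h"
using namespace mlir;

Type makeI32(MLIRContext *ctx) {
  // Before: IntegerType::get(32, ctx);  // context trailed the parameters
  // After: the context comes first.
  return IntegerType::get(ctx, 32);
}
```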

Differential Revision: https://reviews.llvm.org/D93432
2020-12-17 13:01:36 -08:00
Tres Popp 922d3d5522 [mlir] Allow nested regions in inlineRegionAndEmitStore
This is useful for scalar code that uses for/while loops.
This has also been confirmed to work for representing std.pow as an
scf.for loop on GPUs.

Differential Revision: https://reviews.llvm.org/D93308
2020-12-15 21:02:57 +01:00
Thomas Raoux 8955e9f6b7 [mlir][linalg] Fix bug in elementwise vectorization
Fix a bug that caused picking the wrong vector size to broadcast to when the source
vectors have different ranks.

Differential Revision: https://reviews.llvm.org/D93118
2020-12-14 10:44:36 -08:00
Christian Sigg 1ffc1aaa09 [mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation.
This is a preparation step to remove those methods from OpState.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D93098
2020-12-13 09:58:16 +01:00
Christian Sigg 0bf4a82a5a [mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation. This is a preparation step to remove the corresponding methods from OpState.
Reviewed By: silvas, rriddle

Differential Revision: https://reviews.llvm.org/D92878
2020-12-09 12:11:32 +01:00
Aart Bik 74cd9e587d [mlir][sparse] hoist loop invariant tensor loads in sparse compiler
After bufferization, the backend has much more trouble hoisting loop invariant
loads from the loops generated by the sparse compiler. Therefore, this is done
during sparse code generation. Note that we don't bother hoisting derived
invariant expressions on SSA values, since the backend does that very well.

Still TBD: scalarize reductions to avoid load-add-store cycles

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D92534
2020-12-07 11:59:48 -08:00
Nicolas Vasilache 2c66b6ec09 [mlir][Linalg] NFC - Expose tiling canonicalization patterns through a populate method 2020-12-04 14:57:29 +00:00
Nicolas Vasilache a1cd559ce5 [mlir][Linalg] Properly use distribution options.
Let tiling to scf.for actually use the distribution method.
For now only Cyclic is supported.

Differential Revision: https://reviews.llvm.org/D92653
2020-12-04 14:00:54 +00:00
Hanhan Wang f5f1a5c244 [mlir][Linalg] Handle fusion on tensors for projected permutation.
In the past, the reshape op could be folded only if the indexing map was a
permutation in the consumer's usage. We can relax the condition to projected
permutations.

This patch still limits the fusion for scalar cases. The scalar case is a corner
case, because we need to decide where to put extra dims.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D92466
2020-12-03 23:11:29 -08:00
Thomas Raoux c503dc1b8a [mlir][linalg] Add vectorization for element-wise linalg ops
Add support for vectorization for linalg.generic representing element-wise ops.
Those are converted to transfer_read + vector ops + transfer_write.
Also re-organize the vectorization tests to be together.

Implementation derived from the work of @burmako, @agrue and
@fedelebron.

Differential Revision: https://reviews.llvm.org/D92540
2020-12-03 15:31:13 -08:00
Christian Sigg c4a0405902 Add `Operation* OpState::operator->()` to provide more convenient access to members of Operation.
Given that OpState already implicitly converts to Operation*, this seems reasonable.

The alternative would be to add more functions to OpState which forward to Operation.
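A sketch of the convenience (`someOp`, `name` and `value` are illustrative):

```cpp
// Before: spell out the underlying Operation explicitly.
//   someOp.getOperation()->setAttr(name, value);
// After: OpState::operator->() forwards to the Operation directly.
someOp->setAttr(name, value);
```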

Reviewed By: rriddle, ftynse

Differential Revision: https://reviews.llvm.org/D92266
2020-12-02 15:46:20 +01:00
Aart Bik d5f0d0c0c4 [mlir][sparse] add ability to select pointer/index storage type
This change gives sparse compiler clients more control over selecting
individual types for the pointers and indices in the sparse storage schemes.
Narrower width obviously results in smaller memory footprints, but the
range should always suffice for the maximum number of entries or index value.

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D92126
2020-11-25 17:32:44 -08:00
Sean Silva 5488a6b0ff [NFC] Fix pattern name.
It still had the old name from before ElementwiseMappable was added.
2020-11-25 16:10:34 -08:00
Aart Bik 5c4e397e6c [mlir][sparse] add parallelization strategies to sparse compiler
This CL adds the ability to request different parallelization strategies
for the generated code. Every "parallel" loop is a candidate, and is converted
to a parallel op if it is an actual for-loop (not a while) and the strategy
allows dense/sparse outer/inner parallelization.

This will connect directly with the work of @ezhulenev on parallel loops.

Still TBD: vectorization strategy

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D91978
2020-11-24 17:17:13 -08:00
Aart Bik b228e2bd92 [mlir][sparse] generalize invariant expression handling in sparse compiler
Generalizes invariant handling to anything defined outside the Linalg op
(parameters and SSA computations). Fixes a bug that used the parameter number
as the tensor number.

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D91985
2020-11-24 13:41:14 -08:00
Nicolas Vasilache c247081025 [mlir] NFC - Refactor and expose a helper printOffsetSizesAndStrides helper function.
Print part of an op of the form:
```
  <optional-offset-prefix>`[` offset-list `]`
  <optional-size-prefix>`[` size-list `]`
  <optional-stride-prefix>`[` stride-list `]`
```

Also address some leftover nits.

Differential revision: https://reviews.llvm.org/D92031
2020-11-24 20:00:59 +00:00
Alexander Belyaev fd92c5dbee [mlir][linalg] Add bufferization pattern for `linalg.indexed_generic`.
Differential Revision: https://reviews.llvm.org/D92014
2020-11-24 11:14:21 +01:00
MaheshRavishankar 11ea2e2448 [mlir][Linalg] NFC: Expose some utility functions used for promotion.
Exposing some utility functions from Linalg to allow for promotion of
fused views outside of the core tile+fuse logic.
This is an alternative to patch D91322, which adds the promotion logic
to the tileAndFuse method. The downside of that approach is that it is
not easily customizable based on needs.

Differential Revision: https://reviews.llvm.org/D91503
2020-11-23 10:35:42 -08:00
MaheshRavishankar e65a5e5b00 [mlir][Linalg] Fuse sequence of Linalg operation (on buffers)
Enhance the tile+fuse logic to allow fusing a sequence of operations.

Make sure the value used to obtain the tile shape is a
SubViewOp/SubTensorOp. The current logic used to get the loop bounds
depends on the use of the `getOrCreateRange` method on `SubViewOp` and
`SubTensorOp`. Make sure that the value/dim used to compute the range
is from such ops. This fix is a reasonable WAR, but a better fix would
be to make `getOrCreateRange` a method of `ViewInterface`.

Differential Revision: https://reviews.llvm.org/D90991
2020-11-23 10:30:51 -08:00
Nicolas Vasilache 9ac0b314a4 [mlir][Linalg] Drop symbol_source abstraction which does not pay for itself.
Differential Revision: https://reviews.llvm.org/D91956
2020-11-23 12:43:02 +00:00
Nicolas Vasilache 01c4418544 [mlir][Linalg] NFC - Factor out Linalg functionality for shape and loop bounds computation
This revision refactors code used in various Linalg transformations and makes it a first-class citizen of the LinalgStructuredOpInterface. This is in preparation for allowing more advanced Linalg behavior, but is otherwise NFC.

Differential revision: https://reviews.llvm.org/D91863
2020-11-23 10:17:18 +00:00
Aart Bik af42550523 [mlir][sparse] refine optimization, add few more test cases
Adds tests for full sum reduction (tensors summed up into scalars)
and the well-known sampled dense-dense matrix product. Refines
the optimization rules slightly to handle the summation better.

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D91818
2020-11-20 17:01:59 -08:00
Thomas Raoux 369c51a74b [mlir][vector] Add transfer_op LoadToStore forwarding and deadStore optimizations
Add a transformation to forward transfer_write into transfer_read
operations and to remove a dead transfer_write when a transfer_write is
overwritten before being read.

Differential Revision: https://reviews.llvm.org/D91321
2020-11-20 11:59:01 -08:00
Mikhail Goncharov 0caa82e2ac Revert "[mlir][Linalg] Fuse sequence of Linalg operation (on buffers)"
This reverts commit f8284d21a8.

Revert "[mlir][Linalg] NFC: Expose some utility functions used for promotion."

This reverts commit 0c59f51592.

Revert "Remove unused isZero function"

This reverts commit 0f9f0a4046.

Change f8284d21 led to multiple failures in IREE compilation.
2020-11-20 13:12:54 +01:00
Geoffrey Martin-Noble 0f9f0a4046 Remove unused isZero function
Unused since https://reviews.llvm.org/D91503 and triggering
-Wunused-function

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D91838
2020-11-19 19:58:39 -08:00
MaheshRavishankar 0c59f51592 [mlir][Linalg] NFC: Expose some utility functions used for promotion.
Exposing some utility functions from Linalg to allow for promotion of
fused views outside of the core tile+fuse logic.
This is an alternative to patch D91322, which adds the promotion logic
to the tileAndFuse method. The downside of that approach is that it is
not easily customizable based on needs.

Differential Revision: https://reviews.llvm.org/D91503
2020-11-19 19:05:26 -08:00
MaheshRavishankar f8284d21a8 [mlir][Linalg] Fuse sequence of Linalg operation (on buffers)
Enhance the tile+fuse logic to allow fusing a sequence of operations.

Differential Revision: https://reviews.llvm.org/D90991
2020-11-19 19:03:06 -08:00
River Riddle 65fcddff24 [mlir][BuiltinDialect] Resolve comments from D91571
* Move ops to a BuiltinOps.h
* Add file comments
2020-11-19 11:12:49 -08:00
Lei Zhang 9e39a5d9a6 [mlir][linalg] Start a named ops to generic ops pass
This commit starts a new pass and patterns for converting Linalg
named ops to generic ops. This enables us to leverage the flexibility
from generic ops during transformations. Right now only linalg.conv
is supported; others will be added when useful.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D91357
2020-11-19 09:21:06 -05:00
Aart Bik 9ad62f62b9 [mlir][sparse] remove a few rewriting failures
Rationale:
Make sure preconditions are already tested during verification.
Currently, the only way a sparse rewriting rule can fail is if
(1) the linalg op does not have sparse annotations, or
(2) a yet-to-be-handled operation is encountered inside the op

Reviewed By: penpornk

Differential Revision: https://reviews.llvm.org/D91748
2020-11-18 17:29:40 -08:00
Aart Bik eced4a8e6f [mlir] [sparse] start of sparse tensor compiler support
As discussed in https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020
this CL is the start of sparse tensor compiler support in MLIR. Starting with a
"dense" kernel expressed in the Linalg dialect together with per-dimension
sparsity annotations on the tensors, the compiler automatically lowers the
kernel to sparse code using the methods described in Fredrik Kjolstad's thesis.

Many details are still TBD. For example, the sparse "bufferization" is purely
done locally since we don't have a global solution for propagating sparsity
yet. Furthermore, code to input and output the sparse tensors is missing.
Nevertheless, with some hand modifications, the generated MLIR can be
easily converted into runnable code already.

Reviewed By: nicolasvasilache, ftynse

Differential Revision: https://reviews.llvm.org/D90994
2020-11-17 13:10:42 -08:00
River Riddle 73ca690df8 [mlir][NFC] Remove references to Module.h and Function.h
These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.

Differential Revision: https://reviews.llvm.org/D91572
2020-11-17 00:55:47 -08:00
Aart Bik 9ddb464d37 [mlir] refactor common idiom into AffineMap method
Motivated by a refactoring in the new sparse code (yet to be merged), this avoids some lengthy code duplication.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D91465
2020-11-13 19:18:13 -08:00
Sean Silva 703ef17e7a [mlir] Make linalg-bufferize run on FuncOp
That way, it runs in parallel across functions.
2020-11-13 15:43:24 -08:00
River Riddle 7f61396cfa [mlir][Interfaces] Add implicit casts from concrete operation types to the interfaces they implement.
This removes the need to have an explicit `cast<>`, given that we always know the op is an instance of the interface.
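A sketch with one concrete op (illustrative):

```cpp
#include "mlir/Dialect/Linalg/IR/LinalgOps.h"
using namespace mlir;

void example(linalg::MatmulOp matmul) {
  // Before: linalg::LinalgOp iface = cast<linalg::LinalgOp>(matmul.getOperation());
  // After: the concrete op converts implicitly to an interface it implements.
  linalg::LinalgOp iface = matmul;
  (void)iface;
}
```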

Differential Revision: https://reviews.llvm.org/D91304
2020-11-12 22:56:08 -08:00
Sean Silva faa66b1b2c [mlir] Bufferize tensor constant ops
We lower them to a std.global_memref (uniqued by constant value) + a
std.get_global_memref to produce the corresponding memref value.
This allows removing Linalg's somewhat hacky lowering of tensor
constants, now that std properly supports this.

Differential Revision: https://reviews.llvm.org/D91306
2020-11-12 14:56:10 -08:00
Sean Silva ad2f9f6745 [mlir] Fix subtensor_insert bufferization.
It was incorrect in the presence of a tensor argument with multiple
uses.

The bufferization of subtensor_insert was writing into a converted
memref operand, but there is no guarantee that the converted memref for
that operand is safe to write into. In this case, the same converted
memref is written to in-place by the subtensor_insert bufferization,
violating the tensor-level semantics.

I left some comments in a TODO about ways forward on this. I will be
working actively on this problem in the coming days.

Differential Revision: https://reviews.llvm.org/D91371
2020-11-12 14:56:09 -08:00
MaheshRavishankar 5ca20851e4 [mlir][Linalg] Improve the logic to perform tile and fuse with better dependence tracking.
This change does two main things
1) An operation might have multiple dependences to the same
   producer. Not tracking them correctly can result in incorrect code
   generation with fusion. To rectify this, the dependence tracking
   needs to also include the operand number in the consumer.
2) Improve the logic used to find the fused loops, making it easier to
   follow. The only constraint for fusion is that linalg ops (on
   buffers) have update semantics for the result. Fusion should be
   such that each iteration of the fused loop (which is also a
   tiled loop) touches only one (disjoint) tile of the output. This
   could be relaxed by allowing recomputation, which is the default
   when operands are tensors, or can be made legal with promotion of
   the fused view (in the future).

Differential Revision: https://reviews.llvm.org/D90579
2020-11-12 00:25:24 -08:00
Aart Bik e1dbc25ee2 [mlir][sparse] integrate sparse annotation into generic linalg op
This CL integrates the new sparse annotations (heretofore merely added as fully
transparent attributes) more tightly into the generic linalg op in order to add
verification of the annotations' consistency as well as to make other
passes more aware of their presence (in the long run, rewriting rules must
preserve the integrity of the annotations).
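
A sketch of what such an annotated kernel could look like; the attribute name and encoding here ("D" for dense, "S" for sparse, one list per tensor) illustrate the scheme and are not a normative syntax:

```mlir
#trait_scale = {
  indexing_maps = [ affine_map<(i) -> (i)>,    // %a (input)
                    affine_map<(i) -> (i)> ],  // result
  iterator_types = ["parallel"],
  sparse = [ [ "S" ],    // %a: the single dimension is sparse
             [ "D" ] ]   // result: the single dimension is dense
}
func @scale(%a: tensor<32xf32>) -> tensor<32xf32> {
  %0 = linalg.generic #trait_scale ins(%a : tensor<32xf32>) {
  ^bb0(%in: f32):
    %c2 = constant 2.0 : f32
    %m = mulf %in, %c2 : f32
    linalg.yield %m : f32
  } -> tensor<32xf32>
  return %0 : tensor<32xf32>
}
```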

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D91224
2020-11-11 17:26:30 -08:00
Sean Silva 53a0d45db6 [mlir] Add pass to convert elementwise ops to linalg.
This patch converts elementwise ops on tensors to linalg.generic ops
with the same elementwise op in the payload (except rewritten to
operate on scalars, obviously). This is a great form for later fusion to
clean up.

E.g.

```
// Compute: %arg0 + %arg1 - %arg2
func @f(%arg0: tensor<?xf32>, %arg1: tensor<?xf32>, %arg2: tensor<?xf32>) -> tensor<?xf32> {
  %0 = addf %arg0, %arg1 : tensor<?xf32>
  %1 = subf %0, %arg2 : tensor<?xf32>
  return %1 : tensor<?xf32>
}
```

Running this through
`mlir-opt -convert-std-to-linalg -linalg-fusion-for-tensor-ops` we get:

```
func @f(%arg0: tensor<?xf32>, %arg1: tensor<?xf32>, %arg2: tensor<?xf32>) -> tensor<?xf32> {
  %0 = linalg.generic {indexing_maps = [#map0, #map0, #map0, #map0], iterator_types = ["parallel"]} ins(%arg0, %arg1, %arg2 : tensor<?xf32>, tensor<?xf32>, tensor<?xf32>) {
  ^bb0(%arg3: f32, %arg4: f32, %arg5: f32):  // no predecessors
    %1 = addf %arg3, %arg4 : f32
    %2 = subf %1, %arg5 : f32
    linalg.yield %2 : f32
  } -> tensor<?xf32>
  return %0 : tensor<?xf32>
}
```

So the elementwise ops on tensors have nicely collapsed into a single
linalg.generic, which is the form we want for further transformations.

Differential Revision: https://reviews.llvm.org/D90354
2020-11-10 13:44:44 -08:00
Nicolas Vasilache 6fc3a44394 [mlir][Linalg] Add support for bufferization of SubTensorOp and SubTensorInsertOp
This revision adds support for bufferization by using a mix of `tensor_load`, `subview`, `linalg.copy` and `tensor_to_memref`.
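A rough sketch of the scheme for a `subtensor` with static offsets/sizes/strides (%t is some existing tensor value; the allocation and copy make the result own its data):
```mlir
// %st = subtensor %t[0] [2] [1] : tensor<4xf32> to tensor<2xf32>
// bufferizes approximately to:
%m  = tensor_to_memref %t : memref<4xf32>
%sv = subview %m[0] [2] [1] : memref<4xf32> to memref<2xf32>
%a  = alloc() : memref<2xf32>
linalg.copy(%sv, %a) : memref<2xf32>, memref<2xf32>
%st = tensor_load %a : memref<2xf32>
```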
2020-11-09 16:55:36 +00:00
Sean Silva eb8d386d51 [mlir] Make linalg-bufferize a composable bufferization pass
Previously, linalg-bufferize was a "finalizing" bufferization pass (it
did a "full" conversion). This wasn't great because it couldn't be used
composably with other bufferization passes like std-bufferize and
scf-bufferize.

This patch makes linalg-bufferize a composable bufferization pass.
Notice that the integration tests are switched over to using a pipeline
of std-bufferize, linalg-bufferize, and (to finalize the conversion)
func-bufferize. It all "just works" together.

While doing this transition, I ran into a nasty bug in the 1-use special
case logic for forwarding init tensors. That logic, while
well-intentioned, was fundamentally flawed, because it assumed that if
the original tensor value had one use, then the converted memref could
be mutated in place. That assumption is wrong in many cases. For
example:

```
  %0 = some_tensor : tensor<4xf32>
  br ^bb0(%0, %0: tensor<4xf32>, tensor<4xf32>)
^bb0(%bbarg0: tensor<4xf32>, %bbarg1: tensor<4xf32>)
  // %bbarg0 is an alias of %bbarg1. We cannot safely write
  // to it without analyzing uses of %bbarg1.
  linalg.generic ... init(%bbarg0) {...}
```

A similar example can happen in many scenarios with function arguments.
Even more sinister, if the converted memref is produced by a
`std.get_global_memref` of a constant global memref, then we might
attempt to write into read-only statically allocated storage! Not all
memrefs are writable!

Clearly, this 1-use check is not a local transformation that we can do
on the fly in this pattern, so I removed it.

The test is now drastically shorter and I basically rewrote the CHECK
lines from scratch because:
- the new composable linalg-bufferize just doesn't do as much, so there
is less to test
- a lot of the tests were related to the 1-use check, which is now gone,
so there is less to test
- the `-buffer-hoisting -buffer-deallocation` is no longer mixed in, so
the checks related to that had to be rewritten

Differential Revision: https://reviews.llvm.org/D90657
2020-11-04 10:16:55 -08:00
mikeurbach 2e36e0dad5 [MLIR] Move eraseArguments and eraseResults to FunctionLike
Previously, they were only defined for `FuncOp`.

To support this, `FunctionLike` needs a way to get an updated type
from the concrete operation. This adds a new hook for that purpose,
called `getTypeWithoutArgsAndResults`.

For now, `FunctionLike` continues to assume the type is
`FunctionType`, and concrete operations that use another type can hide
the `getType`, `setType`, and `getTypeWithoutArgsAndResults` methods.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D90363
2020-11-03 16:53:46 -07:00
Thomas Raoux 29d1fba7b5 [mlir][vector] Make linalg FillOp vectorization use Transfer op
Differential Revision: https://reviews.llvm.org/D90474
2020-11-03 14:35:26 -08:00
Sean Silva 30e130c3ed [mlir] Move some linalg patterns around.
The bufferization patterns are moved to the .cpp file, which is
preferred in the codebase when it makes sense.

The LinalgToStandard patterns are kept in a header because they are
expected to be used individually. However, they are moved to
LinalgToStandard.h which is the file corresponding to where they are
defined.

This also removes TensorCastOpConverter, which is handled by
populateStdBufferizePatterns now. Eventually, the constant op lowering
will be handled as well, but there are currently holdups on moving
it (see https://reviews.llvm.org/D89916).

Differential Revision: https://reviews.llvm.org/D90254
2020-10-30 13:48:03 -07:00
Nicolas Vasilache 9b17bf2e54 [mlir][Linalg] Make Linalg fusion a test pass
Linalg "tile-and-fuse" is currently exposed as a Linalg pass "-linalg-fusion" but only the mechanics of the transformation are currently relevant.
Instead turn it into a "-test-linalg-greedy-fusion" pass which performs canonicalizations to enable more fusions to compose.
This allows dropping the OperationFolder which is not meant to be used with the pattern rewrite infrastructure.

Differential Revision: https://reviews.llvm.org/D90394
2020-10-29 15:18:51 +00:00
Kazuaki Ishizaki 41b09f4eff [mlir] NFC: fix trivial typos
fix typos in comments and documents

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D90089
2020-10-29 04:05:22 +09:00
MaheshRavishankar 9d5239d39e [mlir][Linalg] Add fusion of IndexedGenericOp with TensorReshapeOp by expansion.
This patch adds support for fusing linalg.indexed_generic op with
linalg.tensor_reshape op by expansion, i.e.
- linalg.indexed_generic op -> linalg.tensor_reshape op when the
  latter is expanding.
- linalg.tensor_reshape op -> linalg.indexed_generic op when the
  former is folding.

Differential Revision: https://reviews.llvm.org/D90082
2020-10-27 16:15:34 -07:00
River Riddle 3fffffa882 [mlir][Pattern] Add a new FrozenRewritePatternList class
This class represents a rewrite pattern list that has been frozen, and thus immutable. This replaces the uses of OwningRewritePatternList in pattern driver related API, such as dialect conversion. When PDL becomes more prevalent, this API will allow for optimizing a set of patterns once without the need to do this per run of a pass.

Differential Revision: https://reviews.llvm.org/D89104
2020-10-26 18:01:06 -07:00
River Riddle b6eb26fd0e [mlir][NFC] Move around the code related to PatternRewriting to improve layering
There are several pieces of pattern rewriting infra in IR/ that really shouldn't be there. This revision moves those pieces to a better location such that they are easier to evolve in the future(e.g. with PDL). More concretely this revision does the following:

* Create a Transforms/GreedyPatternRewriteDriver.h and move the apply*andFold methods there.
The definitions for these methods are already in Transforms/ so it doesn't make sense for the declarations to be in IR.

* Create a new lib/Rewrite library and move PatternApplicator there.
This new library will be focused on applying rewrites, and will also include compiling rewrites with PDL.

Differential Revision: https://reviews.llvm.org/D89103
2020-10-26 18:01:06 -07:00
MaheshRavishankar 78f37b74da [mlir][Linalg] Miscellaneous enhancements to cover more fusion cases.
Adds support for
- Dropping unit dimension loops for indexed_generic ops.
- Folding consecutive folding (or expanding) reshapes when the result
  (or src) is a scalar.
- Fixes to indexed_generic -> generic fusion when zero-dim tensors are
  involved.

Differential Revision: https://reviews.llvm.org/D90118
2020-10-26 16:17:24 -07:00
Nicolas Vasilache 37e0fdd072 [mlir][Linalg] Add basic support for TileAndFuse on Linalg on tensors.
This revision allows the fusion of the producer of input tensors in the consumer under a tiling transformation (which produces subtensors).
Many pieces are still missing (e.g. support init_tensors, better refactor LinalgStructuredOp interface support, try to merge implementations and reuse code) but this still allows getting started.

The greedy pass itself is just for testing purposes and will be extracted in a separate test pass.

Differential revision: https://reviews.llvm.org/D89491
2020-10-26 17:19:08 +00:00
MaheshRavishankar de2568aab8 [mlir][Linalg] Rethink fusion of linalg ops with reshape ops.
The current fusion on tensors fuses reshape ops with generic ops by
linearizing the indexing maps of the fused tensor in the generic
op. This has some limitations
- It only works for static shapes
- The resulting indexing map has a linearization that would
  potentially prevent fusion later on (for ex. tile + fuse).

Instead, try to fuse the reshape consumer (producer) with the generic op
producer (consumer) by expanding the dimensionality of the generic op
when the reshape is expanding (folding). Since this approach conflicts with
the linearization approach, the expansion method is used instead of
the linearization method.

This patch also refactors the fusion on tensors into a
collection of patterns.

Differential Revision: https://reviews.llvm.org/D89002
2020-10-14 13:50:31 -07:00
Sean Silva 9a14cb53cb [mlir][bufferize] Rename BufferAssignment* to Bufferize*
Part of the refactor discussed in:
https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17

Differential Revision: https://reviews.llvm.org/D89271
2020-10-14 12:39:16 -07:00
Sean Silva 9ca97cde85 [mlir] Linalg refactor for using "bufferize" terminology.
Part of the refactor discussed in:
https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17

Differential Revision: https://reviews.llvm.org/D89261
2020-10-14 12:39:15 -07:00
Nicolas Vasilache d38277dbcf [mlir][Linalg] Add missing dependency 2020-10-14 13:50:29 +00:00
Nicolas Vasilache af5be38a01 [mlir][Linalg] Make a Linalg CodegenStrategy available.
This revision adds a programmable codegen strategy from linalg based on staged rewrite patterns. Testing is exercised on a simple linalg.matmul op.

Differential Revision: https://reviews.llvm.org/D89374
2020-10-14 11:11:26 +00:00
Alberto Magni 44865e9169 [mlir][Linalg] Lower padding attribute for pooling ops
Update linalg-to-loops lowering for pooling operations to perform
padding of the input when specified by the corresponding attribute.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D88911
2020-10-13 14:11:02 -07:00
Nicolas Vasilache 6121117484 [mlir][Linalg] Fix TensorConstantOp bufferization in Linalg.
TensorConstantOp bufferization currently uses the vector dialect to store constant data into memory.
Due to natural vector size and alignment properties, this is problematic with n>1-D vectors whose most minor dimension is not naturally aligned.

Instead, this revision linearizes the constant and introduces a linalg.reshape to go back to the desired shape.

Still, this is to be considered a workaround; a better long-term solution will probably involve `llvm.global`.

Differential Revision: https://reviews.llvm.org/D89311
2020-10-13 16:36:56 +00:00
Nicolas Vasilache 422aaf31da [mlir][Linalg] Add named Linalg ops on tensor to buffer support.
This revision introduces support for buffer allocation for any named linalg op.
To avoid template instantiating many ops, a new ConversionPattern is created to capture the LinalgOp interface.

Some APIs are updated to remain consistent with MLIR style:
`OwningRewritePatternList * -> OwningRewritePatternList &`
`BufferAssignmentTypeConverter * -> BufferAssignmentTypeConverter &`

Differential revision: https://reviews.llvm.org/D89226
2020-10-12 11:20:23 +00:00
Alexander Belyaev b98e5e0f7e [mlir] Move Linalg tensors-to-buffers tests to Linalg tests.
The buffer placement preparation tests in
test/Transforms/buffer-placement-preparation* are using Linalg as a test
dialect, which leads to confusion and "copy-pasta": Linalg is being
extended now, and when TensorsToBuffers.cpp is changed, TestBufferPlacement is
only sometimes kept in sync, which should not be the case.

This has led to an unnoticed bug, because the tests were in a different directory and the patterns were slightly off.

Differential Revision: https://reviews.llvm.org/D89209
2020-10-12 10:18:57 +02:00
Sean Silva a2b6c75ac0 [mlir] Rename BufferPlacement.h to Bufferize.h
Context: https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/14

Differential Revision: https://reviews.llvm.org/D89174
2020-10-09 17:48:20 -07:00
Nicolas Vasilache c303d9b394 [mlir][Linalg] NFC - Cleanup explicitly instantiated patterns 2/n - Loops.cpp
This revision belongs to a series of patches that reduce reliance of Linalg transformations on templated rewrite and conversion patterns.
Instead, this uses a MatchAnyTag pattern for the vast majority of cases and dispatches internally.

Differential revision: https://reviews.llvm.org/D89133
2020-10-09 19:59:49 +00:00
Nicolas Vasilache 30e6033b45 [mlir][Linalg] Add TensorsToBuffers support for Constant ops.
This revision also adds an end-to-end test that lowers tensors to buffers all the way to executable code on CPU.

Differential revision: https://reviews.llvm.org/D88998
2020-10-08 13:15:45 +00:00
Alexander Belyaev c1fd4305b6 [mlir] Add basic support for dynamic tensor results in TensorToBuffers.cpp.
The simplest case is when the indexing maps are DimIds in every component. This covers cwise ops.

Also:
* Expose populateConvertLinalgOnTensorsToBuffersPatterns in Transforms.h
* Expose emitLoopRanges in Transforms.h

Differential Revision: https://reviews.llvm.org/D88781
2020-10-08 11:55:42 +02:00
Ahmed S. Taei 7060920bd1 Relax FuseTensorReshapeOpAsproducer identity mapping constraint
Differential Revision: https://reviews.llvm.org/D88869
2020-10-06 22:31:39 +00:00
Nicolas Vasilache a3adcba645 [mlir][Linalg] Implement tiling on tensors
This revision implements tiling on tensors as described in:
https://llvm.discourse.group/t/an-update-on-linalg-on-tensors/1878/4

Differential revision: https://reviews.llvm.org/D88733
2020-10-06 17:51:11 +00:00
Nicolas Vasilache d8ee28b96e [mlir][Linalg] Extend buffer allocation to support Linalg init tensors
This revision adds init_tensors support to buffer allocation for Linalg on tensors.
Currently makes the assumption that the init_tensors fold onto the first output tensors.

This assumption is not currently enforced or cast in stone and requires experimenting with tiling linalg on tensors for ops **without reductions**.

Still this allows progress towards the end-to-end goal.
2020-10-06 13:24:27 +00:00
Nicolas Vasilache e3de249a4c [mlir] Add a subtensor operation
This revision introduces a `subtensor` op, which is the counterpart of `subview` for a tensor operand. This also refactors the relevant pieces to allow reusing the `subview` implementation where appropriate.

This operation will be used to implement tiling for Linalg on tensors.
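A small sketch of the symmetry (%m, %t, %off, %sz are assumed values; the strided layout map on the subview result is abbreviated as `#map`):
```mlir
#map = affine_map<(d0)[s0] -> (d0 + s0)>
// Existing: subview extracts a view of a memref.
%sv = subview %m[%off] [%sz] [1] : memref<?xf32> to memref<?xf32, #map>
// New: subtensor is the value-semantics counterpart on tensors.
%st = subtensor %t[%off] [%sz] [1] : tensor<?xf32> to tensor<?xf32>
```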
2020-10-02 05:35:30 -04:00
MaheshRavishankar c6ea095b97 [mlir][Linalg] NFC : Move fusion on tensors to separate file.
Differential Revision: https://reviews.llvm.org/D88633
2020-10-01 09:50:37 -07:00
Geoffrey Martin-Noble d4e889f1f5 Remove `Ops` suffix from dialect library names
Dialects include more than just ops, so this suffix is outdated. Follows
discussion in
https://llvm.discourse.group/t/rfc-canonical-file-paths-to-dialects/621

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D88530
2020-09-30 18:00:44 -07:00
MaheshRavishankar c694588fc5 [mlir][Linalg] Add pattern to tile and fuse Linalg operations on buffers.
The pattern is structured similarly to other patterns like
LinalgTilingPattern. The fusion pattern takes options that allow you
to fuse with producers of multiple operands at once.
- The pattern fuses only at the level that is known to be legal, i.e.
  if a reduction loop in the consumer is tiled, then fusion should
  happen "before" this loop. Some refactoring of the fusion code is
  needed to fuse only where it is legal.
- Since fusion on buffers uses the LinalgDependenceGraph, which is
  not mutable in place, the fusion pattern keeps the original
  operations in the IR but tags them with a marker that can later
  be used to find the original operations.

This change also fixes an issue with tiling and
distribution/interchange where a tile size of 0 for a loop wasn't
accounted for.

Differential Revision: https://reviews.llvm.org/D88435
2020-09-30 14:56:58 -07:00
Mahesh Ravishankar 892fdc923f [mlir][Linalg] Generalize the logic to compute reassociation maps
while folding tensor_reshape op.

While folding reshapes that introduce unit extent dims, the logic to
compute the reassociation maps can be generalized to handle some
corner cases, for example, when the folded shape still has unit-extent
dims but corresponds to folded unit extent dims of the expanded shape.

Differential Revision: https://reviews.llvm.org/D88521
2020-09-30 07:58:06 -07:00
Jakub Lichman 0b17d4754a [mlir][Linalg] Tile sizes for Conv ops vectorization added as pass arguments
The current setup for conv op vectorization does not enable the user to specify
tile sizes or the dimensions to vectorize. In this commit we change that by
adding tile sizes as pass arguments. Every dimension with a corresponding tile
size > 1 is automatically vectorized.

Differential Revision: https://reviews.llvm.org/D88533
2020-09-30 11:31:28 +00:00
Nicolas Vasilache 6b649570cb [mlir][Linalg] Refactor Linalg op initTensors support - NFC
Manually-defined named ops do not currently support `init_tensors` or return values and may never support them. Add extra interface to the StructuredOpInterface so that we can still write op-agnostic transformations based on StructuredOpInterface.

This is an NFC extension in preparation for tiling on tensors.

Differential Revision: https://reviews.llvm.org/D88481
2020-09-29 09:56:38 -04:00
Nicolas Vasilache 074ab233ed [mlir][Linalg] Refactor Linalg creation of loops to allow passing iterArgs - NFC
This revision changes the signatures of the helper functions that Linalg uses to create loops so that they can also take iterArgs.
iterArgs are asserted empty to ensure no functional change.
This is a mechanical change in preparation for tiling Linalg on tensors, to avoid polluting that implementation with an NFC change.

Differential Revision: https://reviews.llvm.org/D88480
2020-09-29 09:51:11 -04:00
MaheshRavishankar b62f9f4407 [mlir][Linalg] Add pattern to fold linalg.tensor_reshape that add unit extent dims.
A sequence of two reshapes such that one of them is just adding unit
extent dims can be folded to a single reshape.

Differential Revision: https://reviews.llvm.org/D88057
2020-09-23 00:01:58 -07:00
Nicolas Vasilache ed229132f1 [mlir][Linalg] Uniformize linalg.generic with named ops.
This revision allows representing a reduction at the level of linalg on tensors for generic ops by uniformizing with the named ops approach.
2020-09-22 04:13:22 -04:00
Jakub Lichman 347d59b16c [mlir][Linalg] Convolution tiling added to ConvOp vectorization pass
ConvOp vectorization currently supports only convolutions of static shapes with
dimensions of size either 3 (vectorized) or 1 (not), as the underlying vectors
have to be of static shape as well. In this commit we add support for convolutions
of any size as well as dynamic shapes by leveraging the existing matmul
infrastructure for tiling both input and kernel to sizes accepted by the previous
version of ConvOp vectorization.
In the future this pass can be extended to take a "tiling mask" as user input,
which will enable vectorization of user-specified dimensions.

Differential Revision: https://reviews.llvm.org/D87676
2020-09-17 09:39:41 +00:00
MaheshRavishankar d380b582f7 [mlir][Linalg] Make LinalgBaseTilingPattern not delete the original operation.
The LinalgTilingPattern class derived from the base deletes the
original operation. This change allows for the use case where more
transformations are necessary on the original operation after
tiling. In such cases the pattern can derive from
LinalgBaseTilingPattern instead of LinalgTilingPattern.

Differential Revision: https://reviews.llvm.org/D87308
2020-09-11 00:39:22 -07:00
Eugene Burmako 5638df1950 Introduce linalg.vecmat
This patch adds a new named structured op to accompany linalg.matmul and
linalg.matvec. We needed it for our codegen, so I figured it would be useful
to add it to Linalg.
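
A usage sketch in the ins/outs form used by named structured ops:

```mlir
// y(n) = sum_m x(m) * A(m, n): vector-matrix product on buffers.
func @vecmat(%x: memref<?xf32>, %A: memref<?x?xf32>, %y: memref<?xf32>) {
  linalg.vecmat ins(%x, %A : memref<?xf32>, memref<?x?xf32>)
                outs(%y : memref<?xf32>)
  return
}
```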

Reviewed By: nicolasvasilache, mravishankar

Differential Revision: https://reviews.llvm.org/D87292
2020-09-10 18:48:14 +02:00
Jakub Lichman fea175b59f [mlir][Linalg] Small refactoring of ConvOpVectorization
This commit addresses review comments left on D86619
after it landed.

Differential Revision: https://reviews.llvm.org/D87354
2020-09-10 07:05:30 +00:00
Ehsan Toosi 847299d3f0 [mlir] remove BufferAssignmentPlacer from BufferAssignmentOpConversionPattern
BufferPlacement has been removed, as allocations are no longer placed during the conversion.

Differential Revision: https://reviews.llvm.org/D87079
2020-09-08 13:04:22 +02:00
Jakub Lichman 83d82d1fb1 [mlir] Fix broken build on Windows caused by using uint 2020-09-08 09:42:25 +00:00
Jakub Lichman 67b37f571c [mlir] Conv ops vectorization pass
This commit introduces a new way of lowering convolution ops.
The conv op vectorization pass lowers linalg convolution ops
into vector contractions. This lowering is possible when the conv op
is first tiled by 1 along specific dimensions, which transforms
it into a dot product between input and kernel subview memory buffers.
This pass converts such a conv op into a vector contraction and inserts
all the vector transfers needed to make it work.

Differential Revision: https://reviews.llvm.org/D86619
2020-09-08 08:47:42 +00:00
Frederik Gossen 136eb79a88 [MLIR][Standard] Add `dynamic_tensor_from_elements` operation
With `dynamic_tensor_from_elements` tensor values of dynamic size can be
created. The body of the operation essentially maps the index space to tensor
elements.

Declare SCF operations in the `scf` namespace to avoid name clash with the new
`std.yield` operation. Resolve ambiguities between `linalg/shape/std/scf.yield`
operations.
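
A 1-D sketch: one operand per dynamic dimension, one index block argument per dimension, and the new `yield` producing each element:

```mlir
func @iota(%n : index) -> tensor<?xf32> {
  %t = dynamic_tensor_from_elements %n {
  ^bb0(%i : index):
    // Element at position %i is f32(%i).
    %int = index_cast %i : index to i32
    %elem = sitofp %int : i32 to f32
    yield %elem : f32
  } : tensor<?xf32>
  return %t : tensor<?xf32>
}
```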

Differential Revision: https://reviews.llvm.org/D86276
2020-09-07 11:44:43 +00:00
Jakub Lichman 8d35080ebb [mlir][Linalg] Wrong tile size for convolutions fixed
Sizes of tiles (subviews) are bigger by 1 than they should be. Consider a
1D convolution without batches or channels, and let m iterate over the
output and n over the kernel; the input is then accessed at m + n. In tiling,
subview sizes for convolutions are computed by applying the requested tile size
together with the kernel size to the above expression. Thus for a tile size
of 2 the subview size comes out as 2 + size(n), which is bigger by one than
it should be, since the kernel moves only once per output element: with tile
size 2 and kernel size 3, only 2 + 3 - 1 = 4 input elements are touched, not 5.
The problem behind it is that the range is not turned into a closed interval
before the composition. This commit fixes the problem by first turning ranges
into closed intervals by subtracting 1, and after the composition turning them
back to half-open intervals by adding 1.

Differential Revision: https://reviews.llvm.org/D86638
2020-09-03 06:01:21 +00:00
Ehsan Toosi 39cf83cc78 [mlir] Extend BufferAssignmentTypeConverter with result conversion callbacks
In this PR, the users of BufferPlacement can configure
BufferAssignmentTypeConverter. These new configurations give the user more
freedom in converting function signatures, as well as return and call
operations.

These are the new features:
    - Accepting callback functions for decomposing types (i.e. 1 to N type
    conversion such as unpacking tuple types).
    - Defining ResultConversionKind for specifying whether a function result
    with a certain type should be appended to the function arguments list or
    should be kept as function result. (Usage:
    converter.setResultConversionKind<MemRefType>(AppendToArgumentList))
    - Accepting callback functions for composing or decomposing values (i.e. N
    to 1 and 1 to N value conversion).

Differential Revision: https://reviews.llvm.org/D85133
2020-09-02 17:53:42 +02:00
Lei Zhang 1b88bbf5eb Revert "[mlir] Extend BufferAssignmentTypeConverter with result conversion callbacks"
This reverts commit 94f5d24877 because it
failed the following tests:

MLIR :: Dialect/Linalg/tensors-to-buffers.mlir
MLIR :: Transforms/buffer-placement-preparation-allowed-memref-results.mlir
MLIR :: Transforms/buffer-placement-preparation.mlir
2020-09-02 09:24:36 -04:00
Ehsan Toosi 94f5d24877 [mlir] Extend BufferAssignmentTypeConverter with result conversion callbacks
In this PR, the users of BufferPlacement can configure
BufferAssignmentTypeConverter. These new configurations give the user more
freedom in converting function signatures, as well as return and call
operations.

These are the new features:
    - Accepting callback functions for decomposing types (i.e. 1 to N type
    conversion such as unpacking tuple types).
    - Defining ResultConversionKind for specifying whether a function result
    with a certain type should be appended to the function arguments list or
    should be kept as function result. (Usage:
    converter.setResultConversionKind<MemRefType>(AppendToArgumentList))
    - Accepting callback functions for composing or decomposing values (i.e. N
    to 1 and 1 to N value conversion).

Differential Revision: https://reviews.llvm.org/D85133
2020-09-02 13:26:55 +02:00
Benjamin Kramer 8782c72765 Strength-reduce SmallVectors to arrays. NFCI. 2020-08-28 21:14:20 +02:00
Hanhan Wang eb4efa8832 [mlir][Linalg] Enhance Linalg fusion on generic op and tensor_reshape op.
The tensor_reshape op was fusible only in the collapsing case. Now we
propagate the op to all the operands so there is a further chance to fuse it
with a generic op. The pre-conditions are:

1) The producer is not an indexed_generic op.
2) All the shapes of the operands are the same.
3) All the indexing maps are identity.
4) All the loops are parallel loops.
5) The producer has a single user.

It would be possible to fuse the ops even if the producer is an indexed_generic
op, since we can still compute the original indices. E.g., if the reshape op
collapses d0 and d1, we can use DimOp to get the width of d1 and calculate the
index `d0 * width + d1`, then replace all the uses with it. However, this
pattern is not implemented in this patch.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D86314
2020-08-28 01:55:49 -07:00
River Riddle d289a97f91 [mlir][PDL] Add a PDL Interpreter Dialect
The PDL Interpreter dialect provides a lower level abstraction compared to the PDL dialect, and is targeted towards low level optimization and interpreter code generation. The dialect operations encapsulate low-level pattern match and rewrite "primitives", such as navigating the IR (Operation::getOperand), creating new operations (OpBuilder::create), etc. Many of the operations within this dialect also fuse branching control flow with some form of a predicate comparison operation. This type of fusion reduces the amount of work that an interpreter must do when executing.

An example of this representation is shown below:

```mlir
// The following high level PDL pattern:
pdl.pattern : benefit(1) {
  %resultType = pdl.type
  %inputOperand = pdl.input
  %root, %results = pdl.operation "foo.op"(%inputOperand) -> %resultType
  pdl.rewrite %root {
    pdl.replace %root with (%inputOperand)
  }
}

// May be represented in the interpreter dialect as follows:
module {
  func @matcher(%arg0: !pdl.operation) {
    pdl_interp.check_operation_name of %arg0 is "foo.op" -> ^bb2, ^bb1
  ^bb1:
    pdl_interp.return
  ^bb2:
    pdl_interp.check_operand_count of %arg0 is 1 -> ^bb3, ^bb1
  ^bb3:
    pdl_interp.check_result_count of %arg0 is 1 -> ^bb4, ^bb1
  ^bb4:
    %0 = pdl_interp.get_operand 0 of %arg0
    pdl_interp.is_not_null %0 : !pdl.value -> ^bb5, ^bb1
  ^bb5:
    %1 = pdl_interp.get_result 0 of %arg0
    pdl_interp.is_not_null %1 : !pdl.value -> ^bb6, ^bb1
  ^bb6:
    pdl_interp.record_match @rewriters::@rewriter(%0, %arg0 : !pdl.value, !pdl.operation) : benefit(1), loc([%arg0]), root("foo.op") -> ^bb1
  }
  module @rewriters {
    func @rewriter(%arg0: !pdl.value, %arg1: !pdl.operation) {
      pdl_interp.replace %arg1 with(%arg0)
      pdl_interp.return
    }
  }
}
```

Differential Revision: https://reviews.llvm.org/D84579
2020-08-26 05:22:27 -07:00
Mehdi Amini f9dc2b7079 Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser is lazily loading Dialects in the context as it encounters them
during parsing. This is the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager is loading Dialects into
the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only
needs to load the dialect for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context, all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.

To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.

1) For passes, you need to override the method:

virtual void getDependentDialects(DialectRegistry &registry) const {}

and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.

2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.

3) For loading IR, dialect that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` is taking an explicit registry for
this purpose. See how the standalone-opt.cpp example is setup:

  mlir::DialectRegistry registry;
  registry.insert<mlir::standalone::StandaloneDialect>();
  registry.insert<mlir::StandardOpsDialect>();

Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:

  mlir::registerAllDialects(registry);

4) For `mlir-translate` callback, as well as frontend, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()

Differential Revision: https://reviews.llvm.org/D85622
2020-08-19 01:19:03 +00:00
Mehdi Amini e75bc5c791 Revert "Separate the Registration from Loading dialects in the Context"
This reverts commit d14cf45735.
The build is broken with GCC-5.
2020-08-19 01:19:03 +00:00
Mehdi Amini d14cf45735 Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser is lazily loading Dialects in the context as it encounters them
during parsing. This is the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager is loading Dialects into
the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only
needs to load the dialect for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context, all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.

To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.

1) For passes, you need to override the method:

virtual void getDependentDialects(DialectRegistry &registry) const {}

and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.

2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.

3) For loading IR, dialect that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` is taking an explicit registry for
this purpose. See how the standalone-opt.cpp example is setup:

  mlir::DialectRegistry registry;
  registry.insert<mlir::standalone::StandaloneDialect>();
  registry.insert<mlir::StandardOpsDialect>();

Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:

  mlir::registerAllDialects(registry);

4) For `mlir-translate` callback, as well as frontend, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()

Differential Revision: https://reviews.llvm.org/D85622
2020-08-18 23:23:56 +00:00
Mehdi Amini d84fe55e0d Revert "Separate the Registration from Loading dialects in the Context"
This reverts commit e1de2b7550.
Broke a build bot.
2020-08-18 22:16:34 +00:00
Mehdi Amini e1de2b7550 Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser is lazily loading Dialects in the context as it encounters them
during parsing. This is the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager is loading Dialects into
the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only
needs to load the dialect for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context, all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.

To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.

1) For passes, you need to override the method:

virtual void getDependentDialects(DialectRegistry &registry) const {}

and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.

2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.

3) For loading IR, dialect that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` is taking an explicit registry for
this purpose. See how the standalone-opt.cpp example is setup:

  mlir::DialectRegistry registry;
  mlir::registerDialect<mlir::standalone::StandaloneDialect>();
  mlir::registerDialect<mlir::StandardOpsDialect>();

Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:

  mlir::registerAllDialects(registry);

4) For `mlir-translate` callback, as well as frontend, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
2020-08-18 21:14:39 +00:00
Mehdi Amini 25ee851746 Revert "Separate the Registration from Loading dialects in the Context"
This reverts commit 2056393387.

Build is broken on a few bots
2020-08-15 09:21:47 +00:00
Mehdi Amini 2056393387 Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead Dialects are only loaded explicitly on demand:
- the Parser is lazily loading Dialects in the context as it encounters them during parsing. This is the only purpose of registering dialects without loading them in the context.
- Passes are expected to declare the dialects they will create entities from (Operations, Attributes, or Types), and the PassManager is loading Dialects into the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only needs to load the dialect for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example in the Toy tutorial, the compiler only needs to load the Toy dialect in the Context, all the others (linalg, affine, std, LLVM, ...) are automatically loaded depending on the optimization pipeline enabled.

Differential Revision: https://reviews.llvm.org/D85622
2020-08-15 08:07:31 +00:00
Mehdi Amini ba92dadf05 Revert "Separate the Registration from Loading dialects in the Context"
This was landed by accident, will reland with the right comments
addressed from the reviews.
Also revert dependent build fixes.
2020-08-15 07:35:10 +00:00
Mehdi Amini ebf521e784 Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead Dialects are only loaded explicitly on demand:
- the Parser is lazily loading Dialects in the context as it encounters them during parsing. This is the only purpose of registering dialects without loading them in the context.
- Passes are expected to declare the dialects they will create entities from (Operations, Attributes, or Types), and the PassManager is loading Dialects into the Context when starting a pipeline.

This change simplifies the configuration of the registration: a compiler only needs to load the dialect for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example in the Toy tutorial, the compiler only needs to load the Toy dialect in the Context, all the others (linalg, affine, std, LLVM, ...) are automatically loaded depending on the optimization pipeline enabled.
2020-08-14 09:40:27 +00:00
MaheshRavishankar 41d4120017 [mlir][Linalg] Allow distribution of `scf.parallel` loops generated in
Linalg to processors.

This change adds infrastructure to distribute the loops generated in
Linalg to processors at the time of generation. This addresses the use
case where loops are instantiated just to distribute
them. The option to distribute is added to TilingOptions for now and
will allow specifying the distribution as a transformation option,
just like tiling and promotion are specified as options.

Differential Revision: https://reviews.llvm.org/D85147
2020-08-10 14:52:17 -07:00
Nicolas Vasilache 3110e7b077 [mlir] Introduce AffineMinSCF folding as a pattern
This revision adds a folding pattern to replace affine.min ops by the actual min value, when it can be determined statically from the strides and bounds of the enclosing scf loop.

This matches the type of expressions that Linalg produces during tiling and simplifies boundary checks. For now Linalg depends both on Affine and SCF but they do not depend on each other, so the pattern is added there.
In the future this will move to a more appropriate place when it is determined.

The canonicalization of AffineMinOp operations in the context of enclosing scf.for and scf.parallel proceeds by:
  1. building an affine map where uses of the induction variable of a loop
  are replaced by `%lb + %step * floordiv(%iv - %lb, %step)` expressions.
  2. checking if any of the results of this affine map divides all the other
  results (in which case it is also guaranteed to be the min).
  3. replacing the AffineMinOp by the result of (2).

The algorithm is functional in simple parametric tiling cases by using semi-affine maps. However, simplifications of such semi-affine maps are not yet available, so the canonicalization does not yet succeed in those cases.
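
A sketch of the kind of op that folds: inside the loop below, %iv is always a multiple of 4, so 4 divides (128 - %iv) and the min is statically 4:

```mlir
func @tiled() {
  %c0 = constant 0 : index
  %c4 = constant 4 : index
  %c128 = constant 128 : index
  scf.for %iv = %c0 to %c128 step %c4 {
    // Folds to "constant 4 : index": every value of 128 - %iv reached in
    // this loop is a positive multiple of 4, so 4 divides all results.
    %sz = affine.min affine_map<(d0) -> (4, 128 - d0)>(%iv)
  }
  return
}
```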

Differential Revision: https://reviews.llvm.org/D82009
2020-08-07 14:30:38 -04:00
Jakub Lichman eef1bfb2d2 [mlir][Linalg] Conv {1,2,3}D ops defined with TC syntax
Replaced definition of named ND ConvOps with tensor comprehension
syntax which reduces boilerplate code significantly. Furthermore,
new ops to support TF convolutions were added (without strides and dilations).

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D84628
2020-07-31 13:20:17 +02:00
Jakub Lichman 1aaf8aa53d [mlir][Linalg] Conv1D, Conv2D and Conv3D added as named ops
This commit is part of a greater project which aims to add
full end-to-end support for convolutions inside mlir. The
reason behind having conv ops for each rank rather than
one generic ConvOp is to enable better optimizations for
each N-D case, reflecting the memory layout of input/kernel
buffers better and simplifying the code. We expect plain linalg.conv
to be progressively retired.
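
A usage sketch of the rank-specific ops (shown here in the ins/outs named-op form; the exact assembly of the era may differ):

```mlir
func @conv(%in: memref<?xf32>, %filter: memref<?xf32>, %out: memref<?xf32>) {
  linalg.conv_1d ins(%in, %filter : memref<?xf32>, memref<?xf32>)
                 outs(%out : memref<?xf32>)
  return
}
```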

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D83879
2020-07-29 16:39:56 +02:00
Rahul Joshi 706d992ced [NFC] Add getArgumentTypes() to Region
- Add getArgumentTypes() to Region (missed from before)
- Adopt Region argument API in `hasMultiplyAddBody`
- Fix 2 typos in comments

Differential Revision: https://reviews.llvm.org/D84807
2020-07-28 18:27:42 -07:00
lorenzo chelini 946be75b9e [MLIR][Linalg] Retire C++ DotOp in favor of a linalg-ods-gen'd op
- replace DotOp, now that DRR rules have been dropped.

- Catch argument mismatches in the parser: the number of parsed arguments must
  equal the number of expected arguments.

Reviewed By: ftynse, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D82952
2020-07-28 12:34:19 +02:00
MaheshRavishankar 8f6e84ba7b [mlir][Linalg] Enable fusion of std.constant (producer) with
linalg.indexed_generic (consumer) with tensor arguments.

The implementation of fusing a std.constant producer with a
linalg.indexed_generic consumer was already in place; it is exposed
with this change. This also cleans up some of the patterns that implement
the fusion to not be templated, thereby avoiding a lot of conditional
checks for calling the right instantiation.

Differential Revision: https://reviews.llvm.org/D84566
2020-07-27 09:51:20 -07:00
MaheshRavishankar 4ff48db68d [mlir][Linalg] Fixing bug in subview size computation in Linalg tiling.
The `makeTiledViews` did not use the sizes of the tiled views based on
the result of the loop bound inference computation. This manifested as
an error in computing tile sizes with convolution, where not all the
result expressions of the concatenated affine maps are simple
AffineDimExprs.

Differential Revision: https://reviews.llvm.org/D84366
2020-07-23 11:09:55 -07:00
Jakub Lichman 20c3386f4a [mlir][Linalg] emitLoopRanges and emitLoopRangesWithSymbols merged into one
Right now there is branching between two functions based on whether the target
map has symbols. In this commit these functions are merged into one.
Furthermore, emitting no longer requires inverting and applying maps, as it
computes the correct Range in a single step and thus reduces unnecessary overhead.

Differential Revision: https://reviews.llvm.org/D83756
2020-07-23 12:33:46 +02:00
Jakub Lichman e4dd964df0 [mlir] Loop bounds inference in linalg.generic op improved to support bounds for convolution
Loop bound inference is currently very limited: it supports only permutation
maps, which makes it impossible to implement convolution with linalg.generic,
since that requires more advanced loop bound inference. This commit solves it
for the convolution case.

Depends On D83158

Differential Revision: https://reviews.llvm.org/D83191
2020-07-23 11:01:54 +02:00
Thomas Raoux a1b9fb220f [mlir][linalg] Add vectorization transform for CopyOp
CopyOp gets vectorized to a vector.transfer_read followed by a vector.transfer_write
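
A rough before/after sketch on a statically shaped buffer:

```mlir
// Before: linalg.copy(%src, %dst) : memref<4xf32>, memref<4xf32>
// After (sketch):
func @copy(%src: memref<4xf32>, %dst: memref<4xf32>) {
  %c0 = constant 0 : index
  %pad = constant 0.0 : f32  // padding value for the read
  %v = vector.transfer_read %src[%c0], %pad : memref<4xf32>, vector<4xf32>
  vector.transfer_write %v, %dst[%c0] : vector<4xf32>, memref<4xf32>
  return
}
```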

Differential Revision: https://reviews.llvm.org/D83739
2020-07-22 12:40:42 -07:00
Benjamin Kramer bf561dd2eb [mlir][Vector] Vectorize integer matmuls
The underlying infrastructure supports this already, just add the
pattern matching for linalg.generic.

Differential Revision: https://reviews.llvm.org/D84335
2020-07-22 19:39:56 +02:00
Jakub Lichman f9c8febc52 [mlir] Added support for symbols inside linalg.generic and map concatenation
This commit adds functionality needed for the implementation of convolutions with
the linalg.generic op. Since linalg.generic currently expects indexing maps to be
just permutations, the offset indexing needed in convolutions is not possible.
Therefore in this commit we address the issue by adding support for symbols inside
indexing maps, which enables more advanced indexing. The upcoming commit will
solve the problem of computing loop bounds from such maps.

Differential Revision: https://reviews.llvm.org/D83158
2020-07-20 19:20:47 +02:00
Nicolas Vasilache 47cbd9f922 [mlir][Vector] NFC - Improve VectorInterfaces
This revision improves and makes better use of OpInterfaces for the Vector dialect.

Differential Revision: https://reviews.llvm.org/D84053
2020-07-20 08:24:22 -04:00
Nicolas Vasilache 512da70be7 [mlir][Vector] Degrade masking information when forwarding linalg.copy to vector.transfer
Summary:
linalg.copy + linalg.fill can be used to create a padded local buffer.
The `masked` attribute is only valid on this padded buffer.
When forwarding to vector.transfer ops, the attribute must be reset
conservatively.

Differential Revision: https://reviews.llvm.org/D83782
2020-07-15 02:32:45 -04:00
Thomas Raoux 6d5aeb0dce [mlir][linalg] Improve aliasing approximation for hoisting transfer read/write
Improve the logic deciding if it is safe to hoist vector transfer read/write
out of the loop. Change the logic to prevent hoisting operations if there is
any unknown access to the memref in the loop, no matter where the operation is.
For other transfer reads/writes in the loop, check if we can prove that they
access disjoint memory and ignore them in that case.

Differential Revision: https://reviews.llvm.org/D83538
2020-07-10 14:55:04 -07:00
Nicolas Vasilache 56c638b5c1 [mlir][Linalg] Generalize Vectorization of Linalg contractions
This revision adds support for vectorizing named and generic contraction ops to vector.contract. Cases in which the memref is 0-D are special cased to emit std.load/std.store instead of vector.transfer. Relevant tests are added.

Differential revision: https://reviews.llvm.org/D83307
2020-07-10 10:28:34 -04:00
Benjamin Kramer b44470547e Make helpers static. NFC. 2020-07-09 13:48:56 +02:00
River Riddle 9db53a1827 [mlir][NFC] Remove usernames and google bug numbers from TODO comments.
These were largely left over from when MLIR was a Google project, and don't really follow LLVM guidelines.
2020-07-07 01:40:52 -07:00
Uday Bondhugula 6d6d5db251 [MLIR][Linalg] Generate the right type of load/store when lowering max/min pooling ops
While lowering min/max pooling ops to loops, generate the right kind of
load/stores (std or affine) instead of always generating std
load/stores.

Differential Revision: https://reviews.llvm.org/D83080
2020-07-04 14:55:02 +05:30
Nicolas Vasilache 7d9518c800 [mlir][Linalg] Add an option to use Alloca instead of malloc/free pairs.
Summary: A relevant test is also added.

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, Kayjukh, jurahul, msifontes

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D82959
2020-07-01 09:44:01 -04:00
Rahul Joshi ee394e6842 [MLIR] Add variadic isa<> for Type, Value, and Attribute
- Also adopt variadic llvm::isa<> in more places.
- Fixes https://bugs.llvm.org/show_bug.cgi?id=46445

Differential Revision: https://reviews.llvm.org/D82769
2020-06-29 15:04:48 -07:00
Adam D Straw 25055a4fb9 [mlir] add unsigned comparison builders to Affine EDSC
The current Affine comparison builders, which use operator overloading, default to signed comparison. This creates the possibility of misuse of these builders and potential correctness issues when dealing with unsigned integers. This change distinguishes between signed and unsigned comparison builders and forces the caller to make a choice between the two.

Differential Revision: https://reviews.llvm.org/D82323
2020-06-29 23:30:49 +02:00
Rahul Joshi d891d738d9 [MLIR][NFC] Adopt variadic isa<>
Differential Revision: https://reviews.llvm.org/D82489
2020-06-24 17:02:44 -07:00
River Riddle 8d67d187ba [mlir][DialectConversion] Refactor how block argument types get converted
This revision removes the TypeConverter parameter passed to the apply* methods, and instead moves the responsibility of region type conversion to patterns. The types of a region can be converted using the 'convertRegionTypes' method, which acts similarly to the existing 'applySignatureConversion'. This method ensures that all blocks within, and including those moved into, a region will have the block argument types converted using the provided converter.

This has the benefit of making more of the legalization logic controlled by patterns, instead of being handled explicitly by the driver. It also opens up the possibility to support multiple type conversions at some point in the future.

This revision also adds a new utility class `FailureOr<T>` that provides a LogicalResult friendly facility for returning a failure or a valid result value.

Differential Revision: https://reviews.llvm.org/D81681
2020-06-18 15:59:22 -07:00
River Riddle 3e98fbf4f5 [mlir] Refactor RewritePatternMatcher into a new PatternApplicator class.
This class enables for abstracting more of the details for the rewrite process, and will allow for clients to apply specific cost models to the pattern list. This allows for DialectConversion and the GreedyPatternRewriter to share the same underlying matcher implementation. This also simplifies the plumbing necessary to support dynamic patterns.

Differential Revision: https://reviews.llvm.org/D81985
2020-06-18 13:58:47 -07:00
lorenzo chelini e31e8f1ed5 [MLIR][Linalg] Retire C++ MatvecOp in favor of a linalg-ods-gen'd op
Replace C++ MatvecOp, now that DRR rules have been dropped.

Differential Revision: https://reviews.llvm.org/D82007
2020-06-18 11:36:49 +02:00
Rahul Joshi 2eaadfc4fe [NFC] Use llvm::hasSingleElement() in place of .size() == 1
- Also use functions in Region instead of Region::getBlocks() where possible.

Differential Revision: https://reviews.llvm.org/D82032
2020-06-17 13:26:10 -07:00
Alex Zinenko b4bc72afb7 [mlir] refactor Linalg LoopNestBuilder to use common infra
Recent work has introduced support for constructing loops via `::build` with
callbacks that construct loop bodies using only the core OpBuilder. This is now
supported on all loop types that Linalg lowers to. Refactor LoopNestBuilder in
Linalg to rely on this functionality instead of using a custom EDSC-based
approach to creating loop nests.

The specialization targeting parallel loops is also simplified by factoring out
the recursive call into a separate static function and considering only two
alternatives: top-level loop is parallel or sequential.

This removes the last remaining in-tree use of edsc::LoopBuilder, which is now
deprecated and will be removed soon.

Differential Revision: https://reviews.llvm.org/D81873
2020-06-16 20:51:32 +02:00
Nicolas Vasilache eae76faeea [mlir][Linalg] Retire C++ MatmulOp in favor of a linalg-ods-gen'd op.
Summary:
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).

During the replacement the i32 version had to be dropped because only the EDSC operators +, *, etc support type inference.

Deciding on a type-polymorphic behavior, and implementing it, is left for future work.

Reviewers: aartbik

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D81935
2020-06-16 10:46:35 -04:00
Kirill Bobyrev 9b72b47ed6 Revert "[mlir][Linalg] Retire C++ MatmulOp in favor of a linalg-ods-gen'd op."
This reverts commit 8c6c49f293.

As discussed offline, this patch breaks internal builds and tests so I'm
reverting it for now.
2020-06-16 11:02:28 +02:00
River Riddle 0e360744f3 [mlir][DialectConversion] Cache type conversions and add a few useful helpers
It is quite common for the same type to be converted many times throughout the conversion process, and there isn't any good reason why we aren't caching that result. Especially given that we currently use identity conversion to signify legality. This revision also adds a few additional helpers to TypeConverter.

Differential Revision: https://reviews.llvm.org/D81679
2020-06-15 15:57:43 -07:00
Nicolas Vasilache 8c6c49f293 [mlir][Linalg] Retire C++ MatmulOp in favor of a linalg-ods-gen'd op.
This revision replaces MatmulOp, now that DRR rules have been dropped.
This revision also fixes minor parsing bugs and plugs a few holes to get e2e paths working (e.g. library call emission).

During the replacement the i32 version had to be dropped because only the EDSC operators +, *, etc support type inference.

Deciding on a type-polymorphic behavior, and implementing it, is left for future work.

Differential Revision: https://reviews.llvm.org/D79762
2020-06-15 18:14:15 -04:00
Alexander Belyaev e9ac792748 [mlir] Fix some of the warnings in MLIR code.
Summary:
* extra ';' in the following files:
    mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp
    mlir/lib/Dialect/Shape/IR/Shape.cpp

* base class ‘mlir::ConvertVectorToSCFBase<ConvertVectorToSCFPass>’
  should be explicitly initialized in the copy constructor [-Wextra] in
    mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp

* warning: ‘bool Expression::operator==(const Expression&) const’
  defined but not used [-Wunused-function] in
    mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-gen.cpp

Differential Revision: https://reviews.llvm.org/D81673
2020-06-11 22:18:32 +02:00
Ehsan Toosi 4214031d43 [mlir] Introduce allowMemrefFunctionResults for the helper operation converters of buffer placement
This parameter gives the developers the freedom to choose their desired function
signature conversion for preparing their functions for buffer placement. It is
introduced for BufferAssignmentFuncOpConverter, and also for
BufferAssignmentReturnOpConverter, and BufferAssignmentCallOpConverter to adapt
the return and call operations with the selected function signature conversion.
If the parameter is set, buffer placement won't also deallocate the returned
buffers.

Differential Revision: https://reviews.llvm.org/D81137
2020-06-08 09:25:41 +02:00
Nicolas Vasilache 56ce65e2b6 [mlir][Linalg] NFC - Cleanup debug, address post-commit review. 2020-06-05 13:02:34 -04:00
Nicolas Vasilache 6b0dfd703a [mlir][Linalg] Add missing CMake dependency on SCFTransforms 2020-06-05 07:38:55 -04:00
Nicolas Vasilache 6953cf6502 [mlir][Linalg] Add a hoistRedundantVectorTransfers helper function
This revision adds a helper function to hoist vector.transfer_read /
vector.transfer_write pairs out of immediately enclosing scf::ForOp
iteratively, if the following conditions are true:
   1. The 2 ops access the same memref with the same indices.
   2. All operands are invariant under the enclosing scf::ForOp.
   3. No uses of the memref either dominate the transfer_read or are
   dominated by the transfer_write (i.e. no aliasing between the write and
   the read across the loop)

To improve hoisting opportunities, call the `moveLoopInvariantCode` helper
function on the candidate loop above which to hoist. Hoisting the transfers
results in scf::ForOp yielding the value that originally transited through
memory.

This revision additionally exposes `moveLoopInvariantCode` as a helper in
LoopUtils.h and updates SliceAnalysis to support scf::For result values
and to allow hoisting across multiple scf::ForOps.
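
For illustration, a hedged sketch of the rewrite when conditions 1-3
above hold (names, shapes and the loop body are hypothetical):
```
   // Before: the value round-trips through memory on every iteration.
   scf.for %i = %lb to %ub step %s {
     %v = vector.transfer_read %m[%c0], %cst : memref<8xf32>, vector<8xf32>
     %u = "some_compute"(%v) : (vector<8xf32>) -> vector<8xf32>
     vector.transfer_write %u, %m[%c0] : vector<8xf32>, memref<8xf32>
   }

   // After: the loop yields the value that originally transited through memory.
   %v0 = vector.transfer_read %m[%c0], %cst : memref<8xf32>, vector<8xf32>
   %r = scf.for %i = %lb to %ub step %s iter_args(%acc = %v0) -> (vector<8xf32>) {
     %u = "some_compute"(%acc) : (vector<8xf32>) -> vector<8xf32>
     scf.yield %u : vector<8xf32>
   }
   vector.transfer_write %r, %m[%c0] : vector<8xf32>, memref<8xf32>
```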

Differential Revision: https://reviews.llvm.org/D81199
2020-06-05 06:50:24 -04:00
Uday Bondhugula 0f6999af88 [MLIR] Update linalg.conv lowering to use affine load in the absence of padding
Update the Linalg-to-affine lowering for ConvOp to use affine.load for the
input whenever there is no padding. It had always used std.load because
the max in index functions (needed for non-zero padding if not
materializing zeros) could not be represented in affine.

In the future, the non-zero padding case could also be made to use
affine - either by materializing or using affine.execute_region. The
latter approach will not impact the scf/std output obtained after
lowering out affine.
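
For illustration, a hedged 1-D sketch of the zero-padding case (shapes
and names are hypothetical):
```
   affine.for %i = 0 to 126 {
     affine.for %k = 0 to 3 {
       // No padding: the input index %i + %k is a pure affine expression,
       // so an affine.load can be used instead of std.load.
       %v = affine.load %in[%i + %k] : memref<128xf32>
       ...
     }
   }
```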

Differential Revision: https://reviews.llvm.org/D81191
2020-06-05 12:28:30 +05:30
Nicolas Vasilache 3463d9835b [mlir][Linalg] Add a hoistViewAllocOps helper function
This revision adds a helper function to hoist alloc/dealloc pairs and
alloca ops out of the immediately enclosing scf::ForOp if both of the
following conditions are true:
   1. all operands are defined outside the loop.
   2. all uses are ViewLikeOp or DeallocOp.

This is now considered Linalg-specific and will be generalized on a per-need basis.
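
For illustration, a hedged sketch of the hoisting (names and sizes are
hypothetical; #map stands for the elided subview result layout):
```
   // Before: the buffer is (de)allocated on every iteration.
   scf.for %i = %lb to %ub step %s {
     %buf = alloc(%n) : memref<?xf32>
     %sv = subview %buf[%i][%sz][%c1] : memref<?xf32> to memref<?xf32, #map>
     ...
     dealloc %buf : memref<?xf32>
   }

   // After: all alloc operands are loop-invariant and all uses are
   // view-like or dealloc, so the pair is hoisted.
   %buf = alloc(%n) : memref<?xf32>
   scf.for %i = %lb to %ub step %s {
     %sv = subview %buf[%i][%sz][%c1] : memref<?xf32> to memref<?xf32, #map>
     ...
   }
   dealloc %buf : memref<?xf32>
```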

Differential Revision: https://reviews.llvm.org/D81152
2020-06-04 18:59:03 -04:00
Hanhan Wang 27fca57546 [mlir][Linalg] Add support for fusion between indexed_generic ops and tensor_reshape ops
Summary:
The fusion for tensor_reshape embeds the reshape information into the
indexing maps, so the existing pattern also works for indexed_generic ops.
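
For illustration, a hedged sketch of the idea (maps and shapes are
hypothetical): a collapsing tensor_reshape from tensor<4x5xf32> to
tensor<20xf32> can be folded into a consumer by linearizing the
consumer's indexing map over the original shape:
```
   // Consumer map over the reshaped tensor<20xf32>.
   #before = affine_map<(d0) -> (d0)>
   // Fused map reading the original tensor<4x5xf32> directly.
   #after = affine_map<(d0, d1) -> (d0 * 5 + d1)>
```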

Depends On D80347

Differential Revision: https://reviews.llvm.org/D80348
2020-06-03 14:59:47 -07:00
Hanhan Wang cc11ceda16 [mlir][Linalg] Add support for fusion between indexed_generic ops and generic ops on tensors.
Summary:
Unlike fusion between generic ops, indices are involved here. In this
context, we need to re-map the producer's indices, since the fused op is
built from the consumer's perspective (see the sketch after the list
below). This patch supports every combination of fusion between
indexed_generic ops and generic ops, covering the following test cases:
  1) generic op as producer and indexed_generic op as consumer.
  2) indexed_generic op as producer and generic op as consumer.
  3) indexed_generic op as producer and indexed_generic op as consumer.
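
The structural difference driving the re-mapping is visible in the region
signatures; a hedged sketch with a hypothetical 2-D iteration space:
```
   // generic: the region only sees element values.
   ^bb0(%a: f32, %b: f32):

   // indexed_generic: the region receives the iteration indices first;
   // when fusing, the producer's %i, %j must be re-expressed in terms of
   // the consumer's indices.
   ^bb0(%i: index, %j: index, %a: f32, %b: f32):
```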

Differential Revision: https://reviews.llvm.org/D80347
2020-06-03 14:58:43 -07:00
Nicolas Vasilache e349fb70a2 [mlir][Linalg] NFC - Make markers use Identifier instead of StringRef
Summary: This removes string ownership worries by putting everything into the context and makes it easier to construct identifiers programmatically.

Reviewers: ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D81027
2020-06-03 05:52:32 -04:00
Nicolas Vasilache 91beb5176b [mlir] NFC - Add debug information for Linalg transformations.
Address post-commit review of https://reviews.llvm.org/D79518
2020-05-29 18:35:22 -04:00
Nicolas Vasilache 9534192c3b [mlir][Linalg] Make contraction vectorization use vector transfers
This revision replaces the load + vector.type_cast idiom with appropriate vector transfer
operations. These play more nicely with other vector abstractions and canonicalization
patterns and lower to loads/stores, with or without masks, when appropriate.
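
For illustration, a hedged sketch of the change (shapes, names and the
padding constant are hypothetical):
```
   // Before: a full-buffer load through a vector-typed view.
   %vm = vector.type_cast %m : memref<8x8xf32> to memref<vector<8x8xf32>>
   %v = load %vm[] : memref<vector<8x8xf32>>

   // After: a vector transfer, which composes with other vector patterns
   // and can lower to loads/stores with or without masks.
   %v = vector.transfer_read %m[%c0, %c0], %cst : memref<8x8xf32>, vector<8x8xf32>
```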

Differential Revision: https://reviews.llvm.org/D80809
2020-05-29 15:03:46 -04:00
Nicolas Vasilache 1ee114322c [mlir][Linalg][Vector] Add forwarding patterns between linalg.copy and vector.transfer
This revision adds custom rewrites for patterns that arise during linalg structured
ops vectorization. These patterns allow the composition of linalg promotion,
vectorization and removal of redundant copies.

The patterns are deliberately limited and restrictive for now.
More robust behavior will be implemented once more powerful side effect modeling and analyses are available on view/subview.

On the transfer_read side, the following pattern is rewritten:
```
   %alloc = ...
   [optional] %view = std.view %alloc ...
   %subView = subview %allocOrView ...
   [optional] linalg.fill(%allocOrView, %cst) ...
   ...
   linalg.copy(%in, %subView) ...
   vector.transfer_read %allocOrView[...], %cst ...
```
into
```
   [unchanged] %alloc = ...
   [unchanged] [optional] %view = std.view %alloc ...
   [unchanged] %subView = subview %allocOrView ...
   ...
   vector.transfer_read %in[...], %cst ...
```

On the transfer_write side, the following pattern is rewritten:
```
   %alloc = ...
   [optional] %view = std.view %alloc ...
   %subView = subview %allocOrView...
   ...
   vector.transfer_write %..., %allocOrView[...]
   linalg.copy(%subView, %out)
```

Differential Revision: https://reviews.llvm.org/D80728
2020-05-29 08:08:34 -04:00