Commit Graph

632 Commits

Author SHA1 Message Date
Tobias Gysi e70d2c8e6f [mlir][linalg] Cleanup LinalgOp usage in promotion.
Replace the uses of deprecated Structured Op Interface methods in Promotion.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103450
2021-06-03 11:01:02 +00:00
Tobias Gysi ad10d965c8 [mlir][linalg] Cleanup LinalgOp usage in generalization.
Replace the uses of deprecated Structured Op Interface methods in Generalization.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103531
2021-06-03 09:45:02 +00:00
Alexander Belyaev 485c21be8a [mlir] Split linalg reshape ops into expand/collapse.
Differential Revision: https://reviews.llvm.org/D103548
2021-06-03 11:40:22 +02:00
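For orientation, the split roughly replaces a single reshape op with explicit expand/collapse variants; a minimal sketch of the tensor forms, assuming the bracketed reassociation-indices syntax (illustrative only):
```
// Expand a 2-D tensor into 3-D, then collapse it back; each bracketed group
// lists the dimensions that are split or merged together.
func @reshape_roundtrip(%t: tensor<6x4xf32>) -> tensor<6x4xf32> {
  %e = linalg.tensor_expand_shape %t [[0, 1], [2]]
      : tensor<6x4xf32> into tensor<2x3x4xf32>
  %c = linalg.tensor_collapse_shape %e [[0, 1], [2]]
      : tensor<2x3x4xf32> into tensor<6x4xf32>
  return %c : tensor<6x4xf32>
}
```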
Mehdi Amini 8c948b18e9 Fix -Wsign-compare warning (NFC) 2021-06-02 17:28:57 +00:00
Tobias Gysi f84b908f89 [mlir][linalg] Cleanup LinalgOp usage in fusion on tensors (NFC).
Replace the uses of deprecated Structured Op Interface methods in FusionOnTensors.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103471
2021-06-02 12:20:45 +00:00
Tobias Gysi 7594f5028a [mlir][linalg] Cleanup LinalgOp usage in fusion (NFC).
Replace the uses of deprecated Structured Op Interface methods in Fusion.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103437
2021-06-01 08:21:30 +00:00
Tobias Gysi c2e5226a85 [mlir][linalg] Cleanup LinalgOp usage in tiling (NFC).
Replace the uses of deprecated Structured Op Interface methods in Tiling.cpp and Utils.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103438
2021-06-01 08:17:38 +00:00
Tobias Gysi 912ebf60b1 [mlir][linalg] Cleanup LinalgOp usage in vectorization (NFC).
Replace the uses of deprecated Structured Op Interface methods in Vectorization.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103410
2021-06-01 08:08:40 +00:00
Nicolas Vasilache ce4f99e7f2 [mlir][Linalg] Add comprehensive bufferization support for subtensor (5/n)
This revision refactors and simplifies the pattern detection logic: thanks to SSA value properties, we can actually look at all the uses of a given value and avoid having to pattern-match specific chains of operations.

A bufferization pattern for subtensor is added and specific inplaceability analysis is implemented for the simple case of subtensor. More advanced use cases will follow.

Differential revision: https://reviews.llvm.org/D102512
2021-05-27 12:48:08 +00:00
Alexander Belyaev 281ee42911 [mlir] Add a pass to distribute linalg::TiledLoopOp.
Differential Revision: https://reviews.llvm.org/D103194
2021-05-27 08:45:20 +02:00
Alexander Belyaev 74a89cba8c [mlir] Add `distributionTypes` to LinalgTilingOptions.
Differential Revision: https://reviews.llvm.org/D103161
2021-05-26 17:51:38 +02:00
Alexander Belyaev 335fa18028 [mlir] NFC: Expose tiled_loop->scf pattern.
Differential Revision: https://reviews.llvm.org/D102921
2021-05-21 18:19:00 +02:00
Alexander Belyaev 9ecc8178d7 [mlir] Add support for fusion into TiledLoopOp.
Differential Revision: https://reviews.llvm.org/D102722
2021-05-21 18:13:45 +02:00
Stephan Herhut 884a6291f0 [mlir][linalg] Add scalar operands inlining pattern
This pattern inlines operands of a linalg.generic operation that are accessed via a
constant index and hence are loop-invariant scalars. This reduces the number of
linalg.generic operands and unlocks some canonicalizations that rely on seeing
an explicit tensor.extract.

Differential Revision: https://reviews.llvm.org/D102682
2021-05-21 15:23:28 +02:00
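For context, the canonicalizations mentioned key off an explicit element read; a small hypothetical sketch of such an access on a 0-d tensor:
```
// Reading the single element of a 0-d tensor produces an explicit tensor.extract,
// which downstream canonicalizations can match.
func @read_scalar(%s: tensor<f32>) -> f32 {
  %v = tensor.extract %s[] : tensor<f32>
  return %v : f32
}
```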
Nicolas Vasilache 8eb18a0f3e [mlir][Standard] NFC - Drop remaining EDSC usage
Drop the remaining EDSC subdirectories and update all uses.

Differential Revision: https://reviews.llvm.org/D102911
2021-05-21 10:40:39 +00:00
Nicolas Vasilache e84a9b9bb3 [mlir][Affine] NFC - Drop Affine EDSC usage
Drop the Affine dialect EDSC subdirectory and update all uses.

Differential Revision: https://reviews.llvm.org/D102878
2021-05-20 21:45:45 +00:00
Nicolas Vasilache e3cf7c88c4 [mlir][MemRef] NFC - Drop MemRef EDSC usage
Drop the MemRef dialect EDSC subdirectory and update all uses.

Differential Revision: https://reviews.llvm.org/D102868
2021-05-20 20:13:58 +00:00
Nicolas Vasilache 4519ca3d2e [mlir][Linalg] NFC - Drop Linalg EDSC usage
Drop the Linalg dialect EDSC subdirectory and update all uses.

Differential Revision: https://reviews.llvm.org/D102848
2021-05-20 15:33:56 +00:00
Nicolas Vasilache ef33c6e3ce [mlir][Linalg] Drop spurious usage of OperationFolder
Instead, use createOrFold builders, which make more static information available.

Differential Revision: https://reviews.llvm.org/D102832
2021-05-20 09:17:58 +00:00
Nicolas Vasilache 84a880e1e2 [mlir][SCF] NFC - Drop SCF EDSC usage
Drop the SCF dialect EDSC subdirectory and update all uses.

Differential Revision: https://reviews.llvm.org/D102780
2021-05-19 15:52:14 +00:00
Nicolas Vasilache 6825bfe23e [mlir][Vector] NFC - Drop vector EDSC usage
Drop the vector dialect EDSC subdirectory and update all uses.
2021-05-19 12:44:38 +00:00
MaheshRavishankar e2b365948b [mlir][Linalg] Break unnecessary dependency through unused `outs` tensor.
LinalgOps that are all parallel do not use the value of the `outs`
tensor. The semantics are that the `outs` tensor is fully
overwritten. Using anything other than `init_tensor` can add false
dependencies between operations when the use is just for the shape of
the tensor. Adding a canonicalization to always use `init_tensor` in
such cases breaks this dependence.

Differential Revision: https://reviews.llvm.org/D102561
2021-05-18 22:31:42 -07:00
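A minimal sketch of the shape-only `outs` case described above (an illustrative elementwise generic, not the exact canonicalization output):
```
#id = affine_map<(d0) -> (d0)>
func @elementwise(%in: tensor<8xf32>) -> tensor<8xf32> {
  // All iterators are parallel, so the value of the outs operand is never read;
  // an init_tensor only provides the shape and adds no false dependence.
  %init = linalg.init_tensor [8] : tensor<8xf32>
  %res = linalg.generic {indexing_maps = [#id, #id], iterator_types = ["parallel"]}
      ins(%in : tensor<8xf32>) outs(%init : tensor<8xf32>) {
  ^bb0(%a: f32, %b: f32):
    %n = negf %a : f32
    linalg.yield %n : f32
  } -> tensor<8xf32>
  return %res : tensor<8xf32>
}
```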
Tobias Gysi 7c16f93c44 [mlir][linalg] Remove template parameter from loop lowering.
Replace the templated linalgLowerOpToLoops method with three specialized methods: linalgOpToLoops, linalgOpToParallelLoops, and linalgOpToAffineLoops.

Differential Revision: https://reviews.llvm.org/D102324
2021-05-17 09:31:53 +00:00
Adrian Kuegel 5ef21506b9 Add support for complex constants to MLIR core.

Differential Revision: https://reviews.llvm.org/D101908
2021-05-17 09:12:39 +02:00
Nicolas Vasilache dd65f420cd [mlir][Linalg] NFC - More gracefully degrade lookup into failure during comprehensive bufferization (4/n)
Differential revision: https://reviews.llvm.org/D102420
2021-05-14 22:12:23 +00:00
Nicolas Vasilache 6f90955f69 [mlir][Linalg] Add support for subtensor_insert comprehensive bufferization (3/n)
Differential revision: https://reviews.llvm.org/D102417
2021-05-14 21:51:00 +00:00
Rahul Joshi 23a84e1c60 [MLIR] Fix build failures due to unused variables in non-debug builds.
Differential Revision: https://reviews.llvm.org/D102458
2021-05-13 18:42:48 -07:00
Nicolas Vasilache bebf5d56bf [mlir][Linalg] Add support for vector.transfer ops to comprehensive bufferization (2/n).
Differential revision: https://reviews.llvm.org/D102395
2021-05-13 22:26:28 +00:00
Nicolas Vasilache 1e01a8919f [mlir][Linalg] Add ComprehensiveBufferize for functions(step 1/n)
This is the first step towards upstreaming comprehensive bufferization following the
discourse post: https://llvm.discourse.group/t/rfc-linalg-on-tensors-update-and-comprehensive-bufferization-rfc/3373/6.

This first commit introduces a basic pass for bufferizing within function boundaries,
assuming that the inplaceable function boundaries have been marked as such.

Differential revision: https://reviews.llvm.org/D101693
2021-05-13 22:24:40 +00:00
Sean Silva 12874e93a1 [mlir][NFC] Add helper for common pattern of replaceAllUsesExcept
This covers the extremely common case of replacing all uses of a Value
with a new op that is itself a user of the original Value.

This should also be a little bit more efficient than the
`SmallPtrSet<Operation *, 1>{op}` idiom that was being used before.

Differential Revision: https://reviews.llvm.org/D102373
2021-05-13 12:42:10 -07:00
Tobias Gysi cf194da1bb [mlir][linalg] Remove IndexedGenericOp support from FusionOnTensors...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102163
2021-05-13 14:57:16 +00:00
Tobias Gysi f358c37209 [mlir][linalg] Remove IndexedGenericOp support from DropUnitDims...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102235
2021-05-13 14:18:59 +00:00
Tobias Gysi 06bb9cf30d [mlir][linalg] Remove IndexedGenericOp support from LinalgInterchangePattern...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102245
2021-05-12 13:01:37 +00:00
Tobias Gysi c6b96ae06f [mlir][linalg] Remove IndexedGenericOp support from LinalgBufferize...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102308
2021-05-12 12:15:05 +00:00
Tobias Gysi 7bc6df2528 [mlir][linalg] Remove IndexedGenericOp support from LinalgToLoops...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102187
2021-05-11 06:53:47 +00:00
Tobias Gysi 6676e09b22 [mlir][linalg] Remove IndexedGenericOp support from Fusion...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102174
2021-05-11 06:49:25 +00:00
Tobias Gysi d69bccf1ed [mlir][linalg] Remove IndexedGenericOp support from Tiling...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102176
2021-05-11 05:53:58 +00:00
Aart Bik bf812ea484 [mlir][linalg] remove the -now- obsolete sparse support in linalg
All glue and clutter in the linalg ops has been replaced by proper
sparse tensor type encoding. This code is no longer needed. Thanks
to ntv@ for giving us a temporary home in linalg.

So long, and thanks for all the fish.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102098
2021-05-10 16:49:33 -07:00
River Riddle 53b946aa63 [mlir] Refactor the representation of function-like argument/result attributes.
The current design uses a unique entry for each argument/result attribute, with the name of the entry being something like "arg0". This provides for a somewhat sparse design, but ends up being much more expensive (from a runtime perspective) in-practice. The design requires building a string every time we lookup the dictionary for a specific arg/result, and also requires N attribute lookups when collecting all of the arg/result attribute dictionaries.

This revision restructures the design to instead have an ArrayAttr that contains all of the attribute dictionaries for arguments and another for results. This design reduces the number of attribute name lookups to 1, and allows for O(1) lookup for individual element dictionaries. The major downside is that we can end up with larger memory usage, as the ArrayAttr contains an entry for each element even if that element has no attributes. If the memory usage becomes too problematic, we can experiment with a more sparse structure that still provides a lot of the wins in this revision.

This dropped the compilation time of a somewhat large TensorFlow model from ~650 seconds to ~400 seconds.

Differential Revision: https://reviews.llvm.org/D102035
2021-05-07 19:32:31 -07:00
Alexander Belyaev a3f22d020b [mlir] Add a pattern to bufferize linalg.tensor_reshape.
Differential Revision: https://reviews.llvm.org/D102089
2021-05-07 21:31:17 +02:00
Tobias Gysi f31531a30b [mlir][linalg] Remove redundant indexOp builder.
Remove the builder signature taking a signed dimension identifier.

Reviewed By: ergawy

Differential Revision: https://reviews.llvm.org/D102055
2021-05-07 14:22:12 +00:00
MaheshRavishankar 05a89312d8 [mlir][Linalg] Allow folding to rank-zero tensor when using rank-reducing subtensors.
The pattern to convert subtensor ops to their rank-reduced versions
(by dropping unit-dims in the result) can also convert to a zero-rank
tensor. Handle that case.
This also fixes an OOB access bug in the existing pattern for such
cases.

Differential Revision: https://reviews.llvm.org/D101949
2021-05-06 19:03:55 -07:00
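For reference, a rank-reducing slice of the kind this pattern produces might look as follows (a sketch; the rank-zero result case follows the same idea with all unit dimensions dropped):
```
func @rank_reduce(%t: tensor<1x4xf32>) -> tensor<4xf32> {
  // The unit dimension of the source is dropped in the result type.
  %s = subtensor %t[0, 0] [1, 4] [1, 1] : tensor<1x4xf32> to tensor<4xf32>
  return %s : tensor<4xf32>
}
```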
thomasraoux 52525cb20f [mlir][linalg][NFC] Make reshape folding control more fine grain
This exposes a lambda instead of just a boolean to control unit
dimension folding.
This, however, gives the user more control to pick a good heuristic.
Folding reshapes helps fusion opportunities but may generate sub-optimal
generic ops.

Differential Revision: https://reviews.llvm.org/D101917
2021-05-06 10:11:39 -07:00
MaheshRavishankar b6060b7673 [mlir][Linalg] Fix element type of results when folding reshapes.
Fixes a minor bug which led to the element type of the output being
modified when folding reshapes with a generic op.

Differential Revision: https://reviews.llvm.org/D101942
2021-05-05 15:40:41 -07:00
Tobias Gysi 4a6ee23d83 [mlir][linalg] Fix bug in the fusion on tensors index op handling.
The old index op handling let the new index operations point back to the
producer block. As a result, after fusion some index operations in the
fused block had back references to the old producer block resulting in
illegal IR. The patch now relies on a block and value mapping to avoid
such back references.

Differential Revision: https://reviews.llvm.org/D101887
2021-05-05 14:46:08 +00:00
Alexander Belyaev 2865d114f9 [mlir] Use ReassociationIndices instead of affine maps in linalg.reshape.
Differential Revision: https://reviews.llvm.org/D101861
2021-05-05 12:59:57 +02:00
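As an illustration of the new attribute form (assumed syntax sketched from the description; previously the groups were spelled as affine maps):
```
// Reassociation is written as groups of dimension indices rather than affine maps.
func @collapse(%t: tensor<2x3x4xf32>) -> tensor<6x4xf32> {
  %r = linalg.tensor_reshape %t [[0, 1], [2]]
      : tensor<2x3x4xf32> into tensor<6x4xf32>
  return %r : tensor<6x4xf32>
}
```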
Aart Bik a2c9d4bb04 [mlir][sparse] Introduce proper sparsification passes
This revision migrates more code from Linalg into the new permanent home of
SparseTensor. It replaces the test passes with proper compiler passes.

NOTE: the actual removal of the last glue and clutter in Linalg will follow

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D101811
2021-05-04 17:10:09 -07:00
Tobias Gysi 05d2297b86 [mlir][linalg] Always lower index operations during loop lowering.
Ensure the index operations are lowered on all linalg loop lowering paths.

Differential Revision: https://reviews.llvm.org/D101827
2021-05-04 14:30:59 +00:00
Eugene Zhulenev 9b67096fe9 [mlir] Linalg: add vector transfer lowering patterns to the contraction lowering
This fixes a performance regression in vec-mat vectorization

Reviewed By: asaadaldien

Differential Revision: https://reviews.llvm.org/D101795
2021-05-03 16:21:51 -07:00
MaheshRavishankar a6e09391bb [mlir][Linalg] Add a utility method to get reassociations maps for reshape.
Given the source and destination shapes, if they are static, or if the
expanded/collapsed dimensions are unit-extent, it is possible to
compute the reassociation maps that can be used to reshape one type
into another. Add a utility method to return the reassociation maps
when possible.

This utility function can be used to fuse a sequence of reshape ops,
given the type of the source of the producer and the final result
type. This pattern supersedes a more constrained folding pattern added
to the DropUnitDims pass.

Differential Revision: https://reviews.llvm.org/D101343
2021-05-03 14:40:15 -07:00
MaheshRavishankar fd15e2b825 [mlir][Linalg] Use rank-reduced versions of subtensor and subtensor insert when possible.
Convert subtensor and subtensor_insert operations to use their
rank-reduced versions to drop unit dimensions.

Differential Revision: https://reviews.llvm.org/D101495
2021-05-03 12:51:24 -07:00
thomasraoux 9621c1ef56 [mlir][linalg] Fix vectorization bug in vector transfer indexing map calculation
The current implementation had a bug as it was relying on the target vector
dimension sizes to calculate where to insert broadcast. If several dimensions
have the same size we may insert the broadcast on the wrong dimension. The
correct broadcast cannot be inferred from the type of the source and
destination vector.

Instead when we want to extend transfer ops we calculate an "inverse" map to the
projected permutation and insert broadcast in place of the projected dimensions.

Differential Revision: https://reviews.llvm.org/D101738
2021-05-03 12:16:38 -07:00
Frederik Gossen 456efbc0f1 [MLIR][Linalg] Avoid forward declaration in `Loops.cpp`
Differential Revision: https://reviews.llvm.org/D101771
2021-05-03 21:06:50 +02:00
Frederik Gossen ec339163a7 [MLIR][Linalg] Lower `linalg.tiled_loop` in a separate pass
Add dedicated pass `convert-linalg-tiled-loops-to-scf` to lower
`linalg.tiled_loop`s.

Differential Revision: https://reviews.llvm.org/D101768
2021-05-03 21:02:02 +02:00
Frederik Gossen d2a291a5f8 [MLIR][Linalg] Lower `linalg.tiled_loop` to `scf` loops
Differential Revision: https://reviews.llvm.org/D101747
2021-05-03 18:47:12 +02:00
Aart Bik 319072f4e3 [mlir][sparse] migrate sparse operations into new sparse tensor dialect
This is the very first step toward removing the glue and clutter from linalg and
replacing it with proper sparse tensor types. This revision migrates the LinalgSparseOps
into SparseTensorOps of a sparse tensor dialect. This also provides a new home for
sparse tensor related transformations.

NOTE: the actual replacement with sparse tensor types (and removal of linalg glue/clutter)
will follow but I am trying to keep the amount of changes per revision manageable.

Differential Revision: https://reviews.llvm.org/D101573
2021-04-29 15:52:35 -07:00
Mehdi Amini 086e0f05bf Revert "[mlir][sparse] migrate sparse operations into new sparse tensor dialect"
This reverts commit a6d92a9711.

The build with -DBUILD_SHARED_LIBS=ON is broken.
2021-04-29 20:59:41 +00:00
Aart Bik a6d92a9711 [mlir][sparse] migrate sparse operations into new sparse tensor dialect
This is the very first step toward removing the glue and clutter from linalg and
replacing it with proper sparse tensor types. This revision migrates the LinalgSparseOps
into SparseTensorOps of a sparse tensor dialect. This also provides a new home for
sparse tensor related transformations.

NOTE: the actual replacement with sparse tensor types (and removal of linalg glue/clutter)
will follow but I am trying to keep the amount of changes per revision manageable.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D101488
2021-04-29 12:09:10 -07:00
Tres Popp b863af5a5e [mlir] Add LinalgTransforms dependency on Complex 2021-04-29 12:20:44 +02:00
Tres Popp 42e5f42215 [mlir] Support complex numbers in Linalg promotion
FillOp now allows complex values, and filling a properly sized buffer with
a default zero complex number is implemented.

Differential Revision: https://reviews.llvm.org/D99939
2021-04-29 11:58:57 +02:00
Nicolas Vasilache b6113db955 [mlir][Linalg] Generalize linalg vectorization
This revision adds support for vectorizing more general linalg operations with projected permutation maps.

This is achieved by eagerly broadcasting the intermediate vector to the common size
of the iteration domain of the linalg op. This allows a much more natural expression of
generalized vectorization but may introduce additional computations until all the
proper canonicalizations are implemented.

This generalization modifies the vector.transfer_read/write permutation logic and
exposes the fact that the logic employed in vector.contract was too ad-hoc.

As a consequence, changes occur in the permutation / transposition logic for contraction. In turn this prompts supporting more cases in the lowering of contract
to matrix intrinsics, which is required to make the corresponding tests pass.

Differential revision: https://reviews.llvm.org/D101165
2021-04-29 07:44:01 +00:00
Alexander Belyaev 4b13b7581d [mlir] Add a pass to tile Linalg ops using `linalg.tiled_loop`.
Differential Revision: https://reviews.llvm.org/D101084
2021-04-27 12:33:28 +02:00
Frederik Gossen b003ebd603 [MLIR][Linalg] Generalize splat constant folding
Splat constant folding was limited to `std.constant` operations. Instead, use
the constant matcher and apply splat constant folding to any constant-like
operation that holds a splat attribute.

Differential Revision: https://reviews.llvm.org/D101301
2021-04-27 09:08:34 +02:00
Tobias Gysi 0e777e4ad7 [mlir][linalg] remove interchange option on linalg to loop lowering.
The interchange option attached to the linalg to loop lowering affects only the loops and does not update the memory accesses generated in the body of the operation. Instead of performing the interchange during the loop lowering, use the interchange pattern.

Differential Revision: https://reviews.llvm.org/D100758
2021-04-22 08:55:17 +00:00
thomasraoux d40a19c3a8 [mlir][linalg] Add pattern to push reshape after elementwise operation
This helps expose more fusion opportunities.

Differential Revision: https://reviews.llvm.org/D100685
2021-04-21 21:22:39 -07:00
Eugene Zhulenev 3f1e827abd [mlir] Linalg: do not forward memrefs to outputs when doing bufferization
Example:
```
%0 = linalg.init_tensor : tensor<...>
%1 = linalg.generic ... outs(%0: tensor<...>)
%2 = linalg.generic ... outs(%0: tensor<...>)
```

The memref allocated as a result of `init_tensor` bufferization can be incorrectly overwritten by the second linalg.generic operation.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D100921
2021-04-21 16:39:06 -07:00
Ahmed Taei 10d7924581 Fix FoldReshapeOpWithUnitExtent generating illegal reshape
This will prevent fusion that spans all dims and generates a
(d0, d1, ...) -> () reshape that isn't legal.

Differential Revision: https://reviews.llvm.org/D100805
2021-04-21 11:30:45 -07:00
thomasraoux ded18708f9 [mlir][NFC] Refactor linalg substituteMin and AffineMinSCF canonicalizations
Break up the dependency between SCF ops and the substituteMin helper and make a
more generic version of AffineMinSCFCanonicalization. This reduces dependencies
between linalg and SCF and will allow the logic to be used with other kinds of
ops (like ID ops).

Differential Revision: https://reviews.llvm.org/D100321
2021-04-21 07:19:36 -07:00
Tobias Gysi 5a451e486f [mlir][linalg] adapt named op generalization to work with captures.
Instead of always running the region builder, check if the generalized op has a region attached. If yes, inline the existing region instead of calling the region builder. This change circumvents a problem with named operations that have a region builder taking captures, which the generalization pass does not know about.

Differential Revision: https://reviews.llvm.org/D100880
2021-04-21 06:37:53 +00:00
Tobias Gysi b9715156ff [mlir][linalg] lower index operations during linalg to vector lowering.
The patch extends the vectorization pass to lower linalg index operations to vector code. It allocates constant 1d vectors that enumerate the indexes along the iteration dimensions and broadcasts/transposes these 1d vectors to the iteration space.

Differential Revision: https://reviews.llvm.org/D100373
2021-04-20 11:55:44 +00:00
KareemErgawy-TomTom 0b05207e45 [MLIR][LinAlg] Detensoring CF cost-model: look forward.
This patch extends the control-flow cost-model for detensoring by
implementing a forward-looking pass on block arguments that should be
detensored. This makes sure that if a (to-be-detensored) block argument
"escapes" its block through the terminator, then the successor arguments
are also detensored.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D100457
2021-04-20 09:01:43 +02:00
Tobias Gysi 39a604e3df [mlir][linalg] update fusion on tensors to support linalg index operations.
The patch replaces the index operations in the body of fused producers and linearizes the indices after expansion.

Differential Revision: https://reviews.llvm.org/D100479
2021-04-20 06:13:04 +00:00
Tobias Gysi d0774f7f0a [mlir][linalg] update drop unit dims to support linalg index operations.
Update the dimensions of the index operations to account for dropped dimensions and replace the index operations of dropped dimensions by zero.

Differential Revision: https://reviews.llvm.org/D100395
2021-04-20 04:54:00 +00:00
Tobias Gysi 495e1d7e8a [mlir][linalg] adding pass to run the interchange pattern.
Instead of interchanging loops during the loop lowering this pass performs the interchange by permuting the indexing maps. It also updates the iterator types and the index accesses in the body of the operation.

Differential Revision: https://reviews.llvm.org/D100627
2021-04-19 12:19:15 +00:00
Nicolas Vasilache 8cf650c554 [mlir][linalg] Add support for WAW fusion on tensors.
Differential Revision: https://reviews.llvm.org/D100603
2021-04-16 08:22:09 +00:00
Ahmed Taei 0e2f9b61fd Fix tile-and-pad when padding doesn't span all dimensions
Without this, tile-and-pad will never terminate if padding fails.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D97720
2021-04-15 20:17:40 -07:00
River Riddle 4efb7754e0 [mlir][NFC] Add a using directive for llvm::SetVector
Differential Revision: https://reviews.llvm.org/D100436
2021-04-15 16:09:34 -07:00
Aart Bik 92b0a9d7d4 [mlir][sparse] remove restriction on vectorization of index type
Rationale:
Now that vector<?xindex> is allowed, the restriction on vectorization
of index types in the sparse compiler can be removed. Also needs
generalization of scatter/gather index types.

Reviewed By: gysit

Differential Revision: https://reviews.llvm.org/D100522
2021-04-15 10:27:04 -07:00
Tobias Gysi ce82843f72 [mlir][linalg] update fusion to support linalg index operations.
The patch updates the linalg fusion pass to add the tile offsets to the indices.

Differential Revision: https://reviews.llvm.org/D100456
2021-04-14 15:32:42 +00:00
Tobias Gysi 8ea5d190ec [mlir][linalg] update tiling to support linalg index operations.
The patch updates the tiling pass to add the tile offsets to the indices returned by the linalg operations.

Differential Revision: https://reviews.llvm.org/D100379
2021-04-13 14:36:01 +00:00
Tobias Gysi ef30179eff [mlir][linalg] lower index operations during linalg to loop lowering.
The patch extends the linalg to loop lowering pass to replace all linalg index operations by the induction variables of the generated loop nests.

Differential Revision: https://reviews.llvm.org/D100364
2021-04-13 09:04:09 +00:00
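As a rough sketch of the lowered form (assuming an scf.for lowering of a simple 1-d parallel op; simplified and hypothetical):
```
// After lowering, each linalg.index result is replaced by the matching loop
// induction variable of the generated nest.
func @lowered(%out: memref<8xindex>) {
  %c0 = constant 0 : index
  %c1 = constant 1 : index
  %c8 = constant 8 : index
  scf.for %i = %c0 to %c8 step %c1 {
    // %i stands in for what linalg.index 0 returned in the op body.
    memref.store %i, %out[%i] : memref<8xindex>
  }
  return
}
```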
KareemErgawy-TomTom aa6eb2af10 [MLIR][LinAlg] Implement detensoring cost-modelling.
This patch introduces the necessary infrastructure changes to implement
cost-modelling for detensoring. In particular, it introduces the
following changes:
- An extension to the dialect conversion framework to selectively
convert a sub-set of non-entry BB arguments.
- An extension to the branch conversion pattern to selectively convert
a sub-set of a branch's operands.
- An interface for detensoring cost-modelling.
- 2 simple implementations of 2 different cost models.

This sets the stage to expose cost-modelling for detensoring in an
easier way. We still need to come up with better cost models.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D99945
2021-04-13 09:07:18 +02:00
MaheshRavishankar b0fc712b14 [mlir][Linalg] Disable const -> linalg.generic when fused op is illegal.
Fusing a constant with a linalg.generic operation can result in the
fused operation being illegal since the loop bound computation
fails. Avoid such fusions.

Differential Revision: https://reviews.llvm.org/D100272
2021-04-12 10:15:54 -07:00
Tobias Gysi 93f9922d65 [mlir][linalg] adding operation to access the iteration index of enclosing linalg ops.
The `linalg.index` operation provides access to the iteration indexes of immediately enclosing linalg operations. It takes a dimension `dim` attribute and returns the iteration index in the given dimension. Having `linalg.index` allows us to unify `linalg.generic` and `linalg.indexed_generic` and also enables index access in named operations.

Differential Revision: https://reviews.llvm.org/D100292
2021-04-12 13:37:17 +00:00
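A small sketch of the new op in use (an output-only generic that stores the sum of its iteration indices; illustrative only):
```
#map = affine_map<(d0, d1) -> (d0, d1)>
func @fill_indices(%init: tensor<4x8xindex>) -> tensor<4x8xindex> {
  %0 = linalg.generic {indexing_maps = [#map], iterator_types = ["parallel", "parallel"]}
      outs(%init : tensor<4x8xindex>) {
  ^bb0(%out: index):
    // Access the iteration index of the enclosing op along each dimension.
    %i = linalg.index 0 : index
    %j = linalg.index 1 : index
    %sum = addi %i, %j : index
    linalg.yield %sum : index
  } -> tensor<4x8xindex>
  return %0 : tensor<4x8xindex>
}
```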
MaheshRavishankar f4eb681dc3 [mlir][Linalg] Drop unit-trip loops of reductions only if other reduction loops exists.
A recent change enables dropping unit-trip loops of "reduction" iterator
type as well. This is fine as long as there is one other "reduction"
iterator in the operation. Without this, the initialized value (the value
of `out`) is not read, which leads to a correctness issue.

Also fix a bug in the `fill` -> `tensor_reshape` folding. The `out`
operand of the `fill` needs to be reshaped to get the `out` operand of
the generated `fill` operation.

Differential Revision: https://reviews.llvm.org/D100145
2021-04-08 22:31:29 -07:00
Aart Bik 3acf49829c [mlir][sparse] support integral types i32,i16,i8 for *numerical* values
Some sparse matrices operate on integral values (in contrast with the common
f32 and f64 values). This CL expands the compiler and runtime support to deal
with several common type combinations.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D99999
2021-04-07 10:01:37 -07:00
Nicolas Vasilache 518e6f341d [mlir][Linalg] Fix fusion on tensors operands / bbArg mismatch
Linalg fusion on tensors has mismatched assumptions between the operand side and the region bbArg side.
Relax the behavior on the operand/indexing map side so that we better support output operands that may also be read from.

Differential revision: https://reviews.llvm.org/D99499
2021-04-06 15:39:40 +00:00
MaheshRavishankar 944a2fe763 [mlir][Linalg] Add callbacks to fusion of elementwise operations to control fusion.
Right now, elementwise operation fusion in Linalg fuses everything it
can. This can run up against resource limits of the target hardware
without some checks. This patch adds a callback function that clients
can use to implement a cost function. When two elementwise operations
are deemed structurally fusable, the callback can be used to control
if the fusion applies.

Differential Revision: https://reviews.llvm.org/D99820
2021-04-05 16:08:47 -07:00
MaheshRavishankar ea069aebcc [mlir][Linalg] NFC: Move populatePatterns* method into linalg namespace.
The moved `populate` methods are only relevant to Linalg
operations, so they are better off in the `linalg` namespace. Also rename
`populateLinalgTensorOpsFusionPatterns` to
`populateElementwiseOpsFusionPatterns`. This makes the scope of these
patterns explicit and disambiguates it with fusion on tensors using
tile + fuse.

Differential Revision: https://reviews.llvm.org/D99819
2021-04-05 11:16:02 -07:00
Aart Bik a0c5b7e3b5 [mlir][sparse] support for very narrow index and pointer types
Rationale:
Small indices and values, when allowed by the required range of the
input tensors, can reduce the memory footprint of sparse tensors
even more. Note, however, that we must be careful when zero-extending
the values (since sparse tensors never use negatives for indexing),
but LLVM treats the index type as signed in most memory operations
(like the scatter and gather). This CL dots all the i's in this regard.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D99777
2021-04-01 18:21:27 -07:00
MaheshRavishankar f0a2fe7f79 [mlir][Linalg] Rewrite SubTensors that take a slice out of a unit-extend dimension.
Subtensor operations that take a slice out of a tensor that is
unit-extent along a dimension can be rewritten to drop that dimension.

Differential Revision: https://reviews.llvm.org/D99226
2021-03-29 09:19:36 -07:00
MaheshRavishankar 7d8b478ce1 [mlir][Linalg] Drop spurious error message
Drop usage of `emitRemark` and use `notifyMatchFailure` instead to
avoid unnecessary spew during compilation.

Differential Revision: https://reviews.llvm.org/D99485
2021-03-29 09:17:25 -07:00
Lei Zhang c241e1c2f5 [mlir][linalg] Support dropping unit dimensions for init tensors
Init tensor operands also have indexing maps and generally follow
the same constraints we expect for non-init-tensor operands.

Differential Revision: https://reviews.llvm.org/D99115
2021-03-24 18:17:58 -04:00
Lei Zhang 7f28d27cb6 [mlir][linalg] Allow controlling folding unit dim reshapes
This commit exposes an option to the pattern
FoldWithProducerReshapeOpByExpansion to allow
folding unit dim reshapes. This gives callers
more fine-grained control.

Differential Revision: https://reviews.llvm.org/D99114
2021-03-24 18:17:57 -04:00
Lei Zhang e58597ee1c [mlir][linalg] Fuse producers with non-permutation indexing maps
Until now Linalg fusion only allowed fusing producers whose operands
all have permutation indexing maps. It's easier to deduce the
subtensor/subview but it is an unnecessary constraint, as in tiling
we have more advanced logic to deduce the subranges even when the
operand is not of permutation indexing maps, e.g., the input operand
for convolution ops.

This patch uses the logic on tiling side to deduce subranges for
fusion. This enables fusing convolution with its consumer ops
when possible.

Along the way, we are now generating proper affine.min ops to guard
against size boundaries, if we cannot be certain they won't be
out of bounds.

Differential Revision: https://reviews.llvm.org/D99014
2021-03-24 18:17:57 -04:00
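For example, a size guard of the kind described might be emitted as follows (a hedged sketch; the names and the map are illustrative, not the exact output of the pass):
```
// Clamp the subrange size so the last tile does not read out of bounds:
// min(tile_size, dim_size - iv).
#bound = affine_map<(d0)[s0, s1] -> (s0, s1 - d0)>
func @tile_bound(%iv: index, %tile: index, %dim: index) -> index {
  %sz = affine.min #bound(%iv)[%tile, %dim]
  return %sz : index
}
```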
Lei Zhang ddf93abf49 [mlir][linalg] NFC: Move makeTiledShapes into Utils.{h|cpp}
This is a preparation step to reuse makeTiledShapes in tensor
fusion. Along the way, did some lightweight cleanups.

Differential Revision: https://reviews.llvm.org/D99013
2021-03-24 18:17:57 -04:00
Tobias Gysi 880822255e [mlir][linalg] Do not call region builder during vectorization.
All linalg operations having a region builder shall call it during op creation. Calling it during vectorization is obsolete.

Differential Revision: https://reviews.llvm.org/D99168
2021-03-24 14:55:11 +00:00
Nicolas Vasilache 7716e5535c [mlir] Fixes to hoist padding
Fix the BlockAndValueMapping update that was missing entries for scf.for op's blockIterArgs.
Skip cloning subtensors of the padded tensor as the logic for these is separate.
Add a filter to drop side-effecting ops.

Tests are beefed up to verify the IR is sound in all hoisting configurations for 2-level 3-D tiled matmul.

Differential Revision: https://reviews.llvm.org/D99255
2021-03-24 11:51:28 +00:00
River Riddle 76f3c2f3f3 [mlir][Pattern] Add better support for using interfaces/traits to match root operations in rewrite patterns
To match an interface or trait, users currently have to use the `MatchAny` tag. This tag can be quite problematic for compile time for things like the canonicalizer, as the `MatchAny` patterns may get applied to *every* operation. This revision adds better support by bucketing interface/trait patterns based on which registered operations have them registered. This means that moving forward we will only attempt to match these patterns to operations that have this interface registered. To simplify defining patterns that match traits and interfaces, two new utility classes have been added: OpTraitRewritePattern and OpInterfaceRewritePattern.

Differential Revision: https://reviews.llvm.org/D98986
2021-03-23 14:05:33 -07:00
Alex Zinenko 20c68d9441 [mlir] silence -Wunused-variable in release mode in Linalg transforms 2021-03-23 18:59:12 +01:00
Nicolas Vasilache 2240568579 [MLIR][Linalg] Hoist padding across multiple levels of tiling
This revision introduces proper backward slice computation during the hoisting of
PadTensorOp. This allows hoisting padding even across multiple levels of tiling.
Such hoisting requires the proper handling of loop bounds that may depend on enclosing
loop variables.

Differential revision: https://reviews.llvm.org/D98965
2021-03-23 17:47:32 +00:00
Chris Lattner 79d7f618af Rename FrozenRewritePatternList -> FrozenRewritePatternSet; NFC.
This nicely aligns the naming with RewritePatternSet. This type isn't
as widely used, but we keep a using declaration around to help with
downstream consumption of this change.

Differential Revision: https://reviews.llvm.org/D99131
2021-03-22 17:40:45 -07:00
Chris Lattner dc4e913be9 [PatternMatch] Big mechanical rename OwningRewritePatternList -> RewritePatternSet and insert -> add. NFC
This doesn't change APIs; it just cleans up the many in-tree uses of these
names to use the new preferred names. We'll keep the old names around for a
couple of weeks to help transitions.

Differential Revision: https://reviews.llvm.org/D99127
2021-03-22 17:20:50 -07:00
Nicolas Vasilache bcd6424f9b [mlir][Linalg] Fix linalg on tensor fusion
- Drop unnecessary occurrences of rewriter.eraseOp: dead linalg ops on tensors should be cleaned up by DCE.
- Reimplement the part of Linalg fusion on tensors that constructs the body and block arguments: the previous implementation had too much magic. Instead, this spells out all cases explicitly and asserts / introduces TODOs for incorrect cases.

As a consequence, we can use the default traversal order for this pattern.

Differential Revision: https://reviews.llvm.org/D99070
2021-03-22 13:29:40 +00:00
Adrian Kuegel c691b9686b [mlir] Add an option to still use bottom-up traversal
GreedyPatternRewriteDriver was changed from bottom-up traversal to top-down traversal. Not all passes work yet with that change in traversal order. To give some time for fixing, add an option to allow switching back to bottom-up traversal. Use this option in FusionOfTensorOpsPass, which fails otherwise.

Differential Revision: https://reviews.llvm.org/D99059
2021-03-22 09:49:44 +01:00
Chris Lattner 3a506b31a3 Change OwningRewritePatternList to carry an MLIRContext with it.
This updates the codebase to pass the context when creating an instance of
OwningRewritePatternList, and starts removing extraneous MLIRContext
parameters.  There are many many more to be removed.

Differential Revision: https://reviews.llvm.org/D99028
2021-03-21 10:06:31 -07:00
Benjamin Kramer 6327a7cfd7 [mlir][Linalg] Make LLVM_DEBUG region bigger to avoid warnings in Release builds
Transforms.cpp:586:16: error: unused variable 'v' [-Werror,-Wunused-variable]
    for (Value v : operands)
               ^
2021-03-19 20:56:59 +01:00
Nicolas Vasilache 5b2d8503d1 [mlir][Linalg] NFC - Expose helper function `substituteMin`. 2021-03-19 16:26:52 +00:00
Lei Zhang fcc1ce0093 Revert "Revert "[mlir] Add linalg.fill bufferization conversion""
This reverts commit c69550c132 with
proper fix applied.
2021-03-18 17:21:58 -04:00
Mehdi Amini c69550c132 Revert "[mlir] Add linalg.fill bufferization conversion"
This reverts commit 32a744ab20.

CI is broken:

test/Dialect/Linalg/bufferize.mlir:274:12: error: CHECK: expected string not found in input
 // CHECK: %[[MEMREF:.*]] = tensor_to_memref %[[IN]] : memref<?xf32>
           ^
2021-03-18 21:18:07 +00:00
Eugene Zhulenev 32a744ab20 [mlir] Add linalg.fill bufferization conversion
`BufferizeAnyLinalgOp` fails because `FillOp` is not a `LinalgGenericOp`: it fails while reading the operand sizes attribute.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98671
2021-03-18 13:41:16 -07:00
thomasraoux 16947650d5 [mlir][linalg] Extend linalg vectorization to support non-identity input maps
This propagates the affine map to the transfer_read op in case it is not a
minor identity map.

Differential Revision: https://reviews.llvm.org/D98523
2021-03-18 12:32:35 -07:00
Julian Gross e2310704d8 [MLIR] Create memref dialect and move dialect-specific ops from std.
Create the memref dialect and move dialect-specific ops
from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D98041
2021-03-15 11:14:09 +01:00
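For orientation, a small sketch using a handful of the moved ops as they read after the split (illustrative only):
```
func @alloc_store_load(%idx: index) -> f32 {
  // Formerly std.alloc / std.store / std.load / std.dealloc.
  %m = memref.alloc() : memref<16xf32>
  %cst = constant 1.0 : f32
  memref.store %cst, %m[%idx] : memref<16xf32>
  %v = memref.load %m[%idx] : memref<16xf32>
  memref.dealloc %m : memref<16xf32>
  return %v : f32
}
```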
Aart Bik e7ee4eaaf7 [mlir][sparse] disable nonunit stride dense vectorization
This is a temporary work-around to get our all-annotations-all-flags
stress testing effort to run clean. In the long run, we want to provide
efficient implementations of strided loads and stores, though.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D98563
2021-03-12 16:49:32 -08:00
Inho Seo 2ce4caf414 Moved getStaticLoopRanges and getStaticShape methods to LinalgInterfaces.td to add static shape verification
This makes the methods in LinalgInterfaces.cpp usable for additional static shape verification to match the shaped operands and loops on LinalgOps. Using the existing methods would have caused a circular dependency linking issue. Now we can use them as methods of LinalgOp.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D98163
2021-03-10 04:06:22 -08:00
Tobias Gysi c1a4cd551f [mlir][linalg] refactor the result handling during vectorization.
Return the vectorization results using a vector passed by reference instead of returning them embedded in a structure.

Differential Revision: https://reviews.llvm.org/D98182
2021-03-09 07:11:57 +00:00
Aart Bik adc35b689f [mlir][sparse] mask reduction update
Reduction updates should be masked, just like the load and stores.
Note that alternatively, we could use the fact that masked values are
zero for += updates and mask invariants to get this working, but that
would not work for *= updates. Masking the update itself is cleanest.
This change also replaces the constant mask with a broadcast of "true"
since this constant folds much better for various folding patterns.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98000
2021-03-05 08:56:10 -08:00
Nicolas Vasilache c86d3c1a38 [mlir][Linalg] Fix order of dimensions in hoistPaddingOnTensors. 2021-03-05 15:11:35 +00:00
Aart Bik 553cb6d473 [mlir][sparse] fix bug in reduction chain
Found with exhaustive testing, it is possible that a while loop
appears in between chainable for loops. As long as we don't
scalarize reductions in while loops, this means we need to
terminate the chain at the while. This also refactors the
reduction code into more readable helper methods.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97886
2021-03-03 17:38:22 -08:00
Aart Bik 5b333d3449 [mlir][sparse] do not ignore ordering for "dense" tensor linked with sparse type
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97795
2021-03-02 15:21:51 -08:00
Frederik Gossen bcc9b371e4 Split `ElementwiseMappable` trait into four more precise traits.
Some elementwise operations are not scalarizable, vectorizable, or tensorizable.
Split `ElementwiseMappable` trait into the following, more precise traits.
  - `Elementwise`
  - `Scalarizable`
  - `Vectorizable`
  - `Tensorizable`
This allows for reuse of `Elementwise` in dialects like HLO.

Differential Revision: https://reviews.llvm.org/D97674
2021-03-02 15:31:19 +01:00
KareemErgawy-TomTom 3b021fbdc0 [MLIR][LinAlg] Detensorize internal function control flow.
This patch continues the detensoring implementation by detensoring
internal control flow in functions.

In order to detensorize functions, all the non-entry block's arguments
are detensored and branches between such blocks are properly updated to
reflect the detensored types as well. Function entry block (signature)
is left intact.

This continues work towards handling github/google/iree#1159.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D97148
2021-03-02 11:46:20 +01:00
Aart Bik 6afaea6682 [mlir][sparse] fixed inaccuracy in maintaining universal index
The universal index was maintained if dense indices were still
in place, and lattice points followed. However, it should only
be kept if any of those following lattice points actually
consumes the universal index. This change also fixes an
inaccuracy with a missing broadcast around a vector invariant.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97594
2021-02-27 17:32:57 -08:00
Aart Bik df5ccf5a94 [mlir][vector] add higher dimensional support to gather/scatter
Similar to mask-load/store and compress/expand, the gather and
scatter operations now allow for higher dimension uses. Note that
to support the mixed-type index, the new syntax is:
   vector.gather %base [%i,%j] [%kvector] ....
The first client of this generalization is the sparse compiler,
which needs to define scatters and gathers on dense operands
of higher dimensions too.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97422
2021-02-26 14:20:19 -08:00
Christian Sigg dffc487b07 [mlir] Mark OpState::removeAttr() deprecated.
Fix call sites.

The method will be removed 2 weeks later.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97530
2021-02-26 12:04:41 +01:00
Aart Bik 17fa919847 [mlir][sparse] incorporate vector index into address computation
When computing a dense address, a vectorized index must be accounted
for properly. This bug was formerly undetected because we get 0 * prev + i
in most cases, which folds away the scalar part. Now it works for all cases.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97317
2021-02-23 13:25:51 -08:00
Nicolas Vasilache 8cf14b8dec [mlir][Linalg] Retire hoistViewAllocOps.
This transformation was only used for quick experimentation and is not general enough.
Retire it.

Differential Revision: https://reviews.llvm.org/D97266
2021-02-23 11:45:19 +00:00
KareemErgawy-TomTom 67e0d58de4 [MLIR][LinAlg] Start detensoring implementation.
This commit is the first baby step towards detensoring in
linalg-on-tensors.

Detensoring is the process through which a tensor value is converted to one
or potentially more primitive value(s). During this process, operations with
such detensored operands are also converted to an equivalent form that works
on primitives.

The detensoring process is driven by linalg-on-tensor ops. In particular, a
linalg-on-tensor op is checked to see whether *all* its operands can be
detensored. If so, those operands are converted to their primitive
counterparts and the linalg op is replaced by an equivalent op that takes
those new primitive values as operands.

This works towards handling github/google/iree#1159.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96271
2021-02-23 08:27:58 +01:00
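To make the shape of the rewrite concrete, a hypothetical scalar addition expressed on 0-d tensors (a sketch, assuming `linalg.init_tensor` accepts a rank-0 result; the detensored form is just the `addi` on the primitive i32 values):
```
#scalar = affine_map<() -> ()>
func @add(%a: tensor<i32>, %b: tensor<i32>) -> tensor<i32> {
  %init = linalg.init_tensor [] : tensor<i32>
  // All operands are 0-d tensors, so this op is a candidate for detensoring
  // into a plain addi on i32 values.
  %0 = linalg.generic {indexing_maps = [#scalar, #scalar, #scalar], iterator_types = []}
      ins(%a, %b : tensor<i32>, tensor<i32>) outs(%init : tensor<i32>) {
  ^bb0(%x: i32, %y: i32, %o: i32):
    %s = addi %x, %y : i32
    linalg.yield %s : i32
  } -> tensor<i32>
  return %0 : tensor<i32>
}
```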
Aart Bik 0df59f234b [sparse][mlir] simplify lattice optimization logic
Simplifies the way lattices are optimized with fewer, but more
powerful, rules. This also fixes an inaccuracy where too many
lattices resulted (expecting a non-existing universal index).
Also puts no-side-effects on all proper getters and unifies
bufferization flags order in integration tests (for future,
more complex use cases).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97134
2021-02-22 16:52:06 -08:00
Nicolas Vasilache 62f5c46eec [mlir][Linalg] NFC - Expose more options to the CodegenStrategy 2021-02-19 14:01:44 +00:00
Alexander Belyaev a89035d750 Revert "[MLIR] Create memref dialect and move several dialect-specific ops from std."
This commit introduced a cyclic dependency:
Memref dialect depends on Standard because it used ConstantIndexOp.
Std depends on the MemRef dialect in its EDSC/Intrinsics.h

Working on a fix.

This reverts commit 8aa6c3765b.
2021-02-18 12:49:52 +01:00
Julian Gross 8aa6c3765b [MLIR] Create memref dialect and move several dialect-specific ops from std.
Create the memref dialect and move several dialect-specific ops without
dependencies to other ops from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
DeallocOp -> MemRef_DeallocOp
MemRefCastOp -> MemRef_CastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
TransposeOp -> MemRef_TransposeOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D96425
2021-02-18 11:29:39 +01:00
Aart Bik ff6c84b803 [mlir][sparse] generalize sparse storage format to many more types
Rationale:
Narrower types for overhead storage yield a smaller memory footprint for
sparse tensors and thus need to be supported. Also, more value types
need to be supported to deal with all kinds of kernels. Since the
"one-size-fits-all" sparse storage scheme implementation is used
instead of actual codegen, the library needs to be able to support
all combinations of desired types. With some crafty templating and
overloading, the actual code for this is kept reasonably sized though.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D96819
2021-02-17 18:20:23 -08:00
Nicolas Vasilache 21debeae78 [mlir][Linalg] Generalize vector::transfer hoisting on tensors.
This revision adds support for hoisting "subtensor + vector.transfer_read" / "subtensor_insert + vector.transfer_write pairs" across scf.for.
The unit of hoisting becomes a HoistableRead / HoistableWrite struct which contains a pair of "vector.transfer_read + optional subtensor" / "vector.transfer_write + optional subtensor_insert".
scf::ForOp canonicalization patterns are applied greedily on the successful application of the transformation to cleanup the IR more eagerly and potentially expose more transformation opportunities.

Differential revision: https://reviews.llvm.org/D96731
2021-02-16 09:45:14 +00:00
Nicolas Vasilache d01ea0edaa [mlir] Drop reliance of SliceAnalysis on specific ops.
SliceAnalysis was originally developed in the context of affine.for within mlfunc.
It predates the notion of region.
This revision updates it to not hardcode specific ops like scf::ForOp.
When rooted at an op, the behavior of the slice computation changes as it recurses into the regions of the op. This does not support gathering all values transitively depending on a loop induction variable anymore.
Additional variants rooted at a Value are added to also support the existing behavior.

Differential revision: https://reviews.llvm.org/D96702
2021-02-16 06:34:32 +00:00
Nicolas Vasilache 428bc6feed [mlir][Linalg] Fix constant detection in linalg.pad_tensor vectorization. 2021-02-14 15:53:39 +00:00
Mehdi Amini aa4e466caa [mlir][Linalg] Improve region support in Linalg ops
This revision takes advantage of the newly extended `ref` directive in assembly format
to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization.

This reverts commit 3f22547fd1 and reland 973e133b76 with a workaround for
a gcc bug that does not accept lambda default parameters:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59949

Differential Revision: https://reviews.llvm.org/D96598
2021-02-12 19:11:24 +00:00
Mehdi Amini 3f22547fd1 Revert "[mlir][Linalg] Improve region support in Linalg ops."
This reverts commit 973e133b76.

It triggers an issue in gcc5 that require investigation, the build is
broken with:

/tmp/ccdpj3B9.s: Assembler messages:
/tmp/ccdpj3B9.s:5821: Error: symbol `_ZNSt17_Function_handlerIFvjjEUljjE2_E9_M_invokeERKSt9_Any_dataOjS6_' is already defined
/tmp/ccdpj3B9.s:5860: Error: symbol `_ZNSt14_Function_base13_Base_managerIUljjE2_E10_M_managerERSt9_Any_dataRKS3_St18_Manager_operation' is already defined
2021-02-12 18:15:51 +00:00
Nicolas Vasilache 973e133b76 [mlir][Linalg] Improve region support in Linalg ops.
This revision takes advantage of the newly extended `ref` directive in assembly format
to allow better region handling for LinalgOps. Specifically, FillOp and CopyOp now build their regions explicitly which allows retiring older behavior that relied on specific op knowledge in both lowering to loops and vectorization.

Differential Revision: https://reviews.llvm.org/D96598
2021-02-12 14:51:03 +00:00
Stephan Herhut 4348d8ab7f [mlir][math] Split off the math dialect.
This does not split transformations, yet. Those will be done as future clean ups.

Differential Revision: https://reviews.llvm.org/D96272
2021-02-12 10:55:12 +01:00
Nicolas Vasilache 5bc4f8846c [mlir] Tighten computation of inferred SubView result type.
The AffineMap in the MemRef inferred by SubViewOp may have uncompressed symbols which result in a type mismatch on otherwise unused symbols. Make the computation of the AffineMap compress those unused symbols, which results in better canonical types.
Additionally, improve the error message to report which inferred type was expected.

Differential Revision: https://reviews.llvm.org/D96551
2021-02-11 22:38:16 +00:00
Hanhan Wang 9325b8da17 [mlir][Linalg] Add conv ops with TF definition.
The dimension order of a filter in TensorFlow is
[filter_height, filter_width, in_channels, out_channels], which is different
from the current definition. The current definition follows the TOSA spec. Add TF
version conv ops to .tc, so we do not have to insert a transpose op around a
conv op.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D96038
2021-02-10 22:59:38 -08:00
Sanjoy Das bac1f12727 NFC; fix typo in comment
This should have gone in with a76761cf0d.
2021-02-10 21:34:29 -08:00
Sanjoy Das a76761cf0d NFC comment-only cleanups
- Remove leftover comment from de2568aab8
 - Fix a typo in a comment
2021-02-10 21:30:52 -08:00
Nicolas Vasilache 4643fd27c8 [mlir][Linalg] Fix crash when tileSizeComputationFunction is left unspecified 2021-02-10 22:47:05 +00:00
Aart Bik 0b1764a3d7 [mlir][sparse] sparse tensor storage implementation
This revision connects the generated sparse code with an actual
sparse storage scheme, which can be initialized from a test file.
Lacking a first-class citizen SparseTensor type (with buffer),
the storage is hidden behind an opaque pointer with some "glue"
to bring the pointer back to tensor land. Rather than generating
sparse setup code for each different annotated tensor (viz. the
"pack" methods in TACO), a single "one-size-fits-all" implementation
has been added to the runtime support library.  Many details and
abstractions need to be refined in the future, but this revision
allows full end-to-end integration testing and performance
benchmarking (with on one end, an annotated Linalg
op and, on the other end, a JIT/AOT executable).

Reviewed By: nicolasvasilache, bixia

Differential Revision: https://reviews.llvm.org/D95847
2021-02-10 11:57:24 -08:00
Nicolas Vasilache 0ac3d97bf4 [mlir][Linalg] Fix pad hoisting.
This revision fixes the indexing logic into the packed tensor that results from hoisting padding. Previously, the index was incorrectly set to the loop induction variable when in fact we need to compute the iteration count (i.e. `(iv - lb).ceilDiv(step)`).

Differential Revision: https://reviews.llvm.org/D96417
2021-02-10 16:49:38 +00:00
Nicolas Vasilache bb69de3f41 [mlir][Linalg] Add a vectorization pattern for linalg::PadTensorOp
The new pattern is exercised from the TestLinalgTransforms pass.

Differential Revision: https://reviews.llvm.org/D96410
2021-02-10 14:13:49 +00:00
Nicolas Vasilache d57a305fdf [mlir][Linalg] Fix padding related bugs.
This revision fixes the fact that the padding transformation did not have enough information to set the proper type for the padding value.
Additionally, the verifier for Yield in the presence of PadTensorOp is fixed to properly report an incorrect number of results or operands. Previously, the error would be silently ignored, which made the core issue difficult to debug.

Differential Revision: https://reviews.llvm.org/D96264
2021-02-08 18:59:24 +00:00
Tres Popp c2c83e97c3 Revert "Revert "Reorder MLIRContext location in BuiltinAttributes.h""
This reverts commit 511dd4f438 along with
a couple fixes.

Original message:
Now the context is the first, rather than the last input.

This better matches the rest of the infrastructure and makes
it easier to move these types to being declaratively specified.

Phabricator: https://reviews.llvm.org/D96111
2021-02-08 10:39:58 +01:00