Commit Graph

28 Commits

Author SHA1 Message Date
Matthias Springer 76a1861816 [mlir][SparseTensor] Split scf.for loop into masked/unmasked parts
Apply the "for loop peeling" pattern from the SCF dialect transforms. This pattern splits scf.for loops into full and partial iterations. In the loop covering the full iterations, all masked loads/stores are canonicalized to unmasked loads/stores. (A sketch of the result follows this entry.)

Differential Revision: https://reviews.llvm.org/D107733
2021-08-19 21:53:11 +09:00
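
Illustrative sketch of what the peeling described above produces (not taken from the revision; the names, the step of 4, and the opaque body ops are placeholders):

  // Before: a single loop whose last iteration may be partial,
  // so the body needs masked loads/stores.
  scf.for %i = %c0 to %ub step %c4 {
    "sample.masked_body"(%i) : (index) -> ()
  }

  // After peeling (conceptually): a main loop that runs only full
  // iterations, where masked ops canonicalize to unmasked ones,
  // followed by at most one partial iteration that keeps the masks.
  %split = affine.apply affine_map<()[s0] -> (s0 - s0 mod 4)>()[%ub]
  scf.for %i = %c0 to %split step %c4 {
    "sample.unmasked_body"(%i) : (index) -> ()
  }
  scf.for %i = %split to %ub step %c4 {
    "sample.masked_body"(%i) : (index) -> ()
  }
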
Aart Bik d37d72eaf8 [mlir][sparse] use shared util for DimOp generation
This shares more code with existing utilities. Also, for consistency,
we moved dimension permutation on the DimOp to the tensor lowering phase.
This way, both pre-existing DimOps on sparse tensors (unlikely, but
possible) and compiler-generated DimOps are handled consistently.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D108309
2021-08-18 17:12:32 -07:00
Tobias Gysi 583a754248 [mlir][linalg] Remove duplicate methods (NFC).
Remove duplicate methods used to check iterator types.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D108102
2021-08-17 09:06:17 +00:00
Aart Bik 817303ef34 [mlir][sparse] fix bug in permuting data structure
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D107379
2021-08-03 14:27:43 -07:00
Aart Bik 160399c7ce [mlir][sparse] move comments from cpp files into dialect doc
Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D107191
2021-07-30 13:16:50 -07:00
Aart Bik 68ac2e53ff [mlir][sparse] replace linalg.copy with memref.copy
Note: this revision relies on the revision below, which fixes a bug
in the memref copy library, in order for all sparse integration tests
to pass. (A before/after snippet of the op replacement follows this entry.)

https://reviews.llvm.org/D106036

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D106038
2021-07-15 07:56:50 -07:00
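
For reference, the flavor of the replacement (operand names and element types here are hypothetical):

  // Before:
  linalg.copy(%src, %dst) : memref<?xf64>, memref<?xf64>
  // After:
  memref.copy %src, %dst : memref<?xf64> to memref<?xf64>
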
Aart Bik 123e8dfcf8 [mlir][sparse] add support for std unary operations
Adds the zero-preserving unary operators from std. Also adds xor.
Performs minor refactoring to remove the "zero" node, and pushes
the irregular logic for negi (not supported in std) into one place.
(A small kernel sketch follows this entry.)

Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D105928
2021-07-13 14:51:13 -07:00
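
A minimal kernel sketch in the style of the sparse-compiler tests of the time (the #SparseVector encoding, trait, and function are made up for illustration; std ops are shown in their 2021 spelling):

  #SparseVector = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>

  #trait = {
    indexing_maps = [ affine_map<(i) -> (i)>, affine_map<(i) -> (i)> ],
    iterator_types = ["parallel"]
  }

  // x(i) = -a(i); negation is zero-preserving, so only the stored
  // entries of the sparse vector need to be visited.
  func @vector_neg(%arga: tensor<?xf64, #SparseVector>,
                   %argx: tensor<?xf64>) -> tensor<?xf64> {
    %0 = linalg.generic #trait
      ins(%arga : tensor<?xf64, #SparseVector>)
      outs(%argx : tensor<?xf64>) {
      ^bb(%a: f64, %x: f64):
        %n = negf %a : f64
        linalg.yield %n : f64
    } -> tensor<?xf64>
    return %0 : tensor<?xf64>
  }
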
Aart Bik 45b3cfe843 [mlir][sparse] add support for AND and OR operations
Integral AND and OR follow the simple conjunction and disjunction rules
for lattice building. This revision also completes some of the Merger
refactoring by moving the remaining merger-specific parts from
sparsification into the utils files.

Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D105851
2021-07-12 17:47:18 -07:00
Aart Bik 622eb169f6 [mlir][sparse] add restrictive versions of division support
Right now, we only accept x/c with nonzero c, since conceptually this
can be treated as an x*(1/c) conjunction for both FP and INT as far as
lattice computations go. The codegen keeps the division, though, to
preserve the precise semantics.

See discussion:
https://llvm.discourse.group/t/sparse-tensors-in-mlir/3389/28

Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D105731
2021-07-12 14:59:48 -07:00
Matthias Springer 2c115ecc41 [mlir][NFC] MemRef cleanup: Remove helper functions
Remove `getDynOperands` and `createOrFoldDimOp` from MemRef.h to decouple MemRef a bit from Tensor. These two functions are used in other dialects/transforms.

Differential Revision: https://reviews.llvm.org/D105260
2021-07-05 10:10:21 +09:00
Aart Bik b8a021dbe3 [mlir][sparse] support for negation and subtractions
This revision extends the sparse compiler support from fp/int addition and multiplication to fp/int negation and subtraction, thereby increasing the scope of sparse kernels that can be compiled.

Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D105306
2021-07-02 15:55:05 -07:00
Gus Smith 4569c14ac3 Refactor TensorExp parameters into a union
To make TensorExp clearer, this change refactors the e0/e1 fields into a union: e0/e1 for a binary op tensor expression, and tensor_num for a tensor-kinded tensor expression.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D105303
2021-07-02 14:45:56 +00:00
Aart Bik 266a7414d8 [mlir][sparse] move tensor expression builder into Merger utility
Rationale:
Follow-up on migrating lattice and tensor expression related methods into the new utility.
This also prepares the next step of generalizing the op kinds that are handled.

Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D105219
2021-07-01 09:27:40 -07:00
Matthias Springer c0a6318d96 [mlir][tensor] Add tensor.dim operation
* Split memref.dim into two operations: memref.dim and tensor.dim. Both ops have the same builder interface and op argument names, so that they can be used with templates in patterns that apply to both tensors and memrefs (e.g., some patterns in Linalg). A short illustration follows this entry.
* Add a constant materializer to TensorDialect (needed for folding in affine.apply etc.).
* Remove some MemRefDialect dependencies, make some explicit.

Differential Revision: https://reviews.llvm.org/D105165
2021-07-01 10:00:19 +09:00
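
The split in a nutshell (the values below are hypothetical):

  %c0 = constant 0 : index
  // Dimension of a memref value:
  %d0 = memref.dim %m, %c0 : memref<?x?xf32>
  // Dimension of a tensor value (the new op):
  %d1 = tensor.dim %t, %c0 : tensor<?x?xf32>
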
Gus Smith 043ce4e6bd [MLIR][Sparse] Move `buildLattices` into Merger
This allows us to use `buildLattices` in the `Merger` unittests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D104879
2021-06-26 05:05:05 +00:00
Gus Smith 744146f60b [MLIR][Sparse] Refactor lattice code into its own file
Moves iteration lattice/merger code into new SparseTensor/Utils directory. A follow-up CL will add lattice/merger unit tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D104757
2021-06-24 23:03:44 +00:00
Aart Bik 36b66ab9ed [mlir][sparse] add support for "simply dynamic" sparse tensor expressions
Slowly we are moving toward full support of sparse tensor *outputs*. The
first step was support for all-dense annotated "sparse" tensors. This step
adds support for truly sparse tensors, but only for operations in which the
values of a tensor change while the nonzero structure does not (referred to
as "simply dynamic" in the [Bik96] thesis). A small kernel sketch follows
this entry.

Some background text was posted on discourse:
https://llvm.discourse.group/t/sparse-tensors-in-mlir/3389/25

Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D104577
2021-06-22 13:37:32 -07:00
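
A sketch of a "simply dynamic" kernel: the annotated tensor is the output and only its stored values change, never its nonzero structure (the encoding, trait, and function below are illustrative, not taken from the revision):

  #CSR = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ] }>

  #trait = {
    indexing_maps = [ affine_map<(i,j) -> (i,j)> ],
    iterator_types = ["parallel", "parallel"]
  }

  // x(i,j) = x(i,j) * 2.0 : values are scaled in place, the set of
  // stored nonzeros stays the same.
  func @scale_inplace(%argx: tensor<?x?xf64, #CSR>) -> tensor<?x?xf64, #CSR> {
    %s = constant 2.0 : f64
    %0 = linalg.generic #trait
      outs(%argx : tensor<?x?xf64, #CSR>) {
      ^bb(%x: f64):
        %m = mulf %x, %s : f64
        linalg.yield %m : f64
    } -> tensor<?x?xf64, #CSR>
    return %0 : tensor<?x?xf64, #CSR>
  }
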
Matthias Springer 66f878cee9 [mlir][NFC] Remove Standard dialect dependency on MemRef dialect
* Remove dependency: Standard --> MemRef
* Add dependencies: GPUToNVVMTransforms --> MemRef, Linalg --> MemRef, MemRef --> Tensor
* Note: The `subtensor_insert_propagate_dest_cast` test case in MemRef/canonicalize.mlir will be moved to Tensor/canonicalize.mlir in a subsequent commit, which moves over the remaining Tensor ops from the Standard dialect to the Tensor dialect.

Differential Revision: https://reviews.llvm.org/D104506
2021-06-21 17:55:23 +09:00
Aart Bik 619bfe8bd2 [mlir][sparse] support new kind of scalar in sparse linalg generic op
We have several ways of introducing a scalar invariant value into
linalg generic ops (should we limit this somewhat?). This revision
makes sure we handle all of them correctly in the sparse compiler.

Reviewed By: gysit

Differential Revision: https://reviews.llvm.org/D104335
2021-06-16 11:00:49 -07:00
Aart Bik 727a63e0d9 [mlir][sparse] allow all-dense annotated "sparse" tensor output
This is a very careful start with allowing sparse tensors on the
left-hand side of tensor index expressions (viz. sparse output).
Note that there is a subtle difference between non-annotated tensors
(dense, remain n-dim, handled by classic bufferization) and all-dense
annotated "sparse" tensors (linearized to 1-dim without overhead
storage, bufferized by the sparse compiler, backed by the runtime
support library). This revision gently introduces some new IR to
facilitate annotated outputs, to be generalized to truly sparse
tensors in the future. (An encoding sketch follows this entry.)

Reviewed By: gussmith23, bixia

Differential Revision: https://reviews.llvm.org/D104074
2021-06-15 14:55:07 -07:00
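
An illustrative all-dense annotation used as an output type (made up for this note): every dimension level is dense, so the tensor is stored as a linearized values array without pointer/index overhead, yet it is bufferized by the sparse compiler and backed by the runtime library.

  // All-dense annotated "sparse" matrix.
  #DenseMatrix = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "dense" ] }>

  // Such a tensor may now appear as the output of a sparse kernel:
  //   outs(%argx : tensor<?x?xf64, #DenseMatrix>)
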
Tobias Gysi 046922e100 [mlir][linalg] Add support for scalar input operands.
Up to now, all structured op operands are assumed to be shaped. The patch relaxes this assumption and allows scalar input operands. In contrast to shaped operands, scalar operands are not indexed and are directly forwarded to the body of the operation. As with all other operands, scalar operands are associated with an indexing map, which in the case of a scalar or a 0-D operand has an empty range. (A sketch follows this entry.)

We will use scalar operands as a replacement for the capture mechanism. In contrast to captures, the approach ensures we can generate the function signature from the operand list and it prevents outdated capture values in case a transformation updates only the capture operand but not the hidden body of a named operation.

Removing captures and updating existing operations such as linalg.fill is left for a later patch.

The patch depends on https://reviews.llvm.org/D103891 and https://reviews.llvm.org/D103890.

Differential Revision: https://reviews.llvm.org/D104109
2021-06-14 06:27:16 +00:00
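
A sketch of a structured op with a scalar input operand; the scalar is associated with an indexing map whose result range is empty and is forwarded directly to the body (all names and types are illustrative):

  // x(i) = a(i) + s, where %s is a plain f64 operand.
  %0 = linalg.generic {
      indexing_maps = [ affine_map<(i) -> (i)>,
                        affine_map<(i) -> ()>,    // scalar: empty range
                        affine_map<(i) -> (i)> ],
      iterator_types = ["parallel"]
    }
    ins(%arga, %s : tensor<?xf64>, f64)
    outs(%argx : tensor<?xf64>) {
    ^bb(%a: f64, %scalar: f64, %x: f64):
      %r = addf %a, %scalar : f64
      linalg.yield %r : f64
  } -> tensor<?xf64>
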
Aart Bik 86e9bc1a34 [mlir][sparse] add option for 32-bit indices in scatter/gather
Controlled by a compiler option: if 32-bit indices can be handled
with zero- and sign-extension alike (viz. the indices are known to be
non-negative), scatter/gather operations can use the more efficient
32-bit SIMD version.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D103632
2021-06-04 16:57:12 -07:00
Tobias Gysi 2f2b5b7d28 [mlir][linalg] Cleanup LinalgOp usage in sparse compiler (NFC).
Replace the uses of deprecated Structured Op Interface methods in Sparsification.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103436
2021-06-02 06:21:56 +00:00
Aart Bik c194b49c9c [mlir][sparse] add full dimension ordering support
This revision completes the "dimension ordering" feature
of sparse tensor types that enables the programmer to
define a preferred order on dimension access (other than
the default left-to-right order). This enables e.g. selection
of column-major over row-major storage for sparse matrices,
but generalized to any rank, as in the following permutation (a complete
type-encoding sketch follows this entry):

dimOrdering = affine_map<(i,j,k,l,m,n,o,p) -> (p,o,j,k,i,l,m,n)>

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102856
2021-05-21 12:35:13 -07:00
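
For example, a column-major (CSC-style) sparse matrix type can be declared roughly as follows (a sketch; bit-width fields omitted):

  // Dimension order (j,i): columns become the outer dimension,
  // yielding column-major storage.
  #CSC = #sparse_tensor.encoding<{
    dimLevelType = [ "dense", "compressed" ],
    dimOrdering = affine_map<(i,j) -> (j,i)>
  }>

  // Used in a tensor type:
  //   tensor<?x?xf64, #CSC>
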
Aart Bik bf9ef3efaa [mlir][sparse] skip sparsification for unannotated (or unhandled) cases
Skip the sparsification pass for Linalg ops without annotated tensors
(or cases that are not properly handled yet).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102787
2021-05-19 13:49:28 -07:00
Aart Bik 5879da496c [mlir][sparse] replace experimental flag with inplace attribute
The experimental flag for "inplace" bufferization in the sparse
compiler can be replaced with the new inplace attribute. This gives
a uniform way of expressing this more efficient form of bufferization.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102538
2021-05-17 11:43:44 -07:00
Aart Bik 96a23911f6 [mlir][sparse] complete migration to sparse tensor type
A very elaborate, but also very fun revision because all
puzzle pieces are finally "falling into place".

1. replaces linalg annotations + flags with proper sparse tensor types
2. adds rigorous verification of the sparse tensor type and sparse primitives
3. removes glue and clutter on opaque pointers in favor of sparse tensor types
4. migrates all tests to use sparse tensor types

NOTE: next CL will remove *all* obsoleted sparse code in Linalg

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102095
2021-05-10 12:55:22 -07:00
Aart Bik a2c9d4bb04 [mlir][sparse] Introduce proper sparsification passes
This revision migrates more code from Linalg into the new permanent home of
SparseTensor. It replaces the test passes with proper compiler passes.

NOTE: the actual removal of the last glue and clutter in Linalg will follow

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D101811
2021-05-04 17:10:09 -07:00