This is the first implementation of complex (f64 and f32) support
in the sparse compiler, with complex add/mul as the first operations.
Note that various features are still TBD, such as other ops and
reading complex values in from file. Also note that
std::complex<float> had a bit of an ABI issue when passed as a
single argument. It is still TBD whether better solutions are possible.
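As a hedged illustration of one kind of workaround (the function name below is hypothetical, not the runtime's actual entry point), the complex value can be passed by pointer so that no std::complex<float> crosses the ABI boundary as a single by-value argument:
```
// Hypothetical sketch: pass complex values by pointer across the
// C ABI boundary instead of by value.
#include <complex>

extern "C" void addComplexF32(const std::complex<float> *a,
                              const std::complex<float> *b,
                              std::complex<float> *result) {
  *result = *a + *b; // complex add without a by-value complex argument
}
```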
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D125596
This will enable our usual set of element types in external
environments, such as PyTACO support.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D124875
By using a shared index pool, we reduce the footprint of each "Element"
in the COO scheme and, in addition, reduce the overhead of allocating
indices (trading many allocations of small vectors for allocations in a
single shared vector only). When the capacity is known, this means *all*
allocation can be done in advance.
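A minimal sketch of the idea (simplified; the names differ from the actual implementation): each Element records an offset into one shared coordinate pool instead of owning its own vector.
```
#include <cstdint>
#include <vector>

template <typename V>
class SparseTensorCOO {
public:
  struct Element {
    uint64_t offset; // start of this element's indices in the pool
    V value;
  };

  SparseTensorCOO(uint64_t rank, uint64_t capacity) : rank(rank) {
    if (capacity) { // when the capacity is known, allocate everything now
      elements.reserve(capacity);
      indexPool.reserve(capacity * rank);
    }
  }

  void add(const std::vector<uint64_t> &ind, V val) {
    elements.push_back({indexPool.size(), val});
    indexPool.insert(indexPool.end(), ind.begin(), ind.end());
  }

  const uint64_t *indicesOf(const Element &e) const {
    return indexPool.data() + e.offset;
  }

private:
  uint64_t rank;
  std::vector<Element> elements;
  std::vector<uint64_t> indexPool; // one shared vector for all coordinates
};
```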
This is a big win. For example, reading matrix SK-2005, with dimensions
50,636,154 x 50,636,154 and 1,949,412,601 nonzero elements, improves
as follows (times in ms), about 3.5x faster overall:
```
SK-2005 before after speedup
---------------------------------------------
read 305,086.65 180,318.12 1.69
sort 2,836,096.23 510,492.87 5.56
pack 364,485.67 312,009.96 1.17
---------------------------------------------
TOTAL 3,505,668.56 1,002,820.95 3.50
```
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D124502
This avoids a rather big bug where we were reserving
dense space for the ptr/idx arrays in the first sparse dimension.
For example, using CSR for a 140,874 x 140,874 matrix with
3,977,139 nonzeros would reserve the full 19,845,483,876 entries.
This revision fixes this for now, but we need to revisit
the reservation heuristic to make this better.
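A hypothetical illustration of the fix's intent (names are not the actual implementation's): in a compressed dimension, reserve pointer/index space proportional to the nonzero count rather than the dense bound. For the example above, that is roughly 4 million slots instead of nearly 20 billion.
```
#include <cstdint>
#include <vector>

// Reserve CSR overhead storage by nonzero count, not dense size.
void reserveCSR(uint64_t rows, uint64_t nnz,
                std::vector<uint64_t> &pointers,
                std::vector<uint64_t> &indices) {
  pointers.reserve(rows + 1); // one pointer per row, plus the sentinel
  indices.reserve(nnz);       // one column index per stored nonzero
}
```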
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D123166
This change introduces two new methods, `finalizeSegment` and `appendIndex`, and removes three old methods: `endDim`, `appendCurrentPointer`, and the old `appendIndex`. The two new methods better encapsulate their algorithms, allowing us to remove repetitious code in several other places.
Depends On D122435
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D122625
Prior to this change there were a number of places where the allocation and deallocation of SparseTensorCOO objects were not cleanly paired, leading to inconsistencies regarding whether each function released its tensor/coo arguments or not, as well as making it easy to run afoul of memory leaks, use-after-free, or double-free errors. This change cleans up the codegen vs runtime boundary to resolve those issues. Now, the only time the runtime library frees an object is either (a) because it's a function explicitly designed to do so, or (b) because the allocated object is entirely local to the function and would be a memory leak if not released. Thus, now the codegen takes complete responsibility for releasing any objects it caused to be allocated.
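A hedged sketch of the convention, with hypothetical names (the actual runtime entry points differ): the runtime frees an object only in a function explicitly designed to do so, and conversion routines never release their arguments.
```
#include <vector>

struct COO { std::vector<double> values; };
struct Tensor { std::vector<double> packed; };

extern "C" COO *mkCOO() { return new COO(); }

// Reads its argument but does NOT free it; the caller keeps ownership.
extern "C" Tensor *cooToTensor(const COO *coo) {
  return new Tensor{coo->values};
}

// The one designated release point for COO objects.
extern "C" void delCOO(COO *coo) { delete coo; }

// Codegen pairs every allocation with a release it emits itself:
//   COO *coo = mkCOO();
//   Tensor *t = cooToTensor(coo);
//   delCOO(coo); // codegen releases what it caused to be allocated
```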
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D122435
I'm using "shape" to mean the compile-time object, where zeros indicate sizes which are compile-time dynamic; and using "sizes" to mean the run-time object, where zeros indicate a dimension with no coordinates (hence resulting in trivial storage). Because their semantics differ on zeros, it's important to keep them distinguished. Although we do not define separate C++ types to capture the distinction, we can at least use variable names to do so.
This is (tangential) work towards fixing: https://github.com/llvm/llvm-project/issues/51652
Depends On D122057
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D122058
This is the more logical place for the function to live. If/when we factor out a separate class for just the `Coordinates` themselves, then the definition should be moved to `Coordinates::lexOrder` (and `Element::lexOrder` would become a thin wrapper delegating to that function).
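A minimal sketch of what such a freestanding `lexOrder` computes: lexicographic comparison of two coordinate arrays of equal rank.
```
#include <cstdint>

static bool lexOrder(const uint64_t *a, const uint64_t *b, uint64_t rank) {
  for (uint64_t d = 0; d < rank; ++d)
    if (a[d] != b[d])
      return a[d] < b[d];
  return false; // equal coordinates are not strictly less
}
```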
This is (tangential) work towards fixing: https://github.com/llvm/llvm-project/issues/51652
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D122057
This clarifies that these methods only work in append mode, not for general insertions. This is a prospective change towards https://github.com/llvm/llvm-project/issues/51652 which also performs random-access insertions, so we want to avoid confusion.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D120929
Previously, convertToMLIRSparseTensor assumed an identity storage ordering and
all compressed dimensions. This change extends the function with two parameters
for users to specify the storage ordering and the sparsity of each dimension.
PyTACO is modified to reflect this change.
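A hedged sketch of the extended interface; the parameter names and exact types below are assumptions based on this description, not the authoritative signature.
```
#include <cstdint>

extern "C" void *convertToMLIRSparseTensor(
    uint64_t rank, uint64_t nse, uint64_t *shape, double *values,
    uint64_t *indices,
    uint64_t *perm,   // new: storage ordering of the dimensions
    uint8_t *sparse); // new: per-dimension sparsity annotation
```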
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D120643
These routines will need to be specialized a lot more based on value types,
index types, pointer types, and permutation/dimension ordering. This is a
careful first step, providing some functionality needed by the PyTACO bridge.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D120154
While testing LLVM 14.0.0 rc1 on Solaris, I ran into a compile failure:
```
In file included from /var/llvm/llvm-14.0.0-rc1/rc1/llvm-project/mlir/lib/ExecutionEngine/SparseTensorUtils.cpp:22:
/usr/include/sys/types.h:103:16: error: conflicting declaration ‘typedef short int index_t’
  103 | typedef short index_t;
      |               ^~~~~~~
In file included from /var/llvm/llvm-14.0.0-rc1/rc1/llvm-project/mlir/lib/ExecutionEngine/SparseTensorUtils.cpp:17:
/var/llvm/llvm-14.0.0-rc1/rc1/llvm-project/mlir/include/mlir/ExecutionEngine/SparseTensorUtils.h:26:7: note: previous declaration as ‘using index_t = uint64_t’
   26 | using index_t = uint64_t;
      |       ^~~~~~~
```
The same issue had already occurred in the past and was fixed in D72619
<https://reviews.llvm.org/D72619>. A more detailed explanation can also be
found there.
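A minimal sketch of the kind of rename that resolves the clash (the replacement name here is illustrative):
```
#include <cstdint>

// Before (collides with `typedef short index_t;` in Solaris <sys/types.h>):
//   using index_t = uint64_t;
// After: a name that does not collide with the system header.
using index_type = uint64_t;
```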
Tested on `amd64-pc-solaris2.11` and `sparcv9-solaris2.11`.
Differential Revision: https://reviews.llvm.org/D119323
Rationale:
Although file I/O is a bit alien to MLIR itself, we provide two convenient ways
for sparse tensor I/O. The input part was already there (behind the Swiss army
knife sparse_tensor.new). Now we have a sparse_tensor.out to write out data. As
before, the ops are kept vague and may change in the future. For now this
allows us to compare TACO vs MLIR very easily.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D117850
This guarantees the preconditions of fromCOO; prior to this, one could call the constructor directly with an unsorted tensor, which would cause fromCOO to misbehave.
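A minimal sketch of the guard (the Element type is simplified here; the real class is templated): sort the COO contents lexicographically before handing them to a fromCOO-style packer.
```
#include <algorithm>
#include <cstdint>
#include <vector>

struct Element {
  std::vector<uint64_t> indices;
  double value;
};

// Establish the precondition fromCOO relies on: lexicographic order.
void sortCOO(std::vector<Element> &elements) {
  std::sort(elements.begin(), elements.end(),
            [](const Element &a, const Element &b) {
              return a.indices < b.indices; // vectors compare lexicographically
            });
}
```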
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D117167
Depends On D115008
This change opens the way for D115012, and removes some corner cases in `CodegenUtils.cpp`. The `SparseTensorAttrDefs.td` already specifies that we allow `0` bitwidth for the two overhead types and that it is interpreted to mean the architecture's native width.
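A minimal sketch of that convention (the helper name is hypothetical): a requested bitwidth of 0 resolves to the architecture's native width.
```
#include <cstdint>

unsigned resolveOverheadBitwidth(unsigned bitwidth) {
  return bitwidth == 0 ? static_cast<unsigned>(sizeof(uintptr_t) * 8)
                       : bitwidth; // 0 => native width
}
```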
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D115010
Data point using the 3-dim tensor nell-2.tns:
```
MLIR:
  READ FILE INTO COO:   24424.369294 ms ---> improves to ---> 9638.501044 ms
  SORT COO BEFORE PACK:   762.834831 ms
  PACK COO TO TENSOR:    1243.376245 ms
TACO:
  b file read: 13270.9 ms
  b pack:       7137.74 ms
  b size: (12092 x 9184 x 28818), 925300328 bytes
```
https://github.com/llvm/llvm-project/issues/52679
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D115696
Add convertFromMLIRSparseTensor to the supporting C shared library to convert
SparseTensorStorage to COO-flavor format.
Add Python routine sparse_tensor_to_coo_tensor to convert sparse tensor storage
pointer to numpy values for COO-flavor format tensor.
Add a Python test for sparse tensor output.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D115557
This revision implements sparse outputs (from scratch) in all cases where
the loops can be reordered with all but one of the parallel loops outer. If the
inner parallel loop appears inside one or more reduction loops, then an
access pattern expansion is required (a.k.a. workspaces in TACO-speak).
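A hedged sketch of what such an expansion amounts to conceptually (simplified; the generated code uses dedicated ops rather than a C++ class): a dense scratch buffer of values plus a filled map and a list of touched indices, so scattered updates can later be compressed back into sparse storage.
```
#include <cstdint>
#include <vector>

struct Workspace {
  std::vector<double> values;  // dense scratch, sized to the dimension
  std::vector<bool> filled;    // which entries have been written
  std::vector<uint64_t> added; // touched indices, for compression
  explicit Workspace(uint64_t size) : values(size, 0.0), filled(size, false) {}
  void insert(uint64_t i, double v) {
    if (!filled[i]) {
      filled[i] = true;
      added.push_back(i);
    }
    values[i] += v; // reduce into the workspace
  }
};
```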
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D115091
The first version was vectors only. With some clever "path" insertion,
we now support any d-dimensional tensor. Up next: reductions too.
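A hedged sketch of the "path" idea (simplified, assuming coordinates of equal rank): remember the previous insertion's coordinates, and on each new insert only open storage segments from the first level where the coordinates diverge.
```
#include <cstdint>
#include <vector>

// Returns the first dimension where the new coordinates differ from the
// previous ones; levels at and below it need new storage entries.
uint64_t firstDiffLevel(const std::vector<uint64_t> &prev,
                        const std::vector<uint64_t> &cursor) {
  uint64_t d = 0;
  while (d < prev.size() && d < cursor.size() && prev[d] == cursor[d])
    ++d;
  return d;
}
```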
Reviewed By: bixia, wrengr
Differential Revision: https://reviews.llvm.org/D114024
This revision contains all "sparsification" ops and rewriting necessary to support sparse output tensors when the kernel has no reduction (viz. insertions occur in lexicographic order and are "injective"). This will be later generalized to allow reductions too. Also, this first revision only supports sparse 1-d tensors (viz. vectors) as output in the runtime support library. This will be generalized to n-d tensors shortly. But this way, the revision is kept to a manageable size.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D113705