Commit Graph

120 Commits

Author SHA1 Message Date
bixia1
90aa436291 [mlir][sparse] Add layout to the memref for the indices buffers to prepare for the AOS storage optimization for COO regions.
Fix relevant FileCheck tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140742
2023-01-04 07:36:11 -08:00
Ramkumar Ramachandra
0de16fafa5 mlir/DialectConversion: use std::optional (NFC)
This is part of an effort to migrate from llvm::Optional to
std::optional. This patch touches DialectConversion, and modifies
existing conversions and tests appropriately.

See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>

Differential Revision: https://reviews.llvm.org/D140303
2022-12-19 18:48:59 +01:00
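For illustration, a minimal sketch of the Optional-to-std::optional migration described above; `parseWidth` is a hypothetical function, not code from DialectConversion:

```cpp
#include <optional>

// Hypothetical example of the mechanical rewrite: llvm::Optional<T>
// becomes std::optional<T>, and llvm::None becomes std::nullopt.
std::optional<int> parseWidth(bool ok) {
  if (!ok)
    return std::nullopt; // was: return llvm::None;
  return 32;
}
```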
Ramkumar Ramachandra
22426110c5 mlir/tblgen: use std::optional in generation
This is part of an effort to migrate from llvm::Optional to
std::optional. This patch changes the way mlir-tblgen generates .inc
files, and modifies tests and documentation appropriately. It is a "no
compromises" patch, and doesn't leave the user with an unpleasant mix of
llvm::Optional and std::optional.

A non-trivial change has been made to ControlFlowInterfaces to split one
constructor into two, in order to resolve a build failure on Windows.

See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>

Differential Revision: https://reviews.llvm.org/D138934
2022-12-17 11:13:26 +01:00
Peiming Liu
a3672add76 [mlir][sparse] avoid unnecessary tmp COO buffer and convert when lowering ConcatenateOp.
When concatenating along dimension 0, with all inputs/outputs ordered under an identity dimension ordering,
the concatenated coordinates are yielded in lexicographic order, so there is no need to sort.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140228
2022-12-16 18:26:39 +00:00
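A minimal sketch of the reasoning above, with illustrative names only: under an identity ordering, dim 0 is the most significant coordinate, so appending each input's (already sorted) coordinates shifted by the running dim-0 offset keeps the result lexicographically sorted:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstdint>
#include <vector>

using Coord = std::array<uint64_t, 2>;

int main() {
  std::vector<Coord> a = {{0, 1}, {1, 0}};        // sorted coordinates of input #1
  std::vector<Coord> b = {{0, 0}, {1, 2}};        // sorted coordinates of input #2
  const uint64_t offset = 2;                      // dim-0 size of input #1
  std::vector<Coord> out = a;
  for (Coord c : b)
    out.push_back({c[0] + offset, c[1]});         // shift along dim 0
  assert(std::is_sorted(out.begin(), out.end())); // still lexicographic
  return 0;
}
```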
wren romano
96fef4dc3c [mlir][sparse] Added new SparseTensorEncodingAttr::withoutOrdering factory
Reviewed By: aartbik, Peiming

Differential Revision: https://reviews.llvm.org/D140171
2022-12-15 18:14:54 -08:00
wren romano
62896428a7 [mlir][sparse] Moving/renaming genBuffer to allocaBuffer
This allows allocaBuffer to be used outside of SparseTensorConversion.cpp, which will be helpful for some future commits.

Reviewed By: aartbik, Peiming

Differential Revision: https://reviews.llvm.org/D140047
2022-12-14 14:29:36 -08:00
wren romano
387755a35d [mlir][sparse] Simplifying SparseTensorEncodingAttr function arguments
Since STEA is an Attribute, and an Attribute is just (a wrapper around) a pointer, the extra `const` and `&` aren't necessary for function arguments.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139886
2022-12-12 17:05:56 -08:00
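A minimal sketch of the simplified style (the helper is hypothetical): since an Attribute is a value-semantic wrapper around uniqued storage, passing it by value costs the same as passing a pointer:

```cpp
#include "mlir/Dialect/SparseTensor/IR/SparseTensor.h"

using mlir::sparse_tensor::SparseTensorEncodingAttr;

// Before: static bool hasEncoding(const SparseTensorEncodingAttr &enc);
// After (hypothetical helper): pass the attribute by value.
static bool hasEncoding(SparseTensorEncodingAttr enc) {
  return static_cast<bool>(enc); // a null attribute converts to false
}
```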
bixia1
19cde2df95 [mlir][sparse] Improve concatenate operation conversion for the case with an annotated all-dense result.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139345
2022-12-07 12:06:50 -08:00
wren romano
86f91e45a2 [mlir][sparse] Cleaning up the dim/lvl distinction in SparseTensorConversion
This change cleans up the conversion pass regarding the "dim"-vs-"lvl" and "sizes"-vs-"shape" distinctions of the runtime. A quick synopsis:

* Adds a new `SparseTensorStorageBase::getDimSize` method, with a `sparseDimSize` wrapper in SparseTensorRuntime.h and a `genDimSizeCall` generator in SparseTensorConversion.cpp.
* Changes `genLvlSizeCall` to perform no logic and just generate the function call.
* Adds `createOrFold{Dim,Lvl}Call` functions to handle the logic of replacing `gen{Dim,Lvl}SizeCall` with constants whenever possible. The `createOrFoldDimCall` function replaces the old `sizeFromPtrAtDim`.
* Adds `{get,fill}DimSizes` functions for iterating `createOrFoldDimCall` across the whole type. These functions replace the old `sizesFromPtr`.
* Adds `{get,fill}DimShape` functions for lowering a `ShapedType` into constants. These functions replace the old `sizesFromType`.
* Changes the `DimOp` rewrite to do the right thing.
* Changes the `ExpandOp` rewrite to compute the proper expansion size.

Depends On D138365

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139165
2022-12-05 16:59:42 -08:00
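A sketch of the folding idea behind `createOrFoldDimCall`; the helper names come from the commit message, but the exact signatures are assumed:

```cpp
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinTypes.h"

// Assumed declarations for the helpers named in the commit message.
mlir::Value constantIndex(mlir::OpBuilder &b, mlir::Location loc, int64_t i);
mlir::Value genDimSizeCall(mlir::OpBuilder &b, mlir::Location loc,
                           mlir::Value tensor, uint64_t dim);

// Fold to a constant when the dimension size is statically known;
// otherwise generate the runtime call.
static mlir::Value createOrFoldDimCall(mlir::OpBuilder &b, mlir::Location loc,
                                       mlir::ShapedType stp,
                                       mlir::Value tensor, unsigned dim) {
  if (!stp.isDynamicDim(dim))
    return constantIndex(b, loc, stp.getDimSize(dim));
  return genDimSizeCall(b, loc, tensor, dim);
}
```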
Kazu Hirata
1a36588ec6 [mlir] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-03 18:50:27 -08:00
wren romano
2af2e4dbb7 [mlir][sparse] Breaking up openSparseTensor to better support non-permutations
This commit updates how the `SparseTensorConversion` pass handles `NewOp`.  It breaks up the underlying `openSparseTensor` function into two parts (`SparseTensorReader::create` and `SparseTensorReader::readSparseTensor`) so that the pass can inject code for constructing `lvlSizes` between those two parts.  Migrating the construction of `lvlSizes` out of the runtime and into the pass is a necessary first step toward fully supporting non-permutations.  (The alternative would be for the pass to generate a `FuncOp` for performing the construction and then passing that to the runtime; which doesn't seem to have any benefits over the design of this commit.)  And since the pass now generates the code to call these two functions, this change also removes the `Action::kFromFile` value from the enum used by `_mlir_ciface_newSparseTensor`.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138363
2022-12-02 11:10:57 -08:00
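A sketch of the resulting two-phase protocol; only the create/readSparseTensor split comes from the commit message, and the declarations and parameters below are placeholders, not the real runtime signatures:

```cpp
#include <cstdint>

// Placeholder declarations mirroring the described split.
struct SparseTensorReader {
  static SparseTensorReader *create(const char *filename);
  void *readSparseTensor(const uint64_t *lvlSizes);
};

void *loadTensor(const char *filename, uint64_t *lvlSizes) {
  SparseTensorReader *reader = SparseTensorReader::create(filename);
  // The conversion pass injects code *here* to construct `lvlSizes`,
  // between the two phases.
  return reader->readSparseTensor(lvlSizes);
}
```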
Aliia Khasanova
399638f98c Merge kDynamicSize and kDynamicSentinel into one constant.

Differential Revision: https://reviews.llvm.org/D138282
2022-11-21 13:01:26 +00:00
Aart Bik
0e1708ff64 [mlir][sparse] cleanup small vector constant hints
Following advice from

https://llvm.org/docs/ProgrammersManual.html#llvm-adt-smallvector-h

This revision removes the size hints from SmallVector (unless we are
certain of the resulting number of elements). Also, this replaces
SmallVector references with SmallVectorImpl references.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D138063
2022-11-15 15:09:06 -08:00
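A minimal sketch of the guideline being applied (illustrative names): drop the inline-size hint unless the element count is certain, and take SmallVectorImpl& in parameters so callers can pick any inline size:

```cpp
#include "llvm/ADT/SmallVector.h"

// Parameters take SmallVectorImpl& so the caller chooses the inline size.
static void collectEvens(llvm::SmallVectorImpl<int> &out, int n) {
  for (int i = 0; i < n; i += 2)
    out.push_back(i);
}

void caller() {
  llvm::SmallVector<int> evens; // no size hint: let LLVM pick a default
  collectEvens(evens, 10);
}
```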
bixia1
13c41757b1 [mlir][sparse] Only insert non-zero values to the result of the concatenate operation.
Modify the integration test to check number_of_entries and use it to limit the
output of sparse tensor values.

Reviewed By: aartbik, Peiming

Differential Revision: https://reviews.llvm.org/D138046
2022-11-15 11:52:52 -08:00
wren romano
c518745bba [mlir][sparse] Making way for SparseTensorRuntime to support non-permutations
Systematically updates the SparseTensorRuntime to properly distinguish tensor-dimensions from storage-levels (and their associated ranks, shapes, sizes, indices, etc).  With a few exceptions which are noted in the code, this ensures the runtime has all the **semantic** changes necessary to support non-permutations.

(Whereas **operationally**, since we're still using `std::vector<uint64_t>` to represent the mappings, there's no way to pass in any interesting non-permutations. Changing the representation to `std::function` will be done in a separate differential.)

Depends On D137680

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137681
2022-11-14 13:48:41 -08:00
Aart Bik
a61a9a700a [mlir][sparse] first end-to-end matmul with codegen
(1) also fixes a memory leak in sparse2dense rewriting
(2) still needs a fix in dense2sparse to skip zeros

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D137736
2022-11-09 13:40:05 -08:00
wren romano
f5ce99afa7 [mlir][sparse] Factoring out NewCallParams class in SparseTensorConversion.cpp
The new class helps encapsulate the arguments to `_mlir_ciface_newSparseTensor` so that client code doesn't depend on the details of the API.  (This makes way for the next differential which significantly alters the API.)

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137680
2022-11-08 17:19:54 -08:00
Aart Bik
5661647e85 [mlir][sparse] build proper insertion chain
The alloc->insert/compress->load chain now needs to be
properly represented as an SSA chain in loops and if
statements to reflect the modifying behavior (the runtime
support library is forgiving about breaking this, but the
new codegen is not).

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D136966
2022-10-28 15:58:51 -07:00
Aart Bik
0f3e4d1afa [mlir][sparse] lower number of entries op to actual code
works along both the runtime path and the pure codegen path

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D136389
2022-10-21 10:48:37 -07:00
bixia1
b056d0bb69 [mlir][sparse] Refactor the convert operator conversion to support codegen for the operator.
Outline the code that generates the loop structure to iterate over a dense
tensor or a sparse constant into genDenseTensorOrSparseConstantIterLoop.

Move a few routines to CodegenUtils for sharing.

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D136210
2022-10-21 08:52:47 -07:00
wren romano
062f29c8d0 [mlir][sparse] Moving Enums.h into Dialect/SparseTensor/IR
Move the SparseTensorEnums library out of the ExecutionEngine directory and into Dialect/SparseTensor/IR.

Depends On D136002

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136005
2022-10-18 15:15:18 -07:00
wren romano
0e77b63bc0 [mlir][sparse] Use the runtime DimLevelType instead of a separate tablegen enum
This differential replaces all uses of SparseTensorEncodingAttr::DimLevelType with DimLevelType.  The next differential will break out a separate library for the DimLevelType enum, so that the Dialect code doesn't need to depend on the rest of the runtime

Depends On D135995

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D135996
2022-10-18 13:45:26 -07:00
Aart Bik
9f596a7c67 [mlir][sparse] implement simple codegen for insertion (and related ops)
This is a proof-of-concept insertion implementation that sets up
the basic framework and implements it with push-backs for just
sparse vectors. It adds insertion/compression through SSA values,
so that we properly update the memref after each pushback operation.

Note that properly using SSA values in sparsification is still TBD
but I will wait until Peiming's loop emitter is in to avoid conflicts.

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D136008
2022-10-17 18:02:08 -07:00
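An illustrative C++ analogy (not the actual generated code) for why the updated buffer must be threaded through as a new SSA value: a push-back may reallocate, so every later use must see the returned buffer:

```cpp
#include <cstdint>
#include <cstdlib>

struct Buf {
  uint64_t *data;
  uint64_t size, capacity;
};

// Each push-back may reallocate, so the caller must continue with the
// returned value, just as codegen must continue with the new SSA value.
static Buf pushBack(Buf b, uint64_t v) {
  if (b.size == b.capacity) {
    b.capacity *= 2;
    b.data = static_cast<uint64_t *>(
        std::realloc(b.data, b.capacity * sizeof(uint64_t)));
  }
  b.data[b.size++] = v;
  return b;
}
```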
bixia1
5f919cd439 [mlir][sparse] Move routines for generating memref.alloca to CodegenUtils.
This is to allow the use of the routines in the rewrite pass.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D135890
2022-10-16 08:22:02 -07:00
bixia1
2d252a0f5c [mlir][sparse] Move a few routines to CodegenUtils.
Move a few supporting routines for generating function calls to CodegenUtils so
that they can be used by the codegen path for sparse tensor file input and
output.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D135691
2022-10-11 14:42:18 -07:00
wren romano
90fd13b0a1 [mlir][sparse] Converting SparseTensorCOO to use standard C++-style iterators.
This differential comprises three related changes: (1) it gives SparseTensorCOO standard C++-style iterators; (2) it removes the old iterator stuff from SparseTensorCOO; and (3) it introduces SparseTensorIterator which behaves like the old SparseTensorCOO iterator stuff used to.

The SparseTensorIterator class is needed because the MLIR codegen cannot easily use the C++-style iterators (hence why SparseTensorCOO had the old iterator stuff).  Distinguishing SparseTensorIterator from SparseTensorCOO also helps improve API hygiene since these two classes are used for distinct purposes.  And having SparseTensorIterator as its own class enables changing the underlying implementation in the future, without needing to worry about updating all the codegen tests etc.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D135485
2022-10-11 14:03:37 -07:00
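A minimal sketch of the C++-style iteration this enables; the header path and member names are assumed:

```cpp
#include <iostream>
#include "mlir/ExecutionEngine/SparseTensor/COO.h" // assumed header location

// With standard begin()/end(), the elements can be walked with a
// range-based for loop (element member names assumed).
template <typename V>
void dumpValues(const mlir::sparse_tensor::SparseTensorCOO<V> &coo) {
  for (const auto &elt : coo)
    std::cout << elt.value << "\n";
}
```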
wren romano
1b27484a49 [mlir][sparse] further implement singleton dimension level type
Handle more cases of singleton DLT including direct sparse2sparse conversion.  (Followup to D134096)

Depends On D134926

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D134933
2022-10-05 16:14:52 -07:00
Aart Bik
c48e90877f [mlir][sparse] introduce a higher-order tensor mapping
This extension to the sparse tensor type system in MLIR
opens up a whole new set of sparse storage schemes, such as
block sparse storage (e.g. BCSR) and ELL (aka jagged diagonals).

This revision merely introduces the type extension and
initial documentation. The actual interpretation of the type
(reading in tensors, lowering to code, etc.) will follow.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D135206
2022-10-05 09:40:51 -07:00
bixia1
349bceba65 [mlir][sparse] Refactor the conversion of the tensor reshape operators.
Move genReshapeDstShape to codegen utils to support the rewriting of the tensor
reshape operators for the codegen path.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D135074
2022-10-03 11:06:49 -07:00
Peiming Liu
11069cbcb4 [mlir][sparse] refactoring: split translateIndices.
`translateIndicesArray` takes an array of SSA values and converts them into another array of SSA values based on reassociation, which makes it easier to reuse in the `foreach` operator (as the indices are given as an array of SSA values).

Reviewed By: aartbik, bixia

Differential Revision: https://reviews.llvm.org/D134918
2022-09-29 23:59:39 +00:00
wren romano
0fca5c5f45 [mlir][sparse] refactoring SparseTensorUtils: (1 of 4) file-splitting
Previously, the SparseTensorUtils.cpp library contained a C++ core implementation, but hid it in an anonymous namespace and only exposed a C-API for accessing it. Now we are factoring out that C++ core into a standalone C++ library so that it can be used directly by downstream clients (per request of one such client). This refactoring has been decomposed into a stack of differentials in order to simplify the code review process, however the full stack of changes should be considered together.

* (this): Part 1: split one file into several
* D133830: Part 2: Reorder chunks within files
* D133831: Part 3: General code cleanup
* D133833: Part 4: Update documentation

This part aims to make no changes other than the 1:N file splitting, and things which are forced to accompany that change.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133462
2022-09-29 14:35:27 -07:00
Aart Bik
f231821e6e [mlir][sparse] provide convenience methods for toOrig/toStoredDim
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D134833
2022-09-28 16:32:03 -07:00
Anlun Xu
fad84c3dbe [mlir][sparse] Support sparse2sparse collapse for dynamic sizes
This patch implements sparse2sparse collapse for operands with dynamic shape.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D131599
2022-09-27 18:40:59 -07:00
Aart Bik
a3610359b5 [mlir][sparse] change memref argument to proper SSA components
The indices for insert/compress were previously provided as
a memref<?xindex> with proper rank, since that matched the
argument for the runtime support library better. However, with
proper codegen coming, providing the indices as SSA values
is much cleaner. This also brings the sparse_tensor.insert
closer to unification with tensor.insert, planned in the
longer run.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D134404
2022-09-27 16:37:37 -07:00
Peiming Liu
938f419cf1 [mlir][sparse] Avoid generating DimOp in conversion passes.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133592
2022-09-09 18:08:05 +00:00
Peiming Liu
180bf5f940 [mlir][sparse] fix a bug in sparse2sparse reshape.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133521
2022-09-09 00:32:00 +00:00
Aart Bik
f76dcede3f [mlir][sparse] rename lex_insert into insert
This change does not impact any semantics yet, but it
is in preparation for implementing the unordered and not-unique
properties. Changing lex_insert to insert is a first step.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133531
2022-09-08 17:26:35 -07:00
Aart Bik
dc46d5c979 [mlir][sparse] improve dimop rewriting during conversion
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D133512
2022-09-08 13:04:28 -07:00
Aart Bik
ec8f2905a3 [mlir][sparse] fix bug in workspace dimension computation
Access pattern expansion is always done along the innermost stored
dimension, but this was incorrectly reordered due to using a
general utility typically used by original dimensions only.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D133472
2022-09-08 08:25:02 -07:00
Aart Bik
610b09074a [mlir][sparse] change variable dimension to fixed attribute pointers/indices
The "sparsification" pass does not need the ability to use runtime values for
the dimension, so the only source for variability would have been user code.
Restricting the dimension to constants simplifies code generation.

Reviewed By: Peiming, wrengr

Differential Revision: https://reviews.llvm.org/D133458
2022-09-07 16:27:24 -07:00
Aart Bik
1b434652c5 [mlir][sparse] add more dimension level types and properties
We recently removed the singleton dimension level type (see the revision
https://reviews.llvm.org/D131002) since it was unimplemented but also
incomplete (properties were missing). This revision adds singleton back as an
extra dimension level type, together with properties ordered/not-ordered
and unique/not-unique. Even though still not lowered to actual code, this
provides a complete way of defining many more sparse storage schemes (in
the long run, we want to support even more dimension level types and properties
using the additional extensions proposed in [Chou]).

Note that the current solution of using suffixes for the properties is not
ideal, but keeps the extension relatively simple with respect to parsing and
printing. Furthermore, it is rather consistent with the TACO implementation
which uses things like Compressed-Unique as well. Nevertheless, we probably
want to separate dimension level types from properties when we add more types
and properties.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D132897
2022-08-30 10:37:49 -07:00
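A sketch of the suffix-based naming scheme described above; the exact enumerators are assumed, only the ordered/unique suffix convention comes from the commit message:

```cpp
#include <cstdint>

// "Nu" = not unique, "No" = not ordered (suffix convention per the commit).
enum class DimLevelType : uint8_t {
  Dense,
  Compressed,     // ordered, unique
  CompressedNu,   // not unique
  CompressedNo,   // not ordered
  CompressedNuNo, // neither
  Singleton,
  SingletonNu,
  SingletonNo,
  SingletonNuNo,
};
```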
Aart Bik
86b22d3120 [mlir][sparse] start a sparse codegen conversion pass
This new pass provides an alternative to the current conversion pass
that converts sparse tensor types and sparse primitives to opaque pointers
and calls into a runtime support library. This pass will map sparse tensor
types to actual data structures and primitives to actual code. In the long
run, this new pass will remove our dependence on the support library, avoid
the need to link in fully templated and expanded code, and provide much better
opportunities for optimization on the generated code.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D132766
2022-08-29 09:39:33 -07:00
Peiming Liu
8f8c8168f3 [mlir][sparse] fix compiler warning
Reviewed By: aartbik, bixia

Differential Revision: https://reviews.llvm.org/D132040
2022-08-17 17:46:28 +00:00
Thomas Joerg
5ad7cc7f21 Fix unused variable (introduced in
c248219b09)
2022-08-17 15:18:37 +02:00
Peiming Liu
ee986ab727 [mlir][sparse] Refactoring: remove Operation * from the argument list in utility functions
This patch removes the Operation *op from the argument list of utility functions and directly passes the Location instead of calling op->getLoc().

This should make the code clearer, as the utility functions (logically) do not rely on the operation that we are currently rewriting and behave the same regardless of the operation.

Reviewed By: aartbik, wrengr

Differential Revision: https://reviews.llvm.org/D131991
2022-08-16 21:26:43 +00:00
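A minimal sketch of the refactoring pattern (the helper is hypothetical): utilities take the Location directly instead of the operation being rewritten:

```cpp
#include "mlir/Dialect/MemRef/IR/MemRef.h"
#include "mlir/IR/Builders.h"

// Before: Value genAlloca(OpBuilder &b, Operation *op, Type t);
//         ... which called op->getLoc() internally.
// After: the caller passes the location explicitly.
static mlir::Value genAlloca(mlir::OpBuilder &b, mlir::Location loc,
                             mlir::Type t) {
  auto memTp = mlir::MemRefType::get({}, t); // rank-0 memref of type `t`
  return b.create<mlir::memref::AllocaOp>(loc, memTp);
}
```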
Peiming Liu
c248219b09 [mlir][sparse] Implements concatenate operation for sparse tensor
This patch implements the conversion rule for the operation introduced in https://reviews.llvm.org/D131200.
It also contains an integration test for correctness.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D131200
2022-08-16 20:47:47 +00:00
Aart Bik
9921ef73c8 [mlir][sparse] remove singleton dimension level type (for now)
Although we have plans to support this, and many other, dimension level types, the tag is currently not supported. It will be easy to add it back once support is implemented.

NOTE: based on discussion in https://discourse.llvm.org/t/overcoming-sparsification-limitation-on-level-types/62585

https://github.com/llvm/llvm-project/issues/51658

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D131002
2022-08-02 11:48:49 -07:00
Matthias Springer
27a431f5e9 [mlir][bufferization][NFC] Move sparse_tensor.release to bufferization dialect
This op used to belong to the sparse dialect, but there are use cases for dense bufferization as well. (E.g., when a tensor alloc is returned from a function and should be deallocated at the call site.) This change moves the op to the bufferization dialect, which now has an `alloc_tensor` and a `dealloc_tensor` op.

Differential Revision: https://reviews.llvm.org/D129985
2022-07-19 09:18:19 +02:00
Kazu Hirata
10bcfeebfa [mlir] Remove unused using (NFC)
Identified with misc-unused-using-decls.
2022-07-17 18:08:48 -07:00
Matthias Springer
c66303c287 [mlir][sparse] Switch to One-Shot Bufferize
This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.

Details:
* Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function. (That way, the allocation happens at the call site.)
* Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
* Sparse tensor copies are not supported yet. (Future change)
* Sparsification no longer has to consider inplaceability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification is running. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which will trigger an allocation and/or copy during TensorCopyInsertion.

Differential Revision: https://reviews.llvm.org/D129356
2022-07-14 09:52:48 +02:00