Commit Graph

251 Commits

Author SHA1 Message Date
Nick Kreeger
30ceb783e2 [mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options.
The SparseTensor passes currently use opaque numbers for the CLI, despite using an enum internally. This patch exposes the enums instead of numbered items that are matched back to the enum.

Fixes https://github.com/llvm/llvm-project/issues/53389

Differential Revision: https://reviews.llvm.org/D123876

Please also see:
https://reviews.llvm.org/D118379
https://reviews.llvm.org/D117919
2022-09-04 01:39:35 +00:00
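A sketch of what this means on the command line; the exact enum spellings below are assumptions, not taken from the patch:

    // before: opaque numbers mapped back to the internal enum
    //   mlir-opt %s --sparsification="vectorization-strategy=2 vl=16"
    // after: self-documenting enum names (spellings assumed)
    //   mlir-opt %s --sparsification="vectorization-strategy=any-storage-inner-loop vl=16"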
Nick Kreeger
91470d6352 Revert "[mlir][sparse] Expose SparseTensor passes as enums instead of opaque"
This reverts commit ef25b5d93d.
2022-09-03 15:47:40 -05:00
Nick Kreeger
ef25b5d93d [mlir][sparse] Expose SparseTensor passes as enums instead of opaque
numbers for vectorization and parallelization options.

The SparseTensor passes currently use opaque numbers for the CLI,
despite using an enum internally. This patch exposes the enums instead
of numbered items that are matched back to the enum.

Fixes https://github.com/llvm/llvm-project/issues/53389

Differential Revision: https://reviews.llvm.org/D123876

Please also see:
https://reviews.llvm.org/D118379
https://reviews.llvm.org/D117919
2022-09-03 15:45:49 -05:00
Peiming Liu
c3aeb3e644 [mlir][sparse] Introduce sparse_tensor.storage operator to create a sparse tensor storage tuple
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133231
2022-09-03 00:08:29 +00:00
Peiming Liu
928b5b06f9 [mlir][sparse] add conversion rules for storage_get/set/callOp
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133175
2022-09-02 18:27:54 +00:00
Aart Bik
2ddfacd95c [mlir][sparse] codegen for sparse dealloc
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D133171
2022-09-01 22:21:20 -07:00
Aart Bik
f27b806df5 [mlir][sparse] codegen for trivial tensor cast
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D133176
2022-09-01 21:55:18 -07:00
Aart Bik
3ae98fd259 [mlir][sparse] added codegen for dimop, pointers, indices, values
Demonstrates how the sparse tensor type -> tuple -> getter flow
will eventually yield actual code on the memrefs directly.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133143
2022-09-01 16:36:10 -07:00
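A rough sketch of that flow, in 2022-era op syntax (#SV is an assumed encoding):

    #SV = #sparse_tensor.encoding<{ dimLevelType = ["compressed"] }>
    %c0 = arith.constant 0 : index
    %ptrs = sparse_tensor.pointers %t, %c0 : tensor<?xf64, #SV> to memref<?xindex>
    %vals = sparse_tensor.values %t : tensor<?xf64, #SV> to memref<?xf64>
    // codegen maps %t to a storage tuple, so each getter becomes a direct
    // access of the corresponding memref field instead of a library call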
Peiming Liu
ca01c996b2 [mlir][sparse] Add SparseTensorStorageExpansion Pass to expand compounded sparse tensor tuples
This patch adds the SparseTensorStorageExpansion pass, which flattens the tuple used to store
a sparse tensor handle.

Right now, it only sets up the skeleton for the pass; more lowering rules for sparse tensor
storage operations still need to be added.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133125
2022-09-01 22:47:31 +00:00
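A minimal sketch of the intended 1:N flattening (the tuple field layout is an assumption):

    // before expansion: the sparse tensor handle is one tuple-typed value
    //   func.func @f(%h: tuple<memref<?xindex>, memref<?xindex>, memref<?xf64>>)
    // after expansion: one argument per tuple field
    //   func.func @f(%p: memref<?xindex>, %i: memref<?xindex>, %v: memref<?xf64>)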
Aart Bik
1be09496bf [mlir][sparse] improved tensor type lowering
Also includes a first codegen example (although full support needs tuple access)

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133080
2022-09-01 09:24:20 -07:00
Peiming Liu
7ea643c06d [mlir][sparse] Introduce new sparse_tensor.storage_get/set to access memory that stores the handle of a sparse tensor
Introduce new sparse_tensor.storage_get/set operations to access the memory that stores the handle of a sparse tensor. The sparse tensor storage is represented as a tuple; these operations will later be eliminated and the tuple will be flattened after sparse tensor codegen.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133049
2022-08-31 22:15:15 +00:00
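A minimal sketch of the accessors, with the syntax assumed from the description:

    // read field 0 of the storage tuple
    %p = sparse_tensor.storage_get %h[0]
       : tuple<memref<?xindex>, memref<?xf64>> to memref<?xindex>
    // write field 0, yielding an updated tuple value
    %h2 = sparse_tensor.storage_set %new, %h[0]
        : memref<?xindex>, tuple<memref<?xindex>, memref<?xf64>>
        to tuple<memref<?xindex>, memref<?xf64>>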
Aart Bik
f767f09252 [mlir][sparse] sparse storage scheme type conversion
This builds a compound type for the buffers required for the sparse storage scheme defined by the MLIR sparse tensor types. The use of a tuple allows for a simple 1:1 type conversion. A subsequent pass can expand this tuple into its components with an isolated 1:N type conversion.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133050
2022-08-31 15:12:55 -07:00
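For a CSR-like matrix the 1:1 conversion could look as follows (the exact field layout is an assumption):

    // tensor<?x?xf64, #CSR>  --converts to-->
    //   tuple<memref<2xindex>,   // dimension sizes
    //         memref<?xindex>,   // pointers of the compressed dimension
    //         memref<?xindex>,   // indices of the compressed dimension
    //         memref<?xf64>>     // values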
Michele Scuttari
67d0d7ac0a [MLIR] Update pass declarations to new autogenerated files
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D132838
2022-08-31 12:28:45 +02:00
Michele Scuttari
039b969b32 Revert "[MLIR] Update pass declarations to new autogenerated files"
This reverts commit 2be8af8f0e.
2022-08-30 22:21:55 +02:00
Michele Scuttari
2be8af8f0e [MLIR] Update pass declarations to new autogenerated files
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D132838
2022-08-30 21:56:31 +02:00
Aart Bik
1b434652c5 [mlir][sparse] add more dimension level types and properties
We recently removed the singleton dimension level type (see the revision
https://reviews.llvm.org/D131002) since it was unimplemented but also
incomplete (properties were missing). This revision adds singleton back as an
extra dimension level type, together with the properties ordered/not-ordered
and unique/not-unique. Even though still not lowered to actual code, this
provides a complete way of defining many more sparse storage schemes (in
the long run, we want to support even more dimension level types and properties
using the additional extensions proposed in [Chou]).

Note that the current solution of using suffixes for the properties is not
ideal, but keeps the extension relatively simple with respect to parsing and
printing. Furthermore, it is rather consistent with the TACO implementation
which uses things like Compressed-Unique as well. Nevertheless, we probably
want to separate dimension level types from properties when we add more types
and properties.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D132897
2022-08-30 10:37:49 -07:00
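In the type syntax the properties appear as suffixes on the level types; a sketch, with the exact spellings assumed ("-nu" for not-unique, "-no" for not-ordered):

    #SortedCOO = #sparse_tensor.encoding<{
      dimLevelType = [ "compressed-nu", "singleton" ]
    }>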
Aart Bik
86b22d3120 [mlir][sparse] start a sparse codegen conversion pass
This new pass provides an alternative to the current conversion pass
that converts sparse tensor types and sparse primitives to opaque pointers
and calls into a runtime support library. This pass will map sparse tensor
types to actual data structures and primitives to actual code. In the long
run, this new pass will remove our dependence on the support library, avoid
the need to link in fully templated and expanded code, and provide much better
opportunities for optimization on the generated code.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D132766
2022-08-29 09:39:33 -07:00
Rajas Vanjape
681b97e586 Remove TODO related to adding assert from Sparse Tensor Pipeline code
Removing the TODO about asserting that the original `pm` is for a ModuleOp.
The TODO is removed for the following reasons:
1. There is no easy way to do this. We currently don't have this information stored in the OpPassManager object.
2. There are currently no consumers of this information, and storing it with OpPassManager for a
   simple assert would be overkill.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D132699
2022-08-25 20:47:30 +00:00
Peiming Liu
ec495b53f8 [mlir][sparse] Folding operations that try to insert zero into an all-zero sparse tensor
The operations that fill zeros into a newly allocated sparse tensor are redundant; moreover,
the test cases provided in the patch failed to lower without this folding.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D132500
2022-08-25 17:00:04 +00:00
Kazu Hirata
06b551c944 Use llvm::is_contained (NFC) 2022-08-20 21:18:27 -07:00
Aart Bik
e3d64ccf9f [mlir][sparse] more concise sparse tensor type printing
This change omits default values from the sparse tensor type,
saving considerable text real estate for the common cases.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D132083
2022-08-17 17:35:50 -07:00
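Roughly, defaults such as the identity dimension ordering and the default bit widths are elided; a sketch with the defaults assumed:

    // verbose form with explicit defaults ...
    //   #sparse_tensor.encoding<{ dimLevelType = ["compressed"],
    //                             pointerBitWidth = 0, indexBitWidth = 0 }>
    // ... now prints as just
    //   #sparse_tensor.encoding<{ dimLevelType = ["compressed"] }>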
Peiming Liu
8f8c8168f3 [mlir][sparse] fix compiler warning
Reviewed By: aartbik, bixia

Differential Revision: https://reviews.llvm.org/D132040
2022-08-17 17:46:28 +00:00
Jim Kitchen
c8bb23547f [mlir][sparse] Custom reduce with identity
Implement the new sparse_tensor.reduce operation which
accepts a starting identity value and a code block
describing how to perform the reduction.

Reviewed by: aartbik

Differential Revision: https://reviews.llvm.org/D130573
2022-08-17 11:21:46 -05:00
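A minimal sketch of the op being lowered (block form assumed from the description; the op itself is introduced in the D128004 entry further down):

    %r = sparse_tensor.reduce %x, %y, %identity : f64 {
      ^bb0(%a: f64, %b: f64):
        %m = arith.mulf %a, %b : f64
        sparse_tensor.yield %m : f64
    }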
Thomas Joerg
5ad7cc7f21 Fix unused variable (introduced in c248219b09)
2022-08-17 15:18:37 +02:00
Peiming Liu
ee986ab727 [mlir][sparse] Refactoring: remove Operation * from the argument list in utility functions
This patch removes the Operation *op from the argument list of the utility functions and directly passes the Location instead of calling op->getLoc().

This should make the code clearer, as the utility functions do not (logically) rely on the operation currently being rewritten, and they behave the same regardless of the operation.

Reviewed By: aartbik, wrengr

Differential Revision: https://reviews.llvm.org/D131991
2022-08-16 21:26:43 +00:00
Peiming Liu
c248219b09 [mlir][sparse] Implements concatenate operation for sparse tensor
This patch implements the conversion rule for the operation introduced in https://reviews.llvm.org/D131200.
Also contains an integration test for correctness.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D131200
2022-08-16 20:47:47 +00:00
Kazu Hirata
3a6da9ebcb [mlir] Remove redundant member initialization (NFC)
Identified with readability-redundant-member-init.
2022-08-14 12:51:59 -07:00
Aart Bik
8dd07e36ca [mlir][sparse] enable integral abs recognition
The end-to-end test for this new feature also exposed a bug
in LLVM IR lowering (since fixed), where we need to account
for the min-poison bit as an extra argument.

    declare i32 @llvm.abs.i32(i32 <src>, i1 <is_int_min_poison>)

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D131712
2022-08-12 11:36:40 -07:00
Peiming Liu
de907138ec [mlir][sparse] Add new concatenate operator to sparse tensor
See https://www.tensorflow.org/xla/operation_semantics#concatenate for the operator semantics

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D131111
2022-08-08 17:23:43 +00:00
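A sketch of the op, with shapes and the #CSR encoding assumed; concatenation along dimension 0 sums that dimension's sizes:

    %0 = sparse_tensor.concatenate %a, %b {dimension = 0 : index}
       : tensor<2x4xf64, #CSR>, tensor<3x4xf64, #CSR> to tensor<5x4xf64, #CSR>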
Jeff Niu
00f7096d31 [mlir][math] Rename math.abs -> math.absf
To make room for introducing `math.absi`.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D131325
2022-08-08 11:04:58 -04:00
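After the rename the two flavors can coexist; a tiny sketch (math.absi shown as the planned follow-up, not yet present at this commit):

    %f = math.absf %a : f32   // floating-point absolute value (formerly math.abs)
    %i = math.absi %b : i32   // integer flavor the rename makes room for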
Jacques Pienaar
d3b3f7653d [mlir] Flip to prefixed accessors (NFC) 2022-08-07 04:55:58 -07:00
Aart Bik
7f5b167336 [mlir][sparse] fix bug in complex zero detection
We were checking the real part twice instead of the real and imaginary parts.
The new test only passes after the bug fix.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D131190
2022-08-04 13:35:13 -07:00
Aart Bik
c7bb69bc75 [mlir][sparse] replace zero yield generic op with copy in allocation
This handles patterns that are sometimes generated by the front-end
and would otherwise prohibit fusion of SDDMM-flavored kernels.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D131126
2022-08-04 09:33:57 -07:00
Aart Bik
ce3d0e87ac [mlir][sparse] enable SDDMM-flavored fusion
This rewriting was no longer functional after the recent migration to one-shot
bufferization. This revision makes it work again, with a CHECK test
to ensure fusion happens. Note that the functionality is tested by several
integration tests.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D130996
2022-08-02 12:40:04 -07:00
Aart Bik
9921ef73c8 [mlir][sparse] remove singleton dimension level type (for now)
Although we have plans to support this and many other dimension level types, the tag is currently not supported. It will be easy to add this back once support is added.

NOTE: based on discussion in https://discourse.llvm.org/t/overcoming-sparsification-limitation-on-level-types/62585

https://github.com/llvm/llvm-project/issues/51658

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D131002
2022-08-02 11:48:49 -07:00
bixia1
66088afbc8 [mlir][sparse] Add arith-expand pass to the sparse-compiler pipeline.
Modifies an existing test to cover this situation.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D130658
2022-07-27 14:42:21 -07:00
Peiming Liu
bf59cd320e [mlir][sparse] fix error when sparse kernel is nested in an scf structural operation.
The sparse compiler failed on the provided test (when the sparse kernel is nested in an scf structural operation).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D130609
2022-07-27 16:12:23 +00:00
Jacques Pienaar
a1ec0d8bdc [mlir] Flip dialects to _Prefixed
At least two weeks have passed since the flip to _Both. Made some additional
NFC changes in .td files that were not converted earlier.
2022-07-21 12:03:07 -07:00
Jacques Pienaar
d2c0572b2e [mlir] Flip LinAlg dialect to _Both
This one required more changes than ideal due to a generated name overlapping
with different return types. Changed getIndexingMaps to getIndexingMapsArray to
move it out of the way and to highlight that it (more expensively) returns a
SmallVector; the prefixed name is used for the Attribute.

Differential Revision: https://reviews.llvm.org/D129919
2022-07-19 14:42:58 -07:00
Matthias Springer
27a431f5e9 [mlir][bufferization][NFC] Move sparse_tensor.release to bufferization dialect
This op used to belong to the sparse dialect, but there are use cases for dense bufferization as well. (E.g., when a tensor alloc is returned from a function and should be deallocated at the call site.) This change moves the op to the bufferization dialect, which now has an `alloc_tensor` and a `dealloc_tensor` op.

Differential Revision: https://reviews.llvm.org/D129985
2022-07-19 09:18:19 +02:00
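After the move, allocation and deallocation pair up inside the bufferization dialect; a small sketch (the sparse encoding #SV is assumed):

    #SV = #sparse_tensor.encoding<{ dimLevelType = ["compressed"] }>
    %t = bufferization.alloc_tensor() : tensor<16xf64, #SV>
    // ... use %t ...
    bufferization.dealloc_tensor %t : tensor<16xf64, #SV>  // formerly sparse_tensor.release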
Aart Bik
28ebb0b61d [mlir][sparse] migrate sparse rewriting to sparse transformations pass
The rules in the linalg file were very specific to sparse tensors, so they will
find a better home under the sparse tensor dialect than the linalg dialect. Also
moved some rewriting from sparsification into this new "pre-rewriting" file.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D129910
2022-07-18 09:29:22 -07:00
Kazu Hirata
10bcfeebfa [mlir] Remove unused using (NFC)
Identified with misc-unused-using-decls.
2022-07-17 18:08:48 -07:00
Jim Kitchen
2b8a4d9ce1 [mlir][sparse] Introduce new reduce op
A new sparse_tensor operation allows for
custom reduction code to be injected during
linalg.generic lowering for sparse tensors.
An identity value is provided to indicate
the starting value of the reduction. A single
block region is required to contain the
custom reduce computation.

Reviewed by: aartbik

Differential Revision: https://reviews.llvm.org/D128004
2022-07-15 15:30:41 -05:00
Matthias Springer
c66303c287 [mlir][sparse] Switch to One-Shot Bufferize
This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.

Details:
* Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function; that way, the allocation happens at the call site (see the sketch after this entry).
* Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
* Sparse tensor copies are not supported yet. (Future change)
* Sparsification no longer has to consider inplacability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification is running. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which will trigger an allocation and/or copy during TensorCopyInsertion.

Differential Revision: https://reviews.llvm.org/D129356
2022-07-14 09:52:48 +02:00
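A small sketch of the "out" tensor pattern from the details above (the encoding and kernel are illustrative assumptions):

    #SV = #sparse_tensor.encoding<{ dimLevelType = ["compressed"] }>
    // the result buffer is passed in, so after bufferization the allocation
    // (and its deallocation) happen at the call site rather than leaking here
    func.func @scale(%a: tensor<8xf64, #SV>, %out: tensor<8xf64>) -> tensor<8xf64> {
      %c = arith.constant 2.0 : f64
      %0 = linalg.generic {
             indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>],
             iterator_types = ["parallel"] }
           ins(%a : tensor<8xf64, #SV>) outs(%out : tensor<8xf64>) {
        ^bb0(%x: f64, %y: f64):
          %m = arith.mulf %x, %c : f64
          linalg.yield %m : f64
      } -> tensor<8xf64>
      return %0 : tensor<8xf64>
    }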
Kazu Hirata
c27d815249 [mlir] Use value instead of getValue (NFC) 2022-07-14 00:19:59 -07:00
Kazu Hirata
491d27013d [mlir] Use has_value instead of hasValue (NFC) 2022-07-13 00:57:02 -07:00
Aart Bik
faa00c1313 [mlir][sparse] implement sparse2sparse reshaping (expand/collapse)
A previous revision implemented expand/collapse reshaping between
dense and sparse tensors for sparse2dense and dense2sparse since those
could use the "cheap" view reshape on the already materialized
dense tensor (at either the input or output side), and do some
reshuffling from or to sparse. The dense2dense case, as always,
is handled with a "cheap" view change.

This revision implements the sparse2sparse cases. Lacking any "view"
support on sparse tensors, this operation necessarily has to perform
data reshuffling on both ends.

Tracker for improving this:
https://github.com/llvm/llvm-project/issues/56477

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D129416
2022-07-11 14:49:06 -07:00
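The sparse2sparse case is the standard reshape op with sparse encodings on both sides; a sketch (encodings assumed):

    #SparseVector = #sparse_tensor.encoding<{ dimLevelType = ["compressed"] }>
    #SparseMatrix = #sparse_tensor.encoding<{ dimLevelType = ["dense", "compressed"] }>
    // lacking sparse "views", this expansion reshuffles the stored entries
    // on both the source and the destination side
    %0 = tensor.expand_shape %v [[0, 1]]
       : tensor<12xf64, #SparseVector> into tensor<3x4xf64, #SparseMatrix>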
Jacques Pienaar
136d746ec7 [mlir] Flip accessors to prefixed form (NFC)
Another mechanical sweep to keep diff small for flip to _Prefixed.
2022-07-10 21:19:11 -07:00
Aart Bik
6d8e2f1e51 [mlir][sparse] implement simple reshaping (expand/collapse)
The revision makes a start with implementing expand/collapse reshaping
for sparse tensors. When either the source or the destination is sparse, but
the other is dense, the "cheap" dense reshape can be used prior to converting
from or to a sparse tensor.

Note 1: sparse-to-sparse reshaping is still TBD.

Note 2: in the long run, we may want to implement a "view" into a sparse tensor so that the operation remains cheap and does not require data shuffling.

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D129031
2022-07-06 14:34:30 -07:00
wren romano
875ee0ed1c [mlir][sparse] Reducing computational complexity
This is a followup to D128847. The `AffineMap::getPermutedPosition` method performs a linear scan of the map; thus, the previous implementation had asymptotic complexity `O(|topSort| * |m|)`. This change reduces that to `O(|topSort| + |m|)`.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D129011
2022-07-01 12:55:09 -07:00