Commit Graph

73 Commits

Peiming Liu
083ddffe47 [mlir][sparse] introduce sparse_tensor::StorageSpecifierToLLVM pass
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140122
2022-12-22 22:45:15 +00:00
bixia1
ea4be70cea [mlir][sparse] Fix problems in creating complex zero for initialization.
Reviewed By: aartbik, wrengr

Differential Revision: https://reviews.llvm.org/D139591
2022-12-08 07:49:27 -08:00
Aart Bik
99b3849d89 [mlir][sparse] introduce vectorization pass for sparse loops
This brings back previous SIMD functionality, but in a separate pass.
The idea is to improve this new pass incrementally, going beyond for-loops
to while-loops for co-iteration as well (masking), while introducing new
abstractions to make the lowering more progressive. The separation of
sparsification and vectorization is a very good first step on this journey.

Also brings back ArmSVE support

Still to be fine-tuned:
  + use of "index" in SIMD loop (viz. a[i] = i)
  + check that all ops really have SIMD support
  + check all forms of reductions
  + chain reduction SIMD values

Reviewed By: dcaballe

Differential Revision: https://reviews.llvm.org/D138236
2022-11-21 16:12:12 -08:00
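
As a rough sketch of how the split pipeline described above could be driven, assuming the new pass is registered as sparse-vectorization and takes a vector-length option named vl (both spellings are assumptions, not verified against the pass registration):

    # hypothetical invocation: sparsify first, then vectorize the generated loops
    mlir-opt input.mlir \
      --sparsification \
      --sparse-vectorization="vl=8"
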
bixia1
f81f0cb75a [mlir][sparse] Split SparseTensorRewrite into PreSparsificationRewrite and PostSparsificationRewrite.
Reviewed By: aartbik, wrengr

Differential Revision: https://reviews.llvm.org/D138153
2022-11-17 07:13:55 -08:00
Aart Bik
d1da6f23a6 [mlir][sparse] avoid single default parameters in pass constructors
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D138054
2022-11-15 13:07:54 -08:00
bixia1
4f729d5a70 [mlir][sparse] Add rewriting rules for sparse_tensor.sort_coo.
Refactor the rewriting of sparse_tensor.sort to support the implementation of
sparse_tensor.sort_coo.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137522
2022-11-14 08:48:53 -08:00
bixia1
7276b643f8 [mlir][sparse] Add option enable-buffer-initialization to the sparse-tensor-codegen pass.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137733
2022-11-10 07:23:33 -08:00
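
A minimal sketch of using the option above from the command line, assuming the pass is exposed to mlir-opt as sparse-tensor-codegen and the option is a boolean (treat both spellings as assumptions):

    # hypothetical invocation: zero-initialize the buffers backing sparse tensors
    mlir-opt input.mlir \
      --sparse-tensor-codegen="enable-buffer-initialization=true"
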
bixia1
5618d2bea9 [mlir][sparse] Add option enable-buffer-initialization to initialize the memory buffers for sparse tensors to support debugging.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137592
2022-11-08 09:54:33 -08:00
Peiming Liu
26eb2c6b42 [mlir][sparse] remove vector support in sparsification
The sparse compiler used to generate vectorized code for sparse tensor computations, but this should really be delegated to other vectorization passes for better progressive lowering.

 https://discourse.llvm.org/t/rfc-structured-codegen-beyond-rectangular-arrays/64707

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136183
2022-10-19 18:11:29 +00:00
bixia1
c1864ab953 [mlir][sparse] Add options to sparse-tensor-rewrite to disable rewriting rules for operators foreach and convert.
This is to help simplify FileCheck tests for sparse-tensor-rewrite.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136093
2022-10-18 09:27:32 -07:00
Aart Bik
779dcd2ecc [mlir][sparse] move sparse tensor rewriting into its own pass
Makes individual testing and debugging easier.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D135319
2022-10-05 14:52:55 -07:00
bixia1
654bbbde55 [mlir][sparse] Move the implementation of sparse_tensor.push_back to the buffer rewriter.
Reviewed By: aartbik, Peiming

Differential Revision: https://reviews.llvm.org/D134777
2022-09-29 15:06:00 -07:00
bixia1
062e515b70 [mlir][sparse] Add rewrite rule for the sort operator.
Add a sparse-buffer-rewrite pass to rewrite sparse primitives on buffers into an
MLIR implementation.

Add sparse rewrite rule for the sort operator.

Add FileCheck test and integration test.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D134627
2022-09-29 11:38:19 -07:00
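
A minimal sketch of exercising the new pass in isolation, assuming it is registered with mlir-opt under the sparse-buffer-rewrite name (an assumption):

    # hypothetical invocation: lower sparse_tensor.sort (and other buffer-level
    # primitives) to plain MLIR instead of library calls
    mlir-opt input.mlir --sparse-buffer-rewrite
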
Jakub Kuderski
abc362a107 [mlir][arith] Change dialect name from Arithmetic to Arith
Suggested by @lattner in https://discourse.llvm.org/t/rfc-define-precise-arith-semantics/65507/22.

Tested with:
`ninja check-mlir check-mlir-integration check-mlir-mlir-spirv-cpu-runner check-mlir-mlir-vulkan-runner check-mlir-examples`

and `bazel build --config=generic_clang @llvm-project//mlir:all`.

Reviewed By: lattner, Mogball, rriddle, jpienaar, mehdi_amini

Differential Revision: https://reviews.llvm.org/D134762
2022-09-29 11:23:28 -04:00
Aart Bik
4d06861950 [mlir][sparse] add "sort" to the compress op codegen
This revision also adds convenience methods to test the dim level type/property
(with the codegen being the first client)

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D134776
2022-09-28 10:41:40 -07:00
Aart Bik
30b550f14c [mlir][sparse] add loop simplification to sparsification pass
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D134090
2022-09-16 16:29:47 -07:00
Peiming Liu
eb65327fe9 [mlir][sparse] Add new option (enable-runtime-library) to sparse compiler pipeline
Add a new option (enable-runtime-library) to the sparse compiler pipeline. It allows us to decide whether operations (e.g., concatenate, reshape) need to be rewritten within sparsification (when using codegen) or converted after sparsification (when using the runtime library).

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133597
2022-09-09 20:54:47 +00:00
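
A hedged illustration of the new pipeline option, assuming the end-to-end pipeline is registered with mlir-opt under the sparse-compiler name (an assumption):

    # hypothetical invocations: codegen path vs. runtime-library path
    mlir-opt input.mlir --sparse-compiler="enable-runtime-library=false"
    mlir-opt input.mlir --sparse-compiler="enable-runtime-library=true"
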
bixia1
8a583bd53d [mlir][sparse] Add codegen for expand op.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133454
2022-09-08 14:06:01 -07:00
Peiming Liu
edca72f5bc [mlir][sparse] Refactoring: remove dependence on tuple type when lowering sparse tensors.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133390
2022-09-07 17:53:48 +00:00
Peiming Liu
4c46a5d54d [mlir][sparse] Refactoring: renaming StorageNewOp to StorageOp
To address comment in https://reviews.llvm.org/D133241

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133363
2022-09-06 17:02:25 +00:00
Aart Bik
0c7abd3924 [mlir][sparse] codegen for sparse alloc
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133241
2022-09-06 09:37:54 -07:00
Nick Kreeger
30ceb783e2 [mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options.
The SparseTensor passes currently use opaque numbers for the CLI, despite using an enum internally. This patch exposes the enums instead of numbered items that are matched back to the enum.

Fixes https://github.com/llvm/llvm-project/issues/53389

Differential Revision: https://reviews.llvm.org/D123876

Please also see:
https://reviews.llvm.org/D118379
https://reviews.llvm.org/D117919
2022-09-04 01:39:35 +00:00
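
A before/after sketch of the CLI change described above; the symbolic enum value names shown are assumptions, not taken from the patch itself:

    # before: opaque number matched back to the enum
    mlir-opt input.mlir --sparsification="parallelization-strategy=4"
    # after: a symbolic enum value (hypothetical spelling)
    mlir-opt input.mlir --sparsification="parallelization-strategy=any-storage-any-loop"
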
Nick Kreeger
91470d6352 Revert "[mlir][sparse] Expose SparseTensor passes as enums instead of opaque"
This reverts commit ef25b5d93d.
2022-09-03 15:47:40 -05:00
Nick Kreeger
ef25b5d93d [mlir][sparse] Expose SparseTensor passes as enums instead of opaque
numbers for vectorization and parallelization options.

The SparseTensor passes currently use opaque numbers for the CLI,
despite using an enum internally. This patch exposes the enums instead
of numbered items that are matched back to the enum.

Fixes https://github.com/llvm/llvm-project/issues/53389

Differential Revision: https://reviews.llvm.org/D123876

Please also see:
https://reviews.llvm.org/D118379
https://reviews.llvm.org/D117919
2022-09-03 15:45:49 -05:00
Peiming Liu
928b5b06f9 [mlir][sparse] add conversion rules for storage_get/set/callOp
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133175
2022-09-02 18:27:54 +00:00
Aart Bik
2ddfacd95c [mlir][sparse] codegen for sparse dealloc
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D133171
2022-09-01 22:21:20 -07:00
Aart Bik
3ae98fd259 [mlir][sparse] added codegen for dimop, pointers, indices, values
Demonstrates how sparse tensor type -> tuple -> getter
will eventually yield actual code on the memrefs directly

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133143
2022-09-01 16:36:10 -07:00
Peiming Liu
ca01c996b2 [mlir][sparse] Add SparseTensorStorageExpansion Pass to expand compounded sparse tensor tuples
This patch adds the SparseTensorStorageExpansion pass, which flattens the tuple used
to store a sparse tensor handle.

Right now, it only sets up the skeleton for the pass; more lowering rules for sparse
tensor storage operations need to be added.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133125
2022-09-01 22:47:31 +00:00
Aart Bik
1be09496bf [mlir][sparse] improved tensor type lowering
Also includes a first codegen example (although full support needs tuple access)

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133080
2022-09-01 09:24:20 -07:00
Michele Scuttari
67d0d7ac0a [MLIR] Update pass declarations to new autogenerated files
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D132838
2022-08-31 12:28:45 +02:00
Michele Scuttari
039b969b32 Revert "[MLIR] Update pass declarations to new autogenerated files"
This reverts commit 2be8af8f0e.
2022-08-30 22:21:55 +02:00
Michele Scuttari
2be8af8f0e [MLIR] Update pass declarations to new autogenerated files
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D132838
2022-08-30 21:56:31 +02:00
Aart Bik
86b22d3120 [mlir][sparse] start a sparse codegen conversion pass
This new pass provides an alternative to the current conversion pass
that converts sparse tensor types and sparse primitives to opaque pointers
and calls into a runtime support library. This pass will map sparse tensor
types to actual data structures and primitives to actual code. In the long
run, this new pass will remove our dependence on the support library, avoid
the need to link in fully templated and expanded code, and provide much better
opportunities for optimization on the generated code.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D132766
2022-08-29 09:39:33 -07:00
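
As a hedged sketch of what "actual data structures" could mean under this codegen path, a CSR matrix might be exposed as plain memrefs instead of an opaque runtime pointer; the exact storage layout and names below are assumptions for illustration only:

    // hypothetical CSR backing storage under the codegen path
    func.func @csr_storage(%pointers: memref<?xindex>,  // row pointers
                           %indices: memref<?xindex>,   // column indices of nonzeros
                           %values: memref<?xf64>) {    // nonzero values
      return
    }
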
Peiming Liu
bf59cd320e [mlir][sparse] fix error when sparse kernel is nested in a scf structural operator.
The sparse compiler failed on the provided test (when the sparse kernel is nested in a scf structural operator).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D130609
2022-07-27 16:12:23 +00:00
Matthias Springer
27a431f5e9 [mlir][bufferization][NFC] Move sparse_tensor.release to bufferization dialect
This op used to belong to the sparse dialect, but there are use cases for dense bufferization as well. (E.g., when a tensor alloc is returned from a function and should be deallocated at the call site.) This change moves the op to the bufferization dialect, which now has an `alloc_tensor` and a `dealloc_tensor` op.

Differential Revision: https://reviews.llvm.org/D129985
2022-07-19 09:18:19 +02:00
Aart Bik
28ebb0b61d [mlir][sparse] migrate sparse rewriting to sparse transformations pass
The rules in the linalg file were very specific to sparse tensors, so they will
find a better home under the sparse tensor dialect than the linalg dialect. Also
moved some rewriting from sparsification into this new "pre-rewriting" file.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D129910
2022-07-18 09:29:22 -07:00
Matthias Springer
c66303c287 [mlir][sparse] Switch to One-Shot Bufferize
This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.

Details:
* Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function. (That way, the allocation happens at the call site.)
* Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
* Sparse tensor copies are not supported yet. (Future change)
* Sparsification no longer has to consider inplacability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification is running. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which will trigger an allocation and/or copy during TensorCopyInsertion.

Differential Revision: https://reviews.llvm.org/D129356
2022-07-14 09:52:48 +02:00
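
A hedged sketch of the "out" tensor argument pattern mentioned above (names and shapes are made up for illustration): the callee writes into the tensor it receives and returns it, so the allocation and its deallocation stay at the call site.

    func.func @fill(%out: tensor<8xf32>, %v: f32) -> tensor<8xf32> {
      %0 = linalg.fill ins(%v : f32) outs(%out : tensor<8xf32>) -> tensor<8xf32>
      return %0 : tensor<8xf32>
    }
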
Aart Bik
faa00c1313 [mlir][sparse] implement sparse2sparse reshaping (expand/collapse)
A previous revision implemented expand/collapse reshaping between
dense and sparse tensors for sparse2dense and dense2sparse since those
could use the "cheap" view reshape on the already materialized
dense tensor (at either the input or output side), and do some
reshuffling from or to sparse. The dense2dense case, as always,
is handled with a "cheap" view change.

This revision implements the sparse2sparse cases. Lacking any "view"
support on sparse tensors this operation necessarily has to perform
data reshuffling on both ends.

Tracker for improving this:
https://github.com/llvm/llvm-project/issues/56477

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D129416
2022-07-11 14:49:06 -07:00
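
A hedged sketch of a sparse-to-sparse collapse of the kind this revision enables; the encoding attribute spelling follows the dimLevelType form used around this time and should be treated as an approximation:

    #SparseMatrix = #sparse_tensor.encoding<{ dimLevelType = [ "compressed", "compressed" ] }>
    #SparseVector = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>

    func.func @collapse(%arg0: tensor<3x4xf64, #SparseMatrix>) -> tensor<12xf64, #SparseVector> {
      %0 = tensor.collapse_shape %arg0 [[0, 1]]
          : tensor<3x4xf64, #SparseMatrix> into tensor<12xf64, #SparseVector>
      return %0 : tensor<12xf64, #SparseVector>
    }
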
Jacques Pienaar
136d746ec7 [mlir] Flip accessors to prefixed form (NFC)
Another mechanical sweep to keep the diff small for the flip to _Prefixed.
2022-07-10 21:19:11 -07:00
Aart Bik
6d8e2f1e51 [mlir][sparse] implement simple reshaping (expand/collapse)
The revision makes a start with implementing expand/collapse reshaping
for sparse tensors. When either source or destination is sparse, but
other is dense, the "cheap" dense reshape can be used prior to converting
from or to a sparse tensor.

Note 1: sparse-to-sparse reshaping is still TBD.

Note 2: in the long run, we may want to implement a "view" into a sparse tensor so that the operation remains cheap and does not require data shuffling

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D129031
2022-07-06 14:34:30 -07:00
Aart Bik
fde04aee33 [mlir][sparse] refine bufferization allocation lowering
Marking the bufferization allocation operation as invalid
during sparse lowering is too strict, since dense and
sparse allocations can co-exist. This revision refines
the lowering with a dynamic type check.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D128305
2022-06-21 15:17:25 -07:00
Matthias Springer
6232a8f3d6 [mlir][sparse][NFC] Switch InitOp to bufferization::AllocTensorOp
Now that we have an AllocTensorOp (previously InitTensorOp) in the bufferization dialect, the InitOp in the sparse dialect is no longer needed.

Differential Revision: https://reviews.llvm.org/D126180
2022-06-02 00:03:52 +02:00
Aart Bik
28b6d412af [mlir][sparse] add support for complex zero/one building
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D126039
2022-05-20 08:53:30 -07:00
Nick Kreeger
4620032ee3 Revert "[mlir][sparse] Expose SpareTensor passes as enums instead of opaque numbers for vectorization and parallelization options."
This reverts commit d59cf901cb.

Build fails on NVIDIA Sparse tests:
https://lab.llvm.org/buildbot/#/builders/61/builds/25447
2022-04-23 20:14:48 -05:00
Nick Kreeger
d59cf901cb [mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options.
The SparseTensor passes currently use opaque numbers for the CLI, despite using an enum internally. This patch exposes the enums instead of numbered items that are matched back to the enum.

Fixes GitHub issue #53389

Reviewed by: aartbik, mehdi_amini

Differential Revision: https://reviews.llvm.org/D123876
2022-04-23 19:16:57 -05:00
River Riddle
eda6f907d2 [mlir][NFC] Shift a bunch of dialect includes from the .h to the .cpp
Now that dialect constructors are generated in the .cpp file, we can
drop all of the dependent dialect includes from the .h file.

Differential Revision: https://reviews.llvm.org/D124298
2022-04-23 01:09:29 -07:00
River Riddle
58ceae9561 [mlir:NFC] Remove the forward declaration of FuncOp in the mlir namespace
FuncOp has been moved to the `func` namespace for a little over a month; the
using directive can be dropped now.
2022-04-18 12:01:55 -07:00
Javier Setoain
7783a178f5 [mlir][Sparse] Add option for VLA sparsification
Use "enable-vla-vectorization=vla" to generate a vector length agnostic
loops during vectorization. This option works for vectorization strategy 2.

Differential Revision: https://reviews.llvm.org/D118379
2022-03-25 10:54:49 +00:00
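
A hedged sketch of combining this with the existing sparsification options; apart from enable-vla-vectorization=vla (which the message itself gives), the vl value and the exact option spellings are assumptions:

    # hypothetical invocation: strategy 2 with scalable (VLA) vectors
    mlir-opt input.mlir \
      --sparsification="vectorization-strategy=2 vl=4 enable-vla-vectorization=vla"
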
wren romano
c7e24db412 [mlir][sparse] Introducing options for the SparseTensorConversion pass
This is work towards: https://github.com/llvm/llvm-project/issues/51652

This differential sets up the options and threads them through everywhere, but doesn't actually use them yet.  The differential that finally makes use of them is D122061, which is the final differential in the chain that fixes bug 51652.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122054
2022-03-22 13:11:09 -07:00
River Riddle
4a3460a791 [mlir:FunctionOpInterface] Rename the "type" attribute to "function_type"
This removes any potential confusion with the `getType` accessors
which correspond to SSA results of an operation, and makes it
clear what the intent is (i.e. to represent the type of the function).

Differential Revision: https://reviews.llvm.org/D121762
2022-03-16 17:07:04 -07:00