Commit Graph

192 Commits

Author SHA1 Message Date
bixia1
3fdd85da06 [mlir][sparse] Add AOS optimization.
Use an array of structures to represent the indices for the trailing COO region
of a sparse tensor.
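
To illustrate the layout change in plain C++ (a minimal sketch, not the actual MLIR storage implementation; the struct and field names are invented for illustration):

```
// Illustrative only: structure-of-arrays (SoA) vs. array-of-structures (AoS)
// layouts for the indices of a 2-D COO region. All names are hypothetical.
#include <cstdint>
#include <vector>

// SoA: one index array per level; the indices of entry k are scattered
// across separate buffers.
struct CooSoA {
  std::vector<int64_t> rowCrd, colCrd;
  std::vector<double> values;
};

// AoS: the indices of entry k are stored contiguously, so the whole COO
// region needs only a single index buffer, and reading or sorting complete
// index tuples has better locality.
struct CooAoS {
  struct Indices { int64_t row, col; };
  std::vector<Indices> indices;
  std::vector<double> values;
};
```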

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140870
2023-01-04 18:16:04 -08:00
Peiming Liu
988733c600 [mlir][sparse] use sparse_tensor::StorageSpecifier to store dim/memSizes
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140130
2022-12-23 00:47:36 +00:00
Paul Robinson
977c6f7867 [mlir] Convert tests to check 'target=...'
Part of the project to eliminate special handling for triples in lit
expressions.
2022-12-15 14:49:54 -08:00
Aart Bik
78ba3aa765 [mlir][sparse] performs a tab cleanup (NFC)
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D140142
2022-12-15 12:12:06 -08:00
bixia1
089e120060 [mlir][sparse] Make the remaining integration tests run with vectorization.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140057
2022-12-14 15:13:12 -08:00
bixia1
bfad07268b [mlir][sparse] Add another call to ConvertVectorToLLVMPass, to lower the vector operations added by ConvertMathToLLVMPass.
Run sparse_tanh with vectorization.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139958
2022-12-14 15:06:34 -08:00
bixia1
a229c162a1 [mlir][sparse] Make some integration tests run with vectorization.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D139887
2022-12-13 13:26:36 -08:00
Jakub Kuderski
269177eedf Revert "[mlir][sparse] Make some integration tests run with vectorization."
This reverts commit 2d7e3ec6b5.

This broke buildbots [1] and I can also reproduce this locally.

[1] https://lab.llvm.org/buildbot#builders/61/builds/36953
2022-12-13 13:41:28 -05:00
bixia1
2d7e3ec6b5 [mlir][sparse] Make some integration tests run with vectorization.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D139887
2022-12-13 10:02:44 -08:00
bixia1
efaa78cae0 [mlir][sparse] Replace vector.print with printMemref for some tests.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139489
2022-12-12 16:54:41 -08:00
bixia1
ea4be70cea [mlir][sparse] Fix problems in creating complex zero for initialization.
Reviewed By: aartbik, wrengr

Differential Revision: https://reviews.llvm.org/D139591
2022-12-08 07:49:27 -08:00
wren romano
2af2e4dbb7 [mlir][sparse] Breaking up openSparseTensor to better support non-permutations
This commit updates how the `SparseTensorConversion` pass handles `NewOp`.  It breaks up the underlying `openSparseTensor` function into two parts (`SparseTensorReader::create` and `SparseTensorReader::readSparseTensor`) so that the pass can inject code for constructing `lvlSizes` between those two parts.  Migrating the construction of `lvlSizes` out of the runtime and into the pass is a necessary first step toward fully supporting non-permutations.  (The alternative would be for the pass to generate a `FuncOp` for performing the construction and then passing that to the runtime; which doesn't seem to have any benefits over the design of this commit.)  And since the pass now generates the code to call these two functions, this change also removes the `Action::kFromFile` value from the enum used by `_mlir_ciface_newSparseTensor`.
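
A schematic C++ sketch of the two-phase pattern described above, with hypothetical names and signatures (this is not the actual SparseTensorReader API):

```
// Schematic only: phase 1 parses just the file header, the caller computes
// level sizes in between, and phase 2 reads the entries. All names, members,
// and signatures here are invented stand-ins.
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

class TwoPhaseReader {
public:
  // Phase 1: open the file and parse only the header metadata.
  static std::unique_ptr<TwoPhaseReader> create(const std::string &path) {
    auto reader = std::unique_ptr<TwoPhaseReader>(new TwoPhaseReader());
    // ... open `path` and fill reader->dimSizes from the header ...
    return reader;
  }

  const std::vector<uint64_t> &getDimSizes() const { return dimSizes; }

  // Phase 2: read the entries, using level sizes the caller derived from the
  // dimension sizes (e.g., code injected between the two calls by the pass).
  void readEntries(const std::vector<uint64_t> &lvlSizes) {
    (void)lvlSizes;
    // ... stream (indices, value) pairs into the destination storage ...
  }

private:
  TwoPhaseReader() = default;
  std::vector<uint64_t> dimSizes;
};
```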

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138363
2022-12-02 11:10:57 -08:00
bixia1
101a0c84f7 [mlir][sparse] Improve the concatenate operator rewrite for results with all dimensions annotated as dense.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138823
2022-11-29 12:25:52 -08:00
Hanhan Wang
0a1569a400 [mlir][NFC] Remove trailing whitespaces from *.td and *.mlir files.
This is generated by running

```
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.td
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.mlir
```

Reviewed By: rriddle, dcaballe

Differential Revision: https://reviews.llvm.org/D138866
2022-11-28 15:26:30 -08:00
bixia1
974b4bf9fd [mlir][sparse] Add expand_symmetry attribute to the new operator.
The attribute tells the operator to handle symmetric structures for 2D tensors.
By default, the operator assumes the input tensor is not symmetric.
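
A small C++ sketch of what symmetry expansion means for a 2-D input: every stored off-diagonal entry (i, j, v) is mirrored to (j, i, v). The helper below is illustrative, not the operator's implementation:

```
// Illustrative only: expanding a symmetric 2-D input so that each stored
// off-diagonal entry also appears with its row/column swapped.
#include <cstdint>
#include <tuple>
#include <vector>

using Entry = std::tuple<int64_t, int64_t, double>; // (row, col, value)

std::vector<Entry> expandSymmetry(const std::vector<Entry> &stored) {
  std::vector<Entry> out;
  out.reserve(2 * stored.size());
  for (const auto &[i, j, v] : stored) {
    out.emplace_back(i, j, v);
    if (i != j)                 // mirror off-diagonal entries only
      out.emplace_back(j, i, v);
  }
  return out;
}
```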

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138230
2022-11-23 16:32:15 -08:00
Peiming Liu
e5e4deca5e [mlir][sparse] support affine expression on sparse dimensions (codegen implementation)
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138172
2022-11-23 00:04:55 +00:00
bixia1
ee74d371a3 [mlir][sparse] Make three integration tests run with the codegen path.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138233
2022-11-17 13:22:04 -08:00
bixia1
96b3bf4292 [mlir][sparse] Fix a problem in the new operator rewriter.
The getSparseTensorReaderNextX functions should return void.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138226
2022-11-17 13:21:10 -08:00
Peiming Liu
8d615a23ef [mlir][sparse] fix crash on sparse_tensor.foreach operation on tensors with complex<T> elements.
Reviewed By: aartbik, bixia

Differential Revision: https://reviews.llvm.org/D138223
2022-11-17 19:36:15 +00:00
bixia1
c374ef2eb7 [mlir][sparse] Extend the new operator rewriter to handle the isSymmetric flag.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138214
2022-11-17 10:48:24 -08:00
bixia1
b5276b50e9 [mlir][sparse] Run an integration test with codegen.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D138213
2022-11-17 09:59:53 -08:00
bixia1
b5d74f0e83 [mlir][sparse] Make integration tests run on both library and codegen paths.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138145
2022-11-16 12:01:14 -08:00
bixia1
555e7835f4 [mlir][sparse] Fix rewriting for convert op and concatenate op.
Fix a problem in convert op rewriting where it used the original index for
ToIndicesOp.

Extend the concatenate op rewriting to handle dense destinations and
destinations with dynamic shapes.

Make the concatenate op integration test run on the codegen path.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D138057
2022-11-15 14:45:21 -08:00
bixia1
13c41757b1 [mlir][sparse] Only insert non-zero values to the result of the concatenate operation.
Modify the integration test to check number_of_entries and use it to limit the
output of sparse tensor values.

Reviewed By: aartbik, Peiming

Differential Revision: https://reviews.llvm.org/D138046
2022-11-15 11:52:52 -08:00
Peiming Liu
2d9d805c74 [mlir][sparse] fix memory leak in test cases
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137985
2022-11-14 23:03:46 +00:00
bixia1
40edb8b4ab [mlir][sparse] Make three tests run with the codegen path.
Reviewed By: aartbik, Peiming

Differential Revision: https://reviews.llvm.org/D137964
2022-11-14 14:22:25 -08:00
bixia1
4f729d5a70 [mlir][sparse] Add rewriting rules for sparse_tensor.sort_coo.
Refactor the rewriting of sparse_tensor.sort to support the implementation of
sparse_tensor.sort_coo.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137522
2022-11-14 08:48:53 -08:00
bixia1
fa46de16db [mlir][sparse][NFC] Add comments to tests that are run both with and without runtime libraries.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137869
2022-11-14 08:21:33 -08:00
Peiming Liu
725e0849b7 [mlir][sparse] fix incorrect coordinate ordering computed by the foreach operation.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137877
2022-11-12 04:08:50 +00:00
bixia1
da749560e1 [mlir][sparse] Extend more integration tests to run on the codegen path.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D137850
2022-11-11 09:42:23 -08:00
bixia1
57416d872a [mlir][sparse] Fix a bug in rewriting dense2dense convert op.
Permutation wasn't handled correctly. Add a test for the rewriting.

Extend an integration test to run with enable_runtime_library=false to
also test the rewriting.
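
A minimal sketch of the bookkeeping a permuted conversion has to get right: source coordinates must be remapped through the dimension permutation before indexing the destination. The helper and its convention are illustrative assumptions, not the rewriter's code:

```
// Illustrative only: remapping coordinates through a dimension permutation.
#include <cstdint>
#include <vector>

// Convention assumed here: perm[d] gives the destination position of source
// dimension d.
std::vector<int64_t> permuteCoords(const std::vector<int64_t> &srcCoords,
                                   const std::vector<int64_t> &perm) {
  std::vector<int64_t> dstCoords(srcCoords.size());
  for (size_t d = 0; d < srcCoords.size(); ++d)
    dstCoords[perm[d]] = srcCoords[d];
  return dstCoords;
}
// Example: with perm = {1, 0} (a transpose), source coordinate (2, 5)
// maps to destination coordinate (5, 2).
```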

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D137845
2022-11-11 09:11:54 -08:00
bixia1
1f48f4d674 [mlir][sparse] Fix a test to check all output coordinates.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D137805
2022-11-10 15:25:48 -08:00
Aart Bik
e6cbb91483 [mlir][sparse] skip zeros during dense2sparse
This enables the full matmul integration test with runtime_lib=true/false!

Background:
https://github.com/llvm/llvm-project/issues/51657
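
A minimal scalar sketch of the skip-zeros idea (this is the logic, not the IR the pass generates):

```
// Illustrative only: when materializing sparse COO storage from dense
// values, zeros are skipped instead of being inserted as explicit entries.
#include <cstdint>
#include <vector>

struct Coo2D {
  std::vector<int64_t> rows, cols;
  std::vector<double> values;
};

Coo2D dense2sparse(const std::vector<double> &dense, int64_t nrows,
                   int64_t ncols) {
  Coo2D coo;
  for (int64_t i = 0; i < nrows; ++i)
    for (int64_t j = 0; j < ncols; ++j) {
      double v = dense[i * ncols + j];
      if (v == 0.0)      // skip zeros: they stay implicit in the sparse form
        continue;
      coo.rows.push_back(i);
      coo.cols.push_back(j);
      coo.values.push_back(v);
    }
  return coo;
}
```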

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D137750
2022-11-09 20:54:27 -08:00
Aart Bik
a61a9a700a [mlir][sparse] first end-to-end matmul with codegen
(1) also fixes a memory leak in sparse2dense rewriting
(2) still needs a fix in dense2sparse by skipping zeros

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D137736
2022-11-09 13:40:05 -08:00
Peiming Liu
7175f9dde1 [mlir][sparse] extend foreach operation to iterate over sparse constants.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137679
2022-11-09 01:50:34 +00:00
Aart Bik
99fe3d2661 [mlir][sparse] 3-dimensional sparse tensor insertion test
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D137668
2022-11-08 14:49:31 -08:00
Aart Bik
70633a8d55 [mlir][sparse] first general insertion implementation with pure codegen
This revision generalizes lowering the sparse_tensor.insert op into actual code that directly operates on the memrefs of a sparse storage scheme. The current insertion strategy does *not* rely on a cursor anymore, which introduces some testing overhead for each insertion (but still proportional to the rank, as before). Over time, we can optimize the code generation, but this version enables us to finish the effort to migrate from library to actual codegen.

Things to do:
(1) carefully deal with (un)ordered and (not)unique
(2) omit overhead when not needed
(3) optimize and specialize
(4) try to avoid the pointer "cleanup" (at HasInserts), and make sure the storage scheme is consistent at every insertion point (so that it can "escape" without concerns).

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D137457
2022-11-08 13:10:05 -08:00
Peiming Liu
75ac294b35 [mlir][sparse] support parallel for/reduction in sparsification.
This patch fixes and re-lands the reverted D135927 (which caused a Windows build failure) to re-enable parallel for/reduction. It also fixes a warning caused by D137442.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137565
2022-11-07 18:04:46 +00:00
Stella Stamenova
a2c4ca50ca Revert "[mlir][sparse] support Parallel for/reduction."
This reverts commit 838389780e.

This broke the windows mlir buildbot: https://lab.llvm.org/buildbot/#/builders/13/builds/27934
2022-11-07 08:48:52 -08:00
bixia1
9b800bf79d [mlir][sparse] Improve the non-stable sort implementation.
Replace the quicksort partition method with one that is closer to the method
used by C++ std::sort. This improves the runtime for sorting sk_2005.mtx by
more than 10x.
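
For context, a generic sketch of a Hoare-style partition around a median-of-three pivot, the general flavor used by mainstream std::sort quicksorts; this is an illustration, not the code from the patch:

```
// Illustrative only: median-of-three pivot selection plus Hoare-style
// partitioning, with a plain recursive quicksort driver.
#include <cstdint>
#include <utility>
#include <vector>

int64_t partition(std::vector<int64_t> &a, int64_t lo, int64_t hi) {
  int64_t mid = lo + (hi - lo) / 2;
  // Median-of-three: order a[lo], a[mid], a[hi] so a[mid] holds the median.
  if (a[mid] < a[lo]) std::swap(a[mid], a[lo]);
  if (a[hi] < a[lo])  std::swap(a[hi], a[lo]);
  if (a[hi] < a[mid]) std::swap(a[hi], a[mid]);
  int64_t pivot = a[mid];
  int64_t i = lo - 1, j = hi + 1;
  while (true) {                 // Hoare scheme: converge from both ends.
    do { ++i; } while (a[i] < pivot);
    do { --j; } while (a[j] > pivot);
    if (i >= j)
      return j;                  // [lo, j] <= pivot, (j, hi] >= pivot
    std::swap(a[i], a[j]);
  }
}

void quickSort(std::vector<int64_t> &a, int64_t lo, int64_t hi) {
  if (lo >= hi)
    return;
  int64_t p = partition(a, lo, hi);
  quickSort(a, lo, p);
  quickSort(a, p + 1, hi);
}
```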

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137290
2022-11-07 07:38:42 -08:00
Peiming Liu
838389780e [mlir][sparse] support Parallel for/reduction.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D135927
2022-11-04 22:47:27 +00:00
bixia1
d45be88736 [mlir][sparse] Implement the rewrite for sparse_tensor.push_back of a value n times.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136654
2022-10-31 08:19:12 -07:00
wren romano
1c42c2a9dc [mlir][sparse] Cleaning up function names in test
The old "dumpAndRelease" names are no longer valid since the "release" part is handled separately now.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D136899
2022-10-28 18:35:06 -07:00
Peiming Liu
e5cb0ee383 [mlir][sparse] run canonicalization pass after DenseOpBufferize.
As pointed out by Matthias: "DenseBufferizationPass should be run right after TensorCopyInsertionPass. (Running it after bufferizing the sparse IR is also OK.) The reason for this is that whether copies are needed or not depends on the structure of the program (SSA use-def chains). In particular, running the canonicalizer in-between is problematic because it could introduce new RaW conflicts"

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136980
2022-10-29 01:00:05 +00:00
Aart Bik
b430a352ef [mlir][sparse] use straightline and loop to insert into tensor
This exposed a missing type conversion for codegen

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D136286
2022-10-21 16:17:15 -07:00
Aart Bik
0f3e4d1afa [mlir][sparse] lower number of entries op to actual code
Works along both the runtime path and the pure codegen path.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D136389
2022-10-21 10:48:37 -07:00
Peiming Liu
35b3a0ce8d [mlir][sparse] support foreach on dense tensor.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136384
2022-10-21 00:12:37 +00:00
Andrzej Warzynski
3f59734e0c [mlir][aarch64] Disable bf16 tests on AArch64
This patch disables 2 bf16 tests that are currently not supported on
AArch64. I've triaged these failures and opened [1] to track this. I
don't have a simple reproducer for dense_output_bf16.mlir, but it's
rather clear that both tests fail due to missing support for `bfloat`
operations in the AArch64 backend.

I'm not sure what the path forward to enable these tests on AArch64
should be. I think that there are two options:
  * AArch64 backend gains the capability to legalize these nodes containing
    `bfloat` operands, or
  * MLIR (similarly to Clang) is taught not to emit such nodes in the
    first place.

[1] https://github.com/llvm/llvm-project/issues/58465

Differential Revision: https://reviews.llvm.org/D136273
2022-10-20 07:11:05 +00:00
Aart Bik
96cab659a1 [mlir][sparse] end-to-end sparse vector insertion codegen
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D136275
2022-10-19 12:32:20 -07:00
Peiming Liu
26eb2c6b42 [mlir][sparse] remove vector support in sparsification
The sparse compiler used to generate vectorized code for sparse tensor computations, but this should really be delegated to other vectorization passes for better progressive lowering.

 https://discourse.llvm.org/t/rfc-structured-codegen-beyond-rectangular-arrays/64707

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136183
2022-10-19 18:11:29 +00:00