Commit Graph

368 Commits

Author SHA1 Message Date
Aart Bik
f388a3a446 [mlir][sparse] update doc and examples of the [dis]assemble operations (#88213)
The doc and examples of the [dis]assemble operations did not reflect all
the recent changes to the order of the operands. Also clarified some of
the text.
2024-04-10 09:42:12 -07:00
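A rough sketch of the operand order the updated doc describes (level buffers first, then values), assembling a small CSR matrix; the exact printed form of the op is assumed here, not quoted from the commit:

```mlir
#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

func.func @assemble_csr() -> tensor<3x4xf64, #CSR> {
  // Level buffers of a 3x4 CSR matrix with nonzeros at (0,0), (1,2), (1,3).
  %pos    = arith.constant dense<[0, 1, 3, 3]>    : tensor<4xindex>
  %crd    = arith.constant dense<[0, 2, 3]>       : tensor<3xindex>
  %values = arith.constant dense<[1.1, 2.2, 3.3]> : tensor<3xf64>
  // Level buffers first, then the values (assumed assembly form).
  %s = sparse_tensor.assemble (%pos, %crd), %values
       : (tensor<4xindex>, tensor<3xindex>), tensor<3xf64> to tensor<3x4xf64, #CSR>
  return %s : tensor<3x4xf64, #CSR>
}
```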
Aart Bik
dc4cfdbb8f [mlir][sparse] provide an AoS "view" into sparse runtime support lib (#87116)
Note that even though the sparse runtime support lib always uses SoA
storage for COO (and provides correct codegen by means of views into
this storage), in some rare cases we need the true physical AoS storage
as a coordinate buffer. This PR provides that functionality by means of
a (costly) coordinate buffer call.

Since this is currently only used for testing/debugging by means of the
sparse_tensor.print method, this solution is acceptable. If we ever want
a performant version of this, we should truly support AoS storage of COO
in addition to the SoA used right now.
2024-03-29 15:30:36 -07:00
Matthias Springer
a5d7fc1d10 [mlir][sparse] Fix typos in comments (#86074) 2024-03-21 12:30:48 +09:00
Matthias Springer
b1752ddf0a [mlir][sparse] Fix memory leaks (part 4) (#85729)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).

This commit fixes the remaining memory leaks in the MLIR test suite.
`check-mlir` now passes when built with ASAN.
2024-03-19 15:38:16 +09:00
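A minimal sketch of the pattern these leak fixes add at the end of an integration test (the vector shape and values are made up for illustration):

```mlir
#SV = #sparse_tensor.encoding<{ map = (d0) -> (d0 : compressed) }>

func.func @main() {
  %d = arith.constant sparse<[[0], [3]], [1.0, 2.0]> : tensor<8xf64>
  %s = sparse_tensor.convert %d : tensor<8xf64> to tensor<8xf64, #SV>
  sparse_tensor.print %s : tensor<8xf64, #SV>
  // Explicitly release the sparse tensor so the test runs leak-free under ASAN.
  bufferization.dealloc_tensor %s : tensor<8xf64, #SV>
  return
}
```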
Aart Bik
f3a8af07fa [mlir][sparse] best effort finalization of escaping empty sparse tensors (#85482)
This change lifts the restriction that purely allocated empty sparse
tensors cannot escape the method. Instead it makes a best effort to add
a finalizing operation before the escape.

This assumes that
(1) we never build sparse tensors across method boundaries
    (e.g. allocate in one method, insert in another);
(2) if there are other uses of the empty allocation in the
    same method, either that op will fail or it will do the
    finalization for us.

This is best-effort, but fixes some very obvious missing cases.
2024-03-15 16:43:09 -07:00
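For example, a hypothetical function like the sketch below used to be rejected because the purely allocated empty sparse tensor escapes without being finalized; with this change, a finalizing operation is inserted before the return on a best-effort basis:

```mlir
#SV = #sparse_tensor.encoding<{ map = (d0) -> (d0 : compressed) }>

// A purely allocated empty sparse tensor that escapes the method.
func.func @alloc_only() -> tensor<10xf64, #SV> {
  %0 = tensor.empty() : tensor<10xf64, #SV>
  return %0 : tensor<10xf64, #SV>
}
```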
Matthias Springer
e8e8df4c1b [mlir][sparse] Add has_runtime_library test op (#85355)
This commit adds a new test-only op:
`sparse_tensor.has_runtime_library`. The op returns "1" if the sparse
compiler runs in runtime library mode.

This op is useful for writing test cases that require different IR
depending on whether the sparse compiler runs in runtime library or
codegen mode.

This commit fixes a memory leak in `sparse_pack_d.mlir`. This test case
uses `sparse_tensor.assemble` to create a sparse tensor SSA value from
existing buffers. The runtime library path reallocates and copies the
existing buffers; the codegen path does not. Therefore, the test requires
additional deallocations when running in runtime library mode.

Alternatives considered:
- Make the codegen path allocate. "Codegen" is the "default" compilation
mode and it handles `sparse_tensor.assemble` correctly. The issue is
with the runtime library path, which should not allocate. Therefore, it
is better to put a workaround in the runtime library path than to work
around the issue with a new flag in the codegen path.
- Add a `sparse_tensor.runtime_only` attribute to
`bufferization.dealloc_tensor`. Verifying that the attribute can only be
attached to `bufferization.dealloc_tensor` may introduce an unwanted
dependency of `MLIRSparseTensorDialect` on `MLIRBufferizationDialect`.
2024-03-15 13:35:48 +09:00
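A sketch of how a test can branch on the compilation mode; the exact assembly syntax of the new op is assumed here, not quoted from the commit:

```mlir
#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

func.func @cleanup(%s: tensor<3x4xf64, #CSR>) {
  // Assumed printed form of the test-only op; it yields an i1 flag.
  %has_runtime = sparse_tensor.has_runtime_library : () -> i1
  scf.if %has_runtime {
    // Only the runtime library path reallocates the assembled buffers,
    // so only that path needs the extra deallocation.
    bufferization.dealloc_tensor %s : tensor<3x4xf64, #CSR>
  }
  return
}
```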
Matthias Springer
5124eedd35 [mlir][sparse] Fix memory leaks (part 3) (#85184)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).
2024-03-15 13:31:47 +09:00
Yinying Li
88986d65e4 [mlir][sparse] Fix sparse_generate test (#85009)
std::uniform_int_distribution may behave differently on different
systems.
2024-03-12 21:39:37 -04:00
Yinying Li
c1ac9a09d0 [mlir][sparse] Finish migrating integration tests to use sparse_tensor.print (#84997) 2024-03-12 20:57:21 -04:00
Peiming Liu
94e27c265a [mlir][sparse] reuse tensor.insert operation to insert elements into a sparse tensor (#84987)
2024-03-12 16:59:17 -07:00
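A sketch of the resulting usage, where the generic `tensor.insert` now targets a sparse destination and the insertion chain is finalized with `sparse_tensor.load` (shape and values are illustrative):

```mlir
#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

func.func @insert(%t: tensor<4x8xf64, #CSR>) -> tensor<4x8xf64, #CSR> {
  %c0 = arith.constant 0 : index
  %c2 = arith.constant 2 : index
  %f = arith.constant 1.5 : f64
  // The same tensor.insert op used for dense tensors now also inserts
  // into a sparse tensor, threading the tensor SSA value along.
  %0 = tensor.insert %f into %t[%c0, %c2] : tensor<4x8xf64, #CSR>
  %1 = sparse_tensor.load %0 hasInserts : tensor<4x8xf64, #CSR>
  return %1 : tensor<4x8xf64, #CSR>
}
```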
Yinying Li
83c9244ae4 [mlir][sparse] Migrate more tests to use sparse_tensor.print (#84833)
Continuing the efforts following #84249.
2024-03-11 18:44:32 -04:00
Yinying Li
4cb5a96af6 [mlir][sparse] Migrate more tests to sparse_tensor.print (#84249)
Continuing the efforts following #83946.
2024-03-07 14:02:20 -05:00
Yinying Li
6e692e726a [mlir][sparse] Migrate to sparse_tensor.print (#83946)
Continuing the efforts following #83506.
2024-03-07 14:02:01 -05:00
Peiming Liu
fc9f1d49aa [mlir][sparse] use a consistent order between [dis]assembleOp and storage layout (#84079)
2024-03-06 09:57:41 -08:00
Aart Bik
b6ca602658 [mlir][sparse] migrate tests to sparse_tensor.print (#84055)
Continuing the efforts started in #83357
2024-03-05 12:17:45 -08:00
Aart Bik
662d821d44 [mlir][sparse] migrate datastructure tests to sparse_tensor.print (#83956)
Continuing the efforts started in llvm#83357
2024-03-04 21:14:32 -08:00
Aart Bik
275fe3ae2d [mlir][sparse] support complex type for sparse_tensor.print (#83934)
With an integration test example
2024-03-04 17:14:31 -08:00
Aart Bik
05390df497 [mlir][sparse] migration to sparse_tensor.print (#83926)
Continuing the efforts started in #83357
2024-03-04 15:49:09 -08:00
Aart Bik
691fc7cdcc [mlir][sparse] add dim/lvl information to sparse_tensor.print (#83913)
More information is more testing!
Also adjusts already migrated integration tests
2024-03-04 14:32:49 -08:00
Yinying Li
5899599b01 [mlir][sparse] Migration to sparse_tensor.print (#83506)
Continuing the efforts of #83357. Previously reverted in #83377.
2024-02-29 19:18:35 -05:00
Mehdi Amini
71eead512e Revert "[mlir][sparse] Migration to sparse_tensor.print" (#83499)
Reverts llvm/llvm-project#83377

The test does not pass on the bot.
2024-02-29 15:04:58 -08:00
Yinying Li
1ca65dd74a [mlir][sparse] Migration to sparse_tensor.print (#83377)
Continuing the efforts following #83357.
2024-02-29 14:14:31 -05:00
Aart Bik
fdf44b3777 [mlir][sparse] migrate integration tests to sparse_tensor.print (#83357)
This is the first step (of many) in cleaning up our tests to use the new
and exciting sparse_tensor.print operation instead of lengthy extraction +
print ops.
2024-02-28 16:40:46 -08:00
Peiming Liu
6bc7c9df7f [mlir][sparse] infer returned type for sparse_tensor.to_[buffer] ops (#83343)
In the presence of batch levels, the sparse structure buffers might not
always be memrefs with rank == 1.
2024-02-28 16:10:20 -08:00
Aart Bik
8394ec9ff1 [mlir][sparse] add a few more cases to sparse_tensor.print test (#83338) 2024-02-28 14:05:40 -08:00
Aart Bik
d37affb06f [mlir][sparse] add a sparse_tensor.print operation (#83321)
This operation is mainly used for testing and debugging purposes but
provides a very convenient way to quickly inspect the contents of a
sparse tensor (all components over all stored levels).

Example:

[ [ 1, 0, 2, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 3, 4, 0, 5, 0, 0 ] ]

when stored sparse as DCSC, prints as

---- Sparse Tensor ----
nse = 5
pos[0] : ( 0, 4,  )
crd[0] : ( 0, 2, 3, 5,  )
pos[1] : ( 0, 1, 3, 4, 5,  )
crd[1] : ( 0, 0, 3, 3, 3,  )
values : ( 1, 2, 3, 4, 5,  )
----
2024-02-28 12:33:26 -08:00
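A minimal sketch of the MLIR that produces such a dump, assuming the 4x8 matrix above is given a DCSC encoding in the current sparse_tensor syntax:

```mlir
#DCSC = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d1 : compressed, d0 : compressed)
}>

func.func @dump(%m: tensor<4x8xf64, #DCSC>) {
  // Prints positions, coordinates, and values for all stored levels.
  sparse_tensor.print %m : tensor<4x8xf64, #DCSC>
  return
}
```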
Peiming Liu
5248a98724 [mlir][sparse] support SoA COO in codegen path. (#82439)
*NOTE*: the `SoA` property only makes a difference on the codegen path, and
is ignored on the libgen path at the moment (where only SoA COO is supported).
2024-02-20 17:06:21 -08:00
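For reference, the `SoA` property in question is attached to the trailing singleton level of a COO encoding; a sketch in the current encoding syntax:

```mlir
// 2-D COO where the coordinates of the trailing singleton level are kept
// in a separate buffer (structure of arrays) instead of being interleaved.
#SoACOO = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton(soa))
}>
```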
Matthias Springer
ccc20b4e56 [mlir][sparse] Fix memory leaks (part 2) (#81979)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).
2024-02-17 11:04:17 +01:00
Matthias Springer
b6c453c13f [mlir][sparse] Fix memory leaks (part 1) (#81843)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).
2024-02-16 09:57:04 +01:00
Aart Bik
3122969e8e [mlir][sparse] add doubly compressed test case to assembly op (#81687)
Removes audit TODO
2024-02-13 15:55:42 -08:00
Aart Bik
2400f704af [mlir][sparse] add assemble test for Batched-CSR and CSR-Dense (#81660)
These are formats supported by PyTorch sparse, so it is good to make sure
that our assemble instructions work on these.
2024-02-13 13:20:01 -08:00
Yinying Li
2a6b521b36 [mlir][sparse] Add more tests and verification for n:m (#81186)
1. Add python test for n out of m
2. Add more methods for python binding
3. Add verification for n:m and invalid encoding tests
4. Add e2e test for n:m

Previous PRs for n:m: #80501, #79935
2024-02-09 14:34:36 -05:00
Yinying Li
e5924d6499 [mlir][sparse] Implement parsing n out of m (#79935)
1. Add parsing methods for block[n, m].
2. Encode n and m with the newly extended 64-bit LevelType enum.
3. Update 2:4 method names/comments to n:m.
2024-02-08 14:38:42 -05:00
Peiming Liu
1ac6846263 [mlir][sparse] support sparse dilated convolution. (#80470) 2024-02-02 11:54:50 -08:00
Matthias Springer
ce7cc723b9 [mlir][memref] memref.subview: Verify result strides
The `memref.subview` verifier currently checks result shape, element type, memory space and offset of the result type. However, the strides of the result type are currently not verified. This commit adds verification of result strides for non-rank reducing ops and fixes invalid IR in test cases.

Verification of result strides for ops with rank reductions is more complex (and there could be multiple possible result types). That is left for a separate commit.

Also refactor the implementation a bit:
* If `computeMemRefRankReductionMask` could not compute the dropped dimensions, there must be something wrong with the op. Return `FailureOr` instead of `std::optional`.
* `isRankReducedMemRefType` did much more than just checking whether the op has rank reductions or not. Inline the implementation into the verifier and add better comments.
* `produceSubViewErrorMsg` does not have to be templatized.
* Fix comment and add additional assert to `ExpandStridedMetadata.cpp`, to make sure that the memref.subview verifier is in sync with the memref.subview -> memref.reinterpret_cast lowering.

Note: This change is identical to #79865, but with a fixed comment and an additional assert in `ExpandStridedMetadata.cpp`. (I reverted #79865 in #80116, but the implementation was actually correct, just the comment in `ExpandStridedMetadata.cpp` was confusing.)
2024-01-31 09:28:53 +00:00
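To illustrate what the new verification checks, the result type of a non-rank-reducing subview must carry the strides and offset implied by the source layout and the subview parameters; a small self-contained example:

```mlir
func.func @subview(%src: memref<8x8xf32>) -> memref<2x4xf32, strided<[8, 2], offset: 16>> {
  // Source strides are [8, 1]; with offsets [2, 0] and strides [1, 2] the
  // result must have offset 2*8 + 0*1 = 16 and strides [1*8, 2*1] = [8, 2].
  %0 = memref.subview %src[2, 0] [2, 4] [1, 2]
      : memref<8x8xf32> to memref<2x4xf32, strided<[8, 2], offset: 16>>
  return %0 : memref<2x4xf32, strided<[8, 2], offset: 16>>
}
```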
Matthias Springer
96c907dbce Revert "[mlir][memref] memref.subview: Verify result strides" (#80116)
Reverts llvm/llvm-project#79865

I think there is a bug in the stride computation in
`SubViewOp::inferResultType`. (Was already there before this change.)

Reverting this commit for now and updating the original pull request
with a fix and more test cases.
2024-01-31 09:35:13 +01:00
Matthias Springer
db49319264 [mlir][memref] memref.subview: Verify result strides (#79865)
The `memref.subview` verifier currently checks result shape, element
type, memory space and offset of the result type. However, the strides
of the result type are currently not verified. This commit adds
verification of result strides for non-rank reducing ops and fixes
invalid IR in test cases.

Verification of result strides for ops with rank reductions is more
complex (and there could be multiple possible result types). That is
left for a separate commit.

Also refactor the implementation a bit:
* If `computeMemRefRankReductionMask` could not compute the dropped
dimensions, there must be something wrong with the op. Return
`FailureOr` instead of `std::optional`.
* `isRankReducedMemRefType` did much more than just checking whether the
op has rank reductions or not. Inline the implementation into the
verifier and add better comments.
* `produceSubViewErrorMsg` does not have to be templatized.
2024-01-31 09:14:48 +01:00
Peiming Liu
de5e4d7c69 [mlir][sparse] fix error when convolution stride is applied on a dense level (#79521)
2024-01-25 17:11:24 -08:00
Peiming Liu
982c815aad [mlir][sparse] fix mismatch between enter/exitWhileLoop (#79493) 2024-01-25 12:21:47 -08:00
Aart Bik
575568de41 [mlir][sparse] adjust compression scheme for example (#79212) 2024-01-23 14:51:46 -08:00
Aart Bik
9cd4128998 [mlir][sparse] add a 3-d block and fiber test (#78529) 2024-01-18 07:52:42 -08:00
Yinying Li
412d784188 [mlir][sparse][CRunnerUtils] Add shuffle in CRunnerUtils (#77124)
Shuffle can generate an array of unique and random numbers from 0 to
size-1. It can be used to generate tensors with a specified sparsity
level.
2024-01-09 19:46:35 -05:00
Aart Bik
273aefec66 [mlir][sparse] enable rt path for transpose COO (#76747)
COO has been supported for a while now on both the
lib and codegen paths, so this enables the test for
all paths and removes the obsolete FIXME.
2024-01-02 11:57:00 -08:00
Matthias Springer
95d6aa21fb [mlir][SparseTensor][NFC] Use tensor.empty for dense tensors (#74804)
Use `tensor.empty` + initialization for dense tensors instead of
`bufferization.alloc_tensor`.
2023-12-12 08:56:47 +09:00
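A sketch of the replacement pattern for a dense output (illustrative shape):

```mlir
func.func @dense_out() -> tensor<4x8xf64> {
  %f0 = arith.constant 0.0 : f64
  // tensor.empty + explicit initialization replaces bufferization.alloc_tensor
  // for dense tensors.
  %0 = tensor.empty() : tensor<4x8xf64>
  %1 = linalg.fill ins(%f0 : f64) outs(%0 : tensor<4x8xf64>) -> tensor<4x8xf64>
  return %1 : tensor<4x8xf64>
}
```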
Aart Bik
21213f39e2 [mlir][sparse] fix uninitialized dense tensor out in conv2d test (#74884)
Note: tensor.empty may feed into a SPARSE output (meaning it truly has no
values yet), but a DENSE output should always have an initial value. We
ran a verifier over all our tests and this is the only remaining
omission.
2023-12-08 12:44:57 -08:00
Aart Bik
ec9e49796d [mlir][sparse] add sparse convolution with 5x5 kernel (#74793)
Also unifies some of the test setup parts in other conv tests.
2023-12-07 18:11:04 -08:00
Aart Bik
7003e255d3 [mlir][sparse] code formatting (NFC) (#74779) 2023-12-07 15:46:24 -08:00
Peiming Liu
78e2b74f96 [mlir][sparse] fix bugs when generate sparse conv_3d kernels. (#74561) 2023-12-06 15:59:10 -08:00
Peiming Liu
8206b75a1e [mlir][sparse] fix crash when generate rotated convolution kernels. (#74146) 2023-12-01 14:13:57 -08:00
Aart Bik
1944c4f76b [mlir][sparse] rename DimLevelType to LevelType (#73561)
The "Dim" prefix is a legacy left-over that no longer makes sense, since
we have a very strict "Dimension" vs. "Level" definition for sparse
tensor types and their storage.
2023-11-27 14:27:52 -08:00