Commit Graph

379 Commits

Author SHA1 Message Date
Peiming Liu
f607102a0d [mlir][sparse] partially support lowering sparse coiteration loops to scf.while/for. (#105565) 2024-08-23 10:47:44 -07:00
Zhaoshi Zheng
fe55c34d19 [MLIR][test] Run SVE and SME Integration tests using qemu-aarch64 (#101568)
To run integration tests using qemu-aarch64 on x64 host, below flags are
added to the cmake command when building mlir/llvm:

      -DMLIR_INCLUDE_INTEGRATION_TESTS=ON \
      -DMLIR_RUN_ARM_SVE_TESTS=ON \
      -DMLIR_RUN_ARM_SME_TESTS=ON \
      -DARM_EMULATOR_EXECUTABLE="<...>/qemu-aarch64" \
      -DARM_EMULATOR_OPTIONS="-L /usr/aarch64-linux-gnu" \

      -DARM_EMULATOR_MLIR_CPU_RUNNER_EXECUTABLE="<llvm_arm64_build_top>/bin/mlir-cpu-runner-arm64" \
      -DARM_EMULATOR_LLI_EXECUTABLE="<llvm_arm64_build_top>/bin/lli" \
      -DARM_EMULATOR_UTILS_LIB_DIR="<llvm_arm64_build_top>/lib"

The last three above are prebuilt on an aarch64 host or cross-built for aarch64.

This patch introduces substitutions of "%native_mlir_runner_utils" etc. and uses
them in SVE/SME integration tests. When configured to run using qemu-aarch64,
the MLIR runtime utility libraries are loaded from ARM_EMULATOR_UTILS_LIB_DIR, if set.

Some tests marked with 'UNSUPPORTED: target=aarch64{{.*}}' would still run
when ARM_EMULATOR_EXECUTABLE is configured and the default target is
not aarch64, so a lit config feature 'mlir_arm_emulator' is added in
mlir/test/lit.site.cfg.py.in and to the UNSUPPORTED list of such tests.
2024-08-15 21:37:51 -07:00
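As a hedged illustration of how an integration test could consume the new substitution (the pass pipeline, runner flags, and FileCheck use are assumptions for illustration, not taken from the patch):

```
// RUN: mlir-opt %s --test-lower-to-llvm | \
// RUN:   mlir-cpu-runner --entry-point-result=void \
// RUN:     --shared-libs=%native_mlir_runner_utils | \
// RUN:   FileCheck %s
```

When configured for qemu-aarch64, the substitution resolves to the libraries under ARM_EMULATOR_UTILS_LIB_DIR, per the description above.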
Peiming Liu
a02010b3e9 [mlir][sparse] support sparsifying sparse kernels to sparse-iterator-based loop (#95858) 2024-06-17 16:50:12 -07:00
Jay Foad
d4a0154902 [llvm-project] Fix typo "seperate" (#95373) 2024-06-13 20:20:27 +01:00
Yinying Li
eb177803bf [mlir][sparse] Change sparse_tensor.print format (#91528)
1. Remove the trailing comma for the last element of memref and add
closing parenthesis.
2. Change integration tests to use the new format.
2024-05-09 12:09:40 -04:00
Aart Bik
c4e5a8a4d3 [mlir][sparse] support 'batch' dimensions in sparse_tensor.print (#91411) 2024-05-07 19:01:36 -07:00
Aart Bik
5c5116556f [mlir][sparse] force a properly sized view on pos/crd/val under codegen (#91288)
Codegen "vectors" for pos/crd/val use the capacity as memref size, not
the actual used size. Although the sparsifier itself always uses just
the defined pos/crd/val parts, printing these and passing them back to a
runtime environment could benefit from wrapping the basic pos/crd/val
getters into a proper memref view that sets the right size.
2024-05-07 09:20:56 -07:00
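As a rough sketch of the wrapping idea (not the IR the sparsifier actually emits; %vals and %used are hypothetical names for the capacity-sized buffer and the number of stored entries):

```
func.func @sized_values_view(%vals: memref<?xf64>, %used: index)
    -> memref<?xf64, strided<[1]>> {
  // Expose only the used prefix of the capacity-sized values buffer.
  %view = memref.subview %vals[0] [%used] [1]
      : memref<?xf64> to memref<?xf64, strided<[1]>>
  return %view : memref<?xf64, strided<[1]>>
}
```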
Peiming Liu
78885395c8 [mlir][sparse] support tensor.pad on CSR tensors (#90687) 2024-05-01 15:37:38 -07:00
Peiming Liu
7cbaaed636 [mlir][sparse] fix sparse tests that use reshape operations. (#90637)
Due to the generalization introduced in
https://github.com/llvm/llvm-project/pull/90040
2024-04-30 11:57:16 -07:00
Peiming Liu
dbe376651a [mlir][sparse] handle padding on sparse levels. (#90527) 2024-04-30 09:53:44 -07:00
Matthias Springer
9f3334e993 [mlir][SparseTensor] Add missing dependent dialect to pass (#88870)
This commit fixes the following error when stopping the sparse compiler
pipeline after bufferization (e.g., with `test-analysis-only`):

```
LLVM ERROR: Building op `vector.print` but it isn't known in this MLIRContext: the dialect may not be loaded or this operation hasn't been added by the dialect. See also https://mlir.llvm.org/getting_started/Faq/#registered-loaded-dependent-whats-up-with-dialects-management
```
2024-04-17 09:20:55 +02:00
Aart Bik
f388a3a446 [mlir][sparse] update doc and examples of the [dis]assemble operations (#88213)
The doc and examples of the [dis]assemble operations did not reflect all
the recent changes on order of the operands. Also clarified some of the
text.
2024-04-10 09:42:12 -07:00
Aart Bik
dc4cfdbb8f [mlir][sparse] provide an AoS "view" into sparse runtime support lib (#87116)
Note that even though the sparse runtime support lib always uses SoA
storage for COO (and provides correct codegen by means of views into
this storage), in some rare cases we need the true physical AoS storage
as a coordinate buffer. This PR provides that functionality by means of
a (costly) coordinate buffer call.

Since this is currently only used for testing/debugging by means of the
sparse_tensor.print method, this solution is acceptable. If we ever want
a performant version of this, we should truly support AoS storage of COO
in addition to the SoA used right now.
2024-03-29 15:30:36 -07:00
Matthias Springer
a5d7fc1d10 [mlir][sparse] Fix typos in comments (#86074) 2024-03-21 12:30:48 +09:00
Matthias Springer
b1752ddf0a [mlir][sparse] Fix memory leaks (part 4) (#85729)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).

This commit fixes the remaining memory leaks in the MLIR test suite.
`check-mlir` now passes when built with ASAN.
2024-03-19 15:38:16 +09:00
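A minimal sketch of the pattern added by these fixes (encoding and shape are illustrative):

```
#CSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : compressed)
}>
func.func @cleanup(%t: tensor<8x8xf64, #CSR>) {
  // Explicitly release the sparse tensor at the end of the test.
  bufferization.dealloc_tensor %t : tensor<8x8xf64, #CSR>
  return
}
```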
Aart Bik
f3a8af07fa [mlir][sparse] best effort finalization of escaping empty sparse tensors (#85482)
This change lifts the restriction that purely allocated empty sparse
tensors cannot escape the method. Instead it makes a best effort to add
a finalizing operation before the escape.

This assumes that
(1) we never build sparse tensors across method boundaries
    (e.g. allocate in one, insert in other method)
(2) if we have other uses of the empty allocation in the
    same method, we assume that that op will either fail
    or perform the finalization for us.

This is best-effort, but fixes some very obvious missing cases.
2024-03-15 16:43:09 -07:00
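A minimal sketch of the previously rejected pattern that is now handled on a best-effort basis (encoding illustrative):

```
#CSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : compressed)
}>
func.func @escaping_empty() -> tensor<8x8xf64, #CSR> {
  // Purely allocated and never inserted into; a finalizing operation
  // is now added before the tensor escapes the function.
  %0 = tensor.empty() : tensor<8x8xf64, #CSR>
  return %0 : tensor<8x8xf64, #CSR>
}
```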
Matthias Springer
e8e8df4c1b [mlir][sparse] Add has_runtime_library test op (#85355)
This commit adds a new test-only op:
`sparse_tensor.has_runtime_library`. The op returns "1" if the sparse
compiler runs in runtime library mode.

This op is useful for writing test cases that require different IR
depending on whether the sparse compiler runs in runtime library or
codegen mode.

This commit fixes a memory leak in `sparse_pack_d.mlir`. This test case
uses `sparse_tensor.assemble` to create a sparse tensor SSA value from
existing buffers. The runtime library path reallocates+copies the existing
buffers; the codegen path does not. Therefore, the test requires
additional deallocations when running in runtime library mode.

Alternatives considered:
- Make the codegen path allocate. "Codegen" is the "default" compilation
mode and it is handling `sparse_tensor.assemble` correctly. The issue is
with the runtime library path, which should not allocate. Therefore, it
is better to put a workaround in the runtime library path than to work
around the issue with a new flag in the codegen path.
- Add a `sparse_tensor.runtime_only` attribute to
`bufferization.dealloc_tensor`. Verifying that the attribute can only be
attached to `bufferization.dealloc_tensor` may introduce an unwanted
dependency of `MLIRSparseTensorDialect` on `MLIRBufferizationDialect`.
2024-03-15 13:35:48 +09:00
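A hedged sketch of how a test could branch on the mode (the exact printed form of sparse_tensor.has_runtime_library is assumed; encoding and shape are illustrative):

```
#CSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : compressed)
}>
func.func @conditional_cleanup(%t: tensor<8x8xf64, #CSR>) {
  // Returns 1 when the sparse compiler runs in runtime library mode.
  %rt = sparse_tensor.has_runtime_library : i1
  scf.if %rt {
    // Only the runtime library path reallocates, so only it needs this.
    bufferization.dealloc_tensor %t : tensor<8x8xf64, #CSR>
  }
  return
}
```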
Matthias Springer
5124eedd35 [mlir][sparse] Fix memory leaks (part 3) (#85184)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).
2024-03-15 13:31:47 +09:00
Yinying Li
88986d65e4 [mlir][sparse] Fix sparse_generate test (#85009)
std::uniform_int_distribution may behave differently on different systems.
2024-03-12 21:39:37 -04:00
Yinying Li
c1ac9a09d0 [mlir][sparse] Finish migrating integration tests to use sparse_tensor.print (#84997) 2024-03-12 20:57:21 -04:00
Peiming Liu
94e27c265a [mlir][sparse] reuse tensor.insert operation to insert elements into a sparse tensor. (#84987)
2024-03-12 16:59:17 -07:00
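A minimal sketch of the unified insertion form (encoding, shape, and the trailing sparse_tensor.load finalization are illustrative assumptions):

```
#CSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : compressed)
}>
func.func @insert_one(%t: tensor<4x8xf64, #CSR>) -> tensor<4x8xf64, #CSR> {
  %c0 = arith.constant 0 : index
  %c2 = arith.constant 2 : index
  %f1 = arith.constant 1.0 : f64
  // The standard tensor.insert op now also targets sparse tensors.
  %0 = tensor.insert %f1 into %t[%c0, %c2] : tensor<4x8xf64, #CSR>
  // Flush pending insertions into the final sparse storage.
  %1 = sparse_tensor.load %0 hasInserts : tensor<4x8xf64, #CSR>
  return %1 : tensor<4x8xf64, #CSR>
}
```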
Yinying Li
83c9244ae4 [mlir][sparse] Migrate more tests to use sparse_tensor.print (#84833)
Continuing efforts following #84249.
2024-03-11 18:44:32 -04:00
Yinying Li
4cb5a96af6 [mlir][sparse] Migrate more tests to sparse_tensor.print (#84249)
Continuing efforts following #83946.
2024-03-07 14:02:20 -05:00
Yinying Li
6e692e726a [mlir][sparse] Migrate to sparse_tensor.print (#83946)
Continuing efforts following #83506.
2024-03-07 14:02:01 -05:00
Peiming Liu
fc9f1d49aa [mlir][sparse] use a consistent order between [dis]assembleOp and storage layout. (#84079)
2024-03-06 09:57:41 -08:00
Aart Bik
b6ca602658 [mlir][sparse] migrate tests to sparse_tensor.print (#84055)
Continuing the efforts started in #83357
2024-03-05 12:17:45 -08:00
Aart Bik
662d821d44 [mlir][sparse] migrate datastructure tests to sparse_tensor.print (#83956)
Continuing the efforts started in llvm#83357
2024-03-04 21:14:32 -08:00
Aart Bik
275fe3ae2d [mlir][sparse] support complex type for sparse_tensor.print (#83934)
With an integration test example
2024-03-04 17:14:31 -08:00
Aart Bik
05390df497 [mlir][sparse] migration to sparse_tensor.print (#83926)
Continuing the efforts started in #83357
2024-03-04 15:49:09 -08:00
Aart Bik
691fc7cdcc [mlir][sparse] add dim/lvl information to sparse_tensor.print (#83913)
More information is more testing!
Also adjusts already migrated integration tests
2024-03-04 14:32:49 -08:00
Yinying Li
5899599b01 [mlir][sparse] Migration to sparse_tensor.print (#83506)
Continuing efforts from #83357. Re-lands the previously reverted #83377.
2024-02-29 19:18:35 -05:00
Mehdi Amini
71eead512e Revert "[mlir][sparse] Migration to sparse_tensor.print" (#83499)
Reverts llvm/llvm-project#83377

The test does not pass on the bot.
2024-02-29 15:04:58 -08:00
Yinying Li
1ca65dd74a [mlir][sparse] Migration to sparse_tensor.print (#83377)
Continuing efforts following #83357.
2024-02-29 14:14:31 -05:00
Aart Bik
fdf44b3777 [mlir][sparse] migrate integration tests to sparse_tensor.print (#83357)
This is first step (of many) cleaning up our tests to use the new and
exciting sparse_tensor.print operation instead of lengthy extraction +
print ops.
2024-02-28 16:40:46 -08:00
Peiming Liu
6bc7c9df7f [mlir][sparse] infer returned type for sparse_tensor.to_[buffer] ops (#83343)
The sparse structure buffers might not always be rank-1 memrefs in the
presence of batch levels.
2024-02-28 16:10:20 -08:00
Aart Bik
8394ec9ff1 [mlir][sparse] add a few more cases to sparse_tensor.print test (#83338) 2024-02-28 14:05:40 -08:00
Aart Bik
d37affb06f [mlir][sparse] add a sparse_tensor.print operation (#83321)
This operation is mainly used for testing and debugging purposes but
provides a very convenient way to quickly inspect the contents of a
sparse tensor (all components over all stored levels).

Example:

[ [ 1, 0, 2, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 3, 4, 0, 5, 0, 0 ] ]

when stored sparse as DCSC prints as

---- Sparse Tensor ----
nse = 5
pos[0] : ( 0, 4,  )
crd[0] : ( 0, 2, 3, 5,  )
pos[1] : ( 0, 1, 3, 4, 5,  )
crd[1] : ( 0, 0, 3, 3, 3,  )
values : ( 1, 2, 3, 4, 5,  )
----
2024-02-28 12:33:26 -08:00
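A minimal sketch of how the op is invoked to produce output like the above (the DCSC encoding spelled here is an assumption matching a doubly compressed, column-major layout):

```
#DCSC = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d1 : compressed, d0 : compressed)
}>
func.func @dump(%a: tensor<4x8xf64, #DCSC>) {
  // Prints nse, pos/crd per stored level, and the values.
  sparse_tensor.print %a : tensor<4x8xf64, #DCSC>
  return
}
```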
Peiming Liu
5248a98724 [mlir][sparse] support SoA COO in codegen path. (#82439)
*NOTE*: the `SoA` property only makes a difference on the codegen path and
is ignored on the libgen path at the moment (only SoA COO is supported).
2024-02-20 17:06:21 -08:00
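A hedged sketch of an SoA COO encoding (the singleton(soa) level spelling is assumed from the current encoding grammar):

```
#SortedCOO_SoA = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton(soa))
}>
```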
Matthias Springer
ccc20b4e56 [mlir][sparse] Fix memory leaks (part 2) (#81979)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).
2024-02-17 11:04:17 +01:00
Matthias Springer
b6c453c13f [mlir][sparse] Fix memory leaks (part 1) (#81843)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).
2024-02-16 09:57:04 +01:00
Aart Bik
3122969e8e [mlir][sparse] add doubly compressed test case to assembly op (#81687)
Removes audit TODO
2024-02-13 15:55:42 -08:00
Aart Bik
2400f704af [mlir][sparse] add assemble test for Batched-CSR and CSR-Dense (#81660)
These are formats supported by PyTorch sparse, so it is good to make sure
that our assemble instructions work on them.
2024-02-13 13:20:01 -08:00
Yinying Li
2a6b521b36 [mlir][sparse] Add more tests and verification for n:m (#81186)
1. Add python test for n out of m
2. Add more methods for python binding
3. Add verification for n:m and invalid encoding tests
4. Add e2e test for n:m

Previous PRs for n:m #80501 #79935
2024-02-09 14:34:36 -05:00
Yinying Li
e5924d6499 [mlir][sparse] Implement parsing n out of m (#79935)
1. Add parsing methods for block[n, m].
2. Encode n and m with the newly extended 64-bit LevelType enum.
3. Update 2:4 method names/comments to n:m.
2024-02-08 14:38:42 -05:00
Peiming Liu
1ac6846263 [mlir][sparse] support sparse dilated convolution. (#80470) 2024-02-02 11:54:50 -08:00
Matthias Springer
ce7cc723b9 [mlir][memref] memref.subview: Verify result strides
The `memref.subview` verifier currently checks result shape, element type, memory space and offset of the result type. However, the strides of the result type are currently not verified. This commit adds verification of result strides for non-rank reducing ops and fixes invalid IR in test cases.

Verification of result strides for ops with rank reductions is more complex (and there could be multiple possible result types). That is left for a separate commit.

Also refactor the implementation a bit:
* If `computeMemRefRankReductionMask` could not compute the dropped dimensions, there must be something wrong with the op. Return `FailureOr` instead of `std::optional`.
* `isRankReducedMemRefType` did much more than just checking whether the op has rank reductions or not. Inline the implementation into the verifier and add better comments.
* `produceSubViewErrorMsg` does not have to be templatized.
* Fix comment and add additional assert to `ExpandStridedMetadata.cpp`, to make sure that the memref.subview verifier is in sync with the memref.subview -> memref.reinterpret_cast lowering.

Note: This change is identical to #79865, but with a fixed comment and an additional assert in `ExpandStridedMetadata.cpp`. (I reverted #79865 in #80116, but the implementation was actually correct, just the comment in `ExpandStridedMetadata.cpp` was confusing.)
2024-01-31 09:28:53 +00:00
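A minimal sketch of what the stricter verifier now checks (shapes illustrative): the result type must also carry the strides and offset composed from the source layout and the subview parameters.

```
func.func @strided_view(%m: memref<8x16xf32>)
    -> memref<4x8xf32, strided<[16, 2], offset: 2>> {
  // Source strides [16, 1], subview strides [1, 2], offsets [0, 2]
  // compose to result strides [16, 2] and offset 2.
  %0 = memref.subview %m[0, 2] [4, 8] [1, 2]
      : memref<8x16xf32> to memref<4x8xf32, strided<[16, 2], offset: 2>>
  return %0 : memref<4x8xf32, strided<[16, 2], offset: 2>>
}
```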
Matthias Springer
96c907dbce Revert "[mlir][memref] memref.subview: Verify result strides" (#80116)
Reverts llvm/llvm-project#79865

I think there is a bug in the stride computation in
`SubViewOp::inferResultType`. (Was already there before this change.)

Reverting this commit for now and updating the original pull request
with a fix and more test cases.
2024-01-31 09:35:13 +01:00
Matthias Springer
db49319264 [mlir][memref] memref.subview: Verify result strides (#79865)
The `memref.subview` verifier currently checks result shape, element
type, memory space and offset of the result type. However, the strides
of the result type are currently not verified. This commit adds
verification of result strides for non-rank reducing ops and fixes
invalid IR in test cases.

Verification of result strides for ops with rank reductions is more
complex (and there could be multiple possible result types). That is
left for a separate commit.

Also refactor the implementation a bit:
* If `computeMemRefRankReductionMask` could not compute the dropped
dimensions, there must be something wrong with the op. Return
`FailureOr` instead of `std::optional`.
* `isRankReducedMemRefType` did much more than just checking whether the
op has rank reductions or not. Inline the implementation into the
verifier and add better comments.
* `produceSubViewErrorMsg` does not have to be templatized.
2024-01-31 09:14:48 +01:00
Peiming Liu
de5e4d7c69 [mlir][sparse] fix error when convolution stride is applied on a dense level. (#79521)
2024-01-25 17:11:24 -08:00
Peiming Liu
982c815aad [mlir][sparse] fix mismatch between enter/exitWhileLoop (#79493) 2024-01-25 12:21:47 -08:00