Commit Graph

170 Commits

Author SHA1 Message Date
Longsheng Mou
129ade21bd [mlir][sparse] Replace getSparseTensorType with tryGetSparseTensorType (#109435)
This PR fixes a bug in `SparseTensorDimOpRewriter` when `tensor.dim` has
an unranked tensor type. To prevent crashes, we now use
`tryGetSparseTensorType` instead of `getSparseTensorType`. Fixes
#107807.
2024-09-30 09:16:55 +08:00
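
A minimal sketch of the guard this fix implies; the pattern wiring is
illustrative, and `tryGetSparseTensorType` returning `std::optional` is
taken from the description above:

#include "mlir/Dialect/SparseTensor/IR/SparseTensorType.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;
using namespace mlir::sparse_tensor;

// Hypothetical pattern body: bail out gracefully when the tensor.dim
// source has no ranked sparse tensor type, instead of asserting inside
// getSparseTensorType.
static LogicalResult rewriteDimOp(tensor::DimOp op,
                                  PatternRewriter &rewriter) {
  std::optional<SparseTensorType> stt =
      tryGetSparseTensorType(op.getSource());
  if (!stt) // e.g., an unranked tensor type
    return failure();
  // ... proceed with the rewrite using *stt ...
  return success();
}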
Peiming Liu
c44202574f [mlir][sparse] support sparsification to coiterate operations. (#102546) 2024-08-20 11:13:38 -07:00
Peiming Liu
fb8f492a1c [mlir][sparse] clone an empty sparse tensor when fusing convert into producer. (#92158)
2024-05-14 13:26:49 -07:00
Yinying Li
eb177803bf [mlir][sparse] Change sparse_tensor.print format (#91528)
1. Remove the trailing comma after the last element of each printed
memref and add a closing parenthesis.
2. Change integration tests to use the new format.
2024-05-09 12:09:40 -04:00
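
For illustration, a `values` line before and after this change (the
exact spacing of the new format is assumed):

values : ( 1, 2, 3, 4, 5,  )    old format (see the DCSC example below)
values : ( 1, 2, 3, 4, 5 )      new format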
Aart Bik
c4e5a8a4d3 [mlir][sparse] support 'batch' dimensions in sparse_tensor.print (#91411) 2024-05-07 19:01:36 -07:00
Gaurav Shukla
97069a8619 [MLIR] Generalize expand_shape to take shape as explicit input (#90040)
This patch generalizes tensor.expand_shape and memref.expand_shape to
consume the output shape as a list of SSA values. This enables us to
implement generic reshape operations with dynamic shapes using
collapse_shape/expand_shape pairs.

The output_shape input to expand_shape follows the static/dynamic
representation that's also used in `tensor.extract_slice`.

Differential Revision: https://reviews.llvm.org/D140821

---------

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
Co-authored-by: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
2024-04-30 09:28:35 -07:00
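
A sketch of building the generalized op from C++, assuming the
post-patch builder overload that takes the output shape as mixed
static/dynamic `OpFoldResult`s (function and variable names are
illustrative):

#include "mlir/Dialect/Tensor/IR/Tensor.h"

using namespace mlir;

// Expand %src : tensor<?xf32> into tensor<?x4xf32>, passing the dynamic
// output dimension %d0 explicitly. Roughly the IR:
//   %1 = tensor.expand_shape %src [[0, 1]] output_shape [%d0, 4]
//        : tensor<?xf32> into tensor<?x4xf32>
static Value buildExpand(OpBuilder &b, Location loc, Value src, Value d0) {
  auto resultTy =
      RankedTensorType::get({ShapedType::kDynamic, 4}, b.getF32Type());
  SmallVector<ReassociationIndices> reassoc = {{0, 1}};
  SmallVector<OpFoldResult> outputShape = {d0, b.getIndexAttr(4)};
  return b.create<tensor::ExpandShapeOp>(loc, resultTy, src, reassoc,
                                         outputShape);
}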
Peiming Liu
3aeb28b93f [mlir][sparse] fold sparse convert into producer linalg op. (#89999) 2024-04-26 10:48:15 -07:00
Peiming Liu
ea3eeb483f [mlir][sparse] fuse concat and extract_slice op if possible. (#89825) 2024-04-24 13:51:41 -07:00
Mehdi Amini
8c0341df02 Revert "[MLIR] Generalize expand_shape to take shape as explicit input" (#89540)
Reverts llvm/llvm-project#69267

This broke some bots.
2024-04-21 14:33:48 +02:00
Gaurav Shukla
e095d978ba [MLIR] Generalize expand_shape to take shape as explicit input (#69267)
This patch generalizes tensor.expand_shape and memref.expand_shape to
consume the output shape as a list of SSA values. This enables us to
implement generic reshape operations with dynamic shapes using
collapse_shape/expand_shape pairs.

The output_shape input to expand_shape follows the static/dynamic
representation that's also used in `tensor.extract_slice`.

Differential Revision: https://reviews.llvm.org/D140821

Co-authored-by: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
2024-04-21 07:37:02 -04:00
Christian Sigg
a5757c5b65 Switch member calls to isa/dyn_cast/cast/... to free function calls. (#89356)
This change cleans up call sites. The next step is to mark the member
functions deprecated.

See https://mlir.llvm.org/deprecation and
https://discourse.llvm.org/t/preferred-casting-style-going-forward.
2024-04-19 15:58:27 +02:00
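
The shape of the change at a call site (the types are illustrative):

#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

static void castExample(Type type) {
  // Old, member-call style (slated for deprecation):
  if (auto rt = type.dyn_cast<RankedTensorType>())
    (void)rt;
  // New, free-function style:
  if (auto rt = dyn_cast<RankedTensorType>(type))
    (void)rt;
}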
Aart Bik
dc4cfdbb8f [mlir][sparse] provide an AoS "view" into sparse runtime support lib (#87116)
Note that even though the sparse runtime support lib always uses SoA
storage for COO storage (and provides correct codegen by means of views
into this storage), in some rare cases we need the true physical AoS
storage as a coordinate buffer. This PR provides that functionality by
means of a (costly) coordinate buffer call.

Since this is currently only used for testing/debugging by means of the
sparse_tensor.print method, this solution is acceptable. If we ever want
a performant version of this, we should truly support AoS storage of COO
in addition to the SoA used right now.
2024-03-29 15:30:36 -07:00
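
For context, a sketch of the two COO layouts for three 2-D coordinates
(1,2), (3,4), (5,6), with the buffers shown as plain arrays:

// SoA: one coordinate buffer per level, as the runtime lib stores COO.
int soaCrd0[] = {1, 3, 5};
int soaCrd1[] = {2, 4, 6};
// AoS: interleaved coordinate tuples in a single buffer, which is what
// the (costly) coordinate buffer call materializes.
int aosCrd[] = {1, 2, 3, 4, 5, 6};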
Peiming Liu
94e27c265a [mlir][sparse] reuse tensor.insert operation to insert elements into a sparse tensor. (#84987)
2024-03-12 16:59:17 -07:00
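
A sketch of the builder-level effect, assuming the standard
`tensor::InsertOp` builder; the `#CSR` encoding in the IR comment is
illustrative:

#include "mlir/Dialect/Tensor/IR/Tensor.h"

using namespace mlir;

// Insert %val into a sparse destination with a plain tensor.insert,
// where a dedicated sparse_tensor.insert op was used before. Roughly:
//   %1 = tensor.insert %val into %dest[%i, %j] : tensor<4x4xf64, #CSR>
static Value insertElement(OpBuilder &b, Location loc, Value val,
                           Value dest, ValueRange indices) {
  return b.create<tensor::InsertOp>(loc, val, dest, indices);
}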
Aart Bik
275fe3ae2d [mlir][sparse] support complex type for sparse_tensor.print (#83934)
With an integration test example
2024-03-04 17:14:31 -08:00
Peiming Liu
52b69aa32f [mlir][sparse] support sparsifying batch levels (#83898) 2024-03-04 14:39:06 -08:00
Aart Bik
691fc7cdcc [mlir][sparse] add dim/lvl information to sparse_tensor.print (#83913)
More information is more testing!
Also adjusts already-migrated integration tests.
2024-03-04 14:32:49 -08:00
Peiming Liu
6bc7c9df7f [mlir][sparse] infer returned type for sparse_tensor.to_[buffer] ops (#83343)
In the presence of batch levels, the sparse structure buffers might not
always be rank-1 memrefs.
2024-02-28 16:10:20 -08:00
Aart Bik
d37affb06f [mlir][sparse] add a sparse_tensor.print operation (#83321)
This operation is mainly used for testing and debugging purposes but
provides a very convenient way to quickly inspect the contents of a
sparse tensor (all components over all stored levels).

Example:

[ [ 1, 0, 2, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 3, 4, 0, 5, 0, 0 ] ]

when stored sparse as DCSC prints as

---- Sparse Tensor ----
nse = 5
pos[0] : ( 0, 4,  )
crd[0] : ( 0, 2, 3, 5,  )
pos[1] : ( 0, 1, 3, 4, 5,  )
crd[1] : ( 0, 0, 3, 3, 3,  )
values : ( 1, 2, 3, 4, 5,  )
----
2024-02-28 12:33:26 -08:00
Matthias Springer
91d5653e3a [mlir] Use OpBuilder::createBlock in op builders and patterns (#82770)
When creating a new block in (conversion) rewrite patterns,
`OpBuilder::createBlock` must be used. Otherwise, no
`notifyBlockInserted` notification is sent to the listener.

Note: The dialect conversion relies on listener notifications to keep
track of IR modifications. Creating blocks without the builder API can
lead to memory leaks during rollback.
2024-02-24 09:10:07 +01:00
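
A sketch of the rule in a typical pattern context (argument plumbing is
illustrative):

#include "mlir/IR/PatternMatch.h"

using namespace mlir;

static void addBlock(PatternRewriter &rewriter, Region &region,
                     TypeRange argTypes, ArrayRef<Location> argLocs) {
  // Wrong: the listener never hears about the new block.
  //   auto *block = new Block();
  //   region.push_back(block);
  // Right: createBlock sends notifyBlockInserted to the listener.
  rewriter.createBlock(&region, region.end(), argTypes, argLocs);
}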
Peiming Liu
5248a98724 [mlir][sparse] support SoA COO in codegen path. (#82439)
*NOTE*: the `SoA` property only makes a difference on the codegen path,
and is ignored on the libgen path at the moment (where only SoA COO is
supported).
2024-02-20 17:06:21 -08:00
Peiming Liu
aaf916456a Reapply "[mlir][sparse] remove LevelType enum, construct LevelType from LevelFormat and Properties" (#81923) (#81934) 2024-02-15 14:48:52 -08:00
Mehdi Amini
513448d28e Revert "[mlir][sparse] remove LevelType enum, construct LevelType from LevelF…" (#81923)
Reverts llvm/llvm-project#81799; this broke the MLIR gcc7 bot.
2024-02-15 13:26:44 -08:00
Peiming Liu
235ec0f791 [mlir][sparse] remove LevelType enum, construct LevelType from LevelFormat and properties instead. (#81799)
2024-02-15 12:31:03 -08:00
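
A hedged sketch of the constructive style this enables; the
`buildLevelType` helper and its exact signature are assumptions based on
the commit title, not confirmed by this log:

// Hypothetical: assemble a level type from a format plus properties,
// rather than selecting a fixed enumerator.
std::optional<mlir::sparse_tensor::LevelType> lt =
    mlir::sparse_tensor::buildLevelType(
        mlir::sparse_tensor::LevelFormat::Compressed,
        /*ordered=*/true, /*unique=*/true);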
Mehdi Amini
61f64d1c23 Apply clang-tidy fixes for llvm-qualified-auto in SparseTensorRewriting.cpp (NFC) 2024-02-12 13:27:49 -08:00
Peiming Liu
298412b578 [mlir][sparse] set up SparseIterator to help generate code to traverse a sparse tensor level. (#78345) 2024-01-24 11:33:06 -08:00
Matthias Springer
5fcf907b34 [mlir][IR] Rename "update root" to "modify op" in rewriter API (#78260)
This commit renames 4 pattern rewriter API functions:
* `updateRootInPlace` -> `modifyOpInPlace`
* `startRootUpdate` -> `startOpModification`
* `finalizeRootUpdate` -> `finalizeOpModification`
* `cancelRootUpdate` -> `cancelOpModification`

The term "root" is a misnomer. The root is the op that a rewrite pattern
matches against
(https://mlir.llvm.org/docs/PatternRewriter/#root-operation-name-optional).
A rewriter must be notified of all in-place op modifications, not just
in-place modifications of the root
(https://mlir.llvm.org/docs/PatternRewriter/#pattern-rewriter). The old
function names were confusing and have contributed to various broken
rewrite patterns.

Note: The new function names use the term "modify" instead of "update"
for consistency with the `RewriterBase::Listener` terminology
(`notifyOperationModified`).
2024-01-17 11:08:59 +01:00
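
A before/after sketch (the attribute update is illustrative):

#include "mlir/IR/PatternMatch.h"

using namespace mlir;

static void markVisited(PatternRewriter &rewriter, Operation *op) {
  // Before the rename: rewriter.updateRootInPlace(op, ...);
  // After: same semantics, clearer name, and explicitly required for
  // in-place changes to any op, not just the matched root.
  rewriter.modifyOpInPlace(op, [&] {
    op->setAttr("visited", rewriter.getUnitAttr());
  });
}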
Matthias Springer
0a8e3dd432 [mlir][Interfaces] DestinationStyleOpInterface: Rename hasTensor/BufferSemantics (#77574)
Rename interface functions as follows:
* `hasTensorSemantics` -> `hasPureTensorSemantics`
* `hasBufferSemantics` -> `hasPureBufferSemantics`

These two functions return "true" if the op has tensor (resp. buffer)
operands but no buffer (resp. tensor) operands.

Also drop the "ranked" part from the interface, i.e., do not distinguish
between ranked/unranked types.

The new function names describe the functions more accurately. They also
align their semantics with the notion of "tensor semantics" in the
bufferization framework. (An op is supposed to be bufferized if it has
tensor operands, and we don't care if it also has memref operands.)

This change is in preparation of #75273, which adds
`BufferizableOpInterface::hasTensorSemantics`. By renaming the functions
in the `DestinationStyleOpInterface`, we can avoid name clashes between
the two interfaces.
2024-01-12 10:02:54 +01:00
Aart Bik
365777ecbe [mlir][sparse] refactor utilities into transform/utils dir (#75250)
Separates actual transformation files from supporting utility files in
the transforms directory. Includes a bazel overlay fix for the build (as
well as a bit of cleanup of that file to be less verbose and more
flexible).
2023-12-12 15:34:31 -08:00
Aart Bik
5b72950394 [mlir][sparse] move all COO related methods into SparseTensorType (#73881)
This centralizes all COO methods and provides a cleaner API. Note that
the "enc"-only constructor is a temporary workaround for the need for
COO methods inside the "enc"-only storage specifier.
2023-11-30 09:40:39 -08:00
Aart Bik
45288085b5 [mlir][sparse] move toCOOType into SparseTensorType class (#73708)
Migrates a dangling convenience method into the proper SparseTensorType
class. Also cleans up some details (picking the right dim2lvl/lvl2dim).
Removes more dead code.
2023-11-28 16:04:01 -08:00
Peiming Liu
4e2f1521ec [mlir][sparse] code cleanup, remove FIXMEs (#73575) 2023-11-27 14:57:08 -08:00
Aart Bik
1944c4f76b [mlir][sparse] rename DimLevelType to LevelType (#73561)
The "Dim" prefix is a legacy left-over that no longer makes sense, since
we have a very strict "Dimension" vs. "Level" definition for sparse
tensor types and their storage.
2023-11-27 14:27:52 -08:00
Aart Bik
1dd387e106 [mlir][sparse] change dim level type -> level type (#73058)
The "dimension" before "level" does not really make sense Note that
renaming the actual type DimLevelType to LevelType is still TBD, since
this is an externally visible change (e.g. visible to Python API).
2023-11-22 09:06:22 -08:00
Aart Bik
e8fc282ff2 [mlir][sparse] avoid non-perm on sparse tensor convert for new (#72459)
This avoids non-permutation maps on the convert from COO to non-COO for
higher-dimensional `new` operators (viz. reading in BSR).

This is step 1 out of 3 to make sparse_tensor.new work for BSR.
2023-11-15 20:47:37 -08:00
Tim Harvey
c43e627457 Changed the phrase sparse-compiler to sparsifier in comments (#71578)
When the Powers That Be decided that the name "sparse compiler" should
be changed to "sparsifier", we neglected to change some of the comments
in the code; this pull request completes the name change.
2023-11-07 20:55:00 +00:00
Aart Bik
2b67942139 [mlir][sparse] remove (some) deprecated dim/lvl methods (#71125)
This removes the most obvious ones. The others are still TBD.
2023-11-02 16:29:27 -07:00
Peiming Liu
53ffafb24d [mlir][sparse] support sparse constant to BSR conversion. (#71114)
Support direct conversion from a constant tensor defined by
SparseArrayElements to BSR.
2023-11-02 14:45:39 -07:00
Aart Bik
22212ca745 [mlir][sparse] simplify some header code (#70989)
This is a first revision in a small series of changes that removes
duplications between direct encoding methods and sparse tensor type
wrapper methods (in favor of the latter abstraction, since it provides
more safety). The goal is to simply end up with "just" SparseTensorType.
2023-11-02 09:31:11 -07:00
Peiming Liu
3426d330a7 [mlir][sparse] Implement rewriters to reinterpret maps on foreach (#70868) 2023-11-01 12:11:47 -07:00
Peiming Liu
ef100c228a [mlir][sparse] implements tensor.insert on sparse tensors. (#70737) 2023-10-30 16:04:41 -07:00
Peiming Liu
f82bee1367 [mlir][sparse] split post-sparsification-rewriting into two passes. (#70727) 2023-10-30 15:22:21 -07:00
Peiming Liu
7d608ee2bb [mlir][sparse] unify sparse_tensor.out rewriting rules (#70518) 2023-10-27 16:46:58 -07:00
Peiming Liu
c780352de9 [mlir][sparse] implement sparse_tensor.lvl operation. (#69993) 2023-10-24 13:23:28 -07:00
Peiming Liu
e9fa1fdec9 [mlir][sparse] support CSR/BSR conversion (#69800) 2023-10-20 17:24:34 -07:00
Peiming Liu
6456e0bbbb [mlir][sparse] implement sparse_tensor.crd_translate operation (#69653) 2023-10-20 16:18:02 -07:00
Peiming Liu
71c97c735c [mlir][sparse] avoid tensor to memref conversion in sparse tensor rewriting rules. (#69362)
2023-10-17 11:34:06 -07:00
Peiming Liu
761c9dd927 [mlir][sparse] implementing stageSparseOpPass as an interface (#69022) 2023-10-17 10:54:44 -07:00
Peiming Liu
eb14f47bf1 [mlir][sparse][NFC] fix variable naming convention (#69232) 2023-10-16 10:51:28 -07:00
Peiming Liu
f248d0b28d [mlir][sparse] implement sparse_tensor.reorder_coo (#68916)
As a side effect, this change also unifies the ConvertOp implementation
between the lib and codegen paths.
2023-10-12 13:22:45 -07:00
Peiming Liu
dda3dc5e38 [mlir][sparse] simplify ConvertOp rewriting rules (#68350)
Canonicalize a complex ConvertOp into multiple stages, such that each
stage can be done either by a direct conversion or by sorting.
2023-10-11 09:34:11 -07:00