Commit Graph

88 Commits

Author SHA1 Message Date
Anlun Xu
fad84c3dbe [mlir][sparse] Support sparse2sparse collapse for dynamic sizes
This patch implements sparse2sparse collapse for operands with dynamic shape.
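A minimal sketch of the kind of IR this enables (encoding names, element type, and shapes are placeholders, not taken from the patch):

```
#SparseMatrix = #sparse_tensor.encoding<{ dimLevelType = [ "compressed", "compressed" ] }>
#SparseVector = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>

// Collapse a dynamically sized sparse matrix into a sparse vector; the
// output size is only known at runtime and must be computed dynamically.
%v = tensor.collapse_shape %m [[0, 1]]
    : tensor<?x?xf64, #SparseMatrix> into tensor<?xf64, #SparseVector>
```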

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D131599
2022-09-27 18:40:59 -07:00
Aart Bik
a3610359b5 [mlir][sparse] change memref argument to proper SSA components
The indices for insert/compress were previously provided as
a memref<?xindex> with proper rank, since that matched the
argument for the runtime support library better. However, with
proper codegen coming, providing the indices as SSA values
is much cleaner. This also brings the sparse_tensor.insert
closer to unification with tensor.insert, planned in the
longer run.
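Roughly (both forms are sketches, not the verbatim before/after assembly):

```
// Before (sketch): indices packed into a memref<?xindex>.
//   sparse_tensor.insert %t, %indices, %v : tensor<4x4xf64, #CSR>, memref<?xindex>, f64
// After (sketch): indices passed as individual SSA index values, which is
// much closer to tensor.insert.
sparse_tensor.insert %v into %t[%i, %j] : tensor<4x4xf64, #CSR>
```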

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D134404
2022-09-27 16:37:37 -07:00
Peiming Liu
938f419cf1 [mlir][sparse] Avoid generating DimOp in conversion passes.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133592
2022-09-09 18:08:05 +00:00
Peiming Liu
180bf5f940 [mlir][sparse] fix a bug in sparse2sparse reshape.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133521
2022-09-09 00:32:00 +00:00
Aart Bik
f76dcede3f [mlir][sparse] rename lex_insert into insert
This change does not impact any semantics yet, but it
is in preparation for implementing the unordered and not-unique
properties. Changing lex_insert to insert is a first step.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133531
2022-09-08 17:26:35 -07:00
Aart Bik
dc46d5c979 [mlir][sparse] improve dimop rewriting during conversion
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D133512
2022-09-08 13:04:28 -07:00
Aart Bik
ec8f2905a3 [mlir][sparse] fix bug in workspace dimension computation
Access pattern expansion is always done along the innermost stored
dimension, but this was incorrectly reordered due to using a
general utility that is typically used for original dimensions only.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D133472
2022-09-08 08:25:02 -07:00
Aart Bik
610b09074a [mlir][sparse] change variable dimension to fixed attribute pointers/indices
The "sparsification" pass does not need the ability to use runtime values for
the dimension, so the only source for variability would have been user code.
Restricting the dimension to constants simplifies code generation.
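Sketch of the effect on the IR (the attribute name and types are illustrative):

```
// Before (sketch): the dimension was a runtime SSA value.
//   %c1 = arith.constant 1 : index
//   %p = sparse_tensor.pointers %t, %c1 : tensor<?x?xf64, #CSR> to memref<?xindex>
// After (sketch): the dimension is a fixed attribute.
%p = sparse_tensor.pointers %t { dimension = 1 : index }
    : tensor<?x?xf64, #CSR> to memref<?xindex>
```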

Reviewed By: Peiming, wrengr

Differential Revision: https://reviews.llvm.org/D133458
2022-09-07 16:27:24 -07:00
Aart Bik
1b434652c5 [mlir][sparse] add more dimension level types and properties
We recently removed the singleton dimension level type (see the revision
https://reviews.llvm.org/D131002) since it was unimplemented but also
incomplete (properties were missing). This revision adds singleton back as an
extra dimension level type, together with the properties ordered/not-ordered
and unique/not-unique. Even though still not lowered to actual code, this
provides a complete way of defining many more sparse storage schemes (in
the long run, we want to support even more dimension level types and properties
using the additional extensions proposed in [Chou]).

Note that the current solution of using suffixes for the properties is not
ideal, but keeps the extension relatively simple with respect to parsing and
printing. Furthermore, it is rather consistent with the TACO implementation
which uses things like Compressed-Unique as well. Nevertheless, we probably
want to separate dimension level types from properties when we add more types
and properties.
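For example, with the suffix convention (nu = not-unique, no = not-ordered; the encoding names are illustrative):

```
#SortedCOO = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed-nu", "singleton" ]
}>
#UnorderedCOO = #sparse_tensor.encoding<{
  dimLevelType = [ "compressed-nu-no", "singleton-no" ]
}>
```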

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D132897
2022-08-30 10:37:49 -07:00
Aart Bik
86b22d3120 [mlir][sparse] start a sparse codegen conversion pass
This new pass provides an alternative to the current conversion pass
that converts sparse tensor types and sparse primitives to opaque pointers
and calls into a runtime support library. This pass will map sparse tensor
types to actual data structures and primitives to actual code. In the long
run, this new pass will remove our dependence on the support library, avoid
the need to link in fully templated and expanded code, and provide much better
opportunities for optimization on the generated code.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D132766
2022-08-29 09:39:33 -07:00
Peiming Liu
8f8c8168f3 [mlir][sparse] fix compiler warning
Reviewed By: aartbik, bixia

Differential Revision: https://reviews.llvm.org/D132040
2022-08-17 17:46:28 +00:00
Thomas Joerg
5ad7cc7f21 Fix unused variable (introduced in
c248219b09)
2022-08-17 15:18:37 +02:00
Peiming Liu
ee986ab727 [mlir][sparse] Refactoring: remove Operation * from the argument list in utility functions
This patch removes the Operation *op from the argument list of the utility functions, and directly passes the Location instead of calling op->getLoc().

This should make the code clearer, as the utility functions (logically) do not rely on the operation that we are currently rewriting and behave the same regardless of the operation.

Reviewed By: aartbik, wrengr

Differential Revision: https://reviews.llvm.org/D131991
2022-08-16 21:26:43 +00:00
Peiming Liu
c248219b09 [mlir][sparse] Implements concatenate operation for sparse tensor
This patch implements the conversion rule for the operation introduced in https://reviews.llvm.org/D131200.
It also contains an integration test for correctness.
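A small sketch of the operation being converted (shapes and encoding are illustrative):

```
// Concatenate two sparse matrices along dimension 0.
%c = sparse_tensor.concatenate %a, %b { dimension = 0 : index }
    : tensor<2x4xf64, #CSR>, tensor<3x4xf64, #CSR> to tensor<5x4xf64, #CSR>
```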

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D131200
2022-08-16 20:47:47 +00:00
Aart Bik
9921ef73c8 [mlir][sparse] remove singleton dimension level type (for now)
Although we have plans to support this, and many other, dimension level types, the tag is currently not supported. It will be easy to add this back once support is added.

NOTE: based on discussion in https://discourse.llvm.org/t/overcoming-sparsification-limitation-on-level-types/62585

https://github.com/llvm/llvm-project/issues/51658

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D131002
2022-08-02 11:48:49 -07:00
Matthias Springer
27a431f5e9 [mlir][bufferization][NFC] Move sparse_tensor.release to bufferization dialect
This op used to belong to the sparse dialect, but there are use cases for dense bufferization as well. (E.g., when a tensor alloc is returned from a function and should be deallocated at the call site.) This change moves the op to the bufferization dialect, which now has an `alloc_tensor` and a `dealloc_tensor` op.
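Usage sketch after the move (the sparse encoding is a placeholder):

```
%t = bufferization.alloc_tensor() : tensor<8x8xf64, #CSR>
// ... compute with %t ...
bufferization.dealloc_tensor %t : tensor<8x8xf64, #CSR>
```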

Differential Revision: https://reviews.llvm.org/D129985
2022-07-19 09:18:19 +02:00
Kazu Hirata
10bcfeebfa [mlir] Remove unused using (NFC)
Identified with misc-unused-using-decls.
2022-07-17 18:08:48 -07:00
Matthias Springer
c66303c287 [mlir][sparse] Switch to One-Shot Bufferize
This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.

Details:
* Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function, as shown in the sketch after this list. (That way, the allocation happens at the call site.)
* Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
* Sparse tensor copies are not supported yet. (Future change)
* Sparsification no longer has to consider inplacability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification is running. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which will trigger an allocation and/or copy during TensorCopyInsertion.
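A sketch of the "out" tensor argument pattern mentioned above (names, types, and the fill computation are illustrative placeholders):

```
func.func @kernel(%out: tensor<32xf64>) -> tensor<32xf64> {
  %cst = arith.constant 0.0 : f64
  // The result is written into the caller-provided tensor and returned, so
  // the allocation (and its deallocation) happens at the call site.
  %0 = linalg.fill ins(%cst : f64) outs(%out : tensor<32xf64>) -> tensor<32xf64>
  return %0 : tensor<32xf64>
}
```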

Differential Revision: https://reviews.llvm.org/D129356
2022-07-14 09:52:48 +02:00
Aart Bik
faa00c1313 [mlir][sparse] implement sparse2sparse reshaping (expand/collapse)
A previous revision implemented expand/collapse reshaping between
dense and sparse tensors for sparse2dense and dense2sparse since those
could use the "cheap" view reshape on the already materialized
dense tensor (at either the input or output side), and do some
reshuffling from or to sparse. The dense2dense case, as always,
is handled with a "cheap" view change.

This revision implements the sparse2sparse cases. Lacking any "view"
support on sparse tensors, this operation necessarily has to perform
data reshuffling on both ends.
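For instance (sizes and encodings are illustrative), a sparse-to-sparse expansion:

```
// No "view" is possible on sparse storage, so the stored elements are
// reshuffled from the source format into the destination format.
%m = tensor.expand_shape %v [[0, 1]]
    : tensor<12xf64, #SparseVector> into tensor<3x4xf64, #SparseMatrix>
```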

Tracker for improving this:
https://github.com/llvm/llvm-project/issues/56477

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D129416
2022-07-11 14:49:06 -07:00
Kazu Hirata
6d5fc1e3d5 [mlir] Don't use Optional::getValue (NFC) 2022-06-20 23:20:25 -07:00
Kazu Hirata
0916d96d12 Don't use Optional::hasValue (NFC) 2022-06-20 20:17:57 -07:00
Kazu Hirata
037f09959a [mlir] Don't use Optional::hasValue (NFC) 2022-06-20 11:22:37 -07:00
Alex Zinenko
8b68da2c7d [mlir] move SCF headers to SCF/{IR,Transforms} respectively
This aligns the SCF dialect file layout with the majority of the dialects.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D128049
2022-06-20 10:18:01 +02:00
Jacques Pienaar
8df54a6a03 [mlir] Update accessors to prefixed form (NFC)
Follow-up from flipping dialects to _Both: flip accessor uses to the prefixed
variant ahead of flipping from _Both to _Prefixed. This just switches to
the accessors introduced in the preceding change, which are simply prefixed
forms of the existing accessors.

Mechanical change using helper script
https://github.com/jpienaar/llvm-project/blob/main/clang-tools-extra/clang-tidy/misc/AddGetterCheck.cpp and clang-format.
2022-06-18 17:53:22 -07:00
Aart Bik
aef20f59a5 [mlir][sparse] move from by-value to by-reference for data types
This fixes all sorts of ABI issues due to passing by-value
(now using by-reference with memrefs exclusively).

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D128018
2022-06-17 08:39:25 -07:00
Alex Zinenko
610139d2d9 [mlir] replace 'emit_c_wrappers' func->llvm conversion option with a pass
The 'emit_c_wrappers' option in the FuncToLLVM conversion requests C interface
wrappers to be emitted for every builtin function in the module. While this has
been useful to bootstrap the interface, it is problematic in the longer term as
it may unintentionally affect the functions that should retain their existing
interface, e.g., libm functions obtained by lowering math operations (see
D126964 for an example). Since D77314, we have a finer-grain control over
interface generation via an attribute that avoids the problem entirely. Remove
the 'emit_c_wrappers' option. Introduce the '-llvm-request-c-wrappers' pass
that can be run in any pipeline that needs blanket emission of functions to
annotate all builtin functions with the attribute before performing the usual
lowering that accounts for the attribute.
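An illustrative sketch of the attribute-based control (the pass simply attaches this attribute to every builtin function before lowering):

```
// Request a C interface wrapper only for this function; other functions,
// e.g. lowered libm calls, keep their native interface.
func.func @foo(%arg0: f32) -> f32 attributes { llvm.emit_c_interface } {
  return %arg0 : f32
}
```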

Reviewed By: chelini

Differential Revision: https://reviews.llvm.org/D127952
2022-06-17 11:10:31 +02:00
Matthias Springer
6232a8f3d6 [mlir][sparse][NFC] Switch InitOp to bufferization::AllocTensorOp
Now that we have an AllocTensorOp (previously InitTensorOp) in the bufferization dialect, the InitOp in the sparse dialect is no longer needed.

Differential Revision: https://reviews.llvm.org/D126180
2022-06-02 00:03:52 +02:00
wren romano
8cb332406c [mlir][sparse] Enhancing sparse=>sparse conversion.
Fixes: https://github.com/llvm/llvm-project/issues/51652

Depends On D122060

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122061
2022-05-16 15:42:19 -07:00
Matthias Springer
e9fa559097 [mlir][sparse][NFC] Use RewriterBase/OpBuilder when possible
Most functions do not need a PatternRewriter or ConversionPatternRewriter.

Differential Revision: https://reviews.llvm.org/D125466
2022-05-13 11:37:26 +02:00
River Riddle
58ceae9561 [mlir:NFC] Remove the forward declaration of FuncOp in the mlir namespace
FuncOp has been moved to the `func` namespace for a little over a month, so the
using directive can be dropped now.
2022-04-18 12:01:55 -07:00
Aart Bik
0b55f94d2b [mlir][sparse] replace stack-based access pattern with dyn-alloc
Rationale:
Allocating the temporary buffers for access pattern expansion on the stack
(using alloca) is a bit too aggressive, since it easily runs out of stack space
for large enveloping tensor dimensions. This revision replaces the stack
allocation of these buffers with dynamic allocation via explicit alloc/dealloc pairs.
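Sketch of the change to the expansion buffers (the element type and size are illustrative):

```
// Before (sketch): stack allocation, which can overflow for large dimensions.
//   %buf = memref.alloca(%n) : memref<?xf64>
// After: explicit heap allocation paired with a deallocation.
%buf = memref.alloc(%n) : memref<?xf64>
// ... use %buf as the access pattern expansion workspace ...
memref.dealloc %buf : memref<?xf64>
```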

Reviewed By: bixia, wrengr

Differential Revision: https://reviews.llvm.org/D123253
2022-04-06 17:10:43 -07:00
wren romano
63bdcaf92a [mlir][sparse] Moving delete coo into codegen instead of runtime library
Prior to this change there were a number of places where the allocation and deallocation of SparseTensorCOO objects were not cleanly paired, leading to inconsistencies regarding whether each function released its tensor/coo arguments or not, as well as making it easy to run afoul of memory leaks, use-after-free, or double-free errors.  This change cleans up the codegen vs runtime boundary to resolve those issues.  Now, the only time the runtime library frees an object is either (a) because it's a function explicitly designed to do so, or (b) because the allocated object is entirely local to the function and would be a memory leak if not released.  Thus, now the codegen takes complete responsibility for releasing any objects it caused to be allocated.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122435
2022-04-01 11:08:52 -07:00
wren romano
c7e24db412 [mlir][sparse] Introducing options for the SparseTensorConversion pass
This is work towards: https://github.com/llvm/llvm-project/issues/51652

This differential sets up the options and threads them through everywhere, but doesn't actually use them yet.  The differential that finally makes use of them is D122061, which is the final differential in the chain that fixes bug 51652.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122054
2022-03-22 13:11:09 -07:00
Benjamin Kramer
89d8035e36 Use llvm::append_range where applicable
It knows the size, so no need to call reserve beforehand. NFCI.
2022-03-18 20:05:48 +01:00
gysit
7294be2b8e [mlir][linalg] Replace linalg.fill by OpDSL variant.
The revision removes the linalg.fill operation and renames the OpDSL generated linalg.fill_tensor operation to replace it. After the change, all named structured operations are defined via OpDSL and there are no handwritten operations left.

A side-effect of the change is that the pretty printed form changes from:
```
%1 = linalg.fill(%cst, %0) : f32, tensor<?x?xf32> -> tensor<?x?xf32>
```
to:
```
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x?xf32>) -> tensor<?x?xf32>
```
Additionally, the builder signature now takes input and output value ranges as it is the case for all other OpDSL operations:
```
rewriter.create<linalg::FillOp>(loc, val, output)
```
changes to
```
rewriter.create<linalg::FillOp>(loc, ValueRange{val}, ValueRange{output})
```
All other changes remain minimal. In particular, the canonicalization patterns are the same and the `value()`, `output()`, and `result()` methods are now implemented by the FillOpInterface.

Depends On D120726

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D120728
2022-03-14 10:51:08 +00:00
River Riddle
23aa5a7446 [mlir] Rename the Standard dialect to the Func dialect
The last remaining operations in the standard dialect all revolve around
FuncOp/function related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number
of cleanups unlocked/necessary afterwards:

* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect

See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D120624
2022-03-01 12:10:04 -08:00
River Riddle
3c69bc4d6e [mlir][NFC] Remove a few op builders that simply swap parameter order
Differential Revision: https://reviews.llvm.org/D119093
2022-02-07 19:03:57 -08:00
Aart Bik
efa15f4178 [mlir][sparse] add ability for sparse tensor output
Rationale:
Although file I/O is a bit alien to MLIR itself, we provide two convenient ways
for sparse tensor I/O. The input part was already there (behind the swiss army
knife sparse_tensor.new). Now we have a sparse_tensor.out to write out data. As
before, the ops are kept vague and may change in the future. For now this
allows us to compare TACO vs MLIR very easily.
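Usage sketch (the destination is an opaque pointer, e.g. a filename string; the types shown are illustrative):

```
// Write a sparse tensor to the destination given by %dest.
sparse_tensor.out %t, %dest : tensor<?x?xf64, #CSR>, !llvm.ptr<i8>
```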

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D117850
2022-01-21 15:43:29 -08:00
wren romano
c948922567 [mlir][sparse] Factoring out type-based function-name suffixes
Depends On D115010

This changes a couple of places that used to `return failure();` to now use `llvm_unreachable()` instead. However, `Transforms/Sparsification.cpp` should be doing the necessary type checks to ensure that those cases are in fact unreachable.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D115012
2022-01-04 16:17:55 -08:00
wren romano
85b8d03e12 [mlir][sparse] Factoring out Transforms/CodegenUtils.{cpp,h}
This moves a bunch of helper functions from `Transforms/SparseTensorConversion.cpp` into `Transforms/CodegenUtils.{cpp,h}` so that they can be reused by `Transforms/Sparsification.cpp`, etc.

See also the dependent D115010 which cleans up some corner cases in this change.

Reviewed By: aartbik, rriddle

Differential Revision: https://reviews.llvm.org/D115008
2022-01-04 16:11:47 -08:00
Jacques Pienaar
c0342a2de8 [mlir] Switching accessors to prefixed form (NFC)
Makes eventual prefixing flag flip smaller change.
2021-12-20 08:03:43 -08:00
Aart Bik
bb8632c1ef [mlir][sparse] fix broken build
rebase and commit crossed the getFunc change

Reviewed By: Chia-hungDuan

Differential Revision: https://reviews.llvm.org/D115270
2021-12-07 11:14:21 -08:00
Aart Bik
4f2ec7f983 [mlir][sparse] finalize sparse output in the presence of reductions
This revision implements sparse outputs (from scratch) in all cases where
the loops can be reordered with all but one parallel loop outer. If the
inner parallel loop appears inside one or more reduction loops, then an
access pattern expansion is required (a.k.a. workspaces in TACO speak).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D115091
2021-12-07 10:54:29 -08:00
wren romano
d8731bfc93 [mlir][sparse] Requiring emitCInterface parameter to be explicit
Depends On D115004

Cleans up code legibility by requiring the `emitCInterface` parameter to be explicit at all call-sites, and defining boolean aliases for that parameter.

Reviewed By: aartbik, rriddle

Differential Revision: https://reviews.llvm.org/D115005
2021-12-06 20:50:08 -08:00
wren romano
f527fdf51e [mlir][sparse] Code cleanup for SparseTensorConversion
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D115004
2021-12-06 14:13:35 -08:00
Alexander Belyaev
57470abc41 [mlir] Move memref.[tensor_load|buffer_cast|clone] to "bufferization" dialect.
https://llvm.discourse.group/t/rfc-dialect-for-bufferization-related-ops/4712

Differential Revision: https://reviews.llvm.org/D114552
2021-11-25 11:50:39 +01:00
wren romano
d7d7ffe254 [mlir][sparse] Adding wrappers for constantOverheadTypeEncoding
Minor code cleanup

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D114392
2021-11-23 18:30:06 -08:00
Aart Bik
f66e5769d4 [mlir][sparse] first version of "truly" dynamic sparse tensors as outputs of kernels
This revision contains all "sparsification" ops and rewriting necessary to support sparse output tensors when the kernel has no reduction (viz. insertions occur in lexicographic order and are "injective"). This will be later generalized to allow reductions too. Also, this first revision only supports sparse 1-d tensors (viz. vectors) as output in the runtime support library. This will be generalized to n-d tensors shortly. But this way, the revision is kept to a manageable size.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D113705
2021-11-15 15:33:32 -08:00
Kazu Hirata
ca1a8be06b [Transforms] Fix a warning
This patch fixes:

  mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp:124:3:
  error: default label in switch which covers all enumeration values
  [-Werror,-Wcovered-switch-default]

by removing the default case.
2021-11-05 19:30:14 -07:00
wren romano
845561ec9d [mlir][sparse] Factoring magic numbers into a header
Addresses https://bugs.llvm.org/show_bug.cgi?id=52303

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D112962
2021-11-05 15:59:16 -07:00