Commit Graph

55 Commits

Author SHA1 Message Date
Peiming Liu
edca72f5bc [mlir][sparse] Refactoring: remove dependence on tuple type when lowering sparse tensors.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133390
2022-09-07 17:53:48 +00:00
Peiming Liu
4c46a5d54d [mlir][sparse] Refactoring: renaming StorageNewOp to StorageOp
To address a comment in https://reviews.llvm.org/D133241

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133363
2022-09-06 17:02:25 +00:00
Aart Bik
0c7abd3924 [mlir][sparse] codegen for sparse alloc
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133241
2022-09-06 09:37:54 -07:00
Nick Kreeger
30ceb783e2 [mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options.
The SparseTensor passes currently use opaque numbers for the CLI, despite using an enum internally. This patch exposes the enums instead of numbered items that are matched back to the enum.

Fixes https://github.com/llvm/llvm-project/issues/53389

Differential Revision: https://reviews.llvm.org/D123876

Please also see:
https://reviews.llvm.org/D118379
https://reviews.llvm.org/D117919
2022-09-04 01:39:35 +00:00
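
As a rough illustration of the change described in the commit above, here is a minimal sketch of exposing an enum to LLVM's command-line machinery with named values instead of raw numbers. The enum, option name, and value names are illustrative stand-ins, not the exact ones introduced by D123876.

    #include "llvm/Support/CommandLine.h"

    // Hypothetical strategy enum; the real pass declares its own enumerators.
    enum class VectorizationStrategy { None, DenseInnerLoop, AnyStorageInnerLoop };

    // Named enum values replace opaque integers such as "--vectorization-strategy=2".
    static llvm::cl::opt<VectorizationStrategy> vectorizationStrategy(
        "vectorization-strategy",
        llvm::cl::desc("Set the vectorization strategy"),
        llvm::cl::init(VectorizationStrategy::None),
        llvm::cl::values(
            clEnumValN(VectorizationStrategy::None, "none",
                       "Turn off vectorization"),
            clEnumValN(VectorizationStrategy::DenseInnerLoop, "dense-inner-loop",
                       "Vectorize dense inner loops only"),
            clEnumValN(VectorizationStrategy::AnyStorageInnerLoop,
                       "any-storage-inner-loop",
                       "Vectorize inner loops over any storage")));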
Nick Kreeger
91470d6352 Revert "[mlir][sparse] Expose SparseTensor passes as enums instead of opaque"
This reverts commit ef25b5d93d.
2022-09-03 15:47:40 -05:00
Nick Kreeger
ef25b5d93d [mlir][sparse] Expose SparseTensor passes as enums instead of opaque
numbers for vectorization and parallelization options.

The SparseTensor passes currently use opaque numbers for the CLI,
despite using an enum internally. This patch exposes the enums instead
of numbered items that are matched back to the enum.

Fixes https://github.com/llvm/llvm-project/issues/53389

Differential Revision: https://reviews.llvm.org/D123876

Please also see:
https://reviews.llvm.org/D118379
https://reviews.llvm.org/D117919
2022-09-03 15:45:49 -05:00
Peiming Liu
928b5b06f9 [mlir][sparse] add conversion rules for storage_get/set/callOp
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133175
2022-09-02 18:27:54 +00:00
Aart Bik
2ddfacd95c [mlir][sparse] codegen for sparse dealloc
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D133171
2022-09-01 22:21:20 -07:00
Aart Bik
3ae98fd259 [mlir][sparse] added codegen for dimop, pointers, indices, values
Demonstrates how sparse tensor type -> tuple -> getter
will eventually yield actual code on the memrefs directly

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133143
2022-09-01 16:36:10 -07:00
Peiming Liu
ca01c996b2 [mlir][sparse] Add SparseTensorStorageExpansion Pass to expand compound sparse tensor tuples
This patch adds the SparseTensorStorageExpansion pass, which flattens the tuple used to store a sparse
tensor handle.

Right now, it only sets up the skeleton for the pass; more lowering rules for sparse tensor storage
operations need to be added.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D133125
2022-09-01 22:47:31 +00:00
Aart Bik
1be09496bf [mlir][sparse] improved tensor type lowering
Also includes a first codegen example (although full support needs tuple access)

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133080
2022-09-01 09:24:20 -07:00
Michele Scuttari
67d0d7ac0a [MLIR] Update pass declarations to new autogenerated files
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D132838
2022-08-31 12:28:45 +02:00
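
For context, a minimal sketch of the autogenerated-base-class pattern that such migrations move passes to; the pass name, macro, and include path here are assumptions for illustration, not taken from D132838 itself.

    // Pull in the tablegen-generated pass base class (names are hypothetical).
    namespace mlir {
    #define GEN_PASS_DEF_MYSPARSEPASS
    #include "mlir/Dialect/SparseTensor/Transforms/Passes.h.inc"
    } // namespace mlir

    namespace {
    // The pass derives from the generated base instead of a hand-written declaration.
    struct MySparsePass : public mlir::impl::MySparsePassBase<MySparsePass> {
      void runOnOperation() override {
        // ... pass body ...
      }
    };
    } // anonymous namespace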
Michele Scuttari
039b969b32 Revert "[MLIR] Update pass declarations to new autogenerated files"
This reverts commit 2be8af8f0e.
2022-08-30 22:21:55 +02:00
Michele Scuttari
2be8af8f0e [MLIR] Update pass declarations to new autogenerated files
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D132838
2022-08-30 21:56:31 +02:00
Aart Bik
86b22d3120 [mlir][sparse] start a sparse codegen conversion pass
This new pass provides an alternative to the current conversion pass
that converts sparse tensor types and sparse primitives to opaque pointers
and calls into a runtime support library. This pass will map sparse tensor
types to actual data structures and primitives to actual code. In the long
run, this new pass will remove our dependence on the support library, avoid
the need to link in fully templated and expanded code, and provide much better
opportunities for optimization on the generated code.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D132766
2022-08-29 09:39:33 -07:00
Peiming Liu
bf59cd320e [mlir][sparse] fix error when sparse kernel is nested in an scf structural operator.
The sparse compiler failed on the provided test (when the sparse kernel is nested in an scf structural operator).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D130609
2022-07-27 16:12:23 +00:00
Matthias Springer
27a431f5e9 [mlir][bufferization][NFC] Move sparse_tensor.release to bufferization dialect
This op used to belong to the sparse dialect, but there are use cases for dense bufferization as well. (E.g., when a tensor alloc is returned from a function and should be deallocated at the call site.) This change moves the op to the bufferization dialect, which now has an `alloc_tensor` and a `dealloc_tensor` op.

Differential Revision: https://reviews.llvm.org/D129985
2022-07-19 09:18:19 +02:00
Aart Bik
28ebb0b61d [mlir][sparse] migrate sparse rewriting to sparse transformations pass
The rules in the linalg file were very specific to sparse tensors, so they will
find a better home under the sparse tensor dialect than the linalg dialect. Also
moved some rewriting from sparsification into this new "pre-rewriting" file.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D129910
2022-07-18 09:29:22 -07:00
Matthias Springer
c66303c287 [mlir][sparse] Switch to One-Shot Bufferize
This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.

Details:
* Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function. (That way, the allocation happens at the call site.)
* Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
* Sparse tensor copies are not supported yet. (Future change)
* Sparsification no longer has to consider inplacability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification is running. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which will trigger an allocation and/or copy during TensorCopyInsertion.

Differential Revision: https://reviews.llvm.org/D129356
2022-07-14 09:52:48 +02:00
Aart Bik
faa00c1313 [mlir][sparse] implement sparse2sparse reshaping (expand/collapse)
A previous revision implemented expand/collapse reshaping between
dense and sparse tensors for sparse2dense and dense2sparse since those
could use the "cheap" view reshape on the already materialized
dense tensor (at either the input or output side), and do some
reshuffling from or to sparse. The dense2dense case, as always,
is handled with a "cheap" view change.

This revision implements the sparse2sparse cases. Lacking any "view"
support on sparse tensors, this operation necessarily has to perform
data reshuffling on both ends.

Tracker for improving this:
https://github.com/llvm/llvm-project/issues/56477

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D129416
2022-07-11 14:49:06 -07:00
Jacques Pienaar
136d746ec7 [mlir] Flip accessors to prefixed form (NFC)
Another mechanical sweep to keep diff small for flip to _Prefixed.
2022-07-10 21:19:11 -07:00
Aart Bik
6d8e2f1e51 [mlir][sparse] implement simple reshaping (expand/collapse)
The revision makes a start with implementing expand/collapse reshaping
for sparse tensors. When either the source or the destination is sparse, but
the other is dense, the "cheap" dense reshape can be used prior to converting
from or to a sparse tensor.

Note 1: sparse-to-sparse reshaping is still TBD.

Note 2: in the long run, we may want to implement a "view" into a sparse tensor so that the operation remains cheap and does not require data shuffling.

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D129031
2022-07-06 14:34:30 -07:00
Aart Bik
fde04aee33 [mlir][sparse] refine bufferization allocation lowering
Marking the bufferization allocation operation as invalid
during sparse lowering is too strict, since dense and
sparse allocations can co-exist. This revision refines
the lowering with a dynamic type check.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D128305
2022-06-21 15:17:25 -07:00
Matthias Springer
6232a8f3d6 [mlir][sparse][NFC] Switch InitOp to bufferization::AllocTensorOp
Now that we have an AllocTensorOp (previously InitTensorOp) in the bufferization dialect, the InitOp in the sparse dialect is no longer needed.

Differential Revision: https://reviews.llvm.org/D126180
2022-06-02 00:03:52 +02:00
Aart Bik
28b6d412af [mlir][sparse] add support for complex zero/one building
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D126039
2022-05-20 08:53:30 -07:00
Nick Kreeger
4620032ee3 Revert "[mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options."
This reverts commit d59cf901cb.

Build fails on NVIDIA Sparse tests:
https://lab.llvm.org/buildbot/#/builders/61/builds/25447
2022-04-23 20:14:48 -05:00
Nick Kreeger
d59cf901cb [mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options.
The SparseTensor passes currently use opaque numbers for the CLI, despite using an enum internally. This patch exposes the enums instead of numbered items that are matched back to the enum.

Fixes GitHub issue #53389

Reviewed by: aartbik, mehdi_amini

Differential Revision: https://reviews.llvm.org/D123876
2022-04-23 19:16:57 -05:00
River Riddle
eda6f907d2 [mlir][NFC] Shift a bunch of dialect includes from the .h to the .cpp
Now that dialect constructors are generated in the .cpp file, we can
drop all of the dependent dialect includes from the .h file.

Differential Revision: https://reviews.llvm.org/D124298
2022-04-23 01:09:29 -07:00
River Riddle
58ceae9561 [mlir:NFC] Remove the forward declaration of FuncOp in the mlir namespace
FuncOp has been moved to the `func` namespace for a little over a month; the
using directive can be dropped now.
2022-04-18 12:01:55 -07:00
Javier Setoain
7783a178f5 [mlir][Sparse] Add option for VLA sparsification
Use "enable-vla-vectorization=vla" to generate a vector length agnostic
loops during vectorization. This option works for vectorization strategy 2.

Differential Revision: https://reviews.llvm.org/D118379
2022-03-25 10:54:49 +00:00
wren romano
c7e24db412 [mlir][sparse] Introducing options for the SparseTensorConversion pass
This is work towards: https://github.com/llvm/llvm-project/issues/51652

This differential sets up the options and threads them through everywhere, but doesn't actually use them yet.  The differential that finally makes use of them is D122061, which is the final differential in the chain that fixes bug 51652.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122054
2022-03-22 13:11:09 -07:00
River Riddle
4a3460a791 [mlir:FunctionOpInterface] Rename the "type" attribute to "function_type"
This removes any potential confusion with the `getType` accessors
which correspond to SSA results of an operation, and makes it
clear what the intent is (i.e. to represent the type of the function).

Differential Revision: https://reviews.llvm.org/D121762
2022-03-16 17:07:04 -07:00
River Riddle
1f971e23f0 [mlir] Trim a huge number of unnecessary dependencies on the Func dialect
The Func dialect has a large number of legacy dependencies carried over from the old
Standard dialect, which was pervasive and contained a large number of varied
operations. With the split of the standard dialect and its demise, a lot of lingering
dead dependencies have survived in the Func dialect. This commit removes a
large majority of them, greatly reducing the dependence surface area of the
Func dialect.
2022-03-01 12:10:04 -08:00
River Riddle
23aa5a7446 [mlir] Rename the Standard dialect to the Func dialect
The last remaining operations in the standard dialect all revolve around
FuncOp/function related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number
of cleanups unlocked/necessary afterwards:

* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect

See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D120624
2022-03-01 12:10:04 -08:00
wren romano
b85ed4e0e1 [mlir][sparse] Adding standard pipeline for tests.
Addresses https://bugs.llvm.org/show_bug.cgi?id=52409 aka https://github.com/llvm/llvm-project/issues/51751

Reviewed By: aartbik, mehdi_amini

Differential Revision: https://reviews.llvm.org/D117919
2022-01-28 15:11:12 -08:00
River Riddle
7ceffae18c [mlir] Convert OpTrait::FunctionLike to FunctionOpInterface
This commit refactors the FunctionLike trait into an interface (FunctionOpInterface).
FunctionLike as it is today is already a pseudo-interface, with many users checking the
presence of the trait and then manually calling into functionality implemented in the
function_like_impl namespace. By transitioning to an interface, these accesses are much
cleaner (ideally with no direct calls to the impl namespace outside of the implementation
of the derived function operations, e.g. for parsing/printing utilities).

I've tried to maintain as much compatibility with the current state as possible, while
also trying to clean up as much of the cruft as possible. The general migration plan for
current users of FunctionLike is as follows:

* function_like_impl -> function_interface_impl
Realistically most user calls should remove references to functions within this namespace
outside of a very narrow set (e.g. parsing/printing utilities). Calls to the attribute name
accessors should be migrated to the `FunctionOpInterface::` equivalent; most everything
else should be updated to be driven through an instance of the interface.

* OpTrait::FunctionLike -> FunctionOpInterface
`hasTrait` checks will need to be moved to isa, along with the other various Trait vs
Interface API differences.

* populateFunctionLikeTypeConversionPattern -> populateFunctionOpInterfaceTypeConversionPattern

Fixes #52917

Differential Revision: https://reviews.llvm.org/D117272
2022-01-18 20:56:53 -08:00
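
A small sketch of the trait-to-interface migration outlined above; the include path is an assumption (it has moved between MLIR versions) and the surrounding function is hypothetical.

    #include "mlir/IR/FunctionInterfaces.h"

    void inspectFunction(mlir::Operation *op) {
      // Before: trait-based check.
      //   if (op->hasTrait<mlir::OpTrait::FunctionLike>()) { ... }
      // After: cast to the interface and drive accessors through it.
      if (auto funcOp = llvm::dyn_cast<mlir::FunctionOpInterface>(op)) {
        unsigned numArgs = funcOp.getNumArguments();
        (void)numArgs;
      }
    }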
River Riddle
d4d016869d [mlir] Remove populateFuncOpTypeConversionPattern
This method simply forwards to populateFunctionLikeTypeConversionPattern,
which is more general. This also helps to remove special treatment of FuncOp from
DialectConversion.

Differential Revision: https://reviews.llvm.org/D116624
2022-01-12 14:05:35 -08:00
Mehdi Amini
abb336d26b Apply clang-tidy fixes for modernize-use-equals-default to MLIR (NFC)
2022-01-02 22:18:36 +00:00
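
A generic example of what that clang-tidy check rewrites (the class here is hypothetical):

    struct Widget {
      // Before the fix, these were empty user-provided bodies:
      //   Widget() {}
      //   ~Widget() {}
      // modernize-use-equals-default rewrites them to:
      Widget() = default;
      ~Widget() = default;
    };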
Mehdi Amini
3bab9d4eb0 Apply clang-tidy fixes for bugprone-copy-constructor-init to MLIR (NFC)
Reviewed By: rriddle, Mogball

Differential Revision: https://reviews.llvm.org/D116245
2022-01-02 01:05:30 +00:00
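
This check flags copy constructors that fail to copy their base class; a hypothetical before/after:

    struct Base {
      int value = 0;
    };

    struct Derived : Base {
      // Before: the base subobject was default-initialized rather than copied,
      // which bugprone-copy-constructor-init diagnoses:
      //   Derived(const Derived &other) {}
      // After: copy-construct the base explicitly.
      Derived(const Derived &other) : Base(other) {}
    };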
Mehdi Amini
be0a7e9f27 Adjust "end namespace" comment in MLIR to match new agreed coding style
See D115115 and this mailing list discussion:
https://lists.llvm.org/pipermail/llvm-dev/2021-December/154199.html

Differential Revision: https://reviews.llvm.org/D115309
2021-12-08 06:05:26 +00:00
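
For reference, the agreed style drops the "end" wording from closing-namespace comments; a minimal example:

    namespace mlir {
    namespace sparse_tensor {

    // Previously the closing comments often read "// end namespace sparse_tensor".

    } // namespace sparse_tensor
    } // namespace mlir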
Alexander Belyaev
57470abc41 [mlir] Move memref.[tensor_load|buffer_cast|clone] to "bufferization" dialect.
https://llvm.discourse.group/t/rfc-dialect-for-bufferization-related-ops/4712

Differential Revision: https://reviews.llvm.org/D114552
2021-11-25 11:50:39 +01:00
wren romano
28882b6575 [mlir][sparse] Implementing sparse=>dense conversion.
Depends On D110882, D110883, D110884

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110790
2021-10-28 15:27:35 -07:00
Aart Bik
1b15160ef3 [mlir][sparse] lower trivial tensor.cast on identical sparse tensors
Even though tensor.cast is not part of the sparse tensor dialect,
it may be used to cast static dimension sizes to dynamic dimension
sizes for sparse tensors without changing the actual sparse tensor
itself. Those cases should be lowered properly when replacing sparse
tensor types with their opaque pointers. Likewise, no-op sparse
conversions are handled by this revision in a similar manner.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D112173
2021-10-25 10:30:19 -07:00
Aart Bik
b24788abd8 [mlir][sparse] implement sparse tensor init operation
Next step towards supporting sparse tensor outputs.
Also some minor refactoring of enum constants as well
as replacing tensor arguments with proper buffer arguments
(the latter is required for more general size arguments for
the sparse_tensor.init operation, as well as more general
sparse_tensor.convert operations later).

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D111771
2021-10-15 09:33:16 -07:00
Mogball
a54f4eae0e [MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200

Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.

Renamed all instances of operations in the codebase and in tests.

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D110797
2021-10-13 03:07:03 +00:00
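
A small, hedged illustration of the rename from the C++ builder side; the header path reflects the Arithmetic dialect layout of that era and is an assumption, as is the helper function.

    #include "mlir/Dialect/Arithmetic/IR/Arithmetic.h"

    mlir::Value addValues(mlir::OpBuilder &b, mlir::Location loc,
                          mlir::Value lhs, mlir::Value rhs) {
      // Before: the op lived in the standard dialect, e.g. b.create<mlir::AddIOp>(...).
      // After: integer arithmetic moved to the arith dialect.
      return b.create<mlir::arith::AddIOp>(loc, lhs, rhs);
    }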
Bixia Zheng
fbd5821c6f Implement the conversion from sparse constant to sparse tensors.
The sparse constant provides a constant tensor in coordinate format. We first split the sparse constant into a constant tensor for indices and a constant tensor for values. We then generate a loop to fill a sparse tensor in coordinate format using the tensors for the indices and the values. Finally, we convert the sparse tensor in coordinate format to the destination sparse tensor format.

Add tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110373
2021-09-27 09:47:29 -07:00
wren romano
221856f5cd [mlir][sparse] Moved a conditional from the RT library to the generated MLIR.
When generating code to add an element to SparseTensorCOO (e.g., when doing dense=>sparse conversion), we used to check for nonzero values on the runtime side, whereas now we generate MLIR code to do that check.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D110121
2021-09-23 12:44:17 -07:00
Aart Bik
236a90802d [mlir][sparse] replace support lib conversion with actual MLIR codegen
Rationale:
Passing in a pointer to the memref data in order to implement the
dense to sparse conversion was a bit too low-level. This revision
improves upon that approach with a cleaner solution of generating
a loop nest in MLIR code itself that prepares the COO object before
passing it to our "swiss army knife" setup.  This is much more
intuitive *and* now also allows for dynamic shapes.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D108491
2021-08-23 14:26:05 -07:00
Aart Bik
05c7f450df [mlir][sparse] add dense to sparse conversion implementation
Implements lowering dense to sparse conversion, for static tensor types only.
First step towards general sparse_tensor.convert support.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D107681
2021-08-09 12:12:39 -07:00
Aart Bik
86e9bc1a34 [mlir][sparse] add option for 32-bit indices in scatter/gather
Controlled by a compiler option: if 32-bit indices can be handled
with zero- or sign-extension alike (viz. no concerns with non-negative
indices), scatter/gather operations can use the more efficient
32-bit SIMD version.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D103632
2021-06-04 16:57:12 -07:00