Commit Graph

258 Commits

Author SHA1 Message Date
bixia1
81e3079d0f [mlir][sparse] Replace sparse_tensor.sort with sparse_tensor.sort_coo for sorting COO tensors.
Add codegen pattern for sparse_tensor.indices_buffer.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140871
2023-01-05 15:42:57 -08:00
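For readers unfamiliar with the terminology in the commit above: "sorting a COO tensor" means ordering its stored entries by their coordinates. Below is a minimal C++ sketch of that idea, assuming an interleaved coordinate buffer plus a parallel values array; the function name and the permutation-based approach are illustrative only, not the codegen that sort_coo actually emits.

```cpp
// Conceptual only: sort COO entries lexicographically by coordinates.
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

void sortCoo(std::vector<uint64_t> &coords,  // nse * rank interleaved coords
             std::vector<double> &values,    // nse values, parallel to coords
             uint64_t rank) {
  uint64_t nse = values.size();
  // Sort a permutation of [0, nse) by lexicographic comparison of records.
  std::vector<uint64_t> perm(nse);
  std::iota(perm.begin(), perm.end(), uint64_t{0});
  std::sort(perm.begin(), perm.end(), [&](uint64_t a, uint64_t b) {
    return std::lexicographical_compare(
        coords.begin() + a * rank, coords.begin() + (a + 1) * rank,
        coords.begin() + b * rank, coords.begin() + (b + 1) * rank);
  });
  // Apply the permutation to both coordinates and values.
  std::vector<uint64_t> newCoords(coords.size());
  std::vector<double> newValues(nse);
  for (uint64_t i = 0; i < nse; ++i) {
    std::copy(coords.begin() + perm[i] * rank,
              coords.begin() + (perm[i] + 1) * rank,
              newCoords.begin() + i * rank);
    newValues[i] = values[perm[i]];
  }
  coords.swap(newCoords);
  values.swap(newValues);
}
```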
bixia1
9bde3d0cc5 [mlir][sparse] Add operator sparse_tensor.indices_buffer.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140762
2023-01-05 09:35:55 -08:00
bixia1
3fdd85da06 [mlir][sparse] Add AOS optimization.
Use an array of structures to represent the indices for the trailing COO region
of a sparse tensor.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140870
2023-01-04 18:16:04 -08:00
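A rough C++ analogy for the AoS layout mentioned above, contrasted with a one-buffer-per-level (SoA) layout; the types and field names are illustrative, not the actual sparse-tensor storage scheme.

```cpp
#include <cstdint>
#include <vector>

// Structure-of-arrays: one index buffer per level. Reading the k-th
// coordinate tuple touches a separate buffer per level.
struct CooIndicesSoA {
  std::vector<uint64_t> rowIdx;  // indices for level 0
  std::vector<uint64_t> colIdx;  // indices for level 1
};

// Array-of-structures: a single buffer of interleaved coordinate tuples
// (i0, i1, i0, i1, ...). The k-th tuple is one contiguous slice.
struct CooIndicesAoS {
  uint64_t rank = 2;
  std::vector<uint64_t> idx;  // size = rank * number-of-stored-entries
};
```

Keeping each coordinate tuple contiguous is what makes sorting and bulk copying of the COO region cheaper in the AoS form.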
Aart Bik
1c7ffe0c38 [mlir][sparse] add test that combines sparse codegen and lowering to llvm struct
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D141006
2023-01-04 12:12:31 -08:00
bixia1
90aa436291 [mlir][sparse] Add layout to the memref for the indices buffers to prepare for the AOS storage optimization for COO regions.
Fix relevant FileCheck tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140742
2023-01-04 07:36:11 -08:00
bixia1
840e2ba336 [mlir][sparse] Use DLT in the mangled function names for insertion.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140484
2022-12-28 08:21:22 -08:00
Aart Bik
431f6a543e [sparse][mlir][vectorization] add support for shift-by-invariant
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D140596
2022-12-27 11:07:13 -08:00
Peiming Liu
988733c600 [mlir][sparse] use sparse_tensor::StorageSpecifier to store dim/memSizes
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140130
2022-12-23 00:47:36 +00:00
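As a loose analogy (not the actual MLIR type), the specifier introduced here can be pictured as a small struct of sizes carried alongside the storage buffers, which the entries above and the StorageSpecifierToLLVM pass below lower to an LLVM struct. The field names are made up for illustration.

```cpp
#include <cstdint>

// Hypothetical picture of a specifier for a 2-D sparse tensor.
struct StorageSpecifier2D {
  uint64_t dimSizes[2];  // runtime sizes of the two tensor dimensions
  uint64_t ptrMemSize;   // number of valid entries in the pointers buffer
  uint64_t idxMemSize;   // number of valid entries in the indices buffer
  uint64_t valMemSize;   // number of valid entries in the values buffer
};
```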
Peiming Liu
083ddffe47 [mlir][sparse] introduce sparse_tensor::StorageSpecifierToLLVM pass
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140122
2022-12-22 22:45:15 +00:00
Peiming Liu
a3672add76 [mlir][sparse] avoid unnecessary tmp COO buffer and convert when lowering ConcatenateOp.
When concatenating along dimension 0, and all inputs/outputs are ordered with the identity dimension
ordering, the concatenated coordinates are already yielded in lexicographic order, so no sort is needed.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140228
2022-12-16 18:26:39 +00:00
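A tiny C++ illustration of why no sort is needed in that case: appending the second tensor's (already sorted) coordinates, with their dim-0 index shifted past the first tensor's dim-0 size, keeps the combined list in lexicographic order. The concrete coordinates below are made up.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

int main() {
  using Coord = std::pair<uint64_t, uint64_t>;
  // Sorted coordinates of two 2x3 sparse tensors.
  std::vector<Coord> a = {{0, 1}, {1, 0}, {1, 2}};
  std::vector<Coord> b = {{0, 0}, {1, 1}};
  // Concatenate along dim 0: shift b's row index by a's dim-0 size (2).
  std::vector<Coord> out = a;
  for (Coord c : b)
    out.push_back({c.first + 2, c.second});
  assert(std::is_sorted(out.begin(), out.end()));  // still in lex order
}
```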
Peiming Liu
509974af02 [mlir][sparse] add folder to sparse_tensor.storage.get operation.
Reviewed By: aartbik, wrengr

Differential Revision: https://reviews.llvm.org/D140172
2022-12-15 23:51:38 +00:00
Peiming Liu
71cc0f1c04 [mlir][sparse] introduce sparse_tensor::StorageSpecifierType and related operations on it
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139961
2022-12-14 22:27:22 +00:00
Aart Bik
70ac598197 [mlir][sparse][simd] only accept proper unit stride subscripts
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D139983
2022-12-14 13:07:14 -08:00
Quentin Colombet
a0220ba721 [mlir][SparseTensor] Add a few tests for sparse vectorization
These tests cover mulf, ori, and subi.

NFC

Differential Revision: https://reviews.llvm.org/D139625
2022-12-14 18:59:26 +00:00
Matthias Springer
be630f07de [mlir][bufferize] Implement BufferizableOpInterface for tensor.empty
The op is not bufferizable but should be analyzable (for `EliminateEmptyTensors`, which uses the bufferization infrastructure).

Also improve debugging functionality and error messages.

Also adds a missing pass to the sparse pipeline. (tensor.empty should be replaced with bufferization.alloc_tensor; it sometimes used to work without that, depending on how the tensor.empty is used. Now we always fail explicitly.)
2022-12-12 14:19:38 +01:00
Peiming Liu
b4e2b7f90c [mlir][sparse] avoid unnecessary sorting when converting sparse tensors.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139744
2022-12-10 00:24:42 +00:00
Peiming Liu
faa75f94f1 [mlir][sparse] reject kernels with non-sparsifiable reduction expression.
To address https://github.com/llvm/llvm-project/issues/59394.

A reduction on the negation of the output tensor is a non-sparsifiable kernel: it creates a cyclic dependency.

This patch rejects those cases instead of crashing.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139659
2022-12-08 23:36:30 +00:00
bixia1
dfcb67190a [mlir][sparse] Fix checks in a test.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D139636
2022-12-08 08:48:55 -08:00
Aart Bik
16aa4e4bd1 [mlir][sparse] introduce sparse vectorization to the sparse compiler pipeline
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D139581
2022-12-07 16:06:53 -08:00
bixia1
19cde2df95 [mlir][sparse] Improve concatenate operation conversion for the case of a result annotated as all dense.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139345
2022-12-07 12:06:50 -08:00
Aart Bik
65074179f2 [mlir][sparse] make fusion for SDDMM more robust
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D139456
2022-12-06 14:32:19 -08:00
wren romano
86f91e45a2 [mlir][sparse] Cleaning up the dim/lvl distinction in SparseTensorConversion
This change cleans up the conversion pass regarding the "dim"-vs-"lvl" and "sizes"-vs-"shape" distinctions of the runtime. A quick synopsis:

* Adds new `SparseTensorStorageBase::getDimSize` method, with `sparseDimSize` wrapper in SparseTensorRuntime.h, and `genDimSizeCall` generator in SparseTensorConversion.cpp
* Changes `genLvlSizeCall` to perform no logic, just generate the function call.
* Adds `createOrFold{Dim,Lvl}Call` functions to handle the logic of replacing `gen{Dim,Lvl}SizeCall` with constants whenever possible. The `createOrFoldDimCall` function replaces the old `sizeFromPtrAtDim`.
* Adds `{get,fill}DimSizes` functions for iterating `createOrFoldDimCall` across the whole type. These functions replace the old `sizesFromPtr`.
* Adds `{get,fill}DimShape` functions for lowering a `ShapedType` into constants. These functions replace the old `sizesFromType`.
* Changes the `DimOp` rewrite to do the right thing.
* Changes the `ExpandOp` rewrite to compute the proper expansion size.

Depends On D138365

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139165
2022-12-05 16:59:42 -08:00
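For context on the "dim"-vs-"lvl" distinction referred to above: dimensions are the tensor's logical axes, while levels are the axes of the storage scheme, related by a mapping (for permutations, literally a reordering). A small C++ sketch of deriving level sizes from dimension sizes under such a mapping; the helper is hypothetical, not the runtime API.

```cpp
#include <cstdint>
#include <vector>

// Size of level l is the size of the dimension it maps to.
std::vector<uint64_t>
lvlSizesFromDimSizes(const std::vector<uint64_t> &dimSizes,
                     const std::vector<uint64_t> &lvlToDim) {
  std::vector<uint64_t> lvlSizes(lvlToDim.size());
  for (uint64_t l = 0; l < lvlToDim.size(); ++l)
    lvlSizes[l] = dimSizes[lvlToDim[l]];
  return lvlSizes;
}
// e.g. CSC-like storage: dimSizes = {3, 4} (rows, cols), lvlToDim = {1, 0}
// yields lvlSizes = {4, 3}.
```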
wren romano
2af2e4dbb7 [mlir][sparse] Breaking up openSparseTensor to better support non-permutations
This commit updates how the `SparseTensorConversion` pass handles `NewOp`.  It breaks up the underlying `openSparseTensor` function into two parts (`SparseTensorReader::create` and `SparseTensorReader::readSparseTensor`) so that the pass can inject code for constructing `lvlSizes` between those two parts.  Migrating the construction of `lvlSizes` out of the runtime and into the pass is a necessary first step toward fully supporting non-permutations.  (The alternative would be for the pass to generate a `FuncOp` for performing the construction and then passing that to the runtime; which doesn't seem to have any benefits over the design of this commit.)  And since the pass now generates the code to call these two functions, this change also removes the `Action::kFromFile` value from the enum used by `_mlir_ciface_newSparseTensor`.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138363
2022-12-02 11:10:57 -08:00
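A self-contained C++ sketch of the resulting two-phase pattern. The `create`/`readSparseTensor` names echo the commit, but the class below is a stand-in with made-up signatures, data, and filename, purely to show where the pass-generated lvlSizes computation slots in.

```cpp
#include <cstdint>
#include <vector>

struct FakeSparseTensorReader {
  // Phase 1: open the file and expose metadata such as dimension sizes.
  static FakeSparseTensorReader create(const char * /*filename*/) { return {}; }
  std::vector<uint64_t> dimSizes() const { return {4, 6}; }  // stand-in data
  // Phase 2: read the entries into a tensor with the given level sizes.
  void readSparseTensor(const std::vector<uint64_t> & /*lvlSizes*/) {}
};

int main() {
  auto reader = FakeSparseTensorReader::create("tensor.mtx");  // hypothetical file
  std::vector<uint64_t> dimSizes = reader.dimSizes();
  // The pass can now emit code here to build lvlSizes (identity mapping in
  // this sketch) before the second phase, which is what the split enables.
  std::vector<uint64_t> lvlSizes = dimSizes;
  reader.readSparseTensor(lvlSizes);
}
```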
Matthias Springer
c1fef4e88a [mlir][bufferization] Make TensorCopyInsertionPass a test pass
TensorCopyInsertion should not have been exposed as a pass. This was a flaw in the original design. It is a preparation step for bufferization and certain transforms (that would otherwise be legal) are illegal between TensorCopyInsertion and actual rewrite to MemRef ops. Therefore, even if broken down as two separate steps internally, they should be exposed as a single pass.

This change affects the sparse compiler, which uses `TensorCopyInsertionPass`. A new `SparsificationAndBufferizationPass` is added to replace all passes in the sparse tensor pipeline from `TensorCopyInsertionPass` until the actual bufferization (rewrite to memref/non-tensor). It is generally unsafe to run arbitrary passes in-between, in particular passes that hoist tensor ops out of loops or change SSA use-def chains along tensor ops.

Differential Revision: https://reviews.llvm.org/D138915
2022-12-02 15:38:02 +01:00
bixia1
2aceadda78 [mlir][sparse] Put the implementation for the insertion operation to subroutines.
Previously, we generated an inlined implementation for the insert operation and
observed an MLIR compile-time increase due to the size of the main routine. We now
put the insert operation implementation in subroutines and leave the inlining
decision to the MLIR compiler.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138957
2022-12-01 13:23:59 -08:00
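In plain C++ terms, this is the classic inlining-vs-outlining trade-off: emit one insertion helper per sparse-tensor format and call it from every insertion site, instead of duplicating the insertion logic inline. The helper below is a simplified stand-in, not the generated code.

```cpp
#include <cstdint>
#include <vector>

struct CsrStorage {
  std::vector<uint64_t> colIdx;
  std::vector<double> values;
};

// One outlined "subroutine" shared by every insertion site for this format;
// the compiler can still decide to inline it later.
void insertCompressedF64(CsrStorage &s, uint64_t col, double v) {
  s.colIdx.push_back(col);
  s.values.push_back(v);
}

void mainRoutine(CsrStorage &s) {
  insertCompressedF64(s, 0, 1.0);  // call site 1
  insertCompressedF64(s, 3, 2.5);  // call site 2: no duplicated insert code
}
```

The entry dated 2022-12-28 above, which mangles the dimension level types (DLT) into the insertion function names, addresses how such per-format helpers are named.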
River Riddle
aef89c8b41 [mlir] Cleanup lingering problems surrounding attribute/type aliases
This commit refactors attribute/type alias generation to be similar to how
we do it for operations, i.e. we generate aliases based on what is
actually necessary when printing the IR (using a dummy printer for alias
collection). This allows for generating aliases only when necessary, and
also allows for proper propagation of when a nested alias can be deferred.
This also necessitated a fix for location parsing to actually parse aliases
instead of ignoring them.

Fixes #59041

Differential Revision: https://reviews.llvm.org/D138886
2022-11-30 17:02:54 -08:00
Aart Bik
2fda620711 [mlir][sparse][vectorization] implement "index" vectorization
This adds the capability to vectorize computations like a[i] = i.
This also generalizes the supported unary and binary ops and
adds a test for each to ensure actual SIMD code can result.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D138956
2022-11-30 11:40:05 -08:00
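A plain C++ sketch of what "index" vectorization means for a[i] = i: each vector iteration stores a whole step vector of consecutive indices. VL = 4 is an arbitrary width chosen for the sketch; the actual pass emits vector-dialect ops, not scalar loops.

```cpp
#include <cstddef>
#include <cstdint>

void scalarIndex(int64_t *a, size_t n) {
  for (size_t i = 0; i < n; ++i)
    a[i] = static_cast<int64_t>(i);
}

void vectorIndex(int64_t *a, size_t n) {
  constexpr size_t VL = 4;
  size_t i = 0;
  for (; i + VL <= n; i += VL)
    for (size_t lane = 0; lane < VL; ++lane)  // one "step vector" store
      a[i + lane] = static_cast<int64_t>(i + lane);
  for (; i < n; ++i)                          // scalar cleanup loop
    a[i] = static_cast<int64_t>(i);
}
```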
bixia1
101a0c84f7 [mlir][sparse] Improve concatenate operator rewriting for results whose dimensions are all annotated dense.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138823
2022-11-29 12:25:52 -08:00
Hanhan Wang
0a1569a400 [mlir][NFC] Remove trailing whitespaces from *.td and *.mlir files.
This is generated by running

```
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.td
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.mlir
```

Reviewed By: rriddle, dcaballe

Differential Revision: https://reviews.llvm.org/D138866
2022-11-28 15:26:30 -08:00
bixia1
aedf5d5831 [mlir][sparse] Improve concatenate operator rewriting for dense tensor results.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D138465
2022-11-28 07:56:01 -08:00
Aart Bik
cb82d375a8 [mlir][sparse][vectorization] optimize reduction chains
A few more dots on the i's of the sparse vectorizer.
Also makes reduction matching less brittle.

Reviewed By: qcolombet

Differential Revision: https://reviews.llvm.org/D138513
2022-11-26 12:40:51 -08:00
bixia1
974b4bf9fd [mlir][sparse] Add expand_symmetry attribute to the new operator.
The attribute tells the operator to handle symmetric structures for 2D tensors.
By default, the operator assumes the input tensor is not symmetric.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138230
2022-11-23 16:32:15 -08:00
Peiming Liu
e5e4deca5e [mlir][sparse] support affine expression on sparse dimensions (codegen implementation)
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138172
2022-11-23 00:04:55 +00:00
Peiming Liu
4fd3a1201e [mlir][sparse] support constant affine expression on dense dimension
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138170
2022-11-22 22:34:42 +00:00
bixia1
6c01b5cdad [mlir][sparse] Fix a bug in concatenate operator rewriting.
When calculating the dynamic dimensions for the concatenate result, we
shouldn't accumulate the sizes for the non-concatenating dimensions.

Reviewed By: aartbik, Peiming

Differential Revision: https://reviews.llvm.org/D138436
2022-11-22 08:17:35 -08:00
Peiming Liu
fb28733541 [mlir][sparse] support affine expression on dense dimensions (except constant affine)
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138169
2022-11-22 01:31:19 +00:00
Aart Bik
914eff5a8c [mlir][sparse][vector] ensure loop peeling to remove vector masks works
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D138343
2022-11-21 16:51:06 -08:00
Aart Bik
99b3849d89 [mlir][sparse] introduce vectorization pass for sparse loops
This brings back previous SIMD functionality, but in a separate pass.
The idea is to improve this new pass incrementally, going beyond for-loops
to while-loops for co-iteration as well (masking), while introducing new
abstractions to make the lowering more progressive. The separation of
sparsification and vectorization is a very good first step on this journey.

Also brings back ArmSVE support

Still to be fine-tuned:
  + use of "index" in SIMD loop (viz. a[i] = i)
  + check that all ops really have SIMD support
  + check all forms of reductions
  + chain reduction SIMD values

Reviewed By: dcaballe

Differential Revision: https://reviews.llvm.org/D138236
2022-11-21 16:12:12 -08:00
Lei Zhang
9bb633741a [mlir][bufferization] Support general Attribute as memory space
MemRef has been accepting a general Attribute as memory space for
a long time. This commit updates the bufferization side to catch up,
which allows downstream users to plug in customized symbolic memory
spaces. This also eliminates quite a few calls to `getMemorySpaceAsInt`,
which is deprecated.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D138330
2022-11-21 09:40:50 -05:00
bixia1
c374ef2eb7 [mlir][sparse] Extend the operator new rewriter to handle isSymmetric flag.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138214
2022-11-17 10:48:24 -08:00
bixia1
f81f0cb75a [mlir][sparse] Split SparseTensorRewrite into PreSparsificationRewrite and PostSparsificationRewrite.
Reviewed By: aartbik, wrengr

Differential Revision: https://reviews.llvm.org/D138153
2022-11-17 07:13:55 -08:00
Peiming Liu
709e23a1c1 [mlir][sparse] fix CHECK test error on windows.
Reviewed By: aartbik, bixia

Differential Revision: https://reviews.llvm.org/D138147
2022-11-16 20:29:21 +00:00
Aart Bik
8474a20b1f [mlir][sparse] bring CHECK tests back (but disabled)
We have a strange nondeterministic failure on Windows
where we do not get the desired fill statement in the resulting
IR. Probably something wrong with our option passing or
pass construction?

https://github.com/llvm/llvm-project/issues/59016#issuecomment-1316410249

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D138089
2022-11-16 07:55:50 -08:00
Mahesh Ravishankar
fcaf6dd597 [mlir][Transforms] CSE of ops with a single block.
Currently CSE does not support CSE of ops with regions. This patch
extends the CSE support to ops with a single region.

Differential Revision: https://reviews.llvm.org/D134306
Depends on D137857
2022-11-16 02:55:43 +00:00
Mahesh Ravishankar
fc367dfa67 [mlir] Remove Transforms/SideEffectUtils.h and move the methods into Interface/SideEffectInterfaces.h.
The methods in `SideEffectUtils.h` (and their implementations in
`SideEffectUtils.cpp`) seem to have similar intent to methods already
existing in `SideEffectInterfaces.h`. Move the declarations (and
implementations) from `SideEffectUtils.h` (and `SideEffectUtils.cpp`)
into `SideEffectInterfaces.h` (and `SideEffectInterfaces.cpp`).

Also drop the `SideEffectInterface::hasNoEffect` method in favor of
`mlir::isMemoryEffectFree` which actually recurses into the operation
instead of just relying on the `hasRecursiveMemoryEffectTrait`
exclusively.

Differential Revision: https://reviews.llvm.org/D137857
2022-11-15 20:07:35 +00:00
wren romano
c518745bba [mlir][sparse] Making way for SparseTensorRuntime to support non-permutations
Systematically updates the SparseTensorRuntime to properly distinguish tensor-dimensions from storage-levels (and their associated ranks, shapes, sizes, indices, etc).  With a few exceptions which are noted in the code, this ensures the runtime has all the **semantic** changes necessary to support non-permutations.

(Whereas **operationally**, since we're still using `std::vector<uint64_t>` to represent the mappings, there's no way to pass in any interesting non-permutations.  Changing the representation to `std::function` will be done in a separate differential.)

Depends On D137680

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137681
2022-11-14 13:48:41 -08:00
bixia1
4f729d5a70 [mlir][sparse] Add rewriting rules for sparse_tensor.sort_coo.
Refactor the rewriting of sparse_tensor.sort to support the implementation of
sparse_tensor.sort_coo.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137522
2022-11-14 08:48:53 -08:00
bixia1
57416d872a [mlir][sparse] Fix a bug in rewriting dense2dense convert op.
Permutation wasn't handled correctly. Add a test for the rewriting.

Extend an integration test to run with enable_runtime_library=false to
also test the rewriting.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D137845
2022-11-11 09:11:54 -08:00
bixia1
ccd1a5b9cb [mlir][sparse] Fix a bug in rewriting for the convert op.
The code to retrieve the number of entries isn't correct.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D137795
2022-11-10 14:30:51 -08:00
bixia1
7276b643f8 [mlir][sparse] Add option enable-buffer-initialization to the sparse-tensor-codegen pass.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137733
2022-11-10 07:23:33 -08:00