Use an array of structures to represent the indices for the trailing COO region
of a sparse tensor.
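For illustration (a minimal sketch with made-up type names, not the actual SparseTensorStorage classes), the difference between the two layouts for a 2-level trailing COO region looks like this:
```cpp
#include <cstdint>
#include <vector>

// Structure-of-arrays: one separate index vector per level; the coordinates
// of a single stored element are scattered across the vectors.
struct SoAIndices {
  std::vector<uint64_t> lvl0; // e.g. row coordinates
  std::vector<uint64_t> lvl1; // e.g. column coordinates
};

// Array-of-structures: all coordinates of one stored element are kept
// together, so moving, inserting, or sorting an element touches one record.
struct AoSIndices {
  struct Coords { uint64_t i0, i1; };
  std::vector<Coords> elements;
};
```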
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D140870
When concatenating along dimension 0, and all inputs/outputs are ordered with the identity dimension ordering,
the concatenated coordinates are yielded in lexicographic order, so there is no need to sort.
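A minimal sketch of the reasoning (illustrative helper, not the pass code), assuming each input's coordinates are already in lexicographic order:
```cpp
#include <cstdint>
#include <utility>
#include <vector>

using Coord = std::vector<uint64_t>; // one coordinate tuple, dimension 0 first

// Appending the inputs in order, with each input's dim-0 coordinate shifted by
// the running offset, keeps the whole stream lexicographically sorted because
// the shifted dim-0 ranges are disjoint and increasing.
std::vector<Coord> concatDim0(const std::vector<std::vector<Coord>> &inputs,
                              const std::vector<uint64_t> &dim0Sizes) {
  std::vector<Coord> out;
  uint64_t offset = 0;
  for (size_t t = 0; t < inputs.size(); ++t) {
    for (Coord c : inputs[t]) {
      c[0] += offset;
      out.push_back(std::move(c));
    }
    offset += dim0Sizes[t];
  }
  return out;
}
```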
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D140228
The op is not bufferizable but should be analyzable (for `EliminateEmptyTensors`, which uses the bufferization infrastructure).
Also improves debugging functionality and error messages.
Also adds a missing pass to the sparse pipeline. (tensor.empty should be replaced with bufferization.alloc_tensor, but it sometimes used to work without it, depending on how the tensor.empty is used. Now we always fail explicitly.)
This change cleans up the conversion pass regarding the "dim"-vs-"lvl" and "sizes"-vs-"shape" distinctions of the runtime. A quick synopsis:
* Adds a new `SparseTensorStorageBase::getDimSize` method, with a `sparseDimSize` wrapper in SparseTensorRuntime.h, and a `genDimSizeCall` generator in SparseTensorConversion.cpp.
* Changes `genLvlSizeCall` to perform no logic, just generate the function call.
* Adds `createOrFold{Dim,Lvl}Call` functions to handle the logic of replacing `gen{Dim,Lvl}SizeCall` with constants whenever possible (see the sketch after this list). The `createOrFoldDimCall` function replaces the old `sizeFromPtrAtDim`.
* Adds `{get,fill}DimSizes` functions for iterating `createOrFoldDimCall` across the whole type. These functions replace the old `sizesFromPtr`.
* Adds `{get,fill}DimShape` functions for lowering a `ShapedType` into constants. These functions replace the old `sizesFromType`.
* Changes the `DimOp` rewrite to correctly query the dimension size (rather than the level size).
* Changes the `ExpandOp` rewrite to compute the proper expansion size.
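As a rough illustration of the fold-or-call logic named above (a sketch with an assumed `genDimSizeCall` signature, not the actual code in SparseTensorConversion.cpp):
```cpp
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinTypes.h"
using namespace mlir;

// Assumed signature for the generator named above; the real one lives in
// SparseTensorConversion.cpp.
static Value genDimSizeCall(OpBuilder &builder, Location loc, Value tensor,
                            unsigned dim);

// Fold the dimension-size query to a constant when the size is static,
// otherwise fall back to the generated sparseDimSize runtime call.
static Value createOrFoldDimCallSketch(OpBuilder &builder, Location loc,
                                       ShapedType stt, Value tensor,
                                       unsigned dim) {
  int64_t size = stt.getDimSize(dim);
  if (!ShapedType::isDynamic(size))
    return builder.create<arith::ConstantIndexOp>(loc, size);
  return genDimSizeCall(builder, loc, tensor, dim);
}
```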
Depends On D138365
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D139165
This commit updates how the `SparseTensorConversion` pass handles `NewOp`. It breaks up the underlying `openSparseTensor` function into two parts (`SparseTensorReader::create` and `SparseTensorReader::readSparseTensor`) so that the pass can inject code for constructing `lvlSizes` between those two parts.

Migrating the construction of `lvlSizes` out of the runtime and into the pass is a necessary first step toward fully supporting non-permutations. (The alternative would be for the pass to generate a `FuncOp` for performing the construction and then pass that to the runtime, which doesn't seem to have any benefits over the design of this commit.)

Since the pass now generates the code to call these two functions, this change also removes the `Action::kFromFile` value from the enum used by `_mlir_ciface_newSparseTensor`.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138363
TensorCopyInsertion should not have been exposed as a pass. This was a flaw in the original design. It is a preparation step for bufferization and certain transforms (that would otherwise be legal) are illegal between TensorCopyInsertion and actual rewrite to MemRef ops. Therefore, even if broken down as two separate steps internally, they should be exposed as a single pass.
This change affects the sparse compiler, which uses `TensorCopyInsertionPass`. A new `SparsificationAndBufferizationPass` is added to replace all passes in the sparse tensor pipeline from `TensorCopyInsertionPass` until the actual bufferization (rewrite to memref/non-tensor). It is generally unsafe to run arbitrary passes in-between, in particular passes that hoist tensor ops out of loops or change SSA use-def chains along tensor ops.
Differential Revision: https://reviews.llvm.org/D138915
Previously, we generated an inlined implementation for the insert operation and
observed an MLIR compile-time increase due to the size of the main routine. We now
put the insert operation implementation in subroutines and leave the inlining
decision to the MLIR compiler.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138957
This commit refactors attribute/type alias generation to be similar to how
we do it for operations, i.e. we generate aliases based on what is
actually necessary when printing the IR (using a dummy printer for alias
collection). This allows for generating aliases only when necessary, and
also allows for proper propagation of when a nested alias can be deferred.
This also necessitated a fix for location parsing to actually parse aliases
instead of ignoring them.
Fixes #59041
Differential Revision: https://reviews.llvm.org/D138886
This adds the capability to vectorize computations like a[i] = i.
This also generalizes the supported unary and binary ops and
adds a test for each to ensure actual SIMD code can result.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D138956
This is generated by running
```
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.td
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.mlir
```
Reviewed By: rriddle, dcaballe
Differential Revision: https://reviews.llvm.org/D138866
A few more dots on the i's of the sparse vectorizer.
Also makes reduction matching less brittle.
Reviewed By: qcolombet
Differential Revision: https://reviews.llvm.org/D138513
The attribute tells the operator to handle symmetric structures for 2D tensors.
By default, the operator assumes the input tensor is not symmetric.
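As a rough sketch of what handling a symmetric input means (the callback is hypothetical, not the operator's actual interface):
```cpp
#include <cstdint>
#include <functional>

// For a symmetric 2-D structure, each stored off-diagonal entry (i, j, v)
// also contributes its mirror (j, i, v); diagonal entries are added once.
void expandSymmetric(uint64_t i, uint64_t j, double v,
                     const std::function<void(uint64_t, uint64_t, double)> &add) {
  add(i, j, v);
  if (i != j)
    add(j, i, v);
}
```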
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138230
When calculating the dynamic dimensions for the concatenate result, we
shouldn't accumulate the sizes for the non-concatenating dimensions.
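A minimal sketch of the intended computation (illustrative only, not the pass code):
```cpp
#include <cstdint>
#include <vector>

// Only the concatenation dimension accumulates the input sizes; every other
// dimension keeps the (matching) size shared by all inputs.
std::vector<uint64_t>
concatResultShape(const std::vector<std::vector<uint64_t>> &inputShapes,
                  uint64_t concatDim) {
  std::vector<uint64_t> result = inputShapes.front();
  for (size_t i = 1; i < inputShapes.size(); ++i)
    result[concatDim] += inputShapes[i][concatDim];
  return result;
}
```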
Reviewed By: aartbik, Peiming
Differential Revision: https://reviews.llvm.org/D138436
This brings back previous SIMD functionality, but in a separate pass.
The idea is to improve this new pass incrementally, going beyond for-loops
to while-loops for co-iteration as well (masking), while introducing new
abstractions to make the lowering more progressive. The separation of
sparsification and vectorization is a very good first step on this journey.
Also brings back ArmSVE support
Still to be fine-tuned:
+ use of "index" in SIMD loop (viz. a[i] = i)
+ check that all ops really have SIMD support
+ check all forms of reductions
+ chain reduction SIMD values
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D138236
MemRef has been accepting a general Attribute as memory space for
a long time. This commit updates the bufferization side to catch up,
which allows downstream users to plug in customized symbolic memory
spaces. It also eliminates quite a few calls to `getMemorySpaceAsInt`,
which is deprecated.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D138330
Currently CSE does not support CSE of ops with regions. This patch
extends the CSE support to ops with a single region.
Differential Revision: https://reviews.llvm.org/D134306
Depends on D137857
The methods in `SideEffectUtils.h` (and their implementations in
`SideEffectUtils.cpp`) seem to have similar intent to methods already
existing in `SideEffectInterfaces.h`. Move the declarations (and
implementations) from `SideEffectUtils.h` (and `SideEffectUtils.cpp`)
into `SideEffectInterfaces.h` (and `SideEffectInterfaces.cpp`).
Also drop the `SideEffectInterface::hasNoEffect` method in favor of
`mlir::isMemoryEffectFree`, which actually recurses into the operation
instead of relying exclusively on the `hasRecursiveMemoryEffectTrait`.
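A simplified sketch of that recursive check (illustrative only, not the actual `mlir::isMemoryEffectFree` implementation; the interface and trait spellings are the standard MLIR ones):
```cpp
#include "mlir/IR/Operation.h"
#include "mlir/Interfaces/SideEffectInterfaces.h"
using namespace mlir;

static bool isMemoryEffectFreeSketch(Operation *op) {
  if (auto memInterface = dyn_cast<MemoryEffectOpInterface>(op)) {
    // The op itself must declare no memory effects.
    SmallVector<MemoryEffects::EffectInstance> effects;
    memInterface.getEffects(effects);
    if (!effects.empty())
      return false;
  } else if (!op->hasTrait<OpTrait::HasRecursiveMemoryEffects>()) {
    // No interface and no recursive-effects trait: be conservative.
    return false;
  }
  // If effects recursively come from nested ops, actually check those ops
  // instead of trusting the trait alone.
  if (op->hasTrait<OpTrait::HasRecursiveMemoryEffects>())
    for (Region &region : op->getRegions())
      for (Block &block : region)
        for (Operation &nested : block)
          if (!isMemoryEffectFreeSketch(&nested))
            return false;
  return true;
}
```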
Differential Revision: https://reviews.llvm.org/D137857
Systematically updates the SparseTensorRuntime to properly distinguish tensor-dimensions from storage-levels (and their associated ranks, shapes, sizes, indices, etc). With a few exceptions which are noted in the code, this ensures the runtime has all the **semantic** changes necessary to support non-permutations.
(Whereas **operationally**, since we're still using `std::vector<uint64_t>` to represent the mappings, there's no way to pass in any interesting non-permutations. Changing the representation to `std::function` will be done in a separate differential.)
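For illustration (hypothetical helper, with the level-to-dimension convention stated in the comment; not the runtime's actual code):
```cpp
#include <cstdint>
#include <vector>

// With the mapping still a permutation stored as std::vector<uint64_t>, where
// lvl2dim[l] names the tensor dimension stored at storage level l, level
// sizes are obtained from dimension sizes by indexing through the mapping.
std::vector<uint64_t>
lvlSizesFromDimSizes(const std::vector<uint64_t> &dimSizes,
                     const std::vector<uint64_t> &lvl2dim) {
  std::vector<uint64_t> lvlSizes(lvl2dim.size());
  for (uint64_t l = 0; l < lvl2dim.size(); ++l)
    lvlSizes[l] = dimSizes[lvl2dim[l]];
  return lvlSizes;
}
```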
Depends On D137680
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137681
Refactor the rewriting of sparse_tensor.sort to support the implementation of
sparse_tensor.sort_coo.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137522
Permutation wasn't handled correctly. Add a test for the rewriting.
Extend an integration test to run with enable_runtime_library=false to
also test the rewriting.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D137845