Commit Graph

65 Commits

Christian Ulmann
dcae289d3a [MLIR][SparseTensor] Introduce opaque pointers in LLVM dialect lowering (#70570)
This commit changes the SparseTensor LLVM dialect lowering from using
`llvm.ptr<i8>` to `llvm.ptr`. This change ensures that the lowering now
properly relies on opaque pointers instead of working with already
type-erased i8 pointers.
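As a hedged sketch (not taken from the patch itself; the call shape is assumed), the effect on generated IR looks roughly like:
```
// Before: the lowering produced already type-erased i8 pointers.
%p0 = call @newSparseTensor(%args) : (!llvm.ptr<i8>) -> !llvm.ptr<i8>
// After: fully opaque pointers.
%p1 = call @newSparseTensor(%args) : (!llvm.ptr) -> !llvm.ptr
```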
2023-10-31 07:34:49 +01:00
Peiming Liu
f82bee1367 [mlir][sparse] split post-sparsification-rewriting into two passes. (#70727) 2023-10-30 15:22:21 -07:00
Peiming Liu
7d608ee2bb [mlir][sparse] unify sparse_tensor.out rewriting rules (#70518) 2023-10-27 16:46:58 -07:00
Peiming Liu
c780352de9 [mlir][sparse] implement sparse_tensor.lvl operation. (#69993) 2023-10-24 13:23:28 -07:00
Aart Bik
d392073f67 [mlir][sparse] simplify reader construction of new sparse tensor (#69036)
Making the materialize-from-reader method part of the Swiss army knife
suite again removes a lot of redundant boilerplate code and unifies the
parameter setup into a single centralized utility. Furthermore, we now
have minimized the number of entry points into the library that need a
non-permutation map setup, simplifying what comes next.
2023-10-16 10:25:37 -07:00
Aart Bik
2045cca0c3 [mlir][sparse] add a forwarding insertion to SparseTensorStorage (#68939) 2023-10-12 21:03:07 -07:00
Aart Bik
b7188d2877 [mlir][sparse] replace specialized buffer setup with util code (#68461)
This completely centralizes all setup related to dim2lvl and lvl2dim
for the runtime library (and even parts of direct IR codegen) into one
place! All of it is compatible with the MapRef data structure that should be
used in all remaining clients of dim2lvl and lvl2dim.

NOTE: the convert_x2y.mlir tests were becoming too overloaded
      so I decided to bring them back to the basics; if, e.g.,
      more coverage of the foreach is required, they should
      go into isolated smaller tests.
2023-10-09 08:50:59 -07:00
Aart Bik
d3af65358d [mlir][sparse] introduce MapRef, unify conversion/codegen for reader (#68360)
This revision introduces a MapRef, which will support a future
generalization beyond permutations (e.g. block sparsity). This revision
also unifies the conversion/codegen paths for the sparse_tensor.new
operation from file (e.g., the readers). Note that more unification is
planned as well as general affine dim2lvl and lvl2dim (all marked with
TODOs).
2023-10-06 13:42:01 -07:00
Tobias Gysi
85175edd4e [mlir][llvm] Replace NullOp by ZeroOp (#67183)
This revision replaces the LLVM dialect NullOp by the recently
introduced ZeroOp. The ZeroOp is more generic in the sense that it
represents zero values of any LLVM type rather than null pointers only.

This is a follow-up to https://github.com/llvm/llvm-project/pull/65508
2023-09-25 11:11:52 +02:00
Aart Bik
3e4a8c2c7d [mlir][sparse] remove most bufferization.alloc_tensor ops from sparse (#66847)
The only ones left need actual deprecation in the bufferization module.
2023-09-20 09:51:08 -07:00
Yinying Li
3dc621124f [mlir][sparse] Migrate tests to use new syntax (#66543)
**COO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0
: compressed(nonunique), d1 : singleton)`
`lvlTypes = [ "compressed_nu_no", "singleton_no" ]` to `map = (d0, d1)
-> (d0 : compressed(nonunique, nonordered), d1 : singleton(nonordered))`

**SortedCOO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0
: compressed(nonunique), d1 : singleton)`

**BCOO**
`lvlTypes = [ "dense", "compressed_hi_nu", "singleton" ]` to `map = (d0,
d1, d2) -> (d0 : dense, d1 : compressed(nonunique, high), d2 :
singleton)`

**BCSR**
`lvlTypes = [ "compressed", "compressed", "dense", "dense" ], dimToLvl =
affine_map<(d0, d1) -> (d0 floordiv 2, d1 floordiv 3, d0 mod 2, d1 mod
3)>` to
`map = ( i, j ) ->
      ( i floordiv 2 : compressed,
        j floordiv 3 : compressed,
        i mod 2 : dense,
        j mod 3 : dense
      )`

**Tensor and other supported formats (e.g., CCC, CDC, CCCC)**

Currently, ELL and slice are not yet supported in the new syntax; the
CHECK tests will be updated once printing is switched to output the new
syntax.

Previous PRs: #66146, #66309, #66443
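For orientation, a minimal sketch of the new `map` syntax in a complete encoding attribute, using the COO mapping quoted above (the attribute name and element type are illustrative only):
```
#COO = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
}>
// A tensor type carrying this encoding would be written as:
//   tensor<?x?xf64, #COO>
```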
2023-09-15 16:12:20 -04:00
Yinying Li
e2e429d994 [mlir][sparse] Migrate more tests to new syntax (#66309)
CSR:
`lvlTypes = [ "dense", "compressed" ]` to `map = (d0, d1) -> (d0 :
dense, d1 : compressed)`

CSC:
`lvlTypes = [ "dense", "compressed" ], dimToLvl = affine_map<(d0, d1) ->
(d1, d0)>` to `map = (d0, d1) -> (d1 : dense, d0 : compressed)`

This is an ongoing effort: #66146
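A side-by-side sketch of the CSC case described above, old syntax versus new:
```
// Old:
#CSC = #sparse_tensor.encoding<{
  lvlTypes = [ "dense", "compressed" ],
  dimToLvl = affine_map<(d0, d1) -> (d1, d0)>
}>
// New:
#CSC = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d1 : dense, d0 : compressed)
}>
```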
2023-09-14 12:21:13 -04:00
Yinying Li
dbe1be9aa4 [mlir][sparse] Migrate tests to use new syntax (#66146)
`lvlTypes = [ "compressed" ]` to `map = (d0) -> (d0 : compressed)`
`lvlTypes = [ "dense" ]` to `map = (d0) -> (d0 : dense)`
2023-09-13 11:41:25 -04:00
wren romano
76647fce13 [mlir][sparse] Combining dimOrdering+higherOrdering fields into dimToLvl
This is a major step along the way towards the new STEA design.  While a great deal of this patch is simple renaming, there are several significant changes as well.  I've done my best to ensure that this patch retains the previous behavior and error-conditions, even though those are at odds with the eventual intended semantics of the `dimToLvl` mapping.  Since the majority of the compiler does not yet support non-permutations, I've also added explicit assertions in places that previously had implicitly assumed it was dealing with permutations.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D151505
2023-05-30 15:19:50 -07:00
wren romano
a0615d020a [mlir][sparse] Renaming the STEA field dimLevelType to lvlTypes
This commit is part of the migration towards the new STEA syntax/design.  In particular, this commit includes the following changes:
* Renaming compiler-internal functions/methods:
  * `SparseTensorEncodingAttr::{getDimLevelType => getLvlTypes}`
  * `Merger::{getDimLevelType => getLvlType}` (for consistency)
  * `sparse_tensor::{getDimLevelType => buildLevelType}` (to help reduce confusion vs actual getter methods)
* Renaming external facets to match:
  * the STEA parser and printer
  * the C and Python bindings
  * PyTACO

However, the actual renaming of the `DimLevelType` itself (along with all the "dlt" names) will be handled in a separate commit.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D150330
2023-05-17 14:24:09 -07:00
wren romano
84cd51bb97 [mlir][sparse] Renaming "pointer/index" to "position/coordinate"
The old "pointer/index" names often cause confusion since these names clash with names of unrelated things in MLIR; so this change rectifies this by changing everything to use "position/coordinate" terminology instead.

In addition to the basic terminology, there have also been various conventions for making certain distinctions like: (1) the overall storage for coordinates in the sparse-tensor, vs the particular collection of coordinates of a given element; and (2) particular coordinates given as a `Value` or `TypedValue<MemRefType>`, vs particular coordinates given as `ValueRange` or similar.  I have striven to maintain these distinctions
as follows:

  * "p/c" are used for individual position/coordinate values, when there is no risk of confusion.  (Just like we use "d/l" to abbreviate "dim/lvl".)

  * "pos/crd" are used for individual position/coordinate values, when a longer name is helpful to avoid ambiguity or to form compound names (e.g., "parentPos").  (Just like we use "dim/lvl" when we need a longer form of "d/l".)

    I have also used these forms for a handful of compound names where the old name had been using a three-letter form previously, even though a longer form would be more appropriate.  I've avoided renaming these to use a longer form purely for expediency's sake, since changing them would require a cascade of other renamings.  They should be updated to follow the new naming scheme, but that can be done in future patches.

  * "coords" is used for the complete collection of crd values associated with a single element.  In the runtime library this includes both `std::vector` and raw pointer representations.  In the compiler, this is used specifically for buffer variables with C++ type `Value`, `TypedValue<MemRefType>`, etc.

    The bare form "coords" is discouraged, since it fails to make the dim/lvl distinction; so the compound names "dimCoords/lvlCoords" should be used instead.  (Though there may exist a rare few cases where it is appropriate to be intentionally ambiguous about what coordinate-space the coords live in; in which case the bare "coords" is appropriate.)

    There is seldom the need for the pos variant of this notion.  In most circumstances we use the term "cursor", since the same buffer is reused for a 'moving' pos-collection.

  * "dcvs/lcvs" is used in the compiler as the `ValueRange` analogue of "dimCoords/lvlCoords".  (The "vs" stands for "`Value`s".)  I haven't found the need for it, but "pvs" would be the obvious name for a pos-`ValueRange`.

    The old "ind"-vs-"ivs" naming scheme does not seem to have been sustained in more recent code, which instead prefers other mnemonics (e.g., adding "Buf" to the end of the names for `TypeValue<MemRefType>`).  I have cleaned up a lot of these to follow the "coords"-vs-"cvs" naming scheme, though haven't done an exhaustive cleanup.

  * "positions/coordinates" are used for larger collections of pos/crd values; in particular, these are used when referring to the complete sparse-tensor storage components.

    I also prefer to use these unabbreviated names in the documentation, unless there is some specific reason why using the abbreviated forms helps resolve ambiguity.

In addition to making this terminology change, this change also does some cleanup along the way:
  * correcting the dim/lvl terminology in certain places.
  * adding `const` when it requires no other code changes.
  * miscellaneous cleanup that was entailed in order to make the proper distinctions.  Most of these are in CodegenUtils.{h,cpp}

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D144773
2023-03-06 12:23:33 -08:00
wren romano
86f91e45a2 [mlir][sparse] Cleaning up the dim/lvl distinction in SparseTensorConversion
This change cleans up the conversion pass re the "dim"-vs-"lvl" and "sizes"-vs-"shape" distinctions of the runtime. A quick synopsis includes:

* Adds new `SparseTensorStorageBase::getDimSize` method, with `sparseDimSize` wrapper in SparseTensorRuntime.h, and `genDimSizeCall` generator in SparseTensorConversion.cpp
* Changes `genLvlSizeCall` to perform no logic, just generate the function call.
* Adds `createOrFold{Dim,Lvl}Call` functions to handle the logic of replacing `gen{Dim,Lvl}SizeCall` with constants whenever possible. The `createOrFoldDimCall` function replaces the old `sizeFromPtrAtDim`.
* Adds `{get,fill}DimSizes` functions for iterating `createOrFoldDimCall` across the whole type. These functions replace the old `sizesFromPtr`.
* Adds `{get,fill}DimShape` functions for lowering a `ShapedType` into constants. These functions replace the old `sizesFromType`.
* Changes the `DimOp` rewrite to do the right thing.
* Changes the `ExpandOp` rewrite to compute the proper expansion size.
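As a hedged illustration of the `DimOp` rewrite described in the list above (the runtime wrapper name is taken from the description; exact types are assumed):
```
// A tensor.dim query on a sparse tensor with a dynamic dimension lowers
// to a runtime size query; static dimensions fold to constants instead.
%c0 = arith.constant 0 : index
%d0 = call @sparseDimSize(%tensor, %c0) : (!llvm.ptr<i8>, index) -> index
```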

Depends On D138365

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139165
2022-12-05 16:59:42 -08:00
wren romano
2af2e4dbb7 [mlir][sparse] Breaking up openSparseTensor to better support non-permutations
This commit updates how the `SparseTensorConversion` pass handles `NewOp`.  It breaks up the underlying `openSparseTensor` function into two parts (`SparseTensorReader::create` and `SparseTensorReader::readSparseTensor`) so that the pass can inject code for constructing `lvlSizes` between those two parts.  Migrating the construction of `lvlSizes` out of the runtime and into the pass is a necessary first step toward fully supporting non-permutations.  (The alternative would be for the pass to generate a `FuncOp` for performing the construction and then passing that to the runtime; which doesn't seem to have any benefits over the design of this commit.)  And since the pass now generates the code to call these two functions, this change also removes the `Action::kFromFile` value from the enum used by `_mlir_ciface_newSparseTensor`.
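Schematically, and with purely hypothetical generated-call names standing in for the two new entry points, the pass now emits something like:
```
// First create the reader...
%reader = call @createSparseTensorReader(%fileName)
    : (!llvm.ptr<i8>) -> !llvm.ptr<i8>
// ...then pass-injected code constructs the lvlSizes buffer here...
// ...and finally the tensor is materialized from the reader.
%tensor = call @readSparseTensorFromReader(%reader, %lvlSizes)
    : (!llvm.ptr<i8>, memref<?xindex>) -> !llvm.ptr<i8>
```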

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138363
2022-12-02 11:10:57 -08:00
wren romano
c518745bba [mlir][sparse] Making way for SparseTensorRuntime to support non-permutations
Systematically updates the SparseTensorRuntime to properly distinguish tensor-dimensions from storage-levels (and their associated ranks, shapes, sizes, indices, etc).  With a few exceptions which are noted in the code, this ensures the runtime has all the **semantic** changes necessary to support non-permutations.

(Whereas **operationally**, since we're still using `std::vector<uint64_t>` to represent the mappings, there's no way to pass in any interesting non-permutations.  Changing the representation to `std::function` will be done in a separate differential.)

Depends On D137680

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137681
2022-11-14 13:48:41 -08:00
Aart Bik
0f3e4d1afa [mlir][sparse] lower number of entries op to actual code
Works along both the runtime path and the pure codegen path.
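A sketch of the two lowerings (the runtime entry point name is hypothetical; the codegen path queries the values buffer directly):
```
// Input:
%n = sparse_tensor.number_of_entries %t : tensor<?x?xf64, #CSR>
// Runtime path: a call into the support library.
%n0 = call @sparseNumberOfEntries(%ptr) : (!llvm.ptr<i8>) -> index
// Codegen path: the size of the values memref.
%v = sparse_tensor.values %t : tensor<?x?xf64, #CSR> to memref<?xf64>
%n1 = memref.dim %v, %c0 : memref<?xf64>
```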

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D136389
2022-10-21 10:48:37 -07:00
Aart Bik
9f596a7c67 [mlir][sparse] implement simple codegen for insertion (and related ops)
This is a proof of concept insertion implementation that sets up
the basic framework and implements it with push backs for just
sparse vectors. It adds insertion/compression through SSA values,
so that we properly update the memref after each pushback operation.

Note that properly using SSA values in sparsification is still TBD
but I will wait until Peiming's loop emitter is in to avoid conflicts.
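A minimal sketch of the push-back style with SSA chaining of the updated memref (operand order and result approximated, not taken verbatim from the patch):
```
// Append one value; the returned memref may differ from the input
// if the underlying buffer had to grow.
%vals1 = sparse_tensor.push_back %count, %vals0, %v
    : index, memref<?xf64>, f64
```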

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D136008
2022-10-17 18:02:08 -07:00
bixia1
0d60000257 [mlir][sparse] Reorganize tests for the sparse_tensor.convert operator.
Rename conversion_sparse2dense.mlir and conversion_sparse2sparse.mlir to
convert_sparse2dense.mlir and convert_sparse2sparse.mlir.

Add convert_dense2sparse.mlir. Move the sparse_tensor.convert operator tests
out of conversion.mlir.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D135922
2022-10-14 14:38:18 -07:00
Aart Bik
a3610359b5 [mlir][sparse] change memref argument to proper SSA components
The indices for insert/compress were previously provided as
a memref<?xindex> with proper rank, since that matched the
argument for the runtime support library better. However, with
proper codegen coming, providing the indices as SSA values
is much cleaner. This also brings the sparse_tensor.insert
closer to unification with tensor.insert, planned in the
longer run.
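Before and after, as a rough sketch (assembly formats approximated from the description, not copied from the patch):
```
// Before: coordinates packed into a memref<?xindex> operand.
sparse_tensor.insert %val, %t, %coords
    : tensor<?x?xf64, #CSR>, memref<?xindex>, f64
// After: coordinates passed as individual SSA values.
sparse_tensor.insert %val into %t[%i, %j] : tensor<?x?xf64, #CSR>
```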

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D134404
2022-09-27 16:37:37 -07:00
Aart Bik
f76dcede3f [mlir][sparse] rename lex_insert into insert
This change does not impact any semantics yet, but it
is in preparation for implementing the unordered and not-unique
properties. Changing lex_insert to insert is a first step.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133531
2022-09-08 17:26:35 -07:00
Aart Bik
dc46d5c979 [mlir][sparse] improve dimop rewriting during conversion
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D133512
2022-09-08 13:04:28 -07:00
Aart Bik
ec8f2905a3 [mlir][sparse] fix bug in workspace dimension computation
Access pattern expansion is always done along the innermost stored
dimension, but this was incorrectly reordered due to using a
general utility typically used for original dimensions only.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D133472
2022-09-08 08:25:02 -07:00
Aart Bik
610b09074a [mlir][sparse] change variable dimension to fixed attribute pointers/indices
The "sparsification" pass does not need the ability to use runtime values for
the dimension, so the only source for variability would have been user code.
Restricting the dimension to constants simplifies code generation.
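A sketch of the change for one of the affected ops (attribute spelling approximated):
```
// Before: the dimension was a runtime SSA value.
%c1 = arith.constant 1 : index
%p0 = sparse_tensor.pointers %t, %c1
    : tensor<?x?xf64, #CSR> to memref<?xindex>
// After: the dimension is a fixed attribute.
%p1 = sparse_tensor.pointers %t { dimension = 1 : index }
    : tensor<?x?xf64, #CSR> to memref<?xindex>
```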

Reviewed By: Peiming, wrengr

Differential Revision: https://reviews.llvm.org/D133458
2022-09-07 16:27:24 -07:00
Aart Bik
86b22d3120 [mlir][sparse] start a sparse codegen conversion pass
This new pass provides an alternative to the current conversion pass
that converts sparse tensor types and sparse primitives to opaque pointers
and calls into a runtime support library. This pass will map sparse tensor
types to actual data structures and primitives to actual code. In the long
run, this new pass will remove our dependence on the support library, avoid
the need to link in fully templated and expanded code, and provide much better
opportunities for optimization on the generated code.
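As a sketch, the two strategies differ in what a sparse tensor type ultimately becomes (the codegen field layout shown here is an assumption):
```
// Conversion pass: opaque pointer plus runtime support library calls.
//   tensor<?x?xf64, #CSR>  ~>  !llvm.ptr<i8>
// Codegen pass: actual storage buffers manipulated by generated code,
// e.g. positions, indices, and values:
//   tensor<?x?xf64, #CSR>  ~>  memref<?xindex>, memref<?xindex>, memref<?xf64>
```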

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D132766
2022-08-29 09:39:33 -07:00
Aart Bik
a8166d8801 [mlir][sparse] move sparse2sparse conversion to own test file
Rationale:
We were running *all* conversion tests two times, just to check the
difference of one individual test in that file. By splitting that test
out, we have a much more focused testing setup.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D132757
2022-08-26 13:17:24 -07:00
Matthias Springer
27a431f5e9 [mlir][bufferization][NFC] Move sparse_tensor.release to bufferization dialect
This op used to belong to the sparse dialect, but there are use cases for dense bufferization as well. (E.g., when a tensor alloc is returned from a function and should be deallocated at the call site.) This change moves the op to the bufferization dialect, which now has an `alloc_tensor` and a `dealloc_tensor` op.
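A sketch of the resulting pairing in the bufferization dialect (encoding name illustrative):
```
// Allocate a (sparse) tensor...
%t = bufferization.alloc_tensor() : tensor<16xf64, #SV>
// ...use it, and deallocate it at the end of its lifetime.
bufferization.dealloc_tensor %t : tensor<16xf64, #SV>
```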

Differential Revision: https://reviews.llvm.org/D129985
2022-07-19 09:18:19 +02:00
Aart Bik
9a3d60e0d3 [mlir][bufferization][sparse] put restriction on sparse tensor allocation
Putting some direct use restrictions on tensor allocations in the
sparse case enables the use of simplifying assumptions in the
bufferization analysis.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D128463
2022-06-24 10:58:43 -07:00
Aart Bik
fde04aee33 [mlir][sparse] refine bufferization allocation lowering
Marking the bufferization allocation operation as invalid
during sparse lowering is too strict, since dense and
sparse allocation can co-exist. This revision refines
the lowering with a dynamic type check.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D128305
2022-06-21 15:17:25 -07:00
Aart Bik
aef20f59a5 [mlir][sparse] move from by-value to by-reference for data types
This fixes all sorts of ABI issues due to passing by-value
(using by-reference with memrefs exclusively).

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D128018
2022-06-17 08:39:25 -07:00
Matthias Springer
6232a8f3d6 [mlir][sparse][NFC] Switch InitOp to bufferization::AllocTensorOp
Now that we have an AllocTensorOp (previously InitTensorOp) in the bufferization dialect, the InitOp in the sparse dialect is no longer needed.

Differential Revision: https://reviews.llvm.org/D126180
2022-06-02 00:03:52 +02:00
wren romano
b364c76683 [mlir][sparse] Using non-empty function name suffix for OverheadType::kIndex
The trick of using an empty token in the `FOREVERY_O` x-macro relies on preprocessor behavior which is only standard since C99 6.10.3/4 and C++11 N3290 16.3/4 (whereas it was undefined behavior up through C++03 16.3/10).  Since the `ExecutionEngine/SparseTensorUtils.cpp` file is required to be compile-able under C++98 compatibility mode (unlike the C++11 used elsewhere in MLIR), we shouldn't rely on that behavior.

Also, using a non-empty suffix helps improve uniformity of the API, since all other primary/overhead suffixes are also non-empty.  I'm using the suffix `0` since that's the value used by the `SparseTensorEncoding` attribute for indicating the index overhead-type.

Depends On D126720

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D126724
2022-06-01 14:18:42 -07:00
Aart Bik
28b6d412af [mlir][sparse] add support for complex zero/one building
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D126039
2022-05-20 08:53:30 -07:00
wren romano
8cb332406c [mlir][sparse] Enhancing sparse=>sparse conversion.
Fixes: https://github.com/llvm/llvm-project/issues/51652

Depends On D122060

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122061
2022-05-16 15:42:19 -07:00
River Riddle
fb35cd3baf [mlir][NFC] Update textual references of func to func.func in SparseTensor tests
The special case parsing of `func` operations is being removed.
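The textual change, as a sketch:
```
// Before:
func @kernel(%arg0: tensor<?xf64, #SV>) { return }
// After:
func.func @kernel(%arg0: tensor<?xf64, #SV>) { return }
```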
2022-04-20 22:17:29 -07:00
Aart Bik
0b55f94d2b [mlir][sparse] replace stack-based access pattern with dyn-alloc
Rationale:
Allocating the temporary buffers for access pattern expansion on the stack
(using alloca) is a bit too aggressive, since it easily runs out of stack space
for large enveloping tensor dimensions. This revision switches to dynamic
allocation of these buffers, with explicit alloc/dealloc pairs.
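Sketch of the replacement:
```
// Before: stack allocation, implicitly freed on function exit.
%buf0 = memref.alloca(%n) : memref<?xf64>
// After: heap allocation with a matching explicit deallocation.
%buf1 = memref.alloc(%n) : memref<?xf64>
// ... use %buf1 ...
memref.dealloc %buf1 : memref<?xf64>
```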

Reviewed By: bixia, wrengr

Differential Revision: https://reviews.llvm.org/D123253
2022-04-06 17:10:43 -07:00
wren romano
63bdcaf92a [mlir][sparse] Moving delete coo into codegen instead of runtime library
Prior to this change there were a number of places where the allocation and deallocation of SparseTensorCOO objects were not cleanly paired, leading to inconsistencies regarding whether each function released its tensor/coo arguments or not, as well as making it easy to run afoul of memory leaks, use-after-free, or double-free errors.  This change cleans up the codegen vs runtime boundary to resolve those issues.  Now, the only time the runtime library frees an object is either (a) because it's a function explicitly designed to do so, or (b) because the allocated object is entirely local to the function and would be a memory leak if not released.  Thus, now the codegen takes complete responsibility for releasing any objects it caused to be allocated.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122435
2022-04-01 11:08:52 -07:00
gysit
7294be2b8e [mlir][linalg] Replace linalg.fill by OpDSL variant.
The revision removes the linalg.fill operation and renames the OpDSL generated linalg.fill_tensor operation to replace it. After the change, all named structured operations are defined via OpDSL and there are no handwritten operations left.

A side-effect of the change is that the pretty-printed form changes from:
```
%1 = linalg.fill(%cst, %0) : f32, tensor<?x?xf32> -> tensor<?x?xf32>
```
to
```
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x?xf32>) -> tensor<?x?xf32>
```
Additionally, the builder signature now takes input and output value ranges as it is the case for all other OpDSL operations:
```
rewriter.create<linalg::FillOp>(loc, val, output)
```
changes to
```
rewriter.create<linalg::FillOp>(loc, ValueRange{val}, ValueRange{output})
```
All other changes remain minimal. In particular, the canonicalization patterns are the same and the `value()`, `output()`, and `result()` methods are now implemented by the FillOpInterface.

Depends On D120726

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D120728
2022-03-14 10:51:08 +00:00
Aart Bik
efa15f4178 [mlir][sparse] add ability for sparse tensor output
Rationale:
Although file I/O is a bit alien to MLIR itself, we provide two convenient ways
for sparse tensor I/O. The input part was already there (behind the swiss army
knife sparse_tensor.new). Now we have a sparse_tensor.out to write out data. As
before, the ops are kept vague and may change in the future. For now this
allows us to compare TACO vs MLIR very easily.
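A sketch of the new op next to its input counterpart (operand types approximated):
```
// Input (already existed): materialize a sparse tensor from a file source.
%t = sparse_tensor.new %src : !llvm.ptr<i8> to tensor<?x?xf64, #CSR>
// Output (this change): write a sparse tensor out to a destination.
sparse_tensor.out %t, %dst : tensor<?x?xf64, #CSR>, !llvm.ptr<i8>
```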

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D117850
2022-01-21 15:43:29 -08:00
Aart Bik
4f2ec7f983 [mlir][sparse] finalize sparse output in the presence of reductions
This revision implements sparse outputs (from scratch) in all cases where
the loops can be reordered with all but one of the parallel loops outer. If the
inner parallel loop appears inside one or more reduction loops, then an
access pattern expansion is required (a.k.a. workspaces in TACO speak).
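A sketch of the expansion ops involved (signatures approximated):
```
// Acquire a workspace covering the innermost stored dimension.
%values, %filled, %added, %count = sparse_tensor.expand %t
    : tensor<?x?xf64, #CSR> to memref<?xf64>, memref<?xi1>, memref<?xindex>, index
// ... scatter into %values/%filled, recording new coordinates in %added ...
// Flush the workspace back into the sparse tensor at row %i.
sparse_tensor.compress %t, %i, %values, %filled, %added, %count
    : tensor<?x?xf64, #CSR>, index, memref<?xf64>, memref<?xi1>, memref<?xindex>, index
```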

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D115091
2021-12-07 10:54:29 -08:00
Aart Bik
f66e5769d4 [mlir][sparse] first version of "truly" dynamic sparse tensors as outputs of kernels
This revision contains all "sparsification" ops and rewriting necessary to support sparse output tensors when the kernel has no reduction (viz. insertions occur in lexicographic order and are "injective"). This will be later generalized to allow reductions too. Also, this first revision only supports sparse 1-d tensors (viz. vectors) as output in the runtime support library. This will be generalized to n-d tensors shortly. But this way, the revision is kept to a manageable size.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D113705
2021-11-15 15:33:32 -08:00
Aart Bik
1b15160ef3 [mlir][sparse] lower trivial tensor.cast on identical sparse tensors
Even though tensor.cast is not part of the sparse tensor dialect,
it may be used to cast static dimension sizes to dynamic dimension
sizes for sparse tensors without changing the actual sparse tensor
itself. Those cases should be lowered properly when replacing sparse
tensor types with their opaque pointers. Likewise, no-op sparse
conversions are handled by this revision in a similar manner.
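A sketch of a cast this handles:
```
// Static-to-dynamic cast on an otherwise identical sparse type; the
// underlying opaque pointer is unchanged, so the lowering is trivial.
%1 = tensor.cast %0 : tensor<8x8xf64, #CSR> to tensor<?x?xf64, #CSR>
```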

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D112173
2021-10-25 10:30:19 -07:00
Aart Bik
bd5494d127 [mlir][sparse] make index type explicit in public API of support library
The current implementation used explicit index->int64_t casts for some, but
not all, instances of passing values of type "index" to and from the sparse
support library. This revision makes the situation more consistent by
using new "index_t" type at all such places  (which allows for less trivial
casting in the generated MLIR code).  Note that the current revision still
assumes that "index" is 64-bit wide. If we want to support targets with
alternative "index" bit widths, we need to build the support library different.
But the current revision is a step forward by making this requirement explicit
and more visible.

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D112122
2021-10-20 12:46:31 -07:00
Aart Bik
9d1db3d4a1 [mlir][sparse] generalize sparse_tensor.convert on static/dynamic dimension sizes
This revision lifts the artificial restriction on having exact matches between
source and destination type shapes. A static size may become dynamic. We still
reject changing a dynamic size into a static size to avoid the need for a
runtime "assert" on the conversion. This revision also refactors some of the
conversion code to share same-content buffers.
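A sketch of what is now accepted versus still rejected:
```
// Accepted: a static size may become dynamic.
%a = sparse_tensor.convert %s : tensor<32xf32, #SV> to tensor<?xf32, #SV>
// Rejected: dynamic-to-static would require a runtime "assert".
// %b = sparse_tensor.convert %d : tensor<?xf32, #SV> to tensor<32xf32, #SV>
```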

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D111915
2021-10-18 13:54:03 -07:00
Aart Bik
b24788abd8 [mlir][sparse] implement sparse tensor init operation
Next step towards supporting sparse tensor outputs.
Also some minor refactoring of enum constants as well
as replacing tensor arguments with proper buffer arguments
(the latter is required for more general size arguments for
the sparse_tensor.init operation, as well as more general
sparse_tensor.convert operations later).
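A sketch of the op (historical syntax approximated):
```
// Materialize an uninitialized sparse tensor with runtime size arguments.
%t = sparse_tensor.init [%d0, %d1] : tensor<?x?xf64, #CSR>
```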

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D111771
2021-10-15 09:33:16 -07:00
Mogball
a54f4eae0e [MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200

Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.

Renamed all instances of operations in the codebase and in tests.
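Typical renamings, as a sketch:
```
// Standard dialect (before):
%0 = addi %a, %b : i64
// Arith dialect (after):
%0 = arith.addi %a, %b : i64
```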

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D110797
2021-10-13 03:07:03 +00:00
Aart Bik
16b8f4ddae [mlir][sparse] add a "release" operation to sparse tensor dialect
We have several ways to materialize sparse tensors (new and convert) but no explicit operation to release the underlying sparse storage scheme at runtime (other than making an explicit delSparseTensor() library call). To simplify memory management, a sparse_tensor.release operation has been introduced that lowers to the runtime library call while keeping tensors, opaque pointers, and memrefs transparent in the initial IR.

*Note* There is obviously some tension between the concept of immutable tensors and memory management methods. This tension is addressed by simply stating that after the "release" call, no further memref related operations are allowed on the tensor value. We expect the design to evolve over time, however, and arrive at a more satisfactory view of tensors and buffers eventually.
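A sketch of the op and its lowering (the library call is named in the description above):
```
// In the sparse tensor dialect:
sparse_tensor.release %t : tensor<?x?xf64, #CSR>
// Lowers to a call into the runtime support library:
call @delSparseTensor(%ptr) : (!llvm.ptr<i8>) -> ()
```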

Bug:
http://llvm.org/pr52046

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D111099
2021-10-05 09:35:59 -07:00