Commit Graph

129 Commits

Matthias Springer
6232a8f3d6 [mlir][sparse][NFC] Switch InitOp to bufferization::AllocTensorOp
Now that we have an AllocTensorOp (previously InitTensorOp) in the bufferization dialect, the InitOp in the sparse dialect is no longer needed.

Differential Revision: https://reviews.llvm.org/D126180
2022-06-02 00:03:52 +02:00
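For illustration, a minimal sketch of the replacement (the #SparseVector encoding and the dynamic size %d are placeholders; the old form in the comment is as recalled, so details may differ):
```
#SparseVector = #sparse_tensor.encoding<{ dimLevelType = ["compressed"] }>

// Previously: %t = sparse_tensor.init [%d] : tensor<?xf64, #SparseVector>
%t = bufferization.alloc_tensor(%d) : tensor<?xf64, #SparseVector>
```
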
bixia1
548f0841cd [mlir][sparse] Enable the test for operator expm1.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D126732
2022-06-01 11:18:17 -07:00
bixia1
a14057d4bd [mlir][sparse] Add more complex operations.
Support complex operations sqrt, expm1, and tanh.

Add tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D126393
2022-05-25 16:38:09 -07:00
Bixia Zheng
d390035b46 [mlir][sparse] Support more complex operations.
Add complex operations abs, neg, sin, log1p, sub and div.

Add test cases.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D126027
2022-05-20 14:39:26 -07:00
Bixia Zheng
69edacbcf0 [mlir][sparse] Add support for complex.im and complex.re to the sparse compiler.
Add a test.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D125834
2022-05-18 15:53:07 +00:00
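As a small illustration of the ops involved (a sketch, not the actual test):
```
// Extract the real and imaginary parts of a complex value.
%re = complex.re %c : complex<f64>
%im = complex.im %c : complex<f64>
```
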
wren romano
8cb332406c [mlir][sparse] Enhancing sparse=>sparse conversion.
Fixes: https://github.com/llvm/llvm-project/issues/51652

Depends On D122060

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D122061
2022-05-16 15:42:19 -07:00
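The conversion itself is expressed with sparse_tensor.convert; a sketch with illustrative #CSR and #DCSR encodings:
```
#CSR  = #sparse_tensor.encoding<{ dimLevelType = ["dense", "compressed"] }>
#DCSR = #sparse_tensor.encoding<{ dimLevelType = ["compressed", "compressed"] }>

// sparse => sparse conversion between two different encodings
%1 = sparse_tensor.convert %0 : tensor<?x?xf64, #CSR> to tensor<?x?xf64, #DCSR>
```
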
River Riddle
a6cef03f66 [mlir] Remove the type keyword from type alias definitions
This was carried over from LLVM IR, where the alias definition can
be ambiguous, but MLIR type aliases have no such problem.
The `type` keyword is superfluous and doesn't add anything.
This commit drops it, which also nicely aligns with the syntax for
attribute aliases (which don't have a keyword).

Differential Revision: https://reviews.llvm.org/D125501
2022-05-16 13:54:02 -07:00
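A before/after sketch of the alias syntax:
```
// Before:
!avx_m128 = type vector<4 x f32>
// After (no `type` keyword, matching attribute aliases):
!avx_m128 = vector<4 x f32>
```
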
Aart Bik
736c1b66ef [mlir][sparse] introduce complex type to sparse tensor support
This is the first implementation of complex (f64 and f32) support
in the sparse compiler, with complex add/mul as first operations.
Note that various features are still TBD, such as other ops and
reading in complex values from file. Also note that
std::complex<float> had a bit of an ABI issue when passed as a
single argument. It is still TBD whether better solutions are possible.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D125596
2022-05-16 13:17:36 -07:00
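A minimal sketch of the kind of kernel body this enables (illustrative, assuming a placeholder #SparseVector encoding):
```
#SparseVector = #sparse_tensor.encoding<{ dimLevelType = ["compressed"] }>

// Element-wise add/mul on complex values inside a sparse kernel body,
// with operands of a type such as tensor<?xcomplex<f64>, #SparseVector>.
%sum  = complex.add %x, %y : complex<f64>
%prod = complex.mul %x, %y : complex<f64>
```
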
Aart Bik
6f3c7dfb77 [mlir][sparse] add sparse sign integration test
Implements a floating-point sign operator (using the new semi-ring ops)
that accommodates +/-Inf and +/-NaN in a consistent way.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D125494
2022-05-12 15:56:36 -07:00
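A rough sketch of how a sign-like operator can be written with the semi-ring unary op (the region keywords follow the sparse_tensor.unary documentation of this period, and the math.copysign body is an illustrative assumption, not the actual test code):
```
%sign = sparse_tensor.unary %a : f64 to f64
  present = {
    ^bb0(%x: f64):
      %one = arith.constant 1.0 : f64
      %s = math.copysign %one, %x : f64
      sparse_tensor.yield %s : f64
  }
  absent = {}
```
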
River Riddle
a8308020ac [mlir] Remove special case parsing/printing of func operations
This was left over from when the standard dialect was removed and
FuncOp moved to the func dialect. Now that these transitions
have settled a bit, we can drop these special cases.

Most updates were handled using a simple regex: replace `^( *)func` with `$1func.func`

Differential Revision: https://reviews.llvm.org/D124146
2022-05-06 13:36:15 -07:00
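For example:
```
// Before:
func @add(%a: i32, %b: i32) -> i32 {
  %0 = arith.addi %a, %b : i32
  return %0 : i32
}

// After:
func.func @add(%a: i32, %b: i32) -> i32 {
  %0 = arith.addi %a, %b : i32
  return %0 : i32
}
```
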
Aart Bik
5b122a7310 [mlir][sparse] integration test for zero preserving math op
Also fixes an omission in lowering math ops that require library support.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D125104
2022-05-06 10:42:33 -07:00
Bixia Zheng
1cd13e6e98 [mlir][sparse][taco] Support more data types.
Support int8, int16, int32 and int64. Also fix the source code format in mlir_pytaco_utils.py.

Add tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D124925
2022-05-04 10:05:20 -07:00
Jim Kitchen
2c33266084 [mlir][sparse] Add lowering for unary and binary ops
Adding lowering for Unary and Binary required several changes because of their
unique nature: they contain custom code for different "regions" of the sparse
structure being operated on. Along with a Kind, a pointer to the Operation is
passed along to be merged once the lattice structure is figured out.

The original operation is maintained, as it is required for subsequent
lattice decisions. However, sparse_tensor.binary has some branches that are
considered fully handled and are therefore marked as kBinaryBranch to
distinguish them.

A unique aspect of the custom code is that sometimes the desired result
is no result at all -- i.e. a user wants overlapping sparse entries to
become empty in the output. The solution to this is to return an
uninitialized Value(), which is checked and handled elsewhere in the
code and results in nothing being written to the output tensor for that
case.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D123057
2022-05-03 15:50:26 -05:00
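A hedged sketch of the "no output" case described above (region keywords as recalled from the sparse_tensor.binary documentation of this period; details may differ). Leaving the overlap region empty means entries present in both operands contribute nothing to the output:
```
%0 = sparse_tensor.binary %a, %b : f64, f64 to f64
  overlap = {}
  left = identity
  right = identity
```
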
Nick Kreeger
4620032ee3 Revert "[mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options."
This reverts commit d59cf901cb.

Build fails on NVIDIA Sparse tests:
https://lab.llvm.org/buildbot/#/builders/61/builds/25447
2022-04-23 20:14:48 -05:00
Nick Kreeger
d59cf901cb [mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options.
The SparseTensor passes currently use opaque numbers for the CLI, despite using an enum internally. This patch exposes the enum values directly instead of numbers that are matched back to the enum.

Fixes GitHub issue #53389

Reviewed by: aartbik, mehdi_amini

Differential Revision: https://reviews.llvm.org/D123876
2022-04-23 19:16:57 -05:00
River Riddle
87db8e4439 [mlir][NFC] Update textual references of func to func.func in Integration tests
The special case parsing of `func` operations is being removed.
2022-04-20 22:17:29 -07:00
River Riddle
2310ced874 [mlir][NFC] Update textual references of func to func.func in examples+python scripts
The special case parsing of `func` operations is being removed.
2022-04-20 22:17:26 -07:00
Bixia Zheng
cb6f8d77a2 [mlir][sparse][taco] Use the SparseCompiler from python/tools.
Copy the implementation of SparseCompiler from python/tools to taco/tools until we have a common place to install it. Modify TACO to use this SparseCompiler for compilation and jitting.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D123696
2022-04-14 15:17:18 -07:00
Aart Bik
28063a281b [mlir][sparse] refactored python setup of sparse compiler
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D123419
2022-04-12 11:58:41 -07:00
Aart Bik
69a7759b40 [mlir][sparse] implement loop index value vectorization
with CHECK and integration test

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D122040
2022-03-21 10:40:38 -07:00
River Riddle
3655069234 [mlir] Move the Builtin FuncOp to the Func dialect
This commit moves FuncOp out of the builtin dialect, and into the Func
dialect. This move has been planned in some capacity from the moment
we made FuncOp an operation (years ago). This commit handles the
functional aspects of the move, but various aspects are left untouched
to ease migration: func::FuncOp is re-exported into mlir to reduce
the actual API churn, and the assembly format still accepts the unqualified
`func`. These temporary measures will remain for a little while to
simplify migration before being removed.

Differential Revision: https://reviews.llvm.org/D121266
2022-03-16 17:07:03 -07:00
Aart Bik
f98e1c40ca [mlir][sparse] add one extra index test on f32 matrix
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D121743
2022-03-15 17:43:30 -07:00
Bixia Zheng
3580721a59 [mlir][sparse][taco] Support the use of index values in tensor expressions.
The PyTACO DSL doesn't support the use of index values, as in A[i] = B[i] + i.
We extend the DSL to support such a use in MLIR-PyTACO.

Remove an obsolete unit test. Add unit tests and PyTACO tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D121716
2022-03-15 15:30:55 -07:00
Aart Bik
1f3c482b76 [mlir][sparse] more test cases for linalg.index
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D121660
2022-03-15 10:30:54 -07:00
Bixia Zheng
3a4229696d [mlir][sparse][taco] Reorder a class.
Define IndexExpr before IndexVar. This is to prepare for the next change
to support the use of index values in tensor expressions.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D121649
2022-03-15 08:51:22 -07:00
gysit
7294be2b8e [mlir][linalg] Replace linalg.fill by OpDSL variant.
The revision removes the linalg.fill operation and renames the OpDSL-generated linalg.fill_tensor operation to replace it. After the change, all named structured operations are defined via OpDSL and there are no handwritten operations left.

A side-effect of the change is that the pretty-printed form changes from:
```
%1 = linalg.fill(%cst, %0) : f32, tensor<?x?xf32> -> tensor<?x?xf32>
```
to:
```
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x?xf32>) -> tensor<?x?xf32>
```
Additionally, the builder signature now takes input and output value ranges, as is the case for all other OpDSL operations:
```
rewriter.create<linalg::FillOp>(loc, val, output)
```
changes to
```
rewriter.create<linalg::FillOp>(loc, ValueRange{val}, ValueRange{output})
```
All other changes remain minimal. In particular, the canonicalization patterns are the same and the `value()`, `output()`, and `result()` methods are now implemented by the FillOpInterface.

Depends On D120726

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D120728
2022-03-14 10:51:08 +00:00
Bixia Zheng
30c5269d93 [mlir][sparse][taco] Add a few unary operations.
Add operations -, abs, ceil and floor to the index notation.

Add test cases.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D121388
2022-03-11 08:08:55 -08:00
Aart Bik
0123d2a9fe [mlir][sparse] add end2end test for linalg.dot sparsification
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D121344
2022-03-09 16:05:53 -08:00
Aart Bik
53cc3a0637 [mlir][sparse] index support in sparse compiler codegen
This revision adds support for the linalg.index to the sparse compiler
pipeline. In essence, this adds the ability to refer to indices in
the tensor index expression, as illustrated below:

 Y[i, j, k, l, m] = T[i, j, k, l, m]  * i * j

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D121251
2022-03-08 17:25:36 -08:00
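A minimal sketch of such a kernel (the trait, encoding, and integer element type are illustrative):
```
#SparseVector = #sparse_tensor.encoding<{ dimLevelType = ["compressed"] }>
#trait = {
  indexing_maps = [affine_map<(i) -> (i)>, affine_map<(i) -> (i)>],
  iterator_types = ["parallel"]
}

// X[i] = T[i] * i, using linalg.index to refer to the loop index.
%0 = linalg.generic #trait
       ins(%t : tensor<?xi64, #SparseVector>)
       outs(%x : tensor<?xi64>) {
  ^bb0(%a: i64, %out: i64):
    %i  = linalg.index 0 : index
    %ii = arith.index_cast %i : index to i64
    %m  = arith.muli %a, %ii : i64
    linalg.yield %m : i64
} -> tensor<?xi64>
```
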
Bixia Zheng
5b87e0521d [mlir][sparse][taco] Split the evaluate method into compile and compute.
This is to align with the PyTACO API better.

Modify an existing unit test to test the new routines.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D121083
2022-03-07 16:58:41 -08:00
Bixia Zheng
4b7745c176 [mlir][sparse][taco] Add more unit tests.
These unit tests reside in an internal repository. This ports the tests to the
public repository.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D121021
2022-03-07 10:10:01 -08:00
Aart Bik
d8b229a1d5 [mlir][sparse][pytaco] added test with various sparse annotations
This also found a bug in the new toMLIR code (missing permutation)

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D120783
2022-03-01 16:36:19 -08:00
Bixia Zheng
c25f3dfff3 [mlir][sparse][taco] Support tensor dimension storage ordering and more general sparsity values.

Previously, we couldn't properly handle input tensors with a dimension
ordering that is different from the natural ordering or with a mix of
compressed and dense dimensions. This change fixes the problems by
passing the dimension ordering and sparsity values to the runtime
routine.

Modify an existing test to test the situation.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D120777
2022-03-01 15:36:38 -08:00
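For reference, the kind of MLIR encoding such an ordering and sparsity mix maps onto (an illustrative attribute, not taken from the change):
```
// One dense and two compressed dimensions, stored in (k, i, j) order.
#Tensor3D = #sparse_tensor.encoding<{
  dimLevelType = ["dense", "compressed", "compressed"],
  dimOrdering = affine_map<(i, j, k) -> (k, i, j)>
}>
```
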
River Riddle
23aa5a7446 [mlir] Rename the Standard dialect to the Func dialect
The last remaining operations in the standard dialect all revolve around
FuncOp/function related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number
of cleanups unlocked/necessary afterwards:

* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect

See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D120624
2022-03-01 12:10:04 -08:00
Bixia Zheng
20eaa88fff [mlir][sparse] Extend convertToMLIRSparseTensor to support permutation and more general sparsity values.
Previously, convertToMLIRSparseTensor assumed identity storage ordering and all
compressed dimensions. This change extends the function with two parameters for
users to specify the storage ordering and the sparsity of each dimension.

Modify PyTACO to reflect this change.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D120643
2022-03-01 10:51:39 -08:00
Aart Bik
180c9f9efe [mlir][sparse] enable scalar test
Removed TODO now that we support scalars properly

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D120590
2022-02-25 15:05:25 -08:00
Bixia Zheng
6f07191101 [mlir][sparse][taco] Support reduction to scalar tensors.
The PyTACO DSL doesn't support reduction to scalars. This change
enhances the MLIR-PyTACO implementation to support reduction to scalars.

Extend an existing test to show the syntax of reduction to scalars and
two methods to retrieve the scalar values.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D120572
2022-02-25 14:17:45 -08:00
Bixia Zheng
c601dfbcc2 [mlir][sparse][taco] Use np.array_equal to compare integer values.
Fix MLIR-PyTACO and some tests to use np.array_equal to compare integer
values.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D120526
2022-02-25 07:38:15 -08:00
gysit
cd2776b0d5 [mlir][OpDSL] Split arithmetic functions.
Split the arithmetic function into unary and binary functions. The revision prepares the introduction of unary and binary function attributes that work similarly to type function attributes.

Depends On D120108

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D120109
2022-02-25 15:27:42 +00:00
Bixia Zheng
90f22ab3ad [mlir][sparse][taco] Add support for scalar tensors.
This change allows the use of scalar tensors with index 0 in tensor index
expressions. In this case, the scalar value is broadcast to match the
dimensions of other tensors in the same expression.

Using scalar tensors as a destination in tensor index expressions is not
supported in the PyTACO DSL.

Add a PyTACO test to show the use of scalar tensors.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D120524
2022-02-25 07:20:15 -08:00
gysit
51fdd802c7 [mlir][OpDSL] Add type function attributes.
Previously, OpDSL operations used hardcoded type conversion operations (cast or cast_unsigned). Supporting signed and unsigned casts thus meant implementing two different operations. Type function attributes allow us to define a single operation that has a cast type function attribute, which at operation instantiation time may be set to cast or cast_unsigned. We may, for example, define a matmul operation with a cast argument:

```
@linalg_structured_op
def matmul(A=TensorDef(T1, S.M, S.K), B=TensorDef(T2, S.K, S.N), C=TensorDef(U, S.M, S.N, output=True),
    cast=TypeFnAttrDef(default=TypeFn.cast)):
  C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```

When instantiating the operation the attribute may be set to the desired cast function:

```
linalg.matmul(lhs, rhs, outs=[out], cast=TypeFn.cast_unsigned)
```

The revision introduces an enum in the Linalg dialect that maps one-to-one to the type functions defined by OpDSL.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D119718
2022-02-25 08:25:23 +00:00
Bixia Zheng
c8ae8cfb5d [mlir][sparse][taco] Add support for float32.
Previously, we only supported float64. We now support float32 and float64. When
constructing a tensor without providing a data type, the default is float32.

Fix the tests for data type consistency. All PyTACO application tests now use
float32 to match the default data type of TACO. Other tests may use float32 or
float64.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D120356
2022-02-23 18:24:22 -08:00
Aart Bik
652b39b46f [mlir][sparse][linalg] add linalg rewriting specific to sparse tensors
Now that sparse tensor types are first-class citizens and the sparse compiler
is taking shape, it is time to make sure other compiler optimizations compose
well with sparse tensors. Mostly, this should be completely transparent (i.e.,
dense and sparse take the same path). However, in some cases, optimizations
only make sense in the context of sparse tensors. This is a first example of
such an optimization, where fusing a sampled elt-wise multiplication only makes
sense when the resulting kernel has a potential lower asymptotic complexity due
to the sparsity.

As an extreme example, running SDDMM with 1024x1024 matrices and a sparse
sampling matrix with only two elements runs in 463.55ms in the unfused
case but just 0.032ms in the fused case, with a speedup of 14485x that
is only possible in the exciting world of sparse computations!

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D120429
2022-02-23 17:29:41 -08:00
Aart Bik
8b83b8f131 [mlir][sparse] refactor sparse compiler pipeline to single place
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D120347
2022-02-22 16:23:56 -08:00
Aart Bik
9b9a084af0 [mlir][sparse][pytaco] test with 3-dim tensor and scalar
Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D120163
2022-02-18 15:16:35 -08:00
Aart Bik
6438783fda [mlir][sparse] provide more types for external to/from MLIR routines
These routines will need to be specialized a lot more based on value types,
index types, pointer types, and permutation/dimension ordering. This is a
careful first step, providing some functionality needed in the PyTACO bridge.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D120154
2022-02-18 13:36:52 -08:00
Aart Bik
515c617003 [mlir][linalg][sparse] add linalg optimization passes "upstream"
It is time to compose Linalg related optimizations with SparseTensor
related optimizations. This is a careful first start by adding some
general Linalg optimizations "upstream" of the sparse compiler in the
full sparse compiler pipeline. Some minor changes were needed to make
those optimizations aware of sparsity.

Note that after this, we will add a sparse specific fusion rule,
just to demonstrate the power of the new composition.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D119971
2022-02-17 08:55:50 -08:00
Shao-Ce SUN
21ac474392 [NFC] Correct typo interger to integer
2022-02-17 21:17:47 +08:00
Bixia Zheng
746c68eafd [mlir][sparse][taco] Handle tensor copy and trivial reduction expression.
Handle tensor copies, such as A[i, j] = B[i, j]. Also handle trivial
reduction expressions, such as A[i] = B[i, j].

Add unit tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D119867
2022-02-15 15:57:18 -08:00
Aart Bik
ae76fafc3f [mlir][sparse] sparse transpose operation
This test shows that when access patterns do not match (e.g. transposing
a row-wise sparse matrix into another row-wise sparse matrix), a conversion
operation in between can enable codegen (i.e. avoid a cycle in the iteration graph).

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D119864
2022-02-15 11:51:30 -08:00
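A minimal sketch of the pattern (the encodings, trait, and materialized output %init are illustrative; the actual test may differ):
```
#CSR = #sparse_tensor.encoding<{ dimLevelType = ["dense", "compressed"] }>
#CSC = #sparse_tensor.encoding<{
  dimLevelType = ["dense", "compressed"],
  dimOrdering = affine_map<(i, j) -> (j, i)>
}>
#transpose = {
  indexing_maps = [affine_map<(i, j) -> (j, i)>, affine_map<(i, j) -> (i, j)>],
  iterator_types = ["parallel", "parallel"]
}

// Converting the operand first avoids a cycle in the iteration graph.
%conv = sparse_tensor.convert %a : tensor<?x?xf64, #CSR> to tensor<?x?xf64, #CSC>
%t = linalg.generic #transpose
       ins(%conv : tensor<?x?xf64, #CSC>)
       outs(%init : tensor<?x?xf64, #CSR>) {
  ^bb0(%x: f64, %y: f64):
    linalg.yield %x : f64
} -> tensor<?x?xf64, #CSR>
```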