This allows allocaBuffer to be used outside of SparseTensorConversion.cpp, which will be helpful for some future commits.
Reviewed By: aartbik, Peiming
Differential Revision: https://reviews.llvm.org/D140047
Since STEA is an Attribute, and an Attribute is just (a wrapper around) a pointer, the extra `const` and `&` aren't necessary for function arguments.
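For illustration, a before/after on a hypothetical helper (the function itself is made up; only the parameter-passing style is the point):

```cpp
// Before: const-reference adds ceremony without saving anything, since
// SparseTensorEncodingAttr (STEA) wraps a single pointer.
bool isUniqueDim(const SparseTensorEncodingAttr &enc, uint64_t d);

// After: pass-by-value; copying the attribute is copying one pointer.
bool isUniqueDim(SparseTensorEncodingAttr enc, uint64_t d);
```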
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D139886
This patch abstracts the sparse tensor memory scheme into a SparseTensorDescriptor class. Previously, field accesses were performed in a relatively error-prone way; this patch hides the hairy details behind a SparseTensorDescriptor class so that users can access sparse tensor fields in a more cohesive way.
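Roughly, the change in flavor (all names below are invented for illustration, not the actual API):

```cpp
// Before: compute a field's position in the flattened list of memrefs
// by hand and index into it (easy to get subtly wrong).
Value vals = fields[getFieldIndex(enc, FieldKind::Values, /*dim=*/0)];

// After: the descriptor owns the layout logic, so callers ask by meaning.
SparseTensorDescriptor desc(enc, fields);  // hypothetical constructor
Value values = desc.getValMemRef();        // hypothetical accessor
Value ptrs = desc.getPtrMemRef(dim);       // hypothetical accessor
```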
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138627
This is the first patch in a sequence of dependent patches that together provide support for affine expressions in the indexing maps matched for sparse tensors.
This patch itself simply moves `genAffine` into the loop emitter in preparation for the upcoming patches; the sketch after the list below shows the kind of indexing these patches target.
D138169 provides support for affine expressions on dense dimensions only (except for constant affine expressions)
D138170 provides support for constant affine expressions on dense dimensions
D138171 provides **merger** support for affine expressions on sparse dimensions (without codegen)
D138172 provides **codegen** support (by generating a "filter" loop) for affine expressions on sparse dimensions
D138173 fixes a crash in resolveCycle when dealing with affine expressions
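For context, here is a plain C++ analogue of the kind of kernel these patches target; the sparse compiler sees the MLIR/linalg equivalent, where `i + j` appears as an affine expression in the indexing map (the kernel below is purely illustrative):

```cpp
// 1-D convolution-like kernel: the input is read at the affine
// expression i + j of the two loop indices, rather than at a plain
// loop index.
void conv1d(const float *in, const float *filter, float *out,
            int n, int m) {
  for (int i = 0; i < n; i++)
    for (int j = 0; j < m; j++)
      out[i] += in[i + j] * filter[j];
}
```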
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138168
(1) also fixes a memory leak in the sparse2dense rewriting
(2) still needs a fix in dense2sparse to skip zeros
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D137736
This patch re-commits D137468 and D137463, which were reverted by mistake.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137579
This patch fixes and re-lands the reverted D135927 (which caused a Windows build failure), re-enabling parallel for/reduction. It also fixes a warning caused by D137442.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137565
This reverts commit 70508b614e.
This change depends on a reverted change that broke the Windows MLIR buildbot; reverting to bring the remaining MLIR bots back to green.
The alloc->insert/compress->load chain now needs to be represented as a proper SSA chain through loops and if statements, to reflect the modifying behavior (the runtime support library is forgiving about breaking this, but the new codegen is not).
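As a C++ analogue of the constraint (purely illustrative; the real code threads MLIR Values through scf.for/scf.if via iter_args and yields):

```cpp
#include <map>
#include <utility>

using Tensor = std::map<int, double>;  // stand-in for a sparse tensor

// Each insert consumes the previous tensor value and yields a new one,
// mirroring how the codegen must thread the tensor through the loop
// instead of mutating one value in place.
Tensor insert(Tensor t, int i, double v) {
  t[i] = v;
  return t;
}

Tensor buildTensor(int n) {
  Tensor t{};                          // alloc
  for (int i = 0; i < n; i++)
    t = insert(std::move(t), i, 1.0);  // insert: fresh value per step
  return t;                            // load: finalized result
}
```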
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D136966
Outline the code that generates the loop structure for iterating over a dense
tensor or a sparse constant into genDenseTensorOrSparseConstantIterLoop.
Move a few routines to CodegenUtils for sharing.
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D136210
This differential replaces all uses of SparseTensorEncodingAttr::DimLevelType with DimLevelType. The next differential will break out a separate library for the DimLevelType enum, so that the Dialect code doesn't need to depend on the rest of the runtime.
Depends On D135995
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135996
This change is to make way for reusing the DimLevelType enum in lieu of the SparseTensorEncodingAttr::DimLevelType enum, but broken out to make it quick and easy to review.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135995
Move a few supporting routines for generating function calls to CodegenUtils so
that they can be used by the codegen path for sparse tensor file input and
output.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135691
Move genReshapeDstShape to codegen utils to support the rewriting of the tensor
reshape operators for the codegen path.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135074
Now that mlir_sparsetensor_utils is a public library, this differential renames the x-macros to help avoid namespace pollution issues.
Reviewed By: aartbik, Peiming
Differential Revision: https://reviews.llvm.org/D134988
`translateIndicesArray` takes an array of SSA values and converts them into another array of SSA values based on the reassociation. This makes it easier to reuse for the `foreach` operator (since the indices are given as an array of SSA values).
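The refactored helper has roughly this shape (signature paraphrased, so treat it as illustrative):

```cpp
#include "mlir/Dialect/Utils/ReshapeOpsUtils.h"
#include "mlir/IR/Builders.h"

using namespace mlir;

// Translates the source indices of a reshape into destination indices,
// following the reassociation groups; working on plain arrays of SSA
// values is what lets the `foreach` lowering reuse it.
static void translateIndicesArray(
    OpBuilder &builder, Location loc,
    ArrayRef<ReassociationIndices> reassociation,
    ValueRange srcIndices, ArrayRef<Value> srcShape,
    ArrayRef<Value> dstShape, SmallVectorImpl<Value> &dstIndices);
```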
Reviewed By: aartbik, bixia
Differential Revision: https://reviews.llvm.org/D134918
We recently removed the singleton dimension level type (see the revision https://reviews.llvm.org/D131002) since it was unimplemented but also incomplete (properties were missing). This revision adds singleton back as an extra dimension level type, together with the properties ordered/not-ordered and unique/not-unique. Even though it is still not lowered to actual code, this provides a complete way of defining many more sparse storage schemes (in the long run, we want to support even more dimension level types and properties using the additional extensions proposed in [Chou]).
Note that the current solution of using suffixes for the properties is not
ideal, but keeps the extension relatively simple with respect to parsing and
printing. Furthermore, it is rather consistent with the TACO implementation
which uses things like Compressed-Unique as well. Nevertheless, we probably
want to separate dimension level types from properties when we add more types
and properties.
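Concretely, the suffix scheme yields the following per-dimension level types as they print in the attribute (an illustrative list; "nu" = not-unique, "no" = not-ordered, no suffix = unique and ordered):

```cpp
// Illustrative: each level type crossed with the two properties.
const char *levelTypes[] = {
    "dense",
    "compressed", "compressed-nu", "compressed-no", "compressed-nu-no",
    "singleton",  "singleton-nu",  "singleton-no",  "singleton-nu-no",
};
```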
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D132897
This is the first PR to add `F16` and `BF16` support to the sparse codegen. There are still problems in supporting these two data types, such as `BF16` not quite working yet.
Add test cases.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D127010
The trick of using an empty token in the `FOREVERY_O` x-macro relies on preprocessor behavior which is only standard since C99 6.10.3/4 and C++11 N3290 16.3/4 (whereas it was undefined behavior up through C++03 16.3/10). Since the `ExecutionEngine/SparseTensorUtils.cpp` file is required to be compile-able under C++98 compatibility mode (unlike the C++11 used elsewhere in MLIR), we shouldn't rely on that behavior.
Also, using a non-empty suffix helps improve uniformity of the API, since all other primary/overhead suffixes are also non-empty. I'm using the suffix `0` since that's the value used by the `SparseTensorEncoding` attribute for indicating the index overhead-type.
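A reduced sketch of the pattern (names and signatures simplified from the actual SparseTensorUtils header):

```cpp
#include <cstdint>

// The overhead-type x-macro. Before this change the last entry passed
// an empty token for the suffix (`DO(, uint64_t)`), which is only
// well-defined preprocessor behavior since C99/C++11; `0` is non-empty
// and matches the SparseTensorEncoding value for the index overhead.
#define FOREVERY_O(DO)                                                 \
  DO(64, uint64_t)                                                     \
  DO(32, uint32_t)                                                     \
  DO(16, uint16_t)                                                     \
  DO(8, uint8_t)                                                       \
  DO(0, uint64_t) /* index_type */

// Stamps out one declaration per overhead type:
// sparsePointers64, ..., sparsePointers8, sparsePointers0.
#define DECL_SPARSEPOINTERS(INAME, I)                                  \
  void *sparsePointers##INAME(void *tensor, uint64_t dim);
FOREVERY_O(DECL_SPARSEPOINTERS)
#undef DECL_SPARSEPOINTERS
```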
Depends On D126720
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D126724
By defining the `{primary,overhead}TypeFunctionSuffix` functions via the same x-macros used to generate the runtime library's functions themselves, this helps avoid bugs from typos or things getting out of sync.
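A minimal sketch of the idea (types and names reduced for illustration; the real FOREVERY_O covers all overhead types):

```cpp
#include <cstdint>
#include <string>

// Reduced x-macro over (some of) the overhead types.
#define FOREVERY_O(DO) DO(64, uint64_t) DO(32, uint32_t)

enum class OverheadType : uint32_t { kU64, kU32 };

// Generating the suffix strings from the same list that stamps out the
// runtime functions keeps the two from drifting apart.
std::string overheadTypeFunctionSuffix(OverheadType ot) {
  switch (ot) {
#define CASE(ONAME, O) case OverheadType::kU##ONAME: return #ONAME;
  FOREVERY_O(CASE)
#undef CASE
  }
  return "";
}
```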
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D126720
This is the first implementation of complex (f64 and f32) support
in the sparse compiler, with complex add/mul as the first operations.
Note that various features are still TBD, such as other ops and
reading complex values from a file. Also, note that
std::complex<float> had a bit of an ABI issue when passed as a
single argument. It is still TBD whether better solutions are possible.
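On the ABI point: a sketch of the general issue and a common workaround (function names here are hypothetical, not necessarily what the runtime library does):

```cpp
#include <complex>

// std::complex<float> is a struct of two floats; how it is passed by
// value can differ from what code generated for a single 64-bit
// "complex" argument expects, depending on platform ABI rules.
extern "C" void addEltFragile(void *coo, std::complex<float> v);

// Passing through a pointer sidesteps the calling-convention question.
extern "C" void addEltRobust(void *coo, const std::complex<float> *v);
```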
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D125596