This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
Add new interfaces to SparseTensorEncodingAttr to construct the pointer/index types based on the pointer/index bitwidth.
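For illustration (a sketch, not taken from the patch), the bitwidths in question live on the encoding attribute itself; an encoding such as the following would presumably map to an i32 pointer type and an i64 index type:

  #CSR = #sparse_tensor.encoding<{
    dimLevelType = [ "dense", "compressed" ],
    pointerBitWidth = 32,  // pointer type becomes i32
    indexBitWidth = 64     // index type becomes i64
  }>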
Reviewed By: aartbik, wrengr
Differential Revision: https://reviews.llvm.org/D139141
Small change to support projected permutations in the
`getPermutedPosition` utility. Renamed to `getResultPosition`.
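For reference (a made-up example, not from the patch), a projected permutation is a permutation map that may also drop some of its inputs:

  // A projected permutation: permutes d0 and d2 and drops d1 from the results.
  affine_map<(d0, d1, d2) -> (d2, d0)>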
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D138946
The attribute tells the operator to handle symmetric structures for 2D tensors.
By default, the operator assumes the input tensor is not symmetric.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138230
This patch re-commits D137468 and D137463, which were reverted by mistake.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137579
This patch fixes:
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp:717:48:
error: comparison of integers of different signs: 'int64_t' (aka
'long') and 'uint64_t' (aka 'unsigned long')
[-Werror,-Wsign-compare]
This reverts commit 70508b614e.
This change depends on a reverted change that broke the Windows MLIR buildbot; reverting to bring the remaining MLIR bots back to green.
This is to allow the use of a nop convert to express that the sparse tensor
allocated through bufferization::AllocTensorOp will be expanded to sparse
tensor storage by sparse tensor codegen.
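A minimal sketch of the resulting idiom (assuming a #CSR encoding defined elsewhere; not lifted from the patch):

  %t = bufferization.alloc_tensor() : tensor<1024x1024xf64, #CSR>
  // A nop convert: same source and destination type.
  %s = sparse_tensor.convert %t
         : tensor<1024x1024xf64, #CSR> to tensor<1024x1024xf64, #CSR>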
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D136214
This removes another massive source of redundancy, and instead has Merger.{h,cpp} reuse the SparseTensorEnums library.
Depends On D136005
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D136123
This differential replaces all uses of SparseTensorEncodingAttr::DimLevelType with DimLevelType. The next differential will break out a separate library for the DimLevelType enum, so that the Dialect code doesn't need to depend on the rest of the runtime.
Depends On D135995
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135996
UnitAttr is optional but the unwrapped builders require it. Change the builders
to construct it from a bool, as required for the case where the attribute is not
set (for UnitAttr nothing needs to be constructed; this is true for the others
here too and can be addressed together).
Differential Revision: https://reviews.llvm.org/D135058
This extension to the sparse tensor type system in MLIR
opens up a whole new set of sparse storage schemes, such as
block sparse storage (e.g. BCSR) and ELL (aka jagged diagonals).
This revision merely introduces the type extension and
initial documentation. The actual interpretation of the type
(reading in tensors, lowering to code, etc.) will follow.
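As an illustration only, a sketch of what a BCSR-like encoding could look like with this extension (made-up 2x3 blocking; assuming the higherOrdering field introduced around this time):

  #BCSR = #sparse_tensor.encoding<{
    dimLevelType = [ "compressed", "compressed", "dense", "dense" ],
    higherOrdering = affine_map<(i, j) ->
        (i floordiv 2, j floordiv 3, i mod 2, j mod 3)>
  }>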
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D135206
This revision also adds convenience methods to test the
dim level type/property (with codegen being the first client).
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D134776
The indices for insert/compress were previously provided as
a memref<?xindex> with proper rank, since that matched the
argument for the runtime support library better. However, with
proper codegen coming, providing the indices as SSA values
is much cleaner. This also brings the sparse_tensor.insert
closer to unification with tensor.insert, planned in the
longer run.
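A minimal sketch of the new form (assuming a #CSR encoding defined elsewhere):

  // Indices are now plain SSA index values instead of a memref<?xindex>.
  sparse_tensor.insert %val into %t[%i, %j] : tensor<1024x1024xf64, #CSR>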
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D134404
The new select operation allows filtering of sparse tensors
by conditionally keeping or removing each element. This
can be used to remove negative values or select the upper
triangle of a matrix.
The select op has a single region which operates on a single
value and must yield a boolean: true to keep the element, false to drop it.
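For example, a sketch that keeps only strictly positive values (illustrative, not taken from the patch's tests):

  %result = sparse_tensor.select %x : f64 {
    ^bb0(%arg0: f64):
      %cf0 = arith.constant 0.0 : f64
      %keep = arith.cmpf ogt, %arg0, %cf0 : f64
      sparse_tensor.yield %keep : i1
  }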
Reviewed by: aartbik
Differential Revision: https://reviews.llvm.org/D133569
The "sparsification" pass does not need the ability to use runtime values for
the dimension, so the only source for variability would have been user code.
Restricting the dimension to constants simplifies code generation.
Reviewed By: Peiming, wrengr
Differential Revision: https://reviews.llvm.org/D133458
Introduce new sparse_tensor.storage_get/set operations to access the memory that stores the handle of a sparse tensor. The sparse tensor storage is represented as a tuple; these operations will later be eliminated and the tuple will be flattened after sparse tensor codegen.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D133049
We recently removed the singleton dimension level type (see the revision
https://reviews.llvm.org/D131002) since it was unimplemented but also
incomplete (properties were missing). This revision adds singleton back as
an extra dimension level type, together with the properties ordered/not-ordered
and unique/not-unique. Even though still not lowered to actual code, this
provides a complete way of defining many more sparse storage schemes (in
the long run, we want to support even more dimension level types and properties
using the additional extensions proposed in [Chou]).
Note that the current solution of using suffixes for the properties is not
ideal, but keeps the extension relatively simple with respect to parsing and
printing. Furthermore, it is rather consistent with the TACO implementation
which uses things like Compressed-Unique as well. Nevertheless, we probably
want to separate dimension level types from properties when we add more types
and properties.
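As an illustration of the suffix syntax (a sketch of a sorted COO-style encoding; "nu" denotes not-unique):

  #SortedCOO = #sparse_tensor.encoding<{
    dimLevelType = [ "compressed-nu", "singleton" ]
  }>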
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D132897
This change omits default values from the sparse tensor type,
saving considerable text real estate for the common cases.
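A hedged before/after sketch (assuming CSR with default bitwidths):

  // Before: defaults spelled out.
  #CSR = #sparse_tensor.encoding<{
    dimLevelType = [ "dense", "compressed" ],
    pointerBitWidth = 0,
    indexBitWidth = 0
  }>
  // After: the same type with the defaults omitted.
  #CSR = #sparse_tensor.encoding<{
    dimLevelType = [ "dense", "compressed" ]
  }>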
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D132083
A new sparse_tensor operation allows for
custom reduction code to be injected during
linalg.generic lowering for sparse tensors.
An identity value is provided to indicate
the starting value of the reduction. A single
block region is required to contain the
custom reduce computation.
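A minimal sketch (illustrative identity value and body, not taken from the patch):

  %cf1 = arith.constant 1.0 : f64
  %prod = sparse_tensor.reduce %x, %y, %cf1 : f64 {
    ^bb0(%a: f64, %b: f64):
      %ret = arith.mulf %a, %b : f64
      sparse_tensor.yield %ret : f64
  }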
Reviewed by: aartbik
Differential Revision: https://reviews.llvm.org/D128004
Now that we have an AllocTensorOp (previously InitTensorOp) in the bufferization dialect, the InitOp in the sparse dialect is no longer needed.
Differential Revision: https://reviews.llvm.org/D126180
When the sparse_tensor dialect lowers linalg.generic,
it makes inferences about how the operations should
affect the looping logic. For example, multiplication
is an intersection while addition is a union of two
sparse tensors.
The new binary and unary ops separate the looping logic
from the computation by nesting the computation code
inside a block which is merged at the appropriate level
in the lowered looping code.
The binary op can have custom computation code for the
overlap, left, and right regions. The
unary op can have custom computation code for the
present and absent values.
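For illustration, a sketch of a binary op that takes the elementwise maximum over the overlap and passes operands through unchanged elsewhere (adapted to the documented form, not copied from the patch):

  %result = sparse_tensor.binary %a, %b : f64, f64 to f64
    overlap={
      ^bb0(%arg0: f64, %arg1: f64):
        %cmp = arith.cmpf ogt, %arg0, %arg1 : f64
        %max = arith.select %cmp, %arg0, %arg1 : f64
        sparse_tensor.yield %max : f64
    }
    left=identity
    right=identity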
Reviewed by: aartbik
Differential Revision: https://reviews.llvm.org/D121018
Rationale:
- empty line between main include for this file
- moved include that actually defines code into the right section
Note that this revision started as breaking up ops/attrs even more
(for bug https://github.com/llvm/llvm-project/issues/52748), but due
to the connection in Dialect::initialize(), this cannot be split further.
All heavy lifting refactoring was already done by River in previous cleanup.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D119617