While dense tensors support random access, it is critical to visit them in row-major order for better cache locality. However, we previously considered dense inputs and outputs together when computing the constraints for building the iteration graph, which could lead to less efficient iteration graphs.
This patch adds a new `SortMask::kIncludeDenseInput` to treat dense inputs and outputs separately when building the iteration graph, thus increasing the chance of constructing a better iteration graph.
A more fine-grained approach is to treat each input separately.
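For illustration, a hedged sketch of how such a mask could be used when searching for a cycle-free iteration graph; only `kIncludeDenseInput` is named by this patch, and the other mask values and the helper below are assumptions:

```cpp
// Hypothetical sketch, not the actual pass code.
enum SortMask : unsigned {
  kSparseOnly = 0x0,         // constraints from sparse tensors only
  kIncludeDenseOutput = 0x1, // also add constraints from dense outputs
  kIncludeDenseInput = 0x2,  // also add constraints from dense inputs (new)
  kIncludeUndef = 0x4,
  kIncludeAll = 0x7,
};

bool computeIterationGraph(SortMask mask); // assumed helper (declaration only)

void sortLoops() {
  // Prefer the strictest constraint set (best locality for all dense
  // tensors), then gradually relax until an acyclic graph is found.
  const unsigned masks[] = {kIncludeAll, kIncludeDenseInput | kIncludeUndef,
                            kIncludeDenseInput, kSparseOnly};
  for (unsigned mask : masks)
    if (computeIterationGraph(static_cast<SortMask>(mask)))
      return;
}
```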
Note: related to
https://github.com/llvm/llvm-project/issues/51651
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D144932
This change adds a new `SparseTensorType` class for making the "dim" vs "lvl" distinction more overt, and for abstracting over the differences between sparse-tensors and dense-tensors. In addition, this change also adds new type aliases `Dimension`, `Level`, and `FieldIndex` to make code more self-documenting.
Although the diff is very large, the majority of the changes are mechanical in nature (e.g., changing types to use the new aliases, updating variable names to match, etc). Along the way I also made many variables `const` when they could be; the majority of which required only adding the keyword. A few places had conditional definitions of these variables, requiring actual code changes; however, that was only done when the overall change was extremely local and easy to extract. All these changes are included in the current patch only because it would be too onerous to split them off into a separate patch.
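As a rough, hedged sketch of the shape of these additions (the exact members, helper calls, and accessor names below are assumptions for illustration, not the actual class):

```cpp
#include "mlir/Dialect/SparseTensor/IR/SparseTensor.h"
#include <cstdint>

// New self-documenting aliases, as described above.
using Dimension = uint64_t; // index into the tensor's dimension-space
using Level = uint64_t;     // index into the storage's level-space
using FieldIndex = unsigned;

// Thin wrapper that keeps the "dim" vs "lvl" distinction overt and works
// uniformly for dense tensors (no encoding) and sparse tensors.
class SparseTensorType {
public:
  explicit SparseTensorType(mlir::RankedTensorType rtp)
      : rtp(rtp), enc(mlir::sparse_tensor::getSparseTensorEncoding(rtp)) {}

  bool hasEncoding() const { return static_cast<bool>(enc); }
  Dimension getDimRank() const { return rtp.getRank(); }
  // For this sketch, assume the encoding carries one entry per storage level;
  // dense tensors have one level per dimension.
  Level getLvlRank() const {
    return enc ? enc.getDimLevelType().size() : getDimRank();
  }

private:
  mlir::RankedTensorType rtp;
  mlir::sparse_tensor::SparseTensorEncodingAttr enc;
};
```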
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D143800
Previously, when performing a reduction on a sparse tensor, the result
would differ depending on iteration order. With the expanded access pattern,
an empty row would contribute no entry to the output, whereas with
lexicographic insertion the reduction identity would end up in the output.
This change keeps track of whether any entries were actually reduced in the
lexicographic case, making the output consistent between the two iteration
styles.
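The gist of the fix, as a hedged standalone illustration in plain C++ (not the generated IR): only materialize a result when at least one stored entry actually participated in the reduction.

```cpp
#include <optional>
#include <vector>

// Reduce one sparse row and report "no entry" when the row is empty, so an
// empty row never injects the reduction identity into the output.
std::optional<double> reduceRow(const std::vector<double> &storedValues) {
  double acc = 0.0;       // reduction identity for addition
  bool isReduced = false; // did any stored entry contribute?
  for (double v : storedValues) {
    acc += v;
    isReduced = true;
  }
  if (!isReduced)
    return std::nullopt; // empty row: contribute no entry to the output
  return acc;
}
```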
Differential Revision: https://reviews.llvm.org/D142050
This patch moves some utilities into the CodegenEnv class. It should make the code easier to follow, and it eliminates several indirect value assignments that use `ptr**`.
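A hedged before/after sketch of the kind of `ptr**` plumbing this removes (the names below are hypothetical):

```cpp
// Before: results handed back through pointer-to-pointer out-parameters.
//   static void genSomething(CodegenEnv *env, Value **resultOut);
//
// After: the environment owns the value; callers go through accessors.
//   env.updateSomething(newValue); // hypothetical setter
//   Value v = env.getSomething();  // hypothetical getter
```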
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D142040
All members are now private and access is through delegate
or convenience methods only (except the loop emitter, which
is still under refactoring).
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D140519
Also, as a proof of concept, all functionality related to reductions
has been refactored into private fields and a clean public API. As a
result, some dead code was found as well. This approach also simplifies
asserting on a proper environment state for each call.
NOTE: making all other fields private and migrating more methods into
this new class is still TBD in yet another follow-up revision!
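As a hedged sketch of what that clean public reduction API might look like (method names and fields are assumptions for illustration):

```cpp
#include "mlir/IR/Value.h"

class CodegenEnv {
public:
  void startReduc(unsigned exp, mlir::Value init); // begin a reduction
  void updateReduc(mlir::Value val);               // update the running value
  mlir::Value getReduc() const { return redVal; }  // read the running value
  mlir::Value endReduc();                          // finish and clear state

private:
  unsigned redExp = 0u; // tensor expression being reduced
  mlir::Value redVal;   // current SSA value of the reduction
};
```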
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D140443
This cleans up a lot of parameter passing. It also prepares for adding
proper "delegate" functions to the new environment and for moving this
out into its own class with a better OO design.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D140257
std::optional::value() has undesired exception checking semantics and is
unavailable in older Xcode (see _LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS). The
call sites block std::optional migration.
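The mechanical rewrite is roughly the following (pure C++ illustration):

```cpp
#include <cassert>
#include <optional>

int example(std::optional<int> maybe) {
  // Before: maybe.value() performs an exception-throwing check and is
  // unavailable with older deployment targets (see
  // _LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS).
  // After: check explicitly, then use the unchecked accessors.
  assert(maybe.has_value() && "expected a value");
  return *maybe; // or maybe->member for member access
}
```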
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
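For example, the mechanical replacement looks like:

```cpp
#include <optional>

// Before: llvm::Optional<unsigned> alignment = llvm::None;
// After:
std::optional<unsigned> alignment = std::nullopt;
```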
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
This is the first patch in a sequence of dependent patches that together provide support for affine expressions in the indexing maps of sparse tensors.
This patch itself simply moves `genAffine` into the loop emitter, in preparation for the upcoming patches.
D138169 provides support for affine expressions on dense dimensions only (except for constant affine expressions)
D138170 provides support for constant affine expressions on dense dimensions
D138171 provides **merger** support for affine expressions on sparse dimensions (without codegen)
D138172 provides **codegen** support (by generating a "filter" loop) for affine expressions on sparse dimensions.
D138173 fixes a crash in resolveCycle when dealing with affine expressions.
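For context, a hedged C++ sketch of the kind of affine expression these patches target, a convolution-like access `A[i + j]`; the helper name below is illustrative only:

```cpp
#include "mlir/IR/AffineExpr.h"
#include "mlir/IR/AffineMap.h"
#include "mlir/IR/MLIRContext.h"

// Build the indexing map (d0, d1) -> (d0 + d1), i.e. an affine expression
// rather than a plain dimension, which is what the follow-up patches teach
// sparsification to handle.
mlir::AffineMap buildConvLikeMap(mlir::MLIRContext *ctx) {
  mlir::AffineExpr i = mlir::getAffineDimExpr(0, ctx);
  mlir::AffineExpr j = mlir::getAffineDimExpr(1, ctx);
  return mlir::AffineMap::get(/*dimCount=*/2, /*symbolCount=*/0, {i + j}, ctx);
}
```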
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138168
[RFC: EnumAttr for iterator types in Linalg](https://discourse.llvm.org/t/rfc-enumattr-for-iterator-types-in-linalg/64535)
This change touches, and probably breaks, most of the code that creates `linalg.generic`. The fix is to replace calls to `getParallelIteratorTypeName`/`getReductionIteratorTypeName` with `mlir::utils::IteratorType::parallel`/`mlir::utils::IteratorType::reduction`, and to change types from `StringRef` to `mlir::utils::IteratorType`.
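The migration in client code is roughly the following (following the guidance above; exact call sites vary):

```cpp
// Before: string-based iterator types.
//   SmallVector<StringRef> iterators(rank, getParallelIteratorTypeName());
//
// After: strongly typed enum values.
//   SmallVector<mlir::utils::IteratorType> iterators(
//       rank, mlir::utils::IteratorType::parallel);
```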
Due to limitations of tablegen, the shared C++ definition of the IteratorType enum lives in StructuredOpsUtils.td, but each dialect should have its own EnumAttr wrapper. To avoid conflicts, all enums in a dialect are put into a separate file with a separate tablegen rule.
Test dialect td files are refactored a bit.
Printed format of `linalg.generic` temporarily remains unchanged to avoid breaking code and tests in the same change.
Differential Revision: https://reviews.llvm.org/D137658
This patch fixes and re-lands the reverted D135927 (which had caused a Windows build failure), re-enabling parallel for/reduction. It also fixes a warning caused by D137442.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137565
- The argument name 'isLastOutput' in the comment does not match the parameter name
'hasOutput'.
- 'override' is redundant since the function is already declared 'final'.
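Illustrative only; the class and method names below are hypothetical:

```cpp
struct Base {
  virtual void genBody(bool hasOutput) = 0;
  virtual ~Base() = default;
};

struct Impl final : Base {
  /// \param hasOutput ...   (comment now matches the parameter name)
  void genBody(bool hasOutput) final; // 'override' dropped: redundant with 'final'
};
```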
The alloc->insert/compress->load chain now needs to be
properly represented as an SSA chain in loops
and if statements to reflect the modifying
behavior (the runtime support library is forgiving about breaking
this, but the new codegen is not).
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D136966
This removes another massive source of redundancy, and instead has the Merger.{h,cpp} reuse the SparseTensorEnums library.
Depends On D136005
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D136123
This differential replaces all uses of SparseTensorEncodingAttr::DimLevelType with DimLevelType. The next differential will break out a separate library for the DimLevelType enum, so that the Dialect code doesn't need to depend on the rest of the runtime.
Depends On D135995
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135996
`getIteratorTypesArray` should be used instead. It's a better substitute for all the current usages of the interface.
The current `ArrayAttr iterator_types()` has a few problems:
* It creates the assumption that an operation has iterator types as an attribute, but that's not always the case. Sometimes iterator types can be inferred from other attributes, or they are simply static.
* ArrayAttr is an obscure container and requires extracting values in the client code.
* Makes it hard to migrate iterator types from strings to enums ([RFC](https://discourse.llvm.org/t/rfc-enumattr-for-iterator-types-in-linalg/64535/9)).
Concrete ops, like `linalg.generic` will still have iterator types as an attribute if needed.
As a side effect, this change helps a bit with migration to prefixed accessors.
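A hedged sketch of the intended migration in client code (the element type returned by `getIteratorTypesArray` may differ between revisions):

```cpp
// Before: unpack the raw ArrayAttr by hand.
//   for (Attribute attr : linalgOp.iterator_types())
//     StringRef kind = attr.cast<StringAttr>().getValue();
//
// After: use the interface method, no attribute unpacking needed.
//   for (auto kind : linalgOp.getIteratorTypesArray())
//     ...; // kind is a concrete iterator-type value
```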
Differential Revision: https://reviews.llvm.org/D135765
This extension to the sparse tensor type system in MLIR
opens up a whole new set of sparse storage schemes, such as
block sparse storage (e.g. BCSR) and ELL (aka jagged diagonals).
This revision merely introduces the type extension and
initial documentation. The actual interpretation of the type
(reading in tensors, lowering to code, etc.) will follow.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D135206
The actual transformation doesn't support multi-output GenericOps, but
if we encounter one without sparse annotations we can just leave it
alone.
Differential Revision: https://reviews.llvm.org/D135176
tensor.empty/linalg.init_tensor produces an uninitialized tensor that can be used as a destination operand for destination-style ops (ops that implement `DestinationStyleOpInterface`).
This change makes it possible to implement `TilingInterface` for non-destination-style ops without depending on the Linalg dialect.
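A hedged C++ sketch of creating such a destination from a pass or pattern; the exact `tensor::EmptyOp` builder overload used here is an assumption:

```cpp
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/Builders.h"

// Build an uninitialized 8x16xf32 tensor to use as the destination
// (init/outs) operand of a destination-style op.
mlir::Value makeDestination(mlir::OpBuilder &b, mlir::Location loc) {
  return b.create<mlir::tensor::EmptyOp>(loc, llvm::ArrayRef<int64_t>{8, 16},
                                         b.getF32Type());
}
```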
RFC: https://discourse.llvm.org/t/rfc-add-tensor-from-shape-operation/65101
Differential Revision: https://reviews.llvm.org/D135129