The goal is to make the naming of the future `_extended` ops more
consistent. With unsigned addition, the carry flag and the overflow
bit are the same, but this does not hold for signed addition.
Also rename the second result from `carry` to `overflow`.
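For example (a minimal C++ sketch using 8-bit values for brevity; illustrative
only, not part of the patch):
```
#include <cstdint>
#include <cstdio>

int main() {
  // Unsigned addition: the carry-out bit and the overflow flag coincide.
  uint8_t ua = 200, ub = 100;
  uint8_t usum = (uint8_t)(ua + ub);        // wraps to 44
  bool carry = usum < ua;                   // true: unsigned overflow

  // Signed addition: carry-out and signed overflow differ.
  int8_t sa = 100, sb = 100;                // 100 + 100 = 200 > INT8_MAX
  int8_t ssum = (int8_t)(uint8_t)(sa + sb); // two's complement wrap to -56
  bool carryOut = (uint8_t)sa + (uint8_t)sb > 0xFF;  // false: no carry-out
  bool overflow = (sa > 0 && sb > 0 && ssum < 0) ||
                  (sa < 0 && sb < 0 && ssum >= 0);   // true: signed overflow
  printf("carry=%d carryOut=%d overflow=%d\n", carry, carryOut, overflow);
  return 0;
}
```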
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D139569
At the moment, these patterns are part of EmptyOp::getCanonicalizationPatterns.
When extract_slice(tensor.empty) is rewritten as a new tensor.empty, we can
end up with two tensor.empty ops, because the original tensor.empty can have
multiple users. After bufferization, such cases result in two allocations.
Differential Revision: https://reviews.llvm.org/D139308
This adds a `tile_size` parameter; when it is used, the tiles are
cyclically distributed onto the threads of the `scf.foreach_thread` op.
Differential Revision: https://reviews.llvm.org/D139474
While it is unlikely to matter in practice, there is no reason
for this value to be larger than it should be. 64 bytes is the
size of a cache line on most machines, and we can fit a full
512-bit vector in it.
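As a quick sanity check of that arithmetic (illustrative, not from the patch):
```
// A 512-bit vector occupies 512 / 8 = 64 bytes, i.e. one typical cache line,
// so 64-byte alignment also cache-line-aligns the buffer.
static_assert(512 / 8 == 64, "a 512-bit vector fits in 64 bytes");
alignas(64) static unsigned char buffer[64];
```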
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D139434
The new function is a wrapper around the regular `getStridesAndOffset`
that offers a more compact way (as in writing less code) of getting the
relevant information.
This method is intended to be used only when the `LogicalResult` of the
regular `getStridesAndOffset` is known to be "succeeded".
The wrapper asserts on that.
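A minimal sketch of such a wrapper, assuming the existing failable
`getStridesAndOffset(MemRefType, SmallVectorImpl<int64_t> &, int64_t &)` form
(the name and return type below are illustrative):
```
#include "mlir/IR/BuiltinTypes.h"
#include <cassert>
#include <utility>

using namespace mlir;

static std::pair<SmallVector<int64_t>, int64_t>
getStridesAndOffsetChecked(MemRefType type) {
  SmallVector<int64_t> strides;
  int64_t offset;
  // Assert on success instead of propagating the LogicalResult; callers
  // must already know the type is a strided memref.
  LogicalResult res = getStridesAndOffset(type, strides, offset);
  (void)res;
  assert(succeeded(res) && "expected a strided memref type");
  return {std::move(strides), offset};
}
```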
Differential Revision: https://reviews.llvm.org/D139529
This covers `SDot`, `SUDot`, and `UDot`. The `*AccSat` variants will be
added in a follow-up revision.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D139242
It introduces a pattern that swaps `linalg.generic + tensor.pack` to
`tensor.pack + linalg.generic`. It requires that all iteration types
be parallel and that the indexing map of the output operand be the
identity. Both restrictions can be relaxed in the future.
The user can decide whether the propagation should be applied or not by
passing a control function.
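A hedged sketch of how a caller might supply such a control function; the
populate-function entry point and callback signature below are assumptions for
illustration, not quoted from the patch:
```
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

void addPropagationPatterns(MLIRContext *ctx) {
  RewritePatternSet patterns(ctx);
  // Hypothetical control function: allow propagation only inside
  // functions named "hot"; veto the rewrite everywhere else.
  auto controlFn = [](Operation *op) -> bool {
    auto parent = op->getParentOfType<func::FuncOp>();
    return parent && parent.getName() == "hot";
  };
  // The exact upstream entry point may differ:
  // linalg::populateDataLayoutPropagationPatterns(patterns, controlFn);
  (void)controlFn;
}
```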
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D138882
This revision adapts the pattern in Linalg to work on the DPS interface and
adds it to the canonicalization patterns of the tensor dialect.
InsertSliceOp is skipped in the pattern because it has its own logic
for folding tensor.cast ops.
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D139375
Currently, assemblyFormat `custom<A>($a) custom<B>($b)` has different spacing
if used for Ops vs Attrs/Types. Ops insert a space if needed before the custom directive,
while attributes and types do not.
This leads to the following two patterns in attributes / types:
```
# 1. Whitespace literal
let assemblyFormat = "... ` ` custom<A>($a)"
# 2. Custom printer code includes spacing
void printB(...) {
  printer << ' ' << b;
}
```
Moving this spacing into the generated code allows for some cleanup in mlir and
improves the consistency of custom directives.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D138235
This patch adds `replaceAllUsesExcept` to the rewriter class.
The implementation is copied from Value, plus a call to
`updateRootInPlace` to notify the listeners about the
corresponding IR changes.
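Illustrative usage inside a rewrite (the names here are placeholders):
```
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// Replace all uses of op's result with newValue, except for the uses
// inside exceptedUser; listeners are notified of the in-place updates.
static void replaceExceptOne(RewriterBase &rewriter, Operation *op,
                             Value newValue, Operation *exceptedUser) {
  rewriter.replaceAllUsesExcept(op->getResult(0), newValue, exceptedUser);
}
```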
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D139382
The pass was not checking for uninitialized states due to dead code.
This patch also makes LLVMFuncOp correctly return a null body when it is
external.
Fixes #58807
Depends on D139388
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D139389
This change cleans up the conversion pass regarding the "dim"-vs-"lvl" and "sizes"-vs-"shape" distinctions of the runtime. A quick synopsis:
* Adds new `SparseTensorStorageBase::getDimSize` method, with `sparseDimSize` wrapper in SparseTensorRuntime.h, and `genDimSizeCall` generator in SparseTensorConversion.cpp
* Changes `genLvlSizeCall` to perform no logic, just generate the function call.
* Adds `createOrFold{Dim,Lvl}Call` functions to handle the logic of replacing `gen{Dim,Lvl}SizeCall` with constants whenever possible. The `createOrFoldDimCall` function replaces the old `sizeFromPtrAtDim`.
* Adds `{get,fill}DimSizes` functions for iterating `createOrFoldDimCall` across the whole type. These functions replace the old `sizesFromPtr`.
* Adds `{get,fill}DimShape` functions for lowering a `ShapedType` into constants. These functions replace the old `sizesFromType`.
* Changes the `DimOp` rewrite to do the right thing.
* Changes the `ExpandOp` rewrite to compute the proper expansion size.
Depends On D138365
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D139165
This patch abstracts the sparse tensor memory scheme into a SparseTensorDescriptor class. Previously, field accesses were performed in a relatively error-prone way; this patch hides the hairy details behind a SparseTensorDescriptor class to let users access sparse tensor fields in a more cohesive way.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138627
We can compute the offsets and sizes for the slice of the input because the
iteration domain is defined over the outer loops. If a dimension is tiled,
the i-th index is the product of offset_i and inner_tile_i.
Unlike when tiling a pad op, we do not have to deal with reading zero
data from the input, because the tiling sizes apply to the packed outer
dimensions: we read either an entire tile or a partial tile for each
packed tile. The scf.if and tensor.generate ops are not needed in this
context.
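A tiny worked example of that index computation (hypothetical numbers):
```
#include <cstdint>
#include <cstdio>

int main() {
  // A packed dimension with inner_tile = 8 whose tiled outer loop yields
  // offset = 2: the slice of the unpacked input starts at 2 * 8 = 16.
  int64_t innerTile = 8, outerOffset = 2;
  int64_t inputOffset = outerOffset * innerTile;
  printf("input slice starts at index %lld\n", (long long)inputOffset);
  return 0;
}
```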
Co-authored-by: Lorenzo Chelini <l.chelini@icloud.com>
Reviewed By: rengolin, mravishankar
Differential Revision: https://reviews.llvm.org/D138631
This patch removes the implementation of TypedAttr and ElementsAttr
from DenseArrayAttr and, in doing so, removes the need to store a shaped
type. The attribute now stores a size (number of elements), an MLIR type
as a discriminator, and a raw byte array.
The intent of DenseArrayAttr was not to be a drop-in replacement for DenseElementsAttr. It was meant to be a simple container of integers or floats that map to C++ types. The ElementsAttr implementation on DenseArrayAttr had many holes in it, and fixing those holes would require evolving DenseArrayAttr in a way that is incompatible with its original purpose.
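For illustration, a small sketch of that intended use (treat the exact
accessor names as assumptions):
```
#include "mlir/IR/BuiltinAttributes.h"

using namespace mlir;

// DenseArrayAttr elements map directly to C++ types; no shaped type is
// stored on the attribute itself.
static void denseArrayExample(MLIRContext *ctx) {
  DenseI32ArrayAttr attr = DenseI32ArrayAttr::get(ctx, {1, 2, 3});
  ArrayRef<int32_t> values = attr.asArrayRef(); // plain int32_t view
  (void)values;
}
```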
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D137606
Calculating the position of the region trailing objects isn't free,
given that regions are the last trailing object, and inlining the size
check removes the need for users to add explicit size checks as a
micro-optimization.
Add support for loading, computing, and storing `gpu.subgroup` WMMA ops
in transpose mode. Update the GPU to NVVM lowerings to support
`transpose` mode and update the integration tests as well.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D139021
Support affine.parallel in the index set analysis. This allows dependence analysis of code containing affine.parallel, in addition to affine.for and affine.if. This change only supports constant lower/upper bounds in affine.parallel; more complicated affine map bounds will be supported in later commits.
See https://github.com/llvm/llvm-project/issues/57327
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D136056
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
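The mechanical replacement, illustrated:
```
#include <optional>

// before: llvm::Optional<int> maybe = llvm::None;
std::optional<int> maybe = std::nullopt; // after
```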
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
Querying the type trait can be done at compile time, so replace the
runtime `if` with `if constexpr`.
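A generic illustration of the idiom (not the patched code itself):
```
#include <type_traits>

template <typename T>
T twiceIfIntegral(T value) {
  // The trait is known at compile time, so only the taken branch is
  // instantiated and no runtime check remains.
  if constexpr (std::is_integral_v<T>)
    return value * 2;
  return value;
}
```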
Differential Revision: https://reviews.llvm.org/D139264
`getKey` and `getHash` use mutually exclusive overloads, based on the
existence of certain methods, to determine how to get the key or compute the
hash, respectively. This is a bit verbose with `std::enable_if_t`. Simplify
it a bit by using
`if constexpr` directly. As an added bonus, this is slightly quicker to
compile.
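An illustrative sketch of the simplification (the detection trait and the
fallback are made up for the example):
```
#include <type_traits>
#include <utility>

// Detect a member getHash() ...
template <typename T, typename = void>
struct has_get_hash : std::false_type {};
template <typename T>
struct has_get_hash<T,
    std::void_t<decltype(std::declval<const T &>().getHash())>>
    : std::true_type {};

// ... and branch with `if constexpr` instead of two enable_if overloads.
template <typename T>
unsigned getHash(const T &value) {
  if constexpr (has_get_hash<T>::value)
    return value.getHash();  // the type provides its own hash
  else
    return sizeof(T);        // placeholder fallback for the sketch
}
```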
Differential Revision: https://reviews.llvm.org/D139245
This commit performs two related changes. First we adjust `readCOOValue` to take the `IsPattern` bool as a template parameter rather than a function argument. Second we factor `readCOOLoop` out from `readCOO`, and template it on `IsPattern` and `IsSymmetric`. Together this moves all the assertions and header-dependent conditionals out of the main for-loop of `readCOO`. The only remaining conditional is in the `IsSymmetric=true` variant: checking whether the element is on the diagonal or not (which cannot be lifted out of the loop).
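The general shape of that templating trick (a self-contained sketch, not the
actual reader code):
```
#include <cstdio>

// The loop-invariant IsPattern flag becomes a template parameter, so the
// conditional is resolved at compile time and vanishes from the loop body.
template <bool IsPattern>
void readValues(const double *data, int n) {
  for (int i = 0; i < n; ++i) {
    if constexpr (IsPattern)
      printf("%f\n", 1.0);      // pattern inputs have implicit unit values
    else
      printf("%f\n", data[i]);  // otherwise read the stored value
  }
}

int main() {
  const double d[] = {1.5, 2.5};
  readValues<false>(d, 2);
  readValues<true>(nullptr, 2);
  return 0;
}
```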
Depends On D138363
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138365
This commit updates how the `SparseTensorConversion` pass handles `NewOp`. It breaks up the underlying `openSparseTensor` function into two parts (`SparseTensorReader::create` and `SparseTensorReader::readSparseTensor`) so that the pass can inject code for constructing `lvlSizes` between those two parts. Migrating the construction of `lvlSizes` out of the runtime and into the pass is a necessary first step toward fully supporting non-permutations. (The alternative would be for the pass to generate a `FuncOp` for performing the construction and then passing that to the runtime; which doesn't seem to have any benefits over the design of this commit.) And since the pass now generates the code to call these two functions, this change also removes the `Action::kFromFile` value from the enum used by `_mlir_ciface_newSparseTensor`.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138363
RewriterBase is the proper builder to use so that one can listen to IR modifications (i.e., not just creations).
Differential Revision: https://reviews.llvm.org/D137922
TensorCopyInsertion should not have been exposed as a pass. This was a flaw in the original design. It is a preparation step for bufferization, and certain transforms (that would otherwise be legal) are illegal between TensorCopyInsertion and the actual rewrite to MemRef ops. Therefore, even if broken down into two separate steps internally, they should be exposed as a single pass.
This change affects the sparse compiler, which uses `TensorCopyInsertionPass`. A new `SparsificationAndBufferizationPass` is added to replace all passes in the sparse tensor pipeline from `TensorCopyInsertionPass` until the actual bufferization (rewrite to memref/non-tensor). It is generally unsafe to run arbitrary passes in-between, in particular passes that hoist tensor ops out of loops or change SSA use-def chains along tensor ops.
Differential Revision: https://reviews.llvm.org/D138915
This reverts commit 4e6dab98e0.
Re-apply D138988 after fixing an error on Windows. Remove the test for
boolean attributes, as it does not make sense to apply these constraints
to a boolean array.
ConstantOp uses `%idx<value>` and BoolConstantOp uses true/false,
similar to the printing for arith::ConstantOp.
Differential Revision: https://reviews.llvm.org/D139175