In several cases, the splitting may be known to be a no-op, i.e., to produce
no second part. Thread this information through the transform utilities
to the transform dialect, and differentiate it from the error state.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D141138
A transformation tiling a reduction dimension of a Linalg op needs a
tile size for said dimension. When too few tile sizes were provided, the
transformation would segfault due to an out-of-bounds vector access.
Also fix incorrect error reporting in the structured transform op
exercising this functionality.
Reviewed By: springerm, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D141046
Make sure that both functions populate patterns with the same functionality. Both should be refactored at some point so that canonicalization patterns are not populated.
Differential Revision: https://reviews.llvm.org/D140372
The pattern generalizes a tensor::UnPackOp into a sequence of tensor +
Linalg ops when the outer dims are all 1s. It uses a rank-reduced
tensor.extract_slice to get the tile, transposes the tile, extracts a
sub-tile if the tile is incomplete, and uses tensor.insert_slice to
insert the result into the destination tensor.
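For instance, a rough sketch of the rewrite with made-up shapes and an
incomplete tile (illustrative only, not taken from the patch's tests):

```mlir
%0 = tensor.unpack %src inner_dims_pos = [0, 1] inner_tiles = [8, 4]
    into %dest : tensor<1x1x8x4xf32> -> tensor<7x3xf32>
```

becomes approximately:

```mlir
%tile = tensor.extract_slice %src[0, 0, 0, 0] [1, 1, 8, 4] [1, 1, 1, 1]
    : tensor<1x1x8x4xf32> to tensor<8x4xf32>  // rank-reduced
// A linalg.transpose would be emitted here if inner_dims_pos were permuted.
%sub = tensor.extract_slice %tile[0, 0] [7, 3] [1, 1]
    : tensor<8x4xf32> to tensor<7x3xf32>      // drop the padded remainder
%0 = tensor.insert_slice %sub into %dest[0, 0] [7, 3] [1, 1]
    : tensor<7x3xf32> into tensor<7x3xf32>
```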
Reviewed By: tyb0807, chelini
Differential Revision: https://reviews.llvm.org/D140254
This function used to create new ops even if the vectorization failed; those ops were then folded away. This caused the GreedyPatternRewriter to no longer terminate: each time the IR is modified, one more iteration runs.
Differential Revision: https://reviews.llvm.org/D140286
std::optional::value() has undesired exception checking semantics and is
unavailable in older Xcode (see _LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS). The
call sites block std::optional migration.
This is part of an effort to migrate from llvm::Optional to
std::optional. This patch changes the way mlir-tblgen generates .inc
files, and modifies tests and documentation appropriately. It is a "no
compromises" patch, and doesn't leave the user with an unpleasant mix of
llvm::Optional and std::optional.
A non-trivial change has been made to ControlFlowInterfaces to split one
constructor into two, relating to a build failure on Windows.
See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>
Differential Revision: https://reviews.llvm.org/D138934
The pattern generalizes a tensor::PackOp into a sequence of tensor +
Linalg ops when the outer dims are all 1s. It uses a rank-reduced
tensor.extract_slice to get the tile, transposes the tile, and uses
tensor.insert_slice to insert it at the inner-tile position of the
destination tensor.
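For instance, a rough sketch with made-up shapes (illustrative only, not
taken from the patch's tests):

```mlir
%0 = tensor.pack %src inner_dims_pos = [1, 0] inner_tiles = [2, 8]
    into %dest : tensor<8x2xf32> -> tensor<1x1x2x8xf32>
```

becomes approximately (the extract_slice folds away here because the
source is exactly one tile):

```mlir
%init = tensor.empty() : tensor<2x8xf32>
%tile = linalg.transpose ins(%src : tensor<8x2xf32>)
    outs(%init : tensor<2x8xf32>) permutation = [1, 0]
%0 = tensor.insert_slice %tile into %dest[0, 0, 0, 0] [1, 1, 2, 8] [1, 1, 1, 1]
    : tensor<2x8xf32> into tensor<1x1x2x8xf32>
```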
Reviewed By: pifon2a, tyb0807
Differential Revision: https://reviews.llvm.org/D140060
MoveInitOperandsToInput is put into a separate populate... function because it can interfere with certain transformations.
Differential Revision: https://reviews.llvm.org/D140091
This change extends the `ReplaceUnitExtents` pattern so that users can choose between two strategies for generating rank reductions (sketched below):
* CollapseShapeOp / ExpandShapeOp (was already implemented but code was cleaned up; default strategy)
* rank-reducing ExtractSliceOp / InsertSliceOp
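Roughly, on a tensor with a unit dim, the two strategies look like this
(illustrative types, not from the patch):

```mlir
// Strategy 1: collapse the unit dim (undone later with tensor.expand_shape).
%a = tensor.collapse_shape %t [[0, 1]] : tensor<1x4xf32> into tensor<4xf32>
// Strategy 2: rank-reducing slice (undone later with tensor.insert_slice).
%b = tensor.extract_slice %t[0, 0] [1, 4] [1, 1]
    : tensor<1x4xf32> to tensor<4xf32>
```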
Also add helper functions to the memref dialect that we already have on the tensor dialect: `getMixedSizes`, `createCanonicalRankReducingSubViewOp`, `rankReduceIfNeeded`.
We are using ReassociationIndices instead of ReassociationExprs in many other places, and this makes the code easier to read. Also adding a new test case (which also passed before).
Differential Revision: https://reviews.llvm.org/D139947
This patch fixes:
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp:52:42:
error: 'None' is deprecated: Use std::nullopt
instead. [-Werror,-Wdeprecated-declarations]
Consider the case of generic + pack (with outer_dims_perm): it is
equivalent to generic + pack + transpose. There are two steps to bubble
up the pack op across the generic op.
Step 1. Swap generic + pack -> pack + generic.
In this step, we can bind the packing information to the dimensions of
the iteration domain. With that information, we can pack the operands
with the corresponding data tile sizes; the packed inner dimensions are
appended to the indexing_maps. Note that the outer dimensions of the
indexing maps are not changed at all.
Step 2. Fold the transpose into the generic op.
This step just updates the indexing map, so we no longer have to handle
outer_dims_perm.
A step 3 could extract the transpose op (i.e., generic -> transpose +
generic), so that the transpose can then be folded into the pack op.
This step is not done in this revision.
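A rough sketch of step 1 with illustrative shapes and identity maps
(%b, %packed_init, and %dest are assumed destination tensors; not taken
from the tests):

```mlir
#id2 = affine_map<(d0, d1) -> (d0, d1)>
%0 = linalg.generic {indexing_maps = [#id2, #id2],
                     iterator_types = ["parallel", "parallel"]}
    ins(%a : tensor<128x256xf32>) outs(%b : tensor<128x256xf32>) {
^bb0(%in: f32, %out: f32):
  %e = arith.mulf %in, %in : f32
  linalg.yield %e : f32
} -> tensor<128x256xf32>
%1 = tensor.pack %0 inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<128x256xf32> -> tensor<16x8x8x32xf32>
```

becomes:

```mlir
#id4 = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>
%pa = tensor.pack %a inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %packed_init : tensor<128x256xf32> -> tensor<16x8x8x32xf32>
%1 = linalg.generic {indexing_maps = [#id4, #id4],
                     iterator_types = ["parallel", "parallel", "parallel", "parallel"]}
    ins(%pa : tensor<16x8x8x32xf32>) outs(%dest : tensor<16x8x8x32xf32>) {
^bb0(%in: f32, %out: f32):
  %e = arith.mulf %in, %in : f32
  linalg.yield %e : f32
} -> tensor<16x8x8x32xf32>
```

Here (d2, d3) are the appended packed inner dimensions, while the outer
dimensions (d0, d1) of the maps are untouched.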
Co-authored-by: Lorenzo Chelini <l.chelini@icloud.com>
Reviewed By: chelini
Differential Revision: https://reviews.llvm.org/D139680
This patch introduces the initial bits to support vector masking
using the `vector.mask` operation. Vectorization changes should be
NFC for non-masked cases. We can't test masked cases directly until
we extend the Transform dialect to support masking.
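For reference, a minimal sketch of the kind of IR the masked path
targets (illustrative values and types):

```mlir
%c0 = arith.constant 0 : index
%c4 = arith.constant 4 : index
%pad = arith.constant 0.0 : f32
%mask = vector.create_mask %c4 : vector<8xi1>
%v = vector.mask %mask {
  vector.transfer_read %t[%c0], %pad : tensor<8xf32>, vector<8xf32>
} : vector<8xi1> -> vector<8xf32>
```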
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D137690
This patch implements the vectorization of tensor.extract for arbitrary
tensors. It basically extends https://reviews.llvm.org/D133786 by adding
support for n-D tensors (n >= 2). This is implemented by essentially
flattening the indices.
When benchmarking the vectorized code, we have observed that it is
slower than the scalar code. That is most likely due to sub-optimal (and,
in general, slow) gather loads. More work is needed to identify an
implementation and/or a representation that would lead to better code.
In the meantime, the vectorization of n-D tensors (where n >= 2) has to
be explicitly enabled. This can be done either via:
* the Transform dialect's `vectorize_nd_extract` attribute,
* dedicated bool argument in the `vectorize` method from
"Vectorization.cpp".
The second option was added to control the new functionality through
means other than the Transform dialect.
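For the first option, the usage is roughly as follows (a sketch; the
handle names are assumed):

```mlir
transform.sequence failures(propagate) {
^bb0(%arg0: !pdl.operation):
  %0 = transform.structured.match ops{["linalg.generic"]} in %arg0
  transform.structured.vectorize %0 {vectorize_nd_extract}
}
```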
Related discussion: https://github.com/iree-org/iree/issues/9198
Differential Revision: https://reviews.llvm.org/D137660
In the process, numerous insertion point issues were found and fixed.
RAII on insertion points is now used more diligently.
Differential Revision: https://reviews.llvm.org/D139714
If an input bbArg is not used, its corresponding input operand is removed. If there are duplicate input operands, or input operands that also appear as output operands, the duplicate input operands are removed. Output operands are never modified.
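For example (a sketch), the first input operand below is dropped because
its block argument %unused is never used:

```mlir
#map = affine_map<(d0) -> (d0)>
%r = linalg.generic {indexing_maps = [#map, #map, #map],
                     iterator_types = ["parallel"]}
    ins(%a, %b : tensor<8xf32>, tensor<8xf32>)
    outs(%init : tensor<8xf32>) {
^bb0(%unused: f32, %in: f32, %out: f32):
  linalg.yield %in : f32
} -> tensor<8xf32>
```

Similarly, if %a and %b were the same value, one of the two input
operands would be dropped and both block arguments would be mapped to
the remaining one.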
Differential Revision: https://reviews.llvm.org/D139709
At the moment, they are part of EmptyOp::getCanonicalizationPatterns. When
extract_slice(tensor.empty) is rewritten as a new tensor.empty, we can end
up with two tensor.empty ops, since the original tensor.empty can have
other users. After bufferization, such cases result in two allocations.
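For example, the rewrite turns:

```mlir
%0 = tensor.empty() : tensor<10xf32>
%1 = tensor.extract_slice %0[0] [5] [1] : tensor<10xf32> to tensor<5xf32>
```

into a fresh `%1 = tensor.empty() : tensor<5xf32>`; if %0 has another
user, both tensor.empty ops survive and bufferize to two allocations.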
Differential Revision: https://reviews.llvm.org/D139308
The only way to do this with the current hoisting strategy is by
lowering Affine to SCF first, but that prevents further passes on
Affine.
Differential Revision: https://reviews.llvm.org/D137600
This adds a tile_size parameter; when it is used, the tiles are
cyclically distributed onto the threads of the scf.foreach_thread op.
Differential Revision: https://reviews.llvm.org/D139474
It introduces a pattern that swaps `linalg.generic + tensor.pack` to
`tensor.pack + linalg.generic`. It requires all iteration types to be
parallel and the indexing map of the output operand to be the identity.
Both restrictions can be relaxed in the future.
The user can decide whether the propagation should be applied or not by
passing a control function.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D138882
The increment of the `counter` variable is inside the if statement. If that branch is not taken, the while loop iterates forever. This revision moves the increment outside of the if statement.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D139005
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
RewriterBase is the proper builder to use so that one can listen to IR modifications (i.e., not just creations).
Differential Revision: https://reviews.llvm.org/D137922
This fixes the case where scf::LoopNest::loops is empty.
Change LoopVector and ValueVector to SmallVector.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D136926
IREE's passes depend on the parallel loop introduced by SplitReduction
being the same as the dimension introduced in the intermediate tensor
(the order of loops was changed in https://reviews.llvm.org/D137478).
Differential Revision: https://reviews.llvm.org/D138972
Relax the linalg elementwise fusion check to allow mixed consumers. The producer is still required to be fully on tensors to avoid potential memref aliasing.
Differential Revision: https://reviews.llvm.org/D138759
The output operands will be added to the input operands if the generic op
(on tensors) becomes an elementwise operation. The outputs of the generic
op are still the same; they will be cleaned up by the
ReplaceWithEmptyTensorIfUnused pattern.
This is https://reviews.llvm.org/D138251, plus a cmake dep fix.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D138843
This revision exports `collapseGenericOpIterationDims` to a header so it can be used outside of the pattern. We have use-case where we want to call this function directly.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D138697
The output operands will be added to the input operands if the generic op
(on tensors) becomes an elementwise operation. The outputs of the generic
op are still the same; they will be cleaned up by the
ReplaceWithEmptyTensorIfUnused pattern.
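A sketch of the rewrite consistent with the description above
(illustrative shapes; the op reads its output, and moving %b to the
inputs makes the payload elementwise):

```mlir
#map = affine_map<(d0) -> (d0)>
%r = linalg.generic {indexing_maps = [#map, #map],
                     iterator_types = ["parallel"]}
    ins(%a : tensor<8xf32>) outs(%b : tensor<8xf32>) {
^bb0(%in: f32, %out: f32):
  %s = arith.addf %in, %out : f32
  linalg.yield %s : f32
} -> tensor<8xf32>
```

becomes:

```mlir
%r = linalg.generic {indexing_maps = [#map, #map, #map],
                     iterator_types = ["parallel"]}
    ins(%a, %b : tensor<8xf32>, tensor<8xf32>) outs(%b : tensor<8xf32>) {
^bb0(%in: f32, %in_b: f32, %out: f32):
  %s = arith.addf %in, %in_b : f32
  linalg.yield %s : f32
} -> tensor<8xf32>
```

The now-unread output operand is what ReplaceWithEmptyTensorIfUnused
later replaces with a tensor.empty.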
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D138251
Elementwise op fusion preserves the result of the producer in the
fused op, relying on later clean-up patterns to drop unused results of
the fused op. Instead, if the producer result has no use other than the
consumer op, avoid making the producer result available in the fused op.
This saves some unnecessary IR manipulation.
Differential Revision: https://reviews.llvm.org/D138096