There was a bug in `TransferWriteNonPermutationLowering`, a pattern that extends the permutation map of a TransferWriteOp with leading transfer dimensions of size one. These newly added transfer dimensions are always in-bounds, because the starting point of any dimension is in-bounds. VectorToSCF inserts out-of-bounds checks based on the "in_bounds" attribute, and dims that are marked as out-of-bounds but are actually always in-bounds lead to unnecessary "scf.if" ops.
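A rough sketch of the kind of rewrite involved (shapes, maps, and the in_bounds layout are hypothetical):
```
// Before: the permutation map misses the leading memref dim.
vector.transfer_write %v, %mem[%i, %j]
    {permutation_map = affine_map<(d0, d1) -> (d1)>}
    : vector<8xf32>, memref<?x?xf32>

// After: a leading unit transfer dim is added; it has size 1 and starts
// at %i, so it can always be marked in-bounds.
%b = vector.broadcast %v : vector<8xf32> to vector<1x8xf32>
vector.transfer_write %b, %mem[%i, %j]
    {in_bounds = [true, false],
     permutation_map = affine_map<(d0, d1) -> (d0, d1)>}
    : vector<1x8xf32>, memref<?x?xf32>
```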
Differential Revision: https://reviews.llvm.org/D155196
Clarify a few diagnostics so that they are more consistent with the
corresponding condition. For example:
```
if (positionAttr.size() >
static_cast<unsigned>(getSourceVectorType().getRank()))
```
should lead to ("no greater than"):
```
return emitOpError(
"expected position attribute of rank no greater than vector rank");
```
as opposed to ("smaller"):
```
return emitOpError(
"expected position attribute of rank smaller than vector rank");
```
Differential Revision: https://reviews.llvm.org/D154998
Currently the transfer splitting patterns will generate an invalid cast
when the source memref for a transfer op has a non-default memory space.
This is handled by first introducing a `memref.memory_space_cast` in
such cases.
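For reference, a minimal sketch of the inserted op (types hypothetical):
```
%0 = memref.memory_space_cast %src : memref<4x8xf32, 3> to memref<4x8xf32>
```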
Differential Revision: https://reviews.llvm.org/D154515
This patch is a follow-up to the previous patch https://reviews.llvm.org/D151519.
With this patch, vector.load ops with a narrow bitwidth (e.g., i4) can be converted to
a supported wider bitwidth (e.g., i8).
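A rough before/after sketch (names and shapes hypothetical):
```
// Before: load of a narrow (i4) element type.
%v = vector.load %src[%c0] : memref<8xi4>, vector<8xi4>

// After: the same bits loaded as i8 and bitcast back.
%w = vector.load %src_i8[%c0] : memref<4xi8>, vector<4xi8>
%v2 = vector.bitcast %w : vector<4xi8> to vector<8xi4>
```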
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D154178
Remove patterns that fold tensor subset ops into vector transfer ops from the vector dialect. These patterns already exist in the tensor dialect.
Differential Revision: https://reviews.llvm.org/D154932
* Rename op to `transform.get_parent_op`
* Match parents by "is isolated from above" and/or op name, or just the direct parent.
* Deduplication of result payload ops is optional.
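For example (a sketch; handle names hypothetical):
```
%parent = transform.get_parent_op %matched {isolated_from_above}
    : (!transform.any_op) -> !transform.any_op
```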
Differential Revision: https://reviews.llvm.org/D154071
At the moment, only the trailing dimensions in the vector type can be
scalable, i.e. this is supported:
vector<2x[4]xf32>
and this is not allowed:
vector<[2]x4xf32>
This patch extends the vector type so that arbitrary dimensions can be
scalable. To this end, an array of bool values is added to every vector
type to denote whether the corresponding dimensions are scalable or not.
For example, for this vector:
vector<[2]x[3]x4xf32>
the following array would be created:
{true, true, false}.
Additionally, the current syntax:
vector<[2x3]x4xf32>
is replaced with:
vector<[2]x[3]x4xf32>
This is primarily to simplify parsing (this way, the parser can easily
process one dimension at a time rather than e.g. tracking whether
"scalable block" has been entered/left).
NOTE: The `isScalableDim` parameter of `VectorType` (introduced in this
patch) makes `numScalableDims` redundant. For the time being,
`numScalableDims` is preserved to facilitate the transition between the
two parameters. `numScalableDims` will be removed in one of the
subsequent patches.
This change is a part of a larger effort to enable scalable
vectorisation in Linalg. See this RFC for more context:
* https://discourse.llvm.org/t/rfc-scalable-vectorisation-in-linalg/
Differential Revision: https://reviews.llvm.org/D153372
As a convenience to the user, top-level sequence ops can optionally be used as matchers: the op type is specified by the type of the block argument.
This is similar to how pass pipeline targets can be specified on the command line (`-pass-pipeline='builtin.module(func.func(...))'`).
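For example (a sketch), this top-level sequence only matches func.func payload ops:
```
transform.sequence failures(propagate) {
^bb0(%arg0: !transform.op<"func.func">):
  // Transforms applied to %arg0 target func.func ops only.
  transform.yield
}
```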
Differential Revision: https://reviews.llvm.org/D153121
Add an extra check to make sure that the transform IR is not modified by this op while it is being interpreted. This is generally dangerous, and we may want to enforce this check for all transform ops that modify the payload in the future.
Users should generally try to apply patterns only to the piece of IR where they are needed (e.g., a matched function) and not to the entire module (which may contain the transform IR).
This revision is in response to a crash in a downstream compiler that was caused by a dead `transform.structured.match` op that was removed by the GreedyPatternRewriteDriver's DCE while the enclosing sequence was being interpreted.
Differential Revision: https://reviews.llvm.org/D153113
The new pattern will replace elementwise(broadcast) with
broadcast(elementwise) when safe.
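A minimal sketch of the rewrite:
```
// Before: the elementwise op operates on broadcast values.
%b0 = vector.broadcast %a : f32 to vector<4xf32>
%b1 = vector.broadcast %b : f32 to vector<4xf32>
%r = arith.addf %b0, %b1 : vector<4xf32>

// After: the elementwise op runs on the scalars, then broadcasts.
%s = arith.addf %a, %b : f32
%r = vector.broadcast %s : f32 to vector<4xf32>
```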
This change affects tests for vectorising nD-extract. In one case
("vectorize_nd_tensor_extract_with_tensor_extract") I just trimmed the
test and only preserved the key parts (scalar and contiguous load from
the original Op). We could do the same with some other tests if that
helps maintainability.
Differential Revision: https://reviews.llvm.org/D152812
In the vector distribute patterns, we used to move
`vector.broadcast`s out of `vector.warp_execute_on_lane0`s
irrespectively of how they were defined.
This could create broadcast operations with invalid semantics.
E.g.,
```
%r = warpop ...[32] ... -> vector<1x2xf32> {
%val = broadcast %in : vector<64xf32> to vector<1x64xf32>
vector.yield %val : vector<1x64xf32>
}
```
=>
```
%r = warpop ...[32] ... -> vector<64xf32> {
vector.yield %in : vector<64xf32>
}
// Broadcasting to a narrower type!
broadcast %r : vector<64xf32> to vector<1x2xf32>
```
The root issue is that we are trying to broadcast something that is not the same
for each thread, so there is actually nothing to propagate here.
The fix checks that the broadcast we want to create actually makes sense.
Differential Revision: https://reviews.llvm.org/D152154
* Remove `transform::PatternRegistry`.
* Add a new op for each currently registered pattern set.
* Change names of vector dialect pattern selector ops, so that they are consistent with the remaining code base.
* Remove redundant `transform.vector.extract_address_computations` op.
Differential Revision: https://reviews.llvm.org/D152249
All vector transform ops are now `PatternDescriptorOpInterface` ops that merely select the patterns. The patterns are applied by the `apply_patterns` op. This is to ensure that ops are properly tracked. (TrackingListener is used in the implementation of `apply_patterns`.) Furthermore, handles are no longer invalidated when applying patterns in the vector tests.
Differential Revision: https://reviews.llvm.org/D152174
Consider mixed-precision data types, i.e., F16 input lhs, F16 input rhs, F32 accumulation, and F32 output. This is typically written as F32 <= F16*F16 + F32.
During vectorization from linalg to vector for mixed-precision data types (F32 <= F16*F16 + F32), linalg.matmul introduces arith.extf ops on the input lhs and rhs operands:
"linalg.matmul"(%lhs, %rhs, %acc) ({
^bb0(%arg1: f16, %arg2: f16, %arg3: f32):
%lhs_f32 = "arith.extf"(%arg1) : (f16) -> f32
%rhs_f32 = "arith.extf"(%arg2) : (f16) -> f32
%mul = "arith.mulf"(%lhs_f32, %rhs_f32) : (f32, f32) -> f32
%acc = "arith.addf"(%arg3, %mul) : (f32, f32) -> f32
"linalg.yield"(%acc) : (f32) -> ()
})
Some backends natively support mixed-precision data types and do not need the arith.extf. For example, the NVIDIA A100 GPU has mma.sync.aligned.*.f32.f16.f16.f32, which supports mixed precision. However, the presence of arith.extf in the IR introduces unnecessary casting that targets F32 Tensor Cores instead of F16 Tensor Cores on the NVIDIA backend. This patch adds a pattern to fold arith.extf into vector.contract.
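A sketch of the folding (shapes and names hypothetical):
```
// Before: extension ops feed an all-F32 contraction.
%lhs_f32 = arith.extf %lhs : vector<4x4xf16> to vector<4x4xf32>
%rhs_f32 = arith.extf %rhs : vector<4x4xf16> to vector<4x4xf32>
%0 = vector.contract {indexing_maps = [affine_map<(d0, d1, d2) -> (d0, d2)>,
                                       affine_map<(d0, d1, d2) -> (d2, d1)>,
                                       affine_map<(d0, d1, d2) -> (d0, d1)>],
                      iterator_types = ["parallel", "parallel", "reduction"],
                      kind = #vector.kind<add>}
    %lhs_f32, %rhs_f32, %acc
    : vector<4x4xf32>, vector<4x4xf32> into vector<4x4xf32>

// After: a mixed-precision contraction with no casts.
%0 = vector.contract {indexing_maps = [affine_map<(d0, d1, d2) -> (d0, d2)>,
                                       affine_map<(d0, d1, d2) -> (d2, d1)>,
                                       affine_map<(d0, d1, d2) -> (d0, d1)>],
                      iterator_types = ["parallel", "parallel", "reduction"],
                      kind = #vector.kind<add>}
    %lhs, %rhs, %acc
    : vector<4x4xf16>, vector<4x4xf16> into vector<4x4xf32>
```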
Differential Revision: https://reviews.llvm.org/D151918
In the vector distribute patterns, we used to move
`vector.transfer_read`s out of `vector.warp_execute_on_lane0`s
irrespectively of how they were defined.
This could create transfer_read operations outside of the warpOp's body
that read values defined inside that body.
E.g.,
```
warpop {
%defined_in_body
%read = transfer_read %defined_in_body
vector.yield %read
}
```
=>
```
warpop {
%defined_in_body
vector.yield ...
}
// %defined_in_body is referenced outside of its scope.
%read = transfer_read %defined_in_body
```
The fix consists in checking that all the values feeding the new
`transfer_read` are defined outside of the warpOp's body.
Note: We could do this check before creating any operation, but that would
mean knowing what `affine::makeComposedAffineApply` actually does. So the
current fix trades off coupling the implementations of this propagation and
`makeComposedAffineApply` against compile time.
Differential Revision: https://reviews.llvm.org/D152149
Patterns that convert extract(transfer_read) into a scalar load were
incorrectly triggering for cases where a sub-vector, instead of a scalar,
was extracted.
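For illustration, a sketch of both cases (IR hypothetical): the pattern should fire for the scalar extract but not for the sub-vector extract.
```
// Scalar extract: may fold into a scalar load.
%v = vector.transfer_read %m[%i], %pad : memref<?xf32>, vector<4xf32>
%s = vector.extract %v[2] : vector<4xf32>

// Sub-vector extract: must not fold into a scalar load.
%w = vector.transfer_read %m[%i, %j], %pad : memref<?x?xf32>, vector<2x4xf32>
%sv = vector.extract %w[0] : vector<2x4xf32>
```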
Reviewed By: nicolasvasilache, hanchung, awarzynski
Differential Revision: https://reviews.llvm.org/D151862
This PR adds support for shape casting from and to 0-D vectors.
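For example (a sketch):
```
%a = vector.shape_cast %x : vector<f32> to vector<1xf32>
%b = vector.shape_cast %y : vector<1xf32> to vector<f32>
```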
Reviewed By: nicolasvasilache, hanchung, awarzynski
Differential Revision: https://reviews.llvm.org/D151851
The `vector.extract` folding patterns do not support 0-D vectors
(in fact, 0-D vector support couldn't even be implemented as a folding
pattern, as it would require replacing `vector.extract` with a
`vector.extractelement` op). This patch bails out of the folding when 0-D
vectors are found.
Reviewed By: nicolasvasilache, hanchung
Differential Revision: https://reviews.llvm.org/D151847
This patch extends the transfer drop unit dim patterns to support cases where the vector shape should also be reduced
(e.g., transfer_read(memref<1x4x1xf32>, vector<1x4x1xf32>) -> transfer_read(memref<4xf32>, vector<4xf32>)).
Reviewed By: hanchung, pzread
Differential Revision: https://reviews.llvm.org/D151007
This patch adds support for shape casting a vector<1x1x1...1xElementType> to
a vector<ElementType> and the other way around.
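For example (a sketch):
```
%a = vector.shape_cast %x : vector<1x1x1xf32> to vector<f32>
%b = vector.shape_cast %y : vector<f32> to vector<1x1x1xf32>
```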
Differential Revision: https://reviews.llvm.org/D151169
This patch extends the vector.extract(vector.transfer_read) -> scalar
load patterns to support vector.transfer_read ops with multiple uses. For
now, we check that all the uses are vector.extract operations.
Supporting multiple uses is gated behind a flag.
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D150812
These patterns touch the structure generated by tiling, so they
affect later steps like bufferization and vector hoisting.
Instead of putting them in canonicalization, this commit creates
separate entry points for them to be called explicitly.
This is NFC regarding the functionality and tests of those patterns.
It also addresses two TODO items in the codebase.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D150702
Types were introduced a while ago and provide better
readability and transform-time verification. Use them in the ops from
the structured transform dialect extension.
In most cases, the types are appended as trailing functional types or a
derived format of the functional type that allows for an empty right-hand
side without the annoying `-> ()` syntax (similarly to a `func.func`
declaration that may omit the arrow). When handles are used inside mixed
static/dynamic lists, such as tile sizes, the types of those handles follow
them immediately, as in `sizes [%0 : !transform.any_value, 42]`. This
allows for better readability than matching the trailing type.
Update code to remove hardcoded PDL dependencies and expunge PDL from
structured transform op code.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D144515
The existing vector.transpose lowering patterns only trigger if the
input vector is 2-D. The revision extends the pattern to handle n-D
vectors which are effectively 2-D (e.g., vector<1x4x1x8x1xf32>).
It refactors a common check about effectively 2-D vectors from the X86Vector
lowering into VectorUtils.h so it can be reused by both.
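For instance, the following transpose only permutes the two non-unit dimensions (a sketch):
```
%t = vector.transpose %v, [0, 3, 2, 1, 4]
    : vector<1x4x1x8x1xf32> to vector<1x8x1x4x1xf32>
```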
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D149908
The pattern added here is intended as a last resort for targets like
SPIR-V where there are vector size restrictions and we need to be able
to break down large vector types. Vectorizing loads/stores for small
bitwidths (e.g. i8) relies on bitcasting to a larger element type and
patterns to bubble bitcast ops to where they can cancel.
This fails for cases such as
```
%1 = arith.trunci %0 : vector<2x32xi32> to vector<2x32xi8>
vector.transfer_write %1, %destination[%c0, %c0] {in_bounds = [true, true]} : vector<2x32xi8>, memref<2x32xi8>
```
where the `arith.trunci` op essentially does the job of one of the
bitcasts, leading to a bitcast that needs to be further broken down:
```
vector.bitcast %0 : vector<16xi8> to vector<4xi32>
```
Differential Revision: https://reviews.llvm.org/D149065
This pattern is useful for SPIR-V to unroll to a supported vector size
before later lowerings. The unrolling pattern is closer to that of an
elementwise op than that of the transfer ops, because the index values from
which to extract elements are captured by the index vector, and thus there is
no need to update the base offsets when unrolling a gather.
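A rough sketch of the unrolling (names and shapes hypothetical):
```
// Before: a 2-D gather.
%g = vector.gather %base[%c0] [%idx], %mask, %pass
    : memref<?xf32>, vector<2x4xindex>, vector<2x4xi1>, vector<2x4xf32>
      into vector<2x4xf32>

// After: one 1-D gather per row; the base index %c0 is left untouched
// because %idx already carries the per-element offsets.
%idx0 = vector.extract %idx[0] : vector<2x4xindex>
%mask0 = vector.extract %mask[0] : vector<2x4xi1>
%pass0 = vector.extract %pass[0] : vector<2x4xf32>
%g0 = vector.gather %base[%c0] [%idx0], %mask0, %pass0
    : memref<?xf32>, vector<4xindex>, vector<4xi1>, vector<4xf32>
      into vector<4xf32>
// ... likewise for row 1, then assemble the result with vector.insert.
```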
Differential Revision: https://reviews.llvm.org/D149066
This was incorrect when the number of dropped source dims was smaller
than the number of dropped dst dims. We still need to insert zeros if
there is anything dropped from the src.
Differential Revision: https://reviews.llvm.org/D148636
We need to create a new type with transposed shape after
transposing the operand in `CanonicalizeContractMatmulToMMT`.
Reviewed By: kuhar, dcaballe
Differential Revision: https://reviews.llvm.org/D148470
This gives us better control to lower masked operations independently of the mask-creation operations.
It is often useful to maintain high-level mask information instead of lowering it too early to an
overly fine-grained form.
Differential Revision: https://reviews.llvm.org/D148162
Restrict the op to functions and modules. Such ops are modified in-place. The transform now consumes the handle and produces a new handle. The `target_is_module` attribute is no longer needed because a result handle is produced in either case.
Differential Revision: https://reviews.llvm.org/D147446
When dropping leading unit dims of vector.insert's operands and creating
a new vector.insert, its new position rank should be computed explicitly
in two steps: first based on the number of leading unit dims dropped
from the vector.insert's destination, then based on the number of
leading unit dims dropped from its source.
Reviewed By: pifon2a
Differential Revision: https://reviews.llvm.org/D147280
We already had vector.transpose(vector.create_mask) ->
vector.create_mask. This patch adds the constant mask version of it.
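A minimal sketch of the new folding:
```
// Before:
%m = vector.constant_mask [2, 1] : vector<4x3xi1>
%t = vector.transpose %m, [1, 0] : vector<4x3xi1> to vector<3x4xi1>

// After:
%t = vector.constant_mask [1, 2] : vector<3x4xi1>
```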
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D147099
This patch removes the historical lhs and rhs masks from vector.contract,
now that vector.mask supports vector.contract and the lhs and rhs masks
are barely supported by the existing vector.contract lowerings and
transformations.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D144430
This revision adds vector transform operations that allow us to better inspect the composition
of various lowerings that were previously very opaque.
This commit is NFC in that it does not change patterns beyond adding `rewriter.notifyMatchFailure` messages,
and it does not change the tests beyond breaking them into pieces and using transforms instead of
throwaway opaque test passes.
Reviewed By: ftynse, springerm
Co-authored-by: Alex Zinenko <zinenko@google.com>
Differential Revision: https://reviews.llvm.org/D146755
The old implementation parsed the optional attribute dict, only to replace its
contents by using `assign`.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D146707
When applying vector masking we may create a mask and then transpose it.
Transpositions are extremely expensive, so this patch introduces a new
canonicalization pattern that removes the transpose operation and creates a
new transposed mask.
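A minimal sketch of the rewrite (shapes hypothetical):
```
// Before: create a mask, then transpose it.
%m = vector.create_mask %a, %b : vector<4x8xi1>
%t = vector.transpose %m, [1, 0] : vector<4x8xi1> to vector<8x4xi1>

// After: create the transposed mask directly.
%t = vector.create_mask %b, %a : vector<8x4xi1>
```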
Differential Revision: https://reviews.llvm.org/D146193