Similar to `vector.transfer_read`/`vector.transfer_write`, allow 0-D
vectors.
This commit fixes
`mlir/test/Dialect/Vector/vector-transfer-to-vector-load-store.mlir`
when verifying the IR after each pattern (#74270). That test produces a
temporary 0-D load/store op.
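Assuming the ops in question are `vector.load` and `vector.store` (hence
the temporary 0-D load/store op above), the newly allowed 0-D form would
look roughly like this sketch:
```mlir
// A 0-D load/store pair on a 0-D memref (sketch only).
%0 = vector.load %mem[] : memref<f32>, vector<f32>
vector.store %0, %mem[] : memref<f32>, vector<f32>
```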
Updates two test files for vector.contract -> vector.outerproduct
transformations:
1. Rename "vector-contract-to-outerproduct-transforms.mlir" as
"vector-contract-to-outerproduct-matmul-transforms.mlir". The new
name more accurately captures what's being tested. It is also
consistent with
"vector-contract-to-outerproduct-matvec-transforms.mlir", which
covers vector matvec operations and makes finding relevant tests
easier.
2. For matmul tests, move the traits defining the iteration spaces to
the top of the file. This is consistent with how matvec tests are
defined and also makes it easy to quickly identify what cases are
covered.
3. For matmul tests, use more meaningful names for function arguments.
This helps keep things consistent across the file (i.e. function
definitions with CHECK lines and comments).
4. For matvec tests, move a few tests around so that the most basic case
(without masking) is first.
5. Update comments.
Tests for vector.outerproduct for scalable vectors from
"vector-scalable-outerproduct.mlir" are moved to:
* ops.mlir and invalid.mlir.
These files are effectively used to document what Ops are supported,
and that's basically what the original file was testing (but
specifically for scalable vectors).
This is to avoid confusion when dealing with reduction/combining kinds.
For example, see a recent PR comment:
https://github.com/llvm/llvm-project/pull/75846#discussion_r1430722175.
Previously, they were picked to mostly mirror the names of the LLVM
vector reduction intrinsics:
https://llvm.org/docs/LangRef.html#llvm-vector-reduce-fmin-intrinsic. In
isolation, it was not clear if `<maxf>` has `arith.maxnumf` or
`arith.maximumf` semantics. The new reduction kind names map 1:1 to
arith ops, which makes it easier to tell/look up their semantics.
Because both the vector and the gpu dialect depend on the arith dialect,
it's more natural to align names with those in arith than with the
lowering to LLVM intrinsics.
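For example, after the renaming (a sketch):
```mlir
// The combining kind now names the arith op it corresponds to, e.g.
// arith.maximumf rather than the previously ambiguous "maxf".
%0 = vector.reduction <maximumf>, %v : vector<4xf32> into f32
```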
Issue: https://github.com/llvm/llvm-project/issues/72354
The number of vector elements considered 'small' enough to extract is
parameterized.
This is to avoid going into the specialized reduction lowering when a
single arith op (or a couple of them) will do. Targets without dedicated
reduction intrinsics can also use this as an emulation path.
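For example, a reduction of a 2-element vector can be expanded into
extracts plus a single arith op (a sketch; before and after are shown
together, SSA names are illustrative):
```mlir
// Before:
%r = vector.reduction <add>, %v : vector<2xf32> into f32
// After (sketch of the expansion):
%e0 = vector.extract %v[0] : f32 from vector<2xf32>
%e1 = vector.extract %v[1] : f32 from vector<2xf32>
%r = arith.addf %e0, %e1 : f32
```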
Depends on https://github.com/llvm/llvm-project/pull/75846.
For vectors with either leading or trailing unit dim, replaces:
elementwise(a, b)
with:
sc_a = shape_cast(a)
sc_b = shape_cast(b)
res = elementwise(sc_a, sc_b)
return shape_cast(res)
The newly inserted shape_cast Ops fold away the unit dim (before the
elementwise Op) and then restore it (after the elementwise Op). Vectors
`a` and `b` are required to have rank > 1.
Example:
```mlir
%mul = arith.mulf %B_row, %A_row : vector<1x[4]xf32>
%cast = vector.shape_cast %mul : vector<1x[4]xf32> to vector<[4]xf32>
```
gets converted to:
```mlir
%B_row_sc = vector.shape_cast %B_row : vector<1x[4]xf32> to vector<[4]xf32>
%A_row_sc = vector.shape_cast %A_row : vector<1x[4]xf32> to vector<[4]xf32>
%mul = arith.mulf %B_row_sc, %A_row_sc : vector<[4]xf32>
%mul_sc = vector.shape_cast %mul : vector<[4]xf32> to vector<1x[4]xf32>
%cast = vector.shape_cast %mul_sc : vector<1x[4]xf32> to vector<[4]xf32>
```
In practice, the bottom two shape_casts will be folded away.
Add a configuration option to allow vector distribution with multiple
elements written by a single lane.
This is so that we can perform vector multi-reduction with multiple
results per workgroup.
Without this patch, MLIR crashes with
```
Assertion failed: (getNumDims() == map.getNumResults() && "Number of results mismatch"), function compose, file AffineMap.cpp, line 537.
```
during parsing.
This reverts commit f42b7615b8.
The fold pattern is incorrect, because it does not even look at the
permutation of non-unit dims and is happy to replace a pattern such as
```
%22 = vector.shape_cast %21 : vector<1x256x256xf32> to vector<256x256xf32>
%23 = vector.transpose %22, [1, 0] : vector<256x256xf32> to vector<256x256xf32>
```
with
```
%22 = vector.shape_cast %21 : vector<1x256x256xf32> to vector<256x256xf32>
```
which is obviously incorrect.
Updates patterns for flattening `vector.transfer_read` by relaxing the
requirement that the "collapsed" indices are all zero. This enables
collapsing cases like this one:
```mlir
%2 = vector.transfer_read %arg4[%c0, %arg0, %arg1, %c0] ... :
memref<1x43x4x6xi32>, vector<1x2x6xi32>
```
Previously, only the following case would be considered for collapsing
(all indices are 0):
```mlir
%2 = vector.transfer_read %arg4[%c0, %c0, %c0, %c0] ... :
memref<1x43x4x6xi32>, vector<1x2x6xi32>
```
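For the first example above, the flattened form could look roughly like
this (a sketch; `%index` and `%read` are illustrative names and the
exact index computation emitted by the pattern may differ):
```mlir
%collapsed = memref.collapse_shape %arg4 [[0, 1, 2, 3]] :
    memref<1x43x4x6xi32> into memref<1032xi32>
// Linearized start index: ((0 * 43 + %arg0) * 4 + %arg1) * 6 + 0.
%index = affine.apply affine_map<()[s0, s1] -> (s0 * 24 + s1 * 6)>()[%arg0, %arg1]
%read = vector.transfer_read %collapsed[%index] ... :
    memref<1032xi32>, vector<12xi32>
%2 = vector.shape_cast %read : vector<12xi32> to vector<1x2x6xi32>
```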
Also adds some new comments and renames the `firstContiguousInnerDim`
parameter as `firstDimToCollapse` (the latter better matches the actual
meaning).
Similar updates for `vector.transfer_write` will be implemented in a
follow-up patch.
Updates "flatten vector" patterns to support more cases, namely Ops that
read/write vectors with leading unit dims. For example:
```mlir
%0 = vector.transfer_read %arg0[%c0, %c0, %c0, %c0] ... :
memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>, vector<1x1x2x2xi8>
```
Currently, the `vector.transfer_read` above would not be flattened. With
this change, it will be rewritten as follows:
```mlir
%collapse_shape = memref.collapse_shape %arg0 [[0, 1, 2, 3]] :
memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>
into memref<120xi8, strided<[1], offset: ?>>
%0 = vector.transfer_read %collapse_shape[%c0] ... :
memref<120xi8, strided<[1], offset: ?>>, vector<4xi8>
%1 = vector.shape_cast %0 : vector<4xi8> to vector<1x1x2x2xi8>
```
`hasMatchingInnerContigousShape` is generalised and renamed as
`isContiguousSlice` to better match the updated functionality. A few
test names are updated to better highlight what case is being exercised.
This folds transpose(shape_cast) into a new shape_cast, when the
transpose just permutes a unit dim from the result of the shape_cast.
Example:
```
%0 = vector.shape_cast %vec : vector<[4]xf32> to vector<[4]x1xf32>
%1 = vector.transpose %0, [1, 0] : vector<[4]x1xf32> to vector<1x[4]xf32>
```
Folds to:
```
%0 = vector.shape_cast %vec : vector<[4]xf32> to vector<1x[4]xf32>
```
This is an (alternate) fix for lowering matmuls to ArmSME.
This patch refactors tests for:
vector.contract -> vector.outerproduct
for matvec operations (b += Ax). Summary of changes:
* add 2 missing cases (masked + scalable) when the operation kind is
`maxf`.
This is a part of a larger effort to add cases with scalable vectors to
tests for the Vector dialect.
Implements #72834.
The idea is similar to vector.maskedload + vector.store emulation. What
the emulation does is:
1. Get a compressed mask and load the data from destination.
2. Bitcast the data to original vector type.
3. Select values between `op.valueToStore` and the loaded data using the
original mask.
4. Bitcast the new value and store it to the destination using the
compressed mask (see the sketch below).
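A rough sketch for i4 values packed into i8 storage (types and names are
illustrative: `%dest_i8` stands for the converted byte-oriented
destination, `%idx_i8` for the adjusted index, and the computation of
`%compressed_mask` is elided):
```mlir
// Original op: masked store of 8 x i4 values.
vector.maskedstore %dest[%idx], %mask, %val
    : memref<16xi4>, vector<8xi1>, vector<8xi4>

// Emulated sequence (the 8 x i4 values live in 4 x i8 containers).
%passthru = arith.constant dense<0> : vector<4xi8>
%load = vector.maskedload %dest_i8[%idx_i8], %compressed_mask, %passthru
    : memref<8xi8>, vector<4xi1>, vector<4xi8> into vector<4xi8>
%data = vector.bitcast %load : vector<4xi8> to vector<8xi4>
%merged = arith.select %mask, %val, %data : vector<8xi1>, vector<8xi4>
%new = vector.bitcast %merged : vector<8xi4> to vector<4xi8>
vector.maskedstore %dest_i8[%idx_i8], %compressed_mask, %new
    : memref<8xi8>, vector<4xi1>, vector<4xi8>
```
The actual emulation may differ in details; this only illustrates the
four steps above.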
This patch refactors tests for:
vector.contract -> vector.outerproduct
for matvec operations (b += Ax). Summary of changes:
* names of LIT variables are unified,
* "plain" tests (i.e. without masking and with fixed-width vectors)
are moved to the top of their respective sections,
* missing "plain" cases are added.
This is a part of a larger effort to add cases with scalable vectors to
tests for the Vector dialect. I am refactoring these tests so that it's
easier to identify what cases are tested and where to add tests for
scalable vectors.
Implements #72834.
This is a direct follow-up of #73348. The matvec trait that's used for
`@matvec_m_mk_k` was incorrectly updated from:
```
#redpar_vecmattrans_accesses = [
  affine_map<(m, k) -> (m)>,
  affine_map<(m, k) -> (m, k)>,
  affine_map<(m, k) -> (k)>
]
#redpar_vecmattrans_trait = {
  indexing_maps = #redpar_vecmattrans_accesses,
  iterator_types = ["reduction", "parallel"]
}
```
to:
```
#matvec_accesses_4 = [
  affine_map<(m, k) -> (k)>,
  affine_map<(m, k) -> (k, m)>,
  affine_map<(m, k) -> (m)>
]
#matvec_trait_4 = {
  indexing_maps = #matvec_accesses_4,
  iterator_types = ["parallel", "reduction"]
}
```
Note that these traits describe identical matvec operations, hence the
`CHECK` lines are identical for both.
Also, `#redpar_vecmattrans_trait` is identical to `#matvec_trait_8`
that's already present in:
* "vector-contract-to-outerproduct-matvec-transforms.mlir"
For this reason:
* `@matvec_m_mk_k` is moved near other tests that already use
`#matvec_trait_8`,
* `#redpar_vecmattrans_trait` is replaced with `#matvec_trait_8`.
This is a part of a larger effort to add cases with scalable vectors to
tests for the Vector dialect. I am refactoring these tests so that it's
easier to identify what cases are tested and where to add tests for
scalable vectors.
Implements #72834.
The primary difficulty with distribution of masked transfers is when the
permutation map permutes the vector, in which case the distribution
logic needs to make sure the correct mask elements end up with the
distributed transfer. This is only tricky when the permutation map
contains an actual permutation, so we can relax the condition for
distribution.
Previously the pattern only worked when the permutation map was a minor
identity. Infer the new mask type from the new transfer map after
dropping leading unit dims.
This patch refactors tests for:
* vector.contract -> vector.outerproduct
transformations for matvec operations (b += Ax). Specifically, relevant
tests from the following 2 files:
* vector-contract-matvec-transforms.mlir
* vector-contract-to-outerproduct-transforms.mlir
are combined into one:
* vector-contract-to-outerproduct-matvec-transforms.mlir
All original tests are preserved and no new tests are added.
This is a part of a larger effort to add cases with scalable vectors
to tests for the Vector dialect. I am refactoring these tests as a
preparation for follow-up patches.
Implements #72834.
Chained reductions get created during vector unrolling. These patterns
simplify them into a series of adds followed by a final reduction.
This is preferred on GPU targets like SPIR-V/Vulkan where vector
reduction gets lowered into subgroup operations that are generally more
expensive than simple vector additions.
For now, only the `add` combining kind is handled.
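For example, for the `add` kind (a sketch; before and after are shown
together):
```mlir
// Before: chained accumulating reductions created by unrolling.
%r0 = vector.reduction <add>, %v0 : vector<4xf32> into f32
%r1 = vector.reduction <add>, %v1, %r0 : vector<4xf32> into f32
// After: a vector add followed by a single final reduction.
%sum = arith.addf %v0, %v1 : vector<4xf32>
%r1 = vector.reduction <add>, %sum : vector<4xf32> into f32
```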
Update tests in "vector-contract-matvec-transforms.mlir" so that they
are consistent with similar tests in:
* "vector-contract-to-outerproduct-transforms.mlir".
This is to enable further refactoring in a follow-up patch, namely to:
* remove duplication (this will be much easier once consistent naming
is used),
* extend tests in "vector-contract-matvec-transforms.mlir" with cases
for scalable vectors,
* merge "vector-contract-matvec-transforms.mlir" and
"vector-contract-to-outerproduct-transforms.mlir" (there's no need
for 2 different files testing identical transformations).
Overview of changes in this patch:
1. Simplify the test by removing MemRef wrappers - this test verifies
Vector -> Vector transformations and MemRefs are not needed.
2. Use (m, k) indices instead of (i, j).
3. Rename function names.
This is part of a larger effort to improve test coverage for scalable
vectors in the Vector dialect. Implements #72834.
This patch extends TransferReadDropUnitDimsPattern to support dropping
unit dims from partially-static memrefs, for example:
%v = vector.transfer_read %base[%c0, %c0], %pad {in_bounds = [true, true]} :
memref<?x1xi8, strided<[?, ?], offset: ?>>, vector<[16]x1xi8>
Is rewritten as:
%dim0 = memref.dim %base, %c0 : memref<?x1xi8, strided<[?, ?], offset: ?>>
%subview = memref.subview %base[0, 0] [%dim0, 1] [1, 1] :
memref<?x1xi8, strided<[?, ?], offset: ?>> to memref<?xi8, #map1>
%v = vector.transfer_read %subview[%c0], %pad {in_bounds = [true]}
: memref<?xi8, #map1>, vector<[16]xi8>
Scalable vectors are now also supported; previously, the scalable dims
were being dropped when creating the rank-reduced vector type. The xfer
op can also have a mask of type `vector.create_mask`, which gets
rewritten as long as the mask size for the unit dim is a constant 1.
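For instance, with the types from the example above, the mask rewrite
amounts to dropping the trailing unit dim when its size is the constant
1 (a sketch):
```mlir
%mask = vector.create_mask %dim0, %c1 : vector<[16]x1xi1>
// is, for the purpose of the rank-reduced transfer, rewritten as:
%mask_0 = vector.create_mask %dim0 : vector<[16]xi1>
```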
This patch extends the vector.transpose lowering to replace:
vector.transpose %0, [1, 0] : vector<nx1x<eltty>> to vector<1xnx<eltty>>
with:
vector.shape_cast %0 : vector<nx1x<eltty>> to vector<1xnx<eltty>>
The inverse case (a source with a leading unit dim) is also replaced.
The unit dim must be fixed; the non-unit dim can be scalable.
A check is also added to bail out for scalable vectors before unrolling.
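A concrete instance of the rewrite, with a scalable non-unit dim (a
sketch; before and after are shown together):
```mlir
%1 = vector.transpose %0, [1, 0] : vector<[4]x1xf32> to vector<1x[4]xf32>
// is replaced with:
%1 = vector.shape_cast %0 : vector<[4]x1xf32> to vector<1x[4]xf32>
```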
This patch adds handling of an empty `MaskOp` to `MaskOpRewritePattern`
and thereby fixes a crash.
It also pulls the `MaskOp` canonicalization patterns into
`LowerVectorMask` so that empty `MaskOp`s are folded away in the Pass.
Fix https://github.com/llvm/llvm-project/issues/71036
A number of the warp distribution patterns work by rewriting a warp op
in place by moving a contained op outside. This notifies the rewriter
that the warp op is changing in this case.
A `stride == 1` does not imply that we can drop the corresponding
dimension, because the op could still read more than one element along
it. We should also take the source sizes and vector sizes into account;
otherwise we generate invalid IR. E.g.,
```mlir
func.func @foo(%arg0: memref<1x1xf32>) -> vector<4x8xf32> {
%c0 = arith.constant 0 : index
%cst = arith.constant 0.000000e+00 : f32
%0 = vector.transfer_read %arg0[%c0, %c0], %cst : memref<1x1xf32>, vector<4x8xf32>
return %0 : vector<4x8xf32>
}
```
Fixes https://github.com/openxla/iree/issues/15493
This is the last step needed for basic support for distributing masked
vector code. The lane id gets delinearized based on the distributed mask
shape and then compared against the original mask sizes to compute the
bounds for the distributed mask. Note that the distribution of masks is
implicit on the shape specified by the warp op. As a result, it is the
responsibility of the consumer of the mask to ensure the distributed
mask will match its own distribution semantics.
Currently, when there is a mix of transfer read ops and transfer write
ops that need to be distributed, it is hard to guarantee that the write
gets distributed after the read when the two aren't directly connected
by SSA, because the pattern for write distribution is rooted on the
transfer write. This is likely still relatively unsafe when
there are undistributable ops, but structurally these patterns are a bit
difficult to work with. For now pattern benefits give fairly good
guarantees for happy paths.
This fixes two bugs:
1) When deciding whether a transfer read could be propagated out of
a warp op, it looked for the first yield operand that was produced by
a transfer read. If this transfer read wasn't ready to be
distributed, the pattern would not re-check for any other transfer
reads that could have been propagated.
2) When dropping dead warp results, we do so by updating the warp op
signature and splicing in the old region. This does not add the ops
in the body of the warp op back to the pattern applicator's worklist,
and thus those operations won't be DCE'd. This is a problem for
patterns like the one for transfer reads that will still see the dead
operation as a user.
Because the distribution is based on types, supporting general masked
reads requires first materializing the permutation map in IR to align
the elements of the mask with the elements read by the transfer op. For
now just support cases with the trivial permutation map.
When the mask bounds of a `vector.constant_mask` exactly equal the shape
of the vector, any transfer op consuming that mask will be unaffected by
it. Drop the mask in such cases.
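For example (a sketch; `%src`, `%pad` and `%c0` are assumed to be
defined, before and after are shown together):
```mlir
%mask = vector.constant_mask [4, 4] : vector<4x4xi1>
%0 = vector.transfer_read %src[%c0, %c0], %pad, %mask
    {in_bounds = [true, true]} : memref<4x4xf32>, vector<4x4xf32>
// The mask selects every element of the vector, so it can be dropped:
%0 = vector.transfer_read %src[%c0, %c0], %pad
    {in_bounds = [true, true]} : memref<4x4xf32>, vector<4x4xf32>
```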
General distribution of masked writes requires materializing the
permutation map on the written vector in IR to ensure the vector lines
up with the mask. For now, just support cases with trivial permutation
maps.
This handles `vector.transfer_read`, `vector.transfer_write`, and
`vector.constant_mask`. The unit dims are only relevant for masks
created by `create_mask` and `constant_mask` if the mask size for the
unit dim is non-one, in which case all subsequent sizes must also be
zero. From the perspective of the vector transfers, however, these unit
dims can just be dropped directly.
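For instance, when the mask size along the unit dim is 1, the mask is
equivalent to its rank-reduced counterpart (a sketch):
```mlir
%mask = vector.constant_mask [1, 8] : vector<1x16xi1>
// behaves, for the rank-reduced transfer, like:
%mask_0 = vector.constant_mask [8] : vector<16xi1>
```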
After propagation of `vector.warp_execute_on_lane_0` through `scf.for`,
uniform operations like those on the loop iterators can now be hoisted
out of the inner warp op.
The IR is valid, but UB: there is an out-of-bounds index for the
position to insert into the vector. We should just ignore this in the
folder.
Fixes #70884
Re-orders tests in vector-contract-to-outerproduct-transforms.mlir so
that the file starts as follows:
1. plain matmul
2. plain matmul with scalable vectors
3. masked matmul
4. masked matmul with scalable vectors
5. plain matmul with mixed types
6. plain matmul with mixed types and scalable vectors
All of the above share the same indexing maps. This made it possible to
identify one more duplicate. Following the cases above are examples with
different maps.
In addition, added extra comments to document the tests and to split
them into categories. There is also some extra reformatting to unify the
tests.
Tests for conversions from `vector.contract` to `vector.outerproduct`
for _matvec_ operations are updated with cases for scalable vectors.
This patch updates one specific test file (there might be similar
tests elsewhere):
* vector-contract-to-outerproduct-transforms.mlir.
Only the parallel dimension is made scalable. Making the reduction
dimension scalable would lead to different patterns that do not use
`vector.outerproduct` (those would need to be added to some other file).
One duplicate test for _matvec_ is removed.