clang-p2996/mlir/test/python/dialects/vector.py
Andrzej Warzyński 2ee5586ac7 [mlir][vector] Make the in_bounds attribute mandatory (#97049)
At the moment, the in_bounds attribute has two confusing/contradictory
properties:
  1. It is both optional _and_ has an effective default value.
  2. The default value is "out-of-bounds" for non-broadcast dims, and
     "in-bounds" for broadcast dims.

(see the `isDimInBounds` vector interface method for an example of this
"default" behaviour [1]).

This PR aims to clarify the logic surrounding the `in_bounds` attribute
by:
  * making the attribute mandatory (i.e. it is always present),
  * always setting the default value to "out of bounds" (that's
    consistent with the current behaviour for the most common cases).

#### Broadcast dimensions in tests

As per [2], broadcast dimensions require the corresponding
`in_bounds` entry to be `true`:
```
  vector.transfer_read op requires broadcast dimensions to be in-bounds
```

The changes in this PR mean that we can no longer rely on the
default value in cases like the following (dim 0 is a broadcast dim):
```mlir
  %read = vector.transfer_read %A[%base1, %base2], %f, %mask
      {permutation_map = affine_map<(d0, d1) -> (0, d1)>} :
    memref<?x?xf32>, vector<4x9xf32>
```

Instead, the broadcast dimension has to be explicitly marked as "in
bounds":

```mlir
  %read = vector.transfer_read %A[%base1, %base2], %f, %mask
      {in_bounds = [true, false], permutation_map = affine_map<(d0, d1) -> (0, d1)>} :
    memref<?x?xf32>, vector<4x9xf32>
```

All tests with broadcast dims are updated accordingly.

#### Changes in "SuperVectorize.cpp" and "Vectorization.cpp"

The following patterns in "Vectorization.cpp" are updated to explicitly
set the `in_bounds` attribute to `false`:
* `LinalgCopyVTRForwardingPattern` and `LinalgCopyVTWForwardingPattern`

Also, `vectorizeAffineLoad` (from "SuperVectorize.cpp") and
`vectorizeAsLinalgGeneric` (from "Vectorization.cpp") are updated to
make sure that xfer Ops created by these hooks mark the dimensions
corresponding to broadcast dims as "in bounds". Otherwise, the Op
verifier would complain.

Note that there is no mechanism to verify whether the corresponding
memory accesses are indeed in bounds. Still, this is consistent with the
current behaviour where the broadcast dim would be implicitly assumed
to be "in bounds".

[1]
4145ad2bac/mlir/include/mlir/Interfaces/VectorInterfaces.td (L243-L246)
[2]
https://mlir.llvm.org/docs/Dialects/Vector/#vectortransfer_read-vectortransferreadop
2024-07-16 16:49:52 +01:00


# RUN: %PYTHON %s | FileCheck %s

from mlir.ir import *
import mlir.dialects.builtin as builtin
import mlir.dialects.func as func
import mlir.dialects.vector as vector


def run(f):
    print("\nTEST:", f.__name__)
    with Context(), Location.unknown():
        f()
    return f


# CHECK-LABEL: TEST: testPrintOp
@run
def testPrintOp():
    module = Module.create()
    with InsertionPoint(module.body):

        @func.FuncOp.from_py_func(VectorType.get((12, 5), F32Type.get()))
        def print_vector(arg):
            return vector.PrintOp(source=arg)

    # CHECK-LABEL: func @print_vector(
    # CHECK-SAME: %[[ARG:.*]]: vector<12x5xf32>) {
    # CHECK: vector.print %[[ARG]] : vector<12x5xf32>
    # CHECK: return
    # CHECK: }
    print(module)


# CHECK-LABEL: TEST: testTransferReadOp
@run
def testTransferReadOp():
    module = Module.create()
    with InsertionPoint(module.body):
        vector_type = VectorType.get([2, 3], F32Type.get())
        memref_type = MemRefType.get(
            [ShapedType.get_dynamic_size(), ShapedType.get_dynamic_size()],
            F32Type.get(),
        )
        index_type = IndexType.get()
        mask_type = VectorType.get(vector_type.shape, IntegerType.get_signless(1))
        identity_map = AffineMap.get_identity(vector_type.rank)
        identity_map_attr = AffineMapAttr.get(identity_map)
        f = func.FuncOp(
            "transfer_read", ([memref_type, index_type, F32Type.get(), mask_type], [])
        )
        with InsertionPoint(f.add_entry_block()):
            A, zero, padding, mask = f.arguments
            vector.TransferReadOp(
                vector_type,
                A,
                [zero, zero],
                identity_map_attr,
                padding,
                [False, False],
                mask=mask,
            )
            vector.TransferReadOp(
                vector_type, A, [zero, zero], identity_map_attr, padding, [False, False]
            )
            func.ReturnOp([])

    # CHECK: @transfer_read(%[[MEM:.*]]: memref<?x?xf32>, %[[IDX:.*]]: index,
    # CHECK: %[[PAD:.*]]: f32, %[[MASK:.*]]: vector<2x3xi1>)
    # CHECK: vector.transfer_read %[[MEM]][%[[IDX]], %[[IDX]]], %[[PAD]], %[[MASK]]
    # CHECK: vector.transfer_read %[[MEM]][%[[IDX]], %[[IDX]]], %[[PAD]]
    # CHECK-NOT: %[[MASK]]
    print(module)


# CHECK-LABEL: TEST: testBitEnumCombiningKind
@run
def testBitEnumCombiningKind():
    module = Module.create()
    with InsertionPoint(module.body):
        f32 = F32Type.get()
        vector_type = VectorType.get([16], f32)

        @func.FuncOp.from_py_func(vector_type)
        def reduction(arg):
            v = vector.ReductionOp(f32, vector.CombiningKind.ADD, arg)
            return v

    # CHECK: func.func @reduction(%[[VEC:.*]]: vector<16xf32>) -> f32 {
    # CHECK: %0 = vector.reduction <add>, %[[VEC]] : vector<16xf32> into f32
    print(module)
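
For reference, below is a minimal sketch (not part of the upstream test file) of how the broadcast example from the commit message might be built through these Python bindings, following the `vector.TransferReadOp` argument order used in `testTransferReadOp` above. The function name, the shapes, and the use of `AffineConstantExpr`/`AffineDimExpr` to assemble the broadcast permutation map are illustrative assumptions.

```python
# A minimal sketch (not from the upstream test) of building the broadcast
# transfer_read from the commit message via the Python bindings.
from mlir.ir import *
import mlir.dialects.func as func
import mlir.dialects.vector as vector

with Context(), Location.unknown():
    module = Module.create()
    with InsertionPoint(module.body):
        f32 = F32Type.get()
        vector_type = VectorType.get([4, 9], f32)
        memref_type = MemRefType.get(
            [ShapedType.get_dynamic_size(), ShapedType.get_dynamic_size()], f32
        )
        index_type = IndexType.get()
        # Permutation map (d0, d1) -> (0, d1): result dim 0 is a broadcast
        # dimension, so its in_bounds entry has to be True.
        broadcast_map_attr = AffineMapAttr.get(
            AffineMap.get(2, 0, [AffineConstantExpr.get(0), AffineDimExpr.get(1)])
        )
        # Hypothetical function name, used only for this illustration.
        f = func.FuncOp(
            "transfer_read_broadcast", ([memref_type, index_type, f32], [])
        )
        with InsertionPoint(f.add_entry_block()):
            mem, idx, padding = f.arguments
            vector.TransferReadOp(
                vector_type,
                mem,
                [idx, idx],
                broadcast_map_attr,
                padding,
                [True, False],  # broadcast dim 0 marked in-bounds explicitly
            )
            func.ReturnOp([])
    print(module)
```

If the broadcast entry were left as `False`, the op verifier would reject the op with the "requires broadcast dimensions to be in-bounds" error quoted in the commit message.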