The `pack_paddings` attribute in the `structured.pad` TD Op is used to
set the `nofold` attribute in the generated tensor.pad Op. The current
name is confusing and suggests a relation with the tensor.pack Op. This
patch renames it to `nofold_flags` to better match its actual usage.
This patch enables continuous tiling of a target structured op using
diminishing tile sizes. When the tensor dimensions are not exactly
divisible by the tile size, a leftover chunk remains that is
irregularly tiled. This approach tiles that leftover chunk with a
smaller tile size and repeats the process recursively using
exponentially diminishing tile sizes, eventually generating a chain of
loops that apply tiling with diminishing tile sizes.
Adds the `continuous_tile_sizes` op to the transform dialect. Given a
tile size and a dimension, this op computes a series of diminishing
tile sizes that can be used to tile the target along the given
dimension. It also generates a series of chunk sizes that the
corresponding tile sizes should be applied to along that dimension.
Adds `multiway` attribute to `transform.structured.split` that enables
multiway splitting of a single target op along the given dimension, as
specified in a list enumerating the chunk sizes.
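As a rough illustration of the scheme, here is a hypothetical Python sketch
of the kind of computation described above (not the op's implementation): the
tile size is halved at each step, and each tile size is paired with the chunk
of the dimension it covers.
```
def continuous_tile_sizes(dim_size: int, tile_size: int):
    """Exponentially diminishing tile sizes and the chunk of the dimension
    each of them covers (the chunks sum to dim_size)."""
    tile_sizes, chunk_sizes = [], []
    remaining, size = dim_size, tile_size
    while remaining > 0 and size > 0:
        chunk = (remaining // size) * size  # largest multiple of `size` that fits
        if chunk > 0:
            tile_sizes.append(size)
            chunk_sizes.append(chunk)
            remaining -= chunk
        size //= 2  # exponentially diminishing tile size
    return tile_sizes, chunk_sizes

# E.g. a dimension of size 17 with an initial tile size of 8:
assert continuous_tile_sizes(17, 8) == ([8, 1], [16, 1])
# i.e. a chunk of 16 is tiled with size 8 and the leftover chunk of 1 with
# size 1, so no irregularly tiled leftover remains; the chunk sizes are the
# kind of list that `multiway` splitting consumes.
```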
This patch is a first pass at making the syntax consistent across the
`LinalgTransformOp`s that use dynamic index lists for size parameters.
Previously, there were two different forms: inlining the types in the
list, or placing them in the functional-style tuple. This patch settles
on the latter.
In order to do this, the `printPackedOrDynamicIndexList`,
`printDynamicIndexList` and their `parse` counterparts were modified so
that the types can be optionally provided to the corresponding custom
directives.
All affected ops now use tablegen `assemblyFormat`, so custom
`parse`/`print` functions have been removed. A couple of ops will
likely add dynamic size support; once that happens, care should be
taken that their assembly remains consistent with the changes in this
patch.
The affected ops are as follows: `pack`, `pack_greedily`,
`tile_using_forall`. The `tile_using_for` and `vectorize` ops already
used this syntax, but their custom assembly was removed.
---------
Co-authored-by: Oleksandr "Alex" Zinenko <ftynse@gmail.com>
This patch modifies the definition of `PadOp` to take transform params
and handles for the `pad_to_multiple_of` operand.
---------
Co-authored-by: Oleksandr "Alex" Zinenko <ftynse@gmail.com>
This makes these match the behaviour of optional attributes (which are
omitted when they have their default value of none). This allows for
concise assembly formats without a custom printer.
An extra print of " " is also removed. This does not change any
existing uses of oilists, but if the parameter before the oilist is
optional, that print would previously have added an extra space.
This is #68694 plus some fixes for the MLIR Python tests; unfortunately
GitHub does not allow re-opening PRs 😕
Rename and restructure tiling-related transform ops from the structured
extension to be more homogeneous. In particular, all ops now follow a
consistent naming scheme:
- `transform.structured.tile_using_for`;
- `transform.structured.tile_using_forall`;
- `transform.structured.tile_reduction_using_for`;
- `transform.structured.tile_reduction_using_forall`.
This drops the "_op" naming artifact from `tile_to_forall_op` that
shouldn't have been included in the first place, consistently specifies
the name of the control flow op to be produced for loops (instead of
`tile_reduction_using_scf` since `scf.forall` also belongs to `scf`),
and opts for the `using` connector to avoid ambiguity.
The loops produced by tiling are now systematically placed as *trailing*
results of the transform op. While this required changing 3 out of 4 ops
(except for `tile_using_for`), this is the only choice that makes sense
when producing multiple `scf.for` ops that can be associated with a
variadic number of handles. This choice is also most consistent with
*other* transform ops from the structured extension, in particular with
fusion ops, that produce the structured op as the leading result and the
loop as the trailing result.
This PR renames the vectorization transform ops as follows:
* `structured.masked_vectorize` => `structured.vectorize`. This reflects
the fact that since [recently](https://reviews.llvm.org/D157774) the op
can also handle the unmasked case.
* `structured.vectorize` =>
`structured.vectorize_children_and_apply_patterns`. This reflects the
fact that the op does not just vectorize the given payload op but all
vectorizable children contained in it, and applies patterns before and
after for preparation and clean-up.
This rename was discussed first
[here](https://reviews.llvm.org/D157774).
The PR also adapts and cleans up the tablegen description of the
`VectorizeChildrenAndApplyPatternsOp` (formerly `VectorizeOp`).
This PR cleans up the test of the mix-ins of this dialect. Most of the
character diff is due to factoring out the creation of the top-level
sequence into a decorator. This decorator significantly shortens the
definition of the individual tests and can be used in all but one test,
where the top-level op is a PDL op. The only functional diff is due to
the fact that the decorator uses `transform.any_op` instead of
`pdl.operation` for the type of the root handle. The only remaining
usage of the PDL dialect is now in the test of a PDL-related op.
The mix-in of the `MultiTileSizesOp` set the default value of its
`divisor` argument. This repeats information from the tablegen
definition, is not necessary (since the generic code deals with `None`
and default values), and risks silently getting out of sync. This patch
removes the setting of the value and forwards `None` to the generic
constructor instead.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D159416
This patch simplifies and improves the mix-in of the `TileOp`. In
particular:
* Accept all types of sizes (static, dynamic, scalable) in a single
argument `sizes`.
* Use the existing convenience function to dispatch different types of
sizes instead of repeating the implementation in the mix-in.
* Pass `None` values of optional arguments on as-is to the init
function of the super class.
* Reformat with the default indentation width (4 spaces vs 2 spaces).
* Add a test for providing scalable sizes.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D159417
A previous commit changed the mix-ins for the Python bindings of the
PadOp, including some tests, but did not update the corresponding
`CHECK` statements. This patch does that.
The mix-in did not allow leaving many of the arguments unset, even
though they represent optional attributes. Instead, it set default
values, which have different semantics in some cases. In other cases,
setting the default values is already done by the C++ layer, making the
mix-in's values redundant and potentially wrong under future changes to
the TD or C++ files. With this patch, `None` is preserved until the
generated binding, which handles it as desired.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D158844
Extends the existing mix-in for VectorizeOp with support for the missing unit attributes.
Also fixes the unintuitive implementation where
`structured.VectorizeOp(target=target, vectorize_padding=False)` still resulted in the creation of the UnitAttr `vectorize_padding`.
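A minimal sketch of the fixed behavior, with a hypothetical helper rather
than the actual mix-in code: a unit attribute should only be materialized
when the corresponding keyword argument is truthy.
```
def unit_attr_names(**flags):
    # Only flags passed as a truthy value become unit attributes on the op;
    # False/None means "attribute absent", not "attribute present with value
    # False".
    return [name for name, value in flags.items() if value]

assert unit_attr_names(vectorize_padding=False) == []
assert unit_attr_names(vectorize_padding=True) == ["vectorize_padding"]
```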
Reviewed By: ingomueller-net
Differential Revision: https://reviews.llvm.org/D158726
This PR implements python enum bindings for *all* the enums - this includes `I*Attrs` (including positional/bit) and `Dialect/EnumAttr`.
There are a few parts to this:
1. CMake: a small addition to `declare_mlir_dialect_python_bindings` and `declare_mlir_dialect_extension_python_bindings` to generate the enum, a boolean arg `GEN_ENUM_BINDINGS` to make it opt-in (even though it works for basically all of the dialects), and an optional `GEN_ENUM_BINDINGS_TD_FILE` for handling corner cases.
2. EnumPythonBindingGen.cpp: there are two weedy aspects here that took investigation:
1. If an enum attribute is not a `Dialect/EnumAttr`, then the `EnumAttrInfo` record is canonical with respect to both the cases of the enum **and the `AttrDefName`**. On the other hand, if an enum is a `Dialect/EnumAttr`, then the `EnumAttr` record has the correct `AttrDefName` ("load bearing", i.e., it populates `ods.ir.AttributeBuilder('<NAME>')`), but its `enum` field, an instance of `EnumAttrInfo`, contains the cases. The solution is to generate a single enum class for both `Dialect/EnumAttr` and "independent" `EnumAttrInfo`, but to make that class interoperable with two builder registrations that both do the right thing (see next sub-bullet).
2. Because we don't have a good connection to the C++ `EnumAttr`, i.e., only the `enum class` getters are exposed (like `DimensionAttr::get(Dimension value)`), we have to resort to parsing, e.g., `Attribute.parse(f'#gpu<dim {x}>')`. This means that the set of supported `assemblyFormat`s (for the enum) is fixed at MLIR compile time (currently 2, the only 2 I saw). There might be some things that could be done here, but they would require quite a bit more C API work to support generically (e.g., casting ints to enum cases and binding all the getters, or going generically through the `symbolize*` methods, like `symbolizeDimension(uint32_t)` or `symbolizeDimension(StringRef)`).
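As a rough, hypothetical sketch of what the generated binding amounts to
(simplified; the class and helper below are illustrative, not the generated
code), using the `gpu` dimension enum as an example:
```
from enum import IntEnum

# Simplified stand-in for a generated Python enum (names are illustrative).
class Dimension(IntEnum):
    x = 0
    y = 1
    z = 2

def dimension_attr_text(dim: Dimension) -> str:
    # The registered attribute builder recovers the MLIR attribute by calling
    # ir.Attribute.parse on text like this, in a context with the gpu dialect
    # loaded; here we only show the assembly it would parse.
    return f"#gpu<dim {dim.name}>"

assert dimension_attr_text(Dimension.y) == "#gpu<dim y>"
```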
A few small changes:
1. In addition, since this patch registers default builders for attributes where people might've had their own builders already written, I added a `replace` param to `AttributeBuilder.insert` (`False` by default).
2. `makePythonEnumCaseName` can't handle all the different ways in which people write their enum cases, e.g., `llvm.CConv.Intel_OCL_BI`, which gets turned into `INTEL_O_C_L_B_I` (because `llvm::convertToSnakeFromCamelCase` doesn't look for runs of caps). So I dropped it. On the other hand, some regularization does need to be done, because some enums have `None` as a case (and others might have other Python keywords).
3. I turned on `llvm` dialect generation here in order to test `nvvm.WGMMAScaleIn`, which is an enum with no explicit discriminator for the `neg` case (see `mlir/include/mlir/IR/EnumAttr.td`, L22-L25 at d7e26b5620).
Note, dialects that didn't get a `GEN_ENUM_BINDINGS` don't have any enums to generate.
Let me know if I should add more tests (the three trivial ones I added exercise both the supported `assemblyFormat`s and `replace=True`).
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D157934
In particular:
* Fix and extend the support for constructing possibly nested ArrayAttrs
from lists of Python ints. This can probably be generalized further
and used in many more places.
* Add arguments for `pad_to_multiple_of` and `copy_back_op`.
* Format with black and reorder (keyword-only) arguments to match
tablegen and (`*_gen.py`) order.
* Extend tests for new features.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D157789
The tests of the mix-in classes of the Python bindings currently passed
even if the ops constructed by the mix-ins under test failed to verify.
This is because the assembled IR is still printed in generic form even
if it does not verify, and the `CHECK` statements are formulated in such
a lenient way that they also match that generic form. This patch adds
explicit verification to the decorator that is used for all test
functions.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D157790
Allow the `names` argument in `MatchOp.match_op_names` to be of type
`str` in addition to `Sequence[str]`. In this case, the argument is
treated as a list with one name, i.e., it is possible to write
`MatchOp.match_op_names(..., "test.dummy")` instead of
`MatchOp.match_op_names(..., ["test.dummy"])`.
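A minimal sketch of the normalization this overload performs (hypothetical
helper, not the actual mix-in code):
```
from typing import Sequence, Union

def normalize_names(names: Union[str, Sequence[str]]) -> list:
    # A bare string is a single op name, not a sequence of characters; any
    # other sequence is materialized as a list of names.
    if isinstance(names, str):
        return [names]
    return list(names)

assert normalize_names("test.dummy") == ["test.dummy"]
assert normalize_names(["test.dummy", "test.other"]) == ["test.dummy", "test.other"]
```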
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D155807
The class did not have any mix-in until now. The new mix-in has two
overloads for the constructor of the class: one with all arguments and
one without the result types, which are defaulted to `AnyOpType`.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D155695
This patch adds a mixin for TileToForallOp to
_structured_transform_ops_ext.py with syntactic sugar for constructing
such ops. First, the types of the results are made optional and filled
with common default values if omitted. Second, for num_threads and
tile_sizes, the three possible forms (static, dynamic, or packed), can
now all be given through the same respective argument, which gets
dispatched to the correct form-specific argument automatically.
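The dispatch idea as a hypothetical Python sketch (the names and the sentinel
are illustrative, not the actual mix-in code): a single mixed list is split
into the static-size list, with a sentinel marking dynamic entries, and the
list of dynamic operands.
```
DYNAMIC = -1  # illustrative sentinel; the real op marks dynamic entries in
              # the static attribute with ShapedType's dynamic-size marker

def dispatch_mixed_sizes(sizes):
    """Split a mixed list of ints and SSA handles (anything non-int here)
    into the static size list and the list of dynamic size operands."""
    static, dynamic = [], []
    for size in sizes:
        if isinstance(size, int):
            static.append(size)
        else:
            static.append(DYNAMIC)
            dynamic.append(size)
    return static, dynamic

# With some handle object h: dispatch_mixed_sizes([4, h, 8]) == ([4, DYNAMIC, 8], [h])
# The packed form (a single handle providing all sizes at once) bypasses this
# and is forwarded to the dedicated packed operand instead.
```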
Reviewed By: nicolasvasilache, ftynse
Differential Revision: https://reviews.llvm.org/D155090
The extension class to MatchOp has a class method called match_op_names.
The previous version of that function did not allow specifying the
result type. This, however, may be useful/necessary if the op consuming
the resulting handle requires a particular type (such as the
bufferization.EmptyTensorToAllocTensorOp). This patch adds an overload
to match_op_names that allows specifying the result type.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D155567
This is an ongoing series of commits that are reformatting our
Python code.
Reformatting is done with `black`.
If you end up having problems merging this commit because you
have made changes to a python file, the best way to handle that
is to run git checkout --ours <yourfile> and then reformat it
with black.
If you run into any problems, post to discourse about it and
we will try to help.
RFC Thread below:
https://discourse.llvm.org/t/rfc-document-and-standardize-python-code-style
Differential Revision: https://reviews.llvm.org/D150782
The initial bring-up of the Transform dialect relied on PDL to provide
the default handle type (`!pdl.operation`) and the matching capability.
Both are now provided natively by the Transform dialect, removing the
reason to have a hard dependency on the PDL dialect and its interpreter.
Move PDL-related transform operations into a separate extension.
This requires introducing a dialect state extension mechanism into the
Transform dialect so that it no longer needs to know about PDL
constraint functions, which may now be injected by extensions similarly
to operations and types. This mechanism will be reused to connect
pattern application drivers and the Transform dialect.
This completes the restructuring of the Transform dialect to remove
overreliance on PDL.
Note to downstreams: flows that are using `!pdl.operation` with Transform
dialect operations will now require `transform::PDLExtension` to be
applied to the transform dialect in order to provide the transform
handle type interface for `!pdl.operation`.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D151104
Use the recently introduced transform dialect parameter mechanism to
perform controllable multi-size tiling with sizes computed at the
transformation time rather than at runtime.
This requires generalizing the tile and split structured transform
operations to work with any transform dialect handle types, which is
desirable in itself to avoid unchecked overuse of PDL OperationType.
Reviewed By: shabalin
Differential Revision: https://reviews.llvm.org/D140980
This diff causes the `tblgen`-erated print() function to skip printing a
`DefaultValuedAttr` attribute when the value is equal to the default.
This feature will reduce the amount of custom printing code that needs
to be written by users in a relatively common scenario. As a motivating
example, for the
fastmath flags in the LLVMIR dialect, we would prefer to print this:
```
%0 = llvm.fadd %arg0, %arg1 : f32
```
instead of this:
```
%0 = llvm.fadd %arg0, %arg1 {fastmathFlags = #llvm.fastmath<none>} : f32
```
This diff standardizes the printing behavior for default-valued
attributes.
This is an updated version of https://reviews.llvm.org/D135398, without the per-attribute bit to control printing.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D135993
Use the recently introduced TransformTypeInterface instead of hardcoding
the PDLOperationType. This will allow the operations to use more
specific transform types to express pre/post-conditions in the future.
It requires the syntax and Python op construction API to be updated.
Dialect extensions will be switched separately.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D135584
Introduce two different failure propagation modes in the Transform
dialect's Sequence operation. These modes specify whether silenceable
errors produced by nested ops are immediately propagated, thus stopping
the sequence, or suppressed. The latter is useful in end-to-end
transform application scenarios where the user cannot correct the
transformation, but it is robust enough to tolerate silenceable
failures. It
can be combined with the "alternatives" operation. There is
intentionally no default value to avoid favoring one mode over the
other.
Downstreams can update their tests using:
```
S='s/sequence \(%.*\) {/sequence \1 failures(propagate) {/'
T='s/sequence {/sequence failures(propagate) {/'
git grep -l transform.sequence | xargs sed -i -e "$S"
git grep -l transform.sequence | xargs sed -i -e "$T"
```
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D131774
Introduce a structured transform op that emits IR computing the multi-tile
sizes with requested parameters (target size and divisor) for the given
structured op. The sizes may fold to arithmetic constant operations when the
shape is constant. These operations may then be used to call the existing
tiling transformation with a single non-zero dynamic size (i.e. perform
strip-mining) for each of the dimensions separately, thus achieving multi-size
tiling with optional loop interchange. A separate test exercises the entire
script.
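To make multi-size tiling concrete, here is a hypothetical Python sketch
(illustrative only, neither the emitted IR nor the op's size computation): a
dimension is covered exactly by two tile sizes so that no partial tile
remains, and each part can then be strip-mined separately.
```
def strip_mine_two_sizes(n, s1, n1, s2, n2):
    """Enumerate (offset, size) tiles when a dimension of size n is covered
    by n1 tiles of size s1 followed by n2 tiles of size s2."""
    assert n1 * s1 + n2 * s2 == n, "the two tile sizes must exactly cover n"
    tiles = [(i * s1, s1) for i in range(n1)]
    tiles += [(n1 * s1 + i * s2, s2) for i in range(n2)]
    return tiles

# A dimension of size 10 with a target size of 4 can be covered without a
# partial tile by two tiles of 4 and one tile of 2:
assert strip_mine_two_sizes(10, 4, 2, 2, 1) == [(0, 4), (4, 4), (8, 2)]
```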
Depends On D129217
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D129287
Extend the definition of the Tile structured transform op to enable it
accepting handles to operations that produce tile sizes at runtime. This is
useful by itself and prepares for more advanced tiling strategies. Note that
the changes are relevant only to the transform dialect, the tiling
transformation itself already supports dynamic sizes.
Depends On D129216
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D129217
Introduce a new transformation on structured ops that splits the iteration
space into two parts along the specified dimension. The index at which the
splitting happens may be static or dynamic. This transformation can be seen as
a rudimentary form of index-set splitting that only supports the splitting
along hyperplanes parallel to the iteration space hyperplanes, and is therefore
decomposable into per-dimension application.
It is a key low-level transformation that enables independent scheduling for
different parts of the iteration space of the same op, which hasn't been
possible previously. It may be used to implement, e.g., multi-size
tiling. In the future, peeling can be implemented as a combination of
split-off amount computation and splitting.
The transformation is conceptually close to tiling in its separation of the
iteration and data spaces, but cannot be currently implemented on top of
TilingInterface as the latter does not properly support `linalg.index`
offsetting.
Note that the transformation intentionally bypasses folding of
`tensor.extract_slice` operations when creating them, as this folding
was found to prevent repeated splitting of the same operation due to
internal assumptions about the extract/insert_slice combination in
dialect utilities.
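As a rough illustration of the effect on a 1-D iteration space (a
hypothetical Python sketch, not the actual rewrite): splitting at
`split_point` turns one loop into two, with the second part offsetting its
indices (cf. the `linalg.index` offsetting mentioned above).
```
def apply(body, n):
    # Original op: a single iteration space of size n.
    for i in range(n):
        body(i)

def apply_split(body, n, split_point):
    # After splitting at `split_point`: two parts over [0, split_point) and
    # [split_point, n); the second part offsets its indices.
    for i in range(split_point):
        body(i)
    for i in range(n - split_point):
        body(i + split_point)

# Both cover exactly the same index set:
seen_a, seen_b = [], []
apply(seen_a.append, 10)
apply_split(seen_b.append, 10, 6)
assert seen_a == seen_b
```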
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D129090