Commit Graph

80 Commits

Author SHA1 Message Date
Thomas Raoux
f7fda6ba4a [mlir][linalg] Add extra parameter to tiling reduction to foreach_thread
This adds a tile_size parameter; when it is set, the tiles are
cyclically distributed onto the threads of the scf.foreach_thread op.

Differential Revision: https://reviews.llvm.org/D139474
2022-12-07 18:37:05 +00:00
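
As an illustration, the new parameter might appear in a transform script roughly as below (the op mnemonic `tile_reduction_using_foreach_thread`, the attribute spelling, and the result arity are assumed and may differ at this exact revision; see also the sketch under the original reduction-tiling commit further down):

```mlir
// Inside a transform.sequence body; %red is a handle to a linalg reduction op.
// Setting tile_sizes requests a cyclic distribution of tiles over the threads
// of the generated scf.foreach_thread instead of one contiguous chunk per thread.
%loop, %fill, %partial, %merge =
  transform.structured.tile_reduction_using_foreach_thread %red
    { num_threads = [0, 8], tile_sizes = [0, 4] }
```
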
Kazu Hirata
1a36588ec6 [mlir] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-03 18:50:27 -08:00
Ramkumar Ramachandra
57c893599d mlir/linalg: improve debugging in LinalgTransformOps
Make use of notifyMatchFailure in one place.

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>

Differential Revision: https://reviews.llvm.org/D139191
2022-12-03 09:55:03 +01:00
Nicolas Vasilache
a8850312c1 [mlir][Transform][NFC] Use a single rewriter instead of duplicating it everywhere
Differential Revision: https://reviews.llvm.org/D139094
2022-12-01 03:54:31 -08:00
Matthias Springer
5cb68314f3 [mlir] Fix build breakage introduced by D139026 2022-12-01 09:16:49 +01:00
Matthias Springer
504a7516a1 [mlir][linalg][transform] Add structured.replace op
This op is useful for debugging/experiments and allows users to replace ops (no operands + IsolatedFromAbove) with the op given in the region of the transform op.

Differential Revision: https://reviews.llvm.org/D139026
2022-12-01 09:04:35 +01:00
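
For illustration, a hedged sketch of how the op might be used (the payload op names are placeholders and the result handle is assumed; exact syntax may differ at this revision):

```mlir
// Inside a transform.sequence body; %arg0 is the root handle (!pdl.operation).
%targets = transform.structured.match ops{["foo.some_op"]} in %arg0
// Every matched payload op is replaced by the op spelled out in the region.
%replaced = transform.structured.replace %targets {
  "foo.replacement"() : () -> ()
}
```
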
Lorenzo Chelini
a9733b8a5e [MLIR] Adopt DenseI64ArrayAttr in tensor, memref and linalg transform
This commit is a first step toward removing inconsistencies between dynamic
and static attributes (i64 v. index) by dropping `I64ArrayAttr` and
using `DenseI64ArrayAttr` in Tensor, Memref and Linalg Transform ops.
In Linalg Transform ops only `TileToScfForOp` and `TileOp` have been updated.

See related discussion: https://discourse.llvm.org/t/rfc-inconsistency-between-dynamic-and-static-attributes-i64-v-index/66612/1

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D138567
2022-11-25 09:43:30 +01:00
Alexander Belyaev
65b72a78cc [mlir] Clean-up ViewLikeOpInterface w.r.t. kDynamic change.
Differential Revision: https://reviews.llvm.org/D138478
2022-11-22 10:51:53 +01:00
Aliia Khasanova
399638f98c Merge kDynamicSize and kDynamicSentinel into one constant.

Differential Revision: https://reviews.llvm.org/D138282
2022-11-21 13:01:26 +00:00
Mehdi Amini
44601785ee Apply clang-tidy fixes for bugprone-argument-comment in LinalgTransformOps.cpp (NFC) 2022-11-18 06:22:53 +00:00
Kazu Hirata
eba3fece88 [mlir] Fix warnings
This patch fixes:

  mlir/include/mlir/ExecutionEngine/SparseTensor/Storage.h:955:20:
  error: unused variable 'sz' [-Werror,-Wunused-variable]

  mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp:1460:2:
  error: extra ';' outside of a function is incompatible with C++98
  [-Werror,-Wc++98-compat-extra-semi]
2022-11-15 12:01:00 -08:00
Thomas Raoux
99833cd818 [mlir][linalg] Add reduction tiling using scf.foreach_thread
This adds a transformation to tile reduction operations into partial
reductions using scf.foreach_thread. It uses the
PartialReductionOpInterface to create a merge operation for the partial
tiles.

Differential Revision: https://reviews.llvm.org/D137912
2022-11-14 18:05:40 +00:00
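
For illustration, a hedged sketch of tiling a reduction into partial reductions plus a merge (the op mnemonic and the number/order of result handles are assumed):

```mlir
// Inside a transform.sequence body; %arg0 is the root handle.
%red = transform.structured.match ops{["linalg.generic"]} in %arg0
// Assumed result handles: the scf.foreach_thread loop, the fill of the
// neutral element, the partial (per-thread) reduction, and the merge op.
%loop, %fill, %partial, %merge =
  transform.structured.tile_reduction_using_foreach_thread %red
    { num_threads = [0, 4] }
```
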
Nicolas Vasilache
6370f75ad7 [mlir][Transform] Add support for dynamically unpacking tile_sizes / num_threads in tile_to_foreach_thread
This commit adds automatic unpacking of Values of type pdl::OperationType to the underlying single-result OpResult.
This allows mixing single-value, attribute, and multi-value pdl::Operation tile sizes and num_threads in TileToForeachThreadOp.

Differential Revision: https://reviews.llvm.org/D137896
2022-11-14 04:39:57 -08:00
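
As an illustration, a handle to a single-result payload op might now be mixed directly with static sizes (handle names and the mixed-list syntax are assumed):

```mlir
// Inside a transform.sequence body; %arg0 is the root handle and %matmul a
// previously matched linalg.matmul handle. %sz tracks a payload op whose
// single result supplies a dynamic tile size; it is unpacked automatically
// and mixed with the static size 20.
%sz = transform.structured.match ops{["foo.size_producer"]} in %arg0
%forall, %tiled =
  transform.structured.tile_to_foreach_thread_op %matmul tile_sizes [%sz, 20]
```
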
Guray Ozen
d93be483ea [mlir][transform] Make tile_to_foreach_thread_op builder to use ArrayAttr
D137413 clarified the `scf.foreach_thread` thread mapping nicely. `tile_to_foreach_thread_op` is one of the ops that generate `scf.foreach_thread`; however, its builders were still taking an integer array.

This fixes a potential bug.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D137891
2022-11-12 19:27:25 +01:00
Guray Ozen
6663f34704 [mlir] Introduce device mapper attribute for thread_dim_map and mapped to dims
`scf.foreach_thread` defines the mapping of its loops to processors via an integer array; see the example below. A lowering can use this mapping. However, expressing the mapping as an integer array is very confusing, especially when there are multiple levels of parallelism. In addition, the op does not verify the integer array. This change introduces a device mapping attribute to make the mapping descriptive and verifiable, and then makes the GPU transform dialect use it.

```
scf.foreach_thread (%i, %j) in (%c1, %c2) {
	scf.foreach_thread (%i2, %j2) in (%c1, %c2)
	{...} { thread_dim_mapping = [0, 1]}
} { thread_dim_mapping = [0, 1]}
```

It first introduces a `DeviceMappingInterface`, which is an attribute interface. `scf.foreach_thread` defines its mapping via this interface. A lowering must define its own attributes and implement this interface as well; this gives us clear validation.

The change also introduces two new attributes (`#gpu.thread<x/y/z>` and `#gpu.block<x/y/z>`). After this change, the above code prints as below, which clarifies the loop mappings. The change also implements consumption of these two new attributes by the transform dialect: the transform dialect binds the outermost loops to the thread blocks and the innermost loops to threads.

```
scf.foreach_thread (%i, %j) in (%c1, %c2) {
	scf.foreach_thread (%i2, %j2) in (%c1, %c2)
	{...} { thread_dim_mapping = [#gpu.thread<x>, #gpu.thread<y>]}
} { thread_dim_mapping = [#gpu.block<x>, #gpu.block<y>]}
```

Reviewed By: ftynse, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D137413
2022-11-11 08:44:57 +01:00
Hanhan Wang
52ffc72818 [mlir][tiling] Relax tiling to accept generating multiple operations.
Some operations need to generate multiple operations when implementing
the tiling interface. For a concrete example in IREE, see
https://github.com/iree-org/iree/pull/10905.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D137300
2022-11-04 13:59:24 -07:00
Nicolas Vasilache
c8fab80d64 [mlir][Transform] NFC - Add custom builders for some useful transforms.
Differential Revision: https://reviews.llvm.org/D137443
2022-11-04 10:04:28 -07:00
Thomas Raoux
3310fe55d9 [mlir][linalg] Add reduction tiling transformation
Add a transformation to tile reduction ops into a parallel operation
followed by a merge operation. This is equivalent to the existing
reduction splitting transformation, but it uses loops instead of
higher-dimensional linalg ops.

Differential Revision: https://reviews.llvm.org/D136586
2022-11-03 23:07:12 +00:00
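
For illustration, a hedged sketch of the loop-based variant (the `tile_reduction_using_scf` mnemonic, attribute spelling, and result arity are assumed):

```mlir
// Inside a transform.sequence body; %arg0 is the root handle.
%red = transform.structured.match ops{["linalg.generic"]} in %arg0
// Assumed result handles: the scf.for loop, the fill of the neutral element,
// the partial (parallel) reduction, and the merge op.
%loop, %fill, %partial, %merge =
  transform.structured.tile_reduction_using_scf %red { tile_sizes = [0, 4] }
```
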
Nicolas Vasilache
d4c4e49196 [mlir][Linalg] Drop usage of tileWithLinalgTilingOptions in the structured.tile transform
This is on a path to deprecation.
Context: https://discourse.llvm.org/t/psa-retire-tileandfuselinalgops-method/63850

As the interface-based transformation is more generic, some additional folding of AffineMin/MaxOp and some extra canonicalizations are needed.
This can be further evolved.

Differential Revision: https://reviews.llvm.org/D137195
2022-11-01 14:36:24 -07:00
Matthias Springer
b169643f3a [mlir][interfaces] Remove getDestinationOperands from TilingInterface
`getDestinationOperands` was almost a duplicate of `DestinationStyleOpInterface::getOutputOperands`. Now that the interface has been moved to mlir/Interfaces, it is no longer needed.

Differential Revision: https://reviews.llvm.org/D136240
2022-10-24 09:26:19 +02:00
Thomas Raoux
246e8c3502 [mlir][linalg] Add back split reduction tests dropped by previous commit
The transition to transform dialect based tests dropped several cases of
the split reduction testing. Adding them back.

Differential Revision: https://reviews.llvm.org/D136287
2022-10-19 20:42:55 +00:00
Alex Zinenko
b0bf7ffffc [mlir] add utilities for DiagnosedSilenceableFailure
This patch adds helper functions similar to `emitError` for the
DiagnosedSilenceableFailure class in both the silenceable and definite
failure cases. These helpers simplify the use of said class and make
transform op application code idiomatic.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D136072
2022-10-17 15:31:28 +00:00
Nicolas Vasilache
4b17710369 [mlir][Linalg] Support multi-output fusion in FuseIntoContainingOp
This revision adds the ability to fuse tileable ops with multiple results to
the transform.fuse_into_containing_op.

Differential Revision: https://reviews.llvm.org/D135955
2022-10-14 03:54:54 -07:00
Nicolas Vasilache
44cfea0279 [mlir][Linalg] Retire LinalgStrategyTilePass and filter-based pattern.
Context: https://discourse.llvm.org/t/psa-retire-linalg-filter-based-patterns/63785

Uses of `LinalgTilingPattern::returningMatchAndRewrite` are replaced by a top-level `tileWithLinalgTilingOptions` function that is marked obsolete and serves
as a temporary means to transition away from `LinalgTilingOptions`-based tiling.
LinalgTilingOptions supports too many options that have been orthogonalized with the use of the transform dialect.

Additionally, the revision introduces a `transform.structured.tile_to_scf_for` structured transform operation that is needed to properly tile `tensor.pad`
via the TilingInterface. Uses of `transform.structured.tile` will be deprecated and replaced by this new op.
This will achieve the deprecation of `linalg::tileLinalgOp`.
Context: https://discourse.llvm.org/t/psa-retire-tileandfuselinalgops-method/63850

In the process of transitioning, tests that were performing tile and distribute on tensors are retired: transformations should be orthogonalized better in the future.
In particular, tiling to specific loop types and tileAndDistribute behavior are not available via the transform ops.
The behavior is still available as part of the `tileWithLinalgTilingOptions` method to allow downstream clients to transition without breakages but is meant to be retired soon.

As more tests are ported to the transform dialect, it became necessary to introduce a test-transform-dialect-erase-schedule-pass to discard the transform specification
once applied so that e2e lowering and execution is possible.

Lastly, a number of redundant tests that were testing composition of patterns are retired as they are available with a better mechanism via the transform dialect.

Differential Revision: https://reviews.llvm.org/D135573
2022-10-11 02:42:56 -07:00
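
For illustration, a hedged sketch of the new op on a `tensor.pad` (the bracketed size list and result arity are assumed, mirroring `transform.structured.tile`):

```mlir
// Inside a transform.sequence body; %arg0 is the root handle.
%pad = transform.structured.match ops{["tensor.pad"]} in %arg0
// Tiles via the TilingInterface and materializes scf.for loops; one loop
// handle is assumed to be produced per non-zero tile size.
%tiled, %loop0, %loop1 = transform.structured.tile_to_scf_for %pad [2, 4]
```
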
Nicolas Vasilache
7915027926 [mlir][Linalg] Retire LinalgStrategyTileAndFusePass and filter-based pattern.
Context: https://discourse.llvm.org/t/psa-retire-linalg-filter-based-patterns/63785

In the process, also retire `tileConsumerAndFuseProducers` that is now replaced by `tileConsumerAndFuseProducerGreedilyUsingSCFForOp`.

Context: https://discourse.llvm.org/t/psa-retire-tileandfuselinalgops-method/63850

When performing this replacement, a change of behavior appeared: the older `tileConsumerAndFuseProducers` would split the parallel
and non-parallel dimensions automatically and perform a first level of tile-and-fuse on parallel dimensions only and then introduce a
second level of tiling-only on the reduction dimensions. The newer `tileConsumerAndFuseProducerGreedilyUsingSCFForOp` on the other hand
does not perform this breakdown. As a consequence, the transform specification is evolved to produce the same output.

Additionally, replace some uses of `unsigned` by `int64_t` where possible without pulling in larger interface changes (left for a future PR).

Context: https://www.youtube.com/watch?v=Puio5dly9N8

Lastly, tests that were performing tile-and-fuse and distribute on tensors are retired: the generated IR mixing scf.for, tensors, and
distributed processor ids was racy at best.

Differential Revision: https://reviews.llvm.org/D135559
2022-10-10 07:04:01 -07:00
Nicolas Vasilache
af664e4459 [mlir][Transform] Add a transform.split_handles operation and fix general silenceable bugs.
The transform.split_handles op is useful for ensuring that a statically known number of operations is
tracked by the source `handle` and for extracting them into individual handles
that can be further manipulated in isolation.

In the process of making the op robust with respect to silenceable errors and the suppress mode, issues were
uncovered and fixed.

The main issue was that silenceable errors were short-circuited too early and the payloads were not
set. This resulted in suppressed silenceable errors not propagating correctly.
Fixing the issue triggered a few test failures: silenceable error returns now must properly set the results state.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D135426
2022-10-07 09:01:34 -07:00
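
For illustration, a hedged sketch (the result syntax and the `in [N]` clause are assumed):

```mlir
// Inside a transform.sequence body; %arg0 is the root handle.
%matmuls = transform.structured.match ops{["linalg.matmul"]} in %arg0
// Split one handle that is expected to track exactly two payload ops into
// two single-op handles that can be transformed independently.
%mm:2 = transform.split_handles %matmuls in [2]
```
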
Guray Ozen
89bb0cae46 [mlir][transform] Create GPU transform dialect
This revision adds the GPU transform dialect. It also introduces the "transform.gpu" prefix for all ops related to this dialect.

MLIR already had two GPU transform ops in Linalg. This revision moves these ops into GPUTransformOps. The ops are as follows:

`transform.structured.map_nested_foreach_thread_to_gpu_blocks` -> `transform.gpu.map_foreach_to_blocks`
This op selects the outermost (top-level) foreach_thread and parallelizes it across GPU blocks. It can also generate `gpu.launch`.

`transform.structured.map_nested_foreach_thread_to_gpu_threads` -> `transform.gpu.map_nested_foreach_to_threads`
This op parallelizes nested foreach_thread ops that are inside a `gpu.launch` across GPU threads.

It doesn't add new functionality, but there is some minor refactoring of the code.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D134800
2022-10-04 13:09:08 +02:00
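
For illustration, a hedged sketch of the two ops after the move (the `generate_gpu_launch` and `blockDim` spellings and the result handles are assumed):

```mlir
// Inside a transform.sequence body; %func is a handle to a func.func payload.
// Map the top-level scf.foreach_thread to GPU blocks, creating a gpu.launch.
%gpu_launch = transform.gpu.map_foreach_to_blocks %func { generate_gpu_launch }
// Map the scf.foreach_thread ops nested inside the launch to GPU threads.
%mapped = transform.gpu.map_nested_foreach_to_threads %gpu_launch
            { blockDim = [32, 4, 1] }
```
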
River Riddle
10c04f4641 [mlir:GPU][NFC] Update GPU API to use prefixed accessors
This doesn't flip the switch for prefix generation yet; that'll be
done in a followup. 2022-09-30 15:27:10 -07:00
2022-09-30 15:27:10 -07:00
Jakub Kuderski
abc362a107 [mlir][arith] Change dialect name from Arithmetic to Arith
Suggested by @lattner in https://discourse.llvm.org/t/rfc-define-precise-arith-semantics/65507/22.

Tested with:
`ninja check-mlir check-mlir-integration check-mlir-mlir-spirv-cpu-runner check-mlir-mlir-vulkan-runner check-mlir-examples`

and `bazel build --config=generic_clang @llvm-project//mlir:all`.

Reviewed By: lattner, Mogball, rriddle, jpienaar, mehdi_amini

Differential Revision: https://reviews.llvm.org/D134762
2022-09-29 11:23:28 -04:00
Murali Vijayaraghavan
146c3ea075 [mlir] Add support for parallel dim *after* reduction dim in split reduction
Previously, the splitReduction transformation added the split parallel dimension
*before* the reduction dimension, leading to tiling for reduction. This
commit creates an option to create the parallel dimension *after* the
reduction dimension, allowing us to transform the op into a vertical reduction
with SIMD parallelism.

Reviewed By: ThomasRaoux, dcaballe

Differential Revision: https://reviews.llvm.org/D134764
2022-09-29 01:24:01 +00:00
Guray Ozen
f8ad6eaf92 [mlir] Refactor transform dialect's gpu block func
This revision refactors the GPU block id generator lambda that is used in the transform dialect. It removes the lambda and instead uses a static function named generateGpuBlockIds.

It also simplifies the arguments that the function takes.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D134724
2022-09-27 12:27:17 +02:00
Thomas Raoux
e99f437140 [mlir] Plumb missing parameter to gpu transform op
rewriteMapNestedForeachThreadToGpuThreads was dropping the parameter to
skip inserting a barrier.

Differential Revision: https://reviews.llvm.org/D134500
2022-09-23 16:58:44 +00:00
Guray Ozen
f7907bc536 [mlir] Add map_nested_foreach_thread_to_gpu_blocks op to transform dialect
This revision adds a new op `map_nested_foreach_thread_to_gpu_blocks` to the transform dialect.
If the `generate_gpu_launch` argument is given, the op first generates a `gpu.launch`. Otherwise, `target` must be a `gpu.launch`. The op searches for top-level `scf.foreach_thread` ops inside the `gpu.launch` and distributes them with the gpu.block_id attribute.
Loop mapping is explicit and given by the map_nested_foreach_thread_to_gpu_blocks op. Mapping is done one-to-one; therefore the loops disappear.
It also adds the GPU dialect as a dependent dialect since the new op can create a `gpu::LaunchOp` for a given `scf::ForeachThreadOp`.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D134190
2022-09-23 16:27:10 +02:00
Mahesh Ravishankar
acc2a12c33 [mlir][Linalg] Expose the implementation of the tiling to scf.foreach_thread.
This allows downstream users to use the tiling implementation
itself, while performing other transformations that are necessary to
go with it.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D134335
2022-09-22 22:19:19 +00:00
Guray Ozen
233de4e808 [mlir] Add map_nested_foreach_thread_to_gpu_threads op to transform dialect
This revision adds a new op `map_nested_foreach_thread_to_gpu_threads` to the transform dialect. The op searches for `scf.foreach_thread` ops inside the `gpu.launch` and distributes them with the `gpu.thread_id` attribute.

Loop mapping is explicit and given by the `map_nested_foreach_thread_to_gpu_threads` op. Mapping is done one-to-one; therefore the loops disappear.

For the time being, trip counts that are dynamic or bigger than the thread sizes are not supported. However, in the future the compiler can generate a loop with static cyclic scheduling to support these cases.

The current mechanism allows `scf.foreach_thread` ops to be siblings or nested. There cannot be interleaving code between the loops when they are nested.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D133950
2022-09-19 16:27:30 +02:00
Nicolas Vasilache
12831be96c [mlir][Linalg] NFC - Cleanup internal transform APIs and produce better messages on failure to apply. 2022-09-19 04:16:15 -07:00
Nicolas Vasilache
0422a4407f [mlir][scf][Transform] Refactor transform.fuse_into_containing_op so it is iterative and supports output fusion.
This revision revisits the implementation of `transform.fuse_into_containing_op` so that it iterates on
producers one use at a time.

Support is added to fuse a producer through a foreach_thread shared tensor argument, in which case we
tile and fuse the op inside the containing op and update the shared tensor argument to the unique destination operand.
If one cannot find such a unique destination operand, the transform fails.

Differential Revision: https://reviews.llvm.org/D134051
2022-09-16 09:21:46 -07:00
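
For illustration, a hedged sketch of fusing a producer into a containing `scf.foreach_thread` (the op is assumed to take the producer and containing-op handles and to return nothing at this revision):

```mlir
// Inside a transform.sequence body; %arg0 is the root handle.
%producer = transform.structured.match ops{["linalg.fill"]} in %arg0
%containing = transform.structured.match ops{["scf.foreach_thread"]} in %arg0
// Tiles and fuses the producer at each use inside the containing op; a use
// through a shared tensor argument is rewired as described above.
transform.structured.fuse_into_containing_op %producer into %containing
```
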
Guray Ozen
5279e11f06 [mlir][linalg] Retire Linalg's Vectorization Pattern
This revision retires the LinalgCodegenStrategy vectorization pattern. Please see the context: https://discourse.llvm.org/t/psa-retire-linalg-filter-based-patterns/63785.
This revision improves the transform dialect's VectorizeOp in different ways below:
- Adds LinalgDialect as a dependent dialect. When `transform.structured.vectorize` vectorizes `tensor.pad`, it generates `linalg.init_tensor`; in this case, the Linalg dialect must be registered.
- Inserts CopyVectorizationPattern in order to vectorize `memref.copy`.
- Creates two attributes: `disable_multi_reduction_to_contract_patterns` and `disable_transfer_permutation_map_lowering_patterns`. They limit the power of vectorization and are currently intended for testing purposes.

It also removes some of the "CHECK: vector.transfer_write" lines in the vectorization.mlir test. They are redundant writes; at the end of the code there is a write to the same place, and the transform dialect no longer generates them.

Depends on D133684 that retires the LinalgCodegenStrategy vectorization pass.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D133699
2022-09-15 11:23:46 +02:00
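
For illustration, a hedged sketch of the improved op (the unit-attribute names are taken from the list above; the result handle is assumed):

```mlir
// Inside a transform.sequence body; %func is a handle to a func.func payload.
// Vectorize everything vectorizable under %func; the attributes below disable
// specific pattern sets, e.g. for testing.
%vectorized = transform.structured.vectorize %func
    { disable_multi_reduction_to_contract_patterns,
      disable_transfer_permutation_map_lowering_patterns }
```
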
Nicolas Vasilache
e479aecd56 Revert "[mlir][scf][Transform] Refactor transform.fuse_into_containing_op so it is iterative and supports output fusion."
This reverts commit 54a5f60628, which is a WIP that was pushed by mistake.
2022-09-14 08:51:30 -07:00
Nicolas Vasilache
54a5f60628 [mlir][scf][Transform] Refactor transform.fuse_into_containing_op so it is iterative and supports output fusion.
This revision revisits the implementation of `transform.fuse_into_containing_op` so that it iterates on
producers one use at a time.

Support is added to fuse a producer through a foreach_thread shared tensor argument, in which case we
tile and fuse the op inside the containing op and update the shared tensor argument to the unique destination operand.
If one cannot find such a unique destination operand the transform fails.
2022-09-14 08:50:32 -07:00
Nicolas Vasilache
593c14d422 [mlir][Linalg] Add return type filter to the transform dialect
This allows matching ops by additionally providing an idiomatic spec for a unique return type.

Differential Revision: https://reviews.llvm.org/D133862
2022-09-14 08:50:31 -07:00
Stanley Winata
8e484b522b [mlir][linalg] Add decomposition from conv_2d_nchw
Decompose conv_2d_nchw_fchw -> conv_1d_ncw_fcw

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D133551
2022-09-09 16:00:37 -07:00
Matthias Springer
547942841f [mlir][interfaces] Drop dest/tileDestOperands from TilingInterface
`getTiledImplementation`/`generateResultTileValue` only compute the tiled operation, but do not insert the result into any tensor.

Differential Revision: https://reviews.llvm.org/D133015
2022-09-01 08:53:53 +02:00
Matthias Springer
416ba2256d [mlir][linalg][transform] Support dynamic tile sizes in TileToForeachThreadOp
TileToForeachThreadOp now accepts a mix of SSA value operands and index attributes for tile_sizes and num_threads (reusing OperandsOrIntegersSizesList). In the case of an operand, a PDL_Operation must be specified that is mapped to a payload op returning the tile size or number of threads.

Differential Revision: https://reviews.llvm.org/D131949
2022-08-22 16:48:45 +02:00
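
For illustration, a hedged sketch mixing a dynamic and a static thread count (the bracketed mixed-list syntax is assumed):

```mlir
// Inside a transform.sequence body; %arg0 is the root handle.
%matmul = transform.structured.match ops{["linalg.matmul"]} in %arg0
// %n tracks a payload op whose result supplies the number of threads for the
// first dimension; the second dimension uses the static value 20.
%n = transform.structured.match ops{["foo.thread_count"]} in %arg0
%forall, %tiled =
  transform.structured.tile_to_foreach_thread_op %matmul num_threads [%n, 20]
```
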
Jeff Niu
a2ad3ec7ac [mlir][ods] Support string literals in custom directives
This patch adds support for string literals as `custom` directive
arguments. This can be useful for re-using custom parsers and printers
when arguments have a known value. For example:

```
ParseResult parseTypedAttr(AsmParser &parser, Attribute &attr, Type type) {
  return parser.parseAttribute(attr, type);
}

void printTypedAttr(AsmPrinter &printer, Attribute attr, Type type) {
  printer.printAttributeWithoutType(attr);
}
```

And in TableGen:

```
def FooOp : ... {
  let arguments = (ins AnyAttr:$a);
  let assemblyFormat = [{ custom<TypedAttr>($a, "$_builder.getI1Type()")
                          attr-dict }];
}

def BarOp : ... {
  let arguments = (ins AnyAttr:$a);
  let assemblyFormat = [{ custom<TypedAttr>($a, "$_builder.getIndexType()")
                          attr-dict }];
}
```

Instead of writing two separate sets of custom parsers and printers.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D131603
2022-08-12 20:55:11 -04:00
Matthias Springer
0581ab65ea [mlir][linalg][transform] Support matching of attributes (and their values)
Do not just check if an attribute exists on the payload op. Also check its value.

Differential Revision: https://reviews.llvm.org/D131760
2022-08-12 14:55:00 +02:00
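
For illustration, a hedged sketch of matching on an attribute value (the `attributes{...}` filter clause is assumed to accept a full attribute dictionary):

```mlir
// Inside a transform.sequence body; %arg0 is the root handle.
// Matches linalg.generic ops that carry my_attr with exactly this value,
// not merely any op on which the attribute is present.
%tagged = transform.structured.match
            ops{["linalg.generic"]} attributes{my_attr = 42 : i64} in %arg0
```
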
Nicolas Vasilache
a6bf6f25f0 [mlir][Linalg] Let FuseIntoContainingOp return success when nothing is fused.
This composes better when the op is applied in situations where it does not match.

Differential Revision: https://reviews.llvm.org/D131734
2022-08-12 02:18:31 -07:00
Kazu Hirata
9750648cb4 [mlir, flang] Use has_value instead of hasValue (NFC) 2022-08-06 11:12:47 -07:00
Jeff Niu
e179532284 [mlir] Remove types from attributes
This patch removes the `type` field from `Attribute` along with the
`Attribute::getType` accessor.

Going forward, this means that attributes in MLIR will no longer have
types as a first-class concept. This patch lays the groundwork to
incrementally remove or refactor code that relies on generic attributes
being typed. The immediate impact will be on attributes that rely on
`Attribute` containing a type, such as `IntegerAttr`,
`DenseElementsAttr`, and `ml_program::ExternAttr`, which will now need
to define a type parameter on their storage classes. This will save
memory as all other attribute kinds will no longer contain a type.

Moreover, it will not be possible to generically query the type of an
attribute directly. This patch provides an attribute interface
`TypedAttr` that implements only one method, `getType`, which can be
used to generically query the types of attributes that implement the
interface. This interface can be used to retain the concept of a "typed
attribute". The ODS-generated accessor for a `type` parameter
automatically implements this method.

Next steps will be to refactor the assembly formats of certain operations
that rely on `parseAttribute(type)` and `printAttributeWithoutType` to
remove special handling of type elision until `type` can be removed from
the dialect parsing hook entirely; and incrementally remove uses of
`TypedAttr`.

Reviewed By: lattner, rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D130092
2022-07-31 20:01:31 -04:00
Alex Zinenko
08a1b07e7c [mlir] Partially port splitting transform to TilingInterface
The structured op splitting transformation is conceptually similar to
tiling in the sense that it decomposes the iteration space of the
original op into several parts. Therefore, it is possible to implement
it using the TilingInterface to operate on iteration spaces and their
parts. However, the implementation also requires to pass updated input
operands, which is not supported by the interface, so the implementation
currently remains Linalg-specific.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D129564
2022-07-27 08:52:08 +00:00