Commit Graph

143 Commits

Author SHA1 Message Date
Ryan Holt
847a6f8f0a [mlir][MemRef] Add runtime bounds checking (#75817)
This change adds (runtime) bounds checks for `memref` ops using the
existing `RuntimeVerifiableOpInterface`. For `memref.load` and
`memref.store`, we check that the indices are in-bounds of the memref's
index space. For `memref.reinterpret_cast` and `memref.subview` we check
that the resulting address space is in-bounds of the input memref's
address space.
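For illustration, a minimal sketch of the kind of check this produces for a 1-D load, assuming `%m : memref<4xf32>` and `%i : index` (hypothetical names; the exact IR emitted via `RuntimeVerifiableOpInterface` may differ):

```mlir
%c0 = arith.constant 0 : index
%c4 = arith.constant 4 : index
%ge = arith.cmpi sge, %i, %c0 : index
%lt = arith.cmpi slt, %i, %c4 : index
%ok = arith.andi %ge, %lt : i1
// Abort at runtime if the index is outside [0, 4).
cf.assert %ok, "memref.load: index out of bounds"
%v = memref.load %m[%i] : memref<4xf32>
```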
2023-12-22 11:49:15 +09:00
Max191
3a6f02a658 [mlir] Add subbyte emulation support for memref.store. (#73174)
This adds a conversion for narrow type emulation of memref.store ops.
The conversion replaces the memref.store with two memref.atomic_rmw ops.
Atomics are used to prevent race conditions on same-byte accesses, in
the event that two threads are storing into the same byte.

Fixes https://github.com/openxla/iree/issues/15370
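As a rough sketch (hypothetical names; the real pattern computes the byte index, the clearing mask, and the shifted bits from the sub-byte index), the emulated store clears the destination bits and then ORs in the new value:

```mlir
// Assuming %buf : memref<8xi8>, %byte : index, %mask : i8 (zeros at the
// destination nibble), and %bits : i8 (the i4 value shifted into place).
%0 = memref.atomic_rmw andi %mask, %buf[%byte] : (i8, memref<8xi8>) -> i8
%1 = memref.atomic_rmw ori %bits, %buf[%byte] : (i8, memref<8xi8>) -> i8
```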
2023-11-28 11:51:30 -08:00
Max191
b823f8469b [mlir] Add support for memref.alloca sub-byte emulation (#73138)
Adds handling for `memref.alloca` analogous to the existing
`memref.alloc` case in EmulateNarrowTypes.

Fixes https://github.com/openxla/iree/issues/15515
2023-11-27 16:28:22 -08:00
Max191
b29332a318 [mlir] Add narrow type emulation for memref.reinterpret_cast (#73144) 2023-11-27 10:41:14 -08:00
Max191
dae3c44ce6 [mlir] Add vector.store/maskedstore of memref.subview memref alias folding (#72184)
Fixes https://github.com/openxla/iree/issues/15575
2023-11-14 14:24:54 -08:00
Quinn Dawkins
48f980c535 [mlir][memref] Add memref alias folding for masked transfers (#71476)
The contents of a mask on a masked transfer are unaffected by the
particular region of memory being read/stored to, so just forward the
mask in subview folding patterns.
2023-11-07 08:56:54 -05:00
tyb0807
5aa2c65abd [mlir][MemRef] Add subview folding pattern for vector.maskedload (#71380)
This is required for fixing https://github.com/openxla/iree/issues/15031
2023-11-06 20:08:30 +01:00
Matthias Springer
437c62178c [mlir][memref] Remove redundant memref.tensor_store op (#71010)
`bufferization.materialize_in_destination` should be used instead. Both
ops bufferize to a memcpy. This change also conceptually cleans up the
memref dialect a bit: the memref dialect no longer contains ops that
operate on tensor values.
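Roughly, uses are migrated as follows (a sketch with illustrative types; the exact `materialize_in_destination` operands depend on the surrounding bufferization state):

```mlir
// Before (op removed by this change), assuming %t : tensor<16xf32>, %m : memref<16xf32>:
memref.tensor_store %t, %m : memref<16xf32>
```

```mlir
// After:
bufferization.materialize_in_destination %t in writable %m
    : (tensor<16xf32>, memref<16xf32>) -> ()
```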
2023-11-05 12:47:18 +09:00
Jie Fu
c308cb9da6 [mlir] Fix -Wsign-compare in ResolveShapedTypeResultDims.cpp (NFC)
/llvm-project/mlir/lib/Dialect/MemRef/Transforms/ResolveShapedTypeResultDims.cpp:98:19: error: comparison of integers of different signs: 'value_type' (aka 'long long') and 'size_t' (aka 'unsigned long') [-Werror,-Wsign-compare]
    if (*dimIndex >= reifiedResultShapes[resultNumber].size())
        ~~~~~~~~~ ^  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2023-10-31 19:04:37 +08:00
Matthias Springer
6086c272a3 [mlir][memref] Fix out-of-bounds crash when reifying result dims (#70774)
Do not crash when the input IR is invalid, i.e., when the index of the
dimension operand of a `tensor.dim`/`memref.dim` is out-of-bounds. This
fixes #70180.
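For example, invalid IR like the following (illustrative, assuming `%m : memref<?x?xf32>`) used to crash the pass and is now handled gracefully:

```mlir
// The constant dimension index 3 is out of bounds for a rank-2 memref.
%c3 = arith.constant 3 : index
%d = memref.dim %m, %c3 : memref<?x?xf32>
```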
2023-10-31 17:26:56 +09:00
Felix Schneider
f32b3e1caa [mlir][memref] Fix index delinearization for CollapseShapeOp folding (#68833)
The `resolveSourceIndicesCollapseShape` method is used to compute
indices into the source `MemRef` of a `CollapseShapeOp` from the
collapsed indices. This method didn't check for dynamic sizes of the
source shape which led to a crash.

Fix https://github.com/llvm/llvm-project/issues/68483
2023-10-12 07:12:43 +02:00
Kunwar Grover
8f397e04e5 [mlir][memref] Fix emulate narrow types for strided memref offset (#68181)
This patch fixes strided memref offset calculation for emulating narrow
types.

As a side effect, this patch also adds support for 1-D subviews with
static sizes, static offsets, and strides of 1, for testing. The emulate
narrow types pass was not tested on strided memrefs before this patch.
2023-10-06 04:52:33 +05:30
qcolombet
932dc9d8c4 [mlir][MemRef] Add a pattern to simplify `extract_strided_metadata(cast)` (#68291)

`expand-strided-metadata` was missing a pattern to get rid of
`memref.cast`.
The pattern is straightforward:
Produce a new `extract_strided_metadata` with the source of the cast and
fold the static information (sizes, strides, offset) along the way.
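A sketch of the rewrite, assuming `%arg : memref<4x8xf32>` (illustrative types and names):

```mlir
// Before: the metadata is extracted from the result of a cast.
%cast = memref.cast %arg : memref<4x8xf32> to memref<?x?xf32>
%base, %off, %sizes:2, %strides:2 = memref.extract_strided_metadata %cast
    : memref<?x?xf32> -> memref<f32>, index, index, index, index, index

// After: extract from the cast source; the static sizes, strides, and offset fold away.
%base2, %off2, %sizes2:2, %strides2:2 = memref.extract_strided_metadata %arg
    : memref<4x8xf32> -> memref<f32>, index, index, index, index, index
```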
2023-10-05 14:32:42 +02:00
Stella Laurenzo
8d203100e8 Revert "[mlir][memref] Fix offset update in emulating narrow type for strided memref (#67714)"
This reverts commit 35ec6ea644.

Breaks downstream narrow type execution tests.
2023-09-29 18:49:33 -07:00
Kunwar Grover
35ec6ea644 [mlir][memref] Fix offset update in emulating narrow type for strided memref (#67714)
The offset computed when converting types for narrow type emulation did not
account for the offset of strided memrefs. This patch fixes that.
2023-09-29 01:08:43 +05:30
Martin Erhart
65341b09b0 [mlir][bufferization][NFC] Move memref specific implementation of AllocationOpInterface to memref dialect directory (#66637)
Follow-up on #65578
2023-09-20 14:49:52 +02:00
Matthias Springer
9b5ef2bea8 [mlir][Interfaces] LoopLikeOpInterface: Support ops with multiple regions (#66754)
This commit implements `LoopLikeOpInterface` on `scf.while`. This
enables LICM (and potentially other transforms) on `scf.while`.

`LoopLikeOpInterface::getLoopBody()` is renamed to `getLoopRegions` and
can now return multiple regions.

Also fix a bug in the default implementation of
`LoopLikeOpInterface::isDefinedOutsideOfLoop()`, which returned "false"
for some values that are defined outside of the loop (in a nested op, in
such a way that the value does not dominate the loop). This interface is
currently only used for LICM and there is no way to trigger this bug, so
no test is added.
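For instance (illustrative IR, assuming `%c0`, `%c4`, and `%len` are defined above the loop), LICM can now hoist the loop-invariant multiply out of the `do` region of an `scf.while`:

```mlir
%r = scf.while (%i = %c0) : (index) -> index {
  %cond = arith.cmpi slt, %i, %len : index
  scf.condition(%cond) %i : index
} do {
^bb0(%j: index):
  // Loop-invariant: depends only on values defined outside the loop.
  %stride = arith.muli %len, %c4 : index
  %next = arith.addi %j, %stride : index
  scf.yield %next : index
}
```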
2023-09-19 17:35:38 +02:00
Daniil Dudkin
01e80a0f41 [mlir] Add maxnumf and minnumf to AtomicRMWKind (#66442)
This commit adds the mentioned kinds of `AtomicRMWKind`
as well as code generation for them.
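For example (illustrative operands, assuming `%val : f32`, `%buf : memref<16xf32>`, `%i : index`):

```mlir
%m0 = memref.atomic_rmw maxnumf %val, %buf[%i] : (f32, memref<16xf32>) -> f32
%m1 = memref.atomic_rmw minnumf %val, %buf[%i] : (f32, memref<16xf32>) -> f32
```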
2023-09-15 22:41:51 +03:00
Daniil Dudkin
6f4a528698 [mlir][memref] Use dedicated ops in AtomicRMWOpConverter (#66437)
This patch refactors the `AtomicRMWOpConverter` class to use
the dedicated operations from the Arith dialect instead of the
`cmpf` + `select` pattern.
Also, a test for the `minimumf` kind of `atomic_rmw` has been added.
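Sketched on the `minimumf` case (illustrative IR, assuming `%buf : memref<16xf32>`, `%i : index`, `%val : f32`), the converter now emits the dedicated arith op inside the expanded atomic body rather than a `cmpf` + `select` pair:

```mlir
%res = memref.generic_atomic_rmw %buf[%i] : memref<16xf32> {
^bb0(%current : f32):
  %min = arith.minimumf %current, %val : f32
  memref.atomic_yield %min : f32
}
```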
2023-09-15 00:52:35 +03:00
Daniil Dudkin
c46a04339a [mlir][arith] Rename AtomicRMWKind's maxf → maximumf, minf → minimumf (#66135)
This patch is part of a larger initiative aimed at fixing floating-point
`max` and `min` operations in MLIR:
https://discourse.llvm.org/t/rfc-fix-floating-point-max-and-min-operations-in-mlir/72671.

This commit renames `maxf` and `minf` enumerators of `AtomicRMWKind`
to better reflect the current naming scheme and the goals of the RFC.
2023-09-14 01:09:37 +03:00
Oleksandr "Alex" Zinenko
e55e36de7a [mlir] alloc-to-alloca conversion for memref (#65335)
Introduce a simple conversion of a memref.alloc/dealloc pair into an
alloca in the same scope. Expose it as a transform op and a pattern.

Allocas typically lower to stack allocations as opposed to alloc/dealloc
that lower to significantly more expensive malloc/free calls. In
addition, this can be combined with allocation hoisting from loops to
further improve performance.
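The conversion itself is straightforward (a sketch with an illustrative type):

```mlir
// Before: heap allocation paired with a dealloc in the same scope.
%a = memref.alloc() : memref<64xf32>
// ... uses of %a ...
memref.dealloc %a : memref<64xf32>
```

```mlir
// After: a stack allocation; the dealloc disappears.
%a = memref.alloca() : memref<64xf32>
// ... uses of %a ...
```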
2023-09-05 17:58:22 +02:00
Martin Erhart
8037deb7af [mlir][memref] Add pass to expand realloc operations, simplify lowering to LLVM
There are two motivations for this change:
1. It considerably simplifies adding support for the realloc operation to the
   new buffer deallocation pass by lowering the realloc such that no
   deallocation operation is inserted and the deallocation pass itself can
   insert that dealloc
2. The lowering is expressed on a higher level and thus easier to understand,
   and the lowerings of the memref operations it is composed of don't have to
   be duplicated in the MemRefToLLVM lowering (also see discussion in
   https://reviews.llvm.org/D133424)

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D159430
2023-09-05 08:58:40 +00:00
Mikhail Goncharov
0a0aff2d24 fix unused variable warnings in conditionals
warning was updated in 92023b1509
2023-08-30 19:09:27 +02:00
Mahesh Ravishankar
0f8bab8d59 [mlir] Revamp implementation of sub-byte load/store emulation.
When handling sub-byte emulation, the sizes of the converted `memref`s
also need to be updated (this was not done in the current
implementation). This adds the additional complexity of having to
linearize the `memref`s as well. Consider a `memref<3x3xi4>` where the
`i4` elements are packed. This has an overall size of 5 bytes (rounded
up to number of bytes). This can only be represented by a
`memref<5xi8>`. A `memref<3x2xi8>` would imply an implicit padding of
4 bits at the end of each row. So incorporate linearization into the
sub-byte load-store emulation.
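Concretely, for the example above: 3 x 3 elements x 4 bits = 36 bits, which rounds up to 5 bytes, so the emulated type must be the linearized `memref<5xi8>`. A rough sketch of the emulated load (hypothetical names; `%byteIdx` and `%bitOff` are computed from the linearized element index):

```mlir
// Load the containing byte, shift the nibble into the low bits, and truncate.
%byte = memref.load %alloc[%byteIdx] : memref<5xi8>
%shifted = arith.shrui %byte, %bitOff : i8
%elem = arith.trunci %shifted : i8 to i4
```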

This patch also updates some of the utility functions to make better
use of statically available information using `OpFoldResult` and
`makeComposedFoldedAffineApplyOps`.

Reviewed By: hanchung, yzhang93

Differential Revision: https://reviews.llvm.org/D158125
2023-08-17 20:27:53 +00:00
Matthias Springer
a02ad6c177 [mlir][bufferization] Generalize getAliasingOpResults to getAliasingValues
This revision is needed to support bufferization of `cf.br`/`cf.cond_br`. It will also be useful for better analysis of loop ops.

This revision generalizes `getAliasingOpResults` to `getAliasingValues`. An OpOperand can now not only alias with OpResults but also with BlockArguments. In the case of `cf.br` (will be added in a later revision): a `cf.br` operand will alias with the corresponding argument of the destination block.

If an op does not implement the `BufferizableOpInterface`, the analysis is conservative. It previously assumed that an OpOperand may alias with each OpResult. It now assumes that an OpOperand may alias with each OpResult and each BlockArgument of the entry block.

Differential Revision: https://reviews.llvm.org/D157957
2023-08-15 15:02:47 +02:00
Hanhan Wang
f6897c37a2 [mlir][MemRef] Bail out for unsupported cases in FoldMemRefAliasOps pass
The pass uses `computeSuffixProduct` method which only allows static
shapes. This revision adds an early-exit for dynamic cases to avoid
crash.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D157668
2023-08-11 14:52:53 -07:00
Hanhan Wang
8fc433f055 [mlir][MemRef] Move narrow type emulation common methods to MemRefUtils.
It also unifies the computation of StridedLayoutAttr. If the stride is
a statically known value, we can just use it.

Differential Revision: https://reviews.llvm.org/D155017
2023-07-13 14:43:21 -07:00
Matthias Springer
b23c8225e8 [mlir][NFC] Clean up builder usage around constants/non-foldable ops
* Use `create` instead of `createOrFold` for constant ops. Constants cannot be folded any further.
* Use `create` instead of `createOrFold` for ops that do not have a folder.
* Use C++ op builders that take an `int` instead of creating a `ConstantIndexOp`.
* Create `tensor::DimOp` instead of `linalg::createOrFoldDimOp` when it is certain that the operand is a tensor.

Differential Revision: https://reviews.llvm.org/D154196
2023-06-30 13:56:42 +02:00
Kai Sasaki
1fee821d22 [mlir][memref] Make result normalization aware of the number of symbols
Memref normalization fails to recognize the non-zero symbols used in the memref type itself with strided/offset information. This causes a crash with a type like `memref<128x512xf32, strided<[?, ?], offset: ?>>`. The original issue: https://github.com/llvm/llvm-project/issues/61345

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D150250
2023-06-29 10:04:53 +09:00
yzhang93
5a1cdcbd86 [mlir] Narrow bitwidth emulation for MemRef load
This patch adds support for narrow bitwidth storage emulation. The goal is to support sub-byte type
codegen for LLVM CPU. Specifically, a type converter is added to convert memrefs of narrow bitwidth
(e.g., i4) into a supported wider bitwidth (e.g., i8). Another focus of this patch is to populate the
pattern for int4 memref.load; the memref.store pattern should be added in a separate patch.

Reviewed By: hanchung, mravishankar

Differential Revision: https://reviews.llvm.org/D151519
2023-06-26 14:18:30 -07:00
Matthias Springer
4abccd3913 [mlir][memref][transform] Register memref dialect patterns
Differential Revision: https://reviews.llvm.org/D151998
2023-06-05 08:43:39 +02:00
Tres Popp
68f58812e3 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration of uses of the method to the
corresponding function call, as this was decided to be more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This patch updates all remaining uses of the deprecated functionality in
mlir/. This was done with clang-tidy as described below and further
modifications to GPUBase.td and OpenMPOpsInterfaces.td.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc
```

Differential Revision: https://reviews.llvm.org/D151542
2023-05-26 10:29:55 +02:00
Guray Ozen
5ec360c589 [mlir] Enable folding memref alias for vector.load
This work enables the folding memref alias pass for `vector.load`.
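A sketch of the folding with illustrative types (the pass composes the indices with folded `affine.apply` ops):

```mlir
// Before: a vector.load through a subview of %src : memref<64x64xf32>.
%sv = memref.subview %src[%o0, %o1] [16, 16] [1, 1]
    : memref<64x64xf32> to memref<16x16xf32, strided<[64, 1], offset: ?>>
%v0 = vector.load %sv[%i, %j]
    : memref<16x16xf32, strided<[64, 1], offset: ?>>, vector<4xf32>

// After: load directly from %src with the subview offsets folded into the indices.
%i2 = affine.apply affine_map<(d0, d1) -> (d0 + d1)>(%o0, %i)
%j2 = affine.apply affine_map<(d0, d1) -> (d0 + d1)>(%o1, %j)
%v1 = vector.load %src[%i2, %j2] : memref<64x64xf32>, vector<4xf32>
```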

Reviewed By: qcolombet

Differential Revision: https://reviews.llvm.org/D151447
2023-05-25 17:07:20 +02:00
Guray Ozen
46c32afbc5 [mlir] Enable folding memref alias for ldmatrix
The folding mechanism does not recognize the `ldmatrix` op. This change teaches the pass to recognize the op and fold the memref aliases.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D151412
2023-05-25 13:10:17 +02:00
Matthias Springer
61223c49dd [mlir][GPU] Rename MLIRGPUOps CMake target to MLIRGPUDialect
This is for consistency with other dialects.

Differential Revision: https://reviews.llvm.org/D150659
2023-05-16 14:25:08 +02:00
Tres Popp
5550c82189 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration of uses of the method to the
corresponding function call, as this was decided to be more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so I waste less time if it's
reverted, and so the first mass change is more clear as an example to
other teams that will need to follow similar steps.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.
4. Some changes have been deleted for the following reasons:
   - Some files had a variable also named cast
   - Some files had not included a header file that defines the cast
     functions
   - Some files are definitions of the classes that have the casting
     methods, so the code still refers to the method instead of the
     function without adding a prefix or removing the method declaration
     at the same time.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
            mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
            mlir/lib/**/IR/\
            mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
            mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
            mlir/test/lib/Dialect/Test/TestTypes.cpp\
            mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
            mlir/test/lib/Dialect/Test/TestAttributes.cpp\
            mlir/unittests/TableGen/EnumsGenTest.cpp\
            mlir/test/python/lib/PythonTestCAPI.cpp\
            mlir/include/mlir/IR/
```

Differential Revision: https://reviews.llvm.org/D150123
2023-05-12 11:21:25 +02:00
Matthias Springer
7610087056 [mlir][memref] Add helper to make alloca ops independent
Add a helper function that makes dynamic sizes of `memref.alloca` ops independent of a given set of values. This functionality can be used to make dynamic allocations hoistable from loops.

Differential Revision: https://reviews.llvm.org/D149316
2023-05-04 14:18:46 +09:00
Mehdi Amini
5e118f933b Introduce MLIR Op Properties
This new feature enables dedicating custom storage inline within operations.
This storage can be used as an alternative to attributes to store data that is
specific to an operation. Attributes can also be stored inside the properties
storage if desired, but any kind of data can be present as well. This offers
a way to store and mutate data without uniquing in the Context the way Attributes are.
See the OpPropertiesTest.cpp for an example where a struct with a
std::vector<> is attached to an operation and mutated in-place:

struct TestProperties {
  int a = -1;
  float b = -1.;
  std::vector<int64_t> array = {-33};
};

More complex schemes (including reference counting) are also possible.

The only constraint to enable storing a C++ object as "properties" on an
operation is to implement three functions:

- convert from the candidate object to an Attribute
- convert from the Attribute to the candidate object
- hash the object

Optionally, the parsing and printing can also be customized with two extra
functions.

A new option is introduced in ODS to allow dialects to specify:

  let usePropertiesForAttributes = 1;

When set to true, the inherent attributes for all the ops in this dialect
will be using properties instead of being stored alongside discardable
attributes.
The TestDialect showcases this feature.

Another change is that we introduce new APIs on the Operation class
to access separately the inherent attributes from the discardable ones.
We envision deprecating and removing the `getAttr()`, `getAttrsDictionary()`,
and other similar methods which don't make the distinction explicit, leading
to an entirely separate namespace for discardable attributes.

Recommit d572cd1b06 after fixing python bindings build.

Differential Revision: https://reviews.llvm.org/D141742
2023-05-01 23:16:34 -07:00
Mehdi Amini
1e853421a4 Revert "Introduce MLIR Op Properties"
This reverts commit d572cd1b06.

Some bots are broken and investigation is needed before relanding.
2023-05-01 15:55:58 -07:00
Mehdi Amini
d572cd1b06 Introduce MLIR Op Properties
This new feature enables dedicating custom storage inline within operations.
This storage can be used as an alternative to attributes to store data that is
specific to an operation. Attributes can also be stored inside the properties
storage if desired, but any kind of data can be present as well. This offers
a way to store and mutate data without uniquing in the Context the way Attributes are.
See the OpPropertiesTest.cpp for an example where a struct with a
std::vector<> is attached to an operation and mutated in-place:

struct TestProperties {
  int a = -1;
  float b = -1.;
  std::vector<int64_t> array = {-33};
};

More complex schemes (including reference counting) are also possible.

The only constraint to enable storing a C++ object as "properties" on an
operation is to implement three functions:

- convert from the candidate object to an Attribute
- convert from the Attribute to the candidate object
- hash the object

Optionally, the parsing and printing can also be customized with two extra
functions.

A new option is introduced in ODS to allow dialects to specify:

  let usePropertiesForAttributes = 1;

When set to true, the inherent attributes for all the ops in this dialect
will be using properties instead of being stored alongside discardable
attributes.
The TestDialect showcases this feature.

Another change is that we introduce new APIs on the Operation class
to access separately the inherent attributes from the discardable ones.
We envision deprecating and removing the `getAttr()`, `getAttrsDictionary()`,
and other similar methods which don't make the distinction explicit, leading
to an entirely separate namespace for discardable attributes.

Differential Revision: https://reviews.llvm.org/D141742
2023-05-01 15:35:48 -07:00
Rahul Kayaith
6089d612a5 [mlir] Prevent implicit downcasting to interfaces
Currently conversions to interfaces may happen implicitly (e.g.
`Attribute -> TypedAttr`), failing a runtime assert if the interface
isn't actually implemented. This change marks the `Interface(ValueT)`
constructor as explicit so that a cast is required.

Where it was straightforward to do so, I adjusted code to not require casts;
otherwise I just made the casts explicit.

Depends on D148491, D148492

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D148493
2023-04-20 16:31:54 -04:00
Matthias Springer
4c48f016ef [mlir][Affine][NFC] Wrap dialect in "affine" namespace
This cleanup aligns the affine dialect with all the other dialects.

Differential Revision: https://reviews.llvm.org/D148687
2023-04-20 11:19:21 +09:00
Manish Gupta
fc5c1a7676 [mlir][Memref] Fold nvgpu device cp.async on src memref to dst memref
Differential Revision: https://reviews.llvm.org/D148161
2023-04-20 01:09:44 +00:00
Nicolas Vasilache
33468a51db [mlir][Tensor] Add support for insert_slice in FoldTensorSubsetOps
Differential Revision: https://reviews.llvm.org/D148334
2023-04-14 09:34:11 -07:00
Mahesh Ravishankar
0a8d1ee739 [mlir][MemRef] Add pattern to resolve strided metadata of memref.get_global operation.
This change adds patterns to resolve the base pointer, offset, sizes
and strides of the result of a `memref.get_global` operation. Since
the operation can only result in static shaped memrefs, current
resolution kicks in only for non-zero offsets, and identity strides.

Also

- Add a separate `populateResolveExtractStridedMetadata` method that
  adds just the pattern to resolve `<memref op>` ->
  `memref.extract_strided_metadata` operations.
- Refactor the `SubviewFolder` pattern to allow resolving
  `memref.subview` -> `memref.extract_strided_metadata`.

This allows using these patterns for cases where there are already
existing `memref.extract_strided_metadata` operations.
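A sketch of the new resolution for a static global (illustrative; `@cst` is a hypothetical `memref.global`):

```mlir
%g = memref.get_global @cst : memref<4x8xf32>
%base, %off, %sizes:2, %strides:2 = memref.extract_strided_metadata %g
    : memref<4x8xf32> -> memref<f32>, index, index, index, index, index
// With this pattern, %off, %sizes, and %strides resolve to compile-time
// constants derived from the global's static type.
```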

Reviewed By: qcolombet

Differential Revision: https://reviews.llvm.org/D147393
2023-04-03 17:41:35 +00:00
Quentin Colombet
faafd26c4d [mlir][MemRef] Move transform-related functions into Transforms.h
NFC
2023-03-28 15:20:19 +02:00
Quentin Colombet
54cda2ec97 [mlir][MemRef] Add patterns to extract address computations
This patch adds patterns to rewrite memory accesses such that the resulting
accesses are only using a base pointer.
E.g.,
```mlir
memref.load %base[%off0, ...]
```

Will be rewritten in:
```mlir
%new_base = memref.subview %base[%off0,...][1,...][1,...]
memref.load %new_base[%c0,...]
```

The idea behind these patterns is to offer a way to more gradually lower
address computations.

These patterns are the exact opposite of FoldMemRefAliasOps.
I've implemented support for only five operations in this patch:
- memref.load
- memref.store
- nvgpu.ldmatrix
- vector.transfer_read
- vector.transfer_write

Going forward we may want to provide an interface for these rewritings (and
the ones in FoldMemRefAliasOps.)
One step at a time!

Differential Revision: https://reviews.llvm.org/D146724
2023-03-28 13:52:29 +02:00
Nicolas Vasilache
4dc72d47ce [mlir][Tensor] Add a FoldTensorSubsetOps pass and patterns
These patterns follow FoldMemRefAliasOps which is further refactored for reuse.
In the process, fix FoldMemRefAliasOps handling of strides for vector.transfer ops which was previously incorrect.

These opt-in patterns generalize the existing canonicalizations on vector.transfer ops.
In the future the blanket canonicalizations will be retired.
They are kept for now to minimize porting disruptions.

Differential Revision: https://reviews.llvm.org/D146624
2023-03-23 04:03:27 -07:00
Nicolas Vasilache
829446cb45 [mlir][memref] Use folded composed affine apply ops in FoldMemRefAliasOps
Creating maximally folded and composed affine.apply operations during
FoldMemRefAliasOps composes better with other transformations, without having
to interleave canonicalization passes.

Differential Revision: https://reviews.llvm.org/D146515
2023-03-21 22:17:36 -07:00
Lei Zhang
59e4fbfcd0 [mlir][memref] Fold subview into GPU subgroup MMA load/store ops
This commits adds support for folding subview into GPU subgroup
MMA load/store ops.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D146150
2023-03-15 17:49:32 +00:00