Commit Graph

117 Commits

Author SHA1 Message Date
Hanhan Wang
8fc433f055 [mlir][MemRef] Move narrow type emulation common methods to MemRefUtils.
It also unifies the computation of StridedLayoutAttr: if the stride is a
statically known value, we can just use it directly.

Differential Revision: https://reviews.llvm.org/D155017
2023-07-13 14:43:21 -07:00
Matthias Springer
b23c8225e8 [mlir][NFC] Clean up builder usage around constants/non-foldable ops
* Use `create` instead of `createOrFold` for constant ops. Constants cannot be folded any further.
* Use `create` instead of `createOrFold` for ops that do not have a folder.
* Use C++ op builders that take an `int` instead of creating a `ConstantIndexOp`.
* Create `tensor::DimOp` instead of `linalg::createOrFoldDimOp` when it is certain that the operand is a tensor.

Differential Revision: https://reviews.llvm.org/D154196
2023-06-30 13:56:42 +02:00
Kai Sasaki
1fee821d22 [mlir][memref] Make result normalization aware of the number of symbols
Memref normalization fails to recognize non-zero symbols used in the memref type itself through its strided/offset information. This causes a crash with types like `memref<128x512xf32, strided<[?, ?], offset: ?>>`. The original issue is https://github.com/llvm/llvm-project/issues/61345

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D150250
2023-06-29 10:04:53 +09:00
yzhang93
5a1cdcbd86 [mlir] Narrow bitwidth emulation for MemRef load
This patch adds support for narrow bitwidth storage emulation. The goal is to support sub-byte type
codegen for LLVM CPU. Specifically, a type converter is added to convert memrefs of narrow bitwidths
(e.g., i4) into a supported wider bitwidth (e.g., i8). Another focus of this patch is to populate the
pattern for int4 memref.load (see the sketch after this entry); the memref.store pattern should be added in a separate patch.

Reviewed By: hanchung, mravishankar

Differential Revision: https://reviews.llvm.org/D151519
2023-06-26 14:18:30 -07:00
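For illustration, a minimal sketch of the load emulation described above, assuming a packed byte-aligned i4 layout; the SSA names and the elided index math (`%byteIdx = %i / 2`, `%shiftAmt = (%i % 2) * 4`) are illustrative, not taken from the patch:

```mlir
// Original load from a sub-byte memref.
%v = memref.load %src[%i] : memref<8xi4>

// After emulation (sketch): the storage is widened to i8 and the i4
// value is carved out of its containing byte with shift + truncate.
%byte = memref.load %srcAsI8[%byteIdx] : memref<4xi8>
%shifted = arith.shrui %byte, %shiftAmt : i8
%v4 = arith.trunci %shifted : i8 to i4
```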
Matthias Springer
4abccd3913 [mlir][memref][transform] Register memref dialect patterns
Differential Revision: https://reviews.llvm.org/D151998
2023-06-05 08:43:39 +02:00
Tres Popp
68f58812e3 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
infrastructure, in addition to defining methods with the same name.
This change continues the migration of uses of the methods to the
corresponding function calls, which have been agreed upon as more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This patch updates all remaining uses of the deprecated functionality in
mlir/. This was done with clang-tidy as described below and further
modifications to GPUBase.td and OpenMPOpsInterfaces.td.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc
```

Differential Revision: https://reviews.llvm.org/D151542
2023-05-26 10:29:55 +02:00
Guray Ozen
5ec360c589 [mlir] Enable folding memref alias for vector.load
This work enables the folding of memref aliases for `vector.load`.

Reviewed By: qcolombet

Differential Revision: https://reviews.llvm.org/D151447
2023-05-25 17:07:20 +02:00
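As a hedged illustration of this fold (types, names, and the exact affine map are illustrative, not taken from the patch), a `vector.load` through a subview is rewritten to load directly from the source memref with a composed index:

```mlir
// Before: load through an aliasing subview.
%sv = memref.subview %m[%off] [16] [1]
    : memref<64xf32> to memref<16xf32, strided<[1], offset: ?>>
%v = vector.load %sv[%i]
    : memref<16xf32, strided<[1], offset: ?>>, vector<4xf32>

// After: the subview is folded away and its offset composed into the index.
%idx = affine.apply affine_map<(d0, d1) -> (d0 + d1)>(%off, %i)
%v = vector.load %m[%idx] : memref<64xf32>, vector<4xf32>
```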
Guray Ozen
46c32afbc5 [mlir] Enable folding memref alias for ldmatrix
The folding mechanism does not recognize the `ldmatrix` op. This change teaches the pass to recognize the op and fold its memref aliases.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D151412
2023-05-25 13:10:17 +02:00
Matthias Springer
61223c49dd [mlir][GPU] Rename MLIRGPUOps CMake target to MLIRGPUDialect
This is for consistency with other dialects.

Differential Revision: https://reviews.llvm.org/D150659
2023-05-16 14:25:08 +02:00
Tres Popp
5550c82189 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
infrastructure, in addition to defining methods with the same name.
This change begins the migration of uses of the methods to the
corresponding function calls, which have been agreed upon as more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so I waste less time if it's
reverted, and so the first mass change is more clear as an example to
other teams that will need to follow similar steps.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.
4. Some changes have been deleted for the following reasons:
   - Some files had a variable also named cast
   - Some files had not included a header file that defines the cast
     functions
   - Some files are definitions of the classes that have the casting
     methods, so the code still refers to the method instead of the
     function without adding a prefix or removing the method declaration
     at the same time.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
            mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
            mlir/lib/**/IR/\
            mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
            mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
            mlir/test/lib/Dialect/Test/TestTypes.cpp\
            mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
            mlir/test/lib/Dialect/Test/TestAttributes.cpp\
            mlir/unittests/TableGen/EnumsGenTest.cpp\
            mlir/test/python/lib/PythonTestCAPI.cpp\
            mlir/include/mlir/IR/
```

Differential Revision: https://reviews.llvm.org/D150123
2023-05-12 11:21:25 +02:00
Matthias Springer
7610087056 [mlir][memref] Add helper to make alloca ops independent
Add a helper function that makes dynamic sizes of `memref.alloca` ops independent of a given set of values. This functionality can be used to make dynamic allocations hoistable from loops.

Differential Revision: https://reviews.llvm.org/D149316
2023-05-04 14:18:46 +09:00
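A minimal sketch of the idea, assuming the helper chooses a constant upper bound for the size (the bound, names, and loop are illustrative):

```mlir
// Before: the alloca size depends on the induction variable %i,
// so the allocation cannot be hoisted out of the loop.
scf.for %i = %c1 to %c128 step %c1 {
  %a = memref.alloca(%i) : memref<?xf32>
  // ...
}

// After (sketch): allocate an upper bound that is independent of %i
// and take a view of the original size; the alloca is now hoistable.
scf.for %i = %c1 to %c128 step %c1 {
  %big = memref.alloca() : memref<128xf32>
  %a = memref.subview %big[0] [%i] [1]
      : memref<128xf32> to memref<?xf32, strided<[1]>>
  // ...
}
```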
Mehdi Amini
5e118f933b Introduce MLIR Op Properties
This new feature enables dedicating custom storage inline within operations.
This storage can be used as an alternative to attributes to store data that is
specific to an operation. Attributes can also be stored inside the properties
storage if desired, but any kind of data can be present as well. This offers
a way to store and mutate data without uniquing in the Context like Attributes.
See the OpPropertiesTest.cpp for an example where a struct with a
std::vector<> is attached to an operation and mutated in-place:

struct TestProperties {
  int a = -1;
  float b = -1.;
  std::vector<int64_t> array = {-33};
};

More complex schemes (including reference counting) are also possible.

The only constraint to enable storing a C++ object as "properties" on an
operation is to implement three functions:

- convert from the candidate object to an Attribute
- convert from the Attribute to the candidate object
- hash the object

Optionally, the parsing and printing can also be customized with two extra
functions.

A new option is introduced in ODS to allow dialects to specify:

  let usePropertiesForAttributes = 1;

When set to true, the inherent attributes for all the ops in this dialect
will use properties instead of being stored alongside discardable
attributes.
The TestDialect showcases this feature.

Another change is that we introduce new APIs on the Operation class
to access separately the inherent attributes from the discardable ones.
We envision deprecating and removing the `getAttr()`, `getAttrsDictionary()`,
and other similar methods which don't make the distinction explicit, leading
to an entirely separate namespace for discardable attributes.

Recommit d572cd1b06 after fixing python bindings build.

Differential Revision: https://reviews.llvm.org/D141742
2023-05-01 23:16:34 -07:00
Mehdi Amini
1e853421a4 Revert "Introduce MLIR Op Properties"
This reverts commit d572cd1b06.

Some bots are broken and investigation is needed before relanding.
2023-05-01 15:55:58 -07:00
Mehdi Amini
d572cd1b06 Introduce MLIR Op Properties
This new feature enables dedicating custom storage inline within operations.
This storage can be used as an alternative to attributes to store data that is
specific to an operation. Attributes can also be stored inside the properties
storage if desired, but any kind of data can be present as well. This offers
a way to store and mutate data without uniquing in the Context like Attributes.
See the OpPropertiesTest.cpp for an example where a struct with a
std::vector<> is attached to an operation and mutated in-place:

struct TestProperties {
  int a = -1;
  float b = -1.;
  std::vector<int64_t> array = {-33};
};

More complex schemes (including reference counting) are also possible.

The only constraint to enable storing a C++ object as "properties" on an
operation is to implement three functions:

- convert from the candidate object to an Attribute
- convert from the Attribute to the candidate object
- hash the object

Optionally, the parsing and printing can also be customized with two extra
functions.

A new option is introduced in ODS to allow dialects to specify:

  let usePropertiesForAttributes = 1;

When set to true, the inherent attributes for all the ops in this dialect
will use properties instead of being stored alongside discardable
attributes.
The TestDialect showcases this feature.

Another change is that we introduce new APIs on the Operation class
to access separately the inherent attributes from the discardable ones.
We envision deprecating and removing the `getAttr()`, `getAttrsDictionary()`,
and other similar methods which don't make the distinction explicit, leading
to an entirely separate namespace for discardable attributes.

Differential Revision: https://reviews.llvm.org/D141742
2023-05-01 15:35:48 -07:00
Rahul Kayaith
6089d612a5 [mlir] Prevent implicit downcasting to interfaces
Currently conversions to interfaces may happen implicitly (e.g.
`Attribute -> TypedAttr`), failing a runtime assert if the interface
isn't actually implemented. This change marks the `Interface(ValueT)`
constructor as explicit so that a cast is required.

Where it was straightforward to do so, I adjusted the code to not require
casts; otherwise I just made them explicit.

Depends on D148491, D148492

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D148493
2023-04-20 16:31:54 -04:00
Matthias Springer
4c48f016ef [mlir][Affine][NFC] Wrap dialect in "affine" namespace
This cleanup aligns the affine dialect with all the other dialects.

Differential Revision: https://reviews.llvm.org/D148687
2023-04-20 11:19:21 +09:00
Manish Gupta
fc5c1a7676 [mlir][Memref] Fold nvgpu device cp.async on src memref to dst memref
Differential Revision: https://reviews.llvm.org/D148161
2023-04-20 01:09:44 +00:00
Nicolas Vasilache
33468a51db [mlir][Tensor] Add support for insert_slice in FoldTensorSubsetOps
Differential Revision: https://reviews.llvm.org/D148334
2023-04-14 09:34:11 -07:00
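A plausible before/after for the added `insert_slice` support (shapes and names are illustrative, not from the patch): a `vector.transfer_write` whose result is immediately inserted into a larger tensor can instead write into the destination directly:

```mlir
// Before: write into a small tensor, then insert it as a slice.
%w = vector.transfer_write %v, %src[%c0, %c0] {in_bounds = [true, true]}
    : vector<4x4xf32>, tensor<4x4xf32>
%r = tensor.insert_slice %w into %dst[%a, %b] [4, 4] [1, 1]
    : tensor<4x4xf32> into tensor<8x8xf32>

// After: the insert_slice is folded into the transfer_write.
%r = vector.transfer_write %v, %dst[%a, %b] {in_bounds = [true, true]}
    : vector<4x4xf32>, tensor<8x8xf32>
```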
Mahesh Ravishankar
0a8d1ee739 [mlir][MemRef] Add pattern to resolve strided metadata of memref.get_global operation.
This change adds patterns to resolve the base pointer, offset, sizes
and strides of the result of a `memref.get_global` operation. Since
the operation can only result in statically shaped memrefs, the current
resolution kicks in only for zero offsets and identity strides.

Also

- Add a separate `populateResolveExtractStridedMetadata` method that
  adds just the pattern to resolve `<memref op>` ->
  `memref.extract_strided_metadata` operations.
- Refactor the `SubviewFolder` pattern to allow resolving
  `memref.subview` -> `memref.extract_strided_metadata`.

This allows using these patterns for cases where there are already
existing `memref.extract_strided_metadata` operations.

Reviewed By: qcolombet

Differential Revision: https://reviews.llvm.org/D147393
2023-04-03 17:41:35 +00:00
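A hedged sketch of the resolution (the global symbol, names, and exact result syntax are illustrative): since `memref.get_global` yields a statically shaped memref with identity layout, all of the metadata reduces to constants.

```mlir
%m = memref.get_global @cst : memref<2x3xf32>
%base, %off, %sz:2, %str:2 = memref.extract_strided_metadata %m
    : memref<2x3xf32> -> memref<f32>, index, index, index, index, index

// Resolves (sketch) to:
//   %off   -> arith.constant 0 : index
//   %sz#0  -> arith.constant 2,  %sz#1  -> arith.constant 3
//   %str#0 -> arith.constant 3,  %str#1 -> arith.constant 1
```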
Quentin Colombet
faafd26c4d [mlir][MemRef] Move transform related functions in Transforms.h
NFC
2023-03-28 15:20:19 +02:00
Quentin Colombet
54cda2ec97 [mlir][MemRef] Add patterns to extract address computations
This patch adds patterns to rewrite memory accesses such that the resulting
accesses are only using a base pointer.
E.g.,
```mlir
memref.load %base[%off0, ...]
```

Will be rewritten into:
```mlir
%new_base = memref.subview %base[%off0,...][1,...][1,...]
memref.load %new_base[%c0,...]
```

The idea behind these patterns is to offer a way to more gradually lower
address computations.

These patterns are the exact opposite of FoldMemRefAliasOps.
I've implemented the support of only five operations in this patch:
- memref.load
- memref.store
- nvgpu.ldmatrix
- vector.transfer_read
- vector.transfer_write

Going forward we may want to provide an interface for these rewritings (and
the ones in FoldMemRefAliasOps.)
One step at a time!

Differential Revision: https://reviews.llvm.org/D146724
2023-03-28 13:52:29 +02:00
Nicolas Vasilache
4dc72d47ce [mlir][Tensor] Add a FoldTensorSubsetOps pass and patterns
These patterns follow FoldMemRefAliasOps, which is further refactored for reuse.
In the process, this fixes FoldMemRefAliasOps' handling of strides for vector.transfer ops, which was previously incorrect.

These opt-in patterns generalize the existing canonicalizations on vector.transfer ops.
In the future the blanket canonicalizations will be retired.
They are kept for now to minimize porting disruptions.

Differential Revision: https://reviews.llvm.org/D146624
2023-03-23 04:03:27 -07:00
Nicolas Vasilache
829446cb45 [mlir][memref] Use folded composed affine apply ops in FoldMemRefAliasOps
Creating maximally folded and composed affine.apply operations during
FoldMemRefAliasOps composes better with other transformations, without having
to interleave canonicalization passes.

Differential Revision: https://reviews.llvm.org/D146515
2023-03-21 22:17:36 -07:00
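For illustration (a sketch; the maps and names are made up), when folding a load through a subview, the pass now emits a single maximally composed `affine.apply` per index rather than a chain of intermediate applies that would only simplify after canonicalization:

```mlir
// Folding memref.load(memref.subview %m[%o0, %o1] ...) now directly yields:
%i0 = affine.apply affine_map<(d0, d1) -> (d0 + d1)>(%o0, %i)
%i1 = affine.apply affine_map<(d0, d1) -> (d0 + d1)>(%o1, %j)
%v = memref.load %m[%i0, %i1] : memref<32x32xf32>
```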
Lei Zhang
59e4fbfcd0 [mlir][memref] Fold subview into GPU subgroup MMA load/store ops
This commits adds support for folding subview into GPU subgroup
MMA load/store ops.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D146150
2023-03-15 17:49:32 +00:00
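A hedged sketch of the fold (the memref types, indices, and `leadDimension` value are illustrative):

```mlir
// Before: MMA load through a subview.
%sv = memref.subview %m[%x, %y] [16, 16] [1, 1]
    : memref<64x64xf16> to memref<16x16xf16, strided<[64, 1], offset: ?>>
%a = gpu.subgroup_mma_load_matrix %sv[%c0, %c0] {leadDimension = 64 : index}
    : memref<16x16xf16, strided<[64, 1], offset: ?>> -> !gpu.mma_matrix<16x16xf16, "AOp">

// After: load directly from the source memref with folded indices.
%a = gpu.subgroup_mma_load_matrix %m[%x, %y] {leadDimension = 64 : index}
    : memref<64x64xf16> -> !gpu.mma_matrix<16x16xf16, "AOp">
```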
Nicolas Vasilache
203fad476b [mlir][DialectUtils] Cleanup IndexingUtils and provide more affine variants while reusing implementations
Differential Revision: https://reviews.llvm.org/D145784
2023-03-14 03:44:59 -07:00
Matthias Springer
758329dc7c [mlir][NFC] reifyResultShapes: Add extra error checking
This change adds a new helper function `mlir::reifyResultShapes` that calls the corresponding interface method and also checks the result produced by the implementation when running in debug mode. Bugs due to incorrect interface implementations can be difficult to debug.

This helper function also reduces the amount of code needed at call sites: the cast to `ReifyRankedShapedTypeOpInterface` is done in the helper function.

Differential Revision: https://reviews.llvm.org/D145777
2023-03-10 11:37:54 +01:00
Matthias Springer
2a5b13e722 [mlir][Interfaces] ReifyRankedShapedTypeOpInterface returns OpFoldResults
`reifyResultShapes` now returns `OpFoldResult`s instead of `Value`s. This is often more efficient because many transformations immediately attempt to extract a constant from the reified values.

Differential Revision: https://reviews.llvm.org/D145250
2023-03-06 08:41:28 +01:00
Nicolas Vasilache
c888a0ce88 [mlir][MemRef] Rewrite multi-buffering with proper composable abstractions
Rewrite and document multi-buffering properly:
1. Use IndexingUtils / StaticValueUtils instead of duplicating functionality.
2. Properly plumb RewriterBase through.
3. Add support for DeallocOp.
4. Better debug messages.

This revision is otherwise almost NFC, if it weren't for the extra DeallocOp
support; previously, a DeallocOp would make multi-buffering fail.

Depends on: D145036

Differential Revision: https://reviews.llvm.org/D145055
2023-03-01 07:25:31 -08:00
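For context, a hedged sketch of the multi-buffering transformation itself (factor 2; names, shapes, and the exact index computation are illustrative): a buffer reused across loop iterations is expanded so that consecutive iterations use alternating copies.

```mlir
// Before: the same buffer is written and read in every iteration.
%buf = memref.alloc() : memref<16xf32>
scf.for %i = %c0 to %c8 step %c1 {
  // ... write %buf, then read %buf ...
}

// After multi-buffering by 2 (sketch): alternate between two copies,
// breaking the inter-iteration dependence on the same memory.
%bufs = memref.alloc() : memref<2x16xf32>
scf.for %i = %c0 to %c8 step %c1 {
  %slot = affine.apply affine_map<(d0) -> (d0 mod 2)>(%i)
  %buf = memref.subview %bufs[%slot, 0] [1, 16] [1, 1]
      : memref<2x16xf32> to memref<16xf32, strided<[1], offset: ?>>
  // ... write %buf, then read %buf ...
}
```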
Thomas Raoux
ddeb55ab33 [mlir] add option to multi-buffering
Allow the user to apply the multi-buffering transformation in cases where
proving that there is no loop-carried dependency is not trivial. In this case,
the user needs to ensure that the data is written and read in the same
iteration; otherwise the result is incorrect.

Differential Revision: https://reviews.llvm.org/D144227
2023-02-17 15:34:51 +00:00
Nicolas Vasilache
ec640382fc [mlir][MemRef] NFC - Add debug information to MultiBuffer.cpp 2023-02-16 07:19:58 -08:00
Matthias Springer
c645eb0d03 [mlir][memref] Bufferize memref.tensor_store op
This change adds the BufferizableOpInterface implementation for memref.tensor_store.

Differential Revision: https://reviews.llvm.org/D144080
2023-02-15 15:26:57 +01:00
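A hedged sketch of the resulting bufferization (the exact ops emitted may differ; the `bufferization.to_memref` step is an assumption):

```mlir
// Before bufferization:
memref.tensor_store %t, %m : memref<4xf32>

// After (sketch): materialize the tensor's buffer and copy it.
%src = bufferization.to_memref %t : memref<4xf32>
memref.copy %src, %m : memref<4xf32> to memref<4xf32>
```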
Guray Ozen
1cb91b421e [mlir] Add nontemporal field to memref.load/store and convey to llvm.load/store
The `llvm.load` op has a nontemporal field which is missing from `memref.load` and `memref.store`. This revision first adds a nontemporal field to memref's load/store ops, then lowers the field to the llvm.load/store ops.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D142616
2023-02-03 14:03:38 +01:00
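A hedged example of the new field and its lowering (the attribute spelling and the `llvm.load` form are a sketch and may not match the exact syntax):

```mlir
// memref-level load marked nontemporal (attribute shown in the attr-dict).
%v = memref.load %m[%i] {nontemporal = true} : memref<16xf32>

// Lowers to an LLVM dialect load carrying the flag, which in turn maps
// to LLVM IR's !nontemporal metadata.
%v2 = llvm.load %ptr {nontemporal} : !llvm.ptr<f32>
```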
Matthias Springer
a2b837ab04 [mlir] GreedyPatternRewriteDriver: Entry point takes single region
The rewrite driver is typically applied to a single region or all regions of the same op. There is no longer an overload to apply the rewrite driver to a list of regions.

This simplifies the rewrite driver implementation because the scope is now a single region as opposed to a list of regions.

Note: This change is not NFC because `config.maxIterations` and `config.maxNumRewrites` are now counted for each region separately. Furthermore, worklist filtering (`scope`) is now applied to each region separately.

Differential Revision: https://reviews.llvm.org/D142611
2023-01-27 11:23:04 +01:00
Kazu Hirata
0a81ace004 [mlir] Use std::optional instead of llvm::Optional (NFC)
This patch replaces (llvm::|)Optional< with std::optional<.  I'll post
a separate patch to remove #include "llvm/ADT/Optional.h".

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2023-01-14 01:25:58 -08:00
Kazu Hirata
a1fe1f5f77 [mlir] Add #include <optional> (NFC)
This patch adds #include <optional> to those files containing
llvm::Optional<...> or Optional<...>.

I'll post a separate patch to actually replace llvm::Optional with
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2023-01-13 21:05:06 -08:00
Matthias Springer
5eee80ce5e [mlir][memref] Add runtime verification for memref::CastOp
Verify unranked -> ranked casts and casts of dynamic sizes/offset/strides to static ones.

Differential Revision: https://reviews.llvm.org/D138671
2023-01-06 14:38:56 +01:00
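Conceptually (a sketch, not the pass's literal output), the generated check for a dynamic-to-static cast looks like:

```mlir
// The op under verification claims a static size for a dynamic dim.
%1 = memref.cast %0 : memref<?xf32> to memref<16xf32>

// Sketch of the inserted runtime check:
%dim = memref.dim %0, %c0 : memref<?xf32>
%ok = arith.cmpi eq, %dim, %c16 : index
cf.assert %ok, "memref.cast: size mismatch of dim 0"
```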
Adrian Kuegel
70423eedec [mlir][MemRef] Apply ClangTidy performance fix (NFC). 2023-01-02 08:41:31 +01:00
Kazu Hirata
02f4cfa33d [mlir] Fix a warning
This patch fixes:

  mlir/lib/Dialect/MemRef/Transforms/RuntimeOpVerification.cpp:33:12:
  error: variable 'foundDynamicDim' set but not used
  [-Werror,-Wunused-but-set-variable]
2022-12-21 16:15:09 -08:00
Matthias Springer
108b08f2a9 [mlir] Add RuntimeVerifiableOpInterface and transform
Static op verification cannot detect cases where an op is valid at compile time but may be invalid at runtime.

An example of such an op is `memref::ExpandShapeOp`.

Invalid at compile time: `memref.expand_shape %m [[0, 1]] : memref<11xf32> into memref<2x5xf32>`

Valid at compile time (because we do not know any better): `memref.expand_shape %m [[0, 1]] : memref<?xf32> into memref<?x5xf32>`. This op may or may not be valid at runtime depending on the runtime shape of `%m`.

Invalid runtime ops such as the one above are hard to debug because they can crash the program execution at a seemingly unrelated position or (even worse) compute an invalid result without crashing.

This revision adds a new op interface `RuntimeVerifiableOpInterface` that can be implemented by ops that provide additional runtime verification. Such runtime verification can be computationally expensive, so it is only generated on an opt-in basis by running `-generate-runtime-verification`. A simple runtime verifier for `memref::ExpandShapeOp` is provided as an example.

Differential Revision: https://reviews.llvm.org/D138576
2022-12-21 10:57:14 +01:00
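For the dynamic `memref.expand_shape` example above, the inserted verification might conceptually check that the runtime size divides evenly (a sketch, not the literal generated IR):

```mlir
// %m : memref<?xf32>, expanded into memref<?x5xf32>: the runtime size
// of %m must be divisible by the static factor 5.
%dim = memref.dim %m, %c0 : memref<?xf32>
%rem = arith.remsi %dim, %c5 : index
%ok = arith.cmpi eq, %rem, %c0 : index
cf.assert %ok, "memref.expand_shape: source size must be a multiple of 5"
```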
Ramkumar Ramachandra
e8bcc37fff mlir/{SPIRV,Bufferization}: use std::optional in .td files (NFC)
This is part of an effort to migrate from llvm::Optional to
std::optional. 22426110c5 changed the way mlir-tblgen generates .inc
files, emitting std::optional when an Optional attribute is specified in
a .td file. It also changed several .td files hard-coding llvm::Optional
to use std::optional. However, the patch excluded a few .td files in
SPIRV and Bufferization hard-coding llvm::Optional. This patch fixes
that defect, and after this patch, references to llvm::Optional in .cpp
and .h files can be replaced mechanically.

See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>

Differential Revision: https://reviews.llvm.org/D140329
2022-12-20 09:23:58 +01:00
Ramkumar Ramachandra
0de16fafa5 mlir/DialectConversion: use std::optional (NFC)
This is part of an effort to migrate from llvm::Optional to
std::optional. This patch touches DialectConversion, and modifies
existing conversions and tests appropriately.

See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>

Differential Revision: https://reviews.llvm.org/D140303
2022-12-19 18:48:59 +01:00
Ramkumar Ramachandra
22426110c5 mlir/tblgen: use std::optional in generation
This is part of an effort to migrate from llvm::Optional to
std::optional. This patch changes the way mlir-tblgen generates .inc
files, and modifies tests and documentation appropriately. It is a "no
compromises" patch, and doesn't leave the user with an unpleasant mix of
llvm::Optional and std::optional.

A non-trivial change has been made to ControlFlowInterfaces to split one
constructor into two, relating to a build failure on Windows.

See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>

Differential Revision: https://reviews.llvm.org/D138934
2022-12-17 11:13:26 +01:00
Matthias Springer
ccb8a4e3f3 [mlir][memref] Fold subview(subview(x))
Folding of rank-reduced subviews is also supported.

Differential Revision: https://reviews.llvm.org/D140110
2022-12-15 17:50:12 +01:00
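An illustrative instance of the fold, with offsets composed and the layouts written out (values chosen for the example):

```mlir
// Before: two chained subviews.
%0 = memref.subview %m[4, 0] [8, 16] [1, 1]
    : memref<16x16xf32> to memref<8x16xf32, strided<[16, 1], offset: 64>>
%1 = memref.subview %0[2, 4] [4, 4] [1, 1]
    : memref<8x16xf32, strided<[16, 1], offset: 64>>
      to memref<4x4xf32, strided<[16, 1], offset: 100>>

// After: a single subview with composed offsets (4+2, 0+4).
%1 = memref.subview %m[6, 4] [4, 4] [1, 1]
    : memref<16x16xf32> to memref<4x4xf32, strided<[16, 1], offset: 100>>
```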
Quentin Colombet
64f99842a6 [mlir][ExpandStridedMetadata] Handle collapse_shape of dim of size 1 gracefully
Collapsing dimensions of size 1 with arbitrary strides (i.e.,
non-contiguous w.r.t. the collapsed dimensions) is a grey area that we'd
like to clean up. (See https://reviews.llvm.org/D136483#3909856)

That said, the implementation in `memref-to-llvm` currently skips
dimensions of size 1 when computing the stride of a group.

While longer term we may want to clean that up, for now this patch matches
that behavior, at least in the static case.

For the dynamic case, this patch sticks to `min(group strides)`.
However, if we want to handle the dynamic cases correctly while allowing
non-truly-contiguous dynamic size of 1, we would need to `if-then-else`
every dynamic size. In other words `min(stride_i, for all i in group and
dim_i != 1)`.

I didn't implement that in this patch since `memref-to-llvm` is technically
broken in the general case here. (It currently only produces something
sensible for row-major tensors.)

Differential Revision: https://reviews.llvm.org/D139329
2022-12-08 07:32:01 +00:00
Quentin Colombet
9cbd136db4 [mlir][NFC] Add a new getStridesAndOffset function
The new function is a wrapper around the regular `getStridesAndOffset`
that offers a more compact way (as in writing less code) of getting the
relevant information.

This method is intended to be used only when it is known that the
LogicalResult of the regular `getStridesAndOffset` must be "succeeded".

This wrapper will assert on that.

Differential Revision: https://reviews.llvm.org/D139529
2022-12-07 13:58:28 +00:00
Kazu Hirata
1a36588ec6 [mlir] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-03 18:50:27 -08:00
Lorenzo Chelini
a9733b8a5e [MLIR] Adopt DenseI64ArrayAttr in tensor, memref and linalg transform
This commit is a first step toward removing inconsistencies between dynamic
and static attributes (i64 v. index) by dropping `I64ArrayAttr` and
using `DenseI64ArrayAttr` in Tensor, Memref and Linalg Transform ops.
In Linalg Transform ops only `TileToScfForOp` and `TileOp` have been updated.

See related discussion: https://discourse.llvm.org/t/rfc-inconsistency-between-dynamic-and-static-attributes-i64-v-index/66612/1

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D138567
2022-11-25 09:43:30 +01:00
Quentin Colombet
8b97b4e7ee [mlir][MemRef] NFC rename simplify-extract-strided-metadata
This pass has outgrown its original goal and is now going to be used to
expand certain memref operations before lowering.
Reflect that in the name.

The pass is now called expand-strided-metadata.

NFC

Differential Revision: https://reviews.llvm.org/D138448
2022-11-21 22:43:15 +00:00
Aliia Khasanova
399638f98c Merge kDynamicSize and kDynamicSentinel into one constant.

Differential Revision: https://reviews.llvm.org/D138282
2022-11-21 13:01:26 +00:00
Quentin Colombet
d665448a7f [mlir][MemRef] Change the anchor point of a reshapeLikeOp pattern
Essentially, this patch changes the anchor point of the
`extract_strided_metadata(reshapeLikeOp)` pattern from
`extract_strided_metadata` to `reshapeLikeOp`.

In detail, this means that instead of replacing:
```
base, offset, sizes, strides =
  extract_strided_metadata(reshapeLikeOp(src))
```
With
```
base, offset = extract_strided_metadata(src)
sizes = <some math>
strides = <some math>
```

We replace only the reshapeLikeOp part and connect it back with a
reinterpret_cast:
```
val = reshapeLikeOp(src)
```
=>
```
base, offset, ... = extract_strided_metadata(src)
sizes = <some math>
strides = <some math>
val = reinterpret_cast base, offset, sizes, strides
```

Differential Revision: https://reviews.llvm.org/D136386
2022-11-14 18:56:35 +00:00