Commit Graph

143 Commits

Tres Popp
5550c82189 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
machinery, in addition to defining methods with the same names.
This change begins the migration from the method calls to the
corresponding free-function calls, which has been decided on as the
more consistent style.

Note that some classes, such as AffineExpr, still only define the
methods directly; this change does not include the work needed to
support free-function cast/isa calls for them.
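
For illustration, a minimal before/after sketch of the style change;
the helper `isWideInt` is hypothetical, not from the patch:

```
#include "mlir/IR/BuiltinTypes.h"

// Hypothetical helper showing the migration; not part of the patch.
static bool isWideInt(mlir::Type ty) {
  // Before (method style, now deprecated):
  //   auto intTy = ty.dyn_cast<mlir::IntegerType>();
  // After (free function, per the deprecation page):
  auto intTy = mlir::dyn_cast<mlir::IntegerType>(ty);
  return intTy && intTy.getWidth() >= 32;
}
```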

Caveats include:
- The clang-tidy check probably has more problems.
- This only touches C++ code, so generated code (e.g., .inc files) is
  untouched.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This first patch was created with the following steps. The intention is
to make only automated changes at first, so I waste less time if it's
reverted, and so the first mass change is a clearer example to other
teams that will need to follow similar steps.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over the entire codebase with all checks disabled
   except the one relevant check. Also run it on all header files.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.
4. Some changes were dropped for the following reasons:
   - Some files had a variable also named cast.
   - Some files had not included a header file that defines the cast
     functions.
   - Some files define the classes that provide the casting methods;
     there, an unqualified call still resolves to the method rather than
     the free function, so those call sites cannot be migrated without
     also adding a namespace prefix or removing the method declarations
     at the same time.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
            mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
            mlir/lib/**/IR/\
            mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
            mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
            mlir/test/lib/Dialect/Test/TestTypes.cpp\
            mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
            mlir/test/lib/Dialect/Test/TestAttributes.cpp\
            mlir/unittests/TableGen/EnumsGenTest.cpp\
            mlir/test/python/lib/PythonTestCAPI.cpp\
            mlir/include/mlir/IR/
```

Differential Revision: https://reviews.llvm.org/D150123
2023-05-12 11:21:25 +02:00
Matthias Springer
77124386fe [mlir][tensor] Add transform to make tensor.pad loop-independent
Add a transform to make `tensor.pad` and `tensor.empty` ops independent of SCF loop IVs. Such ops can then be hoisted.

E.g.:
```
scf.for %iv = %lb to %ub step %step {
  %high = affine.apply affine_map<(d0)[s0] -> (s0 - d0)> (%iv)[%ub]
  %p = tensor.pad %t low[5] high[%high] ...
  ...
}
```
is transformed to:
```
%high_new = affine.apply affine_map<()[s0, s1] -> (-s0 + s1)> ()[%lb, %ub]
%p_hoistable = tensor.pad %t low[5] high[%high_new]
%dim = tensor.dim %t, %c0
%size = affine.apply affine_map<(d0)[s0, s1] -> (-d0 + s0 + s1 + 5)>(%iv)[%ub, %dim]
%slice = tensor.extract_slice %p_hoistable [0] [%size] [1]
```

Differential Revision: https://reviews.llvm.org/D143910
2023-04-28 11:46:32 +09:00
Matthias Springer
4c48f016ef [mlir][Affine][NFC] Wrap dialect in "affine" namespace
This cleanup aligns the affine dialect with all the other dialects.

Differential Revision: https://reviews.llvm.org/D148687
2023-04-20 11:19:21 +09:00
Matthias Springer
7c06f63176 [mlir][tensor][bufferize] Fix dealloc placement in scf.forall op
The terminator of this op is special: it does not just yield a value,
but bufferizes to a memcpy. This requires special treatment to make sure
that deallocs are placed after the memcpy. (By default, deallocs are
placed right before the terminator.)

Differential Revision: https://reviews.llvm.org/D148408
2023-04-16 09:34:43 +09:00
Nicolas Vasilache
33468a51db [mlir][Tensor] Add support for insert_slice in FoldTensorSubsetOps
Differential Revision: https://reviews.llvm.org/D148334
2023-04-14 09:34:11 -07:00
Nicolas Vasilache
2031d7d66d [mlir][Tensor] Drop SplitPaddingPatterns.
These old patterns are not in use in MLIR or in downstream projects,
except for one test. Additionally, they are redundant with logic in the
tensor.pad tiling implementation.

Drop SplitPaddingPatterns to reduce entropy.

Differential Revision: https://reviews.llvm.org/D148207
2023-04-13 03:38:29 -07:00
Nicolas Vasilache
4dc72d47ce [mlir][Tensor] Add a FoldTensorSubsetOps pass and patterns
These patterns follow FoldMemRefAliasOps, which is further refactored for reuse.
In the process, fix FoldMemRefAliasOps' handling of strides for vector.transfer ops, which was previously incorrect.

These opt-in patterns generalize the existing canonicalizations on vector.transfer ops.
In the future the blanket canonicalizations will be retired.
They are kept for now to minimize porting disruptions.

Differential Revision: https://reviews.llvm.org/D146624
2023-03-23 04:03:27 -07:00
Mahesh Ravishankar
809e3d8c98 [mlir][TilingInterface] Modify TilingInterface methods to better return the state of the transformed IR.
Currently, `getTiledImplementation` and `generateResultTileValue`
return just `SmallVector<Operation *>` and `FailureOr<Value>`,
respectively.

- For `getTiledImplementation`, returning an empty vector implies tiling
  wasn't done. There is also an implicit assumption that the tiled
  operation's results correspond to the tiled values of the results of
  the original operation. This cannot handle cases where the tiled
  implementation uses multiple operations to compute the tiled values
  for the results of the untiled operation. Sometimes, the tiled
  operation might not directly yield the tiled values and might require
  casts, etc., to produce a replacement.
- For `generateResultTileValue`, it is assumed that the op defining
  the returned `Value` is the operation that represents the tiled
  computation. Again, the presence of casts, etc., violates this.

Instead make these methods return
```
struct TilingResult {
  SmallVector<Operation *> tiledOps;
  SmallVector<Value> tiledValues;
};
```

The `tiledOps` represent the operations generated that are relevant
for subsequent transformations. The `tiledValues` represent the tiled
values for the results of the original operation. This better
transmits the state of the transformed IR.
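
For context, a hedged sketch of how a caller might consume the new
return type; the helper `tileAndReplace` and its setup are assumptions,
not from the patch:

```
#include "mlir/IR/PatternMatch.h"
#include "mlir/Interfaces/TilingInterface.h"

using namespace mlir;

// Hypothetical caller sketching the FailureOr<TilingResult> contract.
static LogicalResult tileAndReplace(RewriterBase &rewriter,
                                    TilingInterface op,
                                    ArrayRef<OpFoldResult> offsets,
                                    ArrayRef<OpFoldResult> sizes) {
  FailureOr<TilingResult> tilingResult =
      op.getTiledImplementation(rewriter, offsets, sizes);
  if (failed(tilingResult))
    return failure();
  // Use tiledValues (not the results of tiledOps) as replacements: they
  // may be casts of the tiled op's results.
  rewriter.replaceOp(op.getOperation(), tilingResult->tiledValues);
  return success();
}
```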

As a consequence the following methods also return `FailureOr<TilingResult>`
- `tensor::replaceExtractSliceWithTiledProducer`
- `tensor::bubbleUpPadSlice`

Differential Revision: https://reviews.llvm.org/D145133
2023-03-16 14:29:03 +00:00
Jakub Kuderski
a0a76804c4 [ADT] Allow llvm::enumerate to enumerate over multiple ranges
This does not work by a mere composition of `enumerate` and `zip_equal`,
because C++17 does not allow for recursive expansion of structured
bindings.

This implementation uses `zippy` to manage the iteratees and adds the
stream of indices as the first zipped range. Because we have an upfront
assertion that all input ranges are of the same length, we only need to
check if the second range has ended during iteration.

As a consequence of using `zippy`, `enumerate` will now follow the
reference and lifetime semantics of the `zip*` family of functions. The
main difference is that `enumerate` exposes each tuple of references
through a new tuple-like type `enumerate_result`, with the familiar
`.index()` and `.value()` member functions.

Because the `enumerate_result` returned on dereference is a
temporary, the enumeration result can no longer be bound to an
lvalue reference.
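
For illustration, a small sketch of the new multi-range form (the
ranges here are made up):

```
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"

void example() {
  llvm::SmallVector<int> xs = {1, 2, 3};
  llvm::SmallVector<char> ys = {'a', 'b', 'c'};
  // The index stream is zipped in as the first range; structured
  // bindings unpack the index and one element per iteratee. All input
  // ranges must have the same length.
  for (auto [index, x, y] : llvm::enumerate(xs, ys)) {
    (void)index;
    (void)x;
    (void)y;
  }
}
```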

Reviewed By: dblaikie, zero9178

Differential Revision: https://reviews.llvm.org/D144503
2023-03-15 19:34:22 -04:00
Matthias Springer
758329dc7c [mlir][NFC] reifyResultShapes: Add extra error checking
This change adds a new helper function `mlir::reifyResultShapes` that calls the corresponding interface method and also checks the result produced by the implementation when running in debug mode. Bugs due to incorrect interface implementations can be difficult to debug.

This helper function also reduces the amount of code needed at call sites: the cast to `ReifyRankedShapedTypeOpInterface` is done in the helper function.
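
A hedged sketch of the simplified call site; the header location and
the wrapper `getShapes` are assumptions, not from the patch:

```
#include "mlir/Interfaces/InferTypeOpInterface.h"

using namespace mlir;

// Hypothetical call site. The helper performs the
// ReifyRankedShapedTypeOpInterface cast internally and, in debug mode,
// checks the reified shapes against the op's result types.
static LogicalResult getShapes(OpBuilder &b, Operation *op,
                               ReifiedRankedShapedTypeDims &shapes) {
  return reifyResultShapes(b, op, shapes);
}
```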

Differential Revision: https://reviews.llvm.org/D145777
2023-03-10 11:37:54 +01:00
Matthias Springer
2a5b13e722 [mlir][Interfaces] ReifyRankedShapedTypeOpInterface returns OpFoldResults
`reifyResultShapes` now returns `OpFoldResult`s instead of `Value`s. This is often more efficient because many transformations immediately attempt to extract a constant from the reified values.
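
For context, an `OpFoldResult` holds either an `Attribute` (a static
value) or a `Value`; a hedged sketch of the constant extraction that
call sites typically attempt (the helper `isStaticallyZero` is
hypothetical):

```
#include "mlir/Dialect/Utils/StaticValueUtils.h"
#include <optional>

// Hypothetical consumer: many transforms only need the constant, if
// any. With OpFoldResult, no constant op has to be materialized and
// then matched again.
static bool isStaticallyZero(mlir::OpFoldResult ofr) {
  std::optional<int64_t> cst = mlir::getConstantIntValue(ofr);
  return cst && *cst == 0;
}
```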

Differential Revision: https://reviews.llvm.org/D145250
2023-03-06 08:41:28 +01:00
Matthias Springer
9fa6b3504b [mlir][bufferization] Improve aliasing OpOperand/OpResult property
`getAliasingOpOperands`/`getAliasingOpResults` now encodes OpOperand/OpResult, buffer relation and a degree of certainty. E.g.:
```
// aliasingOpOperands(%r) = {(%t, EQUIV, DEFINITE)}
// aliasingOpResults(%t) = {(%r, EQUIV, DEFINITE)}
%r = tensor.insert %f into %t[%idx] : tensor<?xf32>

// aliasingOpOperands(%r) = {(%t0, EQUIV, MAYBE), (%t1, EQUIV, MAYBE)}
// aliasingOpResults(%t0) = {(%r, EQUIV, MAYBE)}
// aliasingOpResults(%t1) = {(%r, EQUIV, MAYBE)}
%r = arith.select %c, %t0, %t1 : tensor<?xf32>
```

`BufferizableOpInterface::bufferRelation` is removed, as it is now part of `getAliasingOpOperands`/`getAliasingOpResults`.

This change allows for better analysis, in particular with respect to equivalence. This enables additional optimizations and better error checking (the current error checking is sometimes overly conservative). Examples:

* EmptyTensorElimination can eliminate `tensor.empty` inside `scf.if` blocks. This requires a modeling of equivalence: It is not a per-OpResult property anymore. Instead, it can be specified for each OpOperand and OpResult. This is important because `tensor.empty` may be eliminated only if all values on the SSA use-def chain to the final consumer (`tensor.insert_slice`) are equivalent.
* The detection of "returning allocs from a block" can be improved. (Addresses a TODO in `assertNoAllocsReturned`.) This allows us to bufferize IR such as "yielding a `tensor.extract_slice` result from an `scf.if` branch", which currently fails to bufferize because the alloc detection is too conservative.
* Better bufferization of loops. Aliases of the iter_arg can be yielded (even if they are not equivalent) without having to realloc and copy the entire buffer on each iteration.

The above-mentioned examples are not yet implemented with this change. This change just improves the BufferizableOpInterface, its implementations and related helper functions, so that better aliasing information is available for each op.

Differential Revision: https://reviews.llvm.org/D142129
2023-02-09 11:35:03 +01:00
Matthias Springer
330372f2c5 [mlir][tensor][bufferize] tensor.empty does not define the result tensor contents
This is encoded in the `BufferizableOpInterface` via `resultBufferizesToMemoryWrite = false`.

Differential Revision: https://reviews.llvm.org/D143181
2023-02-06 10:26:38 +01:00
Matthias Springer
b6ae3f8873 [mlir][tensor][bufferize] Implement getBufferType for CastOp
This interface method is used to compute the buffer type of a value during bufferization; it was missing for CastOp. In particular, the method is used during loop bufferization.

Also fix a bug where a cast from an unranked tensor to a ranked tensor type did not always apply a fully dynamic layout map on the result memref.

Differential Revision: https://reviews.llvm.org/D143063
2023-02-01 14:24:10 +01:00
Matthias Springer
1ac248e485 [mlir][bufferization][NFC] Rename getAliasingOpOperand/getAliasingOpResult
* `getAliasingOpOperand` => `getAliasingOpOperands`
* `getAliasingOpResult` => `getAliasingOpResults`

Also a few minor code cleanups and better documentation.

Differential Revision: https://reviews.llvm.org/D142979
2023-02-01 10:07:41 +01:00
Matthias Springer
148432ea84 [mlir][bufferization][NFC] Rename BufferRelation::None to BufferRelation::Unknown
The previous name was incorrect. `None` does not mean that there is no buffer relation between two buffers (which would imply that they definitely do not alias); instead, it means that no further information is available.

Differential Revision: https://reviews.llvm.org/D142870
2023-01-30 11:09:28 +01:00
Matthias Springer
1840d18a10 [mlir][bufferization][NFC] Rename: "last-write" -> "definition"
The previous lingo was confusing. There are no writes on tensors. There are only definitions.

Also some minor cleanup and better documentation.

Differential Revision: https://reviews.llvm.org/D141790
2023-01-30 09:51:53 +01:00
Frederik Gossen
1125c5c0b2 [MLIR] Remove scf.if builder with explicit result types and callbacks
Instead, use the builder and infer the return type based on the inner `yield` ops.
Also, fix uses that do not create the terminator as required for the callback builders.

Differential Revision: https://reviews.llvm.org/D142056
2023-01-20 10:52:08 -05:00
Hanhan Wang
65388086e6 [mlir][tensor] Add patterns that fold ops into pack and unpack ops.
The tensor.pack op has pad semantics, so we can fold pad + pack into
pack when

1. They have the same padding values or the pack op does not have
   padding values.
2. The pad op does not have low paddings.

The tensor.unpack op has extract_slice semantics, so we can fold unpack
+ extract_slice into unpack when

1. All the offsets are 0s.
2. All the strides are 1s.

Reviewed By: tyb0807

Differential Revision: https://reviews.llvm.org/D141099
2023-01-11 13:51:49 -08:00
serge-sans-paille
984b800a03 Move from llvm::makeArrayRef to ArrayRef deduction guides - last part
This is a follow-up to https://reviews.llvm.org/D140896, split into
several parts as it touches a lot of files.
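
The mechanical change, sketched under the assumption of a plain
`std::vector` source (example values are illustrative):

```
#include "llvm/ADT/ArrayRef.h"
#include <vector>

void example() {
  std::vector<int> vec = {1, 2, 3};
  // Before: llvm::ArrayRef<int> ref = llvm::makeArrayRef(vec);
  // After: class template argument deduction via deduction guides.
  llvm::ArrayRef ref(vec);
  (void)ref;
}
```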

Differential Revision: https://reviews.llvm.org/D141298
2023-01-10 11:47:43 +01:00
Matthias Springer
6176d6a93e [mlir][tensor] Support parallel_insert_slice in MergeConsecutiveInsertExtractSlicePatterns.cpp
Differential Revision: https://reviews.llvm.org/D141116
2023-01-06 12:33:45 +01:00
Mehdi Amini
ab32f5b7ef Apply clang-tidy fixes for readability-simplify-boolean-expr in BufferizableOpInterfaceImpl.cpp (NFC)
2022-12-28 22:42:39 +00:00
Fangrui Song
cbb0981388 [mlir] llvm::Optional::value => operator*/operator->
std::optional::value() has undesired exception checking semantics and is
unavailable in older Xcode (see _LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS). The
call sites block std::optional migration.
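
A sketch of the replacement pattern (the variable is hypothetical):

```
#include <optional>
#include <string>

void example(std::optional<std::string> maybeName) {
  if (!maybeName)
    return;
  // Before: maybeName.value().size(), which has exception-checking
  // semantics and availability limits on older libc++.
  // After: plain dereference; the caller has already checked.
  size_t n = maybeName->size();
  std::string copy = *maybeName;
  (void)n;
  (void)copy;
}
```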
2022-12-17 19:07:38 +00:00
Ramkumar Ramachandra
22426110c5 mlir/tblgen: use std::optional in generation
This is part of an effort to migrate from llvm::Optional to
std::optional. This patch changes the way mlir-tblgen generates .inc
files, and modifies tests and documentation appropriately. It is a "no
compromises" patch, and doesn't leave the user with an unpleasant mix of
llvm::Optional and std::optional.

A non-trivial change has been made to ControlFlowInterfaces to split one
constructor into two, relating to a build failure on Windows.

See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>

Differential Revision: https://reviews.llvm.org/D138934
2022-12-17 11:13:26 +01:00
Matthias Springer
e5dc99e642 [mlir][tensor][bufferize] Improve bufferization of DimOp/RankOp
The tensor operands do not bufferize to a memory read.

Differential Revision: https://reviews.llvm.org/D140007
2022-12-14 12:47:46 +01:00
Matthias Springer
be630f07de [mlir][bufferize] Implement BufferizableOpInterface for tensor.empty
The op is not bufferizable but should be analyzable (for `EliminateEmptyTensors`, which uses the bufferization infrastructure).

Also improve debugging functionality and error messages.

Also adds a missing pass to the sparse pipeline. (tensor.empty should be replaced with bufferization.alloc_tensor, but it sometimes used to work without the replacement, depending on how the tensor.empty is used. Now we always fail explicitly.)
2022-12-12 14:19:38 +01:00
Alexander Belyaev
f6fb0a4f35 [mlir] Make patterns for folding tensor.empty optional.
At the moment, they are a part of EmptyOp::getCanonicalizationPatterns. When
extract_slice(tensor.empty) is rewritten as a new tensor.empty, it could
happen that we end up with two tensor.empty ops, since the original
tensor.empty can have two users. After bufferization such cases result in two
allocations.

Differential Revision: https://reviews.llvm.org/D139308
2022-12-07 23:01:34 +01:00
Matthias Springer
9cdf6b641d [mlir][tensor] Support parallel_insert_slice in reassociative reshape folder
Differential Revision: https://reviews.llvm.org/D139540
2022-12-07 16:25:10 +01:00
Matthias Springer
1403073790 [mlir][tensor] Fold rank-reducing insert_slice with inverse collapse_shape
Differential Revision: https://reviews.llvm.org/D139221
2022-12-05 09:17:29 +01:00
Matthias Springer
50a2bb95ab [mlir][tensor] Fold rank-reducing extract_slice with inverse expand_shape
Differential Revision: https://reviews.llvm.org/D139220
2022-12-05 09:17:24 +01:00
Matthias Springer
f92c7506e3 Revert "[mlir][tensor] Fold rank-reducing extract_slice with inverse expand_shape"
This reverts commit a076f57a1a.
2022-12-02 21:22:20 +01:00
Matthias Springer
c837a94754 Revert "[mlir][tensor] Fold rank-reducing insert_slice with inverse collapse_shape"
This reverts commit 1522a3b7b3.
2022-12-02 21:22:04 +01:00
Matthias Springer
1522a3b7b3 [mlir][tensor] Fold rank-reducing insert_slice with inverse collapse_shape
Differential Revision: https://reviews.llvm.org/D139104
2022-12-02 10:42:52 +01:00
Matthias Springer
a076f57a1a [mlir][tensor] Fold rank-reducing extract_slice with inverse expand_shape
Differential Revision: https://reviews.llvm.org/D139103
2022-12-02 10:42:46 +01:00
Matthias Springer
13593dc9dc [mlir][tensor][bufferize] Fix tensor.insert_slice regression
This reverts D132662 (apart from overall cleanups), which introduced an overly aggressive optimization for tensor.insert_slice bufferization. Instead, bufferizesToMemoryRead is improved to handle some of these cases. The remaining cases can still bufferize efficiently when the canonicalizer is run before bufferization.

Differential Revision: https://reviews.llvm.org/D138745
2022-11-26 19:14:33 +01:00
Lei Zhang
9bb633741a [mlir][bufferization] Support general Attribute as memory space
MemRef has been accepting a general Attribute as memory space for
a long time. This commit updates the bufferization side to catch up,
which allows downstream users to plug in customized symbolic memory
spaces. This also eliminates quite a few calls to
`getMemorySpaceAsInt`, which is deprecated.
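
A hedged sketch of the API difference (the helper `hasMemorySpace` is
illustrative, not from the patch):

```
#include "mlir/IR/BuiltinTypes.h"

// Hypothetical check: with a general Attribute memory space, downstream
// users can compare against their own symbolic space attributes instead
// of going through the deprecated integer-only getMemorySpaceAsInt().
static bool hasMemorySpace(mlir::MemRefType type, mlir::Attribute space) {
  return type.getMemorySpace() == space;
}
```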

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D138330
2022-11-21 09:40:50 -05:00
Matthias Springer
09dfb44193 [mlir][tensor][bufferize] Support memory_space for tensor.pad
This change adds memory space support to tensor.pad. (tensor.generate and tensor.from_elements do not support memory spaces yet.)

The memory space is inferred from the buffer of the source tensor.

Instead of lowering tensor.pad to tensor.generate + tensor.insert_slice, it is now lowered to bufferization.alloc_tensor (with the correct memory space) + linalg.map + tensor.insert_slice.

Memory space support for the remaining two tensor ops is left for a later point, as this requires some more design discussions.

Differential Revision: https://reviews.llvm.org/D136265
2022-10-27 12:29:57 +02:00
Matthias Springer
c1f0a15c65 [mlir][tensor][bufferize] Lower tensor.generate to linalg.map
There is no memref equivalent of tensor.generate. The purpose of this change is to avoid creating scf.parallel loops during bufferization.

Differential Revision: https://reviews.llvm.org/D136767
2022-10-27 12:03:13 +02:00
Matthias Springer
2d5edc644d [mlir][bufferize] Provide default BufferizableOpInterface impl for destination style ops
tensor.insert and tensor.insert_slice (as destination-style ops) no longer need to implement the entire BufferizableOpInterface.

Differential Revision: https://reviews.llvm.org/D136347
2022-10-27 10:52:47 +02:00
Christopher Bate
446981bdb6 [mlir][tensor] ExtractSliceFromReshape: handle collapsing of unit dim edge cases
Prior to this change, the "ExtractSliceFromReshape" pattern would transform

```
%collapsed = tensor.collapse_shape %input [[0, 1], [2]]
                : tensor<1x11x100xf32> into tensor<11x100xf32>
%slice = tensor.extract_slice %collapsed [%offt, 0] [%size, 100] [1, 1]
                : tensor<11x100xf32> to tensor<?x100xf32>
```

into a loop iterating over the range `%size - %offt` that pieces
together multiple sub-slices of `%input` along the first dimension. This
is correct but obviously inefficient. The technical condition is that
collapsing at most one non-unit dimension of `%input` will not result in
a subsequent slice along the corresponding dimension of `%collapsed`
mapping across discontinuities in the index space of `%input`. Thus, the
definition of a "linearized dimension" (from the perspective of
`tensor.collapse_shape`) is updated to reflect this condition.

The transform will now generate

```
%slice = tensor.extract_slice %input [0, %offt, 0][1, %size, 100] [1, 1]
            : tensor<1x11x100xf32> to tensor<1x?x100xf32>
%result = tensor.collapse_shape %slice [[0, 1], [2]]
            : tensor<1x?x100xf32> into tensor<?x100xf32>
```

which can be further canonicalized.

Additional tests are added to check this family of edge cases.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D135726
2022-10-22 13:29:34 -06:00
Matthias Springer
6cdd34b973 [mlir][tensor][bufferize] Bufferize inserts into equivalent tensors in-place
Inserting a tensor into an equivalent tensor is a no-op after bufferization. No alloc is needed.

Differential Revision: https://reviews.llvm.org/D132662
2022-10-06 15:06:33 +09:00
Jakub Kuderski
abc362a107 [mlir][arith] Change dialect name from Arithmetic to Arith
Suggested by @lattner in https://discourse.llvm.org/t/rfc-define-precise-arith-semantics/65507/22.

Tested with:
`ninja check-mlir check-mlir-integration check-mlir-mlir-spirv-cpu-runner check-mlir-mlir-vulkan-runner check-mlir-examples`

and `bazel build --config=generic_clang @llvm-project//mlir:all`.

Reviewed By: lattner, Mogball, rriddle, jpienaar, mehdi_amini

Differential Revision: https://reviews.llvm.org/D134762
2022-09-29 11:23:28 -04:00
Lei Zhang
465ec4e0b4 [mlir] NFC: move mergeOffsetsSizesAndStrides into Affine/Utils
So that these utility functions can also be used by ViewLikeInterface
ops outside the memref dialect.

Reviewed By: mravishankar, christopherbate

Differential Revision: https://reviews.llvm.org/D134487
2022-09-23 13:28:11 -04:00
Lei Zhang
bd81524e7f Reland "[mlir][tensor] Support more cases in MergeConsecutiveExtractSlice"
This relands commit 5d4603a02d.
It includes fixes for GCC test failures and simplifications to
the implementation.

Co-authored-by: Mahesh Ravishankar <ravishankarm@google.com>
Co-authored-by: Christopher Bate <cbate@nvidia.com>
2022-09-22 17:28:50 -04:00
Matthias Springer
04ff6009fc [mlir][tensor][bufferize] Implement getBufferType for Expand/CollapseShapeOp
This function must be implemented for all ops where the result memref type differs from the input memref type.

Differential Revision: https://reviews.llvm.org/D134331
2022-09-21 18:31:59 +09:00
Mehdi Amini
e0a6df53b4 Revert "[mlir][tensor] Support more cases in MergeConsecutiveExtractSlice"
This reverts commit 5d4603a02d.

The Dialect/Tensor/fold-consecutive-insert-extract-slice.mlir test is
failing when built with GCC.
2022-09-21 04:01:57 +00:00
Lei Zhang
2d3b54feb2 [mlir][tensor] NFC: name various Transforms/ files consistently
Use a suffix to make clear what each file contains.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D134202
2022-09-20 20:17:29 -04:00
Lei Zhang
5d4603a02d [mlir][tensor] Support more cases in MergeConsecutiveExtractSlice
This commit adds utility functions to perform general merging of
OffsetSizeAndStrideOpInterface ops, supporting rank-reducing
producers and non-unit strides.

With them we can extend MergeConsecutiveExtractSlice to support
more cases.

Co-authored-by: Mahesh Ravishankar <ravishankarm@google.com>

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D134294
2022-09-20 20:16:03 -04:00
Lei Zhang
bb4c53b7ba [mlir][tensor] Merge consecutive insert_slice/extract_slice ops
Consecutive tensor.insert_slice/tensor.extract_slice ops can be
created in cases like tiling a convolution and then downsizing the
2-D convolutions into 1-D ones. They hinder further transformations,
so these patterns are added to clean them up.

Given that bufferization is sensitive and has requirements on
the IR structure (see https://reviews.llvm.org/D132666),
these patterns are put in Transforms/ with separate entry points
for explicit collection.

Reviewed By: ThomasRaoux, mravishankar

Differential Revision: https://reviews.llvm.org/D133871
2022-09-20 19:52:56 -04:00
Christopher Bate
4d27f06f94 [mlir][Tensor] Fix ExtractSliceFromReshape transform edge case
The transformation would fail if none of the sliced dimensions were
linearized by the producing `tensor.collapse_shape`. This is a trivial
edge case, but it wasn't correctly tested. This commit fixes the issue
and adds a test.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D134088
2022-09-19 14:02:45 -06:00