Commit Graph

205 Commits

Jacques Pienaar
e67080df99 [mlir][ods] Populate properties in generated builder (#90430)
Previously this was only populated later, in the create method. This
resolves some of the invalid builder paths. It may also be sufficient that
type inference functions no longer have to consider whether property
conversion has happened (but I haven't verified that yet).

This also makes Attributes corresponding to Properties optional inside
the set-from-attributes method. Today that is in effect what happens with
Property value initialization, and folks use it to define custom C++
types whose default initialization is what they want. This is the
behavior users get if they use properties directly. Propagating
Attributes without allowing partial setting would require iterating over
the dictionary attribute while considering the properties of the op type
that will be created. This could also have been an additional generated
method or optional behavior on the set method, but doing it consistently
seems better. In terms of what's lost, it doesn't seem like anything
compared to the pure Property path, where a Property is default-value
initialized and then partially overwritten (the stricter behavior doesn't
seem to buy anything verification-wise).

Default-valued Properties (as specified on the ODS side rather than the
C++ side) triggered an error: the containing class was not yet complete
but the default referenced the nested class, so we couldn't have a
default initializer for them in the parent class. Added an additional
forwarding builder to avoid needing to update call sites. This could be
split out into a separate change.

Inlined a templated function in a unit test that was only used once.
Moved initialization earlier where seen.
2024-05-15 03:25:51 -07:00
Sayan Saha
c5e67b86ef [mlir] [tensor] Crash in getPackOpResultTypeShape for tensor.pack/unpack ops. (#90641)
A Windows build of `mlir` with Visual Studio (19.36.32538 for x64) using
the following command:

`cmake.exe -GNinja -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_PROJECTS=mlir -DLLVM_ENABLE_EH=ON -DLLVM_ENABLE_RTTI=1 -DLLVM_TARGETS_TO_BUILD=host ../llvm`

crashes when running canonicalization on `tensor.pack`/`tensor.unpack`
ops via `mlir-opt --canonicalize input.mlir`, where `input.mlir` is as
follows (taken from one of the FileCheck tests for `tensor.pack`):

```
func.func @pack_unpack(%arg0: tensor<128x256xf32>) -> tensor<128x256xf32> {
  %pack_dest = tensor.empty() : tensor<8x16x8x32xf32>
  %unpack_dest = tensor.empty() : tensor<128x256xf32>
  %tp = tensor.pack %arg0 outer_dims_perm = [1, 0] inner_dims_pos = [0, 1] inner_tiles = [8, 32] into %pack_dest : tensor<128x256xf32> -> tensor<8x16x8x32xf32>
  %tup = tensor.unpack %tp outer_dims_perm = [1, 0] inner_dims_pos = [0, 1] inner_tiles = [8, 32] into %unpack_dest : tensor<8x16x8x32xf32> -> tensor<128x256xf32>
  return %tup : tensor<128x256xf32>
}
```

The crash seemingly comes from an invalid memory access while iterating
over `innerDimsPos` within `getPackOpResultTypeShape`.

This crash is also causing the following tests to fail:

```
MLIR :: Dialect/Linalg/canonicalize.mlir
MLIR :: Dialect/Linalg/data-layout-propagation.mlir
MLIR :: Dialect/Linalg/generalize-tensor-pack-tile.mlir
MLIR :: Dialect/Linalg/generalize-tensor-pack.mlir
MLIR :: Dialect/Linalg/generalize-tensor-unpack-tile.mlir
MLIR :: Dialect/Linalg/generalize-tensor-unpack.mlir
MLIR :: Dialect/Linalg/transform-lower-pack.mlir
MLIR :: Dialect/Linalg/transform-op-fuse.mlir
MLIR :: Dialect/Linalg/transform-op-pack.mlir
MLIR :: Dialect/Linalg/transform-pack-greedily.mlir
MLIR :: Dialect/Tensor/canonicalize.mlir
MLIR :: Dialect/Tensor/fold-into-pack-and-unpack.mlir
MLIR :: Dialect/Tensor/invalid.mlir
MLIR :: Dialect/Tensor/ops.mlir
MLIR :: Dialect/Tensor/simplify-pack-unpack.mlir
MLIR :: Dialect/Tensor/tiling.mlir
```
2024-05-13 18:18:20 -04:00
Peiming Liu
37ffbbb195 [mlir][tensor][sparse] don't drop encoding when infer result type (#91817)
A general question is: is it possible to support hooks here to infer the
encoding? E.g., when the extracted tensor slice is rank-reduced, the
encoding needs to be updated accordingly as well.
2024-05-13 09:53:15 -07:00
Max191
7e35a9a0e7 [mlir] Replace dynamic sizes in insert_slice of tensor.cast canonicalization (#91352)
In some cases this pattern may ignore static information due to dynamic
operands among the insert_slice `sizes` operands, e.g.:
```
%0 = tensor.cast %arg0 : tensor<1x?xf32> to tensor<?x?xf32>
%1 = tensor.insert_slice %0 into %arg1[...] [%s0, %s1] [...] 
    : tensor<?x?xf32> into tensor<?x?xf32>
```
Can be rewritten into:
```
%1 = tensor.insert_slice %arg0 into %arg1[...] [1, %s1] [...] 
    : tensor<1x?xf32> into tensor<?x?xf32>
```
This PR updates the matching in the pattern to allow rewrites like this.
2024-05-08 15:05:53 -04:00
Benoit Jacob
62bed56efd [mlir][tensor] Remove assertion in ExpandShapeOp::build (#91361)
Unblocking a downstream integrate where an expected-to-fail test was
expecting this to be a runtime verifier error, not a compiler crash:
https://github.com/llvm/torch-mlir/pull/3279.
2024-05-07 15:07:06 -04:00
Peiming Liu
d2353695f8 [mlir][NFC] update code to use mlir::dyn_cast/cast/isa (#90633)
Fix compiler warning caused by using deprecated interface
(https://github.com/llvm/llvm-project/pull/90413)
2024-04-30 11:14:11 -07:00
Gaurav Shukla
97069a8619 [MLIR] Generalize expand_shape to take shape as explicit input (#90040)
This patch generalizes tensor.expand_shape and memref.expand_shape to
consume the output shape as a list of SSA values. This enables us to
implement generic reshape operations with dynamic shapes using
collapse_shape/expand_shape pairs.

The output_shape input to expand_shape follows the static/dynamic
representation that's also used in `tensor.extract_slice`.
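
For illustration, the new form looks roughly like this (names and shapes
are hypothetical):

```mlir
// The output shape is passed as mixed static/dynamic values; %d is the
// SSA value for the single dynamic extent.
%e = tensor.expand_shape %arg0 [[0, 1]] output_shape [%d, 4]
    : tensor<?xf32> into tensor<?x4xf32>
```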

Differential Revision: https://reviews.llvm.org/D140821

---------

Signed-off-by: Gaurav Shukla<gaurav.shukla@amd.com>
Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
Co-authored-by: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
2024-04-30 09:28:35 -07:00
Rob Suderman
593f6fdcb4 [mlir][tensor] Fix tensor.reshape canonicalization (#90141)
Canonicalization defaulted to replacement when the input dims were from
an unknown source, which is obviously incorrect. Tweaked the pattern and
included a test to prevent future issues.
2024-04-25 17:41:12 -07:00
Mehdi Amini
8c0341df02 Revert "[MLIR] Generalize expand_shape to take shape as explicit input" (#89540)
Reverts llvm/llvm-project#69267

This broke some bots.
2024-04-21 14:33:48 +02:00
Gaurav Shukla
e095d978ba [MLIR] Generalize expand_shape to take shape as explicit input (#69267)
This patch generalizes tensor.expand_shape and memref.expand_shape to
consume the output shape as a list of SSA values. This enables us to
implement generic reshape operations with dynamic shapes using
collapse_shape/expand_shape pairs.

The output_shape input to expand_shape follows the static/dynamic
representation that's also used in `tensor.extract_slice`.

Differential Revision: https://reviews.llvm.org/D140821

Co-authored-by: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
2024-04-21 07:37:02 -04:00
Rob Suderman
c045955501 [mlir][tensor] Fold tensor.reshape for dynamic reshape (#88961)
If `tensor.reshape` occurs with `d0, d1, d2, ...` for the dimensions, we
know that the reshape is a no-op. Checking for this case lets us fold
away the computation.
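
A sketch of the foldable shape (function and value names are
illustrative):

```mlir
// The shape operand is rebuilt from the source's own dims d0 and d1, so
// the reshape is an identity and folds to %arg0.
func.func @identity_reshape(%arg0: tensor<?x?xf32>) -> tensor<?x?xf32> {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %d0 = tensor.dim %arg0, %c0 : tensor<?x?xf32>
  %d1 = tensor.dim %arg0, %c1 : tensor<?x?xf32>
  %shape = tensor.from_elements %d0, %d1 : tensor<2xindex>
  %r = tensor.reshape %arg0(%shape)
      : (tensor<?x?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
  return %r : tensor<?x?xf32>
}
```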
2024-04-19 10:36:09 -07:00
Christian Sigg
a5757c5b65 Switch member calls to isa/dyn_cast/cast/... to free function calls. (#89356)
This change cleans up call sites. The next step is to mark the member
functions deprecated.

See https://mlir.llvm.org/deprecation and
https://discourse.llvm.org/t/preferred-casting-style-going-forward.
2024-04-19 15:58:27 +02:00
Han-Chung Wang
c3e3d59fab [mlir][tensor] Fix tensor::PackOp fold() handling of padding value (#87296)
We can't just check whether it is a splat constant or not; we should also
check whether the values match.
2024-04-02 13:49:28 -07:00
Aart Bik
3324f4d4f4 [mlir][sparse] avoid incompatible linalg fuse-into-consumer (#86752)
This fixes an "infinite" loop bug where the incoming IR was repeatedly
rewritten while adding identical cast operations. The test for compatible
types should include the notion of an encoding: if the encodings differ,
a naive fusion into the consumer is invalid.
2024-03-26 17:16:03 -07:00
Sayan Saha
26722f5b61 [MLIR] Fix incorrect memref::DimOp canonicalization, add tensor::DimOp canonicalization (#84225)
The current canonicalization of `memref.dim` operating on the result of
`memref.reshape`, which rewrites it into a `memref.load`, is incorrect:
it doesn't check whether the `index` operand of `memref.dim` dominates
the source `memref.reshape` op. It always introduces the `memref.load`
right after the `memref.reshape` to ensure the `memref` is not mutated
before the load. As a result, the following error is observed:

```
$> mlir-opt --canonicalize input.mlir

func.func @reshape_dim(%arg0: memref<*xf32>, %arg1: memref<?xindex>, %arg2: index) -> index {
  %c4 = arith.constant 4 : index
  %reshape = memref.reshape %arg0(%arg1) : (memref<*xf32>, memref<?xindex>) -> memref<*xf32>
  %0 = arith.muli %arg2, %c4 : index
  %dim = memref.dim %reshape, %0 : memref<*xf32>
  return %dim : index
}
```

results in:

```
dominator.mlir:22:12: error: operand #1 does not dominate this use
    %dim = memref.dim %reshape, %0 : memref<*xf32>
           ^
dominator.mlir:22:12: note: see current operation: %1 = "memref.load"(%arg1, %2) <{nontemporal = false}> : (memref<?xindex>, index) -> index
dominator.mlir:21:10: note: operand defined here (op in the same block)
    %0 = arith.muli %arg2, %c4 : index
```

Properly fixing this issue requires a dominance analysis, which is
expensive to run within a canonicalization pattern. So this patch fixes
the canonicalization pattern by being more strict/conservative about the
legality condition under which we perform this canonicalization. The more
general pattern is also added to `tensor.dim`. Since tensors are
immutable, we don't need to worry about where to introduce the
`tensor.extract` call after canonicalization.
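
Roughly, the tensor-side rewrite looks like this (illustrative snippet;
%t, %shape, and %idx are assumed to be defined):

```mlir
// Before: taking a dim of a tensor.reshape result.
%reshape = tensor.reshape %t(%shape)
    : (tensor<?x?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
%dim = tensor.dim %reshape, %idx : tensor<?x?xf32>
// After: the extent is read straight from the shape tensor; tensors are
// immutable, so this is legal at any insertion point.
%dim2 = tensor.extract %shape[%idx] : tensor<2xindex>
```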
2024-03-11 19:37:33 -07:00
James Newling
67ef4ae2c3 [MLIR][Tensor,MemRef] Fold expand_shape and collapse_shape if identity (#80658)
Before: op verifiers failed if the input and output ranks were the same
(i.e., no expansion or collapse). This behavior required users of these
shape ops to verify manually that they were not creating identity
versions of these ops every time they built them -- problematic. This PR
removes this strict verification and introduces folders for the identity
cases.

The PR also removes the special case handling of rank-0 tensors for
expand_shape and collapse_shape, there doesn't seem to be any reason to
treat them differently.
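
For example (hypothetical types), an identity reshape like this is now
accepted and folds to its operand:

```mlir
// Same rank and shape on both sides: folds away to %arg0.
%0 = tensor.collapse_shape %arg0 [[0], [1]]
    : tensor<4x8xf32> into tensor<4x8xf32>
```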
2024-03-12 10:11:58 +09:00
Max191
e3b93a1620 [mlir] Fix bug in pack and unpack op canonicalization for folding dynamic dims (#82539)
This PR fixes a bug in the inference of pack and unpack static shapes
that should be using an inverse permutation.
2024-02-28 17:39:22 -05:00
Han-Chung Wang
eac8604d98 [mlir][tensor] Add support for tensor.unpack static shapes inference. (#81702)
The revision does not refactor inferStaticShape for the pack and unpack
ops because they can diverge quickly: more dimensions can be inferred
(i.e., with inner_tile_sizes) if the pack op does not have a padding
value.
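
As a sketch of the idea (shapes and names illustrative): a static result
type pins the outer dims of the source, which a pair of casts can then
make static.

```mlir
// 128/8 = 16 and 256/32 = 8, so the two dynamic outer dims of %src can
// be inferred as 16 and 8 and a tensor.cast inserted on the source.
%u = tensor.unpack %src inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<?x?x8x32xf32> -> tensor<128x256xf32>
```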

This is a follow-up of https://github.com/llvm/llvm-project/pull/80848
2024-02-19 16:26:12 -08:00
srcarroll
9466c4e629 [MLIR][tensor] Improve tensor.pack verifier to catch more cases with unconditional runtime errors (#77217)
Previously, the `tensor.pack` verifier detected unconditional runtime
errors only when tile sizes are static. Now dynamic tiles are considered,
and we only require that the input and either the corresponding tile or
output size be static to determine whether the op will unconditionally
produce errors at runtime.
2024-02-19 12:27:24 -06:00
Mehdi Amini
a854982aa1 Apply clang-tidy fixes for readability-simplify-boolean-expr in TensorOps.cpp (NFC) 2024-02-13 20:56:05 -08:00
Mehdi Amini
69bcb69bba Apply clang-tidy fixes for llvm-qualified-auto in TensorOps.cpp (NFC) 2024-02-13 20:56:05 -08:00
Han-Chung Wang
bc08cc2ac8 [mlir][tensor] Add support for tensor.pack static shapes inference. (#80848)
Fixes https://github.com/openxla/iree/issues/16317
2024-02-13 20:20:24 -08:00
Alexey Z
4759890f85 [mlir][tensor] Fix bug in insert_slice canonical. with tensor encoding (#81045)
Previously, `InsertSliceOpSourceCastInserter` was incorrectly applied to
a case when tensor types have an encoding attribute attached to them.
The type `newSrcType` was missing that attribute from the old `srcType`,
which made the expression `srcType == newSrcType` false, since
`tensor<2x2xf32, "foo">` is not equal to `tensor<2x2xf32>`. That led to
an endless back and forth between `InsertSliceOpSourceCastInserter`,
which would introduce a cast, and `InsertSliceOpCastFolder`, which would
remove it right after.
2024-02-08 20:22:27 -05:00
Rob Suderman
70eb0e37a8 [mlir][tensor] Fix tensor.pad to remove newly static values (#79938)
The canonicalization incrementally converts foldable dynamic hi/lo
padding to static hi/lo values. During this canonicalization, the newly
static values should be removed from the dynamic values.
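
A sketch of the case being handled (illustrative; assumes %t and %cst
are defined earlier):

```mlir
// %c2 is a constant, so low[%c2] becomes the static low[2]; %c2 must
// then be dropped from the op's dynamic low-padding operands.
%c2 = arith.constant 2 : index
%p = tensor.pad %t low[%c2] high[3] {
^bb0(%i: index):
  tensor.yield %cst : f32
} : tensor<10xf32> to tensor<?xf32>
```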
2024-01-29 20:32:15 -08:00
Matthias Springer
5fcf907b34 [mlir][IR] Rename "update root" to "modify op" in rewriter API (#78260)
This commit renames 4 pattern rewriter API functions:
* `updateRootInPlace` -> `modifyOpInPlace`
* `startRootUpdate` -> `startOpModification`
* `finalizeRootUpdate` -> `finalizeOpModification`
* `cancelRootUpdate` -> `cancelOpModification`

The term "root" is a misnomer. The root is the op that a rewrite pattern
matches against
(https://mlir.llvm.org/docs/PatternRewriter/#root-operation-name-optional).
A rewriter must be notified of all in-place op modifications, not just
in-place modifications of the root
(https://mlir.llvm.org/docs/PatternRewriter/#pattern-rewriter). The old
function names were confusing and have contributed to various broken
rewrite patterns.

Note: The new function names use the term "modify" instead of "update"
for consistency with the `RewriterBase::Listener` terminology
(`notifyOperationModified`).
2024-01-17 11:08:59 +01:00
Han-Chung Wang
4b14205bc0 [mlir][tensor] Centralize pack/unpack related patterns. (#76603)
The revision moves pack/unpack related patterns to
PackAndUnpackPatterns.cpp. This follows the convention used for other
tensor ops.

It also renames `populateSimplifyTensorPack` to
`populateSimplifyPackAndUnpackPatterns` and adds a TODO item for
tensor.unpack op.
2023-12-30 11:40:40 -08:00
Han-Chung Wang
bffdde8b8e [mlir][tensor][NFC] Fix a typo in pack simplification pattern. (#76109) 2023-12-20 17:03:55 -08:00
Rafael Ubal
214d32ccd2 Support for dynamic dimensions in 'tensor.splat' (#74626)
This feature had been marked as `TODO` in the `tensor.splat`
documentation for a while. This MR includes:

- Support for dynamically shaped tensors in the return type of
`tensor.splat`, with the syntax suggested in the `TODO` comment (see the
sketch after this list).

- Updated op documentation.

- Bufferization support.

- Updates in op folders affected by the new feature.

- Unit tests for valid/invalid syntax, valid/invalid folding, and
lowering through bufferization.

- Additional op builders resembling those available in `tensor.empty`.
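
A minimal sketch of the new syntax (value names hypothetical):

```mlir
// Dynamic result extents are passed in brackets after the splat value.
%t = tensor.splat %v[%m, %n] : tensor<?x?xf32>
```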
2023-12-15 13:54:45 +00:00
Quinn Dawkins
fcd54b368e [mlir][tensor] Fix tensor.concat reifyResultShapes for static result dims (#75558)
When the concatenated dim is statically sized but the inputs are
dynamically sized, reifyResultShapes must return the static shape. Fixes
the implementation of the interface for tensor.concat in such cases.
2023-12-15 08:43:58 -05:00
Rafael Ubal
a8f3860bcb [mlir][tensor] Fix bug in tensor.extract(tensor.from_elements) folder (#75109)
The folder for `tensor.extract` is not operating correctly when it is
consuming the result of a `tensor.from_elements` operation.
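
For reference, a minimal sketch of the fold (value names illustrative):
constant indices are linearized row-major and mapped back to the
corresponding `tensor.from_elements` operand.

```mlir
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%t = tensor.from_elements %v0, %v1, %v2, %v3 : tensor<2x2xf32>
// Row-major linear index 1 * 2 + 0 = 2, so this folds to %v2.
%e = tensor.extract %t[%c1, %c0] : tensor<2x2xf32>
```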

The existing unit test named `@extract_from_tensor.from_elements_3d` in
`mlir/test/Dialect/Tensor/canonicalize.mlir` seems to be an attempt to
stress this code. However, this unit test creates a `tensor.from_elements`
op exclusively from constants, which gets folded away into a single
constant tensor. Therefore, the buggy code was never executed in unit
tests.

I have added a new unit test named
`@extract_from_tensor.from_elements_variable_3d` that makes sure the
`tensor.from_elements` op is not folded away by having its input
operands come directly from function arguments. The original folder code
would have made this test fail.

This bug was notably affecting the lowering of the `tosa.pad` op in the
`tosa-to-tensor` pass, where the generated code is likely to contain a
`tensor.from_elements` + `tensor.extract` op sequence.
2023-12-12 15:36:52 +00:00
Matthias Springer
c6dc9cd1fb [mlir] Fix build after 77f5b33c 2023-12-07 10:19:02 +09:00
Matthias Springer
75f6cad8e9 [mlir][tensor] tensor.generate: do not verify dynamic sizes (#74568)
Op verifiers should verify only local properties of an op. The dynamic
sizes of a `tensor.generate` op should not be verified. Dynamic sizes
that have a negative constant value should not prevent the
`tensor.generate` op from verifying.

Also share some code between the `tensor.empty` and `tensor.generate`
"dynamic dim -> static dim" canonicalization patterns.

Remove the `invalid-canonicalize.mlir` file and move the test case to
`canonicalize.mlir`. Canonicalization no longer produces IR that does
not verify (it now leaves such ops as is).
2023-12-07 08:36:07 +09:00
Rik Huijzer
68f0bc6f2e [mlir] Fix a zero stride canonicalizer crash (#74200)
This PR fixes https://github.com/llvm/llvm-project/issues/73383 and is
another shot at the refactoring proposed in
https://github.com/llvm/llvm-project/pull/72885.

---------

Co-authored-by: Kai Sasaki <lewuathe@gmail.com>
2023-12-06 07:35:18 +01:00
Rik Huijzer
c9c1b3c37f [mlir][memref] Fix an invalid dim loop motion crash (#74204)
Fixes https://github.com/llvm/llvm-project/issues/73382.

This PR replaces two assertions that were introduced in
adabce4118
(https://reviews.llvm.org/D135748). According to the enum definition of
`NotSpeculatable`, an op that invokes undefined behavior is
`NotSpeculatable`:

0c06e8745f/mlir/include/mlir/Interfaces/SideEffectInterfaces.h (L248-L258)

and both `tensor.dim` and `memref.dim` state that "If the dimension
index is out of bounds, the behavior is undefined."

It therefore seems to me that `DimOp::getSpeculatability()` should
return `NotSpeculatable` if the dimension index is out of bounds.

The added test is just a simplified version of
https://github.com/llvm/llvm-project/issues/73382.
2023-12-04 08:57:59 +01:00
Quinn Dawkins
005c83380a [mlir][tensor] Fix ReifyResultShapes implementation for tensor.concat (#74157)
Without folding the result of the initial tensor.dim, the
ReifyResultShapes implementation would be incorrect because it would
return a dynamic shape for a static result shape.
2023-12-01 19:29:56 -05:00
Quinn Dawkins
f310a5d2c1 [mlir][tensor] Add a tensor.concat operation (#72779)
This adds an operation for concatenating ranked tensors along a static
dimension, as well as a decomposition mirroring the existing lowering
from TOSA to Tensor. This offers a convergence point for "input" like
dialects that include various lowerings for concatenation operations,
easing later analysis. In the future, this op can implement the
necessary interfaces for tiling, as well as potentially add conversions
to some kind of linalg and/or memref counterpart.
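
A minimal example of the op (shapes illustrative):

```mlir
// Concatenation along dim 0; all other dimensions must match.
%c = tensor.concat dim(0) %a, %b
    : (tensor<3x8xf32>, tensor<5x8xf32>) -> tensor<8x8xf32>
```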

This patch adds the op, the decomposition, and some basic
folding/canonicalization. Replacing lowerings with the op (such as the
TOSA lowering) will come as a follow up.

See
https://discourse.llvm.org/t/rfc-tensor-add-a-tensor-concatenate-operation/74858
2023-12-01 15:05:29 -05:00
Han-Chung Wang
171cac95a7 [mlir][tensor] Fold padding_value away for pack ops when possible. (#74005)
If we can infer statically that there are no incomplete tiles, we can
remove the optional padding operand.
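
A sketch of a foldable case (shapes illustrative; assumes %src, %dest,
and %cst are defined):

```mlir
// 64 and 128 divide evenly by the tiles 8 and 32, so no padding can
// ever be read and padding_value(%cst : f32) folds away.
%0 = tensor.pack %src padding_value(%cst : f32)
    inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<64x128xf32> -> tensor<8x4x8x32xf32>
```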

Fixes https://github.com/openxla/iree/issues/15417
2023-12-01 11:12:58 -08:00
Matthias Springer
68386a74ba [mlir][tensor] Fix crash when canonicalizing invalid IR (#72888)
This commit fixes a crash of the canonicalizer when there are slice ops
with offset/size SSA values that have a negative constant value. Such
ops are invalid if they are reachable, and their offsets/sizes should not
be folded to static integer values. (But such ops may appear in
non-reachable blocks.)
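
An illustrative instance of the problematic IR (names assumed):

```mlir
// A negative constant size makes this op invalid if it is reachable;
// the canonicalizer now refuses to fold %c-1 into a static size.
%c-1 = arith.constant -1 : index
%s = tensor.extract_slice %t[0] [%c-1] [1]
    : tensor<10xf32> to tensor<?xf32>
```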

This commit fixes #71150.
2023-11-21 09:20:18 +01:00
Rik Huijzer
1949fe90bf [mlir] Verify non-negative offset and size (#72059)
In #71153, the `memref.subview` canonicalizer crashes due to a negative
`size` being passed as an operand. During `SubViewOp::verify` this
negative `size` is not yet detectable since it is dynamic and only
available after constant folding, which happens during the
canonicalization passes. As discussed in
<https://discourse.llvm.org/t/rfc-more-opfoldresult-and-mixed-indices-in-ops-that-deal-with-shaped-values/72510>,
the verifier should not be extended as it should "only verify local
aspects of an operation".

This patch fixes #71153 by not folding in aforementioned situation.

Also, this patch adds a basic offset and size check in the
`OffsetSizeAndStrideOpInterface` verifier.

Note: only `offset` and `size` are checked because `stride` is allowed
to be negative
(54d81e49e3).
2023-11-16 07:42:37 +01:00
Rik Huijzer
d0da3d8393 [mlir][tensor] Fold when source is const (#71643)
Fixes https://github.com/llvm/llvm-project/issues/60656.

This patch implements a basic fold for various reshape/resize tensor
operations. Specifically, the folding removes tensor reshape/resize ops
when they are applied to a constant tensor. For example, the following
function:

```mlir
func.func @main(%dest : tensor<8x16x8x32xf32>) -> tensor<8x16x8x32xf32> {
  %cst = arith.constant dense<1.000000e-01> : tensor<64x128xf32>
  %0 = tensor.pack %cst outer_dims_perm = [1, 0] inner_dims_pos = [0, 1]
    inner_tiles = [8, 32] into %dest : tensor<64x128xf32> -> tensor<8x16x8x32xf32>
  return %0 : tensor<8x16x8x32xf32>
}
```
will be changed into the following with `mlir-opt -canonicalize`:
```mlir
func.func @main(%arg0: tensor<8x16x8x32xf32>) -> tensor<8x16x8x32xf32> {
  %cst = arith.constant dense<1.000000e-01> : tensor<8x16x8x32xf32>
  return %cst : tensor<8x16x8x32xf32>
}
```

As a side-note, this patch is essentially an extension of
f79f430d4b.
2023-11-09 20:36:32 +01:00
MaheshRavishankar
14e7846d6e [mlir][Tensor] Fold destination-style ops into tensor.unpack operation. (#71468)
The destination operand of the `tensor.unpack` operation is only needed
to carry shape information. So if the producer of the destination
operand implements the `DestinationStyleOpInterface`, then fold it into
the `tensor.unpack` operation by replacing the destination operand with
the destination for the source.
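
Sketched with hypothetical values: since the dest only carries the
shape, a DPS producer like `linalg.fill` can be bypassed.

```mlir
%fill = linalg.fill ins(%cst : f32)
    outs(%empty : tensor<128x256xf32>) -> tensor<128x256xf32>
%u = tensor.unpack %src inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %fill : tensor<16x8x8x32xf32> -> tensor<128x256xf32>
// Folds to: tensor.unpack ... into %empty (the fill is never read).
```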
2023-11-07 21:42:32 -08:00
Felix Schneider
24e33b5945 [mlir] Implement DestinationStyleOpInterface for scf::ForallOp (#66981)
`scf::ForallOp` has `shared_outs` tensor operands into which partial
results are inserted by the parallel terminator. The `scf::ForallOp`
returns one tensor for each `shared_out`, which then contains the
combined result from all threads. Since the parallel terminator cannot
change the shape of the `shared_out`, ForallOp is a `DestinationStyleOp`,
and this patch implements the interface by declaring the `outputs`
operands as `inits` in the language of the DPS interface.

For this change to work, we need to add an exception to the Pattern that
folds `tensor.cast` Ops into DPS Ops because `scf::Forall` needs special
handling of its `BlockArgument` Type during this folding.
2023-09-25 09:06:25 +02:00
Kohei Yamaguchi
ca8cf90c8c [mlir][tensor] Check the EmptyOp's dynamicSize to be non-negative (#65577)
This patch addresses a crash that occurs when negative dynamic sizes are
provided to `tensor.empty` by adding a check to ensure that dynamic
sizes are non-negative.
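
An illustrative trigger (names hypothetical):

```mlir
// Folding %c-2 into the type would create tensor<-2xf32>; the new
// check rejects negative dynamic sizes instead of crashing.
%c-2 = arith.constant -2 : index
%0 = tensor.empty(%c-2) : tensor<?xf32>
```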

Fixes #64064
2023-09-10 18:38:54 -07:00
Amy Wang
0aa459fc8a [MLIR][Tensor] Add Destination style RewritePattern for DimOp. (#65780)
This enables canonicalization to fold away unnecessary tensor.dim ops,
which in turn enables folding away other operations, as can be seen in
conv_tensors_dynamic, where affine.min operations were folded away.
2023-09-09 06:01:56 -04:00
Fangrui Song
7557530f42 [mlir] Fix duplicate word typos; NFC
Those fixes were taken from https://reviews.llvm.org/D137338
2023-09-01 20:53:08 -07:00
Mikhail Goncharov
0a0aff2d24 fix unused variable warnings in conditionals
warning was updated in 92023b1509
2023-08-30 19:09:27 +02:00
Matthias Springer
933fde3d1c [mlir][tensor][NFC] Simplify extract_slice(cast) folder
The type computation part is not needed.

Differential Revision: https://reviews.llvm.org/D156652
2023-07-31 15:07:49 +02:00
Matthias Springer
b2826c0209 [mlir][NFC] Move offsets/sizes/strides helper to dialect utils and interface header
* Move `foldDynamicIndexList` to `DialectUtils` and simplify function.
* Move `OpWithOffsetSizesAndStridesConstantArgumentFolder` to `ViewLikeInterface` and add documentation.

Differential Revision: https://reviews.llvm.org/D156581
2023-07-31 14:53:14 +02:00
Rik Huijzer
8b61ae4e93 [MLIR][Tensor] Avoid crash on negative dimensions
In https://reviews.llvm.org/D151611, a check was added to the tensor
verifier to emit an error on negative tensor dimensions. This check
allowed dynamic dimensions, hence negative dimensions were still able to
get through the verifier. This is a problem in situations such as #60558,
where the dynamic dimension is converted to a static (and possibly
negative) dimension by another pass in the compiler. This patch fixes
that by doing another check during the `StaticTensorGenerate` conversion
and returning a failure if the dimension is negative.

As a side-note, I have to admit that I do not know why returning a failure in
`StaticTensorGenerate` gives a nice "tensor dimensions must be non-negative"
error. I suspect that the verifier runs again when `return failure()` is called,
but I am not sure.

Fixes #60558.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D155728
2023-07-20 10:09:34 +02:00
Matthias Springer
6596b0dde8 [mlir][tensor] Clean up tensor::DimOp usage
* Remove duplicate functions. `tensor::getMixedSize` and `tensor::getMixedSizes` should be used.
* Use `tensor::getMixedSize` instead of `createOrFold<tensor::DimOp>`. This is more efficient: `createOrFold` will create an op and immediately try to fold it. In case of a static dimension size, an attribute can be used directly.

Differential Revision: https://reviews.llvm.org/D153332
2023-06-22 10:56:17 +02:00