Commit Graph

239 Commits

Hanhan Wang
ead535b2f9 [mlir][tensor] Add producer fusion for tensor.unpack op.
Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D141151
2023-01-06 14:13:11 -08:00
Matthias Springer
6176d6a93e [mlir][tensor] Support parallel_insert_slice in MergeConsecutiveInsertExtractSlicePatterns.cpp
Differential Revision: https://reviews.llvm.org/D141116
2023-01-06 12:33:45 +01:00
liqinweng
5c18ae3135 [MLIR][Tensor] Canonicalize expand/collapse_shape of splat to splat
Collapsing / expanding the shape of a splat value can be replaced with a single
`tensor.splat` operation that directly produces the result type. Replace these cases accordingly.
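A minimal sketch of the rewrite (shapes and SSA names are illustrative, not taken from the patch):

```
%splat = tensor.splat %cst : tensor<2x4xf32>
%collapsed = tensor.collapse_shape %splat [[0, 1]]
    : tensor<2x4xf32> into tensor<8xf32>
// rewritten to a single splat of the result type:
// %collapsed = tensor.splat %cst : tensor<8xf32>
```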

Reviewed By: rsuderman

Differential Revision: https://reviews.llvm.org/D140552
2023-01-04 13:07:55 -08:00
Mehdi Amini
ab32f5b7ef Apply clang-tidy fixes for readability-simplify-boolean-expr in BufferizableOpInterfaceImpl.cpp (NFC) 2022-12-28 22:42:39 +00:00
Mehdi Amini
1211af761f Apply clang-tidy fixes for llvm-else-after-return in TensorOps.cpp (NFC) 2022-12-22 15:33:01 +00:00
Fangrui Song
cbb0981388 [mlir] llvm::Optional::value => operator*/operator->
std::optional::value() has undesired exception checking semantics and is
unavailable in older Xcode (see _LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS). The
call sites block std::optional migration.
2022-12-17 19:07:38 +00:00
Ramkumar Ramachandra
22426110c5 mlir/tblgen: use std::optional in generation
This is part of an effort to migrate from llvm::Optional to
std::optional. This patch changes the way mlir-tblgen generates .inc
files, and modifies tests and documentation appropriately. It is a "no
compromises" patch, and doesn't leave the user with an unpleasant mix of
llvm::Optional and std::optional.

A non-trivial change has been made to ControlFlowInterfaces to split one
constructor into two, relating to a build failure on Windows.

See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>

Differential Revision: https://reviews.llvm.org/D138934
2022-12-17 11:13:26 +01:00
Hanhan Wang
83396d8549 [mlir][tensor] Implement TilingInterface for unpack op
The main issue when tiling an unpack op is handling incomplete tiles. Since all
the dimensions are orthogonal, it suffices to discuss the 1-D unpack case. The
core idea is to make the input slice consist of complete tiles. In this case,
a larger unpacked tile is created, and an extract_slice op is needed to shift
and truncate the output.

Take Nn_to_N as an example. Say that N=32, n=8, and tiling_size=15. The
coordinates of the second tile (i.e., result[15..31]) are [(1, 7), (2, 0),
(2, 1), ..., (3, 6), (3, 7)]. The first and last rows are incomplete in terms
of the input. An unpack op cannot be expressed directly on these coordinates,
because the input has higher rank and the coordinate arithmetic involves mod
and ceilDiv, which is very tricky.

To represent the unpack op, we have to complete the rows, i.e., the input
coordinates start at (1, 0) and end at (3, 7). In this context, the tiled
unpack produces 3 * n elements because there are 3 rows in total. Followed by
a tensor.extract_slice op, we obtain the actual result, as in the sketch
below.
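A minimal sketch of this example (SSA names such as %packed and %dest are assumed to be defined elsewhere; shapes follow the Nn_to_N numbers above):

```
// Rows 1..3 of the packed input (tensor<4x8xf32> for N=32, n=8) cover the
// requested result elements, so a slice of complete rows is unpacked first.
%in = tensor.extract_slice %packed[1, 0] [3, 8] [1, 1]
    : tensor<4x8xf32> to tensor<3x8xf32>
%big = tensor.unpack %in inner_dims_pos = [0] inner_tiles = [8] into %dest
    : tensor<3x8xf32> -> tensor<24xf32>
// The tile starting at result index 15 begins at offset 15 - 8 = 7 within the
// 24 unpacked elements; extract_slice shifts and truncates to the tile size.
%tile = tensor.extract_slice %big[7] [15] [1] : tensor<24xf32> to tensor<15xf32>
```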

If the tiling sizes are multiples of the inner tile sizes, it is a perfect
tiling case and the larger input and output are not needed.

Reviewed By: chelini

Differential Revision: https://reviews.llvm.org/D139362
2022-12-16 13:06:52 -08:00
Matthias Springer
e5dc99e642 [mlir][tensor][bufferize] Improve bufferization of DimOp/RankOp
The tensor operands do not bufferize to a memory read.

Differential Revision: https://reviews.llvm.org/D140007
2022-12-14 12:47:46 +01:00
Aliia Khasanova
ded75a282a Remove sentinel argument from dispatchIndexOpFoldResults.
Post clean-up after merger of kDynamicSize and kDynamicStrideOrOffset.

Differential Revision: https://reviews.llvm.org/D139929
2022-12-13 14:04:46 +01:00
Nicolas Vasilache
93bbcffc7e [mlir][Transform] Make FuseIntoContainingOp support rank-reducing extract slices
This fixes an issue where rank-reducing + fusion would not interop properly.

Differential Revision: https://reviews.llvm.org/D139844
2022-12-12 12:55:08 -08:00
Matthias Springer
be630f07de [mlir][bufferize] Implement BufferizableOpInterface for tensor.empty
The op is not bufferizable but should be analyzable (for `EliminateEmptyTensors`, which uses the bufferization infrastructure).

Also improve debugging functionality and error messages.

Also adds a missing pass to the sparse pipeline. (tensor.empty should be replaced with bufferization.alloc_tensor, but bufferization sometimes used to work without this replacement, depending on how the tensor.empty is used. Now we always fail explicitly.)
2022-12-12 14:19:38 +01:00
Alexander Belyaev
f6fb0a4f35 [mlir] Make patterns for folding tensor.empty optional.
At the moment, they are a part of EmptyOp::getCanonicalizationPatterns. When
extract_slice(tensor.empty) is rewritten as a new tensor.empty, it could
happen that we end up with two tensor.empty ops, since the original
tensor.empty can have two users. After bufferization such cases result in two
allocations.
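A minimal sketch of the (now optional) fold, with illustrative shapes:

```
%e = tensor.empty() : tensor<16x16xf32>
%s = tensor.extract_slice %e[0, 0] [8, 8] [1, 1]
    : tensor<16x16xf32> to tensor<8x8xf32>
// rewritten to a fresh empty tensor of the slice type:
// %s = tensor.empty() : tensor<8x8xf32>
// If %e has another user, both tensor.empty ops remain and may turn into two
// allocations after bufferization.
```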

Differential Revision: https://reviews.llvm.org/D139308
2022-12-07 23:01:34 +01:00
Matthias Springer
9cdf6b641d [mlir][tensor] Support parallel_insert_slice in reassociative reshape folder
Differential Revision: https://reviews.llvm.org/D139540
2022-12-07 16:25:10 +01:00
Lorenzo Chelini
87ecf9d155 [MLIR][Tensor] Add custom builder for unpack op
Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D139344
2022-12-07 12:40:45 +01:00
Hanhan Wang
0f297cad4d [mlir][tensor][linalg] Introduce DataLayoutPropagation pass.
It introduces a pattern that swaps `linalg.generic + tensor.pack` into
`tensor.pack + linalg.generic`. It requires all iteration types to be
parallel and the indexing map of the output operand to be the identity; both
restrictions can be relaxed in the future.

The user can decide whether the propagation should be applied or not by
passing a control function.
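A minimal before/after sketch under these restrictions (an element-wise negate; shapes, maps, and SSA names are illustrative, not taken from the patch):

```
#map = affine_map<(d0, d1) -> (d0, d1)>
%res = linalg.generic
    {indexing_maps = [#map, #map], iterator_types = ["parallel", "parallel"]}
    ins(%src : tensor<128x256xf32>) outs(%init : tensor<128x256xf32>) {
  ^bb0(%in: f32, %out: f32):
    %neg = arith.negf %in : f32
    linalg.yield %neg : f32
} -> tensor<128x256xf32>
%packed = tensor.pack %res inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<128x256xf32> -> tensor<16x8x8x32xf32>
// After propagation, %src is packed first and the element-wise generic runs
// directly on the packed tensor<16x8x8x32xf32> layout.
```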

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D138882
2022-12-06 15:00:07 -08:00
Hanhan Wang
193cefd1b1 [mlir][tensor] Adapt FoldTensorCastProducerOp pattern on DPS interface.
This revision adapts the pattern in Linalg to work on the DPS interface and
adds it to the canonicalization patterns of the tensor dialect. InsertSliceOp
is skipped in the pattern because it has its own logic for folding tensor.cast
ops.

Reviewed By: pifon2a

Differential Revision: https://reviews.llvm.org/D139375
2022-12-06 12:13:37 -08:00
Hanhan Wang
0d03ba62c5 [mlir][tensor] Implement TilingInterface for tensor.pack op.
We can compute the offsets and sizes for the input slice because the
iteration domain is defined over the outer loops. If a dimension is tiled,
the i-th input index is the product of offset_i and inner_tile_i.

Unlike tiling a pad op, we do not have to deal with reading zero data from the
input, because the tiling sizes apply to the packed outer dimensions: for each
packed tile we read either the entire tile or a partial tile. The scf.if and
tensor.generate ops are not needed in this context.
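A minimal sketch of the offset arithmetic (numbers are illustrative, not from the patch):

```
// With inner_tiles = [8], an outer-loop tile at offset 2 of size 4 reads the
// input starting at index 2 * 8 = 16 and spanning 4 * 8 = 32 elements.
%in_slice = tensor.extract_slice %src[16] [32] [1]
    : tensor<64xf32> to tensor<32xf32>
```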

Co-authored-by: Lorenzo Chelini <l.chelini@icloud.com>

Reviewed By: rengolin, mravishankar

Differential Revision: https://reviews.llvm.org/D138631
2022-12-05 14:00:10 -08:00
Adrian Kuegel
94d3df2015 [mlir][Tensor] Apply ClangTidy performance finding (NFC) 2022-12-05 11:22:20 +01:00
Matthias Springer
1403073790 [mlir][tensor] Fold rank-reducing insert_slice with inverse collapse_shape
Differential Revision: https://reviews.llvm.org/D139221
2022-12-05 09:17:29 +01:00
Matthias Springer
50a2bb95ab [mlir][tensor] Fold rank-reducing extract_slice with inverse expand_shape
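A minimal sketch of the fold (illustrative shapes, not from the patch):

```
%0 = tensor.extract_slice %t[0, 0, 0] [1, 16, 32] [1, 1, 1]
    : tensor<8x16x32xf32> to tensor<16x32xf32>
%1 = tensor.expand_shape %0 [[0, 1], [2]]
    : tensor<16x32xf32> into tensor<1x16x32xf32>
// folds to a single, non-rank-reducing extract_slice:
// %1 = tensor.extract_slice %t[0, 0, 0] [1, 16, 32] [1, 1, 1]
//     : tensor<8x16x32xf32> to tensor<1x16x32xf32>
```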
Differential Revision: https://reviews.llvm.org/D139220
2022-12-05 09:17:24 +01:00
Kazu Hirata
1a36588ec6 [mlir] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-03 18:50:27 -08:00
Matthias Springer
f92c7506e3 Revert "[mlir][tensor] Fold rank-reducing extract_slice with inverse expand_shape"
This reverts commit a076f57a1a.
2022-12-02 21:22:20 +01:00
Matthias Springer
c837a94754 Revert "[mlir][tensor] Fold rank-reducing insert_slice with inverse collapse_shape"
This reverts commit 1522a3b7b3.
2022-12-02 21:22:04 +01:00
Matthias Springer
1522a3b7b3 [mlir][tensor] Fold rank-reducing insert_slice with inverse collapse_shape
Differential Revision: https://reviews.llvm.org/D139104
2022-12-02 10:42:52 +01:00
Matthias Springer
a076f57a1a [mlir][tensor] Fold rank-reducing extract_slice with inverse expand_shape
Differential Revision: https://reviews.llvm.org/D139103
2022-12-02 10:42:46 +01:00
Lorenzo Chelini
44f7356005 [MLIR][Tensor] Add canonicalization for UnpackOp
pack(unpack(x)) -> x
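A minimal sketch (layouts and shapes are illustrative, not from the patch):

```
%u = tensor.unpack %x inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %ud : tensor<16x8x8x32xf32> -> tensor<128x256xf32>
%p = tensor.pack %u inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %pd : tensor<128x256xf32> -> tensor<16x8x8x32xf32>
// %p packs with the same layout that %u unpacked, so uses of %p are replaced
// with %x.
```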

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D138917
2022-12-01 15:17:50 +01:00
Hanhan Wang
a971d51932 [mlir][tensor] Enhance the verifier of pack and unpack op.
The outer_dims_perm must be a permutation or empty.

Reviewed By: chelini

Differential Revision: https://reviews.llvm.org/D138936
2022-11-29 15:47:52 -08:00
Hanhan Wang
e86169f090 [mlir][tensor] Add a custom builder for pack op.
The `paddingValue` and `outerDimsPerm` are optional for the op, and
`innerTiles` can mix static and dynamic sizes. Add a custom builder to make
building pack ops easier.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D138860
2022-11-28 15:18:42 -08:00
Matthias Springer
13593dc9dc [mlir][tensor][bufferize] Fix tensor.insert_slice regression
This reverts D132662 (apart from overall cleanups), which introduced an overly aggressive optimization for tensor.insert_slice bufferization. Instead, bufferizesToMemoryRead is improved to handle some of these cases. The remaining cases can still bufferize efficiently when the canonicalizer is run before bufferization.

Differential Revision: https://reviews.llvm.org/D138745
2022-11-26 19:14:33 +01:00
Lorenzo Chelini
a9733b8a5e [MLIR] Adopt DenseI64ArrayAttr in tensor, memref and linalg transform
This commit is a first step toward removing inconsistencies between dynamic
and static attributes (i64 v. index) by dropping `I64ArrayAttr` and
using `DenseI64ArrayAttr` in Tensor, Memref and Linalg Transform ops.
In Linalg Transform ops only `TileToScfForOp` and `TileOp` have been updated.

See related discussion: https://discourse.llvm.org/t/rfc-inconsistency-between-dynamic-and-static-attributes-i64-v-index/66612/1

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D138567
2022-11-25 09:43:30 +01:00
Matthias Springer
f2d91a7ae1 [mlir][utils] Fix invalid reshapes in ComposeCollapseOfExpandOp
Do not generate CollapseShapeOps/ExpandShapeOps that have the same source and result shape. Generate casts instead. Such reshapes became invalid with D138498.

Differential Revision: https://reviews.llvm.org/D138557
2022-11-23 13:52:00 +01:00
Matthias Springer
b9745ad812 [mlir][tensor/memref] Disallow Collapse/ExpandShapeOps that do not reduce/increase the rank
CollapseShapeOps that do not reduce the rank and ExpandShapeOps that do not increase the rank are now invalid.

Differential Revision: https://reviews.llvm.org/D138498
2022-11-23 09:19:35 +01:00
Matthias Springer
6052b17aab [mlir][tensor] Add dim(expand_shape/collapse_shape) folding
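A minimal sketch (illustrative; the exact rewritten form depends on the source shape):

```
%c0 = arith.constant 0 : index
%cs = tensor.collapse_shape %t [[0, 1]] : tensor<?x8xf32> into tensor<?xf32>
%d = tensor.dim %cs, %c0 : tensor<?xf32>
// The dim is now answered in terms of the source, roughly
//   %d = (tensor.dim %t, %c0) * 8
// so the reshape is not needed to compute the size.
```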
Differential Revision: https://reviews.llvm.org/D138487
2022-11-22 17:34:49 +01:00
Lorenzo Chelini
85e38e5292 [MLIR][Tensor] Use the existing helper function applyPermutationToVector (NFC)
Avoid duplicate code by using an existing helper function to interchange a
vector based on a permutation. Address comments that emerged after landing
D138119.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D138480
2022-11-22 11:34:44 +01:00
Lorenzo Chelini
9aa505a28d Introduce tensor.pack and tensor.unpack operations
Pack and Unpack return new tensors in which the individual elements are
reshuffled according to the packing specification. This has the consequence of
modifying the canonical order in which a given operator (e.g., Matmul)
accesses the individual elements. After bufferization, this typically
translates to increased access locality and better cache behavior, e.g.,
eliminating cache-line splitting.
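A minimal sketch of a pack (shapes and tile sizes are illustrative, not from the patch):

```
// Pack a 128x256 operand into 8x32 inner tiles; the result keeps the outer
// tile grid (16x8) followed by the inner tile dims (8x32).
%dest = tensor.empty() : tensor<16x8x8x32xf32>
%packed = tensor.pack %src inner_dims_pos = [0, 1] inner_tiles = [8, 32]
    into %dest : tensor<128x256xf32> -> tensor<16x8x8x32xf32>
// tensor.unpack reverses the layout back to tensor<128x256xf32>.
```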

Co-authored-by: Mahesh Ravishankar <ravishankarm@google.com>
Co-authored-by: Han-Chung Wang <hanchung@google.com>

RFC: https://discourse.llvm.org/t/rfc-tensor-pack-and-tensor-unpack/66408/1

Reviewed By: nicolasvasilache, rengolin, hanchung

Differential Revision: https://reviews.llvm.org/D138119
2022-11-22 09:11:59 +01:00
Lei Zhang
9bb633741a [mlir][bufferization] Support general Attribute as memory space
MemRef has accepted a general Attribute as memory space for a long time.
This commit updates the bufferization side to catch up, which allows
downstream users to plug in customized symbolic memory spaces. It also
eliminates quite a few calls to `getMemorySpaceAsInt`, which is deprecated.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D138330
2022-11-21 09:40:50 -05:00
Aliia Khasanova
399638f98c Merge kDynamicSize and kDynamicSentinel into one constant.

Differential Revision: https://reviews.llvm.org/D138282
2022-11-21 13:01:26 +00:00
Mahesh Ravishankar
24f9293de8 [mlir][Tensor] Allow builders of tensor.empty to accept encoding attribute.
`RankedTensorType` can have an optional encoding attribute. Allowing the
builders of `tensor.empty` to optionally accept the encoding attribute makes
it possible to build empty tensors whose type carries that encoding.

Reviewed By: nicolasvasilache, hanchung, springerm

Differential Revision: https://reviews.llvm.org/D137297
2022-11-03 20:30:12 +00:00
Matthias Springer
09dfb44193 [mlir][tensor][bufferize] Support memory_space for tensor.pad
This change adds memory space support to tensor.pad. (tensor.generate and tensor.from_elements do not support memory spaces yet.)

The memory space is inferred from the buffer of the source tensor.

Instead of lowering tensor.pad to tensor.generate + tensor.insert_slice, it is now lowered to bufferization.alloc_tensor (with the correct memory space) + linalg.map + tensor.insert_slice.

Memory space support for the remaining two tensor ops is left for a later point, as this requires some more design discussions.

Differential Revision: https://reviews.llvm.org/D136265
2022-10-27 12:29:57 +02:00
Matthias Springer
c1f0a15c65 [mlir][tensor][bufferize] Lower tensor.generate to linalg.map
There is no memref equivalent of tensor.generate. The purpose of this change is to avoid creating scf.parallel loops during bufferization.

Differential Revision: https://reviews.llvm.org/D136767
2022-10-27 12:03:13 +02:00
Matthias Springer
2d5edc644d [mlir][bufferize] Provide default BufferizableOpInterface impl for destination style ops
tensor.insert and tensor.insert_slice (as destination style ops) no longer need to implement the entire BufferizableOpInterface.

Differential Revision: https://reviews.llvm.org/D136347
2022-10-27 10:52:47 +02:00
Matthias Springer
cfaf3292df [mlir][tensor] Disallow unranked tensors for tensor.extract/insert
When writing a tensor.extract/tensor.insert, the rank of the tensor is implied by the number of specified indices. When extracting from/inserting into an unranked tensor, it should first be cast to a ranked version.
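A minimal sketch of the ranked-cast idiom (SSA names are illustrative):

```
%ranked = tensor.cast %unranked : tensor<*xf32> to tensor<?x?xf32>
%elem = tensor.extract %ranked[%i, %j] : tensor<?x?xf32>
```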

Differential Revision: https://reviews.llvm.org/D136756
2022-10-27 10:09:31 +02:00
Mahesh Ravishankar
188b041bf5 [mlir][Tensor] Change createDimValues to return a list of OpFoldResults.
Reviewed By: nicolasvasilache, hanchung, ThomasRaoux

Differential Revision: https://reviews.llvm.org/D136733
2022-10-27 03:13:56 +00:00
Mahesh Ravishankar
94b8469a88 [mlir][Tensor] Add a helper build method for pad operations with constant padding.
Drop the `createPadScalarOp` from Utils.h since it is a duplicate of
the `build` method added here.

Differential Revision: https://reviews.llvm.org/D136493
2022-10-24 18:11:53 +00:00
Matthias Springer
b169643f3a [mlir][interfaces] Remove getDestinationOperands from TilingInterface
`getDestinationOperands` was almost a duplicate of `DestinationStyleOpInterface::getOutputOperands`. Now that the interface has been moved to mlir/Interfaces, it is no longer needed.

Differential Revision: https://reviews.llvm.org/D136240
2022-10-24 09:26:19 +02:00
Christopher Bate
446981bdb6 [mlir][tensor] ExtractSliceFromReshape: handle collapsing of unit dim edge cases
Prior to this change, the "ExtractSliceFromReshape" pattern would transform

```
%collapsed = tensor.collapse_shape %input [[0, 1], [2]]
                : tensor<1x11x100xf32> into tensor<11x100xf32>
%slice = tensor.extract_slice %collapsed [%offt, 0] [%size, 100] [1, 1]
                : tensor<11x100xf32> to tensor<?x100xf32>
```

into a loop iterating over the range `%size - %offt` that pieces
together multiple sub-slices of `%input` along the first dimension. This
is correct but obviously inefficient. The technical condition is that
collapsing at-most-one non-unit dimension of `%src` will not result in a
subsequent slice along the corresponding dimension of `%collapsed`
mapping across discontinuities in the index space of `%src`. Thus, the
definition of a "linearized dimension" (from the perspective of
`tensor.collapse_shape`) is updated to reflect this condition.

The transform will now generate

```
%slice = tensor.extract_slice %input [0, %offt, 0] [1, %size, 100] [1, 1, 1]
            : tensor<1x11x100xf32> to tensor<1x?x100xf32>
%result = tensor.collapse_shape %slice [[0, 1], [2]]
            : tensor<1x?x100xf32> into tensor<?x100xf32>
```

which can be further canonicalized.

Additional tests are added to check this family of edge cases.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D135726
2022-10-22 13:29:34 -06:00
Lorenzo Chelini
9bcac22be5 [MLIR][Tensor] Remove assert in PadOp builder
The assert is misplaced, as the result type is allowed to be null: a few
lines below, the result type is inferred if a nullptr is passed. Besides,
this behavior is described in the documentation of the builder.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D136262
2022-10-19 18:02:50 +02:00
Sanjoy Das
adabce4118 Correctly model undefined behavior in {tensor|memref}.dim
These operations have undefined behavior if the index is not less than the rank of the source tensor / memref, so they cannot be freely speculated like they were before this patch.  After this patch we speculate them only if we can prove that they don't have UB.
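A minimal sketch (illustrative; %i is a runtime value):

```
// If %i >= 2 (the rank of %t), this op has undefined behavior, so it may only
// be speculated (e.g. hoisted above a guard) when the index is provably in
// bounds.
%d = tensor.dim %t, %i : tensor<?x?xf32>
```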

Depends on D135505.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D135748
2022-10-12 17:30:13 -07:00
Matthias Springer
6cdd34b973 [mlir][tensor][bufferize] Bufferize inserts into equivalent tensors in-place
Inserting a tensor into an equivalent tensor is a no-op after bufferization. No alloc is needed.
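A minimal sketch (shapes and SSA names are illustrative):

```
%s = tensor.extract_slice %t[0, 0] [4, 4] [1, 1]
    : tensor<8x8xf32> to tensor<4x4xf32>
%r = tensor.insert_slice %s into %t[0, 0] [4, 4] [1, 1]
    : tensor<4x4xf32> into tensor<8x8xf32>
// %r is equivalent to %t, so the insert needs no copy or allocation after
// bufferization.
```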

Differential Revision: https://reviews.llvm.org/D132662
2022-10-06 15:06:33 +09:00