Commit Graph

78 Commits

Author SHA1 Message Date
Matthias Springer
6b76c4eafd [mlir][linalg] Bring populateFoldUnitExtentDimsVia(Reshapes/Slices)Patterns in sync
Make sure that both functions populate patterns with the same functionality. Both should be refactored at some point so that canonicalization patterns are not populated.

Differential Revision: https://reviews.llvm.org/D140372
2022-12-20 13:10:12 +01:00
Matthias Springer
d2b070d3c9 [mlir][linalg][NFC] Split populateFoldUnitExtentDimsViaReshapesPatterns
MoveInitOperandsToInput is put into a separate populate... function because it can interfere with certain transformations.

Differential Revision: https://reviews.llvm.org/D140091
2022-12-15 12:04:40 +01:00
Matthias Springer
e07149c91f [mlir][linalg] Add option to generate rank-reducing slices in DropUnitDims
This change extends the `ReplaceUnitExtents` pattern so that users can choose between two strategies for generating rank reductions:
* CollapseShapeOp / ExpandShapeOp (was already implemented but code was cleaned up; default strategy)
* rank-reducing ExtractSliceOp / InsertSliceOp
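A minimal sketch of the two strategies on a unit-dim tensor (`%t` and the `1x4` shape are hypothetical, not from the patch):

```
// Strategy 1: reassociating reshape (tensor.collapse_shape, paired with an
// expand_shape where the original type must be restored).
%collapsed = tensor.collapse_shape %t [[0, 1]] : tensor<1x4xf32> into tensor<4xf32>

// Strategy 2: rank-reducing slice ops.
%sliced   = tensor.extract_slice %t[0, 0] [1, 4] [1, 1] : tensor<1x4xf32> to tensor<4xf32>
%inserted = tensor.insert_slice %sliced into %t[0, 0] [1, 4] [1, 1] : tensor<4xf32> into tensor<1x4xf32>
```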

Also add helper functions to the memref dialect that we already have on the tensor dialect: `getMixedSizes`, `createCanonicalRankReducingSubViewOp`, `rankReduceIfNeeded`.

We already use ReassociationIndices instead of ReassociationExprs in many other places, and this makes the code easier to read. Also add a new test case (which also passed before).

Differential Revision: https://reviews.llvm.org/D139947
2022-12-14 14:10:04 +01:00
Alexander Belyaev
f6fb0a4f35 [mlir] Make patterns for folding tensor.empty optional.
At the moment, they are a part of EmptyOp::getCanonicalizationPatterns. When
extract_slice(tensor.empty) is rewritten as a new tensor.empty, it could
happen that we end up with two tensor.empty ops, since the original
tensor.empty can have two users. After bufferization such cases result in two
allocations.
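A hypothetical IR sketch of the situation described above (`%cst` and the shapes are illustrative, not from the patch):

```
// Before folding: %empty has two users (the slice and the fill).
%empty = tensor.empty() : tensor<8x8xf32>
%slice = tensor.extract_slice %empty[0, 0] [4, 4] [1, 1] : tensor<8x8xf32> to tensor<4x4xf32>
%fill  = linalg.fill ins(%cst : f32) outs(%empty : tensor<8x8xf32>) -> tensor<8x8xf32>
// After the (now optional) folding, %slice is rewritten to its own
//   tensor.empty() : tensor<4x4xf32>
// leaving two tensor.empty ops and, after bufferization, two allocations.
```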

Differential Revision: https://reviews.llvm.org/D139308
2022-12-07 23:01:34 +01:00
Kazu Hirata
1a36588ec6 [mlir] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-03 18:50:27 -08:00
Hanhan Wang
9b16d9d271 [mlir][linalg] Add a new pattern to handle folding unit reduction dims.
The output operands will be added to the input operands if the generic op (on tensors)
becomes an elementwise operation. The outputs of the generic op remain unchanged; if they
become unused, they will be cleaned up by the ReplaceWithEmptyTensorIfUnused pattern.
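A sketch of the kind of rewrite this pattern performs (illustrative shapes and body, not code from the patch):

```
// Before: the unit-extent "reduction" dimension d1 makes this op a reduction.
func.func @before(%in: tensor<8x1xf32>, %out: tensor<8xf32>) -> tensor<8xf32> {
  %res = linalg.generic {
      indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                       affine_map<(d0, d1) -> (d0)>],
      iterator_types = ["parallel", "reduction"]}
      ins(%in : tensor<8x1xf32>) outs(%out : tensor<8xf32>) {
    ^bb0(%a: f32, %b: f32):
      %sum = arith.addf %a, %b : f32
      linalg.yield %sum : f32
  } -> tensor<8xf32>
  return %res : tensor<8xf32>
}

// After (sketch): the op is elementwise; the former output is also passed as an
// input so its values are still read, and the now-unused out can later be
// replaced via ReplaceWithEmptyTensorIfUnused.
func.func @after(%in: tensor<8x1xf32>, %out: tensor<8xf32>) -> tensor<8xf32> {
  %collapsed = tensor.collapse_shape %in [[0, 1]] : tensor<8x1xf32> into tensor<8xf32>
  %res = linalg.generic {
      indexing_maps = [affine_map<(d0) -> (d0)>,
                       affine_map<(d0) -> (d0)>,
                       affine_map<(d0) -> (d0)>],
      iterator_types = ["parallel"]}
      ins(%collapsed, %out : tensor<8xf32>, tensor<8xf32>)
      outs(%out : tensor<8xf32>) {
    ^bb0(%a: f32, %b: f32, %unused: f32):
      %sum = arith.addf %a, %b : f32
      linalg.yield %sum : f32
  } -> tensor<8xf32>
  return %res : tensor<8xf32>
}
```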

This is https://reviews.llvm.org/D138251, plus a cmake dep fix.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D138843
2022-11-28 14:14:43 -08:00
Ramkumar Ramachandra
537137ece1 mlir/linalg: use std::optional
This is part of an effort to migrate from llvm::Optional to std::optional:

See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>
2022-11-27 13:32:20 -08:00
Hanhan Wang
a827c5c7ab Revert "[mlir][linalg] Add a new pattern to handle folding unit reduction dims."
This reverts commit 6eee66d12a.

It breaks builds, see https://lab.llvm.org/buildbot/#/builders/61/builds/35742

Differential Revision: https://reviews.llvm.org/D138633
2022-11-23 19:07:01 -08:00
Hanhan Wang
6eee66d12a [mlir][linalg] Add a new pattern to handle folding unit reduction dims.
The output operands will be added to the input operands if the generic op (on tensors)
becomes an elementwise operation. The outputs of the generic op remain unchanged; if they
become unused, they will be cleaned up by the ReplaceWithEmptyTensorIfUnused pattern.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D138251
2022-11-23 10:47:10 -08:00
Alexander Belyaev
b4db15a949 [mlir] Rename getInputs->getDpsInputs and getOutputs->getDpsInits in DPS interface.
https://discourse.llvm.org/t/rfc-interface-for-destination-style-ops/64056

Differential Revision: https://reviews.llvm.org/D136943
2022-10-28 15:41:12 +02:00
Alexander Belyaev
a7cccb9cbb [mlir] Simplify DestinationStyleOpInterface.
Differential Revision: https://reviews.llvm.org/D135348
2022-10-17 12:43:41 +02:00
Matthias Springer
81ca5aa452 [mlir][tensor][NFC] Rename linalg.init_tensor to tensor.empty
tensor.empty/linalg.init_tensor produces an uninitialized tensor that can be used as a destination operand for destination-style ops (ops that implement `DestinationStyleOpInterface`).
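For illustration (hypothetical `%cst` and shape), an empty tensor used as the destination of a destination-style op:

```
%init   = tensor.empty() : tensor<4x8xf32>
%filled = linalg.fill ins(%cst : f32) outs(%init : tensor<4x8xf32>) -> tensor<4x8xf32>
```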

This change makes it possible to implement `TilingInterface` for non-destination-style ops without depending on the Linalg dialect.

RFC: https://discourse.llvm.org/t/rfc-add-tensor-from-shape-operation/65101

Differential Revision: https://reviews.llvm.org/D135129
2022-10-04 17:25:35 +09:00
Oleg Shyshkov
1227b8ab54 [mlir] Rename getTied* methods to getMatching* in LinalgInterface.
Summary:
As mentioned in the comment to https://reviews.llvm.org/D134444, the term `tied`
is a misnomer in this context and `matching` sounds much better.

Differential Revision: https://reviews.llvm.org/D134534
2022-09-30 10:05:45 +00:00
Jakub Kuderski
abc362a107 [mlir][arith] Change dialect name from Arithmetic to Arith
Suggested by @lattner in https://discourse.llvm.org/t/rfc-define-precise-arith-semantics/65507/22.

Tested with:
`ninja check-mlir check-mlir-integration check-mlir-mlir-spirv-cpu-runner check-mlir-mlir-vulkan-runner check-mlir-examples`

and `bazel build --config=generic_clang @llvm-project//mlir:all`.

Reviewed By: lattner, Mogball, rriddle, jpienaar, mehdi_amini

Differential Revision: https://reviews.llvm.org/D134762
2022-09-29 11:23:28 -04:00
Oleg Shyshkov
1e7a6c0874 [mlir][linalg] Add getIteratorTypesArray to LinalgInterface.
Summary:
Most of the code that gets `iterator_types` from LinalgInterface is forced to
extract values from an `Attribute`. As a result, the usage pattern looks like
this:

```
SmallVector<StringRef> iterators = llvm::to_vector<4>(linalgOp.iterator_types().getAsValueRange<StringAttr>());
```

It also forces all operations that implement the LinalgOp interface to have an
`iterator_types` attribute, even when the information can easily be inferred from
other parameters.

In a perfect future, `getIteratorTypesArray` should be the only method to get
iterator types from the interface. The default implementation can still rely on the
`iterator_types` attribute, though.

The name `getIteratorTypesArray` was picked to be consistent with the existing
`getIndexingMapsArray`.

This patch adds a few sample usages. More cleanups will follow.

Differential Revision: https://reviews.llvm.org/D134729
2022-09-27 14:30:50 +00:00
lorenzo chelini
30dd839608 [MLIR][Linalg] Fix typos in 'DropUnitDims.cpp' 2022-09-12 11:37:31 +02:00
Nicolas Vasilache
8f2457ccf0 [mlir][Linalg] Add missing support for memory space to DropUnitDims
Previously, the memory space would not be propagated properly, creating invalid IR.
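A sketch of the intended behavior (hypothetical `%m`, shape, and memory space): when a unit dim is dropped from a memref in a non-default memory space, the result type must keep that space:

```
// Memory space 2 must survive the unit-dim fold.
%folded = memref.collapse_shape %m [[0, 1]] : memref<1x16xf32, 2> into memref<16xf32, 2>
```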

Differential Revision: https://reviews.llvm.org/D133101
2022-09-01 04:10:27 -07:00
Michele Scuttari
67d0d7ac0a [MLIR] Update pass declarations to new autogenerated files
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D132838
2022-08-31 12:28:45 +02:00
Michele Scuttari
039b969b32 Revert "[MLIR] Update pass declarations to new autogenerated files"
This reverts commit 2be8af8f0e.
2022-08-30 22:21:55 +02:00
Michele Scuttari
2be8af8f0e [MLIR] Update pass declarations to new autogenerated files
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D132838
2022-08-30 21:56:31 +02:00
Jacques Pienaar
d3b3f7653d [mlir] Flip to prefixed accessors (NFC) 2022-08-07 04:55:58 -07:00
Jacques Pienaar
d2c0572b2e [mlir] Flip LinAlg dialect to _Both
This one required more changes than ideal due to a generated name overlapping with a
different return type. Changed getIndexingMaps to getIndexingMapsArray to move it out
of the way and to highlight that it returns (more expensively) a SmallVector, and uses
the prefixed name for the Attribute.

Differential Revision: https://reviews.llvm.org/D129919
2022-07-19 14:42:58 -07:00
Nicolas Vasilache
df5c981be3 [mlir][Linalg] Add DropUnitDims support for tensor::ParallelInsertSliceOp.
ParallelInsertSlice behaves similarly to tensor::InsertSliceOp in its
rank-reducing properties.
This revision extends rank-reducing rewrite behavior and reuses most of the
existing implementation.
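For reference, the rank-reducing behavior being reused here, shown on tensor.insert_slice with hypothetical shapes (tensor.parallel_insert_slice uses the same offset/size/stride slicing semantics):

```
// The rank-1 source is inserted into a rank-2 destination through a unit leading dim.
%r = tensor.insert_slice %src into %dst[0, 0] [1, 4] [1, 1] : tensor<4xf32> into tensor<1x4xf32>
```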

Differential Revision: https://reviews.llvm.org/D129091
2022-07-05 01:36:13 -07:00
Nicolas Vasilache
2fde26dfca [mlir][Linalg][NFC] Make getReassociationMapForFoldingUnitDims a visible helper function 2022-07-04 09:00:20 -07:00
Nicolas Vasilache
741f8f2bed [mlir][Tensor][NFC] Better document rank-reducing behavior of ExtractSliceOp and cleanup 2022-06-29 07:37:58 -07:00
Jacques Pienaar
04235d07ad [mlir] Update flipped accessors (NFC)
Follow up with memref flipped and flipping any intermediate changes
made.
2022-06-28 13:11:26 -07:00
Mehdi Amini
e4853be2f1 Apply clang-tidy fixes for performance-for-range-copy to MLIR (NFC) 2022-01-02 22:19:56 +00:00
gysit
b7f2c108eb [mlir][linalg] Replace LinalgOps.h and LinalgTypes.h by a single header.
After removing the range type, Linalg does not define any types. The revision thus consolidates LinalgOps.h and LinalgTypes.h into a single Linalg.h header. Additionally, LinalgTypes.cpp is renamed to LinalgDialect.cpp to follow the convention adopted by other dialects such as the tensor dialect.

Depends On D115727

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115728
2021-12-15 12:15:03 +00:00
Stella Laurenzo
c10995a8ad Re-apply [NFC] Generalize a couple of passes so they can operate on any FunctionLike op.
* Generalizes passes linalg-detensorize, linalg-fold-unit-extent-dims, convert-elementwise-to-linalg.
* I feel that more work could be done in the future (i.e. make FunctionLike into a proper OpInterface and extend actions in dialect conversion to be trait based), and this patch would be a good record of why that is useful.
* Note for downstreams:
  * Since these passes are now generic, they do not automatically nest with pass managers set up for implicit nesting.
  * The Detensorize pass must run on a FunctionLike, and this requires explicit nesting.
* Addressed missed comments from the original and, per suggestion, removed the assert on FunctionLike in ElementwiseToLinalg and DropUnitDims.cpp, which is also what was causing the integration test to fail.

This reverts commit aa8815e42e.

Differential Revision: https://reviews.llvm.org/D115671
2021-12-13 13:33:00 -08:00
Mehdi Amini
aa8815e42e Revert "[NFC] Generalize a couple of passes so they can operate on any FunctionLike op."
This reverts commit 34696e6542.

A test is crashing on the mlir-nvidia bot.
2021-12-13 20:41:25 +00:00
Stella Laurenzo
34696e6542 [NFC] Generalize a couple of passes so they can operate on any FunctionLike op.
* Generalizes passes linalg-detensorize, linalg-fold-unit-extent-dims, convert-elementwise-to-linalg.
* I feel that more work could be done in the future (i.e. make FunctionLike into a proper OpInterface and extend actions in dialect conversion to be trait based), and this patch would be a good record of why that is useful.
* Note for downstreams:
  * Since these passes are now generic, they do not automatically nest with pass managers set up for implicit nesting.
  * If you run them over nested functions, you must nest explicitly. Upstream has adopted this style, but *-opt still has some uses of implicit pipelines via args. See the tests for the argument changes needed.

Differential Revision: https://reviews.llvm.org/D115645
2021-12-13 12:01:53 -08:00
Alexander Belyaev
b618880e7b [mlir] Move linalg.tensor_expand/collapse_shape to TensorDialect.
RFC: https://llvm.discourse.group/t/rfc-reshape-ops-restructuring/3310

linalg.fill gets a canonicalizer, because `FoldFillWithTensorReshape` cannot be moved to the tensor ops (it uses linalg::FillOp internally). Before, it was listed as a canonicalization pattern for the reshape operations; now it becomes a canonicalization pattern for FillOp.

Differential Revision: https://reviews.llvm.org/D115502
2021-12-10 12:11:48 +01:00
Vladislav Vinogradov
e41ebbecf9 [mlir][RFC] Refactor layout representation in MemRefType
The change is based on the proposal from the following discussion:
https://llvm.discourse.group/t/rfc-memreftype-affine-maps-list-vs-single-item/3968

* Introduce `MemRefLayoutAttr` interface to get `AffineMap` from an `Attribute`
  (`AffineMapAttr` implements this interface).
* Store layout as a single generic `MemRefLayoutAttr`.

This change removes the affine map composition feature and the related API.
While `MemRefType` itself supported it, almost none of the upstream code can
work with more than one affine map in a `MemRefType`.

The introduced `MemRefLayoutAttr` makes it possible to re-implement this feature
in a more stable way, via a separate attribute class.

The interface also allows layout representations other than affine maps.
For example, the described "stride + offset" form, which is currently supported only in the ASM parser,
can now be expressed as a separate attribute.
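For illustration (hypothetical function and shapes), a memref whose layout is carried by a single AffineMapAttr:

```
func.func @takes_strided_view(%m: memref<4x8xf32, affine_map<(d0, d1) -> (d0 * 8 + d1)>>) {
  return
}
```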

Reviewed By: ftynse, bondhugula

Differential Revision: https://reviews.llvm.org/D111553
2021-10-19 12:31:15 +03:00
Tobias Gysi
eaa52750ce [mlir][linalg] Verify every LinalgOp has a body.
After removing the last LinalgOps that have no region attached, we can verify there is a region. The patch performs the following changes:
- Move the SingleBlockImplicitTerminator trait further up the structured op base class.
- Adapt the LinalgOp verification, since the trait only checks whether there are 0 or 1 blocks.
- Introduce a getBlock method on the LinalgOp interface.
- Access the LinalgOp body using either getBlock() or getBody() if the concrete operation type is known.

This patch is a follow up to https://reviews.llvm.org/D111233.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D111393
2021-10-14 09:08:39 +00:00
Mogball
a54f4eae0e [MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200

Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.

Renamed all instances of operations in the codebase and in tests.

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D110797
2021-10-13 03:07:03 +00:00
thomasraoux
08f0cb7719 [mlir] Prevent crash in DropUnitDim pattern due to tensor with encoding
Differential Revision: https://reviews.llvm.org/D109984
2021-09-17 12:03:16 -07:00
thomasraoux
36aac53b36 [mlir][linalg] Extend drop unit dim pattern to all cases of reduction
Even with all parallel loops, reading the output value is still allowed, so we
don't have to handle reduction loops differently.

Differential Revision: https://reviews.llvm.org/D109851
2021-09-17 10:09:57 -07:00
Tres Popp
44485fcd97 [mlir] Prevent assertion failure in DropUnitDims
Don't fail an assertion on strided memrefs when dropping unit dims;
instead, just leave them unchanged.

Differential Revision: https://reviews.llvm.org/D108205
2021-08-31 12:15:13 +02:00
Alexander Belyaev
46ef86b5d8 [mlir] Move linalg::Expand/CollapseShapeOp to memref dialect.
RFC: https://llvm.discourse.group/t/rfc-reshape-ops-restructuring/3310

Differential Revision: https://reviews.llvm.org/D106141
2021-07-16 13:32:17 +02:00
Matthias Springer
060208b4c8 [mlir][NFC] Move SubTensorOp and SubTensorInsertOp to TensorDialect
The main goal of this commit is to remove the dependency of Standard dialect on the Tensor dialect.

* Rename SubTensorOp -> tensor.extract_slice, SubTensorInsertOp -> tensor.insert_slice.
* Some helper functions are (already) duplicated between the Tensor dialect and the MemRef dialect. To keep this commit smaller, this will be cleaned up in a separate commit.
* Additional dialect dependencies: Shape --> Tensor, Tensor --> Standard
* Remove dialect dependencies: Standard --> Tensor
* Move canonicalization test cases to correct dialect (Tensor/MemRef).

Note: This is a fixed version of https://reviews.llvm.org/D104499, which was reverted due to a missing update to two CMakeLists.txt files.

Differential Revision: https://reviews.llvm.org/D104676
2021-06-22 17:55:53 +09:00
Mehdi Amini
60d97fb4cf Revert "[mlir][NFC] Move SubTensorOp and SubTensorInsertOp to TensorDialect"
This reverts commit 83bf801f5f.

This breaks the build with -DBUILD_SHARED_LIBS=ON
2021-06-21 16:39:24 +00:00
Matthias Springer
83bf801f5f [mlir][NFC] Move SubTensorOp and SubTensorInsertOp to TensorDialect
The main goal of this commit is to remove the dependency of Standard dialect on the Tensor dialect.

* Rename ops: SubTensorOp --> ExtractTensorOp, SubTensorInsertOp --> InsertTensorOp
* Some helper functions are (already) duplicated between the Tensor dialect and the MemRef dialect. To keep this commit smaller, this will be cleaned up in a separate commit.
* Additional dialect dependencies: Shape --> Tensor, Tensor --> Standard
* Remove dialect dependencies: Standard --> Tensor
* Move canonicalization test cases to correct dialect (Tensor/MemRef).

Differential Revision: https://reviews.llvm.org/D104499
2021-06-22 00:11:21 +09:00
Tres Popp
6c7be41767 Support buffers in LinalgFoldUnitExtentDims
This doesn't add any canonicalizations, but performs the same simplification on
buffer-semantic linalg.generic ops by using linalg::ReshapeOp instead of
linalg::TensorReshapeOp.

Differential Revision: https://reviews.llvm.org/D103513
2021-06-15 08:22:22 +02:00
Tobias Gysi
046922e100 [mlir][linalg] Add support for scalar input operands.
Up to now, all structured op operands are assumed to be shaped. The patch relaxes this assumption and allows scalar input operands. In contrast to shaped operands, scalar operands are not indexed and are directly forwarded to the body of the operation. Like all other operands, scalar operands are associated with an indexing map, which in the case of a scalar or a 0-D operand has an empty range.
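A sketch of a generic op with a scalar input (illustrative names and shapes, not from the patch); the scalar's indexing map has an empty result range:

```
func.func @broadcast_scalar(%scalar: f32, %init: tensor<8xf32>) -> tensor<8xf32> {
  %res = linalg.generic {
      indexing_maps = [affine_map<(d0) -> ()>, affine_map<(d0) -> (d0)>],
      iterator_types = ["parallel"]}
      ins(%scalar : f32) outs(%init : tensor<8xf32>) {
    ^bb0(%s: f32, %o: f32):
      linalg.yield %s : f32
  } -> tensor<8xf32>
  return %res : tensor<8xf32>
}
```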

We will use scalar operands as a replacement for the capture mechanism. In contrast to captures, the approach ensures we can generate the function signature from the operand list and it prevents outdated capture values in case a transformation updates only the capture operand but not the hidden body of a named operation.

Removing captures and updating existing operations such as linalg.fill is left for a later patch.

The patch depends on https://reviews.llvm.org/D103891 and https://reviews.llvm.org/D103890.

Differential Revision: https://reviews.llvm.org/D104109
2021-06-14 06:27:16 +00:00
Tobias Gysi
f6b4e081dc [mlir][linalg] Prepare drop unit dims for scalar operands.
Adapt drop unit dims for structured ops taking scalar operands.

Differential Revision: https://reviews.llvm.org/D103890
2021-06-11 13:18:06 +00:00
Tobias Gysi
c698505257 [mlir][linalg] Cleanup LinalgOp usage in drop unit dims.
Replace the uses of deprecated Structured Op Interface methods in DropUnitDims.cpp. This patch is based on https://reviews.llvm.org/D103394.

Differential Revision: https://reviews.llvm.org/D103448
2021-06-03 12:27:05 +00:00
Alexander Belyaev
485c21be8a [mlir] Split linalg reshape ops into expand/collapse.
Differential Revision: https://reviews.llvm.org/D103548
2021-06-03 11:40:22 +02:00
Nicolas Vasilache
8eb18a0f3e [mlir][Standard] NFC - Drop remaining EDSC usage
Drop the remaining EDSC subdirectories and update all uses.

Differential Revision: https://reviews.llvm.org/D102911
2021-05-21 10:40:39 +00:00
Tobias Gysi
f358c37209 [mlir][linalg] Remove IndexedGenericOp support from DropUnitDims...
after introducing the IndexedGenericOp to GenericOp canonicalization (https://reviews.llvm.org/D101612).

Differential Revision: https://reviews.llvm.org/D102235
2021-05-13 14:18:59 +00:00
MaheshRavishankar
05a89312d8 [mlir][Linalg] Allow folding to rank-zero tensor when using rank-reducing subtensors.
The pattern to convert subtensor ops to their rank-reduced versions
(by dropping unit dims in the result) can also convert to a zero-rank
tensor. Handle that case.
This also fixes an out-of-bounds access bug in the existing pattern for such
cases.
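A sketch of the newly handled case (hypothetical `%t` and shape), shown with the op's current name, tensor.extract_slice (spelled subtensor at the time of this commit):

```
// All unit dims of the result are dropped, yielding a zero-rank tensor.
%scalar_t = tensor.extract_slice %t[0, 0] [1, 1] [1, 1] : tensor<1x1xf32> to tensor<f32>
```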

Differential Revision: https://reviews.llvm.org/D101949
2021-05-06 19:03:55 -07:00