Commit Graph

66 Commits

Author SHA1 Message Date
Matthias Springer
5f6d5ca0f8 [mlir][bufferize] Fix tensor copy insertion for dynamic tensors
TensorCopyInsertion inserts bufferization.alloc_tensor ops in case of RaW conflicts. If such a tensor is dynamically shaped, tensor.dim ops are inserted. There is an optimization for ops such as tensor.extract_slice: A copy of the result is created instead of the operand. Afterwards, all uses of the result are updated. E.g.:

```
%0 = tensor.extract_slice ... : tensor<?xf32> to tensor<?xf32>
%1 = tensor.dim %0, %c0 : tensor<?xf32>
%2 = bufferization.alloc_tensor(%1) : tensor<?xf32>
```

All uses of %0, except for the tensor.dim and the bufferization.alloc_tensor (if any), should be replaced with the newly created alloc_tensor. Before this change, the use in tensor.dim was also replaced, resulting in IR with a dominance error.
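
For illustration, a hypothetical sketch (not taken from the commit) of the IR that the faulty replacement would have produced:

```
%0 = tensor.extract_slice ... : tensor<?xf32> to tensor<?xf32>
%1 = tensor.dim %2, %c0 : tensor<?xf32>   // the use of %0 was also replaced; %2 does not dominate this op
%2 = bufferization.alloc_tensor(%1) : tensor<?xf32>
```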

Note: There is no test case for this fix because the bug cannot be triggered with tensor.extract_slice, which implements an interface to reify result shapes. This bug appeared in an external project with a tensor.extract_slice-like op that does not implement that interface, in which case tensor.dim ops must be created. We do not have such an op in MLIR to trigger this bug.

Differential Revision: https://reviews.llvm.org/D140471
2022-12-21 12:42:02 +01:00
Matthias Springer
faa9be75ee [mlir][bufferize][NFC] Rename DialectAnalysisState and move to OneShotAnalysis
`DialectAnalysisState` is now `OneShotAnalysisState::Extension`.

This state extension mechanism is needed only for One-Shot Analysis, so it is moved from `BufferizableOpInterface.h` to `OneShotAnalysis.h`.

Extensions are now identified via TypeIDs instead of StringRefs. The API of state extensions is cleaned up and follows the same pattern as other extension mechanisms in MLIR (e.g., `transform::TransformState::Extension`).

Also delete some dead code.

Differential Revision: https://reviews.llvm.org/D135051
2022-11-22 14:34:55 +01:00
Lei Zhang
9bb633741a [mlir][bufferization] Support general Attribute as memory space
MemRef has been accepting a general Attribute as memory space for
a long time. This commit updates the bufferization side to catch up,
which allows downstream users to plug in customized symbolic memory
spaces. It also eliminates quite a few calls to `getMemorySpaceAsInt`,
which is deprecated.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D138330
2022-11-21 09:40:50 -05:00
Aliia Khasanova
399638f98c Merge kDynamicSize and kDynamicSentinel into one constant.

Differential Revision: https://reviews.llvm.org/D138282
2022-11-21 13:01:26 +00:00
Matthias Springer
6cdd34b973 [mlir][tensor][bufferize] Bufferize inserts into equivalent tensors in-place
Inserting a tensor into an equivalent tensor is a no-op after bufferization. No alloc is needed.
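
A rough sketch (not from the commit's tests): inserting a slice that covers the whole tensor back into the tensor it was extracted from produces an equivalent tensor, so the insert bufferizes in-place without an allocation or copy:

```
%slice = tensor.extract_slice %t[0] [10] [1] : tensor<10xf32> to tensor<10xf32>
// %slice is equivalent to %t, so this insert is a no-op after bufferization.
%r = tensor.insert_slice %slice into %t[0] [10] [1] : tensor<10xf32> into tensor<10xf32>
```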

Differential Revision: https://reviews.llvm.org/D132662
2022-10-06 15:06:33 +09:00
Alex Zinenko
f096e72ce6 [mlir] switch bufferization to use strided layout attribute
Bufferization already makes the assumption that buffers pass function
boundaries in the strided form and uses the corresponding affine map layouts.
Switch it to use the recently introduced strided layout instead to avoid
unnecessary casts when bufferizing further operations to the memref dialect
counterparts that now largely rely on the strided layout attribute.
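
For example, a fully dynamic 1-D buffer layout that was previously spelled as an affine map is now expressed with the strided layout attribute (illustrative declarations, not from the commit):

```
func.func private @before(memref<?xf32, affine_map<(d0)[s0, s1] -> (d0 * s1 + s0)>>)
func.func private @after(memref<?xf32, strided<[?], offset: ?>>)
```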

Depends On D133947

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D133951
2022-09-16 10:56:50 +02:00
Matthias Springer
f7f0c7f7e3 [mlir][bufferize] Add isRepetitiveRegion to BufferizableOpInterface
This method makes it possible to declare regions as "repetitive" even if the parent op does not implement the RegionBranchOpInterface.

This is needed to support loop-like ops that have parallel semantics but do not branch between regions.

Differential Revision: https://reviews.llvm.org/D133113
2022-09-02 14:47:20 +02:00
Matthias Springer
123c4b0251 [mlir][SCF][bufferize] Support different iter_arg/init_arg types (scf.for)
Even though iter_arg and init_arg of an scf.for loop may have the same tensor type, their bufferized memref types are not necessarily equal. It is sometimes necessary to insert a cast in case of differing layout maps.
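
A simplified sketch of such a loop (types and constants are illustrative): the init value may bufferize to a memref with identity layout while the iter_arg's buffer type carries a more general layout map, in which case a memref.cast is inserted at the loop boundary during bufferization:

```
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%c8 = arith.constant 8 : index
%f = arith.constant 1.0 : f32
// %init may bufferize to memref<8xf32> (identity layout), whereas the
// iter_arg may be assigned a fully dynamic layout map.
%init = bufferization.alloc_tensor() : tensor<8xf32>
%r = scf.for %iv = %c0 to %c8 step %c1 iter_args(%arg = %init) -> (tensor<8xf32>) {
  %next = tensor.insert %f into %arg[%c0] : tensor<8xf32>
  scf.yield %next : tensor<8xf32>
}
```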

Differential Revision: https://reviews.llvm.org/D132860
2022-08-30 16:35:32 +02:00
Matthias Springer
111c919665 [mlir][bufferization] Generalize getBufferType
This change generalizes getBufferType. This function can be used to predict the buffer type of any tensor value (not just BlockArguments) without changing any IR. It also subsumes getMemorySpace. This is useful for loop bufferization, where the precise buffer type of an iter_arg cannot be known without examining the loop body.

Differential Revision: https://reviews.llvm.org/D132859
2022-08-30 16:26:44 +02:00
Kazu Hirata
258531b7ac Remove redundant initialization of Optional (NFC) 2022-08-20 21:18:28 -07:00
Matthias Springer
664ffa46bb [mlir][tensor][bufferize] Fix deallocation of GenerateOp/FromElementsOp
Both ops allocate a buffer. There were cases in which the buffer was not deallocated.

Differential Revision: https://reviews.llvm.org/D130469
2022-07-25 12:25:06 +02:00
Matthias Springer
74902cc96f [mlir][linalg][NFC] Cleanup: Drop linalg.inplaceable attribute
bufferization.writable is used in most cases instead. All remaining test cases are updated. Some code that is no longer needed is deleted.

Differential Revision: https://reviews.llvm.org/D129739
2022-07-14 15:50:03 +02:00
Matthias Springer
c66303c287 [mlir][sparse] Switch to One-Shot Bufferize
This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.

Details:
* Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function; see the sketch after this list. (That way, the allocation happens at the call site.)
* Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
* Sparse tensor copies are not supported yet. (Future change)
* Sparsification no longer has to consider inplacability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification is running. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which will trigger an allocation and/or copy during TensorCopyInsertion.
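
A rough sketch of the "out" tensor argument pattern mentioned above (function and value names are illustrative):

```
func.func @compute(%out: tensor<8xf32>) -> tensor<8xf32> {
  %cst = arith.constant 0.0 : f32
  // The result aliases %out after bufferization, so any allocation happens
  // at the call site instead of leaking from this function.
  %0 = linalg.fill ins(%cst : f32) outs(%out : tensor<8xf32>) -> tensor<8xf32>
  return %0 : tensor<8xf32>
}
```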

Differential Revision: https://reviews.llvm.org/D129356
2022-07-14 09:52:48 +02:00
Kazu Hirata
491d27013d [mlir] Use has_value instead of hasValue (NFC) 2022-07-13 00:57:02 -07:00
Matthias Springer
606f7c8f7a [mlir][bufferization][NFC] Move more unknown type conversion logic into BufferizationOptions
The `unknownTypeConversion` bufferization option (enum) is now a type converter function option. Some logic of `getMemRefType` is now handled by that function.

This change makes type conversion more controllable. Previously, there were only two options when generating memref types for non-bufferizable ops: Static identity layout or fully dynamic layout. With this change, users of One-Shot Bufferize can provide a function with custom logic.

Differential Revision: https://reviews.llvm.org/D129273
2022-07-07 13:36:28 +02:00
Matthias Springer
c0b0b6a00a [mlir][bufferize] Infer memory space in all bufferization patterns
This change updates all remaining bufferization patterns (except for scf.while) and the remaining bufferization infrastructure to infer the memory space whenever possible instead of falling back to "0". (If a default memory space is set in the bufferization options, we still fall back to that value if the memory space could not be inferred.)
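
A rough sketch (the exact `memory_space` attribute spelling is an assumption): the memory space of an alloc_tensor propagates to the buffers of its users instead of defaulting to space 0:

```
%c0 = arith.constant 0 : index
%f = arith.constant 1.0 : f32
%t = bufferization.alloc_tensor() {memory_space = 1 : i64} : tensor<8xf32>
// After bufferization, the destination buffer of this insert is inferred to
// live in memory space 1 (e.g., memref<8xf32, 1>) rather than space 0.
%r = tensor.insert %f into %t[%c0] : tensor<8xf32>
```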

Differential Revision: https://reviews.llvm.org/D128423
2022-06-27 16:32:52 +02:00
Matthias Springer
45b995cda4 [mlir][bufferize][NFC] Change signature of allocateTensorForShapedValue
Add a failure return value and bufferization options argument. This is to keep a subsequent change smaller.

Differential Revision: https://reviews.llvm.org/D128278
2022-06-27 16:00:06 +02:00
Matthias Springer
5d50f51c97 [mlir][bufferization][NFC] Add error handling to getBuffer
This is in preparation of adding memory space support.

Differential Revision: https://reviews.llvm.org/D128277
2022-06-27 13:48:01 +02:00
Matthias Springer
ba9d886db4 [mlir][bufferization][NFC] Bufferize with PostOrder traversal
This is useful because the result type of an op can sometimes be inferred from its body (e.g., `scf.if`). This will be utilized in subsequent changes.

Also introduces a new `getBufferType` interface method on BufferizableOpInterface. This method is useful for computing a bufferized block argument type with respect to OpOperand types of the parent op.

Differential Revision: https://reviews.llvm.org/D128420
2022-06-27 12:42:41 +02:00
Matthias Springer
b06614e2e8 [mlir][bufferization][NFC] Change signature of getMemRefType
These functions now accept unsigned integers for address spaces instead of Attributes.

Differential Revision: https://reviews.llvm.org/D128275
2022-06-27 10:41:40 +02:00
Matthias Springer
3474d10e1a [mlir][bufferization][NFC] Make escape a dialect attribute
All bufferizable ops that bufferize to an allocation receive a `bufferization.escape` attribute during TensorCopyInsertion.
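
A rough sketch of how the attribute might look after TensorCopyInsertion (one boolean per op result; the exact format is an assumption):

```
// The allocated buffer does not escape its block and can therefore be
// deallocated at the end of the block.
%0 = bufferization.alloc_tensor() {bufferization.escape = [false]} : tensor<8xf32>
```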

Differential Revision: https://reviews.llvm.org/D128137
2022-06-23 19:34:47 +02:00
Matthias Springer
99260e9583 [mlir][bufferization] Set emitAccessorPrefix dialect flag
Generate get/set accessors on all bufferization ops. Also update all internal uses.

Differential Revision: https://reviews.llvm.org/D128057
2022-06-18 10:26:29 +02:00
Matthias Springer
b55d55ecd9 [mlir][bufferize][NFC] Remove BufferizationState
With the recent refactorings, this class is no longer needed. We can use BufferizationOptions in all places where BufferizationState was used.

Differential Revision: https://reviews.llvm.org/D127653
2022-06-17 14:04:11 +02:00
Matthias Springer
b3ebe3beed [mlir][bufferize] Bufferize after TensorCopyInsertion
This change updates the bufferization so that it utilizes the new TensorCopyInsertion pass. One-Shot Bufferize no longer calls the One-Shot Analysis. Instead, it relies on the TensorCopyInsertion pass to make the entire IR fully inplaceable. The `bufferize` implementations of all ops are simplified; they no longer have to account for out-of-place bufferization decisions. These were already materialized in the IR in the form of `bufferization.alloc_tensor` ops during the TensorCopyInsertion pass.

Differential Revision: https://reviews.llvm.org/D127652
2022-06-17 13:29:52 +02:00
lorenzo chelini
f2ada383f2 [MLIR][Bufferization] Assume alias if no information is available
- Post (minor) fix after: https://reviews.llvm.org/D127301

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D127868
2022-06-15 18:41:51 +02:00
Matthias Springer
a36c801d12 [mlir][bufferize] Better implementation of AnalysisState::isTensorYielded
If `create-deallocs=0`, mark all bufferization.alloc_tensor ops as escaping. (Unless they already have an `escape` attribute.) In the absence of analysis information, check SSA use-def chains to see if the value may be yielded.
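
A rough sketch of the use-def chain check (%cond and %t are assumed to be defined elsewhere): the alloc_tensor below is conservatively treated as escaping because its value is yielded from the block:

```
%0 = scf.if %cond -> (tensor<8xf32>) {
  // Marked as escaping: the allocated tensor may be yielded from this block.
  %1 = bufferization.alloc_tensor() : tensor<8xf32>
  scf.yield %1 : tensor<8xf32>
} else {
  scf.yield %t : tensor<8xf32>
}
```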

Differential Revision: https://reviews.llvm.org/D127302
2022-06-15 10:15:47 +02:00
Matthias Springer
a3bca1181b [mlir][bufferize][NFC] Merge AlwaysCopyAnalysisState into AnalysisState
`AnalysisState` now has default implementations of all virtual functions.

Differential Revision: https://reviews.llvm.org/D127301
2022-06-15 10:08:52 +02:00
Matthias Springer
79f115911e [mlir][bufferize] Avoid tensor copies when the data is not read
There are various shortcuts in `BufferizationState::getBuffer` that avoid a buffer copy when we just need an allocation (and no initialization). This change adds those shortcuts to the TensorCopyInsertion pass, so that `getBuffer` can be simplified in a subsequent change.

Differential Revision: https://reviews.llvm.org/D126821
2022-06-10 10:26:07 +02:00
Matthias Springer
87b46776c4 [mlir][bufferize] Improve resolveConflicts for ExtractSliceOp
It is sometimes better to make a copy of the OpResult instead of making a copy of the OpOperand. E.g., when bufferizing tensor.extract_slice.
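
A rough sketch (not from the commit's tests): copying the extracted slice (the OpResult) is cheaper than copying the entire source tensor (the OpOperand):

```
%0 = tensor.extract_slice %t[0] [4] [1] : tensor<16xf32> to tensor<4xf32>
// Copy only the 4-element result instead of the 16-element operand %t.
%1 = bufferization.alloc_tensor() copy(%0) : tensor<4xf32>
```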

This implementation will eventually make parts of extract_slice's `bufferize` implementation obsolete (and simplify it). It will only need to handle in-place OpOperands.

Differential Revision: https://reviews.llvm.org/D126819
2022-06-09 22:19:37 +02:00
Matthias Springer
87c770bbd0 [mlir][bufferization][NFC] Put inplacability conflict resolution in op interface
The TensorCopyInsertion pass resolves out-of-place bufferization decisions by inserting explicit `bufferization.alloc_tensor` ops. This change moves that functionality into a new BufferizableOpInterface method, so that it can be overridden by op implementations. Some op bufferizations must insert additional `alloc_tensor` ops to make sure that certain aliasing invariants are not violated (e.g., scf::ForOp). This will be addressed in a subsequent change.
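
A rough sketch of what the interface method materializes for an out-of-place OpOperand (simplified; values are assumed to be defined elsewhere):

```
// A RaW conflict on %t is resolved by copying it into a new allocation;
// the subsequent write can then bufferize in-place.
%copy = bufferization.alloc_tensor() copy(%t) : tensor<8xf32>
%0 = tensor.insert %f into %copy[%c0] : tensor<8xf32>
```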

Differential Revision: https://reviews.llvm.org/D126817
2022-06-09 22:06:44 +02:00
Matthias Springer
058af65e78 [mlir][bufferization] Decouple buffer-deallocation from One-Shot Bufferize
The buffer deallocation pass must now be run explicitly when `allow-return-allocs` is set.

This results in a few extra buffer copies in unoptimized test cases. The proper way to avoid such copies is to relax the OpOperand/OpResult aliasing contract on ops such as scf.for. Some of these copies can also be avoided by improving the buffer deallocation pass.

Differential Revision: https://reviews.llvm.org/D126252
2022-06-09 18:20:39 +02:00
Matthias Springer
1534177f8f [mlir][bufferization][NFC] Move OpFilter out of BufferizationOptions
Differential Revision: https://reviews.llvm.org/D126568
2022-05-28 01:47:39 +02:00
Matthias Springer
ab249fd87d [mlir][bufferization][NFC] Remove dead code
There were two copies of AlwaysCopyAnalysisState. (Must have been a merge conflict mistake...)

Differential Revision: https://reviews.llvm.org/D126414
2022-05-25 22:26:00 +02:00
Matthias Springer
0ee1c0388c [mlir][bufferize] Remove hoisting functionality from One-Shot Bufferize
The same functionality is already provided by `-buffer-hoisting` and `-buffer-loop-hoisting`.

Differential Revision: https://reviews.llvm.org/D126251
2022-05-25 19:56:18 +02:00
Matthias Springer
996834e681 [mlir][SCF] Fix scf.while bufferization
Before this fix, the bufferization implementation made the incorrect assumption that the values yielded from the "before" region must match with the values yielded from the "after" region.

Differential Revision: https://reviews.llvm.org/D125835
2022-05-18 00:35:50 +02:00
Matthias Springer
f287da8a15 [mlir][bufferize] Better user control of layout maps
This change replaces the `fully-dynamic-layout-maps` option (which was badly named) with two new options (sketched after the list below):

* `unknown-type-conversion` controls the layout maps on buffer types for which no layout map can be inferred.
* `function-boundary-type-conversion` controls the layout maps on buffer types inside of function signatures.
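
A rough sketch of the effect on a buffer type (the option value spellings are assumptions):

```
// identity-layout-map:
func.func private @identity_layout(memref<8xf32>)
// fully-dynamic-layout-map:
func.func private @dynamic_layout(memref<8xf32, affine_map<(d0)[s0, s1] -> (d0 * s1 + s0)>>)
```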

Differential Revision: https://reviews.llvm.org/D125615
2022-05-16 18:06:13 +02:00
Matthias Springer
8f42939a07 [mlir][bufferize][NFC] Make getContiguousMemRefType a static function
No need to expose this as public API anymore.

Differential Revision: https://reviews.llvm.org/D125361
2022-05-13 11:27:43 +02:00
Matthias Springer
2fe40c34ea [mlir][bufferize] Fix op filter
Bufferization has an optional filter to exclude certain ops from analysis+bufferization. There were a few remaining places in the codebase where the filter was not checked.

Differential Revision: https://reviews.llvm.org/D125356
2022-05-12 09:33:07 +02:00
Matthias Springer
248e113e9f [mlir][bufferize][NFC] Move helper functions to BufferizationOptions
Move helper functions for creating allocs/deallocs/memcpys to BufferizationOptions.

Differential Revision: https://reviews.llvm.org/D125375
2022-05-11 16:23:22 +02:00
Matthias Springer
9785eb1b98 [mlir][bufferize] Disallow adding new bufferizable ops during bufferization
Ops that are created during bufferization are not analyzed (when running One-Shot Bufferize), so users should instead create memref ops directly.

Furthermore, this fixes an issue where an op was erased (and put on the `erasedOps` list), but subsequently a new tensor op was created at the same memory location. This op was then not bufferized. Disallowing the creation of new tensor ops simplifies the bufferization and fixes such issues.

Differential Revision: https://reviews.llvm.org/D125017
2022-05-06 18:41:49 +09:00
Matthias Springer
988748c077 [mlir][bufferize] Do not copy buffers with undefined contents
Buffers with undefined contents (e.g., the result of an init_tensor) are no longer copied.

Differential Revision: https://reviews.llvm.org/D125015
2022-05-06 17:31:01 +09:00
Matthias Springer
d6dab38ae4 [mlir][bufferize][NFC] Add function boundary bufferization flag to BufferizationOptions
This makes the API easier to use. Also allows us to check for incorrect API usage for easier debugging.

Differential Revision: https://reviews.llvm.org/D124265
2022-04-23 01:11:37 +09:00
Matthias Springer
e07a7fd5c0 [mlir][bufferization] Move ModuleBufferization to bufferization dialect
* Move Module Bufferization to the bufferization dialect. The implementation is split into `OneShotModuleBufferize.cpp` and `FuncBufferizableOpInterfaceImpl.cpp`, so that the external model implementation can be easily moved to the func dialect in the future.
* Split and clean up test cases. A few test cases are still remaining in Linalg and will be updated separately.
* `linalg.inplaceable` is renamed to `bufferization.writable` to accurately reflect its current usage.
* Attributes and their verifiers are moved from the Linalg dialect to the Bufferization dialect.
* Expand documentation.
* Add a new flag to One-Shot Bufferize to allow for function boundary bufferization.

Differential Revision: https://reviews.llvm.org/D122229
2022-04-22 19:37:28 +09:00
River Riddle
58ceae9561 [mlir:NFC] Remove the forward declaration of FuncOp in the mlir namespace
FuncOp has been moved to the `func` namespace for a little over a month, the
using directive can be dropped now.
2022-04-18 12:01:55 -07:00
Matthias Springer
d7a9bf9143 [mlir][tensor] Fix verifier and bufferization of collapse_shape
Insert a buffer copy unless the dims are guaranteed to be collapsible. In the verifier, accept collapses unless they are guaranteed to be non-collapsible.
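
A rough sketch (not from the commit's tests): whether the two source dims can be collapsed into one contiguous dim depends on the runtime strides of the source buffer, so a copy is inserted unless collapsibility is statically guaranteed:

```
// The result may not be expressible as a view of the source buffer when the
// source is dynamically shaped, so bufferization inserts a copy.
%0 = tensor.collapse_shape %t [[0, 1]] : tensor<?x?xf32> into tensor<?xf32>
```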

Differential Revision: https://reviews.llvm.org/D123316
2022-04-08 18:20:40 +09:00
Matthias Springer
d2608adf49 [mlir][bufferize] Do not insert useless casts for newly allocated buffers
Differential Revision: https://reviews.llvm.org/D123369
2022-04-08 18:12:02 +09:00
Matthias Springer
3b74aac29c [mlir][bufferize] Do not run the buffer deallocation pass if no allocs escape block boundaries
This fixes a bufferization issue with ops that are not supported by the buffer deallocation pass when `allow-return-allocs=0`.

Differential Revision: https://reviews.llvm.org/D122304
2022-03-23 21:07:35 +09:00
River Riddle
3655069234 [mlir] Move the Builtin FuncOp to the Func dialect
This commit moves FuncOp out of the builtin dialect, and into the Func
dialect. This move has been planned in some capacity from the moment
we made FuncOp an operation (years ago). This commit handles the
functional aspects of the move, but various aspects are left untouched
to ease migration: func::FuncOp is re-exported into mlir to reduce
the actual API churn, and the assembly format still accepts the
unqualified `func`. These temporary measures will remain for a little
while to simplify migration before being removed.

Differential Revision: https://reviews.llvm.org/D121266
2022-03-16 17:07:03 -07:00
Matthias Springer
c076fa1c44 [mlir][bufferize] Deallocate returned buffers with BufferDeallocation
New buffer allocations can now be returned/yielded from blocks with `allow-return-allocs`. One-Shot Bufferize deallocates all buffers at the end of the block. If this is not possible (because the buffer escapes the block), this is now done by the existing BufferDeallocation pass.

Differential Revision: https://reviews.llvm.org/D121527
2022-03-16 23:13:34 +09:00
Matthias Springer
9e24f0f458 [mlir][bufferize] Do not deallocate allocs that are returned from a block
Such IR is rejected by default, but can be allowed with `allow-return-memref`. In preparation of future refactorings, do not deallocate such buffers.

One-Shot Analysis now gathers information about yielded tensors, so that we know during the actual bufferization whether a newly allocated buffer should be deallocated again. (Otherwise, it will leak. This will be addressed in a subsequent commit that also makes `allow-return-memref` a non-experimental flag.)

As a cleanup, `allow-return-memref` is now part of OneShotBufferizationOptions. (It was previously ignored by AlwaysCopyBufferizationState.) Moreover, AlwaysCopyBufferizationState now asserts that `create-deallocs` is deactivated to prevent surprising behavior.

Differential Revision: https://reviews.llvm.org/D121521
2022-03-16 18:59:27 +09:00