D138330 updated the deprecated `getMemorySpaceAsInt` uses to `getMemorySpace`. A few uses were missed.
Differential Revision: https://reviews.llvm.org/D139526
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
TensorCopyInsertion should not have been exposed as a pass. This was a flaw in the original design. It is a preparation step for bufferization, and certain transforms (that would otherwise be legal) are illegal between TensorCopyInsertion and the actual rewrite to MemRef ops. Therefore, even if broken down into two separate steps internally, they should be exposed as a single pass.
This change affects the sparse compiler, which uses `TensorCopyInsertionPass`. A new `SparsificationAndBufferizationPass` is added to replace all passes in the sparse tensor pipeline from `TensorCopyInsertionPass` until the actual bufferization (rewrite to memref/non-tensor). It is generally unsafe to run arbitrary passes in-between, in particular passes that hoist tensor ops out of loops or change SSA use-def chains along tensor ops.
Differential Revision: https://reviews.llvm.org/D138915
The previous error message was confusing. Also improve the code documentation and perform some minor code cleanups.
Differential Revision: https://reviews.llvm.org/D138902
`DialectAnalysisState` is now `OneShotAnalysisState::Extension`.
This state extension mechanism is needed only for One-Shot Analysis, so it is moved from `BufferizableOpInterface.h` to `OneShotAnalysis.h`.
Extensions are now identified via TypeIDs instead of StringRefs. The API of state extensions is cleaned up and follows the same pattern as other extension mechanisms in MLIR (e.g., `transform::TransformState::Extension`).
Also delete some dead code.
Differential Revision: https://reviews.llvm.org/D135051
MemRef has been accepting a general Attribute as memory space for
a long time. This commit updates the bufferization side to catch up,
which allows downstream users to plug in customized symbolic memory
spaces. This also eliminates quite a few calls to
`getMemorySpaceAsInt`, which is deprecated.
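For example, downstream code can now use a dialect attribute instead of an integer as the memory space (a sketch; the `#gpu.address_space` attribute below is purely illustrative):
```
// Numeric memory space, as before:
%0 = memref.alloc() : memref<8xf32, 1>
// Symbolic memory space supplied by a downstream dialect:
%1 = memref.alloc() : memref<8xf32, #gpu.address_space<workgroup>>
```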
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D138330
The methods in `SideEffectUtils.h` (and their implementations in
`SideEffectUtils.cpp`) seem to have similar intent to methods already
existing in `SideEffectInterfaces.h`. Move the declarations (and
implementations) from `SideEffectUtils.h` (and `SideEffectUtils.cpp`)
into `SideEffectInterfaces.h` (and `SideEffectInterfaces.cpp`).
Also drop the `SideEffectInterface::hasNoEffect` method in favor of
`mlir::isMemoryEffectFree`, which actually recurses into the operation
instead of relying exclusively on `hasRecursiveMemoryEffectTrait`.
Differential Revision: https://reviews.llvm.org/D137857
Expose `function-boundary-type-conversion` in `OneShotBufferizeOp`. To
reuse options between passes and transform operations, create
`BufferizationEnums.td`.
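A sketch of the new knob on the transform op (the `layout{...}` custom syntax and the `IdentityLayoutMap` enum case are assumed from the enum definition; the exact printed form may differ):
```
transform.sequence failures(propagate) {
^bb0(%arg0: !pdl.operation):
  // Request identity-layout memrefs at function boundaries.
  %0 = transform.bufferization.one_shot_bufferize layout{IdentityLayoutMap} %arg0
      {bufferize_function_boundaries = true}
}
```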
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D137833
tensor.empty op elimination is an optimization that brings IR into a more bufferization-friendly form. E.g.:
```
%0 = tensor.empty()
%1 = linalg.fill(%cst, %0) {inplace = [true]}
%2 = tensor.insert_slice %1 into %t[10][20][1]
```
This is rewritten to:
```
%0 = tensor.extract_slice %t[10][20][1]
%1 = linalg.fill(%cst, %0) {inplace = [true]}
%2 = tensor.insert_slice %1 into %t[10][20][1]
```
This optimization used to operate on bufferization.alloc_tensor ops. This is not correct because the documentation of bufferization.alloc_tensor says that it always bufferizes to an allocation. Instead, this optimization should operate on tensor.empty ops, which can then be lowered to bufferization.alloc_tensor ops (if they don't get eliminated).
Differential Revision: https://reviews.llvm.org/D137162
This allows users to restrict the transformation to a
subset of the functions in a module.
For example, a user might want to apply the transformation to
a module's entry point, but not to the calls in the module
because those calls might refer to external C functions outside
of their control.
Reviewed By: springerm, nicolasvasilache
Differential Revision: https://reviews.llvm.org/D137264
tensor.insert and tensor.insert_slice (as destination style ops) no longer need to implement the entire BufferizableOpInterface.
Differential Revision: https://reviews.llvm.org/D136347
This fixes an issue in One-Shot Bufferize that could lead to missing buffer copies in the future. This bug cannot currently be triggered because of the order in which ops are analyzed (always bottom-to-top). However, if we consider different traversal orders for the analysis in the future, this bug could cause subtle issues that are difficult to debug.
Example:
```
%0 = ...
%1 = tensor.insert ... into %0
%2 = tensor.extract_slice %0
tensor.extract %2[...]
```
In case of a top-to-bottom analysis of the above IR, the `tensor.insert` is analyzed before the `tensor.extract_slice`. In that case, the `tensor.insert` would bufferize in-place because %2 is not yet known to become an alias of %0 (and therefore to cause a conflict).
With this change, the `tensor.insert` will bufferize out-of-place, regardless of the traversal order.
Differential Revision: https://reviews.llvm.org/D135049
This fixes a bug where a required buffer copy was not inserted.
Not only written aliases but also read aliases should be taken into account when computing common enclosing repetitive regions. Furthermore, for writing ops, what matters is not where the destination tensor is defined but where the op itself is located. For example (a sketch; types and surrounding ops are illustrative, see below):
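```
%0 = ...  // destination tensor defined outside the loop
scf.for %iv = %lb to %ub step %c1 {
  // The write must be attributed to the repetitive region enclosing the
  // tensor.insert itself, not to where %0 is defined.
  %1 = tensor.insert %cst into %0[%iv] : tensor<10xf32>
  ...
}
```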
Differential Revision: https://reviews.llvm.org/D135420
Inserting a tensor into an equivalent tensor is a no-op after bufferization. No alloc is needed.
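For example (a sketch; `%t` and `%cst` are assumed to be defined elsewhere): if the analysis finds that %1 is equivalent to the slice %t[0][20][1], the insert_slice writes every element onto itself and can be dropped:
```
%0 = tensor.extract_slice %t[0][20][1] : tensor<100xf32> to tensor<20xf32>
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<20xf32>) -> tensor<20xf32>
// %1 is equivalent to %t[0][20][1], so this insert is a no-op after
// bufferization.
%2 = tensor.insert_slice %1 into %t[0][20][1] : tensor<20xf32> into tensor<100xf32>
```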
Differential Revision: https://reviews.llvm.org/D132662
Bufferization already makes the assumption that buffers pass function
boundaries in the strided form and uses the corresponding affine map layouts.
Switch it to use the recently introduced strided layout instead to avoid
unnecessary casts when bufferizing further operations to the memref dialect
counterparts that now largely rely on the strided layout attribute.
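For instance, for a 1-D buffer crossing a function boundary (types are illustrative; `s0` is the offset symbol and `s1` the stride symbol in the canonical affine form):
```
// Before: strided form spelled as an affine map.
func.func private @before(memref<?xf32, affine_map<(d0)[s0, s1] -> (d0 * s1 + s0)>>)
// After: the same contract with the strided layout attribute.
func.func private @after(memref<?xf32, strided<[?], offset: ?>>)
```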
Depends On D133947
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D133951
This method makes it possible to declare regions as "repetitive" even if the parent op does not implement the RegionBranchOpInterface.
This is needed to support loop-like ops that have parallel semantics but do not branch between regions.
Differential Revision: https://reviews.llvm.org/D133113
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.
Reviewed By: mehdi_amini, rriddle
Differential Revision: https://reviews.llvm.org/D132838
Even though iter_arg and init_arg of an scf.for loop may have the same tensor type, their bufferized memref types are not necessarily equal. It is sometimes necessary to insert a cast in case of differing layout maps.
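For example (a sketch with assumed types): the init value may bufferize with an identity layout, while the loop body may yield buffers with arbitrary strides, so the iter_arg gets a layout-agnostic type and the init operand is casted:
```
// %init has an identity layout, but the iter_arg must also admit buffers
// yielded from the loop body, so a more general layout is chosen.
%cast = memref.cast %init
    : memref<10xf32> to memref<10xf32, strided<[?], offset: ?>>
```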
Differential Revision: https://reviews.llvm.org/D132860
This change generalizes getBufferType. This function can be used to predict the buffer type of any tensor value (not just BlockArguments) without changing any IR. It also subsumes getMemorySpace. This is useful for loop bufferization, where the precise buffer type of an iter_arg cannot be known without examining the loop body.
Differential Revision: https://reviews.llvm.org/D132859
It's only used from there, and this lets us remove the dependency from Analysis
to the Arith dialect.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D132928
tensor.pad is lowered to tensor.generate + tensor.insert_slice during bufferization. For best performance with constant padding values, users should vectorize the IR before bufferizing it.
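A sketch of the lowering on a static 1-D example (names and shapes are illustrative):
```
// Input:
%padded = tensor.pad %t low[1] high[2] {
^bb0(%i: index):
  tensor.yield %cst : f32
} : tensor<10xf32> to tensor<13xf32>

// Lowered form: materialize the padded shape, then insert the source.
%gen = tensor.generate {
^bb0(%i: index):
  tensor.yield %cst : f32
} : tensor<13xf32>
%res = tensor.insert_slice %t into %gen[1][10][1] : tensor<10xf32> into tensor<13xf32>
```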
This change also relaxes the restriction that no new ops that bufferize to a memory write may be added during bufferization. Since bufferization was split into two steps a while ago (tensor copy insertion + bufferization), it is reasonable to allow this now.
Differential Revision: https://reviews.llvm.org/D132355
bufferization.to_memref ops are not supported in One-Shot Analysis. They often trigger a failed assertion that can be confusing. Instead, scan for to_memref ops before running the analysis and immediately abort with a proper error message.
Differential Revision: https://reviews.llvm.org/D132027
AllocTensorElimination currently does not support chains where the type
is changing. It used to generate invalid IR for such inputs. With this
commit, AllocTensorElimination no longer applies to such inputs. (It
can be extended to support such IR if needed.)
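For example, a type-changing chain might look as follows (a sketch; `%t` and `%sz` are assumed to be defined elsewhere). The alloc_tensor reaches its insertion point only through a tensor.cast, so the types along the use-def chain differ and elimination now bails out:
```
%0 = bufferization.alloc_tensor() : tensor<5xf32>
%1 = tensor.cast %0 : tensor<5xf32> to tensor<?xf32>
%2 = tensor.insert_slice %1 into %t[0][%sz][1] : tensor<?xf32> into tensor<10xf32>
```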
Differential Revision: https://reviews.llvm.org/D131880
This reland includes changes to the Python bindings.
Switch variadic operand and result segment size attributes to use the
dense i32 array. Dense integer arrays were introduced primarily to
represent index lists. They are a better fit for segment sizes than
dense elements attrs.
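For example, for a hypothetical op with two variadic operand groups:
```
// Before: dense elements attr.
"test.op"(%a, %b, %c) {operand_segment_sizes = dense<[2, 1]> : vector<2xi32>} : (f32, f32, f32) -> ()
// After: dense i32 array.
"test.op"(%a, %b, %c) {operand_segment_sizes = array<i32: 2, 1>} : (f32, f32, f32) -> ()
```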
Depends on D131801
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D131803
Using a loop init_arg inside the loop is not supported. This change adds a pre-processing pass that resolves such IR with copies.
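For example (a sketch; `%t` and the loop bounds are assumed to be defined elsewhere):
```
// Not supported: the init_arg %t is used again inside the loop body
// instead of the iter_arg %a; the pre-processing resolves this with a copy.
%r = scf.for %iv = %c0 to %c10 step %c1 iter_args(%a = %t) -> (tensor<10xf32>) {
  %v = tensor.extract %t[%iv] : tensor<10xf32>
  %w = tensor.insert %v into %a[%iv] : tensor<10xf32>
  scf.yield %w : tensor<10xf32>
}
```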
Differential Revision: https://reviews.llvm.org/D131689
Switch variadic operand and result segment size attributes to use the
dense i32 array. Dense integer arrays were introduced primarily to
represent index lists. They are a better fit for segment sizes than
dense elements attrs.
Depends on D131738
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D131702
In the Transform dialect extensions, provide a separate mechanism to
declare dependent dialects (the dialects the transform IR depends on)
and generated dialects (the dialects the payload IR may be
transformed into). This allows Transform dialect clients that only
construct the transform IR to avoid loading the dialects relevant to
the payload IR along with the Transform dialect itself, thus
decreasing build/link time.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D130289