Block arguments and yielded values are not equivalent if there are not enough block arguments. This fixes #59442.
Differential Revision: https://reviews.llvm.org/D145575
If an op is unknown to the analysis, it must be treated conservatively: assume that every operand aliases with every result.
Differential Revision: https://reviews.llvm.org/D150546
Restrict the op to functions and modules. Such ops are modified in-place. The transform now consumes the handle and produces a new handle. The `target_is_module` attribute is no longer needed because a result handle is produced in either case.
Differential Revision: https://reviews.llvm.org/D147446
`restrict` is similar to the C `restrict` keyword. Results of `to_tensor` that have the `restrict` attribute are guaranteed not to alias any other `to_tensor` result (after bufferization).
Note: Since `to_memref` ops are not supported by One-Shot Bufferize and all bufferizable ops follow DPS rules (i.e., the buffer of the result is the buffer of an operand or an alias thereof), the buffer of a `to_tensor` op that has the `restrict` attribute is always an entirely "new" buffer that does not alias with the future buffer of any tensor value in the entire program. This makes such `to_tensor` ops "safe" from a bufferization perspective; they cannot cause RaW conflicts.
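A minimal sketch of the syntax (assuming a memref value `%m` is in scope):
```
// The result tensor is guaranteed not to alias any other to_tensor result.
%0 = bufferization.to_tensor %m restrict : memref<4xf32>
```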
Differential Revision: https://reviews.llvm.org/D144021
The current bufferization on function boundaries works on `func.func`
and any call op implementing `CallOpInterface`, but then throws an error
if it encounters a `CallOpInterface` op that is not a `func.call`. This is
unnecessary and breaks the pass whenever such an op occurs (e.g.,
`llvm.call`). This PR simply restricts the handling of call ops to
`func.call`.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D143724
Reading from tensor.empty or bufferization.alloc_tensor (without copy) cannot cause a conflict because these ops do not specify the contents of their result tensors.
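For illustration, a minimal sketch (assuming `%f`, `%c0`, and `%c1` are defined):
```
%0 = tensor.empty() : tensor<4xf32>
// Writes to %0's future buffer; may bufferize in-place.
%1 = tensor.insert %f into %0[%c0] : tensor<4xf32>
// Reading %0 afterwards would normally conflict with the in-place write,
// but tensor.empty does not specify its contents, so no copy is needed.
%2 = tensor.extract %0[%c1] : tensor<4xf32>
```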
Differential Revision: https://reviews.llvm.org/D143183
The checks were too strict, and by the time the patch was submitted,
the output of the test had changed.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D142969
OneShotModuleBufferize fails if the input IR cannot be analyzed.
One can set CopyBeforeWrite=true in order to skip analysis.
In that case, a buffer copy is inserted on every write.
This leads to many copies, including in FuncOps that could have been analyzed.
This change aims to copy buffers only when necessary.
When running OneShotModuleBufferize with CopyBeforeWrite=false,
FuncOps whose names are specified in noAnalysisFuncFilter will not be
analyzed, nor will the ops within them. They will be bufferized with
CopyBeforeWrite=true, while the remaining ops will be bufferized with
CopyBeforeWrite=false.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D142631
Unranked tensors currently cannot be copied. They are forced to always bufferize in-place. Typically, some other OpOperand can bufferize out-of-place instead if needed.
Note: There is IR that cannot be bufferized with One-Shot Bufferize at the moment (see invalid test case). But it is unclear if we need to support such cases. We do not have a use case at the moment. This restriction could be loosened in the future if needed.
This change improves error handling when bufferizing IR where an unranked tensor would be copied. It also disables an optimization where an OpResult was copied instead of an OpOperand in cases where the OpResult is an unranked tensor (GitHub #60187).
Differential Revision: https://reviews.llvm.org/D142331
This allows much better verification messages in consuming ops that properly declare
`TransformHandleTypeInterface` on their operands.
Downstream tests can be updated with a command resembling:
```
git grep -l "structured\.match" mlir/test | xargs -i sed -i {} -e "s/\(structured.match.*\)/\1 : (\!pdl.operation) -> \!pdl.operation/g"
```
Differential Revision: https://reviews.llvm.org/D142643
Introduce a new transform operation to replace `tensor.empty` with
`alloc_tensor` operations. The operation is a pass-through if the target
operation is already an `alloc_tensor`; otherwise, it expects a
`tensor.empty` as the target. Currently, it does not return any results.
The operation is expected to run before `one_shot_bufferize` as
`one_shot_bufferize` rejects `tensor.empty`.
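A hypothetical usage sketch (the op spelling and the untyped match form are assumptions based on the description above):
```
// Replace all tensor.empty ops in the payload with bufferization.alloc_tensor.
%0 = transform.structured.match ops{["tensor.empty"]} in %arg1
transform.bufferization.empty_tensor_to_alloc_tensor %0
```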
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D140026
External functions have no body, so they cannot be analyzed. Assume conservatively that each tensor bbArg may alias with each tensor result. Furthermore, assume that each function arg is read and written to after bufferization. This default behavior can be controlled with `bufferization.access` (similar to `bufferization.memory_layout`) in test cases.
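For example, a sketch of such a test-only annotation (the exact attribute string values are assumptions):
```
// Conservative defaults can be overridden per argument in test cases:
func.func private @ext_func(tensor<?xf32> {bufferization.access = "read"},
                            tensor<?xf32> {bufferization.access = "write"})
```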
Also fix a bug in the dialect attribute verifier, which did not run for region argument attributes.
Differential Revision: https://reviews.llvm.org/D139517
While it is unlikely to matter in practice, there is no reason
for this value to be larger than necessary. 64 bytes is the
size of a cache line on most machines, and we can fit a full
512-bit vector in it.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D139434
TensorCopyInsertion should not have been exposed as a pass. This was a flaw in the original design. It is a preparation step for bufferization and certain transforms (that would otherwise be legal) are illegal between TensorCopyInsertion and actual rewrite to MemRef ops. Therefore, even if broken down as two separate steps internally, they should be exposed as a single pass.
This change affects the sparse compiler, which uses `TensorCopyInsertionPass`. A new `SparsificationAndBufferizationPass` is added to replace all passes in the sparse tensor pipeline from `TensorCopyInsertionPass` until the actual bufferization (rewrite to memref/non-tensor). It is generally unsafe to run arbitrary passes in-between, in particular passes that hoist tensor ops out of loops or change SSA use-def chains along tensor ops.
Differential Revision: https://reviews.llvm.org/D138915
The previous error message was confusing. Also improve code documentation and some minor code cleanups.
Differential Revision: https://reviews.llvm.org/D138902
Adds RegionBranchOpInterface to the AffineIf op and tests it
using the buffer deallocation pass.
Reviewed By: bondhugula
Differential Revision: https://reviews.llvm.org/D130962
MemRef has been accepting a general Attribute as its memory space for
a long time. This commit updates the bufferization side to catch up,
which allows downstream users to plug in customized symbolic memory
spaces. This also eliminates quite a few calls to
`getMemorySpaceAsInt`, which is deprecated.
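For instance, a sketch with a hypothetical downstream attribute `#foo.mem` serving as the memory space:
```
// Any attribute, not just an integer, can now denote the memory space.
%buf = memref.alloc() : memref<16xf32, #foo.mem<"sram">>
```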
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D138330
Expose `function-boundary-type-conversion` in `OneShotBufferizeOp`. To
reuse options between passes and transform operations, create a
`BufferizationEnums.td`.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D137833
tensor.empty op elimination is an optimization that brings IR into a more bufferization-friendly form. E.g.:
```
%0 = tensor.empty()
%1 = linalg.fill(%cst, %0) {inplace = [true]}
%2 = tensor.insert_slice %1 into %t[10][20][1]
```
is rewritten to:
```
%0 = tensor.extract_slice %t[10][20][1]
%1 = linalg.fill(%cst, %0) {inplace = [true]}
%2 = tensor.insert_slice %1 into %t[10][20][1]
```
This optimization used to operate on bufferization.alloc_tensor ops. This is not correct because the documentation of bufferization.alloc_tensor says that it always bufferizes to an allocation. Instead, this optimization should operate on tensor.empty ops, which can then be lowered to bufferization.alloc_tensor ops (if they don't get eliminated).
Differential Revision: https://reviews.llvm.org/D137162
This fixes an issue in One-Shot Bufferize that could lead to missing buffer copies in the future. This bug currently cannot be triggered because of the order in which ops are analyzed (always bottom-to-top). However, if we consider different traversal orders for the analysis in the future, this bug could cause subtle issues that are difficult to debug.
Example:
```
%0 = ...
%1 = tensor.insert ... into %0
%2 = tensor.extract_slice %0
tensor.extract %2[...]
```
In case of a top-to-bottom analysis of the above IR, the `tensor.insert` is analyzed before the `tensor.extract_slice`. In that case, the `tensor.insert` will bufferize in-place because %2 is not yet known to become an alias of %0 (which would cause a conflict).
With this change, the `tensor.insert` will bufferize out-of-place, regardless of the traversal order.
Differential Revision: https://reviews.llvm.org/D135049
Many tests wrap the piece of the IR related to the transform dialect
into `transform.with_pdl_patterns` without actually using PDL patterns
inside. Some of these are leftovers from the migration to `structured.match`
and others are cargo cult; both are useless and pollute the tests.
Reviewed By: guraypp
Differential Revision: https://reviews.llvm.org/D135661
Use the recently introduced TransformTypeInterface instead of hardcoding
the PDLOperationType. This will allow the operations to use more
specific transform types to express pre/post-conditions in the future.
It requires the syntax and Python op construction API to be updated.
Dialect extensions will be switched separately.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D135584
tensor.empty/linalg.init_tensor produces an uninitialized tensor that can be used as a destination operand for destination-style ops (ops that implement `DestinationStyleOpInterface`).
This change makes it possible to implement `TilingInterface` for non-destination-style ops without depending on the Linalg dialect.
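For example, a minimal sketch of `tensor.empty` as a destination (assuming `%cst` and `%d` are defined):
```
// Allocate-like placeholder with one dynamic dimension.
%0 = tensor.empty(%d) : tensor<?x5xf32>
// Used as the destination operand of a destination-style op.
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x5xf32>) -> tensor<?x5xf32>
```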
RFC: https://discourse.llvm.org/t/rfc-add-tensor-from-shape-operation/65101
Differential Revision: https://reviews.llvm.org/D135129
One of the test cases matched IR from a subsequent test case. For this reason, the test case appeared to pass while it was actually broken.
This change does not fix the test case itself. It will be fixed when we overhaul the buffer deallocation implementation. (The memory leak in this test case is an edge case.)
Differential Revision: https://reviews.llvm.org/D135046
All relevant operations have been switched to primarily use the strided
layout, but still support the affine map layout. Update the relevant
tests to use the strided format instead for compatibility with how ops
now print by default.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D134045
Bufferization already makes the assumption that buffers pass function
boundaries in the strided form and uses the corresponding affine map layouts.
Switch it to use the recently introduced strided layout instead to avoid
unnecessary casts when bufferizing further operations to the memref dialect
counterparts that now largely rely on the strided layout attribute.
Depends On D133947
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D133951
The following three ops in the memref dialect: transpose, expand_shape, and
collapse_shape, were originally designed to operate on memrefs with
strided layouts but had to go through the affine map representation as the type
did not support anything else. Make these ops produce memref values with
StridedLayoutAttr instead now that it is available.
Depends On D133938
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D133947
The memref subview operation was initially designed to work only on memrefs
with strided layouts and has never supported anything else. Port it to use the
recently added StridedLayoutAttr instead of extracting the strides
implicitly from affine maps.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D133938
Introduce a new attribute to represent the strided memref layout. Strided
layouts are omnipresent in code generation flows and are the only kind of
layout produced and supported by half of the operations in the memref dialect
(view-related, shape-related). However, they are internally represented as
affine maps, which requires a somewhat fragile extraction of the strides from
the linear form and also comes with overhead. Furthermore, the textual
representation of strided layouts as affine maps is difficult to read: compare
`affine_map<(d0, d1, d2)[s0, s1] -> (d0*32 + d1*s0 + s1 + d2)>` with
`strides: [32, ?, 1], offset: ?`. While a rudimentary support for parsing a
syntactically sugared version of the strided layout has existed in the codebase
for a long time, it does not go as far as this commit to make the strided
layout a first-class attribute in the IR.
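For illustration, a sketch contrasting the two spellings of the same layout:
```
// These two function argument types carry equivalent layouts:
func.func private @takes_strided(
    memref<?x?xf32, affine_map<(d0, d1)[s0, s1] -> (d0 * s0 + d1 + s1)>>,
    memref<?x?xf32, strided<[?, 1], offset: ?>>)
```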
This introduces the attribute and updates the tests that use the pre-existing
sugared form to use the new attribute instead. Most memrefs created
programmatically, e.g., in passes, still use the affine form with further
extraction of strides and will be updated separately.
Also update and clean up the memref type documentation, which had gotten stale
and still referred to details of affine map composition that are long gone.
See https://discourse.llvm.org/t/rfc-materialize-strided-memref-layout-as-an-attribute/64211.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D132864
Currently, buffer deallocation considers arith.select to be
non-aliasing, which results in deallocs being inserted incorrectly. Since
arith.select doesn't implement any useful interfaces, this change just handles
it explicitly. Eventually this should probably be fixed properly, if this pass
is going to be used long term.
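For example (a sketch):
```
// %m aliases either %m0 or %m1 depending on %cond; inserting deallocs as if
// it were a fresh allocation would free the selected buffer twice.
%m = arith.select %cond, %m0, %m1 : memref<4xf32>
```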
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D132460
bufferization.to_memref ops are not supported in One-Shot Analysis. They often trigger a failed assertion that can be confusing. Instead, scan for to_memref ops before running the analysis and immediately abort with a proper error message.
Differential Revision: https://reviews.llvm.org/D132027
AllocTensorElimination currently does not support chains where the type is
changing. It used to generate invalid IR for such
inputs. With this commit, AllocTensorElimination no longer applies to
such inputs. (It can be extended to support such IR if needed.)
Differential Revision: https://reviews.llvm.org/D131880
Introduce two different failure propagation modes in the Transform
dialect's Sequence operation. These modes specify whether silenceable
errors produced by nested ops are immediately propagated, thus stopping
the sequence, or suppressed. The latter is useful in end-to-end
transform application scenarios where the user cannot correct the
transformation but the pipeline is robust enough to tolerate silenceable
failures. It
can be combined with the "alternatives" operation. There is
intentionally no default value to avoid favoring one mode over the
other.
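A minimal sketch of the new syntax:
```
transform.sequence failures(propagate) {
^bb0(%arg0: !pdl.operation):
  // With failures(propagate), a silenceable failure of any nested op stops
  // the sequence; with failures(suppress), it is swallowed and execution
  // continues.
  transform.yield
}
```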
Downstreams can update their tests using:
```
S='s/sequence \(%.*\) {/sequence \1 failures(propagate) {/'
T='s/sequence {/sequence failures(propagate) {/'
git grep -l transform.sequence | xargs sed -i -e "$S"
git grep -l transform.sequence | xargs sed -i -e "$T"
```
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D131774
This operation is a NavigationOp that simplifies the writing of transform IR.
Since there is no way of referring to an interface by name, the current
implementation uses an EnumAttr and depends on the interfaces it supports.
In the future, it would be worthwhile to remove this dependence and generalize.
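For example, a sketch of the current form (the `interface{...}` enum keyword is an assumption):
```
// Match all payload ops implementing LinalgOp under the root handle.
%0 = transform.structured.match interface{LinalgOp} in %arg1
```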
Differential Revision: https://reviews.llvm.org/D130267
bufferization.writable is used in most cases instead. All remaining test cases are updated. Some code that is no longer needed is deleted.
Differential Revision: https://reviews.llvm.org/D129739
This change updates all remaining bufferization patterns (except for scf.while) and the remaining bufferization infrastructure to infer the memory space whenever possible instead of falling back to "0". (If a default memory space is set in the bufferization options, we still fall back to that value if the memory space could not be inferred.)
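For illustration, a sketch (assuming `%f` and `%idx` are defined):
```
%0 = bufferization.to_tensor %m : memref<4xf32, 1>
// The buffer of %1 now inherits memory space 1 from %m instead of
// falling back to the default space 0.
%1 = tensor.insert %f into %0[%idx] : tensor<4xf32>
```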
Differential Revision: https://reviews.llvm.org/D128423
This allows for better type inference during bufferization and is in preparation for supporting memory spaces.
Differential Revision: https://reviews.llvm.org/D128581