Commit Graph

20 Commits

Author SHA1 Message Date
Alex Zinenko
91bd1156f3 [mlir] drop unused statics 2021-01-26 13:30:45 +01:00
Nicolas Vasilache
05d5125d8a [mlir] Generalize OpFoldResult usage in ops with offsets, sizes and strides.
This revision starts evolving the APIs to manipulate ops with offsets, sizes and strides towards a ValueOrAttr abstraction that is already used in folding under the name OpFoldResult.

The objective, in the future, is to allow such manipulations all the way to the level of ODS to avoid all the genuflexions involved in distinguishing between values and attributes for generic constant foldings.

Once this evolution is accepted, the next step will be a mechanical OpFoldResult -> ValueOrAttr rename.

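To make the ValueOrAttr direction concrete, here is a minimal hypothetical sketch (the helper name materializeIndex and the use of std.constant are assumptions, not part of this revision). An OpFoldResult is a PointerUnion<Attribute, Value>, so callers can treat static attributes and dynamic SSA values uniformly and only materialize a constant when a Value is actually required:

```
#include "mlir/Dialect/StandardOps/IR/Ops.h"
#include "mlir/IR/OpDefinition.h"

using namespace mlir;

// Hypothetical helper: turn an index-typed OpFoldResult (a static
// Attribute or a dynamic Value) into an SSA Value.
static Value materializeIndex(OpBuilder &b, Location loc, OpFoldResult ofr) {
  // Static case: the offset/size/stride is an IntegerAttr.
  if (auto attr = ofr.dyn_cast<Attribute>())
    return b.create<ConstantIndexOp>(loc, attr.cast<IntegerAttr>().getInt());
  // Dynamic case: the OpFoldResult already holds an SSA Value.
  return ofr.get<Value>();
}
```
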
Differential Revision: https://reviews.llvm.org/D95310
2021-01-25 14:17:03 +00:00
Rob Suderman
f75f391fc6 [MLIR][Linalg] Refactor transforms to use linalg::getDynOperands helper
The getDynOperands behavior is commonly used in a number of passes. These
passes are refactored to use a helper function, avoiding code duplication.

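For reference, a sketch of roughly what such a helper does (approximate, not the verbatim implementation): walk the shaped type of a value and emit one dim op per dynamic dimension.

```
#include "mlir/Dialect/StandardOps/IR/Ops.h"

using namespace mlir;

// Approximate sketch: collect a `dim` Value for each dynamic dimension
// of `val`, in dimension order.
static SmallVector<Value, 4> getDynOperands(Location loc, Value val,
                                            OpBuilder &b) {
  SmallVector<Value, 4> dynOperands;
  auto shapedType = val.getType().cast<ShapedType>();
  for (auto dim : llvm::enumerate(shapedType.getShape())) {
    if (dim.value() == ShapedType::kDynamicSize)
      dynOperands.push_back(b.create<DimOp>(loc, val, dim.index()));
  }
  return dynOperands;
}
```
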
Differential Revision: https://reviews.llvm.org/D94340
2021-01-11 16:24:59 -08:00
nicolasvasilache
b7ae1d3d2b [mlir][Linalg] Revisit the Linalg on tensors abstraction
This revision drops init_tensor arguments from Linalg on tensors and instead unifies output buffers and output tensors so they are handled consistently.
This significantly simplifies the usage of Linalg on tensors and is a stepping stone for
its evolution towards a mixed tensor and shape abstraction discussed in https://llvm.discourse.group/t/linalg-and-shapes/2421/19.

Differential Revision: https://reviews.llvm.org/D93469
2020-12-21 12:29:10 -08:00
Christian Sigg
c4a0405902 Add Operation* OpState::operator->() to provide more convenient access to members of Operation.
Given that OpState already implicitly converts to Operation*, this seems reasonable.

The alternative would be to add more functions to OpState which forward to Operation.

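For illustration, a small hedged sketch of the difference (the function, the FuncOp operand, and the "tag" attribute are placeholders):

```
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinOps.h"

using namespace mlir;

void example(FuncOp someOp, OpBuilder &b) {
  // Before: reach Operation members through an explicit conversion.
  someOp.getOperation()->setAttr("tag", b.getUnitAttr());
  // After: operator->() forwards directly to the underlying Operation.
  someOp->setAttr("tag", b.getUnitAttr());
}
```
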
Reviewed By: rriddle, ftynse

Differential Revision: https://reviews.llvm.org/D92266
2020-12-02 15:46:20 +01:00
Alexander Belyaev
fd92c5dbee [mlir][linalg] Add bufferization pattern for linalg.indexed_generic.
Differential Revision: https://reviews.llvm.org/D92014
2020-11-24 11:14:21 +01:00
Nicolas Vasilache
9ac0b314a4 [mlir][Linalg] Drop symbol_source abstraction which does not pay for itself.
Differential Revision: https://reviews.llvm.org/D91956
2020-11-23 12:43:02 +00:00
Nicolas Vasilache
01c4418544 [mlir][Linalg] NFC - Factor out Linalg functionality for shape and loop bounds computation
This revision refactors code used in various Linalg transformations and makes it a first-class citizen of the LinalgStructuredOpInterface. This is in preparation for allowing more advanced Linalg behavior but is otherwise NFC.

Differential Revision: https://reviews.llvm.org/D91863
2020-11-23 10:17:18 +00:00
River Riddle
65fcddff24 [mlir][BuiltinDialect] Resolve comments from D91571
* Move the ops to a new BuiltinOps.h
* Add file comments
2020-11-19 11:12:49 -08:00
River Riddle
73ca690df8 [mlir][NFC] Remove references to Module.h and Function.h
These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.

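A minimal sketch of the corresponding include change in downstream code (note that a later commit in this log moves the op definitions again, to BuiltinOps.h):

```
// Before: deprecated includes.
//   #include "mlir/IR/Function.h"
//   #include "mlir/IR/Module.h"
// After: ModuleOp and FuncOp come from the builtin dialect header.
#include "mlir/IR/BuiltinDialect.h"
```
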
Differential Revision: https://reviews.llvm.org/D91572
2020-11-17 00:55:47 -08:00
Sean Silva
703ef17e7a [mlir] Make linalg-bufferize run on FuncOp
That way, it runs in parallel across functions.
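
A hedged sketch of what this means structurally (the class name here is assumed, not the actual implementation): a pass anchored on functions lets the pass manager run one instance per function concurrently.

```
#include "mlir/Pass/Pass.h"

using namespace mlir;

// Sketch only: anchoring the pass on functions (FunctionPass) instead of
// the whole module allows per-function parallelism.
struct LinalgBufferizePass
    : public PassWrapper<LinalgBufferizePass, FunctionPass> {
  void runOnFunction() override {
    FuncOp func = getFunction();
    // Bufferize only this function; the pass manager may process other
    // functions in parallel.
    (void)func;
  }
};
```
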
2020-11-13 15:43:24 -08:00
Sean Silva
faa66b1b2c [mlir] Bufferize tensor constant ops
We lower them to a std.global_memref (uniqued by constant value) + a
std.get_global_memref to produce the corresponding memref value.
This allows removing Linalg's somewhat hacky lowering of tensor
constants, now that std properly supports this.

Differential Revision: https://reviews.llvm.org/D91306
2020-11-12 14:56:10 -08:00
Sean Silva
ad2f9f6745 [mlir] Fix subtensor_insert bufferization.
It was incorrect in the presence of a tensor argument with multiple
uses.

The bufferization of subtensor_insert was writing into a converted
memref operand, but there is no guarantee that the converted memref for
that operand is safe to write into. In this case, the same converted
memref is written to in-place by the subtensor_insert bufferization,
violating the tensor-level semantics.

I left some comments in a TODO about ways forward on this. I will be
working actively on this problem in the coming days.

Differential Revision: https://reviews.llvm.org/D91371
2020-11-12 14:56:09 -08:00
Aart Bik
e1dbc25ee2 [mlir][sparse] integrate sparse annotation into generic linalg op
This CL integrates the new sparse annotations (heretofore merely added as
fully transparent attributes) more tightly into the generic linalg op, in
order to add verification of the annotations' consistency and to make other
passes more aware of their presence (in the long run, rewriting rules must
preserve the integrity of the annotations).

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D91224
2020-11-11 17:26:30 -08:00
Nicolas Vasilache
6fc3a44394 [mlir][Linalg] Add support for bufferization of SubTensorOp and SubTensorInsertOp
This revision adds support for bufferization by using a mix of `tensor_load`, `subview`, `linalg.copy` and `tensor_to_memref`.
2020-11-09 16:55:36 +00:00
Sean Silva
eb8d386d51 [mlir] Make linalg-bufferize a composable bufferization pass
Previously, linalg-bufferize was a "finalizing" bufferization pass (it
did a "full" conversion). This wasn't great because it couldn't be used
composably with other bufferization passes like std-bufferize and
scf-bufferize.

This patch makes linalg-bufferize a composable bufferization pass.
Notice that the integration tests are switched over to using a pipeline
of std-bufferize, linalg-bufferize, and (to finalize the conversion)
func-bufferize. It all "just works" together.

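As a hedged sketch of what "composable" looks like in a conversion-based bufferization pass (shown as just the runOnFunction body; the pattern-population function name is assumed): only this pass's ops are marked illegal, and a partial conversion leaves everything else to the other bufferization passes.

```
void runOnFunction() override {
  MLIRContext *context = &getContext();
  BufferizeTypeConverter typeConverter;
  OwningRewritePatternList patterns;
  ConversionTarget target(*context);

  // Register only this pass's patterns (function name assumed).
  populateLinalgBufferizePatterns(context, typeConverter, patterns);

  // Linalg ops are legal once their operand/result types are bufferized;
  // ops from other dialects are untouched, which is what lets the pass
  // compose with std-bufferize, scf-bufferize, and func-bufferize.
  target.addDynamicallyLegalDialect<linalg::LinalgDialect>(
      [&](Operation *op) { return typeConverter.isLegal(op); });

  // Partial conversion: anything not explicitly illegal is left as-is.
  if (failed(applyPartialConversion(getFunction(), target,
                                    std::move(patterns))))
    signalPassFailure();
}
```
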
While doing this transition, I ran into a nasty bug in the 1-use special
case logic for forwarding init tensors. That logic, while
well-intentioned, was fundamentally flawed, because it assumed that if
the original tensor value had one use, then the converted memref could
be mutated in place. That assumption is wrong in many cases. For
example:

```
  %0 = some_tensor : tensor<4xf32>
  br ^bb0(%0, %0: tensor<4xf32>, tensor<4xf32>)
^bb0(%bbarg0: tensor<4xf32>, %bbarg1: tensor<4xf32>):
  // %bbarg0 is an alias of %bbarg1. We cannot safely write
  // to it without analyzing uses of %bbarg1.
  linalg.generic ... init(%bbarg0) {...}
```

A similar example can happen in many scenarios with function arguments.
Even more sinister, if the converted memref is produced by a
`std.get_global_memref` of a constant global memref, then we might
attempt to write into read-only statically allocated storage! Not all
memrefs are writable!

Clearly, this 1-use check is not a local transformation that we can do
on the fly in this pattern, so I removed it.

The test is now drastically shorter and I basically rewrote the CHECK
lines from scratch because:
- the new composable linalg-bufferize just doesn't do as much, so there
is less to test
- a lot of the tests were related to the 1-use check, which is now gone,
so there is less to test
- the `-buffer-hoisting -buffer-deallocation` passes are no longer mixed in, so
the checks related to them had to be rewritten

Differential Revision: https://reviews.llvm.org/D90657
2020-11-04 10:16:55 -08:00
Sean Silva
30e130c3ed [mlir] Move some linalg patterns around.
The bufferization patterns are moved to the .cpp file, which is
preferred in the codebase when it makes sense.

The LinalgToStandard patterns are kept in a header because they are
expected to be used individually. However, they are moved to
LinalgToStandard.h, which is the file corresponding to where they are
defined.

This also removes TensorCastOpConverter, which is handled by
populateStdBufferizePatterns now. Eventually, the constant op lowering
will be handled as well, but there are currently holdups on moving
it (see https://reviews.llvm.org/D89916).

Differential Revision: https://reviews.llvm.org/D90254
2020-10-30 13:48:03 -07:00
River Riddle
3fffffa882 [mlir][Pattern] Add a new FrozenRewritePatternList class
This class represents a rewrite pattern list that has been frozen, and is thus immutable. This replaces the uses of OwningRewritePatternList in pattern-driver-related APIs, such as dialect conversion. When PDL becomes more prevalent, this API will allow for optimizing a set of patterns once, without the need to do so per run of a pass.

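A brief usage sketch (MyPattern is a placeholder for a real pattern class): build the list once, freeze it, and hand the frozen form to a driver.

```
#include "mlir/Rewrite/FrozenRewritePatternList.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

using namespace mlir;

void runPatterns(Operation *op, MLIRContext *context) {
  OwningRewritePatternList patterns;
  patterns.insert<MyPattern>(context); // MyPattern: placeholder.
  // Freeze once: the list becomes immutable and can be shared across
  // pass runs without re-processing the patterns each time.
  FrozenRewritePatternList frozen(std::move(patterns));
  (void)applyPatternsAndFoldGreedily(op, frozen);
}
```
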
Differential Revision: https://reviews.llvm.org/D89104
2020-10-26 18:01:06 -07:00
Sean Silva
9a14cb53cb [mlir][bufferize] Rename BufferAssignment* to Bufferize*
Part of the refactor discussed in:
https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17

Differential Revision: https://reviews.llvm.org/D89271
2020-10-14 12:39:16 -07:00
Sean Silva
9ca97cde85 [mlir] Linalg refactor for using "bufferize" terminology.
Part of the refactor discussed in:
https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17

Differential Revision: https://reviews.llvm.org/D89261
2020-10-14 12:39:15 -07:00