Commit Graph

80 Commits

Author SHA1 Message Date
Tres Popp
f30f347da1 [mlir][shape] Generalize broadcast to a variadic number of shapes
Previously, broadcast was a binary op; now it supports a variadic number
of shapes. This has been done in such a way that, for now, it is an NFC
for all broadcast operations that were previously legal.

Differential Revision: https://reviews.llvm.org/D95777
2021-02-10 08:31:28 +01:00
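A minimal sketch of the generalized form, with hypothetical values:

```mlir
// Broadcast is no longer restricted to two operands.
%w = shape.broadcast %a, %b, %c
    : !shape.shape, !shape.shape, !shape.shape -> !shape.shape
```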
Tres Popp
c2c83e97c3 Revert "Revert "Reorder MLIRContext location in BuiltinAttributes.h""
This reverts commit 511dd4f438 along with
a couple of fixes.

Original message:
Now the context is the first input, rather than the last.

This better matches the rest of the infrastructure and makes
it easier to move these types to being declaratively specified.

Phabricator: https://reviews.llvm.org/D96111
2021-02-08 10:39:58 +01:00
Tres Popp
511dd4f438 Revert "Reorder MLIRContext location in BuiltinAttributes.h"
This reverts commit 7827753f98.
2021-02-08 09:32:42 +01:00
Tres Popp
7827753f98 Reorder MLIRContext location in BuiltinAttributes.h
Now the context is the first input, rather than the last.

This better matches the rest of the infrastructure and makes
it easier to move these types to being declaratively specified.

Differential Revision: https://reviews.llvm.org/D96111
2021-02-08 09:28:09 +01:00
Tres Popp
bc8d8e69a6 [mlir] Fold shape.eq %a, %a to true
Differential Revision: https://reviews.llvm.org/D95430
2021-01-27 16:22:15 +01:00
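A sketch of the fold (the op is spelled `shape.shape_eq` in the dialect; value names are hypothetical):

```mlir
%eq = shape.shape_eq %a, %a : !shape.shape, !shape.shape
// Comparing a shape with itself folds to a constant true:
%true = constant true
```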
Jacques Pienaar
8d541a1fbe [mlir][shape] Add shape.lib attribute
Enable querying shape function library ops from the module. Currently
supports a single op or an array of them (as long as the array has all
unique ops in its mappings). The preferred canonical form would have one
library, but given the invariant on the mapping, this can easily be
achieved by a simple merging pass.

The attribute approach was preferred over a naming convention because these
libraries could be added in multiple different ways.
2020-12-31 14:46:08 -08:00
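Illustrative sketches of the attribute, based on the description above; the exact payload forms are assumptions:

```mlir
// A module with a single shape function library...
module attributes {shape.lib = @shape_lib} {}
// ...or several, as long as all mappings contain unique ops.
module attributes {shape.lib = [@lib_a, @lib_b]} {}
```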
Sean Silva
129d6e554e [mlir] Move std.tensor_cast -> tensor.cast.
This is almost entirely mechanical.

Differential Revision: https://reviews.llvm.org/D93357
2020-12-17 16:06:56 -08:00
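The mechanical change, sketched:

```mlir
// Before: the cast was a standard-dialect op.
%0 = tensor_cast %t : tensor<*xf32> to tensor<2xf32>
// After: it lives in the tensor dialect.
%0 = tensor.cast %t : tensor<*xf32> to tensor<2xf32>
```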
River Riddle
186c154991 [mlir] Remove the dependency on StandardOps from FoldUtils
OperationFolder currently uses ConstantOp as a backup when trying to materialize a constant after an operation is folded. This dependency isn't really useful or necessary given that dialects can/should provide a `materializeConstant` implementation.

Fixes PR#44866

Differential Revision: https://reviews.llvm.org/D92980
2020-12-10 14:13:57 -08:00
Rahul Joshi
b0d02b698b [MLIR] Minor cleanup for Shape dialect.
- Remove some unused types from the Shape dialect
- Fix from_extent_tensor to only allow 1D index tensors
- Fix assuming_yield to only allow shape.assuming as the parent op.
- Fix some documentation typos and reword some things.

Differential Revision: https://reviews.llvm.org/D92901
2020-12-09 14:21:35 -08:00
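A sketch of the tightened from_extent_tensor constraint (value names hypothetical):

```mlir
// Valid: a 1D index tensor, one element per dimension.
%s = shape.from_extent_tensor %t : tensor<?xindex>
// Now rejected: index tensors of any other rank, e.g. tensor<?x?xindex>.
```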
Christian Sigg
0bf4a82a5a [mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation. This is a preparation step to remove the corresponding methods from OpState.
Reviewed By: silvas, rriddle

Differential Revision: https://reviews.llvm.org/D92878
2020-12-09 12:11:32 +01:00
Benjamin Kramer
5844bc540c [mlir][Shape] Canonicalize assume_all with one input and tensor_cast of constant_shape
This allows simplifying some more complicated shape expressions.

Differential Revision: https://reviews.llvm.org/D92843
2020-12-08 17:07:24 +01:00
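A sketch of the single-input case (value names hypothetical):

```mlir
// A single-operand assuming_all adds no information, so it
// canonicalizes away and uses of %w are replaced by %w0.
%w = shape.assuming_all %w0
```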
River Riddle
09f7a55fad [mlir][Types][NFC] Move all of the builtin Type classes to BuiltinTypes.h
This is part of a larger refactoring that better congregates the builtin structures under the BuiltinDialect. This also removes the problematic "standard" naming that clashes with the "standard" dialect, which is not defined within IR/. A temporary forward is placed in StandardTypes.h to allow time for downstream users to replace references.

Differential Revision: https://reviews.llvm.org/D92435
2020-12-03 18:02:10 -08:00
Jacques Pienaar
e534cee26a [mlir] Add a shape function library op
Op with a mapping from ops to corresponding shape functions for those ops
in the library, and a mechanism to associate shape functions to functions.
The mapping from op to shape function is kept separate from the shape
functions themselves, as the operation is associated to the shape
function and not vice versa, and one could have a common library of
shape functions that can be used in different contexts.

Use fully qualified names, require a name for shape function library ops for
now, and use an explicit print/parse (based around the generated one and the
GPU module op ones).

This commit reverts d9da4c3e73. Fixes
missing headers (don't know how that was working locally).

Differential Revision: https://reviews.llvm.org/D91672
2020-11-29 11:15:30 -08:00
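An illustrative sketch of such a library op, adapted from the op's documentation; the mapped op and function names are examples only:

```mlir
shape.function_library @shape_lib {
  // Shape function that simply forwards the shape of its argument.
  func @same_result_shape(%arg : !shape.value_shape) -> !shape.shape {
    %0 = shape.shape_of %arg : !shape.value_shape -> !shape.shape
    return %0 : !shape.shape
  }
} mapping {
  std.atan2 = @same_result_shape
}
```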
Mehdi Amini
d9da4c3e73 Revert "[mlir] Add a shape function library op"
This reverts commit 6dd9596b19.

Build is broken.
2020-11-29 05:28:42 +00:00
Jacques Pienaar
6dd9596b19 [mlir] Add a shape function library op
Op with a mapping from ops to corresponding shape functions for those ops
in the library, and a mechanism to associate shape functions to functions.
The mapping from op to shape function is kept separate from the shape
functions themselves, as the operation is associated to the shape
function and not vice versa, and one could have a common library of
shape functions that can be used in different contexts.

Use fully qualified names, require a name for shape function library ops for
now, and use an explicit print/parse (based around the generated one and the
GPU module op ones).

Differential Revision: https://reviews.llvm.org/D91672
2020-11-28 15:53:59 -08:00
River Riddle
fa4174792a [mlir][Inliner] Add a wouldBeCloned flag to each of the isLegalToInline hooks.
Oftentimes the legality of inlining can change depending on whether the callable is going to be inlined in place or cloned. For example, some operations are not allowed to be duplicated and can only be inlined if the original callable will cease to exist afterwards. The new `wouldBeCloned` flag allows dialects to hook into this when determining legality.

Differential Revision: https://reviews.llvm.org/D90360
2020-10-28 21:49:28 -07:00
Tres Popp
ffdd4a46a9 [mlir] Shape.AssumingOp implements RegionBranchOpInterface.
This adds support for the interface and provides unambiguous information
on the control flow, as it is unconditional on any runtime values.
The code is tested by confirming that buffer placement behaves as
expected.

Differential Revision: https://reviews.llvm.org/D87894
2020-09-21 11:33:11 +02:00
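A sketch of the op whose region now implements the interface; "test.produce" is a placeholder op:

```mlir
// Control flow into the region is unconditional on runtime values,
// which the interface now communicates to analyses such as
// buffer placement.
%r = shape.assuming %w -> (tensor<2xf32>) {
  %t = "test.produce"() : () -> tensor<2xf32>
  shape.assuming_yield %t : tensor<2xf32>
}
```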
Sean Silva
bae6374205 [mlir][shape] Add shape.cstr_require %bool
This op is a catch-all for creating witnesses from various kinds
of constraints. In particular, when dealing with extents directly,
which are of `index` type, one can use std ops directly to compute
the predicates, and then use cstr_require for the final conversion to a
witness.

Differential Revision: https://reviews.llvm.org/D87871
2020-09-17 16:56:43 -07:00
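A sketch of the intended use, with hypothetical extents; the error-message attribute shown here is an assumption and may postdate this commit:

```mlir
// Compute a predicate on `index`-typed extents with std ops
// (quoted predicate syntax of the time), then convert it to a witness.
%ok = cmpi "eq", %e0, %e1 : index
%w = shape.cstr_require %ok, "extents must be equal"
```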
Tres Popp
b05629230e [mlir] Remove redundant shape.cstr_broadcastable canonicalization.
These canonicalizations are already handled by folding, which occurs
in a superset of situations, so they are being removed.

Differential Revision: https://reviews.llvm.org/D87706
2020-09-17 09:01:13 +02:00
Federico Lebrón
7d1ed69c8a Make namespace handling uniform across dialect backends.
Now backends spell out which namespace they want to be in, instead of relying on
clients #including them inside already-opened namespaces. This also means that
cppNamespaces should be fully qualified, and there's no implicit "::mlir::"
prepended to them anymore.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D86811
2020-09-14 20:33:31 +00:00
Frederik Gossen
136eb79a88 [MLIR][Standard] Add dynamic_tensor_from_elements operation
With `dynamic_tensor_from_elements`, tensor values of dynamic size can be
created. The body of the operation essentially maps the index space to tensor
elements.

Declare SCF operations in the `scf` namespace to avoid name clash with the new
`std.yield` operation. Resolve ambiguities between `linalg/shape/std/scf.yield`
operations.

Differential Revision: https://reviews.llvm.org/D86276
2020-09-07 11:44:43 +00:00
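A minimal sketch of the new op for a 1D tensor (value names hypothetical):

```mlir
// The body maps each index of the result to an element value.
%t = dynamic_tensor_from_elements %n {
^bb0(%i : index):
  %elem = index_cast %i : index to i32
  yield %elem : i32
} : tensor<?xi32>
```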
River Riddle
65277126bf [mlir][Type] Remove the remaining usages of Type::getKind in preparation for its removal
This revision removes all of the lingering usages of Type::getKind. A consequence of this is that FloatType is now split into 4 derived types that represent each of the possible float types (BFloat16Type, Float16Type, Float32Type, and Float64Type). Other than this split, this revision is NFC.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D85566
2020-08-12 19:33:58 -07:00
Feng Liu
5c9c4ade9d Add the inline interface to the shape dialect
This patch also fixes a minor issue: shape.rank should allow
returning !shape.size. The dialect doc has such an example for
shape.rank.

Differential Revision: https://reviews.llvm.org/D85556
2020-08-07 23:29:43 -07:00
Mehdi Amini
575b22b5d1 Revisit Dialect registration: require and store a TypeID on dialects
This patch moves the registration to a method in the MLIRContext: getOrCreateDialect<ConcreteDialect>()

This method requires the dialect to provide a static getDialectNamespace()
and stores a TypeID on the Dialect itself, which allows lazily
creating a dialect when it is not yet loaded in the context.
As a side effect, it means that duplicate registration of the same
dialect is not an issue anymore.

To limit the boilerplate, TableGen dialect generation is modified to
emit the constructor entirely and to invoke a separate "init()" method
that the user implements.

Differential Revision: https://reviews.llvm.org/D85495
2020-08-07 15:57:08 +00:00
Frederik Gossen
4cd923784e [MLIR][Shape] Expose extent tensor type builder
The extent tensor type is a `tensor<?xindex>` that is used in the shape dialect.
To facilitate the use of this type when working with the shape dialect, we
expose the helper function for its construction.

Differential Revision: https://reviews.llvm.org/D85121
2020-08-05 09:42:57 +00:00
Stephan Herhut
5d9f33aaa0 [MLIR][Shape] Add conversion for missing ops to standard
This adds conversions for const_size and to_extent_tensor. Also, cast-like operations are now folded away if the source and target types are the same.

Differential Revision: https://reviews.llvm.org/D84745
2020-07-29 12:46:18 +02:00
Stephan Herhut
6d10d317d8 [MLIR][Shape] Support transforming shape.num_elements on tensors
The current transformation to shape.reduce does not support tensor values.
This adds the required changes to make that work, including fixing the builder
for shape.reduce.

Differential Revision: https://reviews.llvm.org/D84744
2020-07-28 14:13:06 +02:00
Jacques Pienaar
595d214f47 [mlir][shape] Further operand and result type generalization
Previous changes generalized some of the operands and results. Complete
a larger group of those to simplify progressive lowering. Also update
some of the declarative asm forms due to the generalization. Tried to keep it
mostly mechanical.
2020-07-25 21:41:31 -07:00
Jacques Pienaar
5142448a5e [MLIR][Shape] Refactor verification
Based on https://reviews.llvm.org/D84439 but less restrictive; otherwise we
would not allow shape_of to produce a ranked output, and it would not
allow for iterative refinement here. We can consider making it more
restrictive later.
2020-07-25 14:55:19 -07:00
Frederik Gossen
670ae4b6da [MLIR][Shape] Fold shape.mul
Implement constant folding for `shape.mul`.

Differential Revision: https://reviews.llvm.org/D84438
2020-07-24 13:30:45 +00:00
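A sketch combining this fold with the index support added in the next entry below:

```mlir
%c2 = constant 2 : index
%c3 = constant 3 : index
%p = shape.mul %c2, %c3 : index, index -> index
// Constant folding replaces %p with:
%c6 = constant 6 : index
```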
Frederik Gossen
783a351785 [MLIR][Shape] Allow shape.mul to operate on indices
Differential Revision: https://reviews.llvm.org/D84437
2020-07-24 13:25:40 +00:00
Frederik Gossen
5984d74139 [MLIR][Shape] Allow get_extent to operate on extent tensors and indices
Differential Revision: https://reviews.llvm.org/D84435
2020-07-24 11:13:17 +00:00
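A sketch of the generalized form (value names hypothetical):

```mlir
%c0 = constant 0 : index
%e = shape.get_extent %t, %c0 : tensor<?xindex>, index -> index
```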
Frederik Gossen
23a65648c0 [MLIR][Shape] Allow shape.rank to operate on extent tensors
Differential Revision: https://reviews.llvm.org/D84429
2020-07-24 10:43:39 +00:00
Frederik Gossen
d4e4d5d780 [MLIR][Shape] Allow for shape_of to return extent tensors
The operation `shape.shape_of` now returns an extent tensor `tensor<?xindex>` in
cases when no error is possible. All consuming operations will eventually accept
both shapes and extent tensors.

Differential Revision: https://reviews.llvm.org/D84160
2020-07-24 08:40:40 +00:00
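A minimal sketch:

```mlir
// For a ranked operand no error is possible, so an extent tensor
// can be produced directly.
%s = shape.shape_of %t : tensor<?x4xf32> -> tensor<?xindex>
```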
Frederik Gossen
14d3cef012 [MLIR][Shape] Generalize shape.const_shape to extent tensors
The operation `shape.const_shape` was previously used only for constants of
type shape. We can now also use it to create constant extent tensors.

Differential Revision: https://reviews.llvm.org/D84157
2020-07-24 08:06:24 +00:00
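A sketch of both uses:

```mlir
// As before: a constant of the shape type.
%a = shape.const_shape [1, 2, 3] : !shape.shape
// New: the same op builds a constant extent tensor.
%b = shape.const_shape [1, 2, 3] : tensor<3xindex>
```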
Frederik Gossen
f9595857b9 [MLIR][Shape] Fold shape.shape_eq
Fold `shape.shape_eq`.

Differential Revision: https://reviews.llvm.org/D82533
2020-07-20 12:25:53 +00:00
Frederik Gossen
0eb50e614c [MLIR][Shape] Allow shape.reduce to operate on extent tensors
Allow `shape.reduce` to take both `shape.shape` and `tensor<?xindex>` as an
argument.

Differential Revision: https://reviews.llvm.org/D83943
2020-07-16 13:53:37 +00:00
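A sketch of num_elements expressed as a reduction over an extent tensor, adapted from the op documentation (value names hypothetical):

```mlir
%c1 = constant 1 : index
%n = shape.reduce(%extents, %c1) : tensor<?xindex> -> index {
  ^bb0(%i : index, %extent : index, %acc : index):
    %new = muli %acc, %extent : index
    shape.yield %new : index
}
```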
Stephan Herhut
8ef47244b9 [mlir][shape] Fold shape.broadcast with one scalar operand
This folds shape.broadcast to the other operand when at least one operand is
a scalar.

Also add an assemblyFormat for shape.broadcast and shape.concat.

Differential Revision: https://reviews.llvm.org/D83854
2020-07-15 18:49:12 +02:00
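A sketch of the fold, written in the op's later generalized syntax for readability:

```mlir
%scalar = shape.const_shape [] : !shape.shape
%r = shape.broadcast %s, %scalar : !shape.shape, !shape.shape -> !shape.shape
// A scalar broadcasts with anything, so %r folds to %s.
```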
Tres Popp
2ef71cb7fd [mlir] Add additional Canonicalization of shape.cstr_broadcastable.
Summary:
Added canonicalization and folding:
- Folding when either input is an attribute indicating a scalar input,
which can always be broadcast.
- Canonicalization where it can be determined that either input shape is
a scalar.
- Canonicalization where the partially specified input shapes can be
proven to always be broadcastable.

Differential Revision: https://reviews.llvm.org/D83194
2020-07-09 11:23:25 +02:00
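A sketch of one of the new patterns (op syntax approximated for that point in time):

```mlir
// One operand is a known scalar shape, which broadcasts with anything,
// so the constraint folds to a trivially true witness.
%s = shape.const_shape []
%w = shape.cstr_broadcastable %a, %s
// becomes:
%w2 = shape.const_witness true
```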
Frederik Gossen
66e0f66d8f [MLIR][Shape] Canonicalize subsequent size_to_index and index_to_size
Eliminate the subsequent applications of `size_to_index` and `index_to_size`.

Differential Revision: https://reviews.llvm.org/D82083
2020-06-25 12:43:17 +00:00
Frederik Gossen
bf2a4f3b3a [MLIR][Shape] Canonicalize subsequent index_to_size and size_to_index
Eliminate the subsequent applications of `index_to_size` and `size_to_index`.

Differential Revision: https://reviews.llvm.org/D82082
2020-06-25 12:02:49 +00:00
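This entry and the previous one are mirror patterns; a sketch of the round trips that now fold away (op syntax approximated for that point in time):

```mlir
// size_to_index followed by index_to_size folds back to %s:
%i = shape.size_to_index %s
%s2 = shape.index_to_size %i
// and the reverse composition folds back to the original index %j:
%s3 = shape.index_to_size %j
%i2 = shape.size_to_index %s3
```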
Frederik Gossen
7bca97d960 [MLIR][Shape] Add canonicalization pattern for shape.rank
Replace any `rank(shape_of(tensor))` that relies on a ranked tensor with the
corresponding constant `const_size`.

Differential Revision: https://reviews.llvm.org/D82077
2020-06-25 08:39:35 +00:00
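A sketch of the pattern (op syntax approximated for that point in time):

```mlir
%s = shape.shape_of %t : tensor<2x3xf32>
%r = shape.rank %s
// %t is statically ranked, so %r is replaced by the constant:
%c2 = shape.const_size 2
```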
Frederik Gossen
81469527ec [MLIR][Shape] Add constant folding to shape.rank
Add constant folding for the `shape.rank` operation of the shape dialect.

Differential Revision: https://reviews.llvm.org/D82076
2020-06-25 08:32:25 +00:00
Tres Popp
3324598844 [mlir] Add a pass to remove all shape.cstr_ and assuming_ ops.
Summary:
This provides a utility to remove unsupported constraints, or for
pipelines that happen to receive these ops but cannot lower them because
they do not support assertions.

Differential Revision: https://reviews.llvm.org/D81560
2020-06-18 13:31:30 +02:00
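A sketch of the effect; the pass flag is an assumption:

```mlir
// mlir-opt -remove-shape-constraints
// Before:
%w = shape.cstr_broadcastable %a, %b
// After, the witness producer is replaced by a trivially true witness,
// and shape.assuming regions consuming it can be inlined away:
%w = shape.const_witness true
```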
Alexander Belyaev
e9ac792748 [mlir] Fix some of the warnings in MLIR code.
Summary:
* extra ';' in the following files:
    mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp
    mlir/lib/Dialect/Shape/IR/Shape.cpp

* base class ‘mlir::ConvertVectorToSCFBase<ConvertVectorToSCFPass>’
  should be explicitly initialized in the copy constructor [-Wextra] in
    mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp

* warning: ‘bool Expression::operator==(const Expression&) const’
  defined but not used [-Wunused-function] in
    mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-gen.cpp

Differential Revision: https://reviews.llvm.org/D81673
2020-06-11 22:18:32 +02:00
Frederik Gossen
e4184c84ca [MLIR][Shape] Make dimension an operand of get_extent
The operation `get_extent` now accepts the dimension as an operand and is no
longer limited to constant dimensions.
A helper function facilitates the common constant use case.

Differential Revision: https://reviews.llvm.org/D81248
2020-06-10 11:47:18 +00:00
msifontes
1c189d71db [mlir] Add number of operands verification for shape.assuming_all operation
Implemented a verification to ensure that the shape.assuming_all
operation always has at least one operand.
2020-06-09 09:59:04 -07:00
Frederik Gossen
215914151e [MLIR][Shape] Add support for OpAsmInterface in shape.const_size
The SSA values created with `shape.const_size` are now named depending on the
value.
A constant size of 3, e.g., is now automatically named `%c3`.

Differential Revision: https://reviews.llvm.org/D81249
2020-06-08 10:27:28 +00:00
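For example:

```mlir
// The printer now derives the SSA name from the constant value:
%c3 = shape.const_size 3
%c42 = shape.const_size 42
```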
Alexander Belyaev
250dcf61ae Revert "Revert "[MLIR] Lower shape.num_elements -> shape.reduce.""
This reverts commit a25f5cd70c.

Now the build with `-DBUILD_SHARED_LIBS=ON` is fixed.
2020-06-08 12:19:54 +02:00
Tres Popp
68a8336bf2 Revert "Revert "[mlir] Folding and canonicalization of shape.cstr_eq""
This reverts commit 12e31f6e40.
2020-06-08 10:06:55 +02:00