When casting a pybind11 handle to an MLIR C API object through
capsules, the binding code would directly access the "_CAPIPtr"
attribute on the object, leading to a rather obscure AttributeError when the
attribute was missing, e.g., on non-MLIR types. Check for its presence and
throw a TypeError instead.
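Conceptually, the new check mirrors the following Python-level logic; this is only an illustrative sketch (the actual fix lives in the C++ casters of PybindAdaptors.h, and `mlir_capsule_from` is a hypothetical helper name):
```
def mlir_capsule_from(obj):
    # Hypothetical helper illustrating the caster logic: reject objects that
    # do not carry the MLIR C API capsule instead of letting the raw
    # AttributeError escape to the user.
    if not hasattr(obj, "_CAPIPtr"):
        raise TypeError(
            f"expected an MLIR object exposing a '_CAPIPtr' capsule, "
            f"got {type(obj).__name__}")
    return obj._CAPIPtr
```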
Depends On D117646
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D117658
PDLDialect, being a somewhat user-facing dialect whose ops contain exclusively other PDL ops in their regions, can take advantage of `OpAsmOpInterface` to provide nicer IR.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D117828
The constructor function was being defined without indicating its "__init__"
name, which made Pybind interpret it as a regular function rather than a
constructor. When overload resolution failed, Pybind would attempt to print the
arguments actually passed to the function, including "self", which is not
initialized since the constructor couldn't be called. This would result in
"__repr__" being called with "self" referencing an uninitialized MLIR C API
object, which in turn would cause undefined behavior when attempting to print
in C++. Even if the correct name is provided, the mechanism used by
PybindAdaptors.h to bind constructors directly as "__init__" functions taking
"self" is deprecated by Pybind. The new mechanism does not seem to have access
to a fully-constructed "self" object (i.e., the constructor in C++ takes a
`pybind11::detail::value_and_holder` that cannot be forwarded back to Python).
Instead, redefine "__new__" to perform the required checks (no additional
initialization is needed for attributes and types as they are all
wrappers around a C++ pointer). "__new__" can call its equivalent on a
superclass without needing "self".
Bump the pybind11 dependency to 2.8.0, which is the first version that allows one
to redefine "__new__".
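The resulting pattern boils down to plain Python class mechanics, roughly as in the following sketch (hypothetical names only, not the actual binding code):
```
def is_my_dialect_attr(capi_ptr):
    # Hypothetical isa-style predicate standing in for the C API check.
    return capi_ptr is not None


class Attribute:
    # Stand-in for the pybind11-bound superclass wrapping a C API pointer.
    def __new__(cls, capi_ptr):
        obj = super().__new__(cls)
        obj._capi_ptr = capi_ptr
        return obj


class MyDialectAttribute(Attribute):
    def __new__(cls, capi_ptr):
        # The check runs before any object exists, so a partially
        # constructed "self" can never reach "__repr__".
        if not is_my_dialect_attr(capi_ptr):
            raise ValueError("expected a MyDialect attribute")
        return super().__new__(cls, capi_ptr)
```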
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D117646
This change adds full python bindings for PDL, including types and operations
with additional mixins to make operation construction more similar to the PDL
syntax.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D117458
The revision distinguishes `ReduceFn` and `ReduceFnUse`. The latter has the reduction dimensions attached while the former specifies the arithmetic function only. This separation allows us to adapt the reduction syntax a little bit and specify the reduction dimensions using square brackets (in contrast to the round brackets used for the values to reduce). It is also a preparation for adding reduction function attributes to OpDSL. A reduction function attribute shall only specify the arithmetic function and not the reduction dimensions.
Example:
```
ReduceFn.max_unsigned(D.kh, D.kw)(...)
```
changes to:
```
ReduceFn.max_unsigned[D.kh, D.kw](...)
```
Depends On D115240
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D115241
The revision renames `PrimFn` to `ArithFn`. The name resembles the newly introduced arith dialect that implements most of the arithmetic functions. Exceptions are log and exp, which are part of the math dialect.
Depends On D115239
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D115240
This revision introduces the `TypeFn` class that, similar to the `PrimFn` class, contains an extensible set of type conversion functions. Having the same mechanism for both type conversion functions and arithmetic functions improves code consistency. Additionally, having an explicit function class and function name is a prerequisite to specify a conversion or arithmetic function via attribute. In follow-up commits, we will introduce function attributes to make OpDSL operations more generic. In particular, the goal is to handle signed and unsigned computation in one operation. Today, there is a linalg.matmul and a linalg.matmul_unsigned.
The commit implements the following changes:
- Introduce the class of type conversion functions `TypeFn`
- Replace the hardwired cast and cast_unsigned ops by the `TypeFn` counterparts
- Adapt the python and C++ code generation paths to support the new cast operations
Example:
```
cast(U, A[D.m, D.k])
```
changes to
```
TypeFn.cast(U, A[D.m, D.k])
```
Depends On D115237
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D115239
Renaming `AttributeDef` to `IndexAttrDef` prepares OpDSL to support different kinds of attributes and more closely reflects the purpose of the attribute.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D115237
So far, only the custom dialect types are exposed.
The build and packaging is the same as for Linalg and SparseTensor, and in
need of refactoring that is beyond the scope of this patch.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D116605
Update the shapes of the convolution / pooling tests where issues were detected after enabling verification during printing (https://reviews.llvm.org/D114680). Also split the emit_structured_generic.py file that previously contained all tests into multiple separate files to simplify debugging.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D114731
While working on an integration, I found a lot of inconsistencies on IR printing and verification. It turns out that we were:
* Only doing "soft fail" verification on IR printing of Operation, not of a Module.
* Failed verification was interacting badly with binary=True IR printing (causing a TypeError when trying to pass a `str` to a `bytes`-based handle).
* For systematic integrations, it is often desirable to control verification yourself so that you can explicitly handle errors.
This patch:
* Trues up the "soft fail" semantics by having `Module.__str__` delegate to `Operation.__str__` vs having a shortcut implementation.
* Fixes soft fail in the presence of binary=True (and adds an additional happy path test case to make sure the binary functionality works).
* Adds an `assume_verified` boolean flag to the `print`/`get_asm` methods which disables internal verification, presupposing that the caller has taken care of it.
It turns out that we had a number of tests which were generating illegal IR but it wasn't being caught because they were doing a print on the `Module` vs operation. All except two were trivially fixed:
* linalg/ops.py : Had two tests that directly constructed a Matmul incorrectly. Fixing them made them just like the next two tests, so they were just deleted (no need to test the verifier only at this level).
* linalg/opdsl/emit_structured_generic.py : Hand coded conv and pooling tests appear to be using illegally shaped inputs/outputs, causing a verification failure. I just used the `assume_verified=` flag to restore the original behavior and left a TODO. Will get someone who owns that to fix it properly in a followup (it would also be nice to break this file up into multiple test modules as it is hard to tell exactly what is failing).
Notes to downstreams:
* If, like some of our tests, you get verification failures after this patch, it is likely that your IR was always invalid and you will need to fix the root cause. To temporarily revert to prior (broken) behavior, replace calls like `print(module)` with `print(module.operation.get_asm(assume_verified=True))`.
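For reference, the printing and verification knobs discussed above can be exercised as follows (a minimal sketch assuming the MLIR Python bindings are importable):
```
from mlir.ir import Context, Module

with Context():
    module = Module.parse("module {}")
    # Default behavior: printing verifies the IR and soft-fails into an
    # error message if verification does not pass.
    print(module)
    # The caller takes responsibility for verification; printing skips it.
    print(module.operation.get_asm(assume_verified=True))
    # Binary output now also respects the soft-fail semantics.
    asm_bytes = module.operation.get_asm(binary=True)
```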
Differential Revision: https://reviews.llvm.org/D114680
Rename test/python/dialects/math.py -> math_dialect.py to avoid a
collision with a Python standard package of the same name. These test
scripts are run by path and are not part of a package. Python apparently
implicitly adds the containing directory to its PYTHONPATH. As such,
test scripts with common names run the risk of conflicting with global
names, and an import intended for the latter may resolve to the former instead.
Differential Revision: https://reviews.llvm.org/D114568
Previously, in case there was only one `Optional` operand/result within
the list, we would always return `None` from the accessor, e.g., for a
single optional result we would generate:
```
return self.operation.results[0] if len(self.operation.results) > 1 else None
```
But what we really want is to return `None` only if the length of
`results` is smaller than the total number of element groups (i.e.,
the optional operand/result is in fact missing).
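For the single-optional-result case above, where the optional result is the only element group, the intended accessor body is therefore roughly:
```
return self.operation.results[0] if len(self.operation.results) > 0 else None
```
That is, the result is returned whenever it is present, and `None` only when the result list is shorter than the number of element groups (here, one).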
This commit also renames a few local variables in the generator to make
the distinction between `isVariadic()` and `isVariableLength()` a bit
more clear.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D113855
The name seems to have been left over from a renaming effort on an unexercised
codepath that is difficult to catch in Python. Fix it and add a test that
exercises the codepath.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D114004
The ODS-based Python op bindings generator has been generating an incorrect
specification of the operand segment in the presence of both optional and variadic
operand groups: optional groups were treated as variadic whereas they require
separate treatment. Make sure they now get it. Also harden the tests around
generated op constructors, as they could hitherto accept the code generated for both
optional and variadic arguments.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D113259
The current behavior conveniently allows iterating over the regions of an operation
implicitly by exposing an operation as Iterable. However, this is also error prone, and
code that intends to iterate over the results or the operands could end up "working"
apparently instead of throwing a runtime error.
The lack of static type checking in Python contributes to the ambiguity here; it seems
safer not to do this and to require an explicit qualification to iterate (`op.results`, `op.regions`, ...).
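A minimal sketch of the explicit iteration that is now required (using the module operation itself as an example):
```
from mlir.ir import Context, Module

with Context():
    module = Module.parse("module {}")
    op = module.operation
    # Qualify what to iterate over; `for x in op:` is no longer supported.
    for region in op.regions:
        print("region with", len(region.blocks), "block(s)")
    for result in op.results:
        print("result type:", result.type)
```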
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D111697
In several cases, operation result types can be unambiguously inferred from
operands and attributes at operation construction time. Stop requiring the user
to provide these types as arguments in the ODS-generated constructors in Python
bindings. In particular, handle the SameOperandsAndResultType and
FirstAttrDerivedResultType traits as well as InferTypeOpInterface using the
recently added interface support. This is a significant usability improvement
for IR construction, similar to what C++ ODS provides.
Depends On D111656
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111811
Introduce the initial support for operation interfaces in C API and Python
bindings. Interfaces are a key component of MLIR's extensibility and should be
available in the bindings to make use of the full potential of MLIR.
This initial implementation exposes InferTypeOpInterface all the way to the
Python bindings since it can be later used to simplify the operation
construction methods by inferring their return types instead of requiring the
user to do so. The general infrastructure for binding interfaces is defined and
InferTypeOpInterface can be used as an example for binding other interfaces.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111656
This revision also adds a few passes to the sparse compiler part to unify the transformation sequence with all other paths we currently use.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111900
MemRefType was using a wrong `isa` function in the bindings code, which
could lead to invalid IR being constructed. Also run the verifier in
memref dialect tests.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111784
The type can be inferred trivially, but it is currently done as string
stitching between ODS and C++ and is not easily exposed to Python.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D111712
Precursor: https://reviews.llvm.org/D110200
Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.
Renamed all instances of operations in the codebase and in tests.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D110797
Introduce support for accepting ops instead of values when constructing ops. A
single-result op can be used instead of a value, including in lists of values,
and any op can be used instead of a list of values. This is similar to, but
more powerful than, the C++ API that allows for implicitly casting an OpType to
Value if it is statically known to have a single result - the cast in Python is
based on the op dynamically having a single result, and also handles the
multi-result case. This allows building IR in a more concise way:
op = dialect.produce_multiple_results()
other = dialect.produce_single_result()
dialect.consume_multiple_results(other, op)
instead of having to access the results manually:
op = dialect.produce_multiple_results()
other = dialect.produce_single_result()
dialect.consume_multiple_results(other.result, op.operation.results)
The dispatch is implemented directly in Python and is triggered automatically
for autogenerated OpView subclasses. Extension OpView classes should use the
functions provided in ods_common.py if they want to implement this behavior.
An alternative could be to implement the dispatch in the C++ bindings code, but
it would require forwarding opaque types through all Python functions down to a
binding call, which makes it hard to inspect them in Python, e.g., to obtain
the types of values.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D111306
Update OpDSL to support unsigned integers by adding unsigned min/max/cast signatures. Add tests in OpDSL and on the C++ side to verify the proper signed and unsigned operations are emitted.
The patch addresses an issue brought up in https://reviews.llvm.org/D111170.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D111230
Implement min and max using the newly introduced std operations instead of relying on compare and select.
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D111170
The new constructor relies on type-based dynamic dispatch and allows one to
construct call operations given an object representing a FuncOp or its name as
a string, as opposed to requiring an explicitly constructed attribute.
Depends On D110947
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D110948
Constructing a ConstantOp using the default-generated API is verbose and
requires specifying the constant type twice: for the result type of the
operation and for the type of the attribute. It also requires explicitly
constructing the attribute. Provide custom constructors that take the type once
and accept a raw value instead of the attribute. This requires dynamic dispatch
based on type in the constructor. Also provide the corresponding accessors to
raw values.
In addition, provide a "refinement" class ConstantIndexOp similar to what
exists in C++. Unlike other "op view" Python classes, operations cannot be
automatically downcasted to this class since it does not correspond to a
specific operation name. It only exists to simplify construction of the
operation.
Depends On D110946
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D110947
This is an important core dialect that has not been exposed previously. Set up
the default bindings generation and provide a nicer wrapper for the `for` loop
with access to the loop configuration and body.
Depends On D110758
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D110759
Without this change, these attributes can only be accessed through the generic
operation attribute dictionary, provided the caller knows the special operation
attribute names used for this purpose. Add some Python wrapping to support this
use case.
Also provide access to function arguments usable inside the function along with
a couple of quality-of-life improvements in using block arguments (function
arguments being the arguments of its entry block).
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D110758
Conversion to the LLVM dialect is being refactored to be more progressive and
is now performed as a series of independent passes converting different
dialects. These passes may produce `unrealized_conversion_cast` operations that
represent pending conversions between built-in and LLVM dialect types.
Historically, a more monolithic Standard-to-LLVM conversion pass did not need
these casts as all operations were converted in one shot. Previous refactorings
have led to the requirement of running the Standard-to-LLVM conversion pass to
clean up `unrealized_conversion_cast`s even though the IR had no standard
operations in it. The pass also had to be run last among all to-LLVM
passes, in contradiction with the partial conversion logic. Additionally, the
way it was set up could produce invalid operations by removing casts between
LLVM and built-in types even when the consumer did not accept the uncasted
type, or could lead to cryptic conversion errors (recursive application of the
rewrite pattern on `unrealized_conversion_cast` as a means to indicate failure
to eliminate casts).
In fact, the need to eliminate A->B->A `unrealized_conversion_cast`s is not
specific to to-LLVM conversions and can be factored out into a separate type
reconciliation pass, which is achieved in this commit. While the cast operation
itself has a folder pattern, it is insufficient in most conversion passes as
the folder only applies to the second cast. Without complex legality setup in
the conversion target, the conversion infra will either consider the cast
operations valid and not fold them (a separate canonicalization would be
necessary to trigger the folding), or consider the first cast invalid upon
generation and stop with error. The pattern provided by the reconciliation pass
applies to the first cast operation instead. Furthermore, having a separate
pass makes it clear when `unrealized_conversion_cast`s could not have been
eliminated since it is the only reason why this pass can fail.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D109507
The boilerplate was setting up some arrays for testing. To fully illustrate
the Python-MLIR potential, however, this data should also come from numpy land.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D108336
Using the python API to easily set up sparse kernels, this test
exhaustively builds, compiles, and runs SpMM for all annotations
on a sparse tensor, making sure every version generates the correct
result. This test also illustrates using the python API to set up
a sparse kernel and sparse compilation.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D107943
Introduce the exp and log functions in OpDSL. Add the soft plus operator to test the emitted IR in Python and C++.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D105420
Add the min operation to OpDSL and introduce a min pooling operation to test the implementation. The patch is a sibling of the max operation patch https://reviews.llvm.org/D105203 and the min operation is again lowered to a compare and select pair.
Differential Revision: https://reviews.llvm.org/D105345
Introduce an integration test folder in the test/python subfolder and move the opsrun.py test into the newly created folder. The test verifies named operations end-to-end using both the yaml and the python path.
Differential Revision: https://reviews.llvm.org/D105276
Add the max operation to the OpDSL and introduce a max pooling operation to test the implementation. As MLIR has no builtin max operation, the max function is lowered to a compare and select pair.
Differential Revision: https://reviews.llvm.org/D105203
Extend the OpDSL syntax with an optional `domain` function to specify an explicit dimension order. The extension is needed to provide more control over the dimension order instead of deducing it implicitly depending on the formulation of the tensor comprehension. Additionally, the patch also ensures the symbols are ordered according to the operand definitions of the operation.
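For illustration, a matmul-style definition with an explicit dimension order could look as follows (a sketch following the OpDSL conventions at the time; `matmul_example` is a hypothetical name):
```
from mlir.dialects.linalg.opdsl.lang import *

@linalg_structured_op
def matmul_example(
    A=TensorDef(T1, S.M, S.K),
    B=TensorDef(T2, S.K, S.N),
    C=TensorDef(U, S.M, S.N, output=True)):
  # The explicit domain fixes the dimension order to (m, n, k) rather than
  # deducing it from where the dimensions appear in the comprehension.
  domain(D.m, D.n, D.k)
  C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```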
Differential Revision: https://reviews.llvm.org/D105117
Add an index_dim annotation to specify the shape to loop mapping of shape-only tensors. A shape-only tensor is not accessed within the body of the operation but is required to span the iteration space of certain operations such as pooling.
Differential Revision: https://reviews.llvm.org/D104767
Extend the OpDSL with index attributes. After tensors and scalars, index attributes are the third operand type. An index attribute represents a compile-time constant that is limited to index expressions. Use cases are the strides and dilations defined by convolution and pooling operations.
The patch only updates the OpDSL. The C++ yaml codegen is updated by a followup patch.
Differential Revision: https://reviews.llvm.org/D104711
The patch changes the pretty-printed FillOp operand order from "output, value" to "value, output". The change is a follow-up to https://reviews.llvm.org/D104121, which passes the fill value using a scalar input instead of the former capture semantics.
Differential Revision: https://reviews.llvm.org/D104356
The patch replaces the existing capture functionality with scalar operands that have been introduced by https://reviews.llvm.org/D104109. Scalar operands behave as tensor operands except for the fact that they are not indexed. As a result, ScalarDefs can be accessed directly since no indexing expression is needed.
The patch only updates the OpDSL. The C++ side is updated by a follow up patch.
Differential Revision: https://reviews.llvm.org/D104220