Commit Graph

128 Commits

Author SHA1 Message Date
Tres Popp
7827753f98 Reorder MLIRContext location in BuiltinAttributes.h
The context is now the first input, rather than the last.

This better matches the rest of the infrastructure and makes
it easier to move these types to being declaratively specified.

Differential Revision: https://reviews.llvm.org/D96111
2021-02-08 09:28:09 +01:00
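A hedged before/after sketch of the reordering described above, using StringAttr as a representative builtin attribute (the specific overload shown is an assumption, not taken from the patch):

   #include "mlir/IR/BuiltinAttributes.h"
   #include "mlir/IR/MLIRContext.h"

   using namespace mlir;

   void buildAttr(MLIRContext *ctx) {
     // Before this change (illustrative): the context trailed the other inputs.
     //   StringAttr attr = StringAttr::get("hello", ctx);
     // After this change: the context comes first, matching the Type::get methods.
     StringAttr attr = StringAttr::get(ctx, "hello");
     (void)attr;
   }
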
Lei Zhang
8dae90997a [mlir][vector] Add constant folding for fp16 to fp32 bitcast
Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D96041
2021-02-05 09:12:50 -05:00
Aart Bik
046612d29d [mlir][vector] verify memref of vector memory ops
This ensures the memref base + indices expression is well-formed

Reviewed By: ThomasRaoux, ftynse

Differential Revision: https://reviews.llvm.org/D94441
2021-01-11 13:32:39 -08:00
Thomas Raoux
3d693bd0bd [mlir][vector] Add memory effects to transfer_read transfer_write ops
This allows more accurate modeling of the side effects and allows dead code
elimination to remove dead transfer ops.

Differential Revision: https://reviews.llvm.org/D94318
2021-01-11 09:25:37 -08:00
Aart Bik
6728af16cf [mlir][vector] modified scatter/gather syntax, pass_thru mandatory
This change makes the scatter/gather syntax more consistent with
the syntax of all the other memory operations in the Vector dialect
(order of types, use of [] for index, etc.). This will make the MLIR
code easier to read. In addition, the pass_thru parameter of the
gather has been made mandatory (there is very little benefit in
using the implicit "undefined" values).

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D94352
2021-01-09 11:41:37 -08:00
Aart Bik
a57def30f5 [mlir][vector] generalized masked l/s and compressed l/s with indices
Adding the ability to index the base address brings these operations closer
to the transfer read and write semantics (with lowering advantages), ensures
more consistent use in vector MLIR code (easier to read), and reduces the
amount of code duplication to lower memrefs into base addresses considerably
(making codegen less error-prone).

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D94278
2021-01-08 13:59:34 -08:00
Kazuaki Ishizaki
f88fab5006 [mlir] NFC: fix trivial typos
fix typos under the include and lib directories

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D94220
2021-01-08 02:10:12 +09:00
Thomas Raoux
cf216670a0 [mlir][linalg] Add vectorization for linalg on tensor ops
Support vectorization of linalg ops using tensor inputs/outputs.

Differential Revision: https://reviews.llvm.org/D93890
2020-12-29 09:02:23 -08:00
Chris Lattner
9eb3e564d3 [ODS] Make the getType() method on a OneResult instruction return a specific type.
Implement Bug 46698, making ODS synthesize a getType() method that returns a
specific C++ class for OneResult ops where we know that class.  This eliminates
a common source of casts in things like:

   myOp.getType().cast<FIRRTLType>().getPassive()

because we know that myOp always returns a FIRRTLType.  This also encourages
op authors to type their results more tightly (which is also good for
verification).

I chose to implement this by splitting the OneResult trait into itself plus a
OneTypedResult trait, given that many things are using `hasTrait<OneResult>`
to conditionalize various logic.

While this change makes many ops return more specific getType() results, it
is generally drop-in compatible with the previous behavior because 'x.cast<T>()'
is allowed when x is already known to be a T.  The one exception is that
we need declarations of the types used by ops, which is why a couple of headers
needed additional #includes.

I updated a few things in tree to remove the now-redundant `.cast<>`'s, but there
are probably many more that can be removed.

Differential Revision: https://reviews.llvm.org/D93790
2020-12-26 13:52:40 -08:00
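A hedged before/after sketch of the call-site effect; `myOp` and FIRRTLType are reused from the example above purely for illustration (FIRRTLType lives in a downstream dialect, so its header is assumed to be available):

   // myOp is assumed to be an ODS-generated op declared with a single FIRRTLType result.
   template <typename MyOp>
   bool isPassive(MyOp myOp) {
     // Before: getType() returned a generic mlir::Type, forcing a cast at each call site.
     //   return myOp.getType().cast<FIRRTLType>().getPassive();
     // After: the OneTypedResult trait lets ODS return the concrete type directly.
     FIRRTLType resultType = myOp.getType();
     return resultType.getPassive();
   }
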
Thomas Raoux
74186880ba [mlir][vector] Add more vector Ops canonicalization
Add canonicalizations for BroadcastOp, ExtractStridedSliceOp and ShapeCastOp

Differential Revision: https://reviews.llvm.org/D93120
2020-12-23 11:25:01 -08:00
Thomas Raoux
26c8f9081b [mlir][vector] Extend Transfer read/write ops to support tensor types.
Transfer ops can now work on both buffers and tensors. Lowering of the tensor
case is not yet supported.

Differential Revision: https://reviews.llvm.org/D93500
2020-12-21 08:55:04 -08:00
River Riddle
1b97cdf885 [mlir][IR][NFC] Move context/location parameters of builtin Type::get methods to the start of the parameter list
This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified.

Differential Revision: https://reviews.llvm.org/D93432
2020-12-17 13:01:36 -08:00
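A hedged sketch of the same reordering for builtin types, with IntegerType as the example (the "before" form is reconstructed from the description):

   #include "mlir/IR/BuiltinTypes.h"
   #include "mlir/IR/MLIRContext.h"

   using namespace mlir;

   void buildType(MLIRContext *ctx) {
     // Before this change (illustrative): IntegerType::get(32, ctx);
     // After this change: the context leads the parameter list.
     IntegerType i32 = IntegerType::get(ctx, 32);
     (void)i32;
   }
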
Christian Sigg
1ffc1aaa09 [mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation.
This is a preparation step to remove those methods from OpState.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D93098
2020-12-13 09:58:16 +01:00
Max Kudryavtsev
636db7f87c [MLIR] Fix vector::TransferWriteOp builder losing permutation map
The supervectorizer pass uses this builder and was losing the permutation map.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D92145
2020-12-03 09:53:08 -08:00
Christian Sigg
c4a0405902 Add Operation* OpState::operator->() to provide more convenient access to members of Operation.
Given that OpState already implicitly converts to Operation*, this seems reasonable.

The alternative would be to add more functions to OpState which forward to Operation.

Reviewed By: rriddle, ftynse

Differential Revision: https://reviews.llvm.org/D92266
2020-12-02 15:46:20 +01:00
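A hedged sketch of the convenience this operator adds; the concrete op class is left generic, since any ODS-generated op works the same way:

   #include "mlir/IR/Operation.h"

   using namespace mlir;

   template <typename ConcreteOp>
   void inspect(ConcreteOp myOp) {
     // Before: reach the underlying Operation explicitly.
     Location loc1 = myOp.getOperation()->getLoc();
     // After: operator->() forwards straight to the underlying Operation.
     Location loc2 = myOp->getLoc();
     (void)loc1;
     (void)loc2;
   }
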
River Riddle
65fcddff24 [mlir][BuiltinDialect] Resolve comments from D91571
* Move ops to BuiltinOps.h
* Add file comments
2020-11-19 11:12:49 -08:00
River Riddle
73ca690df8 [mlir][NFC] Remove references to Module.h and Function.h
These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.

Differential Revision: https://reviews.llvm.org/D91572
2020-11-17 00:55:47 -08:00
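A hedged sketch of the resulting include swap in client code (the deprecated header paths are the historical ones; the replacement is the header named in the message):

   // Before: ModuleOp and FuncOp were reached through the deprecated headers.
   //   #include "mlir/IR/Function.h"
   //   #include "mlir/IR/Module.h"

   // After: include the builtin dialect header instead.
   #include "mlir/IR/BuiltinDialect.h"
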
Aart Bik
9ddb464d37 [mlir] refactor common idiom into AffineMap method
Motivated by a refactoring in the new sparse code (yet to be merged), this avoids some lengthy code duplication.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D91465
2020-11-13 19:18:13 -08:00
Thomas Raoux
6ad31c0f4a [mlir][vector] Support N-D vector in InsertMap/ExtractMap op
Support multi-dimensional vectors for the InsertMap/ExtractMap ops and update the
transformations. Currently the relation between IDs and dimensions is implicitly
deduced from the types. We can then calculate an AffineMap based on it. In the
future the AffineMap could be part of the operation itself.

Differential Revision: https://reviews.llvm.org/D90995
2020-11-13 12:40:17 -08:00
Thomas Raoux
36480657d8 [mlir][vector] Add canonicalization patterns for ExtractStride/ShapeCast + Splat constant
Differential Revision: https://reviews.llvm.org/D90567
2020-11-03 11:29:54 -08:00
Thomas Raoux
9081e7594d [mlir][vector] Address post-commit review comments on vector ops folding patterns
Differential Revision: https://reviews.llvm.org/D90183
2020-11-02 10:57:32 -08:00
Kazuaki Ishizaki
41b09f4eff [mlir] NFC: fix trivial typos
fix typos in comments and documents

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D90089
2020-10-29 04:05:22 +09:00
Thomas Raoux
bd07be4f3f [mlir][vector] Update doc strings for insert_map/extract_map and fix insert_map semantic
Based on a Discourse discussion, fix the doc string and remove examples with
wrong semantics. Also fix the insert_map semantics by adding the missing operand
for the vector we are inserting into.

Differential Revision: https://reviews.llvm.org/D89563
2020-10-26 10:47:01 -07:00
Thomas Raoux
ea6a60a9a6 [mlir][vector] Add folder for ExtractStridedSliceOp
Add a folder for the case where the ExtractStridedSliceOp source comes from a
chain of InsertStridedSliceOps. Also add a folder for the trivial case where the
ExtractStridedSliceOp is a no-op.

Differential Revision: https://reviews.llvm.org/D89850
2020-10-23 12:18:09 -07:00
Thomas Raoux
8c72eea9a0 [mlir][vector] Add folding for ExtractOp with ShapeCastOp source
Differential Revision: https://reviews.llvm.org/D89853
2020-10-23 12:06:18 -07:00
Thomas Raoux
92e83afe44 [mlir][vector] Fold extractOp coming from broadcastOp
Fold an ExtractOp with a scalar result when its source is a BroadcastOp. This is
useful for incrementally converting a degenerate one-element vector into a scalar.

Differential Revision: https://reviews.llvm.org/D88751
2020-10-06 10:27:39 -07:00
Benjamin Kramer
6e2b267d1c Promote transpose from linalg to standard dialect
While affine maps are part of the builtin memref type, there is very
limited support for manipulating them in the standard dialect. Add
transpose to the set of ops to complement the existing view/subview ops.
This is a metadata transformation that encodes the transpose into the
strides of a memref.

I'm planning to use this when lowering operations on strided memrefs,
using the transpose to remove the stride without adding a dependency on
the linalg dialect.

Differential Revision: https://reviews.llvm.org/D88651
2020-10-05 10:58:20 +02:00
Thomas Raoux
d1c8e179d8 [mlir][vector] Add canonicalization patterns for extractMap/insertMap
Add basic canonicalization patterns for the extractMap/insertMap to allow them
to be folded into Transfer ops.
Also mark transferRead as a memory read so that it can be removed by dead code elimination.

Differential Revision: https://reviews.llvm.org/D88622
2020-10-02 10:13:11 -07:00
Thomas Raoux
dd14e58252 [mlir][vector] First step of vector distribution transformation
This is the first of several steps to support distributing large vectors. This
adds instructions extract_map and insert_map that allow us to do incremental
lowering. Right now the transformation only apply to simple pointwise operation
with a vector size matching the multiplicity of the IDs used to distribute the
vector.
This can be used to distribute large vectors to loops or SPMD.

Differential Revision: https://reviews.llvm.org/D88341
2020-09-30 13:14:55 -07:00
Aart Bik
e9628955f5 [mlir] [VectorOps] Relaxed restrictions on vector.reduction types even more
Recently, restrictions on vector reductions were relaxed to accept signless
integers of any width and floating-point types. This CL relaxes the restriction
even more by including unsigned and signed integers.

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D88442
2020-09-28 13:38:03 -07:00
Benjamin Kramer
2d76274b99 [mlir][VectorOps] Loosen restrictions on vector.reduction types
LLVM can deal with any integer or float type, so don't arbitrarily restrict
it to f32/f64/i32/i64.

Differential Revision: https://reviews.llvm.org/D88010
2020-09-21 12:45:23 +02:00
Hanhan Wang
f16abe5f84 [mlir][Vector] Add a folder for vector.broadcast
Fold the operation if the source is a scalar constant or splat constant.

Update transform-patterns-matmul-to-vector.mlir because the broadcast ops are folded in the conversion.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D87703
2020-09-17 08:54:51 -07:00
Federico Lebrón
7d1ed69c8a Make namespace handling uniform across dialect backends.
Now backends spell out which namespace they want to be in, instead of relying on
clients #including them inside already-opened namespaces. This also means that
cppNamespaces should be fully qualified, and there's no implicit "::mlir::"
prepended to them anymore.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D86811
2020-09-14 20:33:31 +00:00
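A hedged sketch of what this means for C++ clients of the TableGen-generated headers; the file name is illustrative:

   // Before (illustrative): clients opened the namespaces around the generated
   // header themselves, and "::mlir::" was implicitly prepended to cppNamespace.
   //
   //   namespace mlir {
   //   namespace foo {
   //   #include "FooOps.h.inc"
   //   } // namespace foo
   //   } // namespace mlir

   // After: the backend emits the fully qualified namespace it was given, so the
   // generated header is included at the top level.
   #include "FooOps.h.inc"
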
Nicolas Vasilache
8d64df9f13 [mlir][Vector] Revisit VectorToSCF.
Vector to SCF conversion still had issues due to the interaction with the natural alignment derived by the LLVM data layout. One traditional workaround is to allocate aligned. However, this does not always work for vector sizes that are non-powers of 2.

This revision implements a more portable mechanism where the intermediate allocation is always a memref of elemental vector type. AllocOp is extended to use the natural LLVM DataLayout alignment for non-scalar types, when the alignment is not specified in the first place.

An integration test is added that exercises the transfer to scf.for + scalar lowering with a 5x5 transposition.

Differential Revision: https://reviews.llvm.org/D87150
2020-09-07 05:19:43 -04:00
Thomas Raoux
5fbfe2ec4f [mlir][vector] Add vector.bitcast operation
Based on the RFC discussed here:
https://llvm.discourse.group/t/rfc-vector-standard-add-bitcast-operation/1628/

Adding a vector.bitcast operation that allows casting to a vector with a different
element type. The bitwidth of the most minor dimension must stay unchanged.

Differential Revision: https://reviews.llvm.org/D86580
2020-08-26 14:13:52 -07:00
Mehdi Amini
49c371b319 Add llvm_unreachable after fully covered switch to silence some warnings from GCC (NFC)
2020-08-25 23:09:11 +00:00
aartbik
451dcfae31 [mlir] [VectorOps] Cleanup mask 1-d test on constants
I forgot to address this in the previous CL. Sorry about that.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D86188
2020-08-18 19:39:17 -07:00
aartbik
6b66f21446 [mlir] [VectorOps] Canonicalization of 1-D memory operations
Masked loading/storing in various forms can be optimized
into simpler memory operations when the mask is all true
or all false. Note that the backend does similar optimizations
but doing this early may expose more opportunities for further
optimizations. This further prepares the progressive lowering of
transfer read and write into 1-D memory operations.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D85769
2020-08-13 17:15:35 -07:00
Thomas Raoux
68330ee0a9 [mlir][vector] Relax transfer_read/transfer_write restriction on memref operand
Relax the verifier for the transfer_read/transfer_write operations so that they
can take a memref with a different element type than the vector being read/written.

This is based on the discourse discussion:
https://llvm.discourse.group/t/memref-cast/1514

Differential Revision: https://reviews.llvm.org/D85244
2020-08-10 08:57:48 -07:00
Mehdi Amini
575b22b5d1 Revisit Dialect registration: require and store a TypeID on dialects
This patch moves the registration to a method in the MLIRContext: getOrCreateDialect<ConcreteDialect>()

This method requires the dialect to provide a static getDialectNamespace()
and store a TypeID on the Dialect itself, which allows lazily creating
a dialect when it is not yet loaded in the context.
As a side effect, it means that duplicated registration of the same
dialect is not an issue anymore.

To limit the boilerplate, TableGen dialect generation is modified to
emit the constructor entirely and separately invoke an "init()" method
that the user implements.

Differential Revision: https://reviews.llvm.org/D85495
2020-08-07 15:57:08 +00:00
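A hedged sketch of the shape a dialect takes after this change; the class name and the exact constructor signature are assumptions based on the description above:

   #include "llvm/ADT/StringRef.h"
   #include "mlir/IR/Dialect.h"
   #include "mlir/IR/MLIRContext.h"
   #include "mlir/Support/TypeID.h"

   // Illustrative dialect: it exposes the static namespace accessor and carries
   // its own TypeID so the context can create it lazily on first use.
   class ToyDialect : public mlir::Dialect {
   public:
     explicit ToyDialect(mlir::MLIRContext *ctx)
         : mlir::Dialect(getDialectNamespace(), ctx,
                         mlir::TypeID::get<ToyDialect>()) {
       init(); // With TableGen, the emitted constructor calls the user's init().
     }
     static llvm::StringRef getDialectNamespace() { return "toy"; }

   private:
     void init(); // Implemented by the dialect author: registers ops, types, etc.
   };

Registration then goes through the context method named in the message, e.g. ctx.getOrCreateDialect<ToyDialect>().
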
aartbik
39379916a7 [mlir] [VectorOps] Add masked load/store operations to Vector dialect
The intrinsics were already supported and vector.transfer_read/write lowered
directly into these operations. By providing them as individual ops, however,
clients can use them directly, and it opens up progressive lowering of transfer
operations at higher levels (rather than direct lowering to LLVM IR as done now).

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D85357
2020-08-05 16:45:24 -07:00
aartbik
e8dcf5f87d [mlir] [VectorOps] Add expand/compress operations to Vector dialect
Introduces the expand and compress operations to the Vector dialect
(important memory operations for sparse computations), together
with a first reference implementation that lowers to the LLVM IR
dialect to enable running on CPU (and other targets that support
the corresponding LLVM IR intrinsics).

Reviewed By: reidtatge

Differential Revision: https://reviews.llvm.org/D84888
2020-08-04 12:00:42 -07:00
Benjamin Kramer
eb41f9edde [mlir][Vector] Simplify code a bit. NFCI.
2020-08-01 14:49:19 +02:00
aartbik
19dbb230a2 [mlir] [VectorOps] Add scatter/gather operations to Vector dialect
Introduces the scatter/gather operations to the Vector dialect
(important memory operations for sparse computations), together
with a first reference implementation that lowers to the LLVM IR
dialect to enable running on CPU (and other targets that support
the corresponding LLVM IR intrinsics).

The operations can be used directly where applicable, or can be used
during progressively lowering to bring other memory operations closer to
hardware ISA support for a gather/scatter. The semantics of the operation
closely correspond to those of the corresponding llvm intrinsics.

Note that the operation allows for a dynamic index vector (which is
important for sparse computations). However, this first reference
lowering implementation "serializes" the address computation when
base + index_vector is converted to a vector of pointers. Exploring
how to use SIMD properly during this step is TBD. More general
memrefs and idiomatic versions of striding are also TBD.

Reviewed By: arpith-jacob

Differential Revision: https://reviews.llvm.org/D84039
2020-07-21 10:57:40 -07:00
Nicolas Vasilache
47cbd9f922 [mlir][Vector] NFC - Improve VectorInterfaces
This revision improves and makes better use of OpInterfaces for the Vector dialect.

Differential Revision: https://reviews.llvm.org/D84053
2020-07-20 08:24:22 -04:00
Nicolas Vasilache
ec2f2cec76 [mlir][Vector] Add folding for vector.transfer ops
This revision folds vector.transfer operations by updating the `masked` bool array attribute when more unmasked dimensions can be discovered.

Differential revision: https://reviews.llvm.org/D83586
2020-07-10 16:49:12 -04:00
aartbik
9bf6354301 [mlir] [VectorOps] Allow AXPY to be expressed as special case of OUTERPRODUCT
This specialization allows sharing more code where an AXPY follows naturally
in cases where an OUTERPRODUCT on a scalar would be generated.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D83453
2020-07-10 12:23:24 -07:00
Nicolas Vasilache
a490d387e6 [mlir][Vector] Add ExtractOp folding when fed by a TransposeOp
TransposeOps are often followed by ExtractOps.
In certain cases however, it is unnecessary (and even detrimental) to lower a TransposeOp to either a flat transpose (llvm.matrix intrinsics) or to unrolled scalar insert / extract chains.

Providing foldings of ExtractOp mitigates some of the unnecessary complexity.

Differential revision: https://reviews.llvm.org/D83487
2020-07-10 11:09:27 -04:00
Nicolas Vasilache
22c8a08fd8 [mlir][Vector] Fold chains of ExtractOp
This revision adds folding to ExtractOp by simply concatenating the position attributes.
2020-07-10 09:32:02 -04:00
Nicolas Vasilache
24ed3a9403 [mlir][Vector] Add ExtractOp folding
This revision adds foldings for ExtractOp operations that come from a previous InsertOp.
InsertOps have cumulative semantics, where multiple chained inserts are necessary to produce the final value from which the extracts are obtained.
Additionally, TransposeOps may be interleaved and need to be tracked in order to follow the producer-consumer relationships and properly compute positions.

Differential revision: https://reviews.llvm.org/D83150
2020-07-07 16:48:49 -04:00