Commit Graph

81 Commits

Author SHA1 Message Date
River Riddle
f8479d9de5 [mlir] Set the namespace of the BuiltinDialect to 'builtin'
Historically the builtin dialect has had an empty namespace. This has unfortunately created a very awkward situation, where many utilities either have to special case the empty namespace, or just don't work at all right now. This revision adds a namespace to the builtin dialect, and starts to cleanup some of the utilities to no longer handle empty namespaces. For now, the assembly form of builtin operations does not require the `builtin.` prefix. (This should likely be re-evaluated though)
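As a hedged illustration (snippet hypothetical), both spellings now name the same operation:

  // The printed form still omits the `builtin.` prefix for now.
  builtin.module {
  }
  module {
  }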

Differential Revision: https://reviews.llvm.org/D105149
2021-07-28 21:00:10 +00:00
Matthias Springer
d1a9e9a7cb [mlir][vector] Remove vector.transfer_read/write to LLVM lowering
This simplifies the vector to LLVM lowering. Previously, both vector.load/store and vector.transfer_read/write lowered directly to LLVM. With this commit, there is a single path to LLVM vector load/store instructions and vector.transfer_read/write ops must first be lowered to vector.load/store ops.

* Remove vector.transfer_read/write to LLVM lowering.
* Allow non-unit memref strides on all but the most minor dimension for vector.load/store ops.
* Add maxTransferRank option to populateVectorTransferLoweringPatterns.
* vector.transfer_reads with changing element type can no longer be lowered to LLVM. (This functionality is needed only for SPIRV.)
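A minimal sketch of the new two-step path (operand names and shapes hypothetical):

  // Step 1: an in-bounds 1-D transfer is first rewritten to vector.load.
  %v0 = vector.transfer_read %m[%i], %pad {in_bounds = [true]}
      : memref<128xf32>, vector<16xf32>
  // Step 2: only this form is lowered to LLVM vector load/store.
  %v1 = vector.load %m[%i] : memref<128xf32>, vector<16xf32>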

Differential Revision: https://reviews.llvm.org/D106118
2021-07-17 14:07:27 +09:00
Alex Zinenko
881dc34f73 [mlir] replace llvm.mlir.cast with unrealized_conversion_cast
The dialect-specific cast between builtin (ex-standard) types and LLVM
dialect types was introduced a long time before built-in support for
unrealized_conversion_cast. It has a similar purpose, but is restricted
to compatible builtin and LLVM dialect types, which may hamper
progressive lowering and composition with types from other dialects.
Replace llvm.mlir.cast with unrealized_conversion_cast, and drop the
operation that became unnecessary.

Also make unrealized_conversion_cast legal by default in
LLVMConversionTarget as the majority of conversions using it are partial
conversions that actually want the casts to persist in the IR. The
standard-to-llvm conversion, which is still expected to run last, cleans
up the remaining casts.
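A hedged before/after sketch (types chosen for illustration):

  // Before: the dialect-specific cast, now removed.
  %1 = llvm.mlir.cast %0 : index to i64
  // After: the built-in materialization op.
  %1 = builtin.unrealized_conversion_cast %0 : index to i64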

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D105880
2021-07-16 15:14:09 +02:00
thomasraoux
291025389c [mlir][vector] Refactor Vector Unrolling and remove Tuple ops
Simplify the vector unrolling pattern to be more aligned with the rest of the
patterns and closer to vector distribution.
The new implementation uses ExtractStridedSlice/InsertStridedSlice
instead of the Tuple ops. After this change, the Tuple-based ops no
longer have any uses, so they can be removed.

This allows removing a significant amount of dead code and will allow
extending the unrolling code going forward.
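A rough sketch of the ops the new implementation relies on (shapes hypothetical):

  // Unrolling a vector<4xf32> value into two vector<2xf32> pieces.
  %lo = vector.extract_strided_slice %v
      {offsets = [0], sizes = [2], strides = [1]}
      : vector<4xf32> to vector<2xf32>
  %r = vector.insert_strided_slice %piece, %acc
      {offsets = [2], strides = [1]}
      : vector<2xf32> into vector<4xf32>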

Differential Revision: https://reviews.llvm.org/D105381
2021-07-07 11:11:26 -07:00
Stephen Neuendorffer
29a50c5864 [MLIR] Update Vector To LLVM conversion to be aware of assume_alignment
vector.transfer_read and vector.transfer_write operations are converted
to llvm intrinsics with specific alignment information; however, there
doesn't seem to be a way in llvm to take information from llvm.assume
intrinsics and change this alignment information. In any event, due to
the structure of the llvm.assume intrinsic, applying this information at
the llvm level is more cumbersome. Instead, let's generate the masked
vector load and store intrinsics with the right alignment information
from MLIR in the first place. Since we're bothering to do this, let's
just emit the proper alignment for loads, stores, scatter, and gather
ops too.
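A hedged sketch of the interaction (alignment value and shapes hypothetical):

  // The conversion can now pick up the promised alignment and emit the
  // masked load/store intrinsics with it, instead of relying on llvm.assume.
  memref.assume_alignment %m, 32 : memref<128xf32>
  %v = vector.transfer_read %m[%i], %pad : memref<128xf32>, vector<16xf32>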

Differential Revision: https://reviews.llvm.org/D100444
2021-05-19 10:50:48 -07:00
Tobias Gysi
856b24df08 [mlir] test gather/scatter index vector of type index.
Test the vector to llvm lowering of index vectors with index element type.

Differential Revision: https://reviews.llvm.org/D100827
2021-04-20 11:24:04 +00:00
Tobias Gysi
b614ada0e8 [mlir] add support for index type in vectors.
The patch enables the use of index type in vectors. It is a prerequisite to support vectorization for indexed Linalg operations. This refactoring became possible due to the newly introduced data layout infrastructure. The data layout of a module defines the bitwidth of the index type needed to verify bitcasts and similar vector operations.
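For instance, vectors of index become legal (a hedged sketch; values hypothetical):

  // The index bitwidth used to verify bitcasts comes from the module's
  // data layout.
  %idxs = constant dense<[0, 1, 2, 3]> : vector<4xindex>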

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D99948
2021-04-08 08:17:13 +00:00
Matthias Springer
65a3f28939 [mlir] Add "mask" operand to vector.transfer_read/write.
Also factors out out-of-bounds mask generation from vector.transfer_read/write into a new MaterializeTransferMask pattern.
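A hedged sketch of the new optional operand (names hypothetical):

  // %mask : vector<16xi1> masks individual lanes of the transfer.
  %v = vector.transfer_read %m[%i], %pad, %mask
      : memref<?xf32>, vector<16xf32>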

Differential Revision: https://reviews.llvm.org/D100001
2021-04-07 21:33:13 +09:00
Matthias Springer
95f8135043 [mlir] Change vector.transfer_read/write "masked" attribute to "in_bounds".
This is in preparation for adding a new "mask" operand. The existing "masked" attribute was used to specify dimensions that may be out-of-bounds. Such transfers can be lowered to masked load/stores. The new "in_bounds" attribute is used to specify dimensions that are guaranteed to be within bounds. (Semantics is inverted.)
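A hedged before/after sketch (shapes hypothetical):

  // Before: masked = [false] meant the dimension needs no masking.
  // After: the same guarantee is spelled positively.
  %v = vector.transfer_read %m[%i], %pad {in_bounds = [true]}
      : memref<?xf32>, vector<16xf32>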

Differential Revision: https://reviews.llvm.org/D99639
2021-03-31 18:04:22 +09:00
Julian Gross
e2310704d8 [MLIR] Create memref dialect and move dialect-specific ops from std.
Create the memref dialect and move dialect-specific ops
from the std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667
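In IR terms the change is a textual rename; a hedged sketch (names hypothetical):

  // Before: %0 = alloc() : memref<16xf32>
  // After:
  %0 = memref.alloc() : memref<16xf32>
  memref.store %f, %0[%i] : memref<16xf32>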

Differential Revision: https://reviews.llvm.org/D98041
2021-03-15 11:14:09 +01:00
Aart Bik
df5ccf5a94 [mlir][vector] add higher dimensional support to gather/scatter
Similar to mask-load/store and compress/expand, the gather and
scatter operations now allow higher-dimensional uses. Note that
to support the mixed-type index, the new syntax is:
   vector.gather %base [%i,%j] [%kvector] ....
The first client of this generalization is the sparse compiler,
which needs to define scatter and gathers on dense operands
of higher dimensions too.
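Spelled out with full types, a hedged example of the generalized form (all
names, shapes, and the i32 index type are hypothetical):

   %g = vector.gather %base[%i, %j] [%idxs], %mask, %pass
       : memref<16x16xf32>, vector<8xi32>, vector<8xi1>, vector<8xf32>
         into vector<8xf32>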

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97422
2021-02-26 14:20:19 -08:00
Andrew Pritchard
08c681f645 Perform memory accesses in the same addrspace as the corresponding memref.
It's not necessarily the case on all architectures that all memory is
addressable in addrspace 0, so casting the pointer to addrspace 0 is
liable to cause problems.
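A hedged sketch (memory space 3 and shapes hypothetical):

  // The generated load now uses the memref's address space
  // instead of first casting the pointer to addrspace 0.
  %v = vector.load %m[%i] : memref<16xf32, 3>, vector<4xf32>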

Reviewed By: aartbik, ftynse, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96380
2021-02-18 12:36:16 -08:00
Diego Caballero
ee66e43a96 [mlir][Vector] Introduce 'vector.load' and 'vector.store' ops
This patch adds the 'vector.load' and 'vector.store' ops to the Vector
dialect [1]. These operations model *contiguous* vector loads and stores
from/to memory. Their semantics are similar to the 'affine.vector_load' and
'affine.vector_store' counterparts but without the affine constraints. The
most relevant feature is that these new vector operations may perform a vector
load/store on memrefs with a non-vector element type, unlike 'std.load' and
'std.store' ops. This opens the representation to model more generic vector
load/store scenarios: unaligned vector loads/stores, performing scalar and
vector memory accesses on the same memref, decoupling memory allocation
constraints from memory accesses, etc. [1]. These operations will also facilitate the progressive
lowering of both Affine vector loads/stores and Vector transfer reads/writes
for those that read/write contiguous slices from/to memory.

In particular, this patch adds the 'vector.load' and 'vector.store' ops to the
Vector dialect, implements their lowering to the LLVM dialect, and changes the
lowering of 'affine.vector_load' and 'affine.vector_store' ops to the new vector
ops. The lowering of Vector transfer reads/writes will be implemented in the
future, probably as an independent pass. The API of 'vector.maskedload' and
'vector.maskedstore' has also been changed slightly to align it with the
transfer read/write ops and the new vector ops. This will improve reusability
among all these operations. For example, the lowering of 'vector.load',
'vector.store', 'vector.maskedload' and 'vector.maskedstore' to the LLVM dialect
is implemented with a single template conversion pattern.

[1] https://llvm.discourse.group/t/memref-type-and-data-layout/
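A minimal sketch of the new ops (names hypothetical); note the scalar
element type of the memref, which 'std.load'/'std.store' cannot read or
write as a vector:

  %v = vector.load %m[%i] : memref<128xf32>, vector<16xf32>
  vector.store %v, %m[%i] : memref<128xf32>, vector<16xf32>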

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96185
2021-02-12 20:48:37 +02:00
Alex Zinenko
ba87f99168 [mlir] make vector to llvm conversion truly partial
Historically, the Vector to LLVM dialect conversion subsumed the Standard to
LLVM dialect conversion patterns. This was necessary because the conversion
infrastructure did not have sufficient support for reconciling type
conversions. This support is now available. Only keep the patterns related to
the Vector dialect in the Vector to LLVM conversion and require type casts
operations to be inserted if necessary. These casts will be removed by
following conversions if possible. Update integration tests to also run the
Standard to LLVM conversion.

There is a significant amount of test churn, which is due to (a) unnecessarily
strict tests in VectorToLLVM and (b) many patterns actually targeting Standard
dialect ops instead of LLVM dialect ops leading to tests actually exercising a
Vector->Standard->LLVM conversion. This churn is a good illustration of the
reason to make the conversion partial: now the tests only check the code in the
Vector to LLVM conversion and will not be randomly broken by changes in
Standard to LLVM conversion.

Arguably, it may be possible to extract Vector to Standard patterns into a
separate pass, but given the ongoing splitting of the Standard dialect, such
a pass will be short-lived and will require further refactoring.
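A hedged sketch of a pipeline exercising the now-partial conversion (pass
names as of this commit):

  mlir-opt --convert-vector-to-llvm --convert-std-to-llvm input.mlir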

Depends On D95626

Reviewed By: nicolasvasilache, aartbik

Differential Revision: https://reviews.llvm.org/D95685
2021-02-04 11:33:24 +01:00
Diego Caballero
cf5c517c05 [mlir][Vector] Add lowering to LLVM for vector.bitcast
Add the conversion pattern for vector.bitcast to lower it to
the LLVM Dialect.
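A hedged example (shapes hypothetical; the total bitwidth must match):

  // 4 x f32 = 128 bits = 2 x f64.
  %1 = vector.bitcast %0 : vector<4xf32> to vector<2xf64>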

Reviewed By: ThomasRaoux, aartbik

Differential Revision: https://reviews.llvm.org/D95579
2021-02-03 01:19:20 +02:00
Alex Zinenko
bd30a796fc [mlir] use built-in vector types instead of LLVM dialect types when possible
Continue the convergence between LLVM dialect and built-in types by using the
built-in vector type whenever possible, that is for fixed vectors of built-in
integers and built-in floats. LLVM dialect vector type is still in use for
pointers, less frequent floating point types that do not have a built-in
equivalent, and scalable vectors. However, the top-level `LLVMVectorType` class
has been removed in favor of free functions capable of inspecting both built-in
and LLVM dialect vector types: `LLVM::getVectorElementType`,
`LLVM::getNumVectorElements` and `LLVM::getFixedVectorType`. Additional work is
necessary to design and implement the extensions to built-in types so as to
remove the `LLVMFixedVectorType` entirely.

Note that the default output format for the built-in vectors does not have
whitespace around the `x` separator, e.g., `vector<4xf32>` as opposed to the
LLVM dialect vector type format that does, e.g., `!llvm.vec<4 x fp128>`. This
required changing the FileCheck patterns in several tests.

Reviewed By: mehdi_amini, silvas

Differential Revision: https://reviews.llvm.org/D94405
2021-01-12 10:04:28 +01:00
Aart Bik
6728af16cf [mlir][vector] modified scatter/gather syntax, pass_thru mandatory
This change makes the scatter/gather syntax more consistent with
the syntax of all the other memory operations in the Vector dialect
(order of types, use of [] for index, etc.). This will make the MLIR
code easier to read. In addition, the pass_thru parameter of the
gather has been made mandatory (there is very little benefit in
using the implicit "undefined" values).

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D94352
2021-01-09 11:41:37 -08:00
Aart Bik
a57def30f5 [mlir][vector] generalized masked l/s and compressed l/s with indices
Adding the ability to index the base address brings these operations closer
to the transfer read and write semantics (with lowering advantages), ensures
more consistent use in vector MLIR code (easier to read), and reduces the
amount of code duplication to lower memrefs into base addresses considerably
(making codegen less error-prone).
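A hedged sketch of the indexed form (names hypothetical):

  // %i selects the base address inside the memref, mirroring
  // transfer_read/write.
  %v = vector.maskedload %m[%i], %mask, %pass
      : memref<?xf32>, vector<16xi1>, vector<16xf32> into vector<16xf32>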

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D94278
2021-01-08 13:59:34 -08:00
Alex Zinenko
dd5165a920 [mlir] replace LLVM dialect float types with built-ins
Continue the convergence between LLVM dialect and built-in types by replacing
the bfloat, half, float and double LLVM dialect types with their built-in
counterparts. At the API level, this is a direct replacement. At the syntax
level, we change the keywords to `bf16`, `f16`, `f32` and `f64`, respectively,
to be compatible with the built-in type syntax. The old keywords can still be
parsed but produce a deprecation warning and will be eventually removed.

Depends On D94178

Reviewed By: mehdi_amini, silvas, antiagainst

Differential Revision: https://reviews.llvm.org/D94179
2021-01-08 17:38:12 +01:00
Alex Zinenko
2230bf99c7 [mlir] replace LLVMIntegerType with built-in integer type
The LLVM dialect type system has been closed until now, i.e., it did not
support types from other dialects inside containers. While this has had obvious
benefits of deriving from a common base class, it has led to some simple types
being almost identical with the built-in types, namely integer and floating
point types. This in turn has led to a lot of larger-scale complexity: simple
types must still be converted, numerous operations that correspond to LLVM IR
intrinsics are replicated to produce versions operating on either LLVM dialect
or built-in types leading to quasi-duplicate dialects, lowering to the LLVM
dialect is essentially required to be one-shot because of type conversion, etc.
In this light, it is reasonable to trade off some local complexity in the
internal implementation of LLVM dialect types for removing larger-scale system
complexity. Previous commits to the LLVM dialect type system have adapted the
API to support types from other dialects.

Replace LLVMIntegerType with the built-in IntegerType plus additional checks
that such types are signless (these are isolated in a utility function that
replaced `isa<LLVMType>` and in the parser). Temporarily keep the possibility
to parse `!llvm.i32` as a synonym for `i32`, but add a deprecation notice.
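A hedged illustration of the spelling change (function hypothetical):

  // Before: llvm.func @f(%arg0: !llvm.i32) -> !llvm.i32
  // After: the built-in type is used directly.
  llvm.func @f(%arg0: i32) -> i32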

Reviewed By: mehdi_amini, silvas, antiagainst

Differential Revision: https://reviews.llvm.org/D94178
2021-01-07 19:48:31 +01:00
Amara Emerson
322d0afd87 [llvm][mlir] Promote the experimental reduction intrinsics to be first class intrinsics.
This change renames the intrinsics to not have "experimental" in the name.

The autoupgrader will handle legacy intrinsics.

Relevant ML thread: http://lists.llvm.org/pipermail/llvm-dev/2020-April/140729.html

Differential Revision: https://reviews.llvm.org/D88787
2020-10-07 10:36:44 -07:00
Aart Bik
54759cefdb [mlir] [VectorOps] changes to printing support for integers
(1) simplify integer printing logic by always using 64-bit print
(2) add index support (since vector<16xindex> is planned to be added)
(3) adjust naming convention print_x -> printX

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D88436
2020-09-28 11:43:31 -07:00
Aart Bik
b8880f5f97 [mlir] [VectorOps] generalize printing support for integers
This generalizes printing beyond just i1, i32, and i64, and also accounts
for signed and unsigned interpretation in the output.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D88290
2020-09-25 04:52:21 -07:00
Benjamin Kramer
2d76274b99 [mlir][VectorOps] Loosen restrictions on vector.reduction types
LLVM can deal with any integer or float type, don't arbitrarily restrict
it to f32/f64/i32/i64.
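A hedged example of a type accepted after this change (shape hypothetical;
quoted kind per the syntax of the time):

  %r = vector.reduction "add", %v : vector<16xi8> into i8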

Differential Revision: https://reviews.llvm.org/D88010
2020-09-21 12:45:23 +02:00
aartbik
3c42c0dcf6 [mlir] [VectorOps] Enable 32-bit index optimizations
Rationale:
After some discussion we decided that it is safe to assume 32-bit
indices for all subscripting in the vector dialect (it is unlikely
the dialect will be used, or even work, for such long vectors).
So rather than detecting specific situations that can exploit
32-bit indices with higher parallel SIMD, we just optimize it
by default, and let users that don't want it opt out.

Reviewed By: nicolasvasilache, bkramer

Differential Revision: https://reviews.llvm.org/D87404
2020-09-10 00:26:27 -07:00
aartbik
060c9dd1cc [mlir] [VectorOps] Improve SIMD compares with narrower indices
When allowed, use 32-bit indices rather than 64-bit indices in the
SIMD computation of masks. This runs up to 2x and 4x faster on
a number of AVX2 and AVX512 microbenchmarks.

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D87116
2020-09-03 21:43:38 -07:00
Thomas Raoux
68330ee0a9 [mlir][vector] Relax transfer_read/transfer_write restriction on memref operand
Relax the verifier for transfer_read/transfer_write operation so that it can
take a memref with a different element type than the vector being read/written.

This is based on the discourse discussion:
https://llvm.discourse.group/t/memref-cast/1514

Differential Revision: https://reviews.llvm.org/D85244
2020-08-10 08:57:48 -07:00
aartbik
c3c95b9c80 [mlir] [VectorOps] Improve lowering of extract_strided_slice (and friends like shape_cast)
Using a shuffle for the last recursive step in progressive lowering not only
results in much more compact IR but also in more efficient code (since the
backend is no longer confused about subvector aliasing for longer vectors).

E.g. the following

  %f = vector.shape_cast %v0: vector<1024xf32> to vector<32x32xf32>

yields much better x86-64 code that runs 3x faster than the original.

Reviewed By: bkramer, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D85482
2020-08-07 09:21:05 -07:00
aartbik
39379916a7 [mlir] [VectorOps] Add masked load/store operations to Vector dialect
The intrinsics were already supported and vector.transfer_read/write lowered
directly into these operations. By providing them as individual ops, however,
clients can use them directly, and it opens up progressive lowering of transfer
operations at higher levels (rather than direct lowering to LLVM IR as done now).

Reviewed By: bkramer

Differential Revision: https://reviews.llvm.org/D85357
2020-08-05 16:45:24 -07:00
aartbik
e8dcf5f87d [mlir] [VectorOps] Add expand/compress operations to Vector dialect
Introduces the expand and compress operations to the Vector dialect
(important memory operations for sparse computations), together
with a first reference implementation that lowers to the LLVM IR
dialect to enable running on CPU (and other targets that support
the corresponding LLVM IR intrinsics).

Reviewed By: reidtatge

Differential Revision: https://reviews.llvm.org/D84888
2020-08-04 12:00:42 -07:00
Alex Zinenko
ec1f4e7c3b [mlir] switch the modeling of LLVM types to use the new mechanism
A new first-party modeling for LLVM IR types in the LLVM dialect has been
developed in parallel to the existing modeling based on wrapping LLVM `Type *`
instances. It resolves the long-standing problem of modeling identified
structure types, including recursive structures, and enables future removal of
LLVMContext and related locking mechanisms from LLVMDialect.

This commit only switches the modeling by (a) renaming LLVMTypeNew to LLVMType,
(b) removing the old implementation of LLVMType, and (c) updating the tests. It
is intentionally minimal. Separate commits will remove the infrastructure built
for the transition and update API uses where appropriate.

Depends On D85020

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D85021
2020-08-04 14:29:25 +02:00
aartbik
1485fd295b [mlir] [VectorOps] Improve scatter/gather CPU performance
Replaced the linearized address computation with the proper LLVM way of
defining a vector of base + indices in SIMD style. This yields
much better code. Some prototype results from microbenchmarking
sparse matrix x vector with 50% sparsity (about 2-3x faster):

         LINEARIZED     IMPROVED
GFLOPS  sdot  saxpy     sdot saxpy
16x16    1.6   1.4       4.4  2.1
32x32    1.7   1.6       5.8  5.9
64x64    1.7   1.7       6.4  6.4
128x128  1.7   1.7       5.9  5.9
256x256  1.6   1.6       6.1  6.0
512x512  1.4   1.4       4.9  4.7

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D84368
2020-07-22 23:47:36 -07:00
aartbik
19dbb230a2 [mlir] [VectorOps] Add scatter/gather operations to Vector dialect
Introduces the scatter/gather operations to the Vector dialect
(important memory operations for sparse computations), together
with a first reference implementation that lowers to the LLVM IR
dialect to enable running on CPU (and other targets that support
the corresponding LLVM IR intrinsics).

The operations can be used directly where applicable, or can be used
during progressive lowering to bring other memory operations closer to
hardware ISA support for a gather/scatter. The semantics of the operation
closely correspond to those of the corresponding llvm intrinsics.

Note that the operation allows for a dynamic index vector (which is
important for sparse computations). However, this first reference
lowering implementation "serializes" the address computation when
base + index_vector is converted to a vector of pointers. Exploring
how to use SIMD properly during this step is TBD. More general
memrefs and idiomatic versions of striding are also TBD.

Reviewed By: arpith-jacob

Differential Revision: https://reviews.llvm.org/D84039
2020-07-21 10:57:40 -07:00
Nicolas Vasilache
affbc0cd1c [mlir] Add alignment attribute to LLVM memory ops and use in vector.transfer
Summary: The native alignment may generally not be used when lowering a vector.transfer to the underlying load/store operation. This revision fixes the unmasked load/store alignment to match that of the masked path.

Differential Revision: https://reviews.llvm.org/D83684
2020-07-13 17:35:20 -04:00
Benjamin Kramer
3bffe6022c [mlir][VectorOps] Lower vector.fma to llvm.fmuladd instead of llvm.fma
Summary:
These are semantically equivalent, but fmuladd allows decaying the op
into fmul+fadd if there is no fma instruction available. llvm.fma lowers
to scalar calls to libm fmaf, which is a lot slower.
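A hedged sketch (shapes hypothetical):

  // Now lowers to llvm.intr.fmuladd rather than llvm.intr.fma.
  %r = vector.fma %a, %b, %c : vector<8xf32>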

Reviewers: nicolasvasilache, aartbik, ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, Kayjukh, jurahul, msifontes

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D83666
2020-07-13 12:26:03 +02:00
Nicolas Vasilache
22c8a08fd8 [mlir][Vector] Fold chains of ExtractOp
This revision adds folding to ExtractOp by simply concatenating the position attributes.
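A hedged sketch of the fold (positions and shapes hypothetical):

  %a = vector.extract %v[0] : vector<4x8xf32>
  %b = vector.extract %a[1] : vector<8xf32>
  // folds to:
  %b = vector.extract %v[0, 1] : vector<4x8xf32>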
2020-07-10 09:32:02 -04:00
aartbik
ceb1b327b5 [mlir] [VectorOps] Add the ability to mark FP reductions with "reassociate" attribute
Rationale:
In general, passing "fastmath" from MLIR to the LLVM backend is not supported, and even just providing such a feature for experimentation is under debate. However, passing fine-grained fastmath-related attributes on individual operations is generally accepted. This CL introduces an option to instruct the vector-to-llvm lowering phase to annotate floating-point reductions with the "reassociate" fastmath attribute, which allows the LLVM backend to use SIMD implementations for such constructs. Other lowering passes can start using this mechanism right away in cases where reassociation is allowed.

Benefit:
For some microbenchmarks on x86-avx2, speedups over 20 were observed for longer vectors (due to cleaner, spill-free, SIMD-exploiting code).

Usage:
mlir-opt --convert-vector-to-llvm="reassociate-fp-reductions"

Reviewed By: ftynse, mehdi_amini

Differential Revision: https://reviews.llvm.org/D82624
2020-06-26 11:03:14 -07:00
aartbik
0d82ab7885 [mlir] [VectorOps] Improve vector.constant_mask lowering
Use direct vector constants for the 1-D case. This approach
scales much better than generating elaborate insertion operations
that are eventually folded into a constant. We could of course
generalize the 1-D case to higher ranks, but this simplification
already helps in scaling some microbenchmarks that would formerly
crash on the intermediate IR length.
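A hedged sketch of the 1-D case (shape hypothetical):

  %m = vector.constant_mask [4] : vector<8xi1>
  // now lowers, in essence, directly to:
  %m = constant dense<[true, true, true, true, false, false, false, false]> : vector<8xi1>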

Reviewed By: reidtatge

Differential Revision: https://reviews.llvm.org/D82144
2020-06-19 10:40:08 -07:00
aartbik
c9eeeb3871 [mlir] [VectorOps] remove print_i1 from runtime support library
Summary:
The "i1" (viz. bool) type does not have a proper equivalent on the "C"
size. So, to avoid any ABIs issues, we simply use print_i32 on an i32
value of one or zero for true and false. This has the added advantage
that one less function needs to be implemented when porting the runtime
support library.

Reviewers: ftynse, bkramer, nicolasvasilache

Reviewed By: ftynse

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D82048
2020-06-18 11:07:43 -07:00
River Riddle
c0cd1f1c5c [mlir] Refactor BoolAttr to be a special case of IntegerAttr
This simplifies a lot of handling of BoolAttr/IntegerAttr. For example, a lot of places currently have to handle both IntegerAttr and BoolAttr. In other places, a decision is made to pick one which can lead to surprising results for users. For example, DenseElementsAttr currently uses BoolAttr for i1 even if the user initialized it with an Array of i1 IntegerAttrs.

Differential Revision: https://reviews.llvm.org/D81047
2020-06-04 16:41:24 -07:00
Nicolas Vasilache
5f9e0466f2 [mlir][Vector] Fix vector.transfer alignment calculation
https://reviews.llvm.org/D79246 introduces alignment propagation for vector transfer operations. Unfortunately, the alignment calculation is incorrect and can result in crashes.

This revision fixes the calculation by using the natural alignment of the memref elemental type, instead of the resulting vector type.

If more alignment is desired, it can be done in 2 ways:
1. use a proper vector.type_cast to transform a memref<axbxcxdxf32> into a memref<axbxvector<cxdxf32>> giving a natural alignment of vector<cxdxf32>
2. add an alignment attribute to vector transfer operations and propagate it.
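For option 1, a hedged sketch (shapes hypothetical):

  // The result memref has the natural alignment of vector<8x4xf32>.
  %vm = vector.type_cast %m : memref<8x4xf32> to memref<vector<8x4xf32>>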

With this change the alignment in the relevant tests goes down from 128 to 4.

Lastly, a few minor cleanups are performed and the custom `isMinorIdentityMap` is deprecated.

Differential Revision: https://reviews.llvm.org/D80734
2020-05-28 17:58:51 -04:00
aartbik
c295a65da4 [mlir] [VectorOps] Add 'vector.flat_transpose' operation
Summary:
Provides a representation of the linearized LLVM intrinsic.
With tests and a lowering implementation to the LLVM IR dialect.
Prepares a better lowering for the 2-D vector.transpose.
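A hedged example (attribute values hypothetical):

  // Linearized 4x4 transpose on a flat vector.
  %t = vector.flat_transpose %v {rows = 4 : i32, columns = 4 : i32}
      : vector<16xf32> -> vector<16xf32>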

Reviewers: nicolasvasilache, ftynse, reidtatge, bkramer, dcaballe

Reviewed By: ftynse, dcaballe

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D80419
2020-05-27 11:09:48 -07:00
Nicolas Vasilache
1870e787af [mlir][Vector] Add an optional "masked" boolean array attribute to vector transfer operations
Summary:
Vector transfer ops semantic is extended to allow specifying a per-dimension `masked`
attribute. When the attribute is false on a particular dimension, lowering to LLVM emits
unmasked load and store operations.
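A hedged sketch of the attribute (shapes hypothetical):

  // masked = [false]: this dimension lowers to unmasked load/store.
  %v = vector.transfer_read %m[%i], %pad {masked = [false]}
      : memref<?xf32>, vector<16xf32>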

Differential Revision: https://reviews.llvm.org/D80098
2020-05-18 11:52:08 -04:00
aartbik
fb2c4d50f1 [mlir] [VectorOps] Implement vector.constant_mask lowering to LLVM IR
Summary:
Makes this operation runnable on CPU by generating MLIR instructions
that are eventually folded into an LLVM IR constant for the mask.

Reviewers: nicolasvasilache, ftynse, reidtatge, bkramer, andydavis1

Reviewed By: nicolasvasilache, ftynse, andydavis1

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79815
2020-05-12 19:44:23 -07:00
Reid Tatge
334a4159ec [mlir][Vector] NFC - Rename vector.strided_slice into vector.extract_strided_slice
Differential Revision: https://reviews.llvm.org/D79734
2020-05-11 14:21:10 -07:00
Wen-Heng (Jack) Chung
a23f190213 [mlir][vector] set alignment when lowering transfer_read and transfer_write.
When emitting masked load / store, set alignment from data layout.

Differential Revision: https://reviews.llvm.org/D79246
2020-05-07 11:44:25 +02:00
Wen-Heng (Jack) Chung
a581c6f8cd [mlir][vector] add tests for type_cast taking non-zero addrspace
Add tests for vector.type_cast that takes memrefs on non-zero
addrspaces.

Differential Revision: https://reviews.llvm.org/D79099
2020-05-04 10:31:12 +02:00
aartbik
6937251f01 [mlir] [VectorOps] Included i1 support for vector.print
Summary:
Added boolean support to vector.print.
Useful for upcoming "mask" tests.

Reviewers: ftynse, nicolasvasilache, andydavis1

Reviewed By: andydavis1

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, grosul1, frgossen, Kayjukh, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79198
2020-04-30 14:56:26 -07:00
Wen-Heng (Jack) Chung
be16075bfc [mlir][vector] let transfer_read and transfer_write take non-zero addrspace.
Enhance lowering logic and tests so vector.transfer_read and
vector.transfer_write take memrefs on non-zero addrspaces.

Differential Revision: https://reviews.llvm.org/D79023
2020-04-29 17:11:48 +02:00
Nicolas Vasilache
b2c79c50ed [mlir][VectorOps] Extend VectorTransfer lowering to n-D memref with minor identity map
Summary: This revision extends the lowering of vector transfers to work with n-D memref and 1-D vector where the permutation map is an identity on the most minor dimensions (1 for now).
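A hedged sketch of a now-supported case (map and shapes hypothetical):

  // 1-D vector read from a 2-D memref; the permutation map is an
  // identity on the most minor dimension.
  %v = vector.transfer_read %m[%i, %j], %pad
      {permutation_map = affine_map<(d0, d1) -> (d1)>}
      : memref<?x?xf32>, vector<16xf32>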

Differential Revision: https://reviews.llvm.org/D78925
2020-04-27 11:20:55 -04:00