Commit Graph

114 Commits

Hanhan Wang
0a1569a400 [mlir][NFC] Remove trailing whitespaces from *.td and *.mlir files.
This was generated by running:

```
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.td
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.mlir
```

Reviewed By: rriddle, dcaballe

Differential Revision: https://reviews.llvm.org/D138866
2022-11-28 15:26:30 -08:00
Javier Setoain
aa9647e2d0 [mlir][vector] Add vector.scalable.insert/extract ops
These new operations match the semantics of
llvm.experimental.vector.insert and llvm.experimental.vector.extract.

`vector.scalable.insert` and `vector.scalable.extract` allow,
respectively, inserting vectors into scalable vectors and extracting
vectors from scalable vectors.

The discussion about the inclusion of these operations is here:
https://discourse.llvm.org/t/rfc-interfacing-between-fixed-length-and-scalable-vectors-for-vls-vector-code-on-scalable-vector-architectures
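
For illustration, a minimal sketch of the two ops (types illustrative, assuming the landed syntax):

```
// Insert a fixed-length vector into a scalable vector at offset 0,
// then extract a fixed-length part back out.
%1 = vector.scalable.insert %fixed, %sv[0] : vector<4xf32> into vector<[4]xf32>
%2 = vector.scalable.extract %1[0] : vector<4xf32> from vector<[4]xf32>
```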

Differential Revision: https://reviews.llvm.org/D127875
2022-11-08 08:51:15 +00:00
Slava Zakharin
35c9085121 [mlir][llvmir] Support FastmathFlags for LLVM intrinsic operations.
This is required for D126305 code to propagate fastmath attributes
for Arith operations that are converted to LLVM IR intrinsics
operations.

LLVM IR intrinsic operations now use a custom assembly format to avoid
printing {fastmathFlags = #llvm.fastmath<none>}, which is too verbose.
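
As a hedged sketch (the exact printed form may differ), an intrinsic op carrying a non-default fastmath attribute:

```
// The default #llvm.fastmath<none> is elided; non-default flags still print.
%0 = llvm.intr.exp(%arg0) {fastmathFlags = #llvm.fastmath<fast>} : (f32) -> f32
```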

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D136225
2022-11-02 12:41:47 -07:00
Nicolas Vasilache
db6f8ebe06 [mlir][Vector] Support 0-D vectors in ShuffleOp
Co-authored-by: Michal Terepeta <michalt@google.com>

Reviewed-by: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115744
2022-08-29 00:39:57 -07:00
Nicolas Vasilache
6e81eae2f7 [mlir][Vector] Support 0-D vectors in TransposeOp
Co-authored-by: Michal Terepeta <michalt@google.com>

Reviewed-by: ftynse

Differential Revision: https://reviews.llvm.org/D115743
2022-08-26 03:40:21 -07:00
Michal Terepeta
ab45a4329b [mlir][Vector] Support 0-D vectors in FMAOp
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D115742
2022-08-24 08:49:58 -07:00
Che-Yu Wu
f250b97222 Reland "[MLIR]Extend vector.gather to support n-D result"
Reviewed By: dcaballe

Differential Revision: https://reviews.llvm.org/D132507
2022-08-24 04:18:00 +00:00
Mehdi Amini
de54bcc54c Revert "[MLIR]Extend vector.gather to support n-D result"
This reverts commit 0cbfd6fd16.

A test is crashing with the shared_lib config.
2022-08-23 20:26:38 +00:00
Che-Yu Wu
0cbfd6fd16 [MLIR]Extend vector.gather to support n-D result
Currently vector.gather only supports reading memory into a 1-D result vector.
This patch extends it to support an n-D result vector with the indices, masks,
and passthroughs in n-D vectors.

As we are trying to vectorize tensor.extract with vector.gather
(https://github.com/iree-org/iree/issues/9198), it will need to gather the
elements into an n-D vector. Having vector.gather with n-D results allows us
to avoid flattening and reshaping at the vectorization stage. The backends can then
decide the optimal ways to lower the vector.gather op.

Note that this is different from n-D gathering, which is about reading n-D
memory with the n-D indices. The indices here are still only 1-D offsets on
the base.
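
A sketch of what this enables (types illustrative): a 2-D result gathered from a 1-D base using 2-D index, mask, and pass-through vectors:

```
%g = vector.gather %base[%c0][%indices], %mask, %pass_thru
    : memref<?xf32>, vector<2x3xi32>, vector<2x3xi1>, vector<2x3xf32>
      into vector<2x3xf32>
```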

Reviewed By: dcaballe

Differential Revision: https://reviews.llvm.org/D131905
2022-08-23 16:53:19 +00:00
Jeff Niu
b2ccfb4d95 [mlir][LLVMIR] Change ShuffleVectorOp to use assembly format
This patch moves `LLVM::ShuffleVectorOp` to assembly format and in the
process drops the extra type that can be inferred (both operand types
are required to be the same) and switches to a dense integer array.

The syntax change:

```
// Before
// Before
%2 = llvm.shufflevector %0, %1 [0 : i32, 0 : i32, 0 : i32, 0 : i32] : vector<4xf32>, vector<4xf32>
// After
%2 = llvm.shufflevector %0, %1 [0, 0, 0, 0] : vector<4xf32>
```

Reviewed By: dcaballe

Differential Revision: https://reviews.llvm.org/D132038
2022-08-18 12:46:04 -04:00
Güray Özen
85882e7d64 [mlir][Vector] Support 0-D vectors in ReductionOp
This commit adds support for 0-D vectors in ReductionOp.
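
For instance (a sketch, assuming the post-D119343 kind syntax):

```
// A 0-D vector reduces to its single element.
%0 = vector.reduction <add>, %v : vector<f32> into f32
```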

Reviewed By: nicolasvasilache, dcaballe

Differential Revision: https://reviews.llvm.org/D131896
2022-08-18 09:12:47 +00:00
Thomas Raoux
8fe076ffe0 [mlir][VectorToLLVM] Fix bug in lowering of vector.reduce fmax/fmin
The lowering of fmax/fmin reduce was ignoring the optional accumulator.
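
A sketch of the affected case (assuming the optional accumulator trails the source vector, as for the other kinds; the kind is spelled maxf/minf in the attribute):

```
// The %acc operand was previously dropped by the LLVM lowering.
%0 = vector.reduction <maxf>, %v, %acc : vector<4xf32> into f32
```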

Differential Revision: https://reviews.llvm.org/D129597
2022-07-12 22:03:39 +00:00
Mahesh Ravishankar
fa596c6921 [mlir][Vector] Fix reordering of floating point adds during lower of vector.contract.
Adding the accumulator value after the `vector.contract` changes the
precision of the operation. This makes sure the accumulator is carried
through to `vector.reduce` (and down to LLVM).

Differential Revision: https://reviews.llvm.org/D128674
2022-06-28 05:26:39 +00:00
River Riddle
3028bf740e [mlir][NFC] Update textual references of func to func.func in Conversion/ tests
The special case parsing of `func` operations is being removed.
2022-04-20 22:17:27 -07:00
jacquesguan
01ad70fd1d [mlir][Vector] Fold ShuffleOp if result is identical to one of source vectors.
For example, we could do the following eliminations:
  fold vector.shuffle V1, V2, [0, 1, 2, 3] : <4xi32>, <2xi32> -> V1
  fold vector.shuffle V1, V2, [4, 5] : <4xi32>, <2xi32> -> V2

Differential Revision: https://reviews.llvm.org/D122706
2022-03-31 10:46:13 +08:00
Javier Setoain
7bc8ad5109 [mlir][vector][nfc] Rename index optimizations option
We are using "enable-index-optimizations" and "indexOptimizations" as
names for an optimization that consists of using i32 for indices within
a vector, for instance when building a vector comparison for mask
generation. The name is confusing and suggests a scope beyond these
vector indices. This change makes the function of the option explicit
in its name.

Differential Revision: https://reviews.llvm.org/D122415
2022-03-29 11:33:22 +01:00
Javier Setoain
a75a46db89 [mlir][Vector] Enable create_mask for scalable vectors
The way vector.create_mask is currently lowered is
vector-length-dependent, and therefore incompatible with scalable vector
types. This patch adds an alternative lowering path for create_mask
operations that return a scalable vector mask.
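
For example (sketch), a mask covering the first %n lanes of a scalable vector:

```
%mask = vector.create_mask %n : vector<[4]xi1>
```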

Differential Revision: https://reviews.llvm.org/D118248
2022-03-25 10:48:59 +00:00
Javier Setoain
f2b89c7ae0 [mlir][Vector] Use create_mask in transfer mask materializations
Currently, the transfer mask is materialized by generating the vector
comparison: [offset + 0, .., offset + length - 1] < [dim, .., dim]

A better alternative is to materialize the transfer mask by using the
operation: `vector.create_mask (dim - offset)`, which will generate
simpler code and compose better with scalable vectors.
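
A sketch of the new materialization (names illustrative, not from the patch):

```
// Instead of comparing [offset .. offset + length - 1] against the dimension,
// build the mask directly from the remaining extent.
%dim  = memref.dim %src, %c0 : memref<?xf32>
%rem  = arith.subi %dim, %offset : index
%mask = vector.create_mask %rem : vector<8xi1>
```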

Differential Revision: https://reviews.llvm.org/D120487
2022-03-08 09:02:50 +00:00
Matthias Springer
fe0bf7d469 [mlir][vector][NFC] Use CombiningKindAttr instead of StringAttr
This makes the op consistent with other ops in vector dialect.

Differential Revision: https://reviews.llvm.org/D119343
2022-02-10 19:13:29 +09:00
River Riddle
6a8ba3186e [mlir] Split std.splat into tensor.splat and vector.splat
This is part of the larger effort to split the standard dialect. This will also allow for pruning some
additional dependencies on Standard (done in a followup).
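
The split in a nutshell (sketch):

```
// std.splat is gone; use the dialect matching the result type.
%t = tensor.splat %s : tensor<4xf32>
%v = vector.splat %s : vector<4xf32>
```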

Differential Revision: https://reviews.llvm.org/D118202
2022-02-02 14:45:12 -08:00
Nicolas Vasilache
408553dd96 [mlir][Vector] Support 0-D vectors in CreateMaskOp
The 0-D case gets lowered in almost the same way that the 1-D case does
in VectorCreateMaskOpConversion. I also had to slightly update the
verifier for the op to always require exactly 1 operand in the 0-D case.

Depends On D115220

Reviewed by: ftynse

Differential revision: https://reviews.llvm.org/D115221
2021-12-12 13:32:29 +00:00
Michal Terepeta
caf89c0db6 [mlir][Vector] Support 0-D vectors in ConstantMaskOp
To support creating masks with just a single `true` or `false` value,
I had to relax the restriction in the verifier that the rank always equals
the length of the attribute array; in other words, we now allow:

- `vector.constant_mask [0] : vector<i1>` which gets lowered to
  `arith.constant dense<false> : vector<i1>`
- `vector.constant_mask [1] : vector<i1>` which gets lowered to
  `arith.constant dense<true> : vector<i1>`

(the attribute list for the 0-D case must be a singleton containing
either `0` or `1`)

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D115023
2021-12-06 08:03:04 +00:00
Michal Terepeta
1423e8bf5d [mlir][Vector] Support 0-D vectors in BitCastOp
The implementation only allows bit-casting between two 0-D vectors. We could
probably support casting from/to vectors like `vector<1xf32>`, but I wasn't
convinced that this would be important and it would require breaking the
invariant that `BitCastOp` works only on vectors with equal rank.
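
For example (sketch):

```
// Rank is preserved: 0-D to 0-D, with matching bitwidth.
%1 = vector.bitcast %0 : vector<f32> to vector<i32>
```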

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D114854
2021-12-03 08:55:59 +00:00
Stephen Neuendorffer
7386364889 Revert "[MLIR] Update Vector To LLVM conversion to be aware of assume_alignment"
This reverts commit 29a50c5864.

After LLVM lowering, the original patch incorrectly moved alignment
information across an unconstrained GEP operation.  This is only correct
for some index offsets in the GEP.  It seems that the best approach is,
in fact, to rely on LLVM to propagate information from the llvm.assume()
to users.

Thanks to Thomas Raoux for catching this.
2021-11-30 15:18:22 -08:00
Michal Terepeta
7e65fc9a60 [mlir][Vector] Support 0-D vectors in BroadcastOp
This changes the op to produce `AnyVectorOfAnyRank` following mostly the code for 1-D vectors.
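
For example (sketch):

```
// Broadcasting a scalar now also yields a 0-D vector.
%v = vector.broadcast %s : f32 to vector<f32>
```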

Depends On D114598

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D114550
2021-11-26 17:17:18 +00:00
Michal Terepeta
cc311a155a [mlir][Vector] Support 0-D vectors in VectorPrintOpConversion
Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D114549
2021-11-25 20:12:18 +00:00
Nicolas Vasilache
3ff4e5f2a4 [mlir][Vector] Thread 0-d vectors through InsertElementOp.
This revision makes concrete use of 0-d vectors to extend the semantics of
InsertElementOp.
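
For example (sketch): the 0-D form takes no position operand:

```
%1 = vector.insertelement %s, %v[] : vector<f32>
```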

Reviewed By: dcaballe, pifon2a

Differential Revision: https://reviews.llvm.org/D114388
2021-11-23 12:55:11 +00:00
Nicolas Vasilache
e7026aba00 [mlir][Vector] Thread 0-d vectors through ExtractElementOp.
This revision starts making concrete use of 0-d vectors to extend the semantics of
ExtractElementOp.
In the process, a new VectorOfAnyRank constraint is added to Tablegen OpBase.td to allow a progressive transition to supporting 0-d vectors by gradually opting in.

Differential Revision: https://reviews.llvm.org/D114387
2021-11-23 12:39:44 +00:00
Mogball
7c5ecc8b7e [mlir][vector] Insert/extract element can accept index
`vector::InsertElementOp` and `vector::ExtractElementOp` have had their `position`
operand changed to accept `AnySignlessIntegerOrIndex` for better operability with
operations that use `index`, such as affine loops.

LLVM's `extractelement` and `insertelement` can also accept `i64`, so lowering
directly to these operations without explicitly inserting casts is allowed. SPIRV's
equivalent ops can also accept `i64`.
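
For example (sketch):

```
// An index-typed position, handy when it comes from a loop induction variable.
%i = arith.constant 3 : index
%e = vector.extractelement %v[%i : index] : vector<8xf32>
```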

Reviewed By: nicolasvasilache, jpienaar

Differential Revision: https://reviews.llvm.org/D114139
2021-11-18 22:40:29 +00:00
Nicolas Vasilache
ee80ffbf9a [mlir][Linalg] Add bounded recursion declaration to FMAOp -> LLVM conversion.
FMAOp -> LLVM conversion is done progressively by peeling off 1 dimension from FMAOp at each pattern iteration. Add the bounded-recursion property declaration to the pattern so that the rewriter can apply it multiple times.

Without this, FMAOps with 3+D do not lower to LLVM.

Differential Revision: https://reviews.llvm.org/D113886
2021-11-15 12:41:52 +00:00
Nicolas Vasilache
00ac874ff6 [mlir][Vector] Add InsertStridedSliceOp -> ShuffleOp for the rank-1 cases.
This also fixes the vector.shuffle C++ builder which had an incorrect type assumption that triggers with this new rewrite.
The vector.shuffle semantics were correct though.
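
A sketch of the rank-1 rewrite (indices illustrative):

```
// Inserting a rank-1 slice at offset 2 ...
%r = vector.insert_strided_slice %sub, %dst {offsets = [2], strides = [1]}
    : vector<2xf32> into vector<8xf32>
// ... can be expressed as a shuffle; lanes 8-9 select from %sub.
%s = vector.shuffle %dst, %sub [0, 1, 8, 9, 4, 5, 6, 7] : vector<8xf32>, vector<2xf32>
```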

Differential revision: https://reviews.llvm.org/D112578
2021-10-27 07:57:17 +00:00
River Riddle
015192c634 [mlir:DialectConversion] Restructure how argument/target materializations get invoked
The current implementation invokes materializations
whenever an input operand does not have a mapping for the
desired type, i.e. it requires materialization at the earliest possible
point. This conflicts with the goal of dialect conversion (and also the
current documentation) which states that a materialization is only
required if the materialization is supposed to persist after the
conversion process has finished.

This revision refactors this such that whenever a target
materialization "might" be necessary, we insert an
unrealized_conversion_cast to act as a temporary materialization.
This allows for deferring the invocation of the user
materialization hooks until the end of the conversion process,
where we actually have a better sense if it's actually
necessary. This has several benefits:

* In some cases a target materialization hook is no longer necessary.
  When performing a full conversion, there are some situations where a
  temporary materialization is necessary. Moving forward, these users
  won't need to provide any target materializations, as the temporary
  materializations do not require the user to provide materialization
  hooks.

* getRemappedValue can now handle values that haven't been converted yet.
  Before this commit, it wasn't well supported to get the remapped value
  of a value that hadn't been converted yet (making it difficult or
  impossible to convert multiple operations in many situations). This
  commit updates getRemappedValue to properly handle this case by
  inserting temporary materializations when necessary.

Another code-health related benefit is that with this change we
can move a majority of the complexity related to materializations
to the end of the conversion process, instead of handling it ad hoc
while the conversion is happening.

Differential Revision: https://reviews.llvm.org/D111620
2021-10-27 02:09:04 +00:00
Mogball
a54f4eae0e [MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200

Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.

Renamed all instances of operations in the codebase and in tests.
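
For example (sketch):

```
// Formerly std.addi / std.cmpi:
%sum = arith.addi %a, %b : i32
%lt  = arith.cmpi slt, %a, %b : i32
```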

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D110797
2021-10-13 03:07:03 +00:00
River Riddle
f8479d9de5 [mlir] Set the namespace of the BuiltinDialect to 'builtin'
Historically the builtin dialect has had an empty namespace. This has unfortunately created a very awkward situation, where many utilities either have to special case the empty namespace, or just don't work at all right now. This revision adds a namespace to the builtin dialect, and starts to cleanup some of the utilities to no longer handle empty namespaces. For now, the assembly form of builtin operations does not require the `builtin.` prefix. (This should likely be re-evaluated though)

Differential Revision: https://reviews.llvm.org/D105149
2021-07-28 21:00:10 +00:00
Matthias Springer
d1a9e9a7cb [mlir][vector] Remove vector.transfer_read/write to LLVM lowering
This simplifies the vector to LLVM lowering. Previously, both vector.load/store and vector.transfer_read/write lowered directly to LLVM. With this commit, there is a single path to LLVM vector load/store instructions and vector.transfer_read/write ops must first be lowered to vector.load/store ops.

* Remove vector.transfer_read/write to LLVM lowering.
* Allow non-unit memref strides on all but the most minor dimension for vector.load/store ops.
* Add maxTransferRank option to populateVectorTransferLoweringPatterns.
* vector.transfer_reads with changing element type can no longer be lowered to LLVM. (This functionality is needed only for SPIRV.)

Differential Revision: https://reviews.llvm.org/D106118
2021-07-17 14:07:27 +09:00
Alex Zinenko
881dc34f73 [mlir] replace llvm.mlir.cast with unrealized_conversion_cast
The dialect-specific cast between builtin (ex-standard) types and LLVM
dialect types was introduced a long time before built-in support for
unrealized_conversion_cast. It has a similar purpose, but is restricted
to compatible builtin and LLVM dialect types, which may hamper
progressive lowering and composition with types from other dialects.
Replace llvm.mlir.cast with unrealized_conversion_cast, and drop the
operation that became unnecessary.
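
For example (sketch, in post-change syntax):

```
// Bridges a builtin type and its converted counterpart during lowering.
%1 = builtin.unrealized_conversion_cast %0 : index to i64
```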

Also make unrealized_conversion_cast legal by default in
LLVMConversionTarget as the majority of conversions using it are partial
conversions that actually want the casts to persist in the IR. The
standard-to-llvm conversion, which is still expected to run last, cleans
up the remaining casts.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D105880
2021-07-16 15:14:09 +02:00
thomasraoux
291025389c [mlir][vector] Refactor Vector Unrolling and remove Tuple ops
Simplify vector unrolling pattern to be more aligned with rest of the
patterns and be closer to vector distribution.
The new implementation uses ExtractStridedSlice/InsertStridedSlice
instead of the Tuple ops. After this change the Tuple-based ops no
longer have any uses, so they can be removed.

This allows removing a significant amount of dead code and will allow
extending the unrolling code going forward.

Differential Revision: https://reviews.llvm.org/D105381
2021-07-07 11:11:26 -07:00
Stephen Neuendorffer
29a50c5864 [MLIR] Update Vector To LLVM conversion to be aware of assume_alignment
vector.transfer_read and vector.transfer_write operations are converted
to llvm intrinsics with specific alignment information, however there
doesn't seem to be a way in llvm to take information from llvm.assume
intrinsics and change this alignment information. In any
event, due to the structure of the llvm.assume intrinsic, applying
this information at the llvm level is more cumbersome. Instead, let's
generate the masked vector load and store intrinsics with the right
alignment information from MLIR in the first place. Since
we're bothering to do this, let's just emit the proper alignment for
loads, stores, scatter, and gather ops too.

Differential Revision: https://reviews.llvm.org/D100444
2021-05-19 10:50:48 -07:00
Tobias Gysi
856b24df08 [mlir] test gather/scatter index vector of type index.
Test the vector to llvm lowering of index vectors with index element type.

Differential Revision: https://reviews.llvm.org/D100827
2021-04-20 11:24:04 +00:00
Tobias Gysi
b614ada0e8 [mlir] add support for index type in vectors.
The patch enables the use of index type in vectors. It is a prerequisite to support vectorization for indexed Linalg operations. This refactoring became possible due to the newly introduced data layout infrastructure. The data layout of a module defines the bitwidth of the index type needed to verify bitcasts and similar vector operations.
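
For example (sketch, in current syntax):

```
// Vectors of index are now valid; the index bitwidth comes from the data layout.
%v = arith.constant dense<[0, 1, 2, 3]> : vector<4xindex>
```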

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D99948
2021-04-08 08:17:13 +00:00
Matthias Springer
65a3f28939 [mlir] Add "mask" operand to vector.transfer_read/write.
Also factors out out-of-bounds mask generation from vector.transfer_read/write into a new MaterializeTransferMask pattern.
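
A sketch of the new form (the mask operand follows the padding value; types illustrative):

```
%v = vector.transfer_read %src[%i], %pad, %mask : memref<?xf32>, vector<8xf32>
```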

Differential Revision: https://reviews.llvm.org/D100001
2021-04-07 21:33:13 +09:00
Matthias Springer
95f8135043 [mlir] Change vector.transfer_read/write "masked" attribute to "in_bounds".
This is in preparation for adding a new "mask" operand. The existing "masked" attribute was used to specify dimensions that may be out-of-bounds. Such transfers can be lowered to masked load/stores. The new "in_bounds" attribute is used to specify dimensions that are guaranteed to be within bounds. (Semantics is inverted.)
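
A sketch of the renamed attribute (note the inverted semantics):

```
// Before: {masked = [false]} meant the dimension needs no masking.
// After: {in_bounds = [true]} states the access is guaranteed in bounds.
%v = vector.transfer_read %src[%i], %pad {in_bounds = [true]}
    : memref<?xf32>, vector<8xf32>
```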

Differential Revision: https://reviews.llvm.org/D99639
2021-03-31 18:04:22 +09:00
Julian Gross
e2310704d8 [MLIR] Create memref dialect and move dialect-specific ops from std.
Create the memref dialect and move dialect-specific ops
from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D98041
2021-03-15 11:14:09 +01:00
Aart Bik
df5ccf5a94 [mlir][vector] add higher dimensional support to gather/scatter
Similar to mask-load/store and compress/expand, the gather and
scatter operations now allow for higher-dimensional uses. Note that
to support the mixed-type index, the new syntax is:
   vector.gather %base [%i,%j] [%kvector] ....
The first client of this generalization is the sparse compiler,
which needs to define scatters and gathers on dense operands
of higher dimensions too.
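
A fuller sketch of the generalized form (types illustrative):

```
%g = vector.gather %base[%i, %j][%kvector], %mask, %pass_thru
    : memref<16x16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32>
      into vector<16xf32>
```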

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D97422
2021-02-26 14:20:19 -08:00
Andrew Pritchard
08c681f645 Perform memory accesses in the same addrspace as the corresponding memref.
It's not necessarily the case on all architectures that all memory is
addressable in addrspace 0, so casting the pointer to addrspace 0 is
liable to cause problems.

Reviewed By: aartbik, ftynse, nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96380
2021-02-18 12:36:16 -08:00
Diego Caballero
ee66e43a96 [mlir][Vector] Introduce 'vector.load' and 'vector.store' ops
This patch adds the 'vector.load' and 'vector.store' ops to the Vector
dialect [1]. These operations model *contiguous* vector loads and stores
from/to memory. Their semantics are similar to the 'affine.vector_load' and
'affine.vector_store' counterparts but without the affine constraints. The
most relevant feature is that these new vector operations may perform a vector
load/store on memrefs with a non-vector element type, unlike 'std.load' and
'std.store' ops. This opens the representation to model more generic vector
load/store scenarios: unaligned vector loads/stores, mixed scalar and vector
memory accesses on the same memref, decoupled memory allocation constraints
from memory accesses, etc. [1]. These operations will also facilitate the progressive
lowering of both Affine vector loads/stores and Vector transfer reads/writes
for those that read/write contiguous slices from/to memory.

In particular, this patch adds the 'vector.load' and 'vector.store' ops to the
Vector dialect, implements their lowering to the LLVM dialect, and changes the
lowering of 'affine.vector_load' and 'affine.vector_store' ops to the new vector
ops. The lowering of Vector transfer reads/writes will be implemented in the
future, probably as an independent pass. The API of 'vector.maskedload' and
'vector.maskedstore' has also been changed slightly to align it with the
transfer read/write ops and the new vector ops. This will improve reusability
among all these operations. For example, the lowering of 'vector.load',
'vector.store', 'vector.maskedload' and 'vector.maskedstore' to the LLVM dialect
is implemented with a single template conversion pattern.

[1] https://llvm.discourse.group/t/memref-type-and-data-layout/
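
For example (sketch): a vector load/store on a memref with a scalar element type:

```
%v = vector.load %mem[%i] : memref<100xf32>, vector<8xf32>
vector.store %v, %mem[%i] : memref<100xf32>, vector<8xf32>
```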

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96185
2021-02-12 20:48:37 +02:00
Alex Zinenko
ba87f99168 [mlir] make vector to llvm conversion truly partial
Historically, the Vector to LLVM dialect conversion subsumed the Standard to
LLVM dialect conversion patterns. This was necessary because the conversion
infrastructure did not have sufficient support for reconciling type
conversions. This support is now available. Only keep the patterns related to
the Vector dialect in the Vector to LLVM conversion and require type casts
operations to be inserted if necessary. These casts will be removed by
following conversions if possible. Update integration tests to also run the
Standard to LLVM conversion.

There is a significant amount of test churn, which is due to (a) unnecessarily
strict tests in VectorToLLVM and (b) many patterns actually targeting Standard
dialect ops instead of LLVM dialect ops leading to tests actually exercising a
Vector->Standard->LLVM conversion. This churn is a good illustration of the
reason to make the conversion partial: now the tests only check the code in the
Vector to LLVM conversion and will not be randomly broken by changes in
Standard to LLVM conversion.

Arguably, it may be possible to extract Vector to Standard patterns into a
separate pass, but given the ongoing splitting of the Standard dialect, such
pass will be short-lived and will require further refactoring.

Depends On D95626

Reviewed By: nicolasvasilache, aartbik

Differential Revision: https://reviews.llvm.org/D95685
2021-02-04 11:33:24 +01:00
Diego Caballero
cf5c517c05 [mlir][Vector] Add lowering to LLVM for vector.bitcast
Add the conversion pattern for vector.bitcast to lower it to
the LLVM Dialect.
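
For example (sketch):

```
// Lowers to an llvm.bitcast between the corresponding LLVM vector types.
%1 = vector.bitcast %0 : vector<4xf32> to vector<2xi64>
```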

Reviewed By: ThomasRaoux, aartbik

Differential Revision: https://reviews.llvm.org/D95579
2021-02-03 01:19:20 +02:00
Alex Zinenko
bd30a796fc [mlir] use built-in vector types instead of LLVM dialect types when possible
Continue the convergence between LLVM dialect and built-in types by using the
built-in vector type whenever possible, that is for fixed vectors of built-in
integers and built-in floats. LLVM dialect vector type is still in use for
pointers, less frequent floating point types that do not have a built-in
equivalent, and scalable vectors. However, the top-level `LLVMVectorType` class
has been removed in favor of free functions capable of inspecting both built-in
and LLVM dialect vector types: `LLVM::getVectorElementType`,
`LLVM::getNumVectorElements` and `LLVM::getFixedVectorType`. Additional work is
necessary to design and implement the extensions to built-in types so as to
remove the `LLVMFixedVectorType` entirely.

Note that the default output format for the built-in vectors does not have
whitespace around the `x` separator, e.g., `vector<4xf32>` as opposed to the
LLVM dialect vector type format that does, e.g., `!llvm.vec<4 x fp128>`. This
required changing the FileCheck patterns in several tests.

Reviewed By: mehdi_amini, silvas

Differential Revision: https://reviews.llvm.org/D94405
2021-01-12 10:04:28 +01:00
Aart Bik
6728af16cf [mlir][vector] modified scatter/gather syntax, pass_thru mandatory
This change makes the scatter/gather syntax more consistent with
the syntax of all the other memory operations in the Vector dialect
(order of types, use of [] for index, etc.). This will make the MLIR
code easier to read. In addition, the pass_thru parameter of the
gather has been made mandatory (there is very little benefit in
using the implicit "undefined" values).

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D94352
2021-01-09 11:41:37 -08:00