Commit Graph

270 Commits

Diego Caballero
42a6ad7bad [mlir][Vector] Fix n-D vector.extract/insert lowering to LLVM (#87591)
The lowering of n-D vector.extract/insert ops to LLVM is not supported,
but if one of these ops accidentally reaches the vector-to-llvm conversion
patterns, we end up with a rather puzzling crash. This PR fixes that
crash by gracefully bailing out in those cases.
2024-04-05 15:01:20 -07:00
Kojo Acquah
cb6ff746e0 [mlir][ArmNeon] Implements LowerVectorToArmNeon Pattern for SMMLA (#81895)
This patch adds the `LowerVectorToArmNeonPattern` patterns to the
ArmNeon dialect.

This pattern inspects `vector.contract` ops that can be 1-1 mapped to an
`arm.neon.smmla` intrinsic. The contract ops must be broken into tiles
whose inputs fit a single smmla op (`2x8xi32` inputs and `2x2xi32`
output). The `vector.contract` inputs must be sign-extended from narrow
types (<=i8) to be converted. If all conditions are met, an smmla op is
inserted, with additional `vector.shape_cast` ops to handle linearizing
the input and output dimensions.
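
For illustration, a tile-sized contraction of roughly this shape is what the
pattern looks for (a hedged sketch; the names, indexing maps, and attribute
spellings are illustrative, not taken from the patch):

```mlir
// Hypothetical sketch of a single smmla-sized tile. %a, %b : vector<2x8xi8>
// and %acc : vector<2x2xi32> are assumed to be defined elsewhere.
%lhs = arith.extsi %a : vector<2x8xi8> to vector<2x8xi32>
%rhs = arith.extsi %b : vector<2x8xi8> to vector<2x8xi32>
%res = vector.contract {
    indexing_maps = [affine_map<(m, n, k) -> (m, k)>,
                     affine_map<(m, n, k) -> (n, k)>,
                     affine_map<(m, n, k) -> (m, n)>],
    iterator_types = ["parallel", "parallel", "reduction"],
    kind = #vector.kind<add>}
  %lhs, %rhs, %acc : vector<2x8xi32>, vector<2x8xi32> into vector<2x2xi32>
```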
2024-03-08 14:50:13 -08:00
Aart Bik
c1b8c6cf41 [mlir][vector][print] do not append newline to printing pure strings (#83213)
Since the vector.print str provides no punctuation control, it is
slightly more flexible to let the client of this operation decide
whether there should be a trailing newline. This allows for printing
like

vector.print str "nse = "
vector.print %nse : index

as

nse = 42
2024-02-28 10:18:21 -08:00
Benjamin Maxwell
a1a6860314 [mlir][VectorOps] Add unrolling for n-D vector.interleave ops (#80967)
This unrolls n-D vector.interleave ops like:

```mlir
vector.interleave %i, %j : vector<6x3xf32>
```

To a sequence of 1-D operations:
```mlir
%i_0 = vector.extract %i[0] 
%j_0 = vector.extract %j[0] 
%res_0 = vector.interleave %i_0, %j_0 : vector<3xf32>
vector.insert %res_0, %result[0] :
// ... repeated x6
```

The 1-D operations can then be directly lowered to LLVM.

Depends on: #80966
2024-02-20 14:33:33 +00:00
Benjamin Maxwell
79ce2c93ae [mlir][VectorOps] Add conversion of 1-D vector.interleave ops to LLVM (#80966)
The 1-D case directly maps to LLVM intrinsics. The n-D case will be
handled by unrolling to 1-D first (in a later patch).

Depends on: #80965
2024-02-13 10:47:33 +00:00
Krzysztof Drewniak
5cfe24eee4 [mlir][Vector] Add nontemporal attribute, mirroring memref (#76752)
Since vector loads and stores from scalar memrefs translate to
llvm.load/store, add the ability to tag said loads and stores as
nontemporal. This mirrors functionality available in memref.load/store.
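
A minimal sketch of what this enables (hypothetical values; the attribute
spelling is assumed to mirror memref.load/store):

```mlir
// %mem and %idx are assumed to be defined elsewhere; the nontemporal tag is
// carried through to the resulting llvm.load/llvm.store.
%v = vector.load %mem[%idx] {nontemporal = true} : memref<128xf32>, vector<4xf32>
vector.store %v, %mem[%idx] {nontemporal = true} : memref<128xf32>, vector<4xf32>
```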
2024-01-09 11:05:20 -06:00
Cullen Rhodes
4db0bd28e8 [mlir][vector][nfc] remove unused template parameter (#75931) 2023-12-20 08:06:25 +00:00
Jakub Kuderski
560564f51c [mlir][vector][gpu] Align minf/maxf reduction kind names with arith (#75901)
This is to avoid confusion when dealing with reduction/combining kinds.
For example, see a recent PR comment:
https://github.com/llvm/llvm-project/pull/75846#discussion_r1430722175.

Previously, they were picked to mostly mirror the names of the llvm
vector reduction intrinsics:
https://llvm.org/docs/LangRef.html#llvm-vector-reduce-fmin-intrinsic. In
isolation, it was not clear if `<maxf>` has `arith.maxnumf` or
`arith.maximumf` semantics. The new reduction kind names map 1:1 to
arith ops, which makes it easier to tell/look up their semantics.
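
For example (a hedged sketch; `maxnumf` is assumed to be the new spelling of
the old `maxf` kind):

```mlir
// The kind name now reads like the arith op it matches (arith.maxnumf).
%r = vector.reduction <maxnumf>, %v : vector<8xf32> into f32
```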

Because both the vector and the gpu dialect depend on the arith dialect,
it's more natural to align names with those in arith than with the
lowering to llvm intrinsics.

Issue: https://github.com/llvm/llvm-project/issues/72354
2023-12-20 00:14:43 -05:00
Benjamin Maxwell
dbb8643333 [mlir][LLVM] Support immargs in LLVM_IntrOpBase intrinsics (#73013)
This extends `LLVM_IntrOpBase` so that it can be passed a list of
`immArgPositions` and a list (of the same length) of `immArgAttrNames`.
`immArgPositions` contains the positions of `immargs` on the LLVM IR
intrinsic, and `immArgAttrNames` maps those to a corresponding MLIR
attribute.

This allows modeling LLVM `immargs` as MLIR attributes, which is the
closest match semantically (and had already been done manually for the
LLVM dialect intrinsics).

This has two upsides:
* It's slightly easier to implement intrinsics with immargs now
(especially if they make use of other features, such as overloads)
* It clearly defines that `immargs` should map to attributes; previously
there was no mention of `immargs` in LLVMOpBase.td, so implementing them
was unclear

This works with other features of the `LLVM_IntrOpBase`, so `immargs`
can be marked as overloaded too (which is used in some intrinsics).

As part of this patch (and to test correctness) existing intrinsics have
been updated to use these new parameters.

This also uncovered a few issues with the
`llvm.intr.vector.insert/extract` intrinsics. First, the argument order
for insert did not match the LLVM intrinsic, and second, both were
missing an `mlirBuilder` (so they failed to import from LLVM IR). This is
corrected with this patch (and a test case added).
2023-11-23 10:12:12 +00:00
Benjamin Maxwell
dff97c1e4c [mlir][ArmSME] Move ArmSME -> intrinsics lowerings to convert-arm-sme-to-llvm pass (#72890)
This gives more flexibility over when these lowerings are performed,
without also lowering unrelated vector ops.

This is an NFC (other than adding a new `-convert-arm-sme-to-llvm` pass)
2023-11-22 13:36:36 +00:00
Christian Ulmann
ceb4dc4477 [MLIR][VectorToLLVM] Remove typed pointer support (#71075)
This commit removes the support for lowering Vector to LLVM dialect with
typed pointers. Typed pointers have been deprecated for a while now and
it's planned to soon remove them from the LLVM dialect.

Related PSA:
https://discourse.llvm.org/t/psa-removal-of-typed-pointers-from-the-llvm-dialect/74502
2023-11-03 11:16:11 +01:00
Benjamin Maxwell
3be3883e6d [mlir][VectorOps] Support string literals in vector.print (#68695)
Printing strings within integration tests is currently quite annoyingly
verbose, and can't be tucked into shared helpers as the types depend on
the length of the string:

```
llvm.mlir.global internal constant @hello_world("Hello, World!\0")

func.func @entry() {
  %0 = llvm.mlir.addressof @hello_world : !llvm.ptr<array<14 x i8>>
  %1 = llvm.mlir.constant(0 : index) : i64
  %2 = llvm.getelementptr %0[%1, %1]
    : (!llvm.ptr<array<14 x i8>>, i64, i64) -> !llvm.ptr<i8>
  llvm.call @printCString(%2) : (!llvm.ptr<i8>) -> ()
  return
}
```

So this patch adds a simple extension to `vector.print` to simplify
this:
```
func.func @entry() {
   // Print a vector of characters ;)
   vector.print str "Hello, World!"
   return
}
```

Most of the logic for this is now shared with `cf.assert` which already
does something similar.

Depends on #68694
2023-10-24 09:34:14 +01:00
Benjamin Maxwell
7bbfd2aec0 [mlir][ArmSVE] Restructure sources to match ArmSME dialect (NFC) (#68399)
This rearranges the Arm SVE dialect to have the same structure as the
Arm SME dialect. So this just moves around some source files and adds an
ArmSVE_IntrOp base class for SVE intrinsics. This makes later changes a
little easier and more consistent with other dialects.
2023-10-09 10:02:55 +01:00
Quinn Dawkins
78c49743c7 [MLIR][Vector] Allow non-default memory spaces in gather/scatter lowerings (#67500)
GPU targets can gather on non-default address spaces (e.g. global), so
this removes the check for the default memory space.
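
A hypothetical example of the kind of op this now permits (memory space `1`
standing in for a GPU global address space; operands assumed defined):

```mlir
// Gather from a memref in a non-default memory space.
%g = vector.gather %base[%c0][%indices], %mask, %passthru
       : memref<64xf32, 1>, vector<4xi32>, vector<4xi1>, vector<4xf32> into vector<4xf32>
```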
2023-09-28 19:20:32 -04:00
Diego Caballero
98f6289a34 [mlir][Vector] Add support for Value indices to vector.extract/insert
`vector.extract/insert` ops only support constant indices. This PR
extends them so that arbitrary values can be used instead.
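
For instance (a sketch using the current assembly syntax; %i is an SSA index
value defined elsewhere):

```mlir
// Mixing a dynamic (SSA value) index with a static one.
%e = vector.extract %v[%i, 0] : f32 from vector<4x8xf32>
%w = vector.insert %e, %v[%i, 0] : f32 into vector<4x8xf32>
```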

This work is part of the RFC: https://discourse.llvm.org/t/rfc-psa-remove-vector-extractelement-and-vector-insertelement-ops-in-favor-of-vector-extract-and-vector-insert-ops

Differential Revision: https://reviews.llvm.org/D155034
2023-09-22 00:39:32 +00:00
Nicolas Vasilache
1b8b556443 [mlir][Vector] Add fastmath flags to vector.reduction (#66905)
This revision pipes fastmath attribute support through the
vector.reduction op. This seemingly simple first step already requires
quite some genuflexions: file and builder reorganization. In the
process, we retire the boolean reassoc flag deep in the LLVM dialect
builders and just use the fastmath attribute.
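
A minimal sketch of the end state (assuming the flags print in the usual
arith-style `fastmath<...>` form):

```mlir
// Hypothetical example: a reduction carrying fastmath flags.
%r = vector.reduction <add>, %v fastmath<reassoc,nnan> : vector<16xf32> into f32
```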

During conversions, templated builders for predicated intrinsics are
partially cleaned up. In the future, to finalize the cleanups, one
should consider adding fastmath to the VPIntrinsic ops.
2023-09-20 16:57:20 +02:00
Daniil Dudkin
8f5d519458 [mlir][vector] Implement Workaround Lowerings for Masked fm**imum Reductions
This patch is part of a larger initiative aimed at fixing floating-point `max` and `min` operations in MLIR: https://discourse.llvm.org/t/rfc-fix-floating-point-max-and-min-operations-in-mlir/72671.

Within LLVM, there are no masked reduction counterparts for vector reductions such as `fmaximum` and `fminimum`.
More information can be found here: https://github.com/llvm/llvm-project/issues/64940#issuecomment-1690694156.

To address this issue in MLIR, where we need to generate appropriate lowerings for these cases, we employ regular non-masked intrinsics.
However, we modify the input vector using the `arith.select` operation to effectively deactivate undesired elements using a "neutral mask value".
The neutral mask value is the smallest possible value for the `fmaximum` reduction and the largest possible value for the `fminimum` reduction.
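
Roughly, the generated pattern looks like this (a hedged sketch for an `fmaximum`
reduction, where -inf acts as the neutral value; names are illustrative):

```mlir
// Masked-off lanes are replaced with the neutral value, then a regular
// (non-masked) reduction is emitted.
%neutral = arith.constant dense<0xFF800000> : vector<8xf32>  // -inf as f32
%selected = arith.select %mask, %v, %neutral : vector<8xi1>, vector<8xf32>
%r = vector.reduction <maximumf>, %selected : vector<8xf32> into f32
```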

Depends on D158618

Reviewed By: dcaballe

Differential Revision: https://reviews.llvm.org/D158773
2023-09-13 22:49:08 +00:00
Daniil Dudkin
709b27427b [mlir][vector] Bring back maxf/minf reductions
This patch is part of a larger initiative aimed at fixing floating-point `max` and `min` operations in MLIR: https://discourse.llvm.org/t/rfc-fix-floating-point-max-and-min-operations-in-mlir/72671.

In line with the mentioned RFC, this patch  tackles tasks 2.3 and 2.4.
It adds LLVM conversions for the `maxf`/`minf` reductions to the non-NaN-propagating LLVM intrinsics.

Depends on D158618

Reviewed By: dcaballe

Differential Revision: https://reviews.llvm.org/D158659
2023-09-13 22:49:07 +00:00
Daniil Dudkin
4a831250b8 [mlir][vector] Rename vector reductions: maxf → maximumf, minf → minimumf
This patch is part of a larger initiative aimed at fixing floating-point `max` and `min` operations in MLIR: https://discourse.llvm.org/t/rfc-fix-floating-point-max-and-min-operations-in-mlir/72671.

Here, we are addressing task 2.1 from the plan, which involves renaming the vector reductions to align with the semantics of the corresponding LLVM intrinsics.

Reviewed By: dcaballe

Differential Revision: https://reviews.llvm.org/D158618
2023-09-13 22:49:07 +00:00
Krzysztof Drewniak
df852599f3 [mlir] Split up VectorToLLVM pass
Currently, the VectorToLLVM patterns are built into a library along
with the corresponding pass, which also pulls in all the
platform-specific vector dialects (like AMXDialect) to apply all the
vector to LLVM conversions.

This causes dependency bloat when writing libraries - for example the
GPU to LLVM passes, which use the vector to LLVM patterns, don't need
the X86Vector dialect to be present at all.

This commit partitions the library into VectorToLLVM and
VectorToLLVMPass, where the latter pulls in all the other vector
transformations.

Reviewed By: nicolasvasilache, mehdi_amini

Differential Revision: https://reviews.llvm.org/D158287
2023-09-13 16:09:56 +00:00
Matthias Springer
ce254598b7 [mlir][Conversion] Store const type converter in ConversionPattern
ConversionPatterns do not (and should not) modify the type converter that they are using.

* Make `ConversionPattern::typeConverter` const.
* Make member functions of the `LLVMTypeConverter` const.
* Conversion patterns take a const type converter.
* Various helper functions (that are called from patterns) now also take a const type converter.

Differential Revision: https://reviews.llvm.org/D157601
2023-08-14 09:03:11 +02:00
Benjamin Maxwell
f36e909da0 [mlir][VectorOps] Use SCF for vector.print and allow scalable vectors
Reland of the original patch after updating the Python binding tests,
a few CUDA/GPU MLIR tests, and ensuring the assembly format is
round-trippable.

This patch splits the lowering of vector.print into first converting
an n-D print into a loop of scalar prints of the elements, then a second
pass that converts those scalar prints into the runtime calls. The
former is done in VectorToSCF and the latter in VectorToLLVM.

The main reason for this is to allow printing scalable vector types,
which are not possible to fully unroll at compile time, though this
also avoids fully unrolling very large vectors.

To allow VectorToSCF to add the necessary punctuation between vectors
and elements, a "punctuation" attribute has been added to vector.print.
This abstracts calling the runtime functions such as printNewline(),
without leaking the LLVM details into the higher abstraction levels.
For example:

  vector.print punctuation <comma>

lowers to

  llvm.call @printComma() : () -> ()

The output format and runtime functions remain the same, which avoids
the need to alter a large number of tests (aside from the pipelines).

Reviewed By: awarzynski, c-rhodes, aartbik

Differential Revision: https://reviews.llvm.org/D156519
2023-08-11 09:29:54 +00:00
Mehdi Amini
1b272d21c8 Revert "[mlir][VectorOps] Use SCF for vector.print and allow scalable vectors"
This reverts commit 490dae26cb.

The bot is broken; it seems there is an ambiguity problem in the parser.
2023-08-09 19:37:01 -07:00
Benjamin Maxwell
490dae26cb [mlir][VectorOps] Use SCF for vector.print and allow scalable vectors
Reland of the original patch after updating the Python binding tests and
a few CUDA/GPU MLIR tests.

This patch splits the lowering of vector.print into first converting
an n-D print into a loop of scalar prints of the elements, then a second
pass that converts those scalar prints into the runtime calls. The
former is done in VectorToSCF and the latter in VectorToLLVM.

The main reason for this is to allow printing scalable vector types,
which are not possible to fully unroll at compile time, though this
also avoids fully unrolling very large vectors.

To allow VectorToSCF to add the necessary punctuation between vectors
and elements, a "punctuation" attribute has been added to vector.print.
This abstracts calling the runtime functions such as printNewline(),
without leaking the LLVM details into the higher abstraction levels.
For example:

  vector.print <comma>

lowers to

  llvm.call @printComma() : () -> ()

The output format and runtime functions remain the same, which avoids
the need to alter a large number of tests (aside from the pipelines).

Reviewed By: awarzynski, c-rhodes, aartbik

Differential Revision: https://reviews.llvm.org/D156519
2023-08-09 11:47:18 +00:00
Benjamin Maxwell
b160442dd2 Revert "[mlir][VectorOps] Use SCF for vector.print and allow scalable vectors"
This reverts commit 3875804a07.

This caused some test failures for the MLIR python bindings. Reverting
until those are addressed.
2023-08-09 09:54:05 +00:00
Benjamin Maxwell
3875804a07 [mlir][VectorOps] Use SCF for vector.print and allow scalable vectors
This patch splits the lowering of vector.print into first converting
an n-D print into a loop of scalar prints of the elements, then a second
pass that converts those scalar prints into the runtime calls. The
former is done in VectorToSCF and the latter in VectorToLLVM.

The main reason for this is to allow printing scalable vector types,
which are not possible to fully unroll at compile time, though this
also avoids fully unrolling very large vectors.

To allow VectorToSCF to add the necessary punctuation between vectors
and elements, a "punctuation" attribute has been added to vector.print.
This abstracts calling the runtime functions such as printNewline(),
without leaking the LLVM details into the higher abstraction levels.
For example:

  vector.print <comma>

lowers to

  llvm.call @printComma() : () -> ()

The output format and runtime functions remain the same, which avoids
the need to alter a large number of tests (aside from the pipelines).

Reviewed By: awarzynski, c-rhodes, aartbik

Differential Revision: https://reviews.llvm.org/D156519
2023-08-09 09:38:05 +00:00
Daniil Dudkin
dad9de0ae5 [mlir][vector] Improve lowering to LLVM for minf, maxf reductions
This patch improves the lowering by changing the target LLVM intrinsics
from `reduce.fmax` and `reduce.fmin`,
which have different semantics for handling NaN,
to the `reduce.fmaximum` and `reduce.fminimum` ones.

Fixes #63969

Depends on D155869

Reviewed By: dcaballe

Differential Revision: https://reviews.llvm.org/D155877
2023-08-02 20:26:59 +03:00
Matthias Springer
16b75cd2bb [mlir][vector] Use DenseI64ArrayAttr for ExtractOp/InsertOp positions
`DenseI64ArrayAttr` provides a better API than `I64ArrayAttr`. E.g., accessors returning `ArrayRef<int64_t>` (instead of `ArrayAttr`) are generated.

Differential Revision: https://reviews.llvm.org/D156684
2023-07-31 15:25:37 +02:00
Andrzej Warzynski
3fa5ee67ba [mlir][ArmSME] Introduce custom TypeConverter for ArmSME
At the moment, SME-to-LLVM lowerings rely entirely on
`LLVMTypeConverter`. This patch introduces a dedicated `TypeConverter`
that inherits from `LLVMTypeConverter` (it will also be used when
lowering ArmSME Ops to LLVM).

The new type converter merely disables lowerings for `VectorType` to
prevent 2-D scalable vectors (common in the context of ArmSME), e.g.

   `vector<[16]x[16]xi8>`,

from entering the LLVM type converter. LLVM does not support arrays of
scalable vectors, hence the need for specialisation. In the case of SME,
such types are effectively eliminated when emitting the LLVM IR
intrinsics for SME.

Differential Revision: https://reviews.llvm.org/D155365
2023-07-18 09:35:32 +00:00
Andrzej Warzynski
447bb5bee4 [mlir][ArmSME] Introduce new lowering layer (Vector -> ArmSME)
At the moment, the lowering from the Vector dialect to SME looks like
this:

  * Vector --> SME LLVM IR intrinsics

This patch introduces a new lowering layer between the Vector dialect
and the Arm SME extension:

  * Vector --> ArmSME dialect (custom Ops) --> SME LLVM IR intrinsics.

This is motivated by 2 considerations:
1. Storing `ZA` to memory (e.g. `vector.transfer_write`) requires an
   `scf.for` loop over all rows of `ZA`. Similar logic will apply to
   "load to ZA from memory". This is a rather complex transformation and
   a custom Op seems justified.
2. As discussed in [1], we need to prevent the LLVM type converter from
   having to convert types unsupported in LLVM, e.g.
   `vector<[16]x[16]xi8>`. A dedicated abstraction layer with custom Ops
   opens a path to some fine tuning (e.g. custom type converters) that
   will allow us to avoid this.

To facilitate this change, two new custom SME Ops are introduced:

  * `TileStoreOp`, and
  * `ZeroOp`.

Note that no new functionality is added - these Ops merely model what's
already supported. In particular, the following tile size is assumed
(dimension and element size are fixed):

  * `vector<[16]x[16]xi8>`

The new lowering layer is introduced via a conversion pass between the
Vector and the SME dialects. You can use the `-convert-vector-to-sme`
flag to run it. The following function:
```
func.func @example(%arg0 : memref<?x?xi8>) {
  // (...)
  %cst = arith.constant dense<0> : vector<[16]x[16]xi8>
  vector.transfer_write %cst, %arg0 : vector<[16]x[16]xi8>, memref<?x?xi8>
  return
}
```
would be lowered to:
```
  func.func @example(%arg0: memref<?x?xi8>) {
    // (...)
    %0 = arm_sme.zero : vector<[16]x[16]xi8>
    arm_sme.tile_store %arg0[%c0, %c0], %0 : memref<?x?xi8>, vector<[16]x[16]xi8>
    return
  }
```

Later, a mechanism will be introduced to guarantee that `arm_sme.zero`
and `arm_sme.tile_store` operate on the same virtual tile. For `i8`
elements this is not required as there is only one tile.

In order to lower the above output to LLVM, use
  * `-convert-vector-to-llvm="enable-arm-sme"`.

[1] https://github.com/openxla/iree/issues/14294

Reviewed By: WanderAway

Differential Revision: https://reviews.llvm.org/D154867
2023-07-18 08:04:59 +00:00
Matthias Springer
b1d2687501 [mlir][IR] Remove duplicate isLastMemrefDimUnitStride functions
This function is duplicated in various dialects.

Differential Revision: https://reviews.llvm.org/D155462
2023-07-17 16:31:04 +02:00
Cullen Rhodes
564713c471 [mlir][ArmSME] Add basic lowering of vector.transfer_write to zero
This patch adds support for lowering a 'vector.transfer_write' of zeroes
and type 'vector<[16]x[16]xi8>' to the SME 'zero {za}' instruction [1],
which zeroes the entire accumulator, and then writing it out to memory
with the 'str' instruction [2].

This contributes to supporting a path from 'linalg.fill' to SME.

[1] https://developer.arm.com/documentation/ddi0602/2022-06/SME-Instructions/ZERO--Zero-a-list-of-64-bit-element-ZA-tiles-
[2] https://developer.arm.com/documentation/ddi0602/2022-06/SME-Instructions/STR--Store-vector-from-ZA-array-

Reviewed By: awarzynski, dcaballe, WanderAway

Differential Revision: https://reviews.llvm.org/D152508
2023-07-03 10:18:43 +00:00
Andrzej Warzynski
f22af204ed [mlir][VectorType] Remove numScalableDims from the vector type
This is a follow-up of https://reviews.llvm.org/D153372 in which
`numScalableDims` (single integer) was effectively replaced with
`isScalableDim` bitmask.

This change is a part of a larger effort to enable scalable
vectorisation in Linalg. See this RFC for more context:
  * https://discourse.llvm.org/t/rfc-scalable-vectorisation-in-linalg/

Differential Revision: https://reviews.llvm.org/D153412
2023-06-28 13:53:45 +01:00
Cullen Rhodes
65305aeab9 [mlir][ArmSME] Insert intrinsics to enable/disable ZA
This patch adds two LLVM intrinsics to the ArmSME dialect:

  * llvm.aarch64.sme.za.enable
  * llvm.aarch64.sme.za.disable

for enabling the ZA storage array [1], as well as patterns for inserting
them during legalization to LLVM at the start and end of functions if
the function has the 'arm_za' attribute (D152695).
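
Schematically, the inserted intrinsics bracket the function body (a sketch
assuming the ArmSME ops are spelled `arm_sme.intr.za.enable`/`.disable`):

```mlir
// Hypothetical sketch of a legalized function carrying the 'arm_za' attribute.
"arm_sme.intr.za.enable"() : () -> ()
// ... function body using ZA ...
"arm_sme.intr.za.disable"() : () -> ()
```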

In the future ZA should probably be automatically enabled/disabled when
lowering from vector to SME, but this should be sufficient for now at
least until we have patterns lowering to SME instructions that use ZA.

N.B. The backend function attribute 'aarch64_pstate_za_new' can be used
to manage ZA state (as was originally tried in D152694), but it emits calls
to the following SME support routines [2] for the lazy-save mechanism
[3]:

  * __arm_tpidr2_restore
  * __arm_tpidr2_save

These will soon be added to compiler-rt but there's currently no public
implementation, and using this attribute would introduce an MLIR
dependency on compiler-rt. Furthermore, this mechanism is for routines
with ZA enabled calling other routines with it also enabled. We can
choose not to enable ZA in the compiler when this is the case.

Depends on D152695

[1] https://developer.arm.com/documentation/ddi0616/aa
[2] https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst#sme-support-routines
[3] https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst#the-za-lazy-saving-scheme

Reviewed By: awarzynski, dcaballe

Differential Revision: https://reviews.llvm.org/D153050
2023-06-16 09:40:48 +00:00
Cullen Rhodes
1e41a29d73 Revert "[mlir][ArmSME] Add initial dialect with basic lowering of vector.transfer write to zero"
Apologies, I shouldn't have committed this; it needs to wait until the
planned MLIR ODM:

  https://discourse.llvm.org/t/rfc-creating-a-armsme-dialect/67208/76

This reverts commit a48fe89885.
2023-06-14 09:03:10 +00:00
Cullen Rhodes
a48fe89885 [mlir][ArmSME] Add initial dialect with basic lowering of vector.transfer write to zero
This patch adds support for lowering a `vector.transfer_write` of zeroes
and type `vector<[16]x[16]xi8>` to the SME `zero {za}` instruction [1],
which zeroes the entire accumulator.

This contributes to supporting a path from `linalg.fill` to SME.

[1] https://developer.arm.com/documentation/ddi0602/2022-06/SME-Instructions/ZERO--Zero-a-list-of-64-bit-element-ZA-tiles-

Reviewed By: awarzynski, dcaballe

Differential Revision: https://reviews.llvm.org/D152508
2023-06-14 08:46:53 +00:00
Tres Popp
5550c82189 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
machinery, in addition to defining methods with the same names.
This change begins the migration from uses of the methods to the
corresponding function calls, which has been decided to be more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so I waste less time if it's
reverted, and so the first mass change is more clear as an example to
other teams that will need to follow similar steps.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.
4. Some changes have been deleted for the following reasons:
   - Some files had a variable also named cast
   - Some files had not included a header file that defines the cast
     functions
   - Some files are definitions of the classes that have the casting
     methods, so the code still refers to the method instead of the
     function without adding a prefix or removing the method declaration
     at the same time.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
            mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
            mlir/lib/**/IR/\
            mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
            mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
            mlir/test/lib/Dialect/Test/TestTypes.cpp\
            mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
            mlir/test/lib/Dialect/Test/TestAttributes.cpp\
            mlir/unittests/TableGen/EnumsGenTest.cpp\
            mlir/test/python/lib/PythonTestCAPI.cpp\
            mlir/include/mlir/IR/
```

Differential Revision: https://reviews.llvm.org/D150123
2023-05-12 11:21:25 +02:00
Nicolas Vasilache
2bc4c3e920 [mlir][Vector] NFC - Reorganize vector patterns
Vector dialect patterns have grown enormously in the past year to a point where they are now impenetrable.
Start reorganizing them towards finer-grained control.

Differential Revision: https://reviews.llvm.org/D146736
2023-03-23 11:30:25 -07:00
Adrian Kuegel
4eb1f1fab3 [mlir] Apply ClangTidyLegacy findings (NFC)
- 'override' is redundant since the function is already declared 'final'.
- 'virtual' is redundant since the function is already declared 'override'.
2023-03-08 10:39:21 +01:00
Aart Bik
657f60a07b [mlir][vector] add support for printing f16 and bf16
Love it or hate it, the vector.print operation was the very
first operation that actually made "end-to-end" CHECK integration
testing possible for MLIR. This revision adds support for
the (until recently) less common but important floating-point
types f16 and bf16.

This will become useful for accelerator-specific testing (e.g. NVIDIA GPUs).
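
For example (a small hypothetical test snippet):

```mlir
%h = arith.constant dense<1.5> : vector<4xf16>
vector.print %h : vector<4xf16>
%b = arith.constant dense<2.5> : vector<4xbf16>
vector.print %b : vector<4xbf16>
```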

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D145207
2023-03-03 08:58:25 -08:00
Mehdi Amini
6904998829 Fix crash in ConvertVectorToLLVM.cpp pattern
Fixes #61094
2023-03-03 11:16:43 +01:00
Diego Caballero
c339f9e1c3 [mlir][Vector] Support masking for more contraction flavors
This patch adds masking support for more contraction flavors including those
with any combiner operation (add, mul, min, max, and, or, etc.) and
regular matmul contractions.

Combiner operations that perform vertical reductions (and are,
therefore, not represented with a horizontal reduction operation) can be
executed unmasked. However, the previous value of the accumulator must
be propagated for lanes that shouldn't accumulate. We achieve this by
introducing a select operation after the accumulator to choose between
the combined and the previous accumulator value. This design decision is
made to avoid introducing masking support to all the arithmetic and
logical operations in the Arith dialect. VP intrinsics do not support
pass-thru values either, so we would have to generate the same sequence
when lowering to LLVM. The op + select pattern is peepholed by backends
with native masking support for those operations.
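
A hedged sketch of the op + select idiom for an add combiner (names are
illustrative, not taken from the patch):

```mlir
// The combiner runs unmasked; the select keeps the previous accumulator value
// in lanes that should not accumulate.
%prod = arith.mulf %lhs, %rhs : vector<8xf32>
%sum  = arith.addf %acc, %prod : vector<8xf32>
%res  = arith.select %mask, %sum, %acc : vector<8xi1>, vector<8xf32>
```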

Consequently, this patch removes masking support from the vector.fma
operation to follow the same approach for all the combiner operations.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D144239
2023-02-22 01:47:44 +00:00
Lei Zhang
a1aad28d29 [mlir][vector] NFC: Improve vector type accessor methods
Plain `getVectorType()` can be quite confusing and error-prone
given that, well, vector ops always work on vector types, and
it can commonly involve both source and result vectors. So this
commit makes various such accessor methods to be explicit w.r.t.
source or result vectors.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D144159
2023-02-16 04:08:33 +00:00
Diego Caballero
9452356ddc [mlir][Vector] Add support for masked vector.contract
This patch adds support for masking vector.contract ops with the
vector.mask approach. This also includes the lowering of vector.contract
through the vector.outerproduct path to LLVM. For now, this only adds
support for one of the many potential flavors of
vector.contract/vector.outerproduct, but unsupported cases will fail
gracefully.
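
For reference, the masked form this patch handles looks roughly like this
(a hypothetical matvec-shaped example; the mask covers the 8x16 iteration
space, and all operands are assumed defined elsewhere):

```mlir
%res = vector.mask %mask {
  vector.contract {
      indexing_maps = [affine_map<(m, k) -> (m, k)>,
                       affine_map<(m, k) -> (k)>,
                       affine_map<(m, k) -> (m)>],
      iterator_types = ["parallel", "reduction"],
      kind = #vector.kind<add>}
    %A, %x, %acc : vector<8x16xf32>, vector<16xf32> into vector<8xf32>
} : vector<8x16xi1> -> vector<8xf32>
```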

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D143965
2023-02-15 06:10:22 +00:00
Diego Caballero
e9b82a5c4f [mlir][Vector] Add LLVM lowering for masked reductions
This patch adds the conversion patterns to lower masked reduction
operations to the corresponding vp intrinsics in LLVM.
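
A sketch of the input form (hypothetical example; it is lowered to the
matching vp reduction intrinsic):

```mlir
%r = vector.mask %mask {
  vector.reduction <add>, %v : vector<8xf32> into f32
} : vector<8xi1> -> f32
```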

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D142177
2023-02-15 06:10:11 +00:00
Markus Böck
4a2d4588a0 [mlir][Vector] add pass option to convert-vector-to-llvm to emit opaque pointers
Part of https://discourse.llvm.org/t/rfc-switching-the-llvm-dialect-and-dialect-lowerings-to-opaque-pointers/68179

This patch allows the use of `convert-vector-to-llvm` with LLVM opaque pointer types. The changes required were relatively minor:
* there were a few GEPs and loads that required specifying the result or base type
* a few bitcasts, especially when interacting with memref, must not be emitted when using opaque pointers

Differential Revision: https://reviews.llvm.org/D143853
2023-02-13 11:49:20 +01:00
Markus Böck
cd4ca2d7f9 [mlir] Port Conversion Passes to LLVM to use TableGen generated constructors and options
See https://github.com/llvm/llvm-project/issues/57475 for more context.

Using auto-generated constructors and options has significant advantages:
* It forces a uniform style and expectation for consuming a pass
* It allows one to very easily add, remove, or change the options of a pass by simply making the changes in TableGen
* It's less code

This patch in particular ports all the conversion passes which lower to LLVM to use the auto-generated constructors and options. For the most part, care was taken so that the auto-generated constructor functions have the same names as they previously did. Only the following slight breaking changes (which I consider worth the churn) have been made:
* `mlir::cf::createConvertControlFlowToLLVMPass` has been moved to the `mlir` namespace. This is consistent with basically all conversion passes
* `createGpuToLLVMConversionPass` now takes a proper options struct array for its pass options. The pass options are now also autogenerated.
* `LowerVectorToLLVMOptions` has been replaced by the autogenerated `ConvertVectorToLLVMPassOptions` which is automatically kept up to date by TableGen
* I had to move one function in the GPU to LLVM lowering as it is used as default value for an option.
* All passes that previously returned `unique_ptr<OperationPass<...>>` now simply return `unique_ptr<Pass>`

Differential Revision: https://reviews.llvm.org/D143773
2023-02-10 20:47:18 +01:00
Krzysztof Drewniak
499abb243c Add generic type attribute mapping infrastructure, use it in GpuToX
Remapping memory spaces is a function often needed in type
conversions, most often when going to LLVM or to/from SPIR-V (a future
commit), and it is possible that such remappings may become more
common in the future as dialects take advantage of the more generic
memory space infrastructure.

Currently, memory space remappings are handled by running a
special-purpose conversion pass before the main conversion that
changes the address space attributes. In this commit, this approach is
replaced by adding a notion of type attribute conversions to the
TypeConverter, which is then used to convert memory space attributes.

Then, we use this infrastructure throughout the *ToLLVM conversions.
This has the advantage of loosening the requirements on the inputs to
those passes from "all address spaces must be integers" to "all
memory spaces must be convertible to integer spaces", a looser
requirement that reduces the coupling between portions of MLIR.

On top of that, this change leads to the removal of most of the calls
to getMemorySpaceAsInt(), bringing us closer to removing it.

(A rework of the SPIR-V conversions to use this new system will be in
a follow-up commit.)

As a note, one long-term motivation for this change is that I would
eventually like to add an allocaMemorySpace key to MLIR data layouts
and then call getMemRefAddressSpace(allocaMemorySpace) in the
relevant *ToLLVM in order to ensure all alloca()s, whether incoming or
produces during the LLVM lowering, have the correct address space for
a given target.

I expect that the type attribute conversion system may be useful in
other contexts.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D142159
2023-02-09 18:00:46 +00:00
Kazu Hirata
a9a1b45b3c [VectorToLLVM] Use std::optional in ConvertVectorToLLVM.cpp (NFC)
This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-10 10:43:39 -08:00
Kazu Hirata
1a36588ec6 [mlir] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-03 18:50:27 -08:00