Refactors the following pairs of helper hooks:
* `dynamicallyInsertSubVector` + `staticallyInsertSubVector`
* `dynamicallyExtractSubVector` + `staticallyExtractSubVector`
These hooks are very similar, so I have unified the variable names and
various conditions to make the actual differences clearer.
Propagate `Extract(Elementwise(...))` -> `Elementwise(Extract...)`.
Currently limited to the case where the extract is the single use of the
elementwise op, to avoid introducing additional elementwise ops.
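As a rough sketch (my reconstruction, not the committed code; `SinkExtractThroughElementwise` is a made-up name, though `OpTrait::hasElementwiseMappableTraits` is the real MLIR utility):
```cpp
#include "mlir/Dialect/Vector/IR/VectorOps.h"
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

struct SinkExtractThroughElementwise
    : public OpRewritePattern<vector::ExtractOp> {
  using OpRewritePattern::OpRewritePattern;

  LogicalResult matchAndRewrite(vector::ExtractOp extractOp,
                                PatternRewriter &rewriter) const override {
    Operation *eltwise = extractOp.getVector().getDefiningOp();
    if (!eltwise || !OpTrait::hasElementwiseMappableTraits(eltwise))
      return failure();
    // Only fire when the extract is the sole user, so the elementwise op is
    // moved rather than duplicated.
    if (!eltwise->hasOneUse())
      return failure();

    // Extract the same position from every operand, then recreate the
    // elementwise op on the (smaller) extracted values.
    Location loc = extractOp.getLoc();
    SmallVector<Value> newOperands;
    for (Value operand : eltwise->getOperands())
      newOperands.push_back(rewriter.create<vector::ExtractOp>(
          loc, operand, extractOp.getMixedPosition()));
    Operation *newOp = rewriter.clone(*eltwise);
    newOp->setOperands(newOperands);
    newOp->getResult(0).setType(extractOp.getResult().getType());
    rewriter.replaceOp(extractOp, newOp->getResults());
    return success();
  }
};
```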
A number of places in our codebase special-case the use of
extractelement/insertelement for 0-D vectors, because extract/insert did
not support 0-D vectors previously. Since insert/extract support 0-D
vectors now, use them instead of special casing.
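For illustration, the kind of special casing that can now be dropped (a hypothetical helper, not a quote from the patch):
```cpp
#include "mlir/Dialect/Vector/IR/VectorOps.h"
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

// Hypothetical helper: `indices` is empty for a 0-D vector.
static Value extractScalar(PatternRewriter &rewriter, Location loc, Value vec,
                           ArrayRef<int64_t> indices) {
  // Before, 0-D vectors needed a special case:
  //   if (cast<VectorType>(vec.getType()).getRank() == 0)
  //     return rewriter.create<vector::ExtractElementOp>(loc, vec);
  // Now vector.extract accepts an empty position list for 0-D vectors:
  return rewriter.create<vector::ExtractOp>(loc, vec, indices);
}
```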
This patch decouples the unrolling of vector.gather from the lowering of
vector.gather to llvm.masked.gather.
This is consistent with how vector.load, vector.store,
vector.maskedload, vector.maskedstore lower to LLVM.
Some interesting test changes from this patch:
- Tests for lowering 2D vector.gather directly to LLVM are deleted. This is
consistent with other memory load/store ops.
- There are still tests for 2D vector.gather, but the constant mask for
these tests is modified. This is because, with the updated lowering, one
of the unrolled vector.gather ops disappears because it is masked off (also
demonstrating why this is a better lowering path).
Overall, this makes vector.gather take the same consistent path for
lowering to LLVM as other load/store ops.
Discourse Discussion:
https://discourse.llvm.org/t/rfc-improving-gather-codegen-for-vector-dialect/85011/13
This PR refactors `alignedConversionPrecondition` from
VectorEmulateNarrowType.cpp and adds new helper hooks.
**Update `alignedConversionPrecondition` (1)**
This method doesn't require the vector type for the "container" argument; the
underlying element type is sufficient. The corresponding argument has been
renamed to `containerTy` - this is meant to be the multi-byte container element
type (`i8`, `i16`, `i32`, etc.). With this change, the updated invocations of
`alignedConversionPrecondition` (e.g. in `RewriteAlignedSubByteIntExt`) make it
clear that the container element type is assumed to be `i8`.
**Update `alignedConversionPrecondition` (2)**
The final check in `alignedConversionPrecondition` has been replaced with a new
helper method, `isSubByteVecFittable`. This helper hook is now also re-used in
`ConvertVectorTransferRead` (to improve code re-use).
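For reference, a sketch of what `isSubByteVecFittable` checks (the name is from this PR; the body below is my illustrative reconstruction, not the exact implementation):
```cpp
#include "mlir/IR/BuiltinTypes.h"
using namespace mlir;

// Can a sub-byte vector be repacked exactly into whole container elements
// (e.g. i8)? Illustrative reconstruction only.
static bool isSubByteVecFittable(VectorType subByteVecTy, Type containerTy) {
  unsigned subByteBits = subByteVecTy.getElementTypeBitWidth();
  unsigned containerBits = containerTy.getIntOrFloatBitWidth();
  // Per-element alignment: a container element must pack a whole number of
  // sub-byte elements.
  if (containerBits % subByteBits != 0)
    return false;
  // Total-size alignment: the vector must fill container elements exactly.
  return (subByteVecTy.getNumElements() * subByteBits) % containerBits == 0;
}
```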
**Other updates**
Extended + unified comments.
**Implements**: https://github.com/llvm/llvm-project/issues/123630
The `vector.contract` op on two matrices A and B will be lowered to
individual dot products of each row and column of A and B respectively.
The existing lowering will extract each column of B for each row of A,
which leads to multiple values in the IR representing the same columns
of B.
This PR makes changes to the `ContractOpToDotLowering` to make sure that
the columns of B are only ever extracted once, so that the SSA values
representing the extracted columns are re-used in the IR for later
dot products.
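The gist of the change, as a hedged sketch (variable and helper names below are mine, not the actual `ContractOpToDotLowering` code):
```cpp
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Dialect/Vector/IR/VectorOps.h"
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

// Hoist the column extractions out of the row loop so that each column of B
// becomes a single SSA value reused by every dot product.
static void emitDotProducts(PatternRewriter &rewriter, Location loc, Value lhs,
                            Value transposedRhs, int64_t numRows,
                            int64_t numCols) {
  SmallVector<Value> colsOfB;
  for (int64_t c = 0; c < numCols; ++c)
    colsOfB.push_back(rewriter.create<vector::ExtractOp>(loc, transposedRhs, c));
  for (int64_t r = 0; r < numRows; ++r) {
    Value rowOfA = rewriter.create<vector::ExtractOp>(loc, lhs, r);
    for (int64_t c = 0; c < numCols; ++c) {
      // row . col = reduce_add(row * col). Previously, colsOfB[c] was
      // re-extracted here for every row of A.
      Value mul = rewriter.create<arith::MulFOp>(loc, rowOfA, colsOfB[c]);
      rewriter.create<vector::ReductionOp>(loc, vector::CombiningKind::ADD,
                                           mul);
    }
  }
}
```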
I have updated the existing vector-contract-to-dot-transforms test.
The `VectorTransformsOptions` on the `ConvertVectorToLLVMPass` is
currently represented as a struct, which makes it not serialisable. This
means a pass pipeline that contains this pass cannot be represented in
textual form, which breaks reproducer generation and options such as
`--dump-pass-pipeline`.
This PR expands the `VectorTransformsOptions` struct into the two
options that are actually used by the Pass' patterns:
`vector-contract-lowering` and `vector-transpose-lowering`. The other
options present in `VectorTransformsOptions` are not used by any patterns
in this pass.
Additionally, I have changed some interfaces to take only these specific
options rather than the full options struct since, again, the vector
contract and transpose lowering patterns each need only their respective
option.
Finally, I have added a simple lit test that just prints the pass
pipeline using `--dump-pass-pipeline` to ensure the options on this pass
remain serialisable.
Fixes #129046
Deprecate the `match` and `rewrite` functions. They mainly exist for
historic reasons. This PR also updates all remaining uses in the MLIR
codebase.
This is addressing a
[comment](https://github.com/llvm/llvm-project/pull/129861#pullrequestreview-2662696084)
on an earlier PR.
Note for LLVM integration: `SplitMatchAndRewrite` will be deleted soon,
update your patterns to use `matchAndRewrite` instead of separate
`match` / `rewrite`.
---------
Co-authored-by: Jakub Kuderski <jakub@nod-labs.com>
The vast majority of rewrite / conversion patterns use a combined
`matchAndRewrite` instead of separate `match` and `rewrite` functions.
This PR optimizes the code base for the most common case where users
implement a combined `matchAndRewrite`. There are no longer any `match`
and `rewrite` functions in `RewritePattern`, `ConversionPattern` and
their derived classes. Instead, there is a `SplitMatchAndRewriteImpl`
class that implements `matchAndRewrite` in terms of `match` and
`rewrite`.
Details:
* The `RewritePattern` and `ConversionPattern` classes are simpler
(fewer functions). Especially the `ConversionPattern` class, which now
has 5 fewer functions. (There were various `rewrite` overloads to
account for 1:1 / 1:N patterns.)
* There is a new class `SplitMatchAndRewriteImpl` that derives from
`RewritePattern` / `OpRewritePattern` / ..., along with a type alias
`RewritePattern::SplitMatchAndRewrite` for convenience.
* Fewer `llvm_unreachable` calls are needed throughout the code base. Instead,
we can use pure virtual functions. (In cases where users previously had
to implement `rewrite` or `matchAndRewrite`, etc.)
* This PR may also reduce the number of [`-Woverload-virtual`
warnings](https://discourse.llvm.org/t/matchandrewrite-hiding-virtual-functions/84933)
produced by GCC. (To be confirmed...)
Note for LLVM integration: Patterns with separate `match` / `rewrite`
implementations must derive from `X::SplitMatchAndRewrite` instead of
`X`.
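A hedged migration sketch (the alias spelling follows the PR description and may differ in detail; `x + 0 -> x` on `arith.addi` is just a toy pattern):
```cpp
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/IR/Matchers.h"
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

// Before (deprecated): separate match/rewrite, now spelled via the alias.
struct AddZeroSplit
    : public OpRewritePattern<arith::AddIOp>::SplitMatchAndRewrite {
  using SplitMatchAndRewrite::SplitMatchAndRewrite;
  LogicalResult match(arith::AddIOp op) const override {
    return success(matchPattern(op.getRhs(), m_Zero()));
  }
  void rewrite(arith::AddIOp op, PatternRewriter &rewriter) const override {
    rewriter.replaceOp(op, op.getLhs());
  }
};

// After (preferred): a single combined matchAndRewrite.
struct AddZeroCombined : public OpRewritePattern<arith::AddIOp> {
  using OpRewritePattern::OpRewritePattern;
  LogicalResult matchAndRewrite(arith::AddIOp op,
                                PatternRewriter &rewriter) const override {
    if (!matchPattern(op.getRhs(), m_Zero()))
      return failure();
    rewriter.replaceOp(op, op.getLhs());
    return success();
  }
};
```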
---------
Co-authored-by: River Riddle <riddleriver@gmail.com>
Moves tests for `rewriteAlignedSubByteIntExt` and
`rewriteAlignedSubByteIntTrunc` into dedicated files. Also adds and
fixes some comments.
This is merely for better organisation and so that it's easier to
identify the patterns and edge cases being tested.
1. Documents `ConvertVectorStore`. As the generated output is rather complex, I
have refined the comments + variable names in:
* "vector-emulate-narrow-type-unaligned-non-atomic.mlir",
to serve as reference for this pattern.
2. As a follow-on for #123527, renames `isAlignedEmulation` to `isFullyAligned`
and `numSrcElemsPerDest` to `emulatedPerContainerElem`.
This PR continues the introduction of poison as an initialization
vector, in this particular case in LowerVectorBitCast,
LowerVectorBroadcast and LowerVectorTranspose.
This is the first PR that introduces `ub.poison` vectors as part of a
rewrite/conversion pattern in the Vector dialect. It replaces the
`arith.constant dense<0>` vector initialization for
`vector.insert_slice` ops with a poison vector.
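Schematically (an illustrative fragment, not the committed code):
```cpp
#include "mlir/Dialect/UB/IR/UBOps.h"
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

// The accumulator that used to start as a zero splat now starts as poison,
// since every element is overwritten by the subsequent inserts.
static Value createInitVector(PatternRewriter &rewriter, Location loc,
                              VectorType resultTy) {
  // Before: rewriter.create<arith::ConstantOp>(
  //             loc, rewriter.getZeroAttr(resultTy));
  return rewriter.create<ub::PoisonOp>(loc, resultTy);
}
```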
This PR depends on all the previous PRs that introduced support for
poison in Vector operations such as `vector.shuffle`, `vector.extract`,
`vector.insert`, including ODS, canonicalization and lowering support.
This PR may improve end-to-end compilation time through LLVM, depending
on the workloads.
This PR adds a folder for `vector.extract(ub.poison) -> ub.poison`. It
also replaces `create` with `createOrFold` for insert/extract ops in vector
unroll and transpose lowering patterns, to trigger the poison foldings
introduced recently.
This is PR 2 in a series of N patches aimed at improving
"VectorEmulateNarrowType.cpp". This is mainly minor refactoring, no
major functional changes are made/added.
**CHANGE 1**
Renames the variable "scale". Note, "scale" could mean either:
* "container-elements-per-emulated-type", or
* "emulated-elements-per-container-type".
While from the context it is clear that it's always the latter (the emulated
type is always a sub-byte type and the container type is usually `i8`), this
PR reduces the cognitive load by making this clear.
**CHANGE 2**
Replaces `isUnalignedEmulation` with `isFullyAligned`.
Note, `isUnalignedEmulation` is always computed following a
"per-element-alignment" condition:
```cpp
// Check per-element alignment.
if (containerBits % emulatedBits != 0) {
  return rewriter.notifyMatchFailure(
      op, "impossible to pack emulated elements into container elements "
          "(bit-wise misalignment)");
}
// (...)
bool isUnalignedEmulation = origElements % emulatedPerContainerElem != 0;
```
Given that `isUnalignedEmulation` captures only one of the two conditions
required for "full alignment", it would more accurately be named
`isPartiallyUnalignedEmulation`. Instead, I've flipped the condition and
renamed it `isFullyAligned`:
```cpp
bool isFullyAligned = origElements % emulatedPerContainerElem == 0;
```
**CHANGE 3**
* Unifies various comments throughout the file (for consistency).
* Adds new comments throughout the file and adds TODOs where high-level
comments are missing.
**GitHub issue to track this work**:
https://github.com/llvm/llvm-project/issues/123630
Updates `emulatedVectorLoad` that was introduced in #115922.
Specifically, at the moment `emulatedVectorLoad` mixes "emulated type" and
"container type". This only became clear after #123526, in which the
concepts of "emulated" and "container" types were introduced.
This is an NFC change and simply updates the variable naming.
This is PR 1 in a series of N patches aimed at improving
"VectorEmulateNarrowType.cpp". This is mainly minor refactoring, no
major functional changes are made/added.
This PR renames:
* `srcBits`/`dstBits` to `emulatedBits`/`containerBits`, and
* `oldElementType`/`newElementType` to `emulatedElemTy`/`containerElemTy`,
to improve consistency in naming within the file. This is illustrated
below:
```cpp
// Extracted from VectorEmulateNarrowType.cpp
// BEFORE (mixing old/new and src/dst):
// Type oldElementType = op.getType().getElementType();
// Type newElementType = convertedType.getElementType();
// int srcBits = oldElementType.getIntOrFloatBitWidth();
// int dstBits = newElementType.getIntOrFloatBitWidth();
// AFTER (consistently using emulated/container):
Type emulatedElemTy = op.getType().getElementType();
Type containerElemTy = convertedType.getElementType();
int emulatedBits = emulatedElemTy.getIntOrFloatBitWidth();
int containerBits = containerElemTy.getIntOrFloatBitWidth();
```
Also adds some comments and unifies related "rewriter notification"
messages.
**GitHub issue to track this work:**
* https://github.com/llvm/llvm-project/issues/123630
It looks like scalable `vector.insertslice/extractslice` ops made their way
through lowering patterns that generate `vector.shuffle` ops. I'm not
sure why this wasn't caught by the verifier, probably because the
shuffle op was folded into something else as part of the same rewrite
and the IR wasn't verified.
This PR fixes the issue by preventing scalable vector.insertslice/extractslice
ops from being lowered to vector shuffles. Instead, they are now lowered to a
sequence of insertslice/extractelement ops using an existing pattern.
This patch enables unaligned, statically indexed storing of vectors with
sub-emulation-width element types.
To illustrate the mechanism, consider the example of storing
vector<7xi2> into memref<3x7xi2>[1, 0].
In this case the linearized indices of those bits being overwritten are
[14, 28), which are:
* the last 2 bits of byte no.2
* byte no.3
* first 4 bits of byte no.4
Because memory accesses are in bytes, byte no.2 and no.4 in the above
example are only being modified partially.
In a multi-threaded scenario, in order to avoid data
contention, these two bytes must be handled atomically.
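For the partially modified bytes this amounts to an atomic read-modify-write; a minimal sketch using `memref.generic_atomic_rmw` (variable names are mine, and the mask/shift plumbing is elided):
```cpp
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Dialect/MemRef/IR/MemRef.h"
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

// Atomically merge `newBits` into one i8 element of `byteMemref`; the 1-bits
// of `keepMask` select the bits of the *old* byte that must survive.
static void atomicStorePartialByte(PatternRewriter &rewriter, Location loc,
                                   Value byteMemref, ValueRange indices,
                                   Value newBits, Value keepMask) {
  auto rmw =
      rewriter.create<memref::GenericAtomicRMWOp>(loc, byteMemref, indices);
  OpBuilder::InsertionGuard guard(rewriter);
  rewriter.setInsertionPointToStart(rmw.getBody());
  Value oldByte = rmw.getCurrentValue();
  // merged = (old & keepMask) | newBits, so untouched bits are preserved.
  Value kept = rewriter.create<arith::AndIOp>(loc, oldByte, keepMask);
  Value merged = rewriter.create<arith::OrIOp>(loc, kept, newBits);
  rewriter.create<memref::AtomicYieldOp>(loc, merged);
}
```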
This PR implements a generalization of the existing, more efficient
lowering of shape casts from 2-D to 1-D and from 1-D to 2-D vectors. This
significantly reduces code size and generates more performant code for
n-D shape casts that make their way to LLVM/SPIR-V.
`BreakDownVectorBitCast` leverages
* `vector.extract_strided_slice` + `vector.insert_strided_slice`
As these ops do not support extracting/inserting scalable sub-vectors
(i.e. a fraction of a scalable dim), it's best to bail
out.
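The bail-out itself is simple; a sketch (illustrative, not the exact code):
```cpp
#include "mlir/Dialect/Vector/IR/VectorOps.h"
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

// Reject scalable vectors up front: the strided-slice ops used by the
// pattern cannot slice a fraction of a scalable dimension.
static LogicalResult checkNotScalable(PatternRewriter &rewriter,
                                      vector::BitCastOp bitCastOp) {
  if (bitCastOp.getSourceVectorType().isScalable())
    return rewriter.notifyMatchFailure(bitCastOp,
                                       "not supported for scalable vectors");
  return success();
}
```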
0-d vectors are supported now, so these patterns are no longer
required. This covers a part of
https://github.com/llvm/llvm-project/issues/112913. Additionally, this
removes %arg2 in mlir/test/Conversion/GPUCommon/transfer_write.mlir and
renames %arg3 to %arg2, as %arg2 was originally not required.
In the LLVM style guide, we prefer not using braced initializer lists to
call a constructor. Also, we prefer using an equals sign before the open curly
brace when using a braced initializer list to initialize a variable.
See
https://llvm.org/docs/CodingStandards.html#do-not-use-braced-initializer-lists-to-call-a-constructor
for more details.
The style guide does not explain the reason well. There is an article
from Abseil which mentions a few benefits, e.g., avoiding the most
vexing parse, etc. See https://abseil.io/tips/88 for more details.
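For illustration (generic C++, not taken from the patch):
```cpp
#include <vector>

struct Widget {
  Widget() = default;
};

void examples() {
  // Calling a constructor: use parentheses, not a braced list.
  std::vector<int> a(100, 20);    // 100 elements, each of value 20
  // Initializing with a list of elements: use `=` before the brace.
  std::vector<int> b = {100, 20}; // exactly two elements, 100 and 20
  // `= {}` also sidesteps the most vexing parse:
  // Widget w();  // declares a function, not a default-constructed Widget!
  Widget w = {};  // unambiguously a value-initialized Widget
  (void)a;
  (void)b;
  (void)w;
}
```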
Signed-off-by: hanhanW <hanhan0912@gmail.com>
Adds rewrites for i2 to i8 signed and unsigned extension, similar to the
ones that already exist for i4 to i8 conversion.
I use this for i6 quantized models, and this gives me roughly a 2x
speedup for an i6 4096x4096 dequantization-matmul on an AMD 5950x.
I didn't add the rewrite for i8 to i2 truncation because I currently
don't use it, but if this is needed, I can add it as well.
---------
Co-authored-by: Andrzej Warzyński <andrzej.warzynski@gmail.com>
In `Gather1DToConditionalLoads`, we currently check whether the stride of
the most minor dim of the input memref is 1; if not, the rewrite
pattern is not applied. However, according to the verification of
`vector.load` here:
4e32271e8b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp (L4971-L4975)
if the output vector type of `vector.load` contains only one element,
we can ignore the requirement on the stride of the input memref, i.e.
the input memref can have any strided layout attribute in that case.
So we can allow more cases in lowering `vector.gather` by relaxing
this check.
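A sketch of the relaxed precondition (names are mine; illustrative only):
```cpp
#include "mlir/IR/BuiltinTypes.h"
#include "llvm/ADT/SmallVector.h"
using namespace mlir;

// A single-element result makes the innermost stride irrelevant, mirroring
// the vector.load verifier; otherwise require a unit minor stride.
static bool isGatherLowerable(MemRefType memrefTy, VectorType resultTy) {
  if (resultTy.getNumElements() == 1)
    return true;
  int64_t offset;
  SmallVector<int64_t> strides;
  return succeeded(memrefTy.getStridesAndOffset(strides, offset)) &&
         !strides.empty() && strides.back() == 1;
}
```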
As shown in the test case attached in this patch
[here](1933fbad58/mlir/test/Dialect/Vector/vector-gather-lowering.mlir (L151)),
`vector.gather` on a memref with a non-trivial stride can now be lowered
successfully if the result vector contains only one element.
---------
Signed-off-by: PragmaTwice <twice@apache.org>
Co-authored-by: Andrzej Warzyński <andrzej.warzynski@gmail.com>
This commit updates the internal `ConversionValueMapping` data structure
in the dialect conversion driver to support 1:N replacements. This is
the last major commit for adding 1:N support to the dialect conversion
driver.
Since #116470, the infrastructure already supports 1:N replacements. But
the `ConversionValueMapping` still stored 1:1 value mappings. To that
end, the driver inserted temporary argument materializations (converting
N SSA values into 1 value). This is no longer the case. Argument
materializations are now entirely gone. (They will be deleted from the
type converter after some time, when we delete the old 1:N dialect
conversion driver.)
Note for LLVM integration: Replace all occurrences of
`addArgumentMaterialization` (except for 1:N dialect conversion passes)
with `addSourceMaterialization`.
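Concretely, the migration is a one-line change for most type converters (sketch; `materializeFn` is a placeholder):
```cpp
#include "mlir/Transforms/DialectConversion.h"
#include <utility>
using namespace mlir;

template <typename FnT>
void registerMaterialization(TypeConverter &converter, FnT &&materializeFn) {
  // Before:
  //   converter.addArgumentMaterialization(std::forward<FnT>(materializeFn));
  // After:
  converter.addSourceMaterialization(std::forward<FnT>(materializeFn));
}
```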
---------
Co-authored-by: Markus Böck <markus.boeck02@gmail.com>
The greedy rewriter is used in many different flows and it has a lot of
convenience (work list management, debugging actions, tracing, etc.). But
it combines two kinds of greedy behavior: 1) how ops are matched, 2)
folding wherever it can.
These are independent forms of greediness, and combining them leads to
inefficiency, e.g., in cases where one needs to create different phases in
lowering and is required to apply patterns in a specific order split across
different passes. Using the driver, one ends up needlessly retrying folding /
having multiple rounds of folding attempts, where one final run would have
sufficed.
Of course, folks can locally avoid this behavior by building their own
driver, but this is also a commonly requested feature that folks keep
working around locally in suboptimal ways.
For downstream users, there should be no behavioral change. Updating
from the deprecated API should just be a find-and-replace (e.g., of the `find ./
-type f -exec sed -i
's|applyPatternsAndFoldGreedily|applyPatternsGreedily|g' {} \;` variety),
as the API arguments haven't changed between the two.
Note that PointerUnion::{is,get} have been soft deprecated in
PointerUnion.h:
// FIXME: Replace the uses of is(), get() and dyn_cast() with
// isa<T>, cast<T> and the llvm::dyn_cast<T>
I'm not touching PointerUnion::dyn_cast for now because it's a bit
complicated; we could blindly migrate it to dyn_cast_if_present, but
we should probably use dyn_cast when the operand is known to be
non-null.
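A migration sketch for the soft-deprecated members (the casting free functions on `PointerUnion` are the real LLVM API):
```cpp
#include "llvm/ADT/PointerUnion.h"
using namespace llvm;

void example(int *i) {
  PointerUnion<int *, float *> pu = i;
  // Before (soft-deprecated): pu.is<int *>() and pu.get<int *>().
  if (isa<int *>(pu)) {
    int *p = cast<int *>(pu);
    (void)p;
  }
  // For a possibly-null union, dyn_cast_if_present<int *>(pu) is the
  // counterpart of the old member dyn_cast.
}
```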
Continue the move of `warp_execute_on_lane_0` op to the gpu dialect
(#116994). This patch creates a utils library in GPU and moves generic
helper functions there.
This patch simplifies and extends the logic used when compressing masks
emitted by `vector.constant_mask` to support extracting 1-D vectors from
multi-dimensional vector loads. It streamlines the mask computation, making
it applicable to multi-dimensional mask generation and improving the
overall handling of masked load operations.
Previously, when `numFrontPadElems` is not zero, `getCompressedMaskOp`
produces a wrong result if the mask-generating op is a
`vector.create_mask`.
This patch resolves the issue by including `numFrontPadElems` into the
mask generation.
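The gist of the fix, as a hedged sketch (names and the exact formula are mine): the compressed bound must shift by the front padding, since the padded elements change which container elements the mask covers.
```cpp
#include "llvm/Support/MathExtras.h"
#include <cstdint>

// Compressed bound for a vector.create_mask-style mask: the original bound,
// offset by the front padding, rounded up to whole container elements.
int64_t compressedBound(int64_t origBound, int64_t numFrontPadElems,
                        int64_t elemsPerContainer) {
  return llvm::divideCeil(origBound + numFrontPadElems, elemsPerContainer);
}
```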
Signed-off-by: Alan Li <me@alanli.org>
Previously, the pattern did not expect a non-static index; when the index
is a non-constant value, it would crash. This patch makes sure it returns
gracefully instead of crashing.
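Schematically, the guard looks like this (illustrative; names are placeholders):
```cpp
#include "mlir/Dialect/Utils/StaticValueUtils.h"
#include "mlir/IR/PatternMatch.h"
#include <optional>
using namespace mlir;

// Return a match failure instead of asserting/crashing when the index is not
// a compile-time constant.
static LogicalResult checkStaticIndex(PatternRewriter &rewriter, Operation *op,
                                      OpFoldResult index) {
  std::optional<int64_t> constIndex = getConstantIntValue(index);
  if (!constIndex)
    return rewriter.notifyMatchFailure(op, "expected a constant index");
  return success();
}
```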
This is an NFC-ish change that moves
vector.extractelement/vector.insertelement vector distribution patterns
to vector.insert/vector.extract.
Before:
* 0-d/1-d vector.extract -> vector.extractelement -> distributed
  vector.extractelement
* 2-d+ vector.extract -> distributed vector.extract
After:
* scalar input vector.extract -> distributed vector.extract
* vector.extractelement -> distributed vector.extract
* 2-d+ vector.extract -> distributed vector.extract
The same changes are done for insertelement/insert. The change allows us
to remove the reliance on vector.extractelement/vector.insertelement, which
are soon to be deprecated:
https://discourse.llvm.org/t/rfc-psa-remove-vector-extractelement-and-vector-insertelement-ops-in-favor-of-vector-extract-and-vector-insert-ops/71116/8
No extra tests are included because this patch doesn't introduce or
remove any functionality; it only changes the chain of lowerings. This
change could be completely NFC if we made the distributed operation
vector.extractelement/vector.insertelement, but that is slightly weird,
because you would be going from extractelement -> extract -> extractelement.
This commit adds support for handling mask constants generated by the
`arith.constant` op in the `VectorEmulateNarrowType` pattern.
Previously, this pattern would not match due to the lack of mask
constant handling in `getCompressedMaskOp`.
The changes include:
1. Updating `getCompressedMaskOp` to recognize and handle
`arith.constant` ops as mask value sources.
2. Handling cases where the mask is not aligned with the emulated load
width. The compressed mask is adjusted to account for the offset.
Limitations:
- The arith.constant op can only have 1-dimensional constant values.
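The compression itself boils down to an OR-reduction over each group of emulated elements; a hedged sketch (not the exact upstream helper):
```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"
#include <cstdint>

// A container element must stay enabled if *any* of the emulated elements
// packed into it is set in the original 1-D constant mask.
llvm::SmallVector<bool> compressMask(llvm::ArrayRef<bool> origMask,
                                     int64_t numFrontPadElems,
                                     int64_t elemsPerContainer) {
  int64_t total = numFrontPadElems + static_cast<int64_t>(origMask.size());
  int64_t numCompressed = (total + elemsPerContainer - 1) / elemsPerContainer;
  llvm::SmallVector<bool> compressed(numCompressed, false);
  for (int64_t i = 0, e = origMask.size(); i < e; ++i)
    if (origMask[i])
      compressed[(numFrontPadElems + i) / elemsPerContainer] = true;
  return compressed;
}
```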
Resolves: #115742
Signed-off-by: Alan Li <me@alanli.org>