This patch fixes:
mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp:947:13:
error: variable 'distributedDim' set but not used
[-Werror,-Wunused-but-set-variable]
In case the distributed dim of the dest vector is also a dim of the src vector, each lane inserts a smaller part of the source vector. Otherwise, one lane inserts the entire src vector and the other lanes do nothing.
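For illustration, a hedged sketch of the two cases (shapes and the `"test.def"` producers are hypothetical):

```mlir
// The distributed dim of the dest (dim 1, 96 -> 3 per lane) is also a dim
// of the src, so each of the 32 lanes inserts its own vector<3xf32> slice.
%r = vector.warp_execute_on_lane_0(%laneid)[32] -> (vector<4x3xf32>) {
  %src = "test.def"() : () -> (vector<96xf32>)
  %dst = "test.def"() : () -> (vector<4x96xf32>)
  %ins = vector.insert %src, %dst [2]
      : vector<96xf32> into vector<4x96xf32>
  vector.yield %ins : vector<4x96xf32>
}
// Had dim 0 of the dest been the distributed dim instead, only the lane
// owning row 2 would insert the entire src vector.
```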
Differential Revision: https://reviews.llvm.org/D137953
In case of a distribution, only one lane inserts the scalar value. In case of a broadcast, every lane inserts the scalar.
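A hedged sketch of the distributed case (shapes and `"test.def"` producers hypothetical):

```mlir
// vector<96xf32> is distributed to vector<3xf32> per lane; only the lane
// into whose slice %pos falls performs the insert. If the result type were
// vector<96xf32> (a broadcast), every lane would insert.
%r = vector.warp_execute_on_lane_0(%laneid)[32] -> (vector<3xf32>) {
  %v = "test.def"() : () -> (vector<96xf32>)
  %f = "test.def"() : () -> (f32)
  %pos = "test.def"() : () -> (index)
  %ins = vector.insertelement %f, %v[%pos : index] : vector<96xf32>
  vector.yield %ins : vector<96xf32>
}
```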
Differential Revision: https://reviews.llvm.org/D137929
Ops such as `%1 = vector.extract %0[2] : vector<5x96xf32>`.
Distribute the source vector, then extract. In case of a 1d extract, rewrite to vector.extractelement.
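A hedged sketch (shapes and the `"test.def"` producer are hypothetical):

```mlir
// The source is distributed along its trailing dim (96 -> 3 per lane);
// each lane then extracts row 2 of its local slice.
%r = vector.warp_execute_on_lane_0(%laneid)[32] -> (vector<3xf32>) {
  %src = "test.def"() : () -> (vector<5x96xf32>)
  %e = vector.extract %src[2] : vector<5x96xf32>
  vector.yield %e : vector<96xf32>
}
```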
Differential Revision: https://reviews.llvm.org/D137646
Relax an unnecessary restriction when distributing a vector.reduce op.
All float and integer types can be supported by the user's lambda.
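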
Differential Revision: https://reviews.llvm.org/D141094
* Rewrite vector.transfer_write of vectors with 1 element to
memref.store
* Rewrite vector.extract(vector.transfer_read) to memref.load
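A hedged before/after sketch of the first rewrite (names hypothetical):

```mlir
// Before:
vector.transfer_write %v, %m[%i] : vector<1xf32>, memref<?xf32>
// After:
%e = vector.extract %v[0] : vector<1xf32>
memref.store %e, %m[%i] : memref<?xf32>
```

The second rewrite is analogous: extracting the single element of a 1-element transfer_read becomes a plain memref.load.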
Differential Revision: https://reviews.llvm.org/D140391
std::optional::value() has undesired exception checking semantics and is
unavailable in older Xcode (see _LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS). The
call sites block std::optional migration.
This is part of an effort to migrate from llvm::Optional to
std::optional. This patch changes the way mlir-tblgen generates .inc
files, and modifies tests and documentation appropriately. It is a "no
compromises" patch, and doesn't leave the user with an unpleasant mix of
llvm::Optional and std::optional.
A non-trivial change has been made to ControlFlowInterfaces to split one
constructor into two, relating to a build failure on Windows.
See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>
Differential Revision: https://reviews.llvm.org/D138934
This patch introduces the initial bits to support vector masking
using the `vector.mask` operation. Vectorization changes should be
NFC for non-masked cases. We can't test masked cases directly until
we extend the Transform dialect to support masking.
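A hedged sketch of the form masked vectorization is expected to produce (names hypothetical):

```mlir
// The mask is applied to the operation nested in the vector.mask region.
%0 = vector.mask %mask {
  vector.transfer_read %t[%c0], %pad : tensor<?xf32>, vector<16xf32>
} : vector<16xi1> -> vector<16xf32>
```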
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D137690
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
RewriterBase is the proper builder to use so one can listen to IR modifications (i.e. not just creation).
Differential Revision: https://reviews.llvm.org/D137922
This revision refactors and cleans up a bunch of infra related to vector, shapes and indexing into more reusable APIs.
Differential Revision: https://reviews.llvm.org/D138501
This patch replaces:
return Optional<T>();
with:
return None;
to make the migration from llvm::Optional to std::optional easier.
Specifically, I can deprecate None (in my source tree, that is) to
identify all the instances of None that should be replaced with
std::nullopt.
Note that "return None" far outnumbers "return Optional<T>();". There
are more than 2000 instances of "return None" in our source tree.
All of the instances in this patch come from functions that return
Optional<T> except Archive::findSym and ASTNodeImporter::import, where
we return Expected<Optional<T>>. Note that we can construct
Expected<Optional<T>> from any parameter convertible to Optional<T>,
which None certainly is.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
Differential Revision: https://reviews.llvm.org/D138464
Ops that were modified in-place (`finalizeRootUpdate` was called) should be reprocessed by the GreedyPatternRewriter. This is currently not happening with `GreedyRewriteConfig::maxIterations = 1`.
Note: If your project goes into an infinite loop because of this change, you likely have one or multiple faulty patterns that modify the same operations in-place (`updateRootInPlace`) indefinitely.
Differential Revision: https://reviews.llvm.org/D138038
Masking hasn't been widely used in vector transfer ops and the semantics
of the mask operand were a bit loose. This patch states that the mask
operand in a vector transfer op is applied to the read/write part of the
operation and, therefore, its shape should match the shape of the
elements read/written from/into the memref/tensor regardless of any
permutation/broadcasting also applied by the transfer operation.
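For example (a hedged sketch with hypothetical shapes), with a transposing permutation map the mask follows the memref-side shape:

```mlir
// 8x16 elements are read from %mem and transposed into vector<16x8xf32>,
// so %mask must have shape vector<8x16xi1>, not vector<16x8xi1>.
%v = vector.transfer_read %mem[%c0, %c0], %pad, %mask
    {permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
    : memref<?x?xf32>, vector<16x8xf32>
```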
Reviewers: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D138079
The methods in `SideEffectUtils.h` (and their implementations in
`SideEffectUtils.cpp`) seem to have similar intent to methods already
existing in `SideEffectInterfaces.h`. Move the declarations (and
implementations) from `SideEffectUtils.h` (and `SideEffectUtils.cpp`)
into `SideEffectInterfaces.h` (and `SideEffectInterfaces.cpp`).
Also drop the `SideEffectInterface::hasNoEffect` method in favor of
`mlir::isMemoryEffectFree`, which actually recurses into the operation
instead of relying exclusively on the `hasRecursiveMemoryEffectTrait`.
Differential Revision: https://reviews.llvm.org/D137857
Not all vector shapes can be bitcast to other types; add a check to not
fold an insert op into a bitcast when the shape does not support it.
Examples of unsupported shape casts are f16 vectors to f32 when the
innermost dimension is not a multiple of 2, or i8 to i32 when it is not
a multiple of 4.
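For example:

```mlir
// Supported: the innermost dim (4) is a multiple of the f16->f32 ratio (2).
%0 = vector.bitcast %a : vector<4xf16> to vector<2xf32>
// Unsupported: vector<5xf16> has no valid f32 shape, so an insert feeding
// such a cast must not be folded into it.
```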
Reviewed By: antiagainst, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D137802
Ops such as `%1 = vector.extractelement %0[%pos : index] : vector<96xf32>`.
In case of an extract from a 1D vector, the source vector is distributed. The lane into which the requested position falls extracts the element and shuffles it to all other lanes.
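A hedged sketch (shapes and `"test.def"` producers hypothetical):

```mlir
// Each lane holds vector<3xf32>; the lane owning %pos extracts the
// element, which is then shuffled to all 32 lanes.
%r = vector.warp_execute_on_lane_0(%laneid)[32] -> (f32) {
  %v = "test.def"() : () -> (vector<96xf32>)
  %pos = "test.def"() : () -> (index)
  %e = vector.extractelement %v[%pos : index] : vector<96xf32>
  vector.yield %e : f32
}
```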
Differential Revision: https://reviews.llvm.org/D137336
This is useful for breaking down extract_strided_slice and potentially
canceling it with other extract/insert ops before or after.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D137471
Quantization methods are crucial and ubiquitous in accelerating machine
learning workloads, and most of them use f16 and i8 types.
This patch relaxes the type constraints on warp reduce distribution to
allow these types. Furthermore, it changes the interface and moves the
initial reduction of data to a single thread into the
distributedReductionFn. This gives developers flexibility to control
how the initial lane value is obtained, which may differ based on the
input type (e.g., to use 32-bit-wide shuffles, f16 values need to be
packed into 2xf16 rather than shuffled as single elements).
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D137691
[RFC: EnumAttr for iterator types in Linalg](https://discourse.llvm.org/t/rfc-enumattr-for-iterator-types-in-linalg/64535)
This change touches and probably breaks most of the code that creates `linalg.generic`. A fix is to replace calls to `getParallelIteratorTypeName/getReductionIteratorTypeName` with `mlir::utils::IteratorType::parallel/reduction` and to change types from `StringRef` to `mlir::utils::IteratorType`.
Due to limitations of tablegen, the shared C++ definition of the IteratorType enum lives in StructuredOpsUtils.td, but each dialect should have its own EnumAttr wrapper. To avoid conflicts, all enums in a dialect are put into a separate file with a separate tablegen rule.
Test dialect td files are refactored a bit.
Printed format of `linalg.generic` temporarily remains unchanged to avoid breaking code and tests in the same change.
Differential Revision: https://reviews.llvm.org/D137658
- Argument name `isLastOutput` in a comment does not match the parameter
  name `hasOutput`.
- `override` is redundant since the function is already declared `final`.
When a value used in the forOp is defined outside the region but within
the parent warpOp, we need to return and distribute the value to pass it
to new operations created within the loop.
Also simplify the lambda interface.
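A hedged sketch of the scenario (shapes, loop bounds, and the `"test.def"` producers are hypothetical; %c0, %c4, and %c1 are index constants defined above):

```mlir
// %def is defined in the warp region but outside the loop and used inside
// it, so it must be distributed and passed into the rewritten loop.
%r = vector.warp_execute_on_lane_0(%laneid)[32] -> (vector<4xf32>) {
  %def = "test.def"() : () -> (vector<128xf32>)
  %init = "test.def"() : () -> (vector<128xf32>)
  %for = scf.for %i = %c0 to %c4 step %c1
      iter_args(%acc = %init) -> (vector<128xf32>) {
    %add = arith.addf %acc, %def : vector<128xf32>
    scf.yield %add : vector<128xf32>
  }
  vector.yield %for : vector<128xf32>
}
```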
Differential Revision: https://reviews.llvm.org/D137146
This patch introduces the lowering for xfer ops masked with `vector.mask`.
Vector reductions are not lowered yet because new LLVM intrinsics are needed
in the LLVM dialect.
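A hedged sketch of the intended lowering (names hypothetical):

```mlir
// Before: the xfer op is masked via the vector.mask region.
%0 = vector.mask %m {
  vector.transfer_read %t[%c0], %pad : tensor<?xf32>, vector<16xf32>
} : vector<16xi1> -> vector<16xf32>
// After: the mask becomes the transfer op's own mask operand.
%1 = vector.transfer_read %t[%c0], %pad, %m : tensor<?xf32>, vector<16xf32>
```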
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D136741
`vector.contract` is being lowered to the default mul/add contraction
regardless of the kind indicated. Stop the lowering completely in this
case until the correct one can be implemented.
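For example (a hedged sketch), a contraction like the following must not be lowered as mul/add:

```mlir
// The combining kind is maxf, not the default mul/add.
%0 = vector.contract {
    indexing_maps = [affine_map<(d0) -> (d0)>,
                     affine_map<(d0) -> (d0)>,
                     affine_map<(d0) -> ()>],
    iterator_types = ["reduction"],
    kind = #vector.kind<maxf>}
  %a, %b, %acc : vector<8xf32>, vector<8xf32> into f32
```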
Reviewed By: springerm, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D136079
This patch takes the first step towards a more principled modeling of undefined behavior in MLIR as discussed in the following discourse threads:
1. https://discourse.llvm.org/t/semantics-modeling-undefined-behavior-and-side-effects/4812
2. https://discourse.llvm.org/t/rfc-mark-tensor-dim-and-memref-dim-as-side-effecting/65729
This patch in particular does the following:
1. Introduces a ConditionallySpeculatable OpInterface that dynamically determines whether an Operation can be speculated.
2. Re-defines `NoSideEffect` to allow undefined behavior, making it necessary but not sufficient for speculation. Also renames it to `NoMemoryEffect`.
3. Makes LICM respect the above semantics.
4. Changes all ops tagged with `NoSideEffect` today to additionally implement ConditionallySpeculatable and mark themselves as always speculatable. This combined trait is named `Pure`. This makes this change NFC.
For out of tree dialects:
1. Replace `NoSideEffect` with `Pure` if the operation does not have any memory effects, undefined behavior or infinite loops.
2. Replace `NoSideEffect` with `NoMemoryEffect` otherwise.
The next steps in this process are (I'm proposing to do these in upcoming patches):
1. Update operations like `tensor.dim`, `memref.dim`, `scf.for`, `affine.for` to implement a correct hook for `ConditionallySpeculatable`. I'm also happy to update ops in other dialects if the respective dialect owners would like to and can give me some pointers.
2. Update other passes that speculate operations to consult `ConditionallySpeculatable` in addition to `NoMemoryEffect`. I could not find any other than LICM on a quick skim, but I could have missed some.
3. Add some documentation / FAQs detailing the differences between side effects, undefined behavior, and speculatability.
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D135505
This commit adds a pattern to merge accumulator and result
`vector.transpose` ops into `vector.contract`. This kind of
pattern can be generated for NCHW convolution vectorization,
where we use transposes to convert the 1-D NCW convolution
into NWC during vectorization. Merging the transposes
means we can avoid materializing vector extracts/inserts for
them, and it makes further vector-level transformations easier.
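A hedged sketch of the pattern (shapes and maps hypothetical):

```mlir
// Before: the accumulator is transposed into the contract's layout and the
// result is transposed back.
%at = vector.transpose %acc, [1, 0] : vector<8x4xf32> to vector<4x8xf32>
%c = vector.contract {
    indexing_maps = [affine_map<(m, n, k) -> (m, k)>,
                     affine_map<(m, n, k) -> (k, n)>,
                     affine_map<(m, n, k) -> (m, n)>],
    iterator_types = ["parallel", "parallel", "reduction"],
    kind = #vector.kind<add>}
  %a, %b, %at : vector<4x16xf32>, vector<16x8xf32> into vector<4x8xf32>
%r = vector.transpose %c, [1, 0] : vector<4x8xf32> to vector<8x4xf32>
// After: both transposes fold away by rewriting the acc/result indexing
// map to (m, n, k) -> (n, m).
```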
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D135496
Make sure we consider other subviews of the same buffer when doing store
to load forwarding or dead store elimination.
Differential Revision: https://reviews.llvm.org/D134576
One of the vector transformation patterns has been indiscriminately
converting layouts to affine maps. Leverage the strided form when
possible.
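For example, the two layouts below are equivalent, and the strided form is now preferred:

```mlir
// Strided form:
memref<?x?xf32, strided<[?, 1], offset: ?>>
// Affine-map form previously produced:
memref<?x?xf32, affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + d1 + s0)>>
```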
Reviewed By: nicolasvasilache, dcaballe
Differential Revision: https://reviews.llvm.org/D134047
Bufferization already makes the assumption that buffers pass function
boundaries in the strided form and uses the corresponding affine map layouts.
Switch it to use the recently introduced strided layout instead to avoid
unnecessary casts when bufferizing further operations to the memref dialect
counterparts that now largely rely on the strided layout attribute.
Depends On D133947
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D133951
Simplify the lowering of warp_execute_on_lane_0 of scf.if by making the
logic more generic. Also remove the assumption that the innermost
dimension is the distributed dimension.
Differential Revision: https://reviews.llvm.org/D133826
This revision significantly improves and tests the broadcast behavior of vector.warp_execute_on_lane_0.
Previously, the implementation of the broadcast behavior of vector.warp_execute_on_lane_0
assumed that the broadcasted value was always of scalar type.
This is not necessarily the case.
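A hedged sketch of the non-scalar broadcast case (shape and `"test.def"` producer hypothetical):

```mlir
// The yielded type equals the declared result type, so the value is
// broadcast to all 32 lanes instead of being distributed.
%r = vector.warp_execute_on_lane_0(%laneid)[32] -> (vector<4xf32>) {
  %v = "test.def"() : () -> (vector<4xf32>)
  vector.yield %v : vector<4xf32>
}
```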
Differential Revision: https://reviews.llvm.org/D133767
This is the first step in replacing iterator_type strings with enums in the Vector and Linalg dialects. This change adds IteratorTypeAttr and uses it in ContractionOp.
To avoid breaking all the tests, the print/parse code converts between string and enum for now.
There is shared code in StructuredOpsUtils.h that expects iterator types to be strings. To break this dependency, this change forks the helper functions `isParallelIterator` and `isReductionIterator` into utils in both dialects and adds `getIteratorTypeNames()` to support backward compatibility with StructuredGenerator.
In later changes, I plan to add a similar enum attribute to Linalg.
Differential Revision: https://reviews.llvm.org/D133696
The logic to figure out if a transfer op can be flattened wasn't
considering the shape being loaded; therefore, it incorrectly assumed
some transfer ops were reading contiguous data.
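A hedged sketch of a transfer that must not be flattened (shapes hypothetical):

```mlir
// The memref itself is contiguous, but the loaded shape covers only 2 of
// the 4 middle rows, so the data read is not contiguous.
%v = vector.transfer_read %m[%c0, %c0, %c0], %pad
    {in_bounds = [true, true, true]}
    : memref<2x4x8xf32>, vector<2x2x8xf32>
```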
Differential Revision: https://reviews.llvm.org/D133544