This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
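A minimal before/after illustration of the mechanical rewrite (the function and its body are hypothetical, shown only to illustrate the textual replacement):

```cpp
#include "llvm/ADT/Optional.h"
#include <optional>

// Hypothetical function; only the return of the empty state changes.
llvm::Optional<int> lookup(bool found) {
  if (!found)
    return llvm::None;      // before the patch
    // return std::nullopt; // after the patch
  return 42;
}
```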
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
RewriterBase is the proper builder to use so that one can listen to IR modifications (i.e., not just creation).
Differential Revision: https://reviews.llvm.org/D137922
- Use `zip_equal` where iteratees are supposed to have equal length.
- Use `zip_first` where the first iteratee is supposed to be the
shortest.
- Use `llvm::enumerate` instead of calculating index manually.
- Use structured bindings to unpack tuples where appropriate.
- Fix a bug in a comparison in `intersectsWhereNonNegative`.
Both `zip_first` (after D138858) and `zip_equal` (introduced in D138865)
assert iteratee lengths, which allows us to more precisely convey
whether we want to iterate over the common prefix (`zip`), or expect all
lengths to be the same (`zip_equal`); see the sketch below.
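A minimal sketch of the difference, assuming two illustrative ranges (the function and its arithmetic are made up for the example):

```cpp
#include <cstdint>
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/STLExtras.h"

int64_t accumulate(llvm::ArrayRef<int64_t> shape, llvm::ArrayRef<int64_t> strides,
                   llvm::ArrayRef<int64_t> leadingDims) {
  int64_t total = 0;
  // zip_equal asserts that both iteratees have exactly the same length.
  for (auto [dim, stride] : llvm::zip_equal(shape, strides))
    total += dim * stride;
  // zip_first only asserts that the first iteratee is the shortest.
  for (auto [lead, dim] : llvm::zip_first(leadingDims, shape))
    total += lead * dim;
  return total;
}
```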
Reviewed By: dcaballe, antiagainst
Differential Revision: https://reviews.llvm.org/D139022
This helper handles non-trivial cases of broadcast + optional transpose creation
that should not leak to the outside world.
Differential Revision: https://reviews.llvm.org/D139003
This patch is part of a larger simplification effort of vector transfer
operations. It removes the flag `lower-permutation-maps` from
VectorToSCF conversion and enables the lowering of permutation maps
by default. This means that VectorToSCF will always lower permutation
maps to independent broadcast/transpose operations before lowering
vector operations to SCF.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D138742
Fold InsertStridedSliceOp(ConstantOp into ConstantOp) -> ConstantOp.
This pattern comes with a vector size threshold to make sure we do not
introduce too many large constants.
This helps clean up code created by the Wide Integer Emulation pass.
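A minimal sketch of such a guard, with an illustrative name and limit (not the actual pattern code):

```cpp
#include "mlir/IR/BuiltinTypes.h"

// Illustrative threshold: refuse to materialize folded constants with more
// elements than this, so folding does not bloat the IR with huge constants.
constexpr int64_t kFoldedConstantSizeLimit = 256;

static bool okToFoldIntoConstant(mlir::VectorType resultType) {
  return resultType.getNumElements() <= kFoldedConstantSizeLimit;
}
```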
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D138739
This revision fixes a bug in the vector.extract folding that did not
handle the "dim-1" broadcasting case in vector.broadcast.
Differential Revision: https://reviews.llvm.org/D138804
This pattern comes with a vector size threshold to make sure we do not
introduce too many large constants.
This helps clean up code created by the Wide Integer Emulation pass.
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D138733
This allows us to better canonicalize/clean up code created by the Wide
Integer Emulation pass.
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D138606
This generalizes the existing fold for `ExtractOp(non-splat constant)`
to work with vector results. The vector case is handled by extracting
a subrange of the attribute array.
My main use is to clean up code generated by the Wide Integer Emulation
pass.
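A rough sketch of the idea under illustrative names (not the actual fold code): rebuild a smaller elements attribute from the matching subrange of the source constant's values.

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinTypes.h"

static mlir::Attribute extractSubConstant(mlir::DenseElementsAttr srcValues,
                                          mlir::VectorType resultType,
                                          int64_t flatOffset) {
  // Flatten the source elements and slice out the extracted vector's range.
  auto elements = llvm::to_vector(srcValues.getValues<mlir::Attribute>());
  llvm::ArrayRef<mlir::Attribute> sub(elements);
  sub = sub.slice(flatOffset, resultType.getNumElements());
  return mlir::DenseElementsAttr::get(resultType, sub);
}
```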
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D138690
This op significantly improves transform dialect usage when using vector abstractions.
It also brings us closer to writing simple end-to-end unit tests that guard against subtle regressions in how patterns combine.
Differential Revision: https://reviews.llvm.org/D138723
This revision refactors and cleans up a bunch of infra related to vector, shapes and indexing into more reusable APIs.
Differential Revision: https://reviews.llvm.org/D138501
This patch replaces:
return Optional<T>();
with:
return None;
to make the migration from llvm::Optional to std::optional easier.
Specifically, I can deprecate None (in my source tree, that is) to
identify all the instances of None that should be replaced with
std::nullopt.
Note that "return None" far outnumbers "return Optional<T>();". There
are more than 2000 instances of "return None" in our source tree.
All of the instances in this patch come from functions that return
Optional<T> except Archive::findSym and ASTNodeImporter::import, where
we return Expected<Optional<T>>. Note that we can construct
Expected<Optional<T>> from any parameter convertible to Optional<T>,
which None certainly is.
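A minimal illustration of that conversion chain (the payload type and function are hypothetical):

```cpp
#include "llvm/ADT/Optional.h"
#include "llvm/Support/Error.h"

struct Symbol {}; // hypothetical payload type

llvm::Expected<llvm::Optional<Symbol>> findSymbol(bool present) {
  if (!present)
    return llvm::None; // converts to Optional<Symbol>, then wraps into Expected
  return llvm::Optional<Symbol>(Symbol{});
}
```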
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
Differential Revision: https://reviews.llvm.org/D138464
Ops that were modified in-place (`finalizeRootUpdate` was called) should be reprocessed by the GreedyPatternRewriter. This is currently not happening with `GreedyRewriteConfig::maxIterations = 1`.
Note: If your project goes into an infinite loop because of this change, you likely have one or more faulty patterns that modify the same operations in-place (`updateRootInPlace`) indefinitely.
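A hedged sketch of the pattern style this note refers to (`MyOp` and the "visited" attribute are illustrative, not from the patch); the key point is that in-place mutations go through `updateRootInPlace`, so the driver knows the op changed and re-enqueues it, which is what gets it reprocessed even with `maxIterations = 1`:

```cpp
#include "mlir/IR/PatternMatch.h"

struct MarkVisited : public mlir::OpRewritePattern<MyOp> {
  using OpRewritePattern::OpRewritePattern;

  mlir::LogicalResult
  matchAndRewrite(MyOp op, mlir::PatternRewriter &rewriter) const override {
    // Failing to match once the attribute is set is what keeps this pattern
    // from looping forever; a pattern without such a bail-out would spin.
    if (op->hasAttr("visited"))
      return mlir::failure();
    rewriter.updateRootInPlace(
        op, [&] { op->setAttr("visited", rewriter.getUnitAttr()); });
    return mlir::success();
  }
};
```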
Differential Revision: https://reviews.llvm.org/D138038
Masking hasn't been widely used in vector transfer ops and the semantics
of the mask operand were a bit loose. This patch states that the mask
operand in a vector transfer op is applied to the read/write part of the
operation and, therefore, its shape should match the shape of the
elements read/written from/into the memref/tensor regardless of any
permutation/broadcasting also applied by the transfer operation.
Reviewers: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D138079
The methods in `SideEffectUtils.h` (and their implementations in
`SideEffectUtils.cpp`) seem to have similar intent to methods already
existing in `SideEffectInterfaces.h`. Move the declaration (and
implementation) from `SideEffectUtils.h` (and `SideEffectUtils.cpp`)
into `SideEffectInterfaces.h` (and `SideEffectInterfaces.cpp`).
Also drop the `SideEffectInterface::hasNoEffect` method in favor of
`mlir::isMemoryEffectFree`, which actually recurses into the operation
instead of relying exclusively on the `hasRecursiveMemoryEffectTrait`.
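A small usage sketch of the retained helper (the hoisting context is illustrative):

```cpp
#include "mlir/Interfaces/SideEffectInterfaces.h"

// Illustrative check: treat an op as safe to move only if it is
// memory-effect free, including ops nested in its regions when the op
// merely has recursive effects.
static bool isSafeToMove(mlir::Operation *op) {
  return mlir::isMemoryEffectFree(op);
}
```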
Differential Revision: https://reviews.llvm.org/D137857
Not all vector shapes can be cast into other types; we add a check to
not fold insertOp into bitcast if the shape does not support it.
Examples of unsupported shape casts are f16 vectors to f32 when the
shape is not a multiple of 2, or int8 to int32 when the shape is not a
multiple of 4.
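A minimal sketch of that shape check, under illustrative names (not the actual pattern code):

```cpp
#include "mlir/IR/BuiltinTypes.h"

// When folding into a bitcast would combine narrow elements into wider ones
// (e.g. f16 -> f32 or int8 -> int32), the innermost dimension must divide
// evenly by the ratio of element bit widths; otherwise reject the fold.
static bool shapeSupportsBitcastFold(mlir::VectorType srcType,
                                     mlir::VectorType dstType) {
  int64_t srcBits = srcType.getElementTypeBitWidth();
  int64_t dstBits = dstType.getElementTypeBitWidth();
  if (srcBits >= dstBits)
    return true; // e.g. i32 -> i8 expands elements; nothing to check here
  return srcType.getShape().back() % (dstBits / srcBits) == 0;
}
```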
Reviewed By: antiagainst, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D137802
Ops such as `%1 = vector.extractelement %0[%pos : index] : vector<96xf32>`.
In case of an extract from a 1D vector, the source vector is distributed. The lane into which the requested position falls extracts the element and shuffles it to all other lanes.
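A rough sketch of the index arithmetic implied by that description (names are illustrative and an even distribution over the warp is assumed):

```cpp
#include <cstdint>

// For a 1D vector of `vectorSize` elements distributed over `warpSize` lanes,
// element `pos` lives in lane pos / elementsPerLane, at offset
// pos % elementsPerLane within that lane's slice; the owning lane then
// broadcasts the extracted value to the other lanes with a shuffle.
struct OwningLane {
  int64_t lane;
  int64_t offsetInLane;
};

static OwningLane locateElement(int64_t pos, int64_t vectorSize,
                                int64_t warpSize) {
  int64_t elementsPerLane = vectorSize / warpSize; // assumes an even split
  return {pos / elementsPerLane, pos % elementsPerLane};
}
```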
Differential Revision: https://reviews.llvm.org/D137336
This is useful for breaking down extract_strided_slice and potentially
canceling it with other extract / insert ops before or after.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D137471
Quantization methods are crucial and ubiquitous in accelerating machine
learning workloads. Most of these methods use f16 and i8 types.
This patch relaxes the type constraints on warp reduce distribution to
allow these types. Furthermore, this patch also changes the interface and
moves the initial reduction of data to a single thread into the
distributedReductionFn. This gives developers flexibility to control how
the initial lane value is obtained, which might differ based on the input
types (e.g., to shuffle a 32-bit-wide type, we need to reduce f16 values
to 2xf16 rather than to a single element).
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D137691
[RFC: EnumAttr for iterator types in Linalg](https://discourse.llvm.org/t/rfc-enumattr-for-iterator-types-in-linalg/64535)
This change touches, and probably breaks, most of the code that creates `linalg.generic`. The fix is to replace calls to `getParallelIteratorTypeName`/`getReductionIteratorTypeName` with `mlir::utils::IteratorType::parallel`/`reduction` and to change types from `StringRef` to `mlir::utils::IteratorType`; a sketch follows below.
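A minimal sketch of that replacement (the include path is a best guess; the surrounding builder code is omitted):

```cpp
#include "llvm/ADT/SmallVector.h"
#include "mlir/Dialect/Utils/StructuredOpsUtils.h"

// Before: iterator types were strings obtained from
// getParallelIteratorTypeName() / getReductionIteratorTypeName().
// After: they are values of the shared enum.
llvm::SmallVector<mlir::utils::IteratorType> iteratorTypes = {
    mlir::utils::IteratorType::parallel,
    mlir::utils::IteratorType::reduction};
```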
Due to limitations of tablegen, the shared C++ definition of the IteratorType enum lives in StructuredOpsUtils.td, but each dialect should have its own EnumAttr wrapper. To avoid conflicts, all enums in a dialect are put into a separate file with a separate tablegen rule.
Test dialect td files are refactored a bit.
Printed format of `linalg.generic` temporarily remains unchanged to avoid breaking code and tests in the same change.
Differential Revision: https://reviews.llvm.org/D137658
Those methods were added a long time ago. Now we get the same methods generated by tablegen, so there is no need for duplicates.
Differential Revision: https://reviews.llvm.org/D137544
- argument name 'isLastOutput' in comment does not match parameter name
'hasOutput'.
- override is redundant since the function is already declared 'final'.
When a value used in the forOp is defined outside the region but within
the parent warpOp, we need to return and distribute the value to pass it
to new operations created within the loop.
Also simplify the lambda interface.
Differential Revision: https://reviews.llvm.org/D137146
This patch introduces the lowering for xfer ops masked with `vector.mask`.
Vector reductions are not lowered yet because new LLVM intrinsics are needed
in the LLVM dialect.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D136741
The MaskingOpInterface provides masking capabilities to the operations
that implement it. For now, it is only implemented by the `vector.mask`
operation, and it is used to break the dependency between the Vector
dialect (where the `vector.mask` op lives) and operations implementing
the MaskableOpInterface.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D136734
The foldMemRefCast method is defined in the memref namespace; the
foldTensorCast method is defined in the tensor namespace. This revision
deletes the duplicated code and uses these unified methods.
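For illustration, typical usage of such a unified helper inside an op's fold hook looks roughly like this (`MyMemRefOp` is hypothetical and assumed to have a single result):

```cpp
#include "mlir/Dialect/MemRef/IR/MemRef.h"

// Fold away producing memref.cast ops into this op's operands; if anything
// changed, return the op's own result to signal an in-place fold.
mlir::OpFoldResult MyMemRefOp::fold(llvm::ArrayRef<mlir::Attribute> operands) {
  if (mlir::succeeded(mlir::memref::foldMemRefCast(*this)))
    return getResult();
  return {};
}
```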
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D136379
`vector.contract` is being lowered to the default mul/add contraction
regardless of the kind indicated. Stop the lowering completely in
this case until the correct one can be implemented.
Reviewed By: springerm, ThomasRaoux
Differential Revision: https://reviews.llvm.org/D136079