This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
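As a minimal illustration in plain standard C++ (the function below is made up
for the example; the LLVM code likewise swaps llvm::Optional for std::optional):

  #include <optional>

  // Before the migration this would return llvm::Optional<int> and use None.
  std::optional<int> getKnownWidth(bool known) {
    if (!known)
      return std::nullopt;  // mechanically replaces `return None;`
    return 32;
  }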
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
The only current option for creating a supervectorization pass from an
external dialect is to use `createSuperVectorizePass` with the virtual
vector dimensions as a parameter, even though the pass accepts other parameters.
This patch enables external users to create a supervectorizer pass that
exposes all available options.
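A hypothetical usage sketch; the options struct and its field names below are
assumptions modeled on the pass's command-line flags, not the verified API:

  #include "mlir/Dialect/Affine/Passes.h"
  #include "mlir/Pass/PassManager.h"

  // `pm` is expected to be nested on the function op the pass runs on.
  void addSuperVectorize(mlir::OpPassManager &pm) {
    mlir::AffineVectorizeOptions options;  // assumed autogenerated options struct
    options.vectorSizes = {8};             // virtual vector shape
    options.vectorizeReductions = true;    // also vectorize reduction loops
    pm.addPass(mlir::createSuperVectorizePass(options));
  }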
Differential Revision: https://reviews.llvm.org/D134632
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.
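For reference, a structural sketch of the autogenerated-definition style this
enables; `MyDialect` and `MyPass` are placeholder names, and the snippet needs
the tablegen-generated include to compile:

  // In the pass implementation file:
  namespace mlir {
  #define GEN_PASS_DEF_MYPASS
  #include "MyDialect/Passes.h.inc"  // tablegen-generated base class and options
  } // namespace mlir

  namespace {
  struct MyPass : public mlir::impl::MyPassBase<MyPass> {
    using MyPassBase::MyPassBase;  // inherit the generated constructors
    void runOnOperation() override {
      // ... pass body ...
    }
  };
  } // namespace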
Reviewed By: mehdi_amini, rriddle
Differential Revision: https://reviews.llvm.org/D132838
Insert the select op before the combiner op when vectorizing a
reduction loop that needs a mask, so the vectorized reduction loop
can pass the isLoopParallel check and be transformed correctly in later
passes.
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D124047
This provides a way to create an operation without manipulating
OperationState directly. This is useful for creating unregistered ops.
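A sketch of what this enables, assuming a context that allows unregistered
dialects; the exact parameter list and defaults of the new overload may differ
slightly from what is shown:

  #include "mlir/IR/Builders.h"

  using namespace mlir;

  Operation *createUnregisteredOp(OpBuilder &b, Location loc, Value input) {
    b.getContext()->allowUnregisteredDialects();
    // Build an op directly from its name, operands and result types,
    // without filling in an OperationState by hand.
    return b.create(loc, b.getStringAttr("mydialect.my_op"),
                    /*operands=*/input,
                    /*types=*/TypeRange{b.getF32Type()});
  }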
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D120787
This reduces the dependencies of the MLIRVector target and makes the dialect consistent with other dialects.
Differential Revision: https://reviews.llvm.org/D118533
The only benefit of FunctionPass is that it filters out function
declarations. This isn't enough to justify carrying it around, as we can
simply filter out declarations when necessary within the pass. We can
also explore better scheduling primitives to filter out declarations
at the pipeline level in the future.
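For example, a pass that previously used FunctionPass can skip declarations
itself (sketch only; `MyFuncPass` is a placeholder, and FuncOp has since moved
into the func dialect):

  #include "mlir/IR/BuiltinOps.h"
  #include "mlir/Pass/Pass.h"

  struct MyFuncPass
      : public mlir::PassWrapper<MyFuncPass, mlir::OperationPass<mlir::FuncOp>> {
    void runOnOperation() override {
      mlir::FuncOp fn = getOperation();
      // Skip declarations (functions without a body), which FunctionPass
      // used to filter out automatically.
      if (fn.isExternal())
        return;
      // ... transform the function body ...
    }
  };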
The definition of FunctionPass is left intact for now to allow time for downstream
users to migrate.
Differential Revision: https://reviews.llvm.org/D117182
The current state of the top-level Analysis/ directory is that it contains two libraries:
a generic Analysis library (free from dialect dependencies), and a LoopAnalysis library
that contains various analysis utilities that originated from Affine loop transformations.
This commit moves the LoopAnalysis to the more appropriate home of `Dialect/Affine/Analysis/`,
given the use and intention of the majority of the code within it. After the move, if there
are generic utilities that would fit better in the top-level Analysis/ directory, we can move
them.
Differential Revision: https://reviews.llvm.org/D117351
When constructing an OperationName, the overwhelming majority of
cases are from registered operations. This revision adds a non-locked
lookup into the currently registered operations, which prevents locking
in the common case. This revision also optimizes several uses of
RegisteredOperationName that expect the operation to be registered,
such as in OpBuilder.
These changes provide a reasonable speedup (5-10%) in some
compilations, especially on platforms where locking is expensive.
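A small sketch of the fast path this enables ("arith.addi" is just an example
name, and the exact return type has varied between llvm::Optional and
std::optional across versions):

  #include "mlir/IR/OperationSupport.h"

  using namespace mlir;

  void lookupRegistered(MLIRContext *ctx) {
    // For a registered op, the lookup resolves against the table of
    // registered operations without taking the context lock.
    if (auto opName = RegisteredOperationName::lookup("arith.addi", ctx))
      (void)opName->getDialect();
  }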
Differential Revision: https://reviews.llvm.org/D117187
Changed the remaining appearances of alloc to memref.alloc in several
documentation sections, since using the former leads to misunderstandings.
Differential Revision: https://reviews.llvm.org/D116999
Per the discussion in https://reviews.llvm.org/D116345 it makes sense
to move AtomicRMWOp out of the standard dialect. This was accentuated by the
need to add a folder involving a memref::cast. The only dialect
that would permit this is the memref dialect (keeping it in the standard dialect
or moving it to the arithmetic dialect would require those dialects to have a
dependency on the memref dialect, which breaks linking).
As the AtomicRMWKind enum is used throughout, this has been moved to Arith.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D116392
Precursor: https://reviews.llvm.org/D110200
Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.
Renamed all instances of operations in the codebase and in tests.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D110797
1. Add support for vectorizing induction variables of loops that are
not mapped to any vector dimension in the SuperVectorize pass.
2. Fix a bug in getForInductionVarOwner.
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D111370
When we vectorize a scalar constant, the vector constant is inserted before its
first user if the scalar constant is defined outside the loops to be vectorized.
It is possible that the vector constant does not dominate all its users. To fix
the problem, we find the innermost vectorized loop that encloses that first user
and insert the vector constant at the top of the loop body.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D106609
This patch adds support for vectorizing loops with 'iter_args'
implementing known reductions along the vector dimension. Comparing to
the non-vector-dimension case, two additional things are done during
vectorization of such loops:
- The resulting vector returned from the loop is reduced to a scalar
using `vector.reduce`.
- In some cases a mask is applied to the vector yielded at the end of
the loop to prevent garbage values from being written to the
accumulator.
Vectorization of reduction loops is disabled by default. To enable it, a
map from loops to arrays of reduction descriptors should be explicitly passed to
`vectorizeAffineLoops`, or `vectorize-reductions=true` should be passed
to the SuperVectorize pass.
Current limitations:
- Loops with a non-unit step size are not supported.
- n-D vectorization with n > 1 is not supported.
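A hedged sketch of driving the utility with reduction descriptors; the header
locations and the ReductionLoopMap/LoopReduction spellings have moved around
over time, so treat them as assumptions:

  #include "mlir/Dialect/Affine/Analysis/AffineAnalysis.h"  // indicative locations
  #include "mlir/Dialect/Affine/Utils.h"

  using namespace mlir;

  void vectorizeReductionLoop(Operation *parentOp, AffineForOp loop,
                              ArrayRef<LoopReduction> descriptors) {
    DenseSet<Operation *> loops;
    loops.insert(loop.getOperation());
    // Map the reduction loop to its reduction descriptors.
    ReductionLoopMap reductionLoops;
    reductionLoops[loop.getOperation()].append(descriptors.begin(),
                                               descriptors.end());
    // 1-D vectorization with a 128-wide virtual vector.
    vectorizeAffineLoops(parentOp, loops, /*vectorSizes=*/{128},
                         /*fastestVaryingPattern=*/{}, reductionLoops);
  }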
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D100694
Introduce a basic support for parallelizing affine loops with reductions
expressed using iteration arguments. Affine parallelism detector now has a flag
to assume such reductions are parallel. The transformation handles a subset of
parallel reductions that can be expressed using affine.parallel:
integer/float addition and multiplication. This requires detecting the
reduction operation since affine.parallel only supports a fixed set of
reduction operators.
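A minimal sketch of the detection step; the analysis entry point shown takes an
optional output list of reduction descriptors, and the header location is
indicative rather than exact:

  #include "mlir/Dialect/Affine/Analysis/Utils.h"  // indicative location

  using namespace mlir;

  bool collectParallelReductions(AffineForOp forOp,
                                 SmallVectorImpl<LoopReduction> &reductions) {
    // With the output list supplied, loops whose only loop-carried values are
    // supported reductions (integer/float add or mul) are reported as
    // parallel, and `reductions` describes each reduction's kind and value.
    return isLoopParallel(forOp, &reductions);
  }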
Reviewed By: chelini, kumasento, bondhugula
Differential Revision: https://reviews.llvm.org/D101171
This patch adds support for vectorizing loops with 'iter_args' when those loops
are not a vector dimension. This allows vectorizing outer loops with an inner
'iter_args' loop (e.g., reductions). Vectorizing scenarios where 'iter_args'
loops are vector dimensions would require more work (e.g., analysis,
generating horizontal reduction, etc.) not included in this patch.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D97892
This patch replaces the root-terminal vectorization approach implemented in the
Affine vectorizer with a topological order approach that vectorizes all the
operations within the target loop nest. These are the most important changes
introduced by the new algorithm:
* Removed tracking of root and terminal ops. Existing vectorization
functionality is preserved and extended so that loop nests without
root-terminal chains can be vectorized.
* Vectorizing a loop nest now only requires a single topological traversal.
* A new vector loop nest is incrementally built along the vectorization
process. The original scalar loop is kept intact. No cloning guard is needed
to recover the scalar loop if vectorization fails. This approach also
simplifies the challenging task of replacing a loop operation amid the
vectorization process without invalidating the analysis information that
depends on the original loop.
* Vectorization of specific operations has been implemented as independent
  functions, preparing them to be moved to a potential vectorization interface
  (a sketch of the traversal is shown below).
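A simplified sketch of the traversal; `VectorizationState` and
`vectorizeOneOperation` stand in for the vectorizer's internal state and
per-op hook:

  #include "mlir/Dialect/Affine/IR/AffineOps.h"
  #include "mlir/Support/LogicalResult.h"

  using namespace mlir;

  // Placeholders for the vectorizer's internal state and per-op hook.
  struct VectorizationState;
  LogicalResult vectorizeOneOperation(Operation *op, VectorizationState &state);

  // Walk the target loop nest in pre-order, which visits each op after the
  // ops defining its operands, vectorizing one op at a time. The new vector
  // nest is built incrementally; the scalar nest is left untouched.
  LogicalResult vectorizeNestSketch(AffineForOp rootLoop,
                                    VectorizationState &state) {
    WalkResult result =
        rootLoop.walk<WalkOrder::PreOrder>([&](Operation *op) {
          if (failed(vectorizeOneOperation(op, state)))
            return WalkResult::interrupt();
          return WalkResult::advance();
        });
    return failure(result.wasInterrupted());
  }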
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D97442
This patch fixes a heap-use-after-free introduced by the recent changes
in the vectorizer: https://reviews.llvm.org/rG95db7b4aeaad590f37720898e339a6d54313422f
The problem is due to the way candidate loops are visited. All candidate loops
are pattern-matched beforehand using the 'NestedMatch' utility. These matches may
intersect with each other so it may happen that we try to vectorize a loop that
was previously vectorized. The new vectorization algorithm replaces the original
loops that are vectorized with new loops and, therefore, any reference to the
original loops in the pre-computed matches becomes invalid.
This patch fixes the problem by classifying the candidate matches into buckets
before vectorization. Each bucket contains all the matches that intersect. The
vectorizer uses these buckets to make sure that we vectorize at most *one*
match from each bucket.
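In pseudocode-like C++, the bucketing looks roughly like this; `intersects` and
`vectorizeMatch` are placeholders for the actual NestedMatch intersection test
and vectorization driver:

  #include <vector>
  #include "llvm/ADT/STLExtras.h"
  #include "mlir/Dialect/Affine/Analysis/NestedMatcher.h"  // indicative location
  #include "mlir/Support/LogicalResult.h"

  using namespace mlir;

  // Placeholders for the actual intersection test and vectorization driver.
  bool intersects(const NestedMatch &a, const NestedMatch &b);
  LogicalResult vectorizeMatch(NestedMatch &match);

  void vectorizeCandidates(llvm::MutableArrayRef<NestedMatch> candidateMatches) {
    std::vector<std::vector<NestedMatch>> buckets;
    for (NestedMatch &match : candidateMatches) {
      // Place the match into the first bucket it intersects, if any.
      auto bucketIt = llvm::find_if(buckets, [&](auto &bucket) {
        return llvm::any_of(bucket, [&](NestedMatch &other) {
          return intersects(match, other);
        });
      });
      if (bucketIt != buckets.end())
        bucketIt->push_back(match);
      else
        buckets.push_back({match});
    }
    // Vectorize at most one match per bucket: stop at the first success.
    for (auto &bucket : buckets)
      for (NestedMatch &match : bucket)
        if (succeeded(vectorizeMatch(match)))
          break;
  }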
Differential Revision: https://reviews.llvm.org/D98382
This makes ignoring a result explicit for the user, and helps to prevent accidental errors with dropped results. Marking LogicalResult as nodiscard was always the intention from the beginning, but got lost along the way.
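A minimal example of what this means for callers (`doTransform` is a
placeholder for any LogicalResult-returning API):

  #include "mlir/Support/LogicalResult.h"

  using namespace mlir;

  LogicalResult doTransform();  // placeholder

  void caller() {
    // The result must be inspected...
    if (failed(doTransform()))
      return;
    // ...or discarded explicitly; silently dropping it now warns.
    (void)doTransform();
  }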
Differential Revision: https://reviews.llvm.org/D95841
Given that OpState already implicitly converts to Operation*, this seems reasonable.
The alternative would be to add more functions to OpState which forward to Operation.
Reviewed By: rriddle, ftynse
Differential Revision: https://reviews.llvm.org/D92266
Adding missing code that should have been part of "D85869: Utility to
vectorize loop nest using strategy."
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D88346
This patch adds a utility based on SuperVectorizer to vectorize an
affine loop nest using a given vectorization strategy. This strategy allows
targeting specific loops for vectorization instead of relying on the
SuperVectorizer analysis to choose the right loops to vectorize.
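A hedged usage sketch; the strategy field names and the exact layout expected
for the nest argument are assumptions rather than verified API:

  #include "mlir/Dialect/Affine/Utils.h"  // indicative header location

  using namespace mlir;

  LogicalResult vectorizeNest(AffineForOp rootLoop) {
    VectorizationStrategy strategy;
    strategy.vectorSizes.push_back(256);                    // 1-D, 256-wide
    strategy.loopToVectorDim[rootLoop.getOperation()] = 0;  // loop -> vector dim 0
    std::vector<SmallVector<AffineForOp, 2>> nest = {{rootLoop}};
    return vectorizeAffineLoopNest(nest, strategy);
  }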
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D85869
Rename 'setInsertionPointAfter(Value)' API to avoid ambiguity with
'setInsertionPointAfter(Operation *)' for SingleResult operations which
implicitly convert to Value (see D86756).
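After the rename, the two cases read as follows (the Value flavor is now
spelled setInsertionPointAfterValue):

  #include "mlir/IR/Builders.h"

  using namespace mlir;

  void positionBuilder(OpBuilder &builder, Value v, Operation *op) {
    // Insert right after the point where `v` becomes available.
    builder.setInsertionPointAfterValue(v);  // was setInsertionPointAfter(Value)
    // Insert right after `op`; no longer ambiguous for single-result ops.
    builder.setInsertionPointAfter(op);
  }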
Differential Revision: https://reviews.llvm.org/D87155
This patch adds basic support for vectorization of uniform values to the SuperVectorizer.
For now, only values that are invariant to the target vector loops are considered uniform. This
enables the vectorization of loops that use function arguments and external definitions
to the vector loops. We could extend uniform support in the future if we implement some
kind of divergence analysis algorithm.
Reviewed By: nicolasvasilache, aartbik
Differential Revision: https://reviews.llvm.org/D86756
This patch refactors a small part of the Super Vectorizer code to
a utility so that it can be used independently from the pass. This
aligns vectorization with other utilities that we already have for loop
transformations, such as fusion, interchange, tiling, etc.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D84289
The semantics of vector transfer ops is extended to allow specifying a per-dimension `masked`
attribute. When the attribute is false on a particular dimension, lowering to LLVM emits
unmasked load and store operations.
Differential Revision: https://reviews.llvm.org/D80098