Along the way, make the default pattern fail instead of crashing
when an elementwise op is not supported yet.
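A minimal sketch of the new failure path inside a conversion pattern;
`isSupported` is a hypothetical predicate standing in for the real
capability check:
```
#include "mlir/Transforms/DialectConversion.h"

// Hypothetical capability check used only for illustration.
static bool isSupported(mlir::Operation *) { return false; }

static mlir::LogicalResult
lowerElementwise(mlir::Operation *op,
                 mlir::ConversionPatternRewriter &rewriter) {
  // Report a match failure instead of asserting/crashing on unsupported ops.
  if (!isSupported(op))
    return rewriter.notifyMatchFailure(op, "unsupported elementwise op");
  // ... the actual lowering would go here ...
  return mlir::success();
}
```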
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D139280
Add support for loading, computing, and storing `gpu.subgroup` WMMA ops
in transpose mode as well. Update the GPU to NVVM lowerings to support
`transpose` mode, and update the integration tests accordingly.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D139021
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
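A before/after sketch of the mechanical change, shown here with plain
std::optional for illustration (the real call sites use llvm::Optional,
which is being aligned with the std API):
```
#include <optional>

std::optional<int> findWidth(bool known) {
  if (!known)
    return std::nullopt; // previously: return None;
  return 42;
}
```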
This commit extends the `ResourceLimitsAttr` to support specifying
a minimum and a maximum subgroup size, and extends `EntryPointABIAttr`
to support specifying the requested subgroup size. This is now
possible in Vulkan with the VK_EXT_subgroup_size_control extension.
For OpenCL, it's possible to use the `SubgroupSize` execution mode
directly.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D138962
Enables transposed gpu.subgroup_mma_load_matrix and updates the lowerings in Vector to GPU and GPU to SPIRV. This is needed to enable lowering matmuls with a transposed B operand to WMMA ops.
Taken over from author: stanley-nod <stanley@nod-labs.com>
Reviewed By: ThomasRaoux, antiagainst
Differential Revision: https://reviews.llvm.org/D138770
This allows for incrementally updating the old API usages without
needing to update everything at once. These will be left on `Both`
for a little bit and then flipped to prefixed when all APIs have been
updated.
Differential Revision: https://reviews.llvm.org/D134386
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.
Reviewed By: mehdi_amini, rriddle
Differential Revision: https://reviews.llvm.org/D132838
Made the passes converting ops from other dialects to SPIR-V into
generic `OperationPass`es, so that a downstream compiler can put them
in a properly nested pass manager to lower device code only.
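A minimal sketch of the downstream usage this enables, assuming a
`gpu.module`-based flow (the chosen pass and header paths are
illustrative and vary by version):
```
#include "mlir/Conversion/Passes.h"
#include "mlir/Dialect/GPU/IR/GPUDialect.h" // header path varies by version
#include "mlir/Pass/PassManager.h"

void buildDeviceOnlyPipeline(mlir::PassManager &pm) {
  // Nest the dialect-to-SPIR-V conversions under gpu.module so that only
  // device code is lowered; host code outside the module is untouched.
  mlir::OpPassManager &devicePm = pm.nest<mlir::gpu::GPUModuleOp>();
  devicePm.addPass(mlir::createConvertMemRefToSPIRVPass()); // illustrative
}
```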
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D131591
This makes it easier to use as a utility function for querying the
mappings, including in the reverse direction.
This commit also drops some storage classes that aren't needed
for now.
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D131411
This commit moves MemRef memory space to SPIR-V storage class
conversion out of the main SPIR-V type converter. Now the mapping
should happen as a preliminary step before performing the final
conversion to SPIR-V. Flows are expected to write their own memory
space mappings, like the `MapMemRefStorageClassPass`, to handle
memory spaces according to their needs.
This is needed because SPIR-V is serving multiple client APIs,
including Vulkan and OpenCL. Different client APIs might want
to use different storage classes for buffers in a particular
memory space, e.g., `StorageBuffer` for Vulkan vs. `CrossWorkgroup`
for OpenCL when converting the default 0 memory space. Hardcoding
a specific mapping makes that hard. While it's possible to embed
selection logic further inside the main type converter, it would
make the main type converter even more complicated. So it's better to
separate the concerns, as mapping the memory space is really
concretizing the meaning of those numeric memory spaces in the
particular context of SPIR-V lowering.
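A hedged sketch of what a flow-specific mapping can look like; the
function below is illustrative, not the exact `MapMemRefStorageClassPass`
API:
```
#include "mlir/Dialect/SPIRV/IR/SPIRVEnums.h" // header path varies by version
#include <optional>

// Illustrative only: a Vulkan flow maps the default 0 memory space to
// StorageBuffer, where an OpenCL flow would choose CrossWorkgroup.
std::optional<mlir::spirv::StorageClass>
mapDefaultSpaceForVulkan(unsigned memorySpace) {
  if (memorySpace == 0)
    return mlir::spirv::StorageClass::StorageBuffer;
  return std::nullopt; // other spaces are the flow's decision
}
```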
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D131410
This removes any potential confusion with the `getType` accessors
which correspond to SSA results of an operation, and makes it
clear what the intent is (i.e. to represent the type of the function).
Differential Revision: https://reviews.llvm.org/D121762
* It isn't required by OpenCL/Intel Level Zero and can be set programmatically.
* Add GPU to SPIR-V lowering for the case when the attribute is not present.
* Set a higher benefit on the WorkGroupSizeConversion pattern so that lowering
from the attribute is always tried first (see the sketch below).
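A sketch of how the higher benefit is attached, using a stand-in pattern
(the real `WorkGroupSizeConversion` lives in the GPUToSPIRV
implementation; the matched op name and the exact benefit value are
assumptions):
```
#include "mlir/IR/PatternMatch.h"

// Stand-in for the real pattern, shown only to illustrate PatternBenefit.
struct WorkGroupSizeConversion : public mlir::RewritePattern {
  WorkGroupSizeConversion(mlir::MLIRContext *context)
      : mlir::RewritePattern("gpu.block_dim", /*benefit=*/2, context) {}

  mlir::LogicalResult
  matchAndRewrite(mlir::Operation *op,
                  mlir::PatternRewriter &rewriter) const override {
    return mlir::failure(); // the real logic reads the size from the attribute
  }
};
```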
Differential Revision: https://reviews.llvm.org/D120399
StandardToSPIRV currently contains an assortment of patterns converting from
different dialects to SPIRV. This commit splits up StandardToSPIRV into separate
conversions for each of the dialects involved (some of which already exist).
Differential Revision: https://reviews.llvm.org/D120767
This commit refactors the FunctionLike trait into an interface (FunctionOpInterface).
FunctionLike as it is today is already a pseudo-interface, with many users checking the
presence of the trait and then manually calling into functionality implemented in the
function_like_impl namespace. By transitioning to an interface, these accesses are much
cleaner (ideally with no direct calls to the impl namespace outside of the implementation
of the derived function operations, e.g. for parsing/printing utilities).
I've tried to maintain as much compatibility with the current state as possible, while
also trying to clean up as much of the cruft as possible. The general migration plan for
current users of FunctionLike is as follows:
* function_like_impl -> function_interface_impl
Realistically, most users should remove references to functions within this namespace
outside of a very narrow set (e.g. parsing/printing utilities). Calls to the attribute name
accessors should be migrated to the `FunctionOpInterface::` equivalent; almost everything
else should be updated to be driven through an instance of the interface.
* OpTrait::FunctionLike -> FunctionOpInterface
`hasTrait` checks will need to be moved to `isa`, along with the other various Trait vs
Interface API differences (see the sketch after this list).
* populateFunctionLikeTypeConversionPattern -> populateFunctionOpInterfaceTypeConversionPattern
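A minimal sketch of the hasTrait-to-isa migration for user code (the
header location is as of this change and may move later):
```
#include "mlir/IR/FunctionInterfaces.h" // later MLIR versions move this header

void inspect(mlir::Operation *op) {
  // Before: op->hasTrait<mlir::OpTrait::FunctionLike>() plus manual calls
  // into the function_like_impl namespace.
  if (auto fn = llvm::dyn_cast<mlir::FunctionOpInterface>(op)) {
    // After: everything is driven through the interface instance.
    unsigned numArgs = fn.getNumArguments();
    (void)numArgs;
  }
}
```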
Fixes #52917
Differential Revision: https://reviews.llvm.org/D117272
NamedAttribute is currently represented as an std::pair, but this
creates an extremely clunky .first/.second API. This commit
converts it to a class, with better accessors (getName/getValue)
and also opens the door for more convenient API in the future.
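A short sketch of the resulting API, where the pair's .first/.second
become named accessors:
```
#include "mlir/IR/Attributes.h"
#include "llvm/Support/raw_ostream.h"

// Previously: attr.first (name) and attr.second (value).
void printAttr(mlir::NamedAttribute attr) {
  llvm::outs() << attr.getName() << " = " << attr.getValue() << "\n";
}
```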
Differential Revision: https://reviews.llvm.org/D113956
There are several aspects of the API that either aren't easy to use, or make it
deceptively easy to do the wrong thing. The main change of this commit
is to remove all of the `getValue<T>`/`getFlatValue<T>` from ElementsAttr
and instead provide operator[] methods on the ranges returned by
`getValues<T>`. This provides a much more convenient API for the value
ranges. It also removes the easy-to-be-inefficient nature of
getValue/getFlatValue, which under the hood would construct a new range for
the type `T`. Constructing a range is not necessarily cheap in all cases, and
could lead to very poor performance if used within a loop; i.e. if you were to
naively write something like:
```
DenseElementsAttr attr = ...;
for (int i = 0; i < size; ++i) {
  // We are internally rebuilding the APFloat value range on each iteration!!
  APFloat it = attr.getFlatValue<APFloat>(i);
}
```
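With this change, the preferred pattern builds the range once and uses
the new operator[] on it; a sketch mirroring the snippet above:
```
DenseElementsAttr attr = ...;
auto values = attr.getValues<APFloat>();
for (int i = 0; i < size; ++i) {
  // The range is constructed once; operator[] indexes it cheaply.
  APFloat it = values[i];
}
```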
Differential Revision: https://reviews.llvm.org/D113229
Precursor: https://reviews.llvm.org/D110200
Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.
Renamed all instances of operations in the codebase and in tests.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D110797
This has been a TODO for a long time, and it brings about many advantages (namely nice accessors, and less fragile code). The existing overloads that accept ArrayRef are now treated as deprecated and will be removed in a followup (after a small grace period). Most of the upstream MLIR usages have been fixed by this commit, the rest will be handled in a followup.
Differential Revision: https://reviews.llvm.org/D110293
- Fixed symbol insertion into `symNameToModuleMap`. Insertion
needs to happen whether symbols are renamed or not.
- Added check for the VCE triple and avoid dropping it.
- Disabled function deduplication. It requires more careful
  rules; right now it can remove functions that are actually different.
- Added tests for symbol rename listener.
- And some other code/comment cleanups.
Reviewed By: ergawy
Differential Revision: https://reviews.llvm.org/D106886
Split out GPU ops library from GPU transforms. This allows libraries to
depend on GPU Ops without needing/building its transforms.
Differential Revision: https://reviews.llvm.org/D105472
This allows us to remove the `spv.mlir.endmodule` op and
all the code associated with it.
Along the way, tightened the APIs for `spv.module` a bit
by removing some aliases. Now we use `getRegion` to get
the only region, and `getBody` to get the region's only
block.
Reviewed By: mravishankar, hanchung
Differential Revision: https://reviews.llvm.org/D103265
The current design uses a unique entry for each argument/result attribute, with the name of the entry being something like "arg0". This provides for a somewhat sparse design, but ends up being much more expensive (from a runtime perspective) in practice. The design requires building a string every time we look up the dictionary for a specific arg/result, and also requires N attribute lookups when collecting all of the arg/result attribute dictionaries.
This revision restructures the design to instead have an ArrayAttr that contains all of the attribute dictionaries for arguments and another for results. This design reduces the number of attribute name lookups to 1, and allows for O(1) lookup for individual element dictionaries. The major downside is that we can end up with larger memory usage, as the ArrayAttr contains an entry for each element even if that element has no attributes. If the memory usage becomes too problematic, we can experiment with a more sparse structure that still provides a lot of the wins in this revision.
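A hedged sketch contrasting the two lookup schemes; the attribute names
and helpers below are illustrative, not the exact implementation:
```
#include "mlir/IR/Operation.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/Twine.h"

// Old design (illustrative): one named entry per argument, e.g. "arg0";
// every lookup builds a string and searches the op's attribute dictionary.
mlir::DictionaryAttr getArgAttrsOld(mlir::Operation *op, unsigned i) {
  llvm::SmallString<8> name;
  ("arg" + llvm::Twine(i)).toVector(name);
  return op->getAttrOfType<mlir::DictionaryAttr>(name);
}

// New design (illustrative): one ArrayAttr holds all argument dictionaries,
// so this is a single attribute lookup plus O(1) indexing.
mlir::DictionaryAttr getArgAttrsNew(mlir::Operation *op, unsigned i) {
  auto all = op->getAttrOfType<mlir::ArrayAttr>("arg_attrs");
  return all.getValue()[i].cast<mlir::DictionaryAttr>();
}
```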
This dropped the compilation time of a somewhat large TensorFlow model from ~650 seconds to ~400 seconds.
Differential Revision: https://reviews.llvm.org/D102035
This commit adds utility functions for creating a push constant
storage variable and loading values from it.
Along the way, performs some cleanup:
* Deleted `setABIAttrs`, which was just a four-line function
with one user.
* Moved `SPIRVConversionTarget` into the `mlir` namespace,
to be consistent with `SPIRVTypeConverter` and
`LLVMConversionTarget`.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D99725
This doesn't change APIs, this just cleans up the many in-tree uses of these
names to use the new preferred names. We'll keep the old names around for a
couple weeks to help transitions.
Differential Revision: https://reviews.llvm.org/D99127
This updates the codebase to pass the context when creating an instance of
OwningRewritePatternList, and starts removing extraneous MLIRContext
parameters. There are many, many more to be removed.
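Sketch of the updated construction; the pattern list now carries the
context from creation:
```
#include "mlir/IR/PatternMatch.h"

void buildPatterns(mlir::MLIRContext *context) {
  // The context is supplied once here instead of to each insertion call.
  mlir::OwningRewritePatternList patterns(context);
  (void)patterns;
}
```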
Differential Revision: https://reviews.llvm.org/D99028
'getAttrs' has been explicitly marked deprecated. This patch refactors
to use Operation::getAttrs().
Reviewed By: csigg
Differential Revision: https://reviews.llvm.org/D97546
This makes ignoring a result explicit by the user, and helps to prevent accidental errors with dropped results. Marking LogicalResult as nodiscard was always the intention from the beginning, but got lost along the way.
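A tiny sketch of what the annotation enforces; `doWork` is a
hypothetical helper:
```
#include "mlir/Support/LogicalResult.h"

// Hypothetical helper used only for illustration.
static mlir::LogicalResult doWork() { return mlir::success(); }

void caller() {
  (void)doWork(); // explicit discard; a bare `doWork();` now warns
}
```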
Differential Revision: https://reviews.llvm.org/D95841
The dialect conversion framework was enhanced to handle type
conversion automatically. OpConversionPattern already contains
a pointer to the TypeConverter. There is no need to duplicate it
in a separate subclass. This removes the only reason for a
SPIRVOpLowering subclass. The code is adapted to use the core
infrastructure, which simplifies it.
Also added a utility function to OpConversionPattern for getting
TypeConverter as a certain subclass.
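A minimal sketch of a conversion pattern using the new utility; the
matched op name and the trivial rewrite are placeholders, and header
paths vary by version:
```
#include "mlir/Dialect/SPIRV/Transforms/SPIRVConversion.h" // path varies
#include "mlir/Transforms/DialectConversion.h"

struct SomeOpPattern : public mlir::ConversionPattern {
  SomeOpPattern(mlir::SPIRVTypeConverter &converter,
                mlir::MLIRContext *context)
      : mlir::ConversionPattern(converter, "some.op", /*benefit=*/1,
                                context) {}

  mlir::LogicalResult
  matchAndRewrite(mlir::Operation *op, llvm::ArrayRef<mlir::Value> operands,
                  mlir::ConversionPatternRewriter &rewriter) const override {
    // New utility: fetch the TypeConverter as a specific subclass.
    auto *converter = getTypeConverter<mlir::SPIRVTypeConverter>();
    if (!converter)
      return mlir::failure();
    rewriter.eraseOp(op); // placeholder "lowering"
    return mlir::success();
  }
};
```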
Reviewed By: hanchung
Differential Revision: https://reviews.llvm.org/D94080