Replace void* with std::byte* in AsyncRuntime to make it clear that these pointers point to a memory region.
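A minimal sketch of the kind of signature this change implies (the function here is hypothetical, not an actual AsyncRuntime entry point):

```cpp
#include <cstddef> // std::byte, std::size_t

// Before: the pointee type says nothing about what is pointed to.
//   extern "C" void hypotheticalCopy(void *dst, void *src, std::size_t size);
//
// After: std::byte* documents that these are raw memory regions rather than
// pointers to objects of some erased type.
extern "C" void hypotheticalCopy(std::byte *dst, const std::byte *src,
                                 std::size_t size) {
  for (std::size_t i = 0; i < size; ++i)
    dst[i] = src[i];
}
```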
Reviewed By: ezhulenev
Differential Revision: https://reviews.llvm.org/D140428
This is a fairly large changeset, but it can be broken into a few
pieces:
- `llvm/Support/*TargetParser*` are all moved from the LLVM Support
component into a new LLVM Component called "TargetParser". This
potentially enables using tablegen to maintain this information, as
is shown in https://reviews.llvm.org/D137517. This cannot currently
be done, as llvm-tblgen relies on LLVM's Support component.
- This also moves two files from Support which use and depend on
information in the TargetParser:
- `llvm/Support/Host.{h,cpp}`, which contains functions for inspecting
  the current host machine, primarily to get the host triple, but also
  to support `-mcpu=native` in e.g. Clang. This is fairly tightly
  intertwined with the information in `X86TargetParser.h`, so keeping
  them in the same component makes sense.
- `llvm/ADT/Triple.h` and `llvm/Support/Triple.cpp`, which contain
  the target triple parser and representation. This is very intertwined
  with the Arm target parser, because the Arm architecture version
  appears in canonical triples on Arm platforms.
- I moved the relevant unittests to their own directory.
And so, we end up with a single component that has all the information
about the following, which to me seems like a unified component:
- Triples that LLVM knows about
- Architecture names and CPUs that LLVM knows about
- CPU detection logic for LLVM
Given this, I have also moved `RISCVISAInfo.h` into this component, as
it seems to me to be part of that same set of functionality.
If you get link errors in your components after this patch, you likely
need to add TargetParser into LLVM_LINK_COMPONENTS in CMake.
Differential Revision: https://reviews.llvm.org/D137838
Add printMemref for complex data types and the index type. Add printMemref for
1-D memrefs with element types beyond f32.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D139475
The bytecode reader currently has no mechanism that allows for directly referencing
data from the input buffer safely. This commit adds shared_ptr<SourceMgr> overloads
that provide an explicit and safe way of extending the lifetime of the input. These new
overloads are adopted in all of our tooling and are used implicitly by the filename-only
parser methods.
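A rough sketch of how a caller might use the new overloads; treat the exact signatures as an assumption and check mlir/Parser/Parser.h for the real declarations:

```cpp
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Parser/Parser.h"
#include "mlir/Support/FileUtilities.h"
#include "llvm/Support/SourceMgr.h"
#include <memory>
#include <string>

static mlir::OwningOpRef<mlir::ModuleOp>
loadModule(llvm::StringRef filename, mlir::MLIRContext &ctx) {
  std::string errorMessage;
  std::unique_ptr<llvm::MemoryBuffer> file =
      mlir::openInputFile(filename, &errorMessage);
  if (!file)
    return nullptr;
  // Owning the SourceMgr via shared_ptr lets the parsed IR extend the
  // lifetime of the input buffer instead of copying out of it.
  auto sourceMgr = std::make_shared<llvm::SourceMgr>();
  sourceMgr->AddNewSourceBuffer(std::move(file), llvm::SMLoc());
  return mlir::parseSourceFile<mlir::ModuleOp>(sourceMgr,
                                               mlir::ParserConfig(&ctx));
}
```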
Differential Revision: https://reviews.llvm.org/D139366
This change cleans up the conversion pass regarding the "dim"-vs-"lvl" and "sizes"-vs-"shape" distinctions of the runtime. A quick synopsis:
* Adds a new `SparseTensorStorageBase::getDimSize` method, with a `sparseDimSize` wrapper in SparseTensorRuntime.h, and a `genDimSizeCall` generator in SparseTensorConversion.cpp
* Changes `genLvlSizeCall` to perform no logic, just generate the function call.
* Adds `createOrFold{Dim,Lvl}Call` functions to handle the logic of replacing `gen{Dim,Lvl}SizeCall` with constants whenever possible (see the sketch after this list). The `createOrFoldDimCall` function replaces the old `sizeFromPtrAtDim`.
* Adds `{get,fill}DimSizes` functions for iterating `createOrFoldDimCall` across the whole type. These functions replace the old `sizesFromPtr`.
* Adds `{get,fill}DimShape` functions for lowering a `ShapedType` into constants. These functions replace the old `sizesFromType`.
* Changes the `DimOp` rewrite to do the right thing.
* Changes the `ExpandOp` rewrite to compute the proper expansion size.
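To illustrate the fold-to-constant behavior of `createOrFoldDimCall`, here is a standalone model of the logic (hypothetical names; this is not the actual MLIR rewriter code, which builds IR rather than computing values directly):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical stand-in for ShapedType::kDynamic.
constexpr int64_t kDynamicSize = -1;

// Models the createOrFoldDimCall decision: use the static shape when it is
// known, and fall back to a runtime query (the generated sparseDimSize call)
// only for dynamic dimensions.
int64_t createOrFoldDimSize(
    const std::vector<int64_t> &staticShape, uint64_t dim,
    const std::function<int64_t(uint64_t)> &runtimeDimSize) {
  assert(dim < staticShape.size() && "dimension out of bounds");
  if (staticShape[dim] != kDynamicSize)
    return staticShape[dim]; // fold: size is known at compile time
  return runtimeDimSize(dim); // otherwise: emit/execute the runtime call
}
```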
Depends On D138365
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D139165
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
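For illustration, the mechanical substitution amounts to this kind of change (a standalone snippet, not code taken from the patch):

```cpp
#include <optional>

std::optional<int> findWidth(bool haveWidth) {
  // Before the migration this would have been written with llvm::Optional:
  //   if (!haveWidth) return None;
  if (!haveWidth)
    return std::nullopt;
  return 42;
}
```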
This commit updates how the `SparseTensorConversion` pass handles `NewOp`. It breaks up the underlying `openSparseTensor` function into two parts (`SparseTensorReader::create` and `SparseTensorReader::readSparseTensor`) so that the pass can inject code for constructing `lvlSizes` between those two parts. Migrating the construction of `lvlSizes` out of the runtime and into the pass is a necessary first step toward fully supporting non-permutations. (The alternative would be for the pass to generate a `FuncOp` for performing the construction and then passing that to the runtime; which doesn't seem to have any benefits over the design of this commit.) And since the pass now generates the code to call these two functions, this change also removes the `Action::kFromFile` value from the enum used by `_mlir_ciface_newSparseTensor`.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138363
In particular, this silences warnings from [-Wsign-compare].
This is a revised version of D137735, which got reverted due to a sign-comparison warning on LLVM's Windows buildbot (a warning that did not show up on MLIR's Windows buildbot). Differences vs the previous differential:
* `vectorToMemref` now uses `detail::checkOverflowCast` to silence the warning that caused the previous differential to get reverted (sketched below).
* `MEMREF_GET_USIZE` now uses `detail::checkOverflowCast` rather than `static_cast`
* `ASSERT_USIZE_EQ` added to abbreviate another common idiom, and to ensure that we use `detail::safelyEQ` everywhere (to silence a few other warnings)
* A couple of for-loops now use `index_type` for the induction variable, since their upper bound uses that typedef too. (Namely `_mlir_ciface_getSparseTensorReaderDimSizes` and `_mlir_ciface_outSparseTensorWriterNext`)
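A sketch of what such an overflow-checked cast can look like (an illustration of the idea only, not the actual `detail::checkOverflowCast` implementation):

```cpp
#include <cassert>
#include <type_traits>

// Casts `value` to `To`, asserting that the round trip preserves the value
// and that no negative value sneaks into an unsigned type. This localizes
// the signed/unsigned conversion so the surrounding code compiles cleanly
// under -Wsign-compare.
template <typename To, typename From>
inline To checkedOverflowCast(From value) {
  static_assert(std::is_integral_v<From> && std::is_integral_v<To>,
                "integral types required");
  const To result = static_cast<To>(value);
  assert(static_cast<From>(result) == value && "cast changes the value");
  if constexpr (std::is_signed_v<From> && std::is_unsigned_v<To>)
    assert(value >= 0 && "cannot cast a negative value to an unsigned type");
  return result;
}
```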
Depends on D138149
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137998
This is a reposting of D137737, which got reverted when D137735 did. There are no changes other than rebasing.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138000
This patch fixes:
mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:195:30: warning:
cast from type ‘const long unsigned int*’ to type ‘void*’ casts away
qualifiers [-Wcast-qual]
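The general pattern behind this warning and its fix, sketched with hypothetical names (not the actual runtime code):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical callee that only reads the buffer.
static void consumeBuffer(const void *data) { std::printf("%p\n", data); }

static void passDimSizes(const uint64_t *dimSizes) {
  // consumeBuffer((void *)dimSizes);                  // -Wcast-qual: drops const
  consumeBuffer(static_cast<const void *>(dimSizes)); // keeps the qualifier
}
```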
This patch fixes:
mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:296:31: error:
comparison of integers of different signs: 'int64_t' (aka 'long')
and 'const uint64_t' (aka 'const unsigned long')
[-Werror,-Wsign-compare]
mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:297:67: error:
comparison of integers of different signs: 'int64_t' (aka 'long')
and 'const uint64_t' (aka 'const unsigned long')
[-Werror,-Wsign-compare]
mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:298:31: error:
comparison of integers of different signs: 'int64_t' (aka 'long') and
'const uint64_t' (aka 'const unsigned long') [-Werror,-Wsign-compare]
mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:479:30: error:
comparison of integers of different signs: 'int64_t' (aka 'long')
and 'const uint64_t' (aka 'const unsigned long')
[-Werror,-Wsign-compare]
In particular, this silences warnings from [-Wsign-compare].
Depends On D137681
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137735
Systematically updates the SparseTensorRuntime to properly distinguish tensor-dimensions from storage-levels (and their associated ranks, shapes, sizes, indices, etc). With a few exceptions which are noted in the code, this ensures the runtime has all the **semantic** changes necessary to support non-permutations.
(Whereas **operationally**, since we're still using `std::vector<uint64_t>` to represent the mappings, there's no way to pass in any interesting non-permutations. Changing the representation to `std::function` will be done in a separate differential.)
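As a standalone illustration of the dimension-vs-level bookkeeping (a hypothetical helper, not the runtime's actual code), computing per-level sizes from a level-to-dimension permutation might look like:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Given the tensor's dimension sizes and a permutation lvl2dim (level l is
// stored along dimension lvl2dim[l]), compute the per-level sizes.
std::vector<uint64_t> getLvlSizes(const std::vector<uint64_t> &dimSizes,
                                  const std::vector<uint64_t> &lvl2dim) {
  std::vector<uint64_t> lvlSizes;
  lvlSizes.reserve(lvl2dim.size());
  for (uint64_t d : lvl2dim) {
    assert(d < dimSizes.size() && "level maps to an out-of-bounds dimension");
    lvlSizes.push_back(dimSizes[d]);
  }
  return lvlSizes;
}
```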
Depends On D137680
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137681
This allows the MLIR transformer to see the command line options and
make decisions based on them. No change upstream, but my use-case is to
look at the entry point name and type to make sure I can use them.
Differential Revision: https://reviews.llvm.org/D137861
This change makes sure that ExecutionEngine's pass pipeline is identical to the
one used by clang. Previously, SLPVectorization was not enabled, which caused
differences in code generation.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D137248
Move the SparseTensorEnums library out of the ExecutionEngine directory and into Dialect/SparseTensor/IR.
Depends On D136002
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D136005
This differential splits the SparseTensorEnums library out from the SparseTensorRuntime library. The actual moving of files will be handled in the next differential.
Depends On D135996
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D136002
This change makes way for reusing the DimLevelType enum in lieu of the SparseTensorEncodingAttr::DimLevelType enum; it is broken out on its own to make it quick and easy to review.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135995
Use the routine for openSparseTensorCOO and getSparseTensorReaderNext.
Reviewed By: aartbik, wrengr
Differential Revision: https://reviews.llvm.org/D135732
The "mlir_xxx_utils" naming scheme is reserved/intended for shared libraries, whereas this library must be static due to issues of linking DLLs on Windows. So we rename the library to avoid any potential confusion. In addition we also rename the ExecutionEngine/SparseTensorUtils.{h,cpp} files to match the new library name.
Reviewed By: aartbik, stella.stamenova
Differential Revision: https://reviews.llvm.org/D135613
This differential comprises three related changes: (1) it gives SparseTensorCOO standard C++-style iterators; (2) it removes the old iterator stuff from SparseTensorCOO; and (3) it introduces SparseTensorIterator which behaves like the old SparseTensorCOO iterator stuff used to.
The SparseTensorIterator class is needed because the MLIR codegen cannot easily use the C++-style iterators (which is why SparseTensorCOO had the old iterator interface in the first place). Distinguishing SparseTensorIterator from SparseTensorCOO also helps improve API hygiene, since these two classes are used for distinct purposes. And having SparseTensorIterator as its own class enables changing the underlying implementation in the future without needing to worry about updating all the codegen tests etc.
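To make the distinction concrete, here is a rough sketch of the two iteration styles (hypothetical, simplified types; not the actual classes):

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-in for one stored element of a COO tensor.
struct Element {
  std::vector<uint64_t> indices;
  double value;
};

// C++-style iteration, as now supported directly by SparseTensorCOO:
//   for (const Element &e : coo) { ... }
//
// A SparseTensorIterator-like wrapper that re-exposes the old
// "call getNext() until it returns nullptr" protocol for the codegen:
class ElementIterator {
public:
  explicit ElementIterator(const std::vector<Element> &elements)
      : it(elements.begin()), end(elements.end()) {}

  // Returns the next element, or nullptr once the iteration is exhausted.
  const Element *getNext() { return it == end ? nullptr : &*it++; }

private:
  std::vector<Element>::const_iterator it, end;
};
```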
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135485
This differential adjusts the numeric values for DimLevelType values: using the low-order two bits for recording the "No" and "Nu" properties, and the high-order bits for the formats per se. (The choice of encoding may seem a bit peculiar, since the bits are mapped to negative properties rather than positive properties. But this was done in order to preserve the collation order of DimLevelType values. If we don't care about collation order, then we may prefer to flip the semantics of the property bits, so that they're less surprising to readers.)
Using distinguished bits for the properties and formats enables faster implementation for the predicates detecting those properties/formats, which matters because this is in the runtime library itself (rather than on the codegen side of things). This differential pushes through the changes to the enum values, and optimizes the basic predicates. However it does not optimize all the places where we check compound predicates (e.g., "is compressed or singleton"), to help reduce rebasing conflict with D134933. Those optimizations will be done after this differential and D134933 are landed.
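A rough sketch of the kind of encoding and predicates this enables (the enum name and concrete values here are illustrative only; the real DimLevelType encoding may differ):

```cpp
#include <cstdint>

// Illustrative encoding: the low-order two bits record the negative
// properties ("not unique", "not ordered"); the higher bits select the
// format per se.
enum class DemoDimLevelType : uint8_t {
  Dense = 4,              // 0b00100
  Compressed = 8,         // 0b01000
  CompressedNu = 8 | 1,   // 0b01001: compressed, not unique
  CompressedNo = 8 | 2,   // 0b01010: compressed, not ordered
  CompressedNuNo = 8 | 3, // 0b01011
  Singleton = 16,         // 0b10000
};

// With distinguished bits, each predicate becomes a single mask test.
constexpr bool isUnique(DemoDimLevelType t) {
  return !(static_cast<uint8_t>(t) & 1);
}
constexpr bool isOrdered(DemoDimLevelType t) {
  return !(static_cast<uint8_t>(t) & 2);
}
constexpr bool isCompressed(DemoDimLevelType t) {
  return static_cast<uint8_t>(t) & 8;
}
```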
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135004
Handle more cases of singleton DLT including direct sparse2sparse conversion. (Followup to D134096)
Depends On D134926
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D134933
This adds a `--no-implicit-module` option, which disables the insertion
of a top-level `builtin.module` during parsing. The top-level op is
required to have the `SymbolTable` trait.
The majority of the change here is removing `ModuleOp` from interfaces.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D134238
This patch fixes:
mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:1348:31: error: comparison
of integers of different signs: 'size_t' (aka 'unsigned long') and
'int64_t' (aka 'long') [-Werror,-Wsign-compare]
mlir/lib/ExecutionEngine/SparseTensor/File.cpp:110:3: error: default
label in switch which covers all enumeration values
[-Werror,-Wcovered-switch-default]
Now that mlir_sparsetensor_utils is a public library, this differential renames the x-macros to help avoid namespace pollution issues.
Reviewed By: aartbik, Peiming
Differential Revision: https://reviews.llvm.org/D134988
This differential corrects a few minor rebasing errors from the recent slew of differentials for factoring out the mlir_sparsetensor_utils library.
Reviewed By: aartbik, Peiming
Differential Revision: https://reviews.llvm.org/D134985
Followup to D133835 for fixing the warning on LLVM's Windows buildbot.
Reviewed By: aartbik, Peiming, stella.stamenova
Differential Revision: https://reviews.llvm.org/D134925
The previous sorting-based check was O(n*log n). The new implementation is O(n).
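For example (a generic illustration of the technique, not the exact check from this patch), a sort-based validity check over indices can be replaced by a single pass with a "seen" table:

```cpp
#include <cstdint>
#include <vector>

// Returns true iff `perm` is a permutation of 0 .. perm.size()-1.
// One linear pass replaces sorting the values and comparing neighbors.
bool isPermutation(const std::vector<uint64_t> &perm) {
  const uint64_t n = perm.size();
  std::vector<bool> seen(n, false);
  for (uint64_t p : perm) {
    if (p >= n || seen[p])
      return false;
    seen[p] = true;
  }
  return true;
}
```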
This is a followup to the refactoring of D133462, D133830, D133831, and D133833.
Depends On D133833
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D133838
This is a followup to the refactoring of D133462, D133830, D133831, and D133833.
Depends On D133833
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D133837
This is a followup to the refactoring of D133462, D133830, D133831, and D133833.
Depends On D133833
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D133835
Previously, the SparseTensorUtils.cpp library contained a C++ core implementation, but hid it in an anonymous namespace and only exposed a C-API for accessing it. Now we are factoring out that C++ core into a standalone C++ library so that it can be used directly by downstream clients (per request of one such client). This refactoring has been decomposed into a stack of differentials in order to simplify the code review process; however, the full stack of changes should be considered together.
* D133462: Part 1: split one file into several
* D133830: Part 2: Reorder chunks within files
* D133831: Part 3: General code cleanup
* (this): Part 4: Update documentation
This part updates existing documentation, adds new documentation, and reflows the text of some existing documentation.
Depends On D133831
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D133833
Previously, the SparseTensorUtils.cpp library contained a C++ core implementation, but hid it in an anonymous namespace and only exposed a C-API for accessing it. Now we are factoring out that C++ core into a standalone C++ library so that it can be used directly by downstream clients (per request of one such client). This refactoring has been decomposed into a stack of differentials in order to simplify the code review process; however, the full stack of changes should be considered together.
* D133462: Part 1: split one file into several
* D133830: Part 2: Reorder chunks within files
* (this): Part 3: General code cleanup
* D133833: Part 4: Update documentation
This part performs some general code cleanup (sketched below), including:
* making more things `const`, especially for the targets of pointers
* using preincrement wherever possible ([[ https://llvm.org/docs/CodingStandards.html#prefer-preincrement | per LLVM style guide ]])
* adding messages to most `assert` statements.
* moving argument casting from the core function/method definitions to the CPP wrappers
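A small hypothetical before/after showing the flavor of these cleanups (not code from the library itself):

```cpp
#include <cassert>
#include <cstdint>

// Before:
//   uint64_t sum(uint64_t *values, uint64_t size) {
//     assert(values);
//     uint64_t total = 0;
//     for (uint64_t i = 0; i < size; i++)
//       total += values[i];
//     return total;
//   }
//
// After: const pointee, preincrement, and a message on the assert.
uint64_t sum(const uint64_t *values, uint64_t size) {
  assert(values && "null pointer passed to sum");
  uint64_t total = 0;
  for (uint64_t i = 0; i < size; ++i)
    total += values[i];
  return total;
}
```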
Depends On D133830
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D133831