Commit Graph

445 Commits

Author SHA1 Message Date
Aart Bik
b700a90cc0 [mlir][gpu][sparse] add gpu ops for sparse matrix computations
This revision extends the GPU dialect with ops that can be lowered to
host-oriented sparse matrix library calls (cuSparse-focused in this case,
although the ops could in principle be generalized to support other GPUs).
This will allow the "sparse compiler pipeline" to accelerate sparse operations
(see follow-up revisions for examples of this).

For some background, see:

https://discourse.llvm.org/t/sparse-compiler-and-gpu-code-generation/69786/2

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D150152
2023-05-12 10:44:36 -07:00
Tres Popp
5550c82189 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through LLVM's doCast
machinery, in addition to defining methods with the same names.
This change begins the migration from uses of the methods to the
corresponding function calls, which has been decided on as the more
consistent style.
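
For illustration, a minimal before/after sketch of the style change (the helper and variable names are hypothetical, not code from this patch):

```
#include "mlir/IR/BuiltinTypes.h"

// Hypothetical helper showing the casting-style migration; `type` stands in
// for any mlir::Type value.
bool isWideInteger(mlir::Type type) {
  // Before: auto intType = type.dyn_cast<mlir::IntegerType>();
  // After: prefer the free-function form.
  auto intType = mlir::dyn_cast<mlir::IntegerType>(type);
  return intType && intType.getWidth() > 32;
}
```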

Note that there still exist classes that only define the methods directly,
such as AffineExpr, and this change does not currently include work to
support free-function cast/isa calls for them.

Caveats include:
- The clang-tidy check used for this probably has more problems.
- This only touches C++ code, so generated code is not affected.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This first patch was created with the following steps. The intention is
to make only automated changes at first, so that less time is wasted if it
is reverted, and so that this first mass change is a clearer example for
other teams that will need to follow similar steps.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase with all checks disabled
   except the relevant one. Run it on all header files as well.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.
4. Some changes were discarded for the following reasons:
   - Some files had a variable also named cast.
   - Some files had not included a header file that defines the cast
     functions.
   - Some files contain the definitions of the classes that provide the
     casting methods, so their code still refers to the methods; switching
     to the free functions would require adding a prefix or removing the
     method declarations in the same change.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
            mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
            mlir/lib/**/IR/\
            mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
            mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
            mlir/test/lib/Dialect/Test/TestTypes.cpp\
            mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
            mlir/test/lib/Dialect/Test/TestAttributes.cpp\
            mlir/unittests/TableGen/EnumsGenTest.cpp\
            mlir/test/python/lib/PythonTestCAPI.cpp\
            mlir/include/mlir/IR/
```

Differential Revision: https://reviews.llvm.org/D150123
2023-05-12 11:21:25 +02:00
Andrzej Warzynski
3283a6fb0d [mlir-cpu-runner] Add export_executable_symbols in CMake
This patch is primarily about the change in
"mlir/tools/mlir-cpu-runner/CMakeLists.txt". LLJIT needs access to
symbols (e.g. llvm_orc_registerEHFrameSectionWrapper) that will be
defined in the executable when LLVM is linked statically. This change is
consistent with how other tools within LLVM use LLJIT. It is required to
make sure that:
```
 $ mlir-cpu-runner --host-supports-jit
```
correctly returns `true` on platforms that do support JITting (in my
case that's AArch64 Linux).

The change in "mlir/lib/ExecutionEngine/CMakeLists.txt" is required to
avoid ODR violations when symbols from `mlir-cpu-runner` are exported
and when loading `libmlir_async_runtime.so` in `mlir-cpu-runner`.
Specifically, to avoid `EnableABIBreakingChecks` being defined twice.
For more context:
  * https://github.com/llvm/llvm-project/issues/61712
  * https://github.com/llvm/llvm-project/issues/61856
  * https://reviews.llvm.org/D146935 (this PR)

This change relands ccdcfad081

Fixes #61856

Differential Revision: https://reviews.llvm.org/D146935
2023-04-07 14:40:57 +00:00
max
8f7c8a6ea7 Add gpu::HostUnregisterOp
Without explicitly unregistering, you will get

```
'cuMemHostRegister(ptr, sizeBytes, 0)' failed with 'CUDA_ERROR_HOST_MEMORY_ALREADY_REGISTERED'
```

in CUDA (for example) after repeated runs (e.g., when benchmarking the same kernel).

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D147277
2023-04-06 15:07:12 -05:00
Andrzej Warzynski
fb0b035e35 [mlir-cpu-runner] Add support for -mattr and -march flags
This patch adds support for `-mattr` and `-march` in mlir-cpu-runner.
With this change, one should be able to consistently use mlir-cpu-runner
for MLIR's integration tests (instead of e.g. resorting to lli when some
additional flags are needed). This is demonstrated in
concatenate_dim_1.mlir.

In order to support the new flags, this patch makes sure that MLIR's
ExecutionEngine/JITRunner (which mlir-cpu-runner is built on top of):
  * takes the new command-line flags into account when creating the
    TargetMachine,
  * avoids recreating the TargetMachine if one is already available,
  * creates LLVM's DataLayout based on the previously configured
    TargetMachine.
This is necessary in order to make sure that the command-line
configuration is propagated correctly to the backend code generator.
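
As a hedged illustration only (not necessarily this patch's mechanism), feature strings such as those passed via `-mattr` can be threaded into a TargetMachine using ORC's JITTargetMachineBuilder; the function name, parameters, and the CPU override shown are hypothetical:

```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
#include "llvm/Support/Error.h"
#include "llvm/Target/TargetMachine.h"

#include <memory>
#include <string>

// Hypothetical sketch: build a TargetMachine that honors CPU/feature overrides.
llvm::Expected<std::unique_ptr<llvm::TargetMachine>>
makeTargetMachine(llvm::StringRef cpu, llvm::ArrayRef<std::string> attrs) {
  auto builder = llvm::orc::JITTargetMachineBuilder::detectHost();
  if (!builder)
    return builder.takeError();
  if (!cpu.empty())
    builder->setCPU(cpu.str());
  for (const std::string &attr : attrs)
    builder->getFeatures().AddFeature(attr); // e.g., "+sve" from -mattr=+sve
  return builder->createTargetMachine();
}
```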

A few additional updates are made in order to facilitate this change,
including support for debug dumps from JITRunner.

Differential Revision: https://reviews.llvm.org/D146917
2023-03-31 07:34:24 +00:00
Lang Hames
557a0ea8af [mlir] Update JitRunner, ExecutionEngine after LLVM commit 8b1771bd9f.
LLVM commit 8b1771bd9f replaced JITEvaluatedSymbol with ExecutorSymbolDef.
2023-03-27 19:18:04 -07:00
Sergio Afonso
0e9523efda [mlir] Support lowering of dialect attributes attached to top-level modules
This patch supports the processing of dialect attributes attached to top-level
module-type operations during MLIR-to-LLVMIR lowering.

This approach modifies the `mlir::translateModuleToLLVMIR()` function to call
`ModuleTranslation::convertOperation()` on the top-level operation after its
body has been lowered. This, in turn, obtains the
`LLVMTranslationDialectInterface` associated with that operation's dialect and
uses it for lowering before processing the dialect attributes attached to the
operation.

Since there are no `LLVMTranslationDialectInterface`s for the builtin and GPU
dialects, which define their own module-type operations, this patch also adds
and registers them. This introduces the requirement to always call
`mlir::registerBuiltinDialectTranslation()` before any translation of MLIR to
LLVM IR in which builtin module operations are present. The purpose of these
new translation interfaces is simply to succeed when processing module-type
operations, allowing the lowering process to continue and preventing failures
caused by the interfaces not being found.
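
As a hedged sketch (not code from this patch; the helper name and header paths are assumptions), a translation entry point that satisfies the new registration requirement could look like this:

```
#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/DialectRegistry.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/Target/LLVMIR/Dialect/Builtin/BuiltinToLLVMIRTranslation.h"
#include "mlir/Target/LLVMIR/Dialect/LLVMIR/LLVMToLLVMIRTranslation.h"
#include "mlir/Target/LLVMIR/Export.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"

// Hypothetical helper: register the builtin (and LLVM) dialect translations
// before translating the module to LLVM IR.
std::unique_ptr<llvm::Module> lowerToLLVMIR(mlir::ModuleOp module,
                                            llvm::LLVMContext &llvmCtx) {
  mlir::DialectRegistry registry;
  mlir::registerBuiltinDialectTranslation(registry);
  mlir::registerLLVMDialectTranslation(registry);
  module.getContext()->appendDialectRegistry(registry);
  return mlir::translateModuleToLLVMIR(module.getOperation(), llvmCtx);
}
```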

Differential Revision: https://reviews.llvm.org/D145932
2023-03-21 12:54:26 +00:00
Artem Belevich
d4ba4c6af7 Revert unintentionally committed "Use nvptxcompile library."
This reverts commit 5f66348e59.
2023-03-17 14:23:42 -07:00
Artem Belevich
5f66348e59 Use nvptxcompile library.
Differential Revision: https://reviews.llvm.org/D145527
2023-03-17 14:08:53 -07:00
Nikita Popov
a8f6b5763e [PassBuilder] Support O0 in default pipelines
The default and pre-link pipeline builders currently require you to
call a separate method for optimization level O0, even though they
have perfectly well-defined O0 optimization pipelines.

Accept the O0 optimization level and call buildO0DefaultPipeline()
internally, so that consumers don't all need to repeat this.
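
A minimal sketch of the consumer-side simplification (the helper name is hypothetical; not code from the patch):

```
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"

// Hypothetical helper: after this change the regular pipeline builder can be
// called for O0 as well, instead of special-casing buildO0DefaultPipeline().
llvm::ModulePassManager buildDefaultPipeline(llvm::PassBuilder &PB,
                                             llvm::OptimizationLevel Level) {
  return PB.buildPerModuleDefaultPipeline(Level); // now also valid for O0
}
```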

Differential Revision: https://reviews.llvm.org/D146200
2023-03-17 10:00:05 +01:00
Jakub Kuderski
8c258fda1f [ADT][mlir][NFCI] Do not use non-const lvalue-refs with enumerate
Replace references to enumerate results with either result_pairs
(reference wrapper type) or structured bindings. I did not use
structured bindings everywhere as it wasn't clear to me it would
improve readability.

This is in preparation for the switch to zip semantics, which won't
support non-const lvalue references to elements:
https://reviews.llvm.org/D144503.

I chose to use values instead of const lvalue-refs because MLIR is
biased towards avoiding `const` local variables. This won't degrade
performance because currently `result_pair` is cheap to copy (size_t
+ iterator), and in the future, the enumerator iterator dereference
will return temporaries anyway.
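
For illustration only (a hypothetical function, not code from the patch), the two preferred patterns look like this:

```
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Support/raw_ostream.h"

// Hypothetical example: take the result_pair by value, or use structured
// bindings, instead of `for (auto &en : llvm::enumerate(...))`.
void printIndexed(const llvm::SmallVectorImpl<int> &values) {
  for (auto en : llvm::enumerate(values))
    llvm::errs() << en.index() << ": " << en.value() << "\n";

  for (auto [index, value] : llvm::enumerate(values))
    llvm::errs() << index << ": " << value << "\n";
}
```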

Reviewed By: dblaikie

Differential Revision: https://reviews.llvm.org/D146006
2023-03-15 10:43:56 -04:00
wren romano
84cd51bb97 [mlir][sparse] Renaming "pointer/index" to "position/coordinate"
The old "pointer/index" names often cause confusion since they clash with names of unrelated things in MLIR; so this change rectifies that by switching everything to "position/coordinate" terminology instead.

In addition to the basic terminology, there have also been various conventions for making certain distinctions like: (1) the overall storage for coordinates in the sparse-tensor, vs the particular collection of coordinates of a given element; and (2) particular coordinates given as a `Value` or `TypedValue<MemRefType>`, vs particular coordinates given as `ValueRange` or similar.  I have striven to maintain these distinctions
as follows:

  * "p/c" are used for individual position/coordinate values, when there is no risk of confusion.  (Just like we use "d/l" to abbreviate "dim/lvl".)

  * "pos/crd" are used for individual position/coordinate values, when a longer name is helpful to avoid ambiguity or to form compound names (e.g., "parentPos").  (Just like we use "dim/lvl" when we need a longer form of "d/l".)

    I have also used these forms for a handful of compound names where the old name had been using a three-letter form previously, even though a longer form would be more appropriate.  I've avoided renaming these to use a longer form purely for expediency's sake, since changing them would require a cascade of other renamings.  They should be updated to follow the new naming scheme, but that can be done in future patches.

  * "coords" is used for the complete collection of crd values associated with a single element.  In the runtime library this includes both `std::vector` and raw pointer representations.  In the compiler, this is used specifically for buffer variables with C++ type `Value`, `TypedValue<MemRefType>`, etc.

    The bare form "coords" is discouraged, since it fails to make the dim/lvl distinction; so the compound names "dimCoords/lvlCoords" should be used instead.  (Though there may exist a rare few cases where it is appropriate to be intentionally ambiguous about what coordinate-space the coords live in; in which case the bare "coords" is appropriate.)

    There is seldom the need for the pos variant of this notion.  In most circumstances we use the term "cursor", since the same buffer is reused for a 'moving' pos-collection.

  * "dcvs/lcvs" is used in the compiler as the `ValueRange` analogue of "dimCoords/lvlCoords".  (The "vs" stands for "`Value`s".)  I haven't found the need for it, but "pvs" would be the obvious name for a pos-`ValueRange`.

    The old "ind"-vs-"ivs" naming scheme does not seem to have been sustained in more recent code, which instead prefers other mnemonics (e.g., adding "Buf" to the end of the names for `TypedValue<MemRefType>`).  I have cleaned up a lot of these to follow the "coords"-vs-"cvs" naming scheme, though haven't done an exhaustive cleanup.

  * "positions/coordinates" are used for larger collections of pos/crd values; in particular, these are used when referring to the complete sparse-tensor storage components.

    I also prefer to use these unabbreviated names in the documentation, unless there is some specific reason why using the abbreviated forms helps resolve ambiguity.

In addition to making this terminology change, this change also does some cleanup along the way:
  * correcting the dim/lvl terminology in certain places.
  * adding `const` when it requires no other code changes.
  * miscellaneous cleanup that was entailed in order to make the proper distinctions.  Most of these are in CodegenUtils.{h,cpp}

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D144773
2023-03-06 12:23:33 -08:00
Aart Bik
657f60a07b [mlir][vector] add support for printing f16 and bf16
Love it or hate it, the vector.print operation was the very
first operation that actually made "end-to-end" CHECK integration
testing possible for MLIR. This revision adds support for
the (until recently) less common but important floating-point
types f16 and bf16.

This will be useful for accelerator-specific testing (e.g., NVIDIA GPUs).

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D145207
2023-03-03 08:58:25 -08:00
bixia1
27ea470f22 [mlir][sparse] Add runtime support for reading a COO tensor and writing the data to the given indices and values buffers.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D143862
2023-02-28 08:28:13 -08:00
bixia1
e800c664f0 [mlir][sparse] Extend readCOOIndices to support overhead types beyond index_type.
This is to prepare for implementing the C API for reading a COO tensor into
the given buffers for indices and values.

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D143861
2023-02-13 14:51:59 -08:00
Han Zhu
8e628f72b0 [mlir] Avoid building some shared libraries when PIC is off
When `LLVM_ENABLE_PIC = OFF`, shared libraries cannot be built against code
that's compiled without -fPIC. Example error message:
```
ld.lld: error: relocation R_X86_64_32 cannot be used against local symbol;
recompile with -fPIC
>>> defined in lib/libLLVMSupport.a(StringMap.cpp.o)
>>> referenced by StringMap.cpp
>>>               StringMap.cpp.o:(llvm::StringMapImpl::StringMapImpl(unsigned
>>>               int, unsigned int)) in archive lib/libLLVMSupport.a
```
Similar to [how libclang handles
this](https://github.com/llvm/llvm-project/blob/main/clang/tools/clang-shlib/CMakeLists.txt#L2-L4),
skip building these shared libraries when `LLVM_ENABLE_PIC = OFF`.

Differential Revision: https://reviews.llvm.org/D142941
2023-02-13 10:12:57 -08:00
Archibald Elliott
d768bf994f [NFC][TargetParser] Replace uses of llvm/Support/Host.h
The forwarding header is left in place because of its use in
`polly/lib/External/isl/interface/extract_interface.cc`, but I have
added a GCC warning noting that it is deprecated, because it is used
in `isl`, from where it is included by Polly.
2023-02-10 09:59:46 +00:00
Krzysztof Drewniak
d6fb08abf6 [mlir][ExecutionEngine] Don't globally set CMake prefixes to find HIP
The HIP CMake files expect to find their own dependencies and don't
use or respect PATHS or HINTS, relying on CMAKE_PREFIX_PATH to contain
/opt/rocm and /opt/rocm/hip . This is not great for the rest of the
build. Therefore, copy the CMake prefix path, add the ROCm
directories, find HIP, and reset the path to its old value.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D142391
2023-01-27 21:06:07 +00:00
Mehdi Amini
a988a1f81d Replace use of sprintf with snprintf in SparseTensorRuntime.cpp (NFC)
This fixes a warning on MacOS:

warning: 'sprintf' is deprecated: This function is provided for compatibility
reasons only.  Due to security concerns inherent in the design of sprintf(3),
it is highly recommended that you use snprintf(3) instead.
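
For illustration only (not the exact code in SparseTensorRuntime.cpp; buffer size and names are hypothetical), the fix bounds the write by the destination size:

```
#include <cinttypes>
#include <cstdint>
#include <cstdio>

// Hypothetical example of the sprintf -> snprintf replacement.
void formatValue(char (&buf)[32], uint64_t value) {
  // Before: sprintf(buf, "%" PRIu64, value);
  snprintf(buf, sizeof(buf), "%" PRIu64, value);
}
```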
2023-01-25 06:32:44 -08:00
Kazu Hirata
0a81ace004 [mlir] Use std::optional instead of llvm::Optional (NFC)
This patch replaces (llvm::|)Optional< with std::optional<.  I'll post
a separate patch to remove #include "llvm/ADT/Optional.h".
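
Illustrative only (a hypothetical function): the replacement is mechanical because std::optional offers the same core API:

```
#include <optional>

// Hypothetical example: previously this would have returned
// llvm::Optional<unsigned> and used llvm::None.
std::optional<unsigned> parseWidth(int raw) {
  if (raw <= 0)
    return std::nullopt;
  return static_cast<unsigned>(raw);
}
```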

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2023-01-14 01:25:58 -08:00
Kazu Hirata
a1fe1f5f77 [mlir] Add #include <optional> (NFC)
This patch adds #include <optional> to those files containing
llvm::Optional<...> or Optional<...>.

I'll post a separate patch to actually replace llvm::Optional with
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2023-01-13 21:05:06 -08:00
serge-sans-paille
984b800a03 Move from llvm::makeArrayRef to ArrayRef deduction guides - last part
This is a follow-up to https://reviews.llvm.org/D140896, split into
several parts as it touches a lot of files.
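
A minimal sketch of the replacement pattern (hypothetical function; not code from the patch):

```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"

// Hypothetical example: rely on ArrayRef's deduction guides instead of
// llvm::makeArrayRef.
int sumAll(const llvm::SmallVector<int> &values) {
  // Before: for (int v : llvm::makeArrayRef(values)) ...
  int total = 0;
  for (int v : llvm::ArrayRef(values)) // deduces ArrayRef<int>
    total += v;
  return total;
}
```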

Differential Revision: https://reviews.llvm.org/D141298
2023-01-10 11:47:43 +01:00
Kazu Hirata
5273219e57 [mlir] Use std::optional instead of llvm::Optional (NFC)
This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2023-01-02 18:56:09 -08:00
yijiagu
4109276fb4 Replace void* with std::byte* in AsyncRuntime
Replace void* with std::byte* in AsyncRuntime to make it clear that these pointers point to a memory region.
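
A minimal sketch of the kind of signature this implies (hypothetical helper; not code from the patch):

```
#include <cstddef>
#include <cstdlib>

// Hypothetical helper: returning std::byte* makes explicit that the pointer
// designates a raw memory region rather than a typed object (previously void*).
std::byte *allocateStorage(std::size_t sizeInBytes) {
  return static_cast<std::byte *>(std::malloc(sizeInBytes));
}
```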

Reviewed By: ezhulenev

Differential Revision: https://reviews.llvm.org/D140428
2022-12-20 19:17:35 -08:00
Archibald Elliott
f09cf34d00 [Support] Move TargetParsers to new component
This is a fairly large changeset, but it can be broken into a few
pieces:
- `llvm/Support/*TargetParser*` are all moved from the LLVM Support
  component into a new LLVM Component called "TargetParser". This
  potentially enables using tablegen to maintain this information, as
  is shown in https://reviews.llvm.org/D137517. This cannot currently
  be done, as llvm-tblgen relies on LLVM's Support component.
- This also moves two files from Support which use and depend on
  information in the TargetParser:
  - `llvm/Support/Host.{h,cpp}` which contains functions for inspecting
    the current Host machine for info about it, primarily to support
    getting the host triple, but also for `-mcpu=native` support in e.g.
    Clang. This is fairly tightly intertwined with the information in
    `X86TargetParser.h`, so keeping them in the same component makes
    sense.
  - `llvm/ADT/Triple.h` and `llvm/Support/Triple.cpp`, which contains
    the target triple parser and representation. This is very intertwined
    with the Arm target parser, because the arm architecture version
    appears in canonical triples on arm platforms.
- I moved the relevant unittests to their own directory.

And so, we end up with a single component that has all the information
about the following, which to me seems like a unified component:
- Triples that LLVM knows about
- Architecture names and CPUs that LLVM knows about
- CPU detection logic for LLVM

Given this, I have also moved `RISCVISAInfo.h` into this component, as
it seems to me to be part of that same set of functionality.

If you get link errors in your components after this patch, you likely
need to add TargetParser into LLVM_LINK_COMPONENTS in CMake.

Differential Revision: https://reviews.llvm.org/D137838
2022-12-20 11:05:50 +00:00
bixia1
9889753c6d [mlir][crunner] Add support for invoking std::sort.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139880
2022-12-13 12:31:14 -08:00
bixia1
9b1513eee4 [mlir][runner] Add more printMemref functions.
Add printMemref for complex data types and the index type. Add printMemref
for 1-D types beyond f32.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139475
2022-12-12 09:37:50 -08:00
River Riddle
18546ff8dd [mlir:Bytecode] Add shared_ptr<SourceMgr> overloads to allow safe mmap of data
The bytecode reader currently has no mechanism that allows for directly referencing
data from the input buffer safely. This commit adds shared_ptr<SourceMgr> overloads
that provide an explicit and safe way of extending the lifetime of the input. These
new overloads are adopted in all of our tooling and are implicitly used by the
filename-only parser methods.

Differential Revision: https://reviews.llvm.org/D139366
2022-12-11 22:45:34 -08:00
bixia1
3032c07d3a [mlir][crunner] Add support for random number generation.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139374
2022-12-06 08:54:00 -08:00
wren romano
86f91e45a2 [mlir][sparse] Cleaning up the dim/lvl distinction in SparseTensorConversion
This change cleans up the conversion pass with respect to the "dim"-vs-"lvl" and "sizes"-vs-"shape" distinctions of the runtime. A quick synopsis:

* Adds new `SparseTensorStorageBase::getDimSize` method, with `sparseDimSize` wrapper in SparseTensorRuntime.h, and `genDimSizeCall` generator in SparseTensorConversion.cpp
* Changes `genLvlSizeCall` to perform no logic, just generate the function call.
* Adds `createOrFold{Dim,Lvl}Call` functions to handle the logic of replacing `gen{Dim,Lvl}SizeCall` with constants whenever possible. The `createOrFoldDimCall` function replaces the old `sizeFromPtrAtDim`.
* Adds `{get,fill}DimSizes` functions for iterating `createOrFoldDimCall` across the whole type. These functions replace the old `sizesFromPtr`.
* Adds `{get,fill}DimShape` functions for lowering a `ShapedType` into constants. These functions replace the old `sizesFromType`.
* Changes the `DimOp` rewrite to do the right thing.
* Changes the `ExpandOp` rewrite to compute the proper expansion size.

Depends On D138365

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139165
2022-12-05 16:59:42 -08:00
Benjamin Kramer
fcf4e360ba Iterate over StringMaps using structured bindings. NFCI. 2022-12-04 18:36:41 +01:00
Kazu Hirata
1a36588ec6 [mlir] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-03 18:50:27 -08:00
Kazu Hirata
fccab9f90b [mlir] Fix an unused variable warning
This patch fixes:

  mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:646:18: error:
  unused variable 'dimRank' [-Werror,-Wunused-variable]
2022-12-02 16:25:07 -08:00
wren romano
2af2e4dbb7 [mlir][sparse] Breaking up openSparseTensor to better support non-permutations
This commit updates how the `SparseTensorConversion` pass handles `NewOp`.  It breaks up the underlying `openSparseTensor` function into two parts (`SparseTensorReader::create` and `SparseTensorReader::readSparseTensor`) so that the pass can inject code for constructing `lvlSizes` between those two parts.  Migrating the construction of `lvlSizes` out of the runtime and into the pass is a necessary first step toward fully supporting non-permutations.  (The alternative would be for the pass to generate a `FuncOp` for performing the construction and then passing that to the runtime; which doesn't seem to have any benefits over the design of this commit.)  And since the pass now generates the code to call these two functions, this change also removes the `Action::kFromFile` value from the enum used by `_mlir_ciface_newSparseTensor`.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138363
2022-12-02 11:10:57 -08:00
wren romano
a3e4888350 [mlir][sparse] Macros to clean up StridedMemRefType in the SparseTensorRuntime
In particular, this silences warnings from [-Wsign-compare].

This is a revised version of D137735, which got reverted due to a sign-comparison warning on LLVM's Windows buildbot (which was not on MLIR's Windows buildbot).  Differences vs the previous differential:

* `vectorToMemref` now uses `detail::checkOverflowCast` to silence the warning that caused the previous differential to get reverted.
* `MEMREF_GET_USIZE` now uses `detail::checkOverflowCast` rather than `static_cast`
* `ASSERT_USIZE_EQ` added to abbreviate another common idiom, and to ensure that we use `detail::safelyEQ` everywhere (to silence a few other warnings)
* A couple for-loops now use `index_type` for the induction variable, since their upper bound uses that typedef too. (Namely `_mlir_ciface_getSparseTensorReaderDimSizes` and `_mlir_ciface_outSparseTensorWriterNext`)

Depends on D138149

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137998
2022-11-16 16:40:13 -08:00
wren romano
b32831f4a9 [mlir][sparse] move SparseTensorReader functions into the _mlir_ciface_ section
This is a reposting of D137737, which got reverted when D137735 did.  There are no changes other than rebasing.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138000
2022-11-15 17:49:41 -08:00
Kazu Hirata
6d5dbc7d34 [mlir] Fix a warning
This patch fixes:

  mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:195:30: warning:
  cast from type ‘const long unsigned int*’ to type ‘void*’ casts away
  qualifiers [-Wcast-qual]
2022-11-15 12:21:20 -08:00
Kazu Hirata
cd5ee321e5 [mlir] Fix warnings
This patch fixes:

  mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:296:31: error:
  comparison of integers of different signs: 'int64_t' (aka 'long')
  and 'const uint64_t' (aka 'const unsigned long')
  [-Werror,-Wsign-compare]

  mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:297:67: error:
  comparison of integers of different signs: 'int64_t' (aka 'long')
  and 'const uint64_t' (aka 'const unsigned long')
  [-Werror,-Wsign-compare]

  mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:298:31: error:
 comparison of integers of different signs: 'int64_t' (aka 'long') and
 'const uint64_t' (aka 'const unsigned long') [-Werror,-Wsign-compare]

  mlir/lib/ExecutionEngine/SparseTensorRuntime.cpp:479:30: error:
  comparison of integers of different signs: 'int64_t' (aka 'long')
  and 'const uint64_t' (aka 'const unsigned long')
  [-Werror,-Wsign-compare]
2022-11-15 12:16:03 -08:00
Stella Stamenova
af5c307945 Revert "[mlir][sparse] Macros to clean up StridedMemRefType in the SparseTensorRuntime" and "[mlir][sparse] move SparseTensorReader functions into the _mlir_ciface_ section"
This reverts commits 6c22dad and 92bc3fb.

These broke the Windows MLIR buildbot.
2022-11-14 16:18:04 -08:00
wren romano
92bc3fb5b1 [mlir][sparse] move SparseTensorReader functions into the _mlir_ciface_ section
Depends On D137735

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137737
2022-11-14 13:50:29 -08:00
wren romano
6c22dad9c2 [mlir][sparse] Macros to clean up StridedMemRefType in the SparseTensorRuntime
In particular, this silences warnings from [-Wsign-compare].

Depends On D137681

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137735
2022-11-14 13:49:38 -08:00
wren romano
c518745bba [mlir][sparse] Making way for SparseTensorRuntime to support non-permutations
Systematically updates the SparseTensorRuntime to properly distinguish tensor-dimensions from storage-levels (and their associated ranks, shapes, sizes, indices, etc.).  With a few exceptions, which are noted in the code, this ensures the runtime has all the **semantic** changes necessary to support non-permutations.

(Whereas **operationally**, since we're still using `std::vector<uint64_t>` to represent the mappings, there's no way to pass in any interesting non-permutations.  Changing the representation to `std::function` will be done in a separate differential.)

Depends On D137680

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137681
2022-11-14 13:48:41 -08:00
Renato Golin
1b99e8ba48 [MLIR] Move JitRunner Options to header, pass to mlirTransformer
This allows the MLIR transformer to see the command-line options and
make decisions based on them. There is no change upstream, but my use case is
to look at the entry point name and type to make sure I can use them.

Differential Revision: https://reviews.llvm.org/D137861
2022-11-13 18:55:48 +00:00
Mehdi Amini
a4ef4445a0 Apply clang-tidy fixes for readability-container-size-empty in ExecutionEngine.cpp (NFC) 2022-11-12 23:47:38 +00:00
Denys Shabalin
f11e07416a [mlir] Use the same pipeline tuning options as clang for execution engine
This change makes sure that ExecutionEngine's pass pipeline is identical to the
one used by clang. Previously, SLPVectorization was not enabled, which caused
differences in code generation.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D137248
2022-11-02 15:07:01 +01:00
Emilio Cota
17dbd80ff7 [mlir] Fix typo s/utilties/utilities/ (including in file name)
Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D136887
2022-10-27 17:14:33 -04:00
Denys Shabalin
95c083f579 [mlir] Fix and test python bindings for dump_to_object_file
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D136334
2022-10-20 15:53:16 +02:00
wren romano
062f29c8d0 [mlir][sparse] Moving Enums.h into Dialect/SparseTensor/IR
Move the SparseTensorEnums library out of the ExecutionEngine directory and into Dialect/SparseTensor/IR.

Depends On D136002

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136005
2022-10-18 15:15:18 -07:00
wren romano
181b04d276 [mlir][sparse] Factoring out SparseTensorEnums library
This differential splits the SparseTensorEnums library out from the SparseTensorRuntime library. The actual moving of files will be handled in the next differential.

Depends On D135996

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136002
2022-10-18 14:20:33 -07:00
wren romano
d9affadd00 [mlir][sparse] rename the values of the runtime DimLevelType
This change is to make way for reusing the DimLevelType enum in lieu of the SparseTensorEncodingAttr::DimLevelType enum, but it is broken out to make it quick and easy to review.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D135995
2022-10-18 12:08:19 -07:00