Commit Graph

53 Commits

Author SHA1 Message Date
Tim Harvey
dce7a7cf69 Changed all code and comments that used the phrase "sparse compiler" to instead use "sparsifier" (#71875)
The changes in this PR mostly center around the tests that use the
flag sparse_compiler (also: sparse-compiler).
2023-11-15 20:12:35 +00:00
Christian Ulmann
52491c99fa [MLIR][LLVM] Remove typed pointer remnants from integration tests (#71208)
This commit removes all LLVM dialect typed pointers from the integration
tests. Typed pointers have been deprecated for a while now and it's
planned to soon remove them from the LLVM dialect.

Related PSA:
https://discourse.llvm.org/t/psa-removal-of-typed-pointers-from-the-llvm-dialect/74502
2023-11-03 21:21:25 +01:00
Yinying Li
e2e429d994 [mlir][sparse] Migrate more tests to new syntax (#66309)
CSR:
`lvlTypes = [ "dense", "compressed" ]` to `map = (d0, d1) -> (d0 :
dense, d1 : compressed)`

CSC:
`lvlTypes = [ "dense", "compressed" ], dimToLvl = affine_map<(d0, d1) ->
(d1, d0)>` to `map = (d0, d1) -> (d1 : dense, d0 : compressed)`

This is an ongoing effort: #66146
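
As an illustrative sketch (the #CSR alias name is assumed, not taken from the commit), the CSR migration roughly looks like:

  // Old syntax:
  #CSR = #sparse_tensor.encoding<{
    lvlTypes = [ "dense", "compressed" ]
  }>
  // New syntax:
  #CSR = #sparse_tensor.encoding<{
    map = (d0, d1) -> (d0 : dense, d1 : compressed)
  }>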
2023-09-14 12:21:13 -04:00
Mehdi Amini
6f5ebfb987 Fix MLIR integration test that requires ARM SVE to reproduce
Fix-forward for a9f3009758
2023-09-09 15:29:00 -07:00
Mehdi Amini
a9f3009758 Switch MLIR to use the internal LIT shell by default (#65415) 2023-09-09 13:51:27 -07:00
Andrzej Warzynski
23e5130ebf [mlir][test] Reland: Refactor SparseTensor CPU integration tests
CHANGES SINCE THE ORIGINAL VERSION
----------------------------------
The default test set-up was extracted from
  * SparseTensor/CPU/lit.local.cfg
and duplicated in all tests. This is to support downstream users that
don't use these local LIT config files.

SUMMARY OF CHANGES
------------------
This patch aims to reduce test duplication. This is a direct follow-up of:
  1. https://reviews.llvm.org/D155403 (test duplication), and
  2. https://reviews.llvm.org/D155405 (code re-use),

All SVE/VLA tests are now enabled _conditionally_ and refactored to use
`mlir-cpu-runner` rather than `lli`. The former helps with test
duplication and the latter with code re-use.

A few additional refactoring changes are included.

1. To reduce verbosity, long runtime library names like:

  %mlir_native_utils_lib_dir/libmlir_c_runner_utils%shlibext

are replaced with (see the sketch after this list):

  %mlir_c_runner_utils

2. In order to keep the code and the comments in sync, and to maintain
   consistency across the tests, the following:

  enable-runtime-library=true

is swapped with (and vice-versa):

  enable-runtime-library=false

Note that this change won't affect test coverage. Only a few tests
required such an update.

3. A VLS vectorization `RUN` line is added in tests where there was a
   VLA/VLS `RUN` line, but no VLS `RUN` line (with a few exceptions of
   tests that only contained one `RUN` line to begin with).

4. A few test variables are renamed/added. Most notable example:
  * `%{options}` --> `%{sparse_compiler_opts}`
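
For item 1 above, a rough sketch of how a test's RUN lines change (the mlir-cpu-runner invocation details are illustrative assumptions, not lines from the patch):

  // Before:
  // RUN: mlir-cpu-runner %s --entry-point-result=void \
  // RUN:   --shared-libs=%mlir_native_utils_lib_dir/libmlir_c_runner_utils%shlibext
  // After:
  // RUN: mlir-cpu-runner %s --entry-point-result=void \
  // RUN:   --shared-libs=%mlir_c_runner_utils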

TEST RUNTIME IMPROVEMENT
------------------------
TL;DR: This change improves test execution time by ~25%.

At the moment, the following `llvm-lit` invocation takes ~7.30s on my
AArch64 workstation (with SVE):

  llvm-lit  <llvm-project>/mlir/test/Integration/Dialect/SparseTensor/CPU/

This timing doesn't change regardless of the value of the following
CMake variable (which should disable some tests):

  MLIR_RUN_ARM_SVE_TESTS

With this patch, the execution time will indeed depend on the value of
the above CMake variable:
  * with `MLIR_RUN_ARM_SVE_TESTS=true` the timing remains intact,
  * with `MLIR_RUN_ARM_SVE_TESTS=false` the timing drops to ~5.40s (~25%
    improvement).
This is expected:
  * on average there are 4 `RUN` lines per test,
  * _without this change_ (and with `MLIR_RUN_ARM_SVE_TESTS=false`) the
    4th `RUN` line would in most cases duplicate the 3rd `RUN` line,
  * _with this change_ (and with `MLIR_RUN_ARM_SVE_TESTS=false`) the
    4th `RUN` line becomes empty.

PATCH SIZE
----------
While rather large and touching many files, most changes in this patch
are mechanical. All test configurations have been preserved, and new
`RUN` lines were added only in a handful of cases.

Differential Revision: https://reviews.llvm.org/D156625
2023-08-11 08:16:01 +00:00
Aart Bik
5a1f87f9fc Revert "[mlir][test] Refactor SparseTensor CPU integration tests"
This reverts commit e77e891d89.

Differential Revision: https://reviews.llvm.org/D156947
2023-08-02 15:46:41 -07:00
Andrzej Warzynski
e77e891d89 [mlir][test] Refactor SparseTensor CPU integration tests
SUMMARY OF CHANGES
------------------
This patch aims to reduce test duplication and to improve code re-use in
SparseTensor integration tests for CPU. This is a direct follow-up of:
  1. https://reviews.llvm.org/D155403 (test duplication), and
  2. https://reviews.llvm.org/D155405 (code re-use),

The key logic for this patch is implemented in:
  * SparseTensor/CPU/lit.local.cfg.
Essentially, the set-up that used to be repeated across all test files
has been extracted into a common LIT configuration file. This makes code
re-use straightforward.

All SVE/VLA tests are now enabled _conditionally_ and refactored to use
`mlir-cpu-runner` rather than `lli`. The former helps with test
duplication and the latter with code re-use.

A few additional refactoring changes are included.

1. To reduce verbosity, long runtime library names like:

  %mlir_native_utils_lib_dir/libmlir_c_runner_utils%shlibext

are replaced with:

  %mlir_c_runner_utils

2. In order to keep the code and the comments in sync, and to maintain
   consistency across the tests, the following:

  enable-runtime-library=true

is swapped with (and vice-versa):

  enable-runtime-library=false

Note that this change won't affect test coverage. Only a few tests
required such an update.

3. A VLS vectorization `RUN` line is added in tests where there was a
   VLA/VLS `RUN` line, but no VLS `RUN` line (with a few exceptions of
   tests that only contained one `RUN` line to begin with).

4. A few test variables are renamed/added. Most notable example:
  * `%{options}` --> `%{sparse_compiler_opts}`

TEST RUNTIME IMPROVEMENT
------------------------
TL;DR: This change improves test execution time by ~25%.

At the moment, the following `llvm-lit` invocation takes ~7.30s on my
AArch64 workstation (with SVE):

  llvm-lit  <llvm-project>/mlir/test/Integration/Dialect/SparseTensor/CPU/

This timing doesn't change regardless of the value of the following
CMake variable (which should disable some tests):

  MLIR_RUN_ARM_SVE_TESTS

With this patch, the execution time will indeed depend on the value of
the above CMake variable:
  * with `MLIR_RUN_ARM_SVE_TESTS=true` the timing remains intact,
  * with `MLIR_RUN_ARM_SVE_TESTS=false` the timing drops to ~5.40s (~25%
    improvement).
This is expected:
  * on average there are 4 `RUN` lines per test,
  * _without this change_ (and with `MLIR_RUN_ARM_SVE_TESTS=false`) the
    4th `RUN` line would in most cases duplicate the 3rd `RUN` line,
  * _with this change_ (and with `MLIR_RUN_ARM_SVE_TESTS=false`) the
    4th `RUN` line becomes empty.

PATCH SIZE
----------
While rather large and touching many files, most changes in this patch
are mechanical. All test configurations have been preserved, and new
`RUN` lines were added only in a handful of cases.

Differential Revision: https://reviews.llvm.org/D156625
2023-08-02 20:21:50 +00:00
wren romano
a0615d020a [mlir][sparse] Renaming the STEA field dimLevelType to lvlTypes
This commit is part of the migration towards the new STEA syntax/design.  In particular, this commit includes the following changes:
* Renaming compiler-internal functions/methods:
  * `SparseTensorEncodingAttr::{getDimLevelType => getLvlTypes}`
  * `Merger::{getDimLevelType => getLvlType}` (for consistency)
  * `sparse_tensor::{getDimLevelType => buildLevelType}` (to help reduce confusion vs actual getter methods)
* Renaming external facets to match:
  * the STEA parser and printer
  * the C and Python bindings
  * PyTACO

However, the actual renaming of the `DimLevelType` itself (along with all the "dlt" names) will be handled in a separate commit.
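
As an illustrative sketch of the external-facing rename (the #SV alias name is assumed), a sparse tensor encoding changes roughly as follows:

  // Before:
  #SV = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>
  // After:
  #SV = #sparse_tensor.encoding<{ lvlTypes = [ "compressed" ] }>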

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D150330
2023-05-17 14:24:09 -07:00
Cullen Rhodes
baafc74ab0 [mlir][test][Integration] Refactor Arm emulator configuration
The logic enabling the Arm SVE (and now SME) integration tests for
various dialects, which may run under emulation, is currently duplicated
in several places.

This patch moves the configuration to the top-level MLIR integration
tests Lit config and renames the '%lli' substitution: to
'%lli_aarch64_cmd' in contexts where it will run exclusively on AArch64
(ArmSVE, ArmSME), possibly under emulation, and to
'%lli_host_or_aarch64_cmd' in contexts where it may run on AArch64 (also
possibly under emulation). The latter is for integration tests that have
target-specific and target-agnostic codepaths, such as SparseTensor,
which supports scalable vectors.

The two substitutions have the same effect but the names are different to
convey this information. The '%lli_aarch64_cmd' substitution could be
used in the SparseTensor tests, but that would be a misnomer if the host
were x86 and MLIR_RUN_SVE_TESTS=OFF.

The reason for renaming the '%lli' substitution is to avoid preventing other
target-specific integration tests from running at the same time, since the same
'%lli' substitution is used for lli in other integration tests:

  * mlir/test/Integration/Dialect/Vector/CPU/X86Vector              - (AVX emulation via Intel SDE)
  * mlir/test/Integration/Dialect/Vector/CPU/AMX                    - (AMX emulation via Intel SDE)
  * mlir/test/Integration/Dialect/LLVMIR/CPU/test-vp-intrinsic.mlir - (RISCV emulation via QEMU if supported, native otherwise)

and substituting '%lli' at the top level with Arm-specific logic would override
this.
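
As a hedged sketch (the pipeline, pass names, and flags here are assumptions for illustration, not lines from the patch), a SparseTensor test might use the renamed substitution like this:

  // RUN: mlir-opt %s --sparse-compiler | mlir-translate --mlir-to-llvmir | \
  // RUN:   %lli_host_or_aarch64_cmd --entry-function=entry | FileCheck %s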

Reviewed By: awarzynski

Differential Revision: https://reviews.llvm.org/D148929
2023-04-26 09:57:43 +00:00
wren romano
84cd51bb97 [mlir][sparse] Renaming "pointer/index" to "position/coordinate"
The old "pointer/index" names often cause confusion since these names clash with names of unrelated things in MLIR; so this change rectifies this by changing everything to use "position/coordinate" terminology instead.

In addition to the basic terminology, there have also been various conventions for making certain distinctions like: (1) the overall storage for coordinates in the sparse-tensor, vs the particular collection of coordinates of a given element; and (2) particular coordinates given as a `Value` or `TypedValue<MemRefType>`, vs particular coordinates given as `ValueRange` or similar.  I have striven to maintain these distinctions
as follows:

  * "p/c" are used for individual position/coordinate values, when there is no risk of confusion.  (Just like we use "d/l" to abbreviate "dim/lvl".)

  * "pos/crd" are used for individual position/coordinate values, when a longer name is helpful to avoid ambiguity or to form compound names (e.g., "parentPos").  (Just like we use "dim/lvl" when we need a longer form of "d/l".)

    I have also used these forms for a handful of compound names where the old name had been using a three-letter form previously, even though a longer form would be more appropriate.  I've avoided renaming these to use a longer form purely for expediency's sake, since changing them would require a cascade of other renamings.  They should be updated to follow the new naming scheme, but that can be done in future patches.

  * "coords" is used for the complete collection of crd values associated with a single element.  In the runtime library this includes both `std::vector` and raw pointer representations.  In the compiler, this is used specifically for buffer variables with C++ type `Value`, `TypedValue<MemRefType>`, etc.

    The bare form "coords" is discouraged, since it fails to make the dim/lvl distinction; so the compound names "dimCoords/lvlCoords" should be used instead.  (Though there may exist a rare few cases where it is appropriate to be intentionally ambiguous about what coordinate-space the coords live in; in which case the bare "coords" is appropriate.)

    There is seldom the need for the pos variant of this notion.  In most circumstances we use the term "cursor", since the same buffer is reused for a 'moving' pos-collection.

  * "dcvs/lcvs" is used in the compiler as the `ValueRange` analogue of "dimCoords/lvlCoords".  (The "vs" stands for "`Value`s".)  I haven't found the need for it, but "pvs" would be the obvious name for a pos-`ValueRange`.

    The old "ind"-vs-"ivs" naming scheme does not seem to have been sustained in more recent code, which instead prefers other mnemonics (e.g., adding "Buf" to the end of the names for `TypeValue<MemRefType>`).  I have cleaned up a lot of these to follow the "coords"-vs-"cvs" naming scheme, though haven't done an exhaustive cleanup.

  * "positions/coordinates" are used for larger collections of pos/crd values; in particular, these are used when referring to the complete sparse-tensor storage components.

    I also prefer to use these unabbreviated names in the documentation, unless there is some specific reason why using the abbreviated forms helps resolve ambiguity.

In addition to making this terminology change, this change also does some cleanup along the way:
  * correcting the dim/lvl terminology in certain places.
  * adding `const` when it requires no other code changes.
  * miscellaneous cleanup that was entailed in order to make the proper distinctions.  Most of these are in CodegenUtils.{h,cpp}

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D144773
2023-03-06 12:23:33 -08:00
Markus Böck
9048ea28da Reland "[mlir] Make the vast majority of intgration and runner tests work on Windows"
This reverts commit 5561e17411

The logic was moved from CMake into lit, fixing the issue that led to the revert and potentially others with multi-config CMake generators.

Differential Revision: https://reviews.llvm.org/D143925
2023-02-15 19:14:43 +01:00
Aart Bik
5561e17411 Revert "[mlir] Make the vast majority of integration and runner tests work on Windows"
This reverts commit 161b9d741a.

REASON:

cmake --build . --target check-mlir-integration

Failed Tests (186):
  MLIR :: Integration/Dialect/Arith/CPU/test-wide-int-emulation-addi-i16.mlir
  MLIR :: Integration/Dialect/Arith/CPU/test-wide-int-emulation-cmpi-i16.mlir
  MLIR :: Integration/Dialect/Arith/CPU/test-wide-int-emulation-compare-results-i16.mlir
  MLIR :: Integration/Dialect/Arith/CPU/test-wide-int-emulation-constants-i16.mlir
  MLIR :: Integration/Dialect/Arith/CPU/test-wide-int-emulation-max-min-i16.mlir
  MLIR :: Integration/Dialect/Arith/CPU/test-wide-int-emulation-muli-i16.mlir
  MLIR :: Integration/Dialect/Arith/CPU/test-wide-int-emulation-shli-i16.mlir
  MLIR :: Integration/Dialect/Arith/CPU/test-wide-int-emulation-shrsi-i16.mlir
  MLIR :: Integration/Dialect/Arith/CPU/test-wide-int-emulation-shrui-i16.mlir
  MLIR :: Integration/Dialect/Async/CPU/microbench-linalg-async-parallel-for.mlir
  MLIR :: Integration/Dialect/Async/CPU/microbench-scf-async-parallel-for.mlir
  MLIR :: Integration/Dialect/Async/CPU/test-async-parallel-for-1d.mlir
  MLIR :: Integration/Dialect/Async/CPU/test-async-parallel-for-2d.mlir
  MLIR :: Integration/Dialect/Complex/CPU/correctness.mlir
  MLIR :: Integration/Dialect/LLVMIR/CPU/X86/test-inline-asm-vector.mlir
  MLIR :: Integration/Dialect/LLVMIR/CPU/X86/test-inline-asm.mlir
  MLIR :: Integration/Dialect/LLVMIR/CPU/test-vector-reductions-fp.mlir
  MLIR :: Integration/Dialect/LLVMIR/CPU/test-vector-reductions-int.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/matmul-vs-matvec.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/rank-reducing-subview.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-collapse-tensor.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-1d-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-1d-nwc-wcf-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-2d-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-2d-nhwc-hwcf-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-3d-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-3d-ndhwc-dhwcf-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-elementwise.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-expand-tensor.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-one-shot-bufferize.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-padtensor.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-subtensor-insert-multiple-uses.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-subtensor-insert.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-tensor-e2e.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-tensor-matmul.mlir
  MLIR :: Integration/Dialect/Memref/cast-runtime-verification.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/concatenate.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/dense_output.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_cast.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_codegen_foreach.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_complex32.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_complex64.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_complex_ops.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_constant_to_sparse_tensor.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conv_1d_nwc_wcf.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conv_2d.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conv_2d_nhwc_hwcf.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conv_3d.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conv_3d_ndhwc_dhwcf.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conversion.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conversion_dyn.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conversion_ptr.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2dense.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2sparse.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_dot.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_expand.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_file_io.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_filter_conv2d.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_flatten.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_foreach_slices.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_index.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_index_dense.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_insert_1d.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_insert_2d.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_insert_3d.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_matmul.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_matrix_ops.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_matvec.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_mttkrp.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_out_mult_elt.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_out_reduction.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_out_simple.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_pack.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_quantized_matmul.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_re_im.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom_prod.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_reductions.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_reductions_prod.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_reshape.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_rewrite_push_back.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_rewrite_sort.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_rewrite_sort_coo.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sampled_matmul.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sampled_mm_fusion.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_scale.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_scf_nested.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_select.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sign.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sorted_coo.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_spmm.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_storage.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sum.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sum_bf16.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sum_c32.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sum_f16.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_tanh.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_tensor_mul.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_tensor_ops.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_transpose.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_unary.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_vector_ops.mlir
  MLIR :: Integration/Dialect/SparseTensor/python/test_SDDMM.py
  MLIR :: Integration/Dialect/SparseTensor/python/test_SpMM.py
  MLIR :: Integration/Dialect/SparseTensor/python/test_elementwise_add_sparse_output.py
  MLIR :: Integration/Dialect/SparseTensor/python/test_output.py
  MLIR :: Integration/Dialect/SparseTensor/python/test_stress.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_MTTKRP.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_SDDMM.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_SpMM.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_SpMV.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_Tensor.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_scalar_tensor_algebra.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_simple_tensor_algebra.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_tensor_complex.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_tensor_types.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_tensor_unary_ops.py
  MLIR :: Integration/Dialect/SparseTensor/taco/test_true_dense_tensor_algebra.py
  MLIR :: Integration/Dialect/SparseTensor/taco/unit_test_tensor_core.py
  MLIR :: Integration/Dialect/SparseTensor/taco/unit_test_tensor_io.py
  MLIR :: Integration/Dialect/SparseTensor/taco/unit_test_tensor_utils.py
  MLIR :: Integration/Dialect/Standard/CPU/test-ceil-floor-pos-neg.mlir
  MLIR :: Integration/Dialect/Standard/CPU/test_subview.mlir
  MLIR :: Integration/Dialect/Vector/CPU/AMX/test-mulf-full.mlir
  MLIR :: Integration/Dialect/Vector/CPU/AMX/test-mulf.mlir
  MLIR :: Integration/Dialect/Vector/CPU/AMX/test-muli-ext.mlir
  MLIR :: Integration/Dialect/Vector/CPU/AMX/test-muli-full.mlir
  MLIR :: Integration/Dialect/Vector/CPU/AMX/test-muli.mlir
  MLIR :: Integration/Dialect/Vector/CPU/AMX/test-tilezero-block.mlir
  MLIR :: Integration/Dialect/Vector/CPU/AMX/test-tilezero.mlir
  MLIR :: Integration/Dialect/Vector/CPU/X86Vector/test-dot.mlir
  MLIR :: Integration/Dialect/Vector/CPU/X86Vector/test-inline-asm-vector-avx512.mlir
  MLIR :: Integration/Dialect/Vector/CPU/X86Vector/test-mask-compress.mlir
  MLIR :: Integration/Dialect/Vector/CPU/X86Vector/test-rsqrt.mlir
  MLIR :: Integration/Dialect/Vector/CPU/X86Vector/test-sparse-dot-product.mlir
  MLIR :: Integration/Dialect/Vector/CPU/X86Vector/test-vp2intersect-i32.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-0-d-vectors.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-broadcast.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-compress.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-constant-mask.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-contraction.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-create-mask-v4i1.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-create-mask.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-expand.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-extract-strided-slice.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-flat-transpose-col.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-flat-transpose-row.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-fma.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-gather.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-index-vectors.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-insert-strided-slice.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-maskedload.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-maskedstore.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-matrix-multiply-col.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-matrix-multiply-row.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-outerproduct-f32.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-outerproduct-i64.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-print-int.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-realloc.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-reductions-f32-reassoc.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-reductions-f32.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-reductions-f64-reassoc.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-reductions-f64.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-reductions-i32.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-reductions-i4.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-reductions-i64.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-reductions-si4.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-reductions-ui4.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-scan.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-scatter.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-shape-cast.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-shuffle.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-sparse-dot-matvec.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-sparse-saxpy-jagged-matvec.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-transfer-read-1d.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-transfer-read-2d.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-transfer-read-3d.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-transfer-read.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-transfer-to-loops.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-transfer-write.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-transpose.mlir

Testing Time: 0.29s
  Unsupported:  31
  Passed     :   5
  Failed     : 186

Differential Revision: https://reviews.llvm.org/D143970
2023-02-13 18:30:52 -08:00
Markus Böck
161b9d741a [mlir] Make the vast majority of integration and runner tests work on Windows
This patch contains the changes required to make the vast majority of integration and runner tests run on Windows.
Historically speaking, the JIT support for Windows has been lagging behind, but recent versions of ORC JIT have now caught up and work for basically all examples in the repo.

Sadly, because these tests previously did not work on Windows, basically all of them made Unix-like assumptions about things like filenames, paths, shell syntax, etc.
This patch fixes all these issues in one big swoop and enables Windows support for the vast majority of integration tests.

More specifically, the following changes had to be made:
* The various JIT runners used paths to the runtime libraries that assumed a Unix toolchain layout and filenames. I abstracted the specific path and filename of these runtime libraries away by making the paths to the runtime libraries be passed from cmake into lit. This now also allows a much more convenient syntax: `--shared-libs=%mlir_c_runner_utils` instead of `--shared-libs=%mlir_lib_dir/lib/libmlir_c_runner_utils%shlibext`
* Some tests using Python set environment variables using the `ENV=VALUE cmd` format. This works on Unix, but on Windows it has to be prefixed as `env ENV=VALUE cmd`
* Some tests used C functions that are simply not available or exported on Windows (`fabsf`, `aligned_alloc`). These tests have either been adjusted or explicitly marked as `UNSUPPORTED`

Some tests remain disabled on Windows as before:
* In SparseTensor some tests have non-trivial logic for finding the runtime libraries, which seems to be required for the use of emulators. I do not have the time to port these, so I simply kept them disabled
* Some tests requiring special hardware that I simply cannot test remain disabled on Windows. These include usage of AVX512 or AMX

The tests for `mlir-vulkan-runner` and `mlir-spirv-runner` all work now as well, and so do the vast majority of the `mlir-cpu-runner` tests.

Differential Revision: https://reviews.llvm.org/D143925
2023-02-13 22:24:20 +01:00
Andrzej Warzynski
bfe4ce3f83 [mlir][sparse] Port the remaining integration tests to use SVE
This patch updates the remaining SparseCompiler integration tests to
target SVE when available.

Two tests will require some investigation in the future:
  * sparse_matmul.mlir
  * sparse_tanh.mlir
The former passes regardless - that's due to how `CHECK` lines are
defined. The latter fails when SVE is enabled, but passes when it's
disabled. I marked it as UNSUPPORTED as there is no mechanism to XFAIL a
test conditionally. Also, see [1] for more details.

[1] https://github.com/llvm/llvm-project/issues/60626

Differential Revision: https://reviews.llvm.org/D143514
2023-02-09 10:14:53 +00:00
Javier Setoain
66d555aa33 [mlir][sparse][ArmSVE] Enable sparse integration tests for ArmSVE
This patch adds the logic necessary to target the sparse-tensor dialect
integration tests for SVE. As the LLVM backend for AArch64 does not
currently support product reductions, the corresponding tests are
disabled for SVE.

Not all tests have been updated yet. The remaining tests will be
refactored in a separate patch shortly.

Differential Revision: https://reviews.llvm.org/D121304

Co-authored-by: Andrzej Warzynski <andrzej.warzynski@arm.com>
2023-01-24 15:21:08 +00:00
Aart Bik
78ba3aa765 [mlir][sparse] performs a tab cleanup (NFC)
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D140142
2022-12-15 12:12:06 -08:00
bixia1
a229c162a1 [mlir][sparse] Make some integration tests run with vectorization.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D139887
2022-12-13 13:26:36 -08:00
Jakub Kuderski
269177eedf Revert "[mlir][sparse] Make some integration tests run with vectorization."
This reverts commit 2d7e3ec6b5.

This broke buildbots [1] and I can also reproduce this locally.

[1] https://lab.llvm.org/buildbot#builders/61/builds/36953
2022-12-13 13:41:28 -05:00
bixia1
2d7e3ec6b5 [mlir][sparse] Make some integration tests run with vectorization.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D139887
2022-12-13 10:02:44 -08:00
bixia1
b5d74f0e83 [mlir][sparse] Make integration tests run on both library and codegen paths.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138145
2022-11-16 12:01:14 -08:00
Peiming Liu
75ac294b35 [mlir][sparse] support parallel for/reduction in sparsification.
This patch fixes the re-reverted D135927 (which caused a Windows build failure) to re-enable parallel for/reduction. It also fixes a warning caused by D137442.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137565
2022-11-07 18:04:46 +00:00
Stella Stamenova
a2c4ca50ca Revert "[mlir][sparse] support Parallel for/reduction."
This reverts commit 838389780e.

This broke the windows mlir buildbot: https://lab.llvm.org/buildbot/#/builders/13/builds/27934
2022-11-07 08:48:52 -08:00
Peiming Liu
838389780e [mlir][sparse] support Parallel for/reduction.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D135927
2022-11-04 22:47:27 +00:00
Peiming Liu
e5cb0ee383 [mlir][sparse] run canonicalization pass after DenseOpBufferize.
As pointed out by Matthias: "DenseBufferizationPass should be run right after TensorCopyInsertionPass. (Running it after bufferizing the sparse IR is also OK.) The reason for this is that whether copies are needed or not depends on the structure of the program (SSA use-def chains). In particular, running the canonicalizer in-between is problematic because it could introduce new RaW conflicts"

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136980
2022-10-29 01:00:05 +00:00
Peiming Liu
26eb2c6b42 [mlir][sparse] remove vector support in sparsification
The sparse compiler used to generate vectorized code for sparse tensor computations, but vectorization should really be delegated to other vectorization passes for better progressive lowering.

 https://discourse.llvm.org/t/rfc-structured-codegen-beyond-rectangular-arrays/64707

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D136183
2022-10-19 18:11:29 +00:00
Aart Bik
0078f7234e [mlir][sparse] fix some asan detected leaks in integration tests
One was an oversight, but the others seem to be something that regressed.
Matthias to have a look later.

https://github.com/llvm/llvm-project/issues/57663

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D133624
2022-09-09 17:00:53 -07:00
Christian Sigg
0f2ec35691 [MLIR] Switch lit tests to %mlir_lib_dir and %mlir_src_dir replacements.
The old replacements will be removed soon:
- `%linalg_test_lib_dir`
- `%cuda_wrapper_library_dir`
- `%spirv_wrapper_library_dir`
- `%vulkan_wrapper_library_dir`
- `%mlir_runner_utils_dir`
- `%mlir_integration_test_dir`

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D133270
2022-09-06 12:34:14 +02:00
Nick Kreeger
30ceb783e2 [mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options.
The SparseTensor passes currently use opaque numbers for the CLI, despite using an enum internally. This patch exposes the enums instead of numbered items that are matched back to the enum.
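
As a hedged sketch (the exact option spellings below are assumptions for illustration), a test invocation changes roughly from a numeric value to a named enum value:

  // Before (opaque number):
  // RUN: mlir-opt %s --sparsification="vectorization-strategy=2 vl=8"
  // After (named enum value):
  // RUN: mlir-opt %s --sparsification="vectorization-strategy=any-storage-inner-loop vl=8"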

Fixes https://github.com/llvm/llvm-project/issues/53389

Differential Revision: https://reviews.llvm.org/D123876

Please also see:
https://reviews.llvm.org/D118379
https://reviews.llvm.org/D117919
2022-09-04 01:39:35 +00:00
Nick Kreeger
91470d6352 Revert "[mlir][sparse] Expose SparseTensor passes as enums instead of opaque"
This reverts commit ef25b5d93d.
2022-09-03 15:47:40 -05:00
Nick Kreeger
ef25b5d93d [mlir][sparse] Expose SparseTensor passes as enums instead of opaque
numbers for vectorization and parallelization options.

The SparseTensor passes currently use opaque numbers for the CLI,
despite using an enum internally. This patch exposes the enums instead
of numbered items that are matched back to the enum.

Fixes https://github.com/llvm/llvm-project/issues/53389

Differential Revision: https://reviews.llvm.org/D123876

Please also see:
https://reviews.llvm.org/D118379
https://reviews.llvm.org/D117919
2022-09-03 15:45:49 -05:00
Rajas Vanjape
333f98b4b6 [mlir][sparse][nfc] Use tensor.generate in sparse integration tests
Currently, dense tensors are initialized in sparse integration tests using
"bufferization.alloc_tensor" and "scf.for". This makes the code harder to read
and maintain. This diff uses tensor.generate instead to initialize dense tensors.
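
A minimal sketch (shapes and values are assumed) of initializing a dense tensor with tensor.generate:

  %c8 = arith.constant 8 : index
  %init = tensor.generate %c8 {
  ^bb0(%i : index, %j : index):
    %cst = arith.constant 1.0 : f64
    tensor.yield %cst : f64
  } : tensor<4x?xf64>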

Testing: Ran integration tests after building with -DLLVM_USE_SANITIZER=Address flag.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D131404
2022-08-08 16:44:45 +00:00
Matthias Springer
106d695287 [mlir][sparse][NFC] Update remaining test cases
Avoid to_memref, memref.alloc, and memref.dealloc where possible.

Differential Revision: https://reviews.llvm.org/D130023
2022-07-19 09:21:10 +02:00
Matthias Springer
27a431f5e9 [mlir][bufferization][NFC] Move sparse_tensor.release to bufferization dialect
This op used to belong to the sparse dialect, but there are use cases for dense bufferization as well. (E.g., when a tensor alloc is returned from a function and should be deallocated at the call site.) This change moves the op to the bufferization dialect, which now has an `alloc_tensor` and a `dealloc_tensor` op.
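
A minimal sketch (shape assumed) of the dense use case described above:

  %t = bufferization.alloc_tensor() : tensor<16xf32>
  // ... use %t ...
  bufferization.dealloc_tensor %t : tensor<16xf32>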

Differential Revision: https://reviews.llvm.org/D129985
2022-07-19 09:18:19 +02:00
Matthias Springer
c66303c287 [mlir][sparse] Switch to One-Shot Bufferize
This change removes the partial bufferization passes from the sparse compilation pipeline and replaces them with One-Shot Bufferize. One-Shot Analysis (and TensorCopyInsertion) is used to resolve all out-of-place bufferizations, dense and sparse. Dense ops are then bufferized with BufferizableOpInterface. Sparse ops are still bufferized in the Sparsification pass.

Details:
* Dense allocations are automatically deallocated, unless they are yielded from a block. (In that case the alloc would leak.) All test cases are modified accordingly. E.g., some funcs now have an "out" tensor argument that is returned from the function. (That way, the allocation happens at the call site; see the sketch after this list.)
* Sparse allocations are *not* automatically deallocated. They must be "released" manually. (No change, this will be addressed in a future change.)
* Sparse tensor copies are not supported yet. (Future change)
* Sparsification no longer has to consider inplacability. If necessary, allocations and/or copies are inserted during TensorCopyInsertion. All tensors are inplaceable by the time Sparsification is running. Instead of marking a tensor as "not inplaceable", it can be marked as "not writable", which will trigger an allocation and/or copy during TensorCopyInsertion.
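
A minimal sketch (function name and shapes are assumed) of the "out" tensor argument pattern mentioned above:

  func.func @kernel(%in: tensor<8xf32>, %out: tensor<8xf32>) -> tensor<8xf32> {
    // ... compute into %out using destination-passing-style ops ...
    return %out : tensor<8xf32>
  }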

Differential Revision: https://reviews.llvm.org/D129356
2022-07-14 09:52:48 +02:00
River Riddle
a6cef03f66 [mlir] Remove the type keyword from type alias definitions
This was a carryover from LLVM IR, where the alias definition can
be ambiguous, but MLIR type aliases have no such problems.
Having the `type` keyword is superfluous and doesn't add anything.
This commit drops it, which also nicely aligns with the syntax for
attribute aliases (which doesn't have a keyword).
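
For example (the alias name is illustrative), the change is roughly:

  // Before:
  !avector = type vector<4xf32>
  // After:
  !avector = vector<4xf32>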

Differential Revision: https://reviews.llvm.org/D125501
2022-05-16 13:54:02 -07:00
Nick Kreeger
4620032ee3 Revert "[mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options."
This reverts commit d59cf901cb.

Build fails on NVIDIA Sparse tests:
https://lab.llvm.org/buildbot/#/builders/61/builds/25447
2022-04-23 20:14:48 -05:00
Nick Kreeger
d59cf901cb [mlir][sparse] Expose SparseTensor passes as enums instead of opaque numbers for vectorization and parallelization options.
The SparseTensor passes currently use opaque numbers for the CLI, despite using an enum internally. This patch exposes the enums instead of numbered items that are matched back to the enum.

Fixes GitHub issue #53389

Reviewed by: aartbik, mehdi_amini

Differential Revision: https://reviews.llvm.org/D123876
2022-04-23 19:16:57 -05:00
River Riddle
87db8e4439 [mlir][NFC] Update textual references of func to func.func in Integration tests
The special case parsing of `func` operations is being removed.
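
For example, a test function is now spelled with the fully qualified operation name (the body here is a trivial placeholder):

  // Before: func @entry() { return }
  func.func @entry() {
    return
  }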
2022-04-20 22:17:29 -07:00
wren romano
b85ed4e0e1 [mlir][sparse] Adding standard pipeline for tests.
Addresses https://bugs.llvm.org/show_bug.cgi?id=52409 aka https://github.com/llvm/llvm-project/issues/51751

Reviewed By: aartbik, mehdi_amini

Differential Revision: https://reviews.llvm.org/D117919
2022-01-28 15:11:12 -08:00
Alexander Belyaev
57470abc41 [mlir] Move memref.[tensor_load|buffer_cast|clone] to "bufferization" dialect.
https://llvm.discourse.group/t/rfc-dialect-for-bufferization-related-ops/4712

Differential Revision: https://reviews.llvm.org/D114552
2021-11-25 11:50:39 +01:00
Mogball
a54f4eae0e [MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200

Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.

Renamed all instances of operations in the codebase and in tests.
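
For example, standard arithmetic ops were renamed to their arith counterparts (operands are placeholders):

  // Before:
  %sum = addi %a, %b : i32
  // After:
  %sum = arith.addi %a, %b : i32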

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D110797
2021-10-13 03:07:03 +00:00
Aart Bik
16b8f4ddae [mlir][sparse] add a "release" operation to sparse tensor dialect
We have several ways to materialize sparse tensors (new and convert) but no explicit operation to release the underlying sparse storage scheme at runtime (other than making an explicit delSparseTensor() library call). To simplify memory management, a sparse_tensor.release operation has been introduced that lowers to the runtime library call while keeping tensors, opaque pointers, and memrefs transparent in the initial IR.

*Note* There is obviously some tension between the concept of immutable tensors and memory management methods. This tension is addressed by simply stating that after the "release" call, no further memref related operations are allowed on the tensor value. We expect the design to evolve over time, however, and arrive at a more satisfactory view of tensors and buffers eventually.
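
A minimal sketch (the #CSR encoding alias and shapes are assumed) of the new release operation:

  %t = sparse_tensor.new %fileName : !llvm.ptr<i8> to tensor<64x64xf64, #CSR>
  // ... use %t ...
  sparse_tensor.release %t : tensor<64x64xf64, #CSR>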

Bug:
http://llvm.org/pr52046

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D111099
2021-10-05 09:35:59 -07:00
Alex Zinenko
8b58ab8ccd [mlir] Factor type reconciliation out of Standard-to-LLVM conversion
Conversion to the LLVM dialect is being refactored to be more progressive and
is now performed as a series of independent passes converting different
dialects. These passes may produce `unrealized_conversion_cast` operations that
represent pending conversions between built-in and LLVM dialect types.
Historically, a more monolithic Standard-to-LLVM conversion pass did not need
these casts as all operations were converted in one shot. Previous refactorings
have led to the requirement of running the Standard-to-LLVM conversion pass to
clean up `unrealized_conversion_cast`s even though the IR had no standard
operations in it. The pass also had to be run last among all to-LLVM
passes, in contradiction with the partial conversion logic. Additionally, the
way it was set up could produce invalid operations by removing casts between
LLVM and built-in types even when the consumer did not accept the uncasted
type, or could lead to cryptic conversion errors (recursive application of the
rewrite pattern on `unrealized_conversion_cast` as a means to indicate failure
to eliminate casts).

In fact, the need to eliminate A->B->A `unrealized_conversion_cast`s is not
specific to to-LLVM conversions and can be factored out into a separate type
reconciliation pass, which is achieved in this commit. While the cast operation
itself has a folder pattern, it is insufficient in most conversion passes as
the folder only applies to the second cast. Without complex legality setup in
the conversion target, the conversion infra will either consider the cast
operations valid and not fold them (a separate canonicalization would be
necessary to trigger the folding), or consider the first cast invalid upon
generation and stop with error. The pattern provided by the reconciliation pass
applies to the first cast operation instead. Furthermore, having a separate
pass makes it clear when `unrealized_conversion_cast`s could not have been
eliminated since it is the only reason why this pass can fail.
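
As a hedged sketch (the types are assumptions), the kind of A->B->A cast pair that such a reconciliation pass removes:

  // Two casts produced by independent partial conversions:
  %1 = builtin.unrealized_conversion_cast %0 : index to i64
  %2 = builtin.unrealized_conversion_cast %1 : i64 to index
  // The reconciliation pass replaces uses of %2 with %0 and erases both casts.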

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D109507
2021-09-09 16:51:24 +02:00
Aart Bik
c5735fada4 [mlir][sparse] enable a few vectorized runs in integration tests
Recent changes outside the sparse compiler exposed the requirement of running a
new pass (lower-affine), but this only became apparent with private testing.
By adding some vectorized runs to the integration tests, we will detect the need
for such changes earlier and, of course, also widen codegen coverage.

Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D108667
2021-08-24 16:08:01 -07:00
Aart Bik
2b013a6c8a [mlir][sparse] use proper type alias for filename ptr
Reviewed By: gussmith23

Differential Revision: https://reviews.llvm.org/D106904
2021-07-28 10:25:24 -07:00
Aart Bik
e6e79b3f0b [mlir][sparse] remove linalg-to-loops from integration tests
With the migration from linalg.copy to memref.copy, this pass
(which was there solely to handle the linalg.copy op) is no
longer required for the end-to-end path for sparse compilation.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D106073
2021-07-15 09:14:46 -07:00
Alex Zinenko
75e5f0aac9 [mlir] factor memref-to-llvm lowering out of std-to-llvm
After the MemRef has been split out of the Standard dialect, the
conversion to the LLVM dialect remained as a huge monolithic pass.
This is undesirable for the same complexity management reasons as having
a huge Standard dialect itself, and is even more confusing given the
existence of a separate dialect. Extract the conversion of the MemRef
dialect operations to LLVM into a separate library and a separate
conversion pass.

Reviewed By: herhut, silvas

Differential Revision: https://reviews.llvm.org/D105625
2021-07-09 14:49:52 +02:00
Aart Bik
96a23911f6 [mlir][sparse] complete migration to sparse tensor type
A very elaborate, but also very fun revision because all
puzzle pieces are finally "falling in place".

1. replaces linalg annotations + flags with proper sparse tensor types
2. add rigorous verification on sparse tensor type and sparse primitives
3. removes glue and clutter on opaque pointers in favor of sparse tensor types
4. migrates all tests to use sparse tensor types

NOTE: next CL will remove *all* obsoleted sparse code in Linalg

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D102095
2021-05-10 12:55:22 -07:00
Aart Bik
a2c9d4bb04 [mlir][sparse] Introduce proper sparsification passes
This revision migrates more code from Linalg into the new permanent home of
SparseTensor. It replaces the test passes with proper compiler passes.

NOTE: the actual removal of the last glue and clutter in Linalg will follow

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D101811
2021-05-04 17:10:09 -07:00