Commit Graph

284 Commits

Yinying Li
3dc621124f [mlir][sparse] Migrate tests to use new syntax (#66543)
**COO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0
: compressed(nonunique), d1 : singleton)`
`lvlTypes = [ "compressed_nu_no", "singleton_no" ]` to `map = (d0, d1)
-> (d0 : compressed(nonunique, nonordered), d1 : singleton(nonordered))`

**SortedCOO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0
: compressed(nonunique), d1 : singleton)`

**BCOO**
`lvlTypes = [ "dense", "compressed_hi_nu", "singleton" ]` to `map = (d0,
d1, d2) -> (d0 : dense, d1 : compressed(nonunique, high), d2 :
singleton)`

**BCSR**
`lvlTypes = [ "compressed", "compressed", "dense", "dense" ], dimToLvl =
affine_map<(d0, d1) -> (d0 floordiv 2, d1 floordiv 3, d0 mod 2, d1 mod
3)>` to
`map = ( i, j ) ->
      ( i floordiv 2 : compressed,
        j floordiv 3 : compressed,
        i mod 2 : dense,
        j mod 3 : dense
      )`

**Tensor and other supported formats (e.g. CCC, CDC, CCCC)**

ELL and slice are not yet supported in the new syntax; the CHECK
tests will be updated once printing is switched to emit the new
syntax.
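
For reference, a minimal sketch of how two of these encodings look in context
under the new syntax (tensor shapes and function names below are illustrative,
not taken from the migrated tests):

```
#COO = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
}>
#BCSR = #sparse_tensor.encoding<{
  map = ( i, j ) -> ( i floordiv 2 : compressed,
                      j floordiv 3 : compressed,
                      i mod 2      : dense,
                      j mod 3      : dense )
}>
// The encodings attach to tensor types exactly as before; only the attribute
// body changes.
func.func private @coo_kernel(tensor<?x?xf64, #COO>)
func.func private @bcsr_kernel(tensor<?x?xf64, #BCSR>)
```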

Previous PRs: #66146, #66309, #66443
2023-09-15 16:12:20 -04:00
Aart Bik
d2e787d5d7 [mlir][sparse][tensor] replace bufferization with empty tensor (#66450)
Rationale:
    A bufferization.alloc_tensor can be directly replaced
    with tensor.empty since these are more or less semantically
    equivalent. The latter is considered a bit more "pure"
    with respect to SSA semantics.
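
A minimal before/after sketch of the replacement (the encoding and shape are
placeholders):

```
#SV = #sparse_tensor.encoding<{ map = (d0) -> (d0 : compressed) }>

func.func @alloc_sparse() -> tensor<32xf64, #SV> {
  // Previously: %t = bufferization.alloc_tensor() : tensor<32xf64, #SV>
  %t = tensor.empty() : tensor<32xf64, #SV>
  return %t : tensor<32xf64, #SV>
}
```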
2023-09-15 11:45:42 -07:00
Yinying Li
2a07f0fd40 [mlir][sparse] Migrate more tests to use new syntax (#66443)
**Dense**
`lvlTypes = [ "dense", "dense" ]` to `map = (d0, d1) -> (d0 : dense, d1
: dense)`
`lvlTypes = [ "dense", "dense" ], dimToLvl = affine_map<(i,j) -> (j,i)>`
to `map = (d0, d1) -> (d1 : dense, d0 : dense)`

**DCSR**
`lvlTypes = [ "compressed", "compressed" ]` to `map = (d0, d1) -> (d0 :
compressed, d1 : compressed)`

**DCSC**
`lvlTypes = [ "compressed", "compressed" ], dimToLvl = affine_map<(i,j)
-> (j,i)>` to `map = (d0, d1) -> (d1 : compressed, d0 : compressed)`

**Block Row**
`lvlTypes = [ "compressed", "dense" ]` to `map = (d0, d1) -> (d0 :
compressed, d1 : dense)`

**Block Column**
`lvlTypes = [ "compressed", "dense" ], dimToLvl = affine_map<(i,j) ->
(j,i)>` to `map = (d0, d1) -> (d1 : compressed, d0 : dense)`

This is an ongoing effort: #66146, #66309
2023-09-14 23:19:57 +00:00
Yinying Li
e2e429d994 [mlir][sparse] Migrate more tests to new syntax (#66309)
CSR:
`lvlTypes = [ "dense", "compressed" ]` to `map = (d0, d1) -> (d0 :
dense, d1 : compressed)`

CSC:
`lvlTypes = [ "dense", "compressed" ], dimToLvl = affine_map<(d0, d1) ->
(d1, d0)>` to `map = (d0, d1) -> (d1 : dense, d0 : compressed)`
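
A minimal sketch of the two resulting encodings in use (the shape and function
names are illustrative):

```
#CSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : compressed)
}>
#CSC = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d1 : dense, d0 : compressed)
}>
func.func private @csr_kernel(tensor<32x64xf64, #CSR>)
func.func private @csc_kernel(tensor<32x64xf64, #CSC>)
```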

This is an ongoing effort: #66146
2023-09-14 12:21:13 -04:00
Peiming Liu
098f46dce3 [sparse] allow unpack op to return 0-ranked tensor type. (#66269)
Many frontends canonicalize scalars into rank-0 tensors; this change will
hopefully make the operation easier to use for those cases.
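
For illustration only (this is not the unpack op itself), the distinction
between a plain scalar and the rank-0 tensor form such frontends produce:

```
func.func @scalar_vs_rank0() -> (i64, tensor<i64>) {
  %scalar = arith.constant 42 : i64                 // plain scalar
  %rank0  = arith.constant dense<42> : tensor<i64>  // rank-0 tensor wrapping it
  return %scalar, %rank0 : i64, tensor<i64>
}
```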
2023-09-13 11:33:01 -07:00
Yinying Li
dbe1be9aa4 [mlir][sparse] Migrate tests to use new syntax (#66146)
`lvlTypes = [ "compressed" ]` to `map = (d0) -> (d0 : compressed)`
`lvlTypes = [ "dense" ]` to `map = (d0) -> (d0 : dense)`
2023-09-13 11:41:25 -04:00
Peiming Liu
64df1c08d0 [sparse] allow unpack op to return any integer type. (#66161) 2023-09-12 17:27:51 -07:00
Mehdi Amini
6f5ebfb987 Fix MLIR integration test that requires ARM SVE to reproduce
Fix-forward for a9f3009758
2023-09-09 15:29:00 -07:00
Mehdi Amini
a9f3009758 Switch MLIR to use the internal LIT shell by default (#65415) 2023-09-09 13:51:27 -07:00
Aart Bik
b86d3cbc12 [mlir][sparse] complete various FIXMEs in sparse support lib
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D159245
2023-08-30 21:30:25 -07:00
Peiming Liu
22e8d5b428 [mlir][sparse] Support strided convolution on dense level.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D159020
2023-08-30 20:00:50 +00:00
Peiming Liu
07bd5f20bc [mlir][sparse] Support strided convolution on compressed level.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D158912
2023-08-30 19:37:50 +00:00
Peiming Liu
96e1914aa2 [mlir][sparse] fix crash when generating convolution kernel with sparse input in DCCD format.
Reviewed By: aartbik, anlunx

Differential Revision: https://reviews.llvm.org/D159170
2023-08-30 17:49:36 +00:00
Yinying Li
51ebecf309 [mlir][sparse] Changed sparsity properties to use _ instead of -
Example: compressed-no -> compressed_no

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D158567
2023-08-23 17:00:27 +00:00
Peiming Liu
8c8aecdca9 [mlir][sparse] Supporting (non)uniqueness in SparseTensorStorage::lexDiff.
Fix copied from https://reviews.llvm.org/D156946 but with a legit test case that triggers the bug.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D158578
2023-08-23 03:48:53 +00:00
Peiming Liu
6ca0b27298 [mlir][sparse] more complicated test for dual sparse convolution kernel.
Reviewed By: anlunx

Differential Revision: https://reviews.llvm.org/D158443
2023-08-21 18:48:01 +00:00
Andrzej Warzynski
51eaee3b42 [mlir][SparseTensor] Fix test regression
Fix a regression caused by https://reviews.llvm.org/D158012. Failing
bot:
  * https://lab.llvm.org/buildbot/#/builders/179/builds/7122

Note that both `RUN` lines in the affected file were previously
tested with a similar configuration (_with_ and _without_ vectorisation).
This change restores that, though the new setting (from D158012) is
used, i.e.

  * with direct IR generation, `enable-runtime-library=true`.

This is sufficient to make the test pass and allows us to investigate
the root cause offline. Issue reported here:

  https://github.com/llvm/llvm-project/issues/64727
2023-08-16 09:37:07 +00:00
Peiming Liu
fa6726e27b [mlir][sparse] supports sparse_tensor.pack on libgen path
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D158012
2023-08-15 20:20:54 +00:00
Andrzej Warzynski
25396e1352 [mlir][test] Fix typo in a test
Remove the unnecessary `"` that prevents correct `RUN` line expansion.

Introduced in:
  *https://reviews.llvm.org/D156625

Bot failure:
  * https://lab.llvm.org/buildbot/#/builders/61/builds/47437
2023-08-11 09:37:08 +01:00
Andrzej Warzynski
23e5130ebf [mlir][test] Reland: Refactor SparseTensor CPU integration tests
CHANGES SINCE THE ORIGINAL VERSION
----------------------------------
The default test set-up was extracted from
  * SparseTensor/CPU/lit.local.cfg.
and duplicated in all tests. This is to support downstream users that
don't use these local LIT config files.

SUMMARY OF CHANGES
------------------
This patch aims to reduce test duplication. This is a direct follow-up of:
  1. https://reviews.llvm.org/D155403 (test duplication), and
  2. https://reviews.llvm.org/D155405 (code re-use),

All SVE/VLA tests are now enabled _conditionally_ and refactored to use
`mlir-cpu-runner` rather than `lli`. The former helps with test
duplication and the latter with code re-use.

A few additional refactoring changes are included.

1. To reduce verbosity, long runtime library names like:

  %mlir_native_utils_lib_dir/libmlir_c_runner_utils%shlibext

are replaced with:

  %mlir_c_runner_utils

2. In order to keep the code and the comments in sync, and to maintain
   consistency across the tests, the following:

  enable-runtime-library=true

is swapped with (and vice-versa):

  enable-runtime-library=false

Note that this change won't affect test coverage. Only a few tests
required such an update.

3. A VLS vectorization `RUN` line is added in tests where there was a
   VLA/VLS `RUN` line, but no VLS `RUN` line (with a few exceptions of
   tests that only contained one `RUN` line to begin with).

4. A few test variables are renamed/added. Most notable example (see the
   sketch below):
  * `%{options}` --> `%{sparse_compiler_opts}`
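
A sketch of what a refactored test preamble might look like with these
conventions (the compile and run commands below are placeholders, not the
exact in-tree lines):

```
// DEFINE: %{sparse_compiler_opts} = enable-runtime-library=true
// DEFINE: %{compile} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts}"
// DEFINE: %{run} = mlir-cpu-runner -e entry -entry-point-result=void -shared-libs=%mlir_c_runner_utils
//
// RUN: %{compile} | %{run} | FileCheck %s
//
// Same test with direct IR generation instead of the runtime library:
// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false
// RUN: %{compile} | %{run} | FileCheck %s
```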

TEST RUNTIME IMPROVEMENT
------------------------
Tl;Dr This change improves test execution time by ~25%.

At the moment, the following `llvm-lit` invocation takes ~7.30s on my
AArch64 workstation (with SVE):

  llvm-lit  <llvm-project>/mlir/test/Integration/Dialect/SparseTensor/CPU/

This timing doesn't change no matter what the value of the following
CMake variable is (that should disable some tests):

  MLIR_RUN_ARM_SVE_TESTS

With this patch, the execution time will indeed depend on the value of
the above CMake variable:
  * with `MLIR_RUN_ARM_SVE_TESTS=true` the timing remains intact,
  * with `MLIR_RUN_ARM_SVE_TESTS=false` the timing drops to ~5.40s (~25%
    improvement).
This is expected:
  * on average there are 4 `RUN` lines per test,
  * _without this change_ (and with `MLIR_RUN_ARM_SVE_TESTS=false`) the
    4th `RUN` line would in most cases duplicate the 3rd `RUN` line,
  * _with this change_ (and with `MLIR_RUN_ARM_SVE_TESTS=false`) the
    4th `RUN` line becomes empty.

PATCH SIZE
----------
While rather large and touching many files, most changes in this patch
are rather mechanical. All test configurations have been preserved, and
new `RUN` lines were added only in a handful of cases.

Differential Revision: https://reviews.llvm.org/D156625
2023-08-11 08:16:01 +00:00
Aart Bik
5a1f87f9fc Revert "[mlir][test] Refactor SparseTensor CPU integration tests"
This reverts commit e77e891d89.

Differential Revision: https://reviews.llvm.org/D156947
2023-08-02 15:46:41 -07:00
Andrzej Warzynski
e77e891d89 [mlir][test] Refactor SparseTensor CPU integration tests
SUMMARY OF CHANGES
------------------
This patch aims to reduce test duplication and to improve code re-use in
SparseTensor integration tests for CPU. This is a direct follow-up of:
  1. https://reviews.llvm.org/D155403 (test duplication), and
  2. https://reviews.llvm.org/D155405 (code re-use),

The key logic for this patch is implemented in:
  * SparseTensor/CPU/lit.local.cfg.
Essentially, the set-up that used to be repeated across all test files
has been extracted into a common LIT configuration file. This makes code
re-use straightforward.

All SVE/VLA tests are now enabled _conditionally_ and refactored to use
`mlir-cpu-runner` rather than `lli`. The former helps with test
duplication and the latter with code re-use.

A few additional refactoring changes are included.

1. To reduce verbosity, long runtime library names like:

  %mlir_native_utils_lib_dir/libmlir_c_runner_utils%shlibext

are replaced with:

  %mlir_c_runner_utils

2. In order to keep the code and the comments in sync, and to maintain
   consistency across the tests, the following:

  enable-runtime-library=true

is swapped with (and vice-versa):

  enable-runtime-library=false

Note that this change won't affect test coverage. Only a few tests
required such an update.

3. A VLS vectorization `RUN` line is added in tests where there was a
   VLA/VLS `RUN` line, but no VLS `RUN` line (with a few exceptions of
   tests that only contained one `RUN` line to begin with).

4. A few test variables are renamed/added. Most notable example:
  * `%{options}` --> `%{sparse_compiler_opts}`

TEST RUNTIME IMPROVEMENT
------------------------
Tl;Dr This change improves test execution time by ~25%.

At the moment, the following `llvm-lit` invocation takes ~7.30s on my
AArch64 workstation (with SVE):

  llvm-lit  <llvm-project>/mlir/test/Integration/Dialect/SparseTensor/CPU/

This timing doesn't change no matter what the value of the following
CMake variable is (that should disable some tests):

  MLIR_RUN_ARM_SVE_TESTS

With this patch, the execution time will indeed depend on the value of
the above CMake variable:
  * with `MLIR_RUN_ARM_SVE_TESTS=true` the timing remains intact,
  * with `MLIR_RUN_ARM_SVE_TESTS=false` the timing drops to ~5.40s (~25%
    improvement).
This is expected:
  * on average there are 4 `RUN` lines per test,
  * _without this change_ (and with `MLIR_RUN_ARM_SVE_TESTS=false`) the
    4th `RUN` line would in most cases duplicate the 3rd `RUN` line,
  * _with this change_ (and with `MLIR_RUN_ARM_SVE_TESTS=false`) the
    4th `RUN` line becomes empty.

PATCH SIZE
----------
While rather large and touching many files, most changes in this patch
are rather mechanical. All test configurations have been preserved, and
new `RUN` lines were added only in a handful of cases.

Differential Revision: https://reviews.llvm.org/D156625
2023-08-02 20:21:50 +00:00
Andrzej Warzynski
e62f366b01 [mlir] Update SVE integration tests to use mlir-cpu-runner
With the recent addition of "-mattr" and "-march" to the list of options
supported by mlir-cpu-runner [1], the SVE integration
tests can be updated to use mlir-cpu-runner instead of lli. This will
allow better code re-use and more consistency.

This patch updates 2 tests to demonstrate the new logic. The remaining
tests will be updated in the follow-up patches.

[1] https://reviews.llvm.org/D146917
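
An illustrative RUN line using the new flags (the lowering pipeline and shared
library are placeholders; only the -march/-mattr flags come from the change
described above):

```
// RUN: mlir-opt %s --test-lower-to-llvm | \
// RUN:   mlir-cpu-runner --march=aarch64 --mattr=+sve \
// RUN:     -e entry -entry-point-result=void \
// RUN:     -shared-libs=%mlir_c_runner_utils | \
// RUN:   FileCheck %s
```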

Depends on D155403

Differential Revision: https://reviews.llvm.org/D155405
2023-07-19 08:29:17 +00:00
Andrzej Warzynski
aa9a10ac1d [mlir][SparseTensor][ArmSVE] Conditionally disable SVE RUN line
This patch updates one SparseTensor integration test so that the VLA
vectorisation is run conditionally based on the value of the
MLIR_RUN_ARM_SME_TESTS CMake variable.

This change opens the path to reduce the duplication of RUN lines in
"mlir/test/Integration/Dialect/SparseTensor/CPU/". ATM, there are
usually 2 RUN lines to test vectorization in SparseTensor integration
tests:
  * one for VLS vectorisation,
  * one for VLA vectorisation whenever that's available and which
    reduces to VLS vectorisation when VLA is not supported.
When VLA is not available, VLS vectorisation is verified twice. This
duplication should be avoided, as integration tests are relatively
expensive to run.

This patch makes sure that the 2nd vectorisation RUN line becomes:
```
  if (SVE integration tests are enabled)
    run VLA vectorisation
  else
    return
```
This logic is implemented using LIT's (relatively new) conditional
substitution [1]. It enables us to guarantee that all RUN lines are
unique and that the VLA vectorisation is only enabled when supported.

This patch updates only one test, to set up and demonstrate the logic.
Subsequent patches will update the remaining tests.

[1] https://www.llvm.org/docs/TestingGuide.html

Differential Revision: https://reviews.llvm.org/D155403
2023-07-18 06:59:08 +00:00
Peiming Liu
fc5d8fce7d [mlir][sparse] support dual sparse convolution.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D152601
2023-07-10 16:49:32 +00:00
Peiming Liu
a63d6a0014 [mlir][sparse] make UnpackOp return the actual filled length of unpacked memory
This might simplify frontend implementation by avoiding recomputation for the same value.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D154244
2023-06-30 21:35:15 +00:00
Peiming Liu
e7df82816b [mlir][sparse] rewrite arith::SelectOp to semiring operations to sparsify it.
Reviewed By: aartbik, K-Wu

Differential Revision: https://reviews.llvm.org/D153397
2023-06-21 21:22:18 +00:00
Peiming Liu
faf7cd97d0 [mlir][sparse] merger extension to support sparsifying arith::CmpI/CmpF operation
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D152761
2023-06-15 17:26:50 +00:00
Aart Bik
80fe3168b5 [mlir][sparse] add support for direct prod/and/min/max reductions
We recently fixed a bug in "sparsifying" such reductions, since
it incorrectly changed them into reductions over stored elements
only, which only works for add/sub/or/xor. However, we still want
to be able to "sparsify" the reductions even in the general case,
and this is a first step by rewriting them into a custom reduction
that feeds in the implicit zeros. NOTE HOWEVER, that in the long run
we want to do this better and feed in any implicit zero only ONCE
for efficiency.
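
As a concrete illustration: for a sparse vector with stored values {2, 3}, the
true product is 0 because of the implicit zeros, whereas a product over stored
elements only would yield 6. Below is a minimal sketch of a product reduction
expressed through the custom sparse_tensor.reduce op (the encoding, trait, and
function name are illustrative; the lvlTypes form was the syntax current at
the time):

```
#SV = #sparse_tensor.encoding<{ lvlTypes = [ "compressed" ] }>

#trait = {
  indexing_maps = [ affine_map<(i) -> (i)>, affine_map<(i) -> ()> ],
  iterator_types = ["reduction"]
}

func.func @prod(%arg0: tensor<8xf64, #SV>, %out: tensor<f64>) -> tensor<f64> {
  %c1 = arith.constant 1.0 : f64
  %0 = linalg.generic #trait
         ins(%arg0 : tensor<8xf64, #SV>)
         outs(%out : tensor<f64>) {
    ^bb0(%x: f64, %acc: f64):
      // Custom reduction region: the sparsifier can feed the implicit zeros
      // into this block, which a plain multiply over stored values never sees.
      %r = sparse_tensor.reduce %x, %acc, %c1 : f64 {
        ^bb0(%a: f64, %b: f64):
          %m = arith.mulf %a, %b : f64
          sparse_tensor.yield %m : f64
      }
      linalg.yield %r : f64
  } -> tensor<f64>
  return %0 : tensor<f64>
}
```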

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D152580
2023-06-12 09:27:47 -07:00
Aart Bik
e2167d89db [mlir][sparse] refine absent branch feeding into custom op
Document better that unary/binary may only feed to the output
or the input of a custom reduction (not even a regular reduction
since it may have "no value"!). Also fixes a bug when the present
branch is empty and feeds into a custom reduction.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D152224
2023-06-06 09:57:15 -07:00
Peiming Liu
23dc96bbe4 [mlir][sparse] fix crashes when using custom reduce with unary operation.
The test case is directly copied from https://reviews.llvm.org/D152179, authored by @aartbik.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D152204
2023-06-05 23:41:26 +00:00
Peiming Liu
e7b4c93f5e [mlir][sparse] fix crash when using sparse_tensor::UnaryOp and ReduceOp.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D152048
2023-06-03 01:19:05 +00:00
Aart Bik
6a38c772d4 [mlir][sparse] fixed bug with unary op, dense output
Note that, by sparse compiler convention, dense output
is zeroed out when not set, so the complement results in
zeros where elements were present.

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D152046
2023-06-02 18:15:33 -07:00
Peiming Liu
ce6f8c5afe [mlir][sparse] fix various bug to support sparse pooling
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D151776
2023-06-02 17:34:47 +00:00
Aart Bik
378f1885e3 [mlir][sparse] enhance sparse reduction support
Formerly, we accepted and/prod reductions as standard
reductions, but these change the semantics after sparsification
by not looking at implicit zeros. Therefore, we only accept
standard reductions that are insensitive to implicit vs.
explicit zeros, and leave the more complex reductions to
the sparse_tensor.reduce custom reduction implementation.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D151929
2023-06-01 16:30:21 -07:00
Peiming Liu
54ac02dd16 [mlir][sparse] fix crashes when generating conv_2d_nchw_fchw with Compressed Dense Compressed Dense sparse encoding.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D151773
2023-05-31 18:06:01 +00:00
wren romano
540d5e0ce6 [mlir][sparse] Updating STEA parser/printer to use the name "dimSlices"
Depends On D151505

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D151513
2023-05-30 15:50:07 -07:00
wren romano
76647fce13 [mlir][sparse] Combining dimOrdering+higherOrdering fields into dimToLvl
This is a major step along the way towards the new STEA design.  While a great deal of this patch is simple renaming, there are several significant changes as well.  I've done my best to ensure that this patch retains the previous behavior and error-conditions, even though those are at odds with the eventual intended semantics of the `dimToLvl` mapping.  Since the majority of the compiler does not yet support non-permutations, I've also added explicit assertions in places that previously had implicitly assumed it was dealing with permutations.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D151505
2023-05-30 15:19:50 -07:00
Peiming Liu
db7f639b90 [mlir][sparse] fix a crash when generating sparse convolution with nchw input
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D151744
2023-05-30 20:16:54 +00:00
Tobias Hieta
f9008e6366 [NFC][Py Reformat] Reformat python files in mlir subdir
This is an ongoing series of commits that are reformatting our
Python code.

Reformatting is done with `black`.

If you end up having problems merging this commit because you
have made changes to a Python file, the best way to handle that
is to run `git checkout --ours <yourfile>` and then reformat it
with `black`.

If you run into any problems, post to discourse about it and
we will try to help.

RFC Thread below:

https://discourse.llvm.org/t/rfc-document-and-standardize-python-code-style

Differential Revision: https://reviews.llvm.org/D150782
2023-05-26 08:05:40 +02:00
Peiming Liu
f7b8b005ff [mlir][sparse] fix bugs when computing the memory size when lowering pack op.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D151481
2023-05-25 19:19:52 +00:00
Peiming Liu
b2e6b73544 [mlir][sparse] extend unpack operation to unpack arbitrary encodings.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D151174
2023-05-23 22:34:01 +00:00
Peiming Liu
de56088866 [mlir][sparse] Support packing external data into arbitrary sparse tensor encoding.
We previously only supported packing two arrays (values and coordinates) into COO tensors.
This patch allows packing inputs into an arbitrary sparse tensor format.

It also deletes the "implicit" data canonicalization performed inside the sparse compiler,
and instead requires users to canonicalize the data before passing it to the sparse compiler.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D150916
2023-05-19 17:41:49 +00:00
wren romano
f56a7383f0 [mlir][sparse] Fixing sparse_reshape.mlir integration test (followup to D150822)
For some reason, even though D150822 passed the buildbot, it failed to
catch this test.

Reviewed By: anlunx

Differential Revision: https://reviews.llvm.org/D150830
2023-05-17 16:56:47 -07:00
wren romano
a0615d020a [mlir][sparse] Renaming the STEA field dimLevelType to lvlTypes
This commit is part of the migration towards the new STEA syntax/design.  In particular, this commit includes the following changes:
* Renaming compiler-internal functions/methods:
  * `SparseTensorEncodingAttr::{getDimLevelType => getLvlTypes}`
  * `Merger::{getDimLevelType => getLvlType}` (for consistency)
  * `sparse_tensor::{getDimLevelType => buildLevelType}` (to help reduce confusion vs actual getter methods)
* Renaming external facets to match:
  * the STEA parser and printer
  * the C and Python bindings
  * PyTACO

However, the actual renaming of the `DimLevelType` itself (along with all the "dlt" names) will be handled in a separate commit.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D150330
2023-05-17 14:24:09 -07:00
Anlun Xu
6116ca67ab [mlir][sparse] Add sparse rewriting rules for tensor::ReshapeOp
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D149564
2023-05-16 14:56:33 -07:00
Andrzej Warzynski
20bf8c403c [mlir][SparseTensor][ArmSVE] Disable scalable vectorisation in a test
The MLIR SVE integration tests are now enabled in the
clang-aarch64-full-2stage buildbot under emulation (QEMU) and one of the
sparse integration tests is failing [1]:

  * Integration/Dialect/SparseTensor/CPU/concatenate_dim_1.mlir

That test is failing because we don't have a LIT substitution to
replace:
  ```
  ; RUN: mlir-cpu-runner <command>
  ```
with
  ```
  ; RUN: <emulator> mlir-cpu-runner <command>
  ```
clang-aarch64-full-2stage does not support SVE natively and hence all
SVE integration tests require emulation. Other SVE tests use `lli` (for
which we do have the required substitution) and hence are not affected.

This patch simplifies concatenate_dim_1.mlir to always use fixed-width
vectorisation. We will re-enable scalable vectorisation once LIT
substitutions for `mlir-cpu-runner` are updated.

[1] https://lab.llvm.org/buildbot/#/builders/179/builds/6062
2023-05-02 21:14:38 +00:00
Cullen Rhodes
707b6e94b8 [mlir][SparseTensor][ArmSVE] Fix missing lli substitutions
The MLIR SVE integration tests are now enabled in the
clang-aarch64-full-2stage buildbot under emulation (QEMU) and two of the
sparse integration tests are failing [1]:

  * mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sorted_coo.mlir
  * mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_spmm.mlir

The reason for this is that the SVE RUN lines use plain 'lli' rather than
the '%lli_host_or_aarch64_cmd' substitution that's necessary to run under
emulation. The CI doesn't support SVE, so the tests will SIGILL unless
run under emulation.

I should note the logs don't show a SIGILL, only the nondescript:

  FileCheck error: '<stdin>' is empty.

but I expect this is what's actually happening.

[1] https://lab.llvm.org/buildbot/#/builders/179/builds/6051/steps/12/logs/stdio
2023-05-02 14:43:48 +00:00
Peiming Liu
d4db528938 [mlir][sparse] extend unpack operation to support unpacking a batched COO type
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D149103
2023-05-01 18:17:29 +00:00
Cullen Rhodes
baafc74ab0 [mlir][test][Integration] Refactor Arm emulator configuration
The logic enabling the Arm SVE (and now SME) integration tests for
various dialects, which may run under emulation, is currently duplicated in
several places.

This patch moves the configuration to the top-level MLIR integration
tests Lit config and renames the '%lli' substitution to '%lli_aarch64_cmd'
in contexts where it will run exclusively on AArch64 (ArmSVE, ArmSME),
possibly under emulation, and to '%lli_host_or_aarch64_cmd' in contexts
where it may run on AArch64 (also possibly under emulation). The
latter is for integration tests that have target-specific and
target-agnostic codepaths such as SparseTensor, which supports scalable
vectors.

The two substitutions have the same effect but the names are different to
convey this information. The '%lli_aarch64_cmd' substitution could be
used in the SparseTensor tests, but that would be a misnomer if the host
were x86 and MLIR_RUN_SVE_TESTS=OFF.

The reason for renaming the '%lli' substitution is to avoid interfering with
other target-specific integration tests run at the same time, since the same
'%lli' substitution is used for lli in other integration tests:

  * mlir/test/Integration/Dialect/Vector/CPU/X86Vector              - (AVX emulation via Intel SDE)
  * mlir/test/Integration/Dialect/Vector/CPU/AMX                    - (AMX emulation via Intel SDE)
  * mlir/test/Integration/Dialect/LLVMIR/CPU/test-vp-intrinsic.mlir - (RISCV emulation via QEMU if supported, native otherwise)

and substituting '%lli' at the top level with Arm-specific logic would override
this.

Reviewed By: awarzynski

Differential Revision: https://reviews.llvm.org/D148929
2023-04-26 09:57:43 +00:00