Commit Graph

13776 Commits

bixia1
a0568eabaf [mlir][sparse] Add dependence on bufferization.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139571
2022-12-07 15:18:36 -08:00
Mahesh Ravishankar
242d5b2ba4 [mlir][Transforms] Simplify region before simplifying operation in CSE.
This covers more cases for CSE. It also ensures that two operations
that have the same operands but initially different regions, whose
regions become identical after `simplifyRegions`, are not both added to
the list of `knownValues`.
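
A hypothetical sketch of the case this handles:

```
// The two scf.if ops below differ only in a dead constant inside the
// first region. Once `simplifyRegions` removes it, the regions are
// identical and CSE can replace %1 with %0.
%0 = scf.if %cond -> (i32) {
  %dead = arith.constant 0 : i32
  scf.yield %a : i32
} else {
  scf.yield %b : i32
}
%1 = scf.if %cond -> (i32) {
  scf.yield %a : i32
} else {
  scf.yield %b : i32
}
```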

Fixes #59135

Differential Revision: https://reviews.llvm.org/D139490
2022-12-07 23:11:14 +00:00
Jakub Kuderski
bafc3a2b22 [mlir][arith] Fix comment typo. NFC. 2022-12-07 17:21:41 -05:00
Jakub Kuderski
28246b7e75 [mlir][arith] Rename addui_carry to addui_extended
The goal is to make the naming of the future `_extended` ops more
consistent. With unsigned addition, the carry value/flag and overflow
bit are the same, but this is not true when it comes to signed addition.

Also rename the second result from `carry` to `overflow`.
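
A sketch of the op after the rename (operand names are hypothetical):

```
// Unsigned extended addition: returns the sum and an overflow bit
// (formerly named `carry`).
%sum, %overflow = arith.addui_extended %lhs, %rhs : i32, i1
```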

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D139569
2022-12-07 17:15:56 -05:00
Alexander Belyaev
f6fb0a4f35 [mlir] Make patterns for folding tensor.empty optional.
At the moment, they are part of EmptyOp::getCanonicalizationPatterns. When
extract_slice(tensor.empty) is rewritten as a new tensor.empty, we could
end up with two tensor.empty ops, since the original tensor.empty can have
two users. After bufferization, such cases result in two allocations.
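
A sketch of the rewrite and the resulting pitfall (hypothetical shapes):

```
// The folding pattern rewrites an extract_slice of a tensor.empty ...
%e = tensor.empty() : tensor<8x8xf32>
%s = tensor.extract_slice %e[0, 0] [4, 4] [1, 1]
    : tensor<8x8xf32> to tensor<4x4xf32>
// ... into a fresh, smaller tensor.empty:
%s2 = tensor.empty() : tensor<4x4xf32>
// If %e has another user, both tensor.empty ops survive and bufferize
// into separate allocations, hence making the pattern optional.
```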

Differential Revision: https://reviews.llvm.org/D139308
2022-12-07 23:01:34 +01:00
Javier Setoain
da291bab81 [mlir] Add hoisting of transfer ops in affine loops
The only way to do this with the current hoisting strategy is by
lowering Affine to SCF first, but that prevents further passes on
Affine.
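
A hypothetical sketch of a loop this now applies to:

```
// The transfer pair below is loop-invariant: hoisting moves the read
// before the loop, carries %w as an iteration argument, and moves the
// write after the loop.
%c0 = arith.constant 0 : index
%f0 = arith.constant 0.0 : f32
affine.for %i = 0 to 128 {
  %v = vector.transfer_read %buf[%c0], %f0 : memref<4xf32>, vector<4xf32>
  %w = arith.addf %v, %v : vector<4xf32>
  vector.transfer_write %w, %buf[%c0] : vector<4xf32>, memref<4xf32>
}
```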

Differential Revision: https://reviews.llvm.org/D137600
2022-12-07 20:08:07 +00:00
bixia1
19cde2df95 [mlir][sparse] Improve concatenate operation conversion for the case with annotated all dense result.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139345
2022-12-07 12:06:50 -08:00
Rob Suderman
8e7630ece1 [mlir][tosa] Fix tosa.resize for i48 accumulator
The implementation assumed an i32 accumulator. Fixed the implementation to
work with an i48 accumulator.

Reviewed By: NatashaKnk

Differential Revision: https://reviews.llvm.org/D139365
2022-12-07 11:27:33 -08:00
Thomas Raoux
f7fda6ba4a [mlir][linalg] Add extra parameter to tiling reduction to foreach_thread
This adds a tile_size parameter; when it is used, the tiles are
cyclically distributed onto the threads of the scf.foreach_thread op.

Differential Revision: https://reviews.llvm.org/D139474
2022-12-07 18:37:05 +00:00
Mitch Phillips
538f69f69c Disable flaky MLIR async.mlir test on ASan.
Test keeps flaking on the ASan buildbot:
https://github.com/llvm/llvm-project/issues/57231
2022-12-07 09:47:08 -08:00
Emilio Cota
72d76a2403 [mlir][bufferize] lower allocation alignment from 128 to 64 bytes
While it is unlikely to matter in practice, there is no reason
for this value to be larger than necessary. 64 bytes is the
size of a cache line on most machines, and we can fit a full
512-bit vector in it.
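
For illustration, an allocation with the new alignment could look like
this sketch (not the literal pass output):

```
// 64-byte alignment still keeps a full 512-bit vector within a
// single cache line.
%buf = memref.alloc() {alignment = 64 : i64} : memref<1024xf32>
```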

Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D139434
2022-12-07 11:12:46 -05:00
Matthias Springer
9cdf6b641d [mlir][tensor] Support parallel_insert_slice in reassociative reshape folder
Differential Revision: https://reviews.llvm.org/D139540
2022-12-07 16:25:10 +01:00
Will Dietz
d41b3bf7c3 [mlir][Pass] Fix dropped statistics with nested adaptors.
When running in parallel, nesting more than once caused
statistics to be dropped.

Fix by also preparing "async" pass managers before merging,
as they may also have "async" pass managers within.

Add a test checking that reported statistics have the expected values
with and without threading enabled.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D139459
2022-12-07 08:31:43 -06:00
Quentin Colombet
9cbd136db4 [mlir][NFC] Add a new getStridesAndOffset function
The new function is a wrapper around the regular `getStridesAndOffset`
that offers a more compact way (as in writing less code) of getting the
relevant information.

This method is intended to be used only when it is known that the
LogicalResult of the regular `getStridesAndOffset` must be "succeeded".

This wrapper will assert on that.

Differential Revision: https://reviews.llvm.org/D139529
2022-12-07 13:58:28 +00:00
Lorenzo Chelini
87ecf9d155 [MLIR][Tensor] Add custom builder for unpack op
Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D139344
2022-12-07 12:40:45 +01:00
Matthias Springer
5d04f0c937 [mlir][bufferize] Update remaining getMemorySpaceAsInt API uses
D138330 updated the deprecated `getMemorySpaceAsInt` uses to `getMemorySpace`. A few uses were missed.

Differential Revision: https://reviews.llvm.org/D139526
2022-12-07 12:28:14 +01:00
Markus Böck
836852878c [mlir][tblgen][docs] Use correct introductory prefix for the Syntax description of attributes and types
The doc generator currently has the use of `!` as prefix hardcoded, despite being incorrect for Attributes. These start with `#`.

This patch fixes that little issue by using `#` for AttrDefs and `!` for TypeDefs in the `Syntax` field of the generated Markdown file.
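
For illustration (hypothetical dialect, attribute, and type names), the two prefixes as they appear in IR:

```
// Attributes are introduced with `#`, types with `!`.
%0 = "test.op"() {attr = #mydialect.myattr<1>} : () -> !mydialect.mytype
```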

Differential Revision: https://reviews.llvm.org/D139524
2022-12-07 12:25:47 +01:00
Matthias Springer
0abf513d0f [mlir][bufferize] Support parallel_insert_slice in EmptyTensorElimination
Differential Revision: https://reviews.llvm.org/D139431
2022-12-07 11:39:12 +01:00
Jakub Kuderski
0d691ac447 [mlir][spirv] Fix integer dot product format attr validation
Do not allow format attributes for vector (non-scalar) operands.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D139495
2022-12-06 23:29:42 -05:00
Mahesh Ravishankar
0478401c43 [mlir][TilingInterface] Add test for tile + fuse of sequence of reductions.
This just adds a test. With CSE of single-block ops and other
previously landed changes, this works at HEAD. This is the test that
triggered this line of work, which I had missed adding earlier.

Differential Revision: https://reviews.llvm.org/D139385
2022-12-07 02:19:19 +00:00
liqinweng
d38c9e68b6 [MLIR][Arith] Add Canonicalize test for trunci
Reviewed By: Mogball

Differential Revision: https://reviews.llvm.org/D139399
2022-12-07 10:06:21 +08:00
Jakub Kuderski
f7f4dd6743 [mlir][spirv] Define spirv.*DotAccSat integer dot product ops
This covers `SDotAccSat`, `SUDotAccSat`, and `UDotAccSat`.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D139243
2022-12-06 20:22:48 -05:00
Jakub Kuderski
03e6bf5f56 [mlir][spirv] Define spirv.*Dot integer dot product ops
This covers `SDot`, `SUDot`, and `UDot`. The `*AccSat` version will be
added in a follow-up revision.
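
For illustration, a sketch of the vector form of one of these ops
(operand names are hypothetical):

```
// Signed dot product of two i8 vectors, producing an i32 scalar.
%r = spirv.SDot %a, %b : (vector<4xi8>, vector<4xi8>) -> i32
```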

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D139242
2022-12-06 20:17:41 -05:00
Aart Bik
8e0abf8d61 [mlir][sparse] cleanup some pass documentation
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D139473
2022-12-06 15:37:39 -08:00
Hanhan Wang
0f297cad4d [mlir][tensor][linalg] Introduce DataLayoutPropagation pass.
It introduces a pattern that swaps `linalg.generic + tensor.pack` to
`tensor.pack + linalg.generic`. It requires all iteration types to be
parallel and the indexing map of the output operand to be the identity.
These restrictions can all be relaxed in the future.

The user can decide whether the propagation should be applied or not by
passing a control function.
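
A sketch of the "before" IR the pattern rewrites, with hypothetical
shapes and tile sizes; after propagation, the input is packed first and
the generic runs directly on the packed layout:

```
#id = affine_map<(d0, d1) -> (d0, d1)>
func.func @before(%a: tensor<128x256xf32>,
                  %dest: tensor<16x8x8x32xf32>) -> tensor<16x8x8x32xf32> {
  %init = tensor.empty() : tensor<128x256xf32>
  // Elementwise generic on the unpacked tensor ...
  %g = linalg.generic
      {indexing_maps = [#id, #id],
       iterator_types = ["parallel", "parallel"]}
      ins(%a : tensor<128x256xf32>) outs(%init : tensor<128x256xf32>) {
  ^bb0(%in: f32, %out: f32):
    %n = arith.negf %in : f32
    linalg.yield %n : f32
  } -> tensor<128x256xf32>
  // ... followed by the pack that the pattern swaps ahead of it.
  %p = tensor.pack %g inner_dims_pos = [0, 1] inner_tiles = [8, 32]
      into %dest : tensor<128x256xf32> -> tensor<16x8x8x32xf32>
  return %p : tensor<16x8x8x32xf32>
}
```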

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D138882
2022-12-06 15:00:07 -08:00
Aart Bik
65074179f2 [mlir][sparse] make fusion for SDDMM more robust
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D139456
2022-12-06 14:32:19 -08:00
Krzysztof Parzyszek
c589730ad5 [YAML] Convert Optional to std::optional 2022-12-06 12:49:32 -08:00
Wheest
5ad919a1d6 [mlir] [docs] Broken link in MLIR Toy docs
In Ch6 of the Toy example for MLIR, there is a broken link. If one goes to [the page for Chapter 6](https://mlir.llvm.org/docs/Tutorials/Toy/Ch-6/) and clicks on the link "Conversion to the LLVM IR Dialect", it leads to a page that no longer exists.

I believe this should actually be [this link](https://mlir.llvm.org/docs/TargetLLVMIR/).

Note to reviewers that this is my first submitted patch to LLVM, and using the phabricator system, so there is a higher risk that I have made an error, and brief feedback on these patch notes would be appreciated.

Reviewer rationale: users whom git blame indicates contributed to the tutorial.

I believe that automated tests on these markdown docs could reduce the risk of this kind of error occurring again. For example, [this Python package](https://pypi.org/project/linkcheckmd/) checks for broken markdown links. However, I am unsure where this could go in the existing testing infrastructure, as I am only somewhat familiar with the C++ side.

Reviewed By: Mogball

Differential Revision: https://reviews.llvm.org/D133977
2022-12-06 12:14:59 -08:00
Hanhan Wang
193cefd1b1 [mlir][tensor] Adapt FoldTensorCastProducerOp pattern on DPS interface.
This revision adapts the pattern in Linalg to work on the DPS interface
and adds it to the canonicalization patterns of the tensor dialect. The
InsertSliceOp is skipped in the pattern because it has its own logic
for folding tensor.cast ops.

Reviewed By: pifon2a

Differential Revision: https://reviews.llvm.org/D139375
2022-12-06 12:13:37 -08:00
Kevin Gleason
279d294d26 Use consistent spacing before custom directives for op and attr/type assemblyFormat.
Currently, assemblyFormat `custom<A>($a) custom<B>($b)` has different spacing
if used for Ops vs Attrs/Types. Ops insert a space if needed before the custom directive,
while attributes and types do not.

This leads to the following two patterns in attributes / types:

```
# 1. Whitespace literal
let assemblyFormat = "... ` ` custom<A>($a)"

# 2. Custom printer code includes spacing
void printB(...) {
  printer << ' ' << b;
}
```

Moving this spacing into the generated code allows for some cleanup in mlir and
improves the consistency of custom directives.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D138235
2022-12-06 12:13:02 -08:00
Mitch Phillips
969f0cba7e Revert "[mlir] Add hoisting of transfer ops in affine loops"
This reverts commit 825da072a8.

Reason: Broke the sanitizer buildbots. See original review for more
details: https://reviews.llvm.org/D137600
2022-12-06 09:44:59 -08:00
Peiming Liu
191c43f60e Revert "Revert "[mlir][sparse] Refactoring: abstract sparse tensor memory scheme into a SparseTensorDescriptor class.""
This reverts commit 10033a179f. It also fixes Windows warnings and GCC errors.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139384
2022-12-06 17:12:06 +00:00
bixia1
3032c07d3a [mlir][crunner] Add support for random number generation.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139374
2022-12-06 08:54:00 -08:00
Adrian Kuegel
f083c9bdef [mlir][SparseTensor] Apply ClangTidyLegacy finding (NFC).
Instead of converting an integer literal to bool, use a bool literal.
2022-12-06 13:29:47 +01:00
Javier Setoain
825da072a8 [mlir] Add hoisting of transfer ops in affine loops
The only way to do this with the current hoisting strategy is by
lowering Affine to SCF first, but that prevents further passes on
Affine.

Differential Revision: https://reviews.llvm.org/D137600
2022-12-06 10:07:21 +00:00
Kazu Hirata
e823abab48 [mlir] Use std::nullopt instead of None in comments (NFC)
This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-06 00:03:44 -08:00
Diego Caballero
77603e28ce [mlir] Add replaceAllUsesExcept to rewriter
This patch adds `replaceAllUsesExcept` to the rewriter class.
The implementation is copied from Value, plus a call to
`updateRootInPlace` to notify the listeners about the
corresponding IR changes.

Reviewed By: Mogball

Differential Revision: https://reviews.llvm.org/D139382
2022-12-06 07:42:15 +00:00
Fangrui Song
3cfe412e4c [TableGen] llvm::Optional => std::optional 2022-12-06 07:21:02 +00:00
Ramkumar Ramachandra
2a19625424 mlir/tosa: move tosa.pad from Linalg to Tensor conversion
Since tosa.pad is lowered strictly to arith and tensor ops, move
ConvertPad from TosaToLinalg to TosaToTensor, benefitting non-Linalg
Tosa targets. TensorToLinalg exists, and is trivial, so nothing is lost.
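
Roughly, the lowered form looks like this sketch (hypothetical shapes;
the real lowering computes the pad amounts from the padding operand):

```
// tosa.pad becomes a tensor.pad whose region yields the padding value,
// using only arith and tensor ops, no Linalg required.
%cst = arith.constant 0.0 : f32
%padded = tensor.pad %src low[1, 2] high[1, 2] {
^bb0(%i: index, %j: index):
  tensor.yield %cst : f32
} : tensor<4x4xf32> to tensor<6x8xf32>
```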

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>

Differential Revision: https://reviews.llvm.org/D139091
2022-12-06 07:39:29 +01:00
Jeff Niu
34535801d6 [mlir] UnsignedWhenEquivalent ignore dead code
The pass was not checking for states left uninitialized by dead code.
This patch also makes LLVMFuncOp correctly return a null body when it is
external.

Fixes #58807

Depends on D139388

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D139389
2022-12-05 20:38:44 -08:00
Jeff Niu
0f06da6412 [mlir][llvm] Mark LLVMReturnOp as ReturnLike
Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D139388
2022-12-05 20:38:37 -08:00
Jeff Niu
a8ccf0ef5b [mlir][ods] Allow ArrayOfAttr implicit conversion to ArrayRef
Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D139372
2022-12-05 19:13:11 -08:00
Stella Laurenzo
35d26be219 Don't use root logger at import time
At import time, these calls to `logging.debug()` implicitly call `logging.basicConfig` (https://docs.python.org/3/library/logging.html#logging.basicConfig), setting the logging config for the whole project, which cannot then be overwritten later. For instance, consider the following test script:

```
import logging
import jax

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
logger.info('info')
```
This should log `'info'`, but when `import jax` is called, this `_mlir_lib/__init__.py` file is run and a `logging.debug` is called, which in turn calls `logging.basicConfig`; as a result, my `logging.basicConfig(level=logging.INFO)` does nothing.

Fix: instead of using the root logger, use a module-level logger.

Found in this issue: https://github.com/google/jax/issues/12526

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D134812
2022-12-05 17:42:38 -08:00
Stella Stamenova
10033a179f Revert "[mlir][sparse] Refactoring: abstract sparse tensor memory scheme into a SparseTensorDescriptor class."
This reverts commit 8a7e69d145.

This broke the windows mlir buildbot: https://lab.llvm.org/buildbot/#/builders/13/builds/29257
2022-12-05 17:20:01 -08:00
wren romano
86f91e45a2 [mlir][sparse] Cleaning up the dim/lvl distinction in SparseTensorConversion
This change cleans up the conversion pass regarding the "dim"-vs-"lvl" and "sizes"-vs-"shape" distinctions of the runtime. A quick synopsis:

* Adds new `SparseTensorStorageBase::getDimSize` method, with `sparseDimSize` wrapper in SparseTensorRuntime.h, and `genDimSizeCall` generator in SparseTensorConversion.cpp
* Changes `genLvlSizeCall` to perform no logic, just generate the function call.
* Adds `createOrFold{Dim,Lvl}Call` functions to handle the logic of replacing `gen{Dim,Lvl}SizeCall` with constants whenever possible. The `createOrFoldDimCall` function replaces the old `sizeFromPtrAtDim`.
* Adds `{get,fill}DimSizes` functions for iterating `createOrFoldDimCall` across the whole type. These functions replace the old `sizesFromPtr`.
* Adds `{get,fill}DimShape` functions for lowering a `ShapedType` into constants. These functions replace the old `sizesFromType`.
* Changes the `DimOp` rewrite to do the right thing.
* Changes the `ExpandOp` rewrite to compute the proper expansion size.

Depends On D138365

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139165
2022-12-05 16:59:42 -08:00
Lei Zhang
50882b4daf [mlir] List more elementwise ops in VectorToGPU MMA conversion
Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D139244
2022-12-05 22:51:19 +00:00
Jakub Kuderski
2442aa3447 [mlir][spirv] Add extensions implied by SPIR-V 1.6
This adds existing extensions as implied by SPIR-V 1.6.

Also cleans up the surrounding code.

Fixes: https://github.com/llvm/llvm-project/issues/59348.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D139369
2022-12-05 17:40:29 -05:00
Lei Zhang
2c7827da4f [mlir][spirv] Add GPU subgroup MMA to spirv.MMAMatrixTimesScalar
Along the way, make the default pattern fail instead of crashing
when an elementwise op is not supported yet.

Reviewed By: kuhar

Differential Revision: https://reviews.llvm.org/D139280
2022-12-05 22:30:50 +00:00
Lei Zhang
3c278e5e27 [mlir][spirv] Fix spirv.MatrixTimesScalar for cooperative matrix
spirv.MatrixTimesScalar is allowed to use cooperative matrix.

Reviewed By: kuhar

Differential Revision: https://reviews.llvm.org/D139279
2022-12-05 22:13:23 +00:00
Peiming Liu
8a7e69d145 [mlir][sparse] Refactoring: abstract sparse tensor memory scheme into a SparseTensorDescriptor class.
This patch abstracts the sparse tensor memory scheme into a SparseTensorDescriptor class. Previously, field accesses were performed in a relatively error-prone way; this patch hides the hairy details behind a SparseTensorDescriptor class to allow users to access sparse tensor fields in a more cohesive way.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138627
2022-12-05 22:11:53 +00:00