Commit Graph

101 Commits

Author SHA1 Message Date
Christian Ulmann
48b126e30b [mlir][llvm] Ensure immediate usage in intrinsics
This commit changes intrinsics that have immarg parameter attributes to
model these parameters as attributes, instead of operands. Using
operands only works if the defining operation is an `llvm.mlir.constant`;
otherwise the exported LLVM IR is invalid.
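As a hedged illustration, using the volatile flag of `llvm.intr.memcpy` as a stand-in immarg parameter (operand and attribute names here are illustrative, not taken from the patch):

```
// Before: the immarg parameter is an SSA operand; the export to LLVM IR is
// only valid when that operand is defined by an llvm.mlir.constant.
%volatile = llvm.mlir.constant(false) : i1
"llvm.intr.memcpy"(%dst, %src, %len, %volatile)
    : (!llvm.ptr, !llvm.ptr, i64, i1) -> ()

// After: the parameter is an attribute, so it is constant by construction.
"llvm.intr.memcpy"(%dst, %src, %len) {isVolatile = false}
    : (!llvm.ptr, !llvm.ptr, i64) -> ()
```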

Reviewed By: gysit

Differential Revision: https://reviews.llvm.org/D151692
2023-06-12 06:57:42 +00:00
Tres Popp
68f58812e3 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration of uses of the method to the
corresponding function call, which has been decided to be more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this change does not currently include work to
support a functional cast/isa call for them.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This patch updates all remaining uses of the deprecated functionality in
mlir/. This was done with clang-tidy as described below and further
modifications to GPUBase.td and OpenMPOpsInterfaces.td.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc
```

Differential Revision: https://reviews.llvm.org/D151542
2023-05-26 10:29:55 +02:00
Fabian Mora
041f1abee1 [mlir][memref] Fix num elements in lowering of memref.alloca op to LLVM
Fixes a mistake in the lowering of memref.alloca to llvm.alloca: llvm.alloca takes the number of elements to allocate on the stack, not the size in bytes.
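A hedged sketch of the distinction (opaque-pointer form; in practice the count is derived from the memref shape rather than hardcoded):

```
// memref.alloca reserves 4 x 4 = 16 f32 elements on the stack:
%m = memref.alloca() : memref<4x4xf32>

// Correct lowering: llvm.alloca takes an element count (16), not a byte
// size (64), which is what the previous lowering passed.
%c16 = llvm.mlir.constant(16 : i64) : i64
%p = llvm.alloca %c16 x f32 : (i64) -> !llvm.ptr
```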

Reference:
LLVM IR: https://llvm.org/docs/LangRef.html#alloca-instruction
LLVM MLIR: https://mlir.llvm.org/docs/Dialects/LLVM/#llvmalloca-mlirllvmallocaop

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D150705
2023-05-22 16:23:00 +00:00
Mehdi Amini
bbe5bf1788 Cleanup uses of getAttrDictionary() in MLIR to use getDiscardableAttrDictionary() when possible
This also speeds up some benchmarks compiling a simple Fortran file by 2x!
Fixes #62687

Differential Revision: https://reviews.llvm.org/D150540
2023-05-15 11:35:50 -07:00
Oleg Shyshkov
b4d6aada62 [mlir][memref] Extract isStaticShapeAndContiguousRowMajor as a util function.
Differential Revision: https://reviews.llvm.org/D150543
2023-05-15 17:09:04 +02:00
Oleg Shyshkov
cad08503b8 [mlir][memref] Lower copy of memrefs with outer size-1 dims to intrinsic memcpy.
With this change, more `memref.copy` ops will be lowered to an efficient `memcpy`. For example,

```
memref.copy %subview, %alloc : memref<1x576xf32, strided<[704, 1]>> to memref<1x576xf32>
```
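The outer dimension has size 1, so the source stride of 704 never comes into play: both sides are a single contiguous run of 576 floats. A hedged sketch of the target form (opaque-pointer style; value names are illustrative):

```
// One intrinsic memcpy instead of a call into the generic memrefCopy
// runtime function:
"llvm.intr.memcpy"(%dst, %src, %sizeBytes) {isVolatile = false}
    : (!llvm.ptr, !llvm.ptr, i64) -> ()
```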

Differential Revision: https://reviews.llvm.org/D150448
2023-05-12 17:18:13 +02:00
Tres Popp
5550c82189 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration of uses of the method to the
corresponding function call, which has been decided to be more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this change does not currently include work to
support a functional cast/isa call for them.

Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so I waste less time if it's
reverted, and so the first mass change is clearer as an example to
other teams that will need to follow similar steps.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.
4. Some changes have been deleted for the following reasons:
   - Some files had a variable also named cast
   - Some files had not included a header file that defines the cast
     functions
   - Some files are definitions of the classes that have the casting
     methods, so the code still refers to the method instead of the
     function without adding a prefix or removing the method declaration
     at the same time.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
            mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
            mlir/lib/**/IR/\
            mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
            mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
            mlir/test/lib/Dialect/Test/TestTypes.cpp\
            mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
            mlir/test/lib/Dialect/Test/TestAttributes.cpp\
            mlir/unittests/TableGen/EnumsGenTest.cpp\
            mlir/test/python/lib/PythonTestCAPI.cpp\
            mlir/include/mlir/IR/
```

Differential Revision: https://reviews.llvm.org/D150123
2023-05-12 11:21:25 +02:00
Quentin Colombet
e02d4142dc [MemRefToLLVM] Add a method in MemRefDescriptor to get the buffer addr
This patch pushes the computation of the start address of a memref into one
place (a method on MemRefDescriptor).

This allows all the (indirect) users of this method to produce the start
address in the same way.

Thanks to this change, we expose more CSE opportunities, and thanks to
that, the backend is able to properly find the `llvm.assume` expression
related to the base address, as demonstrated in the added test.
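A minimal sketch of the shared computation (rank-0 descriptor and opaque-pointer form; field indices follow the standard MemRef descriptor layout, value names are illustrative):

```
// Start address = aligned base pointer + descriptor offset.
%base   = llvm.extractvalue %desc[1] : !llvm.struct<(ptr, ptr, i64)>
%offset = llvm.extractvalue %desc[2] : !llvm.struct<(ptr, ptr, i64)>
%addr   = llvm.getelementptr %base[%offset] : (!llvm.ptr, i64) -> !llvm.ptr, f32
```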

Differential Revision: https://reviews.llvm.org/D148947
2023-04-26 06:12:18 +02:00
Quentin Colombet
68e1aef68e [MemRefToLLVM] Fix the lowering of memref.assume_alignment
`memref.assume_alignment` annotates the alignment of the source buffer,
not the base pointer.
Put differently, prior to this patch `memref.assume_alignment` would lower
to `llvm.assume %buffer.base.isAligned(X)` whereas what we want is
`llvm.assume (%buffer.base + %buffer.offset).isAligned(X)`.
In other words, the offset was missing from the expression checked by the
`llvm.assume`.
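A hedged sketch of the corrected expression (alignment 16, opaque-pointer form; value names are illustrative):

```
// The assumption now applies to base + offset, i.e. the actual buffer start.
%buffer = llvm.getelementptr %base[%offset] : (!llvm.ptr, i64) -> !llvm.ptr, f32
%int    = llvm.ptrtoint %buffer : !llvm.ptr to i64
%mask   = llvm.mlir.constant(15 : i64) : i64
%low    = llvm.and %int, %mask : i64
%zero   = llvm.mlir.constant(0 : i64) : i64
%ok     = llvm.icmp "eq" %low, %zero : i64
"llvm.intr.assume"(%ok) : (i1) -> ()
```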

Differential Revision: https://reviews.llvm.org/D148930
2023-04-26 06:07:00 +02:00
Matthias Springer
eb7f355725 [mlir][NFC] Minor cleanups around ShapedType
* Remove unnecessary casts.
* Use concrete shaped types (e.g., `MemRefType`, `RankedTensorType`) instead of `ShapedType` when possible.
* Minor documentation cleanups.

Differential Revision: https://reviews.llvm.org/D148488
2023-04-19 11:30:45 +09:00
Alexander Belyaev
e7833c20d8 [mlir] Use splitBlock instead of createBlock in GenericAtomicRMWLowering.
When generic_atomic_rmw was inside memref.alloca_scope, the pattern would fail.

Differential Revision: https://reviews.llvm.org/D145901
2023-03-13 18:14:04 +01:00
Johannes Reifferscheid
b58daf91d6 Add a lowering for memref.dealloc with unranked memrefs.
This is permitted by the op, but the current lowering generates invalid IR.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D144090
2023-02-16 14:19:16 +01:00
Quentin Colombet
4bc2357c3d [mlir][MemRef|Tensor] Fix the handling of DimOp
Although specifying an index that is out of bounds for both `memref.dim`
and `tensor.dim` produces undefined behavior, this is still valid IR.
In particular, we could expose an out of bound index because of some
optimizations, for instance as demonstrated with
https://github.com/llvm/llvm-project/issues/60295, and this shouldn't
cause the compiler to abort.

This patch removes the overzealous verifier checks and properly handles
out of bound indices (as in it doesn't crash the compiler, but still
produces UB).

This fixes https://github.com/llvm/llvm-project/issues/60295.

Note that `shape.dim` has a similar problem but we're not supposed to
produce UB in this case. Instead we're supposed to propagate an error in
the resulting value and I don't know how to do that at the moment. Hence I
left this part out of the patch.

Differential Revision: https://reviews.llvm.org/D143999
2023-02-16 11:38:19 +01:00
Krzysztof Drewniak
7fb9bbe5f0 [mlir][Memref] Add memref.memory_space_cast and its lowerings
Address space casts are present in common MLIR targets (LLVM, SPIRV).
Some planned rewrites (such as one of the potential fixes to the fact
that the AMDGPU backend requires alloca() to live in address space 5 /
the GPU private memory space) may require such casts to be inserted
into MLIR code, where those address spaces could be represented by
arbitrary memory space attributes.

Therefore, we define memref.memory_space_cast and its lowerings.
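A hedged sketch (the memory space number follows the AMDGPU example above):

```
// Cast a private (address space 5) allocation to the default memory space.
%priv = memref.alloca() : memref<16xf32, 5>
%cast = memref.memory_space_cast %priv : memref<16xf32, 5> to memref<16xf32>
```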

Depends on D141293

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D141148
2023-02-09 21:44:57 +00:00
Krzysztof Drewniak
d0f19ce774 [mlir] Handle different pointer sizes in unranked memref descriptors
The code for unranked memref descriptors assumed that
sizeof(!llvm.ptr) == sizeof(!llvm.ptr<N>) for all address spaces N.
This is not always true (ex. the AMDGPU compiler backend has
sizeof(!llvm.ptr) = 64 bits but sizeof(!llvm.ptr<5>) = 32 bits, where
address space 5 is used for stack allocations). While this is merely
an overallocation in the case where a non-0 address space has pointers
smaller than the default, the existing code could cause OOB memory
accesses when sizeof(!llvm.ptr<N>) > sizeof(!llvm.ptr).

So, add an address spaces parameter to computeSizes in order to
partially resolve this class of bugs. Note that the LLVM data layout
in the conversion passes is currently set to "" and not constructed
from the MLIR data layout or some other source, but this could change
in the future.

Depends on D142159

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D141293
2023-02-09 19:14:58 +00:00
Krzysztof Drewniak
499abb243c Add generic type attribute mapping infrastructure, use it in GpuToX
Remapping memory spaces is a function often needed in type
conversions, most often when going to LLVM or to/from SPIR-V (a future
commit), and it is possible that such remappings may become more
common in the future as dialects take advantage of the more generic
memory space infrastructure.

Currently, memory space remappings are handled by running a
special-purpose conversion pass before the main conversion that
changes the address space attributes. In this commit, this approach is
replaced by adding a notion of type attribute conversions to the
TypeConverter, which is then used to convert memory space attributes.

Then, we use this infrastructure throughout the *ToLLVM conversions.
This has the advantage of loosening the requirements on the inputs to
those passes from "all address spaces must be integers" to "all
memory spaces must be convertible to integer spaces", a looser
requirement that reduces the coupling between portions of MLIR.
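As a hedged illustration (using the GPU dialect's memory space attribute as one plausible non-integer space):

```
// Before: the *ToLLVM passes required integer memory spaces, e.g.
//   memref<4xf32, 3>
// Now any space convertible to an integer one is accepted, e.g. the
// following, which the attribute conversion maps to addrspace(3):
func.func @f(%arg0: memref<4xf32, #gpu.address_space<workgroup>>) {
  return
}
```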

On top of that, this change leads to the removal of most of the calls
to getMemorySpaceAsInt(), bringing us closer to removing it.

(A rework of the SPIR-V conversions to use this new system will be in
a follow-up commit.)

As a note, one long-term motivation for this change is that I would
eventually like to add an allocaMemorySpace key to MLIR data layouts
and then call getMemRefAddressSpace(allocaMemorySpace) in the
relevant *ToLLVM passes in order to ensure all alloca()s, whether incoming
or produced during the LLVM lowering, have the correct address space for
a given target.

I expect that the type attribute conversion system may be useful in
other contexts.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D142159
2023-02-09 18:00:46 +00:00
Tobias Gysi
7f97895f5b [mlir][llvm] Add extra attributes to the atomic ops.
The revision adds a number of extra arguments to the
atomic read-modify-write and compare-and-exchange
operations. The extra arguments include the volatile,
weak, syncscope, and alignment attributes.

The implementation also adapts the fence operation to use
an assembly format and generalizes the helper used
to obtain the syncscope name.
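A hedged sketch of the extended read-modify-write form (typed-pointer era syntax; exact keyword placement follows the op's assembly format):

```
// volatile, syncscope, and alignment on an atomic read-modify-write:
%old = llvm.atomicrmw volatile add %ptr, %val syncscope("agent") acq_rel
    {alignment = 8 : i64} : !llvm.ptr<i64>
```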

Reviewed By: Dinistro

Differential Revision: https://reviews.llvm.org/D143554
2023-02-09 10:23:06 +01:00
Markus Böck
50ea17b849 [mlir][MemRef] Add option to -finalize-memref-to-llvm to emit opaque pointers
This is the first patch in a series of patches part of this RFC: https://discourse.llvm.org/t/rfc-switching-the-llvm-dialect-and-dialect-lowerings-to-opaque-pointers/68179

This patch adds the ability to lower the memref dialect to the LLVM Dialect with the use of opaque pointers instead of typed pointers. The latter are being phased out of LLVM and this patch is part of an effort to phase them out of MLIR as well. To do this, we'll need to support both typed and opaque pointers in lowering passes, to allow downstream projects to change without breakage.

The gist of changes required to change a conversion pass are:
* Change any `LLVM::LLVMPointerType::get` calls to NOT use an element type if opaque pointers are to be used.
* Use the `build` method of `llvm.load` with the explicit result type. Since the pointer does not have an element type anymore it has to be specified explicitly.
* Use the `build` method of `llvm.getelementptr` with the explicit `basePtrType`. Ditto to the above: we now have to specify the element type so that GEP can do its indexing calculations.
* Use the `build` method of `llvm.alloca` with the explicit `elementType`. Ditto to the above: alloca needs to know how many bytes to allocate through the element type.
* Get rid of any `llvm.bitcast`s
* Adapt the tests to the above. Note that `llvm.store` changes syntax as well when using opaque pointers

I'd like to note that the 3 `build` method changes work for both opaque and typed pointers, so unconditionally using the explicit element type form is always correct.
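A hedged before/after sketch of the pointer-type change (value names are illustrative):

```
// Typed pointers: the element type lives on the pointer type.
%p = llvm.getelementptr %base[%i] : (!llvm.ptr<f32>, i64) -> !llvm.ptr<f32>
%v = llvm.load %p : !llvm.ptr<f32>

// Opaque pointers: the element type is explicit on the operation instead.
%q = llvm.getelementptr %base2[%i] : (!llvm.ptr, i64) -> !llvm.ptr, f32
%w = llvm.load %q : !llvm.ptr -> f32
```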

For the testsuite a practical approach suggested by @ftynse was taken: I created a separate test file for testing the typed pointer lowering of Ops. This mostly comes down to checking that bitcasts have been created at the appropriate places, since these are required for typed pointer support.

Differential Revision: https://reviews.llvm.org/D143268
2023-02-08 10:15:44 +01:00
Markus Böck
eafca23037 [mlir][MemRef] Add required address space cast when lowering alloc to LLVM
alloc uses either `malloc` or a pluggable allocation function for allocating the required memory. Both of these functions always return an `llvm.ptr<i8>`, aka a pointer in the default address space. When allocating for a memref in a different memory space, however, no address space cast is created, leading to invalid LLVM IR being generated.

This is currently not caught by the verifier, since the pointer to the memory is always bitcast and bitcast currently lacks a verifier disallowing address space casts. Translating to actual LLVM IR would trip the verifier, since bitcast cannot convert from one address space to another: https://godbolt.org/z/3a1z97rc9

This patch fixes that issue by generating an address space cast if the address space of the allocation function does not match the address space of the resulting memref.
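A hedged sketch (opaque-pointer form; memory space 1 chosen for illustration):

```
// malloc always yields a pointer in the default address space ...
%raw = llvm.call @malloc(%bytes) : (i64) -> !llvm.ptr
// ... so allocating for a memref in memory space 1 now inserts a cast:
%ptr = llvm.addrspacecast %raw : !llvm.ptr to !llvm.ptr<1>
```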

Not sure whether this is actually a real life problem. I found this issue while converting the pass to using opaque pointers which gets rid of all the bitcasts and hence caused type errors without the address space cast.

Differential Revision: https://reviews.llvm.org/D143341
2023-02-06 12:10:07 +01:00
Guray Ozen
1cb91b421e [mlir] Add nontemporal field to memref.load/store and convey to llvm.load/store
The `llvm.load` op has a nontemporal field that `memref.load` and `memref.store` lack. This revision first adds a nontemporal field to memref's load/store ops, then lowers the field to the llvm.load/store ops.
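A hedged sketch of the new flag (value names are illustrative):

```
// The nontemporal hint on memref.load/store ...
%v = memref.load %m[%i] {nontemporal = true} : memref<128xf32>
memref.store %v, %m[%i] {nontemporal = true} : memref<128xf32>
// ... is conveyed to the corresponding llvm.load/llvm.store when lowered.
```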

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D142616
2023-02-03 14:03:38 +01:00
Quentin Colombet
cb4ccd38fa [mlir][Conversion] Rename the MemRefToLLVM pass
Since the recent MemRef refactoring that centralizes the lowering of
complex MemRef operations outside of the conversion framework, the
MemRefToLLVM pass doesn't directly convert these complex operations.

Instead, to fully convert the whole MemRef dialect space, MemRefToLLVM
needs to run after `expand-strided-metadata`.

Make this more obvious by changing the name of the pass and the option
associated with it from `convert-memref-to-llvm` to
`finalize-memref-to-llvm`.
The word "finalize" conveys that this pass needs to run after something
else and that something else is documented in its tablegen description.

This is a follow-up patch related to the conversation at:
https://discourse.llvm.org/t/psa-you-need-to-run-expand-strided-metadata-before-memref-to-llvm-now/66956/14

Differential Revision: https://reviews.llvm.org/D142463
2023-01-27 09:10:10 +00:00
Quentin Colombet
3914836273 [mlir][MemRefToLLVM] Remove the code for lowering collaspe/expand_shape
collapse/expand_shape are supposed to be expanded before we hit the
lowering code.
The expansion is done with the pass called expand-strided-metadata.

This patch is NFC in spirit but not in practice because
expand-strided-metadata won't try to accommodate "invalid" strides
for dynamic sizes that are 1 at runtime.

The previous code was broken in that respect too, but differently: it
handled only the case of row-major layouts.
That whole part is being reworked separately.

Differential Revision: https://reviews.llvm.org/D136483
2023-01-23 15:41:55 +00:00
Jeff Niu
4d67b27817 [mlir] Add operations to BlockAndValueMapping and rename it to IRMapping
The patch adds support for mapping operations to `BlockAndValueMapping` and renames it to `IRMapping`. When operations are cloned, old operations are mapped to the cloned operations. This allows mapping from an operation to its clone. Example:

```
Operation *opWithRegion = ...
Operation *opInsideRegion = &opWithRegion->front().front();

IRMapping map;
Operation *newOpWithRegion = opWithRegion->clone(map);
Operation *newOpInsideRegion = map.lookupOrNull(opInsideRegion);
```

Migration instructions:
All includes of `mlir/IR/BlockAndValueMapping.h` should be replaced with `mlir/IR/IRMapping.h`. All uses of `BlockAndValueMapping` need to be renamed to `IRMapping`.

Reviewed By: rriddle, mehdi_amini

Differential Revision: https://reviews.llvm.org/D139665
2023-01-12 13:16:05 -08:00
serge-sans-paille
984b800a03 Move from llvm::makeArrayRef to ArrayRef deduction guides - last part
This is a follow-up to https://reviews.llvm.org/D140896, split into
several parts as it touches a lot of files.

Differential Revision: https://reviews.llvm.org/D141298
2023-01-10 11:47:43 +01:00
Ramkumar Ramachandra
22426110c5 mlir/tblgen: use std::optional in generation
This is part of an effort to migrate from llvm::Optional to
std::optional. This patch changes the way mlir-tblgen generates .inc
files, and modifies tests and documentation appropriately. It is a "no
compromises" patch, and doesn't leave the user with an unpleasant mix of
llvm::Optional and std::optional.

A non-trivial change has been made to ControlFlowInterfaces to split one
constructor into two, relating to a build failure on Windows.

See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716

Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>

Differential Revision: https://reviews.llvm.org/D138934
2022-12-17 11:13:26 +01:00
Kazu Hirata
7d2b180e68 [MemRefToLLVM] Use std::optional in MemRefToLLVM.cpp (NFC)
This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-10 10:41:43 -08:00
Kazu Hirata
1a36588ec6 [mlir] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-03 18:50:27 -08:00
Quentin Colombet
786cbb09ed Re-apply "[mlir][MemRefToLLVM] Remove the code for lowering subview"
This reverts commit d0650d1089.

Original commit message:
Subviews are supposed to be expanded before we hit the lowering
code.
The expansion is done with the pass called
expand-strided-metadata.

Add a test that demonstrates how these passes can be linked up to achieve
the desired lowering.

This patch is NFC in spirit but not in practice because `subview` gets
lowered into `reinterpret_cast(extract_strided_metadata, <some math>)`,
which lowers into two memref descriptors (one for `reinterpret_cast` and
one for `extract_strided_metadata`). This creates some noise of the
form `extractvalue(unrealized_cast(extractvalue[0]))[0]` that is
currently not simplified within MLIR but is really just a noop in
that case.
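A hedged sketch of what expand-strided-metadata produces in place of a subview (concrete offsets, sizes, and strides chosen for illustration):

```
// The subview's descriptor is rebuilt from the source's metadata:
%base, %off, %sizes:2, %strides:2 = memref.extract_strided_metadata %src
    : memref<8x8xf32> -> memref<f32>, index, index, index, index, index
%sub = memref.reinterpret_cast %base to
    offset: [36], sizes: [4, 4], strides: [8, 1]
    : memref<f32> to memref<4x4xf32, strided<[8, 1], offset: 36>>
```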

Differential Revision: https://reviews.llvm.org/D136377
2022-12-02 15:26:58 +00:00
Quentin Colombet
d0650d1089 Revert "[mlir][MemRefToLLVM] Remove the code for lowering subview"
This reverts commit c8e15afa4c.

This breaks some integration tests, see
https://lab.llvm.org/buildbot/#/builders/220/builds/10446

I have to update a bunch of RUN lines in the tests to use the new
lowering scheme. Nothing complicated but let's keep the build clean
while I'm fixing that.
2022-12-02 14:19:37 +00:00
Quentin Colombet
c8e15afa4c [mlir][MemRefToLLVM] Remove the code for lowering subview
Subviews are supposed to be expanded before we hit the lowering
code.
The expansion is done with the pass called
expand-strided-metadata.

Add a test that demonstrates how these passes can be linked up to achieve
the desired lowering.

This patch is NFC in spirit but not in practice because `subview` gets
lowered into `reinterpret_cast(extract_strided_metadata, <some math>)`,
which lowers into two memref descriptors (one for `reinterpret_cast` and
one for `extract_strided_metadata`). This creates some noise of the
form `extractvalue(unrealized_cast(extractvalue[0]))[0]` that is
currently not simplified within MLIR but is really just a noop in
that case.

Differential Revision: https://reviews.llvm.org/D136377
2022-12-02 10:17:06 +00:00
Lorenzo Chelini
a9733b8a5e [MLIR] Adopt DenseI64ArrayAttr in tensor, memref and linalg transform
This commit is a first step toward removing inconsistencies between dynamic
and static attributes (i64 v. index) by dropping `I64ArrayAttr` and
using `DenseI64ArrayAttr` in Tensor, Memref and Linalg Transform ops.
In Linalg Transform ops only `TileToScfForOp` and `TileOp` have been updated.
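The printed form of the ops is mostly unchanged; what changes is the attribute storage. A hedged sketch:

```
// static_offsets/static_sizes/static_strides are now stored as
// array<i64: ...> (DenseI64ArrayAttr) instead of [...] (I64ArrayAttr):
%s = tensor.extract_slice %t[0, 0] [1, 4] [1, 1]
    : tensor<8x8xf32> to tensor<1x4xf32>
```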

See related discussion: https://discourse.llvm.org/t/rfc-inconsistency-between-dynamic-and-static-attributes-i64-v-index/66612/1

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D138567
2022-11-25 09:43:30 +01:00
Aliia Khasanova
399638f98c Merge kDynamicSize and kDynamicSentinel into one constant.

Differential Revision: https://reviews.llvm.org/D138282
2022-11-21 13:01:26 +00:00
Quentin Colombet
200266a0a1 [mlir][MemRef] Fix the lowering of extract_strided_metadata
The first result of the extract_strided_metadata operation is a MemRef,
not a naked pointer.
This patch fixes the lowering of this operation in MemRefToLLVM so that
we properly materialize the full MemRef structure and not just the base,
naked, pointer.

Differential Revision: https://reviews.llvm.org/D137364
2022-11-05 01:06:38 +00:00
Quentin Colombet
55d08a8670 [mlir][MemRefToLLVM] Lower extract_strided_metadata
The `extract_strided_metadata` operation literally breaks down a memory
descriptor into its different components.
Teach the MemRefToLLVM conversion framework this fact.

Differential Revision: https://reviews.llvm.org/D136304
2022-10-26 21:29:35 +00:00
Andi Drebes
f76e40d1a4 [mlir] MemRefToLLVM: Save / restore stack when lowering memref.copy
The MemRef to LLVM conversion pass emits `llvm.alloca` operations to promote MemRef descriptors to the stack when lowering `memref.copy` operations for operands which do not have a contiguous layout in memory. The original stack position is never restored after the allocations, which creates an issue when the copy operation is embedded into a loop with a high trip count, ultimately resulting in a segmentation fault due to the stack growing too large.

Below is a minimal example illustrating the issue:

```
#map = affine_map<(d0, d1) -> (d0 * 64 + d1 + 1056)> // assumed: reconstructed from the subview below
module {
  func.func @main() {
    %arg0 = memref.alloc() : memref<32x64xi64>
    %arg1 = memref.alloc() : memref<16x32xi64>
    %lb = arith.constant 0 : index
    %ub = arith.constant 100000 : index
    %step = arith.constant 1 : index
    %slice = memref.subview %arg0[16,32][16,32][1,1] :
       memref<32x64xi64> to memref<16x32xi64, #map>

    scf.for %i = %lb to %ub step %step {
       memref.copy %slice, %arg1 :
         memref<16x32xi64, #map> to memref<16x32xi64>
    }

    return
  }
}
```

When running the code above, e.g., with mlir-cpu-runner, the execution crashes with a segmentation fault:

```
$ mlir-opt \
    --convert-scf-to-cf \
    --convert-memref-to-llvm \
    --convert-func-to-llvm \
    --convert-cf-to-llvm \
    --reconcile-unrealized-casts <file> | \
  mlir-cpu-runner \
    -e main -entry-point-result=void \
    --shared-libs=$PWD/build/lib/libmlir_c_runner_utils.so
[...]
Segmentation fault
```

This patch causes the code lowering a `memref.copy` operation in the MemRefToLLVM pass to emit a pair of matching `llvm.intr.stacksave` and `llvm.intr.stackrestore` operations around the promotion of memory descriptors and the subsequent call to `memrefCopy` in order to restore the stack to its original position after the call.
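A hedged sketch of the emitted structure (opaque-pointer form):

```
// Save the stack pointer before promoting the descriptors ...
%sp = llvm.intr.stacksave : !llvm.ptr
// ... llvm.alloca for the source/target descriptors, call to memrefCopy ...
// ... then restore it, so the loop above no longer grows the stack:
llvm.intr.stackrestore %sp : !llvm.ptr
```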

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D135756
2022-10-13 10:13:04 +02:00
Tobias Gysi
a938f3a9b5 [mlir] Fix bitwidth of memref-to-llvm constant.
One constant generated in MemRefToLLVM had a hardcoded bitwidth of
64 bits. The fix uses the typeConverter to create a constant that
matches the bitwidth provided by the data layout. The issue was
detected in an attempt to add a verifier to the LLVM ICmp operation that
checks that the types of the compared arguments match.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D135775
2022-10-12 17:13:01 +03:00
Jakub Kuderski
abc362a107 [mlir][arith] Change dialect name from Arithmetic to Arith
Suggested by @lattner in https://discourse.llvm.org/t/rfc-define-precise-arith-semantics/65507/22.

Tested with:
`ninja check-mlir check-mlir-integration check-mlir-mlir-spirv-cpu-runner check-mlir-mlir-vulkan-runner check-mlir-examples`

and `bazel build --config=generic_clang @llvm-project//mlir:all`.

Reviewed By: lattner, Mogball, rriddle, jpienaar, mehdi_amini

Differential Revision: https://reviews.llvm.org/D134762
2022-09-29 11:23:28 -04:00
Nicolas Vasilache
07801f713e [mlir][memref]Add conversion support for memref.extract_aligned_pointer_as_index to LLVM
Reviewed By: pifon2a

Differential Revision: https://reviews.llvm.org/D134834
2022-09-29 02:35:00 -07:00
Michele Scuttari
939c5cf6a7 [MLIR] Migrate MemRef -> LLVM conversion pass to the auto-generated constructor
See #57475

Differential Revision: https://reviews.llvm.org/D134607
2022-09-26 09:54:53 +02:00
Peiming Liu
8b587113b7 [mlir][memref] fix overflow in realloc
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D134511
2022-09-23 03:07:23 +00:00
bixia1
9f13b9346b [mlir][memref] Add realloc op.
Add memref.realloc and canonicalization of the op. Add conversion patterns for
lowering the op to LLVM using unaligned alloc or aligned alloc based on the
conversion option.
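A hedged example of the new op:

```
// Grow an 8-element buffer to 16 elements; the lowering allocates, copies,
// and frees the old buffer as needed.
%new = memref.realloc %old : memref<8xf32> to memref<16xf32>
```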

Add filecheck tests for parsing and converting the op. Add an integration test.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D133424
2022-09-21 08:04:00 -07:00
Alex Zinenko
46b90a7b5d [mlir] make remaining memref dialect ops produce strided layouts
The following three ops in the memref dialect: transpose, expand_shape, and
collapse_shape, were originally designed to operate on memrefs with strided
layouts but had to go through the affine map representation, as the type did
not support anything else. Make these ops produce memref values with
StridedLayoutAttr now that it is available.
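A hedged example for transpose:

```
// The result layout is now expressed directly as a strided attribute
// instead of an equivalent affine map:
%t = memref.transpose %m (i, j) -> (j, i)
    : memref<4x8xf32> to memref<8x4xf32, strided<[1, 8]>>
```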

Depends On D133938

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D133947
2022-09-16 10:56:48 +02:00
Michele Scuttari
67d0d7ac0a [MLIR] Update pass declarations to new autogenerated files
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D132838
2022-08-31 12:28:45 +02:00
Michele Scuttari
039b969b32 Revert "[MLIR] Update pass declarations to new autogenerated files"
This reverts commit 2be8af8f0e.
2022-08-30 22:21:55 +02:00
Michele Scuttari
2be8af8f0e [MLIR] Update pass declarations to new autogenerated files
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.

Reviewed By: mehdi_amini, rriddle

Differential Revision: https://reviews.llvm.org/D132838
2022-08-30 21:56:31 +02:00
Jeff Niu
5e0c3b4309 [mlir][LLVMIR] Clean up the definitions of ReturnOp/CallOp 2022-08-11 00:35:02 -04:00
Jeff Niu
5c5af910fe [mlir][LLVMIR] "Modernize" Insert/ExtractValueOp
This patch "modernizes" the LLVM `insertvalue` and `extractvalue`
operations to use DenseI64ArrayAttr, since they only require an array of
indices and previously there was confusion about whether to use i32 or
i64 arrays, and to use assembly format.
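A hedged sketch of the modernized form:

```
// Positions are a plain list of i64s in the assembly format, and the type
// annotation names only the container:
%f  = llvm.extractvalue %s[0, 1] : !llvm.struct<(struct<(i32, f32)>, i64)>
%s2 = llvm.insertvalue %n, %s[1] : !llvm.struct<(struct<(i32, f32)>, i64)>
```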

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D131537
2022-08-10 12:51:11 -04:00
Jeff Niu
0af643f3ce [mlir][LLVMIR] (NFC) Add convenience builders for ConstantOp
And clean up some of the user code
2022-08-09 15:34:36 -04:00
Benjamin Kramer
9fa59e7643 [mlir] Use C++17 structured bindings instead of std::tie where applicable. NFCI 2022-08-09 13:34:17 +02:00
Markus Böck
8805cf2660 [mlir] Remove redundant inline from D131323 2022-08-08 13:18:09 +02:00