Commit Graph

78 Commits

Author SHA1 Message Date
mlevesquedion
d4fd20258f [mlir] Use arith max or min ops instead of cmp + select (#82178)
I believe the semantics should be the same, but this saves 1 op and simplifies the code.

For example, the following two instructions:

```
%2 = cmp sgt %0, %1
%3 = select %2, %0, %1
```

are equivalent to:

```
%2 = maxsi %0, %1
```
2024-02-21 12:28:05 -08:00
Oleksandr "Alex" Zinenko
8134a8fc3f [mlir] use TypeSize and uint64_t in DataLayout (#72874)
Data layout queries may be issued for types whose size exceeds the range
of a 32-bit integer, as well as for types that don't have a size known at
compile time, such as scalable vectors. Use best practices from LLVM IR
and adopt `llvm::TypeSize` for size-related queries and `uint64_t` for
alignment-related queries.
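
A hedged illustration of the LLVM-side type now adopted (`llvm::TypeSize` is the real LLVM class; the usage here is illustrative):

```
// Fixed-size: 64 bits, known at compile time.
llvm::TypeSize fixed = llvm::TypeSize::getFixed(64);
// Scalable: 128 bits times an unknown runtime multiple (scalable vectors).
llvm::TypeSize scalable = llvm::TypeSize::getScalable(128);
```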

See #72678.
2023-11-21 16:12:27 +01:00
Max191
ce6ef990fe [mlir] Remove convertible identity restriction for memref.atomic_rmw to LLVM (#72262)
memref.atomic_rmw would fail to convert for memref types that have an offset, because such types do not have identity maps. This restriction was overly conservative, so this patch relaxes it to require only a strided memref type.
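
For illustration (a hedged example; shapes hypothetical), an op like the following should now convert, since the source memref is strided even though its offset rules out an identity map:

```
%res = memref.atomic_rmw addf %val, %view[%i]
    : (f32, memref<8xf32, strided<[1], offset: 4>>) -> f32
```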
2023-11-14 22:45:14 -05:00
Christian Ulmann
b28a296cdf [MLIR][MemRefToLLVM] Remove typed pointer support (#70909)
This commit removes the support for lowering MemRefToLLVM to LLVM
dialect with typed pointers. Typed pointers have been deprecated for a
while now and it's planned to soon remove them from the LLVM dialect.

Related PSA:
https://discourse.llvm.org/t/psa-removal-of-typed-pointers-from-the-llvm-dialect/74502
2023-11-02 07:35:58 +01:00
Krzysztof Drewniak
73c6248cc2 [mlir][MemRefToLLVM] Fix crashes with unconvertable memory spaces (#70694)
Fixes #70160

The issue is resolved by:
1. Changing the call to address space conversion to use the correct
return type, preventing the code from moving past the if and into the
crashing optional dereference.
2. Adding handling to the AllocLikeOp rewriter for the case where the
underlying buffer allocation fails.
2023-10-31 09:01:43 -05:00
Tobias Gysi
85175edd4e [mlir][llvm] Replace NullOp by ZeroOp (#67183)
This revision replaces the LLVM dialect NullOp by the recently
introduced ZeroOp. The ZeroOp is more generic in the sense that it
represents zero values of any LLVM type rather than null pointers only.
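
A minimal before/after sketch, assuming the opaque pointer form:

```
// Before: null pointers only.
%p = llvm.mlir.null : !llvm.ptr
// After: a zero value of any LLVM-compatible type, including pointers.
%p = llvm.mlir.zero : !llvm.ptr
%s = llvm.mlir.zero : !llvm.struct<(i32, f32)>
```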

This is a follow-up to https://github.com/llvm/llvm-project/pull/65508
2023-09-25 11:11:52 +02:00
Felix Schneider
c336a06144 [mlir] [memref] Fix alignment bug in memref.copy lowering
memref.copy sometimes gets lowered to a function call; this function
is passed the element size of the memref in bytes as an argument.
The element size passed to the copyMemRef() function call can be
miscalculated if the LLVM IR uses aligned access to the memory.

This can be fixed by using llvm.getelementptr to calculate the element
size natively. This is also done in the other lowering path that lowers
to an intrinsic.
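
A hedged sketch of the GEP-based size computation (opaque-pointer form; the IR actually emitted by the lowering may differ):

```
// sizeof(f32) computed natively: the address of element 1 from a zero base.
%base = llvm.mlir.zero : !llvm.ptr
%gep = llvm.getelementptr %base[1] : (!llvm.ptr) -> !llvm.ptr, f32
%size = llvm.ptrtoint %gep : !llvm.ptr to i64
```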

Fix https://github.com/llvm/llvm-project/issues/64072

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D156126
2023-09-14 13:18:12 +02:00
Felix Schneider
55088efe06 [mlir][memref] Fix MemrefToLLVM lowering pattern for memref.transpose
The lowering pattern to LLVM for memref.transpose has a bug where
instead of transposing from (source) -> (dest) it actually transposes
(dest) -> (source). This patch fixes the bug and updates the test.

Fix https://github.com/llvm/llvm-project/issues/65145

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D159290
2023-09-14 13:12:55 +02:00
Martin Erhart
8037deb7af [mlir][memref] Add pass to expand realloc operations, simplify lowering to LLVM
There are two motivations for this change:
1. It considerably simplifies adding support for the realloc operation to the
   new buffer deallocation pass: the realloc is lowered such that no
   deallocation operation is inserted, and the deallocation pass itself can
   insert that dealloc
2. The lowering is expressed at a higher level and is thus easier to understand,
   and the lowerings of the memref operations it is composed of don't have to
   be duplicated in the MemRefToLLVM lowering (a rough sketch follows; also see
   the discussion in https://reviews.llvm.org/D133424)
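
A heavily hedged sketch of the expansion for a static case, eliding the runtime check that reuses the existing buffer when it is already large enough:

```
// %new = memref.realloc %old : memref<2xf32> to memref<4xf32>
// expands to roughly:
%new = memref.alloc() : memref<4xf32>
%view = memref.subview %new[0] [2] [1] : memref<4xf32> to memref<2xf32, strided<[1]>>
memref.copy %old, %view : memref<2xf32> to memref<2xf32, strided<[1]>>
memref.dealloc %old : memref<2xf32>
```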

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D159430
2023-09-05 08:58:40 +00:00
Matthias Springer
1fdbbd158d [mlir][Conversion] Implement ConvertToLLVMPatternInterface for FuncDialect
Also add a new pass option to `ConvertToLLVMPass` to populate only patterns from the specified dialects. This is needed because the existing test cases expect that only ops from certain dialects are lowered. (E.g., "arith-to-llvm" expects that only "arith" ops are lowered but not "func" ops.)
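
A hedged usage sketch (the `filter-dialects` option spelling is an assumption here):

```
mlir-opt --convert-to-llvm='filter-dialects=arith' input.mlir
```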

Differential Revision: https://reviews.llvm.org/D157627
2023-08-11 10:05:30 +02:00
Matthias Springer
876a480cac [mlir][Conversion] Add type converter parameter to ConvertToLLVMPatternInterface
Most `*-to-llvm` conversion patterns require a type converter. This
revision adds a type converter to the
`populateConvertToLLVMConversionPatterns` function and implements the
interface for the MemRef dialect.
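
The interface hook presumably ends up looking roughly like this (a sketch, not the verbatim declaration):

```
// C++: dialects implement this to contribute their *-to-llvm patterns,
// now receiving the shared type converter.
void populateConvertToLLVMConversionPatterns(
    ConversionTarget &target, LLVMTypeConverter &typeConverter,
    RewritePatternSet &patterns) const override;
```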

Differential Revision: https://reviews.llvm.org/D157387
2023-08-09 09:00:46 +02:00
Christian Ulmann
48b126e30b [mlir][llvm] Ensure immediate usage in intrinsics
This commit changes intrinsics that have immarg parameter attributes to
model these parameters as attributes instead of operands. Using
operands only works if the operand is defined by an `llvm.mlir.constant`;
otherwise the exported LLVMIR is invalid.

Reviewed By: gysit

Differential Revision: https://reviews.llvm.org/D151692
2023-06-12 06:57:42 +00:00
Fabian Mora
041f1abee1 [mlir][memref] Fix num elements in lowering of memref.alloca op to LLVM
Fixes a mistake in the lowering of memref.alloca to llvm.alloca: llvm.alloca takes the number of elements to allocate on the stack, not the size in bytes.

Reference:
LLVM IR: https://llvm.org/docs/LangRef.html#alloca-instruction
LLVM MLIR: https://mlir.llvm.org/docs/Dialects/LLVM/#llvmalloca-mlirllvmallocaop
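
A hedged sketch of the corrected lowering for a static shape (opaque-pointer form):

```
// memref.alloca() : memref<4x8xf32> allocates 32 f32 elements, not 128 bytes.
%n = llvm.mlir.constant(32 : i64) : i64
%ptr = llvm.alloca %n x f32 : (i64) -> !llvm.ptr
```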

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D150705
2023-05-22 16:23:00 +00:00
Oleg Shyshkov
cad08503b8 [mlir][memref] Lower copy of memrefs with outer size-1 dims to intrinsic memcpy.
With this change, more `memref.copy` ops are lowered to the efficient `memcpy` intrinsic. For example,

```
memref.copy %subview, %alloc : memref<1x576xf32, strided<[704, 1]>> to memref<1x576xf32>
```

Differential Revision: https://reviews.llvm.org/D150448
2023-05-12 17:18:13 +02:00
Quentin Colombet
e02d4142dc [MemRefToLLVM] Add a method in MemRefDescriptor to get the buffer addr
This patch pushes the computation of the start address of a memref into one
place (a method on MemRefDescriptor).

This allows all the (indirect) users of this method to produce the start
address in the same way.
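
A hedged C++ sketch of what such a method plausibly computes (aligned base advanced by the descriptor's offset; names and builder details are assumptions):

```
// Sketch: the buffer start address is alignedPtr + offset, as a single GEP.
Value base = descriptor.alignedPtr(builder, loc);
Value offset = descriptor.offset(builder, loc);
Value addr = builder.create<LLVM::GEPOp>(
    loc, base.getType(), elementType, base, ArrayRef<LLVM::GEPArg>{offset});
```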

Thanks to this change, we expose more CSE opportunities, and thanks to
those, the backend is able to properly find the `llvm.assume` expression
related to the base address, as demonstrated in the added test.

Differential Revision: https://reviews.llvm.org/D148947
2023-04-26 06:12:18 +02:00
Quentin Colombet
68e1aef68e [MemRefToLLVM] Fix the lowering of memref.assume_alignment
`memref.assume_alignment` annotates the alignment of the source buffer
not the base pointer.
Put differently, prior to this patch `memref.assume_alignment` would lower
to `llvm.assume %buffer.base.isAligned(X)` whereas what we want is
`llvm.assume (%buffer.base + %buffer.offset).isAligned(X)`.
In other words, we failed to include the offset in the expression
checked by the `llvm.assume`.
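
A hedged sketch of the corrected pattern for alignment 16 (generic form for the intrinsic; the exact emitted IR may differ):

```
%p = llvm.getelementptr %base[%offset] : (!llvm.ptr, i64) -> !llvm.ptr, f32
%i = llvm.ptrtoint %p : !llvm.ptr to i64
%m = llvm.mlir.constant(15 : i64) : i64   // alignment - 1
%a = llvm.and %i, %m : i64
%z = llvm.mlir.constant(0 : i64) : i64
%c = llvm.icmp "eq" %a, %z : i64
"llvm.intr.assume"(%c) : (i1) -> ()
```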

Differential Revision: https://reviews.llvm.org/D148930
2023-04-26 06:07:00 +02:00
Alexander Belyaev
e7833c20d8 [mlir] Use splitBlock instread of createBlock in GenericAtomicRMWLowering.
When a generic_atomic_rmw was inside a memref.alloca_scope, the pattern would fail.

Differential Revision: https://reviews.llvm.org/D145901
2023-03-13 18:14:04 +01:00
Johannes Reifferscheid
b58daf91d6 Add a lowering for memref.dealloc with unranked memrefs.
This is permitted by the op, but the current lowering generates invalid IR.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D144090
2023-02-16 14:19:16 +01:00
Quentin Colombet
4bc2357c3d [mlir][MemRef|Tensor] Fix the handling of DimOp
Although specifying an index that is out of bounds for both `memref.dim`
and `tensor.dim` produces undefined behavior, this is still valid IR.
In particular, we could expose an out of bound index because of some
optimizations, for instance as demonstrated with
https://github.com/llvm/llvm-project/issues/60295, and this shouldn't
cause the compiler to abort.

This patch removes the overzealous verifier checks and properly handles
out of bound indices (as in it doesn't crash the compiler, but still
produces UB).
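
For instance, IR along these lines is verifier-valid even though evaluating it is UB:

```
%c5 = arith.constant 5 : index
// Index 5 is out of bounds for a rank-2 memref: UB, but not a verifier error.
%d = memref.dim %m, %c5 : memref<2x3xf32>
```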

This fixes https://github.com/llvm/llvm-project/issues/60295.

Note: `shape.dim` has a similar problem, but we're not supposed to
produce UB in this case. Instead we're supposed to propagate an error in
the resulting value and I don't know how to do that at the moment. Hence I
left this part out of the patch.

Differential Revision: https://reviews.llvm.org/D143999
2023-02-16 11:38:19 +01:00
Krzysztof Drewniak
7fb9bbe5f0 [mlir][Memref] Add memref.memory_space_cast and its lowerings
Address space casts are present in common MLIR targets (LLVM, SPIRV).
Some planned rewrites (such as one of the potential fixes to the fact
that the AMDGPU backend requires alloca() to live in address space 5 /
the GPU private memory space) may require such casts to be inserted
into MLIR code, where those address spaces could be represented by
arbitrary memory space attributes.

Therefore, we define memref.memory_space_cast and its lowerings.
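
An illustrative example (memory space 5 standing in for the GPU private space; the attribute values are hypothetical):

```
%cast = memref.memory_space_cast %buf : memref<16xf32, 5> to memref<16xf32>
```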

Depends on D141293

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D141148
2023-02-09 21:44:57 +00:00
Krzysztof Drewniak
d0f19ce774 [mlir] Handle different pointer sizes in unranked memref descriptors
The code for unranked memref descriptors assumed that
sizeof(!llvm.ptr) == sizeof(!llvm.ptr<N>) for all address spaces N.
This is not always true (ex. the AMDGPU compiler backend has
sizeof(!llvm.ptr) = 64 bits but sizeof(!llvm.ptr<5>) = 32 bits, where
address space 5 is used for stack allocations). While this is merely
an overallocation in the case where a non-0 address space has pointers
smaller than the default, the existing code could cause OOB memory
accesses when sizeof(!llvm.ptr<N>) > sizeof(!llvm.ptr).

So, add an address spaces parameter to computeSizes in order to
partially resolve this class of bugs. Note that the LLVM data layout
in the conversion passes is currently set to "" and not constructed
from the MLIR data layout or some other source, but this could change
in the future.

Depends on D142159

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D141293
2023-02-09 19:14:58 +00:00
Krzysztof Drewniak
499abb243c Add generic type attribute mapping infrastructure, use it in GpuToX
Remapping memory spaces is a function often needed in type
conversions, most often when going to LLVM or to/from SPIR-V (a future
commit), and it is possible that such remappings may become more
common in the future as dialects take advantage of the more generic
memory space infrastructure.

Currently, memory space remappings are handled by running a
special-purpose conversion pass before the main conversion that
changes the address space attributes. In this commit, this approach is
replaced by adding a notion of type attribute conversions to
TypeConverter, which is then used to convert memory space attributes.
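
A hedged C++ sketch of registering such a conversion (the callback shape is an assumption based on the described design):

```
// Map a symbolic GPU address-space attribute on memref types to the numeric
// space expected by the LLVM lowering.
typeConverter.addTypeAttributeConversion(
    [](BaseMemRefType type, gpu::AddressSpaceAttr memorySpace) -> Attribute {
      unsigned space = static_cast<unsigned>(memorySpace.getValue());
      return IntegerAttr::get(IntegerType::get(type.getContext(), 64), space);
    });
```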

Then, we use this infrastructure throughout the *ToLLVM conversions.
This has the advantage of loosening the requirements on the inputs to
those passes from "all address spaces must be integers" to "all
memory spaces must be convertible to integer spaces", a looser
requirement that reduces the coupling between portions of MLIR.

On top of that, this change leads to the removal of most of the calls
to getMemorySpaceAsInt(), bringing us closer to removing it.

(A rework of the SPIR-V conversions to use this new system will be in
a follow-up commit.)

As a note, one long-term motivation for this change is that I would
eventually like to add an allocaMemorySpace key to MLIR data layouts
and then call getMemRefAddressSpace(allocaMemorySpace) in the
relevant *ToLLVM passes in order to ensure all alloca()s, whether incoming or
produced during the LLVM lowering, have the correct address space for
a given target.

I expect that the type attribute conversion system may be useful in
other contexts.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D142159
2023-02-09 18:00:46 +00:00
Markus Böck
50ea17b849 [mlir][MemRef] Add option to -finalize-memref-to-llvm to emit opaque pointers
This is the first patch in a series of patches part of this RFC: https://discourse.llvm.org/t/rfc-switching-the-llvm-dialect-and-dialect-lowerings-to-opaque-pointers/68179

This patch adds the ability to lower the memref dialect to the LLVM Dialect with the use of opaque pointers instead of typed pointers. The latter are being phased out of LLVM and this patch is part of an effort to phase them out of MLIR as well. To do this, we'll need to support both typed and opaque pointers in lowering passes, to allow downstream projects to change without breakage.

The gist of the changes required to migrate a conversion pass is:
* Change any `LLVM::LLVMPointerType::get` calls to NOT use an element type if opaque pointers are to be used.
* Use the `build` method of `llvm.load` with the explicit result type. Since the pointer does not have an element type anymore it has to be specified explicitly.
* Use the `build` method of `llvm.getelementptr` with the explicit `basePtrType`. Ditto to above, we have to now specify what the element type is so that GEP can do its indexing calculations
* Use the `build` method of `llvm.alloca` with the explicit `elementType`. Ditto to the above, alloca needs to know how many bytes to allocate through the element type.
* Get rid of any `llvm.bitcast`s
* Adapt the tests to the above. Note that `llvm.store` changes syntax as well when using opaque pointers

I'd like to note that the 3 `build` method changes work for both opaque and typed pointers, so unconditionally using the explicit element type form is always correct.
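
A hedged C++ sketch of those builder forms (variable names are assumptions):

```
// Pointer type without an element type, plus explicit-element-type builders.
auto ptrTy = LLVM::LLVMPointerType::get(rewriter.getContext());
Value loaded = rewriter.create<LLVM::LoadOp>(loc, f32Ty, ptr);
Value gep = rewriter.create<LLVM::GEPOp>(loc, ptrTy, f32Ty, ptr,
                                         ArrayRef<LLVM::GEPArg>{0});
Value mem = rewriter.create<LLVM::AllocaOp>(loc, ptrTy, f32Ty, numElements);
```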

For the testsuite a practical approach suggested by @ftynse was taken: I created a separate test file for testing the typed pointer lowering of Ops. This mostly comes down to checking that bitcasts have been created at the appropiate places, since these are required for typed pointer support.

Differential Revision: https://reviews.llvm.org/D143268
2023-02-08 10:15:44 +01:00
Markus Böck
eafca23037 [mlir][MemRef] Add required address space cast when lowering alloc to LLVM
alloc uses either `malloc` or a pluggable allocation function for allocating the required memory. Both of these functions always return an `llvm.ptr<i8>`, i.e. a pointer in the default address space. When allocating for a memref in a different memory space, however, no address space cast is created, leading to invalid LLVM IR being generated.

This is currently not caught by the verifier since the pointer to the memory is always bitcast, and the bitcast op currently lacks a verifier disallowing address space casts. Translating to actual LLVM IR would set off the verifier, since bitcast cannot translate from one address space to another: https://godbolt.org/z/3a1z97rc9

This patch fixes that issue by generating an address space cast if the address space of the allocation function does not match the address space of the resulting memref.
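
A hedged sketch of the emitted cast (opaque-pointer form; address space 1 is a stand-in):

```
// malloc returns a pointer in the default address space; cast it to the
// memref's memory space.
%raw = llvm.call @malloc(%bytes) : (i64) -> !llvm.ptr
%mem = llvm.addrspacecast %raw : !llvm.ptr to !llvm.ptr<1>
```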

Not sure whether this is actually a real life problem. I found this issue while converting the pass to using opaque pointers which gets rid of all the bitcasts and hence caused type errors without the address space cast.

Differential Revision: https://reviews.llvm.org/D143341
2023-02-06 12:10:07 +01:00
Guray Ozen
1cb91b421e [mlir] Add nontemporal field to memref.load/store and convey to llvm.load/store
The `llvm.load` op has a nontemporal field, which is missing from `memref.load` and `memref.store`. This revision first adds a nontemporal field to memref's load/store ops, then lowers that field to the llvm.load/store ops.
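
Illustrative usage (a sketch; shapes hypothetical):

```
%v = memref.load %m[%i] {nontemporal = true} : memref<128xf32>
memref.store %v, %out[%i] {nontemporal = true} : memref<128xf32>
```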

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D142616
2023-02-03 14:03:38 +01:00
Tobias Gysi
d8153ae299 [mlir][llvm] Opaque pointer support for atomic and call ops.
This revision adapts the printers and parsers of the LLVM Dialect
AtomicRMWOp, AtomicCmpXchgOp, CallOp, and InvokeOp to support both
opaque and typed pointers by printing the pointer types explicitly.
Previously, the printers and parser of these operations silently assumed
typed pointers. This assumption is problematic if a lowering or the
LLVM IR import produce LLVM Dialect with opaque pointers and the IR is
then printed and parsed, for example, when running mlir-translate. In
LLVM IR itself all tests with typed pointers are already gone. It is
thus important to start switching to opaque pointers.

This revision can be seen as a preparation step for the switch of the
LLVM Dialect to opaque pointers. Once printing and parsing works
seamlessly, all lowerings to LLVM Dialect can be switched to produce
opaque pointers. After a transition period, LLVM Dialect itself can be
simplified to support opaque pointers only.

Reviewed By: ftynse, Dinistro

Differential Revision: https://reviews.llvm.org/D142884
2023-02-01 10:08:52 +01:00
Quentin Colombet
cb4ccd38fa [mlir][Conversion] Rename the MemRefToLLVM pass
Since the recent MemRef refactoring that centralizes the lowering of
complex MemRef operations outside of the conversion framework, the
MemRefToLLVM pass doesn't directly convert these complex operations.

Instead, to fully convert the whole MemRef dialect space, MemRefToLLVM
needs to run after `expand-strided-metadata`.

Make this more obvious by changing the name of the pass and the option
associated with it from `convert-memref-to-llvm` to
`finalize-memref-to-llvm`.
The word "finalize" conveys that this pass needs to run after something
else and that something else is documented in its tablegen description.
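
Typical usage after the rename (a sketch):

```
mlir-opt --expand-strided-metadata --finalize-memref-to-llvm input.mlir
```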

This is a follow-up patch related to the conversation at:
https://discourse.llvm.org/t/psa-you-need-to-run-expand-strided-metadata-before-memref-to-llvm-now/66956/14

Differential Revision: https://reviews.llvm.org/D142463
2023-01-27 09:10:10 +00:00
Quentin Colombet
3914836273 [mlir][MemRefToLLVM] Remove the code for lowering collapse/expand_shape
collapse/expand_shape are supposed to be expanded before we hit the
lowering code.
The expansion is done with the pass called expand-strided-metadata.

This patch is NFC in spirit but not in practice because
expand-strided-metadata won't try to accommodate "invalid" strides
for dynamic sizes that are 1 at runtime.

The previous code was broken in that respect too, but differently: it
handled only the case of row-major layouts.
That whole part is being reworked separately.

Differential Revision: https://reviews.llvm.org/D136483
2023-01-23 15:41:55 +00:00
Benjamin Chetioui
a6c8f06f55 [mlir] Clean up typos in FileCheck directives in various tests.
Reviewed By: tpopp

Differential Revision: https://reviews.llvm.org/D139698
2022-12-12 09:29:14 +01:00
Quentin Colombet
786cbb09ed Re-apply "[mlir][MemRefToLLVM] Remove the code for lowering subview"
This reverts commit d0650d1089.

Original commit message:
Subviews are supposed to be expanded before we hit the lowering
code.
The expansion is done with the pass called
expand-strided-metadata.

Add a test that demonstrates how these passes can be linked up to achieve
the desired lowering.

This patch is NFC in spirit but not in practice because `subview` gets
lowered into `reinterpret_cast(extract_strided_metadata, <some math>)`,
which lowers into two memref descriptors (one for `reinterpret_cast` and
one for `extract_strided_metadata`). This creates some noise of the
form `extractvalue(unrealized_cast(extractvalue[0]))[0]` that is
currently not simplified within MLIR but that is really just a noop in
that case.

Differential Revision: https://reviews.llvm.org/D136377
2022-12-02 15:26:58 +00:00
Quentin Colombet
d0650d1089 Revert "[mlir][MemRefToLLVM] Remove the code for lowering subview"
This reverts commit c8e15afa4c.

This breaks some integration tests, see
https://lab.llvm.org/buildbot/#/builders/220/builds/10446

I have to update a bunch of RUN lines in the tests to use the new
lowering scheme. Nothing complicated but let's keep the build clean
while I'm fixing that.
2022-12-02 14:19:37 +00:00
Quentin Colombet
c8e15afa4c [mlir][MemRefToLLVM] Remove the code for lowering subview
Subviews are supposed to be expanded before we hit the lowering
code.
The expansion is done with the pass called
expand-strided-metadata.

Add a test that demonstrates how these passes can be linked up to achieve
the desired lowering.

This patch is NFC in spirit but not in practice because `subview` gets
lowered into `reinterpret_cast(extract_strided_metadata, <some math>)`,
which lowers into two memref descriptors (one for `reinterpret_cast` and
one for `extract_strided_metadata`). This creates some noise of the
form `extractvalue(unrealized_cast(extractvalue[0]))[0]` that is
currently not simplified within MLIR but that is really just a noop in
that case.

Differential Revision: https://reviews.llvm.org/D136377
2022-12-02 10:17:06 +00:00
Hanhan Wang
0a1569a400 [mlir][NFC] Remove trailing whitespaces from *.td and *.mlir files.
This is generated by running

```
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.td
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.mlir
```

Reviewed By: rriddle, dcaballe

Differential Revision: https://reviews.llvm.org/D138866
2022-11-28 15:26:30 -08:00
Quentin Colombet
200266a0a1 [mlir][MemRef] Fix the lowering of extract_strided_metadata
The first result of the extract_strided_metadata operation is a MemRef,
not a naked pointer.
This patch fixes the lowering of this operation in MemRefToLLVM so that
we properly materialize the full MemRef structure and not just the base,
naked, pointer.

Differential Revision: https://reviews.llvm.org/D137364
2022-11-05 01:06:38 +00:00
rkayaith
13bd410962 [mlir][Pass] Include anchor op in -pass-pipeline
In D134622 the printed form of a pass manager is changed to include the
name of the op that the pass manager is anchored on. This updates the
`-pass-pipeline` argument format to include the anchor op as well, so
that the printed form of a pipeline can be directly passed to
`-pass-pipeline`. In most cases this requires updating
`-pass-pipeline='pipeline'` to
`-pass-pipeline='builtin.module(pipeline)'`.
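
For example (a hedged sketch of the flag change):

```
# Before
mlir-opt --pass-pipeline='convert-memref-to-llvm' in.mlir
# After
mlir-opt --pass-pipeline='builtin.module(convert-memref-to-llvm)' in.mlir
```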

This also fixes an outdated assert that prevented running a
`PassManager` anchored on `'any'`.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D134900
2022-11-03 11:36:12 -04:00
Quentin Colombet
55d08a8670 [mlir][MemRefToLLVM] Lower extract_strided_metadata
The `extract_strided_metadata` operation literally breaks down a memory
descriptor into its different components.
Teach the MemRefToLLVM conversion framework this fact.
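
For reference, the op exposes the descriptor components directly (an illustrative sketch):

```
%base, %offset, %sizes:2, %strides:2 = memref.extract_strided_metadata %m
    : memref<?x?xf32> -> memref<f32>, index, index, index, index, index
```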

Differential Revision: https://reviews.llvm.org/D136304
2022-10-26 21:29:35 +00:00
Andi Drebes
f76e40d1a4 [mlir] MemRefToLLVM: Save / restore stack when lowering memref.copy
The MemRef to LLVM conversion pass emits `llvm.alloca` operations to promote MemRef descriptors to the stack when lowering `memref.copy` operations for operands that do not have a contiguous layout in memory. The original stack position is never restored after these allocations, which creates an issue when the copy operation is embedded in a loop with a high trip count, ultimately resulting in a segmentation fault due to the stack growing too large.

Below is a minimal example illustrating the issue:

```
module {
  func.func @main() {
    %arg0 = memref.alloc() : memref<32x64xi64>
    %arg1 = memref.alloc() : memref<16x32xi64>
    %lb = arith.constant 0 : index
    %ub = arith.constant 100000 : index
    %step = arith.constant 1 : index
    %slice = memref.subview %arg0[16,32][16,32][1,1] :
       memref<32x64xi64> to memref<16x32xi64, #map>

    scf.for %i = %lb to %ub step %step {
       memref.copy %slice, %arg1 :
         memref<16x32xi64, #map> to memref<16x32xi64>
    }

    return
  }
}
```

When running the code above, e.g., with mlir-cpu-runner, the execution crashes with a segmentation fault:

```
$ mlir-opt \
    --convert-scf-to-cf \
    --convert-memref-to-llvm \
    --convert-func-to-llvm \
    --convert-cf-to-llvm \
    --reconcile-unrealized-casts <file> | \
  mlir-cpu-runner \
    -e main -entry-point-result=void \
    --shared-libs=$PWD/build/lib/libmlir_c_runner_utils.so
[...]
Segmentation fault
```

This patch causes the code lowering a `memref.copy` operation in the MemRefToLLVM pass to emit a pair of matching `llvm.intr.stacksave` and `llvm.intr.stackrestore` operations around the promotion of memory descriptors and the subsequent call to `memrefCopy` in order to restore the stack to its original position after the call.
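
A hedged sketch of the emitted guard (typed-pointer-era syntax; the real IR surrounds the descriptor allocas and the `memrefCopy` call):

```
%sp = llvm.intr.stacksave : !llvm.ptr<i8>
// ... llvm.alloca of the source/target descriptors, llvm.call @memrefCopy ...
llvm.intr.stackrestore %sp : !llvm.ptr<i8>
```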

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D135756
2022-10-13 10:13:04 +02:00
Tobias Gysi
a938f3a9b5 [mlir] Fix bitwidth of memref-to-llvm constant.
One constant generated in MemRefToLLVM had a hardcoded bitwidth of
64 bits. The fix uses the typeConverter to create a constant whose
bitwidth matches the one provided by the data layout. The issue was
detected in an attempt to add a verifier to the LLVM ICmp operation that
checks that the types of the compared arguments match.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D135775
2022-10-12 17:13:01 +03:00
Nicolas Vasilache
07801f713e [mlir][memref] Add conversion support for memref.extract_aligned_pointer_as_index to LLVM
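For reference, the op being lowered (an illustrative sketch):

```
%addr = memref.extract_aligned_pointer_as_index %m : memref<?xf32> -> index
```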
Reviewed By: pifon2a

Differential Revision: https://reviews.llvm.org/D134834
2022-09-29 02:35:00 -07:00
Peiming Liu
8b587113b7 [mlir][memref] fix overflow in realloc
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D134511
2022-09-23 03:07:23 +00:00
bixia1
9f13b9346b [mlir][memref] Add realloc op.
Add memref.realloc and canonicalization of the op. Add conversion patterns for
lowering the op to LLVM using unaligned alloc or aligned alloc based on the
conversion option.

Add filecheck tests for parsing and converting the op. Add an integration test.
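
Illustrative usage (a sketch):

```
%dst = memref.realloc %src : memref<64xf32> to memref<128xf32>
```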

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D133424
2022-09-21 08:04:00 -07:00
Ivan Butygin
54d81e49e3 [mlir] Allow negative strides and offset in StridedLayoutAttr
Negative strides are useful for creating reverse views of arrays. We don't have a specific example for negative offsets yet but will add one for consistency.
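
For instance, a reverse view of an 8-element buffer can now be typed as (an illustrative sketch):

```
memref<8xf32, strided<[-1], offset: 7>>
```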

Differential Revision: https://reviews.llvm.org/D134147
2022-09-21 13:21:53 +02:00
Alex Zinenko
f3fae035c7 [mlir] use strided layout in structured codegen-related tests
All relevant operations have been switched to primarily use the strided
layout, but still support the affine map layout. Update the relevant
tests to use the strided format instead for compatibility with how ops
now print by default.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D134045
2022-09-17 08:11:28 +02:00
Alex Zinenko
46b90a7b5d [mlir] make remaining memref dialect ops produce strided layouts
Three ops in the memref dialect (transpose, expand_shape, collapse_shape)
were originally designed to operate on memrefs with strided layouts but had
to go through the affine map representation, as the type did not support
anything else. Make these ops produce memref values with StridedLayoutAttr
instead, now that it is available.

Depends On D133938

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D133947
2022-09-16 10:56:48 +02:00
Alex Zinenko
519847fefc [mlir] materialize strided memref layout as attribute
Introduce a new attribute to represent the strided memref layout. Strided
layouts are omnipresent in code generation flows and are the only kind of
layout produced and supported by half of the operations in the memref dialect
(view-related, shape-related). However, they are internally represented as
affine maps that require a somewhat fragile extraction of the strides from the
linear form that also comes with an overhead. Furthermore, textual
representation of strided layouts as affine maps is difficult to read: compare
`affine_map<(d0, d1, d2)[s0, s1] -> (d0*32 + d1*s0 + s1 + d2)>` with
`strides: [32, ?, 1], offset: ?`. While a rudimentary support for parsing a
syntactically sugared version of the strided layout has existed in the codebase
for a long time, it does not go as far as this commit to make the strided
layout a first-class attribute in the IR.

This introduces the attribute and updates the tests that used the pre-existing
sugared form to use the new attribute instead. Most memrefs created
programmatically, e.g., in passes, still use the affine form with further
extraction of strides and will be updated separately.

Update and clean up the memref type documentation, which had gotten stale and
was still referring to details of affine map composition that are long gone.

See https://discourse.llvm.org/t/rfc-materialize-strided-memref-layout-as-an-attribute/64211.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D132864
2022-08-30 17:19:58 +02:00
Jacques Pienaar
7d273fde11 [mlir] Populate default attributes on op creation
Default attributes were previously handled only by generated ODS accessors,
with the intention that they behave as if the attribute had been set. This
addresses the long-standing TODO to fix that inconsistency by moving the
initialization to op construction instead of every access, which also removes
the need for duplicated default attribute population in the Python bindings.

Switch some of the OpenMP ones to optional attributes with defaults, as the
currently set default values are not legal. May need to dig more there.

Switched the LinAlg generated ones to optional attributes with defaults, as
they are quite widely used and it is unclear which of the two interpretations
they fall under.

Differential Revision: https://reviews.llvm.org/D130916
2022-08-22 16:49:46 -07:00
Michele Scuttari
29fbe60009 [MLIR] Rename the generic LLVM allocation and deallocation functions
The generic allocation and deallocation functions, which are optionally used during the MemRef -> LLVM conversion, should have names that are specifically bound to their origin, that is, the conversion pass itself.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D130588
2022-08-02 18:23:14 +00:00
Markus Böck
bd7eff1f2a [mlir][flang] Make use of the new GEPArg builder of GEP Op to simplify code
This is the follow-up to https://reviews.llvm.org/D130730, which goes through upstream code and removes creating constant values in favour of using the constant indices in GEP directly. This leads to less, more readable code and more compact IR as well.
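
A hedged C++ sketch of the simplification (builder signatures shown in the current opaque-pointer form; names are assumptions):

```
// Before, a constant op had to be materialized and passed as an index operand:
//   Value zero = rewriter.create<LLVM::ConstantOp>(loc, i64Ty, zeroAttr);
// After, constant indices are passed directly as GEPArgs:
Value gep = rewriter.create<LLVM::GEPOp>(loc, ptrTy, elemTy, base,
                                         ArrayRef<LLVM::GEPArg>{0});
```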

Differential Revision: https://reviews.llvm.org/D130731
2022-08-01 17:22:55 +02:00
Tres Popp
984d1bf8c0 Remove empty AffineExpr stride canonicalization in makeCanonicalStridedLayoutExpr
The "optimization" would replace the AffineMap for an empty shape with a 0 to represent its indexing (stride * dimension) logic. Meanwhile other pieces of core logic (such as getStridesAndOffset and makeStridedLinearLayoutMap) require strides for all dimensions to ensure no aliasing can occur which would occur if the shape was not empty. For now, this optimization is removed as different pieces of core types disagree on this, so the optimization should be caller supplied or should be consistent throughout the infrastructure.

Differential Revision: https://reviews.llvm.org/D130772
2022-08-01 11:15:32 +02:00
Michele Scuttari
a8601f11fb [MLIR] Generic 'malloc', 'aligned_alloc' and 'free' functions
When converted to the LLVM dialect, the memref.alloc and memref.free operations used to generate calls to hardcoded 'malloc' and 'free' functions. This didn't leave users any freedom to provide a custom implementation. Those operations now convert into calls to '_mlir_alloc' and '_mlir_free' functions, which have also been implemented in the runtime support library as wrappers around 'malloc' and 'free'. The same has been done for the 'aligned_alloc' function.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D128791
2022-07-25 15:52:51 +02:00