This reverts commit d0650d1089.
Original commit message:
Subviews are supposed to be expanded before we hit the lowering
code.
The expansion is done with the pass called
expand-strided-metadata.
Add a test that demonstrates how these passes can be linked up to achieve
the desired lowering.
This patch is NFC in spirit but not in practice because `subview` gets
lowered into `reinterpret_cast(extract_strided_metadata, <some math>)`
which lowers into two memref descriptors (one for `reinterpret_cast` and
one for `extract_strided_metadata`), which creates some noise of the
form: `extractvalue(unrealized_cast(extractvalue[0]))[0]` that is
currently not simplified within MLIR but that is really just a noop in
that case.
Differential Revision: https://reviews.llvm.org/D136377
This reverts commit c8e15afa4c.
This breaks some integration tests, see
https://lab.llvm.org/buildbot/#/builders/220/builds/10446
I have to update a bunch of RUN lines in the tests to use the new
lowering scheme. Nothing complicated but let's keep the build clean
while I'm fixing that.
Subviews are supposed to be expanded before we hit the lowering
code.
The expansion is done with the pass called
expand-strided-metadata.
Add a test that demonstrates how these passes can be linked up to achieve
the desired lowering.
This patch is NFC in spirit but not in practice because `subview` gets
lowered into `reinterpret_cast(extract_strided_metadata, <some math>)`
which lowers into two memref descriptors (one for `reinterpret_cast` and
one for `extract_strided_metadata`), which creates some noise of the
form: `extractvalue(unrealized_cast(extractvalue[0]))[0]` that is
currently not simplified within MLIR but that is really just a noop in
that case.
Differential Revision: https://reviews.llvm.org/D136377
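For illustration, a rough sketch (not verbatim pass output) of what the expansion described above looks like for a simple 1-D `subview`; the names and types are made up, and in this fully static case the `<some math>` folds into a constant offset:
```
// Input:
%view = memref.subview %src[4] [8] [1]
    : memref<64xf32> to memref<8xf32, strided<[1], offset: 4>>

// After -expand-strided-metadata (roughly):
%base, %offset, %size, %stride = memref.extract_strided_metadata %src
    : memref<64xf32> -> memref<f32>, index, index, index
%view = memref.reinterpret_cast %base to offset: [4], sizes: [8], strides: [1]
    : memref<f32> to memref<8xf32, strided<[1], offset: 4>>
```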
This is generated by running
```
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.td
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.mlir
```
Reviewed By: rriddle, dcaballe
Differential Revision: https://reviews.llvm.org/D138866
The first result of the extract_strided_metadata operation is a MemRef,
not a naked pointer.
This patch fixes the lowering of this operation in MemRefToLLVM so that
we properly materialize the full MemRef structure and not just the base,
naked, pointer.
Differential Revision: https://reviews.llvm.org/D137364
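A minimal sketch of why this matters (names are illustrative): the first result below is a rank-0 memref, which in the typed-pointer LLVM dialect of that time converts to a full descriptor, not a bare pointer.
```
%base, %offset, %size, %stride = memref.extract_strided_metadata %m
    : memref<?xf32> -> memref<f32>, index, index, index
// %base : memref<f32> lowers to !llvm.struct<(ptr<f32>, ptr<f32>, i64)>
// (allocated pointer, aligned pointer, offset), not to a naked !llvm.ptr<f32>.
```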
In D134622 the printed form of a pass manager is changed to include the
name of the op that the pass manager is anchored on. This updates the
`-pass-pipeline` argument format to include the anchor op as well, so
that the printed form of a pipeline can be directly passed to
`-pass-pipeline`. In most cases this requires updating
`-pass-pipeline='pipeline'` to
`-pass-pipeline='builtin.module(pipeline)'`.
This also fixes an outdated assert that prevented running a
`PassManager` anchored on `'any'`.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D134900
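A hypothetical invocation showing the old and new forms of the flag (the pipeline contents are made up):
```
# Before:
mlir-opt --pass-pipeline='func.func(canonicalize,cse)' input.mlir
# After: the anchor op is spelled out explicitly.
mlir-opt --pass-pipeline='builtin.module(func.func(canonicalize,cse))' input.mlir
```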
The `extract_strided_metadata` operation literally breaks down a memory
descriptor into its different components.
Teach the MemRefToLLVM conversion framework this fact.
Differential Revision: https://reviews.llvm.org/D136304
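As a reminder of what the op exposes, here is a sketch on a 2-D memref: the results are the base buffer (as a rank-0 memref), the offset, and the per-dimension sizes and strides.
```
%base, %offset, %sizes:2, %strides:2 = memref.extract_strided_metadata %m
    : memref<?x?xf32, strided<[?, ?], offset: ?>>
    -> memref<f32>, index, index, index, index, index
```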
The MemRef to LLVM conversion pass emits `llvm.alloca` operations to promote MemRef descriptors to the stack when lowering `memref.copy` operations for operands which do not have a contiguous layout in memory. The original stack position is never restored after the allocations, which creates an issue when the copy operation is embedded into a loop with a high trip count, ultimately resulting in a segmentation fault due to the stack growing too large.
Below is a minimal example illustrating the issue:
```
// Strided layout of the subview result: strides [64, 1], offset 16 * 64 + 32 = 1056.
#map = affine_map<(d0, d1) -> (d0 * 64 + d1 + 1056)>
module {
  func.func @main() {
    %arg0 = memref.alloc() : memref<32x64xi64>
    %arg1 = memref.alloc() : memref<16x32xi64>
    %lb = arith.constant 0 : index
    %ub = arith.constant 100000 : index
    %step = arith.constant 1 : index
    %slice = memref.subview %arg0[16, 32] [16, 32] [1, 1] :
        memref<32x64xi64> to memref<16x32xi64, #map>
    scf.for %i = %lb to %ub step %step {
      memref.copy %slice, %arg1 :
          memref<16x32xi64, #map> to memref<16x32xi64>
    }
    return
  }
}
```
When running the code above, e.g., with mlir-cpu-runner, the execution crashes with a segmentation fault:
```
$ mlir-opt \
--convert-scf-to-cf \
--convert-memref-to-llvm \
--convert-func-to-llvm \
--convert-cf-to-llvm \
--reconcile-unrealized-casts <file> | \
mlir-cpu-runner \
-e main -entry-point-result=void \
--shared-libs=$PWD/build/lib/libmlir_c_runner_utils.so
[...]
Segmentation fault
```
This patch causes the code lowering a `memref.copy` operation in the MemRefToLLVM pass to emit a pair of matching `llvm.intr.stacksave` and `llvm.intr.stackrestore` operations around the promotion of memory descriptors and the subsequent call to `memrefCopy` in order to restore the stack to its original position after the call.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D135756
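A hedged sketch, not the verbatim pass output, of the guard that now wraps the runtime call (the descriptor promotion details are elided):
```
%sp = llvm.intr.stacksave : !llvm.ptr<i8>
// ... llvm.alloca the two promoted descriptors, then llvm.call @memrefCopy ...
llvm.intr.stackrestore %sp : !llvm.ptr<i8>
```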
One constant generated in MemRefToLLVM had a hardcoded bitwidth of
64 bits. The fix uses the typeConverter to create a constant that
matches the bitwidth provided by the data layout. The issue was
detected in an attempt to add a verifier to the LLVM ICmp operation that
checks that the types of the compared arguments match.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D135775
Add memref.realloc and canonicalization of the op. Add conversion patterns for
lowering the op to LLVM using unaligned alloc or aligned alloc based on the
conversion option.
Add FileCheck tests for parsing and converting the op. Add an integration test.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D133424
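For reference, a minimal usage sketch of the new op (the types are illustrative):
```
// Resize an existing allocation; the leading elements are preserved.
%new = memref.realloc %old : memref<64xf32> to memref<124xf32>
```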
Negative strides are useful for creating a reverse view of an array. We don't have a specific example for negative offsets yet but will add one for consistency.
Differential Revision: https://reviews.llvm.org/D134147
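For example, a reverse view can be expressed with a negative stride; this is a sketch with made-up names, shown only as an illustration:
```
%rev = memref.reinterpret_cast %buf to offset: [7], sizes: [8], strides: [-1]
    : memref<8xf32> to memref<8xf32, strided<[-1], offset: 7>>
// %rev[i] reads %buf[7 - i].
```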
All relevant operations have been switched to primarily use the strided
layout, but still support the affine map layout. Update the relevant
tests to use the strided format instead for compatibility with how ops
now print by default.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D134045
The three following ops in the memref dialect: transpose, expand_shape,
collapse_shape, were originally designed to operate on memrefs with
strided layouts but had to go through the affine map representation as the type
did not support anything else. Make these ops produce memref values with
StridedLayoutAttr instead now that it is available.
Depends On D133938
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D133947
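A sketch of the new result type for one of these ops (illustrative, not a verbatim test case): `memref.transpose` now yields a `strided<...>` layout directly instead of an affine map.
```
%t = memref.transpose %m (i, j) -> (j, i)
    : memref<?x?xf32, strided<[?, 1], offset: ?>>
      to memref<?x?xf32, strided<[1, ?], offset: ?>>
```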
Introduce a new attribute to represent the strided memref layout. Strided
layouts are omnipresent in code generation flows and are the only kind of
layouts produced and supported by half of the operations in the memref dialect
(view-related, shape-related). However, they are internally represented as
affine maps that require a somewhat fragile extraction of the strides from the
linear form that also comes with an overhead. Furthermore, textual
representation of strided layouts as affine maps is difficult to read: compare
`affine_map<(d0, d1, d2)[s0, s1] -> (d0*32 + d1*s0 + s1 + d2)>` with
`strides: [32, ?, 1], offset: ?`. While rudimentary support for parsing a
syntactically sugared version of the strided layout has existed in the codebase
for a long time, it does not go as far as this commit to make the strided
layout a first-class attribute in the IR.
This introduces the attribute and updates the tests that use the pre-existing
sugared form to use the new attribute instead. Most memrefs created
programmatically, e.g., in passes, still use the affine form with further
extraction of strides and will be updated separately.
Update and clean up the memref type documentation, which has gotten stale and has
been referring to the details of affine map composition that are long gone.
See https://discourse.llvm.org/t/rfc-materialize-strided-memref-layout-as-an-attribute/64211.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D132864
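The same layout spelled both ways, as an illustration (the types here are made up):
```
// Affine-map form: strides [s1, 1], offset s0.
memref<?x42xf32, affine_map<(d0, d1)[s0, s1] -> (d0 * s1 + s0 + d1)>>
// New first-class attribute form.
memref<?x42xf32, strided<[?, 1], offset: ?>>
```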
Default attributes were previously only handled by the generated ODS accessors,
with the intention that they behave as if they were set attributes. This
addresses the long-standing TODO to fix that inconsistency by moving the
initialization to construction time instead of every access, and removes the
need for duplicated default attribute population in the Python bindings.
Switch some of the OpenMP ones to optional attributes with defaults, as the
currently set default values are not legal. May need to dig more there.
Switched the LinAlg-generated ones to optional attributes with defaults, as
they are quite widely used and it is unclear where they fall between the two
different interpretations.
Differential Revision: https://reviews.llvm.org/D130916
The generic allocation and deallocation functions, which are optionally used during the MemRef -> LLVM conversion, should have names that are specifically bound to their origin, that is, the conversion pass itself.
Reviewed By: silvas
Differential Revision: https://reviews.llvm.org/D130588
This is the follow-up on https://reviews.llvm.org/D130730, which goes through upstream code and removes the creation of constant values in favour of using constant indices in GEP directly. This leads to less, more readable code and more compact IR as well.
Differential Revision: https://reviews.llvm.org/D130731
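A hedged sketch of the effect on the emitted IR, using a hypothetical memref descriptor access in the typed-pointer syntax of that era: the index constants now live inside the GEP instead of in separate `llvm.mlir.constant` ops.
```
// Access the offset field (index 2) of a rank-0 descriptor.
%offset_ptr = llvm.getelementptr %desc[0, 2]
    : (!llvm.ptr<struct<(ptr<f32>, ptr<f32>, i64)>>) -> !llvm.ptr<i64>
```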
The "optimization" would replace the AffineMap for an empty shape with a 0 to represent its indexing (stride * dimension) logic. Meanwhile other pieces of core logic (such as getStridesAndOffset and makeStridedLinearLayoutMap) require strides for all dimensions to ensure no aliasing can occur which would occur if the shape was not empty. For now, this optimization is removed as different pieces of core types disagree on this, so the optimization should be caller supplied or should be consistent throughout the infrastructure.
Differential Revision: https://reviews.llvm.org/D130772
When converted to the LLVM dialect, the memref.alloc and memref.free operations were generating calls to hardcoded 'malloc' and 'free' functions. This didn't leave any freedom to users to provide their custom implementation. Those operations now convert into calls to '_mlir_alloc' and '_mlir_free' functions, which have also been implemented into the runtime support library as wrappers to 'malloc' and 'free'. The same has been done for the 'aligned_alloc' function.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D128791
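Sketch of the resulting lowering for an allocation and deallocation (sizes and value names are illustrative):
```
%mem = llvm.call @_mlir_alloc(%size) : (i64) -> !llvm.ptr<i8>
llvm.call @_mlir_free(%mem) : (!llvm.ptr<i8>) -> ()
```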
The shape can be a memref of index type, so the memref::LoadOp result needs to be converted into an LLVM type.
Differential Revision: https://reviews.llvm.org/D129965
Prior to this patch, the lowering of memref.reshape operations to the
LLVM dialect failed if the shape argument had a static shape with
dynamic dimensions. This patch adds the necessary support so that when
the shape argument has dynamic values, the lowering probes the dimension
at runtime to set the size in the `MemRefDescriptor` type. This patch
also computes the stride for dynamic dimensions by deriving it from the
sizes of the inner dimensions.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D126604
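A sketch of the case that is now supported (hypothetical example): a statically shaped 1-D shape operand whose element values are only known at runtime.
```
%dst = memref.reshape %src(%shape)
    : (memref<?xf32>, memref<2xindex>) -> memref<?x?xf32>
```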
This patch uses the code for translating memref.reinterpret_cast ops as a
reference to add translation rules for memref.reshape ops that have a static shape
argument. Since reshape ops don't have offsets, sizes, or strides, this
patch simply sets the allocated and aligned pointers of the MemRef
descriptor.
Reviewed By: ftynse, cathyzhyi
Differential Revision: https://reviews.llvm.org/D125039
This change borrows the ideas from `computeExpanded/CollapsedLayoutMap`
and computes the dynamic strides at runtime for the memref descriptors.
Differential Revision: https://reviews.llvm.org/D124001
The last remaining operations in the standard dialect all revolve around
FuncOp/function related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number
of cleanups unlocked/necessary afterwards:
* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect
See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061
Differential Revision: https://reviews.llvm.org/D120624
When lowering to a memrefCopy call, the size for the i1 type was calculated as 0.
Instead of using getTypeSizeInBits() and dividing by 8, we should just use getTypeSize().
Differential Revision: https://reviews.llvm.org/D119540
The lowering creates llvm.insertvalue with the rank value, so it needs to use
index type instead of a 64-bit integer type. Otherwise, we get an error:
'llvm.insertvalue' op Type mismatch: cannot insert 'i64' into '!llvm.struct<(i32, ptr<i8>)>'
Differential Revision: https://reviews.llvm.org/D119534
This is part of splitting up the standard dialect. The move makes sense anyways,
given that the memref dialect already holds memref.atomic_rmw which is the non-region
sibling operation of std.generic_atomic_rmw (the relationship is even clearer given
they have nearly the same description, modulo how they represent the inner computation).
Differential Revision: https://reviews.llvm.org/D118209
Enable ReassociatingReshapeOpConversion with "non-identity" layouts.
This removes an early-return in this function, which seems unnecessary and is
preventing some memref.collapse_shape from converting to LLVM (see included lit test).
It seems unnecessary because the return message says "only empty layout map is supported"
but there actually is code in this function to deal with non-empty layout maps. Maybe
it refers to an earlier state of implementation and is just out of date?
Though, there is another concern about this early return: the condition that it actually
checks, `{src,dst}MemrefType.getLayout().isIdentity()`, is not quite the same as what the
return message says, "only empty layout map is supported". Stepping through this
`getLayout().isIdentity()` code in GDB, I found that it evaluates to `.getAffineMap().isIdentity()`
which does (AffineMap.cpp:271):
```
if (getNumDims() != getNumResults())
return false;
```
It seems that this would always return false for memrefs of rank greater than 1?
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D114808
Recent commits added a possibility for indices in LLVM dialect GEP operations
to be supplied directly as constant attributes to ensure they remain such until
translation to LLVM IR happens. Make this required for indexing into LLVM
struct types to match LLVM IR requirements, otherwise the translation would
assert on constructing such IR.
For better compatibility with MLIR-style operation construction interface,
allow GEP operations to be constructed programmatically using Values pointing
to known constant operations as struct indices.
Depends On D116758
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D116759
Per the discussion in https://reviews.llvm.org/D116345 it makes sense
to move AtomicRMWOp out of the standard dialect. This was accentuated by the
need to add a folding of the op with a memref::cast. The only dialect
that would permit this is the memref dialect (keeping it in the standard dialect
or moving it to the arithmetic dialect would require those dialects to have a
dependency on the memref dialect, which breaks linking).
As the AtomicRMWKind enum is used throughout, this has been moved to Arith.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D116392
The semantics of the ops that implement the
`OffsetSizeAndStrideOpInterface` is that if the number of offsets,
sizes or strides are less than the rank of the source, then some
default values are filled along the trailing dimensions (0 for offset,
source dimension of sizes, and 1 for strides). This is confusing,
especially with rank-reducing semantics. Immediate issue here is that
the methods of `OffsetSizeAndStrideOpInterface` assume that the
number of values is the same as the source rank. This causes out-of-bounds
errors.
So simplify the specification of `OffsetSizeAndStrideOpInterface`
to make it invalid to specify a number of offsets/sizes/strides not
equal to the source rank.
Differential Revision: https://reviews.llvm.org/D115677
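For illustration (a hypothetical example), offsets, sizes, and strides must now be specified for every source dimension, even where the defaults would previously have applied:
```
// One entry per source dimension is required.
%0 = memref.subview %arg0[0, 0] [4, 16] [1, 1]
    : memref<8x16xf32> to memref<4x16xf32, strided<[16, 1]>>
```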
The current implementation invokes materializations
whenever an input operand does not have a mapping for the
desired type, i.e. it requires materialization at the earliest possible
point. This conflicts with the goal of dialect conversion (and also the
current documentation) which states that a materialization is only
required if the materialization is supposed to persist after the
conversion process has finished.
This revision refactors this such that whenever a target
materialization "might" be necessary, we insert an
unrealized_conversion_cast to act as a temporary materialization.
This allows for deferring the invocation of the user
materialization hooks until the end of the conversion process,
where we have a better sense of whether it's actually
necessary. This has several benefits:
* In some cases a target materialization hook is no longer
necessary
When performing a full conversion, there are some situations
where a temporary materialization is necessary. Moving forward,
these users won't need to provide any target materializations,
as the temporary materializations do not require the user to
provide materialization hooks.
* getRemappedValue can now handle values that haven't been
converted yet
Before this commit, it wasn't well supported to get the remapped
value of a value that hadn't been converted yet (making it
difficult/impossible to convert multiple operations in many
situations). This commit updates getRemappedValue to properly
handle this case by inserting temporary materializations when
necessary.
Another code-health related benefit is that with this change we
can move a majority of the complexity related to materializations
to the end of the conversion process, instead of handling it ad hoc
while the conversion is happening.
Differential Revision: https://reviews.llvm.org/D111620
Precursor: https://reviews.llvm.org/D110200
Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.
Renamed all instances of operations in the codebase and in tests.
Reviewed By: rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D110797
LLVM considers global variables marked as external to be defined within the module if they are initialized (including to an undef). Other external globals are considered to be defined externally and imported into the current translation unit. Lowering of MLIR Global Ops does not properly propagate undefined initializers, resulting in a global that is expected to be defined within the current TU not being defined.
Differential Revision: https://reviews.llvm.org/D108252
Historically the builtin dialect has had an empty namespace. This has unfortunately created a very awkward situation, where many utilities either have to special case the empty namespace, or just don't work at all right now. This revision adds a namespace to the builtin dialect, and starts to cleanup some of the utilities to no longer handle empty namespaces. For now, the assembly form of builtin operations does not require the `builtin.` prefix. (This should likely be re-evaluated though)
Differential Revision: https://reviews.llvm.org/D105149
The dialect-specific cast between builtin (ex-standard) types and LLVM
dialect types was introduced a long time before built-in support for
unrealized_conversion_cast. It has a similar purpose, but is restricted
to compatible builtin and LLVM dialect types, which may hamper
progressive lowering and composition with types from other dialects.
Replace llvm.mlir.cast with unrealized_conversion_cast, and drop the
operation that became unnecessary.
Also make unrealized_conversion_cast legal by default in
LLVMConversionTarget as the majority of conversions using it are partial
conversions that actually want the casts to persist in the IR. The
standard-to-llvm conversion, which is still expected to run last, cleans up
the remaining casts.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D105880
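A minimal sketch of the replacement (illustrative types):
```
// The dialect-specific cast
//   %1 = llvm.mlir.cast %0 : index to i64
// is now expressed with the builtin op:
%1 = builtin.unrealized_conversion_cast %0 : index to i64
```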
After the MemRef has been split out of the Standard dialect, the
conversion to the LLVM dialect remained as a huge monolithic pass.
This is undesirable for the same complexity management reasons as having
a huge Standard dialect itself, and is even more confusing given the
existence of a separate dialect. Extract the conversion of the MemRef
dialect operations to LLVM into a separate library and a separate
conversion pass.
Reviewed By: herhut, silvas
Differential Revision: https://reviews.llvm.org/D105625
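With this split, the MemRef lowering can be run as its own pass, e.g. (flag name as used elsewhere in this log):
```
mlir-opt --convert-memref-to-llvm input.mlir
```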