Commit Graph

50 Commits

Author SHA1 Message Date
Benjamin Chetioui
a6c8f06f55 [mlir] Clean up typos in FileCheck directives in various tests.
Reviewed By: tpopp

Differential Revision: https://reviews.llvm.org/D139698
2022-12-12 09:29:14 +01:00
Quentin Colombet
786cbb09ed Re-apply "[mlir][MemRefToLLVM] Remove the code for lowering subview"
This reverts commit d0650d1089.

Original commit message:
Subviews are supposed to be expanded before we hit the lowering
code.
The expansion is done with the pass called
expand-strided-metadata.

Add a test that demonstrates how these passes can be linked up to achieve
the desired lowering.

This patch is NFC in spirit but not in practice because `subview` gets
lowered into `reinterpret_cast(extract_strided_metadata, <some math>)`,
which lowers into two memref descriptors (one for `reinterpret_cast` and
one for `extract_strided_metadata`). This creates some noise of the
form `extractvalue(unrealized_cast(extractvalue[0]))[0]` that is
currently not simplified within MLIR, but that is really just a noop in
that case.
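
As a rough sketch (hypothetical IR, assuming a 1-D dynamically sized
source), the expansion rewrites a `subview` into metadata extraction
plus a `reinterpret_cast` before the conversion to LLVM runs:

```
// Input:
%v = memref.subview %m[%o] [%sz] [1]
    : memref<?xf32> to memref<?xf32, strided<[1], offset: ?>>

// After -expand-strided-metadata (schematically):
%base, %off, %size, %stride = memref.extract_strided_metadata %m
    : memref<?xf32> -> memref<f32>, index, index, index
%new_off = affine.apply affine_map<()[s0, s1] -> (s0 + s1)>()[%off, %o]
%v2 = memref.reinterpret_cast %base to
    offset: [%new_off], sizes: [%sz], strides: [1]
    : memref<f32> to memref<?xf32, strided<[1], offset: ?>>
```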

Differential Revision: https://reviews.llvm.org/D136377
2022-12-02 15:26:58 +00:00
Quentin Colombet
d0650d1089 Revert "[mlir][MemRefToLLVM] Remove the code for lowering subview"
This reverts commit c8e15afa4c.

This breaks some integration tests, see
https://lab.llvm.org/buildbot/#/builders/220/builds/10446

I have to update a bunch of RUN lines in the tests to use the new
lowering scheme. Nothing complicated but let's keep the build clean
while I'm fixing that.
2022-12-02 14:19:37 +00:00
Quentin Colombet
c8e15afa4c [mlir][MemRefToLLVM] Remove the code for lowering subview
Subviews are supposed to be expanded before we hit the lowering
code.
The expansion is done with the pass called
expand-strided-metadata.

Add a test that demonstrates how these passes can be linked up to achieve
the desired lowering.

This patch is NFC in spirit but not in practice because `subview` gets
lowered into `reinterpret_cast(extract_strided_metadata, <some math>)`,
which lowers into two memref descriptors (one for `reinterpret_cast` and
one for `extract_strided_metadata`). This creates some noise of the
form `extractvalue(unrealized_cast(extractvalue[0]))[0]` that is
currently not simplified within MLIR, but that is really just a noop in
that case.

Differential Revision: https://reviews.llvm.org/D136377
2022-12-02 10:17:06 +00:00
Hanhan Wang
0a1569a400 [mlir][NFC] Remove trailing whitespaces from *.td and *.mlir files.
This is generated by running

```
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.td
sed --in-place 's/[[:space:]]\+$//' mlir/**/*.mlir
```

Reviewed By: rriddle, dcaballe

Differential Revision: https://reviews.llvm.org/D138866
2022-11-28 15:26:30 -08:00
Quentin Colombet
200266a0a1 [mlir][MemRef] Fix the lowering of extract_strided_metadata
The first result of the extract_strided_metadata operation is a MemRef,
not a naked pointer.
This patch fixes the lowering of this operation in MemRefToLLVM so that
we properly materialize the full MemRef structure and not just the base,
naked, pointer.

Differential Revision: https://reviews.llvm.org/D137364
2022-11-05 01:06:38 +00:00
rkayaith
13bd410962 [mlir][Pass] Include anchor op in -pass-pipeline
In D134622 the printed form of a pass manager is changed to include the
name of the op that the pass manager is anchored on. This updates the
`-pass-pipeline` argument format to include the anchor op as well, so
that the printed form of a pipeline can be directly passed to
`-pass-pipeline`. In most cases this requires updating
`-pass-pipeline='pipeline'` to
`-pass-pipeline='builtin.module(pipeline)'`.

This also fixes an outdated assert that prevented running a
`PassManager` anchored on `'any'`.
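
For example (a hypothetical pipeline, shown in both forms):

```
# Before this change:
$ mlir-opt input.mlir -pass-pipeline='func.func(cse,canonicalize)'
# After this change, the anchor op is spelled out:
$ mlir-opt input.mlir -pass-pipeline='builtin.module(func.func(cse,canonicalize))'
```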

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D134900
2022-11-03 11:36:12 -04:00
Quentin Colombet
55d08a8670 [mlir][MemRefToLLVM] Lower extract_strided_metadata
The `extract_strided_metadata` operation literally breaks down a memory
descriptor into its different components.
Teach the MemRefToLLVM conversion framework this fact.
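
For instance (a sketch, assuming a rank-2 operand), each result maps to
one field of the LLVM memref descriptor:

```
%base, %offset, %sizes:2, %strides:2 = memref.extract_strided_metadata %m
    : memref<?x?xf32> -> memref<f32>, index, index, index, index, index
```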

Differential Revision: https://reviews.llvm.org/D136304
2022-10-26 21:29:35 +00:00
Andi Drebes
f76e40d1a4 [mlir] MemRefToLLVM: Save / restore stack when lowering memref.copy
The MemRef to LLVM conversion pass emits `llvm.alloca` operations to promote MemRef descriptors to the stack when lowering `memref.copy` operations for operands which do not have a contiguous layout in memory. The original stack position is never restored after the allocations, which creates an issue when the copy operation is embedded into a loop with a high trip count, ultimately resulting in a segmentation fault due to the stack growing too large.

Below is a minimal example illustrating the issue:

```
// Strided layout of the subview: strides [64, 1], offset 16 * 64 + 32 = 1056.
#map = affine_map<(d0, d1) -> (d0 * 64 + d1 + 1056)>

module {
  func.func @main() {
    %arg0 = memref.alloc() : memref<32x64xi64>
    %arg1 = memref.alloc() : memref<16x32xi64>
    %lb = arith.constant 0 : index
    %ub = arith.constant 100000 : index
    %step = arith.constant 1 : index
    %slice = memref.subview %arg0[16,32][16,32][1,1] :
       memref<32x64xi64> to memref<16x32xi64, #map>

    scf.for %i = %lb to %ub step %step {
       memref.copy %slice, %arg1 :
         memref<16x32xi64, #map> to memref<16x32xi64>
    }

    return
  }
}
```

When running the code above, e.g., with mlir-cpu-runner, the execution crashes with a segmentation fault:

```
$ mlir-opt \
    --convert-scf-to-cf \
    --convert-memref-to-llvm \
    --convert-func-to-llvm \
    --convert-cf-to-llvm \
    --reconcile-unrealized-casts <file> | \
  mlir-cpu-runner \
    -e main -entry-point-result=void \
    --shared-libs=$PWD/build/lib/libmlir_c_runner_utils.so
[...]
Segmentation fault
```

This patch causes the code lowering a `memref.copy` operation in the MemRefToLLVM pass to emit a pair of matching `llvm.intr.stacksave` and `llvm.intr.stackrestore` operations around the promotion of memory descriptors and the subsequent call to `memrefCopy` in order to restore the stack to its original position after the call.
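
A hedged sketch of the resulting pattern (names and types hypothetical):

```
%sp = llvm.intr.stacksave : !llvm.ptr<i8>
// llvm.alloca-promoted source and target descriptors feed the runtime call.
llvm.call @memrefCopy(%elem_size, %src_desc, %dst_desc)
    : (i64, !llvm.ptr<i8>, !llvm.ptr<i8>) -> ()
llvm.intr.stackrestore %sp : !llvm.ptr<i8>
```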

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D135756
2022-10-13 10:13:04 +02:00
Tobias Gysi
a938f3a9b5 [mlir] Fix bitwidth of memref-to-llvm constant.
One constant generated in MemRefToLLVM had a hardcoded bitwidth of
64 bits. The fix uses the typeConverter to create a constant that
matches the bitwidth provided by the data layout. The issue was
detected in an attempt to add a verifier to the LLVM ICmp operation that
checks that the types of the compared arguments match.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D135775
2022-10-12 17:13:01 +03:00
Nicolas Vasilache
07801f713e [mlir][memref] Add conversion support for memref.extract_aligned_pointer_as_index to LLVM
Reviewed By: pifon2a

Differential Revision: https://reviews.llvm.org/D134834
2022-09-29 02:35:00 -07:00
Peiming Liu
8b587113b7 [mlir][memref] fix overflow in realloc
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D134511
2022-09-23 03:07:23 +00:00
bixia1
9f13b9346b [mlir][memref] Add realloc op.
Add memref.realloc and canonicalization of the op. Add conversion patterns for
lowering the op to LLVM using unaligned alloc or aligned alloc based on the
conversion option.

Add FileCheck tests for parsing and converting the op. Add an integration test.
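
For reference, a sketch of the op's two forms (values hypothetical):

```
// Statically sized reallocation:
%0 = memref.realloc %a : memref<64xf32> to memref<124xf32>
// Dynamically sized reallocation, with the new size %d : index.
%1 = memref.realloc %b(%d) : memref<?xf32> to memref<?xf32>
```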

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D133424
2022-09-21 08:04:00 -07:00
Ivan Butygin
54d81e49e3 [mlir] Allow negative strides and offset in StridedLayoutAttr
Negative strides are useful for creating a reverse view of an array. We don't have a specific example for negative offsets yet, but we allow them as well for consistency.
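
For instance, a hypothetical reverse view over a 4-element buffer could
use the layout:

```
memref<4xf32, strided<[-1], offset: 3>>
```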

Differential Revision: https://reviews.llvm.org/D134147
2022-09-21 13:21:53 +02:00
Alex Zinenko
f3fae035c7 [mlir] use strided layout in structured codegen-related tests
All relevant operations have been switched to primarily use the strided
layout, but still support the affine map layout. Update the relevant
tests to use the strided format instead for compatibility with how ops
now print by default.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D134045
2022-09-17 08:11:28 +02:00
Alex Zinenko
46b90a7b5d [mlir] make remaining memref dialect ops produce strided layouts
The following three ops in the memref dialect: transpose, expand_shape,
and collapse_shape, were originally designed to operate on memrefs with
strided layouts but had to go through the affine map representation as the type
did not support anything else. Make these ops produce memref values with
StridedLayoutAttr instead, now that it is available.

Depends On D133938

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D133947
2022-09-16 10:56:48 +02:00
Alex Zinenko
519847fefc [mlir] materialize strided memref layout as attribute
Introduce a new attribute to represent the strided memref layout. Strided
layouts are omnipresent in code generation flows and are the only kind of
layout produced and supported by half of the operations in the memref dialect
(view-related, shape-related). However, they are internally represented as
affine maps, which requires a somewhat fragile extraction of the strides from
the linear form and comes with an overhead. Furthermore, the textual
representation of strided layouts as affine maps is difficult to read: compare
`affine_map<(d0, d1, d2)[s0, s1] -> (d0*32 + d1*s0 + s1 + d2)>` with
`strides: [32, ?, 1], offset: ?`. While rudimentary support for parsing a
syntactically sugared version of the strided layout has existed in the codebase
for a long time, it did not go as far as this commit does in making the strided
layout a first-class attribute in the IR.
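
For instance, the two notations above describe the same layout on a
memref type:

```
memref<?x?x?xf32, affine_map<(d0, d1, d2)[s0, s1] -> (d0*32 + d1*s0 + s1 + d2)>>
memref<?x?x?xf32, strided<[32, ?, 1], offset: ?>>
```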

This introduces the attribute and updates the tests that were using the
pre-existing sugared form to use the new attribute instead. Most memrefs
created programmatically, e.g., in passes, still use the affine form with
further extraction of strides and will be updated separately.

Update and clean up the memref type documentation, which had gotten stale and
was still referring to details of affine map composition that are long gone.

See https://discourse.llvm.org/t/rfc-materialize-strided-memref-layout-as-an-attribute/64211.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D132864
2022-08-30 17:19:58 +02:00
Jacques Pienaar
7d273fde11 [mlir] Populate default attributes on op creation
Default attributes were previously only handled by the generated ODS
accessors, with the intention that they behave as if they were set
attributes. This addresses the long-standing TODO to fix that
inconsistency by moving the initialization to construction time instead
of every access, which also removes the need for duplicated default
attribute population in the Python bindings.

Switch some of the OpenMP attributes to optional attributes with
defaults, as the currently set default values are not legal. May need to
dig more there.

Switch the LinAlg generated ones to optional attributes with defaults as
well, as they are quite widely used and it is unclear where they fall
between the two different interpretations.

Differential Revision: https://reviews.llvm.org/D130916
2022-08-22 16:49:46 -07:00
Michele Scuttari
29fbe60009 [MLIR] Rename the generic LLVM allocation and deallocation functions
The generic allocation and deallocation instructions, which are optionally used during the MemRef -> LLVM conversion, should have a name that is specifically bound to their origin, that is, the conversion pass itself.

Reviewed By: silvas

Differential Revision: https://reviews.llvm.org/D130588
2022-08-02 18:23:14 +00:00
Markus Böck
bd7eff1f2a [mlir][flang] Make use of the new GEPArg builder of GEP Op to simplify code
This is the follow-up to https://reviews.llvm.org/D130730, which goes through upstream code and removes the creation of constant values in favour of using the constant indices in GEP directly. This leads to less, more readable code, as well as more compact IR.

Differential Revision: https://reviews.llvm.org/D130731
2022-08-01 17:22:55 +02:00
Tres Popp
984d1bf8c0 Remove empty AffineExpr stride canonicalization in makeCanonicalStridedLayoutExpr
The "optimization" would replace the AffineMap for an empty shape with a 0 to represent its indexing (stride * dimension) logic. Meanwhile other pieces of core logic (such as getStridesAndOffset and makeStridedLinearLayoutMap) require strides for all dimensions to ensure no aliasing can occur which would occur if the shape was not empty. For now, this optimization is removed as different pieces of core types disagree on this, so the optimization should be caller supplied or should be consistent throughout the infrastructure.

Differential Revision: https://reviews.llvm.org/D130772
2022-08-01 11:15:32 +02:00
Michele Scuttari
a8601f11fb [MLIR] Generic 'malloc', 'aligned_alloc' and 'free' functions
When converted to the LLVM dialect, the memref.alloc and memref.free operations were generating calls to hardcoded 'malloc' and 'free' functions. This didn't leave users any freedom to provide their own implementation. Those operations now convert into calls to '_mlir_alloc' and '_mlir_free' functions, which have also been implemented in the runtime support library as wrappers around 'malloc' and 'free'. The same has been done for the 'aligned_alloc' function.
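
A hedged sketch of the lowered form (the size and pointer types are
hypothetical):

```
%size = llvm.mlir.constant(256 : index) : i64
%raw = llvm.call @_mlir_alloc(%size) : (i64) -> !llvm.ptr<i8>
// ... use the buffer ...
llvm.call @_mlir_free(%raw) : (!llvm.ptr<i8>) -> ()
```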

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D128791
2022-07-25 15:52:51 +02:00
Ivan Butygin
d4217e6cc8 [mlir][memref] Missing type conversion in memref.reshape llvm lowering
The shape can be a memref of index type, so the memref::LoadOp result needs to be converted to an LLVM type.

Differential Revision: https://reviews.llvm.org/D129965
2022-07-21 11:15:35 +02:00
Mehdi Amini
d04c2b2fd9 Revert "[MLIR] Generic 'malloc', 'aligned_alloc' and 'free' functions"
This reverts commit 3e21fb616d.

A lot of integration tests are failing on the bot.
2022-07-18 18:07:36 +00:00
Michele Scuttari
3e21fb616d [MLIR] Generic 'malloc', 'aligned_alloc' and 'free' functions
When converted to the LLVM dialect, the memref.alloc and memref.free operations were generating calls to hardcoded 'malloc' and 'free' functions. This didn't leave users any freedom to provide their own implementation. Those operations now convert into calls to '_mlir_alloc' and '_mlir_free' functions, which have also been implemented in the runtime support library as wrappers around 'malloc' and 'free'. The same has been done for the 'aligned_alloc' function.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D128791
2022-07-18 17:58:58 +02:00
Ashay Rane
5fee1799f4 [mlir] translate memref.reshape with static shapes but dynamic dims
Prior to this patch, the lowering of memref.reshape operations to the
LLVM dialect failed if the shape argument had a static shape with
dynamic dimensions.  This patch adds the necessary support so that when
the shape argument has dynamic values, the lowering probes the dimension
at runtime to set the size in the `MemRefDescriptor` type.  This patch
also computes the stride for dynamic dimensions by deriving it from the
sizes of the inner dimensions.
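
For instance (a sketch, assuming `%d0`/`%d1` hold runtime sizes and
`%c0`/`%c1` are index constants), the shape operand below has a static
shape but dynamic values:

```
%shape = memref.alloca() : memref<2xindex>
memref.store %d0, %shape[%c0] : memref<2xindex>
memref.store %d1, %shape[%c1] : memref<2xindex>
%r = memref.reshape %src(%shape)
    : (memref<?xf32>, memref<2xindex>) -> memref<?x?xf32>
```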

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D126604
2022-06-02 10:00:58 -07:00
Eugene Zhulenev
705f048cbb [mlir] MemRefToLLVM: convert memref.view operations for empty memrefs
Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D126094
2022-05-20 16:43:54 -07:00
Ashay Rane
5380e30e04 [mlir] translate memref.reshape ops that have static shapes
This patch references code for translating memref.reinterpret_cast ops
to add translation rules for memref.reshape ops that have a static shape
argument.  Since reshape ops don't have offsets, sizes, or strides, this
patch simply sets the allocated and aligned pointers of the MemRef
descriptor.

Reviewed By: ftynse, cathyzhyi

Differential Revision: https://reviews.llvm.org/D125039
2022-05-12 11:57:20 -07:00
Benjamin Kramer
dccb73318a [mlir][MemRef] Return 0 for the canonical strided layout expr of a 0d memref
There can't be any strides, and the offset for the canonical expr is
always 0. Fixes #55229.

Differential Revision: https://reviews.llvm.org/D124795
2022-05-03 10:58:44 +02:00
Yi Zhang
e1318078a4 Support non-identity layout maps for reshape ops in MemRefToLLVM lowering
This change borrows the ideas from `computeExpanded/CollapsedLayoutMap`
and computes the dynamic strides at runtime for the memref descriptors.

Differential Revision: https://reviews.llvm.org/D124001
2022-04-26 13:03:53 -04:00
River Riddle
3028bf740e [mlir][NFC] Update textual references of func to func.func in Conversion/ tests
The special case parsing of `func` operations is being removed.
2022-04-20 22:17:27 -07:00
River Riddle
23aa5a7446 [mlir] Rename the Standard dialect to the Func dialect
The last remaining operations in the standard dialect all revolve around
FuncOp/function related constructs. This patch simply handles the initial
renaming (which by itself is already huge), but there are a large number
of cleanups unlocked/necessary afterwards:

* Removing a bunch of unnecessary dependencies on Func
* Cleaning up the From/ToStandard conversion passes
* Preparing for the move of FuncOp to the Func dialect

See the discussion at https://discourse.llvm.org/t/standard-dialect-the-final-chapter/6061

Differential Revision: https://reviews.llvm.org/D120624
2022-03-01 12:10:04 -08:00
Benjamin Kramer
27cd2a6284 [mlir][MemRef] Lower memref.copy with an offset to memcpy
memcpy can handle them as long as they're contiguous.
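
For instance (a sketch, using today's strided layout syntax): a copy
from an offset but contiguous view can now become a plain memcpy:

```
%view = memref.subview %src[8] [16] [1]
    : memref<32xf32> to memref<16xf32, strided<[1], offset: 8>>
memref.copy %view, %dst
    : memref<16xf32, strided<[1], offset: 8>> to memref<16xf32>
```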

Differential Revision: https://reviews.llvm.org/D119938
2022-02-16 17:18:31 +01:00
Adrian Kuegel
2219f9f57c [mlir][MemRef] Fix MemRefCopyOpLowering to use correct number of bytes
When lowering to memrefCopy call, the size for i1 type was calculated as 0.
Instead of using getTypeSizeInBits() and dividing by 8, we should just use getTypeSize().

Differential Revision: https://reviews.llvm.org/D119540
2022-02-11 13:59:08 +01:00
Adrian Kuegel
5b02a48085 [mlir][MemRef] Fix MemRefCastOpLowering for 32 bit index type.
The lowering creates llvm.insertvalue with the rank value, so it needs to use
index type instead of a 64-bit integer type. Otherwise, we get an error:

'llvm.insertvalue' op Type mismatch: cannot insert 'i64' into '!llvm.struct<(i32, ptr<i8>)>'

Differential Revision: https://reviews.llvm.org/D119534
2022-02-11 12:37:15 +01:00
River Riddle
632a4f8829 [mlir] Move std.generic_atomic_rmw to the memref dialect
This is part of splitting up the standard dialect. The move makes sense anyway,
given that the memref dialect already holds memref.atomic_rmw, which is the non-region
sibling operation of std.generic_atomic_rmw (the relationship is even more clear given
they have nearly the same description, modulo how they represent the inner computation).

Differential Revision: https://reviews.llvm.org/D118209
2022-01-26 11:52:01 -08:00
Benoit Jacob
499703e9c0 Enable ReassociatingReshapeOpConversion with "non-identity" layouts.
This removes an early-return in this function, which seems unnecessary and is
preventing some memref.collapse_shape from converting to LLVM (see included lit test).

It seems unnecessary because the return message says "only empty layout map is supported"
but there actually is code in this function to deal with non-empty layout maps. Maybe
it refers to an earlier state of the implementation and is just out of date?

Though, there is another concern about this early return: the condition that it actually
checks, `{src,dst}MemrefType.getLayout().isIdentity()`, is not quite the same as what the
return message says, "only empty layout map is supported". Stepping through this
`getLayout().isIdentity()` code in GDB, I found that it evaluates to `.getAffineMap().isIdentity()`
which does (AffineMap.cpp:271):

```
  if (getNumDims() != getNumResults())
    return false;
```

It seems that this would always return false for memrefs of rank greater than 1?

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D114808
2022-01-13 17:46:20 +00:00
Alex Zinenko
f50cfc44d6 [mlir] Require struct indices in LLVM::GEPOp to be constant
Recent commits added the possibility for indices in LLVM dialect GEP operations
to be supplied directly as constant attributes, to ensure they remain constant
until translation to LLVM IR happens. Make this required for indexing into LLVM
struct types to match LLVM IR requirements; otherwise, the translation would
assert when constructing such IR.

For better compatibility with the MLIR-style operation construction interface,
allow GEP operations to be constructed programmatically using Values pointing
to known constant operations as struct indices.
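
For instance (a sketch, in the typed-pointer syntax of the time), the
second index below addresses a struct field and must be a constant:

```
%f = llvm.getelementptr %p[0, 1]
    : (!llvm.ptr<struct<(i32, f32)>>) -> !llvm.ptr<f32>
```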

Depends On D116758

Reviewed By: wsmoses

Differential Revision: https://reviews.llvm.org/D116759
2022-01-07 09:56:05 +01:00
William S. Moses
a6a583dae4 [MLIR] Move AtomicRMW into MemRef dialect and enum into Arith
Per the discussion in https://reviews.llvm.org/D116345 it makes sense
to move AtomicRMWOp out of the standard dialect. This was accentuated by the
need to add a folder involving memref::cast. The only dialect
that would permit this is the memref dialect (keeping the op in the standard dialect
or moving it to the arithmetic dialect would require those dialects to have a
dependency on the memref dialect, which breaks linking).

As the AtomicRMWKind enum is used throughout, this has been moved to Arith.
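
For reference, a sketch of the relocated non-region op:

```
%res = memref.atomic_rmw addf %val, %buf[%i] : (f32, memref<10xf32>) -> f32
```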

Reviewed By: Mogball

Differential Revision: https://reviews.llvm.org/D116392
2021-12-30 14:31:33 -05:00
MaheshRavishankar
7df7586a0b [mlir][MemRef] Deprecate unspecified trailing offset, size, and strides semantics of OffsetSizeAndStrideOpInterface.
The semantics of the ops that implement the
`OffsetSizeAndStrideOpInterface` are that if the number of offsets,
sizes, or strides is less than the rank of the source, then some
default values are filled in along the trailing dimensions (0 for
offsets, the source dimension for sizes, and 1 for strides). This is
confusing, especially with rank-reducing semantics. The immediate issue
here is that the methods of `OffsetSizeAndStrideOpInterface` assume that
the number of values is the same as the source rank. This causes
out-of-bounds errors.

So this simplifies the specification of `OffsetSizeAndStrideOpInterface`
by making it invalid to specify a number of offsets/sizes/strides not
equal to the source rank.
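
For instance (a sketch), a rank-2 subview now has to spell out all three
lists at full rank:

```
%s = memref.subview %m[0, 0] [4, 4] [1, 1]
    : memref<8x8xf32> to memref<4x4xf32, strided<[8, 1]>>
```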

Differential Revision: https://reviews.llvm.org/D115677
2021-12-29 11:18:29 -08:00
Alexander Belyaev
15f8f3e20a [mlir] Split std.rank into tensor.rank and memref.rank.
Move `std.rank` similarly to how `std.dim` was moved to TensorOps and MemRefOps.
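
For instance (a sketch of the two resulting ops):

```
%r0 = memref.rank %m : memref<*xf32>
%r1 = tensor.rank %t : tensor<*xf32>
```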

Differential Revision: https://reviews.llvm.org/D115665
2021-12-14 10:15:55 +01:00
River Riddle
015192c634 [mlir:DialectConversion] Restructure how argument/target materializations get invoked
The current implementation invokes materializations
whenever an input operand does not have a mapping for the
desired type, i.e., it requires materialization at the earliest possible
point. This conflicts with the goal of dialect conversion (and also the
current documentation), which states that a materialization is only
required if it is supposed to persist after the
conversion process has finished.

This revision refactors this such that whenever a target
materialization "might" be necessary, we insert an
unrealized_conversion_cast to act as a temporary materialization.
This allows for deferring the invocation of the user
materialization hooks until the end of the conversion process,
where we have a better sense of whether they are
actually necessary. This has several benefits:

* In some cases a target materialization hook is no longer
  necessary. When performing a full conversion, there are some
  situations where a temporary materialization is needed. Moving
  forward, these users won't need to provide any target
  materializations, as the temporary materializations do not require
  the user to provide materialization hooks.

* getRemappedValue can now handle values that haven't been
  converted yet. Before this commit, it wasn't well supported to get
  the remapped value of a value that hadn't been converted yet (making
  it difficult or impossible to convert multiple operations in many
  situations). This commit updates getRemappedValue to properly
  handle this case by inserting temporary materializations when
  necessary.

Another code-health benefit is that with this change we
can move a majority of the complexity related to materializations
to the end of the conversion process, instead of handling it ad hoc
while the conversion is happening.

Differential Revision: https://reviews.llvm.org/D111620
2021-10-27 02:09:04 +00:00
Mogball
a54f4eae0e [MLIR] Replace std ops with arith dialect ops
Precursor: https://reviews.llvm.org/D110200

Removed redundant ops from the standard dialect that were moved to the
`arith` or `math` dialects.

Renamed all instances of operations in the codebase and in tests.
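
For example (a sketch of a typical rename):

```
// Before: %0 = addi %a, %b : i32
%0 = arith.addi %a, %b : i32
```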

Reviewed By: rriddle, jpienaar

Differential Revision: https://reviews.llvm.org/D110797
2021-10-13 03:07:03 +00:00
Eugene Zhulenev
8276ac13e9 [mlir] Add alignment attribute to memref.global
Revived https://reviews.llvm.org/D102435

Add an alignment attribute to `memref.global` and propagate it to the LLVM global in the memref->llvm lowering.
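
For instance (a sketch, with a hypothetical alignment value):

```
memref.global "private" @rhs : memref<4xf32> = dense<0.0> {alignment = 64 : i64}
```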

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D111309
2021-10-07 06:21:57 -07:00
William S. Moses
8c2ff7b69e [MLIR] Correct linkage of lowered globalop
LLVM considers a global variable marked as external to be defined within the module if it has an initializer (including an undef one). Other external globals are considered to be defined externally and imported into the current translation unit. The lowering of MLIR global ops did not properly propagate undefined initializers, resulting in a global that was expected to be defined within the current TU not actually being defined.

Differential Revision: https://reviews.llvm.org/D108252
2021-08-18 11:09:43 -04:00
River Riddle
f8479d9de5 [mlir] Set the namespace of the BuiltinDialect to 'builtin'
Historically the builtin dialect has had an empty namespace. This has unfortunately created a very awkward situation, where many utilities either have to special case the empty namespace, or just don't work at all right now. This revision adds a namespace to the builtin dialect, and starts to cleanup some of the utilities to no longer handle empty namespaces. For now, the assembly form of builtin operations does not require the `builtin.` prefix. (This should likely be re-evaluated though)

Differential Revision: https://reviews.llvm.org/D105149
2021-07-28 21:00:10 +00:00
Yi Zhang
381c3b9299 Dynamic shape support for memref reassociation reshape ops
Only memrefs with an identity layout map are supported for now.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D106180
2021-07-19 15:14:36 -07:00
Alex Zinenko
881dc34f73 [mlir] replace llvm.mlir.cast with unrealized_conversion_cast
The dialect-specific cast between builtin (ex-standard) types and LLVM
dialect types was introduced a long time before the built-in support for
unrealized_conversion_cast. It has a similar purpose, but is restricted
to compatible builtin and LLVM dialect types, which may hamper
progressive lowering and composition with types from other dialects.
Replace llvm.mlir.cast with unrealized_conversion_cast, and drop the
operation that became unnecessary.

Also make unrealized_conversion_cast legal by default in
LLVMConversionTarget, as the majority of conversions using it are partial
conversions that actually want the casts to persist in the IR. The
standard-to-llvm conversion, which is still expected to run last, cleans
up the remaining casts.
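
For instance (a sketch of the replacement):

```
// Before: %1 = llvm.mlir.cast %0 : index to i64
%1 = builtin.unrealized_conversion_cast %0 : index to i64
```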

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D105880
2021-07-16 15:14:09 +02:00
Alexander Belyaev
46ef86b5d8 [mlir] Move linalg::Expand/CollapseShapeOp to memref dialect.
RFC: https://llvm.discourse.group/t/rfc-reshape-ops-restructuring/3310

Differential Revision: https://reviews.llvm.org/D106141
2021-07-16 13:32:17 +02:00
Alex Zinenko
75e5f0aac9 [mlir] factor memref-to-llvm lowering out of std-to-llvm
After the MemRef dialect was split out of the Standard dialect, the
conversion to the LLVM dialect remained a huge monolithic pass.
This is undesirable for the same complexity management reasons as having
a huge Standard dialect itself, and is even more confusing given the
existence of a separate dialect. Extract the conversion of the MemRef
dialect operations to LLVM into a separate library and a separate
conversion pass.
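
For instance, the extracted conversion can now be driven on its own
(hypothetical invocation):

```
$ mlir-opt --convert-memref-to-llvm input.mlir
```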

Reviewed By: herhut, silvas

Differential Revision: https://reviews.llvm.org/D105625
2021-07-09 14:49:52 +02:00