The existing implementation of the LLVM type converter for LLVM structs
containing incompatible types attempted to change the identifier of the
struct in case of a name clash post-conversion (all identified structs
get different names post-conversion, since one cannot change the body of
a struct once it has been initialized). Beyond the trivial error of not
updating the renaming counter, this approach was broken for recursive
structs, which cannot be made aware of the renaming and would instead
use the pre-existing struct with the clashing name.
For example, given
`!llvm.struct<"_Converted.foo", (struct<"_Converted.foo">, f32)>`
the following type
`!llvm.struct<"foo", (struct<"foo", index>)>`
would incorrectly convert to
```
!llvm.struct<"_Converted_1.foo",
(struct<"_Converted.foo",
(struct<"_Converted.foo">, f32)>)>
```
Remove this incorrect renaming and simply refuse to convert types if the
conversion would lead to identifier clashes between structs with
different bodies. Document the expectation that such generated names are
reserved and must not be present in the input IR of the converter. If we
ever actually need to handle such cases, this can be achieved by
temporarily renaming structs that use reserved identifiers to unreserved
ones, and back, in pre/post-processing passes that do _not_ use the type
conversion infra.
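For illustration, a sketch of input IR that now gets rejected rather
than renamed (the alias name is hypothetical; the reserved `_Converted.`
prefix is the one from the example above):
```
// The reserved "_Converted." prefix appears in the input IR with a body
// that differs from what converting "foo" would produce, so the
// converter refuses the conversion instead of renaming.
!clashing = !llvm.struct<"_Converted.foo", (i32)>
```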
This commit removes support for typed pointers from the LLVM dialect.
Typed pointers have been deprecated for a while, and this removal was
announced in a PSA:
https://discourse.llvm.org/t/psa-removal-of-typed-pointers-from-the-llvm-dialect/74502
This change includes:
- Changing the `LLVMPointerType`
- Removing remaining usages of the builders and the now removed element
type
- Fixing assembly formats that require fully qualified pointer types
- Updating ODS pointer constraints
Printing strings within integration tests is currently annoyingly
verbose, and the boilerplate can't be tucked into shared helpers, as the
types depend on the length of the string:
```
llvm.mlir.global internal constant @hello_world("Hello, World!\00")

func.func @entry() {
  %0 = llvm.mlir.addressof @hello_world : !llvm.ptr<array<14 x i8>>
  %1 = llvm.mlir.constant(0 : index) : i64
  %2 = llvm.getelementptr %0[%1, %1]
    : (!llvm.ptr<array<14 x i8>>, i64, i64) -> !llvm.ptr<i8>
  llvm.call @printCString(%2) : (!llvm.ptr<i8>) -> ()
  return
}
```
So this patch adds a simple extension to `vector.print` to simplify
this:
```
func.func @entry() {
  // Print a vector of characters ;)
  vector.print str "Hello, World!"
  return
}
```
Most of the logic for this is now shared with `cf.assert`, which already
does something similar.
Depends on #68694
This revision replaces the LLVM dialect NullOp by the recently
introduced ZeroOp. The ZeroOp is more generic in the sense that it
represents zero values of any LLVM type rather than null pointers only.
This is a follow-up to https://github.com/llvm/llvm-project/pull/65508
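For illustration, a minimal sketch of the replacement (types chosen
arbitrarily):
```
// Before: a dedicated op producing null pointers only.
%null = llvm.mlir.null : !llvm.ptr

// After: llvm.mlir.zero covers null pointers...
%zero_ptr = llvm.mlir.zero : !llvm.ptr
// ...as well as zero values of other LLVM-compatible types.
%zero_int = llvm.mlir.zero : i64
```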
This patch changes vector type conversion to return failure on n-D
scalable vector types instead of asserting.
This is an alternative approach to #65261 that aims to enable lowering
of Vector ops directly to ArmSME intrinsics where possible, and seems
more consistent with other type conversions. The assert is trivial to
hit at the moment, and it could be misread as implying that n-D scalable
vector types are a bug, when they are valid types in the Vector dialect.
By returning failure, the conversion generally fails more gracefully,
particularly in release builds or other builds where assertions are
disabled.
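For example, a signature like the following uses a type that is valid in
the Vector dialect but has no LLVM counterpart; converting it now yields
a graceful failure instead of tripping an assert (illustrative sketch):
```
// 2-D scalable vector: legal in the Vector dialect, not convertible to
// an LLVM type, so type conversion returns failure for it.
func.func @f(%arg0: vector<[4]x[4]xf32>) {
  return
}
```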
This vector keeps track of recursive types across the recursive
invocations of `convertType()`. However, it is only useful in a few
specific cases, in which the dedicated conversion callbacks can handle
this stack privately.
This allows removing a mutable member of the type converter.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D158351
ConversionPatterns do not (and should not) modify the type converter that they are using.
* Make `ConversionPattern::typeConverter` const.
* Make member functions of the `LLVMTypeConverter` const.
* Conversion patterns take a const type converter.
* Various helper functions (that are called from patterns) now also take a const type converter.
Differential Revision: https://reviews.llvm.org/D157601
This revision removes the createIndexConstant method, which implicitly
created constants of the type returned by getIndexType, and updates all
uses to the more explicit createIndexAttrConstant, which requires an
explicit Type parameter.
This is an NFC step towards disentangling index type conversion in LLVM
lowering.
Selecting which index type to use requires finer granularity than the
existing implementations, which all rely on pass-level flags and end up
with mismatches, especially on GPUs with multiple address spaces of
different capacities.
This revision also includes an NFC fix to MemRefToLLVM.cpp that prevents a crash in cases where
an integer memory space cannot be derived for a MemRef.
Differential Revision: https://reviews.llvm.org/D156854
When converting/lowering to LLVM, MLIR does not support n-D scalable
vectors at the moment. This patch simply clarifies an assert that's
meant to catch such unsupported cases.
Note that we may want to relax this condition in the future.
Differential Revision: https://reviews.llvm.org/D154302
At the moment, only the trailing dimensions in the vector type can be
scalable, i.e. this is supported:
vector<2x[4]xf32>
and this is not allowed:
vector<[2]x4xf32>
This patch extends the vector type so that arbitrary dimensions can be
scalable. To this end, an array of bool values is added to every vector
type to denote whether the corresponding dimensions are scalable or not.
For example, for this vector:
vector<[2]x[3]x4xf32>
the following array would be created:
{true, true, false}.
Additionally, the current syntax:
vector<[2x3]x4xf32>
is replaced with:
vector<[2]x[3]x4xf32>
This is primarily to simplify parsing (this way, the parser can easily
process one dimension at a time rather than e.g. tracking whether
"scalable block" has been entered/left).
NOTE: The `isScalableDim` parameter of `VectorType` (introduced in this
patch) makes `numScalableDims` redundant. For the time being,
`numScalableDims` is preserved to facilitate the transition between the
two parameters. `numScalableDims` will be removed in one of the
subsequent patches.
This change is a part of a larger effort to enable scalable
vectorisation in Linalg. See this RFC for more context:
* https://discourse.llvm.org/t/rfc-scalable-vectorisation-in-linalg/
Differential Revision: https://reviews.llvm.org/D153372
LLVM currently doesn't have any types for 8-bit floats, and existing
8-bit float APIs (for instance, the AMDGCN intrinsics) take such floats
as (packed) bytes, so translate the MLIR 8-bit float types to i8 during
LLVM lowering.
In order to not special-case arith.constant for bitcasting constants
to their integer form, amend the MLIR-to-LLVM translator to turn 8-bit
float constants into i8 constants with the same underlying bits (by use
of APFloat's bitcast method).
This change can be reverted once LLVM has 8-bit float types.
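As a sketch of the intended translation on a constant (format and value
chosen arbitrarily; 0x38 is the f8E5M2 bit pattern of 0.5):
```
// LLVM-dialect constant carrying an 8-bit float attribute with an i8
// result type; the translator bitcasts it via APFloat and emits the
// i8 constant 56 (0x38) in LLVM IR.
%c = llvm.mlir.constant(0.5 : f8E5M2) : i8
```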
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D153160
This commit changes intrinsics that have immarg parameter attributes to
model these parameters as attributes, instead of operands. Using
operands only works if the operand is defined by an
`llvm.mlir.constant`; otherwise the exported LLVM IR is invalid.
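As a sketch (assuming `llvm.intr.memcpy` and its `isVolatile` immarg are
among the affected intrinsics):
```
// Before: the immarg passed as an operand, which only exports correctly
// when it is defined by an llvm.mlir.constant.
%volatile = llvm.mlir.constant(false) : i1
"llvm.intr.memcpy"(%dst, %src, %len, %volatile)
  : (!llvm.ptr, !llvm.ptr, i64, i1) -> ()

// After: the immarg modeled as an attribute.
"llvm.intr.memcpy"(%dst, %src, %len) {isVolatile = false}
  : (!llvm.ptr, !llvm.ptr, i64) -> ()
```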
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D151692
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change continues the migration from the methods to the
corresponding free function calls, which have been decided on as the
more consistent style.
Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.
Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443
Implementation:
This patch updates all remaining uses of the deprecated functionality in
mlir/. This was done with clang-tidy as described below and further
modifications to GPUBase.td and OpenMPOpsInterfaces.td.
Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
additional check:
https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
them to a pure state.
```
ninja -C $BUILD_DIR clang-tidy
run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
-header-filter=mlir/ mlir/* -fix
rm -rf $BUILD_DIR/tools/mlir/**/*.inc
```
Differential Revision: https://reviews.llvm.org/D151542
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration from the methods to the corresponding
free function calls, which have been decided on as the more consistent
style.
Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.
Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.
Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443
Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so I waste less time if it's
reverted, and so that the first mass change serves as a clearer example
to other teams that will need to follow similar steps.
Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
additional check:
https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
them to a pure state.
4. Some changes have been deleted for the following reasons:
- Some files had a variable also named cast
- Some files had not included a header file that defines the cast
functions
- Some files are definitions of the classes that have the casting
methods, so the code still refers to the method; switching to the free
function there would require adding a prefix or removing the method
declaration at the same time
```
ninja -C $BUILD_DIR clang-tidy
run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
-header-filter=mlir/ mlir/* -fix
rm -rf $BUILD_DIR/tools/mlir/**/*.inc
git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
mlir/lib/**/IR/\
mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
mlir/test/lib/Dialect/Test/TestTypes.cpp\
mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
mlir/test/lib/Dialect/Test/TestAttributes.cpp\
mlir/unittests/TableGen/EnumsGenTest.cpp\
mlir/test/python/lib/PythonTestCAPI.cpp\
mlir/include/mlir/IR/
```
Differential Revision: https://reviews.llvm.org/D150123
This patch pushes the computation of the start address of a memref into
one place (a method on MemRefDescriptor).
This allows all the (indirect) users of this method to produce the start
address in the same way.
Thanks to this change, we expose more CSE opportunities, and thanks to
that, the backend is able to properly find the `llvm.assume` expression
related to the base address, as demonstrated in the added test.
Differential Revision: https://reviews.llvm.org/D148947
Conversions to the LLVM dialect have an option to use the "bare pointer"
calling convention that converts memref types differently than the
default convention. It has crept into the conversion of operations that
are not related to calls but do require multiresult-to-struct packing.
Use a similar mechanism for the latter without using the calling
convention.
Reviewed By: gysit
Differential Revision: https://reviews.llvm.org/D148086
Currently, the use of the bare pointer calling convention is controlled
globally through an option on the `LLVMTypeConverter`. To allow more
fine-grained control, use an attribute on a function to drive the
calling convention to use.
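A sketch of the intended usage; the `llvm.bareptr` attribute name is an
assumption here, not something confirmed by this summary:
```
// Opt a single function into the bare-pointer convention: its memref
// argument lowers to a plain pointer instead of a full descriptor.
func.func @callee(%arg0: memref<?xf32>) attributes {llvm.bareptr} {
  return
}
```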
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D147494
Replace references to enumerate results with either result_pairs
(reference wrapper type) or structured bindings. I did not use
structured bindings everywhere as it wasn't clear to me it would
improve readability.
This is in preparation for the switch to zip semantics, which won't
support non-const lvalue references to elements:
https://reviews.llvm.org/D144503.
I chose to use values instead of const lvalue-refs because MLIR is
biased towards avoiding `const` local variables. This won't degrade
performance because currently `result_pair` is cheap to copy (size_t
+ iterator), and in the future, the enumerator iterator dereference
will return temporaries anyway.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D146006
Address space casts are present in common MLIR targets (LLVM, SPIRV).
Some planned rewrites (such as one of the potential fixes to the fact
that the AMDGPU backend requires alloca() to live in address space 5 /
the GPU private memory space) may require such casts to be inserted
into MLIR code, where those address spaces could be represented by
arbitrary memory space attributes.
Therefore, we define memref.memory_space_cast and its lowerings.
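For reference, a sketch of the new op in use (the address-space number
is illustrative):
```
// Cast a memref out of AMDGPU's stack address space (5) into the
// default space; this lowers to an LLVM addrspacecast.
%cast = memref.memory_space_cast %alloca
    : memref<16xf32, 5> to memref<16xf32>
```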
Depends on D141293
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D141148
The code for unranked memref descriptors assumed that
sizeof(!llvm.ptr) == sizeof(!llvm.ptr<N>) for all address spaces N.
This is not always true (e.g., the AMDGPU compiler backend has
sizeof(!llvm.ptr) = 64 bits but sizeof(!llvm.ptr<5>) = 32 bits, where
address space 5 is used for stack allocations). While this is merely
an overallocation in the case where a non-0 address space has pointers
smaller than the default, the existing code could cause out-of-bounds
memory accesses when sizeof(!llvm.ptr<N>) > sizeof(!llvm.ptr).
So, add an address spaces parameter to computeSizes in order to
partially resolve this class of bugs. Note that the LLVM data layout
in the conversion passes is currently set to "" and not constructed
from the MLIR data layout or some other source, but this could change
in the future.
Depends on D142159
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D141293
Remapping memory spaces is often needed in type conversions, most often
when going to LLVM or to/from SPIR-V (a future commit), and it is
possible that such remappings will become more common in the future as
dialects take advantage of the more generic memory space
infrastructure.
Currently, memory space remappings are handled by running a
special-purpose conversion pass before the main conversion that changes
the address space attributes. In this commit, this approach is replaced
by adding a notion of type attribute conversions to the TypeConverter,
which is then used to convert memory space attributes. Then, we use this
infrastructure throughout the *ToLLVM conversions.
This has the advantage of loosening the requirements on the inputs to
those passes from "all address spaces must be integers" to "all memory
spaces must be convertible to integer spaces", a looser requirement
that reduces the coupling between portions of MLIR.
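For example, a memref like the following (the attribute is illustrative)
can now flow into the *ToLLVM passes directly, as long as its memory
space attribute is convertible to an integer space:
```
// A symbolic memory space attribute instead of a raw integer; a type
// attribute conversion maps it to the target's integer address space.
%m = memref.alloc() : memref<8xf32, #gpu.address_space<workgroup>>
```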
On top of that, this change leads to the removal of most of the calls
to getMemorySpaceAsInt(), bringing us closer to removing it.
(A rework of the SPIR-V conversions to use this new system will be in
a followup commit.)
As a note, one long-term motivation for this change is that I would
eventually like to add an allocaMemorySpace key to MLIR data layouts
and then call getMemRefAddressSpace(allocaMemorySpace) in the
relevant *ToLLVM passes in order to ensure all alloca()s, whether
incoming or produced during the LLVM lowering, have the correct address
space for a given target.
I expect that the type attribute conversion system may be useful in
other contexts.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D142159
Part of https://discourse.llvm.org/t/rfc-switching-the-llvm-dialect-and-dialect-lowerings-to-opaque-pointers/68179
FuncToLLVM contains some logic working with memrefs and their lowerings, creating pointer types, loads, and allocas in the process. This patch ports this code to be compatible with opaque pointers and adds a pass option to enable the use of opaque pointers within the pass.
For the migration effort, the tests have been rewritten to use opaque pointers, with dedicated test files for typed pointer support.
Differential Revision: https://reviews.llvm.org/D143608
This is the first patch in a series of patches part of this RFC: https://discourse.llvm.org/t/rfc-switching-the-llvm-dialect-and-dialect-lowerings-to-opaque-pointers/68179
This patch adds the ability to lower the memref dialect to the LLVM Dialect with the use of opaque pointers instead of typed pointers. The latter are being phased out of LLVM and this patch is part of an effort to phase them out of MLIR as well. To do this, we'll need to support both typed and opaque pointers in lowering passes, to allow downstream projects to change without breakage.
The gist of changes required to change a conversion pass are:
* Change any `LLVM::LLVMPointerType::get` calls to NOT use an element type if opaque pointers are to be used.
* Use the `build` method of `llvm.load` with the explicit result type. Since the pointer no longer has an element type, the result type has to be specified explicitly.
* Use the `build` method of `llvm.getelementptr` with the explicit `basePtrType`. Ditto to the above: we now have to specify the element type so that GEP can do its indexing calculations.
* Use the `build` method of `llvm.alloca` with the explicit `elementType`. Ditto to the above: alloca needs the element type to know how many bytes to allocate.
* Get rid of any `llvm.bitcast`s
* Adapt the tests to the above. Note that `llvm.store` changes syntax as well when using opaque pointers
I'd like to note that the three `build` method changes work for both opaque and typed pointers, so unconditionally using the explicit element type form is always correct, as sketched below.
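A minimal sketch of the resulting opaque-pointer IR, where the element
type now lives on the operations rather than on the pointer type:
```
// alloca carries the element type explicitly.
%p = llvm.alloca %size x f32 : (i64) -> !llvm.ptr
// getelementptr states its element type after the trailing comma.
%gep = llvm.getelementptr %p[%idx] : (!llvm.ptr, i64) -> !llvm.ptr, f32
// load states its result type; store names both value and pointer types.
%v = llvm.load %gep : !llvm.ptr -> f32
llvm.store %v, %p : f32, !llvm.ptr
```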
For the testsuite, a practical approach suggested by @ftynse was taken: I created a separate test file for testing the typed pointer lowering of ops. This mostly comes down to checking that bitcasts have been created at the appropriate places, since these are required for typed pointer support.
Differential Revision: https://reviews.llvm.org/D143268
The new function is a wrapper around the regular `getStridesAndOffset`
that offers a more compact way (as in writing less code) of getting the
relevant information.
This method is intended to be used only when it is known that the
LogicalResult of the regular `getStridesAndOffset` must be "succeeded".
This wrapper will assert on that.
Differential Revision: https://reviews.llvm.org/D139529
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
Improve type conversion error propagation/failure during LLVM lowering.
BEFORE
```
llvm-mlir/mlir/lib/Conversion/LLVMCommon/TypeConverter.cpp:304: SmallVector<mlir::Type, 5> mlir::LLVMTypeConverter::getMemRefDescriptorFields(mlir::MemRefType, bool): Assertion `isStrided(type) && "Non-strided layout maps must have been normalized away"' failed.
PLEASE submit a bug report to https://bugs.llvm.org/ and include the crash backtrace.
Stack dump:
...
```
AFTER
```
<unknown>:0: error: integer overflow during size computation
<unknown>:0: error: Conversion to strided form failed either due to non-strided layout maps (which should have been normalized away) or other reasons
<unknown>:0: error: failed to legalize operation 'gpu.func' that was explicitly marked illegal
<unknown>:0: note: see current operation:
"gpu.func"() ( {
...
```
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D139072
The first result of the extract_strided_metadata operation is a MemRef,
not a naked pointer.
This patch fixes the lowering of this operation in MemRefToLLVM so that
we properly materialize the full MemRef structure and not just the
naked base pointer.
Differential Revision: https://reviews.llvm.org/D137364
This diff adds initial (partial) support for "fastmath" attributes for floating
point operations in the arithmetic dialect. The "fastmath" attributes are
implemented using a default-valued bit enum. The defined flags currently mirror
the fastmath flags in the LLVM dialect (and in LLVM itself). Extending the
set of flags (if necessary) is left as a future task.
In this diff:
- Definition of FastMathAttr as a custom attribute in the Arithmetic dialect
that inherits from the EnumAttr class.
- Definition of ArithFastMathInterface, which is an interface that is
implemented by operations that have an arith::fastmath attribute.
- Declaration of a default-valued fastmath attribute for unary and (some) binary
floating point operations in the Arithmetic dialect.
- Conversion code to lower arithmetic fastmath flags to LLVM fastmath flags.
NOT in this diff (but planned or currently in progress):
- Documentation of flag meanings
- Addition of FastMathAttr attributes to other dialects that might lower to the
Arithmetic dialect (e.g. Math and Complex)
- Folding/rewrite implementations that are enabled by fastmath flags
- Specification of fastmath values from Python bindings (pending other in-
progress diffs)
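For illustration, a sketch of the attribute in use (the inline
`fastmath<...>` pretty syntax is assumed here):
```
// "fast" enables all of the fastmath flags at once.
%sum = arith.addf %a, %b fastmath<fast> : f32
// Individual flags, mirroring LLVM's, can also be combined.
%prod = arith.mulf %a, %b fastmath<nnan,ninf> : f32
```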
Reviewed By: mehdi_amini, vzakhari
Differential Revision: https://reviews.llvm.org/D126305
`oneToOneRewrite` segfaulted for zero-result ops because a null type was
being passed to the op builders.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D132702
Currently vector.gather only supports reading memory into a 1-D result vector.
This patch extends it to support an n-D result vector with the indices, masks,
and passthroughs in n-D vectors.
As we are trying to vectorize tensor.extract with vector.gather
(https://github.com/iree-org/iree/issues/9198), it will need to gather the
elements into an n-D vector. Having vector.gather with n-D results allows us
to avoid flattening and reshaping at the vectorization stage. The backends
can then decide the optimal ways to lower the vector.gather op.
Note that this is different from n-D gathering, which is about reading n-D
memory with the n-D indices. The indices here are still only 1-D offsets on
the base.
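A sketch of the extended form (shapes and names illustrative):
```
// Gather into a 2-D result: the indices, mask, and pass-through are all
// 2-D, while the base memref and the single base offset stay 1-D.
%g = vector.gather %base[%c0][%indices], %mask, %pass_thru
    : memref<?xf32>, vector<2x3xi32>, vector<2x3xi1>, vector<2x3xf32>
      into vector<2x3xf32>
```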
Reviewed By: dcaballe
Differential Revision: https://reviews.llvm.org/D131905