Files
clang-p2996/flang/lib/Optimizer/OpenMP/MapInfoFinalization.cpp
agozillon e508bacce4 [Flang][OpenMP] Derived type explicit allocatable member mapping (#113557)
This PR is one of three in a PR stack; it is the primary change set, which
extends the current derived-type explicit member mapping support to handle
descriptor member mapping at arbitrary levels of nesting. The stack handles
this reasonably well in testing so far, but since derived types allow quite
complex mappings (in particular with allocatable derived types or arrays of
allocatable derived types) I imagine there will be hiccups, which I am more
than happy to address. There will also be further extensions to this work to
handle the implicit, automatic mapping of descriptor members in derived
types, along with a few other changes planned for the future (including some
ideas on optimizing things).

The changes in this PR primarily occur in the OpenMP lowering and the
OMPMapInfoFinalization pass.

In the OpenMP lowering, several utility functions were added or extended
to support the generation of the intermediate member mappings that are
currently required when the parent (or multiple parents) of a mapped
member is a descriptor type. We need to map the entirety of these types,
a "deep copy" for lack of a better term, mapping both the base address
and the descriptor: without the descriptor we lack the information needed
to access the member or attach the pointer's data to the pointer, and
without the base address we cannot map the chunk of data itself.
Currently we do not segment descriptor-based derived types as we do with
regular non-descriptor derived types; we effectively map them in their
entirety in all cases at the moment. I hope to address this at some point
in the future, as it adds a fair performance penalty to, for example,
nestings of allocatable derived types. Mapping all intermediate
descriptor members in a member's path only occurs if the member has an
allocatable or object parent in its symbol path, or the member itself is
allocatable. This happens in the createParentSymAndGenIntermediateMaps
function, which also generates the appropriate address for the
allocatable member within the derived type to use as the varPtr field of
the map (for intermediate allocatable maps and final allocatable
mappings). This is necessary because we cannot use the usual
Fortran::lower functionality, such as gatherDataOperandAddrAndBounds,
without causing issues later in the lowering due to extra allocas being
spawned, which seem to affect the pointer attachment (at least that is my
current assumption; it results in memory access errors on the device due
to incorrect map information generation). This is also similar to why we
do not use the MLIR value generated for this and instead use the original
symbol provided when mapping descriptor types external to derived types.
Hopefully this can be rectified in the future so the function can be
simplified and aligned more closely with the other type mappings. We also
use fir::CoordinateOp rather than the HLFIR version, as the HLFIR version
does not support the appropriate lowering to FIR necessary at the moment.
We also cannot use a single CoordinateOp (similarly to a single GEP),
because indexing through a descriptor operation (BoxType) in one step
causes issues later in the lowering; in either case we need access to the
intermediate descriptors, so individual CoordinateOps help here (although
being able to compress them into a smaller number of CoordinateOps may
simplify the IR and perhaps produce a better end result, something to
consider for the future).
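
As a rough outline (not the actual implementation; genMemberCoordinate,
isAllocatableOrPointer and genIntermediateDescriptorMap are hypothetical
stand-ins for the behaviour described above), the intermediate map
generation over a member path such as a%b%c looks approximately like:

```cpp
// Hypothetical sketch: walk the symbol path of a mapped member and emit an
// intermediate map for every descriptor (allocatable/pointer) parent found,
// so both the descriptor and the data it points to end up mapped. One
// fir::CoordinateOp is generated per component rather than a single
// combined one, since indexing through a fir.box in one step causes
// problems later in the lowering.
mlir::Value curAddr = parentBaseAddr; // address of the outermost parent
for (const Fortran::semantics::Symbol *sym : memberPath) {
  curAddr = genMemberCoordinate(builder, curAddr, *sym);
  if (isAllocatableOrPointer(*sym))
    intermediateMaps.push_back(
        genIntermediateDescriptorMap(builder, curAddr, *sym));
}
```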

The other large change area was the OMPMapInfoFinalization pass, which had
to be extended to support expanding box types (or multiple nestings of box
types) within derived types, as well as box-type derived types. This
requires expanding each BoxType mapping from one map into two, and then
modifying all of the existing member indices of the overarching parent
mapping to account for the newly added members. The base address of a box
type is currently considered a member of the box type at position 0, since
when lowered to LLVM-IR it is a pointer contained at that position in the
descriptor type; this means mapped children of an expanded descriptor type
must also incorporate the new member index at the correct location in
their own index lists. I believe there is a reasonable amount of comments
that should aid in understanding this, alongside the test alterations for
the pass.
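
To make the index bookkeeping concrete, the following is a small
self-contained sketch in plain C++ (mirroring what the pass's
adjustMemberIndices helper does, not the pass code itself) for a parent
whose members are a descriptor at index {1} and a member reached through
that descriptor's data at {1, 2}:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
  using Index = std::vector<int64_t>;
  // Member indices of the parent map before expansion.
  std::vector<Index> memberIndices = {{1}, {1, 2}};
  const std::size_t descriptorPos = 0; // position of the descriptor entry
  Index baseAddrIndex = memberIndices[descriptorPos];

  // Members reached through the descriptor gain a 0 at that depth, since
  // the base address becomes member 0 of the descriptor: {1, 2} -> {1, 0, 2}.
  for (Index &member : memberIndices)
    if (member.size() > baseAddrIndex.size() &&
        std::equal(baseAddrIndex.begin(), baseAddrIndex.end(),
                   member.begin()))
      member.insert(member.begin() + baseAddrIndex.size(), 0);

  // The new base address member is the descriptor's index plus a trailing
  // 0, inserted directly after the descriptor: {1, 0}.
  baseAddrIndex.push_back(0);
  memberIndices.insert(memberIndices.begin() + descriptorPos + 1,
                       baseAddrIndex);

  // Prints the adjusted indices: {1}, {1, 0}, {1, 0, 2}.
  for (const Index &idx : memberIndices) {
    for (int64_t v : idx)
      std::printf("%lld ", static_cast<long long>(v));
    std::printf("\n");
  }
}
```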

A subset of the changes also makes the utilities for packing and unpacking
the DenseIntElementsAttr containing the member indices shareable between
the lowering and OMPMapInfoFinalization. This required moving some
functions to the Lower/Support/Utils.h header and transforming the
lowering structure containing the member index data into something closer
to the version used in OMPMapInfoFinalization. There were also some other
attempts at tidying up the member index data generation in the lowering,
some of which required creating a comparison operator for the OpenMP ID
class so it can be used as a map key (it simply uses the symbol address
for the moment, as the ordering is not particularly important).
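
The comparison boils down to something like the following minimal sketch
(the class and member names here are assumptions rather than the actual
lowering code):

```cpp
// Hypothetical sketch: the lowering's ID type gains a strict weak ordering
// based purely on the address of the semantics symbol it wraps, which is
// sufficient to key a std::map even though the ordering itself carries no
// meaning.
struct ObjectId {
  const Fortran::semantics::Symbol *sym = nullptr; // assumed member name
  bool operator<(const ObjectId &other) const { return sym < other.sym; }
};
```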

Otherwise, I have added a set of new tests covering some of the mappings
supported by this PR (unfortunately, as there can be arbitrary nestings of
all shapes and types, it is not feasible to cover them all).
2024-11-16 12:28:37 +01:00

528 lines
25 KiB
C++

//===- MapInfoFinalization.cpp -----------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//===----------------------------------------------------------------------===//
/// \file
/// An OpenMP dialect related pass for FIR/HLFIR which performs some
/// pre-processing of MapInfoOp's after the module has been lowered to
/// finalize them.
///
/// For example, it expands MapInfoOp's containing descriptor related
/// types (fir::BoxType's) into multiple MapInfoOp's containing the parent
/// descriptor and pointer member components for individual mapping,
/// treating the descriptor type as a record type for later lowering in the
/// OpenMP dialect.
///
/// The pass also adds MapInfoOp's that are members of a parent object but are
/// not directly used in the body of a target region to its BlockArgument list
/// to maintain consistency across all MapInfoOp's tied to a region directly or
/// indirectly via a parent object.
//===----------------------------------------------------------------------===//
#include "flang/Optimizer/Builder/FIRBuilder.h"
#include "flang/Optimizer/Dialect/FIRType.h"
#include "flang/Optimizer/Dialect/Support/KindMapping.h"
#include "flang/Optimizer/OpenMP/Passes.h"
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Dialect/OpenMP/OpenMPDialect.h"
#include "mlir/IR/BuiltinDialect.h"
#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/Operation.h"
#include "mlir/IR/SymbolTable.h"
#include "mlir/Pass/Pass.h"
#include "mlir/Support/LLVM.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/Frontend/OpenMP/OMPConstants.h"
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <numeric>
namespace flangomp {
#define GEN_PASS_DEF_MAPINFOFINALIZATIONPASS
#include "flang/Optimizer/OpenMP/Passes.h.inc"
} // namespace flangomp
namespace {
class MapInfoFinalizationPass
: public flangomp::impl::MapInfoFinalizationPassBase<
MapInfoFinalizationPass> {
/// Helper class tracking a member's parent and its
/// placement in the parent's member list
struct ParentAndPlacement {
mlir::omp::MapInfoOp parent;
size_t index;
};
/// Tracks any intermediate function/subroutine local allocations we
/// generate for the descriptors of box type dummy arguments, so that
/// we can retrieve them for subsequent reuse within the function's
/// scope
std::map</*descriptor opaque pointer=*/void *,
/*corresponding local alloca=*/fir::AllocaOp>
localBoxAllocas;
/// getMemberUserList gathers all users of a particular MapInfoOp that are
/// other MapInfoOp's and places them into the mapMemberUsers list, which
/// records the map that the current argument MapInfoOp "op" is part of
/// alongside the placement of "op" in the recorded user's member list. The
/// intent of the generated list is to find all MapInfoOp's that may be
/// considered parents of the passed in "op" and in which it shows up in the
/// member list, alongside collecting the placement information of "op" in
/// its parent's member list.
void
getMemberUserList(mlir::omp::MapInfoOp op,
llvm::SmallVectorImpl<ParentAndPlacement> &mapMemberUsers) {
for (auto *user : op->getUsers())
if (auto map = mlir::dyn_cast_if_present<mlir::omp::MapInfoOp>(user))
for (auto [i, mapMember] : llvm::enumerate(map.getMembers()))
if (mapMember.getDefiningOp() == op)
mapMemberUsers.push_back({map, i});
}
void getAsIntegers(llvm::ArrayRef<mlir::Attribute> values,
llvm::SmallVectorImpl<int64_t> &ints) {
ints.reserve(values.size());
llvm::transform(values, std::back_inserter(ints),
[](mlir::Attribute value) {
return mlir::cast<mlir::IntegerAttr>(value).getInt();
});
}
/// This function expands a MapInfoOp's member indices back into a vector
/// so that they can be trivially modified, as the attribute type currently
/// used unfortunately does not have modifiable fields (it is generally
/// awkward to work with).
void getMemberIndicesAsVectors(
mlir::omp::MapInfoOp mapInfo,
llvm::SmallVectorImpl<llvm::SmallVector<int64_t>> &indices) {
indices.reserve(mapInfo.getMembersIndexAttr().getValue().size());
llvm::transform(mapInfo.getMembersIndexAttr().getValue(),
std::back_inserter(indices), [this](mlir::Attribute value) {
auto memberIndex = mlir::cast<mlir::ArrayAttr>(value);
llvm::SmallVector<int64_t> indexes;
getAsIntegers(memberIndex.getValue(), indexes);
return indexes;
});
}
/// When provided a MapInfoOp containing a descriptor type that
/// we must expand into multiple maps, this function extracts the
/// descriptor value and returns it. In certain cases we must
/// generate a new allocation to store the descriptor into, so that
/// the fir::BoxOffsetOp we use to access the descriptor data's
/// base address can be applied.
mlir::Value getDescriptorFromBoxMap(mlir::omp::MapInfoOp boxMap,
fir::FirOpBuilder &builder) {
mlir::Value descriptor = boxMap.getVarPtr();
if (!fir::isTypeWithDescriptor(boxMap.getVarType()))
if (auto addrOp = mlir::dyn_cast_if_present<fir::BoxAddrOp>(
boxMap.getVarPtr().getDefiningOp()))
descriptor = addrOp.getVal();
if (!mlir::isa<fir::BaseBoxType>(descriptor.getType()))
return descriptor;
// The fir::BoxOffsetOp only works with !fir.ref<!fir.box<...>> types, as
// allowing it to access non-reference box operations can cause some
// problematic SSA IR. However, in the case of assumed shapes the type
// is not a !fir.ref; in these cases, to retrieve the appropriate
// !fir.ref<!fir.box<...>> needed to access the data we must map, we
// perform an alloca, store the box to it, and then retrieve the data
// from the new alloca.
mlir::OpBuilder::InsertPoint insPt = builder.saveInsertionPoint();
mlir::Block *allocaBlock = builder.getAllocaBlock();
mlir::Location loc = boxMap->getLoc();
assert(allocaBlock && "No alloca block found for this top level op");
builder.setInsertionPointToStart(allocaBlock);
auto alloca = builder.create<fir::AllocaOp>(loc, descriptor.getType());
builder.restoreInsertionPoint(insPt);
builder.create<fir::StoreOp>(loc, descriptor, alloca);
return alloca;
}
/// Function that generates a FIR operation accessing the descriptor's
/// base address (BoxOffsetOp) and a MapInfoOp for it. The most
/// important thing to note is that we normally move the bounds from
/// the descriptor map onto the base address map.
mlir::omp::MapInfoOp genBaseAddrMap(mlir::Value descriptor,
mlir::OperandRange bounds,
int64_t mapType,
fir::FirOpBuilder &builder) {
mlir::Location loc = descriptor.getLoc();
mlir::Value baseAddrAddr = builder.create<fir::BoxOffsetOp>(
loc, descriptor, fir::BoxFieldAttr::base_addr);
// Member of the descriptor pointing at the allocated data
return builder.create<mlir::omp::MapInfoOp>(
loc, baseAddrAddr.getType(), descriptor,
mlir::TypeAttr::get(llvm::cast<mlir::omp::PointerLikeType>(
fir::unwrapRefType(baseAddrAddr.getType()))
.getElementType()),
baseAddrAddr, /*members=*/mlir::SmallVector<mlir::Value>{},
/*membersIndex=*/mlir::ArrayAttr{}, bounds,
builder.getIntegerAttr(builder.getIntegerType(64, false), mapType),
builder.getAttr<mlir::omp::VariableCaptureKindAttr>(
mlir::omp::VariableCaptureKind::ByRef),
/*name=*/builder.getStringAttr(""),
/*partial_map=*/builder.getBoolAttr(false));
}
/// This function adjusts the member indices vector to include a new
/// base address member. We take the position of the descriptor in
/// the member indices list, which is the index data that the base
/// address's index will be based on, as the base address is
/// a member of the descriptor. We must also alter other members
/// that are members of this descriptor to account for the addition
/// of the base address index.
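/// For example, if the descriptor sits at index {1} and a member reached
/// through it at {1, 2}, the base address is inserted as {1, 0} and the
/// other member becomes {1, 0, 2}.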
void adjustMemberIndices(
llvm::SmallVectorImpl<llvm::SmallVector<int64_t>> &memberIndices,
size_t memberIndex) {
llvm::SmallVector<int64_t> baseAddrIndex = memberIndices[memberIndex];
// If we find another member that is "derived/a member of" the descriptor
// that is not the descriptor itself, we must insert a 0 for the new base
// address we have just added for the descriptor into the list at the
// appropriate position to maintain correctness of the positional/index data
// for that member.
for (llvm::SmallVector<int64_t> &member : memberIndices)
if (member.size() > baseAddrIndex.size() &&
std::equal(baseAddrIndex.begin(), baseAddrIndex.end(),
member.begin()))
member.insert(std::next(member.begin(), baseAddrIndex.size()), 0);
// Add the base address index to the main base address member data
baseAddrIndex.push_back(0);
// Insert our newly created baseAddrIndex into the larger list of indices at
// the correct location.
memberIndices.insert(std::next(memberIndices.begin(), memberIndex + 1),
baseAddrIndex);
}
/// Adjusts the descriptor's map type. The main alteration that is done
/// currently is transforming the map type to `OMP_MAP_TO` where possible.
/// This is because we will always need to map the descriptor to device
/// (or at the very least it seems to be the case currently with the
/// current lowered kernel IR), as without the appropriate descriptor
/// information on the device there is a risk of the kernel IR
/// requesting various data that will not have been copied over but is
/// needed to perform things like indexing. This can cause segfaults and
/// memory access errors. However, we do not need this data mapped
/// back to the host from the device, as per the OpenMP spec we cannot alter
/// the data via resizing or deletion on the device. Discarding any
/// descriptor alterations via no map back is reasonable (and required
/// for certain segments of descriptor data like the type descriptor that are
/// global constants). This alteration is only inapplicable to `target exit`
/// and `target update` currently, and that's due to `target exit` not
/// allowing `to` mappings, and `target update` not allowing both `to` and
/// `from` simultaneously. We currently try to maintain the `implicit` flag
/// where necessary, although it does not seem strictly required.
unsigned long getDescriptorMapType(unsigned long mapTypeFlag,
mlir::Operation *target) {
if (llvm::isa_and_nonnull<mlir::omp::TargetExitDataOp,
mlir::omp::TargetUpdateOp>(target))
return mapTypeFlag;
bool hasImplicitMap =
(llvm::omp::OpenMPOffloadMappingFlags(mapTypeFlag) &
llvm::omp::OpenMPOffloadMappingFlags::OMP_MAP_IMPLICIT) ==
llvm::omp::OpenMPOffloadMappingFlags::OMP_MAP_IMPLICIT;
return llvm::to_underlying(
hasImplicitMap
? llvm::omp::OpenMPOffloadMappingFlags::OMP_MAP_TO |
llvm::omp::OpenMPOffloadMappingFlags::OMP_MAP_IMPLICIT
: llvm::omp::OpenMPOffloadMappingFlags::OMP_MAP_TO);
}
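/// Expands a MapInfoOp whose variable is (or is wrapped by) a descriptor
/// into a map of the descriptor itself plus a base address member map,
/// rewriting the member operands and member indices of the parent map (or
/// of the new descriptor map when it is itself the parent) so they account
/// for the newly inserted base address member. The original MapInfoOp is
/// replaced and erased.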
mlir::omp::MapInfoOp genDescriptorMemberMaps(mlir::omp::MapInfoOp op,
fir::FirOpBuilder &builder,
mlir::Operation *target) {
llvm::SmallVector<ParentAndPlacement> mapMemberUsers;
getMemberUserList(op, mapMemberUsers);
// TODO: map the addendum segment of the descriptor, similarly to the
// base address/data pointer member.
mlir::Value descriptor = getDescriptorFromBoxMap(op, builder);
auto baseAddr = genBaseAddrMap(descriptor, op.getBounds(),
op.getMapType().value_or(0), builder);
mlir::ArrayAttr newMembersAttr;
mlir::SmallVector<mlir::Value> newMembers;
llvm::SmallVector<llvm::SmallVector<int64_t>> memberIndices;
if (!mapMemberUsers.empty() || !op.getMembers().empty())
getMemberIndicesAsVectors(
!mapMemberUsers.empty() ? mapMemberUsers[0].parent : op,
memberIndices);
// If the operation that we are expanding with a descriptor has a user
// (parent), then we have to expand the parent's member indices to reflect
// the adjusted member indices for the base address insertion. However, if
// it does not then we are expanding a MapInfoOp without any pre-existing
// member information to now have one new member for the base address, or
// we are expanding a parent that is a descriptor and we have to adjust
// all of its members to reflect the insertion of the base address.
if (!mapMemberUsers.empty()) {
// Currently, there should only be one user per map when this pass
// is executed. Either a parent map, holding the current map in its
// member list, or a target operation that holds a map clause. This
// may change in the future if we aim to refactor the MLIR for map
// clauses to allow sharing of duplicate maps across target
// operations.
assert(mapMemberUsers.size() == 1 &&
"OMPMapInfoFinalization currently only supports single users of a "
"MapInfoOp");
ParentAndPlacement mapUser = mapMemberUsers[0];
adjustMemberIndices(memberIndices, mapUser.index);
llvm::SmallVector<mlir::Value> newMemberOps;
for (auto v : mapUser.parent.getMembers()) {
newMemberOps.push_back(v);
if (v == op)
newMemberOps.push_back(baseAddr);
}
mapUser.parent.getMembersMutable().assign(newMemberOps);
mapUser.parent.setMembersIndexAttr(
builder.create2DI64ArrayAttr(memberIndices));
} else {
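// With no parent map user, the new descriptor map is itself the parent:
// its base address becomes member {0}, and any pre-existing members
// (reached through the data the descriptor points at) are re-parented
// under it by prefixing their index paths with 0.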
newMembers.push_back(baseAddr);
if (!op.getMembers().empty()) {
for (auto &indices : memberIndices)
indices.insert(indices.begin(), 0);
memberIndices.insert(memberIndices.begin(), {0});
newMembersAttr = builder.create2DI64ArrayAttr(memberIndices);
newMembers.append(op.getMembers().begin(), op.getMembers().end());
} else {
llvm::SmallVector<llvm::SmallVector<int64_t>> memberIdx = {{0}};
newMembersAttr = builder.create2DI64ArrayAttr(memberIdx);
}
}
mlir::omp::MapInfoOp newDescParentMapOp =
builder.create<mlir::omp::MapInfoOp>(
op->getLoc(), op.getResult().getType(), descriptor,
mlir::TypeAttr::get(fir::unwrapRefType(descriptor.getType())),
/*varPtrPtr=*/mlir::Value{}, newMembers, newMembersAttr,
/*bounds=*/mlir::SmallVector<mlir::Value>{},
builder.getIntegerAttr(
builder.getIntegerType(64, false),
getDescriptorMapType(op.getMapType().value_or(0), target)),
op.getMapCaptureTypeAttr(), op.getNameAttr(),
/*partial_map=*/builder.getBoolAttr(false));
op.replaceAllUsesWith(newDescParentMapOp.getResult());
op->erase();
return newDescParentMapOp;
}
// We add all mapped record members not directly used in the target region
// to the block arguments in front of their parent and we place them into
// the map operands list for consistency.
//
// These indirect uses (via accesses to their parent) will still be
// mapped individually in most cases, and a parent mapping doesn't
// guarantee the parent will be mapped in its totality, partial
// mapping is common.
//
// For example:
// map(tofrom: x%y)
//
// Will generate a mapping for "x" (the parent) and "y" (the member).
// The parent "x" will not have its data mapped, but the member "y" will.
// However, we must still have the parent as a BlockArg and MapOperand
// in these cases, to maintain the correct uses within the region and
// to help track that the member is part of a larger object.
//
// In the case of:
// map(tofrom: x%y, x%z)
//
// The parent member becomes more critical, as we perform a partial
// structure mapping where we link the mapping of the members y
// and z together via the parent x. We do this at a kernel argument
// level in LLVM IR and not just MLIR, which is important to maintain
// similarity to Clang and for the runtime to do the correct thing.
// However, we still do not map the structure in its totality but
// rather we generate an un-sized "binding" map entry for it.
//
// In the case of:
// map(tofrom: x, x%y, x%z)
//
// We do actually map the entirety of "x", so the explicit mapping of
// x%y, x%z becomes unnecessary. It is redundant to write this from a
// Fortran OpenMP perspective (although it is legal), as even if the
// members were allocatables or pointers, we are mandated by the
// specification to map these (and any recursive components) in their
// entirety, which is different to the C++ equivalent, which requires
// explicit mapping of these segments.
void addImplicitMembersToTarget(mlir::omp::MapInfoOp op,
fir::FirOpBuilder &builder,
mlir::Operation *target) {
auto mapClauseOwner =
llvm::dyn_cast_if_present<mlir::omp::MapClauseOwningOpInterface>(
target);
// TargetDataOp is technically a MapClauseOwningOpInterface, so we
// do not need to explicitly check here for the extra
// use_device_addr/use_device_ptr cases.
if (!mapClauseOwner)
return;
auto addOperands = [&](mlir::MutableOperandRange &mutableOpRange,
mlir::Operation *directiveOp,
unsigned blockArgInsertIndex = 0) {
if (!llvm::is_contained(mutableOpRange.getAsOperandRange(),
op.getResult()))
return;
// There doesn't appear to be a simple way to convert MutableOperandRange
// to a vector currently, so we instead use a for_each to populate our
// vector.
llvm::SmallVector<mlir::Value> newMapOps;
newMapOps.reserve(mutableOpRange.size());
llvm::for_each(
mutableOpRange.getAsOperandRange(),
[&newMapOps](mlir::Value oper) { newMapOps.push_back(oper); });
for (auto mapMember : op.getMembers()) {
if (llvm::is_contained(mutableOpRange.getAsOperandRange(), mapMember))
continue;
newMapOps.push_back(mapMember);
if (directiveOp) {
directiveOp->getRegion(0).insertArgument(
blockArgInsertIndex, mapMember.getType(), mapMember.getLoc());
blockArgInsertIndex++;
}
}
mutableOpRange.assign(newMapOps);
};
auto argIface =
llvm::dyn_cast<mlir::omp::BlockArgOpenMPOpInterface>(target);
if (auto mapClauseOwner =
llvm::dyn_cast<mlir::omp::MapClauseOwningOpInterface>(target)) {
mlir::MutableOperandRange mapMutableOpRange =
mapClauseOwner.getMapVarsMutable();
unsigned blockArgInsertIndex =
argIface
? argIface.getMapBlockArgsStart() + argIface.numMapBlockArgs()
: 0;
addOperands(
mapMutableOpRange,
llvm::dyn_cast_or_null<mlir::omp::TargetOp>(argIface.getOperation()),
blockArgInsertIndex);
}
if (auto targetDataOp = llvm::dyn_cast<mlir::omp::TargetDataOp>(target)) {
mlir::MutableOperandRange useDevAddrMutableOpRange =
targetDataOp.getUseDeviceAddrVarsMutable();
addOperands(useDevAddrMutableOpRange, target,
argIface.getUseDeviceAddrBlockArgsStart() +
argIface.numUseDeviceAddrBlockArgs());
mlir::MutableOperandRange useDevPtrMutableOpRange =
targetDataOp.getUseDevicePtrVarsMutable();
addOperands(useDevPtrMutableOpRange, target,
argIface.getUseDevicePtrBlockArgsStart() +
argIface.numUseDevicePtrBlockArgs());
}
}
// We retrieve the first user that is a Target operation, of which
// there should only be one currently. Every MapInfoOp can be tied to
// at most one Target operation, and possibly to none.
// This may change in the future with IR cleanups/modifications,
// in which case this pass will need updating to support cases
// where a map can have more than one user and more than one of
// those users can be a Target operation. For now, we simply
// return the first target operation encountered, which may
// be on the parent MapInfoOp in the case of a member mapping.
// In that case, we traverse the MapInfoOp chain until we
// find the first TargetOp user.
mlir::Operation *getFirstTargetUser(mlir::omp::MapInfoOp mapOp) {
for (auto *user : mapOp->getUsers()) {
if (llvm::isa<mlir::omp::TargetOp, mlir::omp::TargetDataOp,
mlir::omp::TargetUpdateOp, mlir::omp::TargetExitDataOp,
mlir::omp::TargetEnterDataOp>(user))
return user;
if (auto mapUser = llvm::dyn_cast<mlir::omp::MapInfoOp>(user))
return getFirstTargetUser(mapUser);
}
return nullptr;
}
// This pass executes on omp::MapInfoOp's containing descriptor-based types
// (allocatables, pointers, assumed shape, etc.) and expands them into
// multiple omp::MapInfoOp's for each pointer member contained within the
// descriptor.
//
// From the perspective of the MLIR pass manager this runs on the top level
// operation (usually function) containing the MapInfoOp because this pass
// will mutate siblings of MapInfoOp.
void runOnOperation() override {
mlir::ModuleOp module =
mlir::dyn_cast_or_null<mlir::ModuleOp>(getOperation());
if (!module)
module = getOperation()->getParentOfType<mlir::ModuleOp>();
fir::KindMapping kindMap = fir::getKindMapping(module);
fir::FirOpBuilder builder{module, std::move(kindMap)};
// We wish to maintain some function level scope (currently just
// function-local variables used to load and store box variables into,
// so we can access their base address; a quirk of box_offset requires
// us to have an in-memory box, but Fortran in certain cases does not
// provide this) whilst not subjecting ourselves to the possibility of
// race conditions while this pass undergoes frequent re-iteration for
// the near future. So we loop over the functions in the module and then
// over the map.info ops inside of those.
getOperation()->walk([&](mlir::func::FuncOp func) {
// clear all local allocations we made for any boxes in any prior
// iterations from previous function scopes.
localBoxAllocas.clear();
func->walk([&](mlir::omp::MapInfoOp op) {
// TODO: Currently only supports a single user for the MapInfoOp. This
// is fine for the moment, as the Fortran frontend will generate a
// new MapInfoOp with at most one user currently. In the case of
// members of other objects, like derived types, the user would be the
// parent. In cases where it's a regular non-member map, the user would
// be the target operation it is being mapped by.
//
// However, when/if we optimise/cleanup the IR we will have to extend
// this pass to support multiple users, as we may wish to have a map
// be re-used by multiple users (e.g. across multiple targets that map
// the variable and have identical map properties).
assert(llvm::hasSingleElement(op->getUsers()) &&
"OMPMapInfoFinalization currently only supports single users "
"of a MapInfoOp");
if (fir::isTypeWithDescriptor(op.getVarType()) ||
mlir::isa_and_present<fir::BoxAddrOp>(
op.getVarPtr().getDefiningOp())) {
builder.setInsertionPoint(op);
mlir::Operation *targetUser = getFirstTargetUser(op);
assert(targetUser && "expected user of map operation was not found");
genDescriptorMemberMaps(op, builder, targetUser);
}
});
// Wait until after we have generated all of our maps to add them onto
// the target's block arguments, simplifying the process as there would be
// no need to avoid accidental duplicate additions.
func->walk([&](mlir::omp::MapInfoOp op) {
mlir::Operation *targetUser = getFirstTargetUser(op);
assert(targetUser && "expected user of map operation was not found");
addImplicitMembersToTarget(op, builder, targetUser);
});
});
}
};
} // namespace