Commit Graph

196 Commits

Author SHA1 Message Date
Quinn Dawkins
75f7295419 [mlir][Tensor] Fix unpack -> transpose folding pattern for padded unpacks (#90678)
Previously, if the producer tensor.unpack op had "unpadding" semantics,
the folding pattern would construct a destination that did not match
the result type of the transpose. Because both ops are DPS, we can
simply reuse the destination of the transpose.

Additionally cleans up a bunch of trailing whitespace in the test file.
2024-04-30 20:17:35 -04:00
Gaurav Shukla
97069a8619 [MLIR] Generalize expand_shape to take shape as explicit input (#90040)
This patch generalizes tensor.expand_shape and memref.expand_shape to
consume the output shape as a list of SSA values. This enables us to
implement generic reshape operations with dynamic shapes using
collapse_shape/expand_shape pairs.

The output_shape input to expand_shape follows the static/dynamic
representation that's also used in `tensor.extract_slice`.
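
As a rough sketch of the generalized form (value names and shapes are
made up here, not taken from the patch), a dynamic extent is passed as an
SSA value while static extents stay inline:
```
// %d0 is the dynamic extent of the first result dimension; 32 and 16 are
// static extents given inline, mirroring tensor.extract_slice's convention.
%expanded = tensor.expand_shape %src [[0, 1], [2]] output_shape [%d0, 32, 16]
    : tensor<?x16xf32> into tensor<?x32x16xf32>
```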

Differential Revision: https://reviews.llvm.org/D140821

---------

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
Co-authored-by: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
2024-04-30 09:28:35 -07:00
Mehdi Amini
8c0341df02 Revert "[MLIR] Generalize expand_shape to take shape as explicit input" (#89540)
Reverts llvm/llvm-project#69267

This broke some bots.
2024-04-21 14:33:48 +02:00
Gaurav Shukla
e095d978ba [MLIR] Generalize expand_shape to take shape as explicit input (#69267)
This patch generalizes tensor.expand_shape and memref.expand_shape to
consume the output shape as a list of SSA values. This enables us to
implement generic reshape operations with dynamic shapes using
collapse_shape/expand_shape pairs.

The output_shape input to expand_shape follows the static/dynamic
representation that's also used in `tensor.extract_slice`.

Differential Revision: https://reviews.llvm.org/D140821

Co-authored-by: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>
2024-04-21 07:37:02 -04:00
Matthias Springer
40dd3aa91d [mlir][Interfaces] Variable abstraction for ValueBoundsOpInterface (#87980)
This commit generalizes and cleans up the `ValueBoundsConstraintSet`
API. The API used to provide function overloads for comparing/computing
bounds of:
- index-typed SSA value
- dimension of shaped value
- affine map + operands

This commit removes all overloads. There is now a single entry point for
each `compare` variant and each `computeBound` variant. These functions
now take a `Variable`, which is internally represented as an affine map
and map operands.

This commit also adds support for computing bounds for an affine map +
operands. There was previously no public API for that.
2024-04-16 10:59:02 +02:00
Prashant Kumar
aa7ae1ba0b [mlir][tensor] Fold producer linalg transpose with consumer unpack and vice versa (#86795)

-- Adds folding of a producer linalg transpose op with a consumer unpack
op, and folding of a producer unpack op with a consumer transpose op.
-- Minor bug fixes w.r.t. the test cases.
2024-03-28 23:13:33 +05:30
Kazu Hirata
706c1302f9 [Dialect] Fix a warning
This patch fixes:

  mlir/lib/Dialect/Tensor/Transforms/MergeConsecutiveInsertExtractSlicePatterns.cpp:158:17:
  error: 'matchAndRewrite' overrides a member function but is not
  marked 'override' [-Werror,-Wsuggest-override]
2024-03-28 09:43:04 -07:00
Jerry Wu
f566b079f1 [MLIR] Add pattern to fold insert_slice of extract_slice (#86328)
Fold the `tensor.insert_slice` of a `tensor.extract_slice` into a single
`tensor.extract_slice` when the `insert_slice` simply re-expands unit
dims dropped by the `extract_slice`, as in the sketch below.
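
A hedged sketch of the kind of pair this targets (shapes and value names
are hypothetical): the extract_slice drops a unit dimension and the
insert_slice merely re-adds it, so the pair can be rewritten as a single
rank-preserving extract_slice.
```
%slice = tensor.extract_slice %src[0, 0, 0] [1, 16, 32] [1, 1, 1]
    : tensor<8x16x32xf32> to tensor<16x32xf32>
%out = tensor.insert_slice %slice into %dest[0, 0, 0] [1, 16, 32] [1, 1, 1]
    : tensor<16x32xf32> into tensor<1x16x32xf32>
// With the new pattern, %out becomes a single tensor.extract_slice of %src
// producing tensor<1x16x32xf32> directly.
```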
2024-03-28 11:18:47 -04:00
Justin Lebar
fab2bb8bfd Add llvm::min/max_element and use it in llvm/ and mlir/ directories. (#84678)
For some reason this was missing from STLExtras.
2024-03-10 20:00:13 -07:00
ian Bearman
067d2779fc [MLIR] Setting MemorySpace During Bufferization (#78484)
Collection of changes with the goal of being able to convert `encoding`
to `memorySpace` during bufferization:
- new API for encoder to allow implementation to select destination
memory space
- update existing bufferization implementations to support the new
interface
2024-02-08 16:59:37 +01:00
Hugo Trachino
65066c0277 [mlir] Use create instead of createOrFold for ConstantOp as folding has no effect (NFC) (#80129)
This aims to clean up confusing uses of
builder.createOrFold<ConstantOp>, since folding of constants has no
effect there.
2024-01-31 23:40:37 -08:00
Han-Chung Wang
ad3cda7a04 [mlir][tensor] Enhance SimplifyUnPackToCollapseShape for unit dim cases. (#79262) 2024-01-25 06:54:33 -08:00
Matthias Springer
5cc0f76d34 [mlir][IR] Add rewriter API for moving operations (#78988)
The pattern rewriter documentation states that "*all* IR mutations [...]
are required to be performed via the `PatternRewriter`." This commit
adds two functions that were missing from the rewriter API:
`moveOpBefore` and `moveOpAfter`.

After an operation was moved, the `notifyOperationInserted` callback is
triggered. This allows listeners such as the greedy pattern rewrite
driver to react to IR changes.

This commit narrows the discrepancy between the kinds of IR
modifications that can be performed and the kinds of IR modifications
that can be listened to.
2024-01-25 11:01:28 +01:00
Han-Chung Wang
f59eef6515 [mlir][tensor] Enhance SimplifyPackToExpandShape for unit dim cases. (#79247)
Progress on https://github.com/openxla/iree/issues/16181
2024-01-24 18:47:25 -08:00
Han-Chung Wang
2472c45ba3 [mlir][tensor] Enhance pack/unpack simplification for identity outer_dims_perm cases. (#77409)
They can be simplified to reshape ops if outer_dims_perm is an identity
permutation. The revision adds an `isIdentityPermutation` method to
IndexingUtils.
2024-01-10 08:30:34 -08:00
Prathamesh Tagore
113bce0c79 [mlir][tensor] Fold producer linalg transpose with consumer tensor pack (#75658)
Successor to https://github.com/llvm/llvm-project/pull/74206 

Partial fix to https://github.com/openxla/iree/issues/15367
2024-01-10 06:55:27 -08:00
Han-Chung Wang
76cb0bb7a4 [mlir][tensor] Add a pattern to simplify tensor.unpack to collapse shape (#76607) 2024-01-03 09:34:52 -08:00
Han-Chung Wang
78348b6915 [mlir][tensor] Improve tensor.pack simplication pattern. (#76606)
A tensor.pack op can be rewritten to a tensor.expand_shape op if the
packing only happens on the innermost dimension, as sketched below.
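
A minimal sketch of such a case, with made-up shapes and using the
expand_shape form from before the explicit output_shape change listed
earlier in this log:
```
// Packing a 1-D tensor on its innermost (and only) dimension with tile size 8...
%dest = tensor.empty() : tensor<32x8xf32>
%packed = tensor.pack %src inner_dims_pos = [0] inner_tiles = [8] into %dest
    : tensor<256xf32> -> tensor<32x8xf32>
// ...is equivalent to a simple expansion of that dimension:
%reshaped = tensor.expand_shape %src [[0, 1]]
    : tensor<256xf32> into tensor<32x8xf32>
```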

This also formats the lit checks better.
2024-01-02 09:34:24 -08:00
Han-Chung Wang
4b14205bc0 [mlir][tensor] Centralize pack/unpack related patterns. (#76603)
The revision moves pack/unpack-related patterns to
PackAndUnpackPatterns.cpp. This follows the convention used for other
tensor ops.

It also renames `populateSimplifyTensorPack` to
`populateSimplifyPackAndUnpackPatterns` and adds a TODO item for
tensor.unpack op.
2023-12-30 11:40:40 -08:00
Prathamesh Tagore
f397bdf5ae [mlir][tensor] Fold consumer linalg transpose with producer tensor pack (#74206)
Partial fix to https://github.com/openxla/iree/issues/15367
2023-12-13 14:26:19 -08:00
Quinn Dawkins
f310a5d2c1 [mlir][tensor] Add a tensor.concat operation (#72779)
This adds an operation for concatenating ranked tensors along a static
dimension, as well as a decomposition mirroring the existing lowering
from TOSA to Tensor. This offers a convergence point for "input"-like
dialects that include various lowerings for concatenation operations,
easing later analysis. In the future, this op can implement the
necessary interfaces for tiling, as well as potentially adding
conversions to some kind of linalg and/or memref counterpart.
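
A hedged sketch of the op with hypothetical shapes and value names:
concatenation of two ranked tensors along static dimension 0.
```
// Concatenate %a and %b along dimension 0 (3 + 5 = 8 rows).
%c = tensor.concat dim(0) %a, %b
    : (tensor<3x8xf32>, tensor<5x8xf32>) -> tensor<8x8xf32>
```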

This patch adds the op, the decomposition, and some basic
folding/canonicalization. Replacing lowerings with the op (such as the
TOSA lowering) will come as a follow up.

See
https://discourse.llvm.org/t/rfc-tensor-add-a-tensor-concatenate-operation/74858
2023-12-01 15:05:29 -05:00
Felix Schneider
6343ee7292 [mlir] Fix handling of "no rank reduction" case in two Patterns (#71293)
This patch fixes two checks where a `SmallBitVector` containing the
potentially dropped dims of a SubView/ExtractSlice operation was queried
via `empty()` instead of `none()`.
2023-11-10 08:20:51 +01:00
Matthias Springer
ff614a5729 [mlir][Interfaces] LISH: Add helpers for hyperrectangular subsets (#70628)
The majority of subset ops operate on hyperrectangular subsets. This
commit adds a new optional interface method
(`getAccessedHyperrectangularSlice`) that can be implemented by such
subset ops. If implemented, the other `operatesOn...` interface methods
of the `SubsetOpInterface` do not have to be implemented anymore.

The comparison logic for hyperrectangular subsets (is
disjoint/equivalent) is implemented with `ValueBoundsOpInterface`. This
makes the subset hoisting more powerful: simple cases where two
different SSA values always have the same runtime value can now be
supported.
2023-11-01 11:29:00 +09:00
Matthias Springer
1abd8d1a8d [mlir][Interfaces] Add SubsetOpInterface and SubsetExtractionOpInterface (#70617)
There is currently an op interface for subset insertion ops
(`SubsetInsertionOpInterface`), but not for subset extraction ops. This
commit adds `SubsetExtractionOpInterface` to `mlir/Interfaces`, as well
as a common dependent op interface: `SubsetOpInterface`.

- `SubsetOpInterface` is for ops that operate on tensor subsets. It
provides interface methods to check if two subset ops operate on
equivalent or disjoint subsets. Ops that implement this interface must
implement either `SubsetExtractionOpInterface` or
`SubsetInsertionOpInterface`.
- `SubsetExtractionOpInterface` is for ops that extract from a tensor at
a subset. E.g., `tensor.extract_slice`, `tensor.gather`,
`vector.transfer_read`. Currently implemented only on
`tensor.extract_slice`.
- `SubsetInsertionOpInterface` is for ops that insert into a destination
tensor at a subset. E.g., `tensor.insert_slice`,
`tensor.parallel_insert_slice`, `tensor.scatter`,
`vector.transfer_write`. Currently only implemented on
`tensor.insert_slice`, `tensor.parallel_insert_slice`.

Other changes:
- Rename `SubsetInsertionOpInterface.td` to `SubsetOpInterface.td`.
- Add helper functions to `ValueBoundsOpInterface.cpp` for checking
whether two slices are disjoint.

The new interfaces will be utilized by a new "loop-invariant subset
hoisting"
transformation. (This new transform is roughly
what `Linalg/Transforms/SubsetHoisting.cpp` is doing, but in a generic
and interface-driven way.)
2023-11-01 10:26:31 +09:00
Matthias Springer
a8d0c86174 [mlir][Interfaces][NFC] Move SubsetInsertionOpInterface to mlir/Interfaces (#70615)
`SubsetInsertionOpInterface` is an interface for ops that insert into a
destination tensor at a subset. It is currently used by the
bufferization framework to support efficient
`tensor.extract_slice/insert_slice` bufferization and to drive "empty
tensor elimination".

This commit moves the interface to `mlir/Interfaces`. This is in
preparation of adding a new "loop-invariant subset hoisting"
transformation to
`mlir/Transforms/Utils/LoopInvariantCodeMotionUtils.cpp`, which will
utilize `SubsetInsertionOpInterface`. (This new transform is roughly
what `Linalg/Transforms/SubsetHoisting.cpp` is doing, but in a generic
and interface-driven way.)
2023-10-30 13:42:44 +09:00
Matthias Springer
5558504374 [mlir][IR] Make OpOperand comparable (#70410)
Two `OpOperand`s are the same if they belong to the same owner and have
the same operand number. There are currently no comparison operators
defined on `OpOperand` and we work around this in multiple places by
comparing pointers.

Note: `OpOperand`s are stored in an op, so it is valid to compare their
pointers to determine if they are the same operand. E.g.,
`getOperandNumber` is also implemented via pointer arithmetic.
2023-10-27 15:51:45 +09:00
Matthias Springer
2e3c62b15d [mlir][tensor][NFC] Simplify SubsetInsertionOpInterface implementation (#69999)
`tensor.insert_slice` and `tensor.parallel_insert_slice` can share the
same implementation.
2023-10-25 08:45:39 +09:00
Matthias Springer
ea71d2d0fe [mlir][tensor][bufferize] Reshapes: Fix memory side effects and memory space (#68195)
* `tensor.collapse_shape` may bufferize to a memory read because the op
may have to reallocate the source buffer.
* `tensor.reshape` should not use `bufferization.clone` for
reallocation. This op has requirements w.r.t. the order of buffer
writes/reads. Use `memref.alloc` and `memref.copy` instead. Also fix a
bug where the memory space of the source buffer was not propagated to
the reallocated buffer.
2023-10-05 14:33:04 +02:00
Matthias Springer
58678d3bcf [mlir][tensor][bufferize] tensor.empty bufferizes to allocation (#68201)
`BufferizableOpInterface::bufferizesToAllocation` is queried when
forming equivalence sets during bufferization. It is not really needed
for ops like `tensor.empty` which do not have tensor operands, but it
should be added for consistency.

This change should have been part of #68080. No test is added because
the return value of this function is irrelevant for ops without tensor
operands. (However, this function acts as a form of documentation,
describing the bufferization semantics of the op.)
2023-10-05 14:06:00 +02:00
Matthias Springer
8823e961f6 [mlir][ODS] Change get...Mutable to return OpOperand & for single operands (#66519)
The TableGen code generator now generates C++ code that returns a single
`OpOperand &` for `get...Mutable` of operands that are not variadic and
not optional. `OpOperand::set`/`assign` can be used to set a value (same
as `MutableOperandRange::assign`). This is safer than
`MutableOperandRange` because only single values (and no longer
`ValueRange`) can be assigned.

E.g.:
```
// Assignment of multiple values to non-variadic operand.
// Before: Compiles, but produces invalid op.
// After: Compilation error.
extractSliceOp.getSourceMutable().assign({v1, v2});
```
2023-10-04 08:35:40 +02:00
Matthias Springer
464dfeba44 [mlir][tensor][bufferize] tensor.empty bufferizes to an allocation (#68080)
Make `tensor.empty` bufferizable, so that the
`-empty-tensor-to-alloc-tensor` pass becomes optional. This makes the
bufferization easier to use. `tensor.empty` used to be non-bufferizable,
so that there were two separate ops: one that can be optimized away
(`tensor.empty`) and one that is guaranteed to bufferize to an
allocation (`bufferization.alloc_tensor`). With the recent improvements
of "empty tensor elimination" this is no longer needed and
`bufferization.alloc_tensor` can be phased out.
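
Roughly, and assuming default One-Shot Bufferize options, this means a
standalone `tensor.empty` can now bufferize straight to an allocation
without first being rewritten to `bufferization.alloc_tensor`:
```
// Before bufferization (hypothetical shape):
%0 = tensor.empty() : tensor<4x8xf32>
// After One-Shot Bufferize (schematically):
%alloc = memref.alloc() : memref<4x8xf32>
```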
2023-10-03 16:00:37 +02:00
Spenser Bauman
0a0c7e8978 [mlir][tensor] Bufferize tensor.reshape with non-identity layouts (#65654)
Bufferization of tensor.reshape generates a memref.reshape operation.
memref.reshape requires the source memref to have an identity layout.
The bufferization process may result in the source memref having a
non-identity layout, resulting in a verification failure.

This change causes the bufferization interface for tensor.reshape to
copy the source memref to a new buffer when the source has a
non-identity layout.
2023-09-19 09:50:43 +09:00
Martin Erhart
6bf043e743 [mlir][bufferization] Remove allow-return-allocs and create-deallocs pass options, remove bufferization.escape attribute (#66619)
This commit removes the deallocation capabilities of
one-shot-bufferization. One-shot-bufferization should never deallocate
any memrefs as this should be entirely handled by the
ownership-based-buffer-deallocation pass going forward. This means the
`allow-return-allocs` pass option will default to true now,
`create-deallocs` defaults to false and they, as well as the escape
attribute indicating whether a memref escapes the current region, will
be removed. A new `allow-return-allocs-from-loops` option is added as a
temporary workaround for some bufferization limitations.
2023-09-18 16:44:48 +02:00
Matthias Springer
0f952cfe24 [mlir][IR] Change MutableOperandRange::operator[] to return an OpOperand & (#66515)
`operator[]` returns `OpOperand &` instead of `Value`.

* This allows users to get OpOperands by name instead of "magic" number.
E.g., `extractSliceOp->getOpOperand(0)` can be written as
`extractSliceOp.getSourceMutable()[0]`.
* `OperandRange` provides a read-only API to operands: `operator[]`
returns `Value`. `MutableOperandRange` now provides a mutable API:
`operator[]` returns `OpOperand &`, which can be used to set operands.

Note: The TableGen code generator could be changed to return `OpOperand
&` (instead of `MutableOperandRange`) for non-variadic and non-optional
arguments in a subsequent change. Then the `[0]` part in the above
example would no longer be necessary.
2023-09-18 09:43:03 +02:00
Matthias Springer
a1ef5a9437 [mlir][bufferization] Empty tensor elimination based on SubsetOpInterface (#65766)
This commit generalizes empty tensor elimination to operate on subset
ops. No new test cases are added because all current subset ops were
already supported previously. From this perspective, this change is NFC.

A new interface method (and a helper method) are added to
`SubsetInsertionOpInterface` to build the subset of the destination
tensor.
2023-09-14 09:45:22 +02:00
Martin Erhart
c199f7dc62 Revert "[mlir][bufferization] Remove allow-return-allocs and create-deallocs pass options, remove bufferization.escape attribute"
This reverts commit 6a91dfedeb.

This caused problems in downstream projects. We are reverting to give
them more time for integration.
2023-09-13 13:53:48 +00:00
Matthias Springer
8143307b33 [mlir][bufferization] Generalize tensor slice rules to subset ops (#65619)
This commit generalizes the special
tensor.extract_slice/tensor.insert_slice bufferization rules to tensor
subset ops.

Ops that insert a tensor into a tensor at a specified subset (e.g.,
tensor.insert_slice, tensor.scatter) can implement the
`SubsetInsertionOpInterface`.

Apart from adding a new op interface (extending the API), this change is
NFC. The only ops that currently implement the new interface are
tensor.insert_slice and tensor.parallel_insert_slice, and those ops were
already supported by One-Shot Bufferize.
2023-09-13 12:27:19 +02:00
Martin Erhart
6a91dfedeb [mlir][bufferization] Remove allow-return-allocs and create-deallocs pass options, remove bufferization.escape attribute
This is the first commit in a series with the goal to rework the
BufferDeallocation pass. Currently, this pass heavily relies on copies
to perform correct deallocations, which leads to very slow code and
potentially high memory usage. Additionally, there are unsupported cases
such as returning memrefs which this series of commits aims to add
support for as well.

This first commit removes the deallocation capabilities of
one-shot-bufferization. One-shot-bufferization should never deallocate any
memrefs as this should be entirely handled by the buffer-deallocation pass
going forward. This means the allow-return-allocs pass option will
default to true now, create-deallocs defaults to false and they, as well
as the escape attribute indicating whether a memref escapes the current region,
will be removed.

The documentation w.r.t. these pass option changes is also updated in
this commit.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D156662
2023-09-13 09:30:22 +00:00
Matthias Springer
878950b82c [mlir][bufferization] Simplify getBufferType
`getBufferType` computes the bufferized type of an SSA value without bufferizing any IR. This is useful for predicting the bufferized type of iter_args of a loop.

To avoid endless recursion (e.g., in the case of "scf.for", the type of the iter_arg depends on the type of the init_arg and the type of the yielded value; the type of the yielded value depends on the type of the iter_arg again), `fixedTypes` was used to fall back to a "fixed" type. A simpler way is to maintain an "invocation stack". `getBufferType` implementations can then inspect the invocation stack to detect repetitive computations (typically when computing the bufferized type of a block argument).

Also improve error messages in case of inconsistent memory spaces inside of a loop.

Differential Revision: https://reviews.llvm.org/D158060
2023-08-16 15:02:07 +02:00
Hanhan Wang
3cb0128d26 [mlir][tensor] Do not fold rank-reduced extract_slice into unpack op.
Differential Revision: https://reviews.llvm.org/D157927
2023-08-15 10:28:21 -07:00
Matthias Springer
a02ad6c177 [mlir][bufferization] Generalize getAliasingOpResults to getAliasingValues
This revision is needed to support bufferization of `cf.br`/`cf.cond_br`. It will also be useful for better analysis of loop ops.

This revision generalizes `getAliasingOpResults` to `getAliasingValues`. An OpOperand can now not only alias with OpResults but also with BlockArguments. In the case of `cf.br` (will be added in a later revision): a `cf.br` operand will alias with the corresponding argument of the destination block.

If an op does not implement the `BufferizableOpInterface`, the analysis is conservative. It previously assumed that an OpOperand may alias with each OpResult. It now assumes that an OpOperand may alias with each OpResult and each BlockArgument of the entry block.

Differential Revision: https://reviews.llvm.org/D157957
2023-08-15 15:02:47 +02:00
Matthias Springer
867afe5e53 [mlir][vector] Remove duplicate tensor subset <-> vector transfer patterns
Remove patterns that fold tensor subset ops into vector transfer ops from the vector dialect. These patterns already exist in the tensor dialect.

Differential Revision: https://reviews.llvm.org/D154932
2023-07-11 11:12:29 +02:00
Matthias Springer
ef4f5357e3 [mlir][linalg] BufferizeToAllocationOp: Do not copy uninitialized buffers
Tensors/buffers that do not have any defined contents (e.g., `tensor.empty`) are no longer copied.

Differential Revision: https://reviews.llvm.org/D154081
2023-07-04 16:40:10 +02:00
Matthias Springer
6596b0dde8 [mlir][tensor] Clean up tensor::DimOp usage
* Remove duplicate functions. `tensor::getMixedSize` and `tensor::getMixedSizes` should be used.
* Use `tensor::getMixedSize` instead of `createOrFold<tensor::DimOp>`. This is more efficient. `createOrFold` will create an op and immediately try to fold it. In case of a static dimension size, an attribute can be used directly.

Differential Revision: https://reviews.llvm.org/D153332
2023-06-22 10:56:17 +02:00
Matthias Springer
efc290ce9c [mlir][affine] More efficient makeComposedFolded... helpers
The old code used to materialize constants as ops, immediately folded them into the resulting affine map and then deleted the constant ops again. Instead, directly fold the attributes into the affine map. Furthermore, all helpers accept `OpFoldResult` instead of `Value` now. This makes the code at call sites more efficient, because it is no longer necessary to materialize a `Value`, just to be able to use these helper functions.

Note: The API has changed (accepts OpFoldResult instead of Value), otherwise this change is NFC.

Differential Revision: https://reviews.llvm.org/D153324
2023-06-22 10:47:38 +02:00
Matthias Springer
9340996706 [mlir][tensor] Add pattern to rewrite tensor.generate as a constant
Only ops with a static tensor type and a constant yield value are rewritten.
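
A hedged example of the kind of op that qualifies (hypothetical values):
static result type and a constant yielded value, so the op can be
rewritten to a dense splat constant.
```
%cst = arith.constant 5.0 : f32
%gen = tensor.generate {
^bb0(%i: index, %j: index):
  tensor.yield %cst : f32
} : tensor<2x3xf32>
// Rewritten by the new pattern to something like:
%gen2 = arith.constant dense<5.000000e+00> : tensor<2x3xf32>
```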

Differential Revision: https://reviews.llvm.org/D152511
2023-06-09 12:56:07 +02:00
Matthias Springer
40052b08de [mlir][tensor] Add option to fold only tensor.empty with a single use
This is useful for transformations such as bufferization, which is looking for tensor.extract_slice/insert_slice pairs.

Also fix the documentation of the corresponding transform op.

Differential Revision: https://reviews.llvm.org/D152455
2023-06-09 12:36:55 +02:00
Ingo Müller
9dbb8eefd4 [mlir][tensor] Implement getBufferType for ReshapeOp.
This function should be implemented for ops that work in one-shot
bufferization.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D151548
2023-06-02 13:40:48 +00:00
Matthias Springer
26864d8fb4 [mlir][tensor] Add pattern to drop redundant insert_slice rank expansion
Drop insert_slice rank expansions if they are directly followed by an inverse rank reduction.
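
A minimal sketch with made-up shapes: the insert_slice only adds a unit
dimension and the following extract_slice immediately drops it again, so
the pair can be removed in favor of the original value.
```
%expand = tensor.insert_slice %t into %dest[0, 0] [1, 16] [1, 1]
    : tensor<16xf32> into tensor<1x16xf32>
%reduce = tensor.extract_slice %expand[0, 0] [1, 16] [1, 1]
    : tensor<1x16xf32> to tensor<16xf32>
// %reduce is just %t; the pattern drops the redundant pair.
```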

Differential Revision: https://reviews.llvm.org/D151800
2023-06-01 08:47:53 +02:00
Ingo Müller
45aaa67fce [mlir][tensor] Fix one-shot bufferization of tensor.reshape.
I believe that the previous implementation did not work on any input. It
called getMemRefType with `layout = {}`, presumably with the intention
to create a MemrefType with identity layout. However, the implementation
of that function returns a MemrefType with *unknown* layout if it is
provided with a default-constructed layout attribute. This patch uses
getMemRefTypeWithStaticIdentityLayout instead, which has identical
behavior except for the case of a default-constructed layout, which it
passes on as-is to the MemrefType.

This problem did not surface in the test because tensor.reshape was not
tested with -one-shot-bufferize. This patch introduces a test copied
from the tests for -tensor-bufferize, adapted as follows: since the
test is run with "bufferize-function-boundaries", a tensor that is
passed into the function is bufferized into a memref with unknown
layout, which wouldn't be a valid input for memref.reshape, so the
test now uses a tensor constructed with arith.constant inside of the
function.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D151544
2023-05-26 09:39:49 +00:00