Commit Graph

94 Commits

Jakub Kuderski
07677113ff [mlir][vector] Add pattern to break down reductions into arith ops (#75727)
The number of vector elements considered 'small' enough to extract is
parameterized.

This is to avoid going into specialized reduction lowering when a single
arith op, or a couple of them, will do. Targets without dedicated
reduction intrinsics can use this as an emulation path too.
                                                                   
Depends on https://github.com/llvm/llvm-project/pull/75846.
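
As an illustration (not taken from the patch; types on the extracts are
elided, as in the sketches further down this log), a two-element reduction
could be broken down roughly like so:

```mlir
// A 'small' reduction ...
%red = vector.reduction <add>, %v : vector<2xf32> into f32
// ... becomes plain arith ops on the extracted elements:
%e0 = vector.extract %v[0]
%e1 = vector.extract %v[1]
%res = arith.addf %e0, %e1 : f32
```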
2023-12-18 17:54:54 -05:00
Hsiangkai Wang
f643eec892 [mlir][vector] Add emulation patterns for vector masked load/store (#74834)
This patch converts

```
vector.maskedload %base[%idx_0, %idx_1], %mask, %pass_thru
```

to

```
%ivalue = %pass_thru
%m = vector.extract %mask[0]
%result0 = scf.if %m {
  %v = memref.load %base[%idx_0, %idx_1]
  %combined = vector.insert %v, %ivalue[0]
  scf.yield %combined
} else {
  scf.yield %ivalue
}
%m1 = vector.extract %mask[1]
%result1 = scf.if %m1 {
  %v = memref.load %base[%idx_0, %idx_1 + 1]
  %combined = vector.insert %v, %result0[1]
  scf.yield %combined
} else {
  scf.yield %result0
}
...
```

Similarly, it converts

```
vector.maskedstore %base[%idx_0, %idx_1], %mask, %value
```

to

```
%m = vector.extract %mask[0]
scf.if %m {
  %extracted = vector.extract %value[0]
  memref.store %extracted, %base[%idx_0, %idx_1]
}
%m1 = vector.extract %mask[1]
scf.if %m1 {
  %extracted = vector.extract %value[1]
  memref.store %extracted, %base[%idx_0, %idx_1 + 1]
}
...
```
2023-12-15 11:35:48 +00:00
Andrzej Warzyński
c02d07fdf0 [mlir][vector] Add pattern to drop unit dim from elementwise(a, b)) (#74817)
For vectors with either leading or trailing unit dim, replaces:

    elementwise(a, b)

with:

    sc_a = shape_cast(a)
    sc_b = shape_cast(b)
    res = elementwise(sc_a, sc_b)
    return shape_cast(res)

The newly inserted shape_cast ops drop the unit dim before the elementwise
op and restore it afterwards. Vectors `a` and `b` are required to have
rank > 1.

Example:
```mlir
  %mul = arith.mulf %B_row, %A_row : vector<1x[4]xf32>
  %cast = vector.shape_cast %mul : vector<1x[4]xf32> to vector<[4]xf32>
```

gets converted to:

```mlir
  %B_row_sc = vector.shape_cast %B_row : vector<1x[4]xf32> to vector<[4]xf32>
  %A_row_sc = vector.shape_cast %A_row : vector<1x[4]xf32> to vector<[4]xf32>
  %mul = arith.mulf %B_row_sc, %A_row_sc : vector<[4]xf32>
  %mul_sc = vector.shape_cast %mul : vector<[4]xf32> to vector<1x[4]xf32>
  %cast = vector.shape_cast %mul_sc : vector<1x[4]xf32> to vector<[4]xf32>
```

In practice, the bottom 2 shape_cast(s) will be folded away.
2023-12-13 20:29:12 +00:00
Jakub Kuderski
8063622721 [mlir][vector] Allow vector distribution with multiple written elements (#75122)
Add a configuration option to allow vector distribution with multiple
elements written by a single lane.

This is so that we can perform vector multi-reduction with multiple
results per workgroup.
2023-12-12 13:15:17 -05:00
Andrzej Warzyński
2eb9e33cc5 [mlir][Vector] Update patterns for flattening vector.xfer Ops (2/N) (#73523)
Updates patterns for flattening `vector.transfer_read` by relaxing the
requirement that the "collapsed" indices are all zero. This enables
collapsing cases like this one:

```mlir
  %2 = vector.transfer_read %arg4[%c0, %arg0, %arg1, %c0] ... :
    memref<1x43x4x6xi32>, vector<1x2x6xi32>
```

Previously, only the following case would be considered for collapsing
(all indices are 0):

```mlir
  %2 = vector.transfer_read %arg4[%c0, %c0, %c0, %c0] ... :
    memref<1x43x4x6xi32>, vector<1x2x6xi32>
```

Also adds some new comments and renames the `firstContiguousInnerDim`
parameter as `firstDimToCollapse` (the latter better matches the actual
meaning).

Similar updates for `vector.transfer_write` will be implemented in a
follow-up patch.
2023-12-05 08:35:58 +00:00
Jakub Kuderski
d33bad66d8 [mlir][vector] Add patterns to simplify chained reductions (#73048)
Chained reductions get created during vector unrolling. These patterns
simplify them into a series of adds followed by a final reduction.

This is preferred on GPU targets like SPIR-V/Vulkan where vector
reduction gets lowered into subgroup operations that are generally more
expensive than simple vector additions.

For now, only the `add` combining kind is handled.
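
A rough sketch of the rewrite (illustrative only; `add` kind, with an
assumed accumulator `%acc`):

```mlir
// Chained reductions as produced by unrolling:
%r0 = vector.reduction <add>, %v0, %acc : vector<4xf32> into f32
%r1 = vector.reduction <add>, %v1, %r0 : vector<4xf32> into f32

// Simplified: a vector add followed by a single final reduction:
%sum = arith.addf %v0, %v1 : vector<4xf32>
%res = vector.reduction <add>, %sum, %acc : vector<4xf32> into f32
```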
2023-11-22 10:30:04 -05:00
Quinn Dawkins
df49a97ab2 [mlir][vector] Root the transfer write distribution pattern on the warp op (#71868)
Currently, when there is a mix of transfer read ops and transfer write
ops that need to be distributed, it is hard to guarantee that the write
gets distributed after the read when the two aren't directly connected
by SSA, because the pattern for write distribution is rooted on the
transfer write. This is likely still relatively unsafe when there are
undistributable ops, but structurally these patterns are a bit difficult
to work with. For now, pattern benefits give fairly good guarantees for
happy paths.
2023-11-10 08:49:33 -05:00
Mehdi Amini
830b9b072d Update some uses of getAttr() to be explicit about Inherent vs Discardable (NFC) 2023-09-12 01:33:47 -07:00
Andrzej Warzynski
576b184d6e [mlir][vector] Add support for scalable vectors in trimLeadingOneDims
This patch updates one specific hook in "VectorDropLeadUnitDim.cpp" to
make sure that "scalable dims" are handled correctly. While this change
affects multiple patterns, I am only adding one regression tests that
captures one specific case that affects me right now.

I am also adding Vector dialect to the list of dependencies of
`-test-vector-to-vector-lowering`. Otherwise my test case won't work as
a standalone test.

Differential Revision: https://reviews.llvm.org/D157993
2023-08-22 08:45:59 +00:00
Andrzej Warzynski
4d339ec91e [mlir][Vector] Add pattern to reorder elementwise and broadcast ops
The new pattern will replace elementwise(broadcast) with
broadcast(elementwise) when safe.
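
For example (a sketch, not taken from the patch):

```mlir
// Before: elementwise op on broadcast operands.
%b0 = vector.broadcast %a : f32 to vector<4xf32>
%b1 = vector.broadcast %b : f32 to vector<4xf32>
%r  = arith.addf %b0, %b1 : vector<4xf32>

// After: elementwise op on the scalars, broadcast applied last.
%s = arith.addf %a, %b : f32
%r = vector.broadcast %s : f32 to vector<4xf32>
```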

This change affects tests for vectorising nD-extract. In one case
("vectorize_nd_tensor_extract_with_tensor_extract") I just trimmed the
test and only preserved the key parts (scalar and contiguous load from
the original Op). We could do the same with some other tests if that
helps maintainability.

Differential Revision: https://reviews.llvm.org/D152812
2023-06-15 10:13:41 +01:00
Matthias Springer
faae4d5d81 [mlir][vector][transform] Expose tensor slice -> transfer folding patterns
Add a new transform op to populate patterns: ApplyFoldTensorSliceIntoTransferPatternsOp.

Differential Revision: https://reviews.llvm.org/D152531
2023-06-09 16:23:25 +02:00
Manish Gupta
9a795f0c59 [mlir][Vector] Adds a pattern to fold arith.extf into vector.contract
Consider mixed-precision data types, i.e., F16 input lhs, F16 input rhs, F32 accumulation, and F32 output. This is typically written as F32 <= F16*F16 + F32.

During vectorization from linalg to vector for mixed-precision data types (F32 <= F16*F16 + F32), linalg.matmul introduces arith.extf on the input lhs and rhs operands:

"linalg.matmul"(%lhs, %rhs, %acc) ({
      ^bb0(%arg1: f16, %arg2: f16, %arg3: f32):
        %lhs_f32 = "arith.extf"(%arg1) : (f16) -> f32
        %rhs_f32 = "arith.extf"(%arg2) : (f16) -> f32
       %mul = "arith.mulf"(%lhs_f32, %rhs_f32) : (f32, f32) -> f32
        %acc = "arith.addf"(%arg3, %mul) : (f32, f32) -> f32
      "linalg.yield"(%acc) : (f32) -> ()
    })
There are backends that natively support mixed-precision data types and do not need the arith.extf. For example, the NVIDIA A100 GPU has mma.sync.aligned.*.f32.f16.f16.f32, which supports mixed-precision data types. However, the presence of arith.extf in the IR introduces unnecessary casting and targets F32 Tensor Cores instead of F16 Tensor Cores on the NVIDIA backend. This patch adds a folding pattern to fold arith.extf into vector.contract.
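
A sketch of the fold (illustrative shapes; indexing maps and iterator types
elided as `{...}`):

```mlir
// Before: contract fed by arith.extf on both inputs.
%lhs_f32 = arith.extf %lhs : vector<4x8xf16> to vector<4x8xf32>
%rhs_f32 = arith.extf %rhs : vector<8x4xf16> to vector<8x4xf32>
%res = vector.contract {...} %lhs_f32, %rhs_f32, %acc
         : vector<4x8xf32>, vector<8x4xf32> into vector<4x4xf32>

// After: a mixed-precision contract, no extf.
%res = vector.contract {...} %lhs, %rhs, %acc
         : vector<4x8xf16>, vector<8x4xf16> into vector<4x4xf32>
```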

Differential Revision: https://reviews.llvm.org/D151918
2023-06-05 23:22:20 +00:00
Diego Caballero
14726cd691 [mlir][Vector] Extend xfer_read(extract)->scalar load to support multiple uses
This patch extends the vector.extract(vector.transfer_read) -> scalar
load patterns to support vector.transfer_read with multiple uses. For
now, we check that all the uses are vector.extract operations.
Supporting multiple uses is predicated under a flag.
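
A sketch of the rewrite (types on the extracts elided; `%i_plus_1` stands
for the elided index arithmetic):

```mlir
// Before: one transfer_read with several vector.extract users.
%v = vector.transfer_read %mem[%i], %pad : memref<?xf32>, vector<4xf32>
%a = vector.extract %v[0]
%b = vector.extract %v[1]

// After: each extract becomes a scalar load.
%a = memref.load %mem[%i] : memref<?xf32>
%b = memref.load %mem[%i_plus_1] : memref<?xf32>
```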

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D150812
2023-05-19 21:03:18 +00:00
Lei Zhang
e000b62a34 [mlir][vector] Separate out vector transfer + tensor slice patterns
These patterns touch the structure generated from tiling, so they
affect later steps like bufferization and vector hoisting.
Instead of putting them in canonicalization, this commit creates
separate entry points for them to be called explicitly.

This is NFC regarding the functionality and tests of those patterns.
It also addresses two TODO items in the codebase.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D150702
2023-05-17 09:01:19 -07:00
Tres Popp
5550c82189 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration of uses of the method to the
corresponding function call as has been decided as more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so I waste less time if it's
reverted, and so the first mass change is more clear as an example to
other teams that will need to follow similar steps.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.
4. Some changes have been deleted for the following reasons:
   - Some files had a variable also named cast
   - Some files had not included a header file that defines the cast
     functions
   - Some files are definitions of the classes that have the casting
     methods, so the code still refers to the method instead of the
     function without adding a prefix or removing the method declaration
     at the same time.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
            mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
            mlir/lib/**/IR/\
            mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
            mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
            mlir/test/lib/Dialect/Test/TestTypes.cpp\
            mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
            mlir/test/lib/Dialect/Test/TestAttributes.cpp\
            mlir/unittests/TableGen/EnumsGenTest.cpp\
            mlir/test/python/lib/PythonTestCAPI.cpp\
            mlir/include/mlir/IR/
```

Differential Revision: https://reviews.llvm.org/D150123
2023-05-12 11:21:25 +02:00
Quinn Dawkins
650f04feda [mlir][vector] Add pattern to break down vector.bitcast
The pattern added here is intended as a last resort for targets like
SPIR-V where there are vector size restrictions and we need to be able
to break down large vector types. Vectorizing loads/stores for small
bitwidths (e.g. i8) relies on bitcasting to a larger element type and
patterns to bubble bitcast ops to where they can cancel.
This fails for cases such as
```
%1 = arith.trunci %0 : vector<2x32xi32> to vector<2x32xi8>
vector.transfer_write %1, %destination[%c0, %c0] {in_bounds = [true, true]} : vector<2x32xi8>, memref<2x32xi8>
```
where the `arith.trunci` op essentially does the job of one of the
bitcasts, leading to a bitcast that needs to be further broken down
```
vector.bitcast %0 : vector<16xi8> to vector<4xi32>
```
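
One possible breakdown of such a bitcast into smaller bitcasts over strided
slices (a sketch, not the exact output; `%init` stands for an initial
destination vector):

```mlir
%s0 = vector.extract_strided_slice %0 {offsets = [0], sizes = [8], strides = [1]}
        : vector<16xi8> to vector<8xi8>
%b0 = vector.bitcast %s0 : vector<8xi8> to vector<2xi32>
%r0 = vector.insert_strided_slice %b0, %init {offsets = [0], strides = [1]}
        : vector<2xi32> into vector<4xi32>
// ... repeat for the slice at offset 8, inserting into %r0 ...
```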

Differential Revision: https://reviews.llvm.org/D149065
2023-04-25 20:18:02 -04:00
Quinn Dawkins
435f7d4c2e [mlir][vector] Add unroll pattern for vector.gather
This pattern is useful for SPIR-V to unroll to a supported vector size
before later lowerings. The unrolling pattern is closer to an
elementwise op than the transfer ops because the index values from which
to extract elements are captured by the index vector and thus there is
no need to update the base offsets when unrolling gather.
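
A sketch of the unrolling for a rank-2 gather (types elided, as in the
sketches above):

```mlir
%idx0  = vector.extract %indices[0]
%mask0 = vector.extract %mask[0]
%pass0 = vector.extract %pass_thru[0]
%g0    = vector.gather %base[%c0, %c0] [%idx0], %mask0, %pass0 ...
%acc0  = vector.insert %g0, %init[0]
// ... same for row 1, inserting into %acc0 ...
```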

Differential Revision: https://reviews.llvm.org/D149066
2023-04-24 14:02:59 -04:00
Matthias Springer
4c48f016ef [mlir][Affine][NFC] Wrap dialect in "affine" namespace
This cleanup aligns the affine dialect with all the other dialects.

Differential Revision: https://reviews.llvm.org/D148687
2023-04-20 11:19:21 +09:00
Nicolas Vasilache
553cebde06 [mlir][Vector] Use a RewriterBase for IR rewrites in VectorTransferOpTransforms 2023-03-25 01:48:50 -07:00
Nicolas Vasilache
8b51340740 [mlir][Vector][Transforms] Improve the control over individual vector lowerings and transforms
This revision adds vector transform operations that allow us to better inspect the composition
of various lowerings that were previously very opaque.

This commit is NFC in that it does not change patterns beyond adding `rewriter.notifyFailure` messages
and it does not change the tests beyond breaking them into pieces and using transforms instead of
throwaway opaque test passes.

Reviewed By: ftynse, springerm

Co-authored-by: Alex Zinenko <zinenko@google.com>

Differential Revision: https://reviews.llvm.org/D146755
2023-03-24 14:01:39 +00:00
Nicolas Vasilache
2bc4c3e920 [mlir][Vector] NFC - Reorganize vector patterns
Vector dialect patterns have grown enormously in the past year to a point where they are now impenetrable.
Start reorganizing them towards finer-grained control.

Differential Revision: https://reviews.llvm.org/D146736
2023-03-23 11:30:25 -07:00
Nicolas Vasilache
73bec2b2c3 [mlir][Vector] Retire one old filter-based test
Differential Revision: https://reviews.llvm.org/D146742
2023-03-23 11:00:35 -07:00
Jakub Kuderski
f80a976acd [mlir][vector] Add gather lowering patterns
This is for targets that do not support gather-like ops, e.g., SPIR-V.

Gather is expanded into lower-level vector ops with memory accesses
guarded with `scf.if`.
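
Roughly analogous to the masked load expansion shown earlier in this log
(illustrative sketch, types elided):

```mlir
%m0 = vector.extract %mask[0]
%r0 = scf.if %m0 {
  %off = vector.extract %indices[0]
  %v   = memref.load %base[%off]
  %c   = vector.insert %v, %pass_thru[0]
  scf.yield %c
} else {
  scf.yield %pass_thru
}
// ... repeated for the remaining elements ...
```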

I also considered generating `vector.maskedload`s, but decided against
it to keep the `memref` and `tensor` codepath closer together. There's a
good chance that if a target doesn't support gather it does not support
masked loads either.

Issue: https://github.com/llvm/llvm-project/issues/60905

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D145942
2023-03-14 10:59:30 -04:00
Jakub Kuderski
fb7ef637a8 [mlir][vector][nvgpu] Move MMA contraction preparation to VectorUtils
This pattern is not specific to nvgpu; I intend to use it in SPIR-V codegen. `VectorTransforms` seems like a more generally useful place.

In addition:
-  Fix a bug in the second condition (the dimensions were swapped for RHS).
-  Add tests.
-  Add support for externally provided filter functions, similar to other vector transforms.
-  Prefer to transpose before zero/sign-extending inputs.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D145638
2023-03-09 14:56:21 -05:00
Kazu Hirata
0a81ace004 [mlir] Use std::optional instead of llvm::Optional (NFC)
This patch replaces (llvm::|)Optional< with std::optional<.  I'll post
a separate patch to remove #include "llvm/ADT/Optional.h".

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2023-01-14 01:25:58 -08:00
Kazu Hirata
a1fe1f5f77 [mlir] Add #include <optional> (NFC)
This patch adds #include <optional> to those files containing
llvm::Optional<...> or Optional<...>.

I'll post a separate patch to actually replace llvm::Optional with
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2023-01-13 21:05:06 -08:00
Matthias Springer
2ec98ffbf1 [mlir][vector] Add scalar vector xfer to memref patterns
These patterns devectorize scalar transfers such as vector<f32> or vector<1xf32>.
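
For example (a sketch; the exact output may differ):

```mlir
// Before: a scalar transfer.
%v = vector.transfer_read %mem[%i], %pad : memref<?xf32>, vector<1xf32>

// After: a plain load plus a broadcast back to the vector type.
%s  = memref.load %mem[%i] : memref<?xf32>
%v2 = vector.broadcast %s : f32 to vector<1xf32>
```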

Differential Revision: https://reviews.llvm.org/D140215
2022-12-19 10:27:49 +01:00
Kazu Hirata
1a36588ec6 [mlir] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-03 18:50:27 -08:00
Nicolas Vasilache
de13eeda11 [mlir][Vector] Add a Broadcast::createBroadcastOp helper
This helper handles non-trivial cases of broadcast + optional transpose creation
that should not leak to the outside world.

Differential Revision: https://reviews.llvm.org/D139003
2022-11-30 05:32:14 -08:00
Nicolas Vasilache
7a69a9d7ae [NFC][mlir] VectorUtils / IndexingUtils simplifications and cleanups
This revision refactors and cleans up a bunch of infra related to vector, shapes and indexing into more reusable APIs.

Differential Revision: https://reviews.llvm.org/D138501
2022-11-22 23:42:29 -08:00
Matthias Springer
9d51b4e4e7 [mlir][vector] Support vector.extractelement distribution of 1D vectors
Ops such as `%1 = vector.extractelement %0[%pos : index] : vector<96xf32>`.

In the case of an extract from a 1D vector, the source vector is distributed. The lane into which the requested position falls extracts the element and shuffles it to all other lanes.

Differential Revision: https://reviews.llvm.org/D137336
2022-11-10 15:07:56 +01:00
Lei Zhang
39c80656fe [mlir][vector] Convert extract_strided_slice to extract & insert chain
This is useful for breaking down extract_strided_slice so that it can
potentially cancel with other extract / insert ops before or after.
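
A sketch of the decomposition for a 1-D slice (types on extract/insert
elided; `%init` stands for an initial destination vector):

```mlir
%slice = vector.extract_strided_slice %v {offsets = [1], sizes = [2], strides = [1]}
           : vector<4xf32> to vector<2xf32>
// becomes:
%e0 = vector.extract %v[1]
%r0 = vector.insert %e0, %init[0]
%e1 = vector.extract %v[2]
%r1 = vector.insert %e1, %r0[1]
```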

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D137471
2022-11-09 19:42:07 -05:00
stanley-nod
d2061530dc [mlir][vector] Modify constraint and interface for warp reduce on f16 and i8
Quantization methods are crucial and ubiquitous in accelerating machine
learning workloads. Most of these methods use f16 and i8 types.

This patch relaxes the type constraints on warp reduce distribution to
allow these types. Furthermore, it changes the interface and moves the
initial reduction of data to a single thread into the
distributedReductionFn. This gives developers flexibility to control how
they obtain the initial lane value, which might differ based on the
input types (i.e., to shuffle a 32-bit-wide type, we need to reduce f16
to 2xf16 rather than a single element).

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D137691
2022-11-09 11:52:17 -08:00
Thomas Raoux
91f62f0e35 [mlir][vector] Fix distribution of scf.for with value coming from above
When a value used in the forOp is defined outside the region but within
the parent warpOp we need to return and distribute the value to pass it
to new operations created within the loop.
Also simplify the lambda interface.

Differential Revision: https://reviews.llvm.org/D137146
2022-11-02 04:15:18 +00:00
Nicolas Vasilache
05fa8e88f4 [mlir][Linalg] Retire LinalgStrategyLowerVectorsPass and filter-based patterns
Context: https://discourse.llvm.org/t/psa-retire-linalg-filter-based-patterns/63785

Depends on D135200

Differential Revision: https://reviews.llvm.org/D135222
2022-10-05 00:55:27 -07:00
River Riddle
986b5c56ea [mlir] Flip Async/GPU/OpenACC/OpenMP to use Both accessors
This allows for incrementally updating the old API usages without
needing to update everything at once. These will be left on Both
for a little bit and then flipped to prefixed when all APIs have been
updated.

Differential Revision: https://reviews.llvm.org/D134386
2022-09-21 17:36:13 -07:00
Thomas Raoux
54db8cc7b1 [mlir][vector] Remove ExtractMap/InsertMap operations
As discussed on discourse: https://discourse.llvm.org/t/vector-vector-distribution-large-vector-to-small-vector/1983/22
remove the insert_map/extract_map ops, as vector distribution now uses the
warp_execute_on_lane_0 op.

Differential Revision: https://reviews.llvm.org/D134000
2022-09-16 17:41:26 +00:00
Nicolas Vasilache
27cc31b64c [mlir][vector] NFC - Clean up vector patterns and propagate benefit through populate functions
Differential Revision: https://reviews.llvm.org/D133559
2022-09-09 02:45:22 -07:00
Fangrui Song
62a4e6ab15 [mlir] Remove unneeded cl::ZeroOrMore for ListOption variables. NFC 2022-06-30 19:04:44 -07:00
Kazu Hirata
037f09959a [mlir] Don't use Optional::hasValue (NFC) 2022-06-20 11:22:37 -07:00
Alex Zinenko
8b68da2c7d [mlir] move SCF headers to SCF/{IR,Transforms} respectively
This aligns the SCF dialect file layout with the majority of the dialects.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D128049
2022-06-20 10:18:01 +02:00
Thomas Raoux
6834803c3d [mlir][vector] NFC remove dependency of VectorTransform to GPU dialect
Make the reduction distribution pattern more generic and remove a
layering problem. The new pattern to distribute reductions is now
independent of the GPU dialect and takes a lambda to decide how the
distributed reduction should be generated.

Differential Revision: https://reviews.llvm.org/D127867
2022-06-15 16:08:29 +00:00
Thomas Raoux
087aba4f0f [mlir][vector] Add pattern to distribute vector reduction to GPU shuffles
Add a pattern to do ad hoc lowering of vector.reduction to a sequence of
warp shuffles. This allows distributing reductions across a warp for GPU targets.
Also add an execution test for warp reduction.
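
A sketch of such a lowering for one f32 value across a 32-lane warp
(`%c8`, `%c16`, and `%c32` are assumed i32 constants):

```mlir
%s0, %p0 = gpu.shuffle xor %val, %c16, %c32 : f32
%r0 = arith.addf %val, %s0 : f32
%s1, %p1 = gpu.shuffle xor %r0, %c8, %c32 : f32
%r1 = arith.addf %r0, %s1 : f32
// ... continue with offsets 4, 2, and 1 to combine all 32 lanes ...
```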

co-authored with @springerm

Differential Revision: https://reviews.llvm.org/D127176
2022-06-14 05:49:16 +00:00
Thomas Raoux
76cf33dab2 [mlir][vector] Add patterns to ppropagate vector distribution
Add patterns to propagate vector distribution and remove dead
arguments. This handles propagation for several vector operations.

recommit after minor bug fix.

Differential Revision: https://reviews.llvm.org/D127167
2022-06-14 05:26:10 +00:00
Thomas Raoux
2d32dac8bb Revert "[mlir][vector] Add patterns to ppropagate vector distribution"
This reverts commit 1c84800c42.

This was causing asan crash.
2022-06-13 17:55:31 +00:00
Thomas Raoux
1c84800c42 [mlir][vector] Add patterns to ppropagate vector distribution
Add patterns to propagate vector distribution and remove dead
arguments. This handles propagation for several vector operations.

Differential Revision: https://reviews.llvm.org/D127167
2022-06-13 16:38:50 +00:00
Thomas Raoux
ed0288f7c4 [mlir][vector] Add patterns for vector distribution
Add a pattern to hoist scalar code outside of the warp distribute region,
as such code cannot be distributed and we want to execute it on all the
lanes.
Add patterns to distribute transfer_write ops. Those operations can be
distributed in different ways, and this is controlled by the user.

Differential Revision: https://reviews.llvm.org/D127152
2022-06-10 17:46:51 +00:00
Mogball
d7ef488bb6 [mlir][gpu] Move GPU headers into IR/ and Transforms/
Depends on D127350

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D127352
2022-06-09 22:49:03 +00:00
Christopher Bate
9f1221521f Recommit "[mlir][vector] Allow unroll of contraction in arbitrary order"
Fixed issue with vector.contract default unroll permutation.

Adds support for vector unroll transformations to unroll in different
orders. For example, the vector.contract can be unrolled into a
smaller set of contractions. There is a choice of how to unroll the
decomposition based on the traversal order of (dim0, dim1, dim2).
The choice of traversal order can now be specified by a callback which is
given by the caller of the transform. For now, only the
vector.contract, vector.transfer_read/transfer_write operations
support the callback.

Differential Revision: https://reviews.llvm.org/D127004
2022-06-09 14:01:19 -06:00
Christopher Bate
53fe155b3f Revert "[mlir][vector] Allow unroll of contraction in arbitrary order"
Reverts commit 1469ebf838 (original commit)
Reverts commit a392a39f75 (build fix for above commit)

The commit broke tests in out-of-tree projects, indicating that some logical
error was made in the previous change but not covered by current tests.
2022-06-07 14:54:01 -06:00