Commit Graph

72 Commits

Author SHA1 Message Date
Tim Shen
3ccaac3cdd [mlir] Add MemRefTypeBuilder and refactor some MemRefType::get().
The refactored MemRefType::get() calls all intend to clone from another
memref type with some modifications. In fact, some calls dropped the memory
space during the cloning. Migrate them to the cloning API so that no field
gets dropped unless it is explicitly overridden.

It's close to NFC but not quite, as it helps with propagating memory spaces in
some places.

Differential Revision: https://reviews.llvm.org/D73296
2020-01-30 23:30:46 -08:00
River Riddle
82170d5619 [mlir] Update various operations to declaratively specify their assembly format.
Summary:
This revision switches over many operations to use the declarative methods for defining the assembly specification. This updates operations in the NVVM, ROCDL, Standard, and VectorOps dialects.

Differential Revision: https://reviews.llvm.org/D73407
2020-01-30 11:43:40 -08:00
River Riddle
aff4ed7326 [mlir][NFC] Update Operation::getResultTypes to use ArrayRef<Type> instead of iterator_range.
Summary: The new internal representation of operation results allows result types to be accessed more efficiently. Changing the API to ArrayRef reflects this and removes the need to explicitly materialize vectors in several places.

Differential Revision: https://reviews.llvm.org/D73429
2020-01-27 19:57:48 -08:00
Diego Caballero
6fb3d59746 [mlir] Remove 'valuesToRemoveIfDead' from PatternRewriter API
Summary:
Remove 'valuesToRemoveIfDead' from PatternRewriter API. The removal
functionality wasn't implemented and we decided [1] not to implement it in
favor of having more powerful DCE approaches.

[1] https://github.com/tensorflow/mlir/pull/212

Reviewers: rriddle, bondhugula

Reviewed By: rriddle

Subscribers: liufengdb, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72545
2020-01-27 14:00:34 -08:00
Mehdi Amini
308571074c Mass update the MLIR license header to mention "Part of the LLVM project"
This is an artifact from merging MLIR into LLVM; the file headers are
now aligned with the rest of the project.
2020-01-26 03:58:30 +00:00
Ahmed Taei
ab03564706 [mlir] : Fix ViewOp shape folder for identity affine maps
Summary: Fix ViewOpShapeFolder for the case where no affine map is associated with the memref, by constructing an identity mapping.

Reviewers: nicolasvasilache

Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72735
2020-01-15 00:54:00 +00:00
River Riddle
2bdf33cc4c [mlir] NFC: Remove Value::operator* and Value::operator-> now that Value is properly value-typed.
Summary: These were temporary methods used to simplify the transition.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D72548
2020-01-11 08:54:39 -08:00
Ahmed Taei
1e25109f93 Canonicalize static alloc followed by memref_cast and std.view
Summary: Rewrite alloc, memref_cast, std.view into alloc, std.view by dropping the memref_cast.

Reviewers: nicolasvasilache

Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72379
2020-01-08 11:50:33 -08:00
Uday Bondhugula
be775a0038 [MLIR] [NFC] fix unused var warning
Summary:
Fix this warning:
`
[69/106] Building CXX object tools/mlir/lib/Dialect/StandardOps/CMakeFiles/MLIRStandardOps.dir/Ops.cpp.o
/home/uday/llvm-project/mlir/lib/Dialect/StandardOps/Ops.cpp: In member function ‘virtual mlir::PatternMatchResult {anonymous}::ViewOpShapeFolder::matchAndRewrite(mlir::ViewOp, mlir::PatternRewriter&) const’:
/home/uday/llvm-project/mlir/lib/Dialect/StandardOps/Ops.cpp:2575:14: warning: variable ‘dynamicOffsetOperandCount’ set but not used [-Wunused-but-set-variable]
 2575 |     unsigned dynamicOffsetOperandCount = 0;
`

Reviewers: rriddle, mehdi_amini, ftynse

Reviewed By: ftynse

Subscribers: jpienaar, burmako, shauheen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71922
2019-12-27 12:04:46 +01:00
Eric Christopher
3d18ce7154 Remove an unused static function. 2019-12-26 18:34:14 -08:00
Eric Christopher
3009cee75f Fix a -Wcovered-switch-default warning by moving the unreachable out of the
covered switch.
2019-12-26 18:29:54 -08:00
River Riddle
e62a69561f NFC: Replace ValuePtr with Value and remove it now that Value is value-typed.
ValuePtr was a temporary typedef during the transition to a value-typed Value.

PiperOrigin-RevId: 286945714
2019-12-23 16:36:53 -08:00
Mehdi Amini
56222a0694 Adjust License.txt file to use the LLVM license
PiperOrigin-RevId: 286906740
2019-12-23 15:33:37 -08:00
River Riddle
35807bc4c5 NFC: Introduce new ValuePtr/ValueRef typedefs to simplify the transition to Value being value-typed.
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ

This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.

PiperOrigin-RevId: 286844725
2019-12-22 22:00:23 -08:00
Manuel Freiberger
22954a0e40 Add integer bit-shift operations to the standard dialect.
Rename the 'shlis' operation in the standard dialect to 'shift_left'. Add tests
for this operation (these have been missing so far) and add a lowering to the
'shl' operation in the LLVM dialect.

Also add 'shift_right_signed' (lowered to LLVM's 'ashr') and 'shift_right_unsigned'
(lowered to 'lshr').
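
For illustration, a rough sketch of how these ops might appear in std-dialect
assembly (operand names are hypothetical; the exact custom form is approximated):

  // Lowered to llvm.shl, llvm.ashr and llvm.lshr respectively.
  %0 = shift_left %value, %amount : i32
  %1 = shift_right_signed %value, %amount : i32
  %2 = shift_right_unsigned %value, %amount : i32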

The original plan was to name these operations 'shift.left', 'shift.right.signed'
and 'shift.right.unsigned'. This works if the operations are prefixed with 'std.'
in MLIR assembly. Unfortunately, during import the short form is ambiguous with
operations from a hypothetical 'shift' dialect. The best solution for now seems
to be to omit the dots in standard operations.

Closes tensorflow/mlir#226

PiperOrigin-RevId: 286803388
2019-12-22 10:02:13 -08:00
Uday Bondhugula
47034c4bc5 Introduce prefetch op: affine -> std -> llvm intrinsic
Introduce affine.prefetch: an op to prefetch using a multi-dimensional
subscript on a memref; it is similar to affine.load but has no effect on
semantics, only on performance.

Provide lowering through std.prefetch and llvm.prefetch, mapping to LLVM's
prefetch intrinsic. All attributes are reflected through the lowering:
locality hint, rw, and instr/data cache.

  affine.prefetch %0[%i, %j + 5], false, 3, true : memref<400x400xi32>

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#225

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/225 from bondhugula:prefetch 4c3b4e93bc64d9a5719504e6d6e1657818a2ead0
PiperOrigin-RevId: 286212997
2019-12-18 10:00:04 -08:00
River Riddle
4562e389a4 NFC: Remove unnecessary 'llvm::' prefix from uses of llvm symbols declared in mlir namespace.
Aside from being cleaner, this also makes the codebase more consistent.

PiperOrigin-RevId: 286206974
2019-12-18 09:29:20 -08:00
River Riddle
7ac42fa26e Refactor various canonicalization patterns as in-place folds.
This is more efficient, and allows for these to fire in more situations: e.g. createOrFold, DialectConversion, etc.

PiperOrigin-RevId: 285476837
2019-12-13 17:19:02 -08:00
River Riddle
b030e4a4ec Try to fold operations in DialectConversion when trying to legalize.
This change allows for DialectConversion to attempt folding as a mechanism to legalize illegal operations. This also expands folding support in OpBuilder::createOrFold to generate new constants when folding, and also enables it to work in the context of a PatternRewriter.

PiperOrigin-RevId: 285448440
2019-12-13 16:47:26 -08:00
River Riddle
e7aa47ff11 NFC: Cleanup the various Op::print methods.
This cleans up the implementation of the various operation print methods. This is done via a combination of code cleanup, adding new streaming methods to the printer (e.g. operand ranges), etc.

PiperOrigin-RevId: 285285181
2019-12-12 15:32:21 -08:00
River Riddle
9ed22ae5b8 Refactor the various operand/result/type iterators to use indexed_accessor_range.
This has several benefits:
* The implementation is much cleaner and more efficient.
* The ranges now have support for many useful operations: operator[], slice, drop_front, size, etc.
* Value ranges can now directly query a range for their types via 'getTypes()': e.g:
   void foo(Operation::operand_range operands) {
     auto operandTypes = operands.getTypes();
   }

PiperOrigin-RevId: 284834912
2019-12-10 13:21:22 -08:00
Lei Zhang
9a4c2df480 NFC: Expose constFoldBinaryOp via a header
This allows other dialects to reuse the logic to support constant
folding binary operations and reduces code duplication.

PiperOrigin-RevId: 284428721
2019-12-08 06:25:54 -08:00
River Riddle
d6ee6a0310 Update the builder API to take ValueRange instead of ArrayRef<Value *>
This allows for users to provide operand_range and result_range in builder.create<> calls, instead of requiring an explicit copy into a separate data structure like SmallVector/std::vector.

PiperOrigin-RevId: 284360710
2019-12-07 10:35:41 -08:00
River Riddle
9d1a0c72b4 Add a new ValueRange class.
This class represents a generic abstraction over the different ways to represent a range of Values: ArrayRef<Value *>, operand_range, result_range. This class will allow for removing the many instances of explicit SmallVector<Value *, N> construction. It has the same memory cost as ArrayRef, and only suffers cost from indexing (if+elsing over the different underlying representations).

This change only updates a few of the existing usages, with more to be changed in followups; e.g. 'build' API.

PiperOrigin-RevId: 284307996
2019-12-06 20:07:23 -08:00
Uday Bondhugula
3ade6a7d15 DimOp folding for alloc/view dynamic dimensions
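
For illustration, a minimal sketch of the alloc case (using the std-dialect
syntax of the time; names are hypothetical):

  %c10 = constant 10 : index
  %0 = alloc(%c10) : memref<?x8xf32>
  // dim of the dynamic dimension folds to the alloc size operand %c10.
  %d = dim %0, 0 : memref<?x8xf32>
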
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#253

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/253 from bondhugula:dimop a4b464f24ae63fd259114558d87e11b8ee4dae86
PiperOrigin-RevId: 284169689
2019-12-06 06:00:54 -08:00
nmostafa
daff60cd68 Add UnrankedMemRef Type
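
For illustration, a minimal sketch of the new type, assuming memref_cast can
cast a ranked memref to the unranked form (names are hypothetical):

  // An unranked memref of f32, obtained by casting away the rank.
  %0 = memref_cast %arg : memref<4x?xf32> to memref<*xf32>
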
Closes tensorflow/mlir#261

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/261 from nmostafa:nmostafa/unranked 96b6e918f6ed64496f7573b2db33c0b02658ca45
PiperOrigin-RevId: 284037040
2019-12-05 13:13:20 -08:00
Mahesh Ravishankar
c5ba37b6ae Add a pass to legalize operations before lowering to SPIR-V.
Not all StandardOps can be lowered to SPIR-V. For example, the subview op
implementation requires the use of pointer bitcasts, which is not valid
according to the SPIR-V spec (or the spec is at least ambiguous about it).
Such ops need to be removed/transformed before lowering to SPIR-V. The
SPIRVLegalizationPass is added as a place where such legalizations can be
implemented. The current implementation folds subview ops into loads/stores
so that the lowering itself does not have to convert a subview op.

PiperOrigin-RevId: 283642981
2019-12-03 16:06:17 -08:00
Alex Zinenko
993e79e9bd Fix ViewOp to have at most one offset operand
As described in the documentation, ViewOp is expected to take an optional
dynamic offset followed by a list of dynamic sizes. However, the ViewOp parser
did not check that the offset is a single value and accepted a list of values
instead.

Furthermore, several tests have been exercising the wrong syntax of a ViewOp,
passing multiple values to the dynamic stride list, which was not caught by the
parser. The trailing values could have been erroneously interpreted as dynamic
sizes. This is likely due to the resyntaxing of the ViewOp, with the previous
syntax taking the list of sizes before the offset. Update the tests to use the
syntax with the offset preceding the sizes.

Worse, the conversion of ViewOp to the LLVM dialect assumed the wrong order of
operands, with the offset in the trailing position, and erroneously relied on
the permissive parsing that interpreted trailing dynamic offset values as
leading dynamic sizes. Fix the lowering to use the correct order of operands.

PiperOrigin-RevId: 283532506
2019-12-03 06:23:04 -08:00
Lei Zhang
0d22a3fdc8 NFC: Update std.subview op to use AttrSizedOperandSegments
This turns a few manually written helper methods into auto-generated ones.

PiperOrigin-RevId: 283339617
2019-12-02 07:52:00 -08:00
Ben Vanik
38d7870ee5 Make std.divis and std.diviu support ElementsAttr folding.
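
For illustration, a minimal sketch of the kind of fold this enables
(constants and names are hypothetical):

  %lhs = constant dense<[8, 12]> : tensor<2xi32>
  %rhs = constant dense<4> : tensor<2xi32>
  // With ElementsAttr folding, this divis folds to constant dense<[2, 3]>.
  %0 = divis %lhs, %rhs : tensor<2xi32>
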
PiperOrigin-RevId: 282434465
2019-11-25 14:31:43 -08:00
Mahesh Ravishankar
1ea231bd39 Allow memref_cast from static strides to dynamic strides.
Memref_cast supports casting from static-shape to dynamic-shape
memrefs. The same should be true for strides as well, i.e. a memref
with static strides can be cast to a memref with dynamic strides.
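
For illustration, a minimal sketch of such a cast expressed with an affine
layout map (types and names are approximations, not taken from the patch):

  // Static strides (4, 1) and offset 0 are cast to dynamic strides/offset.
  %1 = memref_cast %0
    : memref<12x4xf32, (d0, d1) -> (d0 * 4 + d1)>
    to memref<12x4xf32, (d0, d1)[s0, s1, s2] -> (d0 * s1 + d1 * s2 + s0)>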

PiperOrigin-RevId: 282381862
2019-11-25 11:08:56 -08:00
Ben Vanik
d2284f1f0b Support folding of StandardOps with DenseElementsAttr.
PiperOrigin-RevId: 282270243
2019-11-24 19:23:38 -08:00
Mahesh Ravishankar
6db8530c26 Add more canonicalizations for SubViewOp.
Depending on which of the offsets, sizes, or strides are constant, the
subview op can be canonicalized in different ways. Add such
canonicalizations, which generalize the existing approach of
canonicalizing subview op only if all of offsets, sizes and shapes are
constants.

PiperOrigin-RevId: 282010703
2019-11-22 12:14:18 -08:00
Mahesh Ravishankar
1145cebdab Verify subview op result has dynamic shape, when sizes are specified.
If the sizes are specified as arguments to the subview op, then the
shape must be dynamic as well.

PiperOrigin-RevId: 281591608
2019-11-20 14:16:05 -08:00
Mahesh Ravishankar
19212105dd Changes to SubViewOp to make it more amenable to canonicalization.
The current SubViewOp specification allows either all of the offsets,
sizes and strides to be dynamic or all of them to be static. There are
opportunities for more fine-grained canonicalization based on which of
these are static. For example, if the sizes are static, the result
memref is of static shape. The specification of SubViewOp is modified
to allow one or more of the offsets, sizes and strides to be statically
specified. The verification is updated to ensure that the result type
of the subview op is consistent with which of these are static and
which are dynamic.

PiperOrigin-RevId: 281560457
2019-11-20 12:32:51 -08:00
River Riddle
eb418559ef Add a new OpAsmOpInterface to allow for ops to directly hook into the AsmPrinter.
This interface provides more fine-grained hooks into the AsmPrinter than the dialect interface, allowing operations to define the asm names to use for their results directly on the operations themselves. The hook is also expanded to enable defining named result "groups":
the given callback is invoked with a specific result value that starts a
result "pack", and the name to give this result pack. To signal that a
result pack should use the default naming scheme, a None can be passed
in instead of the name.

For example, if you have an operation that has four results and you want
to split these into three distinct groups you could do the following:

  setNameFn(getResult(0), "first_result");
  setNameFn(getResult(1), "middle_results");
  setNameFn(getResult(3), ""); // use the default numbering.

This would print the operation as follows:

  %first_result, %middle_results:2, %0 = "my.op" ...

PiperOrigin-RevId: 281546873
2019-11-20 10:45:45 -08:00
Tian Jin
d8563c0e3a Use SmallVectorImpl instead of SmallVector for function parameters (NFC)
Closes tensorflow/mlir#247

PiperOrigin-RevId: 281185661
2019-11-18 16:59:03 -08:00
Andy Davis
a6a287335d Fix SubViewOp stride calculation in constant folding.
Adds unit tests for subview offset and stride argument constant folding.

PiperOrigin-RevId: 281161041
2019-11-18 15:01:08 -08:00
Stephan Herhut
f0f3b71d67 Implement folding of pattern dim(subview(_)[...][s1, ..., sn][...], i) -> si.
PiperOrigin-RevId: 281042016
2019-11-18 04:31:33 -08:00
Lei Zhang
a0986bf43d NFC: Convert CmpIPredicate in StandardOps to use EnumAttr
This turns several hand-written functions to auto-generated ones.

PiperOrigin-RevId: 280684326
2019-11-15 10:17:31 -08:00
Andy Davis
a4669cd3b4 Adds canonicalizer to SubViewOp which folds constants from base memref and operands into the subview result memref type.
Changes SubViewOp to support the zero-operand case, when offsets, strides and sizes are all constant.

PiperOrigin-RevId: 280485075
2019-11-14 12:23:04 -08:00
Nicolas Vasilache
0bd6390b54 Deprecate linalg.subview in favor of std.subview
This CL uses the now standard std.subview in linalg.
Two shortcuts are currently taken to allow this port:
1. the type resulting from a view is currently degraded to fully dynamic to pass the SubViewOp verifier.
2. indexing into SubViewOp may access out of bounds since lowering to LLVM does not currently enforce it by construction.

These will be fixed in subsequent commits after discussions.

PiperOrigin-RevId: 280250129
2019-11-13 12:10:09 -08:00
Nicolas Vasilache
f51a155337 Add support for alignment attribute in std.alloc.
This CL adds an extra pointer to the memref descriptor to allow specifying alignment.
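
For illustration, a minimal sketch of requesting an aligned allocation (the
attribute spelling and integer form are assumptions, not taken from the patch):

  // Request a 64-byte-aligned buffer.
  %0 = alloc() {alignment = 64} : memref<4096xf32>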

In a previous implementation, we used 2 types: `linalg.buffer` and `view` where the buffer type was the unit of allocation/deallocation/alignment and `view` was the unit of indexing.

After multiple discussions it was decided to use a single type, which conflates both, so the memref descriptor now needs to carry both pointers.

This is consistent with the [RFC-Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!searchin/mlir/std.view%7Csort:date/mlir/-wKHANzDNTg/4K6nUAp8AAAJ).

PiperOrigin-RevId: 279959463
2019-11-12 07:06:54 -08:00
River Riddle
9b9c647cef Add support for nested symbol references.
This change allows for adding additional nested references to a SymbolRefAttr, enabling further resolution when a referenced symbol itself defines a SymbolTable: in that case, a nested reference can be used to refer to a symbol within that table. Nested references are printed after the main reference in the following form:

  symbol-ref-attribute ::= symbol-ref-id (`::` symbol-ref-id)*

Example:

  module @reference {
    func @nested_reference()
  }

  my_reference_op @reference::@nested_reference

Given that SymbolRefAttr is now more general, the existing functionality centered around a single reference is moved to a derived class, FlatSymbolRefAttr. Follow-up commits will add support for scoped references in lookups, rauw, etc.

PiperOrigin-RevId: 279860501
2019-11-11 18:18:31 -08:00
Andy Davis
5cf6e0ce7f Adds std.subview operation which takes dynamic offsets, sizes and strides and returns a memref type which represents a sub/reduced-size view of its memref argument.
This operation is a companion operation to the std.view operation added as proposed in "Updates to the MLIR MemRefType" RFC.
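
For illustration, a rough sketch of the op with dynamic offsets, sizes and
strides (the layout-map notation and operand names are approximations):

  %1 = subview %0[%i, %j][%k, %l][%x, %y]
    : memref<?x?xf32, (d0, d1)[s0, s1, s2] -> (d0 * s1 + d1 * s2 + s0)>
    to memref<?x?xf32, (d0, d1)[s0, s1, s2] -> (d0 * s1 + d1 * s2 + s0)>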

PiperOrigin-RevId: 279766410
2019-11-11 10:33:27 -08:00
Andy Davis
8f00b4494d Swap operand order in std.view operation so that offset appears before dynamic sizes in the operand list.
PiperOrigin-RevId: 279114236
2019-11-07 10:20:23 -08:00
Andy Davis
5fbdb67b0a Add canonicalizer for ViewOp which folds constants into the ViewOp memref shape and layout map strides and offset.
PiperOrigin-RevId: 279088023
2019-11-07 08:05:03 -08:00
Nicolas Vasilache
72040bf7c8 Update Linalg to use std.view
Now that a view op has graduated to the std dialect, we can update Linalg to use it and remove ops that have become obsolete. As a byproduct, the linalg buffer and associated ops can also disappear.

PiperOrigin-RevId: 279073591
2019-11-07 06:33:10 -08:00
Ben Vanik
68bd355505 Adding an m_NonZero constant integer matcher.
This is useful for making matches that require a non-zero value more readable, such as when the result of a constant comparison is expected to be true (i.e. the compared values are equal).

PiperOrigin-RevId: 278932874
2019-11-06 14:01:55 -08:00
Andy Davis
b5654d1311 Add ViewOp verification for dynamic strides, and address some comments from the previous change.
PiperOrigin-RevId: 278903187
2019-11-06 11:25:54 -08:00