Commit Graph

105 Commits

Alex Zinenko
e0ea706a59 [mlir] ConvertStandardToLLVM: do not rely on command line options internally
The patterns for converting `std.alloc` and `std.dealloc` can be configured to
use `llvm.alloca` instead of calling `malloc` and `free`. This configuration
has only been possible through a command-line flag, despite the presence of a
(misleading) parameter in the pass constructor. Use the parameter instead and
only initialize it from the command-line flags if the pass is constructed from
the mlir-opt registration.
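
For illustration, a minimal sketch of the two lowering modes for a static
`std.alloc` (LLVM dialect syntax approximate to this revision):

  %0 = alloc() : memref<16xf32>
  // Default mode: call into the C allocator (%size holds the byte count).
  %heap = llvm.call @malloc(%size) : (!llvm.i64) -> !llvm<"i8*">
  // Alloca mode: allocate on the stack instead (%n holds the element count).
  %stack = llvm.alloca %n x !llvm.float : (!llvm.i64) -> !llvm<"float*">
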
2020-02-03 13:50:41 +01:00
Alex Zinenko
f3fa4a34b6 [mlir] Drop customization hooks from StandardToLLVM conversion
Summary:
These hooks were originally introduced to support passes deriving the
StandardToLLVM conversion, in particular converting types from different
dialects to LLVM types in a single-step conversion. They are no longer in use
since the pass and conversion infrastructure has evolved sufficiently to make
defining new passes with exactly the same functionality simple through the use
of populate* functions, conversion targets and type converters. Remove the
hooks. Any users of these hooks can call the dialect conversion infrastructure
directly instead, which is likely to require fewer LoC than the hooks.

Differential Revision: https://reviews.llvm.org/D73795
2020-02-03 13:26:17 +01:00
Diego Caballero
e5aaf30cf1 [mlir] Introduce bare ptr calling convention for MemRefs in LLVM dialect
Summary:
This patch introduces an alternative calling convention for
MemRef function arguments in LLVM dialect. It converts MemRef
function arguments to LLVM bare pointers to the MemRef element
type instead of creating a MemRef descriptor. Bare pointers are
then promoted to MemRef descriptors at the beginning of the
function. This calling convention is only enabled with a flag.
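
A rough sketch of the two conventions for a 1-D MemRef argument (descriptor
layout and LLVM dialect type syntax approximate to this revision):

  // Descriptor convention: the argument carries pointers, offset, sizes, strides.
  llvm.func @f(%arg0: !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }">)
  // Bare-pointer convention (behind the flag): only the element pointer is passed.
  llvm.func @f(%arg0: !llvm<"float*">)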

Reviewers: ftynse, bondhugula, nicolasvasilache, rriddle, mehdi_amini

Reviewed By: ftynse, rriddle, mehdi_amini

Subscribers: Joonsoo, flaub, merge_guards_bot, jholewinski, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, csigg, arpith-jacob, mgester, lucyrfox, herhut, aartbik, liufengdb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D72802
2020-01-31 15:19:38 -08:00
Alex Zinenko
23ccc055c7 [mlir] Remove the dependency of StdToLLVM on LoopToStd
This is a leftover of a temporary state where loop operations were in the
Standard dialect.
2020-01-31 19:47:33 +01:00
Lubomir Litchev
fcabccd3d9 [MLIR] Add the sqrt operation to mlir.
Summary: Add and pipe through the sqrt operation for Standard and LLVM dialects.
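
A hedged sketch of the new op and its lowering (assuming the LLVM dialect
wrapper op is named by analogy with the other intrinsic ops):

  %0 = sqrt %arg0 : f32
  // lowers to roughly:
  %0 = "llvm.intr.sqrt"(%arg0) : (!llvm.float) -> !llvm.float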

Reviewers: nicolasvasilache, ftynse

Reviewed By: ftynse

Subscribers: frej, ftynse, merge_guards_bot, flaub, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73571
2020-01-30 08:07:38 -08:00
River Riddle
aff4ed7326 [mlir][NFC] Update Operation::getResultTypes to use ArrayRef<Type> instead of iterator_range.
Summary: The new internal representation of operation results allows result types to be accessed more efficiently. Changing the API to ArrayRef is more efficient and removes the need to explicitly materialize vectors in several places.

Differential Revision: https://reviews.llvm.org/D73429
2020-01-27 19:57:48 -08:00
Mehdi Amini
308571074c Mass update the MLIR license header to mention "Part of the LLVM project"
This is an artifact from merging MLIR into LLVM; the file headers are
now aligned with the rest of the project.
2020-01-26 03:58:30 +00:00
River Riddle
2bdf33cc4c [mlir] NFC: Remove Value::operator* and Value::operator-> now that Value is properly value-typed.
Summary: These were temporary methods used to simplify the transition.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D72548
2020-01-11 08:54:39 -08:00
Tung Le Duc
e5957ac3d7 [mlir] Fix the wrong computation of dynamic strides for lowering AllocOp to LLVM
Leftover change from before the MLIR merge, reviewed and accepted at
https://github.com/tensorflow/mlir/pull/338.
2019-12-28 23:33:28 +01:00
River Riddle
e62a69561f NFC: Replace ValuePtr with Value and remove it now that Value is value-typed.
ValuePtr was a temporary typedef during the transition to a value-typed Value.

PiperOrigin-RevId: 286945714
2019-12-23 16:36:53 -08:00
Mehdi Amini
56222a0694 Adjust License.txt file to use the LLVM license
PiperOrigin-RevId: 286906740
2019-12-23 15:33:37 -08:00
River Riddle
35807bc4c5 NFC: Introduce new ValuePtr/ValueRef typedefs to simplify the transition to Value being value-typed.
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ

This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.

PiperOrigin-RevId: 286844725
2019-12-22 22:00:23 -08:00
Manuel Freiberger
22954a0e40 Add integer bit-shift operations to the standard dialect.
Rename the 'shlis' operation in the standard dialect to 'shift_left'. Add tests
for this operation (these have been missing so far) and add a lowering to the
'shl' operation in the LLVM dialect.

Add also 'shift_right_signed' (lowered to LLVM's 'ashr') and 'shift_right_unsigned'
(lowered to 'lshr').

The original plan was to name these operations 'shift.left', 'shift.right.signed'
and 'shift.right.unsigned'. This works if the operations are prefixed with 'std.'
in MLIR assembly. Unfortunately, during import the short form is ambiguous with
operations from a hypothetical 'shift' dialect. The best solution seems to be to
omit dots in standard operations for now.
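
For illustration, the three ops and their LLVM dialect targets (a sketch; the
integer type is chosen arbitrarily):

  %0 = shift_left %a, %b : i32            // lowers to llvm.shl
  %1 = shift_right_signed %a, %b : i32    // lowers to llvm.ashr
  %2 = shift_right_unsigned %a, %b : i32  // lowers to llvm.lshr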

Closes tensorflow/mlir#226

PiperOrigin-RevId: 286803388
2019-12-22 10:02:13 -08:00
Marcel Koester
6054610bbe Added LLVM ops and lowerings from the standard dialect for FAbs, FCeil, Cos, FNeg, CopySign.
Added test cases for the newly added LLVM operations and lowering features.

Closes tensorflow/mlir#300

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/300 from dfki-jugr:std_to_llvm da6168bbc1a369ae2e99ad3881fdddd82f075dd4
PiperOrigin-RevId: 286231169
2019-12-18 11:42:43 -08:00
Uday Bondhugula
47034c4bc5 Introduce prefetch op: affine -> std -> llvm intrinsic
Introduce affine.prefetch: an op to prefetch using a multi-dimensional
subscript on a memref; similar to affine.load but with no effect on
semantics, only on performance.

Provide lowering through std.prefetch, llvm.prefetch and map to LLVM's
prefetch intrinsic. All attributes are reflected through the lowering:
locality hint, rw, and instr/data cache.

  affine.prefetch %0[%i, %j + 5], false, 3, true : memref<400x400xi32>

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#225

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/225 from bondhugula:prefetch 4c3b4e93bc64d9a5719504e6d6e1657818a2ead0
PiperOrigin-RevId: 286212997
2019-12-18 10:00:04 -08:00
River Riddle
4562e389a4 NFC: Remove unnecessary 'llvm::' prefix from uses of llvm symbols declared in mlir namespace.
Aside from being cleaner, this also makes the codebase more consistent.

PiperOrigin-RevId: 286206974
2019-12-18 09:29:20 -08:00
River Riddle
74278dd01e NFC: Use TypeSwitch to simplify existing code.
PiperOrigin-RevId: 286066371
2019-12-17 14:57:41 -08:00
Alex Zinenko
42b3fe8335 Make it possible to override the lowering of MemRef to the LLVM dialect. NFC.
The lowering of MemRef types to the LLVM dialect is connected to the underlying
runtime representation of structured memory buffers. It has changed several
times in the past and reached the current state of a LLVM structured-typed
descriptor containing two pointers and all sizes. In several reported use
cases, a different, often simpler, lowering scheme is required. For example,
lowering statically-shaped memrefs to bare LLVM pointers to simplify aliasing
annotation. Split the pattern population functions into those that include
memref-related operations and the remaining ones. Users are expected to extend
TypeConverter::convertType to handle the memref types differently.
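
For reference, a sketch of the default conversion that this change makes
overridable (descriptor layout approximate to this revision):

  // Default: a ranked memref becomes a structured descriptor type, e.g.
  //   memref<?x?xf32>  ->  !llvm<"{ float*, float*, i64, [2 x i64], [2 x i64] }">
  // A custom TypeConverter::convertType could instead produce, e.g.
  //   memref<4x4xf32>  ->  !llvm<"float*">
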
PiperOrigin-RevId: 286030610
2019-12-17 12:10:04 -08:00
Alex Zinenko
0bdc72d2df StdToLLVM conversion: drop getMemRefElementType utility function
This function has become redundant with MemRefDescriptor::getElementType and is
no longer necessary. Use the MemRefDescriptor pervasively to concentrate
descriptor-related logic in one place and drop the utility function.

PiperOrigin-RevId: 286024168
2019-12-17 11:43:59 -08:00
Alex Zinenko
0684aa9a8b Make memref promotion during std->LLVM lowering the default calling convention
During the conversion from the standard dialect to the LLVM dialect,
memref-typed arguments are promoted from registers to memory and passed into
functions by pointer. This had been introduced into the lowering to work around
the absence of calling convention modeling in MLIR to enable better
interoperability with LLVM IR generated from C, and has been exercised for
several months. Make this promotion the default calling convention when
converting to the LLVM dialect. This adds the documentation, simplifies the
code and makes the conversion consistent across function operations and
function types used in other places, e.g. in high-order functions or
attributes, which would not follow the same rule previously.
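
A minimal sketch of the now-default convention for a memref argument
(descriptor layout and type syntax approximate to this revision):

  func @f(%arg0: memref<?xf32>)
  // becomes, under the promoted (by-pointer) convention, roughly:
  llvm.func @f(%arg0: !llvm<"{ float*, float*, i64, [1 x i64], [1 x i64] }*">)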

PiperOrigin-RevId: 285751280
2019-12-16 05:17:14 -08:00
Ehsan Toosi
f7bffad5a7 Added lowering of std.tanh to LLVM function calls to tanh and tanhf.
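
A hedged sketch of the lowering (f32 maps to tanhf, f64 to tanh; type syntax
approximate to this revision):

  %0 = tanh %arg0 : f32
  // lowers to roughly:
  %0 = llvm.call @tanhf(%arg0) : (!llvm.float) -> !llvm.float
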
Closes tensorflow/mlir#312

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/312 from dfki-ehna:tanh 9e89b072ff91ff390ad739501745114feb3ac856
PiperOrigin-RevId: 285205674
2019-12-12 09:25:15 -08:00
Nicolas Vasilache
9dfa84a269 Add std.log* and llvm.intr.log* that correspond to the LLVMIR intrinsics
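
A sketch of the correspondence (syntax approximate to this revision):

  %0 = log %arg0 : f32
  // maps to the LLVM intrinsic wrapper op, roughly:
  %0 = "llvm.intr.log"(%arg0) : (!llvm.float) -> !llvm.float
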
PiperOrigin-RevId: 285073483
2019-12-11 15:25:34 -08:00
Kazuaki Ishizaki
ae05cf27c6 Minor spelling tweaks
Closes tensorflow/mlir#304

PiperOrigin-RevId: 284568358
2019-12-09 09:23:48 -08:00
Uday Bondhugula
a63f6e0bf9 Replace spurious SmallVector constructions with ValueRange
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#305

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/305 from bondhugula:value_range 21d1fae73f549e3c8e72b60876eff1b864cea39c
PiperOrigin-RevId: 284541027
2019-12-09 06:26:33 -08:00
River Riddle
d6ee6a0310 Update the builder API to take ValueRange instead of ArrayRef<Value *>
This allows for users to provide operand_range and result_range in builder.create<> calls, instead of requiring an explicit copy into a separate data structure like SmallVector/std::vector.

PiperOrigin-RevId: 284360710
2019-12-07 10:35:41 -08:00
River Riddle
9d1a0c72b4 Add a new ValueRange class.
This class represents a generic abstraction over the different ways to represent a range of Values: ArrayRef<Value *>, operand_range, result_range. This class will allow for removing the many instances of explicit SmallVector<Value *, N> construction. It has the same memory cost as ArrayRef, and only incurs a cost when indexing (if/else-ing over the different underlying representations).

This change only updates a few of the existing usages, with more to be changed in followups; e.g. 'build' API.

PiperOrigin-RevId: 284307996
2019-12-06 20:07:23 -08:00
Alex Zinenko
e216a72ab8 Add conversions of GPU func with memory attributions to LLVM/NVVM
GPU functions use memory attributions, a combination of Op attributes and
region arguments, to specify function-wide buffers placed in workgroup or
private memory spaces. Introduce a lowering pattern for GPU functions to be
converted to LLVM functions taking into account memory attributions. Workgroup
attributions get transformed into module-level globals with unique names
derived from function names. Private attributions get converted into
llvm.allocas inside the function body. In both cases, we inject at the
beginning of the function the IR that obtains the raw pointer to the data and
populates a MemRef descriptor based on the MemRef type of the buffer, making
attributions compose with the rest of the MemRef lowering and transparent for
use with std.load and std.store. While using raw pointers instead of
descriptors might have been more efficient, it is better implemented as a
canonicalization or a separate transformation so that non-attribution memrefs
could also benefit from it.
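
For illustration, a hedged sketch of a GPU function with attributions (the
memory-space numbers follow the NVVM convention and are an assumption here):

  gpu.func @kernel(%arg0: f32) workgroup(%wg: memref<32xf32, 3>) private(%pv: memref<4xf32, 5>) {
    // %wg becomes a module-level global; %pv becomes an llvm.alloca.
    gpu.return
  }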

PiperOrigin-RevId: 284208396
2019-12-06 10:08:43 -08:00
Kazuaki Ishizaki
84a6182ddd minor spelling tweaks
Closes tensorflow/mlir#290

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/290 from kiszk:spelling_tweaks_201912 9d9afd16a723dd65754a04698b3976f150a6054a
PiperOrigin-RevId: 284169681
2019-12-06 05:59:30 -08:00
nmostafa
daff60cd68 Add UnrankedMemRef Type
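
A sketch of the new type (assuming memref_cast supports rank erasure as part
of this change):

  // The rank of %0 is known only at runtime:
  %0 = memref_cast %arg0 : memref<4x?xf32> to memref<*xf32>
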
Closes tensorflow/mlir#261

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/261 from nmostafa:nmostafa/unranked 96b6e918f6ed64496f7573b2db33c0b02658ca45
PiperOrigin-RevId: 284037040
2019-12-05 13:13:20 -08:00
Nicolas Vasilache
b3f7cf80a7 Add a CL option to Standard to LLVM lowering to use alloca instead of malloc/free.
In the future, a more configurable malloc and free interface should be used and
exposed via extra parameters to `createLowerToLLVMPass`. Until requirements are
gathered, a simple CL flag allows generating code that runs successfully on
hardware that cannot use the stdlib.

PiperOrigin-RevId: 283833424
2019-12-04 14:16:00 -08:00
Alex Zinenko
993e79e9bd Fix ViewOp to have at most one offset operand
As described in the documentation, ViewOp is expected to take an optional
dynamic offset followed by a list of dynamic sizes. However, the ViewOp parser
did not include a check for the offset being a single value and accepted a
list of values instead.

Furthermore, several tests have been exercising the wrong syntax of a ViewOp,
passing multiple values to the dynamic stride list, which was not caught by the
parser. The trailing values could have been erroneously interpreted as dynamic
sizes. This is likely due to the resyntaxing of the ViewOp, with the previous
syntax taking the list of sizes before the offset. Update the tests to use the
syntax with the offset preceding the sizes.

Worse, the conversion of ViewOp to the LLVM dialect assumed the wrong order of
operands, with the offset in the trailing position, and erroneously relied on
the permissive parsing that interpreted trailing dynamic offset values as
leading dynamic sizes. Fix the lowering to use the correct order of operands.
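
For reference, the expected form after the fix (a sketch; one optional dynamic
offset followed by the dynamic sizes):

  %1 = view %0[%offset][%size0, %size1] : memref<2048xi8> to memref<?x?xf32>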

PiperOrigin-RevId: 283532506
2019-12-03 06:23:04 -08:00
Stephan Herhut
2125c0e3a8 Extend conversion of SubViewOp to LLVM to also support cases where size and stride
are constant (i.e., there are no size and stride operands).

We recently added canonicalization that rewrites constant size and stride
operands of SubViewOp into static information in the type, so these patterns
now occur during code generation.

PiperOrigin-RevId: 283524688
2019-12-03 05:11:49 -08:00
Alex Zinenko
fdbb99cd62 Add linkage support to LLVMFuncOp
A recent commit introduced the Linkage attribute to the LLVM dialect and used
it in the Global Op. Also use it in LLVMFuncOp. As per LLVM Language Reference,
if the linkage attribute is omitted, the function is assumed to have external
linkage.
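
For illustration, a hedged sketch (keyword placement approximate to this
revision):

  // Explicit linkage:
  llvm.func internal @inner() {
    llvm.return
  }
  // No linkage keyword: external linkage is assumed.
  llvm.func @callee()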

PiperOrigin-RevId: 283493299
2019-12-03 00:26:44 -08:00
Lei Zhang
0d22a3fdc8 NFC: Update std.subview op to use AttrSizedOperandSegments
This turns a few manually written helper methods into auto-generated ones.

PiperOrigin-RevId: 283339617
2019-12-02 07:52:00 -08:00
Mahesh Ravishankar
19212105dd Changes to SubViewOp to make it more amenable to canonicalization.
The current SubViewOp specification allows for either all offsets,
shape and stride to be dynamic or all of them to be static. There are
opportunities for more fine-grained canonicalization based on which of
these are static. For example, if the sizes are static, the result
memref is of static shape. The specification of SubViewOp is modified
to allow one or more of offsets, shapes and strides to be statically
specified. The verification is updated to ensure that the result type
of the subview op is consistent with which of these are static and
which are dynamic.

PiperOrigin-RevId: 281560457
2019-11-20 12:32:51 -08:00
Alex Zinenko
8961d8e32f Change conversion CLI flag from -lower-to-llvm to -convert-std-to-llvm
The command-line flag name `lower-to-llvm` for the pass performing dialect
conversion from the Standard dialect to the LLVM dialect is misleading and
inconsistent with most of the conversion passes. It leads the user to believe
that there are no restrictions on what can be converted, while in fact only a
subset of the Standard dialect can be converted (with operations from other
dialects converted by separate passes). Use `convert-std-to-llvm` that better
reflects what the pass does and is consistent with most other conversions.

PiperOrigin-RevId: 281238797
2019-11-19 00:34:51 -08:00
Alex Zinenko
062dd406b1 ConvertStandardToLLVM: replace assertion with graceful failure
The assertion was introduced in the early days of dialect conversion
infrastructure when we had the matching function separate from the rewriting
function. The infrastructure evolved to have a common matchAndRewrite function
and the separate matching function was dropped without changing the rewriting
that became matchAndRewrite. This has led to the assertion being triggered.
Return a matchFailure instead of failing an assertion on unsupported types.

Closes tensorflow/mlir#230

PiperOrigin-RevId: 281113741
2019-11-18 11:26:24 -08:00
Alex Zinenko
b34a861d5a Make positions of elements in MemRef descriptor private
Previous commits removed all uses of LLVMTypeConverter::k*PosInMemRefDescriptor
outside of the MemRefDescriptor class. These numbers are an implementation
detail and can be hidden under a layer of more semantic APIs.

PiperOrigin-RevId: 280442444
2019-11-14 09:17:38 -08:00
Alex Zinenko
bf5916e7a4 Use MemRefDescriptor in Vector-to-LLVM conversion
Following up on the consolidation of MemRef descriptor conversion, update
Vector-to-LLVM conversion to use the helper class that abstracts away the
implementation details of the MemRef descriptor. This also makes the types of
the attributes in emitted llvm.insert/extractelement operations consistently
i64 instead of a mix of index and i64.

PiperOrigin-RevId: 280441451
2019-11-14 09:05:42 -08:00
Alex Zinenko
7c28de4aef Use MemRefDescriptor in Linalg-to-LLVM conversion
Following up on the consolidation of MemRef descriptor conversion, update
Linalg-to-LLVM conversion to use the helper class that abstracts away the
implementation details of the MemRef descriptor. This required MemRefDescriptor
to become publicly visible. Since this conversion is heavily EDSC-based,
introduce locally an additional wrapper that uses builder and location pointed
to by the EDSC context while emitting descriptor manipulation operations.

PiperOrigin-RevId: 280429228
2019-11-14 08:04:10 -08:00
Alex Zinenko
ee5c2256ef Concentrate memref descriptor manipulation logic in one place
Memref descriptor is becoming increasingly complex. Memrefs are manipulated by
multiple standard instructions, each of which has a non-trivial lowering to the
LLVM dialect. This leads to verbose code that manipulates the descriptors
exposing the internals of insert/extractelement operations. Implement a wrapper
class that contains a memref descriptor and provides semantically named methods
that build the primitive IR operations instead.

PiperOrigin-RevId: 280371225
2019-11-14 00:49:12 -08:00
Nicolas Vasilache
51de3f688e Add LLVM lowering of std.subview
A followup CL will replace usage of linalg.subview with std.subview.

PiperOrigin-RevId: 279961981
2019-11-12 07:23:18 -08:00
Nicolas Vasilache
f51a155337 Add support for alignment attribute in std.alloc.
This CL adds an extra pointer to the memref descriptor to allow specifying alignment.

In a previous implementation, we used 2 types: `linalg.buffer` and `view` where the buffer type was the unit of allocation/deallocation/alignment and `view` was the unit of indexing.

After multiple discussions it was decided to use a single type, which conflates both, so the memref descriptor now needs to carry both pointers.

This is consistent with the [RFC-Proposed Changes to MemRef and Tensor MLIR Types](https://groups.google.com/a/tensorflow.org/forum/#!searchin/mlir/std.view%7Csort:date/mlir/-wKHANzDNTg/4K6nUAp8AAAJ).
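
A hedged sketch of the attribute (assuming it is spelled `alignment`):

  %0 = alloc() {alignment = 16} : memref<64xf32>
  // The lowered descriptor now carries both the allocated and the aligned pointer.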

PiperOrigin-RevId: 279959463
2019-11-12 07:06:54 -08:00
Nicolas Vasilache
72040bf7c8 Update Linalg to use std.view
Now that a view op has graduated to the std dialect, we can update Linalg to use it and remove ops that have become obsolete. As a byproduct, the linalg buffer and associated ops can also disappear.

PiperOrigin-RevId: 279073591
2019-11-07 06:33:10 -08:00
Nicolas Vasilache
7f6c6084b5 Add lowering of std.view to LLVM
This CL ports the lowering of linalg.view to the newly introduced std.view.
Differences in implementation relate to std.view having slightly different semantics:
1. a static or dynamic offset can be specified.
2. the size of the (contiguous) shape is passed instead of a range.
3. static size and stride information is extracted from the memref type rather than the range.

Besides these differences, the lowering behaves the same.
A future CL will update Linalg to use this unified infrastructure.

PiperOrigin-RevId: 278948853
2019-11-06 15:06:16 -08:00
Nicolas Vasilache
2823b68580 Implement lowering of VectorTypeCastOp to LLVM
A VectorTypeCastOp can only be used to cast between a statically sized contiguous memref of scalar type and a memref of the matching vector type. The sizes and strides are thus fully static and easy to determine.
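
A sketch of such a cast (op name and syntax approximate to this revision):

  %vm = vector.type_cast %m : memref<8x8xf32> to memref<vector<8x8xf32>>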

A relevant test is added.

This is a step towards solving tensorflow/mlir#189.

PiperOrigin-RevId: 275538981
2019-10-18 14:00:06 -07:00
River Riddle
dfe09cc621 Add support for PatternRewriter::eraseOp.
This hook is useful when an operation is known to be dead, and no replacement values make sense.

PiperOrigin-RevId: 275052756
2019-10-16 09:50:57 -07:00
Nicolas Vasilache
abf5c60af9 Add conversion for splat of vectors of 2+D
This CL adds a missing lowering for splat of multi-dimensional vectors.
Additional support is also added to the runtime utils library to allow printing memrefs with such vectors.
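
For illustration, a splat of a multi-dimensional vector (a sketch):

  %0 = splat %s : vector<4x8xf32>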

PiperOrigin-RevId: 274794723
2019-10-15 06:53:08 -07:00
Alex Zinenko
8c2ea32072 Emit LLVM IR equivalent of sizeof when lowering alloc operations
Originally, the lowering of `alloc` operations had been computing the number of
bytes to allocate based on the properties of the MLIR type. This does
not take into account type legalization that happens when compiling LLVM IR
down to target assembly. This legalization can widen the type, potentially
leading to out-of-bounds accesses to `alloc`ed data due to mismatches between
address computation that takes the widening into account and allocation that
does not. Use the LLVM IR's equivalent of `sizeof` to compute the number of
bytes to be allocated:
  %0 = getelementptr %type* null, %indexType 0
  %1 = ptrtoint %type* %0 to %indexType
adapted from
http://nondot.org/sabre/LLVMNotes/SizeOf-OffsetOf-VariableSizedStructs.txt

PiperOrigin-RevId: 274159900
2019-10-11 06:33:26 -07:00
Uday Bondhugula
47596f5345 Drop obsolete code from std to llvm memref lowering
- dropping what looks like outdated code left over from some of the previous
  updates

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#179

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/179 from bondhugula:llfix 2a72ea441fe1b3924802273ffbe9870afeb90f91
PiperOrigin-RevId: 274158273
2019-10-11 06:31:18 -07:00