Commit Graph

185 Commits

Author SHA1 Message Date
Nicolas Vasilache 5133673df4 [mlir] Extend semantic of OffsetSizeAndStrideOpInterface.
OffsetSizeAndStrideOpInterface now has the ability to specify only a leading subset of
the offset, size, and stride operands/attributes.
The size of that leading subset must be limited by the corresponding entry in `getArrayAttrMaxRanks` to avoid overflows.
Missing trailing dimensions are assumed to span the whole range (i.e. [0 .. dim)).
This brings more natural semantics to slice-like ops on top of subview and simplifies removing all uses of SliceOp in dependent projects.

Differential revision: https://reviews.llvm.org/D95441
2021-01-27 09:02:35 +00:00
Nicolas Vasilache 05d5125d8a [mlir] Generalize OpFoldResult usage in ops with offsets, sizes and strides.
This revision starts evolving the APIs to manipulate ops with offsets, sizes and strides towards a ValueOrAttr abstraction that is already used in folding under the name OpFoldResult.

The objective, in the future, is to allow such manipulations all the way to the level of ODS to avoid all the genuflexions involved in distinguishing between values and attributes for generic constant foldings.

Once this evolution is accepted, the next step will be a mechanical OpFoldResult -> ValueOrAttr renaming.

Differential Revision: https://reviews.llvm.org/D95310
2021-01-25 14:17:03 +00:00
Nicolas Vasilache dbf9bedf40 [mlir][Linalg] Add a hoistPaddingOnTensors transformation
This transformation anchors on a padding op whose result is only used as an input
to a Linalg op and pulls it out of a given number of loops.
The result is a packing of padded tiles of ops that is amortized just before
the outermost loop from which the pad operation is hoisted.

Differential revision: https://reviews.llvm.org/D95243
2021-01-25 12:41:18 +00:00
Nicolas Vasilache 3747eb9c85 [mlir][Linalg] Add a padding option to Linalg tiling
This revision allows the base Linalg tiling pattern to optionally require padding to
a constant bounding shape.
When requested, a simple analysis is performed, similar to buffer promotion.
A temporary `linalg.simple_pad` op is added to model padding for the purpose of
connecting the dots. This will be replaced by a more fleshed out `linalg.pad_tensor`
op when it is available.
In the meantime, this temporary op serves the purpose of exhibiting the necessary
properties required from a more fleshed out pad op, to compose with transformations
properly.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D95149
2021-01-25 09:17:30 +00:00
River Riddle 6ccf2d62b4 [mlir] Add an interface for Cast-Like operations
A cast-like operation is one that converts from a set of input types to a set of output types. The arity of the inputs may be from 0-N, whereas the arity of the outputs may be anything from 1-N. Cast-like operations are removable in cases where they produce a "no-op", i.e. when the input types and output types match 1-1.
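
A minimal sketch of such a removable "no-op" cast, using `memref_cast` with hypothetical types:
```
// Input and output types match 1-1, so the cast folds away and
// uses of %0 are replaced by %arg.
%0 = memref_cast %arg : memref<4xf32> to memref<4xf32>
```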

Differential Revision: https://reviews.llvm.org/D94831
2021-01-20 16:28:17 -08:00
Nicolas Vasilache 866cb26039 [mlir] Fix SubTensorInsertOp semantics
Like SubView, SubTensor/SubTensorInsertOp are allowed to have rank-reducing/expanding semantics. In the case of SubTensorInsertOp, the rank of offsets/sizes/strides should be the rank of the destination tensor.

Also, add a builder flavor for SubTensorOp to return a rank-reduced tensor.

Differential Revision: https://reviews.llvm.org/D95076
2021-01-20 20:16:01 +00:00
Jackson Fellows 1bf2b1665b Implement constant folding for DivFOp
Add a constant folder for DivFOp. Analogous to existing folders for
AddFOp, SubFOp, and MulFOp. Matches the behavior of existing LLVM
constant folding (999f5da6b3/llvm/lib/IR/ConstantFold.cpp (L1432)).
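A minimal sketch of the fold, with made-up constant values:
```
%a = constant 6.0 : f32
%b = constant 2.0 : f32
%0 = divf %a, %b : f32
// After constant folding, %0 is replaced by:
%0 = constant 3.0 : f32
```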

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D94939
2021-01-19 23:08:06 +00:00
Sean Silva be7352c00d [mlir][splitting std] move 2 more ops to `tensor`
- DynamicTensorFromElementsOp
- TensorFromElements

Differential Revision: https://reviews.llvm.org/D94994
2021-01-19 13:49:25 -08:00
Kazuaki Ishizaki f88fab5006 [mlir] NFC: fix trivial typos
Fix typos under the include and lib directories.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D94220
2021-01-08 02:10:12 +09:00
Nicolas Vasilache ed507bc4d5 [mlir] NFC - Fix SubViewOp printing
Avoiding a cast of the source operand type allows better debugging when conversion
patterns fail to produce a proper MemRefType.
2020-12-30 16:34:37 +00:00
nicolasvasilache b7ae1d3d2b [mlir][Linalg] Revisit the Linalg on tensors abstraction
This revision drops init_tensor arguments from Linalg on tensors and instead makes the handling of output buffers and output tensors uniform and consistent.
This significantly simplifies the usage of Linalg on tensors and is a stepping stone for
its evolution towards a mixed tensor and shape abstraction discussed in https://llvm.discourse.group/t/linalg-and-shapes/2421/19.

Differential Revision: https://reviews.llvm.org/D93469
2020-12-21 12:29:10 -08:00
Sean Silva 129d6e554e [mlir] Move `std.tensor_cast` -> `tensor.cast`.
This is almost entirely mechanical.

Differential Revision: https://reviews.llvm.org/D93357
2020-12-17 16:06:56 -08:00
MaheshRavishankar de031216bf [mlir] Add canonicalization from `tensor_cast` to `dim` op.
Fold a `tensor_cast` -> `dim` to take the `dim` of the original tensor.
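A rough sketch of the fold, with hypothetical types:
```
%0 = tensor_cast %arg : tensor<4x?xf32> to tensor<?x?xf32>
%c1 = constant 1 : index
%1 = dim %0, %c1 : tensor<?x?xf32>
// folds to:
%1 = dim %arg, %c1 : tensor<4x?xf32>
```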

Differential Revision: https://reviews.llvm.org/D93492
2020-12-17 14:45:51 -08:00
River Riddle 1b97cdf885 [mlir][IR][NFC] Move context/location parameters of builtin Type::get methods to the start of the parameter list
This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified.

Differential Revision: https://reviews.llvm.org/D93432
2020-12-17 13:01:36 -08:00
Sean Silva 444822d77a Revert "Revert "[mlir] Start splitting the `tensor` dialect out of `std`.""
This reverts commit 0d48d265db.

This reapplies the following commit, with a fix for CAPI/ir.c:

[mlir] Start splitting the `tensor` dialect out of `std`.

This starts by moving `std.extract_element` to `tensor.extract` (this
mirrors the naming of `vector.extract`).

Curiously, `std.extract_element` supposedly works on vectors as well,
and this patch removes that functionality. I would tend to do that in a
separate patch, but I couldn't find any downstream users relying on
this, and the fact that we have `vector.extract` made it seem safe
enough to lump in here.

This also sets up the `tensor` dialect as a dependency of the `std`
dialect, as some ops that currently live in `std` depend on
`tensor.extract` via their canonicalization patterns.

Part of RFC: https://llvm.discourse.group/t/rfc-split-the-tensor-dialect-from-std/2347/2

Differential Revision: https://reviews.llvm.org/D92991
2020-12-11 14:30:50 -08:00
Sean Silva 0d48d265db Revert "[mlir] Start splitting the `tensor` dialect out of `std`."
This reverts commit cab8dda90f.

I mistakenly thought that CAPI/ir.c failure was unrelated to this
change. Need to debug it.
2020-12-11 14:15:41 -08:00
Sean Silva cab8dda90f [mlir] Start splitting the `tensor` dialect out of `std`.
This starts by moving `std.extract_element` to `tensor.extract` (this
mirrors the naming of `vector.extract`).

Curiously, `std.extract_element` supposedly works on vectors as well,
and this patch removes that functionality. I would tend to do that in a
separate patch, but I couldn't find any downstream users relying on
this, and the fact that we have `vector.extract` made it seem safe
enough to lump in here.

This also sets up the `tensor` dialect as a dependency of the `std`
dialect, as some ops that currently live in `std` depend on
`tensor.extract` via their canonicalization patterns.

Part of RFC: https://llvm.discourse.group/t/rfc-split-the-tensor-dialect-from-std/2347/2

Differential Revision: https://reviews.llvm.org/D92991
2020-12-11 13:50:55 -08:00
River Riddle 1f5f006d9d [mlir][StandardOps] Verify that the result of an integer constant is signless
This was missed when support for unsigned/signed integer types was first added, and results in crashes if a user tries to create/print a constant with the incorrect integer type.
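A sketch of what the verifier now rejects versus accepts (illustrative values only):
```
// Rejected: the integer result type is not signless.
%0 = constant 42 : ui32
// Accepted: signless integer type.
%1 = constant 42 : i32
```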

Fixes PR#46222

Differential Revision: https://reviews.llvm.org/D92981
2020-12-10 12:40:10 -08:00
Christian Sigg 0bf4a82a5a [mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation. This is a preparation step to remove the corresponding methods from OpState.
Reviewed By: silvas, rriddle

Differential Revision: https://reviews.llvm.org/D92878
2020-12-09 12:11:32 +01:00
River Riddle 09f7a55fad [mlir][Types][NFC] Move all of the builtin Type classes to BuiltinTypes.h
This is part of a larger refactoring that better congregates the builtin structures under the BuiltinDialect. This also removes the problematic "standard" naming that clashes with the "standard" dialect, which is not defined within IR/. A temporary forward is placed in StandardTypes.h to allow time for downstream users to replace references.

Differential Revision: https://reviews.llvm.org/D92435
2020-12-03 18:02:10 -08:00
Christian Sigg c4a0405902 Add `Operation* OpState::operator->()` to provide more convenient access to members of Operation.
Given that OpState already implicitly converts to Operation*, this seems reasonable.

The alternative would be to add more functions to OpState which forward to Operation.

Reviewed By: rriddle, ftynse

Differential Revision: https://reviews.llvm.org/D92266
2020-12-02 15:46:20 +01:00
Nicolas Vasilache c247081025 [mlir] NFC - Refactor and expose a helper printOffsetSizesAndStrides helper function.
Print part of an op of the form:
```
  <optional-offset-prefix>`[` offset-list `]`
  <optional-size-prefix>`[` size-list `]`
  <optional-stride-prefix>`[` stride-list `]`
```

Also address some leftover nits.

Differential revision: https://reviews.llvm.org/D92031
2020-11-24 20:00:59 +00:00
Nicolas Vasilache b6c71c13a3 [mlir] NFC - Refactor and expose a parsing helper for OffsetSizeAndStrideInterface
Parse trailing part of an op of the form:
```
  <optional-offset-prefix>`[` offset-list `]`
  <optional-size-prefix>`[` size-list `]`
  <optional-stride-prefix>`[` stride-list `]`
```
Each entry in the offset, size and stride list either resolves to an integer
constant or an operand of index type.
Constants are added to the `result` as named integer array attributes with
name `OffsetSizeAndStrideOpInterface::getStaticOffsetsAttrName()` (resp.
`getStaticSizesAttrName()`, `getStaticStridesAttrName()`).

Append the number of offset, size and stride operands to `segmentSizes`
before adding it to `result` as the named attribute:
`OpTrait::AttrSizedOperandSegments<void>::getOperandSegmentSizeAttr()`.
Offset, size and stride operands resolution occurs after `preResolutionFn`
to give a chance to leading operands to resolve first, after parsing the
types.
```
ParseResult parseOffsetsSizesAndStrides(
    OpAsmParser &parser, OperationState &result, ArrayRef<int> segmentSizes,
    llvm::function_ref<ParseResult(OpAsmParser &, OperationState &)>
        preResolutionFn = nullptr,
    llvm::function_ref<ParseResult(OpAsmParser &)> parseOptionalOffsetPrefix =
        nullptr,
    llvm::function_ref<ParseResult(OpAsmParser &)> parseOptionalSizePrefix =
        nullptr,
    llvm::function_ref<ParseResult(OpAsmParser &)> parseOptionalStridePrefix =
        nullptr);
```

Differential revision: https://reviews.llvm.org/D92030
2020-11-24 19:45:16 +00:00
Nicolas Vasilache a8de412f51 [mlir] NFC - Expose an OffsetSizeAndStrideOpInterface
This revision will make it easier to create new ops based on the strided memref abstraction outside of the std dialect.

OffsetSizeAndStrideOpInterface is an interface for ops that allow specifying mixed dynamic and static offsets, sizes and strides variadic operands.
    Ops that implement this interface need to expose the following methods:
      1. `getArrayAttrRanks` to specify the length of static integer
          attributes.
      2. `offsets`, `sizes` and `strides` variadic operands.
      3. `static_offsets`, resp. `static_sizes` and `static_strides` integer
          array attributes.

    The invariants of this interface are:
      1. `static_offsets`, `static_sizes` and `static_strides` have length
          exactly `getArrayAttrRanks()`[0] (resp. [1], [2]).
      2. `offsets`, `sizes` and `strides` each have length at most
         `getArrayAttrRanks()`[0] (resp. [1], [2]).
      3. if an entry of `static_offsets` (resp. `static_sizes`,
         `static_strides`) is equal to a special sentinel value, namely
         `ShapedType::kDynamicStrideOrOffset` (resp. `ShapedType::kDynamicSize`,
         `ShapedType::kDynamicStrideOrOffset`), then the corresponding entry is
         a dynamic offset (resp. size, stride).
      4. a variadic `offset` (resp. `sizes`, `strides`) operand  must be present
         for each dynamic offset (resp. size, stride).

    This interface is useful to factor out common behavior and provide support
    for carrying or injecting static behavior through the use of the static
    attributes.

Differential Revision: https://reviews.llvm.org/D92011
2020-11-24 14:42:47 +00:00
Stephan Herhut a89e55ca57 [mlir][std] Canonicalize a dim(memref_reshape) into a load from the shape operand
This canonicalization helps propagate shape information through the program.

Differential Revision: https://reviews.llvm.org/D91854
2020-11-20 14:03:02 +01:00
Stephan Herhut 6af81ea1d6 [mlir][std] Fold load(tensor_to_memref) into extract_element
This canonicalization is useful to resolve loads into scalar values when
doing partial bufferization.
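A minimal sketch of the fold, with assumed types:
```
%m = tensor_to_memref %t : memref<?xf32>
%v = load %m[%i] : memref<?xf32>
// folds to:
%v = extract_element %t[%i] : tensor<?xf32>
```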

Differential Revision: https://reviews.llvm.org/D91855
2020-11-20 13:42:11 +01:00
Stephan Herhut cb778c3423 [mlir][std] Fold comparisons when the operands are equal
For equal operands, comparisons can be decided statically.
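For example, a sketch of the integer case:
```
%0 = cmpi "sge", %a, %a : i32
// folds to:
%0 = constant true
```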

Differential Revision: https://reviews.llvm.org/D91856
2020-11-20 13:26:41 +01:00
River Riddle 65fcddff24 [mlir][BuiltinDialect] Resolve comments from D91571
* Move ops to a BuiltinOps.h
* Add file comments
2020-11-19 11:12:49 -08:00
Stephan Herhut c4472f8b4c [mlir][std] Canonicalize extract_element(tensor_cast).
Canonicalize extract_element(tensor_cast(v)) to just extract_element(v).
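Sketch with hypothetical types:
```
%0 = tensor_cast %t : tensor<?xf32> to tensor<4xf32>
%1 = extract_element %0[%i] : tensor<4xf32>
// canonicalizes to:
%1 = extract_element %t[%i] : tensor<?xf32>
```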

Differential Revision: https://reviews.llvm.org/D91621
2020-11-17 14:41:39 +01:00
Stephan Herhut 3598605c0b [mlir][std] Fold dim(dynamic_tensor_from_elements, %cst)
The shape of the result of a dynamic_tensor_from_elements is defined via its
result type and operands. We already fold dim operations when they reference
one of the statically sized dimensions. Now, also fold dim on the dynamically
sized dimensions by picking the corresponding operand.

Differential Revision: https://reviews.llvm.org/D91616
2020-11-17 14:39:59 +01:00
River Riddle 73ca690df8 [mlir][NFC] Remove references to Module.h and Function.h
These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.

Differential Revision: https://reviews.llvm.org/D91572
2020-11-17 00:55:47 -08:00
Christian Sigg 5bdb21df21 [mlir] Use assemblyFormat in AllocLikeOp.
Split operands into dynamicSizes and symbolOperands.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D90589
2020-11-11 10:27:20 +01:00
Alex Zinenko 0c782c214b [mlir] Add folding of memref_cast inside another memref_cast
There exists a generic folding facility that folds the operand of a memref_cast
into users of memref_cast that support this. However, it was not used for the
memref_cast itself. Fix it to enable elimination of memref_cast chains such as

  %1 = memref_cast %0 : A to B
  %2 = memref_cast %1 : B to A

that is achieved by combining the folding with the existing "A to A" cast
elimination.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D90910
2020-11-06 10:42:40 +01:00
Alexandre Eichenberger 0795715616 [mlir][std] Add SignedCeilDivIOp and SignedFloorDivIOp with std to std lowering triggered by -std-expand-divs option. The new operations support positive/negative numerator/denominator numbers.
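
A sketch of the new operations, assuming the `ceildivi_signed`/`floordivi_signed` mnemonics:
```
%q = ceildivi_signed %a, %b : i32
%r = floordivi_signed %a, %b : i32
```
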
Differential Revision: https://reviews.llvm.org/D89726

Signed-off-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-11-04 14:16:23 -05:00
Rahul Joshi 63e72aa4f5 [MLIR] Remove NoSideEffect from std.global_memref op.
- Also spell "isUninitialized" correctly.

Differential Revision: https://reviews.llvm.org/D90768
2020-11-04 10:31:19 -08:00
Nicolas Vasilache 85ff2705cd [mlir][std] Add DimOp folding for dim(tensor_load(m)) -> dim(m).
Differential Revision: https://reviews.llvm.org/D90755
2020-11-04 13:06:22 +00:00
Rahul Joshi c254b0bb69 [MLIR] Introduce std.global_memref and std.get_global_memref operations.
- Add standard dialect operations to define global variables with memref types and to
  retrieve the memref for a named global variable
- Extend unit tests to test verification for these operations.

Differential Revision: https://reviews.llvm.org/D90337
2020-11-02 13:43:04 -08:00
River Riddle fa4174792a [mlir][Inliner] Add a `wouldBeCloned` flag to each of the `isLegalToInline` hooks.
Oftentimes the legality of inlining can change depending on whether the callable is going to be inlined in place or cloned. For example, some operations are not allowed to be duplicated and can only be inlined if the original callable will cease to exist afterwards. The new `wouldBeCloned` flag allows for dialects to hook into this when determining legality.

Differential Revision: https://reviews.llvm.org/D90360
2020-10-28 21:49:28 -07:00
River Riddle 501fda0167 [mlir][Inliner] Add a new hook for checking if it is legal to inline a callable into a call
In certain situations it isn't legal to inline a call operation, but this isn't something that is possible (at least not easily) to prevent with the current hooks. This revision adds a new hook so that dialects with call operations that shouldn't be inlined can prevent it.

Differential Revision: https://reviews.llvm.org/D90359
2020-10-28 21:49:28 -07:00
Alexander Belyaev 7a996027b9 [mlir] Convert memref_reshape to memref_reinterpret_cast.
Differential Revision: https://reviews.llvm.org/D90235
2020-10-28 21:15:32 +01:00
Alexander Belyaev 461605c418 [mlir] Add MemRefReinterpretCastOp definition to Standard.
Reuse most code for printing/parsing/verification from SubViewOp.

https://llvm.discourse.group/t/rfc-standard-memref-cast-ops/1454/15

Differential Revision: https://reviews.llvm.org/D89720
2020-10-22 15:17:22 +02:00
Alexander Belyaev d2ed2f16b8 [mlir] Add MemRefReshapeOp definition to Standard.
https://llvm.discourse.group/t/rfc-standard-memref-cast-ops/1454/15

Differential Revision: https://reviews.llvm.org/D89784
2020-10-22 13:29:13 +02:00
Geoffrey Martin-Noble ad0b2d9d46 Add llvm_unreachable to avoid MSVC warning
Without this I get a warning about not all paths returning.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D89760
2020-10-19 20:29:33 -07:00
River Riddle a8feeee15f [mlir] Add canonicalization for cond_br that feed into a cond_br on the same condition
```
   ...
   cond_br %cond, ^bb1(...), ^bb2(...)
 ...
 ^bb1: // has single predecessor
   ...
   cond_br %cond, ^bb3(...), ^bb4(...)
```

 ->

```
   ...
   cond_br %cond, ^bb1(...), ^bb2(...)
 ...
 ^bb1: // has single predecessor
   ...
   br ^bb3(...)
```

Differential Revision: https://reviews.llvm.org/D89604
2020-10-18 13:51:02 -07:00
River Riddle 71eeb5ec4d [mlir] Add a new SymbolUserOpInterface class
The initial goal of this interface is to fix the current problems with verifying symbol user operations, but can extend beyond that in the future. The current problems with the verification of symbol uses are:
* Extremely inefficient:
Most current symbol users perform the symbol lookup using the slow O(N) string compare methods, which can lead to extremely long verification times in large modules.
* Invalid/breaks the constraints of the verification pass:
If the symbol reference is not flat (and even if it is flat in some cases), a verifier for an operation is not permitted to touch the referenced operation because it may be in the process of being mutated by a different thread within the pass manager.

The new SymbolUserOpInterface exposes a method `verifySymbolUses` that will be invoked from the parent symbol table to allow for verifying the constraints of any referenced symbols. This method is passed a `SymbolTableCollection` to allow for O(1) lookups of any necessary symbol operation.

Differential Revision: https://reviews.llvm.org/D89512
2020-10-16 12:08:48 -07:00
Sean Silva ee491ac91e [mlir] Add std.tensor_to_memref op and teach the infra about it
The opposite of tensor_to_memref is tensor_load.
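
A minimal sketch of the pair and its basic folding, with a hypothetical type:
```
%m = tensor_to_memref %t : memref<4xf32>
%t2 = tensor_load %m : memref<4xf32>
// The round trip folds: uses of %t2 are replaced by %t.
```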

- Add some basic tensor_load/tensor_to_memref folding.
- Add source/target materializations to BufferizeTypeConverter.
- Add an example std bufferization pattern/pass that shows how the
  materializations work together (more std bufferization patterns to come
  in subsequent commits).
  - In coming commits, I'll document how to write composable
  bufferization passes/patterns and update the other in-tree
  bufferization passes to match this convention. The populate* functions
  will of course continue to be exposed for power users.

The naming on tensor_load/tensor_to_memref and their pretty forms are
not very intuitive. I'm open to any suggestions here. One key
observation is that the memref type must always be the one specified in
the pretty form, since the tensor type can be inferred from the memref
type but not vice-versa.

With this, I've been able to replace all my custom bufferization type
converters in npcomp with BufferizeTypeConverter!

Part of the plan discussed in:
https://llvm.discourse.group/t/what-is-the-strategy-for-tensor-memref-conversion-bufferization/1938/17

Differential Revision: https://reviews.llvm.org/D89437
2020-10-15 12:19:20 -07:00
Stephan Herhut 307124535f [mlir][standard] Fix parsing of scalar subview and canonicalize
Parsing of a scalar subview did not create the required static_offsets attribute.
This also adds support for folding scalar subviews away.

Differential Revision: https://reviews.llvm.org/D89467
2020-10-15 16:41:54 +02:00
Jakub Lichman e547b1e243 [mlir] Rank reducing subview conversion to LLVM
This commit adjusts SubViewOp lowering to take rank reduction into account.

Differential Revision: https://reviews.llvm.org/D88883
2020-10-08 13:47:22 +00:00
Jakub Lichman e7cf723051 [mlir] Added strides check to rank reducing subview verification
Added a missing strides check to the verification method of rank-reducing subview,
which enforces the strides specification for the resulting type.

Differential Revision: https://reviews.llvm.org/D88879
2020-10-08 08:39:07 +00:00
Nicolas Vasilache 346b9d1772 [mlir][Linalg] Canonicalize TensorCastOp away when it feeds a LinalgOp.
This canonicalization is the counterpart of MemRefCastOp -> LinalgOp but on tensors.

This is needed to properly canonicalize post linalg tiling on tensors.

Differential Revision: https://reviews.llvm.org/D88729
2020-10-05 14:48:21 +00:00
Benjamin Kramer 6e2b267d1c Promote transpose from linalg to standard dialect
While affine maps are part of the builtin memref type, there is very
limited support for manipulating them in the standard dialect. Add
transpose to the set of ops to complement the existing view/subview ops.
This is a metadata transformation that encodes the transpose into the
strides of a memref.

I'm planning to use this when lowering operations on strided memrefs,
using the transpose to remove the stride without adding a dependency on
linalg dialect.

Differential Revision: https://reviews.llvm.org/D88651
2020-10-05 10:58:20 +02:00
Stephen Neuendorffer 34d12c15f7 [MLIR] Better message for FuncOp type mismatch
Previously the actual types were not shown, which makes the message
difficult to grok in the context of long lowering chains.  Also, it
appears that there were no actual tests for this.

Differential Revision: https://reviews.llvm.org/D88318
2020-10-02 09:31:44 -07:00
Nicolas Vasilache 86b14d0969 [mlir] Attempt to appease gcc-5 const char* -> StringLiteral conversion issue 2020-10-02 10:53:48 -04:00
Nicolas Vasilache cf9503c1b7 [mlir] Add subtensor_insert operation
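
A rough sketch of the intended form, with hypothetical shapes (exact assembly may differ slightly):
```
%r = subtensor_insert %src into %dst[%off, 0] [4, 4] [1, 1]
    : tensor<4x4xf32> into tensor<8x8xf32>
```
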
Differential revision: https://reviews.llvm.org/D88657
2020-10-02 06:32:31 -04:00
Nicolas Vasilache 787bf5e383 [mlir] Add canonicalization for the `subtensor` op
Differential revision: https://reviews.llvm.org/D88656
2020-10-02 06:05:52 -04:00
Nicolas Vasilache e3de249a4c [mlir] Add a subtensor operation
This revision introduces a `subtensor` op, which is the counterpart of `subview` for a tensor operand. This also refactors the relevant pieces to allow reusing the `subview` implementation where appropriate.

This operation will be used to implement tiling for Linalg on tensors.
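A sketch of the op with mixed static and dynamic offsets, using hypothetical shapes:
```
%1 = subtensor %t[%off, 0] [4, 4] [1, 1] : tensor<8x8xf32> to tensor<4x4xf32>
```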
2020-10-02 05:35:30 -04:00
Jakub Lichman 14088a6f5d [mlir] Added support for rank reducing subviews
This commit adds support for subviews which enable reducing the resulting rank
by dropping static dimensions of size 1.

Differential Revision: https://reviews.llvm.org/D88534
2020-09-30 11:15:18 +00:00
Stephan Herhut 5e0ded2689 [mlir][Standard] Canonicalize chains of tensor_cast operations
Adds a pattern that replaces a chain of two tensor_cast operations by a single tensor_cast operation if doing so will not remove constraints on the shapes.
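Sketch of the pattern, with hypothetical shapes where no constraint is lost:
```
%1 = tensor_cast %t : tensor<?x?xf32> to tensor<4x?xf32>
%2 = tensor_cast %1 : tensor<4x?xf32> to tensor<4x8xf32>
// is replaced by a single cast:
%2 = tensor_cast %t : tensor<?x?xf32> to tensor<4x8xf32>
```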
2020-09-17 16:50:38 +02:00
Stephan Herhut c897a7fb3e [mlir][Standard] Add canonicalizer for dynamic_tensor_from_elements
This adds canonicalizers for

- extracting an element from a dynamic_tensor_from_elements
- propagating constant operands to the type of dynamic_tensor_from_elements

Differential Revision: https://reviews.llvm.org/D87525
2020-09-15 15:38:14 +02:00
Lubomir Litchev ef7a255c03 Add support for casting elements in vectors for certain Std dialect type conversion operations.
Added support to the Std dialect cast operations to perform casts on vector types when feasible.
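For example, a sketch with one of the affected cast ops and assumed vector types:
```
%0 = sitofp %v : vector<4xi32> to vector<4xf32>
```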

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D87410
2020-09-14 07:45:46 -07:00
Sean Silva 84a6da67e6 [mlir] Fix some edge cases around 0-element TensorFromElementsOp
This introduces a builder for the more general case that supports zero
elements (where the element type can't be inferred from the ValueRange,
since it might be empty).

Also, fix up some cases in ShapeToStandard lowering that hit this. It
happens very easily when dealing with shapes of 0-D tensors.

The SameOperandsAndResultElementType trait is redundant with the new
TypesMatchWith trait and prevented having zero elements.

Differential Revision: https://reviews.llvm.org/D87492
2020-09-11 10:58:35 -07:00
Frederik Gossen 018f6936db [MLIR][Standard] Simplify `tensor_from_elements`
Define assembly format and add required traits.

Differential Revision: https://reviews.llvm.org/D87366
2020-09-10 14:42:51 +00:00
Frederik Gossen 5106a8b8f8 [MLIR][Shape] Lower `shape_of` to `dynamic_tensor_from_elements`
Take advantage of the new `dynamic_tensor_from_elements` operation in `std`.
Instead of stack-allocated memory, we can now lower directly to a single `std`
operation.

Differential Revision: https://reviews.llvm.org/D86935
2020-09-09 07:55:13 +00:00
Frederik Gossen 133322d2e3 [MLIR][Standard] Update `tensor_from_elements` assembly format
Remove the redundant parentheses that are used for none of the other operation
formats.

Differential Revision: https://reviews.llvm.org/D86287
2020-09-09 07:45:46 +00:00
Frederik Gossen 136eb79a88 [MLIR][Standard] Add `dynamic_tensor_from_elements` operation
With `dynamic_tensor_from_elements` tensor values of dynamic size can be
created. The body of the operation essentially maps the index space to tensor
elements.

Declare SCF operations in the `scf` namespace to avoid name clash with the new
`std.yield` operation. Resolve ambiguities between `linalg/shape/std/scf.yield`
operations.
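
A rough sketch of the op (the exact region syntax may differ slightly; shapes and values are hypothetical):
```
%cst = constant 0.0 : f32
%t = dynamic_tensor_from_elements %m, %n {
^bb0(%i : index, %j : index):
  yield %cst : f32
} : tensor<?x?xf32>
```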

Differential Revision: https://reviews.llvm.org/D86276
2020-09-07 11:44:43 +00:00
River Riddle 431bb8b318 [mlir][ODS] Use c++ types for integer attributes of fixed width when possible.
Unsigned and Signless attributes use uintN_t and signed attributes use intN_t, where N is the fixed width. The 1-bit variants use bool.

Differential Revision: https://reviews.llvm.org/D86739
2020-09-01 13:43:32 -07:00
Mars Saxman d34df52377 Implement FPToUI and UIToFP ops in standard dialect
Add the unsigned complements to the existing FPToSI and SIToFP operations in the
standard dialect, with one-to-one lowerings to the corresponding LLVM operations.
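Sketch of the new ops:
```
%0 = fptoui %f : f32 to i32
%1 = uitofp %u : i32 to f32
```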

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D85557
2020-08-19 22:49:09 +02:00
Mehdi Amini de71b46a51 Add missing parsing for attributes to std.generic_atomic_rmw op
Fix llvm.org/pr47182

Differential Revision: https://reviews.llvm.org/D86030
2020-08-16 22:13:58 +00:00
Mehdi Amini 872bdc0be7 Remove unused static helper getMemRefTypeFromTensorType() (NFC) 2020-08-08 05:37:42 +00:00
Mehdi Amini 575b22b5d1 Revisit Dialect registration: require and store a TypeID on dialects
This patch moves the registration to a method in the MLIRContext: getOrCreateDialect<ConcreteDialect>()

This method requires the dialect to provide a static getDialectNamespace()
and store a TypeID on the Dialect itself, which allows lazily creating a
dialect when it is not yet loaded in the context.
As a side effect, it means that duplicated registration of the same
dialect is not an issue anymore.

To limit the boilerplate, TableGen dialect generation is modified to
emit the constructor entirely and invoke separately a "init()" method
that the user implements.

Differential Revision: https://reviews.llvm.org/D85495
2020-08-07 15:57:08 +00:00
Alexander Belyaev 9c94908320 [mlir] Add support for unranked case for `tensor_store` and `tensor_load` ops.

Differential Revision: https://reviews.llvm.org/D85518
2020-08-07 14:32:52 +02:00
Alexander Belyaev 3effc35015 [mlir] Lower DimOp to LLVM for unranked memrefs.
Differential Revision: https://reviews.llvm.org/D85361
2020-08-06 11:46:11 +02:00
Rahul Joshi 1d6a724aa1 [MLIR] Change FunctionType::get() and TupleType::get() to use TypeRange
- Moved TypeRange into its own header/cpp file, and added hashing support.
- Change FunctionType::get() and TupleType::get() to use TypeRange

Differential Revision: https://reviews.llvm.org/D85075
2020-08-04 12:43:40 -07:00
Stephan Herhut 823ffef009 [mlir][Standard] Allow unranked memrefs as operands to dim and rank
`std.dim` currently only accepts ranked memrefs and `std.rank` is limited to
tensors. This change allows unranked memrefs as operands to both.
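A sketch of the extended usage, with hypothetical values:
```
%c0 = constant 0 : index
%d = dim %m, %c0 : memref<*xf32>
%r = rank %m : memref<*xf32>
```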

Differential Revision: https://reviews.llvm.org/D84790
2020-07-29 14:42:58 +02:00
Rahul Joshi e2b716105b [MLIR] Add argument related API to Region
- Arguments of the first block of a region are considered region arguments.
- Add API on Region class to deal with these arguments directly instead of
  using the front() block.
- Changed several instances of existing code to use this API
- Fixes https://bugs.llvm.org/show_bug.cgi?id=46535

Differential Revision: https://reviews.llvm.org/D83599
2020-07-14 09:28:29 -07:00
Frederik Gossen 1ee0d22f26 [MLIR][Standard] Erase redundant assertions `std.assert`
Differential Revision: https://reviews.llvm.org/D83118
2020-07-14 10:09:39 +00:00
Alexander Belyaev 1ea289681a [mlir] Add ViewLikeOpInterface to std.memref_cast.
Summary: It is needed for BufferPlacement to work correctly.

Differential Revision: https://reviews.llvm.org/D83385
2020-07-08 14:32:23 +02:00
River Riddle 9db53a1827 [mlir][NFC] Remove usernames and google bug numbers from TODO comments.
These were largely leftover from when MLIR was a google project, and don't really follow LLVM guidelines.
2020-07-07 01:40:52 -07:00
Rahul Joshi ee394e6842 [MLIR] Add variadic isa<> for Type, Value, and Attribute
- Also adopt variadic llvm::isa<> in more places.
- Fixes https://bugs.llvm.org/show_bug.cgi?id=46445

Differential Revision: https://reviews.llvm.org/D82769
2020-06-29 15:04:48 -07:00
Alex Zinenko 42de94f839 [mlir] do not hardcode the name of the undefined function in the error message
The error message in the `std.constant` verifier for function-typed constants
had the name of the undefined function hardcoded to `bar`. Report the actual
name instead.

Differential Revision: https://reviews.llvm.org/D82666
2020-06-29 10:05:06 +02:00
Frederik Gossen e34b88309e [MLIR][Shape] Lower `shape_of` for unranked tensors
Lower `shape_of` for unranked tensors.
Materializes shape in stack-allocated memory.

Differential Revision: https://reviews.llvm.org/D82196
2020-06-25 08:50:45 +00:00
Frederik Gossen 6f2943fb19 [MLIR][Standard] Fix use of `dyn_cast_or_null`
The value may be a function argument in which case `getDefiningOp` will return a
`nullptr`.

Differential Revision: https://reviews.llvm.org/D81965
2020-06-16 21:06:15 +00:00
Frederik Gossen 0990f1a3ad [MLIR][Standard] Lower `std.dim` with dynamic dimension operand to LLVM
Implement the missing lowering from `std.dim` to the LLVM dialect in case of a
dynamic dimension.

Differential Revision: https://reviews.llvm.org/D81834
2020-06-16 20:57:42 +00:00
MaheshRavishankar 462e3ccdd0 [mlir][StandardDialect] Add some folding for operations in standard dialect.
Add the following canonicalization
- and(x, 1) -> x
- subi(x, 0) -> x
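
Sketch of the folds, with hypothetical operands:
```
%true = constant true
%0 = and %x, %true : i1       // folds to %x
%c0 = constant 0 : i32
%1 = subi %y, %c0 : i32       // folds to %y
```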

Differential Revision: https://reviews.llvm.org/D81534
2020-06-15 22:52:29 -07:00
Frederik Gossen 361f664850 [MLIR][Standard] Add documentation for `std.dim` and fix test cases
Apply post-commit suggestions (see https://reviews.llvm.org/D81551).
Add documentation, simplify, and fix test cases.

Differential Revision: https://reviews.llvm.org/D81722
2020-06-15 10:40:36 +00:00
Rahul Joshi a0dd5e876f [MLIR] Print function name when ReturnOp verification fails
Summary:
- Print function name when ReturnOp verification fails
- This helps easily finding the invalid ReturnOp in an IR dump.

Differential Revision: https://reviews.llvm.org/D81513
2020-06-10 17:22:49 -07:00
Rob Suderman 3d56f166bd [mlir][StandardOps] Updated IndexCastOp to support tensor<index> cast
Summary:
We now support index casting for tensor<index> to tensor<int>. This
better supports compatibility with the Shape dialect.

Differential Revision: https://reviews.llvm.org/D81611
2020-06-10 17:19:08 -07:00
Frederik Gossen 904f91db5f [MLIR][Standard] Make the `dim` operation index an operand.
Allow for dynamic indices in the `dim` operation.
Rather than an attribute, the index is now an operand of type `index`.
This allows to apply the operation to dynamically ranked tensors.
The correct lowering of dynamic indices remains to be implemented.

Differential Revision: https://reviews.llvm.org/D81551
2020-06-10 13:54:47 +00:00
Stephen Neuendorffer 698462336a [MLIR] expose applyCmpPredicate
This is useful for manipulating the standard dialect from transformations
outside of the standard dialect.

Differential Revision: https://reviews.llvm.org/D80609
2020-06-09 22:25:03 -07:00
River Riddle c0cd1f1c5c [mlir] Refactor BoolAttr to be a special case of IntegerAttr
This simplifies a lot of handling of BoolAttr/IntegerAttr. For example, a lot of places currently have to handle both IntegerAttr and BoolAttr. In other places, a decision is made to pick one which can lead to surprising results for users. For example, DenseElementsAttr currently uses BoolAttr for i1 even if the user initialized it with an Array of i1 IntegerAttrs.

Differential Revision: https://reviews.llvm.org/D81047
2020-06-04 16:41:24 -07:00
Alexander Belyaev c3098e4f40 [MLIR] Add TensorFromElementsOp to Standard ops.
Differential Revision: https://reviews.llvm.org/D80705
2020-05-28 15:48:10 +02:00
MaheshRavishankar 0e88eb5c51 [mlir][spirv] Adapt subview legalization to the updated op semantics.
The subview semantics changed recently to allow for a more natural
representation of constant offsets and strides. The legalization of the
subview op for lowering to SPIR-V needs to account for this.
Also change the linearization to use the strides from the affine map
of a memref.

Differential Revision: https://reviews.llvm.org/D80270
2020-05-20 12:00:21 -07:00
Benjamin Kramer 350dadaa8a Give helpers internal linkage. NFC. 2020-05-19 22:16:37 +02:00
Nicolas Vasilache 8b78c50e82 [mlir] Fix incorrect indexing of subview in DimOp folding.
DimOp folding is using bare accesses to underlying SubViewOp operands.
This is generally incorrect and is fixed in this revision.

Differential Revision: https://reviews.llvm.org/D80017
2020-05-15 13:50:40 -04:00
Nicolas Vasilache e0b99a5de4 [mlir] Add SubViewOp::getOrCreateRanges and fix folding pattern
The existing implementation of SubViewOp::getRanges relies on all
offsets/sizes/strides to be dynamic values and does not work in
combination with canonicalization. This revision adds a
SubViewOp::getOrCreateRanges to create the missing constants in the
canonicalized case.

This allows reactivating the fused pass with staged pattern
applications.

However another issue surfaces that the SubViewOp verifier is now too
strict to allow folding. The existing folding pattern is turned into a
canonicalization pattern which rewrites memref_cast + subview into
subview + memref_cast.

The transform-patterns-matmul-to-vector can then be reactivated.

Differential Revision: https://reviews.llvm.org/D79759
2020-05-13 10:11:30 -04:00
Nicolas Vasilache 63c0e72b2f [mlir] Revisit std.subview handling of static information.
The main objective of this revision is to change the way static information is represented, propagated and canonicalized in the SubViewOp.

In the current implementation the issue is that canonicalization may strictly lose information because static offsets are combined in irrecoverable ways into the result type, in order to fit the strided memref representation.

The core semantics of the op do not change but the parser and printer do: the op always requires `rank` offsets, sizes and strides. These quantities can now be either SSA values or static integer attributes.

The result type is automatically deduced from the static information and more powerful canonicalizations (as powerful as the representation with sentinel `?` values allows). Previously static information was inferred on a best-effort basis from looking at the source and destination type.

Relevant tests are rewritten to use the idiomatic `offset: x, strides : [...]`-form. Bugs are corrected along the way that were not trivially visible in flattened strided memref form.

Lowering to LLVM is updated, simplified and now supports all cases.
A mixed static-dynamic mode test that wouldn't previously lower is added.

It is an open question, and a longer discussion, whether a better result type representation would be a nicer alternative. For now, the subview op carries the required semantic.

Differential Revision: https://reviews.llvm.org/D79662
2020-05-12 20:04:44 -04:00
Sam McCall 691e826995 Revert "[mlir] Revisit std.subview handling of static information."
This reverts commit 80d133b24f.

Per Stephan Herhut: The canonicalizer pattern that was added creates
forms of the subview op that cannot be lowered.

This is shown by failing Tensorflow XLA tests such as:
  tensorflow/compiler/xla/service/mlir_gpu/tests:abs.hlo.test
Will provide more details offline, they rely on logs from private CI.
2020-05-12 15:18:50 +02:00
Nicolas Vasilache 80d133b24f [mlir] Revisit std.subview handling of static information.
Summary:
The main objective of this revision is to change the way static information is represented, propagated and canonicalized in the SubViewOp.

In the current implementation the issue is that canonicalization may strictly lose information because static offsets are combined in irrecoverable ways into the result type, in order to fit the strided memref representation.

The core semantics of the op do not change but the parser and printer do: the op always requires `rank` offsets, sizes and strides. These quantities can now be either SSA values or static integer attributes.

The result type is automatically deduced from the static information and more powerful canonicalizations (as powerful as the representation with sentinel `?` values allows). Previously static information was inferred on a best-effort basis from looking at the source and destination type.

Relevant tests are rewritten to use the idiomatic `offset: x, strides : [...]`-form. Bugs are corrected along the way that were not trivially visible in flattened strided memref form.

It is an open question, and a longer discussion, whether a better result type representation would be a nicer alternative. For now, the subview op carries the required semantic.

Reviewers: ftynse, mravishankar, antiagainst, rriddle!, andydavis1, timshen, asaadaldien, stellaraccident

Reviewed By: mravishankar

Subscribers: aartbik, bondhugula, mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, bader, grosul1, frgossen, Kayjukh, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79662
2020-05-11 17:44:24 -04:00
Sean Silva 98eead8186 [mlir][Value] Add v.getDefiningOp<OpTy>()
Summary:
This makes a common pattern of
`dyn_cast_or_null<OpTy>(v.getDefiningOp())` more concise.

Differential Revision: https://reviews.llvm.org/D79681
2020-05-11 12:55:27 -07:00
Nicolas Vasilache 6ed61a26c2 [mlir] Simplify and better document std.view semantics
This [discussion](https://llvm.discourse.group/t/viewop-isnt-expressive-enough/991/2) raised some concerns with ViewOp.

In particular, the handling of offsets is incorrect and does not match the op description.
Note that with an elemental type change, offsets cannot be part of the type in general because sizeof(srcType) != sizeof(dstType).

However, offset is a poorly chosen term for this purpose and is renamed to byte_shift.

Additionally, for all intents and purposes, trying to support non-identity layouts for this op does not bring expressive power but rather increases code complexity.

This revision simplifies the existing semantics and implementation.
This simplification effort is voluntarily restrictive and acts as a stepping stone towards supporting richer semantics: treat the non-common cases as YAGNI for now and reevaluate based on concrete use cases once a round of simplification has occurred.
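
A rough sketch of the simplified form, with hypothetical sizes: the source is a 1-D i8 memref and the first operand is the byte shift.
```
%v = view %buf[%byte_shift][%m, %n] : memref<2048xi8> to memref<?x?xf32>
```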

Differential revision: https://reviews.llvm.org/D79541
2020-05-11 12:29:23 -04:00