Commit Graph

55 Commits

Morten Borup Petersen 6c1436a9b0 [MLIR][SCF] Parenthesize multiple return types in scf.execute_region asm op
Previously, ExecuteRegionOps with multiple return values would fail a round-trip test due to missing parentheses around the types.
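
For illustration, a hedged sketch of the now round-trippable form (the `test.*` ops are placeholders, not part of this change):

```
%0:2 = scf.execute_region -> (i32, i64) {
  %a = "test.source_a"() : () -> i32
  %b = "test.source_b"() : () -> i64
  scf.yield %a, %b : i32, i64
}
```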

Differential Revision: https://reviews.llvm.org/D108402
2021-08-19 21:31:51 +01:00
Marcel Koester 0425332015 [mlir] Added new RegionBranchTerminatorOpInterface and adapted uses of hasTrait<ReturnLike>.
This CL adds a new RegionBranchTerminatorOpInterface to query information about operands that can be
passed to successor regions. Similar to the BranchOpInterface, it allows the involved operands to be
defined freely. However, in contrast to the BranchOpInterface, it expects an additional region
number to distinguish between the various use cases, which might require different operands to be
passed to different regions.

Moreover, we added new utility functions (namely getMutableRegionBranchSuccessorOperands and
getRegionBranchSuccessorOperands) to query (mutable) operand ranges for operations equipped with the
ReturnLike trait and/or implementing the newly added interface. This simplifies reasoning about
terminators in the scope of nested regions.

We also adjusted the SCF.ConditionOp to benefit from the newly added capabilities.
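
As a rough illustration of the terminator's role (a sketch, not taken from this change; `test.*` ops are placeholders), scf.condition forwards its trailing operands either to the "after" region or to the scf.while results:

```
%res = scf.while (%arg = %init) : (i32) -> i32 {
  %cond = "test.keep_going"(%arg) : (i32) -> i1
  // Region-branch terminator: %arg is forwarded to the "after" region or to %res.
  scf.condition(%cond) %arg : i32
} do {
^bb0(%cur: i32):
  %next = "test.step"(%cur) : (i32) -> i32
  scf.yield %next : i32
}
```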

Differential Revision: https://reviews.llvm.org/D105018
2021-07-26 06:39:31 +02:00
William S. Moses dfb34c0df9 [MLIR][SCF] Inline ExecuteRegion if parent can contain multiple blocks
The execute_region op is used to allow multiple blocks within SCF constructs. If the containing region allows multiple blocks, the execute_region is inlined into it.

Differential Revision: https://reviews.llvm.org/D104960
2021-06-30 10:03:22 -04:00
Stella Laurenzo 485cc55edf [mlir] Generate .cpp.inc files for dialects.
* Previously, we were only generating .h.inc files. We foresee the need to also generate implementations and this is a step towards that.
* Discussed in https://llvm.discourse.group/t/generating-cpp-inc-files-for-dialects/3732/2
* Deviates from the discussion above by generating a default constructor in the .cpp.inc file (and adding a tablegen bit that disables this in case it is user provided).
* Generating the destructor started as a way to flush out the missing includes (produces a link error), but it is a strict improvement on its own that is worth doing (i.e. by emitting key methods in the .cpp file, we root vtables in one translation unit, which is a non-controversial improvement).

Differential Revision: https://reviews.llvm.org/D105070
2021-06-29 20:10:30 +00:00
William S. Moses 2ab27758d5 Revert "[MLIR][SCF] Inline ExecuteRegion if parent can contain multiple blocks"
This reverts commit 5d6240b77e.

The commit was mistakenly landed without PR approval; it is being
reverted now and will be resubmitted.
2021-06-28 13:52:30 -04:00
William S. Moses 5d6240b77e [MLIR][SCF] Inline ExecuteRegion if parent can contain multiple blocks
The execute_region op is used to allow multiple blocks within SCF constructs. If the containing region allows multiple blocks, the execute_region is inlined into it.

Differential Revision: https://reviews.llvm.org/D104960
2021-06-28 13:09:22 -04:00
William S. Moses 44985872b8 [MLIR][SCF] Inline single block ExecuteRegionOp
This commit adds a canonicalization pattern which inlines any single-block execute_region op.
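
A hedged before/after sketch of the pattern (simplified IR, not copied from the patch):

```
// Before: single-block execute_region.
func @single(%a: i32) -> i32 {
  %r = scf.execute_region -> i32 {
    %sum = addi %a, %a : i32
    scf.yield %sum : i32
  }
  return %r : i32
}

// After canonicalization: the body is spliced into the parent block.
func @single(%a: i32) -> i32 {
  %sum = addi %a, %a : i32
  return %sum : i32
}
```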

Differential Revision: https://reviews.llvm.org/D104865
2021-06-24 13:15:26 -04:00
Uday Bondhugula 18c8c934d8 [MLIR] Introduce scf.execute_region op
Introduce the execute_region op that is able to hold a region which it
executes exactly once. The op encapsulates a CFG within itself while
isolating it from the surrounding control flow. Proposal discussed here:
https://llvm.discourse.group/t/introduce-std-inlined-call-op-proposal/282

execute_region enables one to inline a function without lowering out all
other higher level control flow constructs (affine.for/if, scf.for/if)
to the flat list of blocks / CFG form. It thus preserves the benefit of
transforms on higher-level control flow ops in the presence of
the inlined calls. The inlined calls continue to benefit from
propagation of SSA values across their top boundary. Functions won't
have to remain outlined until later than desired. Abstractions like
affine execute_regions and lambdas with implicit captures could be lowered
to this without first lowering out structured loops/ifs or outlining.
Two potential early use cases are: (1) an early inliner (which
can inline functions by introducing execute_region ops), (2) lowering of
an affine.execute_region, which cleanly maps to an scf.execute_region
when going from the affine dialect to the scf dialect.
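
A rough sketch of the kind of IR this enables (assumed syntax; `test.*` ops are placeholders): a small CFG is encapsulated inside a structured scf.for without flattening the loop itself:

```
scf.for %i = %lb to %ub step %c1 {
  %v = scf.execute_region -> i32 {
    %cond = "test.check"(%i) : (index) -> i1
    cond_br %cond, ^bb1, ^bb2
  ^bb1:
    %a = "test.a"() : () -> i32
    br ^bb3(%a : i32)
  ^bb2:
    %b = "test.b"() : () -> i32
    br ^bb3(%b : i32)
  ^bb3(%m: i32):
    scf.yield %m : i32
  }
  "test.use"(%v) : (i32) -> ()
}
```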

Differential Revision: https://reviews.llvm.org/D75837
2021-06-18 15:22:33 +05:30
MaheshRavishankar 621d93d263 [mlir][SCF] Remove empty else blocks of `scf.if` operations.
Differential Revision: https://reviews.llvm.org/D104273
2021-06-15 15:07:20 -07:00
Butygin 4184018253 [mlir][SCF] Canonicalize nested ParallelOp's
Differential Revision: https://reviews.llvm.org/D102799
2021-05-22 14:00:00 +03:00
William S. Moses f4a2dbfe29 [MLIR][SCF] Combine adjacent scf.if with same condition
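
A minimal sketch of the effect (assumed IR; `test.*` ops are placeholders):

```
scf.if %cond {
  "test.a"() : () -> ()
}
scf.if %cond {
  "test.b"() : () -> ()
}
// Canonicalizes to a single conditional:
scf.if %cond {
  "test.a"() : () -> ()
  "test.b"() : () -> ()
}
```
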
Differential Revision: https://reviews.llvm.org/D101798
2021-05-05 00:39:58 -04:00
William S. Moses 8e211bf1c8 [MLIR][SCF] Assume uses of the condition in the body of scf.while are true
Differential Revision: https://reviews.llvm.org/D101801
2021-05-04 11:39:07 -04:00
Lorenzo Chelini de94b1855c [mlir] Fix top-level comments (NFC) 2021-04-29 13:06:40 +02:00
William S. Moses ca27260701 [MLIR] Add SCF.if Condition Canonicalizations
Add two canonicalizations for scf.if.
  1) A canonicalization that allows users of a condition within an if to assume the condition
     is true if in the true region, etc.
  2) A canonicalization that removes yielded statements that are equivalent to the condition
     or its negation
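
A hedged sketch of canonicalization (1) (assumed IR; `test.use` is a placeholder):

```
scf.if %cond {
  "test.use"(%cond) : (i1) -> ()
}
// Within the then-region, %cond is known to hold, so this becomes:
scf.if %cond {
  %true = constant 1 : i1
  "test.use"(%true) : (i1) -> ()
}
```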

Differential Revision: https://reviews.llvm.org/D101012
2021-04-26 20:13:08 -04:00
Nicolas Vasilache 843f1fc825 [mlir][scf] Add scf.for + tensor.cast canonicalization pattern
Fold scf.for iter_arg/result pairs that go through an incoming/outgoing
tensor.cast op pair so as to pull the tensor.cast inside the scf.for:

```
  %0 = tensor.cast %t0 : tensor<32x1024xf32> to tensor<?x?xf32>
  %1 = scf.for %i = %c0 to %c1024 step %c32 iter_args(%iter_t0 = %0)
     -> (tensor<?x?xf32>) {
    %2 = call @do(%iter_t0) : (tensor<?x?xf32>) -> tensor<?x?xf32>
    scf.yield %2 : tensor<?x?xf32>
  }
  %2 = tensor.cast %1 : tensor<?x?xf32> to tensor<32x1024xf32>
  use_of(%2)
```

folds into:

```
  %0 = scf.for %arg2 = %c0 to %c1024 step %c32 iter_args(%arg3 = %arg0)
      -> (tensor<32x1024xf32>) {
    %2 = tensor.cast %arg3 : tensor<32x1024xf32> to tensor<?x?xf32>
    %3 = call @do(%2) : (tensor<?x?xf32>) -> tensor<?x?xf32>
    %4 = tensor.cast %3 : tensor<?x?xf32> to tensor<32x1024xf32>
    scf.yield %4 : tensor<32x1024xf32>
  }
  use_of(%0)
```

Differential Revision: https://reviews.llvm.org/D100661
2021-04-16 16:50:21 +00:00
Butygin eb31540066 [mlir] Canonicalize single-iteration ParallelOp
Differential Revision: https://reviews.llvm.org/D100248
2021-04-13 13:42:19 +03:00
Chris Lattner dc4e913be9 [PatternMatch] Big mechanical rename OwningRewritePatternList -> RewritePatternSet and insert -> add. NFC
This doesn't change APIs; it just cleans up the many in-tree uses of these
names to use the new preferred names. We'll keep the old names around for a
couple of weeks to help transitions.

Differential Revision: https://reviews.llvm.org/D99127
2021-03-22 17:20:50 -07:00
Butygin 5657f93e78 [mlir] Canonicalize IfOp with trivial `then` and `else` bodies to list of SelectOp's
* Do we need a threshold on the maximum number of Yield arguments processed (i.e., the maximum number of SelectOp's to be generated)?
* Had to modify some old IfOp tests to not get optimized by this pattern
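
A hedged sketch of the pattern (assumed IR):

```
%r = scf.if %cond -> (i32) {
  scf.yield %a : i32
} else {
  scf.yield %b : i32
}
// With trivial bodies this becomes a select:
%r = select %cond, %a, %b : i32
```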

Differential Revision: https://reviews.llvm.org/D98592
2021-03-20 12:18:49 +03:00
lorenzo chelini 4c782a24d9 [mlir] Fix typo in SCF.cpp (NFC) 2021-03-18 19:15:33 +01:00
lorenzo chelini 0a74a7161b [mlir] scf::ForOp: Drop iter arguments (and corresponding result) with no use
'ForOpIterArgsFolder' can now remove iterator arguments (and corresponding
results) with no use.

Example:

```
%cst = constant 32 : i32

%0:2 = scf.for %arg1 = %lb to %ub step %step iter_args(%arg2 = %arg0, %arg3 = %cst)
  -> (i32, i32) {
  %1 = addi %arg2, %cst : i32
  scf.yield %1, %1 : i32, i32
}

use(%0#0)

```

%arg3 is not used in the block, and its corresponding result `%0#1` has no use,
so the iter argument and its result are removed.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98711
2021-03-17 12:06:17 +00:00
Lorenzo Chelini fd7eee64c5 scf::ForOp: Fold away iterator arguments with no use and for which the corresponding input is yielded
Enhance 'ForOpIterArgsFolder' to remove unused iteration arguments in a
scf::ForOp. If the block argument corresponding to the given iterator has no
use and the yielded value equals the input, we fold it away.
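
A hedged sketch (assumed IR): %b is unused in the body and the yield returns the original input %y, so the pair can be dropped and uses of the second result become %y:

```
%res:2 = scf.for %i = %lb to %ub step %step iter_args(%a = %x, %b = %y) -> (i32, i32) {
  %next = addi %a, %a : i32
  scf.yield %next, %y : i32, i32
}
// Folds to:
%res = scf.for %i = %lb to %ub step %step iter_args(%a = %x) -> (i32) {
  %next = addi %a, %a : i32
  scf.yield %next : i32
}
```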

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98503
2021-03-16 07:01:25 +00:00
Julian Gross e2310704d8 [MLIR] Create memref dialect and move dialect-specific ops from std.
Create the memref dialect and move dialect-specific ops
from std dialect to this dialect.

Moved ops:
AllocOp -> MemRef_AllocOp
AllocaOp -> MemRef_AllocaOp
AssumeAlignmentOp -> MemRef_AssumeAlignmentOp
DeallocOp -> MemRef_DeallocOp
DimOp -> MemRef_DimOp
MemRefCastOp -> MemRef_CastOp
MemRefReinterpretCastOp -> MemRef_ReinterpretCastOp
GetGlobalMemRefOp -> MemRef_GetGlobalOp
GlobalMemRefOp -> MemRef_GlobalOp
LoadOp -> MemRef_LoadOp
PrefetchOp -> MemRef_PrefetchOp
ReshapeOp -> MemRef_ReshapeOp
StoreOp -> MemRef_StoreOp
SubViewOp -> MemRef_SubViewOp
TransposeOp -> MemRef_TransposeOp
TensorLoadOp -> MemRef_TensorLoadOp
TensorStoreOp -> MemRef_TensorStoreOp
TensorToMemRefOp -> MemRef_BufferCastOp
ViewOp -> MemRef_ViewOp

The roadmap to split the memref dialect from std is discussed here:
https://llvm.discourse.group/t/rfc-split-the-memref-dialect-from-std/2667

Differential Revision: https://reviews.llvm.org/D98041
2021-03-15 11:14:09 +01:00
Nicolas Vasilache 35908406dc [mlir][scf] Canonicalize scf.for last tensor iteration result.
Canonicalize the iter_args of an scf::ForOp that involve a tensor_load and
for which only the last loop iteration is actually visible outside of the
loop. The canonicalization looks for a pattern such as:
```
   %t0 = ... : tensor_type
   %0 = scf.for ... iter_args(%bb0 : %t0) -> (tensor_type) {
     ...
     // %m is either tensor_to_memref(%bb0) or defined above the loop
     %m... : memref_type
     ... // uses of %m with potential inplace updates
     %new_tensor = tensor_load %m : memref_type
     ...
     scf.yield %new_tensor : tensor_type
   }
```

`%bb0` may have either 0 or 1 use. If it has 1 use it must be exactly a
`%m = tensor_to_memref %bb0` op that feeds into the yielded `tensor_load`
op.

If no aliasing write of `%new_tensor` occurs between tensor_load and yield
then the value %0 visible outside of the loop is the last `tensor_load`
produced in the loop.

For now, we approximate the absence of aliasing by only supporting the case
when the tensor_load is the operation immediately preceding the yield.

The canonicalization rewrites the pattern as:
```
   // %m is either a tensor_to_memref or defined above
   %m... : memref_type
   scf.for ... { // no iter_args
     ... // uses of %m with potential inplace updates
   }
   %0 = tensor_load %m : memref_type
```

Differential revision: https://reviews.llvm.org/D97953
2021-03-05 09:42:19 +00:00
Christian Sigg 8c074cb0b7 [mlir] Mark OpState::getAttrs() deprecated.
Fix call sites.

The method will be removed 2 weeks later.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97464
2021-02-25 20:54:42 +01:00
Nicolas Vasilache 551ba72760 [mlir] NFC - Use declarative assembly for scf::YieldOp 2021-02-23 11:17:30 +00:00
Tres Popp 787d771dce [mlir] Don't return nullptrs from scf::IfOp::getSuccessorRegions
Previously this might happen if there was no elseRegion and the method
was asked for all successor regions.

Differential Revision: https://reviews.llvm.org/D96764
2021-02-16 12:06:30 +01:00
Lei Zhang 5b7619c90b [mlir] Fix scf.for single iteration canonicalization check
We should check whether lb + step >= ub to determine
whether this is a single iteration. Previously we were
checking lb + lb >= ub.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D95440
2021-02-02 18:30:50 -05:00
Lei Zhang a2e791e396 Revert "[mlir] Fix scf.for single iteration canonicalization check"
This reverts commit b2b35697dc.
It was accidentally landed before LGTM.
2021-02-02 11:13:39 -05:00
Lei Zhang b2b35697dc [mlir] Fix scf.for single iteration canonicalization check
We should check whether lb + step >= ub to determine
whether this is a single iteration. Previously we were
checking lb + lb >= ub.

Differential Revision: https://reviews.llvm.org/D95440
2021-02-02 11:08:56 -05:00
Christian Sigg 0bf4a82a5a [mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation. This is a preparation step to remove the corresponding methods from OpState.
Reviewed By: silvas, rriddle

Differential Revision: https://reviews.llvm.org/D92878
2020-12-09 12:11:32 +01:00
Christian Sigg c4a0405902 Add `Operation* OpState::operator->()` to provide more convenient access to members of Operation.
Given that OpState already implicitly converts to Operation*, this seems reasonable.

The alternative would be to add more functions to OpState which forward to Operation.

Reviewed By: rriddle, ftynse

Differential Revision: https://reviews.llvm.org/D92266
2020-12-02 15:46:20 +01:00
Alex Zinenko 31a233d463 [mlir] canonicalize away zero-iteration SCF for loops
An SCF 'for' loop does not iterate if its lower bound is equal to its upper
bound. Remove loops where both bounds are the same SSA value as such bounds are
guaranteed to be equal. Similarly, remove 'parallel' loops where at least one
pair of respective lower/upper bounds is specified by the same SSA value.
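
A minimal sketch (assumed IR): both bounds are the same SSA value, so the loop never runs and is erased:

```
scf.for %i = %lb to %lb step %c1 {
  "test.op"(%i) : (index) -> ()
}
// The loop body is dead; the op is removed entirely.
```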

Reviewed By: gysit

Differential Revision: https://reviews.llvm.org/D91880
2020-11-23 15:04:31 +01:00
Alex Zinenko 18d0f7d5c3 [mlir] add canonicalization patterns for trivial SCF 'for' and 'if'
Add canonicalization patterns to remove zero-iteration 'for' loops, replace
single-iteration 'for' loops with their bodies; remove known-false conditionals
with no 'else' branch; and replace conditionals with a known condition by the
respective region. Although similar transformations are performed at the CFG
level, not all flows reach that level, e.g., the GPU flow may want to remove
single-iteration loops before deciding on loop mapping to thread dimensions.
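
A hedged sketch of the single-iteration case (assumed IR; the constant bounds make the trip count statically known to be one):

```
scf.for %i = %c0 to %c1 step %c1 {
  "test.op"(%i) : (index) -> ()
}
// Replaced by the loop body, with %i substituted by the lower bound:
"test.op"(%c0) : (index) -> ()
```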

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D91865
2020-11-20 19:04:39 +01:00
Nicolas Vasilache 7625742237 [mlir][Linalg] Add support for tileAndDistribute on tensors.
scf.parallel is currently not a good fit for tiling on tensors.
Instead provide a path to parallelism directly through scf.for.
For now, this transformation ignores the distribution scheme and always does a block-cyclic mapping (where block is the tile size).

Differential revision: https://reviews.llvm.org/D90475
2020-11-16 11:12:50 +00:00
Eugene Zhulenev bb0d5f767d [mlir] Add NumberOfExecutions analysis + update RegionBranchOpInterface interface to query number of region invocations
Implements RFC discussed in: https://llvm.discourse.group/t/rfc-operationinstancesinterface-or-any-better-name/2158/10

Reviewed By: silvas, ftynse, rriddle

Differential Revision: https://reviews.llvm.org/D90922
2020-11-11 01:43:17 -08:00
Nicolas Vasilache f202d32216 [mlir][SCF] Add canonicalization pattern for scf::For to eliminate yields that just forward.
For instance:
```
func @for_yields_3(%lb : index, %ub : index, %step : index) -> (i32, i32, i32) {
  %a = call @make_i32() : () -> (i32)
  %b = call @make_i32() : () -> (i32)
  %r:3 = scf.for %i = %lb to %ub step %step iter_args(%0 = %a, %1 = %a, %2 = %b) -> (i32, i32, i32) {
    %c = call @make_i32() : () -> (i32)
    scf.yield %0, %c, %2 : i32, i32, i32
  }
  return %r#0, %r#1, %r#2 : i32, i32, i32
}
```

Canonicalizes as:
```
  func @for_yields_3(%arg0: index, %arg1: index, %arg2: index) -> (i32, i32, i32) {
    %0 = call @make_i32() : () -> i32
    %1 = call @make_i32() : () -> i32
    %2 = scf.for %arg3 = %arg0 to %arg1 step %arg2 iter_args(%arg4 = %0) -> (i32) {
      %3 = call @make_i32() : () -> i32
      scf.yield %3 : i32
    }
    return %0, %2, %1 : i32, i32, i32
  }
```

Differential Revision: https://reviews.llvm.org/D90745
2020-11-04 11:36:27 +00:00
Alex Zinenko 79716559b5 [mlir] Add a generic while/do-while loop to the SCF dialect
The new construct represents a generic loop with two regions: one executed
before the loop condition is verified and another after it. This construct
can be used to express both a "while" loop and a "do-while" loop, depending on
where the main payload is located. It is intended as an intermediate
abstraction for lowering, which will be added later. This form is relatively
easy to target from higher-level abstractions and supports transformations such
as loop rotation and LICM.
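
A hedged sketch (assumed syntax; `test.*` ops are placeholders) of the do-while form, where the payload sits in the region executed before the condition is checked:

```
%final = scf.while (%x = %init) : (i32) -> i32 {
  // Payload runs every iteration, before the condition is evaluated.
  %next = "test.body"(%x) : (i32) -> i32
  %cond = "test.keep_going"(%next) : (i32) -> i1
  scf.condition(%cond) %next : i32
} do {
^bb0(%y: i32):
  // Trivial "after" region: this is effectively a do-while loop.
  scf.yield %y : i32
}
```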

Differential Revision: https://reviews.llvm.org/D90255
2020-11-04 09:43:13 +01:00
River Riddle fa4174792a [mlir][Inliner] Add a `wouldBeCloned` flag to each of the `isLegalToInline` hooks.
Oftentimes the legality of inlining can change depending on whether the callable is going to be inlined in-place or cloned. For example, some operations are not allowed to be duplicated and can only be inlined if the original callable will cease to exist afterwards. The new `wouldBeCloned` flag allows dialects to hook into this when determining legality.

Differential Revision: https://reviews.llvm.org/D90360
2020-10-28 21:49:28 -07:00
Tobias Gysi 93377888ae [mlir] add scf.if op canonicalization pattern that removes unused results
The patch adds a canonicalization pattern that removes the unused results of scf.if operation. As a result, cse may remove unused computations in the then and else regions of the scf.if operation.
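
A hedged sketch (assumed IR; `test.use` is a placeholder): only the first result is used, so the second result and its yields are dropped:

```
%r:2 = scf.if %cond -> (i32, i32) {
  scf.yield %a, %b : i32, i32
} else {
  scf.yield %c, %d : i32, i32
}
"test.use"(%r#0) : (i32) -> ()
// Canonicalizes to:
%r = scf.if %cond -> (i32) {
  scf.yield %a : i32
} else {
  scf.yield %c : i32
}
"test.use"(%r) : (i32) -> ()
```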

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D89029
2020-10-11 10:40:28 +02:00
Federico Lebrón 7d1ed69c8a Make namespace handling uniform across dialect backends.
Now backends spell out which namespace they want to be in, instead of relying on
clients #including them inside already-opened namespaces. This also means that
cppNamespaces should be fully qualified, and there's no implicit "::mlir::"
prepended to them anymore.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D86811
2020-09-14 20:33:31 +00:00
Frederik Gossen 136eb79a88 [MLIR][Standard] Add `dynamic_tensor_from_elements` operation
With `dynamic_tensor_from_elements` tensor values of dynamic size can be
created. The body of the operation essentially maps the index space to tensor
elements.

Declare SCF operations in the `scf` namespace to avoid name clash with the new
`std.yield` operation. Resolve ambiguities between `linalg/shape/std/scf.yield`
operations.

Differential Revision: https://reviews.llvm.org/D86276
2020-09-07 11:44:43 +00:00
Mehdi Amini 575b22b5d1 Revisit Dialect registration: require and store a TypeID on dialects
This patch moves the registration to a method in the MLIRContext: getOrCreateDialect<ConcreteDialect>()

This method requires dialect to provide a static getDialectNamespace()
and store a TypeID on the Dialect itself, which allows a dialect to be
lazily created when it is not yet loaded in the context.
As a side effect, it means that duplicated registration of the same
dialect is not an issue anymore.

To limit the boilerplate, TableGen dialect generation is modified to
emit the constructor entirely and invoke separately a "init()" method
that the user implements.

Differential Revision: https://reviews.llvm.org/D85495
2020-08-07 15:57:08 +00:00
Rahul Joshi a3ad8f92b4 [MLIR] Add type checking capability to RegionBranchOpInterface
- Add function `verifyTypes` that Op's can call to do type checking verification
  along the control flow edges described by the Op's RegionBranchOpInterface.
- We cannot rely on the verify methods on the OpInterface because the interface
  functions assume valid Ops, so they may crash if invoked on unverified Ops.
  (For example, scf.for getSuccessorRegions() calls getRegionIterArgs(), which
  dereferences getBody() block. If the scf.for is invalid with no body, this
  can lead to a segfault). `verifyTypes` can be called post op-verification to
  avoid this.

Differential Revision: https://reviews.llvm.org/D82829
2020-07-15 11:14:07 -07:00
Rahul Joshi 52af9c59e3 [MLIR] Add a NoRegionArguments trait
- This trait will verify that all regions attached to an Op have no arguments
- Fixes https://bugs.llvm.org/show_bug.cgi?id=46521 : Add trait NoRegionArguments

Differential Revision: https://reviews.llvm.org/D83016
2020-07-06 09:05:38 -07:00
Tobias Gysi 48f1d4fcd2 [mlir] parallel loop canonicalization
Summary:
The patch introduces a canonicalization pattern for parallel loops. The pattern removes single-iteration loop dimensions if the loop bounds and steps are constants.
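
A hedged sketch (assumed IR; %c0/%c1/%c8 are index constants, `test.op` is a placeholder): the first dimension has exactly one iteration, so it is removed and its induction variable is replaced by the lower bound:

```
scf.parallel (%i, %j) = (%c0, %c0) to (%c1, %c8) step (%c1, %c1) {
  "test.op"(%i, %j) : (index, index) -> ()
}
// Becomes:
scf.parallel (%j) = (%c0) to (%c8) step (%c1) {
  "test.op"(%c0, %j) : (index, index) -> ()
}
```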

Reviewers: herhut, ftynse

Reviewed By: herhut

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, stephenneuendorffer, Joonsoo, grosul1, Kayjukh, jurahul, msifontes

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D82191
2020-06-26 09:57:08 +02:00
Rahul Joshi d891d738d9 [MLIR][NFC] Adopt variadic isa<>
Differential Revision: https://reviews.llvm.org/D82489
2020-06-24 17:02:44 -07:00
Feng Liu 5048933c47 [mlir] Added the dialect inliner to the SCF dialect
Currently no restrictions are added to the destination regions.

Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, aartbik, stephenneuendorffer, Joonsoo, grosul1, Kayjukh, jurahul, msifontes

Tags: #mlir

Differential Revision: https://reviews.llvm.org/D82336
2020-06-23 10:49:05 -07:00
Alex Zinenko 3adced3494 [mlir] Introduce callback-based builders to SCF Parallel and Reduce ops
Similarly to `scf::ForOp`, introduce additional `function_ref` arguments to
`::build` functions of SCF `ParallelOp` and `ReduceOp`. The provided functions
will be called to construct the body of the respective operations while
constructing the operation itself. Exercise them in LoopUtils.

Differential Revision: https://reviews.llvm.org/D81872
2020-06-16 20:51:32 +02:00
River Riddle c0cd1f1c5c [mlir] Refactor BoolAttr to be a special case of IntegerAttr
This simplifies a lot of handling of BoolAttr/IntegerAttr. For example, a lot of places currently have to handle both IntegerAttr and BoolAttr. In other places, a decision is made to pick one which can lead to surprising results for users. For example, DenseElementsAttr currently uses BoolAttr for i1 even if the user initialized it with an Array of i1 IntegerAttrs.

Differential Revision: https://reviews.llvm.org/D81047
2020-06-04 16:41:24 -07:00
Alex Zinenko 195d8571b9 [mlir] post-commit review fixes
This fixes several post-commit nits from D79688 and D80135, namely
typos, debug output and control flow inversion.
2020-06-02 12:08:17 +02:00