Commit Graph

119 Commits

Author SHA1 Message Date
River Riddle 09f7a55fad [mlir][Types][NFC] Move all of the builtin Type classes to BuiltinTypes.h
This is part of a larger refactoring that better consolidates the builtin structures under the BuiltinDialect. This also removes the problematic "standard" naming that clashes with the "standard" dialect, which is not defined within IR/. A temporary forward is placed in StandardTypes.h to allow time for downstream users to replace references.

Differential Revision: https://reviews.llvm.org/D92435
2020-12-03 18:02:10 -08:00
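
    For downstream users, the migration this enables is essentially an include swap; a minimal sketch:

    ```cpp
    // Before: builtin types lived here; the header now only forwards.
    // #include "mlir/IR/StandardTypes.h"

    // After: include the new home of the builtin Type classes.
    #include "mlir/IR/BuiltinTypes.h"
    ```
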
Jacques Pienaar e534cee26a [mlir] Add a shape function library op
Op with a mapping from ops to the corresponding shape functions for those ops
in the library, and a mechanism to associate shape functions with functions.
The mapping of operation to shape function is kept separate from the shape
functions themselves, as the operation is associated with the shape
function and not vice versa, and one could have a common library of
shape functions that can be used in different contexts.

For now, use fully qualified names and require a name for shape function
library ops, with an explicit print/parse (based around the generated one
and the GPU module op's).

This commit reverts d9da4c3e73. Fixes
missing headers (don't know how that was working locally).

Differential Revision: https://reviews.llvm.org/D91672
2020-11-29 11:15:30 -08:00
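
    A rough sketch of such a library op (mirroring the commit's direction; the shape function and the mapped op are illustrative):

    ```mlir
    shape.function_library @shape_lib {
      // A shape function that simply forwards its argument's shape.
      func @same_result_shape(%arg : !shape.value_shape) -> !shape.shape {
        %0 = shape.shape_of %arg : !shape.value_shape -> !shape.shape
        return %0 : !shape.shape
      }
    } mapping {
      std.atan = @same_result_shape
    }
    ```
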
Mehdi Amini d9da4c3e73 Revert "[mlir] Add a shape function library op"
This reverts commit 6dd9596b19.

Build is broken.
2020-11-29 05:28:42 +00:00
Jacques Pienaar 6dd9596b19 [mlir] Add a shape function library op
Op with a mapping from ops to the corresponding shape functions for those ops
in the library, and a mechanism to associate shape functions with functions.
The mapping of operation to shape function is kept separate from the shape
functions themselves, as the operation is associated with the shape
function and not vice versa, and one could have a common library of
shape functions that can be used in different contexts.

For now, use fully qualified names and require a name for shape function
library ops, with an explicit print/parse (based around the generated one
and the GPU module op's).

Differential Revision: https://reviews.llvm.org/D91672
2020-11-28 15:53:59 -08:00
River Riddle fa4174792a [mlir][Inliner] Add a `wouldBeCloned` flag to each of the `isLegalToInline` hooks.
Oftentimes the legality of inlining can change depending on whether the callable is going to be inlined in-place or cloned. For example, some operations are not allowed to be duplicated and can only be inlined if the original callable will cease to exist afterwards. The new `wouldBeCloned` flag allows dialects to hook into this when determining legality.

Differential Revision: https://reviews.llvm.org/D90360
2020-10-28 21:49:28 -07:00
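
    A sketch of how a dialect might use the new flag (dialect and struct names illustrative; the hook is the `DialectInlinerInterface` one described above):

    ```cpp
    #include "mlir/Transforms/InliningUtils.h"

    using namespace mlir;

    struct MyDialectInlinerInterface : public DialectInlinerInterface {
      using DialectInlinerInterface::DialectInlinerInterface;

      bool isLegalToInline(Region *dest, Region *src, bool wouldBeCloned,
                           BlockAndValueMapping &valueMapping) const final {
        // Ops in this dialect must not be duplicated, so inlining is only
        // legal when the original callable ceases to exist afterwards.
        return !wouldBeCloned;
      }
    };
    ```
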
Tres Popp ffdd4a46a9 [mlir] Shape.AssumingOp implements RegionBranchOpInterface.
This adds support for the interface and provides unambiguous information
on the control flow, as it is unconditional on any runtime values.
The code is tested through confirming that buffer-placement behaves as
expected.

Differential Revision: https://reviews.llvm.org/D87894
2020-09-21 11:33:11 +02:00
Sean Silva bae6374205 [mlir][shape] Add `shape.cstr_require %bool`
This op is a catch-all for creating witnesses from various random kinds
of constraints. In particular, when dealing with extents directly, which
are of `index` type, one can directly use std ops for calculating the
predicates, and then use cstr_require for the final conversion to a
witness.

Differential Revision: https://reviews.llvm.org/D87871
2020-09-17 16:56:43 -07:00
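
    A sketch of the intended pattern (operand names illustrative; `cmpi` shown with the std syntax of the time):

    ```mlir
    // Compute a predicate on index-typed extents with std ops, then
    // convert it into a witness.
    %pred = cmpi "eq", %extent0, %extent1 : index
    %witness = shape.cstr_require %pred
    ```
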
Tres Popp b05629230e [mlir] Remove redundant shape.cstr_broadcastable canonicalization.
These canonicalizations are already handled by folding which will occur
in a superset of situations, so they are being removed.

Differential Revision: https://reviews.llvm.org/D87706
2020-09-17 09:01:13 +02:00
Federico Lebrón 7d1ed69c8a Make namespace handling uniform across dialect backends.
Now backends spell out which namespace they want to be in, instead of relying on
clients #including them inside already-opened namespaces. This also means that
cppNamespaces should be fully qualified, and there's no implicit "::mlir::"
prepended to them anymore.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D86811
2020-09-14 20:33:31 +00:00
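
    A sketch of what this means for a dialect's ODS definition (the shape dialect shown for illustration):

    ```tablegen
    // cppNamespace must now be fully qualified; "::mlir::" is no longer
    // implicitly prepended.
    def ShapeDialect : Dialect {
      let name = "shape";
      let cppNamespace = "::mlir::shape";
    }
    ```
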
Frederik Gossen 136eb79a88 [MLIR][Standard] Add `dynamic_tensor_from_elements` operation
With `dynamic_tensor_from_elements`, tensor values of dynamic size can be
created. The body of the operation essentially maps the index space to
tensor elements.

Declare SCF operations in the `scf` namespace to avoid name clash with the new
`std.yield` operation. Resolve ambiguities between `linalg/shape/std/scf.yield`
operations.

Differential Revision: https://reviews.llvm.org/D86276
2020-09-07 11:44:43 +00:00
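
    A sketch of the op for a rank-1 result (body illustrative; one operand per dynamic dimension, and the block maps indices to elements):

    ```mlir
    %tnsr = dynamic_tensor_from_elements %size {
    ^bb0(%i : index):
      %elem = index_cast %i : index to i64
      yield %elem : i64
    } : tensor<?xi64>
    ```
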
River Riddle 65277126bf [mlir][Type] Remove the remaining usages of Type::getKind in preparation for its removal
This revision removes all of the lingering usages of Type::getKind. A consequence of this is that FloatType is now split into 4 derived types that represent each of the possible float types (BFloat16Type, Float16Type, Float32Type, and Float64Type). Other than this split, this revision is NFC.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D85566
2020-08-12 19:33:58 -07:00
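
    A sketch of the resulting usage pattern (illustrative helper; the four derived classes are the ones named above):

    ```cpp
    #include "mlir/IR/StandardTypes.h" // Home of the float types at the time.

    using namespace mlir;

    // Distinguish float types with isa<> instead of Type::getKind().
    static unsigned getFloatBitwidth(Type type) {
      if (type.isa<BFloat16Type>() || type.isa<Float16Type>())
        return 16;
      if (type.isa<Float32Type>())
        return 32;
      if (type.isa<Float64Type>())
        return 64;
      return 0; // Not a float type handled here.
    }
    ```
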
Feng Liu 5c9c4ade9d Add the inline interface to the shape dialect
This patch also fixes a minor issue: shape.rank should allow returning
!shape.size. The dialect doc has such an example for shape.rank.

Differential Revision: https://reviews.llvm.org/D85556
2020-08-07 23:29:43 -07:00
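
    The dialect doc example referenced above is along these lines (sketch; the exact asm may differ):

    ```mlir
    %rank = shape.rank %shape : !shape.shape -> !shape.size
    ```
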
Mehdi Amini 575b22b5d1 Revisit Dialect registration: require and store a TypeID on dialects
This patch moves the registration to a method in the MLIRContext: getOrCreateDialect<ConcreteDialect>()

This method requires the dialect to provide a static getDialectNamespace()
and store a TypeID on the Dialect itself, which allows lazily creating a
dialect when it is not yet loaded in the context.
As a side effect, it means that duplicated registration of the same
dialect is no longer an issue.

To limit the boilerplate, TableGen dialect generation is modified to
emit the constructor entirely and to separately invoke an "init()" method
that the user implements.

Differential Revision: https://reviews.llvm.org/D85495
2020-08-07 15:57:08 +00:00
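
    A sketch of the scheme described above (dialect name illustrative; TableGen normally emits the constructor):

    ```cpp
    #include "mlir/IR/Dialect.h"
    #include "mlir/IR/MLIRContext.h"

    using namespace mlir;

    class MyDialect : public Dialect {
    public:
      explicit MyDialect(MLIRContext *ctx)
          : Dialect(getDialectNamespace(), ctx, TypeID::get<MyDialect>()) {
        init();
      }
      // Required: enables lazily creating the dialect when not yet loaded.
      static StringRef getDialectNamespace() { return "mydialect"; }

    private:
      // User-implemented; invoked by the emitted constructor.
      void init();
    };

    // Duplicated registration is no longer an issue:
    // MyDialect *dialect = context.getOrCreateDialect<MyDialect>();
    ```
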
Frederik Gossen 4cd923784e [MLIR][Shape] Expose extent tensor type builder
The extent tensor type is a `tensor<?xindex>` that is used in the shape dialect.
To facilitate the use of this type when working with the shape dialect, we
expose the helper function for its construction.

Differential Revision: https://reviews.llvm.org/D85121
2020-08-05 09:42:57 +00:00
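
    A sketch of using the exposed helper (the function name is my reading of the commit; treat it as an assumption):

    ```cpp
    #include "mlir/Dialect/Shape/IR/Shape.h"

    // Builds the extent tensor type tensor<?xindex> used by the shape dialect.
    mlir::RankedTensorType ty = mlir::shape::getExtentTensorType(context);
    ```
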
Stephan Herhut 5d9f33aaa0 [MLIR][Shape] Add conversion for missing ops to standard
This adds conversions for const_size and to_extent_tensor. Also, cast-like operations are now folded away if the source and target types are the same.

Differential Revision: https://reviews.llvm.org/D84745
2020-07-29 12:46:18 +02:00
Stephan Herhut 6d10d317d8 [MLIR][Shape] Support transforming shape.num_elements on tensors
The current transformation to shape.reduce does not support tensor values.
This adds the required changes to make that work, including fixing the builder
for shape.reduce.

Differential Revision: https://reviews.llvm.org/D84744
2020-07-28 14:13:06 +02:00
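
    The transformation roughly produces the reduction from the dialect documentation (sketch; shape.mul's exact asm may differ after its generalization):

    ```mlir
    %init = shape.const_size 1
    %num_elements = shape.reduce(%shape, %init) : !shape.shape -> !shape.size {
      ^bb0(%index : index, %extent : !shape.size, %acc : !shape.size):
        %new_acc = shape.mul %acc, %extent
        shape.yield %new_acc : !shape.size
    }
    ```
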
Jacques Pienaar 595d214f47 [mlir][shape] Further operand and result type generalization
Previous changes generalized some of the operands and results. Complete
a larger group of those to simplify progressive lowering. Also update
some of the declarative asm forms due to the generalization. Tried to
keep it mostly mechanical.
2020-07-25 21:41:31 -07:00
Jacques Pienaar 5142448a5e [MLIR][Shape] Refactor verification
Based on https://reviews.llvm.org/D84439 but less restrictive; otherwise we
wouldn't allow shape_of to produce a ranked output and wouldn't allow for
iterative refinement here. We can consider making it more restrictive
later.
2020-07-25 14:55:19 -07:00
Frederik Gossen 670ae4b6da [MLIR][Shape] Fold `shape.mul`
Implement constant folding for `shape.mul`.

Differential Revision: https://reviews.llvm.org/D84438
2020-07-24 13:30:45 +00:00
Frederik Gossen 783a351785 [MLIR][Shape] Allow `shape.mul` to operate in indices
Differential Revision: https://reviews.llvm.org/D84437
2020-07-24 13:25:40 +00:00
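
    With this change `shape.mul` also operates on indices; a sketch of the generalized form:

    ```mlir
    %product = shape.mul %lhs, %rhs : index, index -> index
    ```
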
Frederik Gossen 5984d74139 [MLIR][Shape] Allow `get_extent` to operate on extent tensors and indices
Differential Revision: https://reviews.llvm.org/D84435
2020-07-24 11:13:17 +00:00
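
    A sketch of the extent-tensor form (asm per the generalization pattern; details may differ):

    ```mlir
    %extent = shape.get_extent %tensor, %idx : tensor<?xindex>, index -> index
    ```
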
Frederik Gossen 23a65648c0 [MLIR][Shape] Allow `shape.rank` to operate on extent tensors
Differential Revision: https://reviews.llvm.org/D84429
2020-07-24 10:43:39 +00:00
Frederik Gossen d4e4d5d780 [MLIR][Shape] Allow for `shape_of` to return extent tensors
The operation `shape.shape_of` now returns an extent tensor `tensor<?xindex>`
in cases where no error is possible. All consuming operations will eventually
accept both shapes and extent tensors.

Differential Revision: https://reviews.llvm.org/D84160
2020-07-24 08:40:40 +00:00
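
    A sketch of the new result type (operand name illustrative):

    ```mlir
    // For a ranked operand no error is possible, so an extent tensor works.
    %shape = shape.shape_of %arg : tensor<?x3xf32> -> tensor<?xindex>
    ```
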
Frederik Gossen 14d3cef012 [MLIR][Shape] Generalize `shape.const_shape` to extent tensors
The operation `shape.const_shape` was used for constants of type shape only.
We can now also use it to create constant extent tensors.

Differential Revision: https://reviews.llvm.org/D84157
2020-07-24 08:06:24 +00:00
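
    A sketch of both uses (asm per this generalization; details may differ):

    ```mlir
    // Constant of type shape, as before:
    %s = shape.const_shape [1, 2, 3] : !shape.shape
    // Now also a constant extent tensor:
    %t = shape.const_shape [1, 2, 3] : tensor<3xindex>
    ```
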
Frederik Gossen f9595857b9 [MLIR][Shape] Fold `shape.shape_eq`
Implement constant folding for `shape.shape_eq`.

Differential Revision: https://reviews.llvm.org/D82533
2020-07-20 12:25:53 +00:00
Frederik Gossen 0eb50e614c [MLIR][Shape] Allow `shape.reduce` to operate on extent tensors
Allow `shape.reduce` to take both `shape.shape` and `tensor<?xindex>` as an
argument.

Differential Revision: https://reviews.llvm.org/D83943
2020-07-16 13:53:37 +00:00
Stephan Herhut 8ef47244b9 [mlir][shape] Fold shape.broadcast with one scalar operand
This folds shape.broadcast to the other operand when at least one operand
is a scalar.

Also add an assemblyFormat for shape.broadcast and shape.concat.

Differential Revision: https://reviews.llvm.org/D83854
2020-07-15 18:49:12 +02:00
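
    A sketch of the fold (the scalar shape is shown as an empty constant shape; exact asm may differ):

    ```mlir
    %scalar = shape.const_shape []
    %result = shape.broadcast %other, %scalar
    // %result folds to %other: a scalar broadcasts with any shape.
    ```
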
Tres Popp 2ef71cb7fd [mlir] Add additional Canonicalization of shape.cstr_broadcastable.
Summary:
Added canonicalization and folding:
- Folding when either input is an attribute indicating a scalar input,
which can always be broadcast.
- Canonicalization where it can be determined that either input shape is
a scalar.
- Canonicalization where the partially specified input shapes can be
proven to always be broadcastable.

Differential Revision: https://reviews.llvm.org/D83194
2020-07-09 11:23:25 +02:00
Frederik Gossen 66e0f66d8f [MLIR][Shape] Canonicalize subsequent `size_to_index` and `index_to_size`
Eliminate back-to-back applications of `size_to_index` and `index_to_size`.

Differential Revision: https://reviews.llvm.org/D82083
2020-06-25 12:43:17 +00:00
Frederik Gossen bf2a4f3b3a [MLIR][Shape] Canonicalize subsequent `index_to_size` and `size_to_index`
Eliminate back-to-back applications of `index_to_size` and `size_to_index`.

Differential Revision: https://reviews.llvm.org/D82082
2020-06-25 12:02:49 +00:00
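
    A sketch of the round trips that now fold away (covers this commit and the previous one):

    ```mlir
    %i = shape.size_to_index %size
    %s = shape.index_to_size %i
    // Uses of %s canonicalize to %size (and symmetrically for the
    // index -> size -> index round trip).
    ```
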
Frederik Gossen 7bca97d960 [MLIR][Shape] Add canonicalization pattern for `shape.rank`
Replace any `rank(shape_of(tensor))` that relies on a ranked tensor with the
corresponding constant `const_size`.

Differential Revision: https://reviews.llvm.org/D82077
2020-06-25 08:39:35 +00:00
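
    A sketch of the pattern (asm of the time may differ):

    ```mlir
    %shape = shape.shape_of %arg : tensor<1x2x3xf32>
    %rank = shape.rank %shape
    // The operand is statically ranked, so this canonicalizes to:
    // %rank = shape.const_size 3
    ```
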
Frederik Gossen 81469527ec [MLIR][Shape] Add constant folding to `shape.rank`
Add constant folding for the `shape.rank` operation of the shape dialect.

Differential Revision: https://reviews.llvm.org/D82076
2020-06-25 08:32:25 +00:00
Tres Popp 3324598844 [mlir] Add a pass to remove all shape.cstr_ and assuming_ ops.
Summary:
This provides a utility to remove unsupported constraints, e.g. for
pipelines that happen to receive these ops but cannot lower them because
they do not support assertions.

Differential Revision: https://reviews.llvm.org/D81560
2020-06-18 13:31:30 +02:00
Alexander Belyaev e9ac792748 [mlir] Fix some of the warnings in MLIR code.
Summary:
* extra ';' in the following files:
    mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp
    mlir/lib/Dialect/Shape/IR/Shape.cpp

* base class ‘mlir::ConvertVectorToSCFBase<ConvertVectorToSCFPass>’
  should be explicitly initialized in the copy constructor [-Wextra] in
    mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp

* warning: ‘bool Expression::operator==(const Expression&) const’
  defined but not used [-Wunused-function] in
    mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-gen.cpp

Differential Revision: https://reviews.llvm.org/D81673
2020-06-11 22:18:32 +02:00
Frederik Gossen e4184c84ca [MLIR][Shape] Make dimension an operand of `get_extent`
The operation `get_extent` now accepts the dimension as an operand and is no
longer limited to constant dimensions.
A helper function facilitates the common constant use case.

Differential Revision: https://reviews.llvm.org/D81248
2020-06-10 11:47:18 +00:00
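
    A sketch of the operand form (the helper covers the common constant case):

    ```mlir
    %dim = shape.const_size 1
    %extent = shape.get_extent %shape, %dim
    ```
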
msifontes 1c189d71db [mlir] Add number of operands verification for shape.assuming_all operation
Implemented a verification to ensure that the shape.assuming_all
operation always has at least one operand.
2020-06-09 09:59:04 -07:00
Frederik Gossen 215914151e [MLIR][Shape] Add support for `OpAsmInterface` in `shape.const_size`
The SSA values created with `shape.const_size` are now named depending on the
value.
A constant size of 3, e.g., is now automatically named `%c3`.

Differential Revision: https://reviews.llvm.org/D81249
2020-06-08 10:27:28 +00:00
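
    For example, per the description above:

    ```mlir
    %c3 = shape.const_size 3
    ```
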
Alexander Belyaev 250dcf61ae Revert "Revert "[MLIR] Lower shape.num_elements -> shape.reduce.""
This reverts commit a25f5cd70c.

Now the build with `-DBUILD_SHARED_LIBS=ON` is fixed.
2020-06-08 12:19:54 +02:00
Tres Popp 68a8336bf2 Revert "Revert "[mlir] Folding and canonicalization of shape.cstr_eq""
This reverts commit 12e31f6e40.
2020-06-08 10:06:55 +02:00
Tres Popp d216f983e6 Revert "Revert "[mlir] Canonicalization and folding of shape.cstr_broadcastable""
This reverts commit 4261b026ad.
2020-06-08 10:06:55 +02:00
Mehdi Amini a25f5cd70c Revert "[MLIR] Lower shape.num_elements -> shape.reduce."
This reverts commit e80617df89.

This broke the build with `-DBUILD_SHARED_LIBS=ON`
2020-06-07 19:32:36 +00:00
Alexander Belyaev e80617df89 [MLIR] Lower shape.num_elements -> shape.reduce.
Differential Revision: https://reviews.llvm.org/D81279
2020-06-07 16:39:21 +02:00
Alexander Belyaev 50f68c1e33 [mlir] Add verifier for `shape.yield`.
Differential Revision: https://reviews.llvm.org/D81262
2020-06-07 15:40:11 +02:00
Tres Popp 4261b026ad Revert "[mlir] Canonicalization and folding of shape.cstr_broadcastable"
This reverts commit 6aab709459.

Some users have failing builds with ShapeCanonicalization.td, so revert
for now.
2020-06-06 11:17:44 +02:00
Tres Popp 12e31f6e40 Revert "[mlir] Folding and canonicalization of shape.cstr_eq"
This reverts commit 0a554e607f.

Some users have build failures when building ShapeCanonicalization.td,
so revert changes that created and rely on it.
2020-06-06 11:08:41 +02:00
Alexander Belyaev 04fb2b6123 [Mlir] Implement printer, parser, verifier and builder for shape.reduce.
Differential Revision: https://reviews.llvm.org/D81186
2020-06-05 11:25:32 +02:00
Tres Popp 4ffe6bd8a7 [mlir] NFC formatting cleanup. 2020-06-05 11:00:20 +02:00
Tres Popp 655e08ceeb [mlir] Canonicalization of shape.assuming
Summary:
This inlines the region of a shape.assuming op in the case that the
input witness is found to be statically true.

Differential Revision: https://reviews.llvm.org/D80302
2020-06-05 11:00:20 +02:00
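
    A sketch of the canonicalization (asm details illustrative):

    ```mlir
    %w = shape.const_witness true
    %result = shape.assuming %w -> !shape.shape {
      %s = shape.const_shape [2, 4]
      shape.assuming_yield %s : !shape.shape
    }
    // The witness is statically true, so the region is inlined and
    // %result becomes the const_shape directly.
    ```
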
Tres Popp 0a554e607f [mlir] Folding and canonicalization of shape.cstr_eq
In the case of all inputs being constant and equal, cstr_eq will be
replaced with a true_witness.

Differential Revision: https://reviews.llvm.org/D80303
2020-06-05 11:00:20 +02:00
Tres Popp 6aab709459 [mlir] Canonicalization and folding of shape.cstr_broadcastable
This allows replacing this op with a true witness when both inputs are
const_shapes and are found to be broadcastable.

Differential Revision: https://reviews.llvm.org/D80304
2020-06-05 11:00:19 +02:00
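
    A sketch of the fold (shapes illustrative):

    ```mlir
    %a = shape.const_shape [2, 4]
    %b = shape.const_shape [4]
    %w = shape.cstr_broadcastable %a, %b
    // Both inputs are constant and broadcastable, so this folds to:
    // %w = shape.const_witness true
    ```
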
Tres Popp 4a255bbd29 [mlir] Add folding for shape.any
If any input to shape.any is a const_shape, shape.any can be replaced
with that input.

Differential Revision: https://reviews.llvm.org/D80305
2020-06-05 11:00:19 +02:00
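
    A sketch of the fold (asm details illustrative):

    ```mlir
    %c = shape.const_shape [1, 2]
    %a = shape.any %c, %unknown
    // %a folds to %c.
    ```
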
Tres Popp 6b3a5bff93 [mlir] Folding of shape.assuming_all
This allows assuming_all to be replaced when all inputs are known to be
statically passing witnesses.

Differential Revision: https://reviews.llvm.org/D80306
2020-06-05 11:00:19 +02:00
Tres Popp 1c3e38d98c [mlir] Add a shape op that returns a constant witness
This will later be used during canonicalization and folding steps to replace
statically known passing constraints.

Differential Revision: https://reviews.llvm.org/D80307
2020-06-05 11:00:19 +02:00
Jacques Pienaar 5b454b98d6 [mlir] Remove unneeded inference trait/fns
These are all handled with the simple return type inference in ODS.
Also update some summaries to match what is recommended in the ODS doc.
2020-06-03 13:09:07 -07:00
Frederik Gossen fdaa391e3d [MLIR] Add `num_elements` to the shape dialect
The operation `num_elements` determines the number of elements of a given
shape, that is, the product of its dimensions.

Differential Revision: https://reviews.llvm.org/D80281
2020-05-28 14:05:58 +00:00
Frederik Gossen 6594d54571 [MLIR] Add `index_to_size` and `size_to_index` to the shape dialect
Add the two conversion operations `index_to_size` and `size_to_index` to the
shape dialect.
This facilitates the conversion of index types between the shape and the
standard dialect.

Differential Revision: https://reviews.llvm.org/D80280
2020-05-28 13:57:20 +00:00
Frederik Gossen e73bb4fba7 [MLIR] Move `ConcatOp` to its lexicographic position
Purely cosmetic change.
The operation implementations in `Shape.cpp` are now in lexicographic order.

Differential Revision: https://reviews.llvm.org/D80277
2020-05-28 13:37:22 +00:00
Sean Silva 25132b36a8 [mlir][shape] Use IndexElementsAttr in Shape dialect.
Summary:
Index is the proper type for storing shapes when constant folding, so
this fixes the previous code (which was using i64).

Differential Revision: https://reviews.llvm.org/D80600
2020-05-27 13:39:49 -07:00
Jacques Pienaar 31f40f603d [mlir] Add simple generator for return types
Take advantage of equality constrains to generate the type inference interface.
This is used for equality and trivially built types. The type inference method
is only generated when no type inference trait is specified already.

This reorders verification that changes some test error messages.

Differential Revision: https://reviews.llvm.org/D80484
2020-05-27 08:45:55 -07:00
Sean Silva cf42b70439 [mlir][shape] Add `shape.get_extent`.
Summary:
This op extracts an extent from a shape.

This also is the first op which constant folds to shape.const_size,
which revealed that shape.const_size needs a folder (ConstantLike ops
seem to always need folders for the constant folding infra to work).

Differential Revision: https://reviews.llvm.org/D80394
2020-05-26 17:03:40 -07:00
Tres Popp fb6986ef69 [mlir] Custom printing/parsing for Shape::AssumingOp
Summary:
Additionally, this adds traits and builder methods to AssumingYieldOp
and names the input witness to the AssumingOp.

Differential Revision: https://reviews.llvm.org/D80187
2020-05-20 10:39:26 +02:00
Sean Silva 21b0eff773 [mlir][shape] Add `shape.from_extents`.
Summary:
This is a basic op needed for creating shapes from SSA values
representing the extents.

Differential Revision: https://reviews.llvm.org/D79833
2020-05-19 14:26:08 -07:00
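
    A sketch (operand names illustrative):

    ```mlir
    %shape = shape.from_extents %width, %height
    ```
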
Tres Popp a26883e5aa [MLIR] Add shape.witness type and ops
Summary: These represent shape based preconditions on execution of code.

Differential Revision: https://reviews.llvm.org/D79717
2020-05-15 14:33:54 +02:00
Jacques Pienaar 5eae715a31 [mlir] Add NamedAttrList
This is a wrapper around a vector of NamedAttributes that keeps track of whether it is sorted and makes a minimal effort to remain sorted (doing more, e.g., appending attributes in sorted order, could be done in a follow-up). If a DictionaryAttr is queried, it caches the returned DictionaryAttr along with the sorted state.

Change MutableDictionaryAttr to always return a non-null Attribute even when empty (reserving null for error cases). To this end, change the getter to take a context as input so that the empty DictionaryAttr can be queried. Also create one instance of the empty dictionary attribute that can be reused without needing to lock the context, etc.

Update infer type op interface to use DictionaryAttr and use NamedAttrList to avoid incurring multiple conversion costs.

Fix bug in sorting helper function.

Differential Revision: https://reviews.llvm.org/D79463
2020-05-07 12:33:36 -07:00
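
    A sketch of the intended usage (treat the exact method names as assumptions):

    ```cpp
    #include "mlir/IR/Attributes.h"
    #include "mlir/IR/Builders.h"

    using namespace mlir;

    void example(Builder &builder, MLIRContext *ctx) {
      NamedAttrList attrs; // Tracks and maintains sortedness with minimal effort.
      attrs.append(builder.getNamedAttr("value", builder.getI32IntegerAttr(42)));
      // Querying caches the DictionaryAttr along with the sorted state.
      DictionaryAttr dict = attrs.getDictionary(ctx);
      (void)dict;
    }
    ```
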
Sean Silva 57a7cd7a13 [shape] Add inferReturnTypes to a couple ops.
- ShapeOfOp
- BroadcastOp

Differential Revision: https://reviews.llvm.org/D78822
2020-04-24 16:10:20 -07:00
Sean Silva 5fff169daa [shape] More constant folding
- shape.split_at
- shape.broadcast
- shape.concat
- shape.to_extent_tensor

Differential Revision: https://reviews.llvm.org/D78821
2020-04-24 16:10:19 -07:00
Sean Silva d1ad267a56 [shape] Basic constant folding.
- Implement a first constant fold for shape.shape_of (more ops coming in subsequent patches)
- Implement the right builder interfaces for ShapeType and other types
- Splits shape.constant into shape.const_size and shape.const_shape, which play better with dyn_cast and building vs one polymorphic op.

Also, fix the RUN line in ops.mlir to properly verify round-tripping.
2020-04-24 15:49:35 -07:00
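
    A sketch of the split constants:

    ```mlir
    %size = shape.const_size 4
    %shape = shape.const_shape [1, 4]
    ```
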
Sean Silva 569e4f9bc9 `shape` dialect: add some ops
- add `to_extent_tensor`
- rename `create_shape` to `from_extent_tensor` for symmetry
- add `split_at` and `concat` ops for basic shape manipulations

This set of ops is inspired by the requirements of lowering a dynamic-shape-aware batch matmul op. For such an op, the "matrix" dimensions aren't subject to broadcasting but the others are, so we need to slice, broadcast, and reconstruct the final output shape. Furthermore, the broadcasting op used downstream takes a tensor of extents as its preferred shape interface.

However, this functionality is quite general. It's obvious that `to_extent_tensor` is needed long-term to support many common patterns that involve computations on shapes. We can evolve the shape manipulation ops introduced here. The specific choices made here took into consideration the potentially unranked nature of the !shape.shape type, which means that a simple listing of dimensions to extract isn't possible in general.

Differential Revision: https://reviews.llvm.org/D76817
2020-03-27 16:38:42 -07:00
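
    A rough sketch of the batch-matmul shape flow these ops enable (names illustrative; exact asm may differ):

    ```mlir
    // Slice off the batch dims, broadcast them, and rebuild the result shape.
    %lhs_batch, %lhs_matrix = shape.split_at %lhs_shape, %split_point
    %rhs_batch, %rhs_matrix = shape.split_at %rhs_shape, %split_point
    %batch = shape.broadcast %lhs_batch, %rhs_batch
    %result_shape = shape.concat %batch, %result_matrix
    // The downstream broadcasting op prefers a tensor of extents:
    %extents = shape.to_extent_tensor %result_shape
    ```
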
Jacques Pienaar 9a65d683e0 [mlir] Add target for Shape dialect
Summary:
Add targets and basic printing/parsing of types in Shape dialect.

Differential Revision: https://reviews.llvm.org/D76321
2020-03-17 14:54:25 -07:00