[mlir] NFC: fix trivial typos in documents
Reviewers: mravishankar, antiagainst, nicolasvasilache, herhut, aartbik, mehdi_amini, bondhugula

Reviewed By: mehdi_amini, bondhugula

Subscribers: bondhugula, jdoerfert, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, Joonsoo, bader, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76993
commit b632bd88a6 (parent 49d00824bb)
@@ -394,7 +394,7 @@ struct MemRefDescriptor {
   T *aligned;
   intptr_t offset;
   intptr_t sizes[N];
-  intptr_t stides[N];
+  intptr_t strides[N];
 };
 ```
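For context, the struct touched by this hunk is the plain C-style descriptor that the standard-to-LLVM conversion document uses to describe a ranked memref. Below is a rough C++ sketch of the full descriptor and the element addressing it implies; the `allocated` field and the addressing formula are assumptions drawn from the surrounding document, not part of this hunk.

```c++
#include <cstddef>
#include <cstdint>

// Assumed full layout of the rank-N memref descriptor; only its tail is
// visible in the hunk above.
template <typename T, std::size_t N>
struct MemRefDescriptor {
  T *allocated;             // pointer returned by the underlying allocation
  T *aligned;               // pointer to the first properly aligned element
  std::intptr_t offset;     // offset, in elements, from `aligned` to element (0, ..., 0)
  std::intptr_t sizes[N];   // size of each dimension
  std::intptr_t strides[N]; // stride, in elements, of each dimension
};

// Element (i, j) of a rank-2 descriptor lives at this address.
template <typename T>
T *elementAddress(const MemRefDescriptor<T, 2> &m, std::intptr_t i,
                  std::intptr_t j) {
  return m.aligned + m.offset + i * m.strides[0] + j * m.strides[1];
}
```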
@@ -126,7 +126,7 @@ mlir/lib/Conversion/GPUCommon.
 Each conversion typically exists in a separate library, declared with
 add_mlir_conversion_library(). Conversion libraries typically depend
 on their source and target dialects, but may also depend on other
-dialects (e.g. MLIRStandard). Typically this dependence is specifid
+dialects (e.g. MLIRStandard). Typically this dependence is specified
 using target_link_libraries() and the PUBLIC keyword. For instance:

 ```cmake
@@ -93,8 +93,8 @@ DiagnosticEngine engine = ctx->getDiagEngine();
 // or failure if the diagnostic should be propagated to the previous handlers.
 DiagnosticEngine::HandlerID id = engine.registerHandler(
     [](Diagnostic &diag) -> LogicalResult {
-  bool should_propage_diagnostic = ...;
-  return failure(should_propage_diagnostic);
+  bool should_propagate_diagnostic = ...;
+  return failure(should_propagate_diagnostic);
 });

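As a usage note (not part of this patch), the `HandlerID` returned by `registerHandler` can later be used to remove the handler again. A minimal sketch, assuming the `DiagnosticEngine` API shown in the surrounding document:

```c++
#include "mlir/IR/Diagnostics.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/Support/LogicalResult.h"

// Install a handler that marks every diagnostic as handled, do some work,
// then unregister it again via the returned id.
void withQuietDiagnostics(mlir::MLIRContext *ctx) {
  mlir::DiagnosticEngine &engine = ctx->getDiagEngine();
  mlir::DiagnosticEngine::HandlerID id =
      engine.registerHandler([](mlir::Diagnostic &) -> mlir::LogicalResult {
        // success() marks the diagnostic as handled; failure() would let it
        // propagate to previously registered handlers, as described above.
        return mlir::success();
      });

  // ... work that may emit diagnostics, e.g. via mlir::emitError(loc) ...

  engine.eraseHandler(id);
}
```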
@@ -710,7 +710,7 @@ to:
 - Note that `attr-dict` does not overlap with individual attributes. These
   attributes will simply be elided when printing the attribute dictionary.

-##### Type Inferrence
+##### Type Inference

 One requirement of the format is that the types of operands and results must
 always be present. In certain instances, the type of a variable may be deduced
@@ -311,7 +311,7 @@ with NVCC from a textual representation. While this was a pragmatic
 short-term solution it made it hard to perform low-level rewrites that
 would have helped with register reuse in the ***compute-bound regime***.
 - The same reliance on emitting CUDA code made it difficult to
-  create cost models when time came. This made it artifically harder to
+  create cost models when time came. This made it artificially harder to
   prune out bad solutions than necessary. This resulted in excessive
   runtime evaluation, as reported in the paper [Machine Learning Systems
   are Stuck in a Rut](https://dl.acm.org/doi/10.1145/3317550.3321441).
@@ -443,7 +443,7 @@ def GPU_ReturnOp : GPU_Op<"return", [HasParent<"GPUFuncOp">, NoSideEffect,
   let description = [{
     A terminator operation for regions that appear in the body of `gpu.func`
     functions. The operands to the `gpu.return` are the result values returned
-    by an incovation of the `gpu.func`.
+    by an invocation of the `gpu.func`.
   }];

   let builders = [OpBuilder<"Builder *builder, OperationState &result", " // empty">];
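A brief C++ sketch (not taken from this patch) of how such a terminator is typically created; it assumes the ODS-generated collective builder that takes the returned values as a `ValueRange`:

```c++
#include "mlir/Dialect/GPU/GPUDialect.h"
#include "mlir/IR/Builders.h"

// Terminate the body region of a gpu.func, returning `results`. The operand
// types must match the result types of the enclosing gpu.func.
void terminateGpuFunc(mlir::OpBuilder &builder, mlir::Location loc,
                      mlir::ValueRange results) {
  builder.create<mlir::gpu::ReturnOp>(loc, results);
}
```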
@@ -865,7 +865,7 @@ def LLVM_MatrixMultiplyOp

 /// Create a llvm.matrix.transpose call, transposing a `rows` x `columns` 2-D
 /// `matrix`, as specified in the LLVM MatrixBuilder.
-def LLVM_MatrixTranposeOp
+def LLVM_MatrixTransposeOp
     : LLVM_OneResultOp<"intr.matrix.transpose">,
       Arguments<(ins LLVM_Type:$matrix, I32Attr:$rows, I32Attr:$columns)> {
   string llvmBuilder = [{
@@ -328,7 +328,7 @@ def ConvOp : LinalgStructured_Op<"conv", [NInputs<2>, NOutputs<1>]> {
      // F(z0, ..., zN-1, q, k) *
      //   I(b, x0 + z0 - pad_low_0, ..., xN-1 + zN-1 - pad_low_N-1, q)
      // -> O(b, x0, ..., xN-1, k)
-     // for N equal to `nWindow`. If there is no padding attirbute, it will be
+     // for N equal to `nWindow`. If there is no padding attribute, it will be
      // ignored.
      llvm::Optional<SmallVector<AffineMap, 8>> referenceIndexingMaps() {
        MLIRContext *context = getContext();
@@ -119,12 +119,13 @@ def LinalgStructuredInterface : OpInterface<"LinalgOp"> {
     // Other interface methods.
     //===------------------------------------------------------------------===//
     InterfaceMethod<
-      "Return the reference iterators for this named op (if any are specied). "
-      "These reference iterators are used to specify the default behavior of "
-      "the op. Typically this would be a static method but in order to allow "
-      "rank-polymorphic ops, this needs to be per object instance. Named ops "
-      "must define referenceIterators, even if empty for the 0-D case. "
-      "Generic ops on the other hand have a None `referenceIterators`",
+      "Return the reference iterators for this named op (if any are "
+      "specified). These reference iterators are used to specify the default "
+      "behavior of the op. Typically this would be a static method but in "
+      "order to allow rank-polymorphic ops, this needs to be per object "
+      "instance. Named ops must define referenceIterators, even if empty for "
+      "the 0-D case. Generic ops on the other hand have a None "
+      "`referenceIterators`",
       "llvm::Optional<SmallVector<StringRef, 8>>", "referenceIterators"
     >,
     InterfaceMethod<
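As a purely hypothetical illustration (none of the names below come from this patch), a named op's `referenceIterators` might return the usual Linalg iterator-type strings, while a generic op would return `llvm::None`:

```c++
#include "llvm/ADT/Optional.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"

// Sketch of a matmul-like named op: two parallel dimensions and one reduction.
llvm::Optional<llvm::SmallVector<llvm::StringRef, 8>> namedOpReferenceIterators() {
  return llvm::SmallVector<llvm::StringRef, 8>{"parallel", "parallel",
                                               "reduction"};
}

// Sketch of a generic op: no reference iterators are defined.
llvm::Optional<llvm::SmallVector<llvm::StringRef, 8>> genericOpReferenceIterators() {
  return llvm::None;
}
```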
@@ -126,7 +126,7 @@ def quant_ConstFakeQuant : quant_Op<"const_fake_quant",
     Given a const min, max, num_bits and narrow_range attribute, applies the
     same uniform quantization simulation as is done by the TensorFlow
     fake_quant_with_min_max_args op. See the fakeQuantAttrsToType() utility
-    method and the quant-convert-simulated-quantization pass for futher details.
+    method and the quant-convert-simulated-quantization pass for further details.
   }];

   let arguments = (ins
@@ -155,7 +155,7 @@ def quant_ConstFakeQuantPerAxis : quant_Op<"const_fake_quant_per_axis",
     Given a const min, max, num_bits and narrow_range attribute, applies the
     same per axis uniform quantization simulation as is done by the TensorFlow
     fake_quant_with_min_max_vars_per_channel op. See the fakeQuantAttrsToType()
-    utility method and the quant-convert-simulated-quantization pass for futher
+    utility method and the quant-convert-simulated-quantization pass for further
     details.
   }];

@@ -3285,7 +3285,7 @@ def SPV_OpcodeAttr :
 class SPV_Op<string mnemonic, list<OpTrait> traits = []> :
     Op<SPIRV_Dialect, mnemonic, !listconcat(traits, [
       // TODO(antiagainst): We don't need all of the following traits for
-      // every op; only the suitabble ones should be added automatically
+      // every op; only the suitable ones should be added automatically
       // after ODS supports dialect-specific contents.
       DeclareOpInterfaceMethods<QueryMinVersionInterface>,
       DeclareOpInterfaceMethods<QueryMaxVersionInterface>,
@@ -35,7 +35,7 @@ def ShapeDialect : Dialect {
     shapes as input, return the invalid shape if one of its operands is an
     invalid shape. This avoids flagging multiple errors for one verification
     failure. The dialect itself does not specify how errors should be combined
-    (there are multiple different options, from always chosing first operand,
+    (there are multiple different options, from always choosing first operand,
     concatting etc. on how to combine them).
   }];

@@ -1286,7 +1286,7 @@ def Vector_TransposeOp :
     the permutation of ranks in the n-sized integer array attribute.
     In the operation
     ```mlir
-    %1 = vector.tranpose %0, [i_1, .., i_n]
+    %1 = vector.transpose %0, [i_1, .., i_n]
       : vector<d_1 x .. x d_n x f32>
       to vector<d_trans[0] x .. x d_trans[n-1] x f32>
     ```
@@ -1294,7 +1294,7 @@ def Vector_TransposeOp :

     Example:
     ```mlir
-    %1 = vector.tranpose %0, [1, 0] : vector<2x3xf32> to vector<3x2xf32>
+    %1 = vector.transpose %0, [1, 0] : vector<2x3xf32> to vector<3x2xf32>

      [ [a, b, c],       [ [a, d],
        [d, e, f] ]  ->    [b, e],
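To make the `[1, 0]` permutation concrete, here is a small standalone C++ sketch (not part of the patch) of the same index remapping on a 2x3 value: result dimension `d` is drawn from source dimension `perm[d]`.

```c++
#include <array>

// Transpose a 2x3 matrix into a 3x2 one, mirroring vector.transpose %0, [1, 0].
std::array<std::array<float, 2>, 3>
transpose2x3(const std::array<std::array<float, 3>, 2> &src) {
  std::array<std::array<float, 2>, 3> dst{};
  for (int i = 0; i < 2; ++i)
    for (int j = 0; j < 3; ++j)
      dst[j][i] = src[i][j]; // result indices are the [1, 0] permutation of the source's
  return dst;
}
```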
@@ -100,7 +100,8 @@ def InferShapedTypeOpInterface : OpInterface<"InferShapedTypeOpInterface"> {
     InterfaceMethod<
       /*desc=*/[{Reify the shape computation for the operation.

-      Insert operations using the given OpBulder that computes the result shape.
+      Insert operations using the given OpBuilder that computes the result
+      shape.
       }],
       /*retTy=*/"LogicalResult",
       /*methodName=*/"reifyReturnTypeShapes",
@@ -510,7 +510,7 @@ func @dynamic_dim_memref(%arg0: memref<8x?xi32>) { return }
 // Tensor types
 //===----------------------------------------------------------------------===//

-// Check that tensor element types are kept untouched with proper capabilites.
+// Check that tensor element types are kept untouched with proper capabilities.
 module attributes {
   spv.target_env = #spv.target_env<
     #spv.vce<v1.0, [Int8, Int16, Int64, Float16, Float64], []>,
@@ -34,7 +34,7 @@ func @int_attrs_pass() {
 // -----

 //===----------------------------------------------------------------------===//
-// Check that the maximum and minumum integer attribute values are
+// Check that the maximum and minimum integer attribute values are
 // representable and preserved during a round-trip.
 //===----------------------------------------------------------------------===//

@@ -5,7 +5,7 @@
 // writing a local test source. We filter out platform-specific intrinsic
 // includes from the main file to avoid unnecessary dependencies and decrease
 // the test cost. The command-line flags further ensure a specific intrinsic is
-// processed and we only check the ouptut below.
+// processed and we only check the output below.
 // We also verify emission of type specialization for overloadable intrinsics.
 //
 // RUN: cat %S/../../../llvm/include/llvm/IR/Intrinsics.td \