[mlir] NFC: fix trivial typo in documents

Reviewers: mravishankar, antiagainst, nicolasvasilache, herhut, aartbik, mehdi_amini, bondhugula

Reviewed By: mehdi_amini, bondhugula

Subscribers: bondhugula, jdoerfert, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, csigg, arpith-jacob, mgester, lucyrfox, aartbik, liufengdb, Joonsoo, bader, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76993
This commit is contained in:
Kazuaki Ishizaki 2020-03-29 03:20:02 +09:00
parent 49d00824bb
commit b632bd88a6
17 changed files with 27 additions and 25 deletions


@@ -394,7 +394,7 @@ struct MemRefDescriptor {
T *aligned;
intptr_t offset;
intptr_t sizes[N];
-intptr_t stides[N];
+intptr_t strides[N];
};
```


@@ -126,7 +126,7 @@ mlir/lib/Conversion/GPUCommon.
Each conversion typically exists in a separate library, declared with
add_mlir_conversion_library(). Conversion libraries typically depend
on their source and target dialects, but may also depend on other
-dialects (e.g. MLIRStandard). Typically this dependence is specifid
+dialects (e.g. MLIRStandard). Typically this dependence is specified
using target_link_libraries() and the PUBLIC keyword. For instance:
```cmake


@@ -93,8 +93,8 @@ DiagnosticEngine engine = ctx->getDiagEngine();
// or failure if the diagnostic should be propagated to the previous handlers.
DiagnosticEngine::HandlerID id = engine.registerHandler(
[](Diagnostic &diag) -> LogicalResult {
-bool should_propage_diagnostic = ...;
-return failure(should_propage_diagnostic);
+bool should_propagate_diagnostic = ...;
+return failure(should_propagate_diagnostic);
});


@@ -710,7 +710,7 @@ to:
- Note that `attr-dict` does not overlap with individual attributes. These
attributes will simply be elided when printing the attribute dictionary.
-##### Type Inferrence
+##### Type Inference
One requirement of the format is that the types of operands and results must
always be present. In certain instances, the type of a variable may be deduced


@@ -311,7 +311,7 @@ with NVCC from a textual representation. While this was a pragmatic
short-term solution it made it hard to perform low-level rewrites that
would have helped with register reuse in the ***compute-bound regime***.
- The same reliance on emitting CUDA code made it difficult to
-create cost models when time came. This made it artifically harder to
+create cost models when time came. This made it artificially harder to
prune out bad solutions than necessary. This resulted in excessive
runtime evaluation, as reported in the paper [Machine Learning Systems
are Stuck in a Rut](https://dl.acm.org/doi/10.1145/3317550.3321441).


@@ -443,7 +443,7 @@ def GPU_ReturnOp : GPU_Op<"return", [HasParent<"GPUFuncOp">, NoSideEffect,
let description = [{
A terminator operation for regions that appear in the body of `gpu.func`
functions. The operands to the `gpu.return` are the result values returned
-by an incovation of the `gpu.func`.
+by an invocation of the `gpu.func`.
}];
let builders = [OpBuilder<"Builder *builder, OperationState &result", " // empty">];


@@ -865,7 +865,7 @@ def LLVM_MatrixMultiplyOp
/// Create a llvm.matrix.transpose call, transposing a `rows` x `columns` 2-D
/// `matrix`, as specified in the LLVM MatrixBuilder.
-def LLVM_MatrixTranposeOp
+def LLVM_MatrixTransposeOp
: LLVM_OneResultOp<"intr.matrix.transpose">,
Arguments<(ins LLVM_Type:$matrix, I32Attr:$rows, I32Attr:$columns)> {
string llvmBuilder = [{


@@ -328,7 +328,7 @@ def ConvOp : LinalgStructured_Op<"conv", [NInputs<2>, NOutputs<1>]> {
// F(z0, ..., zN-1, q, k) *
// I(b, x0 + z0 - pad_low_0, ..., xN-1 + zN-1 - pad_low_N-1, q)
// -> O(b, x0, ..., xN-1, k)
-// for N equal to `nWindow`. If there is no padding attirbute, it will be
+// for N equal to `nWindow`. If there is no padding attribute, it will be
// ignored.
llvm::Optional<SmallVector<AffineMap, 8>> referenceIndexingMaps() {
MLIRContext *context = getContext();


@@ -119,12 +119,13 @@ def LinalgStructuredInterface : OpInterface<"LinalgOp"> {
// Other interface methods.
//===------------------------------------------------------------------===//
InterfaceMethod<
-"Return the reference iterators for this named op (if any are specied). "
-"These reference iterators are used to specify the default behavior of "
-"the op. Typically this would be a static method but in order to allow "
-"rank-polymorphic ops, this needs to be per object instance. Named ops "
-"must define referenceIterators, even if empty for the 0-D case. "
-"Generic ops on the other hand have a None `referenceIterators`",
+"Return the reference iterators for this named op (if any are "
+"specified). These reference iterators are used to specify the default "
+"behavior of the op. Typically this would be a static method but in "
+"order to allow rank-polymorphic ops, this needs to be per object "
+"instance. Named ops must define referenceIterators, even if empty for "
+"the 0-D case. Generic ops on the other hand have a None "
+"`referenceIterators`",
"llvm::Optional<SmallVector<StringRef, 8>>", "referenceIterators"
>,
InterfaceMethod<


@@ -126,7 +126,7 @@ def quant_ConstFakeQuant : quant_Op<"const_fake_quant",
Given a const min, max, num_bits and narrow_range attribute, applies the
same uniform quantization simulation as is done by the TensorFlow
fake_quant_with_min_max_args op. See the fakeQuantAttrsToType() utility
-method and the quant-convert-simulated-quantization pass for futher details.
+method and the quant-convert-simulated-quantization pass for further details.
}];
let arguments = (ins
@@ -155,7 +155,7 @@ def quant_ConstFakeQuantPerAxis : quant_Op<"const_fake_quant_per_axis",
Given a const min, max, num_bits and narrow_range attribute, applies the
same per axis uniform quantization simulation as is done by the TensorFlow
fake_quant_with_min_max_vars_per_channel op. See the fakeQuantAttrsToType()
-utility method and the quant-convert-simulated-quantization pass for futher
+utility method and the quant-convert-simulated-quantization pass for further
details.
}];


@@ -3285,7 +3285,7 @@ def SPV_OpcodeAttr :
class SPV_Op<string mnemonic, list<OpTrait> traits = []> :
Op<SPIRV_Dialect, mnemonic, !listconcat(traits, [
// TODO(antiagainst): We don't need all of the following traits for
-// every op; only the suitabble ones should be added automatically
+// every op; only the suitable ones should be added automatically
// after ODS supports dialect-specific contents.
DeclareOpInterfaceMethods<QueryMinVersionInterface>,
DeclareOpInterfaceMethods<QueryMaxVersionInterface>,


@@ -35,7 +35,7 @@ def ShapeDialect : Dialect {
shapes as input, return the invalid shape if one of its operands is an
invalid shape. This avoids flagging multiple errors for one verification
failure. The dialect itself does not specify how errors should be combined
-(there are multiple different options, from always chosing first operand,
+(there are multiple different options, from always choosing first operand,
concatting etc. on how to combine them).
}];


@@ -1286,7 +1286,7 @@ def Vector_TransposeOp :
the permutation of ranks in the n-sized integer array attribute.
In the operation
```mlir
-%1 = vector.tranpose %0, [i_1, .., i_n]
+%1 = vector.transpose %0, [i_1, .., i_n]
: vector<d_1 x .. x d_n x f32>
to vector<d_trans[0] x .. x d_trans[n-1] x f32>
```
@@ -1294,7 +1294,7 @@ def Vector_TransposeOp :
Example:
```mlir
-%1 = vector.tranpose %0, [1, 0] : vector<2x3xf32> to vector<3x2xf32>
+%1 = vector.transpose %0, [1, 0] : vector<2x3xf32> to vector<3x2xf32>
[ [a, b, c], [ [a, d],
[d, e, f] ] -> [b, e],


@@ -100,7 +100,8 @@ def InferShapedTypeOpInterface : OpInterface<"InferShapedTypeOpInterface"> {
InterfaceMethod<
/*desc=*/[{Reify the shape computation for the operation.
-Insert operations using the given OpBulder that computes the result shape.
+Insert operations using the given OpBuilder that computes the result
+shape.
}],
/*retTy=*/"LogicalResult",
/*methodName=*/"reifyReturnTypeShapes",


@@ -510,7 +510,7 @@ func @dynamic_dim_memref(%arg0: memref<8x?xi32>) { return }
// Tensor types
//===----------------------------------------------------------------------===//
-// Check that tensor element types are kept untouched with proper capabilites.
+// Check that tensor element types are kept untouched with proper capabilities.
module attributes {
spv.target_env = #spv.target_env<
#spv.vce<v1.0, [Int8, Int16, Int64, Float16, Float64], []>,


@@ -34,7 +34,7 @@ func @int_attrs_pass() {
// -----
//===----------------------------------------------------------------------===//
-// Check that the maximum and minumum integer attribute values are
+// Check that the maximum and minimum integer attribute values are
// representable and preserved during a round-trip.
//===----------------------------------------------------------------------===//


@@ -5,7 +5,7 @@
// writing a local test source. We filter out platform-specific intrinsic
// includes from the main file to avoid unnecessary dependencies and decrease
// the test cost. The command-line flags further ensure a specific intrinsic is
-// processed and we only check the ouptut below.
+// processed and we only check the output below.
// We also verify emission of type specialization for overloadable intrinsics.
//
// RUN: cat %S/../../../llvm/include/llvm/IR/Intrinsics.td \