This change makes the public API of SparseTensorUtils.cpp explicit, whereas before these functions were only implicitly public. Implicit visibility is sufficient for mlir-opt to generate calls to these functions, but it is not enough to enable C/C++ code to call them directly in the usual way (i.e., without going through codegen). Thus, leaving the visibility implicit prevents the development of other tools (e.g., microbenchmarks).
In addition, this change marks the functions MLIR_CRUNNERUTILS_EXPORT, which is required by the JIT under certain configurations (albeit not for anything in our test suite).
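To illustrate what "explicitly public" means here, a minimal C++ sketch follows; the function name and body are hypothetical, and the include is assumed to be the header that provides the export macro:
```
// Hypothetical runtime entry point, not an actual SparseTensorUtils function.
// The extern "C" linkage and the export macro are what make the symbol
// callable directly from ordinary C/C++ code and visible to the JIT.
#include "mlir/ExecutionEngine/CRunnerUtils.h" // assumed to provide MLIR_CRUNNERUTILS_EXPORT

extern "C" MLIR_CRUNNERUTILS_EXPORT void exampleSparseHelper(void *tensor) {
  (void)tensor; // a real helper would manipulate the sparse tensor storage here
}
```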
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D126105
The Transform dialect uses the side effect modeling mechanism to record the
effects of the transform ops on the mapping between Transform IR values and
Payload IR ops. Introduce a checker pass that warns if a Transform IR value is
used after it has been freed (consumed). This pass is mostly intended as a
debugging aid in addition to the verification/assertion mechanisms in the
transform interpreter. It reports all potential use-after-free situations.
The implementation makes a series of simplifying assumptions and errs on the
conservative side. A more advanced implementation would rely on a data-flow-like
analysis associated with a side-effect resource rather than a value, which is
currently not supported by the analysis infrastructure.
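As a rough illustration of the conservative checking strategy, here is a minimal C++ sketch with hypothetical types (not the MLIR implementation): operations are visited in order, handles consumed by a "free" effect are recorded, and any later use of such a handle is reported.
```
#include <cstdio>
#include <set>
#include <vector>

struct Op {
  std::vector<int> usedHandles;     // transform IR values read by this op
  std::vector<int> consumedHandles; // transform IR values this op frees
};

// Walk the ops in order and warn about potential use-after-free.
void checkUseAfterFree(const std::vector<Op> &ops) {
  std::set<int> freed;
  for (const Op &op : ops) {
    for (int handle : op.usedHandles)
      if (freed.count(handle))
        std::printf("warning: handle %d used after being consumed\n", handle);
    for (int handle : op.consumedHandles)
      freed.insert(handle);
  }
}
```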
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D126381
This change adds a new op `alloc_tensor` to the bufferization dialect. During bufferization, this op is always lowered to a buffer allocation (unless it is "eliminated" by a pre-processing pass). It is useful to have such an op in tensor land, because it allows users to model tensor SSA use-def chains (which drive bufferization decisions) and because tensor SSA use-def chains can be analyzed by One-Shot Bufferize, while memref values cannot.
This change also replaces all uses of linalg.init_tensor in bufferization-related code with bufferization.alloc_tensor.
linalg.init_tensor and bufferization.alloc_tensor are similar, but the purpose of the former one is just to carry a shape. It does not indicate a memory allocation.
linalg.init_tensor is not suitable for modelling SSA use-def chains for bufferization purposes, because linalg.init_tensor is marked as not having side effects (in contrast to alloc_tensor). As such, it is legal to move linalg.init_tensor ops around/CSE them/etc. This is not desirable for alloc_tensor; it represents an explicit buffer allocation while still in tensor land and such allocations should not suddenly disappear or get moved around when running the canonicalizer/CSE/etc.
Differential Revision: https://reviews.llvm.org/D126003
Some functions like `stpncpy` are implemented in terms of `memset` but are not
currently using `-fno-builtin-memset`. This is somewhat hidden by the fact that
we use `-ffreestanding` globally and that `-ffreestanding` implies
`-fno-builtin` for Clang.
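For illustration only (not the libc source), this is the kind of code that makes the flag necessary: the compiler's idiom recognition can turn such a loop back into a call to `memset`, which is a problem when the surrounding code itself implements `memset`-family functions.
```
// A byte-filling loop as it might appear inside a string function. Without
// -fno-builtin-memset (or -ffreestanding/-fno-builtin), the compiler may
// recognize the loop and replace it with a call to memset.
void fill_bytes(char *dst, char value, unsigned long count) {
  for (unsigned long i = 0; i < count; ++i)
    dst[i] = value;
}
```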
This patch also removes `-mllvm -combiner-global-alias-analysis`, which is
Clang-specific and does not bring substantial gains on modern processors.
We also keep `-mllvm --tail-merge-threshold=0` for aarch64 in CMakeLists.txt
but omit it in the Bazel config. This is because Bazel consumes the source
files directly and so can use PGO to make optimal decisions locally.
Differential Revision: https://reviews.llvm.org/D125894
The approach I took was to define a dialect 'extern' attribute that a GlobalOp can take as a value to signify external linkage. I think this approach should compose well and should also work with wherever the OpaqueElements work goes in the future (since that is just another kind of attribute). I special cased the GlobalOp parser/printer for this case because it is significantly easier on the eyes.
In the discussion, Jeff Niu had proposed an alternative syntax for GlobalOp that I ended up not taking. I did try to implement it, but a) I don't think it made anything easier to read in the common case, and b) it made the parsing/printing logic a lot more complicated (I think I would need a completely custom parser/printer to do it well). Please have a look at the common cases where the global type and initial value type match: I don't think what I have is too bad. The less common cases seem OK to me.
I chose to only implement the direct, constant load op since that is non-side-effecting and there was still discussion pending on that.
Differential Revision: https://reviews.llvm.org/D124318
Lowering through libm gives us a baseline version, even though it's not
going to be particularly fast. This is similar to what we do for some
math dialect ops.
Differential Revision: https://reviews.llvm.org/D125550
This is the first implementation of complex (f64 and f32) support
in the sparse compiler, with complex add/mul as the first operations.
Note that various features are still TBD, such as other ops and
reading in complex values from file. Also note that
std::complex<float> had a bit of an ABI issue when passed as a
single argument. It is still TBD whether better solutions are possible.
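For context, a hedged sketch of a common way such ABI mismatches are sidestepped (hypothetical function name, not necessarily what the sparse runtime does): pass the complex value indirectly across the extern "C" boundary instead of by value.
```
#include <complex>

// Hypothetical example: reading the complex value through a pointer avoids
// relying on how a by-value std::complex<float> argument is passed.
extern "C" void exampleInsertComplex(void *storage, const std::complex<float> *value) {
  std::complex<float> v = *value;
  // ... insert v into the data structure referenced by `storage` ...
  (void)storage;
  (void)v;
}
```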
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D125596
This pass is meant to handle computationally complex operations like
tensor.pad which are not simply lowered to the exact same operation in
the memref dialect.
Differential Revision: https://reviews.llvm.org/D125384
This patch fixes the includes for the new UInt class so that the API
test now passes. Additionally, it fixes the Bazel files to account for
the new dependencies.
Differential Revision: https://reviews.llvm.org/D125490
Add lowering of vector.warp_execute_on_lane_0 into scf.if plus memory
transfers for the operands and yielded values.
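Conceptually, the lowering looks roughly like the following C++ model (lane_id, the shared buffer, and the implied barriers are hypothetical stand-ins, not the generated IR):
```
// Operands are transferred to memory, the region body runs only on lane 0
// (the scf.if part), and the yielded value is read back by every lane.
void warpExecuteOnLane0Model(int lane_id, float *buffer, float operand,
                             float *result) {
  buffer[lane_id] = operand; // transfer the operand to memory
  // (warp barrier)
  if (lane_id == 0) {        // corresponds to the generated scf.if
    float sum = 0.0f;
    for (int i = 0; i < 32; ++i)
      sum += buffer[i];      // the body can see all lanes' operands
    buffer[0] = sum;         // transfer the yielded value to memory
  }
  // (warp barrier)
  *result = buffer[0];       // every lane reads the yielded value back
}
```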
This also adds an integration test running on a GPU warp. The same tests can
later be re-used with different comment lines to test distribution
transformations.
This is mostly based on @springerm's contribution.
Differential Revision: https://reviews.llvm.org/D125430
While executing the test suite for TensorFlow (v2.8.0), we encountered multiple test case failures with the error below:
```
'z14' is not a recognized processor for this target
```
This patch adds the s390x target to the build target list. It fixes test case failures in multiple modules of TensorFlow on the s390x architecture. It has also been tested to have no effect on x86 machines.
Reviewed By: GMNGeoffrey
Differential Revision: https://reviews.llvm.org/D125096
Move async copy operations to NVGPU as they only exist on NVIDIA targets and
are designed to match PTX semantics. This also allows us to add more
fine-grained caching hint attributes to the op.
Add a hint to bypass L1 and hook it up to the NVVM op.
Differential Revision: https://reviews.llvm.org/D125244
This ensures that attributes such as the index bitwidth propagate
correctly to the AMDGPUToROCDL patterns.
Differential Revision: https://reviews.llvm.org/D125320
By analogy with the NVGPU dialect, introduce an AMDGPU dialect for
AMD-specific intrinsic wrappers.
The dialect initially includes wrappers around the raw buffer intrinsics.
On AMD GPUs, a memref can be converted to a "buffer descriptor" that
allows more precise control of memory access, for example by allowing
out-of-bounds loads/stores to be replaced by 0/ignored without adding
additional conditional logic, which is important for performance.
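As a purely conceptual C++ model of that hardware behavior (illustrative only, not the AMDGPU ISA or the MLIR lowering):
```
// A buffer-descriptor load: the hardware bounds-checks the access against the
// descriptor, so no explicit conditional has to be emitted around the load.
float rawBufferLoadModel(const float *base, unsigned num_elements, unsigned index) {
  if (index >= num_elements)
    return 0.0f;        // out-of-bounds reads return 0 / stores are ignored
  return base[index];   // in-bounds accesses behave like ordinary loads
}
```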
The repository currently contains a limited conversion from
transfer_read/transfer_write to Mubuf intrinsics, which are older,
deprecated intrinsics for the same functionality.
The new amdgpu.raw_buffer_* ops allow these operations to be used
explicitly and allow including metadata such as whether the target
chipset is an RDNA chip or not (which impacts the interpretation of
some bits in the buffer descriptor), while still maintaining an
MLIR-like interface.
(This change also exposes the floating-point atomic add intrinsic.)
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D122765
Currently, the sequence of Transform dialect operations only supports a single
use of each operand (verified by the `transform.sequence` operation). This was
originally motivated by the need to guard against accessing a payload IR
operation associated with a transform IR value after this operation has likely
been rewritten by a transformation. However, not all Transform dialect
operations rewrite payload IR; in particular, "navigation" operations such as
`transform.pdl_match` do not.
Introduce memory effects to the Transform dialect operations to describe their
effect on the payload IR and the mapping between payload IR operations and
transform IR values. Use these effects to replace the single-use rule, allowing
repeated reads and disallowing use-after-free, where operations with the "free"
effect are considered to "consume" the transform IR value and rewrite the
corresponding payload IR operations. As an additional improvement, this
enables code motion transformations on the transform IR itself.
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D124181
Add a shape func op for use (primarily) in the shape function_library op. This
allows setting a default dialect, for somewhat simpler authoring. This is a
minimal version of the ops needed.
Differential Revision: https://reviews.llvm.org/D124055
* Move Module Bufferization to the bufferization dialect. The implementation is split into `OneShotModuleBufferize.cpp` and `FuncBufferizableOpInterfaceImpl.cpp`, so that the external model implementation can be easily moved to the func dialect in the future.
* Split and clean up test cases. A few test cases are still remaining in Linalg and will be updated separately.
* `linalg.inplaceable` is renamed to `bufferization.writable` to accurately reflect its current usage.
* Attributes and their verifiers are moved from the Linalg dialect to the Bufferization dialect.
* Expand documentation.
* Add a new flag to One-Shot Bufferize to allow for function boundary bufferization.
Differential Revision: https://reviews.llvm.org/D122229
This introduces a pair of ops to the Transform dialect that connect it to PDL
patterns. The Transform dialect relies on PDL for matching the Payload IR ops that
are about to be transformed. For this purpose, it provides a container op for
patterns, a "pdl_match" op and transform interface implementations that call
into the pattern matching infrastructure.
To enable the caching of compiled patterns, this also provides an extension
mechanism for TransformState. Extensions allow one to store additional
information in the TransformState and thus communicate it between different
Transform dialect operations when they are applied. They can be added and
removed when applying transform ops. The first client is an extension
containing a symbol table in which the pattern names are resolved, together
with a pattern compilation cache.
Depends On D123664
Reviewed By: Mogball
Differential Revision: https://reviews.llvm.org/D124007
The legacy passes are deprecated now and will be removed in the near
future. This patch removes the legacy passes in coroutines.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D123918
Sequence is an important transform combination primitive that just indicates
transform ops being applied in a row. The simplest version fails
immediately if any transformation in the sequence fails. Introducing this
operation allows one to start placing transform IR within other IR.
Depends On D123135
Reviewed By: Mogball, rriddle
Differential Revision: https://reviews.llvm.org/D123664
This dialect provides operations that can be used to control transformation of
the IR using a different portion of the IR. It refers to the IR being
transformed as payload IR, and to the IR guiding the transformation as
transform IR.
The main use case for this dialect is orchestrating fine-grain transformations
on individual operations or sets thereof. For example, it may involve finding
loop-like operations with specific properties (e.g., large size) in the payload
IR, applying loop tiling to those and only those operations, and then applying
loop unrolling to the inner loops produced by the previous transformations. As
such, it is not intended as a replacement for the pass infrastructure, nor for
the pattern rewriting infrastructure. In the most common case, the transform IR
will be processed and applied to payload IR by a pass. Transformations
expressed by the transform dialect may be implemented using the pattern
infrastructure or any other relevant MLIR component.
This dialect is designed to be extensible; that is, clients of this dialect are
allowed to inject additional operations into it using the
`TransformDialectExtension` mechanism newly introduced in this patch. This allows the
dialect to avoid a dependency on the implementation of the transformation as
well as to avoid introducing dialect-specific transform dialects.
See https://discourse.llvm.org/t/rfc-interfaces-and-dialects-for-precise-ir-transformation-control/60927.
Reviewed By: nicolasvasilache, Mogball, rriddle
Differential Revision: https://reviews.llvm.org/D123135
Move the operations that correspond to LLVM IR intrinsics into a separate .td
file. This makes it easier to maintain the intrinsics and decreases the compile
time of LLVMDialect.cpp by ~25%.
Depends On D123310
Reviewed By: wsmoses, jacquesguan
Differential Revision: https://reviews.llvm.org/D123315
Add annotations on an as-needed basis, currently only for memrefCopy. In general, all C API functions that take pointers to memory allocated/initialized inside the JIT-compiled code must be annotated to be able to run with MSan.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123557
(With C++ exceptions, `clang++ --target=mips64{,el}-linux-gnu -fpie -pie
-fuse-ld=lld` has link errors (lld does not implement some strange R_MIPS_64
.eh_frame handling in GNU ld). However, sanitizer-x86_64-linux-qemu used this to
build ScudoUnitTests. Pinned ScudoUnitTests to -no-pie.)
Default the option introduced in D113372 to ON to match all(?) major Linux
distros. This matches GCC and improves consistency with Android and linux-musl
which always default to PIE.
Note: CLANG_DEFAULT_PIE_ON_LINUX may be removed in the future.
Differential Revision: https://reviews.llvm.org/D120305
Putting __support/FPUtil/x86_64/FMA.h in `hdrs` will trigger a
compilation action for that header, and it will always `#error` out for
non-FMA targets. Move these platform-specific headers that are
conditionally included to `textual_hdrs` instead.
Make FMA flag checks more accurate for x86-64 targets, and refactor
polyeval to use multiply and add instead when FMA instructions are not
available.
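An illustrative C++ sketch of that fallback (not the libc polyeval code; the `__FMA__` guard is an assumption about how FMA availability might be detected):
```
#include <cmath>

// One Horner step of polynomial evaluation: a fused multiply-add when the
// target supports it, a separate multiply and add otherwise.
double polyeval_step(double acc, double x, double coeff) {
#if defined(__FMA__)
  return std::fma(acc, x, coeff);
#else
  return acc * x + coeff;
#endif
}
```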
Reviewed By: michaelrj, sivachandra
Differential Revision: https://reviews.llvm.org/D123335
(The upgrade of the ppc64le bot and D121257 have fixed compiler-rt failures. Tested by nemanjai.)
Default the option introduced in D113372 to ON to match all(?) major Linux
distros. This matches GCC and improves consistency with Android and linux-musl
which always default to PIE.
Note: CLANG_DEFAULT_PIE_ON_LINUX may be removed in the future.
Differential Revision: https://reviews.llvm.org/D120305
Or rather, error out if it is set to something other than ON. This
removes the ability to enable the legacy pass manager by default,
but does not remove the ability to explicitly enable it through
various flags like -flegacy-pass-manager or -enable-new-pm=0.
I checked, and our test suite definitely doesn't pass with
LLVM_ENABLE_NEW_PASS_MANAGER=OFF anymore.
Differential Revision: https://reviews.llvm.org/D123126
Apply scale should be optionally disabled when lowering via TosaToStandard.
In most cases it should persist until the lowering to a specific backend.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D122948
Pass the padding options using arrays instead of lambdas. In particular, pass the padding value as a string and use the argument parser to create the padding value. Arrays are a more natural choice that matches the current use cases and avoids converting arrays to lambdas.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D122309
These dialects have Python bindings and are tested, but were not
previously exposed through Bazel filegroups.
Differential Revision: https://reviews.llvm.org/D122138
This did not show up as a build error because the build also works if the
dependency is transitively available. But there should be a direct
dependency anyway.
Prior to this change, the __support_cpp_array_ref target's only dependency was libc_root,
but it #includes "TypeTraits.h" (and Array.h, for that matter).
These dependencies matter when building in distributed build systems: the relevant
files must be known for the distributed build to ship them to the executor.
Differential Revision: https://reviews.llvm.org/D121974
This binary is used in LIT tests.
Test: `bazel run --config=generic_clang -c opt @llvm-project//llvm:llvm-remark-size-diff -- --help`
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D121742
Previously, the support_standalone_cpp target contained all of the files
in the __support/cpp folder. This change splits these out so that only
what is needed is included. In addition, this change adds the new
support files that previously didn't have targets.
Reviewed By: lntue, gchatelet
Differential Revision: https://reviews.llvm.org/D121314
This improves the modularity of the bufferization.
From now on, all ops that do not implement BufferizableOpInterface are considered hoisting barriers. Previously, all ops that do not implement the interface were not considered barriers and such ops had to be marked as barriers explicitly. This was unsafe because we could've hoisted across unknown ops where it was not safe to hoist.
As a side effect, this allows for cleaning up AffineBufferizableOpInterfaceImpl. This build unit is no longer needed and can be deleted.
Differential Revision: https://reviews.llvm.org/D121519
Add support for translating data layout specifications for integer and float
types between MLIR and LLVM IR. This is a first step towards removing the
string-based LLVM dialect data layout attribute on modules. The latter is still
available and will remain so until the first-class MLIR modeling can fully
replace it.
Depends On D120739
Reviewed By: wsmoses
Differential Revision: https://reviews.llvm.org/D120740
See post-commit discussion on https://reviews.llvm.org/D120305.
This change breaks the clang-ppc64le-rhel buildbot, though
there is suspicion that it's an issue with the bot. The change
also had a larger than expected impact on compile-time and
code-size.
This reverts commit 3c4ed02698
and some followup changes.
Previously, OpDSL operations used hardcoded type conversion operations (cast or cast_unsigned). Supporting signed and unsigned casts thus meant implementing two different operations. Type function attributes allow us to define a single operation that has a cast type function attribute, which at operation instantiation time may be set to cast or cast_unsigned. We may, for example, define a matmul operation with a cast argument:
```
@linalg_structured_op
def matmul(A=TensorDef(T1, S.M, S.K), B=TensorDef(T2, S.K, S.N),
           C=TensorDef(U, S.M, S.N, output=True),
           cast=TypeFnAttrDef(default=TypeFn.cast)):
  C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```
When instantiating the operation the attribute may be set to the desired cast function:
```
linalg.matmul(lhs, rhs, outs=[out], cast=TypeFn.cast_unsigned)
```
The revision introduces an enum in the Linalg dialect that maps one-to-one to the type functions defined by OpDSL.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D119718
Default the option introduced in D113372 to ON to match all(?) major Linux
distros. This matches GCC and improves consistency with Android and linux-musl
which always default to PIE.
Note: CLANG_DEFAULT_PIE_ON_LINUX will be removed in the future.
Reviewed By: thesamesam
Differential Revision: https://reviews.llvm.org/D120305
This transformation is useful to break dependencies between consecutive loop
iterations by increasing the size of a temporary buffer. This is usually
combined with heavy software pipelining.
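A rough C++ illustration of the idea (the produce/consume stages are hypothetical, and this is not the MLIR transformation itself): doubling the temporary buffer lets iteration i write into one copy while iteration i-1 may still be using the other.
```
void produce(float *buf);       // hypothetical pipeline stages
void consume(const float *buf);

void pipelineWithDoubleBuffer(int num_iterations) {
  static float buffer[2][256];  // temporary buffer, multi-buffered by a factor of 2
  for (int i = 0; i < num_iterations; ++i) {
    float *current = buffer[i % 2]; // not the copy still in flight from i - 1
    produce(current);
    consume(current);
  }
}
```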
Differential Revision: https://reviews.llvm.org/D119406
It is time to compose Linalg related optimizations with SparseTensor
related optimizations. This is a careful first start by adding some
general Linalg optimizations "upstream" of the sparse compiler in the
full sparse compiler pipeline. Some minor changes were needed to make
those optimizations aware of sparsity.
Note that after this, we will add a sparse specific fusion rule,
just to demonstrate the power of the new composition.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D119971
This commit adds a pattern to wrap a tensor.pad op with
an scf.if op to separate the cases where we don't need padding
(all pad sizes are actually zeros) and where we indeed need
padding.
This pattern is meant to handle padding inside tiled loops.
In such cases the padding sizes typically depend on the
loop induction variables. Splitting them would allow treating
perfect tiles and edge tiles separately.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D117018
The pad-slice swap pattern generates `scf.if` and `tensor.generate`
to guard against zero-sized slices if it cannot prove the slice is
always non-zero. This is safe but quite conservative. It can be
unnecessary for cases where we know by problem definition that such cases
do not exist, even with dynamically shaped ops or unknown tile/slice
sizes, e.g., convolution padding size = 1 with kernel dim size = 3.
So this commit introduces a control to the pattern to specify
whether to generate the if construct, to handle such cases better,
given that once the if construct is materialized, it is very hard
to analyze and simplify.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D117017
Rationale:
- empty line between the main include for this file
- moved an include that actually defines code into the right section
Note that this revision started as breaking up ops/attrs even more
(for bug https://github.com/llvm/llvm-project/issues/52748), but due
to the connection in Dialect.initialize(), this cannot be split further.
All the heavy-lifting refactoring was already done by River in a previous cleanup.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D119617
Makes lld-link work in a non-MSVC shell by autodetecting the MSVC toolchain. Also
adds support for /winsysroot and a few other switches.
All this is done by refactoring to share code with clang-cl's existing support
for the same.
Differential Revision: https://reviews.llvm.org/D118070
Although type punning through a union is defined in C, it is UB in C++.
This patch introduces a bit_cast function to convert between types in a safe way.
This is necessary to get llvm-libc to compile with GCC.
This patch is extracted from D119002.
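A minimal sketch of the technique (not the llvm-libc implementation): copy the object representation with memcpy instead of reading an inactive union member.
```
#include <cstring>

// memcpy over the object representation is well-defined in both C and C++,
// unlike reading from a union member that was not most recently written.
template <class To, class From>
To bit_cast_example(const From &from) {
  static_assert(sizeof(To) == sizeof(From), "sizes must match");
  To to;
  std::memcpy(&to, &from, sizeof(To));
  return to;
}

// Usage: unsigned bits = bit_cast_example<unsigned>(1.0f);
```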
Differential Revision: https://reviews.llvm.org/D119145
Build failures are not a particularly helpful way to enforce not using
deprecated APIs, and that isn't the point of the Bazel build.
At the same time, this removes `-Wno-unused`: this is a check that we do
enforce in the Google internal build and so are OK maintaining in the
upstream Bazel build (the comment about not wanting to do so was from a
time when this was in a separate repository and I was the only one
maintaining it).
Differential Revision: https://reviews.llvm.org/D118671
D116542 adds EmbedBufferInModule which introduces a layer violation
(https://llvm.org/docs/CodingStandards.html#library-layering).
See 2d5f857a1e for detail.
EmbedBufferInModule does not use BitcodeWriter functionality and should be moved
to LLVMTransformUtils. While here, change the function case to the prevailing
convention.
It seems that EmbedBufferInModule just follows the steps of
EmbedBitcodeInModule. EmbedBitcodeInModule calls WriteBitcodeToFile but has IR
update operations which ideally should be refactored to another library.
Reviewed By: jhuber6
Differential Revision: https://reviews.llvm.org/D118666
There is a layer violation that can break clang -fmodule-name=X -fmodules-strict-decluse builds:
* LLVMTransformUtils has `#include "llvm/Bitcode/BitcodeWriterPass.h"`
* LLVMBitWriter depends on LLVMTransformUtils after D116542
Temporarily work around the issue.
This reduces the dependencies of the MLIRVector target and makes the dialect consistent with other dialects.
Differential Revision: https://reviews.llvm.org/D118533
Also reimplement `std-bufferize` in terms of BufferizableOpInterface-based bufferization. The old `std.select` bufferization pattern is no longer needed and deleted.
Differential Revision: https://reviews.llvm.org/D118559
Refactor sqrt implementations:
- Move architecture specific instructions from `src/math/<arch>` to `src/__support/FPUtil/<arch>` folder.
- Move generic implementation of `sqrt` to `src/__support/FPUtil/generic` folder and add it as a header library.
- Use `src/__support/FPUtil/sqrt.h` for architecture/generic selections.
- Add unit tests for generic implementation of `sqrt`.
Reviewed By: sivachandra
Differential Revision: https://reviews.llvm.org/D118173
This would have enabled me to notice that the MLIR test file needed updating too, which would have made the test file change in 2074eef5db unnecessary.
layering_check is already enabled in mlir/BUILD.bazel. I don't know why I didn't see the other breakage there.
Differential Revision: https://reviews.llvm.org/D118125
The underlying issue here is that line 125 checks whether linkopts was assigned (by line 120), but it is not initialized, so that becomes a crash. This was noticed by someone trying to use Docker, where no terminfo library is installed.
Reviewed By: GMNGeoffrey
Differential Revision: https://reviews.llvm.org/D118270
No longer go through an external model. Also put BufferizableOpInterface into the same build target as the BufferizationDialect. This allows for some code reuse between BufferizationOps canonicalizers and BufferizableOpInterface implementations.
Differential Revision: https://reviews.llvm.org/D117987
This is in preparation of unifying the existing bufferization with One-Shot bufferization.
A subsequent commit will replace `tensor-bufferize`'s implementation with the BufferizableOpInterface-based implementation and move over missing test cases.
Differential Revision: https://reviews.llvm.org/D117984
This commit is the first step towards unifying core bufferization and One-Shot Bufferize.
This commit does not move over the implementations of BufferizableOpInterface yet. This will be done in separate commits. This change does also not move the unit tests yet. The tests will be moved together with op interface implementations and split into separate files.
Differential Revision: https://reviews.llvm.org/D117641
The code in `BufferizableOpInterface`'s header/source no longer contains any analysis code. This makes it easier to run the bufferization with a different analysis or without any analysis.
Differential Revision: https://reviews.llvm.org/D117478