Remove uses of the to-be-deprecated API. In cases where the correct
element type was not immediately obvious to me, fall back to an
explicit getPointerElementType() call.
For the use in LLVMOps.td I used the getPointerElementType()
escape hatch, as it's not obvious to me how the load type
should be properly obtained here.
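For illustration only, this is a generic sketch of the pattern rather than code from the patch; the helper name is made up:
```
#include "llvm/IR/IRBuilder.h"

// Illustrative only: emit a load with an explicit element type when it is
// known, and fall back to the getPointerElementType() escape hatch otherwise.
static llvm::Value *emitLoad(llvm::IRBuilder<> &builder, llvm::Value *ptr,
                             llvm::Type *knownElementType = nullptr) {
  llvm::Type *elementType =
      knownElementType ? knownElementType
                       : ptr->getType()->getPointerElementType();
  return builder.CreateLoad(elementType, ptr);
}
```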
A series of preceding patches changed the mechanism for translating MLIR to
LLVM IR to use dialect interface with delayed registration. It is no longer
necessary for specific dialects to derive from ModuleTranslation. Remove all
virtual methods from ModuleTranslation and factor out the entry point to be a
free function.
Also perform some cleanups in ModuleTranslation internals.
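As a hedged sketch of the resulting usage (the header path and exact signature here are assumptions rather than verbatim from the patch), callers now go through the free function instead of a dialect-specific ModuleTranslation subclass:
```
#include "mlir/Target/LLVMIR/Export.h" // assumed location of the entry point
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

// Illustrative only: translate an MLIR module via the free-function entry point.
static std::unique_ptr<llvm::Module>
lowerToLLVMIR(mlir::ModuleOp module, llvm::LLVMContext &llvmContext) {
  std::unique_ptr<llvm::Module> llvmModule =
      mlir::translateModuleToLLVMIR(module, llvmContext);
  if (!llvmModule)
    llvm::errs() << "translation to LLVM IR failed\n";
  return llvmModule;
}
```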
Depends On D96774
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D96775
This new invoke will pack a list of arguments before calling the
`invokePacked` method. It accepts the returned value as an output argument
wrapped in `ExecutionEngine::Result<T>`, and delegates the packing of
arguments to a trait to allow for customization for some types.
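A hedged usage sketch, assuming `engine` is an already-created `mlir::ExecutionEngine` and that the JIT-compiled function `foo` (an illustrative name) takes and returns an int32_t:
```
// Illustrative only: the return value comes back via the Result<T> wrapper
// passed as an output argument.
int32_t input = 42;
int32_t output = 0;
llvm::Error error = engine->invoke(
    "foo", input, mlir::ExecutionEngine::Result<int32_t>(output));
if (error)
  llvm::errs() << "invocation failed: " << llvm::toString(std::move(error))
               << "\n";
```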
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D95961
All these potential null pointer dereferences are reported by my static analyzer for null smart pointer dereferences, which has a different implementation from `alpha.cplusplus.SmartPtr`.
The checked pointers in this patch are initialized by Target::createXXX functions. When the creator function pointer is not correctly set, a null pointer is returned; the creator function may also legitimately return a null pointer itself.
Some of these checks may be redundant because the pointers are already checked before entering the function, but I fixed them all in this patch. I submit this fix because 1) similar checks are found in other places in the LLVM codebase for the same return value of the function; and 2) some of the pointers are dereferenced before they are checked, which would trigger a null pointer dereference if the return value is nullptr.
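A hedged sketch of the guard pattern being added (the surrounding variables `target`, `targetTriple`, and `options` are assumed context, not code from the patch):
```
// Illustrative only: Target::createTargetMachine() can return null, e.g. when
// the creator function pointer was never registered, so check before use.
std::unique_ptr<llvm::TargetMachine> machine(target->createTargetMachine(
    targetTriple, /*CPU=*/"", /*Features=*/"", options, /*RM=*/llvm::None));
if (!machine) {
  llvm::errs() << "failed to create a TargetMachine for " << targetTriple
               << "\n";
  return nullptr;
}
```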
Reviewed By: tejohnson, MaskRay, jpienaar
Differential Revision: https://reviews.llvm.org/D91410
These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.
Differential Revision: https://reviews.llvm.org/D91572
This patch introduces a SPIR-V runner. The aim is to run a GPU
kernel on a CPU via GPU -> SPIR-V -> LLVM conversions. This is a first
prototype, so more features will be added in due time.
- Overview
The runner follows a similar flow to the other runners in-tree. However,
having converted the kernel to SPIR-V, we encode the bind attributes of
global variables that represent kernel arguments. Then the SPIR-V module is
converted to LLVM. On the host side, we emulate passing the data to the device
by creating, in the main module, globals with the same symbolic names as in the
kernel module. These global variables are later linked with the ones from the
nested module. We copy data from the kernel arguments to the globals, call the
kernel function from the nested module, and then copy the data back.
- Current state
At the moment, the runner is capable of running two modules, one nested in
the other. The kernel module must contain exactly one kernel function. Also,
the runner supports only rank-1 integer memref types as arguments (to be extended).
- Enhancement of JitRunner and ExecutionEngine
To translate nested modules to LLVM IR, JitRunner and ExecutionEngine were
altered to take an optional (defaulting to `nullptr`) function reference that
acts as a custom LLVM IR module builder. This allows customizing LLVM IR module
creation from MLIR modules, as sketched below.
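As a hedged sketch (the hook's exact signature is an assumption based on the description above), such a builder is just a callable mapping the MLIR module to an llvm::Module:
```
// Illustrative only: a custom LLVM IR module builder handed to JitRunner /
// ExecutionEngine. It can pre-process the nested kernel module before (or
// while) translating to LLVM IR.
static std::unique_ptr<llvm::Module>
buildCustomLLVMModule(mlir::ModuleOp module, llvm::LLVMContext &context) {
  // Custom handling of the nested kernel module would go here.
  return mlir::translateModuleToLLVMIR(module, context);
}
```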
Reviewed By: ftynse, mravishankar
Differential Revision: https://reviews.llvm.org/D86108
Due to the original type system implementation, LLVMDialect in MLIR contains an
LLVMContext in which the relevant objects (types, metadata) are created. When
an MLIR module using the LLVM dialect (and the related intrinsic-based dialects
NVVM, ROCDL, AVX512) is converted to LLVM IR, the resulting module could only
live in the LLVMContext owned by the dialect. The type system no longer relies on the
LLVMContext, so this limitation can be removed. Instead, translation functions
now take a reference to an LLVMContext in which the LLVM IR module should be
constructed. The caller of the translation functions is responsible for
ensuring the same LLVMContext is not used concurrently as the translation no
longer uses a dialect-wide context lock.
As an additional bonus, this change removes the need to recreate the LLVM IR
module in a different LLVMContext through printing and parsing back, decreasing
the compilation overhead in JIT and GPU-kernel-to-blob passes.
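A hedged sketch of the caller-side contract; it assumes the two MLIR modules can safely be read concurrently and uses the translateModuleToLLVMIR signature described above:
```
#include <thread>

// Illustrative only: each thread owns its LLVMContext; translation no longer
// serializes on a dialect-wide context lock.
static void translateTwoModules(mlir::ModuleOp moduleA, mlir::ModuleOp moduleB) {
  std::thread first([&] {
    llvm::LLVMContext contextA; // private to this thread
    auto llvmA = mlir::translateModuleToLLVMIR(moduleA, contextA);
  });
  std::thread second([&] {
    llvm::LLVMContext contextB; // private to this thread
    auto llvmB = mlir::translateModuleToLLVMIR(moduleB, contextB);
  });
  first.join();
  second.join();
}
```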
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85443
MLIR tests in "mlir/test/mlir-cpu-runner" fail on SystemZ (z14) because
of an incompatible data layout error. This patch fixes that by setting the
host CPU name in createTargetMachine().
Differential Revision: https://reviews.llvm.org/D80130
This change makes the ModuleTranslation thread-safe by locking on the
LLVMContext. Furthermore, we now clone the LLVM module into a new
context when compiling to PTX, similar to what the OrcJIT does.
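For illustration (not the patch itself), cloning an llvm::Module into a fresh context can be done with a bitcode round-trip, similar to what ORC's helpers do; the function name is made up:
```
#include "llvm/ADT/SmallString.h"
#include "llvm/Bitcode/BitcodeReader.h"
#include "llvm/Bitcode/BitcodeWriter.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/raw_ostream.h"

// Illustrative only: serialize the module to bitcode and re-parse it in the
// destination LLVMContext.
static std::unique_ptr<llvm::Module>
cloneIntoContext(const llvm::Module &module, llvm::LLVMContext &newContext) {
  llvm::SmallString<0> buffer;
  llvm::raw_svector_ostream os(buffer);
  llvm::WriteBitcodeToFile(module, os);
  auto clonedOrErr = llvm::parseBitcodeFile(
      llvm::MemoryBufferRef(buffer.str(), "cloned-module"), newContext);
  if (!clonedOrErr) {
    llvm::consumeError(clonedOrErr.takeError());
    return nullptr;
  }
  return std::move(*clonedOrErr);
}
```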
Differential Revision: https://reviews.llvm.org/D78207
Summary:
This way, clients can opt out of the GDB notification listener. Also, this
changes the semantics of enabling the object cache, which previously seemed
the wrong way around.
Reviewers: rriddle, nicolasvasilache, ftynse, andydavis1
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D75787
MLIR ExecutionEngine and derived tools (e.g., mlir-cpu-runner) would trigger an
assertion inside ORC JIT while ExecutionEngine is being destructed after a
failed linking due to a missing function definition. The reason for this is the
JIT lookup, which may return an Error referring to strings stored internally by
the JIT. If the Error outlives the ExecutionEngine, it would have a
dangling reference, which is currently caught by an assertion inside the JIT thanks
to hand-rolled reference counting. Rewrap the error message into a string
before returning.
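For illustration, rewrapping looks roughly like this (the helper name is made up, not the actual code):
```
#include "llvm/Support/Error.h"
#include "llvm/Support/raw_ostream.h"

// Illustrative only: flatten the JIT error into an owned std::string so no
// reference into JIT-internal storage can outlive the ExecutionEngine.
static llvm::Error rewrapAsStringError(llvm::Error original) {
  std::string message;
  llvm::raw_string_ostream os(message);
  llvm::handleAllErrors(std::move(original),
                        [&](const llvm::ErrorInfoBase &info) { info.log(os); });
  return llvm::make_error<llvm::StringError>(os.str(),
                                             llvm::inconvertibleErrorCode());
}
```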
Differential Revision: https://reviews.llvm.org/D75508
This fixes test failures caused by a change to the name of the main
dylib, now called 'main'. It also hardens the engine against potential
future changes to the name.
This is how it should've been and brings it more in line with
std::string_view. There should be no functional change here.
This is mostly mechanical from a custom clang-tidy check, with a lot of
manual fixups. It uncovers a lot of minor inefficiencies.
This doesn't actually modify StringRef yet, I'll do that in a follow-up.
Changes to ORC in ce2207abaf modified the APIs in IRCompileLayer, which now
require the custom compiler to be wrapped in IRCompileLayer::IRCompiler. Even
though MLIR relies on ORC CompileUtils, the type is still visible in several
places in the code.
Adapt those to the new API.
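A hedged sketch of the adaptation; the class and member names here are illustrative rather than the exact code:
```
#include "llvm/ExecutionEngine/Orc/IRCompileLayer.h"

// Illustrative only: wrap an existing compile callback in the new
// IRCompileLayer::IRCompiler interface.
class WrappedCompiler : public llvm::orc::IRCompileLayer::IRCompiler {
public:
  using CompileFn =
      std::function<std::unique_ptr<llvm::MemoryBuffer>(llvm::Module &)>;

  WrappedCompiler(llvm::orc::IRSymbolMapper::ManglingOptions options,
                  CompileFn compileFn)
      : IRCompiler(std::move(options)), compileFn(std::move(compileFn)) {}

  llvm::Expected<std::unique_ptr<llvm::MemoryBuffer>>
  operator()(llvm::Module &module) override {
    return compileFn(module);
  }

private:
  CompileFn compileFn;
};
```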
* Fixes use of anonymous namespace for static methods.
* Uses explicit qualifiers (mlir::) instead of wrapping the definition with the namespace (sketched below).
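For illustration, with a hypothetical class and helper (not taken from the patch):
```
namespace mlir {
struct Runner {
  void run();
};
} // namespace mlir

// File-local helpers are marked static rather than being placed in an
// anonymous namespace.
static void runnerHelper() {}

// Preferred: explicitly qualified out-of-line definition...
void mlir::Runner::run() { runnerHelper(); }

// ...instead of reopening the namespace around the definition:
//   namespace mlir {
//   void Runner::run() { runnerHelper(); }
//   } // namespace mlir
```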
PiperOrigin-RevId: 286222654
- The JIT codegen was being run at the default -O0 level; instead,
propagate the opt level from the command line, as sketched below.
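For illustration, the mapping can look like this (the helper name is assumed, not the actual code):
```
#include "llvm/Support/CodeGen.h"

// Illustrative only: translate the command-line -O level into the codegen
// optimization level used when setting up the JIT's target machine.
static llvm::CodeGenOpt::Level toCodeGenOptLevel(unsigned optLevel) {
  switch (optLevel) {
  case 0:
    return llvm::CodeGenOpt::None;
  case 1:
    return llvm::CodeGenOpt::Less;
  case 2:
    return llvm::CodeGenOpt::Default;
  default:
    return llvm::CodeGenOpt::Aggressive;
  }
}
```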
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#123
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/123 from bondhugula:jit-runner 3b055e47f94c9a48bf487f6400787478738cda02
PiperOrigin-RevId: 267778586
The refactoring of ExecutionEngine dropped the usage of the irTransform function used to pass -O3 and other options to LLVM. As a consequence, the proper optimizations do not kick in on the LLVM side.
This CL makes use of the transform function and allows producing AVX512 instructions, on an internal example, when using:
`mlir-cpu-runner -dump-object-file=1 -object-filename=foo.o` combined with `objdump -D foo.o`.
The assembly produced resembles:
```
2b2e: 62 72 7d 48 18 04 0e vbroadcastss (%rsi,%rcx,1),%zmm8
2b35: 62 71 7c 48 28 ce vmovaps %zmm6,%zmm9
2b3b: 62 72 3d 48 a8 c9 vfmadd213ps %zmm1,%zmm8,%zmm9
2b41: 62 f1 7c 48 28 cf vmovaps %zmm7,%zmm1
2b47: 62 f2 3d 48 a8 c8 vfmadd213ps %zmm0,%zmm8,%zmm1
2b4d: 62 f2 7d 48 18 44 0e vbroadcastss 0x4(%rsi,%rcx,1),%zmm0
2b54: 01
2b55: 62 71 7c 48 28 c6 vmovaps %zmm6,%zmm8
2b5b: 62 72 7d 48 a8 c3 vfmadd213ps %zmm3,%zmm0,%zmm8
2b61: 62 f1 7c 48 28 df vmovaps %zmm7,%zmm3
2b67: 62 f2 7d 48 a8 da vfmadd213ps %zmm2,%zmm0,%zmm3
2b6d: 62 f2 7d 48 18 44 0e vbroadcastss 0x8(%rsi,%rcx,1),%zmm0
2b74: 02
2b75: 62 f2 7d 48 a8 f5 vfmadd213ps %zmm5,%zmm0,%zmm6
2b7b: 62 f2 7d 48 a8 fc vfmadd213ps %zmm4,%zmm0,%zmm7
```
etc.
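For illustration, re-enabling the transform programmatically looks roughly like this; the exact parameters of makeOptimizingTransformer and ExecutionEngine::create at this point in history are assumptions:
```
// Illustrative only: install an -O3 IR transformer so LLVM's optimization
// pipeline runs on the module before JIT codegen.
auto transformer = mlir::makeOptimizingTransformer(
    /*optLevel=*/3, /*sizeLevel=*/0, /*targetMachine=*/nullptr);
auto expectedEngine = mlir::ExecutionEngine::create(module, transformer);
```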
Fixes tensorflow/mlir#120
PiperOrigin-RevId: 267281097
This commit introduces the bits needed to dump JIT-compiled
objects to external files by passing an object cache to OrcJIT.
The new functionality is tested in mlir-cpu-runner under the flag
`dump-object-file`.
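A hedged sketch of the programmatic counterpart of the flag, assuming an engine created with the object cache enabled and a dump helper exposed on it:
```
// Illustrative only: write the JIT-compiled object out for offline
// inspection, e.g. with `objdump -D foo.o`.
engine->dumpToObjectFile("foo.o");
```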
Closes tensorflow/mlir#95
PiperOrigin-RevId: 266439265
This CL makes use of the standard LLVM LLJIT and removes the need for a custom JIT implementation within MLIR.
To achieve this, one needs to clone (i.e., serialize and deserialize) the produced llvm::Module into a new LLVMContext. This is currently necessary because the llvm::LLVMContext is owned by the LLVMDialect, somewhat deep in the call hierarchy.
In the future we should remove the reliance on round-tripping the llvm::Module by allowing the injection of an LLVMContext from the top level. Unfortunately this will require deeper API changes and impact multiple places. It is therefore left for future work.
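For illustration, the standard LLJIT flow used here looks roughly like the following; the helper name is made up and the exact ThreadSafeModule constructor is assumed:
```
#include "llvm/ExecutionEngine/Orc/LLJIT.h"

// Illustrative only: build an LLJIT instance and add the module that was
// cloned into its own LLVMContext.
static llvm::Expected<std::unique_ptr<llvm::orc::LLJIT>>
makeJit(std::unique_ptr<llvm::Module> llvmModule,
        std::unique_ptr<llvm::LLVMContext> llvmContext) {
  auto jitOrErr = llvm::orc::LLJITBuilder().create();
  if (!jitOrErr)
    return jitOrErr.takeError();
  std::unique_ptr<llvm::orc::LLJIT> jit = std::move(*jitOrErr);
  if (llvm::Error err = jit->addIRModule(llvm::orc::ThreadSafeModule(
          std::move(llvmModule), std::move(llvmContext))))
    return std::move(err);
  return std::move(jit);
}
```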
PiperOrigin-RevId: 264737459
Switch to the C++14 standard std::make_unique, as llvm::make_unique has been removed
(https://reviews.llvm.org/D66259). Also mark some targets as C++14 to ease the next
integrates.
PiperOrigin-RevId: 263953918
LLVM r368707 updated the APIs in llvm::orc::DynamicLibrarySearchGenerator to
use unique_ptr for holding the instance of the generator. Update our uses of
DynamicLibrarySearchGenerator in the ExecutionEngine to reflect that.
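For illustration, the updated usage holds the generator in a unique_ptr; the surrounding variable names and the registration call are assumptions, not the exact code:
```
// Illustrative only: GetForCurrentProcess now hands back the generator
// wrapped in Expected<std::unique_ptr<...>>.
auto generatorOrErr =
    llvm::orc::DynamicLibrarySearchGenerator::GetForCurrentProcess(
        dataLayout.getGlobalPrefix());
if (!generatorOrErr)
  return generatorOrErr.takeError();
jitDylib.addGenerator(std::move(*generatorOrErr));
```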
PiperOrigin-RevId: 263539855
LLVM r367686 changed the locking scheme to avoid potential deadlocks, along with
the related llvm::orc::ThreadSafeModule APIs that ExecutionEngine was relying
upon, breaking the MLIR build. Update our use of ThreadSafeModule to unbreak the
build.
PiperOrigin-RevId: 261566571
As with Functions, Module will soon become an operation; operations are value-typed. This eases the transition from Module to ModuleOp. A new class, OwningModuleRef, is provided to allow owning a reference to a Module, and will auto-delete the held module on destruction.
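A hedged usage sketch; the parse function and its parameters are assumed to match the APIs of the time, and the helper name is made up:
```
// Illustrative only: OwningModuleRef owns the parsed module and deletes it on
// destruction; ModuleOp itself remains a value-typed, non-owning view.
static mlir::OwningModuleRef loadModule(llvm::SourceMgr &sourceMgr,
                                        mlir::MLIRContext *context) {
  mlir::OwningModuleRef module = mlir::parseSourceFile(sourceMgr, context);
  if (!module)
    llvm::errs() << "failed to parse the input module\n";
  // Returning transfers ownership; otherwise the held module is deleted here.
  return module;
}
```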
PiperOrigin-RevId: 256196193
Originally, ExecutionEngine was created before MLIR had a proper pass
management infrastructure or an LLVM IR dialect (using the LLVM target
directly). It has been running a bunch of lowering passes to convert the input
IR from Standard+Affine dialects to LLVM IR and, later, to the LLVM IR dialect.
This is no longer necessary and is even undesirable for compilation flows that
perform their own conversion to the LLVM IR dialect. Drop this integration and
make ExecutionEngine accept only the LLVM IR dialect. Users of the
ExecutionEngine can call the relevant passes themselves.
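A hedged sketch of what a user of the ExecutionEngine now does up front; the PassManager setup and the lowering pass name are assumptions for that era, not verbatim code:
```
// Illustrative only: lower to the LLVM IR dialect explicitly, then hand the
// module to the ExecutionEngine.
mlir::PassManager pm(&context);
pm.addPass(mlir::createLowerToLLVMPass()); // lowering pass name assumed
if (mlir::failed(pm.run(module)))
  return;
auto expectedEngine = mlir::ExecutionEngine::create(module);
```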
--
PiperOrigin-RevId: 249004676
This CL performs post-commit cleanups.
It adds the ability to specify which shared libraries to load dynamically in ExecutionEngine. The linalg integration test is updated to use a shared library.
Additional minor cleanups related to LLVM lowering of Linalg are also included.
--
PiperOrigin-RevId: 248346589
This CL extends the execution engine to allow the additional resolution of symbol names
that have been registered explicitly. This allows linking static library symbols that have not been explicitly exported with the -rdynamic linking flag (which is deemed too intrusive).
--
PiperOrigin-RevId: 247969504
LLVM ORC JIT changed the API for DynamicLibrarySearchGenerator::
GetForCurrentProcess to take only the single value from the DataLayout that it
actually uses instead of the whole data layout. Update the MLIR ExecutionEngine
call to this function accordingly.
--
PiperOrigin-RevId: 244820235
This allows clients to reuse the same logic to set up a module
for the ExecutionEngine without instantiating one. One use case is running
the optimization pipeline but not JIT-ing.
--
PiperOrigin-RevId: 242614380
The existing implementation of the ExecutionEngine unconditionally runs a list
of "default" MLIR passes on the module upon creation. These passes include,
among others, dialect conversions from affine to standard and from standard to
LLVM IR dialects. In some cases, these conversions might have been performed
before ExecutionEngine is created. More advanced use cases may be performing
additional transformations that the "default" passes will conflict with.
Provide an overload for ExecutionEngine::create that takes a PassManager
configured with the passes to run on the module. If it is not provided, do not
run any passes. The engine will not be created if the input module, after the
pass manager has run, contains any dialect other than the LLVM IR dialect.
--
PiperOrigin-RevId: 242127393
There are two places containing constant folding logic right now: the ConstantFold
pass and the GreedyPatternRewriteDriver. The logic was not shared and started to
drift apart. We were testing constant folding logic using the ConstantFold pass,
but lagged behind the GreedyPatternRewriteDriver, where we really want the constant
folding to happen.
This CL pulls the logic into utility functions and classes for sharing between
these two places. A new ConstantFoldHelper class is created to help with constant
folding and de-duplication.
Also, the ConstantFold pass is renamed to TestConstantFold to make it clear that it
is intended for testing purposes.
--
PiperOrigin-RevId: 241971681